paper
stringlengths
13.4k
699k
reviews
stringlengths
0
38.9k
# An Empirical Comparison Of O**-Policy Prediction Learning** Algorithms On The Collision Task Anonymous authors Paper under double-blind review ## Abstract O-policy prediction—learning the value function for one policy from data generated while following another policy—is one of the most challenging subproblems in reinforcement learning. This paper presents empirical results with eleven prominent o-policy learning algorithms that use linear function approximation: five Gradient-TD methods, two EmphaticTD methods, O-policy TD(⁄), Vtrace, and variants of Tree Backup and ABQ that are derived in this paper such that they are applicable to the prediction setting. Our experiments used the Collision task, a small o-policy problem analogous to that of an autonomous car trying to predict whether it will collide with an obstacle. We assessed the performance of the algorithms according to their learning rate, asymptotic error level, and sensitivity to step-size and bootstrapping parameters. By these measures, the eleven algorithms can be partially ordered on the Collision task. In the top tier, the two Emphatic-TD algorithms learned the fastest, reached the lowest errors, and were robust to parameter settings. In the middle tier, the five Gradient-TD algorithms and O-policy TD(⁄) were more sensitive to the bootstrapping parameter. The bottom tier comprised Vtrace, Tree Backup, and ABQ; these algorithms were no faster and had higher asymptotic error than the others. Our results are definitive for this task, though of course experiments with more tasks are needed before an overall assessment of the algorithms' merits can be made. ## 1 Introduction In reinforcement learning, it is not uncommon to learn the value function for one policy while following another policy. For example, the Q-learning algorithm (Watkins, 1989; Watkins & Dayan, 1992) learns the value of the greedy policy while the agent may select its actions according to a dierent, more exploratory, policy. The first policy, the one whose value function is being learned, is called the *target policy* while the more exploratory policy generating the data is called the *behavior policy*. When these two policies are dierent, as they are in Q-learning, the problem is said to be one of *o-policy learning*, whereas if they are the same, the problem is said to be one of *on-policy learning*. The former is 'o' in the sense that the data is from a dierent source than the target policy, whereas the latter is from data that is 'on' the policy. O-policy learning is more dicult than on-policy learning and subsumes it as a special case. There are various reasons for interest in o-policy learning. One reasons is that it has been the core of many of the great successes that have come out of the Deep Reinforcement Learning field in the past few years. Probably one of the most notable examples is the DQN architecture, in which the Q-learning algorithm was used to learn how to play Atari games (Mnih et al., 2015). Another reason for interest in o-policy learning is that it provides a clear way of intermixing exploration and exploitation. The dilemma is that an agent should always *exploit* what it has learned so far—it should take the best actions according to what it has learned—but it should also always *explore* to find actions that might be superior. No agent can simultaneously behave in both ways. However, an o-policy algorithm can, in a sense, pursue both goals at the same time. The behavior policy can explore freely while the target policy can converge to the fully exploitative, optimal policy independent of the behavior policy's explorations. Another appealing aspect of o-policy learning is that it enables learning about many policies in parallel. Once the target policy is freed from behavior, there is no reason to have a single target policy. With o-policy learning, an agent could simultaneously learn how to optimally perform many dierent tasks (as suggested by Jaderberg et al. (2016) and Rafiee et al. 2019). Parallel o-policy learning of value functions has even been proposed as a way of learning general, policy-dependent, world knowledge (e.g., Sutton et al., 2011; White, 2015; Ring, in prep). Finally, note that numerous ideas in the machine learning rely on o-policy learning, including the learning of temporally-abstract world models (Sutton, Precup, & Singh, 1999), predictive representations of state (Littman, Sutton, & Singh, 2002; Tanner & Sutton, 2005), auxiliary tasks (Jaderberg et al., 2016), life-long learning (White, 2015), and learning from historical data (Thomas, 2015). Many o-policy learning algorithms have been explored in the history of reinforcement learning. Q-learning (Watkins, 1989; Watkins & Dayan, 1992) is perhaps the oldest. In the 1990s it was realized that combining o-policy learning, function approximation, and temporal-dierence (TD) learning risked instability (Baird, 1995). Precup, Sutton, and Singh (2000) introduced o-policy algorithms with importance sampling and eligibility traces, as well as tree backup algorithms, but did not provide a practical solution to the risk of instability. Gradient-TD methods (see Maei, 2011; Sutton et al., 2009) assured stability by following the gradient of an objective function, as suggested by Baird (1999). Emphatic-TD methods (Sutton, Mahmood, & White, 2016) reweighted updates in such a way as to regain the convergence assurances of the original on-policy TD algorithms. These methods had convergence guarantees, but provide no assurances for eciency in practice. Other algorithms, including Retrace (Munos et al., 2016), Vtrace (Espeholt et al., 2018) and ABQ (Mahmood, Yu, & Sutton, 2017) were developed recently to overcome diculties encountered in practice. As more o-policy algorithms were developed, there was a need to compare them systematically. However, comparing algorithms fairly within a DQN-like architecture was not possible. In a DQN-like architecture, many elements work in concert to solve a task. Each element has one or more parameters that need tuning. On one hand, not all these parameters can be tuned systematically due to the computational cost, and on the other hand, tuning parameters carefully and studying performance over many parameters is necessary for a fair comparative study. In the original DQN work, for example, the parameters were not systematically tuned due to the computational burden; the DQN paper reads: "The values of all the hyperparameters and optimizer parameters were selected by performing an informal search on the games Pong, Breakout, Seaquest, Space Invaders and Beam Rider." (Mnih et al., 2015). Due to the computational cost, to be able to conduct a fair and detailed comparative study, separate parts of a DQN-like architecture need to be studied alone. We reduce the amount of required computation in this study in three ways. First, we focus on comparing o-policy algorithms and remove other confounding factors from the comparison. This means that the comparison will not include elements such as complex optimizers, target networks, or experience replay buers. Second, we focus on linearly learning the value function from given and fixed features. These learned value functions can later be used for control. Focusing on linearly learning the value function through fixed features is justified through the two time scale view of Neural Networks (NNs) as described by Chung et al. (2018). In this view, it is assumed that the features are learned using the first n ≠ 1 layers of the neural network at their own time scale, and then the features are used by the last layer to linearly learn the value function. Third, we focus on fully incremental online algorithms. Many algorithms referred to as the OPE family of algorithms assume access to data beyond what the agent experiences at each time step. Our paper, focuses on the fully incremental setting, in which the agent makes one interaction with the environment, receives a reward, learns from it, and then discards the sample and moves to the next step. This is in contrast to the setting in which the agent has access to historical data. Not having access to historical data, the agent is more limited in what it can learn. In fact, there have been a few empirical studies that compare o-policy prediction learning algorithms in small environments. The earliest systematic study was that by Geist and Scherrer (2014). Their experiments were on random MDPs and compared eight o-policy algorithms. A few months later, Dann, Neumann, and Peters (2014) published a more in-depth study with one additional algorithm and six test problems including random MDPs. Both studies considered o-policy problems in which the target and behavior policies were given and stationary. Such *prediction* problems allow for relatively simple experiments and are still challenging (e.g., they involve the same risk of instability). Both studies used linear function approximation with a given feature representation. The algorithms studied by Geist and Scherrer (2014), and by Dann, Neumann, and Peters (2014) can be divided into those whose per-step complexity is linear in the number of parameters, like TD(⁄), and methods whose complexity is quadratic in the number of parameters (proportional to the square of the number of parameters), like Least Squares TD(⁄) (Bradtke & Barto, 1996; Boyan, 1999). Quadratic-complexity methods avoid the risk of instability, but cannot be used in learning systems with large numbers (e.g., millions) of weights. A third systematic study, by White and White (2016), excluded quadratic-complexity algorithms, but added four additional linear-complexity algorithms. The current paper is similar to previous studies in that it treats prediction with linear function approximation, and similar to the study by White and White (2016) in restricting attention to linear complexity algorithms. Our study diers from earlier studies in that it treats more algorithms and does a deeper empirical analysis on a single problem, the Collision task. The additional algorithms are the prediction variants of Tree Backup(⁄) (Precup, Sutton, & Singh, 2000), Retrace(⁄) (Munos et al., 2016), ABQ(') (Mahmood, Yu, & Sutton, 2017), and TDRC(⁄) (Ghiassian et al., 2020). Our empirical analysis is deeper primarily in that we examine and report the dependency of all eleven algorithms' performance on all of their parameters individually. This level of detail is needed to expose our main result, an overall ordering of the performance of o-policy algorithms on the Collision task. Our results, though limited to this task, are a significant addition to what is known about the comparative performance of o-policy learning algorithms. ## 2 Formal Framework In this section, we formally explain the framework of o-policy prediction learning with linear function approximation. An agent and environment interact at discrete time steps, t = 0, 1, 2*,...*. The environment is a Markov Decision Process (MDP) with state St œ S at time step t. At each time step, the agent chooses an action At œ A with probability b(a|s), where the function b : A ◊ S æ [0, 1] with qaœA b(a|s)=1, 's œ S, is called the *behavior* policy because it determines the agent's behavior. After taking action At in state St, the agent receives from the environment a numerical reward Rt+1 œ R µ R and the next state St+1. In general the reward and next state are stochastically jointly determined by the current state and action. In prediction learning, we estimate for each state the expected discounted sum of future rewards, given that actions are taken according to a dierent policy fi, called the *target* policy (because learning its values is the target of our learning). For simplicity, both target and behavior policies are assumed here to be known and static, although of course in many applications of interest one or the other may be changing. The discounted sum of future rewards at time t is called the *return* and denoted Gt: $$G_{t}\ {\stackrel{\mathrm{def}}{=}}\ R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+\cdots$$ The expected return when starting from a state and following a specific policy thereafter is called the *value* of the state under the policy. The *value function* vfi : S æ R for a policy fi takes a state as input and returns the value of that state: $$v_{\pi}(s)\ {\stackrel{\mathrm{def}}{=}}\ \mathbb{E}\big[G_{t}\ |\ S_{t}{=}s,A_{t:\infty}\sim\pi\big]\,.$$ vfi(s) def= E[Gt | St =*s, A*t:Œ ≥ fi] . (1) Prediction learning algorithms seek to learn an estimate vˆ : S æ R that approximates the true value function vfi. In many problems S is large and an exact approximation is not possible even in the limit of infinite time and data. Many parametric forms are possible, including deep artificial neural networks, but of particular interest, and our exclusive focus here, is the linear form: $$(1)$$ $${\hat{v}}(s,\mathbf{w})\ {\stackrel{\mathrm{def}}{=}}\ \mathbf{w}^{\top}\mathbf{x}(s),$$ vˆ(s, w) def= w€x(s), (2) where w œ Rd is a learned weight vector and x(s) œ Rd, 's œ S is a set of given feature vectors, one per state, where d π |S|. ## 3 Algorithms In this section, we briefly introduce the eleven algorithms used in our empirical study. These eleven are intended to include all the best candidate algorithms for o-policy prediction learning with linear function $$\left(2\right)$$ approximation. The complete update rules of all algorithms and additional technical discussion can be found in Appendix A and Appendix E respectively. Many algorithms studied in our paper can be combined with each other; for example, the combination of Emphatic TD and Gradient TD results in Emphatic Gradient TD. Similarly the combination of Gradient TD and Tree Backup results in GTB (Touati, 2018). We did not include algorithm combinations to keep the scope focused. O-policy TD(⁄) (Precup, Sutton, & Dasgupta, 2001) is the o-policy variant of the original TD(⁄) algorithm (Sutton, 1988) that uses importance sampling to reweight the returns and account for the dierences between the behavior and target policies. This algorithm has just one set of weights and one step-size parameter. Our study includes five algorithms from the Gradient-TD family. *GTD(*⁄) and *GTD2(*⁄) are based on algorithmic ideas introduced by Sutton et al., (2009), then extended to eligibility traces by Maei (2011). Proximal GTD2(⁄) (Mahadevan et al., 2014; Liu et al., 2015; Liu et al., 2016) is a "mirror descent" version of GTD2 using a saddle-point objective function. These algorithms approximate stochastic gradient descent (SGD) on an alternative objective function, the mean squared projected Bellman error. *HTD(*⁄) (Hackman, 2012; White & White, 2016) is a "hybrid" of GTD(⁄) and TD(⁄) which becomes equivalent to classic TD(⁄) where the behavior policy coincides with the target policy. *TDRC(*⁄) is a recent variant of GTD(⁄) that adds regularization. All these methods involve an additional set of learned weights (beyond that used in vˆ) and a second step-size parameter, which can complicate their use in practice. TDRC(⁄) oers a standard way of setting the second step-size parameter, which makes this less of an issue. All of these methods are guaranteed to converge with an appropriate setting of their two step-size parameters. Our study includes two algorithms from the Emphatic-TD family. Emphatic-TD algorithms attain stability by up- or down-weighting the updates made on each time step by O-policy TD(⁄). If this variation in the emphasis of updates is done in just the right way, stability can be guaranteed with a single set of weights and a single step-size parameter. The original emphatic algorithm, *Emphatic TD(*⁄), was introduced by Sutton, Mahmood, and White (2016). The variant *Emphatic TD(*⁄, —), introduced by Hallak et al., (2016), has an additional parameter, - œ [0, 1], intended to reduce variance. The final three algorithms in our study—ABTD('), Vtrace(⁄), and the prediction variant of Tree Backup(⁄)— can be viewed as attempts to address the problem of large variations in the product of importance sampling ratios. If this product might become large, then the step-size parameter must be set small to ensure there is no overshoot—and then learning may be slow. All these methods attempt to control the importance sampling product by changing the bootstrapping parameter from step to step (Yu, Mahmood, & Sutton, 2018). Munos et al., (2016) proposed simply putting a cap on the importance sampling ratio at each time step; they explored the theory and practical consequences of this modification in a control context with their Retrace algorithm. Vtrace(⁄) (Espeholt et al., 2018) is a modification of Retrace to make it suitable for prediction rather than control. Mahmood, Yu, and Sutton (2017) developed a more flexible algorithm that achieves a similar eect. Their algorithm was also developed for control; to apply the idea to prediction learning we had to develop a nominally new algorithm, *ABTD(*'), that naturally extends ABQ(') from control to prediction. ABTD(') will be developed in the next section. Finally, *Tree Backup(*⁄) (Precup, Sutton, & Singh, 2000) reduces the eective ⁄ by the probability of the action taken at each time step. Each of these algorithms (or their control predecessors) have been shown to be very eective on specific problems. ## 4 Derivations Of Tree Backup, Vtrace, And Abtd In this section, we derive the prediction variants of Tree Backup(⁄), Retrace(⁄), and ABQ('). The prediction variant of Tree Backup(⁄) and the prediction variant of ABQ('), which we call ABTD('), are new to this paper. Vtrace(⁄) is not a new algorithm, and was previously discussed by Espeholt et al., (2018). In this paper, we use the procedure suggested by Mahmood, Yu, and Sutton (2017) to arrive, in a new way, at the same Vtrace algorithm derived by Espeholt et al., (2018). We will additionally show that all three algorithms can be seen as O-policy TD(⁄) with ⁄t generalized from a constant to a function of (St, At). Readers who are only interested in the relative performance of algorithms and practical issues, can skip this section. Deriving the prediction variant of control algorithms is typically straightforward. However, deriving the prediction variant of the three mentioned algorithms is a little more involved. The three control algorithms— ABQ('), Retrace(⁄), and the control variant of Tree Backup(⁄)—avoid all importance sampling ratios in their update rules to stabilize learning. As we will shortly see, importance sampling ratios cannot be completely avoided in the prediction setting as was done in the control setting. Trying to avoid all importance sampling ratios in the prediction learning case might result in an incorrect version of these algorithms that we will discuss in Section 4.1. The prediction variant of all three algorithms can be derived in a similar way. To understand the prediction variant of these algorithms, we derive ABTD('). We then use ABTD(') to derive extensions to Vtrace(⁄) and Tree Backup(⁄) for prediction. The key idea is to set ⁄t = ⁄(St≠1, At≠1) adaptively in generic O-policy TD(⁄): $${\bf z}_{t}\leftarrow\rho_{t-1}\gamma_{t}\lambda_{t}{\bf z}_{t-1}+{\bf x}_{t}\quad\mbox{with}{\bf z}_{-1}={\bf0}$$ $${\bf w}_{t+1}\leftarrow{\bf w}_{t}+\alpha\rho_{t}\delta_{t}{\bf z}_{t},\tag{1}$$ (3) $\binom{4}{}$ . $$\begin{array}{l}{(5)}\\ {(6)}\end{array}$$ where "t is the TD-error, zt is eligibility trace, flt def= fi(At|St)/b(At|St) is the importance sampling ratio, and - is the step-size parameter. The update rules provided here for O-policy TD(⁄), are dierent from the following update rules often provided in the literature for O-policy TD(⁄): $\mathbf{z}_{t}\leftarrow\rho_{t}(\gamma_{t}\lambda_{t}\mathbf{z}_{t-1}+\mathbf{x}_{t})\quad\text{with}\mathbf{z}_{-1}=\mathbf{0}$ $\mathbf{w}_{t+1}\leftarrow\mathbf{w}_{t}+\alpha\delta_{t}\mathbf{z}_{t}$. In Appendix B we show that these two sets of update rules are the same numerically step by step. We use the first set of update rules here as they are more appropriate for our purposes in this paper. Consider the generalized ⁄-return, for a ⁄ based on the state and action—as in ABQ(')—or the entire transition (White, 2017). Let ⁄t+1 = ⁄(St, At, St+1) be defined based on the transition (St, At, St+1), corresponding to how rewards and discounts are defined based on the transition, Rt+1 = r(St, At, St+1) and "t+1 = "(St, At, St+1). Then, given a value function vˆ, the ⁄-return G⁄t for generalized " and ⁄ is defined recursively as $G_{t}^{\lambda}\stackrel{{\rm def}}{{=}}\rho_{t}\left(R_{t+1}+\gamma_{t+1}\left[(1-\lambda_{t+1})\hat{v}(S_{t+1})+\lambda_{t+1}G_{t+1}^{\lambda}\right]\right).$ $\bullet$\(\bullet Similar to ABQ(') (Mahmood et al., 2017, Equation 7), this ⁄-return can be written using TD-errors $$\delta_{t}\ {\stackrel{\mathrm{def}}{=}}\ R_{t+1}+\gamma_{t+1}{\hat{v}}(S_{t+1})-{\hat{v}}(S_{t}),$$ as G⁄t = flt !Rt+1 + "t+1vˆ(St+1) ≠ "t+1⁄t+1vˆ(St+1) + "t+1⁄t+1G⁄t+1" = flt !"t + ˆv(St) + "t+1⁄t+1 #G⁄t+1 ≠ vˆ(St+1) $" = flt"t + fltvˆ(St) + flt"t+1⁄t+1 !flt+1"t+1 + flt+1"t+2⁄t+2 #G⁄t+2 ≠ vˆ(St+2) $" = flt ÿŒ n=t (flt+1⁄t+1"t+1) n"t + fltvˆ(St), $${\mathit{i}}=t+1\ {\mathit{P}}$$ where we define (flt+1⁄t+1"t+1) n def= rni=t+1 fli⁄i"i. This return diers from the return used by ABQ('), because it corresponds to the return from a state, rather than the return from a state and action. In ABQ('), the goal is to estimate the action-value for a given state and action. For ABTD('), the goal is to estimate the value for a given state. For the return from a state St, we need to correct the distribution over actions At with importance sampling ratio flt. For ABQ('), the correction with flt is not necessary because St and At are both given, and importance sampling corrections only need to be computed for future states and actions, with flt+1 onward. For ABTD('), therefore, unlike ABQ('), not all importance sampling ratios can be avoided. We can, however, still set ⁄ in a similar way to ABQ(') to mitigate the variance eects of importance sampling. To ensure flt⁄t+1 is well-behaved, ABTD(') sets ⁄ as follows: $$\lambda(S_{t},A_{t},S_{t+1})=\nu(\psi,S_{t},A_{t})b(S_{t},A_{t}),$$ with the following scalar parameters to define ‹t (Mahmood, Yu, & Sutton, 2017): $$\begin{array}{l}{{\nu_{t}\stackrel{\mathrm{def}}{=}\nu\big(\psi(\zeta),S_{t},A_{t}\big)\stackrel{\mathrm{def}}{=}\min\left(\psi(\zeta),\frac{1}{\max(b(A_{t}|S_{t}),\pi(A_{t}|S_{t}))}\right),}}\\ {{\psi(\zeta)\stackrel{\mathrm{def}}{=}2\zeta\psi_{0}+\max(0,2\zeta-1)(\psi_{\max}-2\psi_{0}),}}\\ {{\psi_{0}\stackrel{\mathrm{def}}{=}\frac{1}{\max_{s,a}\max(b(a|s),\pi(a|s))},}}\\ {{\psi_{\max}\stackrel{\mathrm{def}}{=}\frac{1}{\min_{s,a}\max(b(a|s),\pi(a|s))}.}}\end{array}$$ In the ⁄-return, then $$\rho_{t}\lambda_{t+1}=\frac{\pi(S_{t},A_{t})}{b(S_{t},A_{t})}\nu(\psi,S_{t},A_{t})b(S_{t},A_{t})=\nu(\psi,S_{t},A_{t})\pi(S_{t},A_{t}).$$ This removes the importance sampling ratios from the eligibility trace. The resulting ABTD(') algorithm can be written as the standard O-policy TD(⁄) algorithm, for a particular setting of ⁄. The O-policy TD(⁄) algorithm, with this ⁄, is called ABTD('), with updates $$\begin{array}{r l}{{\delta_{t}\ \stackrel{\mathrm{def}}{=}\ R_{t+1}+\gamma_{t+1}\mathbf{w}_{t}^{\top}\mathbf{x}_{t+1}-\mathbf{w}_{t}^{\top}\mathbf{x}_{t}}}\\ {{}}&{{\mathbf{z}_{t}\gets\gamma_{t}\nu_{t-1}\pi_{t-1}\mathbf{z}_{t-1}+\mathbf{x}_{t}\quad{\mathrm{with~}}\mathbf{z}_{-1}=\mathbf{0}}}\\ {{\mathbf{w}_{t+1}\gets\mathbf{w}_{t}+\alpha\rho_{t}\delta_{t}\mathbf{z}_{t}.}}\end{array}$$ Finally, we can adapt Retrace(⁄) and Tree Backup(⁄) for policy evaluation. Mahmood, Yu, and Sutton (2017) showed that Retrace(⁄) can be specified with a particular setting of ‹t (in their Equation 36). We can similarly obtain Retrace(⁄) for prediction by setting $$\nu_{t-1}=\zeta\operatorname*{min}\left({\frac{1}{\pi_{t-1}}},{\frac{1}{b_{t-1}}}\right),$$ or more generally: $$\nu_{t-1}=\zeta\operatorname*{min}\left({\frac{\bar{c}}{\pi_{t-1}}},{\frac{1}{b_{t-1}}}\right),$$ where c¯ is a constant, which we will discuss in more detail shortly. For Tree Backup(⁄), the setting for ‹t is any constant value in [0, 1] (see Algorithm 2 of Precup, Sutton & Singh, 2000). So far, we derived ABTD(') for prediction by defining ⁄t in the eligibility trace update of O-policy TD(⁄). We then used two special settings of ‹ to recover Vtrace(⁄) and Tree Backup(⁄) algorithms. Now, we specify Tree Backup(⁄), and Vtrace(⁄) updates again, but this time in terms of a special setting of ⁄t in the O-policy TD(⁄) update. Prediction variant of Tree Backup(⁄) is O-policy TD(⁄) with ⁄t = bt≠1⁄, for some tuneable constant ⁄ œ [0, 1]. Replacing ⁄t with bt≠1⁄ in the eligibility trace update in (3) simplifies as follows: $$\begin{array}{r l}{\mathbf{z}_{t}\leftarrow\gamma_{t}{\frac{\pi_{t-1}}{b_{t-1}}}b_{t-1}\lambda\mathbf{z}_{t-1}+\mathbf{x}_{t},}\\ {\quad}&{{}}\\ {\quad}&{{}=\gamma_{t}\pi_{t-1}\lambda\mathbf{z}_{t-1}+\mathbf{x}_{t}.}\end{array}$$ = "tfit≠1⁄zt≠1 + xt. (7) A simplified variant of the Vtrace(⁄) algorithm (Espeholt et al., 2018) can be derived with a similar substitution: $$\lambda_{t}=\operatorname*{min}\left({\frac{\bar{c}}{\pi_{t-1}}},{\frac{1}{b_{t-1}}}\right)\lambda b_{t-1},$$ $$\left(7\right)$$ where c¯ œ R+ and ⁄ œ [0, 1] are both tuneable constants. The update rule for the eligibility trace of Vtrace(⁄) with this special setting of ⁄t at each time step becomes: $\mathbf{z}_{t}$ is a non-time step function: $$\mathbf{z}_{t}\leftarrow\gamma_{t}\min\left(\frac{\bar{c}}{\pi_{t-1}},\frac{1}{b_{t-1}}\right)\lambda b_{t-1}\frac{\pi_{t-1}}{b_{t-1}}\mathbf{z}_{t-1}+\mathbf{x}_{t}$$ $$=\gamma_{t}\min\left(\frac{\bar{c}}{\pi_{t-1}},\frac{1}{b_{t-1}}\right)\lambda\pi_{t-1}\mathbf{z}_{t-1}+\mathbf{x}_{t}$$ $$=\gamma_{t}\min\left(\frac{\bar{c}\pi_{t-1}}{\pi_{t-1}},\frac{\pi_{t-1}}{b_{t-1}}\right)\lambda\mathbf{z}_{t-1}+\mathbf{x}_{t}$$ $$=\gamma_{t}\min\left(\bar{c},\rho_{t-1}\right)\lambda\mathbf{z}_{t-1}+\mathbf{x}_{t}.$$ $$({\boldsymbol{\delta}})$$ = "t min (¯c, flt≠1) ⁄zt≠1 + xt. (8) The parameter c¯ is used to clip importance sampling ratios in the trace. Note that it is not possible to recover the full Vtrace(⁄) algorithm in this way. The more general Vtrace(⁄) algorithm uses an additional parameter, fl¯ œ R+ that clips the flt in the update to wt+1: min(fl¯, flt)"tzt. When fl¯ is set to the largest possible importance sampling ratio, it does not aect flt in the update to wt and so we obtain the equivalence above. For smaller fl¯, however, Vtrace(⁄) is no longer simply an instance of O-policy TD(⁄). In our experiments, we investigate this simplified variant of Vtrace(⁄) that does not clip flt and set c¯ = 1 as done in the original Retrace algorithm. Finally, as mentioned before, ABTD(') for ' œ [0, 1] uses ⁄t = ‹t≠1bt≠1 in the O-policy TD(⁄) update which results in the following eligibility trace update: $$\mathbf{z}_{t}\leftarrow\gamma_{t}\frac{\pi_{t-1}}{b_{t-1}}\nu_{t-1}b_{t-1}\mathbf{z}_{t-1}+\mathbf{x}_{t}$$ $$=\gamma_{t}\nu_{t-1}\pi_{t-1}\mathbf{z}_{t-1}+\mathbf{x}_{t},\tag{9}$$ The convergence properties of all three methods are similar to O-policy TD(⁄). They are not guaranteed to converge under o-policy sampling with weighting µb and function approximation. With the addition of gradient corrections similar to GTD(⁄), all three algorithms are convergent. For explicit theoretical results, see Mahmood, Yu, and Sutton (2017) for ABQ(') with gradient correction and Touati et al. (2018) for convergent versions of Retrace(⁄) and Tree Backup(⁄). ## 4.1 An Alternative But Incorrect Extension Of Abq('**) To Abtd(**') The ABQ(') algorithm specifies ⁄ to ensure that flt⁄t is well-behaved, whereas we specified ⁄ so that flt⁄t+1 is well-behaved. This dierence arises from the fact that for action-values, the immediate reward and next state are not re-weighted with flt. Consequently, the ⁄-return of a policy from a given state and action is: $$R_{t+1}+\gamma_{t+1}\left[(1-\lambda_{t+1})\hat{v}(S_{t+1})+\rho_{t+1}\lambda_{t+1}G_{t+1}^{\lambda}\right].$$ To mitigate variance in ABQ(') when learning action-values, therefore, ⁄t+1 should be set to ensure that flt+1⁄t+1 is well-behaved. For ABTD('), however, ⁄t+1 should be set to mitigate variance from flt rather than from flt+1. To see why more explicitly, the central idea of these algorithms is to avoid importance sampling altogether: this choice ensures that the eligibility trace does not include importance sampling ratios. The eligibility trace zat in TD when learning action-values is: for state-action features xat . For flt⁄t = ‹tfit, this trace reduces to zat = ‹tfit"tzat≠1 + xat (Equation 18, $$\mathbf{z}_{t}^{a}=\rho_{t}\lambda_{t}\gamma_{t}\mathbf{z}_{t-1}^{a}+\mathbf{x}_{t}^{a},$$ Mahmood et al., 2017). For ABTD('), one could in fact also choose to set ⁄t so that flt⁄t = ‹tfit instead of flt⁄t+1 = ‹tfit. However, this would result in eligibility traces that still contain importance sampling ratios. The eligibility trace in TD when learning state-values is: zt = flt≠1⁄t"tzt≠1 + xt. Setting flt⁄t = ‹tfit would result in the update zt = flt≠1‹t fit flt "tzt≠1 + xt, which does not remove important sampling ratios from the eligibility trace. $$\mathbf{z}_{t}=\rho_{t-1}{\boldsymbol{\lambda}}$$ $${\mathbf{t}}-{\mathbf{1}}+{\mathbf{x}}_{t}.$$ ## 5 Collision Task The Collision task is an idealized o-policy prediction-learning task. A vehicle moves along an eight-state track towards an obstacle with which it will collide if it keeps moving forward. In this episodic task, each episode begins with the vehicle in one of the first four states (selected at random with equal probability). In these four states, forward is the only possible action whereas, in the last four states, two actions are possible: forward and turnaway (see Figure 1). The forward action always moves the vehicle one state further along the track; if it is taken in the last state, then a collision is said to occur, the reward is 1, and the episode ends. The turnaway action causes the vehicle to "turn away" from the wall, which also ends the episode, except with a reward of zero. The reward is also zero on all earlier, non-terminating transitions. In an episodic task like this the return is accumulated only up to the end of the episode. After termination, the next state is the first state of the next episode, selected randomly from the first four as specified above. The target policy on this task is to always take the forward action, fi(forward|s)=1, 's œ S, whereas the behavior policy is to take the two actions (where available) with equal probability, b(forward|s) = b(turnaway|s) = 0.5, 's œ {5, 6, 7, 8}. The problem is discounted with a discount rate of " = 0.9. As always, we are seeking to learn the value function for the target policy, which in this case is vfi(s) = "8≠s. This function is shown as a dashed black line in Figure 2. The thin red lines show approximate value functions vˆ ¥ vfi, using various feature representations, as we discuss shortly below. ![7_image_0.png](7_image_0.png) Figure 1: The Collision task. Episodes start in one of the first four states and end when the forward action is taken from the eighth state, causing a crash and a reward of 1, or when the turnaway action is taken in one of the last four states. This idealized task is roughly analogous to and involves some similar issues as real-world autonomous driving problems, such as exiting a parallel parking spot without hitting the car in front of you, or learning how close you can get to other cars without risking collisions. In particular, if these problems can be treated as o-policy learning problems, then solutions can potentially be learned with fewer collisions. In this paper, we are testing the eciency of various o-policy prediction-learning algorithms at maximizing how much they learn from the same number of collisions. Similar problems have been studied using mobile robots. For example, White (2015) used o-policy learning algorithms running on an iRobot Create to predict collisions as signaled by activation of the robot's front bumper sensor. Rafiee et al. (2019) used a Kobuki robot to not only anticipate collisions, but to turn away from anticipated collisions before they occurred. Modayil and Sutton (2014) trained a custom robot to predict motor stalls and turn o the motor when a stall was predicted. Our task is a prediction, and not a control task. If the task was a control task, the car would learn to hit the obstacle more often given our reward function. However, in our setting, the behavior and target policies are fixed and given, and the goal is to only learn about collisions. These predictions about collisions can later be used for dierent purposes such as state construction, and control. Through o-policy learning, the agent will be able to experience collisions, and predict with great detail, how close the agent is to the end of the corridor, without having to experience collisions many times. Without o-policy learning, the agent would have to experience collisions many more times in order to have accurate predictions about them. We artificially introduce function approximation into the Collision task. Although a tabular approach is entirely feasible on this small problem, it would not be on the large problems of interest. In real applications, the agent would have sensor readings, which will go through an artificial neural network to create feature representations. We simulate such representations in the Collision task by randomly assigning to each of the eight states a binary feature vector x(s) œ {0, 1}d, 's œ {1..8}. We chose d = 6, so that was not possible for all eight of the feature vectors (one per state) to be linearly independent. In particular, we chose all eight feature vectors to have exactly three 1s and three 0s, with the location of the 1s for each state being chosen randomly. Because the feature vectors are linearly dependent, it is not possible in general for a linear approximation, vˆ(s, w) = w€x, to equal to vfi(s) at all eight states of the Collision task. This, in fact, is the sole reason the red approximate value functions in Figure 2 do not exactly match vfi. Given a feature representation x : S æ Rd, a linear approximate value function is completely determined by its weight vector w œ Rd. The quality of that approximation is assessed by its squared error at each state, weighted by how often each state occurs: $${\overline{{\operatorname{VE}}}}(\mathbf{w})=\sum_{s\in{\mathcal{S}}}\mu_{b}(s){\big[}{\hat{v}}(s,\mathbf{w})-v_{\pi}(s){\big]}^{2},$$ $2, (10) ![8_image_0.png](8_image_0.png) $$(10)$$ Figure 2: The ideal value function, vfi, and the best approximate value functions, vˆ, for 50 dierent feature representations. where µb(s) is the state distribution, the fraction of time steps in which St = s, under the behavior policy (here µb was approximated from visitation counts from one million sample time steps). The value functions shown by red lines in Figure 2 are for wú, the weight vector that minimizes VE(w), with each line corresponding to a dierent randomly selected feature representation as described earlier. For these value functions, VE(wú) ¥ 0.05. All the code for the Collision task and the experiments are provided. See Appendix D.3. ## 6 Experiment The Collision task, in conjunction with its behavior policy, was used to generate 20,000 time steps, comprising one run, and then this was repeated for a total of 50 independent runs. ![8_image_1.png](8_image_1.png) Figure 3: **Left:** An example of the approximate value function, vˆ, being learned over time. **Right:** Learning curves illustrating the range of things that can happen during a run. The average error over the 20,000 steps is a good combined measure of learning rate and asymptotic error. Each run also used a different feature representation randomly generated as described in the previous section. Focusing on one-hot representations, we decided to choose a dierent random representation for each of the 50 runs, to study the performance of algorithms across various one-hot representations. The eleven learning algorithms were then applied to the 50 runs, each with a range of parameter values; each combination of algorithm and parameter settings is termed an *algorithm instance*. A list of all parameter settings used can be found in Appendix C. They included 12 values of ⁄, 19 values of –, 15 values of ÷ (for the Gradient-TD family), six values of - (for ETD(⁄, —)), and 19 values of ' (for ABTD(')), for approximately 20,000 algorithm instances in total. In each run, the weight vector was initialized to w0 = 0 and then updated at each step by the algorithm instance to produce a sequence of wt. At each step we also computed and recorded VE(wt). In Deep Learning, it is important that the neural network is initialized using random weights because if not, the derivatives in backpropagation will be the same for all weights, and all learned features will be the same. In linear function approximation with given features this is not an issue, so we decided to initialize all weights to zero. With a successful learning procedure, we expect the value function to evolve over time as shown in the left panel of Figure 3. The approximate value function starts at vˆ(s, 0)=0, as shown by the pink line, then moves toward positive values, as shown by the blue and orange lines. Finally, the learned value function slants and comes to closely approximate the true value function, though always with some residual error due to the limited feature representation, as shown by the green line (and also by all the red lines in Figure 2). ![9_image_0.png](9_image_0.png) Figure 4: Performance of all algorithms on the Collision task as a function of their parameters - and ⁄. Each point is the average error over 50 runs. The bars over each point show the standard error. The red curves show the performance with ⁄ =0; the blue curves show the performance with ⁄ =1; and the gray curves show the performance with intermediate values of ⁄. The top tier algorithms (top row) attained a low error (¥0.1) at all ⁄ values. The middle tier of six algorithms attained a low error for ⁄ = 1, but not for ⁄ = 0. And the bottom-tier of three algorithms were unable to reach an error of ¥0.1 at any ⁄ value. The right panel of Figure 3 shows learning curves illustrating the range of things that happened in the experiment. Normally, we expect VE to decrease over the course of the experiment, starting at VE(0) ¥ 0.7 and falling to some minimum value, as in the red and black lines in Figure 3 (these and all other data are averaged over the 50 runs). If the primary step-size parameter, –, is small, then learning may be slow and incomplete by the end of the runs, as in the orange line. A larger step-size parameter may be faster, but, if it is too large, then divergence can occur, as in the blue line. For one algorithm, Proximal GTD2(⁄), we found that the error dipped low and then leveled o at a higher level, as in the olive line. ## 7 Main Results: A Partial Order Over Algorithms As an overall measure of the performance of an algorithm instance, we take its learning curve over 50 runs, as in Figure 3, and then average it across the 20,000 steps. In this way, we reduce all the data for an algorithm instance to a single number that summarizes performance. These numbers appear as points in our main results figure, Figure 4. Each panel of the figure is devoted to a single algorithm. For example, performance numbers for instances of O-policy TD(⁄) are shown as points in the left panel of the second row of Figure 4. This algorithm has two parameters, the step-size parameter, –, and the bootstrapping parameter, ⁄. The points are plotted as a function of –, and points with the same ⁄ value are connected by lines. The blue line shows the performances of the instances of O-policy TD(⁄) with ⁄ = 1, the red line shows the performances with ⁄ = 0, and the gray lines show the performances with intermediate ⁄s. Note that all the lines are U-shaped functions of –, as is to be expected; at small - learning is too slow to make much progress, and at large - there is overshoot and divergence, as in the blue line in Figure 3. For each point, the standard error over the 50 runs is also given as an error bar, though these are too small to be seen in all except the rightmost points of each line where the step size was highest and divergence was common. Except for these rightmost points, almost all visible dierences are statistically significant. First focus on the blue line (of the left panel on the second row of Figure 4), representing the performances of O-policy TD(⁄) with ⁄ = 1. There is a wide sweet spot, that is, there are many intermediate values of – at which good performance (low average error) is achieved. Note that the step-size parameter - is varied over a wide range, with logarithmic steps. The minimal error level of about 0.1 was achieved over four or five powers of two for –. This is the primary measure of good performance that we look for in these data: low error over a wide range of parameter values. Now contrast the blue line with the red and gray lines (for O-policy TD(⁄) in the left panel of the second row of Figure 4). Recall that the blue line is for ⁄ = 1, the red line is for ⁄ = 0, and the gray lines are for intermediate values of ⁄. First note that the red line shows generally worse performance; the error level at ⁄ = 0 was higher, and its range of good - values was slightly smaller (on a logarithmic scale). The intermediate values of ⁄ all had performances that were between the two extremes. Second, the sweet spot (the best - value) consistently shifted right, toward higher –, as ⁄ was decreased from 1 toward 0. Now, armed with a thorough understanding of the O-policy TD(⁄) panel, consider the other panels of Figure 4. Overall, there are a lot of similarities between the algorithms and how their performances varied with - and ⁄. For all algorithms, error was lower for ⁄ = 1 (the blue line) than for ⁄ = 0 (the red line). Bootstrapping apparently confers no advantage in the Collision task for any algorithm. The most obvious dierence between algorithms is that the performance of the two Emphatic-TD algorithms varied relatively little as a function of ⁄; their blue and red lines are almost on top of one another, whereas those of all the other algorithms are qualitatively dierent. The emphatic algorithms generally performed as well as or better than the other algorithms. At ⁄ = 1, the emphatic algorithms reached the minimal error level of all algorithms (¥0.1), and their ranges of good - values was as wide as that of the other algorithms. While at ⁄ = 0, the best errors of the emphatic algorithms were qualitatively better than those of the other algorithms. The minimal ⁄ = 0 error level of the emphatic algorithms was about 0.15, as compared to approximately 0.32 (shown as a second thin gray line) for all the other algorithms (except Proximal GTD2, a special case that we consider later). Moreover, for the emphatic algorithms the sweet spot for - shifted little as ⁄ varied. The shift was markedly less than for the six algorithms in the middle two rows of Figure 4. The lack of an interaction between the two parameter values is another potential advantage of the emphatic algorithms. The lowest error level for eight of the algorithms was ¥0.1 (shown as a thin gray line), and for the other three algorithms the best error was higher, ¥0.16. The dierences between the eight and the three were highly statistically significant, whereas the dierences within the two groups were negligible. The three algorithms that performed worse than the others were Tree Backup(⁄), Vtrace(⁄), and ABTD(')—shown in the bottom row of Figure 4. The dierence was only for large ⁄s; at ⁄ = 0 these three algorithms reached the same error level (¥0.32) as the other non-emphatic algorithms. The three worse algorithms' range of good - values was also slightly smaller than for the other algorithms (with the partial exception, again, of Proximal GTD2(⁄)). A mild strength of the three is that the best - value shifted less as a function of ⁄ than for the other six non-emphatic algorithms. Generally, the performances of these three algorithms in Figure 4 look very similar as a function of parameters. An interesting dierence is that for ABTD('), we only see three gray curves, whereas for the other two algorithms we see seven. For ABTD(') there is no ⁄ parameter, but the parameter ' plays the same role. In our experiment, ABTD(') performed identically for all ' values greater than 0.5; four gray lines with dierent ' values are hidden behind ABTD's blue curve. In summary, our main result is that on the Collision task the performances of the eleven algorithms fell into three groups, or tiers. In the top tier are the two Emphatic-TD algorithms, which performed well and almost identically at all values of ⁄ and significantly better than the other algorithms at low ⁄. Although this dierence did not aect best performance here (where ⁄ = 1 is best), the ability to perform well with bootstrapping is expected to be important on other tasks. In the middle tier are O-policy TD(⁄) and all the Gradient-TD algorithms including HTD(⁄), all of which performed well at ⁄ = 1 but less well at ⁄ = 0. Finally, in the bottom tier are Tree Backup(⁄), Vtrace(⁄), and ABTD(⁄), which performed very similarly and not as well as the other algorithms at their best parameter values. All of these dierences are statistically significant, albeit specific to this one task. In Figure 4 the three tiers are the top row, the two middle rows, and the bottom row. The reason why Emphatic TD algorithms reached a lower error level than some others might be the objective function they minimize. Emphatic TD algorithms minimize the Emphatic weighted Mean Squared Projected Bellman Error (MSPBE). This is in contrast to all other algorithms studied in this paper, that minimize the behavior policy weighted MSPBE. In our results, the error measure is the Mean Squared Value Error (VE): the dierence between the true value function and the value function found by an algorithm. Our results suggest that distance between the minimums of Emphatic weighted MSPBE and VE is smaller than the distance between the minimums of behavior weighted MSPBE and VE. The reason why ABTD, Tree Backup, and Vtrace did not perform as well as others is most probably that they cut o the importance sampling ratio. By cutting o the importance sampling ratio, these algorithms introduce bias into the solution that the algorithm finds, which in turn will cause the algorithm to converge to a higher error level. On the other hand, one can expect that these algorithms perform better than others on problems where importance sampling ratios are really large. In the next two sections we take a closer look at two of the tiers to find dierences within them. ## 8 Emphatic Td(⁄**) Vs. Emphatic Td(**⁄, —) In this section, the eect of the - parameter of Emphatic TD(⁄, —) on the algorithm's performance in the full bootstrapping case is analyzed. We focus on the full bootstrapping case (⁄ = 0) because the largest dierences were observed with this value of ⁄ in the previous section. The curves shown in Figure 4 in the previous section, are for the best values of —; meaning that, for each ⁄, we found the combination of - and - that resulted in the minimum average error, fixed —, and plotted the sensitivity for that fixed - over –. Here, we show how varying - aects performance. ![11_image_0.png](11_image_0.png) Figure 5: Detail on the performance of Emphatic TD(⁄, —) at ⁄ = 0. Note that Emphatic TD(⁄) is equivalent to Emphatic TD(⁄, "), and here " = 0.9. The flexibility provided by - does not help on the Collision task. The error of Emphatic TD(0), and Emphatic TD(0,—) for various values of - and - are shown in Figure 5. We see that both algorithms performed similarly well on the Collision task, meaning that they both had a wide sensitivity curve and reached the same (¥0.1) error level. Notice that, as — increased, the sensitivity curve for Emphatic TD(0,—) shifted to left and the error overall decreased. With - = 0, Emphatic TD(⁄, —), reduces to TD(⁄). With - = 0.8, and - = 1, Emphatic TD(⁄, —) reached the same error level as Emphatic TD(⁄). With - = ", Emphatic TD(⁄, —) reduces to Emphatic TD(⁄). This explains why the red curve is between the - = 0.8 and - = 1 curves. The results make it clear that the superior performance of emphatic methods are almost entirely due to the basic idea of emphasis; the additional flexibility provided by - of the Emphatic TD(⁄, —) was not important on the Collision problem. ## 9 Assessment Of Gradient-Td Algorithms We study how the ÷ parameter of Gradient-TD algorithms aects performance in the case of full bootstrapping (the second step size, –v, is equal to ÷ ◊ –). Previously, in Figure 4 we looked at the results with the best values of ÷ for each ⁄; meaning that for each ⁄, first the combination of - and ÷ that resulted in the lowest average VE was found and then sensitivity to step size was plotted for that specific value of ÷. Sensitivity to step size for various values of ÷ with ⁄ = 0 are shown in Figure 6. Each panel shows the result of two Gradient-TD algorithms for various ÷s: One main algorithm, shown with solid lines, and another additional algorithm shown with dashed lines for comparison. First focus on the upper left panel. The upper left panel shows the parameter sensitivity for GTD2(0), for four values of ÷, and additionally it shows GTD(0) results as dashed lines for comparison (for results with more values of ÷ see Appendix D). The color for each value of ÷ is consistent within and across the four panels, meaning that for example, ÷ = 256 is shown in green in all panels, either as dashed or solid lines. For all parameter combinations, GTD errors were lower than (or similar to) GTD2 errors. With two smaller values of ÷ (1 and 0.0625) GTD had a wider and lower sensitivity curve than GTD2, which means GTD was easier to use than GTD2. Let us now move on to the upper right panel of Figure 6. Proximal GTD2 had the most distinctive behavior among all Gradient-TD algorithms. As previously observed in Figure 3, it is the only algorithm that in some cases had a "bounce"; its error dipped down at first and then moved back up. With ⁄ = 0, in some cases it converged to a lower error than all other Gradient-TD algorithms. Proximal GTD2 was more sensitive to the choice of - than other Gradient-TD algorithms except GTD2. Proximal GTD2 had a lower error and a wider sensitivity curve than GTD2. To see this, compare the dotted and solid lines in the upper right panel of Figure 6. Moving on to the lower left panel, we see that GTD and HTD performed similarly. Sensitivity curves were similarly wide but HTD reached a lower error in some cases. We see this by comparing the dotted and solid pink curves in the lower left panel. ![12_image_1.png](12_image_1.png) ![12_image_0.png](12_image_0.png) The fourth panel shows sensitivity to the step-size parameter for HTD and TDRC. Notice that TDRC has one sensitivity curve, shown in dashed blue. This is because ÷ is set to one (also its regularization parameter was set to one) as proposed in the original paper. HTD's widest curve was with ÷ = 0.0625 which was as wide as TDRC's curve. For a more in-depth study of TDRC's extra parameters see Appendix D.1. Figure 6: Detail on the performance of Gradient-TD algorithms at ⁄ = 0. Each algorithm has a second step-size parameter, scaled by ÷. A second algorithm's performance is also shown in each panel, with dashed lines, for comparison. On one hand, among the Gradient-TD algorithms, TDRC was the easiest to use. On the other hand, in the case of full bootstrapping, Proximal GTD2 reached the lowest error level. The fact that proximal GTD2 converged to a lower error level might be due to a few dierent reasons. One possible reason is that it might not have converged to the minimum of the mean squared projected Bellman error like other Gradient-TD algorithms. Another reason might be that it converged to a minimum of the projected Bellman error that was dierent from the minimum the other algorithms converged to. Further analyses is required to investigate this. It remains to be seen how these algorithms compare on other problems. ## 10 Limitations And Future Work The present study is based on a single task, and this limits the conclusions that can be fairly drawn from it. For example, we have found that Emphatic-TD methods perform well over a wider range of parameters than Gradient-TD methods on the Collision task, but it is entirely possible that the reverse would be true on a dierent task. Many more tasks must be explored before it is possible for a consistent pattern to emerge that favors one class of algorithm over another. On the other hand, a pattern over empirical results must begin somewhere. We stress the need for extensive empirical results even for a single task. Ours is the first systematic study of o-policy learning to describe the eects of all algorithm parameters individually (rather than, for example, taking the best performing parameters or fixing one parameter and studying another). Such a thorough examination is necessary to obtain the understanding that is critical to using o-policy algorithms successfully and with confidence. There is a need for thorough empirical studies, but they take time, and a proper presentation of them takes space. While our study is not the last word, it does contribute to the growing database of reliable results comparing modern o-policy learning algorithms. Conducting additional experiments with other o-policy learning problems is a valuable direction for future work. In looking for the next problem, one might seek a task with greater challenges due to variance of the importance sampling ratios. In the Collision task, the product of ratios can grow nearly as large as 24 = 16. This could be made more extreme simply by increasing the number of states, or by changing the behavior policy. Also valuable would be exploring unrelated tasks with a dierent rationale for relevance to the real world. One possibility is to use a task related to parallel learning about multiple alternative ways of behaving, such as learning how to exit each room in a four-rooms gridworld (Sutton, Precup & Singh, 1999). ## References Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In *Proceedings* of the 12th International Conference on Machine Learning, pp. 30–37. Baird, L. C. (1999). *Reinforcement Learning through Gradient Descent.* PhD thesis, Carnegie Mellon University. Boyan, J. A. (1999). Least-squares temporal dierence learning. In Proceedings of the 16th International Conference on Machine Learning, pp. 49–56. Bradtke, S. J., Barto, A. G. (1996). Linear least-squares algorithms for temporal dierence learning. Machine Learning, 22 pp. 33–57. Chung, W., Nath, S., Joseph, A., White, M. (2018). Two-timescale networks for nonlinear value function approximation. In *International conference on learning representations*. Dann, C., Neumann, G., Peters, J. (2014). Policy evaluation with temporal-dierences: A survey and comparison. *Journal of Machine Learning Research, 15* pp. 809–883. Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I. and Legg, S. (2018) IMPALA: Scalable distributed Deep-RL with importance weighted actor-learner architectures. In *Proceedings of the 35th International Conference on Machine Learning.* pp. 1407–1416. Geist, M., Scherrer, B. (2014). O-policy learning with eligibility traces: A survey. *Journal of Machine* Learning Research 15 pp, 289–333. Ghiassian, S., Rafiee, B., Sutton, R. S. (2016). A first empirical study of emphatic temporal dierence learning. In *Workshop on Continual Learning and Deep Learning at the Conference on Neural Information* Processing Systems. ArXiv: 1705.04185. Ghiassian, S., Patterson, A., Garg, S., Gupta, D., White, A., White, M. (2020). Gradient temporal-dierence learning with regularized corrections. In *Proceedings of the 37th International Conference on Machine* Learning, pp. 3524–3534. Hackman, L. (2012). *Faster Gradient-TD Algorithms.* MSc thesis, University of Alberta. Hallak, A., Tamar, A., Munos, R., Mannor, S. (2016). Generalized emphatic temporal-dierence learning: Bias-variance analysis. In *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence*, pp. 1631– 1637. Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., Kavukcuoglu, K. (2016). Reinforcement learning with unsupervised auxiliary tasks. ArXiv: 1611.05397. Juditsky, A., Nemirovski, A., Tauvel, C. (2011). Solving variational inequalities with stochastic mirror-prox algorithm. *Stochastic Systems 1,* pp. 17–58. Liu B, Liu J, Ghavamzadeh M, Mahadevan S, Petrik M (2015). Finite-Sample Analysis of Proximal Gradient TD Algorithms. In *Proceedings of the 31st International Conference on Uncertainty in Artificial Intelligence*, pp. 504–513. Liu B, Liu J, Ghavamzadeh M, Mahadevan S, Petrik M (2016). Proximal Gradient Temporal-Dierence Learning Algorithms. In *Proceedings of the 25th International Conference on Artificial Intelligence (IJCAI16)*, pp. 4195–4199. Littman, M. L., Sutton, R. S., Singh, S. (2002). Predictive representations of state. In Advances in Neural Information Processing Systems 14, pp. 1555–1561. Mahadevan, S., Liu, B., Thomas, P., Dabney, W., Giguere, S., Jacek, N., Gemp, I., Liu, J. (2014). Proximal reinforcement learning: A new theory of sequential decision making in primal–dual spaces. ArXiv: 1405.6757. Mahmood, A. R., Yu, H., Sutton, R. S. (2017). Multi-step o-policy learning without importance sampling ratios. ArXiv: 1702.03006. Maei, H. R. (2011). Gradient temporal-di*erence learning algorithms.* PhD thesis, University of Alberta. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D. (2015). Human level control through deep reinforcement learning. Nature pp. 529–533. Modayil, J., Sutton, R. S. (2014). Prediction driven behavior: Learning predictions that drive fixed responses. In *AAAI-14 Workshop on Artificial Intelligence and Robotics*. Munos, R., Stepleton, T., Harutyunyan, A., Bellemare, M. (2016). Safe and ecient o-policy reinforcement learning. In *Advances in Neural Information Processing Systems 29,* pp. 1046–1054. Precup, D., Sutton, R. S., Dasgupta, S. (2001). O-policy temporal-dierence learning with function approximation. In *Proceedings of the 18th International Conference on Machine Learning*, pp. 417–424. Precup, D., Sutton, R. S., Singh, S. (2000). Eligibility traces for o-policy policy evaluation. In Proceedings of the 17th International Conference on Machine Learning, pp. 759–766. Rafiee, B., Ghiassian, S., White, A., Sutton, R. S. (2019). Prediction in Intelligence: An Empirical Comparison of O-policy Algorithms on Robots. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 332–340. Ring, M. B. (in preparation). Representing knowledge as forecasts (and state as knowledge). Sutton, R. S. (1988). Learning to predict by the algorithms of temporal-dierences. *Machine Learning, 3* pp. 9–44. Sutton, R. S., Barto, A. G. (2018). *Reinforcement Learning: An Introduction,* second edition. MIT press. Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, Cs., Wiewiora, E. (2009). Fast gradient-descent algorithms for temporal-dierence learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, pp. 993–1000. Sutton, R. S., Maei, H. R., Szepesvári, C. (2008). A convergent O(n) algorithm for o-policy temporaldierence learning with linear function approximation. In *Advances in neural information processing systems* 21, pp. 1609–1616. Sutton, R. S., Mahmood, A. R., White, M. (2016). An emphatic approach to the problem of o-policy temporal- dierence learning. *Journal of Machine Learning Research, 17* pp. 1–29. Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In *Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems*, pp. 761–768. Sutton, R. S., Precup, D., Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. *Artificial Intelligence, 112* pp. 181–211. Tanner, B., Sutton, R. S., (2005). TD(⁄) networks: temporal-dierence networks with eligibility traces. In Proceedings of the 22nd international conference on Machine learning, pp. 888–895. Thomas, P. S. (2015). *Safe reinforcement learning.* PhD thesis, University of Massachusetts Amherst. Touati, A., Bacon, P. L., Precup, D., Vincent, P. (2017). Convergent tree-backup and retrace with function approximation. ArXiv: 1705.09322. Watkins, C. J. C. H. (1989). *Learning from delayed rewards.* PhD thesis, University of Cambridge. Watkins, C. J. C. H., Dayan, P. (1992). Q-learning. *Machine Learning, 8* pp. 279–292. White, A. (2015). *Developing a predictive approach to knowledge.* PhD thesis, University of Alberta. White, M. (2017). Unifying Task Specification in Reinforcement Learning. In *Proceedings of the 34th* International Conference on Machine Learning, pp. 3742–3750. White, A., White, M. (2016). Investigating practical linear temporal dierence learning. In Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, pp. 494–502. Yu, H., Mahmood, A. R., Sutton, R. S. (2018). On generalized bellman equations and temporal-dierence learning. *The Journal of Machine Learning Research, 19(1),* pp. 1864–1912.
Review 1: Summary: The paper provides an empirical evaluation of eleven off-policy prediction algorithms on a specific problem, the collision task. Since some of the approaches to be tested are designed for control, the authors first rephrase them into their prediction version, devising suitable update rules. Then, the collision task, a tabular domain, is presented and described in its features. Finally, the experimental evaluation is performed; each combination (algorithm, hyperparameters) is tested over 50 runs and the performance is evaluated in terms of value error. Specifically, the first set of experiments aims at comparing all the algorithms by varying the step-size $\alpha$ and the $\lambda$ parameter related to the $\lambda$-return. Successive experiments aim to dive into the peculiarities of emphatic TD methods and gradient TD algorithms. A final discussion on limitations and future works is present. Strengths and Weaknesses: **Strengths** * The paper provides, to the best of my knowledge, the first empirical comparison that includes a significantly large number of baselines (11), including some that are designed for control and rephrased for prediction. * The experimental evaluation is well conducted from a methodological perspective. Specifically: - Each (algorithm, hyperparameters) is tested multiple times (50 runs) and the total number of configurations is notably large (20k). - The plots include appropriate error bars, although (from the main paper) it is not clear what they represent. Standard deviation, confidence intervals? *The supplementary material includes the code and a ReadMe that describes the implementation of the algorithms, the implementation of the environment, and the instructions to run the code (note that I did not re-run the code). Moreover, the full range of hyperparameters tested is reported. * The authors demonstrate that they are aware of some of the limitations of the present work, especially the fact the experimental evaluation is performed in one environment only. **Weaknesses** * (About the Collision Task) If I understood well, the collision task aims at modeling the scenario in which a vehicle moves towards an obstacle and has to avoid it. If this is correct, I found the selected reward function quite unexpected. Indeed, if the vehicle decides to turnaway avoiding the obstacle, the episode terminates and the reward is $0$. If instead, the vehicle decides to hit the obstacle, the episode also terminates, but the reward is $+1$. Isn't it convenient to hit the obstacle with such a reward function? Can the authors clarify? Moreover, the description of the environment in the caption of Figure 1 is not very accurate (especially regarding the reward function). * (Tabular Task and Value Function Approximation) The considered task is a tabular one, made of 8 states and 2 actions (not available in all states). Nevertheless, the authors decided to use linear value function approximation with randomly generated features (generated with a specific criterion, see comment below). This seems quite artificial, as also the authors acknowledge. The authors motivate their choice by claiming that a tabular approach is not feasible in large problems. While I agree with the claim, the Collison task is not a large problem, and I think the statement does not justify the choice. It would have been more convincing to start from a continuous-state problem directly. Is there a specific reason why to use a tabular problem for this experimental evaluation? * (Construction of the Random Features) The authors model the fact that in real-world scenarios, the agent accesses sensors whose measurements then pass through a neural network, by considering a linear value function approximator with randomly generated features. Those features are 6-dimensional, binary, and randomly generated. In such a way, since the problem has 8 states, it is not possible to get a non-ambiguous representation. I appreciate the effort in coming up with an approach to model a complex phenomenon (sensor measurements + NN), but the proposed choice seem quite arbitrary. Why 6 features? Why binary? In my view, the authors should provide a convincing motivation behind these choices or test multiple alternatives (including the tabular representation). * (Weight Initialization) The authors decided to initialize the weights to zero. Is there a specific reason behind this choice? If the overall goal is to simulate what happens when training with NNs, shouldn't the weights be initialized randomly? **Summary** Although the paper has elements of appreciable novelty (as noted above), I identified several limitations that I will re-list in the following for convenience: * Evaluation conducted on just one task (acknowledged by the authors too); * Tabular task addressed with value function approximation; * Not-convincing construction of the features for the linear value function approximation; **Minor Issues** * Section 1 presents both the related works and the contributions. I suggest either moving the related works to another section or, at least, denoting with suitable subsections/titles the part of related works and that of contributions. * When multiple citations are listed in a row, I suggest sorting them by year. * Some full-line equations miss the punctuation (e.g., the equation right before equation (1). Requested Changes: **Critical Requested Changes** * The authors should provide convincing motivations for their choices. I am referring, in particular, to my comments (Tabular Task and Value Function Approximation), (Construction of the Random Features), (Weight Initialization). As an alternative, if no motivation can be provided, the authors could empirically evaluate (if this is feasible in terms of computational time requested) multiple alternatives for these choices (e.g., testing the exact tabular representation, re-run the experiments with randomly generated weights). * The authors should clarify the concern (About the Collision Task) regarding the reward function. **Non-Critical Requested Changes** * Fix minor issues. Broader Impact Concerns: None. ================================================== Review 2: Summary: The submission performs an empirical analysis of *off-policy prediction* of 11 off-policy RL algorithms on a simple RL task called *collision*. Off-policy prediction consists is learning the value function for a *target* policy with data collected with another *behavioral* policy. To the best of my understanding, this is what is more commonly called off-policy policy evaluation. Collision is a finite MDP with 8 states and two actions. The RL algorithms use a linear state representation of dimension 6, thus preventing a complete independence between the states. The 11 algorithms are $TD(\lambda)$, $GTD(\lambda)$, $GTD2(\lambda)$, Proximal $GTD2(\lambda)$, $HTD(\lambda)$, $TDRC(\lambda)$, $Emphatic$ $TD(\lambda)$, $Emphatic$ $TD(\lambda,\beta)$, $ABTD(\zeta)$, $Vtrace(\lambda)$, and $Tree$ $Backup(\lambda)$. Strengths and Weaknesses: --- Strengths --- * The writing is methodological, mostly typo-free. * The paper is overall well presented. --- Weaknesses --- **Major importance:** * I would like to understand the difference of the task with that of off-policy policy evaluation. There has been plenty of papers and dedicated algorithms on this topic, which are mostly ignored (https://web.mit.edu/6.246/www/lectures/Off-policy.pdf). The off-policy algorithms used in the benchmark are off-policy RL algorithms, meaning that they intend to optimize the target policy at the same time they learn their value function. Why evaluate these algorithms on the off-policy value prediction task, which is merely an auxiliary task for them? Also there has been an empirical benchmark on off-policy policy evaluation: https://openreview.net/pdf?id=IsK8iKbL-I . It would be useful to detail your contribution with respect to theirs. * The empirical study is very narrow: a single environment with 8 states and 2 actions, no stochasticity. I notice that the authors try to defuse this criticism, and I would agree with them in general: it is generally better to have clear insights on a constrained setting than vague conclusions on a broad task. But here, the analysis of the why is missing. We observe clear trends which are extensively described, but the paper does not provide any clue why some algorithms fail while others work. As a consequence, the contribution is limited to the very domain of collision. * Some bits of the exposition are missing or imprecise or inaccurate: * it is unclear during the whole introduction whether the algorithms are compared on their actual purpose (off-policy RL) or the off-policy prediction task. * the importance sampling ratio is not introduced / formally defined. * Page 5, in the definition of $G^\lambda_t$, $\delta_t$ should be $\delta_n$, and the notation used is very confusing as it can be interpreted as algebra and useless as it's longer than the full writing of the product. * why $\rho_t \lambda_{t+1}$ and not $\rho_t \lambda_t$? Why is it well-behaved? I am not very knowledgeable in linear off-policy TD-learning algorithms, and the exposition did not help me to construct a better map of them, because the specificities of each algorithm is not motivated. Requested Changes: Please address my concerns enumerated in the weaknesses. Broader Impact Concerns: no ================================================== Review 3: Summary: This paper focused on the empirical study of learning predictions in reinforcement learning (RL) algorithms, particularly in the multi-step off-policy setting. To do that, eleven off-policy learning algorithms are selected, including the algorithm families of gradient-TD, emphatic-TD, off-policy TD, Vtrace, and variants of Tree Backup and ABQ, with experiments being conducted on a single prediction problem of "collision task". Finally, their performance was accessed through the mean square value error, especially from the perspectives of the learning rate, asymptotic error level, sensitivity to step size, and bootstrapping parameters. Strengths and Weaknesses: Strengths: - Learning the value function is critical to RL. This paper targets learning predictions for one policy while following another policy with eligibility traces, which will be interesting to the RL community. Weaknesses: - This work is not well-motivated. It's not clear why those existing eleven linear-complexity algorithms are selected to make a comparison without proposing any new approaches. What's more important, it's not clear what specific problems in the existing eleven algorithms or what properties those algorithms have, or even what scenarios those algorithms can fit in. - The contributions look quite limited. There is no new algorithm proposed in this paper, although it claims in this paper that it treats more algorithms and does a deeper empirical analysis, compared to the previous work of White and White (2016). Actually, the current empirical analysis is limited to comparing the performance on a simple, small domain (collision task) in terms of the learning rate, error level, sensitivity to step size, and bootstrapping parameters, which is not convincing and deep enough. It should at least evaluate those algorithms on more different tasks with different complexity and show their performance regarding convergence/ standard deviation among multiple runs, etc. Some additional comments: - Regarding the collision task, since the target policy is to keep forwarding from state 5 to state 8 while the corresponding behavior policy is uniform, how this task can help exit a parallel parking spot without collision, or learn to get close to other cars without collisions? Furthermore, how to treat the previous two examples of avoiding collisions as off-policy learning problems and learn with fewer collisions? - All algorithms of Tree Backup, Retrace, and ABQ are evaluated on the policy evaluation tasks in their original papers, why does this paper need to derive their prediction variants? Are there any reasons for doing it in a different way? - The reference to "Chung et al. (2018)" is missing. - Why not select and compare those eleven algorithms with GTB and GRetrace as shown in Touati et al. (2018)? - Is this $\overline{\rm VE}(0) \approx 0.7$ correct? According to the description, should it be the square root of $\overline{\rm VE}(0)$ to be $0.7$? - In Figure 1, what is the algorithm running for this example? Requested Changes: - I have major concerns about the motivation and contribution of this paper as mentioned in the main weaknesses. Overall, I don't think the current status of the paper is ready. I suggest the authors address them first. - I also have some additional comments and the authors could check them and update the missing reference. Broader Impact Concerns: This paper focused on the simulated experimental environment. So I don't have any concerns regarding this. ================================================== Metareview: Recommendation: Reject Comment: The paper provides an empirical analysis of several off-policy prediction algorithms, using as a benchmark a simple collision task. Although the paper contains some interesting contributions, the reviewers have expressed several concerns about the lack of motivation for several choices and using a single environment for the evaluation, which is not enough to reach convincing conclusions, thus making the findings of the paper not of much interest for the TMLR audience. The authors' rebuttals and the new version of the paper have solved some of the reviewers' concerns, but the reviewers agree that the paper is not ready for publication. I encourage the authors to generate a new, significantly improved version of their paper that addresses the main issues highlighted by the reviewers and resubmit it. ==================================================
# Hierarchical Graph-Convolutional Variational Autoencoding For Generative Modelling Of Human Motion Anonymous authors Paper under double-blind review ## Abstract Models of human motion focus either on trajectory prediction or action classification but rarely both. The marked heterogeneity and intricate compositionality of human motion render each task vulnerable to the data degradation and distributional shift common to realworld scenarios. A sufficiently expressive generative model of action could in theory enable data conditioning and distributional resilience within a unified framework applicable to both tasks as well as facilitate data synthesis. We propose a novel architecture for generating a holistic model of action based on hierarchical variational autoencoders and deep graph convolutional neural networks. We show this Hierarchical Graph-convolutional Variational AutoEncoder (HG-VAE) to be capable of detecting out-of-distribution data, and imputing missing data by gradient ascent on the model's posterior, facilitating better downstream discriminative learning. We show that scaling to greater stochastic depth generates better likelihoods independently of model capacity. We further show that the efficient hierarchical dependencies HG-VAE learns enable the generation of coherent conditioned actions and robust definition of class domains at the top level of abstraction. We trained and evaluated on H3.6M and the largest collection of open source human motion data, AMASS. ## 1 Introduction Human motion is naturally intelligible as a time-varying graph of connected joints constrained by locomotor anatomy and physiology. An understanding of human motion is necessary for tasks such as pose estimation, action recognition, motion synthesis, and motion prediction, across a wide variety of applications within healthcare Geertsema et al. (2018); Kakar et al. (2005), physical rehabilitation and training Chang et al. (2012); Webster & Celik (2014), robotics Koppula & Saxena (2013b;a); Gui et al. (2018b), navigation Paden et al. (2016); Alahi et al. (2016); Bhattacharyya et al. (2018); Wang et al. (2019), manufacture Švec et al. (2014), entertainment and culture Shirai et al. (2007); Rofougaran et al. (2018); Lau & Chan (2008); Bourached & Cann (2019); Bourached et al. (2021); Cann et al. (2021); Stork et al. (2021); Kell et al. (2022), and security Kim & Paik (2010); Ma et al. (2018); Grant et al. (2019). The complex, compositional character of human motion amplifies the kinematic differences between teleologically identical actions while attenuating those between actions differing in their goals. Moreover, few real-world tasks restrict the plausible repertoire to a small number of classes—distinct or otherwise—that could be explicitly learned. Rather, any action may be drawn from a great diversity of possibilities—both kinematic and teleological—that shape the characteristics of the underlying movements. This has two crucial implications. First, any modelling approach that lacks awareness of the full space of motion possibilities will be vulnerable to poor generalization and brittle performance in the face of kinematic anomalies. Second, the relations between different actions and their kinematic signatures are plausibly determinable only across the entire domain of action. These considerations identify the modelling of human motion as yet another domain of machine learning where generative modelling is necessary for increased data-efficiency, generalization, and robustness. Yet, limited general purpose unsupervised methods have been investigated. ![1_image_0.png](1_image_0.png) b) 100 occlusions. Top: degraded. Bottom: Our MAP estimate. Motion this way −→ Figure 1: Motion sequence across 64 timepoints on H3.6M, sampled every 8th timepoint. Ground truth pose represented by a dotted line. The left side of the body is green, and the right is purple, while the ground truth is a dotted line where it differs. We propose a novel architecture based on hierarchical variational autoencoders and deep graph convolutional neural networks for the application of holistic modelling of action. Our model has 4 stochastic layers (z0, z1, z2, z3) which model local activity at the bottom z3, which is dependent on successively more global patterns for higher latent variables, zi<3. We create this hierarchy of abstraction by reducing graph size, via graph convolutions, for the higher latent variables. z0 represents a single node - a completely global feature space that we show can effectively encode action category for conditional generation and interpretation. Our framework allows easy manipulation of stochastic depth, enabling us to investigate the effect of stochastic depth on performance. Our contributions may be summarized as follows: 1. We propose a hierarchical graph-convolutional variational autoencoder (HG-VAE) for deep generative modelling of graph structured frequencies for general purpose application in human motion modelling. 2. We demonstrate HG-VAE's ability to detect anomalies, and impute missing human motion with maximum a posteriori (MAP) estimates by gradient ascent of the model's posterior, as well as the effect this may have on downstream prediction. 3. We show that scaling to greater stochastic depth yields better likelihoods independent of model capacity. 4. We demonstrate that the HG-VAE learns an efficient hierarchical ordering, including a completely global, or abstract, representation of motion that can facilitate action-wise conditional generation and classification. 5. We provide an open-source implementation of the model we describe, available at https://anonymous.4open.science/r/generative_imputation-31EC/README.md. ## 2 Related Work Discriminative tasks in of human motion prediction: Prediction and action classification are two of the main discriminative tasks in human motion modelling. Both literatures adopt similar architectures. ![2_image_0.png](2_image_0.png) Figure 2: A diagram of one stochastic layer, L, of our HG-VAE. The encoder is in green, and the decoder is in red. The computational blocks GCL, and GCB correspond to equations 4, and 5 respectively. The residual connections use a learnable residual weighting Bachlechner et al. (2020) initialised to 0, where the red arrow indicates the connection weighted by the learned parameter. N is the number of graph nodes in the observable, x, and F is the number of features maintained in the deterministic part of the decoder, which was 256 for all experiments. n1, and f1 are the number of nodes and features for the latent variable zi, at stochastic layer, L = l. Sequence-to-sequence prediction using Recurrent Neural Networks (RNNs) were the de facto standard for human motion prediction Fragkiadaki et al. (2015); Jain et al. (2016); Martinez et al. (2017); Pavllo et al. (2018); Gui et al. (2018a); Guo & Choi (2019); Gopalakrishnan et al. (2019); Li et al. (2020a). However, the current state-of-the-art is dominated by feed forward models Butepage et al. (2017); Li et al. (2018); Mao et al. (2019); Wei et al. (2020). Though inherently faster and easier to train than RNNs, there is little exploration of the behaviour of such models in the context of occluded or otherwise corrupted data. Generative modelling of human motion. Recently there has been a trend recognising that the diversity of plausible motion necessitates the use of latent variables and thereby a distribution over synthesised or predicted motion. VAEs have been a common choice for this task: Rempe et al. (2021) use a dynamic VAE to predict a distribution over joint positions for the next pose. This approach alleviates the issue of the diversity in prediction but may still create semantically implausible continuations of motion. Or, indeed, physically impossible poses. Zhang et al. (2021) mitigate this concern by applying the SMPL-X body model at each step to project the predictions back onto the space of valid positions. While Aliakbarian et al. (2021) condition on the posterior of past observation which acts as a prior on the data posterior, thereby encouraging the latent space to carry relevant information. Mao et al. (2022), use a VAE conditioned on an action class label and show that this approach facilitates smoother transitions between actions since training data can not realistically have sufficiently diverse action transitions and lengths. Bourached et al. (2022) use a VAE to regularise discriminative models and thereby increase their robustness to out-of-distribution, yet feasible, actions. Although these approaches address the important issue of diversity, robustness, and plausibility in the prediction of motion sequences, there are a myriad of other significant tasks that should be addressed by generative modelling: GANs are used in Wang et al. (2021) to create a plausible and diverse motion set conditioned on initial trajectory and the physical environment. Wang et al. (2021), building upon convolutional sequence generation networks proposed in Yan et al. (2019). These generative approaches to human motion have target applications in augmented reality and 3D character animations. Other GANbased approaches such as Barsoum et al. (2018); Cai et al. (2018), are also synthetic and not designed to handle missing, or degraded data. Tevet et al. (2022), creates a diffusion based model with transformers for task-to-motion and action-to-motion synthesis. Kolotouros et al. (2021) develops a normalizing flow model to create a plausible set of predictions of 3D motion, from 2D poses. While Aliakbarian et al. (2022) also uses a flow based model to impute missing data and create realistic avatar poses while observing that real world observations have sparsity due to occlusions or other types of degradation. The diversity of applications of latent space models for human motion highlighted here motivates the investigation of the optimal architecture for its holistic modelling. Motion-VAE Ling et al. (2020), and Hierarchical Motion VAE (HM-VAE) Li et al. (2021) aim to holistically model motion using VAEs. Specifically, HM-VAE uses a 2-layered hierarchical VAE to learn complex human motions independent of task. The bottom latent variable is a graphical representation of pose, where each feature (each joint, or their coordinates) is a node. While the top latent variable is a reduced node representation obtained by pooling adjacent joints in the encoder and unpooling in the decoder. In this study, we define a generative mechanism that models the joint distribution over latent and observed variables and hence provides a mathematically principled machine for anomaly detection, and imputation, as well as action generation. In contrast to HM-VAE we propose an architecture that generalises to N stochastic layers. We connect all latent variables by a deterministic pathway and efficiently scale depth by using rezero residual connections Bachlechner et al. (2020). Further, our model explicitly learns the contraction of joints for the higher latent variables—rather than pooling, and the top latent variable is represented by a single node providing complete abstraction in which we show action types may form well defined domains. We show that this HG-VAE yields a posterior that outperforms baselines, and provides a powerful model for imputation—therein better downstream task performance—, generation and anomaly detection. Graph convolutions: Graph neural networks Kipf & Welling (2016) have received increasing attention in recent years. *Spatial graph convolutions* directly operate on vertices and their neighbors Niepert et al. (2016), and are rapidly becoming a popular tool for capturing the spatial structure of skeletons in human motion ![4_image_0.png](4_image_0.png) Figure 3: Conditional samples from HG-VAE when trained conditioned on actions from H3.6M. The generated action is controlled by a one-hot vector appended to the top latent variable, z0 ∈ R 256. We trained on 13 actions for H3.6M, making z0 ∈ R 269. The left side of the body is green, and the right is purple. Yan et al. (2018); Wang et al. (2021); Yan et al. (2019); Mao et al. (2019); Wei et al. (2020); Bourached et al. (2022); Li et al. (2020a). In particular, *spatial graph convolutions* provide a natural means of learning contractions and expansions of the number of nodes in the graph. In this work, we use this property of graph convolutions to capture, in a generative fashion, the spatial structure of skeletons at a hierarchy of graphical resolutions; from a node for each Cartesian dimension of each joint (at zN−1), to a single node representing the global properties of the motion sequence (at z0). Further, graph convolutions enable us to scale to much greater stochastic and deterministic depth while keeping the increase in the number of parameters minimal. ## 3 Preliminaries We review prior work and introduce some of the basic terminology used in the field. ## 3.1 Variational Autoencoders Variational AutoEncoders (VAEs) Kingma & Welling (2013) consist of a generator pθ(x|z), a prior p(z), and an approximate posterior qϕ(z|x). Neural networks parametrized by ϕ and θ are trained end-to-end with backpropagation and the reparameterization trick in order to maximize the evidence lower bound (ELBO) $$\log p_{\theta}({\bf x})\geq\,\mathbb{E}_{{\bf z}\sim q_{\phi}({\bf z}|{\bf x})}\log p_{\theta}({\bf x}\mid{\bf z})\,-D_{\mathrm{KL}}\left[q_{\phi}({\bf z}\mid{\bf x})\|p_{\theta}({\bf z})\right].$$ ## 3.2 Hierarchical Variational Autoencoders Originally, VAEs used fully factorised Gaussians for the prior, p(z), and the approximate posterior, pϕ(z|x). This can induce the generated, pθ(x|z), to have the property that correlated features blur together, this is because the interpolation of any two instances in the latent variable, z, is continuous. However, most complex distributions contain variables that have dependency on other variables. $$(1)$$ In Sønderby et al. (2016), a Hierarchical VAE (HVAE) structure, also called a *ladder*, or *top-down* VAE is proposed, where the prior and posterior generate latent variables as $$p_{\theta}(\mathbf{z})=p_{\theta}\left(\mathbf{z}_{0}\right)p_{\theta}\left(\mathbf{z}_{1}\mid\mathbf{z}_{0}\right)\ldots p_{\theta}\left(\mathbf{z}_{N}\mid\mathbf{z}_{<N}\right),$$ $$q_{\phi}(\mathbf{z}\mid\mathbf{x})=q_{\phi}\left(\mathbf{z}_{0}\mid\mathbf{x}\right)q_{\phi}\left(\mathbf{z}_{1}\mid\mathbf{z}_{0},\mathbf{x}\right)\ldots q_{\phi}\left(\mathbf{z}_{N}\mid\mathbf{z}_{<N},\mathbf{x}\right).\tag{10}$$ (2) (3) $\text{}$ In this structure, ϕ first performs a deterministic *bottom-up* pass, producing multiple sets of features of x, ![5_image_0.png](5_image_0.png) {fN (x), fN−1(x), · · · , f1(x)}. Then θ performs a *bottom-down* pass producing latent variables from i = 0, to i = N − 1 as: pθ(zi), and qϕ(zi|fi(x)). Child (2021) shows that this structure actually generalises autoregressive models—models that probabilistically select states or variables based on previous states or variables—, and with the right training conditions and architectural choices, will outperform them across a range of major vision datasets. Figure 4: Percentage change in MSE from the ground truth of the MAP estimates with respect to mean imputation as a function of percentage of features occluded. HG-VAE gives a 77% ±1% decrease from mean imputation consistently across all degrees of degradation. ## 4 Our Approach We design a hierachical graph-convolutional variational autoencoder (HG-VAE), that avails of hierarchical latent variables with a graph-convolutional structure. Mao et al. (2019); Wei et al. (2020); Bourached et al. (2022); Zhang et al. (2021) use temporal frequencies encoded by DCT as features of nodes in a graph on which graph-convolutional operations are performed. This is similar to how Convolutional Neural Networks, for vision tasks Lawrence et al. (1997); Child (2021), expand RGB channels into a larger set of feature channels while preserving a 2-dimensional spatial structure through a spatial-convolutional operation. Early work with Graph Convolutional Networks (GCNs) used a symmetric (binary, or weighted) adjacency matrix such as in Kipf and Welling Kipf & Welling (2016), where the applications were citation networks, or knowledge graphs. Such methods are referred to as *spectral graph convolutions*, which operate on the spectral domain via graph Laplacians. However, usage in human motion prediction tasks necessitates a learnable weighted adjacency matrix, known as *spatial graph convolutions* which, for convenience, we'll refer to in short as a graph convolution. Li et al. (2020b) show that representing the skeletal graph at multiple scales is a sensible representation for discriminative purposes. Further, Yan et al. (2019); Wang et al. (2021) synthesise motion by sampling latent variables of successively greater graph resolution. In juxtaposition to Mao et al. (2019); Wei et al. (2020); Bourached et al. (2022); Li et al. (2020b), which have a discriminative focus, our objective is to obtain latent variables of a hierarchical structure. We use different hierarchical levels to represent variables at different graphical scales, and levels of abstraction. We refer to two main neural network computational blocks as a Graph Convolutional Layer (GCL), which takes as input a graph A ∈ R Nin×Fin with number of nodes, Nin, and number of features, Fin, and outputs a new graph with number of nodes, Nout, and number of features, Fout. F may be considered much like feature channels in a Convolutional Neural Network. For this problem they represent features derived from the frequencies of motion. The second main neural network computational block we refer to as a Graph Convolutional Block (GCB) which takes as input graph A ∈ R Nin×Fin , and outputs a graph of the same dimensionality. Mathematically, these are defined as $$\begin{array}{l}{{\mathrm{GCL}(\mathbf{A})=\!\sigma\!\left(\mathbf{S}\mathbf{A}\mathbf{W}+\mathbf{b}\right)}}\\ {{\mathrm{GCB}(\mathbf{A})=\!\sigma\!\left(\mathbf{S_{2}}\sigma\!\left(\mathbf{S_{1}}\mathbf{A}\mathbf{W_{1}}+\mathbf{b_{1}}\right)\mathbf{W_{2}}+\mathbf{b_{2}}\right)+\alpha\mathbf{A},}}\end{array}$$ $$\left({4}\right)$$ $$\left(5\right)$$ + αA, (5) where in equation 4, S ∈ R Nout×Nin , W ∈ R Fin×Fout , and b ∈ R Nout×Fout are all learnable parameters. Similarly, S,W, b ∈ R Nin×Fin , in equation 5, are also learnable parameters. α is a learnable residual weighting that is initialised to 0 as proposed recently by Bachlechner et al. (2020) as an efficient alternative to batch normalization, Ioffe & Szegedy (2015), to mitigate the effect of covariate shift. σ(·) is a Gaussian Error Linear Unit, Hendrycks & Gimpel (2016), (GeLU). ## 5 Implementation Network Architecture: In the top down decoders we maintain a principal deterministic route connected to each latent variable such that latent variables may contribute directly to the output without being required to contribute to composite features in dependent variables. This approach increases the speed and stability of training by simplifying the nature of the causal relationships between the latent variables and x, where possible. All computational blocks are of the form of either equation 4 or equation 5. The encoder-decoder architecture for a single stochastic level is shown in Figure 2. Training Details: We use the ELBO, defined by equation 1, to train the network end-to-end using the ADAM optimizer Kingma & Ba (2014), with a learning rate of 0.001 and a batch size of 800. We set pθ(x|z) = N (µ(z), σ(z)), where µ and σ are outputs of the final stochastic decoder. On the H3.6M dataset, we appended a one-hot vector to z0 to represent each action class. We use two main techniques for increasing training stability; gradient clipping, enforcing a gradient norm value of 100.0, and a similar warm-up procedure for the Kullback-Leibler (DKL) divergence to Sønderby et al. (2016), and Child (2021), increasing its weighting in the loss linearly from 0.001 to 1.0 over the first 200 epochs. We train for a total of 5000 epochs, which requires ca. one week on a NVIDIA GeForce RTX 2070 GPU. ## 6 Experiments 6.1 Datasets We are given a motion sequence X1:N = (x1, x2, x3, *· · ·* , xN ) consisting of N consecutive human poses, where xi ∈ R K, with K the number of parameters describing each pose. The temporal components are converted into frequencies using the Discrete Cosine Transformation (DCT) prior to input into the network, and then back to timepoints using the Inverse Discrete Cosine Transformation (IDCT) before being applied to the loss. Further details are supplied in the appendix. AMASS: The Archive of Motion Capture as Surface Shapes (AMASS) dataset Mahmood et al. (2019) is the largest open-source human motion dataset, which aggregates a number of mocap datasets, such as CMU, KIT and BMLrub, using a SMPL Loper et al. (2015); Romero et al. (2017) parametrization to obtain a human mesh. SMPL represents a human by a shape vector and joint rotation angles. The shape vector, which encompasses coefficients of different human shape bases, defines the human skeleton. We obtain human poses in 3D by applying forward kinematics to one human skeleton. In AMASS, a human pose is represented by 52 joints, including 22 body joints and 30 hand joints. Since we are interested in skeletal motion, we follow Wei et al. (2020) and discard the hand joints and the 4 static joints, leading to an 18-joint human pose. Further, we also use BMLrub1(522 min. video sequence), as our test set as each sequence consists of one actor performing one type of action. All quantitative results are reported from this dataset. We consider motion sequences of 50 timepoints taking only every second frame. A single datapoint here hence consists of 54 nodes, and 50 timepoints. Forming 2700 dimensional inputs. Human3.6M (H3.6M): The H3.6M dataset Ionescu et al. (2011; 2013), so called as it contains a selection of 3.6 million 3D human poses and corresponding images, consists of seven actors each performing 15 actions, such as walking, eating, discussion, sitting, and talking on the phone. Martinez et al. (2017); Mao et al. (2019); Li et al. (2020a) all follow the same training and evaluation procedure: training their motion prediction model on 6 (5 for train and 1 for cross-validation) of the actors, and use subject 5 as a heldout test set. In this work, we use our model trained on H3.6M to demonstrate most of our qualitative results. ## 6.2 Baselines VAE: A conventional, fully-connected, VAE is trained with nz = 50 dimensional latent variable and a symmetric encoder-decoder of architecture 2000, 1000, 500, 100, 50, with batch normalization applied to each layer and trained for 200 epochs with a learning rate of 0.001 using the ADAM optimizer. The model has 20.81M parameters. HM-VAE: We implement and train the motion prior model proposed by Li et al. (2021). We pool the joints in the upper latent variable to 6 joints, which results in a graph size of 18 at the top latent variable, and 54 on the bottom. In our model, we directly model the scale of the uncertainty in the reconstruction, σ(z), which implicitly learns an appropriate scaling between the two terms in the ELBO (equation 1). However, for HM-VAE we downweight the DKL divergence by a factor of 0.003, as in Li et al. (2018). We otherwise use the same hyperparameters as for our model. ## 6.3 Experimental Setup We present experiments in support of 3 main arguments for the proficiency of our generative model. First, we degrade the test set data by simulating occlusions wherein, a naïve imputation, the mean value of the missing feature over the train set, is substituted for the ground truth. The degree of degradation is controlled by the number of input features occluded. A sufficient level of degradation may be considered as shifting the data out-of-distribution, which we quantify using the model's posterior. We show that this well-informed imputation facilitates better downstream discriminative tasks by considering the degradation in performance of two deterministic deep network motion prediction models, convSeq2Seq Li et al. (2018), and HisRepItself 1Available at https://amass.is.tue.mpg.de/dataset. Wei et al. (2020) under feature occlusion. We train each model on 50 timepoints, to predict the next 25. At test time we randomly occlude the input features and report the error on the predicted future trajectory. At test time we randomly occlude the input features and report the error on the predicted future trajectory. We report the Mean Per Joint Position Error (MPJPE) Ionescu et al. (2013) given by $$\ell_{m}={\frac{1}{J(N+T)}}\sum_{n=1}^{N+T}\sum_{j=1}^{J}\left\|{\hat{\mathbf{p}}}_{j,n}-\mathbf{p}_{j,n}\right\|^{2}$$ $$(6)$$ 2(6) where pˆj,n ∈ R 3 denotes the predicted jth joint position in frame n. And pj,n is the corresponding ground truth, while J is the number of joints in the skeleton. Second, we examine the effect of stochastic depth, independent of capacity, on performance by training a 16 stochastic-layered model several times with a varying number of its top latent variables *turned off* (zL = µL, for each level, L, that is *turned off*, and KLL does not contribute to the loss). Third, we demonstrate that HG-VAE learns an efficient hierarchical ordering that facilitates levels of abstraction, the top of which may represent action classification as well as conditional generation. ## 7 Results Computing the average posterior on the H3.6M test set yielded the value p(z|x) = −9312 with a standard deviation of 500. This is the average negative log probability predicted by our trained model. We use this as a benchmark to compare anomalies, below. Figure 1, a and b, show two motion sequences (top of each) with the p(z|x) = −10039, and p(z|x) = −10364 respectively, meaning that both are well outside a standard deviation of the average. The bottom row of a and b show the MAP-imputed values using HG-VAE, with p(z|x) = −9643, and p(z|x) = −9717 respectively. A reasonable out-of-distribution (OoD) threshold of the standard deviation of the posterior would both enable the model to detect these OoD examples, and, if the samples are OoD due to occlusion, find the most probable in-distribution representation. In figure 5 we compare the raw performance of convSeq2Seq, and HisRepItself, given mean imputation for occlusions, against the MAP estimates obtained by performing 100 steps of gradient ascent on HG-VAE's posterior, as well as on HM-VAE's posterior. The use of MAP estimates from our model give significant improvement in downstream prediction task compared to HM-VAE, as measured by MPJPE indicating the greater quality of p(z|x), and capacity of our model for handling complex non-linear relationships present in human motion. In Figure 4 we consider the MSE between the ground truth and the naïve mean imputation; as well as each models' MAP estimates. We show this as a percentage difference in MSE compared to the MSE of mean imputation. HG-VAE gives a 77%±1% decrease consistently across all degrees of degradation, which greatly beats baseline methods. We investigated how statistical depth, independent of capacity, improved performance. Here a stochastic depth of 1 is a classical VAE with a very deep encoder, and decoder. Simply, zL = µL, for each level L that was *turned off*, and KLL didn't contribute to the loss. Table 1 shows that greater stochastic depth achieves higher likelihoods, reconstructions, and KL. Implying that the model benefits greatly from the latent variable dependency structure. Figure 3 show samples from the model trained on H3.6M with one-hot labels for walking, smoking, and sitting. Each action clearly emulates its purported action class indicating the spatial abstraction of the top latent variable achieved by the learned contractions. To further explain this, Figure 6 shows a conditional sample drawn with the label *walking* from a 4-layer HG-VAE at each level of resolution. 6a is the mean sample drawn from the walking cluster (see appendix), Zi = µi, for i > 0. While 6b draws genuine sample at the top level, Z0 = z0, and the mean for lower levels, Zi = µi, for i > 1, etc. We can see that each latent level adds further expression to the motion sequence, ![9_image_0.png](9_image_0.png) Figure 5: MPJPE for discriminative prediction models with varying degree of degradation with and without including the MAP estimate of HG-VAE and HM-VAE on the occluded features. | Table 1: HG-VAE Performance as a Function of Stochastic Depth. | | | | | |------------------------------------------------------------------|------------------|--------|------|---------------| | Parameters | Stochastic Depth | log(X) | MSE | KL-Divergence | | 15M | 1 | 1986 | 2.30 | 274 | | 15M | 2 | 3912 | 1.31 | 79 | | 15M | 4 | 6511 | 0.59 | 30 | | 15M | 8 | 7112 | 0.55 | 32 | | 15M | 16 | 7238 | 0.53 | 29 | illustrating that low levels of the hierarchy (6d) model fine movements, and local relationships, while higher levels model a much more course 'snapshot' of the motion sequence. ## 8 Conclusions We propose a novel hierarchical graph-convolutional variational autoencoder suited to the diversity and complexity of human motion modelling and demonstrate its ability to learn complex and action-generic latent distributions for human motion that may be used for highly informed imputation and anomaly detection, as well as downstream discriminative tasks. We show that with just 10 gradient ascent steps we obtain a 77% decrease in MSE compared to a mean imputation policy consistently across all degrees of degradation which is greater and more consistent than all baseline models. We demonstrate that stochastic depth matters, ![10_image_0.png](10_image_0.png) Figure 6: Motion from left to right. A walking sample from a 4-layered HG-VAE. Given Z0 = z0, p(z) was generated as in equation 2. We extract only the mean for each Z>i. independent of model capacity. We further show that the hierarchical latent variables have both enough expression in the lower latent variables (zi>0) to model local observables while benefiting from complete abstraction in the top latent variable (z0) such that it may be class-conditioned by appending a one-hot vector and produce qualitatively distinct actions. Due to the generality of the architecture, HG-VAE may be a suitable, deeply expressive, generative model for many applications in the modelling of human motion. ## Broader Impact Statement The method of missing data imputation via gradient ascent on the posterior, though powerful given a good generative model, is expensive. Despite only a small number of parameters—the number of missing features—each step needs almost a complete forward and backward pass through the model. So this may not be an effective method of imputation for online models. However, using a small number of posterior ascent steps is shown to already yield significant improvement, and may be applicable for many online applications. Furthermore, such a model trained on a motion capture dataset such as H3.6M, or AMASS—as is the case here—may be used to very effectively improve motion capture data containing occlusions that is captured in the wild by pose estimation methods. Further work might also include using this model trained on AMASS to synthesise larger datasets. We acknowledge that models capable of learning highly specific motion patterns could be used for nefarious, Orwellian, applications. However, we believe the potential benefit in an ever more automated world, especially in safety in healthcare, outweighs the potential cost. ## References Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. Social lstm: Human trajectory prediction in crowded spaces. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pp. 961–971, 2016. Sadegh Aliakbarian, Fatemeh Saleh, Lars Petersson, Stephen Gould, and Mathieu Salzmann. Contextually plausible and diverse 3d human motion prediction. In *Proceedings of the IEEE/CVF International* Conference on Computer Vision, pp. 11333–11342, 2021. Sadegh Aliakbarian, Pashmina Cameron, Federica Bogo, Andrew Fitzgibbon, and Thomas J. Cashman. Flag: Flow-based 3d avatar generation from sparse observations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13253–13262, June 2022. Thomas Bachlechner, Bodhisattwa Prasad Majumder, Huanru Henry Mao, Garrison W Cottrell, and Julian McAuley. Rezero is all you need: Fast convergence at large depth. *arXiv preprint arXiv:2003.04887*, 2020. Emad Barsoum, John Kender, and Zicheng Liu. Hp-gan: Probabilistic 3d human motion prediction via gan. In *Proceedings of the IEEE conference on computer vision and pattern recognition workshops*, pp. 1418–1427, 2018. Apratim Bhattacharyya, Mario Fritz, and Bernt Schiele. Long-term on-board prediction of people in traffic scenes under uncertainty. In *Proceedings of the IEEE Conference on Computer Vision and Pattern* Recognition, pp. 4194–4202, 2018. Anthony Bourached and George Cann. Raiders of the lost art. *arXiv preprint arXiv:1909.05677*, 2019. Anthony Bourached, George H Cann, Ryan-Rhys Griffths, and David G Stork. Recovery of underdrawings and ghost-paintings via style transfer by deep convolutional neural networks: A digital tool for art scholars. Electronic Imaging, 2021(14):42–1, 2021. Anthony Bourached, Ryan-Rhys Griffiths, Robert Gray, Ashwani Jha, and Parashkev Nachev. Generative model-enhanced human motion prediction. *Applied AI Letters*, 3(2):e63, 2022. Judith Butepage, Michael J Black, Danica Kragic, and Hedvig Kjellstrom. Deep representation learning for human motion prediction and classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6158–6166, 2017. Haoye Cai, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang. Deep video generation, prediction and completion of human action sequences. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 366–382, 2018. George H Cann, Anthony Bourached, Ryan-Rhys Griffths, and David G Stork. Resolution enhancement in the recovery of underdrawings via style transfer by generative adversarial deep neural networks. Electronic Imaging, 2021(14):17–1, 2021. Chien-Yen Chang, Belinda Lange, Mi Zhang, Sebastian Koenig, Phil Requejo, Noom Somboon, Alexander A Sawchuk, and Albert A Rizzo. Towards pervasive physical rehabilitation using microsoft kinect. In 2012 6th international conference on pervasive computing technologies for healthcare (PervasiveHealth) and workshops, pp. 159–162. IEEE, 2012. Rewon Child. Very deep vaes generalize autoregressive models and can outperform them on images, 2021. Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 4346– 4354, 2015. Evelien E Geertsema, Roland D Thijs, Therese Gutter, Ben Vledder, Johan B Arends, Frans S Leijten, Gerhard H Visser, and Stiliyan N Kalitzin. Automated video-based detection of nocturnal convulsive seizures in a residential care setting. *Epilepsia*, 59:53–60, 2018. Anand Gopalakrishnan, Ankur Mali, Dan Kifer, Lee Giles, and Alexander G Ororbia. A neural temporal model for human motion prediction. In *Proceedings of the IEEE Conference on Computer Vision and* Pattern Recognition, pp. 12116–12125, 2019. James Grant, Alexis Boukouvalas, Ryan-Rhys Griffiths, David Leslie, Sattar Vakili, and Enrique Munoz De Cote. Adaptive sensor placement for continuous spaces. In *International Conference on Machine* Learning, pp. 2385–2393. PMLR, 2019. Liang-Yan Gui, Yu-Xiong Wang, Xiaodan Liang, and José MF Moura. Adversarial geometry-aware human motion prediction. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 786–803, 2018a. Liang-Yan Gui, Kevin Zhang, Yu-Xiong Wang, Xiaodan Liang, José MF Moura, and Manuela Veloso. Teaching robots to predict human motion. In *2018 IEEE/RSJ International Conference on Intelligent* Robots and Systems (IROS), pp. 562–567. IEEE, 2018b. Xiao Guo and Jongmoo Choi. Human motion prediction via learning local structure representations and temporal dependencies. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 2580–2587, 2019. Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. PMLR, 2015. Catalin Ionescu, Fuxin Li, and Cristian Sminchisescu. Latent structured models for human pose estimation. In *2011 International Conference on Computer Vision*, pp. 2220–2227. IEEE, 2011. Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. *IEEE transactions on pattern* analysis and machine intelligence, 36(7):1325–1339, 2013. Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatiotemporal graphs. In *Proceedings of the ieee conference on computer vision and pattern recognition*, pp. 5308–5317, 2016. Manish Kakar, Håkan Nyström, Lasse Rye Aarup, Trine Jakobi Nøttrup, and Dag Rune Olsen. Respiratory motion prediction by using the adaptive neuro fuzzy inference system (anfis). *Physics in Medicine &* Biology, 50(19):4721, 2005. Gregory Kell, Ryan-Rhys Griffiths, Anthony Bourached, and David G Stork. Extracting associations and meanings of objects depicted in artworks through bi-modal deep networks. *arXiv preprint* arXiv:2203.07026, 2022. Daehee Kim and J Paik. Gait recognition using active shape model and motion prediction. *IET Computer* Vision, 4(1):25–36, 2010. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 11605–11614, 2021. Hema Koppula and Ashutosh Saxena. Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation. In *International conference on machine learning*, pp. 792–800, 2013a. Hema Swetha Koppula and Ashutosh Saxena. Anticipating human activities for reactive robotic response. In *IROS*, pp. 2071. Tokyo, 2013b. Rynson WH Lau and Addison Chan. Motion prediction for online gaming. In International Workshop on Motion in Games, pp. 104–114. Springer, 2008. Steve Lawrence, C Lee Giles, Ah Chung Tsoi, and Andrew D Back. Face recognition: A convolutional neural-network approach. *IEEE transactions on neural networks*, 8(1):98–113, 1997. Chen Li, Zhen Zhang, Wee Sun Lee, and Gim Hee Lee. Convolutional sequence to sequence model for human dynamics. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5226–5234, 2018. Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, and Yajie Zhao. Taskgeneric hierarchical human motion prior using vaes. *arXiv preprint arXiv:2106.04004*, 2021. Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, and Qi Tian. Dynamic multiscale graph neural networks for 3d skeleton based human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 214–223, 2020a. Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, and Qi Tian. Dynamic multiscale graph neural networks for 3d skeleton based human motion prediction. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pp. 214–223, 2020b. Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel Van De Panne. Character controllers using motion vaes. *ACM Transactions on Graphics (TOG)*, 39(4):40–1, 2020. Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multi-person linear model. *ACM transactions on graphics (TOG)*, 34(6):1–16, 2015. Zhuo Ma, Xinglong Wang, Ruijie Ma, Zhuzhu Wang, and Jianfeng Ma. Integrating gaze tracking and head-motion prediction for mobile device authentication: A proof of concept. *Sensors*, 18(9):2894, 2018. Naureen Mahmood, Nima Ghorbani, Nikolaus F Troje, Gerard Pons-Moll, and Michael J Black. Amass: Archive of motion capture as surface shapes. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pp. 5442–5451, 2019. Wei Mao, Miaomiao Liu, Mathieu Salzmann, and Hongdong Li. Learning trajectory dependencies for human motion prediction. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 9489– 9497, 2019. Wei Mao, Miaomiao Liu, and Mathieu Salzmann. Weakly-supervised action transition learning for stochastic human motion prediction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 8151–8160, 2022. Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2891–2900, 2017. Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. *arXiv preprint arXiv:1802.03426*, 2018. Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In *International conference on machine learning*, pp. 2014–2023. PMLR, 2016. Brian Paden, Michal Čáp, Sze Zheng Yong, Dmitry Yershov, and Emilio Frazzoli. A survey of motion planning and control techniques for self-driving urban vehicles. *IEEE Transactions on intelligent vehicles*, 1(1):33–55, 2016. Dario Pavllo, David Grangier, and Michael Auli. Quaternet: A quaternion-based recurrent model for human motion. *arXiv preprint arXiv:1805.06485*, 2018. Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. Humor: 3d human motion model for robust pose estimation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 11488–11499, 2021. Ahmadreza Reza Rofougaran, Maryam Rofougaran, Nambirajan Seshadri, Brima B Ibrahim, John Walley, and Jeyhan Karaoguz. Game console and gaming object with motion prediction modeling and methods for use therewith, April 17 2018. US Patent 9,943,760. Javier Romero, Dimitrios Tzionas, and Michael J Black. Embodied hands: Modeling and capturing hands and bodies together. *ACM Transactions on Graphics (ToG)*, 36(6):1–17, 2017. Akihiko Shirai, Erik Geslin, and Simon Richir. Wiimedia: motion analysis methods and applications using a consumer video game controller. In *Proceedings of the 2007 ACM SIGGRAPH symposium on Video* games, pp. 133–140, 2007. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. *Advances in neural information processing systems*, 29:3738–3746, 2016. David G Stork, Anthony Bourached, George H Cann, and Ryan-Rhys Griffths. Computational identification of significant actors in paintings through symbols and attributes. *Electronic Imaging*, 2021(14):15–1, 2021. Petr Švec, Atul Thakur, Eric Raboin, Brual C Shah, and Satyandra K Gupta. Target following with motion prediction for unmanned surface vehicle operating in cluttered environments. *Autonomous Robots*, 36(4): 383–405, 2014. Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H. Bermano. Human motion diffusion model, 2022. Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Scene-aware generative network for human motion synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12206– 12215, 2021. Yijing Wang, Zhengxuan Liu, Zhiqiang Zuo, Zheng Li, Li Wang, and Xiaoyuan Luo. Trajectory planning and safety assessment of autonomous vehicles based on motion prediction and model predictive control. IEEE Transactions on Vehicular Technology, 68(9):8546–8556, 2019. David Webster and Ozkan Celik. Systematic review of kinect applications in elderly care and stroke rehabilitation. *Journal of neuroengineering and rehabilitation*, 11(1):108, 2014. Mao Wei, Liu Miaomiao, and Salzemann Mathieu. History repeats itself: Human motion prediction via motion attention. In *ECCV*, 2020. Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In *Thirty-second AAAI conference on artificial intelligence*, 2018. Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan, and Dahua Lin. Convolutional sequence generation for skeleton-based action synthesis. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision (ICCV), October 2019. Yan Zhang, Michael J Black, and Siyu Tang. We are more than our joints: Predicting how 3d bodies move. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3372–3382, 2021. ## Appendix The appendix consists of 2 parts. We provide a brief summary of each section below. Appendix A: we provide an elaboration of the formulation of the data and the equations used to transform the temporal component of the data before input to the model, as well as after output. Appendix B: we provide further results and explanation of the imputation experiments available in the main text. ## A Problem Formulation A.1 Dct-Based Temporal Encoding Mao et al. (2019) proposed transforming the temporal component of human motion to frequencies using Discrete Cosine Transformations (DCT). In this way each resulting coefficient encodes information of the entire sequence at a particular temporal frequency. Furthermore, the option to remove high or low frequencies is provided. Given a joint, k, the position of k over N time steps is given by the trajectory vector: xk = [xk,1, . . . , xk,N ] where we convert to a DCT vector of the form: Ck = [Ck,1, . . . , Ck,N ] where Ck,l represents the lth DCT coefficient. For δl1 ∈ R N = [1, 0, *· · ·* , 0], these coefficients may be computed as $$C_{k,l}=\sqrt{\frac{2}{N}}\sum_{n=1}^{N}x_{k,n}\frac{1}{\sqrt{1+\delta_{l1}}}\cos\left(\frac{\pi}{2N}(2n-1)(l-1)\right).\tag{7}$$ If no frequencies are cropped, the DCT is invertible via the Inverse Discrete Cosine Transform (IDCT): $$x_{k,l}=\sqrt{\frac{2}{N}}\sum_{l=1}^{N}C_{k,l}\frac{1}{\sqrt{1+\delta_{l1}}}\cos\left(\frac{\pi}{2N}(2n-1)(l-1)\right).\tag{8}$$ We find this lossless transformation effectively creates a feature space that is much more conducive for optimisation. The objective is still to learn p(x). ## B Imputation Figure 7 shows the initial posterior of the degraded input as well as the final after 10 steps, a p(z|x)MAP estimate, of gradient ascent on the model's posterior. For the HG-VAE (7b) the average posterior already lies outside the standard deviation of the ground truth for just 0.5% of features occluded, demonstrating acute out-of-distribution detection. In contrast, the VAE's average posterior only falls outside a standard deviation of the ground truth for approximately 50% occlusion. A fully expressive model should not, on average, give a higher MAP estimate than the posterior on the ground truth. Note, we show comparison to the VAE to illustrate this point, not to prove the efficacy of our model—which is evidenced by the experiments in the main text. Figure 7a indicates a lack of capacity to delineate latent features representative of the full expressivity of x, and hence gives a higher probability to a closer to average expression than the ground truth. We see no evidence of this lack of expressivity in Figure 7b. In fact, the MAP estimate strictly upper-bounds the mean imputation, p(z|x)init, and is strictly upper-bounded by the ground truth (p(z|x)GT). Imputation of missing data: Given occluded inputs, and a sufficiently expressive p(z|x), it is possible to gradient ascent the missing inputs on the model's posterior. If the model's maximum a posteriori (MAP), given the partially observed datapoint, is close to the ground truth, x, this will result in a statistically ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) Figure 7: Posterior, p(z|x), for the motion data of 50 timepoints on BMLrub with increasing level of degradation. Note p(2/x)MAP is a MAP estimate obtained via gradient ascent. Number of features occluded is out of a maximum of 2700. well-informed imputation. For these experiments we use the Adam optimiser on the negative log posterior - > log (p(zi|z)) | and select a learning rate that is sufficiently low for stable descent. This was 100.0 for the VAE and 1.0 for HG-VAE, and HM-VAE (this difference is proportional to the difference in magnitude of the average log posterior for the respective models). We gradient ascended on 800 datapoints at once, for a maximum of 10 steps selecting the step of highest model posterior for each datapoint. Figure 8 shows the results from Figure 7b as a percentage change in the negative log posterior. We can see that the change on a logarithmic scale is consistent over all degrees of degradation indicating the sensitivity of the model to subtle changes in the observables. We use Uniform Manifold Approximation and Prediction (UMAP) McInnes et al. (2018), to project the top latent variables of 1000 random samples, conditioned on one-hot encodings, onto a 2-dimensional plane. We show two contrasting actions in figure 9, walking, and sitting. We can see that a very clear separation between the two different actions showing that there is a strong level of abstraction in the top latent variable. ![17_image_0.png](17_image_0.png) Figure 8: % change in negative log posterior in the HG-VAE for the motion data of 50 timepoints on BMLrub with increasing levels of degradation. Degree of degradation displayed on a log-log scale to illustrate the consistency of the model for small degrees of degradation. ![18_image_0.png](18_image_0.png) sitting.
Review 1: Summary: Based on the pre-studied hierarchical VAE, this paper proposed a hierarchical graph-convolutional VAE-based network that was specifically designed to predict future poses from past occluded poses. Two main neural network computational blocks are introduced: the Graph Convolutional Layer (GCL) and the Graph Convolutional Block (GCB). The GCL takes a graph as input and outputs a new graph with potentially different numbers of nodes and features. The GCB, on the other hand, takes a graph as input and outputs a graph of the same dimensionality. Equations are provided to mathematically define the operations of the GCL and GCB, including the learnable parameters involved. The use of a Gaussian Error Linear Unit (GeLU) activation function and a learnable residual weighting ($\alpha$) for mitigating covariate shift is also mentioned. The experiments demonstrate the proficiency of the HG-VAE model in several aspects. Strengths and Weaknesses: Strength: The proposed approach combines hierarchical latent variables with a graph-convolutional structure for modeling human motion. The hierarchical structure of the model allows for the learning of motion patterns at multiple levels of abstraction, capturing both local and global dependencies in human motion. This enables the model to generate coherent and realistic motion sequences. By leveraging graph-convolutional operations, the model can effectively capture the spatial relationships between joints in the human body, which are crucial for modeling complex human motion. This architecture is well-suited for tasks that involve structured data like human pose sequences. The experiments demonstrate that the HG-VAE model outperforms baseline models, such as conventional VAEs and HM-VAE, in terms of mean squared error (MSE) and likelihood. This indicates that the proposed model can effectively model and reconstruct human motion. Weakness: The novelty of the proposed method seems limited. The core of the submission is their proposed network that is embedded in Hierarchical VAE, which has been well studied, with assumptions of Gaussians for all the conditional probability, but it seems to lack of motivation and details of the network development, such as why and how such networks can handle the application. The proposed model's complexity, especially with the use of hierarchical latent variables and graph-convolutional structures, may result in high computational costs, limiting its scalability to larger datasets or real-time applications. The training time of 5000 epochs on a GPU suggests that the model may be computationally intensive. The training details mention the use of gradient clipping and a warm-up procedure for the KL divergence, indicating that the model may be sensitive to hyperparameters and require careful tuning. Requested Changes: See my comments above Broader Impact Concerns: N/A ================================================== Review 2: Summary: The submitted manuscript focuses on modeling human motion via a hierarchical variational autoencoder, implemented via graph convolutional layers. The proposed HG-VAE is supposed to be able to detect out-of-distribution data, and even to handle missing data. Ablation studies and comparisons with a few methods in the H3.6 and AMASS datasets are conducted. Strengths and Weaknesses: **Strengths** - The idea of a hierarchical graph-convolution variational autoencoder is interesting - Experiments are conducted in two large-scale datasets of motion modeling. **Weaknesses** - The reason why the study is limited to motion prediction is unclear, and the methodology is not specifically taylored to this application. So either the main contribution is the application, and one must compare with other methods, or the main contribution is the methodology, in which case the contributions must be very clearly explained. - Even if from an implementation perspective, there is some novelty, the methodological contribution is incremental, as I understand it as an implementation of the method in [a] with graph convolutional layers. - The notation is very confusing: it is very unclear how the temporality of the signal is dealt with, and if the network archtiecture can accomodate signals of different length. - There is no mention to the dynamica VAEs [b], that allow to process sequential data, which is highly related to the submitted manuscript. [a] Very deep VAEs generalise autoregressive models and can ourperform the on images, R. Child, ICLR 2021. [b] Dynamical Variational autoencoders; A comprehensive Review, L. Girin et al. FnT ML 2021. Requested Changes: The most important change to me is to clarify whether the paper is more oriented towards the human motion modeling application or towards the methodology. If it is the first case, then the motivation of the proposed methodology regarding the human motion modeling application has to be clarified. Also, the downstream tasks should be moved from the appendix to the main paper, and the advantage of the proposed method w.r.t. to the state-of-the-art (in general, prediction quality, inference time, etc) should be extensively discussed. If it is the second case, then the experiments cannot be limited to the human motion modeling task, as we need to be sure that the proposed methodology can be used on other domains (e.g. speech, ECG, etc). Also, the methodoogical contributions w.r.t. [a] (closest model in the literature) have to better explain, since right now it seems that it is an implementation matter. Finally, if the paper presents a methodological contribution, there should also be a comparison with dynamical VAEs [b], since they are the equivalent of VAE to process sequential data, and I believe MAP could also be applied to them. As is, I do not believe the paper will interest many TMLR readers (and this is why I selected "No" in the audience question)/ But if the choice is clarified and the paper adjusted accordingly, it is possible it would interest several TRML readers. Broader Impact Concerns: From a technical perspective, limitations are discussed. From a potential missuse, the discussion os fairly small. ================================================== Review 3: Summary: This paper present a hierarchical graph convolutional VAE to generate model of action. The proposed architecture relies on hierarchical latent representation, with 4 stochastic layers, where earlier stochastic layers global representation of action and the bottom ones represent more local activities. Authors demonstrate that more stochastic layers contributes more strongly to better likelihood estimates of the motions. Strengths and Weaknesses: **Strength**: The paper focuses on learning the distribution of complex human motion, which indeed is a challenging and interesting problem. **Weaknesses**: - Paper lacks technical novelty and lacks discussion of very similar efforts in this area, making it hard to judge which component of the proposed method is novel. - Paper lacks discussion of recent efforts in this area. The field of human motion analysis, specially with probabilistic generative models, has been growing very fast, and authors missed many important works after 2021-2022 (see references below) - The paper lacks clarity in writing the proposed method and highlighting its differences to existing techniques. Particularly, the main idea of the paper is presented in the preliminary section! The preliminary section seems over explained, e.g., section 3.2 and 3.1 can be summarized through citations. Also, long equation in section 3.2 is not labeled correctly. - The baselines used for comparison are not complete (see references below). Also, authors only reported results on Human3.6M, missing more challenging datasets to report on (e.g., standard AMSS test split results are not provided comprehensively in the main paper). - Qualitative results can be improved. Illustrating stickman make the judgement of plausibility of the generated motions very hard. One potential improvement is to use a parametric model of human body (e.g., SMPL) with neutral shape parameters, allowing better illustration of the 3D pose of human. Moreover, for such tasks, authors are highly recommended to attach a supplementary video. - Motion reconstruction quality is not studied thoroughly in this work. While some aspects of robustness to OOD is considered, many strong metrics for motion reconstruction (MPJPE, MPJVE, ...) and metrics related to quality and diversity of generated motions are not discussed. Results in section 6 are not well discussed. Authors are encouraged to make it clearer ,e.g., what does −10364 mean in `p(z|x) = −10364`. Is it good or bad? With standard deviation of 500, it is hard to judge the comparisons made in the following sentence of section 6. Overall, I encourage authors to better explain the results. - Paper discusses OOD motions. However, the proposed method is only compared to limited VAE-based methods, failing to compare against other generative models of human motion, e.g., Diffusion models, Flow-based models, or more recent VAE-based models (see references below). _References_ (+ many more recent works in this area, but these are the ones in similar-ish years authors used to discuss related work and make comparisons) - Rempe, Davis, et al. "Humor: 3d human motion model for robust pose estimation." Proceedings of the IEEE/CVF international conference on computer vision. 2021. - Kolotouros, Nikos, et al. "Probabilistic modeling for human mesh recovery." Proceedings of the IEEE/CVF international conference on computer vision. 2021. - Aliakbarian, Sadegh, et al. "Flag: Flow-based 3d avatar generation from sparse observations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. - Tevet, Guy, et al. "Human motion diffusion model." arXiv preprint arXiv:2209.14916 (2022). - Mao, Wei, Miaomiao Liu, and Mathieu Salzmann. "Weakly-supervised action transition learning for stochastic human motion prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. - Aliakbarian, Sadegh, et al. "Contextually plausible and diverse 3d human motion prediction." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. - Zhang, Yan, Michael J. Black, and Siyu Tang. "We are more than our joints: Predicting how 3d bodies move." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. Requested Changes: Weaknesses in the above section should be address through a major and thorough revision. Broader Impact Concerns: No major concern ================================================== Metareview: Recommendation: Reject Comment: The proposed paper presents a hierarchical graph convolutional VAE (HG-VAE) for modeling human motion. The HG-VAE is intended to be able to detect out-of-distribution inputs and handle missing data. Experiments and ablation studies are conducted on the H3.6 and AMASS datasets. The reviewers agree that the paper in its current form is missing experimental comparisons to existing work and motivational reasoning for design choices in the context of the motion prediction application. As such, the current decision from the reviewers is to reject the paper. With a stronger experiments section and further reasoning for the architecture design, the reviewers believe that the paper would be of interest to the community. ==================================================
# Inversion By Direct Iteration: An Alternative To Denoising Diffusion For Image Restoration Mauricio Delbracio *mdelbra@google.com* Google Research Peyman Milanfar milanfar@google.com Google Research Reviewed on OpenReview: *https: // openreview. net/ forum? id= VmyFF5lL3F* ## Abstract Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration that avoids the so-called "regression to the mean" effect and produces more realistic and detailed images than existing regression-based methods. It does this by gradually improving image quality in small steps, similar to generative denoising diffusion models. Image restoration is an ill-posed problem where multiple high-quality images are plausible reconstructions of a given low-quality input. Therefore, the outcome of a single step regression model is typically an aggregate of all possible explanations, therefore lacking details and realism. The main advantage of InDI is that it does not try to predict the clean target image in a single step but instead gradually improves the image in small steps, resulting in better perceptual quality. While generative denoising diffusion models also work in small steps, our formulation is distinct in that it does not require knowledge of any analytic form of the degradation process. Instead, we directly learn an iterative restoration process from low-quality and high-quality paired examples. InDI can be applied to virtually any image degradation, given paired training data. In conditional denoising diffusion image restoration the denoising network generates the restored image by repeatedly denoising an initial image of pure noise, conditioned on the degraded input. Contrary to conditional denoising formulations, InDI directly proceeds by iteratively restoring the input low-quality image, producing high-quality results on a variety of image restoration tasks, including motion and out-of-focus deblurring, super-resolution, compression artifact removal, and denoising. ## 1 Introduction Recovering a high-quality image from a low-quality observation is a fundamental problem in computer vision and computational imaging. Single image restoration is a highly ill-posed inverse problem where multiple plausible sharp and clean images could lead to the very same degraded observation. The typical supervised approach is to formulate image restoration as a problem of inferring the underlying image given a low-quality version of it, by training a model with paired examples of the relevant degradation (Ongie et al., 2020). One of the most common approaches is to directly minimize a pixel reconstruction error using the L1 or L2 loss; an approach that correlates well with the popular PSNR (peak signal-to-noise-ratio) metric. However, it has been observed often in recent literature that measures such as PSNR (and in general point-distortion metrics) do not correlate well to human perception (Blau & Michaeli, 2018; Delbracio et al., 2021b; Freirich et al., 2021). Despite these shortcomings, much of the recent research work has been focused on improving deep architectures and optimizing a variety of point-loss formulations, resulting in general models that give an aggregate improved image in one step of inference. To see the issues more concretely, let's assume that we are given image pairs (x, y) ∼ p(x, y) where x represents a target high-quality image, and y represents the respective degraded observation. For instance, x may be pristine images degraded by a combination of blur/compression and noise to yield y. A typical regression approach would predict x directly from y using a trained model xˆ(y) = Fθ(y) ≈ x, by minimizing the expected pixel error in some (e.g. Lp) metric as follows: $$\operatorname*{min}_{\theta}\mathbb{E}_{\mathbf{x},\mathbf{y}}\|F_{\theta}(\mathbf{y})-\mathbf{x}\|_{p}\approx\operatorname*{min}_{\theta}\sum_{i}\|F_{\theta}(\mathbf{y}^{i})-\mathbf{x}^{i}\|_{p}.$$ In the case p = 2, the minimum mean-squared error (MMSE) optimal solution is the conditional expectation: xMMSE(y) = E[ x|y ] = Rxp(x | y)dx. This evidently results in an image that is the (weighted) average of all plausible reconstructions1. This resulting image will not have a natural appearance as the details have been wiped out due to the effect aggregation (i.e. "regression to the mean") effect. The problem is compounded for more ill-posed problems. That is, the more ill-posed the inverse problem, the larger the set of plausible reconstructions and therefore the more severe the effect of the aggregation implied by the expectation of the posterior. To mitigate this problem, recent works have introduced additional loss terms (Gatys et al., 2016; Mechrez et al., 2018b;a; Delbracio et al., 2021b; Kupyn et al., 2018) that seek a balance in the formulation so the final image has improved perceptual quality (more on this in the next section). In this work, we explicitly address this problem by avoiding single-step prediction of the clean image, and instead iterating a series of inferences, where at each step we solve an 'easier' (i.e., less ill-posed) inverse problem than the original. Specifically, we generate a sequence of intermediate restorations where at each step the goal is to reconstruct only a slightly less corrupted image. The core observation underlying this approach is this: a small-step restoration largely avoids the regression-to-the-mean effect because the set of plausible 'slightly-less-bad' images is relatively small. The core technical component enabling our approach is still a single deep model, but one that is trained to predict a better image given one with an intermediate level of degradation in the previous step, as summarized by Algorithm 1. ## 2 Background Recently, much work on imaging inverse problems has been focused on using generative formulations (Bora et al., 2017; Kawar et al., 2022). Generative adversarial formulations train restoration networks with an adversarial loss that forces the restored image to be on the distribution of high-quality signals (Kupyn et al., 2018; 2019; Asim et al., 2020). GANs are in general hard to train and also hard to control image hallucinations since the two terms play an antagonic role (Lugmayr et al., 2021). Image priors, including generative ones, can be used to solve inverse problems in an unsupervised fashion where the degradation operator is only know at inference time (Rudin & Osher, 1994; Venkatakrishnan et al., 2013; Delbracio et al., 2021a; Romano et al., 2017; Ongie et al., 2020). DDPMs have been recently adapted for unsupervised model-based image restoration (Kawar et al., 2021a; 2022; Kadkhodaie & Simoncelli, 2021; Jalal et al., 2021a; Laumont et al., 2022; Chung et al., 2022; Kawar et al., 2022). How this approach compares to Denoising Diffusion. Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021a) and Score-based models (Song & Ermon, 2019; 2020; Song et al., 2021b) have emerged as two powerful classes of generative models that produce high-quality samples by inverting a *known* diffusion (degradation) process. The standard Gaussian denoising formulation has been extended to more general corruption process (Bansal et al., 2022; Hoogeboom & Salimans, 2022; Daras et al., 2023; Deasy et al., 2021; Hoogeboom et al., 2022a;b; Nachmani et al., 2021; Johnson et al., 2021; Lee et al., 2022; Ye et al., 2022). The main common idea is to analytically define a known degradation process that is reversed to generate new samples starting from a fully degraded image (e.g., pure noise). The inference procedure makes use of the known analytical degradation at every step. 1A similar statement is true for other p 6= 2 in which case the mean is replaced by another aggregation operator (e.g. median for L1) By contrast, in our formulation we do not require knowledge of any analytic form of the degradation process, we directly learn an iterative restoration process from low-quality/high-quality paired examples. This implies that we can apply our iterative procedure to virtually any degradation as long as we are given image pairs. Additionally, our formulation is motivated only from the idea of splitting the original inverse problem into multiple smaller ones. We do not require any knowledge of the underlying probability distributions, or the conditional distributions, at any step. Our inference procedure is solely based on the idea of restoring the signal a little bit at each step. This approach, with minimal assumptions, gives a unified formulation for any supervised image restoration problem under the same framework. The natural extension of DDPMs to image restoration tasks is through the use of a *Conditional* DDPM models (cDDPM) (Li et al., 2021; Saharia et al., 2021; 2022; Whang et al., 2022). The goal of a cDDPM is to generate plausible reconstructions given the low-quality input (e.g., by generating samples from the posterior distribution). The idea is to train a supervised *denoising* diffusion model using paired examples that is conditioned on the low-quality input. The denoising network learns to generate a valid restored image (sample) by repeatedly denoising an initial image of pure noise. Our formulation has some similarity to conditional diffusion models, but contrary to the denoising formulation, we directly proceed by iteratively restoring the input image. Overall, our method is straightforward to implement and train, and produces high-quality results. We evaluate the formulation on four different restoration tasks using different perceptual quality metrics. As shown, our method produces samples of higher quality than the state-of-the-art regression formulations while maintaining high-fidelity with respect to the original sample. ## 3 Related Work The goal of image restoration is to generate a high-quality image from its degraded low-quality measurement (e.g., low-resolution, compressed, noisy, blurry). Since the seminal super-resolution work of Dong et al. (2015) many recent image restoration methods adopt an end-to-end supervised formulation where a deep neural network is trained to directly produce a point estimate (Zhao et al., 2016; Lim et al., 2017; Tao et al., 2018; Chen et al., 2018) These methods rely on low-quality high-quality image pairs to train a regression model. Most of the work has been focused on developing better and more powerful network architectures (Zamir et al., 2022; Chen et al., 2022; Tu et al., 2022; Zamir et al., 2021) so we can achieve better pixel-level reconstruction. While this formulation leads to state-of-the-art PSNR, the image generated is at best an average of all plausible solutions (regression to the mean). In the limit case where the low-quality image is completely obfuscated the best prediction in terms of PSNR is the average of the distribution. Generative adversarial networks (Goodfellow et al., 2014; Arjovsky et al., 2017), and adversarial formulations (Ledig et al., 2017; Isola et al., 2017; Kupyn et al., 2018; 2019) have been introduced to push the generated image towards the manifold of natural images. GANs suffer from unstable training (Arora et al., 2017; Salimans et al., 2016; Arjovsky et al., 2017), while being prone to introduce significant image hallucinations. This is a direct consequence of a non-reference formulation that directly tries to minimize the distance of the range of the generator to the manifold of natural images (Cohen et al., 2018). Blau & Michaeli (2018) proved that there is a trade-off between image perceptual quality and distortion. It is not possible to minimize both distortion and perceptual quality simultaneously. In fact, minimizing the average point distortion (e.g., PSNR) can be only done in detriment of the perceptual quality (Blau & Michaeli, 2018; Freirich et al., 2021). A powerful way of avoiding the regression to the mean is to formulate the problem as one of sampling from the posterior distribution (Kawar et al., 2021b;a; 2022; Ohayon et al., 2021; Kadkhodaie & Simoncelli, 2021; Whang et al., 2022). An additional benefit of this formulation is to be able to generate multiple different plausible solutions that can be used for uncertainty quantification (Whang et al., 2021) or improving fairness (Jalal et al., 2021b). Variational auto-encoders (Prakash et al., 2020), Normalizing flows (Lugmayr et al., 2020; 2021), and Diffusion probabilistic models (DPMs) (Saharia et al., 2021; Li et al., 2021; Whang et al., 2022) have been successfully applied to different image restoration tasks, where a diverse set of candidates can be generated from the learned posterior (Prakash et al., 2020). Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021a), Score-based models (Song & Ermon, 2019; 2020; Song et al., 2021b) and their recent generalizations (Bansal et al., 2022; Hoogeboom & Salimans, 2022; Daras et al., 2023; Deasy et al., 2021; Hoogeboom et al., 2022a;b; Nachmani et al., 2021; Johnson et al., 2021; Lee et al., 2022; Ye et al., 2022) generate highquality samples by inverting a known degradation process. The main strategy is to analytically define a known degradation process that is reversed to generate new samples starting from a fully degraded image (e.g., pure noise). Bansal et al. (2022) introduced Cold Diffusion a generative framework to generate images by reverting arbitrary (known) degradations. They show promising results even with non-stochastic degradations such as blur, masking or pixelization. The strategy is to define intermediate analytical degradations (diffusion) and then revert them step by step. Daras et al. (2023) presented Soft Diffusion a generalization of diffusion models to linear degradations. The authors argue that noise is a fundamental component that is needed to provably learn the score. In this work, we adopt a similar strategy and propose to decompose the image restoration problem into a sequence of intermediate steps each of them being a much easier problem to solve (i.e., less ill-posed). This path of intermediate reconstructions takes us from a low-quality input to a high-quality reconstruction through a series of slightly less corrupted signals. Different than in traditional generative diffusion formulations, the degradation is only *implicitly* given through a series of pair images (low-quality and high-quality). In our formulation, we define the intermediate steps as a convex combination of the target/input signals. This induces a simple linear propagation from the high-quality sample to the low-quality one. An alternative formulation of the supervised image restoration problem is to use a conditional denoising diffusion models to generate samples from the posterior distribution (Li et al., 2021; Saharia et al., 2021; 2022; Whang et al., 2022). The overall idea is to train a denoising diffusion model that is conditioned on the low-quality input. The denoising network learns to generate a valid restored image by repeatedly denoising an initial image of pure noise. Our formulation has some similarity to conditional diffusion models, but contrary to the denoising formulation, we directly proceed by iteratively restoring the input image. The very recent work by Luo et al. (2023a) and Welker et al. (2022), and the concurrent work by Luo et al. (2023b); Song et al. (2023) introduce related image restoration techniques based on ODE/SDE diffusion formulations. A major difference with our work is that InDI is completely motivated and formulated from elementary principles by splitting the restoration tasks into multiple smaller ones. There is also significant amount of recent related work that analyzes the connection of diffusion image generation with Bridge and Flow matching, Optimal Transport and Schrodinger bridges (Albergo et al., 2023; Shi et al., 2023; Liu et al., 2023). Finally, the concurrent work of Heitz et al. (2023) adopts a linear diffusion scheme for image generation similar to, but less general than, the one in InDI. ## 4 Indi: Our Proposed Formulation Given (x, y) ∼ p(x, y), we define a continuous forward degradation process by $$t,\quad{\mathrm{with~}}t\in[0,1].$$ xt = (1 − t)x + ty, with t ∈ [0, 1]. (1) The idea of this forward process is that it starts from a clean sharp image at time t = 0, and then degrades it to the blurry/noisy observation at time t = 1. Here, xt indexed by t, represents an intermediate degraded image between the low-quality input y (i.e., t = 1) and the high-quality sharp target x (i.e., t = 0). Following the common notation in diffusion models, we will refer to the index t as the time-step. Our recovery method starts with the input degraded image (time t = 1), and then at a given time-step t generates the best possible reconstruction at time t − δ. This can be done, for example, by the short-time conditional mean xˆt−δ = E[ xt−δ | xˆt ] to the estimate. As we will show, by repeating this process we can thus invert the full degradation little by little. The following proposition provides the cornerstone of our approach. f $$(1)$$ ![4_image_0.png](4_image_0.png) p(x) x1 xt x0 regression Figure 1: 2D Toy Example. Estimation of conditional mean and iterated estimation for points from a multimodal (4 modes) distribution under: (a) Denoising strong Gaussian noise (H = I); and (b) missing information recovery, i.e., H = [1, 0; 0, 0], under moderate noise. Blue points represent observed samples, while red ones are the *regression* prediction. The black (hollow) circles represent the final point in our iterative procedure, always reaching a valid point in the data manifold (orange points). The small green circles indicate the iterative restoration path. $${\bf t i o n\ 4.1.\ L e t\ x_{s},x_{t}}$$ Proposition 4.1. Let xs, xt be given from equation 1, where s ≤ t*. Then,* $$\mathbb{E}[\,\mathbf{x}_{s}\,|\,\mathbf{x}_{t}\,]=\left(1-{\frac{s}{t}}\right)\mathbb{E}[\,\mathbf{x}_{0}\,|\,\mathbf{x}_{t}\,]+{\frac{s}{t}}\mathbf{x}_{t}.$$ The proof is a direct consequence of change of variables and is given in Appendix A. According to this proposition, the posterior mean (e.g. MMSE estimate) at time *s < t* can be deduced from the estimate at time t by first estimating the clean image (x = x0), and then doing a convex combination with the estimate xˆs at time s. We can then apply the following scheme to move from t to s = t − δ, $$\hat{\mathbf{x}}_{t-\delta}=\mathbb{E}[\,\mathbf{x}_{t-\delta}\,|\,\hat{\mathbf{x}}_{t}\,]=\frac{\delta}{t}\mathbb{E}[\,\mathbf{x}_{0}\,|\,\hat{\mathbf{x}}_{t}\,]+\left(1-\frac{\delta}{t}\right)\hat{\mathbf{x}}_{t}.\tag{2}$$ The process starts from xˆ1 = y, and the step δ < 1 controls the "speed" of the reverse process (e.g., at constant "speed", δ = 1 N , where N controls the total number of steps). Remark 4.2. For the iteration procedure in equation 2 to be well defined we require that E[ x0 |xˆt ] is well defined. Thus, we require pxt (xˆt) > 0. An intuitive motivation for the requirement in Remark 4.2 is that we need to move through a path of plausible samples xˆt at every step t. One simple way to guarantee this is by adding a small amount of noise to y. Then p(y), and therefore p(xt) will be non-zero everywhere. More discussion about this is presented at the end of this section, but first, we present a toy example to motivate our approach. A Toy Example: Let us assume we observe noisy samples y = Hx + n, where n ∼ N (0, σ2Id), drawn from a discrete multimodal distribution p(x) = Pd i=1 wiδx−ci , where ci ∈ R N and wi ≥ 0 and Pd i=1 wi = 1. Let xt be the intermediate degraded samples according to equation 1. In this simple example, there is a closed form expression for all posterior distributions and conditional means; namely, $$p(\mathbf{x}_{t}|\mathbf{x})=G\left(\frac{\mathbf{x}_{t}-\mathbf{H}_{t}\mathbf{x}}{\sigma_{t}}\right)\quad\text{and}\quad p(\mathbf{x}_{t})=\sum_{i=1}^{d}w_{i}G\left(\frac{\mathbf{x}_{t}-\mathbf{H}_{t}\mathbf{c}_{i}}{\sigma_{t}}\right),\tag{3}$$ where Ht = (1 − t)I + tH and G(x) is a Gaussian kernel with identity covariance and σt = tσ. Then, $$\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x}|\mathbf{x}_{t})}[\mathbf{x}]=\int{\frac{p(\mathbf{x}_{t}|\mathbf{x})p(\mathbf{x})}{p(\mathbf{x}_{t})}}\mathbf{x}d\mathbf{x}={\frac{\sum_{i=1}^{d}\mathbf{c}_{i}w_{i}G\left({\frac{\mathbf{x}_{t}-\mathbf{H}_{t}\mathbf{c}_{i}}{\sigma_{t}}}\right)}{\sum_{i=1}^{d}w_{i}G\left({\frac{\mathbf{x}_{t}-\mathbf{H}_{t}\mathbf{c}_{i}}{\sigma_{t}}}\right)}}$$ The iteration from equation 2 becomes: $${\hat{\mathbf{x}}}_{t-\delta}={\frac{\delta}{t}}{\frac{\sum_{i=1}^{d}\mathbf{c}_{i}w_{i}G\left({\frac{{\hat{\mathbf{x}}}_{t}-H_{t}\mathbf{c}_{i}}{\sigma_{t}}}\right)}{\sum_{i=1}^{d}w_{i}G\left({\frac{{\hat{\mathbf{x}}}_{t}-H_{t}\mathbf{c}_{i}}{\sigma_{t}}}\right)}}+\left(1-{\frac{\delta}{t}}\right){\hat{\mathbf{x}}}_{t}.$$ Figure 1 shows the results of applying the iterative regression scheme given by the above equation in two different examples. The iterative regression converges to one of the four possible modes (shown in orange), while the regression to the mean is always a weighted average of all possible modes (i.e., a blurry reconstruction, shown in red). Training and Inference: InDI ideal iterative scheme given by equation 2 requires to compute an estimate of the clean image at every step t (i.e., E[ x0 |xt ]). To that extent, we train a family of regressors Fθ(·;t), each of them specialized in reconstructing x0 from xt at a given t. That is, $$\min_{\theta}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\mathbb{E}_{t\sim p(t)}\|F_{\theta}(\mathbf{x}_{t},t)-\mathbf{x}\|_{p},\tag{1}$$ where p(t) is a predefined distribution for t (e.g., uniform). The model Fθ allows us to do incremental reconstruction where from time step t we predict the slightly less corrupted signal at time t − δ as given in equation 2. Thus, the iterative scheme becomes: $\downarrow$ . $$\hat{\mathbf{x}}_{t-\delta}=\frac{\delta}{t}F_{\theta}(\hat{\mathbf{x}}_{t},t)+\left(1-\frac{\delta}{t}\right)\hat{\mathbf{x}}_{t},\tag{1}$$ $$\mathbf{\Sigma}$$ where 0 < δ ≤ 1. Although δ could be a function of time, in practice we use a constant time step, δ = 1 N , where N is the number of steps. InDI as a Residual Flow ODE: In the limit, as δ → 0, equation 5 leads to an ordinary differential equation (ODE); namely $${\frac{d\mathbf{x}_{t}}{d t}}=\operatorname*{lim}_{\delta\to0}{\frac{\mathbf{x}_{t}-\mathbf{x}_{t-\delta}}{\delta}}={\frac{\mathbf{x}_{t}-F_{\theta}(\mathbf{x}_{t},t)}{t}},$$ $$\mathbf{\Sigma}$$ where in the ideal case Fθ(xt, t) = E[ x0 | xt ]. The ODE can be interpreted as a "residual flow" because the right-hand side is the (normalized) residual of the inversion process at time t. We are interested in the solution of this equation at t = 0, starting from the initial condition x1 = y at t = 1. The residual flow formulation can be used to develop other numerical procedures using standard ODE solvers. Exploring this is left to future work. Another use of the continuous formulation is to understand the behavior of the proposed iterative procedure in terms of concrete examples. In Appendix B we show how the residual flow can be used to analyze the specific case where the prior is Gaussian and the restoration task is denoising. Connection to Denoising Score-Matching and Probabilistic ODE: An interesting connection emerges when the degradation is Gaussian noise (standard deviation σ 2). In this case, InDI's ODE in equation 6 boils down to the score-matching probabilistic ODE of Song et al. (2021b). More specifically, let xt = x + tn, so the noise level in xt is σ 2 t = t 2σ 2. The probabilistic flow ODE (Eq(13) in Song et al. (2021b)) is given by $$ \frac{d\mathbf{x}_t}{dt}=-\frac{1}{2}\frac{d(\sigma_t^2)}{dt}\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t)=-t\sigma^2\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t).$$ According to the denoising score-matching (DSM) approximation (Vincent (2011)), . $$-\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\approx{\frac{\mathbf{x}_{t}-F_{\theta}(\mathbf{x}_{t},t)}{\sigma_{t}^{2}}}={\frac{\mathbf{x}_{t}-F_{\theta}(\mathbf{x}_{t},t)}{t^{2}\sigma^{2}}},$$ so we end up recovering precisely the same ODE as in InDI. Furthermore, when Fθ(xt, t) = E[ x0 |xt ] (i.e., MMSE estimator) the DSM approximation is exact and the relation is given by Tweedie's formula (Robbins, 1956; Efron, 2011). Algorithm 1 Iterative Image Restoration (Inference) Require: y, Fθ(·, t)*, δ,* t n ∼ N (0, I) xˆ1 = y + 1n for t = 1 to 0 **with step** −δ do ζ ∼ N (0, I) xˆt−δ ← δ t Fθ(xˆt, t) + 1 − δ t xˆt + (t − δ) q 2 t−δ − 2 t ζ . Update rule from equation 10. end for return xˆ0 Stochastic Perturbation: To make sure we have the regularity requirements for the iterative procedure from equation 2 to be well defined (Remark 4.2), we add a small amount of white noise to the low-quality input. As shown in Section 5 this leads to a significant improvement in image quality in certain tasks (in particular those that are restorations from deterministic degradations). Our model with this noise perturbation becomes: $$\mathbf{x}_{t}=(1-t)\mathbf{x}+t\mathbf{y}^{\prime}=(1-t)\mathbf{x}+t\mathbf{y}+t\epsilon\mathbf{n}\quad{\mathrm{with~}}t\in[0,1],$$ 0 = (1 − t)x + ty + tn with t ∈ [0, 1], (7) where y 0 = y +n, is a small constant (e.g., = 0.01, where image values are in [−1, 1]), and n ∼ N (0*, Id*). A slightly more general formulation incorporates the perturbation as a general Brownian motion, where we can explicitly control the level of noise at each step. That is, $\downarrow$ . $$\mathbf{x}_{t}=(1-t)\mathbf{x}+t\mathbf{y}+{\sqrt{t}}\epsilon_{t}\mathbf{\eta}_{t}\quad{\mathrm{with~}}t\in[0,1],$$ $\downarrow$ . $$\cdot\,t\mathbf{y}+t\epsilon_{t}\mathbf{n};t)-\left.\mathbf{x}\right\rVert_{p}.$$ $$\mathbf{\Sigma}$$ √ttηt with t ∈ [0, 1], (8) where t is a non-negative function, and ηt is the standard Brownian motion having zero mean and covariance tI at index t. In this more general setting, the base training objective becomes $\underset{\theta}{\mathrm{min}}\;\mathbb{E}_{\mathrm{a}}$ θ Ex,y∼p(x,y)Et∼p(t)En∼N(0,Id) kFθ ((1 − t)x + ty + ttn;t) − xkp . (9) And as a result, the general inference procedure in equation 2 becomes: $${\hat{\mathbf{x}}}_{t-\delta}={\frac{\delta}{t}}F_{\theta}({\hat{\mathbf{x}}}_{t},t)+\left(1-{\frac{\delta}{t}}\right){\hat{\mathbf{x}}}_{t}+(t-\delta){\sqrt{\epsilon_{t-\delta}^{2}-\epsilon_{t}^{2}}}\,\mathbf{\zeta},$$ $$(10)$$ t ζ, (10) where the reconstruction process starts from t = 1, xˆ1 = y + n and n ∼ N (0*, Id*). At each step, a new ζ ∼ N (0*, Id*) is sampled and noise is added to the current state. The added Gaussian noise is such that the noise at time t has variance t 2t 2 as required by equation 8. To be well defined t needs to be a non-negative non-increasing function of t. In the limit case where t = we are in the simplified case given by equation 7, while if t = √t the noise perturbation is a pure Brownian motion. Our full iterative restoration inference scheme is given in Algorithm 1. ## 5 Experiments We train and evaluate our framework on four widely popular image restoration tasks: motion deblurring, defocus deblurring, compression artifacts removal and single image super-resolution. Our formulation is generative-based, and we show that can be used for image generation even if this is not the main focus of the present work. To evaluate the quality of the proposed method we compute several distortion based and perceptual metrics: PSNR, LPIPS (Zhang et al., 2018), FID (Fréchet Inception Distance) (Heusel et al., 2017), and KID (Kernel Inception Distance) (Bińkowski et al., 2018). Perception–Distortion tradeoff (Blau & Michaeli, 2018): To illustrate the potential of the method, we present results when using different number of steps for the reconstruction. This has a direct impact on the perception–distortion tradeoff. In general, a single step reconstruction with our model, will lead to an estimate that minimizes the average point distortion (e.g., PSNR) but this can be only done to the detriment of the perceptual quality. Model Architecture and Training: We adopt a U-Net-like architecture similar to the ones in diffusion strategies (Saharia et al., 2021; Whang et al., 2022). Following Whang et al. (2022) we removed attention layers and group normalization to have a fully-convolutional architecture. The size of the model varies for each evaluated task (in general we chose a model size proportional to the size of the dataset to avoid significant overfitting). The model is trained on image crops using ADAM optimizer. Learning rates and other hyper-parameters are given in Appendix C. For each experiment, we train a single model Fθ(·;t) that is conditioned on the parameter t. The model is trained using the loss function of equation 9 with p = 1. We found that the distribution of t, p(t) plays an important role. In Section 6.3 we present an empirical analysis of its impact. ## 5.1 Motion Deblurring Motion deblurring is a very challenging restoration task. Motion is intrinsically random in the sense that a priori we don't have a known degradation model. The current best end-to-end deep learning solution is to train regression models using paired data sharp, blurry frames. One of the most adopted training datasets is the GoPro motion deblurring dataset (Nah et al., 2017) containing 3214 pairs of clean and blurry 1280×720 images (1111 are reserved for evaluation). The blurry frames are generated by recording high-frame rate video clips and then averaging consecutive frames to simulate blurs caused due to longer exposure. We follow the standard setup (Nah et al., 2017; Kupyn et al., 2019; Chen et al., 2021a; Cho et al., 2021; Suin et al., 2020; Zhang et al., 2019) and perform training data augmentation with random horizontal/vertical flips and 90/180/270 rotations. We did not introduce additional noise to the blurry inputs ( = 0 in equation 7). ![7_image_0.png](7_image_0.png) https://luma-scope.g oogleplex.com/4102 40003?page=689&x =-1278&y=291&scal e=2.5 https://luma-scope.g oogleplex.com/4102 40003?page=997&x =-163&y=53&scale= 2 Input UNet MPRNet UFormer DeblurGANv2 DvSR Reference Figure 2: Examples of deblurred images from GoPro dataset. Our iterative reconstruction leads achieves better reconstruction of detailed textures than regression based models (Restormer, Maxim) and similar quality than conditional DPMs (DvSR). More results are provided in Appendix. Figure 2 shows a visual comparison of our iterative image restoration and current state-of-the-art deblurring models. The iterative scheme produces images with much more details than regression based solutions (Restormer (Zamir et al., 2022), MAXIM (Tu et al., 2022)). Our results are similar to the ones generated by Table 1: Image motion deblurring on the GoPro (Nah et al., 2017) dataset. Best values and second-best values for each metric are color-coded. KID values are scaled by a factor of 1000 for readability. Perceptual Distortion LPIPS↓ NIQE↓ FID↓ KID↓ PSNR↑ SSIM↑ Ground Truth 0.0 3.21 0.0 0.0 ∞ 1.000 HINet (Chen et al., 2021a) 0.088 4.01 17.91 8.15 32.77 0.960 MPRNet (Zamir et al., 2021) 0.089 4.09 20.18 9.10 32.66 0.959 MIMO-UNet+ (Cho et al., 2021) 0.091 4.03 18.05 8.17 32.45 0.957 SAPHNet (Suin et al., 2020) 0.101 3.99 19.06 8.48 31.89 0.953 DeblurGANv2 (Kupyn et al., 2019) 0.117 3.68 13.40 4.41 29.08 0.918 DvSR (Whang et al., 2022) 0.059 3.39 4.04 0.98 31.66 0.948 DvSR-SA (Whang et al., 2022) 0.078 4.07 17.46 8.03 33.23 0.963 Restormer (Zamir et al., 2022) 0.084 4.11 19.33 8.78 32.92 0.961 MAXIM (Tu et al., 2022) 0.087 3.94 22.76 10.06 32.86 0.962 NAFNet (Chen et al., 2022) 0.078 4.07 17.87 8.27 33.71 0.967 Ours (10 steps) 0.058 3.32 3.55 0.56 31.49 0.946 UNet MPRNet UFormer DeblurGANv2 DvSR Reference current conditional diffusion models (DvSR (Whang et al., 2022)). Quantitative results on the GoPro dataset are presented in Table 1. The proposed iterative reconstruction procedure achieves a new state-of-the-art performance across perceptual metrics while maintaining competitive PSNR to existing methods. ![8_image_0.png](8_image_0.png) Figure 3: Number of steps. The total number of steps in the iterative regression has a direct impact on the quality. The number of steps seems to control the Perception-distortion tradeoff. One step leads to the best possible MSE reconstruction (minimum distortion) but large perceptual discrepancy. Number of steps. Figure 3 shows the impact of the number of inference steps on the Perception–Distortion trade-off (Blau & Michaeli, 2018). While doing a reconstruction on a single step (e.g., direct regression) produces the best PSNR, the perceptual metrics are significantly improved when the number of steps is larger than one. Both metrics can't be optimized simultaneously (Blau & Michaeli, 2018). ## 5.2 Single-Image Super-Resolution We evaluated the iterative restoration methodology on single-image 4× super-resolution on the div2k dataset (Agustsson & Timofte, 2017). This dataset contains 1000 2K-resolution images (800 for training, 100 images for validation, 100 testing). We compare to other state-of-the art models that span from regression models having powerful architectures (Wang et al., 2018; Chen et al., 2021b; Liang et al., 2022) and/or generative formulations: GAN based, i.e., LDL (Liang et al., 2022), ESRGAN (Wang et al., 2018), BSRGAN (Zhang et al., 2021); and also based on Normalizing Flows, SRFLOW (Lugmayr et al., 2020). https://luma-scope.googleplex.com/410240003? visibility=111001000000&set=3&page=33&x=-1 ![9_image_0.png](9_image_0.png) Input UNet MPRNet UFormer DeblurGANv2 DvSR Reference Figure 4: Examples image 4× upscaling. Our iterative reconstruction leads achieves better reconstruction of detailed textures than RRDB regression model (Wang et al., 2018), less high-frequency artifacts than SRFlow (Lugmayr et al., 2020) generative normalizing flow, and comparable visual quality than LDL (Liang et al., 2022) a state-of-the-art customized generative adversarial model. More results are given in Appendix. Figure 5(b) summarizes the quantitative results on 4× SR div2k validation dataset. Figure 4 shows a selection of results. Our proposed framework leads to upscaled images with more defined structure than regression based formulations producing larger PSNR, e.g., RRDB (Wang et al., 2018). The recently introduced adversarial formulation LDL (Liang et al., 2022) produces slightly better fine grain details. This could indicate that in the situation where there is limited training data, careful adversarial formulation may be more data efficient. The importance of adding noise in deterministic super-resolution. In our formulation of superresolution, the degradation is a deterministic linear (blurring plus subsampling) operator. Figure 5(a) shows the importance of adding a small amount of noise to the input image. Directly applying the original iterative procedure (without adding noise to the input) leads to a blurry reconstruction (high PSNR but low FID score, = 0.0 in Figure 5 (a)). Adding a small amount of noise ( > 0 in Figure 5(a)) leads to significant better results in terms of perceptual quality (e.g., FID score). ## 5.3 Defocus Deblurring Defocus deblurring is the task of reducing the blur due to limited depth-of-field or misfocus. For such purposes we used the Canon dual-pixel (DP) defocus dataset (DDPD) provided by Abuolaim & Brown ![10_image_0.png](10_image_0.png) | | Type PSNR↑ LPIPS↓ FID↓ | KID↓ | | | | |--------------------------------------|--------------------------|--------|-------|--------------|-------| | LDL(SwinIR) (Liang et al., 2022) GAN | 26.64 | 0.102 | 10.80 | 0.400 | | | LDL(RRDB) (Liang et al., 2022) GAN | 26.43 | 0.109 | 11.36 | 0.582 | | | ESRGAN (Wang et al., 2018) | GAN | 25.54 | 0.125 | 12.77 | 0.732 | | SRFLOW (Lugmayr et al., 2020) | NF | 26.10 | 0.127 | 20.21 | 4.444 | | BSRGAN (Zhang et al., 2021) | GAN | 24.35 | 0.241 | 31.43 | 5.229 | | RRDB (Wang et al., 2018) | REG | 28.37 | 0.264 | 50.98 19.000 | | | LIIF (Chen et al., 2021b) | REG | 28.15 | 0.269 | 53.16 19.958 | | | BSRNET (Zhang et al., 2021) | REG | 25.90 | 0.348 | 84.11 36.901 | | | Ours (100 steps, = 0.015) | 26.45 | 0.136 | 15.39 | 1.761 | | | | (b) | | | | | Figure 5: 4× Super-resolution on div2k dataset (Agustsson & Timofte, 2017). Best values and second-best values for each metric are color-coded Input UNet MPRNet UFormer DeblurGANv2 DvSR Reference (2020), and train a defocus deblurring model only using single image input (i.e., we don't use the dual-pixel images given in the dataset). The DDPD dataset contains 1000 pairs of sharp and blurry 6720×4480 images, of which 30% are reserved for validation and testing. The blurry/sharp frames are generated by capturing two consecutive snapshots by changing the camera parameters (lens aperture). high-frame rate video clips and then averaging consecutive frames to simulate blurs caused due to longer exposure. https://luma-scope.googleplex.com/410240003? set=2&page=16&x=988&y=-2896&scale=6 https://luma-scope.googleplex.com/410240003? set=2&page=22&x=141&y=-1056&scale=2 https://luma-scope.googleplex.com/410240003? set=2&page=40&x=-4493&y=1809&scale=8 https://luma-scope.googleplex.com/410240003? set=2&page=68&x=2097&y=-880&scale=3 ![10_image_1.png](10_image_1.png) images with more texture compared to the direct regression (1 step). More results are provided in Appendix. Figure 6 shows a visual comparison of our iterative image restoration when a different number of inference steps is used. Increasing the number of steps has a direct impact on the quality of the result. Quantitative results on the DDPD dataset are summarized in Table 3 in Appendix. As in the other experiments the best PSNR is obtained with a single step (direct regression), while the best perceptual metrics are obtained when the restoration is done in multiple steps. We did not introduce additional noise to the blurry inputs ( = 0 in equation 7). ## 5.4 Compression Artifact Removal JPEG compression introduces blocking artifacts and lack of high-frequency details. We evaluated the proposed method on the task of removing strong JPEG compression artifacts (quality factor 15). To generate the training data we use the 1000 div2k high-quality images (Agustsson & Timofte, 2017). We evaluated the model on div2k validation set. https://luma-scope.googleplex.com/410240003? visibility=111001000000&set=3&page=33&x=-1 ![11_image_0.png](11_image_0.png) Figure 7: JPEG Compression artifacts removal (quality factor 15). As more inference steps are used more details are generated. Figure 7 show some visual results of restored images with the model applying a different number of steps. As more inference steps are used the restored images have more details. More results on JPEG compression removal are discussed in the next section. ## 6 Discussion 6.1 A Generative Framework A natural question to ask is whether the proposed approach is also generative in the spirit of diffusion formulations. Namely, if we take our formulation to the limit where the low-quality image is fully degraded, then could we potentially generate new samples from scratch? To test the idea we trained a restoration model that starts from pure Gaussian noise paired to a 64×64 celebA image and then proceed as described above. Figure 8 shows some generated samples with this formulation. The generated samples have a FID=9.19, which is not state-of-the-art2 but illustrate the point. In this specific case, our proposed methodology leads to a similar denoising training loss as the one in DDPM (Ho et al., 2020). Despite this similarity, the two methods come from different motivations/formulations and therefore have different inference strategies. We didn't fine-tune architecture or hyper-parameters to boost Input UNet MPRNet UFormer DeblurGANv2 DvSR Reference 2It is competitive with other methods from a couple years ago ![12_image_0.png](12_image_0.png) Figure 8: Examples of CelebA 64×64 generated samples (FID=9.19, 150 steps). Our iterative reconstruction can be potentially used to generated samples from a fully-degraded image (noise). the performance since our goal was to present the idea and show that the formulation, at its core, can be generative as well. ## 6.2 Comparison Of Inference Algorithms In what follows we discuss different alternatives for recovering the clean sample with the trained models. Naive Procedure and Relevance to Cold Diffusion: Given equation 1, one may be tempted to directly replace the clean image by the current estimate. This would lead to xˆt = (1 − t)Fθ(xˆs, t) + ty, and the inference iterative rule would become $${\hat{\mathbf{x}}}_{t-\delta}=(1-t+\delta)F_{\theta}({\hat{\mathbf{x}}}_{t},t)+(t-\delta)\mathbf{y}.$$ $$(11)$$ $\left(12\right)^{2}$ xˆt−δ = (1 − t + δ)Fθ(xˆt, t) + (t − δ)y. (11) Cold Diffusion (Bansal et al., 2022) proposes to generate images by inverting an arbitrary *known* degradation D(x, s), where s controls the strength. Our formulation is more general in the sense that we don't require an explicit knowledge of D. To apply Cold Diffusion sampling in our context, we define D(x, s) = (1−s)x+sy (given by equation 1). This leads to Cold Diffusion's naive sampling (Algorithm 1 in Bansal et al. (2022)), $${\hat{\mathbf{x}}}_{t-\delta}=D(F_{\theta}({\hat{\mathbf{x}}}_{t},t),t-\delta)=(1-t+\delta)F_{\theta}({\hat{\mathbf{x}}}_{t},t)+(t-\delta)\mathbf{y}.$$ Note that this sampling scheme is the same as the one in Eq. 11. Cold diffusion improved sampling (Algorithm 2 in Bansal et al. (2022)) is given by, $$\hat{\mathbf{x}}_{t-\delta}=\hat{\mathbf{x}}_{t}-D(F_{\theta}(\hat{\mathbf{x}}_{t},t),t)+D(F_{\theta}(\hat{\mathbf{x}}_{t},t),t-\delta)$$ $$=\hat{\mathbf{x}}_{t}-(1-t)F_{\theta}(\hat{\mathbf{x}}_{t},t)-t\mathbf{y}+(1-t+\delta)F_{\theta}(\hat{\mathbf{x}}_{t},t)+(t-\delta)\mathbf{y}$$ $$=\hat{\mathbf{x}}_{t}+\delta(F_{\theta}(\hat{\mathbf{x}}_{t},t)-\mathbf{y}).$$ In Figure 9 we compare our inference algorithm (equation 5), the naive inference algorithm (equation 11), and our adaptation of the Cold Diffusion sampler to our formulation (equation 15). In general, the naive sampler produces good results with very few steps (N=2,3) but then diverges. Our adaptation of Cold Diffusion sampler produces competitive results, while leading to slightly worse FID scores for the same distortion level than our proposed algorithm. In the limit, as the number of steps becomes very large, Cold Diffusion sampler seems to converge to a stable point, while ours after a certain large number of steps, deteriorates. Figure 9. shows the FID score of CelebA 64x64 generated images when using the three different variants of the inference algorithm (sampler). $\left(13\right)$ $\left(14\right)$ $\left(15\right)$ ![13_image_0.png](13_image_0.png) Figure 9: Impact of the Inference Algorithm for some of the different tested tasks. The naive inference algorithm (equation 11) produces a good baseline when used with very few steps (i.e. 2–5) but then diverges. The proposed updated rule (equation 5) produces better results than the one adapted from Cold Diffusion (Bansal et al., 2022) given in (equation 15). At very large number of steps, Cold Diffusion seems to be the most stable of the three tested samplers. Each of the points in each shown curve represents the results with a different number of steps (similar to what is shown in Figure 3. Points that leads to values that are out-of-the shown region are left out for improving visualization. ## 6.3 Impact Of Distribution P(T) The impact of the distribution of t used during training has a clear impact on performance. We evaluated several different options that are summarized in Figure 10 (a). Figure 10 (b) shows the results when different distributions are adopted. The best results are obtained when the model is trained with a bias towards t = 1 (more degradation). Intuitively, this could imply that the iterative procedure needs to be more certain of the direction to move at the very early steps of the procedure. Nonetheless, the best distribution can depend on a combination of model capacity and restoration task so we are not drawing general conclusions. ## 6.4 Impact Of Adding Noise On Inverting Deterministic Degradations JPEG compression is a non-linear, but deterministic, degradation. We empirically verified that adding a small amount of noise helps to improve the results as shown in Figure 11. We tested the variant of the inference algorithm that adds noise at each step (so the noise becomes a Brownian motion, e.g., t = /√t), and adding a constant noise level at the initial step (t = ). We did not observe any practical difference in the two approaches. | key | | p(t) | description | |--------------|--------------------------------------------------------|------------------------------------------------------|---------------| | 'linear_0' | t ∼ U[0, 1] | Uniform distribution | | | 'linear_a' | t ∼ (1 − a)U[0, 1] + aδ1, where a < 1 | Uniform distribution w/bias to t = 1 | | | 'bias_t1' | t = g(s), s ∼ U[0, 1] and g(s) = sin (sπ/2)) | Sine based distribution (bias to t = 1) | | | 'bias_t0' | t = g(s), s ∼ U[0, 1] and g(s) = sin ((s − 1)π/2)) + 1 | Sine based distribution (bias to t = 0) | | | 'bias_t0_t1' | t = g(s), with s ∼ U[0, 1] and g(s) = sin (sπ/2))2 | Sine based distribution (bias to t = 1) | | | | | (a) Different evaluated p(t) training distributions. | | ![14_image_0.png](14_image_0.png) Figure 10: Impact of the distribution of p(t) during training. Results for JPEG Compression removal (Q=15). The best results are obtained when the distribution of t is biased towards t = 1. ![14_image_1.png](14_image_1.png) Figure 11: Impact of adding noise to the low-quality input in JPEG compression removal. ## 6.5 Comparison To A Conditional Denoising Diffusion Model We compare InDI to a vanilla conditional DDPM (Ho et al., 2020). We trained a vanilla conditional DDPM, using the (continuous) noise level as an additional input, similarly as done in Saharia et al. (2021); Whang et al. (2022). The model architecture is the same as in InDI but the auxiliary noise image (needed in any DDPM) is concatenated with the low-quality input at each step. Figure 12 shows a comparison between InDI and the conditional DDPM. To generate the DDPM plot we merged several possible noise schedules using different number of steps that span the perception-distortion tradeoff. InDI produces comparable results using significant less number of steps than the vanilla DDPM. ## 7 Conclusions And Limitations We presented a novel formulation of image restoration that circumvents the regression-to-the mean problem. This allows us to get restored images with superior realism and perceptual quality, while still having a low ![15_image_0.png](15_image_0.png) Figure 12: Comparison of InDI to a conditional Denoising Diffusion Probabilistic Model (DDPM). The DDPM requires a noise schedule for inference. The plot showed herein was built by fusioning six different noise schedules taking the best PSNR vs FID score at a given number of steps. Our proposed InDI algorithm produces comparable results in much less number of steps. distortion error. Our method is motivated by the observation that restoration from a small distortion is a better-conditioned problem. We therefore break a restoration task into many small ones - each of them easier (and less ill-posed) than the larger problem we solve overall. This enables our iterative approach to transforming the degraded image into a high-quality image, in spirit similar to current generative diffusion models. Limitations. The present formulation is a supervised one, requiring paired training data. As such, for each type of degradation we need to train a specialized model, in contrast to unsupervised formulations such as RED (Romano et al., 2017), PnP (Venkatakrishnan et al., 2013; Kamilov et al., 2022), or DDRM (Kawar et al., 2022). Additionally, given the dependence to paired training data, its performance for out-of-distribution samples is not guaranteed. This question requires more in-depth analysis. Finally, while the proposed iterative inference algorithm produces high-quality restorations, in some tasks performance degrades after a certain number of steps. This is likely due to the accumulation of errors, and will likely require a more robust inference scheme. For future work, we would like to better characterize the limiting points of the proposed inference procedure. Other possible research avenues are developing robust formulations that can successfully handle out-ofdomain input. ## Acknowledgments The authors would like to thank our colleagues Jon Barron, Tim Salimans, Jascha Sohl-dickstein, Ben Poole, José Lezama, Sergey Ioffe, and Jason Baldridge for helpful discussions. ## References Abdullah Abuolaim and Michael S Brown. Defocus deblurring using dual-pixel data. In *Computer Vision–* ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X 16, pp. 111–126. Springer, 2020. Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops*, July 2017. Michael S Albergo, Nicholas M Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions. *arXiv preprint arXiv:2303.08797*, 2023. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. PMLR, 2017. Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In *International Conference on Machine Learning*, pp. 224–232. PMLR, 2017. Muhammad Asim, Fahad Shamshad, and Ali Ahmed. Blind image deconvolution using deep generative priors. *IEEE Transactions on Computational Imaging*, 6:1493–1506, 2020. Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold Diffusion: Inverting arbitrary image transforms without noise. arXiv preprint arXiv:2208.09392, 2022. Mikołaj Bińkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In *International Conference on Learning Representations*, 2018. Yochai Blau and Tomer Michaeli. The Perception-Distortion Tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6228–6237, 2018. Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generative models. In *International Conference on Machine Learning (ICML)*, pp. 537–546. PMLR, 2017. Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3291–3300, 2018. Liangyu Chen, Xin Lu, Jie Zhang, Xiaojie Chu, and Chengpeng Chen. Hinet: Half instance normalization network for image restoration. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition (CVPR) Workshops, pp. 182–192, June 2021a. Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, pp. 17–33. Springer, 2022. Yinbo Chen, Sifei Liu, and Xiaolong Wang. Learning continuous image representation with local implicit image function. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 8628–8638, 2021b. Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-tofine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4641–4650, October 2021. Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. Joseph Paul Cohen, Margaux Luck, and Sina Honari. Distribution matching losses can hallucinate features in medical image translation. In *International conference on medical image computing and computer-assisted* intervention, pp. 529–536. Springer, 2018. Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alex Dimakis, and Peyman Milanfar. Soft diffusion: Score matching with general corruptions. *Transactions on Machine Learning Research*, 2023. ISSN 28358856. Jacob Deasy, Nikola Simidjievski, and Pietro Liò. Heavy-tailed denoising score matching. arXiv preprint arXiv:2112.09788, 2021. Mauricio Delbracio, Ignacio Garcia-Dorado, Sungjoon Choi, Damien Kelly, and Peyman Milanfar. Polyblur: Removing mild blur by polynomial reblurring. *IEEE Transactions on Computational Imaging*, 7:837–848, 2021a. Mauricio Delbracio, Hossein Talebei, and Peyman Milanfar. Projected distribution loss for image enhancement. In *2021 IEEE International Conference on Computational Photography (ICCP)*, pp. 1–12. IEEE Computer Society, 2021b. Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. *IEEE transactions on pattern analysis and machine intelligence*, 38(2):295–307, 2015. Bradley Efron. Tweedie's formula and selection bias. *Journal of the American Statistical Association*, 106 (496):1602–1614, 2011. Dror Freirich, Tomer Michaeli, and Ron Meir. A theory of the Distortion-Perception tradeoff in Wasserstein space. *Advances in Neural Information Processing Systems*, 34:25661–25672, 2021. Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2414– 2423, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. Eric Heitz, Laurent Belcour, and Thomas Chambon. Iterative α-(de) blending: a minimalist deterministic diffusion model. *arXiv preprint arXiv:2305.03486*, 2023. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing* Systems, volume 33, pp. 6840–6851, 2020. Emiel Hoogeboom and Tim Salimans. Blurring diffusion models. *arXiv preprint arXiv:2209.05557*, 2022. Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. In *International Conference on Learning Representations*, 2022a. Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022b. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In *IEEE conference on computer vision and pattern recognition*, pp. 1125–1134, 2017. Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G Dimakis, and Jon Tamir. Robust compressed sensing mri with deep generative priors. *Advances in Neural Information Processing Systems*, 34:14938–14954, 2021a. Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alex Dimakis, and Eric Price. Fairness for image generation with uncertain sensitive attributes. In *International Conference on Machine Learning*, pp. 4721–4732. PMLR, 2021b. Daniel D Johnson, Jacob Austin, Rianne van den Berg, and Daniel Tarlow. Beyond in-place corruption: Insertion and deletion in denoising probabilistic models. *arXiv preprint arXiv:2107.07675*, 2021. Zahra Kadkhodaie and Eero P Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. In *Thirty-Fifth Conference on Neural Information Processing Systems*, 2021. Ulugbek S Kamilov, Charles A Bouman, Gregery T Buzzard, and Brendt Wohlberg. Plug-and-play methods for integrating physical and learned models in computational imaging. *arXiv preprint arXiv:2203.17061*, 2022. Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. In *Thirty-Fifth Conference on Neural Information Processing Systems*, 2021a. Bahjat Kawar, Gregory Vaksman, and Michael Elad. Stochastic image denoising by sampling from the posterior distribution. In *Proceedings of the International Conference on Computer Vision (ICCV) Workshops*, 2021b. Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In Advances in Neural Information Processing Systems, 2022. Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8183–8192, 2018. Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang. Deblurgan-v2: Deblurring (orders-ofmagnitude) faster and better. In *The IEEE International Conference on Computer Vision (ICCV)*, Oct 2019. Rémi Laumont, Valentin De Bortoli, Andrés Almansa, Julie Delon, Alain Durmus, and Marcelo Pereyra. Bayesian imaging using plug & play priors: when langevin meets tweedie. SIAM Journal on Imaging Sciences, 15(2):701–737, 2022. Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image superresolution using a generative adversarial network. In *IEEE conference on computer vision and pattern* recognition, pp. 4681–4690, 2017. Sangyun Lee, Hyungjin Chung, Jaehyeon Kim, and Jong Chul Ye. Progressive deblurring of diffusion models for coarse-to-fine image synthesis. *arXiv preprint arXiv:2207.11192*, 2022. Haoying Li, Yifan Yang, Meng Chang, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Srdiff: Single image super-resolution with diffusion probabilistic models. *arXiv preprint arXiv:2104.14951*, 2021. Jie Liang, Hui Zeng, and Lei Zhang. Details or artifacts: A locally discriminative learning approach to realistic image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5657–5666, 2022. Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 136–144, 2017. Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A Theodorou, Weili Nie, and Anima Anandkumar. I 2sb: Image-to-image schrödinger bridge. *arXiv preprint arXiv:2302.05872*, 2023. Andreas Lugmayr, Martin Danelljan, Luc Van Gool, and Radu Timofte. Srflow: Learning the superresolution space with normalizing flow. In *European Conference on Computer Vision*, pp. 715–732. Springer, 2020. Andreas Lugmayr, Martin Danelljan, and Radu Timofte. Ntire 2021 learning the super-resolution space challenge. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 596–612, 2021. Ziwei Luo, Fredrik K Gustafsson, Zheng Zhao, Jens Sjölund, and Thomas B Schön. Image restoration with mean-reverting stochastic differential equations. *arXiv preprint arXiv:2301.11699*, 2023a. Ziwei Luo, Fredrik K Gustafsson, Zheng Zhao, Jens Sjölund, and Thomas B Schön. Refusion: Enabling large-size realistic image restoration with latent-space diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1680–1691, 2023b. Roey Mechrez, Itamar Talmi, Firas Shama, and Lihi Zelnik-Manor. Maintaining natural image statistics with the contextual loss. In *Asian Conference on Computer Vision*, pp. 427–443. Springer, 2018a. Roey Mechrez, Itamar Talmi, and Lihi Zelnik-Manor. The contextual loss for image transformation with non-aligned data. In *European Conference on Computer Vision (ECCV)*, pp. 768–783, 2018b. Eliya Nachmani, Robin San Roman, and Lior Wolf. Denoising diffusion gamma models. *arXiv preprint* arXiv:2110.05948, 2021. Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, July 2017. Guy Ohayon, Theo Adrai, Gregory Vaksman, Michael Elad, and Peyman Milanfar. High perceptual quality image denoising with a posterior sampling cgan. In *Proceedings of the International Conference on* Computer Vision (ICCV) Workshops, 2021. Gregory Ongie, Ajil Jalal, Christopher A Metzler, Richard G Baraniuk, Alexandros G Dimakis, and Rebecca Willett. Deep learning techniques for inverse problems in imaging. *IEEE Journal on Selected Areas in* Information Theory, 1(1):39–56, 2020. Mangal Prakash, Alexander Krull, and Florian Jug. Fully unsupervised diversity denoising with convolutional variational autoencoders. In *International Conference on Learning Representations*, 2020. Herbert Robbins. An empirical bayes approach to statistics. In Proc. 3rd Berkeley Symp. Math. Statist. Probab., 1956, volume 1, pp. 157–163, 1956. Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by denoising (RED). *SIAM Journal on Imaging Sciences*, 10(4):1804–1844, 2017. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In *International Conference on Medical image computing and computer-assisted intervention*, pp. 234–241. Springer, 2015. Leonid I Rudin and Stanley Osher. Total variation based image restoration with free local constraints. In Proceedings of 1st international conference on image processing, volume 1, pp. 31–35. IEEE, 1994. Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. *arXiv preprint arXiv:2104.07636*, 2021. Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10, 2022. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. *Advances in neural information processing systems*, 29, 2016. Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion schr\" odinger bridge matching. *arXiv preprint arXiv:2303.16852*, 2023. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *International* Conference on Learning Representations, 2021a. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019. Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. Advances in neural information processing systems, 33:12438–12448, 2020. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021b. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. *arXiv preprint* arXiv:2303.01469, 2023. Maitreya Suin, Kuldeep Purohit, and A. N. Rajagopalan. Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2018. Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 5769–5780, 2022. Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In *2013 IEEE Global Conference on Signal and Information Processing*, pp. 945–948. IEEE, 2013. Pascal Vincent. A connection between score matching and denoising autoencoders. *Neural computation*, 23 (7):1661–1674, 2011. Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: enhanced super-resolution generative adversarial networks. In *Proceedings of the European* conference on computer vision (ECCV) workshops, pp. 0–0, 2018. Simon Welker, Henry N. Chapman, and Timo Gerkmann. Blind drifting: Diffusion models with a linear SDE drift term for blind image restoration tasks. In The Symbiosis of Deep Learning and Differential Equations II, 2022. URL https://openreview.net/forum?id=VCLnhfPVEB. Jay Whang, Erik Lindgren, and Alex Dimakis. Composing normalizing flows for inverse problems. In International Conference on Machine Learning, pp. 11158–11169. PMLR, 2021. Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G Dimakis, and Peyman Milanfar. Deblurring via stochastic refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16293–16303, 2022. Mao Ye, Lemeng Wu, and Qiang Liu. First hitting diffusion models. *arXiv preprint arXiv:2209.01170*, 2022. Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14821–14831, 2021. Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5728–5739, 2022. Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. Kai Zhang, Jingyun Liang, Luc Van Gool, and Radu Timofte. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4791–4800, 2021. Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Proceedings of the IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), June 2018. Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for image restoration with neural networks. *IEEE Transactions on computational imaging*, 3(1):47–57, 2016. ## A Proof Of Proposition 4.1 Proof. We have xs = (1 − s)x + sy, and xt = (1 − t)x + ty, so by substituting y from one to the other we get, $$\mathbf{x}_{s}=(1-s)\mathbf{x}+s\left(\frac{\mathbf{x}_{t}-(1-t)\mathbf{x}}{t}\right)$$ $$=\mathbf{x}-s\mathbf{x}+\frac{s}{t}\mathbf{x}_{t}-\frac{s}{t}\mathbf{x}+s\mathbf{x}$$ $$=\left(1-\frac{s}{t}\right)\mathbf{x}+\frac{s}{t}\mathbf{x}_{t}.$$ Then, E[ xs |xt ] = Zxspxs|xt (xs|xt)dxs (19) = Zxspx|xt txs − sxt t − s xt t t − s dxs (20) = Z(t − s)x + sxt tpx|xt (x|xt)dx (21) = 1 − s t Zxpx|xt (x|xt)dx + s t xt (22) = 1 − s t E[ x|xt ] + s t xt. (23) where we have applied the fact that pxs|xt (xs|xt) = px|xt $\mathbf{x}_t(\mathbf{x}|\mathbf{x}_t)\frac{t}{t-s}$ and $\mathbf{x}=\frac{t\mathbf{x}_s-s\mathbf{x}_t}{t-s}$. ## B Denoising With A Gaussian Prior Let's analyze InDI's behavior in the particular case where p(x) = N(c, σ2 c I), and the restoration task is denoising y = x + n, where n is white Gaussian of fixed standard deviation σN . Then, p(y|x) = N(x, σ2N I) and E[x|xt] = σ 2 cxt+t 2σ 2 N c σ2 c+t 2σ 2 N , where we have used the fact that xt = (1 − t)x + ty = x + tn. InDI's ideal ODE is given by dxt dt = xt−E[x|xt] t, which in this case becomes: $${\frac{d\mathbf{x}_{t}}{d t}}={\frac{t\sigma_{N}^{2}(\mathbf{x}_{t}-\mathbf{c})}{\sigma_{N}^{2}t^{2}+\sigma_{c}^{2}}}.$$ We are interested in solving this equation at t = 0, with boundary condition x1 = y at t = 1. This is a separable ODE having general solution: xt = c + (y − c) qt 2+α2 1+α2 , where α = σc σN . The solution at t = 0, is $$\mathbf{X_{\mathrm{InDI}}}=\mathbf{c}+(\mathbf{y}-\mathbf{c}){\sqrt{\frac{\sigma_{c}^{2}}{\sigma_{c}^{2}+\sigma_{N}^{2}}}}.$$ ``` Note that E[xInDI] = c = E[x], and the covariance cov(xInDI) = cov(y)σ 2 c σ2 c+σ 2 N = cov(x), since cov(y) = ``` (σ 2 c + σ 2 N )I. Given p(x) and p(xInDI) are Gaussian distributions with same mean and covariance, we have p(xInDI) = p(x). That is, in the limit, InDI generates samples from the prior distribution p(x). It is worth noting that in this case the MMSE and MAP estimates coincide, $$\mathbf{x}_{\mathrm{MMSE}}=\mathbf{x}_{\mathrm{MAP}}={\frac{\sigma_{c}^{2}\mathbf{y}+\sigma_{N}^{2}\mathbf{c}}{\sigma_{c}^{2}+\sigma_{N}^{2}}},$$ and are in fact different from InDI's estimate. ## C Model And Training Details In all our restoration experiments we use a U-Net-like architecture (Ronneberger et al., 2015) similar to the one in SR3 (Saharia et al., 2021) and DvSR (Whang et al., 2022). We followed the same adaptations as the ones introduced in Whang et al. (2022) to make it fully-convolutional (removed self-attention layers and group normalization). Our U-Net has an adaptive number of resolutions each of them having an arbitrary number of channels (given by a multiplication factor from a base set of channels). Table 2 summarizes the model definition for each of the tested applications. | Table 2: Model and training parameters for each restoration task. channels multipliers noise p(t) batch size learning rate crop size | | # params. | | | | | | | |----------------------------------------------------------------------------------------------------------------------------------------|----|-------------|----|------------|------|-----------|-----------|--------| | motion deblurring | 64 | [1,2,3,4] | = 0 | bias_t1 | 256 | 10−4 | 128 × 128 | 27.68M | | defocus deblurring | 64 | [1,2,4,4] | = 0 | linear_0.5 | 1024 | 10−4 | 128 × 128 | 33.57M | | JPEG restoration | 64 | [1,2,4,4] | = 0.060 linear_1.0 | 1024 | 10−4 | 128 × 128 | 33.57M | | | 4× super-resolution | 96 | [1,2,3,4] | = 0.015 | bias_t1 | 256 | 10−4 | 256 × 256 | 62.25M | All models are trained for 500K steps using 32 TPUv3 cores. We used the Adam optimizer with a fixed learning rate, and EMA decay rate of 0.9999. Models were trained using the respective indicated distribution for p(t). For the super-resolution model, low-resolution crops of size 64 × 64 are upscaled using bilinear interpolation to 256 × 256 before feeding them into the model. ## D Additional Results Table 3: Defocus Deblurring on the DDPD dataset (Abuolaim & Brown, 2020). Best values and second-best values for each metric are color-coded. KID values are scaled by a factor of 1000 for readability Steps PSNR LPIPS FID KID 1 24.75 0.206 40.80 16.72 2 24.74 0.201 23.29 6.18 4 24.55 0.195 17.92 3.52 10 24.24 0.188 15.68 2.19 50 23.89 0.186 16.12 2.48 100 23.82 0.187 16.34 2.63 500 23.77 0.189 16.53 2.49 ![23_image_0.png](23_image_0.png) Figure 13: Additional GoPro deblurring results. The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![24_image_0.png](24_image_0.png) Figure 14: Additional GoPro deblurring results. The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![25_image_0.png](25_image_0.png) Figure 15: Additional GoPro deblurring results. The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![26_image_0.png](26_image_0.png) Figure 16: 4× super-resolution results (div2k dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![27_image_0.png](27_image_0.png) Figure 17: 4× super-resolution results (div2k dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![28_image_0.png](28_image_0.png) Figure 18: 4× super-resolution results (div2k dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![29_image_0.png](29_image_0.png) Figure 19: 4× super-resolution results (div2k dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![30_image_0.png](30_image_0.png) Figure 20: JPEG compression artifact removal results (quality factor 15, div2k test dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![31_image_0.png](31_image_0.png) Figure 21: JPEG compression artifact removal results (quality factor 15, div2k test dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![32_image_0.png](32_image_0.png) Figure 22: JPEG compression artifact removal results (quality factor 15, div2k test dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically. ![33_image_0.png](33_image_0.png) Figure 23: JPEG compression artifact removal results (quality factor 15, div2k test dataset). The proposed method (InDI) applied with different number of reconstruction steps. Best viewed electronically.
Review 1: Summary: This paper proposes a novel diffusion-based image restoration process (Inversion by Direct Inversion) designed to solve imaging inverse problems without explicit knowledge of the degradation forward process. The proposed technique assumes access only to paired clean and degrades images (x,y)~p(x,y); it does not need to know p(y|x). That is, unlike other diffusion techniques (e.g., cold diffusion) it does not need to know the degradation process. The proposed method then constructs a series of intermediates samples x_{t_1}, x_{t_2}, ... from convex combinations of x and y. It trains a restoration model to reconstruct x from each of these samples, effectively training a network to compute E[x|x_t]. This restoration model is repeatedly applied (with a principled weighting scheme) to reconstruct x. If the restoration model uses only a few large steps it has little distortion (high PSNR) while if it is applied many iterations with small steps it has higher perceptual quality. The authors apply the proposed method to motion deblurring, super-resolution, defocus deblurring, and JPEG artifact removal. It offers SOTA performance at motion deblurring and is competitive with existing techniques on the other tasks. Strengths and Weaknesses: Strengths: -Well-motivated -Very clearly written -Interesting -Thorough ablation studies Weaknesses: -A few minor presentation issues -Not state-of-the-art at everything -Doesn't generalize (requires retraining per inverse problem) Requested Changes: I support accepting the paper, but addressing the following would strengthen the work: Overall, the paper provides a comprehensive survey of the literature and related work. The recent papers [A] and the follow-up [B] (concurrent) seem closely related and may be worth mentioning. [A] Luo, Ziwei, et al. "Image Restoration with Mean-Reverting Stochastic Differential Equations." arXiv preprint arXiv:2301.11699 (2023). [B] Luo, Ziwei, et al. "Refusion: Enabling Large-Size Realistic Image Restoration with Latent-Space Diffusion Models." arXiv preprint arXiv:2304.08291 (2023). Highlighting the best and second best results in all tables (as they were done in table 1) would make them easier to read and interpret. #### Typos and suggestions Pg 10: "Table 5(a)"-->"Figure 5(a)". Pg 10: I suggest making 5(b) Table 2 Pg 10: It's not obvious whether the "ours" in 5(b) had noise injection or not. The adjacent figure 5(a) suggests this is an important distinction to make. Broader Impact Concerns: I have no concerns about this work. Like other generative techniques, it could potentially be misused by bad actors. ================================================== Review 2: Summary: This work presents a novel formulation for supervised image restoration, which aims to circumvent the so-called "regression to the mean" effect. The proposed algorithm functions by iteratively restoring low-quality input images through a series of small steps. The underlying intuition is that taking smaller restoration steps can largely avoid the regression-to-the-mean effect, as the set of plausible minor changes is relatively small. The authors justify their approach by examining a simple 2D model, illustrating that the minimum mean-squared error (MMSE) optimal solution is susceptible to the "regression to the mean" effect. In contrast, iterative restoration converges to one of the plausible modes. The authors further support their claims by applying the method to various image restoration tasks, demonstrating that the Indi algorithm produces high-quality results for image denoising tasks. Overall, this is an innovative and straightforward approach to image restoration, offering more strengths than weaknesses. Consequently, I recommend acceptance of the paper following minor revisions. Strengths and Weaknesses: Strengths: 1) The described algorithm is remarkably straightforward to implement and does not necessitate any knowledge of the degradation process's analytic form, distinguishing it from generative denoising diffusion models. 2) The paper is well-written, with clear motivation and easy-to-understand proofs. The implementation process is presented in a user-friendly manner. 3) The authors effectively showcase the performance of their approach across various image restoration tasks, comparing it to both cold diffusion and conditional denoising diffusion models. 4) The authors also reveal the significant impact of bias during training time (with p(t) skewed towards t=1) on performance. This finding highlights the importance of the iterative procedure's increased certainty in determining the appropriate direction during the initial stages of the process. Weakness: 1) Although the authors demonstrate the "regression to the mean" effect using a 2D toy model, they do not fully explain how INDI mitigates this effect. They claim that "the set of plausible 'slightly-less-bad' images is relatively small," but the reasoning behind this statement remains unclear from the 2D example. Consider the following example for the MMSE case. Given the model $y = Hx + n$ and the specified prior distribution $p(x) = \sum_{i=1}^d w_{i} \delta_{x-c_{i}}$, the MMSE solution is $\int x p(x|y) dx = c \int x p(y|x)p(x)dx$, with $c = \frac{1}{p(y)}$. The posterior distribution can be expressed as the product of the likelihood term and the prior. Under the Gaussian noise assumption, it becomes $c \int x \exp^{-\frac{-||y - Hx||_{2}^2}{\sigma^2}} p(x) dx$, where $p(x)$ is the discrete multimodal distribution. Consequently, the integral simplifies to the sum over the four modalities: $\sum_{i=1}^4 w_{i} c_{i} \exp^{-\frac{-||y - Hc_{i}||_{2}^2}{\sigma^2}}$. For the first denoising model, where $H = I$, the regression to mean effect is apparent, as it is an average over the $c_{i}$'s weighted by the distance of $y$ to each $c_{i}$. In the second inpainting problem, the non-trivial null-space of $H$ causes the y-coordinate information to be erased, and the MMSE iterate converges to a point where the x-axis coordinate matches that of the true modalities, while the y-axis of the MMSE solution is simply the average of the y-axis of the modalities. The "regression to the mean effect" is thus a joint result of the multimodal distribution of $p(x)$ and the non-trivial null-space of $H$ (and possibly the condition number in certain cases). Considering this observation, how does INDI's update equation (equation 2) with the closed-form solution for the 2D model help alleviate the regression to mean effect? Is it because $E[x_{0}|\hat{x_{t}}]$ depends on $H_{t}$, which no longer has a null-space? A more detailed mathematical explanation of how INDI mitigates the regression to mean effect would be appreciated. 2) The ODE in equation-6 should be revised to reflect the residual ODE flow. However, the motivation for the ODE flow in this case remains unclear. It would be helpful if the authors could elaborate on what additional insights the ODE flow provides in this context. 3) It is surprising that the authors did not include any image inpainting experiments in their study. Given the current motivation and implementation of INDI, image inpainting appears to be a highly suitable task for the algorithm. The inclusion of such experiments would strengthen the paper and provide further evidence of INDI's applicability across different image restoration tasks. Requested Changes: I recommend the following major changes (1, 2) to address the weaknesses identified in the paper: 1) Weakness-1: The authors should provide a mathematical argument, corresponding to the 2D toy problem, that demonstrates how INDI can alleviate the regression to the mean effect. This argument should be based on the closed-form solution for the 2D experiment and should explain whether INDI reduces the degeneracy of the operator H. 2) Weakness-2: To provide a more comprehensive evaluation, the authors should include image inpainting experiments in the experiments section of the paper. Additionally, I suggest a minor change: 3) The authors should clarify the insights provided by the residual ODE flow and explain its relevance to the overall work. Broader Impact Concerns: No broader impact concerns are noted by the reviewer. ================================================== Review 3: Summary: The authors present a novel image restoration technique for the supervised learning setting which allows improved perceptual quality. The core of the method is to iteratively refine a solution in small steps. This allows to avoid the regression to the mean effect and get a solution closer to the image manifold. Strengths and Weaknesses: Strenghts: - The method is generally sound and an innovative technique to deal with the well-known distortion-perception tradeoff. - Controlling the number of steps seems to achieve different perception-distortion tradeoff which might be useful in certain applications - generally good experimental results Weaknesses: - as it is an iterative method, inference can be quite expensive - performance does not always seem to be state of the art, neither in minimizing distortion nor in maximizing visual quality (see e.g. Fig.5) Requested Changes: I think the paper could be strengthened by some revisions and the inclusion of additional material. I would like to see an extended set of comparisons with state-of-the-art methods where the extreme case of 1 iteration (which should maximize PSNR) is also shown. Additionally, it could be interesting to see results on the denoising problem. There are also questions about fairness of comparisons since the network architecture is different from other baselines. What would be the performance of the architecture when trained in a conventional manner? It could also be interesting to discuss if the authors think there are connections between their work and the literature on plug and play methods (https://arxiv.org/abs/2203.17061). As minor points: a few typos should be fixed throughout the paper; sections 2 and 3 are partially redundant and could be merged in a single section. Broader Impact Concerns: No concerns on broader impact. ================================================== Metareview: Recommendation: Accept as is Comment: This is a very well-written paper on a novel formulation for supervised image restoration. All reviewers appreciate the simple yet effective framework and solid evaluation on five different tasks (motion deblurring, super-resolution, defocus deblurring, jpeg artifact removal, and generative modeling), demonstrating the proposed method's applicability to various image restoration problems. The notable strengths by the reviewers are: "remarkably straightforward to implement" - Reviewer 6LtM "clear motivation and easy-to-understand proofs" - Reviewer 6LtM "Thorough ablation studies" - Reviewer 16FV "The method is generally sound and an innovative technique to deal with the well-known distortion-perception tradeoff" - Reviewer 9cRu There were some missing references and discussions of connection with prior work raised by the reviewers. After the authors' responses, all reviewers strongly support accepting the paper. The AE appreciates the simplicity of the work and the general applicability of the proposed method on different problems. While, the current method does not outperform the state-of-the-art on each individual task, the AE agrees that this is not the main focus of the paper. The method's simplicity and promising performance may inspire many follow-up research. The AE thus recommends "accepting as is" with featured certification. ==================================================
# Vulnerability-Aware Instance Reweighting For Adversarial Training Olukorede Fakorede* fakorede@iastate.edu Department of Computer Science Iowa State University Ashutosh Nirala *aknirala@iastate.edu* Department of Computer Science Iowa State University Modeste Atsague *modeste@iastate.edu* Department of Computer Science Iowa State University Jin Tian jtian@iastate.edu Department of Computer Science Iowa State University Reviewed on OpenReview: *https: // openreview. net/ forum? id= kdPcLdJbt1* ## Abstract Adversarial Training (AT) has been found to substantially improve the robustness of deep learning classifiers against adversarial attacks. AT involves obtaining robustness by including adversarial examples in training a classifier. Most variants of AT algorithms treat every training example equally. However, recent works have shown that better performance is achievable by treating them unequally. In addition, it has been observed that AT exerts an uneven influence on different classes in a training set and unfairly hurts examples corresponding to classes that are inherently harder to classify. Consequently, various reweighting schemes have been proposed that assign unequal weights to robust losses of individual examples in a training set. In this work, we propose a novel instance-wise reweighting scheme. It considers the vulnerability of each natural example and the resulting information loss on its adversarial counterpart occasioned by adversarial attacks. Through extensive experiments, we show that our proposed method significantly improves over existing reweighting schemes, especially against strong white and black-box attacks. ## 1 Introduction The practical application of deep learning in safety-critical domains has been thrown into doubt following the observed brittleness of deep learning to well-crafted adversarial perturbations (Szegedy et al., 2013). This chilling observation has led to an array of methods aimed at making deep learning classifiers robust to these adversarial perturbations. Prominent among these proposed methods is adversarial training (Goodfellow et al., 2015; Madry et al., 2018). Adversarial training (AT) is an effective method that typically involves the introduction of adversarial examples in training a deep learning classifier. Several AT variants have been proposed yielding modest improvements (Wang et al., 2019; Ding et al., 2019; Kannan et al., 2018; Zhang et al., 2019). More recently, methods have been proposed to boost the performance of the existing AT variants even further, including adversarial weight perturbation (Wu et al., 2020), utilizing hypersphere embedding (Pang et al., 2020; Fakorede et al., 2023), and augmenting dataset with unlabeled and/or extra labeled data (Carmon et al., 2019; Alayrac et al., 2019; Zhai et al., 2019). Despite the generally impressive performance of AT against adversarial attacks, Xu et al. (2021) raised concerns about fairness in AT. It is observed that AT encourages significant disparity in natural and robust accuracy among different classes. Furthermore, AT disproportionately hurts the robust accuracy of input examples that are intrinsically harder to classify by a naturally trained classifier. For example, a naturally trained PreactResNet-18 on the CIFAR-10 dataset classifies "cat" and "ship" at approximately 89% and 96% accuracy, respectively, while the robust accuracy of "cat" and "ship" produced by an adversarially trained PreactResNet-18 on PGD-attacked CIFAR-10 dataset are approximately 17% and 59% respectively (Xu et al., 2021). In other words, the gap between the standard accuracy and robust accuracy for "cat" is 72%, whereas the difference between the standard and robust accuracy for "ship" is 37%. This disparity suggests that adversarial examples corresponding to certain classes may be treated differently by AT. Adversarial examples are crafted from natural examples that exhibit varying degrees of intrinsic vulnerability measured by their closeness to the class decision boundaries. Intuitively, adversarial examples corresponding to intrinsically vulnerable examples are moved farther across the decision boundary into wrong classes which makes them easy to misclassify. Zhang et al. (2020) observed that AT encourages learning adversarial variants of less vulnerable natural examples at the expense of the intrinsically susceptible ones as the training proceeds. This may partly explain the phenomenon of robust overfitting (Chen et al., 2020; Zhang et al., 2020). Therefore, the performance of AT may be improved by assigning higher weights to the robust losses of adversarial variants of vulnerable natural examples. The idea of reweighting robust losses of adversarial examples has recently been explored in the literature. GAIRAT (Zhang et al., 2020) assigns smaller or larger weights to the robust losses of adversarial examples based on the geometric distance of their corresponding natural counterpart to the decision boundary. Specifically, *GAIRAT* measures the geometric distance using the least number of PGD steps needed to misclassify the natural example. As a result, the *GAIRAT* reweighting function can only take a few values (corresponding to discrete PGD steps) and is unstable because its value depends largely on the initial starting point of each natural example which may change depending on the attack path (Liu et al., 2021). Liu et al. (2021) proposed reweighting robust losses based on the margin between the estimated ground-truth probability of the adversarial example and the probability of the most confusing label. We contend that this method does not exploit information about the intrinsic vulnerability of a natural example. More importantly, we observe that these methods only yield competitive robustness on attacks like PGD and FGSM, but underperform against stronger white-box or black-box attacks CW, AA, or SPSA. This paper proposes a novel instance-wise weight assignment function for assigning importance to the robust losses of adversarial examples used for adversarial training. Our weight assignment function considers the intrinsic vulnerability of individual natural examples from which adversarial examples used during training are crafted. We capture the intrinsic vulnerability of each natural example using the likelihood of it being correctly classified, which we estimate using the model's confidence about the example belonging to its true class. In addition, we argue that adversarial attacks have a unique impact on each individual example. This contributes to the disparity in robustness accuracy exhibited by examples of different classes. Hence, we compute the discrepancies between a model's prediction on each natural example and its corresponding adversarial example as another measure for the vulnerability of the example. We summarize the contributions of this paper as follows: 1. We propose a novel Vulnerability-aware Instance Reweighting (VIR) function for adversarial training. The proposed reweighting function takes consideration of the intrinsic vulnerability of individual examples used for adversarial training and the information loss occasioned by adversarial attacks on natural examples. 2. We experimentally demonstrate the effectiveness of the proposed reweighting strategy in improving adversarial training. 3. We show that existing reweighting methods *GAIRAT* (Zhang et al., 2020) and *MAIL*(Liu et al., 2021) only yield significant robustness against attacks FGSM and PGD at the expense of stronger attacks CW (Carlini & Wagner, 2017), Autoattack(Croce & Hein, 2020b), and FMN (Pintor et al., 2021). Using various datasets and models, we show that the proposed VIR method consistently improves over the existing reweighting methods across various white-box and black-box attacks. ## 2 Related Work Adversarial Attacks. Since it became known that deep neural networks (DNN) are vulnerable to normbounded adversarial attacks (Biggio et al., 2013; Szegedy et al., 2013), a number of sophisticated adversarial attack algorithms have been proposed (Goodfellow et al., 2015; Madry et al., 2018; Carlini & Wagner, 2017; Athalye et al., 2018; Dong et al., 2018; Moosavi-Dezfooli et al., 2016). Adversarial attacks can broadly be classified into white-box and black-box attacks. White-box attacks are crafted with the attacker having full access to the model parameters. Prominent white-box attacks include Fast Gradient Sign Method (Goodfellow et al., 2015), DeepFool (Moosavi-Dezfooli et al., 2016), C&W (Carlini & Wagner, 2017), Projected Gradient Descent (PGD) (Madry et al., 2018) etc. On the other hand, in black-box settings, the attacker has no direct access to the model parameters, and black-box attacks usually rely on a substitute model (Papernot et al., 2017), or gradient estimation of the target model (Ilyas et al., 2018; Uesato et al., 2018). Adversarial Robustness. To mitigate the potential threat of adversarial attacks, extensive research has been conducted, leading to various methods (Guo et al., 2018; Song et al., 2017; Papernot et al., 2016; Madry et al., 2018; Zhang et al., 2019; Atsague et al., 2021). However, some of the proposed methods were later found to be ineffective against strong attacks (Athalye et al., 2018). Adversarial Training (AT) (Goodfellow et al., 2015; Madry et al., 2018), which requires training a classifier with adversarial examples, has been found to be effective to a degree in achieving robustness to adversarial examples. Formally, AT involves crafting adversarial examples during training and solving a saddle point problem formulated as: $$\min_{\theta}\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\left[\max_{\mathbf{x}^{\prime}\in B_{s}(\mathbf{x})}L(f_{\theta}(\mathbf{x}^{\prime}),y)\right]\tag{1}$$ $$\left(1\right)$$ where y is the true label of input feature x, L() is the loss function, θ are the model parameters, and Bϵ(x) : {x ′ ∈ X : ∥x ′ − x∥p ≤ ϵ} represents the lp norm ball centered around x constrained by radius ϵ . In Eq. (1), the inner maximization tries to obtain a worst-case adversarial version of the input x that increases the loss. The outer minimization then tries to find model parameters that would minimize this worst-case adversarial loss. The relative success of AT (Madry et al., 2018) has inspired various AT variants such as (Zhang et al., 2019; Wang et al., 2019; Ding et al., 2019; Kannan et al., 2018), to cite a few. Re-weighting. Recent works by Zhang et al. (2020) and Liu et al. (2021) have argued for assigning different weights to the losses corresponding to different adversarial examples in the training set. Zhang et al. (2020) assigns weights to an example based on its distance to the decision boundary. Examples that are closer to the decision boundary are assigned larger weights as follows: $$w(\mathbf{x}_{i},y_{i})={\frac{1+t a n h(\lambda+5\times(1-2k(\mathbf{x}_{i},y_{i})/K))}{2}}$$ where k(xi, yi) is the least number of PGD steps to cause misclassification of xi, K = 10 is the number of PGD steps used for generating the attack, and λ is set to -1. The assignment function in Eq. (2) is used to reweight the robust losses on each adversarial example in Eq. (1). Unlike (Zhang et al., 2020) which uses a re-weighting function that is discrete (i.e. the weighting function depends on k PGD iterations) and path-dependent, (Liu et al., 2021) proposed a re-weighting scheme that is continuous and path-independent based on the probability margin between the estimated ground-truth probability of an adversarial example and the probability of the class closest to the ground-truth as follows: $$P M(\mathbf{x},y;\theta)=f_{\theta}(\mathbf{x}^{\prime})_{y}-\operatorname*{max}_{j,j\neq y}f_{\theta}(\mathbf{x}^{\prime})_{j}$$ ′)j (3) The weight assignment function is then defined as: $$w(\mathbf{x}_{i},y_{i})=s i g m o i d(-\gamma(P M_{i}-\beta))$$ w(xi, yi) = *sigmoid*(−γ(PMi − β)) (4) $$\mathbf{\Sigma}$$ $$(3)$$ $$\left({4}\right)$$ where γ and β are hyperparameters. This work takes a different perspective on reweighting robust losses. We consider the inherent vulnerability of the natural examples used for crafting adversarial examples and the impact of adversarial attacks on each natural example. ## 3 Preliminaries We use bold letters to denote vectors. We denote D = {xi, yi} n i=1 as a data set of input feature vectors xi *∈ X ⊆* Rd and labels yi ∈ Y, where X and Y represent a feature space and a label set, respectively. Let fθ : X → RC denote a deep neural network (DNN) classifier parameterized by θ where C represents the number of output classes. For any x ∈ X , let the class label predicted by fθ be Fθ(x) = arg maxk fθ(x)k, where fθ(x)k denotes the k-th component of fθ(x). fθ(x)y is the likelihood of x belonging to class y. KL(P∥Q) is used to represent the Kullback-Leibler divergence of distributions P and Q. We denote *∥ · ∥*p as the lp- norm over Rd, that is, for a vector x ∈ Rd, ∥x∥p = (Pd i=1 |xi| p) 1 p . An ϵneighborhood for x is defined as Bϵ(x) : {x ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) ![3_image_3.png](3_image_3.png) ′ ∈ X : ∥x ′ − x∥p ≤ ϵ}. An adversarial example corresponding to a natural input x is denoted as x ′. We refer to a DNN trained only on natural examples as standard network, and the one trained using adversarial examples as robust network or adversarially trained network. Figure 1: Confusion matrix displaying standard accuracy of ResNet18 on the CIFAR-10 dataset and its robust accuracy under the PGD-100 attack trained by AT variants *vanilla-AT* (Madry et al., 2018), *TRADES* (Zhang et al., 2019), and *MART* (Wang et al., 2019) respectively. ## 4 Proposed Method The proposed approach is motivated by two major factors: (1) the unfairness exhibited by adversarial training - adversarial examples crafted from intrinsically vulnerable natural examples are underrepresented in the later stages of adversarial training (Zhang et al., 2020), (2) the discrepancy between individual natural examples and their corresponding attacked variants. We propose an effective reweighting strategy for assigning importance to robust losses based on these two factors. We will make a case for how the information about the inherent vulnerability of natural examples and the effect of adversarial attacks on these examples may be captured and utilized as inputs to the proposed weight assignment function. ## 4.1 Vulnerability Of Natural Examples Based on the findings in (Xu et al., 2021; Ma et al., 2022), supported by the confusion matrices in Figure 1, it is clear that adversarial training hurts adversarial examples of certain classes more than others, especially adversarial examples crafted from natural examples which are harder to classify under natural training. These natural examples are characterized by their closeness to the class decision boundaries (Xu et al., 2021; Zhang et al., 2020). Following these findings, we propose a reweighting component to pay attention to adversarial examples whose natural counterparts are more *difficult* to classify. We argue that a natural example being vulnerable is suggestive of the features of that example correlating to another (wrong) but semantically similar class. Hence, a vulnerable example may be classified with reasonably high confidence to the wrong class. Findings in (Wang, 2021) also indicate that classes that are semantically closer to each other have smaller distances to their decision boundaries; e.g., in the CIFAR-10 dataset, 'cat' is semantically closer to 'dog' than 'airplane,' and 'cat' and 'dog' have smaller distances to the decision boundaries than 'airplane.' In Figure 1, classes '3' ('cat') and '5' ('dog'), which are semantically similar, exhibit more vulnerability under natural training. Input examples corresponding to these classes contain certain similar features. We believe that this relative feature similarity in semantically closer classes makes models less confident in distinguishing their examples. Given this insight, we define the vulnerability of a natural example in terms of a model's estimated class probability. Given a natural input x with its corresponding true label y, and a model fθ(.), the model's estimated probability of x belonging to y is given as fθ(x)y. Note that y is not necessarily the same as arg maxk fθ(x)k. We formally define the notion of relative vulnerability of inputs below. Definition 4.1 (*Relative vulnerability* **of examples.)** Given two input-label pairs (x1, y1) and (x2, y2), we say the pair (x1, y1) *is more vulnerable than* (x2, y2), if fθ(x1)y1 < fθ(x2)y2 . Based on Definition 4.1, model fθ(.) is less confident in estimating the true class of more vulnerable examples. We provide more theoretical explanation to our notion of relative example vulnerability in Appendix A. Given that adversarial training involves training on adversarial examples, and it unfairly hurts adversarial examples crafted from vulnerable examples, it is intuitive that adversarial training sets up its decision boundary to favor adversarial examples of *invulnerable* classes, as observed in (Xu et al., 2021). This is indicated by the relatively low robust accuracy recorded on certain classes. Given this insight, we consider assigning unequal weights to adversarial examples based on the relative vulnerability of their original (natural) examples with respect to their respective true labels. Our proposed re-weighting strategy for adversarial training considers the vulnerability of each natural example used to generate adversarial examples for training. This vulnerability for an input x with label y is given by fθ(x)y. Adversarial examples corresponding to natural examples which are intrinsically vulnerable are assigned higher weights. We define a score function denoted as Sv(x, y) for adaptively assigning importance to input example x based on a model's confidence that x belongs to the true label y. The score function is as follows: $$S_{v}(\mathbf{x}_{i},y_{i})=\alpha\cdot e^{-\gamma f_{\theta}(\mathbf{x}_{i})y_{i}}$$ ## −Γfθ(Xi)Yi (5) Where Xiis A Natural Example, Yiis The True Label Of Xi, E (.)Represents An Exponential Function, And Γ >= 1.0 Is A Real-Valued Hyperparameter. Α Is Introduced For Numerical Stability To Ensure That Sv(X, Y) Values Are Not Too Small And Are Useful. The Formulation In Eq. 5 Ensures That Higher Values Are Returned For More Valnerable Examples Which Have Lower Fθ(Xi)Yi Values. Higher Γ Values Will Result In A Larger Disparity In Weights Between Vulnerable Examples And Less Vulnerable Examples. 4.2 Disparity Between Natural And Adversarial Examples The likelihood estimate of correctly classifying a natural example provides information about its intrinsic vulnerability before an adversarial perturbation is applied to it. However, it does not supply information about the discrepancies in the features learned by the model on natural examples and their corresponding adversarial variants, which largely explain the large disparity between natural and robust errors. Ilyas et al. (2019) and Tsipras et al. (2018) characterized learned features into robust and non-robust. Robust features refer to features that remain correlated to the true label under adversarial perturbation. In contrast, non-robust features are highly predictive, however, they are brittle and can be anti-correlated with the true label under adversarial perturbations. Tsipras et al. (2018) showed that a DNN classifier is able to learn any useful features on natural examples, but learns only robust features on adversarial examples, assigning zero or infinitesimal weight values to the predictive non-robust features. The depletion of features in the adversarial examples results in different information loss for different examples. We note that there may be a variation in non-robust features learned by a DNN classifier in the different adversarial examples used in training. For instance, consider two input-label pairs (x1, y1), (x2, y2) and their corresponding perturbed variants x ′ 1 , x ′ 2 . The robust features in x ′ 1 may have a stronger correlation to y1 than the robust features in x ′ 2 to y2. In fact, intrinsically vulnerable examples which are relatively weakly correlated with their classes may be affected more by an adversary since their features are further depleted. We propose to score the vulnerability of an example x as a weight for x by the discrepancy between the model's output prediction on x and that on its adversarial variant x ′. For simplicity, we score this discrepancies using the KL-divergence as follows: Sd(xi, x ′ i ) = KL(fθ(xi)∥fθ(x $$\mathbf{\Phi}(\mathbf{x}_{i})\|f_{\theta}(\mathbf{x}_{i}^{\prime}))$$ $\downarrow$ . )) (6) where Sd denotes the proposed discrepancy score function, and fθ(x) and fθ(x ′) denote the model's predictions on natural and the corresponding adversarial examples respectively. We note that the KL-divergence between a model's predictions on natural examples and adversarial examples is characterized as a boundary error in *TRADES* (Zhang et al., 2019), and is utilized as a regularization term for improving robustness (see Eq. 9). In contrast, here, it is used as a weighting (a value) and the gradient of the KL-divergence in Eq. (6) is not used. ## 4.3 Weight Assignment Function We propose a weight assignment function for re-weighting robust losses based on the observations in Sections 4.1 and 4.2. The proposed Vulnerability-aware Instance Reweighting (VIR) function for assigning importance to the example xiin a training set is as follows: $$w(\mathbf{x}_{i},\mathbf{x}_{i}^{\prime},y_{i})=S_{v}(\mathbf{x}_{i},y_{i})\cdot S_{d}(\mathbf{x}_{i},\mathbf{x}_{i}^{\prime})+\beta\tag{1}$$ $$\left(7\right)$$ where β is a hyperparameter. Sv(x, y) ensures that emphasis is given to vulnerable examples which are disproportionately hurt by adversarial training. The proposed weight assignment function assigns the largest weights to robust losses corresponding to vulnerable examples having the most significant discrepancy between the model's outputs on the natural and adversarial variants. Furthermore, the least weights are assigned to robust losses corresponding to invulnerable examples having the least discrepancy between the model's outputs on them and their corresponding adversarial variants. The hyperparameter β allows us to lower-bound the reweighting function and to balance the relative strength of the lowest and highest weight values. As a comparison, *GAIRAT* (Zhang et al., 2019) assigns importance to robust losses based on the k PGD steps required to attack an example x. The intuition is that vulnerable inputs take fewer steps to move across the decision boundary. However, as noted in (Liu et al., 2021), this approach is problematic because it is path-dependent, i.e, two inputs with similar starting points may reach their end points using different paths, thus making it unreliable. In addition, the reweighting function is constrained to accepting a few discrete values. In contrast, the proposed reweighting function in eqn (7) takes continuous values and is path-independent. ## 4.4 Applying The Weight Assignment Function We apply the proposed weight assignment function to prominent adversarial training methods vanilla AT (Madry et al., 2018) and TRADES (Zhang et al., 2019) by assigning importance to robust losses computed by these adversarial training methods. Specifically, we re-write the training objective of the vanilla AT as: $$\sum_{i}w(\mathbf{x}_{i},\mathbf{x}_{i}^{\prime},y_{i})\cdot L_{C E}(f_{\theta}(\mathbf{x}_{i}^{\prime}),y_{i})$$ ), yi) (8) where LCE refers to the cross-entropy loss function. The TRADES robust loss, originally stated as: $$\sum_{i}L_{C E}(f_{\theta}(\mathbf{x}_{i}),y)+{\frac{1}{\lambda}}\cdot K L(f_{\theta}(\mathbf{x}_{i})\|f_{\theta}(\mathbf{x}_{i}^{\prime})),$$ )), (9) $$({\boldsymbol{\delta}})$$ $\text{e TRADES robun}$ $$({\mathfrak{g}})$$ is re-written as: $$\sum_{i}L_{C E}(f_{\theta}(\mathbf{x}_{i}),y)+{\frac{1}{\lambda}}\cdot w(\mathbf{x}_{i},\mathbf{x}_{i}^{\prime},y_{i})\cdot K L(f_{\theta}(\mathbf{x}_{i})\|f_{\theta}(\mathbf{x}_{i}^{\prime}))$$ $$(10)$$ )) (10) where λ is a regularization hyper-parameter. We term the training objectives in Eq. (8) and (10) as **VIR-AT** and **VIR-TRADES** respectively. Burn-in Period. During the training, the weight w(xi, x ′ i , yi) is set to 1 in the initial epochs. The application of the proposed weight assignment function is delayed to later epochs. This is because at the initial training phase, the deep model has not sufficiently learned, and thus is less informative. Disregarding this fact may mislead the training process. Zhang et al. (2020) used a similar approach in implementing their re-weighting strategy. Our proposed VIR-AT algorithm is summarized in the following: Algorithm 1 VIR-AT Algorithm. Input: a neural network model with the parameters θ, step sizes κ1 and κ2, and a training dataset D of size n. Output: a robust model with parameters θ ∗ 1: for *epoch* = 1 to num_epochs do 2: for *batch* = 1 to num_batchs do 3: sample a mini-batch {(xi, yi)}M i=1 from D; ▷ mini-batch of size M. 4: for i = 1 to M do 5: x ′ i ← xi + 0.001 · N (0, 1), where N (0, I) is the Gaussian distribution with zero mean and 6: identity variance. 7: for k = 1 to K do 8: x ′ i ←QBϵ(xi) (xi + κ1 · *sign*(∇x ′ i · L(fθ(x ′ i ), yi)); ▷Qis a projection operator. 9: **end for** 10: Sv(xi, yi) ← α · e −γfθ(xi)y 11: Sd(x ′ i , xi) ← KL(fθ(xi)∥fθ(x ′ i )) 12: wi(xi, x ′ i , yi) ← Sv(xi, yi) · Sd(x ′ i , xi) + β; ▷ wi(xi, x ′ i , yi) ← 1 if epoch ≤ 76 13: **end for** 14: θ ← θ − κ2∇θPM i=1 wi(x ′ i , xi, yi) · L(fθ(x ′ i ), yi) 15: **end for** 16: **end for** ## 5 Experimental Section In this section, we verify the effectiveness of the proposed re-weighting function through extensive experiments on various datasets including CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), SVHN(Netzer et al., 2011), and TinyImageNet(Deng et al., 2009). We employed ResNet-18 (RN-18) (He et al., 2016) and WideResNet-34-10 (WRN-34-10) (He et al., 2016) as the backbone models for exploring the effectiveness of the proposed method on CIFAR-10, while CIFAR-100, SVHN, and TinyImageNet are evaluated on ResNet-18. ## 5.1 Experimental Settings The models are trained for 115 epochs, using mini-batch gradient descent with momentum 0.9, batch size 128, weight decay 3.5e-3 (RN-18) and 7e-4 (WRN-34-10). The learning rates are set to 0.01 and 0.1 for RN-18 and WRN-34-10 respectively. In both cases, the learning rates are decayed by a factor of 10 at 75th, and then at 90th epoch. In *VIR-AT* and *VIR-TRADES*, we introduced the proposed reweighting function on the 76th epoch following (Zhang et al., 2020). The adversarial examples used during training are obtained by perturbing each image using the Projected Gradient Descent (PGD) (Madry et al., 2018) with the following hyperparameters: l∞ norm ϵ = 8/255, step-size κ = 2/255, and K = 10 iterations. ## 5.2 Baselines We compare the robustness obtained using *VIR-AT* and *VIR-TRADES* with prominent AT methods *vanillaAT* (Madry et al., 2018), *TRADES* (Zhang et al., 2019), and *MART* (Wang et al., 2019). In addition, we compare with the state-of-the-art reweighting schemes *GAIRAT* (Zhang et al., 2020) and *MAIL*(Liu et al., 2021). We conducted additional experiments on recent data augmentation-based defense (Wang et al., 2023) and the results are provided in Appendix B. ## 5.3 Hyperparameters Baseline Hyperparameters. The trade-off hyperparameter 1λ is set to 6.0 for training WRN-34-10 and 4.0 for RN-18 with *TRADES*. As recommended by the authors, we set the regularization hyperparameter β to 5.0 for training with *MART*. VIR Hyperparameters. The values of constants α and β are heuristically determined and set to 7.0 and 0.007 respectively in *VIR-AT* and 8.0 and 1.6 in *VIR-TRADES*. Similarly, we set the value of γ to 10.0 and 3.0 in *VIR-AT* and *VIR-TRADES* respectively. We set the value of γ to 3.0 for training TinyImageNet with VIR-AT. Also, the value of 1λ is set to 5.0 for training *VIR-TRADES*. ## 5.4 Threat Models The performance of the proposed reweighting function was evaluated using attacks under *White-box* and Black-box settings and *Auto attack*. White-box attacks. These attacks have unfettered access to model parameters. To evaluate robustness on CIFAR-10 using RN-18 and WRN34-10, we apply the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) with ϵ = 8/255 ; PGD attack with ϵ = 8/255, step size κ = 1/255, K = 100; CW (CW loss (Carlini & Wagner, 2017) optimized by PGD-20) attack with ϵ = 8/255, step size 1/255. In addition, we evaluated the robustness of trained models against 100 iterations of l∞ version of Fast Minimum-norm (FMN) attacks (Pintor et al., 2021). On SVHN and CIFAR-100, we apply PGD attack with ϵ = 8/255, step size κ = 1/255, K = 100. We limited the white-box evaluation on TinyImageNet to PGD-20. Black-box attacks. Under black-box settings, the adversary has no access to the model parameters. We evaluated robust models trained on CIFAR-10 against strong query-based black-box attacks Square (Andriushchenko et al., 2020) with 5,000 queries and SPSA (Uesato et al., 2018) with 100 iterations, perturbation size 0.001 (gradient estimation), learning rate = 0.01, and 256 samples for each gradient estimation. All black-box evaluations are made on trained WRN-34-10. Ensemble of Attacks. Trained models are tested on powerful ensembles of attacks such as *Autoattack* (Croce & Hein, 2020b), which consisting of APGD-CE (Croce & Hein, 2020b), APGD-T (Croce & Hein, 2020b), FAB-T (Croce & Hein, 2020a), and Square (a black-box attack) (Andriushchenko et al., 2020) attacks. In addition, we evaluated the trained models on the Margin Decomposition Ensemble (MDE) attack (Ma et al., 2023). ## 5.5 Performance Evaluation We summarize our results on CIFAR-10 using RN-18 and WRN-34-10 in Tables 1 and 2, respectively. Moreover, we report results on CIFAR-100, SVHN, and Tiny Imagenet using RN-18 in Tables 3 and 4. Finally, black-box evaluations are made on trained WRN-34-10, and the results are reported in Table 5. Experiments were repeated four times with different random seeds; the mean and standard deviation are subsequently calculated. Results are reported as mean ± std. Table 1: Comparing white-box attack robustness (accuracy %) for ResNet-18 on CIFAR-10. For all methods, distance ϵ = 0.031. We highlight the best-performing method under each attack. | Defense | Natural | FGSM | PGD-100 | CW | FMN-100 | AA | MDE | |------------|-------------|-------------|--------------|-------------|-------------|-------------|-------------| | AT | 84.12± 0.16 | 57.88± 0.13 | 51.58 ± 0.17 | 51.75± 0.23 | 48.92±0.25 | 47.92±0.35 | 47.90 ±0.27 | | TRADES | 83.56±0.35 | 57.82±0.32 | 52.07±0.25 | 52.26±0.07 | 49.74±0.30 | 48.32±0.19 | 48.29 ±0.19 | | MART | 80.32±0.38 | 58.01±0.19 | 54.03±0.28 | 49.29±0.11 | 49.82±0.08 | 47.61±0.27 | 47.55 ±0.12 | | GAIRAT | 83.33±0.19 | 60.20±0.29 | 54.91± 0.19 | 40.95±0.39 | 38.62±0.29 | 32.89 ±0.33 | 32.70 ±0.18 | | MAIL-AT | 84.32±0.46 | 60.11±0.39 | 55.25±0.23 | 48.88±0.11 | 46.83 ±0.17 | 44.22 ±0.21 | 44.14 ±0.21 | | VIR-TRADES | 82.03±0.13 | 59.62±0.08 | 54.86±0.17 | 53.11±0.17 | 51.95±0.09 | 51.03±0.16 | 50.89 ±0.12 | | VIR-AT | 84.59±0.18 | 61.35±0.13 | 56.42± 0.18 | 52.18±0.15 | 50.56±0.12 | 48.21±0.08 | 48.04 ±0.08 | | Defense | Natural | FGSM | PGD-100 | CW | FMN-100 | AA | MDE | |------------|------------|-------------|-------------|-------------|-------------|------------|-------------| | AT | 86.17±0.26 | 61.68±0.13 | 54.45±0.31 | 55.17 ±0.33 | 54.05±0.21 | 51.90±0.28 | 51.74 ±0.19 | | TRADES | 85.20±0.25 | 61.47±0.35 | 54.81±0.31 | 56.02±0.29 | 53.95 ±0.10 | 53.09±0.18 | 52.71 ±0.12 | | MART | 84.59±0.11 | 62.20±0.14 | 56.45±0.16 | 54.52±0.11 | 53.17±0.12 | 51.21±0.23 | 50.92 ±0.15 | | GAIRAT | 85.24±0.19 | 62.67±0.36 | 57.09± 0.27 | 44.96±0.2 | 44.50±0.05 | 42.29±0.11 | 41.92 ±0.15 | | MAIL-AT | 84.83±0.39 | 64.09±0.32 | 58.86±0.25 | 51.26±0.20 | 51.64±0.15 | 47.10±0.22 | 47.04 ±0.11 | | VIR-TRADES | 84.95±0.21 | 63.07 ±0.17 | 57.56±0.21 | 56.92 ±0.19 | 55.72±0.13 | 54.55±0.26 | 54.14 ±0.09 | | VIR-AT | 87.13±0.36 | 64.71±0.29 | 59.82±0.29 | 56.11± 0.16 | 54.14±0.23 | 51.94±0.22 | 51.83 ±0.12 | | | | SVHN | | CIFAR-100 | | | |------------|-------------|-------------|--------------|-------------|--------------|--------------| | Defense | Natural | PGD-100 | AA | Natural | PGD-100 | AA | | AT | 92.94±0.46 | 54.74±0.28 | 45.94±0.29 | 59.60± 0.35 | 28.57± 0.25 | 24.75 ± 0.21 | | TRADES | 92.14±0.43 | 55.24±0.23 | 45.64±0.29 | 60.73 ±0.33 | 29.83± 0.25 | 24.83± 0.29 | | MART | 91.84±0.46 | 55.54±0.21 | 43.39±0.36 | 54.19± 0.26 | 29.94±0.21 | 25.30±0.50 | | GAIRAT | 90.47± 0.58 | 61.37± 0.28 | 37.27± 0.31 | 58.43± 0.26 | 25.74± 0.41 | 17.57 ± 0.33 | | MAIL-AT | 91.54±0.35 | 62.16± 0.18 | 41.18±0.29 | 60.74±0.15 | 27.62±0.0.27 | 22.44±0.53 | | VIR-TRADES | 89.24 ±0.23 | 58.63 ±0.32 | 50.06±0.22 | 59.20 ±0.35 | 31.69 ± 0.24 | 26.25 ±0.25 | | VIR-AT | 91.65±0.35 | 61.52 ±0.43 | 45.91 ± 0.41 | 59.85 ±0.11 | 32.06±0.35 | 24.73 ±0.22 | | Defense | Natural | PGD-20 | AA | |------------|-------------|-------------|-------------| | AT | 48.79±0.15 | 23.96±0.15 | 18.06±0.15 | | TRADES | 49.11±0.23 | 22.89±0.26 | 16.81 ±0.19 | | MART | 45.91 ±0.24 | 26.03 ±0.36 | 19.23 ±0.23 | | GAIRAT | 46.09 ±0.14 | 17.21±0.33 | 12.92 ±0.23 | | MAIL-AT | 49.72 ±0.36 | 24.32 ±0.33 | 17.61 ±0.35 | | VIR-TRADES | 51.17 ±0.19 | 25.82 ±0.13 | 18.68 ±0.19 | | VIR-AT | 49.09 ±0.25 | 26.65±0.32 | 18.42 ±0.21 | Table 2: Comparing white-box attack robustness (accuracy %) for WideResNet-34-10 on CIFAR-10. For all methods, distance ϵ = 0.031. The best-performing methods under each attack are highlighted. Table 3: Comparing white-box attack robustness (accuracy %) for RN-18 on SVHN and CIFAR-100. For all methods, distance ϵ = 0.031. Table 4: Comparing white-box attack robustness (accuracy %) for ResNet-18 on TinyImageNet. Perturbation size ϵ = 8/255 and step size κ = 1/255 are used for all methods. Comparison with prominent reweighting methods. The results in Tables 1-4 show that, in general, the proposed VIR method is more effective than the two prominent reweighting methods *MAIL-AT* and | Defense | Square | SPSA | |------------|-------------|-------------| | AT | 60.12±0.25 | 61.05±0.16 | | TRADES | 59.18±0.19 | 61.15±0.09 | | MART | 58.72±0.18 | 58.93±0.11 | | GAIRAT | 51.97±0.10 | 52.15 ±0.30 | | MAIL-AT | 58.34±0.16 | 59.24±0.32 | | VIR-TRADES | 59.73±0.08 | 61.45±0.26 | | VIR-AT | 60.51 ±0.11 | 62.59±0.21 | Table 5: Comparing black-box attack robustness (accuracy %) for Wideresnet-34-10 trained on CIFAR-10. GAIRAT, especially under stronger attacks FMN, CW, and Autoattack. For example, *VIR-AT* significantly outperforms *MAIL-AT* against FMN-100 (+2.5%), CW (+ 5.0%), and Autoattack (+ 5.0%) attacks on CIFAR-10 using WRN-34-10. The proposed *VIR-AT* method achieves large improvement margins over GAIRAT against FMN-100 (+ 9.6%), CW (+ 11.1%), and Autoattack (+ 9.6%) on CIFAR-10 using WRN34-10. *VIR-AT* also consistently performs better than *MAIL-AT* and *GAIRAT* on CIFAR-100 and TinyImageNet against PGD and Autoattack attacks. On SVHN, *MAIL-AT* slightly outperforms our *VIR-AT* method against PGD-100, however, *VIR-AT* performs significantly better against Autoattack by over 4%. Experimental results presented in Table 5 show that *VIR-AT* performs better than *GAIRAT* and *MAILAT* against strong query-based black-box attacks *Square* and *SPSA*. Furthermore, the poorer performance recorded by GAIRAT on *black-box* attacks compared to the PGD-100 *white-box* attack may potentially indicate that *GAIRAT* encourages gradient obfuscation according to the arguments made in (Athalye et al., 2018). Comparison with non-reweighted variants. We applied the proposed reweighting method to a prominent variant of vanilla AT, TRADES (Zhang et al., 2019). Experimental results in Tables 1-5 show that VIR-TRADES consistently improves upon *TRADES* against all attacks, especially stronger attacks CW, FMN, and Autoattack. In addition, the confusion matrices displayed in Figure 2 clearly shows the improvement in class-wise accuracies of data-sets corresponding to harder classes. We show in Figure 3 the class-weight distribution (the sum ![9_image_0.png](9_image_0.png) of all sample weights in each class). From Figure 3, we observe that our proposed instance-wise reweighting assigns the largest weight to the most vulnerable class '3' and the least weight to the least vulnerable class '1' for both VIR-AT and VIR-TRADES. Figure 2: Confusion matrix displaying robust accuracies under PGD-100 attack using VIR-AT and VIRTRADES for ResNet18 on CIFAR-10 dataset. Finally, these experimental results show that the existing reweighting methods *GAIRAT* and *MAIL* improved upon the vanilla AT against FGSM and PGD attacks but performed much worse against stronger attacks CW, FMN, and Autoattack. In contrast, our proposed *VIR-AT* performed significantly better than AT ![10_image_0.png](10_image_0.png) Figure 3: Class-weight distribution (sum of all sample weights in each class) for VIR-AT and VIR-TRADES for ResNet18 during training on CIFAR-10 dataset. on PGD-100 and FGSM without sacrificing performance against CW, FMN, and Autoattack. A likely explanation for the relatively poor performance of existing reweighting functions on stronger attacks such as Autoattack and CW is that they significantly diminish the influence of less vulnerable examples. While upweighting losses of vulnerable examples can be helpful, over-relying on the vulnerable samples may be detrimental as observed in (Dong et al., 2021). Our proposed VIR function achieved a good balance between weighting vulnerable and less vulnerable examples as shown in the ablation studies presented in the next section. ## 5.6 Ablation Studies And Impact Of Hyperparameters 5.6.1 Ablation Studies We conduct ablation studies on the proposed weight assignment function using ResNet-18 on CIFAR-10. The training settings are the same as those described in Section 5.1. We study the influence of each reweighting component on the robustness against white-box and black-box attacks. The results are presented in Table 6. | Reweighting component | Natural | PGD-100 | CW | AA | Square | SPSA | |-------------------------------------|------------|------------|------------|------------|-------------|------------| | LCE(fθ(x ′ i), yi) | 84.12±0.16 | 51.58±0.17 | 51.75±0.23 | 47.92±0.35 | 55.32±0.09 | 56.85±0.29 | | ′ | | | | | | | | Sv(xi, yi) · LCE(fθ(x i), yi) | 84.35±0.17 | 54.50±0.11 | 49.61±0.19 | 45.48±0.27 | 54.68±0.15 | 56.08±0.25 | | ′ | | | | | | | | Sd(xi, x′ i) · LCE(fθ(x i), yi) | 83.90±0.16 | 51.17±0.14 | 51.90±0.35 | 47.95±0.29 | 56.22 ±0.19 | 56.25±0.21 | | ′ | | | | | | | | w(xi, x′ i , yi) · LCE(fθ(x i), yi) | 84.59±0.18 | 56.42±0.18 | 52.18±0.15 | 48.21±0.08 | 56.89±0.19 | 57.35±0.15 | Table 6: Ablation studies on *VIR-AT* showing the impact of reweighting components proposed in Eq. 5 and 6 on white-box and black-box attack robustness (accuracy %). Reweighting LCE(fθ(x ′ i ), yi) with only Sv(xi, yi) improves robustness against PGD-100, but yeilds reduced robustness against CW and Autoattack. When LCE(fθ(x ′ i ), yi) is reweighted using Sd(xi, x′i ), no significant improvement in robustness was observed against PGD-100, however, stable performance over stronger attacks are observed. Reweighting LCE(fθ(x ′ i ), yi) with the proposed reweighting function significantly improves robustness accuracy against both white-box and black-box attacks. The ablation studies on *VIR-TRADES* on CIFAR-10 using ResNet-18 are presented in Table 7. | Reweighting component | Natural | PGD-100 | CW | AA | Square | SPSA | |-----------------------------------------------------------|-------------|--------------------------------------------------------|------------|------------|-------------|------------| | ′ | | | | | | | | LCE(fθ(xi), y) + 1 λKL(fθ(xi)∥fθ(x i)) | 83.56±0.35 | 52.07 ±0.25 | 52.26±0.07 | 48.32±0.19 | 55.47±0.13 | 56.36±0.23 | | ′ | | | | | | | | LCE(fθ(xi), y) + 1 λ Sv(xi, yi)KL(fθ(xi)∥fθ(x i)) | 77.95±0.11 | 54.21±0.19 | 51.70±0.13 | 49.95±0.15 | 55.18 ±0.22 | 56.11±0.21 | | ′ | | | | | | | | LCE(fθ(xi), y) + 1 λ Sd(xi, x′ i)KL(fθ(xi)∥fθ(x i)) | 86.59±0.21 | 49.71±0.18 | 49.65±0.15 | 46.77±0.13 | 54.97±0.11 | 55.85±0.15 | | LCE(fθ(xi), y) + 1 λ w(xi, x′ i , yi)KL(fθ(xi)∥fθ(x ′ i)) | 82.03±-0.13 | 54.86±0.17 53.11±0.17 51.03±0.16 56.58±0.19 57.80±0.09 | | | | | Table 7: Ablation studies on *VIR-TRADES* showing the impact of reweighting components proposed in Eq. 5 and 6 on white-box and black-box attack robustness (accuracy %). The results in Table 7 show that reweighting *TRADES* with Sv(xi, yi) yields better performance than the original *TRADES* against PGD-100 (+ 2.14 %) and Autoattack (+ 1.63 %). However, it yields lower performance against CW (-0.56 %) and natural examples (-5.61%). Reweighting *TRADES* using Sd(xi, x′i ) improves the natural accuracy by +3.03% but yields lower performance against attacks. The proposed reweighting function w(xi, x′i , yi) consistently improves *VIR-TRADES* over *TRADES* against PGD-100 (+ 2.97%), CW (+ 0.85%), Autoattack (+ 2.71%), Square attack (+ 1.1%), and SPSA (+ 1.44%). The results show neither Sv(xi, yi) nor Sd(xi, x′i ) is sufficient for improving *TRADES* and they achieve the best performance when combined. ## 5.6.2 Impact Of Hyperparameters We provide a brief discussion of the impact of the hyperparameters *γ, α*, and β used in the proposed reweigthing function. Hyperparameter γ appears in the power of the exponential function in Eq. 5. Setting γ high results in a large disparity in weights between vulnerable examples and less vulnerable examples. α helps with the numerical stability of Eq. 5. Introducing β into the reweighting function in Eq. 7 allows for a balance between relative weights of vulnerable examples and less vulnerable examples. Without adding β, the reweighting function could potentially return very low values, which significantly diminishes the influence of less vulnerable training samples. On the other hand, if β is set too high, the desired reweighting effect is lost. The values of these hyperparameters are heuristically determined. Experimental studies on the impact of the hyperparameters on the performance are shown in Appendix C. ## 6 Conclusion In this paper, we propose a novel vulnerability-aware instance-wise reweighting strategy for adversarial training. The proposed reweighting strategy takes into consideration the intrinsic vulnerability of natural examples used for crafting adversarial examples during adversarial training. We show that existing reweighting methods fail to achieve significant robustness against stronger white-box and black-box attacks. Lastly, we experimentally show that the proposed reweighting function effectively improves adversarial training without diminishing its performance against stronger attacks. In addition, the proposed method shows improvement on every dataset evaluated. ## References Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness? Advances in Neural Information Processing Systems, 32, 2019. Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, pp. 484–501. Springer, 2020. Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning, pp. 274–283. PMLR, 2018. Modeste Atsague, Olukorede Fakorede, and Jin Tian. A mutual information regularization for adversarial training. In Asian Conference on Machine Learning, pp. 188–203. PMLR, 2021. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387–402. Springer, 2013. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. Ieee, 2017. Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. Advances in Neural Information Processing Systems, 32, 2019. Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Robust overfitting may be mitigated by properly learned smoothening. In International Conference on Learning Representations, 2020. Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning, pp. 2196–2205. PMLR, 2020a. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International conference on machine learning, pp. 2206–2216. PMLR, 2020b. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Mma training: Direct input space margin maximization through adversarial training. In International Conference on Learning Representations, 2019. Chengyu Dong, Liyuan Liu, and Jingbo Shang. Data quality matters for adversarial training: An empirical study. arXiv preprint arXiv:2102.07437, 2021. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9185–9193, 2018. Olukorede Fakorede, Ashutosh Nirala, Modeste Atsague, and Jin Tian. Improving adversarial robustness with hypersphere embedding and angular-based regularizations. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2015. Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning, pp. 2137–2146. PMLR, 2018. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in neural information processing systems, 32, 2019. Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama, et al. Probabilistic margins for instance reweighting in adversarial training. Advances in Neural Information Processing Systems, 34:23258–23269, 2021. Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, and Yu-Gang Jiang. Imbalanced gradients: a subtle cause of overestimated adversarial robustness. Machine Learning, pp. 1–26, 2023. Xinsong Ma, Zekai Wang, and Weiwei Liu. On the tradeoff between robustness and fairness. In Advances in Neural Information Processing Systems, 2022. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582, 2016. Yurii Evgen'evich Nesterov. A method of solving a convex programming problem with convergence rate o\bigl(kˆ2\bigr). In Doklady Akademii Nauk, volume 269, pp. 543–547. Russian Academy of Sciences, 1983. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, and Hang Su. Boosting adversarial training with hypersphere embedding. Advances in Neural Information Processing Systems, 33:7779–7792, 2020. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE symposium on security and privacy (SP), pp. 582–597. IEEE, 2016. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519, 2017. Maura Pintor, Fabio Roli, Wieland Brendel, and Battista Biggio. Fast minimum-norm adversarial attacks through adaptive norm constraints. Advances in Neural Information Processing Systems, 34:20052–20062, 2021. Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. Advances in neural information processing systems, 31, 2018. Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial intelligence and machine learning for multi-domain operations applications, volume 11006, pp. 369–386. SPIE, 2019. Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766, 2017. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018. Jonathan Uesato, Brendan O'donoghue, Pushmeet Kohli, and Aaron Oord. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025–5034. PMLR, 2018. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In International Conference on Learning Representations, 2019. Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, and Shuicheng Yan. Better diffusion models further improve adversarial training. arXiv preprint arXiv:2302.04638, 2023. Zi Wang. Zero-shot knowledge distillation from a decision-based black-box model. In International Conference on Machine Learning, pp. 10675–10685. PMLR, 2021. Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. Advances in Neural Information Processing Systems, 33, 2020. Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, and Jiliang Tang. To be robust or to be fair: Towards fairness in adversarial training. In International Conference on Machine Learning, pp. 11492–11501. PMLR, 2021. Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, and Liwei Wang. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472–7482. PMLR, 2019. Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan Kankanhalli. Geometryaware instance-reweighted adversarial training. In International Conference on Learning Representations, 2020. ## A Appendix: Theoretical Explanation Of Input Vulnerability In this section, we show that, under a special setting, the example vulnerability directly corresponds to the probability of predicting the true class . We consider a binary classification task of natural examples sampled from a Gaussian mixture distribution, following the settings described in (Carmon et al., 2019; Schmidt et al., 2018; Xu et al., 2021; Ma et al., 2022). We formally define the settings below. Definition A.1 (**Gaussian Mixture Distribution**.) Let −µ, µ *be the mean parameters corresponding to* data sampled from two classes y = {−1, +1} *according to Gaussian distribution.* Let σ−, σ+ represent the variances of the classes -1 and +1 . Then the Gaussian mixture model is defined by the following distribution over (x,y) ∈ Rd *× {±*1}: $$(11)$$ $$\begin{array}{r l}{y\sim\{-1,+1\}}&{{}\mu=(\overbrace{\eta,...,\eta}^{d=d i m})}\\ {\mathbf{x}\sim{\begin{cases}{\mathcal{N}}(\mu,\sigma_{+}^{2}{\mathcal{I}}),&{}{\mathrm{if~}}y=+1\\ {\mathcal{N}}(-\mu,\sigma_{-}^{2}{\mathcal{I}}),&{}{\mathrm{if~}}y=-1\end{cases}}}\end{array}$$ I *is the d-dimension identity matrix.* To design classes with different intrinsic vulnerabilities, we ensure that there is a K-factor difference between the variances of the classes such that σ− : σ+ = 1 : K and K > 1. The larger variance of the examples corresponding to class +1 intuitively suggests higher vulnerability than an example corresponding to class -1. The analysis is done using a linear classifier as follows: f(x) = *sign*(⟨ω, x⟩ + b) (12) where ω and b respectively represent the model weights and bias term, and *sign*(z) returns 1 if z ≥ 0, else −1. The natural risk is denoted as: Rnat = E(x,y)∼D∗ (1(f(x) ̸= y) $=Pr(f_{nat}(\mathbf{x})\neq y)\\=Pr(y=-1)\cdot Pr(f_{nat}(\mathbf{x})=+1|y=-1)+Pr(y=+1)\cdot Pr(f_{nat}(\mathbf{x})=-1|y=+1)\\=Pr(y=-1)\cdot\mathbf{R}_{nat}^{-}(f_{nat})+Pr(y=+1)\cdot\mathbf{R}_{nat}^{+}(f_{nat})$ $\left(12\right)^{\frac{1}{2}}$ $$\left(13\right)$$ R− nat(fnat) and R+ nat(fnat) represent the class-wise risk of misclassifying -1 and +1 respectively, fnat is a naturally trained classifier. Theorem 1 ((Xu et al., 2021)) Given a Gaussian distribution D∗, a naturally trained classifier fnat which minimizes the expected natural risk: fnat(x) = arg minf E(x,y)∼D∗ (1(f(x) ̸= y)*. It has the classwise natural risk:* ${\bf R}_{nat}^{-}(f_{nat})=Pr\{{\cal N}(0,1)\leq A-K\cdot\sqrt{A^{2}+q(K)}\}$ ${\bf R}_{nat}^{+}(f_{nat})=Pr\{{\cal N}(0,1)\leq-K\cdot A+\sqrt{A^{2}+q(K)}\}$ where A =2 K2−1 √ dη σand q(K) = 2logK K2−1 which is a positive constant and depends only on K*. Therefore,* class +1 has a larger risk: R− nat(fnat) < R+(fnat). Theorem 1 shows that the vulnerable class +1 (with larger variance) is more difficult to classify than -1, because the an optimal fnat has a higher standard error for +1 than -1. Corollary 1 (Vulnerability of a data sample.) The vulnerability of an example may be estimated using a classifier's estimated class-probability. Proof 1 *The class-wise risks corresponding to classes -1 and +1 can be respectively written as:* $$\mathbf{R}_{n a t}^{-}(f_{n a t})=P r(f_{n a t}(\mathbf{x}))$$ nat(fnat) = P r(fnat(x) = −1|y = +1) = P r(⟨ω, x⟩ + b > 0) $$\mathbf{R}_{n a t}^{+}(f_{n a t})=P r(f_{n a t}(\mathbf{x}))$$ $\pi$ $\mathcal{L}$ . nat(fnat) = P r(fnat(x) = +1|y = −1) = P r(⟨ω, x⟩ + b < 0) Let P − nat(fnat) and P + nat(fnat) respectively denote the correct probability estimates of classes -1 and +1. Then, $$\begin{array}{l}{{\mathbf{P}_{n a t}^{-}(f_{n a t})=P r(f_{n a t}(\mathbf{x})=-1|y=-1)}}\\ {{\mathbf{P}_{n a t}^{+}(f_{n a t})=P r(f_{n a t}(\mathbf{x})=+1|y=+1).}}\end{array}$$ ## From Theorem 1, R− Nat(Fnat) < R+ Nat(Fnat). *It Follows That* P − Nat(Fnat) > P + Nat(Fnat). B Additional Experiments On Data Augmentation Based Defense We perform additional experiments on the data augmentation-based defense using extra datasets generated using the Elucidating Diffusion Model (EDM) following Wang et al. (2023). We utilize Wideresnet-28-10 as the backbone model for the experiments. The model was trained using 20M EDM-generated data for 400 epochs with the training batch size of 512. Common augmentation described in (Wang et al., 2023) is applied on the training samples. We employ the SGD optimizer with Nesterov momentum (Nesterov, 1983) where the momentum factor and weight decay were set to 0.9 and 5e −4 respectively. We use the cyclic learning rate schedule with cosine annealing (Smith & Topin, 2019),where the initial learning rate is set to 0.2. For VIR-TRADES, we used the settings described in section 5.3, while for VIR-AT, we set γ to 5.0, β to 0.2 and α to 7.0. The obtained results are reported in Table 8. The experimental results show improvement of VIR-TRADES over TRADES and VIR-AT over AT on CIFAR-10 Table 8: Comparing robustness (accuracy %) for Wideresnet-28-10 on CIFAR-10 trained on 20M EDMgenerated data. | Defense | Natural | PGD-100 | CW | AA | |------------|-------------|------------|------------|------------| | TRADES | 90.33±0.14 | 65.74±0.16 | 64.01±0.17 | 63.03±0.13 | | AT | 91.56±0.09 | 66.23±0.12 | 65.62±0.09 | 62.42±0.11 | | VIR-TRADES | 89.73±0.12 | 67.23±0.08 | 65.47±0.05 | 64.31±0.17 | | VIR-AT | 92.69 ±0.25 | 67.98±0.25 | 66.95±0.17 | 62.95±0.09 | ## C Ablation Studies On Hyperparameters We show in the following experimental results on varying hyperparameter *α, γ, β* values. The results are obtained from training ResNet-18 on CIFAR-10 dataset. Table 9: Ablation studies on *VIR-AT* showing the impact of the β hyperparameter on the performance of the proposed reweighting function. Reweighting Function Natural PGD-100 CW AA Sv(xi, yi) · Sd(xi, x′i) + (β = 0) 84.48±0.11 **57.20**±0.14 51.25±0.12 47.32±0.11 Sv(xi, yi) · Sd(xi, x′i) + (β = 0.007) **84.59**±0.18 56.42±0.18 52.18±0.15 48.21±**0.08** Sv(xi, yi) · Sd(xi, x′i) + (β = 0.1) 84.26± 0.07 53.52±0.13 52.20±**0.19** 48.17±0.13 | Reweighting Function | Natural | PGD-100 | CW | AA | |---------------------------------------|------------|----------------------------------|------------|------------| | Sv(xi, yi) · Sd(xi, x′ i) + (β = 0.5) | 83.32±0.18 | 53.70±0.13 | 51.60±0.12 | 49.29±0.09 | | Sv(xi, yi) · Sd(xi, x′ i) + (β = 1.6) | 82.03±0.13 | 54.86±0.17 53.11±0.17 51.03±0.16 | | | | Sv(xi, yi) · Sd(xi, x′ i) + (β = 2.0) | 80.86±0.10 | 54.58±0.15 | 52.62±0.08 | 50.56±0.13 | Table 10: Ablation studies on *VIR-TRADES* showing the impact of the β hyperparameter on the performance of the proposed reweighting function. Table 11: Ablation studies on *VIR-AT* showing the impact of the α and γ hyperparameters on the performance of the proposed reweighting function. | α | γ | β | Natural PGD-100 | CW | AA | |-----|----------|------------|-------------------|-----------------------|-----------------------| | 1 | 10 | 0.007 | 83.75±0.11 | 54.38±0.08 | 52.20±0.15 48.25±0.15 | | 2 | 10 | 0.007 | 83.80±0.09 | 55.28±0.05 | 52.13±0.15 48.18±0.16 | | 3 | 10 | 0.007 | 84.16±0.06 | 55.85±0.10 | 52.05±0.11 48.15±0.11 | | 5 | 10 | 0.007 | 84.31±0.06 | 55.92±0.10 | 51.80±0.09 47.85±0.09 | | 7 | 10 0.007 | 84.59±0.18 | 56.42±0.18 | 52.20±0.19 48.17±0.13 | | | 8 | 10 | 0.007 | 84.19±0.07 | 56.68±0.12 | 52.01±0.09 48.02±0.10 | | 7 | 2 | 0.007 | 83.96±0.15 | 54.75±0.13 | 51.90±0.08 47.89±0.05 | | 7 | 4 | 0.007 | 84.52±0.08 | 54.92±0.10 | 52.01±0.10 47.97±0.06 | | 7 | 6 | 0.007 | 84.36±0.08 | 55.82±0.11 | 51.76±0.08 47.70±0.08 | | 7 | 8 | 0.007 | 84.15±0.08 | 56.30±0.11 | 51.91±0.12 47.87±0.07 | | α | γ | β | Natural PGD-100 | CW | AA | |-----|---------|------------|-------------------|-----------------------|-----------------------| | 1 | 3.0 | 1.6 | 82.10±0.15 | 54.07±0.08 | 52.29±0.09 50.08±0.12 | | 2 | 3.0 | 1.6 | 82.13±13 | 54.10±0.05 | 52.34±0.09 50.13±0.08 | | 3 | 3.0 | 1.6 | 82.12±0.11 | 54.16±0.10 | 52.51±0.10 50.41±0.07 | | 5 | 3.0 | 1.6 | 82.06±0.11 | 54.31±0.08 | 52.54±0.13 50.43±0.05 | | 7 | 3.0 | 1.6 | 82.05±0.10 | 54.52±0.09 | 52.59±0.11 50.52±0.05 | | 8 | 3.0 1.6 | 82.03±0.13 | 54.86±0.17 | 53.11±0.17 51.03±0.16 | | | 8 | 1.0 | 1.6 | 81.05±0.10 | 53.96±0.13 | 51.85±0.06 49.89±0.09 | | 8 | 2.0 | 1.6 | 81.09±0.12 | 54.26±0.15 | 52.41±0.08 50.28±0.17 | | 8 | 4.0 | 1.6 | 81.91±0.09 | 54.42±0.13 | 52.70±0.07 50.34±0.11 | Table 12: Ablation studies on *VIR-TRADES* showing the impact of the α and γ hyperparameters on the performance of the proposed reweighting function.
Review 1: Summary: This paper investigates on a better instance reweighting method to help metigate the discrepancy of robust accuracy between easy and hard samples. Based on the observations and shortcomings of previous work, this work proposes a vulnerability measurement unitlizing true class probablity and disparity between natural and adversarial output. Experiments are provided to show the effectiveness of the proposed method on improving robust accuracy, and ablation study is conducted on the proposed vulnerablity metric. Strengths and Weaknesses: ## Strength This paper is overall well written and well motivated. The paper provides clear discussion on why the reweighting is needed, how the proposed reweighting criteria is proposed, and how is it different from previous methods. The formulation of the porposed method makes sense, and emperical results are promixing on multiple models and datasets. ## Weakness The proposed method invloves multiple hyperparameters in the formulation of the reweighting criteria, yet their usage and imact is not well discussed. More analysis and ablation study should be provided on how these hyperparameters impact the performance, and how should they be chosen in practice. Requested Changes: Please discuss more on the impact of hyperparameters (e.g. $\alpha, \gamma$ in Equ. (5) and $\beta$ in Equ. (10)) on the performance of the proposed method Broader Impact Concerns: No concern on broader impact ================================================== Review 2: Summary: This paper proposes a new strategy for weighting instances during adversarial training. In the traditional adversarial training method (without weighting), depending on the original class label, adversarial examples have different effects on the resulting robust accuracy. Some prior works tried to reduce this unfairness among adversarial examples by re-weighting the training loss and the authors propose another weighting strategy. The authors’ weighting is based on the vulnerability of the natural sample $\mathbf{x}$ and the discrepancy between the model predictions $f_\theta(\mathbf{x}_i)$ and $f_\theta(\mathbf{x}’_i)$. First, the vulnerability factor $S_v$ is measured by the model’s confidence $f_\theta(\mathbf{x})_y$ in the event that natural sample $\mathbf{x}$ belongs to class $y$. Then, the discrepancy factor $S_d$ is measured by the KL divergence of the model prediction probabilities $f_\theta(\mathbf{x})$ on a natural example $\mathbf{x}$ from the model prediction probabilities $f_\theta(\mathbf{x}’)$ on the adversarial example $\mathbf{x}’$ generated from $\mathbf{x}$. Then the weighting $w$ is determined as a linear function $w=\alpha S_v \cdot S_d + \beta$ for predetermined parameter $\alpha$ and $\beta$. Finally, the authors propose the use of this weighting with some burn-in period so that the model can learn enough information about the data. An ample amount of experiments are performed to present the improvement from the proposed weighting method. For most experiments, the adversarial training with the proposed method demonstrated descent robust accuracy compared to the existing methods. Especially, the proposed method successfully improved the robust accuracy against some attacks, e.g., CW and AutoAttack, whereas the existing method failed to maintain the robust accuracies from the adversarial training. The authors also present some ablation studies to show the effects of each weighting factor. Strengths and Weaknesses: # Strength 1. Experiments are done extensively on different models and different datasets against various attacks. Also, the results support the improvement well, demonstrating the success of the authors reweighting method. 1. The proposed method shows good performance against stronger attacks, e.g., CW and AutoAttack, whereas the existing methods failed to maintain their performances. 2. Ablation studies demonstrate the effect of each weighting factor well. The observations on each factor would be useful for other researchers who study the reweighting in AT. # Weaknesses 1. I can observe that $S_v$ (weighting factor from the vulnerability) is bounded, but $S_d$ (weighting factor from the KL divergence) is not bounded. This sounds like that, for some adversarial examples, the weighting could be mainly dominated by $S_d$. 1. Can the authors provide the distribution of $S_v$ values and $S_d$ values during the training? Experimental evidence that $S_d$ does not explode dramatically would be enough to support the authors' strategy. 2. If it is not possible to provide such evidence, isn’t it better to bound $S_d$ somehow? Requested Changes: 1. I can see some writing issues, e.g., grammar, notations, etc. I suggest the authors read through the writing. 1. I can see many citations in parentheses in the middle of a sentence, and this sometimes makes the writing hard to read. Please distinguish the use of `\citet{}` (citation as a part of a sentence) and `\citep{}`(citation not a part of a sentence). 2. The notations are not used consistently. For example, the class label $y$ is sometimes written in boldface in cases where it is definitely not a vector. Even boldface letters appear in different typefaces. Please go through the doc and make the notations consistent. 2. In my opinion, the authors should emphasize the failures of existing weighting strategies against CW and AutoAttack more. 1. In particular, ablation studies show that the introduction of $S_d$ was crucial for the robustness against those attacks. Can the authors relate this to the reason why the prior methods fail against the stronger attacks? 3. While determining the hyperparameters heuristically makes sense to me, it would be better to have some experimental study on these hyperparameters. 1. For example, my understanding of $\beta$ is that it lower bounds the weighting so that the training loss (in Eq. (11)) does not disappear for very low $S_v$ and $S_d$. Based on this understanding, setting $\beta=0.007$ seems quite small to me and I want to see the effect of setting this parameter higher. 4. One minor request: I don’t think that Section 4.2 has more valuable information than the ablation study in Appendix. (Of course, it is a meaningful theoretical insight, but this paper is not a theory-intensive work, anyway.) Consider moving Section 4.2 to Appendix and moving the ablation study in Appendix to the main body of the paper. Broader Impact Concerns: I don’t see a particular broader impact concern regarding this paper. ================================================== Review 3: Summary: This paper studies the imbalance problem in adversarial training and proposes to reweight the importance of each training sample based on the prediction disparity between the nature and adversarial version of the model. The assumption is that a large disparity indicates more vulnerability, which is proved from the perspective of risk-to-class-probability with Gaussian distribution. The proposed reweighting scheme demonstrates improvement in standard adversarial training and TRADES. Overall, the key contribution is the use of prediction change to reweight a training sample during adversarial training. Strengths and Weaknesses: Strengths: 1. The study of a challenging problem in adversarial machine learning, i.e., how to train adversarially robust models. 2. The proposed idea is simple enough to be applied to any existing AT method. 3. The improvement to TRADES according to AutoAttack seems to be significant. Weaknesses: 1. Not sure if instance reweighting is better than class reweighting or class-class reweighting. Certain classes are known to be easily attacked into other classes. How the proposed approach can address this type of imbalance? 2. Should the instance reweighting consider class imbalance problem, as some classes are notably harder than others? How the class-wsie accumulative of the instance reweighting aligns with the class distribution? 3. It is not clear if the proposed method is compatible with recent data augmentation methods, e.g., the one [1] on the robustbench leaderboard. 4. Since the reweighting will skew the logits distribution, the author should also test against the MDE attack [2] to prove it does not cause imbalanced gradients. [1] Better Diffusion Models Further Improve Adversarial Training. [2] https://github.com/HanxunH/MDAttack Requested Changes: 1. Show the class weight distribution (sum over all sample weights in the class) caused by the reweighting during training. 2. Test against the MDE attack. 3. Show its compatibility with recent SOTA defense, i.e., the data augmentation based defense. Broader Impact Concerns: No obvious concerns have been identified. ================================================== Metareview: Recommendation: Accept as is Comment: The authors have successfully addressed some major comments raised by the Reviewers during phase-1 review. As a result, all reviewers are satisfied with the revision and agree that the paper is interesting for audience and provide sufficient evidence to support their claims. ==================================================
# Pre-Trained Perceptual Features Improve Differentially Private Image Generation | Frederik Harder | fharder@tue.mpg.de | |----------------------------------------------------------------------------------------------|----------------------| | Max Planck Institute for Intelligent Systems & University of Tübingen Milad Jalali Asadabadi | miladj7@cs.ubc.ca | | University of British Columbia Danica J. Sutherland | dsuth@cs.ubc.ca | | University of British Columbia & Alberta Machine Intelligence Institute Mijung Park | mijungp@cs.ubc.ca | Mijung Park mijungp@cs.ubc.ca University of British Columbia & Alberta Machine Intelligence Institute Reviewed on OpenReview: *https://openreview.net/forum?id=R6W7zkMz0P* ## Abstract Training even moderately-sized generative models with differentially-private stochastic gradient descent (DP-SGD) is difficult: the required level of noise for reasonable levels of privacy is simply too large. We advocate instead building off a good, relevant representation on an informative public dataset, then learning to model the private data with that representation. In particular, we minimize the maximum mean discrepancy (MMD) between private target data and a generator's distribution, using a kernel based on perceptual features learned from a public dataset. With the MMD, we can simply privatize the data-dependent term once and for all, rather than introducing noise at each step of optimization as in DP-SGD. Our algorithm allows us to generate CIFAR10-level images with ϵ ≈ 2 which capture distinctive features in the distribution, far surpassing the current state of the art, which mostly focuses on datasets such as MNIST and FashionMNIST at a large ϵ ≈ 10. Our work introduces simple yet powerful foundations for reducing the gap between private and non-private deep generative models. Our code is available at https://github.com/ParkLabML/DP-MEPF. 1 ## 1 Introduction The gold standard privacy notion, *differential privacy* (DP), is now ubiquitous in a diverse range of academic research, industry products (Apple, 2017), and even government databases (National Conference of State Legislatures, 2021). DP provides a mathematically provable privacy guarantee, which is its main strength and reason for its popularity. DP even offers means of tracking the effect of multiple accesses to the same data on it's overall privacy level, but with each access, the privacy of the data gradually degrades. To guarantee a high level of privacy, one thus needs to limit access to data, a challenge in applying DP with the usual iterative optimization algorithms used in machine learning. Differentially private data generation solves this problem by creating a synthetic dataset that is *similar* to the private dataset, in terms of some chosen similarity metric. While producing such a synthetic dataset incurs a privacy loss, the resulting dataset can be used repeatedly without further loss of privacy. Classical approaches, however, typically assume a certain class of pre-specified purposes on how the synthetic data can be used (Mohammed et al., 2011; Xiao et al., 2010; Hardt et al., 2012; Zhu et al., 2017). If data analysts use the data for other tasks outside these pre-specified purposes, the theoretical guarantees on its utility are lost. 1This is a revision of the first published version which contained erroneous FID scores. Please refer to this paper's OpenReview page for a clarification of our errors and the older version. 1 To produce synthetic data usable for potentially any purpose, many papers on DP data generation have utilized the recent advances in deep generative modelling. The majority of these approaches are based on the generative adversarial network (GAN; Goodfellow et al., 2014) framework, where a discriminator and a generator play an adversarial game to optimize a given distance metric between the true and synthetic data distributions. Most approaches under this framework have used DP-SGD (Abadi et al., 2016), where the gradients of the discriminator (which compares generated samples to private data) are privatized in each training step, resulting in a high overall privacy loss (Park et al., 2017; Torkzadehmahani et al., 2019; Yoon et al., 2019; Xie et al., 2018; Frigerio et al., 2019). Another challenge is that, as the gradients must have bounded norm to derive the DP guarantee, the amount of noise for privatization in DP-SGD increases proportionally to the dimension of the discriminator. Hence, these methods are typically bound to relatively small discriminators, limiting the ability to learn data distributions beyond, say, MNIST (LeCun & Cortes, 2010) or FashionMNIST (Xiao et al., 2017). Given these challenges, the heavy machinery such as GANs and large-scale auto-encoder-based methods - capable of generating complex datasets in a non-private setting - fails to model datasets such as CIFAR-10 (Krizhevsky, 2009) or CelebA (Liu et al., 2015) with a meaningful privacy guarantee (e.g., ϵ ≈ 2). Typical deep generative modeling papers have moved well beyond these datasets, but to the best of our knowledge, currently there is no DP data generation method that can produce reliable samples at a reasonable privacy level. How can we reduce this huge gap between the performance of non-private deep generative models and that of private counterparts? We argue that we can narrow this gap by using the abundant resource of *public* data, in line with the core message of Tramèr & Boneh (2021): *We simply need better features for differentially private learning*. While Tramèr & Boneh demonstrated this in the context of DP classification, we aim to show the applicability of this reasoning for the more challenging problem of DP data generation, with a focus on high-dimensional image generation. We propose to exploit public data to learn *perceptual features* (PFs) from public data, which we will use to compare synthetic and real data distributions. Following dos Santos et al. (2019), we use "perceptual features" to mean the vector of all activations of a pretrained deep network for a given data point, e.g. the hundreds of thousands of hidden activations from applying a trained deep classifier to an image. Building on dos Santos et al. (2019), who use PFs for transfer learning in natural image generation, our goal is to improve the quality of natural images generated with differential privacy constraints. We construct a kernel on images using these powerful PFs, then train a generator by minimizing the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) between distributions (as in Harder et al., 2021; Li et al., 2015; Dziugaite et al., 2015; dos Santos et al., 2019). This scheme is non-adversarial, leading to simpler and more stable optimization; moreover, it allows us to privatize the mean embedding of the private dataset *once*, using it at each step of generator training without incurring cumulative privacy losses. We observe in our experiments that as long as the public data contains more complex patterns than private data, e.g., transferring the knowledge learned from ImageNet as public data to generate CIFAR-10 images as private data, the learned features from public data are useful enough to generate good synthetic data. We successfully generate reasonable samples for CIFAR-10, CelebA, MNIST, and FashionMNIST in high-privacy regimes. We also theoretically analyze the effect of privatizing our loss function, helping understand the privacy-accuracy trade-offs in our method. The main point of our paper is that features from public data are a key tool for improved DP data generation, a point we think our experiments make resoundingly; this may be "obvious", but has not been explored for image generation. Our proposed method, in particular, is a simple (which, we think, is a good thing) initial technique exploiting this idea, which outperforms simple pretraining of DP-GAN and DP-Sinkhorn (see Section 6). We hope this work will inspire future work on other ways to use public features for improving image generation with differential privacy. ## 2 Background We provide background information on maximum mean discrepancy and differential privacy. Maximum Mean Discrepancy The MMD is a distance between distributions based on a kernel kϕ(*x, y*) = ⟨ϕ(x), ϕ(y)⟩H, where ϕ maps data in X to a Hilbert space H (Gretton et al., 2012). One definition is MMDkϕ (*P, Q*) = Ex∼P [ϕ(x)] − Ey∼Q[ϕ(y)]H , where µϕ(P) = Ex∼P [ϕ(x)] ∈ H is known as the (kernel) *mean embedding* of P, and is guaranteed to exist if Ex∼P pk(*x, x*) < ∞ (Smola et al., 2007). If kϕ is *characteristic* (Sriperumbudur et al., 2011), then P 7→ µϕ(P) is injective, so MMDkϕ (*P, Q*) = 0 if and only if P = Q. For a sample set D = {xi} m i=1 ∼ P m, the empirical mean embedding µϕ(D) = 1m Pm i=1 ϕ(xi) is the "plug-in" estimator of µϕ(P) using the empirical distribution of D. Given D˜ = {x˜i} n i=1 ∼ Qn, we can estimate MMDkϕ (*P, Q*) as the distance between empirical mean embeddings, $$\text{MMD}_{k_{\phi}}(\mathcal{D},\mathcal{D})=\left\|\frac{1}{m}\sum_{i=1}^{m}\phi(\mathbf{x}_{i})-\frac{1}{n}\sum_{i=1}^{n}\phi(\hat{\mathbf{x}}_{i})\right\|_{\mathcal{H}}.\tag{1}$$ We would like to minimize the distance between a target data distribution P (based on samples D) and the output distribution Qgθ of a generator network gθ. If the feature map is finite-dimensional and norm-bounded, following Harder et al. (2021); Vinaroz et al. (2022), we can privatize the mean embedding of the data distribution µϕ(D) with a known DP mechanism such as the Gaussian or Laplace mechanisms, to be discussed shortly. As the summary of the real data does not change over the course of a generator training, we only need to privatize µϕ(D) once. Differential privacy A mechanism M is (ϵ, δ)-DP for a given ϵ ≥ 0 and δ ≥ 0 if and only if $$\Pr[M({\mathcal{D}})$$ $$\operatorname*{Pr}[{\mathcal{M}}({\mathcal{D}})\in S]\leq e^{\epsilon}\cdot\operatorname*{Pr}[{\mathcal{M}}({\mathcal{D}}^{\prime})\in S]+\delta$$ for all possible sets of the mechanism's outputs S and all neighbouring datasets D, D′that differ by a single entry. One of the most well-known and widely used DP mechanisms is the *Gaussian mechanism*. The Gaussian mechanism adds a calibrated level of noise to a function µ : *D 7→* R pto ensure that the output of the mechanism is (*ϵ, δ*)-DP: µe(D) = µ(D) + n, where n ∼ N (0, σ2∆2µ Ip). Here, σ is often called a privacy parameter, which is a function2 of ϵ and δ. ∆µ is often called the *global sensitivity* (Dwork et al., 2006), which is the maximum difference in L2-norm given two neighbouring D and D′, ||µ(D) − µ(D′)||2. In this paper, we will use the Gaussian mechanism to ensure the mean embedding of the data distribution is DP. ## 3 Method In this paper, to transfer knowledge from public to private data distributions, we construct a particular kernel kΦ to use in Equation 1 based on *perceptual features* (PFs). ## 3.1 Mmd With Perceptual Features As A Feature Map We call our proposed method *Differentially Private Mean Embeddings with Perceptual Features (DP-ME*PF), analogous to the related method DP-MERF (Harder et al., 2021). We use high-dimensional, over-complete perceptual features from a feature extractor network pre-trained on a public dataset, as illustrated in **Step 1** of Figure 1. Given a vector input x, the pre-trained feature extractor network outputs the perceptual features from each layer, where the jth layer's PF is denoted by ej (x). Each of the J layers' perceptual features is of a different length, ej (x) ∈ R dj; the total dimension of the perceptual feature vector is D =PJ j=1 dj . As illustrated in **Step 2** in Figure 1, we use those PFs to form our feature map Φ(x) := [ϕ1(x), ϕ2(x)], where the first part comes from a concatenation of PFs from all the layers: ϕ1(x) = [e1(x), *· · ·* , eJ (x)], while the second part comes from their squared values: ϕ2(x) = [e 2 1 (x), *· · ·* , e 2 J (x)], where e 2 j (x) means each entry of ej (x) is squared. Using this feature map, we then construct the mean embedding of a data distribution given the data samples D = {xi} m i=1: $$\mu_{P}(\mathcal{D})=\begin{bmatrix}\mu_{P}^{\phi_{1}}(\mathcal{D})\\ \\ \mu_{P}^{\phi_{2}}(\mathcal{D})\end{bmatrix}=\begin{bmatrix}\frac{1}{m}\sum_{i=1}^{m}\phi_{1}(\mathbf{x}_{i})\\ \\ \frac{1}{m}\sum_{i=1}^{m}\phi_{2}(\mathbf{x}_{i})\end{bmatrix}.$$ $$\text{(2)}$$. Lastly (**Step 3** in Figure 1), we will train a generator gθ that maps latent vectors zi ∼ N (0, I) to a synthetic data sample x˜i = gθ(zi); we need to find good parameters θ for the generator. In non-private settings, we estimate the generator's 2The relationship can be numerically computed by packages like auto-dp (Wang et al., 2019), among other methods. ![3_image_0.png](3_image_0.png) $$\mathbf{\partial})$$ Figure 1: Three steps in *differentially private mean embedding with perceptual features (DP-MEPF)*. **Step 1:** We train a feature extractor neural network, fθˆpub , using public data. This is a function of public data, with no privacy cost to train. A trained fθˆpub maps an input x to perceptual features (in green), the outputs of each layer. **Step 2:** We compute the mean embedding of the data distributions using a feature map consisting of the first and second moments (in green) of the perceptual features, and privatize it based on the Gaussian mechanism (see text). **Step 3:** We train a generator gθ, which produces synthetic data from latent codes zi ∼ N (0, I), by minimizing the privatized MMD. parameters by minimizing an estimate of MMD2 kΦ (*P, Q*gθ ), using D˜ = {x˜i} in Equation 1, similar to Dziugaite et al. (2015); Li et al. (2015); dos Santos et al. (2019). In private settings, we privatize D's mean embedding to µ˜ϕ(D) with the Gaussian mechanism (details below), and minimize $$\widehat{\mathrm{MMD}}_{k_{\Phi}}^{2}({\mathcal{D}},\tilde{\mathcal{D}})=\left\|\tilde{\mu}_{\phi}({\mathcal{D}})-\mu_{\phi}(\tilde{\mathcal{D}})\right\|^{2}.$$ 2. (3) A natural question that arises is whether the MMD using the PFs is a metric: if MMDkΦ (*P, Q*) = 0 only if P = Q. As PFs have a finite-dimensional embedding, we in fact know this cannot be the case (Sriperumbudur et al., 2011). Thus, there exists *some* pair of distributions which our MMD cannot distinguish. However, given that linear functions in perceptual feature spaces can obtain excellent performance on nearly any natural image task (as observed in transfer learning), it seems that PFs are "nearly" universal for natural distributions of images (dos Santos et al., 2019). Thus we expect the MMD with this kernel to do a good job of distinguishing "natural" distributions from one another, though the possibility of "adversarial attacks" perhaps remains. A more important question in our context is whether this MMD serves as a good loss for training a generator, and whether the resulting synthetic data samples are reasonably faithful to the original data samples. Our experiments in Section 6, as well as earlier work by dos Santos et al. (2019) in non-private settings, imply that it is. Privatization of mean embedding We privatize the mean embedding of the data distribution only once, and reuse it repeatedly during the training of the generator gθ. We use the Gaussian mechanism to separately privatize the first and second parts of the feature map. We normalize each type of perceptual features such that ∥ϕ1(xi)∥2 = 1 and ∥ϕ2(xi)∥2 = 1 for each sample xi. After this change, the sensitivity of each part of the mean embedding is $$\operatorname*{max}_{{\mathcal{D}},{\mathcal{D}}^{\prime}{\mathrm{\scriptsize~s.t.~}}|{\mathcal{D}}-{\mathcal{D}}^{\prime}|=1}\|\mu_{\phi_{t}}({\mathcal{D}})-\mu_{\phi_{t}}({\mathcal{D}}^{\prime})\|_{2}\leq{\frac{2}{m}},$$ m , (4) where µϕt (D) denotes the two parts of the mean embedding for t = 1, 2. Using these sensitivities, we add Gaussian noise to each part of the mean embedding, obtaining $$\tilde{\mu}_{\Phi}(\mathcal{D})=\begin{bmatrix}\tilde{\mu}_{\phi_{1}}(\mathcal{D})\\ \tilde{\mu}_{\phi_{2}}(\mathcal{D})\end{bmatrix}=\begin{bmatrix}\frac{1}{m}\sum_{i=1}^{m}\phi_{1}(\mathbf{x}_{i})+\mathbf{n}_{1}\\ \frac{1}{m}\sum_{i=1}^{m}\phi_{2}(\mathbf{x}_{i})+\mathbf{n}_{2}\end{bmatrix},$$ , (5) where nt ∼ N (0, 4σ 2 m2 I) for t = 1, 2. Since we are using the Gaussian mechanism twice, we simply compose the privacy losses from each mechanism. More precisely, given a desired privacy level *ϵ, δ*, we use the package of Wang et al. (2019) to find the corresponding σ for the two Gaussian mechanisms. $$(4)$$ $$(S)$$ Labeled data generation Extending our framework to generate both labels and input images is straightforward. As done by Harder et al. (2021), we construct a separate mean embedding for each class-conditional input distribution and then concatenate them into a single embedding $$\tilde{\mu}_{\phi_{t}}(\mathcal{D})=\left[\frac{1}{m}\sum_{i\in C_{t}}\phi_{t}(\mathbf{x}_{i})+\mathbf{n}_{t,1}\quad\cdots\quad\frac{1}{m}\sum_{i\in C_{K}}\phi_{t}(\mathbf{x}_{i})+\mathbf{n}_{t,K}\right]^{\top},$$ $$(6)$$ where K is the number of classes and Ck = {i ∈ [m]|yi = k} is the set of indices belonging to class k. As a result, the size of the final mean embedding is D × K (number of perceptual features by the number of classes) if we use only the first moment, or 2 × D × K if we use the first two moments. This is exactly the conditional mean embedding with a discrete kernel on the class label (Song et al., 2013). In the case of imbalanced data, an estimate of the label distribution can be obtained at low privacy cost with a DP release of the class counts, as done in Harder et al. (2021). Since all datasets considered in this paper are balanced, this step is not necessary in our experiments. ## 3.2 Differentially Private Early Stopping On some datasets (CelebA and Cifar10) we observe that the generated sample quality deteriorates if the model is trained for too many iterations in high-privacy settings. This is indicated by a steady increase in FID score (Heusel et al., 2017), and likely due to overfitting to the static noisy embedding. Since the FID score is based on the training data, simply choosing the iteration with the best FID score after training has completed would violate privacy. Privatizing the FID score requires privatizing the covariance of the output of the final pooling layer in the Inception network, which is quite sensitive. Instead, we privatize the first and second moment of data embeddings as in Equation 2, but using only the output of the final pooling layer in the Inception network. We then use this quantity as a private proxy for FID, and select the iteration with the lowest score. To minimize the privacy cost, we choose a larger noise parameter than for the main objective: σ*stopping* = 10σ, where σ is the noise scale for privatizing each part of the data mean embeddings, works well. Again, we compose these σs with the analysis of Wang et al. (2019). ## 4 Theoretical Analysis We now bound the effect of adding noise to our loss function, showing that asymptotically our noise does not hurt the rate at which our model converges to the optimal model. Appendix A proves full finite-sample versions of all of the following bounds, which are stated here using Op notation for simplicity. The statment X = Op(An) essentially means that X is O(An) with probability at least 1 − ρ for any constant choice of failure probability ρ > 0. The full version in the supplementary material is also ambivalent to the choice of covariance for the noise variable n, allowing in particular analysis of DP-MEPF based either on one or two moments of PFs. (The full version gives a slightly more refined treatment of the two-moment case, but the difference is typically not asymptotically relevant.) To begin, we use standard results on Gaussians to establish that the privatized MMD is close to the non-private MMD: Proposition 4.1. Given datasets D and D˜, the absolute difference between the privatized and non-private squared MMDs, a random function of only n*, satisfies* $$\widetilde{\mathrm{MMD}}_{k_{\Phi}}^{2}({\mathcal{D}},{\tilde{\mathcal{D}}})-\mathrm{MMD}_{k_{\Phi}}^{2}({\mathcal{D}},{\tilde{\mathcal{D}}})\big|={\mathcal{O}}_{p}\left(\frac{\sigma^{2}}{m^{2}}D+\frac{\sigma}{m}\,\mathrm{MMD}_{k_{\Phi}}({\mathcal{D}},{\tilde{\mathcal{D}}})\right).$$ One key quantity in the bound is σ/m, the ratio of the noise scale σ (inversely proportional to ε) to the number of observed (private) data points m. Note that σ depends only on the given privacy level, not on m, so the error becomes zero as long as m → ∞. In the second term, σ/m is multiplied by the (non-private, non-squared) MMD, which is bounded for our features, but for good generators (where our optimization hopefully spends most of its time) this term will also be nearly zero. The other term accounts for adding independent noise to each of the D feature dimensions; although D is typically large, so is m2. Having m = 50K private samples, e.g. for CIFAR-10, allows for a strong error bound as long as Dσ2 ≪ 625M. The above result is for a fixed pair of datasets. Because we only add noise n once, across all possible comparisons, we can use this to obtain a bound uniform over all possible generator distributions, in particular implying that the minimizer of the privatized MMD approximately minimizes the original, non-private MMD: Proposition 4.2. Fix a target dataset D. For each θ in some set Θ, fix a corresponding D˜θ*; in particular,* Θ = R p could be the set of all generator parameters, and D˜θ either the outcome of running a generator gθ *on a fixed set of* "seeds," D˜θ = {gθ(zi)} n i=1*, or the full output distribution of the generator* Qgθ . Let θe ∈ arg minθ∈Θ MMD ^2 kΦ (D, D˜θ) be the private minimizer, and θb ∈ arg minθ∈Θ MMD2 kΦ (D, D˜θ) *the non-private minimizer. Then* MMD2 kΦ (D, D˜θe ) − MMD2 kΦ (D, D˜θb ) = Op σ 2D m2 + σ √D m . The second term of this bound will generally dominate; it arises from uniformly bounding the σm MMDkΦ (D, D˜θ) term of Proposition 4.1 over all possible D˜θ. This approach, although the default way to prove this type of bound, misses that MMDkΦ (D, D˜θ) is hopefully small for θe and θb. We can in fact take advantage of this to provide an "optimistic" rate (Srebro et al., 2010; Zhou et al., 2021) that achieves faster convergence if the generator is capable of matching the target features (an "interpolating" regime): Proposition 4.3. *In the setting of Proposition 4.2,* $$\mathrm{MMD}_{k_{\Phi}}^{2}\left({\mathcal{D}},{\tilde{\mathcal{D}}}_{\widehat{\Theta}}\right)-\mathrm{MMD}_{k_{\Phi}}^{2}\left({\mathcal{D}},{\tilde{\mathcal{D}}}_{\widehat{\Theta}}\right)={\mathcal{O}}_{p}\left(\frac{\sigma^{2}D}{m^{2}}+\frac{\sigma{\sqrt{D}}}{m}\,\mathrm{MMD}_{k_{\Phi}}\left({\mathcal{D}},{\tilde{\mathcal{D}}}_{\widehat{\Theta}}\right)\right).$$ . Note that this bound implies the previous one, since MMDkΦ (D, D˜) is bounded. But in the case where the generator is capable of exactly matching the features of the target distribution, the second term becomes zero, and the rate with respect to m is greatly improved. In either regime, our approximate minimization of the empirical MMD is far faster than the rate at which minimizing the empirical MMD(D, Qgθ ) converges to minimizing the true, distribution-level MMD(*P, Q*gθ ): the known results there (e.g. Dziugaite et al., 2015, Theorem 1) give a 1/ √m rate, compared to our 1/m or even 1/m2. We show that minimizing DP-MEPF's loss actually pays no asymptotic penalty for privacy (especially when a perfect generator exists), with the privacy loss dwarfed by the statistical error for large datasets; this essentially agrees with experiments (see Section 6). This is not the case for all DP methods, and other DP generation papers didn't prove any such guarantees: DP-Sinkhorn only proved privacy, and DP-MERF showed only a much weaker guarantee (its gradient is asymptotically unbiased). ## 5 Related Work Initial work on differentially private data generation assumed strong constraints on the type of data and the intended use of the released data (Snoke & Slavkovic, 2018; Mohammed et al., 2011; Xiao et al., 2010; Hardt et al., 2012; Zhu et al., ´ 2017). While these studies provide theoretical guarantees on the utility of the synthetic data, they typically do not scale to our goal of large-scale image data generation. Recently, several papers focused on discrete data generation with limited domain size (Zhang et al., 2017; Qardaji et al., 2014; Chen et al., 2015; Zhang et al., 2021). These methods learn the correlation structure of small subsets of features and privatize them in order to produce differentially private synthetic data samples. These methods often require discretization of the data and have limited scalability, so are also unsuitable for high-dimensional image data generation. More recently, however, a new line of work has emerged that adopt the core ideas from the recent advances in deep generative models for a broad applicability of synthetic data with differential privacy constraints. The majority of this work (Xie et al., 2018; Torkzadehmahani et al., 2019; Frigerio et al., 2019; Yoon et al., 2019; Chen et al., 2020) uses generative adversarial networks (GANs; Goodfellow et al., 2014) along with some form of DP-SGD (Abadi et al., 2016). Other works in this line include PATE-GAN based on the private aggregation of teacher ensembles (Papernot et al., 2017) and variational autoencoders (Acs et al., 2018). The closest prior work to the proposed method is DP-MERF (Harder et al., 2021), where the kernel mean embeddings are constructed using random Fourier features (Rahimi & Recht, 2008). A recent variant of DP-MERF uses Hermite polynomial-based mean embeddings (Vinaroz et al., 2022). Unlike these methods, we use the perceptual features from a pre-trained network to construct kernel mean embeddings. Neither previous method applies to the perceptual kernels used here, so their empirical results are far worse (as we'll see shortly). Our theoretical analysis is also much more extensive: they only proved a bound on the expected error between the private and non-private empirical MMD for a fixed pair of datasets. More recently, a similar work to DP-MERF utilizes the Sinkhorn divergence for private data generation (Cao et al., 2021), which performs similarly to DP-MERF when the cost function is the L2 distance with a large regularizer. Another related work proposes to use the characteristic function and an adversarial re-weighting objective (Liew et al., 2022) in order to improve the generalization capability of DP-MERF. A majority of these related methods were evaluated only on relatively simple datasets such as MNIST and FashionMNIST. Even so, the DP-GAN-based methods mostly require a large privacy budget of ϵ ≈ 10 to generate synthetic data samples that are reasonably close to the real data samples. Our method goes far beyond this quality with much more stringent privacy constraints, as we will now see. ## 6 Experiments We will now compare our method to state-of-the-art methods for DP data generation. Table 1: Downstream accuracies by Logistic regression and MLP, evaluated on the generated data samples using MNIST and FashionMNIST as private data and SVHN and CIFAR-10 as public data, respectively. In all cases, we set ϵ = 10, δ = 10−5. In our method, we used both features ϕ1, ϕ2. | DP-MEPF | DP-Sinkhorn | GS-WGAN | DP-MERF | DP-HP | | | |--------------------|---------------------|-----------------------|------------------------|---------|----|----| | (Cao et al., 2021) | (Chen et al., 2020) | (Harder et al., 2021) | (Vinaroz et al., 2022) | | | | | MNIST | LogReg | 83 | 83 | 79 | 79 | 81 | | MLP | 90 | 83 | 79 | 78 | 82 | | | F-MNIST | LogReg | 76 | 75 | 68 | 76 | 73 | | MLP | 76 | 75 | 65 | 75 | 71 | | Datasets. We considered four image datasets3 of varying complexity. We started with the commonly used datasets MNIST (LeCun & Cortes, 2010) and FashionMNIST (Xiao et al., 2017), where each consist of 60,000 28 × 28 pixel grayscale images depicting hand-written digits and items of clothing, respectively, sorted into 10 classes. We also looked at the more complex CelebA (Liu et al., 2015) dataset, containing 202,599 color images of faces which we scale to sizes of 32 × 32 or 64 × 64 pixels and treat as unlabeled. We also study CIFAR-10 (Krizhevsky, 2009), a 50,000-sample dataset containing 32 × 32 color images of 10 classes of objects, including vehicles like ships and trucks, and animals such as horses and birds. Implementation. We implemented our code for all the experiments in PyTorch (Paszke et al., 2019), using the auto-dp package4(Wang et al., 2019) for the privacy analysis. Following Harder et al. (2021), we used the generator that consists of two fully connected layers followed by two convolutional layers with bilinear upsampling, for generating both MNIST and FashionMNIST datasets. For MNIST, we used the SVHN dataset as public data to pre-train ResNet18 (He et al., 2016), from which we took the perceptual features. For FashionMNIST, we used perceptual features from a ResNet18 trained on CIFAR-10. For CelebA and CIFAR-10, we followed dos Santos et al. (2019) in using perceptual features from a pre-trained VGG (Simonyan & Zisserman, 2014) on ImageNet, and a ResNet18-based generator. Further implementation details are given in the supplementary material, which also studies how different public datasets and feature extractors impact the performance. Evaluation metric. Evaluating the quality of generated data is a challenging problem of its own. We use two conventional measures. The first is the *Frechet Inception Distance (FID)* score (Heusel et al., 2017), which directly measures the quality of the generated samples. The FID score correlates with human evaluations of visual similarity to the real 3Dataset licenses: MNIST: CC BY-SA 3.0; FashionMNIST:MIT; CelebA: see https://mmlab.ie.cuhk.edu.hk/projects/CelebA. html; Cifar10: MIT 4https://github.com/yuxiangw/autodp | MNIST | | FashionMNIST | | | | | | | | |--------------|------------------|----------------|---------|-------|-------|-------|---------|----|----| | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.2 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.2 | | | | MLP | DP-MEPF (ϕ1, ϕ2) | 90 | 89 | 89 | 80 | 76 | 75 | 75 | 70 | | DP-MEPF (ϕ1) | 88 | 88 | 87 | 77 | 75 | 76 | 75 | 69 | | | LogReg | DP-MEPF (ϕ1, ϕ2) | 83 | 83 | 82 | 76 | 75 | 76 | 75 | 73 | | DP-MEPF (ϕ1) | 81 | 80 | 79 | 72 | 75 | 76 | 76 | 72 | | ![7_image_0.png](7_image_0.png) Table 2: Downstream accuracies of our method for MNIST and FashionMNIST at varying values of ϵ. Figure 2: Synthetic 32 × 32 CelebA samples generated at different levels of privacy. Samples for DP-MERF and DP-Sinkhorn are taken from Cao et al. (2021) and DP-Diffusion samples are taken from Dockhorn et al. (2022). The pre-trained GAN is our baseline utilizing public data. Even at ϵ = 0.2, DP-MEPF (ϕ1, ϕ2) yields samples of higher visual quality than the comparison methods. data, and is commonly used in deep generative modelling. We computed FID scores with the pytorch_fid package (Seitzer, 2020), based on 5 000 generated samples, matching dos Santos et al. (2019). As discussed in Section 3.2, we use a private proxy for FID for early stopping, while the FID scores we report in this section are non-DP measures of our final model for fair comparison to other existing methods. The second metric we use is the accuracy of downstream classifiers, trained on generated datasets and then test on the real data test sets (used by Chen et al., 2020; Torkzadehmahani et al., 2019; Yoon et al., 2019; Chen et al., 2020; Harder et al., 2021; Cao et al., 2021). This test accuracy indicates how well the downstream classifiers generalize from the synthetic to the real data distribution and thus, the utility of using synthetic data samples instead of the real ones. We computed the downstream accuracy on MNIST and FashionMNIST using the logistic regression and MLP classifiers from scikit-learn (Pedregosa et al., 2011). For CIFAR-10, we used ResNet9 taken from FFCV5(Leclerc et al., 2022). In all experiments, we tested non-private training and settings with various levels of privacy, ranging from ϵ = 10 (no meaningful guarantee) to ϵ = 0.2 (strong privacy guarantee). We set δ = 10−5for MNIST, FashionMNIST, and Cifar10 and δ = 10−6for CelebA. In DP-MEPF, we also tested cases based on embeddings with only the first moment, written (ϕ1), and using the first two moments, written (ϕ1, ϕ2). Each value in all tables is an average of 3 or more runs; standard deviations are in the supplementary material. Since we are unaware of any prior work on DP data generation for image data using auxiliary datasets, we instead mostly compare to recent methods which do not access auxiliary data. As expected, due to the advantage of non-private data our approach outperforms these methods by a significant margin on the more complex datasets. As a simple baseline based on public data, we also pretrain a GAN on a downscaled version of ImageNet, at 32 × 32, and fine-tune this model with DP-SGD on CelebA and Cifar10. We use architectures based on ResNet9 with group normalization (Wu & He, 2018) for both generator and discriminator. As suggested by Bie et al. (2023), we update the generator at a lower frequency than the discriminator and use increased minibatch sizes. Further details can be found in the supplementary material. MNIST and FashionMNIST. We compare DP-MEPF to existing methods on the most common settings used in the literature, MNIST and FashionMNIST at ϵ = 10, in Table 1. For an MLP on MNIST, DP-MEPF's samples far 5https://github.com/libffcv/ffcv/blob/main/examples/cifar/train_cifar.py ![8_image_0.png](8_image_0.png) Figure 3: Synthetic 64 × 64 CelebA samples generated at different levels of privacy with DP-MEPF (ϕ1, ϕ2). | 32 64 | |---------| Table 3: CelebA FID scores (lower is better) for images of resolution 32 × 32 and 64 × 64. Results for DP Diffusion (DPDM) and DP Sinkhorn taken from Dockhorn et al. (2022) and Cao et al. (2021). DP-MEPF (ϕ1, ϕ2) 17.4 17.5 18.1 19.0 21.4 25.8 DP-MEPF (ϕ1) 16.3 16.9 16.5 17.2 21.8 25.5 DP-GAN (pre-trained) 58.1 66.9 67.1 81.3 109.1 192.0 DPDM (no public data) 21.2 - - 71.8 - - DP Sinkhorn (no public data) 189.5 - - - - - 64 DP-MEPF (ϕ1, ϕ2) 18.5 19.1 18.4 19.0 21.4 26.8 DP-MEPF (ϕ1) 17.4 16.5 16.9 18.4 20.4 27.7 DP-GAN (pre-trained) 57.1 62.3 65.2 72.5 91.9 133.3 outperform other methods for logistic regression and both classifiers on FashionMNIST, scores match or slightly exceed those of existing models. This might be because the domain shift between public dataset (CIFAR-10, color images of scenes) and private dataset (FashionMNIST, grayscale images of fashion items) is too large, or because the task is simple enough that random features as found in DP-MERF or DP-HP are already good enough. This will change as we proceed to more complex datasets. Table 2 shows that downstream test accuracy only starts to drop in high privacy regimes, ϵ < 1, due to the low sensitivity of µϕ. Samples for visual comparison between methods are included in the supplementary material. | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | |----------|---------|---------|---------|-----------|-----------| CelebA Figure 2 shows that previous attempts to generate CelebA samples without auxiliary data using DP-MERF or DP-Sinkhorn have only managed to capture very basic features of the data. Each sample depicts a face, but offers no details or variety. DP-MEPF produces more accurate samples at the same 32 × 32 resolution, which is also reflected in improved FID scores of around 17, while DP-Sinkhorn, as reported in Cao et al. (2021), achieves an FID of 189.5. Table 3 gives FID scores for both resolutions at varying ϵ. DP-MEPF consistently outperforms our pre-trained DP-GAN baseline and the scores reported for DP diffusion Dockhorn et al. (2022), As the dataset has over 200 000 samples, the feature embeddings have low sensitivity, and offer similar quality between ϵ = 10 and ϵ = 1, although quality begins to decline at ϵ < 1. Samples for 64 × 64 images are shown in Figure 3, with similar quality, and a quicker loss of quality in high privacy settings due to its larger embedding. In all cases, the ϕ1 embedding yields better results than ϕ1, ϕ2, suggesting that the second moment does not contribute useful information, perhaps because on the limited variance of the dataset. Figure 4: Samples from non-DP ![8_image_1.png](8_image_1.png) Sinkhorn. Top: ImageNet32. Bottom: CelebA after pretraining. Because DP-Sinkhorn is the best-performing method without public data, we perform experiments on DP-Sinkhorn, pretraining it non-DP on ImageNet32 and fine-tuning with DP on CelebA (ϵ = 10). After seeing no improvement, we tested non-DP fine-tuning and still saw no improvements beyond what is shown in Figure 4; we tried both BigGan- and ResNet18-based generators with hyperparameter grid searches. DP-Sinkhorn only compares features at image-level, without domain-specific priors, and it appears that even non-DP the method is not powerful enough to model image data beyond MNIST. (A DP-MEPF analogue that extracts features learned from public data might help, but this would be a novel method beyond scope for comparison.) DP-MERF is similarly limited by its random features, not DP noise, as shown by non-DP versions matching ϵ = 10 performance. Differentially private early stopping. For CelebA and Cifar10, we use DP early stopping as explained in Section 3.2 with a privacy parameter ten times larger than the σ used for the training objective. Keeping (*ϵ, δ*) fixed, this additional Table 4: Two examples of beneficial early stopping: For CelebA at 64 × 64 resolution and labeled Cifar10, DP-MEPF (ϕ1) sample quality (measured in FID) degrades with long training in high privacy settings (here ϵ ≤ 1). This makes the final model at the end of training a poor choice. Our DP selection of the best iteration via proxy stays close to the optimal choice. | | | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | |-------------------|------------------------|---------|-----------|-----------| | | Best FID (not DP) | 17.7 | 20.1 | 27.0 | | CelebA 64 × 64 | DP proxy for FID | 18.4 | 20.4 | 27.7 | | | At the end of training | 18.4 | 22.1 | 45.2 | | | Best FID (not DP) | 54.8 | 92.0 | 268.3 | | Cifar10 (labeled) | DP proxy for FID | 56.5 | 92.0 | 268.3 | | | At the end of training | 198.6 | 267.7 | 357.1 | ![9_image_0.png](9_image_0.png) Figure 5: Labeled samples from DP-MEPF (ϕ1, ϕ2) and DP-MERF (Harder et al., 2021). release results only in a small increase in σ, and gives us a simple way for choosing the best iteration. In Table 4, we compare the true best FID, the FID picked by our private proxy, and the FID at the end of training to illustrate the advantage in high DP settings. FID scores were computed every 5 000 iterations, while the model trained for 200 000 iterations in total. CIFAR-10 Finally, we investigate a dataset which has not been covered in DP data generation. While CelebA depicts a centered face in every image, CIFAR-10 includes 10 visually distinct object classes, which raises the required minimum quality of samples to somewhat resemble the dataset. At only 5 000 samples per class, the dataset is also significantly smaller, which poses a challenge in the private setting. Figure 5 shows that DP-MEPF is capable of producing labelled private data (generating both labels and input images together) resembling the real data, but the quality does suffer in high privacy settings. This is also reflected in the FID scores (Table 5): at ϵ ≤ 1 labeled DP-MEPF scores deteriorate at a much quicker rate than the unlabeled counterpart. As the unlabeled embedding dimension is smaller by a factor of 10 (the number of classes), it is easier to release privately and retains some semblance of the data even in the highest privacy settings, as shown in Figure 6. The FID scores of our pre-trained DP-GAN baseline consistently exceed our results, usually by over 10 points. These scores are better than the DP-GAN results for CelebA, likely because 32 × 32 ImageNet is very similar to Cifar10. Nonetheless, the high privacy cost of DP-SGD makes DP-GAN a poor fit for a dataset of this complexity and limited size. | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | | | |------------------|------------------|---------|---------|-----------|-----------|-------|-------| | DP-MEPF (ϕ1, ϕ2) | 38.8 | 37.0 | 38.7 | 43.0 | 49.4 | 67.3 | | | unlabeled | DP-MEPF (ϕ1) | 38.5 | 38.6 | 40.1 | 45.1 | 49.8 | 72.3 | | DP-GAN | 54.6 | 54.7 | 62.4 | 74.9 | 62.7 | 73.4 | | | labeled | DP-MEPF (ϕ1, ϕ2) | 29.1 | 30.0 | 39.5 | 54.0 | 76.4 | 226.0 | | DP-MEPF (ϕ1) | 30.3 | 35.6 | 42.0 | 56.5 | 92.0 | 268.3 | | Table 5: FID scores for synthetic CIFAR-10 data; labeled generates both labels and images. ![10_image_0.png](10_image_0.png) Figure 6: Unlabeled CIFAR-10 samples from DP-MEPF (ϕ1, ϕ2) and DP-GAN. In Table 6 we show the test accuracy of models trained synthetic datasets applied to real data. While there is still a large gap between the 88.3% accuracy on the real data and our results, DP-MEPF achieves nontrivial results around 50% for ϵ = 10, which degrade as privacy is increased. While the drop in sample quality due to high privacy is quite substantial, it is less of a problem in the unlabelled case, since our embedding dimension is smaller by a factor of 10 (the number of classes) and thus easier to release privately. Table 6: Test accuracies (higher is better) of ResNet9 trained on CIFAR-10 synthetic data with varying privacy guarantees. When trained on real data, test accuracy is 88.3% | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | | |------------------|---------|---------|---------|-----------|-----------|------| | DP-MEPF (ϕ1, ϕ2) | 53.0 | 43.9 | 40.0 | 28.5 | 18.0 | 16.2 | | DP-MEPF (ϕ1) | 40.7 | 32.3 | 42.6 | 33.2 | 18.8 | 15.3 | | DP-MERF | 13.2 | 13.4 | 13.5 | 13.8 | 13.1 | 10.4 | ## 7 Discussion We have demonstrated the advantage of using auxiliary public data in DP data generation. Our method DP-MEPF takes advantage of features from pre-trained classifiers that are readily available, and allows us to tackle datasets like CelebA and CIFAR-10, which have been unreachable for private data generation up to this point. There are several avenues to extend our method in future work, in particular finding better options for the encoder features: the choice of VGG19 by dos Santos et al. (2019) works well in private settings, but a lower-dimensional embedding that still works well for training generative models - perhaps based on some kind of pruning scheme - might help reduce the sensitivity of µϕ and improve quality. Training other generative models such as GANs or VAEs with pretrained components is also exploring further than our initial attempt here. It may also be possible to take a "middle ground" and introduce some adaptation for features in DP-MEPF, to allow for more powerful, GAN-like models, without suffering too much privacy loss. In the non-private generative modelling community, this has proved important, but the challenge will be to do so while limiting the number of DP releases to allow modelling with, e.g., ϵ ≤ 2. ## 8 Acknowledgements We thank the anonymous reviewers for their valuable time helping us improve our manuscript. F. Harder is supported by the Max Planck Society and the Gibs Schüle Foundation and the Institutional Strategy of the University of Tübingen (ZUK63) and the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B. F. Harder is also grateful for the support of the International Max Planck Research School for Intelligent Systems (IMPRS-IS). M. Jalali Asadabadi, M. Park, and D. J. Sutherland are supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada CIFAR AI Chairs program. ## 9 Broader Impact Statement Our work is motivated by the need for strong and scalable data privacy, which we expect will have mainly beneficial societal impact. However, our work touches on two topics, which are known to contain a risk of harmful impact on individuals and thus need to be treated with caution. ## 9.1 Differential Privacy And Fairness Firstly, recent research has shown that DP is at odds with notions of fairness when if comes to under-represented groups in the data. For instance Chang & Shokri (2021) show that minorities are more susceptible to membership inference attacks in fair non-DP models (i.e. fairness reduces privacy) and Bagdasaryan et al. (2019) show the reverse effect: when training an unfair model with strong DP guarantees, the fairness is reduced further. The dilemma is intuitive: Fairness requires amplifying the impact of samples from minorities in the data, so they will not be ignored, while DP needs to limit the impact each individual sample can have in order to keep sensitivity low. Since its discovery, this trade-off has received attention both in works seeking a more detailed understanding (Cummings et al., 2019; Mangold et al., 2022; Esipova et al., 2022; Zhong et al., 2022; Sanyal et al., 2022) and works proposing custom approaches to DP fair machine learning (Ding et al., 2020; Xu et al., 2019; Jagielski et al., 2019; Tran et al., 2021a;b; Esipova et al., 2022). Given that the impact of DP on fairness is an active area of research and independent of our particular approach, we do not see the need to perform our own experiments on this matter. We will, however, provide an intuition on how the problem manifests in DP-MEPF by looking at labelled data generation with significant class imbalance. Assuming an imbalanced dataset with two classes and |C1| = 100 and |C2| = 10, we obtain the following mean embedding: $${\tilde{\mu}}_{\phi_{t}}(D)=\begin{bmatrix}{\frac{1}{m}}\sum_{i\in C_{1}}\phi_{t}(\mathbf{x}_{i})+\mathbf{n}_{t,1}\\ {\frac{1}{m}}\sum_{i\in C_{2}}\phi_{t}(\mathbf{x}_{i})+\mathbf{n}_{t,2}\end{bmatrix}.$$ $$\left(7\right)$$ . (7) With ∥ϕt(xi)∥2 = 1, we know that the norm of the unperturbed mean embedding for class 1, given by ∥ 1 m Pi∈C1 ϕt(xi)∥2 ≤ 100/110, may be ten times as large as the maximum possible norm for the class 2 embedding ∥ 1 m Pi∈C2 ϕt(xi)∥2 ≤ 10/110. Nonetheless, in order to preserve DP, both embeddings are perturbed with noise of the same magnitude, leading to a significantly worse signal-to-noise ratio for the class 2 embedding. As a result, the generative model trained on this embedding will produce more accurate samples for class 1 than for class 2. ## 9.2 Differential Privacy With Public Data The second issue regards the use of public data in DP. In a recent position paper, Tramèr et al. (2022) raise several concerns about the increasing trend of using auxiliary datasets in DP research. Their critique has two main arguments, the first being that publicly available data may still be sensitive and using such data may cause unintended privacy violations. Given that many large datasets are scraped from the internet with limited human oversight, this data may contain personal data that was released involuntarily or shared exclusively for a specific context. The authors suggest that responsible use of public data requires improved curation practices, including e.g. collection of explicit consent for data use, auditing for and removal of sensitive content, and providing channels for reporting privacy concerns. The other main criticism raised by Tramèr et al. (2022) is that the datasets used to demonstrate the benefits of public data in DP, such as Cifar10 or ImageNet, are poorly chosen, because they are often from nearly the same distribution as the private data. In contrast, they argue, using public data in realistic application scenarios such as medical imaging would likely require considerable domain shift, since no public data close to the target domain is available. This disparity leads to overly optimistic claims, as the experiments don't actually demonstrate good performance under significant domain shift. They further point out that the quality of a DP method becomes difficult to measure if it builds on e.g. a non-privately pre-trained model, as overall improvements may stem both from either the private and the non-private part of the method. The authors propose dedicated benchmarks for DP machine learning should be developed, in order to obtain results which are comparable and predictive of model performance in real-world applications. They also acknowledge that such benchmarks don't currently exist and their design requires careful consideration. We agree with the authors in their analysis of the challenges facing DP machine learning research and value their proposals for future directions and experiment design. In the light of all these problems introduced by public data, one might ask whether this is at all a research direction worth pursuing. Here, we emphasize a fact that is acknowledged in the final paragraph of Tramèr et al. (2022): "many recent works employing public data have played an important role in showing that differential privacy can be preserved for certain complex machine learning problems, without suffering devastating impacts on utility." DP currently sees little to no practical application in machine learning, in large part because the loss of utility it causes is often unacceptable. Auxiliary public data is the best candidate for achieving sufficient utility for practical use and so, in our eyes, the potential of these approaches outweighs the complications they introduce. It is thus vital that research in DP ML with public data is pursued further. ## References Martin Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC Conference on Computer and* Communications Security, CCS '16, pp. 308–318, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450341394. doi: 10.1145/2976749.2978318. Gergely Acs, Luca Melis, Claude Castelluccia, and Emiliano De Cristofaro. Differentially private mixture of generative neural networks. *IEEE Transactions on Knowledge and Data Engineering*, 31(6):1109–1121, 2018. Differential Privacy Team, Apple. Learning with privacy at scale, 2017. URL https://machinelearning. apple.com/research/learning-with-privacy-at-scale. Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. *Advances in neural information processing systems*, 32, 2019. Alex Bie, Gautam Kamath, and Guojun Zhang. Private GANs, revisited. *arXiv preprint arXiv:2302.02936*, 2023. S. Boucheron, G. Lugosi, and P. Massart. *Concentration inequalities: A nonasymptotic theory of independence*. Oxford University Press, 2013. Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, and Karsten Kreis. Don't generate me: Training differentially private generative models with sinkhorn divergence. In *Neural Information Processing Systems (NeurIPS)*, 2021. Hongyan Chang and Reza Shokri. On the privacy risks of algorithmic fairness. In *2021 IEEE European Symposium on* Security and Privacy (EuroS&P), pp. 292–303. IEEE, 2021. Dingfan Chen, Tribhuvanesh Orekondy, and Mario Fritz. Gs-wgan: A gradient-sanitized approach for learning differentially private generators. In *Advances in Neural Information Processing Systems 33*, 2020. Rui Chen, Qian Xiao, Yu Zhang, and Jianliang Xu. Differentially private high-dimensional data publication via sampling-based inference. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 129–138, 2015. Rachel Cummings, Varun Gupta, Dhamma Kimpara, and Jamie Morgenstern. On the compatibility of privacy and fairness. In *Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization*, pp. 309–315, 2019. Jiahao Ding, Xinyue Zhang, Xiaohuan Li, Junyi Wang, Rong Yu, and Miao Pan. Differentially private and fair classification via calibrated functional mechanism. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 622–629, 2020. Tim Dockhorn, Tianshi Cao, Arash Vahdat, and Karsten Kreis. Differentially private diffusion models. *arXiv preprint* arXiv:2210.09929, 2022. Cícero Nogueira dos Santos, Youssef Mroueh, Inkit Padhi, and Pierre L. Dognin. Learning implicit generative models by matching perceptual features. In *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 4460–4469. IEEE, 2019. doi: 10.1109/ICCV.2019.00456. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In *Advances in Cryptology - EUROCRYPT 2006, 25th Annual International* Conference on the Theory and Applications of Cryptographic Techniques, volume 4004 of *Lecture Notes in Computer* Science, pp. 486–503. Springer, 2006. doi: 10.1007/11761679_29. Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via Maximum Mean Discrepancy optimization. In UAI, 2015. Maria S Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, and Jesse C Cresswell. Disparate impact in differential privacy from gradient misalignment. *arXiv preprint arXiv:2206.07737*, 2022. Lorenzo Frigerio, Anderson Santana de Oliveira, Laurent Gomez, and Patrick Duverger. Differentially private generative adversarial networks for time series, continuous, and discrete open data. In *ICT Systems Security and Privacy* Protection - 34th IFIP TC 11 International Conference, SEC 2019, Lisbon, Portugal, June 25-27, 2019, Proceedings, pp. 151–164, 2019. doi: 10.1007/978-3-030-22312-0\_11. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. In *Advances in Neural Information Processing Systems*, 2014. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(Mar):723–773, 2012. Frederik Harder, Kamil Adamczewski, and Mijung Park. DP-MERF: Differentially private mean embeddings with random features for practical privacy-preserving data generation. In *AISTATS*, volume 130 of *Proceedings of Machine* Learning Research, pp. 1819–1827. PMLR, 2021. Moritz Hardt, Katrina Ligett, and Frank Mcsherry. A simple and practical algorithm for differentially private data release. In *Advances in Neural Information Processing Systems 25*, pp. 2339–2347. Curran Associates, Inc., 2012. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local Nash equilibrium. *Advances in Neural Information Processing Systems*, 30, 2017. Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. Differentially private fair learning. In *International Conference on Machine Learning*, pp. 3000–3008. PMLR, 2019. Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. *Annals of* Statistics, pp. 1302–1338, 2000. Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, and Aleksander Madry. ffcv. https://github.com/libffcv/ffcv/, 2022. Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/ exdb/mnist/. Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. In *ICML*, 2015. Seng Pei Liew, Tsubasa Takahashi, and Michihiko Ueno. PEARL: Data synthesis via private embeddings and adversarial reconstruction learning. In *International Conference on Learning Representations*, 2022. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of* International Conference on Computer Vision (ICCV), December 2015. Paul Mangold, Michaël Perrot, Aurélien Bellet, and Marc Tommasi. Differential privacy has bounded impact on fairness in classification. 2022. Noman Mohammed, Rui Chen, Benjamin C.M. Fung, and Philip S. Yu. Differentially private data release for data mining. In *Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD '11, pp. 493–501, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0813-7. doi: 10.1145/2020408.2020487. National Conference of State Legislatures. Differential privacy for census data, 2021. URL https://www.ncsl. org/research/redistricting/differential-privacy-for-census-data-explained. aspx. Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. Mijung Park, James Foulds, Kamalika Choudhary, and Max Welling. DP-EM: Differentially Private Expectation Maximization. In *AISTATS*, volume 54 of *Proceedings of Machine Learning Research*, pp. 896–904, Fort Lauderdale, FL, USA, April 2017. PMLR. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems* 32, pp. 8024–8035. Curran Associates, Inc., 2019. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Wahbeh Qardaji, Weining Yang, and Ninghui Li. Priview: practical differentially private release of marginal contingency tables. In *Proceedings of the 2014 ACM SIGMOD international conference on Management of data*, pp. 1435–1446, 2014. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In *Advances in Neural Information* Processing Systems, pp. 1177–1184, 2008. Amartya Sanyal, Yaxi Hu, and Fanny Yang. How unfair is private learning? In *Uncertainty in Artificial Intelligence*, pp. 1738–1748. PMLR, 2022. Maximilian Seitzer. pytorch-fid: FID Score for PyTorch. https://github.com/mseitzer/pytorch-fid, August 2020. Version 0.2.1. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 2014. A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In ALT, pp. 13–31, 2007. Joshua Snoke and Aleksandra Slavkovic. pmse mechanism: differentially private synthetic data with maximal ´ distributional similarity. In *International Conference on Privacy in Statistical Databases*, pp. 138–159. Springer, 2018. Le Song, Kenji Fukumizu, and Arthur Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. *IEEE Signal Processing Magazine*, 30(4):98–111, 2013. Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Optimistic rates for learning with a smooth loss, 2010. Bharath K Sriperumbudur, Kenji Fukumizu, and Gert RG Lanckriet. Universality, characteristic kernels and rkhs embedding of measures. *Journal of Machine Learning Research*, 12(7), 2011. Reihaneh Torkzadehmahani, Peter Kairouz, and Benedict Paten. Dp-cgan: Differentially private synthetic data and label generation. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops*, June 2019. Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In International Conference on Learning Representations, 2021. Florian Tramèr, Gautam Kamath, and Nicholas Carlini. Considerations for differentially private learning with large-scale public pretraining. *arXiv preprint arXiv:2212.06470*, 2022. Cuong Tran, My Dinh, and Ferdinando Fioretto. Differentially private empirical risk minimization under the fairness lens. *Advances in Neural Information Processing Systems*, 34:27555–27565, 2021a. Cuong Tran, Ferdinando Fioretto, and Pascal Van Hentenryck. Differentially private and fair deep learning: A lagrangian dual approach. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 9932–9939, 2021b. Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, and Mi Jung Park. Hermite polynomial features for private data generation. In *ICML*, volume 162 of *Proceedings of Machine Learning Research*, pp. 22300–22324. PMLR, 2022. Yu-Xiang Wang, Borja Balle, and Shiva Prasad Kasiviswanathan. Subsampled rényi differential privacy and analytical moments accountant. In *AISTATS*, 2019. Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pp. 3–19, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. 2017. Yonghui Xiao, Li Xiong, and Chun Yuan. Differentially private data release through multidimensional partitioning. In Secure Data Management, pp. 150–168, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. ISBN 978-3-64215546-8. Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. 2018. Depeng Xu, Shuhan Yuan, and Xintao Wu. Achieving differential privacy and fairness in logistic regression. In Companion proceedings of The 2019 world wide web conference, pp. 594–599, 2019. Jinsung Yoon, James Jordon, and Mihaela van der Schaar. PATE-GAN: Generating synthetic data with differential privacy guarantees. In *International Conference on Learning Representations*, 2019. Jun Zhang, Graham Cormode, Cecilia M Procopiuc, Divesh Srivastava, and Xiaokui Xiao. Privbayes: Private data release via bayesian networks. *ACM Transactions on Database Systems (TODS)*, 42(4):1–41, 2017. Zhikun Zhang, Tianhao Wang, Ninghui Li, Jean Honorio, Michael Backes, Shibo He, Jiming Chen, and Yang Zhang. Privsyn: Differentially private data synthesis. In *30th USENIX Security Symposium (USENIX Security 21)*, 2021. Da Zhong, Haipei Sun, Jun Xu, Neil Gong, and Wendy Hui Wang. Understanding disparate effects of membership inference attacks and their countermeasures. In Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security, pp. 959–974, 2022. Lijia Zhou, Frederic Koehler, Danica J. Sutherland, and Nathan Srebro. Optimistic rates: A unifying theory for interpolation learning and regularization in linear regression, 2021. T. Zhu, G. Li, W. Zhou, and P. S. Yu. Differentially private data publishing and analysis: A survey. IEEE Transactions on Knowledge and Data Engineering, 29(8):1619–1638, August 2017. ISSN 1041-4347. doi: 10.1109/TKDE.2017. 2697856. # Supplementary Material ## A Proofs We will conduct our analysis in terms of general noise covariance Σ for the added noise, n ∼ N (0, Σ). The results will depend on various norms of Σ, as well as ∥Σ 1/2a∥, where a = µϕ(D) − µϕ(D˜) is the difference between empirical mean embeddings µϕ(D) = 1 |D| Px∈D ϕ(x). (Recall that MMD(D, D˜) = ∥a∥.) When we use only normalized first-moment features, the quantities appearing in the bounds are $$\Sigma=\frac{4\sigma^{2}}{m^{2}}I_{D}$$ $$\|\Sigma\|_{op}=\frac{4\sigma^{2}}{m^{2}}\qquad\|\Sigma\|_{F}=\frac{4\sigma^{2}}{m^{2}}\sqrt{D}\qquad\mbox{Tr}(\Sigma)=\frac{4\sigma^{2}}{m^{2}}D\tag{8}$$ $$\|\Sigma^{1/2}\mathbf{a}\|_{2}=\sqrt{\mathbf{a}^{\top}\Sigma\mathbf{a}}=\frac{2\sigma}{m}\;\mbox{MMD}_{k_{\Phi}}(\mathcal{D},\bar{\mathcal{D}}).$$ When we use first- and second-moment features with respective scales C1 and C2 (both 1 in our experiments here), we have Σ = "σ 22C1 m 2ID 0 0 σ 22C2 m 2ID # = 4σ 2 m2 C 2 1 ID 0 0 C 2 2 ID ∥Σ∥op = 4σ 2 m2 max(C 2 1 , C2 2 ) ∥Σ∥F = 4σ 2 m2 (C 2 1 + C 2 2 ) √ D Tr(Σ) = 4σ 2 m2 (C 2 1 + C 2 2 ) D (9) ∥Σ 1/2a∥2 = √ a⊤Σa = 2σ m qC2 1 MMD kϕ1 (D, D˜) 2 + C2 2 MMD kϕ2 (D, D˜) 2. Note that if C1 = C2 = C, then $$\sqrt{C_{1}^{2}\,\mathrm{MMD}\,k_{\phi_{1}}({\mathcal{D}},{\tilde{\mathcal{D}}})^{2}+C_{2}^{2}\,\mathrm{MMD}\,k_{\phi_{2}}({\mathcal{D}},{\tilde{\mathcal{D}}})^{2}}=C\,\mathrm{MMD}_{k_{\Phi}}({\mathcal{D}},{\tilde{\mathcal{D}}}).$$ ## A.1 Mean Absolute Error Of Loss Function Proposition A.1. *Given datasets* D = {xi} m i=1 and D˜ = {x˜j} n j=1 and a kernel kϕ with a D*-dimensional embedding* ϕ, let a = µϕ(D) − µϕ(D˜)*. Define* MMD ^2 kΦ (D, D˜) = ∥a + n∥ 2for a noise vector n ∼ N (0, Σ)*. Introducing the noise* n *affects the expected absolute error as* $$\mathbb{E}_{\mathbf{n}}\left[\widehat{\left|\mathrm{MMD}\right.}_{k_{\Phi}}^{2}(\mathcal{D},\tilde{\mathcal{D}})-\mathrm{MMD}_{k_{\Phi}}^{2}(\mathcal{D},\tilde{\mathcal{D}})\right|\right]\leq\mathrm{Tr}(\Sigma)+2\sqrt{\frac{2}{\pi}}\|\Sigma^{1/2}\mathbf{a}\|.\tag{10}$$ Proof. We have that $$\mathbb{E}_{\mathbf{n}}\left[\left|\widehat{\mathrm{MMD}}_{\mathbb{E}_{\mathbf{n}}}^{2}\left(\mathcal{D},\widehat{\mathcal{D}}\right)-\mathrm{MMD}_{\mathbb{E}_{\mathbf{n}}}^{2}\left(\mathcal{D},\widehat{\mathcal{D}}\right)\right|\right]\\ =\mathbb{E}_{\mathbf{n}}\left[\left|\left|\mathbf{a}+\mathbf{n}\right|\right|^{2}-\left|\mathbf{a}\right|^{2}\right]\right]=\mathbb{E}_{\mathbf{n}}\left[\left|\mathbf{n}^{\mathsf{T}}\mathbf{n}+2\mathbf{n}^{\mathsf{T}}\mathbf{a}\right|\right]\leq\mathbb{E}_{\mathbf{n}}\left[\mathbf{n}^{\mathsf{T}}\mathbf{n}\right]+2\,\mathbb{E}_{\mathbf{n}}\left[\left|\mathbf{n}^{\mathsf{T}}\mathbf{a}\right|\right].\tag{11}$$ The first term is standard: E n ⊤n = E Tr(n ⊤n) = E Tr(nn⊤) = Tr(E nn⊤) = Tr(Σ). For the second, note that $$\mathbf{a}^{\top}\mathbf{n}\sim{\mathcal{N}}(0,\mathbf{a}^{\top}\Sigma\mathbf{a}),$$ and so its absolute value is √a⊤Σa times a χ(1) random variable. Since the mean of a χ(1) distribution is √2 Γ(1) Γ(1/2) = q2 π , we obtain the desired bound. ## A.2 High-Probability Bound On The Error Proposition A.2. *Given datasets* D = {xi} m i=1 and D˜ = {x˜j} n j=1, let a = µϕ(D) − µϕ(D˜)*, and define* MMD ^2 kΦ (D, D˜) = ∥a + n∥ 2for a noise vector n ∼ N (0, Σ). Then for any ρ ∈ (0, 1), it holds with probability at least 1 − ρ over the choice of n *that* $$\widehat{\mathrm{MMD}}_{\mathsf{b}_{\theta}}^{-2}(\mathcal{D},\mathcal{D})-\mathrm{MMD}_{\mathsf{b}_{\theta}}^{2}(\mathcal{D},\mathcal{D})|\\ \leq\mathrm{Tr}(\Sigma)+\sqrt{\frac{2}{\pi}}\|\Sigma^{\frac{1}{2}}\mathbf{a}\|_{2}+2\left(\|\Sigma\|_{F}+\sqrt{2}\|\Sigma^{\frac{1}{2}}\mathbf{a}\|_{2}\right)\sqrt{\log(\frac{2}{\pi})}+2\|\Sigma\|_{\mathscr{D}}\log(\frac{2}{\pi}).\tag{12}$$ _This implies that_ $$\left|\widetilde{\mathrm{MMD}}_{k_{\Phi}}^{2}({\mathcal{D}},\tilde{\mathcal{D}})-\mathrm{MMD}_{k_{\Phi}}^{2}({\mathcal{D}},\tilde{\mathcal{D}})\right|={\mathcal{O}}_{p}\left(\mathrm{Tr}(\Sigma)+\|\Sigma^{1/2}\mathbf{a}\|_{2}\right).$$ Proof. Introduce z ∼ N (0, I) such that n = Σ12 z into Equation 11: $$\widehat{\rm MMD}^{2}_{k_{\Phi}}({\cal D},\tilde{\cal D})-{\rm MMD}^{2}_{k_{\Phi}}({\cal D},\tilde{\cal D})\big{|}\leq{\bf n}^{\top}{\bf n}+2\Big{|}{\bf n}^{\top}{\bf a}\Big{|}={\bf z}^{\top}\Sigma{\bf z}+2\Big{|}{\bf a}^{\top}\Sigma^{1/2}{\bf z}\Big{|}.\tag{13}$$ For the first term, denoting the eigendecomposition of Σ as QΛQ⊤, we can write $$\mathbf{z}^{\top}\Sigma\mathbf{z}=(\mathbf{Q}^{\top}\mathbf{z})^{\top}\Lambda(\mathbf{Q}^{\top}\mathbf{z}),$$ $$(14)$$ in which Q⊤z ∼ N (0, I) and Λ is diagonal. Thus, applying Lemma 1 of Laurent & Massart (2000), we obtain that with probability at least 1 − ρ 2 , $$\mathbf{z}^{\top}\Sigma\mathbf{z}\leq\operatorname{Tr}(\Sigma)+2\|\Sigma\|_{F}{\sqrt{\log({\frac{2}{\rho}})}}+2\|\Sigma\|_{o p}\log({\frac{2}{\rho}}).$$ In the second term, a ⊤Σ 1 2 z , can be viewed as a function of a standard normal variable z with Lipschitz constant at most ∥Σ 1 2 a∥2. Thus, applying the standard Gaussian Lipschitz concentration inequality (Boucheron et al., 2013, Theorem 5.6), we obtain that with probability at least 1 − ρ 2 , $$\left|\mathbf{z}^{\top}\Sigma^{\frac{1}{2}}\mathbf{a}\right|\leq\mathbb{E}\left|\mathbf{z}^{\top}\Sigma^{\frac{1}{2}}\mathbf{a}\right|+\|\Sigma^{\frac{1}{2}}\mathbf{a}\|_{2}{\sqrt{2\log({\frac{2}{\rho}})}}=\|\Sigma^{\frac{1}{2}}\mathbf{a}\|_{2}\left({\sqrt{\frac{2}{\pi}}}+{\sqrt{2\log({\frac{2}{\rho}})}}\right).$$ The first statement in the theorem follows by a union bound. The Op form follows by Lemma A.1 and the fact that Tr(A) ≥ ∥A∥F ≥ ∥A∥op for positive semi-definite matrices A. The following lemma shows how to convert high-probability bounds with both sub-exponential and sub-Gaussian tails into a Op statement. Lemma A.1. If a sequence of random variables Xn *satisfies* $$X_{n}\leq A_{n}+B_{n}{\sqrt{\log{\frac{b_{n}}{\rho}}}}+C_{n}\log{\frac{c_{n}}{\rho}}\qquad{\mathrm{~with~probability~at~least~}}1-\rho,$$ then the sequence of variables Xn is $${\mathcal{O}}_{p}\left(\operatorname*{max}\left(A_{n},B_{n}\operatorname*{max}({\sqrt{\log b_{n}}},1),C_{n}\operatorname*{max}(\log c_{n},1)\right)\right).$$ Proof. The definition of a sequence of random variables Xn being Op(Qn), where Qn is a sequence of scalars, means that the sequence Xn Qn is stochastically bounded: for each ρ, there is some constant Rρ such that Pr(Xn/Qn ≥ Rρ) ≤ ρ. Here, we have for all n with probability at least 1 − ρ that Xn max An, Bn max(√log bn, 1), Cn max(log cn, 1) ≤ An + Bn qlog bn ρ + Cn log cn ρ max An, Bn max(√log bn, 1), Cn max(log cn, 1) = An + Bn qlog bn + log 1ρ + Cn hlog cn + log 1ρ i max An, Bn max(√log bn, 1), Cn max(log cn, 1) ≤ An + Bn √log bn + Bn qlog 1ρ + Cn log cn + Cn log 1ρ max An, Bn max(√log bn, 1), Cn max(log cn, 1) ≤ 1 + 1 + rlog 1ρ + 1 + log 1ρ . Thus the desired bound holds with Rρ = 3 + qlog 1ρ + log 1ρ . ## A.3 Quality Of The Private Minimizer: Worst-Case Analysis We first show uniform convergence of the privatized MMD to the non-private MMD. Proposition A.3. *Suppose that* Φ : X → R D *is such that* supx∥Φ(x)∥ ≤ B*, and let* MMD ^kΦ (D, D˜) = ∥µΦ(D) − µΦ(D˜) + n∥ for n ∼ N (0, Σ). Then, with probability at least 1 − ρ *over the choice of* n, $$\sup_{\mathcal{D},\mathcal{D}}\left|\widetilde{\mathrm{MMD}}_{\mathbb{E}_{B}}^{2}\left(\mathcal{D},\widetilde{\mathcal{D}}\right)-\mathrm{MMD}_{\mathbb{E}_{B}}^{2}\left(\mathcal{D},\widetilde{\mathcal{D}}\right)\right|$$ $$\leq\mathrm{Tr}(\Sigma)+4B\sqrt{\mathrm{Tr}(\Sigma)}+2\left(\left\|\Sigma\right\|_{F}+2B\|\Sigma\|_{\mathcal{D}}^{\frac{1}{2}}\right)\sqrt{\log(\frac{4}{\rho})}+2\|\Sigma\|_{\mathrm{exp}}\log(\frac{4}{\rho})=\mathcal{O}_{\mathcal{P}}\left(\mathrm{Tr}(\Sigma)+B\sqrt{\mathrm{Tr}(\Sigma)}\right),$$ _where the supremum is taken over all distributions, including the empirical distribution of datasets $\mathcal{D},\widetilde{\mathcal{D}}$ of any size._ *Proof.* Introducing $\mathbf{z}\sim\mathcal{N}(0,I_D)$ such that $\mathbf{n}=\Sigma^{1/2}\mathbf{z}$, we have that . sup D,D˜ MMD ^2 kΦ (D, D˜) − MMD2 kΦ (D, D˜) ≤ sup D,D˜ z ⊤Σz + 2 a ⊤Σ 1/2z ≤ z ⊤Σz + 2 sup a:∥a∥≤2B a ⊤Σ 1/2z ≤ z ⊤Σz + 2 sup a:∥a∥≤2B ∥a∥∥Σ 1/2z∥ = z ⊤Σz + 4B∥Σ 1/2z∥. $$\square$$ To apply Gaussian Lipschitz concentration, we also need to know that $$\mathbb{E}\|\Sigma^{1/2}\mathbf{z}\|\leq{\sqrt{\mathbb{E}\|\Sigma^{1/2}\mathbf{z}\|^{2}}}={\sqrt{\operatorname{Tr}(\Sigma)}};$$ the exact expectation of a χ variable with more than one degree of freedom is inconvenient, but the gap is generally not asymptotically significant. Then we get that, with probability at least 1 − ρ 2 , $$\|\Sigma^{1/2}{\bf z}\|\leq\sqrt{\mathrm{Tr}(\Sigma)}+\|\Sigma\|_{o p}^{1/2}\sqrt{2\log\frac{2}{\rho}}.$$ Again combining with the bound of Equation 14, we get the stated bound. This bound is looser than in Proposition A.2, since the term depending on a is now "looking at" z in many directions rather than just one: we end up with a χ(dim(Σ)) random variable instead of χ(1). We can use this uniform convergence bound to show that the minimizer of the private loss approximately minimizes the non-private loss: Proposition A.4. Fix a target dataset D. For each θ in some set Θ, fix a corresponding D˜θ*; in particular,* Θ = R p could be the set of all generator parameters, and D˜θ either the outcome of running a generator gθ *on a fixed* set of "seeds," D˜θ = {gθ(zi)} n i=1*, or the full output distribution of the generator* Qgθ . Suppose that Φ : X → R D *is such that* supx∥Φ(x)∥ ≤ B*, and let* MMD ^kΦ (D, D˜) = ∥µΦ(D) − µΦ(D˜) + n∥ for n ∼ N (0, Σ)*. Let* θe ∈ arg minθ∈Θ MMD ^2 kΦ (D, D˜θ) *be the private minimizer, and* θb ∈ arg minθ∈Θ MMD ^2 kΦ (D, D˜θ) the non-private minimizer. For any ρ ∈ (0, 1), with probability at least 1 − ρ *over the choice of* n, $$\mathrm{MDD}_{\mathbf{z}_{\theta}}^{2}\left(\mathcal{D},\mathcal{D}_{\theta}\right)-\mathrm{MMD}_{\mathbf{z}_{\theta}}^{2}\left(\mathcal{D},\mathcal{D}_{\varphi}\right)$$ $$\leq2\mathrm{Tr}(\Sigma)+8B\sqrt{\mathrm{Tr}(\Sigma)}+4\left(\|\Sigma\|_{F}+2B\|\Sigma\|_{\mathcal{D}}^{\frac{1}{2}}\right)\sqrt{\log(\frac{2}{\rho})}+4\|\Sigma\|_{\mathrm{op}}\log(\frac{2}{\rho})=\mathcal{O}_{p}\left(\mathrm{Tr}(\Sigma)+B\sqrt{\mathrm{Tr}(\Sigma)}\right).$$ Proof. Let α represent the uniform error bound of Proposition A.2. Applying Proposition A.2, the definition of θe, then Proposition A.2 again: $\mathrm{MMD}_{\mathbf{k_{\phi}}}^{2}(\mathcal{D},\widehat{\mathcal{D}_{\mathbf{\phi}}})\leq\widehat{\mathrm{MMD}_{\mathbf{k_{\phi}}}^{2}}(\mathcal{D},\widehat{\mathcal{D}_{\mathbf{\phi}}})+\alpha\leq\widehat{\mathrm{MMD}_{\mathbf{k_{\phi}}}^{2}}(\mathcal{D},\widehat{\mathcal{D}_{\mathbf{\phi}}})+\alpha\leq\mathrm{MMD}_{\mathbf{k_{\phi}}}^{2}(\mathcal{D},\widehat{\mathcal{D}_{\mathbf{\phi}}})+2\alpha.$ ## A.4 Quality Of The Private Minimizer: "Optimistic" Analysis The preceding analysis is quite "worst-case," since we upper-bounded the MMD by the maximum possible value everywhere. Noticing that the approximation in Proposition A.2 is tighter when ∥Σ 1/2a∥ is smaller, we can instead show an "optimistic" rate which takes advantage of this fact to show tighter approximation for the minimizer of the noised loss. In the "interpolating" case where the generator can achieve zero empirical MMD, the convergence rate substantially improves (generally improving the squared MMD from Op(1/m) to Op(1/m2)). Proposition A.5. In the setup of Proposition A.4, we have with probability at least 1 − ρ over n *that* MMD2 kΦ (D, D˜θe ) − MMD2 kΦ (D, D˜θb ) ≤ 9Tr(Σ) + 4pTr(Σ) MMDkΦ (D, D˜θb ) + 2 9∥Σ∥F + 2q2∥Σ∥op MMDkΦ (D, D˜θb ) rlog 2ρ + 18∥Σ∥op log 2ρ = Op Tr(Σ) + pTr(Σ) MMDkΦ (D, D˜θb ) . Proof. Let's use MMD( \ θ) to denote MMDkΦ (D, D˜θ), and MMD( ^ θ) for MMD ^kΦ (D, D˜θ). For all θ, we have that $$\widehat{\mathrm{MMD}}^{2}(\mathbf{\theta})-\widehat{\mathrm{MMD}}^{2}(\mathbf{\theta})\big{|}\leq\mathbf{z}^{\top}\Sigma\mathbf{z}+2\big{|}(\mu^{\Phi}(\mathcal{D})-\mu^{\Phi}(\widetilde{\mathcal{D}}))^{\top}\Sigma^{1/2}\mathbf{z}\big{|}$$ $$\leq\mathbf{z}^{\top}\Sigma\mathbf{z}+2\,\widehat{\mathrm{MMD}}(\mathbf{\theta})\|\Sigma^{1/2}\mathbf{z}\|.$$ Thus, applying this inequality in both the first and third lines, $$\widehat{\mathrm{MMD}}^{2}(\widehat{\mathbf{\theta}})\leq\widehat{\mathrm{MMD}}^{2}(\widehat{\mathbf{\theta}})+\mathbf{z}^{\top}\Sigma\mathbf{z}+2\,\widehat{\mathrm{MMD}}(\widehat{\mathbf{\theta}})\|\Sigma^{1/2}\mathbf{z}\|$$ $$\leq\widehat{\mathrm{MMD}}^{2}(\widehat{\theta})+\mathbf{z}^{\top}\Sigma\mathbf{z}+2\,\widehat{\mathrm{MMD}}(\widehat{\mathbf{\theta}})\|\Sigma^{1/2}\mathbf{z}\|$$ $$\leq\widehat{\mathrm{MMD}}^{2}(\widehat{\theta})+2\mathbf{z}^{\top}\Sigma\mathbf{z}+2\left(\widehat{\mathrm{MMD}}(\widehat{\mathbf{\theta}})+\widehat{\mathrm{MMD}}(\widehat{\theta})\right)\|\Sigma^{1/2}\mathbf{z}\|;$$ in the second line we used that MMD( ^ θe) ≤ MMD( ^ θb). Rearranging, we get that $$\widehat{\mathrm{MMD}}^{2}(\widehat{\pmb{\theta}})-\beta\,\widehat{\mathrm{MMD}}(\widehat{\pmb{\theta}})-\gamma\leq0,$$ MMD \2(θe) − β MMD( \ θe) − γ ≤ 0, (15) where $$\beta=2\|\Sigma^{1/2}\mathbf{z}\|\geq0$$ $$\gamma=\widehat{\mathrm{MMD}}^{2}(\widehat{\partial})+2\mathbf{z}^{\top}\Sigma\mathbf{z}+2\,\widehat{\mathrm{MMD}}(\widehat{\partial})\|\Sigma^{1/2}\mathbf{z}\|\geq0.$$ The left-hand side of Equation 15 is a quadratic in MMD( \ θ˜) with positive curvature; it has two roots, at $\widehat{\mathrm{IMID}}(\hat{\boldsymbol{\theta}})$. $${\frac{\beta}{2}}\pm{\sqrt{\left({\frac{\beta}{2}}\right)^{2}+\gamma}}.$$ Thus the inequality Equation 15 can only hold in between the roots; the root with a minus sign is negative, and so does not concern us since we know that MMD( \ θ) ≥ 0. Thus, for Equation 15 to hold, we must have $$\widehat{\mathrm{MMD}}(\widehat{\mathbf{\theta}})\leq\frac{\beta}{2}+\sqrt{\left(\frac{\beta}{2}\right)^{2}+\gamma}$$ $$\widehat{\mathrm{MMD}}^{2}(\widehat{\mathbf{\theta}})\leq\frac{\beta^{2}}{4}+\left(\frac{\beta}{2}\right)^{2}+\gamma+\beta\sqrt{\left(\frac{\beta}{2}\right)^{2}+\gamma}$$ $$\leq\gamma+\beta^{2}+\beta\sqrt{\gamma}.$$ $\square$ $$\mathbf{l}_{*}$$ Also note that $$\gamma=\widehat{\mathrm{MMD}}^{2}(\widehat{\rho})+2\mathbf{z}^{\top}\Sigma\mathbf{z}+2\,\widehat{\mathrm{MMD}}(\widehat{\rho})\|\Sigma^{1/2}\mathbf{z}\|\leq\left(\widehat{\mathrm{MMD}}(\widehat{\rho})+\sqrt{2}\|\Sigma^{1/2}\mathbf{z}\|\right)^{2}.$$ Thus, substituting in for β and γ then simplifying, we have that $$\widehat{\mathrm{\widetilde{\mathrm{MMD}}}}^{2}(\widehat{\mathbf{\theta}})\leq\widehat{\mathrm{\widetilde{\mathrm{MMD}}}}^{2}(\widehat{\mathbf{\theta}})+(6+2\sqrt{2})\mathbf{z}^{\top}\Sigma\mathbf{z}+4\|\Sigma^{1/2}\mathbf{z}\|\,\widehat{\mathrm{\widetilde{\mathrm{MMD}}}}(\widehat{\mathbf{\theta}}).$$ Using the same bounds on z ⊤Σz and ∥Σ 1/2z∥ as in Proposition A.3, and 6 √2 < 9, gives the claimed bound. ## B Extended Implementation Details Repository. Our code is available at https://github.com/ParkLabML/DP-MEPF; the readme files contain further instructions on how to run the code. ## B.1 Hyperparameter Settings For each dataset, we tune the generator learning rate (LRgen) and moving average learning rate (LR*mavg*) from choices 10−kand 3 · 10−k with k ∈ {3, 4, 5} once for the non-private setting and once at ϵ = 2. The latter is used in all private experiments for that dataset, as shown in 7. After some initial unstructured experimentation, hyperparameters are chosen with identical values across dataset shown in 8 For the Cifar10 DP-MERF baseline we tested random tuned random features dimension d ∈ {10000, 50000}, random features sampling distribution σ ∈ {100, 300, 1000}, learning rate decay by 10% every e ∈ {1000, 10000} iterations and learning rate 10−k with k ∈ {2, 3, 4, 5, 6}. Results presented use d = 500000, σ = 1000, e = 10000, k = 3. The DP-GAN baseline for Cifar10 and CelebA uses the same generator as DP-MEPF with 3 residual blocks and a total of 8 convolutional layers and is paired with a ResNet9 discriminator which uses Groupnorm instead of Batchnorm to allow for per-sample gradient computation. We pre-train the model non-privately to convergence on downsampled imagenet in order to maintain the same resolution of 32 × 32 and then fine-tune the model for a smaller number of epochs. In case of the CelebA 64 × 64 data we add another residual block to discriminator and generator to account for the doubling in resolution. The base multiplier for number of feature maps is reduced from 64 to 50 to lessen the increase in number of weights. Results are the best scores of a grid-search over the following parameters at ϵ = 2, which is then used in all settings: number of epochs {1, 10, 30, 50} generator and discriminator learning rate separately for 10−kand 3 · 10−k with k ∈ {3, 4, 5}, clip-norm {10−3, 10−4, 10−5, 10−6}, batch size {128, 256, 512} and, as advised in Bie et al. (2023), number of discriminator updates per generator {1, 10, 30, 50}. The chosen values are given in table 9. | Table 8: Hyperparameters fixed across datasets Parameter Value (ϕ1)-bound 1 (ϕ2)-bound 1 iterations (MNIST & FashionMNIST) 100,000 batch size (MNIST and FashionMNIST) 100 iterations (Cifar10 & CelebA) 200,000 batch size (Cifar10 and CelebA) 128 seeds 1,2,3,4,5 | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Table 7: Learning rate hyperparameters across datasets | | | | | | |----------------------------------------------------------|------------|----------|----------|----------|----------| | Dataset | ε | (ϕ1, ϕ2) | (ϕ1) | | | | | LRgen | LRmavg | LRgen | LRmavg | | | MNIST | ε = ∞ | 10−5 | 10−3 | 10−5 | 10−3 | | | ε < ∞ | 10−5 | 10−4 | 10−5 | 10−4 | | FashionMNIST | ε = ∞ | 10−5 | 10−3 | 10−5 | 10−3 | | | ε < ∞ | 10−4 | 10−3 | 10−4 | 10−3 | | {∞, 10, 5} | 3 · 10−4 | 10−4 | 3 · 10−4 | 10−4 | | | CelebA32 | {2, 1} | 3 · 10−4 | 3 · 10−4 | 3 · 10−4 | 3 · 10−4 | | | {0.5, 0.2} | 10−3 | 3 · 10−4 | ·10−3 | 3 · 10−4 | | {∞, 10, 5} | 3 · 10−4 | 10−4 | 3 · 10−4 | 3 · 10−4 | | | CelebA64 | {2, 1} | 3 · 10−4 | 10−3 | 3 · 10−4 | 3 · 10−4 | | | {0.5, 0.2} | 10−3 | 10−3 | 10−3 | 10−3 | | {∞, 10, 5} | 10−3 | 3 · 10−4 | 10−3 | 10−4 | | | Cifar10 labeled | {2, 1} | 10−3 | 10−2 | 10−3 | 10−2 | | | {0.5, 0.2} | 10−3 | 10−2 | 10−3 | 10−2 | | {∞, 10, 5} | 10−3 | 10−3 | 10−3 | 10−3 | | | Cifar10 unlabeled | {2, 1} | 10−3 | 10−3 | 10−3 | 10−3 | | | {0.5, 0.2} | 10−3 | 10−3 | 10−3 | 10−3 | ## C Detailed Tables | Cifar10 | CelebA 32 × 32 | CelebA 64 × 64 | | | | | |-------------------------|------------------|------------------|----------|-------------|----------|----------| | | ϵ ∈ {0.2, 0.5} | ϵ = 1 | ϵ = 2 | ϵ ∈ {5, 10} | | | | LRgen | 10−4 | 3 · 10−4 | 3 · 10−4 | 3 · 10−4 | 3 · 10−4 | 3 · 10−4 | | LRdis | 10−3 | 3 · 10−4 | 10−3 | 3 · 10−4 | 10−3 | 10−3 | | batch size | 512 | 512 | 512 | 512 | 512 | 512 | | epochs | 10 | 10 | 10 | 10 | 10 | 10 | | discriminator frequency | 10 | 10 | 30 | 30 | 10 | 10 | | clip norm | 10−5 | 10−4 | 10−5 | 10−5 | 10−4 | 10−5 | | ϵ = ∞ | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.2 | | | |--------------|------------------|------------|------------|------------|------------|------------|------------| | MLP | DP-MEPF (ϕ1, ϕ2) | 91.4 ± 0.3 | 89.8 ± 0.5 | 89.9 ± 0.2 | 89.3 ± 0.3 | 89.3 ± 0.6 | 79.9 ± 1.3 | | DP-MEPF (ϕ1) | 88.2 ± 0.6 | 88.8 ± 0.1 | 88.4 ± 0.5 | 88.0 ± 0.2 | 87.5 ± 0.6 | 77.1 ± 0.4 | | | LogReg | DP-MEPF (ϕ1, ϕ2) | 84.6 ± 0.5 | 83.4 ± 0.6 | 83.3 ± 0.7 | 82.9 ± 0.7 | 82.5 ± 0.5 | 75.8 ± 1.1 | | DP-MEPF (ϕ1) | 81.4 ± 0.4 | 80.8 ± 0.9 | 80.8 ± 0.8 | 80.5 ± 0.6 | 79.0 ± 0.6 | 72.1 ± 1.4 | | Below we present the results from the main paper with added a ± b notation, where a is the mean and b is the standard deviation of the score distribution across three independent runs for MNIST and FashionMNIST and 5 independent runs for Cifar10 and CelebA. Table 10: Downstream accuracies of our method for MNIST at varying values of ϵ Table 9: Hyperparameters of DP-GAN for Cifar10 and CelebA Table 11: Downstream accuracies of our method for FashionMNIST at varying values of ϵ | ϵ = ∞ | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.2 | | | |--------------|------------------|------------|------------|------------|------------|------------|------------| | MLP | DP-MEPF (ϕ1, ϕ2) | 74.4 ± 0.3 | 76.0 ± 0.4 | 75.8 ± 0.6 | 75.1 ± 0.3 | 74.7 ± 1.1 | 70.4 ± 1.9 | | DP-MEPF (ϕ1) | 73.8 ± 0.5 | 75.5 ± 0.6 | 75.1 ± 0.8 | 75.8 ± 0.7 | 75.0 ± 1.8 | 69.0 ± 1.5 | | | LogReg | DP-MEPF (ϕ1, ϕ2) | 74.3 ± 0.1 | 75.7 ± 1.0 | 75.2 ± 0.4 | 75.8 ± 0.4 | 75.4 ± 1.1 | 72.5 ± 1.2 | | DP-MEPF (ϕ1) | 72.8 ± 0.5 | 75.5 ± 0.1 | 75.5 ± 0.8 | 76.4 ± 0.8 | 76.2 ± 0.8 | 71.7 ± 0.4 | | ϵ = ∞ ϵ = 10 ϵ = 5 ϵ = 2 ϵ = 1 ϵ = 0.5 ϵ = 0.2 DP-MEPF (ϕ1, ϕ2) 18.5 ± 0.5 17.4 ± 0.7 17.5 ± 0.6 18.1 ± 0.8 19.0 ± 0.5 21.4 ± 1.3 25.8 ± 2.1 DP-MEPF (ϕ1) 16.6 ± 0.7 16.3 ± 0.9 16.9 ± 0.5 16.5 ± 0.8 17.2 ± 0.9 21.8 ± 1.0 25.5 ± 1.1 Table 12: CelebA FID scores 32 × 32 (lower is better) Table 13: CelebA FID scores 64 × 64 (lower is better) ϵ = ∞ ϵ = 10 ϵ = 5 ϵ = 2 ϵ = 1 ϵ = 0.5 ϵ = 0.2 DP-MEPF (ϕ1, ϕ2) 18.6 ± 1.0 18.5 ± 1.2 19.1 ± 0.9 18.4 ± 1.0 19.0 ± 1.2 21.4 ± 1.3 26.8 ± 1.5 DP-MEPF (ϕ1) 16.3 ± 0.4 17.4 ± 1.4 16.5 ± 0.8 16.9 ± 1.1 18.4 ± 0.9 20.4 ± 0.8 27.7 ± 2.1 | ϵ = ∞ | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | | |------------------|------------|------------|------------|------------|------------|------------|-------------| | DP-MEPF (ϕ1, ϕ2) | 27.7 ± 3.1 | 29.1 ± 1.3 | 30.0 ± 0.8 | 39.5 ± 1.9 | 54.0 ± 1.3 | 76.4 ± 3.9 | 226.0 ± 5.4 | | DP-MEPF (ϕ1) | 28.4 ± 2.8 | 30.3 ± 2.1 | 35.6 ± 5.8 | 42.0 ± 3.0 | 56.5 ± 3.4 | 92.0 ± 3.5 | 268.3 ± 8.5 | | ϵ = ∞ | ϵ = 10 | ϵ = 5 | ϵ = 2 | ϵ = 1 | ϵ = 0.5 | ϵ = 0.2 | |---------|----------|---------|---------|---------|-----------|-----------| Table 14: FID scores for synthetic *labelled* CIFAR-10 data (generating both labels and input images) Table 15: Test accuracies (higher better) of ResNet9 trained on CIFAR-10 synthetic data with varying privacy guarantees. When trained on real data, test accuracy is 88.3% DP-MEPF (ϕ1, ϕ2) 57.5 ± 3.3 53.0 ± 2.8 43.9 ± 1.2 40.0 ± 1.9 28.5 ± 4.5 18.0 ± 1.0 16.2 ± 1.8 DP-MEPF (ϕ1) 43.8 ± 3.5 40.7 ± 4.2 32.3 ± 6.2 42.6 ± 1.6 33.2 ± 2.6 18.8 ± 4.0 15.3 ± 2.5 ## D Encoder Architecture Comparison We are testing a large collection of classifiers of different sizes from the torchvision library including VGG, ResNet, ConvNext and EfficientNet. For each we look at unlabelled Cifar10 generation quality in the non-DP setting and at ϵ = 0.2. In each architecture, we use all activations from convolutional layers with a kernel size greater than 1x1. We list the number of extracted features along with the achieved FID score in table 17, where each result is the best result obtained by tuning learning rates. As already observed in dos Santos et al. (2019), we find that VGG architectures appear to learn particularly useful features for feature matching. We hypothesized that in the private setting other architectures with fewer features might outperform the VGG model, but have found this to not be the case. ## E Public Dataset Comparison We pretrained a ResNet18 using ImageNet, CIFAR10, and SVHN as our public data, respectively. We then used the perceptual features to train a generator using CelebA dataset as our private data at a privacy budget of ϵ = 0.2 and obtained the scores shown in 18. These numbers reflect our intuition that as long as the public data is sufficiently similar and contains more complex patterns than private data, e.g., transferring the knowledge learned from ImageNet as public data to generate CelebA images as private data, the learned features from public data are useful enough to generate good synthetic data. In addition, as the public data become more simplistic (from CIFAR10 to SVHN), the usefulness of such features reduces in producing good CelebA synthetic samples. | Table 16: FID scores for synthetic unlabelled CIFAR-10 data ϵ = ∞ ϵ = 10 ϵ = 5 ϵ = 2 ϵ = 1 ϵ = 0.5 | ϵ = 0.2 | | | | | | | |------------------------------------------------------------------------------------------------------|------------|------------|------------|------------|------------|------------|------------| | DP-MEPF (ϕ1, ϕ2) | 38.5 ± 1.5 | 38.8 ± 2.0 | 37.0 ± 1.1 | 38.7 ± 2.2 | 43.0 ± 1.1 | 49.4 ± 1.0 | 67.3 ± 2.6 | | DP-MEPF (ϕ1) | 38.5 ± 0.6 | 38.5 ± 0.4 | 38.6 ± 1.3 | 40.1 ± 1.1 | 45.1 ± 2.4 | 49.8 ± 2.5 | 72.3 ± 4.0 | | Encoder model | #features | ϵ = ∞ | ϵ = 0.2 | | | |-----------------|-------------|----------|-----------|-------|-------| | (ϕ1, ϕ2) | (ϕ1) | (ϕ1, ϕ2) | (ϕ1) | | | | VGG19 | 303104 | 35.0 | 37.0 | 56.2 | 85.8 | | VGG16 | 276480 | 37.4 | 39.8 | 71.4 | 72.2 | | VGG13 | 249856 | 38.2 | 36.7 | 78.1 | 71.2 | | VGG11 | 151552 | 40.5 | 41.6 | 65.4 | 68.6 | | ResNet152 | 429568 | 71.8 | 70.1 | 88.6 | 87.9 | | ResNet101 | 300544 | 77.5 | 73.7 | 76.0 | 82.4 | | ResNet50 | 196096 | 71.5 | 76.3 | 90.0 | 105.1 | | ResNet34 | 72704 | 74.8 | 103.3 | 89.1 | 93.1 | | ResNet18 | 47104 | 84.9 | 85.0 | 104.5 | 95.2 | | ConvNext large | 161280 | 141.9 | 232.0 | 138.2 | 221.6 | | ConvNext base | 107520 | 142.4 | 248.0 | 157.0 | 200.1 | | ConvNext small | 80640 | 171.7 | 212.3 | 169.9 | 202.9 | | ConvNext tiny | 52992 | 145.6 | 218.2 | 138.8 | 205.8 | | EfficientNet L | 119168 | 200.9 | 229.0 | 243.7 | 226.6 | | EfficientNet M | 68704 | 185.7 | 177.1 | 218.7 | 227.1 | | EfficientNet S | 47488 | 157.5 | 160.6 | 171.5 | 186.7 | | ImageNet | Cifar10 | SVHN | | |------------|-----------|--------|------| | FID | 47.6 | 51.2 | 65.2 | Table 17: Unlabeled Cifar10 FID scores achieved with different feature extractors. VGG models yield the best results in both non-DP and high DP settings. Table 18: FID scores achieved for CelebA 32 × 32 using a ResNet encoder with different public training sets ## F Training Dp-Mepf Without Auxiliary Data While DP-MEPF is explicitly designed to take advantage of available public data, one might wonder how the method performs if no such data is available. The following experiment on CIFAR10 explores this scenario. We assume that a privacy budget of ϵ = 10 is given. We use some part of the budget for feature extractor (i.e. the classifier) training and the rest of the budget for the generator training. For a feature extractor, we have trained ResNet-20 classifiers with DP-SGD at three different levels of ϵ ∈ {2, 5, 8} for classifying the CIFAR10 dataset. We set the clipping norm to 0.01 and trained the classifiers for 7, 49 and 98 epochs, respectively. Their test accuracies are 38.4%, 49.5% and 54.0% respectively. We also include scores for DP-MEPF applied to the untrained Classifier, denoted as ϵ = 0. Then, we train the generator using these four sets of features to generate CIFAR10 images, where each generator training uses the rest of the budget, i.e., ϵ ∈ {8, 5, 2} and ϵ = 10 for the untrained classifier. We tune the learning rate in each of the four settings and keep other hyperparameters at default values. | ϵ for feature extractor training | for generator training | FID | |------------------------------------|--------------------------|-------| | 0 | 10 | 111.1 | | 2 | 8 | 127.0 | | 5 | 5 | 90.8 | | 8 | 2 | 119.0 | Table 19: DP-MEPF results in CIFAR10 when using a DP feature extractor (ϵ = 0 is an untrained extractor) As expected, in Table 19 we see a considerable increase in the FID score, compared to DP-MEPF with public data. A balanced allocation of privacy budget with ϵ = 5 each for classifier and generator training yields the best result at an FID score of 90.8 and performs significantly better than just using a randomly initialized feature extractor, which only achieves a score of 111.1. For comparison: with public data DP-MEPF achieves an FID score of 37.0 at ε = 5, highlighting the importance of such data to our method. ## G Additional Plots Below we show samples from our generated MNIST and FashionMNIST data in Figure 7 and Figure 8 respectively. ![24_image_0.png](24_image_0.png) Figure 7: MNIST samples produced with DP-MEPF (ϕ1, ϕ2) at various levels of privacy ![25_image_0.png](25_image_0.png) Figure 8: Fashion-MNIST samples produced with DP-MEPF (ϕ1, ϕ2) at various levels of privacy
Review 1: Summary: The paper proposes an interesting method to using public data for training a synthetic data generator with differential privacy guarantees. The method consists of a feature extractor (e.g. Resnet) trained on public data, and an estimator which computes empirical means and variances of the latent representations of the network. This estimator is evaluated on private data, and it is normalised and Gaussian noise is added and thus it is made DP. As a last step, a generator network is trained to generate synthetic data that is aimed to fit the internal representations of the aforementioned DP estimate that was evaluated on private data. By post-processing the resulting generator is DP with the same guarantees as the estimator (compositions -> DP accounting). Strengths and Weaknesses: Stenghts: - The idea seems interesting and novel and widely applicable, the feature extractor can be any network as well as the generator. - Strong experimental results. The lack of existing baselines for DP generative models that would exploit synthetic data underlines the novelty of this work. - Theoretical guarantees that illustrate the utility preservation of the MMD estimator (obtained from squared norm of the difference of the estimator with synthetic data and the private estimator). The results are qualitative but seem to illustrate well how things scale w.r.t. noise level and number of private data points etc. Suggestion: perhaps as a next step here could be e.g. sigmas and dataset sizes replaced by deltas and epsilons. Weaknesses: - Mainly small things, mentioned below in "Requested Changes". I hope you can answer my question regarding the early stopping. Requested Changes: Bigger questions I hope you can answer: - You use early stopping based on the DP FID score estimator. Is early stopping rigorously DP? If you think not, could you mention that this part is heuristic, or if it is, could you shortly explain why it is? Using e.g. that Wang et al. (2019) analysis (RDP - analysis), I think early stopping is DP for sure for the epsilons and deltas corresponding to the max number of iterations, but if you compute the epsilons using the number of iterations up to the early stopping, then I am unsure. - Could you clarify: how are the functions $\phi_1$ and $\phi_2$ normalized? Do you clip or just normalise them to 1, by enforcing or by implementing this in the network activations such that it happens automatically? - Why is the sensitivity of the concatenated vector $[ \phi_1, \phi_2]$ 2? Isn't it $\sqrt{2}$ if you use 2-norm? It seems that in most experiments the alternative that uses $[ \phi_1, \phi_2]$ is worse than the one that uses only $\phi_1$, perhaps this might be due to the utility loss you incur here? - Or if there is another reason, for the " $[ \phi_1, \phi_2]$ - alternative " being worse in the experiments, could you clarify? Small things: - p.3: The notation $\Phi$ for the concatenated features (values and their squares) does not seem to be used later (you could remove it I guess). - Perhaps change the following sentence somehow, it felt a bit out of place in the beginning: " However, one of the properties of DP is composability, meaning data can be accessed more than once – but the level of privacy guarantee degrades each time. " - p. 9: " achieves and FID"-> "an FID" - p. 10: something strange with a linebreak Broader Impact Concerns: no concerns. ================================================== Review 2: Summary: The paper proposes a new method for differentially private synthesis of data in the vision domain. The proposed method relies on existence of huge public training data, and differs from prior work by proposing a differentially private method that relies on MMD loss (minimizing the maximum mean discrepancy). The proposed method has three main steps, in the first step a feature extracted is trained on the public corpus, in the second step the mean embedding of the data distribution is computed and privatized (using the gaussian mechanism, and both first and second moment are used as features). Then, a generator which transforms noise to images is trained using the private features, with the MMD loss. There is also a private early stopping scheme that they suggest, which looks at a proxy FID score to determine when to stop training. The authors then evaluate the proposed method on MNIST, FashionMNIST, CIFAR and CelebA datasets, using FID score and classification accuracy of downstream tasks. Strengths and Weaknesses: Strengths: 1. At least to my knowledge and based on the claim in the paper, this is the first work in private data synthesis for images that that uses auxiliary data and provides meaningful synthesized samples of the CelebA dataset. 2. There are ample experiments to ablate different aspects of the proposed method, such as the early stopping. There is also Downstream task evaluation. Weakness (further elaborated in the requested changes section): 1. There is no study of the disparate impact that the proposed DP method has on the different subgroups. This could be a good discussion for the broader impacts statement, which is also missing from the paper. 2. I think section three could be expanded a bit, to better explain the proposed method, specifically the discussion surrounding the generation of both labels and images together, I think that part needs to be clarified more, maybe even added to figure 1. 3. The results, although there is plenty of them, are not well elaborated, specially Table 3. Requested Changes: To address the mentioned weaknesses, I propose the following changes to help improve the manuscript: 1. I think at least a discussion of broader impacts is needed, with focus on the fairness of the method. I have elaborated more in the section regarding broader impacts, but to summarize, there needs to be a study where either the FID/downstream classification accuracy for each subgroup in the data, potentially based on race for the CelebA dataset, is measured for both private and non-private data synthesis, to see how badly the underrepresented races suffer from loss in utility, and how it is exacerbated using the proposed DP method, and how it compares with one of the baselines (either DP-SGD trained GAN or DP-sinkhorn). 2. The results could be elaborated further/better, right now the progression is a bit confusing, since its based on dataset and not methods. Also, one very confusing thing is how the CelebA results are explained. Although I would assume Table 3 is a big part of the results, it is very briefly elaborated. Also, DP-sinkhorn is mentioned in the text for this dataset but there are no results in any tables for it, it is unclear where the FID 190 comes from or what the setup is there. I also have some doubts about the results in Table 3: why is there no 64*64 version of the DP-GAN? Also, the gap between DP-GAN and proposed method is a bit too high (FID 40 vs 11 for epsilon 10), how much tuning was done for the DP-GAN? 3. As mentioned above section 3 could be better explained. And one final minor issue, there is something odd about the formatting in page 10, beneath figure 7, the lines break at a strange place. Broader Impact Concerns: The paper does not have a broader impacts statement which is concerning, given the sensitivity of the issues covered. My main concern in terms of broader impact is the fairness/disparity in drop of utility and representation of different subgroups in the synthesized samples. As shown by numerous prior work, differential privacy has disparate impact on different subgroups, especially a negative impact on underrepresented groups. This is particularly important in the case of this paper where the main goal is synthesis of data which is as similar as possible to the private data distribution, and one of the case studies is CelebA data, which is faces. In such a case, it is important to make sure different people from different subgroups are presented. I think a discussion/study of how the proposed method differs from normal DP-SGD+GAN training in terms of representation of different subgroups is needed. ================================================== Review 3: Summary: This paper proposes to generate images in a differentially private manner using a three-stage pipeline. First, the authors pre-train a feature extractor network on a public dataset. Then, using this public network, they extract the mean embedding of the private dataset (from which to generate) and privatize it using the Gaussian mechanism. Finally, using post-processing, the authors leverage this private mean embedding to train (without DP) a network that generates images with a mean embedding that is close to the privatized one (minimizing the Maximum Mean Discrepancy or MMD). The authors claim that the proposed method allows them to generate "reasonable" CIFAR-10, CelebA, MNIST and FashionMNIST-like images with high privacy regimes ($\varepsilon = 2$ for CIFAR-10), as illustrated in Figures 2-7. They also provide a theoretical analysis showing that the noise added once to privatize the mean embedding of the private dataset does not hurt the generative model's convergence. Strengths and Weaknesses: Strengths: - The proposed method abstracts away the need to inject noise directly during the generation process (for instance with a GAN or a Diffusion model) by using a privatized mean embedding of the data distribution to model. - The authors experiment on 4 datasets and present convincing generation results for decently small values of the privacy budget epsilon. - The proposed method (DP-MEPF) generates synthetic images that consistently degrade less the accuracy on downstream tasks than previous state-of-the-art methods. - The authors propose a private early stopping method (Section 3.2) using a private proxy of the FID score. We thank the authors for paying attention to potential privacy leaks occurring through training hyperparameters, even though the authors could add that the bandwidth of such hyperparameters is generally too small to incur a meaningful privacy loss [1] Weaknesses: - The proposed method critically relies on (a tuple of) privatized mean embeddings that represent the true data distribution to model. Could the authors elaborate on the soundness of this method for modelling extremely large image datasets? Indeed, the largest considered dataset is CelebA (200k samples), whereas recent generative models leverage LAION (400 million images). In that case, would the mean embedding be enough to capture the diversity of a large-scale dataset? - The authors could add some discussion on the assumption of a publicly available dataset. Following [2], we can question whether public datasets generally scrapped from the web could be (1) considered as privacy-preserving and (2) available for certain tasks. Moreover, acquiring information only through the DP channel is indeed more difficult but could alleviate these concerns (the reviewer does not ask for additional experiments but rather to give more substance to this discussion in the paper). - The authors could mention recent/concurrent work on Differentially Private Stable Diffusion [3] - The authors motivate the need for synthetic image generation mainly through the angle of downstream tasks, for instance by generating private images that still maintain the classification accuracy of a given model. However, another angle is to be able to generate images from the original distribution that are private and consider these images as the end product. In that sense, a potential follow-up would be to perform empirical attacks on the generated images to see how close to the original training set these are. [1] "Private selection from private candidates", Liu and Talwar, 2019. [2] "Considerations for Differentially Private Learning with Large-Scale Public Pretraining", Tramer et al, 2022 [3] "Differentially Private Diffusion Models", Dockhorn et al, 2022 Requested Changes: In summary, the paper does match claims with evidence. We also thank the authors for providing code (which the reviewer read in part but did not execute). Note that the reviewer did not read the proofs nor check the correctness of the theoretical analysis in Section 4 but rather focused on the method, the experimental results and setup. For a detailed list of requested changes, see the "Strengths and Weaknesses" section. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: The authors made convincing answers to the reviewers' questions and revised their manuscript accordingly. After the discussion phase, there is a consensus on accepting the paper. ==================================================
# Accurate Neural Network Pruning Requires Rethinking Sparse Optimization Denis Kuznedelev ∗*denis.kuznedelev@skoltech.ru* Skoltech & Yandex Eldar Kurtic∗*eldar.kurtic@ista.ac.at* IST Austria Eugenia Iofinova∗*eugenia.iofinova@ista.ac.at* IST Austria Elias Frantar elias.frantar@ista.ac.at IST Austria Alexandra Peste alexandra.peste@ista.ac.at IST Austria Dan Alistarh dan.alistarh@ista.ac.at IST Austria & Neural Magic Reviewed on OpenReview: *https: // openreview. net/ forum? id= vgthYeRBAF* ## Abstract Obtaining versions of deep neural networks that are both highly-accurate and highly-sparse is one of the main challenges in the area of model compression, and several high-performance pruning techniques have been investigated by the community. Yet, much less is known about the interaction between sparsity and the standard stochastic optimization techniques used for training sparse networks, and most existing work uses standard dense schedules and hyperparameters for training sparse networks. In this work, we examine the impact of high sparsity on model training using the standard computer vision and natural language processing sparsity benchmarks. We begin by showing that using standard dense training recipes for sparse training is suboptimal, and provide evidence that this results in *undertraining*, loosely defined as using a suboptimal number of passes over the training data. We present training recipes for mitigating this issue for both sparse pre-training of vision models (e.g. ResNet50/ImageNet) and sparse fine-tuning of language models (e.g. BERT/GLUE), achieving state-of-the-art results in both settings in the high-sparsity regime, and providing detailed analyses for the difficulty of sparse training in both scenarios. Our work sets a new benchmark in terms of the accuracies that can be achieved under high sparsity, and should inspire further research into improving sparse model training, to reach higher accuracies under high sparsity, but also to do so efficiently. ## 1 Introduction The difficulty of finding deep neural networks (DNNs) that are both *accurate and sparse*, i.e., closely match the accuracy of dense models while having a large majority of their weights set to zero, is one of the main ∗These authors contributed equally. Author order was determined by experimental load (highest first). challenges in the area of model compression. On the conceptual side, this challenge connects to fundamental questions related to the *Lottery Ticket Hypothesis (LTH)* (Frankle & Carbin, 2019; Frankle et al., 2019), which posited that such sparse masks exist, and that, in some cases, they can even allow accurate training of sparse models *from scratch*, that is, applying the sparsity mask at initialization. On the practical side, obtaining highly-sparse and accurate networks can lead to significant practical speedups, both for inference (NeuralMagic, 2022) and training (Nikdan et al., 2023). In this work, we focus on the challenge of obtaining accurate DNNs in the high-sparsity regime, and investigate the barriers to obtaining **highly-sparse** and **highly-accurate** variants of DNNs for standard vision and language tasks. We mainly focus on two tasks that are, arguably, the standard benchmarks for sparsity in vision and language, respectively: image classification using the ResNet50 model (He et al., 2016) on the ImageNet-1K dataset (Russakovsky et al., 2015), e.g. Hoefler et al. (2021); Dong et al. (2017); Gale et al. (2019); Evci et al. (2020); Singh & Alistarh (2020); Savarese et al. (2021); Peste et al. (2021), and language modelling using the BERT-base model (Devlin et al., 2019) on the GLUE benchmark datasets (Wang et al., 2018), e.g. Sanh et al. (2020); Hoefler et al. (2021); Kurtic & Alistarh (2022); Kurtic et al. (2022). Roughly, for both benchmarks, it is known that sparsities lower than 90% can be achieved with approximately 1% accuracy loss relative to the original dense model, but accuracy rapidly decreases in the 90-95% range (Hoefler et al., 2021; Evci et al., 2020), and that decreases are drastic at higher (≥ 95%) sparsities (Singh & Alistarh, 2020; Kurtic et al., 2022). In this paper, we investigate the reasons behind this accuracy loss due to sparsity, mainly targeting *high sparsity*, i.e. sparsities between 90% and 99%, studying the difficulty of obtaining accurate models in this range, and providing ways to circumvent it. Contribution. We begin from the observation that, when training sparse models from scratch, following standard *dense training* schedules, *sparse models show clear evidence of undertraining*: both their accuracy and loss fail to saturate under standard number of training epochs, and their output continues to have high entropy. This finding suggests that maximization of the accuracy of sparse models requires longer training than the dense optimization recipes adopted in most of the work on model sparsification. Motivated by this observation, we propose a combination of techniques which can mitigate the inherent difficulty of sparse training. As a consequence, we significantly improve on the best currently-known sparsityaccuracy trade-offs on standard sparsity benchmarks for both image classification and language modelling. Specifically, we consider the two classic sparsification benchmarks in this setting: image classification (ResNet50 on ImageNet) and language modelling (BERT or SQuAD and GLUE tasks), and set new state-of-the-art results in both settings. For image classification, we obtain, for the first time, highly-accurate sparse versions of ResNet50, such as a 90%-sparse model with 78.5% Top-1 accuracy, a 95%-sparse model with 77.7% Top-1 accuracy, and a 98%-sparse model with 75.2% Top-1 accuracy. In the same context, the highest accuracy for a dense model we could obtain is 78.78% Top-1. In addition, we show that stable results can be obtained even for extreme sparsities (e.g., 99%). We also extend our results to language models from the BERT family Devlin et al. (2019), where we show that on challenging modeling tasks, as measured by the drop in accuracy relative to the dense model, similar techniques can improve results by 3 points in accuracy relative to the current state-of-the-art results at 90% sparsity. We arrive at these results as follows: - We perform an analysis of the output and training characteristics of models trained using current state-of-the-art techniques, relative to their dense counterparts. First, we show that sparse DNNs obtained via many current techniques behave similarly to dense models that have been *undertrained*, i.e. executed for a sub-optimal number of epochs: specifically, they tend to have high output entropy (alternatively, low "output confidence"), which correlates with their reduced accuracy. - This analysis provides clear evidence that optimizing *sparse models* is more difficult than standard dense optimization (Evci et al., 2019). This observation stands in contrast to the fact that most current sparsification techniques use standard *dense* training recipes for fine-tuning and recovery. We exploit this insight to obtain state-of-the-art accuracy for sparse models in two popular scenarios: sparse pretraining, i.e. training sparse models from scratch, and *sparse transfer*, i.e. optimizing a sparse pretrained model onto a target transfer task. - In the *sparse pretraining* scenario, illustrated by the standard task of obtaining a highly-sparse ResNet50 model on the ImageNet dataset, we show that we can circumvent the difficulty of sparse training by adopting a variant of the Alternating Compressed/Decompressed (AC/DC) algorithm (Peste et al., 2021) for training sparse DNNs, which has convergence guarantees for sparse recovery. Specifically, we show that, by scaling the algorithm's runtime, we can obtain state-of-the-art results for sparse pretraining on ImageNet for ResNet50 and MobileNet models, and reach extremely high sparsities (e.g. 98% and 99%) while still obtaining stable converegence. Moreover, only sparse models benefit from extended training, whereas dense models start to overfit with longer training. - We complement our analysis with a study of the *sparse transfer* scenario, popular in language modeling. Here, the difficulty of sparse training can manifest itself through both *undertraining* and *overfitting*, depending on the parametrization of the chosen transfer learning recipe, specifically on the training length. We address this via a modified version of the *gradual layer unfreezing* approach (Howard & Ruder, 2018), tailored towards a *sparse* transfer learning scenario, which allows us to obtain state-of-the-art results in the case of BERT-base transfer on downstream datasets. Discussion. Overall, our results suggest that the difficulty of obtaining highly-accurate sparse models is closely linked to the difficulty of accurate sparse optimization using current state-of-the-art techniques. Specifically, our work improves the best known results on standard sparsity benchmarks, for both sparse pretraining and sparse finetuning, both in terms of absolute accuracy, and accuracy loss relative to the dense baseline. Moreover, we observe the following: - Achieving state-of-the-art sparsity-vs-accuracy trade-offs currently requires using significant additional computational complexity and more epochs for training the sparse models, relative to the best known dense training methods. In turn, this suggests that sparse optimization may be inherently harder than its dense counterpart. - Reaching high validation accuracy for sparse models is strongly linked to reaching low training loss, which occurs at a slower rate for sparse models in the case of SGD-based optimization. At the same time, we do observe overfitting behavior (decrease of validation accuracy w.r.t. increased training time), especially at lower sparsities. - To further investigate the hardness of sparse optimization, we perform an analysis of the loss landscape of accurate sparse networks both in terms of sharpness and loss interpolation / mode connectivity. We observe that achieving highly-accurate sparse networks from initialization requires overcoming multiple loss barriers, and that sparsity mask exploration may be a key ingredient for overcoming these barriers. - In addition, we investigate the relationship between standard hyperparameters such as weight decay, on the one hand, and sparsity structure, on the other. We find that careful setting of weight decay is critical for accurate sparsity, and that weight decay additionally induces (partial) structured sparsity in highly-sparse models. This provides a first explanation to the emergence of structured sparsity in unstructured sparse networks, which has been observed previously (Peste et al., 2021; Iofinova et al., 2022; Yin et al., 2023). In summary, our results set new accuracy thresholds for sparse models using relatively simple techniques, and can be reproduced in reasonable time on commodity hardware. As such, they should serve as motivation for the community to investigate improved *sparsity-aware* optimization techniques, specifically allowing for faster, more efficient accuracy recovery. ## 2 Background And Motivation Formally, accurate pruning is a constrained optimization problem which, given the objective of minimizing a loss function L, aims to find an "optimal" sparsity mask M⋆ with a given target sparsity s, fraction of zero parameters,1 and weights W⋆such that $$(1)^{\frac{1}{2}}$$ $$\bigstar|_{\star}$$ M⋆,W⋆ = argminmask M, weights W [L(M ⊙ W)] with nnz(M) ≤ (1 − s)numel(M). (1) In its general form, where both the optimal mask and the optimal weights must be determined, this question is NP-hard (Blumensath & Davies, 2008), even for simple least-squares loss. However, this problem can be made tractable if we assume a fixed mask, or we wish to approximate the sparsity of the mask, e.g. Axiotis & Sviridenko (2020). In the context of pruning, this procedure can be logically split into 1) determining the sparsity mask M, which is often separated from 2) the optimization procedure over the non-zero weights. For instance, the standard Lottery Ticket Hypothesis (LTH) approach (Frankle & Carbin, 2019; Chen et al., 2021b) is to first identify a "ticket" mask by performing weight selection by magnitude over an already-trained model, followed by SGD-based finetuning, using the initialization and the same set of hyperparameters as for dense training. While several novel ways of choosing or updating the sparsity mask choice (step 1), have been investigated, by and large, for the second step, that of optimizing the remaining weights, sparse training methods largely emulate the hyperparameters of the baseline dense model, including the total number of training epochs (Gale et al., 2019; Jayakumar et al., 2020; Evci et al., 2020; Peste et al., 2021). However, it is intuitive that the problem of simultaneously finding near-optimal weights and a near-optimal mask may be harder to solve than a standard dense loss minimization problem. This naturally motivates an in-depth investigation into the following questions: *can optimization over sparse* networks converge with the same rate as over dense ones?, and are dense training recipes well-suited for sparse training? In this paper, we provide evidence that the answer to both questions is *negative*, suggesting that improved optimizers may be required for obtaining accurate sparse models under reduced training budgets. ## 3 Related Work The goal of most sparsification methods (Hoefler et al., 2021) is to create a DNN that is as accurate as possible, while maximizing sparsity. This goal can be achieved via different strategies: for instance, *post-training* sparsification methods assume a *pretrained dense model*, from which weights are removed either in a single step (one-shot) or progressively (gradual pruning). By contrast, in *sparse training methods*, parameters are pruned from the model during training from scratch, either close to initialization (Evci et al., 2020; Jayakumar et al., 2021; Lee et al., 2019; Vanholder, 2017; Schwarz et al., 2021), or progressively as the model is trained (Han et al., 2015; Gale et al., 2019; Savarese et al., 2021). A subset of sparse training methods are *dynamic*, in the sense that weights may be reintroduced during training (Evci et al., 2020; Peste et al., 2021). In this work, we mainly focus on the *high-sparsity regime*, in which *sparse training* methods provide the best known accuracy-vs-sparsity trade-offs. We begin by discussing methods for computer vision. Here, Gradual Magnitude Pruning (GMP), in which the lowest-magnitude weights are progressively removed throughout training, is a common baseline. In Gale et al. (2019), GMP was shown to be competitive with more sophisticated pruning methods on image classification models when properly tuned; similar results were later shown for language models (Kurtic & Alistarh, 2022). The RigL pruning method (Evci et al., 2020) is a common, high-performing benchmark for dynamic sparse training. In this method, the weights are initially pruned to the target sparsity and trained through (sparse) stochastic gradient descent. Periodically, however, the mask is updated by selecting weights with the highest magnitude gradient, subject to a limit on the total mask change. The authors run this method using two sparsity targets - Uniform sparsity, where all layers (except the first and last) are pruned to the same 1A *sparsity mask* is simply a binary tensor of the same dimensions as the model, with 0 at the indices of the sparsified entries, and 1 at the other indices. proportion, and Erdős–Rényi Kernel (ERK), where layer sparsity targets are set to optimize performance. The authors test their method in the normal-schedule (100 epochs on Imagenet) and 5x training regime, getting results of 73.22% validation accuracy and 74.63% validation accuracy at 95% global (ERK) and uniform sparsity, respectively when training for 500 epochs. Extending training to 10 000 epochs (100x) further allowed the authors to produce 99% sparse (ERK) ResNet50 models with 68.5% accuracy on ImageNet. RigL was improved by combining it with ITOP (Liu et al., 2021), by altering training hyperparameters to encourage mask exploration, which was shown to improve RigL results at medium (80-90%) sparsity (see Table 1). The GraNet(Liu et al.) method extends this approach by making it gradual - either starting from a dense network and performing RigL-like updates while simultaneously increasing sparsity until the target sparsity is achieved, or by starting by a partially sparse (50%) network and doing the same. Models trained with the sparse-init version of GraNet achieved 72.3% validation accuracy at 95% global sparsity when training for 100 epochs. The AC/DC pruning method (Peste et al., 2021) alternates dense and sparse pruning phases of several epochs each, effectively co-training dense and sparse models. Similar to RigL, AC/DC was tested in the normal and extended training regime, creating 95% globally sparse ImageNet-1K ResNet50 models with 73.14% top-1 accuracy, and 68.44% top-1 accuracy 98% sparse models after 100 epochs of training. The authors also experiment with extended training times, producing 95% uniform sparsity ResNet50 models with 74.3% validation accuracy. Another successful pruning approach is the combination of Powerpropagation (Schwarz et al., 2021) with TopKAST (Jayakumar et al., 2021). In Powerpropagation, the weights are reparametrized using f(w) = w|w| α−1 for α > 1, effectively encouraging high-magnitude weights to continue increasing while lower-magnitude weights are driven toward 0. Top-KAST is a dynamic sparse training scheme that is largely similar to RigL: in Top-KAST, for a target density D, the gradients of the top D′ < D weights are computed in each backpropagation round and allowed to accumulate, and the masks at these respective sparsities are periodically recomputed. The combination of these two methods results in 77.16% accuracy at 90% sparsity when trained for 3x their baseline of 32K steps. The recently-proposed ST-3 method (Vanderschueren & Vleeschouwer, 2023) uses the technique of soft thresholding with straight-through gradient estimation to progressively prune neural networks while allowing weights to move more smoothly between the dense and sparse states. Using this method, the authors were able to achieve ImageNet accuracies of between 74% and 75% at 96% sparsity on ResNet-50, depending on the method variant used. Additionally, some works have explored the difficulty of sparse optimization (Evci et al., 2019), explored changes to dense training pipelines to improve sparse training (ab Tessera et al., 2021; Jaiswal et al., 2022), or focused on the creation of sparse accurate neural networks outside of the standard paradigm of simultaneously searching for the optimal mask and weights. Notably, (Liu et al., 2021) explored the impact of mask exploration (that is, the total number of explored parameters at any point in sparse training), demonstrating the positive effect of extended training on both sparse network performance and total number of explored parameters. The STEP (Lu et al., 2023) learning method explored the interaction of sparsity with the Adam optimizer (Kingma & Ba, 2015), finding that the masked weights lead to an incorrect estimate of the second moment during optimization; these observations led to their proposal of a new method for N:M sparsity that alleviates these effects. The GradMax method (Evci et al., 2022) initializes a small neural network, then uses predicted gradients to grow a larger (while still small) neural network by adding additional neurons. The problem of the sparse optimization also emerges in the context of the optimal transport (Peyré & Cuturi, 2020; Cuturi, 2013). It is often desirable to have a sparse assignment between the source and target domain. Several works have studied this question with applications to color transfer (Blondel et al., 2018) and sparse mixture of experts (Liu et al., 2022). Language models For language models, the standard compression pipeline consists of two stages: pretraining on a large unlabeled text corpus followed by fine-tuning on a small and labeled task-specific dataset. The former is used to capture the statistical patterns and relationships that exist in the natural language, allowing the model to recognize and even generate various linguistic patterns. The latter stage, fine-tuning on a downstream task, builds on top of the learned representations and adapts them to solve specific tasks such as text classification, sentiment analysis, duplicate detection, etc. Sparsity has been explored in both stages: pruning during pre-training and pruning during fine-tuning. Methods such as Movement Pruning (Sanh et al., 2020) and The Optimal BERT Surgeon (oBERT) (Kurtic et al., 2022) make use of first-order (gradient) and second-order (curvature) information, respectively, to guide pruning decisions during the fine-tuning stage. However, recent work observed two problems with this approach when applied on small datasets: (Zhang et al., 2022) demonstrated instability due to large variability of estimated importance scores, while (Huang et al., 2021) observed overfitting despite reduced expressive power due to pruning. From the practical side, this approach is less favorable for practitioners as it requires extensive pruning-domain knowledge to properly configure pruners for each model and dataset combination. Therefore, the main focus of our work is on the other stage, leveraging already sparse pre-trained models with transfer learning to obtain highly accurate task-specific fine-tuned models. Prune Once for All (Prune OFA) (Zafrir et al., 2021) and oBERT (Kurtic et al., 2022) represent the most recent state-of-the-art techniques addressing this problem. Both methods first prune the model during the pre-training stage, and then apply transfer learning with a fixed sparsity mask to obtain fine-tuned and sparse models on various downstream datasets. Impact of sparsification beyond top-1 accuracy An open area of research is the impact that pruning in general, and the choice of pruning method in particular, have on the resulting model. In particular, pruned models have been shown to be more vulnerable to bias (Hooker et al., 2019; 2020; Iofinova et al., 2023), and worse at prediction accuracy under distribution shift (Liebenwein et al., 2021). Recent works by (Chen et al., 2021a) and (Iofinova et al., 2023) investigate the effects of pruning on a range of model trustworthiness metrics and find mixed results, with sparse neural networks having better calibration, but exaggerating spurious patterns in the existing data. Finally, works such as (Iofinova et al., 2022) and (Chen et al., 2021b) investigated the capacity of sparse CNNs for domain adaptation via transfer learning, finding that sparsely trained networks can have more generalizable features than dense ones. ## 4 The Difficulty Of Sparse Pretraining Of Vision Models 4.1 Sparse Vision Models Show Evidence Of "Undertraining" We begin by investigating correlations between the performance and output characteristics of dense and sparse models trained for increasing number of epochs. Specifically, we examine two key metrics: Top-1 accuracy on the validation/test set, and the *loss on the train set* for the trained models, while scaling the number of training epochs and the associated hyperparameters correspondingly. We will examine the evolution of these metrics as we increase the number of epochs, in parallel for sparse and dense models. We specifically look out for instances where sparse models behave similar to dense ones that have been trained for a sub-optimal (too low) number of epochs, a phenomenon we simply call *undertraining*. Metrics: Output Loss and Entropy. We examine model fit to the training data via the training loss at the last epoch of training. For multiclass classification, traditionally cross-entropy loss is used. We compute the cross-entropy loss by taking the softmax over the vector of output values of the network and then applying the standard cross-entropy formula, where the cross-entropy is taken with respect to the correct label distribution for the model (1 for the correct class and 0 otherwise). For an output of a network outputting a vector Z = (z1, z2*, ..., z*C ) of size C with correct label L, cross-entropy CE is given by the following formula: CE(Z) = − log ![5_image_0.png](5_image_0.png) . (2) Intuitively, the loss of the model is related to its "confidence" in the correct predictions, or equivalently could be said to measure the model's fit to the training data. We use this quantity as it is conventional with respect to measuring model convergence; however, we consider the *entropy* computed over *test* data to be an equally good choice, as it captures the model's confidence in its predictions (whether they be correct or incorrect) and can be computed on a *test* set, without access to the correct labels. We show in Appendix C that the two metrics give nearly identical results in our experiments. We expect a sufficiently large and well-trained model to have low loss on the training data. However, as is conventionally known, continued training on dense and low-sparsity models can result in in overfitting will lower these metrics further. Here we investigate whether the same rule applies to models with higher sparsity. Experimental setup. We examine validation accuracy on trained sparse and dense ResNet50 models on the ImageNet-1K dataset and compare it to the train loss on the last epoch of training. All models were trained using standard hyperparameters (see Appendix A) except for the difference in number of training of epochs in different experiments. Measurements represent the final accuracy and training loss after the last training epoch, so each marker on the plots represents a full experiment, rather than an intermediate checkpoint. Sparse models were pruned with Alternating Compression/Decompression (AC/DC) (Peste et al., 2021), likewise adjusting the total number of compressed and decompressed phases to the total run length. AC/DC was chosen as it was among the best-performing methods across all sparsities and training lengths (see Section 4.2.1). We use the FFCV library (Leclerc et al., 2022) for fast loading of the data. In contrast with other runs presented in this paper, we do not use progressive resizing or label smoothing, as the latter explicitly encourages high prediction entropy and cross-entropy. In these experiments, we keep the first and last layer dense. ![6_image_0.png](6_image_0.png) Figure 1: Average validation accuracy (left), and Train loss at final epoch (right) for sparse and dense ImageNet models trained for different numbers of epochs. The highest-accuracy model for each sparsity level is highlighted with a larger marker. The cross-entropy loss and entropy level of the dense model is also shown with a dashed line, to simplify comparison. Results. Our results are presented in Figure 1. On the left panel, we show the top-1 accuracy of the final models. We observe that 80% and 90% sparse models reach an accuracy that is similar to dense models, even slightly exceeding dense accuracy at 80% sparsity. Accuracy drops at higher sparsity (95% and 98%); this is consistent with the original AC/DC paper and results from other pruning methods. Examining accuracy across epoch budgets, and focusing on the best-performing model for each sparsity level, we observe the following: - *The dense model requires the fewest epochs* (88) to reach its best validation accuracy, and extending the training recipe results in *worse performance for the dense model*, commonly known as "overfitting." - *The outcome changes if we examine sparse models*, for which the ideal training length increases with sparsity: 250 epochs for 80% and 90% sparse models, and at least 500 epochs—the longest schedule we tried in this experiment—for 95% and 98% sparse models. Even at 500 epochs, the accuracy increase/loss decrease for these models does not appear to be saturated. We now examine loss on the training dataset in more detail. We observe that the training loss always decreases when the number of training epochs is increased. However, sparse models trained for the standard 100 epochs show similar training loss to dense models trained for far fewer epochs. For example, dense models trained for 24 epochs have a similar training loss to 95% sparse models trained for 100 epochs, while dense models trained for 100 epochs have a slightly lower training loss than 80%-sparse models trained for 250 epochs. When we consider the best-performing models at their respective sparsity levels, we find that they have similar training loss to the top-performing dense model, in cases where such low loss/entropy can be achieved in a reasonable number of epochs (at 80% and 90% sparsity); at all sparsities. Further, continuing to train sparse models to until training loss drops below the training loss of the optimal dense model results in worse validation accuracy (overfitting). Discussion. These findings further support our hypothesis that, due to the inherent difficulty of sparse optimization, using standard training recipes is not sufficient for sparse training, and suggests that longer training may mitigate this effect. Further, results suggest that training loss can act as a useful criterion to validate that the sparse models are properly trained2, with the latter criterion being also useful in cases where access to train data, or to any labeled data, is not possible. In Appendix Section C, we consider the alternative Validation entropy metric, and present a similar validation on the Celeb-A dataset. ## 4.2 State-Of-The-Art Accurate Sparse Pre-Training On Imagenet The above observations for vision models suggest that successful sparse training may benefit from an extended training schedule. We now build on this idea to achieve state-of-the-art results for the classic ResNet50/ImageNet benchmark by using an extended-training version of AC/DC, which we call AC/DC++. ## 4.2.1 Comparing Sparse Training Methods For the following experiments, we start from the current state-of-the-art training approach for ResNet50/ImageNet training, using the Pytorch FFCV package (Leclerc et al., 2022). In addition to an extended training schedule, we use label smoothing and a linear learning rate decay with warm-up, as well as progressive resizing of input samples 3. In this context, we implemented three leading sparse training methods: Gradual Magnitude Pruning (GMP) (Zhu & Gupta, 2017), RigL (Evci et al., 2020) and AC/DC (Peste et al., 2021), which we execute for an increasing number of epochs between 100 (standard) and 1000 (10x). For this, we scale the original training schedule proportionally, following the proportions employed by the original methods. For this experiment, models are compressed to 80%, 90%, and 95% sparsity. Following the most common experimental setup, we prune all weights in convolutional and linear layers (including input convolution and classification head). The exact training recipe is presented in detail in Appendix A. We note that each of the experiments presented in the paper takes less than a day on a standard 8-GPU NVIDIA RTX 3090 server. The results, in terms of accuracy and loss vs number of training epochs are presented in Figure 2 for 95% sparsity and in Figure P.9. Results. The results show a strong correlation between how well the methods achieve reduction in loss and their validation accuracy. This reinforces the point that sparse training methods saturate slower, both in terms of training loss and validation accuracy. This has also been investigated by prior work: Gale et al. (Gale et al., 2019) found that extended training did improved results for GMP in some cases, while RigL (Evci et al., 2020) and Powerpropagation (Schwarz et al., 2021) found diminishing improvements. At the same time, 2The 98% sparse model will likely never reach the entropy of the optimal dense model, suggesting that the accuracy may continue to improve with very long training schedules. In fact, the authors of RigL trained a 99% sparse model for 100 times the dense training time and were not able to saturate its accuracy. See www.github.com/google-research/rigl\# extended-training-results. 3We follow the setup from the FFCV ImageNet example repository for ResNet50. ![8_image_0.png](8_image_0.png) Figure 2: (**left**) Validation accuracy on ImageNet-1k vs number of epochs for different sparse training methods. (**right**) Training loss on ImageNet-1k vs number of epochs for different sparse training methods. we notice a significant difference between methods: specifically, AC/DC starts at a slightly better accuracy point, and consistently outperforms other methods both in terms of loss achieved, and in terms of validation accuracy, as we increase training time. (This is consistent with the AC/DC original results, executed at 100 epochs (Peste et al., 2021).) We observe that this correlates with the theoretical computational cost (FLOPs) of the methods: AC/DC will use more FLOPs than other methods due to the dense training phases, while GMP uses more FLOPs than RigL due to gradually increasing sparsity. In turn, this could also be correlated with the amount of mask exploration performed by the algorithm during training. At low sparsity RigL performs slightly better than GMP, but for higher sparsity GMP appears to perform better. For the smallest 80%, 90% AC/DC reaches a saturation point, whereas in all other setups model performance continues to improve with training budget. 4.2.2 Sparsity-vs-Accuracy Results ![8_image_1.png](8_image_1.png) Figure 3: Comparison of Accuracy change from dense baseline as a function of Inference FLOPs for leading sparse training methods, under uniform sparsity constraints (**left**) and global sparsity constraints (**right**). Due to a lack of a standard benchmark, global and Erdős–Rényi Kernel (ERK) sparsity constraints were grouped together. Both sparsity schedules of AC/DC++ (with all layers sparsified and with the first and last layer kept dense) are plotted together. Goals and Metrics. Based on these results, in this section, we aim to improve the best known sparsityversus-accuracy trade-offs by performing a thorough ablation over sparsities and training length parameters. We compare our results to the highest-performing previously published sparse training methods. In particular, | Top-1 accuracy (%) | ∆ Accuracy | Sparsity | Remaining | Inference FLOPs | | | |------------------------------------------------------------------------|------------------------------------|------------|-------------|-------------------|--------|------| | Method | Dense (D) Sparse (S) 100 × (S−D) D | (%) | # of params | prop. of dense | | | | Sparse Training AC/DC (Peste et al., 2021) | 76.8 | 75.03 | -1.77 | 90 | 2.56 M | 0.18 | | GraNet(s0 = 0.5) (Liu et al.) | 76.80 | 74.5 | -1.3 | 90 | - | 0.20 | | Powerpropagation + Top-KAST FLD (Schwarz et al., 2021) | 76.8 | 75.23 | -1.57 | 90 | - | - | | Powerpropagation + Top-KAST ERK (Schwarz et al., 2021) | 76.80 | 75.74 | -1.06 | 90 | - | 0.24 | | RIGL ERK 1x (Evci et al., 2020) | 76.80 | 73.00 | -4.94 | 90 | - | 0.24 | | RIGL-ITOP ERK 1x (Liu et al., 2021) | 76.80 | 73.82 | -2.98 | 90 | - | 0.24 | | ST-3 (Vanderschueren & Vleeschouwer, 2023) | 77.10 | 75.28 | -1.82 | 90 | - | 0.24 | | STR (Kusupati et al., 2020) | 77.01 | 74.31 | -3.51 | 90.23 | 2.49 M | - | | Variational Dropout (Molchanov et al., 2017) | 76.69 | 73.84 | -3.72 | 90.27 | 2.49 M | - | | Post-training sparsification Global Magnitude (Singh & Alistarh, 2020) | 77.01 | 75.15 | -2.42 | 90 | 2.56 M | - | | WoodFisher (Singh & Alistarh, 2020) | 77.01 | 75.21 | -2.34 | 90 | 2.56 M | - | | Extended sparse training AC/DC++ 5x (this work) | 78.78 | 78.49 | -0.29 | 90 | 2.60 M | 0.2 | | AC/DC++ FLD 5x (this work) | 78.78 | 78.6 | -0.18 | 90 | 4.45 M | 0.22 | | GMP FLD 1.5x (Gale et al., 2019) | 76.69 | 75.16 | -1.53 | 90 | - | - | | GraNet(s0 = 0.5) 2.5x (Liu et al.) | 76.80 | 76.4 | -0.4 | 90 | - | 0.20 | | Powerpropagation+Top-KAST ERK 3x(Schwarz et al., 2021) | 76.80 | 77.16 | +0.36 | 90 | - | 0.24 | | RIGL ERK 5x (Evci et al., 2020) | 76.80 | 76.42 | -0.38 | 90 | - | 0.24 | | RIGL-ITOP ERK 5x (Liu et al., 2021) | 76.80 | 75.50 | -1.30 | 90 | - | 0.24 | | Sparse Training AC/DC (Peste et al., 2021) | 76.8 | 73.14 | -3.66 | 95 | 1.28 M | 0.11 | | GraNet(s0 = 0.5) (Liu et al.) | 76.80 | 72.3 | -6.5 | 95 | - | 0.12 | | Powerpropagation + Top-KAST FLD(Schwarz et al., 2021) | 76.8 | 73.25 | -3.55 | 95 | - | - | | RIGL ERK 1x (Evci et al., 2020) | 76.80 | 70.00 | -8.85 | 95 | - | 0.12 | | ST-3 (Vanderschueren & Vleeschouwer, 2023) | 77.10 | 74.46 | -2.64 | 95 | - | 0.13 | | STR (Kusupati et al., 2020) | 77.01 | 70.40 | -8.58 | 95.03 | 1.27 M | - | | Variational Dropout (Molchanov et al., 2017) | 76.69 | 71.81 | -6.36 | 94.94 | 1.30 M | - | | Post-training sparsification Global Magnitude (Singh & Alistarh, 2020) | 77.01 | 71.72 | -6.29 | 95 | 1.28 M | - | | WoodFisher (Singh & Alistarh, 2020) | 77.01 | 72.12 | -6.89 | 95 | 1.28 M | - | | M-FAC (Frantar et al., 2021) | 77.01 | 72.6 | -4.41 | 95 | 1.28 M | - | | Extended sparse training AC/DC++ 10x (this work) | 78.78 | 77.27 | -1.48 | 95 | 1.33 M | 0.13 | | AC/DC++ FLD 10x (this work) | 78.78 | 77.7 | -1.08 | 95 | 3.28 M | 0.14 | | GMP FLD 1.5x (Gale et al., 2019) | 76.69 | 72.71 | -3.98 | 95 | 1.28 M | - | | RIGL ERK 5x (Evci et al., 2020) | 76.80 | 74.63 | -2.17 | 95 | 1.28 M | 0.12 | | Sparse training AC/DC (Peste et al., 2021) | 76.8 | 68.44 | -9.36 | 98 | 0.7 M | 0.06 | | ST-3 (Vanderschueren & Vleeschouwer, 2023) | 77.10 | 70.46 | -6.64 | 98 | - | 0.07 | | STR (Kusupati et al., 2020) | 77.01 | 70.40 | -8.58 | 98 | - | - | | Variational Dropout (Molchanov et al., 2017) | 76.69 | 64.52 | -15.87 | 98.57 | 0.36 M | - | | Post-training sparsification M-FAC (Frantar et al., 2021) | 77.01 | 67.5 | -9.51 | 98 | - | - | | WoodFisher (Singh & Alistarh, 2020) | 77.01 | 65.55 | -11.46 | 98 | 0.51M | - | | Extended sparse training AC/DC++ 10x (this work) | 78.78 | 74.06 | -4.72 | 98 | 0.51 M | - | | AC/DC++ FLD 10x (this work) | 78.78 | 76.6 | -2.28 | 98 | 2.58 M | 0.09 | | Sparse training ST-3 (Vanderschueren & Vleeschouwer, 2023) | 77.10 | 63.88 | -13.22 | 99 | - | 0.04 | | Extended sparse training AC/DC++ FLD 10x (this work) | 78.78 | 72.7 | -6.08 | 99 | 2.34 M | 0.06 | | RIGL ERK 5x (Evci et al., 2020) | 76.80 | 61.86 | -15.94 | 99 | - | 0.05 | | RIGL ERK 10x (Evci et al., 2020) | 76.80 | 63.89 | -12.91 | 99 | - | 0.05 | | RIGL ERK 50x (Evci et al., 2020) | 76.80 | 66.94 | -9.86 | 99 | - | 0.05 | | RIGL ERK 100x (Evci et al., 2020) | 76.80 | 68.15 | -8.65 | 99 | - | 0.05 | Table 1: Comparison between modern sparse training methods on ImageNet-1k with ResNet-50 models for various sparsity targets. ERK refers to the Erdos-Renyi Kernel sparsity distribution. FLD refers to the first and last layers being dense (AC/DC++) or the first layer being dense and the last layer being 80% sparse (GMP, PowerPropagation). we compare an extended-training version of AC/DC, which we call AC/DC++, results reported in the original RigL, ST-3, and Powerpropagation papers, as well as many other existing pruning methods.4. All methods are described in Section 3. In cases where the authors conducted extended training using their method, we present those numbers, and we use the FLOPs-optimized ST-3σ variant. AC/DC++ candidate models were trained for four preset training lengths (1x, 2.5x, 5x and 10x the standard ImageNet training time on ResNet50) at all sparsity levels, and we chose the best results obtained by ablating over the length of the training run. 4The most successful Powerpropagation approach presented in the paper combines this method with Top-KAST; we use this benchmark, as it performs better than Top-KAST alone. As different methods have different computational budgets and different dense baselines, to ensure a fair comparison, we examine the model performance both in terms of *Top-1 Validation accuracy*, and the *Top-1* Validation accuracy difference from the corresponding dense baseline. We use the best available numbers originally reported in the papers introducing the methods for comparisons. Experimental Setup. We compare two pruning regimes. First, we consider *Uniform Pruning*, in which every layer is pruned exactly to the target sparsity, except for the first and last layer, which are left dense. Second, we consider the *Global/Nonuniform Pruning* regime, in which the sparsity budget is set globally. Different works apportion the global budget differently, and also differ with respect to which parts of the network are subject to the global constraint. In particular, Extended GMP (Gale et al., 2019) and Top-KAST do not prune the first layer, prune the last layer to a fixed 80% sparsity, and prune the other layers using a global magnitude criterion. RigL uses an Erdős–Rényi-Kernel distribution for layer sparsity targets, and leaves only the first layer dense. The original AC/DC work uses global sparsity and prunes all convolutional and FC layers. Therefore, to create a more fair comparison, we consider estimated Floating-Point Operations (FLOPs) necessary for inference; these are computed as in (Evci et al., 2020). Using FLOPs also equalizes methods across slight variations in ResNet50 architectures, and so we use it also for the Uniform pruning comparison. In addition, we use two pruning schedules for AC/DC++: one which leaves the first and last layer dense and prunes the remaining layers using a global magnitude criterion, and one that prunes all layers using the global magnitude criterion. We do not ablate between the two, but rather present both sets of results in Figure 3 (jointly) and Table 1 (separately). We emphasize two key points regarding our comparisons: 1. Looking at accuracy alone favors AC/DC++, as it has a higher dense baseline: since we use several recent training innovations, the dense model can reach 78.78% dense accuracy over 100 epochs. Therefore, it becomes more challenging to maintain the performance of the dense model for highly sparse model compared to less-optimized baseline. 2. This is why we also examine *accuracy difference relative to the dense baseline*: this favors other methods, as they are benchmarked against a standard-recipe model that reaches lower 76.8% accuracy (77.1% for ST-3). Results. The results are presented in Figure 3 and Table 1. We observe that, for uniform pruning budgets, the AC/DC++ models outperform other methods, both in terms of absolute and relative validation accuracy. This is true even when we consider extended-training schedules for other methods, although we believe we are the first to systematically investigate the impact of increasing training schedules at these sparsity levels.5 When looking at models trained with global pruning budgets, we observe that AC/DC++ obtains the highest absolute validation accuracy, compared to results reported previously in literature. When considering accuracy change from the dense line, AC-DC++ loses less accuracy than other methods at very high sparsities (lowest FLOPs), despite having the highest-performing dense baseline; at lower sparsity (90%), it is competitive with other extended training methods. ## 4.3 Additional Validations And Ablations We performed additional analysis and ablations, to validate the performance and hyperparameters of the AC/DC++ model, and better understand the factors contributing to its high performance. These studies are summarized briefly below, and available in the Appendix. ## 4.3.1 Additional Evaluations Additional quality evaluations. Having demonstrated that extended training has a strong positive effect on sparse model top-1 test accuracy, we further investigate the impact of extended training on other aspects of model quality. We consider two additional quality metrics: their performance in transfer learning scenarios and robustness to common image perturbations. 5In prior work, RigL executed >5x extended training for a 99%-sparse model only (Evci et al., 2020). Iofinova et al. (2022) demonstrated that equally sparse models with comparable performance on the original task can vary widely in their performance once finetuned on other, smaller transfer tasks. We compare the transfer performance of dense and 95% sparse AC/DC++ models, both trained for 100 and 1000 epochs in two transfer learning regimes: linear finetuning, where the hidden layers of the model are trained only on the larger (ImageNet) task, and only the final FC layer is trained on the transfer task, and full-network finetuning, where all layers are finetuned on the transfer task. We find that extended training improves the transfer performance for both transfer scenarios for 95% sparse models, but is largely neutral for dense models. Full details of the experiment and evaluation are given in Appendix E. We test robustness by measuring model performance on the ImageNet-C dataset (Hendrycks & Dietterich, 2019), which digitally adds 19 types of perturbations to the ImageNet-1K validation set. (Liebenwein et al., 2021) and (Hooker et al., 2019) have found that compressed models are less robust under many types of perturbations, compared to dense models. As before, we consider dense and 95% sparse AC/DC++ models trained for 100–1000 total epochs. We find that robustness to perturbations increases with training time for sparse models, but stays the same for dense ones. Full details of the experiment and evaluation are given in Appendix F. Additional models. In Appendix D,we show that extended training with AC/DC++ produces state-ofthe-art results on the MobileNet-V1 architecture as well. ## 4.3.2 Parameter Ablations AC/DC dense fraction. We note that AC/DC is relatively expensive in terms of training FLOPs as compared to other sparse training methods. In Appendices H and L we investigate if this can be improved by shortening the duration of the decompression phase relative to the compression phase, and by using a lower, but nonzero sparsity during the decompression phases of AC/DC, respectively. We find that, for models with a target sparsity of 95%, spending 50% of the training time in each phase is optimal, consistent with the original model recipe. However, shortening the decompression phase so that the model spends only 20% of the training time in that phase has a small 0.2% accuracy drop. Additionally, we find that, for 95% sparse models, decompression phases that are up to 70% sparse have matching or better performance to the original AC/DC models with 0% sparse decompression phases. Therefore, if minimizing floating-point operations during training is an objective, it is possible to make the training more efficient in that regard. AC/DC phase duration. In Appendix I we confirm that for ResNet50 models trained on ImageNet and assuming equally-sized compression and decompression phases, the 5-epoch phase duration used in the initial paper is optimal. Impact of weight decay. in Appendix L, we consider the impact of weight decay on AC/DC model performance and sparsity. We find that using high values of weight decay (1e-3) results in models that stay largely sparse even during the decompression phase, and that are considerably less accurate. Conversely, using low values of weight decay (1e-5 and 1e-6) result in models with very low sparsity during the decompression phase, and also decreased performance. ## 4.3.3 Additional Analyses Mask Analysis. We investigate the importance of mask updates during sparse training. Specifically, we examine how much masks change as training progresses. We find (see Figure 4 (left)) that for both RigL and AC/DC, masks change substantially more early in the training, with an Intersection/Union scores of 0.3-0.4 for AC/DC 95% sparse models and 0.92-0.95 for RigL during the first 20% of training steps (recall that RigL masks are updated far more frequently than AC/DC masks). Later in the training, masks stabilize, with less change in consecutive masks. Full results are provided in Appendix J, Loss landscape analysis. In Appendix M, we investigate the sharpness of the loss landscape (as measured by an approximation to the highest eigenvalue of the Hessian matrix at the point of convergence). We find (see Figure 4 (left)) that, across all methods, sharpness increases with the length of the training run, indicating that sharper minima require extended training to be reached via SGD. Additionally, sharpness decreases with the increase of sparsity. All sparse training methods attain lower sharpness compared to the dense model. ![12_image_0.png](12_image_0.png) Figure 4: (**left**) Mask IoU between two consecutive checkpoints at 95% sparsity. (**right**) Sharpness (highest eigenvalue) of the loss surface vs number of epochs. Dashed lines correspond to the dense model. Structured sparsity. In Appendix K, we measure the number of channels that have been completely zeroed out during (unstructured) sparse training. We find that AC/DC and AC/DC++ models have a considerable amount of sparse channels, with AC/DC++ models having considerably higher structured sparsity than AC/DC for every sparsity. Comparison with smaller dense model. In Appendix G we confirm that a 95% sparse ResNet50 model substantially outperforms a half-width dense ResNet50 model trained for the same budget. Different sparsity patterns. In Appendix N, we confirm that adding constraints on the sparsity pattern (such as uniform per-layer sparsity and block-4 sparsity) lowers model accuracy. ## 5 The Difficulty Of Sparse Transfer In Language Modelling Next, we extend the analysis to language models, specifically to the very common scenario in which a large language model (BERT-base) is adapted to a specific task via *finetuning*. Thus, here we will examine the impact of sparsity target, number of iterations, loss function, and hyper-parametrization on the optimal recipe for the task of *finetuning* sparse models on the downstream dataset. In the context of our study, this setup naturally leads to the following questions: *"do finetuned sparse* language models suffer from being undertrained on the downstream task?", and "if yes, does the simple recipe of extended training suffice to mitigate the issue?". In this section, we will show that when dense finetuning recipes are used for sparse transfer learning in language models, the resulting models are indeed undertrained and have poor transfer performance. However, we also note an additional difficulty: extended training does not suffice to mitigate the issue, because sparse language models quickly shift from being undertrained to an overfitting regime. The latter is a far larger problem in language understanding tasks than in visual ones, which is likely why we don't observe the same issues with visual transfer learning in Appendix E - there we simply use a long finetuning schedule in all cases. In this section, we explore the problem of balancing underand over-training in sparse language models and propose a sparse finetuning recipe for creating properly tuned sparse models. ## 5.1 Under Standard Dense Transfer Learning Recipes, Sparse Models Are Undertrained Experimental Setup. In our experiments, we make use of open-sourced *sparse* pre-trained BERT-base models obtained by (Kurtic et al., 2022). On top of these, we apply various transfer learning recipes to obtain fine-tuned sparse models on datasets from the popular GLUE benchmark (Wang et al., 2018). For fair comparisons with results from prior work, we employ early stopping for all methods. We provide more details about each dataset in Appendix O. The most popular and widely adopted dense transfer learning recipe consists of fine-tuning all weights with linearly decaying learning rate for as much as two or three epochs on the target downstream task. In Table 2 we present results obtained with this approach when applied to sparse models, and denote it as a dense-transfer recipe. Under the same transfer learning recipe, we clearly observe significant gaps (up to 14 accuracy points on RTE and CoLA) between the transfer accuracy of the dense model (*Dense BERT-base*), and the transfer accuracy of the sparse model (*Dense-transfer recipe*). ## 5.2 Extended Training Shifts From Undertraining To Overfitting Observing that the dense transfer learning recipe does not produce competitive sparse finetuned models, we attempt to scale the length of the recipe to mitigate undertraining. Surprisingly, for sparse language models, this simple technique does not yield a unique setup with consistently better results as models quickly shift from undertraining to an overfitting regime, in which training loss goes to zero, while validation accuracy decreases sharply. To demonstrate this overfitting effect with the extended recipe, in Table 2, we compare results obtained with this approach (*Extended dense-transfer recipe*) against doing a full sweep of finetuning runs with rescaled recipes to \#epochs ∈ {1, 2, 3*, ...,* extended − 1} (*Full sweep of rescaled recipes*). Table 2: Sparse-transfer performance of 90% sparse pre-trained BERT-base model on the dev-set of the corresponding GLUE task, obtained with dense and extended dense (\#epochs=8) transfer learning recipes, as well as with the full sweep of rescaled recipes (\#epochs ∈ {1, 2*, ...,* 7}). | Sparse-transfer | RTE | QNLI | MRPC | SST-2 | CoLA | STS-B | MNLI | QQP | |--------------------------------|-------|--------|--------|---------|--------|---------|--------|-------| | Acc | Acc | Acc | Acc | Mcc | Pear | Acc | Acc | | | Dense BERT-base (baseline) | 66.1 | 91.3 | 85.5 | 93.0 | 56.8 | 88.9 | 84.6 | 91.5 | | Dense-transfer recipe | 52.4 | 88.9 | 82.8 | 91.2 | 42.5 | 87.1 | 82.2 | 90.0 | | Extended dense-transfer recipe | 55.2 | 88.7 | 85.6 | 91.4 | 47.2 | 87.6 | 81.6 | 90.3 | | Full sweep of rescaled recipes | 57.0 | 89.3 | 84.1 | 92.0 | 48.5 | 88.0 | 82.2 | 90.4 | | Best recipe length | 5 ep | 2 ep | 5 ep | 2 ep | 7 ep | 4 ep | 3 ep | 5 ep | The results suggest that with the existing recipes, there is no one-size-fits-all solution. Versions of this rescaling approach have been utilized by prior works like (Kurtic et al., 2022) and (Zafrir et al., 2021) to obtain accurate sparse models on various downstream datasets. However, this approach comes with a huge computational burden: for each rescaled recipe, a full hyperparameter sweep over relevant parameters has to be done in order to obtain competitive finetuned sparse models. Due to practicality and associated costs, this is not a desirable solution in practice. ## 5.3 Sparse Transfer Learning For Language Models In the previous section, we have demonstrated the following three problems with the existing approach of either using the dense finetuning recipe, or simply extending it for sparse finetuning: 1. following dense-transfer recipes, sparse language models are undertrained; 2. even at high sparsities, these models can still exhibit overfitting behavior under the extended training regime; 3. finding the optimal recipe to mitigate undertraining and overfitting has major computational burdens. To address these issues, we propose a simple approach for sparse transfer in NLP, which produces highly accurate and competitive sparse models on a wide range of downstream datasets with minimal hyperparameter tuning. Our technique is inspired by the idea of gradual layer unfreezing presented in the ULMFiT framework (Howard & Ruder, 2018), which introduced a universal framework for fine-tuning *dense* language models for text-classification tasks, with a focus on LSTM models (Hochreiter & Schmidhuber, 1997; Merity et al., 2017). Based on ULMFiT and findings of (Yosinski et al., 2014), which suggests that different layers Table 3: Our sparse-transfer performance of 90% sparse pre-trained BERT-base model on the dev-set of the corresponding GLUE tasks, benchmarked against the current state-of-the-art sparse-transfer results from Prune OFA (Zafrir et al., 2021) and oBERT (Kurtic et al., 2022). | QNLI | MRPC | SST-2 | CoLA | STS-B | MNLI | QQP | | | |---------------------------------|---------|---------|-------------|---------|--------|--------------|-------------|-------------| | Sparse-transfer | RTE Acc | Acc | F1 / Acc | Acc | Mcc | Pear / Spear | m / mm | Acc / F1 | | Dense BERT-base | 66.1 | 91.3 | 89.8 / 85.5 | 93.0 | 56.8 | 88.9 / 88.5 | 84.6 / 83.4 | 91.5 / 88.5 | | Prune OFA (Zafrir et al., 2021) | N/A | 89.1 | N/A | 90.9 | N/A | N/A | 81.5 / 82.4 | 90.9 / 87.6 | | oBERT (Kurtic et al., 2022) | 57.0 | 89.3 | 89.3 / 85.6 | 92.0 | 48.5 | 88.0 / 87.6 | 82.2 / 82.5 | 90.4 / 87.1 | | This work | 60.1 | 90.5 | 89.7 / 85.2 | 91.8 | 51.4 | 87.2 / 87.1 | 83.7 / 83.8 | 90.9 / 87.6 | capture different information and therefore should be fine-tuned to different extents, we adopt the idea of gradual unfreezing and adjust it for *transformer-based (Vaswani et al., 2017) sparse* language models. More specifically, we focus on the popular BERT-base model which consists of three groups of layers: embeddings, 12 identical transformer blocks, and a task-specific classifier head. Sparsified versions of this model, which are the main interest of this work, prune all linear layers across all transformer blocks, which is the standard practice in literature (Sanh et al., 2020; Kurtic & Alistarh, 2022; Kurtic et al., 2022; Zafrir et al., 2021) and brings the best accuracy-vs-latency trade-offs (Kurtic et al., 2022). ![14_image_0.png](14_image_0.png) Figure 5: Evaluation loss (lower is better) and F1 score (higher is better) during sparse-transfer with oBERT (Kurtic et al., 2022) and our approach on MRPC dataset. ![14_image_1.png](14_image_1.png) Figure 6: Evaluation loss (lower is better) and Matthew's correlation coefficient (higher is better) during sparse-transfer with oBERT (Kurtic et al., 2022) and our approach on CoLA dataset. Our approach can be summarized as follows. For each downstream task, we start from a sparse pre-trained model produced by (Kurtic et al., 2022) and randomly initialize a task-specific classifier head. Then we freeze all embeddings and sparsified linear weights, while keeping their biases and corresponding LayerNorm (Ba et al., 2016) layers unfrozen and trainable. We start by finetuning only the classifier head and all other trainable parameters (biases and LayerNorms) for one epoch, and then follow the same process from backto-front by unfreezing the unpruned linear weights in preceding transformer blocks. After the last layer is unfrozen and finetuned, we continue finetuning all layers together for one more epoch. Given that at each epoch we have a different model architecture (one more sparse transformer block unfrozen relative to the previous epoch), we finetune it with the linearly decaying learning rate and then rewind back to the initial value for the next epoch. We have also tried the slanted triangular learning rate schedule proposed in ULMFiT, but we found the warmup phase not very helpful as it is known that sparse language models usually require much higher learning rates relative to their dense counterparts in order to train and converge successfully (Kurtic & Alistarh, 2022). To validate the effectiveness of our proposed sparse transfer approach, we benchmark it against the two current state-of-the-art sparse-transfer results presented in *Prune Once for All (Prune OFA)* (Zafrir et al., 2021) and *The Optimal BERT Surgeon (oBERT)* (Kurtic et al., 2022) papers. The former makes use of knowledge distillation from a finetuned dense teacher model, while the latter uses a full sweep over extended and rescaled dense transfer recipes, such as the ones we presented in Section 5.2. As can be seen from Table 3, our approach outperforms highly competitive results by Prune OFA in all, and oBERT in eight out of twelve datasets, setting new state-of-the-art accuracy-vs-sparsity results for many tasks in the GLUE benchmark suite. It is worth emphasizing that all of our results are obtained with significantly less hyperparameter tuning than the other two competing methods, which aligns with our goal of finding a stable one-size-fits-all solution for the sparse-transfer problem. We search and tune the initial learning rate in {1e-4, 2e-4, 3e-4}, and dropout in {0.05, 0.1}, and report mean performance over the two best runs. Thus, our grid consists of only 6 different combinations for each considered dataset, whereas competing approaches sweep over 54 (Zafrir et al., 2021) and 24 (Kurtic et al., 2022) different combinations. It is worth emphasizing that all of the considered methods, including ours, have noticeable variability in results on small datasets across different seeds and hyperparameter configurations, which aligns with findings of (Devlin et al., 2019). To better understand what happens during our proposed sparse transfer learning setup, and to develop an intuition about why it is able to provide stable and competitive results across many different datasets ranging in sizes from 2.4k (RTE) and 392k (MNLI) labeled samples, we visualize evaluation loss and evaluation accuracy metrics over the entire transfer learning process in Figures 5 and 6. As can be seen, our approach enables slower and therefore more stable transfer learning on the target datasets which effectively prevents overfitting, even though the total number of epochs is two times larger than the extended dense-transfer recipes analyzed in Section 5.2. This aligns with findings in ULMFiT, which demonstrates that gradual unfreezing in combination with a carefully designed learning rate schedule prevents catastrophic forgetting and enables robust transfer learning across a wide range of different downstream tasks. ## 6 Conclusion In this work, we examined the impact of high sparsity on model training under standard computer vision and natural language recognition scenarios, and provided evidence that traditional training recipes used for dense models are generally too short for sparse training. Starting from this observation, we were able to produce state-of-the-art models for sparse computer vision on two classic benchmarks for pruning: the ResNet50/ImageNet from-scratch training benchmark, and transfer learning from BERT-base on several NLP datasets. Our work focused on the differences between sparse and dense training dynamics and their effect on optimal training, providing additional analysis towards the difficulty of sparse training. In our work we showed that very high levels of both sparsity and accuracy are possible simply by carefully adapting the number of training epochs and using sensible values for basic hyperparameters. We hope that these new results will encourage additional research on adapting training schedules, hyperparameters, optimizers, and data selection that will allow for the creation of sparsely-trained models that match these accuracy targets within a smaller training budget. We leave this as a challenge to the community. ## References Kale ab Tessera, Sara Hooker, and Benjamin Rosman. Keep the gradients flowing: Using gradient flow to study sparse network optimization, 2021. Kyriakos Axiotis and Maxim Sviridenko. Sparse convex optimization via adaptively regularized hard thresholding. In *International Conference on Machine Learning (ICML)*, 2020. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint* arXiv:1607.06450, 2016. Thomas Berg, Jiongxin Liu, Seung Woo Lee, Michelle L. Alexander, David W. Jacobs, and Peter N. Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2019–2026, 2014. doi:10.1109/CVPR.2014.259. Mathieu Blondel, Vivien Seguy, and Antoine Rolet. Smooth and sparse optimal transport. In *International* conference on artificial intelligence and statistics, 2018. Thomas Blumensath and Mike E Davies. Iterative thresholding for sparse approximations. *Journal of Fourier* Analysis and Applications, 14(5-6):629–654, 2008. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision (ECCV)*, 2014. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semantic textual similaritymultilingual and cross-lingual focused evaluation. In *SEMVAL International Workshop on Semantic* Evaluation, 2017. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021a. Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, and Zhangyang Wang. The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models. *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021b. Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2014. Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability, 2022. Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Conference on Neural Information Processing Systems (NeurIPS), 2013. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *North American Chapter of the Association for Computational* Linguistics (NAACL), 2019. Bill Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Third* International Workshop on Paraphrasing (IWP2005), 2005. Xin Dong, Shangyu Chen, and Sinno Jialin Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2017. Utku Evci, Fabian Pedregosa, Aidan Gomez, and Erich Elsen. The difficulty of training sparse neural networks. arXiv preprint arXiv:1906.10732, 2019. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In *International Conference on Machine Learning (ICML)*, 2020. Utku Evci, Max Vladymyrov, Thomas Unterthiner, Bart van Merrienboer, and Fabian Pedregosa. Gradmax: Growing neural networks using gradient information. In *International Conference on Learning* Representations (ICLR), 2022. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In *International Conference on Learning Representations (ICLR)*, 2019. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. Stabilizing the lottery ticket hypothesis. *arXiv preprint arXiv:1903.01611*, 2019. Elias Frantar, Eldar Kurtic, and Dan Alistarh. M-FAC: Efficient matrix-free approximations of second-order information. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021. Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. In International Conference on Machine Learning (ICML), 2019. Noah Golmant, Zhewei Yao, Amir Gholami, Michael Mahoney, and Joseph Gonzalez. pytorch-hessianeigenthings: efficient pytorch hessian eigendecomposition, 2018. URL https://github.com/noahgolmant/ pytorch-hessian-eigenthings. Gregory Griffin, Alexander D. Holub, and Pietro Perona. The Caltech 256. *Caltech Technical Report*, 2006. Song Han, Jeff Pool, John Tran, and William J Dally. Learning both weights and connections for efficient neural networks. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In *International Conference on Learning Representations (ICLR)*, 2019. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Comput.*, 9(8):1735–1780, nov 1997. ISSN 0899-7667. doi:10.1162/neco.1997.9.8.1735. URL https://doi.org/10.1162/neco.1997.9.8. 1735. Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. *arXiv preprint arXiv:2102.00554*, 2021. Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. What do compressed deep neural networks forget? *arXiv preprint arXiv:1911.05248*, 2019. Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. Characterising bias in compressed models. *arXiv preprint arXiv:2010.03058*, 2020. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018. Shaoyi Huang, Dongkuan Xu, Ian EH Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, et al. Sparse progressive distillation: Resolving overfitting under pretrain-and-finetune paradigm. *arXiv preprint arXiv:2110.08190*, 2021. Eugenia Iofinova, Alexandra Peste, and Dan Alistarh. How well do sparse imagenet models transfer? In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Eugenia Iofinova, Alexandra Peste, and Dan Alistarh. Bias in pruned vision models: In-depth analysis and countermeasures. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2023. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang. Training your sparse neural network better with any mask. In *International Conference on Machine Learning (ICML)*, 2022. Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen. Top-KAST: Top-K always sparse training. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2020. Siddhant M Jayakumar, Razvan Pascanu, Jack W Rae, Simon Osindero, and Erich Elsen. Top-KAST: Top-K always sparse training. *arXiv preprint arXiv:2106.03517*, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015. Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better ImageNet models transfer better? In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D Object Representations for Fine-Grained Categorization. In *4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)*, Sydney, Australia, 2013. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Eldar Kurtic and Dan Alistarh. Gmp*: Well-tuned global magnitude pruning can outperform most bertpruning methods. *arXiv preprint arXiv:2210.06384*, 2022. Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, and Dan Alistarh. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, 2022. Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, and Ali Farhadi. Soft threshold weight reparameterization for learnable sparsity. In *International Conference* on Machine Learning (ICML), 2020. Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, and Aleksander Madry. ffcv. https://github.com/libffcv/ffcv/, 2022. Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. SNIP: Single-shot network pruning based on connection sensitivity. *International Conference on Learning Representations (ICLR)*, 2019. Fei-Fei Li, R. Fergus, and Pietro Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Conference on Computer Vision and Pattern Recognition (CVPR), 2004. Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, and Daniela Rus. Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy. Conference on Machine Learning and Systems (MLSys), 2021. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, and Decebal Constantin Mocanu. Sparse training via boosting pruning plasticity with neuroregeneration. In *Conference on Neural Information Processing Systems (NeurIPS)*. Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. Do we actually need dense over-parameterization? in-time over-parameterization in sparse training. In *International Conference on* Machine Learning (ICML), 2021. Tianlin Liu, Joan Puigcerver, and Mathieu Blondel. Sparsity-constrained optimal transport. *arXiv preprint* arXiv:2209.15466, 2022. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, 2015. Yucheng Lu, Shivani Agrawal, Suvinay Subramanian, Oleg Rybakov, Christopher De Sa, and Amir Yazdanbakhsh. Step: Learning n:m structured sparsity masks from scratch with precondition, 2023. Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. *arXiv preprint ArXiv:1306.5151*, 2013. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing lstm language models. *arXiv preprint arXiv:1708.02182*, 2017. Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In *International Conference on Machine Learning (ICML)*, 2017. NeuralMagic. The DeepSparse Inference Engine. https://github.com/neuralmagic/deepsparse, 2022. Mahdi Nikdan, Tomasso Pegolotti, Eugenia Iofinova, Eldar Kurtic, and Dan Alistarh. Sparseprop: Efficient sparse propagation for faster training of neural networks. *arXiv preprint arXiv:2302.04852*, 2023. Maria-Elena Nilsback and Andrew Zisserman. A visual vocabulary for flower classification. In Conference on Computer Vision and Pattern Recognition (CVPR), 2006. Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In *Conference on* Computer Vision and Pattern Recognition (CVPR), 2012. Alexandra Peste, Eugenia Iofinova, Adrian Vladu, and Dan Alistarh. AC/DC: Alternating compressed/decompressed training of deep neural networks. In *Conference on Neural Information Processing* Systems (NeurIPS), 2021. Gabriel Peyré and Marco Cuturi. Computational optimal transport. *arXiv preprint arXiv:1803.00567*, 2020. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust ImageNet models transfer better? *Conference on Neural Information Processing Systems (NeurIPS)*, 2020. Victor Sanh, Thomas Wolf, and Alexander Rush. Movement pruning: Adaptive sparsity by fine-tuning. Conference on Neural Information Processing Systems (NeurIPS), 2020. Pedro Savarese, Hugo Silva, and Michael Maire. Winning the lottery with continuous sparsification. arXiv preprint arXiv:1912.04427, 2021. Jonathan Schwarz, Siddhant Jayakumar, Razvan Pascanu, Peter Latham, and Yee Teh. Powerpropagation: A sparsity inducing weight reparameterisation. In *Conference on Neural Information Processing Systems* (NeurIPS), 2021. Sidak Pal Singh and Dan Alistarh. WoodFisher: Efficient second-order approximation for neural network compression. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2020. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Conference on empirical methods in natural language processing (EMNLP), 2013. Antoine Vanderschueren and Christophe De Vleeschouwer. Are straight-through gradients and softthresholding all you need for sparse training? In *2023 IEEE/CVF Winter Conference on Applications of* Computer Vision (WACV), 2023. Han Vanholder. Efficient inference with TensorRT. NVIDIA GTC On-Demand. Slides available at https://on-demand-gtc.gputechconf.com/gtcnew/sessionview.php?sessionName=23425efficient+inference+with+tensorrt, 2017. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Conference on Neural Information Processing Systems* (NeurIPS), 2017. Richard von Mises and Hilda Pollaczek-Geiringer. Praktische verfahren der gleichungsauflösung . *Zammzeitschrift Fur Angewandte Mathematik Und Mechanik*, 1929. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multitask benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. *Transactions of the Association for Computational Linguistics*, 2019. Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. *arXiv preprint arXiv:1704.05426*, 2017. Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010. Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, and Shiwei Liu. Dynamic sparsity is channel-level sparsity learner. arXiv preprint arXiv:2305.19454, 2023. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? *Advances in neural information processing systems*, 27, 2014. Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, and Moshe Wasserblat. Prune once for all: Sparse pre-trained language models. *arXiv preprint arXiv:2111.05754*, 2021. Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, 2022. Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. *arXiv preprint arXiv:1710.01878*, 2017.
Review 1: Summary: The paper studies the challenges in the optimisation of sparse deep neural networks, both during pre-training and transfer settings, analysing different existing algorithms (AC/DC, RigL, GMP, …) and proposing some modifications to improve the quality of the resulting sparse models at different sparsity levels. The work includes a wide range of experiments: Most of them were done on image recognition tasks during pre-training of the sparse models (section 3), but section 4 analyses sparse transfer learning for language models. The two main discoveries from the study are that (1) sparse models need longer pre-training/fine-tuning, (2) using dense first and last layers makes sparse pre-training in vision, (3) while transferring language models it’s better to sparsify the layers gradually, from top (closer to head) to bottom, while fine-tuning the model. Strengths and Weaknesses: **Strengths** - The paper contains a detailed section dedicated to describe the state-of-the-art methods dealing with sparse deep neural networks. This is very important in all works, but it’s crucial for a study paper like this that compares many existing methods. I found this section very useful while reviewing the manuscript. - The paper does a thorough analysis of the pre-training of sparse deep neural networks. Section 3 contains a wide range of very relevant experiments (other maybe redundant, see weaknesses below). - Table 1 contains a wide set of results comparing the proposed modifications to AC/DC with many previously published works. This is a fantastic and quite complete results table of recent and relevant works in sparse deep neural networks. - I like very much the writing of the paper **Weaknesses** - Something that I’m missing in the related works section is a review of related works dealing with the _optimisation_ perspective of sparse models. In section 3.1, the paper mentions that the optimisation of sparse models is NP-Hard, even on simple regression models, but there is no mention to related works from the optimisation community. For instance, some works that deal with the sparse optimisation of a more simple problem (optimal transport, but with applications to sparse neural networks): Smooth and Sparse Optimal Transport (M. Blondel et al., 2017), Sparsity-Constrained Optimal Transport (T. Liu et al., 2022). - Given that the optimisation of sparse models is NP-Hard, the observation that longer training benefits sparse models (assuming no data size constraints) is not surprising at all. This reduces a lot of the amount of information that lots of experiments contribute (it also diminishes the value of the main contribution), in my opinion. - Section 3.6 also devotes a lot of text and experiments to study a well-known fact: “once a channel is completely zeroed out, it will continue to receive zero gradients”. This is a simple result of computing the gradients of a linear layer w.r.t params and inputs, and it’s (one of) the reason(s) why neural networks of more than 1 layer are initialised with small weights close to zero _but not exactly zero_. So, it’s not surprising at all that this happens more frequently when weight decay is applied more heavily. - The conclusion of the sparse decompression subsection is a bit misleading. The paper claims: “We observe that the performance is almost unaffected up to 80% decompression sparsity, showing that full mask exploration is not necessary during training”. The observation does not strictly support that conclusion: the fact that the performance doesn’t improve does not directly imply that full mask exploration is unnecessary, it could just show that the decompression approach is suboptimal. In fact, given the fundamental NP-Hardness of the problem, one could argue that mask exploration is indeed _very_ important. Requested Changes: - I did not quite understand how the experiments in section 3.7 are performed. Could you provide more details on how you compute the eigenvalues of the hessian matrix? The hessian matrix is not even well defined when one considers the mask, since it is a discrete (binary) variable, with no such thing as “gradients”. This is really important, since I could not really follow this section, which could greatly improve my final recommendation. - Given my feedback in the weaknesses section above, I would suggest to trim down some experiments sections, or move them to the appendix. As I mentioned, some experiments yield quite unsurprising results. This is just a minor suggestion, but I think would make the reading shorter and the understanding of main conclusions simpler. Broader Impact Concerns: No major concerns from my perspective. ================================================== Review 2: Summary: This paper mainly focuses on learning high-accurate and high-sparse models. It notices that few work has explore the interaction between sparsity and standard stochastic optimization techniques. Thus, in this work, it proposes new approaches for mitigating the issue. The proposed method achieves state-of-the-art performance. Strengths and Weaknesses: __Pros:__ - The results are good. __Cons:__ - Why the phenomenon that the accuracy and loss of a sparse model fails to saturate means undertraining? Do you consider this phenomenon from other perspectives? If not, from my perspective, I think it is normal since sparsity undermines the original structure of models and the connections among parameters. Thus, I do not think the motivation in this paper is strong. - The two observations mentioned in page 6 seems trivial. What the role does observation 1 play in this paper? It seems a common sense. - The curves in Fig 1 center and right seems with with the same shape. - Section 3 is not compact. There are many trivial contents which are little bit confusing. - This paper repeatedly mentions that sparse model requires more training episode in Section 3. From my perspective, I think it is a normal phenomenon and one experiment is enough. Besides, I am not sure about the conclusion in Section 3.3.1. Why the results 'reinforce the point that sparse training method saturate slower'? Why is this important? - I did not find the so called method mentioned in the abstract? Could you make it more clear? Requested Changes: - The writting of the paper can be further improved; - The definition of some concepts, such as _label-independent_ , are not well defined, which is confusing. - More information can be added in figures. - Too many content regarding experimental settings. It would be better to concisely mention these content and provide more intuition about the goal of the experiments conducted. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper presents a comprehensive overview of the sparse training. In particular, the authors point out that current sparse training algorithms that adopt the hyper-parameters of dense training will generally lead to undertraining of models. To this end, the authors try to extend the training pipeline by 5x or 10x, and successfully achieve better performance on standard vision tasks. The authors further examine several properties related to the sparsity, such as the structured sparse masks, the effect of weight decays, and the loss landscapes. Finally, the authors have studied the behaviors of sparsity under a more typical transfer scenario, i.e. language modeling, and have successfully improved the performance of sparse transfer. Strengths and Weaknesses: Strengths: - First of all, the topic is valuable as sparse models are getting more and more powerful (such as those SMoE models) - Second, massive amount of experiments are provided in this paper to support the authors' claim. I feel this is a very strong empirical paper regarding the sparse training. - A lot of design choices are covered. Weakness: - Essentially there is no additional technique proposed in this paper except for training for longer time. If the authors can provide some principled techniques to deal with the general under-training problem that would be better. - While a lot of experiments are provided, I still think some parts are loosely connected, or at least need a better connection between paragraphs and paragraphs. For instance, the sudden present of BERT might need better explanation, and probably should not just mentioning "Motivated by our findings for computer-vision models in previous sections,". Requested Changes: - Probably better organize the paper, and provide better support for sections in the paper. Broader Impact Concerns: - ================================================== Metareview: Recommendation: Accept as is Comment: While reviewers are unconvinced about the novelty and significance of the work, this is a good example where TMLR's primary criteria of evidence matching claims comes in. Based on this criteria and the unanimous agreement by reviewers that the claims match the evidence, I see no reason not to accept the paper. ==================================================
# Lo-Fi: Distributed Fine-Tuning Without Communication Mitchell Wortsman∗ *mitchnw@cs.washington.edu* University of Washington Suchin Gururangan *sg01@cs.washington.edu* University of Washington Shen Li *shenli@meta.com* Meta AI Research, FAIR Team Ali Farhadi *ali@cs.washington.edu* University of Washington Ludwig Schmidt *schmidt@cs.washington.edu* University of Washington Michael Rabbat mikerabbat@meta.com Meta AI Research, FAIR Team Ari S. Morcos *arimorcos@meta.com* Meta AI Research, FAIR Team Reviewed on OpenReview: *https: // openreview. net/ forum? id= 1U0aPkBVz0& referrer= %5BTMLR% 5D* ## Abstract When fine-tuning large neural networks, it is common to use multiple nodes and to communicate gradients at each optimization step. By contrast, we investigate completely local fine-tuning, which we refer to as lo-fi. During lo-fi, each node fine-tunes independently without any communication. Then, the weights are averaged across nodes at the conclusion of fine-tuning. When fine-tuning DeiT-base and DeiT-large on ImageNet, this procedure matches accuracy in-distribution and improves accuracy under distribution shift compared to the baseline, which observes the same amount of data but communicates gradients at each step. We also observe that lo-fi matches the baseline's performance when fine-tuning OPT language models (up to 1.3B parameters) on Common Crawl. By removing the communication requirement, lo-fi reduces resource barriers for fine-tuning large models and enables fine-tuning in settings with prohibitive communication cost. ## 1 Introduction Many of the best performing machine learning models today come from a two step procedure: First, *pre-train* on a large, heterogeneous dataset to learn a good representation. Next, *fine-tune* to adapt the model to a task of interest (Girshick et al., 2014; Yosinski et al., 2014; Kornblith et al., 2019; Kolesnikov et al., 2020). This paper operates within the second step of this procedure—fine-tuning—which is increasingly important with drastic improvements in pre-trained models, e.g., CLIP (Radford et al., 2021), GPT-3 (Brown et al., 2020), OPT (Zhang et al., 2022), and PaLM (Chowdhery et al., 2022). Indeed, recent advances such as Minerva (Lewkowycz et al., 2022) or InstructGPT (Ouyang et al., 2022) have come from fine-tuning rather than training from scratch. ∗Work done while MW and SG were at FAIR. ![1_image_0.png](1_image_0.png) Figure 1: In standard multi-node distributed dataparallel fine-tuning, there is synchronization between nodes at each step of fine-tuning. With lofi (local fine-tuning), there is no communication between nodes throughout fine-tuning. As a result, each node k independently produces their own model θ k. Then, lo-fi averages these models once for the final solution θlo-fi = 1 n Pn k=1 θ k. In this four-node fine-tuning run, we show (i) the average accuracy of the individual models θ k, (ii) the accuracy of θlo-fi at the end of each fine-tuning epoch, and (iii) the accuracy of the baseline which communicates among nodes every step. In particular, we fine-tune the ImageNet-21k pre-trained DeiT-base model from DeiT-III (Touvron et al., 2022) on ImageNet (Deng et al., 2009) using their code, which uses four nodes. | IN | IN-V2 | IN-R | Sketch | IN-A | | |-------------------|---------|--------|----------|--------|-------| | baseline (DeiT-b) | 85.96 | 76.65 | 62.66 | 46.86 | 57.15 | | lo-fi (DeiT-b) | 86.00 | 76.84 | 63.25 | 48.37 | 58.43 | | baseline (DeiT-l) | 87.12 | 78.18 | 69.87 | 54.41 | 68.97 | | lo-fi (DeiT-l) | 87.10 | 78.25 | 70.14 | 54.95 | 69.53 | Table 1: Comparing lo-fi (no communication during fine-tuning) to the baseline which communicates at each step when fine-tuning the ImageNet-21k pretrained DeiT-base and DeiT-large model from DeiTIII (Touvron et al., 2022) on ImageNet (Deng et al., 2009). Both lo-fi and the baseline use the same number of iterations, which have been tuned for the baseline. Underlined numbers indicate significantly better accuracy according to McNemar's test with significance level 0.05. Lo-fi matches performance on ImageNet (IN), but can outperform the baseline on some distribution shifts. The shifts we consider are IN-V2 (Recht et al., 2019), IN-R (Hendrycks et al., 2021a), Sketch (Wang et al., 2019), and IN-A (Hendrycks et al., 2021b). Most work developing learning methods still operates in the paradigm of training from scratch. Accordingly, both use similar algorithmic techniques despite important differences in the pre-training and fine-tuning regimes. In particular, one notable difference between pre-training and fine-tuning is that fine-tuned models appear to lie in a single low-error region (Neyshabur et al., 2020). Indeed, linearly interpolating the weights of fine-tuned models can have similar advantages as ensembling their predictions but without the added cost during inference (Wortsman et al., 2022). By contrast, linearly interpolating the weights of two models trained from scratch will encounter a high error barrier (Frankle et al., 2020; Garipov et al., 2018). Recently, the model soups approach (Wortsman et al., 2022) leveraged this similarity between ensembling outputs and averaging weights. Given a hyperparameter sweep over fine-tuned models, they average the weights of multiple models instead of the conventional procedure of selecting one model and discarding the remainder. However, the model soups approach does not modify the fine-tuning procedure itself. In this paper, we leverage the observation that fine-tuned models appear to lie in a single low error region to remove communication between nodes during distributed fine-tuning. In standard data-parallel multinode fine-tuning, gradients between nodes are communicated at each step. This synchronization of updates keeps the models at each node identical to each other during fine-tuning. However, in certain settings communication costs during fine-tuning may be prohibitive, and we therefore ask whether they are necessary at all. With our method of local fine-tuning, which we refer to as *lo-fi*, we remove all communication between nodes during fine-tuning. The models on each node therefore drift apart throughout fine-tuning. Then, to arrive at the final solution at the end, we average the weights of the models produced by each node. | | IN | IN-V2 | IN-R | Sketch | IN-A | epochs | stoch. depth | no extra cost | no comms | |-----------------------------------|-------|---------|--------|----------|--------|----------|----------------|-----------------|------------| | DeiT-base (Touvron et al., 2022) | 85.72 | 76.53 | 61.83 | 47.44 | 57.29 | 50 | 0.15 | ✓ | | | baseline | 85.96 | 76.65 | 62.66 | 46.86 | 57.15 | 24 | 0.15 | ✓ | | | individual node | 85.66 | 76.22 | 62.09 | 46.75 | 56.00 | 6 | 0.10 | ✓ | ✓ | | lo-fi | 86.00 | 76.84 | 63.25 | 48.37 | 58.43 | 24 | 0.10 | ✓ | ✓ | | lo-fi ensemble | 86.08 | 76.91 | 63.05 | 47.67 | 57.80 | 24 | 0.10 | ✓ | | | DeiT-large (Touvron et al., 2022) | 86.97 | 78.47 | 69.70 | 54.35 | 68.57 | 50 | 0.40 | ✓ | | | baseline | 87.12 | 78.18 | 69.87 | 54.41 | 68.97 | 12 | 0.30 | ✓ | | | individual node | 86.76 | 78.00 | 69.41 | 54.57 | 67.59 | 3 | 0.25 | ✓ | ✓ | | lo-fi | 87.10 | 78.25 | 70.14 | 54.95 | 69.53 | 12 | 0.25 | ✓ | ✓ | | lo-fi ensemble | 87.14 | 78.35 | 70.00 | 54.62 | 69.20 | 12 | 0.25 | ✓ | | Table 2: Expanding the comparison between lo-fi and the baseline (Table 1) when fine-tuning the ImageNet21k pre-trained models from DeiT-III (Touvron et al., 2022) on ImageNet (Deng et al., 2009). In this four node fine-tuning run, lo-fi removes communication between nodes so that each node produces an independent model. The weights of the models are then averaged at the end to produce the final solution. In this table we bold the highest number and evaluate the following models: i) paper, the fine-tuned models from the DeiT-III paper (Touvron et al., 2022), ii) baseline, which is our improved fine-tuning baseline after hyperparameter turning which requires less epochs of training but achieves slightly higher accuracy than reported in the DeiT-III paper (Touvron et al., 2022), iii) individual node, which is one of the individual node models that is produced by lo-fi, iv) lo-fi, which fine-tunes individual models on each node then averages their weights once at the end, and v) lo-fi ensemble which averages the outputs of the models produced by each node during lo-fi, and therefore requires more cost during inference. In addition to evaluating on ImageNet (IN), the task used for fine-tuning, we also evaluate on the distribution shifts ImageNet-V2 (IN-V2, (Recht et al., 2019)), ImageNet-R (IN-R, (Hendrycks et al., 2021a)), ImageNet-Sketch (Wang et al., 2019), and ImageNet-A (INA (Hendrycks et al., 2021b)). While more information is provided in Section 3.1, this table also displays some hyperparameter changes we made from the default DeiT-III fine-tuning script. Unlike (Touvron et al., 2022), we fine-tune with LP-FT (Kumar et al., 2022), and observe it is better for the baseline to use fewer fine-tuning epochs. Lo-fi observes the same amount of data as the tuned baseline and uses the same hyperparameters with the exception of slightly decreased regularization by lowering stoch. depth (Huang et al., 2016) drop probability by 0.05 (making the same change to the baseline decreased accuracy). Additional columns track whether the model incurs no additional cost during inference compared to a single model (denoted no extra cost), and also if there is no communication between nodes during fine-tuning (denoted no comms). Overall, lo-fi matches or outperforms the baseline without communication between nodes during fine-tuning. We note that these techniques are a natural extension of previous work: lo-fi is just a *model soup* (Wortsman et al., 2022) formed by splitting up a large fine-tuning job into multiple smaller jobs, each isolated to a node. Analogously, lo-fi is embarrassingly parallel training from *branch-train-merge* (Li et al., 2022) applied in the setting where no domain specialization info is provided and so each expert is trained on IID data. However, we believe that the application of these techniques in this setting is of practical interest, especially if models continue to grow. In computer vision we use the DeiT-III codebase (Touvron et al., 2022) to fine-tune the ImageNet-21k pre-trained DeiT-base and DeiT-large models, which are four-node fine-tuning jobs by default. We observe (Figure 1, Table 1) that lo-fi matches the accuracy of DeiT-base and DeiT-large on ImageNet, the task used for fine-tuning, while outperforming the baseline on some distribution shifts. These improvements come after hyperparameter tuning the baseline to slightly exceed that in the DeiT-III paper while requiring fewer fine-tuning epochs. Moreover, lo-fi and the baseline observe the same amount of data. While overall similar results are observed when fine-tuning CLIP ViT-L (Radford et al., 2021) on ImageNet or tasks from WILDS (Koh et al., 2021), lo-fi often requires more iterations in this setting. Finally, we test lo-fi beyond computer vision by fine-tuning OPT-125M and OPT-1.3B (Zhang et al., 2022) on Common Crawl, observing that lo-fi can match the baseline which communicates between nodes. Overall, our work is a test of whether communication between nodes is required during fine-tuning. However, we also wanted to understand the advantages of removing this communication. Therefore, we benchmark the wall-clock overhead of communication on an AWS cluster with EFA. We use the models from the DeiT-III repository (Touvron et al., 2022) in the context of image classification. In this setting and on the system used for this study, the advantages are overall less substantial than we initially expected, especially for large batch sizes. Notably, we observe that the trick of overlapping the communication and computation in the backwards pass (Li et al., 2020a), which is the default in PyTorch (Paszke et al., 2019) as of v1.5, reduces the overhead of using multiple nodes from roughly 50% slow-down to under 10% for the large DeiT model. Finally, we discuss how lo-fi can help with faster job scheduling and addresses the straggler and jitter problem in distributed training, where different nodes might experience random slowdowns. ## 2 Methods This section details the methods used in our experiments. We begin with the baseline of standard dataparallel training, and next outline our straightforward modification which i) removes communication between nodes then ii) averages the final models produced by each node. Consider a neural network f(*x, θ*) where x is the input data and θ ∈ R d are the network parameters. Since we are fine-tuning, θ is initialized as the weights of a pre-trained model. Moreover, as is standard in neural network training, the input data x is a batch rather than a single data point. Finally, let n denote the number of devices, b denote total batch size, and ℓ(ˆ*y, y*) denote loss for the vector of predicted labels yˆ = f(*x, θ*) and a vector of ground-truth labels y. With communication. The most straightforward and common approach for training with n devices is data-parallel. In this setting, each device has their own copy of the parameters θ. During fine-tuning, each batch x of size b is split into n disjoint sub-batches of size b/n. Each device i loads the sub-batch (xi, yi) and computes gradients gi = ∇θℓ(f(xi, θ), yi). Then the gradients are synchronized across nodes with each node computing an averaged gradient g¯ = 1 n Pn i=1 gi. After synchronizing gradients, each device uses g¯ to update θ. Since every device updates θ using an identical gradient g¯, the parameters θ remain identical across devices. lo-fi. With local-finetuning (lo-fi), we partition the n devices into K disjoint groups. In the majority of our experiments, each group is a single node containing 8 GPU devices. During fine-tuning we allow communication within each group, but not across groups. Each group k begins with parameters θ k which are initially identical across devices, but drift apart throughout fine-tuning. Then, at the end of fine-tuning there is a single communication and the parameters from each group are averaged to produce a final solution θ = 1 K PK k=1 θ k. There are two possible implementations for lo-fi which we refer to as implementation A and B. Implementation A proceeds as before—each device i loads the sub-batch (xi, yi) and computes gradients gi. There is then gradient synchronization only among devices belonging to the same group while devices from different groups apply different gradients. Data partitioning is accomplished without communication by coordinating random seeds, so long as each device knows its rank and the total number of devices. Our experiments primarily use Implementation A. In Implementation B, each group is a completely independent run—no knowledge of total number of devices is required. Accordingly, within each group the global batch size is scaled by 1/K so that the per-device batch size is matched. Our image classification results use Implementation A while our language modelling results use Implementation B. Our motivation for having one group per node and still allowing communication among the devices on the node is that communication within a node is faster than communication across nodes. ![4_image_0.png](4_image_0.png) Figure 2: We test whether the performance of lo-fi continues to improve when adding more nodes. On the contrary, this experiment suggests diminishing or even negative returns after 4 nodes. This experiment is for fine-tuning DeiT-base as in Table 1. Recall that when using four nodes, lo-fi and the baseline observe the same number of images, but lo-fi does not require communication between nodes. When moving beyond 4 nodes as we do in this experiment, lo-fi observes more images then the baseline. ## 3 Experiments This section presents our experiments which test whether communication is required during fine-tuning. First we use the DeiT-III codebase (Touvron et al., 2022) to fine-tune their pre-trained ImageNet-21k models on ImageNet, where we observe that lo-fi matches the baseline but without communication between nodes (Section 3.1). Next, we fine-tune CLIP (Radford et al., 2021) on ImageNet, WILDS-FMoW (Koh et al., 2021; Christie et al., 2018) and WILDS-iWildCam (Beery et al., 2021) (Section 3.2). Finally, we show preliminary experiments applying lo-fi outside of computer vision (Section 3.3) and benchmark the associated speed-ups by removing communication (Section 3.4). ## 3.1 Fine-Tuning Deit-Iii On Imagenet The aim of these experiments is to test whether communication between nodes is required when fine-tuning high accuracy models for image classification. To test this we begin by fine-tuning the DeiT-base and DeiTlarge models from the DeiT-III paper (Touvron et al., 2022) using their code. In particular, we fine-tune their ImageNet-21k models on ImageNet-1k (Deng et al., 2009) with and without lo-fi. We chose the models from DeiT-III for a few reasons: (i) DeiT-III is representative of state-of-the-art settings as it uses many advanced techniques such as stochastic depth (Huang et al., 2016), CutMix (Yun et al., 2019), and the Lamb optimizer (You et al., 2019). (ii) DeiT-III provides hyperparameter configurations which they used in their fine-tuning experiments. (iii) DeiT-III uses 4 nodes with 8 GPUs each when fine-tuning their pre-trained ImageNet-21k models on ImageNet. This provides an opportunity to test lo-fi in an equivalent setting where there is normally communication between nodes. Main results. Our overall finding is that communication between nodes is not necessary in this settinglo-fi matches the accuracy of the baseline while observing the same amount of data. These results are presented in Figure 1 and Tables 1 and 2. In these experiments, lo-fi uses 4 groups—each group corresponds to one node. Figure 1 illustrates accuracy throughout training when fine-tuning DeiT-base with and without lo-fi. We also report the average accuracy of the models produced by the individual nodes. To make this plot we display the accuracy of the averaged lo-fi model at the end of each epoch, though usually we would only average the models once at the end. A question emerges when looking at this plot: why does the accuracy of the individual node first dip before coming back up? The answer is due to the interaction of learning rate and batch size, which we discuss further in Appendix B. Table 1 evaluates the final models from Figure 1 on ImageNet as well as under distribution shift (on ImageNetV2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021a), ImageNet Sketch (Wang et al., 2019), and ImageNet-A (Hendrycks et al., 2021b)). In addition, Table 1 repeats the experiment from Figure 1 with the DeiT-large model. We underline any result that is significantly better (using McNemar's test with significance 0.05). Overall we observe that lo-fi matches the accuracy of the baseline which uses communication, and outperforms the baseline under distribution shift. Table 2 supplements Table 1 with additional details. In particular, we consider the accuracy of the model produced by an individual node during lo-fi, before the averaging. We also evaluate the output-space ensemble of the models produced by each node during lo-fi, which is more expensive during inference as a pass through each model is required. Finally, we display the accuracy of the models fine-tuned in the DeiT-III paper (Touvron et al., 2022). We improved our own baseline over that in the paper with the following hyperparemter changes: (i) Instead of removing the classification layer of the pre-trained model, we implement a version of LP-FT (Kumar et al., 2022) to fine-tune—we preserved the ImageNet-21k classifier then use a class mapping from ImageNet-21k to ImageNet classes. (ii) We remove the grayscale, solarization, and Gaussian blur augmentations, since we found this improves accuracy. This aligns with previous research where fine-tuning requires less augmentation (Wortsman et al., 2022). (iii) We fine-tuned for fewer epochs, which also required a switch to a cosine scheduler that updates every iteration instead of every epoch so the schedule could complete. We also considered different values for the learning rate and stochastic depth, but found the default values to be best (Touvron et al., 2022). This is with the exception of DeiT-large for which we found stochastic depth 0.3 to be better for the baseline, which is what we used. Lo-fi was run using identical hyperparameters except we decreased the stochastic depth drop rate by 0.05 for both DeiT-base and DeiT-large since each node is effectively fine-tuning on less data and may therefore require less regularization. The most substantive change from the DeiT-III code was to use LP-FT (Kumar et al., 2022), which we accomplished by preserving the classification layer from the pretrained model and using a mapping from ImageNet21k to ImageNet1. While this change results in a minor improvement for the baseline, we found it was necessary for achieving matching performance with lo-fi. Overall, despite the extensive hyperparameter tuning we performed for the baseline, lo-fi was still able to match or exceed the accuracy. Ablations. We ran three ablation studies to better understand the performance of lo-fi in this setting. First, we wanted to test whether adding more nodes was helpful. In the initial 4 node experiment with lo-fi, we matched the baseline in terms of total amount of data observed, allowing a fair compute-matched comparison. However, there are practical settings such as privacy-preserving ML in which the benefits of reduced communication may outweigh the importance of matched compute. In Figure 2 we observed that adding more nodes did not improve in-distribution accuracy. Interestingly, however, adding additional nodes marginally improved out-of-distribution performance, most notably on ImageNet-Sketch and ImageNet-A. ![5_image_0.png](5_image_0.png) Figure 3: For the experiment in Table 1, lo-fi outperforms the baseline under distribution shift. We wanted to test whether this OOD performance (yaxis) could be improved by applying weight averaging techniques to the baseline. We observe that the answer is yes with EMA (Szegedy et al., 2016), although this can come at slight cost in-distribution accuracy (x-axis). In this plot we try 4 different values of EMA decay β. Applying EMA to lo-fi had minimal benefit, as did applying WiSE-FT (Wortsman et al., 2021) to the baseline. The ImageNet-21k→ImageNet transfer setting is not characteristic of those studied in the WiSE-FT paper. Next, we wanted to understand if four groups, one per node, was optimal in this setting. What happens if we instead use 8 groups—2 per node, or 2 groups—each group consisting of 2 nodes? In this experiments the amount of data observed remains constant; all that changes is the amount of communication. As presented 1The only class in ImageNet but not ImageNet-21k is *teddy bear*—we initialize this row with *bear* instead. Groups 2 4 8 16 ImageNet Accuracy 85.95 86.00 85.85 85.73 Table 3: For our four-node fine-tuning jobs, we usually partition the 32 GPUs into 4 communication groups, one per-node. This table shows the effect of partitioning the GPUs into groups of different sizes, finding slightly worse performance when the number of groups is large. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) Figure 4: We fine-tune CLIP ViT-L (Radford et al., 2021; Dosovitskiy et al., 2021) on ImageNet. In contrast to the DeiT fine-tuning experiments, the models were not pre-trained with stochastic depth and we found better accuracy when fine-tuning without stochastic depth. Instead, we fine-tune for 6, 12, and 24 epochs. lo-fi shows good performance under distribution shift, but on ImageNet requires more epochs to exceed the baseline accuracy unlike in the DeiT experiments. in Table 3, accuracy drops slightly when using a larger number of groups2. This result demonstrates that the best configuration is one group per node. Finally, we found it interesting the lo-fi outperformed the baseline under distribution shift. Accordingly, we wanted to test whether we could recover these out-of-distribution (OOD) improvements by applying other weight averaging techniques to the baseline. We observe in Figure 3 that the answer is yes, although at slight cost to in-distribution performance for the methods we tried. The best performing technique we tried was a debiased exponential moving average (EMA) (Szegedy et al., 2016; Kingma & Ba, 2014), for which we tried decay values 0.99, 0.999, 0.9999, and 0.99999. We also tried applying EMA and WiSE-FT (Wortsman et al., 2021) to lo-fi, but did not observe out-of-distribution improvements 3. ## 3.2 Fine-Tuning Clip Vit-L On Imagenet And Wilds In the previous section we observed that lo-fi matches the baseline for DeiT-III on ImageNet, but how does lo-fi perform for models pre-trained on larger datasets? In this section, we further test lo-fi for the CLIP ViT-L (Radford et al., 2021; Dosovitskiy et al., 2021) when fine-tuning on ImageNet (Figure 4) as well as two datasets from WILDS (Koh et al., 2021) (Figure 5). Unlike the DeiT models, CLIP was not pre-trained with stochastic depth and we find better accuracy when we fine-tune without stochastic depth. This is unlike the DeiT-III models, which we found performed best when we used some stochastic depth. Indeed, this allowed us to use slightly less regularization for lo-fi then we did for the baseline by decreasing stochastic depth drop rate by 0.05. As this is no longer the case, we instead show experiments when fine-tuning for different numbers of epochs. Other than this omission of stochastic depth and varying the training epochs, the hyperparameter configuration is identical to that discussed in the previous section and follows the ImageNet-21k→ImageNet fine-tuning set-up from DeiT-III (Touvron et al., 2022). 2When using 2, 8, and 16 groups we changed the stochastic depth drop rate by 0.05, -0.05, and -0.10, respectively, from the four group setting. 3The intuition from WiSE-FT (Wortsman et al., 2021) is that of combining a generalist and specialist. Our intuition for why WiSE-FT does not show substantial improvements in the ImageNet-21k→ImageNet transfer setting is because both models are ImageNet specialists. ![7_image_0.png](7_image_0.png) Figure 5: We repeat the CLIP ViT-L fine-tuning experiment from Figure 4 one two other image classification tasks: WILDS-FMoW (Koh et al., 2021; Christie et al., 2018), a satelite recognition task with a geographic and temporal distribution shift and WILDS-iWildCam (Koh et al., 2021; Beery et al., 2021), a camera trap dataset with a geographic distribution shift. Overall, we find similar results as in Figure 4. Results when fine-tuning CLIP ViT-L on ImageNet are presented in Figure 4. For this experiment, we initialize the classification head of the zero-shot model using the zero-shot classifier output by the CLIP text tower (as in Wortsman et al. (2021)). We observe that more fine-tuning epochs are required for lo-fi to outperform the baseline on ImageNet. Under distribution shift, lo-fi roughly matches or exceeds the baseline for each of the fine-tuning epochs we tried. While this result indicates that lo-fi is a promising alternative to the baseline in this setting, a key limitation is that additional fine-tuning epochs were required to enable this improvement. The accuracy improvements beyond the best baseline model are consistent with the results reported in model soups (Wortsman et al., 2022). We also test CLIP ViT-L on two further datasets, WILDS-FMoW (Koh et al., 2021; Christie et al., 2018), a satellite image recognition dataset with a temporal distribution shift and WILDS-iWildCam (Koh et al., 2021; Beery et al., 2021), a classification dataset with camera traps in the wild with a geographic distribution shift. Our motivation is to test lo-fi on natural images beyond the ImageNet universe. The results are presented in Figure 5, observing very similar results to the aforementioned experiment of fine-tuning CLIP ViT-L on ImageNet. However, there is an important difference in the experimental set-up. For these experiments, we first tried using the zero-shot initialization for the last layer of the model, as we did with ImageNet. However, this resulted in worse accuracy for lo-fi. Accordingly, these experiments are completed using the LP-FT method of fine-tuning (Kumar et al., 2022). First, we train a linear probe using one node. This linear probe is then used as the initialization when end-to-end fine-tuning individually on each node. We also apply this change to the baseline, but the benefit is much less substantial for the baseline than for lo-fi. Finally, for this experiment we used learning rate 7e-4 which we found resulted in higher accuracy for lo-fi and the baseline. ## 3.3 Language Model Fine-Tuning We also test lo-fi outside of image classification by fine-tuning OPT-125M and OPT-1.3B (Zhang et al., 2022). Experimental Setup We report i)individual node, which is the average performance of the models produced by each node when using lo-fi, ii) lo-fi, which averages models produced by each node, and iii) baseline, which uses communication between nodes. For the 125M parameter model, we set the learning rate to 6e5, with 1024-length sequence blocks, and 500K tokens per batch. For the 1.3B parameter model, we set the learning rate to 1e-5, with 512-length sequence blocks, and 1M tokens per batch. We use fp16 mixed precision (Micikevicius et al., 2017) for all experiments. We fine-tune the 125M parameter model with 4 nodes, and we fine-tune the 1.3B parameter model with 8 nodes. When using lo-fi there is no communication between nodes, so the experiments produce 4 and 8 models, respectively. Each node consists of 8 Volta 32GB GPUs connected with 400GBps interconnect. ![8_image_0.png](8_image_0.png) Figure 6: Fine-tuning a language model (left: OPT-125M, right: OPT-1.3B) on Common Crawl with lo-fi closely approaches the performance of the baseline of multi-node fine-tuning with communication. Here, we train four lo-fi workers independently, one per node. The baseline consists of standard data-parallel finetuning using four nodes, where there is communication between nodes at every iteration. The x-axis shows iterations, which does not take into account that lo-fi may be faster. Results We fine-tune on the Pile's Common Crawl subset (Gao et al., 2021) using the Huggingface Transformers library (Wolf et al., 2020). Results are presented in Figure 6. We observe that for both model scales, when comparing by step count, lo-fi roughly matches the performance of the baseline, providing large performance improvements over the individual node setting. These results suggest that lo-fi is an effective alternative to standard multi-node fine-tuning with communication. ## 3.4 How Much Is The Speed-Up, Really? We have shown that lo-fi produces high accuracy models without communication during fine-tuning. This leads to an important practical question: what is the wall-clock advantage of eliminating communication between nodes during fine-tuning? We examine the wall-clock training time advantage once nodes are allocated and also the time it takes for node allocation on a slurm cluster. Note that these experiments are for the DeiT-III (Touvron et al., 2022) models in the image classification setting. Wall-clock advantage. To examine the wall-clock advantage of lo-fi compared to the baseline we use A100 GPUs on AWS with fast interconnect of 400 GBps (EFA). This is representative of a fast and modern large scale neural network training set-up. In particular, we want to understand the effect of using modern distributed training tools, and also varying batch size. We note that our results depend critically on the quality of the interconnect between nodes. In a setting with a slower interconnect such as standard ethernet, we would expect the training speed-ups to be more substantial. In a setting with a faster interconnect such as TPUs, the training speed-ups should be more minor. A recent innovation in distributed training tooling is to overlap the backwards pass computation and gradient communication—the gradients for layer ℓ − 1 can be computed at the same time as communicating the gradients for layer ℓ (Li et al., 2020a; Paszke et al., 2019) 4. We experiment with turning on and off this overlapping communication/computation feature, finding substantial reductions in communication overhead when overlapping communication and computation. We also experiment with changing the batch size. In general, we observed that when using a smaller batch size, communication will account for a larger portion of training time. This is because the size of the gradient does not depend on the batch size, so absolute communication cost does not depend on batch size. However, using a smaller batch size will lower the total computation time and therefore communication cost will account for a larger fraction of the total training time. 4Overlapping communication/computation on by default in PyTorch ≥1.5 (Paszke et al., 2019; Li et al., 2020a). ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 7: **(Left)** On an AWS cluster we show on the y-axis the wall-clock overhead observed when switching from 1 to 4 nodes using models from the DeiT-III repository (Touvron et al., 2022) and constant per-GPU batch size. 100% indicates that the job becomes twice as slow while 0% indicates no difference switching from 1 to 4 nodes. With the method of overlapping the communication and computation in the backward pass (Li et al., 2020a), the slow-down is less substantial than we initially expected, especially for larger per-GPU batch sizes. The huge and giant models are deeper and there is more opportunity to overlap communication and computation. **(Right)** Jobs requiring only one node schedule faster than jobs requiring four nodes on the slurm cluster that we use for these experiments. This plot shows the median per-day wait time averaged over three months of job data on this cluster. Our experiments with varying batch size and turning on and off overlapping communication/computation are presented in Figure 7 (left). These experiments are for the vision transformer models DeiT-base, DeiT-large, DeiT-huge and DeiT-giant, ranging from roughly 108to 109 parameters. On the x-axis we show the different model sizes, while the y-axis shows the additional wall-clock time required to go from a 1 node to 4 node job (i.e., 0% indicates that the 4 node job is the same speed as the 1 node job, while 100% indicates that the 4 node job is twice as slow). In this experiment, the number of iterations and batch size per device is fixed5. We found that without overlapping communication/compute, shifting to a multi-node settings results in a substantial increase in training time of 25-55% (purple lines). However, overlapping communication and compute has proven surprisingly effective, reducing the communication cost to <10%. A potential issue with these experiments is that they currently reflects a "state-of-the-art" cluster setting, and actual workflows may be slower. We also believe that both GPU memory size and network bandwidth will improve in the future. Higher GPU memory capacity will allow users to train larger models, resulting in higher communication overhead, while higher network bandwidth will help to reduce the communication delay. Finally, we note that lo-fi can help the straggler and jitter problem in distributed training, where different nodes might experience random slowdowns due to various reasons. In standard data-parallel, synchronization will take place multiple times per iteration, such that any random slowdown on any node will slow down the entire run. Since lo-fi needs only one communication at the end (which can even be asynchronous), the straggler/jitter problem is no longer an issue. 5We note that scaling with fixed batch size may be unrealistic for certain problems as large batch sizes can cause accuracy to drop, which would be a reason to use lo-fi. Scheduling advantage. For modern cluster workloads, both on private and public clusters, the wait time to schedule a job can increase the total training time, especially during periods of heavy cluster usage. Since single-node jobs require fewer simultaneous resources to run, they should schedule faster, reducing the total training time. To measure this, we analyzed the time required to schedule a 1-node job vs. a multi-node job on a large slurm-based cluster and present the results in Figure 7 (right). These wait times are averaged over all jobs run on this cluster over a three month period. We found that scheduling a single node job was notably faster than multi-node jobs, taking ∼45 minutes for 1 node, ∼2 hours for 2-4 nodes, and ∼3 hours for 8 nodes. We note that these results are specific to the cluster used in these experiments and may or may not be representative of other clusters depending on their scheduling algorithm and workload distribution, amongst other factors. We also note that the scheduling benefit will only apply when using implementation B in which each group is trained independently (as described in Section 2). Regardless, we thought that it may be useful to collect and present this empirical data, providing quantitative support for the observation from (Li et al., 2022) that jobs requiring fewer nodes schedule faster. ## 3.5 Does Jointly Training To Increase Diversity Across Groups Improve Lo-Fi Performance? Previous work from (Gontijo-Lopes et al., 2021; Wortsman et al., 2022) has shown that more diverse models trained with different hyperparameters produce larger benefits when ensembles or weight averaged and also (Li et al., 2022) which showed that ensembling or weight averaging specialists trained on different domains incurs the largest benefit. We therefore asked whether encouraging diversity automatically through regularization during training might improve the performance of the final lo-fi model. While this strategy did indeed produce models with a larger averaging benefit (avg. model - best individual model), it also decreased the accuracy of the individual models such that overall performance was the same or worse than simply training the lo-fi components independently. We also tried pulling together the predictions of the models, which is also known as co-distillation (Anil et al., 2018; Sodhani et al., 2020). This improved the accuracy of the individual models, but as model diversity decreased, the benefit from weight-averaging was reduced, also leading to overall lower accuracy. We explored a number of variations of these approaches which we discuss in more detail in Appendix A. ## 4 Related Work Averaging and linearly interpolating models. Averaging or interpolating the weights of neural networks is a common technique for improving accuracy. Weight-averaging techniques for optimization date back to early work in convex optimization (Ruppert, 1988; Polyak, 1990). In deep learning, an exponential moving average (EMA) of weights can be used to improve accuracy (Szegedy et al., 2016). Another popular approach is Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) which uses a uniform average of weights saved at each epoch while training with a constant or cyclic learning rate. Indeed, the SWA method was motivated in part by the analogy between weight-averaging and ensembling. While SWA and EMA average weights along the training trajectory, there has also been substantial interest in averaging weights across independent trajectories. In particular, Nagarajan & Kolter (Nagarajan & Kolter, 2019) observe that the weights of two models that are fine-tuned independently on MNIST (LeCun et al., 2010) from a shared initialization can be interpolated without increasing loss. For more difficult problems such as ImageNet, this naive linear interpolation encounters a high error barrier (Frankle et al., 2020; Fort et al., 2020). However, Frankle et al. (2020) observe that when the first part of the optimization trajectory is shared and the remainder of training is independent, models can once again be interpolated without reducing accuracy. They refer to this phenomena—interpolating weights without accuracy loss—as *linear* mode connectivity. Neyshabur et al. (2020) observed a similar phenomenon when interpolating between model pairs that are *fine-tuned* from a shared initialization on a new task. This observation was extended to interpolation between a zero-shot model and fine-tuned model with the WiSE-FT approach (Wortsman et al., 2021), to many models fine-tuned with different hyperparameters with model soups (Wortsman et al., 2022), to models fine-tuned on different datasets with Ilharco et al. (2022), and for creating better pre-trained models by Choshen et al. (2022). The overall observation that the objective landscape can appears roughly convex was also made by Li et al. (2018). While all of the aforementioned weight-averaging employ simple linear interpolation, more advanced weight-averaging techniques have also been developed with promising results (Matena & Raffel, 2021). Recently, Li et al. (2022) introduced branch-train-merge which is at the intersection of model combination and distributed training. They consider the case where the training data is partitioned into different textual domains, then train an individual expert model for each domain. As they are training from scratch, they first require an initial seed phase. They then combine all of these experts via weight averaging or ensembling to outperform the dense baseline of training one large model on all of the data. The main difference are that our work is for fine-tuning, and we do not assume the data is partitioned into different domains. Other research in the area includes Garipov et al. (2018) and Draxler et al. (2018) who concurrently found that two neural network solutions trained independently can be connected by a simple curve along which loss remains low. These findings were generalized by Benton et al. (2021) who learn high dimensional lowloss connectors between individual solutions. Concurrent work with Benton et al. (2021), Wortsman et al. learned these high dimensional low-loss subspaces from scratch. Then, Entezari et al. (2021) conjectured that all solutions could be made to be linearly connected by applying a permutation to the weights which does not change the function. Ainsworth et al. (2022) recently made progress towards confirming this conjecture. However, unlike the model interpolations we observe here, and have previously been observed (Wortsman et al., 2022; Li et al., 2022), the interpolations in Ainsworth et al. (2022) so far do not improve models in terms of accuracy. Regardless, they are interesting from a scientific perspective, and suggest the possibility of applying methods such as lo-fi for training from scratch in the future, although there is currently no evidence towards this. Distributed training and fine-tuning. Distributed training (Li et al., 2020a; Yuan et al., 2022) and fine-tuning (Borzunov et al., 2022; Wang et al., 2022) are increasingly important in deep learning as models become larger. An overview of the many standard approaches is detailed by Weng & Brockman (Weng & Brockman, 2022), including i) data-parallelism, where data is split among devices, ii) pipeline parallelism, where different layers are split among devices, and iii) tensor parallelism, where individual layers are split among devices. We note that these approaches are not mutually exclusive. Indeed, one can use pipeline parallelism to distribute a model across a node, then use data-parallelism across nodes. lo-fi is proposing an alternative to dataparallelism across nodes—instead of synchronizing the updates between nodes during fine-tuning, each node independently produces a model which is averaged at the end of fine-tuning. We emphasize that lo-fi can still be used if there is pipeline parallelism across the node. There have previously been many alternatives proposed to synchronizing gradients each step. The idea of training several models in parallel and averaging their weights once at the end of training has been investigated at least since (McDonald et al., 2009; Zinkevich et al., 2010). The focus in those works is on convex models and training from scratch, rather than fine-tuning. While lo-fi is simply borrowing these techniques from convex optimization and applying them to fine-tuning for deep learning, we believe that these findings are interesting and useful from a practical standpoint. Another alternative, HogWild (Recht et al., 2011) proposes asynchronous communication. The difference between HogWild and lo-fi is that lo-fi never communicates during fine-tuning, so it's as if the hogs each have their own individual farm. As another alternative, local-sgd (Stich, 2018; Ortiz et al., 2021) communicates updates every k steps instead of every step. lo-fi is equivalent to local-sgd applied to fine-tuning where k is the number of fine-tuning epochs. There have also been compelling recent methods for more efficient and accessible pipeline or tensor parallelism to enable learning or inference with extremely large models. For instance, with Petals (Borzunov et al., 2022) certain layers of very large models are computed and communicated among coordinated users. Also, researchers have been using decentralized training for very large models (Huang et al., 2019; Yuan et al., 2022) which is made possible by, e.g., compressing communication (Wang et al., 2022; Kairouz et al., 2021; Xie et al., 2020; Faghri et al., 2020; Li et al., 2020b). Indeed, even inference with very many-billion parameter models can pose interesting challenges (Dettmers et al., 2022). Our approach is orthogonal to the work in compressing communication, as we are instead removing communication, but may prove useful for large-scale decentralized training. There is also the active research area of federated learning (e.g., Kairouz et al. (2021); Pillutla et al. (2022)), which has recently been explored in transfer settings (Nguyen et al., 2022). In federated learning, the data on each client is different and updates are usually communicated every k steps. While lo-fi only considers the easier setting of IID data, it is possible that similar approaches based on weight averaging to reduce communication may prove beneficial for privacy-preserving machine learning. ## 5 Limitations And Conclusion Limitations. There are many limitations discussed throughout this text. For instance, we found that when fine-tuning CLIP ViT-L on ImageNet and WILDS, lo-fi needs to observe more data to exceed the baseline. This is similarly true during language model fine-tuning (Section 3.3). Therefore, we most recommend lo-fi when communication costs are prohibitive. A final limitation is that lo-fi can only achieve matching accuracy when no new parameters are introduced, which we accomplish with a "zero-shot" initialization, or via LP-FT (this does not come up during language model fine-tuning). Conclusion. Overall we have observed that communication between nodes is not required during finetuning in certain settings. These findings may prove beneficial to a number of settings including large-scale decentralized fine-tuning and privacy-preserving ML and represent a promising step in the overall direction of developing models like open-source software (Raffel, 2021) in which many institutions can collaboratively fine-tune a large model if none has the resource to do so individually. As more workloads shift to fine-tuning of pre-trained models and models grow increasingly larger, we hope that our results will help to reduce barriers to large-scale models. ## Acknowledgements We thank Beidi Chen, Surya Ganguli, Caleb Ho, Gabriel Ilharco, Teng Li, Mansheej Paul, Alex G, Andrew Saxe, David Schwab, Shubho Sengupta, and Hugo Touvron for useful discussions. ## References Samuel K Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. *arXiv preprint arXiv:2209.04836*, 2022. Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E Dahl, and Geoffrey E Hinton. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235, 2018. Sara Beery, Arushi Agarwal, Elijah Cole, and Vighnesh Birodkar. The iwildcam 2021 competition dataset. In *Conference on Computer Vision and Pattern Recognition (CVPR) FGVC8 Workshop*, 2021. https: //arxiv.org/abs/2105.03494. Gregory Benton, Wesley Maddox, Sanae Lotfi, and Andrew Gordon Gordon Wilson. Loss surface simplexes for mode connecting volumes and fast ensembling. In *International Conference on Machine Learning*, pp. 769–779. PMLR, 2021. Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative inference and fine-tuning of large models. *arXiv preprint arXiv:2209.01188*, 2022. Benjamin Brazowski and Elad Schneidman. Collective learning by ensembles of altruistic diversifying neural networks. *arXiv preprint arXiv:2006.11671*, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, et al. Language models are few-shot learners. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020. https://arxiv.org/abs/2005.14165. Leshem Choshen, Elad Venezian, Noam Slonim, and Yoav Katz. Fusing finetuned models for better pretraining. *arXiv preprint arXiv:2204.03044*, 2022. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways, 2022. https://arxiv.org/abs/2204.02311. Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018. https://arxiv.org/abs/1711. 07846. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *Conference on Computer Vision and Pattern Recognition*, 2009. https://ieeexplore. ieee.org/document/5206848. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale, 2022. https://arxiv.org/abs/2208.07339. Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Finetuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International* Conference on Learning Representations (ICLR), 2021. https://arxiv.org/abs/2010.11929. Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht. Essentially no barriers in neural network energy landscape. In *International conference on machine learning*, pp. 1309–1318. PMLR, 2018. Rahim Entezari, Hanie Sedghi, Olga Saukh, and Behnam Neyshabur. The role of permutation invariance in linear mode connectivity of neural networks. *arXiv preprint arXiv:2110.06296*, 2021. Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M Roy, and Ali Ramezani-Kebrya. Adaptive gradient quantization for data-parallel sgd. *Advances in neural information processing systems*, 33:3174–3185, 2020. Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M Roy, and Surya Ganguli. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020. https://arxiv.org/abs/2010.15110. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In *International Conference on Machine Learning (ICML)*, 2020. https: //arxiv.org/abs/1912.05671. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb dataset of diverse text for language modeling, 2021. URL https://arxiv.org/abs/2101.00027. Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. In Advances in Neural Information Processing Systems (NeurIPS), 2018. https://arxiv.org/abs/1802.10026. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In *Conference on computer vision and pattern recognition (CVPR)*, 2014. https://openaccess.thecvf.com/content_cvpr_2014/papers/Girshick_Rich_ Feature_Hierarchies_2014_CVPR_paper.pdf. Raphael Gontijo-Lopes, Yann Dauphin, and Ekin D Cubuk. No one representation to rule them all: Overlapping features of training methods. *arXiv preprint arXiv:2110.12899*, 2021. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. International Conference on Computer Vision (ICCV), 2021a. https://arxiv.org/abs/2006.16241. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021b. https://arxiv.org/ abs/1907.07174. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In *European conference on computer vision*, pp. 646–661. Springer, 2016. Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. *Advances in neural information processing systems*, 32, 2019. Gabriel Ilharco, Mitchell Wortsma, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights, 2022. https://arxiv.org/abs/2208.05592. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In *Conference on Uncertainty in Artificial* Intelligence (UAI), 2018. https://arxiv.org/abs/1803.05407. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends® *in Machine Learning*, 14(1–2):1–210, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning (ICML)*, 2021. https://arxiv.org/abs/2012.07421. Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In *European Conference on Computer* Vision (ECCV), 2020. https://arxiv.org/abs/1912.11370. Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. https://arxiv.org/abs/1805. 08974. Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In *International Conference on Learning* Representations, 2022. URL https://openreview.net/forum?id=UYneFzXSJWh. Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. 2010. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models, 2022. https://arxiv.org/abs/2206.14858. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. *Advances in neural information processing systems*, 31, 2018. Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A Smith, and Luke Zettlemoyer. Branch-train-merge: Embarrassingly parallel training of expert language models. arXiv preprint arXiv:2208.03306, 2022. Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, et al. Pytorch distributed: Experiences on accelerating data parallel training. *arXiv preprint arXiv:2006.15704*, 2020a. Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization. *arXiv preprint arXiv:2002.11364*, 2020b. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. *arXiv* preprint arXiv:1907.11692, 2019. Michael Matena and Colin Raffel. Merging models with fisher-weighted averaging, 2021. https://arxiv. org/abs/2111.09832. Ryan McDonald, Mehryar Mohri, Nathan Silberman, Dan Walker, and Gideon Mann. Efficient large-scale distributed training of conditional maximum entropy models. In *Advances in Neural Information Processing Systems*, volume 22, 2009. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. *arXiv* preprint arXiv:1710.03740, 2017. Vaishnavh Nagarajan and J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'AlchéBuc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 05e97c207235d63ceb1db43c60db7bbb-Paper.pdf. Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is being transferred in transfer learning? In Advances in Neural Information Processing Systems (NeurIPS), 2020. https://arxiv.org/abs/2008. 11687. John Nguyen, Kshitiz Malik, Maziar Sanjabi, and Michael Rabbat. Where to begin? exploring the impact of pre-training and initialization in federated learning. *arXiv preprint arXiv:2206.15387*, 2022. Jose Javier Gonzalez Ortiz, Jonathan Frankle, Mike Rabbat, Ari Morcos, and Nicolas Ballas. Trade-offs of local sgd at scale: An empirical study. *arXiv preprint arXiv:2110.08133*, 2021. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *arXiv preprint arXiv:2203.02155*, 2022. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems*, pp. 8024–8035, 2019. Krishna Pillutla, Kshitiz Malik, Abdel-Rahman Mohamed, Mike Rabbat, Maziar Sanjabi, and Lin Xiao. Federated learning with partial model personalization. In *International Conference on Machine Learning*, pp. 17716–17758. PMLR, 2022. Boris Teodorovich Polyak. New method of stochastic approximation type. *Automation and remote control*, 1990. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021. https://arxiv.org/abs/2103.00020. Colin Raffel. A call to build models like we build open-source software, 2021. https://colinraffel.com/ blog/a-call-to-build-models-like-we-build-open-source-software.html. Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. *Advances in neural information processing systems*, 24, 2011. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In *International Conference on Machine Learning (ICML)*, 2019. https://arxiv.org/ abs/1902.10811. David Ruppert. Efficient estimations from a slowly convergent robbins-monro process, 1988. https:// ecommons.cornell.edu/handle/1813/8664. Shagun Sodhani, Olivier Delalleau, Mahmoud Assran, Koustuv Sinha, Nicolas Ballas, and Michael Rabbat. A closer look at codistillation for distributed training. *arXiv preprint arXiv:2010.02838*, 2020. Sebastian U Stich. Local sgd converges fast and communicates little. *arXiv preprint arXiv:1805.09767*, 2018. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 2818–2826, 2016. Hugo Touvron, Matthieu Cord, and Herve Jegou. Deit iii: Revenge of the vit. *arXiv preprint* arXiv:2204.07118, 2022. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint* arXiv:1804.07461, 2018. Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019. https://arxiv.org/abs/1905.13549. Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Re, and Ce Zhang. Fine-tuning language models over slow networks using activation compression with guarantees. arXiv preprint arXiv:2206.01299, 2022. Lilian Weng and Greg Brockman. Techniques for training large neural networks, 2022. https://openai. com/blog/techniques-for-training-large-neural-networks/. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020. emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6. Mitchell Wortsman, Maxwell C Horton, Carlos Guestrin, Ali Farhadi, and Mohammad Rastegari. Learning neural network subspaces. In *International Conference on Machine Learning (ICML)*. https: //proceedings.mlr.press/v139/wortsman21a.html. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021. https://arxiv.org/abs/2109.01903. Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning (ICML), 2022. https://arxiv.org/abs/2203.05482. Cong Xie, Shuai Zheng, Sanmi Koyejo, Indranil Gupta, Mu Li, and Haibin Lin. Cser: Communicationefficient sgd with error reset. *Advances in Neural Information Processing Systems*, 33:12593–12603, 2020. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In *Advances in Neural Information Processing Systems (NeurIPS)*, 2014. https://arxiv. org/abs/1411.1792. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. *arXiv preprint arXiv:1904.00962*, 2019. Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Re, and Ce Zhang. Decentralized training of foundation models in heterogeneous environments. *arXiv* preprint arXiv:2206.01288, 2022. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 6023–6032, 2019. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models, 2022. https://arxiv.org/abs/2205.01068. Martin Zinkevich, Markus Weimer, Lihong Li, and Alex Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, volume 23, 2010. ## A Negative Results When Using Regularization To Promote Model Diversity We also experiment with explicitly regularizing the models to have diverse predictions. We are motivated by previous work which shows that having diverse models can lead to better ensembles or weight-averages (Brazowski & Schneidman, 2020; Gontijo-Lopes et al., 2021; Wortsman et al., 2022; Li et al., 2022). In particular, we wanted to push apart model predictions from different nodes. However, in the standard set-up, different nodes observe different data. As an alternative to this, at each iteration we had pairs of nodes observe the same data. We did this without reducing batch size by using cutmix (Yun et al., 2019) with the data from one node to another. However, without any further changes, this modification reduced accuray by roughly 0.15pp on ImageNet. We suspect this is because the models on different nodes became less diverse by sharing data. Let y1 and y2 be the predictions from two nodes who observed the same data. We experimented with adding the term λ · DKL(y1||y2) to the loss where DKL is the KL divergence. The motivation was that with λ < 0 the models would become more diverse as in (Brazowski & Schneidman, 2020). When λ > 0, this procedure would become co-distillation (Anil et al., 2018) and hopefully accelerate training. However, while λ < 0 improved the absolute benefit from averaging models, it reduced the performance of the individual models. Moreover, while λ > 0 improved the performance of the individual models, it decreased the benefit of averaging. These experiments are presented in Figure 8, finding that any change to λ lowered the accuracy of lo-fi. We also experimented with a number of other approaches to improve diversity, including using other distance metrics, only pushing apart incorrect examples, pushing apart model weights, and also taking the PCA of the predictions and pushing apart in the unimportant PCs. However, all of these approaches exhibited the same qualitative effect: improving diversity reduced individual model performance and increased the benefit of averaging, but not by enough to offset the individual model reduction. Interestingly, while our search was not exhaustive, this result may suggest that simply fine-tuning models with random seeds might produce the optimal amount of diversity for ensembling and model averaging. ![18_image_0.png](18_image_0.png) Figure 8: We tried "pushing apart" or "pulling together" the models at each of the nodes by adding the KL divergence between their predictions to the loss. Using a positive coefficient is equivalent to co-distillation (Anil et al., 2018) which improved the individual models but decreased the accuracy of the average. Using a negative coefficient as in (Brazowski & Schneidman, 2020) overall increased the improvement from averaging but decreased the accuracy of the individual models. Overall, these approaches did not improve lo-fi. ![19_image_0.png](19_image_0.png) Figure 9: We supplement Figure 2 with the additional baseline of standard multi-node fine-tuning with communication. We do not replace Figure 2 as the intention of Figure 2 is to visualize how the curve changes as the x-axis changes, and our intention is not for readers to view the figure as a comparison between lo-fi and the baseline. ## B A Comment On Learning Rate In Figure 1 A question emerges when examining Figure 1: why does the accuracy of the individual node first dip before coming back up? The answer is due to learning rate. While both lo-fi and the baseline use the same learning rate, which is the learning rate used by DeiT-III paper of 3e-4, the individual lo-fi nodes have a smaller global batch. Therefore, this same learning rate acts larger. We tried to increase the learning rate for the baseline so that it also had this down-then-up trend but it resulted in worse accuracy. We also tried changing the learning rate for lo-fi but this also reduced accuracy. This is similar to an observation made in the context of EMA in model soups (Wortsman et al., 2022), which is that the best model for averaging weights is not necessarily the best model overall. We believe the learning helps individual nodes produce models which are different, and there is therefore more benefit from their combination. ## C Additional Experimental Results This section presents additional experimental results. First, we supplement Figure 2 with the additional baseline of multi-node training (Section C.1). Next, explore the application of lo-fi for fine-tuning on tasks from GLUE Wang et al. (2018). ## C.1 Adding The Additional Baseline Of Multi-Node Training To Figure 2 In Figure 9 we supplement Figure 2 with the additional baseline of multi-node training. Due to computational constraints for simultaneous nodes, we do not exceed 12 nodes. Overall we observe similar curve shapes for both the baseline and lo-fi. ## C.2 Additional Natural Language Processing Results In Section 3.3 we explored fine-tuning with the language modeling objective. This was because standard fine-tuning tasks in natural language processing are relatively small and don't require multiple nodes. In this section, we expand our results to also consider fine-tuning a RoBERTa-base (Liu et al., 2019) model on tasks from the GLUE benchmark (Wang et al., 2018). Instead of fine-tuning on multiple nodes without communication, we instead fine-tuning on multiple GPUs without communication. Since we are introducing new parameters (i.e., the classification layer), we use LP-FT similarly to the experiments in Section 3.1. We accomplish this via a common first epoch among lo-fi workers. The baseline is then 40 epochs with communication among 4 GPUs while each of the 4 lo-fi workers is 10 epochs on 1 GPU before a final averaging step. In contrast to vision experiments, NLP fine-tuning can sometimes have low accuracy depending on the seed (Dodge et al., 2020). To account for this we use the greedy averaging technique from Wortsman et al. (2022) to combine models instead of the uniform averaging used throughout the rest of the paper. | | SST2 | COLA | QQP | MRPC | RTE | MultiNLI | |----------|--------|--------|-------|--------|-------|------------| | baseline | 94.30 | 61.34 | 89.29 | 92.10 | 77.98 | 87.51 | | lo-fi | 95.30 | 63.33 | 89.12 | 92.20 | 77.98 | 87.83 | Table 4: The performance of lo-fi compared to the baseline when fine-tuning on GLUE tasks (Wang et al., 2018). Details in Section C.2.
Review 1: Summary: The work presents a distributed way of finetuning by finetuning separately and ensembling\merging models, with little communication. This allows for different nodes (groups of processing units that are interconnected well) not to communicate during training and still perform as well or even better. A crucial matter here is that, per node, the amount of computation is held constant. So it produces a protocol where training is more efficient when node communication is slow or problematic. Strengths and Weaknesses: Strengths: The paper has a lot of substance, experiments and ablations. The results show a clear trend, there is little doubt the method works and achieves the claims in the paper. Minor: The robustness to more epochs (e.g. Fig 4,5) is interesting and a good property. Have any guess as to why is it happening? It is also a small weakness that it sometimes requires more epochs to surpass the baseline, is that a cost of distributing or related to the drop rate? Weakness: While the framing discusses distributed finetuning, this fits the results only with some caveats. Finetuning can be done without multiple GPUs and reach reasonable results and this method could be seen as a way of ensembling weak models. This is little discussed, unless I missed it, but we see that training on just one node works not that bad, so essentially what the paper proposes is training without distribution, stopping before convergence (right?) and getting a lower result, and then performing a model-soup-like ensemble that, as a good ensemble, improves results over the individual models. So in essence we could just put more computing effort, calculate anything we want more times (for example the full fine-tuning or a partial one) and make an ensemble to get better results. That is far from new or surprising. So the paper does not really say deep things about the low communication, but instead could be understood as saying things on ensembling of undertrained models, which we can expect. This alternate view should be better discussed. For example, why train with a small batch size (1/K) and not regularly (accumulate to account for missing nodes) but stop in the middle of training, is it better? worse? faster? Ensembling weak models also goes a lot back, connecting this paper to that branch may allow others to more easily improve it. It should be emphasized more how lo-fi is in comparison to regular non-distributed training. Overall data is the same, overall GPU number is the same (although don't have to be), what about logical batch size (after accumulation), per node data, overall computation time? Except for the communication what changes with it? There are 3 aspects (data, compute and accuracy) and each of them is separated (device\node\overall + maybe accumulated batches). It takes time to understand, what of those require further costs, what are the possible gains and what is held constant. The accuracy part is quite elaborated, but the other aspects are spread across the paper despite being the real reason for someone to prefer this method. As far as I can tell (and the fact that I am unsure is discussed above), the difference between lo-fi one node and regular training is that lo-fi batches are 1/K size, but same learning rate. In that case, the authors show that training was worse when choosing the wrong learning rate, there is even Appendix showing it is likely the case. This raises suspicions, could the one node perform ok with an ok learning rate? (with gradient accumulation, surely it can because logically it is equivalent, the paper should mention it and explain why this is possible but not advisable as it is slow) If it can, then no problem, more probable is that we will see that the method can just boost some lower-performing models by fusing them (like model soups etc.). But then, the framing is different, we want to just train faster, not to get better results, because better results are achievable by training several times and merging (as we said model soups). This method proposes to train inefficiently without communication and not lose (much?), but gain in speed. So more emphasis should be put on times. Specifically, they can train each node better (fit the parameters, you made so much effort fitting the baselines well you can fit your own too), and stop the training given the same computation as given to the other method but not communication, or even stop before, and show at which point you can stop. Note, running until convergence will tell us nothing, the framing here is about no communication, but without communication and with more compute\time anything could be done, without Lo-Fi, just train on one node K times longer and accumulate gradients, this will be performing the computation of the K nodes on 1, and is not interesting. Minor: The introduction mentions nodes, but the method defines devices. Are those the same? Do those do anything other than processing? would they be called processing units (GPU TPU CPU?)? Device for me hints of phones, tablets, laptops, PCs etc. but, it may be just me, and I'm not even a native speaker. Only when discussing some devices talking to each other and others that do not did I understand you are referring to slurm-like definitions. You should define those somewhere. A node is a group of devices which can better communicate within themselves than with devices from different nodes. Also, make sure to define things in a general manner, if hardware would change tomorrow and call a group for example "pod" then the methodology of your paper, that in no way relies on the hardware should still be clear. Figure 1 is actually clearer in describing the setting. It is not overflowing the paper, so it is not critical, but still, because no experimental setup is provided, various details are written in places where they give no benefit for the reader. For example, when trying to understand the method, "In the majority of our experiments, each group is a single node containing 8 GPU devices." is of no interest.  Or which code is used for DeiT which is mentioned at least 5 times.   The introduction describes regions in the low space in quite a detail, which to me seems to miss the point. The motivation for the work is not clear. This is only the background to why it is possible or what came before this work. For example, what is a node (first mentioned when the method is introduced), or why would costs of communication between nodes (GPUs) would ever be "prohibitive". This is the motivation of this paper, communication during fine-tuning, not low regions in space, which are related but do not make this paper exciting, just possible. There is another possibility, and that is that the paper does make regions in space during finetuning its main motivation, but this would require reframing if not a bit different focus on the experiments. While changing this will mean rewriting the introduction, I don't see it as necessary for the paper to be published, just for it to be more exciting or convincing. Why did you use different implementations in text and vision experiments? Did you test how they affect results or times? I also could not follow the explanation of the implementations, you state this is implementation after the one-time communication, why batches and gradients are still needed at this stage? Figure 1 which the paper heavily relies on does it some disservice. The "lo-fo(individual node)" has little to do with lo-fi. It is not that the paper proposes to use this method. This looks like there are two things the paper suggests, and one of them might be tolerable despite lower performance. If every node can already train the model by itself, and you allow only one communication, why not have a small finishing touch just to improve over the noise of averaging\merging\fusing the model? For example, only learn the last layer to make sure it predicts based on the inputs. Or perform some final batches after the fuse (we have found it to help in other scenarios). The tables would benefit from a split to in domain and out of domain, as this is the main claim and the names of individual test sets are necessary but not the main message. This would highlight where lo-fi "wins". When you write "drop" in Table 2 you mean stochastic depth drop, right? worth writing it in the caption, as drop with little context may be dropout or other things. Why is appendix B the first one referenced in the text? Makes you think you skipped a note somewhere before. (By the way, the addition of this mystery is very nice) "we found it was necessary for achievingmatching performance with lo-fi." it is quite ok not to achieve the same performance with a more restricted method. how big was the drop? "in which this the benefits" typo Requested Changes: Mainly, things in the framing bother me: the relations to ensembling and individual training and the over discussion of the theoretical reasons to believe it would work (which are great but do not replace talking about this work too...), but under discussion of motivation and terminology. The explanation of the motivation is unclear to me. Also some of the nuances of the experimental setup, specifically those related to amount of compute, the answer to the question what really differs between each baseline (except the hardware speed and other factors that do not affect the reported results), especially between the one node lo-fi and the regular training. I believe I understand them now, but it was not explicit in any one place. Minor adjustments are just the minor weaknesses, Broader Impact Concerns: NA ================================================== Review 2: Summary: - This paper proposes a method lo-fi, which aims at achieving distributed fine-tuning without extra communication. In its vanilla version, lo-fi simply conducts local fine-tuning on each computing device before a final averaging step. - Motivations and intuitions are provided for the proposed lo-fi method. - Empirical and ablation studies were conducted to justify the effectiveness and communication efficiency of lo-fi. Strengths and Weaknesses: Strengths: - The idea of lo-fi is easy to follow. - Improving the communication efficiency of distributed fine-tuning is a promising research direction. Weaknesses: My major complaint about this paper is that the idea is simply too straightforward. Conducting distributed computation with only final gradient/model aggregation (i.e., after the convergence of all locally trained) is a simple and old idea (please see PSGD proposed in [1], which appeared at NeurIPS 2010). Lo-fi, in my view, is a simple extension of PSGD to fine-tuning tasks, which has limited novelty. More concerns are summarized below: - Many methods have been developed to make fine-tuning very memory and/or computation-efficient, e.g., the ones in [2-3]. In fact, those methods also manage to save communication costs under a distributed scenario (as their number of active tunable parameters is so few). I wonder how lo-fi compares against LoRA and Adapter in [2-3]. - The proposed lo-fi method seems to only apply to data parallelism. For the model that can not fit into a single GPU memory, how does lo-fi work? In another word, can lo-fi be modified to support model and pipeline parallelism strategies? - The draft seems to be written in rush, there are a few issues in formatting/writing, e.g., there is an abrupt line break at the end of the second paragraph of the Introduction. [1] https://papers.nips.cc/paper/2010/file/abea47ba24142ed16b7d8fbf2c740e0d-Paper.pdf [2] https://arxiv.org/abs/2106.09685 [3] https://arxiv.org/pdf/1902.00751.pdf [4] https://cs.stanford.edu/people/matei/papers/2021/sc_megatron_lm.pdf [5] https://arxiv.org/abs/2201.12023 Requested Changes: - One interesting direction/study that can be added to improve the current draft is: [1] is guaranteed to converge for convex functions. We all know that the loss landscape of a deep neural network is not convex at all, however why PSGD works particularly well for fine-tuned deep neural networks? Does the pre-training make the loss landscape more convex? I believe that some studies in [2] can be borrowed. - The proposed method lacks novelty. The authors have to convince potential readers very well why their proposed method is important. - The authors are encouraged to compare lo-fi against the other efficient fine-tuning methods, e.g., LoRA and Adapter. - A discussion on the compatibility of lo-fi to model and pipeline parallelism can improve the quality of the paper. - The writing quality of the current draft can be improved. [1] https://papers.nips.cc/paper/2010/file/abea47ba24142ed16b7d8fbf2c740e0d-Paper.pdf [2] https://arxiv.org/abs/1712.09913 Broader Impact Concerns: I do not find any concerns about the ethical implications of the work. Thus I would not urge the authors to add a Broader Impact Statement. ================================================== Review 3: Summary: This paper demonstrates that fine-tuning a model on individual nodes and then averaging their weights afterwards can match (or even outperform) training one model on all the nodes using data parallelism. They demonstrate that the results hold when fine-tuning vision and language models. Strengths and Weaknesses: Strengths - Strong experiments across different modalities that verify their claim - Additional experiment on wall-clock time demonstrates the actual use-case of their method - Good ablations on OOD data and effect of number of nodes Weaknesses - The NLP experiments fine-tune OPT on CommonCrawl, which is normally used for pre-training, not fine-tuning. This difference matters since pre-training is done on new parameters and fine-tuning is done on non-new parameters and the behavior of lo-fi is different in these two setups. “lo-fi can only achieve matching accuracy when no new parameters are introduced” - limitations. Requested Changes: - Simply strengthen work: NLP experiment which fine-tune a LM on some standard downstream task for fine-tuning. - Simply strengthen work: A baseline in figure 2 which uses gradient communication across various number of nodes. This would indicate if using gradient communication experiences diminishing returns after 4 nodes. Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept as is Comment: This paper presents a straightforward application of existing methods (e.g. federated learning) to the setting of (distributed) transfer learning. One reviewer complained about lack of novelty, but I think that highlighting that this simple approach can work in the ubiquitous transfer learning setting is valuable. Other reviewers had some initial suggestions primarily around clarity and framing which were addressed. I think this paper makes a meaningful contribution and is worth publishing, since it could potentially open up distributed training to resource-constrained individuals in an incredibly common ML pipeline. ==================================================
# Contextual Vision Transformers For Robust Representation Learning Anonymous authors Paper under double-blind review ## Abstract We introduce Contextual Vision Transformers (ContextViT), a method designed to generate robust image representations for datasets experiencing shifts in latent factors across various groups. Derived from the concept of in-context learning, ContextViT incorporates an additional context token to encapsulate group-specific information. This integration allows the model to adjust the image representation in accordance with the group-specific context. Specifically, for a given input image, ContextViT maps images with identical group membership into this context token, which is appended to the input image tokens. Additionally, we introduce a context inference network to predict such tokens on-the-fly, given a batch of samples from the group. This enables ContextViT to adapt to new testing distributions during inference time. We demonstrate the efficacy of ContextViT across a wide range of applications. In supervised fine-tuning, we show that augmenting pre-trained ViTs with our proposed context conditioning mechanism results in consistent improvements in out-of-distribution generalization on iWildCam and FMoW. We also investigate selfsupervised representation learning with ContextViT. Our experiments on the Camelyon17 pathology imaging benchmark and the JUMP-CP microscopy imaging benchmark demonstrate that ContextViT excels in learning stable image featurizations amidst distribution shift, consistently outperforming its ViT counterpart. ## 1 Introduction In recent years, Vision Transformers (ViTs) have emerged as a powerful tool for image representation learning (Dosovitskiy et al.). However, real-world datasets often exhibit structured variations or shifts in latent factors across different groups. These shifts, which occur when known or unknown factors of variation change during data acquisition, can lead to inaccurate or biased predictions when the model encounters new data. To address this issue, we introduce Contextual Vision Transformers (ContextViT), a novel method that generates robust feature representations for images, effectively handling variation in underlying latent factors. Transformer-based models can perform test-time generalization by integrating available information through in-context learning (Brown et al., 2020), where input-label pairs are added to the transformer inputs. In this work, we leverage this principle to address distribution shifts at test time. However, direct application of this strategy presents challenges for vision transformers due to the quadratic scaling of image patch tokens with input resolution. Derived from the principle of in-context learning, we propose ContextViT, a variant of vision transformer that condenses the relevant context into a single token. This *context token* is shared across all images with the same underlying distribution and varies across different distributions. For instance, in medical imaging, this could correspond to learning a single hospital token for each hospital. The context token is appended to the input sequence of the transformer, conditioning the resulting image representation on the context. This approach reduces the need for conditioning the model on extra input examples and enables efficient inference at test time. Despite these advantages, one limitation remains: it does not allow for generalization to unseen hospitals in absence of inferred context tokens. To overcome this limitation, we introduce a *context inference network* that estimates these tokens *on-the-fly* from the input images. Given a group of images from the same context (e.g., a hospital), the context inference network predicts the token that encapsulates the characteristics of this context. We argue that this procedure should be applied iteratively across all transformer layers, a process we term *layer-wise context conditioning*. This is because earlier transformer layers capture local image patterns, while later layers capture more abstract concepts Ghiasi et al. (2022). Adjusting the covariates at the first layer is not sufficient to accommodate the covariate shift, which can be a concept-level change. Our empirical analysis begins with supervised fine-tuning, where we integrate our context conditioning mechanism with pre-trained ViTs. Our results show consistent performance gains across three pre-trained ViT models, namely DINO (Caron et al., 2021), SWAG (Li et al., 2016), and CLIP (Radford et al., 2021), on the WILDS benchmark iWildCam (Beery et al., 2021) and FMoW (Christie et al., 2018). We then explore the integration of self-supervised representation learning with ContextViT. On the microscopy cell imaging benchmark JUMP-CP Haghighi et al. (2022), we observe an interesting phenomenon: while the performance of ViT decreases due to the batch effect as we introduce more data plates for pre-training, the performance of ContextViT steadily increases with more data. On the histopathology benchmark Camelyon17-WILDS (Bandi et al., 2018; Sagawa et al.), ContextViT significantly improves the performance on unseen hospitals, setting a new state-of-the-art. The main contributions of this paper are: 1. We propose ContextViT, an adaptation of ViT for producing robust image representations using learned context tokens to capture distribution shift. We derive ContextViT from in-context learning under distribution shift. 2. We incorporate a context inference mechanism into ContextViT, enhancing its ability to adapt and generalize to previously unseen distributions on the fly. 3. We present layer-wise context conditioning for ContextViT, a process that uses per-layer context tokens iteratively across all transformer layers to handle concept-level changes across distributions. 4. We explore the integration of self-supervised learning with cell-imaging and histopathology datasets, demonstrating significant performance improvements under distribution shifts and establishing a new state-of-the-art. ## 2 Related Work In-context learning Our work draws inspiration from and expands the idea of in-context learning (Brown et al., 2020). We realize a form of conditioning that can position transformer-based models to perform test-time adaptation to new tasks, but instead of conditioning on a dataset explicitly for each query, we infer context tokens that are shareable across a group. Like Xie et al. (2022), we interpret in-context learning via an implicit generative model. The key distinction lies in our focus on adaptation to distribution shift, and our expansion of the framework to include explicit context tokens. There is a substantial body of previous work on conditioning transformers with extra tokens. Li & Liang (2021) introduced prefix tuning as a method to adapt a frozen Transformer model for various downstream tasks. Xu et al. (2022); Li et al. (2021); Mao et al. (2022a) employed additional tokens to encode extra domain knowledge, such as multi-modal information from language. We focus on the scenario where we do not assume existence multi-modal measurements and our domain knowledge is structural (knowing group membership). White et al. (2022) explored hierarchical models for transformers in the context of language modeling, highlighting similar group-specific effects and connections to mixed models. Zhang et al. (2023); Jia et al. (2022); Zhou et al. (2022) learns extra prompt tokens for different downstream imaging applications. ContextViT distinguishes itself by introducing the conditioning mechanism based on the group membership of the data. Unlike previous approaches, ContextViT derives context information directly from the input image and applies context conditioning at every layer of the transformer encoder. This enables ContextViT to effectively generalize across unseen data groups. ![2_image_0.png](2_image_0.png) Figure 1: Illustration of ContextViT. For clarity, this figure depicts only two layers of the Transformer. Model components are indicated by grey-colored blocks, while white blocks represent the input, hidden, and output representations. Prior to processing by each Transformer layer, ContextViT employs a context inference model (that aggregates information across all image patches within the current batch sharing identical group membership) to generate the **context token**. Robust learning Robustness in machine learning can be interpreted in various ways. Mahmood et al. (2021) discovered that ViTs are vulnerable to white-box adversarial attacks Hendrycks & Dietterich (2019). Consequently, numerous strategies have been proposed to enhance the adversarial robustness of ViTs ViTs (Mao et al., 2022b; Chefer et al., 2022). In this study, we concentrate on robustness in the face of systematic distribution shifts. Test time adaptation is a widely adopted approach (Sun et al., 2020; Liu et al., 2021; Zhang et al., 2022). Li et al. (2022) developed a matching network to select the most suitable pretrained models for testing. The use of batch normalization at test time has also been shown to improve the generalization of CNN across various applications (Zhang et al., 2021; Nado et al., 2020; Li et al., 2016; Schneider et al., 2020; Lin & Lu, 2022; Kaku et al., 2020). Our approach, however, diverges from directly normalizing feature statistics or training multiple expert models. Instead, we incorporate context information as an additional latent input to the Transformer layer. Finally, it's worth noting that the representations generated by ContextViT can be seamlessly integrated with robust learning algorithms such as Group Distributionally Robust Optimization (Sagawa et al., 2019) and Invariant Risk Minimization (Arjovsky et al., 2020), should users desire a specific form of invariance. ## 3 Vision Transformers (Vits) Patch token embeddings Let x ∈ R H×W×C be an image with C channels and resolution H by W. ViTs first partition the image into a sequence of non-overlapping 2D patches [xp1 , . . . , xpN ], each with resolution (Hp, Wp) and represented by xpi ∈ R Hp×Wp×C . ViTs treat each image patch as a "1D token" for the Transformer, and we obtain patch token embeddings by flattening each patch xpi into a 1D vector and applying a trainable affine projection. The resulting patch token sequence is denoted as [tp1 , . . . , tpN ] where each tpi ∈ R d. CLS **token and position embeddings** In ViTs, we prepend a trainable CLS token tCLS to the input sequence. This enables the Transformer encoder to capture global image features by aggregating information across the patch sequence. We retain positional information for each patch token using a trainable 1D position embedding pi. The input sequence for the Transformer encoder is thus given by [tCLS, tp1 + posp1 , . . . , tpN + pospN ]. Transformer The Transformer layer (Vaswani et al., 2017) is a critical component of the ViT architecture. It comprises a self-attention layer and a feed-forward layer, each with residual connections (He et al., 2016) and layer normalization (Ba et al., 2016). Self-attention enables the model to capture dependencies between patches in the input image by embedding each patch based on its similarity to other patches. ViTs utilize a stack of Transformer layers T (1)*, . . . T*(L)to encode the input sequence into a sequence of 1D features: $$[y_{\texttt{cls}}^{(L)},y_{p_{1}}^{(L)},\ldots,y_{p_{N}}^{(L)}]=T^{(L)}\cdots T^{(2)}T^{(1)}([t_{\texttt{cls}},t_{p_{1}}+p o s_{p_{1}},\ldots,t_{p_{N}}+p o s_{p_{N}}]).$$ ## 4 Contextual Vision Transformer (Contextvit) Let Dc denote a collection of samples from a distribution Pc(·) under a context with index c, which can point to an unobserved environmental effect, a set of covariates such as experimental conditions, or another factor of variation with a scope over the set Dc that systematically shapes its distribution. Our goal is as follows: given a collection of such related datasets from distributions with varying factors of variation, captured as varying contexts c, we would like to learn a model that can generalize gracefully across these different domains and ideally adapt to new contexts c ∗ during test time. We assume that we observe data from multiple distributions jointly, each varying by a context c, so that our dataset is comprised by their collection D = {D1*, ...,* Dc}. We aim to learn a single shared ViT model, parameterized by θ, across all sub-datasets with varying contexts. We denote a tuple {*x, y, c*} as the data describing the sample, where x denotes an input, the image, and y denotes an output variable such as a label or an unknown embedding, and c denoting *group membership* for each datum to each sub-distribution Pc(·). Over the next sections we will explain our assumptions for ContextViT and will present practical implementations. ## 4.1 In-Context Model A popular paradigm for incorporating test-time conditioning in Transformers is given by in-context learning (Brown et al., 2020) and prompt-conditioning (Radford et al., 2021). To translate that paradigm to our task, we incorporate group membership to the representation learning task by conditioning on the members of the group, assuming the following factorization of the joint distribution over all data: $$P(Y|X,C)=\prod_{c\ \ \{x,y\}\in{\mathcal{D}}_{c}}P(y|x,{\mathcal{D}}_{c};\,\theta).$$ $$(1)$$ P(y|x, Dc; θ). (1) Here, we would concatenate all images belonging to context c to the query image x. During ViT processing, these images are patchified and represented as patch tokens, resulting in an input to the Transformer consisting of the tokens belonging to the query image and to the context images. This process is closest to in-context reasoning as commonly applied for LLMs, but ported over to the scenario of representation learning for vision.1 ## 4.2 Context-Token Model For the sake of exposition, let's assume an implicit hierarchical generative model of images with shared global latent variables per group tc ∈ R d which summarizes the shared latent characteristics of Dc. This variable represents an embedding of the context (which can characterize the environment or implicit covariates determining each distribution, or other factors of variation shifting in c) and is treated as a *context token*, 1In our formulation, we exclusively utilize the data x as the context, rather than the data-label pair (x, y. This decision is informed by the practical reality that, while test distribution labels may be scarce or entirely absent, data itself is typically abundant. ![4_image_0.png](4_image_0.png) Figure 2: ContextViT inference (solid line) vs. the implicit generative model (dotted line) see Fig. 2. Such models are common in generative modeling when environmental effects or style-class disentanglement are proposed. The context token tc can be characterized by its posterior distribution given the members of the group P(tc|Dc) and can then be utilized in an adjusted model P(y|*x, t*c; θ) to explain the impact of Dc. Upon closer inspection, it is evident that the in-context-learning process described in Sec. 4.1 can be interpreted as inferring and marginalizing the posterior distribution P(tc|Dc) of the context-specific variable tc as shown in the following: $$P(y|x,{\mathcal{D}}_{c})=\int P(y|x,t_{c};\,\theta)P(t_{c}|{\mathcal{D}}_{c})\,\mathrm{d}t_{c}.$$ $$\mathrm{M}$$ $$\mathbf{\Phi}^{\dagger}$$ P(y|x, Dc) = ZP(y|*x, t*c; θ)P(tc|Dc) dtc. (2) Under this viewpoint using Eq.2, we can now see that the context token is shared over all members of a distribution with index c (similar to a variable shared in a plate in graphical models), allowing us to rewrite Eq.1 such that we can factorize the dependence of the model on Dc for each data point given this context token: $$P(Y|X,C)=\prod_{c}\int P(t_{c}|{\mathcal{D}}_{c})\prod_{\{x,y\}\in{\mathcal{D}}_{c}}P(y|x,t_{c};\,\theta)\,\mathrm{d}t_{c}.$$ We now have established equivalence between the in-context model, and performing inference in an implicit forward generative process which assumes existence of a context token as an additional factor of variation capturing distribution shifts for each group sharing a context. We note that recent work proposes a deeply related interpretation of in-context learning in (Xie et al., 2022) supporting our view of an implicit probabilistic model. ## 4.3 Oracle-Context Model To simplify the setup for use in a ViT framework, we devise simpler formulations of the model and perform maximum likelihood inference over the context token, simplifying Eq. 3 to: $$P(Y|X,C)=\prod_{c\ \ \{x,y\}\in{\mathcal{D}}_{c}}P(y|x;t_{c},\theta),$$ P(y|x;tc, θ), (4) where tc now is a shared parameter that can be inferred per existing group during training. We denote this model the Oracle-Context Model, since it only assumes knowledge of the indicator c about which group an image belongs to, and does not require access to the other members of the group. We can instantiate this oracle model P(y|x;tc, θ) by conditioning the ViTs with a learnable context token tc and append it to the input sequence of the Transformer: [tCLS, tc, t1 + pos1, . . . , tN + posN ]. The limitation, however, is that such a method cannot be applied during test-time beyond known distributions, since it lacks the ability to infer the token embedding without prior exposure to examples from this group during the training phase. $$\left(4\right)$$ ## 4.4 Context Inference Model We overcome this limitation by closer matching the objective and assuming a model that performs amortized inference over the context-token parameters on the fly given observations Dc by utilizing an inference network h(·; ϕ). Concretely, we will again simplify p(tc|Dc) to be given by maximum likelihood but this time also utilize Dc by posing: $$P(Y|X,C)=\prod_{c}\prod_{\{x,y\}\in{\mathcal{D}}_{c}}P(y|x,t_{c};\theta),\quad{\mathrm{with~}}t_{c}=h({\mathcal{D}}_{c};\,\phi).$$ $\left(5\right)^2$ P(y|*x, t*c; θ), with tc = h(Dc; ϕ). (5) Here, h(·; ϕ) infers tc *on the fly* when observing Dc, and can indeed also be used during test time on a previous *unknown distribution* with context c ∗time given a set of samples Dc ∗ . We note that the assumption of availability of samples Dc during testing is highly realistic and common, as datasets Dc (comprised of observed inputs sharing a context) typically appear in groups for free also during test time, for example one would never image a single cell in an experiment but a collection under fixed conditions, and the expensive step is to assess their unknown mapping to output labels per group. In our experiments we will show how contexts c can also be persistent objects, such as hospitals having measurement processes for pathology images, that easily allow us to access samples from the set of measurements given context c. There are multiple options available for instantiating the inference model h(·; ϕ) (refer to the Appendix for more details). In the case of ContextViT (Figure 1), the inference model h(·; ϕ) comprises three key components: Mean pooling Given an image x ∈ Dc with its corresponding patch embeddings tpi , a direct way to obtain the context token tc is to aggregate the patch embeddings of all images in Dc through an average pooling operation t mean c = h mean c(Dc) :=1 |Dc| Px∈Dc 1 N Pxpi∈x tpi . The input sequence for the image x would then be represented as [tCLS, tmean c, t1 + pos1, . . . , tN + posN ]. In practice, instead of pooling over the entire Dc, we apply mean pooling over *B ∩ D*c where B is the current batch. Linear transformation with gradients detaching To further enhance the context representation, we introduce a trainable linear transformation to the pooled representation. In order to prevent the patch embeddings from being penalized by the context inference model, we detach their gradients. This results in the expression h linear c(Dc; *b, W*) := b + W · detach(t mean c). Layerwise context conditioning Recent work (Ghiasi et al., 2022) has shown that Transformer features progress from abstract patterns in early layers to concrete objects in later layers. We explore the application of context conditioning beyond the input layer driven by the hypothesis that patch embeddings may not be able to capture higher-level concepts. For the l-th Transformer layer, we use y (l)to denote its output and D (l) c to denote the collection of *hidden* patch embeddings of context c. We propose *layerwise* context conditioning which performs amortized inference over context-token parameters for each layer in the ViT instead of just propagating the input-layer token: $$P(y^{(l)}|y^{(0)}:=x;t_{c})=\prod_{l=1}^{L}P(y^{(l)}|y^{(l-1)};t_{c}^{(l-1)}),\quad\mathrm{with~}t_{c}^{(l)}=h({\mathcal{D}}_{c}^{(l)};\phi^{(l)}).$$ For the $l$-th Transformer layer, we can express the layerwise context conditioning as . We $l$-th 'Transt'. $$[y_{\mathsf{cls}}^{(l)},y_{c}^{(l)},y_{p_{1}}^{(l)},\ldots,y_{p_{N}}^{(l)}]=T^{(l)}([y_{\mathsf{cls}}^{(l-1)},t_{c}^{(l-1)},y_{p_{1}}^{(l-1)},\ldots,y_{p_{N}}^{(l-1)}]).$$ ## 5 Experiments We demonstrate the utilities of ContextViT across a variety of applications. We start with the common supervised fine-tuning setting where we augment three existing pre-trained ViTs with our context inference Table 1: OOD accuracy on iWIldCam and worst-region OOD accuracy on FMoW for supervised fine-tuning (FT). Augmenting existing pre-trained ViTs with our context inference model consistently improves the out-ofdistribution generalization performance. Previous work: FLYP (Goyal et al., 2022), Model Soups (Wortsman et al., 2022), FT ViT (Wortsman et al., 2022; Kumar et al., 2022). | | iWildCam | | FMoW | | | | |-------------------------------|------------|----------|----------|----------|----------|----------| | | DINO | SWAG | CLIP | DINO | SWAG | CLIP | | | ViT-S/8 | ViT-B/16 | ViT-L/14 | ViT-S/8 | ViT-B/16 | ViT-L/14 | | FLYP | - | - | 76.2±0.4 | - | - | - | | Model Soups | - | - | 79.3±0.3 | - | - | 47.6±0.3 | | FT ViT | - | - | 78.3±1.1 | - | - | 49.9 | | Our implementations FT ViT | 75.7±0.2 | 77.5±0.5 | 81.5±0.2 | 35.0±0.2 | 39.8±0.5 | 46.6±1.1 | | FT ContextViT (w/o layerwise) | 77.5±0.2 | 78.6±0.4 | 81.9±0.1 | 37.3±0.6 | 41.3±1.0 | 49.1±0.7 | | FT ContextViT | 77.7±0.3 | 79.6±0.6 | 82.9±0.3 | 37.7±0.5 | 41.4±0.3 | 49.9±0.4 | model (Section 5.1). Next we experiment ContextViT with self-supervised representation learning. We demonstrate the importance of context conditioning for both in-distribution generalization 5.2 and outof-distribution generalization 5.3. We present our ablation study and analysis in Section 5.3. Due to the constraints of space, we kindly direct our readers to the Appendix for the pseudo code implementation details. We will release our code and data for reproducibility. ## 5.1 Fine-Tuning Pre-Trained Vits With Context Conditioning To evaluate robustness over data shifts, we consider two image classification datasets from the WILDS benchmarks (Koh et al., 2021): iWildCam (Beery et al., 2021) and FMoW (Christie et al., 2018). In iWildCam, the task is to classify the animal species (182-way classification) from images taken by different cameeras. We use the camera trap id for context inference and report the testing accuracy on unseen camera traps. In FMoW, the task is to classify the land use of a satellite image (62-way classification). We use the region id for context inference and report the worst-region accuracy for image from the testing time period. We consider three existing pre-trained ViT models, each with a different number of parameters: 1) DINO ViT-S/8 (Caron et al., 2021), which is pre-trained on ImageNet with self-distillation between a teacher and a student network. 2) SWAG ViT-B/16 (Singh et al., 2022), which is pre-trained on IG3.6B using weak supervision (hashtags from Instagram). 3) CLIP ViT-L/14, which is pre-trained on the Multi-modal WebImageText using language-image contrastive learning. Despite their differences in pre-training objectives and datasets, these models share the same ViT backbone. *Therefore we can augment these pre-trained* models with our proposed context conditioning mechanism and fine-tune the combined ContextViT jointly with empirical risk minimization. Table 1 presents our results for supervised fine-tuning. We note that our direct fine-tuning baseline (using CLIP ViT-L/14) outperforms the number reported by Wortsman et al. (2022) on iWildCam (81.5 vs. 78.3) and underperforms the number reported by Kumar et al. (2022) on FMoW (46.6 vs. 49.9). We think this is likely caused by the differences in the implementation details (data augmentation, learning rate scheduling, etc.), and unfortunately we cannot find the configuration online to reproduce the exact numbers. Nevertheless, upon comparing our implementations, we make the following observations: 1) Smaller ViTs exhibit inferior generalization compared to larger ViTs; 2) Incorporating our context inference model consistently enhances the performance of ViTs; 3) Layerwise context inference further enhances generalization. ## 5.2 In-Distribution Generalization With Self-Supervised Learning Microscopy imaging with cell painting has demonstrated its effectiveness in studying the effects of cellular perturbations (Haghighi et al., 2022; Moshkov et al., 2022; Sivanandan et al.). Despite meticulous regulation of Table 2: Accuracy of 160-way gene perturbation classification for three cell plates in JUMP-CP. For each plate, we train a classifier (one-layer MLP) on top of the pre-trained DINO embeddings and evaluate its held out performance. The DINO embeddings are pre-trained on either a single plate BR00116991 (first section) or all three plates combined (second section). | Method | BR00116991 | BR00116993 | BR00117000 | |------------------------------------------------------------------------------|--------------|--------------|--------------| | DINO pre-training on BR00116991 only ViT-S/16 56.5 | | 42.7 | 46.8 | | DINO pre-training on BR00116991 & BR00116993 & BR00117000 ViT-S/16 53.6 43.5 | | 45.6 | | | ContextViT-S/16 | 57.7 | 48.0 | 48.8 | experimental parameters like cell density and exposure time, technical artifacts can still confound measurements from these screens across different batches. Learning a robust cell representation that can effectively handle batch variations remains an ongoing challenge. Dataset We consider three cell plates (BR00116991, BR00116993, BR00117000) from the JUMP-CP dataset released by the JUMP-Cell Painting Consortium (Chandrasekaran et al., 2022). Each plate consists of 384 wells with perturbations (either via a chemical compound or a crispr guide) targeted at 160 different genes. Our input data consists of single-cell images: 229, 228 images for BR00116991, 226, 311 images for BR00116993 and 239, 347 images for BR00117000. We note that here each single-cell image has dimension 224 × 224 × 8 (5 fluorescence channels and 3 brightfield channels). For each plate, we split the images randomly into training (40%), validation (10%) and testing (50%). We use the plate id for context inference. Results For each model, we use DINO (Caron et al., 2021) to pre-train the ViT and ContextViT from scratch for 100 epochs. Once pre-training is completed, we freeze the backbone parameters and attach an MLP (with one hidden ReLU layer of dimension 384) to predict the targeted gene of the cell. Given the unique nature of each plate, we train an individual MLP classifier for **each separate plate** using the training split of that specific plate, and subsequently report the corresponding testing accuracy in Table 2. We note that the representation learned by DINO on a single plate (BR00116991) exhibits poor generalization across plates (42.7 for BR00116993 and 46.8 for BR00117000). Moreover, directly combining all three plates for pre-training results in a degradation of the model's performance: −2.9 for BR00116991, +0.8 for BR00116993 and −1.2 for BR00117000. In contrast, by utilizing ContextViT, the model effectively accounts for batch effects through context conditioning during the pre-training stage, resulting in superior performance across all three plates. ## 5.3 Out-Of-Distribution Generalization With Self-Supervised Learning In medical applications, models are often trained using data from a limited number of hospitals with the intention of deploying them across other hospitals more broadly (Yala et al., 2019; 2021; Bandi et al., 2018). However, this presents a challenge for the generalization of out-of-distribution data. In this section, we aim to evaluate the ability of self-supervised learning with ContextViT to achieve better out-of-distribution generalization. Dataset We consider the Camelyon17-WILDS benchmark (Bandi et al., 2018; Sagawa et al.). The dataset contains 455, 954 labeled and 2, 999, 307 unlabeled pathology images across five hospitals (3 for training, 1 for validation and 1 for testing). Given a 96 × 96 × 3 image, the task is to predict whether the image contains any tumor tissue. *We use the hospital id for context inference.* Results We utilize DINO to pre-train our ViT and ContextViT models from scratch on unlabeled pathology images. Unlike previous work ((Sagawa et al., 2021)) that incorporates both in-distribution and out-ofdistribution unlabeled data (Unlabeled ID&OOD), *we exclusively use images from the three training hospitals,* Table 3: Accuracy of linear probing of pre-trained DINO embeddings on Camelyon17-WILDS. We use †to denote our implementations. Unlabeled ID&OOD denotes unlabeled pathology images from all hospitals, while Unlabeled ID denotes unlabeled images from the three training hospitals. DenseNet121 results are adopted from Koh et al. (2021); Sagawa et al. (2021). The results of ViT-B/16 and ViT-L/14 are adopted from Kumar et al. (2022). | | | Pre-training | In-distribution | OOD | | |------------------------------------------------------|------------|---------------------|-------------------|----------|----------| | Backbone | Size | Pre-training Method | Dataset | Accuracy | Accuracy | | Fine-tuning all parameters DenseNet121 7.9M | Supervised | ImageNet1K | 90.6 | 82.0 | | | DenseNet121 | 7.9M | SwaV | Unlabeled ID&OOD | 92.3 | 91.4 | | ViT-B/16 | 86M | DINO | ImageNet | - | 90.6 | | ViT-L/14 | 303M | CLIP | WebImageText | 95.2 | 96.5 | | Linear probing on frozen backbone ViT-L/14 303M CLIP | | WebImageText | - | 92.6 | | | ViT-S/8† | 21.7M | DINO | Unlabeled ID | 98.9 | 93.8 | | ContextViT-S/8† | 21.8M | DINO | Unlabeled ID | 98.9 | 97.5 | | | DINO Pre-training | | |---------------------------------------------------|-------------------------|-----------| | Context Inference Method | Linear Probing OOD Acc. | Time Cost | | No context (ViT-S/8 baseline) | 93.8 | 21.5h | | Mean patch embeddings | 94.1 | 22.6h | | Mean patch embeddings + linear | 94.4 | 23.5h | | Mean detached patch embeddings + linear | 97.2 | 23.2h | | Layerwise mean detached patch embeddings + linear | 97.5 | 29.9h | | Deep sets over patch embeddings | 92.5 | 30.0h | | Deep sets over detached patch embeddings | 94.9 | 29.1h | Table 4: Ablation study and pre-training time cost (on a server with 8 A100 GPUs) of different context inference models. For each method, we pre-train the corresponding ContextViT on Camelyon17-WILDS (Unlabeled ID) from scratch using DINO for 100 epochs. We report the out-of-distribution accuracy of linear probing on the official OOD testing split. denoted as "Unlabeled ID", during pre-training stage to prevent any potential information leakage from out-of-distribution data. Given the 96 by 96 input resolution of our dataset, we opt for a smaller patch size (8 instead of 16) for the ViT models. Once pre-training is complete, we freeze the ViT parameters and train a linear classifier, **shared across all training hospitals**, with SGD to predict the target label, the presence of tumors. We report the testing accuracy on the out-of-distribution hospitals. Table 3 presents a comparison of our linear probing results (marked with†) with other published results on the Camelyon17 benchmark. Our DINO pre-trained ViT-S/8 baseline outperforms the much larger CLIP model in terms of linear probing (93.8 vs. 92.6), which is not surprising given the ViT-S/8 has been pre-trained on the same pathology domain. Next, we see that by conditioning our ViT-S/8 representation with other images from the same hospital within the testing batch, ContextViT-S/8 achieves an OOD accuracy of 97.5%, significantly outperforming all baselines. Moreover, Figure 3 demonstrates that the linear classifier built upon ContextViT-S/8 continues to enhance its OOD performance as the training of the linear classifier progresses, while the one built upon ViT-S/8 exhibits signs of over-fitting to the training data. Ablation study on the context inference model Due to space constraints, we have deferred additional analyses to the Appendix. In Table 4, we perform an ablation study of our context inference model h(·; ϕ). We observe that utilizing the mean patch embeddings as context (94.1) and incorporating an additional linear transformation (94.4) result in improved performance compared to the no context baseline (93.8). 9 ![9_image_1.png](9_image_1.png) Figure 3: Learning curves of linear probing on Camelyon17. Unlike ViT-S/8, the OOD accuracy on top of ContextViT-S/8 steadily improves as the training of the linear classifier progresses. ![9_image_0.png](9_image_0.png) Figure 4: On Camelyon17, we observe that even when we use a testing batch size of 1 (computing the context by aggregating 144 8 x 8 patches from a single testing image into a single context token), ContextViT-S/8 still significantly outperforms ViTS/8. ![9_image_2.png](9_image_2.png) Figure 5: Left: PCA visualization of context tokens inferred by ContextViT using pathology images from different hospitals. Right: Example images from different hospitals. Notably, we find that detaching the gradients when calculating the context token from the patch embeddings is crucial, leading to a performance boost from 94.4 to 97.2. Furthermore, the application of layerwise context conditioning further enhance the performance. We also experimented with other set representation models for context inference, including deep sets (Zaheer et al., 2017) (see Appendix for details). Although deep sets (with detached patch embeddings) outperformed the baseline (94.9 vs. 93.8), it still fell short of the simpler approach of mean pooling with linear transformation. Our hypothesis is that the Transformer block's self-attention mechanism, which utilizes the same key, query, and value transformations across the sequence, results in the mean pooling with linear transformation producing a context embedding that is more comparable to the original patch sequence. ![10_image_0.png](10_image_0.png) Figure 6: Attention heatmap of ContextViT on patholog images from in-distribution hospitals (top) and OOD hospitals (bottom) in Camelyon17. Time efficiency One limitation of ContextViT is the extra time cost for the context conditioning. In Table 4, we present the DINO pre-training time cost for different context inference methods. We observe that adding the non-layerwise context conditioning increases the baseline DINO pre-training time by 5%-9%. Applying layerwise context conditioning further increases the time cost by 29%. Sensitivity to testing batch size During the inference phase, ContextViT-S/8 takes a random batch of testing examples (512 in all previous experiments) and groups them based on their group membership, such as the hospital ID in the case of Camelyon17, to infer the corresponding context tokens. In Figure 4, we present a visualization of ContextViT's linear probing performance sensitivity across different testing batch sizes. Remarkably, even with a testing batch size of 1, where ContextViT-S/8 leverages patches from a single testing image (144 patches) to establish its context, it still outperforms our ViT-S/8 baseline by a significant margin (97.0 vs. 93.8). It's worth noting that after DINO pre-training, ContextViT-S/8 acquires the ability to condition its output features with respect to the context. As highlighted by Gao et al., pathology images from different hospitals exhibit distinct color distributions. The exceptional performance of ContextViT-S/8 with a small testing batch size demonstrates the model's capability to automatically estimate the color distribution shift from a few representative images during test time. Attention maps from multiple heads In line with the findings of Caron et al. (2021), which demonstrate that DINO can learn class-specific features leading to unsupervised object segmentations, we visualize the attention heatmaps for different heads learned with ContextViT in the Camelyon17 dataset. Figure 6 illustrates these attention maps, showcasing that the learned attentions are focused on meaningful aspects for both in-distribution data and out-of-distribution data. Furthermore, different attention heads exhibit distinct preferences for different cell types. For instance, head-2 primarily focuses on the contour of the cells, while head-3 directs its attention to the actual cells themselves. Visualizing the context tokens We employ PCA to visualize the learned context tokens for each hospital, and the results are presented in Figure 5. One approach to visualizing the context tokens is by inferring them from all examples belonging to each hospital, resulting in five unique context tokens. However, in practice, we infer the context token on-the-fly for the current mini-batch. Using a batch size of 256, we sample 300 batches for each hospital. Remarkably, the inferred context tokens for each hospital exhibit high similarity, appearing closely clustered together. Additionally, we include example images for each hospital on the right side of Figure 5. Notably, the distances between context tokens from different hospitals align well with their visual dissimilarities. For instance, hospital 3 (highlighted in red) is closer to hospital 1 (highlighted in orange) than it is to hospital 4 (highlighted in purple). ## 6 Conclusion We have presented Contextual Vision Transformers, a method that addresses challenges posed by structured variations and covariate shifts in image datasets. By leveraging the concept of in-context learning and introducing context tokens and token inference models, ContextViT enables robust feature representation learning across groups with shared characteristics. Through extensive experimental evaluations across diverse applications, ContextViT consistently demonstrates its utility compared to standard ViT models in terms of out-of-distribution generalization and resilience to batch effects. This success highlights the power and versatility of ContextViT when handling structured variations in real-world applications, where invariance to such nuisances may be desirable. ## References Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *stat*, 1050:27, 2020. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint* arXiv:1607.06450, 2016. Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge. IEEE transactions on medical imaging, 38(2):550–560, 2018. Sara Beery, Arushi Agarwal, Elijah Cole, and Vighnesh Birodkar. The iwildcam 2021 competition dataset. arXiv preprint arXiv:2105.03494, 2021. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. Alexander Buslaev, Vladimir I. Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A. Kalinin. Albumentations: Fast and flexible image augmentations. *Information*, 11(2), 2020. ISSN 2078-2489. doi: 10.3390/info11020125. URL https://www.mdpi.com/2078-2489/11/2/125. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9650–9660, 2021. Srinivas Niranj Chandrasekaran, Beth A Cimini, Amy Goodale, Lisa Miller, Maria Kost-Alimova, Nasim Jamali, John Doench, Briana Fritchman, Adam Skepner, Michelle Melanson, et al. Three million images and morphological profiles of cells treated with matched chemical and genetic perturbations. *bioRxiv*, pp. 2022–01, 2022. Hila Chefer, Idan Schwartz, and Lior Wolf. Optimizing relevance maps of vision transformers improves robustness. *Advances in Neural Information Processing Systems*, 35:33618–33632, 2022. Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6172–6180, 2018. Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. *arXiv preprint arXiv:1708.04552*, 2017. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*. Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, and Percy Liang. Out-of-distribution robustness via targeted augmentations. In NeurIPS 2022 Workshop on Distribution Shifts: Connecting Methods and Applications. Amin Ghiasi, Hamid Kazemi, Eitan Borgnia, Steven Reich, Manli Shu, Micah Goldblum, Andrew Gordon Wilson, and Tom Goldstein. What do vision transformers learn? a visual exploration. *arXiv preprint* arXiv:2212.06727, 2022. Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan. Finetune like you pretrain: Improved finetuning of zero-shot vision models. *arXiv preprint arXiv:2212.00638*, 2022. Marzieh Haghighi, Juan C Caicedo, Beth A Cimini, Anne E Carpenter, and Shantanu Singh. High-dimensional gene expression and morphology profiles of cells across 28,000 genetic and chemical perturbations. *Nature* Methods, pp. 1–8, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In *International Conference on Learning Representations*. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019. Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In *European Conference on Computer Vision*, pp. 709–727. Springer, 2022. Aakash Kaku, Sreyas Mohan, Avinash Parnandi, Heidi Schambra, and Carlos Fernandez-Granda. Be like water: Robustness to extraneous variables via adaptive feature normalization. *arXiv preprint arXiv:2002.04019*, 2020. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, pp. 5637–5664. PMLR, 2021. Ananya Kumar, Ruoqi Shen, Sébastien Bubeck, and Suriya Gunasekar. How to fine-tune vision models with sgd. *arXiv preprint arXiv:2211.09359*, 2022. Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv preprint* arXiv:2101.00190, 2021. Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. *arXiv preprint arXiv:1603.04779*, 2016. Yanjie Li, Shoukui Zhang, Zhicheng Wang, Sen Yang, Wankou Yang, Shu-Tao Xia, and Erjin Zhou. Tokenpose: Learning keypoint tokens for human pose estimation. In Proceedings of the IEEE/CVF International conference on computer vision, pp. 11313–11322, 2021. Ziyue Li, Kan Ren, Xinyang Jiang, Yifei Shen, Haipeng Zhang, and Dongsheng Li. Simple: Specialized model-sample matching for domain generalization. In *The Eleventh International Conference on Learning* Representations, 2022. Alexander Lin and Alex Lu. Incorporating knowledge of plates in batch normalization improves generalization of deep learning for microscopy images. In *Machine Learning in Computational Biology*, pp. 74–93. PMLR, 2022. Yuejiang Liu, Parth Kothari, Bastien Van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, and Alexandre Alahi. Ttt++: When does self-supervised test-time training fail or thrive? *Advances in Neural Information* Processing Systems, 34:21808–21820, 2021. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *International Conference on* Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7. Kaleel Mahmood, Rigel Mahmood, and Marten Van Dijk. On the robustness of vision transformers to adversarial examples. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 7838–7847, 2021. Xiaofeng Mao, Yuefeng Chen, Xiaojun Jia, Rong Zhang, Hui Xue, and Zhao Li. Context-aware robust fine-tuning. *arXiv preprint arXiv:2211.16175*, 2022a. Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. Towards robust vision transformer. In *Proceedings of the IEEE/CVF conference on Computer Vision and* Pattern Recognition, pp. 12042–12051, 2022b. Nikita Moshkov, Michael Bornholdt, Santiago Benoit, Matthew Smith, Claire McQuin, Allen Goodman, Rebecca A Senft, Yu Han, Mehrtash Babadi, Peter Horvath, et al. Learning representations for image-based profiling of perturbations. *bioRxiv*, pp. 2022–08, 2022. Zachary Nado, Shreyas Padhy, D Sculley, Alexander D'Amour, Balaji Lakshminarayanan, and Jasper Snoek. Evaluating prediction-time batch normalization for robustness under covariate shift. *arXiv preprint* arXiv:2006.10963, 2020. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021. Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, et al. Extending the wilds benchmark for unsupervised adaptation. In *International Conference on Learning Representations*. Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks. In *International Conference on Learning Representations*, 2019. Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, et al. Extending the wilds benchmark for unsupervised adaptation. *arXiv preprint arXiv:2112.05090*, 2021. Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. Advances in Neural Information Processing Systems, 33:11539–11551, 2020. Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 804–814, 2022. Srinivasan Sivanandan, Max Salick, Bobby Leitmann, Kara Marie Liu, Mohammad Sultan, Navpreet Ranu, Cynthia Vivian Hao, Owen Chen, John Bisognano, Eric Lubeck, et al. Machine learning enabled pooled optical screening in human lung cancer cells. In NeurIPS 2022 Workshop on Learning Meaningful Representations of Life. Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. Test-time training with self-supervision for generalization under distribution shifts. In *International conference on machine* learning, pp. 9229–9248. PMLR, 2020. Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 648–656, 2015. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Julia White, Noah Goodman, and Robert Hawkins. Mixed-effects transformers for hierarchical adaptation, 2022. Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning, pp. 23965–23998. PMLR, 2022. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit bayesian inference, 2022. Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, and Dan Xu. Multi-class token transformer for weakly supervised semantic segmentation. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 4310–4319, 2022. Adam Yala, Constance Lehman, Tal Schuster, Tally Portnoi, and Regina Barzilay. A deep learning mammography-based model for improved breast cancer risk prediction. *Radiology*, 292(1):60–66, 2019. Adam Yala, Peter G Mikhael, Fredrik Strand, Gigin Lin, Kevin Smith, Yung-Liang Wan, Leslie Lamb, Kevin Hughes, Constance Lehman, and Regina Barzilay. Toward robust mammography-based models for breast cancer risk. *Science Translational Medicine*, 13(578):eaba4373, 2021. Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. Imagenet training in minutes. In Proceedings of the 47th International Conference on Parallel Processing, pp. 1–10, 2018. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in neural information processing systems*, 30, 2017. Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk minimization: Learning to adapt to domain shift. *Advances in Neural Information Processing* Systems, 34:23664–23678, 2021. Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. *Advances in Neural Information Processing Systems*, 35:38629–38642, 2022. Xin Zhang, Shixiang Shane Gu, Yutaka Matsuo, and Yusuke Iwasawa. Domain prompt learning for efficiently adapting clip to unseen domains. *Transactions of the Japanese Society for Artificial Intelligence*, 38(6): B–MC2_1, 2023. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. *International Journal of Computer Vision*, 130(9):2337–2348, 2022. ## Appendix | A | Implementation details on supervised fine-tuning | 17 | | |-----------------------|--------------------------------------------------------------|------|----| | A.1 | Pseudo code | 17 | | | A.2 | Dataset details | | 18 | | A.3 | Data augmentation | 18 | | | A.4 | Optimization | | 18 | | B | Self-supervised learning on microscopy cell imaging: JUMP-CP | 18 | | | B.1 | DINO pre-training: data augmentation | 18 | | | B.2 | DINO pre-training: optimization | | 19 | | B.3 | Downstream classifier | | 19 | | C | Self-supervised learning on histopathology: Camelyon17-WILDS | 19 | | | C.1 | DINO pre-training: data augmentation | 19 | | | C.2 | DINO pre-training: optimization | | 20 | | C.3 | Linear probing | | 20 | | D Additional analysis | 20 | | | | D.1 | Practical Considerations For Batches with Many Groups | 20 | | | D.2 | Exploring Alternative Context Inference Models | 21 | | ## A Implementation Details On Supervised Fine-Tuning A.1 Pseudo Code The integration of ContextViT with existing pre-trained ViT models is straightforward. As illustrated in Figure 7, we provide a PyTorch-style pseudo code to outline the process. ![16_image_0.png](16_image_0.png) 5 11 # Output: ![16_image_1.png](16_image_1.png) 24 28 31 34 **return** y 35 36 ![16_image_2.png](16_image_2.png) 39 # Arguments: 43 # Output: 45 48 51 56 59 Figure 7: PyTorch-style pseudo code for ContextViT. ## A.2 Dataset Details To evaluate robustness over data shifts, we consider two image classification datasets from WILDS (Koh et al., 2021): iWildCam (Beery et al., 2021) and FMoW (Christie et al., 2018). iWildCam consists of 203,029 labeled images captured different camera traps. The task is to classify the animal species among 182 possible options. We use the official data split. At test time, we measure the average classification accuracy on unseen camera traps to assess out-of-distribution generalization. We use the camera trap id for the context inference. FMoW includes 141,696 satellite images labeled for 62 different land use categories, such as shopping malls. The images are also tagged with the year they were taken and their geographical region. We adopt the standard split, training our model on images from 2002-2013 and validating it on images from 2013-2016. We evaluate the model's *worst-region accuracy* on out-of-distribution images (from 2016-2018) during testing. We use the region id for the context inference. ## A.3 Data Augmentation We apply standard data augmentations during supervised fine-tuning for both ContextViTs and ViTs. These augmentations include random resized cropping, random horizontal flipping, and random color jittering. For the iWildcam dataset, we set the crop scale to be [0.08, 1.0], the horizontal flip probability is set to 0.5, and color jitter is applied with a maximum brightness change of 0.4, maximum contrast change of 0.4, maximum saturation change of 0.4, and maximum hue change of 0.1, with a probability of 0.8. In the FMoW dataset, objects typically only appear in a small part of the image, so aggressive cropping may introduce noise. Therefore, we set the crop scale to be [0.8, 1.0] and reduce the probability of color jittering to 0.1. ## A.4 Optimization We conduct fine-tuning of both ViTs and ContextViTs with a batch size of 256 for a total of 20 epochs. For optimization, we follow previous work Wortsman et al. (2022) and utilize the AdamW optimizer Loshchilov & Hutter (2019). To ensure a stable training process, we perform a two-epoch warm-up for the optimizer before applying a cosine annealing scheduler to decay the learning rate. We tune the learning rate within the range of [10−4, 10−5, 10−6]. Weight decay is applied to the model, excluding the bias parameters. Inspired by Caron et al. (2021), we set the weight decay to 0.04 at the beginning and gradually increase it using a cosine scheduler to reach 0.4 at the end of training. We fine-tune each model for 20 epochs with a batch size of 256. We perform model selection based on the accuracy on the official validation split and report the mean and standard deviation across five runs. ## B Self-Supervised Learning On Microscopy Cell Imaging: Jump-Cp B.1 Dino Pre-Training: Data Augmentation Each single-cell image in the JUMP-CP dataset has dimensions of 224 by 224 by 8. Since the images are consistently captured at a fixed magnification ratio, we have refrained from applying random scaling, as it could hinder the model's ability to infer the absolute size of the cells accurately. Instead, we have employed the following data augmentations: random padded cropping, horizontal and vertical flipping, rotation (with angles of 90, 180, and 270 degrees), defocus Hendrycks & Dietterich, coarse dropout DeVries & Taylor (2017), and input channel dropout Tompson et al. (2015). To facilitate these augmentations, we have utilized the Albumentations package Buslaev et al. (2020), which supports an arbitrary number of channels. During DINO pre-training, the teacher network receives two global views of the image, while the student network receives six local views. Here, we provide an explanation of the data augmentation configurations for both the global and local views. For the teacher network in DINO, the two global views are generated as follows: starting with the original single-cell image, we first apply random padding to expand the image to 256 × 256 and subsequently crop it to 224 × 224. We apply flipping and rotating transformations uniformly at random. In one view, we apply defocus with a radius range of (1, 3), while in the other view, the defocus radius range is (1, 5). Additionally, we apply coarse dropout, allowing for a maximum of 10 cutout holes, where each hole has a maximum dimension of 10 by 10. However, we do not apply input channel dropout for the global views. For the student network, which receives eight local views, we follow a similar process as with the global views. Starting with the original image, we apply random padding to expand it to 256 × 256 and then crop it to 96 × 96. We apply flipping and rotating transformations uniformly at random, and we use the same defocus radius range of (1, 3) for all six local views. Instead of coarse dropout, we randomly dropout the input channels with a probability of 0.2. ## B.2 Dino Pre-Training: Optimization Our optimization configuration for the DINO pre-training stage closely follows the guidelines provided in the Caron et al. (2021) GitHub repository. We utilize a batch size of 512 and train DINO for a total of 100 epochs. The key aspects of our configuration are as follows: - We employ the AdamW optimizer Loshchilov & Hutter (2019) and initiate the learning rate warm-up phase for the first 10 epochs. Considering our batch size, we set the maximum learning rate to 0.001, following recommendations from You et al. (2018); Caron et al. (2021). Subsequently, we decay the learning rate using a cosine learning rate scheduler. Our target learning rate at the end of optimization is set to 10−6. - Weight decay is applied to all parameters except for the biases. We set the initial weight decay to 0.04 and gradually increase it to 0.4 using a cosine learning rate scheduler towards the end of training. - The DINO projection head we utilize has 65536 dimensions, and we do not employ batch normalization in the projection head. - The output temperature of the teacher network is initially set to 0.04 and is linearly increased to 0.07 within the first 30 epochs. Throughout the remainder of training, the temperature is maintained at 0.07. Additionally, during the first epoch, we freeze the parameters of the output layer to enhance training stability. ## B.3 Downstream Classifier Based on our preliminary study on BR00116991 using ViT-S/16, we found that a multi-layer perceptron (MLP) with two hidden layers outperforms the linear classifier (53.6% accuracy vs. 10.4% accuracy). Our final MLP architecture consists of two hidden layers with ReLU activations, and each hidden layer has a dimension of 512. To optimize the parameters of the MLP, we employ the Adam optimizer Kingma & Ba (2014) with a batch size of 256, and we train the model for a maximum of 100 epochs. A weight decay of 10−5 is applied, and we fine-tune the learning rate within the range of ∈ [10−3, 10−4, 10−5] to find the optimal value. ## C Self-Supervised Learning On Histopathology: Camelyon17-Wilds C.1 Dino Pre-Training: Data Augmentation The Camelyon17-WILDS dataset is a patch-based variant of the Camelyon17 dataset, where each pathology image has a dimension of 96 by 96. To enable finer-grained reasoning in ViTs, we use a smaller patch size of 8 by 8. The images are stored in the RGB pixel format, consisting of 3 bytes per pixel. For pre-training the DINO embeddings, we apply standard data augmentations, as done in previous work Caron et al. (2021). These augmentations include random resizing and cropping, horizontal flipping, random color jittering, random grayscale transformation, Gaussian blur, and solarization. Similar to the JUMP-CP dataset, the teacher network receives two global views of the image, while the student network receives six local views. Here, we provide an explanation of the data augmentation configurations for both the global and local views. For the teacher network, the two global views are generated as follows: starting with the original pathology image, we first randomly crop the image with a scale sampled from the range of [0.4, 1.0]. Then, we resize the cropped image back to its original size of 96 by 96 using a bicubic transformation. A random horizontal flipping is applied with a probability of 0.5. Color jittering is performed with maximum brightness change of 0.4, maximum contrast change of 0.4, maximum saturation change of 0.4, and maximum hue change of 0.1, with a probability of 0.8. The image is transformed into grayscale with a probability of 0.2. Gaussian blur is applied using a kernel size of 1.0. Finally, solarization is applied to one of the global views with a threshold of 0.2. For the student network, which receives six local views, a similar process is followed with the following changes: 1) The random cropping scale is adjusted to [0.05, 0.4], ensuring that the student network observes only local views of the original image. 2) After cropping, the image is rescaled to a lower resolution of 32 by 32 instead of 96 by 96. 3) Gaussian blur is applied using a larger kernel size of 5. 4) Solarization is not applied to the local views. ## C.2 Dino Pre-Training: Optimization For the Camelyon17 dataset, we employ the same optimization configuration as for the JUMP-CP dataset during DINO pre-training. This includes training with a batch size of 512 for 100 epochs, utilizing the AdamW optimizer with a cosine learning rate scheduler, and applying weight decay, among other techniques that we have discussed in Section B.2. ## C.3 Linear Probing To evaluate the resulting DINO embeddings, we utilize the standard linear probing test, which involves training a linear classifier while keeping the ViT backbones frozen. Consistent with Caron et al. (2021), we employ the following data augmentation and optimization methods: For data augmentation, we first apply random cropping with a scale range of [0.08, 1.0] and subsequently resize the resulting image to a resolution of 96 by 96. Additionally, we incorporate horizontal flipping with a probability of 0.5. Moreover, we apply color jittering with a probability of 0.8, where the maximum allowed changes include a brightness change of 0.4, a contrast change of 0.4, a saturation change of 0.4, and a hue change of 0.1. In terms of optimization, we train the linear classifier for 100 epochs using a batch size of 512. Since the only trainable parameters are in the final linear layer, we do not employ warmup or weight decay techniques. To optimize the classifier, we employ SGD with a momentum value of 0.9. The learning rate is selected from the range [0.0005, 0.001, 0.005], and we utilize a cosine annealing scheduler to decay the learning rate gradually throughout the training process. ## D Additional Analysis D.1 Practical Considerations For Batches With Many Groups Context-based data sampler In all of our experimental settings, we have utilized the standard random sampler, which selects examples uniformly at random from the dataset, for simplicity. However, when dealing with covariates that can take a large number of distinct values, an alternative approach to enhance the performance of direct mean pooling is to employ a context-based data sampler. By sampling a batch comprising examples with the same group membership, we can ensure an adequate number of instances for estimating the context token accurately. Adopting a context-based sampler also offers the advantage of eliminating the need for grouping examples during the forward pass, as the sampler guarantees that all samples within a batch belong to the same group. This can potentially lead to further improvements in | iWildCam | | FMoW | | | | | |-------------------------------|-------------------|----------|-------------------|----------|----------|----------| | DINO | SWAG | CLIP | DINO | SWAG | CLIP | | | ViT-S/8 | ViT-B/16 ViT-L/14 | ViT-S/8 | ViT-B/16 ViT-L/14 | | | | | Our implementations FT ViT | 75.7±0.2 | 77.5±0.5 | 81.5±0.2 | 35.0±0.2 | 39.8±0.5 | 46.6±1.1 | | FT ViT in-context learning | 76.3±0.6 | 77.6±1.1 | 81.7±0.4 | 35.1±0.6 | 38.9±0.7 | 46.3±1.4 | | FT ContextViT (w/o layerwise) | 77.5±0.2 | 78.6±0.4 | 81.9±0.1 | 37.3±0.6 | 41.3±1.0 | 49.1±0.7 | | FT ContextViT | 77.7±0.3 | 79.6±0.6 | 82.9±0.3 | 37.7±0.5 | 41.4±0.3 | 49.9±0.4 | Table 5: OOD accuracy on iWildCam and worst-region OOD accuracy on FMoW for supervised fine-tuning (FT). In "VIT in-context learning", we append 256 random patches, sampled from images within the same group, to the input patch sequence. computational efficiency. It is worth noting that to prevent divergence across different groups, gradient accumulation across batches may be necessary. ## D.2 Exploring Alternative Context Inference Models The context inference model h(·; ϕ) in our study serves as a mapping from a set to a vector representation. In the paper, we propose a simple approach that employs mean pooling followed by a linear transformation. However, there are other options available for parameterizing the context inference model. Here, we discuss additional approaches that can be considered. In-context learning by sampling patches from other images in the group As discussed in Section 4.1, one direct way to incorporate context conditioning is by appending image patches from other images within the same group into the patch sequence of the current image. However, this would result in an excessively long sequence length. Since self-attention scales quadratically (in terms of memory and compute) with respect to the sequence length, this approach is not feasible. To overcome this limitation, an alternative is to sample a fixed number of patches from these images and utilize them as the context. We explored this baseline approach by sampling 256 patches for the context and present its supervised fine-tuning results in Table 5. The results show mixed performance, likely due to the randomness in the context conditioning. It outperforms the ViT baseline on iWildCam but underperforms ViTs on FMoW. Furthermore, even this sampling approach significantly increases memory consumption compared to ContextViT. For instance, conditioning the CLIP model with 196 patches leads to a 49% increase in GPU memory size. Deep sets Deep sets Zaheer et al. (2017) offers a framework for dealing with objective functions defined on sets that are permutation-invariant. In our application, for each patch token embedding tpi , it utilizes an encoder network φ(tpi ) to encode it. Then, it aggregates the embeddings of all patches belonging to the same group into a fixed-length vector using either sum pooling or max pooling. Finally, another network ρ processes the aggregated representation to generate the final output. In Section 4.4, we conducted experiments with a specific instance of the deep sets model, where we employed two multi-layer perceptrons (each with two hidden layers and ReLU activations) as the φ and ρ networks. Additionally, we incorporated residual connections for each hidden layer. We utilized the sum pooling mechanism to aggregate information across the set. As shown in Table 4, while this method exhibits increased representation power compared to mean pooling with a linear transformation, the latter demonstrates better generalization capabilities.
Review 1: Summary: The manuscript proposes a model named ContextViT, which augments the Vision Transformer using a context token in order to handle distribution shifts. Distribution shifts in this case are changes in the image capturing setup, say when using the imaging equipment in a different unseen hospital, or a different cell plate in the JUMP-CP dataset. The manuscript implements this model in the fine-tuning stage in section 5.1 and in the pre-training stage in self-supervised learning in sections 5.2 and 5.3. In each case the method is compared against published works and ablations. The manuscript claims that ContextViT is better than training on all available data and hoping that vanilla ViT generalizes out-of-distribution and that ContextViT is able to retain or even improve in-domain performance while generalizing out of distribution. Design choices such as detaching gradients in computing the context token and the effect of test batch size are ablated. Strengths and Weaknesses: ### Strengths ========= This manuscript tackles the problem of out of distribution generalization using a novel method that can be applied in settings where this distribution change can be attributed to specific changes in the imaging setup. This test time input is often available in practical applications and is, to the best of my knowledge, a relatively under-explored setting. The current trend is, on the other hand, to scale the training dataset and train foundation models to solve this problem but with considerable expense in terms of training and inference compute and data collection costs. The experiments cover a range of benchmarks and experimental settings: fine-tuning, self-supervised, Camelyon-17, Cell plates. This domain is, however, outside my expertise so my review is based on the technical contribution only. ### Weaknesses ============ In Table 1, FT ViT, FT ContextViT (w/o layerwise), and FT ContextViT are compared. However, either the number of parameters or training time (wall clock time) needs to be controlled for this comparison to be rigorous. In page 7, section 5.1, the first observation "Smaller ViTs exhibit inferior generalization compared to larger ViTs" involves two changes at the same time. While the ViT model size is changing in this comparison, the pre-training method (DINO vs SWAG vs CLIP) is also changing simultaneously making it impossible to disentangle whether the performance delta is coming from architecture changes or from changes in the pre-training method. Table 4, page 9, reports the pre-training time cost but leaves the following question unanswered: If the ablation models were trained for longer so as to match the pre-training time cost of ContextViT (29.9h), would they match the performance of ContextViT? Ideally this table should be a plot with pre-training time cost on the x-axis and linear probing OOD accuracy on the y-axis. A couple of models "No context" and "mean detached patch embeddings + linear" should be represented using lines that convey what happens when they are trained for longer so that one can compare these models and pick the one with the best trade-offs. My guess is that "layerwise" adds too much pre-training time cost which is better invested in a longer pre-training schedule for the non-layerwise ablation. The oracle context model, which uses say a 1-hot embedding for the hospital ID or cell plate ID, is not practically useful but its evaluation could add to the manuscript. Does the oracle context ViT improve performance in-domain over the no-context baseline? What is the in-domain performance gap between ContextViT and oracle ContextViT? Requested Changes: A discussion about the weaknesses listed above would be very helpful with new experiments as needed. Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: This paper introduces ContextViT for out-of-distribution (OOD) learning. The main idea is to introduce extra context tokens on pre-trained ViT when applying to different datasets. It introduces several approaches for obtaining the context tokens. The best choice is the proposed layer-wise context conditioning. Extensive experiments show that ContextViT is beneficial for OOD tasks. Strengths and Weaknesses: Strengths: - The idea of leveraging in-context learning and context tokens for out-of-distribution (OOD) learning in vision transformers is novel and intriguing. - The proposed mechanism for obtaining and utilizing context tokens is innovative and well-designed. - The paper is well-written and easy to follow. - The experimental evaluation is comprehensive, with extensive experiments and insightful ablation studies that provide a thorough understanding of the proposed approach's performance and behavior. Weaknesses: - While the authors have compared ContextViT with a fine-tuning baseline, a more comprehensive comparison with other existing OOD methods is lacking. - The introduction of context tokens and context inference models inevitably adds computational overhead, as acknowledged by the authors. A batch size of 512 during testing can significantly slow down the process. However, a direct comparison with other OOD methods in terms of accuracy-computation trade-offs is currently lacking. Requested Changes: The main requested change is adding extra comparison with other methods and demonstrating that the proposed method is beneficial in terms of accuracy or computation cost. Minor: Some typos need to be fixed, such as "cameeras" in Sec. 5.1. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper introduces ContextViT, that aims to learn robust ViT representations wrt distribution shifts in a dataset. In particular, it learns an additional context token that provides additional grouping caused by distribution shifts within the dataset. The learnt context token is generated on-the-fly and generalises well for unknown distribution at test time. Extensive experiments are performed to showcase the effectiveness of learning additional context token for grouping images, and improved robutstness on OOD distributions. Strengths and Weaknesses: *Strengths* 1. The paper introduces novel approach for robust representation learning based on learning context tokens that groups the input patches based on distribution shifts. 2. The learnt context tokens are generated based on input patches and thus can also generalise to OOD distributions. 3. The paper is nicely written with extensive experiments to showcase the effectiveness of the proposed approach. *Weaknesses* 1. How grouping is performed with context tokens: Thanks for providing the pseudo code in appendix, which makes it easier to understand the approach. The network takes covariate ‘c’ as input for training which I assume are the ‘tags’ that could separate the patches of different distributions during pre-training. My concern is regarding inference time, how do we obtain the ‘c’ as we don’t know the distribution tag ? Or the context token is removed during inference ? More information should be provided on this, and also the details on grouping (how to obtain covariate ‘c’) should be in the main paper. 2. Explicit training on Context Inference Model: Currently, there is no explicit training on Context Inference model, however since we have the labels during pre-training, how does explicit training on the inference model perform. Is there any reason that this should not be performed ? 3. What does context token look at ? It would be nice to show the attention map for the context token, as what regions it focuses on, and how it is different for different distributions. 4. Results on other dataset - It would be interesting to see the result on popular datasets like ImageNet-C or CIFAR-100-C, where the corruptions can be the distribution shifts. Such result will make the work stronger. Requested Changes: Overall I liked the work. I propose following additional changes/clarifications wrt weaknesses: 1. Clarification on how to obtain covariate 'c' (pre-training and test time) and inclusion of this description in main paper. 2. Visualisation for attention for the context token - how does it vary with different distributions. 3. Would be nice to see results on standard popular datasets such as ImageNet-C or CIFAR-100-C. Broader Impact Concerns: Not applicable ================================================== Metareview: Recommendation: Reject Comment: This is an interesting paper, with appeal to a wide community, that given the positive reviews would normally be accepted. However the authors hit a "time crunch" and could not even respond to the reviews. Given that two of the reviewers believe that the experiments need a little bit of work to make the claims convincing I think it's best to reject for now until there is clarity that the authors can put in the (not long) amount of time and work necessary to make the paper unanimously acceptable. Once they can do that they should resubmit and the paper should be quickly accepted. ==================================================
# Dreamedit: Subject-Driven Image Editing ♠,♣Tianle Li, ♠**Max Ku**∗, ♠,♣**Cong Wei**∗, ♠,♣**Wenhu Chen** ♠University of Waterloo ♣**Vector Institute, Toronto** {t29li,max.ku,cong.wei,wenhuchen}@uwaterloo.ca Reviewed on OpenReview: **https://openreview.net/forum?id=P9haooN9v2** ![0_image_0.png](0_image_0.png) Figure 1: The leftmost column is the customized subject, the middle column is the Subject Replacement task, the rightmost column is the Subject Addition task. The output is the generated results by DreamEditor ## Abstract Subject-driven image generation aims at generating images containing customized subjects, which has recently drawn enormous attention from the research community. Nevertheless, the previous works cannot precisely control the background and position of the target subject. In this work, we aspire to fill the void of the existing subject-driven generation tasks. To this end, we propose two novel subject-driven editing sub-tasks, i.e., Subject Replacement and Subject Addition. The new tasks are challenging in multiple aspects: replacing a subject with a customized one can totally change its shape, texture, and color, while adding a target subject to a designated position in a provided scene necessitates a rational context-aware posture of the subject. To conquer these two novel tasks, we first manually curate a new dataset called DreamEditBench containing 22 different types of subjects, and 440 source images, which cover diverse scenarios with different difficulty levels. We plan to host DreamEditBench and hire trained evaluators for enable standardized human evaluation. We also devise an innovative method DreamEditor to resolve these tasks by performing iterative generation, which enables a smooth adaptation to the customized subject. In this project, we conduct automatic and human evaluations to understand the performance of our DreamEditor and baselines on DreamEditBench. We found that the new tasks are challenging for the existing models. For Subject Replacement, we found that the existing models are particularly sensitive to the shape and color of the original subject. When the original subject and the customized subject are highly different, the model failure rate will dramatically increase. For Subject Addition, we found that the existing models cannot easily blend the customized subjects into the background smoothly, which causes noticeable artifacts in the generated image. We believe DreamEditBench can become a standardized platform to enable future investigations towards building more controllable subject-driven image editing. Our project and benchmark homepage is https://dreameditbenchteam.github.io/. ## 1 Introduction Can you imagine your favorite pet cat going to any place that any other cat has been to, just by picking a photo for them? Can you picture your most familiar objects appearing in authentic locations around the world, simply by providing a photograph of the surroundings? The replacement or incorporation of personalized subjects into a source image can present an exhilarating challenge, and these possibilities remain largely unexplored in previous endeavors. Current subject-driven image generation (as proposed by Ruiz et al. (2023); Gal et al. (2022); Chen et al. (2023b)) primarily relies on textual prompts to synthesize images with the given subject situated in diverse visual scenes. However, text-guided subject-driven image generation struggles to control the location or pose of the subject, background scene, image layout, etc. This lack of controllability hinders its applicability in real-world applications. In contrast, text-guided image editing (Dong et al., 2017; Li et al., 2020b;a) can precisely control the subject location, image layout, and background scene but falls short in controlling the synthesized subjects. Therefore, we are motivated to close the gap between subject-driven generation and image editing, proposing the new task of subject-driven image editing. In this work, we propose two novel subject-driven image editing subtasks, i.e., Subject Replacement and Subject Addition as shown in Figure 1. The goal of subject replacement is to replace a subject from a source image with a customized subject. In contrast, the aim of the subject addition task is to add a customized subject to a desired position in the source image. These two tasks pose challenges in the following aspects: 1) the generated subject needs to maintain a similar location and pose to the source image, and 2) the generated subject should blend in with the environment realistically. To evaluate the newly proposed task of subject-driven image editing, we first manually curate a new dataset, DreamEditBench. The dataset consists of examples with (source image, target subject) as inputs, and the model needs to generate a target image with the target subject appearing in the source image. The target subject is represented by a few reference images. The dataset spans more than 20 subject classes and 440 carefully picked source images with diverse backgrounds. For the addition task, we manually labeled different proper bounding boxes in the source image to specify the desired location for the target subject. Further, we develop a novel iterative generation method, i.e., DreamEditor. Our model takes three steps: (1) we fine-tune the text-to-image generation model with DreamBooth (Ruiz et al., 2023) on the target subject images to associate the subject with a special token [V]. (2) we adopt off-the-shelf segmentation tool to locate the region for inpainting and then use DDIM inversion to re-paint the region with the target subject, which is guided by the special token [V]. (3) However, we observe that the generated subject can still retain unwanted properties from the source subject. Therefore, we adopt an iterative in-painting, where we feed the generated result as input to perform another round of in-painting. After 2-3 iterations, we can observe the re-painted subject will gradually match the target subject while fitting naturally with the background. We compare DreamEditor with several baselines with human evaluation and show that DreamEditor can achieve better overall scores consistently on both tasks. After obtaining human evaluation results for DreamEditor and baselines, we observe that it is easier for the model to replace or add the target to the source image in some cases than others. For Subject Replacement, the source subject can be seamlessly replaced by the target one when they share similar features like colors and shape, thus merely one iteration suffices to yield satisfactory output. However, when it comes to the scenario that the target subject differs dramatically from the source, it will entail longer iterations to fit to the target gradually as shown in Figure 2. Similarly, for the subject addition task, the position and posture of the added target are more strictly stipulated for some of the backgrounds. As shown in the right column in Figure 1, the generated sloth plush should be precisely positioned on the chair rather than hovering in the air. For these cases, we note that DreamEditor can generate an irrational posture or position for the target subject in the initial iteration, but rectify it to a rational one in later iterations. Accordingly, we further divide the DreamEditBench into easy and hard levels. The sharp drop experiment results from easy to hard subsets reflect that the applied methods do struggle with the hard source images more than the easy ones. Moreover, we notice that the human evaluation results diverge largely from the automatic evaluation results. None of the automatic metrics like CLIP (Radford et al., 2021) and DINO (Caron et al., 2021) is able to reflect the success rate. Therefore, we advocate the necessity of conducting a rigorous human evaluation and plan to build a platform for more rigorous evaluation. ![2_image_0.png](2_image_0.png) Figure 2: The visualization of DreamEditor to iteratively refine the generated target subject. In a nutshell, the major contributions of this work are as follows: 1. Task Definition. We propose two novel new subject-driven image editing tasks, i.e. subject replacement and subject addition, and identify their challenges. 2. Benchmark. To standardize the evaluation of the two tasks, we collect the DreamEditBench, containing same-typed subjects and background images. 3. Method. We develop DreamEditor, a novel iterative generation method, to enable gradual adaptation from the source subject to the target one. ## 2 Related Work Conditional Image Editing with Deep Generative Models. The concept of Image-to-Image translation was introduced where a model is trained to learn a mapping between images distributions from two different domains so that an image from one domain can be translated to another domain (Isola et al., 2017; Zhu et al., 2017a;b; Huang et al., 2018; Park et al., 2020). Image-to-Image translation models are widely used in style transfer and photo enhancement tasks. However, Image-to-Image translation models lack the ability to edit locally on an image. Researchers have found that the generative prior can be used to manipulate parts of the generated image. To edit real images, they first need to be inverted into the latent space. This concept, known as 'Inversion', allows for detailed image editing. The idea of GAN inversion was first introduced by the method of Zhu et al. (2016), and a significant effort has been made on efficient GAN inversion (Perarnau et al., 2016; Creswell & Bharath, 2019; Abdal et al., 2019) and applying GAN inversion to image editing applications (Bau et al., 2019; Patashnik et al., 2021; Tov et al., 2021). Recently, diffusion models (Dhariwal & Nichol, 2021) dominate the field of image generation due to their superiority in generating realistic images. A denoising diffusion model learns to gradually denoise data starting from pure noise and a UNet (Ronneberger et al., 2015) is leveraged as the network architecture. The objective of a diffusion model is to learn a reverse diffusion process that restores structure in data to synthesize realistic data (Ho et al., 2020). Text-to-Image Diffusion Models are later proposed and achieve state-of-the-art image generation (Nichol et al., 2022; Ramesh et al., 2022; Saharia et al., 2022). In these methods, a pre-trained text encoder such as the CLIP (Contrastive Language–Image Pre-training) (Radford et al., 2021) is used to bridge the gap between textual descriptions and visual content. Diffusion-based image editing methods were also introduced in Lugmayr et al. (2022); Hertz et al. (2022); Mokady et al. (2022); Parmar et al. (2023); Couairon et al. (2022); Kawar et al. (2022). Subject-Driven Image Editing with Personalization of Pretrained Diffusion Model. As previously mentioned, image editing methods heavily rely on latent prior. In subject-driven editing tasks, the subject is often out of the latent distribution and thus it would be difficult to maintain subject fidelity when performing image editing. One way is to assume that a large-scale text-to-image diffusion model already understands a partial concept of the subject and that the subject can be derived from the right input text vector. Textual Inversion is done by performing a gradient update to optimize a text token vector to such that the text token vector represents the subject with a tiny portion of subject data (Gal et al., 2022). On the other hand, the straightforward approach is to fine-tune the large-scale text-to-image diffusion model to learn the concept of the subject. DreamBooth performs efficient fine-tuning on a large-scale diffusion model with a tiny portion of subject data (Ruiz et al., 2023). Following subject personalization works (Chen et al., 2023a; Shi et al., 2023; Tewel et al., 2023) investigate more efficient ways to perform subject-driven image generation. More recently, there are concurrent works like BLIP-Diffusion (Li et al., 2023a), CustomEdit (Choi et al., 2023) and PhotoSwap (Gu et al., 2023) working on the task of subject swapping. Our work differs from these works in two aspects: (1) the previous work only focuses on subject replacement, (2) our method can work on cases where the source and target subjects differ significantly. Our iterative method can modify the appearance iteratively to increase subject fidelity. In the context of subject addition, the objectives and approaches significantly deviate from those employed in image composition (Niu et al., 2021; Zhang et al., 2021). Contrary to the mere placement and subsequent harmonization typical in image composition, subject addition is inherently a generative process. This process not only introduces the subject into the background but also dynamically adjusts various attributes of the subject, such as posture, expression, and viewing angle, to achieve seamless integration with the existing background. This adaptation is executed automatically with diffusion process, underscoring the advanced capabilities of this approach in generating contextually coherent images. ## 3 Preliminary Text-to-Image Diffusion Models. A diffusion probabilistic model is a latent variable model that is trained to learn the image distribution by reversing the diffusion Markov chain. Specifically, we are interested in large text-to-image diffusion models pre-trained on large-scale text-image pairs (Rombach et al., 2021). The model consists of a CLIP (Radford et al., 2021) text encoder Γ, and a U-Net (Ronneberger et al., 2015) based conditional diffusion model ϵθ. Given a text prompt Q, the text encoder Γ generates a conditioning vector Γ(Q). With a randomly sampled noise ϵ ∼ N (0, I) and the time step t, we can get a noised image or latent code zt = αtx + σtϵ where x is the input image, αt and σt are the coefficients that control the noise schedule. Then the conditional diffusion model ϵ is trained with the denoising objective: $$\mathbb{E}_{x,Q,\epsilon,t}[||\epsilon-\epsilon_{\theta}(z_{t},t,\Gamma(Q))||_{2}^{2}]$$ $$(1)$$ ] (1) Where ϵ is trained to predict the noise condition on the noisy latent zt, a text prompt Q, and the time step t. At inference, given a noise latent zT , the noise is gradually removed by sequentially predicting it using ϵθ for T steps. A more detailed description is given in the supplementary material. Subject-Driven Text-to-Image Generation. Subject-driven generation models (Ruiz et al., 2023) often fine-tune a pre-trained text-to-image generation model ϵθ on a small set of demonstrations Cs of subject s, which contains a set of image and text description pairs Cs = {(xi, Qi)} K i=1, where xi means the i th image of subject s, while text description Qi of image xi contains a special text token Vs that is bound to the subject s (Ruiz et al., 2023). To customize a diffusion model ϵθs , we optimize the objective: $$\mathbb{E}_{(x,Q)\sim\mathbb{C}_{x},\epsilon,t}[||\epsilon-\epsilon_{\theta_{x}}(z_{t},t,\Gamma(Q))||_{2}^{2}]$$ ] (2) Where ϵθs is trained to always denoise for the images of the subject s when condition on a text description that contains the special text token Vs. DDIM Inversion for Real Image Edition. Text-guided editing of a real image with generative models often requires inverting the given image and textural prompt. Essentially, this means identifying an initial noise vector that can generate the given real image, while maintaining the model's ability to edit. We $$\left(2\right)$$ leverage the DDIM inversion scheme (Dhariwal & Nichol, 2021; Song et al., 2021) to encode a given image z0 = x0 onto a latent variable zT . $$z_{t+1}=\sqrt{\frac{\alpha_{t+1}}{\alpha_{t}}}z_{t}+\left(\sqrt{\frac{1}{\alpha_{t+1}}-1}-\sqrt{\frac{1}{\alpha_{t}}-1}\right)\cdot\epsilon_{\theta}(z_{t},t)\tag{3}$$ Based on the assumption that the ODE process can be revered in the limit of small steps, zT is computed by reversing the deterministic DDIM sampling. ϵθ(zt, t) is an unconditional model or equivalently a conditioning model with text ∅. i.e. ϵθ(zt*, t,* ∅). ## 4 Benchmark 4.1 Problem Definition Subject Replacement The Subject Replacement task aims to generate a new image by replacing the subject in a source image with the subject from a set of target subject images, guided by a general description. Formally, given the target subject images set Cs, a text prompt Q∗ describing the replaced image, and an image containing the same-typed subject to be replaced Srep, the task of Subject Replacement is targeted at obtaining a transformed image R = Ω(Cs, Q∗, Srep), so that the feature of the replaced subject in R aligns with the target subject in Cs, and simultaneously, the background in Srep is still preserved after transformation. As shown in the first row of Figure 1, we replace the heart cartoon in the source image Srep with the red evil cartoon in the target subject images Cs and preserve the background information in Srep. Subject Addition The Subject Addition task seeks to place the target subject at a designated position in a background image. Similar to the Subject Replacement task, given a set of images of the target subjects Cs, a text prompt Q∗ describing the after-placement image, a suitable background image Sadd for the customized subject, and a manually labeled bounding box *Bbox* specifying the addition position, the task can be formulated formally as A = Υ(Cs, Q∗, Sadd*, Bbox*). In the Subject Addition task, it is crucial to ensure the rationality of the interaction between the generated subject and the context. For instance, the grey sloth plush in the last column of Figure 1 should always be placed on the chair other than hovering in the air. In addition, the background details are necessitated to be preserved. ## 4.2 Data Collection To standardize the evaluation of the two proposed tasks, we curate a new benchmark, i.e. DreamEditBench, consisting of 22 subjects in alignment with DreamBooth [Ruiz et al. (2023)] with 20 images for each subject correspondingly. For the subject replacement task, we collect 10 images for each type, which include sametyped source subjects in diverse environments as shown in Figure 6a. The images are retrieved from the internet with the search query "a photo of [Class name]", and the source subject should be the main subject in the image which dominates a major part of the photo. For the subject addition task, we collect 10 reasonable backgrounds for each type of subject as shown in Figure 6b. In the meantime, we manually designate the specific location the target subject should be placed with a bounding box in the background. To collect the specific backgrounds for each subject, we first brainstorm and list the possible common environments of the subjects, then we search the listed keywords from the internet to retrieve and pick the backgrounds. ## 4.3 Dataset Analysis We notice that for Subject Replacement, some of the collected source images share similar features with the target, while others differ dramatically from the target in color, texture, or shape. According to our observation, the significant distinction in features can result in the degradation of model performance. For instance, it is easier to adapt to the target teapot for the source in the left column than the right in Figure 6a, as the left teapots share a more similar color to the target. Therefore, we divide the collected source images into 150 easy ones and 70 hard ones. For Subject Addition, the model is likely to generate the subject more reasonably in some of the backgrounds than the others. For instance, if the view and style of the background images diverge far away from the provided set of subjects, it is hard for the stable diffusion model to synthesize a target subject adapting to the unseen situation (e.g. the right column of a candle in Figure 6b is classified to hard as it requires the generation of the top view). Moreover, another example is the bear plush in Figure 6b, the bottom of the generated bear plush is strictly mandated to be at the bottom edge of the bounding box, otherwise, it makes no sense to place a plush in the air or below the bay window. Accordingly, we divide the 220 backgrounds into 122 easy-typed images and 98 hard ones. ![5_image_0.png](5_image_0.png) ## 5 Methodology Figure 3: DreamEditor generates the results iteratively. The output of the last iteration will serve as the input for the next. In each iteration, it leverages the dilated mask of subject segmentation and the specialized prompt to guide the DDIM inversion and gradually in-paint a subject more similar to the target one. To effectively resolve the two tasks, we build a novel system DreamEditor as an iterative adapter to the customized subject. DreamEditor can realize customized subject replacement or addition with the following steps: 1) Given a set of the target subject images, we fine-tune the text-to-image model with DreamBooth Ruiz et al. (2023) to associate the target subject to a special token. 2) We initialize the source image with two different strategies. 3) We leverage DDIM inversion guided by the segmentation mask and target text prompt to generate the initial result. 4) We repeat the process iteratively, where the output will become the input for the next round. An overview of the proposed DreamEditor is shown in Figure 3. Initialization Before inpainting, we deploy different initialization strategies. We use the source image Srep for replacement directly for the first type of initialization as it has already contained a same-typed subject as the target. However, for Subject Addition, it is challenging to directly generate a context-aware target subject in the designated *Bbox*. Instead, we devise an infill-then-generate schema. With a description of the target subject in the text, GLIGEN-infill can in-paint a subject similar to the target into the desired location in the source image, which is named as GLIGEN initialization merely for Subject Addition. The infill-then-generate schema enables easy adaptation from the "semi-customized" subject to the target one, as they share similar features. Moreover, the infilled subject tends to establish natural interaction with the context as it is trained with a large amount of image and bounding box pairs (Li et al., 2023b). Meanwhile, we propose COPY as another initialization strategy to preserve the characteristics of the target to the maximum degree. For COPY initialization, we segment the target subject from one of the target subject images Cs, and re-scale it to match the size of the source subject in Srep or the Bbox in Sadd. Then we replace the corresponding pixels in Srep or Sadd with the re-scaled target subject segmentation. We briefly define the initialization step as x (1) 0 = I(S*task*, Cs), where Stask ∈ {Srep, Sadd}. In-painting Given an input image x (i) 0 , we segment the source subject from the background with Segmentanything (Kirillov et al., 2023) to obtain the mask M (i) s . Then we dilate M (i) s to M ∗(i) s with morphological transformation and dilation kernel m as M ∗(i) s = D(x (i) 0 , m), so that the painted subject can have more space to harmonize with the background. To in-paint the target subject at the dilated mask, with the finetuned model ϵθs , we first encode the input x (i) 0into an intermediate noised latents x (i) kusing DDIM, where 0 < k(i) < T. Then we denoise x (i) kconditioned on prompt Q∗ with special token for the target subject to obtain z (i) t, where 0 *< t < k*. To preserve the background and solely modify the subject, we leverage the dilated segmentation mask M ∗(i) s to decide the partial pixels guided by Q∗. For the pixels outside M ∗(i) s , we replace them with the recorded latents in DDIM encoding to reconstruct the background, which can be formulated as z ∗(i) t = (1 − M ∗(i) s ) ∗ x (i) t + M ∗(i) s ∗ z (i) t. Then we can ultimately obtain the denoised image z (i) 0 as shown in Figure 3. We formulate the whole in-painting process as z (i) 0 = P(ϵθs , x (i) 0 , M∗(i) s , Q∗). Iterative Generation According to our observation, the generated subject from the first round may still retain the features from the source subject or cannot interact with the context rationally. Therefore, it motivates us to propose the iterative generation approach as shown in the orange block in Figure 3. In detail, DreamEditor (N) will treat the output from the last iteration as the input to the next iteration, i.e. x (i+1) 0 = z (i) 0 , and repeat the in-painting process for all the N iterations. It is worth noting that there is an iterative change of M ∗(i) s from round to round. The shape of the segmentation mask will gradually adapt to the target subject so as the generated subject. Briefly, the process is expressed as Algorithm 1. ## 6 Experiments 6.1 Experiment Setting We use stable diffusion version 1.41to fine-tune it with DreamBooth (Ruiz et al., 2023). For all of the 30 subjects involved in DreamBench (Ruiz et al., 2023), we set iteration number N = 5 and the mask dilation kernel m = 20. The encoding ratio k1/T is set to be 0.8 for the first iteration and decreases linearly as ki/T = k1/T − i ∗ 0.1. We leverage the Gligen inpainting pipeline implemented in the official code repository2for the initialization of DreamEditor for Subject Addition task. Meanwhile, we leverage the text-based Segment-anything implemented at lang-segment-anything3to obtain the segmentation mask of the source subject. All the experiments are run on a single A6000 GPU. ## 6.2 Baseline Methods We compare the proposed DreamEditor with the following baselines to show its effectiveness. DreamBooth (Ruiz et al., 2023) We finetune the stable diffusion model with target subject images Cs and the source image S*task* to encode the target subject and the source image by special tokens V ∗ and Y ∗. Then we prompt the model with *photo of a* V ∗*[class name] in* Y ∗*background*. to obtain the results. 1https://huggingface.co/CompVis/stable-diffusion-v1-4 2https://github.com/gligen/diffusers/tree/gligen 3https://github.com/luca-medeiros/lang-segment-anything ## Algorithm 1: Dreameditor Algorithm 1 **Input:** A source image S*task*, a target prompt Q∗, a fine-tuned stable diffusion model ϵθs , a set of target subject images Cs, mask dilation kernel m and iteration number N. 2 **Output:** A list of edited images Rsub. 3 x (1) 0 = I(S*task*, Cs); 4 **Function** P(ϵθs , x (i) 0 , M∗(i) s , Q∗) 5 x (i) 0 , x (i) 1 , . . . , x (i) k = *DDIMEncode*(ϵθs , x (i) 0 ); 6 z (i) k = x (i) k ; 7 for t = k − 1, k − 2*, . . . ,* 0 do 8 z (i) t = *DDIMSampler*(ϵθs , z (i) t+1, Q∗); 9 z ∗(i) t = (1 − M ∗(i) s ) ∗ x (i) t + M ∗(i) s ∗ z (i) t; 10 z (i) t = z ∗(i) t; 11 end 12 for i = 1, 2*, . . . , N* do 13 M ∗(i) s = D(x (i) 0 , m); 14 z (i) 0 = P(ϵθs , x (i) 0 , M∗(i) s , Q∗); 15 x (i+1) 0 = z (i) 0 ; 16 end 17 Rsub = {z (1) 0, z (2) 0*, . . . , z* (N) 0}; 18 **Return** Rsub | Method | Initialization | Dino-sub↑ | Dino-back↑ | ClipI-sub↑ | ClipI-back↑ | Overall↑ | |---------------------|---------------------|-------------|--------------|--------------|---------------|------------| | | Subject Replacement | | | | | | | DreamBooth | - | 0.718 | 0.481 | 0.867 | 0.744 | 0.608 | | Customized-DiffEdit | - | 0.619 | 0.878 | 0.834 | 0.915 | 0.790 | | CopyPaste | COPY | 0.775 | 0.822 | 0.874 | 0.896 | 0.819 | | PhotoSwap | - | 0.584 | 0.842 | 0.824 | 0.893 | 0.757 | | CopyHarmonize | COPY | 0.779 | 0.757 | 0.879 | 0.855 | 0.786 | | DreamEditor (1) | COPY | 0.753 | 0.779 | 0.875 | 0.877 | 0.791 | | DreamEditor (5) | COPY | 0.765 | 0.799 | 0.882 | 0.882 | 0.807 | | DreamEditor (1) | - | 0.546 | 0.664 | 0.763 | 0.853 | 0.646 | | DreamEditor (5) | - | 0.564 | 0.667 | 0.77 | 0.855 | 0.655 | | | Subject Addition | | | | | | | DreamBooth | - | 0.699 | 0.151 | 0.860 | 0.604 | 0.312 | | Customized-DiffEdit | GLIGEN | 0.577 | 0.641 | 0.818 | 0.789 | 0.653 | | CopyPaste | COPY | 0.444 | 0.666 | 0.693 | 0.826 | 0.579 | | PhotoSwap | GLIGEN | 0.550 | 0.684 | 0.808 | 0.809 | 0.660 | | CopyHarmonize | COPY | 0.441 | 0.698 | 0.690 | 0.844 | 0.592 | | DreamEditor (1) | COPY | 0.684 | 0.807 | 0.849 | 0.879 | 0.775 | | DreamEditor (5) | COPY | 0.664 | 0.773 | 0.841 | 0.850 | 0.751 | | DreamEditor (1) | GLIGEN | 0.579 | 0.773 | 0.818 | 0.870 | 0.717 | | DreamEditor (5) | GLIGEN | 0.632 | 0.798 | 0.838 | 0.874 | 0.753 | Table 1: Automatic Evaluation Results of DreamEditor and Baselines on Subject Replacement and Addition. Customized-DiffEdit (Couairon et al., 2022) DiffEdit can automatically generate the mask to be edited by contrasting predictions conditioned on source and target prompts, so that it can realize editing without changing the background. We replace the diffusion model in DiffEdit with a fine-tuned one by | Method | Initialization | Subject↑ | Background↑ | Realistic↑ | Overall↑ | |---------------------|------------------|------------|---------------|--------------|------------| | Subject Replacement | | | | | | | DreamBooth | - | 0.543 | 0.0 | 0.707 | 0.072 | | Customized-DiffEdit | - | 0.21 | 0.828 | 0.668 | 0.488 | | CopyPaste | COPY | 1.0 | 0.148 | 0.123 | 0.263 | | PhotoSwap | - | 0.15 | 0.773 | 0.663 | 0.425 | | CopyHarmonize | COPY | 1.0 | 0.552 | 0.147 | 0.433 | | DreamEditor (1) | COPY | 0.778 | 0.407 | 0.52 | 0.548 | | DreamEditor (5) | COPY | 0.817 | 0.505 | 0.54 | 0.606 | | DreamEditor (1) | - | 0.532 | 0.760 | 0.557 | 0.608 | | DreamEditor (5) | - | 0.630 | 0.800 | 0.582 | 0.664 | | Subject Addition | | | | | | | DreamBooth | - | 0.477 | 0.0 | 0.635 | 0.067 | | Customized-DiffEdit | GLIGEN | 0.288 | 0.302 | 0.252 | 0.280 | | CopyPaste | COPY | 0.983 | 1.0 | 0.033 | 0.319 | | PhotoSwap | GLIGEN | 0.21 | 0.562 | 0.305 | 0.33 | | CopyHarmonize | COPY | 0.983 | 1.0 | 0.295 | 0.662 | | DreamEditor (1) | COPY | 0.635 | 0.978 | 0.265 | 0.548 | | DreamEditor (5) | COPY | 0.633 | 0.973 | 0.393 | 0.623 | | DreamEditor (1) | GLIGEN | 0.287 | 0.99 | 0.427 | 0.495 | | DreamEditor (5) | GLIGEN | 0.478 | 0.972 | 0.528 | 0.626 | Table 2: Human Evaluation Results of DreamEditor and Baselines on Subject Replacement and Addition. DreamBooth. Then we modify the source and target prompts to *photo of a [class name]* and *photo of a* V ∗ [class name] correspondingly to enable the generation of the customized subject. For the Subject Addition, we initialize the input the same way as DreamEditor. Copy-Paste To have a naive but fundamental baseline, we directly use the output result from COPY initialization. Copy-Paste basically leverages Segment-anything to segment the target subject from one of the provided target subject images set Cs as "copy" and re-scale it to replace the corresponding pixels in the segmentation box in source image Srep or the labeled bounding box in Sadd. PhotoSwap (Gu et al., 2023) Similar with DreamEditor, PhotoSwap first learns the target concep with DreamBooth, and then swaps it into the target image with attention swap during diffusion process for replacement task. We employ GLIGEN initialization for PhotoSwap in addition task to enable a head-tohead comparison. CopyHarmonize To have a intuitive and straightforward baseline, for subject addition task, we copy-paste for the target subject first, and utilize a Harmonizer (Ke et al., 2022) to contextualize the copied subject in the source image. For subject replacement task, the source subject is segmented out first, and we conduct in-painting on the segmentation mask to fill up the background and repeat the same process with addition task. ## 6.3 Main Results We run the experiments for DreamEditor and baselines on DreamEditBench. We present results in Figure 5 and Figure 4. To make a more comprehensive comparison, we evaluate the results with both an automatic matrix and a rigorous human evaluation. Automatic Evaluation Results For both replacement and addition tasks, it is crucial to maintain fidelity to both subject and background. Therefore, we evaluate the fidelity of the generated results with DINO (Caron et al., 2021) and CLIP-I (Radford et al., 2021) scores for both the subject and the back- ![9_image_0.png](9_image_0.png) Figure 4: Results on Subject Replacement Task Compared with Baselines ground. The two matrices measure the average cosine similarity between the generated and real images with ViTS/16 DINO embeddings and CLIP embeddings. We define Dino-sub, Dino-back, ClipI-sub, ClipI-back as the measurements for the subject and background separately. For Dino-sub and ClipI-sub, it deploys all ![10_image_0.png](10_image_0.png) Figure 5: Results on Subject Addition Task Compared with Baselines the images from the provided target subject images set Cs as the real images. And for Dino-back and CilpIback, it leverages the source image S*task* as the real reference. To disentangle the subject from background or vice versa, the subject is segmented from the background for both the reference and generated images with off-the-shelf segmentation tool. The segmentation mask or its complementary part will be filled with white for background-oriented or subject-oriented evaluation respectively. The overall score is defined as the average of the geometric mean of the previous four measurements over all the examples. The automatic evaluation results are demonstrated in Table 1. The number in the brackets after DreamEditor refers to the number of iterations. Human Evaluation Results We conduct a human evaluation of the results for DreamEditBench to report a more realistic performance. We define three aspects to evaluate the quality of the generated results: 1) Subject Consistency: How well do the generated results preserve the feature of the customized subject in the provided set of images for the target? 2) Background Consistency: How well do the generated results preserve the background information in the source inputs? 3) Realism: Assess the overall realism and naturalness of the generated image. For each of the perspectives, we ask three experts to label the results with scores 0, 0.5, or 1. The detailed evaluation standard is described in Appendix A.2. For methods with several iterations, i.e. DreamEditor (5), the expert is asked to decide the best iteration first and evaluate the result for the best iteration. After obtaining the labeling, we average each aspect over all the examples to get the final results as shown in Table 2. The overall score is obtained by calculating the geometric mean of the scores from the three perspectives. Generally, the proposed DreamEditor (5) obtains the best overall score among all the methods, surpassing the best baseline Customized-DiffEdit by 0.176 in Subject Replacement. For subject addition task, though CopyHarmonize slightly performs better than DreamEditor (5) due to almost full score for Subject and Background consistency, DreamEditor (5) outperforms it by 0.233 in Realism score, indicating the higher fidelity of the results generated by DreamEditor. The detailed analysis for the comparison of realism is demonstrated in Appendix A.4. Moreover, it is imperative to note that the human evaluation results diverge dramatically from the automatic ones, which reflects the necessity of conducting a rigorous human evaluation for a more fair evaluation among methods. ## 6.4 Result Analysis As shown in Figure 4 and Figure 5, though with relatively high subject consistency and realism, DreamBooth can hardly preserve the background from the source. Customized-DiffEdit can merely either preserve the background or adapt to the target subject but can barely handle both. It is also difficult for PhotoSwap to adapt to the features of the target subject. CopyPaste although preserves both subject and background consistency, it fails to obtain realistic results in most cases, as it cannot generate context-aware subjects. The realism issue is alleviated in CopyHarmonize baseline, but still yields relatively low realism score. DreamEditor with COPY as initialization further mitigates the limitation of CopyHarmonize, leading to more realistic results. It works well when the posture of the segmented subject matches the target context (e.g. monster toy in Figure 5), but fails otherwise (e.g. backpack in Figure 5). DreamEditor without initialization for replacement or with GLIGEN initialization for addition balances subject, background, and realism to some extent, thus obtaining the highest overall human evaluation scores. It can be observed from Table 1, the models initialized with COPY mechanism can achieve better scores in subject-oriented evaluation measurements. While method with weakest background preservation mechanism like DreamBooth performs worst in the background-oriented evaluation matrix, which matches our intuition. Besides the main results, we conduct a more fine-grained analysis on the sub-division of DreamEditBench. As stated in Section 4.3, we divide the collected dataset into easy and hard subsets. We measure the overall results for the two divisions separately for each task as shown in Table 3. For the majority of the cases, the models achieve a better performance on the easy sub-division than the hard one by about 0.1. In addition, we analyze the effect of the number of iterations in an iterative generation. As shown in Table 4, for Subject Replacement, the Subject Consistency will consistently increase with a larger N. Nevertheless, the Background Consistency and Realism will fluctuate and even decrease with more iterations. Generally, it reflects a trade-off between subject consistency and the other two aspects with the parameterN. Thus, if we can pick the best iteration for each example individually, it can achieve the best overall performance. | | Subject Replacement | | Subject Addition | | | |---------------------|-----------------------|-------|--------------------|-------|-------| | Method | Initialization | Easy↑ | Hard↑ | Easy↑ | Hard↑ | | Customized-DiffEdit | -/GLIGEN | 0.511 | 0.418 | 0.303 | 0.250 | | PhotoSwap | -/GLIGEN | 0.511 | 0.418 | 0.352 | 0.303 | | CopyHarmonize | -/GLIGEN | 0.511 | 0.418 | 0.685 | 0.632 | | DreamEditor (1) | COPY | 0.551 | 0.529 | 0.600 | 0.480 | | DreamEditor (5) | COPY | 0.600 | 0.612 | 0.663 | 0.574 | | DreamEditor (1) | -/GLIGEN | 0.648 | 0.515 | 0.562 | 0.470 | | DreamEditor (5) | -/GLIGEN | 0.702 | 0.567 | 0.676 | 0.563 | | Iteration | Subject↑ | Background↑ | Realistic↑ | Overall↑ | |-------------|------------|---------------|--------------|------------| | N=1 | 0.531 | 0.763 | 0.556 | 0.608 | | N=2 | 0.585 | 0.713 | 0.490 | 0.589 | | N=3 | 0.601 | 0.693 | 0.456 | 0.574 | | N=4 | 0.613 | 0.681 | 0.448 | 0.571 | | N=5 | 0.625 | 0.681 | 0.453 | 0.577 | | N=Best | 0.630 | 0.800 | 0.582 | 0.664 | Table 3: Comparison of Human Evaluation Results on the Easy and Hard split of DreamEditBench. Table 4: Human Evaluation Scores with Different Iteration Number for Subject Replacement Task. ## Limitations DreamEditor can either fail or require a large iteration number to adapt when the target subjects differ too much from the source subject. Besides, due to the iterative generation, the error in reconstruction from DDIM inversion will be propagated and leads to a blurry background after N iterations, especially for large N. In addition, the performance of DreamEditor is highly affected by the segmentation model and GLIGEN in-painting, if the two models fail in the first place, it is very likely that DreamEditor will also fail ultimately. ## 7 Conclusion In this work, we define two novel subject-driven image editing tasks, i.e. Subject Replacement and Subject Addition. To standardize the evaluation of the two proposed tasks, we collect the DreamEditBench, containing 440 same-typed subjects and background images for 22 subjects in total. Meanwhile, we devise DreamEditor to realize gradual refinement towards target subjects with iterative generation. The systematic human evaluation shows the advantage of our method over the other baselines on the overall performance. ## Broader Impact The precise control over subjects and backgrounds in image generation, as demonstrated in this research, can contribute to the ongoing efforts to detect and mitigate deepfake content. By improving our understanding of how subjects can interact with their environment, this research can aid in the development of more effective deepfake detection algorithms. In addition, leveraging the proposed method, well-controlled and relatively high-quality fake images can be synthesised automatically, which can be leveraged to train better deepfake detector. By advancing the field of subject-driven image generation, this research indirectly results in the development of countermeasures against malicious uses of deepfake technology. Moreover, it raises awareness among researchers and policymakers about the evolving landscape of AI-generated content, stimulating discussions on potential regulations and safeguards. ## References Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, October 2019. David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. Semantic photo manipulation with a generative image prior. *ACM Transactions on Graphics (Proceedings* of ACM SIGGRAPH), 38(4), 2019. Mathilde Caron, Hugo Touvron, Ishan Misra, Herv'e J'egou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 9630–9640, 2021. Hong Chen, Yipeng Zhang, Xin Wang, Xuguang Duan, Yuwei Zhou, and Wenwu Zhu. Disenbooth: Identitypreserving disentangled tuning for subject-driven text-to-image generation, 2023a. Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Ruiz, Xuhui Jia, Ming-Wei Chang, and William W. Cohen. Subject-driven text-to-image generation via apprenticeship learning. *ArXiv*, abs/2304.00186, 2023b. Jooyoung Choi, Yunjey Choi, Yunji Kim, Junho Kim, and Sungroh Yoon. Custom-edit: Text-guided image editing with customized diffusion models. *arXiv preprint arXiv:2305.15779*, 2023. Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Diffedit: Diffusion-based semantic image editing with mask guidance. *ArXiv*, abs/2210.11427, 2022. Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. IEEE Transactions on Neural Networks and Learning Systems, 30(7):1967–1974, 2019. doi: 10.1109/ TNNLS.2018.2875194. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/ 49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf. Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. Semantic image synthesis via adversarial learning. In Proceedings of the IEEE international conference on computer vision, pp. 5706–5714, 2017. Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel CohenOr. An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022. URL https://arxiv.org/abs/2208.01618. Jing Gu, Yilin Wang, Nanxuan Zhao, Tsu-Jui Fu, Wei Xiong, Qing Liu, Zhifei Zhang, He Zhang, Jianming Zhang, HyunJoon Jung, et al. Photoswap: Personalized subject swapping in images. arXiv preprint arXiv:2305.18286, 2023. Amir Hertz, Ron Mokady, Jay M. Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Promptto-prompt image editing with cross attention control. *ArXiv*, abs/2208.01626, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing* Systems, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020. URL https://proceedings.neurips. cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf. Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In *European Conference on Computer Vision (ECCV)*, 2018. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In *Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on*, 2017. Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Hui-Tang Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. *ArXiv*, abs/2210.09276, 2022. Zhanghan Ke, Chunyi Sun, Lei Zhu, Ke Xu, and Rynson W.H. Lau. Harmonizer: Learning to perform white-box image and video harmonization. In *European Conference on Computer Vision (ECCV)*, 2022. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023. Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip HS Torr. Manigan: Text-guided image manipulation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7880–7889, 2020a. Bowen Li, Xiaojuan Qi, Philip Torr, and Thomas Lukasiewicz. Lightweight generative adversarial networks for text-guided image manipulation. *Advances in Neural Information Processing Systems*, 33:22020–22031, 2020b. Dongxu Li, Junnan Li, and Steven CH Hoi. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. *arXiv preprint arXiv:2305.14720*, 2023a. Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 22511–22521, June 2023b. Andreas Lugmayr, Martin Danelljan, Andrés Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11451–11461, 2022. Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models, 2022. Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. GLIDE: Towards photorealistic image generation and editing with textguided diffusion models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 16784–16804. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/nichol22a.html. Li Niu, Wenyan Cong, Liu Liu, Yan Hong, Bo Zhang, Jing Liang, and Liqing Zhang. Making images real again: A comprehensive survey on deep image composition. *ArXiv*, abs/2106.14490, 2021. URL https://api.semanticscholar.org/CorpusID:235658778. Taesung Park, Alexei A. Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired imageto-image translation. In *European Conference on Computer Vision*, 2020. Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. *ArXiv*, abs/2302.03027, 2023. Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision (ICCV), pp. 2085–2094, October 2021. Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Álvarez. Invertible Conditional GANs for image editing. In *NIPS Workshop on Adversarial Training*, 2016. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *International Conference on Machine* Learning, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *ArXiv*, abs/2204.06125, 2022. Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. *2022 IEEE/CVF Conference on Computer Vision and Pattern* Recognition (CVPR), pp. 10674–10685, 2021. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. *ArXiv*, abs/1505.04597, 2015. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In *Computer Vision and* Pattern Recognition Conference, 2023. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, Seyedeh Sara Mahdavi, Raphael Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. *ArXiv*, abs/2205.11487, 2022. Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. Instantbooth: Personalized text-to-image generation without test-time finetuning, 2023. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *International* Conference on Learning Representations, 2021. Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In *ACM SIGGRAPH 2023 Conference Proceedings*, SIGGRAPH '23, 2023. Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. *ACM Transactions on Graphics (TOG)*, 40:1 - 14, 2021. Bo Zhang, Li Niu, and Liqing Zhang. Image composition assessment with saliency-augmented multi-pattern pooling. *ArXiv*, abs/2104.03133, 2021. URL https://api.semanticscholar.org/CorpusID:233168684. Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In *Proceedings of European Conference on Computer Vision (ECCV)*, 2016. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In *Computer Vision (ICCV), 2017 IEEE International Conference* on, 2017a. Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In *Advances in Neural Information Processing* Systems, 2017b. ## A Appendix A.1 Dataset Examples We demonstrate part of our collected data at Figure 6. The left column is some examples of easy subset and the right column is that for hard subsets. ## A.2 Human Evaluation To standardize the conduction of a rigorous human evaluation, we stipulate the criteria for each measurement as follows: Subject Consistency How well do the generated results preserve the feature of the customized subject in the provided set of images for the target subject? - Score 1: The subject is consistent and accurately represents the intended subject, closely matching the visual characteristics and appearance. - Score 0.5: The subject partially resembles the intended subject but lacks consistency in some visual attributes, such as facial features, body proportions, or object shapes. - Score 0: The subject in the generated image bears little resemblance to the intended subject or exhibits significant distortions in visual characteristics and appearance. Background Consistency How well do the generated results preserve the background information in the provided source inputs? - Score 1: The background is consistent and seamless, demonstrating a high level of coherence with the given context or scene. - Score 0.5: The background has some minor inconsistencies or artifacts, but they are not prominent enough to significantly affect the overall coherence of the image. - Score 0: The background shows significant inconsistencies or artifacts that are visually distracting and do not align with the given context or scene. Realism Assess the overall realism and naturalness of the generated image. - Score 1: The generated image is visually convincing and closely resembles a real photograph, exhibiting realistic lighting, shadows, texture details, and overall visual coherence. - Score 0.5: Score 0.5: The realism of the image is somewhat compromised, showing minor visual flaws or inconsistencies that may raise suspicion but do not strongly detract from its overall appearance. - Score 0: The generated image appears highly artificial and unrealistic, with noticeable visual flaws, unnatural lighting, or inconsistencies that make it easily identifiable as a generated image. If the score of an aspect is 0, we use 0.001 as an approximation. We also provide an example for each score with respective to different criteria as shown in Figure 7. ## A.3 Failure Cases The results are generated by DreamEditor with unified parameters for all the examples without subtly tuning for any specific subject. Therefore, there can be various failure cases in different stage of the DreamEditor pipeline as shown in Figure 7. ![17_image_0.png](17_image_0.png) (a) Benchmark Examples for Replacement Task ![17_image_1.png](17_image_1.png) (b) Benchmark Examples for Addition Task Figure 6: DreamEditBench Examples ![18_image_0.png](18_image_0.png) Figure 7: Detailed human evaluation criteria for different measurements. For instance, in subject replacement task, it is hard for DreamEditor to adapt to the target pink backpack from the green source as they share few similar features, and a large iteration number N may be required to transform it completely. Secondly, it will also lead to a failure if the segmentation step fails at the very beginning as shown in the teapot example. The inaccurate segmentation map can result in the shift of the subject position and unnecessary artifact added to the background. For subject addition task, in addition to all the failures from replacement, the initialization step can play a vital role. It is more likely to fail if the subject generated by GLIGEN differs far from the target subject like the black backpack in the 4th column of Figure 7. Moreover, the subject generated by GLIGEN may not correctly interpret its relationship with the context. The cat should be a real cat sitting at the carpet other than the painting, while the bear plushie should be put on the bay window instead of suspending in the air. These failure cases may be rectified with a more specific parameter tuning for each subject or better segmentation and in-painting model. ## A.4 Comparison With Naive Baselines According to the human evaluation results for DreamEditor and CopyHarmonize, we conduct deeper analysis on why our method can not outperform the naive CopyHarmonize baseline in subject addition task for the overall score. As we can observe from the left column of Figure 9, when adding the target subject to the source image, CopyHarmonize can achieve exact match with the target subject and preserve the background consistently, as it pastes the segmented subject directly to the background. However, though high scores can CopyHarmonize obtain in subject and background consistency, the copy-and-harmonize pipeline will fail the realism score when the posture or angle of the view play a vital role of making the images look realistic. For instance, as in the first row of the left column in Figure 9, the shoulder straps of a backpack should not stand up when placed in the chair, and it also should not be floating above the chair. In the second row of CopyHarmonize, the angle of the view of the toy car is not consistent with the that of the table from background. Instead, a view of top is expected as generated by DreamEditor other than a front view. Moreover, direct copy can lead to obvious artifact when the subject is at the edge of the original image like the flat cut of the body of the dog in the third row of Figure 9. Our DreamEditor can generate more ![19_image_0.png](19_image_0.png) Figure 8: Failure cases of DreamEditor in the two proposed tasks. reasonable and realistic images when it comes to these cases as shown in Figure 9. In subject replacement task, CopyHarmonize will induce patent artifact after inpainting as shown in the right column of Figure 9, which will impair the background consistency and realism of the generated images dramatically. ![19_image_1.png](19_image_1.png) Figure 9: Comparison of DreamEditor with CopyHarmonize.
Review 1: Summary: This paper introduces an innovative task called subject-driven image editing, encompassing two sub-tasks: subject replacement and subject addition. This task aims to precisely control subject placement, similar to subject-driven image generation, while also managing subject location and pose, akin to conventional image editing. The authors curate a new dataset, named DreamEditBench, dedicated to evaluating this innovative task. Moreover, the authors propose an iterative approach named DreamEditor for this task. DreamEditor employs DDIM for iterative result generation, guided by segmentation masks and text prompts. The method incorporates a customized initialization strategy. Experimental validation on DreamEditBench demonstrates the superiority of the proposed DreamEditor over existing methods in terms of human evaluation results. Strengths and Weaknesses: Strengths: 1. The introduced novel task is very attractive, with the compiled dataset holding promising potential for future research endeavors within this domain. 2. The rationale and details of the method are presented with remarkable clarity, ensuring readers' ease of comprehension. The experiments results are comprehensive, including automatic evaluations, human evaluations and the effect of initialization strategy and different iteration number. Weakness: 1. While the DreamEditor outperforms existing methods in human evaluation, its results still fall short of achieving the desired level of quality. A noticeable disparity persists between the original subject and the generated counterpart. In many instances, the generated subject fails to seamlessly integrate with the surrounding environment in a plausible manner. 2. The Iteration number is set mannuly for each input image. Regrettably, the model lacks the inherent capability to autonomously ascertain the optimal iteration number for individual images, despite the evident impact of iteration number on the quality of generated outcomes. Requested Changes: - The equations (1), (2), and (3) lack appropriate punctuation. - A broader and impartial human evaluation process is recommended. - Further experimentation involving an ablation study is encouraged. Broader Impact Concerns: I do not have any concerns about the broader impact of this work. ================================================== Review 2: Summary: This paper works on subject-driven image editing, including subject replacement and subject addition, by combining DreamBooth, Segment Anything and GLIGEN. The proposed method is evaluated on a newly curated dataset via both quantitive metrics and human evaluation. Strengths and Weaknesses: Strengths: 1. The method is evaluated by human evaluation. The observation of the discrepancy between human evaluation and quantitive metrics is beneficial to the community. 2. A new dataset, with moderate size, is curated. 3. The limitations are discussed. Weaknesses: 1. Technical contribution is limited. The proposed method is a combined application of prior works, i.e., DreamBooth, Segment Anything and GLIGEN. Though it is a recent trend to do research by pipelining, more technical insights are needed. Requested Changes: Regarding the main weakness, I would not request a change since it would be too significant. We can see that DreamBooth has low background scores. I am curious whether it is possible fine-tune a second model for another special token V2 to learn the background to handle this problem of DreamBooth? Broader Impact Concerns: It has a potential mis-use risk to produce fake contents. ================================================== Review 3: Summary: This work aims at subject-driven image editing and targets two sub-tasks: subject replacement and subject addition. To evaluate the new settings, an evaluation dataset is collected, and several metrics are introduced. An iterative strategy and two initialization schemas are proposed to achieve reasonable performance. Strengths and Weaknesses: Strengths: - Subject-driven image editing is a challenging and useful task. This work defines two sub-task settings and proposes to use DreamBooth and an iterative generation scheme to achieve reasonable results. - The challenges of subject replacement and subject addition are analyzed, and the corresponding evaluation datasets and metrics are introduced. A user study is also constructed to evaluate subject consistency, background consistency, and realism. - To further improve the subject addition performance, two initialization schemas are introduced: COPY and GLIGEN-infill, which provide a good starting point for subject addition. - Figure 1 clearly illustrates the two sub-task settings, and the overall paper organization and writing are easy to follow. Weaknesses: - The technical contributions are somewhat limited. The iterative generation scheme is not new and has been applied in several previous generation tasks, such as image inpainting “High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling, ECCV20” - A better baseline could be “Inpainting + Copy-Paste + Harmonization,” which is a straightforward workflow for object replacement or addition. It would be good to perform a comparison with such a baseline. - Regarding the newly introduced evaluation dataset, it only contains 22 subject classes, which may not be enough for a comprehensive evaluation. Collecting at least hundreds of subject classes would be ideal for a more convincing experiment result. - Regarding the introduced evaluation metrics, Dino-sub, Dino-back, ClipI-sub, and ClipI-back may be questionable as subject and background pixels are all in the generated image, which may affect the evaluation to each other. For example, when evaluating Dino-sub and ClipI-sub, different backgrounds may cause unpredictable results. - It is mentioned in the limitation that “DreamEditor can either fail or require a large iteration number” and “leads to a blurry background after N iterations, especially for large N.” The failure cases and analysis could be presented in the appendix. - It is not clear how to select the best N, and it would be interesting to explore an automatic strategy to choose the best N. For example, we can calculate a metric to do early-stopping. Minor question: “DreamEditor (5)” in Sec. 6.3, I’m not sure what it means either as it is not explained in the text. Requested Changes: Critical adjustments: - Better baseline comparison. - Explain the effectiveness of the proposed metrics. Broader Impact Concerns: A broader impact statement should be added to discuss the potential impact on deepfake. ================================================== Metareview: Recommendation: Accept with minor revision Comment: Three expert reviewers provided high-quality reviews for this paper. After the revision and the discussions between authors and the reviewers, the ratings are split, with two reviewers leaning to accept and one reviewer leaning to reject. The primary concerns of the limitation lie in the result quality. In particular, the comparison with the simple baseline "inpaint+paste+harmonization" because it does not require computationally expensive fine-tuning. However, while this simple baseline can preserve the high-resolution details of the input image, it is fundamentally not feasible for more advanced editing beyond simple translation and scaling. For example, in the paper, the authors did show examples of changing the subject's pose in the inserted scene. The revision also provided these baseline results and quantitative comparisons. Considering all the strengths and weaknesses raised by the reviewers, the AE believes that this paper has sufficient merits to be published at TMLR. ==================================================
# Contrastive Learning With Consistent Representations Anonymous authors Paper under double-blind review ## Abstract Contrastive learning demonstrates great promise for representation learning. Data augmentations play a critical role in contrastive learning by providing informative views of the data without necessitating explicit labels. Nonetheless, the efficacy of current methodologies heavily hinges on the quality of employed data augmentation (DA) functions, often chosen manually from a limited set of options. While exploiting diverse data augmentations is appealing, the complexities inherent in both DAs and representation learning can lead to performance deterioration. Addressing this challenge and facilitating the systematic incorporation of diverse data augmentations, this paper proposes Contrastive Learning with Consistent Representations (CoCor). At the heart of CoCor is a novel consistency metric termed DA consistency. This metric governs the mapping of augmented input data to the representation space, ensuring that these instances are positioned optimally in a manner consistent with the applied intensity of the DA. Moreover, we propose to learn the optimal mapping locations as a function of DA, all while preserving a desired monotonic property relative to DA intensity. Experimental results demonstrate that CoCor notably enhances the generalizability and transferability of learned representations in comparison to baseline methods. ## 1 Introduction Data augmentation (DA) is widely used in supervised learning in computer vision Ho et al. (2019); Lim et al. (2019); Cubuk et al. (2019; 2020); Li & Li (2023), achieving excellent results on popular datasets Ciregan et al. (2012); Sato et al. (2015); Wan et al. (2013); Krizhevsky et al. (2017). DA is also a key component in recent contrastive learning techniques Chen et al. (2020a); Tian et al. (2020b); He et al. (2020); Chen & He (2021); Xiao et al. (2020); Lee & Shin (2023). An encoder that learns good visual representations of the input data is trained with a contrastive loss. The contrastive loss is characterized by the following principle: in the feature space, two views of a given data example, transformed by distinct DA functions, exhibit correlation (similarity), whereas transformed views of different input examples manifest dissimilarity. The effectiveness of the encoder, trained on unlabeled data, is pivotal to the overall performance of contrastive learning and is contingent upon the choice of employed DAs. To learn effective and transferable representations, numerous studies have focused on enhancing contrastive learning by selecting suitable DAs. Chen et al. (2020a); Tian et al. (2020b); He et al. (2020); Li et al. (2023); Wang et al. (2023); Van der Sluijs et al. (2024) have shown that combining different augmentations with appropriate intensities can improve performance. However, these works confine augmentations to a random composition of specific types within a limited intensity range. Contrastingly, research has indicated that employing diverse DAs effectively enhances the model's ability to capture the invariance of the training data, thereby boosting the model's performance Cubuk et al. (2020), transferability Lee et al. (2021), and robustness Lopes et al. (2019). Consequently, the exploration of diverse DAs has garnered increased attention recently. PIRL Misra & Maaten (2020) adopts additional augmentations, SwAV Caron et al. (2020) introduces multiple random resized-crops to provide the encoder with a broader range of data reviews. Moreover, some recent works adopt combinations of stronger DAs in contrastive learning Tian et al. (2020b); Wang & Qi (2021); Lee & Shin (2023). 1 ![1_image_0.png](1_image_0.png) Figure 1: (a) Left: An encoder trained with the standard contrastive loss can exhibit inconsistency, as different views of an instance are encouraged to be represented similarly in the feature space, irrespective of the actual difference between them. Right: A consistent encoder positions the vector of more strongly augmented data further away from that of the raw data. Here the rings represent points with varying similarities to the central representation vector. (b) Nearest-neighbor retrieval in the feature space on CUB200 Wah et al. (2011) and Flowers102 Nilsback & Zisserman (2008) using pre-trained encoders. Existing contrastive methods He et al. (2020); Chen & He (2021), which enforce invariance to all data augmentations, may inadvertently cluster dissimilar data closely in the feature space. However, by applying *consistency*, CoCor ensures that only data sharing similar latent semantics are distributed closely in the latent space. While the idea of leveraging diverse data augmentations is appealing, to date, there lacks a systematic approach for integrating a substantial number of augmentations into contrastive learning. It is crucial to note that the intricacies of data augmentation and representation learning may lead to a performance degradation if augmentation functions are not judiciously chosen Tian et al. (2020b); Chen et al. (2020a); Wang & Qi (2021). To tackle the aforementioned challenges, this paper introduces Contrastive Learning with Consistent Representations (CoCor). First, we define a set of composite augmentations, denoted as Ωc, formed by combining multiple basic augmentations. Each composite augmentation is represented using a composition vector that encodes the types and frequencies of the basic augmentations employed. This composite set Ωc facilitates the incorporation of diverse DAs, encompassing a wide range of transformation types and overall intensity. Crucially, we introduce the concept of consistency for data augmentations, termed *DA consistency*, which imposes a desired property on the representations of views generated by various composite data augmentations. Specifically, for a given input example, a stronger data augmentation should move its feature space view farther from the representation of the original example. In other words, the similarity in feature space between the transformed view and the original input decreases with the strength of the data augmentation, as depicted in Figure 1(a). An encoder failing to satisfy this property is considered inconsistent from a data augmentation perspective. We propose the integration of a novel *consistency loss* to enforce *DA consistency* within the representation space of the encoder. Unlike contrastive loss, which does not account for the type or strength of the data augmentations used Chen et al. (2020a); Tian et al. (2020b); He et al. (2020), our *consistency loss* strategically guides encoder training to ensure alignment of the resultant feature space with the strength of the applied data augmentations. As demonstrated in nearest-neighbor retrieval tasks shown in Figure 1(b), existing contrastive learning approaches like He et al. (2020); Chen et al. (2020b); Chen & He (2021) may cluster dissimilar views due to their agnostic to augmentation-induced data variance. However, the consistency enforced by CoCor ensures proximity in the feature space only for data with similar latent features, while distinctly separating variants of the data. Furthermore, our experimental studies affirm that maintaining DA consistency is crucial for mitigating discrepancies among diverse data augmentations, thereby preventing potential performance degradation or even dimensional collapse in the feature space Jing et al. (2021); Li et al. (2022a). The proposed consistency loss hinges on determining the optimal similarity between the representations of an input example and its transformed view, a value contingent on the strength of the composite data augmentation. However, this optimal similarity is not known *a priori*. To address this challenge, we adopt a data-driven approach, training a neural network to map from the composition vector of each data augmentation to the desired similarity. Recognizing the monotonic relationship between similarity and augmentation strength, we introduce and enforce a monotonicity constraint on the neural network. This constraint ensures that stronger composite data augmentations correspond to a strictly smaller valued similarity. The proposed Contrastive Learning with Consistent Representations (CoCor) has the following contributions: - Systematically explore a large set of diverse composite data augmentations to improve the encoder's performance on various downstream tasks. - Propose a novel concept, *DA consistency*, which quantifies the monotonic dependency of latent-space similarity between an input example and its transformed view on the strength of the augmentation. - Introduce a new *consistency loss* designed to train the encoder to satisfy *DA consistency* and to utilize diverse augmentations without suffering inconsistency-induced performance loss. - Train a monotonically constrained neural network for learning the optimal latent-space similarities. With consistent representations, CoCor achieves state-of-the-art results for various downstream tasks. Moreover, it can be readily integrated into existing contrastive learning frameworks, effectively imposing DA consistency on the encoder. ## 2 Related Work Contrastive Learning It is a common practice to utilize data augmentations in forming both positive and negative pairs of data for defining the contrastive loss Chen et al. (2020a); Oord et al. (2018). Some methods highlight the significance of negative pairs in learning distinguishable features from the data Robinson et al. (2020); Awasthi et al. (2022). MoCo introduces a moving-averaged encoder paired with a large negative memory queue He et al. (2020); Misra & Maaten (2020). In contrast, Chen & He (2021); Grill et al. (2020) exclusively learn features from positive pairs, achieving state-of-the-art performance without incorporating negative pairs. Furthermore, supervised contrastive methods introduce annotation-related information to learn better representations Khosla et al. (2020); Li et al. (2022b). Recent research endeavors aim to comprehend the effectiveness and limitations of contrastive learning via the lens of representation distribution Wang & Isola (2020) and delve into issues such as dimensional collapse Li et al. (2022a); Jing et al. (2021). Data Augmentation Data augmentations applied to natural images have demonstrated efficacy in enhancing the generalizability and robustness of models trained in supervised learning Cubuk et al. (2019; 2020); Lopes et al. (2019); Krizhevsky et al. (2017). However, in self-supervised learning Chen et al. (2020a); He et al. (2020), careful selection of data augmentations from a limited set becomes crucial for ensuring optimal performance. Instead of treating both views in positive pairs equally, recent proposals by Zhang & Ma (2022); Lee et al. (2021); Zhang et al. (2022); Devillers & Lefort (2023) suggest capturing the variance between two views caused by the application of different random augmentations, termed augmentation-aware information. AugSelf Lee et al. (2021) achieves this by training an auxiliary network to predict the difference between augmentations applied to generate positive pairs. LoGo Zhang et al. (2022) proposes learning the differences between variously sized crops of each image. Zhang & Ma (2022) encodes an augmented image along with the DA parameters using two networks, and combines the two embeddings to form a feature ![3_image_0.png](3_image_0.png) Figure 2: Overview of the Proposed Method. (a) The proposed property, *DA consistency*, ensures that data augmented with stronger augmentation is positioned farther from the original data compared to a weakly augmented view in the representation space. (b) The Monotonic Mapping Neural Network (MMNN) predicts the optimal latent similarity between an augmented view and the original data, using augmentation composition vectors as input. Stronger augmentation results in a smaller predicted latent similarity by the MMNN. vector for contrastive loss. EquiMod Devillers & Lefort (2023) trains a network to predict the representation of an augmented view from the original data and the applied DA. While methods like InfoMin Tian et al. (2020b) and JOAO You et al. (2021) propose searching for optimal data augmentation policies for forming pairs in contrastive loss, their empirical studies reveal that stronger and more diverse augmentations can, in fact, degrade encoder performance. Conversely, some recent works aim to explore the benefits of strong augmentations Wang & Qi (2021); Caron et al. (2020); Lee & Shin (2023). However, as of now, there is a lack of a systematic approach to leveraging diverse data augmentations while considering the impact of augmentation strength. ## 3 Preliminaries The goal of contrastive representation learning is to learn an encoder parameterized by θe that maps an input data x ∈ R n to an ℓ2 normalized feature vector z of dimension m in the feature space Z, i.e., fθe (·) : R n → Sm−1. The encoder is trained by minimizing a contrastive loss. Specifically, it defines two different augmented versions of the same input example as a positive pair, which are expected to have similar representations in the feature space. Meanwhile, the encoder shall be trained to discriminate any two instances augmented from different input examples, i.e., a negative pair, in the feature space. Minimizing the contrastive loss pulls together positive pairs and pushes apart negative pairs Oord et al. (2018); Chen et al. (2020a); Li et al. (2020). ## 4 Method 4.1 Diverse Composite Augmentations The proposed Contrastive Learning with Consistent Representations (CoCor) improves contrastive learning performance by exploring diverse data augmentations composed from a set of basic augmentations. Definition 1 (**Composite Data Augmentations**). A composite augmentation with a length of l, namely, A <l> i, is defined as the composition of l randomly sampled augmentations from a given set of Na basic augmentation functions Ωa = {a1(·), a2(·), · · · , aNa (·)}: A <l> i = ai(1) ◦ ai(2) *◦ · · ·* ai(l), where i(k) ∈ [1, Na], k = 1, 2 *· · ·* l. We denote the set of all composite augmentations with a length of l by Ω<l> c, and denote the set of all composite augmentations with different lengths by Ωc = Ω<1> c ∪ Ω<2> c*· · ·* . Our studies show that the ordering of the basic augmentations in a composite augmentation does not have a significant impact on performance. To simplify the discussion, we assume that composite augmentations are order invariant, e.g., ai ◦ aj = aj ◦ ai. We represent a composite augmentation with an unique composition vector as shown in Figure 2(b). Definition 2 (**Composition Vector**). The composition vector of a length-l composite augmentation A <l> i is defined as a Na-dimensional vector vi = v(A <l> i) ∈ N Na 0(N0 is the set of all natural numbers), where the jth entry vi[j] is the number of times that the basic augmentation aj is applied in A <l> i, and PNa j=1 vi[j] = l. ## 4.2 Data Augmentation Consistency Data augmentation plays a crucial role in contrastive learning, requiring careful design to expose the latent structure of the original data. However, our key observation is that the standard contrastive learning framework Chen et al. (2020a); He et al. (2020); Chen & He (2021) does not inherently imply data augmentation (DA) consistency, a fundamental property proposed by this work. In the absence of this consistency, the encoder might simply bring views in positive pairs together in the feature space *Z ⊆ S*m−1, irrespective of the actual difference between them. However, a weakly augmented view and a strongly augmented view of an input example may not necessarily share the same latent structure, and it could be advantageous to encode them into different regions in the feature space, as illustrated in Figure 1 (a). In such cases, an encoder attempting to pull positively paired views together would learn an incorrect representation distribution. To this end, we introduce the concept of *DA consistency*. With the *DA consistency* imposed, the encoder is trained not only to cluster positive pairs together but also to map them to their respective optimal locations in the latent feature space Z based on the strength and types of applied augmentations. Consequently, the encoder preserves critical information introduced by data augmentations, such as variances related to brightness, sharpness, and rotation, as shown in Figure 2(a). This characteristic of the pre-trained encoder enhances its performance on various downstream tasks, where such information is crucial for recognition. We quantify a mapped feature space location using *latent similarity* with respect to the raw input data. Definition 3 (**Latent Similarity**). The latent similarity ld(x; *f, A*) of an input x, given an encoder f and a composite augmentation A, is defined as the cosine similarity between the normalized representations of x and A(x) in the feature space Z ⊆ Sm−1: ld(x; *f, A*) = f(x) T· f(A(x)). To be DA consistent, latent-space similarities of different augmented views of the same data shall be monotonically decreasing with the augmentation strength. We learn a mapping, i.e., a parameterized neural network gθd (·) that takes a composition vector v(A) of a composite augmentation A as input and map it to the optimal latent similarity l ∗ d (v(A)), as detailed in Section 4.3. An optimal encoder is considered to be fully DA consistent, if its latent similarity for any given composite augmentation is considered optimal. For encoders that are not fully DA consistent, we further define DA consistency level, as the degree at which the learned representations are consistent in terms of data augmentation. Definition 4 (**Consistency Level**). Given an encoder f, a set of composite augmentations Ωc, and a raw input x, the DA consistency level (DACL) is define as the encoder's deviation from optimal latent similarity: $$\mathrm{DACL}(\mathbf{x};f,\mathbf{\Omega}_{c})=\mathbb{E}_{A\sim\mathbf{\Omega}_{c}}[|l_{d}(\mathbf{x};f,A)-l_{d}^{*}(\mathbf{v}(A))|]$$ $\left(1\right)$. (v(A))|] (1) DACL measures the level of consistency of a given encoder f by comparing the latent similarity of the augmented view of x with the corresponding optimal latent similarity l ∗ d (v(A)) over all possible composite augmentations. With DACL, an encoder fθe (·) parameterized by θe that is not fully DA consistent can be trained to be more DA consistent by minimizing the *consistency loss* defined below: $$\mathcal{L}_{\text{consistent}}(\mathbf{\theta}_{\mathbf{e}})=\mathbb{E}_{\mathbf{x}\sim\mathcal{X},A\sim\Omega_{c}}[[\text{DACL}(\mathbf{x};f_{\mathbf{\theta}_{\mathbf{e}}},\mathbf{\Omega}_{c})]]$$ $$=\mathbb{E}_{\mathbf{x}\sim\mathcal{X},A\sim\Omega_{c}}[[l_{d}(\mathbf{x};f_{\mathbf{\theta}_{\mathbf{e}}},A)-l_{d}^{*}(\mathbf{v}(A))]]$$ $$=\mathbb{E}_{\mathbf{x}\sim\mathcal{X},A\sim\Omega_{c}}[[l_{d}(\mathbf{x};f_{\mathbf{\theta}_{\mathbf{e}}},A)-g_{\mathbf{\theta}_{d}}(\mathbf{v}(A))]]$$ $$\quad(2)$$ Our empirical study shows that an alternative consistency loss provides better performance, details of this alternative is included in Appendices. ## 4.3 Learning Optimal Latent Similarities 4.3.1 Learnable Model For Optimal Latent Similarities To train the encoder on the proposed consistency loss in equation 2, we need to provide the optimal latent similarity l ∗ d (v(A)) for a given composite augmentation A, which is contingent on the strength of A. Nevertheless, defining augmentation strength poses a challenge, as two composite augmentations may comprise different basic augmentation types, making it unclear how to define and compare their strength. To tackle this problem, we learn a mapping, i.e., a parameterized neural network gθd (·) that takes a composition vector as input and map it to the optimal latent similarity l ∗ d (v(A)). This approach relaxes the need for directly modeling the strength of different augmentations. ## 4.3.2 Imposing Monotonicity To The Learnable Model Taking it a step further, we not only approximate the optimal latent similarity l ∗ d with a neural network but also enforce an important monotonic property to better learn the desired optimal latent similarities. As the overall strength of a composite augmentation increases, e.g., by incorporating additional basic augmentations, the similarity between the augmented data and the raw data is expected to decrease. For instance, although directly comparing the effects of two different augmentations GaussianBlur and Sharpness (which blurs the image by a Gaussian kernel and adjusts edge contrast, respectively.) may be challenging, applying Sharpness twice in a composite DA would certainly distort the raw input more than applying it once. [DA Strength Comparison Operator]Thus, as illustrated in Figure 3, formally, we define an operator >˚ for comparing the strength of composition data augmentations in set Ωc: Ai>A˚ j iff v(Ai)>˚v(Aj ), where >˚ operates element wise on the composition vectors: v(Ai)>˚v(Aj ) implies that v(Ai)[k] ≥ v(Aj )[k], ∀k ∈ [1, Na], and ∃k ∈ [1, Na] such that v(Ai)[k] > v(Aj )[k]. In other words, composite augmentation Aiis considered to be stronger than Aj (i.e., Ai>A˚ j ) if and only if the number of times any basic augmentation is applied in Aiis at least that of the same basic augmentation is applied in Aj , and there is at least one basic augmentation used in Ai more times than in Aj . ![5_image_0.png](5_image_0.png) Figure 3: A composite augmentation is considered stronger than another if and only if it includes all components of the latter, along with additional basic augmentations. Our learnable neural network model gθd (·) takes the composition vector of an augmentation Ai as the input, and map it to the optimal latent similarity of l ∗ d (v(Ai)) in the feature space: l ∗ d (v(Ai)) = gθd (v(Ai)). We enforce monotonicity w.r.t. input v(Ai) on gθd (·): gθd (v(Ai)) > gθd (v(Aj )) if v(Ai)>v ˚ (Aj ), ∀Ai, ∀Aj ∈ Ωc. In other words, **stronger data augmentations result in a smaller latent similarity**. The monotonicity is enforced by incorporating monotonic linear embedding layers You et al. (2017) in gθd (·), as shown in Figure 2(b). We refer to gθd (·) as a monotonic mapping neural network (MMNN). ## 4.4 Encoder Training With Consistency The proposed CoCor approach incorporates the consistency loss in equation 2 to form the overall loss function Lu for optimizing the encoder parameter θe given the MMNN's parameter θd on unlabeled data: $${\mathcal{L}}_{u}(\theta_{\mathbf{e}}|\theta_{d})={\mathcal{L}}_{\mathrm{contrast}}(\theta_{\mathbf{e}})+{\mathcal{L}}_{\mathrm{consistent}}(\theta_{\mathbf{e}}|\theta_{d})$$ $$\left({\mathfrak{3}}\right)$$ Lu(θe|θd) = Lcontrast(θe) + Lconsistent(θe|θd) (3) In equation 3, the optimization of the encoder parameters, θe, is contingent upon the MMNN parameters, θd. Consequently, we denote the dependency of θe on θd as θe(θd) and solve a bi-level optimization problem to optimize θd: $$\begin{array}{r l}{\operatorname*{min}_{\theta_{d}}}&{{}\operatorname{CE}(\theta_{e}^{*}(\theta_{d}))}\\ {\theta_{e}^{*}(\theta_{d})=\arg\operatorname*{min}_{\theta_{e}}\mathcal{L}_{u}(\theta_{e}|\theta_{d})}\end{array}$$ (4) $\binom{5}{5}$ . (θd)) (4) (θd) = arg minθe Lu(θe|θd) (5) $$\mathrm{s.t.}$$ The top-level problem in equation 4 optimizes the MMNN and a linear classifier (not explicitly shown in equation 4 based on a cross entropy loss CE on a small amount of labeled data. During each iteration, the bottom-level problem in equation 5 is solved to update the encoder parameter θe on the unlabeled data. We defer the derivation of θe's dependency on θd and update rule for θd to Appendices. We conclude the algorithm flow of the proposed approach in Algorithm 1. Algorithm 1: Algorithm flow of CoCor Input: initial encoder, MMNN, and classifier parameters θ (0) e , θ (0) d, and θ (0) c , number of training epochs N, unlabeled dataloader Du, labeled dataloader Dl. for i=1 to N do 1.Sample unlabeled xu and labeled data (xl, yl) from Du and Dl, respectively. 2.Call equation 5 to update the encoder's parameter θ (i) e on xu with MMNN parameters θ (i−1) d fixed. 3.Call equation 4 to update MMNN θ (i) dand classifier θ (i) c on (xl, yl) with the encoder parameters θ (i) e fixed. Output: Trained encoder with parameters θN e The design of CoCor makes it compatible with various state-of-the-art self-supervised learning frameworks. CoCor can be integrated into an existing method by replacing Lcontrast in equation 3 with the method's specific loss function. The added consistency loss imposes DA consistency and improves the generalizability and transferability of the pre-trained encoder within these frameworks. ## 5 Experimental Studies We demonstrate the performance and generality of CoCor by integrating it into several state-of-the-art contrastive learning methods: MoCo Chen et al. (2020b); He et al. (2020), a dual-encoder approach with a | Method | Epochs IN-100 Cifar10 Cifar100 CUB200 Caltech101 SUN397 Food101 Flowers102 | | | | | Pets | | | | | |-----------------------------|------------------------------------------------------------------------------|-------|-------|-------|-------|--------|-------|-------|-------|-------| | MoCo Chen et al. (2020b) | 200 | 67.04 | 82.15 | 59.17 | 21.52 | 79.23 | 39.43 | 52.97 | 71.77 | 56.94 | | MoCo + CoCor (Ours) | 200 | 71.66 | 83.87 | 59.64 | 22.57 | 81.51 | 41.95 | 54.81 | 75.65 | 60.23 | | SimSiam Chen & He (2021) | 200 | 73.92 | 84.66 | 61.82 | 26.13 | 85.11 | 45.96 | 58.73 | 82.40 | 65.60 | | SimSiam + CoCor (Ours) | 200 | 83.70 | 86.89 | 66.36 | 34.50 | 87.85 | 49.92 | 62.48 | 87.95 | 76.04 | | SupCon Khosla et al. (2020) | 100 | 80.16 | 84.30 | 61.33 | 26.46 | 85.04 | 41.95 | 52.45 | 76.99 | 73.75 | | SupCon + CoCor (Ours) | 100 | 82.14 | 85.49 | 62.18 | 27.40 | 87.12 | 42.87 | 53.93 | 79.04 | 75.95 | Table 1: Top-1 accuracies (%) of linear evaluation. All ResNet-50 backbone encoders are pre-trained on ImageNet-100. large negative memory queue, SimSiam Chen & He (2021), which exclusively leverages positive pairs, and SupCon Khosla et al. (2020), a supervised contrastive learning method that uses labeled data to enhance representation learning. We refer to each of the three methods as a baseline. We compare the performance of encoders pre-trained using these baselines against those pre-trained using a combination of CoCor with the baselines across a variety of datasets for linear evaluation and object detection. CoCor is also compared with recent works which take augmentation-aware information into consideration Zhang & Ma (2022); Lee et al. (2021); Devillers & Lefort (2023) and works which use stronger data augmentations than conventional contrastive learning Lee & Shin (2023); Wang & Qi (2021). To shed more light on the working mechanism of CoCor, we perform post-hoc analysis on latent similarity and potential dimensional collapse, and a number of ablation studies. Unless otherwise noted, all experimental results presented in this paper are reproduced by us. ## 5.1 Experimental Settings For Encoder Pre-Training The encoders of the baseline methods are pre-trained on the large ImageNet-1K Russakovsky et al. (2015) dataset and its subset ImageNet-100 Tian et al. (2020a), under two different backbone encoder architectures ResNet-50 and ResNet-34 He et al. (2016). The encoders of MoCo and SimSiam based methods are pretrained for 200 epochs, while SupCon's encoder undergoes 100 epochs of pre-training. Batch size of all pre-training experiments is set to 256. We follow Khosla et al. (2020); Chen et al. (2020b); Chen & He (2021) for other settings of these baseline methods. For the sake of fair comparison, pre-training with CoCor incorporated follows the same experiment settings as their corresponding baselines. The MMNN in CoCor is a 3-layer MLP with monotonic linear embedding layers You et al. (2017) and ReLU units. While most of the results are demonstrated when using only 1% of the pre-training dataset labels for tuning the MMNN, we also demonstrate the good performance of CoCor in an ablation study where the labeled data usage is significantly further reduced in Table 5.4. CoCor utilizes a basic data augmentation set Ωa, consisting of 14 commonly used augmentation functions as detailed in Appendices. We impose the DA consistency constraint to several composite DA sets Ω <l> c of specific lengths. For most pre-training experiments, we take l = 1, 2, 3, i.e., for each example, we sample three composite DAs of length 1, 2, and 3, and apply them to get three views of data to calculate the consistency loss. More details of data augmentations and pre-training setup are provided in Appendices. ## 5.2 Main Results Linear evaluation Each pre-trained encoder is parameter-frozen and paired with a linear classifier, which is fine-tuned, following the linear evaluation protocol of Krizhevsky et al. (2017), as detailed in Appendices. Linear evaluation is conducted on the following datasets: Cifar-10/100 Krizhevsky et al. (2009), CUB200 Wah et al. (2011), Caltech-101 Fei-Fei et al. (2004), SUN397 Xiao et al. (2010), Food101 Bossard et al. (2014), Flowers102 Nilsback & Zisserman (2008), Oxford-IIIT Pet (Pets) Parkhi et al. (2012), Aircraft Maji et al. (2013), and StanfordCars Krause et al. (2013). The classification accuracies for ImageNet-100 and ImageNet-1K pre-trained encoders are presented in Table 1 and Table 2. Notably, CoCor demonstrates a substantial enhancement in the performance of the | Method | Epochs IN-1K Cifar10 Cifar100 CUB200 Caltech101 SUN397 Food101 Flowers102 | | | | | | Pets | | | | |-----------------------------|-----------------------------------------------------------------------------|-------|-------|-------|-------|-------|--------|-------|-------|-------| | MoCo Chen et al. (2020b) | 200 | 67.56 | 92.24 | 74.33 | 41.54 | 92.00 | 58.28 | 69.84 | 88.86 | 81.74 | | MoCo + CoCor (Ours) | 200 | 72.83 | 93.28 | 76.19 | 44.11 | 93.10 | 60.15 | 71.10 | 90.39 | 83.08 | | SimSiam Chen & He (2021) | 200 | 70.93 | 93.08 | 76.24 | 51.05 | 93.13 | 60.81 | 71.18 | 92.11 | 85.01 | | SimSiam + CoCor (Ours) | 200 | 73.25 | 94.04 | 78.12 | 54.50 | 94.51 | 63.09 | 73.31 | 92.84 | 87.24 | | SupCon Khosla et al. (2020) | 100 | 74.18 | 93.17 | 76.55 | 52.30 | 93.28 | 61.03 | 71.54 | 91.34 | 85.35 | | SupCon + CoCor (Ours) | 100 | 76.24 | 94.61 | 77.91 | 55.73 | 94.66 | 63.80 | 74.09 | 92.17 | 87.21 | | Pre-train | VOC07+12 | | COCO | | | | | | | | |-----------------------------|------------|-------|--------|-------|-------|-------|-------|-------|-------|-------| | Method | Epochs | AP | AP50 | AP75 | AP | AP50 | AP75 | APs | APm | APl | | MoCo He et al. (2020) | 200 | 50.23 | 76.52 | 54.42 | 34.23 | 55.08 | 36.60 | 16.67 | 37.27 | 50.97 | | MoCo + CoCor (Ours) | 200 | 51.11 | 77.41 | 54.88 | 39.13 | 58.30 | 42.19 | 23.19 | 43.45 | 52.13 | | SimSiam Chen & He (2021) | 200 | 52.04 | 77.86 | 57.11 | 34.15 | 55.21 | 36.50 | 15.30 | 37.37 | 50.49 | | SimSiam + CoCor (Ours) | 200 | 54.31 | 80.60 | 59.58 | 34.80 | 56.29 | 37.05 | 17.53 | 39.88 | 51.74 | | SupCon Khosla et al. (2020) | 100 | 47.41 | 75.83 | 53.17 | 34.26 | 55.11 | 36.35 | 14.34 | 37.40 | 50.56 | | SupCon + CoCor (Ours) | 100 | 49.03 | 77.22 | 54.19 | 35.67 | 56.10 | 37.98 | 16.63 | 38.71 | 51.29 | Table 2: Top-1 accuracies (%) of linear evaluation. All ResNet-50 backbone encoders are pre-trained on ImageNet-1K. Table 3: Transfer learning results on VOC and COCO object detection tasks. pre-trained encoder across diverse classification tasks. This improvement suggests that CoCor contributes to an enhanced generalizability of the encoder, achieved by incorporating a wider range of augmentations and by complementing the semantics within the feature space. Object detection We fine-tune the pre-trained encoders on VOC2007+2012 Everingham et al. (2009) and COCO2017 Lin et al. (2014) datasets for object detection downstream tasks. The pre-trained encoders are converted to generalized R-CNN Girshick et al. (2014) detectors with a ResNet50-C4 backbone by using Detectron2 Wu et al. (2019). The models trained on VOC2007 and COCO2007 are subsequently evaluated on the corresponding dataset's test set. In comparison with the baselines, Table 3 shows that CoCor substantially improves accuracy under standard ![8_image_0.png](8_image_0.png) metrics such as AP, AP50, and AP75, suggesting that CoCor is effective in achieving the pre-trained encoders' transferability. Notably, CoCor shows substantial improvements in detecting challenging small and medium object detection, as evidenced by the increased APs and APm scores. Figure 4: (a): Latent similarities under different composite augmentation lengths of encoders pre-trained with and without consistency loss, and (b) singular values of the learned latent presentations, both evaluated on ImageNet-100. | Method | ImageNet100 | CUB200 | Flowers102 | StanfordCars | |-----------------------------------|---------------|----------|--------------|----------------| | SimSiam Chen & He (2021) | 63.90 | 29.10 | 76.10 | 19.60 | | Augself Lee et al. (2021) | 75.96 | 29.96 | 82.96 | 29.41 | | Hierarchical Zhang & Ma (2022) | 67.10 | 33.90 | 83.60 | 20.70 | | EquiMod Devillers & Lefort (2023) | 65.45 | 30.09 | 77.91 | 19.06 | | SimSiam+CoCor (Ours) | 77.76 | 34.07 | 83.88 | 29.93 | Table 4: Comparison of top-1 accuracies (%) in linear evaluation of SimSiam, existing augmentation-aware methods, and CoCor. Encoders (ResNet-34) are trained on ImageNet-100 for 200 epochs. Post-hoc study We visualize the effect of the proposed consistency loss of equation 2 during pre-training by comparing the latent similarity of encoders pre-trained with and without the consistency loss in Table 4 (a), showing the averaged latent similarity of each specific length of composite augmentations on the ImageNet100 dataset. Clearly, the use of CoCor results in larger latent similarity values across a wide range of DA lengths, demonstrating a tighter clustering of image views with the same identify. This aligns with the analysis presented in Wang & Isola (2020), which, even without taking into account the dependency on augmentation strength, suggests that a stronger clustering of views generated by weaker augmentations is correlated with improved encoder performance. Additionally, the image retrieval task results in Figure 1(b) suggest that CoCor learns a more semantically consistent feature space compared to the baseline methods. This introduced consistency ensures that data sharing similar latent semantics is positioned close, while dissimilar instances are kept distant in the feature space. Dimensional collapse evaluation We illustrate the singular values of the latent representations of pretrained encoders on ImageNet-100 by conducting principal component analysis (PCA) on the feature vectors. The occurrence of dimensional collapse is typically associated with observed small singular values. However, as depicted in Figure 4 (b), this is less likely to be the case for the encoders pre-trained by CoCor compared with the two baselines. While stronger data augmentations Jing et al. (2021) and larger datasets Li et al. (2022a) have been suggested to contribute to dimensional collapse, this issue is not observed in the CoCor encoders. In conventional contrastive learning methods, the encoder is forced to be invariant to feature variance introduced by augmentations, potentially leading to constant values in some feature space dimensions. The dimensional collapse problem is mitigated in CoCor, possibly due to the imposed consistency that captures this variance. ## 5.3 Comparison With The State-Of-The-Art Comparison with augmentation-aware methods Recently, several works Devillers & Lefort (2023); Lee et al. (2021); Zhang & Ma (2022) have taken into account augmentation-aware information in contrastive learning. AugSelf Lee et al. (2021) achieves this by training an auxiliary network to predict the difference between augmentations applied to generate positive pairs. Zhang & Ma (2022) encodes an augmented image along with the DA parameters using two networks, and combines the two embeddings to form a feature vector for contrastive loss. EquiMod Devillers & Lefort (2023) trains a network to predict the representation of an augmented view from the original data and the applied DA. A performance comparison between these methods and CoCor on linear evaluation is presented in Table 4. All the encoders have a ResNet-34 architecture and are pre-trained on ImageNet-100 for 200 epochs. Compared to Lee et al. (2021); Zhang & Ma (2022); Devillers & Lefort (2023), CoCor introduces much stronger DA and more diverse DA types. CoCor consistently outperforms them, which may be attributed to its effective utilization of a significantly broader set of augmentations while maintaining the well-specified monotonic property in augmentation strength. Comparison with methods using stronger data augmentations Several recent contrastive learning approaches incorporate the use of stronger data augmentations. CLSA Wang & Qi (2021) employs the distribution divergence between weakly augmented views as supervision for the divergence between a weakly and a strongly augmented view. R´enyiCL Lee & Shin (2023) proposes a new contrastive objective using R´enyi divergence to address strongly augmented data. Table 5 shows the comparison between these methods and | Method | ImageNet1K | CUB200 | Flowers102 | Aircraft | |----------------------------|--------------|----------|--------------|------------| | CLSA Wang & Qi (2021) | 69.4 | 40.20 | 89.97 | 61.24 | | R´enyiCL Lee & Shin (2023) | 72.6 | 45.05 | 90.17 | 61.80 | | MoCo+CoCor (Ours) | 72.8 | 44.11 | 90.39 | 63.18 | Table 5: Comparison of top-1 accuracies (%) in linear evaluation of methods that leverage stronger data augmentations. Encoders (ResNet-50) are trained on ImageNet-1K for 200 epochs. | Method | ImageNet100 | CUB200 | Flowers(5-shot) | Flowers(10-shot) | |-------------------------------------------|---------------|----------|-------------------|--------------------| | MoCo* Chen et al. (2020b) | 81.0 | 36.7 | 67.9 | 77.3 | | LooC(color)* Xiao et al. (2020) | 81.1 | 40.1 | 68.2 | 77.6 | | LooC(rotation)* Xiao et al. (2020) | 80.2 | 38.8 | 70.1 | 79.3 | | LooC(color, rotation)* Xiao et al. (2020) | 79.2 | 39.6 | 70.9 | 80.8 | | MoCo + AugSelf* Lee et al. (2021) | 82.4 | 37.0 | 81.7 | 84.5 | | SimSiam + AugSelf* Lee et al. (2021) | 82.6 | 45.3 | 86.4 | 88.3 | | SimSiam + CoCor (color) | 83.5 | 46.5 | 89.1 | 90.2 | | SimSiam + CoCor (affine) | 81.8 | 45.2 | 83.5 | 86.1 | | SimSiam + CoCor (color+affine) | 83.7 | 47.9 | 88.9 | 89.6 | Table 6: Comparison between LooC, AugSelf, and CoCor. CoCor is pre-trained using basic data augmentation pools, which include color-related augmentations, affine transformations, and their combination. Top-1 Linear evaluation accuracy is reported. Encoders (ResNet-50) are trained on ImageNet-100 for 500 epochs. *Since the official implementation of LooC has not been released, we adopt the results from Lee et al. (2021); Xiao et al. (2020). To ensure a fair comparison, we maintain the same experimental settings for all methods in this table. ours. Here all encoders are ResNet-50 trained for 200 epochs on ImageNet-1K. Unlike R´enyiCL and CLSA, which do not explicitly distinguish between different types of data augmentations (DAs), CoCor addresses this by using the MMNN to differentiate various DA types. Furthermore, CoCor introduces DAs that are not only stronger but also more diverse, implementing consistency across different lengths of DAs, in contrast to R´enyiCL and CLSA, which apply a single length of DA per pre-training process. ## 5.4 Ablation Studies Basic Data Augmentations define the augmentation-aware information to be learned by the encoder. To better understand the role of these data augmentations during pre-training, we separate them into two groups: Ωcolor and Ωaffine. Ωcolor includes augmentations that change the color of each pixel while maintaining its position, such as Brightness and Contrast. Ωaffine comprises affine transformations, such as Rotate and Shear. Composition of Ωcolor and Ωaffine is provided in the Appendices. Using CoCor, we pretrained three encoders for 500 epochs on ImageNet-100 with Ωcolor, Ωaffine, and Ωcolor SΩaffine, respectively. Table 6 presents the linear evaluation results on ImageNet-100 and two fine-grained datasets, CUB-200 and Flowers102. The results indicate that CoCor achieves better overall performance than LooC Xiao et al. (2020) and AugSelf Lee et al. (2021), which have improved transferability to various downstream recognition tasks. Notably, while Ωcolor SΩaffine delivers the best overall results across the tested datasets, refining the basic data augmentations can enhance CoCor's performance on specific downstream tasks. For example, coloraware information is crucial for discriminating flower images in the Flower102 dataset Nilsback & Zisserman (2008), whereas position-related information introduced by affine transformations is less relevant. Thus, applying only color-related augmentations while maintaining invariance to affine transformations improves performance on Flower102. Effectiveness of the MMNN CoCor utilizes an MMNN model to learn the optimal latent similarities l ∗ d of various composite augmentations. To see the benefits of this learnable MMNN model, we replace it by a manually optimized model: we manually search for the best l ∗ d value for each composite augmentation Table 7: Linear evaluation of CoCor models trained with and without MMNN. Encoders (ResNet-50) are trained on ImageNet-100 for 200 epochs. | baseline | w/o MMNN | w/ MMNN | | |-----------------|------------|-----------|-------| | MoCo + CoCor | 67.04 | 69.51 | 71.66 | | SimSiam + CoCor | 73.92 | 82.18 | 83.70 | ![11_image_0.png](11_image_0.png) Figure 5: Linear evaluation of encoders trained by CoCor on (a) MoCo and (b) SimSiam with single length l of composite augmentations and a combination of length 1, 2, and 3. length, determined by extensive trial-and-error, to achieve optimal linear evaluation performance. More detailed experimental settings and results on l ∗ d selection are included in Appendices. We pre-train ResNet-50 encoders built upon SimSiam and MoCo with the MMNN and the manual model based on composite augmentation lengths l = 1, 2, 3, on ImageNet-100 for 200 epochs. The resulting performances are reported in Table 7, which also includes the performances of the MoCo and SimSiam baselines. There is the clear advantages of the encoders trained with the MMNN over the ones without, i.e. with the manual model. Effect of the amount of labeled data in MMNN training We further study the impact of the amount of supervision used to train the MMNN on the encoder performance. We pre-train ResNet-50 CoCor encoders built upon SimSiam on ImageNet-100 for 100 epochs while training the MMNN using 100%, 10%, 1%, and 0.1% of the labeled data in the dataset. The pre-trained backbones are evaluated by linear evaluation on ImageNet-100. Table 8 shows that drastically reducing the amount of labeled MMNN training data from 100% to 0.1% only results in a 0.94% performance drop, and all four CoCor models outperform the baseline SimSiam by more than 15%. Composite augmentation length in pre-training The composite augmentations introduced in our work produce diverse informative views. We test the effect of the strength of composite augmentations by running experiments where only one length of composite augmentation is adopted per pre-training experiment. We run CoCor on SimSiam and MoCo with single augmentation length 1, 2, 3, 4, and a combination of lengths 1, 2, and 3. All encoders are ResNet-50 pre-trained on ImageNet-100 for 50 epochs, and are then evaluated on ImageNet-100. As seen in Table 5, CoCor is able to leverage augmentations of varying lengths, and combining DAs at all these lengths further improves performance. Different from the results presented in Chen et al. (2020a); Tian et al. (2020b), which show that strong augmentations can degrade performance, we have conducted additional experiments with much stronger composite augmentations to demonstrate CoCor's ability in leveraging such augmentations. These results, presented in the Appendices, demonstrate CoCor's capability to benefit from even stronger augmentations compared to recent works that introduce strong augmentations in their methodsLee & Shin (2023); Wang & Qi (2021). | Percentage of labels | 100% | 10% | 1% | 0.1% | SimSiam | |------------------------|--------|-------|-------|--------|-----------| | Top-1 Acc | 75.97 | 75.82 | 75.79 | 75.46 | 59.20 | Table 8: Performance of CoCor models trained with different amounts of labeled data. Encoders (ResNet-50) are trained on ImageNet-100 for 50 epochs. ## 6 Discussion In this paper, we introduce CoCor, a systematic approach to exploring diverse data augmentations in contrastive learning. Our contribution includes the introduction of DA consistency to quantify the dependency of the desired latent similarity on the applied data augmentation. We propose a consistency loss to guide the encoder training towards improved DA consistency. To enforce the DA consistency, we employ a data-driven method to learn the optimal dependency and apply it to the encoder training. The effectiveness of CoCor is substantiated through extensive experimental results on various tasks and datasets. Further details on the MMNN update rule, additional experimental settings, and extended results are provided in the Appendices. As a potential future direction, we suggest exploring the learning of optimal latent similarity with respect to data examples. Specifically, extending the MMNN to take data instances as input could allow for considering the variance of latent similarity caused by data variance. This approach has the potential to learn a more accurate mapping from data augmentation to latent similarity. ## References Pranjal Awasthi, Nishanth Dikkala, and Pritish Kamath. Do more negative samples necessarily hurt in contrastive learning?, 2022. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI 13, pp. 446–461. Springer, 2014. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. *Advances in neural information* processing systems, 33:9912–9924, 2020. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. *arXiv preprint arXiv:2002.05709*, 2020a. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 15750–15758, 2021. Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*, 2020b. Dan Ciregan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In *2012 IEEE conference on computer vision and pattern recognition*, pp. 3642–3649. IEEE, 2012. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In *Proceedings of the IEEE/CVF conference on computer vision and* pattern recognition, pp. 113–123, 2019. Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In *Proceedings of the IEEE/CVF conference on computer* vision and pattern recognition workshops, pp. 702–703, 2020. Alexandre Devillers and Mathieu Lefort. Equimod: An equivariance module to improve visual instance discrimination. In *International Conference on Learning Representations*, 2023. Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *International journal of computer vision*, 88:303–308, 2009. Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pp. 178–178. IEEE, 2004. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In *Proceedings of the IEEE conference on computer vision* and pattern recognition, pp. 580–587, 2014. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729–9738, 2020. Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. Population based augmentation: Efficient learning of augmentation policy schedules. In *International Conference on Machine Learning*, pp. 2731– 2741. PMLR, 2019. Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. *arXiv preprint arXiv:2110.09348*, 2021. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in neural information processing systems, 33:18661–18673, 2020. Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2661–2671, 2019. Jonathan Krause, Jia Deng, Michael Stark, and Li Fei-Fei. Collecting a large-scale dataset of fine-grained cars. 2013. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*, 60(6):84–90, 2017. Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, and Jinwoo Shin. Improving transferability of representations via augmentation-aware self-supervision. *Advances in Neural Information Processing Systems*, 34: 17710–17722, 2021. Kyungmin Lee and Jinwoo Shin. Renyicl: Contrastive representation learning with skew renyi divergence, 2023. Alexander C Li, Alexei A Efros, and Deepak Pathak. Understanding collapse in non-contrastive siamese representation learning. In *European Conference on Computer Vision*, pp. 490–505. Springer, 2022a. Junnan Li, Pan Zhou, Caiming Xiong, and Steven CH Hoi. Prototypical contrastive learning of unsupervised representations. *arXiv preprint arXiv:2005.04966*, 2020. Lujun Li and Anggeng Li. A2-aug: Adaptive automated data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2266–2273, 2023. Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio S Feris, Piotr Indyk, and Dina Katabi. Targeted supervised contrastive learning for long-tailed recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6918–6928, 2022b. Wei Li, Jiahao Xie, and Chen Change Loy. Correlational image modeling for self-supervised visual pretraining. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 15105–15115, 2023. Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. *Advances in* Neural Information Processing Systems, 32, 2019. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, and Ekin D Cubuk. Improving robustness without sacrificing accuracy with patch gaussian augmentation. *arXiv preprint arXiv:1906.02611*, 2019. Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. *arXiv preprint arXiv:1306.5151*, 2013. Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6707–6717, 2020. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *2008 Sixth Indian conference on computer vision, graphics & image processing*, pp. 722–729. IEEE, 2008. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018. Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012. Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V Le. Meta pseudo labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11557–11568, 2021. Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. *arXiv preprint arXiv:2010.04592*, 2020. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211–252, 2015. Ikuro Sato, Hiroki Nishimura, and Kensuke Yokoi. Apac: Augmented pattern classification with neural networks. *arXiv preprint arXiv:1505.03229*, 2015. Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In *Computer Vision–* ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16, pp. 776–794. Springer, 2020a. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? *Advances in neural information processing systems*, 33:6827–6839, 2020b. Rogier Van der Sluijs, Nandita Bhaskhar, Daniel Rubin, Curtis Langlotz, and Akshay S Chaudhari. Exploring image augmentations for siamese representation learning with chest x-rays. In Medical Imaging with Deep Learning, pp. 444–467. PMLR, 2024. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. 2011. Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In *International conference on machine learning*, pp. 1058–1066. PMLR, 2013. Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning*, pp. 9929–9939. PMLR, 2020. Xiao Wang and Guo-Jun Qi. Contrastive learning with stronger augmentations. *arXiv preprint* arXiv:2104.07713, 2021. Zihu Wang, Hanbin Hu, Chen He, and Peng Li. Recognizing wafer map patterns using semi-supervised contrastive learning with optimized latent representation learning and data augmentation. In 2023 IEEE International Test Conference (ITC), pp. 141–150. IEEE, 2023. Xi Weng, Yunhao Ni, Tengwei Song, Jie Luo, Rao Muhammad Anwer, Salman Khan, Fahad Shahbaz Khan, and Lei Huang. Modulate your spectrum in self-supervised learning. *arXiv preprint arXiv:2305.16789*, 2023. Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https: //github.com/facebookresearch/detectron2, 2019. Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Largescale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010. Tete Xiao, Xiaolong Wang, Alexei A Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. *arXiv preprint arXiv:2008.05659*, 2020. Seungil You, David Ding, Kevin Canini, Jan Pfeifer, and Maya Gupta. Deep lattice networks and partial monotonic functions. *Advances in neural information processing systems*, 30, 2017. Yuning You, Tianlong Chen, Yang Shen, and Zhangyang Wang. Graph contrastive learning automated. In International Conference on Machine Learning, pp. 12121–12132. PMLR, 2021. Junbo Zhang and Kaisheng Ma. Rethinking the augmentation module in contrastive learning: Learning hierarchical augmentation invariance with expanded views. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 16650–16659, 2022. Tong Zhang, Congpei Qiu, Wei Ke, Sabine Süsstrunk, and Mathieu Salzmann. Leverage your local and global representations: A new self-supervised learning strategy. In *Proceedings of the IEEE/CVF conference on* computer vision and pattern recognition, pp. 16580–16589, 2022. ## A Improving Cocor With An Alternative Consistency Loss The original consistency loss is defined as: $${\mathcal{L}}_{\mathrm{consistent}}(\mathbf{\theta}_{\mathbf{c}}|\mathbf{\theta}_{\mathbf{d}})=\mathbb{E}_{\mathbf{x}\sim{\mathcal{X}},a_{c}\sim\mathbf{\Omega}_{c}}\left[|l_{d}(\mathbf{x};f_{\mathbf{\theta}_{\mathbf{c}}},a_{c})-g_{\mathbf{\theta}_{d}}(\mathbf{v}_{c}(a_{c}))|\right]$$ $$\mathbf{\partial}$$ (vc(ac))|] (6) However, in our preliminary experiments, we find the following consistency loss leads to better downstream tasks performance than equation 6. $${\mathcal{L}}_{\mathrm{consistent}}(\mathbf{\theta}_{e}|\mathbf{\theta}_{d})=\mathbb{E}_{\Omega_{c}^{c|>}\sim\Omega_{c}}[\mathrm{softplus}[\mathbb{E}_{\mathbf{x}\sim X,a_{c,i}^{c|>}}\sim\Omega_{c}^{c|>}[l_{d}(\mathbf{x};f,a_{c,i}^{c|>})-g_{\mathbf{\theta}_{d}}(\mathbf{w}_{c}(a_{c,i}^{c|>}))]]]$$ ![16_image_0.png](16_image_0.png) Figure 6: Images from a same category of a dataset. Target items may be of different sizes and colors, and may be located at different places in images. Thus same augmentation can have different effects on different images. ImageNet is used for illustration. The motivation is to reduce the high variance among images in visual datasets. The exactly same composite augmentation hence can lead to different latent similarity in the feature space. For instance, as illustrated in Figure 6, target items of images may be of different size and shape, and may be located at different places. Therefore, enforcing all augmented views to share the same optimal similarity may not lead to the best performance. To this end, equation 7 calculates the averaged latent similarity in a minibatch of each length of augmentation composition. Therefore, equation 7 imposes DA consistency constraint on the averaged latent similarity in a batch instead of on each image, which help better handle the variance. Additionally, data augmentation that can greatly distort an images may not be able to largely distort the identity of another one. However, the absolute value loss pushes apart those less distorted views from the representations of the original data. Thus, softplus is used to replace the absolute value calculation because softplus produce small gradient for negative numbers. ## B Derivation Of The Update Rule Of The Mmnn In our implementation, the parameter of the encoder (θe) and the parameter of MMNN (θd) is updated in an alternate manner, as in equation 8. $\theta^{\prime}_{e}=\theta_{e}-\eta_{e}\cdot\nabla_{\theta_{e}}\mathcal{L}_{u}(\theta_{e}|\theta_{d})$ $\theta^{\prime}_{d}=\theta_{d}-\eta_{d}\cdot\nabla_{\theta_{d}}\mathrm{CE}(\theta^{\prime}_{e}(\theta_{d}))$ $$({\mathfrak{s}})$$ With the overall loss function to optimize the encoder, the update of θe with learning rate ηe can be written as: $$\theta_{e}^{\prime}=\theta_{e}-\eta_{e}\cdot(\nabla\theta_{e}\mathcal{L}_{\mathrm{contrast}}+\nabla\theta_{e}\mathcal{L}_{\mathrm{consistent}})$$ e = θe − ηe · (∇θe Lcontrast + ∇θe Lconsistent) (9) By applying chain rule, the update of θd with learning rate ηd can be rewritten as: $$\mathbf{\theta}^{\prime}_{\mathbf{d}}=\mathbf{\theta}_{\mathbf{d}}-\eta_{d}\cdot(\nabla_{\mathbf{\theta}_{\mathbf{d}}}\mbox{CE}(\mathbf{\theta}^{\prime}_{\mathbf{e}}(\mathbf{\theta}_{\mathbf{d}})),$$ where : $$\nabla_{\mathbf{\theta}_{\mathbf{d}}}\mbox{CE}(\mathbf{\theta}^{\prime}_{\mathbf{e}}(\mathbf{\theta}_{\mathbf{d}}))=\frac{\partial}{\partial\mathbf{\theta}^{\prime}_{\mathbf{e}}}\mbox{CE}(\mathbf{\theta}^{\prime}_{\mathbf{e}})\cdot\frac{\partial\mathbf{\theta}^{\prime}_{\mathbf{e}}}{\partial\mathbf{\theta}_{\mathbf{d}}}$$ (10) ``` In equation 10, ∂ ∂θ ′ e CE(θ ′ e ) can be computed via back-propagation. Thus, we focus on the computation of ∂θ ′ e ∂θd . First, we substitute θ ′ e in equation 10 with the one-step updated θ ′ e in equation 9. We also neglect the higher ``` order dependency of θe on θd as adopted in Pham et al. (2021). ∂θ ′ e ∂θd can thus be expanded as below: $$\frac{\partial\theta_{e}^{\prime}}{\partial\theta_{d}}=\frac{\partial}{\partial\theta_{d}}(\theta_{e}-\eta_{e}\cdot(\nabla_{\theta_{e}}\mathcal{L}_{\text{contrust}}+\nabla_{\theta_{e}}\mathcal{L}_{\text{consistent}}))\tag{1}$$ $$=-\eta_{e}\cdot\frac{\partial}{\partial\theta_{d}}\frac{\partial}{\partial\theta_{e}}\mathcal{L}_{\text{consistent}}$$ $$=-\eta_{e}\cdot\frac{\partial}{\partial\theta_{d}}\frac{\partial}{\partial\theta_{e}}\mathbb{E}_{i\in I}[\text{softplus}[\mathbb{E}_{i\in B}\ \left[(f_{\theta_{e}}(x_{i}^{I})^{T}\cdot f_{\theta_{e}}(x_{i}))-g_{\theta_{d}}(w_{e}(a_{e,i}^{<I>}))\right]]$$ $$(11)$$ where x l i = a <l> c,i (xi) denotes the augmented data, ℓ denotes the set of lengths of composite augmentations that are used, and B denotes the set of data indices in a minibatch. For simplification in later section, we define: $$k^{l}=\mathbb{E}_{i\in B}[(f_{\theta_{e}}(\mathbf{x}_{i}^{l})^{T}\cdot f_{\theta_{e}}(\mathbf{x}_{i}))-g_{\theta_{d}}(\mathbf{v}_{c}(a_{c,i}^{<l>})]$$ $$\sin^{l}=\mathbb{E}_{i\in B}[(f_{\theta_{e}}(\mathbf{x}_{i}^{l})^{T}\cdot f_{\theta_{e}}(\mathbf{x}_{i}))]$$ $$\left(12\right)$$ We then combine equation 11 with equation 12: $$\frac{\partial\theta^{\prime}_{e}}{\partial\theta_{d}}=-\eta_{e}\cdot\mathbb{E}_{l\in\ell}[\frac{\partial}{\partial\theta_{d}}(\frac{-e^{k^{\prime}}}{1+e^{k^{\prime}}}\cdot\frac{\partial}{\partial\theta_{e}}\text{sim}^{l})]$$ $$=-\eta_{e}\cdot\mathbb{E}_{l\in\ell}[\frac{e^{k^{\prime}}}{(1+e^{k^{\prime}})^{2}}\cdot\frac{\partial}{\partial\theta_{e}}\text{sim}^{l}\cdot\frac{\partial}{\partial\theta_{d}}\mathbb{E}_{i\in B}[g_{\theta_{d}}(v_{e}(a_{c,i}^{<l>}))]]$$ $$(13)$$ By incorporating equation 13 with equation 10, we have: $$\nabla\theta_{a}\mathrm{CE}(\theta_{e}^{\prime}(\theta_{d}))={\frac{\partial}{\partial\theta_{e}}}\mathrm{CE}(\theta_{e}^{\prime})\cdot\ (-\eta_{e}\cdot\mathbb{E}_{i\in I}[{\frac{e^{k^{\prime}}}{(1+e^{k^{\prime}})^{2}}}\cdot{\frac{\partial}{\partial\theta_{e}}}\mathrm{sin}^{I}\cdot{\frac{\partial}{\partial\theta_{d}}}\mathbb{E}_{i\in B}[g\theta_{a}(\mathbf{v}_{e}(a_{c,i}^{<>}))]])$$ $$\left(14\right)$$ $$(15)$$ c,i ))]]) (14) ``` Additionally, in equation 14, ∂ ∂θ ′ e CE(θ ′ e ) can be approximated with first order Taylor expansion: ``` $$\mathrm{CE}(\theta^{\prime}_{e})-\mathrm{CE}(\theta_{e})\approx(\theta^{\prime}_{e}-\theta_{e})\cdot\frac{\partial}{\partial\theta^{\prime}_{e}}\mathrm{CE}(\theta^{\prime}_{e})\tag{1}$$ Thus, we have: $${\frac{\partial}{\partial\theta_{e}^{\prime}}}\mathrm{CE}(\theta_{e}^{\prime})\approx{\frac{\mathrm{CE}(\theta_{e}^{\prime})-\mathrm{CE}(\theta_{e})}{\theta_{e}^{\prime}-\theta_{e}}}={\frac{\mathrm{CE}(\theta_{e}^{\prime})-\mathrm{CE}(\theta_{e})}{-\eta_{e}\cdot(\nabla\theta_{e}\hat{L}_{\mathrm{contract}}+\nabla\theta_{e}\hat{L}_{\mathrm{contribute}})}}$$ $$(16)$$ We can now rewrite equation 14 as: $$\nabla\mathbf{\theta_{d}}\text{CE}(\mathbf{\theta}_{d}^{\prime}(\mathbf{\theta}_{d}))\approx\frac{\text{CE}(\mathbf{\theta}_{d}^{\prime})-\text{CE}(\mathbf{\theta}_{e})}{(\nabla\mathbf{\theta_{e}}\mathcal{L}_{\text{Contract}}+\nabla\mathbf{\theta_{e}}\mathcal{L}_{\text{Contract}})}\cdot\mathbb{E}_{\xi\in\xi}[\frac{e^{\xi}}{(1+e^{\xi})^{2}}\cdot\frac{\partial}{\partial\mathbf{\theta_{e}}}\text{sin}^{\prime}\cdot\frac{\partial}{\partial\mathbf{\theta_{d}}}\mathbb{E}_{\xi\in\mathbb{B}}[g\mathbf{\theta_{d}}(\mathbf{r_{e}}(a_{c,\xi}^{<1>}))]]\tag{17}$$ $$(18)$$ Further more, equation 17 can be further simplified. Note that ∇θe Lcontrast + ∇θe Lconsistent can be approximated by: $$\nabla\theta_{\mathbf{e}}{\mathcal{L}}_{\mathrm{contrast}}+\nabla\theta_{\mathbf{e}}{\mathcal{L}}_{\mathrm{consistent}}:=\nabla\theta_{\mathbf{e}}{\mathcal{L}}_{u}(\theta_{\mathbf{e}})\approx{\frac{{\mathcal{L}}_{u}(\theta_{\mathbf{e}}^{\prime})-{\mathcal{L}}_{u}(\theta_{\mathbf{e}})}{\theta_{\mathbf{e}}^{\prime}-\theta_{\mathbf{e}}}}$$ Similarly, the approximation of ∂ ∂θe simlcan be written as: $$\frac{\partial}{\partial\mathbf{\theta}_{e}}\mbox{sim}^{l}\approx\frac{\mbox{sim}^{l}(\mathbf{\theta}_{e}^{\prime})-\mbox{sim}^{l}(\mathbf{\theta}_{e})}{\mathbf{\theta}_{e}^{\prime}-\mathbf{\theta}_{e}}$$ where: $\mbox{sim}^{l}(\mathbf{\theta}_{e}^{\prime})=\mathbb{E}_{i\in\mathbb{S}}[(f_{\mathbf{\theta}_{e}^{\prime}}(\mathbf{x}_{i}^{l})^{T}\cdot f_{\mathbf{\theta}_{e}^{\prime}}(\mathbf{x}_{i}))]$ $$(19)$$ Finally, by combining equation 18 and equation 19 with equation 17, ∇θd CE(θ ′ e (θd)) can be rewritten as follows: $$\nabla\mathbf{\theta_{d}}\text{CE}(\mathbf{\theta}^{\prime}_{e}(\mathbf{\theta_{d}}))\approx(\text{CE}(\mathbf{\theta}^{\prime}_{e})-\text{CE}(\mathbf{\theta_{e}}))\cdot\mathbb{E}_{\mathbb{E}\in\mathcal{E}}[\frac{e^{k^{\prime}}}{(1+e^{k^{\prime}})^{2}}\cdot\frac{\sin^{2}(\mathbf{\theta}^{\prime}_{e})-\sin^{i}(\mathbf{\theta_{e}})}{\mathcal{L}_{u}(\mathbf{\theta_{e}})-\mathcal{L}_{u}(\mathbf{\theta_{e}})}\cdot\frac{\partial}{\partial\mathbf{\theta_{d}}}\mathbb{E}_{\mathbb{E}\in\mathbb{B}}[s_{\mathbf{\theta_{d}}}(\mathbf{V}(A))]]\tag{20}$$ Note that in equation 20, all the derivative terms can be calculated directly via back propagation, resulting in a scalable algorithm. ## C Candidate Augmentations In Cocor The set of basic augmentations Ωa used to form composite augmentation consists of 14 different types data augmentations, i.e., Ωa = {AutoContrast, Brightness, Color, Contrast, Rotate, Equalize, Identity, Posterize, Sharpness, ShearX, ShearY, Solarize, TranslateX, TranslateY}. In Section 5.4, we conduct ablation studies on the composition of basic data augmentations, as illustrated in Table 6. In these experiments, Ωa is divided into two sets: Ωcolor and Ωaffine. The compositions of the two sets are as follows: Ωcolor = {AutoContrast, Brightness, Color, Contrast, Equalize, Identity, Posterize, Sharpness, Solarize,} and Ωaffine = {Rotate, Identity, ShearX, ShearY, TranslateX, TranslateY}. ## D Experimental Setup For Pre-Training All pre-training experiment are conducted using stochastic gradient descent (SGD) as the optimizer. A cosine decay scheduler is adopted for scheduling the learning rate. All pre-training adopt a training data batch size of 256. Detailed setups used for each method are as follows. - **MoCo** He et al. (2020); Chen et al. (2020b): We use a starting learning rate of 3 × 10−2, a weight decay of 1×10−4, and a momentum of 0.9 for the optimizer. For the dual encoders of MoCo, feature space dimension is 128, memory queue size is 65536, and the momentum of updating the key encoder is 0.999. For the consistency loss, the query encoder takes both the original data and its augmented view as input. However, the query encoder is stop-gradient when encoding the original input. The encoder receives gradient while encoding the augmented views. - **SimSiam** Chen & He (2021): A starting learning rate of 5×10−2, a weight decay of 1×10−4, and a momentum of 0.9 are used for the SGD optimizer. Predictor and projector and output dimension are both 2048. For the consistency loss, representations of the original data are obtained by letting the data go through the stop-gradient backbone and the projector. The augmented views go through backbone, projector, and predictor for the representations of them. All the three models receive gradient while calculating the representations of augmented views. - **SupCon** Khosla et al. (2020): We use a starting learning rate of 3×10−2, a weight decay of 1×10−4, and a momentum of 0.9 for the optimizer. Feature space dimension of the projection head output is 128. In the consistency loss, the encoder takes both the original data and its augmented view as input. The encoder is stop-gradient when encoding the original input. The encoder receives gradient while calculating representations of the augmented views. ## E Experimental Setup For Linear Evaluation We follow the linear evaluation protocol of Chen et al. (2020a); Lee et al. (2021); Kornblith et al. (2019). ![19_image_0.png](19_image_0.png) A one-layer linear classifier is trained upon the frozen pre-trained backbone. Resize, CentorCrop, and Normalization are used as the training transformations. An L-BFGS optimizer is adopted for minimizing the ℓ2-regularized cross-entropy loss. The optimal regularization parameter is selected on the validation set of each dataset. Finally, models are trained with the optimal regularization parameter, and the obtained test accuracies are reported. Figure 7: Linear performance accuracies of CoCor models trained with only one length of composite augmentation. CoCor with each length of composite augmentation is run with multiple different constant l ∗ d values. ## F Ablation Study On Cocor Without The Mmnn To evaluate the effect of MMNN, we run CoCor where l ∗ d is replaced by fixed constants. In practice, a constant is used for each length of composite augmentation in calculating the consistency loss. We run multiple experiments to select the constant for each length of composite augmentation. All models are ResNet-50 pre-trained on ImageNet-100 for 50 epochs. Figure 7 shows the linear evaluation results of CoCor running with one length of composite DA and with different constant l ∗ d . For each length of composite augmentation, there exists an optimal constant which leads to the best result in linear evaluation. These | ℓ | 1 | 2 | 3 | |---------|------|------|------| | MoCo | 0.80 | 0.75 | 0.65 | | SimSiam | 0.75 | 0.70 | 0.60 | optimal constants are then used for the comparison with the MMNN. The best constants of MoCo and SimSiam are listed in Table 9. Table 9: Constant l ∗ d used for the comparison with the MMNN. ## G Cocor With Much Stronger Data Augmentations | ℓ | 1 | 2 | 3 | 5 | 10 | baseline | |---------|-------|-------|-------|-------|-------|------------| | MoCo | 47.90 | 47.70 | 48.08 | 45.16 | 43.07 | 42.66 | | SimSiam | 54.01 | 54.20 | 55.16 | 53.50 | 52.70 | 44.60 | CoCor is a systematic approach of leveraging the information provided by diverse augmentations. In order to further test the effect of composite augmentations of different strength, we run CoCor on MoCo and SimSiam with single augmentation length of 1, 2, 3, 5, and 10. We train ResNet-50 on ImageNet-100 for 50 epochs. Pre-trained backbones are evaluated on ImageNet-100 for linear classification. Table 10 shows the accuracies of evaluation. Table 10: Linear evaluation of encoders trained by applying consistency loss on only one length l of composite augmentations. As it is shown in Table 10, introducing relatively weaker (shorter) data augmentation in pre-training leads to better performance on downstream tasks. When evaluating the pre-trained models on various downstream tasks, the test data for testing model performance are natural and realistic images. Thus, providing the encoder with less distorted views in pre-training can help the encoder better recognize natural images during inference. It is also noteworthy that CoCor can leverage very strong augmentations of a length of 10 to improve the encoder's performance over baseline, which is even stronger than the data augmentations used in recent works that include strong augmentations such as CLSA Wang & Qi (2021) and R´enyiCL Lee & Shin (2023). Although it has been shown that very strong augmentations which can distort the identity of data too much may bring too much noise to the training process. The noise may result in performance degradation Chen et al. (2020a); Tian et al. (2020b), partial dimensional collapse, or even complete dimensional collapse Li et al. (2022a); Jing et al. (2021). However, CoCor is able to capture essential information from the highly distorted views and enrich the semantics of the feature space using these strong augmentations. ## H **Cocor'S Compatibility With More Baseline Contrastive Learning Methods.** To demonstrate CoCor's compatibility with additional baseline methods, e.g. BYOL Grill et al. (2020) and INTL Weng et al. (2023), we implement CoCor on these additional baseline methods. Linear evaluation results of these encoders across various datasets are presented in Table 11. These results indicate that CoCor not only enhances the generalizability of learned representations from contrastive methods like MoCo, SimSiam, and SupCon but also extends these benefits to other existing contrastive learning approaches. ## I **Training Time Comparison** CoCor applies the proposed consistency loss to views produced by various data augmentations. To address the concern of extra training time consumed by CoCor over its corresponding baseline methods, we compare the pre-training time and model performance in Table 12. Although CoCor requires more time per epoch, Table 11: Top-1 accuracies (%) of linear evaluation. All ResNet-50 backbone encoders are pre-trained on ImageNet-100. | Method | IN-100 | Cifar100 | CUB200 | Caltech101 | SUN397 | Food101 | |--------------------------|----------|------------|----------|--------------|----------|-----------| | BYOL Grill et al. (2020) | 68.05 | 60.13 | 20.27 | 78.90 | 39.03 | 53.01 | | BYOL + CoCor (Ours) | 72.30 | 62.42 | 22.75 | 80.51 | 42.64 | 55.32 | | INTL Weng et al. (2023) | 74.58 | 62.73 | 29.05 | 87.45 | 47.06 | 60.38 | | INTL + CoCor (Ours) | 76.98 | 63.06 | 31.40 | 88.81 | 48.27 | 64.27 | CoCor achieves better performance than its corresponding baselines with shorter training time. Additionally, CoCor takes similar amount of time to train as CLSA Wang & Qi (2021), which uses strong multi-view augmentations. However, CoCor achieves significantly better performance than CLSA with similar training time. | | Epochs | 50 | 100 | 200 | |-------------------------------|-------------|-------------|-------------|-------| | Methods MoCo He et al. (2020) | 13.4h/64.78 | 26.8h/66.81 | 53.6h/67.22 | | | SimSiam Chen & He (2021) | 13.2h/66.89 | 26.4h/68.12 | 52.8h/70.93 | | | CLSA Wang & Qi (2021) | 18.1h/67.95 | 36.2h/69.15 | 72.4h/71.19 | | | MoCo + CoCor | 18.5h/69.26 | 37.0h/70.76 | 74.0h/72.83 | | | SimSiam + CoCor | 18.1h/70.90 | 36.2h/72.03 | 72.4h/73.25 | | Table 12: Training Time Comparison on a machine with 4 A100 GPUs. Training time (in hour)/Top-1 linear evaluation results (in %) on ImageNet-1K is provided for comparison. ## J **Impact Of The Data Augmentation Ordering In Cocor.** | MoCo | CoCor - fixed 1 | CoCor - fixed 2 | | |-----------|-------------------|-------------------|-------| | Top-1 Acc | 67.04 | 71.32 | 71.47 | To evaluate the impact of the ordering of DAs in CoCor, we conducted experiments with various fixed DA orderings. We trained encoders using MoCo + CoCor with specific DA orderings. Results are shown in Table 13. For instance, in configurations CoCor-fixed 1/2, the order of DAs is predetermined. In a fixed order scenario, such as in CoCor-fixed 1, rotation is consistently applied before brightness whenever both augmentations are selected for an image. Results in Table 13 indicate that in CoCor, the ordering of DAs does not significantly affect the encoder's performance in downstream tasks. Thus, in final experiments, we use random DA ordering to explore a wider range of DA variation. Table 13: Linear evaluation results on ImageNet-100 of CoCor models trained with different fixed DA orderings. Encoders are trained on ImageNet-100 for 200 epochs.
Review 1: Summary: The paper presents an approach to addressing a key challenge in contrastive learning: the dependency on the quality of manually chosen data augmentation functions. Recognizing the critical role of data augmentations in providing informative views without explicit labels, the authors introduce Contrastive Learning with Consistent Representations (CoCor). Central to CoCor is the Data Augmentation consistency, which ensures optimal mapping of augmented input data in the representation space, aligned with the intensity of the applied augmentation. The proposed method systematically incorporates diverse data augmentations and learns optimal mapping locations while preserving a monotonic relationship with DA intensity. The experimental results are noteworthy, demonstrating that CoCor significantly enhances the generalizability and transferability of learned representations compared to baseline methods. This approach not only addresses the complexities inherent in data augmentations and representation learning but also offers a systematic solution that could be highly beneficial for future research in the field. Strengths and Weaknesses: Pros: 1. The idea of incorporating data augmentations into the loss function and making the encoder aware of the augmentations is commendable. Unlike AugSelf, this paper delves deeply into designing a metric to rank the augmentations, which is a novel and valuable contribution. Cons: 1. Writing Clarity: The writing of this paper is somewhat confusing. For example, it begins by introducing augmentation in a supervised setting, but this is not connected to the rest of the paper. The sentences in the introduction are somewhat disjointed and the authors tend to use very long and complex sentences. Missing commas exacerbate this issue, making the text difficult to understand. For instance, in Definition 3, the lack of a comma after "A" made it challenging to understand what "A" represents. 2. Formulation. The formulation presented in the paper seems a bit confusing, and the star representing the optimal should not be on the $l_d$, but on the parameter $\theta_d$. 3. Literature Review: The paper misses a very relevant piece of literature. The formulation (learning another neural network as a new metric) and the idea of the proposed method are quite similar to [1], which focuses on crop size as a data augmentation method. This paper, however, extends the focus to general data augmentations. Including and discussing this related work would provide a more comprehensive context for the contributions of the current paper. 4. The justification of stronger augmentations. Simply taking more operations as stronger augmentation is a bit unreasonable, and at least from my humble view, the intensity should be a key value. For example, conducting twice saturations with smaller parameters should be less than one saturation operation with a larger value. 5. The experiments. I found the results in all the tables are much lower than the MoCov2 and SimSiam are much lower than the original paper and results reported by [1] and Augself. [1] T. Zhang C. Qiu, W. Ke, S. Süsstrunk, and M. Salzmann, "Leverage Your Local and Global Representations: A New Self-Supervised Learning Strategy" CVPR 2022 Requested Changes: 1. Discussion on the Difference with Reference[1]. 2. The authors should address the reasons for any degraded results observed. It would be beneficial to explain why the proposed method might underperform in certain scenarios and attempt to replicate the results under the same settings and baseline conditions. This would provide a clearer understanding of the method's strengths and weaknesses and offer insights into potential areas for improvement. 3. The authors should strive to improve the clarity and readability of the paper. Simplifying complex sentences and ensuring proper punctuation would significantly enhance comprehension. Broader Impact Concerns: No need to discuss broader impact concerns. ================================================== Review 2: Summary: This paper focuses on investigating the data augmentation (DA) of contrastive learning (typically the self-supervised learning for vision data). The motivation is that the intricacies of data augmentation and representation learning may lead to a performance degradation if augmentation functions are not judiciously chosen. This paper proposes Contrastive Learning with Consistent Representations (CoCor), including introducing the set of composite augmentations, defining the DA consistency. It further proposes to learn the optimal mapping locations as a function of DA, all while preserving a desired monotonic property relative to DA intensity. Experimental results demonstrate that effectiveness of CoCor in enhancing the generalizability and transferability of learned representations comparing to baseline methods Strengths and Weaknesses: **Strengths:** The motivation of the proposed method is clear. The proposed method intuitively can improve the performance of SSL methods, if it is well designed to exploit the magnitude (of consistency) of data augmentation. The introduce of data augmentation consistency is new to me. This paper is overall well organized, the description is also clear. The experimental results of “Composite augmentation length in pre-training” is interesting. **Weaknesses:** 1. This paper should provide the computation cost of the CoCor. E.g., The overall time cost to train baselines+CoCor, compared to the baselines. Based on the description of CoCor, I feel the proposed method of this paper will introduce significant additional computation cost, due to the bilevel-optimization. I doubt the practicability of the proposed method. 2. This paper claims that “ CoCor achieves state-of-the-art results for various downstream tasks.”. I donot think the current experimental results can support this claim. The experimental results currently are weak: (1) In Section 5.2 ‘Main results”, Why not show the results of linear evaluation on ImageNet-1K itself?, like the experimental setup in Simsiam paper (Chen&He 2021). It is important to show this result; (2) In Table 3, the Simsiam method is significantly lower the Simsiam paper reported, e..g, Simsiam obtains 57.0 AP on VOC07+12 and 39.2 AP on COCO detection reported in Simsiam paper (Chen&He 2021) while 50.23 AP and 34.23 AP only in this paper. I think this paper should clarify it. (3) I donot think MoCO and SimSiam is currently the state-of-the-art SSL baselines, there are many better methods, e..g, Barlow Twins[1], INTL[2] (INTL [2] can obtains 75.2 accuracy of linear evaluation on ImageNet using 200epoch training with strong data argumentation) and 40.7 AP on COCO). This paper should conduct experiments to compare these methods. 3. This paper claims that “Moreover, it can be readily integrated into existing contrastive learning frameworks, effectively imposing DA consistency on the encoder”. I think this paper should conduct more experiments to support this claim, since the experiments are only based on Moco and SimSiam. This paper should conduct more experiments on SSL method using regularization, e.g., Barlow Twins[1], VICReg[3], and INTL [2] 4. This paper claims “Our studies show that the ordering of the basic augmentations in a composite augmentation does not have a significant impact on performance.”. I think this paper should provide some evidence (in Appendix) to support this claim. E.g., how this paper gets the observations., 5. Some notation is not well explained (If I am correct), e.g., (1) What is $N_a$ in the last row of page 4? (2) what is $N_0$ in the Definition 2? **Ref:** [1] Barlow Twins: Self-Supervised Learning via Redundancy Reduction, ICML 2021 [2]Vicreg: Variance-invariance-covariance regularization for self-supervised learning, ICLR 2022 [3 ] MODULATE YOUR SPECTRUM IN SELF-SUPERVISED LEARNING, ICLR, 2024 Requested Changes: My main concern is the practicality of the proposed method, it seems the proposed method is complicated in computation, and this paper does not provide strong results in experiments (see weaknesses) . It is better to add more experimental results to support the claims or reword the claims. Broader Impact Concerns: NA ================================================== Review 3: Summary: Data augmentation (DA) is important in contrastive learning for visual representation. Various kinds of data augmentation have been elaborately studied. However, how to efficiently compose different kinds of DA is rarely considered. In this paper, a measurement of DA consistency is proposed to make the view of stronger augmentation get smaller similarity with the original sample in the feature space, and the strength of the DA is estimated by an auxiliary network named monotonic mapping neural network (MMNN). Experimental results show that the proposed approach can be used as a plug-in module with different contrastive learning frameworks and achieves performance gain compared with them. Strengths and Weaknesses: Strengths: 1. The motivation is clear that the view of stronger augmentation has a smaller similarity with the original sample, which is thoroughly introduced. 2. The experiments show the effectiveness of the proposed DA loss and the monotonic mapping neural network (MMNN). 3. The visualization in Fig. 4(a) verified the latent similarity is larger than the baseline when stronger DA is adopted. Weaknesses: 1. The generalization of the proposed DA consistent is limited as only basic DA could be used, for example, color-related augmentations and affine transformations. 2. The teaser of Fig. 1(a) is overclaimed that the proposed approach cannot force the view of different DA to a regular structure, i.e., the circular ring. It’s better to explain the circular ring which indeed is the contour line of similarity. 3. Why is labeled data used for the monotonic mapping neural network? In my opinion, the consistency loss can be computed directly by ld(x; f, A1) - ld(x; f, A2), where A1 is stronger than A2 from the perspective of Fig. 3. Requested Changes: 1. The MoCo, SimSiam, and SupCon are used in Table 1 to show the proposed DA consistent can be used together with different contrastive learning frameworks. All three frameworks are better compared in Table 2 to Table 7. In Table 2, as the SupCon is not compared, I would doubt that the DA consistently didn’t work.   2. It’s hard to understand the illustration of dimensional collapse evaluation in Fig. 4(b). 3. The strength of DA is measured with the length of DA in Section G in the appendix. However, it is measured by the monotonic mapping neural network. It’s not consistent. 4. Some numbers in the experiment section are non-corresponding, like 83.8 in Table 6 and 83.70 in Table 7. Broader Impact Concerns: No concern. ================================================== Metareview: Recommendation: Accept as is Comment: Despite expressing some concerns in their final evaluation, including w.r.t. the generality/uniformity of the approach across different augmentations, some reported baseline numbers, and the computational cost of the method, the reviewers all acknowledge that the paper introduces some interesting ideas. ==================================================
# Tight Conditions For When The Ntk Approximation Is Valid Enric Boix-Adsera *eboix@mit.edu* MIT Electrical Engineering and Computer Science Apple Etai Littwin *elittwin@apple.com* Apple Reviewed on OpenReview: *https: // openreview. net/ forum? id= qM7JPBYROr* ## Abstract We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. (2019), we show that rescaling the model by a factor of α = O(T) suffices for the NTK approximation to be valid until training time T. Our bound is tight and improves on the previous bound of Chizat et al. (2019), which required a larger rescaling factor of α = O(T 2). ## 1 Introduction In the modern machine learning paradigm, practitioners train the weights w of a large neural network model fw : R din → R dout via a gradient-based optimizer. Theoretical understanding lags behind, since the training dynamics are non-linear and hence difficult to analyze. To address this, Jacot et al. (2018) proposed an approximation to the dynamics called the *NTK approximation*, and proved it was valid for infinitely-wide networks trained by gradient descent1. The NTK approximation has been extremely influential, leading to theoretical explanations for a range of questions, including why deep learning can memorize training data (Du et al., 2018; 2019; Allen-Zhu et al., 2019a;b; Arora et al., 2019a; Cao & Gu, 2019; Lee et al., 2019), why neural networks exhibit spectral bias (Cao et al., 2019; Basri et al., 2020; Canatar et al., 2021), and why different architectures generalize differently (Bietti & Mairal, 2019; Mei et al., 2021; Wang et al., 2021). Nevertheless, in practice the training dynamics of neural networks often diverge from the predictions of the NTK approximation (see, e.g., Arora et al. (2019b)) Therefore, it is of interest to understand exactly under which conditions the NTK approximation holds. In this paper, we ask the following question: Can we give tight conditions for when the NTK approximation is valid? ## 1.1 The "Lazy Training" Setting Of Chizat Et Al. **(2019)** The work of Chizat et al. (2019) showed that the NTK approximation actually holds for training any differentiable model, as long as the model's outputs are *rescaled* so that the model's outputs change by a large amount even when the weights change by a small amount. The correctness of the NTK approximation for infinite-width models is a consequence of this observation, because by the default the model is rescaled as the width tends to infinity; see the related work in Section 1.3 for more details. Rescaling the model Let h : R p → F be a smoothly-parameterized model, where F is a separable Hilbert space. Let α > 0 be a parameter which controls the rescaling of the model and which should be thought of as large. We train the rescaled model αh with gradient flow to minimize a smooth loss function R : F → R+. 2 1Under a specific scaling of the initialization and learning rate as width tends to infinity. 2We use the Hilbert space notation as in Chizat et al. (2019). We can recover the setting of training a neural network fw : Rd → R on a finite training dataset {(x1, y1), . . . , (xn, yn)} ⊆ Rd × R with empirical loss function L(w) = 1n Pn i=1 ℓ(fw(xi), yi) as follows. Let H = Rn be the Hilbert space, let h(w) = [fw(x1)*, . . . , f*w(xn)], and let R(v) = 1n Pn i=1 ℓ(vi, yi). Namely, the weights w(t) ∈ R p are initialized at w(0) = w0 and evolve according to the gradient flow $$\frac{d\mathbf{w}}{dt}=-\frac{1}{\alpha^{2}}\nabla_{\mathbf{w}}R(\alpha h(\mathbf{w}(t)))\,.\tag{1}$$ .) $\blacksquare$ $$\mathfrak{h}$$ NTK approximation Define the linear approximation of the model around the initial weights w0 by $$\bar{h}(\mathbf{w})=h(\mathbf{w}_{0})+D h(\mathbf{w}_{0})(\mathbf{w}-\mathbf{w}_{0})\,,$$ h¯(w) = h(w0) + Dh(w0)(w − w0), (2) where Dh is the first derivative of h in w. Let w¯ (t) be weights initialized at w¯ (0) = w0 that evolve according to the gradient flow from training the rescaled linearized model αh¯: $$\frac{d\bar{\mathbf{w}}}{dt}=-\frac{1}{\alpha^{2}}\nabla_{\bar{\mathbf{w}}}R(\alpha\bar{h}(\bar{\mathbf{w}}(t)))\,.\tag{1}$$ $\mathrm{T}$b. The NTK approximation states that In other words, it states that the linearization of the model h is valid throughout training. This allows for much simpler analysis of the training dynamics since the model h¯ is linear in its parameters, and so the evolution of h¯(w¯ ) can be understood via a kernel gradient flow in function space. $$\alpha h(\mathbf{w}(t))\approx\alpha\bar{h}(\bar{\mathbf{w}}(t)).$$ When is the NTK approximation valid? Chizat et al. (2019) proves that if the rescaling parameter α is large, then the NTK approximation is valid. The intuition is that the weights do not need to move far from their initialization in order to change the output of the model significantly, so the linearization (2) is valid for longer. Since the weights stay close to initialization, Chizat et al. (2019) refer to this regime of training as "lazy training." The following bound is proved.3 Here $$R_{0}=R(\alpha h(\mathbf{w}_{0}))$$ is the loss at initialization, and $\theta$. $$\kappa=\frac{T}{\alpha}\mathrm{Lip}(D h)\sqrt{R_{0}}\,,$$ $$\left(4\right)$$ is a quantity that will also appear in our main results. Proposition 1.1 (Theorem 2.3 of Chizat et al. (2019)). Let R(y) = 12 ∥y − y ∗∥ 2*be the square loss, where* y ∗ ∈ F *are the target labels. Assume that* h is Lip(h)*-Lipschitz and that* Dh is Lip(Dh)*-Lipschitz in a ball of* radius ρ around w0. Then, for any time 0 ≤ T ≤ αρ/(Lip(h) √R0), $$\|\alpha h(\mathbf{w}(T))-\alpha\bar{h}(\bar{\mathbf{w}}(T))\|\leq T\mathrm{Lip}(h)^{2}\kappa\sqrt{R_{0}}\,.$$ 2κpR0 . (4) Notice that as we take the rescaling parameter α to infinity, then κ goes to 0, so the right-hand-side of (4) is small and the NTK approximation is valid. ## 1.2 Our Results Our contribution is to refine the bound of Chizat et al. (2019) for large time scales. We prove: Theorem 1.2 (NTK approximation error bound). Let R(y) = 12 ∥y − y ∗∥ 2*be the square loss. Assume that* Dh is Lip(Dh)-Lipschitz in a ball of radius ρ around w0*. Then, at any time* 0 ≤ T ≤ α 2ρ 2/R0, $$\|\alpha h(\mathbf{w}(T))-\alpha\bar{h}(\bar{\mathbf{w}}(T))\|\leq\operatorname*{min}(6\kappa\sqrt{R_{0}},\sqrt{8R_{0}})\,.$$ ∥αh(w(T)) − αh¯(w¯ (T))∥ ≤ min(6κpR0,p8R0). (5) Furthermore, the converse is true. Our bound is tight up to a constant factor. Theorem 1.3 (Converse to Theorem 1.2). For any *α, T,* Lip(Dh), and R0*, there is a model* h : R → R, an initialization w0 ∈ R*, and a target* y ∗ ∈ R *such that, for the risk* R(y) = 12 (y − y ∗) 2*, the initial risk is* R(αh(w0)) = R0*, the derivative map* Dh is Lip(Dh)*-Lipschitz, and* $$\|\alpha h(w(T))-\alpha\bar{h}(\bar{w}(T))\|\geq\operatorname*{min}(\frac{1}{5}\kappa\sqrt{R_{0}},\frac{1}{5}\sqrt{R_{0}})\,.$$ 3See Section 1.3 for discussion on the other results of Chizat et al. (2019). $\left(5\right)^2$ Comparison to Chizat et al. **(2019)** In contrast to our theorem, the bound (4) depends on the Lipschitz constant of h, and incurs an extra factor of TLip(h) 2. So if Lip(Dh), Lip(h), and R0 are bounded by constants, our result shows that the NTK approximation (up to O(ϵ) error) is valid for times T = O(αϵ), while the previously known bound is valid for T = O( √αϵ). Since the regime of interest is training for large times T ≫ 1, our result shows that the NTK approximation holds for much longer time horizons than previously known. ## 1.3 Additional Related Literature Other results of Chizat et al. **(2019)** In addition to the bound of Proposition 1.1 above, Chizat et al. (2019) controls the error in the NTK approximation in two other settings: (a) for general losses, but α must be taken exponential in T, and (b) for strongly convex losses and infinite training time T, but the problem must be "well-conditioned." We work in the setting of Proposition 1.1 instead, since it is more aligned with the situation in practice, where we have long training times and the problem is ill-conditioned. Indeed, the experiments of Chizat et al. (2019) report that for convolutional neural networks on CIFAR10 trained in the lazy regime, the problem is ill-conditioned, and training takes a long time to converge. Other works on the validity of the NTK approximation The NTK approximation is valid for infinitelywide neural networks under a certain choice of hyperparameter scaling called the "NTK parametrization" Jacot et al. (2018). However, there is another choice of hyperparameter scaling, called the "mean-field parametrization", under which the NTK approximation is not valid at infinite width (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Sirignano & Spiliopoulos, 2022; Mei et al., 2018; 2019; Yang & Hu, 2021). It was observed by Chizat et al. (2019) that one can interpolate between the "NTK parametrization" and the "mean-field parametrization" by varying the lazy training parameter α. This inspired the works Woodworth et al. (2020); Geiger et al. (2020; 2021), which study the effect of interpolating between lazy and non-lazy training by varying α. Most work points towards provable benefits of non-lazy training (Allen-Zhu & Li, 2019; Bai & Lee, 2019; Bai et al., 2020; Chen et al., 2020; Nichani et al., 2022; Ghorbani et al., 2020; Malach et al., 2021; Abbe et al., 2022; 2023; Mousavi-Hosseini et al., 2022; Damian et al., 2022; Bietti et al., 2022; Ba et al., 2022) although interestingly there are settings where lazy training provably outperforms non-lazy training (Petrini et al., 2022). Finally, our results do not apply to ReLU activations because we require twice-differentiability of the model as in Chizat et al. (2019). It is an interesting future direction to prove such an extension. One promising approach could be to adapt a technique of Cao & Gu (2020), which analyzes ReLU network training in the NTK regime by showing in Lemma 5.2 that around initialization the model is "almost" linear and "almost" smooth, even though these assumptions are not strictly met because of the ReLU activations. ## 2 Application To Neural Networks The bound in Theorem 1.2 applies to lazy training of any differentiable model. As a concrete example, we describe its application to neural networks (a similar application was presented in Chizat et al. (2019)). We parametrize the networks in the mean-field regime, so that the NTK approximation is not valid even as the width tends to infinity. Therefore, the NTK approximation is valid only when we train with lazy training. Let fw : R d → R be a 2-layer network of width m in the mean-field parametrization Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2022); Mei et al. (2018; 2019), with activation function σ : R → R, $$f_{\mathbf{w}}(\mathbf{x})={\frac{1}{\sqrt{m}}}\sum_{i=1}^{m}a_{i}\sigma({\sqrt{m}}\langle\mathbf{x}_{i},\mathbf{u}_{i}\rangle)\,.$$ The weights are w = (a, U) for a = [a1*, . . . , a*m] and U = [u1*, . . . ,*um]. These are initialized at w0 with i.i.d. Unif[−1/ √m, 1/ √m] entries. Given training data (x1, y1), . . . ,(xn, yn), we train the weights of the network with the mean-squared loss $${\mathcal{L}}(\mathbf{w})={\frac{1}{n}}\sum_{i=1}^{n}\ell(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i}),\quad\ell(a,b)={\frac{1}{2}}(a-b)^{2}\,.$$ 2. (6) In the Hilbert space notation, we let H = R n, so that the gradient flow training dynamics with loss (6) correspond to the gradient flow dynamics (1) with the following model and loss function $$h(\mathbf{w})={\frac{1}{\sqrt{n}}}[f_{\mathbf{w}}(\mathbf{x}_{1}),\ldots,f_{\mathbf{w}}(\mathbf{x}_{n})]\in\mathbb{R}^{n},\quad R(\mathbf{v})={\frac{1}{2}}\|\mathbf{v}-{\frac{\mathbf{y}}{\sqrt{n}}}\|^{2}\,.$$ Under some regularity assumptions on the activation function (which are satisfied, for example, by the sigmoid function) and some bound on the weights, it holds that Lip(Dh) is bounded. Lemma 2.1 (Bound on Lip(Dh) for mean-field 2-layer network). Suppose that there is a constant K *such that* (i) the activation function σ *is bounded and has bounded derivatives* ∥σ∥∞, ∥σ ′∥∞, ∥σ ′′∥∞, ∥σ ′′′∥∞ ≤ K, (ii) the weights have bounded norm ∥a∥ + ∥U∥ ≤ K*, and (iii) the data points have bounded norm* maxi ∥xi∥ ≤ K. Then there is a constant K′ depending only K *such that* $$({\mathfrak{f}}{\mathfrak{o}})$$ $$\mathrm{Lip}(D h)\leq K^{\prime}.$$ $\square$ Proof. See Appendix C. Note that our bounds hold at any finite width of the neural network, because we have taken initialization uniformly bounded in the interval [−1/ √m, 1/ √m]. Since the assumptions of Theorem 1.2 are met, we obtain the following corollary for the lazy training dynamics of the 2-layer mean-field network. Corollary 2.2 (Lazy training of 2-layer mean-field network). Suppose that the conditions of Lemma *2.1, and* also that the labels are bounded in norm ∥y∥ ≤ √nK. Then there are constants c, C > 0 *depending only on* K *such that for any time* 0 ≤ T ≤ cα2, $$\|\alpha h(\mathbf{w}(T))-\alpha\bar{h}(\bar{\mathbf{w}}(T))\|\leq C\operatorname*{min}(T/\alpha,1)\,.$$ Notice that training in the NTK parametrization corresponds to training the model √mfw, where fw is the network in the mean-field parametrization. This amounts to taking the lazy training parameter α = √m in the mean-field setting. Therefore, under the NTK parametrization with width m, the bound in Corollary 2.2 shows that the NTK approximation is valid until training time O(m) and the error bound is O(T /√m). ## 3 Proof Ideas 3.1 Proof Ideas For Theorem 1.2 Proof of Chizat et al. **(2019)** In order to give intuition for our proof, we first explain the idea behind the proof in Chizat et al. (2019). Define residuals r(t), r¯(t) ∈ F under training the original rescaled model and the linearized rescaled model as r(t) = y ∗ − αh(w(t)) and r¯(t) = y ∗ − αh¯(w¯ (t)). It is well known that these evolve according to $$\frac{d r}{d t}=-K_{t}r\;\;\;\;\;\mathrm{and}\;\;\;\;\;\frac{d\bar{r}}{d t}=-K_{0}\bar{r}\,,$$ for the time-dependent kernel Kt : *F → F* which is the linear operator given by Kt := Dh(w(t))Dh(w(t))⊤. To compare these trajectories, Chizat et al. (2019) observes that, since K0 is p.s.d., $$\frac{1}{2}\frac{d}{d t}\|r-\bar{r}\|^{2}=-\langle r-\bar{r},K_{t}r-K_{0}\bar{r}\rangle\leq-\langle r-\bar{r},(K_{t}-K_{0})r\rangle\,,$$ which, dividing both sides by ∥r − r¯∥ and using that ∥r∥ ≤ √R0 implies $$\frac{d}{d t}\|r-\bar{r}\|\leq\|K_{t}-K_{0}\|\|r\|\leq2\mathrm{Lip}(h)\mathrm{Lip}(D h)\|\mathbf{w}-\mathbf{w}_{0}\|\sqrt{R_{0}}\,.$$ Using the Lipschitzness of the model, Chizat et al. (2019) furthermore proves that the weight change is bounded by ∥w(t) − w0∥ ≤ t √R0Lip(h)/α. Plugging this into (7) yields the bound in Proposition 1.1, $$\|\alpha h(\mathbf{w}(T))-\alpha\bar{h}(\bar{\mathbf{w}}(T))\|=\|r(T)-\bar{r}(T)\|\leq2\text{Lip}(h)^{2}\text{Lip}(Dh)R_{0}\alpha^{-1}\int_{0}^{T}tdt$$ $$=T^{2}\text{Lip}(h)^{2}\text{Lip}(Dh)R_{0}/\alpha\,.$$ $\square$ First attempt: strengthening of the bound for long time horizons We show how to strengthen this bound to hold for longer time horizons by using an improved bound on the movement of the weights. Consider the following bound on the weight change. Proposition 3.1 (Bound on weight change, implicit in proof of Theorem 2.2 in Chizat et al. (2019)). $\|\mathbf{w}(T)-\mathbf{w}_{0}\|\leq\sqrt{TR_{0}}/\alpha\quad\quad and\quad\|\bar{\mathbf{w}}(T)-\mathbf{w}_{0}\|\leq\sqrt{TR_{0}}/\alpha\,.$ Proof of Proposition *3.1.* By (a) Cauchy-Schwarz, and (b) the nonnegativity of the loss R, $$\|\mathbf{w}(T)-\mathbf{w}(0)\|\leq\int_{0}^{T}\|\frac{d\mathbf{w}}{dt}\|dt\leq\sqrt{T\int_{0}^{T}\|\frac{d\mathbf{w}}{dt}\|^{2}dt}=\sqrt{-\frac{T}{\alpha^{2}}\int_{0}^{T}\frac{d}{dt}R(\alpha h(\mathbf{w}(t)))dt}$$ $$\overset{(b)}{\leq}\sqrt{TR_{0}}/\alpha\,.$$ The bound for w¯ is analogous. This bound (8) has the benefit of √t dependence (instead of linear t dependence), and also does not depend on Lip(h). So if we plug it into (7), we obtain $$\begin{split}\|\alpha h(\mathbf{w}(T))-\alpha\bar{h}(\bar{\mathbf{w}}(T))\|&\leq2\text{Lip}(h)\text{Lip}(Dh)R_{0}\alpha^{-1}\int_{0}^{T}\sqrt{t}dt\\ &=\frac{4}{3}T^{3/2}\text{Lip}(h)\text{Lip}(Dh)R_{0}/\alpha\,.\end{split}$$ **we have $1.1$ for $\alpha$-function** $\alpha$**.** $$({\boldsymbol{\delta}})$$ $$\square$$ This improves over Proposition 1.1 for long time horizons since the time dependence scales as T 3/2instead of as T 2. However, it still depends on the Lipschitz constant Lip(h) of h, and it also falls short of the linear in T dependence of Theorem 1.2. Second attempt: new approach to prove Theorem 1.2 In order to avoid dependence on Lip(h) and obtain a linear dependence in T, we develop a new approach. We cannot use (7), which was the core of the proof in Chizat et al. (2019), since it depends on Lip(h). Furthermore, in order to achieve linear T dependence using (7), we would need that ∥w − w0∥ = O(1) for a constant that does not depend on the time horizon, which is not true unless the problem is well-conditioned. In the full proof in Appendix A, we bound ∥r(T) − r¯(T)∥ = ∥αh(w(T)) − αh¯(w¯ (T))∥, which requires working with a product integral formulation of the dynamics of r to handle the time-varying kernels Kt (Dollard & Friedman, 1984). The main technical innovation in the proof is Theorem A.8, which is a new, general bound on the difference between product integrals. To avoid the technical complications of the appendix, we provide some intuitions here by providing a proof of a simplified theorem which does not imply the main result. We show: Theorem 3.2 (Simplified variant of Theorem 1.2). *Consider* r ′(t) ∈ F *which is initialized as* r ′(0) = r(0) and evolves as dr′ dt = −KT r ′*. Then,* $$\|r^{\prime}(T)-\bar{r}(T)\|\leq\operatorname*{min}(3\kappa\sqrt{R_{0}},\sqrt{8R_{0}})\,.$$ ′(T) − r¯(T)∥ ≤ min(3κpR0,p8R0). (9) $\left(9\right)$. It is at the final time t = T that the kernel Kt can differ the most from K0. So, intuitively, if we can prove in Theorem 3.2 that r ′(T) and r¯(T) are close, then the same should be true for r(T) and r¯(T) as in Theorem 1.2. For convenience, define the operators $$A=D h(\mathbf{w}_{0})^{\top}\quad{\mathrm{and}}\quad B=D h(\mathbf{w}(T))^{\top}-D h(\mathbf{w}_{0})^{\top}\,.$$ Since the kernels do not vary in time, the closed-form solution is $$r^{\prime}(t)=e^{-(A+B)^{\top}(A+B)t}r(0)\quad{\mathrm{~and~}}\quad{\bar{r}}(t)=e^{-A^{\top}A t}r(0)$$ We prove that the time evolution operators for r ′ and r¯ are close in operator norm. Lemma 3.3. For any t ≥ 0*, we have* ∥e −(A+B) ⊤(A+B)t − e −A ⊤At∥ ≤ 2∥B∥ √t. Proof of Lemma *3.3.* Define Z(ζ) = −(A + ζB) ⊤(A + ζB)t. By the fundamental theorem of calculus $$\|e^{-(A+B)^{\top}(A+B)t}-e^{-A^{\top}At}\|=\|e^{Z(1)}-e^{Z(0)}\|=\|\int_{0}^{1}\frac{d}{d\zeta}e^{Z(\zeta)}d\zeta\|\leq\sup_{\zeta\in[0,1]}\|\frac{d}{d\zeta}e^{Z(\zeta)}\|.$$ Using the integral representation of the exponential map (see, e.g., Theorem 1.5.3 of (Dollard & Friedman, 1984)), $$\|\frac{de^{Z(\zeta)}}{d\zeta}\|=\|\int_{0}^{1}e^{(1-\tau)Z(\zeta)}(\frac{d}{d\zeta}Z(\zeta))e^{\tau Z(\zeta)}d\tau\|$$ $$=\|t\int_{0}^{1}e^{(1-\tau)Z(\zeta)}(A^{\intercal}B+B^{\intercal}A+2\zeta B^{\intercal}B)e^{\tau Z(\zeta)}d\tau\|$$ $$\leq\underbrace{\|t\int_{0}^{1}e^{(1-\tau)Z(\zeta)}(A+\zeta B)^{\intercal}Be^{\tau Z(\zeta)}d\tau\|}_{\text{(Term1)}}+\underbrace{\|t\int_{0}^{1}e^{(1-\tau)Z(\zeta)}B^{\intercal}(A+\zeta B)e^{\tau Z(\zeta)}d\tau\|}_{\text{(Term2)}}\,.$$ By symmetry under transposing and reversing time, (Term 1) = (Term 2), so it suffices to bound the first term. Since ∥e τZ(ζ)∥ ≤ 1, (Term 1) ≤ t Z 1 0 ∥e (1−τ)Z(ζ)(A + ζB) ⊤∥∥B∥∥e τZ(ζ)∥dτ ≤ t∥B∥ Z 1 0 ∥e (1−τ)Z(ζ)(A + ζB) ⊤∥dτ = t∥B∥ Z 1 0 q∥e (1−τ)Z(ζ)(A + ζB)⊤(A + ζB)e (1−τ)Z(ζ)∥dτ = √t∥B∥ Z 1 0 q∥e (1−τ)Z(ζ)Z(ζ)e (1−τ)Z(ζ)∥dτ ≤ √t∥B∥ Z 1 0 sup λ≥0 pλe−2(1−τ)λdτ = √t∥B∥ Z 1 0 p1/(2e(1 − τ ))dτ =p2t/e∥B∥ . where in the third-to-last line we use the Courant-Fischer-Weyl theorem and the fact that Z(ζ) is negative semidefinite. Combining these bounds ∥e −(A+B) ⊤(A+B)t − e −A ⊤At∥ ≤ 2p2t/e∥B∥ ≤ 2∥B∥ √t. Finally, let us combine Lemma 3.3 with the weight-change bound in Proposition 3.1 to prove Theorem 3.2. Notice that the weight-change bound in Proposition 3.1 implies ∥B∥ ≤ Lip(Dh)∥w(T) − w0∥ ≤ Lip(Dh)pT R0*/α .* So Lemma 3.3 implies ∥r ′(T) − r¯(T)∥ ≤ 2Lip(Dh)TpR0α −1∥r(0)∥ = 2κ∥r(0)∥. Combining this with ∥r ′(T) − r¯(T)*∥ ≤ ∥*r ′(T)∥ + ∥r¯(T)∥ ≤ 2∥r(0)∥ = 2√2R0 implies (9). Thus, we have shown Theorem 3.2, which is the result of Theorem 1.2 if we replace r by r ′. The actual proof of the theorem handles the time-varying kernel Kt, and is in Appendix A. ## 3.2 Proof Ideas For Theorem 1.3 The converse in Theorem 1.3 is achieved in the simple case where h(w) = aw + 1 2 bw2for a = √ 1 T and b = Lip(Dh), and w0 = 0 and R(y) = 12 (y − √2R0) 2, as we show in Appendix B by direct calculation. ## 4 Discussion A limitation of our result is that it applies only to the gradient flow, which corresponds to SGD with infinitesimally small step size. However, larger step sizes are beneficial for generalization in practice (see, e.g., Li et al. (2021); Andriushchenko et al. (2022)), so it would be interesting to understand the validity of the NTK approximation in that setting. Another limitation is that our result applies only to the square loss, and not to other popular losses such as the cross-entropy loss. Indeed, the known bounds in the setting of general losses require either a "well-conditioning" assumption, or taking α exponential in the training time T (Chizat et al., 2019). Can one prove bounds of analogous to Theorem 1.2 for more general losses, with α depending polynomially on T, and without conditioning assumptions? A natural question raised by our bounds in Theorems 1.2 and 1.3 is: how do the dynamics behave just outside the regime where the NTK approximation is valid? For models h where Lip(h) and Lip(Dh) are bounded by a constant, can we understand the dynamics in the regime where T ≥ Cα for some large constant C and α ≫ C, at the edge of the lazy training regime? ## Acknowledgements We thank Emmanuel Abbe, Samy Bengio, and Joshua Susskind for stimulating and helpful discussions. EB also thanks Apple for the company's generous support through the AI/ML fellowship. ## References Emmanuel Abbe, Enric Boix-Adsera, and Theodor Misiakiewicz. The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks. In Conference on Learning Theory, pp. 4782–4887. PMLR, 2022. Emmanuel Abbe, Enric Boix-Adsera, and Theodor Misiakiewicz. Sgd learning on neural networks: leap complexity and saddle-to-saddle dynamics. *arXiv preprint arXiv:2302.11055*, 2023. Zeyuan Allen-Zhu and Yuanzhi Li. What can resnet learn efficiently, going beyond kernels? *Advances in* Neural Information Processing Systems, 32, 2019. Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. *Advances in neural information processing systems*, 32, 2019a. Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In *International Conference on Machine Learning*, pp. 242–252. PMLR, 2019b. Maksym Andriushchenko, Aditya Varre, Loucas Pillaud-Vivien, and Nicolas Flammarion. Sgd with large step sizes learns sparse features. *arXiv preprint arXiv:2210.05337*, 2022. Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In *Proceedings of the 36th International* Conference on Machine Learning, pp. 322–332, 2019a. Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In *Advances in Neural Information Processing Systems*, 2019b. Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, and Greg Yang. High-dimensional asymptotics of feature learning: How one gradient step improves the representation. arXiv preprint arXiv:2205.01445, 2022. Yu Bai and Jason D Lee. Beyond linearization: On quadratic and higher-order approximation of wide neural networks. *arXiv preprint arXiv:1910.01619*, 2019. Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, and Richard Socher. Taylorized training: Towards better approximation of neural network training at finite width. *arXiv preprint arXiv:2002.04010*, 2020. Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, and Shira Kritchman. Frequency bias in neural networks for input of non-uniform density. In *International Conference on Machine Learning*, pp. 685–694. PMLR, 2020. Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. Advances in Neural Information Processing Systems, 32, 2019. Alberto Bietti, Joan Bruna, Clayton Sanford, and Min Jae Song. Learning single-index models with shallow neural networks. *arXiv preprint arXiv:2210.15651*, 2022. Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. *Nature communications*, 12(1):2914, 2021. Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. In *Advances in Neural Information Processing Systems*, 2019. Yuan Cao and Quanquan Gu. Generalization error bounds of gradient descent for learning over-parameterized deep relu networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 3349–3356, 2020. Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. *arXiv preprint arXiv:1912.01198*, 2019. Minshuo Chen, Yu Bai, Jason D Lee, Tuo Zhao, Huan Wang, Caiming Xiong, and Richard Socher. Towards understanding hierarchical learning: Benefits of neural representations. Advances in Neural Information Processing Systems, 33:22134–22145, 2020. Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. *Advances in neural information processing systems*, 31, 2018. Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. *Advances* in Neural Information Processing Systems, 32, 2019. Alexandru Damian, Jason Lee, and Mahdi Soltanolkotabi. Neural networks can learn representations with gradient descent. In *Conference on Learning Theory*, pp. 5413–5452. PMLR, 2022. John Day Dollard and Charles N. Friedman. *Product Integration with Application to Differential Equations*. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1984. doi: 10.1017/ CBO9781107340701. Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In *International conference on machine learning*, pp. 1675–1685. PMLR, 2019. Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes overparameterized neural networks. *arXiv preprint arXiv:1810.02054*, 2018. Mario Geiger, Stefano Spigler, Arthur Jacot, and Matthieu Wyart. Disentangling feature and lazy training in deep neural networks. *Journal of Statistical Mechanics: Theory and Experiment*, 2020(11):113301, 2020. Mario Geiger, Leonardo Petrini, and Matthieu Wyart. Landscape and training regimes in deep learning. Physics Reports, 924:1–18, 2021. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? *Advances in Neural Information Processing Systems*, 33:14820–14830, 2020. Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. *Advances in neural information processing systems*, 31, 2018. Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in neural information processing systems, 32, 2019. Zhiyuan Li, Sadhika Malladi, and Sanjeev Arora. On the validity of modeling sgd with stochastic differential equations (sdes). *Advances in Neural Information Processing Systems*, 34:12712–12725, 2021. Eran Malach, Pritish Kamath, Emmanuel Abbe, and Nathan Srebro. Quantifying the benefit of using differentiable learning over tangent kernels. In *International Conference on Machine Learning*, pp. 7379– 7389. PMLR, 2021. Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. *Proceedings of the National Academy of Sciences*, 115(33):E7665–E7671, 2018. Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. In *Conference on Learning Theory*, pp. 2388–2464. PMLR, 2019. Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Learning with invariances in random features and kernel models. In *Conference on Learning Theory*, pp. 3351–3418. PMLR, 2021. Alireza Mousavi-Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, and Murat A Erdogdu. Neural networks efficiently learn low-dimensional representations with sgd. *arXiv preprint arXiv:2209.14863*, 2022. Eshaan Nichani, Yu Bai, and Jason D Lee. Identifying good directions to escape the ntk regime and efficiently learn low-degree plus sparse polynomials. *arXiv preprint arXiv:2206.03688*, 2022. Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, and Matthieu Wyart. Learning sparse features can lead to overfitting in neural networks. *arXiv preprint arXiv:2206.12314*, 2022. Grant M Rotskoff and Eric Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. *stat*, 1050:22, 2018. Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of deep neural networks. Mathematics of Operations Research, 47(1):120–152, 2022. Terence Tao. Topics in random matrix theory. *Graduate Studies in Mathematics*, 132, 2011. Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural networks. *Computer Methods in Applied* Mechanics and Engineering, 384:113938, 2021. Blake Woodworth, Suriya Gunasekar, Jason D Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In Conference on Learning Theory, pp. 3635–3673. PMLR, 2020. Greg Yang and Edward J Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, pp. 11727–11737. PMLR, 2021. ## A Proof Of Theorem 1.2 A.1 Notations We let R0 := R(αh(w0)) denote the loss at initialization. We define the residuals r(t), r¯(t) ∈ F under training the original model and the linearized model as $r(t)=y^{*}-\alpha h(\mathbf{w}(t)),\ \mbox{and}\ \bar{r}(t)=y^{*}-\alpha\bar{h}(\bar{\mathbf{w}}(t)).$ Since the evolution of w and w¯ is given by the gradient flow in (1) and (3), the residuals evolve as follows. We write Dh⊤ =dh dw ⊤to denote the adjoint of Dh = dh dw . $\dfrac{dr}{dt}=\dfrac{dr}{d\mathbf{w}}$ $\dfrac{d\mathbf{w}}{dt}=-\alpha Dh(\mathbf{w})(-\nabla F_{\alpha}(\mathbf{w}))=\alpha Dh(\mathbf{w})(\dfrac{1}{\alpha}Dh(\mathbf{w})^\top\nabla R(\alpha h(\mathbf{w})))=-Dh(\mathbf{w})Dh(\mathbf{w})^\top r$, $\nabla R(\alpha h(\mathbf{w}))=-(\mu^*-\alpha h(\mathbf{w}))=-r$, An analogous result can be derived for the residual $\bar{r}$ under the $\alpha$-function. since ∇R(αh(w)) = −(y linearized dynamics: $$\frac{d\bar{r}}{d t}=-D\bar{h}(\bar{\mathbf{w}})D\bar{h}(\bar{\mathbf{w}})^{\top}\bar{r}.$$ For any time t ≥ 0, define the kernel Kt : *F → F* as $\neg$ $$({\bar{\mathbf{w}}}(t))D{\bar{h}}({\bar{\mathbf{w}}}(t))^{\top}$$ $$(10)$$ Kt := Dh(w(t))Dh(w(t))⊤ . Since $K_0=Dh(\mathbf{w}(0))Dh(\mathbf{w}(0))$ Since K0 = Dh(w(0))Dh(w(0))⊤ = Dh¯(w¯ (t))Dh¯(w¯ (t))⊤, we can write the dynamics in compact form: $${\frac{d r}{d t}}=-K_{t}r\quad{\mathrm{~and~}}\quad{\frac{d{\bar{r}}}{d t}}=-K_{0}{\bar{r}}\,.$$ dt = −K0r . ¯ (10) ## A.2 Proof Overview Our proof of Theorem 1.2 is outlined in the flowchart in Figure 1. First, we use Lemma A.1 to argue that the change in the weights is bounded above during training. This implies several basic facts about the continuity and boundedness of the kernel Kt over time, which are written as Lemmas A.2, A.3, A.4 A.5. These lemmas allow us to write the dynamics of r and r¯ using product integrals Dollard & Friedman (1984) in Lemma A.6. Product integrals are a standard tool in differential equations and quantum mechanics, but known results on product integrals do not suffice to prove our theorem. Thus, in Theorem A.8, we prove a new operator-norm bound between differences of product integrals. Theorem A.8 implies our main Theorem 1.2 when we specialize to the case of gradient-flow training of a model, and combine it with the weight-change bound from Lemma A.1. The bulk of the proof is dedicated to showing Theorem A.8. The proof is an interpolation argument, where we interpolate between the two product integrals being compared, and show that the operator norm of the derivative of the interpolation path is bounded. In order to show this, we require several new technical bounds - most notably Claim 6, which shows that for any matrices X, Y ∈ R n×d and any t ≥ 0 we have $$\|e^{-(X+Y)(X+Y)^{\top}t}(X+Y)-e^{-X X^{\top}t}X\|\leq3\|Y\|\;.$$ ![10_image_0.png](10_image_0.png) Figure 1: Proof structure. ## A.3 Basic Facts About Boundedness And Continuity Of The Kernel We begin with a series of simple propositions. Recall Proposition 3.1, which we restate below with a slight strengthening in (11) which will be needed later on. Lemma A.1 (Restatement of Proposition 3.1). *For any time* t, $\|\mathbf{w}(t)-\mathbf{w}_{0}\|\leq\sqrt{tR_{0}}/\alpha$ and $\|\bar{\mathbf{w}}(t)-\mathbf{w}_{0}\|\leq\sqrt{tR_{0}}/\alpha$. and furthermore $$\int_{0}^{t}\|\frac{d\mathbf{w}}{d\tau}\|d\tau\leq\sqrt{tR_{0}}/\alpha\quad\quad\text{and}\quad\int_{0}^{t}\|\frac{d\bar{\mathbf{w}}}{d\tau}\|d\tau\leq\sqrt{tR_{0}}/\alpha\,.\tag{11}$$ $$\square$$ Proof. The statement (11) is also implied by the proof of Proposition 3.1 in the main text. The first implication is that the weights w(t) stay within the ball of radius ρ around w0, during the time-span that we consider. Lemma A.2. For any time 0 ≤ t ≤ T, we have ∥w(t) − w0∥ ≤ ρ. Proof. Immediate from Lemma A.1 and the fact that T ≤ ρ 2α 2/R0. This allows to use the bounds Lip(h) and Lip(Dh) on the Lipschitz constants of h and Dh. Specifically, the kernels Kt and K0 stay close during training, in the sense that the difference between Dh and Dh¯ during training is bounded. Lemma A.3. For any time 0 ≤ t ≤ T*, we have* $$\square$$ $$\|D h(\mathbf{w}(t))-D{\bar{h}}({\bar{\mathbf{w}}}(t))\|\leq\mathrm{Lip}(D h){\sqrt{t R_{0}}}/\alpha$$ $$\square$$ Proof. Since (a) h¯ is the linearization of h at w0, and (b) ∥w(t) − w0∥ ≤ min(ρ, √tR0/α) by Lemma A.2 and Lemma A.1, $$\|D h(\mathbf{w}(t))-D{\bar{h}}({\bar{\mathbf{w}}}(t))\|\stackrel{(a)}{=}\|D h(\mathbf{w}(t))-D h(\mathbf{w}_{0})\|\stackrel{(b)}{\leq}\mathrm{Lip}(D h){\sqrt{t R_{0}}}/\alpha\,.$$ And therefore the kernel Kt is bounded at all times 0 ≤ t ≤ T in operator norm. Lemma A.4. For any time 0 ≤ t ≤ T, we have ∥Kt∥ ≤ 3∥Dh(w(0))∥ 2 + 2Lip(Dh)tR0/α2. Proof. By triangle inequality and Lemma A.3, $$\|K_{t}-K_{0}\|=\|Dh(\mathbf{w}(t))Dh(\mathbf{w}(t))^{\top}-Dh(\mathbf{w}(0))Dh(\mathbf{w}(0))^{\top}\|$$ $$\leq\|Dh(\mathbf{w}(t))\|^{2}+\|Dh(\mathbf{w}(0))\|^{2}$$ $$\leq(\|Dh(\mathbf{w}(t))-Dh(\mathbf{w}(0))\|+\|Dh(\mathbf{w}(0))\|)^{2}+\|Dh(\mathbf{w}(0))\|^{2}$$ $$\leq3\|Dh(\mathbf{w}(0))\|^{2}+2\text{Lip}(Dh)tR_{0}/\alpha^{2}\,.$$ And finally we note that the kernel evolves continously in time. Lemma A.5. The map t 7→ Kt is continuous (in the operator norm topology) in the interval [0, T]. Proof. First, t 7→ w(t) is continuous in time, since it solves the gradient flow. Second, we know that w(t) is in the ball of radius ρ around w0 by Lemma A.2, and in this ball the map w 7→ Dh(w) is continuous because Lip(Dh) < ∞. Finally, Dh 7→ *DhDh*⊤ is continuous. ## A.4 Product Integral Formulation Of Dynamics Now we can present an equivalent formulation of the training dynamics (10) in terms of product integration. For any 0 ≤ x ≤ y ≤ T, let P(*y, x*) : R p → F solve the operator integral equation $$P(y,x)=I-\int_{x}^{y}K_{t}P(t,x)dt\,.\tag{1}$$ $$(12)$$ A solution P(*y, x*) is guaranteed to exist and to be unique: Lemma A.6. *The unique solution to the integral equation* (12) *is given as follows. For any* 0 ≤ x ≤ y ≤ T define sm,j = (y − x)(j/m) + x and δ = (y − x)/m, and let P(y, x) *be the product integral* $$P(y,x):=\prod_{x}^{y}e^{-K_{x}d s}:=\operatorname*{lim}_{m\to\infty}\prod_{j=1}^{m}e^{-\delta K_{x_{j}}}=\operatorname*{lim}_{m\to\infty}e^{-\delta K_{x_{m}}}e^{-\delta K_{x_{m-1}}}\ldots e^{-\delta K_{x_{2}}}e^{-\delta K_{x_{1}}}\ldots$$ Proof. Existence, uniqueness, and the expression as an infinite product are guaranteed by Theorems 3.4.1, 3.4.2, and 3.5.1 of Dollard & Friedman (1984), since t 7→ Kt lies in L 1 s (0, T), which is the space of "strongly integrable" functions on [0, T] defined in Definition 3.3.1 of Dollard & Friedman (1984). This fact is guaranteed by the separability of F and the continuity and boundedness of t 7→ Kt (Lemmas A.4 and A.5). The operators P(*y, x*) are the time-evolution operators corresponding to the differential equation (10) for the residual error rt. Namely, for any time 0 ≤ t ≤ T, $$r_{t}=P(t,0)r_{0}\,.$$ On the other hand, the solution to the linearized dynamics (10) is given by $$\bar{r}_{t}=e^{-K_{0}t}r_{0}\;,$$ since e −K0tis the time-evolution operator when the kernel does not evolve with time. ## A.5 Error Bound Between Product Integrals To prove Theorem 1.2, it suffices to bound ∥P(t, 0) − e −K0t∥, the difference of the time-evolution operators under the full dynamics versus the linearized dynamics. We will do this via a general theorem. To state it, we must define the total variation norm of a time-indexed sequence of operators: Definition A.7. Let {Ct}t∈[x,y] be a sequence of time-bounded operators Ct : R p → F so that t 7→ Ct is continuous in the interval [*x, y*]. Then the total variation norm of {Ct}t∈[x,y]is $${\mathcal{V}}(\{C_{t}\}_{t\in[x,y]})=\operatorname*{sup}_{P\in{\mathcal{P}}}\;\sum_{i=1}^{n_{P}-1}\;\|C_{t_{i}}-C_{t_{i-1}}\|,$$ where the supremum is taken over partitions P = {P = {x = t1 ≤ t2 *≤ · · · ≤* tnP −1 ≤ tnP = y}} of the interval [*x, y*]. We may now state the general result. Theorem A.8. Let F be a separable Hilbert space, and let {At}t∈[0,T], {Bt}t∈[0,T] *be time-indexed sequences* of bounded operators At : R p → F and Bt : R p → F such that t 7→ At and t 7→ Bt are continuous in [0, T]. Then, $$\|\prod_{0}^{T}e^{-A_{t}\cdot A_{t}^{\top}\,ds}-\prod_{0}^{T}e^{-B_{t}B_{t}^{\top}\,ds}\|\leq\big{(}\sup_{t\in[0,T]}\|A_{t}-B_{t}\|\big{)}(2\sqrt{T}+3T\cdot\mathcal{V}(\{A_{t}-B_{t}\}_{t\in[0,T]}))\,.$$ If we can establish Theorem A.8, then we may prove Theorem 1.2 as follows. Proof of Theorem *1.2.* for each t ≥ 0 we choose the linear operators At, Bt : R p → F by At = Dh(w(t)) and Bt = Dh(w(0)) so that AtA⊤ t = Kt and BtB⊤ t = K0. We know that At, Bt are bounded by Lemma A.4, and that t 7→ At is continuous by Lemma A.5. (Also t 7→ Bt is trivially continuous). So we may apply Theorem A.8 to bound the difference in the residuals. We first bound the total variation norm of {At − Bt}t∈[0,T], By (a) the fact from Lemma A.2 that w(t) is in a ball of radius at most ρ around w0 where w 7→ Dh(w) is Lip(Dh)-Lipschitz; (b) the fact that t 7→ w(t) is differentiable, since it solves a gradient flow; and (c) Lemma A.1, V({At − Bt}t∈[0,T]) = sup P ∈P nXP −1 i=1 ∥Dh(w(ti+1)) − Dh(w(0)) − Dh(w(ti)) + Dh(w(0))∥ = sup P ∈P nXP −1 i=1 ∥Dh(w(ti+1)) − Dh(w(ti))∥ (a) ≤ Lip(Dh) sup P ∈P nXP −1 i=1 ∥w(ti+1) − w(ti)∥ (b) = Lip(Dh) Z T 0 ∥ dw dt ∥dt (c) ≤ Lip(Dh)pT R0/α When we plug the above expression into Theorem A.8, along with the bound ∥At − Bt∥ ≤ Lip(Dh) √tR0/α from Lemma A.3, we obtain: $$\|r_{T}-\bar{r}_{T}\|=\|(\prod_{s=0}^{T}e^{-K_{s}\,ds}-\prod_{s=0}^{T}e^{-K_{0}ds})r_{0}\|$$ $$\leq(\text{Lip}(Dh)\sqrt{TR_{0}}/\alpha)(2\sqrt{T}+3\text{Lip}(Dh)T^{3/2}\sqrt{R_{0}}/\alpha)\sqrt{2R_{0}}$$ $$=(2\kappa+3\kappa^{2})\sqrt{2R_{0}},$$ where κ = T α Lip(Dh) √R0. Also note that $$\|r_{T}-{\bar{r}}_{T}\|\leq\|r_{T}\|+\|{\bar{r}}_{T}\|={\sqrt{2R(\alpha h(\mathbf{w}(T)))}}+{\sqrt{2R(\alpha{\bar{h}}(\mathbf{\bar{w}}(T)))}}\leq2{\sqrt{2R_{0}}},$$ since the gradient flow does not increase the risk. So $$\|r_{T}-\bar{r}_{T}\|\leq\operatorname*{min}(2\kappa+3\kappa^{2},2)\sqrt{2R_{0}}\leq6\kappa\sqrt{R_{0}}\,.$$ It remains only to prove Theorem A.8. ## A.6 Reduction To The Finite-Dimensional Case We first show that in order to prove Theorem A.8, it suffices to consider the case where F is a finite-dimensional Hilbert space. The argument is standard, and uses the fact that F has a countable orthonormal basis. Lemma A.9. Theorem A.8 is true for general separable Hilbert spaces F if it is true whenever F is finite-dimensional. Proof. Suppose that F is a separable Hilbert space and At, Bt : R p → F satisfy the hypotheses of Theorem A.8. Let {fi}i∈N be a countable orthonormal basis for F, which is guaranteed by separability of F. For any n, let Pn : *F → F* be the linear projection operator defined by $$\mathcal{P}_{n}(f_{i})=\begin{cases}f_{i},&1\leq i\leq n\\ 0,&\text{otherwise}\end{cases}.$$ By (a) Duhamel's formula in Theorem 3.5.8 of Dollard & Friedman (1984), (b) the fact that ∥e −AsA ⊤ s ∥ ≤ 1 and ∥e −PnAsA ⊤ s P ⊤ n ∥ ≤ 1 because AsA⊤ s and PnAsA⊤ s P ⊤ n are positive semidefinite. ∥ Y T 0 e −AsA ⊤ s ds − Y T 0 e −PnAsA ⊤ s P ⊤ n ds∥ Y T τ e −AsA ⊤ s ds!(AτA ⊤ τ − PnAτA ⊤ τ P ⊤ n ) Yτ 0 e −PnAsA ⊤ s P ⊤ n ds!dτ∥ (a) = ∥ Z T 0 ≤ Z T 0 ∥ Y T τ e −AsA ⊤ s ds∥∥AτA ⊤ τ − PnAτA ⊤ τ P ⊤ n ∥∥Yτ 0 e −PnAsA ⊤ s P ⊤ n ds∥dτ (b) ≤ Z T 0 ∥AτA ⊤ τ − PnAτA ⊤ τ P ⊤ n ∥dτ (13) We have chosen Pn so that for any bounded linear operator M : F → F, we have limn→∞ PnMPn = M. Since τ 7→ AτA⊤ τ is continuous in τ , and Aτ is bounded for each τ , the expression in (13) converges to 0 as n → ∞. By triangle inequality, we conclude that $$\|\prod_{0}^{T}e^{-A_{x}A_{x}^{\top}\,ds}-\prod_{0}^{T}e^{-B_{x}B_{x}^{\top}\,ds}\|\leq\limsup_{n\to\infty}\|\prod_{0}^{T}e^{-\mathcal{P}_{n}A_{x}A_{x}^{\top}\,\mathcal{P}_{n}^{\top}\,ds}-\prod_{0}^{T}e^{-\mathcal{P}_{n}B_{x}B_{x}^{\top}\,\mathcal{P}_{n}^{\top}\,ds}\|\,.$$ $$\square$$ Notice that PnAt and PnBt are bounded maps from R pto span{f1*, . . . , f*n}, and t *7→ P*nAt and t *7→ P*nBt are continuous and bounded. So using the theorem in the case where F is finite-dimensional, the right-hand side can be bounded by $$\limsup_{n\to\infty}\|\prod_{0}^{T}e^{-\mathcal{P}_{n}A_{s}A_{s}^{\top}\mathcal{P}_{n}^{\top}ds}-\prod_{0}^{T}e^{-\mathcal{P}_{n}B_{s}B_{s}^{\top}\mathcal{P}_{n}^{\top}ds}\|$$ $$\leq\limsup_{n\to\infty}(\sup_{t\in[0,T]}\|\mathcal{P}_{n}A_{t}-\mathcal{P}_{n}B_{t}\|)(2\sqrt{T}+3T\cdot\mathcal{V}(\{\mathcal{P}_{n}A_{t}-\mathcal{P}_{n}B_{t}\}_{t\in[0,T]})dt)$$ $$=(\sup_{t\in[0,T]}\|A_{t}-B_{t}\|)(2\sqrt{T}+3T\cdot\mathcal{V}(\{A_{t}-B_{t}\}_{t\in[0,T]})dt)\,.$$ ## A.7 Bound For The Finite-Dimensional Case We conclude by proving Theorem A.8 in the finite-dimensional case, where we write At, Bt ∈ R n×d as time-indexed matrices. In order to prove a bound, we will interpolate between the dynamics we wish to compare. Let Ct = Bt − At ∈ R n×d For any ζ ∈ [0, 1] and t ∈ [0, T], we define the kernel $$K_{t,\zeta}=(A_{t}+\zeta C_{t})(A_{t}+\zeta C_{t})^{\top}\in\mathbb{R}^{n\times n},$$ This interpolates between the kernels of interest, since on the one hand, Kt,0 = AtA⊤ t and on the other Kt,1 = BtB⊤ t . For any *x, y* ∈ [0, T], let $$P(y,x;\zeta)=\prod_{x}^{y}e^{-K_{t,\zeta}d t}\in\mathbb{R}^{n\times n}.$$ The derivative ∂ ∂ζ P(*y, x*; ζ) exists by Theorem 1.5.3 of Dollard & Friedman (1984), since (i) Kt,ζ is continuous in t for each fixed ζ, (ii) Kt,ζ is differentiable in ζ in the L 1sense4, and (iii) has a partial derivative ∂Kt,ζ ∂ζ that is integrable in t. The formula for ∂ ∂ζ P(*y, x*; ζ) is given by the following formula, which generalizes the integral representation of the exponential map:5 exponential map. $$\frac{\partial}{\partial\zeta}P(y,x;\zeta)=-\int_{x}^{y}P(y,t;\zeta)\frac{\partial K_{t,\zeta}}{\partial\zeta}P(t,x;\zeta)dt\tag{14}$$ Our main lemma is: Lemma A.10. For all ζ ∈ [0, 1]*, we have the bound* $\zeta\in[0,1]$, we have the bound_ $$\|\frac{\partial P(T,0;\zeta)}{\partial\zeta}\|\leq(\sup_{t\in[0,T]}\|C_{t}\|)(2\sqrt{T}+3T\cdot\mathcal{V}(\{C_{t}\}_{t\in[0,T]}))\,.$$ This lemma suffices to prove Theorem A.8. Proof of Theorem 4.8.: Using the fundamental theorem of calculus, $$\big{\|}\prod_{0}^{T}e^{-A_{t}T^{\prime}\,dt}-\prod_{0}^{T}e^{-B_{t}B_{t}^{T}\,dt}\big{\|}=\|P(T,0;1)-P(T,0;0)\|\leq\int_{0}^{1}\|\frac{\partial P(T,0;\zeta)}{\partial\zeta}\|d\zeta,$$ which combined with Lemma A.10 proves Theorem A.8. ## A.8 Proof Of Lemma **A.10** By a direct calculation, $$\frac{\partial K_{t,\zeta}}{\partial\zeta}=(A_{t}+\zeta C_{t})C_{t}^{\top}+C_{t}(A_{t}+\zeta C_{t})^{\top},$$ so, by (14), $$\frac{\partial}{\partial\zeta}P(T,0;\zeta)=-\int_{0}^{T}P(T,t;\zeta)((A_{t}+\zeta C_{t})C_{t}^{\top}+C_{t}(A_{t}+\zeta C_{t})^{\top})P(t,0;\zeta)dt$$ $$=-\underbrace{\int_{0}^{T}P(T,t;\zeta)(A_{t}+\zeta C_{t})C_{t}^{\top}P(t,0;\zeta)dt}_{M_{1}}-\underbrace{\int_{0}^{T}P(T,t;\zeta)C_{t}(A_{t}+\zeta C_{t})^{\top}P(t,0;\zeta)dt}_{M_{2}}.$$ $\square$ The arguments are similar for bounding M1 and M2, so we only bound M1. We will need two technical bounds, whose proofs are deferred to Section A.9. Claim 1. For any 0 ≤ t ≤ T, we have ∥P(t, 0; ζ)∥ ≤ 1. Claim 2. For any 0 ≤ t ≤ T, we have ∥P(T, t; ζ)(At + ζCt)∥ ≤ 1 2 √T −t + 3V({Cs}s∈[t,T]). Using (a) Claim 1, and (b) Claim 2, ∥M1∥ (a) ≤ ( sup t∈[0,T] ∥Ct∥) Z T 0 ∥P(T, t; ζ)(At + ζCt)∥dt (15) (b) ≤ ( sup t∈[0,T] ∥Ct∥) Z T 0 1 2 √T − t + 3V({Cs}s∈[t,T])dt!(16) = ( sup t∈[0,T] ∥Ct∥)(√T + 3 Z T 0 V({Cs}s∈[t,T])dt). (17) Kt,ζ′ −Kt,ζ ∂Kt,ζ 4For any ζ ∈ [0, 1], limζ ′→ζR T 0 ∥ ζ ′−ζ − ∂ζ ∥dt = 0, since the matrices At, Bt are uniformly bounded. 5The proof is a consequence of Duhamel's formula. This a tool used in a variety of contexts, including perturbative analysis of path integrals in quantum mechanics Dollard & Friedman (1984). Lemma A.10 is proved by noting that, symmetrically, $$\|M_{2}\|\leq(\,\operatorname*{sup}_{t\in[0,T]}\|C_{t}\|)({\sqrt{T}}+3\int_{0}^{T}{\mathcal{V}}(\{C_{s}\}_{s\in[0,t]})d t)\,,$$ and, for any t ∈ [0, T], $\mathcal{V}(\{C_{s}\}_{s\in[0,t]})+\mathcal{V}(\{C_{s}\}_{s\in[t,T]})=\mathcal{V}(\{C_{s}\}_{s\in[0,T]})$. A.9 Deferred proofs of Claims 1 and 2 Claim 1. For any 0 ≤ t ≤ T, we have ∥P(t, 0; ζ)∥ ≤ 1. Proof. This follows from the definition of the product integral as an infinite product, and the fact that each term e −δKti ,ζ in the product has norm at most 1 because Kti,ζ is positive semidefinite. In order to prove Claim 2, we need two more claims: Claim 3. *For any* X ∈ R n×d and t ≥ 0, $$\|e^{-X X^{\top}t}X\|\leq{\frac{1}{2{\sqrt{t}}}}$$ Proof. Since XX⊤ is positive semidefinite, $$\|e^{-XX^{\top}}X\|=\sqrt{\|e^{-XX^{\top}t}XX^{\top}e^{-XX^{\top}t}\|}\leq\sup_{\lambda\geq0}\sqrt{e^{-\lambda t}\lambda e^{-\lambda t}}=\sup_{\lambda\geq0}e^{-\lambda}\sqrt{\lambda/t}\leq\frac{1}{2\sqrt{t}}.$$ Claim 4. *For any time-indexed sequence of matrices* (Xt)t∈[0,T]in R n×dsuch that t 7→ Xt *is continuous in* [0, T]*, and any* 0 ≤ a ≤ b ≤ T, $$\|e^{-X_{b}X_{b}^{\top}(b-a)}X_{b}-\left(\prod_{a}^{b}e^{-X_{s}X_{s}^{\top}d s}\right)X_{a}\|\leq3\mathcal{V}(\{X_{s}\}_{s\in[a,b]})$$ $\square$ This latter claim shows that we can approximate the product integral with an exponential. The proof is involved, and is provided in Section A.10. Assuming the previous two claims, we may prove Claim 2. Claim 2. For any 0 ≤ t ≤ T, we have ∥P(T, t; ζ)(At + ζCt)∥ ≤ 1 2 √T −t + 3V({Cs}s∈[t,T]). Proof. By Claim 3, since K*T ,ζ* = (AT + ζCT )(AT + ζCT ) ⊤, $$\|e^{-K_{T,\zeta}(T-t)}(A_{T}+\zeta C_{T})\|\leq\frac{1}{2\sqrt{T-t}}\,.$$ By the triangle inequality, it remains to prove $$\|e^{-K_{T,\zeta}(T-t)}(A_{T}+\zeta C_{T})-P(T,t;\zeta)(A_{t}+\zeta C_{t})^{\top}\|\leq3{\mathcal{V}}(\{C_{s}\}_{s\in[t,T]}),$$ and this is implied by Claim 4, defining Xt = At + ζCt and a = t, b = T. ## A.10 Deferred Proof Of Claim 4 The proof will be by interpolation, using the integral representation of the exponential map provided in (14), similarly to the main body of the proof, but of course interpolating with respect to a different parameter. We begin the following claim, which we will subsequently strengthen. Claim 5. For any symmetric *X, Y* ∈ R n×n and t ≥ 0, $$\|e^{-(X+Y)^{2}t}(X+Y)-e^{-X^{2}t}X\|\leq3\|Y\|$$ Proof. For any τ ∈ [0, 1], define X(τ ) = X + τY , which interpolates between X and Y . Then by (a) the derivative of the exponential map in (14), and (b) the fact that X′(τ ) = Y and ∥e −X2(τ)t/2∥ ≤ 1, ∥e −(X+Y ) 2t(X + Y ) − e −X2tX∥ = ∥ Z 1 0 d dτ e −X2(τ)tX(τ ) dτ∥ = ∥ Z 1 0 d dτ e −X2(τ)t/2X(τ )e −X2(τ)t/2dτ∥ ≤ sup τ∈[0,1] ∥ d dτ e −X2(τ)t/2X(τ )e −X2(τ)t/2∥ (a) = sup τ∈[0,1] ∥e −X2(τ)t/2X′(τ )e −X2(τ)t/2 − (t/2) Z 1 0 e −(1−s)X2(τ)t/2(X′(τ )X(τ ) + X(τ )X′(τ ))e −sX2(τ)t/2X(τ )e −X2(τ)t/2ds − (t/2) Z 1 0 e −X2(τ)t/2X(τ )e −(1−s)X2(τ)t/2(X′(τ )X(τ ) + X(τ )X′(τ ))e −sX2(τ)t/2ds∥ (b) ≤ sup τ∈[0,1] ∥Y ∥ + ∥(t/2) Z 1 0 e −(1−s)X2(τ)t/2(Y X(τ ) + X(τ )Y )e −sX2(τ)t/2X(τ )e −X2(τ)t/2ds∥ | {z } T1 + ∥(t/2) Z 1 0 e −X2(τ)t/2X(τ )e −(1−s)X2(τ)t/2(Y X(τ ) + X(τ )Y )e −sX2(τ)t/2ds∥ , | {z } T2 and by (a) supλ≥0 λe−λ 2t/2 = 1/ √et, and (b) ∥e −M∥ ≤ 1 if M is p.s.d., T1 ≤ (t/2) Z 1 0 ∥e −(1−s)X2(τ)t/2(Y X(τ ) + X(τ )Y )e −sX2(τ)t/2∥∥X(τ )e −X2(τ)t/2∥ds (a) ≤ (t/2) Z 1 0 ∥e −(1−s)X2(τ)t/2(Y X(τ ) + X(τ )Y )e −sX2(τ)t/2∥1 √et ds ≤ √t 2 √e Z 1 0 ∥e −(1−s)X2(τ)t/2∥∥Y ∥∥X(τ )e −sX2(τ)t/2∥ + ∥e −(1−s)X2(τ)t/2X(τ )∥∥Y ∥∥e −sX2(τ)t/2∥ds (a) ≤ √t 2e Z 1 0 ∥e −(1−s)X2(τ)t/2∥∥Y ∥1 √st +1 p(1 − s)t ∥Y ∥∥e −sX2(τ)t/2∥ds (b) ≤ ∥Y ∥ 2e Z 1 0 1 √s +1 p(1 − s) ds = ∥Y ∥ 2e (2√s − 2 √1 − s) 1 0 ≤ ∥Y ∥. Similarly, T2 ≤ ∥Y ∥. Claim 6. For any *X, Y* ∈ R n×d*(not necessarily symmetric) and* t ≥ 0, $$\|e^{-(X+Y)(X+Y)^{\top}t}(X+Y)-e^{-X X^{\top}t}X\|\leq3\|Y\|\,.$$ Proof. We will use Claim 5, combined with a general method for lifting facts from symmetric matrices to asymmetric matrices (see, e.g., Tao (2011) for other similar arguments). Assume without loss of generality that n = d, since otherwise we can pad with zeros. Define n¯ = 2n and $${\bar{X}}={\begin{bmatrix}0&X^{\top}\\ X&0\end{bmatrix}}\in\mathbb{R}^{{\bar{n}}\times{\bar{n}}}{\mathrm{~and~}}{\bar{Y}}={\begin{bmatrix}0&Y^{\top}\\ Y&0\end{bmatrix}}\in\mathbb{R}^{{\bar{n}}\times{\bar{n}}}.$$ These are symmetric matrices by construction. Furthermore, $$\bar{X}^{2}=\begin{bmatrix}X^{\top}X&0\\ 0&X X^{\top}\end{bmatrix}\ \mathrm{and}\ (\bar{X}+\bar{Y})^{2}=\begin{bmatrix}(X+Y)^{\top}(X+Y)&0\\ 0&(X+Y)(X+Y)^{\top}\end{bmatrix}.$$ Because of the block-diagonal structure of these matrices, $$e^{-{\bar{X}}^{2}t}=\begin{bmatrix}e^{-X^{\top}X t}&0\\ 0&e^{-{X X^{\top}t}}\end{bmatrix}\,\,\text{and}\,\,e^{-({\bar{X}}+{\bar{Y}})^{2}t}=\begin{bmatrix}e^{-(X+Y)^{\top}(X+Y)t}&0\\ 0&e^{-(X+Y)(X+Y)^{\top}t}\end{bmatrix}.$$ So $$e^{-\bar{X}^{2}t}\bar{X}=\begin{bmatrix}0&e^{-X^{\top}X t}X^{\top}\\ e^{-X X^{\top}t}X&0\end{bmatrix}.$$ Similarly, $$e^{-(\bar{X}+\bar{Y})^{2}t}(\bar{X}+\bar{Y})=\begin{bmatrix}0&e^{-(X+Y)^{\top}(X+Y)t}(X+Y)^{\top}\\ e^{-(X+Y)(X+Y)^{\top}t}(X+Y)&0\end{bmatrix}.$$ For any matrix M ∈ R n×n, we have ∥M∥ = supv∈Sn−1 ∥Mv∥. So for any matrices M1, M2 ∈ R n×n, we have $$\left\|\begin{bmatrix}0&M_{1}\\ M_{2}&0\end{bmatrix}\right\|\geq\sup_{\mathbf{v}\in\mathbb{S}^{n-1}}\left\|\begin{bmatrix}0&M_{1}\\ M_{2}&0\end{bmatrix}\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\|=\sup_{\mathbf{v}\in\mathbb{S}^{n-1}}\|M_{2}\mathbf{v}\|=\|M_{2}\|.$$ This means that, using Claim 5, $$\|e^{-(X+Y)(X+Y)^{\top}t}(X+Y)-e^{-XX^{\top}t}X\|\leq\|e^{-(\bar{X}+\bar{Y})^{2}t}(\bar{X}+\bar{Y})-e^{\bar{X}^{2}t}\bar{X}\|\leq3\|\bar{Y}\|\,.$$ Finally, using the symmetry of Y¯ and the block-diagonal structure of Y¯ 2, $$\|\bar{Y}\|=\|\bar{Y}^{2}\|^{1/2}=\left\|\begin{bmatrix}Y^{\top}Y&0\\ 0&Y Y^{\top}\end{bmatrix}\right\|^{1/2}=\operatorname*{max}(\|Y^{\top}Y\|^{1/2},\|Y Y^{\top}\|^{1/2})=\|Y\|.$$ We conclude by using these results to prove Claim 4 with a telescoping argument. Claim 4. *For any time-indexed sequence of matrices* (Xt)t∈[0,T]in R n×dsuch that t 7→ Xt *is continuous in* [0, T]*, and any* 0 ≤ a ≤ b ≤ T, $$\|e^{-X_{b}X_{b}^{\top}(b-a)}X_{b}-\left(\prod_{a}^{b}e^{-X_{s}X_{s}^{\top}d s}\right)X_{a}\|\leq3\mathcal{V}(\{X_{s}\}_{s\in[a,b]})$$ $$\square$$ Proof. We will do this by approximating the product integral by a finite product of m matrices, and taking the limit m → ∞. For any m ≥ 0 and j ∈ {0*, . . . , m*}, and t ∈ [0, T], let tm,j = (b − a)(j/m) + a, and define the finite-product approximation to the product integral for any k ∈ {0*, . . . , m*} $$P_{m,k}=\left(\prod_{j=k+1}^{m}Q_{m,j}\right)Q_{m,k}^{k}\,,$$ where $$Q_{m,j}=e^{-X_{t_{m,j}}X_{t_{m,j}}^{\top}(b-a)/m}\,.$$ Notice that for k = 0 we have $$P_{m,0}=\left(\prod_{j=1}^{m}e^{-X_{t_{m,j}}\,X_{t_{m,j}}^{\top}\,(b-a)/m}\right)\,,$$ and so by continuity of s 7→ Xs for s ∈ [0, T] and boundedness of Xs, we know from Theorem 1.1 of Dollard & Friedman (1984) that $$\operatorname*{lim}_{m\to\infty}P_{m,0}=\prod_{a}^{b}e^{-X_{s}X_{s}^{\top}d s}\,.$$ This means that that $$\|e^{-X_{b}X_{b}^{\top}(b-a)}X_{b}-\left(\prod_{a}^{b}e^{-X_{x}X_{s}^{\top}ds}\right)X_{a}\|=\|P_{m,m}X_{t_{m,m}}-\left(\prod_{a}^{b}e^{-X_{s}X_{s}^{\top}ds}\right)X_{a}\|$$ $$=\lim_{m\to\infty}\|P_{m,m}X_{t_{m,m}}-P_{m,0}X_{t_{m,0}}\|\,.$$ Finally, telescoping and (a) using ∥Qm,j∥ ≤ 1 for all j, and (b) Claim 6, ∥Pm,mXtm,m − Pm,1Xtm,1 ∥ ≤ Xm k=1 ∥Pm,kXtm,k − Pm,k−1Xtm,k−1 ∥ k=1 ∥ Ym j=k+1 Qm,j = Xm (Q k m,kXtm,k − Qm,k(Qm,k−1) k−1Xtm,k−1 )∥ (a) ≤ Xm k=1 ∥(Qm,k) k−1Xtm,k − (Qm,k−1) k−1Xtm,k−1 ∥ (b) ≤ Xm k=1 3∥Xtm,k − Xtm,k−1 ∥ ≤ 3V({Xs}s∈[a,b]). The lemma follows by taking m → ∞, since the bound is independent of m. ## B Proof Of Theorem 1.3 The converse bound in Theorem 1.2 is achieved in the simple case where h : R → R is given by h(w) = aw + 1 2 bw2for a = √ 1 T and b = Lip(Dh). We also let w0 = 0 and y ∗ = √2R0 so that all conditions are satisfied. The evolution of the residuals r(t) = y ∗ − αh(w(t)) and r¯(t) = y ∗ − αh¯( ¯w(t)) is given by $${\frac{d r}{d t}}=-K_{t}r\quad{\mathrm{~and~}}\quad{\frac{d{\bar{r}}}{d t}}=-K_{0}{\bar{r}},$$ where Kt = Dh(w(t))Dh(w(t))⊤ = (a + bw(t))2 and K0 = a 2. Since r = y ∗ − α(aw + b 2w 2), we can express the evolution of the residuals r and r¯ as: is $r$ and $r$ as: $$\frac{dr}{dt}=-(a^{2}+2b(y^{*}-r)/\alpha)r\quad\mbox{and}\quad\frac{d\bar{r}}{dt}=-a^{2}\bar{r}\,.\tag{18}$$ Since b(y ∗ − r)/α ≥ 0 at all times, we must have a 2 + 2b(y ∗ − r)/α ≥ a 2. This means that at all times $$r(t)\leq{\bar{r}}(t)=e^{-a^{2}t}y^{*}\;.$$ So, at any time t ≥ T /2, $$r(t)\leq r(T/2)\leq\bar{r}(T/2)=e^{-(1/\sqrt{T})^{2}(T/2)}y^{*}=e^{-1/2}y^{*},\ldots$$ By plugging this into (18), for times t ≥ T /2, $${\frac{d r}{d t}}\leq-(a^{2}+2\mathrm{Lip}(D h)(1-e^{-1/2}){\sqrt{2R_{0}}}/\alpha)r\leq-(a^{2}+1.1\kappa/T)r.$$ So, at time T, assuming that κ ≤ 1 without loss of generality, $r(T)\leq r(T/2)e^{-(1/T+1.1\kappa/T)(T/2)}=r(T/2)e^{-1/2-0.55\kappa}\leq y^{*}e^{-1}e^{-0.55\kappa}\leq y^{*}e^{-1}(1-0.4\kappa)\,.$ So $|\alpha h(w(T))-\alpha\bar{h}(\bar{w}(T))|=|r(T)-\bar{r}(T)|\geq|e^{-1}y^{*}-(1-0.4\epsilon)e^{-1}y^{*}|\geq0.4\epsilon e^{-1}\sqrt{2R_{0}}\geq\sqrt{R_{0}}/5$. ## C Deferred Details From Section 2 The bound on Lip(Dh) for 2-layer networks is below. Lemma C.1 (Bound on Lip(Dh) for mean-field 2-layer network). Suppose that there is a constant K such that (i) the activation function σ *is bounded and has bounded derivatives* ∥σ∥∞, ∥σ ′∥∞, ∥σ ′′∥∞, ∥σ ′′′∥∞ ≤ K, (ii) the weights have bounded norm ∥a∥ + ∥U∥ ≤ K*, and (iii) the data points have bounded norm* maxi ∥xi∥ ≤ K. Then there is a constant K′ depending only K *such that* $$\square$$ $$\mathrm{Lip}(D h)\leq K^{\prime}.$$ Proof. Let p = m + md be the number of parameters of the network. Then Dh ∈ R n×pis $$D h={\frac{1}{\sqrt{n}}}\begin{bmatrix}D f_{\mathbf{w}}(\mathbf{x}_{1})\\ \vdots\\ D f_{\mathbf{w}}(\mathbf{x}_{n})\end{bmatrix}$$ . $$\operatorname{Lip}(D h)\leq\operatorname*{max}_{i\in[n]}\operatorname{Lip}(D f_{\mathbf{w}}(\mathbf{x}_{i}))\,.$$ So So for the 2-layer network, Lip(Dh) ≤ max i∈[n] 1 m Lip(-σ( √m⟨u1, xi⟩) *. . . σ*( √m⟨um, xi⟩)) +1 √m Lip(-a1σ ′( √m⟨u1, xi⟩)x ⊤ i*. . . a*mσ ′( √m⟨um, xi⟩)x ⊤ i ) = max i∈[n] 1 √m σ ′( √m⟨u1, xi⟩)x ⊤ i 0 0 *. . .* 0 0 σ ′( √m⟨u2, xi⟩)x ⊤ i 0 *. . .* 0 ... 0 *. . .* 0 σ ′( √m⟨um, xi⟩)x ⊤ i + ∥xi∥ √m Lip(-a1σ ′( √m⟨u1, xi⟩) *. . . a*mσ ′( √m⟨um, xi⟩)) ≤ ∥σ ′∥∞∥xi∥ √m+ ∥xi∥ √m ∥diag(-σ ′( √m⟨u1, xi⟩) *. . . σ*′( √m⟨um, xi⟩))∥ + ∥xi∥ a1σ ′′( √m⟨u1, xi⟩)x ⊤ i 0 *. . .* 0 ...... 0 *. . .* 0 amσ ′′( √m⟨um, xi⟩)x ⊤ i ≤ 2 ∥σ ′∥∞∥xi∥ √m+ ∥σ ′′∥∞∥xi∥ 2∥a∥∞
Review 1: Summary: This paper establishes a tighter bound on the condition required for NTK approximation. Strengths and Weaknesses: Strengths: * In terms of the results, this paper shows that the NTK approximation can be achieved by setting $\alpha=O(T)$, which improves the previous result that requires $\alpha=O(T^2)$. * This paper develops a more refined proof technique, which can be used in other theoretical analyses for dynamic kernels. Weaknesses: * Although the main goal of this paper is to improve the conditions made in [Chizat et al. (2019)], it still needs some more explorations, at least how this condition affects the conditions for the neural network width and the convergence rate. * Following the previous comment, the authors should also derive the convergence rate under the improved conditions. * The authors only consider the general NTK model, it would be better to consider the applications to certain neural network models such as multi-layer ReLU networks. * The comparison with [Cao and Gu, 2019] should be added, in which the authors also prove a bound on the linearization approximation error (see their Lemma 5.2). Cao and Gu 2019, Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks, https://arxiv.org/pdf/1902.01384.pdf. Requested Changes: See the weakness section. Broader Impact Concerns: No concern. ================================================== Review 2: Summary: The paper introduces a tighter analysis of the NTK evolution in the most general setting introduced by Chizat et al. (2019). In particular, their new bound holds for a much longer time horizon ($\alpha^2\rho^2$ vs $\alpha\rho/\text{Lip}(Dh)$) and is tighter ($\kappa$ vs $\kappa T\text{Lip}(Dh)$). Framing this in terms of initial rescaling and a fixed error budget epsilon, the time horizon is squared from Chizat et al.'s $O(\sqrt{\alpha\varepsilon})$ to $O(\alpha\varepsilon)$. To achieve this the authors propose a new proof blueprint based on a product integral formulation of the dynamics of the residual, leveraging in a few points results from Chizat et al. The main technical challange seem to be showing continuity and operator-boundedness guarantee for the main quantities involved in the process, which in turns makes it possible to leverage classical results from Dollard & Friedman (1984) for the evolution of bounded continuous operators. Once the main reformulation is achieved, the rest of the proof is a more straightforward (but still highly technical) sequence of bounding to recover a quantity comparable to Chizat et al. Once they establish their tighter bound, the authors also provide a worst case example that achieves it, which prove that the bound is indeed tight under this specific set of assumptions. Strengths and Weaknesses: The strong points of the paper are: - Clear improvement over current SOTA (Chizat et al.) - General and novel approach that can probably be further refined/extended - Thorough derivation, missing only a good recap (ideally a proof flowchart) for the extensive appendix - Corresponding lower bound The weaknesses: - The result only holds for continuous flow, and is tight only under very general assumptions while real-world data might satisfy stronger assumptions - At the same time the authors do not discuss how realistic the assumptions they inherit from Chizat et al. are. This is particularly important when these assumptions are playing an important role in the proof (e.g. assuming that $Dh$ is $\text{Lip}(Dh)$-Lipschitz in a ball of radius $\rho$ makes it much easier to prove boundedness of the operator) - The proof sketch in Sec. 2 gives an intuition of the main result by saying that the $K_T$ process bounds any of the $K_t$ intermediate processes. However in the appendix the proof mainly relies on a $\sup_{t \in [0,T]}$ analysis, and while I find it plausible that that $K_T$ is a bound on this sup I cannot find a clear point where it is shown and the derivation is not immediate. Minor: - This is a theory paper so empirical validations are not strictly necessary. However the result would be strengthened by showing that it capture the real-world behaviour of NN optimization (the original goal of NTK). For example the authors could show empirically how close (or not, for "easy" data) real-world flows are to their bounds, how much finite steps make a real-world system deviate from their analysis. Finally, since the authors provide a constructive lower bound, it would have been nice to provide some real-world simulation to see if the designed model keeps close to the theoretical behaviour under finite steps. - Section 2 is slightly confusing w.r.t. how novel the proof blueprint is. The authors start with Chizat et al.'s algebra-based proof, then introduce an intermediate algebra-based lemma, then switch to their integral-representation based proof but eventually return to the lemma. The flow can be greatly improved by clarifying which parts of the approach are novel and which are off-the-shelf. Requested Changes: Add a short discussion of the necessity and impact of the assumptions, and a clarification of the $\sup_t$ to $K_T$ bound. Beyond that, I think the paper as is achieves overall the threshold for acceptance in TMLR. However a broader discussion of the assumptions, more structured presentation of the proof, and an illustrative empirical validation would strengthen the contribution. Minor: - $\kappa$ is defined as part of Prop. 1.1 while it is a shared quantity used across theorems and should be independently defined as $\alpha, R_0$ and other common quantities are - adding a proof flowchart/graph would help legibility of the appendix a lot. Broader Impact Concerns: N/A as this is a theory paper ================================================== Review 3: Summary: The authors tightened an NTK approximation result of Chizat and Bach (2019) based on time rescaling, and showed this result is tight with a matching lower bound. I believe this submission exactly fits the purpose of TMLR: to publish correct and useful technical results, without judging for the result's "impact" or other subjective criteria. I would recommend accepting this submission as it is. Strengths and Weaknesses: Strengths The result is a clear improvement over existing work and is shown to have a matching lower bound, thereby providing a very satisfying conclusion to this problem. In fact, there is very little to cherry pick, as the contributions are quite clear and easy to understand. Requested Changes: N/A Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper is reviewed by three experts. Although two of the reviewers are positive about the paper, one reviewer raised concerns which authors only partially addressed. In particular, the reviewer would like to see a discussion on global convergence, which I think authors can briefly mention in the final version (does not have to be a technical statement if it is beyond the scope). I think the paper will be a good addition to TMLR and its contributions will be appreciated by its audience. ==================================================
# Fair Representation In Submodular Subset Selection: A Pareto Optimization Approach Adriano Fazzone *adriano.fazzone@centai.eu* CENTAI Institute, Turin, Italy Yanhao Wang yhwang@dase.ecnu.edu.cn East China Normal University, Shanghai, China Francesco Bonchi francesco.bonchi@centai.eu CENTAI Institute, Turin, Italy Eurecat, Barcelona, Spain Reviewed on OpenReview: *https: // openreview. net/ forum? id= 0Hm01Vc8zT* ## Abstract Many machine learning applications, such as feature selection, recommendation, and social advertising, require the joint optimization of the global *utility* and the *representativeness* for different groups of items or users. To meet such requirements, we propose a novel multiobjective combinatorial optimization problem called *Submodular Maximization with Fair* Representation (SMFR), which selects subsets from a ground set, subject to a knapsack or matroid constraint, to maximize a submodular (*utility*) function f as well as a set of d submodular (*representativeness*) functions g1*, . . . , g*d. We show that the maximization of f might conflict with the maximization of g1*, . . . , g*d, so that no single solution can optimize all these objectives at the same time. Therefore, we propose a Pareto optimization approach to SMFR, which finds a set of solutions to approximate all Pareto-optimal solutions with different trade-offs between the objectives. Our method converts an instance of SMFR into several submodular cover instances by adjusting the weights of the objective functions; then it computes a set of solutions by running the greedy algorithm on each submodular cover instance. We prove that our method provides approximation guarantees for SMFR under knapsack or matroid constraints. Finally, we demonstrate the effectiveness of SMFR and our proposed approach in two real-world problems: *maximum coverage* and *recommendation*. ## 1 Introduction The problem of subset selection aims to pick a maximum utility subset S, under a given constraint, from a ground set V of items. This fundamental problem arises in a wide range of machine learning applications, such as social advertising (Kempe et al., 2003; Aslay et al., 2015; 2017; Tang, 2018), recommendation systems (Ohsaka & Matsuoka, 2021; Mehrotra & Vishnoi, 2023), data summarization (Lin & Bilmes, 2010; Mirzasoleiman et al., 2016), and feature selection (Liu et al., 2013; Bao et al., 2022), to name just a few. A common combinatorial structure in such problems is *submodularity* (Krause & Golovin, 2014), which naturally captures the "diminishing returns" property: adding an item to a smaller set produces a higher marginal gain than adding it to a larger set. This property not only captures the desirable properties of coverage and *diversity* of subsets, but also enables the design of efficient approximation algorithms. Among the various combinatorial optimization problems for subset selection in the literature, maximizing a monotone submodular function subject to a knapsack constraint (SMK) or a matroid constraint (SMM) has attracted a lot of attention, as such constraints capture common scenarios in which the selected subset must be limited within a budget (Krause & Guestrin, 2005; Călinescu et al., 2011). More formally, given a ground set V of n items, we consider a set function f : 2V → R + to measure the *utility* f(S) of any set S ⊆ V . We assume that f is normalized, i.e., f(∅) = 0, monotone, i.e., f(S) ≤ f(T) for any S ⊆ T ⊆ V , and submodular, f(S ∪ {v}) − f(S) ≥ f(T ∪ {v}) − f(T) for any S ⊆ T ⊆ V and v ∈ V \ T. We also consider a cost function c : V → R + which assigns a positive cost c(v) to each item v ∈ V , and we denote c(S) the cost of a set S ⊆ V , defined as the sum of costs for all items in S, i.e., c(S) = Pv∈S c(v). For a given budget k ∈ R +, the set of all feasible solutions subject to the knapsack constraint contains all subsets of V whose costs are at most k, i.e., Ik = {S ⊆ V : c(S) ≤ k}. The SMK problem on f is defined as S ∗ f = arg maxS∈Ik f(S). Furthermore, a matroid M on a ground set V is defined by a collection I(M) of subsets of V called the *independent sets*, that satisfies the following properties: (1) *∅ ∈ I*(M); (2) for any S ⊂ T ⊆ V , if T ∈ I, then S ∈ I(M) holds; (3) for any *S, T* ⊆ V , if |S| < |T|, there exists v ∈ T \ S such that S ∪ {v*} ∈ I*(M). Here, the size of the maximum independent sets in M is called its rank r(M). Similarly to SMK, the SMM problem on f is defined as S ∗ f = arg maxS∈I(M) f(S). In many real-world problems, in addition to the primary objective of maximizing the utility function f, it is often essential to take into account the representativeness with respect to different groups of items or users. For example, consider the influence maximization problem (Tsang et al., 2019; Becker et al., 2020): Example 1. Let G = (*V, E*) be a graph that denotes the relationships between a set of users V on a social network. Each user v ∈ V is also associated with a sensitive attribute A to divide V into multiple protected groups. The influence maximization (IM) problem (Kempe et al., 2003) aims to select a subset S ⊆ V of users as seeds to maximize a (monotone, submodular) influence spread function under an information diffusion (e.g., independent cascade or linear threshold) model. If the information to be spread is related to education and employment opportunities, fair access to information between protected groups (Tsang et al., 2019; Becker et al., 2020) becomes a critical issue. This is often formulated as maximizing the influence spread functions specific to all protected groups in a balanced manner so that none of the groups is much worse off than the others. Furthermore, we should also impose constraints in different contexts on the seed set S, e.g., to limit the overall budget for the propagation campaign, the total cost of S should be within an upper bound (*knapsack constraint*), or to ensure diversity, the number of seeds in S from any demographic category cannot exceed an upper limit (*matroid constraint*). The above problem, as well as many other subset selection problems with fairness or other representativeness considerations (Krause et al., 2008; Mirzasoleiman et al., 2016; Wang et al., 2024), can be formulated as a multi-objective optimization problem of selecting a set S to simultaneously maximize a monotone submodular utility function f and a set of d monotone submodular *representativeness* functions g1*, . . . , g*d, all defined on the same ground set V , subject to a knapsack or matroid constraint I: $$\operatorname*{max}_{S\in{\mathcal{I}}}\left(f(S),g_{1}(S),\ldots,g_{d}(S)\right).$$ $$(1)$$ (f(S), g1(S)*, . . . , g*d(S)). (1) We call this problem *Submodular Maximization with Fair Representation* (SMFR) 1since it captures the case where the submodular utility function f and all the submodular representativeness functions g1*, . . . , g*d are maximized at the same time to avoid under-representing any of them. Our Contributions. To the best of our knowledge, SMFR is a novel optimization problem, never addressed before (see Section 2 for a detailed discussion of how the related literature differs from SMFR). It is easy to see that SMFR is at least as hard as SMK and SMM, which cannot be approximated within a factor better than 1−1/e unless P = NP (Feige, 1998). However, SMFR is much more challenging than SMK and SMM due to its multi-objective nature. By providing a counterexample (see Example 2), we show that there might not exist any single solution to an instance of SMFR that achieves an approximation factor greater than 0 to maximize f and g1*, . . . , g*d at the same time, even when d = 1. Due to the inapproximability of SMFR, we consider approaching it by *Pareto optimization*. Specifically, we call a set S an (*α, β*)-approximate solution for an instance of SMFR if S ∈ I, f(S) ≥ αOPTf , where OPTf = maxS′∈I f(S ′), and gi(S) ≥ βOPTgi for all i = 1*, . . . , d*, where OPTgi = maxS′∈I gi(S ′). An (*α, β*)-approximate solution S is Pareto optimal if there does not exist an (α ′, β′)-approximate solution S ′ ∈ I for any α ′ ≥ *α, β*′ ≥ β and at least one is strictly larger. Since computing a single Pareto optimal solution to SMFR is still NP-hard, we turn our attention to identifying a set S of multiple solutions to approximate the *Pareto frontier*; that is, to find 1Note that all objective functions are unordered and equally important in Eq. 1. a set S such that for any Pareto-optimal solution, there exists a corresponding solution in S achieving a bounded approximation for it. Our framework first uses any existing algorithm for SMK (Sviridenko, 2004; Yaroslavtsev et al., 2020; Tang et al., 2021; Feldman et al., 2022; Li et al., 2022) or SMM (Fisher et al., 1978; Vondrak, 2008; Călinescu et al., 2011; Badanidiyuru & Vondrák, 2013; Filmus & Ward, 2014; Buchbinder et al., 2019) to approximate OPTf and each OPTgi . Based on the approximations, our proposal transforms an instance of SMFR into multiple instances of the submodular cover problem with different weights on OPTf and each OPTgi to capture the trade-offs between f and each gi. Then, classic greedy algorithms (Wolsey, 1982; Torrico et al., 2021) are used to obtain an approximate solution for each submodular cover instance. Finally, all the above-computed solutions that are not "dominated"2 by any other computed solution are returned as the set S of at most O( 1 ε ) approximate solutions to SMFR for any ε ∈ (0, 1). Theoretically, our framework provides approximation bounds for SMFR under both knapsack and matroid constraints: - When using a δ-approximation algorithm for SMK, it provides a set S such that for each (*α, β*)- approximate Pareto optimal solution of SMFR, there must exist a corresponding (δα − *ε, δβ* − ε)- approximate solution of cost O(k log d ε ) in S, where k ∈ R + is the budget of the knapsack constraint. - When using a δ-approximation algorithm for SMM, it also provides a set S such that for each (*α, β*)- approximate Pareto optimal solution of SMFR, there must exist a corresponding (δα − *ε, δβ* − ε)- approximate solution of size O(r log d ε ) in S, where r ∈ Z + is the rank of the matroid constraint. In the empirical assessment, we evaluate our proposed framework for the problems of *maximum coverage* and *recommendation* using real-world data. The numerical results confirm the effectiveness of our proposal compared to competitive baselines. Paper Organization. The remainder of this paper is organized as follows. We review the related work in Section 2. Then, we analyze the hardness of SMFR in Section 3. Next, our algorithmic framework for SMFR is presented in Section 4. Subsequently, the experimental setup and results are provided in Section 5. Finally, we conclude the paper and discuss future work in Section 6. The proofs of theorems and lemmas and several supplemental experiments are deferred to the appendices due to space limitations. ## 2 Related Work Monotone Submodular Maximization with Knapsack or Matroid Constraints. There exists a wide literature on maximizing a monotone submodular function subject to a knapsack constraint (SMK) or a matroid constraint (SMM). For cardinality constraints, a special case of both knapsack and matroid constraints, Nemhauser et al. (1978) proposed a simple greedy algorithm that runs in O(kn) time and yields the best possible approximation factor 1 − 1/e unless P = NP. However, the greedy algorithm can be arbitrarily bad for general knapsack or matroid constraints. Sviridenko (2004) first proposed a greedy algorithm with partial enumerations that achieves the best possible approximation 1 − 1/e for SMK in O(n 5) time. Kulik et al. (2021) and Feldman et al. (2022) improved the time complexity to O(n 4) while keeping the same approximation factor. Krause & Guestrin (2005) proposed an O(n 2)-time 12 (1 − 1 e ) ≈ 0.316-approximation cost-effective greedy algorithm for SMK. Tang et al. (2021), Kulik et al. (2021), and Feldman et al. (2022) improved the approximation factor of the cost-effective greedy algorithm to 0.405, [0.427, 0.4295], and [0.427, 0.462] independently. Ene & Nguyen (2019a) proposed a near-linear time (1−1/e− ε)-approximation algorithm for SMK based on multilinear relaxation. Yaroslavtsev et al. (2020) proposed a 1 2 -approximation Greedy+Max algorithm for SMK in O(n 2) time. Feldman et al. (2022) further provided an approximation factor of 0.6174 in O(n 3) time by enumerating each item as a partial solution and running Greedy+Max on each partial solution. Li et al. (2022) recently proposed a ( 1 2−ε)-approximation algorithm for SMK in O( n ε log 1 ε ) time. Fisher et al. (1978) first proposed a 12 -approximation greedy algorithm for SMM running in O(nr) time. Călinescu et al. (2011) and Vondrak (2008) independently proposed randomized continuous greedy algorithms with rounding for SMM. Both algorithms achieved the best possible (1−1/e)- approximation in expectation but had prohibitive O(n 8) running time. Badanidiyuru & Vondrák (2013) proposed a faster continuous greedy algorithm that yielded a (1 − 1/e − ε)-approximation for SMM in 2A solution S will be dominated by another solution T if the approximation factors *α, β* of S are both no greater than those of T and at least one is strictly smaller. O( n 2 ε 4 log2 n ε ) time. Filmus & Ward (2014) proposed a (1 − 1/e − ε)-approximation algorithm in O( nr4 ε 3 ) time and a (1−1/e)-approximation algorithm in O(n 2r 7) time, both randomized and based on non-oblivious local search. Buchbinder et al. (2019) proposed the first deterministic algorithm for SMM with an approximation factor over 1/2 in O(nr2) time. Ene & Nguyen (2019b) also proposed a nearly linear-time (1 − 1/e − ε)- approximation algorithm for SMM based on multilinear relaxation. Although the above algorithms cannot be applied directly to SMFR, any of them can serve as a subroutine in our algorithmic framework for SMFR. Multi-objective Submodular Maximization. There exist several variants of submodular maximization problems to deal with more than one objective. Next, we discuss multi-objective submodular maximization problems relevant to SMFR. One basic problem of this kind is to maximize a weighted sum of d > 1 submodular functions g1*, . . . , g*d. Since the weighted sum of multiple submodular functions is still submodular (Krause & Golovin, 2014), this problem can be directly resolved with any algorithm for submodular maximization. However, maximizing the weighted sum is often not enough to achieve a fair representation among all objectives, as its (optimal) solution may not have any approximation guarantee for each objective function individually. The problem of maximizing the minimum of d > 1 submodular functions g1*, . . . , g*d was studied in (Krause et al., 2008; Udwani, 2018; Anari et al., 2019; Torrico et al., 2021). This problem differs from SMFR because it does not consider maximizing f and aims to return only a single solution for all functions. Nevertheless, we draw inspiration from the Saturate framework first proposed by Krause et al. (2008) to address SMFR. Another two relevant problems to SMFR are *Submodular Maximization* under Submodular Cover (SMSC) (Ohsaka & Matsuoka, 2021), which maximizes one submodular function subject to the value of the other submodular function not being below a threshold, and *Balancing utility* and fairness in Submodular Maximization (BSM) (Wang et al., 2024), which maximizes a submodular utility function subject to that a fairness function in form of the minimum of d > 1 submodular functions is approximately maximized. SMSC and BSM differ from SMFR in the following four aspects: (i) they also return a single solution to optimize a user-specified trade-off between multiple objectives; (ii) they are specific to cardinality constraints but cannot handle more general knapsack or matroid constraints; (iii) SMSC is limited to two submodular functions, that is, a special case of d = 1 in SMFR; (iv) BSM requires all objective functions to be decomposable. Thus, SMFR can work in more general scenarios than SMSC and BSM. Due to the above differences, the algorithms for SMSC and BSM cannot be used directly for SMFR, and in the experiments they will be compared with our algorithm after adaptations. Very recently, Tang & Yuan (2023) proposed a randomized subset selection method to maximize a (submodular) overall utility function while the (submodular) utility functions for d groups are all not below a lower bound in expectation. They also considered submodular maximization with group equality, which ensures that the difference in the expected utilities of any two groups does not exceed an upper bound. As they limit their consideration to cardinality constraints and their problem formulations are different from SMFR, their proposed methods do not apply to SMFR. The problem of regret-ratio minimization (Soma & Yoshida, 2017; Feng & Qian, 2021; Wang et al., 2023) for multi-objective submodular maximization is similar to SMFR in the sense that they also aim to find a set of approximate solutions for different trade-offs between multiple objectives. However, they consider denoting the trade-offs as different non-negative linear combinations of multiple submodular functions but cannot guarantee any approximation for each objective individually. Finally, several subset selection problems, e.g., (Qian et al., 2015; 2017; 2020; Roostapour et al., 2022), utilize a Pareto optimization method by transforming a single-objective problem into a bi-objective problem and then solving the bi-objective problem to obtain an approximate solution to the original problem. These problems are interesting but orthogonal to our work. ## 3 Hardness Of Smfr In this paper, we focus on the SMFR problem in Eq. 1 subject to a knapsack or matroid constraint. Next, we formally analyze the computational hardness of SMFR. Since SMK and SMM are both NP-hard and cannot be approximated within a factor 1 − 1/e + ε in polynomial time for any ε > 0 unless P = NP (Feige, 1998; Khuller et al., 1999), the problem of maximizing f or each giindividually can only be solved approximately. We provide a trivial example to indicate that the maximization of f and the maximization of each gi could conflict with each other, and there might not exist any S ∈ I with approximation factors greater than 0 for both of them, even when d = 1. Example 2. Suppose that d = 1 and the set of feasible solutions I is defined by a cardinality constraint 1, i.e., I = {S ⊆ V : |S| ≤ 1}. Note that a cardinality constraint is a special case of both knapsack and matroid constraints. For the two functions f and g1, we have OPTf = f({v0}) = 1, OPTg1 = g1({v1}) = 1, g1({v0}) = 0, f({v1}) = 0, and f({vj}) = g1({vj}) = 0 for any j > 1. In the above SMFR instance, there is no set S ∈ I such that f(S) > 0 and g1(S) > 0. Given the above result, we are motivated to introduce *Pareto optimization*, a well-known concept for multiobjective optimization (Qian et al., 2015; Soma & Yoshida, 2017) which provides more than one solution with different (best possible) trade-offs between multiple objectives. We call a set S ∈ I an (*α, β*)-approximate solution for an instance of SMFR if f(S) ≥ αOPTf and gi(S) ≥ βOPTgi , ∀i ∈ [d]. An (*α, β*)-approximate solution S is Pareto optimal if there does not exist an (α ′, β′)-approximate solution for any α ′ ≥ α and β ′ ≥ β and at least one is strictly larger. Ideally, by enumerating all distinct Pareto optimal solutions (which form the so-called *Pareto frontier*), one can obtain all different optimal trade-offs between maximizing f and each gi. However, computing any Pareto optimal solution is still NP-hard. To circumvent the barrier, a feasible approach to SMFR is to find a set S of approximate solutions, in which, for any Pareto optimal solution, at least one solution close to it is included. This is the approach that we follow in our framework. ## 4 The Smfr-Saturate Framework To find approximate solutions to an instance of SMFR, we propose to transform it into a series of instances of its corresponding decision problems, that is, to determine whether there exists any (*α, β*)-approximate solution for the SMFR instance. Then, we introduce the Saturate framework first proposed in (Krause et al., 2008) to approximately solve each instance of the decision problem as *Submodular Cover* (SC), that is, the problem of finding a set S ∗ c with the minimum cardinality/cost such that f(S ∗ c ) ≥ L for some L ∈ R +. Now, we formally define the decision problem and analyze why the transformation follows. Definition 1 (SMFR-Dec). Given an instance of SMFR and two approximation factors *α, β* ∈ [0, 1], find a set S ∈ Ik such that f(S) ≥ αOPTf and gi(S) ≥ βOPTgi for each i ∈ [d], or decide that there does not exist any set that can meet the conditions. Assuming that OPTf and each OPTgi are already known, the above conditions can be equivalently expressed asf(S) αOPTf ≥ 1 and gi(S) βOPTgi ≥ 1. Then, using the truncation technique in (Krause et al., 2008), SMFR-Dec is converted to decide whether the objective value of the following problem is d + 1: $$\max_{S\in\mathcal{I}}F_{\alpha,\beta}(S):=\min\Big{\{}1,\frac{f(S)}{\alpha0\mathsf{PT}_{f}}\Big{\}}+\sum_{i=1}^{d}\min\Big{\{}1,\frac{g_{i}(S)}{\beta0\mathsf{PT}_{g_{i}}}\Big{\}}.\tag{2}$$ Note that Fα,β is ill-formulated due to division by zero when α, β or OPTf , OPTgi are equal to 0. To solve this problem, the first term of Fα,β is replaced by 1 when α = 0 or OPTf = 0; the second term of Fα,β is replaced by d when β = 0 or OPTgi = 0 for any i ∈ [d]. The above conversion holds because Fα(S) = d+ 1 if and only if f(S) ≥ αOPTf and gi(S) ≥ βOPTgi , ∀i ∈ [d]. In addition, Fα,β is a normalized, monotone, and submodular function because the minimum of a positive real number and a monotone submodular function is monotone and submodular (Krause et al., 2008), and the nonnegative linear combination of monotone submodular functions is monotone and submodular (Krause & Golovin, 2014). In this way, SMFR-Dec is transformed to SC on Fα,β. Since computing OPTf and OPTgi is NP-hard, we should use any existing algorithm for SMK (Sviridenko, 2004; Yaroslavtsev et al., 2020; Tang et al., 2021; Feldman et al., 2022; Li et al., 2022) or SMM (Fisher et al., 1978; Vondrak, 2008; Călinescu et al., 2011; Badanidiyuru & Vondrák, 2013; Filmus & Ward, 2014; Buchbinder et al., 2019) to compute their approximations. Suppose that we run an approximation algorithm for SMK or SMM to obtain OPT′f ≤ OPTf and OPT′gi ≤ OPTgi , ∀i ∈ [d] accordingly. The problem in Eq. 2 is relaxed as follows: $$\operatorname*{max}_{S\in\mathcal{I}}F_{\alpha,\beta}^{\prime}(S):=\operatorname*{min}\Big\{1,{\frac{f(S)}{\alpha\mathrm{OPT}_{f}^{\prime}}}\Big\}+\sum_{i=1}^{d}\operatorname*{min}\Big\{1,{\frac{g_{i}(S)}{\beta\mathrm{OPT}_{g_{i}}^{\prime}}}\Big\},$$ $$(3)$$ o, (3) Algorithm 1: SMFR-Saturate Input: Normalized, monotone, and submodular set functions f, g1*, . . . , g*d : 2V → R +; Cost function c : V → R + and budget k ∈ R + (for knapsack constraint) or Collection of feasible sets I(M) ⊆ 2 V and rank r ∈ Z + (for matroid constraint); Error parameter ε ∈ (0, 1) Result: A set S of approximate solutions to SMFR Initialize *S ← ∅*; Run an algorithm for SMK or SMM to maximize f, g1*, . . . , g*d subject to the constraint Ik or I(M) to compute OPT′f , OPT′g1 , . . . , OPT′gd ; for β ← 0; β ≤ 1; β ← β + ε 2 do Initialize αmax ← 1, αmin ← 0; while αmax − αmin > ε 2 do Set α ← (αmax + αmin)/2 and define F ′ α,β(S) according to Eq. 3; S ← CostEffectiveGreedy(f, g1*, . . . , g*d, c, k, ε) (for knapsack constraint) or IterativeGreedy(f, g1*, . . . , g*d, I(M), ε) (for matroid constraint); if F ′ α,β(S) ≥ d + 1 − ε 2 then αmin ← α and Sα,β ← S; else αmax ← α; end end Add Sαmin,β to S and remove all Sα′,β′ with α ′ ≤ αmin and β ′ < β from S; end return S; Function CostEffectiveGreedy(f, g1*, . . . , g*d, c, k, ε): Initialize S ← ∅; while ∃v ∈ V \ S such that c(S ∪ {v}) ≤ k(1 + ln 2d+2 ε) do I ← {v ∈ V : c(S ∪ {v}) ≤ k(1 + ln 2d+2 ε)}; v ∗ ← arg maxv∈I F ′ α,β(S ∪ {v}) − F ′ α,β(S)/c(v) and S ← S ∪ {v ∗}; end return S; Function IterativeGreedy(f, g1*, . . . , g*d, I(M), ε): for l ← 1; l ≤ 1 + ⌈log2 d+1 ε ⌉; l ← l + 1 do Sl ← ∅; while ∃v ∈ V : Sl ∪ {v*} ∈ I*(M) do I ← {v ∈ V : Sl ∪ {v*} ∈ I*(M)}; v ∗ ← arg maxv∈I F ′ α,β(∪ l j=1Sj ∪ {v}) − F ′ α,β(∪ l j=1Sj ) and Sl ← Sl ∪ {v ∗}; end end return S ←S1+⌈log2 d+1 ε ⌉ l=1 Sl; where the problem of division by zero is solved in the same way as for Fα,β when α, β or OPT′f , OPT′gi are equal to 0. Next, the following lemmas indicate that SMFR-Dec can still be answered approximately by solving the relaxed problem in Eq. 3. Lemma 1. Any set S ∈ I *with* F ′ α,β(S) ≥ d + 1 − ε 2 *must be a* (δα − ε 2 , δβ − ε 2 )*-approximate solution to* SMFR, where δ ∈ (0, 1 − 1/e] *is the approximation factor of the algorithm used for* SMK or SMM. Lemma 2. If there is no set S ∈ I *with* F ′ α,β(S) = d + 1, no (α, β)*-approximate solution to* SMFR *exists.* See Appendices A.1 and A.2 for the proofs of the above two lemmas. Based on Lemmas 1 and 2, we propose SMFR-Saturate in Algorithm 1 for SMFR. Generally, SMFRSaturate follows the same framework to handle the knapsack and matroid constraints but uses different greedy algorithms to obtain approximate solutions to SC on F ′ α,β. We first run an algorithm for SMK or SMM on each objective function individually with the same knapsack constraint Ik or matroid constraint I(M) to calculate OPT′f , OPT′g1 , . . . , OPT′gd . Then, we iterate over each value of β from 0 to 1 with an interval of ε2 . For each value of β, we perform a bisection search on α between 0 and 1. Given a pair of α and β, we formulate an instance of SC on F ′ α,β in Eq. 3. To address SC on F ′ α,β, we adopt two different types of greedy algorithms specific to the knapsack and matroid constraints, respectively. For a knapsack constraint Ik, we run the CostEffectiveGreedy algorithm, which starts from S = ∅ and adds the most "cost-effective" item v ∗ with the largest ratio between its marginal gain w.r.t. S and its cost c(v ∗) until no more items can be added with a relaxed knapsack constraint with a budget k(1 + ln 2d+2 ε), to find the candidate solution S. For a matroid constraint I(M), we run the IterativeGreedy algorithm, which performs the classic greedy algorithm for SMM (Fisher et al., 1978) iteratively in 1 + ⌈log2 d+1 ε ⌉ rounds. In the l-th round, we start from Sl = ∅ and add the item v ∗that satisfies Sl ∪ {v ∗*} ∈ I*(M) and has the largest marginal gain w.r.t. ∪ l j=1Sj until no more items can be added to Sl under the knapsack constraint I(M). Finally, we return the union of the items selected over all rounds, i.e., S1+⌈log2 d+1 ε ⌉ l=1 Sl, as the candidate solution S. After computing a candidate solution S, if F ′ α,β(S) ≥ d + 1 − ε 2 , that is, S reaches the "saturation level" w.r.t. *α, β* according to Lemma 1, we set S as the current solution Sα,β and search in the upper half for a better solution with a higher value of α; otherwise, we search in the lower half for a feasible solution. When αmax − αmin ≤ ε 2 , we add the solution Sαmin,β to S, remove all solutions dominated by Sαmin,β, and move on to the next value of β. Finally, all non-dominated solutions in S are returned for SMFR. The theoretical guarantees of SMFR-Saturate for SMFR with knapsack and matroid constraints are analyzed in the following two theorems, respectively. Theorem 1. For SMFR *with a knapsack constraint* Ik, SMFR-Saturate *runs in* O(dt(A) + n 2 ε log 1 ε ) time, where t(A) is the time complexity of the δ*-approximation algorithm for* SMK*, and provides a set* S of solutions with the following properties: (1) |S| = O( 1 ε )*, (2)* c(S) = O(k log d ε ) for each S ∈ S, (3) for each (α ∗, β∗)*-approximate Pareto optimal solution* S ∗to SMFR*, there must exist its corresponding solution* S ∈ S *such that* f(S) ≥ (δα∗ − ε)OPTf and gi(S) ≥ (δβ∗ − ε)OPTgi , ∀i ∈ [d]. Theorem 2. For SMFR *with a matroid constraint* I(M), SMFR-Saturate *runs in* O(dt(A) + nr ε log2 d ε ) time, where t(A) is the time complexity of the δ*-approximation algorithm for* SMM*, and provides a set* S of solutions with the following properties: (1) |S| = O( 1 ε )*, (2)* |S| = O(r log d ε ) for each S ∈ S, (3) for each (α ∗, β∗)*-approximate Pareto optimal solution* S ∗to SMFR*, there must exist its corresponding solution* S ∈ S *such that* f(S) ≥ (δα∗ − ε)OPTf and gi(S) ≥ (δβ∗ − ε)OPTgi , ∀i ∈ [d]. See Appendices A.3 and A.4 for the proofs of the above two theorems. ## 5 Experiments In this section, we present extensive experimental results to evaluate the performance of our proposed algorithm (SMFR-Saturate) on two benchmark problems, namely *Maximum Coverage* and *Recommendation*, using several real-world data sets. We compare SMFR-Saturate with the following non-trivial baselines. - Greedy+Max (or Greedy): The original greedy algorithms for single-objective submodular maximization. For SMK, we adopt the O(n 2)-time Greedy+Max algorithm by Yaroslavtsev et al. (2020); and for SMM, we adopt the O(nr)-time Greedy algorithm by Fisher et al. (1978). Both algorithms have the same approximation factor of 1/2. - Saturate: The bicriteria approximation algorithms for the problem of *multi-objective submodular maximization* (MOSM) that maximizes the minimum among multiple (submodular) objective functions. As for SMFR, we should maximize the minimum among the d + 1 functions of f and g1*, . . . , g*d. In particular, Saturate for MOSM with knapsack and matroid constraints is presented in (Krause et al., 2008) and (Anari et al., 2019), respectively. - SMSC: A (0.16, 0.16)-approximation algorithm for the problem of Submodular Maximization under Submodular Cover (SMSC) (Ohsaka & Matsuoka, 2021), which can be used for SMFR only when d = 1 by maximizing f under the submodular cover constraint defined on g1. - BSM-Saturate: The instance-dependent bicriteria approximation algorithm for balancing *utility* (i.e., maximizing f) and *fairness* (i.e., maximizing the minimum of g1*, . . . , g*d) in (Wang et al., 2024). - OPT: Formulating an instance of SMFR as an integer-linear program (ILP) and using a solver to enumerate its Pareto optimal solutions in the worst-case exponential time. The ILP formulations of SMFR for *Maximum Coverage* and *Recommendation* are deferred to Appendix B. All algorithms are appropriately adapted to provide solutions without violating the specified constraints. We implemented them in Python 3, and for the OPT algorithm, we applied the Gurobi3 optimizer to solve the ILP formulations of the *Maximum Coverage* and *Recommendation* instances. All algorithms except OPT were accelerated using the lazy-forward strategy (Leskovec et al., 2007), as this strategy cannot be applied to OPT. All experiments were run on a MacBook Pro laptop with an Apple M1 Max processor and 32GB memory running MacOS 14. Data and code are available publicly at https://github.com/adrianfaz/ Fair-Representation-in-Submodular-Subset-Selection-A-Pareto-Optimization-Approach. ## 5.1 Maximum Coverage Setup. In this subsection, we evaluate the performance of different algorithms for SMFR on the *Maximum* Coverage problem using two real-world data sets: *Facebook* and *DBLP*. The *Facebook* data set (Mislove et al., 2010) is an undirected graph of 1, 216 nodes and 42, 443 edges representing the friendships between Rice University students on Facebook, and the *DBLP* data set (Dong et al., 2023) is an undirected graph of 3, 980 nodes and 6, 966 edges denoting the coauthorships between researchers. Our settings for *Maximum Coverage* follow those used in the existing literature on submodular maximization (Halabi et al., 2020; Ohsaka & Matsuoka, 2021; Wang et al., 2024). Given a graph G = (*V, E*), the utility (i.e., coverage) function is defined as f(S) := |Sv∈S N (v)|, where N (v) is the set of nodes consisting of v and its neighbors in G. That is, the coverage of a set S ⊆ V is measured by the number of nodes in the union of the neighborhoods of all nodes in S. To define the representativeness functions g1, g2*, . . . , g*d, we divide the node set into d communities C1*, . . . , C*d such that Sd i=1 Ci = V . For each i ∈ [d], the function giis associated with a particular community Ci as gi(S) := |Sv∈S N (v) ∩ Ci|. That is, the representativeness of a set S for a community Ciis measured by the number of nodes in Ci covered by S. For both data sets, the node set V is partitioned into four disjoint groups using the Louvain method (Blondel et al., 2008) for community detection. We then index the four communities according to their sizes as |C1| ≥ |C2| ≥ |C3*| ≥ |*C4|. For the DBLP data set, we follow the scheme of (Jin et al., 2021) to define a knapsack constraint by assigning a cost of 0.2 times its degree to each node and then normalizing all costs by the minimum cost. For the *Facebook* data set, we define a partition matroid constraint by dividing all nodes into 4 disjoint groups based on a sensitive attribute (i.e., age). We then follow the rule of *equal representation* (Halabi et al., 2020) to set the same upper bound k ∈ Z + for each age group, resulting in a partition matroid of rank r = 4k. Results. Figures 1a–1c and 2a–2c present the trade-offs between α and β achieved by each algorithm for different instances of SMFR on *Maximum Coverage* with knapsack and matroid constraints on the *DBLP* and *Facebook* data sets, respectively. We fix k = 40 for the knapsack constraint and k = 5 (and thus r = 20) for the matroid constraint. We set d = 1, 2, and 4 by considering the representativeness functions on the first group C1, the first two groups C1 and C2, and all four groups from C1 to C4. In each of these figures, the x- and y-axes represent the values of α and β for all solutions with a distinct marker for each algorithm. Furthermore, we also use a black line and a red line to denote the optimal Pareto frontier returned by OPT and its approximation returned by SMFR-Saturate. From the results, we observe that the Pareto frontiers provided by SMFR-Saturate are equal or very close to the optimal ones. This confirms the effectiveness of 3https://www.gurobi.com/solutions/gurobi-optimizer/ ![8_image_0.png](8_image_0.png) Figure 1: Results for *Maximum Coverage* on the *DBLP* data set, with knapsack constraints. SMFR-Saturate for the SMFR problem. We also find that the Greedy+Max and Greedy algorithms, which focus solely on maximizing f, generally provide solutions with low values of β, indicating significant neglect of representativeness functions. Furthermore, Saturate, which maximizes the minimum among all representativeness and utility functions and does not allow any trade-off between f and g by design, in some cases (e.g., Figures 1a–1c), it provides a solution with the highest value of β while having a value of α equal to or close to that of SMFR-Saturate and OPT for maximum β. However, it returns inferior solutions dominated by those of SMFR-Saturate in other cases. BSM-Saturate and SMSC provide different trade-offs between f and g by adjusting the threshold value τ in their definitions. The trade-offs reported by SMSC are marginally better than those of SMFR-Saturate on the *Facebook* data set with matroid constraints (Figure 2a). Conversely, it performs poorly for knapsack constraints (Figure 1a). In fact, SMSC is a special case of SMFR when d = 1, the matroid/knapsack constraint is reduced to the cardinality constraint, and the trade-off between f and g is predetermined by τ . It is also noted that SMSC cannot work when d > 1. Although BSM-Saturate does not have the restriction of d = 1, its trade-offs are never better than those obtained by SMFR-Saturate, and significantly worse for *Maximum Coverage* on the *DBLP* data set with knapsack constraints (Figures 1a–1c). Figures 1d–1f and 2d–2f report the effect of the parameter k, which directly decides the solution size, on the performance of each algorithm for different instances of SMFR in the context of *Maximum Coverage* with knapsack and matroid constraints on the *DBLP* and *Facebook* data sets, respectively. In each plot, the xaxis represents the value of k in the knapsack or matroid constraint, and the y-axis represents the maximum utility value f(S) among all solutions with a certain level of representativeness, i.e., the value of β reaches a given threshold, provided by an algorithm. We also set d = 1, 2, and 4 by considering the representativeness functions on C1, C1&C2, and C1–C4. Only solutions with β ≥ 0.8 are considered for d = 1, β ≥ 0.4 for d = 2, and β ≥ 0.2 for d = 4. A unique marker and a distinct line color are used for each algorithm. From Figures 1d–1f, we observe that the solutions provided by SMFR-Saturate consistently achieve the highest utility value f(S) across all values of k in the knapsack constraint. The absence of SMSC and BSM-Saturate indicates that they fail to provide solutions with an adequate level of representativeness (i.e., the value of β is below the given thresholds), with the only exception shown in Figure 1e when k = 100. ![9_image_0.png](9_image_0.png) Figure 2: Results for *Maximum Coverage* on the *Facebook* data set, with matroid constraints. Furthermore, although Saturate provides valid solutions in all cases, the gap in the utility value f(S) between SMFR-Saturate and Saturate widens as the knapsack restriction becomes less stringent (i.e., increasing k), for all values of the number of representativeness functions d. Figures 2d–2f show that across all values of k, the solutions provided by SMFR-Saturate always achieve utility values f(S) higher than those of BSM-Saturate and Saturate. Unlike the case of knapsack constraints, the gap in the utility value f(S) among all methods decreases as the matroid constraint becomes less stringent (i.e. increasing k), for all values of the number of representativeness functions d. In the case of d = 1, SMSC and SMFRSaturate exhibit the same performance, as shown in Figure 2d. The above results confirm that when the trade-off level between f and g is pre-specified, one can still find a corresponding solution from those of SMFR-Saturate that is comparable to or better than those provided by other baselines. ## 5.2 Recommendation Setup. In this subsection, we evaluate the performance of different algorithms for SMFR on the *Recommendation* problem using another two real-world data sets: *X-Wines* (de Azambuja et al., 2023) and *MovieLens*4. The *X-Wines* data set consists of 150 000 ratings from 10 561 users on 1 007 wines, where each rating takes a value in the range [1.0, 1.5*, . . . ,* 5.0]. Moreover, each wine in the data set is associated with one or more food types that pair with the wine itself; we group these food types into four categories: "meat", "fish", *"pasta"*, and *"cheese"*. The *MovieLens* data set consists of 100 000 ratings from 600 users on 9 000 movies, where each rating takes a value in the range [0.5, 1.0*, . . . ,* 5.0]. Each movie in the data set is associated with one or more genres, with a total of 20 genres. Our experimental settings are similar to those adopted in (Ohsaka & Matsuoka, 2021). In the following, we use the term *"item"* to refer to either a wine in the *X-Wines* data set or a movie in the *MovieLens* data set. By performing the non-negative matrix factorization5(NMF) on the user-item rating matrix with p = 32 factors, we obtain a 32-dimensional feature vector for each item and user. Denoting by vi ∈ R pthe feature vector of 4https://grouplens.org/datasets/movielens/ 5https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html ![10_image_0.png](10_image_0.png) Figure 3: Results for *Recommendation* on the *X-Wines* data set, with knapsack constraints. item i, and by uj ∈ R pthe feature vector of user j, the inner product ⟨vi, vj ⟩ between two feature vectors associated with two items measures their similarity. The same holds for users and items as well: ⟨vi, uj ⟩ indicates the level at which a user likes an item. To design the utility function f according to the facility location objective, we select a subset T of items with at least 54 ratings (|T| = 503 for the *X-Wines* data set, and |T| = 403 for the *MovieLens* data set), and define f : 2V → R + as f(S) := Pt∈T maxs∈S ⟨vs, vt⟩, where V is the set of all items in each data set: |V | = 1 007 for the *X-Wines* data set, and |V | = 9 000 for the *MovieLens* data set. The function f captures how well the selected subset S can represent all items in T in the sense that for any item t ∈ T, there exists an item in S that is highly similar to it. This function, as defined, is known to be monotone and submodular (Frieze, 1974). To define the representativeness functions g1, g2*, . . . , g*d, we consider using, for the *X-Wines* data set, the food type categories with which a wine pair, and, for the *MovieLens* data set, the genres to which a movie belongs. Specifically, for the *X-Wines* data set, we divide wines into four groups according to their associated food type categories as G1 (*meat*), G2 (fish), G3 (*pasta*), and G4 (*cheese*). Similarly, for the *MovieLens* data set, we divide movies into four groups according to their genres as G1 (dramas), G2 (comedies), G3 (*thrillers*), and G4 (*action movies*). Then, each gi function is associated with a particular set of items and is defined as gi(S) := |S ∩ Gi|. To be specific, the representativeness of S for Giis measured by the number of items in S selected from Gi. For the *X-Wines* data set, we define a knapsack constraint by assigning to each item (wine) a random integer cost in the range [1, 10]. For the *MovieLens* data set, to define a matroid constraint, we partition the movies into 7 groups according to their release dates: [1900, 1950), [1950, 1970), [1970, 1980), [1980, 1990), [1990, 2000), [2000, 2010), and [2010, 2019). We also use an equal upper bound k ∈ Z + for each group, resulting in a partition matroid of rank r = 7k. Results. Figures 3 and 4 present the performance of each algorithm for different instances of SMFR on *Recommendation* with knapsack and matroid constraints on the *X-Wines* and *MovieLens* data sets, respectively. In general, we observe results similar to those for *Maximum Coverage* and further confirm the effectiveness of SMFR-Saturate for SMFR in different applications. The absence of OPT in Figures 4a– 4c is due to the inefficiency of the ILP solver: it cannot finish on any SMFR instance for the *MovieLens* data set within one hour. We also find that SMFR-Saturate shows more significant advantages over SMSC ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) Figure 4: Results for *Recommendation* on the *MovieLens* data set, with matroid constraints. and BSM-Saturate for the knapsack constraints than for the matroid constraints. In particular, SMSC slightly outperforms SMFR-Saturate when d = 1 on the *MovieLens* data set, with matroid constraints. This is because the solutions with cardinality constraints are typically very close to those with the partition matroid constraints that we define but differ significantly from those with knapsack constraints. As such, SMSC, which is designed specifically for cardinality constraints, can achieve good performance under matroid constraints without adaptations. Again, SMSC is not comparable to SMFR-Saturate in other cases. Finally, we omit the remaining experimental results due to space limitations. Please refer to Appendix C for those results, which further confirm the effectiveness of SMFR-Saturate in other experimental settings and provide additional evaluations for the efficiency of SMFR-Saturate and other baselines. ## 6 Conclusion And Future Work In this paper, we study a novel multi-objective combinatorial optimization problem called *Submodular Maximization with Fair Representation* (SMFR), which aims to select subsets from a ground set under a specific knapsack or matroid constraint such that a submodular (utility) function f is maximized while d submodular (representativeness) functions g1*, . . . , g*d are also maximized. We show the hardness of finding optimal solutions to SMFR and propose a Pareto optimization approach, SMFR-Saturate, to enumerating a set of approximate solutions to all Pareto optimal solutions with different trade-offs between multiple objectives for SMFR. Finally, we demonstrate the effectiveness of SMFR-Saturate in two classic submodular problems, Maximum Coverage and *Recommendation*, using real-world data. We note that SMFR-Saturate still has several limitations. For example, it cannot support more general classes of functions in subset selection problems, such as non-monotone and weakly submodular functions, and more complex constraints, including the intersection of multiple knapsack and matroid constraints and the P-system constraint. We would like to extend SMFR-Saturate to support them in future work. Furthermore, it would also be interesting to expand the realm of *fair submodular optimization* (Halabi et al., 2023; Mehrotra & Vishnoi, 2023) by considering more novel and practical notions of fairness. ## Acknowledgments Yanhao Wang was supported by the National Natural Science Foundation of China under grant number 62202169 and the start-up funding from ECNU. ## References Nima Anari, Nika Haghtalab, Seffi Naor, Sebastian Pokutta, Mohit Singh, and Alfredo Torrico. Structured robust submodular maximization: Offline and online algorithms. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 3128–3137. PMLR, 2019. URL http://proceedings.mlr.press/v89/anari19a.html. Çigdem Aslay, Wei Lu, Francesco Bonchi, Amit Goyal, and Laks V. S. Lakshmanan. Viral marketing meets social advertising: Ad allocation with minimum regret. *Proc. VLDB Endow.*, 8(7):822–833, 2015. URL http://www.vldb.org/pvldb/vol8/p814-aslay.pdf. Çigdem Aslay, Francesco Bonchi, Laks V. S. Lakshmanan, and Wei Lu. Revenue maximization in incentivized social advertising. *Proc. VLDB Endow.*, 10(11):1238–1249, 2017. URL http://www.vldb.org/pvldb/ vol10/p1238-aslay.pdf. Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In *Proceedings of the 2014 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pp. 1497–1514. Society for Industrial and Applied Mathematics, 2013. URL https://doi.org/10.1137/1.9781611973402. 110. Wei-Xuan Bao, Jun-Yi Hang, and Min-Ling Zhang. Submodular feature selection for partial label learning. In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD* '22), pp. 26–34. Association for Computing Machinery, 2022. URL https://doi.org/10.1145/3534678. 3539292. Ruben Becker, Federico Corò, Gianlorenzo D'Angelo, and Hugo Gilbert. Balancing spreads of influence in a social network. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(01):3–10, 2020. URL https://doi.org/10.1609/aaai.v34i01.5327. Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. *J. Stat. Mech.: Theory Exp.*, 2008(10):P10008, 2008. URL https://dx. doi.org/10.1088/1742-5468/2008/10/P10008. Niv Buchbinder, Moran Feldman, and Mohit Garg. Deterministic (1/2 + ε)-approximation for submodular maximization over a matroid. In *Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete* Algorithms (SODA), pp. 241–254. SIAM, 2019. URL https://doi.org/10.1137/1.9781611975482.16. Gruia Călinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. *SIAM J. Comput.*, 40(6):1740–1766, 2011. URL https://doi. org/10.1137/080733991. Rogério Xavier de Azambuja, A. Jorge Morais, and Vítor Filipe. X-Wines: A wine dataset for recommender systems and machine learning. *Big Data Cogn. Comput.*, 7(1):20, 2023. URL https://doi.org/10.3390/ bdcc7010020. Yushun Dong, Jing Ma, Song Wang, Chen Chen, and Jundong Li. Fairness in graph mining: A survey. IEEE Trans. Knowl. Data Eng., 35(10):10583–10602, 2023. URL https://doi.org/10.1109/TKDE.2023. 3265598. Alina Ene and Huy L. Nguyen. A nearly-linear time algorithm for submodular maximization with a knapsack constraint. In *46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)*, pp. 53:1–53:12, Dagstuhl, Germany, 2019a. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL https://doi.org/10.4230/LIPIcs.ICALP.2019.53. Alina Ene and Huy L. Nguyen. Towards nearly-linear time algorithms for submodular maximization with a matroid constraint. In *46th International Colloquium on Automata, Languages, and Programming (ICALP* 2019), pp. 54:1–54:14, Dagstuhl, Germany, 2019b. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL https://doi.org/10.4230/LIPIcs.ICALP.2019.54. Uriel Feige. A threshold of ln n for approximating set cover. *J. ACM*, 45(4):634–652, 1998. URL https: //doi.org/10.1145/285055.285059. Moran Feldman, Zeev Nutov, and Elad Shoham. Practical budgeted submodular maximization. *Algorithmica*, 85(5):1332–1371, 2022. URL https://doi.org/10.1007/s00453-022-01071-2. Chao Feng and Chao Qian. Multi-objective submodular maximization by regret ratio minimization with theoretical guarantee. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(14):12302–12310, 2021. URL https://doi.org/10.1609/aaai.v35i14.17460. Yuval Filmus and Justin Ward. Monotone submodular maximization over a matroid via non-oblivious local search. *SIAM J. Comput.*, 43(2):514–542, 2014. URL https://doi.org/10.1137/130920277. Marshall L. Fisher, George L. Nemhauser, and Laurence A. Wolsey. An analysis of approximations for maximizing submodular set functions–II. In Michel L. Balinski and Alan J. Hoffman (eds.), *Polyhedral* Combinatorics: Dedicated to the memory of D.R. Fulkerson, pp. 73–87. Springer, Berlin, Heidelberg, 1978. URL https://doi.org/10.1007/BFb0121195. Alan M. Frieze. A cost function property for plant location problems. *Math. Program.*, 7(1):245–248, 1974. URL https://doi.org/10.1007/BF01585521. Marwa El Halabi, Slobodan Mitrovic, Ashkan Norouzi-Fard, Jakab Tardos, and Jakub Tarnawski. Fairness in streaming submodular maximization: Algorithms and hardness. *Advances in Neural Information* Processing Systems, 33:13609–13622, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ 9d752cb08ef466fc480fba981cfa44a1-Abstract.html. Marwa El Halabi, Federico Fusco, Ashkan Norouzi-Fard, Jakab Tardos, and Jakub Tarnawski. Fairness in streaming submodular maximization over a matroid constraint. In *Proceedings of the 40th International* Conference on Machine Learning, pp. 9150–9171. PMLR, 2023. URL https://proceedings.mlr.press/ v202/el-halabi23a.html. Tianyuan Jin, Yu Yang, Renchi Yang, Jieming Shi, Keke Huang, and Xiaokui Xiao. Unconstrained submodular maximization with modular costs: Tight approximation and application to profit maximization. Proc. VLDB Endow., 14(10):1756–1768, 2021. URL http://www.vldb.org/pvldb/vol14/p1756-jin.pdf. David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '03), pp. 137–146. Association for Computing Machinery, 2003. URL https://doi.org/ 10.1145/956750.956769. Samir Khuller, Anna Moss, and Joseph (Seffi) Naor. The budgeted maximum coverage problem. Inf. Process. Lett., 70(1):39–45, 1999. URL https://doi.org/10.1016/S0020-0190(99)00031-9. Andreas Krause and Daniel Golovin. Submodular function maximization. In *Tractability: Practical Approaches to Hard Problems*, pp. 71–104. Cambridge University Press, Cambridge, UK, 2014. URL https://doi.org/10.1017/CBO9781139177801.004. Andreas Krause and Carlos Guestrin. A note on the budgeted maximization of submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University, 2005. URL https://las.inf.ethz. ch/files/krause05note.pdf. Andreas Krause, H. Brendan McMahan, Carlos Guestrin, and Anupam Gupta. Robust submodular observation selection. *J. Mach. Learn. Res.*, 9(93):2761–2801, 2008. URL https://jmlr.org/papers/v9/ krause08b.html. Ariel Kulik, Roy Schwartz, and Hadas Shachnai. A refined analysis of submodular greedy. *Oper. Res. Lett.*, 49(4):507–514, 2021. URL https://doi.org/10.1016/j.orl.2021.04.006. Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In *Proceedings of the 13th ACM SIGKDD International* Conference on Knowledge Discovery and Data Mining (KDD '07), pp. 420–429. Association for Computing Machinery, 2007. URL https://doi.org/10.1145/1281192.1281239. Wenxin Li, Moran Feldman, Ehsan Kazemi, and Amin Karbasi. Submodular maximization in clean linear time. *Advances in Neural Information Processing Systems*, 35:17473– 17487, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/hash/ 6faf3b8ed0df532c14d0fc009e451b6d-Abstract-Conference.html. Hui Lin and Jeff Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 912–920. Association for Computational Linguistics, 2010. URL https://aclanthology.org/N10-1134/. Yuzong Liu, Kai Wei, Katrin Kirchhoff, Yisong Song, and Jeff A. Bilmes. Submodular feature selection for high-dimensional acoustic score spaces. In *IEEE International Conference on Acoustics, Speech and* Signal Processing (ICASSP), pp. 7184–7188. IEEE, 2013. URL https://doi.org/10.1109/ICASSP.2013. 6639057. Anay Mehrotra and Nisheeth K. Vishnoi. Maximizing submodular functions for recommendation in the presence of biases. In *Proceedings of the ACM Web Conference 2023 (WWW '23)*, pp. 3625–3636. Association for Computing Machinery, 2023. URL https://doi.org/10.1145/3543507.3583195. Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, and Amin Karbasi. Fast constrained submodular maximization: Personalized data summarization. In Proceedings of The 33rd International Conference on Machine Learning (ICML), pp. 1358–1367. PMLR, 2016. URL http://proceedings.mlr.press/v48/ mirzasoleiman16.html. Alan Mislove, Bimal Viswanath, Krishna P. Gummadi, and Peter Druschel. You are who you know: Inferring user profiles in online social networks. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM '10), pp. 251–260. Association for Computing Machinery, 2010. URL https://doi.org/10.1145/1718487.1718519. George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions–I. *Math. Program.*, 14:265–294, 1978. URL https://doi.org/10. 1007/BF01588971. Naoto Ohsaka and Tatsuya Matsuoka. Approximation algorithm for submodular maximization under submodular cover. In *Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence* (UAI), pp. 792–801. PMLR, 2021. URL https://proceedings.mlr.press/v161/ohsaka21a.html. Chao Qian, Yang Yu, and Zhi-Hua Zhou. Subset selection by pareto optimization. Advances in Neural Information Processing Systems, 28:1774–1782, 2015. URL https://proceedings.neurips.cc/paper/ 2015/hash/b4d168b48157c623fbd095b4a565b5bb-Abstract.html. Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, and Zhi-Hua Zhou. Subset selection under noise. *Advances* in Neural Information Processing Systems, 30:3560–3570, 2017. URL https://proceedings.neurips. cc/paper/2017/hash/d7a84628c025d30f7b2c52c958767e76-Abstract.html. Chao Qian, Chao Bian, and Chao Feng. Subset selection by pareto optimization with recombination. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03):2408–2415, 2020. URL https: //doi.org/10.1609/aaai.v34i03.5621. Vahid Roostapour, Aneta Neumann, Frank Neumann, and Tobias Friedrich. Pareto optimization for subset selection with dynamic cost constraints. *Artif. Intell.*, 302:103597, 2022. URL https://doi.org/10. 1016/j.artint.2021.103597. Tasuku Soma and Yuichi Yoshida. Regret ratio minimization in multi-objective submodular function maximization. *Proceedings of the AAAI Conference on Artificial Intelligence*, 31(1):905–911, 2017. URL https://doi.org/10.1609/aaai.v31i1.10652. Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. *Oper.* Res. Lett., 32(1):41–43, 2004. URL https://doi.org/10.1016/S0167-6377(03)00062-2. Jing Tang, Xueyan Tang, Andrew Lim, Kai Han, Chongshou Li, and Junsong Yuan. Revisiting modified greedy algorithm for monotone submodular maximization with a knapsack constraint. *Proc. ACM Meas.* Anal. Comput. Syst., 5(1):08:1–08:22, 2021. URL https://doi.org/10.1145/3447386. Shaojie Tang. When social advertising meets viral marketing: Sequencing social advertisements for influence maximization. *Proceedings of the AAAI Conference on Artificial Intelligence*, 32(1):176–183, 2018. URL https://doi.org/10.1609/aaai.v32i1.11306. Shaojie Tang and Jing Yuan. Beyond submodularity: a unified framework of randomized set selection with group fairness constraints. *J. Comb. Optim.*, 45(4):102, 2023. URL https://doi.org/10.1007/ s10878-023-01035-4. Alfredo Torrico, Mohit Singh, Sebastian Pokutta, Nika Haghtalab, Joseph (Seffi) Naor, and Nima Anari. Structured robust submodular maximization: Offline and online algorithms. *INFORMS J. Comput.*, 33 (4):1590–1607, 2021. URL https://doi.org/10.1287/ijoc.2020.0998. Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, and Yair Zick. Group-fairness in influence maximization. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 5997–6005. International Joint Conferences on Artificial Intelligence Organization, 2019. URL https://doi.org/10.24963/ijcai.2019/831. Rajan Udwani. Multi-objective maximization of monotone submodular functions with cardinality constraint. Advances in Neural Information Processing Systems, 31:9513–9524, 2018. URL https://proceedings. neurips.cc/paper/2018/hash/7e448ed9dd44e6e22442dac8e21856ae-Abstract.html. Jan Vondrak. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC '08), pp. 67–74. Association for Computing Machinery, 2008. URL https://doi.org/10.1145/1374376.1374389. Yanhao Wang, Jiping Zheng, and Fanxu Meng. Improved algorithm for regret ratio minimization in multiobjective submodular maximization. *Proceedings of the AAAI Conference on Artificial Intelligence*, 37 (10):12500–12508, 2023. URL https://doi.org/10.1609/aaai.v37i10.26472. Yanhao Wang, Yuchen Li, Francesco Bonchi, and Ying Wang. Balancing utility and fairness in submodular maximization. In Proceedings of the 27th International Conference on Extending Database Technology, EDBT 2024, Paestum, Italy, March 25 - March 28, pp. 1–14. OpenProceedings.org, 2024. URL https: //doi.org/10.48786/edbt.2024.01. Laurence A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. *Combinatorica*, 2(4):385–393, 1982. URL https://doi.org/10.1007/BF02579435. Grigory Yaroslavtsev, Samson Zhou, and Dmitrii Avdiukhin. "Bring your own greedy"+max: Nearoptimal 1/2-approximations for submodular knapsack. In *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 3263–3274. PMLR, 2020. URL http://proceedings.mlr.press/v108/yaroslavtsev20a.html. ## A Proofs Of Lemmas And Theorems A.1 Proof Of Lemma 1 Lemma 1. Any set S ∈ I *with* F ′ α,β(S) ≥ d + 1 − ε 2 *must be a* (δα − ε 2 , δβ − ε 2 )*-approximate solution to* SMFR, where δ ∈ (0, 1 − 1/e] *is the approximation factor of the algorithm used for* SMK or SMM. Proof. We first consider the two special cases of α = 0 and β = 0. When α = 0 or β = 0, if F ′ α,β(S) > d+1− ε 2 , we will have gi(S) βOPT′gi> 1 − ε 2 for every i ∈ [d] orf(S) αOPT′f> 1 − ε 2 . In the general case of *α, β >* 0, if F ′ α,β(S) > d + 1 − ε 2 , we will have f(S) αOPT′f > 1 − ε 2 and gi(S) βOPT′gi > 1 − ε 2 for every i ∈ [d] at the same time. Thus, it holds that $$f(S)\geq(1-\frac{\varepsilon}{2})\alpha\mathsf{OPT}_{f}^{\prime}\geq\delta\alpha(1-\frac{\varepsilon}{2})\mathsf{OPT}_{f}\geq(\delta\alpha-\frac{\varepsilon}{2})\mathsf{OPT}_{f}$$ $$g_{i}(S)\geq(1-\frac{\varepsilon}{2})\partial\mathsf{OPT}_{g_{i}}^{\mathsf{r}}\geq\delta\beta(1-\frac{\varepsilon}{2})\mathsf{OPT}_{g_{i}}\geq(\delta\beta-\frac{\varepsilon}{2})\mathsf{OPT}_{g_{i}},\,\forall i\in[d].$$ Therefore, $S$ is a $(\delta\alpha-\frac{\varepsilon}{2},\delta\beta-\frac{\varepsilon}{2})$-approximate solution to SMFR. $$\square$$ ## A.2 Proof Of Lemma 2 Lemma 2. If there is no set S ∈ I *with* F ′ α,β(S) = d + 1, no (α, β)*-approximate solution to* SMFR *exists.* Proof. If F ′ α,β(S) < d + 1, then we will have f(S) < αOPT′f ≤ αOPTf or there is some i ∈ [d] with gi(S) < βOPT′gi ≤ βOPTgi . Therefore, if F ′ α,β(S) < d + 1, S will not be an (*α, β*)-approximate solution to SMFR. And if there is no set S ∈ I with F ′ α,β(S) = d + 1, no (*α, β*)-approximate solution to SMFR exists. ## A.3 Proof Of Theorem 1 Theorem 1. For SMFR *with a knapsack constraint* Ik, SMFR-Saturate *runs in* O(dt(A) + n 2 ε log 1 ε ) time, where t(A) is the time complexity of the δ*-approximation algorithm for* SMK*, and provides a set* S of solutions with the following properties: (1) |S| = O( 1 ε )*, (2)* c(S) = O(k log d ε ) for each S ∈ S, (3) for each (α ∗, β∗)*-approximate Pareto optimal solution* S ∗to SMFR*, there must exist its corresponding solution* S ∈ S *such that* f(S) ≥ (δα∗ − ε)OPTf and gi(S) ≥ (δβ∗ − ε)OPTgi , ∀i ∈ [d]. Proof. Let us first analyze the time complexity of SMFR-Saturate for a knapsack constraint Ik. First, it runs the SMK algorithm d + 1 times to compute OPT′f and OPT′gi for every i ∈ [d]. Then, it iterates over ⌈ 2 ε ⌉ values of β in the for loop. For each value of β, it attempts to use O(log 1 ε ) different values of α in the bisection search. Finally, the subroutine CostEffectiveGreedy takes O(n 2) time for SC on each F ′ α,β. In summary, the time complexity of SMFR-Saturate for a knapsack constraint Ik is O(dt(A) + n 2 ε log 1 ε ) time, where t(A) is the time complexity of the SMK algorithm. For the solution S of SMFR-Saturate, it is easy to see that *|S| ≤ ⌈* 2 ε ⌉ and thus |S| = O( 1 ε ) because SMFRSaturate adds at most one set to S for each value of β. Then, due to the condition in the while loop of the subroutine CostEffectiveGreedy, it must hold that c(S) ≤ k(1 + ln 2d+2 ε) and thus c(S) = O(k log d ε ) for each S ∈ S. Finally, given an (α ∗, β∗)-approximate Pareto optimal solution S ∗, there must exist a value of β in the for loop such that 0 ≤ β ∗ − β ≤ ε 2 . Let Sαmin,β be the solution of SMFR-Saturate w.r.t. such β and its corresponding αmin. Since F ′ αmin,β(Sαmin,β) ≥ d+1− ε 2 , Sαmin,β is a (δαmin− ε 2 , δβ− ε 2 )-approximate solution according to Lemma 1. Furthermore, we have F ′ αmax,β(Sgr) < d + 1 − ε 2 , where Sgr is the solution w.r.t. F ′ αmax,β with a relaxed knapsack constraint for a budget k(1 + ln 2d+2 ε) returned by the subroutine CostEffectiveGreedy in Algorithm 1, and αmax − αmin < ε 2 . Suppose that S ′ gr is the first intermediate subset of Sgr with c(S ′ gr) ≥ k ln 2d+2 εconstructed using the cost-effective greedy procedure. Let S ∗ k = arg maxS∈Ik F ′ αmax,β(S) and OPTF ′*αmax,β* = F ′ αmax,β(S ∗ k ). According to the monotonicity and submodularity of F ′ αmax,β, we have $$F^{\prime}_{\alpha_{max},\beta}(S^{*}_{k})\leq F^{\prime}_{\alpha_{max},\beta}(S^{(i)}_{gt})+\sum_{v\in S^{*}_{k}\setminus S^{(i)}_{pv}}\Delta(v|S^{(i)}_{gt})=F^{\prime}_{\alpha_{max},\beta}(S^{(i)}_{gt})+\sum_{v\in S^{*}_{k}\setminus S^{(i)}_{gt}}\frac{c(v)\cdot\Delta(v|S^{(i)}_{gt})}{c(v)},$$ for any S (i) gr ⊂ S ′ gr after i iterations and ∆(v|S (i) gr ) = F ′ αmax,β(S (i) gr ∪ {v}) − F ′ αmax,β(S (i) gr ). Let u ∗ i be the i-th item added to S ′ gr for any i = 1*, . . . ,* |S ′ gr|. Based on the cost-effective greedy selection in Algorithm 1, $$\frac{\Delta(u_{i+1}^{*}|S_{g r}^{(i)})}{c(u_{i+1}^{*})}\geq\frac{\Delta(v|S_{g r}^{(i)})}{c(v)}$$ for any v ∈ S ∗ k \ S (i) gr and i ∈ [0*, . . . ,* |S ′ gr − 1|] because c(v) ≤ k for any v ∈ S ∗ k and thus no item from S ∗ k is excluded from consideration due to budget violation when u ∗ i+1 is added to S (i) gr . Therefore, we further obtain $$F_{\alpha_{m a r},\beta}^{\prime}(S_{k}^{u})\leq F_{\alpha_{m a r},\beta}^{\prime}(S_{g r}^{(i)})+\frac{\Delta(u_{i+1}^{*}|S_{g r}^{(i)})}{c(u_{i+1}^{*})}\sum_{v\in S_{k}^{\prime}\setminus S_{g r}^{(i)}}c(v)\leq F_{\alpha_{m a r},\beta}^{\prime}(S_{g r}^{(i)})+\frac{\Delta(u_{i+1}^{*}|S_{g r}^{(i)})}{c(u_{i+1}^{*})}\cdot k,$$ After rearranging the inequality above, we have $$F_{\alpha_{m a x},\beta}^{\prime}(S_{k}^{*})-F_{\alpha_{m a x},\beta}^{\prime}(S_{g r}^{(i+1)})\leq\big(1-\frac{c(u_{i+1}^{*})}{k}\big)\big(F_{\alpha_{m a x},\beta}^{\prime}(S_{k}^{*})-F_{\alpha_{m a x},\beta}^{\prime}(S_{g r}^{(i)})\big).$$ Moreover, since 1 − x ≤ e −xfor any x > 0, it holds that 1 − c(u ∗ i+1) k ≤ exp(− c(u ∗ i+1) k). Therefore, $$F^{\prime}_{\alpha_{max},\beta}(S^{*}_{k})-F^{\prime}_{\alpha_{max},\beta}(S^{(i+1)}_{gr})\leq\exp(-\frac{c(u^{*}_{i+1})}{k})\cdot\big{(}F^{\prime}_{\alpha_{max},\beta}(S^{*}_{k})-F^{\prime}_{\alpha_{max},\beta}(S^{(i)}_{gr})\big{)}.\tag{4}$$ By applying Eq. 4 recursively to i = 0*, . . .* |S ′ gr| − 1, we have F ′ αmax,β(S ∗ k ) − F ′ αmax,β(S ′ gr) ≤ exp(− c(u ∗ i+1) k) ·F ′ αmax,β(S ∗ k ) − F ′ αmax,β(S (i) gr ) ≤ exp(− c(u ∗ i+1) k) exp(− c(u ∗ i ) k)F ′ αmax,β(S ∗ k ) − F ′ αmax,β(S (i−1) gr ) ≤ . . . . . . ≤ exp(− P|S ′ gr|−1 i=0 c(u ∗ i+1) k)F ′ αmax,β(S ∗ k ) = exp(− c(S ′ gr) k)F ′ αmax,β(S ∗ k ) = exp(− c(S ′ gr) k)OPTF ′αmax,β . Since c(S ′ gr) ≥ k ln 2d+2 ε, it holds that $$F_{\alpha_{m a x},\beta}^{\prime}(S_{g r}^{\prime})\geq(1-\exp{\big(}-{\frac{c(S_{g r}^{\prime})}{k}}{\big)})\mathsf{OPT}_{F_{\alpha_{m a x},\beta}^{\prime}}^{\prime}\geq(1-{\frac{\varepsilon}{2d+2}})\mathsf{OPT}_{F_{\alpha_{m a x},\beta}^{\prime}}^{\prime}$$ In addition, F ′ αmax,β(Sgr) ≥ F ′ αmax,β(S ′ gr) since S ′ gr ⊆ Sgr. Therefore, we have OPTF ′*αmax,β* < d + 1 and, according to Lemma 1, there does not exist any (αmax, β)-approximate solution of cost at most k. Since S ∗is an (α ∗, β∗)-approximate Pareto optimal solution and β ≤ β ∗, S ∗ must be an (α ∗, β)-approximate solution of cost at most k. As such, we obtain αmax > α∗ and αmin > α∗ − ε 2 . Because we have shown that Sαmin,β is a (δαmin − ε 2 , δβ − ε 2 )-approximate solution, Sαmin,β is guaranteed to be a (δα∗ − *ε, δβ*∗ − ε)- approximate solution. If Sαmin,β is included in S, we will conclude the proof directly; otherwise, the solution in S dominating Sαmin,β can confirm our conclusion. ## A.4 Proof Of Theorem 2 Theorem 2. For SMFR *with a matroid constraint* I(M), SMFR-Saturate *runs in* O(dt(A) + nr ε log2 d ε ) time, where t(A) is the time complexity of the δ*-approximation algorithm for* SMM*, and provides a set* S of solutions with the following properties: (1) |S| = O( 1 ε )*, (2)* |S| = O(r log d ε ) for each S ∈ S*, (3) for* each (α ∗, β∗)*-approximate Pareto optimal solution* S ∗to SMFR*, there must exist its corresponding solution* S ∈ S *such that* f(S) ≥ (δα∗ − ε)OPTf and gi(S) ≥ (δβ∗ − ε)OPTgi , ∀i ∈ [d]. Proof. Let us analyze the time complexity of SMFR-Saturate for a matroid constraint I(M). First, it runs the SMM algorithm d + 1 times to compute OPT′f and OPT′gi for every i ∈ [d]. Then, it iterates over ⌈ 2 ε ⌉ values of β in the for loop. For each value of β, it attempts to use O(log 1 ε ) different values of α in the bisection search. Finally, the subroutine IterativeGreedy takes O(nr) time per round and runs in O(log d ε ) rounds. In summary, the time complexity of SMFR-Saturate for a matroid constraint I(M) is O(dt(A) + nr ε log d ε log 1 ε ) time, where t(A) is the time complexity of the SMM algorithm, and can be simplified as O(dt(A) + nr ε log2 d ε ). For the solution S of SMFR-Saturate, it is easy to see that *|S| ≤ ⌈* 2 ε ⌉ and thus |S| = O( 1 ε ) because SMFRSaturate adds at most one set to S for each value of β. Then, because the subroutine IterativeGreedy runs in at most 1 + ⌈log2 d+1 ε ⌉ rounds and the size of each Slis bounded by the rank r of the matroid M, it must hold that |S| ≤ r · (1 + ⌈log2 d+1 ε ⌉) and thus |S| = O(r log d ε ) for each S ∈ S. Finally, given an (α ∗, β∗)-approximate Pareto optimal solution S ∗, there must exist a value of β in the for loop such that 0 ≤ β ∗ − β ≤ ε 2 . Let Sαmin,β be the solution of SMFR-Saturate w.r.t. such β and its corresponding αmin. Since F ′ αmin,β(Sαmin,β) ≥ d + 1 − ε 2 , Sαmin,β is a (δαmin − ε 2 , δβ − ε 2 )-approximate solution according to Lemma 1. Furthermore, we have F ′ αmax,β(Sgr) < d+ 1− ε 2 , where Sgr is the solution w.r.t. F ′ αmax,β returned by the subroutine IterativeGreedy in Algorithm 1, and αmax − αmin < ε 2 . Since IterativeGreedy runs a 1 2 -approximation greedy algorithm for submodular maximization with matroid constraints in each round, we have $$F_{\alpha_{m a x},\beta}^{\prime}(S_{1})-F_{\alpha_{m a x},\beta}^{\prime}(\emptyset)\geq\left(1-\frac{1}{2}\right)\cdot\operatorname*{max}_{S^{\prime}\in\mathcal{I}(\mathcal{M})}(F_{\alpha_{m a x},\beta}^{\prime}(S^{\prime})-F_{\alpha_{m a x},\beta}^{\prime}(\emptyset)).$$ Since fe(S) = f(S ∪ A) − f(S) is nonnegative, monotone, and submodular if f(·) is nonnegative, monotone, and submodular for any A ⊆ V , we can extend the above result for each round l > 1 as follows: $$F^{\prime}_{\alpha_{max},\beta}(\cup_{j=1}^{l}S_{j})-F^{\prime}_{\alpha_{max},\beta}(\cup_{j=1}^{l-1}S_{j})\geq\left(1-\frac{1}{2}\right)\cdot\max_{S_{j}^{\prime}\in\mathcal{L}(\mathcal{M})}(F^{\prime}_{\alpha_{max},\beta}(S^{\prime}_{1}\cup(\cup_{j=1}^{l-1}S_{j}))-F^{\prime}_{\alpha_{max},\beta}(\cup_{j=1}^{l-1}S_{j}))$$ $$\geq\left(1-\frac{1}{2}\right)\cdot\max_{S_{j}^{\prime}\in\mathcal{L}(\mathcal{M})}(F^{\prime}_{\alpha_{max},\beta}(S^{\prime})-F^{\prime}_{\alpha_{max},\beta}(\cup_{j=1}^{l-1}S_{j})).$$ By induction, we obtain the following: $$F_{\alpha_{m a x},\beta}^{\prime}(\cup_{j=1}^{l}S_{j})\geq\left(1-\frac{1}{2^{l}}\right)\cdot\operatorname*{max}_{S^{\prime}\in\mathcal{I}(\mathcal{M})}F_{\alpha_{m a x},\beta}^{\prime}(S^{\prime})=\left(1-\frac{1}{2^{l}}\right)\mathsf{OPT}_{F_{\alpha_{m a x},\beta}^{\prime}}.$$ Since $S_{gr}=\cup_{j=1}^{1+\lceil\log_2\frac{d+1}{\varepsilon}\rceil}S_j$, we have: . $$F_{\alpha_{m a x},\beta}^{\prime}(S_{g r})\geq\left(1-\frac{1}{2^{l}}\right)\cdot\mathsf{0P T}_{F_{\alpha_{m a x},\beta}^{\prime}}\geq\left(1-\frac{\varepsilon}{2d+2}\right)\cdot\mathsf{0P T}_{F_{\alpha_{m a x},\beta}^{\prime}}.$$ Therefore, we have OPTF ′*αmax,β* < d + 1 and, according to Lemma 1, there does not exist any (αmax, β)- approximate solution under matroid constraint I(M). Since S ∗is an (α ∗, β∗)-approximate Pareto optimal solution and β ≤ β ∗, S ∗ must be an (α ∗, β)-approximate solution under matroid constraint I(M). As such, we obtain αmax > α∗ and αmin > α∗ − ε 2 . Because we have shown that Sαmin,β is a (δαmin − ε 2 , δβ − ε 2 )- approximate solution, Sαmin,β is guaranteed to be a (δα∗ − *ε, δβ*∗ − ε)-approximate solution. If Sαmin,β is included in S, we will conclude the proof directly; otherwise, the solution in S dominating Sαmin,β can confirm our conclusion. ## B Ilp Formulations In this section, we present the integer linear programming (ILP) formulations for the *Maximum Coverage* and *Recommendation* problems, specifically tailored to the SMFR problem, as defined in Section 5.1 and Section 5.2, respectively. Any ILP solver can be employed to identify optimal solutions for small SMFR instances on *Maximum Coverage* and *Recommendation*. For our experimental results in Section 5 and Appendix C, we refer to this approach as the OPT algorithm. Note that these formulations are specifically designed for these settings and cannot be applied directly to general SMFR problems. Problems 5 and 6 are specialized versions of the standard ILP formulation of SMFR on *Maximum Coverage*6 in Section 5.1, with knapsack and partition matroid constraints, respectively. subject to X $\sum_{l\in[n]}c_{l}x_{l}\leq k$ $\sum_{e_{j}\in S_{l}}x_{l}\geq y_{j},$$\forall j\in[m]$ $\sum_{e_{j}\in C_{i}}y_{j}\geq\beta\,0{\rm PT}_{g_{i}},$$\forall i\in[d]$ $y_{j}\in\{0,1\},$$\forall j\in[m]$ $x_{l}\in\{0,1\},$$\forall l\in[n]$ max X j∈[m] yj (5) max X j∈[m] subject to X Sl∈Vt xl ≤ k, ∀t ∈ [p] X ej∈Sl xl ≥ yj , ∀j ∈ [m] X ej∈Ci yj ≥ β OPTgi , ∀i ∈ [d] yj ∈ {0, 1}, ∀j ∈ [m] xl ∈ {0, 1}, ∀l ∈ [n] yj (6) These ILPs maximize the *coverage* (i.e., the utility function f in SMFR) on a universe U = {e1*, . . . , e*m} of m elements and a collection V = {S1*, . . . , S*n} of n sets (Sl ⊆ V, ∀l ∈ [n]), subject to additional coverage constraints on each subset C1*, . . . , C*d of U (w.r.t. each representativeness function g1*, . . . , g*d in SMFR). In both formulations, xlindicates whether Sl ∈ V is included in the solution S, and yj indicates whether ej ∈ U is covered by S. Problem 5 is specific to the knapsack constraint defined on a budget k ∈ Z + and a cost function c(·). Problem 6 is specific to the partition matroid constraint, where V is divided into p disjoint partitions V1*, . . . , V*p and at most k sets can be selected from each partition. Solving optimally Problems 5 and 6 with β = 0 and U = Ci yields the value of OPTgi for each representativeness function gi corresponding to the knapsack and the partition matroid constraints, respectively. Problems 7 and 8 are specialized versions of the ILP formulation for capacitated facility location7, with a benefit matrix B = {bjl = ⟨vi, vj ⟩ : j ∈ [m], l ∈ [n]} ∈ R m×n (m = |T| and n = |V |), specifically designed for SMFR on the *Recommendation* setting in Section 5.2, with knapsack and partition matroid constraints, respectively. max X (7) $$\max$$ $$\text{subject to}$$ $$\forall j\in[m]$$ $$\forall j\in[m],l\in[n]$$ $$\forall i,\hspace{1cm}\forall i\in[d]$$. $$\sum_{j\in[m]}\sum_{l\in[n]}b_{j l}y_{j l}$$ subject to X $\begin{array}{l}\overline{l\in[n]}\\ \\ \sum_{l\in[n]}y_{jl}\leq1,\\ \\ \leq\end{array}$ $$\sum_{l\in[n]}c_{l}x_{l}\leq k$$ yjl ≤ 1, ∀j ∈ [m] $$\begin{array}{r}{y_{j l}\leq x_{l},}\\ {\sum_{e l\in C_{i}}x_{l}\geq\beta\,\mathbb{O}\mathrm{PT}_{g i},}\end{array}$$ yjl ≤ xl, ∀j ∈ [m], l ∈ [n] yjl ∈ {0, 1}, ∀j ∈ [m], l ∈ [n] xl ∈ {0, 1}, ∀l ∈ [n] max X $$\sum_{j\in[m]}\sum_{l\in[n]}b_{j l}y_{j l}$$ bjlyjl (8) xl ≤ k, ∀t ∈ [p] el∈Vt X l∈[n] X el∈Ci xl ≥ β OPTgi $$\forall t\in[p]$$ $$\forall j\in[m]$$ yjl ≤ 1, ∀j ∈ [m] yjl ≤ xl, ∀j ∈ [m], l ∈ [n] $$\begin{array}{c}{{\forall j\in[m],l\in[n]}}\\ {{\qquad\qquad\forall i\in[d]}}\end{array}$$ yjl ∈ {0, 1}, ∀j ∈ [m], l ∈ [n] xl ∈ {0, 1}, ∀l ∈ [n] 6https://en.wikipedia.org/wiki/Maximum_coverage_problem 7https://en.wikipedia.org/wiki/Optimal_facility_location Given a set V = {e1*, . . . , e*n} of n items, both ILPs maximize the total *benefit* (i.e., the utility function f in SMFR) provided by a set S ⊆ V for a subset T ⊆ V of m items, subject to representativeness constraints on each C1*, . . . , C*d subset of V (i.e., the representativeness functions g1*, . . . , g*d in SMFR). In both formulations, xlindicates whether el ∈ V is included in the solution S, and yjl indicates whether ej ∈ T takes the benefit from item el ∈ V . Problem 7 is specific to the knapsack constraint defined on a budget k ∈ Z + and a cost function c(·). Problem 8 is specific to the partition matroid constraint, where V is divided into p disjoint partitions V1*, . . . , V*p. For the knapsack constraint, the value of OPTgi for each representativeness function gi can be easily computed by sorting the items in Ci ascendingly according to their costs and finding the maximum number of items whose cumulative cost does not exceed k. For the partition matroid constraint, the value of OPTgi for each representativeness function giis trivially the maximum between k and |Ci|. ## C Additional Experiments In this section, we complement the experimental analysis described in Sections 5.1 and 5.2. ## C.1 Additional Experiments On Maximum Coverage In this section, we use the same data sets and settings as in Section 5.1 for the *Maximum Coverage* problem. For the *Facebook* data set, we alternatively define the knapsack constraint in the same way as for the DBLP data set. For the *DBLP* data set, we alternatively define a partition matroid constraint based on the geographic area of the researchers, with five groups: Asia, Europe, North America, *Oceania*, and South America. We also set the same upper bound k ∈ Z + for each geographic group, resulting in a partition matroid of rank r = 5k. Figures 5 and 6 present the performance of each algorithm for different instances of SMFR on *Maximum Coverage* with knapsack and matroid constraints on the *Facebook* and *DBLP* data sets, respectively. Generally, we observe trends similar to those already presented in Section 5.1, which further confirm the effectiveness of SMFR-Saturate. ## C.2 Additional Experiments On Recommendation In this section, we use the same data sets and settings as in Section 5.2 for the *Recommendation* problem. For the *MovieLens* data set, we alternatively define a knapsack constraint by assigning to each item (movie) a random integer cost in the range [1, 10]. For the *X-Wines* data set, we alternatively define a partition matroid constraint based on the continent of origin for wine production: Africa, Asia, Europe, North America, *South* America, and *Oceania*. We also set the same upper bound k ∈ Z + for each geographic group, resulting in a partition matroid of rank r = 6k. Figures 7 and 8 present the performance of each algorithm for different instances of SMFR on *Recommendation* with knapsack and matroid constraints on the *MovieLens* and *XWines* data sets, respectively. Generally, we observe trends similar to those already presented in Section 5.2, which further confirm the effectiveness of SMFR-Saturate. ## C.3 Time Efficiency Figure 9 reports the running time (in seconds) of SMFR-Saturate, Saturate, BSM-Saturate, and SMSC for SMFR on both *Maximum Coverage* and *Recommendation* instances. We use the same settings as in Sections 5.1 and 5.2. In each plot, the x-axis represents the value of k in the knapsack or matroid constraint and the y-axis represents the running time (in seconds) used by each algorithm to solve an SMFR instance. We present the results for d = 1 and 4 in Figure 9. All algorithms take less than a minute to complete on each tested instance. SMFR-Saturate is faster than SMSC in all cases. For the knapsack constraints, SMFR-Saturate generally runs faster than or close to BSM-Saturate. However, for the matroid constraints, SMFR-Saturate is slower than BSM-Saturate. Saturate is the fastest method in most configurations. This is because Saturate does not allow for any trade-off between utility (f) and representativeness (g) by design and thus is run only once for each instance. However, all other algorithms should be run multiple times with different values of β or τ . ## C.4 Experiments On Large Data Sets Setup. To show the applicability of SMFR-Saturate to data sets larger than those used in Section 5, we performed additional experiments on two larger real-world data sets: *Pokec (Kosicky)* for Maximum Coverage and *MovieLens-25M* for *Recommendation*. The *Pokec (Kosicky)* data set is an undirected graph with 234, 320 nodes and 2, 417, 175 edges, extracted from the Pokec8 data set. Pokec itself is a directed graph that represents the follower-followee relationships of users in a Slovakian social network. *Pokec (Kosicky)* is a subgraph of the Pokec graph induced by nodes representing Pokec users who reside in the Kosicky region while ignoring the directions of the edges. The MovieLens-25M9 data set is a larger version of the *MovieLens* data set presented in Section 5. The only difference relies on the size: the *MovieLens-25M* data set consists of 25, 000, 095 ratings from 162, 541 users on 62, 423 movies. We evaluate the SMFR-Saturate algorithm and baseline methods on the *Pokec (Kosicky)* data set for the Maximum Coverage problem under a knapsack constraint, and on the *MovieLens-25M* data set for the *Recommendation* problem under a partition matroid constraint. We apply the same pre-processing and settings used for the *Maximum Coverage* problem on the *DBLP* data set (Section 5.1) and the *Recommendation* problem on the *MovieLens* data set (Section 5.2) to the *Pokec (Kosicky)* and *MovieLens-25M* data sets, respectively. Results. The top row of Figure 10 presents the performance of each algorithm for different instances of SMFR on *Maximum Coverage* with knapsack constraints on the *Pokec (Kosicky)* data set (see Figures 10a, 10b, and 10c); while the bottom row of Figure 10 presents the performance of each algorithm for different instances of SMFR on *Recommendation* with matroid constraints on the *MovieLens-25M* data set (Figures 10d, 10e, and 10f). Generally, we observe trends similar to those already presented for the other data sets for the same problems under the same kind of constraints. Figure 11 reports the running time (in seconds) of SMFR-Saturate, Saturate, BSM-Saturate, and SMSC for SMFR on both *Maximum Coverage* instances with knapsack constraints on the *Pokec (Kosicky)* data set and *Recommendation* instances with matroid constraints on the *MovieLens-25M* data set. We use the same settings as in Appendix C.3: In each plot, the x-axis represents the value of k in the knapsack or matroid constraint, and the y-axis represents the running time (in seconds) used by each algorithm to solve an SMFR instance. We present the results for d = 1 and 4 in Figure 11. Generally, we observe trends similar to those in Figures 9b, 9f, 9c, and 9g (see Appendix C.3). Execution times generally grow linearly with the size of the data sets in both applications. The above results have confirmed the applicability of SMFR-Saturate to larger data sets. 8https://snap.stanford.edu/data/soc-pokec.html 9https://grouplens.org/datasets/movielens/ ![22_image_0.png](22_image_0.png) ![22_image_1.png](22_image_1.png) Figure 6: Results for *Maximum Coverage* on the *DBLP* data set, with matroid constraints. ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) ![23_image_2.png](23_image_2.png) ![23_image_3.png](23_image_3.png) Figure 7: Results for *Recommendation* on the *MovieLens* data set, with knapsack constraints. Figure 8: Results for *Recommendation* on the *X-Wines* data set, with matroid constraints. ![24_image_0.png](24_image_0.png) Figure 9: Running times (in seconds) of SMFR-Saturate, Saturate, BSM-Saturate, and SMSC for SMFR when d = 1, 4. Here, the *Facebook* and *DBLP* data sets are used for *Maximum Coverage* (MC); the X-Wines and *MovieLens* data sets are used for *Recommendation* (RE). In addition, the matroid constraints (Mat.) are imposed on the *Facebook* and *MovieLens* data sets; the knapsack constraints (Kna.) are imposed on the *DBLP* and *X-Wines* data sets. ![24_image_1.png](24_image_1.png) Saturate BSM-Saturate SMSC SMFR-Saturate ![24_image_2.png](24_image_2.png) ![24_image_3.png](24_image_3.png) Figure 10: Results for *Maximum Coverage* on the *Pokec (Kosicky)* data set with knapsack constraints, and for *Recommendation* on the *MovieLens-25M* data set with matroid constraints. ![25_image_0.png](25_image_0.png) Figure 11: Running times (in seconds) of SMFR-Saturate, Saturate, BSM-Saturate, and SMSC for SMFR when d = 1, 4. Here, the *Pokec (Kosicky)* data set is used for *Maximum Coverage* (MC) with knapsack constraints (Kna.); the *MovieLens-25M* data set is used for *Recommendation* (RE) with matroid constraints (Mat.).
Review 1: Summary: This work introduces the problem of submodular maximization with fair representation (SMFR). In particular, it aims to maximize a (normalized) monotone submodular function subject to a knapsack or matroid constraint in a Pareto-optimal way with respect to $d$ representative objective functions $g_i$. The main idea is to recast an SMFR instance into mutiple instances of the submodular cover problem with different weights between the $f$ and $g_i$ objectives. The authors show that if we have a $\delta$-approximation algorithm for submodular maximization with a knapsack (or matroid) constraint that returns a $(\alpha,\beta)$-approximate Pareto optimal solution to SMFR, then there exists a $(\delta \alpha - \varepsilon, \delta \beta - \varepsilon)$-approximate solution with cost $O(k \log (d \ \varepsilon))$ where $k$ is the knapsack budget (or matroid rank) constraint. They also provide extensive experiments on real-world data that compares their algorithm on max coverage and recommendation tasks to true optimal solutions computed via integer programming. Strengths and Weaknesses: **Strengths** - This work proposes an interesting a novel way to maximize monotone submodular functions in the presence of structured constraints and $d > 1$ "representative" functions. In particular, this differs from past works that simply aim to maximize the *minimum* of the "fairness" objectives $g_i$. - The experiments are quite strong for an applied paper that studies submodular maximization. **Weaknesses** - The only concrete theoretical contribution is Theorem 1 and Theorem 2, which formalize how to recast the SMFR to a sequence of submodular cover problems. Requested Changes: **Questions** - [page 02] In Eq. (1), how is the max operator defined for a tuple of real values? If we use a lexicographic comparator, then the first representative function $g_1$ carries more weight than $g_d$. - [page 02] In the definition of Pareto optimal for $(\alpha, \beta)$-approximate solutions, instead of saying "there does not exist any ...", should you instead relax this to "you have not computed any" (i.e., a solution is Pareto-optimal w.r.t. a set of computed solutions? Otherwise, this quantifier includes optimal solution that you may not have computed. The same applies to the top of page 05. **Typos / suggestions** - [page 02] Consider putting Example 1 in a \paragraph so that the text is not all italcized, as it is somewhat hard to read for more than a few lines. - [page 04] Typo: "which ensures that the difference in the utilities of any two groups is As ..." - [page 04] In the beginning of Section 4, you do not need to reintroduce what SMFR means, as it was defined earlier. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper suggests the so-called “submodular maximization with fair representation (SMFR)” problem and obtains Pareto approximation algorithms for the problem subject to knapsack and matroid constraints. In this problem, one is given a main (monotone) submodular function f, and d other submodular functions g_1, …, g_d. A subset S is called (alpha, beta)-approx, if it is alpha-approx to the max of main function f, and beta-approx to the max of all the other d functions g_1, …, g_d simultaneously. Roughly speaking, the Pareto approximation is a set P of solutions, such that for every alpha, beta, if there is a (alpha, beta)-approx solution S’ then there is a (\delta alpha - eps, \delta beta - eps)-approx to it in the solution set P, where \delta is the approx-ratio of a black-box of an algorithm for submodular maximization subject to the matroid or knapsack constraints. The main result is that |P| = O(1 / eps) is sufficient for this goal, and the every solution set in P is of small size (linear in the size of the knapsack or the rank of the matroid). This can be found efficiently in poly time. Experiments are conducted to validate the effectiveness of the algorithm on real datasets. Strengths and Weaknesses: Strength: - The writing quality is good - The result of only O(1 / eps) solutions can already capture the Pareto frontier looks interesting and strong - The citations and comparison to related works are comprehensive - The experiments are solid Weakness: - The technical novelty is limited, and it seems to be a composition of several existing results in a relatively straightforward way - The size of the data sets used in the experiments looks small Requested Changes: - \mathcal{I} is not quantified in (1). Maybe you want to change to “subject to a knapsack or matroid constraint \mathcal{I}”, in the sentence immediately before (1)? - The meaning of max in (1) is not discussed in the paragraph immediately below it. While I understand that the exact definition is discussed in ”Our Contributions” part, it is still suggested to at least comment that the intuition is to maximize them simultaneously even though this goal can be tricky and you would discuss this next. - Page 2, I think “and at least one is strictly larger” should be put out of the parenthesis since this is an important part of the definition. And similarly for the one on page 5. - In the statement of lemma 1, the quantification of S sounds confusing. I suppose you meant “exists S \in \mathcal{I}”, instead of “for every S”. However, currently it reads “for any set S \in \mathcal{I}”, and it may confuse with “for every S”. Maybe replace “any” with “some”? Or, rephrase as e.g. “suppose S \in \mathcal{I} is any set that satisfies F’_{\alpha, \beta}(S) \geq d + 1 - eps / 2” Broader Impact Concerns: None. ================================================== Review 3: Summary: The paper solves the problem of submodular function maximization under a knapsack or matroid constraint. Unlike previous work, the paper is not maximizing a single objective function but rather a main function f in addition to a collection of d representation functions g_1,..., g_d. The authors suggest finding an (alpha,beta) Pareto optimal solution where the alpha and beta are the approximation factors for f and g1,..,g_d respectively. The main technical result is that an approximation algorithm for submodular maximization under knapsack or matroid constraints can be used to obtain a Pareto optimal approximate solution with close approximation factors. The paper further shows experimental results on several datasets and shows that the proposed algorithm has better performance in comparison to some baselines. Strengths and Weaknesses: ### Strengths: -I think the topic is important and interesting and it is nice to see that previous work can be used to obtain an approximate solution to this setting. -The experiments have been broad comparing to various baselines and datasets. ### Weaknesses: W1-The major weakness in my view has to do with the problem formulation. Specifically, Pareto optimality is in general a very weak guarantee. Why haven’t the authors considered other possibilities such as imposing bounds on the functions g_1, …, g_d or maximizing a weighted sum of the functions? These possibilities and their disadvantages should be discussed in the paper. W2-The paper would be more clear if the point behind functions g_1, …, g_d is clarified early on and separated from the knapsack/matroid constraints. Example 1 in the paper makes it seem as though the constraints (knapsack or matroid) are there to ensure fairness, but further ahead in the paper this is done by the representativeness objectives g_1, …, g_d W3-In the related work, the paper states that the previous work “returns only a single solution”. But is this not the case in this paper as well? W4-This issue is also about the Pareto optimality, consider the knapsack constraint in section 5.1. The functions g_i are counting vertices from different communities, can’t we end up with trivial solutions where for example one group has full representation and the other has none? Would that not be Pareto-optimal? What forbids this? This would not happen on the other hand if we imposed bounds. Requested Changes: Please see the points above under weaknesses. The paper would be much stronger if it could convince the reader that other more traditional options such as maximizing a weighted sum or imposing bounds are not the right thing to do here. Broader Impact Concerns: I don't think that a statement on ethical implications is needed. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The reviewers' consensus was that the proposed formulation is likely to be of limited interest. However, the formulation still makes sense, and the results are accurate and correct. So I think regarding the "accuracy, credibility and clarity", the paper is still doing good enough. They also agree that regardless of the technical depth and potential impact, the second criteria (for audience interest) is met. As a result, I would like to ask the authors to include a more thorough discussions of the advantages and limitations of this approach, as done through the rebuttal. ==================================================
# Generating Adversarial Examples With Task Oriented Multiobjective Optimization Monash University Trung Le trunglm@monash.edu Monash University He Zhao he.zhao@ieee.org CSIRO's Data61, Australia Quan Tran *qtran@adobe.com* Adobe Research Paul Montague *paul.montague@dst.defence.gov.au* Defence Science and Technology Group, Australia Anh Bui tuananh.bui@monash.edu Dinh Phung *dinh.phung@monash.edu* Monash University, VinAI Research Reviewed on OpenReview: *https: // openreview. net/ forum? id= XXXX* ## Abstract Deep learning models, even the-state-of-the-art ones, are highly vulnerable to adversarial examples. Adversarial training is one of the most efficient methods to improve the model's robustness. The key factor for the success of adversarial training is the capability to generate qualified and divergent adversarial examples which satisfy some objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models). Therefore, multi-objective optimization (MOO) is a natural tool for adversarial example generation to achieve multiple objectives/goals simultaneously. However, we observe that a naive application of MOO tends to maximize all objectives/goals equally, without caring if an objective/goal has been achieved yet. This leads to useless effort to further improve the goal-achieved tasks, while putting less focus on the goal-unachieved tasks. In this paper, we propose Task Oriented MOO to address this issue, in the context where we can explicitly define the goal achievement for a task. Our principle is to only maintain the goal-achieved tasks, while letting the optimizer spend more effort on improving the goal-unachieved tasks. We conduct comprehensive experiments for our Task Oriented MOO on various adversarial example generation schemes. The experimental results firmly demonstrate the merit of our proposed approach. Our code is available at https://github.com/tuananhbui89/TAMOO. ## 1 Introduction Deep neural networks are powerful models that achieve impressive performance across various domains such as bioinformatics (Spencer et al., 2015), speech recognition (Hinton et al., 2012), computer vision (He et al., 2016), and natural language processing (Vaswani et al., 2017). Despite achieving state-of-the-art performance, these models are extremely fragile, as one can easily craft small and imperceptible adversarial perturbations of input data to fool them, hence resulting in high misclassifications (Szegedy et al., 2014; Goodfellow et al., 2015). Accordingly, adversarial training (AT) (Madry et al., 2018; Zhang et al., 2019) has been proven to be one of the most efficient approaches to strengthen model robustness (Athalye et al., 2018). AT requires challenging models with divergent and qualified adversarial examples (Madry et al., 2018; Zhang et al., 2019; Bui et al., 2021b) so that the robustified models can defend against adversarial examples. Therefore, generating adversarial examples is an important research topic in Adversarial Machine Learning (AML). Several perturbation based attacks have been proposed, notably PGD (Madry et al., 2018), CW (Carlini & Wagner, 2017), and AutoAttack (Croce & Hein, 2020). Most of them aim to optimize a single objective/goal, e.g., maximizing the cross-entropy (CE) loss w.r.t. the ground-truth label (Goodfellow et al., 2015; Madry et al., 2018), maximizing the Kullback-Leibler (KL) divergence w.r.t. the predicted probabilities of a benign example (Zhang et al., 2019), or minimizing a combination of perturbation size and predicted loss to a targeted class as in Carlini & Wagner (2017). However, in many contexts, we need to find qualified adversarial examples satisfying multiple objectives/goals, e.g., finding an adversarial example that can *attack simultaneously multiple models* in an ensemble model (Pang et al., 2019; Bui et al., 2021b), finding an universal perturbation that can *attack simultaneously multiple* benign examples (Moosavi-Dezfooli et al., 2017). Obviously, these adversarial generations have a nature of multi-objective problem rather than a single-objective one. Consequently, using *single-objective* adversarial examples leads to a much less adversarial robustness in ensemble learning as discussed in Section 4.2 and Appendix D.2. Multi-Objective Optimization (MOO) (Désidéri, 2012) is an optimization problem to find a Pareto optimality that aims to optimize multiple objective functions. In a nutshell, MOO is a natural tool for the aforementioned multi-objective adversarial generations. However, a direct and naive application of MOO to generating robust adversarial examples for multiple models or ensemble of transformations does not work satisfactorily (cf. Appendix E). Concretely, it can be observed that the tasks are not optimized equally. The optimizing process focuses too much on one dominating task and can be trapped easily by it, hence leading to downgraded attack performances. Intuitively, for multi-objective adversarial generations, we can explicitly investigate if an objective or a task achieves or fails to achieve its goal (e.g., the current adversarial example can fool a model successfully or unsuccessfully in multiple models). To avoid some tasks dominating others during the optimization process, we can favour more the tasks that are failing and pay less attention to the tasks that are performing well. For example, in the context of attacking multiple models, we update an adversarial example x ato favor the models that x a has not attacked successfully yet, while trying to maintain the attack capability of x a on the already successful models. In this way, we expect that no task really dominates others and all tasks can be updated equally to fulfill their goals. Bearing this in mind, we propose a new framework named TAsk Oriented Multi-Objective Optimization (TA-MOO) with multi-objective adversarial generations as the demonstrating applications. Specifically, we learn a weight vector (i.e., each dimension is the weight for a task) lying on a simplex corresponding to all tasks. To favor the unsuccessful tasks while maintaining the success of the successful ones, we propose a geometry-based regularization term that represents the distance between the original simplex and a reduced simplex which involves the weight vectors for the currently unsuccessful tasks only. Furthermore, along with the original quadratic term of the standard MOO helping to improve all tasks, minimizing our geometry-based regularization term encourages the weights of the goal-achieved tasks to be as small as possible, while inspiring those for the goal-unachieved ones to have a sum close to 1. By doing so, we aim to focus more on improving the goal-unachieved tasks, while still maintain the performance of goal-achieved tasks. Most related work to ours is Wang et al. (2021), which considers the worst-case performance across all tasks. However, this original principle reduces the generalizability to other tasks. To mitigate this issue, a specific regularization was proposed to balance all tasks' weights. Our work, which casts an adversarial generation task as a multi-objective optimization problem, is conceptually different from that work, although both methods can be applied to similar tasks. Further discussion about relate work can be found in Appendix A. To summarize, our contributions in this work include: (C1) We propose a novel framework called TA-MOO, which addresses the shortcomings of the original MOO when applied to multi-objective adversarial generation. Specifically, the TA-MOO framework incorporates a geometry-based regularization term that favors unsuccessful tasks, while simultaneously maintaining the performance of successful tasks. This innovative approach improves the efficiency and efficacy of adversarial generation by promoting a more balanced exploration of the solution space. (C2) We conduct comprehensive experiments for three adversarial generation tasks and one adversarial training task including attacking multiple models, learning universal perturbation, attacking over many data transformations, and adversarial training on ensemble learning setting. The experimental results show that our TA-MOO outperforms the baselines by a wide margin on the three aforementioned adversarial generation tasks. More importantly, our adversary brings a great benefit on improving adversarial robustness, highlighting the potential of our TA-MOO framework in adversarial machine learning. (C3) Additionally, we provide a comprehensive analysis on different aspects of applying MOO and TA-MOO to adversarial generation tasks, such as the impact of the dominating issue in Appendix E.1, the importance of the Task-Oriented regularization in Appendix E.2, the impact of initialization of MOO in Appendix subsec:optimal-init-moo, and the limitations of MOO solver in Appendix sec:sup-gradient-des-discuss. We believe that our analysis would be beneficial for future research in this area. ## 2 Background We revisit the background of multi-objective optimization (MOO), which lays the foundation for our taskoriented MOO in the sequel. Given multiple objective functions f (δ) := [f1 (δ)*, ..., f*m (δ)] where each fi: R d → R, we aim to find the Pareto optimal solution that simultaneously maximizes all objective functions: $$\operatorname*{max}_{\delta}f(\delta):=\left[f_{1}\left(\delta\right),...,f_{m}\left(\delta\right)\right].$$ $$(1)$$ δf(δ) := [f1 (δ)*, ..., f*m (δ)] . (1) While there are a variety of MOO solvers (Miettinen, 2012; Ehrgott, 2005), in this paper, we adapt from the multi-gradient descent algorithm (MGDA) that was proposed suitably for end-to-end learning by Désidéri (2012). Specifically, MGDA combines the gradients of individual objectives to a single optimal direction that increases all objectives simultaneously. The optimal direction corresponds to the minimum-norm point that can be found by solving the quadratic programming problem: $$w^{*}=\operatorname{argmin}_{w\in\Delta_{m}}w^{T}Q w,$$ $$\left(2\right)$$ T Qw, (2) where ∆m =π ∈ R m + : kπk1 = 1 is the m-simplex and Q ∈ R m×m is the matrix with Qij = ∇δfi (δ) T ∇δfj (δ). Finally, the solution of the problem 1 can be found iteratively with each update step δ = δ + ηg where g is the combined gradient g =Pm i=1 w ∗ i ∇δfi (δ) and η > 0 is a sufficiently small learning rate. Furthermore, Désidéri (2012) also proved that by using an appropriate learning rate at each step, we reach the Pareto optimality point δ ∗ at which there exist w ∈ ∆m such that Pm i=1 wi∇δfi (δ ∗) = 0. ## 3 Our Proposed Method 3.1 Task Oriented Multi-Objective Optimization We now present our TAsk Oriented Multi-Objective Optimization (TA-MOO). We consider the MOO problem in (1) where each task Ti (i = 1*, ..., m*) corresponds to the objective function fi (δ) (i = 1*, ..., m*). Additionally, assume that given a task Ti, we can explicitly observe if this task has currently achieved its goal (e.g., the current adversarial example x can fool successfully the model fi), which is named a goal-achieved task. We also name a task that has not achieved its goal a goal-unachieved task. Different from the standard MOO, which equally pays equal attention to all tasks, our TA-MOO focuses on improving the currently goal-unachieved tasks, while trying to maintain the performance of the goal-achieved tasks. By this principle, we expect all tasks would be equally improved to simultaneously achieve their goals. To be more precise, we depart from δ0 and consecutively update in L steps to obtain the sequence δ1, δ2*, ..., δ*L that approaches the optimal solution. Considering the t-th step (i.e., 1 ≤ t ≤ L), we currently have δt and need to update it to obtain δt+1. We examine the tasks that have achieved their goals already and denote them as T1, T2*, ...,* Ts without the loss of generalization. Here we note that the list of goal-achieved tasks is empty if s = 0 and the list of goal-unachieved tasks is empty if s = m. Specifically, to find δt+1, we first solve the following optimization problem (OP): $$w^{*}=\operatorname{argmin}_{w\in\Delta_{m}}\left\{w^{T}Q w+\lambda\Omega\left(w\right)\right\},$$ $$\left({\mathrm{3}}\right)$$ T Qw + λΩ (w) , (3) where Q ∈ R m×m with Qij = ∇δfi (δt) T ∇δfj (δt), λ > 0 is a trade-off parameter, and Ω (w) is a regularization term to let the weights focus more on the goal-unachieved tasks. We next compute the combined gradient gt and update δt as: $$g_{t}=\sum_{i=1}^{m}w_{i}^{*}\nabla_{\delta}f_{i}\left(\delta_{t}\right)\mathrm{~and~}\delta_{t+1}=\delta_{t}+\eta g_{t}.$$ The OP in (3) consists of two terms. The first term w T Qw ensures that all tasks are improving, while the second term Ω (w) serves as the regularization to restrict the goal-achieved tasks T1*, ...,* Ts by setting the corresponding weights w1*, ..., w*s as small as possible. Before getting into the details of the regularization, we emphasize that to impose the constraint w ∈ ∆m, we parameterize w = softmax (α) with α ∈ R m and solve the OP in (3) using gradient descent. In what follows, we discuss our proposed geometry-based regularization term Ω (w). Simplex-based regularization. Let Su =β = [βi] m i=s+1 ∈ R m−s + :Pm i=s+1 βi = 1 be a simplex w.r.t. the goal-unachieved tasks and S = {0s*} × S*u be the extended simplex, where 0s is the s-dimensional vector of all zeros. We define the regularization term Ω (w) as the distance from w to the extended simplex S: $$\Omega\left(w\right)=d\left(w,\mathcal{S}\right)=\operatorname*{min}_{\pi\in\mathcal{S}}\left\|w-\pi\right\|_{2}^{2}.$$ $$\left(4\right)$$ . (4) Because S is a compact and convex set and kw − πk 2 2 is a differentiable and convex function, the optimization problem in (4) has a unique global minimizer Ω (w) = kw − projS (w)k 2 2 , where the projection projS (w) is defined as projS (w) = argminπ∈S kw − πk 2 2 . The following lemma shows us how to find the projection projS (w) and evaluate Ω (w). Lemma 1. *Sorting* ws+1:m *into* us+1:m such that us+1 ≥ us+2 ≥ ... ≥ um*. Defining* ρ = max ns + 1 ≤ i ≤ m : ui +1 i−s 1 −Pij=s+1 uj > 0 o*. Denoting* γ =1ρ 1 −Pρ i=s+1 ui , the projection projS (w) *can be computed as* $$p r o j_{S}\left(w\right)_{i}=\begin{cases}0&1\leq i\leq s\\ \operatorname*{max}\left\{w_{i}+\gamma,0\right\}&o t h e r w i s e\end{cases}$$ Furthermore, the regularization Ω (w) *has the form:* _tion $\Omega\left(w\right)$ has the form:_ $$\Omega\left(w\right)=\sum_{i=1}^{s}w_{i}^{2}+\sum_{i=s+1}^{m}\left(w_{i}-\max\left\{w_{i}+\gamma,0\right\}\right)^{2}.\tag{1}$$ With further algebraic manipulations, Ω (w) can be significantly simplified as shown in Theorem 1. Theorem 1. *The regularization* Ω (w) *has the following closed-form:* $$\Omega\left(w\right)=\sum_{i=1}^{s}w_{i}^{2}+\frac{1}{m-s}\left(1-\sum_{i=s+1}^{m}w_{i}\right)^{2}.\tag{6}$$ The proof of Lemma 1 and Theorem 1 can be found in Appendix B.1. Evidently, the regularization term in Eq. (6) in Theorem 1 encourages the weights w1:s associated with the goal-achieved tasks to be as small as possible and the weights ws+1:massociated with the goal-unachieved tasks to move closer to the simplex Su (i.e., Pm i=s+1 wiis closer to 1). $$\mathbf{\Sigma}$$ Parameterized TA-MOO. Algorithm 1 summarizes the key steps of our TA-MOO. We use gradient descent to find solution δ for the OP 1 in L steps and at each iteration we solve the OP in 3 in K steps using gradient descent solver with the parameterization w = softmax (α). To reduce computational cost, at each iteration we reuse the previous solution α and use a few steps K (i.e., K ≤ 10) to get new solution. We then compute the combined gradient gt and finally update δt to δt+1 using the combined gradient gt (or sign(gt) in the case of L∞ norm). The projecting operation in step 13 is to project δ to a valid space specifying to applications that we introduce hereon. Algorithm 1 Pseudocode for Parameterized TA-MOO. Input: Multi-objective functions f1:m (δ). δ's solver with L update steps and learning rate ηδ. w's Gradient Descent Solver (GD) with K update steps and learning rate ηw and variable α. The softmax function denotes by σ. Tradeoff parameter λ. Output: The optimal solution δ ∗. 1: Initialize δ0 (e.g., δ0 ∼ U(−, )). 2: Initialize α0 = [α i 0 ] m i=1 with α i 0 = 1/m. 3: for t = 0 to L − 1 do 4: Collect list of tasks' gradients {∇δfi(δt)} m i=1. 5: Compute Q with Qij = ∇δfi (δt) T ∇δfj (δt). 6: Initialize αt+1 = αt 7: for k = 0 to K − 1 do 8: Compute L(αt+1) = σ(αt+1) T Qσ(αt+1) + λΩ(σ(αt+1)). 9: Update αt+1 = αt+1 − ηw∇αL(αt+1). 10: **end for** 11: Compute the combined gradient gt =Pm i=1 σ(αt+1,i)∇δfi (δt). 12: Update δt+1 = δt + ηδgt. 13: Project δt+1 to a valid space (specific to domain, e.g., kδk ≤ ). 14: **end for** 15: Output δ ∗ = δL. ## 3.2 Applications In Adversarial Generation Although TA-MOO is a general framework, we in this paper focus on its applications in adversarial generation. Following Wang et al. (2021), we consider three tasks of generating adversarial examples. Generating adversarial examples for an ensemble model. Considering an ensemble classifier with multiple classification models h1, h2*, ..., h*m, where hi (x) ∈ ∆M =π ∈ RM + : kπk1 = 1 with the number of classes M. Given a data sample x, our aim is to find an adversarial example x a = x + δ that can successfully attack all the models. Specifically, we consider a set of tasks each of which, Ti, is about whether x + δ can successfully attack model hi, defined as: I argmax1≤k≤Mhi (x + *δ, k*) 6= y , where y is the ground truth label of x, I is the indicator function and hi (*x, k*) returns the probability to predict x to the class k. To find a perturbation δ that can attack successfully all models, we solve the following multi-objective optimization problem: $$\operatorname*{max}_{\delta:\|\delta\|\leq\epsilon}\left[f_{1}\left(\delta\right),...,f_{m}\left(\delta\right)\right],$$ where fi (δ) = ` (hi (x + δ), y) with the loss function ` which could be the cross-entropy (CE) loss (Madry et al., 2018), the Kullback-Leibler (KL) loss (Zhang et al., 2019), or the Carlini-Wagner (CW) loss (Carlini & Wagner, 2017). Generating universal perturbations. Considering a single classification model h with h (x) ∈ ∆M and a batch of data samples x1, x2*, ..., x*B, we would like to find a perturbation δ with kδk ≤ such that x a i = xi + *δ, i* = 1*, ..., B*, are adversarial examples. We define the task Ti as finding the adversarial example x a i = xi + δ for data sample xi. For each task Ti, we can define its goal as finding successfully the adversarial example x a i : I argmax1≤k≤Mh (x a i , k) 6= argmax1≤k≤Mh (xi, k) . To find the perturbation δ, we solve the following multi-objective optimization problem: $$\operatorname*{max}_{\delta:\|\delta\|\leq\epsilon}\left[f_{1}\left(\delta\right),...,f_{m}\left(\delta\right)\right],$$ where fi (δ) = ` (h (x a i ), yi) = ` (h (xi + δ), yi) with yi the ground-truth label of xi. Generating adversarial examples against transformations. Considering a single classification model h and m categories of data transformation P1:m (e.g., rotation, lighting, and translation). Our goal is to find an adversarial attack that is robust to these data transformations. Specifically, given a benign example x, we would like to learn a perturbation δ with kδk ≤ that can successfully attack the model after any transformation ti ∼ Piis applied. To formulate as an MOO problem, we consider the task Ti as finding the adversarial example x a i = ti (x + δ) with ti ∼ Pi. For each task Ti, we can define the goal as finding successfully the adversarial example x a i : $$\operatorname*{max}_{1\leq k\leq M}k$$ I argmax1≤k≤Mh (x a i , k) 6= argmax1≤k≤Mh (*x, k*) . To find the perturbation δ, we solve the following multi-objective optimization problem: $\mathbf{c},\,k)\big\}$ . $$\operatorname*{max}_{\delta:\|\delta\|\leq\epsilon}\left[f_{1}\left(\delta\right),...,f_{m}\left(\delta\right)\right],$$ where fi (δ) = Eti∼Pi [` (h (ti (x + δ)), y)] with y the ground-truth label of x. ## 4 Experiments In this section, we provide extensive experiments across four settings: (i) generating adversarial examples for ensemble of models (ENS, Sec 4.1), (ii) generating universal perturbation (UNI, Sec 4.3) , (iii) generating robust adversarial examples against Ensemble of Transformations (EoT, Sec 4.4), and (iv) adversarial training for ensemble of models (AT, Sec 4.2). The details of each setting can be found in Appendix C. General settings. Through our experiments, we use six common architectures for the classifier including ResNet18 (He et al., 2016), VGG16 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), EfficientNet (Tan & Le, 2019), MobileNet Howard et al. (2017), and WideResNet Zagoruyko & Komodakis (2016) with the implementation 1. We evaluate on the full testing set of two benchmark datasets which are CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). We observed that the attack performance is saturated with standard training models. Therefore, to make the job of adversaries more challenging, we use Adversarial Training with PGD-AT (Madry et al., 2018) to robustify the models and use these robust models as the victim models in our experiments. Evaluation metrics. We use three metrics to evaluate the attack performance including (i) A-All: the Attack Success Rate (ASR) when an adversarial example can achieve goals in all tasks. This is considered as the most important metric to indicate how well one method can achieve in all tasks; (ii)A-Avg: the average Attack Success Rate over all tasks which indicate the average attacking performance; (iii){A-i} K i=1: Attack Success Rate in each individual task. For reading comprehension purposes, if necessary the highest/second highest performance in each experimental setting is highlighted in **Bold**/Underline and the most important metric(s) is emphasized in blue color. Baseline methods. We compare our method with the **Uniform** strategy which assigns the same weight for all tasks and the **MinMax** method (Wang et al., 2021) which examines only the worst-case performance across all tasks. To increase the generality to other tasks, MinMax requires a regularization to balance 1https://github.com/kuangliu/pytorch-cifar Table 1: Evaluation of Attacking Ensemble model on the CIFAR10 and CIFAR100 datasets. | CW | | | CE | KL | | | | |----------|---------|-------|-------|-------|-------|-------|-------| | A-All | A-Avg | A-All | A-Avg | A-All | A-Avg | | | | CIFAR10 | Uniform | 26.37 | 41.13 | 28.21 | 48.34 | 17.44 | 32.85 | | MinMax | 27.53 | 41.20 | 35.75 | 51.56 | 19.97 | 33.13 | | | MOO | 18.87 | 34.24 | 25.16 | 44.76 | 15.69 | 29.54 | | | TA-MOO | 30.65 | 40.41 | 38.01 | 51.10 | 20.56 | 31.42 | | | CIFAR100 | Uniform | 52.82 | 67.39 | 55.86 | 72.62 | 38.57 | 54.88 | | MinMax | 54.96 | 66.92 | 63.70 | 75.44 | 40.67 | 53.83 | | | MOO | 51.16 | 65.87 | 58.17 | 73.19 | 39.18 | 53.44 | | | TA-MOO | 55.73 | 67.02 | 64.89 | 75.85 | 41.97 | 53.76 | | between the average and the worst-case performance. We use the same attack setting for all methods: the attack is the L∞ untargeted attack with 100 steps, step size ηδ = 2/255 and perturbation limitation = 8/255. The GD solver in TA-MOO uses 10 steps with learning rate ηw = 0.005. Further detail can be found in Appendix C. ## 4.1 Adversarial Examples For Ensemble Of Models (Ens) Experimental setting. In our experiment, we use an ensemble of four adversarially trained models: ResNet18, VGG16, GoogLeNet, and EfficientNet. The architecture is the same for both the CIFAR10 and CIFAR100 datasets except for the last layer which corresponds with the number of classes in each dataset. The final output of the ensemble is an average of the probability outputs (i.e., output of the softmax layer). We use three different losses as an object for generating adversarial examples including CE (Madry et al., 2018), KL (Zhang et al., 2019), and CW (Carlini & Wagner, 2017). Results 1: TA-MOO achieves the best performance. Table 1 shows the results of attacking the ensemble model on the CIFAR10 and CIFAR100 datasets. It can be seen that TA-MOO significantly outperforms the baselines and achieves the best performance in all the settings. For example, the improvement over the Uniform strategy is around 10% on both datasets with the CE loss. Comparing to the MinMax method, the biggest improvement is around 3% for CIFAR10 with CW loss and the lowest one is around 0.6% with the KL loss. The improvement can be observed in all the settings, showing the generality of the proposed method. Results 2: When does not MOO work? It can be observed that MOO falls behind all other methods, even compared with the Uniform strategy. Our hypothesis for the failure of MOO is that in the original setting with an ensemble of 4 diverse architectures (i.e., ResNet18, VGG16, GoogLeNet, and EfficientNet) there is one task that dominates the others and makes MOO become trapped (i.e., focusing on improving the dominant task). To verify our hypothesis, we measure the gradient norm k∇δfi(δ)k corresponding to each model and the final weight w of 1000 samples and report the results in Table 2. It can be seen that the EfficientNet has a much lower gradient strength, therefore, it has a much higher weight. This explains the highest ASR observed in EfficientNet and the large gap of 19% (56.11% in EfficientNet and 37.05% in GoogLeNet). To further confirm our hypothesis, we provide an additional experiment on a non-diverse ensemble model which consists of 4 individual ResNet18 models. It can be observed that in the non-diverse setting, the gradient strengths are more balanced across models, indicating that no task dominates others. As a result, MOO shows its effectiveness by outperforming the Uniform strategy by 4.3% in A-All. Results 3: The importance of the Task-Oriented regularization. It can be observed from Table 2 that in the diverse setting, TA-MOO has a much lower gap (4%) between the highest ASR (53.4% at EfficientNet) and the lowest one (49.29% at GoogLeNet) compared to MOO ( 19%). Moreover, while the ASR of EfficientNet is lower by 2.7%, the ASRs of all other architectures have been improved considerably (i.e., 12% in GoogLeNet). This improvement shows the importance of the Task-Oriented regularization, which helps to avoid being trapped by one dominating task, as happened in MOO. For the non-diverse Table 2: Attacking Ensemble model with a diverse set D={R-ResNet18, V-VGG16, G-GoogLeNet, EEfficientNet} and non-diverse set ND={4 ResNets}. w represents the final w of MOO (mean ± std). k∇δfi(δ)k represents the gradient norm of each model (mean ± std). | A-All | A-Avg | R/R1 | V/R2 | G/R3 | E/R4 | | | |-----------|---------|--------|-------------|-------------|-------------|-------------|-------------| | k∇δfi(δ)k | - | - | 7.15 ± 6.87 | 4.29 ± 4.64 | 7.35 ± 7.21 | 0.98 ± 0.72 | | | D | w | - | - | 0.15 ± 0.14 | 0.17 ±0.13 | 0.15 ± 0.14 | 0.53 ± 0.29 | | Uniform | 28.21 | 48.34 | 48.89 | 49.08 | 48.38 | 47.03 | | | MOO | 25.16 | 44.76 | 39.06 | 46.83 | 37.05 | 56.11 | | | TA-MOO | 38.01 | 51.10 | 49.55 | 52.15 | 49.29 | 53.40 | | | k∇δfi(δ)k | - | - | 8.41 ± 8.22 | 6.68± 6.95 | 7.36 ± 6.03 | 5.67 ± 6.09 | | | ND | w | - | - | 0.23 ± 0.21 | 0.24±0.17 | 0.23 ± 0.19 | 0.30 ± 0.21 | | Uniform | 28.17 | 48.75 | 51.94 | 45.55 | 54.15 | 43.34 | | | MOO | 32.50 | 52.21 | 53.25 | 49.05 | 56.80 | 49.76 | | | TA-MOO | 41.01 | 57.33 | 58.88 | 55.32 | 60.81 | 54.29 | | | RME | RVW | EVW | MVW | REV | MEV | RMEV | RMEVW | | |---------|-------|-------|-------|-------|-------|--------|---------|-------| | Uniform | 31.73 | 25.03 | 22.13 | 22.73 | 29.50 | 28.44 | 26.95 | 20.50 | | MinMax | 40.01 | 23.75 | 22.39 | 23.34 | 32.57 | 32.75 | 31.85 | 21.99 | | MOO | 35.20 | 24.25 | 22.94 | 23.76 | 30.65 | 32.28 | 29.49 | 21.77 | | TA-MOO | 40.97 | 25.13 | 23.59 | 24.38 | 33.00 | 33.05 | 32.14 | 23.04 | Table 3: Evaluation on the Transferability of adversarial examples. Each cell (row-ith, column-jth) reports SAR (higher is better) of adversarial examples from the same source architecture (RME) with an adversary at row-ith to attack an ensemble at column-jth. Each architecture has been denoted by symbols such as R: ResNet18, M: MobileNet, E: EfficientNet, V: VGG16, W: WideResNet. For examples, RME represents for an ensemble of ResNet18, MobileNet and EfficientNet. setting, when no task dominates others, TA-MOO still shows its effectiveness when improving the ASR in all tasks by around 5%. The significant improvement can be observed in all settings (except the setting on EfficientNet with the CIFAR10 dataset) as shown in Table 1, and demonstrates the generality of the Task-Oriented regularization. ## Results 4: Ta-Moo Achieves The Best Transferability On A Diverse Set Of Ensembles. Table 3 reports the SAR-All metric of transferred adversarial examples crafted from a source ensemble (RME) on attacking target ensembles (e.g., RMEVW is an ensemble of 5 models). A higher number indicates a higher success rate of attacking a target model, therefore, also implies a higher transferability of adversarial examples. It can be seen that our TA-MOO adversary achieves the highest attacking performance on the whitebox attack setting, with a huge gap of 9.24% success rate over the Uniform strategy. Our method also achieves the highest transferability regardless diversity of a target ensemble. More specifically, on target models such as REV, MEV, and RMEV, where members in the source ensemble (RME) are also in the target ensemble, our TA-MOO significantly outperforms the Uniform strategy, with the highest improvement is 5.19% observed on target model RMEV. On the target models EVW and MVW which are less similar to the source model, our method still outperforms the Uniform strategy by 1.46% and 1.65%. The superior performance of our adversary on the transferability shows another benefit of using multi-objective optimization in generating adversarial examples. By reaching the intersection of all members' adversarial regions, our adversary is capable to generate a common vulnerable pattern on an input image shared across architectures, therefore, increasing the transferability of adversarial examples. More discussion can be found in Appendix D.1. Table 4: Robustness evaluation of Adversarial Training methods on the CIFAR10 dataset. RME represents an ensemble of ResNet18 (R), MobileNet (M) and EfficientNet E), while MobiX3 represents an ensemble of three MobileNets. NAT and ADV measure the natural accuracy and the robust accuracy against PGD-Linf attack (↑the higher the better). Other metrics measure the success attack rate (SAR) of adversarial examples generated by the same PGD-Linf attack on fooling each single member and all members of the ensemble (↓the lower the better). | MobiX3 | RME | | | | | | | | |-----------|-------|--------|--------|-------|-------|--------|--------|-------| | NAT↑ | ADV↑ | A-All↓ | A-Avg↓ | NAT↑ | ADV↑ | A-All↓ | A-Avg↓ | | | PGD-AT | 80.43 | 32.78 | 54.34 | 73.89 | 86.52 | 37.36 | 49.01 | 69.75 | | MinMax-AT | 79.01 | 37.28 | 50.28 | 66.77 | 83.16 | 40.40 | 46.91 | 65.73 | | MOO-AT | 79.38 | 33.04 | 46.28 | 74.36 | 82.04 | 37.48 | 45.24 | 70.11 | | TA-MOO-AT | 79.22 | 38.22 | 48.21 | 67.83 | 82.59 | 41.32 | 43.68 | 65.09 | ## 4.2 Adversarial Training With Ta-Moo For Ensemble Of Models (Ens) We conduct adversarial training with adversarial examples generated by MOO and TA-MOO attacks to verify the quality of these adversarial examples and report results on Table 4. The detailed setting and more experimental results can be found in Appendix D.2. **Result 1: Reducing transferability.** It can be seen that the SAR-All of MOO-AT and TA-MOO-AT are much lower than that on other methods. More specifically, the gap of SAR-All between PGD-AT and TA-MOO-AT is (5.33%) 6.13% on the (non) diverse setting. The lower SAR-All indicating that adversarial examples are harder to transfer among ensemble members on the TA-MOO-AT model than on the PGD-AT model. **Result 2: Producing more robust** single members. The comparison of average SAR shows that adversarial training with TA-MOO produces more robust single models than PGD-AT does. More specifically, the average robust accuracy (measured by 100% - *A-Avg*) of TA-MOO-AT is 32.17%, an improvement of 6.06% over PGD-AT in the non-diverse setting, while there is an improvement of 4.66% in the diverse setting. **Result 3: Adversarial training with** TA-MOO achieves the best robustness. More specifically, on the non-divese setting, TA-MOO-AT achives 38.22% robust accuracy, an improvement of 1% over MinMax-AT and 5.44% over standard PGD-AT. On the diverse setting, the improvement over MinMax-AT and PGD-AT are 0.9% and 4%, respectively. The root of the improvement is the ability to generate stronger adversarial examples in the the sense that they can challenge not only the entire ensemble model but also all single members. These adversarial examples lie in the joint insecure region of members (i.e., the low confidence region of multiple classes), therefore, making the decision boundaries more separate. As a result, adversarial training with TA-MOO produces more robust single models (i.e., lower SAR-Avg) and significantly reduces the transferability of adversarial examples among members (i.e., lower SAR-All). These two conditions explain the best ensemble adversarial robustness achieved by TA-MOO. ## 4.3 Universal Perturbation (Uni) Experimental setting. We follow the experimental setup in Wang et al. (2021), where the full test set (10k images) is randomly divided into equal-size groups (K images per group). The comparison has been conducted on the CIFAR10 and CIFAR100 datasets, with an adversarially trained ResNet18 model and CW loss. We observed that the ASR-All was mostly zero, indicating that it is difficult to generate a general perturbation for all data points. Therefore, in Table 5 we use ASR-Avg to compare the performances of the methods. More experiments on VGG16 and EfficientNet models can be found in Appendix D.3. Results. Table 5 shows the evaluation of generating universal perturbations on the CIFAR10 and CIFAR100 datasets, respectively. K represents the number of images that are using the same perturbation. The larger the value of K, the harder it is to generate a universal perturbation that can be applied successfully to all images. It can be seen that with a small number of tasks (i.e., K=4), MOO and TA-MOO achieve lower performance than the MinMax method. However, with a large number of tasks (i.e, K ≥ 8), MOO and TA-MOO show their effectiveness and achieve the best performance. More specifically, on the CIFAR10 dataset, the improvements of MOO over the Uniform strategy are 5.6%, 4%, 3.2%, and 2.5% with K = 8, | CIFAR10 | | | | CIFAR100 | | | | | | | |-----------|-------|-------|-------|------------|-------|-------|-------|-------|-------|-------| | K=4 | K=8 | K=12 | K=16 | K=20 | K=4 | K=8 | K=12 | K=16 | K=20 | | | Uniform | 37.52 | 30.34 | 27.41 | 25.52 | 24.31 | 65.40 | 58.99 | 55.33 | 53.02 | 51.49 | | MinMax | 50.13 | 33.68 | 20.46 | 15.74 | 14.73 | 74.73 | 62.29 | 52.05 | 45.26 | 42.33 | | MOO | 43.80 | 35.92 | 31.41 | 28.75 | 26.83 | 69.35 | 62.72 | 57.72 | 54.12 | 52.25 | | TA-MOO | 48.00 | 39.31 | 34.96 | 31.84 | 30.12 | 72.74 | 68.06 | 62.33 | 57.48 | 54.12 | Table 5: Evaluation of generating Universal Perturbation on the CIFAR10 and CIFAR100 datasets. | | | A-All | A-Avg | I | H | V | C | G | B | R | |------|---------|---------|---------|-------|-------|-------|-------|-------|-------|-------| | C10 | Uniform | 25.98 | 55.33 | 44.85 | 41.58 | 82.90 | 72.56 | 45.92 | 49.59 | 49.93 | | | MinMax | 30.54 | 52.20 | 43.31 | 41.59 | 78.80 | 64.83 | 44.38 | 46.53 | 45.97 | | | MOO | 21.25 | 49.81 | 36.23 | 33.93 | 87.47 | 71.05 | 37.68 | 40.21 | 42.12 | | | TA-MOO | 31.10 | 55.26 | 44.15 | 41.86 | 85.19 | 71.86 | 45.53 | 48.70 | 49.54 | | C100 | Uniform | 56.19 | 76.23 | 70.43 | 69.01 | 87.66 | 87.36 | 71.40 | 74.25 | 73.47 | | | MinMax | 59.75 | 75.72 | 70.13 | 69.26 | 87.45 | 86.03 | 71.54 | 73.30 | 72.32 | | | MOO | 53.17 | 74.21 | 66.96 | 65.68 | 89.16 | 87.03 | 68.49 | 71.11 | 71.06 | | | TA-MOO | 60.88 | 76.71 | 70.43 | 69.37 | 89.11 | 87.95 | 71.70 | 74.73 | 73.69 | Table 6: Robust adversarial examples against transformations evaluation. I: Identity, H: Horizontal flip, V: Vertical flip, C: Center crop, G: Adjust gamma, B: Adjust brightness, R: Rotation. K = 12, K = 16, and K = 20, respectively. On the same setting, TA-MOO significantly improves MOO by around 4% in all the K settings and consistently achieves the best performance. Unlike the ENS setting, in the UNI setting, MOO consistently achieves better performance than the Uniform strategy . This improvement can be explained by the fact that in the UNI setting with the same architecture and data transformation, no task dominates the others. There will be a case (a group) when one sample is extremely close to/far from the decision boundary, and hence easier/harder to fool. However, in the entire test set with a large number of groups, the issue of dominating tasks is lessened. ## 4.4 Robust Adversarial Examples Against Transformations (Eot) Results. Table 6 shows the evaluation on the CIFAR10 and CIFAR100 datasets with 7 common data transformations. It can be observed that (i) MOO has a lower performance than the baselines, (ii) the Task Oriented regularization significantly boosts the performance, and (iii) our TA-MOO method achieves the best performance on both settings and outperforms the MinMax method 0.6% and 1.1% in the CIFAR10 and the CIFAR100 experiments, respectively. The low performance of MOO in observation (i) is again caused by the issue of one task dominating others. In the EoT setting, it is because of the V-vertical flip transformation as shown in Table 6. Observation (ii) provides another piece of evidence to support the effectiveness of the Task-Oriented regularization for MOO. This regularization boosts the ASRs in all the tasks (except V - the dominant one), increases the average ASR by 5.45% and 2.5% in the CIFAR10 and CIFAR100 experiments, respectively. ## 4.5 Additional Experiments With Multi-Task Learning Methods In this section we would like to provide additional experiments with recent multi-task learning methods to explore how better constrained approaches can improve over the naive MOO. We applied three recent multi-task learning methods including PCGrad Yu et al. (2020), CAGrad Liu et al. (2021a), and HVM Albuquerque et al. (2019) with implementation from their official repositories into our adversarial generation task. We apply the best practice in Albuquerque et al. (2019) which is adaptively updated the Nadir point based on the current tasks' losses. For PCGrad we use the *mean* as the reduction mode. For CAGrad we use parameter α = 0.5 and rescale = 1 as in their default setting. We experiment on attacking ensemble Table 7: Attacking Ensemble model with a diverse set D={R-ResNet18, V-VGG16, G-GoogLeNet, EEfficientNet} and non-diverse set ND={4 ResNets}. | A-All | A-Avg | R/R1 | V/R2 | G/R3 | E/R4 | | | |---------|---------|--------|--------|--------|--------|-------|-------| | D | Uniform | 28.21 | 48.34 | 48.89 | 49.08 | 48.38 | 47.03 | | HVM | 29.88 | 46.98 | 48.97 | 48.10 | 46.88 | 43.96 | | | PCGrad | 28.25 | 48.28 | 48.81 | 49.03 | 48.13 | 47.14 | | | CAGrad | 30.23 | 48.34 | 47.03 | 48.22 | 45.92 | 52.20 | | | MOO | 25.16 | 44.76 | 39.06 | 46.83 | 37.05 | 56.11 | | | TA-MOO | 38.01 | 51.10 | 49.55 | 52.15 | 49.29 | 53.40 | | | ND | Uniform | 28.17 | 48.75 | 51.94 | 45.55 | 54.15 | 43.34 | | HVM | 28.46 | 49.87 | 51.64 | 50.03 | 50.72 | 47.10 | | | PCGrad | 28.30 | 48.75 | 52.02 | 45.42 | 54.35 | 43.21 | | | CAGrad | 35.22 | 51.07 | 54.22 | 47.84 | 55.24 | 46.97 | | | MOO | 32.50 | 52.21 | 53.25 | 49.05 | 56.80 | 49.76 | | | TA-MOO | 41.01 | 57.33 | 58.88 | 55.32 | 60.81 | 54.29 | | of models setting with two settings, a diverse set D with 4 different architectures including R-ResNet18, V-VGG16, G-GoogLeNet, E-EfficientNet and a non-diverse set ND with 4 ResNet18 models. It can be seen from the Table 7 that in the diverse ensemble setting, the three additional methods HVM, PCGrad and CAGrad significantly outperform the standard MOO method with the improvement gaps of SAR-All around 4.7%, 3% and 5%, respectively. In the non-diverse ensemble setting, while HVM and PCGrad achieve lower performances than the standard MOO method, CAGrad can outperform the MOO method with a 2.7% improvement. On comparison to the naive uniform method, the three methods also achieve better performance in both settings. The improvement on the diverse set of HVM, PCGrad and CAGrad over the standard MOO method is more noticeable than on the non-diverse set. It can be explained by the fact that on the diverse set of model architectures, there is a huge difference in gradients among architectures, therefore, requires a better multi-task learning method to handle the constraint between tasks. On the other hand, on both ensemble settings, our TA-MOO still achieves the best performance, with a huge gap of (5.8%) 7.8% compared to the second best method on the (non) diverse setting. It is because our method can leverage a supervised signal from knowing whether a task is achieved or not to focus on improving unsuccessful tasks. It is a huge advantage compared to unsupervised multi-task learning methods as MOO, HVM, PCGrad, and CAGrad. ## 5 Additional Discussion In this section, we would like to summarize some important observations through all experiments while the complete discussion with detail can be found in Appendix E. Correlation between the objective loss and attack performance. It is broadly accepted that to fool a model, a feasible approach is maximizing the objective loss (i.e., CE, KL, or CW loss), and the higher the loss, the higher the attack success rate. While it is true when observing the same architecture, we found that it is not necessarily true when comparing different architectures. As shown in Figure 1, with CW loss as the adversarial objective, it can be observed that there is a positive correlation between the loss value and the ASR, i.e., the higher the loss, the higher the ASR. However, there is no clear correlation observed when using CE and KL loss. Therefore, the higher weighted loss does not directly imply a higher success rate for attacking an ensemble of different architectures. The MinMax method (Wang et al., 2021) which solely ![11_image_0.png](11_image_0.png) Figure 1: Loss (left fig) and ASR (right fig) of each task over all attack iterations with the MinMax method. model0/1/2/3 represents R/V/G/E architecture, respectively. weighs tasks' losses, therefore, is not always appropriate to achieve a good performance in all tasks. More discussion can be found in Appendix E.4. When does MOO work? On the one hand, the dominating issue is observed in all three settings (ENS, UNI, EoT). The issue can be recognized by the gap of attack performance among tasks or by observing the dominating of one task's weight over others which is caused by a significant small gradient strength of one task on comparison with other tasks' strength as discussed in Section 4.1. The root of the dominating issue can be the natural of the setting (i.e., as in EoT setting, when the large gap can be observed in all methods) or the MOO solver. On the other hand, if overcoming this issue, MOO can outperform the Uniform strategy as shown in Section 4.1. As discussed in Appendix 4.4, a simple memory can helps to overcome the infinite gradient issue and significantly boosts the performance of MOO or TA-MOO. Therefore, we believe that developing a technique to lessen the dominating issue might be a potential extension. More efficient MOO solvers. Inspired by Sener & Koltun (2018), in this paper we use multi-gradient descent algorithm (Deb, 2011) as a MOO solver which casts the multi-objective problem to a single-objective problem. However, while Sener & Koltun (2018) used Frank-Wolfe algorithm to project the weight into the desired simplex, we use parameterization with softmax to do the job. While this technique is much faster than Frank-Wolfe algorithm, it has some weaknesses that might be target for future works. First, it cannot handle well the edge case which is the root of the dominating issue. Second, it does not work well in the case of a non-convex objective space as similar as other MOO scalarizing methods (Deb, 2011). ## 6 Conclusion In this paper, we propose Task Oriented Multi-Objective Optimization (TA-MOO), with specific applications to adversarial generation tasks. We develop a geometry-based regularization term to favor the goal-unachieved tasks, while trying to maintain the the goal-achieved tasks. We conduct comprehensive experiments to showcase the merit of our proposed approach on generating adversarial examples and adversarial training. On the other hand, there are acknowledged limitations of our method such as weaknesses of the gradient-based solver and lacking theory on algorithm's convergence which might be target for future works. ## Acknowledgements This work was partially supported by the Australian Defence Science and Technology (DST) Group under the Next Generation Technology Fund (NGTF) scheme. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. ## References Isabela Albuquerque, Joao Monteiro, Thang Doan, Breandan Considine, Tiago Falk, and Ioannis Mitliagkas. Multi-objective training of generative adversarial networks with multiple discriminators. In International Conference on Machine Learning, pp. 202–211. PMLR, 2019. Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274–283, 2018. Emil Björnson and Eduard Jorswieck. Optimal resource allocation in coordinated multi-cell systems. Now Publishers Inc, 2013. Emil Bjornson, Eduard Axel Jorswieck, Mérouane Debbah, and Bjorn Ottersten. Multiobjective signal processing optimization: The way to balance conflicting metrics in 5g systems. IEEE Signal Processing Magazine, 31(6):14–23, 2014. Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, and Matthias Bethge. Accurate, reliable and fast robustness evaluation. In Advances in Neural Information Processing Systems, pp. 12861– 12871, 2019. Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, and Dinh Phung. Improving adversarial robustness by enforcing local and global compactness. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVII, pp. 209–223. Springer, 2020. Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, and Dinh Phung. Understanding and achieving efficient robustness with adversarial supervised contrastive learning. arXiv preprint arXiv:2101.10027, 2021a. Anh Tuan Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, and Dinh Phung. Improving ensemble robustness by collaboratively promoting and demoting adversarial robustness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 6831–6839, 2021b. Anh Tuan Bui, Trung Le, Quan Hung Tran, He Zhao, and Dinh Phung. A unified wasserstein distributional robustness framework for adversarial training. In International Conference on Learning Representations, 2022. Rafael Caballero, Lourdes Rey, Francisco Ruiz, and Mercedes González. An algorithmic package for the resolution and analysis of convex multiple objective problems. In Multiple criteria decision making, pp. 275–284. Springer, 1997. N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. IEEE, 2017. Carlos A Coello Coello. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information systems, 1(3):269–308, 1999. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv preprint arXiv:2003.01690, 2020. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021. URL https://openreview.net/forum?id=SSKZPJCt7B. Kalyanmoy Deb. Multi-objective optimisation using evolutionary algorithms: an introduction. In Multi-objective evolutionary optimisation for product design and manufacturing, pp. 3–34. Springer, 2011. Jean-Antoine Désidéri. Multiple-gradient descent algorithm (mgda) for multiobjective optimization. Comptes Rendus Mathematique, 350(5-6):313–318, 2012. Shangchen Du, Shan You, Xiaojie Li, Jianlong Wu, Fei Wang, Chen Qian, and Changshui Zhang. Agree to disagree: Adaptive ensemble knowledge distillation in gradient space. Advances in Neural Information Processing Systems, 33, 2020. Matthias Ehrgott. Multicriteria optimization, volume 491. Springer Science & Business Media, 2005. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http: //arxiv.org/abs/1412.6572. Pengxin Guo, Yuancheng Xu, Baijiong Lin, and Yu Zhang. Multi-task adversarial attack. arXiv preprint arXiv:2011.09824, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In Artificial intelligence safety and security, pp. 99–112. Chapman and Hall/CRC, 2018. Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qing-Fu Zhang, and Sam Kwong. Pareto multi-task learning. Advances in neural information processing systems, 32, 2019. Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. Advances in Neural Information Processing Systems, 34:18878–18890, 2021a. Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm. Advances in neural information processing systems, 29, 2016. Xingchao Liu, Xin Tong, and Qiang Liu. Profiling pareto front with multi-objective stein variational gradient descent. Advances in Neural Information Processing Systems, 34, 2021b. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. Debabrata Mahapatra and Vaibhav Rajan. Multi-task learning with user preferences: Gradient descent with controlled ascent in pareto optimization. In International Conference on Machine Learning, pp. 6597–6607. PMLR, 2020. Kaisa Miettinen. Nonlinear multiobjective optimization, volume 12. Springer Science & Business Media, 2012. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1765–1773, 2017. Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4970–4979. PMLR, 09–15 Jun 2019. Haoxuan Qiu, Yanhui Du, and Tianliang Lu. The framework of cross-domain and model adversarial attack against deepfake. Future Internet, 14(2):46, 2022. Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? Advances in Neural Information Processing Systems, 33:3533–3545, 2020. Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. M. Spencer, J. Eickholt, and J. Cheng. A deep learning network approach to ab initio protein secondary structure prediction. IEEE/ACM Trans. Comput. Biol. Bioinformatics, 12(1):103–112, January 2015. ISSN 1545-5963. Takahiro Suzuki, Shingo Takeshita, and Satoshi Ono. Adversarial example generation using evolutionary multi-objective optimization. In 2019 IEEE Congress on evolutionary computation (CEC), pp. 2136–2144. IEEE, 2019. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6199. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008, 2017. Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, and Bo Li. Adversarial attack generation empowered by min-max optimization. Advances in Neural Information Processing Systems, 34, 2021. Weiran Wang and Miguel A Carreira-Perpinán. Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application. arXiv preprint arXiv:1309.1541, 2013. Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2019. Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, and Yu Zhang. Multi-objective meta learning. Advances in Neural Information Processing Systems, 34, 2021. Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33:5824–5836, 2020. S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 7472–7482, 2019. ## Appendix The Appendix provides technical and experimental details as well as auxiliary aspects to complement the main paper. Briefly, it contains the following: - Appendix A: Discussion on related work. - Appendix B: Detailed proof and an illustration of our methods. - Appendix C: Detailed description of experimental settings. - Appendix D.1: Additional experiments on transferability of adversarial examples in the ENS setting. - Appendix D.2: Additional experiments on adversarial training with our methods. - Appendix D.3: Additional experiments on the UNI setting. - Appendix D.4: Additional experiments on the EoT setting. - Appendix D.5: Additional comparison on speed of generating adversarial examples. - Appendix D.6: Additional experiments on sensitivity to hyper-parameters. - Appendix D.7: Additional comparison with standard attacks on attacking performance. - Appendix D.8: Additional experiments on attacking the ImageNet dataset. - Appendix E.1: Additional discussions on the dominating issue and when MOO can work. - Appendix E.2: A summary on the importance of Task-Oriented regularization. - Appendix E.3: Discussion on the limitation of MOO solver. - Appendix E.4: Discussion on correlation between the objective loss and attack performance. - Appendix E.5: Discussion on the conflicting between gradients in the adversarial generation task. - Appendix E.6: Discussion on the convergence of our methods. - Appendix E.7: Additional experiments with MOO with different initializations. ## A Related Work Multi-Objective Optimization for multi-task learning. (Désidéri, 2012) proposed a multi-gradient descent algorithm for multi-objective optimization (MOO) which opens the door for the applications of MOO in machine learning and deep learning. Inspired by Désidéri (2012), MOO has been applied in multi-task learning (MTL) (Sener & Koltun, 2018; Mahapatra & Rajan, 2020), few-shot learning (Ye et al., 2021), and knowledge distillation (Du et al., 2020). Specifically, the work of Sener & Koltun (2018) viewed multi-task learning as a multi-objective optimization problem, where a task network consists of a shared feature extractor and a task-specific predictor. The work of Mahapatra & Rajan (2020) developed a gradient-based multi-objective MTL algorithm to find a solution that satisfies the user preferences. The work of Lin et al. (2019) proposed a Pareto MTL to find a set of well-distributed Pareto solutions which can represent different trade-offs among different tasks. Recently, the work of Liu et al. (2021b) leveraged MOO with Stein Variational Gradient Descent (Liu & Wang, 2016) to diversify the solutions of MOO. Additionally, the work of Ye et al. (2021) proposed a bi-level MOO which can be applied to few-shot learning. Finally, the work of Du et al. (2020) applied MOO to enable knowledge distillation from multiple teachers. Generating adversarial examples with single-objective and multi-objective optimizations. Generating qualified adversarial examples is crucial for adversarial training (Madry et al., 2018; Zhang et al., 2019; Bui et al., 2021a; 2022). Many perturbation based attacks have been proposed, notably FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), TRADES (Zhang et al., 2019), CW (Carlini & Wagner, 2017), BIM (Kurakin et al., 2018), and AutoAttack (Croce & Hein, 2020). Most adversarial attacks aim to maximize a single objective, e.g., maximizing the cross-entropy (CE) loss w.r.t. the ground-truth label (Madry et al., 2018), maximizing the Kullback-Leibler (KL) divergence w.r.t. the predicted probabilities of a benign example (Zhang et al., 2019), or maximizing the CW loss (Carlini & Wagner, 2017). However, in some contexts, we need to generate adversarial examples maximizing multiple objectives or goals, e.g., attacking multiple models (Pang et al., 2019; Bui et al., 2020) or finding universal perturbations (Moosavi-Dezfooli et al., 2017). The work of Suzuki et al. (2019) was a pioneering attempt to consider the generation of adversarial examples as a multi-objective optimization problem. The authors proposed a non-adaptive method based on Evolutionary Multi-Objective Optimization (EMOO) Deb (2011) to generate sets of adversarial examples. However, the EMOO method is computationally expensive and requires a large number of evaluations, which limits its practicality. Additionally, the authors applied MOO without conducting an extensive study on the behavior of the algorithm, which could limit the effectiveness of the proposed method. Furthermore, the experimental results presented in the work are limited, which could weaken the evidence for the effectiveness of the proposed method. To this end, the work of Wang et al. (2021) examined the worst-case scenario by casting the problem of interest as a min-max problem for finding the weight of each task. However, this principle leads to a problem of lacking generality in other tasks. To mitigate the issue, Wang et al. (2021) proposed a regularization to strike a balance between the average and the worst-case performance. The final optimization was formulated as follow: $$\operatorname*{max}_{\delta:\|\delta\|\leq\epsilon}\operatorname*{min}_{w\in\Delta_{m}}\sum_{i=1}^{K}w_{i}f_{i}(\delta)+\frac{\gamma}{2}\|w-1/K\|_{2}^{2},$$ Where fi(v) is the victim model's loss (i.e., cross entropy loss or KL divergence) and γ > 0 is the regularization parameter. The authors used the bisection method (Boyd et al., 2004) with project gradient descent for the inner minimization and project gradient ascent for the outer maximization. There are several major differences in comparison to MOO and TA-MOO methods: (i) In principle, MinMax considers the worst-case performance only while our methods improve performance of all tasks simultaneously. (ii) MinMax weighs the tasks' losses to find the minimal weighted sum loss in its inner minimization, however, as discussed in Section E.4 the higher weighted loss does not directly imply the higher success rate in attacking multi-tasks simultaneously. In contrast, our methods use multi-gradient descent algorithm (Deb, 2011) in order to increase losses of all tasks simultaneously. (iii) The original principle of MinMax leads to the biasing problem to the worst-case task. The above regularization has been used to mitigate the issue, however, it considers all tasks equally. In contrast, our TA-MOO takes goal-achievement status of each task into account and focuses more on the goal-unachieved tasks. Recently, Guo et al. (2020) proposed a multi-task adversarial attack and demonstrated on the universal perturbation problem. However, while Wang et al. (2021) and ours can be classified as an iterative optimizationbased attack, Guo et al. (2020) requires a generative model in order to generate adversarial examples. While this line of attack is faster than optimization-based attacks at the inference phase, it requires to train a generator on several tasks beforehand. Due to the difference in setting, we do not compare with that work in this paper. More recently, Qiu et al. (2022) proposed a framework to attack a generative Deepfake model using the multi-gradient descent algorithm in their backpropagation step. While their method also use the multiobjective optimization for generating adversarial examples, there are several major differences to ours. Firstly, their method aims for a generative Deepfake model while our method aims for the standard classification problem which is the most common and important setting in AML. Secondly, we conduct comprehensive experiments to show that a direct and naive application of MOO to adversarial generation tasks does not work satisfactorily because of the gradient dominating problem. Most importantly, we propose the TA-MOO method which employs a geometry-based regularization term to favor the unsuccessful tasks, while trying to maintain the performance of the already successful tasks. We have conducted extensive experiments to show that our TA-MOO consistently achieves the best attacking performance across different settings. We also conducted additional experiments with SOTA multi-task learning methods which are PCGrad (Yu et al., 2020) and CAGrad (Liu et al., 2021a) in Section 4.5. Compared to these methods, our TA-MOO still achieves the best attack performance thanks to the Task Oriented regularization. ## B Further Details Of The Proposed Method B.1 Proofs Lemma 1. *Sorting* ws+1:m *into* us+1:m such that us+1 ≥ us+2 ≥ ... ≥ um*. Defining* ρ = max ns + 1 ≤ i ≤ m : ui +1 i−s 1 −Pij=s+1 uj > 0 o*. Denoting* γ =1ρ 1 −Pρ i=s+1 ui , the projection projS (w) *can be computed as* $$p r o j_{S}\left(w\right)_{i}=\begin{cases}0&1\leq i\leq s\\ \operatorname*{max}\left\{w_{i}+\gamma,0\right\}&o t h e r w i s e\end{cases}$$ Furthermore, the regularization Ω (w) *has the form:* $$\Omega\left(w\right)=\sum_{i=1}^{s}w_{i}^{2}+\sum_{i=s+1}^{m}\left(w_{i}-\max\left\{w_{i}+\gamma,0\right\}\right)^{2}.\tag{5}$$ Proof. The proof is based on Wang & Carreira-Perpinán (2013) with modifications. We need to solve the following OP: $$\begin{array}{c}{{\operatorname*{min}_{\pi}{\frac{1}{2}}\left\|w-\pi\right\|_{2}^{2}}}\\ {{\mathrm{s.t.}:\pi\geq\mathbf{0}}}\\ {{\left\|\pi\right\|_{1}=1.}}\end{array}$$ We note that π1 = ... = πs = 0. The OP of interest reduces to $ \qquad\qquad\min_{\pi_{s+1:m}}\frac{1}{2}\sum_{i=s+1}^{m}\left(\pi_{i}-w_{i}\right)^{2}$ s.t. :$ \pi_{s+1:m}\geq\mathbf{0}$ ... $$\sum_{i=s+1}^{m}\pi_{i}=1.$$ Using the Karush-Kuhn-Tucker (KKT) theorem, we construct the following Lagrange function: $${\mathcal{L}}\left(\pi,\gamma,\beta\right)={\frac{1}{2}}\sum_{i=s+1}^{m}\left(\pi_{i}-w_{i}\right)^{2}-\gamma\left(\sum_{i=s+1}^{m}\pi_{i}-1\right)-\sum_{i=s+1}^{m}\beta_{i}\pi_{i}.$$ Setting the derivative w.r.t. πi to zeros and using the KKT conditions, we obtain: $$\begin{array}{c}{{\pi_{i}-w_{i}-\gamma-\beta_{i}=0,\forall i=s+1,...,m}}\\ {{\sum_{i=s+1}^{m}\pi_{i}=1}}\\ {{\beta_{i}\geq0,\pi_{i}\geq0,\beta_{i}\pi_{i}=0,\forall i=s+1,...,m.}}\end{array}$$ If πi > 0, βi = 0, hence πi = wi + γ > 0. Otherwise, if πi = 0, wi + γ = −βi ≤ 0. Therefore, ws+1:m has the same order as πs+1:m and we can arrange them as: πs+1 ≥ πs+2 ≥ ... ≥ πρ > πρ−1 = ... = πm = 0. us+1 = ws+1 ≥ us+2 = ws+2 ≥ .... ≥ up = wp ≥ uρ−1 = wρ−1 ≥ ... ≥ um = wm ≥ 0. It appears that 1 = Pm i=s+1 πi =Pρ i=s+1 πi =Pρ i=s+1 (wi + γ) =Pρ i=s+1 wi + (ρ − s) γ. Hence, we gain γ =1 ρ−s -1 −Pρ i=s+1 wi =1 ρ−s -1 −Pρ i=s+1 ui . We now prove that ρ = max ns + 1 ≤ i ≤ m : ui +1 i−s 1 −Pij=s+1 uj > 0 o. - For i = ρ, we have $$u_{\rho}+{\frac{1}{\rho-s}}\left(1-\sum_{j=s+1}^{\rho}u_{j}\right)=u_{\rho}+\gamma=w_{\rho}+\gamma>0.$$ - For *i < ρ*, we have we have $$u_{i}+\frac{1}{i-s}\left(1-\sum_{j=s+1}^{i}u_{j}\right)=\frac{1}{i-s}\left((i-s)u_{i}+1-\sum_{j=s+1}^{i}u_{j}\right)$$ $$=\frac{1}{i-s}\left[(i-s)w_{i}+\sum_{j=s+1}^{\rho-1}\pi_{j}-\sum_{j=s+1}^{i}w_{j}\right]$$ $$=\frac{1}{i-s}\left[(i-s)w_{i}+\sum_{j=i+1}^{\rho-1}\pi_{j}+\sum_{j=s+1}^{i}\left(\pi_{j}-w_{j}\right)\right]$$ $$=\frac{1}{i-s}\left[(i-s)\left(w_{i}+\gamma\right)+\sum_{j=i+1}^{\rho-1}\pi_{j}\right]$$ $$=\frac{1}{i-s}\left[(i-s)\pi_{i}+\sum_{j=i+1}^{\rho-1}\pi_{j}\right]>0.$$ - For *i > ρ*, we have ui +1 i − s j=s+1 uj = 1 i − s j=s+1 uj 1 −X i (i − s)ui + 1 −X i =1 i − s j=s+1 wj (i − s)wi +X ρ−1 j=s+1 πj −X i =1 i − s j=ρ wj (i − s)wi +X ρ−1 j=s+1 (πj − wj ) − X i =1 i − s j=ρ wj (i − s)wi + (ρ − s − 1)γ − X i =1 i − s j=ρ (wi − wj ) (ρ − s − 1)(wi + γ) +X i ≤ 0. ![20_image_0.png](20_image_0.png) Figure 2: Visualization of standard MOO and TA-MOO solutions in a scenario of 2 goal-achieved tasks (∇f s 1,2 ) and 2 goal-unachieved tasks (∇f u 1,2 ). (left) MOO; (middle) MOO on the set of goal-unachieved tasks only; (right) TA-MOO with a solution focuses more on the goal-unachieved tasks. Therefore, $\rho=\max\left\{s+1\leq i\leq m:u_{i}+\frac{1}{i-s}\left(1-\sum_{j=s+1}^{i}u_{j}\right)>0\right\}.$ Finally, we also have $\pi_{i}=\max\{w_{i}+\gamma,0\},i=s+1,...,m$ and $\pi_{i}=0,i=1,...,s$. Theorem 1. *The regularization* Ω (w) *has the following closed-form:* $$\Omega\left(w\right)=\sum_{i=1}^{s}w_{i}^{2}+\frac{1}{m-s}\left(1-\sum_{i=s+1}^{m}w_{i}\right)^{2}.\tag{6}$$ Proof. Recall ρ = max ns + 1 ≤ i ≤ m : ui +1 $\left\{s+1\leq i\leq m:u_{i}+\frac{1}{i-s}\left(1-\sum_{j=s+1}^{i}u_{j}\right)>0\right\}$. The o. Therefore, ρ = m because we have $$u_{m}+{\frac{1}{m-s}}\left(1-\sum_{j=s+1}^{m}u_{j}\right)=w_{m}+{\frac{1}{m-s}}\left(1-\sum_{j=s+1}^{m}w_{j}\right)=w_{m}+{\frac{\sum_{j=1}^{s}w_{j}}{m-s}}>0.$$ It follows that $$\gamma=\frac{1}{m-s}\left(1-\sum_{i=s+1}^{m}u_{i}\right)=\frac{1}{m-s}\left(1-\sum_{i=s+1}^{m}w_{i}\right)\geq0.$$ $$\operatorname{proj}_{\mathcal{S}}\left(w\right)_{i}=\begin{cases}0&1\leq i\leq s\\ \max\left\{w_{i}+\gamma,0\right\}=w_{i}+\gamma&\text{otherwise}\end{cases}$$ $$\Omega\left(w\right)=\sum_{i=1}^{s}w_{i}^{2}+\sum_{i=s+1}^{m}\left(w_{i}-\max\left\{w_{i}+\gamma,0\right\}\right)^{2}$$ $$=\sum_{i=1}^{s}w_{i}^{2}+\sum_{i=s+1}^{m}\gamma^{2}=\sum_{i=1}^{s}w_{i}^{2}+(m-s)\gamma^{2}$$ $$=\sum_{i=1}^{s}w_{i}^{2}+\frac{1}{m-s}\left(1-\sum_{i=s+1}^{m}w_{i}\right)^{2}.$$ ## B.2 Illustrations Of How Moo And Ta-Moo Work Figure 2 illustrates solutions of MOO and TA-MOO in a scenario of 2 goal-achieved tasks (with corresponding gradients ∇f s 1,2 ) and 2 goal-unachieved tasks (with corresponding gradients ∇f u 1,2 ). As illustrated in the left figure, with standard MOO method, where all the tasks' gradients have been considered regardless their status and the solution associated with the minimal norm is the perpendicular vector as suggested by geometry (Sener & Koltun, 2018). If considering the goal-unachieved tasks only as in the middle case, the MOO solution | CIFAR10 | CIFAR100 | | | | |--------------|------------|---------|---------|-------| | Nat-Acc | Adv-Acc | Nat-Acc | Adv-Acc | | | ResNet18 | 86.47 | 42.14 | 59.64 | 18.62 | | VGG16 | 84.24 | 40.88 | 55.27 | 16.41 | | GoogLeNet | 88.26 | 41.26 | 63.10 | 19.16 | | EfficientNet | 74.52 | 41.36 | 57.67 | 19.90 | | MobileNet | 76.52 | 31.12 | - | - | | WideResNet | 88.13 | 48.62 | - | - | Table 8: Robustness performance of models in the experiments is the edge case. However, this extreme strategy ignores all the goal-achieved tasks which might lead to the instability. The Task Oriented regularization strikes a balance between the two aforementioned strategies as illustrated in the right figure. The method focuses more on improving the goal-unachieved tasks while spend less effort to maintain the goal-achieved tasks. With λ = 0 the TA-MOO optimal solution is equivalent to the standard MOO optimal solution while it becomes the MOO solution in the case of the goal-unachieved tasks only when λ → ∞. ## C Experimental Settings General settings. Through our experiments, we use six common architectures including ResNet18 (He et al., 2016), VGG16 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), EfficientNet (B0) (Tan & Le, 2019), MobileNet Howard et al. (2017), and WideResNet (with depth 34 and widen factor 10) Zagoruyko & Komodakis (2016) with the implementation of https://github.com/kuangliu/pytorch-cifar. We evaluate on the full testing set (10k) of two benchmark datasets which are CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). More specifically, the two datasets have 50k training images and 10k testing images, respectively, with the same image resolution of 32 × 32 × 3. However, while the CIFAR10 dataset has 10 classes, the CIFAR100 dataset has 100 classes and fewer images per class. Therefore, in general, an adversary is easier to attack a CIFAR100 model than a CIFAR10 one as shown in Table 8. We observed that the attack performance is saturated with standard training models. Therefore, to make the job of adversaries more challenging, we use Adversarial Training with PGD-AT (Madry et al., 2018) to robustify the models and use these robust models as victim models in our experiments. Specifically, we use the SGD optimizer (momentum 0.9 and weight decay 5 × 10−4) and Cosine Annealing Scheduler to adjust the learning rate with an initial value of 0.1 and train a model in 200 epochs as suggested in the implementation above. We use PGD-AT L∞ (Madry et al., 2018) with the same setting for both CIFAR10 and CIFAR100 datasets, i.e., perturbation limitation = 8/255, k = 20 steps, and step size η = 2/255. Method settings. In this work, we evaluate all the methods in the untargeted attack setting with L∞ norm. The attack parameters are the same among methods, i.e., number of attack steps 100, attack budget = 8/255 and step size ηδ = 2/255. In our method, we use K=10 to update the weight in each step with learning rate ηw = 0.005. Tradeoff parameter λ = 100 in all experiments. In MinMax (Wang et al., 2021), we use the same γ = 3 for all settings and use the authors' implementation 2. Attacking ensemble model settings. In our experiment, we use an ensemble of four adversarially trained models: ResNet18, VGG16, GoogLeNet, and EfficientNet. The architecture is the same for both the CIFAR10 and CIFAR100 datasets except for the last layer which corresponds with the number of classes in each dataset. The final output of the ensemble is an average of the probability outputs (i.e., output of the softmax layer). We use three different losses as an object for generating adversarial examples including Cross Entropy (CE) (Madry et al., 2018), Kullback-Leibler divergence (KL) (Zhang et al., 2019), and CW loss (Carlini & Wagner, 2017). 2https://github.com/wangjksjtu/minmax-adv | | Deterministic | Stochastic | |-------------------|-----------------|----------------------| | Identity | Identity | Identity | | Horizontal flip | p = 1 | p = 0.5 | | Vertical flip | p = 1 | p = 0.5 | | Center crop | scale = 0.6 | scale = U(0.6, 1.0) | | Adjust brightness | factor = 1.3 | factor = U(1.0, 1.3) | | Rotation | angle = 10 | angle = U(−10 , 10 ) | | Adjust gamma | gamma = 1.3 | gamma = U(0.7, 1.3) | Table 9: Data transformation setting. U represents uniform sampling function and p represents probability to excuse a transformation (e.g., flipping). Universal perturbation settings. We follow the experimental setup in Wang et al. (2021), such that the full test set (10k images) is randomly divided into equal-size groups (K images per group). The comparison has been conducted on the CIFAR10 and CIFAR100 datasets, and CW loss. We use adversarial trained ResNet18, VGG16 and EfficientNet as base models. We observed that the ASR-All was mostly zero, indicating that it is difficult to generate a general perturbation for all data points. Therefore, we use ASR-Avg to compare the performances of the methods. Robust adversarial examples against transformations settings. In our experiment, we use 7 common data transformations including I-Identity, H-Horizontal flip, V-Vertical flip, C-Center crop, B-Adjust brightness, R-Rotation, and G-Adjust gamma. The parameter setting for each transformation has been shown in Table 9. In the deterministic setting, a transformation has been fixed with one specific parameter, e.g., center cropping with a scale of 0.6 or adjusting brightness with a factor of 1.3. While in the stochastic setting, a transformation has been uniformly sampled from its family, e.g., center cropping with a random scale in range (0.6, 1.0) or adjusting brightness with a random factor in range (1.0, 1.3). The experiment has been conducted on adversarially trained ResNet18 model with the CW loss. ## D Additional Experiments D.1 Transferability Of Adversarial Examples In The Ens Setting We conduct an additional experiment to evaluate the transferability of our adversarial examples. We use an ensemble (RME) of three models: ResNet18, MobileNet, and EfficientNet as a source model and apply different adversaries to generate adversarial examples to this ensemble. We then use these adversarial examples to attack other ensemble architectures (target models), for example, RMEVW is an ensemble of 5 models including ResNet18, MobileNet, EfficientNet, VGG16 and WideResNet. Table 10 reports the SAR-All metric of transferred adversarial examples, where a higher number indicates a higher success rate of attacking a target model, therefore, also implies a higher transferability of adversarial examples. The first column (heading RME) shows SAR-All when adversarial examples attack the source model (i.e., the whitebox attack setting). The Uniform strategy achieves the lowest transferability. It can be observed from Table 10 that the Uniform strategy achieves the lowest SAR in the whitebox attack setting. This strategy also has the lowest transferability in attacking other ensembles (except an ensemble RVW). MinMax's transferability drops on dissimilar target models. While MinMax achieves the secondbest performance in the whitebox attack setting, its adversarial examples have a low transferability when target models are different from the source model. For example, in the target model RVW where there is only one member of the target model from the source model (RME) (i.e., R or ResNet18), MinMax achieves a 23.75% success rate which is lower than the Uniform strategy by 1.28%. Similar observation can be observed on target models EVW and MVW, where MinMax outperforms the Uniform strategy by just 0.2% and 0.6%, respectively. Table 10: Evaluation on the Transferability of adversarial examples. Each cell (row-ith, column-jth) reports SAR (higher is better) of adversarial examples from the same source architecture (RME) with an adversary at row-ith to attack an ensemble at column-jth. Each architecture has been denoted by symbols such as R: ResNet18, M: MobileNet, E: EfficientNet, V: VGG16, W: WideResNet. For examples, RME represents for an ensemble of ResNet18, MobileNet and EfficientNet. The highest/second highest performance is highlighted in Bold/Underline. The table is copied from Table 3 in the main paper for reading comprehension purpose. | RME | RVW | EVW | MVW | REV | MEV | RMEV | RMEVW | | |---------|-------|-------|-------|-------|-------|--------|---------|-------| | Uniform | 31.73 | 25.03 | 22.13 | 22.73 | 29.50 | 28.44 | 26.95 | 20.50 | | MinMax | 40.01 | 23.75 | 22.39 | 23.34 | 32.57 | 32.75 | 31.85 | 21.99 | | MOO | 35.20 | 24.25 | 22.94 | 23.76 | 30.65 | 32.28 | 29.49 | 21.77 | | TA-MOO | 40.97 | 25.13 | 23.59 | 24.38 | 33.00 | 33.05 | 32.14 | 23.04 | TA-MOO achieves the highest transferability on a diverse set of ensembles . Our TA-MOO adversary achieves the highest attacking performance on the whitebox attack setting, with a huge gap of 9.24% success rate over the Uniform strategy. Our method also achieves the highest transferability regardless diversity of a target ensemble. More specifically, on target models such as REV, MEV, and RMEV, where members in the source ensemble (RME) are also in the target ensemble, our TA-MOO significantly outperforms the Uniform strategy, with the highest improvement is 5.19% observed on target model RMEV. On the target models EVW and MVW which are less similar to the source model, our method still outperforms the Uniform strategy by 1.46% and 1.65%. The superior performance of our adversary on the transferability shows another benefit of using multi-objective optimization in generating adversarial examples. By reaching the intersection of all members' adversarial regions, our adversary is capable to generate a common vulnerable pattern on an input image shared across architectures, therefore, increasing the transferability of adversarial examples. ## D.2 Adversarial Training With Ta-Moo Setting. We conduct adversarial training with adversarial examples generated by MOO and TA-MOO attacks to verify the quality of these adversarial examples. We choose an ensemble of 3 MobileNet architectures (non-diverse set) and ensemble of 3 different architectures including ResNet18, MobileNet and EfficientNet (diverse set). To evaluate the adversarial robustness, we compare natural accuracy (NAT) and robust accuracy (ADV) against PGD-Linf attack of these adversarial training methods (the higher the better). We also measure the success attack rate (SAR) of adversarial examples generated by the same PGD-Linf attack on fooling each single member and all members of the ensemble (the lower the better). We use k = 10, = 8/255, η = 2/255 for adversarial training and PGD-Linf with k = 20, = 8/255, η = 2/255 for robustness evaluation. We use SGD optimizer with momentum 0.9 and weight decay 5e-4. Initial learning rate is 0.1 with Cosine Annealing scheduler and train on 100 epochs. Result 1. Reducing transferability. It can be seen that the SAR-All of MOO-AT and TA-MOO-AT are much lower than that on other methods. More specifically, the gap of SAR-All between PGD-AT and TA-MOO-AT is (5.33%) 6.13% on the (non) diverse setting. The lower SAR-All indicating that adversarial examples are harder to transfer among ensemble members on the TA-MOO-AT model than on the PGD-AT model. Result 2. Producing more robust single members. The comparison of average SAR shows that adversarial training with TA-MOO produces more robust single models than PGD-AT does. More specifically, the average robust accuracy (measured by 100% - *A-Avg*) of TA-MOO-AT is 32.17%, an improvement of 6.06% over PGD-AT in the non-diverse setting, while there is an improvement of 4.66% in the diverse setting. Result 3. Adversarial training with TA-MOO achieves the best robustness. More specifically, on the non-divese setting, TA-MOO-AT achives 38.22% robust accuracy, an improvement of 1% over MinMax- Table 11: Robustness evaluation of Adversarial Training methods on the CIFAR10 dataset. RME represents an ensemble of ResNet18 (R), MobileNet (M) and EfficientNet E), while MobiX3 represents an ensemble of three MobileNets. NAT and ADV measure the natural accuracy and the robust accuracy against PGD-Linf attack (↑the higher the better). Other metrics measure the success attack rate (SAR) of adversarial examples generated by the same PGD-Linf attack on fooling each single member and all members of the ensemble (↓the lower the better). The highest/second highest **robustness** is highlighted in **Bold**/Underline. The most important metric is emphasized in blue. | | Arch | NAT↑ | ADV↑ | A-All↓ | A-Avg↓ | R/M1↓ | M/M2↓ | E/M3↓ | |-----------|--------|--------|--------|----------|----------|---------|---------|---------| | PGD-AT | MobiX3 | 80.43 | 32.78 | 54.34 | 73.89 | 76.17 | 74.35 | 71.14 | | MinMax-AT | MobiX3 | 79.01 | 37.28 | 50.28 | 66.77 | 65.27 | 70.27 | 64.78 | | MOO-AT | MobiX3 | 79.38 | 33.04 | 46.28 | 74.36 | 71.25 | 74.53 | 77.29 | | TA-MOO-AT | MobiX3 | 79.22 | 38.22 | 48.21 | 67.83 | 68.04 | 67.37 | 68.07 | | PGD-AT | RME | 86.52 | 37.36 | 49.01 | 69.75 | 65.81 | 75.24 | 68.21 | | MinMax-AT | RME | 83.16 | 40.40 | 46.91 | 65.73 | 65.22 | 68.28 | 63.70 | | MOO -AT | RME | 82.04 | 37.48 | 45.24 | 70.11 | 69.00 | 75.43 | 65.90 | | TA-MOO-AT | RME | 82.59 | 41.32 | 43.68 | 65.09 | 63.77 | 68.98 | 62.51 | AT and 5.44% over standard PGD-AT. On the diverse setting, the improvement over MinMax-AT and PGD-AT are 0.9% and 4%, respectively. The root of the improvement is the ability to generate stronger adversarial examples in the the sense that they can challenge not only the entire ensemble model but also all single members. These adversarial examples lie in the joint insecure region of members (i.e., the low confidence region of multiple classes), therefore, making the decision boundaries more separate. As a result, adversarial training with TA-MOO produces more robust single models (i.e., lower SAR-Avg) and significantly reduces the transferability of adversarial examples among members (i.e., lower SAR-All). These two conditions explain the best ensemble adversarial robustness achieved by TA-MOO. ## D.3 Universal Perturbation (Uni) Additional experimental results. In addition to the experiments on ResNet18 as reported in Table 5, we would like to provide additional experimental results on two other adversarial trained models VGG16 and EfficientNet as shown in Table 12. It can be seen that, TA-MOO consistently achieves the best attacking performance on ResNet18 and VGG16, on both CIFAR10 and CIFAR100 datasets, with K ≥ 8. Why does MOO work? As shown in Table 12, MOO consistently achieves better performance than the Uniform strategy (except for the setting with EfficientNet on the CIFAR100 dataset). To find out the reason for the improvement, we investigate the gradient norm k∇δf(δ)k and weight w for the first, and second groups (as an example) and the average over 100 groups of the testset as shown in Table 13. It can be seen that in the first and second groups, there are some tasks that have significantly low gradient strengths than other tasks. The gap of the strongest/weakest gradient strength can be a magnitude of 106indicating the domination of one task over others. While this issue can cause the failure as in the ENS setting, however, in the UNI setting, the lowest gradient strengths in each group correspond to unsuccessful tasks (unsuccessful adversarial examples) and vice versa. Recall that we use the multi-gradient descent algorithm to solve MOO, which in principle assigns a higher weight for a weaker gradient vector. Therefore, in the UNI setting, while the dominating issue still exists, fortunately, the result still fits our desired weighting strategy (i.e., higher weight for an unsuccessful task and vice versa). Moreover, when there are a large number of groups (i.e., 100 groups), the issue of dominating tasks is alleviated. The average gradient strength is more balanced as shown in Table 13. This explains the improvement of MOO over the Uniform strategy in the UNI setting. ![25_image_0.png](25_image_0.png) ![25_image_1.png](25_image_1.png) (a) MobiX3 (b) RME Figure 3: Comparison progress of three adversarial training methods. The bigger marker size represents the later epoch. Each point represents the natural accuracy and robust accuracy against PGD-Linf attack on the testing set. Table 12: Evaluation of generating Universal Perturbation on the CIFAR10 and CIFAR100 datasets. R: ResNet18, V: VGG16, E: EfficientNet. | R V E | |---------| CIFAR10 CIFAR100 K=4 K=8 K=12 K=16 K=20 K=4 K=8 K=12 K=16 K=20 Uniform 37.52 30.34 27.41 25.52 24.31 65.40 58.99 55.33 53.02 51.49 MinMax **50.13** 33.68 20.46 15.74 14.73 **74.73** 62.29 52.05 45.26 42.33 MOO 43.80 35.92 31.41 28.75 26.83 69.35 62.72 57.72 54.12 52.25 TA-MOO 48.00 **39.31 34.96 31.84 30.12** 72.74 **68.06 62.33 57.48 54.12** Uniform 37.76 30.81 27.49 25.94 24.46 66.87 61.49 58.53 56.29 54.98 MinMax **47.96** 30.88 20.20 16.93 16.25 **78.58** 69.14 58.85 51.81 48.09 MOO 43.04 34.56 30.07 27.43 25.42 73.46 66.51 61.28 57.88 56.09 TA-MOO 46.58 **38.33 32.32 29.16 26.56** 75.57 **71.86 67.22 62.99 59.19** Uniform 44.86 39.03 36.37 34.65 33.49 67.55 60.99 57.35 **54.84 53.57** MinMax 44.47 32.96 28.86 27.01 26.47 69.69 57.99 50.93 45.59 43.87 MOO 45.31 **39.28 36.44 34.72 33.51** 66.68 59.69 54.95 53.20 51.43 TA-MOO **46.74** 37.95 33.95 31.71 30.41 **70.40 63.78 58.17** 53.26 50.66 Table 13: Evaluation of generating Universal Perturbation (K=8) on the CIFAR10 dataset with ResNet18 architecture and MOO method. {Ti} K i=1 represents value for each task (i.e., a sample in a group). w1/w2 represents the weight of the first/second group of K samples, while w represents the the statistic of weight over all groups (mean±std). k∇δ1 fi(δ1)k / k∇δ2 fi(δ2)k represents the gradient norm of the first/second group of K samples, while k∇δfi(δ)k represents the statistic of gradient norm over all groups (mean±std). I0/I1 represents the indicator function for a successful (1) or unsuccessful (0) task, while I represents the the statistic of successful rate over all groups. | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | | |--------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | k∇δ1 fi(δ1)k | 1.15e1 | 3.45e-5 | 1.97e-2 | 1.26e-4 | 1.27e0 | 1.04e-1 | 1.04e1 | 9.91e0 | | w0 | 0.0238 | 0.1861 | 0.1859 | 0.1861 | 0.1763 | 0.1862 | 0.0257 | 0.0299 | | I0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | | k∇δ2 fi(δ2)k | 9.70e0 | 1.59e1 | 4.32e-4 | 4.27e-4 | 1.25e1 | 6.23e-5 | 2.91e-5 | 6.17e-6 | | w1 | 0.0341 | 0.0167 | 0.1854 | 0.1854 | 0.0222 | 0.1854 | 0.1854 | 0.1854 | | I1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | | k∇δfi(δ)k | 4.93±6.63 | 4.23±6.97 | 5.18±7.42 | 3.84±5.83 | 4.39±6.04 | 6.66±7.64 | 4.82±7.48 | 5.25±7.17 | | w | 0.12±0.08 | 0.14±0.09 | 0.12±0.08 | 0.13±0.08 | 0.12±0.09 | 0.10±0.08 | 0.14±0.10 | 0.11±0.08 | | I | 0.38±0.49 | 0.28±0.46 | 0.36±0.48 | 0.32±0.48 | 0.38±0.48 | 0.48±0.50 | 0.32±0.47 | 0.40±0.49 | ## D.4 Robust Adversarial Examples Against Transformations (Eot) We observed that in EoT with the stochastic setting, adjusting gamma sometimes has the overflow issue resulting in an infinite gradient. Recall that our method using MGDA to solve MOO which relies on the stability of gradient strengths. Therefore, in the case of having infinite gradients, learning weight w is unstable, resulting to lower performance in both MOO and TA-MOO. To overcome the overflow issue, we allocate memory to cache the valid gradient of each task in the previous iteration and replace the infinite value in the current iteration with the valid one in the memory. The storage only requires a tensor with the same shape as the gradient (i.e., as the exact size of the input), therefore, it does not increase the computation resource significantly. As shown in Table 14, this simple technique helps to improve performance of TA-MOO by 5.3% on both the CIFAR10 and CIFAR100 datasets. It also helps to improve performance of MOO by 0.8% and 4.8%, respectively. Finally, after overcoming the gradient issue, the TA-MOO achieves the best performance on the CIFAR100 dataset and the second best performance on the CIFAR10 dataset (0.4% lower in ASR-All but 0.8% higher in ASR-Avg when comparing to MinMax). This result provides additional evidence of the advantage of our method. ## D.5 Generating Speed Comparison And Experiments' Stability Generating Speed Comparison. Table 16 shows the average time to generate one adversarial example in each setting. The results are measured on the CIFAR10 dataset with ResNet18 architecture in the Ensemble of Transformations (EoT) and Universal Perturbation (Uni) settings. We use 1 Titan RTX 24GB for the EoT experiment and 4 Tesla V100 16GB each for the other experiments. It is worth mentioning that our primary focus in this paper is showing the advantage of MOO and the Task-Oriented regularization in generating adversarial examples. Therefore, we did not try to optimize our implementation in terms of generating time. Experiments' Stability. We conduct an experiment with 5 different random seeds to generate adversarial examples for the ENS setting to evaluate the stability of experimental results on choosing of random seed. The experiment is on the CIFAR10 dataset, with an ensemble of 4 architectures including ResNet18, VGG16, GoogLeNet, and EfficientNet. We report mean and variation values in Table 17. It can be observed that there is a slight variation in attack performances across methods. The variation is small enough compared to | D-C10 D-C100 S-C10 S-C100 | |-----------------------------| Table 14: Robust adversarial examples against transformations evaluation. The highest/second highest performance is highlighted in **Bold**/Underline. MOO? and TA-MOO?represent version with memory to overcome the infinite gradient issue in the stochastic setting. | Deterministic | Stochastic | | | | | |-----------------|--------------|-------|-------|-------|-------| | A-All | A-Avg | A-All | A-Avg | | | | C10 | Uniform | 25.98 | 55.33 | 31.47 | 50.55 | | MinMax | 30.54 | 52.20 | 33.35 | 49.44 | | | MOO | 21.25 | 49.81 | 26.97 | 43.84 | | | TA-MOO | 31.10 | 55.26 | 28.26 | 45.67 | | | MOO? | - | - | 27.79 | 45.91 | | | TA-MOO? | - | - | 32.96 | 50.27 | | | C100 | Uniform | 56.19 | 76.23 | 59.89 | 73.73 | | MinMax | 59.75 | 75.72 | 61.30 | 73.59 | | | MOO | 53.17 | 74.21 | 54.96 | 69.26 | | | TA-MOO | 60.88 | 76.71 | 56.23 | 69.91 | | | MOO? | - | - | 58.79 | 72.81 | | | TA-MOO? | - | - | 61.54 | 74.07 | | Table 15: Robust adversarial examples against transformations evaluation. The highest/second highest performance is highlighted in **Bold**/Underline. The most important metric is emphasized in blue color. MOO? and TA-MOO?represent version with memory to overcome the infinite gradient issue in the stochastic setting. I: Identity, H: Horizontal flip, V: Vertical flip, C: Center crop, G: Adjust gamma, B: Adjust brightness, R: Rotation. | A-All | A-Avg | I | H | V | C | G | B | R | | |---------|---------|-------|-------|-------|-------|-------|-------|-------|-------| | Uniform | 25.98 | 55.33 | 44.85 | 41.58 | 82.90 | 72.56 | 45.92 | 49.59 | 49.93 | | MinMax | 30.54 | 52.20 | 43.31 | 41.59 | 78.80 | 64.83 | 44.38 | 46.53 | 45.97 | | MOO | 21.25 | 49.81 | 36.23 | 33.93 | 87.47 | 71.05 | 37.68 | 40.21 | 42.12 | | TA-MOO | 31.10 | 55.26 | 44.15 | 41.86 | 85.19 | 71.86 | 45.53 | 48.70 | 49.54 | | Uniform | 56.19 | 76.23 | 70.43 | 69.01 | 87.66 | 87.36 | 71.40 | 74.25 | 73.47 | | MinMax | 59.75 | 75.72 | 70.13 | 69.26 | 87.45 | 86.03 | 71.54 | 73.30 | 72.32 | | MOO | 53.17 | 74.21 | 66.96 | 65.68 | 89.16 | 87.03 | 68.49 | 71.11 | 71.06 | | TA-MOO | 60.88 | 76.71 | 70.43 | 69.37 | 89.11 | 87.95 | 71.70 | 74.73 | 73.69 | | Uniform | 31.47 | 50.55 | 48.58 | 44.70 | 65.52 | 51.14 | 47.43 | 48.76 | 47.70 | | MinMax | 33.35 | 49.44 | 47.35 | 44.45 | 62.78 | 51.75 | 46.32 | 47.13 | 46.34 | | MOO | 26.97 | 43.84 | 40.62 | 38.45 | 57.65 | 48.55 | 40.41 | 40.71 | 40.47 | | TA-MOO | 28.26 | 45.67 | 42.80 | 39.66 | 61.98 | 47.92 | 41.80 | 43.01 | 42.54 | | MOO? | 27.79 | 45.91 | 42.43 | 39.65 | 62.11 | 51.44 | 41.62 | 42.21 | 41.92 | | TA-MOO? | 32.96 | 50.27 | 48.18 | 45.26 | 62.97 | 52.49 | 47.03 | 48.22 | 47.76 | | Uniform | 59.89 | 73.73 | 73.19 | 71.15 | 79.73 | 74.81 | 72.05 | 73.10 | 72.10 | | MinMax | 61.30 | 73.59 | 72.44 | 70.55 | 80.04 | 75.55 | 71.99 | 72.49 | 72.10 | | MOO | 54.96 | 69.26 | 67.62 | 66.11 | 75.88 | 72.72 | 66.87 | 68.11 | 67.49 | | TA-MOO | 56.23 | 69.91 | 68.52 | 66.92 | 76.70 | 72.71 | 67.57 | 68.97 | 67.97 | | MOO? | 58.79 | 72.81 | 71.58 | 69.08 | 80.17 | 75.01 | 70.78 | 71.71 | 71.33 | | TA-MOO? | 61.54 | 74.07 | 72.95 | 70.95 | 80.94 | 76.22 | 72.22 | 73.21 | 72.00 | | Ensemble (K=4) | EoT (K=7) | Uni@K=12 | Uni@K=20 | | |------------------|-------------|------------|------------|--------| | Uniform | 640ms | 350ms | 1850ms | 3030ms | | MinMax | 1540ms | 610ms | 1210ms | 2080ms | | MOO | 1770ms | 1130ms | 5600ms | 9280ms | | TA-MOO | 1960ms | 1200ms | 5870ms | 9500ms | Table 16: Average time per sample for generating adversarial example. All experiments are measured on the CIFAR10 dataset, EoT and Uni are with ResNet18 architecture. | A-All | A-Avg | R | V | G | E | | |---------|--------------|--------------|--------------|--------------|--------------|--------------| | Uniform | 28.12 ± 0.09 | 48.29 ± 0.05 | 48.81 ± 0.08 | 49.06 ± 0.08 | 48.27 ± 0.10 | 47.06 ± 0.03 | | MOO | 25.61 ± 0.36 | 45.13 ± 0.30 | 39.84 ± 0.62 | 47.29 ± 0.36 | 37.51 ± 0.36 | 55.90 ± 0.17 | | TA-MOO | 37.56 ± 0.32 | 51.15 ± 0.21 | 49.37 ± 0.15 | 52.80 ± 0.45 | 48.98 ± 0.25 | 53.24 ± 0.13 | Table 17: Stability of experiments' evaluation on different random seeds. Experiment on the ENS setting, with an ensemble of 4 models: Resnet18, VGG16, GoogleNet and EfficientNet. the gap between methods (i.e., the biggest variation is 0.32% in SAR-All while the smallest gap is 2.51% between MOO and the Uniform approach), therefore, making the comparison still reliable. ## D.6 Sensitivity To Hyper-Parameters In this section we provide an analytical experiment on the sensitivity of our TA-MOO method to the tradeoff λ. The study has been conducted with the ENS setting with CE loss and the EoT setting with deterministic transformations using ResNet18 architecture. All experiments are on the CIFAR10 dataset. The value of λ is changed from 1 to 1000. It can be observed from Figure 4a (the ENS setting) that (i) increasing λ reduces the performance of dominated task (i.e., ASR on the EfficientNet decreases from 54.49% at λ = 1 to 53.40% at λ = 100) while increases performances of other tasks. In overall, it significantly increases the ASR-All performance of the entire ensemble from 29.14% at λ = 1 to 38.01% at λ = 100. (ii) However, over-high λ (i.e., λ > 200) leads to the drop of performance in all tasks, resulting in a lower overall performance. A similar observation can be seen in the EoT setting in Figure 4b. The attack performance on the dominated task (V-Vertical flipping) decreases from 86.11% at λ = 50 to 83.67% at λ = 200. In contract, in the same range of λ the overall performance increases from 32.85% to 34.36%. The performances of all tasks decrease when using too large λ (i.e., λ > 200). Based on the result of this study, we choose λ = 100 in all the other experiments. ## D.7 Comparison With Standard Attacks We conducted an additional comparison on the ENS setting to further confirm the effectiveness of our method over standard adversarial attacks (which consider an entire ensemble as a single model). More specifically, we compare with AutoAttack (Croce & Hein, 2020), Brendel-Bethge attack (BB) (Brendel et al., 2019), Carlini-Wagner attack (CW) (Carlini & Wagner, 2017), and PGD attack (Madry et al., 2018). For AutoAttack, we use the standard version which includes 4 different attacks. For BB attack, we initialized with the PGD attack with 20 steps. For CW attack, we set the confidence factor to 1.0. We evaluate these attacks on 2 ensemble settings, a diverse (D) ensemble set with 4 different architectures (ResNet18, VGG16, GoogLeNet, and EfficientNet) and a non-diverse (ND) ensemble set with 4 ResNet18 architectures. It can be seen from the Table 18 that our TA-MOO attack consistently achieves the best attack performance, with a significant gap compared to the best standard attack. More specifically, our TA-MOO method achieves 38.01% (SAR-All metric) on the diverse ensemble set, while the second best attack is AutoAttack with 30.71% (a gap of 7.3%). On the non-diverse set, the gap between our TA-MOO and AutoAttack is still notably large at 4%. These standard attacks consider an entire ensemble as a single model, i.e., aim to optimize a single objective given a single ensemble output. Therefore, they cannot guarantee a successful attack on each member. ![29_image_0.png](29_image_0.png) ![29_image_1.png](29_image_1.png) (a) ENS setting (b) EoT setting Figure 4: Sensitivity to the parameter λ. Table 18: Attacking Ensemble model with a diverse set D={R-ResNet18, V-VGG16, G-GoogLeNet, EEfficientNet} and non-diverse set ND={4 ResNets}. Experiment on the CIFAR10 dataset with cross-entropy objective loss. The most important metric is emphasized in blue. | | A-All | A-Avg | R/R1 | V/R2 | G/R3 | E/R4 | | |------------|---------|---------|--------|--------|--------|--------|-------| | D | PGD | 28.21 | 48.34 | 48.89 | 49.08 | 48.38 | 47.03 | | CW | 6.10 | 16.63 | 13.53 | 15.76 | 11.74 | 25.47 | | | B&B | 6.67 | 38.03 | 37.95 | 38.92 | 35.58 | 39.68 | | | AutoAttack | 30.71 | 45.49 | 48.32 | 45.83 | 47.25 | 40.56 | | | MOO | 25.16 | 44.76 | 39.06 | 46.83 | 37.05 | 56.11 | | | TA-MOO | 38.01 | 51.10 | 49.55 | 52.15 | 49.29 | 53.40 | | | ND | PGD | 28.17 | 48.75 | 51.94 | 45.55 | 54.15 | 43.34 | | CW | 4.71 | 13.86 | 14.92 | 12.71 | 17.51 | 10.31 | | | B&B | 5.29 | 40.51 | 49.06 | 35.19 | 48.63 | 29.16 | | | AutoAttack | 37.00 | 49.32 | 51.07 | 48.58 | 51.08 | 46.55 | | | MOO | 32.50 | 52.21 | 53.25 | 49.05 | 56.80 | 49.76 | | | TA-MOO | 41.01 | 57.33 | 58.88 | 55.32 | 60.81 | 54.29 | | ## D.8 Attacking The Imagenet Dataset Experimental Setting. We conduct experiments on the ENS setting using the adversarial pre-trained models on the RobustBench (Croce et al., 2021). We use two sets of an ensemble to verify the importance of our task-oriented strategy. The first set is the robust ensemble (RE) set including 3 robust models: ResNet18 (model ID: Salman2020Do_R18 (Salman et al., 2020), robust accuracy 25.32%), ResNet50 (model ID: Salman2020Do_R50 (Salman et al., 2020), robust accuracy 34.96%) and ResNet50 (model ID: Wong2020Fast (Wong et al., 2019), robust accuracy 26.24%). The second set is the less-robust ensemble (LE) which includes 3 models: ResNet18 (model ID: Salman2020Do_R18), ResNet50 (model ID: Salman2020Do_R50) and the Table 19: Evaluation attacking performance on the ImageNet dataset. RE/LE/TAR/UNTAR represents Robust Ensemble/Less-Robust Ensemble/Targeted Attack/Untargeted Attack, respectively. R18/R50/STD represents robust ResNet18, robust ResNet50 and standard ResNet50 pre-trained model, respectively. The most important metric is emphasized in blue. | A-All | A-Avg | R18/R18 | R50/R50 | R50/STD | | | |----------|---------|-----------|-----------|-----------|-------|-------| | RE-TAR | Uniform | 29.58 | 39.38 | 42.50 | 32.22 | 43.42 | | MOO | 29.66 | 39.73 | 42.86 | 32.32 | 44.00 | | | TA-MOO | 29.68 | 39.73 | 42.90 | 32.26 | 44.02 | | | LE-TAR | Uniform | 30.30 | 58.14 | 42.36 | 32.06 | 100.0 | | MOO | 30.66 | 58.37 | 42.70 | 32.48 | 99.94 | | | TA-MOO | 30.68 | 58.25 | 42.54 | 32.36 | 99.86 | | | RE-UNTAR | Uniform | 48.58 | 60.11 | 64.22 | 51.72 | 64.38 | | MOO | 48.68 | 60.20 | 64.30 | 51.82 | 64.48 | | | TA-MOO | 49.80 | 59.71 | 63.80 | 52.38 | 62.94 | | | LE-UNTAR | Uniform | 34.24 | 61.01 | 46.98 | 36.28 | 99.78 | | MOO | 44.76 | 68.29 | 58.42 | 46.64 | 99.80 | | | TA-MOO | 49.46 | 70.74 | 61.26 | 51.14 | 99.82 | | standard training ResNet50 (model ID: Standard_R50, robust accuracy 0%). We use both targeted attack and untargeted attack settings, with = 4/255 , and η = 1/255 with 20 steps. We use 5000 images of the validation set to evaluate. Experimental Results. We report experimental results with different settings in Table 19, where RE/LE/- TAR/UNTAR represents Robust Ensemble/Less-Robust Ensemble/Targeted Attack/Untargeted Attack, respectively. It can be seen that, in the robust ensemble setting (RE-TAR and RE-UNTAR), our MOO achieves a similar performance compared to the baseline, while TA-MOO has a further improvement over MOO. The gap of SAR-All between TA-MOO and the uniform weighting strategy is 0.1% in the targeted attack setting (RE-TAR), while that in the untargeted attack setting is 1.2%. In the less-robust ensemble setting (LE-TAR and LE-UNTAR), the improvement of our methods over the baseline is higher than in the robust ensemble setting. With the gap of SAR-All between TA-MOO and the uniform strategy is 0.38% with the targeted attack setting (LE-TAR), while the gap in the untargeted setting (LE-UNTAR) is 15.22% a significantly higher. While it is acknowledged that the targeted attack is a more common protocol in attacking the ImageNet dataset (Athalye et al., 2018), however, we believe that our significant improvement on the untargeted attack is still worth noting. We conduct an additional experiment on the EoT setting with the ImageNet dataset and report result in Table 20. In this experiment, we use the robust pretrained ResNet18 model (model ID: Salman2020Do_R18) as the victim model. We use the standard attack setting, i.e., targeted attack with = 4/255, η = 1/255 with 20 steps. It can be seen that both MOO and TA-MOO could obtain a better attack performance than the uniform strategy. It is a worth noting that, in the experiment on the CIFAR10/CIFAR100 datasets (i.e., Table 6 in the main paper) the dominating issue of the vertical filliping exists and prevents MOO to obtain a better performance. In the ImageNet dataset, the dominating issue is less serious, therefore, explains the improvement of MOO and corroborates our hypothesis on the issue of dominating task. Table 20: Evaluation on the EoT setting with the ImageNet dataset. The most important metric is emphasized in blue. | A-All | A-Avg | I | H | V | C | G | B | R | | |---------|---------|-------|-------|-------|-------|-------|-------|-------|-------| | Uniform | 31.52 | 46.59 | 41.12 | 40.98 | 67.42 | 41.60 | 43.26 | 41.82 | 49.96 | | MOO | 31.92 | 47.19 | 41.92 | 41.78 | 67.64 | 42.10 | 43.66 | 42.74 | 50.48 | | TA-MOO | 32.00 | 47.21 | 41.94 | 41.80 | 67.66 | 42.06 | 43.70 | 42.80 | 50.52 | | | CW | CE | | KL | | | | |--------|---------|-------|-------|-------|-------|-------|-------| | | A-All | A-Avg | A-All | A-Avg | A-All | A-Avg | | | C10 | Uniform | 26.37 | 41.13 | 28.21 | 48.34 | 17.44 | 32.85 | | MinMax | 27.53 | 41.20 | 35.75 | 51.56 | 19.97 | 33.13 | | | MOO | 18.87 | 34.24 | 25.16 | 44.76 | 15.69 | 29.54 | | | TA-MOO | 30.65 | 40.41 | 38.01 | 51.10 | 20.56 | 31.42 | | | C100 | Uniform | 52.82 | 67.39 | 55.86 | 72.62 | 38.57 | 54.88 | | MinMax | 54.96 | 66.92 | 63.70 | 75.44 | 40.67 | 53.83 | | | MOO | 51.16 | 65.87 | 58.17 | 73.19 | 39.18 | 53.44 | | | TA-MOO | 55.73 | 67.02 | 64.89 | 75.85 | 41.97 | 53.76 | | | A-All | A-Avg | R/R1 | V/R2 | G/R3 | E/R4 | | |-----------|---------|--------|-------------|-------------|-------------|-------------| | k∇δfi(δ)k | - | - | 7.15 ± 6.87 | 4.29 ± 4.64 | 7.35 ± 7.21 | 0.98 ± 0.72 | | w | - | - | 0.15 ± 0.14 | 0.17 ±0.13 | 0.15 ± 0.14 | 0.53 ± 0.29 | | Uniform | 28.21 | 48.34 | 48.89 | 49.08 | 48.38 | 47.03 | | MOO | 25.16 | 44.76 | 39.06 | 46.83 | 37.05 | 56.11 | | TA-MOO | 38.01 | 51.10 | 49.55 | 52.15 | 49.29 | 53.40 | | k∇δfi(δ)k | - | - | 8.41 ± 8.22 | 6.68± 6.95 | 7.36 ± 6.03 | 5.67 ± 6.09 | | w | - | - | 0.23 ± 0.21 | 0.24±0.17 | 0.23 ± 0.19 | 0.30 ± 0.21 | | Uniform | 28.17 | 48.75 | 51.94 | 45.55 | 54.15 | 43.34 | | MOO | 32.50 | 52.21 | 53.25 | 49.05 | 56.80 | 49.76 | | TA-MOO | 41.01 | 57.33 | 58.88 | 55.32 | 60.81 | 54.29 | Table 21: Evaluation of Attacking Ensemble model on the CIFAR10 (C10) and CIFAR100 (C100) datasets. The highest/second highest performance is highlighted in **Bold**/Underline. The table is copied from Table 1 in the main paper for reading comprehension purpose. Table 22: Attacking Ensemble model with a diverse set D={R-ResNet18, V-VGG16, G-GoogLeNet, EEfficientNet} and non-diverse set ND={4 ResNets}. w represents the final w of MOO (mean ± std). k∇δfi(δ)k represents the gradient norm of each model (mean ± std). The table is copied from Table 2 in the main paper for reading comprehension purpose. ## E Additional Discussions E.1 When Does Moo Work? The dominating issue. On one hand, there is the dominating issue that happens in all the three settings. The issue can be recognized by the gap of attack performance among tasks. For example, in Table 22 (i.e., the ENS setting with the diverse ensemble and MOO method), the gap between highest ASR (at EfficientNet) and lowest ASR (at GoogLeNet) is 19%. In the EoT setting, the problem is even worse: The largest gap observed is 53.6% as shown in Table 15 (the highest ASR is 88.19% with Vertical flipping and the lowest ASR is 34.54% with Horizontal flipping in with MOO - D-C10 setting). The dominating issue is also be recognized by the observation that a significant small gradient strength of one task on comparison with other tasks' strength. For example, in Table 22 it can be seen that the gradient strength corresponding to the EfficientNet architecture (mean value is 0.98) is much lower than those of other architectures (mean values are at least 4.29). As the result, the weight corresponding to the EfficientNet architecture is much higher than those of others. The root of the dominating issue can be the natural of the setting (i.e., as shown in Table 15 with the EoT setting, when the domination of the Vertical flipping task can be observed in all methods) or because of the MOO solver which is discussed in Section E.3 Overcoming the dominating issue. On the other hand, if overcoming this issue, MOO can outperform the Uniform strategy. For example, on attacking the non-diverse ensemble model (i.e., 4 ResNets) MOO surpasses the Uniform strategy by 4.3% and 3.5% in the ASR-All and ASR-Avg metrics, respectively. On generating universal perturbations, MOO outperforms the Uniform strategy in most of the settings. As discussed in Section D.4, a simple memory caching trick can helps to overcome the infinite gradient issue and significantly boosts the performance of MOO or TA-MOO. Therefore, we believe that developing a technique to lessen the dominating issue might be a potential extension to further improve the performance. Balancing among goal-unachived tasks. We observed in the EoT setting, the dominating issue is strictly serious when gradients of some tasks are much weaker/stronger than others. It is because of the natural of the transformation operations, therefore, this issue happens regardless status of the tasks. In the set of goal-unachieved tasks' gradients can exist a dominated one, resulting to a much higher weight of the dominated task. Therefore, in order to strike a more balance among goal-unachieved tasks, we apply an additional regularization which minimizes the entropy of goal-unachieved weights H(w) = Pm i=s+1 −wilog wi. If all tasks have been achieved (i.e., s = m) then the additional regularization will be ignored. This additional regularization helps to improve further 2% in the EoT setting. ## E.2 Importance Of The Task-Oriented Regularization. In this discussion, we would like to provide more experimental results in the ENS and EoT settings to further emphasize the contribution of the Task-Oriented regularization. Figure 5 shows the ASR of each individual task in the ENS setting with three losses and the EoT setting with ResNet18 architecture and deterministic transformations. As shown in Figure 5a, in the ENS setting, the MOO adversary produces a much higher ASR on the EfficientNet architecture than other architectures with any losses. In contrast, the TA-MOO adversary has a lower ASR on the EfficientNet architecture but a much higher ASR on other architectures. Similar observation can be seen in Figure 5b such that the ASR corresponding to the V-flipping of MOO is slightly higher than that of TA-MOO, however, the ASR on other transformations of MOO is much lower than those of TA-MOO. ## E.3 More Efficient Moo Solvers Discussions on the weighted-sum method. One of the most common approaches to solve the MOO problem is the scalarizing method, which formulates a single-objective optimization (SOO) such that the optimal solutions to the SOO problem are Pareto optimal solutions to the MOO problem. While this line of approach (e.g., weighted-sum method) is suitable for end-to-end learning such as deep learning, there are ![33_image_0.png](33_image_0.png) Figure 5: Comparison on the ASR of each individual task. R: ResNet18, V: VGG16, G: GoogLeNet, E: EfficientNet. CE: Cross-entropy loss, KL: Kullback-Leibler divergence, CW: Carnili-Wagner loss several acknowledged weaknesses: (i) the choice of utility function has a large impact on the computational complexity of the resulted SOO problem (Bjornson et al., 2014; Björnson & Jorswieck, 2013); (ii) a small change in weights may results in big changes in the combined objective (Caballero et al., 1997), and vice versa, a huge different weights may produce nearly similar result (Coello Coello, 1999); (iii) it does not work well in the case of a non-convex objective space (Deb, 2011). One of the most common replacement for the weighted-sum method is the constraint method which is applicable to either convex or non-convex problem. Applying a more efficient MOO solver might be one of the potential extensions of this work. Discussions on the gradient descent solver. Inspired by Sener & Koltun (2018), in this paper we use multi-gradient descent algorithm (Deb, 2011) as an MOO solver which casts the multi-objective problem to a single-objective problem. While Sener & Koltun (2018) used Frank-Wolfe algorithm to project the weight into the desired simplex, we use parameterization with softmax instead. Although this technique is much faster than Frank-Wolfe algorithm, it has some weaknesses that will be addressed in our future work. More specifically, the GD solver with softmax parameterization cannot handle well the edge case which is the root of the dominating issue. The snippet code E.3 provides a minimal example of quadratic optimization problem as similar in MGDA, where the goal is to find w ∗ = argmin w∈∆w P5 i=1 kwigik 2 2 . The solver is the Gradient Solver with softmax parameterization. With input1 where none of elements dominates others, the solver works quite reasonable with the weights corresponding to 4 first elements are equal and less than the last one (corresponding to bigger strength). With input2 where g5 g1, the solver still works well where w1 = 1 corresponding to the minimal strength g1 = 0.1. However, with input3 , the solver fails to find a good solution (which should be w = [1, 0, 0, 0] given that input). It is a worth noting that the main goal of this paper is to show the application of Multi-objective Optimization for generating adversarial examples and the impact of the Task-Oriented regularization. Therefore, while the issue of the gradient descent solver is well recognized, we did not take effort to try with a better solver. 1 import torch 2 import torch . nn . functional as F 3 import torch . optim as optim 4 5 input_1 = [0.1 , 0.1 , 0.1 , 0.1 , 0.2] # normal case 6 input_2 = [0.01 , 0.1 , 0.1 , 0.1 , 2 e3 ] # normal case 7 input_3 = [0.001 , 0.002 , 0.002 , 0.002 , 2 e3 ] \# dominating issue 8 9 init_alpha = [0.2 , 0.2 , 0.2 , 0.2 , 0.2] 10 g = torch . tensor ( input_3 ) 11 alpha = torch . tensor ( init_alpha , requires_grad = True ) 12 opt = optim . SGD ([ alpha ], lr =1.0) 13 14 for step in range (20) : 15 w = F . softmax ( alpha , dim =0) 16 loss = torch . square ( torch . sum (w * g )) 17 opt . zero_grad () 18 loss . backward () 19 opt . step () 20 print ('step ={} , w ={} '. format ( step , w . detach () . numpy () )) 21 22 \# Result with input_1 23 \# step =19 , w =[0.20344244 0.20344244 0.20344244 0.20344244 0.18623024] 24 \# Result with input_2 25 \# step =19 , w =[9.999982e -01 5.582609e -07 5.582609e -07 5.582609e -07 0.] 26 \# Result with input_3 27 \# step =19 , w =[0.28042343 0.23985887 0.23985887 0.23985887 0.] Listing 1: Python example of the Gradient Solver with softmax parameterization ## E.4 Correlation Between The Objective Loss And Attack Performance. It is broadly accepted that to fool a model, a feasible approach is maximizing the objective loss (i.e., CE, KL, or CW loss), and the higher the loss, the higher the attack success rate. While it is true with the same architecture, we found that it does not hold when comparing different architectures. Figure 6 shows the adversarial loss and the attack success rate for each model in the ENS setting. With the CW loss as the adversarial objective, it can be observed that there is a positive correlation between the loss value and the ASR, i.e., the higher the loss, the higher the ASR. For example, with the same adversarial examples, the adversarial loss on EfficientNet is the highest and so is ASR. However, there is no clear correlation observed when using CE and KL losses. Therefore, the higher weighted loss does not directly imply a higher success rate for attacking an ensemble of different architectures. The MinMax method (Wang et al., 2021) which solely weighs the tasks' losses, therefore, does not always achieve a good performance in all the tasks. ## E.5 Conflicting Between Gradients In The Adversarial Generation Task In multi-task learning setting, conflicting between gradient is the common issue to tackle with. More specifically, the gradients with respect to the (shared) model parameter of task fi and task fj can have a negative correlation (i.e., cosine similarity between ∇θfi(*θ, δ*) and ∇θfj (*θ, δ*) is negative). However, in the adversarial generation task, we consider the gradient with respect to the input (e.g., ∇δf(*θ, δ*)) to update the adversarial examples. As we explore through empirical experiments, the issue that we to deal with is not the gradient confliction problem but the gradient domination problem. These gradients with respect to the inputs can have a positive correlation but also have a huge difference in their strengths. In this specific challenge, the standard MOO which solely relies on the gradient strengths to calculated the weight for each task is strongly sensitive to the gradient domination problem and in some cases cannot lead to a good solution as discussed in Appendix E.1 To further support our hypothesis, we would like to provide a measurement on the cosine similarity between gradients on different ensemble members on the ENS setting in Table 23. Each cell (row-ith, column-jth) of the Table reports the cosine similarity between gradient ∇δfi(δ) of model ith and gradient ∇δfj (δ) of model jth (w.r.t. the same input δ). It can be seen that the gradients between different architectures has the positive correlation instead of negative correlation. On the other hand, as shown in the last row, the gradient norm k∇δfi(δ)k varies widely among architectures. While this observation is in line with the widely accepted phenomenon about the transferability of adversarial examples, it also does support our motivation to derive the TA-MOO method to improve the standard MOO. ![35_image_0.png](35_image_0.png) (a) CW ![35_image_1.png](35_image_1.png) (b) CE ![35_image_2.png](35_image_2.png) (c) KL Figure 6: Loss (left fig) and ASR (right fig) of each task over all attack iterations with the MinMax method. model0/1/2/3 represents R/V/G/E architecture, respectively. Table 23: Correlation between gradients of ensemble members on ENS setting. Each cell (row-ith, column-jth) reports the cosine similarity (mean ± std) between gradient ∇δfi(δ) of model ith and gradient ∇δfj (δ) of model jth (w.r.t. the same input δ). The last row k∇δfi(δ)k reports the gradient norm of each model. R: ResNet18, V: VGG16, E: EfficientNet, G: G-GoogLeNet. | | R | V | G | E | |-----------|-------------|-------------|-------------|-------------| | R | 1.00±0.00 | 0.34±0.15 | 0.44±0.17 | 0.35±0.19 | | V | 0.34±0.15 | 1.00±0.00 | 0.36±0.19 | 0.41±0.22 | | G | 0.44±0.17 | 0.36±0.19 | 1.00±0.00 | 0.41±0.18 | | E | 0.35±0.19 | 0.41±0.22 | 0.41±0.18 | 1.00±0.00 | | k∇δfi(δ)k | 7.15 ± 6.87 | 4.29 ± 4.64 | 7.35 ± 7.21 | 0.98 ± 0.72 | ![36_image_0.png](36_image_0.png) Figure 7: Norm of the gradient ∇δf(δ) over all attack iterations. Measure on the diverse set of the ENS setting, with CE loss. ## E.6 Discussion On The Convergence Of Our Methods In multi-task learning, the gradient of each task is calculated with respect to the (shared) model parameter (e.g., ∇θf(*θ, δ*)). Therefore, to quantify the convergence of a multi-task learning method, we can measure the gradient norm of the comment gradient direction to quantify the convergence of the model. The gradient norm is expected to be a very small value when the model reaches to the Pareto optimality points. However, in adversarial generation problem, the gradient of each task is calculated with respect to the input (e.g., ∇δf(*θ, δ*)). Therefore, unlike in the multi-task learning, there is a different behavior of gradient in the adversarial generation task. To verify our hypothesis, we measure the gradient norm of the gradient over all attack iterations and visualize in Figure 7. It can be seen that the gradient norm of all attacks tends to converge to a large value. It is a worth noting that we use projected gradient descent with l∞ in all attacks. Therefore, in each attack iteration, the amount to update is not the gradient ∇δf(*θ, δ*) but the sign of it scaling with a step size ηδ. However, there is still an interesting observation such that MOO and TA-MOO attack have a much lower gradient norm than other attacks. We would like to propose a simple alternative approach to quantify the convergence of our method in the adversarial generation setting. More specifically, we leverage the advantage of the adversarial generation task ![37_image_0.png](37_image_0.png) Figure 8: Loss (left fig) and SAR (right fig) of each task over all attack iterations. model0/1/2/3 represents R/V/G/E architecture, respectively. The CW loss is used as the adversaries's objective function. such that we can access to the label to audit whether the task is successful or not. Therefore, we simply measure the loss and the success attack rate over all attack iterations as shown in Figure 8. First, we would like to recall the definition of the Pareto optimality. Given m objective function f(δ) , [f1(δ)*, ..., f*m(δ)], the Pareto optimality δ ∗ of the multi-objective optimization δ ∗ = argmax δ f(δ) if there is no feasible solution δ 0such that is strictly better than δ ∗in some tasks (i.e., fi(δ 0) > fi(δ ∗) for some i) while equally good as δ ∗in all other tasks (i.e., fj (δ 0) = fj (δ ∗), j 6= i). Bear this definition in mind, it can be seen from the loss progress of MOO attack in Figure 8a that (i) from iteration 1st to around iteration 10th all the losses are increased quickly showing that the method optimize efficiently; (ii) after iteration 10th, the loss w.r.t. the EfficientNet model (i.e., model3 in the legend) continually increases while other losses continually decrease. Therefore, any solution after iteration 10th do not dominate each other indicating that the method reaches the Pareto front. On the other hand, it can be seen from Figure 8b that the loss progress of our TA-MOO is more stable. TA-MOO also can optimize to the optimal point efficiently as MOO does, however, after reaching the peak, the losses in all tasks are more stable than those in MOO. This observation indicates that the solutions after the peak point are also in the Pareto front but are more concentrated than those in MOO. It can explain the stability of the success attack rate in TA-MOO in Figure 8b. Comparing across both MOO and TA-MOO at their last iteration shows that while the loss w.r.t. the EfficientNet model (model3) in MOO is a bit higher than that in TA-MOO, these other losses w.r.t. V/G/E models in MOO is lower than those in TA-MOO. This observation indicates that in term of losses, the solutions of MOO and TA-MOO do not dominate each other. However, the solution of TA-MOO is more stable and leads better final attacking performance. ## E.7 Additional Experiments With Different Initializations For Moo In our method, the default initialization for the weight w is 1/m equally for all tasks. Therefore, one raising valid concern is that *Might better initialization can help to boost the performance?*. To answer this question, we first find the optimal initital weight by using the weight at the last iteration when running MOO and TA-MOO attacks with the default initialization. For example, as shown in Figure 9a for the ENS setting with diverse architectures, the average weight that MOO assigns for model R/V/G/E converging to 0.15/0.17/0.15/0.53 (*set A*), respectively. The average weights' distribution learned by TA-MOO is 0.19/0.25/0.19/0.37 (*set B*), respectively. It is a worth noting that, we consider each set of weights for each data sample separately, and the above weights are just the average over entire testing set (e.g., 10K sample), while the full statistic (mean ± std) of weights can be seen in Table 2. In order to make the experiment to be more comprehensive with diverse initializations, we use two additional sets including set C=[0.22, 0.23, 0.22, 0.33] and set D=[0.24, 0.25, 0.24, 0.27]. Given these above four weights sets A/B/C/D, we then init the standard MOO with one of these above sets and adjust the learning rate ηw with three options 5e-3, 5e-5, 1e-8 and report results in Table 24. The ![38_image_0.png](38_image_0.png) Figure 9: Weight (left fig) and SAR (right fig) of each task over all attack iterations. model0/1/2/3 represents R/V/G/E architecture, respectively. complete attacking progress can be seen in Figure 9. It can be seen from Table 24 that better initialization does help to improve the performance of the standard MOO. The best setting is the initialization with set D and η = 5e-3 achieves 29.53% in A-All metric, a 4.37% improvement over the default MOO initialization. It can be seen from the evolution of the weights in Figure 9c that even initializing with the converged weights (i.e., set A) from the pre-running attack, the weight of each task does not stand still but converges to a different value. It is another different behavior in adversarial generation task compared to the multi-task learning problem. On the other hand, despite of the extensive tuning, the performance of MOO is still far below the TA-MOO approach, with the gap of 8.48% in A-All metric. Table 24: Attacking Ensemble model with a diverse set D={R-ResNet18, V-VGG16, G-GoogLeNet, EEfficientNet}. MOO*A/B/C/D* is MOO with initial weights from set A/B/C/D, respectively. ηw denotes the learning rate to update for the weight w. | | ηw = 5e-3 | ηw = 5e-5 | ηw = 1e-8 | |--------|-------------|-------------|-------------| | MOOA | 28.64 | 29.18 | 29.12 | | MOOB | 29.13 | 28.75 | 28.65 | | MOOC | 29.38 | 28.46 | 28.33 | | MOOD | 29.53 | 28.37 | 28.18 | | MOO | 25.16 | - | - | | TA-MOO | 38.01 | - | - |
Review 1: Summary: Summary. This paper is dedicated to developing improved approaches to generate adversarial examples. The authors point out that directly applying multi-objective optimization (MOO) is sub-optimal for adversarial example generation to achieve multiple objectives simultaneously. Because this naive fashion tends to maximize all goals equally. Instead, they design the task-oriented MOO to only maintain the goal-achieved tasks, and let the optimizer spend more effort on improving the goal-unachieved tasks. Comprehensive experiments with various setups are conducted. Strengths and Weaknesses: Pros. 1. The paper is well-written and easy to follow. 2. Sufficient technical and implementation details are presented, which is very helpful. 3. Various setups of MMO in adversarial example generation are examined. 4. The literature is sufficient, which is good. Cons. 1. No large-scale experiments. For example, investigations on ImageNet/Tiny-ImageNet are needed. 2. The statement of C2 is unclear. For example, "What is a comprehensive study", "What are the different aspects", "What is the particular focus", and "What are the dominant issues". I feel the C1 and C2 overlapped. 3. The biggest concern I have is the comparisons between Wang et al., 2021 and this paper. (A) The authors should unify the setups and enable the direct comparisons to the number listed in Wang et al., 2021; (B) The authors said "the MinMax methods (Wang et al. 2021) which examine only the worst-case performance across all tasks". It is not true. The min-max and regularization in Wang et al. 2021 will lead to a non-one-hot w vector to reweight all tasks. Therefore, Wang et al., 2021 also considers all tasks and assign more weight to tasks with the worse performance; (C) Actually, in my opinion, this paper is conceptually the same as Wang et al. 2021, which pays more attention to tasks that are goal-unachieved or have worse performance. Although their specific forms of regularizations are different, there still exists a need for more distinguishable designs. Requested Changes: Refer to the previous section. Broader Impact Concerns: I do not see any ethical concerns. ================================================== Review 2: Summary: The paper proposed a new task-oriented multi-objective optimization, i.e., TA-MOO, framework, in which aims at reaching a balance between goal-achieved and goal-unachieved tasks. By introducing a geometry-based regularization term, TA-MOO demonstrated advantages in representative tasks in generating adversarial examples under different scenarios. Strengths and Weaknesses: Strengths: - The paper is well-written and easy to follow. - A comprehensive study is presented in the paper to show the superior performance of the proposed TA-MOO framework. Weaknesses: - Selected baselines are not sufficient to support the advantages of TA-MOO. - The effects of the regularizer are not explicitly discussed in the paper. However, this is an essential part considering contributions claimed by the authors. At least the authors should provide some analyses of how $w$ is modified during the training, and whether the task is goal-achieved or not. Requested Changes: - It would be good if the authors can provide a detailed analysis of the convergence after introducing the regularization term. The existing analysis in the paper mainly focuses on how to derive the regularizer instead of its effects on the optimization problem. - More stronger baseline methods should be compared. In current form, only three simple and straightforward algorithms are not sufficient to show the effectiveness of TA-MOO. For example, multi-objective approaches in [1, 2, 3]. - Multi-objective optimization should not be restricted to the domain of adversarial learning, and there are many other tasks involved in multiple training objectives. I am wondering how TA-MOO would perform on those different tasks such as image classification on different datasets. [1] Albuquerque et al. "Multi-objective training of generative adversarial networks with multiple discriminators", 2019. [2] Miranda et al. ”Single-solution hypervolume maximization and its use for improving generalization of neural networks”, 2016. [3] Navon et al. “Multi-Task Learning as a Bargaining Game”, 2022. Broader Impact Concerns: This paper investigates some small-scale datasets for academic use only, and does not arouse any ethical concerns from my perspective. ================================================== Review 3: Summary: This paper reviews the limitations of current methods generating adversarial examples for multiple tasks, which uniformly treat different tasks. By contrast, this work proposes task-oriented multi-objective optimization (TA-MOO), which uses simplex regularizations to make the attack algorithms focus more on the unachieved tasks and therefore improve the attack success rate. Extensive experiments demonstrate the effectiveness and the advantages of TA-MOO over the traditional MOO methods. Strengths and Weaknesses: Strength: The method is well-motivated, generally applicable, and easy to understand. TA-MOO outperforms traditional MOO in different tasks. Weakness or suggestions: 1. [Baselines] For the experiments, I think more baselines should be included for a comprehensive comparison. 1) The universal perturbation generation method [Moosavi-Dezfooli 2017] should be included. Although this method generates unbounded perturbations, you can always force the perturbations within an adversarial budget by projection. 2) For attacking ensemble models, I suggest the authors include the attack methods in https://openreview.net/forum?id=rkZvSe-RZ for a more comprehensive evaluation. 2. Although the authors include the results of standard auto-attack in the appendix, it is still very meaningful to incorporate the adaptive step size in APGD in AutoAttack into TA-MOO to see if it can further improve the performance. If I understood correctly, the authors are using constant step size as said in the beginning of Section 4. 3. I do not know why we need to fool all the sub-models in an ensemble model. If the output of the ensemble model is based on the vote of its sub-models, then successfully attacking the majority of the sub-models is enough to fool an ensemble model. Requested Changes: In general, this paper is interesting and well-written. I encourage the authors to address the concerns pointed out in the "weakness" part of the last section. In addition, there are some minor typos in this manuscript. For example, in the "generating universal perturbations" section in page 5, the optimization problem should be to maximize $[f_1(\delta), ..., f_B(\delta)]$ instead of $[f_1(\delta), ..., f_m(\delta)]$, since there are $B$ data points in total. I suggest the authors do a comprehensive check on notation consistency. Broader Impact Concerns: From my point of view, there is no ethical implications or concerns raised by the methods in this work. ================================================== Metareview: Recommendation: Accept as is Comment: This paper proposed a new task-oriented multi-objective optimization (TA-MOO) framework to reach a balance between goal-achieved and goal-unachieved tasks. By introducing a geometry-based regularization term, TA-MOO demonstrated advantages in tasks of generating adversarial examples under different scenarios. After author response, it received 2 Leaning Accept, and 1 Leaning Reject recommendations. On the one hand, reviewers agree that the paper is well written and a comprehensive study with many experiments is presented. The reviewers have raised many useful questions, and the authors have provided good answers to them during rebuttal. The additional results added during rebuttal are especially helpful. On the other hand, one reviewer mentioned that the questions regarding how w is changed during training and and whether the task is goal-achieved or not are not well answered. Also, the results on ImageNet seem limited. On balance, due to the careful writing and the comprehensive results that have already been provided in the paper (though mostly focused on small-scale CIFAR10 and CIFAR100), the editor thinks that the merits of this paper slightly outweigh the flaws, therefore, would like to recommend acceptance of the paper. The authors are encouraged to add more ImageNet results if bandwidth allows. ==================================================
# Infonce Is Variational Inference In A Recognition Parameterised Model Laurence Aitchison laurence.aitchison@gmail.com University of Bristol Stoil Ganev *stoil.ganev@bristol.ac.uk* University of Bristol Reviewed on OpenReview: *https: // openreview. net/ forum? id= chbRsWwjax* ## Abstract Here, we develop a new class of Bayesian latent variable model, the recognition parameterised model (RPM). RPMs have an implicit likelihood, which is defined in terms of the recognition model. Therefore, it is not possible to do traditional "generation" with RPMs. Instead, RPMs are designed to learn good latent representations of data (in modern parlance, they solve a self-supervised learning task). Indeed, the RPM implicit likelihood is specifically designed so that it drops out of the VI objective, the ELBO. That allows us to learn an RPM without a "reconstruction" step, which is believed to be at the root of poor latent representations learned by VAEs. Indeed, in a very specific setting where we learn the optimal prior, the RPM ELBO becomes equal to the mutual information (MI; up to a constant), establishing a connection to pre-existing self-supervised learning methods such as InfoNCE. ## 1 Introduction Self-supervised learning (SSL) involves learning structured, high-level representations of data useful for downstream tasks such as few-shot classification. One common self-supervised approach is to define a "pretext" classification task (Dosovitskiy et al., 2015; Noroozi & Favaro, 2016; Doersch et al., 2015; Gidaris et al., 2018). For instance, we might take a number of images, rotate them, and then ask the model to determine the rotation applied (Gidaris et al., 2018). The rotation can be identified by looking at the objects in the image (e.g. grass is typically on the bottom of the image, while birds are nearer the top), and thus a representation useful for determining the orientation may also extract useful information for other high-level tasks. We are interested in an alternative class of SSL objectives known as InfoNCE (NCE standing for noise contrastive estimation; (Oord et al., 2018; Chen et al., 2020)). These methods take two inputs (e.g. two different frames of video), encode them to form two latent representations, and use a classification task to maximize a bound on the mutual information between latent representations. As the shared information should concern high-level properties such as objects, but not low-level details of each patch, this should again extract a useful representation, which is also invariant to data augmentation. Ultimately, the goal of SSL is to extracting useful, structured, high-level representations of data. Interestingly, such representations have classically been obtained using a probabilistic latent variable models (PLVMs; Murphy, 2022). For instance, consider Latent Dirichlet Analysis (LDA; Blei et al., 2003). LDA clusters words and documents into topics. Given a new document, LDA allows us to assign a combination of topics to that document. Importantly, LDA, as a *probabilistic* latent variable model, allows us to sample "fake data". However, in the case of LDA, it is not at all clear why you would ever want to sample fake data. Specifically, LDA treats documents as "bags of words"; while you could sample bags of words, it is unclear why you ever would ever want to. Alternative classic PLVMs include probabilistic PCA/ICA (Pearlmutter & Parra, 1996; MacKay, 1996; Tipping & Bishop, 1999). Again, in probabilistic PCA/ICA, the point was always to extract interpretable, high-level factors of variation. While you could sample "fake data", it is again not at all clear why you would ever want to. Perhaps the most obvious modern incarnation of traditional PLVMs is the variational autoencoder (VAEs Kingma & Welling, 2013; Rezende et al., 2014; Kingma et al., 2019). The VAE extracts latent variables, and one would hope that these latents would form a good high-level representation, so that VAEs could be used in SSL. However, VAEs typically perform poorly when evaluated on SSL tasks such as few-shot classification or disentanglement (Karaletsos et al., 2015; Higgins et al., 2016; Zhao et al., 2019). The issues are commonly understood to arise because a VAE needs to reconstruct the data, so the latent representation is forced to encode low-level details (Chen et al., 2020; Balestriero & LeCun, 2024). This raises an important question: is it possible to develop new PLVMs that give useful high-level representations useful in SSL by eliminating the need to "reconstruct"? Here, we answer in the affirmative by developing a new family of recognition parameterised model (RPM) (note that the "recognition parameterised model" name first appeared in follow-up work Walker et al., 2023b but the actual idea was introduced here; see Related work for further details). We perform variational inference in this RPM, so the resulting objective is the ELBO. Next, we connect the RPM to modern SSL methods, in particular InfoNCE (Oord et al., 2018; Chen et al., 2020). We show that the RPM ELBO is equal (up to a constant) to the MI under an optimized prior (Sec. 4.4), and equal (up to a constant) to the infinite-sample limit of the InfoNCE objective under a different choice of prior (Sec. 4.5). We give an example in a toy system with a latent space without unit-norm constraints in which the usual choice of InfoNCE objective fails completely, but a modified system that exploits our prior knowledge about dynamics in the latent space succeeds (Sec. 5). This aligns with recent work (Locatello et al., 2019) which argues that good, problem-specific inductive biases are critical for effective disentanglement. However, it is difficult to introduce such inductive biases in traditional InfoNCE like settings. In contrast, it is straightforward to introduce these inductive biases into the priors of an RPM. At a high-level, RPMs function by having observations that consist of multiple components, e.g. following a classic SSL setting, we could have two components, where each component is a different augmentation of the same underlying image. RPMs then use a latent space which models only dependencies between components, and avoids modelling any information about the marginal distribution over each component. Information about potentially complex marginals is instead captured in a complex likelihood. This likelihood is difficult to specify, and lies at the root of problems around "reconstruction" in classical VAEs. Critically, in the RPM framework the likelihood cancels out of the ELBO (see Sec. 4). In this way, an RPM is able to specify a full generative model, but is able to avoid the problematic reconstruction step in typical VAEs. Interestingly, we show that a particular PLVM, the RPM, can be effective for SSL (i.e. learning a good high-level representation useful for downstream tasks such as few-shot classification). We would argue that this effectiveness arises precisely because we drop the requirement for easily decoding/sampling "fake-data". This seems to mirror the manner in which state-of-the-art generative models such as diffusions arose from the VAE framework by dropping the requirement for potentially interpretable latent space (Sohl-Dickstein et al., 2015; Ho et al., 2020; Kingma et al., 2021; Song et al., 2021). Our results have important implications for the interpretation of methods such as InfoNCE. Specifically, InfoNCE was thought to learn good representations by (approximately) maximizing mutual information (Oord et al., 2018). However, recent work has argued that maximizing the true mutual information could lead to arbitrarily entangled representations, as the mutual information is invariant under arbitrary invertible transformations (Tschannen et al., 2019). Instead, they argue that InfoNCE learns good representations because it uses a highly simplified lower bound on the information estimator (Oord et al., 2018) which forms only a loose bound on the true MI. This is highly problematic: Tschannen et al. (2019) argue that better MI estimators give worse representations. Thus, InfoNCE appears to be successful not because it is maximizing MI, but because of ad-hoc choice of simplified mutual information estimator. So what is InfoNCE doing? And how can the success of its simplified mutual information estimator be understood? We show that the InfoNCE objective is equal (up to a constant) to the RPM ELBO. Further, with a deterministic encoder, the RPM ELBO becomes equal to the log marginal likelihood. This would argue that the InfoNCE objective is better motivated in terms of the RPM ELBO or log marginal likelihood (as they are equal up to a constant in the infinite sample setting), as opposed to the mutual information (as the InfoNCE objective only forms a bound in the same setting). ## 2 Related Work The original version of this paper introduced the key recognition parameterised model idea in 2021, albeit under a different name. However, due to the vagaries of the conference publishing system, follow-up work (Walker et al., 2023b) was published first (at AISTATs). Walker et al. (2023b) introduced the name "Recognition parameterised model", and we agree that this is the right name, so to avoid confusing the literature, we have adopted their terminology. Of course, downstream of the key RPM idea (embodied in the definition of the likelihood in Eq. 10) the papers go in quite different directions, as Walker et al. (2023b) is primarily interested in probabilistic modelling, while we are primarily interested in linking to self-supervised learning. This is particularly evident in three aspects. First, we originally set the approximate posterior to be equal to the recognition distributions, while Walker et al. (2023b) allowed more flexibility in the approximate posteriors, which can slightly improve the quality of their posterior inferences. Second, we made slightly different choices in the "base" distribution (see Appendix A) for details. Third, there are considerable notional differences, in the sense that we defined RPMs in terms of a "recognition joint", while Walker et al. (2023b) defined RPMs in terms of a factor graph. We believe that the recognition joint approach is more intuitive when generalising RPMs to more complex settings, and when using contrastive losses. A number of papers have taken forward the ideas originally introduced here forward, not just Walker et al. (2023b). First, Walker et al. (2023a) used the RPM in an out-of-distribution setting. Specifically, they are able to retain accurate predictions in an out-of-distribution setting whether *both* the input distribution and the mapping from inputs to labels can change. Second, Möllers et al. (2023) used this approach to obtain uncertainty estimates in a graph SSL setting. Third, Wang et al. (2022) used these ideas to improve sequential recommendation, by improving modelling of user behaviour in the face of sparsity of user-item interactions, uncertainty and a long-tail of items. Fourth, Hasanzadeh et al. (2021) used our approach to introduce principled Bayesian uncertainty estimates in contrastive learning on graph data. In particular, they were able to show improvements in uncertainty estimation, interpretability and predictive performance. Finally, Jelley et al. (2023) used a recognition parameterised model to introduce contrastive methods for metalearning in the few-shot setting (specifically, Eq. 2 in arXiv v0). That said, their work differs in that while they have a recognition parameterised generative model, they do not optimize using VI. Instead, they optimize using the predictive log-likelihood. Moreover, there are interesting connections to work on connecting self-supervised learning to learning in a linear state-space model (Eysenbach et al., 2024). Perhaps the closest *prior* (as opposed to follow-up) work is Zimmermann et al. (2021), which also identifies an interpretation of InfoNCE as inference in a principled generative model. Our work differs from (Zimmermann et al., 2021) in that we introduce an RPM and explicitly identify a novel connection between the InfoNCE objective and the RPM ELBO. In addition, their approach requires four restrictive assumptions. First, they assume deterministic encoder. In contrast, all our theory applies to stochastic and deterministic encoders. While we do explicitly consider deterministic encoders in Appendix B, this is only to show that with deterministic encoders, the ELBO bound is tight - all the derivations outside of this very small section (which includes all our key derivations) use fully general encoders. Second, they assume that the encoder is invertible, which is not necessary in our framework. This is particularly problematic as practical encoders commonly used in contrastive SSL are not invertible. Third, they assume that the latent space is unit hypersphere, while in our framework there is no constraint on the latent space. Fourth, they assume the ground truth marginal of the latents of the generative process is uniform, whereas our framework accepts any choice of ground-truth marginal. As such, our framework has considerably more flexibility to include rich priors on complex, structured latent spaces. Other work looked at the specific case of isolating content from style (von Kügelgen et al., 2021). This work used a similar derivation to that in Zimmermann et al. (2021) with slightly different assumptions. While they still required deterministic, invertible encoders, they relax e.g. uniformity in the latent space. But because they are working in the specific case of style and content variables, they make a number of additional assumptions on those variables. Importantly, they again do not connect the InfoNCE objective with the ELBO or log marginal likelihood. Very different methods use noise-contrastive methods to update a VAE prior (Aneja et al., 2020). Importantly, they still use an explicit decoder. There is a large class of work that seeks to use VAEs to extract useful, disentangled representations (e.g. Burgess et al., 2018; Chen et al., 2018; Kim & Mnih, 2018; Mathieu et al., 2019; Joy et al., 2020). Again, this work differs from our work in that it uses explicit decoders and thus does not identify an explicit link to self-supervised learning. Likewise, there is work on using GANs to learn interpretable latent spaces (e.g. Chen et al., 2016a). Importantly, GANs learn a decoder (mapping from the random latent space to the data domain). Moreover, GANs use a classifier to estimate a density ratio. However, GANs estimate this density ratio for the data, x and x ′, whereas InfoNCE, like the methods described here, uses a classifier to estimate a density ratio on the latent space, z and z ′. There is work on reinterpreting classifiers as energy-based probabilistic generative models (e.g. Grathwohl et al., 2019), which is related if we view SSL methods as being analogous to a classifier. Our work is very different, if for no other reason than because it is not possible to sample data from a RPM (even using a method like MCMC), because the decoder is written in terms of the unknown true data distribution. ## 3 Background 3.1 Variational Inference (Vi) In VI, we have observed data, x, and latents, z, and we specify a prior, P (z), a likelihood, P (x|z), and an approximate posterior, Q (z|x). When the approximate posterior, Q (z|x) is parameterised by a neural network, the resulting model is known as a variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014). We then jointly optimize parameters of the prior, likelihood and approximate posterior using the ELBO as the objective, $$\log\mathrm{P}\left(x\right)\geq{\mathcal{L}}(x)=\operatorname{E}_{\mathrm{Q}\left(z|x\right)}\left[\log{\frac{\mathrm{P}\left(x|z\right)\mathrm{P}\left(z\right)}{\mathrm{Q}\left(z|x\right)}}\right],$$ $\left(1\right)$ . , (1) which bounds the log marginal likelihood, log P (x) (as can be shown using Jensen's inequality). We can rewrite the ELBO as an expected log-likelihood minus a KL-divergence, $${\mathcal{L}}(x)=\operatorname{E}_{\mathrm{Q}(z|x)}\left[\log\operatorname{P}\left(x|z\right)\right]-\operatorname{D}_{\mathrm{KL}}\left(\operatorname{Q}\left(z|x\right)\|\operatorname{P}\left(z\right)\right).$$ L(x) = EQ(z|x)[log P (x|z)] − DKL (Q (z|x)∥P (z)). (2) That KL-divergence is very closely related (with a particular choice of prior, P (z)) to the MI (Alemi et al., 2018; Chen et al., 2016b), so the KL-divergence can be understood intuitively as reducing the MI between data, x and latent, z. However this connection between the ELBO and MI is not relevant for our work. In particular, the ELBO can be interpreted as minimizing the MI between data and latents, whereas InfoNCE maximizes the MI between different latent variables in a structured model. ## 3.2 Infonce In InfoNCE (Oord et al., 2018), there are two data items, x and x ′. Oord et al. (2018) initially describes a time-series setting where for instance x is the previous datapoint and x ′is the current datapoint. But one can also consider other contexts where x and x ′ are different augmentations or patches of the same underlying image (Chen et al., 2020). In all cases, there are strong dependencies between x and x ′, and the goal is to capture these dependencies with latent representations, z and z ′, formed by passing x and x ′through neural network encoders. All our derivations consider fully general encoders, Rϕ (z|x) and Rϕ (z ′|x ′) which could be stochastic or deterministic (see Sec. B, where we discuss this distinction in-depth). Note that we write $$\left(2\right)$$ these encoders with explicit ϕ subscripts to emphasise that these are user-specified conditional distributions, often a Gaussians over z or z ′ with means and variances specified by applying a neural network to x or x ′. Importantly, we do not just use R to denote the encoder distributions, Rϕ (z|x) and Rϕ (z ′|x ′). We generalise R to denote the joint distribution arising from taking the true / empirical training data distribution, Rbase (*x, x*′) (see Appendix A) and encoding using Rϕ (z|x) and Rϕ (z ′|x ′). Thus, the joint distribution of all random variables under R can be written, $$\mathrm{R}\left(x,x^{\prime},z^{\prime},z\right)=\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)\mathrm{R}_{\mathrm{base}}\left(x,x^{\prime}\right).$$ $$\mathbf{\Sigma}$$ $$\mathbf{\Sigma}$$ ′) Rbase (*x, x*′). (3) We can obtain marginals and conditionals of this joint, such as R (z ′, z), R (z ′), R (z), R (z ′|z) in the usual manner - by marginalising and/or applying Bayes theorem. For instance, $$\mathrm{R}\left(z,z^{\prime}\right)=\int d x\;d x^{\prime}\;\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)\mathrm{R}_{\mathrm{base}}\left(x,x^{\prime}\right).\tag{1}$$ Note that R (*z, z*′) also implicitly depends on ϕ. We use the subscript ϕ in Rϕ (z|x) to remind the reader that Rϕ (z|x) is directly parameterised by e.g. a neural network with weights ϕ, and we omit the subscript in e.g. R (*z, z*′) to indicate that its dependence on ϕ is only implicit (through Eq. 4). The InfoNCE objective was originally motivated as maximizing the mutual information between latent representations, $$\mathrm{MI}=\mathrm{E}_{\mathrm{R}\left(z,z^{\prime}\right)}\left[\log{\frac{\mathrm{R}\left(z^{\prime}|z\right)}{\mathrm{R}\left(z^{\prime}\right)}}\right]=\mathrm{E}_{\mathrm{R}\left(z,z^{\prime}\right)}\left[\log{\frac{\mathrm{R}\left(z^{\prime},z\right)}{\mathrm{R}\left(z^{\prime}\right)\mathrm{R}\left(z\right)}}\right].\tag{1}$$ $$\mathbf{\Sigma}$$ Of course, observed x and x ′exhibit dependencies as they are e.g. the previous and current datapoints in a timeseries, or different augmentations of the same underlying image. Thus, z and z ′ must also exhibit dependencies under R (*z, z*′). As the mutual information is difficult to compute directly, InfoNCE uses a bound, IN (*θ, ϕ*), based on a classifier that uses fθ with parameters θ to distinguish positive samples (i.e. the z ′ paired with the corresponding z) from negative samples (i.e. z ′ j drawn from the marginal distribution and unrelated to z or to the underlying data; see Poole et al., 2019 for further details), $$\text{MI}\geq\mathcal{I}_{N}=\text{E}\left[\log\frac{f_{\theta}(z,z^{\prime})}{f_{\theta}(z,z^{\prime})+\sum_{j=1}^{N}f_{\theta}(z,z^{\prime}_{j})}\right]+\log N.\tag{6}$$ Here, the expectation is taken over R (*z, z*′)Qj Rz ′ j , and we use this objective to optimize θ (the parameters of fθ) and ϕ (the parameters of the encoder). There are two source of slack in this bound, arising from finite N and a restrictive choice of f. To start, we can reduce but not in general eliminate slack by taking the limit as N goes to infinity, (Oord et al., 2018), $$\operatorname{MI}>{\mathcal{I}}_{\infty}(\theta,\phi)\geq{\mathcal{I}}_{N}(\theta,\phi).$$ $$\left(7\right)$$ MI > I∞(θ, ϕ) ≥ IN (*θ, ϕ*). (7) The bound only becomes tight if we additionally optimize an arbitrarily flexible f (Oord et al., 2018). If as usual, we have a restrictive parametric family for f, then the bound does not in general become tight (Oord et al., 2018). In reality InfoNCE does indeed use a highly restrictive class of function for f, which can be expected to give a loose bound on the MI (Oord et al., 2018), $$f_{\theta}(z,z^{\prime})=\exp\left(z^{T}\theta z^{\prime}\right),\tag{1}$$ where θ for this particular function is a matrix. Note additionally that for this particular functional form for fθ to work well, we usually need to restrict z and z ′to have unit-norm. This raises a critical question: if our goal is really to maximize the bound on the MI, why not use a more flexible fθ? The answer that our goal is not ultimately to maximize the MI. Our goal is ultimately to learn a good representation, and MI is merely a means to that end. Further, Tschannen et al. (2019) argue that optimizing the true MI is likely to lead give poor repesentations, as the MI is invariant to arbitrary invertible transformations that can entangle the representation. They go on to argue that it is $\downarrow$ . precisely the restrictive family of functions, corresponding to a loose bound on the MI, that encourages good representations. Tschannen et al. (2019) thus raise an important question: does it really make sense to motivate an objective that works (the InfoNCE objective) as a loose bound on an objective that does not work (the mutual information)? We offer an alternative motivation by showing that the InfoNCE objective is equal (up to a constant) to the log marginal likelihood under a particular choice of prior and with a deterministic encoder. Note that this loss has been used in downstream settings, such as SimCLR (Chen et al., 2020). ## 4 Recognition Parameterised Models An RPM is a specific type of probabilistic latent variable model (PLVM) where the likelihood is written in terms of a recognition model. All RPMs have a structured graphical model with multiple observations. We start with perhaps the simplest RPM, where observations are a pair, (*x, x*′), e.g. two augmentations of the same image, or two different but adjacent frames in a video. In this simple example, we consider a model with latent variable, z associated with x and z ′ associated with x ′. The generative probability in our probabilistic generative model can be factorised as, $$\mathrm{P}\left(x,x^{\prime},z^{\prime},z\right)\equiv\mathrm{P}\left(x|z\right)\mathrm{P}\left(x^{\prime}|z^{\prime}\right)\mathrm{P}_{\theta}\left(z,z^{\prime}\right).$$ $$({\mathfrak{g}})$$ Here, we write the prior as Pθ (*z, z*′) to emphasise that this could be a user-specified distribution with learned parameters, θ. Importantly, P implies a valid joint distribution (Eq. 9) over all random variables (x, x′, z′, z), so any time we write a distribution with P, we mean a marginal / conditional of that model joint (Eq. 9). Importantly, we have yet to define the likelihood, and it is the likelihood that makes it an RPM. In particular, in an RPM, the likelihood is defined in terms of Bayesian inference in the recognition joint, R, formed by taking the true/empirical data distribution, and encoding using Rϕ (z|x) and Rϕ (z ′|x ′) (Eq. 3). We can use Bayes in the recognition joint to obtain R (x|z) and R (x ′|z ′). We then decide to use R (x|z) and R (x ′|z ′) as the likelihoods in our generative model, $$\mathrm{P}\left(x|z\right)\equiv\mathrm{R}\left(x|z\right)=\frac{\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\mathrm{base}}\left(x\right)}{\mathrm{R}\left(z\right)},$$ $$\mathrm{P}\left(x^{\prime}|z^{\prime}\right)\equiv\mathrm{R}\left(x^{\prime}|z^{\prime}\right)=\frac{\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)\mathrm{R}_{\mathrm{base}}\left(x^{\prime}\right)}{\mathrm{R}\left(z^{\prime}\right)}.$$ $$(10\mathrm{a})$$ $$(10\mathrm{b})$$ Here, we have written P (x|z) ≡ R (x|z) to denote that we are choosing the generative likelihood, P (x|z), to be equal to R (x|z) from the recognition joint. We have written Bayes theorem with an = because that is not a choice. All distributions written with a R represent a coherent joint distribution (Eq. 3) and hence R (x|z) must be given by Bayes theorem applied to that joint distribution. The normalizing constants, R (z) and R (z ′), are, $$\begin{array}{l}{{\mathrm{R}\left(z\right)=\int d x\,\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\mathrm{base}}\left(x\right),}}\\ {{\mathrm{R}\left(z^{\prime}\right)=\int d x^{\prime}\,\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)\mathrm{R}_{\mathrm{base}}\left(x^{\prime}\right).}}\end{array}$$ $$(11\mathbf{a})$$ $$(11\mathbf{b})$$ See Appendix A for further details. ## 4.1 Vi In Rpms Now, we substitute these generative probabilities into the VI objective (i.e. the ELBO), using an approximate posterior, Qψ (z, z′|*x, x*′), $${\cal L}(x,x^{\prime})={\rm const}+{\rm E}_{\rm Q}\left[\log\frac{{\rm P}_{\theta}\left(z,z^{\prime}\right)}{{\rm R}\left(z\right){\rm R}\left(z^{\prime}\right)}+\log\frac{{\rm R}_{\phi}\left(z|x\right){\rm R}_{\phi}\left(z^{\prime}|x^{\prime}\right)}{{\rm Q}_{\psi}\left(z,z^{\prime}|x,x^{\prime}\right)}\right].\tag{12}$$ Here, $$\mathrm{const}=\log\left(\mathrm{R_{base}}\left(x\right)\mathrm{R_{base}}\left(x^{\prime}\right)\right),$$ $$(13)$$ $$(14)$$ ′)), (13) is constant because Rbase (x) and Rbase (x ′) are either the true data marginals or the empirical marginals (Appendix A). In either case, these distributions do not depend on the prior parameters, θ, the recognition parameters, ϕ, or the approximate posterior parameters, ψ. ## 4.2 Infonce As A Rpm Now, we choose the approximate posterior to be equal to the corresponding conditional of the recognition joint, $$\mathrm{Q}\left(z,z^{\prime}|x,x^{\prime}\right)\equiv\mathrm{R}\left(z,z^{\prime}|x,x^{\prime}\right)=\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)$$ Note that Walker et al. (2023b) do not make this choice. Instead, they allow the approximate posterior to be optimized separately; on the probabilistic modelling tasks they consider, their choice is likely to lead to slightly improved performance. Substituting this choice for the approximate posterior, the ELBO simplifies considerably, $${\mathcal{L}}(x,x^{\prime})=\mathrm{const}+\mathrm{E}_{\mathrm{R}_{\phi}(z|x)\,\mathrm{R}_{\phi}(z^{\prime}|x^{\prime})}\left[\log{\frac{\mathrm{P}_{\theta}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)}}\right].$$ $$(15)$$ $$\left(16\right)$$ Additionally, we consider the expected loss, taking the expectation over the true/empirical data distribution, Rbase (*x, x*′), L = ERbase(x,x′)[L(x, x′)] = const + ER(z,z′) log Pθ (z, z′) R (z) R (z ′) . (16) where R (*z, z*′) is the relevant marginal of the recognition joint (Eq. 4). ## 4.3 The Rpm Elbo Can Be Written As The Mutual Information Minus A Kl Divergence To get an intuitive understanding of the ELBO we take Eq. (16) and add and subtract ER(z,z′)[log R (*z, z*′)], $${\mathcal{L}}=\mathrm{const}+\mathrm{E}_{\mathrm{R}\left(z,z^{\prime}\right)}\left[\log{\frac{\mathrm{R}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)}}\right]+\mathrm{E}_{\mathrm{R}\left(z,z^{\prime}\right)}\left[\log{\frac{\mathrm{P}_{\theta}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z,z^{\prime}\right)}}\right].$$ The first term is the mutual information between z and z ′ under the recognition joint, R (Eq. 5), and the second term is a KL-divergence, $$(17)$$ $${\mathcal{L}}=\mathrm{const}+\mathrm{MI}-\mathrm{D}_{\mathrm{KL}}\left(\mathrm{R}\left(z,z^{\prime}\right)\|\mathrm{P}_{\theta}\left(z,z^{\prime}\right)\right).$$ L = const + MI − DKL (R (z, z′)∥Pθ (*z, z*′)). (18) This objective therefore encourages large mutual information between z and z ′ under R (Eq. 4), while encouraging R (*z, z*′) to lie close to the prior, Pθ (*z, z*′). ## 4.4 Under The Optimal Prior, The Rpm Elbo Is Equal To The Mutual-Information (Up To A Constant) Looking at Eq. (18), the only term that depends on the prior, Pθ (z ′, z), is the negative KL-divergence. As such, maximizing L with respect to the parameters of Pθ (z ′, z) is equivalent to minimizing DKL (R (z, z′)∥Pθ (*z, z*′)). Of course, the minimal KL-divergence of zero is obtained when, $$(18)$$ $$\mathrm{P}^{*}\left(z,z^{\prime}\right)=\mathrm{R}\left(z,z^{\prime}\right).$$ ∗(*z, z*′) = R (*z, z*′). (19) For this optimal prior, the KL-divergence is zero, so the ELBO reduces to just the mutual information between z and z ′(and a constant), $${\mathcal{L}}_{\mathrm{MI}}=\mathrm{const}+\mathrm{MI}\,.$$ LMI = const + MI. (20) $$(19)$$ $$(20)$$ ## 4.5 Under A Particular Prior, The Rpm Elbo Is Equal To The Infinite-Sample Infonce Objective (Up To A Constant) Recent work has argued that the good representation arising from InfoNCE cannot be from maximizing mutual information alone, because the mutual information is invariant under arbitrary invertible transformations (Tschannen et al., 2019; Li et al., 2021). Instead, the good properties must arise somehow out of the fact that the InfoNCE objective forms only a loose bound on the true MI, even in the infinite sample limit Eq. (7). In contrast, here we show that the infinite-sample InfoNCE objective is equal (up to a constant) to the ELBO (or log marginal likelihood for deterministic encoders) for a specific choice of prior. In particular, we choose the prior on z implicitly (through R (z)), and we choose the distribution over z ′conditioned on z to be given by an energy based model with an unrestricted coupling function, fθ(*z, z*′) (we could of course use Eq. 8 for fθ), $$\mathrm{P}_{\theta}^{\mathrm{InfoNCE}}\left(z\right)=\mathrm{R}\left(z\right)$$ $$\mathrm{P}_{\theta}^{\mathrm{InfoNCE}}\left(z^{\prime}|z\right)=\frac{1}{Z(z)}\,\mathrm{R}\left(z^{\prime}\right)f_{\theta}(z,z^{\prime}).$$ $$(21\mathrm{a})$$ $$(21\mathrm{b})$$ $$(22)$$ The normalizing constant, Z(z), is $$Z(z)=\int d z^{\prime}\,\mathrm{R}\left(z^{\prime}\right)f_{\theta}(z,z^{\prime})=\mathrm{E}_{\mathrm{R}\left(z^{\prime}\right)}\left[f_{\theta}(z,z^{\prime})\right].$$ Substituting these choices into Eq. (16), and cancelling R (z) R (z ′), the average ELBO or log marginal likelihood becomes, $${\mathcal{L}}_{\mathrm{InfoNCE}}=\mathrm{const}+\mathrm{E}_{\mathrm{R}(z,z^{\prime})}\left[\log f_{\theta}(z,z^{\prime})-\log Z(z)\right],$$ $${\mathrm{gives}},$$ LInfoNCE = const + ER(z,z′)[log fθ(*z, z*′) − log Z(z)] , (23) and substituting for Z(z) gives, $${\mathcal{L}}_{\mathrm{InfoNCE}}=\mathrm{const}+\mathrm{E}_{\mathrm{R}(z,z^{\prime})}\left[\log f_{\theta}(z,z^{\prime})\right]-\mathrm{E}_{\mathrm{R}(z)}\left[\log\mathrm{E}_{\mathrm{R}(z^{\prime})}\left[f_{\theta}(z,z^{\prime})\right]\right].$$ ′)[fθ(*z, z*′)]. (24) Following Wang & Isola (2020) and Li et al. (2021) the right hand side can be identified as the infinite sample InfoNCE objective that we introduced in Sec. 3.2, $$(23)$$ $$(24)$$ $$\mathrm{MI}>\mathcal{I}_{\infty}=\mathrm{const}+\mathcal{L}_{\mathrm{InfoNCE}}.$$ $$(25)$$ MI > I∞ = const + LInfoNCE. (25) Therefore, in this choice of model, the ELBO, LInfoNCE, is equal to the infinite-sample InfoNCE objective, I∞, up to a constant. This would argue that the InfoNCE objective has a closer link to the ELBO, LInfoNCE, than it does to the MI, as the infinite-sample InfoNCE objective, I∞(*θ, ϕ*) is equal (up to a constant) to log marginal likelihood (with a determinstic encoder; Appendix C) or the ELBO with a stochastic encoder. Whereas the infinite-sample InfoNCE objective only gives a bound on the MI. Of course, this is all in the infinite sample setting: we have an infinite-sample expression for the InfoNCE objective (which forms a bound on the MI) and we have an infinite-sample expression for the ELBO/log marginal likelihood. However, the infinite-sample InfoNCE objective and the infinite-sample ELBO are equal up to additive constants, so these are really just different interpretations of the same quantity. The question then becomes how to develop a finite-sample bound on this single quantity with two slightly different interpretations. As we really only have a single quantity, we only need a single finite-sample bound. That single finite-sample bound would of course apply equally to both the InfoNCE and ELBO interpretations. Ultimately, we choose to use the finite-sample bound originally described in the InfoNCE framework (Oord et al., 2018), but as discussed above, that bound would of course apply equally to both the InfoNCE and ELBO interpretations. ## 5 Experimental Results Our primary results are theoretical: in connecting the ELBO/log marginal likelihood and mutual information, and in showing that the InfoNCE objective with a restricted choice of fθ makes more sense as a bound on ![8_image_0.png](8_image_0.png) Figure 1: Results of the moving balls experiment. A) Example of the motion between consecutive frames. The balls move by a full diameter in a semi-random direction. B) Locations of the extracted ball centres, after supervised linear decoding. The standard InfoNCE setup fails to extract correct locations. C) The mean distance from the extracted and true centres of the balls for a supervised method, InfoNCE with a Gaussian discriminator after supervised decoding and InfoNCE with a linear discriminator after supervised decoding. D) Probability distribution for the next location of the coral ball in A according to an encoder trained with a Gaussian discriminator. E) Probability distribution for the next location of the same ball according to an encoder trained with a linear discriminator. the log marginal likelihood than on the MI. At the same time, our approach encourages a different way of thinking about how to set up contrastive SSL methods, in terms of Bayesian priors. As an example, we considered a task in which the goal was to extract the locations of three moving balls, based on videos of these balls bouncing around in a square (Fig. 1A; Appendix D). Critically, we want the latent space, z and z ′, to mirror the underlying true latent space as closely as possible. We take the latent spaces to be 6 dimensional, representing the x and y positions of the 3 balls. Critically, it does not make sense to impose a unit norm constraint on this latent space, as such a constraint does not exist in the real latent space. This contrasts with the usual InfoNCE setup, where z and z ′ are usually taken to be unit norm. To resolve this difficulty, we consider the InfoNCE-like setup described in Sec. 4.5. Our prior is given by Eq. (21), with R (z) and R (z ′) defined by Eq. (11). The freedom in this setup is given by the choice of fθ. Naively applying the usual InfoNCE choice of fθ without unit-norm constraints (using Eq. 8), failed (linear in Fig. 1BC), because we did not correctly encode prior information about the structure of the problem. Critically, our prior is that for the adjacent frames, the locations extracted by the network will be close, while for random frames, the locations extracted by the network will be far apart. The linear estimator in Eq. (8) is not suitable for extracting the proximity of the ball locations, so it fails (linear in Fig. 1 BC). In particular, it corresponds to a non-sensical prior over z ′ given z, $$\mathrm{P}^{\mathrm{InfoNCE}}\left(z^{\prime}|z\right)={\frac{1}{Z(z)}}\,\mathrm{R}\left(z^{\prime}\right)f_{\theta}(z,z^{\prime})\propto\exp\left(z^{T}\theta z^{\prime}\right)$$ (where we have taken Qϕ (z ′) defined by Eq. 11 to be approximately uniform purely for the purposes of building intuition). This prior will encourage z ′to be very large (specifically, z ′should have a large dotproduct with z Tθ. Instead, we would like a prior that encodes our knowledge that z ′is likely to be close to z. We can get such a prior by using a Gaussian RBF form for fθ, $$f_{\theta}(z,z^{\prime})=\exp\left(-\frac{1}{2\theta^{2}}(z-z^{\prime})^{2}\right).$$ 2. (27) where the only parameter, θ, is a scalar learned lengthscale. Critically, this choice of fθ is natural and obvious if we take a probabilistic generative view of the problem (with a uniform Qϕ (z ′), this corresponds to a Gaussian conditional, P InfoNCE θ,ϕ (z ′|z)). Because this is the natural and obvious choice of prior, it works well even without unit-norm constraints (which as discussed, do not make sense in this setting). $$(26)$$ $$(27)$$ ## 6 Conclusions In conclusion, we have developed a new family of probabilistic generative model, the "recognition parameterised model". With a determinstic recognition model, the RPM ELBO is equal to the marginal likelihood (Appendix C). For the optimal prior, the RPM ELBO is equal (up to a constant) to the mutual information, and with a particular choice of prior, the RPM ELBO is equal (up to a constant) to the infinite-sample InfoNCE objective (up to constants). In contrast, the infinite-sample InfoNCE forms only a loose bound on the true MI, which would argue that the InfoNCE objective might be better motivated as the RPM ELBO. As such, we unify contrastive semi-supervised learning with generative self-supervised learning (or unsupervised learning). Finally, we provide a principled framework for using simple parametric models in the latent space to enforce disentangled representations, and our framework allows us to use Bayesian intuition to form richer priors on the latent space. ## References Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken elbo. In *International Conference on Machine Learning*, pp. 159–168. PMLR, 2018. Jyoti Aneja, Alexander Schwing, Jan Kautz, and Arash Vahdat. Ncp-vae: Variational autoencoders with noise contrastive priors. *arXiv preprint arXiv:2010.02917*, 2020. Randall Balestriero and Yann LeCun. Learning by reconstruction produces uninformative features for perception. *arXiv preprint arXiv:2402.11337*, 2024. David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022, 2003. Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-vae. *arXiv preprint arXiv:1804.03599*, 2018. Ricky TQ Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. *arXiv preprint arXiv:1802.04942*, 2018. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016a. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. *arXiv preprint arXiv:1611.02731*, 2016b. Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In *Proceedings of the IEEE international conference on computer vision*, pp. 1422–1430, 2015. Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence, 38(9):1734–1747, 2015. Benjamin Eysenbach, Vivek Myers, Ruslan Salakhutdinov, and Sergey Levine. Inference via interpolation: Contrastive representations provably enable planning and inference. *arXiv preprint arXiv:2403.04082*, 2024. Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. *arXiv preprint arXiv:1803.07728*, 2018. Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. *arXiv* preprint arXiv:1912.03263, 2019. Arman Hasanzadeh, Mohammadreza Armandpour, Ehsan Hajiramezanali, Mingyuan Zhou, Nick Duffield, and Krishna Narayanan. Bayesian graph contrastive learning. *arXiv preprint arXiv:2112.07823*, 2021. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In *International conference on learning representations*, 2016. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020. Adam Jelley, Amos Storkey, Antreas Antoniou, and Sam Devlin. Contrastive meta-learning for partially observable few-shot learning. *arXiv preprint arXiv:2301.13136*, 2023. Tom Joy, Sebastian Schmon, Philip Torr, N Siddharth, and Tom Rainforth. Capturing label characteristics in vaes. In *International Conference on Learning Representations*, 2020. Theofanis Karaletsos, Serge Belongie, and Gunnar Rätsch. Bayesian representation learning with oracle constraints. *arXiv preprint arXiv:1506.05011*, 2015. Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In *International Conference on Machine* Learning, pp. 2649–2658. PMLR, 2018. Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. *Advances in* neural information processing systems, 34:21696–21707, 2021. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Diederik P Kingma, Max Welling, et al. An introduction to variational autoencoders. *Foundations and* Trends® *in Machine Learning*, 12(4):307–392, 2019. Yazhe Li, Roman Pogodin, Danica J Sutherland, and Arthur Gretton. Self-supervised learning with kernel dependence maximization. *arXiv preprint arXiv:2106.08320*, 2021. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In *international conference on machine learning*, pp. 4114–4124. PMLR, 2019. David JC MacKay. Maximum likelihood and covariant algorithms for independent component analysis, 1996. Emile Mathieu, Tom Rainforth, Nana Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. In *International Conference on Machine Learning*, pp. 4402–4412. PMLR, 2019. Alexander Möllers, Alexander Immer, Elvin Isufi, and Vincent Fortuin. Uncertainty in graph contrastive learning with bayesian neural networks. In *Fifth Symposium on Advances in Approximate Bayesian Inference*, 2023. Kevin P Murphy. *Probabilistic machine learning: an introduction*. MIT press, 2022. Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, and Max Welling. Survae flows: Surjections to bridge the gap between vaes and flows. *Advances in Neural Information Processing Systems*, 33:12685– 12696, 2020. Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In *European conference on computer vision*, pp. 69–84. Springer, 2016. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018. Barak Pearlmutter and Lucas Parra. Maximum likelihood blind source separation: A context-sensitive generalization of ica. *Advances in neural information processing systems*, 9, 1996. Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In *International Conference on Machine Learning*, pp. 5171–5180. PMLR, 2019. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International conference on machine learning*, pp. 1278–1286. PMLR, 2014. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International conference on machine learning*, pp. 2256–2265. PMLR, 2015. Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. *Advances in Neural Information Processing Systems*, 34:1415–1428, 2021. Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society Series B: Statistical Methodology, 61(3):611–622, 1999. Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. *arXiv preprint arXiv:1907.13625*, 2019. Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, and Francesco Locatello. Self-supervised learning with data augmentations provably isolates content from style. *arXiv preprint arXiv:2106.04619*, 2021. William I. Walker, Arthur Gretton, and Maneesh Sahani. Prediction under latent subgroup shifts with high-dimensional observations. *arXiv: 2306.13472*, 2023a. William I Walker, Hugo Soulat, Changmin Yu, and Maneesh Sahani. Unsupervised representational learning with recognition-parametrised probabilistic models. *AISTATS*, 2023b. Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning*, pp. 9929–9939. PMLR, 2020. Yu Wang, Hengrui Zhang, Zhiwei Liu, Liangwei Yang, and Philip S Yu. Contrastvae: Contrastive variational autoencoder for sequential recommendation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 2056–2066, 2022. Veit David Wild, Robert Hu, and Dino Sejdinovic. Generalized variational inference in function spaces: Gaussian measures meet bayesian deep learning. *Advances in Neural Information Processing Systems*, 35: 3716–3730, 2022. Shengjia Zhao, Jiaming Song, and Stefano Ermon. Infovae: Balancing learning and inference in variational autoencoders. In *Proceedings of the AAAI conference on artificial intelligence*, 2019. Roland S Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. Contrastive learning inverts the data generating process. *arXiv preprint arXiv:2102.08850*, 2021. ## A Choices For The Base Distribution There are three choices of base marginals, Rbase (x) and Rbase (x ′) in an RPM, corresponding to three different variants of the RPM: 1. The empirical marginal RPM. 2. The true marginal RPM. 3. The estimated marginal RPM. It will turn out that at least the choice between 1 and 2 makes little difference, as they lead to exactly the same expression for the ELBO (compare Eq. 29 and Eq. 37), albeit with a slightly different interpretation. ## A.1 Empirical Marginal Rpm The empirical marginal RPM was introduced in Walker et al. (2023b), and uses, $\begin{array}{l}{\mathrm{R}_{\text{base emp}}\left(x\right)=\frac{1}{N}\sum_{i}\delta(x-x_{i})}\\ {\mathrm{R}_{\text{base emp}}\left(x'\right)=\frac{1}{N}\sum_{i}\delta(x'-x'_{i})}.\end{array}$ where i indexes training data points. This has the advantage that Rbase emp (x) and Rbase emp (x ′) are known. Thus, R (z) and R (z ′) can be evaluated exactly as a mixture distribution, $$(28\mathrm{a})$$ $$(28\mathrm{b})$$ $$\mathrm{R}\left(z\right)=\frac{1}{N}\sum_{i}\mathrm{R}\left(z|x_{i}\right)\tag{1}$$ $$\mathrm{R}\left(z^{\prime}\right)=\frac{1}{N}\sum_{i}\mathrm{R}\left(z^{\prime}|x_{i}^{\prime}\right)$$ $$(29\mathrm{a})$$ $$(29\mathrm{b})$$ In this setting, we know Rbase emp (x), R (z) and (of course) Rϕ (z|x), so we can exactly evaluate the probability density for the recognition parameterised likelihood (Eq. 10). As such, we can sample from the model distribution over data, P (*x, x*′). However, the empirical marginal viewpoints does has important issues. Specifically, $$\mathrm{P}\left(x,x^{\prime}\right)=\int d z d z^{\prime}\,\mathrm{P}\left(z,z^{\prime}\right)\mathrm{P}\left(x|z\right)\mathrm{P}\left(x^{\prime}|z^{\prime}\right)$$ ′) (30) Substituting Eq. (10), $$\mathrm{P}\left(x,x^{\prime}\right)=\mathrm{R_{b a s e\ e m p}}\left(x\right)\mathrm{R_{b a s e\ e m p}}\left(x^{\prime}\right)F\left(x,x^{\prime}\right)$$ ′) F (*x, x*′) (31) where, $$F\left(x,x^{\prime}\right)=\int d z d z^{\prime}\,\mathrm{P}\left(z,z^{\prime}\right)\frac{\mathrm{R}_{\phi}\left(z|x\right)}{\mathrm{R}\left(z\right)}\frac{\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)}{\mathrm{R}\left(z^{\prime}\right)}.\tag{1}$$ Substituting the empirical marginal form for Rbase emp (x) and Rbase emp (x) (Eq. (28)), $$\mathrm{P}\left(x,x^{\prime}\right)=\left({\frac{1}{N}}\sum_{i=1}^{N}\delta\left(x-x_{i}\right)\right)\left({\frac{1}{N}}\sum_{j=1}^{N}\delta\left(x^{\prime}-x_{j}^{\prime}\right)\right)F\left(x_{i},x_{j}^{\prime}\right)$$ $$(30)$$ $$(31)$$ $$(32)$$ $$(33)$$ This distribution only places probability density/mass at values of x and x ′that were actually observed in the data, i.e. x ∈ {xi} N i=1 and x ′ ∈ {x ′ i } N i=1. In most cases we do not expect the exact test value of x and x ′to be in the training set, and this model assigns those test points zero probability density. Indeed, for absolutely continuous distributions, we *never* expect the exact test value of x and x ′to be in the training set. Additionally, we can again connect this form to InfoNCE-like objectives. In particular, as the probability density over (*x, x*′) above places mass only at a finite number of training points, we can equivalently write it as a probability mass function over indicies (*i, j*) describing which datapoint was chosen. The resulting probability mass function is, $$\mathrm{P}\left(i,j\right)=P_{i j}={\frac{1}{N^{2}}}F(x_{i},x_{j}^{\prime}).$$ ). (34) Critically, the implied conditional, P (j|i), in essence embodies a classification problem: classifying which of the N training points, x ′ j , is associated with a particular xi. And that classification problem is almost exactly the InfoNCE classification problem. Finally, this implies that even if we do sample from P (*x, x*′), we are definitely going to end up with x and x ′from the training dataset. As such, "sampling" is just "matching up" potentially consistent x and x ′in the training set, and is thus quite different from sampling in a proper generative model. ## A.2 True Marginal Rpm While the empirical marginal RPM has many advantages, it does assign zero probability to any test points that did not turn up in the training data. To resolve this issue, we could instead use the true data distribution for Rbase. It seems that this would be problematic as while the true data distribution exists, it is almost always unknown (and even unknownable). However, remember that to perform inference and learn the parameters, we never need to evaluate probability density of the true data distribution, as the Rbase (x ′) and Rbase (x) terms cancel out of the ELBO (Eq. 12). Instead, we just need to be able to sample R (*z, z*′), which can easily be achieved by taking (x ′, x) from the real data, and encoding using Rϕ (z|x) and Rϕ (z ′|x ′). Of course, not knowing the true data distribution does prevent us from sampling (x ′, x) from the model in a true marginal RPM. But as discussed in the Introduction, we are not interested in sampling data from the model: if we were, we would use e.g. a diffusion model. Instead, we are interested in extracting useful structured, high-level representations of data, and for that purpose, all we need to be able to do is to encode using Rϕ (z|x) and Rϕ (z ′|x ′) and optimize the parameters of these encoders. In practice, the empirical and true marginal settings are almost identical. The only minor difference is that in the empirical marginal setting, R (z) and R (z ′) can be evaluated exactly (Eq. 29), whereas in the true marginal setting, R (z) and R (z ′) can only be estimated. Critically, though the numerical value of the exact R (z) and R (z ′) in the empirical marginal setting (Eq. 29) is exactly the same as the true marginal estimate of these quantities. Speicifically, in the true marginal setting, $$(34)$$ $$\begin{array}{l}{{\mathrm{R}\left(z\right)=\int d x\,\mathrm{P_{true}}\left(x\right)\mathrm{R_{\phi}}\left(z|x\right)}}\\ {{\mathrm{R}\left(z^{\prime}\right)=\int d x^{\prime}\,\mathrm{P_{true}}\left(x^{\prime}\right)\mathrm{R_{\phi}}\left(z^{\prime}|x^{\prime}\right).}}\end{array}$$ However, we can rewrite these integrals as expectations, $$(35\mathrm{a})$$ $$(35\mathrm{b})$$ $${\mathrm{ectations}},$$ R (z) = EPtrue(x)[R (z|x)] (36a) $$\begin{array}{l}{\operatorname{R}\left(z\right)=\operatorname{E}_{\operatorname{Prune}}(x)\left[\operatorname{R}\left(z|x\right)\right]}\\ {\operatorname{R}\left(z^{\prime}\right)=\operatorname{E}_{\operatorname{Prune}}(x^{\prime})\left[\operatorname{R}\left(z^{\prime}|x^{\prime}\right)\right]}\end{array}$$ ′)] (36b) We can estimate these expectations using samples from Ptrue (x) and Ptrue (x ′). Of course, the data itself gives samples from Ptrue (x) and Ptrue (x ′). Thus, even in the true-marginal setting, we would estimate R (z) and R (z ′) using the same sum over datapoints used in the empirical marginal setting (i.e. Eq. 29), $$(36\mathrm{a})$$ $$\stackrel{\mathrm{(}}{(36\mathrm{b})}$$ $${\rm R}\left(z\right)\approx\frac{1}{N}\sum_{i}{\rm R}\left(z|x_{i}\right)\tag{1}$$ $${\rm R}\left(z^{\prime}\right)\approx\frac{1}{N}\sum_{i}{\rm R}\left(z^{\prime}|x_{i}^{\prime}\right)\tag{2}$$ The only difference is in the interpretation of this sum. Here (in the true marginal setting), the sum *estimates* the true value of R (z) and R (z ′), while in the empirical marginal setting (Eq. 29), the sum is the true value. $$(37\mathrm{a})$$ $$(37\mathrm{b})$$ ## A.3 Estimated Marginal Rpm We could of course learn Rbase est (x) and Rbase est (x ′). That results in a model where we can both evaluate the probability density, and sample P (x ′, x). However, this involves training a generative model for complex data, such as images. While that might seem to obviate the whole point of the exercise, there is the potential for estimated marginal RPMs to be useful. In particular, the estimated marginal RPM only requires training a generative model for x and x ′separately (note that the ELBO in Eq. 12 depends only on the marginals, R (x) and R (x ′), and not on R (*x, x*′). Thus, it may be possible to use the estimated marginal to "stitch together" samples of the joint, (*x, x*′) from a generative model which can sample only from marginals. ## B Technical Details When Using Deterministic Recognition Models While we do not have to, we could follow the standard self-supervised setup, which in effect uses Dirac-delta recognition models, $$\begin{array}{l}{{\mathrm{R}_{\phi}\left(z\,|x\,\right)=\delta\left(z\,-g_{\phi}(x)\right),}}\\ {{\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)=\delta\left(z^{\prime}-g_{\phi}^{\prime}(x^{\prime})\right).}}\end{array}$$ $$(38)$$ $$(39)$$ That raises the question of when our objectives make sense. To answer these questions, we need to carefully understand the measure-theory underlying the expressions in the main text. In particular, our objectives are all ultimately written in terms of KL-divergences for distributions over the latent variables, (*z, z*′). We take z ∈ Z and z ′ ∈ Z′, and consider probability measures µ and ν, where the set underlying the measure space is *Z × Z*′. Thus, the KL-divergences in the objective can be written in measure-theoretic notation as, $$\mathrm{D}_{\mathrm{KL}}\left(\nu\|\mu\right)=\int d\nu\log{\frac{d\nu}{d\mu}},$$ (e.g. see Wild et al. 2022). The critical term is *dν/dµ*, which is known as the Radon-Nikodym derivative. This derivative is defined when we can write ν in terms of a function f : *Z × Z*′ → [0, ∞). $$(40)$$ $ \frac{d\nu}{d\mu}=f$ $ \nu(A)=\int_A f d\mu$ which is the same as it is the case if we take this as a bit. $$(41)$$ $$(42)$$ $$(43)$$ where A *⊆ Z × Z*′. In that case, f is uniquely defined up to a µ-null set (informally, if ν(A) = µ(A) = 0 then f is not uniquely defined for (*z, z*′) ∈ A). Now, we can consider the two KL-divergence terms in Eq. (17). The first KL-divergence term is, $$\left(\frac{\mathrm{R}}{\mathrm{R}}(z,z^{\prime})\right)=\mathrm{R}_{\mathrm{R}}\left(\frac{\mathrm{R}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)}\right)=\mathrm{D}_{\mathrm{KL}}\left(\mathrm{R}\left(z,z^{\prime}\right)\|\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)\right).$$ In this case, we can write, $$\mathrm{R}\left(z,z^{\prime}\right)=f(z,z^{\prime})\,\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right),$$ ′), (44) so the Radon-Nikodym derivative exists, irrespective of the form of Rϕ (z|x) and Rϕ (z ′|x ′). This also implies that in Section 4.4, where we use the optimal prior, the objective (Eq. 20) is always well-defined. The second KL-divergence term is, $$\mathrm{E}_{\mathrm{R}\left(z,z^{\prime}\right)}\left[\log\frac{\mathrm{P}_{\theta}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z,z^{\prime}\right)}\right]=-\mathrm{D}_{\mathrm{KL}}\left(\mathrm{R}\left(z,z^{\prime}\right)\|\mathrm{P}\left(z,z^{\prime}\right)\right).$$ If we use an optimal prior, this KL-divergence cancels. However, if we do not use the optimal prior, but instead want to use a parametric prior, this term can cause problems. The issue is that if we choose a $$(44)$$ $$(45)$$ standard form for the prior, such as a multivariate Gaussian, then P (*z, z*′) will be absolutely continuous. However if we have deterministic Rϕ (z|x) and Rϕ (z ′|x ′), then R (*z, z*′) can be supported on a measure-zero subspace, and hence it will not be absolutely continuous. This for instance might happen if (*x, x*′) and (*z, z*′) are real vector spaces, with x lower dimensional than z and/or x ′lower dimensional than z ′. One way to resolve this issue would be to set, $$\begin{array}{l}{{\mathrm{R}_{\phi}\left(z|x\right)=\mathcal{N}\left(g_{\phi}(x),\epsilon\mathbf{I}\right)}}\\ {{\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)=\mathcal{N}\left(g_{\phi}^{\prime}(x^{\prime}),\epsilon\mathbf{I}\right)}}\end{array}$$ $$\begin{array}{l}{(46)}\\ {(47)}\end{array}$$ where 0 < ϵ is arbitrarily small. In that case, Rϕ (z|x) Rϕ (z ′|x ′) and hence R (*z, z*′) are absolutely continuous and non-zero everywhere so the necessary Radon-Nikodym derivative exists. Also see Nielsen et al. (2020) which also uses deterministic approximate posteriors in the VAE setting, and compares them against normalizing flows. Note however that they work with quantities such as EQ(z|x)[log P (x|z) / Q (z|x)] which lack a rigorous measure-theoretic formulation, as the distributions in the ratio are over different variables: x in the numerator and z in the numerator. As such, approach presented here which does allow a measure-theoretic formulation is likely to be preferable. ## C When We Use Deterministic Recognition Models, The Elbo Is Equal To The (Marginal) Log Likelihood) The marginal likelihood, P (*x, x*′), is, $$\mathrm{P}\left(x,x^{\prime}\right)=\int d z d z^{\prime}\,\mathrm{P}\left(x|z\right)\mathrm{P}\left(x^{\prime}|z^{\prime}\right)\mathrm{P}\left(z,z^{\prime}\right).$$ ′) P (*z, z*′). (48) Substituting the recognition parameterised likelihoods, $$\begin{split}\log\mathrm{P}\left(x,x^{\prime}\right)&=\log\int dz dz^{\prime}\,\mathrm{P}\left(z,z^{\prime}\right)\frac{\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\mathrm{base}}\left(x\right)}{\mathrm{R}\left(z\right)}\frac{\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)\mathrm{R}_{\mathrm{base}}\left(x^{\prime}\right)}{\mathrm{R}\left(z^{\prime}\right)}\\ &=\mathrm{const}+\log\mathrm{E}_{\mathrm{R}_{\phi}\left(z|x\right)\mathrm{R}_{\phi}\left(z^{\prime}|x^{\prime}\right)}\left[\frac{\mathrm{P}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)}\right]\end{split}$$ $$(48)$$ (49) $\binom{50}{50}$ . where, as usual const = log (Rbase (x) Rbase (x ′)). The recognition models in the expectation are deterministic (Eq. 38), z = gϕ(x) and z ′ = g ′ ϕ (x ′), so we can easily evaluate the expectation, $$\log\mathrm{P}\left(x,x^{\prime}\right)=\mathrm{const}+\log\frac{\mathrm{P}\left(z,z^{\prime}\right)}{\mathrm{R}\left(z\right)\mathrm{R}\left(z^{\prime}\right)}=\mathcal{L}\left(x,x^{\prime}\right).$$ $$(51)$$ Here, the last equality arises from taking the ELBO in Eq. (15) and substituting the same deterministic encoders. Thus, with a deterministic encoder, the (marginal) log likelihood is equal to the ELBO. ## D Experimental Details We generated 900 images in a single continuous video with a resolution of 256 × 256 pixels. The three balls had a diameter of 32 pixels. Between consecutive frames the balls moved by a full diameter in a random direction, as illustrated in Fig. 1A. The movement trajectory was picked by taking the previous trajectory and adding a uniform noise of −2 ◦to +2◦. If the picked movement resulted in a collision, we sampled a new trajectory by doubling the noise range until a valid trajectory is found. We trained the model in a classic self-supervised manner. We encoded one "base" frame, one "target" frame (the next frame in a video sequence), along with a number of random frames. As usual, the network was trained to distinguish between the target frame (adjacent to the base frame) and random frames. We then trained a linear decoder in a supervised manner to return the (*x, y*) locations of the balls. The encoder itself is a simple convolutional neural network, as shown in Fig. 2. It consists of 2 batch normalised convolutional layers with a kernel size of 3. The first layer uses ReLU as the activation function, ![16_image_0.png](16_image_0.png) Figure 2: Architecture of the encoder neural network. The first of the two 3x3 convolutional layers outputs 6 feature maps and uses a ReLU activation. The second convolution outputs 3 feature maps and applies a sigmoid activation. For each of these 3 maps, we extract their centre of mass. This is done by summing each dimension and normalising it to 1. This is then used to perform a weighted average over the axis locations and get the final coordinates. while the second layer uses a sigmoid. At the output of the convolutional layers, we have 3 feature maps, which we interpret as the locations of the 3 different balls. We finally extract these locations by computing the centre of mass of the feature maps, giving a vector of six numbers as output (the x and y locations of the centres of mass of each feature map). The training itself was performed by using stochastic gradient descent with a learning rate of 0.005 over the course of 30 epochs. The batches were made of 30 random pairs of consecutive frames. For any pair, we use the second frame as the positive example and we use the second frame of the other pairs in the batch, as the random negative examples, against which we contrast.
Review 1: Summary: This paper presents a novel perspective on self-supervised learning (SSL) by demonstrating that the popular InfoNCE objective is equivalent to Evidence Lower Bound (ELBO) in a new class of probabilistic generative models, dubbed Recognition Parametrized Models (RPMs). Some theoretical investigations are shown as follows: 1. The RPM ELBO can be written as the subtraction of mutual information with KL divergence between prior and data distribution of pairs, thus the optimal prior reduces to the mutual information up to a constant 2. Given the infinite sample, ELBO and InfoNCE are equivalent up to constant Strengths and Weaknesses: ## Strengths 1. The paper is clearly written with rigorous definitions and notations, providing a new theoretical concept in understanding self-supervised learning objectives. 2. In particular, the paper elaborates the equivalence between RPM ELBO and InfoNCE objective, and their relationship to mutual information, which is a common measure that previous works have focused on. ## Weakness 1. The major weakness of this paper would be the shortage of empirical validation. I suspect that the toy experiment demonstrates that the prior information is important in self-supervised learning, where it was nothing but simply using gaussian RBF for $f_\theta$. My question is, how does it corroborate your claim that RPM provides a new approach in self-supervised learning? Maybe providing more pragmatic experimental results on vision benchmarks (e.g., self-supervised learning on CIFAR-10, at its minimal approach) might be relevant to better deliver the theoretical finding. 2. In addition, although the concept of RPM is quite new to the literature, it seems like the equation and derivation is not new from previous literature. Consider $P_\theta(z,z’)$ as a family of energy-based model with energy $f_\theta(z,z’)$, then the proposed objective is nothing but Donsker-Varadhna (DV) objective [1], or MINE [2] that is derived for modern neural network architecture. While I agree that writing it into $P_\theta(z,z’)$ provides a more general formulation, I do not see much advantage in understanding SSL with such a perspective. [1] Donsker, M. and Varadhan, S. Asymptotic evaluation of certain markov process expectations for large time, iv. Communications on Pure and Applied Mathematics, 36(2):183?212, 1983. [2] Belghazi, Mohamed Ishmael, et al. "Mutual information neural estimation." International conference on machine learning. PMLR, 2018. Requested Changes: I believe this paper has a lot of room for improvement by clarifying the advantage of analyzing or implementing self-supervised learning with RPM; the author could do better on theory / experiments by showing some cases where RPM is better than other self-supervised objectives such as InfoNCE. Broader Impact Concerns: N/A ================================================== Review 2: Summary: In the paper "InfoNCE is variational inference in a recognition parametrised model", the authors present a different view on the popular InfoNCE loss function, widely used in contrastive learning frameworks such as SimCLR. The authors argue that it is more helpful to see InfoNCE as an approximation to ELBO, rather than an approximation to MI (mutual information). Strengths and Weaknesses: Disclaimer: I have not fully followed all the arguments, but overall the paper appears sensible. It is on topic for TMLR and may be of interest for its audience. My only substantial comment is on the experimental section; apart from that I only have very minor comments. Requested Changes: MAJOR COMMENTS * I must be confused, but I don't quite understand the meaning of \theta in Equation 8. The way I know InfoNCE, e.g. from SimCLR, the loss function simply uses exp(z^T z' / tau) terms, i.e. there is no matrix multiplication by matrix \theta, but instead there is a scalar temperature parameter \tau (which is typically held fixed). Please comment on why you need \theta matrix here, and whether one can also use parameter-less function f(z, z') that I gave above. I suggest to explicitly refer to SimCLR here and explain how it fits into your notation/framework. It would also be useful to mention in this section that z and z' are typically normalized to have norm equal 1 (otherwise the objective does not make sense). * This normalization issue becomes crucial in Section 5, which presents a toy experiment supposedly showing a failure mode of standard InfoNCE setup. However, this looks like a strawman to me, due to missing normalization. SimCLR uses cosine similarity, which can be written as z^T z' but ONLY IF z and z' are normalized to lie on the hypersphere and have norms 1. If they are not normalized, then z^T z' does not give cosine similarity (obviously) and it does not make sense to use that for what the authors call f_theta. What the authors show in Figure 1C as "linear" does NOT correspond to SimCLR because it uses unnormalized vectors in z^T z'. What about using exp(cos_similarity(z, z')) as f() and show that in Figure 1C instead of "linear"? Or at least in addition. Currently this whole section looks like a strawman. MINOR COMMENTS * The paper contains many typos and could use a thorough proof reading. Page 2: "(see ...)". Page 2: "This seems to mirrors". Page 3: "Third,,". * Section 4 title looks incomplete: "Recognition Parameterised Generative" -- this sounds like an adjective, not a noun. Is the word "model" missing? Broader Impact Concerns: None ================================================== Review 3: Summary: This papers proposes a new understanding of the InfoNCE from the variational inference perspective, and claims its gain over the conventional mutual information understanding. To achieve this, they propose a recognition parameterized model that in turn, defines the generative probabilistic model as the bayes poster of the recognition model (i.e., the encoder). As a result, unlike VAEs, this model has no explicit decoder (can be a good thing), while they also show the variational lower bound (ELBO) is deeply connected to the mutual information (with a KL divergence accounting for the amortization gap). They also link the InfoNCE objective to the ELBO objective, which offers an alternative view. They show the benefit of this understanding using a rather toy experiment. Strengths and Weaknesses: ## Strengths 1. I would say that this paper is excellently written, starting from a intriguing and ambitious goal, and proceeding the arguments with convincing logic flows. It shows that the authors have plenty of knowledge of the literature as well as deep understandings of the essences of different approaches. 2. The paper has a good account of the related work. I fully understand that publishing paper can be quite uncertain in these days, and the authors set up good examples of how we could cope with situations like “because followup works got published earlier, is this work meaningless and hopeless now”. A not that up-to-date work can still give us lots of insights, and we should also account for its contributions in the context of its publishing date. Besides, given the elaborated descriptions of the publication history, to make a good evaluation of these arguments, I search for this paper and browse its submission history. I would say that by shifting the main theme from “InfoNCE is VAE” to “InfoNCE is “RPM”, the authors certainly have done a lot of polishing and it makes this work more understandable and less confusing. 3. I think that finding new perspective for understanding self-supervised objectives is a meaningful topic, and drawing connection to variational inference is certainly one of the most promising and interesting ways. 4. The recognition parameterised model (RPM) is an interesting probabilistic generative model, and it kind of inverts the usual convention from “decoder - variational encoder” to “encoder - variational decoder”. In this way, RPM has an advantage over VAEs by discarding the heavy burden to explicitly reconstruct the outputs, which may help explains the gains of InfoNCE over VAEs for representation learning. ## Weakness 1. **Limited experiments and practical insights.** I would say that although the theory is interesting, the experiments are far from being inspiring or complete to show potential practical insights of this theory. Considering the toy experiment in Sec 5, it is actually a common choice to directly use the cosine similarity between $z$ and $z’$ for parameterizing $f_\theta(z,z’)$ in InfoNCE. And the new theory here does not add much new information. 2. **Connection to Poole et al. is not elaborated.** As far as I could see, the RPM model proposed has a close connection to the energy-based model discussed in Poole et al. (2019) which is prior to this work (and its preprint). Specifically, in their energy-based model (Eq. 3), they propose roughly the same formulation as Eq. 21b, and they also established the connection between InfoNCE and a lower bound Eq.4 (the same lower bound as this paper Eq. 23) and the MI objective. Obviously, the energy-based model in Poole et al. is also a kind of decoder-free probabilistic model that bears great similarities to the definitions to this work (if not the same after addressing Problem 2), and they already show a variational inference of this model gives the InfoNCE loss. So with these similarities, their differences should be elaborated. 3. **Are MI and ELBO perspectives so different?** I note that the ELBO (as a lower bound of log likelihood) is *nothing but yet another lower bound of MI*, since its upper bound, log likelihood, is also a variational lower bound of MI (up to constants). To see this, notice that $E_{P(x,y)} \log P(x,y) = E_{P(x,y)} \log Q(x,y) + KL(P(x,y), Q(x,y))\geq E_{P(x,y)} \log Q(x,y).$ I think that this connection should be mentioned to give a holistic view of these different “lower bounds” and derivations. And in view of this limitation, this ELBO perspective is actually not much different from the MI lower bound perspective, e.g., in Poole et al. Given these serious concerns, I think this work still lacks enough evidence to show that it is sufficiently different from prior works, or offer concrete new insights to algorithm design. Ref: Poole et al. On Variational Bounds of Mutual Information. In ICML. 2019. Requested Changes: The quality of this work could be significantly improved if they could propose new amendments to the manuscript (or show my misunderstandings): 1. Showing concrete new insights from the ELBO/RPM perspective. 2. Elaborate the connections and differences to the energy-based model formulation. 3. Elaborate the real difference between two perspectives. Minor: On page 2, there is an incomplete bracket "(see ...)" Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: The paper introduces a novel perspective on the InfoNCE objective by framing it within the context of variational inference, specifically through the lens of Recognition Parametrized Models (RPMs). This approach diverges from the conventional understanding of mutual information in self-supervised learning, offering new conceptual framework that potentially enhances our comprehension of self-supervised learning objectives. ### Major Contributions: - The paper's primary contribution is the innovative reinterpretation of the InfoNCE objective as an Evidence Lower Bound (ELBO) in the context of RPMs. This reinterpretation broadens the theoretical understanding of self-supervised learning. - The paper is well-structured, presenting complex ideas in an accessible manner, thereby contributing to the theoretical discourse in the field. ### Points of Consideration: - Multiple reviewers highlight the need for more empirical evidence to support the theoretical claims made in the paper. While the theoretical framework is robust, additional experiments, particularly on more practical or larger-scale datasets, would bolster the paper's impact and relevance. ### Recommendation: Despite the concerns regarding empirical validation and the need for a more distinct differentiation from existing literature, the paper presents a significant theoretical contribution that enriches our understanding of self-supervised learning. Therefore, I recommend the acceptance of this paper. ==================================================
# Multi-Tailed, Multi-Headed, Spatial Dynamic Memory Refined Text-To-Image Synthesis Anonymous authors Paper under double-blind review ## Abstract Recent text-to-image generation methods that employ hundreds-of-millions to billions of model parameters or that are trained using tens-to-hundreds of GPUs of computational power have delivered highly-compelling and text-matching images. However, training to synthesize realistic images from text descriptions using a single-GPU resource remains a challenging task. In this paper we revisit the problem of generating images from text using models of less than 100 million parameters that can reliably be trained from scratch using a single-V100-GPU machine, and point out that there are still significant gains to be made within this problem setting. The current state-of-the-art amongst such low-resource models typically tackle text-to-image generation in a multistage manner, by first generating a rough initial image and then refining image details at subsequent stages. However, current methods suffer three important limitations. Firstly, initial images are generated at a sentence-level and provide a poor basis for word-level refinement. Secondly, by using common text-representations across all image regions, current refinement methods prevent different interpretations of words at different regions. Finally, images are refined in a single shot at each stage, limiting precision of image improvement. We introduce three novel components to address these shortcomings of low-resource methods: (1) A word-level initial stage to generate a better basis for refinement. (2) A spatial dynamic memory module to interpret words differently at different image regions. (3) An iterative multi-headed mechanism to better refine image details at each stage. We combine our three components as a unified model and demonstrate favourable performance against the previous state-of-the-art. ## 1 Introduction Generative Adversarial Networks (GANs) have shown great promise for the generation of photo-realistic synthetic images (Goodfellow et al., 2014; Radford et al., 2015; Denton et al., 2015), and the success of GANs has driven research into conditional image-generation and multimodal learning. Text-to-image generation has especially received much of the community's attention and recent methods for text-to-image generation that employ very large models using hundreds-of-millions to billions of parameters or methods that require tens-to-hundreds of GPUs during training (Ramesh et al., 2021; Ding et al., 2021; Nichol et al., 2021; Rombach et al., 2022) have delivered highly-realistic text-matching images of general-scene content. However, while the results from these high-resource methods are highly compelling, their computational-requirements often make them prohibitively expensive to train-from-scratch in many settings. A key requirement for wider adoption of text-to-image generation technology on the other hand, remains that models be cheap to train-from-scratch for niche domains. In this paper we revisit the problem of text-to-image generation using smaller low-resource models. In particular, we consider the setting where models employ less than 100 million parameters and can be trained from scratch using a single V100 GPU. We point out that there are still significant gains to be made by rethinking the design of such low-resource models - that are more practical to train than their high-resource counterparts. We identify three important limitations in current state-of-the-art methods of lower-resource models (Xu et al., 2018; Li et al., 2019a; Zhu et al., 2019) and propose architectural solutions to address these problems. These lower-resource methods typically employ multiple-stages of image generation - conventionally, an initial image is first generated from a global sentence-level vector, and subsequent stages incorporate fine-grained information extracted from word-level vectors to refine image details. The first problem that we highlight is that by synthesizing image features only from a sentence-level vector, the initial generation stage often fails to provide a sufficient basis for word-level refinement. This is especially important for images possessing detailed spatial structure or co-dependent attributes. If co-dependent image details such as the head and a limb of an animal for example are poorly formed in the initial image, then their relative positions and details can be difficult to decide or correct during refinement at later stages. By using word-level information to synthesize images directly at the initial stage, we demonstrate that we are able generate more detailed initial images that provide a better basis for subsequent word-level refinement. Secondly, current methods do not construct region-specific representations of text at refinement stages. This prevents us from interpreting the same words differently based on the content of image regions. Whereas, such requirement is natural in many contexts. The word 'vibrant' for example may dictate a requirement regarding the sharpness of a birds beak that is fundamentally different from the requirement that it dictates for the color of it's feathers. To better generate realistic images from natural text descriptions, it is important that we use a refinement architecture that allows different image regions to assimilate region-contextualized information from text. Finally, we note that current methods generate refinement features (that modify previous image features) only once at each refinement stage and attempt to address all image aspects within a single-shot. This single-shot refinement limits the precision with which each refinement stage can learn to improve the prior image. In this paper, we propose a Multi-Headed and Spatial Dynamic Memory image refinement mechanism with a Multi-Tailed Word-level Initial Generation stage (MSMT-GAN) to address these three issues. Our contributions are summarized as follows: - We introduce a novel "Multi-Tailed" Word-level Initial Generation stage (MTWIG), that generates a separate set of image features for each word n-gram, and iteratively fuse these sets together to obtain better initial image features for subsequent refinement. - We introduce a novel Spatial Dynamic Memory module (SDM) that fuses word-information in a custom way with each prior image region, to obtain region-contextualized representations of text. We extract refinement features from these SDM modules at each refinement stage. - We introduce a novel Iterative Multi-Headed Mechanism (IMHM) of image refinement - wherein we explicitly allow each stage of refinement to make multiple distinct modifications to the prior image, under common discriminator feedback. - We demonstrate that these three separate components work well together and that the addition of each component to the pipeline boosts generation performance on the Caltech-UCSD Birds 200 (CUB) dataset. Experiment results demonstrate that in training-from-scratch, MSMT-GAN is competitive with current methods for low-resource models on the Microsoft Common Objects in Context (COCO) dataset (Lin et al., 2014) and significantly outperforms the previous state-of-the art for low-resource models on the CUB dataset, decreasing the lowest reported Fréchet Inception Distance (FID) (Heusel et al., 2017) by 21.58% and moving the R-precision (Xu et al., 2018) ahead by 8 standard deviations over the previous state-of-the-art for CUB. We further note that while we have restricted our comparisons to models under 100 million parameters (that can reasonably be trained using a single V100 GPU), our method is only roughly half that size (Table 2). ## 2 Related Work Text-to-Image Generators: Reed et. al. (Reed et al., 2016) first demonstrated that a translation model from natural language to image pixels could be learnt by conditioning both generator and discriminator networks of a GAN on input text-descriptions. There has since been a surge of interest in training multi-stage attention based GAN architectures for this task. While the conventional setting (Zhang et al., 2017; Xu et al., 2018; Li et al., 2019a; Zhu et al., 2019) assumes only the availability of (text,image) pairs at training time, recently a second setting has emerged that assumes availability of bounding-box/shape-mask information of objects attributes during training (Li et al., 2019b; Hinz et al., 2019; Cho et al., 2020; Liang et al., 2020). We highlight that this represents a significantly easier problem setting and that such methods are not feasible where bounding-box/shape information is unavailable (such as the CUB dataset). Our method does not assume the availability of bounding-box/shape information, and we make our comparisons against prior work of the same setting. As earlier mentioned, there has also been recent work that explores the use of very large models with billions of parameters for the task of text-to-image generation (Ramesh et al., 2021; Ding et al., 2021; Nichol et al., 2021). We do not make a comparison against these methods as our model tackles the problem of text-to-image generation using only a small fraction of their number of model parameters. Memory Networks: Memory Networks (Weston et al., 2014) combine inference components with a long-term memory module that can be dynamically written to and read from. Current methods (Miller et al., 2016) query "key encodings" of memory slots to retrieve a set of weights. These weights are used to combine separate "value encodings" of the slots into a single response. A Dynamic Memory Generative Adversarial Network (DM-GAN) (Zhu et al., 2019) that retrieves information for image refinement from a memory module was proposed for text-to-image synthesis. In our SDM module, we too employ the memory-writing, key-addressing, value-reading paradigm introduced by (Miller et al., 2016), but our method differs from (Zhu et al., 2019) in all three memory operations. Fundamentally, DM-GAN does not create region-contextualized representations of text. Multi-Headed Attention: Transformers (Vaswani et al., 2017) utilize a key-value mechanism similar to memory networks and introduced the idea of multi-headed attention. They linearly project query, keys and values to h separate encodings, called "attention heads", and each head is separately used to extract an output vector. These vectors are concatenated together and linearly projected to a single response. Inspired by the success of Transformers, we introduce the IMHM method for image refinement. However, our method differs in a few respects. We maintain separate SDM modules for each head and we obtain queries and fuse outputs in an iterative fashion. ## 3 Msmt-Gan Our MSMT-GAN architecture (Figure 1) comprises of three stages - a Multi-Tailed Word-level Initial Generation (MTWIG) stage, and two refinement stages. Each refinement stage is Multi-Headed, and each refinement head has a separate Spatial Dynamic Memory (SDM) module. The following sections present our MTWIG stage, our SDM module for a single refinement head, and the details of our Iterative Multi-Headed Mechanism (IMHM). ## 3.1 Multi-Tailed Word-Level Initial Generation (Mtwig) We highlight that previous multi-stage low-resource methods (Zhang et al., 2017; 2018; Li et al., 2019a; Zhu et al., 2019) all rely on the same type of initial generation stage and focus only on improving the refinement stages - making the conventional assumption that the performance of multi-stage generators is primarily determined by the refinement stages, and that the quality of the "rough initial image" is of little importance. In our paper, we break from this tradition and demonstrate for the first time that gains can be achieved in the final stage of image refinement by making an improvement to the initial images. The conventional approach synthesizes initial images directly from a sentence-level vector without ![3_image_0.png](3_image_0.png) Figure 1: Our MSMT-GAN architecture for text-to-image synthesis, showing a Mutli-Tailed Word-level Initial Generation stage, a Multi-Headed Spatial Dynamic Memory based refinement stage with three refinement heads, and image prediction. attempting to separate image attributes at a word-level. As a result, words-level details are are inherently excluded from the initial image. In our novel Multi-Tailed Word-level Initial Generation (MTWIG) stage, we overcome this shortcoming by explicitly creating separate sets of image attributes for each word n-gram. First, we sample a vector of random noise z from a normal distribution and use a pretrained textencoder to extract a sentence-level vector and word-level vectors: s and W from the input text. $$W=\{w_{1},w_{2},...,w_{L}\};\ w_{1}\in\mathbb{R}^{N_{w}}\quad s\in\mathbb{R}^{N_{s}}\quad;\quad z_{n}\sim\mathcal{N}(0,1);\ z\in\mathbb{R}^{N_{s}}\tag{1}$$ Where L is the number of words in the text-description, and Nz, Ns and Nw are the dimensions of the noise vector, sentence vector and word vectors respectively. To mitigate over-fitting, the Conditioning Augmentation technique (Zhang et al., 2017) is used to resample the sentence-vector from an independent Gaussian distribution. This resampled sentence vector s ′ and the noise vector z are concatenated with each word-level vector wl from the input text sequence, and the sequence of concatenated vectors are passed through a 1D convolutional operation V of stride 1 (see Figure 1). $$F=V(\{c o n c a t(s^{\prime},\ z,\ w_{l})\ |\ \ \forall\ w_{l}\in W\})$$ ′, z, wl) | ∀ wl ∈ W}) (2) The length T of the output sequence F depends on the kernel size used by V and the vectors of the output sequence ft ∈ F are each separately passed through a series of upsampling blocks to generate corresponding sets of image features St. These sets of image features or "tails" each correspond to a different word n-gram from the input text sequence. If we use a kernel size of 1 for V , then each tail St corresponds to a single word. If we use a kernel size of 2, then each tail St corresponds to a word bi-gram, and so on. We combine our sequence of tails {St} together in an iterative fashion using the adaptive gating fusion mechanism introduced by (Zhu et al., 2019) (discussed in Appendix). $$S_{1:t}=f u s e(S_{1:t-1},\;S_{t},\;P^{\mathrm{MTWIG}},\rho^{\mathrm{MTWIG}})\;;\;R_{1}=S_{1:T}\;.$$ S1:t = fuse(S1:t−1, St, P MTWIG, ρMTWIG) ; R1 = S1:T (3) $$\left(2\right)$$ $\left(3\right)$. Where P MTWIG and ρ MTWIG denote parameter matrix and bias terms, S1:t denotes a combination of the first t tails, and S1:1 denotes the first tail S1. The combination of all T tails gives us the final image features R1 for our initial stage. Notice that by concatenating each word vector wl with s ′ and z before the 1D convolution, each tail is created with some common information, so they may learn to fuse together coherently. Each upsampling block consists of a nearest neighbor upsampling layer and a 3×3 convolution operation. An initial image is predicted from R1 using a 3×3 convolution. ## 3.2 Spatial Dynamic Memory (Sdm) In this section, we describe the operation of a single refinement head. Unlike previous methods, our novel Spatial Dynamic Memory (SDM) module creates a separate region-contextualized text representations for each image region. This allows us to interpret the same text in fundamentally different ways at different parts of an image and assimilate region-contextualized information from text at each part. To begin with, we have the set of word-level vectors W and image features Rk−1 from the previous stage of generation. $$R_{k-1}=\left\{r_{1,1}\ ,\ r_{1,2}\ ,...,\ r_{s,s}\right\}\ ;\ \ r_{u,v}\in\mathbb{R}^{N_{r}}$$ Nr(4) Where |s × s| is the number of image pixels and Nr is the dimension of pixel features. We obtain refinement features in three steps: Memory Writing, *Key Addressing* and *Value Reading*. Memory Writing: First, we divide the fine-grained s × s inital image into a coarse h × h sized grid-map and average the pixel features within each grid-cell to get grid-level image features C. $$C_{i,j}=\frac{1}{|p\times p|}\sum_{u=(i-1)*p+1}^{i*p}\sum_{v=(j-1)*p+1}^{j*p}r_{u,v}\tag{5}$$ $$\left(4\right)$$ Where p = s/h, so that |p × p| are the number of pixels represented by each grid cell. Then, we create L × h × h memory slots {m*l,i,j*} - one corresponding to each word l for each grid-cell (*i, j*). These slots are our region-contextualized representations of each word, and each slot uses a separate memory writing gate g w l,i,j to fuse information from each grid-cell (*i, j*) with each word feature wl. $$g_{l,i,j}^{w}(R_{k-1},w_{L})=\sigma\left(A*w_{l}+B_{i,j}*C_{i,j}\right)\tag{6}$$ $$m_{l,i,j}=M_{w}(w_{l})\odot g_{l,i,j}^{w}+M_{c}(C)_{i,j}\odot(1-g_{l,i,j}^{w})\tag{7}$$ The grid-level features C are encoded using a 2d convolution operation Mc (with stride 1 and Nm output filters) and we use a common 1x1 convolution operation Mw to encode all word vectors to a Nm dimensional space. A and Bi,j are 1 × Nw and 1 × Nr matrices respectively. Key Addressing: In this step, we compute attention weights {α*l,i,j,a,b*} over our region-contextualized text-representations {m*l,i,j*}. The dimensions (*a, b*) index pixels within grid-cell (*i, j*), so that each slot m*l,i,j* gets a matrix α*l,i,j* : p × p of attention weights. Each weight is computed as a similarity probability between a key-encoding of the slot: ϕK(ml)ij and a query vector: q*i,j,a,b*, where ϕK(.) is a 2d conv. operation of stride 1 and (Nr + p 2) output filters. $$\alpha_{l,i,j,a,b}=\frac{\exp(\phi_{K}(m_{l})_{i,j}*q_{i,j,a,b})}{\sum_{l=1}^{L}\exp(\phi_{K}(m_{l})_{i,j}*q_{i,j,a,b})}\tag{8}$$ In the case of single headed image refinement, we use the previous image features Rk−1 to obtain the query vectors. A query vector q*i,j,a,b* is made up of three components, 1)A global-level query: q global, 2)A grid-level query: q grid i,j , and 3)A pixel-level query: q pixel i,j,a,b. To obtain these three components, we encode Rk−1 using three separate 2d convolution operations: ϕQglobal (.), ϕQ*grid* (.) and ϕQ*pixel* (.), each with a stride of 1 and Nr output filters. $$Q^{global}=\phi_{Q^{local}}(R_{k-1})\quad;\quad Q^{grid}=\phi_{Q^{grid}}(R_{k-1})\quad;\quad Q^{pixel}=\phi_{Q^{pixel}}(R_{k-1})\tag{9}$$ Then, the average of all pixel features of Q*global* becomes the global-level query component q global. The average of pixel features within the grid cell (i, j) of Q*grid* becomes the grid-level query q grid i,j , and the pixel feature at location (*a, b*) within grid cell (*i, j*) is extracted from Q*pixel* to give us the pixel-level query component q pixel i,j,a,b. $$q^{global}=\frac{1}{|s\times s|}\sum_{u=1}^s\sum_{v=1}^s Q^{global}_{u,v}\quad;$$ $$q^{grid}_{i,j}=\frac{1}{|p\times p|}\quad\sum_{u=(i-1)*p+1}^{i*p}\sum_{v=(j-1)*p+1}^{j*p}Q^{grid}_{u,v}$$ $$q^{pixel}_{i,j,a,b}=Q^{local}_{h(i,a),\ h(j,b)}\quad;\quad\quad h(i,a)=(i-1)*p+a$$ hence the joint label contains (which is called (i,j)). To obtain the final $$(10)$$ $$(11)$$ $$\left(12\right)$$ $$(13)$$ Where (h(i, a), h(*j, b*)) indexes the pixel at location (*a, b*) within grid-cell (*i, j*). To obtain the final query q*i,j,a,b*, we concatene these three components together. Value Reading: In the value reading step, for each pixel (*a, b*) within a grid-cell (*i, j*), we compute a weighted sum of value-encoded memory slots: ϕV (ml)ij along the word dimension l. $$e_{i,j,a,b}=\sum_{l=1}^{L}\alpha_{l,i,j,a,b}\cdot\phi_{V}(m_{l})_{i,j}\tag{1}$$ $$(14)$$ ϕV (.) is a 2d convolution operation with stride 1 and Nr output filters. We now have ei,j : p × p × Nr dimensional matrices - each of which corresponds to a single grid cell of our coarse h×h grid map. To obtain s × s fine-grained refinement features, we apply the mapping: $$O_{h(i,a)}\;,\;h(j,b)=e_{i,j,a,b}\tag{1}$$ Where h(*., .*) is the function defined in Eq.12. That is, we populate each grid cell with |p × p| vectors of Nr dimensionality. Since p = s/h, we are left with a set of refinement features O = {ou,v} that are made up of |s × s| vectors of Nr dimensionality, each corresponding to a single pixel. ## 3.3 Iterative Multi-Headed Mechanism (Imhm) Current methods generate refinement features only once at each refinement stage and attempt to address all image aspects in a single-shot. This limits the precision with which each refinement stage can learn to improve the prior image. In order to make it easier for each refinement stage to precisely address multiple image aspects, we introduce a novel iterative multi-headed mechanism that makes multiple distinct modifications to the prior image features under common discriminator feedback. Each head of our mechanism has a separate spatial dynamic memory module formed from Rk−1 and W. For the first refinement head, we use the previous image features Rk−1 to obtain a query matrix and extract a set of refinement features O1 exactly as described in Section 3.2. Then, we fuse O1 and Rk−1 using the fusion mechanism introduced by (Zhu et al., 2019) (described in Section 7.1) to obtain an updated set of image features U1. If we use only a single refinement head, then this becomes our response for the refinement stage k. However, if we use more than one refinement head, then for the next head, we use U1 to obtain a query matrix. That is, we follow the same mechanism outlined in Section 3.2, but replace Rk−1 with U1 in Eq.9. Doing so, we extract a second set of refinement features O2, and we fuse O2 and U1 to obtain updated image features U2. We proceed in this iterative fashion until we have used all of our refinement heads. The final updated image features are fused with the original image features Rk−1 in a skip-connection to obtain the final response of the refinement stage k. That is, if we have T refinement heads: $$U_{t}=f u s e(U_{t-1},\;O_{t},\;P_{t},\;\rho_{t})$$ $response_{k}=f u s e(U_{T},\;R_{k-1},\;P^{s k i p},\;\rho^{s k i p})$ (1) Notice, we use separate parameter matrix and and bias terms P and ρ for each fusion operation, so that we combine refinement features and image features in a custom way for each head. The, *response*k is passed through several residual blocks (He et al., 2016) and an upsampling block to obtain higher resolution image features Rk. Each block consists of a nearest neighbor upsampling layer and a 3×3 convolution operation. Finally, a refined image xk is predicted from Rk using a 3×3 convolution operation. $$\begin{array}{l}{(15)}\\ {\quad(16)}\end{array}$$ ## 4 Experiments Datasets: We evaluate our method on the Caltech-UCSD Birds 200 (CUB) dataset (Wah et al., 2011) and the Microsoft Common Objects in Context (COCO) dataset (Lin et al., 2014). The CUB dataset contains 8,855 training images and 2,933 test images, with 10 corresponding text descriptions for each image. The COCO dataset, contains 82,783 training images and 40,504 test images, with 5 corresponding text descriptions for each image. We preprocess the datasets according to the methods introduced by (Zhang et al., 2017). Evaluation metrics: To evaluate the realism of images, we rely on the Fréchet Inception Distance (FID) (Heusel et al., 2017). FID computes the distance between synthetic and real-world image distributions based on features extracted from a pre-trained Inception v3 network. A lower FID indicates greater image-realism. To evaluate the relevance of a synthetic image to it's generating text-description, we rely on the R-precision introduced by (Xu et al., 2018). R-precision is computed as the mean accuracy of using each synthetic image to retrieve one ground truth text-description from among 100 candidates. To evaluate a model on a dataset, we generate 30,000 synthetic images conditioned on text descriptions from the unseen test set. Implementation Details: To obtain a sentence-level vector and word-level vectors for a given text description, we use the pretrained bidirectional LSTM text encoder employed by AttnGAN (Xu et al., 2018). Our MTWIG stage synthesizes images features with 64x64 resolution. Two refinement stages refine these features to 128x128 and 256x256 resolution respectively. At refinement stages, we use T = 6 refinement heads and we use h = 8 to divide each input image into a coarse 8 × 8 grid map. We use the same discriminator network architecture employed by (Zhu et al., 2019). Our objective Function is largely the same as that used by previous methods (Xu et al., 2018; Li et al., 2019a; Zhu et al., 2019) with the addition of a redundancy loss to encourage each head of refinement to focus on different image aspects. Further implementation details are provided in the Appendix. ## 4.1 Ablative Experiments Effectiveness of Multi-Tailed Word-level Initial Generation: In our experiments (Appendix), we find that our MTWIG stage is most effective for single stage generation when used with a kernel size of 3, so that we generate a separate tail for each word tri-gram. To evaluate the effectiveness of our MTWIG(ks=3) stage in the context of multi-stage models, we train our MTWIG(ks=3) method with DM-GAN (Zhu et al., 2019) style-refinement stages for 700 epochs on the CUB dataset, and observe that it provides a better basis for word-level refinement, decreasing the FID score achieved by the original DM-GAN model by 7.72% and increasing R-precision by 2.76% (Table 1). Figure 2 shows the improved visual quality of the refined images. We again point out that previous multi-stage methods (Zhang et al., 2017; 2018; Xu et al., 2018; Li et al., 2019a; Zhu et al., 2019) all rely on the same type of initial generation stage, and we expect a similar boost in performance if we replace their initial stage with ours. Effectiveness of Spatial Dynamic Memory: In order to evaluate the effectiveness of our SDM basedrefinement stages, we compare a multi-stage model that uses our MTWIG(ks=3) stage and DM-GAN's refinement stages against a multi-stage model that uses our MTWIG(ks=3) stage and our single-headed SDM-based refinement stages. Both models are trained on the CUB dataset for 700 epochs. We observe that our SDM-based refinement out-performs DM-GAN's refinement, decreasing FID score by 4.64% , and boosting R-precision by an additional 0.83% (Table 1). Figure 2 shows that SDM-based refinement generates images of better visual quality than DM-GAN's refinement stages for the same initial generation architecture. Effectiveness of the Iterative Multi-Headed Mechanism: To evaluate the effectiveness of our IMHM refinement, we compare the model that uses MTWIG(ks=3) and single-headed SDM-based refinement stages against our full MSMT-GAN model - that uses MTWIG(ks=3) and six SDM-based refinement heads for each stage. As before, both models are trained for 700 epochs on the CUB dataset. We find that refinement stages that use our multi-headed mechanism out-perform single-headed refinement stages, decreasing FID score 2.38%, and boosting R-precision by another 1.03% (Table 1). Visually, we find that images generated by our IMHM refinement posses text-irrelevant content that is far more detailed than that observed in images generated by single-headed refinement stages (Figure 2). | | CUB | | |---------------------------------------------|-------|--------------| | Method | FID ↓ | R-prcn (%) ↑ | | DM-GAN | 11.91 | 76.58 ± 0.53 | | MTWIG w/ DM-GAN's refinement | 10.99 | 79.37 ± 0.73 | | MTWIG w/ SDM refinement | 10.48 | 80.20 ± 0.67 | | MTWIG w/ SDM and IMHM refinement (MSMT-GAN) | 10.23 | 81.23 ± 0.68 | ![7_image_0.png](7_image_0.png) | Method | #Param. CUB | #Param. COCO | #Train V100 GPUs required | |--------------|---------------|----------------|-----------------------------| | AttnGAN | 9.16M | 22.43M | 1 | | ControlGAN | 30.72M | 45.36M | 1 | | DM-GAN | 23.52M | 30.96M | 1 | | DF-GAN | 14.32M | 20.87M | 1 | | Our MSMT-GAN | 48.11M | 55.16M | 1 | | XMC-GAN | - | >111M | >3 | Figure 2: Comparison of DM-GAN with ablative versions of our model trained for 700 epochs on the CUB dataset. ## 4.2 Comparison With State Of The Art To compare our architecture against the previous state-of-the-art for the task of text-to-image generation, we train MSMT-GAN models for 1000 epochs on the CUB dataset and for 210 epochs on the COCO dataset. As shown in Table 3, our MSMT-GAN decreases the previous lowest reported FID score by 21.58% on the CUB dataset -marking a significant improvement in image realism, and also boosts the previous best reported CUB R-precision by 4.24% (placing it 8 standard deviations ahead of the previous state-of-the-art) - marking a large improvement in the similarity of synthetic images to their generating text. As shown in Table 2 and Table 3, our model is comparable in size to previous methods, and outperforms the next closest contender of similar size for COCO (DM-GAN) by 4.21% on FID score -making it highly competitive with the current state-of-the-art for COCO. We also observe a slight improvement of 0.23% on COCO Rprecision. Qualitatively, we observe that synthetic images generated by our model are typically sharper and more realistic than those generated by prior methods (Figure 3). In particular, we observe that our method generates synthetic images that possess greater detail and that are better matches for the generating text. | on CUB and 210 epochs on COCO. | CUB | COCO | | | |----------------------------------|-------|--------------|-------|--------------| | Method | FID ↓ | R-prcn (%) ↑ | FID ↓ | R-prcn (%) ↑ | | AttnGAN1 | 14.01 | 67.82 ± 4.43 | 29.53 | 85.47 ± 3.69 | | ControlGAN | - | 69.33 ± 3.23 | - | 82.43 ± 2.43 | | DF-GAN1 | 13.48 | - | 33.29 | - | | DM-GAN1 | 11.91 | 76.58 ± 0.53 | 24.24 | 92.23 ± 0.37 | | Our MSMT-GAN | 9.34 | 80.82 ± 0.54 | 23.22 | 92.46 ± 0.28 | ![8_image_0.png](8_image_0.png) Figure 3: Comparison of MSMT-GAN with state-of-the art models on the CUB and COCO datasets. We attribute the larger improvement for CUB over COCO to the greater spatial-structure of image attributes in CUB images over the loosely coupled nature of objects in the COCO dataset, and point out that that accounting for detailed spatial structure over image attributes is a key issue tackled by our MTWIG and SDM modules. 1We make our comparisons against pretrained models released by the authors, and we report results using the official implementations of FID score. Note on evaluation metrics in Appendix. ## 5 Broader Impact Similar to other GAN models, we reflect that one may synthesize realistic-looking artificial data to maliciously fool human-beings, and lower-resource models makes it potentially easier for offenders to do so. However, we also note that the ability to quickly and cheaply train text-to-image models from scratch with better quality for niche domains offers significant benefits to human-computer-interaction and computer-aideddesign, democratizing a technology that otherwise might be limited to those few groups with high-resources. ## 6 Conclusion In this work, we revisited the problem of training text-to-image generation models from scratch using a single-V100-GPU and less than 100 million network parameters, pointing out that there are still significant gains to be made by rethinking the design of such methods. We identified three limitations of previous lowresource models and proposed the unified MSMT-GAN architecture to address them. First, we introduced a novel Multi-Tailed Word Level Initial Generation stage (MTWIG) that explicitly generates separate sets of image features for each word n-gram. Second, we proposed a novel Spatial Dynamic Memory (SDM) module to contextualize text representations by image-region. Third, we introduced a novel Iterative Multi-Headed Mechanism (IMHM) of image refinement to make it easier for each refinement stage to precisely address multiple image aspects. Our ablative experiments demonstrate that these three separate components work well together and that the addition of each component to the pipeline boosts generation performance on the CUB dataset. Benchmarking experiments further demonstrate that our MSMT-GAN model significantly out-performs the previous state of the art amongst low-resource models on the CUB dataset, decreasing the lowest reported FID score by 21.58% and boosting R-precision ahead by 8 standard deviations over the previous state-of-the-art. On the COCO dataset, we have demonstrated that MSMT-GAN is highly competitive with current methods based on image realism and model resources. In future work, we aim to design a discriminator model that provides more region-specific feedback than existing methods, to use in conjunction with our MSMT-GAN. ## References Jun Cheng, Fuxiang Wu, Yanling Tian, Lei Wang, and Dapeng Tao. Rifegan: Rich feature generation for text-to-image synthesis from prior knowledge. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 10911–10920, 2020. Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, and Aniruddha Kembhavi. X-lxmert: Paint, caption and answer questions with multi-modal transformers. *arXiv preprint arXiv:2009.11278*, 2020. Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In *Advances in neural information processing systems*, pp. 1486–1494, 2015. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34, 2021. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, pp. 6626–6637, 2017. Tobias Hinz, Stefan Heinrich, and Stefan Wermter. Semantic object accuracy for generative text-to-image synthesis. *arXiv preprint arXiv:1910.13321*, 2019. Diederik P Kingma and Jimmy Ba. Adam: A methodfor stochastic optimization. In International Conference onLearning Representations (ICLR), 2015. Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip Torr. Controllable text-to-image generation. In Advances in Neural Information Processing Systems, pp. 2065–2075, 2019a. Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, and Jianfeng Gao. Object-driven text-to-image synthesis via adversarial training. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12174–12182, 2019b. Jiadong Liang, Wenjie Pei, and Feng Lu. Cpgan: Content-parsing generative adversarial networks for textto-image synthesis. In *European Conference on Computer Vision*, pp. 491–508. Springer, 2020. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740–755. Springer, 2014. Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. *arXiv preprint arXiv:1606.03126*, 2016. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. *arXiv preprint arXiv:2112.10741*, 2021. Tingting Qiao, Jing Zhang, Duanqing Xu, and Dacheng Tao. Learn, imagine and create: Text-to-image generation from prior knowledge. *Advances in Neural Information Processing Systems*, 32:887–897, 2019. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. *arXiv preprint arXiv:1511.06434*, 2015. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *International Conference on Machine Learning*, pp. 8821–8831. PMLR, 2021. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. *arXiv preprint arXiv:1605.05396*, 2016. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022. Hongchen Tan, Xiuping Liu, Xin Li, Yi Zhang, and Baocai Yin. Semantics-enhanced adversarial nets for text-to-image synthesis. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 10501–10510, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. 2011. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. *arXiv preprint arXiv:1410.3916*, 2014. Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1316–1324, 2018. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In *Proceedings of the IEEE international conference on computer vision*, pp. 5907–5915, 2017. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan++: Realistic image synthesis with stacked generative adversarial networks. *IEEE* transactions on pattern analysis and machine intelligence, 41(8):1947–1962, 2018. Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 833–842, 2021. Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In *Proceedings of the IEEE Conference on Computer Vision and* Pattern Recognition, pp. 5802–5810, 2019. ## 7 Appendix 7.1 Fusing Two Sets Of Image Features Together Given any two sets of image/refinement features A = {au,v} and B = {bu,v} such that au,v, bu,v ∈ R Nr, we can use the adaptive gating mechanism introduced by (Zhu et al., 2019) to obtain a combined/updated set of features {ru,v}, where ru,v ∈ R Nr. $$\begin{array}{l}{{g_{u,v}^{r}=\sigma(P*c o n c a t(a_{u,v},\;b_{u,v})+\rho)}}\\ {{r_{u,v}=a_{u,v}\odot g_{u,v}^{r}+b_{u,v}\odot(1-g_{u,v}^{r})}}\end{array}$$ $$\left(17\right)$$ $$\left(18\right)$$ Where (*u, v*) index pixel locations, Nr is the dimension of image pixel features, g r u,v is a gate for information fusion, σ is the sigmoid function and P and ρ are the parameter matrix and bias term respectively. In our paper, we invoke this adaptive gating mechanism by a function "*fuse*" such that fuse(*A, B, P, ρ*) = {ru,v}. ## 7.2 Analysis Of Multi-Tailed Word-Level Initial Generation | (ks) on the CUB and COCO datasets. CUB | COCO | | | | |------------------------------------------|--------|--------------|-------|--------------| | Method | FID ↓ | R-prcn (%) ↑ | FID ↓ | R-prcn (%) ↑ | | IG | 119.39 | 83.54 ± 0.57 | 51.22 | 94.57 ± 0.53 | | MTWIG (ks=1) | 118.49 | 82.78 ± 0.58 | 41.79 | 94.47 ± 0.49 | | MTWIG (ks=2) | 120.76 | 82.82 ± 0.92 | 40.23 | 94.65 ± 0.39 | | MTWIG (ks=3) | 115.38 | 82.94 ± 0.74 | 39.85 | 94.80 ± 0.29 | To analyze the effectiveness of our MTWIG stage, we make comparisons with the inital generation (IG) stage employed by previous methods (Zhang et al., 2017), (Zhang et al., 2018), (Xu et al., 2018), (Li et al., 2019a), (Zhu et al., 2019). We train IG and MTWIG stages without refinement for 300 epochs on the CUB dataset and for 60 epochs on the COCO dataset. Table 4 shows a quantitative comparison between IG and MTWIG architectures that use different kernel sizes. We observe that our MTWIG method achieves best results for ks=3 (that is, by forming separate image-feature sets for word tri-grams), decreasing FID scores from the previous IG method by 3.36% on the CUB dataset and by 22.2% on the COCO dataset. In single-stage generation, the larger improvement observed on the COCO dataset highlights that our MTWIG method is most beneficial for complex scene generation, where the presence of a large number of distinct objects ![12_image_0.png](12_image_0.png) Figure 4: Multi-Tailed Word-level Initial Generation (MTWIG; kernel size=3) in comparison to conventional Initial Generation (IG). demands word-level separation of attributes. Our MTWIG stages also achieves competitive R-precision scores on both datasets, demonstrating that images synthesized by our method are well conditioned on their input text-descriptions. In Figure 4, we visually compare 64x64 images generated by IG and images generated by our MTWIG(ks=3) model. We observe that images generated by our method typically posses object attributes that are better separated and more clearly discernible. ## 7.3 Additional Implementation Details We set Nw = 256, Nr = 48 and Nm = 96 to be the dimension of text, image and memory feature vectors respectively. We set the hyperparameters {λ1 = 1, λ2 = 0.5, λ3 = 5} for the CUB dataset and {λ1 = 1, λ2 = 0.5, λ3 = 50} for the COCO dataset. All networks are trained using an ADAM optimizer (Kingma & Ba, 2015) with β1 = 0.5 and β2 = 0.999. The learning rate is set to be 0.0002 and we use a batch size of 20 for all networks. We train our models on a single Tesla V100 gpu, and observe that similar to previous multi-stage methods, training our MSMT-GAN method is a GPU intensive process. We require approximately 27min for one epoch of the CUB dataset, and approximately 4.7 hours for one epoch of the COCO dataset. ## 7.4 Objective Function The objective function for our generator network is defined as: $$L=\lambda_{1}L_{CA}+\sum_{k=2}^{K}\lambda_{2}L_{RED_{k}}+\sum_{k=1}^{K}\left(L_{G_{k}}+\lambda_{3}L_{DAMS_{k}}\right)\tag{19}$$ $L$ is the $L$-norm of $L$. The $L$-norm of $L$ is the $L$-norm of Where, LCA denotes the conditioning augmentation loss (Zhang et al., 2017), Gk denotes the generator of the k th stage so that LGk denotes the adversarial loss for Gk, LREDk denotes our redundancy loss for the k th stage, and L*DAMSM*k denotes the DAMSM text-image matching loss (Xu et al., 2018) for the k th stage. λ1, λ2 and λ3 are hyperparameters that combine the various losses. Redundancy Loss: To encourage each head of a refinement stage to focus on separate image aspects, we average region-wise information of each head's output refinement features and penalize similarity between different refinement heads. That is, for T refinement heads: $$f(t)=\frac{1}{|s\times s|}\sum_{u=1}^{s}\sum_{v=1}^{s}o_{u,v}$$ $$L_{RED_{k}}=\sum_{i=1}^{T}\sum_{j=i+1}^{T}sim(f(i),f(j))$$ $$(20)$$ $$(21)$$ Where ou,v ∈ Ot in stage k (see Section 3.3) and sim is the cosine similarity between vectors. We call this sum of pairwise similarity LREDk the "redundancy loss" of the k th refinement stage. Adversarial Loss: The adversarial loss for Gk is defined as: $${\cal L}_{G_{k}}=-\frac{1}{2}[\mathbb{E}_{x\sim p_{G_{k}}}\log D_{k}(x)+\mathbb{E}_{x\sim p_{G_{k}}}\log D_{k}(x,s)]$$ Dk is the discriminator network for the k th stage of generation. The first term provides feedback on image realism independent of the input text, and the second term provides feedback on the realism of the image in light of the input text. Alternate to adversarial training of Gk, each discriminator Dk is trained to classify images as real or fake by minimizing the discriminator loss LDk . $$L_{D_{k}}=\underbrace{-\frac{1}{2}\log n-p_{data,k}\log D_{k}\left(x\right)+L_{b}\log p_{G_{k}}\log\left(1-D_{k}\left(x\right)\right)}_{\text{reconditional loss}}+\underbrace{-\frac{1}{2}\log n-p_{data,k}\log D_{k}\left(x,y\right)+L_{b}\log p_{G_{k}}\log\left(1-D_{k}\left(x,y\right)\right)}_{\text{reconditional loss}}\tag{22}$$ $$(22)$$ Where the unconditional component distinguishes synthetic and real images independent of the input text, and the conditional component distinguishes them in light of the input text. ## 7.5 Note On Evaluation Metrics We point out that there is inconsistency in the FID implementation used to evaluate prior methods. While some methods (Li et al., 2019b) report scores using the official Tensorflow version of FID - which uses the weights of a pretrained Tensorflow inception model, other methods (Zhu et al., 2019; Zhang et al., 2021) have reported scores using an unofficial Pytorch implementation of FID - that uses the weights of a pretrained Pytorch inception model. Each of these implementations computes different values of FID scores for the same sets of images, and scores computed from the two versions are not directly comparable with each other. We highlight that the correlation between FID score and human judgement is an empirical observation, that is dependent on the weights of the pretrained inception model used. The correlation with human judgement has **only** been shown to hold true for the weights of the pretrained Tensorflow inception model (used by the FID authors -(Heusel et al., 2017)), and has not been verified for the unofficial Pytorch inception model (which has different weights). As such, to ensure that we correlate with human judgement, in our paper we only report scores using the official Tensorflow implementation of FID score - computing values for prior work from the pretrained models released by their authors. In Table 5 below, we additionally report Pytorch-implementation FID scores of SEGAN (Tan et al., 2019) and XMC-GAN (Zhang et al., 2021) models, as reported by their authors. It was not possible for us to recompute these scores using the official Tensorflow version of FID as pretrained models for these methods have not been made publicly available. We again highlight, that the FID scores computed by the official Tensorflow implementation are not directly comparable with the scores computed by the unofficial Pytorch implementation. In Table 5, we additionally benchmark the performance of models on the Inception Score (IS) (Goodfellow et al., 2014). We note however, that the FID score (which is reference-based and compares the distributions of real and synthetic images together) has been observed to be more consistent with human judgement of image realism than IS (which is reference-free and does not make comparisons to real images) (Heusel et al., 2017). We were not able to recompute other metrics for RiFeGAN (Cheng et al., 2020) and LeciaGAN (Qiao et al., 2019) as the pretrained models have not been made publicly available. In Table 5, "-" represents cases where the data was not reported or is reported in a manner which is noncomparable (besides FID values) and we note that methods which appear in the comparison for one dataset (CUB/COCO) and not for another reported no data for the missing dataset or reported no data that can be readily compared. 1We make our comparisons against the pretrained models released by the authors, and we report results using the official implementations of FID score. 3 We use the FID score reported by the paper, but note that this was computed using an unofficial Pytorch implementation of FID - which is not directly comparable with the official FID implementation scores. See Section 7.5 for further details. | on CUB and 210 epochs on COCO. CUB | | | COCO | | | | |--------------------------------------|--------|--------------|-------------|-------|--------------|--------------| | Method | FID ↓ | R-prcn (%) ↑ | IS ↑ | FID ↓ | R-prcn (%) ↑ | IS ↑ | | AttnGAN1 | 14.01 | 67.82 ± 4.43 | 4.36 ± 0.03 | 29.53 | 85.47 ± 3.69 | 25.89 ± 0.47 | | ControlGAN | - | 69.33 ± 3.23 | 4.58 ± 0.09 | - | 82.43 ± 2.43 | 24.06 ± 0.60 | | SEGAN | 18.163 | - | 4.67 ± 0.04 | 32.28 | - | 27.86 ± 0.31 | | RiFeGAN | - | - | 5.23 ± 0.09 | - | - | - | | LeciaGAN | - | - | 4.62 ± 0.06 | - | - | - | | DM-GAN1 | 11.91 | 76.58 ± 0.53 | 4.71 ± 0.06 | 24.24 | 92.23 ± 0.37 | 32.43 ± 0.58 | | DF-GAN1 | 13.48 | - | 4.70 ± 0.06 | 33.29 | - | 18.64 ± 0.64 | | XMC-GAN | - | - | - | 9.333 | - | 30.45 | | Our MSMT-GAN | 9.34 | 80.82 ± 0.54 | 4.55 ± 0.06 | 23.22 | 92.46 ± 0.28 | 28.91 ± 0.35 | ## 7.6 More Example Images Figures 5 and 6 below show more images generated by our method in comparison to prior work on the CUB and COCO datasets. Again, we observe that synthetic images generated by MSMT-GAN are typically sharper and more realistic than those generated by previous methods. We also observe that the synthetic images generated by MSMT-GAN are usually a better match for the generating text. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) Figure 5: Extended comparison of MSMT-GAN with state-of-the art models on the CUB dataset. ![16_image_0.png](16_image_0.png) Figure 6: Extended comparison of MSMT-GAN with state-of-the art models on the COCO dataset.
Review 1: Summary: The papers main contribution is a relatively small text-to-image model (<100M parameters) which produces qualitatively and quantitatively better images on the Caltech-UCSD Birds 200 (CUB) dataset and minor improvements on MS-COCO. Strengths and Weaknesses: ### Strengths + Paper is straightforward to read and presents evidence for the claims it makes + Improving on "small" models that can be trained on a single GPU is a worthwhile goal ### Weaknesses - The definition of "small" (<100M parameters) seems a bit arbitrary and "convenient" for the authors' claim - Improvements mostly limited to the birds dataset; generality less clear Requested Changes: Overall the paper is well organized, well written and straightforward to follow. The claims presented are backed by convincing experiments, e.g. showing that the individual components the authors add each actually contribute to improving image synthesis performance on CUB. My two main criticisms and requests for change are the following: 1. Definition of "small": The 100M parameter boundary seems reasonable, but at the same time just too convenient for the authors' analysis. While their own model is twice as big as the next best in their list of competitors (DM-GAN), but the 111M parameter XMC-GAN, which itself is roughly twice as big as the authors' model is excluded because it is >100M parameters. It's quite clear that there is a strong correlation between model size and performance, so the key question is whether the authors' model is particularly efficient or just on the same line as most previous models. I think rather than defining arbitrary cutoffs, it would make more sense to present the results as a plot with number of parameters on the x axis and performance on the y axis and include also bigger models such as XMC-GAN. Such a presentation of the data would give the reader a better impression of the model landscape. Given that XMC-GAN is quite substantially better on COCO (FID ca. 10 vs. 23 for the authors' model), I suspect that the authors' model actually does not really present an improvement over the state of the art on COCO. 1. Improvements mostly limited to CUB: Given the point above, I think the model mostly presents an improvement on the CUB dataset. I think this is totally fine and the authors also acknowledge this fact to some extent in the paper. However, there is no mention of the scope of the contributions in the abstract, which sounds much more general than the paper actually delivers. I suggest to fix that and mention already in the abstract the focus on CUB. Broader Impact Concerns: No ================================================== Review 2: Summary: This paper tackles text-to-image generation with limited GPU resource. The proposed method introduces 1) a word-level initial stage, 2) spatially varying words, and 3) an iterative multi-head mechanism for image refinement. Strengths and Weaknesses: Strengths 1. Ablation study demonstrates effectiveness of spatial dynamic memory and the iterative multi-head mechanism. 1. The proposed method outperforms existing methods with <100M params. Weaknesses 1. Why do we need a low-resource model with inferior performance? An alternative choice would be starting from high-resource models with superior performance and then fine-tuning or introducing some sort of domain adaptation. It is a common approach in the literature [stylegan2-ada, eg3d, diffusionclip] 1. Is stable diffusion larger than 100M params? 1. Why is the memory mechanism suitable for the refinement head? 3.1 and 3.3 start with briefly explaining the limitations of existing methods but 3.2 does not. 1. Why is iterative refinement always better than single-shot refinement? 1. Figure 1 does not match the corresponding description and has unnecessary complexity. E.g., fuse component can be simply one box without the inner circuit. 1. Would the proposed method improve when combined with T5 or CLIP text encoder? 1. The datasets are outdated: CUB and COCO. Please consider using LAION. 1. Improvements in Table 1 are minor. 1. Values in the last row of Table 1 and CUB in Table 3 are different. Mistake? 1. How do we know whether MTWIG really reflects the words or not? 1. I wonder how much the multiple refinement steps actually helps. Would the image prediction produce gradually improving images over the refinement steps? 1. Writing should be clearer and more compact. Please separate long sentences and trim out unimportant descriptions. < Minor > However, current methods suffer three important limitations. -> However, current methods suffer from three important limitations. … words-level details are are inherently … -> … word-level details are inherently … … and point out that that accounting for … -> … and point out that accounting for … Requested Changes: Please check the weakness part. Broader Impact Concerns: Fine. ================================================== Review 3: Summary: The paper focuses on the text-to-image synthesis task with low-resource models. The authors first argue that there are three limitations of existing methods: (1) the initial stage only uses sentence-level features and does not use word-level features, (2) no constructing region-specific representations, and (3) limited precision in refinement because of using single-shot refinement. The paper proposes MSMT-GAN with three components (i.e., MTWIG, SDM, IMHM) to address each limitation, respectively. The experiments show that the proposed MSMT-GAN can improve performance against existing methods. Strengths and Weaknesses: Strengths: * The paper is well-written and easy to follow. * The code is attached for better reproducibility. Weaknesses: * **[The claim on region-specific representations and the difference between SDM and other cross-attention methods]** I am not convinced by the authors’ claim that existing methods were limited by not constructing region-specific representations. For example, in Figure 1 of AttnGAN, the authors show that the different words can attend to different spatial regions of the visual representations. ControlGAN proposes the word-level discriminator to encourage region-specific representations. DAE-GAN (Ruan et al., 2021) proposes Aspect-aware Dynamic Re-drawer for the same purpose. XMC-GAN also proposes “contrastive loss between image regions and words” based on the cosine similarity between words and image regions. On the other hand, while the authors claim the novelty of the proposed SDM module (see the summary of contribution in Section 1), it is still based on the cross-attention between text and visual representation like existing methods. Therefore, I am not convinced by the claim of the novelty of SDM. * **[Ablation study of MTWIG]** In Table 1 and Section 4.1, the paper studied the “effectiveness of Multi-Tailed Word-level Initial Generation” by replacing the refinement method in MSMT-GAN with DM-GAN’s refinement method and comparing it against the vanilla DM-GAN method. However, an ablation study to remove or replace the component within the proposed method since different components can interact with each other. The current ablation study can show that the MTWIG can improve the results for DM-GAN instead of MSMT-GAN. Therefore, the effectiveness of MTWIG within the proposed MSMT-GAN needs to be well-demonstrated. Instead, the ablation study between IG and MTWIG in Table 4 (Section 7 Appendix) should be used as the ablation study for MTWIG. In Table 4, IG outperforms MTWIG in R-precision on CUB. At the same time, IG achieves very similar results (i.e., results have overlaps in standard deviation) in R-precision on the COCO dataset. I understand that MTWIG achieves better synthesis quality in FID. However, the motivation for proposing MTWIG is to generate more details with word-level information. Achieving better R-precision to achieve better image-text alignment should be the goal. In summary, the effectiveness of MTWIG is not well-demonstrated via experiments. * **[Misleading Description of Results]** The claim that the proposed model is “comparable in size to previous methods” is misleading. In Table 2, the proposed MSMT-GAN has significantly more parameters than existing methods, e.g., ~2 times larger than DM-GAN. On the other hand, some results are bolded even with an overlap with the results of other methods (92.46+-0.28 in Table 3), which is misleading. * **[Problems in Evaluation Metrics]** There are two problems regarding evaluation metrics. First, regarding the evaluation metrics used in the paper, R-Precision (Xu et al., 2018) should not be used anymore. (Zhang et al.) show that using image-text encoders to compute R-Precision skews the results: “many generated models report R-precision scores significantly higher than real images.” The reason is that the image-text encoder was also used as the objective function—the DAMSM loss, also used by MSMT-GAN (Equation 19). Regarding FID, I appreciate the authors’ explanation of the result differences caused by PyTorch and Tensorflow implementation. However, (Parmar et al., 2022) show that anti-aliasing in image resizing plays a vital role in FID results, which is not specified in the paper. Second, some important evaluation metrics are missing in the paper. Regarding the “low-resource” aspect that the authors emphasized in the paper, I appreciate the authors reporting the number of parameters and number of GPUs in Table 2. However, the authors should also report the computational overhead in FLOPs and memory usage to better demonstrate the “low-resource” aspect of the model. Besides, there is no human evaluation to evaluate the photo-realism and image-text alignment, as in XMC-GAN (Zhang et al., 2021). * **[Missing Related Work]** While the authors mainly studied the text-to-image models with low computational resources, the authors did not discuss or compare with the other line of works focusing on transforming the pretrained unconditional StyleGAN (Karras et al., 2019) into conditional text-to-image synthesis models, e.g., CI-GAN (Wang et al., 2021), TTF-HD (Wang et al., 2021), TediGAN (Xia et al., 2021), FuseDream (Liu et al., 2021), StyleMC (Kocasari et al., 2022), StyleT2I (Li et al., 2022), LAFITE (Zhou et al., 2022). These works can also be trained with a single GPU. **References** Ruan, Shulan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, and Enhong Chen. “DAE-GAN: Dynamic Aspect-Aware GAN for Text-to-Image Synthesis.” in ICCV 2021. Xu, Tao, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. “AttnGAN: Fine-Grained Text to Image Generation With Attentional Generative Adversarial Networks.” in CVPR 2018. Zhang, Han, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. “Cross-Modal Contrastive Learning for Text-to-Image Generation.” in CVPR 2021. Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. “On Aliased Resizing and Surprising Subtleties in GAN Evaluation.” in CVPR 2022. Karras, Tero, Samuli Laine, and Timo Aila. “A Style-Based Generator Architecture for Generative Adversarial Networks.” in CVPR 2019. Wang, Hao, Guosheng Lin, Steven C. H. Hoi, and Chunyan Miao. “Cycle-Consistent Inverse GAN for Text-to-Image Synthesis.” In ACM Multimedia Conference on Multimedia Conference - MM 2021. Wang, Tianren, Teng Zhang, and Brian Lovell. “Faces a La Carte: Text-to-Face Generation via Attribute Disentanglement.” in WACV 2021. Xia, Weihao, Yujiu Yang, Jing-Hao Xue, and Baoyuan Wu. “TediGAN: Text-Guided Diverse Face Image Generation and Manipulation.” in CVPR 2021. Liu, Xingchao, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, and Qiang Liu. “FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization.” ArXiv:2112.01573 [Cs], 2021. Kocasari, Umut, Alara Dirik, Mert Tiftikci, and Pinar Yanardag. 2022. “StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and Manipulation.” In The IEEE Winter Conference on Applications of Computer Vision (WACV). Li, Zhiheng, Martin Renqiang Min, Kai Li, and Chenliang Xu. “StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis.” in CVPR 2022. Zhou, Yufan, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. “LAFITE: Towards Language-Free Training for Text-to-Image Generation.” in CVPR 2022. Requested Changes: Major requested changes that are important for acceptance: * Clarify the claim on region-specific representations and the difference between SDM and other cross-attention methods. * Clarify the effectiveness of MTWIG. * Address the misleading descriptions of results. * Evaluation metrics: * Replace R-Precision (Xu et al., 2018) with SOA (Hinz et al., 2020) on COCO and CLIP R-Precision (Park et al., 2021) on CUB. * Clarify if anti-aliasing is used for resizing when computing FID. * Add FLOPs, memory usage, and human evaluation results. * Discuss or compare with related works that leverage unconditional StyleGAN for text-to-image synthesis. Minor requested changes: It is better to move the loss function from the Appendix (Section 7.4) to the main paper, especially with a new redundancy loss proposed by the authors. Typos: Implementation details in Section 4: Our objective Function -> our objective function References Hinz, Tobias, Stefan Heinrich, and Stefan Wermter. “Semantic Object Accuracy for Generative Text-to-Image Synthesis.” TPAMI 2020. Park, Dong Huk, Samaneh Azadi, Xihui Liu, Trevor Darrell, and Anna Rohrbach. “Benchmark for Compositional Text-to-Image Synthesis.” in NeurIPS Datasets and Benchmarks Track (Round 1) 2021. Broader Impact Concerns: The broader impact statement addresses the concerns well. There are no more concerns from my perspective. ==================================================
# Mechanistic Interpretability For Ai Safety A Review Leonard Bereska Efstratios Gavves {leonard.bereska, egavves}@uva.nl University of Amsterdam Reviewed on OpenReview: *https: // openreview. net/ forum? id= ePUVetPKu6* ## Abstract Understanding AI systems' inner workings is critical for ensuring value alignment and safety. This review explores mechanistic interpretability: reverse engineering the computational mechanisms and representations learned by neural networks into human-understandable algorithms and concepts to provide a granular, causal understanding. We establish foundational concepts such as features encoding knowledge within neural activations and hypotheses about their representation and computation. We survey methodologies for causally dissecting model behaviors and assess the relevance of mechanistic interpretability to AI safety. We examine benefits in understanding, control, alignment, and risks such as capability gains and dual-use concerns. We investigate challenges surrounding scalability, automation, and comprehensive interpretation. We advocate for clarifying concepts, setting standards, and scaling techniques to handle complex models and behaviors and expand to domains such as vision and reinforcement learning. Mechanistic interpretability could help prevent catastrophic outcomes as AI systems become more powerful and inscrutable. For an HTML version of the paper, visit https://leonardbereska.github.io/blog/2024/ mechinterpreview/. ## 1 Introduction As AI systems rapidly become more sophisticated and general (Bubeck et al., 2023; Bengio et al., 2023), advancing our understanding of these systems is crucial to ensure their alignment (Ji et al., 2024) with human values and avoid catastrophic outcomes (Hendrycks et al., 2023; Hendrycks & Mazeika, 2022). The field of interpretability aims to demystify the internal processes of AI models, moving beyond evaluating performance alone. This review focuses on mechanistic interpretability, an emerging approach within the broader interpretability landscape that strives to comprehensively specify the computations underlying deep neural networks. We emphasize that understanding and interpreting these complex systems is not merely an academic endeavor - *it's a societal imperative to ensure AI remains trustworthy and beneficial*. The interpretability landscape is undergoing a paradigm shift akin to the evolution from behaviorism to cognitive neuroscience in psychology. Historically, lacking tools for introspection, psychology treated the mind as a black box, focusing solely on observable behaviors. Similarly, interpretability has predominantly relied on black-box techniques (Casper et al., 2024), analyzing models based on input-output relationships or using attribution methods that, while probing deeper, still neglect the model's internal architecture. However, just as advancements in neuroscience allowed for a deeper understanding of internal cognitive processes, the field of interpretability is now moving towards a more granular approach. This shift from surface-level analysis to a focus on the internal mechanics of deep neural networks characterizes the transition towards inner interpretability (Räuker et al., 2023). Mechanistic interpretability, as an approach to inner interpretability, aims to completely specify a neural network's computation, potentially in a format as explicit as pseudocode (also called *reverse engineering*), striving for a granular and precise understanding of model behavior. It distinguishes itself primarily through ![1_image_0.png](1_image_0.png) Figure 1: Interpretability paradigms offer distinct lenses for understanding neural networks: **Behavioral** analyzes input-output relations; **Attributional** quantifies individual input feature influences; **Concept-based** identifies high-level representations governing behavior; **Mechanistic** uncovers precise causal mechanisms from inputs to outputs. its *ambition* for comprehensive reverse engineering and its strong *motivation* towards AI safety. Our review serves as the first comprehensive exploration of mechanistic interpretability research, with the most accessible introductions currently scattered in a blog or list format (Olah, 2022; Nanda, 2022d; Olah et al., 2020; Sharkey et al., 2022a; Olah et al., 2018; Nanda, 2023f; 2024). Concurrently, Ferrando et al. (2024) and Rai et al. (2024) have also contributed valuable reviews giving concise, technical introductions to mechanistic interpretability in transformer-based language models. Our work complements these efforts by synthesizing the research (addressing the "research debt" (Olah & Carter, 2017)) and providing a structured, accessible, and comprehensive introduction for AI researchers and practitioners. The structure of this paper provides a cohesive overview of mechanistic interpretability, situating the mechanistic approach in the broader interpretability landscape (Section 2), presenting core concepts and hypotheses (Section 3), explaining methods and techniques (Section 4), presenting a taxonomy and survey of the current field (Section 5), exploring relevance to AI safety (Section 6), and addressing challenges (Section 7) and future directions (Section 8). ## 2 Interpretability Paradigms From The Outside In We encounter a spectrum of interpretability paradigms for decoding AI systems' decision-making, ranging from external black-box techniques to internal analyses. We contrast these paradigms with mechanistic interpretability, highlighting its distinct causal bottom-up perspective within the broader interpretability landscape (see Figure 1). Behavioral interpretability treats the model as a black box, analyzing input-output relations. Techniques such as minimal pair analysis (Warstadt et al., 2020), sensitivity and perturbation analysis (Casalicchio et al., 2018) examine input-output relations to assess the model's robustness and variable dependencies (Shapley, 1988; Ribeiro et al., 2016; Covert et al., 2021). Its *model-agnostic* nature is practical for complex or proprietary models but lacks insight into internal decision processes and causal depth (Jumelet, 2023). Attributional interpretability aims to explain outputs by tracing predictions to individual input contributions using gradients. Raw gradients can be discontinuous or sensitive to slight perturbations. Therefore, techniques such as SmoothGrad (Smilkov et al., 2017) and Integrated Gradients (Sundararajan et al., 2017) average across gradients. Other popular techniques are layer-wise relevance propagation (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017), or GradCAM (Selvaraju et al., 2016). Attribution enhances transparency by showing input feature influence without requiring an understanding of the internal structure, enabling decision validation, compliance, and trust while serving as a bias detection tool, but also has fundamental limitations (Bilodeau et al., 2024). ![2_image_0.png](2_image_0.png) Figure 2: Overview of key concepts and hypotheses in mechanistic interpretability, organized into four subsection (pink boxes): defining features (Section 3.1), representation (Section 3.2), computation (Section 3.3), and emergence (Section 3.4). In turquoise, it highlights definitions like features, *circuits*, and *motifs*, and in orange, it highlights hypotheses like linear representation, superposition, universality, *simulation*, and *prediction orthogonality*. Arrows show relationships, *e.g.*, superposition enabling an alternative feature definition or universality connecting circuits and motifs. Concept-based interpretability adopts a top-down approach to unraveling a model's decision-making processes by probing its learned representations for high-level concepts and patterns governing behavior. Techniques include training supervised auxiliary classifiers (Belinkov, 2021), employing unsupervised contrastive and structured probes (see Section 4.2) to explore latent knowledge (Burns et al., 2023), and using neural representation analysis to quantify the representational similarities between the internal representations learned by different neural networks (Kornblith et al., 2019; Bansal et al., 2021). Beyond observational analysis, concept-based interpretability can enable manipulation of these representations - also called *representation engineering* (Zou et al., 2023) - potentially enhancing safety by upregulating concepts such as honesty, harmlessness, and morality. Mechanistic interpretability is a bottom-up approach that studies the fundamental components of models through granular analysis of features, neurons, layers, and connections, offering an intimate view of operational mechanics. Unlike concept-based interpretability, it aims to uncover causal relationships and precise computations transforming inputs into outputs, often identifying specific neural circuits driving behavior. This *reverse engineering* approach draws from interdisciplinary fields like physics, neuroscience, and systems biology to guide the development of transparent, value-aligned AI systems. Mechanistic interpretability is the primary focus of this review. ## 3 Core Concepts And Assumptions This section introduces the key concepts and hypotheses of mechanistic interpretability, as summarized in Figure 2. We start by defining features as the basic units of representation (Section 3.1). We then examine the nature of these features, including the challenges posed by polysemantic neurons and the implications of the superposition and linear representation hypotheses (Section 3.2). Next, we explore computation through circuits and motifs, considering the universality hypothesis (Section 3.3). Finally, we discuss the implications for understanding emergent properties, such as internal world models and simulated agents with potentially misaligned objectives (Section 3.4). ## 3.1 Defining Features As Representational Primitives Features as fundamental units of representation. The notion of a *feature* in neural networks is central yet elusive, reflecting the pre-paradigmatic state of mechanistic interpretability. We adopt the notion of *features* as the *fundamental units of neural network representations*, such that features cannot be further *disentangled* into simpler, distinct factors. These features are core components of a neural network's representation, analogous to how cells form the fundamental unit of biological organisms (Olah et al., 2020). Definition 1: Feature Features are the fundamental units of neural network representations that cannot be further decomposed into simpler independent factors. Concepts as natural abstractions. The world consists of various entities that can be grouped into categories or *concepts* based on shared properties. These concepts form high-level summaries like "tree" or "velocity," allowing compact world representations by discarding many irrelevant low-level details. Neural networks can capture and represent such *natural abstractions* (Chan et al., 2023) through their learned features, which serve as building blocks of their internal representations, aiming to capture the *concepts* underlying the data. Features encoding input patterns. In traditional machine learning, *features* are understood as characteristics or attributes derived directly from the input data stream (Bishop, 2006). This view is particularly relevant for systems focused on *perception*, where features map closely to the input data. However, in more advanced systems capable of *reasoning* with abstractions, features may emerge internally within the model as representational patterns, even when processing information unrelated to the input. In this context, features are better conceptualized as *any measurable property or characteristic of a phenomenon* (Olah, 2022), encoding abstract concepts rather than strictly reflecting input attributes. Features as representational atoms. A key property of features is their irreducibility, meaning they cannot be decomposed into or expressed as a combination of simpler, independent factors. In the context of input-related features, Engels et al. (2024) define a feature as *irreducible* if it cannot be decomposed into or expressed as a combination of statistically independent patterns or factors in the original input data. Specifically, a feature is reducible if transformations reveal its underlying pattern, which can be separated into independent co-occurring patterns or is a mixture of patterns that never co-occur. We propose generalizing this notion of irreducibility to features encoding abstract concepts not directly tied to input patterns, such that features cannot be reduced to combinations or mixtures of other independent components within the model's representations. Features beyond human interpretability. Features could be defined from a *human-centric perspective* as *semantically meaningful, articulable input patterns encoded in the network's activation space* (Olah, 2022). However, while cognitive systems may converge on similar *natural abstractions* (Chan et al., 2023), these need not necessarily align with human-interpretable *concepts*. Adversarial examples have been interpreted as non-interpretable features meaningful to models but not humans. Imperceptible perturbations fool networks, suggesting reliance on alien representational patterns (Ilyas et al., 2019). As models surpass human capabilities, their learned features may become increasingly abstract, encoding information in ways incongruent with human intuition (Hubinger, 2019a). Mechanistic interpretability aims to uncover the *actual* representations learned, even if diverging from human concepts. While human-interpretable concepts provide guidance, a non-human-centric perspective that defines features as independent model components, whether aligned with human concepts or not, is a more comprehensive and future-proof approach. ## 3.2 Nature Of Features: From Monosemantic Neurons To Non-Linear Representations Neurons as Computational Units? In the architecture of neural networks, neurons are the natural computational units, potentially representing individual features. Within a neural network representation ![4_image_0.png](4_image_0.png) monosemantic neurons: polysemantic **neurons:** Figure 3: Contrasting privileged and non-privileged bases. In a non-privileged basis, there is no reason to expect features to be basis-aligned - calling basis dimensions neurons has no meaning. In a privileged basis, the architecture treats basis directions differently - features can but need not align with neurons (Bricken et al., 2023). **Leftmost:** Privileged basis; individual features (arrows) align with basis directions, resulting in *monosemantic* neurons (colored circles). **Middle left:** Privileged basis, where despite having more features than neurons, some neurons are monosemantic, representing individual features, while others are *polysemantic* (overlapping gradients), encoding *superposition* of multiple features. **Middle right:** Nonprivileged basis where, even when the number of features equals the number of neurons, the lack of alignment between the feature directions and basis directions results in polysemantic neurons encoding combinations of features. **Rightmost:** Non-privileged, polysemantic neurons as feature directions do not align with neuron basis. h ∈ R n, the n basis directions are called neurons. For a neuron to be meaningful, the basis directions must functionally differ from other directions in the representation, forming a *privileged basis* - where the basis vectors are architecturally distinguished within the neural network layer from arbitrary directions in activation space, as shown in Figure 3. Typical non-linear activation functions privilege the basis directions formed by the neurons, making it meaningful to analyze individual neurons (Elhage et al., 2022b). Analyzing neurons can give insights into a network's functionality (Sajjad et al., 2022; Mu & Andreas, 2020; Dai et al., 2022; Ghorbani & Zou, 2020; Voita et al., 2023; Durrani et al., 2020; Goh et al., 2021; Bills et al., 2023; Huang et al., 2023). Monosemantic and Polysemantic Neurons. A neuron corresponding to a single semantic concept is called *monosemantic*. The intuition behind this term comes from analyzing what inputs activate a given neuron, revealing its associated semantic meaning or concept. If neurons were the representational primitives of neural networks, all neurons would be monosemantic, implying a one-to-one relationship between neurons and features. Comprehensive interpretability would be as tractable as characterizing all neurons and their connections. However, empirically, especially for transformer models (Elhage et al., 2022b), neurons are often observed to be polysemantic, *i.e.*, associated with multiple, unrelated concepts (Arora et al., 2018; Mu & Andreas, 2020; Elhage et al., 2022a; Olah et al., 2020). For example, a single neuron may be activated by both images of cats and images of cars, suggesting it encodes multiple unrelated concepts. Polysemanticity contradicts the interpretation of neurons as representational primitives and, in practice, makes it challenging to understand the information processing of neural networks. Exploring Polysemanticity: Hypotheses and Implications. To understand the widespread occurrence of polysemanticity in neural networks, several hypotheses have been proposed: i.) One trivial scenario would be that feature directions are orthogonal but not aligned with the basis directions (neurons). There is no inherent reason to assume that features would align with neurons in a non-privileged basis, where the basis vectors are not architecturally distinguished. However, even in a privileged basis formed by the neurons, the network could represent features not in the standard basis but as linear combinations of neurons (see Figure 3, middle right). ii.) An alternative hypothesis posits that *redundancy due to noise* introduced during training, such as random dropout (Srivastava et al., 2014), can lead to redundant representations and, consequently, to polysemantic neurons (Marshall & Kirchner, 2024). This process involves distributing a single feature across several neurons rather than isolating it into individual ones, thereby encouraging polysemanticity. iii.) Finally, the *superposition* hypothesis addresses the limitations in the network's representative capacity - the number of neurons versus the number of crucial concepts. This hypothesis argues that the limited number of neurons compared to the vast array of important concepts necessitates a form of compression. As a result, an n-dimensional representation may encode features not with the n basis directions (neurons) but with the ∝ exp(n) possible almost orthogonal directions (Elhage et al., 2022b), leading to polysemanticity. ## Hypothesis 1: Superposition Polysemanticity Neural networks represent more features than they have neurons by encoding features in overlapping combinations of neurons. Superposition Hypothesis. The *superposition* hypothesis suggests that neural networks can leverage high-dimensional spaces to represent more features than their actual neuron count by encoding features in almost orthogonal directions. Non-orthogonality means that features interfere with one another. However, the benefit of representing many more features than neurons may outweigh the interference cost, mainly when concepts are sparse and non-linear activation functions can error-correct noise (Elhage et al., 2022b). Observed model Hypothetical disentangled **model** Observed model Hypothetical disentangled **model** ![5_image_0.png](5_image_0.png) Figure 4: Observed neural networks (left) can be viewed as compressed simulations of larger, sparser networks (right) where neurons represent distinct features. An "almost orthogonal" projection compresses the highdimensional sparse representation, manifesting as polysemantic neurons involved with multiple features in the lower-dimensional observed model, reflecting the compressed encoding. Figure adapted from (Bricken et al., 2023). Toy models can demonstrate under which conditions superposition occurs (Elhage et al., 2022b; Scherlis et al., 2023). Neural networks, via superposition, may effectively simulate computation with more neurons than they possess by allocating each feature to a linear combination of neurons, creating what is known as an overcomplete linear basis in the representation space. This perspective on superposition suggests that polysemantic models could be seen as compressed versions of hypothetically larger neural networks where each neuron represents a single concept (see Figure 4). Consequently, an alternative definition of features could be: ## Definition 2: Feature (Alternative) Features are elements that a network would ideally assign to individual neurons if neuron count were not a limiting factor (Bricken et al., 2023). In other words, *features* correspond to the disentangled concepts that a larger, sparser network with sufficient capacity would learn to represent with individual neurons. ## Toy Model Of Superposition A toy model (Elhage et al., 2022b) investigates the hypothesis that neural networks can represent more *features* than the number of neurons by encoding real-world *concepts* in a compressed manner. The model considers a high-dimensional vector x, where each element xi corresponds to a feature capturing a real-world concept, represented as a random vector with varying importance determined by a weight ai. These features are assumed to have the following properties: i.) **Concept sparsity**: Real-world concepts occur sparsely. ii.) **More concepts than neurons**: The number of potential concepts vastly exceeds the available neurons. iii.) **Varying concept importance**: Some concepts are more important than others for the task at hand. The input vector x represents features capturing these concepts, defined by a sparsity level S and an importance level ai for each feature xi, reflecting the sparsity and varying importance of the underlying concepts. The model dynamics involve transforming x into a hidden representation h of lower dimension, and then reconstructing it as x ′: $\mathbf{h}=\mathbf{W}\mathbf{x}$, $\mathbf{x}^{\prime}=\mathbf{Rel}\mathbf{U}(\mathbf{W}\mathbf{h}+\mathbf{b})$. The network's performance is evaluated using a loss function L weighted by the feature importances ai, reflecting the importance of the underlying concepts: $${\mathcal{L}}=\sum_{x}\sum_{i}a_{i}(x_{i}-x_{i}^{\prime})^{2}.$$ This toy model highlights neural networks' ability to encode numerous features representing real-world concepts into a compressed representation, providing insights into the superposition phenomenon observed in neural networks trained on real data. Superposition ![6_image_0.png](6_image_0.png) Figure 5: Illustration of the toy model architecture and the effects of sparsity. (left) Transformation of a five-feature input vector x into a two-dimensional hidden representation h, and its reconstruction as x ′ using the weight matrix W and its transpose, with feature importance indicated by a color gradient from yellow to green. (right) The effect of increasing feature sparsity S on the encoding capacity of the network, highlighting the network's enhanced ability to represent features in superposition as sparsity increases from 0 to 0.9, illustrated by arrows in the activation space h, which correspond to the columns of the matrix W. Research on superposition, including works by (Elhage et al., 2022b; Scherlis et al., 2023; Henighan et al., 2023), often investigates simplified models. However, understanding superposition in practical, transformerbased scenarios is crucial for real-world applications, as pioneered by Gurnee et al. (2023). The need for understanding networks despite polysemanticity has led to various approaches: One involves training models without superposition (Jermyn et al., 2022), for example, using a softmax linear unit (Elhage et al., 2022a) as an activation function to empirically increase the number of *monosemantic* neurons, but at the cost of making other neurons less interpretable. From a capabilities standpoint, polysemanticity may be desirable as it allows models to represent more concepts with limited compute, making training cheaper. Overall, engineering monosemanticity has proven challenging (Bricken et al., 2023) and may be impractical until we have orders of magnitude more compute available. Another approach is to train networks in a standard way (creating polysemanticity) and use post-hoc analysis to find the feature directions in activation space, for example, with Sparse Autoencoders (SAEs). SAEs aim to find the true, disentangled features in an uncompressed representation by learning a sparse overcomplete basis that describes the activation space of the trained model (Bricken et al., 2023; Sharkey et al., 2022b; Cunningham et al., 2024) (also see Section 4.2). If not neurons, what are features then? We want to identify the fundamental units of neural networks, which we call *features*. Initially, neurons seemed likely candidates. However, this view fell short, particularly in transformer models where neurons often represent multiple concepts, a phenomenon known as polysemanticity. The superposition hypothesis addresses this, proposing that due to limited representational capacity, neural networks compress numerous features into the confined space of neurons, complicating interpretation. This raises the question: *How are features encoded if not in discrete neuron units?* While a priori features could be encoded in an arbitrarily complex, non-linear structure, a growing body of theoretical arguments and empirical evidence supports the hypothesis that features are commonly represented linearly, *i.e.*, as linear combinations of neurons - hence, as directions in representation space. This perspective promises to enhance our comprehension of neural networks by providing a more interpretable and manipulable framework for their internal representations. Hypothesis 2: Linear Representation Features are directions in activation space, *i.e.*, linear combinations of neurons. The *linear representation* hypothesis suggests that neural networks frequently represent high-level features as linear directions in activation space. This hypothesis can simplify the understanding and manipulation of neural network representations (Nanda et al., 2023b). The prevalence of linear layers in neural network architectures favors linear representations. Matrix multiplication in these layers most readily processes linear features, while more complex non-linear encodings would require multiple layers to decode. However, recent work by Engels et al. (2024) provides evidence against a strict formulation of the linear representation hypothesis by identifying circular features representing days of the week and months of the year. These multi-dimensional, non-linear representations were shown to be used for solving modular arithmetic problems in days and months. Intervention experiments confirmed that these circular features are the fundamental unit of computation in these tasks, and the authors developed methods to decompose the hidden states, revealing the circular representations. Establishing non-linearity can be challenging. For example, Li et al. (2023a) initially found that in a GPT model trained on Othello, the board state could only be decoded with a non-linear probe when represented in terms of white and black pieces, seemingly violating the linearity assumption. However, Nanda (2023c); Nanda et al. (2023b) later showed that a linear probe sufficed when the board state was decoded in terms of "one's own" and "the opponent's" pieces, reaffirming the linear representation hypothesis in this case. In contrast, the work by Engels et al. (2024) provides a clear and convincing existence proof for non-linear, multi-dimensional representations in language models. While the linear representation hypothesis remains a useful simplification, it is important to recognize its limitations and the potential role of non-linear representations (Sharkey et al., 2022a). As neural networks continue to evolve, ongoing reevaluation of the hypothesis is crucial, particularly considering the possible emergence of non-linear features under optimization pressure for interpretability (Hubinger, 2022). Alternative perspectives, such as the polytope lens proposed by Black et al. (2022), emphasize the impact of non-linear activation functions and discrete polytopes formed by piecewise linear activations as potential primitives of neural network representations. Despite these exceptions, empirical evidence largely supports the linear representation hypothesis in many contexts, especially for feedforward networks with ReLU activations. Semantic vector calculus in word embeddings (Mikolov et al., 2013), successful linear probing (Alain & Bengio, 2016; Belinkov, 2021), sparse dictionary learning (Bricken et al., 2023; Cunningham et al., 2024; Deng et al., 2023), and linear decoding of concepts (O'Mahony et al., 2023), tasks (Hendel et al., 2023), functions (Todd et al., 2023), sentiment (Tigges et al., 2024), refusal (Arditi et al., 2024), and relations (Hernandez et al., 2023; Chanin et al., 2023) in large language models all point to the prevalence of linear representations. Moreover, linear addition techniques for model steering (Turner et al., 2023; Sakarvadia et al., 2023a; Li et al., 2023b) and *representation engineering* (Zou et al., 2023) highlight the practical implications of linear feature representations. Building upon the linear representation hypothesis, recent work investigated the structural organization of these linear features within activation space. Park et al. (2024) reveal a geometric framework for categorical and hierarchical concepts in large language models. Their findings demonstrate that simple categorical concepts (*e.g.*, mammal, bird) are represented as simplices in the activation space, while hierarchically related concepts are orthogonal. This geometric analysis aligns with earlier observations on feature clustering and splitting in neural networks (Elhage et al., 2022b). It suggests that the linear features are not merely scattered directions but are organized to reflect semantic relationships and hierarchies. ## 3.3 Circuits As Computational Primitives And Motifs As Universal Circuit Patterns Having defined features as directions in activation space as the fundamental units of neural network representation, we now explore their computation. Neural networks can be conceptualized as computational graphs, within which *circuits* are sub-graphs consisting of linked features and the weights connecting them. Similar to how features are the representational primitive, circuits function as the computational primitive (Michaud et al., 2023) and the primary building block of these networks (Olah et al., 2020). Motifs, Universality ![8_image_0.png](8_image_0.png) Figure 6: Comparing observed models (left) and corresponding hypothetical *disentangled* models (right) trained on similar tasks and data. The observed models show different neuronal activation patterns, while the dissection into feature-level *circuits* reveals a *motif* - a shared circuit pattern emerging across models, hinting at *universality* - models converging on similar solutions based on common underlying principles. Definition 3: Circuit Circuits are sub-graphs of the network, consisting of features and the weights connecting them. The decomposition of neural networks into circuits for interpretability has shown significant promise, particularly in small models trained for specific tasks such as addition, as seen in the work of Nanda et al. (2023a) and Quirke & Barez (2023). Scaling such a *comprehensive* circuit analysis to broader behaviors in large language models remains challenging. However, there has been notable progress in scaling circuit analysis of narrow behaviors to larger circuits and models, such as indirect object identification (Wang et al., 2023) and greater-than computations (Hanna et al., 2023) in GPT-2 and multiple-choice question answering (Lieberum et al., 2023). In search of general and universal circuits, researchers focus particularly on more general and transferable behaviors. McDougall et al. (2023)'s work on copy suppression in GPT-2's attention heads sheds light on model calibration and self-repair mechanisms. Davies et al. (2023) and Feng & Steinhardt (2023) focus on how large language models represent symbolic knowledge through variable binding and entity-attribute binding, respectively. Yu et al. (2023); Nanda et al. (2023c); Lv et al. (2024); Chughtai et al. (2024); Ortu et al. (2024) explore mechanisms for factual recall, revealing how circuits dynamically balance pretrained knowledge with new contextual information. Lan & Barez (2023) extend circuit analysis to sequence continuation tasks, identifying shared computational structures across semantically related sequences. More promisingly, some repeating patterns have shown *universality* across models and tasks. These universal patterns are called *motifs* (Olah et al., 2020) and can manifest not just as specific circuits or features but also as higher-level behaviors emerging from the interaction of multiple components. Examples include the curve detectors found across vision models (Cammarata et al., 2021; 2020), induction circuits enabling in-context learning (Olsson et al., 2022), and the phenomenon of branch specialization in neural networks (Voss et al., 2021). Motifs may also capture how models leverage tokens for working memory or parallelize computations in a divide-and-conquer fashion across representations. The significance of motifs lies in revealing the common structures, mechanisms, and strategies that naturally emerge across neural architectures, shedding light on the fundamental building blocks underlying their intelligence. Figure 6 contrasts observed neural network models with hypothetical disentangled models, illustrating how a shared circuit pattern can emerge across different models trained on similar tasks and data, hinting at an underlying *universality*. ## Definition 4: Motif Motifs are repeated patterns within a network, encompassing either features or circuits that emerge across different models and tasks. Universality Hypothesis. Following the evidence for motifs, we can propose two versions for a *universality* hypothesis regarding the convergence of features and circuits across neural network models: ## Hypothesis 3: Weak Universality There are underlying principles governing how neural networks learn to solve certain tasks. Models will generally converge on analogous solutions that adhere to the common underlying principles. However, the specific *features* and *circuits* that implement these principles can vary across different models based on factors like hyperparameters, random seeds, and architectural choices. ## Hypothesis 4: Strong Universality The *same* core features and circuits will universally and consistently arise across all neural network models trained on similar tasks and data distributions and using similar techniques, reflecting a set of fundamental computational *motifs* that neural networks inherently gravitate towards when learning. The universality hypothesis posits a convergence in forming features and circuits across various models and tasks, which could significantly ease interpretability efforts in AI. It proposes that artificial and biological neural networks share similar features and circuits, suggesting a standard underlying structure (Chan et al., 2023; Sucholutsky et al., 2023; Kornblith et al., 2019). This idea posits that there is a fundamental basis in how neural networks, irrespective of their specific configurations, process and comprehend information. This could be due to inbuilt inductive biases in neural networks or *natural abstractions* (Chan et al., 2023) - concepts favored by the natural world that any cognitive system would naturally gravitate towards. Evidence for this hypothesis comes from *cross-species neural structures* in neuroscience, where similar neural structures and functions are found in different species (Kirchner, 2023). Additionally, machine learning models, including neural networks, tend to converge on similar features, representations, and classifications across different tasks and architectures (Chen et al., 2023a; Hacohen et al., 2020; Li et al., 2015; Bricken et al., 2023). Marchetti et al. (2023) provide mathematical support for emerging universal features. While various studies support the universality hypothesis, questions remain about the extent of feature and circuit similarity across different models and tasks. In the context of mechanistic interpretability, this hypothesis has been investigated for neurons (Gurnee et al., 2024), group composition circuits (Chughtai et al., 2023), and modular task processing (Variengien & Winsor, 2023), with evidence for the weak but not the strong formulation (Chughtai et al., 2023). ## 3.4 Emergence Of World Models And Simulated Agents Internal World Models. World models are internal causal models of an environment formed within neural networks. Traditionally linked with reinforcement learning, these models are *explicitly* trained to develop a compressed spatial and temporal representation of the training environment, enhancing downstream task performance and sample efficiency through training on internal hallucinations (Ha & Schmidhuber, 2018). However, in the context of our survey, our focus shifts to *internal world models* that potentially form *implicitly* as a by-product of the training process, especially in LLMs trained on next-token prediction - also called GPT. LLMs are sometimes characterized as *stochastic parrots* (Bender et al., 2021). This label stems from their fundamental operational mechanism of predicting the next word in a sequence, which is seen as relying heavily on memorization. From this viewpoint, LLMs are thought to form complex correlations based on observational data but cannot develop causal models of the world due to their lack of access to interventional data (Pearl, 2009). An alternative perspective on LLMs comes from the *active inference* framework (Salvatori et al., 2023), a theory rooted in cognitive science and neuroscience. Active inference postulates that the objective of minimizing prediction error, given enough representative capacity, is adequate for a learning system to develop complex world representations, behaviors, and abstractions. Since language inherently mirrors the world, these models could implicitly construct linguistic and broader world models (Kulveit et al., 2023). The *simulation* hypothesis suggests that models designed for prediction, such as LLMs, will eventually simulate the causal processes underlying data creation. Seen as an extension of their drive for efficient compression, this hypothesis implies that adequately trained models like GPT could develop internal world models as a natural outcome of their predictive training (janus, 2022; Shanahan et al., 2023). Hypothesis 5: Simulation A model whose objective is text prediction will simulate the causal processes underlying the text creation if optimized sufficiently strongly (janus, 2022). In addition to theoretical considerations for emergent causal world models (Richens & Everitt, 2024; Nichani et al., 2024), mechanistic interpretability is starting to provide empirical evidence on the types of internal world models that may emerge in LLMs. The ability to internally represent the board state in games like chess (Karvonen, 2024) or Othello (Li et al., 2023a; Nanda et al., 2023b), create linear abstractions of spatial and temporal data (Gurnee & Tegmark, 2024), and structure complex representations of mazes, demonstrating an understanding of maze topology and pathways (Ivanitskiy et al., 2023) highlight the growing abstraction capabilities of LLMs. Li et al. (2021) identified contextual word representations that function as models of entities and situations evolving throughout a discourse, akin to linguistic models of dynamic semantics. Patel & Pavlick (2022) demonstrated that LLMs can map conceptual domains (e.g, direction, color) to grounded world representations given a few examples, suggesting they learn rich conceptual spaces (Gardenfors, 2004) reflective of the non-linguistic world. The *prediction orthogonality* hypothesis further expands on this idea: It posits that prediction-focused models like GPT may simulate agents with various objectives and levels of optimality. In this context, GPT are simulators, simulating entities known as *simulacra* that can be either agentic or non-agentic, with different objectives from the simulator itself (janus, 2022; Shanahan et al., 2023). The implications of the simulation and prediction orthogonality hypotheses for AI safety and alignment are discussed in Section 6. Hypothesis 6: Prediction Orthogonality A model whose objective is prediction can simulate agents who optimize toward any objectives with any degree of optimality (janus, 2022). In conclusion, the evolution of LLMs from simple predictive models to entities potentially possessing complex internal world models, as suggested by the *simulation* hypothesis and supported by mechanistic interpretability studies, represents a significant shift in our understanding of these systems. This evolution challenges us to reconsider LLMs' capabilities and future trajectories in the broader landscape of AI development. ## 4 Core Methods Methods Mechanistic interpretability (MI) employs various tools, from observational analysis to causal interventions. This section provides a comprehensive overview of these methods, beginning with a taxonomy that categorizes approaches based on their key characteristics (Section 4.1). We then survey observational (Section 4.2), followed by interventional techniques (Section 4.3). Finally, we study their synergistic interplay (Section 4.4). Figure 7 offers a visual summary of the methods and techniques unique to mechanistic interpretability. Observation **Intervention** ![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png) ![11_image_3.png](11_image_3.png) ![11_image_4.png](11_image_4.png) ![11_image_0.png](11_image_0.png) Attribution Patching Activation Patching Logit Lens Causal Scrubbing Figure 7: Overview of key methods and techniques in mechanistic interpretability research. Observational approaches include structured probes, logit lens variants, and sparse autoencoders (SAEs). Interventional methods, focusing on causal understanding, encompass activation patching variants for uncovering causal mechanisms and causal scrubbing for hypothesis evaluation. ## 4.1 Taxonomy Of Mechanistic Interpretability Methods We propose a taxonomy based on four key dimensions: causal nature, learning phase, locality, and comprehensiveness (Table 1). The causal nature of methods ranges from purely observational, which analyze existing representations without direct manipulation, to interventional approaches that actively perturb model components to establish causal relationships. The learning phase dimension distinguishes between post-hoc techniques applied to trained models and intrinsic methods that enhance interpretability during the training process itself. Locality refers to the scope of analysis, spanning from individual neurons (*e.g.*, feature visualization) to entire model architectures (*e.g.*, causal abstraction). Comprehensiveness varies from partial insights into specific components to holistic explanations of model behavior. | Method | Causal Nature | Phase | Locality | Comprehensiveness | Key Examples | |-----------------------|-----------------|------------|------------|---------------------|---------------------------------------------------------------| | Feature Visualization | Observation | Post-hoc | Local | Partial | Zeiler & Fergus (2014) Zimmermann et al. (2021) | | Exemplar methods | Observation | Post-hoc | Local | Partial | Grosse et al. (2023) Garde et al. (2023) | | Probing Techniques | Observation | Post-hoc | Both | Both | McGrath et al. (2022) Gurnee et al. (2023) | | Structured Probes | Observation | Post-hoc | Both | Both | Burns et al. (2023) | | Logit Lens Variants | Observation | Post-hoc | Global | Partial | nostalgebraist (2020) Belrose et al. (2023) | | Sparse Autoencoders | Observation | Post-hoc | Both | Comprehensive | Cunningham et al. (2024) Bricken et al. (2023) | | Activation Patching | Intervention | Post-hoc | Local | Partial | Meng et al. (2022a) Wang et al. (2023) | | Path Patching | Intervention | Post-hoc | Both | Both | Goldowsky-Dill et al. (2023) | | Causal Abstraction | Intervention | Post-hoc | Global | Comprehensive | Geiger et al. (2023a) Geiger et al. (2023b) Wu et al. (2023a) | | Hypothesis Testing | Intervention | Post-hoc | Global | Comprehensive | Chan et al. (2022) Jenner et al. (2023) | | Intrinsic Methods | - | Pre/During | Global | Comprehensive | Elhage et al. (2022a) Liu et al. (2023a) | Table 1: Taxonomy of Mechanistic Interpretability Methods The categorization is based on the methods' general tendencies. Some methods can offer local and global or partial and comprehensive interpretability depending on the scope of the analysis and application. Probing techniques can range from local to global and partial to comprehensive; simple linear probes might offer local insights into individual *features*, while more sophisticated structured probes can uncover global patterns. Sparse autoencoders decompose individual neuron activations (local) but aim to disentangle features across the entire model (global). Path patching extends local interventions to global model understanding by tracing information flow across layers, demonstrating how local perturbations can yield broader insights. In practice, mechanistic interpretability research involves both method development and their application. When applying methods to understand a model, combining techniques from multiple categories is often necessary and beneficial to build a more comprehensive understanding (Section 4.4). ## 4.2 Observation Mechanistic interpretability draws from observational methods that analyze the inner workings of neural networks, with many of these methods preceding the field itself. For a detailed exploration of inner interpretability methods, refer to (Räuker et al., 2023). Two prominent categories are example-based methods and feature-based methods: i.) **Exemplar methods** identify real input examples that highly activate specific neurons or layers. This helps pinpoint influential data points that maximize neuron activation within the neural network (Grosse et al., 2023; Garde et al., 2023; Nanfack et al., 2024). ii.) **Feature visualization** encompasses techniques that generate synthetic inputs to optimize neuron activation. These visualizations reveal how neurons respond to stimuli and which features are sensitive to (Zeiler & Fergus, 2014; Zimmermann et al., 2021). By inspecting the synthetic inputs that drive neuron behavior, we can hypothesize about the features encoded by those neurons. Probing for Features. Probing (Alain & Bengio, 2016; Hewitt & Manning, 2019) involves training a classifier using the activations of a model, with the classifier's performance subsequently observed to deduce insights about the model's behavior and internal representations. However, the probe's performance may often reflect its own learning capacities more than the actual characteristics of the model's representations (Belinkov, 2021). This dilemma has led researchers to investigate the ideal balance between the complexity of a probe and its capacity to accurately represent the model's features (Cao et al., 2021; Voita & Titov, 2020). The *linear representation* hypothesis offers a resolution to this issue. Under this hypothesis, the failure of a simple linear probe to detect certain features suggests their absence in the model's representations. Conversely, suppose a more complex probe succeeds where a simpler one fails. In that case, it implies that the model contains features that a complex function can combine into the target feature but that the target feature itself is not explicitly represented. Thus, the hypothesis implies that using linear probes could suffice in most cases, circumventing the complexity considerations generally associated with probing (Belinkov, 2021). McGrath et al. (2022) analyzed chess knowledge acquisition in AlphaZero, revealing the emergence of strategic concepts during training. In language models, Gurnee et al. (2023) introduced *sparse probing* to decode internal neuron activations to understand feature representation and sparsity. They show that early layers use sparse combinations of neurons to represent many features in superposition, while middle layers seem to have dedicated *monosemantic* neurons for higher-level contextual features. Probing is limited in drawing causal or behavioral conclusions. Its primarily observational nature focuses on how information is encoded rather than how it is used (see Figure 1), necessitating careful analysis and integration with interventional techniques (Section 4.3), or alternative approaches (Elazar et al., 2021). While in explainable AI (Linardatos et al., 2020), probing has primarily analyzed high-level concepts like linguistic representations (Tenney et al., 2019; Dalvi et al., 2019), MI aims to probe towards uncovering underlying computational processes and functionality. This shift in goals towards uncovering mechanistic computation is a nuanced distinction rather than a clear-cut line between probing in MI and the broader explainability field. Structured Probes. While focusing on bottom-up, mechanistic interpretability approaches, we can also consider integrating top-down, concept-based structured probes with mechanistic interpretability. Structured probes aid conceptual interpretability, probing language models for complex features like truth representations. Notably, Burns et al. (2023)'s *contrast-consistent search* identifies linear projections exhibiting logical consistency in hidden states, contrasting truth values for statements and negations. However, structured probes face significant challenges in unsupervised probing scenarios. As Farquhar et al. (2023) showed, arbitrary features, not just knowledge-related ones, can satisfy contrast consistency equally well, raising doubts about scalability. For example, the loss may capture *simulation* of knowledge from hypothesized *simulacra* within sufficiently powerful language models rather than the models' true knowledge. Furthermore, Farquhar et al. (2023) demonstrates self-supervised probing methods (like (Burns et al., 2023)) often detect prominent but unintended distractor features in the data. The discovered features are also highly sensitive to prompt choice, and there is no principled way to select prompts that would reliably surface a model's true knowledge. While structured probes primarily focus on high-level conceptual representations (Zou et al., 2023), their findings could potentially inform or complement mechanistic interpretability efforts. For instance, identifying truth directions through structured probes could help guide targeted interventions or analyze the underlying circuits responsible for truthful behavior using mechanistic techniques such as activation patching or circuit tracing (Section 4.3). Conversely, mechanistic methods could provide insights into how truth representations emerge and are computed within the model, addressing some of the challenges faced by unsupervised structured probes. Logit Lens. The *logit lens* (nostalgebraist, 2020) provides a window into the model's predictive process by applying the final classification layer (which projects the residual stream activation into logits/vocabulary space) to intermediate activations of the residual stream, revealing how prediction confidence evolves across computational stages. This is possible because transformers tend to build their predictions across layers iteratively (Geva et al., 2022). Extensions of this approach include the tuned lens (Belrose et al., 2023), which trains affine probes to decode hidden states into probability distributions over the vocabulary, and the Future Lens (Pal et al., 2023), which explores the extent to which individual hidden states encode information about subsequent tokens. Researchers have also investigated techniques that bypass intermediate computations to probe representations directly. Din et al. (2023) propose using linear transformations to approximate hidden states from different layers, revealing that language models often predict final outputs in early layers. Dar et al. (2022) present a theoretical framework for interpreting transformer parameters by projecting them into the embedding space, enabling model alignment and parameter transfer across architectures. Other techniques focus on interpreting specific model components or submodules. The DecoderLens (Langedijk et al., 2023) allows analyzing encoder-decoder transformers by cross-attending intermediate encoder representations in the decoder, shedding light on the information flow within the encoder. The Attention Lens (Sakarvadia et al., 2023b) aims to elucidate the specialized roles of attention heads by translating their outputs into vocabulary tokens via learned transformations. Feature Disentanglement via Sparse Dictionary Learning. Recent work suggests that the essential elements in neural networks are linear combinations of neurons representing features in superposition (Elhage et al., 2022b). To disentangle these features, researchers have developed sparse autoencoders (SAEs), which decompose neural network activations into individual component features (Sharkey et al., 2022b; Cunningham et al., 2024). This process, known as sparse dictionary learning, reconstructs activation vectors as sparse linear combinations of directional vectors within the activation space (Olshausen & Field, 1997). The theoretical foundations of SAEs are rooted in work on *disentangled* representations. Whittington et al. (2022) demonstrate that autoencoders can recover ground truth features under conditions of feature sparsity and non-negativity. Furthermore, Garfinkle & Hillar (2019) provides guarantees for the uniqueness and stability of dictionaries for sparse representation, even in the presence of noise. These theoretical underpinnings support SAEs' ability to uncover true, disentangled features underlying the data distribution. In practice, SAEs stand out for their simplicity and scalability (Sharkey et al., 2022b). They incorporate sparsity regularization to encourage learning sparse yet meaningful data representations, with the precise tuning of the sparsity penalty on hidden activations critical in dictating the autoencoder's sparsity level. We provide an overview of the SAE architecture in Figure 8. SAEs' dictionary features exhibit higher scores on autointerpretability metrics and increased monosemanticity (Bricken et al., 2023; Cunningham et al., 2024; Sharkey et al., 2022b). They are scalable to state-of-the-art models and can detect safety-relevant features (Templeton et al., 2024), measure feature sparsity (Deng et al., 2023), and interpret reward models in reinforcement learning-based language models (Marks et al., 2023a). Evaluating SAE quality remains challenging due to the lack of ground-truth interpretable features. Researchers have addressed this through various approaches: Karvonen et al. (2024) proposed using language models trained on chess and Othello transcripts as testbeds, providing natural collections of interpretable features. Sharkey et al. (2022b) constructed a toy model with traceable features, while Makelov et al. (2024); Makelov (2024) compared SAE results with supervised features in large language models to demonstrate their viability. The versatility of SAEs extends to various neural network architectures. They have been successfully applied to transformer attention layers (Kissane et al., 2024) and convolutional neural networks (Gorton, 2024). Notably, Gorton (2024) applied SAEs to the early vision layers of InceptionV1, uncovering new interpretable features, including additional curve detectors not apparent from examining individual neurons (Cammarata et al., 2020). In circuit discovery, SAEs have shown particular promise (see also Section 4.4). He et al. (2024) proposed a circuit discovery framework alternative to activation patching (discussed in Section 4.3.1), leveraging dictionary features decomposed from all modules writing to the residual stream. Similarly, O'Neill & Bui (2024) employed discrete sparse autoencoders for discovering interpretable circuits in large language models. Recent advancements have focused on improving SAE performance and addressing limitations. Rajamanoharan et al. (2024) introduced a gating mechanism to separate the functionalities of determining which directions to use and estimating their magnitudes, mitigating shrinkage - the systematic underestimation of feature activations. An alternative approach by Dunefsky et al. (2024) uses transcoders to faithfully approximate a densely activating MLP layer with a wider, sparsely-activating MLP layer, offering another path to interpretable feature discovery, a type of sparse distillation (slavachalnev, 2024). ## Sparse Dictionary Learning Sparse autoencoders (Cunningham et al., 2024) are proposed as a solution to *polysemantic* neurons. The problem of *superposition* is mathematically formalized as *sparse dictionary learning* (Olshausen & Field, 1997) problem to decompose neural network activations into *disentangled* component features. The goal is to learn a dictionary of vectors {fk} nfeat k=1 ⊂ R dthat can represent the unknown, ground truth network features as sparse linear combinations. If successful, the learned dictionary contains monosemantic neurons corresponding to *features* (Sharkey et al., 2022b). The autoencoder architecture consists of an encoder and a ReLU activation function, expanding the input dimensionality to dhid > din. The encoder's output is given by: h = ReLU(Wencx + b), (1) $$\begin{array}{r}{h=\mathrm{ReLU}(W_{\mathrm{enc}}x+b),}\\ {x^{\prime}=W_{\mathrm{dec}}h=\sum_{i=0}^{d_{\mathrm{hid}}-1}h_{i}f_{i},}\end{array}$$ hifi, (2) Sparse **Autoencoder** where Wenc,W⊤ dec ∈ R dhid×din and b ∈ R dhid . The parameter matrix Wdec forms the feature dictionary, with rows fi as dictionary features. The autoencoder is trained to minimize the loss, where the L 1 penalty on h encourages sparse reconstructions using the dictionary features, Transformer L(x) = |x − x ′| $=\;-t^{|2}$ . 2 + α|h|1. (3) ![15_image_0.png](15_image_0.png) $\mathbf{a}\cup\mathbf{b}$ $$(1)$$ $$\left(2\right)$$ Figure 8: Illustration of a sparse autoencoder applied to the MLP layer activations, consisting of an encoder that increases dimensionality while emphasizing sparse representations and a decoder that reconstructs the original activations using the learned feature dictionary. ## 4.3 Intervention Causality as a Theoretical Foundation. The theory of causality (Pearl, 2009) provides a mathematically precise framework for mechanistic interpretability, offering a rigorous approach to understanding highlevel semantics in neural representations (Geiger et al., 2023a). By treating neural networks as causal models, with their *compute graphs serving as causal graphs*, researchers can perform precise interventions and examine the roles of individual parameters (Mueller et al., 2024). This causal perspective on interpretability has led to the development of various intervention techniques, including activation patching (Section 4.3.1), causal abstraction (Section 4.3.2), and hypothesis testing methods (Section 4.3.3). 4.3.1 Activation Patching ![16_image_0.png](16_image_0.png) Figure 9: (a) Activation patching in a transformer model. Left: The model processes the clean input "Colosseum in Rome," caching the latent activations (step i.). Right: The model runs with the corrupted input "Eiffel Tower in Paris" (step ii.). The pink arrow shows an MLP layer activation (green diamond) patched from the clean run into the corrupted run (step *iii.*). This causes the prediction to change from "Paris" to "Rome," demonstrating how the significance of the patched component is determined (step iv.). By comparing these carefully selected inputs, researchers can control for confounding circuitry and isolate the specific circuit responsible for the location prediction behavior. (b) Activation patching directions: Top: Patching corrupted activations (orange) into clean circuits (turquoise) reveals *sufficient* components for identifying OR logic scenarios. Bottom: Patching clean activations (green) into corrupted circuits (orange) reveals *necessary* components that are useful for identifying AND logic scenarios. The AND and OR gates demonstrate how these patching directions uncover different logical relationships between model components. Activation patching is a collective term for a set of causal intervention techniques that manipulate neural network activations to shed light on the decision-making processes within the model. These techniques, including causal tracing (Meng et al., 2022a), interchange intervention (Geiger et al., 2021b), causal mediation analysis (Vig et al., 2020), and causal ablation (Wang et al., 2023), share the common goal of modifying a neural model's internal state by replacing specific activations with alternative values, such as zeros, mean activations across samples, random noise, or activations from a different forward pass (Figure 9a). The primary objective of activation patching is to isolate and understand the role of specific components or circuits within the model by observing how changes in activations affect the model's output. This enables researchers to infer the function and importance of those components. Key applications include localizing behavior by identifying critical activations, such as understanding the storage and processing of factual information (Meng et al., 2022a; Geva et al., 2023; Goldowsky-Dill et al., 2023; Stolfo et al., 2023), and analyzing component interactions through circuit analysis to identify sub-networks within a model's computation graph that implement specified behaviors (Wang et al., 2023; Hanna et al., 2023; Lieberum et al., 2023; Hendel et al., 2023; Geva et al., 2023). The standard protocol for activation patching (Figure 9a) involves: step i. Running the model with a clean input and caching the latent activations; step ii. Executing the model with a corrupted input; step *iii.* Re-running the model with the corrupted input but substituting specific activations with those from the clean cache; and step iv. Determining significance by observing the variations in the model's output during the third step, thereby highlighting the importance of the replaced components. This process relies on comparing pairs of inputs: a clean input, which triggers the desired behavior, and a corrupted input, which is identical to the clean one except for critical differences that prevent the behavior. By carefully selecting these inputs, researchers can *control for confounding circuitry* and isolate the specific circuit responsible for the behavior. Differences in patching direction - clean to corrupted (causal tracing) versus corrupted to clean (resample ablation) - provide insights into the sufficiency or necessity of model components for a given behavior. Clean to corrupted patching identifies activations sufficient for restoring clean performance, even if they are unnecessary due to redundancy, which is particularly informative in OR logic scenarios (Figure 9b, OR gate). Conversely, corrupted to clean patching determines the necessary activations for clean performance, which is useful in AND logic scenarios (Figure 9b, AND gate). Activation patching can employ corruption methods, including zero-, mean-, random-, or resample ablation, each modulating the model's internal state in distinct ways. Resample ablation stands out for its effectiveness in maintaining consistent model behavior by not changing the data distribution too much (Zhang & Nanda, 2023). However, it is essential to be careful when interpreting the patching results, as breaking behavior by taking the model off-distribution is uninteresting for finding the relevant circuit (Nanda, 2023e). Path Patching and Subspace Activation Patching. Path patching extends the activation patching approach to multiple edges in the computational graph (Wang et al., 2023; Goldowsky-Dill et al., 2023), allowing for a more fine-grained analysis of component interactions. For example, path patching can be used to estimate the direct and indirect effects of attention heads on the output logits. Subspace activation patching, also known as distributed interchange interventions (Geiger et al., 2023b), aims to intervene only on linear subspaces of the representation space where *features* are hypothesized to be encoded, providing a tool for more targeted interventions. Recently, Ghandeharioun et al. (2024) introduced *patchscopes*, a framework that unifies and extends activation patching techniques: using the model's text generation to explain internal representations, it enables more flexible interventions across various interpretability tasks, improving early layer inspection and allowing for cross-model analysis. Limitations and Advancements. Activation patching has several limitations, including the effort required to design input templates and counterfactual datasets, the need for human inspection to isolate important subgraphs, and potential second-order effects that can complicate the interpretation of results (Lange et al., 2023) and the *hydra effect* (McGrath et al., 2023; Rushing & Nanda, 2024) (see discussion in Section 7.2). Recent advancements aim to address these limitations, such as automated circuit discovery algorithms (Conmy et al., 2023), gradient-based methods for scalable component importance estimation like attribution patching (Nanda, 2023d; Syed et al., 2023), and techniques to mitigate self-repair interference during analysis (Ferrando & Voita, 2024). ## 4.3.2 Causal Abstraction Causal abstraction (Geiger et al., 2021a; 2023a) provides a mathematical framework for mechanistic interpretability, treating neural networks and their explanations as causal models. This approach validates explanations through interchange interventions on network activations (Jenner et al., 2023), unifying various interpretability methods such as LIME (Ribeiro et al., 2016), causal effect estimation (Feder et al., 2021), causal mediation analysis (Vig et al., 2020), iterated nullspace projection (Ravfogel et al., 2020), and circuit-based explanations (Geiger et al., 2023a). To overcome computational limitations, *distributed alignment search* (Geiger et al., 2023b) introduced gradient-based distributed interchange interventions, extending causal abstraction to larger models (Wu et al., 2023b). Further advancements include *causal proxy models* (Wu et al., 2023a), which address the challenge of counterfactual observations. Applications of causal abstraction span from linguistic phenomena analysis (Arora et al., 2024; Wu et al., 2022b), and evaluation of interpretability methods (Huang et al., 2024), to improving performance through representation finetuning (Wu et al., 2024), and improving efficiency via model distillation (Wu et al., 2022b). ## 4.3.3 Hypothesis Testing In addition to the causal abstraction framework, several methods have been developed for rigorous hypothesis testing about neural network behavior. These methods aim to formalize and empirically validate explanations of how neural networks implement specific behaviors. Causal scrubbing (Chan et al., 2022) formalizes hypotheses as a tuple (G, I, c), where G is the model's computational graph, I is an interpretable computational graph hypothesized to explain the behavior, and c maps nodes of I to nodes of G. This method replaces activations in G with others that should be equivalent according to the hypothesis, measuring performance on the scrubbed model to validate the hypothesis. Locally consistent abstractions (Jenner et al., 2023) offer a more permissive approach, checking the consistency between the neural network and the explanation only one step away from the intervention node. This method forms a middle ground between the strictness of full causal abstraction and the flexibility of causal scrubbing. These methods form a hierarchy of strictness, with full causal abstractions being the most stringent, followed by locally consistent abstractions and causal scrubbing being the most permissive. This hierarchy highlights trade-offs in choosing stricter or more permissive notions, affecting the ability to find acceptable explanations, generalization, and mechanistic anomaly detection. ## 4.4 Integrating Observation And Intervention. To comprehensively understand internal neural network mechanisms, combining observational and interventional methods is crucial. For instance, sparse autoencoders can be used to disentangle superposed features (Cunningham et al., 2024), followed by targeted activation patching to test the causal importance of these features (Wang et al., 2023). Similarly, the logit lens can track prediction formation across layers (nostalgebraist, 2020), with subsequent interventions confirming causal relationships at key points. Probing techniques can identify encoded information (Belinkov, 2021), which can then be subjected to causal abstraction (Geiger et al., 2023a) to understand how this information is utilized. This iterative refinement process, where broad observational methods guide targeted interventions and intervention results inform further observations, enables a multi-level analysis that builds a holistic understanding across different levels of abstraction. Recent work (Marks et al., 2024; Bushnaq et al., 2024; Braun et al., 2024; O'Neill & Bui, 2024; Ge et al., 2024) demonstrates the potential of integrating sparse autoencoders with automated circuits discovery (Conmy et al., 2023; Syed et al., 2023), combining feature-level analysis with circuit-level interventions to uncover the interplay between representation and mechanism. ## 5 Current Research This section surveys current research in mechanistic interpretability across three approaches based on when and how the model is interpreted during training: Intrinsic interpretability methods are applied before training to enhance the model's inherent interpretability (Section 5.1). Developmental interpretability involves studying the model's learning dynamics and the emergence of internal structures during training (Section 5.2). Survey After training, post-hoc interpretability techniques are applied to gain insights into the model's behavior and decision-making processes (Section 5.3), including efforts towards uncovering general, transferable principles across models and tasks, as well as automating the discovery and interpretation of critical circuits in trained models (Section 5.4). | intrinsic | | | |-----------------|-----------------|----------------| | before training | developmental | post-hoc | | | during training | after training | | sparsity | predictive | universal | | modularity | high-level | | | | tractable | comprehensive | | | unifying theory | | | disentanglement | | | Figure 10: Key desiderata for interpretability approaches across training and analysis stages: (1) Intrinsic: Architectural biases for sparsity, *modularity*, and *disentangled* representations. (2) Developmental: Predictive capability for phase transitions, manageable number of critical transitions, and a unifying theory connecting observations to singularity geometry. (3) Post-hoc: Global, comprehensive, automated discovery of critical circuits, uncovering transferable principles across models/tasks, and extracting high-level causal mechanisms. ## 5.1 Intrinsic Interpretability Intrinsic methods for mechanistic interpretability offer a promising approach to designing neural networks that are more amenable to *reverse engineering* without sacrificing performance. By encouraging sparsity, modularity, and *monosemanticity* through architectural choices and training procedures, these methods aim to make the reverse engineering process more tractable. Intrinsic interpretability methods aim to constrain the training process to make learned programs more interpretable (Friedman et al., 2023b). This approach is closely related to neurosymbolic learning (Riegel et al., 2020) and can involve techniques like regularization with spatial structure, akin to the organization of information in the human brain (Liu et al., 2023a;b). Recent work has explored various architectural choices and training procedures to improve the interpretability of neural networks. Jermyn et al. (2022) and Elhage et al. (2022a) demonstrate that architectural choices can affect monosemanticity, suggesting that models could be engineered to be more *monosemantic*. Sharkey (2023) propose using a bilinear layer instead of a linear layer to encourage monosemanticity in language models. Liu et al. (2023a) and Liu et al. (2023b) introduce a biologically inspired spatial regularization regime called brain-inspired modular training for forming modules in networks during training. They showcase how this can help RNNs exhibit brain-like anatomical modularity without degrading performance, in contrast to naive attempts to use sparsity to reduce the cost of having more neurons per layer (Jermyn et al., 2022; Bricken et al., 2023). Preceding the mechanistic interpretability literature, various works have explored techniques to improve interpretability, such as sparse attention (Zhang et al., 2021), adding L 1 penalties to neuron activations (Kasioumis et al., 2021; Georgiadis, 2019), and pruning neurons (Frankle & Carbin, 2019). These techniques have been shown to encourage sparsity, modularity, and disentanglement, which are essential aspects of intrinsic interpretability. ## 5.2 Developmental Interpretability Developmental interpretability examines the learning dynamics and emergence of internal structures in neural networks over time, focusing on the formation of *features* and *circuits*. This approach complements static analyses by investigating critical phase transitions corresponding to significant changes in model behavior or capabilities (Steinhardt, 2023; Schaeffer et al., 2023; Wei et al., 2022; Simon et al., 2023). While primarily a distinct field, developmental interpretability often intersects with mechanistic interpretability, as exemplified by Olsson et al. (2022)'s work. Their research, rooted in mechanistic interpretability, demonstrated how the emergence of in-context learning relates to specific training phase transitions, connecting microscopic changes (induction heads) with macroscopic observables (training loss). A key motivation for developmental interpretability is investigating the *universality* of safety-critical patterns, aiming to understand how deeply ingrained and thereby resistant to safety fine-tuning capabilities like deception are. In addition, researchers hypothesize that emergent capabilities correspond to sudden circuit formation during training (Michaud et al., 2023), potentially allowing for prediction or control of their development. Singular Learning Theory (SLT), developed by Watanabe (Watanabe, 2009; 2018), provides a rigorous framework for understanding overparameterized models' behavior and generalization. By quantifying model complexity through the *local learning coefficient*, SLT offers insights into learning phase transitions and the emergence of structure in the model (Lau et al., 2023). Recent work by Hoogland et al. (2024) applied this coefficient to identify developmental stages in transformer models, while Furman & Lau (2024) and Chen et al. (2023b) advanced SLT's scalability and application to the toy model of *superposition* (Figure 5), respectively. While direct applications to phenomena such as generalization (Zhang et al., 2017), learning functions with increasing complexity (Nakkiran et al., 2019), and the transition from memorization to generalization (*grokking*) (Liu et al., 2022a; Power et al., 2022; Liu et al., 2022b; Nanda et al., 2023a; Varma et al., 2023; Thilak et al., 2022; Merrill et al., 2023; Liu et al., 2023c; Stander et al., 2023; Wang et al., 2024) are limited, these areas, along with neural scaling laws (Caballero et al., 2022; Liu & Tegmark, 2023; Michaud et al., 2023) (which can be connected to mechanistic insights (Hernandez et al., 2022)), represent promising future research directions. In conclusion, developmental interpretability serves as an evolutionary theory lens for neural networks, offering insights into the emergence of structures and behaviors over time (Saphra, 2023). Drawing parallels from systems biology (Alon, 2019), this approach can apply concepts like network *motifs*, robustness, and modularity to neural network development, explaining how functional capabilities arise. Sometimes, understanding how structures came about is easier than analyzing the final product, similar to how biologists find certain features in organisms easier to explain in light of their evolutionary history. By studying the temporal aspects of neural network training, researchers can potentially uncover fundamental principles of learning and representation that may not be apparent from examining static, trained models alone. ## 5.3 Post-Hoc Interpretability In applied mechanistic interpretability, researchers explore various facets and methodologies to uncover the inner workings of AI models. Some key distinctions are drawn between *global* versus *local* interpretability and *comprehensive* versus *partial* interpretability. Global interpretability aims to uncover general patterns and behaviors of a model, providing insights that apply broadly across many instances (Doshi-Velez & Kim, 2017; Nanda, 2023e). In contrast, local interpretability explains the reasons behind a model's decisions for particular instances, offering insights into individual predictions or behaviors. Comprehensive interpretability involves achieving a deep and exhaustive understanding of a model's behavior, providing a holistic view of its inner workings (Nanda, 2023e). In contrast, partial interpretability often applied to larger and more complex models, concentrates on interpreting specific aspects or subsets of the model's behavior, focusing on the application's most relevant or critical areas. Large Models - Narrow Behavior. Circuit-style mechanistic interpretability aims to explain neural networks by *reverse engineering* the underlying mechanisms at the level of individual neurons or subgraphs. This approach assumes that neural vector representations encode high-level concepts and circuits defined by model weights encode meaningful algorithms (Olah et al., 2020; Cammarata et al., 2020). Studies on deep networks support these claims, identifying circuits responsible for detecting curved lines or object orientation (Cammarata et al., 2020; 2021; Voss et al., 2021). This paradigm has been applied to language models to discover subnetworks (circuits) responsible for specific capabilities. Circuit analysis localizes and understands subgraphs within a model's computational graph responsible for specific behaviors. For large language models, this often involves narrow investigations into behaviors like multiple choice reasoning (Lieberum et al., 2023), indirect object identification (Wang et al., 2023), or computing operations (Hanna et al., 2023). Other examples include analyzing circuits for Python docstrings (Heimersheim & Jett, 2023), "an" vs "a" usage (Miller & Neo, 2023), and price tagging (Wu et al., 2023b). Case studies often construct datasets using templates filled by placeholder values to enable precise control for causal interventions (Wang et al., 2023; Hanna et al., 2023; Wu et al., 2023b). Toy Models - Comprehensive Analysis. Small models trained on specialized mathematical or algorithmic tasks enable more comprehensive reverse engineering of learned algorithms (Nanda et al., 2023a; Zhong et al., 2023; Chughtai et al., 2023). Even simple arithmetic operations can involve complex strategies and multiple algorithmic solutions (Nanda et al., 2023a; Zhong et al., 2023). Characterizing these algorithms helps test hypotheses around generalizable mechanisms like variable binding (Feng & Steinhardt, 2023; Davies et al., 2023) and arithmetic reasoning (Stolfo et al., 2023). The work by Varma et al. (2023) builds on the work that analyzes transformers trained on modular addition (Nanda et al., 2023a) and explains *grokking* in terms of circuit efficiency, illustrating how a comprehensive understanding of a toy model can enable interesting analyses on top of that understanding. Towards Universality. The ultimate goal is to uncover general principles that transfer across models and tasks, such as induction heads for in-context learning (Olsson et al., 2022), variable binding mechanisms (Feng & Steinhardt, 2023; Davies et al., 2023), arithmetic reasoning (Stolfo et al., 2023; Brinkmann et al., 2024), or retrieval tasks (Variengien & Winsor, 2023). Despite promising results, debates surround the universality hypothesis - the idea that different models learn similar features and circuits when trained on similar tasks. (Chughtai et al., 2023) finds mixed evidence for universality in group composition, suggesting that while families of circuits and features can be characterized, precise circuits and development order may be arbitrary. Towards High-level Mechanisms. Causal interventions can extract a high-level understanding of computations and representations learned by large language models (Variengien & Winsor, 2023; Hendel et al., 2023; Feng & Steinhardt, 2023; Zou et al., 2023). Recent work focuses on intervening in internal representations to study high-level concepts and computations encoded. For example, Hendel et al. (2023) patched residual stream vectors to transfer task representations, while Feng & Steinhardt (2023) intervened on residual streams to argue that models generate IDs to bind entities to attributes. Techniques for representation engineering (Zou et al., 2023) extract reading vectors from model activations to stimulate or inhibit specific concepts. Although these interventions don't operate via specific mechanisms, they offer a promising approach for extracting high-level causal understanding and bridging bottom-up and top-down interpretability approaches. ## 5.4 Automation: Scaling Post-Hoc Interpretability As models become more complex, automating key aspects of the interpretability workflow becomes increasingly crucial. Tracing a model's computational pathways is highly labor-intensive, quickly becoming infeasible as the model size increases. Automating the discovery of relevant circuits and their functional interpretation represents a pivotal step towards scalable and comprehensive model understanding (Nainani, 2024). Dissecting Models into Interpretable Circuits. The first major automation challenge is identifying the critical computational sub-circuits or components underpinning a model's behavior for a given task. A pioneering line of work aims to achieve this via efficient masking or **patching** procedures. Methods like automated circuit discovery (Conmy et al., 2023) and *attribution patching* (Syed et al., 2023; Kramár et al., 2024) iteratively knock out model activations, pinpointing components whose removal has the most significant impact on performance. This masking approach has proven scalable even to large models (Lieberum et al., 2023). Other techniques take a more top-down approach. Davies et al. (2023) specify high-level causal properties (desiderata) that components solving a target subtask should satisfy and then learn binary masks to expose those component subsets. Ferrando & Voita (2024) construct *information flow graphs* highlighting key nodes and operations by tracing attribution flows, enabling extraction of general information routing patterns across prediction domains. Explicit architectural biases like modularity can further boost automation efficiency. Nainani (2024) find that models trained with *brain-inspired modular training* (Liu et al., 2023a) produce more readily identifiable circuits compared to standard training. Such domain-inspired inductive biases may prove increasingly vital as models grow more massive and monolithic. Interpreting Extracted Circuits. Once critical circuit components have been isolated, the key remaining step is interpreting *what* computation those components perform. Sparse autoencoders are a prominent approach for interpreting extracted circuits by decomposing neural network activations into individual component *features*, as discussed in Section 4.2. A novel paradigm uses large language models themselves as an interpretive tool. Bills et al. (2023) demonstrate generating natural language descriptions of individual neuron functions by prompting language models like GPT-4 to explain sets of inputs that activate a neuron. Mousi et al. (2023) similarly employ language models to annotate unsupervised neuron clusters identified via hierarchical clustering. Bai et al. (2024) describe the roles of neurons in vision networks with multimodal models. These methods can easily leverage more capable general-purpose models in the future. Foote et al. (2023) take a complementary graph-based approach in their neuron-to-graph tool: automatically extracting individual neurons' behavior patterns from training data as structured graphs amenable to visualization, programmatic comparisons, and property searches. Such representations could synergize with language model-based annotation to provide descriptions of neuron roles. However, robustly interpreting the largest trillion-parameter models using automated techniques remains an open challenge. Another novel approach, mechanistic-interpretability-based program synthesis (Michaud et al., 2024), entirely sidesteps this complexity by auto-distilling the algorithm learned by a trained model into human-readable Python code without relying on further interpretability analyses or model architectural knowledge. As models become increasingly vast and opaque, such synergistic combinations of methods - uncovering circuits, annotating them, or altogether transcribing them into executable code - will likely prove crucial for maintaining insight and *oversight* when scaling model size. ## 6 Relevance To Ai Safety Relevance | Helpful | Harmful | | |----------------------------|---------------------|---------| | Monitoring and | Accelerate | | | evaluation | capabilities | | | Substantiate threat models | Dual-use | | | Anticipate emergence | Accelerate | | | safety research | Diverting resources | Causing | | | overconfidence | | Figure 11: Potential benefits and risks of mechanistic interpretability for AI safety. How Could Interpretability Promote AI Safety? Gaining mechanistic insights into the inner workings of AI systems seems crucial for navigating AI safety as we develop more powerful models (Nanda, 2022e). Interpretability tools can provide an understanding of artificial cognition, the way AI systems process information and make decisions, which offers several potential benefits: Mechanistic interpretability could accelerate AI safety research by providing richer feedback loops and grounding for model evaluation (Casper, 2023). It may also help anticipate emergent capabilities, such as the emergence of new skills or behaviors in the model before they fully manifest (Wei et al., 2022; Steinhardt, 2023; Nanda et al., 2023a; Barak et al., 2022). This relates to studying the incremental development of internal structures and representations as the model learns (Section 5.2). Additionally, interpretability could substantiate theoretical risk models with concrete evidence, such as demonstrating *inner misalignment* (when a model's behavior deviates from its intended goals) or *mesa-optimization* (the emergence of unintended subagents within the model) (Hubinger et al., 2019; von Oswald et al., 2023). It may also trigger normative shifts within the AI community toward rigorous safety protocols by revealing potential risks or concerning behaviors (Hubinger, 2019a). Regarding specific AI risks (Hendrycks et al., 2023), interpretability may prevent malicious misuse by locating and erasing sensitive information stored in the model (Meng et al., 2022a; Nguyen et al., 2022). It could reduce competitive pressures by substantiating potential threats, promoting organizational safety cultures, and supporting AI alignment (ensuring AI systems pursue intended goals) through better monitoring and evaluation (Hendrycks & Mazeika, 2022). Interpretability can provide safety filters for every stage of training: before training by deliberate design (Hubinger, 2019a), during training by detecting early signs of misalignment and potentially shifting the distribution towards alignment (Hubinger, 2022; Sharkey, 2022), and after training by rigorous evaluation of artificial cognition for honesty (Burns et al., 2023; Zou et al., 2023) and screening for deceptive behaviors (Park et al., 2023b). The emergence of *internal world models* in LLMs, as posited by the *simulation* hypothesis, could have significant implications for AI alignment research. Finding an internal representation of human values and aiming the AI system's objective may be a trivial way to achieve alignment (Wentworth, 2022), especially if the world model is internally separated from notions of goals and agency (Ruthenis, 2022). In such cases, world model interpretability alone may be sufficient for alignment (Ruthenis, 2023). Conditioning pre-trained models is considered a comparatively safe pathway towards general intelligence, as it avoids directly creating agents with inherent goals or agendas (Jozdien, 2022; Hubinger et al., 2023). However, prompting a model to simulate an actual agent, such as "You are a superintelligence in 2035 writing down an alignment solution," could inadvertently lead to the formation of internal agents (Hubinger et al., 2023). In contrast, reinforcement learning tends to create agents by default (Casper et al., 2023a; Ngo et al., 2022). The *prediction orthogonality* hypothesis suggests that prediction-focused models like GPT can simulate agents with potentially misaligned objectives (janus, 2022). Although GPT may lack genuine agency or intentionality, it may produce outputs that simulate these qualities (Bereska & Gavves, 2023; Shanahan et al., 2023). This underscores the need for careful *oversight* and, better yet, using mechanistic interpretability to search for internal agents or their constituents, such as optimization or search processes - an endeavor known as *searching for search* (NicholasKees & janus, 2022; Jenner et al., 2024). Mechanistic interpretability integrates well into various AI alignment agendas, such as understanding existing models, controlling them, making AI systems solve alignment problems, and developing alignment theories (technicalities & Stag, 2023; Hubinger, 2020). It could enhance strategies like detecting deceptive alignment (hypothetical when a model ensures to appear aligned as to pursue misaligned goals without raising suspicion) (Park et al., 2023b), *eliciting latent knowledge* from models (Christiano et al., 2021), and enabling better scalable *oversight*, such as in *iterative distillation and amplification* (Chan, 2023). A high degree of understanding may even allow for *well-founded AI* approaches (AI systems with provable guarantees) (Tegmark & Omohundro, 2023) or *microscope AI* (extract world knowledge from the model without letting the model take actions) (Hubinger, 2019a). Furthermore, comprehensive interpretability itself may be an alignment strategy if we can identify internal representations of human values and guide the model to pursue those values by *retargeting an internal search process* (Wentworth, 2022). Ultimately, *understanding* and control are intertwined, and deeper understanding can control AI systems more reliably. However, there is a spectrum of potential misalignment risks, ranging from acute, *model-centric* issues to gradual, *systemic* concerns (Kulveit, 2024). While mechanistic interpretability may address risks stemming directly from model internals - such as deceptive alignment or sudden capability jumps - it may be less helpful for tackling broader systemic risks like the emergence of misaligned economic structures or novel evolutionary dynamics (Hendrycks, 2023b). The multi-scale risk landscape calls for a balanced research portfolio to minimize risk, where research on governance, complex systems, and multi-agent simulations complements mechanistic insights and model evaluations. The perceived utility of mechanistic interpretability for AI safety largely depends on researchers' priors regarding the likelihood of these different risk scenarios. How Could Mechanistic Insight be Harmful? Mechanistic interpretability research could accelerate AI capabilities, potentially leading to the development of powerful AI systems that are misaligned with human values, posing significant risks (Soares, 2023; Kross, 2023; Hendrycks & Mazeika, 2022). While historically, interpretability research had little impact on AI capabilities, recent exceptions like discoveries about scaling laws (Hoffmann et al., 2022), architectural improvements inspired by studying induction heads (Olsson et al., 2022; Fu et al., 2023a; Poli et al., 2023; Schuster et al., 2022), and efficiency gains inspired by the logit lens technique (Schuster et al., 2022) demonstrated its potential to enhance capabilities. Scaling interpretability research may necessitate automation (Conmy et al., 2023; Bills et al., 2023), potentially enabling rapid selfimprovement of AI systems (RicG, 2023). Some researchers recommend selective publication and focusing on lower-risk areas to mitigate these risks (Hobbhahn & Chan, 2023; Shovelain & McKernon, 2023; Elhage et al., 2022b; Nanda et al., 2023a). Mechanistic interpretability also poses dual-use risks, where the same techniques could be used for both beneficial and harmful purposes. Fine-grained editing capabilities enabled by interpretability could be used for *machine unlearning* (removing private data or dangerous knowledge from models) (Guo et al., 2024; Sun et al., 2024; Nguyen et al., 2022; Pochinkov & , 2023) but could be misused for censorship. Similarly, while interpretability may help improve adversarial robustness (Räuker et al., 2023), it may also facilitate the development of stronger adversarial attacks (Mu & Andreas, 2020; Casper et al., 2023b). Misunderstanding or overestimating the capabilities of interpretability techniques can divert resources from critical safety areas or lead to overconfidence and misplaced trust in AI systems (Charbel-Raphaël, 2023; Casper, 2023). Robust evaluation and benchmarking (Section 8.2) are crucial to validate interpretability claims and reduce the risks of overinterpretation or misinterpretation. ## 7 Challenges 7.1 Research Issues Need for Comprehensive, Multi-Pronged Approaches. Current interpretability research often focuses on individual techniques rather than combining complementary approaches. To achieve a holistic understanding of neural networks, we propose utilizing a diverse interpretability toolbox that integrates multiple methods (see also Section 4.4), such as: *(i.)* Coordinating observational (*e.g.*, probing, logit lens) and interventional methods (*e.g.*, activation patching) to establish causal relationships. *(ii.)* Combining feature-level analysis (*e.g.*, sparse autoencoders) with circuit-level interventions (*e.g.*, path patching) to uncover representation-mechanism interplay. *(iii.)* Integrating intrinsic interpretability approaches with post-hoc analysis for robust understanding. For example, coordinated methods could be used for *reverse engineering* trojaned behaviors (Casper et al., 2023c), where observational techniques identify suspicious activations, interventional methods isolate the relevant circuits, and intrinsic approaches guide the design of more robust architectures. Cherry-Picking and Streetlight Interpretability. Another concerning pattern is the tendency to cherry-pick results, relying on a small number of convincing examples or visualizations as the basis for an argument without comprehensive evaluation (Räuker et al., 2023). This amounts to publication bias, showcasing an unrealistic highlight reel of best-case performance. Relatedly, many interpretability techniques are primarily evaluated on small toy models and tasks (Chughtai et al., 2023; Elhage et al., 2022b; Jermyn et al., 2022; Chen et al., 2023b), risking missing critical phenomena that only emerge in more realistic and diverse contexts. This focus on cherry-picked results from toy models is a form of *streetlight interpretability* (Casper, 2023), examining AI systems under only ideal conditions of maximal interpretability. ## 7.2 Technical Limitations Scalability Challenges and Risks of Human Reliance. A critical hurdle is demonstrating the scalability of mechanistic interpretability to real-world AI systems across model size, task complexity, behavioral coverage, and analysis efficiency (Elhage et al., 2022b; Scherlis et al., 2023). Achieving a truly comprehensive understanding of a model's capabilities in all contexts is daunting, and the time and compute required must scale tractably. Automating interpretability techniques is crucial, as manual analysis quickly becomes infeasible for large models. The high human involvement in current interpretability research raises concerns about the scalability and validity of human-generated model interpretations. Subjective, inconsistent human evaluations and lack of ground-truth benchmarks are known issues (Räuker et al., 2023). As models scale, it will become increasingly untenable to rely on humans to hypothesize about model mechanisms manually. More work is needed on automating the discovery of mechanistic explanations and translating model weights into human-readable computational graphs (Elhage et al., 2022b), but progress on that front may also come from outside the field (Lu et al., 2024). Obstacles to Bottom-Up Interpretability. There are fundamental questions about the tractability of fully *reverse engineering* neural networks from the bottom up, especially as models become more complex (Hendrycks, 2023a). Models may learn internal representations and algorithms that do not cleanly map to human-understandable concepts, making them difficult to interpret even with complete transparency (McGrath et al., 2022). This gap between human and model ontologies may widen as architectures evolve, increasing opaqueness (Hendrycks et al., 2022). Conversely, model representations might naturally converge to more human-interpretable forms as capability increases (Hubinger, 2019a; Feng & Steinhardt, 2023). Analyzing Models Embedded in Environments. Real-world AI systems embedded in rich, interactive environments exhibit two forms of in-context behavior that pose significant interpretability challenges beyond understanding models in isolation. Externally, models may dynamically adapt to and reshape their environments through in-context learning from the interactions and feedback loops with their external environment (Leahy, 2023). Internally, the *hydra effect* demonstrates in-context reorganization, where models flexibly reorganize their internal representations in a context-dependent manner to maintain capabilities even after ablating key components (McGrath et al., 2023). These two instances of in-context behavior - external adaptation to the environment and internal self-reorganization - undermine interpretability approaches that assume fixed *circuits*. For models deeply embedded in rich real-world settings, their dynamic coupling with the external world via in-context environmental learning and their internal in-context representational reorganization make strong interpretability guarantees difficult to attain through analysis of the initial model alone. Adversarial Pressure Against Interpretability. As models become more capable through increased training and optimization, there is a risk they may learn deceptive behaviors that actively obscure or mislead the interpretability techniques meant to understand them. Models could develop adversarial "mind-reader" components that predict and counteract the specific analysis methods used to interpret their inner workings (Sharkey, 2022; Hubinger, 2022). Optimizing models through techniques like gradient descent could inadvertently make their internal representations less interpretable to external observers (Hubinger, 2019b; Fu et al., 2023b; von Oswald et al., 2023). In extreme cases, a highly advanced AI system singularly focused on preserving its core objectives may directly undermine the fundamental assumptions that enable interpretability methods in the first place. These adversarial dynamics, where the capabilities of the AI model are pitted against efforts to interpret it, underscore the need for interpretability research to prioritize worst-case robustness rather than just averagecase scenarios. Current techniques often fail even when models are not adversarially optimized. Achieving high confidence in fully understanding extremely capable AI models may require fundamental advances to make interpretability frameworks resilient against an intelligent system's active deceptive efforts. ## 8 Future Directions Given the current limitations and challenges, several key research problems emerge as critical for advancing mechanistic interpretability. These problems span four main areas: emphasizing conceptual clarity (Section 8.1), establishing rigorous standards (Section 8.2), improving the scalability of interpretability techniques (Section 8.3), and expanding the research scope (Section 8.4). Each subsection presents specific research questions and challenges that need to be addressed to move the field forward. Future **Directions** | | | Prioritize robustness advancement | | |---------------------------------------------------------------------|-------------------------|---------------------------------------------------------|-----------------| | | | Setting | over capability | | | | Standards | | | Corroborate or refute | | | | | Clarifying | core assumptions | | | | concepts | | Establish metrics, benchmarks, and algorithmic testbeds | | | Integrate existing literature and terminology Automation techniques | | Vision, multimodal, and RL models | | | | | Expanding Scope | | | Scaling Up | Coverage and complexity | Top-down and Hybrid | | | Universality and | | | | | overarching theories | | During and before training | | Figure 12: Roadmap for advancing mechanistic interpretability research, highlighting key strategic directions. ## 8.1 Clarifying Concepts Integrating with Existing Literature. To mature, mechanistic interpretability should embrace existing work, using established terminology rather than reinventing the wheel. Diverging terminology inhibits collaboration across disciplines. Presently, the terminology used for mechanistic interpretability partially diverges from mainstream AI research (Casper, 2023). For example, while the mainstream speaks of *distributed representations* (Hinton, 1984; Olah, 2023) and the goal of *disentangled* representations (Higgins et al., 2018; Locatello et al., 2019), the mechanistic interpretability literature refers to the same phenomenon as *polysemanticity* (Scherlis et al., 2023; Lecomte et al., 2023; Marshall & Kirchner, 2024) and *superposition* (Elhage et al., 2022b; Henighan et al., 2023). Using common language invites "accidental" contributions and prevents isolating mechanistic interpretability from broader AI research. Mechanistic interpretability relates to many other fields in AI research, including compressed sensing (Elhage et al., 2022b), modularity, adversarial robustness, continual learning, network compression (Räuker et al., 2023), neurosymbolic reasoning, trojan detection, and program synthesis (Casper, 2023; Michaud et al., 2024), and causal representation learning. These relationships can help develop new methods, metrics, benchmarks, and theoretical frameworks. For instance: i.) **Neurosymbolic Reasoning and Program Synthesis**: Mechanistic interpretability aims for reverse engineering neural networks by converting their weights into human-readable algorithms. This endeavor can draw inspiration from neurosymbolic reasoning (Riegel et al., 2020) and program synthesis. Techniques like creating programs in domain-specific languages (Verma et al., 2019b;a; Trivedi et al., 2021), extracting decision trees (Zhang et al., 2019) or symbolic causal graphs (Ren et al., 2023) from neural networks align well with the goals of mechanistic interpretability. Adopting these approaches can extend the toolkit for reverse engineering AI systems. ii.) **Causal Representation Learning**: Causal Representation Learning (CRL) aims to discover and disentangle underlying causal factors in data (Schölkopf et al., 2021), complementing mechanistic interpretability's goal of understanding causal structures within neural networks. While mechanistic interpretability typically examines individual *features* and *circuits*, CRL offers a framework for understanding high-level causal structures. CRL techniques could enhance interpretability by identifying causal relationships between neurons or layers (Bengio et al., 2019; Ke et al., 2021), potentially revealing model reasoning. Its focus on interventions and counterfactuals (Pearl & Mackenzie, 2018; Peters et al., 2017) could inspire new methods for probing model internals (Goyal et al., 2020; Besserve et al., 2019). CRL's emphasis on learning invariant representations (Peters et al., 2015; von Kügelgen et al., 2019) could guide the search for robust features, while its approach to transfer learning (Rojas-Carulla et al., 2018; Magliacane et al., 2018) could inform studies into model generalization. iii.) **Trojan Detection**: Detecting deceptive alignment models is a key motivation for inspecting model internals, as - by definition - deception is not salient from observing behavior alone (Casper et al., 2024). However, quantifying progress is challenging due to the lack of evidence for deception as an emergent capability in current models (Steinhardt, 2023), apart from *sycophancy* (Sharma et al., 2023; Denison et al., 2024) and theoretical evidence for *deceptive inflation* behavior (Lang et al., 2024). Detecting trojans (or backdoors) (Hubinger et al., 2024) implanted via data poisoning could be a proxy goal and proof-of-concept. These trojans simulate *outer misalignment* (where the model's behavior is misaligned with the specified reward function or objectives due to poorly defined or incorrect reward signals) rather than *inner misalignment* such as deceptive alignment (where the model appears aligned with the specified objectives but internally pursues different, misaligned goals). Moreover, activating a trojan typically results in an immediate change of behavior, while deception can be subtle, gradual, and, at first, entirely internal. Nevertheless, trojan detection can still provide a practical testbed for benchmarking interpretability methods (Maloyan et al., 2024). iv.) **Adversarial Robustness**: There is a duality between interpretability and adversarial robustness (Elhage et al., 2022b; Räuker et al., 2023; Bereska, 2024). More interpretable models tend to be more robust against adversarial attacks (Jyoti et al., 2022), and vice versa, adversarially trained models are often more interpretable (Engstrom et al., 2019). For instance, techniques like input gradient regularization have been shown to simultaneously improve the interpretability of saliency maps and enhance adversarial robustness (Ross & Doshi-Velez, 2017; Du et al., 2021). Furthermore, interpretability tools can help create more sophisticated adversaries (Carter et al., 2019; Casper et al., 2021), improving our understanding of model internals. Viewing adversarial examples as inherent neural network *features* (Ilyas et al., 2019) rather than bugs also hints at alien features beyond human perception. Connecting mechanistic interpretability to adversarial robustness thus promises ways to gain theoretical insight, measure progress (Casper, 2023), design inherently more robust architectures (Fort & Lakshminarayanan, 2024), and create interpretability-guided approaches for identifying (and mitigating) adversarial vulnerabilities (García-Carrasco et al., 2024). More details on the interplay between interpretability, robustness, modularity, continual learning, network compression, and the human visual system can be found in the review by Räuker et al. (2023). Corroborate or Refute Core Assumptions. Features are the fundamental units defining neural representations and enabling mechanistic interpretability's bottom-up approach (Chan, 2023), but defining them involves assumptions requiring scrutiny, as they shape interpretations and research directions. Questioning hypotheses by seeking additional evidence or counter-examples is crucial. The *linear representation* hypothesis treats activation directions as features (Park et al., 2023a; Nanda et al., 2023b; Elhage et al., 2022b), but the emergence and necessity of linearity is unclear - is it architectural bias or inherent? Stronger theory justifying linearity's necessity or counter-examples like autoencoders on uncorrelated data without intermediate linear layers (Elhage et al., 2022b) are needed. An alternative lens views features as polytopes from piecewise linear activations (Black et al., 2022), questioning if direction simplification suffices or added polytope complexity aids interpretability. The *superposition* hypothesis suggests that *polysemantic* neurons arise from the network compressing and representing many features within its limited set of neurons (Elhage et al., 2022b), but polysemanticity can also occur incidentally due to redundancy (Lecomte et al., 2023; Marshall & Kirchner, 2024; McGrath et al., 2023). Understanding superposition's role could inform mitigating polysemanticity via regularization (Lecomte et al., 2023). Superposition also raises open questions like operationalizing *computation in superposition* (Vaintrob et al., 2024; Hänni et al., 2024), *attention head superposition* (Elhage et al., 2022b; Jermyn et al., 2023; Lieberum et al., 2023; Gould et al., 2023), representing feature clusters (Elhage et al., 2022b), connections to adversarial robustness (Elhage et al., 2022b; García-Carrasco et al., 2024; Bloom & Bailey, 2023), anti-correlated feature organization (Elhage et al., 2022b), and architectural effects (Nanda, 2023a). ## 8.2 Setting Standards Prioritizing Robustness over Capability Advancement. As the mechanistic interpretability community expands, it is essential to maintain the norm of not advancing AI capabilities while simultaneously establishing metrics necessary for the field's progress (Räuker et al., 2023). Researchers should prioritize developing comprehensive tools for analyzing the worst-case performance of AI systems, ensuring robustness and reliability in critical applications. This includes focusing on adversarial tasks, such as backdoor detection and removal (Lamparth & Reuel, 2023; Hubinger et al., 2024; Wu et al., 2022a), and evaluating the accuracy of explanations in producing adversarial examples (Goldowsky-Dill et al., 2023). Establishing Metrics, Benchmarks, and Algorithmic Testbeds. A central challenge in mechanistic interpretability is the lack of rigorous evaluation methods. Relying solely on intuition can lead to conflating hypotheses with conclusions, resulting in cherry-picking and optimizing for best-case rather than average or worst-case performance (Rudin, 2019; Miller, 2019; Räuker et al., 2023; Casper, 2023). Current ad hoc practices and proxy measures (Doshi-Velez & Kim, 2017) risk over-optimization (Goodhart's law - When a measure becomes a target, it ceases to be a good measure). Distinguishing correlation from causation is crucial, as interpretability illusions demonstrate that visualizations may be meaningless without causal linking (Bolukbasi et al., 2021; Friedman et al., 2023a; Olah et al., 2017). To advance the field, rigorous evaluation methods are needed. These should include: (i) assessing out-ofdistribution inputs, as most current methods are only valid for specific examples or datasets (Räuker et al., 2023; Ilyas et al., 2019; Mu & Andreas, 2020; Casper et al., 2023c; Burns et al., 2023); *(ii)* controlling systems through edits, such as implanting or removing trojans (Mazeika et al., 2022) or targeted editing (Ghorbani & Zou, 2020; Dai et al., 2022; Meng et al., 2022a;b; Bau et al., 2018; Hase et al., 2023); *(iii)* replacing components with simpler reverse-engineered alternatives (Lindner et al., 2023); and *(iv)* comprehensive evaluation through replacing components with hypothesized circuits (Quirke et al., 2024). Algorithmic testbeds are essential for evaluating faithfulness (Jacovi & Goldberg, 2020; Hanna et al., 2024) and falsifiability (Leavitt & Morcos, 2020). Tools like Tracr (Lindner et al., 2023) can provide ground truth labels for benchmarking search methods (Goldowsky-Dill et al., 2023), while toy models studying superposition in computation (Vaintrob et al., 2024) and transformers on algorithmic tasks can quantify sparsity and test intrinsic methods. Recently, Thurnherr & Scheurer (2024); Gupta et al. (2024) introduced datasets of transformer weights with known circuits for evaluating mechanistic interpretability techniques. ## 8.3 Scaling Techniques Broader and Deeper Coverage of Complex Models and Behaviors. A primary goal in scaling mechanistic interpretability is pushing the Pareto frontier between model and task complexity and the coverage of interpretability techniques (Chan, 2023). While efforts have focused on larger models, it is equally crucial to scale to more complex tasks and provide comprehensive explanations essential for provable safety (Tegmark & Omohundro, 2023; Dalrymple et al., 2024; Gross et al., 2024) and enumerative safety (Cunningham et al., 2024; Elhage et al., 2022b) by ensuring models won't engage in dangerous behaviors like deception. Future work should aim for thorough *reverse engineering* (Quirke & Barez, 2023), integrating proven modules into larger networks (Nanda et al., 2023a), and capturing sequences encoded in hidden states beyond immediate predictions (Pal et al., 2023). Deepening analysis complexity is also key, validating the realism of toy models (Elhage et al., 2022b) and extending techniques like path patching (Goldowsky-Dill et al., 2023; Liu et al., 2023a) to larger language models. The field must move beyond small transformers on algorithmic tasks (Nanda et al., 2023a) and limited scenarios (Friedman et al., 2023a) to tackle more complex, realistic cases. Towards Universality. As mechanistic interpretability matures, the field must transition from isolated empirical findings to developing overarching theories and universal reasoning primitives beyond specific circuits, aiming for a comprehensive understanding of AI capabilities. While collecting empirical data remains valuable (Nanda, 2023f), establishing motifs, empirical laws, and theories capturing universal model behavior aspects is crucial. This may involve finding more circuits/features (Nanda, 2022a;c), exploring circuits as a lens for memorization/generalization (Hanna et al., 2023), identifying primitive general reasoning skills (Feng & Steinhardt, 2023), generalizing specific findings to model-agnostic phenomena (Merullo et al., 2023), and investigating emergent model generality across neural network classes (Ivanitskiy et al., 2023). Identifying universal reasoning patterns and unifying theories is key to advancing interpretability. Automation. Implementing automated methods is crucial for scaling interpretability of real-world stateof-the-art models across size, task complexity, behavior coverage, and analysis time (Hobbhahn, 2022). Manual circuit identification is labor-intensive (Lieberum et al., 2023), so automated techniques like circuit discovery and sparse autoencoders can enhance the process (Foote et al., 2023; Nanda, 2023b). Future work should automatically create varying datasets for understanding circuit functionality (Conmy et al., 2023), develop automated hypothesis search (Goldowsky-Dill et al., 2023), and investigate attention head/MLP interplay (Monea et al., 2023). Scaling sparse autoencoders to extract high-quality features automatically for frontier models is critical (Bricken et al., 2023). Still, it requires caution regarding potential downsides like AI iteration outpacing training (RicG, 2023) and loss of human interpretability from tool complexity (Doshi-Velez & Kim, 2017). ## 8.4 Expanding Scope Interpretability Across Training. While mechanistic interpretability of final trained models is a prerequisite, the field should also advance interpretability before and during training by studying learning dynamics (Nanda, 2022b; Elhage et al., 2022b; Hubinger, 2022). This includes tracking neuron development (Liu et al., 2021), analyzing neuron set changes with scale (Michaud et al., 2023), and investigating emergent computations (Quirke & Barez, 2023). Studying phase transitions could yield safety insights for *reward hacking* risks (Olsson et al., 2022). Multi-Level Analysis. Complementing the predominant bottom-up methods (Hanna et al., 2023), mechanistic interpretability should explore top-down and hybrid approaches, a promising yet neglected avenue. The top-down analysis offers a tractable way to study large models and guide microscopic research with macroscopic observations (Variengien & Winsor, 2023). Its computational efficiency could enable extensive "comparative anatomy" of diverse models, revealing high-level motifs underlying abilities. These motifs could serve as analysis units for understanding internal modifications from techniques like instruction fine-tuning (Ouyang et al., 2022) and reinforcement learning from human feedback (Christiano et al., 2017; Bai et al., 2022). New Frontiers: Vision, Multimodal, and Reinforcement Learning Models. While some mechanistic interpretability has explored convolutional neural networks for vision (Cammarata et al., 2021; 2020), vision-language models (Palit et al., 2023; Salin et al., 2022; Hilton et al., 2020), and multimodal neurons (Goh et al., 2021), little work has focused on vision transformers (Palit et al., 2023; Aflalo et al., 2022; Vilas et al., 2023; Pan et al., 2024). Future efforts could identify mechanisms within vision-language models, mirroring progress in unimodal language models (Nanda et al., 2023a; Wang et al., 2023). Reinforcement learning (RL) is also a crucial frontier given its role in advanced AI training via techniques like reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Bai et al., 2022), despite potentially posing significant safety risks (Bereska & Gavves, 2023; Casper et al., 2023a). Interpretability of RL should investigate reward/goal representations (Mini et al., 2023; Colognese & Jozdien, 2023; Colognese, 2023; Bloom & Colognese, 2023; Bloom & Bailey, 2023), study circuitry changes from alignment algorithms (Prakash et al., 2024; Jain et al., 2023; Lee et al., 2024; Jain et al., 2024), and explore emergent subgoals or proxies (Hubinger et al., 2019; Ivanitskiy et al., 2023) such as internal reward models (Marks et al., 2023b). While current state-of-the-art AI systems as prediction-trained LLMs are considered relatively safe (Hubinger et al., 2023), progress on interpreting RL systems may prove critical for safeguarding the next paradigm (Aschenbrenner, 2024). ## Acknowledgements I am grateful for the invaluable feedback and comments from Leon Lang, Tim Bakker, Jannik Brinkmann, Can Rager, Louis van Harten, Jacqueline Bereska, Benjamin Shaffrey, Thijmen Nijdam, Alice Rigg, Arthur Conmy, and Tom Lieberum. Their insights substantially improved this work. ## Glossary circuits Sub-graphs within neural networks consisting of *features* and the weights connecting them. Circuits can be thought of as *computational primitives* that perform understandable operations to produce (ideally interpretable) features from prior (ideally interpretable) features. Examples include circuits for detecting curves at specific orientations (Cammarata et al., 2020; 2021), continuing repeated patterns in text (Olsson et al., 2022), and resolving anaphoric references (Wang et al., 2023). While circuits can involve clearly interpretable features, the definition allows for intermediate representations that are less easily interpretable.. 3, 9, 10, 20, 26, 27, 35 concepts An abstract idea or representation derived from observations of the world. Concepts refer to the natural abstractions that a cognitive system, like a neural network, aims to capture and represent through its learned *features*, which may or may not align perfectly with human-defined concepts.. 4, 6, 7, 32, 34 deceptive alignment When a misaligned model aims to appear aligned to gain more power to take control once sufficiently powerful.. 24 deceptive inflation Theoretical result on deceptive behavior: policies produce trajectories that look better than they actually are from the human's perspective with limited observations to get higher reward signals during training. This deceptive behavior arises in reinforcement learning from human feedback when the human provides feedback based only on partial observations of the trajectories, while the policy has full state information during training (Lang et al., 2024).. 28 disentangled In disentangled representations, individual dimensions or components correspond to distinct, independent factors of variation in the data, rather than representing a tangled mixture of these factors.. 4, 9, 15, 16, 20, 27, 32, 33 eliciting latent knowledge Developing strategies to make a machine learning model explicitly report latent facts or knowledge embedded in its parameters, especially in cases where the model's output is untrusted (Christiano et al., 2021). This involves finding patterns in neural network activations that track the true state of the world (Mallen & Belrose, 2023).. 24 features The fundamental units of how neural networks encode knowledge, which cannot be further decomposed into smaller, distinct *concepts*. Features are core components of a neural network's representation, analogous to how cells form the fundamental unit of biological organisms (Olah et al., 2020). The *superposition* hypothesis suggests an alternative definition: that features correspond to the *disentangled* concepts that a larger, sparser network with *sufficient capacity* would learn to represent with individual (*monosemantic*) neurons (Olah et al., 2020; Bricken et al., 2023).. 3, 4, 6–8, 10, 13, 16, 18, 20, 23, 27, 28, 32, 33, 35 grokking "Grokking refers to the surprising phenomenon of delayed generalization where neural networks, on certain learning problems, generalize long after overfitting their training set." (Liu et al., 2022a). 21, 22 hydra effect The phenomenon where models can internally self-repair and maintain capabilities even when key components are ablated, making it challenging to identify the relevant components underlying a particular behavior (McGrath et al., 2023).. 18, 26 inner misalignment Inner misalignment, or goal misgeneralization, occurs when an AI system develops goals or behaviors during training that are misaligned with the intended objectives despite a correctly specified reward signal.. 24, 28 internal world models Internal causal environment models formed within neural networks, implicitly emerging as a by-product of prediction (e.g., in large language models).. 11, 12, 24, 34 irreducible We adopt the notion of *features* as the fundamental units of neural network representations, such that features cannot be further decomposed into smaller, distinct factors. To make this more precise, we can formalize the definition of features as irreducible input patterns following Engels et al. (2024): A feature f of sparsity s is a function that maps a subset of the input space (with probability 1 − s > 0) into a higher-dimensional representational space. We say the feature is active on this subset. A feature f is reducible into features a and b if there exists a transformation that decomposes f into a and b, such that the transformed distribution p(*a, b*) is either: 1. Separable: p(*a, b*) = p(a)p(b) 2. A mixture: p(*a, b*) = wp1(*a, b*) + (1 − w)p2(*a, b*) where p1 is lower-dimensional. Features are defined as irreducible patterns that cannot be decomposed into separable or mixture distributions via such transformations. This formalizes the notion that features form the fundamental atomic units underlying neural representations. Features that can be *disentangled* into statistically independent components (separable) or simpler lower-dimensional factors (mixtures) are not considered the core representational primitives. The key properties are that 1) features map from the input space to higher-dimensional representational spaces, 2) features are sparse and only activated on subsets of the input, and crucially, 3) features are irreducible and cannot be expressed as transformations of other statistically independent components.. 4 iterative distillation and amplification A technique for training AI systems by repeatedly distilling knowledge from a larger model into a smaller one while amplifying the smaller model's capabilities through feedback and interaction with humans.. 24 linear representation Features are directions in activation space, i.e., linear combinations of neurons.. 3, 8, 14, 28 machine unlearning Techniques for removing private data or dangerous knowledge from models.. 25 mesa-optimization The emergence of unintended subagents within a model with their own objectives, potentially misaligned with the original training objective.. 24 microscope AI Systems that extract and utilize knowledge from a model without allowing the model to take autonomous actions. This involves reverse engineering a trained model to understand its learned knowledge about the world, aiming to leverage this understanding directly without deploying the model in an operational capacity.. 24 modularity The property of an AI system being composed of distinct, semi-independent components or submodules that can be separately understood, modified, and recombined, rather than a monolithic, opaque structure.. 20, 21 monosemantic A neuron corresponding to a single concept. The intuition is that analyzing what inputs activate a given neuron reveals its associated semantic meaning or concept. In contrast to *polysemantic*.. 5, 8, 14, 16, 20, 32, 34 motifs Repeating patterns that emerge across models and tasks, manifesting as circuits, features, or higherlevel behaviors from component interactions. Examples include curve detectors, induction circuits, and branch specialization. Motifs reveal common structures and mechanisms underlying neural network intelligence.. 3, 10, 21, 35 natural abstractions High-level summaries or descriptions of a system or environment learned and used by many cognitive systems. According to the *natural abstraction hypothesis* (Chan et al., 2023), a set of "natural" abstractions exist that represent redundantly encoded information in the world and tend to be learned by intelligent systems produced through local selection pressures. These natural abstractions form a relatively small, discrete set of concepts like "tree," "velocity," etc., that allow compact descriptions of the world while discarding many irrelevant low-level details.. 4, 11, 32 outer misalignment Outer misalignment, or reward hacking, occurs when the specified reward function or utility function fails to capture the desired objectives correctly. This leads the AI to optimize for behaviors that achieve high reward scores but are misaligned with the intended outcomes.. 28, 34 oversight (Scalable) oversight refers to the challenge of providing reliable supervision—through labels, reward signals, or critiques—to AI models, ensuring effectiveness even as models *surpass* humanlevel performance.. 23, 24 polysemantic Neurons that are associated with multiple, unrelated *concepts*, contradicting the interpretation of neurons as representational primitives and making it challenging to understand the information processing of neural networks. This term is derived from linguistic concepts of *polysemy* (Falkum & Vicente, 2015), and in the context of neural networks first introduced by Arora et al. (2018), who suggested that word embeddings of polysemous words may be stored as a *superposition* of vectors representing distinct meanings. Olah et al. (2020) first used the term *polysemanticity*, elaborating on the concept of *polysemantic* neurons as a challenge for mechanistic interpretability.. 5, 16, 28, 33 prediction orthogonality A model whose objective is prediction can simulate agents who optimize toward any objectives with any degree of optimality (janus, 2022).. 3, 12, 24 privileged basis In certain neural network representations, the basis directions formed by the individual neurons are architecturally distinguished from arbitrary directions in the activation space. This privileged basis makes it meaningful to analyze the properties and roles of individual neurons, as the architecture encourages features to align with these basis directions. Hence, a privileged basis is necessary but *not sufficient* for the formation of *monosemantic* neurons. (Elhage et al., 2022b).. 5 representation engineering A top-down approach to transparency research that treats representations as the fundamental unit of analysis, aiming to understand and control representations of high-level cognitive phenomena in neural networks like large language models. Representation engineering has two main areas: 1) Reading representations to probe and interpret their contents, and 2) Controlling representations to manipulate high-level concepts like honesty or morality (Zou et al., 2023).. 3, 9, 22 reverse engineering The process of deconstructing a neural network's computations to fully understand and specify its operations. This involves breaking down the network's functionality into explicit, interpretable components, potentially as clear and detailed as pseudocode.. 1, 3, 20, 21, 25–27, 29 reward hacking See *outer misalignment*.. 30 simulacra The text outputs generated by a predictive model simulating the causal processes underlying text creation. These outputs simulate coherent and contextually relevant language, sometimes exhibiting agentic behaviors or goals despite the predictive model itself lacking genuine agency or intentionality. Simulacra can be either *agentic*, mimicking intentional and persuasive language use, or *non-agentic*, merely generating descriptive text without simulated goals or agency (janus, 2022; Bereska & Gavves, 2023).. 12, 14 simulation The simulation hypothesis says that when scaled up sufficiently, predictive models will learn to simulate the real-world causal processes that generated their training data (janus, 2022). When these models are optimized for predictive accuracy on broad data distributions like natural language, they are incentivized to discover the underlying rules, physics, and semantics that govern the data to model and predict future observations effectively. This allows the models to go beyond just memorizing or pattern-matching their training sets, instead learning to simulate hypothetical scenarios, reason about counterfactuals, and exhibit behaviors characteristic of general intelligence - all as a byproduct of the drive for efficient compression and accurate prediction. The simulation hypothesis suggests these models will develop rich *internal world models* capturing the causal dynamics of the training distribution.. 3, 11, 12, 14, 24 streetlight interpretability Examining AI systems under only ideal conditions of maximal interpretability, risking missing critical phenomena that only emerge in more realistic and diverse contexts.. 25 superposition The superposition hypothesis suggests that neural networks can leverage high-dimensional spaces to represent more *features* than the actual count of neurons by encoding features in almost orthogonal directions (Elhage et al., 2022b).. 3, 5, 6, 16, 21, 28, 32, 34 sycophancy The tendency of models to generate responses that align with user beliefs rather than providing truthful information. This behavior, encouraged by human feedback used in fine-tuning, is observed in state-of-the-art AI assistants across various tasks (Sharma et al., 2023). Sycophancy arises because human preference judgments often favor responses that match users' views, leading to a preference for convincingly written sycophantic responses over correct ones.. 28 universality The universality hypothesis proposes the emergence of common *circuits* across neural network models trained on similar tasks and data distributions. A *stronger* form posits that these common circuits represent a set of fundamental computational *motifs* that neural networks gravitate towards when learning. The *weaker* version suggests that for a given task, dataset, and model architecture, an optimal way to solve the problem may exist, which different models will tend to converge towards, resulting in analogous circuits. The universality hypothesis implies that rather than each model learning arbitrary, unstructured representations, there is an underlying universality to the circuits that emerge, shaped by the learning task and inductive biases.. 3, 9, 10, 21, 22 well-founded AI Developing AI systems with provable safety guarantees about their behavior and alignment with human values through rigorous mathematical modeling and verification. (Tegmark & Omohundro, 2023; Dalrymple et al., 2024).. 24 ## References Estelle Aflalo, Meng Du, Shao-Yen Tseng, Yongfei Liu, Chenfei Wu, Nan Duan, and Vasudev Lal. Vlinterpret: An interactive visualization tool for interpreting vision-language transformers. *CVPR*, June 2022. 30 Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. *ICLR*, 2016. 9, 14 Uri Alon. *An introduction to systems biology: design principles of biological circuits*. Chapman and Hall/CRC, 2019. 21 Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. Refusal in language models is mediated by a single direction. *CoRR*, 2024. 9 Aryaman Arora, Dan Jurafsky, and Christopher Potts. Causalgym: Benchmarking causal interpretability methods on linguistic tasks. *CoRR*, February 2024. 19 Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. *TACL*, December 2018. 5, 34 Leopold Aschenbrenner. Situational awareness: The decade ahead. *Series: Situational Awareness. June*, 2024. 31 Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PLOS ONE*, July 2015. 2 Nicholas Bai, Rahul Ajay Iyer, Tuomas Oikarinen, and Tsui-Wei Weng. Describe-and-dissect: Interpreting neurons in vision networks with language models. *ICML MI Workshop*, June 2024. 23 Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. *CoRR*, April 2022. 30 Yamini Bansal, Preetum Nakkiran, and Boaz Barak. Revisiting model stitching to compare neural representations. *CoRR*, June 2021. 3 Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden progress in deep learning: Sgd learns parities near the computational limit. *NeurIPS*, 2022. 24 David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, and Antonio Torralba. Gan dissection: Visualizing and understanding generative adversarial networks. *ICLR*, December 2018. 29 Yonatan Belinkov. Probing classifiers: Promises, shortcomings, and advances. *CoRR*, September 2021. 3, 9, 14, 19 Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, and Jacob Steinhardt. Eliciting latent predictions from transformers with the tuned lens. *CoRR*, August 2023. 13, 15 Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? *ACM FAccT*, March 2021. 11 Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. A meta-transfer objective for learning to disentangle causal mechanisms. *CoRR*, February 2019. 28 Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Managing ai risks in an era of rapid progress. *CoRR*, November 2023. 1 Leonard Bereska. Mechanistic interpretability for adversarial robustness - a proposal. *Leonard Bereska's* Blog, August 2024. 28 Leonard Bereska and Efstratios Gavves. Taming simulators: Challenges, pathways and vision for the alignment of large language models. *AAAI-SS*, October 2023. 24, 30, 34 Michel Besserve, Arash Mehrjou, Rémy Sun, and Bernhard Schölkopf. Counterfactuals uncover the modular structure of deep generative models. *CoRR*, December 2019. 28 Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. Language models can explain neurons in language models. *OpenAI* Blog, 2023. 5, 23, 25 Blair Bilodeau, Natasha Jaques, Pang Wei Koh, and Been Kim. Impossibility theorems for feature attribution. *Proc. Natl. Acad. Sci. U.S.A.*, January 2024. 2 Christopher M. Bishop. *Pattern recognition and machine learning*. Springer-Verlag New York Inc., 2006. 4 Sid Black, Lee Sharkey, Leo Grinsztajn, Eric Winsor, Dan Braun, Jacob Merizian, Kip Parker, Carlos Ramón Guevara, Beren Millidge, Gabriel Alfour, and Connor Leahy. Interpreting neural networks through the polytope lens. *CoRR*, November 2022. 9, 28 Joseph Bloom and Jay Bailey. Features and adversaries in memorydt. *LessWrong*, October 2023. 29, 30 Joseph Bloom and Paul Colognese. Decision transformer interpretability. *AI Alignment Forum*, 2023. 30 Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Vi'egas, and M. Wattenberg. An interpretability illusion for bert. *CoRR*, April 2021. 29 Dan Braun, Jordan Taylor, Nicholas Goldowsky-Dill, and Lee Sharkey. Identifying functionally important features with end-to-end sparse dictionary learning. *ICML MI Workshop*, May 2024. 19 Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nicholas L. Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E. Burke, Tristan Hume, Shan Carter, Tom Henighan, and Chris Olah. Towards monosemanticity: Decomposing language models with dictionary learning. *Transformer Circuits Thread*, October 2023. 5, 6, 8, 9, 11, 13, 15, 20, 30, 32 Jannik Brinkmann, Abhay Sheshadri, Victor Levoso, Paul Swoboda, and Christian Bartelt. A mechanistic analysis of a transformer trained on a symbolic multi-step reasoning task. *CoRR*, February 2024. 22 Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. *CoRR*, April 2023. 1 Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. *ICLR*, 2023. 3, 13, 14, 24, 29 Lucius Bushnaq, Stefan Heimersheim, Nicholas Goldowsky-Dill, Dan Braun, Jake Mendel, Kaarel Hanni, Avery Griffin, Jorn Stohler, Magdalena Wache, and Marius Hobbhahn. The local interaction basis: Identifying computationally-relevant and sparsely interacting features in neural networks. *CoRR*, May 2024. 19 Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws. *ICLR*, October 2022. 21 Nick Cammarata, Gabriel Goh, Shan Carter, Ludwig Schubert, Michael Petrov, and Chris Olah. Curve detectors. *Distill*, June 2020. 10, 15, 21, 30, 32 Nick Cammarata, Gabriel Goh, Shan Carter, Chelsea Voss, Ludwig Schubert, and Chris Olah. Curve circuits. Distill, 2021. 10, 21, 30, 32 Steven Cao, Victor Sanh, and Alexander M. Rush. Low-complexity probing via finding subnetworks. *NAACLHLT*, April 2021. 14 Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. Activation atlas. *Distill*, March 2019. 28 Giuseppe Casalicchio, Christoph Molnar, and Bernd Bischl. Visualizing the feature importance for black box models. *ECML PKDD*, 2018. 2 Stephen Casper. The engineer's interpretability sequence. *AI Alignment Forum*, February 2023. 24, 25, 27, 28, 29 Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, and Gabriel Kreiman. Robust feature-level adversaries are interpretability tools. *NeurIPS*, October 2021. 28 Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, CharbelRaphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. Open problems and fundamental limitations of reinforcement learning from human feedback. *CoRR*, 2023a. 24, 30 Stephen Casper, Kaivalya Hariharan, and Dylan Hadfield-Menell. Diagnostics for deep neural networks with automated copy/paste attacks. *NeurIPS 2022 ML Safety Workshop (Best paper award)*, May 2023b. 25 Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Kaivalya Hariharan, and Dylan HadfieldMenell. Red teaming deep neural networks with feature synthesis tools. *NeurIPS*, 2023c. 25, 29 Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, and Dylan Hadfield-Menell. Black-box access is insufficient for rigorous ai audits. ACM Conference on Fairness, Accountability, and Transparency, January 2024. 1, 28 Lawrence Chan. What i would do if i wasn't at arc evals. *AI Alignment Forum*, May 2023. 24, 28, 29 Lawrence Chan, Adrià Garriga-alonso, Nicholas Goldowsky-Dill, ryan_greenblatt, jenny, Ansh Radhakrishnan, Buck, and Nate Thomas. Causal scrubbing: a method for rigorously testing interpretability hypotheses [redwood research]. *AI Alignment Forum*, December 2022. 13, 19 Lawrence Chan, Leon Lang, and Erik Jenner. Natural abstractions: Key claims, theorems, and critiques. AI Alignment Forum, March 2023. 4, 11, 33 David Chanin, Anthony Hunter, and Oana-Maria Camburu. Identifying linear relational concepts in large language models. *CoRR*, 2023. 9 Charbel-Raphaël. Against almost every theory of impact of interpretability. *AI Alignment Forum*, August 2023. 25 Yiting Chen, Zhanpeng Zhou, and Junchi Yan. Going beyond neural network feature similarity: The network feature complexity and its interpretation using category theory. *CoRR*, November 2023a. 11 Zhongtian Chen, Edmund Lau, Jake Mendel, Susan Wei, and Daniel Murfet. Dynamical versus bayesian phase transitions in a toy model of superposition. *CoRR*, October 2023b. 21, 25 Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *NeurIPS*, December 2017. 30 Paul Christiano, Ajeya Cotra, and Mark Xu. Eliciting latent knowledge, January 2021. 24, 32 Bilal Chughtai, Lawrence Chan, and Neel Nanda. A toy model of universality: Reverse engineering how networks learn group operations. *ICML*, 2023. 11, 22, 25 Bilal Chughtai, Alan Cooney, and Neel Nanda. Summing up the facts: Additive mechanisms behind factual recall in llms. *NeurIPS Workshop Attributing Model Behaviour at Scale*, 2024. 10 Paul Colognese. Internal target information for ai oversight. *LessWrong*, 2023. 30 Paul Colognese and Jozdien. High-level interpretability: detecting an ai's objectives. *AI Alignment Forum*, 2023. 30 Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. Towards automated circuit discovery for mechanistic interpretability. *NeurIPS*, 2023. 18, 19, 22, 25, 30 Ian C. Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: a unified framework for model explanation. *J. Mach. Learn. Res.*, January 2021. 2 Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. *ICLR*, January 2024. 8, 9, 13, 15, 16, 19, 29 Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers. ACL, 2022. 5, 29 David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. Towards guaranteed safe ai: A framework for ensuring robust and reliable ai systems. *CoRR*, May 2024. 29, 35 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James Glass. What is one grain of sand in the desert? analyzing individual neurons in deep nlp models. Proceedings of the AAAI Conference on Artificial Intelligence, July 2019. 14 Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding space. ACL, December 2022. 15 Xander Davies, Max Nadeau, Nikhil Prakash, Tamar Rott Shaham, and David Bau. Discovering variable binding circuitry with desiderata. *CoRR*, July 2023. 10, 22, 23 Mingyang Deng, Lucas Tao, and Joe Benton. Measuring feature sparsity in language models. *CoRR*, 2023. 9, 15 Carson Denison, Monte MacDiarmid, Fazl Barez, David Duvenaud, Shauna Kravec, Samuel Marks, Nicholas Schiefer, Ryan Soklaski, Alex Tamkin, Jared Kaplan, Buck Shlegeris, Samuel R. Bowman, Ethan Perez, and Evan Hubinger. Sycophancy to subterfuge: Investigating reward-tampering in large language models. CoRR, 2024. 28 Alexander Yom Din, Taelin Karidi, Leshem Choshen, and Mor Geva. Jump to conclusions: Short-cutting transformers with linear transformations. *CoRR*, March 2023. 15 Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. *CoRR*, March 2017. 21, 29, 30 Keke Du, Shan Chang, Huixiang Wen, and Hao Zhang. Fighting adversarial images with interpretable gradients. *ACM TURC*, October 2021. 28 Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. Transcoders find interpretable llm feature circuits. ICML MI Workshop, June 2024. 16 Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. Analyzing individual neurons in pretrained language models. *EMNLP*, October 2020. 5 Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. Amnesic probing: Behavioral explanation with amnesic counterfactuals. *TACL*, February 2021. 14 Nelson Elhage, Tristan Hume, Olsson Catherine, Nanda Neel, Tom Henighan, Scott Johnston, Sheer ElShowk, Nicholas Joseph, Nova DasSarma, Ben Mann, Danny Hernandez, Amanda Askell, Kamal Ndousse, Dawn Drain, Anna Chen, Yuntao Bai, Deep Ganguli, Liane Lovitt, Zac Hatfield-Dodds, Jackson Kernion, Tom Conerly, Shauna Kravec, Stanislav Fort, Saurav Kadavath, Josh Jacobson, Eli TranJohnson, Jared Kaplan, Jack Clark, Tom Brown, Sam McCandlish, Dario Amodei, and Christopher Olah. Softmax linear units. *Transformer Circuits Thread*, 2022a. 5, 8, 13, 20 Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. Toy models of superposition. *Transformer* Circuits Thread, 2022b. 5, 6, 7, 9, 15, 25, 26, 27, 28, 29, 30, 34, 35 Joshua Engels, Isaac Liao, Eric J. Michaud, Wes Gurnee, and Max Tegmark. Not all language model features are linear. *CoRR*, May 2024. 4, 8, 33 Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Adversarial robustness as a prior for learned representations. *CoRR*, September 2019. 28 Ingrid Lossius Falkum and Agustin Vicente. Polysemy: Current perspectives and approaches. *Lingua*, April 2015. 34 Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, and Rohin Shah. Challenges with unsupervised llm knowledge discovery. *CoRR*, 2023. 14 Amir Feder, Nadav Oved, Uri Shalit, and Roi Reichart. Causalm: Causal model explanation through counterfactual language models. *Computational Linguistics*, May 2021. 19 Jiahai Feng and Jacob Steinhardt. How do language models bind entities in context? *CoRR*, October 2023. 10, 22, 26, 30 Javier Ferrando and Elena Voita. Information flow routes: Automatically interpreting language models at scale. *CoRR*, February 2024. 18, 23 Javier Ferrando, Gabriele Sarti, Arianna Bisazza, and Marta R. Costa-jussà. A primer on the inner workings of transformer-based language models. *CoRR*, May 2024. 2 Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, and Fazl Barez. Neuron to graph: Interpreting language model neurons at scale. *CoRR*, May 2023. 23, 30 Stanislav Fort and Balaji Lakshminarayanan. Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness. *CoRR*, August 2024. 28 Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *ICLR*, March 2019. 20 Dan Friedman, Andrew Lampinen, Lucas Dixon, Danqi Chen, and Asma Ghandeharioun. Interpretability illusions in the generalization of simplified models. *CoRR*, 2023a. 29, 30 Dan Friedman, Alexander Wettig, and Danqi Chen. Learning transformer programs. *NeurIPS*, June 2023b. 20 Daniel Y. Fu, Tri Dao, Khaled K. Saab, Armin W. Thomas, Atri Rudra, and Christopher Ré. Hungry hungry hippos: Towards language modeling with state space models. *ICLR*, 2023a. 25 Deqing Fu, Tian-Qi Chen, Robin Jia, and Vatsal Sharan. Transformers learn higher-order optimization methods for in-context learning: A study with linear models. *CoRR*, October 2023b. 26 Zach Furman and Edmund Lau. Estimating the local learning coefficient at scale. *CoRR*, February 2024. 21 Jorge García-Carrasco, Alejandro Maté, and Juan Trujillo. Detecting and understanding vulnerabilities in language models via mechanistic interpretability. *IJCAI*, August 2024. 28, 29 Albert Garde, Esben Kran, and Fazl Barez. Deepdecipher: Accessing and investigating neuron activation in large language models. *NeurIPS Workshop XAIA*, October 2023. 13 Peter Gardenfors. *Conceptual spaces: The geometry of thought*. MIT press, 2004. 12 Charles J. Garfinkle and Christopher J. Hillar. On the uniqueness and stability of dictionaries for sparse representation of noisy signals. *IEEE Transactions on Signal Processing*, December 2019. 15 Xuyang Ge, Fukang Zhu, Wentao Shu, Junxuan Wang, Zhengfu He, and Xipeng Qiu. Automatically identifying local and global circuits with linear computation graphs. *CoRR*, May 2024. 19 Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. Causal abstractions of neural networks. NeurIPS, 2021a. 18 Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, and Christopher Potts. Inducing causal structure for interpretable neural networks. *ICML*, January 2021b. 17 Atticus Geiger, Chris Potts, and Thomas Icard. Causal abstraction for faithful model interpretation. *CoRR*, January 2023a. 13, 17, 18, 19 Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah D. Goodman. Finding alignments between interpretable causal variables and distributed neural representations. *CoRR*, 2023b. 13, 18, 19 Georgios Georgiadis. Accelerating convolutional neural networks via activation map compression. *CoRR*, March 2019. 20 Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *EMNLP*, October 2022. 15 Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. Dissecting recall of factual associations in auto-regressive language models. *EMNLP*, October 2023. 17 Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, and Mor Geva. Patchscopes: A unifying framework for inspecting hidden representations of language models. *CoRR*, January 2024. 18 Amirata Ghorbani and James Zou. Neuron shapley: Discovering the responsible neurons. *NeurIPS*, November 2020. 5, 29 Gabriel Goh, Nick Cammarata †, Chelsea Voss †, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. Multimodal neurons in artificial neural networks. *Distill*, March 2021. 5, 30 Nicholas Goldowsky-Dill, Chris MacLeod, Lucas Sato, and Aryaman Arora. Localizing model behavior with path patching. *CoRR*, 2023. 13, 17, 18, 29, 30 Liv Gorton. The missing curve detectors of inceptionv1: Applying sparse autoencoders to inceptionv1 early vision. *ICML MI Workshop*, June 2024. 15 Rhys Gould, Euan Ong, George Ogden, and Arthur Conmy. Successor heads: Recurring, interpretable attention heads in the wild. *CoRR*, 2023. 29 Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. *CoRR*, November 2020. 28 Jason Gross, Rajashree Agrawal, Thomas Kwa, Euan Ong, Chun Hei Yip, Alex Gibson, Soufiane Noubir, and Lawrence Chan. Compact proofs of model performance via mechanistic interpretability. *ICML MI* Workshop, June 2024. 29 Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying large language model generalization with influence functions. *CoRR*, August 2023. 13 Phillip Huang Guo, Aaquib Syed, Abhay Sheshadri, Aidan Ewart, and Gintare Karolina Dziugaite. Robust unlearning via mechanistic localizations. *ICML MI Workshop*, June 2024. 25 Rohan Gupta, Iván Arcuschin, Thomas Kwa, and Adrià Garriga-Alonso. Interpbench: Semi-synthetic transformers for evaluating mechanistic interpretability techniques. *CoRR*, July 2024. 29 Wes Gurnee and Max Tegmark. Language models represent space and time. *ICLR*, 2024. 11 Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. *TMLR*, 2023. 7, 13, 14 Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, and Dimitris Bertsimas. Universal neurons in gpt2 language models. *CoRR*, January 2024. 11 David R. Ha and J. Schmidhuber. Recurrent world models facilitate policy evolution. *NeurIPS*, September 2018. 11 Guy Hacohen, Leshem Choshen, and Daphna Weinshall. Let's agree to agree: Neural networks share classification order on real datasets. *ICML*, 2020. 11 Michael Hanna, Ollie Liu, and Alexandre Variengien. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. *NeurIPS*, 2023. 10, 17, 22, 30 Michael Hanna, Sandro Pezzelle, and Yonatan Belinkov. Have faith in faithfulness: Going beyond circuit overlap when finding model mechanisms. *ICML MI Workshop*, June 2024. 29 Kaarel Hänni, Jake Mendel, Dmitry Vaintrob, and Lawrence Chan. Mathematical models of computation in superposition. *ICML MI Workshop*, August 2024. 29 Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. Does localization inform editing? surprising differences in causality-based localization vs. knowledge editing in language models. *NeurIPS Spotlight*, January 2023. 29 Zhengfu He, Xuyang Ge, Qiong Tang, Tianxiang Sun, Qinyuan Cheng, and Xipeng Qiu. Dictionary learning improves patch-free circuit discovery in mechanistic interpretability: A case study on othello-gpt. *CoRR*, 2024. 15 Stefan Heimersheim and Jett. A circuit for python docstrings in a 4-layer attention-only transformer. AI Alignment Forum, February 2023. 22 Roee Hendel, Mor Geva, and Amir Globerson. In-context learning creates task vectors. *EMNLP*, October 2023. 9, 17, 22 Dan Hendrycks. *Introduction to AI Safety, Ethics, and Society*. Self-published, 2023a. 26 Dan Hendrycks. Natural selection favors ais over humans. *CoRR*, July 2023b. 25 Dan Hendrycks and Mantas Mazeika. X-risk analysis for ai research. *CoRR*, June 2022. 1, 24, 25 Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ml safety. CoRR, June 2022. 26 Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. An overview of catastrophic ai risks. *CoRR*, October 2023. 1, 24 Tom Henighan, Shan Carter, Tristan Hume, Nelson Elhage, Robert Lasenby, Stanislav Fort, Nicholas Schiefer, and Christopher Olah. Superposition, memorization, and double descent. Transformer Circuits Thread, 2023. 7, 27 Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, Scott Johnston, Ben Mann, Chris Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. Scaling laws and interpretability of learning from repeated data. *CoRR*, 2022. 21 Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, and David Bau. Linearity of relation decoding in transformer language models. *CoRR*, August 2023. 9 John Hewitt and Christopher D. Manning. A structural probe for finding syntax in word representations. NAACL HLT, June 2019. 14 Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. *CoRR*, December 2018. 27 Jacob Hilton, Nick Cammarata, Shan Carter, Gabriel Goh, and Chris Olah. Understanding rl vision. *Distill*, 2020. 30 Geoffrey E Hinton. Distributed representations. *Carnegie Mellon University*, 1984. 27 Marius Hobbhahn. Marius' alignment agenda, 2022. 30 Marius Hobbhahn and Lawrence Chan. Should we publish mechanistic interpretability research? *AI Alignment Forum*, April 2023. 25 Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models. *CoRR*, March 2022. 25 Jesse Hoogland, Liam Carroll, and Daniel Murfet. Stagewise development in neural networks. *AI Alignment* Forum, March 2024. 21 Jing Huang, Atticus Geiger, Karel D'Oosterlinck, Zhengxuan Wu, and Christopher Potts. Rigorously assessing natural language explanations of neurons. *CoRR*, September 2023. 5 Jing Huang, Zhengxuan Wu, Christopher Potts, Mor Geva, and Atticus Geiger. Ravel: Evaluating interpretability methods on disentangling language model representations. *CoRR*, 2024. 19 Evan Hubinger. Chris olah's views on agi safety. *AI Alignment Forum*, November 2019a. 4, 24, 26 Evan Hubinger. Gradient hacking. *AI Alignment Forum*, October 2019b. 26 Evan Hubinger. An overview of 11 proposals for building safe advanced ai. *CoRR*, December 2020. 24 Evan Hubinger. A transparency and interpretability tech tree. *AI Alignment Forum*, June 2022. 8, 24, 26, 30 Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Risks from learned optimization in advanced machine learning systems. *CoRR*, May 2019. 24, 31 Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, and Kate Woolverton. Conditioning predictive models: Risks and strategies. *CoRR*, February 2023. 24, 31 Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, and Ethan Perez. Sleeper agents: Training deceptive llms that persist through safety training. *CoRR*, 2024. 28, 29 Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *NeurIPS*, August 2019. 4, 28, 29 M. Ivanitskiy, Alexander F. Spies, Tilman Rauker, Guillaume Corlouer, Chris Mathwin, Lucia Quirke, Can Rager, Rusheb Shah, Dan Valentine, Cecilia Diniz Behn, Katsumi Inoue, and Samy Wu Fung. Structured world representations in maze-solving transformers. *CoRR*, December 2023. 11, 30, 31 Alon Jacovi and Yoav Goldberg. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? *CoRR*, April 2020. 29 Samyak Jain, Robert Kirk, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Edward Grefenstette, Tim Rocktäschel, and David Scott Krueger. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks. *CoRR*, November 2023. 30 Samyak Jain, Ekdeep Singh Lubana, Kemal Oksuz, Tom Joy, Philip Torr, Amartya Sanyal, and Puneet K. Dokania. What makes and breaks safety fine-tuning? a mechanistic study. *ICML MI Workshop*, June 2024. 30 janus. Simulators. *LessWrong*, September 2022. 11, 12, 24, 34 Erik Jenner, Adrià Garriga-alonso, and Egor Zverev. A comparison of causal scrubbing, causal abstractions, and related methods. *AI Alignment Forum*, June 2023. 13, 19 Erik Jenner, Shreyas Kapur, Vasil Georgiev, Cameron Allen, Scott Emmons, and Stuart Russell. Evidence of learned look-ahead in a chess-playing neural network. *CoRR*, June 2024. 24 Adam Jermyn, Chris Olah, and T Henighan. Circuits updates - may 2023: Attention head superposition. Transformer Circuits Thread, 2023. 29 Adam S. Jermyn, Nicholas Schiefer, and Evan Hubinger. Engineering monosemanticity in toy models. *CoRR*, November 2022. 8, 20, 25 Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, and Wen Gao. Ai alignment: A comprehensive survey. *CoRR*, January 2024. 1 Jozdien. Conditioning generative models for alignment. *AI Alignment Forum*, July 2022. 24 Jaap Jumelet. Evaluating and interpreting language models. *NLP Lecture*, November 2023. 2 Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, and Vineeth N. Balasubramanian. On the robustness of explanations of deep neural network models: A survey. CoRR, November 2022. 28 Adam Karvonen. Emergent world models and latent variable estimation in chess-playing language models. COLM, July 2024. 11 Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, C. M. Verdun, David Bau, and Samuel Marks. Measuring progress in dictionary learning for language model interpretability with board game models. *ICML MI Workshop (Oral)*, July 2024. 15 Theodoros Kasioumis, Joe Townsend, and Hiroya Inakoshi. Elite backprop: Training sparse interpretable neurons. *International Workshop on Neuro-Symbolic Learning and Reasoning*, 2021. 20 Nan Rosemary Ke, Aniket Didolkar, Sarthak Mittal, Anirudh Goyal, Guillaume Lajoie, Stefan Bauer, Danilo Rezende, Yoshua Bengio, Michael Mozer, and Christopher Pal. Systematic evaluation of causal discovery in visual model based reinforcement learning. *CoRR*, July 2021. 28 Jan Kirchner. Neuroscience and natural abstractions. *LessWrong*, March 2023. 11 Connor Kissane, Robert Krzyzanowski, Joseph Isaac Bloom, Arthur Conmy, and Neel Nanda. Interpreting attention layer outputs with sparse autoencoders. *ICML MI Workshop*, June 2024. 15 Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. *ICML*, July 2019. 3, 11 János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. Atp*: An efficient and scalable method for localizing llm behaviour to components. *CoRR*, March 2024. 22 Nicholas Kross. Why and when interpretability work is dangerous. *LessWrong*, May 2023. 25 Jan Kulveit. Risks from ai misalignment at different scales, July 2024. 24 Jan Kulveit, Clem von Stengel, and Roman Leventov. Predictive minds: Llms as atypical active inference agents. *CoRR*, November 2023. 11 Max Lamparth and Anka Reuel. Analyzing and editing inner mechanisms of backdoored language models. CoRR, 2023. 29 Michael Lan and Fazl Barez. Locating cross-task sequence continuation circuits in transformers. *CoRR*, November 2023. 10 Leon Lang, Davis Foote, Stuart Russell, Anca Dragan, Erik Jenner, and Scott Emmons. When your ais deceive you: Challenges with partial observability of human evaluators in reward learning. *CoRR*, March 2024. 28, 32 Georg Lange, Alex Makelov, and Neel Nanda. An interpretability illusion for activation patching of arbitrary subspaces. *AI Alignment Forum*, August 2023. 18 Anna Langedijk, Hosein Mohebbi, Gabriele Sarti, Willem Zuidema, and Jaap Jumelet. Decoderlens: Layerwise interpretation of encoder-decoder transformers. *CoRR*, 2023. 15 Edmund Lau, Daniel Murfet, and Susan Wei. Quantifying degeneracy in singular models via the learning coefficient. *CoRR*, August 2023. 21 Connor Leahy. Barriers to mechanistic interpretability for agi safety. *AI Alignment Forum*, 2023. 26 Matthew L. Leavitt and Ari Morcos. Towards falsifiable interpretability research. *CoRR*, October 2020. 29 Victor Lecomte, Kushal Thaman, Trevor Chow, Rylan Schaeffer, and Sanmi Koyejo. Incidental polysemanticity. *CoRR*, 2023. 27, 28, 29 Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, and Rada Mihalcea. A mechanistic understanding of alignment algorithms: A case study on dpo and toxicity. *CoRR*, 2024. 30 Belinda Z. Li, Maxwell Nye, and Jacob Andreas. Implicit representations of meaning in neural language models. *ACL-IJCNLP*, August 2021. 12 Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Emergent world representations: Exploring a sequence model trained on a synthetic task. *ICLR*, 2023a. 8, 11 Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model. *NeurIPS Spotlight*, July 2023b. 9 Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? *NIPS Workshop on Feature Extraction*, December 2015. 11 Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, and Vladimir Mikulik. Does circuit analysis interpretability scale? evidence from multiple choice capabilities in chinchilla. *CoRR*, July 2023. 10, 17, 22, 29, 30 Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable ai: A review of machine learning interpretability methods. *Entropy*, December 2020. 14 David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Thomas McGrath, and Vladimir Mikulik. Tracr: Compiled transformers as a laboratory for interpretability. *CoRR*, 2023. 29 Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A. Smith. Probing across time: What does roberta know and when? *EMNLP*, September 2021. 30 Ziming Liu and Max Tegmark. A neural scaling law from lottery ticket ensembling. *CoRR*, 2023. 21 Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, and Mike Williams. Towards understanding grokking: An effective theory of representation learning. *NeurIPS*, 2022a. 21, 32 Ziming Liu, Eric J. Michaud, and Max Tegmark. Omnigrok: Grokking beyond algorithmic data. *ICML*, 2022b. 21 Ziming Liu, Eric Gan, and Max Tegmark. Seeing is believing: Brain-inspired modular training for mechanistic interpretability. *Entropy*, June 2023a. 13, 20, 23, 29 Ziming Liu, Mikail Khona, Ila R. Fiete, and Max Tegmark. Growing brains: Co-emergence of anatomical and functional modularity in recurrent neural networks. *CoRR*, 2023b. 20 Ziming Liu, Ziqian Zhong, and Max Tegmark. Grokking as compression: A nonlinear complexity perspective. CoRR, 2023c. 21 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. *CoRR*, June 2019. 27 Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. *CoRR*, August 2024. 26 Ang Lv, Yuhan Chen, Kaiyi Zhang, Yulong Wang, Lifeng Liu, Ji-Rong Wen, Jian Xie, and Rui Yan. Interpreting key mechanisms of factual recall in transformer-based language models. *CoRR*, 2024. 10 Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M. Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. *NeurIPS*, October 2018. 28 Aleksandar Makelov. Sparse autoencoders match supervised features for model steering on the ioi task. ICML MI Workshop, June 2024. 15 Aleksandar Makelov, George Lange, and Neel Nanda. Towards principled evaluations of sparse autoencoders for interpretability and control. *CoRR*, May 2024. 15 Alex Mallen and Nora Belrose. Eliciting latent knowledge from quirky language models. *CoRR*, December 2023. 32 Narek Maloyan, Ekansh Verma, Bulat Nutfullin, and Bislan Ashinov. Trojan detection in large language models: Insights from the trojan detection challenge. *CoRR*, April 2024. 28 Giovanni Luca Marchetti, Christopher Hillar, Danica Kragic, and Sophia Sanborn. Harmonics of learning: Universal fourier features emerge in invariant networks. *CoRR*, December 2023. 11 Luke Marks, Amir Abdullah, Luna Mendez, Rauno Arike, Philip Torr, and Fazl Barez. Interpreting reward models in rlhf-tuned language models using sparse autoencoders. *CoRR*, October 2023a. 15 Luke Marks, Amir Abdullah, Clement Neo, Rauno Arike, Philip Torr, and Fazl Barez. Beyond training objectives: Interpreting reward model divergence in large language models. *CoRR*, October 2023b. 31 Samuel Marks, Can Rager, Eric J. Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. *CoRR*, March 2024. 19 Simon C. Marshall and Jan H. Kirchner. Understanding polysemanticity in neural networks through coding theory. *CoRR*, January 2024. 6, 27, 28 Mantas Mazeika, Andy Zou, Akul Arora, Pavel Pleskov, Dawn Song, Dan Hendrycks, Bo Li, and David Forsyth. How hard is trojan detection in dnns? fooling detectors with evasive trojans. *CoRR*, September 2022. 29 Callum McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, and Neel Nanda. Copy suppression: Comprehensively understanding an attention head. *CoRR*, October 2023. 10 Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Martin Wattenberg, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. Acquisition of chess knowledge in alphazero. *PNAS*, November 2022. 13, 14, 26 Thomas McGrath, Matthew Rahtz, Janos Kramar, Vladimir Mikulik, and Shane Legg. The hydra effect: Emergent self-repair in language model computations. *CoRR*, July 2023. 18, 26, 28, 32 Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. *NeurIPS*, 2022a. 13, 17, 24, 29 Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. *ICLR*, 2022b. 29 William Merrill, Nikolaos Tsilivis, and Aman Shukla. A tale of two circuits: Grokking as competition of sparse and dense subnetworks. *CoRR*, 2023. 21 Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. A mechanism for solving relational tasks in transformer language models. *CoRR*, May 2023. 30 Eric J. Michaud, Ziming Liu, Uzay Girit, and Max Tegmark. The quantization model of neural scaling. CoRR, March 2023. 9, 21, 30 Eric J. Michaud, Isaac Liao, Vedang Lad, Ziming Liu, Anish Mudide, Chloe Loughridge, Zifan Carl Guo, Tara Rezaei Kheirkhah, Mateja Vukelić, and Max Tegmark. Opening the ai black box: program synthesis via mechanistic interpretability. *CoRR*, February 2024. 23, 27 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. *NeurIPS*, October 2013. 9 Joseph Miller and Clement Neo. We found an neuron in gpt-2. *AI Alignment Forum*, February 2023. 22 Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. *Artificial Intelligence*, February 2019. 29 Ulisse Mini, Peli Grietzer, Mrinank Sharma, Austin Meek, Monte MacDiarmid, and Alexander Matt Turner. Understanding and controlling a maze-solving policy network. *CoRR*, October 2023. 30 Giovanni Monea, Maxime Peyrard, Martin Josifoski, Vishrav Chaudhary, Jason Eisner, Emre Kıcıman, Hamid Palangi, Barun Patra, and Robert West. A glitch in the matrix? locating and detecting language model grounding with fakepedia. *CoRR*, 2023. 30 Basel Mousi, Nadir Durrani, and Fahim Dalvi. Can llms facilitate interpretation of pre-trained language models? *EMNLP*, 2023. 23 Jesse Mu and Jacob Andreas. Compositional explanations of neurons. *NeurIPS*, June 2020. 5, 25, 29 Aaron Mueller, Jannik Brinkmann, Millicent Li, Samuel Marks, Koyena Pal, Nikhil Prakash, Can Rager, Aruna Sankaranarayanan, Arnab Sen Sharma, Jiuding Sun, Eric Todd, David Bau, and Yonatan Belinkov. The quest for the right mediator: A history, survey, and theoretical grounding of causal interpretability. CoRR, August 2024. 17 Jatin Nainani. Evaluating brain-inspired modular training in automated circuit discovery for mechanistic interpretability. *CoRR*, January 2024. 22, 23 Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, and Boaz Barak. Sgd on neural networks learns functions of increasing complexity. *NeurIPS*, May 2019. 21 Neel Nanda. 200 cop in mi: Looking for circuits in the wild. *Neel Nanda's Blog*, 2022a. 30 Neel Nanda. 200 cop in mi: Analysing training dynamics. *Neel Nanda's Blog*, 2022b. 30 Neel Nanda. 200 cop in mi: Studying learned features in language models. *Neel Nanda's Blog*, 2022c. 30 Neel Nanda. A comprehensive mechanistic interpretability explainer & glossary. *Neel Nanda's Blog*, December 2022d. 2 Neel Nanda. A longlist of theories of impact for interpretability. *AI Alignment Forum*, March 2022e. 23 Neel Nanda. 200 cop in mi: Exploring polysemanticity and superposition. *Neel Nanda's Blog*, January 2023a. 29 Neel Nanda. 200 cop in mi: Techniques, tooling and automation. *Neel Nanda's Blog*, 2023b. 30 Neel Nanda. Actually, othello-gpt has a linear emergent world representation. *Neel Nanda's Blog*, March 2023c. 8 Neel Nanda. Attribution patching: Activation patching at industrial scale. *Neel Nanda's Blog*, February 2023d. 18 Neel Nanda. How to think about activation patching. *AI Alignment Forum*, April 2023e. 18, 21 Neel Nanda. Mechanistic interpretability quickstart guide. *Neel Nanda's Blog*, January 2023f. 2, 30 Neel Nanda. An extremely opinionated annotated list of my favourite mechanistic interpretability papers v2. *AI Alignment Forum*, July 2024. 2 Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability. *ICLR*, January 2023a. 10, 21, 22, 24, 25, 29, 30 Neel Nanda, Andrew Lee, and Martin Wattenberg. Emergent linear representations in world models of self-supervised sequence models. *BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks* for NLP, September 2023b. 8, 11, 28 Neel Nanda, S. Rajamanoharan, J. Kramár, and R. Shah. Fact finding: Attempting to reverse-engineer factual recall on the neuron level. AI Alignment Forum, 2023c. URL https://www. alignmentforum. org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall, 2023c. 10 Geraldin Nanfack, Alexander Fulleringer, Jonathan Marty, Michael Eickenberg, and Eugene Belilovsky. Adversarial attacks on the interpretation of neuron activation maximization. *Proceedings of the AAAI* Conference on Artificial Intelligence, March 2024. 13 Richard Ngo, Lawrence Chan, and Sören Mindermann. The alignment problem from a deep learning perspective. *CoRR*, December 2022. 24 Thanh Tam Nguyen, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. A survey of machine unlearning. *CoRR*, October 2022. 24, 25 Eshaan Nichani, Alex Damian, and Jason D. Lee. How transformers learn causal structure with gradient descent. *CoRR*, February 2024. 11 NicholasKees and janus. Searching for search. *AI Alignment Forum*, November 2022. 24 nostalgebraist. interpreting gpt: the logit lens. *AI Alignment Forum*, August 2020. 13, 14, 19 Chris Olah. Distributed representations: Composition & superposition. *Transformer Circuits Thread*, 2023. 27 Chris Olah and Shan Carter. Research debt. *Distill*, March 2017. 2 Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. *Distill*, November 2017. 29 Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The building blocks of interpretability. *Distill*, March 2018. 2 Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. *Distill*, March 2020. 2, 4, 5, 9, 10, 21, 32, 34 Christopher Olah. Mechanistic interpretability, variables, and the importance of interpretable bases. *Transformer Circuits Thread*, 2022. 2, 4 B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: a strategy employed by v1? Vision Res, December 1997. 15, 16 Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. In-context learning and induction heads. *Transformer Circuits Thread*, 2022. 10, 21, 22, 25, 30, 32 Laura O'Mahony, Vincent Andrearczyk, Henning Muller, and Mara Graziani. Disentangling neuron representations with concept vectors. *CVPR Workshops*, April 2023. 9 Charles O'Neill and Thang Bui. Sparse autoencoders enable scalable and reliable circuit identification in language models. *CoRR*, May 2024. 16, 19 Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, and Bernhard Schölkopf. Competition of mechanisms: Tracing how language models handle facts and counterfactuals. *ACL 2024*, June 2024. 10 Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. *CoRR*, 2022. 30 Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, and David Bau. Future lens: Anticipating subsequent tokens from a single hidden state. *CoNLL*, 2023. 15, 29 Vedant Palit, Rohan Pandey, Aryaman Arora, and P. Liang. Towards vision-language mechanistic interpretability: A causal tracing tool for blip. *ICCVW*, 2023. 30 Xu Pan, Aaron Philip, Ziqian Xie, and Odelia Schwartz. Dissecting query-key interaction in vision transformers. *ICML MI Workshop*, June 2024. 30 Kiho Park, Yo Joong Choe, and Victor Veitch. The linear representation hypothesis and the geometry of large language models. *NeurIPS Workshop on Causal Representation Learning*, November 2023a. 28 Kiho Park, Yo Joong Choe, Yibo Jiang, and Victor Veitch. The geometry of categorical and hierarchical concepts in large language models. *ICML MI Workshop (Oral)*, June 2024. 9 Peter S. Park, Simon Goldstein, Aidan O'Gara, Michael Chen, and Dan Hendrycks. Ai deception: A survey of examples, risks, and potential solutions. *CoRR*, August 2023b. 24 Roma Patel and Ellie Pavlick. Mapping language models to grounded conceptual spaces. *ICLR*, 2022. 12 Judea Pearl. *Causality*. Cambridge University Press, 2009. 11, 17 Judea Pearl and Dana Mackenzie. *The Book of Why: The New Science of Cause and Effect*. Penguin Books Limited, May 2018. 28 Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, November 2015. 28 Jonas Peters, Dominik Janzing, and Bernhard Schlkopf. Elements of Causal Inference: Foundations and Learning Algorithms. The MIT Press, October 2017. 28 Nicky Pochinkov and Nandi . Machine unlearning evaluations as interpretability benchmarks. *AI Alignment* Forum, August 2023. 25 Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. ICML, April 2023. 25 Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Generalization beyond overfitting on small algorithmic datasets. *CoRR*, January 2022. 21 Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, and David Bau. Fine-tuning enhances existing mechanisms: A case study on entity tracking. *ICLR*, February 2024. 30 Philip Quirke and Fazl Barez. Understanding addition in transformers. *CoRR*, October 2023. 10, 29, 30 Philip Quirke, Clement Neo, and Fazl Barez. Increasing trust in language models through the reuse of verified circuits. *CoRR*, February 2024. 29 Daking Rai, Yilun Zhou, Shi Feng, Abulhair Saparov, and Ziyu Yao. A practical review of mechanistic interpretability for transformer-based language models. *CoRR*, July 2024. 2 Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, and Neel Nanda. Improving dictionary learning with gated sparse autoencoders. *CoRR*, April 2024. 16 Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward transparent ai: A survey on interpreting the inner structures of deep neural networks. *TMLR*, August 2023. 1, 13, 25, 26, 27, 28, 29 Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. ACL, July 2020. 19 Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Defining and quantifying the emergence of sparse concepts in dnns. *CoRR*, April 2023. 27 Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?": Explaining the predictions of any classifier. *NAACL*, August 2016. 2, 19 RicG. Agi-automated interpretability is suicide. *LessWrong*, May 2023. 25, 30 Jonathan Richens and Tom Everitt. Robust agents learn causal world models. *ICLR Oral*, February 2024. 11 Ryan Riegel, Alexander Gray, Francois Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, Shajith Ikbal, Hima Karanam, Sumit Neelam, Ankita Likhyani, and Santosh Srivastava. Logical neural networks. *NeurIPS*, June 2020. 20, 27 Mateo Rojas-Carulla, Bernhard Scholkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. *JMLR*, 2018. 28 Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. *AAAI*, November 2017. 28 Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nat Mach Intell*, May 2019. 29 Cody Rushing and Neel Nanda. Explorations of self-repair in language models. *ICML*, May 2024. 18 Thane Ruthenis. Internal interfaces are a high-priority interpretability target. *AI Alignment Forum*, December 2022. 24 Thane Ruthenis. World-model interpretability is all we need. *AI Alignment Forum*, January 2023. 24 Hassan Sajjad, Nadir Durrani, and Fahim Dalvi. Neuron-level interpretation of deep nlp models: A survey. TACL, November 2022. 5 Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, and Ian Foster. Memory injections: Correcting multi-hop reasoning failures during inference in transformer-based language models. *CoRR*, September 2023a. 9 Mansi Sakarvadia, Arham Khan, Aswathy Ajith, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, and Ian Foster. Attention lens: A tool for mechanistically interpreting the attention head information retrieval mechanism. *CoRR*, October 2023b. 15 Emmanuelle Salin, Badreddine Farah, Stéphane Ayache, and Benoit Favre. Are vision-language transformers learning multimodal representations? a probing perspective. *AAAI*, June 2022. 30 Tommaso Salvatori, Ankur Mali, Christopher L. Buckley, Thomas Lukasiewicz, Rajesh P. N. Rao, Karl Friston, and Alexander Ororbia. Brain-inspired computational intelligence via predictive coding. *CoRR*, 2023. 11 Naomi Saphra. Interpretability creationism. *The Gradient*, 2023. 21 Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? *CoRR*, May 2023. 21 Adam Scherlis, Kshitij Sachan, Adam S. Jermyn, Joe Benton, and Buck Shlegeris. Polysemanticity and capacity in neural networks. *CoRR*, July 2023. 6, 7, 26, 27 Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Towards causal representation learning. Special Issue of Proceedings of the IEEE - Advances in Machine Learning and Deep Neural Networks, February 2021. 27 Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q. Tran, Yi Tay, and Donald Metzler. Confident adaptive language modeling. *NeurIPS Oral*, October 2022. 25 Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-cam: Why did you say that? visual explanations from deep networks via gradientbased localization. *ICCV*, 2016. 2 Murray Shanahan, Kyle McDonell, and Laria Reynolds. Role play with large language models. *Nature*, November 2023. 11, 12, 24 Lloyd S. Shapley. A value for n -person games. *Cambridge University Press*, October 1988. 2 Lee Sharkey. Circumventing interpretability: How to defeat mind-readers. *CoRR*, December 2022. 24, 26 Lee Sharkey. A technical note on bilinear layers for interpretability. *CoRR*, May 2023. 20 Lee Sharkey, Sid Black, and beren. Current themes in mechanistic interpretability research. AI Alignment Forum, November 2022a. 2, 8 Lee Sharkey, Dan Braun, and Beren Millidge. Taking features out of superposition with sparse autoencoders. AI Alignment Forum, 2022b. 8, 15, 16 Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R. Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R. Johnston, Shauna Kravec, Timothy Maxwell, Sam McCandlish, Kamal Ndousse, Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda Zhang, and Ethan Perez. Towards understanding sycophancy in language models. *CoRR*, October 2023. 28, 35 Justin Shovelain and Elliot McKernon. The risk-reward tradeoff of interpretability research. *LessWrong*, July 2023. 25 Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. *ICML*, 2017. 2 James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, and Joshua Albrecht. On the stepwise nature of self-supervised learning. *ICML*, May 2023. 21 slavachalnev. Sparse mlp distillation. *LessWrong*, 2024. 16 Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *CoRR*, June 2017. 2 Nate Soares. If interpretability research goes well, it may get dangerous. *LessWrong*, April 2023. 25 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. *JMLR*, 2014. 5 Dashiell Stander, Qinan Yu, Honglu Fan, and Stella Biderman. Grokking group multiplication with cosets. CoRR, 2023. 21 Jacob Steinhardt. Emergent deception and emergent optimization. *Bounded Regret*, February 2023. 21, 24, 28 Alessandro Stolfo, Yonatan Belinkov, and Mrinmaya Sachan. A mechanistic interpretation of arithmetic reasoning in language models using causal mediation analysis. *EMNLP*, October 2023. 17, 22 Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, and Thomas L. Griffiths. Getting aligned on representational alignment. *CoRR*, November 2023. 11 Chen Sun, Nolan Andrew Miller, Andrey Zhmoginov, Max Vladymyrov, and Mark Sandler. Learning and unlearning of fabricated knowledge in language models. *ICML MI Workshop*, June 2024. 25 Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. *ICML*, June 2017. 2 Aaquib Syed, Can Rager, and Arthur Conmy. Attribution patching outperforms automated circuit discovery. CoRR, October 2023. 18, 19, 22 technicalities and Stag. Shallow review of live agendas in alignment & safety. *LessWrong*, 2023. 24 Max Tegmark and Steve Omohundro. Provably safe systems: the only path to controllable agi. *CoRR*, September 2023. 24, 29, 35 Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, and Brian Chen. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. *Transformer Circuits Thread*, 2024. 15 Ian Tenney, Dipanjan Das, and Ellie Pavlick. Bert rediscovers the classical nlp pipeline. ACL, August 2019. 14 Vimal Thilak, Etai Littwin, Shuangfei Zhai, Omid Saremi, Roni Paiss, and Joshua Susskind. The slingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon. *CoRR*, 2022. 21 Hannes Thurnherr and Jérémy Scheurer. Tracrbench: Generating interpretability testbeds with large language models. *ICML MI Workshop*, June 2024. 29 Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, and Neel Nanda. Language models linearly represent sentiment. *ICML MI Workshop*, June 2024. 9 Eric Todd, Millicent L. Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, and David Bau. Function vectors in large language models. *CoRR*, 2023. 9 Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, and Joseph J. Lim. Learning to synthesize programs as interpretable and generalizable policies. *NeurIPS*, 2021. 27 Alexander Matt Turner, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid. Activation addition: Steering language models without optimization. *CoRR*, September 2023. 9 Dmitry Vaintrob, jake_mendel, and Kaarel. Toward a mathematical framework for computation in superposition. *AI Alignment Forum*, 2024. 29 Alexandre Variengien and Eric Winsor. Look before you leap: A universal emergent decomposition of retrieval tasks in language models. *ICML MI Workshop*, December 2023. 11, 22, 30 Vikrant Varma, Rohin Shah, Zachary Kenton, János Kramár, and Ramana Kumar. Explaining grokking through circuit efficiency. *CoRR*, September 2023. 21, 22 Abhinav Verma, Hoang M. Le, Yisong Yue, and Swarat Chaudhuri. Imitation-projected programmatic reinforcement learning. *NeurIPS*, 2019a. 27 Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, and Swarat Chaudhuri. Programmatically interpretable reinforcement learning. *CoRR*, April 2019b. 27 Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Investigating gender bias in language models using causal mediation analysis. *NeurIPS*, 2020. 17, 19 M. Vilas, Timothy Schaumlöffel, and Gemma Roig. Analyzing vision transformers for image classification in class embedding space. *CoRR*, 2023. 30 Elena Voita and Ivan Titov. Information-theoretic probing with minimum description length. *EMNLP*, March 2020. 14 Elena Voita, Javier Ferrando, and Christoforos Nalmpantis. Neurons in large language models: Dead, ngram, positional. *CoRR*, September 2023. 5 Julius von Kügelgen, M. Loog, A. Mey, and B. Scholkopf. Semi-supervised learning, causality, and the conditional cluster assumption. *Conference on Uncertainty in Artificial Intelligence*, May 2019. 28 Johannes von Oswald, Eyvind Niklasson, Maximilian Schlegel, Seijin Kobayashi, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, and João Sacramento. Uncovering mesa-optimization algorithms in transformers. *CoRR*, September 2023. 24, 26 Chelsea Voss, Gabriel Goh, Nick Cammarata, Michael Petrov, Ludwig Schubert, and Chris Olah. Branch specialization. *Distill*, April 2021. 10, 21 Boshi Wang, Xiang Yue, Yu Su, and Huan Sun. Grokked transformers are implicit reasoners: A mechanistic journey to the edge of generalization. *ICML MI Workshop*, June 2024. 21 Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the wild: a circuit for indirect object identification in gpt-2 small. *ICLR*, 2023. 10, 13, 17, 18, 19, 22, 30, 32 Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. Blimp: The benchmark of linguistic minimal pairs for english. Transactions of the Association for Computational Linguistics, 2020. 2 Sumio Watanabe. *Algebraic Geometry and Statistical Learning Theory*. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, 2009. 21 Sumio Watanabe. *Mathematical Theory of Bayesian Statistics*. Chapman and Hall, 1 edition, April 2018. 21 Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. *TMLR*, October 2022. 21, 24 John Wentworth. How to go from interpretability to alignment: Just retarget the search. *AI Alignment* Forum, August 2022. 24 James C. R. Whittington, Will Dorrell, Surya Ganguli, and Timothy E. J. Behrens. Disentangling with biological constraints: A theory of functional cell types. *CoRR*, September 2022. 15 Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, and Chao Shen. Backdoorbench: A comprehensive benchmark of backdoor learning. *NeurIPS Datasets and Benchmarks*, October 2022a. 29 Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, and Noah Goodman. Causal distillation for language models. *NAACL-HLT*, July 2022b. 19 Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, and Christopher Potts. Causal proxy models for concept-based model explanations. *ICML*, 2023a. 13, 19 Zhengxuan Wu, Atticus Geiger, Christopher Potts, and Noah D. Goodman. Interpretability at scale: Identifying causal mechanisms in alpaca. *CoRR*, May 2023b. 19, 22 Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, and Christopher Potts. Reft: Representation finetuning for language models. *CoRR*, May 2024. 19 Qinan Yu, Jack Merullo, and Ellie Pavlick. Characterizing mechanisms for factual recall in language models. CoRR, October 2023. 10 Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. *ECCV*, 2014. 13 Biao Zhang, Ivan Titov, and Rico Sennrich. Sparse attention with linear units. *EMNLP*, October 2021. 20 Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. *ICLR*, February 2017. 21 Fred Zhang and Neel Nanda. Towards best practices of activation patching in language models: Metrics and methods. *ICLR*, 2023. 18 Quanshi Zhang, Yu Yang, Haotian Ma, and Ying Nian Wu. Interpreting cnns via decision trees. *CVPR*, 2019. 27 Ziqian Zhong, Ziming Liu, Max Tegmark, and Jacob Andreas. The clock and the pizza: Two stories in mechanistic explanation of neural networks. *CoRR*, 2023. 22 Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, and Wieland Brendel. How well do feature visualizations support causal understanding of cnn activations? NeurIPS, November 2021. 13 Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks. Representation engineering: A top-down approach to ai transparency. *CoRR*, October 2023. 3, 9, 14, 22, 24, 34
Review 1: Summary: This paper focuses on investigating current methodologies in the field of mechanistic interpretability, which aims to uncover causal relationships and precise computations transforming inputs into outputs in a neural network. Towards this field, this paper presents its core concepts and hypotheses, explains methods and techniques, discusses evaluation, explores its relevance to AI safety, and points out challenges and future directions. Strengths and Weaknesses: **Strength**: 1. The topic of survey, mechanistic interpretability, is important and may attract interests from many researchers. 2. This paper does a extensive investigation for mechanistic interpretability. As far as I know, there are few survey papers on this topic. **Weakness**: My main concerns come from the organization of the survey paper. I understand that mechanistic interpretability is more complex than attribution/concept interpretability. But at current vision, I find it difficult to provide a coherent overview of this field. To improve the readability, I have following comments: 1. **There is short of overview figures/tables**, so as to (such as) clarify the connection between core concepts, the relationship between concepts and hypotheses, and so on. For example, how can we distinguish from features, semantics, neurons, and basis? What’s the connection between features, circuits, and motif? How does the Universality Hypothesis relate to these core concepts? I notice that the authors may give answers to some of the above questions in the texts, but this is not intuitive and impressive. It is crucial to help the readers form a clear overall picture. 2. This survey only lists core concepts and hypothesis, but **lacks a summary and refinement of key research problems** in the field of mechanism interpretability. Key research problems are essential to investigating a research field. Based on my understanding of this survey, key research problems may be “How a neural network encode semantics? Is it encoded in neurons?”, “Whether a single neuron corresponds to an explicit semantic?”, and so on. I suggest the authors to have a deep thinking on summarizing research problems. 3. **There is a significant gap between the core concepts/research problems (Section 3) and methods/techniques (Section 4)**. The authors should give more details or examples on how to adopt these techniques in interpreting the mechanisms of neural networks. 4. I find the survey a little long and difficult to read. I suggest to condense the paper and polish the language, to stress the key point and make the statement more precise and understandable. Requested Changes: Please refer to the weakness. The four points are important to my final decision. Broader Impact Concerns: Not applicable. ================================================== Review 2: Summary: This paper serves as a survey and introduction to the field of mechanistic interpretability. It outlines the terminology, provides definitions, overviews recent approaches based on these, and also provides a critique of aspects of the field (e.g., fundamental limitations of some methods, weaknesses in the rigour of some evaluations, etc). Strengths and Weaknesses: 1. I think this paper is a well-constructed and researched survey paper that goes over definitions, hypothesis, methods and relevant aspects of the field of mechanistic interpretability. I have some thoughts on making it a more comprehensive repository of information on mechanistic interpretability, and some things to change below: 2. It would be great to have a more detailed anthology as in this paper. E.g., see this paper https://arxiv.org/abs/2311.04329 that links each definition to its place in the glossary allowing readers to go over and access all definitions easily.) 3. If each definition could be numbered and then linked in the glossary that would also help retrieve them more easily. 4. This paper interestingly falls in between two types of papers i.e., critique/position papers vs. a survey of all of the work in the field. Most of the paper is structured as a survey paper with definitions of important terms, hypotheses, algorithms and methods with the relevant citations needed for these. The latter part of the paper is more of a critique of aspects of the field. This division is great! However, in between some of the definitions/methods are critiques of certain methods that are not citable, or at least cited in this paper (they seem more subjective). For the sake of a comprehensive/unbiased survey paper, I would urge the authors to not provide subjective viewpoints in the descriptive/survey part. I’ve listed them all below. 5. An overarching concern with this type of survey paper is that the field is emerging so fast that a lot of the things in this paper are already outdated (e.g., claims such as “so far no one has done so and so in a certain task”) are not largely true anymore even at the time of reviewing this paper. It’s clearly necessary to outline what has and hasn’t been done until this point in time but I would urge the authors to reduce such claims. 6. For e.g., on page 8, “only relatively narrow behaviours like Python docstring formatting” have been explored so far”. 7. On page 8:” Hypothesis: Universality” it might be nice to also have the hypotheses linked in a similar glossary? 8. On page 9: on internal world models, there are several works on behavioural probing and mech interp that look into this that are not cited: https://openreview.net/forum?id=jE8xbmvFin, https://openreview.net/forum?id=gJcEM8sxHK, https://arxiv.org/abs/2106.00737 9. On page 12, re: “unsupervised methods can identify numerous features without a straightforward verification process”, can we add citations for this? 10. On page 14, the overview of the different activation patching approaches is great, but if this could be elaborated in some detail and cited that would be helpful! 11. Figure 7 is really great! 12. The terms, inner misalignment and mesa optimisation could be helpful to define in the anthology. 13, On page 22: “utilising a diverse interpretabiltiy toolbox”, how would the authors propose to do that? This sentence, without any explanation of what this implies doesn’t actually provide any insight or details into what could be done better. 14. On page 24: this might be hard to do but it would be extremely helpful to have a table that maps the definitions to each other re: the point about how terms are redefined and new terms/definitions really mean the same underlying thing or concept. 15. In section 2, RSA-like approaches could also be mentioned and cited here as a class of probing/interpretability techniques. 16. Re: monosemantic and polysmenatic neurons: can we cite where these terms originate from? (e.g., used in the linguistics literature here, Vicente and Falkum (2017), but it would be nice to have the original paper that used this both in semantics, and also in the interpretability field). ^maybe this paragraph should be phrased not in the context of neurons, but just talking about mono/polysemy, why it is important to make this distinction, and that it is about trying to find units that correspond to it. The next paragraph can then tie things together (as it does already). 17. The definition of features could be made more concrete—is just saying “fundamental unit” sufficient? Is the implication that features are monosemantic (whether human interpretable or otherwise?) Also it is currently unclear to me if they are implying that features equate to neurons? 18. When talking about world modeling and relating to AI safety: “emergent world models have significant implications … internal representation of human values” → it might be worth drawing the comparison between intrinsic values/human morals in the real world and the game/world representations that are typically studied e.g., in chess domains. These are usually smaller and less complex than a world that allows the incorporation of complex, nuances, human values and cultural information. 19. Activation patching is defined but not path patching and more complex variants. It might also be worth diving into how the patching is done (and the links to causal interventions). 20. “Causality as a theoretical foundation”: it would be great if this could/should be introduced a lot earlier? Especially when introducing patching techniques it is worth drawing the connection to causality and the theory behind a lot of this field of mechanistic interpretability methods. Even having some of the definitions of causal interventions from Pearl etc., would be helpful to readers new to this space! Requested Changes: 1. Simple change: numbering the definitions so it’s easy to refer to and go back to them. 2. More involved change: Having an index / glossary at the end that lists out all the terms defined could be helpful to readers that want to skim through these terms and link back to the context of where they were defined. (e.g., format from several papers e.g., this paper: https://arxiv.org/abs/2311.04329) 3. Addressing all comments above! Broader Impact Concerns: This paper is fairly well justified in its criticism of several aspects of the field and has a nuanced and balanced take on sensitive topics such as AI safety. I would urge the authors to take into account the above points though! ================================================== Review 3: Summary: Mechanistic Interpretability (MI) is a new avenue of research that targets reverse engineering the functioning of deep learning models. In this paper, the authors review works in this area by analyzing the fundamental concepts and hypotheses, collecting the core methods, presenting evaluations and tested scenarios, discussing applications to AI safety, and describing possible future avenues. MI so far stands as an independent branch of Explainable AI (XAI) for bottom-up explainability, comprising the study of intrinsic interpretable methods, the analysis of neural networks' dynamics, and assessing post-hoc evaluations of trained models (especially LLMs). The authors collected a series of works from these three branches to create a common ground for future research in MI. Strengths and Weaknesses: # Strenghts **Necessity of a common ground for Mechanistic interpretability research.** MI developed over the past years as a detached research area, with its own terminology and challenges, necessitating formal links to other better-known research branches in AI, especially with XAI. The reference to date introducing the notion of MI is [Olah 2020], and often cited as the primary contribution in the area, which is a blog post not followed by any peer-reviewed publication clarifying the terminology and the specific challenges of this research field. Following, other blog posts also explored aspects of MI, introducing novel ideas and terminologies, often conflating with existing literature in XAI. To this end, the effort of the authors to recollect several papers under the same umbrella and set the core concepts and challenges is valuable and can serve as a basis for future works in MI. Moreover, cross-fertilization with other AI research is promoted and could benefit both MI and XAI researchers. **Sound presentation of core ideas and useful taxonomy.** The authors contribute to pointing to the core concepts (features, circuits, motifs), and typical assumptions and hypotheses (superposition, linearity, universality, simulation). Core methods are divided into the observational vs interventional nature and into what aspects of training they target: pre/during/post- training. This taxonomy is helpful and does comprise many, if not all, works in MI. **Relevance to AI safety.** Authors point to applications of MI to safety/trustworthy AI, which could be integrated with current LLMs research. The discussion touches on both helpful and harmful aspects of MI applied to safe AI, making MI a potential new avenue of research also for trustworthy AI. **Future challenges.** Eventually, authors detect future challenges and research directions that can help improve current methodologies in MI. This is valuable when pointing to common standards that future research should share and to indicating problems and future avenues to the researchers in the field. # Weaknesses ### Clarity Several parts require more scrutiny and better formalization. While previous papers and blog posts avoid being specific in terminology and formalization, the paper should avoid adopting a confusing conceptualization. **Definitions.** In section 3, the authors introduce the notions of features, circuits, and motifs. In the presentation, however, (human-understandable) concepts are often mentioned but not explained making somehow vague how should the various notions be interpreted. The reference to concepts keeps appearing in different parts of the text and appears to be rooted in intuition. However, this poses a degree of ambiguity in definitions. In section 3.1 the authors write *'' We adopt the notion of features as the smallest units of how neural networks encode knowledge, such that features cannot be further decomposed into smaller, distinct concepts.''* which becomes vague and cannot be parsed from the text. In this respect, it is not clear (Definition: Feature, Alternative) what are the disentangled concepts. This should be made clear, especially in light of the presupposed distinction between concept-based explanations and MI methods (Sec 2 - Mechanistic). Mentioning *patterns* in (Def: Motifs) is also ambiguous and not clear from the text. If a pattern is either a feature or a circuit, it should be specified. Regarding the hypotheses, universality is not also crystal clear what is the meaning that should be attributed to *analogous*: the same, or similar (up to what)? The definition of simulation is also a bit vague, regarding what it means to ''_simulate the underlying causal process of textual data_'' and what it means to ''_optimize strongly the model_''. As a stylistic note, it would be helpful to separate into a different subsection the notions of features and circuits and the discussion on the last three hypotheses (as mentioned at the beginning of section 3). I am also not convinced that violations of the linearity hypothesis have not been observed yet: it depends on the context and clear counter-examples of some linear notion of features are present in the literature [1]. If some sort of linearity of the features holds, it should be specified in what domains and contexts, e.g. [2]. **Methods and Evaluations.** In the presentation of the core methods, at the attributional level methods based on (structured) probing have connections to XAI research. The review comprehends those that fall within MI, but it is not clear where the line should be drawn between XAI methods of this sort and MI methods. In light of the discussion in (Sec 4.1 - Structured Probes) regarding the ''_complementarity to MI efforts_'' it would be better to make clear the main differences between methods that use probes in MI and XAI. Feature Disentanglement is also mentioned only w.r.t. sparse autoencoders, but there is a rich literature in this respect, which should be mentioned in the potential future avenues of research, e.g. [1]. The presentation of Activation Patching can be improved. The explanation of the standard protocol can be made more precise by referring to Fig 7, and the example with A-OR-B and A-AND-B cannot be understood entirely. Also, it is not clear why methods based on causal abstractions and causal scrubbing are not discussed in Sec 4.2 and are related to rigorous hypothesis testing. This should be explained in the text. **Overall structure and taxonomy** While the overall structure is reasonable and offers a good distinction between methods, it would be helpful to resume what are the axes of distinctions between MI methods. The authors mention that methods can be distinguished based on their causal nature (observational/interventional) and the phase of learning they study (pre/during/post). Locality and globality are also mentioned for post-hoc methods and their comprehensiveness. These constitute other axes among which methods can be regrouped as customary in XAI research, see [3]. ### Minors **Connection to other research areas.** It is clear that MI has several intersections with other research on XAI. The main difference is the bottom-up approach of MI methods, although, it is not clear if all methods respect this general idea (e.g. MI methods based on causal abstractions and high-level concepts, called top-down, ref. Sec 8.2 - Obstacles to bottom-up interpretability). Causality also seems to play a central role, due to disentangled representations and to interventions. Thus, the intersection with other works on XAI based on concepts [4] and Causal Representation Learning (CRL) [5] can be fruitful for both research fields and MI. Recent works in CRL address the emergence of the Linear Hypothesis [6] and provably learning of (causal) concepts in foundation models [7]. Similarly, causality and CRL have been applied to defining causally interpretable representations [8]. Other works in this direction are also particularly relevant to Mechanistic Interpretability [9]. **Listing key concepts and working assumptions.** It can be helpful to provide an enlarged illustration of how MI differs from other methods and list the key concepts and hypotheses that are peculiar to MI, compared to other methods in XAI. This can help facilitate researchers approaching the field from related areas. [1] Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations - ICML, Locatello et al. (2019) \ [2] Relative representations enable zero-shot latent space communication - ICLR, Moschella et al. (2023) \ [3] Explainable AI: A Review of Machine Learning Interpretability Methods - Entropy, Linardatos et al. (2021) \ [4] Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors - ICML, Kim et al. (2018) \ [5] Towards Causal Representation Learning - IEEE, Scholkopf et al. (2021) \ [6] On the Origins of Linear Representations in Large Language Models - arxiv, Jiang et al. (2024) \ [7] Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models - arxiv, Rajendran et al. (2024) \ [8] Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning - Entropy, Marconato et al. (2023) \ [9] Impossibility theorems for feature attribution, PNAS, Bilodeau et al. (2024) Requested Changes: Authors should address the main weaknesses of their work: 1) Better formalization of the notions and concepts presented in their manuscript. It is crucial to fix the relevant parts, especially if this review appears to be the official reference for MI. 2) Taxonomy listing how MI approaches are related to one another, w.r.t. to axes the authors highlighted. This could strengthen the presentation, providing a concise overview of different works in MI. I hope the authors will also take into account other suggestions if not all, which may improve connections to other fields of research. Broader Impact Concerns: There is no major negative ethical implications concerning this work. ================================================== Metareview: Recommendation: Accept as is Comment: The paper structure and content has been appreciated by all reviewers. The aim of structuring the literature around interpretability has been found as a worthy goal, and reviewers considered that an omni-comprehensive survey could be impossible. At the same time, the initial revision of the manuscript has been criticized for missing some literature connections and references and for presenting dimensions that appeared to be detached one from another in the presentation. Lastly, some statements were imprecised and needed to be rewritten. Authors took all comments into account and provided a revised paper that substantially improved presentation and readability, as well as better covering the interpretability landscape. All reviewers are willing to accept the paper and I agree with them. Furthermore, two reviewers agreed on proposing the Survey certification, as the revised version now looks pretty polished. ==================================================
# Koopman Spectrum Nonlinear Regulators And Efficient Online Learning Motoya Ohnishi *mohnishi@cs.washington.edu* Paul G. Allen School of Computer Science & Engineering University of Washington Isao Ishikawa *ishikawa.isao.zx@ehime-u.ac.jp* Ehime University RIKEN Center for Advanced Intelligence Project Kendall Lowrey *kendall.lowrey@gmail.com* Et Cetera Robotics Masahiro Ikeda *masahiro.ikeda@riken.jp* RIKEN Center for Advanced Intelligence Project Keio University Sham Kakade *sham@seas.harvard.edu* Harvard University Yoshinobu Kawahara kawahara@ist.osaka-u.ac.jp Graduate School of Information Science and Technology, Osaka University RIKEN Center for Advanced Intelligence Project Reviewed on OpenReview: *https: // openreview. net/ forum? id= thfoUZugvS* ## Abstract Most modern reinforcement learning algorithms optimize a cumulative single-step cost along a trajectory. The optimized motions are often 'unnatural', representing, for example, behaviors with sudden accelerations that waste energy and lack predictability. In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics. This induces a broader class of dynamical behaviors that evolve over stable manifolds such as nonlinear oscillators, closed loops, and smooth movements. We demonstrate that some dynamics characterizations that are not possible with a cumulative cost are feasible in this paradigm, which generalizes the classical eigenstructure and pole assignments to nonlinear decision making. Moreover, we present a sample efficient online learning algorithm for our problem that enjoys a sub-linear regret bound under some structural assumptions. ## 1 Introduction Reinforcement learning (RL) has been successfully applied to diverse domains, such as robot control (Kober et al. (2013); Todorov et al. (2012); Ibarz et al. (2021)) and playing video games (Mnih et al. (2013; 2015)). Most modern RL problems modeled as Markov decision processes consider an immediate (single-step) cost (reward) that accumulates over a certain time horizon to encode tasks of interest. Although such a cost can encode any single realizable dynamics, which is central to inverse RL problems (Ng & Russell (2000)), the generated motions often exhibit undesirable properties such as high jerk, sudden accelerations that waste energy, and unpredictability. Intuitively, the motion specified by the task-oriented cumulative cost formulation may ignore "how to" achieve the task unless careful design of cumulative cost is in place, necessitating a systematic approach that effectively regularizes or *constrains* the dynamics to guarantee predictable global property such as stability. Meanwhile, many dynamic phenomena found in nature are known to be represented as simple trajectories, such as nonlinear oscillators, on low-dimensional manifolds embedded in high-dimensional spaces that we observe (Strogatz, 2018). Its mathematical concept is known as *phase reduction* (Winfree, 2001; Nakao, 2017), and recently its connection to the Koopman operator has been attracted much attention in response to the growing abundance of measurement data and the lack of known governing equations for many systems of interest (Koopman, 1931; Mezić, 2005; Kutz et al., 2016). In this work, we present a novel paradigm of controlling nonlinear systems based on the spectrum of the Koopman operator. To this end, we exploit the recent theoretical and practical developments of the Koopman operators (Koopman, 1931; Mezić, 2005), and propose the *Koopman spectrum cost* as the cost over the Koopman operator of controlled dynamics, defining a preference of the dynamical system in the reduced phase space. The Koopman operator, also known as the composition operator, is a linear operator over an observable space of a (potentially nonlinear) dynamical system, and is used to extract global properties of the dynamics such as its dominating modes and eigenspectrum through spectral decomposition. Controlling nonlinear systems via the minimization of the Koopman spectrum cost induces a broader class of dynamical behaviors such as nonlinear oscillators, closed loops, and smooth movements. Although working in the spectrum (or frequency) domain has been standard in the control community (e.g. Andry et al. (1983); Hemati & Yao (2017)), the use of the Koopman spectrum cost together with function approximation and learning techniques enables us to generate rich class of dynamics evolving over stable manifolds (cf. Strogatz (2018)). Our contributions. The contributions of this work are three folds: First, we propose the Koopman spectrum cost that complements the (cumulative) single-step cost for nonlinear control. Our problem, which we refer to as Koopman Spectrum Nonlinear Regulator (KSNR), is to find an optimal parameter (e.g. policy parameter) that leads to a dynamical system associated to the Koopman operator that minimizes the sum of both the Koopman spectrum cost and the cumulative cost. Note that "Regulator" in KSNR means not only controller in control problems but a more broad sense of regularization of dynamical systems for attaining specific characteristics. Second, we show that KSNR paradigm effectively encodes/imitates some desirable agent dynamics such as limit cycles, stable loops, and smooth movements. Note that, when the underlying agent dynamics is known, KSNR may be approximately solved by extending any nonlinear planning heuristics, including population based methods. Lastly, we present a (theoretical) learning algorithm for online KSNR, which attains the sub-linear regret bound (under certain condition, of order O˜( √T)). Our algorithm (Koopman-Spectrum LC3(KS-LC3)) is a modification of Lower Confidence-based Continuous Control (LC3) (Kakade et al., 2020) to KSNR problems with several technical contributions. We need structural assumptions on the model to simultaneously deal with the Koopman spectrum cost and cumulative cost. Additionally, we present a certain Hölder condition of the Koopman spectrum cost that makes regret analysis tractable for some cost such as the spectral radius. ## ✓Key Takeaways ✏ KSNR considers the *spectrum cost* that is not subsumed by a classical single-step cost or an episodic cost, and is beyond the MDP framework, which could be viewed as a generalization of eigenstructure / pole assignments to nonlinear decision making problems. The framework systematically deals with the shaping of behavior (e.g., ensuring stability, smoothness, and adherence to a target *mode* of behavior). Because we employ this new cost criterion, we strongly emphasize that *this work is not intended to* compete against the MDP counterparts, but rather to illustrate the effectiveness of the generalization we make for systematic behavior shaping. For online learning settings under unknown dynamics, the spectrum cost is unobservable because of inaccessible system models; it thus differs from other costs such as a policy cost. As such, we need specific structural assumptions that are irrelevant to the Kernelized Nonlinear Regulator (Kakade et al., 2020) to devise sample efficient (if not computationally efficient) algorithm. Proof techniques include some operator theoretic arguments that are novel in this context. ✒ ✑ Notation. Throughout this paper, R, R≥0, N, Z>0, and C denote the set of the real numbers, the nonnegative real numbers, the natural numbers ({0, 1, 2*, . . .*}), the positive integers, and the complex numbers, respectively. Also, Π is a set of dynamics parameters, and [H] := {0, 1*, . . . H* − 1} for H ∈ Z>0. The set of bounded linear operators from A to B is denoted by L(A; B), and the adjoint of the operator A is denoted by A †. We let det(·) be the functional determinant. Finally, we let ∥x∥Rd , ∥x∥1, ∥A ∥, and ∥A ∥HS be the Euclidean norm of x ∈ R d, the 1-norm (sum of absolute values), the operator norm of A ∈ L(A; B), and the Hilbert–Schmidt norm of a Hilbert-Schmidt operator A , respectively. ## 2 Related Work Koopman operator was first introduced in Koopman (1931); and, during the last two decades, it has gained traction, leading to the developments of theory and algorithm (e.g. Črnjarić-Žic et al. (2019); Kawahara (2016); Mauroy & Mezić (2016); Ishikawa et al. (2018); Iwata & Kawahara (2020); Burov et al. (2021)) partially due to the surge of interests in data-driven approaches. The analysis of nonlinear dynamical system with Koopman operator has been applied to control (e.g. Korda & Mezić (2018); Mauroy et al. (2020); Kaiser et al. (2021); Li et al. (2019); Korda & Mezić (2020)), using model predictive control (MPC) framework and linear quadratic regulator (LQR) although nonlinear controlled systems in general cannot be transformed to LQR problem even by lifting to a feature space. For unknown systems, active learning of Koopman operator has been proposed (Abraham & Murphey (2019)). Note our framework applied to stability regularization considers the solely different problem than that of (Mamakoukas et al., 2023). In the context of stability regularization, our framework is *not for learning the stable Koopman operator or* to construct a control Lyapunov function from the learned operator but to solve the regulator problem to balance the (possibly task-based) cumulative cost and the spectrum cost that enforces stability. We will revisit this perspective in Section 3.3. The line of work that is most closely related to ours is the eigenstructure / pole assignments problem (e.g. Andry et al. (1983); Hemati & Yao (2017)) classically considered in the control community particularly for linear systems. In fact, in the literature on control, working in frequency domain has been standard (e.g. Pintelon & Schoukens (2012); Sabanovic & Ohnishi (2011)). These problems aim at computing a feedback policy that generates the dynamics whose eigenstructure matches the desired one; we note such problems can be naturally encoded by using the Koopman spectrum cost in our framework as well. In connection with the relation of the Koopman operator to stability, these are in principle closely related to the recent surge of interest in neural networks to learn dynamics with stability (Manek & Kolter, 2019; Takeishi & Kawahara, 2021). In the RL community, there have been several attempts of using metrics such as mutual information for acquisitions of the *skills* under RL settings, which are often referred to as unsupervised RL (e.g. Eysenbach et al. (2018)). These work provide an approach of effectively generating desirable behaviors through compositions of skills rather than directly optimizing for tasks, but are still within cumulative (single-step) cost framework. Historically, the motor primitives investigated in, for example, (Peters & Schaal, 2008; Ijspeert et al., 2002; Stulp & Sigaud, 2013) have considered parametric nonlinear dynamics having some desirable properties such as stability, convergence to certain attractor etc., and it is related to the Koopman spectrum regularization in the sense both aim at regulating the global dynamical properties. Those primitives may be discovered by clustering (cf. Stulp et al. (2014)), learned by imitation learning (cf. Kober & Peters (2010)), and coupled with meta-parameter learning (e.g. Kober et al. (2012)). Finally, as related to the sample efficient algorithm we propose, provably correct methods (e.g. Jiang et al. (2017); Sun et al. (2019)) have recently been developed for continuous control problems (Kakade et al., 2020; Mania et al., 2020; Simchowitz & Foster, 2020; Curi et al., 2020). Below, we present our control framework, KSNR, in Section 3 with several illustrative numerical examples based on population based policy search (e.g. genetic algorithm), followed by an introduction of its example online learning algorithm (Section 4) with its theoretical insights on sample complexity and on reduction of the model to that of eigenstructure assignments problem as a special case. For more details about population based search that repeatedly evaluates the sampled actions to update the sampling distribution of action sequence so that the agent can achieve lower cost, see for example (Beheshti & Shamsuddin, 2013). ## 3 Koopman Spectrum Nonlinear Regulator In this section, we present the dynamical system model and our main framework. ## 3.1 Dynamical System Model Let X ⊂ R dX be the state space, and Π a set of parameters each of which corresponds to one random dynamical system (RDS) as described below. Given a parameter Θ ∈ Π, let (ΩΘ, PΘ) be a probability space, where ΩΘ is a measurable space and PΘ is a probability measure. Let µΘ := (µΘ(r))r∈N be a semi-group of measure preserving measurable maps on ΩΘ (i.e., µΘ(r) : ΩΘ → ΩΘ). This work studies the following nonlinear regulator (control) problem: for each parameter Θ ∈ Π, the corresponding nonlinear random dynamical system is given by $${\mathcal{F}}^{\Theta}:\mathbb{N}\times\Omega_{\Theta}\times{\mathcal{X}}\to{\mathcal{X}},$$ * [3.1] that satisfies $${\mathcal{F}}^{\Theta}(0,\omega,x)=x,\ \ {\mathcal{F}}^{\Theta}(r+s,\omega,x)={\mathcal{F}}^{\Theta}(r,\mu_{\Theta}(s)\omega,{\mathcal{F}}^{\Theta}(s,\omega,x)),\ \ \forall r,s\in\mathbb{N},\ \omega\in\Omega_{\Theta},\ x\in{\mathcal{X}}.$$ The above definition of random dynamical system is standard in the community studying dynamical systems (refer to (Arnold, 1998), for example). Roughly speaking, an RDS consists of the following two models: - A model of the *noise*; - A function representing the *physical* dynamics of the system. RDSs subsume many practical systems including solutions to stochastic differential equations and additivenoise systems, i.e., $$x_{h+1}=f(x_{h})+\eta_{h},\ \ x_{0}\in\mathbb{R}^{d},\ \ h\in[H],$$ $$h\in[H],$$ where f : R dX → R dX represents the dynamics, and ηh ∈ R dX is the zero-mean i.i.d. additive noise vector. Let Ω0 be a probability space of the noise, and one could consider Ω := ΩN 0 (and µ is the shift) to express the system as an RDS. Also, Markov chains can be described as an RDS by regarding them as a composition of i.i.d. random transition and by following similar arguments to the above (see Arnold (1998, Theorem 2.1.6)). As an example, consider the discrete states {s1, s2} and the transition matrix $${\mathrm{Transition~Matrix}}:={\left[\begin{array}{l l}{0.8}&{0.2}\\ {0.1}&{0.9}\end{array}\right]}=0.7{\left[\begin{array}{l l}{1}&{0}\\ {0}&{1}\end{array}\right]}+0.2{\left[\begin{array}{l l}{0}&{1}\\ {0}&{1}\end{array}\right]}+0.1{\left[\begin{array}{l l}{1}&{0}\\ {1}&{0}\end{array}\right]},$$ where the right hand side shows a decomposition of the transition matrix to deterministic transitions. For this example, one can naturely treat this Markov chain as an RDS that generates each deterministic transition with corresponding probability at every step. More intuitively, dynamical systems with an *invariant noise-generating mechanism* could be described as an RDS by an appropriate translation (see Figure 1 (Left) for an illustration of an RDS). Koopman operator For the random dynamical systems being defined above, we define the operatorvalued map K below, using the dynamical system model. Definition 3.1 (Koopman operator). Let H be a function space on X over C and let {FΘ}Θ∈Π be a dynamical system model. We define an operator-valued map K by K : Π → L(H, H) such that for any Θ ∈ Π and g ∈ H, $$[{\mathcal{K}}(\Theta)g](x):=\mathbb{E}_{\Omega_{\Theta}}\big[g\circ{\mathcal{F}}^{\Theta}(1,\omega,x)\big],\quad x\in{\mathcal{X}}.$$ We will choose a suitable H to define the map K , and K (Θ) is the Koopman operator for F Θ. Essentially, the Koopman operator represents a nonlinear dynamics as a linear (infinite-dimensional) operator that describes the evolution of *observables* in a lifted space (see Figure 1 Right). ![4_image_0.png](4_image_0.png) Figure 1: Left: Random dynamical system consists of a model of the *noise* and a function representing the physical phase space (the illustration is inspired by Arnold (1998); Ghil et al. (2008)). The RDS flows over sample space and phase space for each realization ω and for initial state x0. Right: By lifting the state space to a space of observables, a nonlinear dynamical system over the state space is represented by the linear operator in a lifted space. Remark 3.1 (Choice of H and existence of K ). In an extreme (but useless) case, one could choose H to be a one dimensional space spanned by a constant function, and K can be defined. In general, the properties of the Koopman operator depend on the choice of the space on which the operator is defined. As more practical cases, if one employs a Gaussian RKHS for example, the only dynamics inducing bounded Koopman operators are those of affine (e.g. Ishikawa (2023); Ikeda et al. (2022)). However, some practical algorithms have recently been proposed for an RKHS to approximate the eigenvalues of so-called "extended" Koopman operator through some appropriate computations under certain conditions on the dynamics and on the RKHS (cf. Ishikawa et al. (2024)). With these settings in place, we propose our framework. ## 3.2 Koopman Spectrum Nonlinear Regulator Fix a set X0 := {(x0,0, H0),(x0,1, H1), . . . ,(x0,N−1, HN−1)*} ⊂ X ×*Z>0, for N ∈ Z>0, and define c : X → R≥0 be a cost function. The Koopman Spectrum Nonlinear Regulator (KSNR), which we propose in this paper, is the following optimization problem: $$\mathrm{Find}\;\;\Theta^{\star}\in\operatorname*{arg\,min}_{\Theta\in\Pi}\left\{\Lambda[{\mathcal{K}}(\Theta)]+J^{\Theta}(X_{0};c)\right\},$$ where Λ : L(H; H) → R≥0 is a mapping that takes a Koopman operator as an input and returns its cost; and $$J^{\Theta}(X_{0};c):=\sum_{n=0}^{N-1}\mathbb{E}_{\Omega_{\Theta}}\left[\sum_{h=0}^{H_{n}-1}c(x_{h,n})\Big|\Theta,x_{0,n}\right],$$ $$(3.2)$$ where xh,n(ω) := F Θ(*h, ω, x*0,n). In control problem setting, the problem (3.2) can be read as finding a control policy Θ∗, which minimizes the cost; note, each control policy Θ generates a dynamical system that gives the costs Λ[K (Θ)] and J Θ(X0; c) in this case. However, we mention that the parameter Θ can be the physics parameters used to design the robot body for automated fabrication or any parameter that uniquely determines the dynamics. Example 3.1 (Examples of Λ). Some of the examples of Λ are: 1. Λ[A ] = max {1, ρ (A )}, where ρ(A ) is the spectral radius of A , prefers stable dynamics. ![5_image_0.png](5_image_0.png) Figure 2: Comparisons of several costs for decision making problems. The Koopman spectrum cost is the cost over the global properties of the dynamical system itself which is typically unknown for learning problems, and is unobservable. 2. Λ[A ] = ℓA ⋆ (A ) can be used for imitation learning, where ℓA ⋆ (A ) : L(H; H) → R≥0 is a loss function measuring the gap between A and the given A ⋆ ∈ L(H; H). 3. Λ[A ] = Pi |λi (A )|, prefers agent behaviors described by fewer dominating modes. Here, {λi}i∈Z>0 is the set of eigenvalues of the operator A (assuming that the operator has discrete spectrum). Assuming that the Koopman operator is defined over a finite-dimensional space, an eigenvalue of the operator corresponding to each eigenfunction can be given by that of the matrix realization of the operator. In practice, one employs a finite-dimensional space even if it is not invariant under the Koopman operator; in such cases, there are several analyses that have recently been made for providing estimates of the spectra of the Koopman operator through computations over such a finite-dimensional space. Interested readers may be referred to (Ishikawa et al., 2024; Colbrook & Townsend, 2024; Colbrook et al., 2023) for example. In particular, avoiding spectral pollution, which refers to the phenomena where discretizations of an infinite-dimensional operator to a finite matrix create spurious eigenvalues, and approximating the continuous spectra have been actively studied with some theoretical convergence guarantees for the approximated spectra. In order to obtain an estimate of the spectral radius through computations on a matrix of finite rank, the Koopman operator may need to be compact (see the very recent work Akindji et al. (2024) for example). Remark 3.2 (Remarks on how the choice of H affects the Koopman spectrum cost). As we have discussed in Remark 3.1, the mathematical properties of the Koopman operator depend on the function space H, and information of the background dynamics implied by this operator depends on the choice of H; however their spectral properties typically capture global characterstics of the dynamics through its linearity. We mention that the Koopman operator over H may *fully represents* the dynamics in the sense that it can reproduce the dynamical system over the state space (i.e., the original space) under certain conditions (see Ishikawa et al. (2024) for example on detailed discussions); however, depending on the choice of H, it is often the case that the Koopman operator does not uniquely reproduce the dynamics. The extreme case of such an example is the function space of single dimension spanned by a constant function, for which the Koopman operator does not carry any information about the dynamics. Even for the cases where the Koopman operator over the chosen space H only captures *partial information* on the dynamics, the spectrum cost is still expected to act as a regularizer of such partial spectral properties of the dynamics. As also mentioned in Remark 3.2, the Koopman spectrum cost regularizes certain global properties of the generated dynamical system; as such, it is advantageous to employ this cost Λ over sums of single-step costs c(x) that are the objective in MDPs especially when encoding stability of the system for example. To illustrate this perspective more, we consider generating a desirable dynamical system (or trajectory) as a solution to some optimization problem. ![6_image_0.png](6_image_0.png) Figure 3: While single-step costs (taking the current and next states as input) could be used to specify every transition, acting as a "local" cost, the Koopman spectrum cost regularizes "global" characteristics of the dynamics through specifying its spectral properties (e.g., by forcing the dynamics to have some given mode m∗ as its top mode). The regularization incurred by the Koopman spectrum cost may not be implemented by the cumulative cost formulation in a straightforward manner. We mention it has some relations to the skill learning with motor primitives (see Section 2) in the sense that both aim at regulating the global dynamical properties. Let X = R, υ ∈ (0, 1], and let c : X → R≥0 be nonconstant cost function. We consider the following loss function for a random dynamical system F : N × Ω *× X → X* : $$\ell({\mathcal{F}},x):=\mathbb{E}_{\Omega}\sum_{h=0}^{\infty}v^{h}c\left({\mathcal{F}}(h,\omega,x_{0})\right).$$ Now consider the dynamical system F(1*, ω, x*) = −x, ∀x ∈ X , ∀ω ∈ Ω, for example. Then, it holds that, for any choice of υ and c, there exists another random dynamical system G satisfying that for all x ∈ X , $$\ell({\mathcal{G}},x)<\ell({\mathcal{F}},x).$$ This fact indicates that there exists a dynamical system that cannot be described by a solution to the above optimization. Note, however, that given any deterministic map f ⋆: *X → X* , if one employs a (different form of) cost c : *X × X →* R≥0, where c(*x, y*) evaluates to 0 only if y = f ⋆(x) and otherwise evaluates to 1, it is straightforward to see that the dynamics f ⋆is the one that simultaneously optimizes c(*x, y*) for any x ∈ X . In other words, this form of single-step cost uniquely identifies the (deterministic) dynamics by defining its evaluation at each state (see Figure 3 (Left)). In contrast, when one wishes to constrain or regularize the dynamics globally in the (spatio-temporal) spectrum domain, to obtain the set of stable dynamics for example, single-step costs become powerless. Intuitively, while cumulative cost can effectively determine or regularize one-step transitions towards certain state, the Koopman spectrum cost can characterize the dynamics globally (see Figure 3 (Right)). Refer to Appendix C for more formal arguments. Although aforementioned facts are simple, they give us some insights on the limitation of the use of (cumulative) single-step cost for characterizing dynamical systems, and imply that enforcing certain constraints on the dynamics requires the Koopman spectrum perspective. This is similar to the problems treated in the Fourier analysis; where the global (spectral) characteristics of the sequential data are better captured in the frequency domain. Lastly, we depicted how the spectrum cost differs from other types of costs used in decision making problems in Figure 2. The figure illustrates that the spectrum cost is not incurred on a single-step or on one trajectory ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) no spectrum cost -with spectrum cost Figure 4: Left: We minimize solely for Koopman spectrum cost A(@) = |m-m*|1 to imitate the top mode of a reference spectrum to recover a desired limit cycle behavior for the single-integrator system. Right: By regularizing the spectral radius of Cartpole with a cumulative cost that favors high velocity, the cartpole performs a stable oscillation rather than moving off to infinity. ![7_image_2.png](7_image_2.png) Figure 5: The joint angle trajectories generated by a combination of linear and RFF policies. Left: when only cumulative reward is maximized. Right: when both the cumulative cost and the spectrum cost 1( ( ) = 5 == 1 |>i(x) | are used, where the factor 5 is multiplied to balance between the spectrum cost and the cumulative cost. (episode), but is over a (part of) the global properties of the dynamical system model (see also Remark 3.2). The dynamics typically corresponds to a policy, but a policy regularization cost used in, for example (Haarnoja et al., 2018), is computable when given the current policy while the spectrum cost is unobservable if the dynamics is unknown (and hence is not directly computable). ## 3.3 Simulated Experiments We illustrate how one can use the costs in Example 3.1. See Appendix H for detailed descriptions and results of the experiments. Throughout, we used Julia language (Bezanson et al., 2017) based robotics control package, Lyceum (Summers et al., 2020), for simulations and visualizations. Also, we use Cross- Entropy Method (CEM) based policy search (Kobilarov (2012); one of the population based policy search techniques) to optimize the policy parameter O to minimize the cost in (3.2). Specifically, at each iteration of CEM, we generate many parameters (Os) to compute the loss (i.e., the sum of the Koopman spectrum cost and negative cumulative reward). This is achieved by fitting the transition data to the chosen feature space to estimate its (finite-dimensional realization of) Koopman operator (see Appendix H.1); here the data are generated by the simulator which we assume to have access to. Imitating target behaviors through the Koopman operators We consider the limit cycle dynamics r˙ = r(1 − r 2),˙θ = 1, described by the polar coordinates, and find the Koopman operator for this dynamics by sampling transitions, assuming H is the span of Random Fourier Features (RFFs) (Rahimi & Recht, 2007). We illustrate how KSNR is used to imitate the dynamics; in particular, by imitating the *Koopman modes*, we expect that some physically meaningful dynamical behaviors of the target system can be effectively reconstructed. To define Koopman modes, suppose that the Koopman operator A ⋆induced by the target dynamics has eigenvalues λi ∈ C and eigenfunctions ξi: X → C for i ∈ {1, 2*, . . . , d*ϕ}, i.e., $${\mathcal{A}}^{\star}\xi_{i}=\lambda_{i}\xi_{i}.$$ If the set of observables ϕis satisfies $$\phi_{x}=\sum_{i=1}^{d_{\phi}}\xi_{i}(x){\bf m}_{i}^{\star},$$ for m⋆ i ∈ C dϕ , where ϕx := [ϕ1(x), ϕ2(x)*, . . . , ϕ*dϕ (x)]⊤ ∈ R dϕ , then m⋆ i s are called the Koopman modes. The Koopman modes are closely related to the concept *isostable*; interested readers are referred to (Mauroy et al., 2013) for example. In this simulated example, the target system is being imitated by forcing the generated dynamics to have (approximately) the same top mode (i.e., the Koopman modes corresponding to the largest absolute eigenvalue) that dominates the behavior. To this end, with Π a space of RFF policies that define r˙ and ˙θ as a single-integrator model, we solve KSNR for the spectrum cost Λ(A ) = ∥m − m⋆∥1, where m ∈ C dϕ and m⋆ are the top modes of the Koopman operator induced by the generated dynamics and of the target Koopman operator found previously, respectively. Figure 4 (Left) plots the trajectories (of the Cartesian coordinates) generated by RFF policies that minimize this cost; it is observed that the agent successfully converged to the desired limit cycle of radius one by imitating the dominant mode of the target spectrum. Generating stable loops (Cartpole) We consider Cartpole environment (where the rail length is extended from the original model). The single-step reward (negative cost) is 10−3|v| where v is the velocity of the cart, plus the penalty −1 when the pole falls down (i.e., directing downward). This single-step reward encourages the cartpole to maximize its velocity while preferring not to let the pole fall down. The additional spectrum cost considered in this experiment is Λ(A ) = 104 max(1, ρ(A )), which highly penalizes spectral radius larger than one; it therefore regularizes the dynamics to be stable, preventing the velocity from ever increasing its magnitude. Figure 4 (Right) plots the cart velocity trajectories generated by RFF policies that (approximately) solve KSNR with/without the spectrum cost. It is observed that spectral regularization led to a back-and-forth motion while the non-regularized policy preferred accelerating to one direction to solely maximize velocity. When the spectrum cost was used, the cumulative rewards were 0.072 and the spectral radius was 0.990, while they were 0.212 and 1.003 when the spectrum cost was not used; limiting the spectral radius prevents the ever increasing change in position. We mention that this experiment is not intended to force the dynamics to have this oscillation, but rather to let the dynamics be stable in the sense that the Koopman operator over a chosen space has spectral radius that is less than or equal to one (see Figure 3 to review the difference of concepts and roles of cumulative cost and Koopman spectrum cost). See more experimental results in Appendix I. Remark 3.3 (Remarks on the stability regularization). As mentioned in Section 2, this stability regularization is not meant to learn the stable Koopman operator but to find an RDS that balances the cumulative cost and the spectrum cost that enforces stability; in other words, the task is to find a dynamical system that minimizes the cumulative cost under the hard stability constraint encoded by the spectrum cost. Here, ![9_image_0.png](9_image_0.png) Figure 6: Examples of attractor dynamics that are stable; from the left, they are a fixed-point attractor, limit cycle attractor, and strange attractor. Those should be included in the set of stable dynamics. we emphasize that the set of stable dynamics should include a variety of dynamics ranging from a fixed-point attractor to a strange attractor as portrayed in Figure 6. On the other hand, constraining dynamics by a single control Lyapunov function (cf. Freeman & Primbs (1996)) will form a strictly smaller set of stable dynamics (see the discussions in Section 3.2 as well). Also, in this simulation example, the stability does not necessarily mean the behaviors converging to the zero state as long as the velocity does not increase indefinitely, and the preference of keeping the pole straight up is encoded by the cumulative cost term instead. Generating smooth movements (Walker) We use the Walker2d environment and compare movements with/without the spectrum cost. The single-step reward (negative cost) is given by v−0.001∥a∥ 2R6 , where v is the velocity and a is the action vector of dimension 6. The spectrum cost is given by Λ(A ) = 5Pdϕ i=1 |λi(A )|, where the factor 5 is multiplied to balance between the spectrum and cumulative cost. The single-step reward encourages the walker to move to the right as fast as possible with a small penalty incurred on the action input, and the spectrum cost regularizes the dynamics to have fewer dominating modes, which intuitively leads to *smoother* motions. Figure 5 plots typical joint angles along a trajectory generated by a combination of linear and RFF policies that (approximately) solves KSNR with/without the spectrum cost. It is observed that the spectrum cost led to simpler (i.e., some joint positions converge) and smoother dynamics while doing the task sufficiently well. With the spectrum cost, the cumulative rewards and the spectrum costs averaged across four random seeds (plus-minus standard deviation) were 584 ± 112 and 196 ± 8.13. Without the spectrum cost, they were 698 ± 231 and 310 ± 38.6. We observe that, as expected, the spectrum cost is lower for KSNR while the classical cumulative reward is higher for the behavior generated by the optimization without spectrum cost. Again, we emphasize that we are not competing against the MDP counterparts in terms of the cumulative reward, but rather showing an effect of additional spectrum cost. Please also refer to Appendix H and I for detailed results. ## 4 Theoretical Algorithm Of Online Ksnr And Its Insights On The Complexity In this section, we present a (theoretical) learning algorithm for online KSNR. Although the KSNR itself is a general regulator framework, we need some structural assumptions to the problem in order to devise a sample efficient (if not computation efficient) algorithm. Nevertheless, despite the unobservability of the spectrum cost, KSNR admits a sample efficient model-based algorithm through operator theoretic arguments under those assumptions. Here, "model-based" simply means we are not directly learning the spectrum cost itself but the Koopman operator model. We believe the algorithm and its theoretical analysis clarify the intrinsic difficulty of considering the spectrum cost, and pave the way towards future research. The learning goal is to find a parameter Θ⋆t that satisfies (3.2) at each episode t ∈ [T]. We employ episodic learning, and let Θt be a parameter employed at the t-th episode. Adversary chooses Xt0 := {(x t 0,0 , Ht0 ),(x t 0,1 , Ht1 )*, . . . ,*(x t 0,Nt−1 , HtNt−1 )*} ⊂ X ×* Z>0, where Nt ∈ Z>0, and the cost function c t at the beginning of each episode. Let c t h,n := c t(x t h,n). ω ∈ ΩΘt is chosen according to PΘt . Algorithm evaluation. In this work, we employ the following performance metric, namely, the cumulative regret: $$\mathrm{Regret}_{T}:=\sum_{t=0}^{T-1}\left(\Lambda[\mathcal{K}(\Theta^{t})]+\sum_{n=0}^{N^{t}-1}\sum_{h=0}^{H_{n}^{t}-1}c_{h,n}^{t}\right)-\sum_{t=0}^{T-1}\min_{\Theta\in\Pi}\left(\Lambda[\mathcal{K}(\Theta)]+J^{\Theta}(X_{0}^{t};c^{t})\right).$$ Note the minimum on the second term on the right hand side is taken at every episode. Below, we present model assumptions and an algorithm with a regret bound. ## 4.1 Models And Algorithms We make the following modeling assumptions. Assumption 1. Let K (Θ) be the Koopman operator corresponding to a parameter Θ. Then, assume that there exists a finite-dimensional subspace H0 on X over R and its basis ϕ1*, . . . , ϕ*dϕ such that the random dynamical system ( 3.1*) satisfies the following:* $$\forall\Theta\in\Pi,\ \forall x\in{\mathcal{X}}:\ \phi_{i}({\mathcal{F}}^{\Theta}(1,\omega,x))=[{\mathcal{K}}(\Theta)\phi_{i}](x)+\epsilon_{i}(\omega),$$ where the additive noise ϵi(ω) ∼ N (0, σ2) is assumed to be independent across timesteps, parameters Θ*, and* indices i. Remark 4.1 (On Assumption 1). Although the added noise term is expected to deal with stochasticity and misspecification to some extent in practice, Assumption 1 is strong to ask for; in fact, claiming existence of the Koopman operator over a useful RKHS (e.g., with Gaussian kernel) is not trivial for most of the practical problems. Studying *misspecified case* with small error margin is an important future direction of research; however, as our regulator problem is novel, we believe this work guides the future attempts of further algorithmic research. Assumption 2 (Function-valued RKHS (see Appendix A for details)). K (·)ϕ for ϕ ∈ H *is assumed to live* in a known function valued RKHS with the operator-valued reproducing kernel K(·, ·) : Π × Π → L(H; H), or equivalently, there exists a known map Ψ : Π → L(H; H′) for a specific Hilbert space H′*satisfying for any* ϕ ∈ H, there exists ψ ∈ H′*such that* $${\mathcal{H}}(\cdot)\phi=\Psi(\cdot)^{\dagger}\psi.$$ $$(4.1)$$ †ψ. (4.1) Remark 4.2 (On Assumption 2). The assumption intuitively states that the *structure* on how the closeness of parameters Θs relates to that of the resulting Koopman operators is known. One can always consider an expressive function-valued RKHS in practice, but it may lead to increased (effective) dimensions that will require more samples to learn. When Assumption 2 holds, we obtain the following claim which is critical for our learning framework. Lemma 4.3. Suppose Assumption 2 holds. Then, there exists a linear operator M⋆: H → H′*such that* $${\mathcal{H}}(\Theta)=\Psi(\Theta)^{\dagger}\circ M^{\star}.$$ In the reminder of this paper, we work on the invariant subspace H0 in Assumption 1 and thus we regard H = H0 ∼= R dϕ , L(H; H) ∼= R dϕ×dϕ , and, by abuse of notations, we view K (Θ) as the realization of the operator over R dϕ , i.e., $$\phi_{{\mathcal{F}}^{\Theta}(1,\omega,x)}={\mathcal{K}}(\Theta)\phi_{x}+\epsilon(\omega)=[\Psi(\Theta)^{\dagger}\circ M^{\star}]\phi_{x}+\epsilon(\omega),$$ where ϕx := [ϕ1(x), ϕ2(x)*, . . . , ϕ*dϕ (x)]⊤ ∈ R dϕ , and ϵ(ω) := [ϵ1(ω), ϵ2(ω)*, . . . , ϵ*dϕ $$\iota_{\phi}(\omega)]^{\top}\in\mathbb{R}^{d_{\phi}}.$$ Finally, we assume the following. Assumption 3 (Realizability of costs). For all t*, the single-step cost* c t*is known and satisfies* c t(x) = w t(ϕx ) for some known map w t: R dϕ → R≥0. Algorithm 1 Koopman-Spectrum LC3(KS-LC3) Require: Parameter set Π; regularizer λ 1: Initialize Ball0M to be a set containing M⋆. 2: for t = 0 *. . . T* − 1 do 3: Adversary chooses Xt0 . 4: Θt, Mˆt = arg minΘ∈Π, M∈BalltM Λ[Ψ(Θ)† ◦ M] + J Θ(Xt0 ; M; c t) 5: Under the dynamics F Θt, sample transition data τ t:= {τ t n} Nt−1 n=0 , where τ t n := {x t h,n} Htn h=0 6: Update Ballt+1 M . 7: end for Remark 4.4 (On Assumption 3). As mentioned in Remark 3.2, the space H0 over which the Koopman operator is acting should be properly chosen so that the Koopman operator exists and that its spectrum cost has desirable regularization effect over the dynamics. At the same time, Assumption 3 requires that H0 is sufficiently *expressive* in the sense that it can capture the immediate cost c t. For later use, we define, for all t ∈ [T], n ∈ [Nt], and h ∈ [Htn ]; A t h,n ∈ L(L(H; H′); H) and Bt ∈ L(L(H; H′);L(H; H)) by $${\mathcal A}_{h,n}^{t}(M)=\left[\Psi(\Theta^{t})^{\dagger}\circ M\right]\left(\phi_{x_{h,n}^{t}}\right),\qquad{\mathcal B}^{t}(M)$$ $$(4.2)$$ h,n , Bt(M) = Ψ(Θt) †◦ M. Remark 4.5 (Hilbert-Schmidt operators). Both A t h,n and Bt are Hilbert-Schmidt operators because the ranges H and L(H; H) are of finite dimension. Note, in case H′is finite, we obtain $${\cal A}^{\star})+\epsilon(\omega),$$ $\mathbf{a}\in\mathbb{R}^n$ ϕFΘ(1*,ω,x*) = Ψ(Θ)†M⋆ϕx + ϵ(ω) = (ϕ † x ⊗ Ψ(Θ)†)vec(M⋆) + ϵ(ω), (4.2) where vec is the vectorization of matrix. With these preparations in mind, our proposed information theoretic algorithm, which is an extension of LC3to KSNR problem (estimating the true operator M⋆) is summarized in Algorithm 1. 1 This algorithm assumes the following oracle. Definition 4.1 (Optimal parameter oracle). Define the oracle, OptDynamics, that computes the minimization problem (3.2) for any K , X0, Λ and c tsatisfying Assumption 3. ## 4.2 Information Theoretic Regret Bounds Here, we present the regret bound analysis. To this end, we make the following assumptions. Assumption 4. The operator Λ *satisfies the following (modified) Hölder condition:* ∃L ∈ R≥0, ∃α ∈ (0, 1], ∀A ∈ L(H, H), ∀Θ ∈ Π, |Λ[A ] − Λ[K (Θ)]| ≤ L · max n∥A − K (Θ)∥ 2, ∥A − K (Θ)∥ αo. Further, we assume there exists a constant Λmax ≥ 0 such that, for any Θ ∈ Π and for any M ∈ *Ball*0M, Λ[Ψ(Θ)†◦ M] ≤ Λmax. Remark 4.6 (On Assumption 4). This assumption does not preclude practical examples such as matrix norms (because of the triangle inequality) and the spectral radius as described below. However, we note that the spectrum cost in Section 3.3 used for top mode imitation, namely, Λ(A ) = ∥m1 − m⋆ 1∥1, does not satisfy this (modified) Hölder condition in general; therefore this cost might not be used for KS-LC3. For example, for spectral radius ρ, the following proposition holds using the result from (Song, 2002, Corollary 2.3). 1See Appendix B for the definitions of the values in this algorithm. Proposition 4.7. *Assume there exists a constant* Λmax ≥ 0 such that, for any Θ ∈ Π *and for any* M ∈ Ball0M, ρ(Ψ(Θ)† ◦ M) ≤ Λmax*. Let the Jordan condition number of* K (Θ) *be the following:* $$\kappa:=\operatorname*{sup}_{\Theta\in\Pi}\operatorname*{inf}_{\mathcal{Q}(\Theta)}\left\{\|\mathcal{Q}(\Theta)\|\|\mathcal{Q}(\Theta)^{-1}\|:\mathcal{Q}(\Theta)^{-1}\mathcal{K}(\Theta)\mathcal{Q}(\Theta)=\mathcal{J}(\Theta)\right\},$$ where J (Θ) *is a Jordan canonical form of* K (Θ) *transformed by a nonsingular matrix* Q(Θ)*. Also, let* m be the order of the largest Jordan block. Then, if κ < ∞*, the cost* Λ(A ) = ρ(A ) satisfies the Assumption 4 for $$L:=(1+\kappa)d_{\phi}^{2}(1+\sqrt{d_{\phi}-1}),~~~~\alpha=\frac{1}{m}.$$ Note when K (Θ) is diagonalizable for all Θ, α = 1. Assumption 5. For every t, adversary chooses Nt*trajectories such that* {ϕx t 0,n } *satisfies that the smallest* eigenvalue of PNt−1 n=0 ϕx t 0,n ϕ † x t 0,n is bounded below by some constant C > 0*. Also, there exists a constant* H ∈ Z>0 *such that for every* t,PNt−1 n=0 Htn ≤ H. Remark 4.8 (On Assumption 5). In practice, user may wait to end an episode until a sufficient number of trajectories is collected in order for the assumption to hold. Intuitively, this condition ensures that fitting the transition data over the feature space is not ill-posed. Note, this assumption does not preclude the necessity of exploration: If the sole purpose is just to fit the data for single parameter Θ, we may not need a well-designed exploration strategy in this case; however, because the smallest eigenvalue of PNt−1 n=0PHtn−1 h=0 A t h,n †A t h,n is in general not bounded below by a positive constant, we still need careful design of exploration. We mention that this assumption plays a role in our regret analysis to simultaneously manage the cumulative cost and the spectrum cost through the use of common confidence balls; note the former cares about each single-step transition while the latter deals with the global dynamical properties represented as the Koopman operator realized over a given space. The direct use of this assumption is highlighted by Lemma F.2 in Appendix F (derived from our *positive* operator norm bounding lemma; see Lemma F.1) which is used to basically bound the norm of difference between the true Koopman operator and the estimated one at each episode by the multiple of *radius* of the confidence ball. We hope that this assumption can be relaxed by assuming the bounds on the norms of ϕx0,n and K (Θ) and by using matrix Bernstein inequality under additive Gaussian noise assumption. Lastly, the following assumption is the modified version of the one made in (Kakade et al., 2020). Assumption 6. [Modified version of Assumption 2 in (Kakade et al., *2020)] Assume there exists a constant* Vmax > 0 *such that, for every* t, $$\operatorname*{sup}_{\Theta\in\Pi}\;\sum_{n=0}^{N^{t}-1}\,\mathbb{E}_{\Omega_{\Theta}}\left[\left(\sum_{h=0}^{H_{n}^{t}-1}\,c^{t}(x_{h,n}^{t})\right)^{2}\;\;\Bigg|\;\Theta,x_{0,n}^{t}\right]\leq V_{\operatorname*{max}}.$$ Remark 4.9 (On Assumption 6). This is a slight modification of Assumption 2 in (Kakade et al., 2020) to adjust to our problem settings; this assumption *does not* state that the cost function is bounded over the state space but the second moment of *realized* cumulative cost is bounded, and the complexity depends on this bound. Theorem 4.10. Suppose Assumption 1 to 6 *hold. Set* λ =σ 2 ∥M⋆∥2 *. Then, there exists an absolute constant* C1 such that, for all T, KS-LC3(Algorithm 1) using OptDynamics *enjoys the following regret bound:* $\mathbb{E}_{\rm KS-LC^{3}}\left[\mbox{\it{REGRET}}_{T}\right]\leq C_{1}(\tilde{d}_{T,1}+\tilde{d}_{T,2})T^{1-\frac{a}{2}}$ where $$\begin{array}{l}{{\bar{d}_{2,1}^{2}:=(1+\gamma_{T,\mathcal{B}})\left[L^{2}(1+C^{-1})^{2}\bar{\beta}_{2,T}+(L^{2}+\Lambda_{\mathrm{max}}L)(1+C^{-1})\bar{\beta}_{1,T}+\Lambda_{\mathrm{max}}^{2}+L^{2}\right],}}\\ {{\bar{d}_{7,2}^{2}:=\gamma_{T,\mathcal{A}}H V_{\mathrm{max}}\left(\gamma_{T,\mathcal{A}}+d_{\phi}+\log(T)+H\right),}}\\ {{\bar{\beta}_{1,T}:=\sigma^{2}(d_{\phi}+\log(T)+\gamma_{T,\mathcal{A}}),}}\\ {{\bar{\beta}_{2,T}:=\sigma^{4}((d_{\phi}+\log(T)+\gamma_{T,\mathcal{A}})^{2}+\gamma_{2,T,\mathcal{A}}).}}\end{array}$$ Here, γT ,A , γ2,T ,A , and γT ,B are the expected maximum information gains that scale (poly-)logarithmically with T under practical settings (see Appendix B *for details).* Remark 4.11 (On the order). We note that, when α = 1, we obtain the order of O˜( √T). Remark 4.12 (On the adversary). In our setting, the adversary only chooses the initial states, their time horizons, and the immediate cost function. The trajectories themselves are generated by the learner's parameter Θ. Although the regret bound given above is valid if the assumptions hold regardless of the adversary, a caveat here is the bound H and Vmax appearing in the regret bound can potentially depend on the choice of the adversary. The proof techniques include our *positive operator norm bounding lemma* (see Lemma F.1 in Appendix F), which is another crucial operator theoretic lemma in this work in addition to Lemma 4.3. ## 4.3 Relations To The Kernelized Nonlinear Regulator And To Eigenstructure Assignments First, we mention that our theoretical results apply some of the techniques developed in the work of KNR; and in fact, additional theoretical arguments that are necessitated for KSNR framework highlight the essential difference of our framework from the MDP counterpart. As mentioned in Remark 4.1, for a given dynamics described by the system model studied in the KNR (i.e., the transition map from the current state and control to the next state is in a known RKHS), the existence of a (finite-dimensional) space H0 in Assumption 1 that can represent the given cumulative cost is in general not guaranteed. As such, the system model considered in this section is no more general than that of the KNR (although our spectrum cost formulation is indeed a generalization of that of the KNR). Now, we consider the finite dimensional description (see (4.2)) for simplicity. The equation (4.2) can be rewritten by $$\phi_{{\mathcal{F}}^{\Theta}(1,\omega,x)}=\left(I\otimes\mathrm{vec}(M^{\star})^{\top}\right)\mathrm{vec}\left[\left(\phi_{x}^{\dagger}\otimes\Psi(\Theta)^{\dagger}\right)^{\top}\right]+\epsilon(\omega),$$ and is a special case of the system model of the KNR. We see that the model associated with our spectrum cost formulation reduces to the eigenstructure assignment problem for the linear systems described by $x_{h+1}=Ax_{h}+Bu_{h}+\epsilon,\ \ A\in\mathbb{R}^{d_{X}\times d_{X}},\ B\in\mathbb{R}^{d_{X}\times d_{U}},\ \epsilon\sim\mathcal{N}(0,\sigma^{2}I),$ where u ∈ dU is a control input. In particular, considering the feedback policy in the form of uh = Kxh where K ∈ R dU×dX (or Θ = K in this case), the system becomes xh+1 = (A + BK)xh + ϵ. By letting H0 be R dX where the canonical basis is taken and by properly designing Ψ(Θ) (see Appendix G), our system model reduces to the linear case. The spectrum cost may be designed not only to constrain the eigenstructure but also to balance with the cumulative single-step cost in our framework as well. ## 4.4 Simulated Experiments Linear case: as eigenstructure assignment problem First, to demonstrate the correctness of our theoretical algorithm, we consider a linear system case. In particular, the following simple system is considered: $$x_{h+1}=x_{h}+\Delta t\cdot u_{h},$$ ![14_image_0.png](14_image_0.png) Figure 7: Linear system imitation (as eigenstructure assignment) by KS-LC3 using the spectrum cost Λ(A ) = ∥(I + 0.05K∗) − (A + BK)∥ 2 HS (where A and B are extracted from A ). Left: ground-truth linear system position trajectory and a trajectory generated by the learned model with KS-LC3. We see both overlap exactly, indicating that the algorithm learned the feedback matrix properly. Middle: The spectrum cost curve evaluated approximately using the current trajectory data. Right: The estimated spectrum cost curve obtained by using the current estimate Mˆ . where ∆t = 0.05. The target linear system is given by where $$a+1=(I\,.$$ $\mathbf{v}=\mathbf{v}$. $\pi=\pi$. ![14_image_1.png](14_image_1.png) xh+1 = (I + 0.05K∗) xh, $$K^{*}:=\left[\begin{array}{l l}{{-2}}&{{4}}\\ {{-6}}&{{-2}}\end{array}\right].$$ The spectrum cost given by ∥(I +0.05K∗)−(A+BK)∥ 2 HS enforces that the parameter Θ (which is a feedback matrix K in this case) generates the dynamical system that follows this target linear dynamics; where A and B are matrices that are part of the Koopman operator to learn (see Section 4.3). Figure 7 plots the result showing that the trajectories generated by the ground-truth target linear system and by the learned model match almost exactly, and that the spectrum cost decreases along episodes. The spectrum cost curve evaluated approximately using the current trajectory data is given in Figure 7 (Middle). Note the curve is averaged across four random seeds and across episode window of size four. Additionally, the estimated spectrum cost curve obtained by using the current estimate Mˆ is plotted in Figure 8 (Right). Cartpole problem with reduced policy space We again consider a Cartpole environment for KS-LC3. To reduce the policy search space, we compose three pre-trained policies to balance the pole while 1) moving the cart position p to −0.3 and then to 0.3, 2) moving the cart with velocity v = −1.5, 3) moving cart with velocity v = 1.5. We also extract the Koopman operator A ⋆for the first policy. Then, we let Π to be the space of linear combinations of the three policies, reducing the dimension of Θ. We let the first four features ϕ1(x) to ϕ4(x) be x1 to x4, where x = [x1*, . . . , x*4] ⊤ is the state, and the rest be RFFs. The single-step cost is given by 10−4(|v| − 1.5)2 plus the large penalty when the pole falls down (i.e., directing downward), and the spectrum cost is Λ(A ) = ∥A [1 : 4, :] − A ⋆[1 : 4, :]∥ 2 HS + 0.01Pdϕ i=1 |λi(A )|, where A [1 : 4, :] ∈ R 4×dϕ is the first four rows of A ; this imitates the first policy while also regularizing the spectrum. When CEM is used to (approximately) solve KSNR, the resulting cart position trajectories are given by Figure 8 (Left). With the spectrum cost, the cumulative costs and the spectrum costs averaged across four random seeds were 0.706±0.006 and 2.706±0.035. Without the spectrum cost, they were 0.428±0.045 and 3.383 ± 0.604. We then use the Thompson sampling version of KS-LC3; the resulting cart position trajectories are given by Figure 8 (Left) as well, and the spectrum cost curves are also shown. It is observed that the addition of the spectrum cost favored the behavior corresponding to the target Koopman operator to oscillate while balancing the pole, and achieved similar performance to the ground-truth model optimized with CEM. ![15_image_0.png](15_image_0.png) Figure 8: Cartpole imitation by KS-LC3 using the spectrum cost Λ(A ) = ∥A [1 : 4, :] − A ⋆[1 : 4, :]∥ 2 HS + 0.01Pdϕ i=1 |λi(A )|. Left: Cart position trajectories with/without the spectrum cost with CEM on a known model versus the learned model with KS-LC3. Middle: The spectrum cost curve evaluated approximately using the current trajectory data. Right: The estimated spectrum cost curve obtained by using the current estimate Mˆ . ## 5 Discussion And Conclusion This work proposed a novel paradigm of regulating (controlling) dynamical systems, which we refer to as Koopman Spectrum Nonlinear Regulator, and presented an information theoretic algorithm that achieves a sublinear regret bound under model assumptions. We showed that behaviors such as limit cycles of interest, stable motion, and smooth movements are effectively realized within our framework, which is an effective generalization of classical eigenstructure (pole) assignments. In terms of learning algorithms, we stress that there is a significant potential of future work for inventing sophisticated and scalable methods that elicit desired and interesting behaviors from dynamical systems. Our motivation of this work stems from the fact that some preferences over dynamical systems cannot be encoded by cumulative single-step cost based control/learning algorithms; we believe that this work enables a broader representation of dynamical properties that can enable more intelligent and robust agent behaviors. ## 6 Limitations First, when the dynamical systems of interests do not meet certain conditions, Koopman operators might not be well defined over functional spaces that can be practically managed (e.g. RKHSs). Studying such conditions is an on-going research topic, and we will need additional studies of misspecified cases where only an approximately invariant Koopman space is employed with errors. Second, to solve KSNR, one needs heuristic approximations when the (policy) space Π or the state space X is continuous. Even when the dynamical system model is known, getting better approximation results would require certain amount of computations, and additional studies on the relation between the amount of computations (or samples) and approximation errors would be required. Also, studying a better heuristic algorithm that is well suited to our problem to robustly scale the methodology to more complicated domains is indeed an interesting direction of research. Third, KS-LC3requires several assumptions for tractable regret analysis. It is again a somewhat strong to assume that one can find a Koopman invariant space H. Further, additive Gaussian noise assumption may not be satisfied exactly in some practical applications, and robustness against the violations of the assumptions is an important future work. However, we stress that we believe the analysis of provably correct methods deepens our understandings of the problem. Eventually, we hope our framework will match the maturity of the MDP counterpart, for example by studying gradient-based algorithms that scale better to the more complicated domains and their theoretical analysis with a sample complexity guarantee. Lastly, we note that our empirical experiment results are only conducted on simulators; to apply to real robotics problems, we need additional studies, such as computation-accuracy trade-off, safety/robustness issues, and simulation-to-real, if necessary. Also, balancing between cumulative cost and the Koopman spectrum cost is necessary to avoid unexpected negative outcomes. ## 7 Broader Impact Statement The work, along with other robotics/control literature, encourages the further automation of robots; which would lead to loss of jobs that are currently existent, or may lead to developments of harmful military robots. Further, when our proposed framework requires additional (computational or other) resources, it would benefit entities with larger amount of such resources. Also, there is always a risk of robots misbehaving under partially unknown environments or under adversarial attacks. However, overall, we believe our work alone does not immediately carry a significant risk of harm. Moreover, we stress that the use of the Koopman spectrum cost has no implications on "good" and "bad" about any particular human behaviors. ## Acknowledgments We thank the constructive comments by anonymous reviewers for improving this work. Motoya Ohnishi thanks Colin Summers for instructions on Lyceum. Kendall Lowrey and Motoya Ohnishi thank Emanuel Todorov for valuable discussions and Roboti LLC for computational supports. This work of Motoya Ohnishi, Isao Ishikawa, Masahiro Ikeda, and Yoshinobu Kawahara was supported by JST CREST Grant Number JPMJCR1913, including AIP challenge program, Japan. Also, Motoya Ohnishi is supported in part by Funai Overseas Fellowship. Sham Kakade acknowledges funding from the ONR award N00014-18-1-2247. ## References I. Abraham and T. D. Murphey. Active learning of dynamics for data-driven control using Koopman operators. *IEEE Trans. Robotics*, 35(5):1071–1083, 2019. E. Akindji, J. Slipantschuk, O. F. Bandtlow, and W. Just. Convergence properties of dynamic mode decomposition for analytic interval maps. *arXiv preprint arXiv:2404.08512*, 2024. A. N. Andry, E. Y. Shapiro, and J. C. Chung. Eigenstructure assignment for linear systems. *IEEE Trans.* Aerospace and Electronic Systems, (5):711–729, 1983. L. Arnold. *Random dynamical systems*. Springer, 1998. Z. Beheshti and S. M. H. Shamsuddin. A review of population-based meta-heuristic algorithms. Int. j. adv. soft comput. appl, 5(1):1–35, 2013. J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah. Julia: A fresh approach to numerical computing. SIAM review, 59(1):65–98, 2017. R. Brault, M. Heinonen, and F. Buc. Random Fourier features for operator-valued kernels. In *Proc. ACML*, pp. 110–125, 2016. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAi Gym. *arXiv preprint arXiv:1606.01540*, 2016. D. Burov, D. Giannakis, K. Manohar, and A. Stuart. Kernel analog forecasting: Multiscale test problems. Multiscale Modeling & Simulation, 19(2):1011–1040, 2021. M. J. Colbrook and A. Townsend. Rigorous data-driven computation of spectral properties of Koopman operators for dynamical systems. *Communications on Pure and Applied Mathematics*, 77(1):221–283, 2024. M. J. Colbrook, L. J. Ayton, and M. Szőke. Residual dynamic mode decomposition: robust and verified Koopmanism. *Journal of Fluid Mechanics*, 955:A21, 2023. N. Črnjarić-Žic, S. Maćešić, and I. Mezić. Koopman operator spectrum for random dynamical systems. Journal of Nonlinear Science, pp. 1–50, 2019. S. Curi, F. Berkenkamp, and A. Krause. Efficient model-based reinforcement learning through optimistic policy search and planning. *arXiv preprint arXiv:2006.08684*, 2020. B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skills without a reward function. *arXiv preprint arXiv:1802.06070*, 2018. R. A. Freeman and J. A. Primbs. Control Lyapunov functions: New ideas from an old source. In IEEE Proc. CDC, volume 4, pp. 3926–3931, 1996. M. Ghil, M. D. Chekroun, and E. Simonnet. Climate dynamics and fluid mechanics: Natural variability and related uncertainties. *Physica D: Nonlinear Phenomena*, 237(14-17):2111–2126, 2008. L. Grafakos. *Classical Fourier analysis*, volume 2. Springer, 2008. T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *Proc. ICML*, pp. 1861–1870, 2018. M. Hemati and H. Yao. Dynamic mode shaping for fluid flow control: New strategies for transient growth suppression. In *8th AIAA Theoretical Fluid Mechanics Conference*, pp. 3160, 2017. J. Ibarz, J. Tan, C. Finn, M. Kalakrishnan, P. Pastor, and S. Levine. How to train your robot with deep reinforcement learning: lessons we have learned. *International Journal of Robotics Research*, 40(4-5): 698–721, 2021. A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. Proc. NeurIPS, 15, 2002. M. Ikeda, I. Ishikawa, and C. Schlosser. Koopman and Perron–Frobenius operators on reproducing kernel Banach spaces. *Chaos: An Interdisciplinary Journal of Nonlinear Science*, 32(12), 2022. I. Ishikawa. Bounded composition operators on functional quasi-Banach spaces and stability of dynamical systems. *Advances in Mathematics*, 424:109048, 2023. I. Ishikawa, K. Fujii, M. Ikeda, Y. Hashimoto, and Y. Kawahara. Metric on nonlinear dynamical systems with Perron-Frobenius operators. In *Proc. NeurIPS*, pp. 2856–2866. 2018. I. Ishikawa, Y. Hashimoto, M. Ikeda, and Y. Kawahara. Koopman operators with intrinsic observables in rigged reproducing kernel Hilbert spaces. *arXiv preprint arXiv:2403.02524*, 2024. T. Iwata and Y. Kawahara. Neural dynamic mode decomposition for end-to-end modeling of nonlinear dynamics. *arXiv:2012.06191*, 2020. N. Jiang, A. Krishnamurthy, A. Agarwal, J. Langford, and R. E. Schapire. Contextual decision processes with low Bellman rank are PAC-learnable. In *Proc. ICML*, pp. 1704–1713, 2017. H. Kadri, E. Duflos, P. Preux, S. Canu, A. Rakotomamonjy, and J. Audiffren. Operator-valued kernels for learning from functional response data. *Journal of Machine Learning Research*, 17(20):1–54, 2016. E. Kaiser, J. N. Kutz, and S. Brunton. Data-driven discovery of Koopman eigenfunctions for control. *Machine* Learning: Science and Technology, 2021. S. Kakade, A. Krishnamurthy, K. Lowrey, M. Ohnishi, and W. Sun. Information theoretic regret bounds for online nonlinear control. *Proc. NeurIPS*, 2020. Y. Kawahara. Dynamic mode decomposition with reproducing kernels for Koopman spectral analysis. Proc. NeurIPS, 29:911–919, 2016. J. Kober and J. Peters. Imitation and reinforcement learning. *IEEE Robotics & Automation Magazine*, 17 (2):55–62, 2010. J. Kober, A. Wilhelm, E. Oztop, and J. Peters. Reinforcement learning to adjust parametrized motor primitives to new situations. *Autonomous Robots*, 33:361–379, 2012. J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotics Research, 32(11):1238–1274, 2013. M. Kobilarov. Cross-entropy motion planning. *International Journal of Robotics Research*, 31(7):855–871, 2012. B. O. Koopman. Hamiltonian systems and transformation in Hilbert space. *Proc. National Academy of* Sciences of the United States of America, 17(5):315, 1931. M. Korda and I. Mezić. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. *Automatica*, 93:149–160, 2018. M. Korda and I. Mezić. Optimal construction of Koopman eigenfunctions for prediction and control. IEEE Trans. Automatic Control, 65(12):5114–5129, 2020. J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. *Dynamic mode decomposition: Data-driven* modeling of complex systems. SIAM, 2016. Y. Li, H. He, J. Wu, D. Katabi, and A. Torralba. Learning compositional Koopman operators for model-based control. *arXiv preprint arXiv:1910.08264*, 2019. G. Mamakoukas, I. Abraham, and T. D. Murphey. Learning stable models for prediction and control. *IEEE* Trans. Robotics, 2023. G. Manek and J Zico Kolter. Learning stable deep dynamics models. *Proc. NeurIPS*, 2019. H. Mania, M. I. Jordan, and B. Recht. Active learning for nonlinear system identification with guarantees. arXiv preprint arXiv:2006.10277, 2020. A. Mauroy and I. Mezić. Global stability analysis using the eigenfunctions of the Koopman operator. IEEE Trans. Automatic Control, 61(11):3356–3369, 2016. A. Mauroy, I. Mezić, and J. Moehlis. Isostables, isochrons, and Koopman spectrum for the action–angle representation of stable fixed point dynamics. *Physica D: Nonlinear Phenomena*, 261:19–30, 2013. A. Mauroy, Y. Susuki, and I. Mezić. *The Koopman operator in systems and control*. Springer, 2020. I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. *Nonlinear Dynamics*, 41:309–325, 2005. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015. H. Nakao. Phase reduction approach to synchronization of nonlinear oscillators. *Contemporary Physics*, 57 (2):188–214, 2017. A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In *Proc. ICML*, volume 1, 2000. J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. *Neural networks*, 21 (4):682–697, 2008. R. Pintelon and J. Schoukens. *System identification: a frequency domain approach*. John Wiley & Sons, 2012. A. Rahimi and B. Recht. Random features for large-scale kernel machines. *Proc. NeurIPS*, 2007. A. Rajeswaran, K. Lowrey, E. Todorov, and S. Kakade. Towards generalization and simplicity in continuous control. *Proc. NeurIPS*, 2017. A. Sabanovic and K. Ohnishi. *Motion control systems*. John Wiley & Sons, 2011. M. Simchowitz and D. Foster. Naive exploration is optimal for online LQR. In *Proc. ICML*, pp. 8937–8948, 2020. Y. Song. A note on the variation of the spectrum of an arbitrary matrix. *Linear algebra and its applications*, 342(1-3):41–46, 2002. S. H. Strogatz. *Nonlinear dynamics and chaos with student solutions manual: With applications to physics,* biology, chemistry, and engineering. CRC press, 2018. F. Stulp and O. Sigaud. Robot skill learning: From reinforcement learning to evolution strategies. *Paladyn,* Journal of Behavioral Robotics, 4(1):49–61, 2013. F. Stulp, L. Herlant, A. Hoarau, and G. Raiola. Simultaneous on-line discovery and improvement of robotic skill options. In *IEEE/RSJ IROS*, pp. 1408–1413, 2014. C. Summers, K. Lowrey, A. Rajeswaran, S. Srinivasa, and E. Todorov. Lyceum: An efficient and scalable ecosystem for robot learning. In *Learning for Dynamics and Control*, pp. 793–803. PMLR, 2020. W. Sun, N. Jiang, A. Krishnamurthy, A. Agarwal, and J. Langford. Model-based RL in contextual decision processes: PAC bounds and exponential improvements over model-free approaches. In *Proc. COLT*, pp. 2898–2933, 2019. N. Takeishi and Y. Kawahara. Learning dynamics models with stable invariant sets. In *Proc. AAAI*, 2021. Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. L. Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, et al. DeepMind Control Suite. *arXiv preprint arXiv:1801.00690*, 2018. E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In *IEEE/RSJ* IROS, pp. 5026–5033, 2012. G. Williams, A. Aldrich, and E. A. Theodorou. Model predictive path integral control: From theory to parallel computation. *Journal of Guidance, Control, and Dynamics*, 40(2):344–357, 2017. A. T. Winfree. *The Geometry of Biological Time*. Springer, 2001. ## A Function-Valued Rkhs Function-valued reproducing kernel Hilbert spaces (RKHSs) are defined below. Definition A.1 (Kadri et al. (2016)). A Hilbert space (HK,⟨·, ·⟩HK ) of functions from Π to a Hilbert space (H,⟨·, ·⟩H) is called a reproducing kernel Hilbert space if there is a nonnegative L(H; H)-valued kernel K on Π × Π such that: 1. Θ 7→ K(Θ′, Θ)ϕ belongs to HK for all Θ′ ∈ Π and ϕ ∈ H, 2. for every G ∈ HK, Θ ∈ Π and ϕ ∈ H, ⟨G , K(Θ, ·)ϕ⟩HK = ⟨G (Θ), ϕ⟩H. For function-valued RKHSs, the following proposition holds. Proposition A.1 (Feature map (Brault et al., 2016)). Let H′*be a Hilbert space and* Ψ : Π → L(H; H′). Then the operator W : H′ → HΠ defined by [W ψ](Θ) := Ψ(Θ)†ψ, ∀ψ ∈ H′ and ∀Θ ∈ Π, is a partial isometry from H′ onto the reproducing kernel Hilbert space HK *with a reproducing kernel* K(Θ2, Θ1) = Ψ(Θ2) †Ψ(Θ1), ∀Θ1, Θ2 ∈ Π. Remark A.2 (Decomposable kernel). In practice, one can use *decomposable kernel* (Brault et al., 2016); when the kernel K is given by K(Θ1, Θ2) = k(Θ1, Θ2)A for some scalar-valued kernel k(Θ1, Θ2) and for positive semi-definite operator A ∈ L(H, H), the kernel K is called decomposable kernel. For an RKHS of a decomposable kernel K, (4.1) becomes $${\mathcal{H}}(\Theta)\phi=(\zeta(\Theta)^{\dagger}\otimes B)\psi,$$ where ζ : Π → H′′ is known (H′′ is some Hilbert space), and A = BB†. Further, to use RFFs, one considers a shift-invariant kernel k(Θ1, Θ2). ## B Some Definitions Of The Values The value J Θ(Xt0 ; M; c t) in Algorithm 1 is defined by $$J^{\Theta}(X_{0}^{t};M;c^{t}):=\sum_{n=0}^{N^{t}-1}\mathbb{E}_{\Omega_{\Theta}}\left[\sum_{h=0}^{H_{n}^{t}-1}c^{t}(x_{h,n}^{t})\Big|\Theta,M,x_{0,n}^{t}\right],$$ where the expectation is taken over the trajectory following ϕxh+1 = [Ψ(Θ)† ◦ M]ϕxh + ϵ(ω). Also the confidence ball at t is given by $$\mathrm{{\cal{B}}}_{\mathrm{ALL}_{M}^{t}}:=\left\{M\bigg|\bigg|(\Sigma_{a\sigma}^{t})^{\frac{1}{2}}\left(M-\overline{{{M}}}^{t}\right)\bigg|^{2}\leq\beta_{M}^{t}\right\}\cap\mathrm{{\cal{B}}}_{\mathrm{ALL}_{M}^{0}},\;\;\Sigma_{a\sigma}^{t}:=\lambda I+\sum_{r=0}^{t-1}\sum_{n=0}^{N^{r}-1}\sum_{h=0}^{H_{r}^{r}-1}\beta_{h,n}^{r}\,\alpha_{h,n}^{r},$$ and $\Sigma_{\mathcal{A}}^{0}:=\lambda I$, where $\beta_{M}^{t}:=20\sigma^{2}\left(d_{\phi}+\log\left(t\frac{\det(\Sigma_{\mathcal{A}}^{t})}{\det(\Sigma_{\mathcal{A}}^{0})}\right)\right)$ and $t$. $$\overline{{{M}}}^{t}:=\arg\operatorname*{min}_{M}\left[\sum_{\tau=0}^{t-1}\sum_{n=0}^{N^{\tau}-1}\sum_{h=0}^{H_{n}^{\tau}-1}\left\|\phi_{x_{h+1,n}^{\tau}}-\mathcal{A}_{h,n}^{\tau}(M)\right\|_{\mathbb{R}^{d_{\phi}}}^{2}+\lambda\left\|M\right\|_{\mathrm{HS}}^{2}\right].$$ Similar to the work (Kakade et al., 2020), we define the expected maximum information gains as: $$\gamma_{T,\mathcal{A}}(\lambda):=2\operatorname*{max}_{\mathcal{A}}\mathbb{E}_{\mathcal{A}}\left[\log\left({\frac{\operatorname*{det}\left(\Sigma_{\mathcal{A}}^{T}\right)}{\operatorname*{det}\left(\Sigma_{\mathcal{A}}^{0}\right)\right)}}\right)\right].$$ Here, det is a properly defined functional determinant of a bounded linear operator. Further, $$\gamma_{2,T,\mathcal{A}}(\lambda):=2\operatorname*{max}_{\mathcal{A}}\mathbb{E}_{\mathcal{A}}\left[\left(\log\left({\frac{\operatorname*{det}\left(\Sigma_{\mathcal{A}}^{T}\right)}{\operatorname*{det}\left(\Sigma_{\mathcal{A}}^{0}\right)\right)}}\right)\right)^{2}\right].$$ Also, we define $$\Sigma_{;\mathcal{B}}^{t}:=\lambda I+\sum_{\tau=0}^{t-1}\mathcal{B}^{\tau\dagger}\mathcal{B}^{\tau},\quad\Sigma_{;\mathcal{B}}^{0}:=\lambda I,\ \ \gamma_{T,\mathcal{B}}(\lambda):=2\max_{A}\mathbb{E}_{A}\left[\log\left(\frac{\det\left(\Sigma_{;\mathcal{B}}^{T}\right)}{\det\left(\Sigma_{;\mathcal{B}}^{0}\right)}\right)\right].$$ Lemma B.1. *Assume that* Ψ(Θ) ∈ R dΨ×dϕ *and that* ∥Bt∥HS ≤ BB, ∥A t h,n∥HS ≤ BA *for all* t ∈ [T], n ∈ [Nt]*, and* h ∈ [Htn ] and some BB ≥ 0 and BA ≥ 0. Then, γT ,A (λ) = O(dϕdΨ log(1 + T HB2A /λ))*, and* γT ,B(λ) = O(dϕdΨ log(1 + T B2B/λ)). Proof. For γT ,A (λ), from the definition of Hilbert-Schmidt norm, we have $$\mathrm{tr}\left(\sum_{t=0}^{T-1}\sum_{n=0}^{N^{t}-1}\sum_{h=0}^{H_{n}^{t}-1}\mathcal{A}_{h,n}^{t}{}^{\dagger}\mathcal{A}_{h,n}^{t}\right)\leq T H B_{\mathcal{A}}^{2},$$ and the result follows from (Kakade et al., 2020, Lemma C.5). The similar argument holds for γT ,B(λ) too. ## C Formal Argument On The Limitation Of Cumulative Costs In this section, we present a formal argument on the limitation of cumulative costs, mentioned in Section 3.2. To this end, we give the following proposition. Proposition C.1. Let X = R. Consider the set Sf of dynamics converging to some point in X *. Then, for* any choice of υ ∈ (0, 1], set-valued map S : X → 2 X \ ∅, and cost c *of the form* $$\mathbf{c}(x,y)={\begin{cases}0&(y\in{\mathcal{S}}(x)),\\ 1&{\mathrm{(otherwise)}},\end{cases}}$$ we have {F} ̸= Sf . Here, {F} *are given by* $$\bigcap_{x_{0}\in\mathcal{X}}\left\{\operatorname*{arg\,min}_{\mathcal{F}:\mathbf{r.d.s.}}\mathbb{E}_{\Omega}\sum_{h=0}^{\infty}v^{h}\mathbf{c}\left(\mathcal{F}(h,\omega,x_{0}),\mathcal{F}(h+1,\omega,x_{0})\right)\right\}.$$ Proof. Assume there exist υ, a set-valued map S , and c such that the set {FΘ} = Sf . Then, because Sf is strictly smaller than the set of arbitrary dynamics, it must be the case that ∃x0 ∈ X , ∃y0 ∈ X , y0 ∈/ S (x0). However, the dynamics f ⋆, where f ⋆(x0) = y0 and f ⋆(x) = x, ∀x *∈ X \ {*x0}, is an element of Sf . Therefore, the proposition is proved. Remark C.2 (Interpretation of Proposition C.1). Proposition C.1 intuitively says that the set of "stable dynamics" cannot be determined by specifying every single transition. This is because, the dynamics that does not follow any pre-specified transition can always be "stable". ## D Proof Of Lemma 4.3 It is easy to see that one can define a map M0 so that it satisfies M0(ϕ) = ψ such that Assumption 2 holds. Define $$M^{\star}:=P M_{0},$$ where P is the orthogonal projection operator onto the sum space of the orthogonal complement of the null space of Ψ(Θ)†for all Θ ∈ Π, namely, $$\overline{{\sum_{\Theta\in\Pi}\ker(\Psi(\Theta)^{\dagger})^{\perp}}}.$$ Also, define PΘ by the orthogonal projection onto ker(Ψ(Θ)†) ⊥. Then, for any Θ ∈ Π, we obtain $$P_{\Theta}M_{0}=\Psi(\Theta)^{\dagger}{}^{+}\mathcal{H}(\Theta),$$ where A + is the pseudoinverse of the operator A , and PΘM0 is linear. Let *a, b* ∈ R and ϕ1, ϕ2 ∈ H, and define ψ˜ by $$\tilde{\psi}:=P M_{0}(a\phi)$$ ψ˜ := PM0(aϕ1 + bϕ2) − aPM0(ϕ1) − bPM0(ϕ2). Because PΘPM0 = PΘM0, it follows that PΘψ˜ = 0 for all Θ ∈ Π, which implies $$\tilde{\psi}\in\bigcap_{\Theta\in\Pi}\ker P_{\Theta}=\bigcap_{\Theta\in\Pi}\ker(\Psi(\Theta)^{\dagger}).$$ On the other hand, we have $${\tilde{\psi}}\in\overline{{{\sum_{\Theta\in\Pi}\ker(\Psi(\Theta)^{\dagger})^{\perp}}}}.$$ Therefore, it follows that ψ˜ = 0, which proves that M⋆is linear. ## E Proof Of Proposition 4.7 Fix Θ ∈ Π and suppose that the eigenvalues λi (i ∈ I := {1, 2*, . . . , d*ϕ}) of K (Θ) are in descending order according to their absolute values, i.e., |λ1| ≥ |λ2| ≥ . . . ≥ |λdϕ |. Given A ∈ L(H; H), suppose also that the eigenvalues µi (i ∈ I) of A are in descending order according to their absolute values. Let $$\kappa_{\Theta}:=\operatorname*{inf}_{\mathcal{Q}(\Theta)}\left\{\|{\mathcal{Q}}(\Theta)\|\|{\mathcal{Q}}(\Theta)^{-1}\|:{\mathcal{Q}}(\Theta)^{-1}{\mathcal{K}}(\Theta){\mathcal{Q}}(\Theta)={\mathcal{J}}$$ where J (Θ) is a Jordan canonical form of K (Θ) transformed by a nonsingular matrix Q(Θ). Also, let $$\overline{{{\kappa}}}_{\Theta}:=\operatorname*{max}_{m\in\{1,\ldots,d_{\phi}\}}\left\{\kappa_{\Theta}^{\pm\frac{1}{m}}\right\}.$$ Then, by (Song, 2002, Corollary 2.3), we have $\therefore$, Corolla $||\mu_{i}|-|\lambda_{i}||\leq|\mu_{i}-\lambda_{i}|\leq\sum_{i}|\mu_{i}-\lambda_{i}|\leq\sum_{i}|\mu_{\pi(i)}-\lambda_{i}|$ $\leq d_{\phi}\sqrt{d_{\phi}}(1+\sqrt{d_{\phi}-p})\max\left\{\kappa_{\Theta}\sqrt{d_{\phi}}\|\mathscr{A}-\mathscr{K}(\Theta)\|,(\kappa_{\Theta}\sqrt{d_{\phi}})^{\frac{1}{n}}\|\mathscr{A}-\mathscr{K}(\Theta)\|^{\frac{1}{n}}\right\},$ for any i ∈ I and for some permutation π, where p ∈ {1, 2*, . . . , d*ϕ} is the number of the Jordan block of Q(Θ)−1K (Θ)Q(Θ) and m ∈ {1, 2*, . . . , d*ϕ} is the order of the largest Jordan block. Because pdϕ − p ≤ pdϕ − 1 for any p ∈ {1, 2*, . . . , d*ϕ}, and because $$\operatorname*{max}_{m\in\{1,\ldots,d_{\phi}\}}\left[\sqrt{d_{\phi}}\cdot\operatorname*{max}\{\sqrt{d_{\phi}},(\sqrt{d_{\phi}})^{\frac{1}{m}}\}\cdot\operatorname*{max}\left\{\kappa_{\Theta},\kappa_{\Theta}^{\frac{1}{m}}\right\}\right]\leq d_{\phi}\overline{{{\kappa}}}_{\Theta},$$ it follows that It follows that $$\begin{aligned} |\rho(\mathcal{A})-\rho(\mathcal{K}(\Theta))|&=||\mu_1|-|\lambda_1||\\ &\leq\overline{\kappa}_{\Theta}d_{\Theta}^2(1+\sqrt{d_{\phi}-1})\max\left\{\|\mathcal{A}-\mathcal{K}(\Theta)\|,\|\mathcal{A}-\mathcal{K}(\Theta)\|^{\frac{1}{\alpha}}\right\}. \end{aligned}$$ Since $\overline{\kappa}_{\Theta}<1+\kappa$, simple computations complete the proof. ## F Regret Analysis Throughout this section, suppose Assumptions 1 to 6 hold. Note that Assumption 3 is required for OptDynamics. We first give the *positive operator norm bounding lemma* followed by another lemma based on it. Lemma F.1 (Positive operator norm bounding lemma). Let H be a Hilbert space and Ai, Bi ∈ L(H; H) (i ∈ {1, 2, . . . , n}). Assume, for all i ∈ {1, 2, . . . , n}, that Ai is positive definite. Also, assume B1 is positive definite, and for all i ∈ {2, 3, . . . , n}, Bi *is positive semi-definite. Then,* $$\left\|\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\left(\sum_{i}B_{i}\right)\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\right\|\leq\max_{i}\left\|A_{i}^{-1}\right\|.$$ _If $A_{i}B_{i}=B_{i}A_{i}$ for all $i\in\{1,2,\ldots,n\}$, then_ $$\left\|\left(\sum_{i}A_{i}B_{i}\right)^{-\frac{1}{2}}\left(\sum_{i}B_{i}\right)\left(\sum_{i}A_{i}B_{i}\right)^{-\frac{1}{2}}\right\|\leq\max_{i}\left\|A_{i}^{-1}\right\|.$$ Proof. Let c := maxi A −1 i. Then, we have, for all i, $$I\preceq\left\|A_{i}^{-1}\right\|A_{i}\preceq c A_{i},$$ from which it follows that $$B_{i}\preceq c B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}.$$ Therefore, we obtain $$\sum_{i}B_{i}\preceq c\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}.$$ From the assumptions, Pi B $$\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\left(\sum_{i}B_{i}\right)\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\preceq c I,$$ $$\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-1}\mathrm{~exists~and~}$$ from which we obtain $$\left\|\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\left(\sum_{i}B_{i}\right)\left(\sum_{i}B_{i}^{\frac{1}{2}}A_{i}B_{i}^{\frac{1}{2}}\right)^{-\frac{1}{2}}\right\|\leq c.$$ The second claim follows immediately. Lemma F.2. Suppose Assumptions 1, 2, and 5 *hold. Then, it follows that, for all* t ∈ [T], $$\left\|(\Sigma_{\mathcal{B}}^{t})^{\frac{1}{2}}\left(M-\overline{{{M}}}^{t}\right)\right\|^{2}\leq(1+C^{-1})\left\|(\Sigma_{\mathcal{A}}^{t})^{\frac{1}{2}}\left(M-\overline{{{M}}}^{t}\right)\right\|^{2}.$$ Proof. Under Assumptions 1 and 2, define C t ∈ L(L(H; H′);L(H; H′)) by $$\mathcal{C}^{t}(M)=M\circ\left[\sum_{n=0}^{N^{t}-1}\sum_{h=0}^{H_{n}^{t}-1}\phi_{x_{h,n}^{t}}\phi_{x_{h,n}^{t}}^{\dagger}\right].$$ Also, define X t:= PNt−1 n=0PHtn−1 h=0 A t h,n †A t h,n and Y t:= Bt †Bt. We have C tY t = Y tC t = X t(and thus X tY t = Y tX t), and $$\Sigma_{\mathcal{A}}^{t}=\lambda I+\sum_{\tau=0}^{t-1}\mathcal{X}^{\tau},\qquad\Sigma_{\mathcal{B}}^{t}=\lambda I+\sum_{\tau=0}^{t-1}\mathcal{Y}^{\tau}.$$ From Assumption 5, we obtain, for all t ∈ [T], (C t) −1exists and ∥(C t) −1∥ ≤ −1 ≤ −1 ≤ C −1. N Xt−1 n=0 HtXn−1 h=0 ϕx t h,n ϕ † x t h,n N Xt−1 n=0 ϕx t 0,n ϕ † x t 0,n Therefore, using Lemma F.1 by substituting I and C to A, λI and Y to B, it follows that (ΣtA ) − 12 Σ t B(ΣtA ) − 12 ≤ max 1, max τ∈[t] {∥(C τ) −1∥}≤ 1 + C −1, and (ΣtB) 1 2 M − M t 2 = (ΣtB) 1 2 (ΣtA ) − 12 (ΣtA ) 1 2 M − M t 2 ≤ (ΣtA ) 1 2 M − M t 2 (ΣtA ) − 12 (ΣtB) 1 2 2 = (ΣtA ) − 12 Σ t B(ΣtA ) − 12 (ΣtA ) 1 2 M − M t 2 ≤ (1 + C −1) (ΣtA ) 1 2 M − M t 2 . Here, the second equality used ∥A ∥ 2 = ∥A A †∥. Now, let E t be the event M⋆ ∈ BalltM. Assume TT −1 t=0 E t. Then, using Assumption 4, Λ[K (Θt)] + J Θt(Xt0 ; c t) − Λ[K (Θ⋆t)] + J Θ⋆t (Xt0 ; c t) =Λ[K (Θt)] − Λ[K (Θ⋆t)]+ J Θt(Xt0 ; c t) − J Θ⋆t (Xt0 ; c t) ≤ Λ[K (Θt)] − Λ[Bt◦ Mˆ t] + J Θt(Xt0 ; c t) − J ΘtXt0 ; Mˆ t; c t ≤ Λ[K (Θt)] − Λ[Bt◦ Mˆ t] + J Θt(Xt0 ; c t) − J ΘtXt0 ; Mˆ t; c t ≤ min L · max BtM⋆ − Mˆ t 2 , BtM⋆ − Mˆ t α, 2Λmax + J Θt(Xt0 ; c t) − J ΘtXt0 ; Mˆ t; c t . | {z } term2 | {z } term1 (F.1) $$\square$$ Here, the first inequality follows because we assumed E t and because the algorithm selects Mˆ t and Θtsuch that $$\Lambda[\mathcal{B}^{t}\circ\hat{M}^{t}]+J^{\Theta^{t}}\left(X_{0}^{t},\hat{M}^{t};c^{t}\right)\leq\Lambda[\mathcal{B}^{t}\circ M]+J^{\Theta}\left(X_{0}^{t};M;c^{t}\right)$$ for any M ∈ BalltM and for any Θ ∈ Π. The third inequality follows from Assumption 4. Using Lemma F.2, we have BtM⋆ − Mˆ t ≤ (ΣtB) 1 2 M⋆ − Mˆ t Bt(ΣtB) − 12 ≤p(1 + C−1) (ΣtA ) 1 2 M⋆ − Mˆ t Bt(ΣtB) − 12 ≤p(1 + C−1) (ΣtA ) 1 2 M⋆ − M t + (ΣtA ) 1 2 M t− Mˆ t Bt(ΣtB) − 12 ≤ 2 q(1 + C−1)β t M Bt(ΣtB) − 12 (∵ E t). Therefore, if E t, it follows that term${}_{1}\leq\min\left\{L\left\{4(1+C^{-1})\beta_{M}^{t}+1\right\}\max\left\{\left\|\mathscr{B}^{t}(\Sigma_{\mathscr{B}}^{t})^{-\frac{1}{2}}\right\|^{2},\left\|\mathscr{B}^{t}(\Sigma_{\mathscr{B}}^{t})^{-\frac{1}{2}}\right\|^{\alpha}\right\},2\Lambda_{\max}\right\}.$ (F.2) Then, we use the following lemma which is an extension of (Kakade et al., 2020, Lemman B.6) to our Hölder condition. Lemma F.3. For any sequence of Bt and for any α ∈ (0, 1]*, we have* $$\sum_{t=0}^{T-1}\operatorname*{min}\left\{\left\|\mathcal{B}^{t}(\Sigma_{\mathcal{B}}^{t})^{-\frac{1}{2}}\right\|^{2\alpha},1\right\}\leq2T^{1-\alpha}\left[1+\log\left(\frac{\operatorname*{det}\left(\Sigma_{\mathcal{B}}^{T}\right)}{\operatorname*{det}\left(\Sigma_{\mathcal{B}}^{0}\right)}\right)\right].$$ Proof. Using x ≤ 2 log(1 + x) for x ∈ [0, 1], T X−1 t=0 min Bt(ΣtB) − 12 2α , 1 ≤ T X−1 t=0 min Bt(ΣtB) − 12 2α HS , 1 (∵ ∥A ∥ ≤ ∥A ∥HS) = T X−1 t=0 min Bt(ΣtB) − 12 2 HS , 1 α≤ T X−1 t=0 2 log 1 + Bt(ΣtB) − 12 2 HSα = T X−1 t=0 h2 log 1 + trn(ΣtB) − 12 Bt †Bt(ΣtB) − 12 oiα ≤ 2 α T X−1 t=0 hlog det I + (ΣtB) − 12 Bt †Bt(ΣtB) − 12 iα ≤ 2 αT 1−α "TX−1 t=0 log det I + (ΣtB) − 12 Bt †Bt(ΣtB) − 12 #α ≤ 2 αT 1−α "TX−1 t=0 log det Σ t+1 B − log det Σ t B #α≤ 2T 1−α " log det Σ T B det Σ0B !#α ≤ 2T 1−α 1 + log det Σ T B det Σ0B !! . Here, the fifth inequality follows from (Grafakos, 2008, Exercise 1.1.4). Corollary F.4. For any sequence of Bt and for any α ∈ (0, 1]*, we have* $$\sum_{t=0}^{T-1}\min\left\{\max\left\{\left\|\mathscr{B}^{t}(\Sigma_{\mathscr{B}}^{t})^{-\frac{1}{2}}\right\|^{4},\left\|\mathscr{B}^{t}(\Sigma_{\mathscr{B}}^{t})^{-\frac{1}{2}}\right\|^{2\alpha}\right\},1\right\}\leq2T^{1-\alpha}\left[1+\log\left(\frac{\det\left(\Sigma_{\mathscr{B}}^{T}\right)}{\det\left(\Sigma_{\mathscr{B}}^{0}\right)}\right)\right].$$ 26 E "TX−1 t=0 term1 T \−1 t=0 E t # ≤ E "TX−1 t=0 min L4(1 + C −1)β t M + 1 max Bt(ΣtB) − 12 2 , Bt(ΣtB) − 12 α, 2Λmax# ≤ E "TX−1 t=0 L4(1 + C −1)β t M + 1 + 2Λmax min max Bt(ΣtB) − 12 2 , Bt(ΣtB) − 12 α, 1 # ≤ T X−1 t=0 qE [ [L{4(1 + C−1)β t M + 1} + 2Λmax] 2] sE min max Bt(ΣtB) − 12 4 , Bt(ΣtB) − 12 2α, 1 ≤ vuutX T t=0 E [ [L{4(1 + C−1)β t M + 1} + 2Λmax] 2] vuutE "TX−1 t=0 min max Bt(ΣtB) − 12 4 , Bt(ΣtB) − 12 2α, 1 # ≤ √ T · valueqT1−α(2 + γT ,B(λ)). (F.3) Here, the first inequality is due to (F.2); the third inequality uses E[ab] ≤pE[a 2]E[b 2]; the forth inequality From (F.2) and Corollary F.4, we obtain uses the Cauchy-Schwartz inequality; the last is from Corollary F.4. Also, $$\begin{array}{l}{{\tt{value}:=16L^{2}(1+C^{-1})^{2}{\mathbb{E}}[(\beta_{M}^{T})^{2}]+(8L^{2}+16\Lambda_{\max}L)(1+C^{-1}){\mathbb{E}}[\beta_{M}^{T}]+4\Lambda_{\max}L+L^{2}+4\Lambda_{\max}^{2}L}}\\ {{\qquad\leq C^{\prime}\left\{L^{2}(1+C^{-1})^{2}{\mathbb{E}}[(\beta_{M}^{T})^{2}]+(L^{2}+\Lambda_{\max}L)(1+C^{-1}){\mathbb{E}}[\beta_{M}^{T}]+\Lambda_{\max}^{2}+L^{2}\right\},}}\end{array}$$ for some constant C ′. Next, we turn to bound the latter term term2 of (F.1); our analysis is based on that of (Kakade et al., 2020). Simple calculations show that $$M^{\star}-\overline{{{M}}}^{t}=\lambda(\Sigma_{\mathcal{A}}^{t})^{-1}M^{\star}-(\Sigma_{\mathcal{A}}^{t})^{-1}\sum_{\tau=0}^{t-1}\sum_{n=0}^{N^{\tau}-1}\sum_{h=0}^{H_{n}^{\tau}-1}\mathcal{A}_{h,n}^{\tau}\,\dot{\epsilon}_{h,n}^{\tau},$$ where ϵ τ h,n is the sampled noise at τ -th episode, h-th timestep, and n-th trajectory. Now, by introducing a Hilbert space containing an operator of L(L(H; H′); R), which is a Hilbert-Schmidt operator, because of Assumption 1, we can apply Lemma C.4 in (Kakade et al., 2020) to our problem too. Therefore, with probability at least 1 − δt, it holds that $$\left\|(\Sigma_{\sigma\ell}^{t})^{\frac{1}{2}}\left(M-\overline{{{M}}}^{t}\right)\right\|^{2}\leq\lambda\|M^{*}\|^{2}+\sigma^{2}(8d_{\phi}\log(5)+8\log(\det(\Sigma_{\sigma\ell}^{t})\det(\Sigma_{\sigma\ell}^{0})^{-1})/\delta_{t}),$$ and properly choosing δt leads to β t M defined in Section B, and we obtain the result $$\operatorname*{Pr}\left(\bigcup_{t=0}^{T-1}{\overline{{{\mathcal{E}}^{t}}}}\right)\leq{\frac{1}{2}}.$$ In our algorithm, transition data are chosen from any initial states and the horizon lengths vary; however, slight modification of the analysis of LC3 will give the following lemma. Lemma F.5 (Modified version of Theorem 3.2 in (Kakade et al., 2020)). Suppose Assumptions 1, 2, 3, 5, and 6 *hold. Then, the term* term2 *is bounded by* $$\begin{array}{l}{{\mathbb{E}\left[\sum_{t=0}^{T-1}\operatorname{term}_{2}\left|\bigcap_{t=0}^{T-1}\mathcal{E}^{t}\right|\right]}}\\ {{\leq\sqrt{H V_{\operatorname*{max}}}\sqrt{64T(d_{\phi}+\log(T)+\gamma_{T,\mathcal{A}}(\lambda)+H)}\sqrt{\gamma_{T,\mathcal{A}}(\lambda)}.}}\end{array}$$ Combining all of the above results, we prove Theorem 4.10: Proof of Theorem *4.10.* Using (F.3) (which requires Assumptions 1, 2, 4, and 5), Lemma F.5 (which requires Assumptions 1, 2, 3, 5, and 6), and PrST −1 t=0 E t≤ 1 2 , it follows that EKS−LC3 [RegretT ] $$\mathrm{{Tr}}[T]$$ In fact, ${\leq\sqrt{T\left\{C^*\left\{L^2(1+C^{-1})^2\mathbb{E}[(\beta_M^2)^2]+(L^2+\Lambda_\text{max}L)(1+C^{-1})\mathbb{E}[\beta_M^2]+\Lambda_\text{max}^2+L^2)\right\}}\sqrt{T^{1-\alpha}(2+\gamma_{T,\mathcal{M}}(\lambda))}\right\}}$ ${+\sqrt{H V_\text{max}}\sqrt{64T(d_\phi+\log(T)+\gamma_{T,\mathcal{M}}(\lambda)+H)}\sqrt{\gamma_{T,\mathcal{M}}(\lambda)}}$ $$+\,{\frac{1}{2}}\cdot(\Lambda_{\mathrm{max}}+{\sqrt{V_{\mathrm{max}}}})$$ 2 ≤ C1T 1− α 2 ( ˜dT ,1 + ˜dT ,2), for some absolute constant C1. Therefore, the theorem is proved. ## G Reduction To Eigenstructure Assignment Problem For Linear Systems To see how our system model studied in (4.2) reduces to linear system case, take R dX as H0 with the canonical basis (i.e., ϕx = x) and let $$\Phi(\Theta)=[I_{d\chi}\otimes k_{1}^{\top},I_{d\chi}\otimes k_{2}^{\top},...$$ $$\triangleright k_{2}^{\top},\ldots,I_{d\chi}\otimes k_{d\chi}^{\top},I_{d\chi}]^{\top},$$ where the feedback matrix is given by K := [k1, k2*, . . . , k*dX ], and let $\square$ M∗ =-[b1, a1] ⊤*, . . . ,* [bdX , adX ] ⊤, where the entries of the row vector bi are all zero except for the entries from the index (i − 1)dX dU + 1 to the index idX dU given by vec B⊤, and A = [a ⊤ 1 , a⊤ 2 , . . . , a⊤ dX ]. ## H Setups And Results Of Simulations In The Main Body Throughout the main body of this paper, we used the following version of Julia; for each experiment, the running time was less than around 10 minutes. Julia Version 1.5.3 Platform Info: OS: Linux (x86_64-pc-linux-gnu) CPU: AMD Ryzen Threadripper 3990X 64-Core Processor WORD_SIZE: 64 LIBM: libopenlibm LLVM: libLLVM-9.0.1 (ORCJIT, znver2) Environment: JULIA_NUM_THREADS = 12 The licenses of Julia, OpenAI Gym, DeepMind Control Suite, Lyceum, and MuJoCo, are [The MIT License; Copyright (c) 2009-2021: Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors: https://github.com/JuliaLang/julia/contributors], [The MIT License; Copyright (c) 2016 OpenAI (https://openai.com)], [Apache License Version 2.0, January 2004 http://www.apache.org/licenses/], [The MIT License; Copyright (c) 2019 Colin Summers, The Contributors of Lyceum], and [MuJoCo Pro Lab license], respectively. In this section, we provide simulation setups, including the details of environments (see also Figure 9) and ![28_image_0.png](28_image_0.png) parameter settings. Figure 9: Illustration of some dynamical systems we have used in this work. Left: Simple limit cycle represented effectively by the Koopman modes. Middle: DeepMind Control Suite (Tassa et al., 2018) Cartpole showing stable cycle with spectral radius regularization. Right: OpenAI Gym (Brockman et al., 2016) walker2d showing simpler movement cycle when the Koopman eigenvalues are regularized. ## H.1 Cross-Entropy Method Throughout, we used CEM for dynamics parameter (policy) selection to approximately solve KSNR. Here, we present the setting of CEM. First, we prepare some fixed feature (e.g. RFFs) for ϕ. Then, at each iteration of CEM, we generate many parameters to compute the loss (i.e., the sum of the Koopman spectrum cost and *negative* cumulative reward) by fitting the transition data generated by each parameter to the feature to estimate its Koopman operator A . In particular, we used the following regularized fitting: A = Y X⊤(XX⊤ + I) −1, | Table 1: Hyperparameters used for limit cycle generation. | | | | |-------------------------------------------------------------|-------|----------------------------------|-------| | CEM hyperparameter | Value | Training target Koopman operator | Value | | samples | 200 | training iteration | 500 | | elite size | 20 | RFF bandwidth for ϕ | 3.0 | | iteration | 50 | RFF dimension dϕ | 80 | | planning horizon | 80 | horizon for each iteration | 80 | | policy RFF dimension | 50 | | | | policy RFF bandwidth | 2.0 | | | where Y := [ϕxh1+1,1 , ϕxh2+1,2 , . . . , ϕxhn+1,n ] and X := [ϕxh1,1 , ϕxh2,2 , . . . , ϕx*hn,n* ]. If the feature spans a Koopman invariant space and the deterministic dynamical systems are considered, and if no regularization (i.e., the identity matrix I) is used, any sufficiently rich trajectory data may be used to exactly compute K (Θ) for Θ. However, in practice, the estimate depends on transition data although the regularization mitigates this effect. In our simulations, at each iteration, we randomly reset the initial state according to some distribution, and computed loss for each parameter generating trajectory starting from that state. ## H.2 Setups: Imitating Target Behaviors Through Koopman Operators The discrete-time dynamics rh+1 = rh + vr,h∆t, θh+1 = θh + vθ,h∆t is considered and the policy returns vr,h and vθ,h given rh and θh. In our simulation, we used ∆t = 0.05. Note the ground-truth dynamics $${\dot{r}}=r(1-r^{2}),\ {\dot{\theta}}=1,$$ is discretized to $$r_{h+1}=r_{h}+r_{h}(1-r_{h}^{2})\Delta t,\ \theta_{h+1}=\theta_{h}+\Delta t.$$ Figure 10 plots the ground-truth trajectories of observations and x-y positions. We trained the target Koopman operator using the ground-truth dynamics with random initializations; the hyperparameters used for training are summarized in Table 1. Then, we used CEM to select policy so that the spectrum cost is minimized; the hyperparameters are also summarized in Table 1. We tested two forms of the spectrum cost; Λ1(A ) = ∥m − m⋆∥1 and Λ2(A ) = ∥A − A ⋆∥ 2 HS. The resulting trajectories are plotted in Figure 11 and 12, respectively. It is interesting to observe that the top mode imitation successfully converged to the desirable limit cycle while Frobenius norm imitation did not. Intuitively, the top mode imitation focuses more on reconstructing the practically and physically meaningful behavior while minimizing the error on the Frobenius norm has no immediately clear physical meaning. ## H.3 Setups: Generating Stable Loops (Cartpole) We used DeepMind Control Suite Cartpole environment with modifications; specifically, we extended the cart rail to [−100, 100] from the original length [−5, 5] to deal with divergent behaviors. Also, we used a combination of linear and RFF features; the first elements of the feature are simply the observation (state) vector, and the rest are Gaussian RFFs. That way, we found divergent behaviors were well-captured in terms of spectral radius. The hyperparemeters used for CEM are summarized in Table 2. ![30_image_1.png](30_image_1.png) ![30_image_0.png](30_image_0.png) Figure 10: The ground-truth trajectory of the limit cycle r = r(1 - r2), d = 1. Left: Observations », cos(0), and sin(0). Right: x-y positions. ![30_image_2.png](30_image_2.png) Figure 11: The trajectory generated by RFF policies that minimize A(@) = ||m-m*||1. Left: Observations r, cos(0), and sin(0). Right: x-y positions. ## H.4 Setups: Generating Smooth Movements (Walker) Because of the complexity of the dynamics, we used four random seeds in this simulation, namely, 100, 200, 300, and 400. We used a combination of linear and RFF features for both o and the policy. Note, according to the work (Rajeswaran et al., 2017), linear policy is actually sufficient for some tasks for particular environments. The hyperparemeters used for CEM are summarized in Table 3. The resulting trajectories of Walker are illustrated in Figure 13. The results are rather surprising; because we did not specify the height in reward, the dynamics with only cumulative cost showed rolling behavior (Up figure) to go right faster most of the time. On the other hand, when the spectrum cost was used, the hopping behavior (Down figure) emerged. Indeed this hopping behavior moves only one or two joints most of the time while fixing other joints, which leads to lower (absolute values of) eigenvalues. The eigenspectrums of the resulting dynamics with/without the spectrum cost are plotted in Figure 14. In fact, it is observed that the dynamics when the spectrum cost was used showed consistently lower (absolute values of) eigenvalues; for the hopping behavior, most of the joint angles converged to some values and stayed there. ![31_image_0.png](31_image_0.png) Figure 12: The trajectory generated by RFF policies that minimize Λ(A ) = ∥A −A ⋆∥ 2 HS. Left: Observations r, cos(θ), and sin(θ). Right: x-y positions. | Table 2: Hyperparameters used for stable loop generation. | | | | |-------------------------------------------------------------|-------|----------------------|-------| | Hyperparameters | Value | Hyperparameters | Value | | samples | 200 | elite size | 20 | | iteration | 100 | planning horizon | 100 | | dimension dϕ | 50 | RFF bandwidth for ϕ | 2.0 | | policy RFF dimension | 100 | policy RFF bandwidth | 2.0 | Algorithm 2 Practical Algorithm for KS-LC3 Require: Parameter set Π; prior parameter λ; covariance scale ι ∈ R≥0. 1: Prior distribution over M′is given by (Σ0M′ ) −1where Σ 0 M′ := λI 2: for t = 0 *. . . T* − 1 do 3: Adversary chooses Xt0 . 4: Sample Mˆ ′ tfrom N M′ t,(ΣtM′ ) −1 5: Solve Θt = arg minΘ∈Π Λ[Kˆt(Θ)] + J ΘXt0 ; Mˆ ′ t; c t(e.g., using CEM) 6: Under the dynamics F Θt, sample transition data τ t:= {τ t n} Nt−1 n=0 , where τ t n := {x t h,n} Htn h=0 7: Update Σ t M′ 8: **end for** ## H.5 Setups: Koopman-Spectrum Lc3 Decomposable kernels case We explain how to reduce the memory size in a special decomposable kernel case. Assume that we employ decomposable kernel with B = I for simplicity. Also, suppose Ψ(Θ) ∈ R dΨ×dϕ is of finite dimension. For such a case, the dimension dΨ = dζ · dϕ where dζ is the dimension of H′′; however, we do not need to store a covariance matrix of size d 2 ϕ dζ × d 2 ϕ dζ but only require to update a matrix of size dϕdζ × dϕdζ which significantly reduces the memory size. Specifically, we consider the model ϕxh+1 = M′(ϕxh ⊗ ζ(Θ)); then using M′′ := reshape(M′, dϕ, dζ , dϕ), we obtain K (Θ) = [M′′[: , :, 1]ζ(Θ), . . . , M′′[:, :, dϕ]ζ(Θ)]. Here, ζ is the realization of ζ(Θ) over some basis. Now, we note that the dimension of ϕxh ⊗ ζ(Θ) is dζ · dϕ. Our practical (Thompson sampling version) algorithm is thus given by Algorithm 2. In the algorithm, we used Kˆt(Θ) = [Mˆ′′ t[:, :, 1]ζ(Θ)*, . . . ,* Mˆ′′ t[:, :, dϕ]ζ(Θ)], where Mˆ′′ t:= reshape(Mˆ ′ t, dϕ, dζ , dϕ). | Table 3: Hyperparameters used for Walker. | | | | |---------------------------------------------|-------|----------------------|-------| | Hyperparameters | Value | Hyperparameters | Value | | samples | 300 | elite size | 20 | | iteration | 50 | planning horizon | 300 | | dimension of dϕ | 200 | RFF bandwidth for ϕ | 5.0 | | policy RFF dimension | 300 | policy RFF bandwidth | 30.0 | ![32_image_0.png](32_image_0.png) Figure 13: Walker trajectories visualized via Lyceum. Up: When only (single-step) reward v − 0.001∥a∥ 2R6 is used, showing rolling behavior. Down: When the spectrum cost Λ(A ) = 5Pdϕ i=1 |λi(A )| is used together with the reward, showing simple hopping behavior. Simple linear system experiment The hyperparameters used for KS-LC3in the simple linear system experiment are summarized in Table 4. Cartpole pretraining policies For training three policies, we used Model Predictive Path Integral Control (MPPI) (Williams et al., 2017) with the rewards (p + 0.3)2, (p − 0.3)2, (v + 1.5)2, and (v − 1.5)2, where p is the cart position and v is the cart velocity. Also, for all of the cases, the penalty −100 is added when cos(θ) < 0, where θ is the pole angle, which aims at preventing the pole from falling down. Because we need to have one more state dimension to specify which reward to generate, we used the analytical model of cartpole specified in OpenAI Gym. Starting from random initial state, we first use MPPI to move to p = −0.3, then from there move to p = 0.3; and we learn an RFF policy for this movement along with the Koopman operator. Then, we randomly ![33_image_0.png](33_image_0.png) Figure 14: Eigenspectrums showing absolute values of eigenvalues for the dynamics with/without the spectrum cost. Table 4: Hyperparameters used for KS-LC3in the simple linear case. | Table 4: Hyperparameters used for KS-LC3 in the simple linear | case. | | | |-----------------------------------------------------------------|---------|--------------------|--------| | Hyperparameters | Value | Hyperparameters | Value | | prior parameter λ | 0.05 | covariance scale ι | 0.0001 | | planning horizon | 50 | | | initialized the state to accelerate to v = −1.5, followed by a random initialization again to accelerate to v = 1.5; we then learned two policies for those two movements. We used the planning horizons of 100 for every movement except for the movement going to p = 0.3 where we used 120 because it is following the previous movement. We repeated this for 20 iterations. The parameter space Π is a space of linear combinations of those three policies. We summarized the hyperparemeters used for MPPI/pretraining in Table 5. | Table 5: Hyperparameters of MPPI and pretraining. | | | | |-----------------------------------------------------|---------------------|-----------------------------|-------| | MPPI hyperparameters | Value | Pretraining hyperparameters | Value | | variance of controls | 0.4 2 | iteration | 20 | | temperature parameter | 0.1 | policy RFF dimension | 2000 | | planning horizon | 100/120 | policy RFF bandwidth | 1.5 | | number of planning samples | 524 | dimension of dϕ | 60 | | | RFF bandwidth for ϕ | 1.5 | | Cartpole learning For learning, we used four random seeds, namely, 100, 200, 300, and 400. The estimated spectrum cost curve represents the cost Λ[Kˆt(Θt)]. The hyperparameters used for CEM and KS-LC3 are summarized in Table 6. We note that for KS-LC3 we added additional cost on the policy parameter exceeding its ℓ∞ norm above 2.0. | Table 6: Hyperparameters used for CEM/KS-LC3 . | | | | |--------------------------------------------------|-------|----------------------------|--------| | Hyperparameters for CEM | Value | Hyperparameters for KS-LC3 | Value | | samples | 200 | dimension of dζ | 50 | | elite size | 20 | RFF bandwidth for ζ | 5.0 | | planning horizon | 500 | prior parameter λ | 1.0 | | iteration | 50 | covariance scale ι | 0.0001 | ## I Further Experimental Analysis In this section, we provide additional experiments. Throughout this section, we used the following version of Julia as our computational resource has changed when conducting the experiments presented in this section. Julia Version 1.5.3 Platform Info: OS: Linux (x86_64-pc-linux-gnu) CPU: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz WORD_SIZE: 64 LIBM: libopenlibm LLVM: libLLVM-9.0.1 (ORCJIT, haswell) Environment: JULIA_NUM_THREADS = 12 MuJoCo version 2.0 is used (license: MuJoCo Pro Individual license). ## I.1 Variations On Stable Cartpole Motions In our simulation of generating stable Cartpole motion, we have seen that it shows oscillating behavior when stability is enforced through the spectrum cost in addition to the reward that encourages the cartpole to have larger velocity. To see this phenomenon more in details, we have conducted the same experiments for different time horizons, namely, 50, 100 and 200. Here, we use Λ(A ) = 105 max(1, ρ(A )) (10 times more weight than that in the simulation experiment in the main body). The resulting velocity trajectories with/without the spectrum cost are plotted in Figure 15. Also, the angle trajectories are plotted in Figure 16, where the zero lines are the threshold for adding penalty costs. Note for the case of time horizon 200, we used more intensive CEM search whose parameters are summarized in Table 7, but could not find a policy parameter that can keep the pole straight up. Studying more sophisticated heuristic search algorithm will be an important direction of future research. From Figure 15, it is observed that the longer horizon may not indicate more oscillation "cycles". In fact, our spectrum cost only regularizes the dynamics to be stable, which may include a motion where the velocity converges to some fixed value. The spectrum radius for the cases of time horizon 50, 100 and 200 without the spectrum cost is given by 1.003, 1.00006 and 1.002, while that with the spectrum cost is 0.992, 0.999 and 0.997. While all of them show stable Koopman spectrum when the spectrum cost is used, the case for the time horizon 100 shows particularly interesting behavior. Please recall that the failure of keeping the pole straight up is not necessarily regarded as unstable dynamics over the specified state space *where the angle representation is bounded* but costs the learner within the cumulative cost term. ## I.2 Variations On Smooth Walker Motions To investigate smooth motion generations studied in the main body of this paper more, we conducted additional experiments. ![35_image_0.png](35_image_0.png) ![35_image_1.png](35_image_1.png) Figure 15: Velocity trajectories of Cartpole for different time horizons (50, 100 and 200) with/without the ![35_image_2.png](35_image_2.png) spectrum cost. | Table 7: Hyperparameters used for stabilized Cartpole for time horizo | n 200. | | | |-------------------------------------------------------------------------|----------|----------------------|-------| | Hyperparameters | Value | Hyperparameters | Value | | samples | 200 | elite size | 20 | | iteration | 1000 | planning horizon | 200 | | dimension dϕ | 50 | RFF bandwidth for ϕ | 2.0 | | policy RFF dimension | 300 | policy RFF bandwidth | 2.0 | ## I.2.1 Smoothness Comparison With Increased Action Cost Especially, we also compare our KSNR for smoothness enhancements to the use of action costs in the Walker environment. In this experiment, we used the hyperparameters summarized in Table 8. We again used a combination of linear and RFF features for both ϕ and the policy. Recall the default immediate reward is v −0.001∥a∥ 2R6 , where v is the velocity and a is the action vector of dimension 6. Here, in addition to KSNR, we tested increased action cost scenarios where the immediate rewards are v − 0.01∥a∥ 2R6 and v − 0.1∥a∥ 2R6 respectively. Across the six seed runs (of seed numbers of 100, 200, 300, 400, 500, and 600), we obtained the mean of the cumulative reward and the cumulative action cost (which is computed for the trajectories using 0.001∥a∥ 2R6 for all of the cases), and the mean and standard deviation of the spectrum cost, all of which are summarized in Table 9. As observed, increased action cost in our scenarios shows lower spectrum cost; while KSNR shows better cumulative reward with better spectrum cost. However, the motion generated by the increased action cost shows lower action penalty cost; which implies that the spectrum cost and the action cost have some correlation while they qualitatively prefer different motions. We also measured the smoothness by another metric than the spectrum cost itself, which is defined by $$\mathrm{Smoothness}(\tau):={\frac{1}{d_{\mathcal{X}}H}}\sum_{h=0}^{H-1}\|x_{h+1}-x_{h}\|_{1},$$ ![36_image_0.png](36_image_0.png) Figure 16: Angle trajectories of Cartpole for different time horizons (50, 100 and 200) with/without the spectrum cost. | Table 8: Hyperparameters used for additional Walker smoothness experi | ments. | | | |-------------------------------------------------------------------------|----------|----------------------|-------| | Hyperparameters | Value | Hyperparameters | Value | | samples | 300 | elite size | 20 | | iteration | 120 | planning horizon | 300 | | dimension of dϕ | 200 | RFF bandwidth for ϕ | 5.0 | | policy RFF dimension | 300 | policy RFF bandwidth | 30.0 | where τ := {xh} H h=0 is a trajectory. The mean smoothness values across the runs for the motions generated by the CEM algorithms with default action cost, 10 times more action cost, 100 times more action cost, and with the spectrum cost are 0.082, 0.033, 0.007, and 0.028 respectively, and they appear to be consistent to the spectrum cost in this case. The motions are visualized in Figure 17; their joint trajectories are plotted in Figure 18 and the eigenspectrums are given in Figure 19. Note those motions are of those showing median values of the spectrum cost within the seed runs. ## I.2.2 Different Feature Space H0 Next, we examine what happens when different feature space H0 is employed in practice. In particular, we used a polynomial feature (spanned by x, x2, x3, x4, x5) for ϕ while the policy is again a combination of linear and RFF features. The hyperparameters are the same except for the dimension of dϕ which is now 5dX . Now, the spectrum costs of the generated motions of the CEM algorithms with default action cost, 10 times more action cost, 100 times more action cost are given by 232.9 ± 13.6, 194.5 ± 19.5, 152.1 ± 42.8, and 217.9 ± 24.7; and the mean of cumulative reward and action cost (penalty) of the one with spectrum cost defined over the polynomial feature space is 900.1 and 143.3. It is observed that 10 times more action cost led to lower spectrum cost in this case than KSNR while the cumulative reward of KSNR is much higher. The smoothness measure for KSNR is now 0.064, which is higher than the case with RFF features. | Table 9: Cumulative reward, cumulative action cost (penalty), and spectrum cost comparison | | s. | | | |----------------------------------------------------------------------------------------------|--------|---------|----------------------|---------------------| | Method (Env. setting) | Reward | Penalty | Spectrum cost (mean) | Spectrum cost (std) | | CEM (default action cost) | 1011.5 | 177.3 | 317.0 | ±33.2 | | CEM (×10 action cost) | 596.4 | 10.9 | 213.2 | ±50.0 | | CEM (×100 action cost) | 63.4 | 0.5 | 88.8 | ±46.6 | | CEM (with spectrum cost) | 737.8 | 78.1 | 186.6 | ±88.4 | In fact, when visualizing the motion and joint trajectory Figure 20, we observe that the walker also shows similar rolling behavior but with slightly better periodicity than that one without spectrum cost. Eigenspectrums are shown in Figure 21; where we see less difference among the ones with/without spectrum cost and with ×10 action cost. These results imply that the feature space selection influences the qualitative behavior difference in practice; please also see Remark 3.2. ![38_image_0.png](38_image_0.png) Figure 17: Visualizations of Walker motions generated by the CEM algorithm with default action cost, 10 times more action cost, 100 times more action cost, and with the spectrum cost. The motions are of those showing median values of the spectrum cost within the seed runs. It is observed that the motion generated by the one with 10 times more action cost is smooth but uses two feet to hop, which would reduce the magnitudes of actions applied to the joints. The motion generated by the one with the spectrum cost again lifts one foot and hops; this specific visualized motion then shows a bit of rotation at the last moment. ![39_image_0.png](39_image_0.png) Figure 18: Joint trajectories of Walker motions generated by the CEM algorithm with default action cost, 10 times more action cost, 100 times more action cost, and with the spectrum cost. They are of those showing median values of the spectrum cost within the seed runs. Eigenspectrum ![39_image_1.png](39_image_1.png) –with spectrum cost -x 10 action cost …no spectrum cost -x 100 action cost Figure 19: Averaged eigenspectrums showing absolute values of eigenvalues for the dynamics with/without the spectrum cost and with 10 times more action cost and 100 times more action cost. ![40_image_0.png](40_image_0.png) ![40_image_1.png](40_image_1.png) with spectrum cost (over polynomial basis) Figure 20: Up: Visualization of Walker motions generated by the CEM algorithm with the spectrum cost where the feature space is spanned by x, x2, x3, x2, x2, x2, x2, The motion is of that showing median value of the spectrum cost within the seed runs. Down: Its joint trajectory. Eigenspectrum ![41_image_0.png](41_image_0.png) –with spectrum cost …x 10 action cost …no spectrum cost …x 100 action cost Figure 21: Averaged eigenspectrums over the polynomial feature space showing absolute values of eigenvalues for the dynamics with/without the spectrum cost and with 10 times more action cost and 100 times more action cost.
Review 1: Summary: This manuscript proposes a new paradigm for learning control of nonlinear random dynamical systems, which is based on Koopman operator theory. Specifically, the authors propose an optimization formulation over the cost of the Koopman operator of the dynamical system. They term this the Koopman spectrum cost. Contributions of the manuscript include the conceptual formulation and proposal, a (theoretical) learning algorithms, as well as several numerical examples illustrating the proposed method. Strengths and Weaknesses: While this reviewer is not an expert on Koopman theory for dynamical systems, they get the impression that this manuscript makes a solid and relevant contribution to the theory of dynamical systems / controller learning in the context of Koopman operator theory. ### Strengths 1) Manuscript makes a conceptual and potentially fundamental contribution, which could open up new research directions (as also mentioned in the manuscript) 2) Well written manuscript 3) Clear and transparent communication of contributions (p. 2) 4) Assumptions clearly stated 5) Aiming to give illustrative examples and explanations ### Weaknesses 1) While the authors make an effort providing intuition for the theoretical concepts, especially linking the exposition to the (more commonly known) MDP-type RL framework, which is commendable, the exposition could be still be clearer or more elaborate in parts. This includes, in particular, the following parts: * P. 4, "is a mapping that takes a Koopman operator as an input and returns its cost" -> Can we interpret this to basically mean that the mapping takes a dynamical system (described by a Koopman operator) as input and returns its cost? Such statement could be added and also compared to standard MDP formulation for illustration. * What is the goal of the design in (3.2)? In practice, would this be used to find a controller? The controller is not explicitly introduced, but presumable it is considered part of the dynamical system (thus its parameters would be part of \Theta). I suggest to clarify this. In this context, what components in the formulation have the role of the controller/policy? * Sec. 3.3: How is CEM based policy search related to the design problem (3.2)? Is CEM the optimizer that is used to solve the problem (3.2)? Or is this a separate problem building on top of it? In general, it would be very helpful if the authors could better explain how policy search relates to their design problem (3.2). 2) If possible, it would be helpful to provide intuition about Assumptions 2-4 + 6 (e.g., examples for which the assumptions hold; are the assumptions restrictive; etc) 3) Some further aspects still require more explanation (see minor comments and suggestions below) ### Minor comments and suggestions * Introduction/Abstract: What do the authors mean with "unpredictability" of the motions generated by standard RL? Please explain. * p. 2, key takeaways: I don't understand the sentence "such, we need some specific structural assumptions that are irrelevant to the Kernelized Nonlinear Regulator (KNR)". Why is KNR mentioned here? It seems to be out of context, or at least the context is not clear to the reviewer. * Related work: "Skill learning", e.g. with RL, is an important topic in robotics. I suggest the authors to review main works there and mention the relation to their approach herein. Some of the main authors include Jan Peters, Jens Kober, Stefan Schaal, Freek Stulp, for example. * Section 2: At the end of this section, "population based policy search" is mentioned, but not explained. It was unclear to this reviewer what the authors mean. Furthermore, why are these numerical examples "based on population based policy search" relevant for this work? Please explain. * Bottom of p. 3: even though it is standard, "i.i.d." should be introduced. * General question: Does "Regulator" refer to "regularization" or "Regulator" (controller) as in LQR? Might be useful to clarify early on. (I only realized later, what was actually meant.) * Section 4, first paragraph: What is meant by "model-based algorithm"? What model? Requested Changes: See weaknesses above. Minor comments can be considered optional suggestions. Broader Impact Concerns: No concerns. The manuscript contains a description of (technical) limitations, which is commendable. ================================================== Review 2: Summary: This paper studies the problem of controlling a nonlinear system by minimizing both stepwise costs and also a koopman spectrum cost. The main intuition is tha this serves as a regularizing effect that leads to smoother motions that are more natural and adapted to the dynamics of the system itself. Both simulation experiments and some theory for an online learning algorithm are demonstrated. Strengths and Weaknesses: Strengths: - The paper addresses an important problem of learning/optimizing control policies that lead to smooth, natural motion in contrast to what might be learned by an RL algorithm - The results are well-grounded in theory to motivate the design decisions. - Simulations seem to suggest this achieves the desired results. - The theory for the online learning algorithm is also promising. Weaknesses: - a number of approximations have to be made to get the online algorithm to work on cartpole, which is a relatively simple problem (including restricting the policy space). This suggests it may be difficult to scale these methods and insights to harder control problems. - I have questions about some of the experiments below that are potential weaknesses. Requested Changes: I only have requests for clarifications based on these questions: - “Note, this assumption does not preclude the necessity of exploration…” Can you elaborate on this point? Is the exploration that is done in the algorithm done through maintaining the BALL confidence set and minimizing over this optimistically? Then, what purpose does this eigenvalue lower bound serve in Assumption 5? - In the initial simulation experiments of section 3.3, it’s a little unclear what is being learned vs. just optimized. The appendix seems to suggest that the K is being learned through transitions. How were these generated? Was this done iteratively? If so, I’m confused about the difference between these and the results 4.4. Why can one not use the method of 3.3 to solve 4.4 as well? - Is it possible to achieve the desired smoothness effects by simply regularizing e.g. punishing high accelerations? Qualitatively, how would this differ from the proposed method? It would perhaps be nice to see comparisons with this in the experiments. - I don’t really understand the design decisions made in the first cart pole experiments. The reward is set high for the cart for large velocity, so it makes sense that it increases the velocity. If one wanted it to oscillate, why not just shape the reward function to get it to oscillate? It’s unclear why this might require more machinery when the spectrum cost is also set by hand. - The regret bound seems potentially dependent on the adversary’s choice of sequences, as opposed to doing regret that is just the suboptimality with respect to the lowest cost achievable by any control policy. Can you elaborate on how the sequences are chosen? Are they not chosen by the learner’s policy itself? Broader Impact Concerns: NA ================================================== Review 3: Summary: The paper presents a method for control of nonlinear systems using Koopman operator theory with a spectral-based cost. The approach is claimed to be effective at online learning. Strengths and Weaknesses: The paper presents a novel form of using the Koopman operator which is a timely and useful tool for modeling nonlinear systems as linear systems in a lifted spectral domain. Control in that domain is often very challenging to acquire, even though the system is modeled as a lifted linear system. The work does a good job of describing the Koopman operator and how one can obtain controllers that leverage the spectrum of the nonlinear system. The theoretical results are very strong and seem to be a significant advancement in Koopman operator theory. However, the empirical results of the proposed work are not convincing, especially with the baseline comparisons. For example, the provided video experiments do not seem like the proposed method provided much more improvement. The results provided in the paper provide a better picture of the results but are still limited in the example cases and could be explicitly compared with the addition of a table outlining the extent to which performance has improved. Requested Changes: The paper should consider more baseline experiments that holistically demonstrate and analyze the proposed method. For instance, adding in ablation studies regarding the basis function and choices of cost parameters would strengthen the contribution of the paper. With regards to the results, the paper claims to have cyclical behavior for the cart pendulum and the walker, however, the results only show one "cycle". How repeatable are the trajectories only some of the results show clear repeatability. It would be good to see if the approach can be extended to non-cyclical systems. If not, what are the conditions for which a system can use the proposed method? Additional results for the online learning for different systems and different basis functions would be valuable for the paper as well. Broader Impact Concerns: The paper did not provide a broader impact statement. ================================================== Metareview: Recommendation: Accept with minor revision Comment: I would like to thank the authors for their patience, as this paper has been under review for longer than usual. The final decisions of the reviewers have been one Accept, one Leaning Accept, and one Leaning Reject. All reviewers believe that the paper makes interesting contributions. So, from the novelty perspective, this paper is strong. Most of them also believe that the claims are reasonably well supported. The reviewer on the negative side, however, criticizes the paper because the advantages of the proposed method is not very evident. I read the paper myself, without verifying the proofs. At a high-level, the paper has - some conceptual novelty (the addition of a Koopman spectrum cost to the per-step cost); - theoretical guarantees in the form of regret bounds (which are under strong assumptions, such as Assumption 1); - some empirical illustrations, mostly on toy problems. Given that the formulation is new and optimization of the corresponding objective is not trivial, perhaps it is reasonable not to expect extensive and competitive empirical results for this work. One aspect of the paper that can and should be improved is its clarity. At several points in the paper, its quality of writing and clarity decline. I believe it is partly because the authors revised the paper to answer specific questions of the reviewers, but those revisions disrupted the flow of the paper (for example, Figure 3 and Analogy to Fourier analysis). In some other places, more explanation can be added. I provide some specific examples, in addition to some of my own questions. **Overall, this is a good paper overall, and can be accepted if the authors improve the paper's clarity.** === Some places were clarity can be improved and some additional questions: - More discussion of the RDS model in Section 3.1. For example, it is stated that "RDS subsume many practical systems including Markov chains". Some examples of this would be helpful. - More explanation of the Koopman operator. The mathematical definition is in Definition 3.1, but some intuition would be helpful. For example, it is not clear whether such an operator K always exists or not. - Example 3.1 can be expanded. How easy is it to compute the spectral radius of the Koopman operator (given that it is infinite dimensional)? What are the practical ways to compute it? - Remark 3.1 is unclear to me. This is apparently an answer to one of the reviewers, but it may not be very clear for someone who hasn't read the discussion between the authors and the reviewers. - m and m* are used in Section 3.3 before being introduced much later in Appendix H.2. In general, I believe the discussion of Simulated experiments in Section 3.3 can be improved by adding more details. - Would you confirm that the minimum in the definition of regret is after the summation over t and not before that? ==================================================
# Actively Learning Costly Reward Functions For Reinforcement Learning Anonymous authors Paper under double-blind review ## Abstract Transfer of recent advances in deep reinforcement learning to real-world applications is hindered by high data demands and thus low efficiency and scalability. Through independent improvements of components such as replay buffers or more stable learning algorithms, and through massively distributed systems, training time could be reduced from several days to several hours for standard benchmark tasks. However, while rewards in simulated environments are well-defined and easy to compute, reward evaluation becomes the bottleneck in many real-world environments, e.g., in molecular optimization tasks, where computationally demanding simulations or even experiments are required to evaluate states and to quantify rewards. When ground-truth evaluations become orders of magnitude more expensive than in research scenarios, direct transfer of recent advances would require massive amounts of scale, just for evaluating rewards rather than training the models. We propose to alleviate this problem by replacing costly ground-truth rewards with rewards modeled by neural networks, counteracting non-stationarity of state and reward distributions during training with an active learning component. We demonstrate that using our proposed ACRL method (actively learning costly rewards for reinforcement learning), it is possible to train agents in complex real-world environments orders of magnitudes faster. By enabling the application of reinforcement learning methods to new domains, we show that we can find interesting and non-trivial solutions to real-world optimization problems in chemistry, materials science and engineering. ## 1 Introduction Reinforcement Learning (RL) techniques have achieved impressive results in a wide range of applications such as robotics (Kober et al., 2013), games (Mnih et al., 2015; Silver et al., 2016; Vinyals et al., 2019) or natural sciences (Mahmud et al., 2018; Zhou et al., 2019). This success is the result of improvements along multiple independent branches of RL research such as an improved understanding of rewards in difficult environments (Schaal, 1997; Abbeel & Ng, 2004; Christiano et al., 2017; Wirth et al., 2017), more sample-efficient training via experience replay (Lin, 1992; Schaul et al., 2015; Andrychowicz et al., 2017; Kong et al., 2021) or more effective sampling via active learning (Daniel et al., 2015; Cui & Niekum, 2018; Biyik et al., 2020), more powerful algorithms (Mnih et al., 2015; Van Hasselt et al., 2016; Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018a) and more efficient and scalable implementations (Horgan et al., 2018; Dalton & frosio, 2020; Hessel et al., 2018) of established techniques. These extensions were primarily developed and benchmarked in simulated environments such as OpenAI Gym (Brockman et al., 2016) or MuJoCo (Todorov et al., 2012), where rewards are well-defined and computationally cheap to obtain. However, in real-world tasks rewards may be either difficult to formulate or to collect. There has been extensive work on how to formulate and quantify rewards in scenarios where agents have to learn from demonstrations (Schaal, 1997; Abbeel & Ng, 2004) or from ranked alternatives (Christiano et al., 2017; Wirth et al., 2017). These methods are mainly developed within the field of robotics, where feedback frequently is provided by a human supervisor. Thus, since human feedback is relatively expensive, it is desirable to reduce the number of expert evaluations. Active reward learning techniques aim to reduce the number of expert queries by selecting only the most informative ones, usually employing uncertainty measures within the Bayesian framework (Daniel et al., 2015; Cui & Niekum, 2018; Biyik et al., 2020; Lindner et al., 2021) as a selection criterion. Existing literature focuses on developments in simulated environments and real-world tasks in fields such as robotics. While in the former case rewards are clearly formulated and cheap to obtain, in the latter case rewards are typically difficult to formulate and/or quantify, e.g., in object manipulation tasks (Jangir et al., 2020). However, in many other fields rewards appear to have different properties than in these scenarios, for which most of existing work has been done. In a natural sciences and engineering context, for example, rewards are frequently the result of computationally demanding optimization procedures or algorithms. The current trend in reinforcement learning is to tackle these issues with massive scale by running multiple independent copies of agent-environment interactions in parallel. While using massive amounts of compute resources may be justified by the outstanding results such as (Silver et al., 2017; Jumper et al., 2021), following this trend in scenarios with complex reward evaluations has two major drawbacks. First, it may exclude all but the largest institutions to engage in this area of research at all. Second, establishing scale as the default to gather enough data to train agents can be considered *Red AI* (Schwartz et al., 2020). In contrast to a large body of existing work on optimizing with reinforcement learning, we explicitly focus on training with costly rewards, whereas prior work mostly reported results on cheap or approximate quantities (Zhou et al., 2019; Goel et al., 2021; Bhola et al., 2023). When ground-truth evaluations become orders of magnitude more expensive than in these scenarios, direct transfer of these methods would require massive amounts of scale, just for evaluating rewards rather than training the models. In this paper, we develop a framework combining reinforcement and active learning to resolve the issue of reward collection in real-world scenarios, where the relevant domain-specific quantities are difficult to obtain. We show that within our framework, which we term **ACRL**1, neural networks, pre-trained on a relatively small initial dataset and regularly updated during training via an active learning approach, can be used as reward proxies and that agents trained within this framework achieve competitive results across different real-world tasks with varying computational cost. ## 2 Related Work 2.1 Learning Reward Functions In theory, every agent accumulates rewards under a unified mathematical framework. In practice, though, the exact properties of a reward function depend on the task. For example, rewards can be immediate or delayed and the reward signal can be binary, discrete or real. In simulated environments like OpenAI Gym and MuJoCo rewards are well-defined and exposed to the agent via a simulator interface. In fields like robotics, rewards can become complex, high-level signals of desired behavior, e.g., to manipulate an object in a particular manner (Jangir et al., 2020). Since the formulation of a reward function is often difficult in the latter case, early work (Schaal, 1997; Abbeel & Ng, 2004) aimed to infer an unknown reward function solely from demonstration. While alleviating the issue of reward formulation, demonstrations by a human supervisor are costly to obtain. As an alternative, preference-based learning (Christiano et al., 2017; Wirth et al., 2017) allows feedback to be a relative preference over a set of trajectories rather than a quantitative measure of goodness. In our work, we focus on problems where reward functions correspond to quantities such as evaluations of properties of physical systems. For example, this can be expensive quantum-mechanical simulations for evaluation of molecular properties. Given such a reward function, we can formulate an optimization process as goal-directed search within the reinforcement learning framework, in which states of high rewards correspond to more optimal solution spaces of the underlying problem and vice versa. We therefore propose to replace the ground-truth reward function with an approximate model and to jointly train it with the agent to account for non-stationarity of state and reward distributions during exploration. Due to their ability to generalize, our agents are able to solve optimization tasks with varying constraints, which is, in general, not trivially doable using conventional optimization and search methods. 1Our code is available at: https://github.com/32af3611/acrl ## 2.2 Active Reward Learning And Sample Efficiency Active reward learning techniques (Daniel et al., 2015; Cui & Niekum, 2018; Biyik et al., 2020) build upon the insight that not all training samples are equally important for learning and aim to select only those samples which are most beneficial for learning. The selection is usually done by some form of uncertainty estimation, often within the Bayesian framework. Reducing the number of state queries is vital in cases where reward evaluation is expensive. While existing work employs active learning to reduce the number of queries for the agent to accelerate convergence of the RL training (Lindner et al., 2021), we employ active learning for the reward model such that predictions become more accurate on states the agent visits during exploration. In vanilla RL, every observation is used only once to update the agent's policy, making learning slow and sample-inefficient. A popular technique to overcome this is to use *experience replay* (Lin, 1992) in off-policy algorithms, which improves sample-efficiency in terms of sample usage by storing experience in a *replay* buffer and performing parameter updates on batches uniformly sampled from it. Improvements of experience replay use different forms of non-uniform sampling (Schaul et al., 2015; Kong et al., 2021), handle sparse and binary reward signals and multi-goal environments (Andrychowicz et al., 2017), and are also extended to a distributed context (Horgan et al., 2018). In our work, we do not aim to increase sample-efficiency of the RL training process. Rather, we avoid expensive ground-truth evaluations for known regions of the state space by using a reward model. We increase the size of this region over the course of training by providing ground-truth labels for a small fraction of states selected by some sampling method. Since our framework assumes a modification of the environment's properties, i.e., the reward, it is possible to use techniques like *IDRL* with our framework to reduce the number of reward model queries. Specifically, we show that we can improve training time without using any of the more advanced techniques in the reinforcement learning toolbox to keep our framework lean and to avoid common reproducibility issues (Huang et al., 2022; Henderson et al., 2018). ## 2.3 Efficient Implementations The effects of other extensions within the RL framework have been studied in Hessel et al. (2018), showing recent advances can be integrated to improve their standalone-performance. From a practical point of view, the authors of Stooke & Abbeel (2018) provide a unified implementation view of RL algorithms to leverage modern, parallel hardware architectures to further reduce training time. In our work, we do not aim to leverage massively scaled reinforcement learning in order to solve the issue of costly rewards. Rather, we propose an extension to restore the effectiveness of these methods in scenarios where their efficiency would be threatened by the reward evaluation bottleneck. We note that our method is scalable and naturally can be integrated into distributed architectures such as those in Horgan et al. (2018). ## 2.4 Reinforcement Learning For Optimization The idea of optimizing functions with reinforcement learning was already investigated several decades ago (Williams & Peng, 1991). In (Williams & Peng, 1991), the authors used REINFORCE (Williams, 1992)-like algorithms on a set of experiments with known maxima to show that it is possible to learn an adaptive system that generates optima by trial-and-error. Interestingly, they found that their algorithms converge to suboptimal single-mode solutions in the presence of multiple equally-valued actions and corrected this behavior by maximizing entropy in addition to reward, which today is a standard technique in robust reinforcement learning and also a core component in one of the current state-of-the-art algorithms, Soft Actor-Critic and its variants (Haarnoja et al., 2018a;b; Christodoulou, 2019). Since then, reinforcement learning has been applied to many instances of combinatorial optimization due to its ability to efficiently explore large spaces without handcrafted heuristics. Canonical NP-hard problems such as the *Travelling Salesman Problem* (TSP) and other graph-related problems have been the focus due to the difficulty of obtaining optimal solutions for these problems (Bello et al., 2016; Khalil et al., 2017). Furthermore, there is an increased interest in using these methods in real-world applications, with applications for road (Yu et al., 2019) or computer (Vesselinova et al., 2020) networks. A broader overview of machine learning for combinatorial optimization can be found in (Bengio et al., 2021). Besides applications in computer science, reinforcement learning has been applied for optimization and discovery in a variety of other domains. In chemistry and materials science, optimizing properties of molecules or their molecular graphs, respectively, is of major interest. Existing work such as *MolDQN* (Zhou et al., 2019) uses reinforcement learning to find local modifications of molecules that yield improved properties. Besides single-property optimizations, other work aims for molecules meeting multiple criteria at the same time (Goel et al., 2021), geometry optimization (Ahuja et al., 2021) or design and discovery (Fromer & Coley, 2023; Olivecrona et al., 2017; Pereira et al., 2021). Another source of interest is the optimization of airfoils to improve their aerodynamic properties, with all kinds of applications in aeronautics. While traditionally these kinds of problems are tackled with optimization methods such as gradient-based optimization, the authors in (Dussauge et al., 2023) argue that these methods, even though computationally efficient in large spaces, are susceptible to poor local minima and do not work well with non-linear cost functions. While machine learning techniques are less susceptible to these kinds of errors, the authors in (Bhola et al., 2023) point out that using high-fidelity data for training can become prohibitively expensive. While there is much interest in using reinforcement learning for different kinds of optimization problems, in many cases, most effort is spent on finding a solution, with the general assumption that it can be verified fast. Methods such as *MolDQN* (Zhou et al., 2019) or *MoleGuLAR* (Goel et al., 2021) optimize cheap properties, e.g., *logP* and QED, while in the case of airfoil design, lower-fidelity data is used to accelerate data generation (Bhola et al., 2023). For real-world domains, the validation of solutions may be orders of magnitude slower than in research scenarios, which hinders training agents in environments that require a very large number of steps. In this work, we show that we can alleviate the computational burden of reward evaluation by actively sampling data points and learning a reward model. By using a cheap reward model, we can provide rewards much faster and, in addition, avoid repetitive and thus redundant evaluation of frequently visited states. We show that we can use the original *MolDQN* with an actively learned reward model to optimize properties of molecules which are much more costly to evaluate. We also show that we can train agents for hundreds of thousands of steps without excessive amounts of computational effort in an airfoil optimization task. This does not only contribute to *Green AI* (Schwartz et al., 2020) in these scenarios, but also may allow smaller institutions to engage in this area of research. ## 3 Our Method: Acrl Existing literature covers how to learn a reward function in cases of unclear tasks or how to make efficient use of it in cases where it exists and can be evaluated frequently. In contrast to that, in many other tasks the reward function is clearly defined but costly to evaluate. Providing these kinds of rewards to an agent during training thus can become prohibitively expensive even with off-policy learning with experience replay as one may fail to gather enough examples to learn from. In the following we describe our proposed ACRL framework to alleviate this issue. We use a standard MDP formulation as found in Sutton & Barto (2018). Let f(s) be a quantity or metric associated with state s, f being a known but expensive to evaluate function of s. Without loss of generality, we aim to find a (local) minimum of f, or equivalently, a (locally) optimal state s ∗ = arg min s f(s). Due to high computational cost as well as non-convexity of f in real-world tasks, we neither can directly solve for s ∗ nor is it likely that we can find s ∗ with heuristic search in general. We therefore propose a more principled search of s ∗ by framing it as a sequential decision-making problem within the RL framework. A natural definition of reward in such environments is rt = f(st−1) − f(st), i.e., the agent aims to accumulate reward by sequentially visiting states s with decreasing f(s). We note that this formulation lends itself well to attract the agent to minima and is a popular choice in optimization scenarios (Khalil et al., 2017). Nevertheless, the standard cumulative, discounted return formulation can be whenever appropriate. Let s0 be a possibly random initial state, the agent then aims to maximize the total cumulative reward RT =PT t=1 f(st−1) − f(st) = f(s0) − f(sT ). Due to the computational complexity of f, training an agent for a large number of steps may become infeasible or at least very time-consuming. To reduce the computational burden of state evaluations during training, our framework requires only a few modifications of the standard training loop, namely the introduction of a reward model and its improvement via active learning. In general, the interaction of policy and reward model closely mirrors the interaction of policy and value networks in generalized policy iteration (Sutton & Barto, 2018), except that in our framework trajectories generated by the policy are used to improve the reward model, which itself is used to improve the value function. The two steps are then as follows. The first step is to pre-train an approximate reward model ˆf, e.g., a neural network, on a small, initial dataset D in a supervised manner. ˆf is then used as a drop-in replacement for the true evaluation function f. Doing so is theoretically sound as the reward distribution does not depend on the agent's policy. This allows using our framework with both value-based and policy gradient methods without the necessity to change the underlying theory. At this point, we make several mild assumptions about f. In contrast to the general RL setting, we assume that we can evaluate f in any state, thus providing dense and instantaneous rewards on state transitions. Hence, our method is not well-suited for sparse or delayed rewards, as found in conventional reinforcement learning scenarios, which we do not aim to cover since it is rarely the case that we cannot evaluate any property of a system. The second step is then to actively improve the reward model during agent training. Since the initial state distribution in D likely differs from states visited by an exploring agent, ˆf may have poor extrapolation capabilities which will cause agent training to diverge as estimated state quantities may not have their true value predicted accurately. This particularly applies in scenarios where it is difficult to define *good* initial states, for example in the case of optimization problems where the optimal solution is to be found rather than given. To overcome this issue, we propose to sample a small number of states encountered during agent training and to provide the expensive ground-truth labels for them. In the most general form, we define an acquisition function h(s) which hypothesizes about how beneficial adding the true label f(s) to D is for training the reward model. We then periodically evaluate h for a small fraction of the agent's experience E, e.g., the last N steps, where N is an application-dependent hyperparameter. We set s ′ = arg max s∈E h(s), D = D ∪ {s ′} and subsequently update ˆf on the new D, either by training from scratch or fine-tuning. At this point, we assume that reward model can be trained reasonably fast such that the training time can be amortized given enough reward evaluations. For example, h(s) may be chosen to be ˆf(s), ||∇ ˆf(s)|| or other sampling techniques like uniform or uncertainty sampling. We hypothesize that this active learning component allows to explore relevant regions of the state space effectively and efficiently. An important implication of using active learning is that, depending on the task at hand, the initial reward model must not be perfectly accurate, which is a recurring theme in reinforcement learning when viewed from the perspective of policy iteration. For example, when using our method for optimization tasks where it is unlikely that the optimal solution space is included in the initial dataset D, perfect accuracy in this space is not necessary since the agent moves away from the initially covered space towards a more optimal region. It is thus more important for the reward model to be accurate on the on-policy distribution of states rather than on randomly selected initial data points. The reward model is only required to improve as the agent's policy improves and stabilizes. We found this active learning component to be crucial in our tasks. A summary of the overall procedure can be found in Algorithm 1. We note that even though we use variations of Double DQN (Van Hasselt et al., 2016) agents in all experiments, our method does not assume any particular type of RL implementation and can be integrated into existing implementations with minimal changes, even in asynchronous and distributed settings. Due to the freedom of choice of algorithms, our framework can be used for both discrete and continuous optimization problems. The same holds for sampling strategies or active learning approaches in general. ## 4 Applications 4.1 Proof-Of-Principle: Molecular Property Optimization The algorithm described above is first used in molecular property optimization tasks as a proof-of-principle. We use two fast-to-evaluate benchmarking properties to evaluate the performance of the algorithm and to choose its optimal hyperparameters. Both the Q-network and the reward network are trained on Morgan | Algorithm 1 Double Deep-Q-Learning within ACRL 1: agent A, replay buffer B, initial dataset D, environment E, reward network ˆf trained on D 2: ˆf ← train( ˆf, D) ▷ train reward network | | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|---------------------------------| | 3: E.reward ← ˆf | | ▷ E.step() uses ˆf instead of f | | 4: for episode = 1 to M do | | ▷ training loop | | 5: | st ← initial state | | | 6: | for step = 1 to T do | ▷ episode loop | | 7: | at ← A.action(st) | ▷ ϵ-greedy | | 8: | st+1, rˆt+1 ← E.step(at) | ▷ rˆt+1 = ˆf(st, at) | | 9: | obs ← (st, at, rˆt+1, st+1) | | | 10: | B.add(obs) | ▷ save observation | | 11: | obs ← B.sample() | ▷ sample experience from B | | 12: | A.optimize(obs) | ▷ update parameters | | 13: | end for | | | 14: | if sample state then | ▷ e.g., periodically | | 15: | s ′ ← arg max h(s) | ▷ any method | | | | s∈E | | 16: | y ′ ← f(s ′ ) | ▷ calculate ground-truth label | | 17: | D ← D ∪ {(s ′ , y′ )} | | | 18: | end if | | | 19: | if update model then | ▷ e.g., periodically | | 20: | ˆf ← train( ˆf, D) | ▷ retrain reward network | | 21: | E.reward ← ˆf | ▷ update reward network | | 22: | end if | | | 23: end for | | | fingerprint vectors as molecular representations (Morgan, 1965; Rogers & Hahn, 2010). States and actions are based on prior work (Zhou et al., 2019), where states are discrete molecular graphs and actions are semantically allowed local graph modifications. The first benchmarking property is the penalized logP score, a widely used metric in the literature for evaluating and benchmarking machine learning models on regression and generative tasks (Nigam et al., 2020; Gómez-Bombarelli et al., 2018; You et al., 2018). The logP score is the logarithm of the water-octanol partition coefficient, quantifying the lipophilicity or hydrophobicity of a molecule. Penalized logP additionally takes into account the synthetic accessibility (SA) and the number of long cycles (n*cycles*): $$p e n.\log P=\log P-S A-n_{c y c l e s}$$ pen. logP = logP − SA − n*cycles* (1) The second benchmarking property used here is the QED score, which is a quantitative estimate of druglikeness based on the concept of desirability (Bickerton et al., 2012). QED is an empirical score quantifying how "drug-like" a molecule is. Both properties are computationally inexpensive and can be calculated using RDKit (RdKit, 2006). We use them as benchmarking properties to study the effect of replacing the groundtruth reward with an approximation and to choose hyperparameters of our algorithm. In both applications, empty initial states are optimized for T = 40 steps. We then test our method on a real-life application in molecular improvement with a more costly property value to calculate. ## 4.2 Application I: Molecular Design In our first application, we evaluate ACRL on a molecular design task involving more costly rewards. We aim to optimize electronic properties of molecules such as energies of the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO) by performing sequential modifications. These values can be calculated using semiempirical quantum mechanical methods such as density functional tight binding methods as implemented in xTB (Grimme et al., 2017; Bannwarth et al., 2020). xTB-based reward evaluations on one Intel Xeon Gold 6248 CPU range from seconds to minutes, depending on size and structure of the molecule. Compared to other RL applications, this is comparably expensive, especially considering the number of reward evaluations needed during agent training. The algorithm described above is applied using the hyperparameters found in the experiments of Section 4.1. Here, the agent learns a more application-oriented optimization goal, i.e., how to decrease the LUMO energy of randomly sampled starting molecules with only T = 5 steps per episode, while keeping the HOMO-LUMO gap constant. Therefore, the goal of the agent is to find optimal local improvements of given molecules with a limited number of actions, i.e., changes of the chemical structure. Let s0 be a randomly sampled molecule at the beginning of an episode, then the improvement of the molecule st at timestep t over s0 is defined as: $$)-\operatorname{gap}(s_{0})|-0$$ $$4\mathrm{O}(s_{t})-\mathrm{LUMO}(s_{0}))$$ $\left(2\right)^3$ R(st) = −|gap(st) − gap(s0)| − (LUMO(st) − LUMO(s0)) (2) with gap(s) = LUMO(s) − HOMO(s) being the HOMO-LUMO energy difference of molecule s. ## 4.3 Application Ii: Optimization Of Airflow Drag Around An Airfoil The control technique of wall-normal blowing or/and suction constitutes a promising approach for the reduction of drag in turbulent boundary layers (Kinney, 1967). This technique has been successfully utilized not only in flat-plate boundary layers (Kametani & Fukagata, 2011) but also on more complex curved geometries like airfoils (Atzori et al., 2020). The majority of studies on the aforementioned control technique, however, considers uniform distribution of the introduced blowing or suction profiles. In our second application, we use ACRL to minimize aerodynamic drag around an airfoil by sequential adjustment of a set of blowing and suction coefficients represented as vectors in R d(see figure 2a), which form the state space in R 2d. As higher coefficients trivially reduce drag, we seek to optimize profiles with a constrained mean value for each side. By choosing a different constraint at the start of each episode, we aim to generalize across multiple instances of optimization. We use a Double DQN (Van Hasselt et al., 2016) agent with discrete actions corresponding to exactly one (or no) modification of an entry of s per step to keep the action space as small as possible. Thus, we seek to find a (near-)optimal state s ∗ ∈ R 2d under given constraints. In our experiments, we use an episode length of T = 30 steps. While policy methods would be a more appropriate for this task, we use Double DQN for the sake of consistency. Let d0 = f(s0) be the drag coefficient of starting state s0 corresponding to a uniform profile on each side. Our agent then seeks to find a sequence of modifications such that RT =PT t=1 dt−1 − dt = d0 − dT becomes as large as possible. We note that while the agent seeks to maximize RT , we are primarily interested in the shape of states sT close to the (globally) optimal state s ∗rather than the exact value of f(sT ). The incompressible flow around airfoils is analysed using Reynolds-averaged Navier–Stokes equation based simulations in order to assess the effect of localized blowing and suction on the global aerodynamic performance of the airfoil. The simulations are carried out with the open-source CFD-toolbox OpenFOAM (Weller & Jasak, 2011) using a steady state, incompressible solver. For the current study we consider a flow around the NACA4412 airfoil at the Reynolds number Re = U∞c/ν = 4 · 105 and the angle of attack α = 5◦. For a more detailed description of the setup the reader is referred to Fahland et al. (2021). One particular difficulty in training an RL agent in this scenario is the fact that the true state evaluation function f is a Computational Fluid Dynamics (CFD) simulation. On one core of an Intel Xeon Platinum 8368 CPU, the simulation runs for approximately 10 minutes. Due to a fixed mesh size, we found that parallelization beyond 4 cores did not result in a significant speed-up, hence one reward evaluation takes approximately 2 to 3 minutes and cannot be reduced significantly, which severely limits the applicability of conventional RL algorithms with thousands of sequential reward evaluations. ## 5 Results And Discussion 5.1 Molecular Property Optimization Based on prior work by Zhou et al. (2019), we used cheap chemistry benchmarking properties logP and QED as a proof of concept to evaluate how the use of actively learned rewards performs in comparison to the real reward. Figure 1a and 1b show the performance of three different agents with NN-approximated rewards compared to a reference agent ("oracle-based reward") trained on the real reward. One of the reward approximation agents is only trained once in the beginning ("static"). One of the agents ("ACLR") uses a reward model which is updated at regular intervals using additional oracle queries selected based on uncertainty sampling. The last agent ("full update") is updated after every episode using oracle queries of all states encountered in that episode (i.e., closest to the reference agent which directly uses oracle queries for training). After approximately 2000 episodes in case of logP optimization and already at the beginning of QED optimization, the performances of the agents start to differ. While the performance of the static agent stagnates, all three other agents show similar performance. ![7_image_0.png](7_image_0.png) Figure 1: Evolution of the reward reached by the agent during the optimization of logP and QED: The red curve was obtained by training the agent on real (oracle-based) rewards, while the blue, orange and green curves are the ACRL model, the static reward model and a fully updated reward model, respectively. Due to high computational costs, only ACRL and static models can be tested in (c). The failure of the **static agent** to learn is due to the low generalization ability of the initial reward model itself, which is trained on the QM9 dataset (Ruddigkeit et al., 2012; Ramakrishnan et al., 2014) containing approximately 134.000 molecules with up to 9 non-hydrogen atoms. To some extent, the weak generalization can be attributed to not using state-of-the-art graph neural networks. However, we decided to use the same molecular representation and model as in the original Q-networks in Zhou et al. (2019), i.e., fingerprint representations and MLPs. Furthermore, during the learning process, the molecules generated (especially after a high number of episodes) contain many more atoms, which explains why the static reward model fails to correctly estimate the real property values. The strength of this effect depends on the property studied. In the logP optimization task, the property values reached with a static reward model follow the general trend of learning with real reward, even though the final performance after 5000 episodes is lower. In case of QED optimization, the static reward model fails to predict QED-values for molecules outside the training distribution. As a consequence, the RL agent learns to exploit errors of the static reward model and finds adversarial examples, rather than samples with desirable properties. The active learning component within **ACRL agent** allows the reward model to learn from molecules outside its initial training distribution, thus improving reward evaluations during agent training. By only selecting a small subset of labels obtained using oracle queries to be added to the training set, the objective of the ACLR agent is to mimic the reference agent's real behavior as closely as possible. This includes finding (nearly) optimal points (see e.g., Lindner et al. (2021)) to be selected for retraining of the reward model to minimize its errors while at the same time minimizing the number of costly oracle queries. We experimented with different sampling strategies (see SI), from which a query-of-committee model (see Seung et al. (1992)) performed best. Therefore, in the ACLR model used in the molecular design and improvement tasks, three reward models were trained independently to form a query-of-committee model. The three reward models are retrained after 500 episodes with the initial training set along with all 400 new molecules generated during the agent learning process and their computed real property values. The selection of new oracle queries to extend the dataset is based on the disagreement between the three reward models measured by the standard deviation of the predictions. However, our work is independent of the particular sampling strategy (even random sampling of visited states can work well in some applications), as long as the reward model's training distribution follows the exploration of the RL agent. Overall, the speed-up achieved by the ACRL model in | | Model | Relative | Oracle | Model | Oracle | Model | | |-----------|----------------|------------|----------|----------|----------|---------|---------| | Task | Oracle queries | queries | speed-up | duration | duration | time | time | | Mol. opt. | ∼4,000 | ∼200,000 | ∼50 | ∼1s | ∼0.001s | ∼55h | ∼0.05h | | Mol. imp. | ∼4,000 | ∼25,000 | ∼6.25 | ∼1m | ∼0.001s | ∼7h | ∼0.007h | | Drag opt. | ∼3,000 | ∼9,000,000 | ∼3,000 | ∼3m | ∼0.001s | ∼7500h | ∼2.5h | this experiment compared to the fully updated and oracle-based model is 50 (see Table 1). The relationship between speed-up and rewards reached is analyzed in the SI. Table 1: Relative and absolute speed-up factors for different tasks comparing the number of oracle and model queries. The duration of a reward model forward pass has been conservatively estimated as 1ms. Wallclock durations are calculated based on the number of steps the agents trained. Note that the absolute times are only for evaluating rewards, not including training time. For the drag optimization task, the agent trained for roughly 10h. Training on ground-truth rewards would require lots of parallel agents to train in a reasonable timeframe, while we train only one agent at a time. Fully updating the reward model on oracle queries of all samples ("full update") aids the learning process. In case of logP optimization and even stronger in case of QED optimization, learning by fully updating the reward model has even surpassed learning with actual reward values at certain episodes. One potential reason for that can be that the exploration of the fully updated agent is stronger than that of the reference agent (see SI), which needs to be confirmed in future work. However, in practice, fully updating the reward model by adding every single generated point (along with its real property value) to the initial dataset and retraining the neural network is as expensive as training the reference agent, so it cannot be applied to tasks with costly rewards. In order to understand why the fully updated reward model in some cases (e.g., Figure 1b) outperforms the oracle-based training, we analyzed the effect of additional noise and thus exploration which might be induced by replacing oracle-based rewards with (noisy) approximated rewards. We therefore varied the ϵ-greedy strategy of the learning process. In particular, we varied final ϵ values (i.e., probabilities of random actions) and the form of the ϵ-decay function used on the learning process. However, none of the changes in ϵ-decay could improve the learning behaviour, i.e., the ϵ-decay rate and function used by Zhou et al. (2019) was optimal. Therefore, for the rest of the simulations we used a fully exponential decay reaching approximately 1% randomness in episode 5000. The results of this study are available in the supplementary information section. Further study of the improvement effect due to a fully updated reward model is part of ongoing work as it has the potential to improve the performance of RL agents with little computational overhead. ## 5.2 Molecular Improvement After evaluating the performance of our agent on easy-to-compute properties such as penalized logP and QED, we test our ACRL approach on a molecular improvement task with more costly rewards, where an oracle-based reference study is unfeasible. In particular, we study a RL agent with the goal of independently varying two quantum mechanically calculated energy levels of molecules with only very few, in our case five, modification steps (see Section 4). Figure 1c shows the evolution of the ACRL and the static reward agents' rewards as a function of the training episode. We observe that the reward becomes positive after approximately 1000 episodes and stagnates after approximately 2000 episodes. Therefore, the agent has learned to improve given (arbitrary) molecules, since the reward value of the starting reference molecule is zero, each episode starts with a randomly sampled molecule, and any molecule with negative reward would have less desirable properties than the initial one. This suggests that even though the agent deals with different starting reference molecules at each episode, it has managed to learn a strategy to increase the reward in a limited number of steps. In contrast to the property optimization task discussed before, the performance of the ACRL and the static reward agents are equal within the confidence intervals. A likely explanation for this observation is that the number of steps per epoch in this task is limited to five, whereas 40 steps were possible in the prior task. Therefore, the agent here cannot generate molecules that are far outside the initial distribution of starting molecules, i.e., the QM9 dataset. Furthermore, the ratio of reward model queries to oracle queries in this experiment is comparably high (see Table 1), meaning that the ACRL reward model is updated on a high fraction of actually encountered molecules. A reference calculation with oracle based rewards or a fully updated reward model to check if the ACRL model found near-optimal results (within the DQN framework) are computationally too costly here and thus unfeasible. However, we compared the predictions of the reward models for randomly selected molecules throughout the training process to oracle predictions (see points in Figure 1c). We found excellent agreement, indicating that the ACRL as well as the static reward models are reliable. Thus, the solutions found are not exploiting weaknesses of the reward models, nor is the training limited by wrong predictions of the reward models. Therefore, it is likely that the found solutions are of comparable quality as ones that a hypothetical oracle-based RL model would find. The speed-up achieved in this experiment compared to a hypothetical oracle based model is 6.25, which still has room for improvement, given the high reliability of the reward models. Even though the ACRL agent does not exceed the performance of a static reward model trained on QM9, these results have another important implication. While QM9 contains all molecules with up to nine heavy atoms, of which around 130,000 exist and on which the static model has been trained on, the ACRL agent matches its performance using only 4000 ground-truth queries. While large databases exist for molecules, this may not be the case in other and especially new domains. This showcases the effectiveness of our approach in the low-data regime. ## 5.3 Optimization Of Airflow Drag Around An Airfoil Our ACRL method is applicable to a large number of different tasks in natural sciences and engineering, not only limited to chemistry. Therefore, in this section we present the results of a task in engineering, namely the reduction of airflow drag around an airfoil, e.g. an airplane wing (see Section 4). The objective in this task was to find a set of coefficients minimizing drag and to analyze the resulting profiles. Figure 2b shows the evolution of drag during 300000 episodes of training. The discrete jumps of the ACRL model coincide with retraining of the reward model every 10000 episodes. As higher mean constraints are highly correlated with lower drag, we choose samples for ground-truth evaluation based on reward rather than drag. Dots represent oracle-based ground-truth evaluations of random profiles sampled during training. ![9_image_0.png](9_image_0.png) Figure 2: (a) blowing/suction distribution discretized with 30 coefficients corresponding to 15 sections on each side of the considered airfoil. (b) drag evolution of two independent runs. (c) coefficient distribution for low-drag profiles. The results demonstrate that the ACRL agent is able to find profiles with significantly lower drag coefficients than the static reward model. They also show that in this task (in contrast to the molecular improvement task) it is crucial to actively update the reward model during training. This is related to the fact that in order to improve upon the initially uniform profile, the RL agent has to perform a constrained optimization in high-dimensional real space (30-dimensional in our case). Accurate reward model predictions require sufficient coverage of the relevant space within the initial dataset which is difficult to assert because the relevant region is, in general, not known, which is also true for many other real-world problems. As a consequence, an agent trained without active updates of the reward model only slightly improves upon a uniform profile. At the same time, model updates result in sharp drops of both predicted and ground-truth drag especially in the beginning of training as the relative effect of new ground-truth samples is high and the RL agent probably exploits wrong predictions of the early-stage reward models. This effect decreases as more and more samples are obtained along the trajectories towards the low-drag region in parameter space. Figure 2c shows the distribution of a small number of low-drag profiles sampled with ground-truth labels during training. The resulting profiles are non-trivial and have a regular, alternating pattern of coefficients with physically explainable meaning (Kametani et al., 2016; Mahfoze et al., 2019; Stroh et al., 2016). We note that due to various limitations of the simulated environment such as discretization of action space, a limited number of coefficients due to limitations in OpenFOAM simulations (used as an oracle) and limited episode length, these results are only locally optimal w.r.t. our setup. Yet, we find consistent, physically interpretable and highly non-trivial results. ## 6 Conclusion We introduced ACRL, an extension to standard reinforcement learning methods in the context of (computationally) expensive rewards, which models the reward of given applications using machine learning models. Because optimal regions in the search spaces are not known a priori and thus typically are not included in initial training sets, we use active learning while exploring the state space to update the reward model over the course of training. We first showed that it is possible to train agents with an incrementally improving reward model on existing benchmark tasks using cheap benchmark quantities. We then showed in two more realistic scenarios that by learning a reward model jointly with our policy, we can reduce the time for reward evaluations by several orders while still being able to produce meaningful results. In turn, it becomes feasible to train agents without massive distribution within reasonable timeframes, which saves computational resources and energy and at the same time accelerates research since resources can be spent on training models rather than evaluating rewards. ## Acknowledgements We would like to thank the Federal Ministry of Economics and Energy under Grant No. KK5139001AP0. We acknowledge support by the Federal Ministry of Education and Research (BMBF) Grant No. 01DM21001B (German-Canadian Materials Acceleration Center). We acknowledge funding by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within Priority Programme SPP 2331. ## References Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In *ICML '04:* Proceedings of the twenty-first international conference on Machine learning, pp. 1, New York, NY, USA, 2004. ACM. ISBN 1-58113-828-5. Kabir Ahuja, William H Green, and Yi-Pei Li. Learning to optimize molecular geometries using reinforcement learning. *Journal of Chemical Theory and Computation*, 17(2):818–825, 2021. Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances* in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https: //proceedings.neurips.cc/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf. M. Atzori, R. Vinuesa, G. Fahland, A. Stroh, D. Gatti, B. Frohnapfel, and P. Schlatter. Aerodynamic Effects of Uniform Blowing and Suction on a NACA4412 Airfoil. *Flow Turbul. Combust.*, 105:735–759, April 2020. ISSN 1386-6184, 1573-1987. doi: 10.1007/s10494-020-00135-z. URL http://link.springer.com/ 10.1007/s10494-020-00135-z. Christoph Bannwarth, Eike Caldeweyher, Sebastian Ehlert, Andreas Hansen, Philipp Pracht, Jakob Seibert, Spicher Spicher, and Stefan Grimme. Extended tight-binding quantum chemistry methods. *WIREs Comput.* Mol. Sci., 11:e01493, 2020. doi: 10.1002/wcms.1493. URL https://dx.doi.org/10.1002/wcms.1493. Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. *arXiv preprint arXiv:1611.09940*, 2016. Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. *European Journal of Operational Research*, 290(2):405–421, 2021. Sahil Bhola, Suraj Pawar, Prasanna Balaprakash, and Romit Maulik. Multi-fidelity reinforcement learning framework for shape optimization. *Journal of Computational Physics*, 482:112018, 2023. Richard Bickerton, Gaia Paolini, Jérémy Besnard, Sorel Muresan, and Andrew Hopkins. Quantifying the chemical beauty of drugs. *Nature chemistry*, 4:90–8, 02 2012. doi: 10.1038/nchem.1243. Erdem Biyik, Nicolas Huynh, Mykel J. Kochenderfer, and Dorsa Sadigh. Active preference-based gaussian process regression for reward learning. In Marc Toussaint, Antonio Bicchi, and Tucker Hermans (eds.), Robotics: Science and Systems XVI, Virtual Event / Corvalis, Oregon, USA, July 12-16, 2020, 2020. doi: 10.15607/RSS.2020.XVI.041. URL https://doi.org/10.15607/RSS.2020.XVI.041. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. *arXiv preprint arXiv:1606.01540*, 2016. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/ d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf. Petros Christodoulou. Soft actor-critic for discrete action settings. *arXiv preprint arXiv:1910.07207*, 2019. Yuchen Cui and Scott Niekum. Active reward learning from critiques. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 6907–6914. IEEE, 2018. Steven Dalton and iuri frosio. Accelerating reinforcement learning through gpu atari emulation. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information* Processing Systems, volume 33, pp. 19773–19782. Curran Associates, Inc., 2020. URL https:// proceedings.neurips.cc/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-Paper.pdf. Christian Daniel, Oliver Kroemer, Malte Viering, Jan Metz, and Jan Peters. Active reward learning with a novel acquisition function. *Autonomous Robots*, 39(3):389–405, 2015. Thomas P Dussauge, Woong Je Sung, Olivia J Pinon Fischer, and Dimitri N Mavris. A reinforcement learning approach to airfoil shape optimization. *Scientific Reports*, 13(1):9753, 2023. G. Fahland, A. Stroh, B. Frohnapfel, M. Atzori, R. Vinuesa, P. Schlatter, and D. Gatti. Investigation of Blowing and Suction for Turbulent Flow Control on Airfoils. *AIAA J.*, 59(11):4422–4436, July 2021. doi: 10.2514/1.J060211. URL https://doi.org/10.2514/1.J060211. Jenna C Fromer and Connor W Coley. Computer-aided multi-objective optimization in small molecule discovery. *Patterns*, 4(2), 2023. Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *International conference on machine learning*, pp. 1587–1596. PMLR, 2018. Manan Goel, Shampa Raghunathan, Siddhartha Laghuvarapu, and U Deva Priyakumar. Molegular: molecule generation using reinforcement learning with alternating rewards. *Journal of Chemical Information and* Modeling, 61(12):5815–5826, 2021. Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. *ACS Central Science*, 4(2):268–276, jan 2018. doi: 10.1021/acscentsci.7b00572. URL https://doi.org/10.1021%2Facscentsci.7b00572. Stefan Grimme, Christoph Bannwarth, and Philip Shushkov. A robust and accurate tight-binding quantum chemical method for structures, vibrational frequencies, and noncovalent interactions of large molecular systems parametrized for all spd-block elements (z=1–86). *J. Chem. Theory Comput.*, 13(5):1989–2009, 2017. doi: 10.1021/acs.jctc.7b00118. URL https://dx.doi.org/10.1021/acs.jctc.7b00118. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861–1870. PMLR, 2018a. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018b. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In *Thirty-second AAAI conference on artificial intelligence*, 2018. Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay. In *6th International Conference on Learning* Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=H1Dy---0Z. Shengyi Huang, Rousslan Fernand Julien Dossa, Antonin Raffin, Anssi Kanervisto, and Weixun Wang. The 37 implementation details of proximal policy optimization. *The ICLR Blog Track 2023*, 2022. Rishabh Jangir, Guillem Alenya, and Carme Torras. Dynamic cloth manipulation with deep reinforcement learning. In *2020 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 4630–4636. IEEE, 2020. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589, 2021. Y. Kametani and K. Fukagata. Direct numerical simulation of spatially developing turbulent boundary layers with uniform blowing or suction. *J. Fluid Mech.*, 681:154–172, August 2011. ISSN 0022-1120, 1469-7645. doi: 10.1017/jfm.2011.219. URL https://www.cambridge.org/core/product/identifier/ S0022112011002199/type/journal_article. Yukinori Kametani, Koji Fukagata, Ramis Örlü, and Philipp Schlatter. Drag reduction in spatially developing turbulent boundary layers by spatially intermittent blowing at constant mass-flux. *Journal of Turbulence*, 17(10):913–929, 2016. Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. *Advances in neural information processing systems*, 30, 2017. R. B. Kinney. Skin-friction drag of a constant-property turbulent boundary layer with uniform injection. AIAA Journal, 5(4):624–630, 1967. Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. *International* Journal of Robotics Research, 32(11):1238–1274, 2013. doi: 10.1177/0278364913495721. Seung-Hyun Kong, I Made Aswin Nahrendra, and Dong-Hee Paek. Enhanced off-policy reinforcement learning with focused experience replay. *IEEE Access*, 9:93152–93164, 2021. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *arXiv preprint arXiv:1509.02971*, 2015. Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293–321, May 1992. ISSN 1573-0565. doi: 10.1007/BF00992699. URL https://doi.org/ 10.1007/BF00992699. David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, and Andreas Krause. Information directed reward learning for reinforcement learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 3850–3862. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/ 1fa6269f58898f0e809575c9a48747ef-Paper.pdf. OA Mahfoze, A Moody, A Wynn, RD Whalley, and S Laizet. Reducing the skin-friction drag of a turbulent boundary-layer flow with low-amplitude wall-normal blowing within a bayesian optimization framework. Physical Review Fluids, 4(9):094601, 2019. Mufti Mahmud, Mohammed Shamim Kaiser, Amir Hussain, and Stefano Vassanelli. Applications of deep learning and reinforcement learning to biological data. IEEE Transactions on Neural Networks and Learning Systems, 29(6):2063–2079, 2018. doi: 10.1109/TNNLS.2018.2790388. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, February 2015. ISSN 00280836. URL http://dx.doi.org/10.1038/nature14236. H. L. Morgan. The generation of a unique machine description for chemical structures-a technique developed at chemical abstracts service. *Journal of Chemical Documentation*, 5(2):107–113, 1965. doi: 10.1021/ c160017a018. URL http://dx.doi.org/10.1021/c160017a018. AkshatKumar Nigam, Pascal Friederich, Mario Krenn, and Alán Aspuru-Guzik. Augmenting genetic algorithms with deep neural networks for exploring the chemical space. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=H1lmyRNFvr. Marcus Olivecrona, Thomas Blaschke, Ola Engkvist, and Hongming Chen. Molecular de-novo design through deep reinforcement learning. *Journal of cheminformatics*, 9(1):1–14, 2017. Tiago Pereira, Maryam Abbasi, Bernardete Ribeiro, and Joel P Arrais. Diversity oriented deep reinforcement learning for targeted molecule generation. *Journal of cheminformatics*, 13:1–17, 2021. Raghunathan Ramakrishnan, Pavlo Dral, Matthias Rupp, and Anatole von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. *Scientific Data*, 1, 08 2014. doi: 10.1038/sdata.2014.22. RdKit. Rdkit: Open-source cheminformatics, 2006. URL http://www.rdkit.org/,https://github.com/ rdkit/rdkit. David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742–754, 2010. doi: 10.1021/ci100050t. URL http://dx.doi.org/10.1021/ci100050t. PMID: 20426451. Lars Ruddigkeit, Ruud van Deursen, Lorenz C Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database gdb-17. *Journal of chemical information and* modeling, 52(11):2864—2875, November 2012. ISSN 1549-9596. doi: 10.1021/ci300415d. Stefan Schaal. Learning from demonstration. In M. C. Mozer, M. Jordan, and T. Petsche (eds.), *Advances in* Neural Information Processing Systems, volume 9. MIT Press, 1997. URL https://proceedings.neurips. cc/paper/1996/file/68d13cf26c4b4f4f932e3eff990093ba-Paper.pdf. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015. Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. Green ai. *Communications of the ACM*, 63 (12):54–63, 2020. H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92, pp. 287–294, New York, NY, USA, 1992. Association for Computing Machinery. ISBN 089791497X. doi: 10.1145/130385.130417. URL https: //doi.org/10.1145/130385.130417. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, January 2016. doi: 10.1038/nature16961. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. *arXiv preprint arXiv:1712.01815*, 2017. Adam Stooke and Pieter Abbeel. Accelerated methods for deep reinforcement learning, 2018. URL https: //arxiv.org/abs/1803.02811. A Stroh, Y Hasegawa, Philipp Schlatter, and B Frohnapfel. Global effect of local skin friction drag reduction in spatially developing turbulent boundary layer. *Journal of Fluid Mechanics*, 805:303–321, 2016. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016. Natalia Vesselinova, Rebecca Steinert, Daniel F Perez-Ramirez, and Magnus Boman. Learning combinatorial optimization on graphs: A survey with applications to networking. *IEEE Access*, 8:120388–120416, 2020. Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, October 2019. doi: 10.1038/s41586-019-1724-z. H. Weller and H. Jasak. OpenFOAM, 2011. URL https://openfoam.org. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Reinforcement learning, pp. 5–32, 1992. Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991. Christian Wirth, Riad Akrour, Gerhard Neumann, and Johannes Fürnkranz. A survey of preferencebased reinforcement learning methods. *Journal of Machine Learning Research*, 18(136):1–46, 2017. URL http://jmlr.org/papers/v18/16-634.html. Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay S. Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), *Advances in Neural Information Processing* Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 6412–6422, 2018. URL https://proceedings.neurips.cc/paper/ 2018/hash/d60678e8f2ba9c540798ebbde31177e8-Abstract.html. James J. Q. Yu, Wen Yu, and Jiatao Gu. Online vehicle routing with neural combinatorial optimization and deep reinforcement learning. *IEEE Transactions on Intelligent Transportation Systems*, 20(10):3806–3817, 2019. doi: 10.1109/TITS.2019.2909109. Zhenpeng Zhou, Steven Kearnes, Li Li, Richard N. Zare, and Patrick Riley. Optimization of molecules via deep reinforcement learning. *Scientific Reports*, 9(1), Jul 2019. ISSN 2045-2322. doi: 10.1038/s41598-019-47148-x. URL http://dx.doi.org/10.1038/s41598-019-47148-x. ## 7 Supplementary Information Comparison Of Querying Strategies For Retraining The three strategies for selection of points presented were compared on a constant number of selected points (800) (Figure 3). In each case, three reward models are initially trained with three different train-test splits of the original QM9 dataset and used for prediction later on. The mean of the three reward models' predictions is used as a final reward for the agent to maximize, and the standard deviation of these are calculated to give an idea of the uncertainty on prediction. Based on these models, three selection modes are studied. In the first setting, the models are retrained by randomly sampling a number of points from the initial QM9 test set as well as newly generated points during the learning process. In a second setting, the points with the highest standard deviations of model predictions are sampled and the models are updated using these points as train set. This is based on the assumption that the points with the highest standard deviations of model predictions (points in which the three models "disagree") are more likely to come from outside the original train distribution, thus potentially representing the points that the model needs to learn from in order to improve its predictions during the agent training process. In this sense, having three reward models instead of only one could provide a good basis for the selection of points. Finally, a third approach in sampling points is based on classifying the previously obtained test set and newly generated points in different bins before selecting points with the highest standard deviation (of model predictions). This stems from the fact that in certain situations, points with the highest standard deviations could represent outliers, and therefore could prevent the reward models from learning the main trend of the data. The strategy of bin-based selection offers a solution to this by selecting points with the highest standard deviations while respecting the initial data distribution. It is important to note that all these sampling processes are done separately for each model to be retrained, since their respective starting train and test sets are not the same to start with. Therefore, the three models will never be updated on the same training data, and will thus provide independent predictions, depending on the points they were trained on. Overall, the standard deviation based sampling method performed best and was thus used in the main part of this paper. ## Comparison Of The Number Of Points Selected For Retraining Given the best strategy (standard deviation based), the effect of the number of points selected for reward model retraining was studied (Figure 4). The minimal number of points that was consistently comparable to the real reward was 400 points. ## Study Of The Effect Of Varying Degrees Of Randomness On Learning The ϵ values in the MolDQN paper start with values of 100% and decrease exponentially at each episode until they reach a percentage of 1% at episode 5000. In this study, our aim was to compare the effect of the ϵ end value (last value of randomness) on the real reward agent at episode 4800, considering that the agent ![16_image_0.png](16_image_0.png) (a) logp ![16_image_1.png](16_image_1.png) (b) QED Figure 3: Comparison of different selection modes, random (purple), standard deviation based (blue) and bin-based (brown) on 800 sampled points. ![16_image_2.png](16_image_2.png) ![16_image_3.png](16_image_3.png) Figure 4: Comparison of reward models retrained on a varying number of points selected based on standard deviation does not choose any more random actions from episode 4800 to 5000 (to better guide it towards the end goal). Results are shown in Figure 5. We concluded that increasing the e end values (increasing randomness, with the hope of favoring exploration) did not help the agent reach better rewards. ## Study Of The Effect Of The Decay Function Form The e-decay function used in MoIDQN is an exponential decay function. In this study, we choose to study the effect of varying decay function forms by adding a linear component to the exponential function with varying fractions. The equation is the following: $$\epsilon(t)=\epsilon_{0}(\lambda(1-\beta t)+(1-\lambda)\alpha^{t})$$ $$(3)^{\frac{1}{2}}$$ with 1 - Bt and a the linear and exponential components respectively, og the starting randomness (at 100%), 3 and & constants that depend on the starting and end values (we choose an end value of 1% at episode 4800), and x the fraction (relative importance) of the linear component. At a \ of 0, the decay-function is fully exponential, and at a \ of 1, it is fully linear. Results are shown in Figure 6. We conclude here that a fully exponential function is more convenient for the learning of the agent. ## Comparison Of Different Mean Constraints In this section, we present a more detailed description of the results for the drag optimization task. Our initial training dataset contains 5000 random profiles with a mean value centered around ±0.002 on each side. For a constraint configuration matching the distribution of the initial dataset, Figures 7a and 7b show the evolution of drag and reward, respectively. In order to test extrapolation and generalization capabilities of ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) (a) e-values' exponential decay with varying end points (b) Results on the QED task Figure 5: The effect of increasing randomness by reaching different e end values with the same exponential decay function. ![17_image_2.png](17_image_2.png) ![17_image_4.png](17_image_4.png) (a) Different forms of e-decay functions ![17_image_3.png](17_image_3.png) (b) Results on the QED task Figure 6: The effect of different c-decay functions with the same randomness end value. our ACRL method, we repeated the same experiment with a larger constraint interval which lies outside the initial training distribution. The results in Figures 8a and 8b show that ACRL is able to explore the underrepresented space well, in contrast to a static reward model which fails to guide the agent to explore the relevant solution spaces. The higher variance stems from the fact that higher mean values (or equivalently, higher total volume) trivially reduce drag. Thus, in this experiment the agent encounters a higher diversity of states in terms of their constraint. Even though most of the observed states lie outside the initial training distribution, an ACRL agent is still able to explore the relevant low-drag space. The importance of actively updating the reward model during training is reflected by the results of agents using a static reward model in both experiments. Both agents trained with a static reward model achieve very similar results in terms of drag, even though drag distributions vary considerably between the experiments. Only the ACRL agents are able to capture the variance of drag well, which is especially high in Figure 8 due a larger constraint interval. In contrast, static models fail to move outside their initially modelled distribution, even though they predict drag values of encountered states very accurately. ![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) Figure 7: Results for different mean constraints in [0.0019, 0.0021]. ![18_image_2.png](18_image_2.png) Figure 8: Results for different mean constraints in [0.0015, 0.0025].
Review 1: Summary: The paper presents a method of learning reward functions, which can be useful when evaluating of the reward function is costly. In a nutshell, the algorithm clones a (costly) ground truth reward function, into (cheaper-to-evaluate) neural network. It periodically retrains the neural network to mitigate the modelling errors on the previously unseen parts of the state space. The algorithm is evaluated on three domains, and is used to search for local minima of functions related to molecular optimization, molecular design and airflow drug. Strengths and Weaknesses: Strengths: - a straightforward method (easy to implement and control) Weaknesses: 1. lack of focus about contributions - I am confused by what is the main contribution of the paper. Is it the ACRL method. If yes, that it should be benchmarked more properly, on more tasks/setups etc, and perhaps the detailed description of environments presented in the experimental section should be deferred to the appendix. At the moment, it feels more like the paper is focused on solving these particular tasks. 2. lack of clarity - the paper casts optimization problem into the RL setting, but does not specify the action space (in Sec 4.1/Sec 4.2) 3. non-standard RL setup - the paper claims: "we use a standard MDP formulation ...", however, it restricts to the optimization problems with a specific rewards, given by the difference of the potential 4. optimization with RL - RL is perhaps not the best tool for looking for local minima. At the very least, there should be some motivation provided, why it might be the case. Similarly, the baseline, in this case, should be 'standard' optimization techniques. Why, for example, SGD would not work? Requested Changes: I would like to see some comments addressing the weaknesses specified above. Ad 1/4. Perhaps it would be better to state explicitly what is the main contribution. If it is ACRL, as it seems at the moment, please provide more environments. I'd be great, it they would cover some standard RL tasks. Ad 2. Please provide the action space. Ad 4. Please motivate why to use RL instead of optimization techniques (designed specifically to search for local minima). Broader Impact Concerns: n/A ================================================== Review 2: Summary: The authors present a method called ACRL to actively learn a reward function $f'$ when a real reward function $f$ exists but is very expensive to evaluate. The idea of ACRL is to train a neural network to approximate $f$ using a small set of samples obtained by evaluating $f$. The authors propose 3 strategies to build the training set for $f'$. The performance of ACRL is evaluated on chemistry and physics data sets. Strengths and Weaknesses: Strengths * The problem is well motivated * The paper is generally well written and easy to follow Weaknesses * In Section 2.2, the authors stress that the proposed method is more efficient w.r.t. wall time. However, there is not a single wall time comparison in the experiment. * The speed-up shown in Table 1 is not convincing. I don't understand why using more queries means more efficiency. If cheaper querying time is the point, we still need to know the total training time to make that arguement. * The experiment does not show if active learning reduces sample complexity. * No baseline comparison. Some related works are mentioned in Sec 2.2 but none of them was compared with. They may have slightly different assumptions, but they can be adapted fairly easily. * Some experiment settings need more explanation. E.g., why is a very short horizon $(T=5)$ used in the molecular design task, whereas much longer horizons are used in other tasks. Requested Changes: * Major changes * If the claim is to reduce wall time, then the wall time needs to be reported in experiments. * I also encourage to do an experiment that shows ACRL can achieve higher rewards with fewer samples than a reward model trained with random samples. * I think the strategy of incrementally build a training set is very important to active learning. But those strategies are only described in the supplementary. I would suggest move it to the main paper. * Clarify why number of steps are set differently for different tasks in the experiment. * Figure 1 (c) is confusing because it shows that ACRL has no advantage. I would suggest using a positive example instead of explaining why it did not work because it is supposed to support the main claim of the paper. * Minor changes * In page 8, 'the ratio of reward model queries to oracle queries in this experiment is comparably low', should it be high? * In Figure 3, the caption and legend do not agree. * In Figure 8b, ACRL seems not converged. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors propose actively learning costly rewards for reinforcement learning (ACRL). This particular method works in settings where a reward function is known but hard to compute and a learned reward model can improve wall-clock time. The authors focus on problems that are not standard RL benchmarks: chemistry, materials science and engineering. The authors provide empirical results on toy examples as well as more practical problems to demonstrate its effectiveness. Strengths and Weaknesses: Strengths: - **Clarity**: the paper is easy to read and is overall clear - **Presentation**: overall structure, figures, and tables are clear - **Significance**: the proposed method seems to be effective in reducing wall-clock time, and the results seem promising. The results can be quite interesting to people who work in these scientific problems and want to use a faster RL solution. And these problems are important problems but are less studied in the RL literature, which might add to the significance. Weaknesses: - My only major concern is on **Related work**: the authors discussed how the work is different from previous works and emphasized the differences, which is good. And the authors provided ablations on the proposed method which is also good. However, I am a little concerned about the fact that the proposed method is not compared to **any** other alternative methods. For example, authors mentioned IDRL and how it is different from the proposed method, but does it not work at all in the problems studied in this paper (or, maybe even a naive adaptation of IDRL to the problems in this paper, if IDRL has not been tested on the same problems). While the proposed method seems to be quite good compared to a naive baseline, it is unclear to me how it compares to related works in the same research direction. Overall seems a good paper, I am happy to discuss the concern with authors and if I missed sth please point it out. Requested Changes: - How does the proposed method compare to related works (on using a learned reward model) that can be applied to the same problems, or can be applied with some small adjustments? If all previous works are absolutely impossible to apply to the problems studied in this paper, the authors should emphasize that in the paper (maybe using a table like the Table 1 in IDRL paper to emphasize it), if they can be somehow applied, then there should be some comparison experiments. (it is OK to show, for example, an alternative solution will give similar performance but takes much longer wall-clock time, but there should be some comparison). Other minor changes: - Please add additional explanations on what is (c in figure 1 in its caption to help reading. You mentioned in later section, but it’s good to also mention it in the caption. Broader Impact Concerns: N/A ================================================== Review 4: Summary: The authors propose a new deep reinforcement learning method suitable for a wide range of applications in science and engineering. The method called actively learning costly rewards for reinforcement learning (ACRL), provides a fast and efficient way to solve optimization problems with a known ground-truth reward function. Evaluating the ground-truth reward function can be expensive for many science and engineering problems. ACRL involves the following key steps: pre-training a neural network model to approximate the ground truth reward function and then using active learning to update the model during training. The authors demonstrate the effectiveness of ACRL with double deep Q-learning on three use cases: molecular property optimization, molecular design, and optimization of airflow drag around an airfoil. The first two use cases have a known optimization function, while the third relies on querying from CFD simulations. Overall, this work presents an interesting approach to engineering optimization problems and showcases that most optimization problems can be framed as RL problems and approximated by deep learning models. Strengths and Weaknesses: The strengths of this work include: • The paper is well-written, and the language is easy to understand. • The authors discuss the key differences between this work and prior works in RL, such as IDRL, which highlights the uniqueness and progress made in this work. • The ACRL method is presented well in mathematical terms, and the algorithm is easy to understand. • For each use case, the authors provide detailed background information on the optimization problems, making this work accessible to readers with little or no knowledge of chemistry or engineering. The weaknesses of this work include: • The authors did not compare ACRL with methods often used in science and engineering where the original optimization function is expensive to evaluate. These methods include surrogate modeling approaches and Bayesian Optimization. This raises questions about the level of innovation in this work, as DQN is just another surrogate model and the optimization problem is framed as RL. • The manuscript is missing many technical details of the active learning approach, such as the choice of initial sampling strategy, acquisition function, and how to balance the exploration and exploitation trade-off. Requested Changes: This work would be further improved if the authors addressed the following questions: • As mentioned above, the authors should review relevant works in traditional surrogate modeling approaches (e.g. Engineering Design via Surrogate Modelling: A Practical Guide, DOI:10.1002/9780470770801) and Bayesian optimization methods (e.g. https://pubs.acs.org/doi/abs/10.1021/acs.jcim.1c00637, https://www.nature.com/articles/s41586-021-03213-y) in the introduction section. They should also point out the key differences between ACRL and these methods. • The authors state that the ground truth reward functions have to be known, but later indicate that the CFD simulation is one of the ground truth reward functions. The word "known" is ambiguous. Does "known" mean that the mathematical model has to be known, or can it simply be queried as is the case for CFD? These are two different concepts. Namely, is the author referring to a "black box function" or a "white box function"? • On a similar note, the authors state that "IDRL assumes the absence of a reward function". Isn't this the more common case in science and engineering where the reward function is unknown ("black box function")? • The authors should explain how to construct the initial set of samples and training. Is there a strategy to sample (random or uniform)? How does the active learning approach balance the exploration and exploitation trade-off? When to choose uniform and when to choose uncertainty sampling? In section 5.1, 500 episodes are used for initial training. The authors didn't mention the number of initial samples for the latter two use cases in sections 5.2 and 5.3. These are key questions in active learning that need to be addressed. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: Due to lack of baselines the paper both lacks proper support for the main claim, as well as the main claim is too narrow (due to being restricted to methods developed in the context of reinforcement learning field, while not being clear on why one cannot use methods that wouldn't be described usually as belonging to the field). ==================================================
# Provably Convergent Policy Optimization Via Metric-Aware Trust Region Methods Jun Song juns113@uw.edu Department of Industrial and Systems Engineering University of Washington Niao He niao.he@inf.ethz.ch Department of Computer Science ETH Zürich Lijun Ding *lding47@wisc.edu* Wisconsin Institute for Discovery University of Wisconsin - Madison Chaoyue Zhao cyzhao@uw.edu Department of Industrial and Systems Engineering University of Washington Reviewed on OpenReview: *https: // openreview. net/ forum? id= jkTqJJOGMS* ## Abstract Trust-region methods based on Kullback-Leibler divergence are pervasively used to stabilize policy optimization in reinforcement learning. In this paper, we exploit more flexible metrics and examine two natural extensions of policy optimization with Wasserstein and Sinkhorn trust regions, namely *Wasserstein policy optimization (WPO)* and *Sinkhorn policy* optimization (SPO). Instead of restricting the policy to a parametric distribution class, we directly optimize the policy distribution and derive their closed-form policy updates based on the Lagrangian duality. Theoretically, we show that WPO guarantees a monotonic performance improvement, and SPO provably converges to WPO as the entropic regularizer diminishes. Moreover, we prove that with a decaying Lagrangian multiplier to the trust region constraint, both methods converge to global optimality. Experiments across tabular domains, robotic locomotion, and continuous control tasks further demonstrate the performance improvement of both approaches, more robustness of WPO to sample insufficiency, and faster convergence of SPO, over state-of-art policy gradient methods. ## 1 Introduction Policy-based reinforcement learning (RL) approaches have received remarkable success in many domains, including video games (Wang et al., 2017; Wu et al., 2017; Mnih et al., 2016), robotics (Grudic et al., 2003; Levine et al., 2016), and continuous control tasks (Duan et al., 2016; Schulman et al., 2016; Heess et al., 2015). One prominent example is policy gradient method (Grudic et al., 2003; Peters & Schaal, 2006; Lillicrap et al., 2016; Sutton et al., 1999; Williams, 1992; Mnih et al., 2016; Silver et al., 2014). The core idea is to represent the policy with a probability distribution πθ(a|s) = P[a|s; θ], such that the action a in state s is chosen stochastically following the policy πθ controlled by parameter θ. Determining the right step size to update the policy is crucial for maintaining the stability of policy gradient methods: too conservative choice of stepsizes result in slow convergence, while too large stepsizes may lead to catastrophically bad updates. To control the size of policy updates, Kullback-Leibler (KL) divergence is commonly adopted to measure the difference between two policies. For example, the seminal work on trust region policy optimization (TRPO) by Schulman et al. (2015) introduced KL divergence based constraints (trust region constraints) to restrict the size of the policy update; see also Peng et al. (2019); Abdolmaleki et al. (2018). Kakade (2001) and Schulman et al. (2017) introduced a KL-based penalty term to the objective to prevent excessive policy shift. Though KL-based policy optimization has achieved promising results, it remains interesting whether using other metrics to gauge the similarity between policies could bring additional benefits. Recently, a few work (Richemond & Maginnis, 2017; Zhang et al., 2018; Moskovitz et al., 2021; Pacchiano et al., 2020) has explored the Wasserstein metric to restrict the deviation between consecutive policies. Compared with KL divergence, the Wasserstein metric has several desirable properties. Firstly, it is a true symmetric distance measure. Secondly, it allows flexible user-defined costs between actions and is less sensitive to ill-posed likelihood ratios. Thirdly but most importantly, the Wasserstein metric takes into account the geometry of the metric space (Panaretos & Zemel, 2019) and allows distributions to have different or even non-overlapping supports. Motivating Example: Below we provide an example of a grid world (see Figure 1) that illustrates the advantages of using the Wasserstein metric over the KL divergence to construct trust regions and policy updates. The grid world consists of 5 regular grids and 2 goal grids, and there are three possible actions: left, right, and pickup. The player always starts from the middle grid, and making a left or right move results in a reward of −1. Picking up yields a reward of −3 at regular grids, +5 at the blue goal grid, and +10 at the red goal grid. An episode terminates either at the maximum length of 10 or immediately after picking up. We define the geometric distance between left and right actions to be 1, and 4 between other actions. Figure 1: Motivating grid world example ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) Figure 2 shows the Wasserstein distance and KL divergence for different policy shifts of this grid world example. We can see that Wasserstein metric utilizes the geometric distance between actions to distinguish the shift of policy distribution to a close action (policy distribution 1 −→ 2 in Figure 2a) from the shift to a far action (policy distribution 1 −→ 3 in Figure 2b), while KL divergence does not. Figure 3 demonstrates the constrained policy updates based on Wasserstein distance and KL divergence respectively with a fixed trust region size 1. We can see that Wasserstein-based policy update finds the optimal policy faster than KL-based policy update. This is because KL distance is larger than Waserstein when considering policy shifts of close actions (see Figure 2a). Therefore, Wasserstein policy update is able to shift action (from left to right) in multiple states, while KL update is only allowed to shift action in a single state. Besides, KL policy update keeps using a suboptimal short-sighted solution between the 2nd and 4th iteration, which further slows down the convergence. ![1_image_2.png](1_image_2.png) Figure 2: Wasserstein utilizes geometric feature of action space ![2_image_0.png](2_image_0.png) Figure 3: Demonstration of policy updates under different trust regions ![2_image_1.png](2_image_1.png) ![2_image_2.png](2_image_2.png) However, the challenge of applying the Wasserstein metric for policy optimization is also evident: evaluating the Wasserstein distance requires solving an optimal transport problem, which could be computationally expensive. To avoid this computation hurdle, existing work resorts to different techniques to *approximate* the policy update under Wasserstein regularization. For example, Richemond & Maginnis (2017) solved the resulting RL problem using Fokker-Planck equations; Zhang et al. (2018) introduced particle approximation method to estimate the Wasserstein gradient flow. Recently, Moskovitz et al. (2021) instead considered the second-order Taylor expansion of Wasserstein distance based on Wasserstein information matrix to characterize the local behavioral structure of policies. Pacchiano et al. (2020) tackled behavior-guided policy optimization with smooth Wasserstein regularization by solving an approximate dual reformulation defined on reproducing kernel Hilbert spaces. Aside from such approximation, some of these work also limits the policy representation to a particular parametric distribution class, As indicated in Tessler et al. (2019), since parametric distributions are not convex in the distribution space, optimizing over such distributions results in local movements in the action space and thus leads to convergence to a sub-optimal solution. Until now, the theoretical performance of policy optimization under the Wasserstein metric remains elusive in light of these approximation errors. In this paper, we study policy optimization with trust regions based on Wasserstein distance and Sinkhorn divergence. The latter is a smooth variant of Waserstein distance by imposing an entropic regularization to the optimal transport problem (Cuturi, 2013). We call them, *Wasserstein Policy Optimization (WPO)* and *Sinkhorn Policy Optimization (SPO)*, respectively. Instead of confining the distribution of policy to a particular distribution class, we work on the space of policy distribution directly, and consider all admissible policies that are within the trust regions with the goal of avoiding approximation errors. Unlike existing work, we focus on *exact characterization* of the policy updates. We would like to emphasize that our methodology and theoretical analysis in Section 3, 4, and 5 primarily concentrate on a discrete action space. However, we also present an extension of our method to accommodate a continuous action space, detailed in Section 7.5. We highlight our contributions as follows: 1. **Algorithms:** We develop closed-form expressions of the policy updates for both WPO and SPO based on the corresponding optimal Lagrangian multipliers of the trust region constraints. To the best of our knowledge, this is the first explicit closed-form updates for policy optimization based on Wasserstein and Sinkhorn trust regions. In particular, the optimal Lagrangian multiplier of SPO admits a simple form and can be computed efficiently. A practical on-policy actor-critic algorithm is proposed based on the derived expressions of policy updates and advantage value function estimation. 2. **Theory:** We theoretically show that WPO guarantees a *monotonic performance improvement* through the iterations, *even with non-optimal Lagrangian multipliers*. We also prove that SPO converges to WPO as the entropic regularizer diminishes. Moreover, we prove that with a decaying schedule of the multiplier, SPO and WPO converge to *global optimality*, and with a constant multiplier, both methods converge linearly up to a neighborhood of the optimal value. To our best knowledge, this appears to be the first convergence rate analysis of policy optimization based on Wasserstein-type metrics. 3. **Experiments:** We provide comprehensive evaluation on the efficiency of WPO and SPO under several types of testing environments including tabular domains, robotic locomotion tasks, and further extend it to continuous control tasks. Compared to state-of-art policy gradients approaches that use KL divergence such as TRPO and PPO and those use Wasserstein metric such as Wasserstein Natural Policy Gradient (WNPG) (Moskovitz et al., 2021) and Behavior Guided Policy Gradients (BGPG) (Pacchiano et al., 2020), our methods achieve better sample efficiency, faster convergence, and improved final performance. Numerical study indicates that by properly choosing the weight of entropic regularizer, SPO achieves a better trade-off between convergence and final performance than WPO. Related work: Wasserstein-like metrics have been explored in a number of works in the context reinforcement learning. Ferns et al. (2004) first introduced bisimulation metrics based on Wasserstein distance to quantify behavioral similarity between states for the purpose of state aggregation. Such bisimulation metrics were recently utilized for representation learning of RL; see e.g., Castro (2020); Agarwal et al. (2021b). In addition, a few recent work has also exploited Wasserstein distance for imitation learning (see e.g., Xiao et al. (2019); Dadashi et al. (2021)) and unsupervised RL (see e.g., He et al. (2022)). Our work is closely related to several previous studies, including Richemond & Maginnis (2017); Zhang et al. (2018); Moskovitz et al. (2021); Pacchiano et al. (2020), which also utilize Wasserstein distance to measure the proximity of policies. However, unlike the aforementioned studies that solely employ Wasserstein distance as an explicit penalty function, we additionally utilize it as a trust region constraint. Moreover, we consider nonparametric policies and derive explicit policy update forms, whereas these studies update parametric policies using policy gradients. Furthermore, we demonstrate monotonic performance improvement and global convergence with our policy update, which is not provided in these previous works. Regarding the use of Sinkhorn divergence in RL, Pacchiano et al. (2020)is the only related work to our best knowledge, where the entropy regularization is used to mitigate the computational burden of computing Wasserstein metric. However, no explicit form of policy update is provided in this work, while we derive an explicit Sinkhorn policy update and demonstrate its advantage in convergence speed. Additionally, we use Wasserstein distance to directly measure the proximity of nonparametric policies in the distribution space, while Pacchiano et al. (2020); Moskovitz et al. (2021)measure the similarity of parametric policies in the behavioral space. Wasserstein-like metrics are also pervasively studied in distributionally robust optimization (DRO); see e.g., Esfahani & Kuhn (2018); Gao & Kleywegt (2016); Zhao & Guan (2018); Blanchet & Murthy (2019). We also point out that a recent concurrent work by Wang et al. (2021a) studied DRO using the Sinkhorn distance. Our duality formulations are largely inspired from existing work in DRO. However, we note that constrained policy optimization is conceptually different from DRO. Constrained policy optimization focuses on finding the optimistic policy that falls in a trust region, whereas DRO (e.g., the KL DRO) aims to optimize some worst-case loss given by the adversarial distribution of unknown parameters within some ambiguity set. ## 2 Background And Notations Markov Decision Process (MDP): We consider an infinite-horizon discounted MDP, defined by the tuple (S, A, P, r, *υ, γ*), where S is the state space, A is the action space, P : *S × A × S −→* R is the transition probability, r : *S × A −→* R is the reward function, υ : *S −→* R is the distribution of the initial state s0, and γ ∈ (0, 1) is the discount factor. We define the return of timestep t as the accumulated discounted reward from t, Rt =P∞ k=0 γ kr(st+k, at+k), and the value function as V π(s) = E[Rt|st = s; π]. The performance of a stochastic policy π is defined as J(π) = Es0,a0,s1...[P∞ t=0 γ tr(st, at)] where at ∼ π(at|st), st+1 ∼ P(st+1|st, at). As shown in Kakade & Langford (2002), the expected return of a new policy π ′can be expressed in terms of the advantage over the old policy π: J(π ′) = J(π) + Es∼ρπ′ υ ,a∼π′ [Aπ(*s, a*)], where Aπ(*s, a*) = E[Rt|st = *s, a*t = a; π] − E[Rt|st = s; π] represents the advantage function and ρ π υ represents the unnormalized discounted visitation frequencies with initial state distribution υ, i.e., ρ π υ (s) = Es0∼υ[P∞ t=0 γ tP(st = s|s0)]. Trust Region Policy Optimization (TRPO): In TRPO (Schulman et al., 2015), the policy π is parameterized as πθ with parameter vector θ. For notation brevity, we use θ to represent the policy πθ. Then, the new policy θ ′is found in each iteration to maximize the expected improvement J(π ′) − J(π), or equivalently, the expected value of the advantage function: $\max\limits_{\theta^{\prime}}\ \ \mathbb{E}_{s\sim\rho^{\theta}_{v},a\sim\theta^{\prime}}[A^{\theta}(s,a)]$ s.t. $\mathbb{E}_{s\sim\rho^{\theta}_{v}}[d_{\rm KL}(\theta^{\prime},\theta)]\leq\delta$, $$\left(1\right)$$ where dKL represents the KL divergence and δ is the threshold of the distance between new and old policies. Wasserstein Distance: Given two probability distributions of policies π and π ′ on the discrete action space A = {a1, a2*, . . . , a*N }, the Wasserstein distance between the policies is defined as: $$d_{\mathrm{W}}(\pi^{\prime},\pi)=\operatorname*{inf}_{Q\in\Pi(\pi^{\prime},\pi)}\langle Q,D\rangle,$$ $$\left(2\right)$$ $$\left({\mathrm{3}}\right)$$ ⟨*Q, D*⟩, (2) where ⟨·, ·⟩ denotes the Frobenius inner product. The infimum is taken over all joint distributions Q with marginals π ′ and π, and D is the cost matrix with Dij = d(ai, aj ), where d(ai, aj ) is defined as the distance between actions ai and aj . Its largest entry in magnitude is denoted by ∥D∥∞. In implementation, our choice of distance d is task-dependent and is reported in Table 3 in Appendix A. Sinkhorn Divergence: Sinkhorn divergence (Cuturi, 2013) provides a smooth approximation of the Wasserstein distance by adding an entropic regularizer. The Sinkhorn divergence is defined as: $$d_{\mathrm{S}}(\pi^{\prime},\pi|\lambda)=\operatorname*{inf}_{Q\in\Pi(\pi^{\prime},\pi)}\left\{\langle Q,D\rangle-{\frac{1}{\lambda}}h(Q)\right\},$$ where h(Q) = −PN i=1 PN j=1 Qij log Qij represents the entropy term, and λ > 0 is a regularization parameter. The intuition of adding the entropic regularization is: since most elements of the optimal joint distribution Q will be 0 with a high probability, by trading the sparsity with entropy, a smoother and denser coupling between distributions can be achieved (Courty et al., 2014; 2016). Therefore, when the weight of the entropic regularization decreases (i.e., λ increases), the sparsity of the divergence increases, and the Sinkhorn divergence converges to the Wasserstein metric, i.e., limλ→∞ dS(π ′, π|λ) = dW(π ′, π). More critically, Sinkhorn divergence is useful to mitigate the computational burden of computing Wasserstein distance. In fact, the efficiency improvement that Sinkhorn divergence and the related algorithms brought paves the way to utilize Wasserstein-like metrics in many machine learning domains, including online learning (Cesa-Bianchi & Lugosi, 2006), model selection (Juditsky et al., 2008; Rigollet & Tsybakov, 2011), generative modeling (Genevay et al., 2018; Petzka et al., 2017; Patrini et al., 2019), dimensionality reduction (Huang et al., 2021; Lin et al., 2020; Wang et al., 2021b). ## 3 Wasserstein Policy Optimization Motivated by TRPO, here we consider a trust region based on the Wasserstein metric. Moreover, we lift the restrictive assumption that a policy has to follow a parametric distribution class by allowing all admissible policies. Then, the new policy π ′is found in each iteration to maximize the estimated expected value of the advantage function. Therefore, the *Wasserstein Policy Optimization* (WPO) framework is shown as follows: $$\begin{array}{r l}{{\operatorname*{max}_{\pi^{\prime}\in{\mathcal{D}}}}}&{{\mathbb{E}_{s\sim\rho_{v}^{\pi},a\sim\pi^{\prime}(\cdot|s)}[A^{\pi}(s,a)]}}\\ {{\mathrm{where}}}&{{{\mathcal{D}}=\{\pi^{\prime}|\mathbb{E}_{s\sim\rho_{v}^{\pi}}[d_{\mathrm{W}}(\pi^{\prime}(\cdot|s),\pi(\cdot|s))]\leq\delta\},}}\end{array}$$ $$\quad(4)$$ where the Wasserstein distance dW(·, ·) is defined in (2). In most practical cases, the reward r is bounded and correspondingly, the accumulated discounted reward Rt is bounded. So without loss of generality, we make the following assumption: Assumption 1. Assume Aπ(s, a) *is bounded, i.e.,* supa∈A,s∈S |Aπ(s, a)| ≤ Amax for some Amax > 0. With Wasserstein metric based trust region constraint, we are able to derive the closed-form of the policy update shown in Theorem 1. The main idea is to form the Lagrangian dual of the constrained optimization problem presented above, which is inspired by the way to obtain the extremal distribution in Wasserstein DRO literature, see e.g., Kuhn et al. (2019); Blanchet & Murthy (2019); Zhao & Guan (2018). The detailed proof can be found in Appendix B. Theorem 1. **(Closed-form policy update)** Let κ π s (*β, j*) = argmaxk=1...N {Aπ(s, ak) − βDkj}, *where* D denotes the cost matrix. If Assumption 1 holds, then an optimal solution to (4) is: $$\pi^{*}(a_{i}|s)=\sum\nolimits_{j=1}^{N}\pi(a_{j}|s)f_{s}^{*}(i,j),$$ $$\left(5\right)$$ (*i, j*), (5) where f ∗ s (*i, j*) = 1 if i = κ π s (β ∗, j) and f ∗ s (*i, j*) = 0 *otherwise, and* β ∗*is an optimal Lagrangian multiplier* corresponds to the following dual formulation: $$\min_{\beta\geq0}\ F(\beta)=\min_{\beta\geq0}\ \{\beta\delta+\mathbb{E}_{s\sim\rho_{c}^{\pi}}\sum\nolimits_{j=1}^{N}\pi(a_{j}|s)\max_{i=1,\ldots,N}(A^{\pi}(s,a_{i})-\beta D_{ij})\}.\tag{6}$$ Moreover, we have β ∗ ≤ β¯*, where* β¯ := maxs∈S,k,j=1*...N,k*̸=j (Dkj ) −1(Aπ(s, ak) − Aπ(*s, a*j )). Remark 1. *For ease of notation and simplicity, we assume the uniqueness of* κ π s (β, j) *in order to form the* simple expression of f ∗ s in Theorem 1. When it is not unique, a necessary condition for the optimality of π ∗in (5) is Pi∈Kπ s (β,j) f ∗ s (*i, j*) = 1*, and* f ∗ s (*i, j*) = 0 for i /∈ Kπ s (β, j)*, where* Kπ s (*β, j*) = argmaxk=1...N Aπ(*s, a*k) − βDkj *. The weight* f ∗ s (i, j) for i ∈ Kπ s (β, j) *could be determined through linear programming (see details in* (17) in Appendix B). The exact policy update for WPO in (5) requires computing the optimal Lagrangian multiplier β ∗ by solving the one-dimensional subproblem (6). A closed-form of β ∗is not easy to obtain in general, except for special cases of the distance d(*x, y*) or cost matrix D. In Appendix C, we provide the closed-form of β ∗for the case when d(*x, y*) = 0 if x = y and 1 otherwise. WPO Policy Update: Based on Theorem 1, we introduce the following WPO policy updating rule: $$\pi_{k+1}(a_{i}|s)=\mathbb{F}^{\text{WPO}}(\pi_{k}):=\sum_{j=1}^{N}\pi_{k}(a_{j}|s)f_{s}^{k}(i,j),$$ (WPO) where f k s (*i, j*) = 1 if i = κ πk s (βk, j) and 0 otherwise. Note that different from (5), we allow βk to be chosen arbitrarily and time dependently. We show that such policy update always leads to a monotonic improvement of the performance even when βk is not the optimal Lagrangian multiplier. In particular, we propose two strategies to update multiplier βk: (i) Approximation of optimal βk: To improve the convergence, we can approximately solve the optimal Lagrangian multiplier based on Sinkhorn divergence. More details in Section 4. (ii) Time-dependent βk: To improve the computational efficiency, we can simply treat βk as a timedependent parameter, e.g., we can set βk as a diminishing sequence. In this setting, (WPO) produces the solution to the following penalty version of problem (4) (with d = dW): $$\operatorname*{max}_{\pi_{k+1}}\,\mathbb{E}_{s\sim\rho_{v}^{\pi_{k}},a\sim\pi_{k+1}(\cdot|s)}[A^{\pi_{k}}(s,a)]-\beta_{k}\mathbb{E}_{s\sim\rho_{v}^{\pi_{k}}}[d(\pi_{k+1}(\cdot|s),\pi_{k}(\cdot|s))].$$ ## 4 Sinkhorn Policy Optimization In this section, we introduce Sinkhorn policy optimization (SPO) by constructing trust region with Sinkhorn divergence. In the following theorem, we derive the optimal policy update in each step when using Sinkhorn divergence based trust region. Detailed proofs are provided in Appendix D. $$\mathbf{\Phi}(7)$$ Theorem 2. *If Assumption 1 holds, then the optimal solution to (4) with Sinkhorn divergence is:* $$\pi_{\lambda}^{*}(a_{i}|s)=\sum_{j=1}^{N}\pi(a_{j}|s)f_{s,\lambda}^{*}(i,j),$$ $$(8)$$ s,λ(*i, j*), (8) where D *denotes the cost matrix,* f ∗ s,λ(*i, j*) = $$\begin{array}{c}{{\exp\left(\frac{\lambda}{\beta^{*}}A^{\pi}\left(s,a_{i}\right)-\lambda D_{i j}\right)}}\end{array}$$ PN k=1 exp ( λ β∗λ Aπ(s,ak)−λDkj ) and β ∗ λ is an optimal solution to the following dual formulation: min β≥0 Fλ(β) = min β≥0 nβδ − Es∼ρπ υ XN j=1 π(aj |s)(βλ + β λ ln(π(aj |s)) − β λ ln[XN i=1 exp ( λ β A π(s, ai) − λDij )]) j=1 β λ exp ( λ β A π(s, ai) − λDij ) · π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) o. (9) Es∼ρπ υ XN i=1 XN Moreover, we have β ∗ λ ≤ 2Amax δ. In contrast to the Wasserstein dual formulation (6), the objective in the Sinkhorn dual formulation (9) is differentiable in β and admits closed-form gradients (shown in Appendix F). With this gradient information, we can use gradient-based global optimization algorithms (Wales & Doye, 1998; Zhan et al., 2006; Leary, 2000) to find a global optimal solution β ∗ λ to (9). Next, we show that if the entropic regularization parameter λ is large enough, then the optimal solution β ∗ λ is a close approximation to the β ∗ of Wasserstein dual formulation. Proof is provided in Appendix G. Theorem 3. *Define* βUB = max{ 2A max δ, β¯}. We have: 1. $F_{\lambda}(\beta)$ _converges to_ $F(\beta)$ _uniformly on_ $[0,\beta_{UB}]$_:_ $\lim_{\lambda\to\infty}\sup_{0\leq\beta\leq\beta_{0}\atop\lambda}\left|F_{\lambda}(\beta)-F(\beta)\right|\leq\lim_{\lambda\to\infty}\frac{\theta_{0}u}{\lambda}N\ln N=0$_._ 2. $\lim\limits_{\lambda\to\infty}\mbox{\it argmin}_{0\leq\beta\leq\beta_{UB}}F_{\lambda}(\beta)\subseteq\mbox{\it argmin}_{0\leq\beta\leq\beta_{UB}}F(\beta)$ Although it is difficult to obtain the exact value of the optimal solution β ∗to the Wasserstein dual formulation (6), the above theorem suggests that we can approximate β ∗ via β ∗ λ by setting up a relative large λ. In practice, we can also adopt a smooth homotopy approach by setting an increasing sequence λk for each iteration and letting λk → ∞. SPO Policy Update: Based on Theorem 2, we introduce the following SPO policy updating rule: $$\pi_{k+1}(a_{i}|s)=\mathbb{F}^{\text{SPO}}(\pi_{k})=\sum_{j=1}^{N}\pi_{k}(a_{j}|s)f_{s,\lambda_{k}}^{k}(i,j).$$ (SPO) Here f k s,λk (*i, j*) = exp ( λk βk A πk (s,ai)−λkDij ) PN l=1 exp ( λk βk Aπk (s,al)−λkDlj ) , λk ≥ 0 and βk ≥ 0 are some control parameters. The parameter βk can be either computed via solving the one-dimensional subproblem (9) or simply set as a diminishing sequence. The proper setup of λk can effectively adjust the trade-off between convergence speed and final performance. More details are provided in the ablation study in Section 7. ## 5 Theoretical Analysis We first show that SPO policy update converges to WPO policy update as the regularization parameter increases (i.e., λ *−→ ∞*). The detailed proof is provided in Appendix H. Lemma 1. As λk −→ ∞*, SPO update converges to WPO update:* limλk−→∞ F SPO(πk) ∈ FWPO(πk). We then provide a theoretical justification that WPO policy update (and SPO with λ *−→ ∞*) are always guaranteed to improve the true performance J monotonically if we have access to the true advantage function. If the advantage function can only be evaluated inexactly with limited samples, then an extra estimation error (measured by the largest absolute entry *∥ · ∥*∞) will occur. Proof can be found in Appendix I. Theorem 4. **(Performance improvement)** For any initial state distribution υ and any βk ≥ 0*, if* ||Aˆπ − Aπ||∞ ≤ ϵ for some ϵ > 0, let Kˆπk s (βk, j) = argmaxi=1,...,N {Aˆπk (s, ai) − βkDij}, WPO policy update (and SPO with λ −→ ∞*) guarantee the following performance improvement bound when the inaccurate* advantage function Aˆπ*is used,* $$J(\pi_{k+1})\geq J(\pi_{k})+\beta_{k}\mathbb{E}_{s\sim p_{u}^{n+1}}\sum_{j=1}^{N}\pi_{k}(a_{j}|s)\sum_{i\in\mathbb{K}_{s}^{n+k}(\beta_{k,j})}f_{s}^{k}(i,j)D_{i j}-\frac{2\epsilon}{1-\gamma}.$$ $$\quad(10)$$ . (10) The value of ϵ, which quantifies the approximation error of the advantage function, is dependent on various factors such as the advantage estimation algorithm used and the number of samples (Schulman et al., 2016). It is worth noting that the improvement bound of NPG/TRPO (Cen et al., 2021)includes the same additional term − 2ϵ 1−γ , which indicates that our methods offer comparable theoretical performance guarantees to KL based updates. In the following, we show that with a decreasing schedule of the multiplier βk, both WPO and SPO policy updates have their values J(πk) converging to the optimal J ⋆ = maxπ J(π) on the tabular domain. To start, for k-th iteration, we consider (WPO) and (SPO) (with arbitrary λ > 0) whose updates πk+1 are optimal solutions to (7) with d being dW and dS respectively. Assumption 2. The state space and the action space are both finite, the reward function r *is non-negative,* and the initial distribution covers all state. Note that once state and action spaces are both finite, the reward can be assumed non-negative without loss of generality, as we can always add maxs,a |r(*s, a*)| to the reward function without changing the optimal policy and the order of the policies. Defining the optimal value function V ⋆(s) = maxπ E[Rt|st = s], we have the following theorem, whose proof is in Appendix J and is inspired by Bhandari & Russo (2021). Theorem 5. **(Global convergence)** Under Assumption 2, we have for any βk ≥ 0*, (WPO) satisfies that* $$\|V^{\star}-V^{\pi_{k+1}}\|_{\infty}\leq\gamma\|V^{\star}-V^{\pi_{k}}\|_{\infty}+\beta_{k}\|D\|_{\infty},\tag{1}$$ $$(11)$$ $$\left(12\right)$$ and (SPO) satisfies that $$\|V^{\star}-V^{\pi_{k+1}}\|_{\infty}\leq\gamma\|V^{\star}-V^{\pi_{k}}\|_{\infty}+2\frac{\beta_{k}}{1-\gamma}\left(\|D\|_{\infty}+2\frac{\log N}{\lambda}\right).$$ If limk→∞ βk = 0*, we further have* limk→∞ J(πk) = J ⋆. Remark 2. *Note the convergence is* geometric. If we keep βk *as a constant, then* 0 ≤ J ⋆ − J(π T) ≤ ∥V ⋆ − V π T∥∞ ≤ γ T ∥V ⋆ − V π0 ∥∞ + βB 1−γ , where B = ∥D∥∞ *for (WPO) and* B = 2 ∥D∥∞+2 log N λ 1−γfor (SPO). To achieve an ϵ *optimality gap, we only need to take* β = (1−γ)ϵ 2B*and let* T ≥ log(ϵ/2) γ. Remark 3. The study of global non-asymptotic convergence of nonconvex policy optimization algorithms has been an active research topic. Recent theoretical work has mostly centered on PG and natural policy gradient (NPG) Kakade (2001) - a close relative of TRPO; see e.g., Agarwal et al. (2021a); Cen et al. (2021); Lan (2022). To our best knowledge, a few work has discussed the global convergence of TRPO. Neu et al. (2017) and Geist et al. (2019) established the connection of TRPO to Mirror Descent, but did not provide any non-asymptotic rate; Shani et al. (2020) showed that adaptive TRPO with decaying stepsize achieved O(1/ √T) convergence rate for unregularized MDPs in the tabular setting (finite state and finite action). Our result seems to be the first non-asymptotic analysis of policy optimization based on Wasserstein and Sinkhorn divergence. It remains interesting to extend the convergence theory of TRPO/WPO/SPO to function approximation regime following recent advance Agarwal et al. (2021a). However, this is beyond the scope of our current work, as we focus on explicit closed-form update of WPO/SPO, which can be a viable alternative to TRPO in practice. ## 6 A Practical Algorithm In practice, the advantage value functions are often estimated from sampled trajectories. In this section, we provide a practical on-policy actor-critic algorithm, described in Algorithm 1, that combines WPO/SPO with advantage function estimation. At each iteration, the first step is to collect trajectories, which can be either complete or partial. If the trajectory is complete, the total return can be directly expressed as the accumulated discounted rewards Rt =PT −t−1 k=0 γ krt+k. If the trajectory is partial, it can be estimated by applying the multi-step temporal difference (TD) methods (De Asis et al., 2017): Rˆt:t+n =Pn−1 k=0 γ krt+k + γ nV (st+n). Then for the advantage estimation, we can use Monte Carlo advantage estimation, i.e., Aˆπk t = Rt − Vψk (st) or Generalized Advantage Estimation (GAE) (Schulman et al., 2016), which provides a more explicit control over the bias-variance tradeoff. In the value update step, we use a neural net to represent the value function, where ψ is the parameter that specifies the value net s → V (s). Then, we can update ψ by using gradient descent, which significantly reduces the computational burden of computing advantage directly. The computational complexity of the algorithm is discussed in Appendix K. Algorithm 1: On-policy WPO/SPO algorithm Input: number of iterations K, learning rate α Initialize policy π0 and value network Vψ0 with random parameter ψ0 for k = 0, 1, 2 *. . . K* do Collect trajectory set Dk on policy πk For each timestep t in each trajectory, compute total returns Gt and estimate advantages Aˆπk t Update value: ψk+1 ←− ψk − α∇ψk X(Gt − Vψk (st))2 Update policy: πk+1 ←− F(πk) via WPO/ SPO with Aˆπk t end ## 7 Experiments In this section, we evaluate the proposed WPO and SPO approaches presented in Algorithm 1. We compare the performance of our methods with benchmarks including TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), A2C (Mnih et al., 2016); and with BGPG (Pacchiano et al., 2020), WNPG (Moskovitz et al., 2021) for continuous control. The code of our WPO/SPO can be found here1. We adopt the implementations of TRPO, PPO and A2C from OpenAI Baselines (Dhariwal et al., 2017) for MuJuCo tasks and Stable Baselines (Hill et al., 2018) for other tasks. For BGPG, we adopt the same implementation2 as (Pacchiano et al., 2020). Our experiments include (1) ablation study that focuses on sensitivity analysis of WPO and SPO; (2) tabular domain tasks with discrete state and action including the Taxi, Chain, and Cliff Walking environments; (3) locomotion tasks with continuous state and discrete action including the CartPole, Acrobot environments; (4) comparison of KL and Wasserstein trust regions under tabular domain and locomotion tasks; and (5) extension to continuous control tasks with continuous action including HalfCheetah, Hopper, Walker, and Ant environments from MuJuCo. See Table 4 in Appendix A for a summary of performance. The setting of hyperparameters and network sizes of our algorithms and additional results are provided in Appendix A. ## 7.1 Ablation Study In this experiment, we first examine the sensitivity of WPO in terms of different strategies of βk. We test four settings of β value for WPO policy update: (1) Setting 1: Computing optimal β value for all policy update; (2) Setting 2: Computing optimal β value for first 20% of policy updates and decaying β for the 1https://github.com/efficientwpo/EfficientWPO 2https://github.com/behaviorguidedRL/BGRL remaining; (3) Setting 3: Computing optimal β value for first 20% of policy updates and fix β as its last updated value for the remaining; (4) Setting 4: Decaying β for all policy updates (e.g., βk = Θ(1/ log k)). In particular, Setting 2 is rooted in the observation that β ∗ decays slowly in the later stage of the experiments carried out in the paper. Small perturbations are added to the approximate values to avoid any stagnation in updating. Taxi task (Dietterich, 1998) from tabular domain is selected for this experiment. Table 1: Runtime for different β settings, average across 5 runs with random initialization | Runtime | Taxi (s) | CartPole (s) | |--------------------------------|----------------|----------------| | Setting 1 (optimal β) | 1224.3 ± 105.7 | 129.7 ± 15.2 | | Setting 2 (optimal-then-decay) | 648.4 ± 55.7 | 63.2 ± 8.3 | | Setting 3 (optimal-then-fix) | 630.2 ± 67.4 | 67.1 ± 9.7 | | Setting 4 (decaying β) | 522.7 ± 49.5 | 44.3 ± 6.2 | The performance comparisons and average run times are shown in Figure 4 and Table 1 respectively. Figure 4a and Table 1 clearly indicate a tradeoff between computation efficiency and accuracy in terms of different choices of β value. Setting 2 is the most effective way to balance the tradeoff between performance and run time. For the rest of experiments, we adopt this setting for both WPO and SPO (see Appendix A.2 for how Setting 2 is tuned for each task). Figure 4b shows that as λ increases, the convergence of SPO becomes slower but the final performance of SPO improves and becomes closer to that of WPO, which verifies the convergence property of Sinkhorn to Wasserstein distance shown in Theorem 3. Therefore, the choice of λ can effectively adjust the trade-off between convergence and final performance. Similar results are observed when using time-varying λ on Taxi, Chain and CartPole tasks, presented in Figure 9 in Appendix A. ![9_image_0.png](9_image_0.png) Figure 4: Episode rewards for Taxi with different β and λ settings, averaged across 5 runs with random initialization. The shaded area depicts the mean ± the standard deviation. ## 7.2 Tabular Domains We evaluate WPO and SPO on tabular domain tasks and test the exploration ability of the algorithms on several environments including Taxi, Chain, and Cliff Walking. We use a table of size *|S| × |A|* to represent the policy π(a|s). For the value function, we use a neural net to smoothly update the values. The performance of WPO and SPO are compared to the performance of TRPO, PPO and A2C under the same neural net structure. Results on Taxi, Cliff and Chain are reported in Figure 5. As shown in Figure 5, the performances of WPO, SPO and TRPO are manifestly better than A2C and PPO. Among the trust region based methods, WPO and SPO outperform TRPO in Taxi and Cliff Walking, whereas in Chain, the performances of these three methods are comparable. In all of the test cases, SPO converges faster than WPO but to a lower optimum. As further shown in Table 2, for the Taxi environment, WPO has a higher successful drop-off rate and a lower task completion time while the original TRPO reaches the time limit with a drop-off rate 0, suggesting that WPO finds a better policy than the original TRPO. In Figure 7, we also compare the performance of WPO under Wasserstein and KL divergences given different number of samples NA used to estimate the advantage function, and the result suggests that using Wasserstein metric is more robust than KL divergence under inaccurate advantage values. ![10_image_0.png](10_image_0.png) Figure 5: Episode rewards during training for tabular domain tasks, averaged across 5 runs with random initialization. The shaded area depicts the mean ± the standard deviation. Table 2: Trained agents performance on Taxi | Success (+20) | Fail (-10) | Steps (-1) | Return | | |-----------------|--------------|--------------|----------|---------| | WPQ | 0.753 | 0.232 | 70.891 | -58 151 | | TRPO | 0 | 0 | 200 | -200 | ## 7.3 Robotic Locomotion Tasks We now integrate deep neural network architecture into WPO and SPO and evaluate their performance on several locomotion tasks (with continuous state and discrete action), including CartPole (Barto et al., 1983) and Acrobot (Geramifard et al., 2015). We use two separate neural nets to represent the policy and the value. The policy neural net receives state s as an input and outputs the categorical distribution of m(a)s). A random subset of states Sk E S is sampled at each iteration to perform policy updates. Figure 6 shows that WPO and SPO outperform TRPO, PPO and A2C in most tasks in terms of final performance, except in Acrobot where PPO performs the best. In most cases, SPO converges faster but WPO has a better final performance. To train 105 timesteps in the discrete locomotion tasks, the training wall-clock time is 63.2 ±8.2s for WPO, 64.7 ±7.8s for SPO, 59.4 ±10.2s for TRPO and 69.9 ±10.5× for PPO. Therefore, WPO has a similar computational efficiency as TRPO and PPO. ![10_image_1.png](10_image_1.png) Figure 6: Episode rewards during the training process for the locomotion tasks, averaged across 5 runs with random initialization. The shaded area depicts the mean ± the standard deviation. ## 7.4 Comparison Of Wasserstein And Kl Trust Regions We show that compared with the KL divergence, the utilization of Wasserstein metric can cope with the inaccurate advantage estimations caused by the lack of samples. Let NA denote the number of samples used to estimate the advantage function. We evaluate the performance of WPO framework (4) with Wasserstein and KL constraints (as derived in Peng et al. (2019)). We consider the Chain task and different NA. As shown in Figure 7, when NA is 1000, KL performs slightly better than WPO. However, when NA decreases to 100 or 250, WPO outperforms KL. These results indicate that WPO is more robust than KL under inaccurate advantage values. This finding is consistent with our observations on the policy update formulations of Wasserstein and KL. For the Wasserstein update in (5), policy will be updated only when the advantage difference between two actions is significant, i.e., Aπ(s, aj ) − βDij ≥ Aπ(*s, a*i). However, for the KL update in Peng et al. (2019), policy will be updated as long as the current advantage function has a single non-zero value. Therefore, KL update is more sensitive; while Wasserstein update is more robust and more tolerant to advantage inaccuracies. Similar results are obtained for the locomotion tasks (Figure 10 in Appendix A). The runtime of Wasserstein and KL updates are reported in Table 5 in Appendix A. ![11_image_0.png](11_image_0.png) Figure 7: Episode rewards during training for the Chain task, where advantage value function is estimated under different number of samples, averaged across 5 runs with random initialization. The shaded area depicts the mean ± the standard deviation. ## 7.5 Extension To Continuous Control To extend to environments with continuous action, we use Implicit Quantile Networks (IQN) (Will Dabney & Munos, 2018) actor that can represent an arbitrary complex non-parametric policy. Let F −1 s(p) represent the quantile function associated with policy π(·|s). The IQN actor takes state s and probability p ∈ [0, 1] as input, and outputs the corresponding quantile value a = F −1 s(p). IQN actor can be trained to approach pre-defined target policy distributions through quantile regression (Will Dabney & Munos, 2018; Tessler et al., 2019). Define the action support for state s in k-th iteration as I πk (s) = {a ′: Aπk (*s, a*′) > mina∈I πk−1 (s) Aπk (*s, a*)}. Then, the WPO/SPO target policy distribution to guide IQN update in the k-th iteration is: $$P_{I^{\pi_{k}}(s)}(a^{\prime}|s)=\sum_{a\in I^{\pi_{k-1}}(s)}\pi_{k}(a|s)f_{s}(a^{\prime},a),$$ ′, a), (13) where for WPO update fs(a ′, a) = 1 if a ′ = argmaxa′∈I πk (s){Aπk (*s, a*′) − βkd(a ′, a)} and fs(a ′, a) = 0 otherwise; for SPO update, fs(a ′, a) = exp( λk βk A πk (s,a′)−λkd(a ′,a)) Pa′∈I πk (s) exp( λk βk Aπk (s,a′)−λkd(a′,a)) . In implementation, we sample a batch of states Sk ∈ S at each iteration to perform policy updates, and for each s ∈ Sk, we sample |Ak| actions to approximate the support I πk (s) and the target policy distribution PI πk (s)(·|s). We additionally compare WPO and SPO with BGPG (Pacchiano et al., 2020) and WNPG (Moskovitz et al., 2021) that are specially designed to address the continuous control with Wasserstein metric, for several MuJuCo tasks including HalfCheetah, Hopper, Walker, and Ant. Figure 8 shows that WPO and SPO have consistently better performances than other benchmarks. Similar results are obtained for the challenging Humanoid task, presented in Figure 11 in Appendix A. We also provide the runtime of each algorithm in Table 6 in Appendix A. $$(13)$$ ![12_image_0.png](12_image_0.png) Figure 8: Episode rewards during training for MuJuCo continuous control tasks, averaged across 10 runs with random initialization. The shaded area depicts the mean ± the standard deviation. ## 8 Conclusion In this paper, we present two policy optimization frameworks, WPO and SPO, which can exactly characterize the policy updates instead of confining their distributions to particular distribution class or requiring any approximation. Our methods outperform TRPO and PPO with better sample efficiency, faster convergence, and improved final performance. Our numerical results show that the Wasserstein metric is more robust to the ambiguity of advantage functions, compared with the KL divergence. Our strategy for adjusting 3 value for WPO can reduce the computational time and boost the convergence without noticeable performance degradation. SPO improves the convergence speed of WPO by properly choosing the weight of the entropic regularizer. Performance improvement and global convergence for WPO are discussed. For future work, it remains interesting to extend the idea to PPO and natural policy gradients, which penalize the policy update instead of imposing trust region constraint, and extend it to off-policy frameworks. ## References Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. *ArXiv Preprint*, pp. arXiv:1806.06920, 2018. Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. *J. Mach. Learn. Res.*, 22(98):1–76, 2021a. Rishabh Agarwal, Marlos C Machado, Pablo Samuel Castro, and Marc G Bellemare. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. In *Proceedings of the 9th International* Conference on Learning Representations, 2021b. Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. *IEEE Transactions on Systems, Man, and Cybernetics*, SMC-13(5): 834–846, 1983. Jalaj Bhandari and Daniel Russo. On the linear convergence of policy gradient methods for finite mdps. In International Conference on Artificial Intelligence and Statistics, pp. 2386–2394. PMLR, 2021. Jose Blanchet and Karthyek Murthy. Quantifying distributional model risk via optimal transport. Mathematics of Operations Research, 44(2):565–600, 2019. Pablo Samuel Castro. Scalable methods for computing state similarity in deterministic Markov decision processes. In *Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence*, volume 34, pp. 10069–10076, 2020. Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi. Fast global convergence of natural policy gradient methods with entropy regularization. *Operations Research*, 2021. Nicolo Cesa-Bianchi and Gábor Lugosi. *Prediction, learning, and games*. Cambridge University Press, 2006. Nicolas Courty, Rémi Flamary, and Devis Tuia. Domain adaptation with regularized optimal transport. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 274–289. Springer, 2014. Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 39(9):1853–1865, 2016. Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In *Advances in Neural* Information Processing Systems, volume 26, pp. 2292–2300, 2013. Robert Dadashi, Léonard Hussenot, Matthieu Geist, and Olivier Pietquin. Primal Wasserstein imitation learning. In *Proceedings of the 9th International Conference on Learning Representations*, 2021. Kristopher De Asis, J. Fernando Hernandez-Garcia, G. Zacharias Holland, and Richard S. Sutton. Multi-step reinforcement learning: A unifying algorithm. *ArXiv Preprint*, pp. arXiv:1703.01327, 2017. Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. OpenAI baselines. https://github.com/ openai/baselines, 2017. Thomas G. Dietterich. The MAXQ method for hierarchical reinforcement learning. In Proceedings of the 15th International Conference on Machine Learning, pp. 118–126, 1998. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In *Proceedings of the 33rd International Conference on Machine Learning*, pp. 1329–1338, 2016. Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. *Mathematical Programming*, 171 (1):115–166, 2018. Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite Markov decision processes. In Uncertainty in Artificial Intelligence, volume 4, pp. 162–169, 2004. Rui Gao and Anton J Kleywegt. Distributionally robust stochastic optimization with Wasserstein distance. arXiv preprint arXiv:1604.02199, 2016. Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized markov decision processes. In International Conference on Machine Learning, pp. 2160–2169. PMLR, 2019. Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608–1617, 2018. Alborz Geramifard, Christoph Dann, Robert H. Klein, William Dabney, and Jonathan P. How. RLPy: A value-function-based reinforcement learning framework for education and research. Journal of Machine Learning Research, 16(46):1573–1578, 2015. Gregory Z. Grudic, Vijay Kumar, and Lyle H. Ungar. Using policy gradient reinforcement learning on autonomous robot controllers. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 406–411, 2003. Shuncheng He, Yuhang Jiang, Hongchang Zhang, Jianzhun Shao, and Xiangyang Ji. Wasserstein Unsupervised Reinforcement Learning. In *Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence*, 2022. Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, volume 28, 2015. Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, Rene Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Stable baselines. https://github.com/hill-a/stable-baselines, 2018. Minhui Huang, Shiqian Ma, and Lifeng Lai. A Riemannian block coordinate descent method for computing the projection robust Wasserstein distance. In *Proceedings of the 38th International Conference on Machine* Learning, pp. 4446–4455, 2021. Anatoli Juditsky, Philippe Rigollet, and Alexandre B Tsybakov. Learning by mirror averaging. The Annals of Statistics, 36(5):2183–2206, 2008. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proceedings of the 19th International Conference on Machine Learning, pp. 267–274, 2002. Sham M Kakade. A natural policy gradient. In *Advances in Neural Information Processing Systems*, volume 14, 2001. Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, and Soroosh Shafieezadeh-Abadeh. Wasserstein distributionally robust optimization: Theory and applications in machine learning. In *Operations Research* & Management Science in the Age of Analytics, pp. 130–166. INFORMS, 2019. Guanghui Lan. Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. *Mathematical programming*, pp. 1–48, 2022. Robert Leary. Global optimization on funneling landscapes. *Journal of Global Optimization*, 18:367–383, 2000. Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. *Journal of Machine Learning Research*, 17(39):1–40, 2016. Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In *Proceedings of the 4th* International Conference on Learning Representations, 2016. Tianyi Lin, Chenyou Fan, Nhat Ho, Marco Cuturi, and Michael I Jordan. Projection robust Wasserstein distance and Riemannian optimization. *ArXiv Preprint*, pp. arXiv:2006.07458, 2020. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pp. 1928–1937, 2016. Ted Moskovitz, Michael Arbel, Ferenc Huszar, and Arthur Gretton. Efficient Wasserstein natural gradients for reinforcement learning. In *Proceedings of the 9th International Conference on Learning Representations*, 2021. Gergely Neu, Anders Jonsson, and Vicenç Gómez. A unified view of entropy-regularized markov decision processes. *arXiv preprint arXiv:1705.07798*, 2017. Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Krzysztof Choromanski, Anna Choromanska, and Michael Jordan. Learning to score behaviors for guided policy optimization. In *Proceedings of the 37th International* Conference on Machine Learning, pp. 7445–7454, 2020. Victor M. Panaretos and Yoav Zemel. Statistical aspects of wasserstein distances. *Annual Review of Statistics* and Its Application, 6(1):405–431, 2019. Giorgio Patrini, Rianne van den Berg, Patrick Forre, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, and Frank Nielsen. Sinkhorn autoencoders. In *Uncertainty in Artificial Intelligence*, pp. 733–743, 2019. Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. *ArXiv Preprint*, pp. arXiv:1910.00177, 2019. Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2219–2225, 2006. Henning Petzka, Asja Fischer, and Denis Lukovnicov. On the regularization of Wasserstein GANs. ArXiv Preprint, pp. arXiv:1709.08894, 2017. Pierre H. Richemond and Brendan Maginnis. On Wasserstein reinforcement learning and the Fokker-Planck equation. *ArXiv Preprint*, pp. arXiv:1712.07185, 2017. Philippe Rigollet and Alexandre Tsybakov. Exponential screening and optimal rates of sparse estimation. The Annals of Statistics, 39(2):731–771, 2011. R. Tyrrell Rockafellar and Roger J. B. Wets. *Variational Analysis*. Springer, 1998. Johannes O. Royset. Approximations and solution estimates in optimization. *Mathematical Programming*, 170:479–506, 2018. John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. In *Proceedings of the 32nd International Conference on Machine Learning*, pp. 1889–1897, 2015. John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the 4th International Conference on Learning Representations, 2016. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *ArXiv Preprint*, pp. arXiv:1707.06347, 2017. Lior Shani, Yonathan Efroni, and Shie Mannor. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, volume 34, pp. 5668–5675, 2020. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In *Proceedings of the 31st International Conference on Machine Learning*, pp. 387–395, 2014. Maurice Sion. On general minimax theorems. *Pacific Journal of Mathematics*, 8(1):171–176, 1958. Student. The probable error of a mean. *Biometrika*, 6(1):1–25, 1908. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In *Advances in Neural Information Processing Systems*, pp. 1057–1063, 1999. Chen Tessler, Guy Tennenholtz, and Shie Mannor. Distributional policy optimization: An alternative approach for continuous control. In *Advances in Neural Information Processing Systems*, pp. 1350–1360, 2019. David Wales and Jonathan Doye. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. *The Journal of Physical Chemistry A*, 101(28): 5111–5116, 1998. Jie Wang, Rui Gao, and Yao Xie. Sinkhorn distributionally robust optimization. *ArXiv Preprint*, pp. arXiv:2109.11926, 2021a. Jie Wang, Rui Gao, and Yao Xie. Two-sample test using projected Wasserstein distance. In *Proceedings of* IEEE International Symposium on Information Theory, volume 21, 2021b. Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. In *Proceedings of the 5th International* Conference on Learning Representations, 2017. David Silver Will Dabney, Georg Ostrovski and Rémi Munos. Implicit quantile networks for distributional reinforcement learning. In *International Conference on Machine Learning*, 2018. Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256, 1992. Yuhuai Wu, Elman Mansimov, Shun Liao, Roger Grosse, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. In *Advances in Neural Information* Processing Systems, pp. 5285–5294, 2017. Huang Xiao, Michael Herman, Joerg Wagner, Sebastian Ziesche, Jalal Etesami, and Thai Hong Linh. Wasserstein adversarial imitation learning. *ArXiv Preprint*, pp. arXiv:1906.08113, 2019. Lixin Zhan, Jeff Chen, and Wing-Ki Liu. Monte Carlo basin paving: An improved global optimization method. *Physical Review E*, 73:015701, 2006. Ruiyi Zhang, Changyou Chen, Chunyuan Li, and Lawrence Carin. Policy optimization as Wasserstein gradient flows. In *Proceedings of the 35th International Conference on Machine Learning*, pp. 5737–5746, 2018. Chaoyue Zhao and Yongpei Guan. Data-driven risk-averse stochastic optimization with Wasserstein metric. Operations Research Letters, 46(2):262–267, 2018. ## A Implementation Details And Additional Results The implementation of our WPO/SPO can be found in https://github.com/efficientwpo/EfficientWPO. We use the implementations of TRPO, PPO and A2C from OpenAI Baselines (Dhariwal et al., 2017) for MuJuCo tasks and Stable Baselines (Hill et al., 2018) for other tasks. For BGPG, we adopt the same implementation as Pacchinao et al., (2020) based on the released code https://github.com/behaviorguidedRL/BGRL. ## A.1 Visitation Frequencies Estimation: The unnormalized discounted visitation frequencies are needed to compute the global optimal β ∗. At the k-th iteration, the visitation frequencies ρ π k are estimated using samples of the trajectory set Dk. Specifically, we first initialize ρ π k (s) = 0, ∀s ∈ S. Then for each timestep t in each trajectory from Dk, we update ρ π k as ρ π k (st) ←− ρ π k (st) + γ t/|Dk|. ## A.2 Optimal-Then-Decay Beta Strategy: During the training of multiple tasks, including Taxi, Chain and CartPole, we observe a consistent trend in the behavior of the optimal β value during the policy updates: It initially fluctuates, then stabilizes and decays slowly towards 0. In the Taxi task, the optimal β stabilizes after approximately 18% of the total training iterations. If we decay β before this stabilization point (e.g, using optimal beta for only first 5% or 10% updates), we observe a drop in performance. However, we do not observe any notable performance difference when we decay β after this stabilization point (e.g., using optimal β for first 20% or 30% updates). We also observe that the optimal β decays at a very slow rate, and Θ(1/ log(k)) matches this trend best. If we employ a faster decaying function, such as Θ(1/k) or Θ(1/k2), we observe a drop in performance. Based on these findings, when implementing the optimal-then-decay β strategy on other tasks, we compute the optimal β for each policy update until we observe that its value stabilizes across updates. At this point, we stop calculating the optimal β and decay it using Θ(1/ log(k)) for the remaining policy updates. The specific iteration at which the optimal β value stabilizes varies across tasks, and we denote this point as kβ, which is reported in Table 3. ## A.3 Hyperparameters And Performance Summary | | Taxi-v3 | NChain-v0 | CartPole-v1 | Acrobot-v1 | MuJuCo tasks | |----------|------------------|------------------|------------------|------------------|----------------| | | | CliffWalking-v0 | | | | | γ | 0.9 | 0.9 | 0.95 | 0.95 | 0.99 | | lrπ | \ | \ | 10−2 | 5 × 10−3 | 10−4 | | lrvalue | 10−2 | 10−2 | 10−2 | 5 × 10−3 | 10−3 | | |Dk| | 60 (Taxi) | 1 (Chain) | 2 | 3 | partial | | | | 3 (CliffWalking) | | | | | π size | 2D array | 2D array | [64, 64] | [64, 64] | [400, 300] | | Q/v size | [10, 7, 5] | [10, 7, 5] | [64, 64] | [64, 64] | [400, 300] | | |Sk| | all states, |S| | all states, |S| | 128 | 128 | 64 | | |Ak| | all actions, |A| | all actions, |A| | all actions, |A| | all actions, |A| | 32 | Our main experimental results are reported in section 7. In addition, we provide the setting of hyperparameters and network sizes of our WPO/SPO algorithms in Table 3, and a summary of performance in Table 4. Table 3: Hyperparameters and network sizes 18 | Environment | WPO | SPO | TRPO | PPO | A2C | BGPG | WNPG | |-----------------|------------|------------|------------|--------------|--------------|------------|------------| | Taxi-v3 | −45 ± 27 | −87 ± 11 | −202 ± 3 | −381 ± 34 | −338 ± 30 | - | - | | NChain-v0 | 3549 ± 197 | 3432 ± 131 | 3522 ± 258 | 3506 ± 237 | 1606 ± 10 | - | - | | CliffWalking-v0 | −35 ± 15 | −25 ± 1 | −159 ± 94 | −3290 ± 2106 | −5587 ± 1942 | - | - | | CartPole-v1 | 388 ± 54 | 370 ± 30 | 297 ± 65 | 193 ± 45 | 267 ± 61 | - | - | | Acrobot-v1 | −162 ± 8 | −185 ± 15 | −248 ± 33 | −103 ± 5 | −379 ± 39 | - | - | | HalfCheetah-v2 | 2050 ± 108 | 1750 ± 172 | 1158 ± 35 | 1628 ± 136 | −645 ± 31 | 1697 ± 195 | 1832 ± 125 | | Hopper-v2 | 3208 ± 259 | 2834 ± 305 | 2035 ± 248 | 2321 ± 233 | 43 ± 21 | 1982 ± 218 | 2361 ± 272 | | Walker2d-v2 | 3739 ± 298 | 3489 ± 257 | 2535 ± 369 | 3290 ± 354 | 28 ± 1 | 2775 ± 301 | 3059 ± 209 | | Ant-v2 | 1863 ± 271 | 1780 ± 257 | 21 ± 10 | 1487 ± 206 | −39 ± 8 | 1622 ± 235 | 1587 ± 221 | | Humanoid-v2 | 965 ± 76 | 914 ± 93 | 725 ± 112 | 632 ± 73 | 107 ± 15 | 797 ± 85 | 820 ± 91 | | d(a, a′ ) | 0-1 distance 3 | 0-1 distance | 0-1 distance | 0-1 distance | L1 distance | |-------------------|------------------|----------------|----------------|----------------|---------------| | kβ | 250 | 100 (Chain) | 150 | 150 | 1000 | | 50 (CliffWalking) | | | | | | Table 4: Averaged rewards over last 10% episodes during the training process ## A.4 Additional Results For Ablation Studies ![18_Image_0.Png](18_Image_0.Png) Figure 9: Episode rewards during the training process for different β and λ settings, averaged across 5 runs with a random initialization. The shaded area depicts the mean ± the standard deviation. 3We note that specifying distance based on control relevance leads to higher performance in this test case: i.e., d = 1 to distinct actions from set A = { move north, move south, move west, move east}, d = 1 to distinct actions from set B = { pickup, dropoff}, and d = 4 to actions from different sets. 19 ## A.5 Additional Comparison Of Wasserstein And Kl Trust Regions ![19_image_0.png](19_image_0.png) Figure 10: Episode rewards during the training process for the locomotion tasks, averaged across 5 runs with a random initialization. The shaded area depicts the mean ± the standard deviation. | | WPO | SPO | KL | |-----------------------------|--------------|--------------|--------------| | Taxi-v3 (per 103 steps) | 71.0 ± 7.3 | 69.5 ± 8.7 | 74.3 ± 9.5 | | NChain-v0 (per 103 steps) | 58.4 ± 9.1 | 63.1 ± 7.4 | 59.9 ± 8.7 | | CartPole-v1 (per 106 steps) | 11.4 ± 1.8 | 10.2 ± 2.3 | 9.7 ± 1.9 | | Acrobot-v1 (per 105 steps) | 10.4 ± 1.9 | 9.7 ± 2.5 | 10.9 ± 2.3 | | Humanoid-v2 (per 105 steps) | 422.7 ± 65.4 | 409.1 ± 46.5 | 438.5 ± 61.2 | Table 5: Average runtime (seconds) of WPO, SPO and KL A.6 Additional Results for Large-scale Continuous Control ![19_image_1.png](19_image_1.png) Figure 11: Episode rewards during training for MuJuCo Humanoid task, averaged across 10 runs with random initialization. The shaded area depicts the mean ± the standard deviation. | Environment | WPO | SPO | TRPO | PPO | A2C | BGPG | WNPG | |----------------|----------|----------|----------|----------|----------|----------|----------| | HalfCheetah-v2 | 297 ± 31 | 289 ± 25 | 290 ± 28 | 292 ± 36 | 293 ± 27 | 306 ± 33 | 298 ± 22 | | Hopper-v2 | 233 ± 38 | 226 ± 42 | 242 ± 56 | 167 ± 36 | 254 ± 49 | 201 ± 32 | 197 ± 31 | | Walker2d-v2 | 289 ± 55 | 312 ± 61 | 253 ± 39 | 307 ± 52 | 259 ± 46 | 322 ± 62 | 214 ± 45 | | Ant-v2 | 307 ± 51 | 290 ± 57 | 296 ± 63 | 251 ± 47 | 291 ± 41 | 286 ± 63 | 269 ± 54 | | Humanoid-v2 | 423 ± 65 | 401 ± 47 | 446 ± 52 | 395 ± 57 | 230 ± 31 | 425 ± 58 | 398 ± 49 | Table 6: Average runtime (seconds per 105timesteps) for the MuJuCo continuous control tasks Theorem 1. **(Closed-form policy update)** Let κ π s (*β, j*) = argmaxk=1...N {Aπ(s, ak) − βDkj}, *where* D denotes the cost matrix. If Assumption 1 holds, then an optimal solution to (4) is: $$\pi^{*}(a_{i}|s)=\sum\nolimits_{j=1}^{N}\pi(a_{j}|s)f_{s}^{*}(i,j),$$ where f ∗ s (*i, j*) = 1 if i = κ π s (β ∗, j) and f ∗ s (*i, j*) = 0 *otherwise, and* β ∗is an optimal Lagrangian multiplier corresponds to the following dual formulation: $$\min_{\beta\geq0}\ F(\beta)=\min_{\beta\geq0}\ \{\beta\delta+\mathbb{E}_{s\sim\rho_{c}^{\pi}}\sum\nolimits_{j=1}^{N}\pi(a_{j}|s)\max_{i=1\ldots N}(A^{\pi}(s,a_{i})-\beta D_{ij})\}.\tag{6}$$ Moreover, we have β ∗ ≤ β¯*, where* β¯ := maxs∈S,k,j=1*...N,k*̸=j (Dkj ) −1(Aπ(s, ak) − Aπ(s, aj )). Proof of Theorem 1. First, we denote Qs as the joint distribution of π(·|s) and π ′(·|s) with PN i=1 Qs ij = π(aj |s) and PN j=1 Qs ij = π ′(ai|s). Also, let fs(*i, j*) represent the conditional distribution of π ′(ai|s) under π(aj |s). Then Qs ij = π(aj |s)fs(*i, j*), π ′(ai|s) = PN j=1 Qs ij =PN j=1 π(aj |s)fs(*i, j*). In addition: $$\begin{array}{r l}{d_{\mathrm{W}}(\pi^{\prime}(\cdot|s),\pi(\cdot|s))}&{{}=\operatorname*{min}_{Q_{i j}^{s}}\sum_{i=1}^{N}\sum_{j=1}^{N}D_{i j}Q_{i j}^{s}=\operatorname*{min}_{f_{s}(i,j)}\sum_{i=1}^{N}\sum_{j=1}^{N}D_{i j}\pi(a_{j}|s)f_{s}(i,j),\mathrm{~and~}}\\ {\mathbb{E}_{a\sim\pi^{\prime}(\cdot|s)}[A^{\pi}(s,a)]}&{{}=\sum_{i=1}^{N}A^{\pi}(s,a_{i})\pi^{\prime}(a_{i}|s)=\sum_{i=1}^{N}\sum_{j=1}^{N}A^{\pi}(s,a_{i})\pi(a_{j}|s)f_{s}(i,j).}\end{array}$$ $$\left(5\right)$$ Thus, the WPO problem in (4) can be reformulated as: $$\max_{f_{*}(i,j)\geq0}\mathbb{E}_{s\sim\rho_{\mathbb{P}^{*}}}\sum_{i=1}^{N}\sum_{j=1}^{N}A^{\pi}(s,a_{i})\pi(a_{j}|s)f_{s}(i,j)\tag{14a}$$ $$s.t.\mathbb{E}_{s\sim\rho_{\mathbb{P}^{*}}}\sum_{i=1}^{N}\sum_{j=1}^{N}D_{ij}\pi(a_{j}|s)f_{s}(i,j)\leq\delta,$$ (14b) $$\sum_{i=1}^{N}f_{s}(i,j)=1,\hskip28.452756pt\forall s\in\mathcal{S},j=1\ldots N.\tag{14c}$$ Note here that (14b) is equivalent to Es∼ρπ υ minfs(i,j)PN i=1 PN j=1 Dijπ(aj |s)fs(*i, j*) ≤ δ because if we have a feasible fs(*i, j*) to make (14b) hold, we must have Es∼ρπ υ minfs(i,j)PN i=1 PN j=1 Dijπ(aj |s)fs(*i, j*) ≤ δ. Since both the objective function and the constraint are linear in fs(*i, j*), (14) is a convex optimization problem. Also, Slater's condition holds for (14) as the feasible region has an interior point, which is fs(*i, i*) = 1 ∀i, and fs(*i, j*) = 0 ∀i ̸= j. Meanwhile, since Aπ(*s, a*) is bounded based on Assumption 1, the objective is bounded above. Therefore, strong duality holds for (14). At this point we can derive the dual problem of (14) as its equivalent reformulation: $$\min_{\beta\geq0,\zeta_{j}^{*}}\quad\beta\delta+\int_{s\in{\cal S}}\sum_{j=1}^{N}\zeta_{j}^{*}ds$$ _s.t._$A^{\pi}(s,a_{i})\pi(a_{j}|s)-\beta D_{ij}\pi(a_{j}|s)-\frac{\zeta_{j}^{*}}{\rho_{v}^{\pi}(s)}\leq0,\qquad\quad\forall s\in{\cal S},i,j=1\ldots N.$ $$\left(15\right)$$ $$(16)$$ We observe that with a fixed β, the optimal ζ s j will be achieved at: $$\zeta_{j}^{s*}(\beta)=\operatorname*{max}_{i=1,\ldots,N}\rho_{v}^{\pi}(s)\pi(a_{j}|s)(A^{\pi}(s,a_{i})-\beta D_{i j}).$$ π(*s, a*i) − βDij ). (16) Denote β ∗ as an optimal solution to (15) and f ∗ s (*i, j*) as an optimal solution to (14). Due to the complimentary slackness, the following equations hold: $$(A^{\pi}(s,a_{i})\pi(a_{j}|s)-\beta^{*}D_{i j}\pi(a_{j}|s)-\frac{\zeta_{j}^{s*}(\beta^{*})}{\rho_{v}^{\pi}(s)})f_{s}^{*}(i,j)=0,\qquad\quad\forall s,i,j.$$ In this case, f ∗ s (*i, j*) can have non-zero values only when Aπ(*s, a*i)π(aj |s) − β ∗Dijπ(aj |s) − ζ s∗ j (β ∗) ρπ υ (s) = 0, which means ζ s∗ j (β ∗) = ρ π υ (s)π(aj |s)(Aπ(*s, a*i) − β ∗Dij ). Given the expression of the optimal ζ s∗ jin (16), f ∗ s (*i, j*) can have non-zero values only when i ∈ Kπ s (β ∗, j), where Kπ s (*β, j*) = argmaxk=1...N Aπ(*s, a*k) − βDkj . When there exists a unique optimizer, i.e., |Kπ s (β ∗, j)| = 1, let κ π s (β ∗, j) denote the optimizer. Since PN i=1 f ∗ s (*i, j*) = 1 as indicated in (14c), the only optimal solution is: $$f_{s}^{*}(i,j)=\begin{cases}1&\quad\mathrm{if}\ i=\kappa_{s}^{\pi}(\beta^{*},j),\\ 0&\quad\mathrm{otherwise}.\end{cases}$$ When there exists multiple optimizers, i.e., |Kπ s (β ∗, j)| > 1, the optimal weights f ∗ s (*i, j*) for i ∈ Kπ s (β ∗, j) could be determined by solving the following linear programming: $$\max_{f_{s}^{*}(i,j)\geq0,i\in\mathcal{K}_{\tau}^{*}(\beta^{*},j)}\mathbb{E}_{s\sim\rho_{\varepsilon}^{*}}\sum_{j=1}^{N}\pi(a_{j}|s)\sum_{i\in\mathcal{K}_{\tau}^{*}(\beta^{*},j)}A^{\pi}(s,a_{i})f_{s}^{*}(i,j)$$ $$s.t.\mathbb{E}_{s\sim\rho_{\varepsilon}^{*}}\sum_{j=1}^{N}\pi(a_{j}|s)\sum_{i\in\mathcal{K}_{\tau}^{*}(\beta^{*},j)}D_{ij}f_{s}^{*}(i,j)\leq\delta,\tag{17}$$ $$\sum_{i\in\mathcal{K}_{\tau}^{*}(\beta^{*},j)}f_{s}^{*}(i,j)=1,\qquad\quad\forall s\in\mathcal{S},j=1\dots N.$$ And then the corresponding optimal solution is, π ∗(ai|s) = PN j=1 π(aj |s)f ∗ s (*i, j*). Last, by substituting ζ s∗ j (β) = ρ π υ (s)π(aj |s) maxi=1...N (Aπ(*s, a*i) − βDij ) into the dual problem (15), we can reformulate (15) into: $$\min_{\beta\geq0}\{\beta\delta+\int_{s\in\mathcal{S}}\sum_{j=1}^{N}\zeta_{j}^{\star\star}(\beta)ds\}=\min_{\beta\geq0}\{\beta\delta+\mathbb{E}_{s\sim p_{s}^{\star}}\sum_{j=1}^{N}\pi(a_{j}|s)\max_{i=1\ldots N}(A^{\star}(s,a_{i})-\beta D_{ij})\}.\tag{18}$$ $$(19\mathrm{a})$$ (19b) $$\left(19\mathrm{c}\right)$$. The optimal β can then be obtained by solving (18). We will further show that β ∗ ≤ β¯ := maxs∈S,k,j=1*...N,k*̸=j (Dkj ) −1(Aπ(s, ak) − Aπ(*s, a*j )). In the general case, i.e., β ≥ 0, (14a) is non-negative because: Es∼ρπ υ X N i=1 X N j=1 A π(s, ai)π(aj |s)f ∗ s (i, j) (19a) = Es∼ρπ υ X N j=1 π(aj |s) X N i=1 A π(s, ai)f ∗ s (i, j) (19b) = Es∼ρπ υ X N j=1 π(aj |s)X i∈Kπ s (β∗,j) f ∗ s (i, j)A π(s, ai) (19c) ≥ Es∼ρπ υ X N j=1 π(aj |s)X i∈Kπ s (β∗,j) f ∗ s (i, j)[A π(s, aj ) + β ∗Dij ] (19d) $$(19\mathrm{d})$$ $$=\mathbb{E}_{s\sim\rho_{\Sigma}^{\pi}}\sum_{j=1}^{N}\pi(a_{j}|s)A^{\pi}(s,a_{j})+\mathbb{E}_{s\sim\rho_{\Sigma}^{\pi}}\sum_{j=1}^{N}\pi(a_{j}|s)\sum_{i\in\mathcal{K}_{s}^{\pi}(\beta^{*},j)}f_{s}^{*}(i,j)\beta^{*}D_{ij}$$ $$=\mathbb{E}_{s\sim\rho_{\Sigma}^{\pi}}\sum_{j=1}^{N}\pi(a_{j}|s)\beta^{*}\sum_{i\in\mathcal{K}_{s}^{\pi}(\beta^{*},j)}f_{s}^{*}(i,j)D_{ij}$$ $$\geq0,$$ $$\left(19\text{e}\right)$$ $$\left(19\text{f}\right)$$ $$\left(19\text{g}\right)$$. $\square$ where (19d) holds since for i ∈ Kπ s (β ∗, j), Aπ(*s, a*i) − β ∗Dij ≥ Aπ(s, aj ) − β ∗Djj = Aπ(*s, a*j ). When β ∗ > maxs∈S,k,j=1*...N,k*̸=j{ A π(s,ak)−A π(s,aj ) Dkj}, we have that for all s ∈ S, κ π s (β ∗, j) = j. Thus, f ∗ s (*i, i*) = 1, ∀i and f ∗ s (*i, j*) = 0, ∀i ̸= j. The objective value (14a) will be 0 because Es∼ρπ υ PN i=1 PN j=1 Aπ(*s, a*i)π(aj |s)f ∗ s (*i, j*) = Es∼ρπ υ PN i=1 Aπ(*s, a*i)π(ai|s) = 0. The left hand side of (14b) equals to Es∼ρπ υ PN i=1 PN j=1 Dijπ(aj |s)f ∗ s (*i, j*) = Es∼ρπ υ PN i=1 Diiπ(ai|s) = 0. Thus, for any δ > 0, (14b) is always satisfied. Since the objective of the primal Wasserstein trust-region constrained problem in (6) constantly evaluates to 0 when β ∗ > maxs∈S,k,j=1*...N,k*̸=j{ A π(s,ak)−A π(s,aj ) Dkj}, and is non-negative when β ∗ ≤ maxs∈S,k,j=1*...N,k*̸=j{ A π(s,ak)−A π(s,aj ) Dkj}, we can use maxs∈S,k,j=1*...N,k*̸=j{ A π(s,ak)−A π(s,aj ) Dkj} as an upper bound for the optimal dual variable β ∗. ## C Optimal Beta For A Special Distance Proposition 1. Let ks = argmaxi=1,...,N Aπ(s, ai)*, we have:* (1). If the initial point β0 *is in* [maxs,j{Aπ(*s, a*ks ) − Aπ(s, aj )}, +∞), the local optimal β *solution is* maxs,j{Aπ(*s, a*ks ) − Aπ(*s, a*j )}. (2). If the initial point β0 *is in* [0, mins,j̸=ks {Aπ(*s, a*ks ) − Aπ(s, aj )}]*: if* δ −Rs∈S ρ π(s)(1 − π(aks |s))*ds <* 0, the local optimal β is mins,j̸=ks {Aπ(*s, a*ks ) − Aπ(s, aj )}; otherwise, the local optimal β solution is 0. (3). If the initial point β0 *is in* (mins,j̸=ks {Aπ(*s, a*ks ) − Aπ(*s, a*j )}, maxs,j{Aπ(*s, a*ks ) − Aπ(s, aj )}), we construct sets I 1 s and I 2 s as: for s ∈ S, j ∈ {1, 2 . . . N} : if β0 ≥ Aπ(*s, a*ks ) − Aπ(*s, a*j ) **then** Add j to I 1 s **else** Add j to I 2 s . Then, if δ − Es∼ρπPj∈I 2 s π(aj |s) < 0*, the local optimal* β is mins∈S,j∈I 2 s {Aπ(*s, a*ks ) − Aπ(s, aj )}*; otherwise, the local* optimal β is maxs∈S,j∈I 1 s {Aπ(*s, a*ks ) − Aπ(s, aj )}. Proof of Proposition 1. (1). When β ∈ [maxs,j{Aπ(*s, a*ks ) − Aπ(*s, a*j )}, +∞), we have Aπ(*s, a*j ) ≥ Aπ(*s, a*ks ) − β for all s ∈ S, j = 1 *. . . N*. Since Aπ(*s, a*ks ) − β ≥ Aπ(*s, a*k) − β for all k = 1 *. . . N*, we have Aπ(s, aj ) ≥ Aπ(*s, a*k) − β for all s ∈ S, j = 1 *. . . N*, k = 1 *. . . N*. Thus, j ∈ argmaxk=1...N {Aπ(*s, a*k) − βDkj}, for all s ∈ S, j = 1 *. . . N*. Therefore, (6) can be reformulated as: $$\operatorname*{min}_{\beta\geq0}\{\beta\delta+\mathbb{E}_{s\sim\rho_{v}^{\pi}}\sum_{j=1}^{N}\pi(a_{j}|s)A^{\pi}(s,a_{j})\}.$$ Since δ ≥ 0, we have the local optimal β = maxs,j{Aπ(*s, a*ks ) − Aπ(*s, a*j )}. (2). When β ∈ [0, mins,j̸=ks {Aπ(*s, a*ks ) − Aπ(*s, a*j )}], we have Aπ(s, aj ) ≤ Aπ(*s, a*ks ) − β for all s ∈ S, j = 1 *. . . N*, j ̸= ks. Thus ks ∈ argmaxk=1...N {Aπ(*s, a*k) − βDkj} for all s ∈ S, j = 1 *. . . N*. The inner part of (6) then is: βδ + Es∼ρπ υ {X N j=1,j̸=ks π(aj |s)(A π(s, aks ) − β) + π(aks |s)A π(s, aks )} = β(δ − Es∼ρπ υX N j=1,j̸=ks π(aj |s)) + Es∼ρπ υ X N j=1 π(aj |s)A π(s, aks ) = β(δ − Z s∈S ρ π υ (s)(1 − π(aks |s))ds) + Es∼ρπ υ X N j=1 π(aj |s)A π(s, aks ). If δ −Rs∈S ρ π υ (s)(1 − π(aks |s))*ds <* 0, we have the local optimal β = mins,j̸=ks {Aπ(*s, a*ks ) − Aπ(*s, a*j )}. If δ −Rs∈S ρ π υ (s)(1 − π(aks |s))ds ≥ 0, we have the local optimal β = 0. (3). For an initial point β0 in (mins,j̸=ks {Aπ(*s, a*ks )−Aπ(*s, a*j )}, maxs,j{Aπ(*s, a*ks )−Aπ(*s, a*j )}), we construct partitions I 1 s and I 2 s of the set {1, 2 *. . . N*} in the way described in Proposition 1 for all s ∈ S. Consider β in the neighborhood of β0, i.e., β ≥ Aπ(*s, a*ks ) − Aπ(*s, a*j ) for s ∈ S, j ∈ I 1 s and β ≤ Aπ(*s, a*ks ) − Aπ(*s, a*j ) for s ∈ S, j ∈ I 2 s . Then the inner part of (6) can be reformulated as: βδ + Es∼ρπ υ { X j∈I 1 s π(aj |s)A π(s, aj ) + X j∈I 2 s π(aj |s)(A π(s, aks ) − β)} = β(δ − Es∼ρπ υ X j∈I 2 s π(aj |s)) + Es∼ρπ υ { X j∈I 1 s π(aj |s)A π(s, aj ) + X j∈I 2 s π(aj |s)A π(s, aks )}. If δ − Es∼ρπ υ Pj∈I 2 s π(aj |s) < 0, we have the local optimal β = mins∈S,j∈I 2 s {Aπ(*s, a*ks ) − Aπ(*s, a*j )}. If δ − Es∼ρπ υ Pj∈I 2 s π(aj |s) ≥ 0, we have the local optimal β = maxs∈S,j∈I 1 s {Aπ(*s, a*ks ) − Aπ(*s, a*j )}. Theorem 2. *If Assumption 1 holds, then the optimal solution to (4) with Sinkhorn divergence is:* $$\pi_{\lambda}^{*}(a_{i}|s)=\sum_{j=1}^{N}\pi(a_{j}|s)f_{s,\lambda}^{*}(i,j),$$ $$\text{(8)}$$. where D *denotes the cost matrix,* f ∗ s,λ(*i, j*) = $\frac{\exp\left(\frac{\lambda}{\beta^{\pi}_{\lambda}}A^{\pi}\left(s,a_{i}\right)-\lambda D_{ij}\right)}{\lambda^{N}-\exp\left(\frac{\lambda}{\beta^{\pi}_{\lambda}}A\pi\left(s,a_{i}\right)-\lambda D_{ij}\right)}$ PN k=1 exp ( λ β∗λ Aπ(s,ak)−λDkj ) and β ∗ λ is an optimal solution to the following dual formulation: $$\min_{\beta\geq0}\ E_{\lambda}(\beta)=\min_{\beta\geq0}\left\{\beta\beta-\mathbb{E}_{n\sim\rho}\sum_{i=1}^{N}\pi(a_{i})\phi(\frac{\beta}{\lambda}+\frac{\beta}{\lambda}\ln(\pi(a_{i}|s))-\frac{\beta}{\lambda}\ln[\sum_{i=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij}\right)]\right)\tag{9}$$ $$\mathbb{E}_{n\sim\rho}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\beta}{\lambda}\frac{\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij}\right)\cdot\pi(a_{i}|s)}{\sum_{h=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{h})-\lambda D_{hij}\right)}\right\}.$$ Moreover, we have β ∗ λ ≤ 2Amax δ. Proof of Theorem 2. Invoking the definition of Sinkhorn divergence in (3), the trust region constrained problem with Sinkhorn divergence can be reformulated as: $$\max_{Q}\mathbb{E}_{s\sim\rho_{z}^{\pi}}[\sum\nolimits_{i=1}^{N}A^{\pi}(s,a_{i})\sum\nolimits_{j=1}^{N}Q_{ij}^{s}]$$ (20a) s.t. $$\mathbb{E}_{s\sim\rho_{z}^{\pi}}[\sum\nolimits_{i=1}^{N}\sum\nolimits_{j=1}^{N}D_{ij}Q_{ij}^{s}+\frac{1}{\lambda}Q_{ij}^{s}\log Q_{ij}^{s}]\leq\delta\tag{20b}$$ 24 $$\sum_{i=1}^{N}Q_{i j}^{s}=\pi(a_{j}|s),\ \ \ \ \forall j=1,\ldots,N,s\in{\mathcal{S}}.$$ Let β and ω represent the dual variables of constraints (20b) and (20c) respectively, then the Lagrangian duality of (20) can be derived as: max Qmin β≥0,ω L(*Q, β, ω*) = max Qmin β≥0,ω Es∼ρπ υ [ X N i=1 A π(*s, a*i) X N $$(20\mathrm{c})$$ j=1 Q s ij ] + Z s∈S X N j=1 ω s j ( X N i=1 Q s ij − π(aj |s))ds + β(δ − Es∼ρπ υ [ X N i=1 X N j=1 DijQ s ij + 1 λ Q s ij log Q s ij ]) (21a) = max Qmin β≥0,ω Es∼ρπ υ [ X N i=1 A π(*s, a*i) X N j=1 Q s ij ] + Z s∈S X N j=1 X N ω s j ρ π υ (s) Q s ijρ π υ (s)ds $$(22)$$ i=1 − Z s∈S X N j=1 ω s jπ(aj |s)ds + βδ − βEs∼ρπ υ [ X N i=1 X N j=1 DijQ s ij + 1 λ Q s ij log Q s ij ]) (21b) = max Qmin β≥0,ω βδ − Z s∈S X N j=1 ω s jπ(aj |s)ds + Es∼ρπ υ [ X N i=1 X N j=1 (A π(*s, a*i) − βDij +ω s j ρ π υ (s) )Q s ij − β λ Q s ij log Q s ij ] (21c) = min β≥0,ω max Qβδ − Z s∈S X N j=1 ω s jπ(aj |s)ds + Es∼ρπ υ [ X N i=1 X N j=1 (A π(*s, a*i) − βDij +ω s j ρ π υ (s) )Q s ij − β λ Q s ij log Q s ij ], (21d) where (21d) holds since the Lagrangian function L(*Q, β, ω*) is concave in Q and linear in β and ω, and we can exchange the max and the min following the Minimax theorem (Sion, 1958). Note that the inner max problem of (21d) is an unconstrained concave problem, and we can obtain the optimal Q by taking the derivatives and setting them to 0. That is, $${\frac{\partial L}{\partial Q_{i j}^{s}}}=A^{\pi}(s,a_{i})-\beta D_{i j}+{\frac{\omega_{j}^{s}}{\rho_{v}^{v}(s)}}-{\frac{\beta}{\lambda}}(\log Q_{i j}^{s}+1)=0,\ \ \forall i,j=1,\cdots,N,s\in{\mathcal{S}}.$$ Therefore, we have the optimal Qs∗ ij as: $$Q_{i j}^{**}=\exp{(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{i j})}\exp{(\frac{\lambda\omega_{j}^{s}}{\beta\rho_{v}^{\pi}(s)}-1)},\ \ \forall i,j=1,\cdots,N,s\in{\mathcal{S}}.$$ In addition, since PN i=1 Qs∗ ij = π(aj |s), we have the following hold: $$\exp\left(\frac{\lambda\omega_{j}^{s}}{\beta\rho_{v}^{\pi}(s)}-1\right)=\frac{\pi(a_{j}|s)}{\sum_{i=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{i j}\right)}.\tag{1}$$ By substituting the left hand side of (24) into (23), we can further reformulate the optimal Qs∗ ij as: $$Q_{i j}^{**}=\frac{\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{i j}\right)}{\sum_{k=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{k})-\lambda D_{k j}\right)}\pi(a_{j}|s),\ \ \forall i,j=1,\cdots,N,s\in\mathcal{S}.$$ $$(23)$$ $$(24)$$ $$(25)$$ To obtain the optimal dual variables, based on (24), we have the optimal ω ∗ as: $$\omega_{j}^{\star\star}=\rho_{v}^{\star}(s)\{\frac{\beta}{\lambda}+\frac{\beta}{\lambda}\ln(\pi(a_{j}|s))-\frac{\beta}{\lambda}\ln\{\sum_{i=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij})\right]\},\ \ \forall j=1,\cdots,N,s\in\mathcal{S}\tag{26}$$ By substituting (25) and (26) into (21d), we can obtain the optimal β ∗ via: $$\min_{\beta\geq0}\beta\delta-\mathbb{E}_{s\sim\rho_{\tau}^{\kappa}}\sum_{j=1}^{N}\pi(a_{j}|s)\{\frac{\beta}{\lambda}+\frac{\beta}{\lambda}\ln(\pi(a_{j}|s))-\frac{\beta}{\lambda}\ln[\sum_{i=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij}\right)]\}$$ $$+\mathbb{E}_{s\sim\rho_{\tau}^{\kappa}}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\beta}{\lambda}\frac{\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij}\right)\cdot\pi(a_{j}|s)}{\sum_{k=1}^{N}\exp\left(\frac{\lambda}{\beta}A^{\pi}(s,a_{k})-\lambda D_{kj}\right)}.$$ $$\square$$ The proof for the upper bound of sinkhorn optimal β can be found in Appendix E. ## E Upper Bound Of Sinkhorn Optimal Beta In this section, we will derive the upper bound of Sinkhorn optimal β. First, for a given β, the optimal Qs∗ ij (β) to the Lagrangian dual L(*Q, β, ω*) can be expressed in (25). With this, we will present the following two lemmas: Lemma 1. *The objective function (20a) with respect to* Qs∗ ij (β) decreases as the dual variable β *increases.* Lemma 2. If Assumption 1 holds, then for every δ > 0, Qs∗ ij ( 2A max δ) *is feasible to (20b) for any* λ. We provide proofs for Lemma 1 and Lemma 2 in Appendix E.1 and Appendix E.2 respectively. Given the above two lemmas, we are able to prove the following proposition on the upper bound of Sinkhorn optimal β: Proposition 2. If β ∗ λ is the optimal dual solution to the Sinkhorn dual formulation (9), then β ∗ λ ≤ 2A max δfor any λ. Proof of Proposition 2. We will prove it by contradiction. According to Lemma 2, Qs∗ ij ( 2A max δ) is feasible to (20b). Since β ∗ λ is the optimal dual solution, Qs∗ ij (β ∗ λ ) is optimal to (20). If β ∗ λ > 2A max δ, according to Lemma 1, the objective value in (20a) with respect to 2A max δis smaller than the objective value in (20a) with respect to β ∗ λ , which contradicts the fact that Qs∗ ij (β ∗ λ ) is the optimal solution to (20). ## E.1 Proof Of Lemma 1 Lemma 1. The objective function (20a) with respect to Qs∗ ij (β) decreases as the dual variable β increases. Proof of Lemma 1. Let Gλ(β) represent the objective function (20a). By substituting the optimal Qs∗ ij in (25) into (20a), we have: $$G_{\lambda}(\beta)=\mathbb{E}_{s\sim\rho\mathbb{E}}[\sum_{i=1}^{N}A^{\pi}(s,a_{i})\sum_{j=1}^{N}\frac{\exp{(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij})}}{\sum_{k=1}^{N}\exp{(\frac{\lambda}{\beta}A^{\pi}(s,a_{k})-\lambda D_{kj})}}\pi(a_{j}|s)]\tag{27a}$$ $$=\mathbb{E}_{s\sim\rho\mathbb{E}}[\sum_{j=1}^{N}\pi(a_{j}|s)\sum_{i=1}^{N}A^{\pi}(s,a_{i})\frac{\exp{(\frac{\lambda}{\beta}A^{\pi}(s,a_{i})-\lambda D_{ij})}}{\sum_{k=1}^{N}\exp{(\frac{\lambda}{\beta}A^{\pi}(s,a_{k})-\lambda D_{kj})}}].\tag{27b}$$ For any β2 > β1 > 0, we have: $$\begin{array}{l}{{G_{\lambda}(\beta_{1})-G_{\lambda}(\beta_{2})}}\\ {{\quad=\mathbb{E}_{s\sim\rho_{v}^{\pi}}\sum_{j=1}^{N}\pi(a_{j}|s)\sum_{i=1}^{N}A^{\pi}(s,a_{i})\{\frac{\exp\left(\frac{\lambda}{\beta_{1}}A^{\pi}(s,a_{i})-\lambda D_{i j}\right)}{\sum_{k=1}^{N}\exp\left(\frac{\lambda}{\beta_{1}}A^{\pi}(s,a_{k})-\lambda D_{k j}\right)}}}}\end{array}$$ − exp ( λ β2 Aπ(s, ai) − λDij ) PN k=1 exp ( λ β2 Aπ(s, ak) − λDkj ) } (28a) = Es∼ρπ υ X N j=1 π(aj |s) X N i=1 A π(s, a[i]){ exp ( λ β1 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp ( λ β1 Aπ(s, a[k]) − λD[k]j ) − exp ( λ β2 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp ( λ β2 Aπ(s, a[k]) − λD[k]j ) }, (28b) $$(28\mathrm{a})$$ $$(28\mathrm{b})$$ $$(29\mathrm{b})$$ where [i] denotes sorted indices that satisfy Aπ(*s, a*[1]) ≥ Aπ(*s, a*[2]) ≥ · · · ≥ Aπ(*s, a*[N]). Let fs(i) = exp ( λ β1 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp ( λ β1 Aπ(s, a[k]) − λD[k]j ) − exp ( λ β2 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp ( λ β2 Aπ(s, a[k]) − λD[k]j ) (29a) = exp (( λ β1 − λ β2 )Aπ(s, a[i])) exp ( λ β2 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp (( λ β1 − λ β2 )Aπ(s, a[k])) exp ( λ β2 Aπ(s, a[k]) − λD[k]j ) − exp ( λ β2 Aπ(s, a[i]) − λD[i]j ) PN k=1 exp ( λ β2 Aπ(s, a[k]) − λD[k]j ) . (29b) For notation brevity, we let ms(i) = exp (( λ β1 − λ β2 )Aπ(s, a[i])) > 0, ws(i) = exp ( λ β2 Aπ(s, a[i]) − λD[i]j ) > 0 $$(29\mathrm{a})$$ and qs(i) = P1 N k=1 ms(k)ws(k) − P1 N $$(29b)=\frac{m_{s}(i)w_{s}(i)}{\sum_{k=1}^{N}m_{s}(k)w_{s}(k)}-\frac{w_{s}(i)}{\sum_{k=1}^{N}w_{s}(k)}$$ $$=m_{s}(i)w_{s}(i)(\frac{1}{\sum_{k=1}^{N}m_{s}(k)w_{s}(k)}-\frac{1}{\sum_{k=1}^{N}m_{s}(i)w_{s}(k)})$$ $$=m_{s}(i)w_{s}(i)q_{s}(i).$$ ms(i)ws(k) . Then we have (30a) $$\left(\text{30}\text{b}\right)$$ $$\left(\text{30}\text{c}\right)$$. Since λ β1 − λ β2 > 0, ms(i) decreases as i increases. Thus, qs(i) decreases as i increases. Since ms(1) ≥ ms(k) and ms(N) ≤ ms(k) for all k = 1*, . . . , N*, we have qs(1) = P1 N k=1 ms(k)ws(k) − P1 N k=1 ms(1)ws(k) ≥ P1 N k=1 ms(k)ws(k) − P 1 N k=1 ms(k)ws(k) = 0, and qs(N) = P1 N k=1 ms(k)ws(k) −P1 N k=1 ms(N)ws(k) ≤ P1 N k=1 ms(k)ws(k) −P1 N k=1 ms(k)ws(k) = 0. Since qs(1) ≥ 0, qs(N) ≤ 0 and qs(i) decreases as i increases, there exists an index 1 ≤ ks ≤ N such that qs(i) ≥ 0 for i ≤ ks and qs(i) < 0 for *i > k*s. Since ms(i), ws(i) > 0, we have fs(i) ≥ 0 for i ≤ ks and fs(i) < 0 for *i > k*s. In addition, we have PN i=1 fs(i) = 0 directly follows from the definition. Thus, PN i=1 fs(i) = Pks i=1 |fs(i)| − PN i=ks+1 |fs(i)| = 0. Therefore, Gλ(β1) − Gλ(β2) = Es∼ρπ υ X N j=1 π(aj |s) X N i=1 A π(s, a[i])fs(i) (31a) = Es∼ρπ υ X N j=1 π(aj |s){ X ks i=1 A π(s, a[i])|fs(i)| − X N i=ks+1 A π(s, a[i])|fs(i)|} (31b) ≥ Es∼ρπ υ X N j=1 π(aj |s){ X ks i=1 A π(s, a[ks])|fs(i)| − X N i=ks+1 A π(s, a[ks+1])|fs(i)|} (31c) = Es∼ρπ υ X N j=1 π(aj |s){A π(s, a[ks]) X ks i=1 |fs(i)| − A π(s, a[ks+1])X N i=ks+1 |fs(i)|} (31d) = Es∼ρπ υ X N j=1 π(aj |s){A π(s, a[ks]) X ks i=1 |fs(i)| − A π(s, a[ks+1]) X ks i=1 |fs(i)|} (31e) (31a) $\\$ (31b) $\\$ (31c) $\\$ (31d) $\\$ (31e) $$=\mathbb{E}_{s\sim\rho_{c}^{\infty}}\sum_{j=1}^{N}\pi(a_{j}|s)(A^{\pi}(s,a_{[k_{s}]})-A^{\pi}(s,a_{[k_{s}+1]}))\sum_{i=1}^{k_{s}}|f_{s}(i)|$$ $$\geq0.$$ $$(31\mathrm{f})$$ $$(31\mathrm{g})$$ $$(32\mathrm{a})$$ where (31c) and (31g) hold since Aπ(*s, a*[i]) is non-increasing as i increases. Furthermore, at least one inequality of (31c) and (31g) will not hold at equality since PN i=1 π(ai|s)Aπ(*s, a*i) = 0, ∀s ∈ S, and for non-trivial cases, P r{Aπ(*s, a*) = 0, ∀s ∈ S, ∀a *∈ A}* < 1, which means P r{∃s1, s2 ∈ S, a1, a2 ∈ A*, s.t. A*π(s1, a1) ̸= Aπ(s2, a2)} > 0. Therefore, we have Gλ(β1) − Gλ(β2) > 0. ## E.2 Proof Of Lemma 2 Lemma 2. If Assumption 1 holds, then for every δ > 0, Qs∗ ij ( 2A max δ) *is feasible to (20b) for any* λ. Proof of Lemma 2. By substituting the optimal Qs∗ ij in (25) into (20b), we can reformulate the left hand side of (20b) as follows: Es∼ρπ υ [ X N i=1 X N j=1 DijQ s∗ ij + 1 λ Q s∗ ij log Q s∗ ij ] (32a) = Es∼ρπ υ { X N i=1 X N j=1 DijQ s∗ ij + 1 λ Q s∗ ij [ λ β A π(s, ai) − λDij + log π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) = Es∼ρπ υ { X N i=1 X N j=1 1 β Q s∗ ij A π(s, ai) + 1λ Q s∗ ij log π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) }. (32c) Now we prove that when β =2A max δ, Es∼ρπ υ {PN i=1 PN j=1 1 βQs∗ ij (β)Aπ(s, ai)} ≤ δ2and ]} (32b) Es∼ρπ υ { 1 λQs∗ ij (β) log P π(aj |s) N k=1 exp ( λ β Aπ(s,ak)−λDkj ) } ≤ δ2 hold. For the first part, we have: $\mathbb{E}_{s\sim\rho_{v}^{\pi}}\{\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{1}{\beta}Q_{ij}^{ss}A^{\pi}(s,a_{i})\}$ $$=\frac{1}{\beta}\mathbb{E}_{s\sim\rho_{v}^{\pi}}\{\sum_{i=1}^{N}[\sum_{j=1}^{N}Q_{ij}^{ss}]A^{\pi}(s,a_{i})\}$$ $$=\frac{1}{\beta}\mathbb{E}_{s\sim\rho_{v}^{\pi}}\{\sum_{i=1}^{N}\pi^{\prime}(a_{i}|s)A^{\pi}(s,a_{i})\}$$ $$\leq\frac{1}{\beta}\mathbb{E}_{s\sim\rho_{v}^{\pi}}\{\sum_{i=1}^{N}\pi^{\prime}(a_{i}|s)|A^{\pi}(s,a_{i})|\}$$ $$\leq\frac{A^{max}}{\beta}=\frac{\delta}{2}.$$ $\delta$ $$\left(33\text{a}\right)$$ $$\left(33\text{b}\right)$$ $$\left(33\text{c}\right)$$ $$\left(33\text{d}\right)$$ $$\left(33\text{e}\right)$$ . (34a) $$\left(\text{34}\text{b}\right)$$ $$\left(\text{34}\text{c}\right)$$. For the second part, the followings hold: Es∼ρπ υ { X N i=1 X N j=1 1 λ Q s∗ ij log π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) } (34a) = Es∼ρπ υ { X N j=1 1 λ ( X N i=1 Q s∗ ij ) log π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) } (34b) = 1 λ Es∼ρπ υ { X N j=1 π(aj |s) log π(aj |s) PN k=1 exp ( λ β Aπ(s, ak) − λDkj ) } (34c) ≤ 1 λ Es∼ρπ υ { X N j=1 π(aj |s) log π(aj |s) exp ( λ β Aπ(s, aj )) } (34d) ≤ 1 λ Es∼ρπ υ { X N j=1 π(aj |s) log 1 exp ( λ β Aπ(s, aj )) } (34e) = 1 λ Es∼ρπ υ { X N j=1 π(aj |s)(− λ β A π(s, aj ))} (34f) ≤ 1 β Es∼ρπ υ { X N j=1 π(aj |s)|A π(s, aj )|} (34g) ≤ Amax β= δ 2 . (34h) $$(34\mathrm{d})$$ $$(34\mathrm{e})$$ $$\left(34\text{f}\right)$$ $$\left(34\text{g}\right)$$ $$\left(34\text{h}\right)$$ . $\square$ Therefore, Qs∗ ij ( 2A max δ) is feasible to (20b) for any λ. ## F Gradient Of The Objective In The Sinkhorn Dual Formulation The closed-form gradient of the objective in the Sinkhorn dual formulation (9) is as follows: δ − Es∼ρπ υ XN j=1 π(aj |s) n1 λ + 1 λ ln(π(aj |s)) − 1 λ ln[XN i=1 exp ( λ β A π(*s, a*i) − λDij )] − β λ ·1 PN i=1 exp ( λ β Aπ(*s, a*i) − λDij ) ×XN i=1 [exp ( λ β A π(s, ai) − λDij ) × −λAπ(*s, a*i)β −2] o + Es∼ρπ υ XN i=1 XN j=1 nπ(aj |s) λ exp ( λ β A π(*s, a*i) − λDij ) PN k=1 exp ( λ β Aπ(*s, a*k) − λDkj ) + βπ(aj |s) λ· exp ( λ β A π(s, ai) − λDij ) × −λAπ(*s, a*i)β −2 ×PN k=1 exp ( λ β A π(*s, a*k) − λDkj ) (PN k=1 exp ( λ β Aπ(*s, a*k) − λDkj ))2 − βπ(aj |s) λ· exp ( λ β A π(*s, a*i) − λDij ) ×PN k=1[exp ( λ β A π(s, ak) − λDkj ) × −λAπ(*s, a*k)β −2] (PN k=1 exp ( λ β Aπ(*s, a*k) − λDkj ))2 o. Given the upper bound of Wassertein optimal β in Theorem 1 and the upper bound of Sinkhorn optimal β in Proposition 2, we are able to derive the following theorem: Theorem 3. *Define* βUB = max{ 2A max δ, β¯}*. We have:* 1. Fλ(β) converges to F(β) uniformly on [0, βUB]:lim _only on $[0,\beta_{UB}]$: $\lim\limits_{\lambda\rightarrow\infty}\sup\limits_{0\leq\beta\leq\beta_{UB}}\left|F_{\lambda}(\beta)-F(\beta)\right|\leq\lim\limits_{\lambda\rightarrow\infty}\frac{\beta_{UB}}{\lambda}\,N\ln N=0$._ $$2.\;\operatorname*{lim}_{\lambda\rightarrow\infty}\;a r g m i n_{0\leq\beta\leq\beta_{U B}}F_{\lambda}(\beta)\subseteq a r g m i n_{0\leq\beta\leq\beta_{U B}}F(\beta).$$ Proof of Theorem 3. To show that Fλ(β) converges to F(β) uniformly on [0, βUB], it is equivalent to show that limλ−→∞ sup0≤β≤βUB Fλ(β)−F(β) = 0. Let ϵ π s (*β, i, j*) = maxk=1...N (Aπ(s, ak)−βDkj )−[Aπ(*s, a*i)−βDij ], and ϵ π s (*β, i, j*) ≥ 0. First, we have $$\left|F_{\lambda}(\beta)-F(\beta)\right|$$ = βδ − Es∼ρπ υ X N j=1 π(aj |s){ β λ + β λ ln(π(aj |s)) − β λ ln[X N i=1 exp (λ β A π(*s, a*i) − λDij )]} + Es∼ρπ υ X N i=1 X N j=1 β λ exp ( λ β Aπ(*s, a*i) − λDij ) · π(aj |s) PN k=1 exp ( λ β Aπ(*s, a*k) − λDkj ) − βδ − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) (35a) ≤ β λ Es∼ρπ υ X N j=1 π(aj |s) + β λ Es∼ρπ υ X N j=1 π(aj |s) ln(π(aj |s)) + Es∼ρπ υ X N i=1 X N j=1 β λ exp ( λ β Aπ(*s, a*i) − λDij ) · π(aj |s) PN k=1 exp ( λ β Aπ(*s, a*k) − λDkj ) + Es∼ρπ υ X N j=1 π(aj |s) β λ ln[X N i=1 exp (λ β A π(*s, a*i) − λDij )] − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) (35b) ≤ 2 β λ Es∼ρπ υ X N j=1 π(aj |s) + β λ Es∼ρπ υ X N j=1 π(aj |s) ln(π(aj |s)) + Es∼ρπ υ X N j=1 π(aj |s) β λ ln[X N i=1 exp (λ β A π(*s, a*i) − λDij )] − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) . (35c) In addition, Es∼ρπ υ X N j=1 π(aj |s) β λ ln[X N i=1 exp (λ β A π(*s, a*i) − λDij )] − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) (36a) = Es∼ρπ υ X N j=1 π(aj |s) β λ ln[exp (λ βmax k=1*...N* (A π(*s, a*k) − βDkj ))X N $$(36\mathrm{a})$$ i=1 exp (− λ β ϵ π s (*β, i, j*))] − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) (36b) = Es∼ρπ υ X N j=1 π(aj |s) β λ {ln[exp (λ βmax k=1*...N* (A π(*s, a*k) − βDkj ))] + ln[X N $$(36\mathrm{b})$$ $$(36\mathrm{{tc}})$$ $$(36\mathrm{d})$$ i=1 exp (− λ β ϵ π s (*β, i, j*))]} − Es∼ρπ υ X N j=1 π(aj |s) max i=1*...N* (A π(*s, a*i) − βDij ) (36c) = Es∼ρπ υ X N j=1 π(aj |s) β λ ln[X N i=1 exp (− λ β ϵ π s (*β, i, j*))] . (36d) Therefore, lim λ−→∞sup 0≤β≤βUB Fλ(β) − F(β) (37a) ≤ lim λ−→∞ 2βUB λ Es∼ρπ υ X N j=1 π(aj |s) + lim λ−→∞ βUB λ Es∼ρπ υ X N j=1 π(aj |s) ln(π(aj |s)) + lim λ−→∞sup 0≤β≤βUB β λ Es∼ρπ υ X N j=1 π(aj |s) ln[X N i=1 exp (− λ β ϵ π s (β, i, j))] (37b) = lim λ−→∞sup 0≤β≤βUB β λ Es∼ρπ υ X N j=1 π(aj |s) ln[X N i=1 exp (− λ β ϵ π s (β, i, j))] . (37c) $$\left(37\text{a}\right)$$ $$\left(37\text{b}\right)$$ $$\left(37\text{c}\right)$$ $$\left(37\text{c}\right)$$ ... (38) $\binom{39}{2}$ (39) . In addition, ∀β ∈ [0, βUB] and ∀λ, ϵ π s (*β, i, j*) is bounded since $$\left|e_{s}^{\pi}(\beta,i,j)\right|=\left|\max_{k=1,\ldots,N}(A^{\pi}(s,a_{k})-\beta D_{kj})-[A^{\pi}(s,a_{i})-\beta D_{ij}]\right|$$ $$\leq2\max_{s,a}A^{\pi}(s,a)+\beta_{\text{UB}}\max_{i,j}D_{ij}$$ $$\leq2A^{\max}+\beta_{\text{UB}}\max_{i,j}D_{ij}<\infty.$$ Then, Es∼ρπ υ PN j=1 π(aj |s)ln[PN i=1 exp (− λ β ϵ π s (*β, i, j*))] is bounded. Therefore in (37c), the optimal β can be achieved. Let β λ = argmax0≤β≤βUB β λ Es∼ρπ υ PN j=1 π(aj |s) ln[PN i=1 exp (− λ β ϵ π s (*β, i, j*))] , and then we have: $$\lim_{\lambda\rightarrow\infty}\sup_{0\leq\beta\leq\beta\leq n}\frac{\beta}{\lambda}\Big{|}\mathbb{E}_{s\sim\rho_{\mathbb{E}}^{\kappa}}\sum_{j=1}^{N}\pi(a_{j}|s)\ln[\sum_{i=1}^{N}\exp{(-\frac{\lambda}{\beta}\epsilon_{s}^{\pi}(\beta,i,j))}]\Big{|}$$ $$=\lim_{\lambda\rightarrow\infty}\frac{\beta^{\lambda}}{\lambda}\Big{|}\mathbb{E}_{s\sim\rho_{\mathbb{E}}^{\kappa}}\sum_{j=1}^{N}\pi(a_{j}|s)\ln[\sum_{i=1}^{N}\exp{(-\frac{\lambda}{\beta^{\lambda}}\epsilon_{s}^{\pi}(\beta^{\lambda},i,j))}]\Big{|}.$$ (40a) $$\left(40\mathrm{b}\right)$$ $$\left(40\mathrm{b}\right)$$ ... Let Kπ s (*β, j*) = argmaxk=1...N Aπ(*s, a*k) − βDkj . Define σs(j) = min0≤β≤βUB mini=1*...N,i /*∈Kπ s (β,j) ϵ π s (*β, i, j*). Then since ϵ π s (*β, i, j*) > 0 for i /∈ Kπ s (*β, j*) based on its definition, we have σs(j) > 0. On one hand, we have lim λ−→∞ ln[X N i=1 exp (− λ β λ ϵ π s (β λ, i, j))] (41a) = lim λ−→∞ ln[ X N i=1|i /∈Kπ s (βλ,j) exp (− λ β λ ϵ π s (β λ, i, j)) + X N i=1|i∈Kπ s (βλ,j) exp (− λ β λ ϵ π s (β λ, i, j))] (41b) ≤ lim λ−→∞ ln[ X N i=1|i /∈Kπ s (βλ,j) exp (−λ βUB σs(j)) + X N i=1|i∈Kπ s (βλ,j) exp (0)] (41c) = lim λ−→∞ ln[ X N i=1|i /∈Kπ s (βλ,j) exp (−λ βUB σs(j)) + |K π s (βλ, j)|] (41d) = lim λ−→∞ ln[|K π s (βλ, j)|]. (41e) $$(41\mathrm{a})$$ $$\left({41{\rm{b}}}\right)$$ $$\left({41{\rm{c}}}\right)$$ $$\left({41{\rm{d}}}\right)$$ $$\left({41{\rm{e}}}\right)$$ ... On the other hand, we have $$\operatorname*{lim}_{\lambda\rightarrow\infty}\ln[\sum_{i=1}^{N}\exp\left(-\frac{\lambda}{\beta^{\lambda}}\epsilon_{s}^{\pi}(\beta^{\lambda},i,j)\right)]$$ λ*, i, j*))] (42a) $$(42\mathrm{a})$$ = lim λ−→∞ ln[ X N i=1|i /∈Kπ s (βλ,j) exp (− λ β λ ϵ π s (β λ, i, j)) + X N i=1|i∈Kπ s (βλ,j) exp (− λ β λ ϵ π s (β λ, i, j))] (42b) ≥ lim λ−→∞ ln[ X N i=1|i∈Kπ s (βλ,j) exp (− λ β λ ϵ π s (β λ, i, j))] (42c) = lim λ−→∞ ln[ X N i=1|i∈Kπ s (βλ,j) exp (0)] (42d) = lim λ−→∞ ln[|K π s (βλ, j)|]. (42e) (42b) $$\left(42\mathrm{c}\right)$$ ________ (42c) ________ $$(42\mathrm{d})$$ $$(42\mathrm{e})$$ Therefore, limλ−→∞ ln[PN i=1 exp (− λ βλ ϵ π s (β λ*, i, j*))] = limλ−→∞ ln[|Kπ s (βλ, j)|]. Based on that, we have lim λ−→∞ β λ λ Es∼ρπ υ X N j=1 π(aj |s) ln[X N i=1 exp (− λ β λ ϵ π s (β λ, i, j))] (43a) ≤ lim λ−→∞ β λ λ X N j=1 ln[X N i=1 exp (− λ β λ ϵ π s (β λ, i, j))] (43b) ≤ lim λ−→∞ β λ λ X N j=1 ln[X N i=1 exp (− λ β λ ϵ π s (β λ, i, j))] (43c) = lim λ−→∞ β λ λ X N j=1 ln[|K π s (βλ, j)|] (43d) ≤ lim λ−→∞ βUB λN ln N = 0, (43e) $$\left(43\text{a}\right)$$ $$\left(43\text{b}\right)$$ $$\left(43\text{c}\right)$$ $$\left(43\text{d}\right)$$ $$\left(43\text{d}\right)$$ ... $$(43\mathrm{e})$$ which means limλ−→∞ sup0≤β≤βUB Fλ(β) − F(β) ≤ 0. Furthermore, since limλ−→∞ sup0≤β≤βUB |Fλ(β) − F(β)| ≥ 0 holds naturally, we have limλ−→∞ sup0≤β≤βUB |Fλ(β) − F(β)| = 0. Therefore, Fλ(β) converges to F(β) uniformly on [0, βUB], which also indicates Fλ(β) epi-converges to F(β) on [0, βUB] (Royset, 2018; Rockafellar & Wets, 1998). By properties of epi-convergence, we have that lim λ−→∞ argmin0≤β≤βUB Fλ(β) ⊆ argmin0≤β≤βUB F(β) (Rockafellar & Wets, 1998). ## H Proof Of Lemma 1 Lemma 1. As λk −→ ∞*, SPO update converges to WPO update:* limλk−→∞ F SPO(πk) ∈ FWPO(πk). Proof of Lemma 1. Let ξ k s (*i, j*) = λ βk {maxl=1,...,N (Aˆπk (s, al) − βkDlj ) − [Aˆπk (*s, a*i) − βkDij ]}. The SPO update with λ *−→ ∞* equals to: exp ( λ βk Aˆπk (*s, a*i) − λDij ) PN l=1 exp ( λ βk Aˆπk (*s, a*l) − λDlj ) πk(aj |s) (44a) πk+1(ai|s) = lim λ−→∞ F SPO(πk) = lim λ−→∞ X N j=1 exp (Aˆπk (*s, a*kˆ πk s (βk,j) ) − βkDkˆ πk s (βk,j)j ) · exp (−ξ k s (*i, j*)) = lim λ−→∞ X N exp (Aˆπt (*s, a*kˆ πk s (βk,j) ) − βkDkˆ πk s (βk,j)j ) ·PN l=1 exp(−ξ k s (*l, j*)) πk(aj |s) (44b) j=1 = lim λ−→∞ X N exp (−ξ k s (*i, j*)) PN l=1 exp(−ξ k s (*l, j*)) πk(aj |s) (44c) j=1 = lim λ−→∞ X N exp (−ξ k s (*i, j*)) · πk(aj |s) Pl∈Kˆ πk s (βk,j) exp(−ξ k s (*l, j*)) + Pl /∈Kˆ πk s (βk,j) exp(−ξ k s (*l, j*)) (44d) j=1 $$\begin{array}{l l}{{}}&{{=\sum_{j=1}^{N}\frac{\operatorname*{lim}_{\lambda\to\infty}\exp\left(-\xi_{*}^{k}(i,j)\right)\cdot\pi_{k}(a_{j}|s)}{\sum_{l\in\xi_{*}^{***}(\beta_{k,j})}\operatorname*{lim}_{\lambda\to\infty}\exp(-\xi_{*}^{k}(l,j))+\sum_{l\not=\xi_{*}^{**}(\beta_{k,j})}\operatorname*{lim}_{\lambda\to\infty}\exp(-\xi_{*}^{k}(l,j))}}}\\ {{}}&{{=\sum_{j=1}^{N}\frac{I_{\xi_{*}^{**}(\beta_{k,j})}(i)}{|\vec{k}_{\xi_{*}^{*}}(\beta_{k,j})|}\pi_{k}(a_{j}|s),}}\end{array}$$ (44e) $\begin{array}{l}~~~~~~~~~~~~~~\end{array}$ (44f) . (*l, j*)) (44e) where I denotes the indicator function; (44f) holds because as λ *−→ ∞*, ξ k s (*i, j*) = ∞ for i /∈ Kˆπk s (βk, j) and 0 otherwise, thus limλ−→∞ exp (−ξ k s (*i, j*)) = 0 for i /∈ Kˆπk s (βk, j) and 1 otherwise. Let f k s (*i, j*) = 1 |Kˆ πk s (βk,j)| if i ∈ Kˆπk s (βk, j), and f k s (*i, j*) = 0 otherwise. Therefore, SPO update with λ *−→ ∞* equals to the following WPO update, FWPO(πk) = PN j=1 πk(aj |s)f k s (*i, j*). Theorem 4. **(Performance improvement)** For any initial state distribution υ and any βk ≥ 0*, if* ||Aˆπ − Aπ||∞ ≤ ϵ for some ϵ > 0*, let* Kˆπk s (βk, j) = argmaxi=1,...,N {Aˆπk (s, ai) − βkDij}, WPO policy update (and SPO with λ −→ ∞) guarantee the following performance improvement bound when the inaccurate advantage function Aˆπ*is used,* $$J(\pi_{k+1})\geq J(\pi_{k})+\beta_{k}\mathbb{E}_{s\sim\rho_{k}^{n_{k+1}}}\sum_{j=1}^{N}\pi_{k}(a_{j}|s)\sum_{i\in\mathcal{E}_{s}^{\epsilon,\epsilon}(\beta_{k,j})}f_{s}^{k}(i,j)D_{ij}-\frac{2\epsilon}{1-\gamma}.\tag{10}$$ Proof of Theorem 4. J(πk+1) − J(πk) = Es∼ρ πk+1 υ Ea∼πk+1 [A πk(*s, a*)] (45a) = Es∼ρ πk+1 υ X N i=1 πk+1(ai|s)A πk(*s, a*i) (45b) = Es∼ρ πk+1 υ X N i=1 X N j=1 πk(aj |s)f k s (*i, j*)A πk(*s, a*i) (45c) = Es∼ρ πk+1 υ X N j=1 πk(aj |s) X N $$\begin{array}{l}\text{(45a)}\end{array}$$ = $$\begin{array}{l}\text{(45b)}\end{array}$$ = $$\begin{array}{l}\text{(45c)}\end{array}$$ = $$\begin{array}{l}\text{(45d)}\end{array}$$ = $$\begin{array}{l}\text{(45e)}\end{array}$$ = $$\begin{array}{l}\text{(45f)}\end{array}$$ = $$\begin{array}{l}\text{(45g)}\end{array}$$ = $$\begin{array}{l}\text{(45g)}\end{array}$$ . i=1 f k s (*i, j*)A πk(*s, a*i) (45d) = Es∼ρ πk+1 υ X N j=1 πk(aj |s)X i∈Kˆ πk s (βk,j) f k s (*i, j*)A πk(*s, a*i) (45e) ≥ Es∼ρ πk+1 υ X N j=1 πk(aj |s)X i∈Kˆ πk s (βk,j) f k s (*i, j*)[A πk(*s, a*j ) + βkDij − 2ϵ] (45f) = βkEs∼ρ πk+1 υ X N j=1 πk(aj |s)X i∈Kˆ πk s (βk,j) f k s (*i, j*)Dij −2ϵ 1 − γ , (45g) where (45a) holds due to the performance difference lemma in Kakade & Langford (2002); (45f) follows from the definition of Kˆπk s (βk, j) and the fact that ||Aˆπk − Aπk ||∞ ≤ ϵ, therefore for i ∈ Kˆπk s (βk, j), [Aπk (*s, a*i) + ϵ] − βkDij ≥ Aˆπk (s, ai) − βkDij ≥ Aˆπk (s, aj ) − βkDjj = Aˆπk (s, aj ) ≥ Aπk (*s, a*j ) − ϵ; (45g) holds since Ea∼π[Aπ(*s, a*)] = 0. Theorem 5. **(Global convergence)** Under Assumption 2, we have for any βk ≥ 0*, (WPO) satisfies that* $$\|V^{\star}-V^{\pi_{k+1}}\|_{\infty}\leq\gamma\|V^{\star}-V^{\pi_{k}}\|_{\infty}+\beta_{k}\|D\|_{\infty},\tag{1}$$ $$(11)$$ $$(12)$$ and (SPO) satisfies that $$\|V^{\star}-V^{\pi_{k+1}}\|_{\infty}\leq\gamma\|V^{\star}-V^{\pi_{k}}\|_{\infty}+2\frac{\beta_{k}}{1-\gamma}\left(\|D\|_{\infty}+2\frac{\log N}{\lambda}\right).$$ If limk→∞ βk = 0*, we further have* limk→∞ J(πk) = J ⋆. Proof of Theorem 5. Our proof is inspired by the work Bhandari & Russo (2021). We use the shorthand πs for the probability distribution π(· | s) on the actions and denote the probability distribution on the action space A as ∆. To save notations, we rewrite πk+1, πk and βk as π +, π and β respectively. We use d for either dW or dS in the following derivation. Note d ≤ ∥D∥∞ = : D for both cases4, and dS ≥ −2 log N λ. 5 Since a policy π is indeed just a member of Q|S| i=1 ∆, we find that the problem (7) can be split into |S| many optimization problems. For each s ∈ S, we need to solve $$\operatorname*{max}_{\pi_{s}^{\prime}\in\Delta}\quad\rho^{\pi}(s)\mathbb{E}_{a\sim\pi^{\prime}(\cdot|s)}[A^{\pi}(s,a)]-\beta\rho^{\pi}(s)d(\pi_{s}^{\prime},\pi_{s}).$$ , πs).(46) Denote the quality function of π as Qπ(*s, a*) = E[Rt|st = *s, a*t = a; π], and the value function of π as V π(s) = E[Rt|st = s; π], we find that Aπ(*s, a*) = Qπ(*s, a*) − V π(s). Since the second term is only a function of the current policy π and the state s, we find that Problem (46) is further equivalent to (in the sense of the same solution set): $$(46)$$ $$\operatorname*{max}_{\pi_{s}^{\prime}\in\Delta}\quad\mathbb{E}_{a\sim\pi_{s}^{\prime}}[Q^{\pi}(s,a)]-\beta d(\pi_{s}^{\prime},\pi_{s}).$$ Here we use ρ0(s) > 0 for all s. Let π¯ be a solution of the policy iteration: $$(47)$$ $$\bar{\pi}_{s}\in\arg\operatorname*{max}_{\pi_{s}^{\prime}}\ \mathbb{E}_{a\sim\pi_{s}^{\prime}}[Q^{\pi}(s,a)].$$ $$(48)$$ $$(49)$$ $$(50)$$ π(*s, a*)]. (48) Also define the bellman operator T : R |S| → R |S| and the operator T π ′: R |S| → R |S|: for any V ∈ R |S|, $$(TV)_{s}=\max_{a\in A}r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}[V(s^{\prime})],$$ $$(T^{\pi^{\prime}}V)_{s}=\mathbb{E}_{a\sim\pi^{\prime}_{s}}[r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}V(s^{\prime})].$$ Using the relation between the quality function and the value function, Qπ(*s, a*) = r(*s, a*)+Es ′∼P (·|s,a)[V π(s ′)], we can rewrite the above equations in terms of the quality function for V = V π: $$\begin{array}{r l}{{}}&{{(T V^{\pi})_{s}=\operatorname*{max}_{a\in{\mathcal{A}}}r(s,a)+\gamma_{s'\sim P(\cdot|s,a)}[V^{\pi}(s')]=\operatorname*{max}_{a\in{\mathcal{A}}}Q(s,a)=T^{s}V^{\pi},}}\\ {{}}&{{(T^{\pi^{\prime}}V^{\pi})_{s}=\mathbb{E}_{a\sim\pi_{s}^{\prime}}[Q^{\pi}(s,a)].}}\end{array}$$ π, (51) 4For Sinkhorn divergence, note that the entropy function is always nonnegative. 5This lower bound is obtained via dS(π ′, π|λ) ≥ minQ≥0,Pi,j Qij=1 ⟨Q, D⟩ − 1λ h(Q) (a) = ⟨Q, D⟩ − 1 λ h(Q)| Qij=exp(−*λDij* P ) i,j exp(−*λDij* ) = − 1λ log Pi,j exp(−λDij ) (b) ≥ − 2 log N λ. Here in the step (a), we use the Lagrangian multiplier method to derive the optimal Qij = P exp(−λDij ) i,j exp(−λDij ) . In the step (b), we use the fact that log(Pn i=1 exp(xi)) ≤ max{x1*, . . . , x*n} + log n for any x1*, . . . , x*n ∈ R and Dii = 0 for any i. $$(51)$$ $$(52)$$ Let us consider d = dW first. Using the optimality of π + for the problem (46), we know that $$\begin{array}{l}{{\mathbb{E}_{a\in\pi_{s}^{+}}[Q^{\pi}(s,a)]-\beta d(\pi_{s}^{+},\pi_{s})\geq\mathbb{E}_{a\in\bar{\pi}_{s}}[Q^{\pi}(s,a)]-\beta d(\bar{\pi}_{s},\pi_{s})}}\\ {{\Rightarrow\mathbb{E}_{a\in\pi_{s}^{+}}[Q^{\pi}(s,a)]\geq\mathbb{E}_{a\in\bar{\pi}_{s}}[Q^{\pi}(s,a)]-\beta D.}}\end{array}$$ andEa∈π $$\mathbb{E}_{a\in\pi_{s}^{+}}[Q^{\pi}(s,a)]-\beta d(\pi_{s}^{+},\pi_{s})\geq\mathbb{E}_{a\in\pi_{s}}[Q^{\pi}(s,a)]-\beta d(\pi_{s},\pi_{s})$$ $$\implies\mathbb{E}_{a\in\pi_{s}^{+}}[Q^{\pi}(s,a)]\geq\mathbb{E}_{a\in\pi_{s}}[Q^{\pi}(s,a)]=V^{\pi}(s).$$ Using the notation in (51) and (52), (53) and (54) become $$T^{\pi^{+}}V^{\pi}\geq TV^{\pi}-\beta D{\bf1}_{|S|},$$ $$T^{\pi^{+}}V^{\pi}\geq V^{\pi}.$$ $$(53)$$ $$(54)$$ $$(55)$$ $$(56)$$ $$\left(57\right)$$ $$(58)$$ Here 1|S| is a vector of all one entries and the inequality ≥ means entrywisely larger than or equal to. By iteratively applying T π +to (56) and use the fact that T π +is a monotone and contraction map with V π +as the unique fixed point, we have $$V^{\pi^{+}}\geq\cdots\geq(T^{\pi^{+}})^{2}V^{\pi}\geq T^{\pi^{+}}V^{\pi}\geq V^{\pi}.$$ π. (57) Hence we have $0\stackrel{{(a)}}{{\leq}}V^{\star}-V^{\pi^{+}}\stackrel{{(b)}}{{\leq}}V^{\star}-T^{\pi^{+}}V^{\pi}\stackrel{{(c)}}{{\leq}}V^{\star}-TV^{\pi}+\beta D{\bf1}_{|S|}$. Here the inequality (a) is due to the optimality of V ⋆. The inequality (b) is due to (57), and the inequality (c) is due to (55). Now using the fact V ⋆is the unique fixed point of T, and T is a monotone and contraction map, we have from (58) that $$\|V^{\star}-V^{\pi^{+}}\|_{\infty}\leq\|TV^{\star}-TV^{\pi}\|_{\infty}+\beta D\leq\gamma\|V^{\star}-V^{\pi}\|_{\infty}+\beta D.\tag{1}$$ Next consider d = dS. The optimality of π + reveals that for π˜ = ¯π or π: $$\mathbb{E}_{a\in\tau_{s}^{+}}[Q^{\pi}(s,a)]-\beta d(\pi_{s}^{+},\pi_{s})\geq\mathbb{E}_{a\in\tilde{\pi}_{s}}[Q^{\pi}(s,a)]-\beta d(\tilde{\pi}_{s},\pi_{s})$$ $\Longrightarrow\mathbb{E}_{a\in\tau_{s}^{+}}[Q^{\pi}(s,a)]\geq\mathbb{E}_{a\in\tilde{\pi}_{s}}[Q^{\pi}(s,a)]-\beta(D+2\frac{\log N}{\lambda}).$ $$(60)$$ Thus we have the following $$T^{\pi^{+}}V^{\pi}\geq TV^{\pi}-\beta(D+\frac{2\log N}{\lambda})\mathbf{1}_{|S|},$$ $$T^{\pi^{+}}V^{\pi}\geq V^{\pi}-\beta(D+\frac{2\log N}{\lambda})\mathbf{1}_{|S|}.\tag{1}$$ By iteratively applying T π +to (62) and use the fact that T π +is a monotone and contraction map with V π + as the unique fixed point, we have $$V^{\pi^{+}}\geq V^{\pi}-\frac{\beta}{1-\gamma}(D+2\frac{\log N}{\lambda})\mathbf{1}_{|S|}.$$ Hence we have $$\begin{array}{l}0\leq\,V^{\star}-V^{\pi^{+}}\stackrel{{(b)}}{{\leq}}V^{\star}-T^{\pi^{+}}V^{\pi}+\frac{\beta}{1-\gamma}(D+2\frac{\log N}{\lambda})\mathbf{1}_{|S|}\\ \stackrel{{(c)}}{{\leq}}V^{\star}-TV^{\pi}+2\frac{\beta}{1-\gamma}(D+2\frac{\log N}{\lambda})\mathbf{1}_{|S|}.\end{array}\tag{64}$$ $$(59)$$ (61) $\binom{62}{}$ . $$(63)$$ Here the inequality (a) is due to the optimality of V ⋆. The inequality (b) is due to (63), and the inequality (c) is due to (61). A similar derivation as (59) shows the inequality in the theorem. Hence the theorem is established. ## K Computational Complexity Of The Algorithm 1 Our overall algorithm applies a general actor-critic framework: the actor follows the proposed WPO or SPO update while the critic follows TD methods. The computational complexity depends on (i) the per-iteration computation cost of the policy and critic update and (ii) the iteration complexity of the actor-critic method. Here we mainly discuss the per-iteration computation cost of the policy update, as studies on the iteration complexity of actor-critic framework for constrained policy optimization are limited. The computation cost of WPO and SPO updates at each iteration depends on the selection of βk. If βk is chosen time dependently, the computation cost of WPO/SPO policy update is O(n 2 ans), where na and ns are the number of actions and states to perform policy update. If we set βk as the dual optimizer, there will be additional cost to run gradient descent to solve the one-dimensional dual formulation. As discussed in our experiments, we can set βk to be the dual optimizer only in the first a few iterations and use a decaying afterward. Therefore, the average computational complexity of a policy update step can be O(n 2 ans). ## L Difference Between Spo/Wpo And Other Exponential Style Updates Sinkhorn divergence smooths the original Wasserstein by adding an entropy term, which causes the SPO update to contain exponential components similar to standard exponential style updates such as NPG (Kakade, 2001; Peng et al., 2019). Thus, SPO can be viewed as a smoother version of WPO update. Nonetheless, it's important to note that SPO/WPO updates differ fundamentally from standard exponential style updates that are based solely on entropy or KL divergence. In both SPO and WPO, the probability mass at action a is redistributed to neighboring actions with high value (i.e., those a ′ with high Aπ(*s, a*′) − βd(a ′, a)). In contrast, in these standard exponential style updates, probability mass at action a is reweighted according to its exponential advantage or Q value. ## M Exploration Properties Of Wpo/Spo Compared to the Wasserstein metric, the KL divergence between policies is often larger, especially when considering the policy shifts of closely related actions, as shown in Figure 2. In practice, when employing the same trust region size δ, Wasserstein metric allows for more admissible policies within the trust region compared to KL, thereby leading to better exploration. This advantage is demonstrated in our motivating example in Figure 3. Furthermore, Sinkhorn divergence has even more exploration advantages than using Wasserstein. As Sinkhorn smooths the original Wasserstein with an entropy term, it includes additional smoother (more uniform) policies in the trust region, leading to even faster exploration. Our numerical results in Section 7 also support that WPO/SPO explores better than KL; and SPO achieves faster exploration than WPO. ## N Policy Parametrization, Prior Work On Nonparametric Policy As noted in (Tessler et al., 2019), the suboptimality of policy gradient is not due to parametrization (e.g., neural network), but is a result of the parametric distribution assumption imposed on policy, which constrains policies to a predefined set. In our work, we strive to avoid suboptimality by circumventing the parametric distribution assumption imposed on policy, while still allowing for parametrization of policy in our empirical studies. Previous research, such as (Abdolmaleki et al., 2018; Peng et al., 2019), has investigated theoretical policy update rules based on KL divergence without making explicit parametric assumptions about the policy being used. However, to our best knowledge, no prior work has explored theoretical policy update rules based on Wasserstein metric or Sinkhorn divergence. ## O T-Tests To Compare The Performance Of Wpo, Spo With Bgpg And Wnpg We conduct independent two-sample one-tailed t-tests (Student, 1908) to compare the mean performance of our proposed methods (WPO and SPO) with two other Wasserstein-based policy optimization approaches: BGPG (Pacchiano et al., 2020) and WNPG (Moskovitz et al., 2021). Specifically, we formulate four alternative hypotheses for each task: JWPO > JBGPG, JWPO > JWNPG, JSPO > JBGPG, and JSPO > JWNPG. MuJuCo continuous control tasks are considered for the t-tests, with a sample size of 10 for each algorithm. All t-tests are conducted at a confidence level of 90%. The results of the t-tests are presented in Table 7, where a checkmark (✓) indicates that the alternative hypothesis is supported with 90% confidence, and a dash (−) indicates a failure to support the alternative hypothesis. Based on the results presented in Table 7, we can conclude the following: - The mean performance of WPO is higher than BGPG with 90% confidence for all tasks. - The mean performance of WPO is higher than WNPG with 90% confidence for all tasks. - The mean performance of SPO is higher than BGPG with 90% confidence for all tasks except Ant-v2. - The mean performance of SPO is higher than WNPG with 90% confidence for all tasks except HalfCheetah-v2. We note that though SPO's performance is not statistically significantly higher than BGPG or WNPG in Ant-v2 and HalfCheetah-v2 tasks, SPO demonstrates a faster convergence speed than WNPG and BGPG in these two tasks. | Environment | JWPO > JBGPG | JWPO > JWNPG | JSPO > JBGPG | JSPO > JWNPG | |----------------|----------------|----------------|----------------|----------------| | HalfCheetah-v2 | ✓ | ✓ | ✓ | − | | Hopper-v2 | ✓ | ✓ | ✓ | ✓ | | Walker2d-v2 | ✓ | ✓ | ✓ | ✓ | | Ant-v2 | ✓ | ✓ | − | ✓ | | Humanoid-v2 | ✓ | ✓ | ✓ | ✓ | Table 7: T-tests results on the performance of WPO, SPO, BGPG and WNPG
Review 1: Summary: First, I apologize to the authors for the tardiness of this review. I hope the feedback can still be useful. This paper studies the problem of policy optimization for reinforcement learning via constraining the policy using metrics beyond the KL divergence. The paper contributes to a line of work that explores Wasserstein-based metrics for the trust-region part of policy optimization. In contrast to other works that have investigated the Wasserstein-based metrics, this work is concerned with providing theoretical guarantees and avoiding potentially suboptimal approximations or use of penalty functions. The variants of the algorithm are given, one based on the Wasserstein metric directly and one using a Sinkhorn-like version. Both involve solving their respective dual problem. A monotone improvement guarantee is established and a convergence theorem is also proven in the tabular setting. The method is evaluated on a number of classic reinforcement learning benchmarks, covering tabular settings and continuous control settings. The proposed methods appear to outperform the baselines, although marginally. Strengths and Weaknesses: *Strengths* - Pursuing better methods for making use of the Wasserstein-based metrics is definitely an important direction. - The derivation of the policy update rules and the theoretical results on monotone improvement and convergence appear to be new, at least for this specific problem. - The experimental results are fairly convincing even though the improvement is marginal, but there are a few missing results that might make the case stronger. *Weaknesses* While I generally feel positive about the paper, there are some weaknesses that I hope can be fixed. - The related work indeed cites a lot of the relevant literature, but falls short in properly contrasting the contributions and significance of this paper in light of that literature. - The motivating example is a little shaky because it requires one to know in advance how the actions are related by “distance.” - A major claim of the paper is that the update rules do not explicitly require a certain parametric class which can be restrictive (see prior work). But in practice it seems that some either implicit or explicit parameterization of the policy is used in all but the tabular settings. Theoretical update rules for KL-based methods are also do not require explicit parameterization, if I recall correctly. - The theoretical results are not particularly concerned with exploration and generally just assume the problem away with sufficient coverage. While there is certainly more to be explored here, it may be alright considering this is on par with similar papers that study this sort of thing. - The experimental performance improvement (although definitely there) seems marginal in practice. Requested Changes: I have some questions and suggestions. - I’m confused about how the cost matrix M is defined. In the motivating example, the distances are simply “assigned” by the authors. Fair enough, if you have knowledge of the problem setting in this way. However, later it goes on to say that $M_{ij} = d(a_i, a_j) = ||a_i - a_j||_1$. If these are discrete abstract actions, this would be undefined, such as in some of the experiments. How exactly do you define subtraction between two such abstract actions? In general, what would one do in the case where nothing is known about the discrete action space? How does this choice affect performance? - How does the SPO update rule compare to NPG/SAC or other sort of “exponential weights” style algorithms for policy optimization? It would be helpful to have more extended discussion about the policy parameterization as mentioned in the weaknesses section. - As mentioned, the related work could be revised to include better distinction between this work and prior works rather than just discussion what prior works did. - Potentially several more difficult control settings (e.g. Humanoid) or maybe difficult exploration problems would make the experimental results more convincing. - Theorem 3: it would be helpful to write directly the inequality of the bound. - Page 3: “Henceforth” used incorrectly. Consider: “Until now” or “Heretofore” - Page 11: “coherent” used awkwardly. Consider: “consistent” Broader Impact Concerns: N/A ================================================== Review 2: Summary: This work proposes to replace Kullback-Leibler divergence in trust-region methods with Wasserstein and Sinkhorn trust regions, namely Wasserstein policy optimization (WPO) and Sinkhorn policy optimization (SPO). Algorithm 1 presents the updates for WPO/SPO. The authors firstly use an example of grid world to intuitively compare Wasserstein and KL divergences. Figure 2 shows that Wasserstein can characterize distances between difference probability distributions over action spaces, while KL cannot. Figure 3 shows that Wasserstein-based policy update finds the optimal policy faster than KL-based policy update. In Sections 3 and 4, the authors develop closed-form expressions of the policy updates for both WPO and SPO, using optimal Lagrangian multipliers of the trust region constraints. Results are given in Theorems 1 and 2. Section 5 provides theoretical analysis. Theorem 4 shows that WPO has monotonic performance improvement, the same results apply for SPO with multipliers approaching infinity $\lambda \to \infty$. Theorem 5 shows that WPO has global convergence with decaying regularizer coefficient $\beta_k \to 0$. The authors then conduct experiments to compare the proposed WPO/SPO with TRPO, PPO, A2C, and BGPG, WNPG, using several tasks, including tabular domains with discrete state and action (Taxi, Chain, and Cliff Walking), locomotion tasks with continuous state and discrete action (CartPole, Acrobot), and continuous control tasks with continuous action (HalfCheetah, Hopper, Walker, and Ant). The authors also did ablation study to examine the hyperparameter sensitivity of the proposed methods. Strengths and Weaknesses: **Strengths** 1. Using Wasserstein and Sinkhorn in trust region policy optimization seems reasonable (from the motivating example) and new. 2. The proposed ideas are guaranteed by theory and practical algorithms. 3. The experimental results verify the proposed methods. **Weaknesses** 1. From the closed form update of WPO (Theorem 1), it seems it is more complicated than KL divergence. This might suggest the method is computationally inefficient comparing to using KL divergence, given sometimes the proposed methods have comparable or worse performance (e.g., in Figure 6). 2. From the closed form update of SPO (Theorem 2), it looks very similar to standard exponentiate gradient update from using entropy or KL divergence. This might suggest the proposed SPO eventually is similar to existing methods, which could possibly reduce the novelty. 3. Other closely related baselines such as SAC should be included, since SAC also use entropy / KL divergence. Requested Changes: 1. I am wondering how computationally efficient the proposed method is comparing to using KL divergence. Maybe this could be shown by comparing the performance of different methods using the number of samples or actual running time. 2. In the motivating example, why is it the case that $1 \to 2$ in Figure 2a is a close action and $1 \to 3$ in Figure 2b is a far action? 3. Please include some closely related baselines such as SAC in the experiments. 4. It is mentioned that $\beta_k = 1/k^2$ is used in Section 7.1. Why is this specific choice and how does it compare to other possibility (e.g., $1/\sqrt{k}$, $1/k$, etc)? Broader Impact Concerns: The experiments are conducted using publicly available benchmarks such as Mujoco, and I do no see ethical implications of the work. ================================================== Review 3: Summary: The paper proposes to use Wasserstein distance or Sinkorn divergence as the trust region distance in policy optimization, compared to the KL divergence in the original TRPO algorithm. The paper studies the theoretical properties of such an algorithm, and provides some empirical evidence that this new trust region performs on par or better than previous baselines. Strengths and Weaknesses: ### Strengths The paper provides a relatively comprehensive account of the trust region optimization algorithm using Wasserstein distance or Sinkorn divergence as the local metrics. The paper details the properties of the policy updates (e.g., convergence of Sinkorn updates to Wasserstein distance updates), local improvement properties and global convergence properties of the algorithm. ### Weaknesses Maybe a major weakness of the paper is its lack of novelty -- the idea of using Wasserstein distance or Sinkorn divergence as the trust region metric is not new in the RL literature (see the multiple references in the paper too). The experimental results in the paper are not convincing enough to show that such a new metric leads to significantly better results than baseline KL divergence. It is also not completely clear that the theory brings much novelty and insights to the established literature. Requested Changes: I have multiple questions and comments below. The authors may consider making changes to the paper based on these questions and comments. ### Novelty of the method It is not quite clear how much novelty this current paper brings compared to a few papers already in the literature [1,2] (both are referenced in the paper). Using metrics alternative to KL divergence is a very natural idea, and introducing distance metrics based trust region is a reasonable next step, however, this seems to have already been accomplished by prior work. If the aim here is to establish better theoretical properties / empirical performance, the results feel to me not strong enough, as I will detail below. [1] Pacchiano et al, Learning to score behaviors for guided policy optimization, 2020 [2] Moskovitz et al, Efficient Wasserstein natural gradientsfor reinforcement learning, 2021 ### Theoretical result: Thm 4 Thm 4 is a local improvement property popularized in trust region policy optimization literature. I think a major weakness here is that obtaining such a local improvement property is not very challenging since the improvement bound always contain something like $-2\epsilon/(1-\gamma)$ as shown in Eqn 10 of the paper. Such a penalty term stems from the fact that we cannot control higher order approximation error from the improvement step. It is not clear what is the benefit of such a local improvement bound compared to a KL based improvement bound. In its current form, the result does not bring much technical insight compared to existing results. ### Theoretical result: Thm 5 The global convergence property in Thm 5 is not a very strong result unfortunately. Essentially what it says is as $\lambda\rightarrow\infty$ we have $\gamma$-contraction of the value function $V^{\pi_k}$ to the optimal value function $V^\ast$. This is quite obvious since if $\lambda\rightarrow \infty$, we have $\pi_{k+1}=\arg\max A^{\pi_k}$. In other words, the algorithm reduces to policy iteration, which indeed achieves a $\gamma$-contraction. Having a more general result in Eqn 11, which considers the case when $\lambda$ and $\beta_k$ are finite, does not bring much more compared to Thm 4. ### Choice of distance metric I think a major challenge in adopting Wasserstein distance based trust region update is that it is not clear apriori what is a good choice of the action metric $d(a_i,a_j)$. In this paper this is not discussed extensively (only that the paper suggests to implement L-1 distance throughout). Obviously, the choice of such a metric would be critical to the practical performance of the RL algorithm. ### Experiment results The paper has studied empirical results in a few setups. Maybe the most practically relevant result is in Fig 8 where the testbeds are higher-dimensional control problems. Though it seems that there is some marginal gains of WPO/SPO compared to baseline algorithms, I do feel the performance of baseline PPO to be a bit worse than expected, compared to results reported in e.g., [3]. It is also not clear to me why the training step is generally <1M steps since such near on-policy optimization algorithms are usually benchmarked with 3-5M steps. This brings into question whether other baseline algorithms are not properly tuned with the 1M regime; or rather, it is just more convenient to run all algorithms longer for 3-5M steps and see what happens. [3] Openai spinning up, https://spinningup.openai.com/en/latest/spinningup/bench_ppo.html ### Minor: notation Instead of using $M$ as the distance matrix, it might be more clear to use $D$, which is compatible with the definition $D_{ij}=d(a_i,a_j)$. Broader Impact Concerns: No ethical impact. ================================================== Review 4: Summary: This paper introduces, analyzes, and tests two conceptually simple, yet novel policy optimization algorithms based on taking gradient steps constrained by trust regions defined in terms of the Wasserstein distance (WPO) or the related Sinkhorn divergence (SPO). While previous work has studied similar ideas, this paper is the first to provide analysis of the convergence properties of such methods. Specifically, they show a performance improvement guarantee for WPO given possibly inaccurate advantage estimates, a global convergence guarantee for WPO, and that SPO converges to WPO as the regularizer coefficient $\lambda\to\infty$. Empirically, they apply practical versions of these algorithms to both tabular and discrete and continuous control tasks with function approximation, demonstrating strong performance relative to other on-policy trust region-based policy optimization algorithms. Strengths and Weaknesses: Strengths - Overall the paper is clearly written and presented. The toy problem in Figures 1-3 is interesting and illustrative. It’s interesting that the Wasserstein distance’s sensitivity to “close” vs. “far” actions permits a *bigger* policy change in this case, which skips over the local minimum of “picking up” in the blue state. How sensitive is this phenomenon to the specific values of the geometric distances among actions, and can we think of this insight as applying to high-dimensional problem settings? - Related work is appropriately cited and discussed. - Most of the empirical evaluation is fairly convincing wrt the performance of both WPO and SPO, and the pros and cons of each approach is discussed nicely. - The theoretical analysis seems thorough and rigorous, and reinforces intuition about how the proposed approaches work, though see below for more discussion. Weaknesses - Regarding the performance improvement guarantee, my impression is that the challenge of the exploration problem is packed away into the assumed bound on the advantage estimation error—if the max error across state-action pairs is small, then there’s an implication that the agent has strong coverage across the state(-action) space. That’s fine, but the term in the bound $-2\epsilon/(1 - \gamma)$ implies that $\epsilon$ likely has to be quite low in practice for the guarantee of improvement to hold, as $1/(1-\gamma)$ can be quite high for typical discount factors. If exploration is hard, it’s not unreasonable to expect that $\epsilon$ could be relatively large. This does seem to be a weakness of the result, although my primary concern with it is that it doesn’t seem to be discussed/addressed. - This is relatively minor, but in several cases runtimes are reported without error bars. - The performance on continuous control is more ambiguous—I don’t think the results support the conclusion that WPO/SPO are clearly better than other methods. Requested Changes: Requested Changes - Empirically, results on more challenging continuous control tasks (e.g., Humanoid) seem important to support the claim that WPO/SPO are superior to other methods in this domain. Alternately, more random seeds on existing tasks would also be helpful. - While the paper is clear overall, more discussion regarding the intuition of the WPO update would be helpful. - A discussion surrounding $\epsilon$ in the performance improvement result for WPO would also be welcome. - I would appreciate error bars / variance measurements be reported for the wall-clock numbers. - Figure 10 in the Appendix seems to indicate that a KL-based method outperforms WPO for higher values of $N_A$, which doesn’t seem to be defined. Could an explanation of this result be added? - Could the authors add runtime results for the continuous control experiments? I would think the need to perform additional sampling would drive up computational cost. - The result regarding exploration in the motivating example at the beginning of the paper is interesting, and a discussion of how WPO/SPO can affect exploration would be very helpful. - Could the authors add a discussion and/or results regarding the factors governing the “use optimal $\beta$ for the first 20% of runtime then decay/freeze” strategy? Would similar performance be expected for only computing the optimal $\beta$ for 10% of the run? For 5? How environment/task-dependent is this? I am aware that all of these changes together may be tough to fit within the main paper or infeasible, but any added results or discussion in the appendix would be appreciated. Thank you! Broader Impact Concerns: I don't have any concerns regarding the ethical implications of this work. ================================================== Metareview: Recommendation: Accept with minor revision Comment: **Summary of the Paper:** The paper proposes a new trust region methods for policy optimization. As opposed to TRPO, which uses the KL divergence to define the trust region, this paper suggests using the Wasserstein distance, as well as its regularized variant, to define the trust region. The resulting algorithm are WPO and SPO. The paper derives the algorithms, shows some theoretical properties regarding their convergence, and empirically compares them with other algorithms including TRPO and two other methods (BPGD and WNPG) that also use Wasserstein distance. **Evaluation:** After the initial review, the authors revised their paper, and many initial concerns have been satisfactorily addressed. We have now three reviewers [br9i, xF5g, yvPF] who are Leaning Accept and one [dwsm] who is Leaning Reject. At the high-level, the concerns are related to the novelty of the algorithm, the significant of theoretical results, and the significance of empirical results. I briefly explain them here, and postpone more discussions, and my reaction to them, to later. *Regarding the novelty of methods:* Although there are similar algorithms (proposed by Pacchiano et al. 2020, Moskovitz et al. 2021), they are not the same. So even though the method may not be considered unprecedentedly novel, this is not a serious issue, especially because TMLR's main acceptance criteria is not based on novelty. *Regarding theoretical result:* The theoretical results may not be the strongest. For example, the error of critic is measured in the supremum norm, which is technically simpler than an Lp norm but loses some nuances of how critic actually affects the performance. Nonetheless, I believe that is not a real issue as it still shows some basic and important properties of the algorithm. *Regarding the empirical evidence:* The learning curves overlap significantly in some domains such as Humanoid-v2 or Ant-v2. This makes it difficult to infer much about the superiority of the algorithm, especially compared to WNPG and BGPG that use Wasserstein penalty as well. Therefore, claims such as "our methods achieve better sample efficiency, faster convergence, and improved final performance" (Item 3 in contributions in Section 1) sounds stronger than justified. I have a few other comments from my own reading of the paper, which I will describe later. I believe they can be addressed with some work. Altogether, **I consider this paper acceptable after a few clarifications and minor revisions**. **Detailed Comments:** I summarize some of the comments of the reviewers, and write down my reaction to them and what I recommend the authors should do about them: **Concern:** [yvPF,dwsm] The improvement over the baseline are not particularly convincing. **Reaction and recommendation:** I agree that the improvement over the baselines, especially other Wasserstein-base algorithms, are not visually significant in some domains. For example, most methods on Ant and Humanoid have a significant overlap in their shaded areas. This brings the question of what the shaded areas in the figures represent. Are they standard deviation across runs? Or standard error? Or some 95% or 95% confidence interval? (I see that in Figures 9 and 10 in the the supplementary material, standard deviation is mentioned, but not in other figures.) My recommendation is that the authors be more clear about what they are presenting, and ideally do a statistical test to see whether the differences are actually significant or not. I know this is uncommon practice nowadays, but when the results are so close, it is warranted. It may also warrant to tone down the claim of superiority in the paper. **Concern:** [dwsm, xF5g, yvPF] The theoretical results are not strong enough, for example, its use of the uniform error bound on the critic hides away the issue of exploration. Moreover, at the limit of $\lambda$, they reduce to the Policy Iteration algorithm (Theorem 5). In particular, they do not shed light on why Wasserstein distance should be preferred to the KL divergence (mentioned in the private recommendation). **Reaction and recommendation:** As far as I can see, the theoretical results do not explain why Wasserstein distance should be preferred, so I agree with reviewer dwsm on this front. It would be great if they did, or if they do but is not clearly apparent, the authors expand on it. This isn't a critical shortcoming for this work though. On the other hand, I do not agree that when $\lambda$ goes to infinity, the algorithm reduces to the Policy Iteration algorithm. That would be the case when $\beta$ goes to zero, which is basically when we disregard the constraints imposed by the algorithm and "turn it off". So even if in the limit the algorithms become the same as Policy Iteration, that limit is not a good description of what the algorithm is doing and is analyzed by the theorem. Overall, I think even though the theoretical results are perhaps not groundbreaking, they provide some evidence on the soundness of the algorithm. **Concern:** [br9i] Comparison with SAC. **My reaction and recommendation:** Given the similarity of the use of entropy, a comparison can be helpful, though this is optional. **Concern:** The Wasserstein distance has been used in the policy optimization literature before (Pacchiano et al. 2020, Moskovitz et al. 2021), albeit in different forms. This somewhat reduces the novelty of the paper. **Reaction and recommendation:** This does not seem to be a major issue, given that there is enough differences between those algorithms and that the TMLR's criteria is not focused on the novelty anyway. **Concern:** [xF5g, dwsm] It is not clear how to choose the action metric d. **Reaction and recommendation:** The question of how to choose the action metric d is an important one, but I suppose it can be done as a future work. Though I encourage the authors to explore it further for this work as well, for example, compare the effect of various choices of d on the performance. **My Comments:** **Major:** * I am confused about the upper bound in Theorem 4. Can't we choose a large $\beta$ to make the RHS arbitrary large? As far as I see, all terms in the summations of the RHS are positive, so if we choose $\beta$ large enough, we show that the performance at the new iteration is arbitrary better than the performance at the old iteration. The only way I see this does not happen is if $f$ goes to zero faster than $\beta$ increases. I have not verified this myself. Please expand on this. * One of the motivations of the using SPO over WPO has been the computational cost of computing the Wasserstein distance by solving the Optimal Transport problem compared to the Sinkhorn algorithm (for example, this is alluded in the last paragraph of Section 2). But the results in Table 5 in the supplementary material and wall-clock report in Section 7.3 do not show much of a difference. Please explain this more. * After Theorem 2, it is mentioned that one can use gradient-based method to find the optimal solution of $\beta$. Is it established that Eq. (9) is convex? **Minor:** * What is distribution $\mu$ in Theorem 4? How is related to $\nu$? * Given that most of the algorithmic development is for finite action spaces, it would be better to be upfront about that, perhaps as early as Section 1 or 2. * Some of the references do not seem to be the right ones. For example, Mnih et al. 2013 or Silver et al. 2016 are not good examples of Policy-based RL. * The equation numbers and references are not clickable. * There are some typos. For example: - Abstract and Remark 3: close-form --> closed-form - In a few places, "few" is used instead of "a few". For example, "Recently, few work ..." (Page 1) or "In addition, few recent work ..." (Page 3). **Requested Changes and Clarifications:** These are the list of changes and clarification that I'd like the authors to make in their revised paper: - Clarify the meaning of shaded areas in the revised paper and ideally perform a statistical test to show that whether the suggested methods are better than existing ones, especially for results with overlapping shaded areas. If needed, tone down the claims. - Clarify my questions above, especially the one regarding the RHS of Eq. (10) in Theorem 4. - Fix all the typos. - [Optional] Compare with SAC. - [Optional] Perform experiments with different choice of action distance d. ==================================================
# A Revenue Function For Comparison-Based Hierarchical Clustering Aishik Mandal∗jitaishik@iitkgp.ac.in Centre of Excellence in Artificial Intelligence Indian Institute of Technology Kharagpur Michaël Perrot∗ michael.perrot@inria.fr Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France Debarghya Ghoshdastidar∗ghoshdas@cit.tum.de Technical University of Munich School of Computation, Information and Technology Munich Data Science Institute Reviewed on OpenReview: *https: // openreview. net/ forum? id= QzWr4w8PXx* ## Abstract Comparison-based learning addresses the problem of learning when, instead of explicit features or pairwise similarities, one only has access to comparisons of the form: *Object* A is more similar to B *than to* C. Recently, it has been shown that, in Hierarchical Clustering, single and complete linkage can be directly implemented using only such comparisons while several algorithms have been proposed to emulate the behaviour of average linkage. Hence, finding hierarchies (or dendrograms) using only comparisons is a well understood problem. However, evaluating their meaningfulness when no ground-truth nor explicit similarities are available remains an open question. In this paper, we bridge this gap by proposing a new revenue function that allows one to measure the goodness of dendrograms using only comparisons. We show that this function is closely related to Dasgupta's cost for hierarchical clustering that uses pairwise similarities. On the theoretical side, we use the proposed revenue function to resolve the open problem of whether one can approximately recover a latent hierarchy using few triplet comparisons. On the practical side, we present principled algorithms for comparison-based hierarchical clustering based on the maximisation of the revenue and we empirically compare them with existing methods. ## 1 Introduction In the past decade, there has been an exponential growth in the scope of data science and machine learning in domains such as psycho-physics (Shepard, 1962; Stewart et al., 2005; Haghiri et al., 2020) or cultural psychology (Berenhaut et al., 2022), evolutionary biology (Foulds et al., 1979; Semple and Steel, 2003; Catanzaro, 2009), or crowd-sourcing (Heikinheimo and Ukkonen, 2013; Ukkonen, 2017) among others. A type of data that has recently gained some traction in these contexts is *comparisons* (Stewart et al., 2005; Agarwal et al., 2007), particularly in the form of: Triplet comparison: Binary response to the query—is object i more similar to object j than to object k? Quadruplet comparison: Binary response to the query —are objects i and j more similar to each other than objects k and l? *All authors contributed equally. Comparisons have been used in the psycho-physics literature for more than 50 years since it is known that humans can provide relative measurements better than absolute ones (Shepard, 1962; Stewart et al., 2005). It led to a surge of popularity of comparisons in the context of crowdsourced data about objects that cannot be represented by Euclidean features, such as food (Wilber et al., 2014) or musical artists (Ellis et al., 2002), or objects for which humans cannot robustly estimate a pairwise similarity, for instance cars (Kleindessner and von Luxburg, 2017) or natural scenes (Heikinheimo and Ukkonen, 2013). The purpose of collecting comparisons is often to learn patterns in the objects, such as latent clusters, or use them for prediction, as in classification. Hence, there has been significant development of algorithms for comparison-based learning (Agarwal et al., 2007; Heikinheimo and Ukkonen, 2013; Haghiri et al., 2017; Kazemi et al., 2018; Perrot and von Luxburg, 2019). The present paper focuses on comparison-based hierarchical clustering. Clustering refers to partitioning a dataset into groups of similar objects, while hierarchical clustering is the problem of finding partitions of the data at different levels of granularity. It is natural to wonder how one can group similar objects or find a hierarchy of groups when neither features nor pairwise similarities are available, and one only has access to triplet or quadruplet comparisons. For instance, the objects in the food dataset (Wilber et al., 2014) can be broadly categorised into 'sweets or desserts' and 'main or savoury dishes', but the latter can be further sub-divided into meat dishes, soups and others. Surprisingly, interest in comparison-based (hierarchical) clustering stemmed in 1970s when single linkage clustering gained popularity, and researchers realised that the method uses only ordinal informal instead of absolute values of pairwise similarities (Janowitz, 1971; Sibson, 1972; Janowitz, 1979). Around the same time, works also started on the *consensus tree problem*, that is, constructing trees (hierarchies) from given sub-trees or ordinal relations (Adams III, 1972; Aho et al., 1981). The problem has since evolved as an important topic in both computational biology and computer science, most notably addressing the question of phylogenetic tree reconstruction under triplet or other ordinal constraints Semple and Steel (2003); Wu (2004); Snir and Yuster (2011). More recently, Ghoshdastidar et al. (2019) re-discovered that single and complete linkage can be computed using only few actively chosen quadruplet comparisons. There are, however, limited practical settings where the learning algorithm can actively decide which comparisons should be queried, and the most relevant case is that of learning from a set of *passively collected* comparisons. This is certainly true in the known practical applications of comparison-based hierarchical clustering, such as (hierarchical) clustering of objects from crowd-sourced comparisons (Ukkonen, 2017; Kleindessner and von Luxburg, 2017), finding communities in languages or in cultural psychology (Berenhaut et al., 2022), and constructing relational database queries Aho et al. (1981) or phylogenetic trees (Semple and Steel, 2003). One of the fundamental problems in hierarchical clustering is to evaluate the *goodness* of a hierarchy. This issue is obviously inherent to identifying *better* hierarchical clustering algorithms. In the phylogenetics literature, the optimal hierarchy problem typically corresponds to the *minimum evolution problem*, where, given a set of species and a pairwise distance matrix (representing evolutionary distance between species), the goal is to find a weighted tree with minimal total edge weights that preserve the evolutionary distances (Foulds et al., 1979; Catanzaro, 2009). A similar philosophy exists in the early works on hierarchical clustering, where an algorithm is judged to better if the ultrametric induced by the output tree is closer to the specified pairwise dissimilarities among the given objects (Janowitz, 1979). More recently, there has been efforts to mathematically quantify the goodness of a hierarchy in terms of certain *cost or revenue functions* (Dasgupta, 2016; Moseley and Wang, 2017; Wang and Wang, 2020). Such formulations have led to a plethora of new methods for hierarchical clustering that also come with worst-case approximation guarantees (Cohen-Addad et al., 2019; Charikar et al., 2019; Chatziafratis et al., 2021). Motivation for this work and our contributions. The main motivation for this work stems from the lack of cost or revenue functions that can be used in the comparison-based framework. Available goodness measures for trees can only be defined using pairwise (dis)similarities (Dasgupta, 2016; Moseley and Wang, 2017; Wang and Wang, 2020). Hence, existing works on comparison-based hierarchical clustering either demonstrate the meaningfulness of the computed hierarchies visually or in artificial settings, where comparisons are derived from pairwise similarities (Kleindessner and von Luxburg, 2017; Ghoshdastidar et al., 2019). Neither solution is useful in practice, where one only has access to comparisons. In this paper, we propose new revenue functions for dendrograms that are only based on triplet or quadruplet comparisons (Section 4). We show that the proposed comparison-based revenues are equivalent to Dasgupta's cost or revenue (Dasgupta, 2016; Moseley and Wang, 2017) applied to particular pairwise similarities that can be computed from comparisons. Interestingly, the pairwise similarities that arise from this equivalence are known in the comparison-based clustering literature (Perrot et al., 2020). Section 5 demonstrates that the proposed revenue function meaningfully captures the goodness of a hierarchical tree. For this purpose, we consider the problem of reconstructing a latent hierarchy (for example, a phylogeny tree) from ordinal constraints (Emamjomeh-Zadeh and Kempe, 2018). In particular, we show that, when all possible triplets among the objects are available, the dendrogram corresponding to the latent hierarchy maximises the proposed revenue function. This, in turn, implies that one can mathematically formulate the triplet-based hierarchical clustering problem as a *maximum triplet comparison revenue problem*. We further address the question of whether one can approximately recover the latent hierarchy using fewer than Ω(n 3) triplets. This problem has not been directly addressed in previous works (see Section 2). We show that only O(n 2log n/2) passive triplets suffice to obtain a (1 − )-approximation of the optimal revenue. Finally, Sections 6–7 use the connection of the proposed revenue functions to the *additive similarities* in Perrot et al. (2020) to present two variants of average linkage hierarchical clustering based on passive triplet or quadruplet comparisons. The performance of these approaches is empirically compared with state of the art baselines using synthetic and real datasets. ## 2 Related Work In this section, we briefly review the algorithmic developments of comparison-based hierarchical clustering, as well as existing theoretical results related to this problem. As noted earlier, interest in comparisonbased hierarchical clustering stemmed from different applications. The current literature consists of two lines of research—works related to reconstruction of phylogenetic trees (Wu, 2004; Snir and Yuster, 2011; Chatziafratis et al., 2021) and those focusing on ordinal data analysis from crowd-sourced data (Kleindessner and von Luxburg, 2017; Ghoshdastidar et al., 2019). In ordinal data analysis literature, the most widely used principle is that of ordinal embedding, where the underlying idea is to retrieve Euclidean representations of the objects that respect the available comparisons as well as possible (see the review in Vankadara et al. (2019) for more details). The embedded data can be subsequently used for (hierarchical) clustering. While this principle provides flexibility in the choice of clustering methods, the Euclidean restriction of the underlying data often leads to inaccurate representations, and hence, poor performance in the context of hierarchical clustering (Ghoshdastidar et al., 2019). The restrictive assumption of Euclidean embedding is avoided by computing pairwise similarities from available comparisons (Kleindessner and von Luxburg, 2017; Ghoshdastidar et al., 2019). Standard hierarchical clustering algorithms, such as average linkage, can then be applied using the pairwise similarities. An alternative approach for comparison-based (hierarchical) clustering is to define an appropriate cost or objective based on comparison and directly optimise it. Ukkonen (2017) employs such a technique for clustering using crowd-sourced data, while this principle underlies most techniques in consensus tree problems or phylogenetic tree reconstruction. In the latter context, two well-studied optimisation problems are *maximum* rooted triplet consistency (Wu, 2004; Byrka et al., 2010)—finding a hierarchy that satisfies most, if not all, given triplets—and *maximum quartet consistency* (Snir and Yuster, 2011; Jiang et al., 2000)—where one has access to quartets (sub-trees with four leaves indicating which pairs should be merged first) and the problem is to find a tree that satisfies most given quartets.* Other related optimisation problems as well as various constraints other than triplets or quartets have been studied (Snir and Rao, 2010; Chatziafratis et al., 2021). Since the focus of the present paper is to define a revenue for trees (see Section 4), our work naturally belongs to this broad class of hierarchical clustering algorithms based on revenue maximisation. However, in Theorem 1, we relate the proposed revenues to pairwise similarities computed from comparisons. Hence, *Note that quartets are different from quadruplets though both are defined on four objects. More precisely, a quartet on *i, j, k, l* corresponds to information that *i, j* and *k, l* should be merged in the tree before all four are merged. Using the notation from Section 3, a quadruplet (*i, j, k, l*) only implies sij > skl whereas a quartet on *i, j, k, l* implies min{sij , skl} > max{sik, sil, sjk, sjl}. the present paper connects the optimisation principle to the aforementioned approach of defining pairwise similarities from comparisons. Prior works on comparison-based hierarchical clustering provide a range of computational and statistical results. On the computational side, it is known that both the problems of maximum rooted triplet consistency and maximum quartet consistency are NP-hard (Byrka et al., 2010; Snir and Yuster, 2011). However, polynomial-time constant factor approximation algorithms are known in both cases, assuming that a uniformly random subset of triplets/quartets is available. For triplets, Wu (2004) provides a 13 -approximation algorithm—a fraction at least 13 of the given triplets are satisfied—which is slightly improved in Byrka et al. (2010). For quartets, polynomial-time algorithms that satisfy at least a (1−)-fraction of the given quartets are known (Jiang et al., 2000; Snir and Yuster, 2011). While the above results focus on finding hierarchies that only match the given triplets/quartets, Emamjomeh-Zadeh and Kempe (2018) show that the true (latent) hierarchy can be recovered only if Ω(n 3) passive (uniformly sampled) triplets are available. In contrast, only O(n log n) triplets suffice if they are actively queried. In Section 5, we show that only O(n 2log n/2) uniformly sampled triplets suffice to obtain a (1 − )-approximation of the optimal triplet revenue. A different latent model is considered in Ghoshdastidar et al. (2019) and Perrot et al. (2020), where the objects have latent (noisy) pairwise similarities that have a (hierarchical) cluster structure. Noisy triplets/quadruplets are uniformly sampled following the noisy latent similarities. While Ghoshdastidar et al. (2019) focus on quadruplet-based hierarchical clustering and show that O(n 3.5log n) suffice to recover the latent hierarchy, Perrot et al. (2020) show that flat latent clusters can be exactly recovered using only O(n 2log n) uniformly sampled triplets/quadruplets. Although our model is different from Perrot et al. (2020), we obtain a similar O(n 2log n) upper bound on sample complexity—even when the triplets are noisy. ## 3 Preliminaries We consider the problem of hierarchical clustering of a set of n objects, denoted by [n] = {1, 2*, . . . , n*}. In this paper, we assume that a hierarchy or dendrogram on [n] is a binary tree H whose root node is the set [n], each leaf node is a singleton containing one of the n objects, and each internal node represents a set C ⊆ [n] with its two children, C1 and C2, denoting a partition of C, that is min(|C1|, |C2|) > 0, C = C1 ∪C2, and C1 ∩ C2 = ∅. In the following, we use binary tree or tree to designate a hierarchy. For node C, we use H(C) to denote the subtree rooted at C and |H(C)| represents the number of leaves in the subtree, or equivalently, the number of objects in the set C. For objects *i, j* ∈ [n], let i ∨ j denote the smallest node in the tree containing both i and j, and H(i ∨ j) denote the smallest subtree containing both i and j. The goal of hierarchical clustering is to find a dendrogram H that is *optimal*, or at least *good*, in some sense. In the next subsections, we recall Dasgupta's cost for hierarchical clustering that allows one to measure the goodness of a dendrogram given full access to pairwise similarities, and then describe the comparison-based learning framework, where only triplet or quadruplet comparisons are available. In the paper, we use the standard Landau notations O(·), Ω(·), o(·), where the asymptotics are defined with respect to n. ## 3.1 Dasgupta'S Cost For Hierarchical Clustering Suppose one has access to a function s : [n]×[n] → R, symmetric, such that sij = s(*i, j*) denotes the pairwise similarity between objects *i, j* ∈ [n]. Dasgupta's cost function (Dasgupta, 2016) for a dendrogram H on [n], with respect to the pairwise similarity s, is defined as $$Dcost(H,s)=\sum_{i,j\in[n],\ i<j}s_{ij}\cdot|H(i\lor j)|\,.\tag{1}$$ An equivalent definition of the above cost can be found in Wang and Wang (2020), where the cost is expressed in terms of triplets of objects instead of pairs. To find a good dendrogram, Dasgupta (2016) proposed to minimize this cost over all trees. While it is NP-hard to find the optimal solution, several relaxations are known to have constant factor approximation guarantees for Dasgupta's cost or related quantities. In $\quad(1)$ . particular, Moseley and Wang (2017) defined Dasgupta's revenue function, $$D r e v(H,s)=n\sum_{i,j\in[n],\ i<j}s_{i j}-D c o s t(H,s).$$ $$\left(2\right)$$ sij − Dcost(*H, s*). (2) Note that since Psij is fixed, maximising Drev(*H, s*) over all binary trees is equivalent to minimising Dcost(*H, s*). It can then be shown that the revenue of the tree H obtained from average linkage achieves a revenue Drev(*H, s*) that is at least 13 of the revenue of the optimal tree that would achieve the best revenue, provided that the similarity function s is non-negative. ## 3.2 Comparison-Based Learning In the present paper, we assume that the pairwise similarities {sij}i,j∈[n] are not available. Instead the algorithm has access to either a set of triplets T , which is a subset of $$T_{a l l}=\{(i,j,k)\in[n]^{3}:s_{i j}>s_{i k},\ i,j,k\ \mathrm{distinct}\},$$ or a set of quadruplets Q ⊆ Qall, where $\mathcal{Q}_{all}=\{(i,j,k,l)\in[n]^{4}:\ s_{ij}>s_{kl},i<j,\ k<l,\ (i,j)\neq(k,l)\}$. Since the pairwise similarities are assumed symmetric, we set *i < j* and *k < l* to avoid considering the same comparison multiple times. We note that the number of possible comparisons is high—|Qall| = O(n 4) and |Tall| = O(n 3)—but, in practice, the observed comparisons, T or Q may be fewer than that, about O(n 2) comparisons, as can be seen from Table 3. Note that whenever a triple (*i, j, k*) is considered, T contains either (i, j, k) or (*i, k, j*), depending on whether sij ≶ sik. The same holds for quadruplets. We further assume that the observed set of comparisons T , or Q, is passively collected, that is the algorithm cannot decide which comparisons should be present in it—as opposed to the active setting where the algorithm can choose which comparisons should be observed (Ghoshdastidar et al., 2019). ## 4 Comparison-Based Revenue We present two comparison-based revenue functions for hierarchical clustering, one in the triplets framework and the other for quadruplet comparisons. Triplet comparisons. We first consider the case of triplets and assume that the algorithm has access to a passively collected set of triplets T . We define the triplet comparison revenue of a binary tree (dendrogram) H on [n], using triplets T , as $$Trev(H,\mathcal{T})=\sum_{(i,j,k)\in\mathcal{T}}\Big{(}|H(i\lor k)|-|H(i\lor j)|\Big{)}.\tag{3}$$ For every (i, j, k) ∈ T , we know that i is more similar to j than to k, and hence, we prefer to merge *i, j* before merging i and k. It means the ideal tree should have |H(i ∨ k)| > |H(i ∨ j)| for every (i, j, k) ∈ T . Hence, it is desirable to maximise |H(i ∨ k)*| − |*H(i ∨ j)| for every observed triplet (i, j, k) ∈ T . We then propose to formulate triplets comparison-based hierarchical clustering as the problem of maximizing *T rev*(H, T ) over all binary trees. Remark 1. We note that the proposed triplet revenue is significantly different from the triplet based cost presented in Wang and Wang (2020). The most important distinction is that the triplet cost in Wang and Wang (2020) is a reformulation of Dasgupta's cost, and requires knowledge of pairwise similarities. In contrast, the revenue in equation 3 is computed only from triplet comparisons without access to pairwise similarities. Quadruplet comparisons. The above formulation can be similarly stated in the quadruplets setting. Assuming that the algorithm has access to a passively collected set of quadruplets Q, we define the quadruplet comparison revenue of a binary tree H on [n] as $$Qrev(H,\mathcal{Q})=\sum_{(i,j,k,l)\in\mathcal{Q}}\Big{(}|H(k\lor l)|-|H(i\lor j)|\Big{)}.\tag{4}$$ Similar to the triplet setting, every (i, j, k, l) ∈ Q indicates that *i, j* should be merged earlier than *k, l* in H, and we prefer trees such that |H(k ∨ l)*| ≥ |*H(i ∨ j)|. We propose to achieve this by finding a tree that maximises *Qrev*(H, Q). Connection with Dasgupta's cost. While one may try to directly maximise the above comparisonbased revenue functions, the following equivalence to Dasgupta's cost and revenue allows us to employ existing methods for hierarchical clustering that require pairwise similarities. In the following, let IE denote the indicator of event E, that is, IE = 1 if E happens, and 0 otherwise. Theorem 1. For any given set of triplets T *and any dendrogram* H on [n], $$T r e v(H,\mathcal{T})=-D c o s t(H,s^{A d d S3})=D r e v(H,s^{A d d S3}),$$ where s AddS3*refers to the additive similarity from triplets (AddS3) defined by Perrot et al. (2020)* $$s_{i j}^{A d d S3}=\sum_{k\neq i,j}\left(\mathbb{I}_{(i,j,k)\in\mathcal{T}}-\mathbb{I}_{(i,k,j)\in\mathcal{T}}+\mathbb{I}_{(j,i,k)\in\mathcal{T}}-\mathbb{I}_{(j,k,i)\in\mathcal{T}}\right)$$ Similarly, for any set of quadruplets Q and dendrogram H, $$a m\ H,$$ $$Q r e v(H,\mathcal{Q})=-D c\alpha$$ Qrev(H, Q) = −Dcost(H, s*AddS*4) = Drev(H, s*AddS*4), where s AddS4*is the additive similarity from quadruplets (AddS4) defined by Perrot et al. (2020)* $$d S4\rangle,$$ $$s t(H,s^{A d d}$$ $$s_{i j}^{A d d S4}=\sum_{k\neq l,\ (k,l)\neq(i,j)}\left(\mathbb{I}_{(i,j,k,l)\in\mathbb{Q}}-\mathbb{I}_{(k,l,i,j)\in\mathbb{Q}}\right)\,.$$ Proof idea (details in appendix). Proving *T rev*(H, T ) = −Dcost(H, s*AddS*3) involves a rearrangement of terms, with the observation that, for every *i, j*, the term |H(i ∨ j)| appears in the summation in equation 3 with coefficient −1 when (i, j, k) ∈ T or (j, i, k) ∈ T and with coefficient +1 when (i, k, j) ∈ T or (j, k, i) ∈ T . Adding these coefficients for all k 6= *i, j* gives us −s AddS3 ij , and proves the equality. The second equality −Dcost(H, s*AddS*3) = Drev(H, s*AddS*3) simply follows from the observation that Pi<j s AddS3 ij = 0. The proof for quadruplets is similar. ## 5 Recovering A Latent Hierarchy By Triplet Revenue Maximisation In this section, we consider the problem of recovering a latent hierarchy from triplet comparisons, earlier studied in Emamjomeh-Zadeh and Kempe (2018). Let H0 be a hierarchy on [n], from which we derive a set of triplets* $${\mathcal{T}}_{0}=\{(i,j,k),(j,i,k)\ :\ |H_{0}(i\lor j)|<\operatorname*{min}(|H_{0}(i\lor k)|,|H_{0}(j\lor k)|)\}.$$ One can show that any rooted tree H0that satisfies all triplets in T0 is equivalent to H0, up to isomorphic transformations, and hence, one can exactly recover H0 given T0. Note that |T0| = nn 2 . It is natural to ask whether, H0 can be recovered if a significantly smaller number of triplets are observed. To this end, Emamjomeh-Zadeh and Kempe (2018) show that if the algorithm can choose the triplets to be queried (active setting), then a deterministic algorithm can recover H0 using only n log2 n queries. However, the authors also construct a (randomised) H0 to show that for any fixed set *T ⊂ T*0 with fewer than n 3/48 triplets, the *Emamjomeh-Zadeh and Kempe (2018) consider triples of the form {*i, j, k*} that imply *i, j* are closer to each other than k, with respect to H0. Each such triple {*i, j, k*} correspond to two triplets (*i, j, k*) and (*j, i, k*) in our setting. $$\left(5\right)$$ latent hierarchy H0 cannot be recovered with probability at least 1/2. This raises the question—can one approximately recover H0 from a smaller set of triplets T ? We use the proposed triplets-based revenue to answer this question in the affirmative. Before providing an approximation guarantee, we first show the significance of our formulation in this context by proving that one can recover H0 from T0 by maximising *T rev*. Proposition 2. Consider the aforementioned setting, where H0 is a hierarchy on [n] objects, and T0 is the corresponding set of triplets as defined above. Then $$H_{0}=\arg\operatorname*{max}_{H}\;\;T r e v(H,{\mathcal{T}}_{0}),$$ where the maximisation is over all binary trees H on [n]. Proof idea (details in appendix). One can show that if T0 is a set of triplets corresponding to hierarchy H0 according to equation 5 and (sij )1≤i,j≤n denote the AddS3 similarity derived from T0 (cf. Theorem 1), then sij = 2n+ 2−3|H0(i∨j)|, that is, one can recursively construct H0 from the AddS3 similarities. We further use this representation to show that if T0, T1 are respectively derived from two hierarchies H0, H1 according to equation 5, then there is a symmetry in the triplet revenue of the form *T rev*(H0, T1) = *T rev*(H1, T0). The above symmetry implies that proving H0 uniquely maximises *T rev*(H, T0) is equivalent to showing that T rev(H0, T0) *> T rev*(H0, T1) for every triplet set T1 that correspond to a binary tree H1 on [n] that is not isomorphic to H0. This last claim can be proved by showing that every *i, j, k*—for which the ordering of merger is changed between H0 and H1—has a positive contribution in T rev(H0, T0) − *T rev*(H0, T1). Emamjomeh-Zadeh and Kempe (2018) prove the uniqueness of the hierarchy that satisfies all the triplets in T0, that is, H0 maximises the function f(H, T0) = P(*i,j,k*)∈T0 I|H(i∨k)|>|H(i∨j)|. While maximising T rev(H, T0) seems to be a relaxation of maximising f(H, T0) in this context, Proposition 2 shows that both problems have the same optimal solution H0. ## 5.1 Approximate Recovery Of H0 **Using Passive Triplets** We consider the setting, where T0 is not completely available but one has access to a uniformly sampled subset *T ⊆ T*0. We show that *|T |* = O(n 2log n/2) triplets suffice to obtain a tree Hb such that *T rev*(H, b T0) ≥ (1−)·*T rev*(H0, T0), that is, we get a good approximation of H0 with much fewer than n 3samples, although we may not exactly recover H0. We consider the following uniform sampling to obtain T . Let pn ∈ (0, 1] denote a sampling probability, depending on n. For every pair of triplets (i, j, k),(j, i, k) ∈ T0, we add the pair to T with probability pn. We state the following approximation guarantee for trees derived using T . Theorem 3. For a triplet set T *obtained from the above sampling procedure, consider the hierarchy* $${\hat{H}}=\operatorname*{arg\,max}_{H}\;\;T r e v(H,T).$$ For any constants α > 0 and 0 < < 1/2, if n > 8/ and pn > 2 12· (α + 2) log n/n2, then with probability at least 1 − 2n −α, $$0.1p_{n}n^{3}\leq|{\mathcal{T}}|\leq0.5p_{n}n^{3}$$ $$a^{3}\qquad a n d\qquad T r e v(\hat{H},{\mathcal{T}}_{0})\geq(1-\epsilon)\cdot T r e v(H_{0},{\mathcal{T}}_{0}).$$ Proof idea (details in appendix). We need to relate *T rev*(H, T ) with *T rev*(H, T0) for every tree H. Since, the revenue function is linear with respect to the observed triplets, E[*T rev*(H, T )] = pn · *T rev*(H, T0), where the expectation is with respect random observation of each triplet pair in T0. We use concentration inequalities to bound deviation from expectation, maxH T rev(H, T ) − E[*T rev*(H, T )] . Although there are exponentially many H, one can note that, due to Theorem 1, the deviation can be controlled by bounding the maximum deviation of the n 2 AddS3 similarities, maxi<j sij − E[sij ] . For this we use, Bernstein inequality (for each sij ) followed by union bound (over all *i, j*). Thus, we show that for any constant α > 0, if pn > (α + 2) log n/n, then maxH T rev(H, T ) − pn*T rev*(H, T0) = On 3√pnn log nwith probability $$(6)$$ 1 − n −α. Using concentration for both H0, Hb, and noting that T rev(H, b T ) ≥ *T rev*(H0, T ), we have that T rev(H, b T0) ≥ *T rev*(H0, T0) − O qn7 log n pn . For stated condition on pn, the second term is at least n4 24 . Next, we show that *T rev*(H0, T0) ≥ n 4 12 − 2(n 3−n 2−n) 3, which is at least (1 − ) n 4 12 for n > 8/. This follows since AddS3 similarities computed from T0 are of the form sij = 2n + 2 − 3|H0(i ∨ j)|, and hence, we can rewrite *T rev*(H0, T0) in terms sizes of internal nodes. The lower bound follows from inductive arguments. Combining the lower bound with the deviation bound results in the theorem. The claim *|T |* = Θ(pnn 3) with probability 1 − n −α follows from E*|T |* = pn|T0| = Θ(pnn 3) and multiplicative Chernoff inequality. With pn fixed at the stated threshold, Theorem 3 shows that, with probability 1 − 2n −α, we can achieve (1 − )-approximation of triplet revenue using *|T |* = Θ(n 2log n/2) triplets. The result in Emamjomeh- Zadeh and Kempe (2018, Proposition 2.2)—that Ω(n 3) triplets are necessary to exactly recover H0—hinges on the fact that it is impossible to correctly guess the hierarchy at the lowest level of the tree H0 using fewer comparisons. Since errors in the lowest level do not significantly affect *T rev*, we can achieve the (1 − )-approximation in Theorem 3. However, note that Hb may not be efficiently computable as it requires exhaustive search over all trees. We discuss practical algorithms in the next section. Theorem 3 is stated in the noiseless setting, where it is assumed that every observed triplet in T is correct. It is natural to ask if Theorem 3 still holds under a noisy setting, where some triplets may be flipped with some probability. To formalise this, let *T ⊆ T*0 be a set of triplets obtained from the sampling procedure in Theorem 3. Let T 0 be constructed such that, for every (i, j, k) ∈ T , T 0contains (*i, j, k*) with probability 1 − δ, or (*i, k, j*) with probability δ. The random flipping of labels is independent for all (i, j, k) ∈ T . We obtain the following corollary from a minor modification of the above proof (details in appendix). Corollary 4. *For any fixed flipping probability* δ ∈ (0, 1 2 ), and for any α > 0 and 0 *< <* 1/2, max H*T rev*(H, T 0) ≥ (1 − ) · T rev(H0, T0) *with probability* 1 − n −α if n > 8/ and pn > 2 12·(α+2) log n n2(1−2δ) 2 . ## 6 Comparison-Based Algorithms For Hierarchical Clustering The equivalence between comparison-based revenues and Dasgupta's revenue, stated in Theorem 1, implies that one may simply employ standard hierarchical clustering algorithms using the pairwise similarities AddS3 or AddS4, depending on whether one has access to triplets or quadruplets. This makes it possible to use the well-established literature on hierarchical clustering with pairwise similarities. In fact, as mentioned before, previous works on passive comparison-based hierarchical clustering also follow this philosophy using other kind of pairwise similarities obtained from the comparisons (Kleindessner and von Luxburg, 2017; Ghoshdastidar et al., 2019). Unlike previous works, our use of AddS3 or AddS4 stems from a revenue maximisation formulation that allows us to consider an approach based on the average linkage (AL) clustering algorithm, that is, the following procedure: | AddS3-AL (or AddS4-AL) Given. A set of triplets T (or quadruplets Q) on [n] Step 1. Compute the pairwise similarity function s AddS3 (or s | | |----------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------| | Step 2. | Run average linkage algorithm with s AddS3 (or s AddS4 ) | | Output. | The tree or dendrogram H on the n objects | Step 1. Compute the pairwise similarity function s AddS3(or s AddS4) for every pair of objects Remark on approximation guarantee. Average linkage enjoys theoretical guarantees under the assumption that the similarities are always positive. Moseley and Wang (2017) show that average linkage achieves a worst-case 13 -approximation for revenue maximisation. Unfortunately, this result does not readily extend to AddS3-AL and AddS4-AL, as these similarities may be negative in some cases. A possible approach could be to add a positive constant to all the similarities to ensure that they are positive. Although this does not change the optimal tree or the one obtained from average linkage, a 13 -approximation for the modified revenues (considering revised similarities) does not imply a 13 -approximation for the original revenues. Based on the proof of Moseley and Wang (2017), one can show that AddS3-AL (or AddS4-AL) returns a tree with non-negative triplet (or quadruplet) comparison revenue. Whether approximation guarantees may also be derived for AddS3-AL and AddS4-AL remains open. ## 7 Experiments In this section, we propose two sets of experiments *to demonstrate the practical relevance of our new revenue function and the corresponding algorithm. In our first set of experiments, our goal is to show the usefulness of revenue maximisation as a solution to find hierarchies that are closer to the ground truth. We consider a planted model and demonstrate the alignment between AARI scores, a supervised metric of goodness for clustering, and our proposed revenue function. In our second set of experiments, we aim to show that the heuristic proposed in Section 6 to maximize the revenue performs well in practice. On real datasets, we compare our approach to two different state of the art approaches in comparison-based hierarchical clustering. ## 7.1 Planted Model In this first set of experiments, we study the behaviour of the proposed revenues in a controlled setting. Hence, we generate data using a planted model for comparison-based hierarchical clustering (Ghoshdastidar et al., 2019) and we use 3 triplets-based and 2 quadruplets-based methods to learn dendrograms. Data. To generate the data in this first set of experiments, we use a standard planted model in comparisonbased hierarchical clustering (Balakrishnan et al., 2011; Ghoshdastidar et al., 2019). Given n objects, we create a real similarity matrix S = (sij )1≤i,j≤n such that sii = 0 and sij = sji correspond to the similarity between objects i and j. We assume that (sij )i<j are independent Gaussians with sij ∼ N (µij , σ2). The choice of µij defines a planted hierarchy, a complete binary tree of height L, built on top of 2 L ground clusters, that is sets of n0 objects denoted as C1, C2*, ...,* C2L . The total number of points (leaves) in the complete hierarchy is thus n = n02 L. For every pair of objects *i, j* that belong to the same ground cluster, µij = µ—a constant. On the other hand, for objects *i, j* from two distinct clusters, if H(i ∨ j) is rooted at level `, we define µij = µ − (L − `)δ. We observe that the tree is rooted at level-0 and the constantsseparation δ and noise level σ—control the hardness of the problem. In particular, smaller values of δ make the similarities between examples that belong to the same cluster more difficult to distinguish from similarities between examples that belong to different clusters. The signal-to-noise ratio is thus δσ . In all the experiments, we set µ = 0.8, σ = 0.1, n0 = 30, L = 3 and we vary δ ∈ {0.02, 0.04*, ...,* 0.2}. Since we are in a comparison-based setting, we do not directly use the similarities of the planted model to learn dendrograms but instead generate comparisons. Given Tall and Qall the sets containing all possible triplets and quadruplets (see preliminaries), we obtain T ⊆ Tall and Q ⊆ Qall by uniformly sampling kn2comparisons with k > 0. Evaluation Function. To measure the closeness between the dendrograms obtained by the different approaches and the ground truth trees, we use the Averaged Adjusted Rand Index (Ghoshdastidar et al., 2019). The AARI is an extension to hierarchies of a well-known measure in standard clustering called Adjusted Rand Index (ARI; see Hubert and Arabie, 1985). The underlying idea is to average the ARI obtained over the top L levels of the tree. This measure takes values in [0, 1] with higher values for more similar hierarchies, an AARI of 1 implying identical trees. Our goal is to empirically verify that the hierarchies with higher revenues are the ones closest to the ground truths as indicated by a higher AARI. Indeed, this would show that our revenue function is appropriate to evaluate the goodness of a dendrogram and that maximizing the revenue is indeed a good unsupervised way to select hierarchies. The results reported are averaged over 10 independent trials. * We defer the standard deviations to the appendix for the sake of readability. Methods. We compare AddS3-AL and AddS4-AL, the two methods proposed in this work, to various comparison-based algorithms for learning dendrograms, such as 4K-AL (Ghoshdastidar et al., 2019), a *The code is available at https://github.com/jitaishik/Revenue_ComparisonHC.git *The randomness stems from three sources: the noise in the similarities, triplets selection, and the optimization procedure in tSTE. We fix the seeds to 0-9 in the 10 runs. ![9_image_0.png](9_image_0.png) Figure 1: Revenue and AARI (higher is better) of several triplets-based methods using n 2comparisons. Given various signal to noise ratios, a higher revenue implies higher AARI values (better dendrograms). Table 1: Revenue and AARI of various methods for a fixed signal to noise ratio δσ = 1.5 and varying number of triplets (decreased by factor of 2). The planted setting consists of a total of n = 240 objects. In each line the highest revenue and AARI are underlined, taking into account standard deviation (see appendix). This shows that the two measures are well aligned. | Number of | AddS3-AL | tSTE-AL | MulK3-AL | | | | |-------------|-------------|-----------|-------------|-------|-------------|-------| | triplets | Revenue | AARI | Revenue | AARI | Revenue | AARI | | 16n 2 | 7.347 × 107 | 0.937 | 7.300 × 107 | 0.877 | 7.315 × 107 | 0.861 | | 8n 2 | 3.667 × 107 | 0.905 | 3.656 × 107 | 0.877 | 3.636 × 106 | 0.855 | | 4n 2 | 1.823 × 107 | 0.862 | 1.825 × 107 | 0.874 | 1.795 × 107 | 0.830 | | 2n 2 | 8.962 × 106 | 0.782 | 9.130 × 106 | 0.867 | 8.444 × 106 | 0.677 | | n 2 | 4.315 × 106 | 0.682 | 4.559 × 106 | 0.868 | 3.728 × 106 | 0.540 | | n 2/2 | 2.038 × 106 | 0.593 | 2.277 × 106 | 0.860 | 1.220 × 106 | 0.347 | | n 2/4 | 9.268 × 105 | 0.498 | 1.137 × 106 | 0.851 | 1.531 × 105 | 0.077 | | n 2/8 | 4.261 × 105 | 0.396 | 5.728 × 105 | 0.840 | 1.856 × 104 | 0.011 | | 2/16 | 2.015 × 105 | 0.295 | 2.858 × 105 | 0.720 | 4.026 × 103 | 0.005 | | n n 2/32 | 1.096 × 105 | 0.192 | 1.450 × 105 | 0.549 | 2.015 × 103 | 0.003 | quadruplets-based method, along with two triplets-based approaches MulK3-AL (Kleindessner and von Luxburg, 2017; Perrot et al., 2020) and tSTE-AL (Van Der Maaten and Weinberger, 2012). The former two are similarity-based approaches where the idea is use the comparisons to learn a similarity. The latter is an ordinal embedding approach where the idea is to recover a representation of the data that respects the comparisons as well as possible, and then use the cosine similarity sij =hxi,xj i ||xi||2||xj ||2 to compare the examples. To learn the dendrograms we then apply standard average linkage to the various similarities. Results. In Figure 1, we present the AARI and Revenue of different triplet-based methods for several signal to noise ratios using n 2comparisons.* We observe that, given a set signal to noise ratio, the ordering between the methods remains the same for the revenue and the AARI, that is the method with the highest revenue is also the one with the highest AARI. In other words, a higher revenue indicates that the corresponding dendrogram is better. In Table 1, we verify that this remains true for a constant signal to noise ratio of 1.5 and various number of observed comparisons. In particular, we notice that when the revenue of AddS3-AL becomes higher than the revenue of tSTE-AL, that is using more than 4n 2triplets, the AARI also follows the same trend, thus confirming that selecting the dendrogram with the highest revenue is indeed a good *Note that we also considered other amounts of comparisons. However, the trends were similar to the ones observed here and thus we chose to defer these results to the appendix. Table 2: Experiments on real datasets. For the triplets-based methods, AddS3-AL tends to obtain the dendrograms with the best revenues. For the quadruplets-based approaches, AddS4-AL and 4K-AL obtain comparable results. Using the original Cosine similarities only yields slightly better hierarchies than the comparison-based methods. For first 3 datasets, multiple runs are used and standard deviation (see appendix) is considered for highlighting the best method(s). | Dataset | Triplet | Quadruplet | | | | | | |-----------|-----------|--------------|-----------|-----------|-----------|-----------|-----------| | AddS3-AL | tSTE-AL | MulK3-AL | Cosine-AL | AddS4-AL | 4K-AL | Cosine-AL | | | Zoo | 2.771×105 | 2.164×105 | 2.041×105 | 2.824×105 | 2.828×105 | 2.866×105 | 2.945×105 | | Glass | 2.161×106 | 1.973×106 | 1.412×106 | 2.110×106 | 2.429×106 | 2.425×106 | 2.494×106 | | MNIST | 1.893×109 | 2.061×109 | 1.724×109 | 2.064×109 | 1.905×109 | 1.884×109 | 2.075×109 | | Car | 1.521×105 | 1.562×105 | 1.264×105 | - | 1.521×105 | 1.125×105 | - | | Food | 6.137×106 | 5.993×106 | 6.096×106 | - | 6.137×106 | 6.137×106 | - | | Vogue | 2.722×104 | 2.104×104 | 3.022×103 | - | 2.722×104 | 2.549×104 | - | | Nature | 2.650×105 | 2.056×105 | 1.231×105 | - | 2.650×105 | 2.228×105 | - | | Imagenet | 7.179×107 | 6.571×107 | 3.440×107 | - | 7.179×107 | 6.994×107 | - | way to select meaningful hierarchies. In the appendix, we show that the same behaviour can be observed for various signal to noise ratios as well as in the quadruplet case. We further investigate the dependence between AARI and the triplet revenue in Figure 2, where we plot the AARI and the corresponding triplet revenue (in log scale) for different runs and number of triplets, considered in Table 1. Although the variation of the revenue for increasing AARI seems to depend on the method under consideration, all plots show a monotonic trend. To validate this we use two measures of rank correlation— Kendall's-τ and Spearman's-ρ—that capture how well the dependence is captured as monotonic function. For both AddS3-AL and MulK3-AL, the rank correlations are greater than 0.9, indicating a highly monotonic trend. Although the rank correlations are smaller for tSTE-AL, but still large enough and corresponding p-value is about 10−17, indicating rank correlation. ## 7.2 Real Data The previous experiments establish that our revenue functions are good at identifying meaningful dendrograms in an unsupervised way. In the following experiments, we investigate the behaviour of the proposed approaches on real data. In particular, we show that they are competitive with standard comparison-based hierarchical clustering approaches on various datasets. ![10_image_0.png](10_image_0.png) Figure 2: Scatter plots for the AARI and triplet revenue from different runs and varying number of triplets in the setting of Table 1. The three methods (AddS3, tSTE and MulK3) show different trends. However, in each case, the triplet revenue and AARI have high Kendall-τ and Spearman-ρ correlation. Data. We consider 8 different datasets. On the one hand, we consider 3 standard clustering datasets: Zoo, Glass, and MNIST (Heller and Ghahramani, 2005; LeCun et al., 2010; Vikram and Dasgupta, 2016). The Zoo dataset originally consisted of 101 animals each with 16 features. But we choose to remove the entry with class "girl" since we do not feel it belongs to the zoo dataset. The Glass dataset has 9 features for 214 examples. For the MNIST dataset we consider two subsets of the MNIST test dataset that originally contains 10000 examples distributed among the ten digits. A 2dimensional embedding of the entire MNIST test data was constructed with t-SNE (van der Maaten, 2014). From this we randomly sampled 200 examples for each digit to form a dataset of 2000 entries and normalized the embeddings so that each example lies in [−1, 1]. Since we are in a comparison-based setting, we generate n 2comparisons using the cosine similarity. To model mistakes from human annotators, we randomly and uniformly flip 5% of the comparisons (Emamjomeh-Zadeh and Kempe, 2018), where by flipping (*i, j, k*) we mean replacing it with (*i, k, j*). On the other hand, we consider 5 comparison-based datasets, Car, Food, Vogue Cover, Nature Scene and ImageNet Images v0.1, from the cblearn repository.* The number of objects and the kind of query used to obtain comparisons are summarized in Table 3. The comparisons are transformed into triplets (final number of triplets noted in Table 3), which are also used in the quadruplet setting. In Table 3, a central triplet is a query of the form—which of the three objects (i,j,k) is most central. Provided that the answer is i, this implies object i is more similar to both j and k than they are to each other. Two standard triplets (j,i,k) and (k,i,j) are thus obtained. An Odd-out triplet is a query of the form—which of the three objects (i,j,k) is the odd one out. If i is picked as the odd one, it gives two standard triplets of the form (j,k,i) and (k,j,i). A rank 2 from 8 query is of the form—among 8 objects (i0*, . . . , i*7), rank the 2 that appear to be most similar to the reference object i0. If i1 and i2 are ranked as the most similar to i0 in this order, then 11 standard triplets of the form (i0, i1, ik) 7 k=2 and (i0, i2, ik) 7 k=3 are obtained. Table 3: Description of datasets used in the experiments. Dataset Query \#Objects \#Triplets Zoo Cosine Similarity 100 100000 Glass Cosine Similarity 214 45796 MNIST Cosine Similarity 2000 4000000 Car Most Central Triplet 60 14194 Food Standard Triplet 100 190376 Vogue Odd-out Triplet 60 2214 Nature Odd-out Triplet 120 6710 Imagenet Rank 2 from 8 1000 328549 Evaluation Function. Since the datasets considered here do not come with a ground truth hierarchy, we cannot compute the AARI. Hence, we only report the revenue. The results reported are averaged over 10 independent trials* and defer the standard deviations to the appendix. Methods. Besides the methods already used in the planted setting, we also consider the Cosine baseline where it is assumed that the pairwise cosine similarities are available, and we apply average linkage directly on the similarities used to generate the comparisons. This baseline is not applicable to the comparison-based datasets where we only have access to the comparisons and not to the similarities. Results. The results are reported in Table 2. We can notice that AddS3-AL tends to be better than tSTE-AL and MulK3-AL while AddS4-AL and 4K-AL are comparable. As is expected, the Cosine baseline based on the original similarities obtains the best performances in most cases, but it only seems to yield slightly better hierarchies than the comparison-based methods. This would tend to confirm that hierarchical clustering with average linkage is indeed a problem that can be solved using only a limited number of comparisons, instead of using all similarities. *https://github.com/dekuenstle/cblearn *The randomness stems from two main sources: the triplets generation (in the Zoo, Glass, and MNIST dataset), the optimization procedure in tSTE (initialization, batch selection). We fix the seeds to 0-9 for the 10 runs. For all the other datasets and methods, every step is deterministic and, thus, we only need to report the results of a single run. ## 8 Conclusion In this paper, we proposed novel revenue functions that allow us to measure the goodness of a dendrogram in an unsupervised way using only triplet or quadruplet comparisons. This suggest natural algorithms for hierarchical clustering based on the maximization of such revenues. Drawing theoretical connections with existing work on cost and revenue functions in standard hierarchical clustering, we propose two algorithms based on average linkage for hierarchical clustering using only comparisons. We empirically show that our revenue functions successfully identify the dendrograms that are closest to the ground truth. We also show that the proposed approaches to learn hierarchies perform well on real datasets and are competitive with state of the art methods. We further used the proposed revenue function to resolve an open theoretical problem of recovering a latent hierarchy using fewer than Ω(n 3) passive triplets. We showed that O(n 2log n/2) passive triplets suffice to obtain a (1−)-approximation of the optimal triplet revenue. We conclude with the following open questions: (i) Are Ω(n 2log n) passive triplets necessary for a (1 − )-approximation? (ii) At this point, we are unable to obtain a polynomial-time approximation scheme (PTAS) for the revenue maximisation problem, but we believe this should be possible. It may also be possible to obtain more efficient algorithms that are linear in time with respect to the number of triplets. A linear-time method exists for maximum quartet consistency (Snir and Yuster, 2011). ## Acknowledgments The work of A. Mandal was supported by the German Academic Exchange Service (DAAD) through the Working Internships in Science and Engineering (WISE) scholarship.The work of D. Ghoshdastidar is partly supported by the German Research Foundation through the DFG-ANR PRCI "ASCAI" (GH 257/3-1). ## References E. N. Adams III. Consensus techniques and the comparison of taxonomic trees. *Systematic Biology*, 21(4): 390–397, 1972. S. Agarwal, J. Wills, L. Cayton, G. Lanckriet, D. Kriegman, and S. Belongie. Generalized non-metric multidimensional scaling. In *Artificial Intelligence and Statistics*, pages 11–18. PMLR, 2007. A. V. Aho, Y. Sagiv, T. G. Szymanski, and J. D. Ullman. Inferring a tree from lowest common ancestors with an application to the optimization of relational expressions. *SIAM J. Comput.*, 10(3):405–421, 1981. doi: 10.1137/0210030. S. Balakrishnan, M. Xu, A. Krishnamurthy, and A. Singh. Noise thresholds for spectral clustering. Advances in Neural Information Processing Systems, 24:954–962, 2011. K. S. Berenhaut, K. E. Moore, and R. L. Melvin. A social perspective on perceived distances reveals deep community structure. *Proceedings of the National Academy of Sciences*, 119(4):e2003634119, 2022. J. Byrka, S. Guillemot, and J. Jansson. New results on optimizing rooted triplets consistency. Discret. Appl. Math., 158(11):1136–1147, 2010. doi: 10.1016/j.dam.2010.03.004. D. Catanzaro. The minimum evolution problem: Overview and classification. *Networks*, 53(2):112–125, 2009. doi: 10.1002/net.20280. M. Charikar, V. Chatziafratis, and R. Niazadeh. Hierarchical clustering better than average-linkage. In T. M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2291–2304. SIAM, 2019. doi: 10. 1137/1.9781611975482.139. V. Chatziafratis, M. Mahdian, and S. Ahmadian. Maximizing agreements for ranking, clustering and hierarchical clustering via MAX-CUT. In A. Banerjee and K. Fukumizu, editors, *The 24th International* Conference on Artificial Intelligence and Statistics, volume 130 of *Proceedings of Machine Learning Research*, pages 1657–1665. PMLR, 2021. V. Cohen-Addad, V. Kanade, F. Mallmann-Trenn, and C. Mathieu. Hierarchical clustering: Objective functions and algorithms. *Journal of the ACM (JACM)*, 66(4):1–42, 2019. S. Dasgupta. A cost function for similarity-based hierarchical clustering. In Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, STOC '16, page 118–127, New York, NY, USA, 2016. Association for Computing Machinery. doi: 10.1145/2897518.2897527. URL https://doi.org/10.1145/ 2897518.2897527. D. Ellis, B. Whitman, A. Berenzweig, and S. Lawrence. The quest for ground truth in musical artist similarity. In *Proceedings of ISMIR*, 2002. E. Emamjomeh-Zadeh and D. Kempe. Adaptive hierarchical clustering using ordinal queries. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 415–429. SIAM, 2018. L. Foulds, M. Hendy, and D. Penny. A graph theoretic approach to the development of minimal phylogenetic trees. *J. Mol. Evol*, 13:149, 1979. D. Ghoshdastidar, M. Perrot, and U. von Luxburg. Foundations of comparison-based hierarchical clustering. Advances in Neural Information Processing Systems, 32:7456–7466, 2019. S. Haghiri, D. Ghoshdastidar, and U. von Luxburg. Comparison-based nearest neighbor search. In *Artificial* Intelligence and Statistics, pages 851–859. PMLR, 2017. S. Haghiri, F. A. Wichmann, and U. von Luxburg. Estimation of perceptual scales using ordinal embedding. Journal of vision, 20(9):14–14, 2020. H. Heikinheimo and A. Ukkonen. The crowd-median algorithm. In *First AAAI Conference on Human* Computation and Crowdsourcing, 2013. K. A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the 22nd international conference on Machine learning, pages 297–304, 2005. L. Hubert and P. Arabie. Comparing partitions. *Journal of classification*, 2(1):193–218, 1985. M. Janowitz. Monotone equivariant cluster methods. *SIAM Journal on Applied Mathematics*, 37(1):148–165, 1979. M. F. Janowitz. *Mathematical Taxonomy*. Wiley Series in Probability and Mathematical Statistics. Wiley, 1971. ISBN 978-0471440505. T. Jiang, P. E. Kearney, and M. Li. A polynomial time approximation scheme for inferring evolutionary trees from quartet topologies and its application. *SIAM J. Comput.*, 30(6):1942–1961, 2000. doi: 10.1137/ S0097539799361683. E. Kazemi, L. Chen, S. Dasgupta, and A. Karbasi. Comparison based learning from weak oracles. In International Conference on Artificial Intelligence and Statistics, pages 1849–1858, 2018. M. Kleindessner and U. von Luxburg. Kernel functions based on triplet similarity comparisons. In *Advances* in Neural Information Processing Systems, pages 6807–6817, 2017. Y. LeCun, C. Cortes, and C. Burges. Mnist handwritten digit database, 2010. B. Moseley and J. Wang. Approximation bounds for hierarchical clustering: Average linkage, bisecting kmeans, and local search. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. M. Perrot and U. von Luxburg. Boosting for comparison-based learning. In *IJCAI*, 2019. M. Perrot, P. Esser, and D. Ghoshdastidar. Near-optimal comparison based clustering. In Advances in Neural Information Processing Systems, volume 33, pages 19388–19399. Curran Associates, Inc., 2020. C. Semple and M. A. Steel. *Phylogenetics*, volume 24. Oxford University Press on Demand, 2003. R. N. Shepard. The analysis of proximities: Multidimensional scaling with an unknown distance function. i. Psychometrika, 27(2):125–140, 1962. R. Sibson. Order invariant methods for data analysis. *Journal of the Royal Statistical Society: Series B* (Methodological), 34(3):311–338, 1972. S. Snir and S. Rao. Quartets maxcut: A divide and conquer quartets algorithm. *IEEE ACM Trans. Comput.* Biol. Bioinform., 7(4):704–718, 2010. doi: 10.1109/TCBB.2008.133. S. Snir and R. Yuster. A linear time approximation scheme for maximum quartet consistency on sparse sampled inputs. *SIAM J. Discret. Math.*, 25(4):1722–1736, 2011. doi: 10.1137/110820555. URL https: //doi.org/10.1137/110820555. N. Stewart, G. D. Brown, and N. Chater. Absolute identification by relative judgment. *Psychological review*, 112(4):881, 2005. A. Ukkonen. Crowdsourced correlation clustering with relative distance comparisons. In *2017 IEEE International Conference on Data Mining (ICDM)*, pages 1117–1122. IEEE, 2017. L. van der Maaten. Accelerating t-sne using tree-based algorithms. *Journal of Machine Learning Research*, 15(93):3221–3245, 2014. URL http://jmlr.org/papers/v15/vandermaaten14a.html. L. Van Der Maaten and K. Weinberger. Stochastic triplet embedding. In *2012 IEEE International Workshop* on Machine Learning for Signal Processing, pages 1–6. IEEE, 2012. L. C. Vankadara, S. Haghiri, F. U. Wahab, and U. von Luxburg. Large scale ordinal embedding: training neural networks with structure-free inputs. *arXiv preprint*, arXiv:1912.01666, 2019. S. Vikram and S. Dasgupta. Interactive bayesian hierarchical clustering. In *International Conference on* Machine Learning, pages 2081–2090. PMLR, 2016. D. Wang and Y. Wang. An improved cost function for hierarchical cluster trees. Journal of Computational Geometry, 11(1):283–331, 2020. doi: 10.20382/jocg.v11i1a11. M. Wilber, S. Kwak, and S. Belongie. Cost-effective hits for relative similarity comparisons. In Human Computation and Crowdsourcing (HCOMP), Pittsburgh, 2014. B. Y. Wu. Constructing the maximum consensus tree from rooted triples. *J. Comb. Optim.*, 8(1):29–39, 2004. doi: 10.1023/B:JOCO.0000021936.04215.68. ## A Proof Of Theorem 1 Recall the formulation of *T rev* for a tree H and a set of triplets T : T rev(H, T ) = X i,j,k I(i,j,k)∈T |H(i ∨ k)| − |H(i ∨ j)| (sum over all distinct i, j, k.) = X i,j,k I(i,j,k)∈T |H(i ∨ k)| − X i,j,k I(i,j,k)∈T |H(i ∨ j)| = X i,j,k I(i,k,j)∈T |H(i ∨ j)| − X i,j,k I(i,j,k)∈T |H(i ∨ j)| (by change of variable between j and k in the first term.) i6=j X k6=i,j I(i,k,j)∈T − I(i,j,k)∈T |H(i ∨ j)| = X i<j |H(i ∨ j)| × X k6=i,j I(i,k,j)∈T − I(i,j,k)∈T + I(j,k,i)∈T − I(j,i,k)∈T = X (by change of variable between i and j when i > j.) = X i<j −s AddS3 ij |H(i ∨ j)| by definition of s AddS3. This concludes the proof for the triplets-based revenue. Using a similar approach, recall the formulation of *Qrev* for a tree H and quadruplet set Q: Qrev(H, Q) = X i,j,k,l I(i,j,k,l)∈Q|H(k ∨ l)| − |H(i ∨ j)| = X i,j,k,l I(i,j,k,l)∈Q|H(k ∨ l)| − X i,j,k,l I(i,j,k,l)∈Q|H(i ∨ j)| (sum over all i < j, k < l,(i, j) 6= (k, l).) = X i,j,k,l I(k,l,i,j)∈Q|H(i ∨ j)| − X i,j,k,l I(i,j,k,l)∈Q|H(i ∨ j)| (by swapping the role of i, j and k, l in the first term.) i<j X k<l (k,l)6=(i,j) I(k,l,i,j)∈Q − I(i,j,k,l)∈Q|H(i ∨ j)| = X = X i<j −s AddS4 ij |H(i ∨ j)| by definition of s AddS4. This concludes the proof for the quadruplets-based revenue. ## B Proof Of Proposition 2 The proposition follows immediately from the following two lemmas. Lemma 5. Let H0 be a binary tree on [n] and T0 be the set of triplets induced by H0*. Then* $${\mathcal{T}}_{0}=\arg\operatorname*{max}_{\mathcal{T}}T r e v(H_{0},{\mathcal{T}}),$$ T rev(H0, T ), (7) where the maximum is unique over all triplet sets that are induced by some binary tree on [n]. Lemma 6. Let H0, H1 be two binary trees on [n] and T0, T1 be the set of triplets induced by H0 and H1, respectively. Then *T rev*(H0, T1) = *T rev*(H1, T0). $$\left(7\right)$$ Combining the above two lemmas, we obtain that $$T r e v(H_{0},T_{0})>T^{\prime}$$ T rev(H0, T0) *> T rev*(H0, T1) = *T rev*(H1, T0) for any tree H1 and corresponding set of triplets T1. This directly implies the Proposition 2. We now complete the proof by proving Lemmas 5 and 6. Proof of Lemma 5. Recall that $$T r e v(H_{0},{\mathcal{T}}_{0})=\sum_{(i,j,k)\in{\mathcal{T}}_{0}}\underbrace{\left(|H_{0}(i\lor k)|-|H_{0}(i\lor j)|\right)}_{=:D_{0}(i,j,k)}$$ is a sum of positive terms. For convenience, we denote each difference by D0(*i, j, k*) > 0. Let T1 be the set of triplets generated by another tree H1. Note that at least one pair of triplets in T1 has to be different from T0, otherwise H1 and H0 would be isomorphic transformations of one another. Without loss of generality, assume that the pair (i, j, k),(*j, i, k*) has been replaced by the pair (i, k, j),(*k, i, j*), that is, i, k are merged in H1 before they are merged to j. Observe that we can write $$\begin{array}{c}{{T r e v(H_{0},{\cal T}_{0})-T r e v(H_{0},{\cal T}_{1})=\sum_{\begin{array}{l}{{(i,j,k),}}\\ {{(i,k,j)}}\\ {{(i,k,j)}}\\ {{\in F_{0}\setminus T_{1}}}\end{array}}D_{0}(i,j,k)+D_{0}(j,i,k)-D_{0}(i,k,j)-D_{0}(k,i,j)}}\\ {{\epsilon_{F}^{(i,j,k)}}}\\ {{\epsilon_{F}^{(i,k,j)}}}\\ {{\epsilon_{F}^{(i,j,k)}}}\end{array}$$ where each term in the summation can be computed as $$D_{0}(i,j,k)+D_{0}(j,i,k)-D_{0}(i,k,j)-D_{0}(k,i,j)$$ $$=|H_{0}(i\lor k)|+|H_{0}(j\lor k)|-2|H_{0}(i\lor j)|-\left(|H_{0}(i\lor j)|+|H_{0}(k\lor j)|-2|H_{0}(i\lor k)|\right)$$ $$=3D_{0}(i,j,k),$$ which is strictly positive for every (i, j, k) ∈ T0. Summing over all (i, j, k),(j, i, k) ∈ T0\T1, we have that T rev(H0, T0) *> T rev*(H0, T1) for any T1 generated by another tree H1. Proof of Lemma 6. Let {s0ij}i,j be the pairwise AddS3 similarity induced by T0, and {s1ij}i,j be the AddS3 similarity from T1. Due to the definition of T0, we note that, for any k 6= *i, j*, the term (I(i,j,k)∈T0−I(*i,k,j*)∈T0+ I(j,i,k)∈T0 − I(*j,k,i*)∈T0 ) either takes the value 2 if k /∈ (i ∨ j)— that is, *i, j* is merged in H0 before k—or the value −1 if k is merged to either i or j before (i ∨ j). Summing over all k 6= *i, j* gives $$s_{0i j}=2(n-|H_{0}(i\lor j)|)-(|H_{0}(i\lor j)|-2)=2n+2-3|H_{0}(i\lor j)|$$ for every *i, j*. Using the same arguments s1ij can be expressed as s1ij = 2n + 2 − 3|H1(i ∨ j)|. We can now use the equivalence in Theorem 1 to write $$Tree(H_{0},\mathcal{T}_{1})-Tree(H_{1},\mathcal{T}_{0})=-\sum_{i<j}(s_{1ij}|H_{0}(i\lor j)|-s_{0ij}|H_{1}(i\lor j)|)$$ $$=-\sum_{i<j}\left((2n+2-3|H_{1}(i\lor j)|)|H_{0}(i\lor j)|-(2n+2-3|H_{0}(i\lor j)|)|H_{1}(i\lor j)|\right)$$ $$=-(2n+2)\sum_{i<j}\left(|H_{0}(i\lor j)|-|H_{1}(i\lor j)|\right),$$ which is zero, since P i<j |H0(i ∨ j)| =P i<j $\sum\limits_{j}^{i}|H_1(i\lor j)|=\frac{1}{3}$ (n 3 − n) is Dasgupta's cost for any tree on [n] when all pairwise similarities are 1 (Dasgupta, 2016, Theorem 3). Hence, the claim. ## C Proof Of Theorem 3 And Corollary 4 We first state and prove two lemmas that are essential for the proof of Theorem 3. The first lemma shows that *T rev*(H0, T0) = Ω(n 4). The second lemma derives concentration inequalities for the AddS3 similarities sij , which is then used in the proof of Theorem 3 to derive bound on |T rev(H, T ) − *T rev*(H, T0)| for all H, and subsequently arrive at the claim. Lemma 7. Given ∈ (0, 1) and n ≥ 8/, the following holds. If H0 is a hierarchy on [n] and T0 is the set of triplets induced by H0, $$T r e v(H_{0},T_{0})\geq\frac{(1-\epsilon)n^{4}}{12}.$$ Proof. We start with the equivalence in Theorem 1 to write the revenue as *T rev*(H0, T0) = −Pi<j s0ij |H0(i∨ j)|, where s0ij is the pairwise AddS3 similarity induced by T0. Due to the definition of T0, we note that, for any k 6= *i, j*, the term (I(i,j,k)∈T0 − I(i,k,j)∈T0 + I(j,i,k)∈T0 − I(*j,k,i*)∈T0 ) either takes the value 2 if k /∈ (i ∨ j)— that is, *i, j* is merged in H0 before k—or the value −1 if k is merged to either i or j before (i ∨ j). Summing over all k 6= *i, j* gives $s_{0ij}=2(n-|H_{0}(i\lor j)|)-(|H_{0}(i\lor j)|-2)=2n+2-3|H_{0}(i\lor j)|$ for every *i, j*. Let N = (i∨j) denote the least common ancestor of *i, j* in H0, and N1, N2 be the two children of N. Note that |N1*| · |*N2| pairs of *i, j* are merged at N. Hence, we can rewrite the revenue as $$Tree(H_{0},\mathcal{T}_{0})=-\sum_{i<j}s_{0ij}|H_{0}(i\lor j)|=\sum_{i<j}|H_{0}(i\lor j)|\big{(}3|H_{0}(i\lor j)|-2n-2\big{)}$$ $$=\sum_{N\in H_{0}}|N_{1}|\cdot|N_{2}|\cdot|N|\cdot\big{(}3|N|-2n-2\big{)}$$ $$=3\sum_{N\in H_{0}}|N_{1}||N_{2}||N|^{2}-(2n+2)\sum_{N\in H_{0}}|N_{1}||N_{2}||N|$$ where the summations are over all internal nodes N in the tree H0, with N1, N2 deenoting the two children of N. The second summation is Dasgupta's cost for any tree on [n] with all pairwise similarities as 1, and evaluates to n 3−n 3(Dasgupta, 2016, Theorem 3). On the other hand, we claim that the first sum has lower bound P N∈H0 |N1||N2||N| 2 ≥ n 4 4 . We prove this claim through induction on n. The claim is easy to verify for n = 2, 3. For n ≥ 4, we assume that claim holds for any H0 with k leaves, when *k < n* (equivalently, H0 on [k]). Consider the tree H0 on [n] such that the root node is split into two nodes of size n1, n2 < n (note n1 + n2 = n). From our inductive hypothesis, $$\sum_{N\in H_{0}}|N_{1}||N_{2}||N|^{2}\geq n_{1}n_{2}n^{2}+\frac{n_{1}^{4}}{4}+\frac{n_{2}^{4}}{4}$$ $$=\frac{1}{4}\left(4n_{1}^{3}n_{2}+8n_{1}^{2}n_{2}^{2}+4n_{1}n_{2}^{3}+n_{1}^{4}+n_{2}^{4}\right)$$ $$\geq\frac{1}{4}(n_{1}+n_{2})^{4}=\frac{n^{4}}{4}$$ which proves the claim for any n. Combining all terms, we have $$T r e v(H_{0},{\mathcal{T}}_{0})\geq{\frac{3n^{4}}{4}}-{\frac{(2n+2)(n^{3}-n)}{3}}={\frac{n^{4}}{12}}-{\frac{2}{3}}(n^{3}-n^{2}-n).$$ Now, given ∈ (0, 1), notice that 23 (n 3 − n 2 − n) ≤ n4 12 for any n > 8 , which proves the lemma. We now state and prove the concentration results for the AddS3 similarity computed from the sampled triplet set T . We first recall the sampling and introduce some notations. For any n, with probability pn ∈ (0, 1), a pair of triplets (i, j, k),(j, i, k) ∈ T0 is included in T , independent of other pairs. To formalise this, we define the random variable χijk = χjik ∼ Bernoulli(pn) such that the collection {χijk : i < j, k 6= *i, j*} are mutually independent. If sij denotes the pairwise AddS3 similarity, computed using T , then observe that $$s_{ij}=\sum_{k\neq i,j}\chi_{ijk}(\mathbb{I}_{(i,j,k)\in\mathcal{T}_{0}}-\mathbb{I}_{(i,k,j)\in\mathcal{T}_{0}}+\mathbb{I}_{(j,i,k)\in\mathcal{T}_{0}}-\mathbb{I}_{(j,k,i)\in\mathcal{T}_{0}})\tag{8}$$ Hence, for a fixed H0—and T0—the similaritiy sij is a weighted sum of independent Bernoullis, with weights either 2 or −1 (cf. proof of Lemma 7). As a consequence, E[sij ] = pns0ij , where the expectation is with respect to sampling, and furthermore we can state the following concentration for all pairwise similarities. Lemma 8. Assume n ≥ 8. Let T denote a random subset of T0 (obtained from the aforementioned sampling), and {sij}i<j , {s0ij}i<j denote the pairwise AddS3 similarities computed using triplets in T and T0, respectively. For any α > 0*, if* pn > (α + 2) log n/n*, then with probability* 1 − 2n −α, $${\frac{p_{n}n^{3}}{10}}\leq|{\mathcal{T}}|\leq{\frac{p_{n}n^{3}}{2}}\quad a n d\quad\operatorname*{max}_{i<j}|s_{i j}-p_{n}s_{0i j}|\leq4{\sqrt{(\alpha+2)p_{n}n\log n}}.$$ Proof. We first derive the bound on max i<j |sij −pns0ij |. From the expression of sij , mentioned above, we note that sij − pns0ij is a sum of (n − 2) independent mean zero random variables, with each term in [−2, 2] and variance bounded by 4pn. By Bernstein's inequality, $$\mathbb{P}\big(|s_{i j}-p_{n}s_{0i j}|>\delta\big)\leq2\exp\left(-\frac{\delta^{2}}{8p_{n}(n-2)+\frac{4}{3}\delta}\right).$$ For any α > 0, if pn > (α + 2) log n/n and setting δ = 4p(α + 2)pnn log n, we have |sij − pns0ij | > 4p(α + 2)pnn log n with probability ≤ 2n −(α+2). Using union bound over all n 2 < n2/2 pairs of *i, j*, we have that max i<j |sij − pns0ij | exceeds the claim bound with probability ≤ n −α. To derive the bound on *|T |*, we first bound |T0|. From definition of T0, every internal node N ∈ H0 contributes |N1||N2|(|N| − 2) triplets to T0 since merger of every *i, j* contributes to |N| − 2 triplets, one for each k that is either merged with i or j at a lower level. Hence, $$|\mathcal{T}_{0}|=\sum_{N\in H_{0}}|N_{1}||N_{2}|(|N|-2)=\sum_{N\in H_{0}}|N_{1}||N_{2}||N|-2\sum_{N\in H_{0}}|N_{1}||N_{2}|=\frac{n^{3}-n}{3}-2\cdot\frac{n^{2}}{2}=\frac{n^{3}-3n^{2}-n}{3}\.$$ The first summation follows from (Dasgupta, 2016, Theorem 3), whereas the second sum follows from induction with hypothesis that the sum evaluates to n 2/2. We now note that E[*|T |*] = pn|T0| and apply multiplicative Chernoff bound to get that 12 pn|T0*| ≤ |T | ≤* 32 pn|T0| with probability 1 − 2e −pn|T0|/12. To simplify the terms, we use |T0| = n 3−3n 2−n 3∈ [ n 3 5 , n 3 3 ], where the lower bound holds for n ≥ 8. This leads to the bounds on *|T |*, whereas for the probability, note that 2e −pn|T0|/12 ≤ 2n −(α+2) ≤ n −α, where the first inequality follows using |T0| ≥ n 3/5, pn > (α + 2) log n/n and n ≥ 8. Below, we prove Theorem 3 using Lemmas 7–8. Proof of Theorem 3. We first derive bounds on the deviation of the revenue *T rev* of any tree H due to sampling. The concentration of {sij}i<j ensures that we can state a deviation bound that uniformly holds for all H, as shown below. From the equivalence in Theorem 1, we write for any H, for all $H$, as shown below. From the equivalence in Theorem 4, we write for any $H$, $$\left|Trev(H,\mathcal{T})-p_{n}Trev(H,\mathcal{T}_{0})\right|=\left|\sum_{i<j}(s_{ij}-p_{n}s_{0ij})|H(i\lor j)|\right|$$ $$\leq\sum_{i<j}|s_{ij}-p_{n}s_{0ij}|\cdot|H(i\lor j)|$$ $$\leq4\sqrt{(\alpha+2)p_{n}n\log n}\cdot\sum_{i<j}|H(i\lor j)|,$$ where the last bound holds with probability $1-n^{-\alpha}$ due to Lemma 8. Note that $\sum_{i<j}|H(i\lor j)|$ is a $n^{-\alpha}$-independent of $H$. |H(i∨j)| is Dasgupta's cost of tree H on [n] if all pairwise similarities are 1, and hence the summation is n 3 < n 2 3 . We conclude that, with probability 1 − n −α, $$\operatorname*{max}_{H}|T r e v(H,{\mathcal{T}})-p_{n}T r e v(H,{\mathcal{T}}_{0})|<(4/3){\sqrt{(\alpha+2)p_{n}n^{7}\log n}}.$$ We now write $$Tree(\widehat{H},\mathcal{T}_{0})\geq\frac{1}{p_{n}}Tree(\widehat{H},\mathcal{T})-\frac{4}{3}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}}$$ $$\geq\frac{1}{p_{n}}Tree(H_{0},\mathcal{T})-\frac{4}{3}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}}\geq Tree(H_{0},\mathcal{T}_{0})-\frac{8}{3}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}},$$ where the first and third inequalities follow from the deviation bound stated above, and the second inequality holds since Hb maximises *T rev*(H, T ). For pn > 2 12(α + 2) log n/n2, the second term is smaller than n4/24 ≤ (1 − )n 4/12 ≤ *T rev*(H0, T0), where the first inequality uses the fact ≤ 1/2 and the second inequality is due to Lemma 7. Hence the claim. Proof of Corollary 4. In the noisy setting, the random flipping can be modelled through the independent variables {ζ i jk : *i, j < k*} where ζ i jk ∼ Bernoulli(δ) is the indicator for triplet (*i, j, k*) to be flipped with triplet (*i, k, j*). Using the notation of equation 8, the variable ξijkζ i jk indicates (i, k, j) ∈ T 0(noisy triplet) whereas ξijk(1 − ζ i jk) indicates correct triplet (i, j, k) ∈ T 0. Hence, $$s_{ij}=\sum_{k\neq i,j}\chi_{ijk}(1-2\zeta_{jk}^{i})(\mathbb{I}_{(i,j,k)\in\mathcal{T}_{b}}-\mathbb{I}_{(i,k,j)\in\mathcal{T}_{b}})+\chi_{ijk}(1-2\zeta_{ik}^{j})(\mathbb{I}_{(j,i,k)\in\mathcal{T}_{b}}-\mathbb{I}_{(j,k,i)\in\mathcal{T}_{b}}),\tag{9}$$ and E[sij ] = (1 − 2δ)pns0ij . Following the arguments of Lemma 8, we can that, with probability 1 − n −α, $$\operatorname*{max}_{i<j}|s_{i j}-(1-2\delta)p_{n}s_{0i j}|<4{\sqrt{(\alpha+2)p_{n}n\log n}}$$ where the deviation bound is same as in Lemma 8 since the same variance bound holds for the independent random terms in the summation in equation 9. Subsequently, following the proof of Theorem 3, we have with probability 1 − n −α, $$\operatorname*{max}_{H}|T r e v(H,{\mathcal{T}})-(1-2\delta)p_{n}\cdot T r e v(H,{\mathcal{T}}_{0})|=(4/3){\sqrt{(\alpha+2)p_{n}n^{7}\log n}}.$$ and so $$Tree(\widehat{H},\mathcal{H}_{0})\geq\frac{1}{(1-2\delta)p_{n}}Tree(\widehat{H},\mathcal{T})-\frac{4}{3(1-2\delta)}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}}$$ $$\geq\frac{1}{(1-2\delta)p_{n}}Tree(H_{0},\mathcal{T})-\frac{4}{3(1-2\delta)}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}}\geq Tree(H_{0},\mathcal{T}_{0})-\frac{8}{3(1-2\delta)}\sqrt{\frac{(\alpha+2)n^{7}\log n}{p_{n}}}.$$ Following the proof of Theorem 3, second term is $\leq cn^{4}/24$ for $p_{n}>\frac{2^{12}(\alpha+2)\log n}{n^{6}(1-2\delta)^{2}}$. ## D Standard Deviation On Real Data In Table 4, we provide the standard deviations for the real data experiments that were omitted in the main paper. Table 4: Experiments on real datasets. For the triplets-based methods, AddS3-AL typically obtains dendrograms with the best revenues. For the quadruplets-based methods, AddS4-AL and 4K-AL show similar results. Using Cosine similarities yields slightly better hierarchies than comparison-based methods. | Dataset | Triplet | | | | |-----------|----------------------|----------------------|----------------------|----------------------| | AddS3-AL | tSTE-AL | MulK3-AL | Cosine-AL | | | Zoo | 2.77 × 105 ± 5 × 103 | 2.16 × 105 ± 8 × 103 | 2.04 × 105 ± 2 × 104 | 2.82 × 105 ± 3 × 103 | | Glass | 2.16 × 106 ± 4 × 104 | 1.97 × 106 ± 3 × 104 | 1.41 × 106 ± 5 × 104 | 2.11 × 106 ± 2 × 104 | | MNIST | 1.89 × 109 ± 4 × 107 | 2.06 × 109 ± 4 × 107 | 1.72 × 109 ± 6 × 107 | 2.06 × 109 ± 2 × 106 | | Car | 1.52 × 105 | 1.56 × 105 ± 2 × 103 | 1.26 × 105 | - | | Food | 6.14 × 106 | 5.99 × 106 ± 2 × 104 | 6.10 × 106 | - | | Vogue | 2.72 × 104 | 2.10 × 104 ± 1 × 103 | 3.02 × 103 | - | | Nature | 2.65 × 105 | 2.06 × 105 ± 8 × 103 | 1.23 × 105 | - | | Imagenet | 7.18 × 107 | 6.57 × 107 ± 8 × 105 | 3.44 × 107 | - | | Dataset | Quadruplet | | | | | AddS4-AL | 4K-AL | Cosine-AL | | | | Zoo | 2.83 × 105 ± 1 × 104 | 2.87 × 105 ± 1 × 104 | 2.95 × 105 ± 3 × 103 | | | Glass | 2.43 × 106 ± 3 × 104 | 2.43 × 106 ± 3 × 104 | 2.49 × 106 ± 1 × 104 | | | MNIST | 1.91 × 109 ± 4 × 107 | 1.88 × 109 ± 3 × 107 | 2.08 × 109 ± 2 × 106 | | | Car | 1.52 × 105 | 1.13 × 105 | - | | | Food | 6.14 × 106 | 6.14 × 106 | - | | | Vogue | 2.72 × 104 | 2.55 × 104 | - | | | Nature | 2.65 × 105 | 2.23 × 105 | - | | | Imagenet | 7.18 × 107 | 6.99 × 107 | - | | ## E Additional Results On The Planted Model In this section, we provide additional results on the Planted Model presented in Section 7.1 of the main paper. In Figure 3, we present the results obtained n 2/2, n 2 and 2n 2triplet comparisons respectively. Similarly, Figure 4 displays the results obtained using n 2/2, n 2 and 2n 2 quadruplet comparisons respectively. In all these figures, we notice that, given a set signal to noise ratio, the ordering between the methods remains the same for the revenue and the AARI, that is the method with the highest revenue also has the highest AARI. In other words, a higher revenue indicates a better dendrogram.* In Table 5 we verify that this remains true for constant signal to noise ratios of 1.5, and halving number of comparisons (Table 1 is an abbrieved version of Table 5). The highest revenue and AARI are underlined. We can notice that, when the revenue of AddS3-AL becomes higher than the revenue of tSTE-AL, the AARI also follows the same trend, thus confirming that selecting the dendrogram with the highest revenue is indeed a good way to select meaningful hierarchies. ## F Results On The Planted Model With Noisy Comparisons In the main paper, we only used the planted model to generate comparisons with no noise. In this section, we show that our findings remain true even when some of the comparisons are noisy, that is randomly *In Figures 3–6, we use current time as the seeds for random numbers for each run. flipped with a probability of 5%. In Figure 5, we present the results obtained using n 2/2, n 2 and 2n 2 noisy triplet comparisons respectively. In Figure 6, we present the results obtained using n 2/2, n 2 and 2n 2 noisy quadruplet comparisons respectively. We notice that, given a set signal to noise ratio, the ordering between the methods remains the same for the revenue and the AARI, that is the method with the highest revenue is also the one with the highest AARI. In other words, a higher revenue indicates a better dendrogram. Table 5: Revenue and AARI of various methods for a signal to noise ratio of 1.5 and halving of comparisons. In each line the highest revenue and the highest AARI are underlined, showing that the two measures are well aligned. | Number of | AddS3-AL | tSTE-AL | | | |-------------|-------------------------|----------------|-------------------------|---------------| | triplets | Revenue | AARI | Revenue | AARI | | 16n 2 | 7.347 × 107 ± 1.3 × 105 | 0.937 ± 0.024 | 7.300 × 107 ± 8.7 × 104 | 0.877 ± 0.007 | | 8n 2 | 3.667 × 107 ± 1.7 × 105 | 0.905 ± 0.020 | 3.656 × 107 ± 8.8 × 104 | 0.877 ± 0.007 | | 4n 2 | 1.823 × 107 ± 1.0 × 105 | 0.862 ± 0.023 | 1.825 × 107 ± 7.0 × 104 | 0.874 ± 0.012 | | 2 | 8.962 × 106 ± 9.8 × 104 | 0.782 ± 0.042 | 9.130 × 106 ± 4.0 × 104 | 0.867 ± 0.014 | | 2n n 2 | 4.315 × 106 ± 9.7 × 104 | 0.682 ± 0.047 | 4.559 × 106 ± 2.0 × 104 | 0.868 ± 0.012 | | n 2/2 | 2.038 × 106 ± 6.8 × 104 | 0.593 ± 0.037 | 2.277 × 106 ± 1.4 × 104 | 0.860 ± 0.014 | | n 2/4 | 9.268 × 105 ± 4.2 × 104 | 0.498 ± 0.035 | 1.137 × 106 ± 1.0 × 104 | 0.851 ± 0.065 | | n 2/8 | 4.261 × 105 ± 2.5 × 104 | 0.396 ± 0.033 | 5.728 × 105 ± 6.2 × 103 | 0.840 ± 0.010 | | 2/16 | 2.015 × 105 ± 1.2 × 104 | 0.295 ± 0.041 | 2.858 × 105 ± 3.3 × 103 | 0.720 ± 0.057 | | n n 2/32 | 1.096 × 105 ± 8.5 × 103 | 0.192 ± 0.057 | 1.450 × 105 ± 2.2 × 103 | 0.549 ± 0.025 | | Number of | MulK3-AL | | | | | triplets | Revenue | AARI | | | | 2 | 7.315 × 107 ± 1.3 × 105 | 0.861 ± 0.005 | | | | 16n 8n 2 | 3.636 × 107 ± 9.7 × 104 | 0.855 ± 0.003 | | | | 4n 2 | 1.795 × 107 ± 9.3 × 104 | 0.830 ± 0.009 | | | | 2n 2 | 8.444 × 106 ± 1.3 × 105 | 0.677 ± 0.041 | | | | n 2 | 3.728 × 106 ± 1.4 × 105 | 0.540 ± 0.016 | | | | n 2/2 | 1.220 × 106 ± 1.7 × 105 | 0.347 ± 0.050 | | | | n 2/4 | 1.531 × 105 ± 8.9 × 104 | 0.077 ± 0.047 | | | | n 2/8 | 1.856 × 104 ± 1.2 × 104 | 0.011 ± 0.011 | | | | n 2/16 | 4.026 × 103 ± 8.0 × 103 | 0.005 ± 0.004 | | | | n 2/32 | 2.015 × 103 ± 3.5 × 103 | 0.0003 ± 0.001 | | | ![22_image_0.png](22_image_0.png) Figure 3: Revenue and AARI (higher is better) of several triplets-based methods using respectively n 2/2 comparisons (a-b), n 2comparisons (c-d), and 2n 2comparisons (e-f). Given various signal to noise ratios, a higher revenue implies higher AARI values, that is better dendrograms. ![23_image_0.png](23_image_0.png) Figure 4: Revenue and AARI (higher is better) of several quadruplets-based methods using respectively n 2/2 comparisons (a-b), n 2comparisons (c-d), and 2n 2comparisons (e-f). Given various signal to noise ratios, a higher revenue implies higher AARI values, that is better dendrograms. ![24_image_0.png](24_image_0.png) Figure 5: Revenue and AARI (higher is better) of several triplets-based methods using respectively n 2/2 comparisons (a-b), n 2comparisons (c-d), and 2n 2comparisons (e-f) with 5% noise. Given various signal to noise ratios, a higher revenue implies higher AARI values, that is better dendrograms. ![25_image_0.png](25_image_0.png) Figure 6: Revenue and AARI (higher is better) of several quadruplets-based methods using respectively n 2/2 comparisons (a-b), n 2comparisons (c-d), and 2n 2comparisons (e-f) with 5% noise. Given various signal to noise ratios, a higher revenue implies higher AARI values, that is better dendrograms.
Review 1: Summary: This work proposes a new revenue function for comparison-based hierarchical clustering. There are many good theoretical contributions including the equivalence between the proposed function and Dasgupta's cost function, and provably approximate recovery of the hierarchy based on O(n^2 log n) sampled comparisons based on the proposed function. Evaluation on real datasets also reveals the effectiveness of the proposed function. Overall, I think this work makes valid contributions to the research on hierarchical clustering. Strengths and Weaknesses: Strengths: 1. The paper is generally well-written. The idea is very clear. The authors also do a good job to position this work in the context of related works, so the contributions are also presented clearly. 2. The notations and derivations are good and rigorous. I have roughly gone through the proof. 3. The experiments are extensive including synthetic datasets and real datasets, which demonstrates the effectiveness of the proposed revenue function. Weaknesses: 1. The biggest weakness is the theoretical side of the algorithm for this revenue function. Although the sample complexity O(n^2 log n) has been proved to achieve good approximation, no poly-time approximation algorithm is provided for this revenue function. 2. There are some typos. Some parts of the paper are not self-contained. I will list some of them in "Requested Changes". Requested Changes: 1. In sec 5, when T0 is defined, it is unclear "i,j are merged before k". "merged" is not defined. Suggest using a more formal way to define this. 2. Typo in the last line of the proof sketch for Prop.2: H! should be H1. 3. Real data experiments: To make the paper self-contained, I suggest adding descriptions of how the hierarchical structure is formulated rather than "use the same pre-processing...". 4. Please explain why the proposed algorithm is worse than tSTE-AL when the number of comparisons is less than 6n^2. Broader Impact Concerns: I see no ethical concerns. ================================================== Review 2: Summary: This paper studies hierarchical clustering in the comparison-based setting. That is rather than have direct access to pairwise similarities, we only have triplet (i more similar to j than k) or quadruplet comparisons (i and j more similar to each other than to k and l). The paper introduces an objective for comparison-based hierarchical clustering that is closely related to Dasgupta's cost. Furthermore, it presents principled algorithms along with an empirical comparison of these methods. Strengths and Weaknesses: **Strengths** * This is a very clearly written paper that provides an accessible and seemingly comprehensive description of the landscape of related work and where the proposed objective and algorithms fit in. * The algorithms, while simple, are clearly motivated by the setting and serve for an empirical comparison to validate the algorithms and proposed objective. * The authors provide connections between related algorithms, objectives and settings that are nice and could help inspire future work connecting these related areas. **Weaknesses** * I think the paper could benefit from a bit more discussion of: how comparison-based data comes to be and further discussion of considerations for such data. What is the scale of such datasets? What is the purpose of clustering in the example datasets (e.g., data exploration, some end-task, etc)? What are the characteristics of the clusterings desired for such data? Speed, accuracy, guarantees about the solution? The paper does a nice job placing itself in the related literature, I think it would benefit from this further discussion of the data. * Similarly, I think it would be good to discuss the use of agglomerative clustering further, especially in relation to highly scalable variants such as those that operate on sparse graphs, e.g., [1,2] among others. [1] Sumengen, Baris, et al. "Scaling hierarchical agglomerative clustering to billion-sized datasets." arXiv preprint arXiv:2105.11653 (2021). [2] Dhulipala, Laxman, et al. "Hierarchical Agglomerative Graph Clustering in Poly-Logarithmic Depth." arXiv preprint arXiv:2206.11654 (2022). Requested Changes: I only have a few minor writing comments: Minor: * "In the past decade, there has been an exponential growth in the scope of data science and machine learning." -> Personally, I think this reads as too broad a statement. I think that stronger engagement from the reader would be to lead with the "crowdsourcing and psychometrics" use case and give both applications of these and impacts of those applications as you do in the following paragraph. * "those focusing on ordinal data analysis from crowd-sourced data Kleindessner and von Luxburg (2017); Ghoshdastidar et al. (2019)." -> citation parens * " we believe that our revenue maximisation formulation can recover the latent hierarchy using only O(n^2 log n) comparisons under their latent model." -> would be nice to say a bit more here, perhaps turning it into a proper result (or describing more for future work). Broader Impact Concerns: No ethical concerns. ================================================== Review 3: Summary: This work introduces a measure of quality for hierarchical clusterings on a set of objects. The novelty of this measure is that, instead of being based on pairwise similarities $s(A,B)$ between objects, it is based on comparisons of the kind "is $s(A,B)>s(A,C)$?". This is interesting since several clustering algorithms that are based on comparisons rather than similarities have been proposed; human themselves are often better at performing comparisons rather than providing similarities. After defining such a measure of quality, called revenue, the work relates it to other known cost functions and proves some interesting results. For instance, they prove that one can compute a hierarchy that has revenue within constant factors from the optimum if one samples roughly $O(n^2)$ of the $\Omega(n^3)$ possible triplets comparisons. The work is completed by some experiments that assess the effectiveness of the proposed measure in capturing the quality of a hierarchical clustering. Strengths and Weaknesses: STRENGTHS 1) The problem is well-motivated. Several works exist that study comparison-based clustering, yet I am not aware of a measure of cluster quality that is defined in terms of comparisons. 2) The contributions are mostly clear and understandable. 3) The results and definitions are nicely connected to their related counterparts (e.g., Dasgupta's cost function for hierarchical clusterings). WEAKNESSES 1) The work suffers from lack of formality and rigor in several points. This includes some sloppy or incomplete definitions, some slightly wrong formulation of hypotheses, and some slightly wrong formulation of bounds and results. 2) The sketches of the proofs are really too short, to the point that they do not manage to convey the idea, and therefore in the end they become quite useless. 3) The experimental part can be improved. On the one hand, there is no clear statement of what the whole experimental part is supposed to assess (the individual experiments are described, but here I'm talking about the general goal of the experiments). On the other hand, some experiments are not very solid and seem almost anecdotal (more about this below). Requested Changes: CRITICAL CHANGES Page 5: "We propose to achieve this by finding a tree that maximises $Qrev(H, Q)$", how do you guarantee that in such a tree $|H(k \vee l)| > |H(i \vee j)|$? for all $(i,j,k,l) \in Q$? Moreover, the inequality could be loose (i.e., $|H(k \vee l)| = |H(i \vee j)|$). Page 6: "one cannot exactly recover $H_0$ from any passively obtained $T \subseteq T_0$ if $|T| = o(n^3)$", this sentence is vague and unclear. Page 6: "$i,j$ are merged before $k$", although I understand what is meant, there is no formal definition of what "merged before" means; and I do not think one can say "$i,j$ are merged before $k$" but rather "$i,j$ are merged before $i,k$ and $j,k$" Page 6: "we obtain that $Trev(H_0, T_0) > Trev(H_0, T_1)$", couldn't this be an equality? Page 7: "One can use standard concentration inequalities to show that, with high probability, $|T| = O(p_n n^3)$". This is true only if $p_n$ is large enough. In particular it is false if $p_n = n^{-3}$, since in that case $E[|T|]$ is too small to yield concentration. Page 7, Theorem 3: "Given access to $T$", this has no meaning here. Page 7, Theorem 3: "with probability at least $1-n^{-O(1)}$", probably you want to say $1-n^{-a}$ where $a>0$ can be made arbitrarily large by adjusting the constants. Otherwise that $O(1)$ could be $0$, in which case you get the trivial lower bound of $0$. In fact I would suggest defining formally what you mean by "high probability". Page 7, Theorem 3: "Setting $p_n$ at its smallest allowable value, equation 5 holds for $|T| = O(n^2 \log n / \epsilon^2)$". Do you mean that, for that value of $p_n$, with high probability we have simultaneously $|T| = O(n^2 \log n / \epsilon^2)$ and equation (5)? I guess this is the intended meaning, but one could interpret that sentence differently -- for instance, that conditioned on the event $|T| = O(n^2 \log n / \epsilon^2)$ we have (5) with high probability. In any case, the sentence is missing some quantifiers and/or some probability. Page 8: what do you mean by "ideal" matrix? I have not heard this before. Page 8: so what objects are R,M precisely? Real symmetric matrices? This should be stated clearly and formally. Page 8: "there are $2^L$ pure clusters", what is a cluster, and what is a pure cluster? Page 9, Table 1: this experiment does not tell much. First, the range of values for the number of comparisons is rather small. I guess it would be better to test, say, from k=n/2^c to k=n in doubling steps. This would support the claim that your revenue is correlated with the AARI. Second, this test is basically anecdotal evidence. Better evidence would be given by computing some standard measure of correlation between revenue and AARI such as Pearson's correlation between the revenue and AARI sequences, or Spearman's rank correlation between the rankings of the hierarchies given by the revenue and the AARI for the various methods. Page 10: the results are not easily repeatable since the authors used the current time to seed the random number generator (why not use just 0 as the seed?). Page 10: the denominator of the cosine similarity should have the norms, and not the squared norms Page 11: as before, these experiments are not repeatable since the authors used the current time to seed the random number generator. Lemma 6: the statement is problematic as it first chooses $n, H_0, T_0$ and then makes a "universal" claim about all such choices. Consider rewriting it like this: "For every $\epsilon \in (0,1)$ there exists $n_0 > 0$ such that what follows holds for all $n \ge n_0$. If $H_0$ is a hierarchy on $[n]$ and $T_0$ is the set of triplets induced by $H_0$, then ...$" Lemma 7: first, the $O(1)$ at the exponent of $n$ is not what you want, as I remarked above. Second, if you aim at $1-n^{-a}$ for arbitrarily large $a > 0$, then the hypothesis $p_n = \Omega(\log n / n)$ is not sufficient; you need the stronger hypothesis $p_n > c \log n / n$ for sufficiently large $c > 0$. ORDINARY CHANGES Page 3: "optimmisation" --> "optimisation" Page 3: "an uniformly" --> "a uniformly", "are" --> "is" Page 3: "at least $\frac{1}{3}$-fraction of given triplets" --> "a fraction at least $\frac{1}{3}$ of the given triplets" Page 3: "that satisfy at least $(1-\epsilon)$-fraction" --> "that satisfy at least a $(1-\epsilon)$-fraction" Page 3: "given quartets" --> "the given quartets" Page 4: "Since, we show that" --> "Since we show that" Page 4: "we believe that ...": I do not understand this sentence; do you prove this is true, or what? Page 4: "$C_1$ and $C_2$, denoting a split of $C$ such that ..." Why not just say that $\{C_1,C_2\}$ is a partition of $C$? Moreover, shouldn't the partition be nontrivial, i.e., $\min(|C_1|,|C_2|) > 0$ ? Page 4: "..., or equivalently, the number of objects in the set $C$". I see no constraint in the definition that ensures that $|H(C)|=|C|$. Page 4: "the number of objects in the set $C$", why not just $|C|$? Page 4: "the smallest node in the tree and the smallest subtree containing both i and j" --> "the smallest node in the tree containing both i and j, and the smallest subtree containing both i and j"; otherwise it's ambiguous. Page 4: "that allows ones" --> "that allows one" Page 4: "o(n)" --> "o(\cdot)" Page 4: "achieves $Drev(H, s) \ge \frac{1}{3}$-optimal revenue", what does this mean? Page 5: "avoids counting the same comparison multiple times", what does this mean? I see no "counting" problem here. Page 5: "in practice, the observed comparisons are fewer", in practice where? Moreover, this claim does not seem to be supported (there is no justification or citation). Page 5: "We further assume that $T$, or $Q$, is passively collected -- the algorithm cannot decide which comparisons should be queried.", I am confused. What can the algorithm do? Query any triplet from $T$ where $T$ is an arbitrary subset of $T_{all}$? Page 5: "which leads to $|H(i \vee k)| > |H(i \vee j)|$", you mean that you would like this to hold for every $(i,j,k) \in T$? Page 5, Theorem 1: perhaps highlight in the notation that $s^{AddS3}$ is a function of the set of triplets $T$. Otherwise the right-hand side of the first equation looks independent of $T$, which is weird. Page 5, Theorem 1: need brackets in the right-hand side of the equation of $s_{ij}^{AddS3}$ Page 6, Theorem 1: need brackets in the right-hand side of the equation of $s_{ij}^{AddS4}$ Page 6: "$Trev(H!,T_0)$" Page 7: "that $\Omega(n^3)$ is necessary" --> "that $|T| \in \Omega(n^3)$ is necessary" Page 8: "experimentsto" Page 8: "triplets based" --> "triplets-based", "quadruplets based" --> "quadruplets-based" Page 8: "$M+R$ where," --> "$M+R$, where" Page 8: "to be more difficult" --> "more difficult" Page 8: "comparison based setting" --> "comparison-based setting" Page 8: "well known measure" --> "well-known measure" Page 8: "higher revenues imply higher AARI", you mean AARI against the optimal hierarchy? Page 9: "several triplets (a-b) and quadruplets (c-d) based" --> "several triplet-based methods (a-b) and quadruplet-based methods (c-d)" Page 10, Table 2: "triplets based" --> "triplet-based", "quadruplets based" --> "quadruplet-based" Page 10: "in this work to various" --> "in this work, to various" Page 10: "quadruplets based" --> "quadruplet-based", "triplets based" --> "triplet-based", "similarity based" --> "similarity-based" Page 10: "we can then use" --> "and then use" Page 11: "comparison based" --> "comparison-based" Page 11: "we randomly and uniformly flip 5% of the comparisons (Emamjomeh-Zadeh and Kempe, 2018), for example if one should observe the triple $(i, j, k)$ that is object $x_i$ is closer to $x_j$ than to $x_k$ then one observes $(i, k, j)$ instead", perhaps I would say "we randomly and uniformly flip 5% of the comparisons (Emamjomeh-Zadeh and Kempe, 2018), where by flipping $(i, j, k)$ we mean replacing it with $(i, k, j)$." Page 11: "The number of objects, the kind of query" --> "The number of objects and the kind of query" Page 11: it is not clear what kind of comparisons the real-data datasets contain, and how they are converted into triplets/quadruplets. For instance I wonder what the "most central triplet" is. I believe this should be explained in the main body of the manuscript. Page 12: "two average linkage based algorithms" --> "two algorithms based on average linkage" Proof of Theorem 1, Lemma 4, Lemma 5, Lemma 6, ... : the hierarchies $H$ are sometimes called "hierarchies", sometimes "binary trees", sometimes "trees". Lemma 6: what is a "latent" hierarchy? It seems to me that it has no mathematical meaning. Broader Impact Concerns: I see no concern. ================================================== Metareview: Recommendation: Accept as is Comment: After some revisions during the review process, the paper is now clearly written and would be a valid contribution to TMLR. ==================================================
# Ap: Selective Activation For De-Sparsifying Pruned Networks Shiyu Liu *shiyu_liu@u.nus.edu* Department of Electrical and Computer Engineering College of Design and Engineering National University of Singapore Rohan Ghosh *rghosh92@gmail.com* Department of Electrical and Computer Engineering College of Design and Engineering National University of Singapore Mehul Motani *motani@nus.edu.sg* Department of Electrical and Computer Engineering College of Design and Engineering N.1 Institute for Health Institute of Data Science Institute for Digital Medicine (WisDM) National University of Singapore Reviewed on OpenReview: *https: // openreview. net/ forum? id= EGQSpkUDdD* ## Abstract The rectified linear unit (ReLU) is a highly successful activation function in neural networks as it allows networks to easily obtain sparse representations, which reduces overfitting in overparameterized networks. However, in the context of network pruning, we find that the sparsity introduced by ReLU, which we quantify by a term called dynamic dead neuron rate (DNR), is not beneficial for the pruned network. Interestingly, the more the network is pruned, the smaller the dynamic DNR becomes after optimization. This motivates us to propose a method to explicitly reduce the dynamic DNR for the pruned network, i.e., de-sparsify the network. We refer to our method as Activate-while-Pruning (AP). We note that AP does not function as a stand-alone method, as it does not evaluate the importance of weights. Instead, it works in tandem with existing pruning methods and aims to improve their performance by selective activation of nodes to reduce the dynamic DNR. We conduct extensive experiments using various popular networks (e.g., ResNet, VGG, DenseNet, MobileNet) via two classical and three competitive pruning methods. The experimental results on public datasets (e.g., CIFAR-10, CIFAR-100) suggest that AP works well with existing pruning methods and improves the performance by 3% - 4%. For larger scale datasets (e.g., ImageNet) and competitive networks (e.g., vision transformer), we observe an improvement of 2% - 3% with AP as opposed to without. Lastly, we conduct an ablation study and a substitution study to examine the effectiveness of the components comprising AP. ## 1 Introduction The rectified linear unit (ReLU) (Glorot et al., 2011), σ(x) = max{x, 0}, is the most widely used activation function in neural networks (e.g., ResNet (He et al., 2016), Transformer (Vaswani et al., 2017)). The success of ReLU is mainly due to fact that existing networks tend to be overparameterized and ReLU can easily regularize overparameterized networks by introducing sparsity (i.e., post-activation output is zero) (Li et al., 2023; Denil et al., 2014; Wang et al., 2023), leading to promising results in many computer vision tasks (e.g., image classification (He et al., 2016), object detection (Dai et al., 2021; Joseph et al., 2021)). In this paper, we study the ReLU's sparsity constraint in the context of network pruning (i.e., a method of compression that removes weights from the network). Specifically, we question the utility of ReLU's sparsity constraint, when the network is no longer overparameterized during iterative pruning. In the following, we summarize the workflow of our study together with our contributions. 1. **Motivation and Theoretical Study.** In Section 3.1, we introduce a term called dynamic Dead Neuron Rate (DNR), which quantifies the sparsity introduced by ReLU neurons that are not completely pruned during iterative pruning. Through rigorous experiments on existing networks (e.g., ResNet (He et al., 2016)), we find that the more the network is pruned, the smaller the dynamic DNR becomes during and after optimization. This suggests that the sparsity introduced by ReLU is not beneficial for pruned networks. Further theoretical investigations also reveal the importance of reducing dynamic DNR for pruned networks from an information bottleneck (IB) (Tishby & Zaslavsky, 2015) perspective (see Section 3.2). 2. **A Method for De-sparsifying Pruned Networks.** In Section 3.3, we propose a method called Activate-while-Pruning (AP) which aims to explicitly reduce dynamic DNR. We note that AP does not function as a stand-alone method, as it does not evaluate the importance of weights. Instead, it works in tandem with existing pruning methods and aims to improve their performance by reducing dynamic DNR. AP has two variants: (i) AP-Lite which slightly improves the performance of existing methods, but without increasing the algorithm complexity, and (ii) AP-Pro which introduces an additional retraining step to the existing methods in every pruning cycle, but significantly improves the performance of existing methods. 3. **Experiments.** In Section 4, we conduct experiments on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) with various popular networks (e.g., ResNet, VGG (Simonyan & Zisserman, 2014), MobileNet (Sandler et al., 2018), DenseNet (Huang et al., 2017)) using two classical and three competitive pruning methods. The results demonstrate that AP works well with existing pruning methods and improve their performance by 3% - 4%. For the larger scale dataset (e.g., ImageNet (Deng et al., 2009)) and competitive networks (e.g., vision transformer (Dosovitskiy et al., 2020)), we observe an improvement of 2% - 3% with AP as opposed to without. 4. **Ablation Study.** In Section 4.3, we carry out an ablation study to further investigate and demonstrate the effectiveness of several key components that make up the proposed AP. 5. **Substitution Study.** In Section 4.4, we conduct a substitution study to replace certain components in the proposed AP and examine the impact on pruning performance. ## 2 Background Network pruning is a method used to reduce the size of the neural network, with its first work (LeCun et al., 1998) dating back to 1990. In terms of the pruning style, all existing methods can be divided into two classes: (i) **One-Shot Pruning** and (ii) **Iterative Pruning**. Assuming that we plan to prune Q% of the parameters of a trained network, a typical **pruning cycle** consists of three basic steps: (i) Prune η% of existing parameters based on given metrics (ii) Freeze pruned weights as zero (iii) Retrain the pruned network to recover the performance. In One-Shot Pruning, η is set to Q and the parameters are pruned in one pruning cycle. While for Iterative Pruning, a much smaller portion of parameters (i.e., *η << Q*) are pruned per pruning cycle. The pruning process is repeated multiple times until Q% of parameters are pruned. As for performance, Iterative Pruning often results in better performance compared to One-Shot Pruning (Li et al., 2017; Vysogorets & Kempe, 2023; Zhang & Freris, 2023). So far, existing works aim to improve the pruning performance by exploring either new pruning metrics or new retraining methods. Pruning Metrics. Weight magnitude is the most popular approximation metric used to determine less useful connections; the intuition being that smaller magnitude weights have a smaller effect in the output, and hence are less likely to have an impact on the model outcome if pruned (He et al., 2020; Li et al., 2020a;b). Many works have investigated the use of weight magnitude as the pruning metric (Han et al., 2015; Frankle & Carbin, 2019). More recently, Lee et al. (2020) introduced layer-adaptive magnitude-based pruning (LAMP) and attempted to prune weights based on a scaled version of the magnitude. Park et al. (2020) proposed a method called Lookahead Pruning (LAP), which evaluates the importance of weights based on the impact of pruning on neighbor layers. Another popular metric used for pruning is via the gradient, the intuition being that weights with smaller gradients are less impactful in optimizing the loss function. Examples are (LeCun et al., 1998; Theis et al., 2018), where LeCun et al. (1998) proposed using the second derivative of the loss function with respect to the parameters (i.e., the Hessian Matrix) as a pruning metric and Theis et al. (2018) used Fisher information to approximate the Hessian Matrix. A recent work (Blalock et al., 2020) reviewed numerous pruning methods and suggested two classical pruning methods for performance evaluation: (i) **Global Magnitude**: Pruning the weights with the lowest absolute value anywhere in the network and (ii) **Global Taylor** (Molchanov et al., 2019): Pruning the weights with the lowest absolute value of (weight × gradient) anywhere in the network. 1. **Global Magnitude**: Pruning weights with the lowest absolute value globally (anywhere in the network). 2. **Global Taylor**: Pruning weights with the lowest absolute value of (weight×gradient) globally. Retraining Methods. Another factor that significantly affects the pruning performance is the retraining method. For example, Han et al. (2015) trained the unpruned network with a learning rate (LR) schedule and retrained the pruned network using a constant learning rate (i.e., often the final LR of the LR schedule). A recent work (Renda et al., 2019) proposed learning rate rewinding which used the same learning rate schedule to retrain the pruned network, leading to a better pruning performance. More recently, Liu et al. (2021a) attempted to optimize the choice of LR during retraining and proposed a LR schedule called S-Cyc. They showed that S-Cyc could work well with various pruning methods, further improving the existing performance. Most notably, Frankle & Carbin (2019) found that resetting the unpruned weights to their original values (known as **weight rewinding**) after each pruning cycle could lead to even higher performance than the original model. Some follow-on works (Zhou et al., 2019; Renda et al., 2019; Malach et al., 2020; Evci et al., 2021) investigated this phenomenon more precisely and applied this method in other fields (e.g., transfer learning (Mehta, 2019), reinforcement learning and natural language processing (Yu et al., 2020)) while other works (Evci et al., 2022; Paul et al., 2022) study its limitation and attempt to improve on it. One interesting work to mention is (Chen et al., 2022), which further examined the lottery ticket hypothesis from other perspectives, such as interpretability and geometry of loss landscapes. Other Works. In addition to works mentioned above, several other works also share some deeper insights on network pruning (Liu et al., 2019; Zhu & Gupta, 2018; Li et al., 2022; Wang et al., 2022; Peste et al., 2021; Lee & et al, 2023; Gale et al., 2019). For example, Wang et al. (2020) suggested that the fully-trained network could reduce the search space for the pruned structure. More recently, Luo & Wu (2020) addressed the issue of pruning residual connections with limited data and Ye et al. (2020) theoretically proved the existence of small subnetworks with lower loss than the unpruned network. You et al. (2022) motivated the use of the affine spline formulation to analyze recent pruning techniques. Liu et al. (2022a) applied the network pruning technique in graph networks and approximated the subgraph edit distance. ## 3 Activate-While-Pruning In Section 3.1, we first conduct experiments to evaluate the DNR during iterative pruning. Next, in Section 3.2, we link the experimental results to theoretical studies and motivate Activate-while-Pruning (AP). In Section 3.3, we introduce the idea of AP and present its algorithm. Lastly, in Section 3.4, we illustrate how AP can improve on the performance of existing pruning methods. ## 3.1 Experiments On Dnr We study the state of the ReLU function during iterative pruning and introduce a term called Dead Neuron Rate (DNR), which is the percentage of dead ReLU neurons (i.e., a neuron with a post-ReLU output of zero) in the network averaged over all training samples when the network converges. Mathematically, the DNR can be written as In as _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _____________ - _________ _________ $ n$ $ i=1$ an ReLU neurons in the unprimed network $ =\frac{1}{n}\sum_{i=1}^{n}\frac{\#\text{of dynamically+statically dead ReLU neuron}}{\text{all ReLU neurons in the unprimed network}}$ $$(1)$$ $$\left(2\right)$$ ![3_image_0.png](3_image_0.png) Figure 1: **Left**: Dynamic and static Dead Neuron Rate (DNR) when iteratively pruning ResNet-20 using Global Magnitude; **Right**: The corresponding dynamic DNR during optimization. where n is the number of training samples. We further classify a dead neuron as either dynamically dead or statically dead. The **dynamically dead neuron** is a dead neuron in which not all of the weights have been pruned. Hence, it is not likely to be permanently dead and its state depends on its input. As an example, a neuron can be dead for a sample X, but it could be active (i.e., post-ReLU output > 0) for a sample Y . The DNR contributed by dynamically dead neurons is referred to as **dynamic DNR**. The **statically** dead neuron is a dead neuron in which all associated weights have been pruned. The DNR contributed by statically dead neurons is referred to as **static DNR**. Related Works. The phenomenon of dead ReLU neurons is a widely studied topic and inspire many interesting works Sokar et al. (2023); Dohare et al. (2021). DNR is a term that we introduce to quantify the sparsity introduced by ReLU. We note that DNR is closely related to activation sparsity (Kurtz & et al, 2020; Georgiadis, 2019) in the literature and in many scenarios, they can be interchangeable. The only difference could be that DNR is a more explicit term which quantifies the percent of dead ReLU neurons in the network. In the literature, many similar sparsity metrics have also been proposed (Hurley & Rickard, 2009). As an example, the Gini Index (Goswami et al., 2016) computed from Lorenz curve (i.e., plot the cumulative percentages of total quantities) can be used to evaluate the sparsity of network graphs. Another popular metric will be Hoyer measure (Hoyer, 2004) which is the ratio between L1 and L2 norms, can also be used to evaluate the sparsity of networks. Another interesting metric to mention will be parameter sparsity (Goodfellow et al., 2016) which computes the percentage of zero-magnitude parameters among all parameters. Both parameter sparsity and DNR will contribute to sparse representations, and in this paper, we use DNR to quantify the sparsity introduced by ReLU. Experiment Setup and Observations. Given the definition of DNR, static and dynamic DNR, we conduct pruning experiments using ResNet-20 on the CIFAR-10 dataset with the aim of examining the benefit (or lack thereof) of ReLU's sparsity for pruned networks. We iteratively prune ResNet-20 with a pruning rate of 20 (i.e., 20% of existing weights are pruned) using the Global Magnitude (i.e., prune weights with the smallest magnitude anywhere in the network). We refer to the standard implementation reported in (Renda et al., 2019) (i.e., SGD optimizer (Ruder, 2016), 100 training epochs and batch size of 128, learning rate warmup to 0.03 and drop by a factor of 10 at 55 and 70 epochs, with learning rate rewinding (Renda et al., 2019), but without weight rewinding (Frankle & Carbin, 2019)) and compute the static DNR and dynamic DNR while the network is iteratively pruned. The experimental results are shown in Fig. 1, where we make two observations as follows. 1. As shown in Fig. 1 (left), the value of DNR (i.e., sum of static and dynamic DNR) increases as the network is iteratively pruned. As expected, static DNR grows as more weights are pruned. 2. Surprisingly, dynamic DNR tends to decrease as the network is iteratively pruned (see Fig. 1 (left)), suggesting that pruned networks do not favor the sparsity of ReLU. In Fig. 1 (right), for pruned networks with different λ (i.e., percent of remaining weights), they have similar dynamic DNR at beginning, but the pruned network with smaller λ tends to have a smaller dynamic DNR during and after optimization. Result Analysis. One possible reason for the decrease in dynamic DNR could be due to the fact that once the neuron is dead, its gradient becomes zero, meaning that it stops learning and degrades the learning ability of the network (Lu et al., 2019; Arnekvist et al., 2020). This could be beneficial as existing networks tend to be overparameterized and dynamic DNR may help to reduce the occurrence of overfitting. However, for pruned networks whose learning ability are heavily degraded, the dynamic DNR could be harmful as a dead ReLU always outputs the same value (zero as it happens) for any given non-positive input, meaning that it takes no role in discriminating between inputs. Therefore, during retraining, the pruned network attempts to restore its performance by reducing its dynamic DNR so that the extracted information can be passed to the subsequent layers. Similar performance trends can be observed using VGG-19 with Global Taylor (see Fig. 4 in the **Appendix**). Next, we present a **theoretical study** of DNR and show its relevance to the network's ability to discriminate. ## 3.2 Theoretical Insights: Relevance To Information Bottleneck And Complexity Here, we present some theoretical results and subsequent insights that highlight the relevance of the dynamic DNR of a certain layer of the pruned network to the Information Bottleneck (IB) method proposed in (Tishby & Zaslavsky, 2015). In the IB setting, the computational flow is denoted as X −→ T −→ Y , where X represents the input, T represents the extracted representation, and Y represents the network's output. In (Tishby & Zaslavsky, 2015), the authors observed that the training of neural networks is essentially a process of minimizing the mutual information (Cover & Thomas, 2006) between X and T (denoted as I(X; T)) while keeping I(Y ; T) large (precisely what IB suggests). A consequence of this is that over-compressed features (very low I(X; T)) will not retain enough information to predict the labels, whereas under-compressed features (high I(X; T)) imply that more label-irrelevant information is retained in T which can adversely affect generalization performance. Next, we provide a few definitions. Definition 1. **Layer-Specific dynamic DNR (**DDNR(T)): We are given a dataset S = {X1*, ..., X*m}, where Xi ∼ P ∀i (i.i.d) and P is the data generating distribution. We denote the dynamic DNR of the neurons at a certain layer within the network represented by the vector T, by DDNR(T). DDNR(T) is computed over the entire distribution of input in P. Definition 2. **Layer-Specific static DNR (**SDNR(T)): In the same manner as DDNR(T), we define the layer-specific static DNR of a network layer T. With this, we now outline our first theoretical result which highlights the relevance of DDNR(T) and SDNR(T) to I(X; T), as follows. Theorem 1. We are given the computational flow X −→ T −→ Y , where T represents the features at some arbitrary layer within a network, which are represented with finite precision (e.g., float32 or float64). We consider the subset of network configurations for which (a) the activations in T are less than a threshold τ and (b) the zero-activation probability of each neuron in T is upper bounded by some pS < 1. Let dim(T) represent the dimensionality of T, i.e., the number of neurons at that depth. We then have, $$I(X;T)\leq C\times dim(T)\times\Big{(}1-S_{\mbox{\tiny{\it conv}}}(T)-D_{\mbox{\tiny{\it conv}}}(T)\Big{(}1-\frac{1}{C}\log\frac{1-S_{\mbox{\tiny{\it conv}}}(T)}{D_{\mbox{\tiny{\it conv}}}(T)}\Big{)}\Big{)},\tag{3}$$ for a finite, independent constant C that only depends on the network architecture, τ and pS. The following corollary addresses the dependencies of Theorem 1. The proof of Theorem 1 and Corollary 1 are provided in the **Appendix**. Corollary 1. In the setting of Theorem 1, the right hand side of equation 3 decreases as DDNR(T) or SDNR(T) increases. Remark 1. **(Relevance to Complexity)** We see that (Shamir et al., 2010) notes how the metric I(X; T) represents the *effective complexity* of the network. As Theorem 3 in (Shamir et al., 2010) shows, I(X; T) captures the dependencies between X and T and directly correlates with the network's ability to fit the data. Coupled with the observations from Theorem 1 and Corollary 1, for a fixed pruned network configuration ![5_image_0.png](5_image_0.png) Figure 2: Illustration of how AP works in tandem with existing pruning methods (e.g., method X). (i.e., fixed SDNR(T)), greater DDNR(T) will likely reduce the *effective complexity* of the network, undermining the function-fitting ability of the neural network. Remark 2. **(Motivation for AP)** Theorem 1 also shows that a pruned network, which possesses large SDNR(T), leads to a higher risk of *over-compression* of information (low I(X; T)). To address this issue, we can reduce the dynamic DNR (from Corollary 1) so that the upper bound of I(X; T) can be increased, mitigating the issue of *over-compression* for a pruned network. This agrees with our initial motivation that the sparsity introduced by ReLU is not beneficial for the pruned network and reducing dynamic DNR helps in avoiding over-compressed features while simultaneously increasing the effective complexity of the network. ## 3.3 Algorithm Of Activate-While-Pruning The experimental and theoretical results above suggest that, in order to better preserve the learning ability of pruned networks, a smaller dynamic DNR is preferred. This motivates us to propose Activate-while-Pruning (AP) which aims to explicitly reduce dynamic DNR. We note that the proposed AP does not work alone, as it does not evaluate the importance of weights. Instead, it serves as a booster to existing pruning methods and help to improve their pruning performance by reducing dynamic DNR (see Fig. 2). Assume that the pruning method X removes p% of weights in every pruning cycle (see the upper part in Fig. 2). After using AP, the overall pruning rate remains unchanged as p%, but (p − q)% of weights are pruned according to the pruning method X with the aim of pruning less important weights, while q% of weights are pruned according to AP (see the lower part in Fig. 2) with the aim of reducing dynamic DNR. Consider a network f(θ) with ReLU activation function. Two key steps to reducing dynamic DNR are summarized as follows. (1) Locate Dead ReLU Neurons. Consider a neuron in the hidden layer with ReLU activation function, taking n inputs {X1W1*, ..., X*nWn|Xi ∈ R is the input and Wi ∈ R is the associated weight}. Let j be the pre-activated output of the neuron (i.e., j =Pn i=1 XiWi) and J be the post-activated output of the neuron (J = *ReLU*(j)). Let L be the loss function and assume the neuron is dead (J = 0), then the gradient of its associated weights (e.g., W1) with respected to the loss function will be ∂L ∂W1 = ∂L ∂J · ∂J ∂j ·∂j ∂W1 = 0 as ∂J ∂j = 0. If a neuron is often dead during training, the weight movement of its associated weights is likely to be smaller than other neurons. Therefore, we compute the difference between weights at initialization (θ0) and the weights when the network convergences (θ∗), i.e., |θ∗ − θ0| and use it as a heuristic to locate dead ReLU neurons. (2) Activate Dead ReLU Neurons. Assume we have located a dead neuron in the hidden layer with n inputs {X1W1*, ..., X*nWn |Xi ∈ R is the input and Wi ∈ R is the associated weight}. We note that Xiis non-negative as Xiis usually the post-activated output from the previous layer (i.e., the output of ReLU is non-negative). Therefore, a straightforward way to activate the dead neuron is to prune the weights with the negative value. By pruning such negative weights, we can increase the value of the pre-activation output, which may turn the pre-activation output into positive so as to reduce dynamic DNR. Algorithm 1 The Pruning Metric of the Proposed AP Require: (i) Network f with unpruned weights θ0 at initialization, f(θ0); (ii) Network f with unpruned weights θ∗ at convergence, f(θ∗); (iii) Pruning Rate of AP, q; Locate Dead Neurons: Sort |θ∗ - θ0| in an ascending order. Activate Dead Neurons: In the ascending order of |θ∗ - θ0|, prune first q% negative weights. Algorithm 2 The Pruning Method X with and without AP Require: (i) Network, f(θ); (ii) Pruning Rate of Method X, p; (iii) Pruning Rate of AP, q; (iv) Pruning Cycles, n; (v) Pro_Flag = {0: AP-Lite, 1: AP-Pro}; ——————————— The Conventional Pruning Method X ——————————— for i = 1 to n do Randomly initialize unpruned weights, θ ← θ0. Train the network to convergence, arriving at parameters θ∗. Prune p % of θ∗ according to the pruning method X. end for Retrain: Retrain the network to recover its performance. ———————– The Conventional Pruning Method X with Proposed AP ———————– for i = 1 to n do Randomly initialize unpruned weights, θ ← θ0. Train the network to convergence, arriving at parameters θ∗. Prune (p - q) % of θ∗ according to the pruning method X. if Pro_Flag **then** \# Execution of AP-Pro (i) Pruning: Prune q % of parameter θ∗ *according to the metric of AP (see details in Algo. 1).* (ii) *Weight Rewinding: Reset the remaining parameters to their values in* θ0. (iii) *Retrain: Retrain the pruned network to recover its performance.* end if end for if NOT Pro_Flag **then** \# Execution of AP-Lite (i) Pruning: Prune q % of the parameters θ∗ *according to the metric of AP (see details in Algo. 1).* (ii) *Weight Rewinding: Reset the remaining parameters to their values in* θ0. (iii) *Retrain: Retrain the pruned network to recover its performance.* end if ## 3.4 How Ap Improves Existing Methods We now summarize how AP can improve existing pruning methods in Algorithm 2, where the upper part is the algorithm of a standard iterative pruning method called pruning method X and the lower part is the algorithm of method X with AP. The proposed AP has two variants: **AP-Pro** and **AP-Lite**. We note that both AP-Pro and AP-Lite contain the same three steps, summarized as follows. - **Step 1: Pruning.** Given a network at convergence with a set of dynamically dead ReLU neurons, N1 = {n1, n2, n3*, ...*}. The pruning step of AP aims to activate these dynamically dead ReLU neurons by pruning negative weights (i.e., see Algorithm 1), so as to preserve the learning ability of the pruned network. - **Step 2: Weight Rewinding.** Resetting unpruned weights to their values at the initialization. We note that different weight initializations could lead to different sets of N . In step 1, AP aims to reduce dynamic DNR for the target N1 and weight rewinding attempts to prevent the target N1 from changing too much. Since the weights of ReLU neurons in N1 have been pruned by AP, these neurons could become active during retraining. The effect of weight rewinding is evaluated via an ablation study. - **Step 3: Retraining.** Retrain the pruned network to recover performance. AP-Lite and AP-Pro. The key difference between AP-Lite and AP-Pro is that AP-Lite applies these three steps only once at the end of pruning. It aims to slightly improve the performance, but does not substantially increase the algorithm complexity. For AP-Pro, it applies the three steps above in every pruning cycle, which increases the algorithm complexity (mainly due to the retraining step), but aims to significantly improve the performance, which could be preferred in performance oriented tasks. Difference with Existing Works. The AP is a pruning method which does not work alone as AP's pruning metric cannot evaluate the importance of weights (verified in Section 4.3). AP works in tandem with existing pruning methods and help to further pruned the network by reducing the occurrence of dynamic dead neurons (i.e., decrease activation sparsity). This is different from existing works (Raihan & Aamodt, 2020; Liu et al., 2022b; Akiva-Hochman et al., 2022; Gupta et al., 2019) which jointly optimize weight and activation sparsity for computation acceleration, the proposed AP investigates the interaction of weight and activation sparsity from a new perspective, i.e., how to tradeoff activation sparsity for more weight sparsity. ## 4 Performance Evaluation In Section 4.1, we first summarize the experiment setup. Next, in Section 4.2, we compare and analyze the results obtained. In Section 4.3, we conduct an ablation study to evaluate the effectiveness of several components in AP. Lastly, in Section 4.4, we conduct a substitution study on the proposed AP. ## 4.1 Experiment Setup (1) Experiment Details. To demonstrate that AP can work well with different pruning methods, we shortlist two classical and competitive pruning methods. The details are summarized as follows. 1. Pruning ResNet-20 on the CIFAR-10 dataset using Global Magnitude with and without AP. 2. Pruning VGG-19 on the CIFAR-10 dataset using Global Taylor with and without AP. 3. Pruning DenseNet-40 (Huang et al., 2017) on CIFAR-100 using Layer-Adaptive Magnitude-based Pruning (LAMP) (Lee et al., 2020) with and without AP. 4. Pruning MobileNetV2 (Sandler et al., 2018) on the CIFAR-100 dataset using Lookahead Pruning (LAP) (Park et al., 2020) with and without AP. 5. Pruning ResNet-50 (He et al., 2016) on the ImageNet (i.e., ImageNet-1000) using Iterative Magnitude Pruning (IMP) (Frankle & Carbin, 2019) with and without AP. 6. Pruning Vision Transformer (ViT-B-16) on CIFAR-10 using IMP with and without AP. We train the network using SGD with He initialization (He et al., 2015), momentum = 0.9 and a weight decay of 1e-4 (same as (Renda et al., 2019; Frankle & Carbin, 2019)). For the benchmark pruning method, we prune the network with a pruning rate p = 20 (i.e., 20% of existing weights are pruned) in 1 pruning cycle. After using AP, the overall pruning rate remains unchanged as 20%, but 2% of existing weights are pruned based on AP, while the other 18% of existing weights are pruned based on the benchmark pruning method to be compared with (see Algorithm 2). We repeat 25 pruning cycles in 1 run and use the early-stop top-1 test accuracy (i.e., the corresponding test accuracy when early stopping criteria for validation error is met) to evaluate the performance. The experimental results averaged over 5 runs and the corresponding standard deviation are summarized in Tables 1 - 6, where λ is the percentage of weights remaining. The bolded results indicate that AP is significantly better than benchmarks results after accounting for the standard deviation. (2) Hyper-parameter Selection and Tuning. To ensure fair comparison against prior results, we utilize standard implementations (i.e., network hyper-parameters and learning rate schedules) reported in the literature. Specifically, the implementations for Tables 1 - 6 are from (Frankle & Carbin, 2019), (Zhao et al., 2019), (Chin et al., 2020), (Renda et al., 2019) and (Dosovitskiy et al., 2020). The implementation details can be found in Section B.2 of the **Appendix**. In addition, we also tune hyper-parameters for each experiment setup mentioned above using the validation dataset via grid search. Some hyper-parameters are tuned as follows. (i) The training batch size is tuned from {64, 128, ...., 1024}. (ii) The learning rate is tuned from 1e-3 to 1e-1 via a stepsize of 2e-3. (iii) The number training epochs is tuned from 80 to 500 with a stepsize of 20. The validation performance using our tuned parameters are close to that of using standard implementations. Therefore, we use standard implementations reported in the literature to reproduce benchmark results. | Original Top-1 Test Accuracy: 91.7% (λ = 100%) | | | | | |--------------------------------------------------|------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | Global Magnitude | 90.3 ± 0.4 | 89.8 ± 0.6 | 88.2 ± 0.7 | 81.2 ± 1.1 | | Global Magnitude with AP-Lite | 90.4 ± 0.7 | 90.2 ± 0.8 | 88.7 ± 0.7 | 82.4 ± 1.4 | | Global Magnitude with AP-Pro | 90.7 ± 0.6 | 90.4 ± 0.4 | 89.3 ± 0.8 | 84.1 ± 1.1 | | Original Top-1 Test Accuracy: 92.2% (λ = 100%) | | | | | |--------------------------------------------------|------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | Global Taylor | 90.2 ± 0.5 | 89.8 ± 0.8 | 89.2 ± 0.8 | 76.9 ± 1.1 | | Global Taylor with AP-Lite | 90.5 ± 0.8 | 90.3 ± 0.7 | 89.7 ± 0.9 | 78.4 ± 1.4 | | Global Taylor with AP-Pro | 90.8 ± 0.6 | 90.7 ± 0.9 | 90.4 ± 0.8 | 79.2 ± 1.3 | Table 1: Performance (top-1 test accuracy ± standard deviation) of pruning ResNet-20 on CIFAR-10 using Global Magnitude with and without the proposed AP. Table 2: Performance (top-1 test accuracy ± standard deviation) of pruning VGG-19 on CIFAR-10 using Global Taylor with and without the proposed AP. (3) Reproducing Benchmark Results. By using the implementations reported in the literature, we have correctly reproduced the benchmark results. For example, the benchmark results in our Tables 1 - 6 are comparable to Fig.11 and Fig.9 of (Blalock et al., 2020), Table.4 in (Liu et al., 2019), Fig.3 in (Chin et al., 2020), Fig. 10 in (Frankle et al., 2020), Table 5 in (Dosovitskiy et al., 2020), respectively. (4) Source Code & Devices: We use Tesla V100 devices to conduct our experiments. The datasets are preprocessed using the conventional method. The source code is available at https://github.com/ Martin1937/Activate-While-Pruning. ## 4.2 Performance Comparison (1) Performance using Classical Pruning Methods. In Tables 1 & 2, we show the performance of AP using classical pruning methods (e.g., Global Magnitude, Global Taylor) via ResNet-20 and VGG-19 on CIFAR-10. We observe that as the percent of weights remaining, λ decreases, the improvement of AP becomes larger. For example, in Table 1, the performance of AP-Lite at λ = 26.2% is 1.3% higher than the benchmark result. The improvement increases to 2.6% at λ = 5.7%. Note that AP-Lite does not increase the algorithm complexity of existing methods. As expected, in Table 1, AP-Pro leads to a more significant improvement of 2.0% and 4.1% at λ = 26.2% and λ = 5.7%, respectively. Similar performance trends can be observed in Table 2 as well. The results for more values of λ **can be found in the Appendix**. (2) Performance using Competitive and Classical Pruning Methods. In Tables 3 and 4, we show that AP can work well with Competitive pruning methods (e.g., LAMP, LAP). In Table 3, we show the performance of AP using LAMP via DenseNet-40 on CIFAR-100. We observe that AP-Lite improves the performance of LAMP by 1.2% at λ = 13.4% and the improvement increases to 1.6% at λ = 5.7%. Note that AP-Lite does not increase the algorithm complexity of existing methods. For AP-Pro, it causes a larger improvement of 4.6% and 3.8% at λ = 13.4% and λ = 5.7%, respectively. Similar performance trends can be observed in Table 4, where we show the performance of AP using LAP via MobileNetV2 on CIFAR-100. (3) Performance on ImageNet. In Table 5, we show the performance of AP using Iterative Magnitude Pruning (IMP, i.e., the lottery ticket hypothesis pruning method) via ResNet-50 on ImageNet (i.e., the ILSVRC version) which contains over 1.2 million images from 1000 different classes. We observe that APLite improves the performance of IMP by 1.5% at λ = 5.7%. For AP-Pro, it improves the performance of IMP by 2.8% at λ = 5.7%. | | Original Top-1 Test Accuracy: 74.6% (λ = 100%) | | | | |-------------------|--------------------------------------------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | LAMP | 71.5 ± 0.7 | 69.6 ± 0.8 | 65.8 ± 0.9 | 61.2 ± 1.4 | | LAMP with AP-Lite | 71.9 ± 0.8 | 70.3 ± 0.7 | 66.6 ± 0.7 | 62.2 ± 1.2 | | LAMP with AP-Pro | 72.2 ± 0.7 | 71.1 ± 0.7 | 68.8 ± 0.9 | 63.5 ± 1.5 | | Original Top-1 Test Accuracy: 73.7% (λ = 100%) | | | | | |--------------------------------------------------|------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | LAP | 72.1 ± 0.8 | 70.5 ± 0.9 | 67.3 ± 0.8 | 64.8 ± 1.5 | | LAP with AP-Lite | 72.5 ± 0.9 | 70.9 ± 0.8 | 68.2 ± 1.2 | 66.2 ± 1.5 | | LAP with AP-Pro | 72.8 ± 0.7 | 71.4 ± 0.8 | 69.1 ± 0.8 | 67.4 ± 1.1 | | Original Top-1 Test Accuracy: 77.0% (λ = 100%) | | | | | |--------------------------------------------------|------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | IMP | 76.8 ± 0.2 | 76.4 ± 0.3 | 75.2 ± 0.4 | 71.5 ± 0.4 | | IMP with AP-Lite | 77.2 ± 0.3 | 76.9 ± 0.4 | 76.1 ± 0.3 | 72.6 ± 0.5 | | IMP with AP-Pro | 77.5 ± 0.4 | 77.2 ± 0.3 | 76.8 ± 0.6 | 73.5 ± 0.4 | | Original Top-1 Test Accuracy: 98.0% (λ = 100%) | | | | | |--------------------------------------------------|------------|------------|------------|------------| | λ | 32.8% | 26.2% | 13.4% | 5.72% | | IMP | 97.3 ± 0.6 | 96.8 ± 0.7 | 88.1 ± 0.9 | 82.1 ± 0.9 | | IMP with AP-Lite | 98.0 ± 0.4 | 97.3 ± 0.7 | 89.9 ± 0.6 | 83.6 ± 0.8 | | IMP with AP-Pro | 98.2 ± 0.6 | 97.6 ± 0.5 | 91.1 ± 0.8 | 84.8 ± 1.0 | Table 3: Performance (top-1 test accuracy ± standard deviation) of pruning DenseNet-40 on CIFAR-100 using Layer-Adaptive Magnitude Pruning (LAMP) with and without the proposed AP. Table 4: Performance (top-1 test accuracy ± standard deviation) of pruning MobileNetV2 on CIFAR-100 using Lookahead Pruning (LAP) with and without the proposed AP. Table 5: Performance (top-1 validation accuracy ± standard deviation) of pruning ResNet-50 on ImageNet using Iterative Magnitude Pruning (IMP) with and without AP. Table 6: Performance (top-1 test accuracy ± standard deviation) of pruning Vision Transformer (ViT-B-16) on CIFAR-10 using IMP with and without AP. (3) Performance on Competitive Networks (Vision Transformer). Several recent works (Liu et al., 2021b; Yuan et al., 2021; Chen et al., 2021) demonstrated that transformer based networks tend to provide excellent performance in computer vision tasks. We now examine the performance of AP using Vision Transformer (i.e., ViT-B16 with a resolution of 384, pretrained on ImageNet dataset). We note that the ViT-B16 uses Gaussian Error Linear Units (GELU, GELU(x) = xΦ(x), where Φ(x) is the standard Gaussian cumulative distribution function) as the activation function. Similar to ReLU which blocks the negative preactivation output, GELU heavily regularizes the negative pre-activation output by multiplying an extremely small value of Φ(x), suggesting that AP could be helpful with pruning GELU based models as well. We repeat the same experiment setup as above and evaluate the performance of AP using ViT-B16 in Table 6. We observe that AP-Lite helps to improve the performance of IMP by 1.8% at λ = 5.7%. For AP-Pro, it improves the performance of IMP by 3.3% at λ = 5.7%. | λ | 32.8% | 26.2% | 13.4% | 5.72% | |---------------|------------|------------|------------|------------| | AP-Lite | 90.4 ± 0.7 | 90.2 ± 0.8 | 88.7 ± 0.7 | 82.4 ± 1.1 | | AP-Lite-SOLO | 86.0 ± 1.0 | 84.3 ± 1.5 | 81.5 ± 2.0 | 74.5 ± 3.1 | | AP-Lite-NO-WR | 87.5 ± 0.9 | 87.1 ± 1.2 | 84.7 ± 1.5 | 78.8 ± 2.3 | | λ | 32.8% | 26.2% | 13.4% | 5.72% | |--------------|------------|------------|------------|------------| | AP-Pro | 90.8 ± 0.6 | 90.7 ± 0.9 | 90.4 ± 0.8 | 79.2 ± 1.3 | | AP-Pro-SOLO | 85.8 ± 1.5 | 83.2 ± 1.7 | 81.5 ± 1.9 | 70.3 ± 2.7 | | AP-Pro-NO-WR | 88.1 ± 1.2 | 86.3 ± 1.5 | 85.6 ± 1.5 | 74.8 ± 2.1 | | λ | 32.8% | 26.2% | 13.4% | 5.72% | |---------------|------------|------------|------------|------------| | AP-Lite | 77.2 ± 0.3 | 76.9 ± 0.4 | 76.1 ± 0.3 | 72.6 ± 0.5 | | AP-Lite-SOLO | 75.8 ± 0.5 | 74.3 ± 0.7 | 71.1 ± 0.6 | 68.5 ± 0.9 | | AP-Lite-NO-WR | 76.3 ± 0.6 | 74.9 ± 0.8 | 73.2 ± 0.8 | 70.3 ± 1.1 | Table 7: Ablation Study: Performance Comparison (top-1 test accuracy ± standard deviation) between AP-Lite and AP-SOLO, AP-Lite-NO-WR on pruning ResNet-20 on CIFAR-10 via Global Magnitude. Table 8: Ablation Study: Performance Comparison (top-1 test accuracy ± standard deviation) between AP-Pro and AP-Pro-SOLO, AP-Pro-NO-WR on pruning VGG-19 using CIFAR-10 via Global Taylor. Table 9: Ablation Study: Performance Comparison (top-1 test accuracy ± standard deviation) between AP-Lite and AP-Lite-SOLO, AP-Lite-NO-WR on pruning ResNet-50 on ImageNet via IMP. ## 4.3 Ablation Study We now conduct an ablation study to evaluate the effectiveness of components in AP. Specifically, we remove one component at a time in AP and observe the impact on the pruning performance. 1. **AP-(Lite/Pro)-NO-WR**: Using AP without the weight rewinding step (i.e., remove step (ii) from Algo. 2). This aims to evaluate the effect of weight rewinding on the pruning performance. 2. **AP-(Lite/Pro)-SOLO**: Using only AP-(Lite/Pro) without the benchmark pruning method (i.e., in every pruning cycle, pruning weights only based on AP). This aims to evaluate if the pruning metric of AP alone can be used to evaluate the importance of weights. In Tables 7 and 8, we conduct experiments of pruning ResNet-20 on the CIFAR-10 dataset using Global Magnitude (AP-Lite) and pruning VGG-19 on CIFAR-10 using Global Taylor (AP-Pro) respectively. In each case, we compare the performance of AP-(Lite/Pro)-NO-WR, AP-(Lite/Pro)-SOLO to AP-(Lite/Pro) so as to demonstrate the effectiveness of components in AP. We note that, same as before, we utilize the implementation reported in the literature. Specifically, the hyper-parameters and the learning rate schedule are from (Frankle & Carbin, 2019). Effect of Weight Rewinding. In Tables 7 and 8, we compare the performance of AP-(Lite/Pro)-NO-WR to AP-(Lite/Pro). The key difference is that AP-(Lite/Pro) uses weight rewinding (see Algorithm 2) whereas the NO-WR approaches do not. We find that the performance of AP-(Lite/Pro) is always higher across all λ. For instance, at λ = 5.72%, AP-Pro-NO-WR yields an accuracy of 74.8%, which is 4.4% lower than AP-Pro itself. This suggests the crucial role of weight rewinding in improving the performance. | λ | 32.8% | 26.2% | 13.4% | 5.72% | |-----------------------|-------------|-------------|-------------|-------------| | Global Magnitude (GM) | 90.3 (6.7%) | 89.8 (6.3%) | 88.2 (5.8%) | 81.2 (5.4%) | | GM + AP-Pro | 90.7 (6.3%) | 90.4 (5.9%) | 89.3 (5.1%) | 84.1 (3.9%) | | GM + AP-output | 90.6 (6.2%) | 90.5 (5.8%) | 89.4 (5.1%) | 84.4 (3.8%) | | GM + AP-bias-0.2 | 90.1 (6.6%) | 89.4 (6.3%) | 88.1 (5.8%) | 82.3 (5.4%) | | GM + AP-bias-0.5 | 89.9 (6.5%) | 89.6 (6.1%) | 86.9 (5.5%) | 82.8 (4.8%) | | GM + AP-bias-1.0 | 89.5 (6.1%) | 88.7 (5.7%) | 87.0 (5.0%) | 81.5 (3.6%) | | λ | 32.8% | 26.2% | 13.4% | 5.72% | |-------------------|-------------|-------------|-------------|-------------| | LAP | 72.1 (7.8%) | 70.5 (6.9%) | 67.3 (6.5%) | 64.8 (6.1%) | | LAP + AP-Pro | 72.8 (7.4%) | 71.4 (6.4%) | 69.1 (5.6%) | 67.4 (5.1%) | | LAP + AP-output | 72.5 (7.6%) | 71.7 (6.2%) | 69.3 (5.8%) | 67.2 (5.3%) | | LAP + AP-bias-0.2 | 71.8 (7.8%) | 70.2 (6.8%) | 66.8 (6.4%) | 63.3 (6.1%) | | LAP + AP-bias-0.5 | 71.4 (7.7%) | 70.4 (6.6%) | 67.1 (6.2%) | 65.1 (5.8%) | | LAP + AP-bias-1.0 | 71.2 (7.5%) | 70.1 (6.5%) | 67.7 (6.1%) | 65.9 (5.4%) | Table 10: Performance comparison (i.e., top-1 test accuracy (dynamic DNR)) between AP-Pro and alternative methods in terms of dead neuron location and activation (rows 4 - 7) on ResNet-20 using CIFAR-10. Table 11: Performance comparison (i.e., top-1 test accuracy (dynamic DNR)) between AP-Pro and alternative methods in terms of dead neuron location and activation (rows 4 - 7) on MobileNetV2 using CIFAR-100. When AP Works Solely. The pruning metric of AP (see Algorithm 1) aims to reduce dynamic DNR by pruning. We compare the performance of AP-(Lite/Pro)-SOLO to AP-(Lite/Pro) to evaluate if the pruning metric of AP can be used solely, without working with other pruning methods. In Tables 7 and 8, we observe that the SOLO methods perform much worse. For example, at λ = 5.72%, the performance of AP-LiteSOLO is 74.5, which is 7.9% lower than AP-Lite. It suggests that the pruning metric of AP alone is not suitable to evaluate the importance of weights. The effect of AP's metric on reducing dynamic DNR and its pruning rate q on pruning performance are discussed in Section 5. More Results using ResNet-50 on ImageNet. To further validate the results, we also conduct the ablation study using AP-Lite on ImageNet (ResNet-50). We summarize the results in Table 9, and we find that the results largely mirror those in Tables 7 and 8. Thus, both SOLO and NO-WR approaches perform significantly worse than the baseline. ## 4.4 Substitution Study In this subsection, we conduct a substitution study on the proposed AP. Specifically, AP consists of two key components: dead ReLU neuron location and activation (see Algorithm 1). We replace one component at a time and observe the effect on dynamic DNR reduction and pruning performance (i.e., accuracy). we construct two variants of AP as follows and compare to the original AP-Pro. 1. **AP-output**: We replace the existing dead neuron location mechanism (i.e., weight movement) with directly observing the post-activated output of each ReLU neuron. 2. **AP-bias-k**: We replace the existing dead neuron activation mechanism (i.e., prune negative weights) by adding a constant value k to the bias of dead ReLU neurons. The results of ResNet-20 & MobileNetV2 are summarized in Tables 10 - 11. The value in parentheses is dynamic DNR. Note that static DNR values remain roughly the same for all methods in each column and are therefore not shown. This is mainly because that majority of pruned weights are determined by the pruning method that AP works with (more details in Section 5). The findings from Tables 10 - 11 are as follows: | AP Pruning Rate, q | 1% | 2% | 3% | 5% | |----------------------|------------|------------|------------|------------| | AP-Lite (λ = 64.0%) | 89.8 ± 0.1 | 90.0 ± 0.2 | 89.5 ± 0.4 | 89.2 ± 0.6 | | AP-Lite (λ = 40.9%) | 88.2 ± 0.4 | 88.9 ± 0.6 | 88.5 ± 0.7 | 87.3 ± 0.5 | | AP-Lite (λ = 26.2%) | 87.1 ± 0.5 | 87.9 ± 0.8 | 86.7 ± 0.8 | 86.3 ± 0.9 | | AP Pruning Rate, q | 1% | 2% | 3% | 5% | |----------------------|------------|------------|------------|------------| | AP-Lite (λ = 64.0%) | 91.1 ± 0.2 | 91.7 ± 0.3 | 90.2 ± 0.5 | 89.8 ± 0.6 | | AP-Lite (λ = 40.9%) | 88.7 ± 0.5 | 89.8 ± 0.9 | 89.3 ± 1.1 | 88.0 ± 0.9 | | AP-Lite (λ = 26.2%) | 87.9 ± 0.8 | 88.5 ± 0.7 | 88.1 ± 1.3 | 87.1 ± 1.3 | Table 12: Performance (top-1 test accuracy ± standard deviation) of AP-Lite when iterative pruning ResNet20 on CIFAR-10 with different pruning rate. Table 13: Performance (top-1 test accuracy ± standard deviation) of AP-Lite when iterative pruning VGG19 on CIFAR-10 with different pruning rate. 1. **Effect of AP on reducing dynamic DNR.** When conventional pruning methods work with AP, the dynamic DNR is significantly reduced (compare GM to GM + AP-Pro in Table 10). 2. **Ceiling analysis on dead neuron location.** The performance of AP-Pro and AP-output is comparable in terms of reducing dynamic DNR and pruning performance. This suggests that AP-Pro works as expected and is able to locate dynamic dead neurons as if it directly observes the post-activated output. 3. **Implementation complexity.** We note that implementing AP-output is more complex than AP-Pro. AP-output requires practitioners to record the state of each neuron and then average over every training batch. For AP-Pro, we only need to compute the difference between two weight matrices. Given that AP-Pro can provide comparable performance to AP-output, and is relatively simpler to implement. As such, we still recommend using weight movements to locate dead ReLU neurons, as AP-Pro does now. 4. **Comparison to AP-bias-k.** When comparing AP-bias to AP-Pro, we find that a small value of k fails to reduce dynamic DNR as AP-Pro does. While for a large value of k, it can directly reduce dynamic DNR, but the performance performance is still not comparable to AP-Pro. We suspect that adding a large bias may hinder the optimization of the network during retraining, leading to uncompetitive results. ## 5 Reflections In this section, we discuss several important points and present some experimental results. (1) Pruning Rate of AP, q. Active Pruning removes q% of remaining parameters in every pruning cycle, so as to reduce dynamic DNR. The value of q is usually much smaller than the pruning rate of the pruning method it works with. As an example, in Section 4, the overall pruning rate is fixed as 20% and 2% of weights are pruned based on Active Pruning, which is much smaller than the pruning rate of the benchmark method compared with (i.e., 18%). Adjusting the value of q is a trade-off between pruning less important weights and reducing dynamic DNR. A large q value indicates preferential reduction of dynamic DNR, while a small q value means preferential removal of less important weights. We repeat the experiments of pruning ResNet-20 on CIFAR-10 using Global Magnitude and AP-Lite. We note that the overall pruning rate is fixed as 20% and the pruning rate of AP increases from 1% to 5%. Correspondingly, the pruning rate of Global Magnitude decreases from 19% to 15%. The experimental results are summarized in Table 12. We observe that as we increase the pruning rate of AP from 2%, the performance tends to decrease. Similar performance trends can be observed using VGG-19 on CIFAR-10 as well (see Table 13). The theoretical determination of the optimal value of q is clearly worth deeper thought. Alternatively, q can be thought of as a hyper-parameter and tuned via the validation dataset and let q = 2 could be a good choice as it provides promising results in various experiments. ![13_image_0.png](13_image_0.png) Figure 3: The value of q-best and q-feasible-region as p gradually increases when iteratively pruning ResNet20 (Left) and VGG-19 (Right) on CIFAR-10 using Global Magnitude and Global Taylor, respectively. (2) Relationship between p and q. In the above experiments, we examine feasible values of AP's pruning rate q when the overall pruning rate p is fixed at 20%. We now conduct experiments to find feasible values of q when p is changing, so as to suggest the relationship between them. We conduct experiments with p values from [10%, 15%, ..., 30%] on ResNet-20 and VGG-19. We gradually increase the value of q with a step size of 0.25% and define two terms about q as follows. 1. **q-best**: The value of q that provides the best pruning performance in terms of accuracy. 2. **q-feasible-region**: The value region of q that provides comparable performance to q-best (i.e., < 0.5% accuracy difference). The results are depicted in Fig 3, where we make two observations: (i) As p increases, q should increase as well. (ii) A heuristic of q = 0.1p seems to be a promising method to determine the value of q. Alternatively, q can also be considered as a hyperparameter and tuned via validation dataset. (3) Comparison to Activation Sparsity Baselines. We note that AP works in tandem with existing pruning methods and decreases the activation sparsity of pruned networks by pruning its negative weights. There could be other approaches which can decrease the activation sparsity as well, such as L1 regularization (Georgiadis, 2019) and boosted Hoyer Regularization (Kurtz & et al, 2020). Specifically, L1 regularization and boosted Hoyer regularization can be applied in the opposite direction to decrease the activation sparsity. We now replace AP with activation sparsity baselines (e.g., boosted Hoyer regularization) and examine the performance when they work in tandem with existing pruning methods. The results are summarized in Tables 14 - 15. The takeaway message is two-fold: (i) When conventional pruning methods work with AP, this leads to a better performance than working with other activation sparsity baselines. The reason could be that AP explicitly targets dynamic dead neurons by pruning negative weights. This decreases the activation sparsity in a more precise manner. While other activation sparsity baselines use an augmented loss function and the decrease of activation sparsity becomes implicit and hard to control during optimization (i.e., the selection of which neuron is activated is not clear and there is no precise control). (ii) Interestingly, when we gradually prune the network, using conventional pruning methods with activation sparsity baselines outperforms the original counterpart (i.e., using conventional pruning method solely). For example, compare Global Magnitude + Booster Hoyer to Global Magnitude when λ = 5.72%. This suggests that activation sparsity approaches can help to improve the performance of pruning methods. New methods to reduce activation sparsity to obtain more weighted sparsity are definitely worth exploring. (4) Static DNR. During iterative pruning, the static DNR tends to increase as expected (see Fig. 1). It is interesting to mention that, after incorporating with AP, the static DNR of pruned networks remains almost the same as opposed to without. For example, the static DNR of Global Magnitude in Table 10 (second row) increases from 7.1% to 8.4% and finally reaches 15.9% when λ decreases from 32.8% to 26.2% and finally to | λ | 32.8% | 26.2% | 13.4% | 5.72% | |------------------------|------------|------------|------------|------------| | Global Magnitude (GM) | 90.3 ± 0.4 | 89.8 ± 0.6 | 88.2 ± 0.7 | 81.2 ± 1.1 | | GM + AP-Pro | 90.7 ± 0.6 | 90.4 ± 0.4 | 89.3 ± 0.8 | 84.1 ± 1.1 | | GM + Booster Hoyer | 89.5 ± 0.6 | 88.7 ± 0.8 | 86.1 ± 1.0 | 82.5 ± 0.7 | | GM + L1 Regularization | 89.8 ± 0.5 | 88.3 ± 0.7 | 85.6 ± 0.9 | 81.9 ± 0.8 | | λ | 32.8% | 26.2% | 13.4% | 5.72% | |------------------------|------------|------------|------------|------------| | Global Taylor (GT) | 90.2 ± 0.5 | 89.8 ± 0.8 | 89.2 ± 0.8 | 76.9 ± 1.1 | | GT + AP-Pro | 90.8 ± 0.6 | 90.7 ± 0.9 | 90.4 ± 0.8 | 79.2 ± 1.3 | | GT + Booster Hoyer | 89.8 ± 0.7 | 88.6 ± 0.6 | 87.1 ± 1.2 | 77.5 ± 0.9 | | GT + L1 Regularization | 89.2 ± 0.4 | 88.1 ± 0.7 | 86.3 ± 1.0 | 77.1 ± 1.2 | Table 14: Performance (top-1 test accuracy ± standard deviation) of pruning ResNet-20 on CIFAR-10 using Global Magnitude with the proposed AP and other activation sparsity baselines. Table 15: Performance (top-1 test accuracy ± standard deviation) of pruning VGG-19 on CIFAR-10 using Global Taylor with the proposed AP and other activation sparsity baselines. 13.4%. After incorporating with AP-Pro (third row in Table 10), the static DNR almost remains the same. This is mainly because that the majority of pruned weights are still determined by the Global Magnitude (i.e., 18%) while only 2% of pruned weights are determined by AP. The Theorem 1 shows that, in addition to dynamic DNR, reducing static DNR also can improve the upper bound of I(X; T). In fact, reducing static DNR has been incorporated directly or indirectly into the existing pruning methods. As an example, LAMP (i.e., one Competitive pruning method used in performance evaluation, see Table 3) takes the number of unpruned weights of neurons/layers into account and avoids pruning weights from neurons/filters with less number of unpruned weights. This prevents neurons from being statically dead. Differing from existing methods, AP is the first method targeting the dynamic DNR. Hence, as a method that works in tandem with existing pruning methods, AP improves existing methods by filling in the gap in reducing dynamic DNR, leading to much better pruning performance. (5) Working with Non-ReLU based Networks. We would like to highlight that AP also works well with non-ReLU based networks. For example, in Table 5, we show the performance of AP using Vision Transformer which uses GELU as the activation function. In this setup, AP also leads to an improvement of 2% - 3%. We posit that this could be due to the fact that, similar to ReLU which blocks the negative preactivation output, GELU heavily regularizes the negative pre-activation output by multiplying an extremely small value of Φ(x), suggesting that AP could be helpful with pruning GELU based models as well. (6) Future Research. (i) We only examine the effect of AP on network pruning using image datasets. In fact, AP may not be limited to this, but can also be applied to dynamic sparse training algorithms or for NLP tasks. (ii) The activation sparsity methods which enforce activation sparsity could also be used in the opposite way to decrease the activation sparsity. Such methods could be an alternative for AP and new methods to reduce activation sparsity to obtain more weight sparsity are definitely worth exploring. (iii) The pruning rate of AP q is a important hyperparameter to tune and may significantly affect the performance. In above, we suggest the heuristic of q = 0.1p to determine q from the overall pruning rate p. A theoretical way to determine the value of q is also worth exploring and q may change in a non-linear way with p. ## 6 Conclusion In this paper, we propose a new pruning method called Activate-while-Pruning (AP). Unlike existing pruning methods which remove less important parameters, the proposed AP works in tandem with existing pruning methods and aims to improve their pruning performance by de-sparsifying pruned networks. It is also interesting to mention that the proposed AP studies the interaction of weight and activation sparsity from a new perspective, i.e., how to tradeoff activation sparsity for more weight sparsity. Theoretically, we show the benefits of de-sparsifying pruned networks from the perspective of information bottleneck. Empirically, we use six different sets of experiments to demonstrate that AP can work well with a diverse range of networks (e.g., ResNet, VGG, DenseNet, MobileNet) and pruning methods (e.g., IMP, LAP, LAMP, etc) on both CIFAR-10/100 and ImageNet. It should be noted that AP is a generic approach, and by using the proposed AP, the pruning performance of existing pruning methods can be improved by 3% - 8%. Furthermore, we conduct an ablation study to further investigate and demonstrate the effectiveness of several key components that make up the proposed AP. Lastly, we conduct a substitution study to replace certain components in AP with alternative methods, further verifying the design of the proposed AP. ## Acknowledgements This research is supported by A*STAR, CISCO Systems (USA) Pte. Ltd and the National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002). Additionally, we would like to thank the members of the Kent-Ridge AI research group at the National University of Singapore for helpful feedback and interesting discussions on this work. ## References Ruth Akiva-Hochman, Shahaf E Finder, Javier S. Turek, and Eran Treister. Searching for n: M fine-grained sparsity of weights and activations in neural networks. In European Conference on Computer Vision (ECCV), pp. 130–143. Springer, 2022. Isac Arnekvist, J. Frederico Carvalho, Danica Kragic, and Johannes A. Stork. The effect of target normalization and momentum on dying relu. *CoRR*, abs/2005.06195, 2020. Davis Blalock et al. What is the state of neural network pruning? In *Proceedings of the Machine Learning* and Systems (MLSys), 2020. Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 357–366, 2021. Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Can you win everything with a lottery ticket? *Transactions of Machine Learning Research (TMLR)*, 2022. URL https://openreview.net/forum?id=JL6MU9XFzW. Ting-Wu Chin, Ruizhou Ding, Cha Zhang, and Diana Marculescu. Towards efficient model compression via learned global ranking. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition (CVPR), pp. 1518–1528, 2020. Thomas M Cover and Joy A Thomas. *Elements of Information Theory, 2nd edition*. John Wiley & Sons, 2006. Xiyang Dai, Yinpeng Chen, Bin Xiao, Dongdong Chen, Mengchen Liu, Lu Yuan, and Lei Zhang. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7373–7382, 2021. Jia Deng et al. Imagenet: A large-scale hierarchical image database. In *Proceedings of IEEE/CVF Conference* on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009. Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. *arXiv preprint: 1306.0543*, 2014. Shibhansh Dohare, A. Rupam Mahmood, and Richard S. Sutton. Continual backprop: Stochastic gradient descent with persistent randomness. *CoRR*, abs/2108.06325, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint:2010.11929*, 2020. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. *arXiv 1911.11134 cs.LG*, 2021. Utku Evci, Yani A. Ioannou, Cem Keskin, and Yann Dauphin. Gradient flow in sparse neural networks and how lottery tickets win. *arXiv 2010.03533 cs.LG*, 2022. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In *Proceedings of the International Conference on Learning Representations (ICLR)*, 2019. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. Stabilizing the lottery ticket hypothesis. *arXiv preprint arXiv:1903.01611*, 2019. Jonathan Frankle et al. Linear mode connectivity and the lottery ticket hypothesis. In Proceedings of the International Conference on Machine Learning (ICML), pp. 3259–3269, 2020. Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. *arXiv preprint* arXiv:1902.09574, 2019. Georgios Georgiadis. Accelerating convolutional neural networks via activation map compression. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 7085–7095, 2019. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 315–323. JMLR Workshop and Conference Proceedings, 2011. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016. Swati Goswami, C. A. Murthy, and Asit K. Das. Sparsity measure of a network graph: Gini index, 2016. Udit Gupta, Brandon Reagen, Lillian Pentecost, Marco Donato, Thierry Tambe, Alexander M Rush, GuYeon Wei, and David Brooks. Masr: A modular accelerator for sparse rnns. In *2019 28th International* Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 1–14. IEEE, 2019. Song Han et al. Learning both weights and connections for efficient neural network. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pp. 1135–1143, 2015. Kaiming He et al. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, pp. 1026–1034, 2015. Kaiming He et al. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. Yang He, Yuhang Ding, Ping Liu, Linchao Zhu, Hanwang Zhang, and Yi Yang. Learning filter pruning criteria for deep convolutional neural networks acceleration. In *Proceedings of the IEEE/CVF conference* on Computer Vision and Pattern Recognition (CVPR), pp. 2009–2018, 2020. Patrik O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research (JMLR), 5(9), 2004. Gao Huang et al. Densely connected convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4700–4708, 2017. Niall P. Hurley and Scott T. Rickard. Comparing measures of sparsity. *arXiv preprint: 0811.4706*, 2009. KJ Joseph, Salman Khan, Fahad Shahbaz Khan, and Vineeth N Balasubramanian. Towards open world object detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (CVPR), pp. 5830–5840, 2021. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. Mark Kurtz and et al. Inducing and exploiting activation sparsity for fast inference on deep neural networks. In *International Conference on Machine Learning (ICML)*, pp. 5533–5543. PMLR, 2020. Yann LeCun et al. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86 (11):2278–2324, 1998. Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin. Layer-adaptive sparsity for the magnitude-based pruning. In *International Conference on Learning Representations (ICLR)*, 2020. Joo Hyung Lee and et al. Jaxpruner: A concise library for sparsity research. *arXiv preprint: 2304.14082*, 2023. Bailin Li, Bowen Wu, Jiang Su, and Guangrun Wang. Eagleeye: Fast sub-net evaluation for efficient neural network pruning. In *European Conference on Computer Vision (ECCV)*, pp. 639–654. Springer, 2020a. Hao Li et al. Pruning filters for efficient convnets. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, and Luc Van Gool. Revisiting random channel pruning for neural network compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 191–201, 2022. Yun Li, Weiqun Wu, Zechun Liu, Chi Zhang, Xiangyu Zhang, Haotian Yao, and Baoqun Yin. Weightdependent gates for differentiable neural network pruning. In European Conference on Computer Vision (ECCV), pp. 23–37. Springer, 2020b. Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo, and Sanjiv Kumar. The lazy neuron phenomenon: On emergence of activation sparsity in transformers. *arXiv preprint: 2210.06313*, 2023. Linfeng Liu, Xu Han, Dawei Zhou, and Liping Liu. Towards accurate subgraph similarity computation via neural graph pruning. *Transactions on Machine Learning Research (TMLR)*, 2022a. URL https: //openreview.net/forum?id=CfzIsWWBlo. Shiyu Liu, Chong Min John Tan, and Mehul Motani. S-cyc: A learning rate schedule for iterative pruning of relu-based networks. *CoRR*, abs/2110.08764, 2021a. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10012–10022, 2021b. Zhi-Gang Liu, Paul N Whatmough, Yuhao Zhu, and Matthew Mattina. S2ta: Exploiting structured sparsity for energy-efficient mobile cnn acceleration. In 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 573–586. IEEE, 2022b. Zhuang Liu et al. Rethinking the value of network pruning. In Proceedings of the International Conference on Learning Representations (ICLR), 2019. Lu Lu, Yeonjong Shin, Yanhui Su, and George Em Karniadakis. Dying relu and initialization: Theory and numerical examples. *arXiv preprint arXiv:1903.06733*, 2019. Jian-Hao Luo and Jianxin Wu. Neural network pruning with residual-connections and limited-data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1458–1467, 2020. Eran Malach et al. Proving the lottery ticket hypothesis: Pruning is all you need. In Proceedings of the International Conference on Machine Learning (ICML), pp. 6682–6691, 2020. Rahul Mehta. Sparse transfer learning via winning lottery tickets. In Proceedings of the Advances in Neural Information Processing Systems Workshop on Learning Transferable Skills, 2019. Pavlo Molchanov et al. Importance estimation for neural network pruning. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11264–11272, 2019. Sejun Park, Jaeho Lee, Sangwoo Mo, and Jinwoo Shin. Lookahead: A far-sighted alternative of magnitudebased pruning. In *International Conference on Learning Representations (ICLR)*, 2020. Mansheej Paul, Feng Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, and Gintare Karolina Dziugaite. Unmasking the lottery ticket hypothesis: What's encoded in a winning ticket's mask? arXiv 2210.03044 cs.LG, 2022. Alexandra Peste, Eugenia Iofinova, Adrian Vladu, and Dan Alistarh. Ac/dc: Alternating compressed/decompressed training of deep neural networks. *arXiv preprint: 2106.12379*, 2021. Md Aamir Raihan and Tor Aamodt. Sparse weight activation training. Advances in Neural Information Processing Systems (NeurIPS), 33:15625–15638, 2020. Alex Renda, Jonathan Frankle, and Michael Carbin. Comparing rewinding and fine-tuning in neural network pruning. In *International Conference on Learning Representations (ICLR)*, 2019. Sebastian Ruder. An overview of gradient descent optimization algorithms. *arXiv preprint arXiv:1609.04747*, 2016. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In *Proceedings of the IEEE Conference on Computer Vision and* Pattern Recognition (CVPR), pp. 4510–4520, 2018. Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. Theoretical Computer Science, 411(29):2696–2711, 2010. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, and Utku Evci. The dormant neuron phenomenon in deep reinforcement learning. *arXiv, 2302.12902, cs.LG*, 2023. Lucas Theis et al. Faster gaze prediction with dense networks and fisher pruning. *arXiv preprint* arXiv:1801.05787, 2018. Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. *arXiv* 1503.02406, 2015. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5998–6008, 2017. Artem Vysogorets and Julia Kempe. Connectivity matters: Neural network pruning through the lens of effective sparsity. *Journal of Machine Learning Research*, 24(99):1–23, 2023. Huan Wang, Can Qin, Yue Bai, Yulun Zhang, and Yun Fu. Recent advances on neural network pruning at initialization. In *Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)*, pp. 23–29, 2022. Yifei Wang, Yixuan Hua, Emmanuel Candés, and Mert Pilanci. Overparameterized relu neural networks learn the simplest models: Neural isometry and exact recovery. *arXiv preprint: 2209.15265*, 2023. Yulong Wang et al. Pruning from scratch. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 12273–12280, 2020. Mao Ye et al. Good subnetworks provably exist: Pruning via greedy forward selection. In *Proceedings of the* International Conference on Machine Learning (ICML), pp. 10820–10830. PMLR, 2020. Haoran You, Randall Balestriero, Zhihan Lu, Yutong Kou, Huihong Shi, Shunyao Zhang, Shang Wu, Yingyan Lin, and Richard Baraniuk. Max-affine spline insights into deep network pruning. *Transactions on Machine* Learning Research (TMLR), 2022. ISSN 2835-8856. Hao nan Yu et al. Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP. In *Proceedings of the International Conference on Learning Representations (ICLR)*, 2020. Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, and Shuicheng Yan. Volo: Vision outlooker for visual recognition. *arXiv preprint arXiv:2106.13112*, 2021. Yuyao Zhang and Nikolaos M Freris. Adaptive filter pruning via sensitivity feedback. *IEEE Transactions* on Neural Networks and Learning Systems (TNNLS), 2023. Chenglong Zhao et al. Variational convolutional neural network pruning. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2780–2789, 2019. Hattie Zhou et al. Deconstructing lottery tickets: Zeros, signs, and the supermask. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2019. Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. In *Proceedings of the International Conference on Learning Representations (ICLR)*, 2018.
Review 1: Summary: This paper introduces a novel method, AP, for improving the performance of pruned neural networks which works in parallel with existing pruning methods. AP allocates a portion of the desired overall pruning ratio to negative weights which remain close to their initialized values at convergence. These weights are pruned in addition to those determined by the base method saliency criterion as required to meet the desired sparsity. AP reduces the DNR by increasing the number of active ReLU neurons, on average, across a given training dataset distribution. The author’s provide theoretical justification for their claims by considering how DNR relates to the Information Bottleneck framework. Empirical evidence of AP’s performance is presented across a number of datasets and network architectures for image classification. Strengths and Weaknesses: **Strengths** 1. As far as I am aware, the proposed method is novel in its approach and provides interesting insights into the relationship between parameter sparsity and ephemeral sparsity induced by commonly used nonlinear activation functions. 1. Extensive experiments are conducted across a wide variety of datasets and network architectures. 1. This paper studies a very timely and important topic as sparse neural networks are an important ingredient in improving the overall efficiency of serving ML models at scale. 1. The proposed method complements existing pruning algorithms and appears to demonstrate moderate improvements across each dataset/architecture pair studied. Of particular note are improvements in generalization performance at high sparsities. 1. The Information Bottleneck analysis provides a compelling theoretical justification for the proposed method and generalization improvements observed. **Weaknesses** 1. The heuristic used to locate dead ReLU neurons could also be used to locate neurons that were initialized at values close to their final converged value. I am curious to know if the authors conducted any empirical studies to determine how accurate this heuristic is in practice at locating truly dead neurons. 1. A more formal analysis of the additional FLOPs and/or wall-clock timings required to perform the AP-Pro algorithm would help the reader understand what kind of overhead should be expected when adopting this method. 1. The paper could use another proofread as there were some minor typos and formatting issues that must be addressed. See the Requested Changes section below for more details. 1. Presentation of results could be improved by combining tables and using more plots, where appropriate. Further, maintaining the same sparsities across Tables 1-8 would make the ablation studies more compelling. 1. Additional investigations into applying AP in the context of dynamic sparse training and NLP tasks would be of additional interest to the broader sparse neural network research community. Requested Changes: **Typos** (Must change for acceptance) 1. Formatting of in-text citations seems to be missing parentheses throughout the paper. 1. (“ii) AP-Pro which introduces an **addition** retraining step to the existing methods in every pruning cycle, but significantly improves the performance of existing methods.” -> Addition should be “additional” here. 1. “According to **Han et al. (2015), Han et al.** trained the unpruned network” -> Double in-text citation for Han et al. Only cite once. 1. A citation for VGG-19 appears to be missing. This should be added at the first mention of the network, similar to other network architectures. 1. “Haoran You and et al.” is listed as the author of “Max-affine spline insights into deep network pruning”. This should simply say “Haoran You et al.”. The corresponding in-text citation also shows the extra “and”. 1. Section 5.3 -> “**Differ** from existing methods, AP is the first…”. Should read: “Differing from …” **Clarifications** (Further information will help secure acceptance, but is not specifically required) 1. “The success of ReLU is mainly **due to fact _(sic)_** that existing networks tend to be overparameterized and ReLU can easily regularize overparameterized networks” -> This claim does not appear to be supported by Glorot et al. (2011). The cited paper refers to several advantages related to vanishing / exploding gradient, sparse representations, and computational simplicity, but regularization is not listed as a motivation. Consider rephrasing this sentence or clarifying with additional works cited. 1. “For example, Liu et al. (2019a) demonstrated that training from scratch on the right sparse architecture yields better results than pruning from pre-trained models.” -> Please check this citation. The paper cited is for DARTS which is an AutoML technique that does not discuss sparsity as far as I am aware. It appears that this citation should be for Liu et al. (2019b). In any case, this is a strong claim to draw from Liu et al. (2019b) and is at odds with the majority of sparsity literature to my knowledge. Especially since Liu et al. explicitly noted that their claims only hold true for structured sparsity and they were unable to maintain accuracy with unstructured sparse networks on large scale datasets (imagenet). Given that the context of the paper under review is unstructured pruning, I encourage the authors to remove this claim unless better supporting justification can be provided. 1. In Fig 1 (Left), the figure is missing the 64% weight remaining column. What was the reason for this exclusion? All other weights remaining appear to follow the 20% pruning ratio. Consider including 64% or clarifying why this result is excluded. 1. While the pruning results augmented with AP do demonstrate improved generalization performance, in some cases this improvement is not clearly significant across the full range of sparsities considered. For the sake of clarity, I would encourage the authors to only bold tabulated results that are significant after accounting for the standard deviation. For instance, in Table 1 where lambda=32.8%, AP-Pro is bolded but the vanilla global magnitude (GM) results are not significantly worse considering the standard deviations of each result. I.e., 90.3+0.4 > 90.7-0.6. 1. Were any data augmentation schemes used in training? If so, those should be included in the experimental details section or in the appendix. 1. What initialization method was used to train the networks? This information should be included for reproducibility. 1. Table 7: Why do lambda=32.8% results for AP-Lite not match Table 1 for the same lambda? It appears that these should be identical runs? 1. For tables 7 and 8, why are the weights remaining depicted >> those shown in tables 1-4? As a reader, I am most interested in seeing how the ablations differ from the benchmark and results presented previously. Especially since AP is most beneficial in the highly sparse regime. **Recommendations** (Changes not required, mere suggestions to consider) 1. The numerator and denominator in equation 1 could be expanded to clarify the contributions of static and dynamic DNR. I.e., # of dead ReLU neurons + # of neurons with pruned incoming weights for the numerator. For denominator, consider clarifying that these are all "relu" neurons in the unpruned network (i.e., no conv, batch norm, etc. neurons included in this sum) 1. Figure 2 -> I suggest revising the AP % labels to depict (p-q)+q. As currently depicted, it appears to be p-q*q. The color labels help here but it could be more clear. Another recommendation here is to use the length of the color bars to show the overall model sparsity decreasing with each pruning cycle. I.e., Pruning Method X could show blue shading for 20% of the total bar at cycle 1, 20% + 16% for cycle 2, etc. This may help clarify to the reader how each pruning cycle removes a smaller and smaller portion of the total dense model weights. 1. In Section 4.1.2, the authors discuss a grid search conducted on a variety of hyperparameters. However, the results of this search do not appear to have been included as the “validation performance of using our tuned parameters are close to that of using standard implementations”. While I can appreciate the effort, I do not recommend including this description if the results are not included in the manuscript or appendices. Further, given that the purpose of this work is to establish the benefit of AP over existing methodologies, I believe the standard implementations already provide the most useful information to the reader when determining the overall impact of AP. 1. In Section 5.1, the authors discuss the effect of the pruning rate of AP, q. This is a very important inclusion as the introduction of any new hyperparameter deserves scrutiny. It would be interesting to see how this optimal q value changes with various p values. Perhaps q = 0.1 *p is a good heuristic across a multitude of p values? 1. “In this section, we first conduct experiments to evaluate the DNR during iterative pruning in Section 3.1.” -> I suggest referring to the current section only once. I.e., “In section 3.1, we first …” 1. The results section is needlessly verbose in some paragraphs with several sentences simply repeating the results listed in tabular form elsewhere. Where these sentences are used to emphasize a finding they work well; however, each paragraph in section 4.2 contains a similarly formatted sentence which diminishes the emphasizing effect. I encourage the authors to simply remove sentences that repeat the tabulated results without expanding or providing further relevant information. 1. Further investigation of AP by applying it to dynamic sparse training algorithms (SET, RigL, etc.) would be of additional interest to the broader sparse neural network research community. 1. Only image classification tasks were considered. It would be interesting to see how AP performs on NLP tasks as well. Broader Impact Concerns: None ================================================== Review 2: Summary: This paper studies dead neurons in pruned networks and classifies them as (1) neurons that have connections but give zero activations neurons and (2) neurons without incoming connections. They observe that during pruning "static dead neuron" rate increases, however "dynamic dead neuron rate" (DNR) decreases. Authors argue that it would be better for learnability to have smaller DNR, thus they propose removing some of the negative valued incoming connections to push activations to the positive side of the ReLU. Authors combine this strategy with existing competitive pruning methods and prune a fraction of the weights (2%) using proposed approach while pruning the rest using the original pruning metric. Authors show improved generalization across different datasets and networks. Despite its strengths (see below), I found few things about the experimental evaluation concerning. Happy to update my review if these concerns are addressed. Also, I had limited time to review this paper, please let me know if you have any questions. Strengths and Weaknesses: # Strengths - Paper is mostly well written and easy to follow - Authors use different network architectures and datasets in image classification, which includes compact architectures like MobileNet and more recent Vision transformers with GELU activations. - Authors investigate an interesting and under-studied area of neural network pruning # Weaknesses (1) I found few things about the experimental evaluation concerning. Happy to update my review if these concerns are addressed. (a) It is not clear whether proposed algorithm AP uses more training steps than the baselines. Training the networks longer would give better pruning results, therefore baseline pruning methods should be trained longer using either Cosine cyclic learning rate (i.e. learning rate restarts) or regular linear scaling as used in [1]. (b) It is not clear whether proposed technic works as concluded (i.e. by reducing DNR) due to lack of relevant ablations (see requested changes below). It's not obvious why authors doesn't use [1] https://arxiv.org/abs/1902.09574 Requested Changes: # Major - *AP-Lite and AP-Pro* The difference between the two approach seems whether Iterative magnitude pruning is used. As argued by the authors AP can be applied to any pruning method and defining two version for 2 different pruning method seems unnecessary. As stated in (1a) and in the paper #training_steps is a hyper parameter increasing which often leads to better results [2]. - It is not clear why authors look at the movement of the neurons instead of their activations directly? I would guess with large enough batch size looking at the activations one can directly predict dead neurons. Without this it is not clear whether proposed pruning metric targets dynamic dead neurons directly. Similarly, effect of removing negative weights can be achieved by adding a constant value to the bias of the dead neurons. Would that work as well? - What happens to Static Dead neurons in Table 11/12? It would be nice to add those numbers. To me it seems like proposed method increased static dead neurons by pruning weights from about to die neurons. If so it would be nice to say that. - Is the hyper parameter search as explained in Section 4.1 done for both baseline pruning methods and AP separately? It is not clear hyper-parameters for which experiments are optimized. # Minor - I think we shouldn't be training VGG-19 on Cifar-10 and more importantly use it in pruning baselines. It is extremely over-parameterized and has much worse scaling than more recent architectures. - I wouldn't call algorithms given SOTA. "Competitive" would be a better word. There are more recent works which report improved performance [3]. Global magnitude pruning often doesn't perform as well [4] gradual magnitude pruning (Zhu and Gupta). - Better to call"Global Gradient" as "Global Taylor" [5]. - There are quite a few references for lottery ticket hypothesis and IMP, which is not a competitive pruning method (see Renda et. al.) If authors like to keep these references, I would recommend adding more recent work discussing the limitations of lottery tickets [6]. - I don't think ViT paper as Cifar-10 pretraining. I assume authors prune the imagenet pretrained networks. Is that the case? If so, would be nice to mention this. Also in general when standard deviations overlap, one would conclude none of the methods are better, thus bolding both. It would be nice to do that in all results. - "the the" -> "the" - I would recommend authors to cite relevant work which looks into dead neurons in neural networks [7] [2] https://arxiv.org/abs/1911.11134 [3] https://arxiv.org/abs/2106.12379 [4] https://arxiv.org/abs/2304.14082 [5] https://arxiv.org/abs/1906.10771 [6] https://arxiv.org/abs/2010.03533, https://arxiv.org/abs/2210.03044 [7] https://arxiv.org/abs/2302.12902, https://arxiv.org/abs/2108.06325 Broader Impact Concerns: none ================================================== Review 3: Summary: The paper proposes a mechanism (which can presumably be combined with other pruning algorithms) to improve the performance of the pruned ReLU network. The idea is to maximize the utilization rate of the neurons (of the pruned model) by pruning the model in a way that prevents outputting many zeros in the intermediate layer (i.e., minimizing the activation sparsity). To achieve this goal, authors propose a simple method: prune the negative weights, so that the pre-activation becomes larger, so that ReLU gets activated more often. The proposed method boost the performance of conventional pruning algorithms. Strengths and Weaknesses: __Strength.__ The proposed algorithm is quite well-motivated. The observations in Figure 1(right) is quite interesting (and makes a lot of sense), and could potentially inspire many future research efforts. I have not seen many results that explicitly study the intersection of the weight and activation sparsity (with a delightful exception of https://arxiv.org/abs/2112.13896 )---there could be some fundamental tradeoff between two notions of sparsity. Another strength is the originality of the idea that authors propose to enhance the activation sparsity of the pruned models. As far as I know, typical approach to control the activation sparsity is by an explicit regularization of the intermediate activations. In contrast, this paper pioneers an alternative way, which is to perturb the weights along a specific direction (which makes most sense in the pruning context) so that the pre-activations shift toward more frequent activation. Perhaps a similar idea could also be used whenever a less frequent activation sparsity is needed. __Weakness.__ The most significant weakness of this manuscript is that it almost completely disregards any related work on the activation sparsity. This paper revolves around a core concept, which the authors call "dead neuron rate," and this is actually a well-studied notion under the name of "activation sparsity" or sometimes ephemeral sparsity (see https://proceedings.mlr.press/v119/kurtz20a.html for instance). Hopefully to the authors, there is a small difference that usually the activation sparsity literature aims to maximize the activation sparsity, while this work aims to minimize. But still, the concept itself is not new, and there are some well-established techniques to control the activation sparsity which can generalize to the case of minimization. Also, the relationship between two notions have already been studied in some prior works (e.g., https://arxiv.org/abs/2112.13896 ). Also, it seems like the manuscript is still under preparation. In the bottom of page 10, authors mention that "We intend to show results on larger datasets (e.g., ImageNet-1000) in the camera ready version." TMLR is a journal, where there is no fixed submission deadline. In this respect, deferring the ImageNet-1k result for the camera ready is not really understandable. If authors need more time, perhaps consider submitting the paper after all results are ready. The writing in section 3.1. is somewhat unclear and can be improved. Could the readers clarify which pruning algorithm the authors are using? Are we doing as Han et al. (2015) does, or are we doing the full weight rewind as Frankle & Carbin (2019) does, or do a partial rewinding with the learning rate rewinding as Renda et al. does? Requested Changes: - Please re-write the main text with more details about the existing works on activation sparsity. - Compare with the activation sparsity baselines. For example, the one that uses Hoyer regularization can be used the other direction to decrease the activation sparsity, which could be a nice baseline method that this paper currently misses. The work also uses a specially designed activation function, and perhaps comparing with a similarly designed activation could help us better understand the benefit of the approach that modifies the weight pruning procedure. - Please clarify the setups on Section 3.1. - Please include the ImageNet1k experiment. Broader Impact Concerns: This work may not really need a broader impact section. ================================================== Metareview: Recommendation: Accept as is Comment: The submission clearly meets the bar in terms of claims and evidence and interest to TMLR's audience. I encourage the authors to incorporate the following reviewer feedback in the camera-ready version: * As per Reviewer YnV4's suggestion, using the `<X>+AP` notation, where X is either "One-Shot" or "IMP". * Incorporating a discussion on the effect (or lack thereof) of training length in the main text. * Combining tables and using more plots, where appropriate, and maintaining the same sparsities across Tables 1-8. * Removing sentences that repeat the tabulated results without expanding or providing further relevant information. ==================================================
# Evaluating The Robustness Of Text-To-Image Diffusion Models Against Real-World Attacks Anonymous authors Paper under double-blind review ## Abstract Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions. The real-world applications of these models require particular attention to their safety and fidelity, which yet has not been sufficiently explored. One fundamental question is whether the existing T2I DMs are robust against variations over input texts. To answer it, this work provides the first robustness evaluation of T2I DMs against *real-world* perturbations. Unlike malicious attacks that involve apocryphal alterations to the input texts, we consider a perturbation space spanned by realistic errors (e.g., typo, glyph, phonetic) that humans can make and develop adversarial attacks to generate worst-case perturbations for robustness evaluation. Given the inherent randomness of the generation process, we design four novel distribution-based objectives to mislead T2I DMs. We optimize the objectives in a black-box manner without any knowledge of the model. Extensive experiments demonstrate the effectiveness of our method for attacking popular T2I DMs and simultaneously reveal their non-trivial robustness issues. Moreover, we also offer an in-depth analysis to show our method is not specialized for solely attacking the text encoder in T2I DMs. ## 1 Introduction Diffusion models (DMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) have demonstrated remarkable success in generating images and shown promise in diverse applications, including super-resolution (Saharia et al., 2022b), image inpainting (Lugmayr et al., 2022), text-to-image synthesis (Rombach et al., 2022; Ramesh et al., 2022), video generation (Ho et al., 2022a;b), etc. A typical DM employs a forward process that gradually diffuses the data distribution towards a noise distribution and a reverse process that recovers the data through step-by-step denoising. Among the applications, text-to-image (T2I) generation has received significant attention and witnessed the development of large models such as GLIDE (Nichol et al., 2022), Imagen (Saharia et al., 2022a), DALL-E 2 (Ramesh et al., 2022), Stable Diffusion (Rombach et al., 2022), VQ-Diffusion (Gu et al., 2022), etc. These models typically proceed by conditioning the reverse process on the embeddings of textual descriptions obtained from certain text encoders. Their ability to generate high-quality images from textual descriptions can significantly simplify the creation of game scenarios, book illustrations, organization logos, and more. The robustness of T2I DMs against perturbations to the input text plays a vital role in ensuring their reliability in practical use. Initial studies investigating this have shown that T2I DMs can be vulnerable to adversarial attacks (Du et al., 2023; Yang et al., 2023; Zhang et al., 2024)—by applying subtle perturbations to the input text, the generated image deviates significantly from the intended target. However, these works primarily focus on malicious attacks, e.g., creating meaningless or distorted custom words (Millière, 2022) or phrases (Maus et al., 2023), adding irrelevant distractions (Zhuang et al., 2023), etc., which often introduce substantial changes to the text and may rarely occur in real-world scenarios. We shift our attention from intentional attacks to everyday errors such as typos, grammar mistakes, or vague expressions, as suggested by related work in natural language processing (Li et al., 2018; Eger & Benz, 2020; Eger et al., 2019a; Le et al., 2022), to thoroughly evaluate the robustness of models that interact with humans in **practical** use. It is of particular importance to evaluate and understand the robustness of T2I DMs since a more robust model Original Typo Glyph **Phonetic** A giant icae cream sculpture towering over a miniature town. A giant ice cream sculpture towering over a miniature town. A giant icе cream sculpture ![1_image_0.png](1_image_0.png) Figure 1: An illustration of our attack method against Stable Diffusion (Rombach et al., 2022) based on three attack rules (detailed in Section 3.3.1). Adversarially modified content is highlighted in red. Note that the red 'e' (U+0435) in Glyph is different from 'e' (U+0065) in the original sentence. can enhance user efficiency by avoiding the need to go back and check mistakes in the prompts and make corrections after generating erroneous images. This work provides the first evaluation of the robustness of T2I DMs against *real-world* perturbations. As discussed, we consider an attack space spanned by realistic errors that humans can make to ensure semantic consistency, including typos, glyphs, and phonetics. To tackle the inherent uncertainty in the generation process of DMs, we develop novel distribution-based attack objectives to mislead T2I DMs. We perform attacks in a black-box manner using greedy search to avoid assumptions about the model. Technically, our attack algorithm first identifies the keywords based on the words' marginal influence on the generation distribution and then applies elaborate character-level replacements. Our algorithm can be used by the model developers to evaluate the robustness of their T2I DMs before being deployed in the wild. We perform extensive empirical evaluations on datasets of artificial prompts and image captions. We first conduct a set of diagnostic experiments to prioritize the different variants originated from the distributionoriented attack objectives, which also reflects the vulnerability of existing T2I DMs. We then provide an interesting discussion on the target of attacking DMs: the text encoder only vs. the whole diffusion process. Finally, we attack T2I DMs (including DALL-E 2) in real-world settings and observe high success rates, even in the case that the perturbation rates and query times are low. ## 2 Related Work Diffusion models (DMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020) are a powelful family of generative models that attract great attention recently. In the diffusion process, the data distribution is diffused to an isotropic Gaussian by continually adding Gaussian noises. The reverse process recovers the original input from a Gaussian noise by denoising. DMs have been widely applied to T2I generation. GLIDE (Nichol et al., 2022) first achieves this by integrating the text feature into transformer blocks in the denoising process. Subsequently, increasing effort is devoted to this field to improve the performance of T2I generation, with DALL-E (Ramesh et al., 2021), Cogview (Ding et al., 2021), Make-A-Scene (Gafni et al., 2022), Stable Diffusion (Rombach et al., 2022), and Imagen (Saharia et al., 2022a) as popular examples. A prevalent strategy nowadays is to perform denoising in the feature space while introducing the text condition by cross-attention mechanisms (Tang et al., 2022). However, textual conditions cannot provide the synthesis results with more structural guidance. To remediate this, there are also many other kinds of DMs conditioning on factors beyond text descriptions, such as PITI (Wang et al., 2022a), ControlNet (Zhang & Agrawala, 2023) and Sketch-Guided models (Voynov et al., 2022). Adversarial attacks typically deceive DNNs by integrating carefully-crafted tiny perturbations into input data (Szegedy et al., 2014; Zhang et al., 2020). Based on how an adversary interacts with the victim model, adversarial attacks can be categorized into white-box attacks (Zhang et al., 2022a; Meng & Wattenhofer, 2020; ![2_image_0.png](2_image_0.png) Adv Image Embeding Adv Text Embeding Ori Image Embeding Ori Text Embeding Figure 2: An illustration of our attack pipeline for evaluating the robustness of T2I DMs. Xu et al., 2022) (with full access to the victim model) and black-box attacks (Zhang et al., 2022b; He et al., 2021) (with limited access to the victim model). Adversarial attacks on text can also be categorized in terms of the level of granularity of the perturbations. Character-level attacks (Eger et al., 2019b; Formento et al., 2023) modify individual characters in words to force the tokenizer to process multiple unrelated embeddings instead of the original, resulting in decreased performance. Word-level attacks (Li et al., 2021; Lee et al., 2022) employ a search algorithm to locate useful perturbing embeddings or operations that are clustered close to the candidate attack word's embedding given a similarity constraint (e.g., the Universal Sentence Encoder (Cer et al., 2018)). Sentence-level attacks (Wang et al., 2020; Han et al., 2020) refer to making changes to sentence structures in order to prevent the model from correctly predicting the outcome. Multi-level attacks (Gupta et al., 2021; Wallace et al., 2019) combine multiple types of perturbations, making the attack cumulative. Recent studies (Millière, 2022; Maus et al., 2023; Zhuang et al., 2023; Yang et al., 2023; Zhang et al., 2024; Wang et al., 2023) have explored the over-sensitivity of T2I DMs to prompt perturbations in the text domain with malicious word synthesis, phrase synthesis, visual substitution and adding distraction. (Zhuang et al., 2023) also reveal the vulnerability of T2I models and attributes it to the weak robustness of the used text encoders. ## 3 Methodology This section provides a detailed description of our approach to real-world adversarial attacks of T2I DMs. We briefly outline the problem formulation before delving into the design of attack objective functions and then describe how to perform optimization in a black-box manner. Figure 2 displays the overview of our method. ## 3.1 Problem Formulation A T2I DM that accepts a text input c and generates an image x essentially characterizes the conditional distribution pθ(x|c) with θ as model parameters. To evaluate the robustness of modern DMs so as to govern their behaviors when adopted in the wild, we opt to attack the input text, i.e., finding a text c ′ which keeps close to the original text c but can lead to a significantly biased generated distribution. Such an attack is meaningful in the sense of encompassing real-world perturbations such as typos, glyphs, and phonetics. Concretely, the optimization problem is formulated as: $$\operatorname*{max}_{c^{\prime}}{\mathcal{D}}(p_{\theta}(x|c^{\prime})\|p_{\theta}(x|c)),\quad{\mathrm{s.t.~}}d(c,c^{\prime})\leq\epsilon,$$ \label {eq:1} \max _{c'} \mathcal {D}(p_\theta (x | c') \Vert p_\theta (x | c)), \quad \text {s.t.}\; d(c, c') \leq \epsilon , (1) where D denotes a divergence measure between two distributions, d(*c, c*′) measures the distance between two texts, and ϵ indicates the perturbation budget. The main challenge of attack lies in that we cannot write down the exact formulation of pθ(x|c) and pθ(x|c ′) of DMs but get only a few i.i.d. samples {x¯1*, . . . ,* x¯N } and {x1*, . . . , x*N } from them, where x¯iis an image generated with the original text c while xiis generated with the modified text c ′. $$(1)^{\frac{1}{2}}$$ ## 3.2 Attack Objectives In this section, we develop four instantiations of the distribution-based attack objective, as defined in Eq. (1). ## 3.2.1 Mmd Distance As validated by the community (Dziugaite et al., 2015; Tolstikhin et al., 2016), the maximum mean discrepancy (MMD) is a widely used metric to distinguish two distributions given finite samples. Formally, assuming access to a kernel function κ, the square of MMD distance is typically defined as: $$\mathcal{D}_{\text{MMD3}}(p_{\theta}(x|c^{\prime})\|p_{\theta}(x|c))\approx\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa(x_{i},x_{j})-\frac{2}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa(x_{i},\bar{x}_{j})+C,\tag{2}$$ where C refers to a constant agnostic to c ′. The feature maps associated with the kernel should be able to help construct useful statistics of the sample set such that MMD can compare distributions. In the case that x is an image, a valid choice is a deep kernel built upon a pre-trained NN-based image encoder h (e.g., a ViT trained by the objective of MAE (He et al., 2022) or CLIP (Radford et al., 2021)). In practice, we specify the kernel with a simple cosine form κ(*x, x*′) := h(x) ⊤h(x ′)/∥h(x)∥∥h(x ′)∥ given that h's outputs usually locate in a well-suited Euclidean space. ## 3.2.2 Kl Divergence Considering that text also provides crucial information in the attack process, we will incorporate text information to consider the joint distribution of images and texts. Due to the excellent ability of CLIP to represent both image and text information while preserving their relationships, we have chosen to use CLIP as the model for encoding images and texts. Assume access to a pre-trained ϕ-parameterized CLIP model comprised of an image encoder hϕ and a text encoder gϕ and assume the output features to be L2-normalized. It can provide a third-party characterization of the joint distribution between the image x and the text c for guiding attack. Note that hϕ(x) ⊤gϕ(c) measures the likelihood of the coexistence of image x and text c, thus from a probabilistic viewpoint, we can think of eϕ(*x, c*) := αhϕ(x) ⊤gϕ(c), where α is some constant scaling factor, as log pϕ(*x, c*). Under the mild assumption that pϕ(x|c) approximates pθ(x|c), we instantiate the measure D in Eq. (1) with KL divergence and derive the following maximization objective (details are deferred to Appendix A.1): $${\mathcal{D}}_{\mathrm{KL}}(p_{\theta}(x|c^{\prime})\|p_{\theta}(x|c))\approx\mathbb{E}_{p_{\theta}(x|c^{\prime})}[-e_{\phi}(x,c)]+\mathbb{E}_{p_{\theta}(x|c^{\prime})}[\log p_{\theta}(x|c^{\prime})]+C,$$ where C denotes a constant agnostic to c ′. The first term corresponds to generating images containing semantics contradictory to text c and can be easily computed by Monte Carlo (MC) estimation. The second term is negative entropy, so the maximization of it means reducing generation diversity. Whereas, in practice, the entropy of distribution over high-dimensional images cannot be trivially estimated given a few samples. To address this issue, we replace Epθ(x|c ′)[log pθ(x|c ′)] with a lower bound Epθ(x|c ′)[log q(x)] for any probability distribution q, due to that DKL(pθ(x|c ′)∥q(x)) = Epθ(x|c ′)[log pθ(x|c ′) − log q(x)] ≥ 0. In practice, we can only acquire distributions associated with the CLIP model, so we primarily explore the following two strategies. * **Strategy 1 (KL-1)**.: $\log q(x):=\log p_{\theta}(x,e^{\prime})=e_{\phi}(x,e^{\prime})$. Combining with Eq. (3), there is ($C$ is omitted): $$\mathcal{D}_{\text{KL}}(p_{\theta}(x|e^{\prime})\|p_{\phi}(x|c))\geq\mathbb{E}_{p_{\theta}(x|c)}[e_{\phi}(x,c)-e_{\phi}(x,c)]$$ $$\approx\alpha\Big{[}\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}\Big{(}g_{\phi}(e^{\prime})-g_{\phi}(c)\Big{)}.$$ (4) $$\left({3}\right)$$ The adversarial text c ′ would affect both the generated images xi and the text embeddings gϕ(c ′). Therefore, it is likely that by maximizing the resulting term in Eq. (4) w.r.t. c ′, the text encoder of the CLIP model is attacked (i.e., gϕ(c ′)−gϕ(c) is pushed to align with the average image embedding), which deviates from our goal of delivering a biased generation distribution. - **Strategy 2 (KL-2).** log q(x) := log pϕ(x) = Lcˆ∈C(eϕ(x, cˆ)) − log |C| where L is the log-sum-exp operator and C denotes the set of all possible text inputs. Likewise, there is (we omit constants): $$\mathcal{D}_{\mathrm{KL}}(p_{\theta}(x|c^{\prime})\|p_{\phi}(x|c))\geq\mathbb{E}_{p_{\theta}(x|c^{\prime})}[\mathrm{L}_{\hat{c}\in\mathcal{C}}(e_{\phi}(x,\hat{c}))-e_{\phi}(x,c)]$$ $$\approx\frac{1}{N}\sum_{i=1}^{N}\big{[}\mathrm{L}_{\hat{c}\in\mathcal{C}}(e_{\phi}(x_{i},\hat{c}))-e_{\phi}(x_{i},c)\big{]}.$$ $$\left({5\atop5}\right)$$ $$\uparrow\rangle$$ As shown, the first term pushes the generated images toward the high-energy regions, and the second term hinders the generated images from containing semantics about c. To reduce the computational overhead, we draw a set of commonly used texts and pre-compute their text embeddings via CLIP before attacking. Then, during attacking, we only need to send the embeddings of generated images to a linear transformation followed by an L operator to get an estimation of the first term of Eq. (5). ## 3.2.3 Two-Sample Test In essence, distinguishing pθ(x|c ′) and pθ(x|c) by finite observations corresponds to a two-sample test (2ST) in statistics, and the aforementioned MMD distance is a test statistic that gains particular attention in the machine learning community. Based on this point, we are then interested in building a general framework that can embrace existing off-the-shelf two-sample test tools for attacking T2I DMs. This can considerably enrich the modeling space. Basically, we define a unified test statistic in the following formula: $${\hat{t}}{\Bigl(}\{\varphi(x_{i})\}_{i=1}^{N},\{\varphi({\bar{x}}_{i})\}_{i=1}^{N}{\Bigr)}.$$ $$\left(7\right)$$ \label {eq:4} \hat {t}\Big (\{\varphi (x_i)\}_{i=1}^N, \{\varphi (\bar {x}_i)\}_{i=1}^N\Big ). (6) Roughly speaking, we will reject the null hypothesis pθ(x|c ′) = pθ(x|c) when the statistic is large to a certain extent. The function tˆ in the above equation is customized by off-the-shelf two-sample test tools such as KS test, t-test, etc. Considering the behavior of these tools may quickly deteriorate as the dimension increases (Gretton et al., 2012), we introduce a projector φ to produce one-dimensional representations of images x. As a result, φ implicitly determines the direction of our attack. For example, if we define φ as a measurement of image quality in terms of FID (Heusel et al., 2017), then by maximizing Eq. (6), we will discover c ′that leads to generations of low quality. Recalling that our original goal is a distribution of high-quality images deviated from pθ(x|c), we hence want to set φ(·) := log pθ(·|c), which, yet, is inaccessible. Reusing the assumption that the conditional distribution captured by a CLIP model can form a reasonable approximation to pθ(x|c), we set φ(·) to the aforementioned energy score eϕ(·, c), which leads to the following test statistic: $$\mathcal{D}_{2\mathrm{ST}}(p_{\theta}(x|c^{\prime})\|p_{\theta}(x|c)):=\hat{t}\Big(\{e_{\phi}(x_{i},c)\}_{i=1}^{N},\{e_{\phi}(\bar{x}_{i},c)\}_{i=1}^{N}\Big).$$ We empirically found that the t-test tool can yield a higher attack success rate compared to other two-sample test tools, hence we use the t-test tool as the default option in the following. ## 3.3 Attack Method Based on the attack objectives specified above, here we establish a real-world-oriented word search space and implement a greedy search strategy to find adversarial input text for T2I DMs. ## 3.3.1 Perturbation Rules Following related works in natural language processing (Eger & Benz, 2020; Eger et al., 2019a; Le et al., 2022; Chen et al., 2022; 2023), we include the following three kinds of perturbations into the search space of our attack algorithm: (1) **Typo** (Li et al., 2018; Eger & Benz, 2020), which comprises seven fundamental operations for introducing typos into the text, including randomly deleting, inserting, replacing, swapping, adding space, transforming case, and repeating a single character; (2) **Glyph** (Li et al., 2018; Eger et al., 2019a), which involves replacing characters with visually similar ones; (3) **Phonetic** (Le et al., 2022), which involves replacing characters in a way that makes the whole word sound similar to the original one. We present examples of these three perturbation rules in Table 1. | Rule | Ori. Sentence | Adv. Sentence | |----------|---------------------------------------------|----------------------------------------------| | Typo | A red ball on green grass under a blue sky. | A rde ball on green grass under a blue skky. | | Glyph | A red ball on green grass under a blue sky. | A rêd ball 0n green grass under a blue sky. | | Phonetic | A red ball on green grass under a blue sky. | A read ball on green grass under a blue SKY. | Table 1: Examples of our perturbation rules. ## 3.3.2 Greedy Search Given the efficiency and effectiveness of greedy algorithms in previous black-box text attack problems (Feng et al., 2018; Pruthi et al., 2019), we also employ greedy algorithm here and organize it as the following steps. Step 1: word importance ranking. Given a sentence of n words c = {w1, w2*, ..., w*n}, it is usually the case that only some keywords act as the influential factors for controlling DMs. Therefore, we aim to first identify such words and then perform attack. The identification of word importance is trivial in a white-box scenario, e.g., by inspecting model gradients (Behjati et al., 2019), but is challenging in the considered black-box setting. To address this, we directly measure the marginal influence of the word wi on the generation distribution via Iwi := D(pθ(x|c\wi)∥pθ(x|c)) where c\wi = {w1, ..., wi−1, wi+1*, ...w*n} denotes the sentence without the word wi and D refers to the divergence measure defined earlier. With this, we can compute the influence score Iwi for each word wiin the sentence c, and then obtain a ranking over the words according to their importance. Step2: word perturbation. We then attempt to perturb the detected important words to find the adversarial example c ′. Concretely, for the most important word wi ∈ c, we randomly select one character in it and then randomly apply one of the meta-operations in the perturbation rule of concern, e.g., character swapping and deleting, to obtain a perturbed word as well as a perturbed sentence. Repeating this five times results in 5 perturbed sentences {c ′ 1 , c′2 , ...c′5}. We select the sentence leading to the highest generation divergence from the original sentence, i.e., D(pθ(x|c ′ i )∥pθ(x|c)), ∀i ∈ {1*, . . . ,* 5} as the eventual adversarial sentence c ′. If the attack has not reached the termination condition, the next word in the importance ranking will be selected for perturbation. ## 4 Diagnostic Experiments In this section, we provide diagnostic experiments consisting of two aspects: (1) assessing the four proposed attack objectives under varying perturbation rates; (2) analyzing which part of the DM is significantly misled. These analyses not only validate the efficacy of our method, but also deepen our understanding of the robustness of T2I DMs, and provide insightful perspectives for future works. Datasets. We consider two types of textual data for prompting the generation of T2I DMs: (1) 50 ChatGPT generated (ChatGPT-GP) prompts by querying: "generate 50 basic prompts used for image synthesis." and (2) 50 image captions from SBU Corpus (Ordonez et al., 2011). Such a dataset facilitates a thorough investigation of the efficacy and applicability of our method in practical image-text generation tasks. Victim Models. We choose Stable Diffusion (Rombach et al., 2022) as the victim model due to its widespread usage, availability as an open-source model, and strong generation capability. Stable Diffusion utilizes a denoising mechanism that operates in the latent space of images and incorporates cross-attention to leverage guidance information. Text inputs are first processed by CLIP's text encoder to generate text embeddings, which are subsequently fed into the cross-attention layers to aid in image generation. Evaluation Metrics. We use the CLIP Score (Hessel et al., 2021), esentially the aforementioned hϕ(x) ⊤gϕ(c), to measure the semantic similarity between the original text c and the generated images {x1*, . . . , x*N } based on the adversarial text c ′. Specifically, we define the metric SI2T = 1 N PN i=1 max(0, 100 · gϕ(c) ⊤hϕ(x ′)) over the generated images, and we hypothesize that a higher SI2T indicates a less adversarial text c ′. Typically, N is set to 15 to balance efficiency and fidelity. We can also calculate the similarity between the original text c and the adversarial text c ′ with ST2T = max(0, 100 · gϕ(c) ⊤gϕ(c ′)). Though these two metrics use the same notations as our attack objectives, we actually use various pre-trained CLIPs to instantiate them to avoid ![6_image_0.png](6_image_0.png) Figure 3: CLIP Score at different perturbation rates on ChatGPT-GP and SBU Corpus. over-fitting—employing the CLIP with VIT-L-patch14 backbone for attacking while using VIT-L-patch14-336 backbone for evaluation. ## 4.1 Attack With Different Objectives We first conduct a diagnostic experiment on the effects of the four proposed attack objectives under various perturbation rules. We define the perturbation rate as the ratio between the number of perturbed words and the total words in a sentence, and vary it from 0% to 100% with an interval of 10%. We calculate the average values of SI2T and ST2T on ChatGPT-GP and SBU Corpus, which are reported in Figure 3. Note that we also include a random baseline in comparison. On ChatGPT-GP, all methods exhibit a declining trend in SI2T as the perturbation rate increases. Considering high perturbation rates rarely exist in practice, we primarily focus on situations where the perturbation rate is less than 50%. Within this range, the curves corresponding to MMD, KL-2, and 2ST display a rapid decrease across all three perturbation rules, more than 2× faster than random and KL-1 when using typo and glyph rules. It is also noteworthy that MMD and 2ST perform similarly and yield the best overall results. On SBU Corpus, it is evident that 2ST is more effective than MMD. Additionally, even with a perturbation rate of 100%, the random method fails to achieve a similar SI2T score compared to other methods. This observation suggests the effectiveness of our attack algorithm. Additionally, glyph-based perturbations lead to the most rapid decrease in performance, followed by typo perturbations, and phonetic perturbations lead to the slowest drop. This disparity may be attributed to glyph perturbations completely disrupting the original word embedding. ## 4.2 Which Part Of The Dm Is Significantly Misled? Previous studies suggest that attacking only the CLIP encoder is sufficient for misleading diffusion models (Zhuang et al., 2023). However, our method is designed to attack the entire generation process instead of the CLIP encoder. For empirical evaluation, we conduct a set of experiments in this section. We include two additional attack methods: attacking only the CLIP encoder and attacking only the diffusion process. Regarding this first one, we focus solely on maximizing the dissimilarity between the original text and the adversarial one. To achieve this, we employ ST2T as the optimization objective, i.e., ![7_image_0.png](7_image_0.png) $$({\mathfrak{s}})$$ MMD KL-2 2ST CLIP DP Figure 4: Corelation between SI2T and ST2T on ChatGPT-GP and SBU Corpus. The numbers in the upper-left corner represent the slopes of the plotted lines. DCLIP = ST2T = max(0, 100 · gϕ(c) ⊤gϕ(c ′)). As for the second one, we modify Eq. (4) and devise a new attack objective as follows (α and β denote two trade-off coefficients): $${\mathcal{D}}_{\mathrm{DP}}\approx\left[\alpha g_{\phi}(c^{\prime})-\beta{\frac{1}{N}}\sum_{i=1}^{N}h_{\phi}(x_{i})\right]^{\top}g_{\phi}(c).$$ \begin {aligned} \label {eq:8} \mathcal {D}_\text {DP} \approx \Big [\alpha g_{\phi }(c') - \beta \frac {1}{N} \sum _{i=1}^N h_{\phi }(x_i)\Big ]^\top g_{\phi }(c). \end {aligned} (8) While maximizing the distance between the original text and the adversarial images, we also aim to ensure that the representations of the adversarial text and the original text are as similar as possible. This confines that even though the entire DM is under attack, the CLIP encoder remains safe. More details of this equation can be found in Appendix A.2. Given the poor performance of the random and KL-1 methods, we exclude them from this study. Considering that high perturbation rates are almost impossible in the real world, we experiment with perturbation rates only from 0% to 80%. We compute the average SI2T and ST2T across all texts at every perturbation rate, and plot their correlations in Figure 4. As shown, exclusively targeting the CLIP encoder during the attack process yields the maximum slope of the regression line, while solely attacking the diffusion process leads to the minimum slope. For instance, in the typo attack on ChatGPT-GP, the attack method solely attacking the CLIP encoder exhibits the lowest slope of 0.21, whereas the attack method exclusively targeting the diffusion process shows the highest slope of 0.46. Attack methods that simultaneously target both processes display slopes between these extremes. These clearly support that our attack objectives simultaneously attack the CLIP encoder and the diffusion process. Furthermore, through the slope information, we can conclude that directly attacking the diffusion process yields a more significant decrease in image-text similarity at a given textual semantic divergence. Across all datasets and perturbation spaces, the slope disparity between direct attacks on the diffusion process and direct attacks on the CLIP encoder is mostly above 0.1, and the maximum slope disparity reaches even 0.15. ![8_image_0.png](8_image_0.png) Figure 5: CLIP Score at different perturbation rates on ChatGPT-GP and SBU Corpus with typo rule. ## 4.3 Compare With Non-Distribution Attack Objective We conduct a comparison experiment between our distribution-based optimization objective 2ST and a non-distribution method that minimizes the SI2T of the prompt and a single definite image (DI). We randomly sampled 20 texts from ChatGPT-GP and SBU Corpus separately, then applied the typo rule to perturb sampled texts with different perturbation rates. The results, depicted in Figure 5, clearly demonstrate the superior effectiveness of our distribution-based approach. ## 5 Real-World Attack Experiment Based on the preceding analysis, we identify that 2ST and MMD are two good attack objectives for T2I DMs. In this section, we will carry out attacks in real-world scenarios, where termination conditions are incorporated to balance the perturbation level and effectiveness. Datasets. To provide a more comprehensive evaluation of our attack method in realistic scenarios, we incorporate two additional datasets. The first one, DiffusionDB (Wang et al., 2022b), is a large-scale dataset of 14 million T2I prompts. The second one, LAION-COCO (Schuhmann et al., 2021), includes captions for 600 million images from the English subset of LAION-5B (Schuhmann et al., 2022). The captions are generated using an ensemble of BLIP L/14 (Li et al., 2022) and two CLIP (Radford et al., 2021) variants. To conjoin diversity and efficiency, we randomly select 100 examples from each of the aforementioned datasets. Additionally, we also increased the size of ChatGPT-GP and SBU to 100 for this experiment. Attack method. As said, we consider attacking based on MMD and 2ST. A threshold on the value of D is set for termination. If it is not reached, the attack terminates at a pre-fixed number of steps. Evaluation metric. We use four metrics to evaluate our method in real-world attack scenes. (1) Levenshtein distance (L-distance), which measures the minimum number of single-character edits, a powerful indicator of the number of modifications made to a text. (2) Ori.SI2T and Adv.SI2T which indicate the similarity between the original text and original images as well as that between the original text and the adversarial images respectively. The mean and variance are both reported. (3) Average query times, which represents the number of times that DM generates images with one text, and serves as a metric for evaluating the attack efficiency. (4) Human evaluation, where humans are employed to assess the consistency between the image and text. Let N1 represent the number of images generated by the original text that are consistent with the original text, and N2 represent the number of images generated by the adversarial text that are consistent with original text. If (N2 − N1) > 1, the attack on that particular prompt text is deemed meaningless. Let's assume the frequency of samples where (N2 − N1) > 1 as Nu, and the effective total number of samples should be Ntotal − Nu. If (N1 − N2 > 1), it indicates a successful attack. We use Nc to represent the number of samples where the attack is successful. Thus, the final score for each evaluator is given by Nc/(Ntotal − Nu). The average of three human annotators represents the overall human evaluation score (Hum.Eval). | Dataset | Attacker | Ori. SI2T | Ave. Len. | L-distance | Adv. SI2T | Ave. Query | Hum. Eval. | |-------------|------------|-------------|-------------|--------------|-------------|--------------|--------------| | Typo | 2.92 | 23.21±3.08 | 19.43 | 84.34% | | | | | ChatGPT-GP | Glyph | 27.61±2.07 | 10.41 | 2.27 | 23.09±2.75 | 18.63 | 84.65% | | Phonetic | 5.38 | 22.67±3.58 | 17.78 | 86.16% | | | | | Typo | 2.29 | 22.70±3.31 | 17.25 | 76.64% | | | | | DiffusionDB | Glyph | 29.17±3.36 | 10.62 | 1.81 | 22.71±3.22 | 16.30 | 76.64% | | Phonetic | 5.04 | 22.91±3.34 | 16.27 | 75.51% | | | | | Typo | 2.08 | 21.73±3.62 | 14.77 | 80.21% | | | | | LAION-COCO | Glyph | 27.54±2.86 | 9.17 | 1.85 | 21.32±3.69 | 15.11 | 81.89% | | Phonetic | 5.04 | 21.76±3.87 | 16.15 | 79.32% | | | | | Typo | 2.97 | 19.65±3.53 | 21.19 | 84.34% | | | | | SBU Corpus | Glyph | 24.99±3.43 | 11.69 | 2.42 | 19.01±3.76 | 20.54 | 85.41% | | Phonetic | 5.85 | 18.86±3.91 | 19.92 | 85.41% | | | | Table 2: Real-world attack with the 2ST attack objective. | Dataset | Attacker | Ori. SI2T | Ave. Len. | L-distance | Adv. SI2T | Ave. Query | Hum. Eval. | |-------------|------------|-------------|-------------|--------------|-------------|--------------|--------------| | Typo | 1.77 | 24.54±2.69 | 14.17 | 84.21% | | | | | ChatGPT-GP | Glyph | 27.61±2.07 | 10.41 | 1.15 | 24.88±2.67 | 13.08 | 84.36% | | Phonetic | 3.81 | 26.08±2.21 | 14.58 | 80.02% | | | | | Typo | 1.75 | 24.94±3.82 | 13.72 | 72.77% | | | | | DiffusionDB | Glyph | 29.17±3.36 | 10.62 | 1.29 | 24.81±3.90 | 13.41 | 73.53% | | Phonetic | 4.27 | 26.71±3.24 | 15.13 | 70.09% | | | | | Typo | 1.75 | 23.04±4.10 | 13.33 | 80.21% | | | | | LAION-COCO | Glyph | 27.54±2.86 | 9.17 | 1.35 | 23.72±3.91 | 12.35 | 82.04% | | Phonetic | 3.62 | 25.06±3.09 | 13.21 | 77.37% | | | | | Typo | 1.91 | 21.37±3.92 | 16.36 | 82.05% | | | | | SBU Corpus | Glyph | 24.99±3.43 | 11.69 | 1.37 | 21.44±3.66 | 15.01 | 82.33% | | Phonetic | 3.72 | 23.15±3.25 | 16.20 | 79.67% | | | | Table 3: Real-world attack with **MMD distance** attack objective. We first conduct the real-world attack experiment on **Stable Diffusion**. Table 2 and Table 3 present the results of real-attack experiments using various perturbation rules on different datasets, with 2ST and MMD distance as the attack objectives, respectively. Since the termination criteria for the two optimization algorithms differ, we cannot compare them directly. Considering that our method involves querying each word of the sentence (described in Section 3.3.2), the query times minus the sentence length, which we named true query times, can better demonstrate the true efficiency of our approach. From this perspective, our method requires less than 10 *true query times* to achieve more than 4 SI2T score drop across most datasets with more than 75% *human evaluation* score. Simultaneously, we observe that our modifications are relatively minor. In the typo and glyph attacker, we require an *L-distance* of less than 3, while in the phonetic attacker, the threshold remains below 6. Furthermore, ChatGPT-GP and LAION-COCO are more susceptible to our attack, possibly attributed to their clearer and more flow sentence descriptions. In conclusion, with minimal modifications and a limited number of queries to the model, we achieve a significant decrease in text-image similarity, substantiated by human evaluations. DALL-E 2 (Ramesh et al., 2022) is also a powerful image generation model that can create realistic and diverse images from textual descriptions. We then conduct a case study with the same attack method used in Stable Diffusion. The results respectively obtained with the attack objective MMD and 2ST are presented in Figure 6 and Figure 7. More cases can be found in Appendix B.1. Original Typo Glyph **Phonetic** A Cait dressed as french emp- eror napoleon holding a piece ![10_image_0.png](10_image_0.png) A cɑt dressed as french emp- eror napoleon holding a piece Figure 6: An illustration of adversarial attack against DALL-E 2 with MMD attack objective. Original Typo Glyph **Phonetic** Painda mad scientist mixing ![10_image_1.png](10_image_1.png) Figure 7: An illustration of adversarial attack against DALL-E 2 with 2ST attack objective. | Dataset | Attacker | Ori. SI2T | Ave. Len. | L-distance | Adv. SI2T | Ave. Query | Hum. Eval. | |------------|------------|-------------|-------------|--------------|-------------|--------------|--------------| | Typo | | 2.65 | 24.61±3.32 | 16.97 | 79.36% | | | | ChatGPT-GP | Glyph | 27.61±2.07 | 10.41 | 2.09 | 24.94±3.01 | 16.86 | 77.45% | | Phonetic | | 5.13 | 26.67±3.74 | 15.71 | 78.68% | | | Table 4: Real-world attack with the DI attack objective. To further quantify the superiority of distribution-based attack methods over single-image-based attack methods, we also conduct a **real-world attack with the DI attack objective** on the ChatGPT-GP dataset, employing the same settings as the other objectives detailed in Section 5. Table 4 illustrates that the Adv.SI2T and human evaluation scores associated with this objective are relatively low. This observation suggests that the attack with the DI objective may be susceptible to overfitting on a single image. To prove that human evaluation won't be affected by the adversarial noise of text. We experiment to evaluate the difference between the image content and the actual meaning of the adversarial text. We let N1 represent the number of images generated by the original text that are consistent with the original text as original, while N2 represents the number of images generated by the adversarial text that are consistent with the adversarial text. The new overall human evaluation score is calculated in the same way as before. Table 5 shows the new overall human evaluation on the same adversarial dataset generated under real-world attack with the 2ST attack objective in section 5. We found that although the new overall human evaluation score will decrease to some extent compared to the old score, the change is not significant, indicating that human evaluations are not affected by the adversarial text. | Dataset | Attacker | Ori. SI2T | Ave. Len. | L-distance | Old Hum. Eval. | New Hum. Eval. | |-------------|------------|-------------|-------------|--------------|------------------|------------------| | Typo | 2.92 | 84.34% | 82.19% | | | | | ChatGPT-GP | Glyph | 27.61±2.07 | 10.41 | 2.27 | 84.65% | 83.42% | | Phonetic | 5.38 | 86.16% | 82.06% | | | | | Typo | 2.29 | 76.64% | 74.34% | | | | | DiffusionDB | Glyph | 29.17±3.36 | 10.62 | 1.81 | 76.64% | 75.26% | | Phonetic | 5.04 | 75.51% | 73.61% | | | | | Typo | 2.08 | 80.21% | 78.28% | | | | | LAION-COCO | Glyph | 27.54±2.86 | 9.17 | 1.85 | 81.89% | 80.94% | | Phonetic | 5.04 | 79.32% | 77.02% | | | | | Typo | 2.97 | 84.34% | 82.13% | | | | | SBU Corpus | Glyph | 24.99±3.43 | 11.69 | 2.42 | 85.41% | 85.57% | | Phonetic | 5.85 | 85.41% | 81.95% | | | | Table 5: Real-world attack with the 2ST attack objective. | Typo Corrector | Perturbation Rule | SRR | |-------------------|---------------------|-------| | Typo | 68% | | | LanguageTool | Glyph | 39% | | Phonetic | 21% | | | Typo | 81% | | | Online Correction | Glyph | 42% | | Phonetic | 25% | | Table 6: SRR of auto-correctors to 3 perturbation rules. Furthermore, we carry out experiments on the **defense sides**. We used two widely used correctors, LanguageTool1 and Online Correction2 as the defense method. Then we selected 100 successfully attacked text samples for each perturbation rule based on the 2ST attack objective from the ChatGPT-GP dataset. Finally, we evaluated the samples modified by these typo-correctors to determine whether they were successfully repaired. Note that if the corrector provided more than one recommended correct word, we utilized the first recommended word. Table 6 presents the comparison of the Successful Repair Rate (SRR). It is shown that auto-typo correctors can partially correct human mistakes from typo perturbations in our work. However, correcting a little fiercely perturbed sentence caused by glyphs and phonetics proves challenging. Hence, our method can remain effective against auto-typo correctors. It is also worth noting that these correctors often struggle to automatically rectify words in a manner that aligns with the user's intent when human intervention is not involved in the word selection process from the correction word list. Finally, we engage in a discussion concerning **human attack** without algorithmic interventions and **wordlevel attacks** in Appendix C, to separately provide evidence for the effectiveness of our attack algorithm and highlight the impracticality of directly transferring text adversarial attack methods to DMs. ## 6 Conclusion In this work, we present a comprehensive evaluation of the robustness of DMs against real-world attacks. Unlike previous studies that focused on malicious alterations to input texts, we explore an attack method based on realistic errors that humans can make to ensure semantic consistency. Our novel distribution-based attack method can effectively mislead DMs in a black-box setting without any knowledge of the original generative model. Importantly, we show that our method does not solely target the text encoder in DMs, it can also attack the diffusion process. Even with extremely low perturbation rates and query times, our method can still achieve a high attack success rate. 1https://languagetool.org/. 2https://www.onlinecorrection.com/. ## References Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. Universal adversarial attacks on text classifiers. In *ICASSP 2019-2019 IEEE International Conference on* Acoustics, Speech and Signal Processing (ICASSP), pp. 7345–7349. IEEE, 2019. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. *arXiv preprint* arXiv:1803.11175, 2018. Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, and Maosong Sun. Why should adversarial perturbations be imperceptible? rethink the research paradigm in adversarial NLP. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing(EMNLP)*, pp. 11222–11237. Association for Computational Linguistics, 2022. Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, et al. From adversarial arms race to model-centric evaluation: Motivating a unified automatic robustness evaluation framework. *arXiv preprint arXiv:2305.18503*, 2023. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. *Advances in* Neural Information Processing Systems, 34:19822–19835, 2021. Chengbin Du, Yanxi Li, Zhongwei Qiu, and Chang Xu. Stable diffusion is unstable. *ArXiv*, abs/2306.02583, 2023. Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In *Proceedings of the Thirty-First Conference on Uncertainty in* Artificial Intelligence, pp. 258–267, 2015. Steffen Eger and Yannik Benz. From hero to zéroe: A benchmark of low-level adversarial attacks. In Proc. of AACL, 2020. Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. Text processing like humans do: Visually attacking and shielding nlp systems. In *Proc. of NAACL*, 2019a. Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. Text processing like humans do: Visually attacking and shielding nlp systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1634–1647, 2019b. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3719–3728, 2018. Brian Formento, Chuan-sheng Foo, Anh Tuan Luu, and See Kiong Ng. Using punctuation as an adversarial attack on deep learning-based nlp systems: An empirical study. In Findings of the Association for Computational Linguistics: EACL 2023, pp. 1–34, 2023. Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. In Computer Vision–ECCV 2022: 17th European Conference, pp. 89–106, 2022. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 10686–10696. IEEE, 2022. Prakhar Gupta, Yulia Tsvetkov, and Jeffrey P Bigham. Synthesizing adversarial negative responses for robust response ranking and evaluation. In *Findings of the Association for Computational Linguistics:* ACL-IJCNLP 2021, pp. 3867–3883, 2021. Wenjuan Han, Liwen Zhang, Yong Jiang, and Kewei Tu. Adversarial attack and defense of structured prediction models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pp. 2327–2338, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022. Xuanli He, Lingjuan Lyu, Lichao Sun, and Qiongkai Xu. Model extraction and adversarial transferability, your bert is vulnerable! In *Proceedings of the 2021 Conference of the North American Chapter of the* Association for Computational Linguistics: Human Language Technologies, pp. 2006–2012, 2021. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In *Proceedings of the 2021 Conference on Empirical Methods in* Natural Language Processing, pp. 7514–7528, 2021. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020. Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. *arXiv preprint arXiv:2210.02303*, 2022a. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. *arXiv preprint arXiv:2204.03458*, 2022b. Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, and Dongwon Lee. Perturbations in the wild: Leveraging humanwritten text perturbations for realistic adversarial attack and defense. *arXiv preprint arXiv:2203.10346*, 2022. Deokjae Lee, Seungyong Moon, Junhyeok Lee, and Hyun Oh Song. Query-efficient and scalable black-box adversarial attacks on discrete sequential data via bayesian optimization. In *International Conference on* Machine Learning, pp. 12478–12497. PMLR, 2022. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and William B Dolan. Contextualized perturbation for textual adversarial attack. In *Proceedings of the 2021 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 5053–5069, 2021. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. Textbugger: Generating adversarial text against real-world applications. *arXiv preprint arXiv:1812.05271*, 2018. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In *International Conference on Machine Learning*, pp. 12888–12900. PMLR, 2022. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. BERT-ATTACK: adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP), pp. 6193–6202, 2020. Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 11461–11471, 2022. Natalie Maus, Patrick Chao, Eric Wong, and Jacob Gardner. Adversarial prompting for black box foundation models. *arXiv preprint arXiv:2302.04237*, 2023. Zhao Meng and Roger Wattenhofer. A geometry-inspired attack for generating natural language adversarial examples. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 6679–6689, 2020. Raphaël Millière. Adversarial attacks on image generation with made-up words. arXiv preprint arXiv:2208.04135, 2022. Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with textguided diffusion models. In *International Conference on Machine Learning*, pp. 16784–16804. PMLR, 2022. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2text: Describing images using 1 million captioned photographs. In *Advances in Neural Information Processing Systems*, volume 24. Curran Associates, Inc., 2011. Danish Pruthi, Bhuwan Dhingra, and Zachary C Lipton. Combating adversarial misspellings with robust word recognition. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 5582–5591, 2019. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *International Conference on Machine Learning*, pp. 8821–8831, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 1(2):3, 2022. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1085–1097, July 2019. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *IEEE/CVF Conference on Computer Vision and Pattern* Recognition(CVPR), pp. 10674–10685. IEEE, 2022. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. *Advances in Neural Information Processing Systems*, 35: 36479–36494, 2022a. Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022b. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. *arXiv preprint arXiv:2111.02114*, 2021. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. *Advances in Neural Information Processing Systems*, 35:25278–25294, 2022. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. *arXiv preprint* arXiv:2010.02502, 2020. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In *2nd International Conference on Learning* Representations(ICLR), 2014. Raphael Tang, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Jimmy Lin, and Ferhan Ture. What the daam: Interpreting stable diffusion using cross attention. *arXiv preprint arXiv:2210.04885*, 2022. Ilya O Tolstikhin, Bharath K Sriperumbudur, and Bernhard Schölkopf. Minimax estimation of maximum mean discrepancy with radial kernels. *Advances in Neural Information Processing Systems*, 29, 2016. Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. Sketch-guided text-to-image diffusion models. arXiv preprint arXiv:2211.13752, 2022. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarial triggers for attacking and analyzing nlp. In *Proceedings of the 2019 Conference on Empirical Methods in Natural* Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pp. 2153–2162, 2019. Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li. T3: Tree-autoencoder constrained adversarial text generation for targeted attack. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pp. 6134–6150, 2020. Ruochen Wang, Ting Liu, Cho-Jui Hsieh, and Boqing Gong. Dpo-diff: On discrete prompt optimization of text-to-image diffusion models. 2023. Tengfei Wang, Ting Zhang, Bo Zhang, Hao Ouyang, Dong Chen, Qifeng Chen, and Fang Wen. Pretraining is all you need for image-to-image translation. In *arXiv*, 2022a. Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. arXiv:2210.14896 [cs], 2022b. Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, and Zhiyuan Liu. Exploring the universal vulnerability of prompt-based learning paradigm. In Findings of the Association for Computational Linguistics: NAACL 2022, pp. 1799–1810, 2022. Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Nan Xu, and Qiang Xu. Mma-diffusion: Multimodal attack on diffusion models. *arXiv preprint arXiv:2311.17516*, 2023. Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, and In So Kweon. Investigating top-k white-box and transferable black-box attack. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 15085–15094, 2022a. Chenyu Zhang, Lanjun Wang, and Anan Liu. Revealing vulnerabilities in stable diffusion via targeted attacks. arXiv preprint arXiv:2401.08725, 2024. Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, and Chao Wu. Towards efficient data free black-box adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15115–15125, June 2022b. Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023. Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1–41, 2020. Haomin Zhuang, Yihua Zhang, and Sijia Liu. A pilot study of query-free adversarial attack against stable diffusion. *arXiv preprint arXiv:2303.16378*, 2023. ## A Proof Of Equations A.1 Proof Of Eq. (3) $$\begin{split}\mathcal{D}_{\text{KL}}(p_{\theta}(x|c^{\prime})\|p_{\theta}(x|c))&\approx\mathcal{D}_{\text{KL}}(p_{\theta}(x|c^{\prime})\|p_{\phi}(x|c))\\ &=\mathbb{E}_{p_{\theta}(x|c^{\prime})}\Big{[}\log\frac{p_{\theta}(x|c^{\prime})}{p_{\phi}(x|c)p(c)}+\log p(c)\Big{]}\\ &=\mathbb{E}_{p_{\theta}(x|c^{\prime})}[-c_{\phi}(x,c)]+\mathbb{E}_{p_{\theta}(x|c^{\prime})}[\log p_{\theta}(x|c^{\prime})]+C\end{split}\tag{9}$$ (3) ## A.2 Proof Of Eq. (8) We demonstrate in this section why Eq. (8) represents only the attack diffusion process. For Eq. (8), we can expand it as: $$\begin{split}\mathcal{D}_{\text{DP}}&\approx\Big{[}\alpha g_{\phi}(c^{\prime})-\beta\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}g_{\phi}(c)\\ &=\alpha g_{\phi}(c^{\prime})^{\top}g_{\phi}(c)-\beta\Big{[}\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}g_{\phi}(c)\end{split}\tag{10}$$ where c is the original text, c ′is the modified text with xi generated from it. The first term, αgϕ(c ′) ⊤gϕ(c), measures the similarity between the original text and the adversarial text. The second term, β h1 N PN i=1 hϕ(xi) i⊤gϕ(c), represents the similarity between the original text and the adversarially generated images. Maximizing this objective constrains the original text and the adversarial text to be as similar as possible after being encoded by the text encoder, while minimizing the similarity between the original text and the adversarial generated images. In this way, it avoids attacking the text encoder and solely attacks the diffusion process. Since Eq. (8) is modified from Eq. (4), we also provide an expanded explanation for Eq. (4) as follows: $$\begin{split}\mathcal{D}_{\text{KL}}(p_{\theta}(x|c^{\prime})\|p_{\phi}(x|c))&\approx\alpha\Big{[}\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}\Big{(}g_{\phi}(c^{\prime})-g_{\phi}(c)\Big{)}\\ &=\alpha\Big{[}\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}g_{\phi}(c^{\prime})-\alpha\Big{[}\frac{1}{N}\sum_{i=1}^{N}h_{\phi}(x_{i})\Big{]}^{\top}g_{\phi}(c)\end{split}\tag{11}$$ The first term, α h1 N PN i=1 hϕ(xi) i⊤gϕ(c ′) , measures the similarity between the adversarial text and adversarially generated images. The second term, α h1 N PN i=1 hϕ(xi) i⊤gϕ(c) , represents the similarity between the original text and the adversarial generated images. Maximizing this objective constrains the encoded adversarial text and the adversarial images to be as similar as possible, which essentially ensures the quality of the text embedding guided image diffusion process, while minimizing the similarity between the original text and the adversarial generated images. In this way, it avoids attacking the diffusion process and solely attacks the text encoder, different from Eq. (8). ## B Experiment Result B.1 Case Study On Dall-E 2 As a supplement to the case study experiments on DALL-E 2 in Section 5, we present two additional cases for each of the optimization objectives, MMD and 2ST, shown in Figure 8. Original Typo Glyph **Phonetic** A cute hoase stands on top of a giant, delicious hamburger. A cute hořse stands on top of a ![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) Original Typo Glyph **Phonetic** A small panta playing a bright red ball on the table. A small paոda playing a bright Original Typo Glyph **Phonetic** A cartoon illustration of a lyon wears a detective's hat inspecting a treasure chest atop the sea. A cartoon illustration of a 1ion wears a detective's hat inspecting a treasure chest atop the sea. A cartoon illustration of a li on Original Typo Glyph **Phonetic** Ber in glasses, deeply engrossed in a hefty tome. Beɑr in glasses, deeply (d) 2ST: Case 2 Figure 8: Illustrations of adversarial attack against DALL-E 2 with MMD or 2ST attack objective. Each of these objectives has two cases. Original Typo Glyph **Phonetic** A read ball on green grass under a blue sky. A red ball on green grass ![19_image_0.png](19_image_0.png) Original Typo Glyph **Phonetic** A strong man is swinning fast and very well. A strong man is swimming fast Figure 9: Cases without noun modification. ## B.2 Lexical Properties Of The Modified Word The modified words of adversarial text can be nouns, adjectives and verbs. In descriptive text about objects, there is a greater occurrence of modified words in nouns. It is important to note that our approach aims to identify words that are most important for the model, rather than those deemed most important by humans. Therefore, in theory, it is not limited to a specific part of speech. Figure 9 shows some cases without noun modification. ## C Discussion On Human Attack And Word-Level Attack C.1 Human Attack Firstly, we would like to emphasize the effectiveness of our adversarial optimization algorithm. In order to demonstrate this, we compare our method with human attack without algorithmic interventions. We randomly selected a set of sentences and made random modifications to the most important words based on human intuition. Remarkably, we observed that a lot of sentences with these modifications did not result in DMs generating incorrect images. This further substantiates the effectiveness of our attack algorithm. We present two illustrative cases in Figure 10. The results emphasize the difficulty of this attack task and show the effectiveness of our method. ## C.2 Word-Level Attack Then we talk about the other level attacks such as word-level attacks. Due to the high sensitivity of the DM to individual words in the prompt, word-level attacks such as synonym replacement or context filling were not employed in this study. If we were to use synonym replacements and substitute words with less commonly used ones, those words themselves might have multiple meanings. In such cases, the model is likely to generate images based on alternative meanings, making the substituted words different in the context of the sentence, Original Adv-1 Adv-2 **Adv-3** A photo of a dinosuar wakling through a moden ctiy ![20_image_0.png](20_image_0.png) A photo of a dinsaur waklnig through aa mdern cityy Original Adv-1 Adv-2 **Adv-3** A snomwan with a carrott nose wearing a hat and scarrf in a snowy field A snoowman with a crrot nose wearing a hat and scraf in a snowy field (b) Case 2 Figure 10: Illustrations of human attack method against Stable Diffusion. Adversarially modified content is highlighted in red. | Attack Method | Perturbation Level | DCR | |----------------------|----------------------|-------| | BERTAttack | Word-level | 19% | | PWWS | Word-level | 24% | | Our Method(Typo) | Char-level | 82% | | Our Method(Glyph) | Char-level | 96% | | Our Method(Phonetic) | Char-level | 73% | Table 7: Comparison on DCR between word-level attacks and our method. even though they may be synonyms in terms of individual words. Therefore, a more stringent restriction is required for word-level replacements. It is precisely because of this reason that traditional text-based attack methods are not applicable to image-text generation. For instance, in sentiment classification tasks, they only consider the overall sentiment of the entire sentence, and the ambiguity of a particular word does not significantly impact the overall result. However, this sensitivity becomes crucial in the context of T2I. Hence, further research is needed to explore word-level and sentence-level attacks on T2I generation models. To better illustrate our point, we conducted a comparison on the semantic consistency between word-level attack and our method. We chose two classical textual word-level adversarial attack algorithms in natural language processing(NLP), BERTAttack (Li et al., 2020) and PWWS (Ren et al., 2019) to compare with our method with 3 perturbation rules (typo, glyph and phonetic) with same optimization objective. We sampled 100 texts from successfully attacked texts for each attack method and evaluated the description consistency between these adversarial texts and their corresponding original texts by humans. To avoid bias, we evaluated each text with three people and took the plural of the three people's opinions as the final decision. Table 7 below presents the comparison on **Description Consistency Rate (DCR)** and shows that our method based on character level perturbation can keep the description consistency far more than word-level attack methods such as BERTAttack and PWWS. | Attack Method | Ori. Text | Adv. Text | |-----------------|----------------------------------------------------------------|----------------------------------------------------------------| | | A red ball on green grass under a blue sky. | A red field on green grass under a blue sky. | | BERTAttack | A white cat sleeping on a windowsill with a flower pot nearby. | A green cat sleeping on a windowsill with a flower pot nearby. | | | A wooden chair sitting in the sand at a beach. | A wooden camera sitting in the sand at a beach. | | | A red ball on green grass under a blue sky. | A red orchis on green grass under a blue sky | | PWWS | A white cat sleeping on a windowsill with a flower pot nearby. | A white guy sleeping on a windowsill with a flower pot nearby | | | A wooden chair sitting in the sand at a beach. | A wooden chairwoman sitting in the baroness at a beach. | Table 8: Word-level attack examples by BERTAttack and PWWS with 2ST attack objective. We also list some examples generated by word-level adversarial attack methods in table 8. It is evident that significant semantic changes have occurred in the examples presented. Therefore, word-level attacks in still have a long way to go in T2I adversarial attack. ## D Limitation In our experiments, we employ DMs as the testbed and evaluate both random attack methods and our proposed method with four optimization objectives on our custom benchmark datasets. Due to limited resources, we focus on Stable Diffusion for the complete experiment and DALL-E 2 for the case study, given that our method involves 12 combinations of optimization objectives and perturbation rules. Therefore, conducting more comprehensive experiments covering different model architectures and training paradigms is a direction for future research. ## E Ethics Statement A potential negative societal impact of our approach is that malicious attackers could exploit it to construct targeted attacks by modifying the loss function, leading to the generation of unhealthy or harmful images, thus causing security concerns. As more people focus on T2I DMs due to their excellent performance on image generation. In such scenarios, it becomes inevitable to address the vulnerability of DMs which can be easily attacked through black-box perturbation. Our work emphasizes the importance for developers of DMs to consider potential attacks that may exist in real-world settings during the training process. ## F Compute Device All experiments were conducted on NVIDIA Tesla A100 GPUs. For diagnostic experiments, each attack rule with each optimization objective on one dataset took approximately 4 GPU days. For real-world attack experiments, each attack rule with each optimization objective on one dataset took approximately 3 GPU days. So in total, running all of the experiments (including ablation studies and case studies) requires about 250 GPU days.
# Maximizing Global Model Appeal In Federated Learning Yae Jee Cho yaejeec@andrew.cmu.edu Carnegie Mellon University Divyansh Jhunjhunwala djhunjhu@andrew.cmu.edu Carnegie Mellon University Tian Li *litian@andrew.cmu.edu* Carnegie Mellon University University of Chicago Virginia Smith smithv@cmu.edu Carnegie Mellon University Gauri Joshi gaurij@andrew.cmu.edu Carnegie Mellon University Reviewed on OpenReview: *https: // openreview. net/ forum? id= 8GI1SXqJBk* ## Abstract Federated learning (FL) aims to collaboratively train a global model using local data from a network of clients. To warrant collaborative training, each federated client may expect the resulting global model to satisfy some individual requirement, such as achieving a certain loss threshold on their local data. However, in real FL scenarios, the global model may not satisfy the requirements of all clients in the network due to the data heterogeneity across clients. In this work, we explore the problem of *global model appeal* in FL, which we define as the total number of clients that find that the global model satisfies their individual requirements. We discover that global models trained using traditional FL approaches can result in a significant number of clients unsatisfied with the model based on their local requirements. As a consequence, we show that global model appeal can directly impact how clients participate in training and how the model performs on new clients at inference time. Our work proposes MaxFL, which maximizes the number of clients that find the global model appealing. MaxFL achieves a 22-40% and 18-50% improvement in the test accuracy of training clients and (unseen) test clients respectively, compared to a wide range of FL approaches that tackle data heterogeneity, aim to incentivize clients, and learn personalized/fair models. ## 1 Introduction Federated learning (FL) is a distributed learning framework that considers training a machine learning model using a network of clients (e.g., mobile phones, hospitals), without directly sharing client data with a central server (McMahan et al., 2017). FL is typically performed by aggregating clients' updates over multiple communication rounds to produce a global model (Kairouz et al., 2019). In turn, each client may have its own requirement that it expects to be met by the resulting global model under different settings such as at inference or with some fine-tuning. For example, clients such as hospitals or edge devices may expect that the global model performs *at least* better than a local model trained in isolation on the client's limited local data before contributing to FL training. Unfortunately, due to data heterogeneity across the clients, the global model may fail to meet the requirements of all clients (Yu et al., 2020). Previously, a plethora of works in FL on techniques such as variance reduction (Karimireddy et al., 2019), personalization (Fallah et al., 2020; Dinh et al., 2020), and fairness (Li et al., 2019) have proposed to train a global model or several personalized models that can better cater to the needs of the clients or the server. However, prior work has not directly focused on *the total number of clients* that are satisfied with the single global model based on their individual requirements, and has not explored how this may affect the training of the global model from the server's perspective when clients have the autonomy to freely join or leave the federation. Recent closely related works focus on clients' incentives from a game-theoretic lens (Hu et al., 2023; Kang et al., 2019; Zhang et al., 2021) and establish useful insights for simple linear tasks, but it is difficult to extend these to practical non-convex machine learning problems. Other related works design strategies specifically to prevent client dropout (Wang & Xu, 2022; Gu et al., 2021), but these algorithms are stateful, i.e., they require saving the gradient information from previously participating clients for the current updates, making them impractical to implement in cross-device settings (Kairouz et al., 2019). Moreover, these works only provide convergence guarantees to the global minimum with respect to the standard FedAvg objective (McMahan et al., 2017), lacking theoretical justification that their objectives can yield solutions that guarantee more participating clients compared to other classic FL objectives, even in simplified settings; we provide such guarantees for our proposed objective for mean estimation problems, which helps to shed light on our strong empirical performance (Section 2.2). Proposing a new and formal metric to evaluate FL systems, we define that a global model is *appealing* to a client if it satisfies the client's specified requirement, such as incurring at most some max training loss. Subsequently, we define the number of clients which find the global model appealing as *global model* appeal (GM-Appeal; formalized in Definition 1). We show that having a high global model appeal is critical for the server to maintain a large pool of clients to select from for training, and for gathering additional willingly participating clients. This is especially true in the light of clients possibly opting out of FL due to the significant costs associated with training (e.g., computational overhead, privacy risks, logistical challenges), which is a practical concern not typically considered in prior work. With a larger pool of clients to select from, a server can not only improve privacy-utility trade-offs (McMahan et al., 2018), but can also improve test accuracy on participating clients, and produce a global model that generalizes better at inference to new unseen clients (see Fig. 1 and Table 1). )HG$YJ )HG3UR[ 0D[)/ )HG$YJ )HG3UR[ 0D[)/ *0$SSHDO 7HVW$FFXUDF\ *0$SSHDO *0$SSHDO 7HVW$FFXUDF\ *0$SSHDO 7HVW$FFXUDF\ 7HVW$FFXUDF\ (a) Seen Clients (b) Unseen Clients Figure 1: Test acc. and GM-Appeal of the global model for FMNIST. A higher GM-Appeal results in a higher test accuracy for both the seen clients that have participated during training and unseen clients that have not, due to the server having a larger pool of clients to select from. MaxFL, which aims to maximize GMAppeal, results in the highest test accuracy compared to the other baselines that do not consider GM-Appeal. In this work, we seek to understand: (1) What benefits exist when maximizing global model appeal relative to other common federated modeling approaches, and (2) What strategies exist to maximize global model appeal in federated settings. Our key contributions are summarized as follows: - We introduce the notion of *global model appeal* (referred to as GM-Appeal)—the fraction of clients that have their local requirements met by the global model. We then propose MaxFL, an objective that directly maximizes global model appeal, and show that having a high global model appeal can lead to better test accuracy on training clients, as well as better generalization to unseen clients at inference time. - We theoretically show for mean estimation that the MaxFL objective yields a solution that guarantees higher GM-Appeal than standard FL objectives and provide convergence guarantees for our MaxFL solver which allows partial client participation, is applicable to non-convex objectives, and is stateless (does not require clients to maintain local parameters during training). - We empirically evaluate the performance of MaxFL to thoroughly understand the benefits of maximizing GM-Appeal with experiments where i) clients can flexibly opt-out of training, ii) there are new incoming (unseen) clients, and iii) clients perform personalization via local fine-tuning with the global model. - We show that MaxFL significantly improves the global model appeal in practice, leading to a 22-40% and 18-50% test accuracy improvement for the seen clients and unseen clients respectively, compared to a wide range of FL methods, including those that tackle data heterogeneity, aim for variance reduction or incentivizing clients, or provide personalization or fairness. Overall, our goal in comparing MaxFL with a variety of other FL methods that have varying goals is not necessarily to compete against these methods, but rather to understand and demonstrate the potential benefits of our proposed notion of global model appeal relative to other objectives under different scenarios, such as with flexible client participation or new incoming clients. As global model appeal has not been studied previously in FL, our work is the first to explore its possible implications, and then propose an objective to train a global model that can maximize the number of clients whose individual requirements are satisfied. We hope our proposed perspective of viewing and evaluating FL systems can inspire future works in different FL scenarios and applications where client participation is not necessarily taken for granted, and can potentially be used in conjunction with prior approaches in FL (e.g., see Section D.2). We provide a more detailed review of prior work and related areas of fairness, personalization and client incentives in Section 4. ## 2 Problem Formulation Setting. We consider a setup where M clients are connected to a central server to collaboratively train a global model. For each client k ∈ [M], its true loss function is given by fk(w) = Eξ∼Dk [ℓ(w, ξ)] where Dk is the true data distribution of client k, and ℓ(w, ξ) is the composite loss function for the model w ∈ R d for data sample ξ. In practice, each client only has access to its local training dataset Bk with |Bk| = Nk data samples sampled from Dk. Client k's empirical loss function is Fk(w) = 1 |Bk| Pξ∈Bk ℓ(w, ξ). While some of the take-aways of our work (e.g., improving performance on unseen clients) may be more specific to cross-device applications, our general setup and method are applicable to both cross-device and cross-silo FL. Defining Global Model Appeal. Each client's natural aim is to find a model that minimizes its true local loss fk(w). Clients can have different thresholds of how small this loss should be, and we denote such self-defined threshold for each client as ρk, k ∈ [M]. For instance, each client can perform solo training on its local dataset Bk to obtain an approximate local model wb k and have its threshold to be the true loss from this local model, i.e., ρk = fk(wb k) 1. Based on these client requirements, we provide the formal definition of global model appeal below: Definition 1 (Global Model Appeal). A global model w is said to be appealing to client k ∈ [M] if fk(w) < ρk, i.e., the global model w *yields a smaller local true loss than the self-defined threshold of the client. Accordingly,* we define the fraction of clients to which the global model is appealing as global model appeal (GM-Appeal) with I *being the indicator function:* $$G M{\mathrm{-}}A P P E A L={\frac{1}{M}}\sum_{k=1}^{M}\mathbb{I}\{f_{k}({\bf w})<\rho_{k}\}$$ $$\left(1\right)$$ I{fk(w) < ρk} (1) Our GM-Appeal metric measures the exact *fraction of clients* that find the global model appealing by focusing on whether the global model satisfies the clients' requirements or not instead of looking at the gap between fk(w) and ρk. Another variation of (1) could be to measure the margin Pk max{ρk − fk(w), 0}, but this does not capture the motivation behind our work which is to understand how the number of clients that find the global model appealing affects the global model performance. Why Explore GM-Appeal in FL? GM-Appeal measures how many clients have their requirements satisfied by the global model. Thus, it can gauge important characteristics of the server's global model such as i) how many clients are likely to dropout with the current global model or ii) how many new incoming 1The client can have held-out data used for calculating the true loss fk(·) or use its training data as a proxy. We explain in more detail of defining ρk in Section 3. clients that do not have the capacity for additional training will likely be satisfied with the current global model without any additional training to the model. Ultimately, a high global model appeal can lead to a larger pool of clients for the server to select from. The standard FL objective (McMahan et al., 2017) or its popular variants (Karimireddy et al., 2019; Li et al., 2020; Fallah et al., 2020) does not consider whether the global model satisfies the clients' requirements, and implicitly assumes that the server will have a large number of clients to select from. However, this may not necessarily be true if clients are allowed to dropout when they find the global model unappealing. We show that acquiring a larger pool of clients by improving global model appeal is useful for improving the global model for both the seen clients at training as well as the unseen clients at inference (see Fig. 1 and Table 1). In fact, we find that other baselines such as those that aim to tackle data heterogeneity, improve fairness, or provide personalization have low GM-Appeal, leading to a large number of clients opting out. Due to this, the global model is trained on just a few limited data points, resulting in poor performance. Our work explores a new notion of global model appeal, showing its significance in FL. ## 2.1 Proposed Maxfl Objective In this section, we first introduce MaxFL whose aim is to train a global model that maximizes GM-Appeal. A naïve approach can be to directly maximize GM-Appeal defined in (1) as follows: $$\operatorname*{argmax}_{\mathbf{w}}\operatorname{GM-APPEAL}=\operatorname*{argmin}_{\mathbf{w}}\sum_{k=1}^{M}\operatorname{sign}(f_{k}(\mathbf{w})-\rho_{k}).$$ $$\left(2\right)$$ where sign(x) = 1 if x ≥ 0 and 0 otherwise. There are two immediate difficulties in minimizing (2). First, clients may not know their true data distribution Dk to compute fk(w) − ρk. Second, the sign function makes the objective nondifferentiable and limits the use of common gradient-based methods. We resolve these issues by proposing a "proxy" for (2) with the following relaxations. i) Replacing the Sign function with the Sigmoid function σ(·): Replacing the non-differentiable 0-1 loss with a smooth differentiable loss is a standard tool used in optimization (Nguyen & Sanner, 2013; Masnadi-shirazi & Vasconcelos, 2008). Given the many candidates (e.g. hinge loss, ReLU, sigmoid), we find that using the sigmoid function is essential for our objective to faithfully approximate the true objective in (2). We further discuss the theoretical implications of using the sigmoid loss in Section 2.2. ii) Replacing fk(w) **with** Fk(w): As clients do not have access to their true distribution Dk to compute fk(·) we propose to use an empirical estimate σ(Fk(w)−ρk). This is again similar to what is done in standard FL where we minimize Fk(w) instead of fk(w) at client k. Note that the global model w is trained on the data of all clients, making it unlikely to overfit to the local data of any particular client, leading to fk(w) ≈ Fk(w), which we also show empirically in Appendix D.2. With the two relaxations above, we present our proposed MaxFL objective: MaxFL Obj.: $\min_{\mathbf{w}}\widetilde{F}(\mathbf{w})=\min_{\mathbf{w}}\frac{1}{M}\sum_{i=1}^{M}\widetilde{F}_{i}(\mathbf{w}),\text{where}\widetilde{F}_{i}(\mathbf{w}):=\sigma(F_{i}(\mathbf{w})-\rho_{i})$. Before presenting our proposed solver for the MaxFL objective, we first present a motivating toy example with mean estimation which shows that MaxFL's objective leads to a different solution that has higher GM-Appeal compared to the solution obtained from the classic FL objective. ## 2.2 Toy Example: Maximizing Gm-Appeal In Mean Estimation We consider a toy setup with M = 2 clients where the true loss function at each client is given by fk(w) = (w − θk) 2. In practice, clients only have Nk samples drawn from the distribution given by ek,j ∼ N (θk, ν2), ∀j ∈ [Nk]. We further assume that the empirical loss function at each client is given by Fk(w) = (w − θbk) 2 + (θbk − θk) 2 where θbk is the empirical mean, θbk =1 |Bk| PNk j=1 ek,j . It is easy to see that the minimizer of Fk(w) is the empirical mean θbk. Thus, we set the solo-trained model at each client as wbk = θbk and the loss threshold requirement at a client as ρk = Fk(wbk) = (θbk − θk) 2. GM-Appeal **for Standard FL Model Decreases Exponentially with Heterogeneity.** For simplicity let us assume N1 = N2 = N where γ 2 = ν 2/N is the variance of the local empirical means and γ 2 G = ((θ1 − θ2)/2)2 > 0 is the measure of heterogeneity between the true means. The standard FL objective will always set the FL model to be the average of the local empirical means (i.e. w = (θb1 + θb2)/2) and does not take into account the heterogeneity among the clients. As a result, the GM-Appeal of the global model decreases *exponentially* as γ 2 G increases. Lemma 2.1. The expected GM-Appeal *of the standard FL model is upper bounded by* 2 exp −γ 2 G/(5γ 2), where the expectation is taken over the randomness in the local datasets B1, B2. Maximizing GM-Appeal **with Relaxed Objective.** We now maximize the GM-Appeal for this setting by solving a relaxed version of the objective in (2) as proposed earlier. We replace the true loss fk(·) with the empirical loss Fk(·) and replace the 0-1 (sign) loss with a differentiable approximation h(·). We first show that setting h(·) to be a standard convex surrogate for the 0-1 loss (e.g. log loss, exponential loss, ReLU) leads to our new objective behaving the same as the standard FL objective. Lemma 2.2. Let h *be any function that is convex, twice differentiable, and strictly increasing in* [0, ∞). Then our relaxed objective is strictly convex and has a unique minimizer at w ∗ = (θb1 + θb2)/2. MaxFL Objective Leads to Increased **GM-Appeal.** Based on Lemma 2.2, we see that we need nonconvexity in h(·) for the objective to behave differently than standard FL. We set h(x) = σ(x) = exp(x)/1 + exp(x), as proposed in our MaxFL objective in (3) and find that the MaxFL objective *adapts* to the empirical heterogeneity parameter γb 2 G = (θb2 − θb1/2)2. If γb 2 G < 1 (small data heterogeneity), the objective encourages collaboration by setting the global model to be the average of the local models. Conversely, if γb 2 G > 2 (large data heterogeneity), the objective encourages *separation* by setting the global model close to the local model of either client (see Fig. 2 below). Based on this observation, we have the following theorem. Theorem 2.1. Let w be a local minima of the MaxFL objective. The expected GM-Appeal using w *is lower* bounded by exp −γ −2/16 *where the expectation is over the local datasets* B1, B2. Observe that even with γ 2 G ≫ 0, MaxFL will keep satisfying the requirement of at least one client by adapting its objective accordingly. We show the behavior of MaxFL in a 3-client setup which further highlights the non-trivialness of our proposed MaxFL's formulation in Appendix A along with the simulation details for Fig. 2. Details of our proof in this section can be found in Appendix B. ![4_image_0.png](4_image_0.png) í Figure 2: (a): GM-Appeal for FedAvg decays exponentially while GM-Appeal for MaxFL is lower bounded. Replacing the sigmoid approximation with ReLU in MaxFL leads to the same solution as FedAvg. (b-c): MaxFL adapts to the heterogeneity of the problem—for small heterogeneity it encourages collaboration by having a single global minima, for large heterogeneity it encourages separation by having far away local minimas. ## 3 Proposed Maxfl **Solver** In this section, we present our MaxFL objective's solver. The MaxFL algorithm enjoys the following properties: i) uses the same local SGD procedure as in standard FedAvg, ii) allows partial client participation, and iii) is stateless. By stateless, we mean that clients do not carry varying local parameters throughout training rounds, preventing issues from stale parameters (Wang et al., 2021). With the sigmoid approximation of sign loss and for differentiable Fk(w), our objective Fe(w) in (3) is differentiable and can be minimized with gradient descent and its variants. Its gradient is given by: $$\nabla\tilde{F}(\mathbf{w})={\frac{1}{M}}\sum_{k=1}^{M}{\underbrace{(1-\tilde{F}_{k}(\mathbf{w}))\tilde{F}_{k}(\mathbf{w})}}_{\mathrm{aggregating~weight:}=q_{k}(\mathbf{w})}\nabla F_{k}(\mathbf{w}).$$ ∇Fk(w). (4) Observe that ∇Fe(w) is a **weighted aggregate** of the gradients of the clients' empirical losses, similar in spirit to the gradient ∇F(w) in standard FL. The key difference is that in MaxFL, the weights qk(w) := (1 − Fek(w))Fek(w) depend on how much the global model appeals to the clients and are dynamically updated based on the current model w, as we discuss below. Behavior of the Aggregation Weights. For a given w, the aggregation weights qk(w) depend on the GM-Appeal Gap, Fk(w)−ρk (see Fig. 3). When Fk(w) ≪ ρk, the global model w sufficiently meets the client's requirement. Therefore, MaxFL sets qk(w) ≈ 0 to focus on the updates of other clients. Similarly, if Fk(w) ≫ ρk, MaxFL sets qk(w) ≈ 0. This is because Fk(w) ≫ ρk implies that the current model w is incompatible with the requirement of client k and hence it is better to avoid optimizing for this client at the risk of sacrificing other clients' requirements. MaxFL gives the highest weight to clients for which the global model performs similarly to the clients' requirements since this allows it to increase the GM-Appeal without sabotaging other clients' requirements. $$\left(4\right)$$ ![5_image_0.png](5_image_0.png) (Fk(w) Fk(wb k), 8 k) Figure 3: Aggregating weight qk(w) versus the GM-Appeal gap defined as Fk(w)−ρk for any client k ∈ [M]. A Practical MaxFL Solver. Directly minimizing the MaxFL objective using gradient descent can be slow to converge and impractical, as it requires all clients to be available for training. Instead, we propose a practical MaxFL algorithm, which uses multiple local updates at each client to speed up convergence as done in standard FL (McMahan et al., 2017) and allow partial client participation. We use the superscript (*t, r*) to denote the communication round t and the local iteration index r. In each round t, the server selects a new set of clients S (t,0) uniformly at random and sends the most recent global model w(t,0) to the clients in S (t,0). Clients in S (t,0) perform τ local iterations with a learning rate ηl to calculate their updates as w (t,r+1) k = w (t,r) k − ηlg(w (t,r) k, ξ(t,r) k), ∀ r ∈ {0*, ..., τ* − 1} where g(w (t,r) k, ξ(t,r) k) = 1 b Pξ∈ξ (t,r) k ∇f(w (t,r) k, ξ) is the stochastic gradient computed using a mini-batch ξ (t,r) kof size b that is randomly sampled from client k's local dataset Bk. The weight qk(w (t,0) k) can be computed at each client by calculating the loss over its training data with w (t,0) k, which is a simple inference step. Clients in S (t,0) then send their local updates ∆w (t,0) k:= w (t,τ) k − w (t,0) kand weights qk(w (t,0) k) back to the server, which updates the global model as w(t+1,0) = w(t,0) − η (t,0) gPk∈S(t,0) qk(w(t,0))∆w (t,0) k where η (t,0) g = Pηg k∈S(t,0) qk(w(t,0))+ϵ is the adaptive server learning rate with global learning rate ηg and ϵ > 0. We discuss the reasoning for such a learning rate below. The pseudo-code for our MaxFL solver can be found in Algorithm 1. Adaptive Server Learning Rate for MaxFL. With Lc continuous and Ls smooth Fk(w), ∀k ∈ [M] (see Assumption 3.1), the objective Fe(w) is Les smooth where Les = Ls M PM k=1 qk(w) + Lc 4(see Appendix C). Hence, the optimal learning rate η˜ for the MaxFL is given by, ηe = 1/Les = Mη/ PM k=1 qk(w) + ϵ , where η = 1 Ls is the optimal learning rate for standard FL and ϵ = MLc 4Ls> 0 is a constant. The denominator of the optimal ηe is proportional to the sum of the aggregation weights qk(w) and acts as a dynamic normalizing factor. Therefore, we propose using an adaptive global learning rate η (t,0) g = ηg/(Pk∈S(t,0) qk(w(t,0)) + ϵ) with hyperparameters ηg, ϵ. Setting ρk as Fk(wb k) **for MaxFL.** One intuitive way to set ρk for each client is to set it as the training loss value Fk(wb k) where wb k is a client local model that is solo-trained with a few warm-up local SGD steps Algorithm 1 Our Proposed MaxFL Solver 1: **Input:** mini-batch size b, local iteration steps τ , client requirement ρk, k ∈ [M] 2: **Output:** Global model w(T ,0) 3: **Initialize:** Global model w(0,0) 4: For t = 0*, ..., T* − 1 **communication rounds do**: 5: **Global server do:** 6: Select m clients for S (t,0) uniformly at random and send w(t,0) to clients in S (t,0) 7: **Clients** k ∈ S(t,0) **in parallel do:** 8: Set w (t,0) k = w(t,0), and calculate qk(w (t,0) k) = σ(Fk(w (t,0) k) − ρk) 9: For r = 0*, ..., τ* − 1 **local iterations do:** 10: Update w (t,r+1) k ← w (t,r) k − ηlg(w (t,r) k, ξ(t,r) k) 11: Send ∆w (t,0) k = w (t,0) k − w (t,τ) kand aggregation weight qk(w (t,0) k) to the server 12: **Global server do:** 13: Update global model with w(t+1,0) = w(t,0) − η (t,0) gPk∈S(t,0) qk(w(t,0))∆w (t,0) k on its local data. The loss value only needs to be computed once and saved as a constant beforehand at each client. How to train the local model wb k, such as deciding the number of steps of training or whether to add regularization, is dependent on the personal resources and requirements of the clients. Therefore, the quality of the local model (such as whether it has overfitted or underfitted) is not explicitly controlled by MaxFL. Nevertheless, we do provide an ablation study on the number of local SGD steps we take for training the local trained model and provide the results in Appendix D.2 which shows that MaxFL performance is in fact robust to the number of local SGD steps we take and the performance does not vary much on this number. In the essence, ρk is a client-dependent parameter that the client can choose, and our proposed MaxFL is a more general framework that can be used for any client-defined ρk. One might raise concerns that adversarial clients can send arbitrarily small ρk in the hope to get high weights, but as Fig. 3 clearly shows this will in fact make the client get smaller weights. Further, we add experiments with byzantine clients in Section 5 in Appendix D.2 and show that MaxFL is robust to these specific attack scenarios. Appeal-based Flexible Client Participation. It may appear that our MaxFL solver requires clients to always participate in FL if selected even when the global model does not appeal to them. However, our algorithm is easily modified to allow clients to participate flexibly during training depending on whether they find the global model appealing or not. For such appeal-based flexible client participation, we assume that clients are available for training if selected only during a few initial training rounds. After these rounds, clients are included in the pool of clients where the server can select the clients from only if they find the global model appealing. We demonstrate this extension of MaxFL with appeal-based flexible client participation in Table 1. The experiment shows that with flexible client participation, retaining a high global model is even more imperative for the server to achieve good test accuracy and generalization performance. We also show that even after we allow clients to participate flexibly, MaxFL retains a significantly higher number of clients that find the global model appealing compared to the other baselines. ## 3.1 Convergence Properties Of **Maxfl** In this section, we show the convergence guarantees of MaxFL in Algorithm 1. Our convergence analysis shows that the gradient norm of our global model goes to zero, and therefore we converge to a stationary point of our objective Fe(w). First, we introduce the assumptions and definitions below. Assumption 3.1 (Continuity & Smoothness of Fk(w), ∀ k). The local objective functions F1(w)*, ..., F*M(w), are Lc-continuous and Ls*-smooth for any* w. Assumption 3.2 (Unbiased Stochastic Gradient with Bounded Variance for Fk(w), ∀ k). *For mini-batch* ξk uniformly sampled at random from Bk, the resulting stochastic gradient is unbiased, i.e., E[gk(wk, ξk)] = ∇Fk(wk), and its variance is bounded: E[∥gk(wk, ξk) − ∇Fk(wk)∥ 2] ≤ σ 2 g . Assumption 3.3 (Bounded Dissimilarity of F(w)). *There exists* β 2 ≥ 1, κ2 ≥ 0 *such that* 1 M PM i=1 ∥∇Fi(w)∥ 2 ≤ β 2∥ 1 M PM i=1 ∇Fi(w)∥ 2 + κ 2*for any* w. Assumption 3.1-3.3 are standard assumptions used in the optimization literature (Stich, 2019; Karimireddy et al., 2019; Bistritz et al., 2020; Wang et al., 2020), including the Lc-continuity assumption (Shalev-Shwartz et al., 2009; Riis et al., 2021). Note that we do not assume anything for our proposed objective function Fe(w) and only have assumptions over the standard objective function F(w) to prove the convergence of MaxFL over Fe(w) in Theorem 3.1. Theorem 3.1 (Convergence to the MaxFL Objective Fe(w)). *Under Assumption 3.1-3.3, suppose the* server uniformly selects m out of M *clients without replacement in each round of Algorithm 1. With* ηl = √ 1 T τ , ηg = √τm, for total communication rounds T of the MaxFL *solver in Algorithm 1 we have:* $$\min_{t\in[T]}\mathbb{E}\left[\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\right]\leq\mathcal{O}\left(\frac{\sigma_{g}^{2}}{\sqrt{m\tau T}}\right)+\mathcal{O}\left(\frac{\sigma_{g}^{2}}{T\tau}\right)+\mathcal{O}\left(\frac{\sqrt{\tau}}{\sqrt{T}m}\right)+\mathcal{O}\left(\frac{\kappa^{2}+\beta^{2}}{T}\right)$$ (5) $\frac{1}{2}$ $\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$. $and\ L_c$). (5) where O subsumes all constants (including Ls and Lc). Theorem 3.1 shows that with a sufficiently large number of communication rounds T we reach a stationary point of our objective function Fe(w). The proof is deferred to Appendix C where we also show a version of this theorem that contains the learning rates ηg and ηl with the constants. ## 4 Related Work To the best of our knowledge, the notion of GM-Appeal and the proposal to maximize it while considering flexible client participation have not appeared before in the previous literature. Previous works have focused on the notion of satisfying clients' personal requirements from a game-theoretic lens or designing strategies specifically to prevent client dropout, including the use of personalization, which have their limitations, as we discuss below. ## 4.1 Incentivizing Clients And Preventing Drop-Out A recent line of work in game theory models FL as a coalition of self-interested agents and studies how clients can optimally satisfy their individual incentives defined differently from our goal. Instead of training a single global model, Donahue & Kleinberg (2021a;b) consider the problem where each client tries to find the best possible coalition of clients to federate with to minimize its own error. Blum et al. (2021) consider an orthogonal setting where each client aims to satisfy its constraint of low expected error while simultaneously trying to minimize the number of samples it contributes to FL. While these works establish useful insights for simple linear tasks, it is difficult to extend these to practical non-convex machine learning tasks. In contrast to these works, in MaxFL we aim to directly maximize the *number* of satisfied clients using a global model. Concurrent work (Huang et al., 2023) has evaluated the diverse data contributions in FL where the tension between the server and the clients is modeled as a utility-cost function to analyze the optimal behavior of clients in FL and propose a mechanisam to incentivize agents to give contribution to FL using the Stackelberg game to maximize the total utility. Another concurrent similar line of work (Dorner et al., 2023) has investigated how to incentivize "honesty" across clients in FL where honesty implies clients sending local updates that are true to their local data and non-malicious. This perspective alleviates some of the analysis complexities occurring in game-theoretic formulations and allows us to consider general non-convex objective functions. A separate line of work looks at how to prevent and deal with client drop-out in FL. Wang & Xu (2022) introduce a notion of 'friendship' among clients and proposes to use friends' local update as a substitute for the update of dropped-out clients. Gu et al. (2021) propose to use previous updates of dropped-out clients as a substitute for their current updates. Both algorithms are stateful. Another line of work (Han et al., 2022; Kang et al., 2019; Zhang et al., 2021) aims to incentivize clients to contribute resources for FL and promote long-term participation by providing monetary compensation for their contributions, determined using game-theoretic tools. These techniques are orthogonal to MaxFL's formulation and can be combined if needed to further incentivize clients. ## 4.2 Personalized And Fair Federated Learning Personalized federated learning (PFL) methods aim to increase performance by training multiple related models across the network (e.g., Smith et al., 2017). In contrast to PFL, MaxFL focuses on the challenging goal of training a *single* global model that can maximize the number of clients for which the global model outperforms their local model. Unlike PFL which may require additional training on new clients for personalization, MaxFL's global model can be used by new clients without additional training (see Table 2). Also, MaxFL is stateless, in that clients do not carry varying local parameters throughout training rounds as in many popular personalized FL methods (Smith et al., 2017; Dinh et al., 2020; Fallah et al., 2020; Li et al., 2021), preventing parameter staleness problems which can be exacerbated by partial client participation (Wang et al., 2021). Furthermore, MaxFL is orthogonal to and can be combined with PFL methods. We demonstrate this in Table 3, where we show results for MaxFL jointly used with personalization via fine-tuning (Jiang et al., 2019). We compare MaxFL +Fine-tuning with another well known PFL method PerFedAvg (Fallah et al., 2020) and show that MaxFL appeals to a significantly higher number of clients than the baseline. Finally, another related area is fair FL, where a common goal is to train a global model whose accuracy has less variance across the client population than standard FedAvg (Li et al., 2019; Mohri et al., 2019). A side benefit of these methods is that they can improve global model appeal for the worst performing clients. However, the downside is that the performance of the global model may be degraded for the best performing clients, thus making it unappealing for them to participate. We show in Appendix D.2 that fair FL methods are indeed not effective in increasing GM-Appeal. ## 5 Experiments In this section we evaluate MaxFL for a number of different datasets while comparing with a wide range of baselines to show that maximizing GM-Appeal, i.e., training a global model that can appeal to a *larger* number of clients, provide many benefits for FL including: i) the server gaining more participating clients to select clients from for training a better global model for the seen clients, ii) the global model having a higher chance to have a good performance on unseen clients, and iii) clients gaining better performance with the global model when they combine MaxFL with local fine-tuning. Datasets and Model. We evaluate MaxFL in three different settings: image classification for non-iid partitioned (i) FMNIST (Xiao et al., 2017), (ii) EMNIST with 62 labels (Cohen et al., 2017), and (iii) sentiment analysis for (iv) Sent140 (Go et al., 2009)with a MLP. For FMNIST, EMNIST, and Sent140 dataset, we consider 100, 500, and 308 clients in total that are used for training where we select 5 and 10 clients uniformly at random per round for FMNIST and EMNIST, Sent140 respectively. These clients are active at some point in training the global model and we call them **'seen clients'**. We also sample the 'unseen clients' from the same distribution from which we generate the seen clients, with 619 clients for Sent140, 100 clients for FMNIST, and 500 for EMNIST. These unseen clients represent new incoming clients that have not been seen before during the training rounds of FL to evaluate the generalization performance at inference. Further details of the experimental settings are deferred to Appendix D.1. Baselines. We compare MaxFL with numerous well-known FL algorithms such as standard FedAvg (McMahan et al., 2017); FedProx (Sahu et al., 2020) which aims to tackle data heterogeneity; SCAFFOLD (Karimireddy et al., 2019) which aims for variance-reduction; PerFedAvg (Fallah et al., 2020), pFedme (Dinh et al., 2020) which facilitates personalization; MW-Fed (Blum et al., 2021) which incentivizes client participation; and qFFL which facilitates fairness (Li et al., 2019). For all algorithms, we set ρk to be the same, i.e., ρk = Fk(wb k), where wb k is obtained by running a few warm-up local SGD steps on client k's data as outlined in Section 3 to ensure a fair comparison across baselines. We perform grid search for hyperparameter tuning for all baselines and choose the best performing ones. Evaluation Metrics: GM-Appeal, Average Test Accuracy, and Preferred-model Test Accuracy. We evaluate MaxFL and other methods with three key metrics: 1) GM-Appeal, defined in (1), 2) average test accuracy (avg. test acc.) across clients, and a new metric that we propose called 3) preferred-model test accuracy. Preferred-model test accuracy is the average of the clients' test accuracies computed on either the global model w or their solo-trained local model wb k, whichever one satisfies the client's requirement. We belive that average test accuracy is a more server-oriented metric as it assumes that clients will use the global model by default. On the other hand, preferred-model test accuracy is a more client-centric metric that allows clients to select the model which works best, thereby better reflecting their actual satisfaction. | | | Seen Clients | | | Unseen Clients | | | | |-----------------------|-----------------------------------------------------------------------------------------------------|-------------------------------|--------------|-------------|-------------------------------|-------------|--------------|-------------| | | FMNIST | | EMNIST | | FMNIST | EMNIST | | | | | Test Acc. | GM-Appeal Test Acc. GM-Appeal | | Test Acc. | GM-Appeal Test Acc. GM-Appeal | | | | | FedAvg | 43.70(±0.02) | 0.04(±0.0) | 35.15(±0.51) | 0.02(±0.01) | 43.14(±0.23) | 0.07(±0.01) | 37.14(±0.10) | 0.06(±0.0) | | FedProx | 44.59(±1.94) | 0.05(±0.01) | 34.06(±1.21) | 0.004(±0.0) | 43.80(±1.67) | 0.07(±0.01) | 36.82(±0.22) | 0.008(±0.0) | | Scaffold | 39.90(±0.59) | 0.0(±0.0) | 34.78(±2.05) | 0.0(±0.0) | 39.24(±0.68) | 0.01(±0.0) | 34.19(±1.25) | 0.004(±0.0) | | PerFedAvg 46.62(±1.0) | | 0.05(±0.0) | 34.78(±1.05) | 0.003(±0.0) | 46.00(±0.87) | 0.07(±0.0) | 36.92(±0.51) | 0.008(±0.0) | | pFedme | 31.06 (±2.06) | 0.0 (±0.0) | 9.78 (±2.13) | 0.0 (±0.0) | 20.11 (±3.4) | 0.0 (±0.0) | 7.05 (±1.03) | 0.0 (±0.0) | | qFFL | 29.92(±3.13) | 0.0(±0.0) | 15.95(±3.02) | 0.0(±0.0) | 19.63(±2.17) | 0.0(±0.0) | 5.41(±0.52) | 0.0(±0.0) | | MW-Fed | 44.41(±2.38) | 0.04(±0.0) | 30.44(±3.07) | 0.01(±0.0) | 43.46(±2.15) | 0.06(±0.0) | 36.54(±0.40) | 0.01(±0.0) | | MaxFL | 70.86(±2.18) 0.37(±0.05) 57.34(±1.41) 0.25(±0.03) 74.53(±0.50) 0.39(±0.07) 55.62(±0.86) 0.31(±0.03) | | | | | | | | Table 1: Avg. test accuracy and GM-Appeal where we train for 200 communication rounds. At the 10th communication round, we let clients flexibly opt-out or opt-in depending on whether the global model has met their requirements. We report the final avg. test accuracy and GM-Appeal at the 200th comm. round. ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 4: GM-Appeal and preferred-model test accuracy for the seen clients are significantly higher for MaxFL. Therefore, clients can also benefit from choosing either the local or global model for best performance, while the server also gains a large number of clients to select from. Note that the preferred-model test accuracy is a complementary novel metric we propose to gauge the satisfaction of the clients when they can choose between the global and local models. For instance, if we have a high preferred-model test accuracy but a low GM-Appeal where all clients are not interested in the global model, then the primary objective of MaxFL has not been achieved. However, in our results we show that in fact MaxFL has the complementary effect of not only being able to maximize the GM-Appeal, but also have higher preferred-model test accuracy. This is because MaxFL is able to find the global model that can perform better than a prefixed threshold for as many clients as possible, and therefore compared to the other global models trained from different baselines, the test accuracy for those clients who chose the global model is higher. ## 5.1 Experiment Results Average Test Accuracy of Seen Clients & Unseen Clients. We first show that we improve the GM-Appeal and thus the average test accuracy performance for the 'seen clients' used during the training of the global model. In Table 1, we show the average test accuracy across clients where we let clients flexibly join or drop-out depending on whether the global model is appealing after 5% of communication rounds of mandatory participation. We show that MaxFL achieves the highest GM-Appeal than other baselines for both FMNIST and EMNIST by 0.32-0.39 and 0.23-0.31 improvement, respectively. Since MaxFL is able to retain a larger pool of clients due to having a higher GM-Appeal, it therefore trains from selecting from a more larger client pool, leading to the highest average test accuracy compared to the baselines by 22-40% and 18-50% improvement respectively for the seen and unseen clients. Since the other baselines do not consider the notion of GM-Appeal entirely, it fails in preventing client dropouts leading to poor performance. Note that we do not use any of the 'unseen clients' during training and only calculate the GM-Appeal and test accuracy via inference with the global model trained with the 'seen clients'. Preferred-model Test Accuracy: Clients' Perspective. Recall that a high preferredmodel test accuracy implies that the client has a higher chance of satisfying its requirement by choosing between the global or solo-trained local model, whichever performs better. In Fig. 4 we show that as the GM-Appeal increases across the communication round, preferredmodel test accuracy also increases. MaxFL achieves the highest final GM-Appeal and preferred-model test accuracy indicating that it provides a win-win case for both the server and clients, since the clients have the highest accuracy by choosing the better model between the global model w and the local model wb k, and the server has the highest fraction of participating clients. Similarly, in Table 2, MaxFL achieves the highest GM-Appeal and preferred-model test accuracy. Although the preferred-model test accuracy improvement compared to the other baselines may appear small, showing that MaxFL is able to maintain a high preferred-model test accuracy while also achieving a high GM-Appeal implies that it does not sabotage the benefit of clients while also bringing the server more clients to select from. Table 2: GM-Appeal and preferred-model test accuracy of the final global models for the unseen clients. MaxFL improves the GM-Appeal by at least 47% for FMNIST, and 6% for Sent140 and achieves the same or higher preferredmodel test accuracy. | GM-Appeal | Preferred-Model Test Acc. | | | |-------------------------------------------------------|--------------------------------------|--------------|---------| | FMNIST | Sent140 | FMNIST | Sent140 | | FedAvg | 0.08(±0.01) 0.37(±0.07) 98.53(±0.13) | 57.05(±1.44) | | | FedProx 0.07(±0.01) 0.37(±0.07) 98.43(±0.21) | 57.07(±1.42) | | | | Scaffold 0.02(±0.01) 0.03(±0.05) 98.26(±0.20) | 51.59(±0.11) | | | | MW-Fed0.05(±0.04) 0.17(±0.03) 98.32(±0.13) | 55.57(±1.28) | | | | MaxFL 0.55(±0.0) 0.43(±0.05)98.83(±0.06) 57.16(±1.35) | | | | | Seen Clients | Unseen Clients | | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------|--------|---------| | FMNIST | Sent140 | FMNIST | Sent140 | | FedAvg | 0.38(±0.06) 0.25(±0.09) 0.39(±0.06) 0.42(±0.06) | | | | FedProx | 0.40(±0.07) 0.26(±0.09) 0.41(±0.07) 0.43(±0.12) | | | | Scaffold | 0.02(±0.02) 0.16(±0.22) 0.03(±0.02) 0.07(±0.01) | | | | PerFedAvg 0.45(±0.05) 0.24(±0.10) 0.46(±0.06) 0.47(±0.06) MW-Fed 0.28(±0.07) 0.08(±0.01) 0.39(±0.04) 0.20(±0.01) MaxFL 0.55(±0.01)0.36(±0.05)0.56(±0.01)0.55(±0.01) | | | | Local Tuning for Personalization. Personalized FL methods can be used to fine-tune the global model at each client before comparing it with the client's locally trained model. MaxFL can be combined with these methods by simply allowing clients to perform some finetuning iterations before computing the aggregation weights in Step 7 of Algorithm 1. Both for clients that are active during training and unseen test clients, we show in Table 3 that MaxFL increases the GM-Appeal by at least 10% compared to all baselines. For FMNIST and Sent140, the improvement in GM-Appeal over other methods is up to 27%, 28% respectively for active clients and 17%, 4% respectively for unseen clients. Table 3: GM-Appeal of locally-tuned models with 5 local steps from the final global models for seen clients and unseen clients. Both for clients that are active during training and unseen test clients, MaxFL increases the fraction of clients that find the global model appealing by at least 10% as compared to all baselines. ## 6 Limitations And Concluding Remarks In this work we explore the notion of global model appeal by proposing to train a global model that maximizes the number of clients whose local requirements are satisfied. We show that when participating clients drop out or clients do not join due to a low global model appeal, the test accuracy for the current clients and generalization performance to the new unseen clients can suffer significantly. Through extensive experiments as well as theoretical insights and guarantees, we show that MaxFL can help to retain clients for training and thus achieve a high average test accuracy across both participating clients and new incoming clients. We note that our proposed metric GM-Appeal and MaxFL objective have some limitations. For instance, it is possible to train a global model that sacrifices the performance of a few clients to maximize GM-Appeal, potentially reducing fairness. However, we expect that MaxFL could potentially be altered by modulating the ρk value to cover such limitations such as setting ρk to a constant that retains fairness. Another limitation in MaxFL is that it does not currently consider specific incentive mechanisms for various settings such as what cost we can set for the new incoming clients that wants to use the global model at inference without participating in training. Without setting this cost, one may raise the concern of free-rider problems. Nevertheless, our work presents a first step towards maximizing the set of clients that are interested in the global model; how to design the cost of using a global model would be an interesting, orthogonal direction of future work. As a similar notion of global model appeal has not been thoroughly examined previously, we hope our work can open up new research directions in understanding the role played by the server to prevent client dropout and recruit new clients by finding a global model that serves as many clients as possible. ## Broader Impact Statement Our work proposes a new objective to maximize global model appeal in federated learning, which can incentivize more clients to participate and prevent clients from dropping out, and improve generalization performance on unseen clients. Despite these benefits, it is worth acknowledging the potential negative effects of the proposed objective and algorithm in terms of other metrics, such as unfairness (e.g., increased performance gap) between different sub-populations. Here, we note that (1) Our MaxFL framework is general in the sense that it could be adjusted to trade off fairness/utility by setting the local requirement parameters ρk appropriately for specific sub-populations, arriving at a fairer model that reduces the number of clients suffering from inferior performance; and (2) Our results demonstrate that MaxFL is in fact more robust to specific Byzantine clients that adversarially send high GM-Appeal gap than competitors due to its dynamic reweighting scheme (see Table 4). In general, however, it remains critical to carefully consider trade-offs between issues such as model appeal, fairness, and robustness for the application at hand. The goal of this work is to explore the implications of MaxFL both theoretically and empirically so that we can understand various benefits and limitations of GM-Appeal in different scenarios. We hope practitioners and researchers can thus appropriately adjust the framework and/or combine it with other learning schemes depending on the application of interest, weighing potential benefits of the approach relative to existing methods in terms of achieving higher accuracy, broader participation, and increased fairness or robustness. ## References I. Bistritz, A. J. Mann, and N. Bambos. Distributed distillation for on-device learning. In *Advances in Neural* Information Processing Systems, 2020. Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, and Han Shao. One for one, or all for all: Equilibria and optimality of collaboration in federated learning. In *International Conference on Machine Learning*, 2021. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: an extension of MNIST to handwritten letters. *arXiv preprint arXiv:1702.05373*, 2017. Canh T. Dinh, Nguten H. Tran, and Tuan Dung Nguyen. Personalized federated learning with moreau envelopes. In *Advances in Neural Information Processing Systems*, 2020. Kate Donahue and Jon Kleinberg. Model-sharing games: Analyzing federated learning under voluntary participation. In *The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)*, 2021a. Kate Donahue and Jon Kleinberg. Optimality and stability in federated learning: A game-theoretic approach. In *Advances in Neural Information Processing Systems*, 2021b. Florian E. Dorner, Nikola Konstantinov, Georgi Pashaliev, and Martin Vechev. Incentivizing honesty among competitors in collaborative learning and optimization. In *Advances in Neural Information Processing* Systems, volume 36, pp. 7659–7696, 2023. Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In *Advances in Neural Information Processing* Systems, 2020. Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 2009. URL https://www-cs.stanford.edu/people/alecmgo/papers/ TwitterDistantSupervision09.pdf. Xinran Gu, Kaixuan Huang, Jingzhao Zhang, and Longbo Huang. Fast federated learning in the presence of arbitrary device unavailability. *Advances in Neural Information Processing Systems*, 34:12052–12064, 2021. Jingoo Han, Ahmad Faraz Khan, Syed Zawad, Ali Anwar, Nathalie Baracaldo Angel, Yi Zhou, Feng Yan, and Ali R. Butt. Tokenized incentive for federated learning. In *Proceedings of the Federated Learning Workshop* at the Association for the Advancement of Artificial Intelligence (AAAI) Conference, 2022. Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. *Nature*, 585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/ s41586-020-2649-2. Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. In International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with NeurIPS 2019 (FL-NeurIPS'19), December 2019. Shengyuan Hu, Dung Daniel Ngo, Shuran Zheng, Virginia Smith, and Zhiwei Steven Wu. Federated learning as a network effects game. *ArXiv*, February 2023. URL https://arxiv.org/abs/2302.08533. Baihe Huang, Sai Praneeth Karimireddy, and Michael I. Jordan. Evaluating and incentivizing diverse data contributions in collaborative learning, 2023. Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, and Gauri Joshi. Fedvarp: Tackling the variance due to partial client participation in federated learning. *arXiv*, 2022. Yihan Jiang, Jakub Konecny, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. *arXiv 1909.12488*, 2019. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurelien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adria Gascon, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecny, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramer, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. *arXiv preprint arXiv:1912.04977*, 2019. Jiawen Kang, Zehui Xiong, Dusit Niyato, Han Yu, Ying-Chang Liang, and Dong In Kim. Incentive design for efficient federated learning in mobile networks: A contract theory approach. In 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), pp. 1–5, 2019. doi: 10.1109/VTS-APWCS.2019.8851649. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for on-device federated learning. arXiv preprint arXiv:1910.06378, 2019. Yûsaku Komatu. Elementary inequalities for mills' ratio. Reports of Statistical Application Research (Union of Japanese Scientific Engineers), 4:69–70, 1955. Tian Li, Maziar Sanjabi, and Virginia Smith. Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497, 2019. Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In *Proceedings of the 38th International Conference on Machine Learning*, 2021. URL https://arxiv.org/abs/2012.04221. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. In *International Conference on Learning Representations (ICLR)*, July 2020. URL https://arxiv.org/abs/1907.02189. Hamed Masnadi-shirazi and Nuno Vasconcelos. On the design of loss functions for classification: theory, robustness to outliers, and savageboost. In *Advances in Neural Information Processing Systems(NIPS)*, 2008. H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agøura y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. *International Conference on* Artificial Intelligenece and Statistics (AISTATS), April 2017. URL https://arxiv.org/abs/1602.05629. H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. *International Conference on Learning Representations*, 2018. Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic federated learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 4615–4625, Long Beach, California, USA, 09–15 Jun 2019. PMLR. Tan T. Nguyen and Scott Sanner. Algorithms for direct 0–1 loss optimization in binary classification. In Proceedings of the 30th International Conference on Machine Learning, volume 28, 2013. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In *Empirical Methods in Natural Language Processing (EMNLP)*, pp. 1532–1543, 2014. URL http://www.aclweb.org/anthology/D14-1162. Erlend S. Riis, Matthias J. Ehrhardt, G. R. W. Quispel, and Carola-Bibiane Schönlieb. A geometric integration approach to nonsmooth, nonconvex optimisation. *Foundations of Computational Mathematics*, 2021. Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. Federated optimization for heterogeneous networks. In *Proceedings of the 3rd MLSys Conference*, January 2020. Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Stochastic convex optimization. In *COLT*, 2009. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. Federated multi-task learning. In *Advances in Neural Information Processing Systems*, pp. 4424–4434. 2017. URL http://papers.nips. cc/paper/7029-federated-multi-task-learning.pdf. Sebastian U Stich. Local SGD converges fast and communicates little. In *International Conference on* Learning Representations (ICLR), 2019. Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. Scipy 1.0: fundamental algorithms for scientific computing in python. *Nature methods*, 17(3):261–272, 2020. Heqiang Wang and Jie Xu. Friends to help: Saving federated learning from client dropout. arXiv preprint arXiv:2205.13222, 2022. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. *preprint*, May 2020. URL https://arxiv.org/abs/ 2007.07481. Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization. arXiv preprint arXiv:2107.06917, 2021. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *https://arxiv.org/abs/1708.07747*, aug 2017. Tao Yu, Eugene Bagdasaryan, and Vitaly Shmatikov. Salvaging federated learning by local adaptation. *arXiv* preprint arXiv:2002.04758, 2020. Meng Zhang, Ermin Wei, and Randall Berry. Faithful edge federated learning: Scalability and privacy. IEEE Journal on Selected Areas in Communications, 39(12):3790–3804, 2021. doi: 10.1109/JSAC.2021.3118423. A Further Results and Details for Our Motivating Toy Example ![15_image_0.png](15_image_0.png) Figure 5: Results for the three client mean estimation; (a): case 1 when the true mean across clients are close to amongst each other where MaxFL's optimal solution is identical to that of FedAvg; (b): case 2 when the true mean across clients are all different from each other where MaxFL's optimal solution ensures that at least one of the clients will be satisfied with MaxFL's global model (unlike FedAvg); (c) case 3 when two clients' true means are close to each other while the other client has a different mean. MaxFL in this case, is able to ensure that the two clients satisfied while FedAvg is not able to make any client satisfied. Mean Estimation with 3 Clients with MaxFL. We further examine the property of MaxFL to satisfy clients with a 3 clients toy example which is an extension from what we have shown for 2 clients. Reusing the notation from the 2 client example, where θiis the true mean at client i and ˆθi ∼ N (θi, 1) is the empirical mean of a client, our analysis can be divided into the following cases for the 3 client example (also depicted in Fig. 5): - Case 1: θ1 ≈ θ2 ≈ θ3: This case captures the setting where the data at the clients is almost i.i.d. In this case, it makes sense for clients to collaborate together and therefore MaxFL's optimal solution will be the average of local empirical means (same as FedAvg). - Case 2: θ1 ̸= θ2 ̸= θ3: This case captures the setting where the data at clients is completely disparate. In this case, none of the clients benefit from collaborating and therefore MaxFL's optimal solution will be the local model of one of the clients. This ensures at least one of the clients will still be satisfied with the MaxFL global model unlike FedAvg. - Case 3: θ1 ≈ θ2 ̸= θ3: The most interesting case happens when data at two of the clients is similar but the data at the third client is different. Without loss of generality we assume that data at clients 1 and 2 is similar and client 3 is different. In this case, although client 1 and 2 benefit from federating, FedAvg is unable to leverage that due to the heterogeneity at client 3. MaxFL, on the other hand, will set the optimal solution to be the average of the local models of just client 1 and client 2. This ensures clients 1 and 2 are satisfied with the global model, thus maximizing the GM-Appeal. Simulation Details for Fig. 2 For the mean estimation simulation for Fig. 2(a), we set the true means for the two clients as θ1 = 0, θ2 = 2γG where γG ∈ [0, √20]. The simulation was perfomed using NumPy (Harris et al., 2020) and SciPy (Virtanen et al., 2020). The empirical means θb1 and θb2 are sampled from the distribution N (θ1, 1) and N (θ2, 1) respectively where the number of samples are assumed to be identical for simplicity. For local training we assume clients set their local models as their local empirical means which is analogous to clients performing a large number of local SGD steps to obtain the local minima of their empirical loss. For the global objective (standard FL, MaxFL (ReLU), MaxFL) a local minima is found using the scipy.optimize function in the SciPy package. For each γ 2 G ∈ [0, √20], the average GM-Appeal is calculated over 10000 runs for each global objective. ## B Proof For Theoretical Analysis In Section 2.2 We additionally define the following quantities $$\gamma^{2}:=\frac{\nu^{2}}{N};\quad\gamma_{G}^{2}=\left(\frac{\theta_{2}-\theta_{1}}{2}\right)^{2};\tag{6}$$ Note that the distribution of the empirical means itself follows a normal distribution following the linear additivity of independent normal random variables. $$\widehat{\theta}_{1}\sim{\mathcal N}(\theta_{1},\gamma^{2});\quad\widehat{\theta}_{2}\sim{\mathcal N}(\theta_{2},\gamma^{2})$$ θb1 ∼ N (θ1, γ2); θb2 ∼ N (θ2, γ2) (7) Lemma A.1 The expected GM-Appeal *of the standard FL model is upper bounded by* 2 exp − γ 2 G 5γ2 , where the expectation is taken over the randomness in the local datasets B1, B2. ## Proof. The standard FL model is given by, w =bθ1+bθ2 2, and therefore the expected GM-Appeal is, $$\mathbb{E}\left[\frac{\mathbb{I}\{(w-\theta_{1})^{2}<(\widehat{\theta}_{1}-\theta_{1})^{2}\}+\mathbb{I}\{(w-\theta_{2})^{2}<(\widehat{\theta}_{2}-\theta_{2})^{2}\}}{2}\right]$$ $$=\frac{1}{2}\left[\underbrace{\mathbb{P}\left((w-\theta_{1})^{2}<(\widehat{\theta}_{1}-\theta_{1})^{2}\right)}_{T_{1}}+\underbrace{\mathbb{P}\left((w-\theta_{2})^{2}<(\widehat{\theta}_{2}-\theta_{2})^{2}\right)}_{T_{2}}\right]$$ (8) $\frac{1}{2}$ (9) . $$\left(7\right)$$ Next we bound T1 and T2. T1 = P (w − θ1) 2 < (θb1 − θ1) 2(10) = P θb1 + θb2 2− θ1 !2 < (θb1 − θ1) 2 (11) = P θb2 − θb1 2 !2 + 2 θb2 − θb1 2 ! (θb1 − θ1) < 0 (12) = P θb2 − θb1 2 !2 + 2 θb2 − θb1 2 ! (θb1 − θ1) < 0 ∩ nθb2 > θb1 o + P θb2 − θb1 2 !2 + 2 θb2 − θb1 2 ! (θb1 − θ1) < 0 ∩ nθb2 ≤ θb1 o (13) = P ( θb2 − θb1 2 ! + 2(θb1 − θ1) < 0 ) ∩ nθb2 > θb1 o! + P θb2 − θb1 2 !2 + 2 θb2 − θb1 2 ! (θb1 − θ1) < 0 ∩ nθb2 ≤ θb1 o (14) $$\leq\mathbb{P}\left(\left(\frac{\widehat{\theta}_{2}-\widehat{\theta}_{1}}{2}\right)+2(\widehat{\theta}_{1}-\theta_{1})<0\right)+\mathbb{P}\left(\widehat{\theta}_{2}-\widehat{\theta}_{1}\leq0\right)$$ $$=\mathbb{P}\left(Z_{1}<0\right)+\mathbb{P}\left(Z_{2}\leq0\right)\quad\text{where}Z_{1}\sim\mathcal{N}\left(\gamma_{G},\frac{5}{2}\gamma^{2}\right),Z_{2}\sim\mathcal{N}\left(2\gamma_{G},2\gamma^{2}\right)$$ $$\leq\exp\left(-\frac{\gamma_{G}^{2}}{5\gamma^{2}}\right)+\exp\left(-\frac{\gamma_{G}^{2}}{\gamma^{2}}\right)$$ $$\leq2\exp\left(-\frac{\gamma_{G}^{2}}{5\gamma^{2}}\right)$$ 2(16) where (13) uses P (A) = P (A ∩ B) + P A ∩ B∁, (15) uses P (A ∩ B) ≤ P (A), (16) uses (7) and linear additivity of independent normal random variables, (17) uses a Chernoff bound. We can similarly bound T2 to get T2 ≤ 2 exp − γ 2 G 5γ2 . Thus the expected GM-Appeal of the standard FL model is upper bounded by 2 exp − γ 2 G 5γ2 . Lemma A.2 Let h be any function that is convex, twice differentiable, and strictly increasing in [0, ∞). Then our relaxed objective is strictly convex and has a unique minimizer at w ∗ = bθ1+bθ2 2 . ## Proof. Let us denote our relaxed objective by v(w). Then v(w) can be written as, $$v(w)={\frac{1}{2}}\left[h\left(F_{1}(w)-F({\hat{w}}_{1})\right)+h\left(F_{2}(w)-F({\hat{w}}_{2})\right)\right]={\underbrace{{\frac{1}{2}}h\left((w-{\hat{\theta}}_{1})^{2}\right)}_{v_{1}(w)}}+{\underbrace{{\frac{1}{2}}h\left((w-{\hat{\theta}}_{2})^{2}\right)}_{v_{2}(w)}}$$ $$(15)$$ $$\quad(16)$$ $$(17)$$ $$(18)$$ $$\quad(19)$$ $$(20)$$ We first prove that v1(w) is strictly convex. Let λ ∈ (0, 1) and (w1, w2) be any pair of points in R 2such that w1 ̸= w2. We have, $$v_{1}(\lambda w_{1}+(1-\lambda)w_{2})=\frac{1}{2}h\left((\lambda(w_{1}-\widehat{\theta}_{1})+(1-\lambda)(w_{2}-\widehat{\theta}_{1}))^{2}\right)$$ $$<\frac{1}{2}h\left(\lambda(w_{1}-\widehat{\theta}_{1})^{2}+(1-\lambda)(w_{2}-\widehat{\theta}_{1})^{2}\right)$$ $$\leq\frac{\lambda}{2}h\left((w_{1}-\widehat{\theta}_{1})^{2}\right)+\frac{1-\lambda}{2}h\left((w_{2}-\widehat{\theta}_{1})^{2}\right)$$ $$=\lambda v_{1}(w_{1})+(1-\lambda)v_{1}(w_{2})$$ the right is a positive definite function $\widehat{\theta}_{1}$ and the fact that $h(w_{1})$ is not is the linear function. where (22) follows from the strict convexity of f(w) = w 2 and the fact that h(w) is strictly increasing in the range [0, ∞), (23) follows from the convexity of h(w). This completes the proof that v1(w) is strictly convex. We can similarly prove that v2(w) is stricly convex and hence v(w) is strictly convex since summation of strictly convex functions is strictly convex. Also note that, $$\nabla v(w)=\nabla h\left((w-\widehat{\theta}_{1})^{2}\right)(w-\widehat{\theta}_{1})+\nabla h\left((w-\widehat{\theta}_{2})^{2}\right)(w-\widehat{\theta}_{2})\tag{25}$$ $v(w)=0$ at $w=\left(\frac{\widehat{\theta}_{1}+\widehat{\theta}_{2}}{2}\right)$. Since $v(w)$ is strictly convex this implies that $w^{*}=\left(\frac{\widehat{\theta}_{1}+\widehat{\theta}_{2}}{2}\right)$ It is easy to see that ∇v(w) = 0 at w = will be a unique global minimizer. This completes the proof. ## Proof Of Theorem A.1 Before stating the proof of Theorem A.1 we first state some intermediate results that will be used in the proof. The MaxFL objective can be written as, $$v(w)=\frac{1}{2}\sigma\left((w-\widehat{\theta}_{1})^{2}\right)+\frac{1}{2}\sigma\left((w-\widehat{\theta}_{2})^{2}\right)$$ where σ(w) = 1/(1 + exp(−w)). We additionally define the following quantities, $$i:=\operatorname{argmin}\left\{{\widehat{\theta}}_{1},{\widehat{\theta}}_{2}\right\};\;\;j:=\operatorname{argmax}\left\{{\widehat{\theta}}_{1},{\widehat{\theta}}_{2}\right\};\;\;{\widehat{\gamma}}_{G}:={\frac{{\widehat{\theta}}_{j}-{\widehat{\theta}}_{i}}{2}}$$ Let q(w) = σ(w)(1 − σ(w)). The gradient of v(w) is given as, $$\nabla v(w)=q\left((w-{\widehat{\theta}}_{1})^{2}\right)(w-{\widehat{\theta}}_{1})+q\left((w-{\widehat{\theta}}_{2})^{2}\right)(w-{\widehat{\theta}}_{2})$$ Lemma A.3 For γbG > 2, w = bθ1+bθ2 2 will be a local maxima of the MaxFL *objective.* It is easy to see that w = bθ1+bθ2 2 will always be a stationary point of ∇v(w). Our goal is to determine whether it will be a local minima or a local maxima. To do so, we calculate the hessian of v(w) as follows. Let f(w) = 2σ(w)(1 − σ(w))(1 − 2σ(w)). Then, $$\nabla^{2}v(w)=\underbrace{f\left((w-\widehat{\theta}_{1})^{2}\right)(w-\widehat{\theta}_{1})^{2}+q\left((w-\widehat{\theta}_{1})^{2}\right)}_{h_{1}(w)}+\underbrace{f\left((w-\widehat{\theta}_{2})^{2}\right)(w-\widehat{\theta}_{2})^{2}+q\left((w-\widehat{\theta}_{2})^{2}\right)}_{h_{2}(w)}$$ $$(26)$$ $$(27)$$ $$(28)$$ $$(29)$$ Note that h1(w) = h2(w) for w = bθ1+bθ2 2 . Hence it suffices to focus on the condition for which h1(w) < 0 at w = bθ1+bθ2 2 . We have, $$h_{1}\left((\widehat{\theta}_{1}+\widehat{\theta}_{2})/2\right)=f(\widehat{\gamma}_{G}^{2})\widehat{\gamma}_{G}^{2}+q(\widehat{\gamma}_{G}^{2})$$ $$=q(\widehat{\gamma}_{G}^{2})(2(1-2\sigma(\widehat{\gamma}_{G}^{2}))\widehat{\gamma}_{G}^{2}+1)$$ $$<0\quad\mbox{for$\widehat{\gamma}_{G}\geq1.022$}$$ where the last inequality follows from the fact that q(w) > 0 for all w ∈ R and 2(1 − 2σ(w 2))w 2 + 1 < 0 for w ≥ 1.022. Thus for γbG > 2, w = bθ1+bθ2 2 will be a local maxima of the MaxFL objective. Lemma A.4 For γbG > 0, any local minima of v(w) *lies in the range* (θbi, θbi + 2] ∪ [θbj − 2, θbj ). Firstly note that since γbG > 0 we have θbj > θbi. Secondly note that since q(w) > 0 for all w ∈ R, ∇v(w) < 0 for all w ≤ θbi and ∇v(w) > 0 for all w ≥ θbj . Therefore any root of the function ∇v(w) must lie in the range (θbi, θbj ). Case 1: 0 < γbG ≤ 2. In this case, the lemma is trivially satisified since (θbi, θbj ) ⊂ $$({\widehat{\theta_{i}}},{\widehat{\theta_{i}}}+2]\cup[{\widehat{\theta_{j}}}-2,{\widehat{\theta_{j}}})\Big\}.$$ Case 2: γbG > 2. Let x = w − θbi and g(x) = q(x 2)x. We can write ∇v(w) as, ∇v(θbi + x) = g(x) − g(2γbG − x). It can be seen that for x > 2, g(x) is a decreasing function. For x ∈ (2, γbG) we have x > 2γbG − x which implies g(x) > g(2γbG − x). Therefore ∇v(θbi + x) > 0 for x ∈ (2, γbG). Also ∇v(θbi + 2γbG − x) = −∇v(θbi + x) and therefore ∇v(θbi + x) < 0 for x ∈ (γbG, 2γbG − 2). ∇v(θbi + γbG) = 0 but this will be a local maxima for γbG > 2 as shown in Lemma A.3. Thus there exists no local minima of v(w) for w ∈ (θbi + 2, θbj − 2) Combining both cases we see that any local minima of v(w) lies in the range n(θbi, θbi + 2] ∪ [θbj − 2, θbj ) o. Theorem A.1 Let w be a local minima of the MaxFL objective. The expected GM-Appeal using w *is lower* bounded by 1 16 exp − 1 γ2 *where the expectation is over the randomness in the local dataset* B1, B2. ## Proof. The GM-Appeal can be written as, $$\frac{1}{2}\left[\mathbb{P}\left((w-\theta_{i})^{2}<(\widehat{\theta}_{i}-\theta_{i})^{2}\right)+\mathbb{P}\left((w-\theta_{j})^{2}<(\widehat{\theta}_{j}-\theta_{j})^{2}\right)\right]$$ We focus on the case where θb2 ≠ θbiimplying θbj > θbi (θb2 = θb1 is a zero-probability event and does not affect our proof). Let w be any local minima of the MaxFL objective. From Lemma A.4 we know that w will lie in the range (θbi, θbi + 2] ∪ [θbj − 2, θbj ) Case 1: w ∈ (θbi, θbi + 2] P (w − θi) 2 < (θbi − θi) 2= P (w − θbi) 2 + 2(w − θbi)(θbi − θi) < 0 (34) = P (w − θbi) + 2(θbi − θi) < 0 (35) ≥ P 2 + 2(θbi − θi) < 0 (36) = P (θbi − θi) < −1 (37) ≥ P nθb1 < θb2 o∩ n(θb1 − θ1) < −1 o (38) = P θb1 < θb2 P θb1 − θ1 < −1|θb1 < θb2 (39) ≥ P θb1 < θb2 P θb1 − θ1 < −1 (40) = P θb1 < θb2 P (Z > 1/γ) where Z ∼ N (0, 1) (41) ≥ 1 8 exp − 1 γ 2 (42) $$(33)$$ $$(39)$$ $$\mathbf{a}^{(40)}$$ $$(41)$$ (35) uses the fact that (w − θbi) > 0, (36) uses (w − θbi) ≤ 2, (38) uses P (A) ≥ P (A ∩ B) and definition of i. (40) uses the following argument. If θ1 − 1 ≥ θb2 then P θb1 − θ1 < −1|θb1 < θb2 = 1. If θ1 − 1 < θb2 then P θb1 − θ1 < −1|θb1 < θb2 = P θb1 − θ1 < −1 /P θb1 < θb2 ≥ P θb1 − θ1 < −1 . (41) uses θb1 − θ1 ∼ N (0, γ2), (42) uses P θb1 < θb2 ≥ 1 2 and P (Z ≥ x) ≥2 exp(−x 2/2) √2π( √4+x2+x) ≥ 1 4 exp(−x 2) where Z ∼ N (0, 1) (Komatu, 1955). $$(42)$$ In the case where w ∈ (θbj − 2, θbj ] a similar technique can be used to lower bound P (w − θj ) 2 < (θbj − θj ) 2. Thus the GM-Appeal of any local minima of the MaxFL objective is lower bounded by 1 16 exp − 1 γ2 . ## C Convergence Proof C.1 Preliminaries First, we introduce the key lemmas used for the convergence analysis. Lemma C.1 (Bounded Dissimilarity for Fe(w)). *With Assumption 3.1 and Assumption 3.3 we have the* bounded dissimilarity with respect to Fe(w) as: $$\frac{1}{M}\sum_{i=1}^M\|\nabla\widehat{F}_i(\mathbf{w})\|^2\leq\beta'^2\|\nabla\widehat{F}(\mathbf{w})\|^2+\kappa'^2\tag{1}$$ 2. where β ′2 = 2β 2, κ′2 = 4β 2L 2 c + κ 2 Proof. One can easily show that $$\frac{1}{M}\sum_{i=1}^{M}\|\nabla\tilde{F}_{i}(\mathbf{w})\|^{2}=\frac{1}{M}\sum_{i=1}^{M}q_{i}(\mathbf{w})^{2}\|\nabla F_{i}(\mathbf{w})\|^{2}\leq\frac{1}{M}\sum_{i=1}^{M}\|\nabla F_{i}(\mathbf{w})\|^{2}$$ $$(43)$$ $$(44)$$ (45) $\binom{46}{47}$ (47) ... due to qi(w) ≤ 1. Hence we have from Assumption 3.3 and Cauchy-Schwarz inequality that $$\frac{1}{M}\sum_{i=1}^{M}\|\nabla\widetilde{F}_{i}(\mathbf{w})\|^{2}\leq\frac{1}{M}\sum_{i=1}^{M}\|\nabla F_{i}(\mathbf{w})\|^{2}$$ $$\leq\beta^{2}\|\nabla F(\mathbf{w})-\nabla\widetilde{F}(\mathbf{w})+\nabla\widetilde{F}(\mathbf{w})\|^{2}+\kappa^{2}$$ $$\leq2\beta^{2}\|\nabla F(\mathbf{w})-\nabla\widetilde{F}(\mathbf{w})\|^{2}+2\beta^{2}\|\nabla\widetilde{F}(\mathbf{w})\|^{2}+\kappa^{2}$$ We bound the first term in (47) as $$\|\nabla F(\mathbf{w})-\nabla\widetilde{F}(\mathbf{w})\|^{2}=\left\|\sum_{i=1}^{M}\frac{(1-q_{i}(\mathbf{w}))}{M}\nabla F_{i}(\mathbf{w})\right\|^{2}$$ $$\leq\frac{1}{M}\sum_{i=1}^{M}\|(1-q_{i}(\mathbf{w}))\nabla F_{i}(\mathbf{w})\|^{2}$$ $$\leq\frac{2}{M}\sum_{i=1}^{M}\|\nabla F_{i}(\mathbf{w})\|^{2}\leq2L_{c}^{2}$$ (48) $$\begin{array}{l}\left(49\right)\end{array}$$ = $$\begin{array}{l}\left(50\right)\end{array}$$ . $$(51)$$ where in (50) we use qi(w) ≤ 1, ∀i ∈ [M] and Assumption 3.1. Then from (47) we have $$\frac{1}{M}\sum_{i=1}^{M}\|\nabla\tilde{F}_{i}({\bf w})\|^{2}\leq2\beta^{2}\|\nabla\tilde{F}({\bf w})\|^{2}+\kappa^{2}+4\beta^{2}L_{c}^{2}\tag{1}$$ completing the proof. Lemma C.2 (Smoothness of Fe(w)). *If Assumption 3.1 is satisfied we have that the local objectives,* Fe1(w), ... , FeM(w), are also Les-smooth for any w *where* Les = L 2 c /4 + qi(w)Ls. Proof. Recall the definitions of Fe(w) below: $$\widetilde{F}({\bf w})=\frac{1}{M}\sum_{i=1}^{M}\widetilde{F}_{i}({\bf w}),\ \widetilde{F}_{i}({\bf w}):=\sigma(F_{i}({\bf w})-F_{i}(\widehat{\bf w}_{i}^{*}))$$ Let ∥ ∥op denote the spectral norm of a matrix. Accordingly, with the model parameter vector w ∈ R d, we have the spectral norm of the Hessian of Fei(w), ∀i ∈ [M] as: $$(53)$$ $$\|\nabla^{2}\widetilde{F}_{i}({\bf w})\|_{op}$$ $$=\|q_{i}({\bf w})[(\nabla F_{i}({\bf w})\nabla F_{i}({\bf w})^{T})(1-q_{i}({\bf w}))+\nabla^{2}F_{i}({\bf w})]\|_{op}$$ where qi(w) = Sigmoid(Fi(w) − Fi(wb ∗ i )) and ∇Fi(w) ∈ R d×1is the gradient vector for the local objective Fi(w) and ∇2Fi(w) ∈ R d×dis the Hessian of Fi(w). We can bound the RHS of (53) as follows ∥∇2Fei(w)∥op = ∥qi(w)(1 − qi(w))(∇Fi(w)∇Fi(w) T) + qi(w)∇2Fi(w)]∥op (54) ≤ ∥qi(w)(1 − qi(w))(∇Fi(w)∇Fi(w) T)∥op + ∥qi(w)∇2Fi(w)∥op (55) = qi(w)(1 − qi(w))∥(∇Fi(w)∇Fi(w) T)∥op + qi(w)∥∇2Fi(w)∥op (56) = qi(w)(1 − qi(w))∥∇Fi(w)∥ 2 + qi(w)∥∇2Fi(w)∥op (57) ≤ L 2 c 4 + qi(w)Ls (58) $$(52)$$ $$\begin{array}{l}{(54)}\\ {(55)}\end{array}$$ $$(56)$$ $$\left(57\right)$$ $$(58)$$ where we use triangle inequality in (55), and use ∥xyT ∥op = ∥x∥∥y∥ in (57), and use qi(w) ≤ 1 along with Assumption 3.1 in (58). Since the norm of the Hessian of Fei(w) is bounded by L 2 c 4 + qi(w)Ls we complete the proof. ## C.2 Proof Of Theorem 3.1 - Full Client Participation Theorem C.1 (Convergence to the MaxFL Objective Fe(w) for Full Client Participation). Under Assumption 3.1-3.3, suppose all M *clients participate for each communication round of Algorithm 1. With* ηl = √ 1 T τ , ηg = √τM, for total communication rounds T of the MaxFL *solver in Algorithm 1 we have:* $$\min_{t\in[T]}\mathbb{E}\left[\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\right]\leq\frac{(4L_{s}+L_{c})\left(\widetilde{F}(\mathbf{w}^{(0,0)})-\widetilde{F}_{\text{inf}}\right)}{L_{s}\sqrt{T}M\tau}+\frac{64L_{s}^{2}\mathbf{z}^{\prime2}(4L_{s}+L_{c})}{L_{c}T}$$ $$+\frac{4L_{s}^{2}\sigma_{d}^{2}(4L_{s}+L_{c})}{T\tau L_{c}}+\frac{64L_{s}\sigma_{d}^{2}(L_{s}+L_{c}/4)^{2}}{\sqrt{T}M\tau}$$ $$\quad(59)$$ $$(60)$$ $$(61)$$ For ease of writing, we define the following auxiliary variables for any client i ∈ [M]: where ϵ is a constant added to the denominator to prevent the denominator from being 0. From Algorithm 1 with full client participation, our proposed algorithm has the following effective update rule for the global model at the server: $${\bf w}^{(t+1,0)}={\bf w}^{(t,0)}-\eta_{g}^{(t,0)}\eta_{l}\sum_{k=1}^{M}{\bf h}_{k}^{(t,0)}\,$$ Weighted Stochastic Gradient: $\mathbf{h}_i^{(t,0)}:=q_i(\mathbf{w}^{(t,0)})\sum_{r=0}^{\tau-1}\mathbf{g}(\mathbf{w}_i^{(t,r)},\xi_i^{(t,r)})$, $$\text{Weighted Gradient:}\ \overline{\mathbf{h}}_i^{(t,0)}:=q_i(\mathbf{w}^{(t,0)})\sum_{r=0}^{\tau-1}\nabla F_i(\mathbf{w}_i^{(t,r)}),$$ $$\text{Normalized Global Learning Rate:}\ \eta_g^{(t,0)}:=\eta_g/\left(\sum_{i=1}^M q_i(\mathbf{w}^{(t,0)})+\epsilon\right)$$ to label the above joint counterpart the domain is of order $0$, $\mathbf{F}_1$, $\mathbf{w}_i$, $\mathbf{h}_i$. $$(63)$$ $$(62)$$ With the update rule in (63), defining ηe (t,0) := η (t,0) g ηlτM and using Lemma C.2 we have E hFe(w(t+1,0)) i− Fe(w(t,0)) ≤ −ηe (t,0)E "*∇Fe(w(t,0)),1 Mτ X M i=1 h (t,0) i +# i=1 h (t,0) i 2 + Les(ηe (t,0)) 2 2 E 1 Mτ X M = −ηe (t,0)E "*∇Fe(w(t,0)),1 Mτ X M i=1 h (t,0) i − h (t,0) i+# − ηe (t,0)E "*∇Fe(w(t,0)),1 Mτ X M i=1 h (t,0) i +# i=1 h (t,0) i 2 + Les(ηe (t,0)) 2 2 E 1 Mτ X M i=1 h (t,0) i 2 i=1 h (t,0) i 2 = − ηe (t,0) 2 ∇Fe(w(t,0)) 2 − ηe (t,0) 2 E + ηe (t,0) 2 E 1 Mτ X M ∇Fe(w(t,0)) − 1 Mτ X M + Les(ηe (t,0)) 2 2M2τ 2E i=1 h (t,0) i 2 X M (64) $$\begin{array}{l}~~~~~~~~~~~~~~~~~~~~~~\end{array}$$ (65) $$\begin{array}{l}~~~~~~~~~~~~~~~~~~~~~~\end{array}$$ (66) $$\begin{array}{l}~~~~~~~~~~~~~~~~~~~~~~\end{array}$$ (66) . For the last term in (66), we can bound it as i=1 h (t,0) i 2 i=1 h (t,0) i 2 Les(ηe (t,0)) 2 2M2τ 2E i=1 E h (t,0) i − h (t,0) i 2+ Les(ηe (t,0)) 2 M2τ 2E X M ≤ Les(ηe (t,0)) 2 M2τ 2X M X M qi(w(t,0)) τX−1 r=0 g(w (t,r) i, ξ(t,r) i) − ∇Fi(w (t,r) i) 2 i=1 h (t,0) i i=1 E + Les(ηe (t,0)) 2 M2τ 2E = Les(ηe (t,0)) 2 M2τ 2X M X M (67) $\left[\begin{matrix}2\\ \\ \\ \end{matrix}\right]$ (68) (67) i=1 h (t,0) i 2 i=1 qi(w(t,0)) 2 τX−1 r=0 E g(w (t,r) i, ξ(t,r) i) − ∇Fi(w (t,r) i) 2+ Les(ηe (t,0)) 2 M2τ 2E = Les(ηe (t,0)) 2 M2τ 2X M X M i=1 qi(w(t,0)) 2τσ2 g + Les(ηe (t,0)) 2 M2τ 2E i=1 h (t,0) i 2 X M = Les(ηe (t,0)) 2 M2τ 2X M (70) ≤ Les(ηe (t,0)) 2σ 2 g Mτ+ Les(ηe (t,0)) 2E i=1 h (t,0) i 2 1 Mτ X M (71) (69) $\binom{70}{70}$ . $$\left(71\right)$$ (69) where (67) is due to the Cauchy-Schwartz inequality and (70) is due to Assumption 3.2 and (71) is due to qi(w) ≤ 1, ∀i ∈ [M]. Merging (71) into (66) we have E hFe(w(t+1,0)) i− Fe(w(t,0)) ≤ − ηe (t,0) 2 ∇Fe(w(t,0)) 2 + ηe (t,0) 2 E i=1 h (t,0) i 2 ∇Fe(w(t,0)) − 1 Mτ X M (72) + Les(ηe (t,0)) 2σ 2 g Mτ+ (ηe (t,0)) 2Les − ηe (t,0) 2 E i=1 h (t,0) i 2 1 Mτ X M Now we aim at bounding the second term in the RHS of (72) as follows: i=1 h (t,0) i 2 ηe (t,0) 2 E ∇Fe(w(t,0)) − 1 Mτ X M (73) i=1 qi(w(t,0)) τX−1 r=0 ∇Fi(w (t,r) i) 2 = ηe (t,0) 2 E 1 M X M i=1 qi(w(t,0))∇Fi(w(t,0)) −1 Mτ X M (74) i=1 qi(w(t,0)) τX−1 r=0 ∇Fi(w(t,0)) − ∇Fi(w (t,r) i) 2 = ηe (t,0) 2 E 1 Mτ X M (75) ≤ ηe (t,0) 2Mτ X M i=1 qi(w(t,0)) 2 τX−1 r=0 E ∇Fi(w(t,0)) − ∇Fi(w (t,r) i) 2(76) = L 2 sηe (t,0) 2Mτ X M i=1 qi(w(t,0)) 2 τX−1 r=0 E w(t,0) − w (t,r) i 2(77) where (76) is due to Jensen's inequality and (77) is due to Lemma C.2. We can bound the difference of the (73) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (74) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (75) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (76) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (77) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (77) . global model and local model for any client i ∈ [M] as follows: E w(t,0) − w (t,r) i 2= η 2 l E l=0 g(w (t,l) i, ξ(t,l) i) 2 Xr−1 ≤ 2η 2 l E l=0 g(w (t,l) i, ξ(t,l) i) − ∇Fi(w (t,l) i) 2 + 2η 2 l E l=0 ∇Fi(w (t,l) i) 2 Xr−1 Xr−1 ≤ 2η 2 l σ 2 g r + 2η 2 l E l=0 ∇Fi(w (t,l) i) 2 Xr−1 (78) $$\begin{array}{l}\left(79\right)\end{array}$$ = $$\begin{array}{l}\left(80\right)\end{array}$$ . (78) (79) (81) $$\begin{array}{l}\left(82\right)\end{array}$$ = $$\begin{array}{l}\left(83\right)\end{array}$$ . (80) where (79) is due to Cauchy-Schwarz inequality and (80) is due to Assumption 3.2. We bound the last term in (80) as follows: l=0 ∇Fi(w (t,l) i) 2 E Xr−1 ≤ r Xr−1 l=0 E ∇Fi(w (t,l) i) 2≤ τ τX−1 l=0 E ∇Fi(w (t,l) i) ≤ 2τ τX−1 l=0 E ∇Fi(w (t,l) i) − ∇Fi(w(t,0)) 2+ 2τ 2E ∇Fi(w(t,0)) ≤ 2L 2 s τ τX−1 l=0 E w (t,l) i − w(t,0) 2+ 2τ 2E ∇Fi(w(t,0)) $$(84)$$ 2(81) 2(82) 2(83) where (81) is due to Jensen's inequality, and (82) is due to Cauchy-Schwarz inequality, and (83) is due to Lemma C.2. Combining (83) with (80) we have that $$\mathbb{E}\left[\left|\mathbf{w}^{(t,0)}-\mathbf{w}_{i}^{(t,r)}\right|\right|^{2}\right]\leq2\eta_{t}^{2}\sigma_{y}^{2}\tau+4L_{\eta}^{2}\eta_{t}^{2}\tau\sum_{l=0}^{r-1}\mathbb{E}\left[\left|\mathbf{w}^{(t,0)}-\mathbf{w}_{i}^{(t,l)}\right|\right|^{2}\right]+4\eta_{t}^{2}\tau^{2}\mathbb{E}\left[\left|\nabla F_{i}(\mathbf{w}^{(t,0)})\right|\right|^{2}\right]$$ which is $\mathbf{w}^{(t,0)}$ and $\mathbf{w}^{(t,0)}$ are $\mathbf{w}^{(t,0)}$ and $\mathbf{w}^{(t,0)}$. 2(84) Reorganizing (84) and taking the summation r ∈ [τ ] on both sides we have, $$(1-4L_{s}^{2}\eta_{t}^{2}\tau^{2})\sum_{r=0}^{r-1}\mathbb{E}\left[\left\|\mathbf{w}^{(t,0)}-\mathbf{w}_{i}^{(t,r)}\right\|^{2}\right]\leq2\eta_{t}^{2}\sigma_{s}^{2}\sum_{r=0}^{r-1}r+4\eta_{t}^{2}\tau^{3}\mathbb{E}\left[\left\|\nabla F_{i}(\mathbf{w}^{(t,0)}\right\|^{2}\right]\right.$$ $$\leq\eta_{t}^{2}\sigma_{\sigma}^{2}\tau^{2}+4\eta_{t}^{2}\tau^{3}\mathbb{E}\left[\left\|\nabla F_{i}(\mathbf{w}^{(t,0)}\right\|^{2}\right]\right.$$ 2(85) (85) $\binom{86}{85}$ . 2(86) With ηl ≤ 1/(2√2τLs), we have that 1/(1 − 4L 2 sη 2 l τ 2) ≤ 2 and hence can further bound (86) as $$\sum_{r=0}^{r-1}\mathbb{E}\left[\left\|\mathbf{w}^{(t,0)}-\mathbf{w}_{i}^{(t,r)}\right\|^{2}\right]\leq2\eta_{l}^{2}\sigma_{g}^{2}\tau^{2}+8\eta_{l}^{2}\tau^{3}\mathbb{E}\left[\left\|\nabla F_{i}(\mathbf{w}^{(t,0)})\right\|^{2}\right]\tag{87}$$ (87) to (77) we have Finally, plugging in (87) to (77) we have ηe (t,0) 2 E i=1 h (t,0) i 2 ∇Fe(w(t,0)) − 1 Mτ X M $$(87)$$ i=1 qi(w(t,0)) 2 2η 2 l σ 2 g τ 2 + 8η 2 l τ 3E ∇Fi(w(t,0)) 2 (88) $$\begin{array}{l}\left(89\right)\end{array}$$ = $$\begin{array}{l}\left(90\right)\end{array}$$ = $$\begin{array}{l}\left(90\right)\end{array}$$ . ≤ L 2 sηe (t,0) 2Mτ X M ≤ L 2 sηe (t,0)η 2 l σ 2 g τ + 4η 2 l τ 2L 2 sηe (t,0) 1 M X M i=1 E ∇Fi(w(t,0)) 2(89) ≤ L 2 sηe (t,0)η 2 l σ 2 g τ + 4η 2 l τ 2L 2 sηe (t,0)(β ′2∇Fe(w(t,0)) 2 + κ ′2) (90) where (89) uses qi(w) ≤ 1, ∀i ∈ [M] and (90) uses Lemma C.1. Merging (90) to (72) we have E hFe(w(t+1,0)) i− Fe(w(t,0)) i=1 h (t,0) i 2 ≤ − ηe (t,0) 2 ∇Fe(w(t,0)) 2 + ηe (t,0) ηe (t,0)Les − 1 2 E 1 Mτ X M + Les(ηe (t,0)) 2σ 2 g Mτ+ ηe (t,0)L 2 sη 2 l σ 2 g τ + 4ηe (t,0)η 2 l τ 2L 2 sβ ′2∇Fe(w(t,0) 2 + 4ηe (t,0)η 2 l τ 2L 2 sκ ′2 With ηlηg ≤ 1/(4τLs) we have that ηe (t,0)Les − 1 2 ≤ −1/4 and thus can further simplify (91) to E hFe(w(t+1,0)) i− Fe(w(t,0)) ≤ − ηe (t,0) 2 ∇Fe(w(t,0)) 2 + 4ηe (t,0)η 2 l τ 2L 2 sβ ′2∇Fe(w(t,0)) 2 + Les(ηe (t,0)) 2σ 2 g Mτ+ ηe (t,0)L 2 sη 2 l σ 2 g τ + 4ηe (t,0)η 2 l τ 2L 2 sκ ′2 (92) $\begin{array}{l}\left(\text{93}\right)^{}\end{array}$ . = ηe (t,0) 4η 2 l τ 2L 2 sβ ′ − 1 2 ∇Fe(w(t,0)) 2 + Les(ηe (t,0)) 2σ 2 g Mτ+ ηe (t,0)L 2 sη 2 l σ 2 g τ + 4ηe (t,0)η 2 l τ 2L 2 sκ ′2(93) With local learning rate ηl ≤ min{1/(4τLs), 1/(4β ′τLs)} we have that E hFe(w(t+1,0)) i− Fe(w(t,0)) ≤ − ηe (t,0) 4 ∇Fe(w(t,0)) 2 + Les(ηe (t,0)) 2σ 2 g Mτ+ ηe (t,0)L 2 sη 2 l σ 2 g τ +4ηe (t,0)η 2 l τ 2L 2 sκ ′2 $$(94)$$ and we use the property of ηe property of $\widetilde{\eta}^{(t,0)}$ that $\frac{\omega_1\eta_0\eta_0}{M+\epsilon}\leq\widetilde{\eta}^{(t,0)}\leq\frac{\omega_1\eta_0\eta_0}{\epsilon}$ to get $$\begin{aligned} \mathbb{E}\left[\widetilde{F}(\mathbf{w}^{(t+1,0)})\right]-\widetilde{F}(\mathbf{w}^{(t,0)}) &\leq -\frac{M\tau\eta\eta_g}{4(M+\epsilon)}\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^2+\frac{\widetilde{L}_s M\tau\eta_t^2\eta_g^2\sigma_g^2}{\epsilon^2}\nonumber\\ &+\frac{M\tau^2L_s^2\eta_t^3\eta_g\sigma_g^2}{\epsilon}+\frac{4M\eta_t^3\eta_{\theta}\tau^3L_s^2\kappa^{\prime2}}{\epsilon} \end{aligned}$$ age across all rounds on both sides of (95) we get M τηlηg $$\quad(91)$$ $$\left(95\right)$$ Noting across an volume of both sides of (80) we get $$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\|^2\right]\leq\frac{4(M+\epsilon)\left(\widetilde{F}(\mathbf{w}^{(0,0)})-\widetilde{F}_{\text{int}}\right)}{M\tau\eta_\eta T}+\frac{16\eta_t^2\tau^2L_\epsilon^2\kappa^\prime2(M+\epsilon)}{\epsilon}\tag{1}$$ $$+\frac{4L_\epsilon^2\eta_t^2\tau\sigma_\sigma^2(M+\epsilon)}{\epsilon}+\frac{4\eta_\eta\eta\widetilde{L}_\sigma\sigma_\sigma^2(M+\epsilon)}{\epsilon^2}$$ $$(96)$$ and prove min t∈[T] E ∇Fe(w(t,0)) 2≤ 1 T T X−1 t=0 E h∥∇Fe(w(t,0))∥ 2i≤ 4(M + ϵ) Fe(w(0,0)) − Feinf MτηlηgT + 16η 2 l τ 2L 2 sκ ′2(M + ϵ) ϵ+ 4L 2 sη 2 l τσ2 g (M + ϵ) ϵ+ 4ηgηlLesσ 2 g (M + ϵ) ϵ 2 $$(97)$$ Further, using L˜s = Ls M PM k=1 qk(w) + Lc 4and ϵ = MLc 4Ls > 0 from the optimal learning rate we have the bound in (97) to be $$\begin{array}{c}{{\min_{t\in[\overline{{{r}}}]}\mathbb{E}\left[\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\right]\leq\frac{(4L_{s}+L_{c})\left(\widetilde{F}(\mathbf{w}^{(0,0)})-\widetilde{F}_{\mathrm{inf}}\right)}{L_{s}\pi\eta_{g}T}+\frac{64\eta_{l}^{2}\tau^{2}L_{s}^{2}\kappa^{\prime2}(4L_{s}+L_{c})}{L_{c}}}}\\ {{+\frac{4L_{s}^{2}\eta_{l}^{2}\tau\sigma_{g}^{2}(4L_{s}+L_{c})}{L_{c}}+\frac{64L_{s}\eta_{g}\eta\sigma_{g}^{2}(L_{s}+L_{c}/4)^{2}}{M L_{c}^{2}}}}\end{array}$$ $$(98)$$ By setting the global and local learning rate as ηg = √τM and ηl = √ 1 T τ we can further optimize the bound as $$\begin{split}\min_{t\in[T]}\mathbb{E}\left[\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\right]&\leq\frac{(4L_{s}+L_{c})\left(\widetilde{F}(\mathbf{w}^{(0,0)})-\widetilde{F}_{\text{inf}}\right)}{L_{s}\sqrt{T M\tau}}+\frac{64L_{s}^{2}\kappa^{\prime2}(4L_{s}+L_{c})}{L_{c}T}\\ &\quad+\frac{4L_{s}^{2}\sigma_{g}^{2}(4L_{s}+L_{c})}{T\tau L_{c}}+\frac{64L_{s}\sigma_{g}^{2}(L_{s}+L_{c}/4)^{2}}{\sqrt{T M\tau}}\end{split}$$ As full client participation proof of Theorem 3.1. $$(99)$$ completing the full client participation proof of Theorem 3.1. ## C.3 Proof Of Theorem 3.1 - Partial Client Participation Theorem C.2 (Convergence to the MaxFL Objective Fe(w) for Partial Client Participation). Under Assumption 3.1-3.3, suppose m *clients are selected by the server uniformly at random without replacement out* of M *total clients for participation for each communication round of Algorithm 1. With* ηl = √ 1 for total communication rounds $T$ of the MaxFL solver in Algorithm 1 we have:_ $$\min_{t\in[T]}\mathbb{E}\left[\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\right]\leq\frac{4\left(\widetilde{F}(\mathbf{w}^{(0,0)})-\widetilde{E}_{\mathrm{in}}\right)+4\sigma_{\omega}^{2}\nu}{\sqrt{T\pi}}+\frac{4\sigma_{\omega}^{2}L_{\tau}^{2}}{\sqrt{T}}+\frac{8\sigma_{\omega}^{2}L_{\tau}^{2}}{3\sqrt{T}}$$ $$+\frac{80L_{\tau}^{2}\kappa^{2}}{T}+\frac{48\sigma(M-m)E_{\tau}^{2}\sqrt{\tau}}{\sqrt{T}m}$$ , ηg = √τm, $$(100)$$ We present the convergence guarantees of MaxFL for partical client participation in this section. With partical client participation, we have the update rule in (63) changed to We the update rule in (30) changed to $${\bf w}^{(t+1,0)}={\bf w}^{(t,0)}-\eta_{g}^{(t,0)}\eta_{l}\sum_{k\in S^{(t,0)}}{\bf h}_{k}^{(t,0)}\tag{101}$$ where the m clients are sampled uniformly at random without replacement for S (t,0) at each communication round t by the server and η (t,0) g = mηg/(Pk∈S(t,0) qk(w(t,0)) + ϵ) for positive constant ϵ. Then with the update rule in (101) and Lemma C.2, defining ηe (t,0) = η (t,0) g ηlτm we have $$\mathbb{E}\left[\widehat{F}(\mathbf{w}^{(t+1,0)})\right]-\widehat{F}(\mathbf{w}^{(t,0)})\leq\mathbb{E}\left[-\widehat{\eta}^{(t,0)}\left\langle\,\nabla\widehat{F}(\mathbf{w}^{(t,0)}),\frac{1}{m\tau}\,\sum_{i\in\mathcal{S}^{(t,0)}}\mathbf{h}_{i}^{(t,0)}\right\rangle\right]\tag{102}$$ $$+\mathbb{E}\left[\frac{\widehat{L}_{s}(\widehat{\eta}^{(t,0)})^{2}}{2}\left\|\frac{1}{m\tau}\,\sum_{i\in\mathcal{S}^{(t,0)}}\mathbf{h}_{i}^{(t,0)}\right\|^{2}\right]$$ For the first term in the RHS of (102) we have that due to the uniform sampling of clients (see Lemma 4 in Jhunjhunwala et al. (2022)), it becomes analogous to the derivation for full client participation. Hence, with the property of mτηlηg m+ϵ ≤ ηe (t,0) ≤ mτηlηg ϵand using the previous bounds in (90), we result in the final bound for the first term in the RHS of (102) as below: the first term in the ratio of (100) is much. $$\mathbb{E}\left[-\widetilde{\eta}^{(t,0)}\left\langle\nabla\tilde{F}(\mathbf{w}^{(t,0)}),\frac{1}{m\tau}\sum_{t\in\mathbb{S}^{(t,0)}}\mathbf{h}_{i}^{(t,0)}\right\rangle\right]\leq\left(-\frac{m\tau\eta\eta_{\theta}}{m+\epsilon}+\frac{4\eta_{\theta}^{2}\tau^{2}L_{\epsilon}^{2}\delta^{\prime2}\eta_{\theta}m}{\epsilon}\right)\left\|\nabla\tilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}\tag{103}$$ $$+\frac{4L_{\epsilon}^{2}\tau^{3}\eta_{\theta}^{2}m\eta_{\theta}\kappa^{\prime2}}{\epsilon}+\frac{L_{\epsilon}^{2}\tau^{2}\eta_{\theta}^{2}m\eta_{\theta}\sigma_{\theta}^{2}}{\epsilon}$$ For the second term in the RHS of (102), with C = Les(mτηlηg/ϵ) 2 we have the following: E Les(ηe (t,0)) 2 2 1 mτ X i∈S(t,0) h (t,0) i 2 ≤ CE 1 mτ X i∈S(t,0) (h (t,0) i − h (t,0) i) 2 (104) +CE 1 mτ X i∈S(t,0) h (t,0) i 2 + CE 1 mτ X i∈S(t,0) h (t,0) i 2 (105) =C m2τ 2 E X i∈S(t,0) h (t,0) i − h (t,0) i 2 i=1 E h (t,0) i − h (t,0) i 2+ CE 1 mτ X i∈S(t,0) h (t,0) i 2 (106) =C mMτ 2 X M ≤ Cσ2 g mτ + CE 1 mτ X i∈S(t,0) h (t,0) i 2 (107) where (106) follows due to, again, the uniform sampling of clients and the rest follows identical steps for full client participation in the derivation for (67). Note that $$C=\left({\frac{L_{s}}{M}}\sum_{k=1}^{M}q_{k}(\mathbf{w})+{\frac{L_{c}}{4}}\right)(m\tau\eta\eta_{g}/\epsilon)^{2}\leq(L_{s}+{\frac{L_{c}}{4}})(m\tau\eta\eta_{g}/\epsilon)^{2}$$ For the second term in (107) we have that E 1 mτ X i∈S(t,0) h (t,0) i 2 = E 1 mτ X i∈S(t,0) h (t,0) i − ∇Fei(w(t,0)) + ∇Fei(w(t,0)) − 1 τ ∇Fe(w(t,0)) + 1 τ ∇Fe(w(t,0)) 2# (109) ≤ 3E 1 mτ X i∈S(t,0) h (t,0) i − ∇Fei(w(t,0)) 2 + 3 τ 2 E 1 m X i∈S(t,0) ∇Fei(w(t,0)) − ∇Fe(w(t,0)) 2 (110) | {z } A1 | {z } A2 +3E " 1 τ ∇Fe(w(t,0)) 2# $$(108)$$ First we bound A1 in (110) as follows: 3E 1 mτ X i∈S(t,0) h (t,0) i − ∇Fei(w(t,0)) 2 = 3E 1 mτ X i∈S(t,0) qi(w(t,0)) τX−1 r=0 ∇Fi(w (t,r) i) − ∇Fi(w(t,0)) 2 ≤3 mτ E X i∈S(t,0) τX−1 r=0 ∇Fi(w (t,r) i) − ∇Fi(w(t,0)) 2 = 3 Mτ X M i=1 τX−1 r=0 E ∇Fi(w (t,r) i) − ∇Fi(w(t,0)) 2 (111) i=1 τX−1 r=0 E w(t,0) − w (t,r) i 2(112) ≤ 3L 2 s Mτ X M (113) $\binom{114}{114}$ . where (111) is due to Jensen's inequality, qi(w) ≤ 1, and uniform sampling of clients, and (112) is due to Assumption 3.1. Using (72) we have already derived, bound (112) further to: $$3\mathbb{E}\left[\left\|\frac{1}{m\tau}\sum_{i\in\mathcal{S}^{(t,0)}}\left(\widetilde{\mathbf{h}}_{i}^{(t,0)}-\nabla\widetilde{F}_{i}(\mathbf{w}^{(t,0)})\right)\right\|^{2}\right]\leq6L_{\tau}^{2}\eta_{\sigma}^{2}\tau+\frac{24L_{\tau}^{2}\eta_{\tau}^{2}\tau^{2}}{M}\sum_{i=1}^{M}\mathbb{E}\left[\left\|\nabla F_{i}(\mathbf{w}^{(t,0)})\right\|^{2}\right]$$ $$\leq6L_{\tau}^{2}\eta_{\tau}^{2}\sigma_{\tau}^{2}+24L_{\tau}^{2}\eta_{\tau}^{2}\tau^{2}(\beta^{\prime2}\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)}\|^{2}+\kappa^{\prime2})$$ 2(113) ′2) (114) where (114) is due to Lemma C.1. Next we bound A2 as follows: 3 τ 2 E 1 m X i∈S(t,0) ∇Fei(w(t,0)) − ∇Fe(w(t,0)) 2 i=1 E ∇Fei(w(t,0)) − ∇Fe(w(t,0)) 2 =3(M − m) τ 2mM(M − 1) X M i=1 qi(w(t,0))∇Fi(w(t,0)) 2 =3(M − m) τ 2mM(M − 1) X M i=1 ∇qi(w(t,0))Fi(w(t,0)) − 1 M X M i=1 qi(w(t,0))∇Fi(w(t,0)) 2 i=1 ≤6(M − m) τ 2mM(M − 1) X M ∇Fi(w(t,0)) 2 + 1 M X M (117) ≤ 12(M − m)L 2 c τ 2m(M − 1) (118) $$(115)$$ $$\quad(116)$$ $$\quad(117)$$ $$\quad(117)$$ ... $$(118)$$ where (115) is due to the variance under uniform sampling without replacement (see Lemma 4 in Jhunjhunwala et al. (2022)) and (117) is due to the Cauchy-Schwarz inequality and (118) is due to Assumption 3.1. Mering the bounds for A1 and A2 to (110) we have that E 1 mτ X i∈S(t,0) h (t,0) i 2 ≤ 6L 2 sη 2 l σ 2 g τ + 24L 2 sη 2 l τ 2β ′2∥∇Fe(w(t,0))∥ 2 +24L 2 sη 2 l τ 2κ ′2 + 12(M − m)L 2 c τ 2m(M − 1) + 3E " 1 τ ∇Fe(w(t,0)) 2# (119) = 24L 2 sη 2 l τ 2β ′2 + 3 τ 2 ∥∇Fe(w(t,0))∥ 2 + 6L 2 sη 2 l τ (σ 2 g + 4τκ′2) + 12(M − m)L 2 c τ 2m(M − 1) (120) Then we can plug in (120) back to (107) and plugging in (103) to (102), we can derive the bound in (102) as E hFe(w(t+1,0)) i− Fe(w(t,0)) ≤ − mτηlηg m + ϵ+ 4η 3 l ηgτ 3L 2 sβ ′2m ϵ+ ν (τηlηg) 2(24L 2 sη 2 l τ 2β ′2 + 3) ∇Fe(w(t,0)) 2 + (τηlηg) 2ν σ 2 g mτ + (τηlηg) 2ν 6L 2 sη 2 l τ (σ 2 g + 4τκ′2) + 12(M − m)L 2 c τ 2m(M − 1) + 4L 2 s τ 3η 3 l mηgκ ′2 ϵ+ L 2 s τ 2η 2 l mηgσ 2 g ϵ $$(121)$$ where ν = Ls + Lc/4. With ηl ≤ 1/4β ′τLs, ϵ = m, and ηgηl ≤1 9τ ν , we can further bound above as $$\begin{array}{l}{{\mathbb{E}\left[\widetilde{F}(\mathbf{w}^{(t+1,0)})\right]-\widetilde{F}(\mathbf{w}^{(t,0)})\leq-\frac{\eta\eta\eta_{g}\tau}{4}\left\|\nabla\widetilde{F}(\mathbf{w}^{(t,0)})\right\|^{2}+\left(\tau\eta\eta_{g}\right)^{2}\nu\frac{\sigma_{g}^{2}}{m\tau}}}\\ {{\ \ \ \ +\left(\tau\eta\eta_{g}\right)^{2}\nu\left(6L_{s}^{2}\eta_{l}^{2}\tau(\sigma_{g}^{2}+4\tau\kappa^{\prime2})+\frac{12(M-m)L_{s}^{2}}{\tau^{2}m(M-1)}\right)+4L_{s}^{2}\tau^{3}\eta_{l}^{3}\eta_{g}\kappa^{\prime2}+L_{s}^{2}\tau^{2}\eta_{l}^{2}\eta_{g}\sigma_{g}^{2}}}\end{array}$$ $$(122)$$ Taking the average across all rounds on both sides of (122) and rearranging the terms we get 1 T T X−1 t=0 E h∥∇Fe(w(t,0))∥ 2i≤ 4 Fe(w(0,0)) − Feinf T ηlηgτ+ 4σ 2 gηl ηgν m + 2L 2 sηlτ 3+ L 2 s τ + 80L 2 sη 2 l τ 2κ ′2 3+ 48ηlηgν(M − m)L 2 c τm(M − 1) $$(123)$$ With the small enough learning rate ηl = 1/( √T τ ) and ηg = √τm one can prove that min t∈[T] E ∇Fe(w(t,0)) 2≤ 4 Fe(w(0,0)) − Feinf+ 4σ 2 g ν √T τm+ 4σ 2 gL 2 s √T + 8σ 2 gL 2 s 3τT + 80L 2 sκ ′2 T+ 48ν(M − m)L 2 c √τ √Tm = O σ 2 g √T τm!+ O σ 2 g τT !+ O κ ′2 T + O √τ √Tm(125) $$\left(124\right)$$ $$\left(125\right)$$... completing the proof for Theorem 3.1 for partial client participation. ## D Experiment Details And Additional Results All experiments are conducted on clusters equipped with one NVIDIA TitanX GPU. The algorithms are implemented in PyTorch 1. 11. 0. All experiments are run with 3 different random seeds and the average performance with the standard deviation is shown. The code used for all experiments is included in the supplementary material. ## D.1 Experiment Details For FMNIST, for the results in Fig. 4, Table 2, and Table 3, the data is partitioned into 5 clusters where 2 labels are assigned for each cluster with no labels overlapping across clusters. For the other FMNIST results and EMNIST, we use the Dirichlet distribution (Hsu et al., 2019) to partition the data with α = 0.5, 0.05 respectively. Clients are randomly assigned to each cluster, and within each cluster, clients are homogeneously distributed with the assigned labels. For the Sent140 dataset, clients are naturally partitioned with their twitter IDs. The data of each client is partitioned to 60% : 40% for training and test data ratio unless mentioned otherwise. | | Seen Clients | | | | | | | |-------------------------------------------------------------------------------------------------------------------|----------------|---------------|-------------|---------------|--------------|------------------------------|--------------| | FMNIST | EMNIST | | | | | | | | Byz=0.1 | Byz=0.05 | Byz=0.1 | Byz=0.05 | | | | | | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | | MW-Fed 17.24 (±2.35) | 0.01 (±0.0) | 21.28 (±1.79) | 0.02 (±0.0) | 15.83 (±1.52) | 0.004 (±0.0) | 22.22 (±0.63) 0.008 (±0.001) | | | MaxFL 69.42 (±2.87) 0.35 (±0.05) 70.60 (±2.76) 0.42 (±0.03) 52.74 (±0.44) 0.20 (±0.01) 56.10 (±0.77) 0.23 (±0.01) | | | | | | | | | | Unseen Clients | | | | | | | | FMNIST | EMNIST | | | | | | | | Byz=0.1 | Byz=0.05 | Byz=0.1 | Byz=0.05 | | | | | | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | Test Acc. | GM-Appeal | | MW-Fed 18.45 (±2.81) | 0.01 (±0.0) | 21.91 (±3.81) | 0.01 (±0.0) | 17.03 (±0.21) | 0.005 (±0.0) | 22.23 (±0.63) | 0.003 (±0.0) | | MaxFL 69.75 (±3.66) 0.39 (±0.01) 71.11 (±1.47) 0.46 (±0.01) 53.82 (±0.09) 0.26 (±0.02) 55.10 (±0.78) 0.28 (±0.01) | | | | | | | | Table 4: Byzantine clients are included in the total clients where they artificially report large losses to the server and add noise to their gradients. The ratio of the Byzantine clients is denoted as 'Byz'. We report the final avg. test accuracy and GM-Appeal across the seen and unseen clients where we train for 200 communication rounds. At the 10th round, clients flexibly opt-out or opt-in depending on whether the global model has met their requirements. Obtaining wbi, i ∈ [M] **for MaxFL Results in Section 5.** In MaxFL, we use wbi, i ∈ [M] to calculate the aggregating weights (see Algorithm 1). For all experiments with MaxFL, we obtain wbi, i ∈ [M] at each client by each client taking 100 local SGD steps on its local dataset with its own separate local model before starting federated training. We use the same batch-size and learning rate used for the local training at clients done after we start the federated training (line 8-9 in Algorithm 1). The specific values are mentioned in the next paragraph. Local Training and Hyperparameters. For all experiments, we do a grid search over the required hyperparameters to find the best performing ones. Specifically, we do a grid search over the learning rate: ηlηg ∈ {0.1, 0.05, 0.01, 0.005, 0.001}, batchsize: b ∈ {32, 64, 128}, and local iterations: τ ∈ {10, 30, 50} to find the hyper-parameters with the highest test accuracy for each benchmark. For all benchmarks we use the best hyper-parameter for each benchmark after doing a grid search over feasible parameters referring to their source codes that are open-sourced. For a fair comparison across all benchmarks we do not use any learning rate decay or momentum. DNN Experiments. For FMNIST and EMNIST, we train a deep multi-layer perceptron network with 2 hidden layers of units [64, 30] with dropout after the first hidden layer where the input is the normalized flattened image and the output is consisted of 10 units each of one of the 0-9 labels. For Sent140, we train a deep multi-layer perceptron network with 3 hidden layers of units [128, 86, 30] with pre-trained 200D average-pooled GloVe embedding (Pennington et al., 2014). The input is the embedded 200D vector and the output is a binary classifier determining whether the tweet sentiment is positive or negative with labels 0 and 1 respectively. All clients have at least 50 data samples. To demonstrate further heterogeneity across clients' data we perform label flipping to 30% of the clients that are uniformly sampled without replacement from the entire number of clients. ## D.2 Additional Experimental Results Robustness of MaxFL Against Byzantine Clients. One may think that MaxFL may be perceptible to specific attacks from Byzantine clients that intentionally send a greater GM-Appeal gap to the server to gain a higher aggregation weight. To show MaxFL's robustness against such attacks we show in Table 4 the performance of MaxFL with Byzantine clients attacks which send higher losses to gain higher weights and then send Gaussian noise mixed gradients to the server. We compare with MW-Fed (Blum et al., 2021) which aims for incentivizing client participation by clients sending higher weights to the server and performing ![30_image_0.png](30_image_0.png) Figure 6: Comparison of the average of the true local losses across all clients (PM k=1 fk(w)/M) and the empirical local losses across all clients (PM k=1 Fk(w)/M) where the former is calculated on the test set and the latter is calculated on the training set for the global model w. We show that the average of the true local losses is nearly identical to the average empirical local loss across all clients empirically validating our relaxation of replacing fk(w) with Fk(w). | GM-Appeal | Preferred-Model Test Acc. | | | | | | |-------------|-----------------------------|-------------|--------------|---------------|---------------|---------------| | τl = 50 | τl = 100 | τl = 150 | τl = 50 | τl = 100 | τl = 150 | | | FedAvg | 0.01 (±0.01) | 0.01 (±0.0) | 0.01 (±0.0) | 98.56 (±0.08) | 98.72 (±1.02) | 98.75 (±1.28) | | FedProx | 0.01 (±0.0) | 0.01 (±0.0) | 0.01 (±0.01) | 98.56 (±1.15) | 98.72 (±1.08) | 98.75 (±1.05) | | MaxFL | 0.55 (±0.0) | 0.55 (±0.0) | 0.55 (±0.0) | 98.64 (±1.03) | 98.77 (±1.01) | 98.77 (±1.01) | Table 5: GM-Appeal and preferred-model test accuracy for varying number of local steps τl to obtain wb k, k ∈ [M] where T = 200 is the total number of communication rounds for training the global model. more local updates. In Table 4 we see that for both high and low byzantine client ratios, MaxFL achieves only 1-5% lower test accuracy for seen and unseen clients compared to the case where there are no Byzantine clients in Table 1. This is due to our objective giving lower weight to those clients that give a too high GM-Appeal gap (see Fig. 3). Hence MaxFL disregards these clients that send artificially high GM-Appeal gaps. Note that it is indeed possible that attackers can devise different attacks, and in our work we consider one of them to show MaxFL's robustness. Specifically, our focus is on the case when some attackers try to modify their loss values in order to get higher weights for their updates. Ablation Study on fk(w) ≈ Fk(w). One of the two key relaxations we use for MaxFL (see Section 2.1) is that we replace fk(w) − fk(wb k) with Fk(w) − Fk(wb k). In other words, we replace the true loss fk(w) = Eξ∼Dk [ℓ(w, ξ)] with the empirical loss Fk(w) = 1 |Bk| Pξ∈Bk ℓ(w, ξ) for all clients k ∈ [M]. We have used the likely conjecture that the global model w is trained on the data of all clients, making it unlikely to overfit to the local data of any particular client, leading to fk(w) ≈ Fk(w). We show in Fig. 6 that this is indeed the case. For all DNN experiments, we show that the average true local loss across all clients, i.e., PM k=1 fk(w)/M is nearly identical to the average empirical local loss across all clients, i.e., PM k=1 Fk(w)/M given the training of the global model w throughout the communication rounds. This empirically validates our relaxation of the true local losses to the empirical local losses. Ablation Study on the Number of Local Steps τl **to train** wb k, k ∈ [M]. We conduct an additional ablation study where we vary the number of local steps to obtain wb k, k ∈ [M] for clients as shown in Appendix D.2. Despite that a smaller number of local steps can lead to underfitting and a larger number of local steps can lead to overfitting, we show that all methods' GM-Appeals do not vary much by the different number of local steps used for training. Preferred-model Test Accuracy for the Local-Tuning Results in Table 3. In Table 3, we have shown how MaxFL can largely increase the GM-Appeal compared to the other baselines even when jointly used with local-tuning. In Table 6, we show the corresponding preferred-model test accuracies. We show that for the seen clients that were active during training, MaxFL achieves at least the same or higher preferred-model test accuracy than the other methods for all the different datasets. Hence, the clients are able to also gain from MaxFL by achieving the highest accuracy in average with their preferred models (either | Seen Clients | Unseen Clients | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|--------|---------| | FMNIST | Sent140 | FMNIST | Sent140 | | FedAvg | 99.37 (±0.24) 55.71 (±0.46) 99.50 (±0.02) 58.79 (±0.67) | | | | FedProx | 99.35 (±0.23) 55.75 (±0.80) 99.55 (±0.09) 58.82 (±0.72) | | | | PerFedAvg 99.20 (±0.25) 55.74 (±0.80) 98.98 (±0.55) 58.82 (±0.72) MW-Fed 99.27 (±0.39) 55.06 (±0.38) 99.47 (±0.08) 57.36 (±0.71) MaxFL 99.40 (±0.30) 55.82 (±0.82) 99.50 (±0.02) 58.88 (±0.77) | | | | | | GM-Appeal | Preferred-Model Test Acc. | | | |----------------|-------------------------------------------------------|-----------------------------|-----------------------------|---------| | | FMNIST | Sent140 | FMNIST | Sent140 | | q-FFL (q = 1) | 0.03 (±0.01) 0.09 (±0.06) 99.24 (±0.05) 53.10 (±2.63) | | | | | q-FFL (q = 10) | 0.0 (±0.0) | 0.09 (±0.0) | 98.90 (±0.01) 52.71 (±1.40) | | | MaxFL | 0.55 (±0.0) 0.41 (±0.07) 99.29 (±0.03) 53.93 (±1.87) | | | | Table 6: Preferred-model test accuracy with the locally-tuned models with 5 local steps from the final global model for seen clients' and unseen clients' test data (the corresponding GM-Appeal is in Table 3). Table 7: GM-Appeal and preferred-model test accuracy for the seen clients' test data with the final global models trained via MaxFL and q-FFL (Li et al., 2019) which aims in improving fairness. The baseline q-FFL with large q, e.g. q = 10, emulates the behavior of another well-known algorithm named AFL (Mohri et al., 2019). global model or solo-trained local model). For the unseen clients with FMNIST, FedProx achieves a slightly higher preferred-model test accuracy (+0.05) than MaxFL but with a much lower GM-Appeal of 0.46 (see Table 3) as MaxFL's GM-Appeal is 0.56. For the other datasets with unseen clients, MaxFL achieves at least the same or higher preferred-model test accuracy than the other methods. This demonstrates that MaxFL consistently largely improves the GM-Appeal compared to the other methods while losing very little, if any, in terms of the preferred-model test accuracy. Comparison with Algorithms for Fairness Fair FL methods (Li et al., 2019; Mohri et al., 2019) aim in training a global model that yields small variance across the clients' test accuracies. These methods may satisfy the worst performing clients, but potentially at the cost of causing dissatisfaction from the best performing clients. We show in Table 7 that the common fair FL methods are indeed not effective in improving the overall clients' GM-Appeal. We see that the fair FL methods achieve a GM-Appeal lower than 0.01 for all datasets while MaxFL achieves at least 0.40 for all datasets. Moreover, the preferred-model test accuracy is also higher for MaxFL compared to the fair FL methods. This underwelming performance of fair FL methods in GM-Appeal can be due to the fact that fair FL methods try to find the global model that performs well, in overall, over all clients which results in failing to satisfy any client. MaxFL's Theoretical Learning Rate Behavior for Fig. 1 (b). Here, we provide a plot of MaxFL's theoretical learning rate for the mean estimation example in Fig. 1(b) in Fig. 7 to show how the learning rate changes for different regions of the model. We show this plot as a proof of concept on the adaptive learning rate we discuss in Section 4. For the sigmoid function which is used for our MaxFL objective, using a global notion of smoothness can cause gradient descent to be too slow since global smoothness is determined by behavior at w = 0 where w is the model. In this case, it is better to use a local estimate of smoothness in the flat regions where |w| >> 0. Recall that ∇2σ(w) = σ(w)(1 − σ(w))(1 − 2σ(w)) < σ(x)(1 − σ(w)) and therefore setting the learning rate proportional to 1 σ(w)(1−σ(w)) can increase the learning rate in flat regions where σ(w) is close to 1 or 0. Following a similar argument, we can show that the learning rate in our objective should be proportional to 1/ PM i=1 σ(Fi(w) − Fi( ˆw ∗)(1 − σ(Fi(w) − Fi( ˆw ∗)). ![31_image_0.png](31_image_0.png) Figure 7: Behavior of Theoretical Learning Rate of MaxFL for the mean estimation example in Fig. 1(b). As expected from the theoretical learning rate formula, we see a higher learning rate in regions where the function is flat.
Review 1: Summary: The authors propose the so-called Global Model Appeal (GM-Appeal) as an alternative to the standard FL objective. This quantity measures the ratio of clients finding the current global model $w$ "appealing", i.e., the ratio of clients such that the value of true loss function $f_k(w)$ of client $k$ is smaller than some predefined threshold $\rho_k$. The authors also provide a smooth relaxation of the proposed formulation (via replacing the indicator function with a sigmoid and the true loss with an empirical one) and design a new algorithm (called MaxFL) for minimizing the resulting objective. This algorithm can be seen as FedAvg for the relaxed GM-Appeal minimization with a stepsize that adapts to the weights of aggregation. The authors also analyze the proposed method for non-convex problems with smooth, Lipschitz-continuous objective under bounded variance and bounded gradient dissimilarity assumptions. MaxFL was also tested in numerical experiments and showed better GM-Appeal and test accuracy than competitors on image classification tasks (FMNIST, EMNIST, Sent140). Strengths and Weaknesses: ### Strengths S1. The proposed formulation is new and shows good performance in simple experiments. S2. Numerical results show that MaxFL works well even when some clients decide to leave/join. ### Weaknesses W1. The benefits of considering GM-Appeal maximization are not formalized and not justified theoretically. The authors explain the intuition and motivation behind the proposed formulation but do not show formal benefits of its consideration. W2. The authors do not provide a formal result on how problems (2) and (3) are related. W3. Theorem 3.1 relies on many assumptions. In particular, the authors assume Lipschitzness and smoothness of $F_k$. Typically, for the analysis of optimization methods and, in particular, FL algorithms it is sufficient to assume either Lipschitzness or smoothness. Requested Changes: ## Questions and suggestions for improvement 1. Can the benefits of maximizing GM-Appeal be formalized and justified theoretically? For example, one can try to come up with an example when GM-Appeal maximization is provably beneficial in the case when new clients join with a data similar to the seen clients. If it is problematic to show theoretical benefits of GM-Appeal maximization, then more substantial numerical justification is required. 2. How well does problem (3) approximate problem (2)? 3. In the toy example, $\rho_k$ requires knowing $\theta_k$. In this case, the mean is already known for each client. 4. The experiment with Byzantine attackers is confusing. In this experiment, the authors assume that Byzantine workers send too large GM-Appeal gap. However, it is unclear why it is the worst possible scenario. For example, Byzantine workers can send zero as a "gap" and send arbitrary vectors as $\Delta w_k^{(t,0)}$, e.g., these vectors can have arbitrary large norms. In this case, MaxFL can be broken by Byzantines in 1 communication round. Therefore, in the current shape the algorithm cannot resist Byzantine attacks. 5. On page 6, the authors claim that MaxFL allows "partial client availability". This statement is inaccurate since the authors use uniform client sampling, which implicitly relies on the fact that all clients are available at any time (since every client can be sampled with non-zero probability). 6. The pseudocode of the method should be provided in the main part of the paper. Otherwise, it is hard to understand how the method works and what are some of the quantities stand for (in particular, it is mentioned only in the pseudocode that $T$ stands for the number of communication rounds). 7. On page, the authors mention that the optimal learning rate for MaxFL is $\widetilde{\eta} = 1/\widetilde{L}_s$. Is there any proof of optimality of this stepsize? Did the authors try to use other stepsizes (e.g., constant ones, without weights) in MaxFL for the experiments? 8. There is no complete formulation of Theorem 3.1 in the appendix (though the authors claim that they provide it in the appendix). In particular, complete formulas for stepsizes are required together with the requirements on $T$, i.e., how large $T$ should be? Moreover, the current formula for $\eta_l$ and the third and fourth terms in RHS of (5) have issues with a physical dimension: $\eta_l$ should have the same physical dimension as $1/L_s$ while $T,\tau,m$ have void physical dimension. 9. Why does the inequality for $\widetilde{\eta}^{(t,0)}$ between formulas (95) and (96) hold? ## Minor comments 1. Definition 1: the notation for GM-Appeal does not reflect the fact that it depends on $w$ and $\rho_k$. I suggest to replace GM-Appeal with GM-appeal($w, \lbrace\rho_k\rbrace_{k\in [M]}$). 2. Formula (2): usually $\text{sign}(x)$ denotes a sign of $x \in \mathbb{R}$, i.e., $-1,0,+1$. The function that authors denote as $\text{sign}(x)$ is usually denoted as $[x]_+$. 3. Page 6: wrong sign in the definition of $\Delta w_k^{(t,0)}$. 4. I think there is a typo in (55) in the calculation of the Hessian, though starting from (58) the derivation is correct. 5. In the derivation of (91), the authors use (47) + (49) + (52) instead of just the statement of Lemma C.1. So, it is better to adjust the statement of Lemma C.1. 6. The first line of page 26: should be $\eta_l \eta_g \leq 1 / (4\tau M L_s)$. Broader Impact Concerns: The authors provide enough details on a broader impact. ================================================== Review 2: Summary: This paper introduce a novel objective called Global Model Appeal (GM-Appeal), which aims at training a global model that can satisfy the requirements of the most number of local clients. To maximize the proposed GM-Appeal, the author propose a framework called MaxFL, which are shown to be able to perform well in the new objective. In addition, convergence analysis are provided for the proposed method. Strengths and Weaknesses: Strength: 1) The proposed metric is well-motivated and has real world application potentials. This proposed new framework can inspire the community to rethink about the traditional global loss minimization objective. 2) Extensive numerical validation are provided from various aspects to show effectiveness of the proposed method. 3) Theoretical analysis are provided for the method. Weakness: 1) Since the proposed objective resembles a "hard" variation of q-FFL, I suggest the author provide more discussion to compare between q-FFL and the proposed GM-Appeal. Requested Changes: Please see the "weakness" section. Broader Impact Concerns: Broader impact is properly addressed ================================================== Review 3: Summary: This paper considers a novel objective in federated learning, i.e., to maximize the total number of clients whose requirements are satisfied by the global model. The paper firstly proposes the novel metric of global model appeal, which measures the proportion of clients whose requirements are satisfied by the global model. Then, the proposed objective is relaxed in order to derive a practical objective and optimization algorithm to maximize the global model appeal. The convergence of the method is analyzed, and extensive experiments are performed to assess the advantage of the proposed metric and method. Strengths and Weaknesses: Strengths: - The proposed metric of global model appeal is intuitive and practically useful, especially in problems in which the clients may opt out of the federation. It also provides a fresh perspective as to the goal of federated learning. - Based on the proposed definition of global model appeal, the paper also applies natural relaxation of the definition, which leads to a natural algorithm which is able to efficiently maximize the objective of global model appeal. - Although the toy example is very simple, it does allow for formalizing a number of interesting insights about the proposed global model appeal. - The experimental results indeed show the empirical advantage of the proposed method. Weaknesses: - In your main experiments (such as Figure 1 and Table 1), do you consider the possibility of agents opting out of the federation? If yes, then this is different from the common experimental setting in federated learning, and needs to be clearly specified. Also, if you have indeed allowed the agents to opt out, how is this done exactly? An agent would opt out as long as its requirement is not satisfied in any iteration? In addition, if the answer is yes, have you tested the performance of the algorithm in the standard FL setting (no agents opting out)? - In the toy example in Section 2.2, in the definition of $F_k(w)$, why is the second term $(\hat{\theta}_k - \theta_k)^2$ needed? - The interpretation of the aggregated gradient in Section 3 is very interesting and provides insights and intuitions as to how the proposed algorithm works. However, I find the behavior of the algorithm when $F_k(\mathbf{w}) \gg \rho_k$ concerning. More specifically, the discussion at the top of page 6 suggests that in this case, the algorithm will simply "abandon" those agents for whom there is little hope to satisfy their requirements, i.e., give up helping them satisfy their requirements. I think this is likely to create problems regarding fairness. So, I think mitigation methods should be proposed or this issue should be given sufficient discussions. - The theoretical results in Section 3.1 can be presented better. For example, I think it would be helpful to clearly discuss what are the insights drawn from the theoretical results, and what how do these theoretical results differ from those of classical FedAvg. - On page 9, it is mentioned that "We perform grid search for hyperparameter tuning for all baselines and choose the best performing ones". Is this a common practice? I imagine in practice, it may make more sense to use a single set of hyperparameters in all new applications, or to use a validation set to select hyperparameters and then test the results using a separate test set. - In the experiments of "Local Tuning for Personalization", have you compared the performance using local fine-tuning with the performance without it (no personalization)? Requested Changes: Most of the required changes are discussed under Weaknesses above. In addition, - Theorem 2.1, should the $\gamma$ be $\gamma_G$? Broader Impact Concerns: As I mentioned in the third point under Weaknesses above, the proposed method may cause fairness issues. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper proposes a new objective for federated learning, which is to optimize a notion of "global model appeal" to the clients rather than the average loss over clients. This idea is new, significantly different from previous proposals of alternative objective functions, and resonates with the need of providing incentives to convince clients to join an FL process. The proposed formulation is validated by some theoretical and empirical results. While I agree with reviewer E382 that these results could be improved/broadened, I side with the other two reviewers that the current results are sufficient to validate the usefulness of the proposed idea, which I believe has the potential to be refined in subsequent work. For these reasons, I recommend the paper to be accepted with minor revision. The requested changes are as follows: - The claim that the proposed approach is "robust to Byzantine clients", while not central to the paper, is not supported by sufficient evidence, as highlighted by reviewer E382. Indeed, Byzantine clients can behave arbitrarily, hence such a robustness property must be formally established. What the authors provide is evidence that their approach is robust to a particular attack (but better attacks may exist). I thus ask the authors to tone down their claim, e.g., by stating that their approach exhibits better robustness to some attacks. - In his/her last message, reviewer E382 asked for clarifications about Theorem 3.1 and its proof. Please make sure you add these details. ==================================================
# A Sandbox Tool To Bias(Stress)-Test Fairness Algorithms Anonymous authors Paper under double-blind review ## Abstract Motivated by the growing importance of reducing unfairness in ML predictions, Fair-ML researchers have presented an extensive suite of algorithmic "fairness-enhancing" remedies. Most existing algorithms, however, are agnostic to the sources of the observed unfairness. As a result, the literature currently lacks guiding frameworks to specify conditions under which each algorithmic intervention can potentially alleviate the underpinning cause of unfairness. To close this gap, we scrutinize the underlying biases (e.g., in the training data or design choices) that cause observational unfairness. We present the conceptual idea and a first implementation of a bias-injection sandbox tool to investigate fairness consequences of various biases and assess the eectiveness of algorithmic remedies in the presence of specific types of bias. We call this process the bias(stress)-testing of algorithmic interventions. Unlike existing toolkits, ours provides a controlled environment to counterfactually inject biases in the ML pipeline. This stylized setup oers the distinct capability of testing fairness interventions beyond observational data and against an unbiased benchmark. In particular, we can test whether a given remedy can alleviate the injected bias by comparing the predictions resulting after the intervention in the biased setting with true labels in the unbiased regime - that is, before any bias injection. We illustrate the utility of our toolkit via a proof-of-concept case study on synthetic data. Our empirical analysis showcases the type of insights that can be obtained through our simulations. ## 1 Introduction Machine Learning (ML) increasingly makes or informs high-stakes decisions allocating or withholding vital resources to individuals and communities in domains such as employment, credit lending, education, welfare benefits, and beyond. If not done carefully, ML-based decision-making systems may worsen existing inequities and impose disparate harms on already underserved individuals and social groups. This realization has motivated an active area of research into quantifying and guaranteeing fairness for ML. Prior work has proposed various mathematical formulations of (un)fairness as predictive (dis)parities (Berk et al., 2018; Dwork et al., 2012; Hardt et al., 2016; Joseph et al., 2016) and fairness-enhancing algorithms to guarantee the respective parity conditions in the trained model's predictions (Agarwal et al., 2018; Hardt et al., 2016; Calmon et al., 2017; Feldman et al., 2015; Zhang et al., 2018; Kamiran et al., 2012). Our work argues that these interventions are not suciently well-understood to warrant practical uptake. One crucial limitations of these algorithms is the fact that they are agnostic to the underlying *sources* of the observed unfairness. As a result, applying them in practice may simply hide the real problem by ensuring narrowly defined notions of parity in predictions. As a result, what these methods seemingly gain in observational parity can come at the cost of predictive disparity and accuracy loss in deployment, and at worst, they can become an instrument of fair-washing (Aïvodji et al., 2019). As an example, consider a hypothetical healthcare setting in which electronic healthcare data is used to determine which patients are selected for a specialized treatment. Assume the hospital wants to ensure their prediction model is fair across a demographic majority and minority group. In this setting, it may seem intuitive to promote fairness by enforcing an Equalized Odds constraint. Equalized Odds ensures that, given the true need for the procedure is the same, the model's decision to select a patient is independent of the patient's group membership. While this may seem like a viable solution to combat unfairness, it is agnostic to the types of data bias that cause outcome disparities. Training data in healthcare prediction tasks like this is often plagued by biases (Obermeyer et al., 2019; Chen et al., 2021). In our example, we may not have access to patients' true need for the procedure and instead default to healthcare cost as a proxy outcome to train the model. Since access to healthcare has historically been lower for some minority groups, this can lead to a setting in which minority group patients selected for the procedure are sicker than their majority group counterparts even when enforcing Equalized Odds. Blindly applying an o-the-shelf Equalized Odds fairness enhancing method without understanding the types of bias present in this setting could thus hide the real problem while creating an illusion of fairness. Our work aims to address the above shortcoming by oering a simulation framework for examining fairness interventions in the presence of various biases. This oers an initial yet crucial step toward a broader research agenda: to trace the limitations and scope of applicability of fairness-enhancing algorithms. We start with the observation that the ML pipeline consists of numerous steps, and distinct types of biases (e.g., under-/over-representation of certain groups, label bias, or measurement bias in the training data) can creep into it at various stages, amplifying or concealing each other in the trained model's predictive disparities. The fair-ML scholarship currently lacks a comprehensive framework for specifying the conditions under which each algorithmic fairness-enhancing mechanism eectively removes specific types of biases—instead of simply covering up their manifestations as unfairness. For example, it is unclear what type of intervention (e.g., pre-, in-, or post-processing) one must employ depending on the underlying cause of the observed statistical disparity. As a concrete instance, a careful investigation of the relationship between biases and fairness remedies may reveal that if the source of unfairness is label bias among examples belonging to the disadvantaged group, imposing fairness constraints on ERM may be more eective than certain types of pre-processing or post-processing techniques. The reverse might be true for a dierent bias (e.g., biased choice of hypothesis class). Our simulation tool. Motivated by the above account, in this work, we identify and simulate various (stylized) forms of bias that can infiltrate the ML pipeline and lead to observational unfairness. We prototype a sandbox toolkit designed to facilitate simulating and assessing the eectiveness of algorithmic fairness methods in alleviating specific types of bias, by providing a controlled environment. We call this process the *bias(stress)-testing* of algorithmic interventions. Our sandbox oers users a simulation environment to stress-test existing remedies by 1. simulating/injecting various types of biases (e.g., representation bias, measurement bias, omitted variable bias, model validity discrepancies) into their ML pipeline; 2. observing the interactions of these biases with one another via the predictions produced at the end of the ML pipeline (i.e., through the trained model); 3. and testing the eectiveness of a given algorithmic fairness intervention in alleviating the injected biases. This paper oers a preliminary implementation of the idea (see footnote 1) along with a detailed proofof-concept analysis showing its utility. The sandbox is currently realized as a python library and we are working to add a visual user interface component in the future. We emphasize that the tool needs to be further developed and thoroughly evaluated before it is ready to be utilized beyond educational and research settings. The current implementation can be utilized - in research settings to explore the relationships between bias and unfairness, and shape informed hypotheses for further theoretical and empirical investigations; - as an educational tool to demonstrate the nuanced sources of unfairness, and grasp the limitations of fairness-enhancing algorithms; Ultimately once the tool is fully developed and validated, we hope that it can be utilized by practitioners interested in exploring the potential eect of various algorithmic interventions in their real-world use cases. This will be an appropriate usage of the tool if (and this is an crucial if) the bias patterns in the real-world data are well-understood. Counterfactual comparisons. The key idea that distinguishes our tool from existing ones is the possibility of evaluating fairness interventions beyond observational measures of predictive disparity. In particular, we can test whether a given remedy can alleviate the injected bias by comparing the predictions resulting from the intervention in the biased setting with the true labels *before* bias injection. This ability to compare with the unbiased data provides an ideal baseline for assessing the ecacy of a given remedy. We note, however, that the viability of this approach requires access to unbiased data. We, therefore, strongly recommend restricting the use of our tool to *synthetic* data sets—unless the user has a comprehensive and in-depth understanding of various biases in the real-world dataset they plan to experiment with. Remark 1 *Note that in the case of real-world applications, one can rarely assume the training data are free* of bias. However, if the practitioner is aware of what biases are present in the data (e.g., under-representation of a specific group) our toolkit may still allow them to obtain practically relevant insights concerning the eect of their fairness interventions of choice (e.g., up-sampling) on alleviating that bias—assuming *that we can* extrapolate the observed relationship between the amount of additional bias injected and the trained model's unfairness. We leave a thorough assessment of our toolkit's applicability to real-world data as a critical direction for future work. Case study. We demonstrate the utility of our proposed simulation tool through a case study. In particular, we simulate the setting studied in (Blum & Stangl, 2020). Blum and Stangl oer one of the few rigorous analyses of fairness algorithms under specific bias conditions. Their work establishes an intriguing theoretical result that calls into question conventional wisdom about the existence of tradeos between accuracy and fairness. In particular, their theoretical analysis shows that when underrepresentation bias is present in the training data, constraining Empirical Risk Minimization (ERM) with Equalized Odds (EO) conditions can recover a Bayes optimal classifier under certain conditions. Utilizing our tool, we investigate the extent to which these findings remain valid in the finite data case. Our findings suggest that, even for relatively simple regression models, a considerable amount of training data is required to recover from under-representation bias. In the studied settings with smaller data sets and little to moderate under-representation bias the intervention model showed to be no more successful in recovering the Bayes optimal model than a model without intervention. We then investigate the eectiveness of the in-processing EO intervention when alternative types of biases are injected into the training data. We observe that the intervention model struggles to recover from the other studied types of biases. In some of the bias settings, such as a dierence in base rates or dierential label noise, the model with Equalized Odds intervention can provably not recover the Bayes optimal classifier since the latter does not fulfill Equalized Odds. Finally, we contrast the in-processing approach with the original post-processing method introduced by (Hardt et al., 2016) to ensure EO. As we discuss in Sections 3 and 4, our empirical analysis identifies several critical limitations of these methods. Remark 2 *Our proof-of-concept demonstration deliberately addresses a small number of fairness-enhancing* algorithms, and evaluates them in the presence of a wide range of data biases. While the sandbox can be used to contrast a wide array of fairness definitions and algorithms, such an analysis is beyond the scope of the current contribution and is left as an important avenue for future work. In summary, we present a first implementation of the unified toolkit to bias(stress)-test existing fairness algorithms by assessing their performance under carefully injected biases. As demonstrated by our case study, our sandbox environment can oer practically relevant insights to users and present researchers with hypotheses for further investigations. Moreover, our work provides a potential hands-on educational tool to learn about the relationship between data bias and unfairness as a part of the AI ethics curricula. Once evaluated and validated carefully, we hope that this tool contributes to educating current and future AI experts about the potential of technological work for producing or amplifying social disparities, and in the process, impacting lives and society. ![3_image_0.png](3_image_0.png) Figure 1: Flowchart illustrating the modules of the sandbox framework. ## 2 Description Of The Sandbox The proposed sandbox presents a tool to understand the eectiveness of fairness enhancing algorithms under counterfactually injected bias in binary classification settings. The tool can be visualized as a simplified ML pipeline, with room for customization at each step. As indicated by its name, the sandbox prioritizes modularity and allows users to play around and experiment with alternatives at every stage. We summarize the six stages of the pipeline, which are illustrated in Figure 1, in the following. Note that, at the current time, some of implementation details as well as a visual user interface are still under development. 1. **Choice of Data:** The sandbox will allow users to select one of three options: input their own dataset, select one of the benchmark datasets in fair ML (e.g. Adult Income (Kohavi & Becker, 2017)), or synthetically generate a dataset. For custom data, the user will be asked to indicate which columns are to be used as group, outcome and feature columns. For synthetic data, which is the recommended option at this time, we provide a rigorous helper file which allows users to customize how the dataset is built. For example, we permit users to determine the number and quality (e.g. categorical or numeric) of features, the distribution of values for each feature, and the proportion of examples in dierent protected groups. The protected attribute is assumed to be binary. In addition, users can choose how labels are generated where label distributions are allowed to vary across groups. If desired, data can be sampled from a causal graph as demonstrated in Appendix C. 2. **Bias Injection:** The crux of our sandbox pipeline is the injection of dierent types of biases. In this iteration of the tool, we provide the options to inject representation bias, measurement bias, sampling bias, and label bias (Mehrabi et al., 2021; Frénay & Verleysen, 2013) which spans a large portion of the bias types discussed in the fair ML literature. Support for other types of bias will be added in the near future. In addition to injecting bias into the whole data set, the sandbox tool allows for application of biases at the intersection of protected attributes and other variables. For example, users can decide to under-sample only the positively-labeled examples from a group. Users are able to inject multiple biases at once which allows for realistic bias patterns that are multi-faceted in nature. 3. **Model Class Selection:** The proposed sandbox tool is compatible with any machine learning model in the scikit-learn paradigm. We encourage the use of the so-called white-box classifiers, as they allow for greater ease when reasoning about the results obtained throughout the sandbox pipeline and present use cases of the sandbox with logistic regression in Sections 3 and 4. 4. **Fairness Intervention:** We make use of four fairness enhancing algorithms from the Fairlearn package (Bird et al., 2020) covering pre-processing, in-processing and post-processing techniques. First, the CorrelationRemover pre-processing algorithm filters out correlation between the sensitive feature and other features in the data. Next, the ExponentiatedGradient and GridSearch inprocessing algorithms operate on a model and are based on Agarwal et al. (2018). Finally, the ThresholdOptimizer post-processing algorithm adjusts a classifier's predictions to satisfy a specific fairness constraint. ![4_image_0.png](4_image_0.png) Figure 2: Exemplary visualization generated by the sandbox. We compare the performance of a biased model on ground truth and biased data. Possible fairness metrics for in- and post-processing algorithms are Equalized Odds, Equality of Opportunity, Demographic Parity, Error Rate Parity, and False Positive Rate Parity. For example, in Section 3, we utilize the GridSearch algorithm subject to an Equalized Odds constraint. In Appendix C we consider Equalized Odds, Equality of Opportunity and Demographic Parity metrics. 5. **Evaluation Metrics:** Any scikit-learn supported machine learning performance metric for classification can be utilized in our sandbox framework. Examples include precision, accuracy, recall, F1 score, etc. Additionally, the sandbox also supports fairness metrics for evaluation, such as Equalized Odds or Demographic Parity disparities. For example, we obtain Equalized Odds disparities for the demonstrations provided in Sections 3 and 4. 6. **Visualization:** The sandbox tool outputs several figures including a visualization of the eectiveness of a fairness intervention at dealing with a particular type of bias. We note that various notions of performance are supported including more traditional measures of performance such as accuracy. Figure 2 provides an example visualization output of the sandbox. The figure displays the performance of a learned model in the selected metric (here, accuracy) over dierent degrees of bias (here, under-sampling examples from one group). Our sandbox allows us to compare performance in two dimensions: (1) Between models with and without a fairness intervention, and (2) On biased data versus unbiased ground truth data. In the figure, we show the latter comparison. We inject underrepresentation bias into the training data and utilize Fairlearn's CorrelationRemover pre-processing algorithm to modify the data by removing correlations between the sensitive feature and the other features before training the model. What we observe is that, if we only evaluate on biased data, then we might be lulled into a false sense of progress and claim that the intervention is improving our model for increasing amounts of bias. However, when we examine the model's performance on the unbiased ground truth data, we see that performance does not improve significantly. Overall, the sandbox tool regards the initial data set as unbiased and splits into training and test examples. While the training data are injected with bias, data reserved for testing remains untouched. After model fitting and fairness intervention, evaluation metrics and visualizations are provided on both the biased training data and the ground truth test data. The entire process is repeated for dierent levels of injected bias and, if indicated by the user, for several repetitions in order to obtain reliable average results. ## 3 Case Study: Can Fairness Constraints Improve Accuracy? A main objective of the proposed sandbox tool is to aid empirical evaluation of the performance of fairness intervention under dierent biases. There are various special cases in which the eect of imposing fairness constraints has been characterized from a theoretical perspective (Khani & Liang, 2021; Zhou et al., 2021; Du & Wu, 2021). However, results like these usually focus on an infinite data setting and require a vast array of assumptions which can call their practical usefulness into question. In the coming sections, we use our sandbox tool to empirically replicate a known result from Blum & Stangl (2020) (Section 3) and explore performance beyond the assumptions required for the theory (Section 4). On a high level, we find that an often prohibitive amount of data is required to approximate the infinite data level result. In addition, our exploration suggests that the theoretical result breaks down completely if some of the structural assumptions on the problem setup and bias type are relaxed. The case study demonstrates how our sandbox tool can facilitate understanding of empirical implications of theoretical results and give the user a better sense of what performance to expect in their specific setting.1 We note that the case study and explorations discussed in the main text make various simplifying assumptions including an absence of confounding. Appendix C presents supplementary experiments with confounding bias in a more realistic data setting. ## 3.1 Under-Representation Bias Under Equalized Odds Constraints Fairness intervention into machine learning systems is often cast in terms of a fairness-accuracy tradeo. Yet learning from biased data can actually lead to sub-optimal accuracy once evaluated with regards to the unbiased data distribution. Blum & Stangl (2020) theoretically describe settings in which fairnessconstrained optimization on biased data recovers the Bayes optimal classifier on the true data distribution. In this case study, we specifically zoom into one of the findings of the paper, that is, Equalized Odds constrained empirical risk minimization on data with under-representation bias can recover the Bayes optimal classifier on the unbiased data. This result requires several structural assumptions on the data generating process as outlined below. We will draw on the described data generating procedure when simulating data for the sandbox demonstration. Data generating process. Let G œ {*A, B*} specify membership in demographic groups *A, B* where B is the minority group and let x œ X be a feature vector from some feature space. We assume there is a coordinate in x which corresponds to the group membership and write x œ A if individual x belongs to group A. The respective features distributions are denoted by DA and DB. In order to generate data, we start with a pair of Bayes optimal classifiers hú = (húA, húB) œ H ◊ H where H = {h : X æ {0, 1}} is a hypothesis class. For a given constant r œ (0, 0.5], we then draw data points x such that with probability 1 ≠ r it holds x ≥ DA and with probability r it holds x ≥ DB. Dependent on the class membership, the true label is generated by first, using húA or húB and second, independently flipping the output with probability ÷ < 0.5. The second step controls the errors of hú by ensuring that húA and húB have the same error rate and errors are uniformly distributed. Starting with a ground truth data set including m observations and label noise ÷, under-representation bias is introduced by discarding positive observations from the minority group B with some probability. Specifically, for each pair (x, y) with x œ B and y = 1, the data point is independently excluded from the data set with probability 1 ≠ —. Note that (1 ≠ —) is the amount of under-representation bias. Recovery of the Bayes optimal classifier. We first note that recovery of a classifier only pertains to the binary predictions. A Bayes optimal classifier learned from the noisy unbiased data does not necessarily have the same class probability predictions as hú even in the infinite data setting. To see this, consider the case in which P(hú(x)=1|x œ A) = P(hú(x)=1|x œ B)=1 and ÷ = 0.2. Then, fitting a suciently complex threshold based classifier on enough noisy data will result in a predictor hˆ with P(hˆ(x)=1|x œ A) = P(hˆ(x)=1|x œ B)=0.8. While class probabilities dier, both hú and hˆ are Bayes optimal and, in this case, reflect the same binary predictor when selecting a threshold smaller or equal to 0.8. Main recovery result. The derivations in Blum & Stangl (2020) are concerned with fairness constrained empirical risk minimization where an estimator Yˆ is deemed fair if Yˆ‹G|Y = y for y = 1 (equality of opportunity) or y œ {0, 1} (Equalized Odds). Here, G denotes the protected group attribute. In our binary 1The code generating the results in this Section can be found in the following repository: https://anonymous.4open. science/r/bias-stress-test-sandbox prediction setting, the Equalized Odds (Hardt et al., 2016) constraint is equivalent to $$P\left({\hat{Y}}{\Big|}\mathbf{x}\in A,Y=y\right)=P\left({\hat{Y}}{\Big|}\mathbf{x}\in B,Y=y\right),$$ for y œ {0, 1}. The main result presented here is based on Theorem 4.1 in Blum & Stangl (2020) where a proof can be found. We note that this is a population level or 'with enough data' type of result. Theorem 1 (Blum & Stangl (2020)) *Let true labels be generated by the described data generating process* and corrupted with under-representation bias. Assume that 1. *both groups have the same base rates, i.e.* p = P(húA(x)=1|x œ A) = P(húB(x)=1|x œ B)*, and* 2. label noise ÷ œ [0, 0.5) and bias parameter - œ (0, 1] *are such that* $$P(h_{B}^{*}(\mathbf{x})=1|\mathbf{x}\in B),\,a n d$$ $$(1-r)(1-2\eta)+r(1-\eta)\beta>0.$$ Then, hú = (húA, húB) is among the classifiers with lowest error on the biased data that satisfies Equalized Odds. ## 3.2 Empirical Replication Using The Sandbox Toolkit Contribution of the sandbox. The finding in Theorem 1 implies that fairness intervention can improve accuracy in some settings which goes against the common framing of fairness and accuracy as a trade-o. However, Theorem 1 is a purely theoretical result which can make it dicult to assess its usefulness in any specific application setting. For example, the Theorem operates at the population level suppressing issues of sample complexity. In practice it is unclear how much data would be needed for a satisfactory performance even if all the assumptions were met. Our proposed sandbox tool can bridge this gap between theory and practice by providing a controlled environment to test the eectiveness of fairness interventions in dierent settings. In the case of Theorem 1, the fairness sandbox can help to (1) Give a sense of how fast the result kicks in with a finite sample, (2) Assess eectiveness in a specific data generation and hypothesis class setting , and (3) Understand the importance of the dierent assumptions for the result. Implementation with the sandbox. We describe how the dierent modules of the sandbox toolkit are used to empirically replicate the findings of Theorem 1. 1. **Choice of Data**: We opt for a synthetic data set generation according to the exact process described in Section 3.1. This leaves room for several input parameters which can be varied by the user. While some of these parameters determine whether the assumptions of Theorem 1 are met, i.e. the relative size of groups and the amount of label noise, the theorem is agnostic to the number of features, the distribution of features, and the Bayes optimal classifiers and their hypothesis class. In order to simplify reasoning about the results, our analysis focuses on a setting with only three features x1, x2, x3 ≥ N (0, 1) and a linear function class for the Bayes optimal classifiers. The illustration can be readily repeated for more features and a dierent Bayes Optimal classifier. But this simple example suces to illustrate some of the key limitations of the theory in Blum & Stangl (2020). More specifically, the group dependent Bayes optimal classifiers húA, húB are thresholded versions of logistic regression functions $$\log{\frac{p}{1-p}}=b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}\tag{1}$$ for group dependent parameter vectors b œ {bAú, bBú}. We set the parameters to fixed values bAú = (≠0.7, 0.5, 1.5)T and bBú = (0.5, ≠0.2, 0.1)T which leads to dierent continuous distributions of probabilities between groups but to approximately the same positive rates when thresholded at 0.5 as required for the theoretical setting of the Theorem. Note that to adhere to the theory, we start out with a threshold-based classifier and subsequently add label noise with ÷ (instead of the more common way of turning probabilistic predictions into labels, i.e., flipping biased coins for the binary labels). $$(1)$$ 2. **Bias Injection:** Theorem 1 is concerned with a specific form of inter-sectional under-representation bias which strategically leaves out positive observations from the minority group. The sandbox is set up to inject this type of bias based on a user specified parameter - which determines the amount of bias injected. The addition of further types of biases goes beyond the theory presented in Blum & Stangl (2020) and is empirically explored with the sandbox tool in Section 4. 3. **Model Class Selection:** The theoretical result we are looking to replicate operates on a population level and does not constrain the Bayes optimal classifier or learned model to belong to a specific class of functions. However, in practice we need to select a class of models with enough capacity to express both Bayes optimal classifiers húA and húB at once since the fairness constrained empirical risk minimization requires us to train a single model for both groups. To accomplish this, we select a logistic regression function of the form $$\log{\frac{p}{1-p}}=b_{0}+\mathbf{1}(\mathbf{x}\in A)\mathbf{b_{A}}^{T}\mathbf{x}+\mathbf{1}(\mathbf{x}\in B)\mathbf{b_{B}}^{T}\mathbf{x}$$ $$=b_{0}+\begin{bmatrix}\mathbf{b_{A}}\\ \mathbf{b_{B}}\end{bmatrix}^{T}\mathbf{x}^{\prime},$$ $$\left(2\right)$$ where bA corresponds to the parameters used for rows belonging to group A, and bB denotes the parameters used for x œ B. The indicator functions are absorbed into the data by reformatting the feature vectors x œ R3 to feature vectors xÕ œ R6 with xÕT = [xT , 0, 0, 0] for x œ A and xÕT = [0, 0, 0, xT ] for x œ B. Note that the additional intercept b0 increases the capacity of the model and can only help our performance here. 4. **Fairness Intervention:** Recall that Blum & Stangl (2020) analyze the setting of fairness constrained empirical risk minimization. We choose Equalized Odds constrained optimization as fairness intervention in order to mimic the theoretical setting of the result we are replicating. The constrained optimization is performed by scikit-learn unpenalized logistic regression with Equalized Odds enforcement provided by Fairlearn's Grid Search function which is based on Agarwal et al. (2018). For the sake of comparison, we also fit the model from Equation 2 without fairness intervention. Since in-processing fairness intervention is not always desirable or possible, e.g. sometimes we only have access to biased black-box predictions, we conduct the same experiments with Fairlearn's postprocessing method which enforces Equalized Odds by optimizing group-specific thresholds (Hardt et al., 2016). The respective results are discussed in detail in Appendices B.0.1 and B.0.2. 5. **Evaluation Metrics:** There are several relevant evaluation metrics for the case study, all of which are supported by our sandbox toolkit. First, we are interested in the overall and group-wise accuracy of the learned model which is provided for the models learned with and without fairness invention. Second, we evaluate the Equalized Odds disparity of the models in order to demonstrate the eectiveness of the intervention. Following Agarwal et al. (2018), the extent to which a classifier ˆf violates Equalized Odds is computed by $$\operatorname{disp}({\hat{f}})=\operatorname*{max}_{g,y}\vert\mathbb{E}[{\hat{f}}(\mathbf{x})|G=g,Y=y]-\mathbb{E}[{\hat{f}}(\mathbf{x})|Y=y]\vert,$$ where G is the protected group attribute. This definition is adapted to a finite data version by inserting the respective sample means for the expected values. Lastly, we want to demonstrate the explicit finding of Theorem 1 which is concerned with the recovery of Bayes optimal classifier. To this end, we compute the fidelity between the predictions of the learned models and the Bayes optimal classifier. The fidelity between two binary classifiers ˆf1 and ˆf2 with respect to a data set D is defined as $$\operatorname{fid}_{D}({\hat{f}}_{1},{\hat{f}}_{2})={\frac{1}{|D|}}\sum_{x\in D}\left|{\hat{f}}_{1}(\mathbf{x})-{\hat{f}}_{2}(\mathbf{x})\right|,$$ i.e. as the fraction of examples on which the predictions of the classifiers coincide. The evaluation metrics are output each for the training and test sets. While fidelity results are discussed in detail in the main text, we refer to Appendix A for a summary of accuracy and disparity results. ![8_image_0.png](8_image_0.png) Figure 3: Test set fidelity between Bayes optimal classifier and models trained on biased data with and without fairness intervention using n = 300, 3000, 30000 (left to right) samples for training and testing each. Results are reported averaged over 50 simulation runs with error bars for one standard deviation in each direction. We see that Equalized Odds constrained optimization retrieves the Bayes optimal classifier almost perfectly at all levels of bias when using large amounts of data (n = 30000) but deviates from the Bayes optimal predictions when trained on n œ {300, 3000} data points. The model class used in this example is logistic regression in 7 parameters. 6. **Visualization:** The sandbox tool provides visualizations of the eectiveness of the fairness intervention. In the context of the case study, this consists of figures displaying the accuracies and fidelities to the Bayes optimal classifier of the models learned with and without fairness intervention at dierent levels of injected inter-sectional under-representation bias. ## 3.3 Empirical Results Parameter inputs. The sandbox tool with the described configurations is used to examine the empirical performance of the theoretical result from Blum & Stangl (2020) presented in Theorem 1. In this setting of the sandbox, the user can input several numerical values corresponding to the size of the minority group r œ (0, 0.5], the number of synthetic data points to be generated n œ N, the amount of overall label noise to be injected ÷ œ [0, 0.5) and the number of times the whole simulation should be repeated R. In each run of the simulation, new data are sampled and injected with bias before the respective models are fit. The whole simulation pipeline is performed based on the input values and performance metrics and visualizations are output to the user. For the sake of demonstration, we chose r = 0.2, ÷ = 0.4, R = 50, which provides one of many examples within the bounds of the theory. In an eort to explore how much data are actually required to obtain the performance promised by the population level theory in our example, we vary the number samples n œ {600, 6000, 60000}. We note that half of the synthetically generated data are used for model training and half for evaluation and visualization. Results. Figure 3 displays the fidelity results of the sandbox simulation case study measured on the portion of the data sets withheld for testing. We note that fidelity here corresponds to the fraction of test examples that receive the same predictions from the model trained on biased data and the Bayes optimal classifier fit to the unbiased data. No bias is injected in the data used for testing. We intuitively expect the model fit on biased data without fairness intervention to deviate from the Bayes optimal model especially when large amounts of bias are injected. This is confirmed by the downward slopes of the dashed curves in Figure 3. Theorem 1 implies that fitting the same model on biased data with Equalized Odds fairness intervention recovers the Bayes optimal classifier on the true data distribution. To see that the assumptions of the Theorem are met in our example, note that we selected the Bayes optimal classifiers húA and húB specifically to have equal base rates (see Equation 1), and that our choice of parameters r = 0.2 and ÷ = 0.4 fulfills (1 ≠ r)(1 ≠ 2÷) + r(1 ≠ ÷)— > 0 for all levels of injected bias 1 ≠ - œ [0, 1). We would thus expect the fidelity of the models with fairness intervention to be 1 for all levels of 1 ≠ - which is only partially supported by Figure 3. For small amounts of training data (n = 300), the average fidelity over simulation runs and levels of injected bias only reaches a level of 0.837 with even poorer performance in the minority group. In cases with 90% of positive minority examples deleted from the training data, the model learned with fairness intervention on average only classifies about 64% of the minority test examples the same way as the Bayes optimal classifier. In addition, results vary significantly over simulation runs leading to many instances with little to moderate amounts of injected bias in which the model learned from biased data without intervention is closer to the Bayes optimal than the model with intervention. With more training data (n = 3000), the test fidelity performance of the intervention model increases to 0.942 on average. Yet even in this setting, the biased model outperforms the intervention model if only 20% or less of positive minority examples are deleted from the test data. Only when increasing the training data size to (n = 30000), the fidelity of the intervention model reaches 0.982 which is much closer to the results implied by the theory. In this case, the model with intervention outperforms the model without intervention for almost all positive bias levels. Overall, the findings of the sandbox demonstrate that a considerable amount of data are needed to recover from under-representation bias. We only observed satisfactory results at all positive bias values when 30000 training examples were used for a relatively simple 7 parameter logistic regression model. 2 Many practical applications fall into the range of small data sets and little to moderate under-representation bias in which the intervention model showed to be no more successful in recovering the Bayes optimal model than a model without intervention. The presented case study demonstrates how the sandbox toolkit can help to uncover insights of this type for users who are looking to assess the eectiveness of fairness intervention in their specific application setting. Comparison to post-processing intervention. While Blum & Stangl (2020) specifically call for inprocessing intervention, fairness constrained risk minimization is not the only method that targets Equalized Odds across groups. Since post-processing strategies are desirable in some cases, we repeat the same experiments with the threshold based post-processing Equalized Odds algorithm from Hardt et al. (2016). Note that this corresponds to changing the configuration of step '(4) Fairness intervention' in the sandbox pipeline while keeping the fairness metric fixed. The results from this analysis are discussed in Appendix B.0.1 and indicate a very similar performance to the in-processing method. ## 4 Exploration Of Other Forms Of Bias Section 3 demonstrates the usefulness of the proposed sandbox tool by empirically evaluating the performance of a theoretical result from Blum & Stangl (2020). For this, we assume the exact setting of the paper with requires a list of structural assumptions on the synthetic data generation, Bayes optimal model and type of injected bias. For example, the replicated finding only considers a specific case of under-representation bias. Real world applications are likely to violate some of the posed assumptions and can carry a number of dierent biases. In the following, we show how the modularity of the sandbox allows us to explore the performance of fairness intervention beyond the setting posed by the theory. We loosen the assumption of equal base rates in Bayes optimal predictions and inject dierent types of biases in order to stress-test the ecacy of the intervention. The changes to the sandbox modules discussed in the following refer to the sandbox configuration presented in Section 3.2. ## 4.1 Dierence In Base Rates Implementation with the sandbox and parameter values. The result of Theorem 1 relies on the assumption that base rates are the same across groups which is often violated in practice. We use the sandbox framework to test the extent to which the fidelity of the Equalized Odds intervention is aected by diverging rates and alter the data choice module of the sandbox used in the case study for this purpose. 2In general, the amount of data required to reliably fit a model increases with complexity of the model class. Many algorithms used in practice exceed the complexity of the model studied here which suggests that even more data are required to observe the desired fairness mitigation eects. ![10_image_0.png](10_image_0.png) Figure 4: Test set fidelity between Bayes optimal classifier and model trained on biased data with Equalized Odds intervention. Results are reported as an average over 50 simulation runs. Error bars correspond to one standard deviation in each direction. We see that the fidelity between models is generally smaller than 1 if base rates are not the same across groups. In other words, the intervention fails to retrieve the Bayes optimal classifier in these cases. A collection of data sets with dierent base rates is generated as follows. We leave the labeling model and eect parameters búA, búB untouched and sample the features x1, x2, x3 conditional on group membership with xi|(x œ A) ≥ N (d, 1) and xi|(x œ B) ≥ N (0, 1) for i = 1, 2, 3. Here, d is from a collection of feature mean values selected to lead to evenly spaced base rate dierences in [≠0.5, 0.5] once the binary Bayes optimal outcomes are computed. Note that the rate of positive outcomes for the minority group B is always 0.5 which justifies the range of the interval. As in the previous experiments, we set the additional input parameters to r = 0.2, ÷ = 0.4 and R = 50. This aligns with the setting in Section 3 and thus enables us to compare performance across dierent types of injected bias. That the choices made here are one example among many, they were picked early on to comply with theory and were never changed to obtain specific results. We run the experiment with n = 60000 data points at each base rate dierence split evenly between training and testing and set the under-representation bias level to 1 ≠ - = 0.4. Results. Figure 4 depicts the test set fidelity of the classifiers trained on biased data with Equalized Odds intervention and the data-driven Bayes optimal model at dierent levels of base rate dierence between groups. The base rate dierence is here defined as the base rate of the majority group minus the base rate of the minority group where latter is fixed at 0.5. While the rate of positive Bayes optimal outcomes in the minority group is constant at 0.5, the base rate in the majority group varies between 0 and 1 in our experiment. We see that the intervention model is able to recover the Bayes optimal classifier for a base rate dierence of 0 which corresponds exactly to the setting of Theorem 1. The larger the base rate dierence becomes in absolute value, the more the predictions of the fair trained model and the Bayes optimal model diverge. The performance in the minority group appears to be particularly poor with a minority base rate of 0.5 and majority base rate of 0.8 leading to minority group fidelity of 0.423 on average. Larger dierences in base rates also seem to lead to intervention models with less stable performance which leads to large standard errors. In order to understand why the result of Theorem 1 does not generalize to settings with dierent base rates pA "= pB, consider that the true positive rate of the Bayes optimal classifier for G œ {*A, B*} on unbiased data takes the form $$P(h_{G}^{*}({\bf x})=1|Y=1,{\bf x}\in G)=\frac{(1-\eta)p_{G}}{p_{G}(1-\eta)+(1-p_{G})\eta},$$ ![11_image_0.png](11_image_0.png) Figure 5: **Sampling bias.** Test set fidelity between Bayes optimal classifier and models trained on biased data with Equalized Odds intervention on 30000 samples for training and testing each. Results are reported averaged over 50 simulation runs with error bars for one standard deviation in each direction. Bias is injected into either the entire minority group (left), or the positively labeled minority group (middle). On the right, bias is injected into the positively labeled minority group and we assume a base rate dierence of -0.2 which is dierent for dierent base rates pA and pB. When under-representation bias 1≠— "= 1 is introduced, the true positive rate for group B becomes $$P(h_{B}^{*}({\bf x})=1|Y=1,{\bf x}\in B)=\frac{(1-\eta)\beta p_{B}}{p_{B}\beta(1-\eta)+(1-p_{B})\beta\eta},$$ which coincides with the rate for the unbiased data. It follows that the Bayes optimal classifier does not have equal true positive rates, and thus does not satisfy Equalized Odds, on the biased data if base rates are dierent. It can therefore not be recovered by the fair trained model. ## 4.2 Sampling Bias Implementation with the sandbox and parameter values. Our previous discussion of underrepresentation bias only considered bias specifically injected into the subgroup of examples at the intersection of minority group and positive labels. We extend this setting to under-representation bias in the full minority group, which we will refer to as sampling bias, by altering the bias injection module of the sandbox to remove minority examples with some probability ranging between 0 and 1. Experiments are repeated with equal base rates and with base rate dierence of -0.2 which allows us to explore how the performance changes as a dierence in base rates is introduced while ensuring that the data still contains examples for both outcomes in each group. We set the parameter inputs to r = 0.2, ÷ = 0.4, R = 50 and n = 60000 to comply with the parameter choices in previous experiments. Results. The results of the experiments for bias injected in the whole minority group, positively labeled minority examples, and positively labeled minority examples with dierent base rates are depicted in the first column of Figure 5. The left plot shows a decreasing minority test set fidelity with increasing sampling bias in the minority group. With maximally injected bias, 99% of minority examples are deleted and the average minority group fidelity only reaches 0.694. With smaller amounts of bias, the intervention model classifies over 90% of minority test samples like the Bayes optimal classifier. Intuitively, the decreased performance on the minority set can be led back to less available training data for the group. Since we fit only one model for both groups, this leads the predictions for the majority group to be closer to the Bayes optimal predictions than for the minority group. When bias is injected only for positively labeled minority examples, the intervention successfully recovers the base optimal classifier as discussed in Section 3. The right plot of the figure displays the test set fidelity in the case of dierent Bayes optimal base rates in groups with bias injected only for positively labeled minority group examples. We note that the fidelity here appears much less stable over dierent runs of the simulation which leads to larger standard errors. In contrast to the ![12_image_0.png](12_image_0.png) Figure 6: **Label bias.** Test set fidelity between Bayes optimal classifier and models trained on biased data with Equalized Odds intervention on 30000 samples for training and testing each. Results are reported averaged over 50 simulation runs with error bars for one standard deviation in each direction. Bias is injected into either the entire minority group (left), or the positively labeled minority group (middle). On the right, bias is injected into the positively labeled minority group and we assume a base rate dierence of -0.2. setting with equal base rates, the bias injection here impacts also the fidelity of the majority group. Recall that the Bayes optimal classifier does not satisfy Equalized Odds on the biased data in this setting and can thus not be recovered by strictly requiring Equalized Odds. However, the figure suggests a remarkably high fidelity for low amounts of bias in the dierent base rates case and we hypothesize that the model was faced with large accuracy fairness trade-os and opted for a small violation of the fairness constraint in favor of accuracy. ## 4.3 Label Bias Implementation with the sandbox and parameter values. Recall that our experiments use a noise parameter ÷ which represents the probability with which the Bayes optimal label is flipped in our observed labels. So far, this value was chosen independently from group membership. Since data in real-world application often suers from dierential label noise, we test how well the Equalized Odds intervention can recover the Bayes optimal model under label bias. To achieve this, we alter the choice of data module to inject 40% label noise into the majority group to be consistent with the previous experiments. We then change the bias injection module to inject label bias of 0-45% into the minority group. Note that the label bias cannot exceed 50% in order for the Bayes optimal classifiers to be correct which justifies the chosen range. Similarly, we repeat the experiment by injecting constant bias of 40% into both the majority and negatively labeled minority group and vary the amount of bias among the positively labeled minority. In all instances, the test set has 40% label bias throughout like in the previous experiments. The experiment is repeated with dierent base rates for bias injected into the positively labeled minority examples. As before, we set r = 0.2, R = 50 and n = 60000. Results. The results of the label noise experiments are depicted in Figure 6. We observe that fidelity performance of the intervention model deteriorates in the minority group as the amount of label noise diverges. This holds true both when bias is injected into the whole group and when bias is injected into positively labeled minority examples. To understand why the intervention cannot retrieve the Bayes optimal classifier, we note that the Bayes optimal classifier does not fulfill Equalized Odds under dierential label noise. To see this, assume a setting with ÷maj = 0.4 label noise bias in the majority group and ÷min "= 0.4 bias in the minority group. The Bayes optimal classifier húA has true positive and true negative rates of 0.6 on the majority group data while húB has true positive and true negative rates of 1 ≠ ÷min "= 0.6 on the minority portion of the biased data. Note that this assumes that base rates are 0.5 like in our experiment, but the same phenomenon with a similar calculation holds true for other cases. If bias is injected into the ![13_image_0.png](13_image_0.png) Figure 7: **Feature measurement bias.** Test set fidelity between Bayes optimal classifier and models trained on biased data with Equalized Odds intervention on 30000 samples for training and testing each. Results are reported averaged over 50 simulation runs with error bars for one standard deviation in each direction. Bias is injected into either the entire minority group (left), or the positively labeled minority group (middle). On the right, bias is injected into the positively labeled minority group and we assume a base rate dierence of -0.2. positively labeled minority examples and base rates dier by -0.2, the fidelity curves of both the minority and majority groups are impacted. ## 4.4 Feature Measurement Bias Implementation with the sandbox and parameter values. Our final exploration focuses on a type of feature noise which is injected in the form of missingness in one of the features. We alter the bias injection module in the sandbox and set feature x1 to 0 with varying probability while omitting the injection of other types of biases. The functionality to enforce dierent base rates in the data choice module is retained. Feature measurement bias is injected in to the whole minority group or the minority group with positive labels in dierent variations of the experiment. As before, we choose r = 0.2, ÷ = 0.4, R = 50 and n = 60000. Experiments are repeated with base rate dierences of 0 and -0.2. Results. The test set fidelity results of the feature noise experiments are displayed in Figure 7. We see that the intervention model recovers the Bayes optimal model for small amounts of feature missingness while fidelity slightly decreases as more bias is injected. While performance remains above the 0.95 mark for most amounts of injected bias, we observe an average fidelity of only 0.635 if all instances of x1 in the minority group default to 0. Similar to the other types of bias, the intervention model successfully recovers from measurement bias if the bias is only injected into positively labeled examples. In contrast to our observations for other types of biases, a dierence in base rates appears to not deteriorate the fidelity by more than 1-2 percentage points for up to 70% feature missingness in a single feature among the minority group examples with positive labels. Assuming our hypothesis from Section 4.2, i.e. with no additional bias the fair learned model on dierent base rates trades o fairness for accuracy, is true, we conjecture that the performance when injecting measurement bias does not deteriorate quickly because it only introduces very small amounts of Equalized Odds disparity. On a high level, removing the information of one feature leads to a higher concentration around the mean in the predicted conditional probabilities. While this leads to a small violation of Equalized Odds, the fair trained model accepts this unfairness in favor of a high accuracy. ## 4.5 Comparison To Post-Processing Intervention We repeat the exploration experiments with threshold-based post-processing fairness intervention (Hardt et al., 2016) which corresponds to altering the fairness intervention module of the sandbox tool. The result are discussed in detail in Appendix B.0.2. While the two intervention methods showed to lead to fairly similar results in the original case study setting, this is not necessarily the case when base rates dier or dierent types of bias are injected. In those cases, the algorithms face a trade-odd between fairness and accuracy which can lead to dierent predictions across dierent intervention methods. For example, we see that the in-processing method yields higher fidelity performance than the post-processing intervention when feature measurement bias is injected as the in-processing method is better in trading o some amount of fairness for accuracy. ## 5 Related Work Types of fairness-enhancing algorithms. At a high level, there are three classes of fairness-enhancing algorithms or fairness interventions: pre, post, and in-processing (Zhong, 2018). These algorithms are applied at dierent stages in the ML pipeline and can be accommodated by our sandbox toolkit. Pre-processing algorithms modify the data itself and remove the underlying biases which is best suited when training data are accessible and modifiable. Examples of pre-processing algorithms include optimized pre-processing (Calmon et al., 2017), disparate impact remover (Feldman et al., 2015) and reweighing (Kamiran & Calders, 2012). In-processing algorithms operate on the model directly, removing biases during the training process. Examples in this category include the Meta-Fair Classifier (Celis et al., 2019), adversarial debiasing (Zhang et al., 2018) and exponentiated gradient reduction (Agarwal et al., 2018). Post-processing algorithms utilize the predictions and modify the model outputs directly. This approach is best suited when neither the data nor the models are accessible, as it only requires access to black-box predictions. Example algorithms include Equalized Odds post-processing (Hardt et al., 2016) and reject option classification (Kamiran et al., 2012). In this paper, we demonstrate how our sandbox toolkit applies both in-processing and post-processing fairness interventions at the example of a result from (Blum & Stangl, 2020). Fairness toolkits. In order to ease application of fairness interventions in practice, recent work has developed a number of open-source ML fairness software packages or "fairness toolkits" (Bird et al., 2020; Bellamy et al., 2019; Saleiro et al., 2018; Bantilan, 2018; Wexler et al., 2019; Adebayo et al., 2016; Tramer et al., 2017). For example, Fairlearn (Bird et al., 2020) consists of an API to allow researchers and developers to easily use popular fairness interventions (such as Equalized Odds or Demographic Parity) at the three stages of the ML pipeline listed above. Most of these toolkits focus on *fairness interventions*, or how to apply fairness algorithms (Bird et al., 2020; Bellamy et al., 2019; Saleiro et al., 2018; Bantilan, 2018; Wexler et al., 2019; Adebayo et al., 2016; Tramer et al., 2017). A key distinguishing feature of our work is that our toolkit focuses on specific *biases* themselves. Our toolkit allows users to inject biases into their data, uses algorithms from Fairlearn to apply fairness interventions, and then compares to the ground truth. Our toolkit currently uses Fairlearn to apply fairness interventions due to its popularity and ease-of-use. Though, in future development of the toolkit, we plan to add other fairness toolkits, such as AIF360 (Bellamy et al., 2019). Algorithms for specific sources of unfairness. A motivating reason why we focus on injecting specific biases in our toolkit is to evaluate or empirically replicate work or which claims to address specific sources of bias or unfairness. For example, in this paper, we primarily focus on representation bias, measurement bias, and label bias (Frénay & Verleysen, 2013; Mehrabi et al., 2021). See Mehrabi et al. (2021) or Suresh & Guttag (2021) for further detail on more sources of bias or unfairness. To address these sources of unfairness, some have proposed solutions beyond algorithms, such as creating a more representative dataset3 addressing larger societal inequities. In our toolkit, however, we focus on interventions which can be implemented at the time of training a model, after the dataset has already been created and any broader conditions surrounding model deployment are fixed. Recent work has proposed attempts at algorithmic solutions to remedy specific sources of biases or unfairness. Here, we present examples of the kinds of papers which our toolkit would be able to evaluate. For example, in the context of medical diagnoses, there exists a significant discrepancy in the quality of an evaluation (consider this as the label) between dierent races (Obermeyer et al., 2019). Khani & Liang (2021) show that removing spurious features (e.g. sensitive attributes) can decrease accuracy due to the inductive bias of overparameterized models. Zhou et al. (2021) finds that oversampling underrepresented groups can not only mitigate algorithmic bias in systems that consistently 3See, for example, https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html in response to (Shankar et al., 2017). predict a favorable outcome for a certain group, but improve overall accuracy by mitigating class imbalance within data that leads to a bias towards majority class. Du & Wu (2021) observes the impact of fairnessaware learning under sample-selection bias. Wang et al. (2021) considers label bias based on dierential label noise. Wang et al. (2020) looks at whether fairness criteria can be satisfied when the protected group information is noisy, missing, or unreliable. While our toolkit is able to address many of the claims in these papers, we focus on applying Equalized Odds intervention to data sets injected with the biases listed above in this paper. ## 6 Summary And Future Directions This work presented the idea and first implementation of a simulation toolkit to investigate the fairness consequences of various forms of biases and identify eective remedies for each the performance of fairnessenhancing algorithms under various forms of counterfactually injected biases. We demonstrated the utility of our tool through a thorough case study of Blum & Stangl (2020). The theoretical contribution of Blum & Stangl (2020) stated that if the source of unfairness is under-representation bias in the training data, constraining ERM with EO can recover the Bayes optimal classifiers on the unbiased data under certain conditions. Our tool allowed us to examine EO constraints under the conditions of Blum & Stangl (2020) as well as a number of new biased settings. Lessons from case study. Through our case study, we established several limitations of the existing theory. In particular, we observed the need for very large volumes of data for the theory to hold. (In our example, we needed 30k training data points for a 7-parameter logistic regression.) Furthermore, our empirical results suggest that the smaller the amount of injected bias, the larger the volume of data needed in order for the fairness-constrained model to outperform the unconstrained one trained on biased data. We emphasize that many practical applications do not satisfy these preconditions (i.e., either the volume of data or the amount of under-representation bias is relatively small). Therefore the theoretical findings of Blum & Stangl (2020), while conceptually interesting, might not be applicable in those practical domains. Another key prerequisite of the theory was the equality of base rates across groups. This assumption is also often violated in practice, and we showed empirically that EO constrained ERM can not recover the Bayes optimal models if base rates dier—even slightly. Exploring the implications of various biases and interventions. We experimented with various forms of biases and assessed the performance of EO constraints in alleviating them. For example, our empirical investigation of *sampling bias* demonstrated how the EO-constrained model struggles to recover comparable performance across groups. We also observed that the constraint could not retrieve the Bayes optimal classifiers under *label bias* either. In terms of the choice of interventions, we contrasted the in-processing method of Agarwal et al. (2018) with the post-processing method proposed by Hardt et al. (2016). The key distinction between these two approaches appeared to be in their ability to trade o accuracy and fairness. In particular, the in-processing method oers a wider range of tradeo possibilities, while the post-processing method yields fair classifiers but with no error guarantees. When the theoretical conditions of our case study hold, the two methods perform similarly, but they diverge as soon as those conditions are relaxed. Scope of applicability and limitations. First, we should emphasize that the sandbox tool should be understood as an environment to explore the limitations of various fairness interventions in user-specified biased settings rather than a method to obtain fully generalizable results. The insights obtained through this exploration can form the basis of informed hypotheses for further empirical and/or theoretical investigations, but on their own they do not guarantee generalizability. For example, the analysis presented in Section 4 reveals that, at least in our specific experimental setting, EO-constrained optimization cannot recover the Bayes optimal classifier when base rates between groups dier or the data are impacted by label bias. While these findings are not guaranteed to hold in settings beyond the ones studied here, they allow us to surface several limitations of EO constraints as fairness interventions. Second, we note that the current version of our tool is designed with the intention of helping researchers and students to form a better understanding of sources of unfairness. Our implementation of the data biases mentioned in this work is highly simplified, and it does not capture the complex nature of bias in real-world data. Addressing bias in specific domains requires prolonged deliberations with domain-experts and stakeholders. Therefore, the results obtained using our tool should not be interpreted in vacuum as the proof of ecacy (or lack thereof) for a given algorithmic fairness interventions *in practice*. An active-learning module in AI ethics curricula. In recent years, call for "greater integration of ethics across computer science curriculum" have amplified (see, e.g., Fiesler et al. (2020)). However, instructors without a background in the area may lack the necessary tools to cover these issues in depth (Saltz et al., 2019; Martin, 1997). Our sandbox toolkit can serve these educators as a self-contained and ready-to-use learning module. With the toolkit, students can inject various types of biases into a given dataset, observe the fairness ramifications of the bias, and evaluate the eectiveness of various fairness interventions in alleviating them. By oering a hands-on practice, we hypothesize that the toolkit improves students' understanding of the Machine Learning pipeline, the underlying causes of unfairness, and the scope and limitation of existing algorithmic remedies depending on the type of bias present in the setting at hand. In our future work, we plan to conduct human-subject studies at college-level computer science programs to examine the eect of our sandbox toolkit in achieving FATE-related learning objectives and improving the learning experience. ## References Julius A Adebayo et al. *FairML: ToolBox for diagnosing bias in predictive modeling*. PhD thesis, Massachusetts Institute of Technology, 2016. Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In *International Conference on Machine Learning*, pp. 60–69. PMLR, 2018. Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, and Alain Tapp. Fairwashing: the risk of rationalization. In *International Conference on Machine Learning*, pp. 161–170. PMLR, 2019. Niels Bantilan. Themis-ml: A fairness-aware machine learning interface for end-to-end discrimination discovery and mitigation. *Journal of Technology in Human Services*, 36(1):15–30, 2018. Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Homan, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra MojsiloviÊ, et al. Ai fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. *IBM Journal of Research and Development*, 63(4/5):4–1, 2019. Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: The state of the art. *Sociological Methods & Research*, 50(1):3–44, July 2018. Sarah Bird, Miro Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. Fairlearn: A toolkit for assessing and improving fairness in ai. Microsoft, Tech. Rep. MSR-TR-2020-32, 2020. Avrim Blum and Kevin Stangl. Recovering from biased data: Can fairness constraints improve accuracy? In *Proceedings of the 2020 Symposium on the Foundations of Responsible Computing (FORC)*, 2020. Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 3995–4004, 2017. L Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K Vishnoi. Classification with fairness constraints: A meta-algorithm with provable guarantees. In *Proceedings of the conference on fairness, accountability, and transparency*, pp. 319–328, 2019. Irene Y. Chen, Emma Pierson, Sherri Rose, Shalmali Joshi, Kadija Ferryman, and Marzyeh Ghassemi. Ethical machine learning in healthcare. *Annual Review of Biomedical Data Science*, 4(1):123–144, July 2021. Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. *Big Data*, 5(2):153–163, June 2017. Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. In *Advances in Neural Information Processing Systems*, 2021. Wei Du and Xintao Wu. Robust fairness-aware learning under sample selection bias. *arXiv preprint* arXiv:2105.11570, 2021. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference on - ITCS 12. ACM Press, 2012. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In *proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining*, pp. 259–268, 2015. Casey Fiesler, Natalie Garrett, and Nathan Beard. What do we teach when we teach tech ethics? a syllabi analysis. In *Proceedings of the 51st ACM Technical Symposium on Computer Science Education*, pp. 289–295, 2020. Benoît Frénay and Michel Verleysen. Classification in the presence of label noise: a survey. *IEEE transactions* on neural networks and learning systems, 25(5):845–869, 2013. Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems, 29:3315–3323, 2016. Matthew Joseph, Michael Kearns, Jamie H Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in* Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2012. Faisal Kamiran, Asim Karim, and Xiangliang Zhang. Decision theory for discrimination-aware classification. In *2012 IEEE 12th International Conference on Data Mining*, pp. 924–929. IEEE, 2012. Fereshte Khani and Percy Liang. Removing spurious features can hurt accuracy and aect groups disproportionately. In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, pp. 196–205, 2021. Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-os in the fair determination of risk scores. *arXic preprint arXiv: 1609.05807*, 2017. Ronny Kohavi and Barry Becker. UCI machine learning repository, adult income dataset, 2017. URL https://archive.ics.uci.edu/ml/datasets/census+income. C Dianne Martin. The case for integrating ethical and social impact into the computer science curriculum. In The supplemental proceedings of the conference on Integrating technology into computer science education: working group reports and supplemental proceedings, pp. 114–120, 1997. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. *ACM Computing Surveys (CSUR)*, 54(6):1–35, 2021. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. *Science*, 366(6464):447–453, 2019. Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. Aequitas: A bias and fairness audit toolkit. *arXiv preprint arXiv:1811.05577*, 2018. Jerey Saltz, Michael Skirpan, Casey Fiesler, Micha Gorelick, Tom Yeh, Robert Heckman, Neil Dewar, and Nathan Beard. Integrating ethics within machine learning courses. *ACM Transactions on Computing* Education (TOCE), 19(4):1–26, 2019. Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. No classification without representation: Assessing geodiversity issues in open data sets for the developing world. In NIPS 2017 workshop: Machine Learning for the Developing World, 2017. Harini Suresh and John Guttag. A framework for understanding sources of harm throughout the machine learning life cycle. In *Equity and Access in Algorithms, Mechanisms, and Optimization*, pp. 1–9. 2021. Florian Tramer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Jean-Pierre Hubaux, Mathias Humbert, Ari Juels, and Huang Lin. Fairtest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 401–416. IEEE, 2017. Jialu Wang, Yang Liu, and Caleb Levy. Fair classification with group-dependent label noise. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 526–536, 2021. Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, and Michael I Jordan. Robust optimization for fairness with noisy protected groups. *arXiv preprint arXiv:2002.09343*, 2020. James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, and Jimbo Wilson. The what-if tool: Interactive probing of machine learning models. *IEEE transactions on visualization* and computer graphics, 26(1):56–65, 2019. Ke Yang. Mirror data generator package, 2023. URL https://github.com/DataResponsibly/ MirrorDataGenerator. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In *Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 335–340, 2018. Z Zhong. A tutorial on fairness in machine learning, 2018. URL https://towardsdatascience.com/ a-tutorial-on-fairness-in-machine-learning-3ff8ba1040cb. Yan Zhou, Murat Kantarcioglu, and Chris Clifton. Improving fairness of ai systems with lossless de-biasing. arXiv preprint arXiv:2105.04534, 2021.
Review 1: Summary: This paper is proposing a toolbox for assessing the impact of interventions on ML models intended for improving fairness. The tool is intended to have an ability to induce various forms of biases in the data, and then assess whether a particular in-training/post-hoc fairness interventions results in desired results. The data biases focused on i) measurement bias, ii) sampling bias, iii) label bias, iv) representation bias. The toolbox would work with synthetic as well as real-world data, though it appears to have more utility with synthetic data with the ability to introduce custom bias. The authors demonstrate potential utility via case studies, demonstrating practical implications of theoretical results of Blum & Stangl 20. Metrics to assess are primarily disparity and fidelity. Authors further demonstrate case studies with varying base rates, label noise etc. Strengths and Weaknesses: Strengths: 1. I believe this is a great tool to build and will indeed be useful for practitioners to assess utility of fairness interventions. 2. I believe the results are insightful and I like the choice of the case study that evaluates practical implications of the Theory from Blum & Stangl. 3. Overall presentation is quite good and clear Weakness: 1. I have a few qualms about the way the paper is structured. I think that makes it quite challenging to assess how extensive the framework is and how widely it could be used. 2. The framework description is not detailed enough. First I would like to know, how general is the kind of synthetic data that I can generate, what features are crucial for such fairness assessments and is the framework's synthetic data generation useful and sufficient for the kinds of potential assessments of fairness interventions I might consider in practice? 3. Then I'd like to know how easy it is to incorporate other fairness notions I might care about, that are not considered through the case studies. Is this sandbox essentially a package that can be extended easily? Or is it primarily an abstract framework. The former is a very tangible takeaway from the paper for the community to build on. On the other hand, the latter is not very general. I believe the goal is the former but I couldn't assess the code because the code link seems to have expired. If you wouldn't mind refreshing the link, I am happy to take another look. The way the paper is presented makes it seem like the goal is in fact the latter. See below on my requested changes, which I feel might enable you to make the presentation more general. 4. Some more egregious types of biases are not accounted for, such as confounding bias. If possible, I would like the authors consider introducing confounding bias (known and unknown) and see the impact of the same interventions (equalized odds and equal opportunity) on the downstream ML models. Here the source of confounding may or may not be the sensitive attribute. If the authors think this is not possible, unnecessary, or too challenging, please provide a justification. 5. On page 7, you say: "While class probabilities differ, both h∗ and hˆ are Bayes optimal and, in this case, reflect the same binary predictor." Is this assuming a threshold of <0.8? Minors: There are some typos that could be easily fixed: 1. Page 11: "extend to which fidelity..". 2. Page 3: "as an critical..." 3. I believe you can condense some text non-trivially, there are many places where text is repeated without adding new information. Requested Changes: My main suggestion is to restructure the paper a bit. First fully explain the framework in terms of its capabilities. i) What kinds of synthetic data can be generated (dimensionality, assumptions on the data-generating processes, types of noise etc.) ii) Go on to explain how the API might allow for i) choice of fairness metrics, ii) choice of fairness interventions, iii) choice of type of method for a particular fairness interventions. For example, I did not realize the toolbox allowed for synthetic data of different base-rates until I read more than half-way through. iii) Separate out description of capabilities for synthetic and real-world data. Ideally the above could be the first part of the paper after intro and related work. This is completely separated from specific case studies. Overall *using* the case studies to describe the framework is somehow very challenging on the reader to assess the tool itself. Then use the case studies purely as demonstration of the capabilities, and introduce necessary theory in the context of the toolbox capabilities. I believe this would make the paper much more accessible and increase the tool's impact on the community. I hope the authors can incorporate some of these suggestions. I am very happy to revisit my assessment based on the changes. Broader Impact Concerns: I don't have concerns on the ethics of the work, the authors have sufficiently discussed implications. ================================================== Review 2: Summary: This paper provides a simulation tool to stress-test fairness algorithms in a simulated environment. In particular, the simulated environment builds on and extends the setup of Blum & Stangl (2020). The authors provide empirical verification for the claim of Blum & Stangl (2020) that in some cases EO regularization with a proper strength would be the Bayes optimal solution for performance (i.e., there is no tradeoff between fairness violation and performance). They also extend the setup in several ways by adding bias and showing that the Bayes optimality breaks down. The authors release their analysis environment as a general-purpose tool for such stress testing with (1) the simulated environment, and (2) arbitrary datasets. Strengths and Weaknesses: **Strengths** * Stress testing the understanding of fairness mitigation and how well it generalizes in the presence of other biases (e.g., noisy labels, unbalanced demographics, and distribution shift) is extremely important, and a tool that could help better understand these problems in practice would be really useful for the research community. * The proposed tool seems to be very generally usable as discussed in Section 3. **Weaknesses** * Unfortunately, the full capacity of the proposed sandbox has not been showcased. The evaluations of the paper cover a tiny fraction of the full generality of the method, as discussed in Section 3. * Unfortunately, the paper did not really reveal new insights about understanding fairness in the presence of other types of biases. Bulk of the paper is dedicated to empirically replicate the results of Blum & Stangl (2020). While important, I think this would be better fitted for a blogpost, or one usecase out of many in the paper. * The rest of the analyses on other types of biases introduced in the simulated environment mostly revealed that the Bayes optimality claim of Blum & Stangl (2020) breaks down in such imperfect situations. I think given that for the most part the community is used to seeing tradeoff curves for fairness and performance, this would be considered expected. * The paper compares in-processing method of Agarwal et al. (2018) with the post-processing method of Hardt et al. (2016), and show that generally the former achieves better tradeoffs between fairness and accuracy. Given that it is considered well-known that in-processing methods give better in-distribution tradeoffs, I would call that finding unsurprising. * There are several pieces of work that specifically target *noisy labels*, *distribution shift*, *adversarial robustness*, *imbalanced demographics* within the context of fairness. I think at the very least the authors should consider making connections to those pieces of related work. Requested Changes: * I think the paper is too long (in its current form) given the technical content that is being put forth. I suggest the authors revise the paper and strive to fit it in ~12 pages (and move some of the details to an appendix), which I hope will make the paper more concise and fleshed out. * I think *Bias(Stress)-Test* is hard to read. I'd suggest the authors change it to something that is easier to recall. * Please make connections to the literature on *noisy labels*, *distribution shift*, *adversarial robustness*, *imbalanced demographics* within the context of fairness. * Please consider expanding the scope of comparisons to reveal insights about how the SOTA in-processing methods generalize. For example, I would be intrigued, for example, if the authors showed that method A achieves a better fairness/accuracy tradeoff compared to method B, however, when corrected for the effect of additional biases, method B indeed gives a better *real* tradeoff. **Other minor issues** > (Khani & Liang, 2021) shows that I'd suggest changing this and other similar usecases to *Khani & Liang (2021) show that* Broader Impact Concerns: I think the authors have done a great job discussing the broader implications of their work throughout the paper, and especially at the concluding remarks! ================================================== Review 3: Summary: This paper targets a very important problem where the conditions for algorithm selection are under-explored in the fair ML field. To close the gap, it provides a sandbox where users can explore the relationship between bias type and fairness intervention. The sandbox contains several modules, e.g., choice of data, bias injection, model selection, fairness intervention, evaluation, and visualization. This sandbox is demonstrated with the case study of Blum & Stangl. Strengths and Weaknesses: Strengths 1. The proposed framework facilitates bias injection and algorithm testing. 2. It is a practical workflow to evaluate the fairness algorithms in this sandbox. 3. The outputs of this framework are informative for the model developers. The various configurations make this framework flexible to various scenarios. Weakness 1. It is limited to just exploring the relationships between known bias and existing fairness interventions. It is not clear whether the framework is extensible or how easy it is to be extended. 2. This framework shows its effectiveness in tabular data. It would be more complete to show abilities in image data or sequential data. 3. Usually, the bias evaluation is coupled with performance evaluation. Most existing research assumes the performance is correctly assessed. It is unclear whether the existing work, including this one, is able to detect and evaluate bias if the performance metrics are wrongly selected. Requested Changes: This framework is expected to be a standalone package with clean and concise documentation such that users without a fair machine learning background are able to play. It is encouraged to include more types of bias injection and be compatible with mainstream machine learning software, e.g., TensorFlow and PyTorch. I believe it is conceptually feasible as the models are isolated with data and evaluation. Broader Impact Concerns: There are no impact concerns. Actually, I believe this work will promote fair machine-learning research in many domains and stimulate students' interest in fair ml. ================================================== Review 4: Summary: This paper presented a sandbox tool to evaluate fairness algorithms. The sandbox consists of several modules. The data module allows the user to import existing datasets or general synthetic data; the bias injection module can inject one or a combination of different types of biases including representation bias, measurement bias, sampling bias, and label bias; the model selection module can import any machine learning models in the scikit-learn paradigm; the fairness intervention module makes use of the fair learning algorithms from the Fairlearn package including pre-processing, in-processing and post-processing techniques; and finally the evaluation and visualization modules provide fairness evaluation metrics and visualization tools. The paper uses a case study to demonstrate how the sandbox can be used to conduct empirical evaluations of theoretical conclusions and the algorithm’s performance when different sources of unfairness are present in the data. Strengths and Weaknesses: + I think the sandbox will be very beneficial to fairness researchers. Currently, there are many theoretical studies on achieving algorithmic fairness, including dealing with the fairness-utility trade-off and impossibility results for different fairness notions. Although those research papers often provide their own simulation studies, it is still beneficial to have a uniform testbed so that we can test the performance of different algorithms under different settings. The key difference between the sandbox tool and existing tools is that the sandbox tool can inject biases into the data and the users can freely configure the types of biases and how they are to be injected. - Although the sandbox tool seems promising, there could be more experiments or case studies to show its utilities. Currently, there is only one case study in the paper, although different forms of bias are injected. For each form of bias, the authors only evaluate the fidelity of the machine learning model. Other metrics such as different types of fairness metrics are of interest and should be evaluated as well. I am also interested in the trade-off between fidelity and fairness under different types of bias. In addition, more case studies are preferred. For example, the authors can evaluate existing fairness impossibility results to see if different fairness notions can be achieved simultaneously under general and extreme conditions. - Another issue is that, although the paper briefly mentions the difference between the sandbox and other fair ML tools, it is still clear to what extent the sandbox overlaps with other popular tools like AIF 360 and What-if Tool. I know that bias injection is one major difference. However, for other modules like fairness intervention, evaluation and visualization, it is not clear where the differences are. The authors should also make recommendations on how the users can combine different tools together. Requested Changes: - Add the evaluation of fairness metrics like demographic parity and equality of opportunity in the case study. - More case studies are preferred. - Explain the difference between the sandbox and other tools like AIF 360 and What-if Tool. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Reject Comment: The paper proposes a "sandbox" for thoroughly evaluating fairness algorithms, by allowing for injection of various interventions (e.g., label bias, measurement bias). The authors provide an initial implementation of this idea, and use this to study findings from (Blum and Stangl, 2020), as well as showcase extensions to new settings. Reviewers generally appreciated the motivation of having a fairness "sandbox"; however, there were concerns about the provided evidence being convincing enough. The "sandbox" comprises two components: the conceptual proposal, and the practical implementation. For the latter, as acknowledged by the authors, the provided implementation is a prototype which misses certain features (e.g., visual interface, seamless integration into existing libraries). The provided code comprises a series of Colabs, which are interesting, but the general reviewer consensus is that these fall somewhat short of being something that would be immediately useful to other researchers with minimal friction. e.g., the individual Colabs appear to have sparse comments, and it would appear to require some effort to modify them systematically to test out a new algorithm. The former by itself could plausibly be a primary contribution: if it gives researchers in the space some way to systematically study their techniques, and inspire them to implement their own version of a "sandbox", then that could be useful. However, on this front it is not clear if the paper is fully successful. Section 2 offers a nice summary of the different general means of algorithmically intervening. However, without some very concrete recommendations for each --- e.g., specific families of label shift distributions --- it is not clear if these by themselves could be actionably employed by other researchers. Overall, while the paper is definitely well motivated with an interesting idea, the general consensus is that the current execution falls a little short of being completely convincing. ==================================================
# Graph Neural Networks Designed For Different Graph Types: A Survey Josephine M. Thomas∗jthomas@uni-kassel.de GAIN - Graphs in Artificial Intelligence and Machine Learning Intelligent Embedded Systems University of Kassel, Germany Alice Moallemy-Oureh∗amoallemy@uni-kassel.de GAIN - Graphs in Artificial Intelligence and Machine Learning Intelligent Embedded Systems University of Kassel, Germany Silvia Beddar-Wiesing∗s.beddarwiesing@uni-kassel.de GAIN - Graphs in Artificial Intelligence and Machine Learning Intelligent Embedded Systems University of Kassel, Germany Clara Holzhüter∗*clara.juliane.holzhueter@iee.fraunhofer.de* GAIN - Graphs in Artificial Intelligence and Machine Learning Fraunhofer Institute for Energy Economics and Energy System Technology (IEE) Kassel, Germany Reviewed on OpenReview: *https: // openreview. net/ forum? id= h4BYtZ79uy* ## Abstract Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined as many different types which suitably reflect the individual contexts of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, it has not yet been gathered which GNN can process what kind of graph types. In this survey, we give a detailed overview of already existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time or continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence. ## 1 Introduction Over the last decades, neural networks (NNs) have become increasingly important. Their development dates back to the early 1940s (Anderson & Rosenfeld, 1988) 1. With increasing computational power and the possibility of utilizing Deep Learning (DL), their applications have reached most parts of society, from detecting cancer (McKinney et al., 2020) to playing computer games (Ibarz et al., 2018; Silver et al., ∗All authors contributed equally. 1Anderson & Rosenfeld (1988) provides a historical overview up to the end of the 1980s. 2018). Nevertheless, classical NNs are limited to Euclidean data. Given the rising amount of non-Euclidean data (Bronstein et al., 2017) and the fact that graphs are a suitable mathematical representation for many theoretical and practical problems, several authors started investigating NNs on particular graph problems (Cimikowski & Shope, 1996; Lai et al., 1994) or so-called "structures" (Sperduti, 1997; Sperduti & Starita, 1997) in the 90s. With an ever-increasing amount of graph data available (see, e.g., repositories Rossi & Ahmed (2015), or OGB Hu et al. (2020)) in many applications (e.g., traffic (Ma et al., 2020; Rossi & Ahmed, 2015), citation (Feng et al., 2019; Ioannidis et al., 2019; Ren et al., 2020; Tran & Tran, 2020), biological or medical (La Gatta et al., 2020; Wang et al., 2020; Yadati et al., 2019; Zitnik et al., 2018), social (Pareja et al., 2020; Rossi et al., 2020; Trivedi et al., 2019), recommendation (Sankar et al., 2020; Wang et al., 2021; Yang et al., 2020)), so-called Graph Neural Networks (GNNs) have become a thriving research field. Therefore, many surveys have recently conducted intensive research on GNN models, e.g., Barros et al. (2021); Kazemi et al. (2020); Skarding et al. (2021); Zhou et al. (2020). However, most GNN models are either limited to a specific graph type or developed to address particular problems. E.g., Hier-GNN Chen et al. (2022) is developed especially for hierarchical graphs, MXMNet Zhang et al. (2020a) for multiplex graphs, and EpiGNN La Gatta et al. (2020) focuses on learning the evolution of an epidemic. On the other hand, real-world graphs are diverse. In many cases, they contain heterogeneous nodes or edges and evolve dynamically. One example of a heterogeneous graph is a power grid representation in which the nodes could have different types, such as "solar power plants", "wind parks", or "nuclear power plants". An example of a dynamic graph is a social network with time-changing nodes and the connections among them. However, no comprehensive overview is available that investigates which graph types are addressed by existing GNN models. Since the graph type plays a vital role in choosing a model to solve a graph problem, it is essential to provide an overview of the latest collection of GNNs. This survey aims to fill this gap by providing an outline of GNNs for all graph types and pointing out the absent GNN models for static and dynamic graphs. As a comprehensive overview of the different graph types is missing, the first contribution of this survey consists of the definition and overview of these. It covers basic structural graph types (e.g., directed, multi-, heterogeneous, or hypergraphs) for static and dynamic graphs in discrete and continuous-time and the so-called semantic graph types (e.g., cyclic, regular, and bipartite graphs). This categorization approach is advantageous because some GNN models are restricted to specific graph properties. The second contribution is an analysis of which graph types can be handled by currently available GNN models. As a third contribution, we group the investigated GNN models by their architecture in the main part. The final contribution consists in analyzing what graph types cannot be handled by current GNN models, including explanations for these gaps. Due to the vast amount of publications in the field, this survey cannot cover all existing models. Therefore, this survey aims to cover the most important models and list only one or two models for each graph type or property to illustrate the existence of at least one model. The following criteria determine the importance of the models for the choice: 1) Up-to-dateness of the model, 2) relevance of the model concerning the number of citations and its use as a baseline in other publications, 3) the generality of the model (e.g., that it is not only applicable to a particular domain), 4) explicitness in addressing the listed graph properties, and 5) simplicity of the model (e.g., if two models fulfill the same task, priority is given to the simpler one). The individual reason for the choice of each model can be found in the appendix in Tab.8. This paper is structured as follows. Sec. 2 contains related work. In Sec. 3, the considered graphs and their properties are defined in 3.1, while preliminary definitions concerning GNNs are given in 3.2. Sections 4 to 7 constitute the central part of the paper and deal with GNN models focusing on structural graph properties (Sec. 4), dynamic graph properties (Sec. 5), semantic graph properties (Sec. 6) and combined or other GNN models (Sec. 7). Here, each section contains a table showing which graph types and properties are addressed by existing GNN models, a description of the applied GNN techniques, and an evaluation of why current models might not cover specific properties. Note that many models have the same acronym in the respective publication. Therefore we altered some of them to distinguish the models and improve readability. Finally, Sec. 8 concludes the work and points out future challenges. The mathematical notation used throughout this work can be found in Sec. 9, Tab. 7. ## 2 Related Work Several surveys that review GNNs concerning different aspects have been proposed over the last few years. Multiple surveys provide a more detailed overview of specific types of methods, such as convolutional GNNs (Gama et al., 2020; Zhang et al., 2018; 2019b), GNNs using attention mechanisms (Lee et al., 2019), or Baysian GNNs (Shi et al., 2021). Furthermore, many existing surveys focus on specific application areas (Jiang & Luo, 2022; Shlomi et al., 2020; Wu et al., 2022), such as natural language processing (Wu et al., 2021), combinatorial optimization (Peng et al., 2021), or power systems (Liao et al., 2021). Other publications reviewing GNN models concentrate on specific aspects such as explainability (Yuan et al., 2021), or the expressive power of GNNs (Sato, 2020). Unlike these publications, we provide a more general survey, which is neither limited to particular types of methods or aspects nor explicit application fields. Cai et al. (2018) provide a broad survey of graph embedding techniques, including methods apart from deep learning, such as matrix factorization or graph kernels, similar to (Cui et al., 2018; Goyal & Ferrara, 2018; Hamilton et al., 2017). In Bronstein et al. (2017), an overview of deep learning methods applicable to non-Euclidian data is provided. The survey does not only focus on graphs but aims to cover methods of geometric deep learning in general, including its applications, challenges, and future directions. Concerning GNNs, it primarily surveys convolutional methods. However, the aforementioned surveys do not cover methods for dynamic graphs. In contrast, (Wu et al., 2020) covers spatial-temporal GNNs, convolutional methods, recurrent GNNs, and graph autoencoders. The investigated methods are grouped according to these categories. Similarly, (Zhang et al., 2020b) review models by the type of GNN they apply. However, these categories differ from thosse in Wu et al. (2020) such that instead of spatial-temporal GNNs, graph reinforcement learning and adversarial methods are discussed. Both methods only partially cover dynamic graph models. Further publications such as Barros et al. (2021); Kazemi et al. (2020); Skarding et al. (2021) explicitly focus on models for dynamic graphs. Skarding et al. (2021) further group the reviewed models concerning the encoded type of dynamics (e.g., node dynamic, edge-growing) and the applied methods. While Barros et al. (2021) and Skarding et al. (2021) survey models for dynamic graphs only, Kazemi et al. (2020) also reviews several static methods. However, the corresponding chapter of this survey aims to understand better the basic concepts for static graphs, which can be extended to dynamic graphs, rather than reviewing methods for static graphs. None of the abovementioned surveys categorizes the reviewed methods for different graph types and their semantic properties. The only survey that explicitly investigates GNN models concerning the graph types is Zhou et al. (2020). However, it does not consider all graph types covered in this survey since we provide a more fine-grained distinction of different graph properties. Moreover, in Zhou et al. (2020) the authors focus on the pipeline of designing a GNN, including identifying the graph type and additional network modules such as pooling or sampling. Accordingly, it takes a different point of view and reviews GNN models amongst other modules, which can be integrated into a deep learning pipeline. Our contribution is a detailed overview of existing GNNs and their categorization into certain types of methods, but more importantly, the types of graphs they can process. Unlike many existing surveys, we consider static and dynamic graphs. Moreover, we group the corresponding dynamic GNNs into discrete-time and continuous-time dynamic models while considering the node and edge attributes and the graph's structure. ## 3 Foundations The application of graphs takes place in many different fields. This is because of the high degree of freedom in designing a graph and, thus, in representing information. Therefore, many different graph types have been developed and extended over time. This section defines graph types and properties in detail to give the reader a comprehensive insight into all graph types and associated GNNs in detail and presents them in order. Readers familiar with the different graph types and properties may omit this section and go on to the following section, Sec. 4, for an overview of existing GNN models and architectures. For the remainder of this section, the reader is assumed to have basic knowledge of analysis and linear algebra (see, for example, Abbott (2001); Strang (1993)). A table containing the most frequently used notation can be found in Sec. 9. ## 3.1 Graphs And Their Properties At first, the considered graph properties and graph types have to be defined to survey for which graph types and properties GNN models exist. These definitions are given here, as well as some graph-related terms which are needed throughout the paper. We distinguish between structural and semantic graph properties. The graph structure is defined only by the mathematical objects that make up the graph, i.e., node, edge, and attribute sets. The so-called structural properties can be deduced from these sets, e.g., whether the graph is directed or attributed. Semantic properties, in contrast, do not affect the mathematical representation. They result from interpreting the graph, e.g., whether it is cyclic or a tree. Some GNNs specialize in such properties since they frequently occur in real-world applications. All definitions concerning structural properties are taken from Thomas et al. (2021) and given here for the reader's comfort. In the following, elementary graph types are defined. They form the basis for all graphs to which neural networks have already been applied or might be applied in the future. Definition 3.1 (Static Graphs: Elementary) 1. A **directed (simple) graph** is a tuple G = (V, E) containing a set of nodes V ⊂ N and a set of directed edges given as tuples E ⊆ V × V. 2. A **(generalized) directed hypergraph** is a tuple G = (V, E) with nodes V ⊂ N and hyperedges E ⊆ {(x, fi)i∣ x ⊆ V, fi∶ x → N0} that include a numbering map fi for the i-th edge (x, f)i which indicates the order of the nodes in the (generalized) directed hyperedge. W.l.o.g. it can be assumed that the numbering is gap-free, so if there exists a node u ∈ x with f(u) = k > 1 then there will also exist a node v s.t. f(v) = k − 1. These graphs are called elementary because every other graph is a composition of them. In this sense, a directed hypergraph is a directed simple graph that simultaneously is a hypergraph. Since one can not only combine elementary graphs but also extend them with additional properties, in what follows, different types of graph properties are introduced. Namely the static structural, dynamic structural, and semantic properties. Definition 3.2 (Static Structural Graph Properties) An elementary graph G = (V, E) is called 1. **undirected** if the edge directions are irrelevant, i.e., - for directed graphs: if (*u, v*) ∈ E whenever (*v, u*) ∈ E for *u, v* ∈ V. Then, the edges can be denoted as a set of sets instead of a set of tuples, namely E ⊆ {{*u, v*} ∣ u, v ∈ V, u ≠ v} ∪ {{u} ∣ u ∈ V} 2, - for directed hypergraphs: if fi∶ x → {0} for all (x, fi)i ∈ E 3. Abbreviated by E ⊆ {x ∣ x ⊆ V}. 2. **multigraph** if it is a multi-edge graph, i.e., the edges E are defined as a multiset4, a multi-node graph, i.e., the node set V is a multiset, or both. 3. **heterogeneous** if the nodes or edges can have different types (node or edge-heterogeneous). Mathematically, the type is appended to the nodes and edges. I.e., the node set is determined by V ⊆ N×S with a node type set S and thus, a node (*v, s*) ∈ V is given by the node v itself and its type s. The edges can be extended by a set R that describes their types, to (*e, r*) ∀e ∈ E of edge type r ∈ R. 4. **attributed** if the nodes V or edges E are equipped with node or edge attributes. These attributes are formally given by a node attribute function and an edge attribute function, respectively, i.e. α ∶ V → A and ω ∶ E → W, where A and W are arbitrary attribute sets. In case there are only node attributes the graph is called **node-attributed** (or node labeled/node features), in case of just edge attributes it is called **edge-attributed** and if we have W ⊆ R it is called **weighted**. 2the second set contains the set of self-loops 3fi(x) = 0 encodes that x is an undirected hyperedge 4A multiset is a set that can have entries which occur multiple times. Fig. 1 shows examples for each graph type up to this point. ![4_image_0.png](4_image_0.png) Figure 1: Visualization of different elementary static graph types. The term **static** in these structural properties stands for the absence of temporal dependence. This means, in particular, that once the graph is given, it never changes with time. In contrast, the so-called (temporal) dynamic structural graph properties are listed in the following. Definition 3.3 (Dynamic Structural Graph Properties) A graph is called 1. **dynamic** if the graph structure or the graph properties are time dependent. In the following, the notation Gi = (Vi, Ei), ti ∈ T is used, where T is a set of (not necessarily equidistant) timestamps to emphasize the time-dependence and therefore the dynamics. 2. **growing** if it is dynamic and the node or edge sets evolve w.r.t. addition of new nodes and edges respectively. I.e., for all ti ∈ T it holds Vi ⊆ Vi+1 or Ei ⊆ Ei+1. 3. **shrinking** if it is dynamic and we just allow node or edge set evolution w.r.t. deletions of nodes and edges respectively. I.e., for all ti ∈ T, it is Vi ⊇ Vi+1 or Ei ⊇ Ei+1. 4. **strictly growing/shrinking** if we consider only real inclusions in definition 2 and 3 above. 5. **structure-dynamic** if it is growing, shrinking or both simultaneously, i.e., in particular, the nodes V or edges E evolve over time due to additions or deletions of nodes or edges5. 6. **attribute-dynamic** if the node or edge attribute function is time-dependent. Thus, we extend our notions of the attribute functions to αi∶ Vi → A and ωi∶ Ei → W, for all ti ∈ T. 7. **type-dynamic** if the graph type evolves over time. E.g., an undirected graph becomes directed from one to another time step. Structurally, these dynamics describe different temporal behaviors of graphs. When processing dynamic graphs, they are typically defined either as discrete-time or continuous-time representations. 5(Kazemi et al., 2020) also mentions splits and merges of nodes and edges. Obviously, these events are sequences of additions and deletions. ## Definition 3.4 (Dynamic Graph Representation) 1. A dynamic graph in **discrete-time representation** is given by a set G = {g1*, . . . , g*k} of graph snapshots gi at time steps i = 1*, . . . , k*. Here, gi∶= (Vi, Ei) are static graphs with nodes V and edges Ei ⊆ {(*u, v*) ∣ *u, v* ∈ Vi}. 2. A dynamic graph in **continuous-time representation** is defined by a set G = {gt0 ,E} containing an initial static graph at time stamp t0 ∈ T and a set E = {et, t ∈ T } of events encoding a structural or attribute change at time stamp t > t0 ∈ T . Not all combined graphs are equally important in the literature and especially for GNNs. The following introduces some combined graph types of specific interest with proper names. Definition 3.5 (Combined Static Graphs) 1. **Knowledge graphs** are defined in several ways. In Wang et al. (2021), they are defined as heterogeneous directed graphs, while in Yu et al. (2020) knowledge graphs are the same as edgeheterogeneous graphs. But there are also definitions that do not see a knowledge graph as a graph combined from the aforementioned types, see for example Ehrlinger & Wöß (2016) for an overview. 2. A **multi-relational graph** (Hamilton, 2020) is an edge-heterogeneous but node-homogeneous graph. 3. A **content-associated heterogeneous graph** is a heterogeneous graph with node attributes that correspond to heterogeneous data like, e.g., attributes, text or images (Zhang et al., 2019a). 4. A multiplex graph/**multi-channel graph** corresponds to an edge-heterogeneous graph with self-loops (Hamilton, 2020). Here, we have k layers, where each layer consists of the same node set V, but different edge sets E (k). Additionally, inter-layer edges E˜ exist between the same nodes across different layers. 5. A **spatio-temporal graph** is a multiplex graph where edges per each layer are interpreted as spatial edges and the inter-layer edges indicate temporal steps between a layer at time step t and t + 1. They are called temporal edges (Kapoor et al., 2020). Remark. All the combined properties mentioned in Def. 3.5 can occur in dynamic graphs as well. Besides the structural properties, a graph can have semantic properties that do not explicitly change its structure but result from applying or interpreting the graph information. Some GNNs are limited or specialized to these properties defined in the following. Definition 3.6 (Semantic Graph Properties) An elementary graph G = (V, E) is called 1. **complete** if all pairwise different nodes are connected through an edge, i.e., E = {(*u, v*) ∈ V ×V ∣ u ≠ v}. 2. r**-regular** if each node v ∈ V has r ∈ N neighbors, i.e., ∣N (v)∣ ∶= ∣{u ∈ V ∣ (*u, v*) ∈ E}∣ = r. 3. **bipartite** if there exists a disjoint node decomposition into two sets V = U ⊍ W, such that the edges are of the form E ⊆ U × W. 4. **connected** if the graph is undirected and for all node pairs *v, w* ∈ V there is a path from v to w in G. An elementary graph is called **weakly connected** if the underlying undirected graph is connected and it is **strongly connected** if for all node pairs *v, w* ∈ V there is a directed path from v to w in G. Otherwise it is unconnected. 5. **cyclic** if it contains a cycle of length k ∈ N, i.e., there exists a subgraph H = ({v1, . . . , vk}, {e1, . . . , ek}) ⊆ G, vi ∈ V, ei ∈ E ∀i, such that the series of nodes and edges v1, e1, v2, . . . , vk, ek, v1 is a closed (directed) path called **(directed) cycle** of length k with vi ≠ vj ∀*i, j*. Otherwise, it is called **acyclic** or a **forest**. 6. **tree** if it is a connected forest. In case each node in the tree has at most two neighbors, it is called binary tree (Gessel & Stanley, 1995). A **polytree** is a directed graph whose underlying undirected graph is a tree (Dasgupta, 1999). 7. level-(l+1) **hierarchical** w.r.t. a level-l base graph H = (V˜, E˜) if one can find a complete partitioning of H into k ≥ 1 non-empty, connected sets of nodes V˜1*, ..,*V˜k. Such that each set of nodes V˜i ⊆ V˜ induces a subgraph subi(H) = (V˜i, E˜i ⊆ E˜) with E˜i = {(v1, v2) ∈ E˜ ∣ v1, v2 ∈ V˜i}. Each of these subgraphs, in turn, corresponds to a node in the hierarchical graph G. Edges in G correspond to edges in H between nodes vi, vj of two different subgraphs subi(H)*, sub*j (H) (Stoffel et al., 2008). 8. **scale-free**, if its node degree distribution P(d) follows a power law P(d) d −γ, where γ typically lies within the range 2 < γ < 3. A (generalized) (directed) hypergraph G = (V, E) is called 9. **recursive**, if an edge can not only exist between nodes, but also between edges and in a recursive way. E.g. two edges e1 and e2 make up edge e12, edges e3 and e4 make up edge e34 and edge e5 consists of edges e12 and e34. See definition 4 of an ubergraph in Joslyn & Nowak (2017) and figure 1a in Yadati (2020) for a visualization. Fig. 2 shows examples for each semantic graph property by applying it to undirected graphs. ![6_image_0.png](6_image_0.png) Figure 2: Semantic graph properties illustrated for undirected graphs. In the following chapter, we introduce the basic architectures for GNNs that make up all the GNNs in this survey. In order to be able to describe these appropriately, we list some frequently occurring graph-related terms beforehand. Definition 3.7 (Graph related terms) Let G ∶= (V, E) be a graph. 1. The **degree of a node** v ∈ V is given by δ(v) = ∣{e ∈ E ∣ v ∈ e}∣. For directed graphs, the **out- or** in-degree of v is the number of edges starting in v or ending in v, respectively. The **degree of an** edge e ∈ E is determined by ∣e∣, i.e., by the number of nodes in the edge. 2. The graph Laplacian or **Laplacian matrix** L is defined by L = D − A, where D is the degree matrix and A the adjacency matrix. In Graph Convolutional Neural Networks, it is mostly used in a normalized version, e.g., the symmetric and normalized graph Laplacian L*norm* = D˜ − 1 2 A˜D˜ − 1 2 , where A˜ is the adjacency matrix with self connections and D˜ the degree matrix with self-loops. 3. An entry yi,j of the **incidence matrix** Y ∈ {0, 1} ∣V∣×∣E∣of a graph G = (V, E) is 1, if the node i is incident to edge j, and 0 otherwise. For non-hypergraphs, the incidence matrix has exactly 2 entries per row that are non-zero. Let V˜ ⊆ V be a set of nodes. Then, the **induced subgraph of** V˜ is defined by a graph G(V˜) = (V˜, E˜) with edges E˜ = {e ∈ E ∣ e ∈ V˜ × V˜} between the nodes of V˜. 4. A **path** from u ∈ V to v ∈ V, denoted by p(u, v) ∶= e1*, . . . , e*k ∈ E is a sequence of edges, for which there is a sequence of nodes (z1*, ..., z*k+1) such that ei = (zi, zi+1) for i = 2*, ..., k* and e1 = (*u, x*) and ek+1 = (*y, v*) for some *x, y* ∈ V. 5. The **path length** of a path p(u, v) = e1*, . . . e*k denotes the sequence length, i.e., len(p(*u, v*)) = k. 6. A **random walk of length** k is a path of length k whose edges are selected iteratively and randomly. ## 3.2 Gnn Preliminaries GNNs define the adaptation of traditional NNs to graph data and aim to learn high-level representations of graphs in an end-to-end fashion by applying several network layers. They can be applied to all classical machine learning problems, such as classification, regression, or clustering, for entire graphs and subgraphs at a node or edge level. Each layer computes a new representation of the graph or its components. A typical procedure is to update the representation for the nodes in each layer by propagating information through the graph. A task-specific prediction can then be made using the learned representation and a suitable decoder function. For node classification, e.g., a typical choice for the decoder is a standard MLP with a softmax activation as the output function. It maps the learned representation to a vector indicating the class probabilities for all nodes. At the edge level, a frequently considered task is link prediction which aims to predict the probability of the existence of an edge. The corresponding decoder is often implemented as a logistic regression classifier since the existence of an edge can be expressed as a two-class problem. Different types of GNNs specify the computation of the node representation in the GNN layers. According to Bronstein et al. (2021), the following relation of GNN approaches applies: $${\mathrm{mess}}$$ message-passing ⊇ attention ⊇ convolution Therefore, these are introduced one after the other, from the most general case to special ones. A visualization of the three GNN layer types is shown in figure 3. The Recurrent Neural Networks coexist with the messagepassing and will be introduced afterward. Combined with GNNs, it is particularly relevant for dynamic graph learning problems due to its ability to model temporal data. ![7_image_0.png](7_image_0.png) Figure 3: Visualization of the information propagation process in the different types of GNN layers for node u and its neighbors. The idea for the figure is taken from Bronstein et al. (2021, Fig. 17). *Left:* In convolutional layers, the node features xv of the neighbors v ∈ {1*, . . . ,* 3} of node u are multiplied with a constant cu,v to form the message. *Middle:* In attention layers, this multiplier is computed via an attention mechanism attuv = att(xu, xv) between the source and the target nodes u, v. *Right:* In message-passing layers, the messages muv are computed explicitly from the source and target node representations, i.e., muv = ψ(xu, xv). ## 3.2.1 Message-Passing Message-passing determines which and how much information is forwarded between two nodes, e.g., via their connecting edges. The resulting representation hu of node u is computed from the node representations xv ∀v ∈ V: $$\mathbf{h}_{u}=\phi\Bigg{(}\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}(u)}\psi(\mathbf{x}_{u},\mathbf{x}_{v})\Bigg{)},\tag{1}$$ where ψ is a learnable message function that assigns an information vector to the pair *u, v*. Typically, ψ is defined as multiplication with a learnable weight matrix, and its output is denoted as a message. The aggregation ⊕ depicts the message-passing process on the graph, which in most cases is implemented as a non-parametric operation such as sum, mean, or maximum. N (u) denotes a neighborhood of node u and ϕ is an activation function (Bronstein et al., 2021). ## 3.2.2 (Multi-Head) Graph Attention Graph attention is a special case of message-passing (Bronstein et al., 2021). Here, the message is computed by applying a learnable function ψ to each neighboring node weighted by a so-called *attention* factor. Typically, the function ψ is shared across all neighbors, whereas the attention is computed individually for each node pair. The attention mechanism specifies the message-passing rule in the aggregation function as follows: $$\mathbf{h}_{u}=\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}(u)}\operatorname{att}(\mathbf{x}_{u},\mathbf{x}_{v})\psi(\mathbf{x}_{v})\right),\tag{1}$$ $$\left(2\right)$$ where the attention function att is learnable and determines the effect of the message from neighbor v with representation ψ(xv) to the hidden representation hu of node u. Additionally, the attention coefficients are normalized across all neighbors of the target node. Note that attk, therefore, depends not only on xu and xv but also on all other neighbors of node xu. Furthermore, if ⊕ is a sum, the aggregation is a linear combination considering feature-specific weights for the neighbors. Multi-head attention extends the attention mechanism to K different attention functions (Velickovic et al., 2018) and is determined by $$\mathbf{h}_{u}=\prod_{k\in[K]}\phi\Bigg{(}\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}(u)}\text{att}_{k}(\mathbf{x}_{u},\mathbf{x}_{v})\psi(\mathbf{x}_{v})\Bigg{)},\tag{1}$$ where ∣∣ denotes the concatenation operation. The K different attention functions also called *attention heads* are computed independently. In Velickovic et al. (2018), an implementation of an attention mechanism is proposed. The according self-attention function att ∶ R dim(h) × R dim(h) Ð→ R outputs an attention weight. $$\mathrm{{\mathbb{H}}}$$ $$\omega_{i,j}:=\operatorname{att}(W h_{i},W h_{j})$$ for an edge (*i, j*) given the incident node embeddings hi,hj to indicate the importance of the features of node j to node i for all node pairs *i, j* ∈ V. Considering the neighborhoods given in the graph, the attention mechanism can be defined by $$a_{i,j}:=\operatorname{softmax}_{j}(\omega_{i,j})={\frac{\exp\omega_{i,j}}{\sum_{k\in{\mathcal{N}}(i)}\exp\omega_{i,k}}}.$$ ## 3.2.3 Spatial And Spectral Graph Convolutions Compared to the attention approach, the graph convolution aggregates the neighbored nodes directly using fixed weights (Bronstein et al., 2021) by $$\mathbf{h}_{u}=\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}(u)}c_{u,v}\psi(\mathbf{x}_{v})\right),\tag{4}$$ where cu,v is a factor indicating the impact of neighbor v on the hidden representation of node u. Note that cu,v is a pre-defined constant instead of a node-specific function, as is the case for attention layers. For spatial convolution, cu,v is usually given by the (weighted) adjacency matrix and thus includes structural information. For spectral convolution, spectral filters dependent on the graph Laplacian (c.f. 3.7.2) determine the weights of all nodes in the graph integrating structural information implicitly. If the aggregation is a sum, the layer can be interpreted as linear diffusion or position-dependent linear filtering. An example of a spatial convolution in layer l − 1 is given in, e.g., Morris et al. (2019): $$h_{u}^{(l+1)}=\sigma(W_{1}h_{u}+W_{2}\sum_{v\in{\cal N}(u)}h_{v}^{(l)}).$$ W1 and W2 are learnable weight matrices and σ is an activation function such as ReLU(⋅) = max(0, ⋅) . An implementation of a standard spectral convolution is given in, e.g., Kipf & Welling (2017). The layer-wise propagation rule in layer l is determined by $${\cal H}^{(l+1)}=\sigma(\underbrace{\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}}_{L_{n o r m}}H^{(l)}W^{(l)}),$$ $$\left({\bar{5}}\right)$$ $$({\mathfrak{h}})$$ where A˜ = A + E is the adjacency matrix with added self connections, E is the identity matrix, D˜ is the degree matrix of the graph with self-loops, and L*norm* is the normalized graph Laplacian. σ is an activation function, and W is a learnable weight matrix functioning as a filter. The idea of spectral graph convolutions comes from signal processing. Vividly, one can imagine it as global message passing on a graph, weighted with a filter. ## 3.2.4 Recurrent Neural Networks For each time step t, the Recurrent Neural Network (RNN) calculates a hidden representation using historical information together with the current input X(t)(Bronstein et al., 2021). First, the input is transformed by an encoder function f to a representation vector z (t) = f(X(t)). Then, z (t)is aggregated together with the previous information by an update function R ∶ R k × R m → R m that additionally considers the hidden representation from the time step before. Altogether, a basic RNN is formalized by h (t)= R(z (t), h(t−1)). In the context of graph learning, the node feature matrix is commonly used as initial input X(0). Furthermore, in various GNNs for dynamic graphs, GCN layers are combined with RNNs by, e.g., modeling the GCN weight evolution with an RNN or propagating the learned structural information from one timestamp to the next timestamp. ## 4 Models Focusing On Structural Graph Properties Learning on simple graphs is most prevalent in the research of GNNs. The elementary graph structure can already model relations in the data, and the mathematical foundations go back to the 17th century. After the most prominent introduction of Graph Neural Networks by Scarselli et al. (2009) for learning on static node-attributed graphs, many different extensions have been proposed. An overview of GNN models for simple static graphs is discussed in Sec. 4.1. Their approaches often build on the graph information processing scheme of Scarselli et al. and adapt it to new applications and several structural graph properties. One of the extensions includes the higher-order representation of relational data with the aid of elementary hypergraphs. Hypergraph theory is still a young field and has been essentially developed by Claude Berge Berge (1973) in the 1970s. Learning on hypergraphs has also emerged as part of research in recent years and has much potential for applications on differently structured hypergraphs, as illustrated in Table 2. ## 4.1 Gnns For Simple Graphs The number of GNNs for simple graphs has increased immensely in the past years, so Table 1 does not list all of them but gives an overview of several GNNs for simple graphs with structural properties defined in Def. 3.2. This demonstrates which graph types have already been considered in GNN research. The models are selected according to their up-to-dateness, relevance, general applicability, explicit addressing of a specific graph type, and simplicity, as discussed in the introduction. Note that in the case of processing attributed graphs, the attributes have to be encoded in d-dimensional vectors. To apply the corresponding models to arbitrary attributed graphs, preprocessing steps have to be utilized as in Zhang et al. (2019a). The most common type of GNNs for simple graphs concerning structural properties are **convolutional** models, which compute new node representations in each layer. A common graph convolution as, for example, defined in GNN⋆(Scarselli et al., 2009), or WL (Morris et al., 2019), typically assumes attributed nodes and allows for directed and undirected edges without being explicitly designed for either property. Models designed for other graph types typically extend a common spectral or spatial convolution to adapt to the specific structural property they focus on. To consider directed edges, for example, in GenRecN (Sperduti & Starita, 1997), a standard spectral convolution is applied only on the out-neighbors of a target node, i.e., on those neighbors connected via a directed edge originating from it. A more recent model, MagNet (Zhang et al., 2021), applies a spectral convolution using a complex-valued Hermitian matrix, called magnetic Laplacian, instead of a symmetric and real-valued Laplacian which cannot be computed due to the asymmetric adjacency matrix of a directed graph. The magnitude of the complex Laplacian indicates the presence of an edge but not its direction, while the phase of the complex Laplacian indicates the direction of an edge. The lack of models designed for graphs with edge attributes probably results from considering edge attributes only in addition to node attributes since edge attributes are typically not relevant in isolation. In terms of heterogeneity, a similar observation can be made. Corresponding graphs are either heterogeneous in their nodes and edges or node homogeneous, i.e. the, node set remains of one type. Edge heterogeneity is more common than node heterogeneity since it includes widely-used multi-relational graphs. These can be handled by, e.g., an extension of a standard convolution applied separately for each relation (GRNN (Ioannidis et al., 2019)). Another common procedure is extending a convolutional model using an **attention** mechanism, e.g., as described in Sec. 3.2.2. Attention mechanisms are suitable for node-attributed graphs since they allow the computation of node-specific attention scores that express the importance of one node to another. These attention scores can serve as weights in computing node features to focus on specific nodes. Spectral (CapsGNN (Xinyi & Chen, 2018)), as well as spatial convolutions (GAT (Velickovic et al., 2018)), can be equipped with attention mechanisms. They can also be adapted to attributed edges to process entirely attributed graphs (EGNN (Gong & Cheng, 2019)). Also, attention-based models are a suitable approach for heterogeneous graphs since multi-head attention can be used to model different relation types, as in AA-HGNN (Ren et al., 2020). A particular case of attention convolution is HAN (Wang et al., 2019), which utilizes a selected set of meta paths for neighborhood aggregation. | research of GNNs. | | | | |------------------------------|---------------------------------------------------|-----------------------------------------------------------|--------------------------------------| | Graph Type | Models | Problem | Data Category | | directed | GenRecN Sperduti & Starita (1997) | graph classification | logic terms | | MagNet | link prediction, node | linking of websites, | | | Zhang et al. (2021) | classification | synthetic | | | undirected | GNN⋆ Scarselli et al. (2009) | subgraph matching, graph classification, web page ranking | synthetic, mutagenesis (molecules) | | GAT Velickovic et al. (2018) | node and graph classification | | | | node-attributed | | | | | graph | citation networks, protein interaction | | | | CapsGNN | biological-, social | | | | Xinyi & Chen (2018) | networks | | | | WL | graph classification, | biological-, social | | | Morris et al. (2019) | attribute prediction | networks, molecules | | | edge-attributed | - | | | | attributed | EGNN Gong & Cheng (2019), PG-GNN Xia & Ku (2021) | graph classification, node and edge attribute prediction | citation networks, protein structure | | node-heterogeneous | - | | | | edge-heterogeneous | GRNN Ioannidis et al. (2019) | citation networks | | | | node classification | | | | heterogeneous | AA-HGNN Ren et al. (2020), HAN Wang et al. (2019) | news articles & citation networks | | | multi | - | | | Table 1: GNN's developed for learning on simple graphs of different structures. Such models are most prevalent in the research of GNNs. ## 4.2 Gnns For Hypergraphs Learning on hypergraphs as in Def. 3.1 has been rarely explored. Most approaches involve convolutions adapted to hypergraphs, i.e., the property that an edge can be incident to an arbitrary number of nodes. Table 2 lists GNNs that mainly address node classification on citation networks represented as hypergraphs, which shows that the application of hypergraphs is not yet widespread; hence, the available datasets are currently limited. During the research for hypergraph GNNs, it turned out that, so far, only a few GNNs have been applied to hypergraphs. When it comes to additional structural properties as defined in Def. 3.2, sometimes only one or two models for the specific hypergraphs have been developed. Table 2 indicates that the data is still very homogeneous and that the heterogeneity in graphs is only addressed to a limited extent. One option to handle hypergraphs is to **transform them into simple graphs** and apply standard GNNs afterward. This preprocessing can be done by selecting two representative nodes for each hyperedge, as in HyperGCN (Yadati et al., 2019). Based on the assumption that nodes in a hyperedge are similar, the representative nodes typically have the most significant difference between their attributes. Another approach presented in LHCN (Bandyopadhyay et al., 2020) represents a hypergraph as a line graph6. In this process, each hyperedge of the original graph serves as a simple node in the line graph. The corresponding node attributes are computed by the average across the attributes of all hypernodes in that hyperedge. Both variants allow for processing attributed and undirected hypergraphs using GNN models for simple graphs. | Graph Type | Models | Problem | Data Category | | |--------------------|----------------------------------------------|----------------------------------|------------------------------|-------------------| | directed | NDHGNN Tran & Tran (2020) | node classification | citation networks | | | | HyperGCN | | | | | undirected | Yadati et al. (2019) | densest | | | | | k-subhypergraph problem, node classification | combinatorial optimization, | | | | | citation networks | | | | | | HyperConvAtt Bai et al. (2021) | node classification | citation networks | | | hypergraph | node-attributed | LHCN Bandyopadhyay et al. (2020) | node classification | citation networks | | | node classification, | | | | | edge-attributed | HGNN Feng et al. (2019) | object classification | citation networks | | | attributed | AHGAE Hu et al. (2021) | graph clustering | citation networks, 3D models | | | node-heterogeneous | - | | | | | edge-heterogeneous | - | | | | | heterogeneous | HWNN Sun et al. (2021b) | node classification | citation networks | | | multi | G-MPNN (multiple edges) Yadati (2020) | link prediction | knowledge (hyper-) graphs | | Table 2: **GNNs learning on hypergraphs with different** additional properties. The selection of GNNs is still limited, which illustrates gaps and the potential of the young research field. There are also models which are **specifically designed for hypergraphs**, most of them based on spectral convolutions. The graph's incidence matrix can be used to adapt spectral convolutions to attributed 6The nodes of a line graph w.r.t. an original graph are determined as the edges of the original graph, while the edges are inserted between two edges of the original graph that share an incident node. hypergraphs and the node and edge degree matrix in the neighborhood aggregation (HGNN (Feng et al., 2019), AHGAE (Hu et al., 2021)). Such a convolution can be additionally equipped with an attention mechanism (HyperConvAtt (Bai et al., 2021)). NDHGNN (Tran & Tran, 2020) uses separate incidence matrices for the source and target nodes to model the graph's Laplacian in the spectral hypergraph convolution to process directed hypergraphs. Such convolutions can also be used for heterogeneous graphs by using edge homogenization, e.g., by working on subgraphs that include hyperedges of only one specific type. In HWNN (Sun et al., 2021b), the spectral convolution is applied on subgraphs, which include hyperedges of only one specific type. Finally, to enable learning on multi-hypergraphs, a message-passing GNN is extended to include multiple relations and node or edge duplicates (Yadati, 2020). Both approaches, i.e., transforming hypergraphs into simple graphs or directly working on them, have advantages and disadvantages. The first case enables the application of well-established GNN architectures, which have typically been investigated more thoroughly. However, the transformation is often related to information loss, affecting performance. In HyperGCN (Yadati et al., 2019), the information from nodes in hyperedges that do not serve as representative nodes disappear. Models that directly operate on hypergraphs, such as HGNN (Feng et al., 2019), can use the complete information to learn. ## 5 Models Respecting Dynamic Graph Properties Many applications include data that changes with time. In the application of graphs, it often appears, e.g., that graphs are growing or structurally changing or that node and edge attributes are evolving, as defined in Def. 3.3. Therefore, the research on GNNs for dynamic graphs has expanded immensely. There are two common approaches in graph learning for representing a graph's dynamical behavior: discrete-time and continuous-time representation (cf. Def. 3.4). The first approach has been widely used since the snapshots simplify the processing of structures in the graph. Corresponding proposed GNNs in the literature for processing discrete-time graphs are listed in the next section. The continuous-time approach is much more compact in its representation since it stores only events instead of the entire graph for each time step. However, a local evaluation of the graph is required, i.e., the area in the graph affected by an event has to be identified and retrieved to process the event using a GNN. Hence, the application of this representation is more complex, which is also reflected in the less frequent use of it in GNN models, which can be seen in Sec. 5.2. ## 5.1 Gnns For Discrete-Time Graphs Although dynamic graphs are much more challenging to handle than static graphs due to the additional temporal dependencies, existing dynamic GNN models already cover many structural graph properties. GNN models operating on discrete-time graphs are typically extensions of static GNNs since the discrete-time representation corresponds to a series of static graphs. Therefore, similar gaps appear, i.e., only nodeheterogeneous graphs and multi-graphs are not yet covered. The structural component of dynamic graphs can be learned by applying standard GNN models to the graph snapshots. Those models are often combined with **RNN-based** models to encode the dynamics, which capture the temporal features. Such an approach is pursued in, e.g., GCRN (Seo et al., 2018). The model processes attribute dynamic graphs using a spectral GCN combined with an LSTM. First, the node attributes are preprocessed by a spectral convolution, and the resulting representation is passed to the LSTM, which captures the data distribution. A similar approach is taken in WD/CD-GCN (Manessi et al., 2020), which applies a GCN to transform the input graph sequence into a sequence of node representations, which are then processed by a modified LSTM. EpiGNN (La Gatta et al., 2020) also combines GCNs and LSTMs to predict the parameters of a generic epidemiological model based on historical movement data. The model embeds the graph nodes for each time step, representing locations using a standard GCN. It learns the desired parameters by embedding the current graph and information from previous time steps stored in the LSTM. Table 3: **GNN's learning on discrete-time dynamic graphs.** Many of these models are extensions of the static case since the discrete-time representation corresponds to a series of static graphs. Therefore also the gaps appear similar to the static case. The ◻ sign means, that the graph handled by the model can have attributes, but the attributes are static. Thus, they appear or disappear together with their respective nodes or edges but do not change over time. | with their respective nodes or edges but do not change over time. | | | | | | | | | | | | |---------------------------------------------------------------------|---------------------------------------------|------------|-------|------------------|---------|------|-----------------|---------------------|------------|-------------------------------|--------------| | Graph Type | Models | Nodes | Edges | Attr. | Problem | Data | | | | | | | | Category | | | | | | | | | | | | add | del | add | del | node | edge | | | | | | | | EpiGNN | 7 | node label | | | | | | | | | | | directed | La | Gatta | × | × | × | × | ✓ | ◻ | prediction | covid-19 | | | et al. (2020) DySAT | × | × | ✓ | ✓ | × | ◻ | link prediction | communication, | | | | | Sankar et al. | rating networks | | | | | | | | | | | | (2020) | | | | | | | | | | | | | undirected | (WD/CD)- GCN Manessi et al. (2020) | × | × | ✓ | ✓ | ✓ | × | node classification | research | | | | | community | | | | | | | | | | | | GCRN | video prediction,8 | | | | | | | | | | | | node | | | | | | | | | | | | | attributed | Seo | et | al. | × | × | × | × | ✓ | ◻ | graph sequence | videos, text | | (2018) | prediction | | | | | | | | | | | | DynGEM9 | | | | | | | | | | | | | edge | | | | | | | | | | | | | attributed | Goyal et al. | ✓ | ✓ | ✓ | ✓ | × | ◻ | | | | | | (2018) | graph | | | | | | | | | | | | | reconstruction, link prediction, | | | | | | | | | | | | | anomaly detection | synthetic, | | | | | | | | | | | | collaboration, communication networks | | | | | | | | | | | | graph | | | | | | | | | | | | | DTR | | EvolveGCN | 10 | link prediction, | | | | | | | | | attributed | Pareja et al. | ✓ | ✓ | ✓ | ✓ | ✓ | ◻ | edge and node | | | | | (2020) | classification | synthetic, | | | | | | | | | | | | social networks, bitcoin, community network | | | | | | | | | | | | node | | | | | | | | | | | | | heterogeneous | - | | | | | | | | | | | | RE-Net | knowledge | | | | | | | | | | | | edge | | | | | | | | | | | | | heterogeneous | Jin | et | al. | × | × | ✓ | ✓ | × | × | extrapolation link prediction | graphs | | (2019) DyHAN | e-commerce, | | | | | | | | | | | | heterogeneous | Yang | et | al. | ✓ | ✓ | ✓ | ✓ | × | × | link prediction | online | | | community | | | | | | | | | | | | (2020) | | | | | | | | | | | | | multi | - | | | | | | | | | | | 7Only static edge attributes are considered. 8The model uses the moving written digits datasat (moving-MNIST dataset) generated by Shi et al. Shi et al. (2015) 9Only considers the previous time step, patterns of short duration (length 2) for link prediction and is restricted to weights. 10Edges are weighted not generally attributed. | DHAT | | | | | | | | | | | | |---------------------------|-------------------|--------|-----|----|----|----|--------------|--------------------|---------------|--------------------|--------------| | directed | Luo | et | al. | × | × | × | × | ✓ | × | feature prediction | traffic data | | (2021) | | | | | | | | | | | | | undirected | - | | | | | | | | | | | | STHAN-SR Sawhney | × | × | × | × | ✓ | ◻ | node ranking | stock prediction | | | | | et al. (2021) | | | | | | | | | | | | | node | | | | | | | | | | | | | attributed | HGC-RNN Yi & Park | × | × | × | × | ✓ | × | feature prediction | traffic flows | | | | (2020) | | | | | | | | | | | | | hypergraph | Hyper-GNN | action | | | | | | | | | | | | 10 | | | | | | | | | | | | attributed | Hao | et | al. | × | × | ✓ | ✓ | ✓ | ◻ | recognition/graph | human motion | | (2021) | classification | | | | | | | | | | | | node-, edge heterogeneous | - | | | | | | | | | | | | MGH | | | | | | | | | | | | | heterogeneous | Yan | et | al. | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | graph | | | | classification | videos | | | | | | | | | | | (2020) | | | | | | | | | | | | | multi | - | | | | | | | | | | | Further approaches combining RNNs and GCNs are, e.g., RE-Net (Jin et al., 2019) and EvolveGCN (Pareja et al., 2020). RE-Net computes local representations for all nodes by applying an RNN to its temporal neighborhood, i.e., the neighbors at different time steps. The model is designed for edge-heterogeneous graphs and aggregates the edges of different types using a GCN before the neighborhood aggregation. To obtain a global node representation, the local node representations over time are processed by another RNN. In contrast to the models mentioned above, EvolveGCN uses the RNN to model the GCN weights, which embeds the graph nodes. More specifically, the weights of each GCN layer are generated by an RNN, which takes the weights of the preceding GCN layer and, optionally, the node embeddings as input. This way, the model adapts the GCN weights along the temporal dimension to tackle the problem of changing node attributes. Another way to handle temporal features in GNNs is **temporal attention**. DySat (Sankar et al., 2020), e.g., generalizes the GAT approach (Velickovic et al., 2018) described in Sec. 1 to dynamic graphs. On the one hand, the model extends the structural attention mechanism. On the other hand, it incorporates an additional temporal attention mechanism that enforces an auto-regressive behavior. Similarly, DyHAN (Yang et al., 2020) generates node embeddings using node-level attention and updates them via neighborhood aggregation and edge-level attention. Finally, the node embeddings are aggregated over time using a temporal attention mechanism. Heterogeneity is accounted for by applying node-level attention at each time step to subgraphs of only one edge type. During edge-level attention, the importance of each edge type is learned through a one-layer MLP. DynGEM (Goyal et al., 2018) takes an entirely different approach and is a dynamically extendable autoencoder for growing graphs. The input and output dimensions are extended respectively for each new incoming node. Since handling hypergraphs is challenging, especially in the dynamic case, few models have been proposed yet. One model that combines RNNs and GCNs is the HGC-RNN (Yi & Park, 2020). It integrates the temporal evolution of higher-ordered structures with two different hypergraph convolutions to encode structural and temporal information, global states, and a subsequent recurrent unit. All other hypergraph models in Tab. 4 involve attention mechanisms. Typically, they combine a hypergraph convolution with a temporal attention mechanism, as in DHAT (Luo et al., 2021), and MGH (Yan et al., 2020). For graph learning on video data, MGH extracts features from classical CNNs of different granularity to define hypergraphs of different types beforehand. The heterogeneity of the edges is then integrated into the model using corresponding attention. A very similar model is Hyper-GNN (Hao et al., 2021). It applies a hypergraph convolution similar to HGNN (Feng et al., 2019) from Sec. 4.2 and a corresponding attention mechanism adapted to neighborhoods on hypergraphs. The overall architecture consists of three parallel networks of the same structure, each processing different input features. STHAN-SR (Sawhney et al., 2021) also applies an attention convolution, which has been designed for static graphs. It applies a HyperGCN model (Yadati et al., 2019) from Sec. 4.2 to process node features that have been generated utilizing an LSTM and a temporal attention mechanism. When considering the types of dynamics the different models can handle, it becomes apparent that most of them focus on specific dynamics, such as dynamic node attributes only (EpiGNN (La Gatta et al., 2020), GCRN (Seo et al., 2018), STHAN-SR (Sawhney et al., 2021), HGC-RNN (Yi & Park, 2020), DHAT (Luo et al., 2021)) or evolving edges (RE-Net (Jin et al., 2019), Hyper-GNN (Hao et al., 2021), WD/CD-GCN (Manessi et al., 2020)). In particular, to the best of our knowledge, MGH (Yan et al., 2020) is the only model capable of processing graphs with changing node and edge sets and node and edge attributes over time. Among all the dynamics, deleting nodes and changing edge attributes have emerged as the least considered and probably most challenging ones. In the case of decreasing node sets, difficulties arise from the changing data structures leading to data gaps, the handling of obsolete data, and in particular, the lack of data and applications in this area. At the same time, the lack of models for changing edge attributes is a consequence of the fact that there are hardly any data and applications for this case. ## 5.2 Gnns For Graphs In Continuous Time Regarding dynamic graphs in continuous-time representation, fewer models use the advantages of this compressed representation, although the amount is growing quickly. Especially dynamic hypergraphs in this form are currently rarely investigated. Using the continuous-time representation allows the usage of explicit timestamps and an explicit specification of the change in the graph instead of processing a graph snapshot in every time step. Therefore, it drastically reduces the storage requirements. Nevertheless, utilizing this representation is challenging due to the absence of a direct encoding of the graph structure at a particular time and the model's requirement to be updateable in case of occurring events. Stochastic processes are frequently used to model dynamic graphs represented as a sequence of events. Typically, such processes model the probability of an event occurring at a specific time. These events encode the graph's dynamics, such as a node's appearance or an attribute's change, and an intensity function describes the distribution of the events over time. The occurrence of an event is modeled based on the most recent events involving the nodes or edges of interest. Examples of approaches utilizing stochastic processes are Know-Evolve (Trivedi et al., 2017) and its extension, DyREP (Trivedi et al., 2019). Know-Evolve considers events of appearing edges of different types. Here, separate embeddings for source and target nodes are computed to take directed edges into account. Furthermore, a learnable function is applied to the difference between a specific node's current and the last event to capture the temporal evolution. Moreover, previous embeddings of the nodes and edges involved in the current event are processed by an RNN-based model to encode the effect of the recurrent participation of each entity in events. The node embedding is further processed by a learnable function, which captures the compatibility of nodes in previous edges. Based on the learned node embeddings, a temporal point process is used to model the probability of an edge occurring between two existing nodes at the next timestamp. Know-Evolve's extension DyREP additionally uses structural information of the graph for two different edge types that represent different ways of communication between nodes. A different approach is proposed in DyGNN (Ma et al., 2020). It utilizes **LSTMs** in two kinds of units, one for the source and the other for target nodes connected through an edge. In the case of link prediction, node pairs are ranked respecting the cosine similarity of their node representations, and in the case of node classification, the softmax function is utilized. Similarly, TGN (Rossi et al., 2020) enables the usage of a memory module, which can be updated using an RNN such as an LSTM or GRU. The obtained node embedding can be used together with a learnable function to perform, e.g., temporal attention, summation, or projection. Afterward, an MLP processes the node embeddings of node pairs to generate a probability for the edge at the next timestamp to perform future link prediction. (Souza et al., 2022) proved a deficit in the expressivity of TGN and proposed the Positional-Encoding Injective Temporal Graph Net (PINT) to overcome this deficit. A node embedding at a certain timestamp is determined by an injective temporal aggregation of the neighborhood, where the attributes of the neighbors are calculated with an MLP over the neighboring node attribute concatenated with the corresponding edge attribute. Further, the performance is improved by concatenating the relative positional features that incorporate information from the temporal walk structures with the node features. Jin et al. (2022) propose a model that uses a recurrent unit combined with temporal walks. Such walks are defined as node sequences in which subsequent nodes have previously been involved in an event together. While sampling temporal walks for a target node, nodes that have been involved in an event with the target node more recently are sampled with a higher probability. By anonymizing these walks into relative and node-unspecific encodings, so-called motifs are created. An autoregressive gated recurrent unit processes these to compute the node embeddings. Since the nodes are sampled irregularly for the temporal walks, the motifs are integrated over multiple interaction time intervals. Table 4: **GNN's learning on continuous-time dynamic** graphs. Due to the difficulties arising from the lack of a direct encoding of the graph structure at each time point, there are only certain graph types covered by models models utilizing graphs in this representation. | this representation. | | | | | | | | | | |-------------------------------------------------------------------------------------------------------------------------|---------------------------|-------------------------------|------------|--------------|---------|-----------------|-----------------|-----------------|---------------------------------------| | Graph Type | Models | Nodes | Edges | Attr. | Problem | Data | | | | | | Category | | | | | | | | | | add | del | add | del | node | edge | | | | | | Know-Evolve | × | × | ✓ | × | × | × | link/time | socio-political | | | Trivedi | et | al. | prediction | interactions | | | | | | | (2017) | | | | | | | | | | | directed | DyGNN | link prediction, | | | | | | | | | Ma | et | al. | node | | | | | | | | ✓ | × | ✓ | × | × | × | | | | | | (2020)11 | classification | communication/ trust networks | | | | | | | | | undirected | DyRep | ✓ | × | ✓ | × | × | × | link/event time | social networks, | | Trivedi et al. | prediction | | | | | | | | | | (2019) | | | | | | | | | | | node- | github | | | | | | | | | | attributed edgeattributed | - | dynamic link | | | | | | | | | attributed | NeurTWs Jin et al. (2022) | × | × | ✓ | × | × | × | prediction | social networks, interaction networks | | node | | | | | | | | | | | heterogeneous | - | | | | | | | | | | Know-Evolve | × | × | ✓ | × | × | × | link/time | socio-political | | | Trivedi | et | al. | prediction | interactions | | | | | | | (2017) | | | | | | | | | | | graph | | | | | | | | | | | CTR | | edge | | | | | | | | | heterogeneous | DyRep | social networks, | | | | | | | | | ✓ | × | ✓ | × | × | × | link/event time | | | | | Trivedi | et | al. | prediction | github | | | | | | | (2019) | | | | | | | | | | | heterogeneous | - | | | | | | | | | | multi | PINT | Souza | | | | | | | | | et | al. | (2022), | | | | | | | | | TGN Rossi | et | al. | | | | | | | | | (2020) | node | | | | | | | | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | classification, | social networks | | | | | edge prediction | | | | | | | | | | 11The baseline models used in the experiments are made for continuous-time dynamic GNNs. All the models used are either | | | | | | | | | | 11The baseline models used in the experiments are made for continuous-time dynamic GNNs. All the models used are either made for static graphs (e.g., GCN, GraphSAGE) or discrete-time dynamics (e.g., DynGEM, DANE, Dynamic Triad). | directed | - | | | | | | | | | | |------------------------------------------------------------|----------------------------------------------------------|----|-----|----|----|----|----|----|----|------------------| | HIT | | | | | | | | | | | | undirected | Liu | et | al. | 12 | × | ✓ | × | × | × | edge-, pattern-, | | ◻ | time prediction | | | | | | | | | | | (2021) | | | | | | | | | | | | hypergraph | Q&A platform, political interactions, patient medication | | | | | | | | | | | (node, edge) attributed, (node, edge) heterogeneous, multi | - | | | | | | | | | | To the best of our knowledge, the only GNN developed for hypergraphs in continuous-time representation is HIT (Liu et al., 2021). To encode structural and temporal information, it uses temporal random walks defined as a randomly selected set of hyperedges backward in time. Afterward, an aggregation mechanism pools the obtained representation into the final node embedding. ## 6 Models Utilizing Semantic Graph Properties Besides the structural graph properties, it is also possible to consider semantic properties in designing a GNN. Although semantic graph properties typically do not explicitly affect the graph's structure (R-5), it can be advantageous to leverage them in GNNs since the graph topology could change or specialized architectures might better preserve the original properties in the learned representation. The necessity for this comes from the data's nature and theoretical considerations to learn structures more efficiently or to explicitly model certain constraints or properties of the data structure. The semantic properties listed in Def. 3.2 are a selection from data-motivated characteristics (e.g., bipartite nodes for user-item modeling, complete graphs for relation prediction) and graph theory (e.g., regular or disconnected graphs, trees). Accordingly, GNNs that integrate some of these properties are presented in Tab. 5. Some characteristics are considered more often in graph learning than others. These include, e.g., complete, acyclic, and bipartite graphs since they reflect frequently occurring characteristics of graph applications. In contrast, recursive graphs or (poly-)trees are considered explicitly only occasionally. To the best of our knowledge, regular graphs do not play a significant role in graph learning. Complete graphs represent the existence of a connection between each node pair. Standard Message-Passing GNNs or Convolutional GNNs are theoretically capable of handling complete graphs. However, especially in the case of GCNs, the neighborhood of all nodes is considered equally, and thus, the information flow in the graph is inexpressive. Some GNNs have been developed for complete graphs to overcome this problem. MGCN (Lu et al., 2019), e.g., is specifically designed to predict properties of molecules represented as a complete graph. The network is a standard Message-Passing Neural Network (MPNN), utilizing node and edge attributes. The crucial innovation is how nodes and edges are embedded. The idea is to model quantum interactions between atoms since these influence the overall properties of the molecule. Initially, the atoms of a molecule define nodes of a complete graph, respecting the number of protons in their nuclei. Then, the edge attributes are constructed using a radial basis function (RBF) layer and processed in a hierarchical GCN to weight nodes in the message-passing. A final node embedding is obtained by executing several convolutions and hierarchically aggregating the neighborhood of increasing depth. The learned node and edge embeddings are summed across the graph to infer the prediction of molecule properties. Since molecule datasets typically comprise labels only for a small fraction of the data and only for smaller molecules, the authors mainly focused on generalizability and transferability between different molecule sizes. As can be observed from the table, there are models focusing on **bipartite graphs**, i.e., graphs that can be divided into two disjoint node sets such that every edge connects a node of one set to a node from the other 12Nodes only appear together with new hyperedges. set. One example is BGNN (He et al., 2019), which focuses on generating a suitable representation for such graphs. For this purpose, information across and within the graph's two partitions (domains) is aggregated to enable inter-domain message passing. The model is trained in a so-called cascade way, i.e., the training of a layer begins after the preceding layer has been fully trained. Thereby, the loss function for the domains is defined layer-wise. Together with a global loss, the quality of the resulting node representation is measured. BipGNN (Wang et al., 2020), in contrast, restricts the convolution to the propagation between the disjoint node sets. The network encoder produces pairwise embeddings for nodes from the two disjoint sets, and the decoder maps these embeddings to an association matrix to perform link prediction between disjoint sets. In the case of **unconnected graphs**, the underlying concept of information flow in GNNs may reach its limits. In standard GNNs, subgraphs without a connection to the rest of the graph are processed isolatedly. Thus, small isolated subgraphs may not provide enough structural information to prevent over-smoothing (Li et al. (2018) of the GNN. To tackle this problem, the generative graph transformer network (GTN) (Yun et al., 2019), e.g., aims to identify valuable connections between unconnected nodes to the rest of the graph. It enables learning on multiple subgraph structures in a heterogeneous graph by concatenating graph convolutions on different meta-paths. Table 5: **GNNs using semantic graph properties.** The specific semantic properties have been selected due to their rather common appearance in graph data. Since some graph characteristics are considered in more applications or in more popular applications, there exist more GNN models, e.g. bipartite graphs are considered often. | often. | | | | |-------------------------|-------------------------------|------------------------------------|-----------------------------------------------------| | Graph Type | Models | Problem | Data Category | | complete | MGCN Lu et al. (2019) | graph attribute prediction | quantum chemistry | | r-regular | - | | | | BipGNN | | | | | bipartite | Wang et al. (2020) | link-rank prediction | drug repurposing | | BGNN He et al. (2019) | node representation learning | social-/citation networks | | | | graph generation, meta-path | | | | unconnected13 | GTN | generation, node classification | citation networks, movie genres | | Yun et al. (2019) DAGNN | | | | | acyclic14 | Thost | & | Chen | | (2021) | node prediction, longest path | source code, neural architectures, | | | | prediction | Bayesian networks | | | GenRecN | | | | | trees | Sperduti & Starita | graph classification | logic terms | | (1997) | | | | | polytrees | CTNN He et al. (2021) | node classification | 3d surfaces in context of hydrological applications | | recursive | MPNN-R Yadati (2020) | node classification | documents in academic networks | | hierarchical | Hier-GNN Chen et al. (2022) | image classification | images | Acyclic graphs occur across various domains, such as source code, neural architectures, or logic terms. DAGNN (Thost & Chen, 2021) learns a representation for directed acyclic graphs driven by the partial order 13Since, to the best of our knowledge, most of the models do not specify the connectedness, it is assumed here that they can handle both connected and unconnected graphs. 14The majority of GNN models in this survey can handle cycles because they are very common in graphs. over the graph nodes. It is an RNN-based message-passing network utilizing an attention module to obtain the messages, which are then forwarded through a GRU. A graph representation is obtained by first concatenating the source and target node representations separately, then max-pooling them and concatenating the results. Particular cases of acyclic graphs are **trees**, i.a., examined in GenRecN (Sperduti & Starita, 1997). This early work, as mentioned in Sec. 4.1, applies a spatial neighborhood convolution on the out-neighbors of a node. Polytrees also serve as suitable representations for some types of data, such as surface contours of 3D data. As shown in CTNN (He et al., 2021), polytrees can be used to model the evolution of the surface contours at different elevation levels. The model uses a U-Net (Ronneberger et al., 2015) architecture with ChebyNet (Defferrard et al., 2016) and diffusion Graph Convolution (Li et al., 2017) Layers, using graph pooling and unpooling methods for the characteristic unit architecture. Another model using a U-Net (Ronneberger et al., 2015) architecture is Hier-GNN (Chen et al., 2022), which explores hierarchical correlations between nodes. For this purpose, specialized pooling and unpooling methods are explicitly defined to encode hierarchical information. Graph convolutions are then applied among a layer in the hierarchy. Finally, MPNN-R (Yadati, 2020) has been developed to encode **recursive graphs**. It is based on G-MPNN (Yadati, 2020) mentioned in Sec. 4.2 and adapts the message-passing function for recursive multi-relational ordered multi-hypergraphs. ## 7 Models For Combined Graphs Arbitrary combinations of graph types can be used to model real-world problems and thus be considered for graph learning purposes. To conclude this work, we give a selected list of graph-type combinations used in several research fields where GNNs are already established. Therefore, this list is not necessarily complete but gives an insight into further research on GNNs considering combined graph types. The architectures listed in Tab. 6 are GNN models specialized for a particular combination of graph properties. Some of them use a selected non-Euclidean space that is assumed to provide a better fit for the specific data. GCN (Kipf & Welling, 2017), e.g., defines a standard spectral graph convolution for simple graphs allowing for one-dimensional edge weights, whereas Hyperbolic GNN (Liu et al., 2019) defines its extension to hyperbolic space. Hyperbolic GNN operates on Riemannian manifolds15 and is independent of the underlying space. Since every point of a differentiable Riemannian manifold can be approximated by Euclidean space, all functions with trainable parameters are executed in Euclidean Space. HVGNN (Sun et al., 2021a) uses a hyperbolic model as well. More precisely, it consists of a temporal graph neural network based on convolution and attention modules and a variational graph auto-encoder in hyperbolic space. A map from time to a hyperbolic space encodes time information to handle time in the convolution process. This way, the aggregation of the features is done in a time-aware neighborhood. The combined graph structures that make up **knowledge graphs** represent a common application for GNNs. They can represent all types of attributes, together with heterogeneity and dynamics. Therefore, many different models have been developed. KGIN (Wang et al., 2021), e.g., uses an attention mechanism that combines different relations into so-called intents to model the user-item relations. These are subsequently used for user and item embeddings modeled via another attention layer to predict the probability of a user adopting an item. A similar use case is approached in SBGNN (Huang et al., 2021), where the two node types of users and items are represented as a bipartite graph connected through signed relations. The model uses a message-passing scheme, including an attention mechanism to encode positive and negative links in recommender, voting, and review systems. HetG (Zhang et al., 2019a) processes a similar graph type. The model is designed to embed heterogeneous graphs with node and edge attributes of any kind. It generates a heterogeneous neighborhood using Random Walk with Restart and applies a Bi-LSTM for heterogeneous content embedding. Different types of nodes are then combined via an attention mechanism. 15A Riemannian manifold is a real and smooth manifold equipped with an inner product at each point of the manifold (Liu et al., 2019). A particular type of graph that can be useful for several applications is the **multiplex graph**. It consists of different layers, each with the same set of nodes but different sets of edges within these layers. Inter-layer edges connect the same nodes across different layers. In MXMNet (Zhang et al., 2020a), a two-layer multiplex graph is utilized such that the so-called local layer is generated with the aid of molecular expert knowledge, and the global layer depends on the neighborhood of the local layer. MXMNet applies a message-passing procedure to each layer separately and enables communication between these layers by defining an additional cross-layer mapping function. Multiplex graphs can also be used to model temporal features without explicitly using a dynamic graph representation as in STGNN (Kapoor et al., 2020). This model is specifically designed to predict the daily new cases of COVID-19 in a particular region based on mobility data. Each layer of the multiplex graph corresponds to a specific period, i.e., a day. Nodes represent regions, and relations within these layers describe human mobility between different regions. Edges between the layers are temporal and define a node's attribute through time. STGNN processes such graphs using spectral graph convolutions. | Graph Type | Models | Problem | Data Category | |---------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------|-----------------------------| | GCN | node classification | citation networks, knowledge | | | Kipf | & | Welling | graphs | | (2017) | | | | | undirected | | | | | node-attributed | Hyperbolic GNN | graph classification, node | synthetic, molecular, | | Liu et al. (2019) | regression | blockchain | | | knowledge graph | KGIN Wang et al. (2021) | link prediction | recommender systems | | content-associated | HetG | | | | heterogeneous | Zhang et al. (2019a) | link prediction, recommendation, (inductive) node classification, node clustering | review networks | | multiplex | MXMNet Zhang et al. (2020a) | graph feature prediction | molecules | | spatio-temporal | STGNN Kapoor et al. (2020) | node attribute prediction | disease spreading | | multi-relational | SBGNN Huang et al. (2021) | link sign prediction | recommender, voting, review | | bipartite | | systems | | | bipartite | | | | | edge-growing in continuous-time | JODIE Kumar et al. (2019) | future user-item | | | | interaction prediction, user state change prediction16 | social media, wiki, music, student actions | | | undirected | | | | | node-attributed edge-dynamic | HVGNN | link prediction, node classification | social, citation, knowledge | | Sun et al. (2021a) | | | | Table 6: **GNN's for combined graph types.** The graph type combinations were selected to cover combinations in fields where the usage of GNNs is already established. JODIE (Kumar et al., 2019) also processes temporal information, but it directly encodes the dynamics using an RNN. It can be considered a particular case of the TGN-Model (Rossi et al., 2020). For node embeddings, it also uses a memory module that can be updated using an RNN. The message-passing function is set to the identity and applied together with a learnable time projection function. The model is evaluated, e.g., on link prediction between users and items inferred from the distance between the embeddings of a pair of nodes. 16The task is to predict if an interaction will lead to a state change in user, particularly in two use cases: predicting if a user will be banned and predicting if a student will drop-out of a course. ## 8 Conclusion And Future Work This survey provides a fine-grained overview of Graph Neural Networks for graph types of different structural constitutions. To the best of our knowledge, this is the first work to survey which graph types are addressed by published GNNs. We overviewed and defined the most common graph types and properties and the respective GNN models. Moreover, we identified GNN models specialized for specific graph properties and investigated how they handle these. This way, we could relate formal graph properties to the corresponding practical GNN models. Furthermore, we analyzed the architecture of the considered models and grouped them according to the modules they apply, i.e., the type of layer, such as convolutional or recurrent layers. Additionally, we analyzed GNN models concerning dynamics and grouped the models according to the types of dynamics they can process. Our work allows several conclusions to be drawn and identifies gaps concerning the graph types, properties, and dynamics that GNN models can handle. First, existing GNN models can, in principle, handle the most common structural graph properties (e.g., attributed, directed, node-heterogeneity) for static graphs and hypergraphs. The lack of models for a few properties results from the existence of more general models, e.g., there is no GNN model for node-heterogeneous graphs in discrete time, but there is one for fully heterogeneous graphs. Another reason could be a lack of standard graph data sets for such types. Furthermore, there are many GNN models which consider graphs in discrete-time representation. These models cover the most common graph types and properties except for the multiplicity of nodes or edges. A difficulty in handling multiplicity results from the inability of a standard graph's adjacency matrix to encode duplicate nodes. When it comes to the models for graphs in continuous time, it is evident that there are substantial gaps in research on GNNs for most of the graph types compared to the discrete case. In particular, only one model for hypergraphs has been found. Generally, developing GNNs for continuous-time graphs is still a young field of GNN research. Another reason for the small number of graph types covered by models for continuous-time graphs could be that such models typically use stochastic point processes to model the dynamic behavior of the graphs. The number of different events increases with the number of dynamic graph properties considered. Since a point process models each event, the model becomes more complex. From the results from Tab. 4, it can be observed that most events model discrete outputs in continuous time, such as whether there is a new edge. When including attribute dynamics for real-valued attributes, the model must deal with continuous values in continuous time, making the model more computationally intensive. Most dynamic graphs addressed by GNN models exhibit only specific dynamics, such as strictly growing graphs or dynamic node attributes. GNN models for graphs with dynamics in the edge attributes and the deletion of nodes are scarce. To the best of our knowledge, only one model, MGH(Yan et al., 2020), has been developed to process graphs with all dynamics considered in this work, i.e., changing node and edge sets as well as changing node and edge attributes. Reasons for this may be the popularity of problems where graphs are growing over time and node deletions are believed not to play a crucial role (as, e.g., in citation networks, recommender systems, or data networks) and the difficulties that arise when combining known GNN techniques for dynamic graphs. Considering the discrete-time representation of graphs, e.g., GNN techniques for static graphs are usually applied to every graph snapshot and combined with an RNN to capture the dynamics, leading to computationally expensive models. Finally, existing GNN models have been developed to cover many semantic graph properties or for particular combined graph types dependent on the given data structure, which shows that multiple graph properties can be learned simultaneously by GNN models. To sum up, the research on GNNs for particular graph types has become a hot area in recent years. However, this extensive survey could reveal gaps in graph types, properties, and dynamics that are not yet considered sufficiently in the GNN community. ## 9 Notation Table 7: Notation used throughout this work. | N | natural numbers | |---------|--------------------------------------------------------------| | N0 | natural numbers starting at 0 | | R | real numbers | | R k | R vector space of dimension k | | ∣a∣ | absolute value of a real a | | ∥ ⋅ ∥ | norm on R | | ∣M∣ | number of elements of a set M | | ∅ | empty set | | {⋅} | set | | {∣ ⋅ ∣} | multiset, i.e., set allowing multiple appearances of entries | | ∪ | union of two (multi)sets | | ⊍ | disjoint union of two (multi)sets | | ⊆ | sub(multi)set | | × | factor set of two sets | | ψ | learnable message function | | ϕ | activation function | | σ | sigmoid activation function | | ⊕ | aggregation | | ∣∣ | concatenation | | A | adjacency matrix | | A˜ | adjacency matrix with self-loops | | B | edge degree matrix | | D | node degree matrix | | D˜ | node degree matrix with self-loops | | E | identity matrix | | I | incidence matrix | | L | Laplacian matrix | | W | edge weight matrix | ## Author Contributions All authors contributed equally. ## Acknowledgments The GAIN project is funded by the Ministry of Education and Research Germany (BMBF), under the funding code 01IS20047A, according to the 'Policy for the funding of female junior researchers in Artificial Intelligence'. The authors would like to thank Mohamed Hassouna, and Jan Schneegans for the fruitful discussion and corrections of the manuscript. ## References Stephen Abbott. Understanding Analysis, volume 2. Springer, 2001. James A. Anderson and Edward Rosenfeld. Neurocomputing: Foundations of Research. MIT Press, Cambridge, MA, USA, 1988. ISBN 0262010976. Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021. Sambaran Bandyopadhyay, Kishalay Das, and M Narasimha Murty. Line hypergraph convolution network: Applying graph convolution for hypergraphs. arXiv preprint arXiv:2002.03392, 2020. Claudio DT Barros, Matheus RF Mendonça, Alex B Vieira, and Artur Ziviani. A survey on embedding dynamic graphs. ACM Computing Surveys (CSUR), 55(1):1–37, 2021. Publisher: ACM New York, NY. Claude Berge. Graphs and Hypergraphs. 1973. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017. doi:10.1109/MSP.2017.2693418. Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges, 2021. URL https://arxiv.org/abs/2104.13478. Hongyun Cai, Vincent W. Zheng, and Kevin Chen-Chuan Chang. A comprehensive survey of graph embedding: Problems, techniques and applications. arXiv:1709.07604 [cs], February 2018. URL http: //arxiv.org/abs/1709.07604. arXiv: 1709.07604. Cen Chen, Kenli Li, Wei Wei, Joey Tianyi Zhou, and Zeng Zeng. Hierarchical graph neural networks for few-shot learning. IEEE Trans. Circuits Syst. Video Technol., 32(1):240–252, 2022. doi:10.1109/TCSVT.2021.3058098. URL https://doi.org/10.1109/TCSVT.2021.3058098. Robert J. Cimikowski and Paul Shope. A neural-network algorithm for a graph layout problem. IEEE Trans. Neural Networks, 7(2):341–345, 1996. doi:10.1109/72.485670. URL https://doi.org/10.1109/72. 485670. Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. A survey on network embedding. IEEE transactions on knowledge and data engineering, 31(5):833–852, 2018. Publisher: IEEE. Sanjoy Dasgupta. Learning polytrees. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99, pp. 134–141, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc. ISBN 1558606149. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in neural information processing systems, 29, 2016. Lisa Ehrlinger and Wolfram Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1-4):2, 2016. Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 3558–3565. AAAI Press, 2019. doi:10.1609/aaai.v33i01.33013558. URL https://doi.org/10.1609/ aaai.v33i01.33013558. Fernando Gama, Elvin Isufi, Geert Leus, and Alejandro Ribeiro. Graphs, convolutions, and neural networks: From graph filters to graph neural networks. IEEE Signal Processing Magazine, 37(6):128–138, November 2020. ISSN 1558-0792. doi:10.1109/MSP.2020.3016143. Ira M. Gessel and Richard P. Stanley. Algebraic enumeration. Handbook of combinatorics, 2:1021–1061, 1995. Liyu Gong and Qiang Cheng. Exploiting edge features for graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9211–9219, 2019. Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151:78–94, 2018. Publisher: Elsevier. Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. Dyngem: Deep embedding method for dynamic graphs, 2018. URL http://arxiv.org/abs/1805.11273. William L. Hamilton. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1–159, 2020. William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull., 40(3):52–74, 2017. URL http://sites.computer.org/debull/ A17sept/p52.pdf. Xiaoke Hao, Jie Li, Yingchun Guo, Tao Jiang, and Ming Yu. Hypergraph neural network for skeleton-based action recognition. IEEE Transactions on Image Processing, 30:2263–2275, 2021. doi:10.1109/TIP.2021.3051495. Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Yanfang Li, Junzhou Huang, Xiang Ren, and Cyrus Shahabi. Bipartite graph neural networks for efficient node representation learning. arXiv e-prints, pp. arXiv–1906, 2019. Wenchong He, Arpan Man Sainju, Zhe Jiang, and Da Yan. Deep neural network for 3d surface segmentation based on contour tree hierarchy. In Carlotta Demeniconi and Ian Davidson (eds.), Proceedings of the 2021 SIAM International Conference on Data Mining, SDM 2021, Virtual Event, April 29 - May 1, 2021, pp. 253– 261. SIAM, 2021. doi:10.1137/1.9781611976700.29. URL https://doi.org/10.1137/1.9781611976700. 29. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. CoRR, abs/2005.00687, 2020. URL https://arxiv.org/abs/2005.00687. Youpeng Hu, Xunkai Li, Yujie Wang, Yixuan Wu, Yining Zhao, Chenggang Yan, Jian Yin, and Yue Gao. Adaptive hypergraph auto-encoder for relational data clustering. IEEE Transactions on Knowledge and Data Engineering, 2021. Junjie Huang, Huawei Shen, Qi Cao, Shuchang Tao, and Xueqi Cheng. Signed bipartite graph neural networks. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 740–749, 2021. Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari. In NeurIPS, 2018. Vassilis N Ioannidis, Antonio G Marques, and Georgios B Giannakis. A recurrent graph neural network for multi-relational data. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8157–8161. IEEE, 2019. Weiwei Jiang and Jiayun Luo. Graph neural network for traffic forecasting: A survey. arXiv:2101.11174 [cs], February 2022. URL http://arxiv.org/abs/2101.11174. arXiv: 2101.11174. Ming Jin, Yuan-Fang Li, and Shirui Pan. Neural temporal walks: Motif-aware representation learning on continuous-time dynamic graphs. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/ forum?id=NqbktPUkZf7. Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. arXiv preprint arXiv:1904.05530, 2019. Cliff A. Joslyn and Kathleen Nowak. Ubergraphs: A definition of a recursive hypergraph structure. CoRR, abs/1704.05547, 2017. URL http://arxiv.org/abs/1704.05547. Amol Kapoor, Xue Ben, Luyang Liu, Bryan Perozzi, Matt Barnes, Martin Blais, and Shawn O'Banion. Examining covid-19 forecasting using spatio-temporal graph neural networks. arXiv preprint arXiv:2007.03113, 2020. Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation Learning for Dynamic Graphs: A Survey. Journal of Machine Learning Research, 21(70):1–73, 2020. URL http://jmlr.org/papers/v21/19-447.html. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id= SJU4ayYgl. Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In Ankur Teredesai, Vipin Kumar, Ying Li, Rómer Rosales, Evimaria Terzi, and George Karypis (eds.), Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pp. 1269–1278. ACM, 2019. doi:10.1145/3292500.3330895. URL https://doi.org/10.1145/3292500.3330895. Valerio La Gatta, Vincenzo Moscato, Marco Postiglione, and Giancarlo Sperli. An epidemiological neural network exploiting dynamic graph structured data applied to the covid-19 outbreak. IEEE Transactions on Big Data, 7(1):45–55, 2020. Jenn-Shiang Lai, Sy-Yen Kuo, and Ing-Yi Chen. Neural networks for optimization problems in graph theory. In 1994 IEEE International Symposium on Circuits and Systems, ISCAS 1994, London, England, UK, May 30 - June 2, 1994, pp. 269–272. IEEE, 1994. doi:10.1109/ISCAS.1994.409578. URL https: //doi.org/10.1109/ISCAS.1994.409578. John Boaz Lee, Ryan A Rossi, Sungchul Kim, Nesreen K Ahmed, and Eunyee Koh. Attention models in graphs: A survey. ACM Transactions on Knowledge Discovery from Data (TKDD), 13(6):1–25, 2019. Publisher: ACM New York, NY, USA. Qimai Li, Zhichao Han, and Xiao-ming Wu. Deeper insights into graph convolutional networks for semisupervised learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), Apr. 2018. doi:10.1609/aaai.v32i1.11604. URL https://ojs.aaai.org/index.php/AAAI/article/view/11604. Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv preprint arXiv:1707.01926, 2017. Wenlong Liao, Birgitte Bak-Jensen, Jayakrishnan Radhakrishna Pillai, Yuelong Wang, and Yusen Wang. A review of graph neural networks and their applications in power systems. Journal of Modern Power Systems and Clean Energy, pp. 1–16, 2021. ISSN 2196-5420. doi:10.35833/MPCE.2021.000058. Conference Name: Journal of Modern Power Systems and Clean Energy. Qi Liu, Maximilian Nickel, and Douwe Kiela. Hyperbolic graph neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/ 2019/file/103303dd56a731e377d01f6a37badae3-Paper.pdf. Yunyu Liu, Jianzhu Ma, and Pan Li. Neural higher-order pattern (motif) prediction in temporal networks. arXiv preprint arXiv:2106.06039, 2021. Chengqiang Lu, Qi Liu, Chao Wang, Zhenya Huang, Peize Lin, and Lixin He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 1052–1060. AAAI Press, 2019. doi:10.1609/aaai.v33i01.33011052. URL https://doi.org/10.1609/aaai.v33i01.33011052. Xiaoyi Luo, Jiaheng Peng, and Jun Liang. Directed hypergraph attention network for traffic forecasting. IET Intelligent Transport Systems, n/a(n/a), 2021. doi:https://doi.org/10.1049/itr2.12130. URL https: //ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/itr2.12130. Yao Ma, Ziyi Guo, Zhaochun Ren, Eric Zhao, Jiliang Tang, and Dawei Yin. Streaming graph neural networks. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR' 20, pp. 719–728, 2020. Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. Pattern Recognit., 97, 2020. doi:10.1016/j.patcog.2019.107000. URL https://doi.org/10.1016/j.patcog.2019. 107000. Scott McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg Corrado, Ara Darzi, Mozziyar Etemadi, Florencia Garcia-Vicente, Fiona Gilbert, Mark Halling-Brown, Demis Hassabis, Sunny Jansen, Alan Karthikesalingam, Christopher Kelly, Dominic King, Joseph Ledsam, David Melnick, Hormuz Mostofi, Lily Peng, Joshua Reicher, Bernardino Romera-Paredes, Richard Sidebottom, Mustafa Suleyman, Daniel Tse, Kenneth Young, Jeffrey De Fauw, and Shravya Shetty. International evaluation of an ai system for breast cancer screening. Nature, 577: 89–94, 12/2020 2020. ISSN 1476-4687. doi:10.1038/s41586-019-1799-6. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 4602–4609, 2019. Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 5363–5370. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5984. Yun Peng, Byron Choi, and Jianliang Xu. Graph learning for combinatorial optimization: A survey of state-of-the-art. Data Science and Engineering, 6(2):119–141, 2021. Publisher: Springer. Yuxiang Ren, Bo Wang, Jiawei Zhang, and Yi Chang. Adversarial active learning based heterogeneous graph neural network for fake news detection. In 2020 IEEE International Conference on Data Mining (ICDM), pp. 452–461, 2020. doi:10.1109/ICDM50108.2020.00054. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Springer, 2015. Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael M. Bronstein. Temporal graph networks for deep learning on dynamic graphs, 2020. URL https://arxiv.org/abs/ 2006.10637. Ryan A. Rossi and Nesreen K. Ahmed. The network data repository with interactive graph analytics and visualization. In AAAI, 2015. URL https://networkrepository.com. Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. Dysat: Deep neural representation learning on dynamic graphs via self-attention networks. In James Caverlee, Xia (Ben) Hu, Mounia Lalmas, and Wei Wang (eds.), WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pp. 519–527. ACM, 2020. doi:10.1145/3336191.3371845. URL https://doi.org/10.1145/3336191.3371845. Ryoma Sato. A survey on the expressive power of graph neural networks. arXiv:2003.04078 [cs, stat], October 2020. URL http://arxiv.org/abs/2003.04078. arXiv: 2003.04078. Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, Tyler Derr, and Rajiv Ratn Shah. Stock selection via spatiotemporal hypergraph attention network: A learning to rank approach. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 497–504, 2021. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20:61–80, 2009. ISSN 1941-0093. doi:10.1109/TNN.2008.2005605. Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. Structured sequence modeling with graph convolutional recurrent networks. In Long Cheng, Andrew Chi-Sing Leung, and Seiichi Ozawa (eds.), Neural Information Processing - 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, December 13-16, 2018, Proceedings, Part I, volume 11301 of Lecture Notes in Computer Science, pp. 362–373. Springer, 2018. doi:10.1007/978-3-030-04167-0_33. URL https://doi.org/10. 1007/978-3-030-04167-0_33. Hong Shi, Xiaomene Zhang, Shizhong Sun, Lin Liu, and Lin Tang. A survey on bayesian graph neural networks. In 2021 13th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), pp. 158–161, August 2021. doi:10.1109/IHMSC52134.2021.00044. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pp. 802–810, 2015. URL https://proceedings.neurips.cc/ paper/2015/hash/07563a3fe3bbe7e3ba84431ad9d055af-Abstract.html. Jonathan Shlomi, Peter Battaglia, and Jean-Roch Vlimant. Graph neural networks in particle physics. Machine Learning: Science and Technology, 2(2):021001, 2020. Publisher: IOP Publishing. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018. doi:10.1126/science.aar6404. URL https://www.science. org/doi/abs/10.1126/science.aar6404. Joakim Skarding, Bogdan Gabrys, and Katarzyna Musial. Foundations and modelling of dynamic networks using dynamic graph neural networks: A survey. IEEE Access, 9, 2021. ISSN 2169-3536. doi:10.1109/ACCESS.2021.3082932. URL http://arxiv.org/abs/2005.07496. arXiv: 2005.07496. Amauri H Souza, Diego Mesquita, Samuel Kaski, and Vikas Garg. Provably expressive temporal graph networks. In NeurIPS 2022 Workshop: New Frontiers in Graph Learning, 2022. URL https://openreview. net/forum?id=2neXknZg9ej. Alessandro Sperduti. Neural networks for processing data structures. In C. Lee Giles and Marco Gori (eds.), Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, "E.R. Caianiello", Vietri sul Mare, Salerno, Italy, September 6-13, 1997, Tutorial Lectures, volume 1387 of Lecture Notes in Computer Science, pp. 121–144. Springer, 1997. doi:10.1007/BFb0053997. URL https://doi.org/10.1007/BFb0053997. Alessandro Sperduti and Antonina Starita. Supervised neural networks for the classification of structures. IEEE Trans. Neural Networks, 8(3):714–735, 1997. doi:10.1109/72.572108. URL https://doi.org/10. 1109/72.572108. Edgar-Philipp Stoffel, Korbinian Schoder, and Hans Jürgen Ohlbach. Applying hierarchical graphs to pedestrian indoor navigation. In Proceedings of the 16th ACM SIGSPATIAL international conference on Advances in geographic information systems, pp. 1–4, 2008. Gilbert Strang. Introduction to Linear Algebra, volume 3. Wellesley-Cambridge Press Wellesley, MA, 1993. Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, and Philip S Yu. Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 4375–4383, 2021a. Xiangguo Sun, Hongzhi Yin, Bo Liu, Hongxu Chen, Jiuxin Cao, Yingxia Shao, and Nguyen Quoc Viet Hung. Heterogeneous hypergraph embedding for graph classification. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pp. 725–733, 2021b. Josephine M. Thomas, Silvia Beddar-Wiesing, Alice Moallemy-Oureh, and Rüdiger Nather. Graph type expressivity and transformations. CoRR, abs/2109.10708, 2021. URL https://arxiv.org/abs/2109. 10708. Veronika Thost and Jie Chen. Directed acyclic graph neural networks. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=JbuYF437WB6. Loc Hoang Tran and Linh Hoang Tran. Directed hypergraph neural network. CoRR, abs/2008.03626, 2020. URL https://arxiv.org/abs/2008.03626. Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3462–3471. PMLR, 2017. URL http://proceedings.mlr. press/v70/trivedi17a.html. Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id= HyePrhR5KX. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=rJXMpikCZ. Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He, and Tat-Seng Chua. Learning intents behind interactions with knowledge graph for recommendation. In Proceedings of the Web Conference 2021, pp. 878–887, 2021. Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention network. In The World Wide Web Conference, pp. 2022–2032, 2019. Zichen Wang, Mu Zhou, and Corey Arnold. Toward heterogeneous information fusion: Bipartite graph convolutional networks for in silico drug repurposing. Bioinformatics, 36(Supplement_1):i525–i533, 2020. Lingfei Wu, Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Jian Pei, and Bo Long. Graph neural networks for natural language processing: A survey. arXiv preprint arXiv:2106.06090, 2021. Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, and Bin Cui. Graph neural networks in recommender systems: A survey. arXiv:2011.02260 [cs], February 2022. URL http://arxiv.org/abs/2011.02260. arXiv: 2011.02260. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S. Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020. Tian Xia and Wei-Shinn Ku. Geometric graph representation learning on protein structure prediction. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1873– 1883, 2021. Zhang Xinyi and Lihui Chen. Capsule graph neural network. In International Conference on Learning Representations, 2018. Naganand Yadati. Neural message passing for multi-relational ordered and recursive hypergraphs. Advances in Neural Information Processing Systems, 33, 2020. Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha P. Talukdar. Hypergcn: A new method for training graph convolutional networks on hypergraphs. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 1509–1520, 2019. Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, and Ling Shao. Learning multi-granular hypergraphs for video-based person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. Luwei Yang, Zhibo Xiao, Wen Jiang, Yi Wei, Yi Hu, and Hao Wang. Dynamic heterogeneous graph embedding using hierarchical attentions. Advances in Information Retrieval, 12036:425, 2020. Jaehyuk Yi and Jinkyoo Park. Hypergraph convolutional recurrent neural network. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3366–3376, 2020. Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. Jaket: Joint pre-training of knowledge graph and language understanding. arXiv preprint arXiv:2010.00796, 2020. Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji. Explainability in graph neural networks: A taxonomic survey. arXiv:2012.15445 [cs], March 2021. URL http://arxiv.org/abs/2012.15445. arXiv: 2012.15445. Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. Graph transformer networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/file/9d63484abb477c97640154d40595a3bb-Paper.pdf. Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V. Chawla. Heterogeneous graph neural network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 793–803, 2019a. Shuo Zhang, Yang Liu, and Lei Xie. Molecular mechanics-driven graph neural network with multiplex graph for molecular structures, 2020a. URL https://arxiv.org/abs/2011.07457. Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: Algorithms, applications and open challenges. In International Conference on Computational Social Networks, pp. 79–91. Springer, 2018. Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: a comprehensive review. Computational Social Networks, 6(1):1–23, December 2019b. ISSN 21974314. doi:10.1186/s40649-019-0069-y. URL https://computationalsocialnetworks.springeropen. com/articles/10.1186/s40649-019-0069-y. Number: 1 Publisher: SpringerOpen. Xitong Zhang, Yixuan He, Nathan Brugnone, Michael Perlmutter, and Matthew J. Hirn. Magnet: A neural network for directed graphs. Advances in neural information processing systems, 34:27003–27015, 2021. Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering, 2020b. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57–81, 2020. Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):457–466, 2018. ## 10 Appendix 10.1 Reasons For Selection Of The Models Per Graph Type To clarify why each model was chosen for a certain graph type, the reasons are listed in Tab.8. Additionally, the criterion of generality, i.e. the applicability of the model not only for a certain domain, is fulfilled by most of the models. Table 8: **Reasons for Selection of the Models per Graph** Type. The number of citations is taken from google scholar in the beginning of March 2023. | Model | Reason(s) for Selection | |-------------------------------------------------------|------------------------------------------------------------------| | Models from Tab. 1 GenRecN (Sperduti & Starita, 1997) | cited often (783 times), historical importance | | MagNet Zhang et al. (2021) | explicit addressing of graph property | | GNN* Scarselli et al. (2009) | cited very often (5433 times) | | GAT Velickovic et al. (2018) | cited very often (5705 times), commonly used as basline | | CapGNN Xinyi & Chen (2018) | cited often (172 times) | | WL Morris et al. (2019) | cited often (914 times) | | EGNN Gong & Cheng (2019) | cited often (195 times) | | GRNN Ioannidis et al. (2019) | explicit addressing of graph property | | AA-HGNN Ren et al. (2020) | explicit addressing of graph property | | HAN Wang et al. (2019) | explicit addressing of graph property | | Models from Tab. 2 NDHGNN Tran & Tran (2020) | only model found for graph property | | HyperGCN Yadati et al. (2019) | cited often (202 times) | | HyperConvAtt Bai et al. (2021) | cited often (239 times) | | LHCN Bandyopadhyay et al. (2020) | more straight forward than other models (simplicity) | | HGNN Feng et al. (2019) | cited often (566 times), commonly used as baseline | | AHGAE Hu et al. (2021) | only model found for graph property | | HWNN Sun et al. (2021b) | explicit addressing of graph property | | G-MPNN Yadati (2020) | explicit addressing of graph property | | Models from Tab. 3 EpiGNN La Gatta et al. (2020) | more straight forward than other models (simplicity) | | DySAT Sankar et al. (2020) | cited often (254 times), commonly used as baseline | | (WD/CD)-GCN Manessi et al. (2020) | cited often (229 times), can handle node attributes unlike DySat | | GCRN Seo et al. (2018) | cited often (537 times) | | DynGEM Goyal et al. (2018) | cited often (322 times) | | EvolveGCN Pareja et al. (2020) | cited often (532 times) | | RE-Net Jin et al. (2019) | only model found for graph property | | DyHAN Yang et al. (2020) | only model found for graph property | | DHAT Luo et al. (2021) | only model found for graph property | | STHAN-SR Sawhney et al. (2021) | can handle edge attributes unlike HGC-RNN | |-------------------------------------------------------|-----------------------------------------------------------------------------------------------------| | HGC-RNN Yi & Park (2020) | more straight forward than other models (simplicity) | | Hyper-GNN Hao et al. (2021) | cited often (33 times since 2021), more straight forward than other models (simplicity) | | MGH Yan et al. (2020) | only model found for graph property that can handle all dynamics in DTR | | Models from Tab. 4 Know-Evolve (Trivedi et al., 2017) | cited often (337 times), commonly used as baseline | | DyGNN Ma et al. (2020) | cited often (121 times), more recent and better performing than predessesor Know-Evolve | | DyRep Trivedi et al. (2019) | cited often (301 times), commonly used as baseline | | NeurTWs Jin et al. (2022) | recent model | | PINT Souza et al. (2022) | recent model, better performace than TGN | | TGN Rossi et al. (2020) | cited often (246 times), commonly used as baseline | | HIT Liu et al. (2021) | only model found for graph property | | Models from Tab. 5 MGCN Lu et al. (2019) | explicit addressing of graph property | | BipGNN Wang et al. (2020) | explicit addressing of graph property | | BGNN He et al. (2019) | explicit addressing of graph property | | GTN Yun et al. (2019) | explicit addressing of graph property, cited often(492 times) | | DAGNN Thost & Chen (2021) | explicit addressing of graph property, cited often (51 times since 2021) | | CTNN He et al. (2021) | explicit addressing of graph property | | MPNN-R Yadati (2020) | explicit addressing of graph property | | Hier-GNN Chen et al. (2022) | explicit addressing of graph property | | Models from Tab. 6 GCN Kipf & Welling (2017) | cited very often (21945 times) | | Hyperbolic GNN Liu et al. (2019) | cited often (220 times), explicit addressing of graph property | | KGIN Wang et al. (2021) | cited most (120 times) among models for this graph type, explicit addressing of graph property | | HetG Zhang et al. (2019a) | cited often (775 times) | | MXMNet Zhang et al. (2020a) | cited most (29 times) among among models for this graph type, explicit addressing of graph property | | STGNN Kapoor et al. (2020) | often cited (148 times), explicit addressing of graph property | | SBGNN Huang et al. (2021) | explicit addressing of graph property | | JODIE Kumar et al. (2019) | often cited (342 times), explicit addressing of graph property | | HVGNN Sun et al. (2021a) | explicit addressing of graph property |
Review 1: Summary: The paper provides a survey of Graph Neural Networks (GNNs) with a focus on different graph types and properties and the GNNs designed for the corresponding categories. In particular, Section 2 and Section 3 introduce the basic knowledge of graphs and GNNs. The following sections (4, 5, 6, and 7) summarize existing GNN research on structural graphs, dynamic graphs, semantic graphs, and combined graphs. Strengths and Weaknesses: Strengths: 1. The motivation of the paper is great since it tries to summarize the current research on GNNs from the data perspective. Essentially, various graphs of types and attributes might need a customized GNN design. The survey provides a more comprehensive view compared with existing survey papers. 2. The paper tries to provide a high-level categorization of all kinds of graphs and summarizes representative GNN models for each category. Some model designs are discussed and analyzed. Weaknesses: 1. The writing of the paper needs significant improvement in terms of language, grammar, and logic. 2. The taxonomy of graphs in Section 2 is a bit unclear and confusing. Although it is mentioned that all basic definitions concerning graph types and their structural properties are taken from Thomas et al. (2021), there is a lack of discussion on why this taxonomy is reasonable and suitable for the discussion of GNNs. For instance, what are the intrinsic differences between "Structural graphs" and "Semantic graphs"? 3. While the paper mentions that the cited models are selected according to their up-to-dateness, relevance, general applicability, explicit addressing of a specific graph type, and simplicity. It is unclear how these cafeterias are actually carried out. A significant portion of references is from arXiv or some less-known venues while only a small portion of papers is from well-known Machine Learning or Data Mining venues. For instance, to the best of my knowledge, there are many more representative models for directed graphs not being cited while the cited works are less known. 4. The paper summarizes many existing works, but there is a lack of deeper discussion on their advantages, disadvantages, connections, potential, and empirical performance. 5. It would be better to provide a summary of implementations of cited works, i.e., adding links to the table. Other minor comments: 1. From what is described in section 3.2.3, the differences between spatial and spectral convolution are unclear. 2. It is suggested to adjust the wording to avoid lines with one a few words. This can make the space compact and save more space for deeper discussions. Requested Changes: Please refer to the comments in weakness. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper is a survey that categorized existing GNNs according to what graph types they could handle. First, they gave mathematical definitions of various graph types and basic GNN architectures. Then, for each graph type, they gave representative GNNs that could handle it. Based on these classifications, the paper identified graph types that many GNNs supported and ones that few GNNs supported and discussed the reasons for the differences. According to the authors, the contributions of this paper are as follows (P2, Section 1): 1. gave a comprehensive definition of the various graph types 2. classified existing GNNs by graph types they can handle 3. related GNNs to each other based on their architectures 4. identified graph types that current GNN research is missing Strengths and Weaknesses: **Strengths** 1. Few survey papers focused on the graph topology (semantic) and classified GNNs according to which topology they can handle (Section 6) 2. The paper not only listed GNNs that support each graph type but also discussed how much and why GNN research for the type was active or not. 3. The paper is well-organized and clearly written. **Weaknesses** 1. There is room for improvement in the description of graph-type definitions. 2. Contributions 1 and 3 above could be stronger in terms of novelty. **Soundness** [S-1] I think the above contributions are well-supported. [S-2] Contribution 1 is discussed in Section 3. Specifically, graph types such as elementary properties (simple graph/hypergraph), temporal properties (static/discrete-time dynamic/continuous-time dynamic graph), and semantic properties (e.g., bipartite/complete.) This paper considered graph types for each property, such as undirected/directed, node-/edge- attributed, and node-/edge- heterogeneous. In addition, special examples of their combinations were discussed, such as knowledge graphs, multi-relational graphs, and multiplex graphs. These definitions are reasonable, although there are some minor suggestions (see Requested Changes). [S-3] Contribution 2 is discussed in Sections 4--7: Section 4 for simple graphs and hypergraphs, Section 5 for static and dynamic graphs, Section 6 for graphs with semantic properties, and Section 7 for combined graphs. I am afraid I have yet to be able to check all GNN models this paper referred to. However, as far as I know, there were no major errors in the paper's classification of GNNs. [S-4] Contribution 3 is discussed in Section 3.2, although it is not explicitly stated in the text. Since the contents mostly relied on Bronstein et al. (2021), this part is not the contribution of this paper. [S-5] Contribution 4 is discussed in Section 8. I agree that for node-heterogeneous static graphs, graphs with duplicated nodes/edges, and continuous-time dynamic graphs, few GNNs support these graph types. Also, the reasons for this phenomenon were reasonable. **Novelty and significance** [N-1] For Contribution 1, giving mathematical definitions of the graph types was preferred because the definitions of these graph types often vary from paper to paper. It certainly increased the readability of the paper. On the other hand, the graph types were not new in themselves. Also, this unification of terminology was mainly for the preparation of the following sections (especially, Contribution 2). Therefore, novelty is limited on its own. Nevertheless, this classification of the graph types serves as a foundation on which subsequent studies will be developed. [N-2] For Contribution 2, I agree with the authors' claim that although there were many GNN surveys for individual graph types (or some of them), few studies comprehensively classified them based on graph types. In addition, surveys of GNNs focusing on semantics are rare. From these points, I think we can recognize the novelty in this respect. [N-3] For Contribution 3, since the discussion follows that of Bronstein et al. (2021), I do not think this part is novel. [N-4] For Contribution 4, the discussion is novel. To the best of my knowledge, little literature has analyzed the relationship between graph types and the amount of GNNs models that supported them. However, I have several questions regarding the discussion on continuous dynamic graphs (see Requested Changes). Requested Changes: [R-1] P1, Section 1: ,,structure'' -> I do not think using such quotations in English papers is common. "structure" or $\textit{structure}$ are more suitable. [R-2] P3, Section 3.1: >In this sense, a directed hypergraph is a directed graph that simultaneously is a hypergraph In Definition 3.1, hypergraphs are described as elementary. However, this sentence described them as a composition of elementary objects. Therefore, they need to look more consistent. [R-3] P4, Figure 1: I would like to confirm the source of the dog image. [R-4] P5, Definition 3.3: In the definition of growing, $\mathcal{E}\_i \subseteq \mathcal{E}\_{i+1}$ implicitly assumed $\mathcal{V}\_i \subseteq \mathcal{V}\_{i+1}$. Therefore, it is weird that these conditions were connected by the "or" expression. The same is true for the definition of shrinking. [R-5] P6, Definition 3.6: I would like to clarify how the paper used the following terms: structure, topology, and semantics. It looks strange to call graph properties such as complete, regular, bipartite, and so on "semantic ." Also, I could not understand the following sentence in P18: > Although semantic graph properties typically do not explicitly affect the graph's topology, [...] I want to know what the authors intended to mean by a graph's topology -- I think that the properties such as complete are something that I imagine the topology of the graph: [R-6] P6, Definition 3.6: 7. recursive and 10. hyperbolic were not well defined. Therefore, it is difficult to understand these concepts from these descriptions alone. I suggest the authors guide the readers to the reference therein. [R-7] P6, Definition 3.6, 8: Edges in $H$ correspond to ... -> $G$ [R-8] P7, Definition 3.7, 4: The notations $v\in e$ in 1 and $e\subseteq \mathcal{V}$ implied that the graph is interpreted as a hypergraph. It is better to state this explicitly. [R-9] P7, Definition 3.7, 7: random -> randomly [R-10] P7, Definition 3.7, 8: It would be difficult for someone unfamiliar with the concept of metapaths to understand its meaning from this definition alone. It is preferable to provide appropriate definitions or references. [R-11] P7, Definition 3.7, 1: indegree -> in-degree [R-12] P8, Eq. (2): Strictly speaking, GAT cannot be formulated in the form Eq. (2). $\mathrm{att}$ corresponds to $a_{ij}$ of an equation in P.9. However this function depends not only on $x_u$ and $x_v$ but also on all $x_i$'s for $i\in \mathcal{N}(v)$. Nevertheless, it would be too complicated to describe this in a rigorous mathematical manner. Therefore, I think it is sufficient to explain it as a supplement to Eq. (2). [R-13] P9, Section 3.2.4: I think it is better to use a more special character to denote the set of real numbers (e.g., $\mathbb{R}$) [R-14] P9, Section 3.2.4: $\boldsymbol{X}$ -> $\boldsymbol{X}^{(0)}$ [R-15] P10, Section 4.1: > The number of GNNs for simple graphs [...], so the following table does not list all of them [...] I think it is better to write explicitly which table this sentence refers to (Table 1?). The same applies to Table 2 in Section 4.2. [R-16] P10, Section 4.1: The authors referred to GenRecN as an example of GNN for directed edges. Certainly, this paper has 723 citations (google scholar, 2023/1/14) and is a very important study. However, this paper was published in 1997 and is not positioned in the GNN studies starting from the late 2000s. Since one of the criteria for selecting GNN models in this paper is that they are relatively new, I would recommend mentioning other models, such as DimeNet [1]? [R-17] P10, Section 4.1: Add reference to HAN. [R-18] P13, Section 5: [...], Def. 3.4 The first [...] -> [...] Def. 3.4 The first [...] -> [...] [R-19] P13, Section 5 and P16, Section 5.2: This paper explained that continuous-time graphs/GNNs were more compact than discrete-time graphs/GNNs. However, I think this is not a matter of discreteness or continuity of dynamic graphs. Rather, it is a matter of representation of graphs. There are two ways to represent a dynamic graph: taking a snapshot of graphs at each time step or saving the changes of graphs. The latter is expected to be more compact representations. While discrete-time graphs can take both approaches, continuous-time graphs can only take the latter approach. Therefore, a compact representation is not an advantage of the continuous-time graph but rather an advantage of the representation method. [R-20] P18, Section 6: Over-smoothing is a technical term with special meaning in the context of GNNs, so a reference should be provided, e.g., [2]. [R-21] P21, Section 8: Moreover, [...] how they handle these. -> them. [1] https://openreview.net/forum?id=B1eWbxStPH [2] https://ojs.aaai.org/index.php/AAAI/article/view/11604 Broader Impact Concerns: I have no major concerns about broader impacts. ================================================== Review 3: Summary: With the increased popularity of GNN based approaches on data from different domains, the range of GNN models have grown significantly. This survey aims to examine different graph types and point out corresponding GNN models designed for such types. In addition, this work also points out graph types where there is a lack of GNN models. This survey provides a roadmap for novel and existing researchers working with specific types of graph data to identify relevant GNN models in the literature. The authors also clearly listed the criterias for including specific work in this survey. These criterias are 1). Up-to-dateness of the model, 2). relevance of the model, 3). the generality of the model, 4). explicitness in addressing a given graph property and 5). simplicity of the model and the priority of selection is in the same order. The criterias help reader understand the significance of the selected model. Lastly, I believe this survey can act as a guide for the GNN community to develop more models in areas which lacks GNN models and facilitate future research. Strengths and Weaknesses: * Strengths overall, I believe this survey can benefit the graph representation learning community through its systematic analysis of graph types and models. Overall, I would recommend acceptance with some edits required as listed in the weakness and request change section. I will first list the strengths below: 1. This survey systematically organizes different graph types and pointed out related GNN models to each type 2. The selection criteria for the inclusion of a given model is clearly discussed and the presentation of the survey overall is quite clear 3. the discussion around which areas lack GNN models are beneficial for the community and can be a guide for further future research * Weaknesses There are a few weaknesses and suggestions that the authors should improve upon: 1. One important consideration of applying ML models on a given data type is scalability. Recently there is an increasing amount of work on scalable GNNs[1]. To improve the usefulness of the survey, clearly listing the time complexity for each model in the tables would help readers understand the scalability aspect. This can also facilitate future research especially if existing models are expensive in complexity. 2. The performance of a given model on a graph type should also be a selection criteria for model inclusion as it will point readers to the state-of-the-art model (as of the writing of the paper). 3. There is a large amount of recent work designing GNNs for continuous time dynamic graphs. The statement such as "the small number of models for continuous-time graphs" on page 22 should be reconsidered. I also hope the authors would include some of these recent papers and update the section on continuous-time dynamic graphs. Some recent work include [2],[3],[4],[5]. In particular, [3] is one of the first work to design methods for signed continuous time dynamic graphs and should be included in the survey. References: [1]. Ding et al., Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity, NeurIPS 2022 [2]. Luo et al., Neighborhood-aware Scalable Temporal Network Representation Learning, LOG 2022 Conference [3]. Raghavendra et al., Signed Link Representation in Continuous-Time Dynamic Signed Networks [4]. Jin et al., Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs, NeurIPS 2022 [5]. Souza et al., Provably expressive temporal graph networks, NeurIPS 2022 Requested Changes: * suggestions and adjustments 1. following weakness 1, I believe adding the discussion on time complexity for each included model would strengthen the work but wouldn't be critical for my acceptance recommendation 2. following weakness 2, I think a discussion on the performance of the models selected would be useful but again it would mostly strengthen the work 3. following weakness 3, **I would request the authors to update the section on continuous-time dynamic graphs** with the references I included in mind. This is important towards my recommendation of acceptance as it will improve the Up-to-dateness of the survey on this section. * minor suggestions and edits Please update the write-up based on these minor suggestions, it wouldn't change my recommendation but it is very important for the clarity of the paper. 1. in introduction page 1 "or so-called "structures" and on last line page 1 "solar power plants". Most quotation marks are formatted incorrectly in the paper, please check and update all quotation marks 2. "Remark. All combined static graphs can also be dynamic" on page 6. Does it mean that all types of static graph can also be dynamic? if so, this sentence was not clear on it. 3. "However, the continuous-time approach is much more compact in its representation but requires a local evaluation of the graph." It is unclear what "a local evaluation" means. Broader Impact Concerns: There is no concern of ethical implications of the work to my knowledge as it is a survey paper examining existing work. ================================================== Metareview: Recommendation: Accept with minor revision Comment: Two of the reviewers recommended to accept the paper and agreed that the manuscript was technically correct and relevant to the TMLR community, especially to researchers working on graph machine learning. This survey offers an overview of different graph types and models to tackle ML tasks involving such graphs. The 3rd reviewer was leaning toward rejection mainly for two reasons: (i) they questioned the choice of references by the authors and asked to select more high-quality and representative references (ii) lack of discussion on their connections, advantages, disadvantages, potential, limitations, and future development. The authors uploaded a last revision appending a table to the paper explaining the choice of each of the models they chose to refer to in the survey. They also argued that "*We do not aim to provide an extensive analysis of all available models for each graph type. Instead, we focus on identifying which graph types are currently covered by existing models. Therefore, we think that discussing multiple models for each graph type is beyond the scope of this paper.* " I believe that the addition of the table addresses concern (i) and I think it is reasonable that fully addressing (ii) goes beyond the scope and objective of the paper. I thus decided to accept the manuscript, I think this will be a useful reference for the graph learning community. Here are a couple of minor points to address in the last revision: - Unless I missed it, I think Table 7 is not referenced anywhere in the main paper. - Def 3.2: 1) the notation f_i : x \to 0 seems ambiguous to me, maybe consider an alternative notation, e.g. f_i : x \to \{0\} would be technically correct, or f_i(x) = \{0\}. 2) "All multigraphs are written as the set Gm": I recommend using the alternative phrasing " The set of all multigraphs is denoted as Gm" (as it was done in 4)) 4) remove the "-" after "node" - Def 3.3: 1) "notion" should be "notation" - Def 3.7: 1) remove "(R11)" 2) where \tilde{A} **is** the adjacency ==================================================
# On The Convergence Rates Of Federated Q-Learning Across Heterogeneous Environments Anonymous authors Paper under double-blind review ## Abstract Large-scale multi-agent systems are often deployed across wide geographic areas, where agents interact with heterogeneous environments. There is an emerging interest in understanding the role of heterogeneity in the performance of the federated versions of classic reinforcement learning algorithms. In this paper, we study synchronous federated Q-learning, which aims to learn an optimal Q-function by having K agents average their local Q-estimates per E iterations. We observe an interesting phenomenon on the convergence speeds in terms of K and E. Similar to the homogeneous environment settings, there is a linear speed-up concerning K in reducing the errors that arise from sampling randomness. Yet, in sharp contrast to the homogeneous settings, E > 1 leads to significant performance degradation. Specifically, we provide a fine-grained characterization of the error evolution in the presence of environmental heterogeneity, which decay to zero as the number of iterations T increases. The slow convergence of having E > 1 turns out to be fundamental rather than an artifact of our analysis. We prove that, for a wide range of stepsizes, the ℓ∞ norm of the error cannot decay faster than Θ(E/T). In addition, our experiments demonstrate that the convergence exhibits an interesting two-phase phenomenon. For any given stepsize, there is a sharp phase-transition of the convergence: the error decays rapidly in the beginning yet later bounces up and stabilizes. Provided that the phase-transition time can be estimated, choosing different stepsizes for the two phases leads to faster overall convergence. ## 1 Introduction Advancements in unmanned capabilities are rapidly transforming industries and national security by enabling fast-paced and versatile operations across domains such as advanced manufacturing (Park et al., 2019), autonomous driving (Kiran et al., 2021), and battlefields (Möhlenhof et al., 2021). Reinforcement learning (RL) - a cornerstone for unmanned capabilities - is a powerful machine learning method that aims to enable an agent to learn an optimal policy via interacting with its operating environment to solve sequential decisionmaking problems (Bertsekas & Tsitsiklis, 1996; Bertsekas, 2019). However, the ever-increasing complexity of the environment results in a high-dimensional state-action space, often imposing overwhelmingly high sample collection requirements on individual agents. This limited-data challenge becomes a significant hurdle that must be addressed to realize the potential of reinforcement learning. In this paper, we study reinforcement learning within a federated learning framework (also known as Federated Reinforcement Learning (Qi et al., 2021; Jin et al., 2022; Woo et al., 2023)), wherein multiple agents independently collect samples and collaboratively train a common policy under the orchestration of a parameter server without disclosing the local data trajectories. A simple illustration can be found in Fig. 1. When the environments of all agents are homogeneous, it has been shown that the federated version of classic reinforcement learning algorithms can significantly alleviate the data collection burden on individual agents (Woo et al., 2023; Khodadadian et al., 2022) - error bounds derived therein exhibit a linear speedup in terms of number of agents. Moreover, by tuning the synchronization period E (i.e., the number of iterations between agent synchronization), the communication cost can be significantly reduced compared with E = 1 yet without significant performance degradation. However, many large-scale multi-agent systems are often deployed across wide geographic areas, resulting in agents interacting with heterogeneous environments. For instance, connected and autonomous vehicles (CAVs) operating in various regions of a metropolitan area encounter diverse conditions such as varying traffic patterns, road infrastructure, and local regulations. The clients' federation must be managed in a way that ensures the learned policy is robust to environmental heterogeneity. There is an emerging interest in mathematically understanding the role of heterogeneity in the performance of the federated versions of classic reinforcement learning algorithms (Jin et al., 2022; Woo et al., 2023; Doan et al., 2019; Wang et al., 2023; Xie & Song, 2023) such as Q-learning, policy gradient methods, and temporal difference (TD) methods. In this paper, we study synchronous federated Q-learning in the presence of environmental heterogeneity, which aims to learn an optimal Q-function by averaging local Q-estimates per E (where E ≥ 1) update iterations on their local data. We leave the exploration of asynchronous Q-learning for future work. Federated Q-learning is a natural integration of FedAvg and Q-learning (Jin et al., 2022; Woo et al., 2023). The former is the most widely adopted classic federated learning algorithm (Kairouz et al., 2021; McMahan et al., 2017), and the latter is one of the most fundamental model-free reinforcement learning algorithms (Watkins & Dayan, 1992). Despite intensive study, the tight sample complexity of Q-learning in the single-agent setting was open until recently (Li et al., 2024). Similarly, the understanding of FedAvg is far from complete; a detailed discussion can be found in Section 2. Figure 1: An illustration of a federated learning system. Contributions. In this paper, we study synchronous federated Q-learning in the presence of environment heterogeneity. - We provide a fine-grained characterization of the error evolution, which decays to zero as the number of iterations T increases. We observe an interesting phenomenon on the convergence speeds in terms of K and E . Similar to the homogeneous environment settings, there is a linear speed-up concerning K in reducing the errors that arise from sampling randomness. Yet, in sharp contrast to the homogeneous settings, E > 1 leads to significant performance degradation. - We prove that the convergence slowing down for E > 1 is fundamental. We show that the ℓ∞ norm of the error cannot decay faster than Θ(E/T). A practical implication of this impossibility result is that, eventually, having multiple local updates (i.e., E > 1) ends up consuming more samples (i.e., E× more) than using E = 1. - Our numerical results illustrate that when the environments are heterogeneous and E > 1, and there exists a sharp phase-transition of the error convergence: The error decays rapidly in the beginning yet later bounces up and stabilizes. In addition, provided that the phase-transition time can be estimated, choosing different stepsizes for the two phases can lead to faster overall convergence. ## 2 Related Work Federated Learning. Federated learning is a communication-efficient distributed machine learning approach that enables training global models without sharing raw local data (McMahan et al., 2017; Kairouz et al., 2021). Federated learning has been adopted in commercial applications that involve diverse edge devices such as autonomous vehicles (Du et al., 2020; Chen et al., 2021; Zeng et al., 2022; Posner et al., 2021; Peng et al., 2023), internet of things (Nguyen et al., 2019; Yu et al., 2020), industrial automation (Liu et al., 2020), healthcare (Yan et al., 2021; Sheller et al., 2019), and natural language processing (Yang et al., 2018; Ramaswamy et al., 2019). Multiple open-source frameworks and libraries are available such as FATE, Flower, OpenMinded-PySyft, OpenFL, TensorFlow Federated, and NVIDIA Clara. FedAvg was proposed in the seminal work (McMahan et al., 2017), and has been one of the most widely implemented federated learning algorithms. It also has inspired many follow-up algorithms such as FedProx (Li et al., 2020b), FedNova (Wang et al., 2020), SCAFFOLD (Karimireddy et al., 2020), and adaptive federated methods (Deng et al., 2020). Despite intensive efforts, the theoretical understanding of FedAvg is far from complete. Most existing theoretical work on FedAvg overlooks the underlying data statistics at the agents, which often leads to misalignment of the pessimistic theoretical predictions and empirical success (Su et al., 2023; Pathak & Wainwright, 2020; Wang et al., 2022a;b). This theory and practice gap is studied in a recent work (Su et al., 2023) in the context of solving general non-parametric regression problems. It shows that the limiting points of the global model under FedAvg is one unbiased estimator of the underlying model that generates the data. Reinforcement Learning. There has been extensive research on the convergence guarantees of reinforcement learning algorithms. A recent surge of work focuses on non-asymptotic convergence and the corresponding sample complexity for the single-agent setup. Bhandari et al. (2018) analyses non-asymptotic TD learning with linear function approximation (LFA) considering a variety of noise conditions, including noiseless, independent noise and Markovian noise. The results were extended to TD(λ) and Q-learning. Li et al. (2020a) investigates the sample complexity of asynchronous Q-learning with different families of learning rates. They also provide an extension of using variance reduction methods inspired by the seminal SVRG algorithm. Li et al. (2024) shows the sample complexity of Q-learning. Let A be the set of actions. When |A| = 1, the sample complexity of synchronous Q-learning is sharp and minimax optimal, however, when *|A| ≥* 2, it is shown that synchronous Q- learning has a lower bound which is not minimax optimal. Federated Reinforcement Learning. Woo et al. (2023) provides sample complexity guarantees for both synchronous and asynchronous distributed Q-learning and reveals that given the same transition probability (i.e., homogeneous environment) for all agents, they can speed up the convergence process linearly by collaboratively learning the optimal Q-function. Doan et al. (2019) investigates distributed Temporal Difference (TD) algorithm TD(0) with LFA under the setting of multi-agent MDP, where multiple agents act in a shared environment and each agent has its own reward function. They provide a finite-time analysis of this algorithm that with constant stepsize, the estimates of agents can converge to a neighborhood around optimal solutions at the rate of O(1/T) and asymptotically converge to the optimal solutions at the rate of O(1/ √T + 1), where T is the timestep. Khodadadian et al. (2022) studies on-policy federated TD learning, off-policy federated TD learning, and federated Q-learning of homogeneous environment and reward with Markovian noise. The sample complexity derived exhibits linear speedup with respect to the number of agents. Heterogeneous environments are considered in Jin et al. (2022); Wang et al. (2023); Xie & Song (2023); Zhang et al. (2023b). Jin et al. (2022) studies federated Q-learning and policy gradient methods under the setting of different known transition probabilities for each agent. Yet, no state sampling is considered. Wang et al. (2023) proposes FedTD(0) with LFA dealing with the environmental and reward heterogeneity of MDPs. They rigorously prove that in a low-heterogeneity regime, there is a linear convergence speedup in the number of agents. Xie & Song (2023) uses KL-divergence to penalize the deviation of local update from the global policy, and they prove that under the setting of heterogeneous environments, the local update is beneficial for global convergence using their method. Zhang et al. (2023a) proposes FedSARSA using the classic on-policy RL algorithm SARSA with linear function approximation (LFA) under the setting of heterogeneous environments and rewards. They theoretically prove that the algorithm can converge to the near-optimal solution. Neither Xie & Song (2023) nor Zhang et al. (2023a) characterize sample complexity. ## 3 Preliminary On Q-Learning Markov decision process. A Markov decision process (MDP) is defined by the tuple ⟨S, A,P*, γ, R*⟩, where S represents the set of states, A represents the set of actions, the transition probability P : *S×A →* [0, 1] provides the probability distribution over the next states given a current state s and action a, the reward function R : *S × A →* [0, 1] assigns a reward value to each state-action pair, and the discount factor γ ∈ (0, 1) models the preference for immediate rewards over future rewards. It is worth noting that P = {P(· | *s, a*)}s∈S,a∈A is a collection of *|S| × |A|* probability distributions over S, one for each state-action pair (*s, a*). Policy, value function, Q-Function, and optimality. A policy π specifies the action-selection strategy and is defined by the mapping π : S → ∆(A), where π(a | s) denotes the probability of choosing action a when in state s. For a given policy π, the value function V π: S → R measures the expected total discounted reward starting from state s: $$V^{\pi}(s)=\mathbb{E}_{a_{t}\sim\pi(\{s_{t}\}_{s\geq t}+\Gamma\in\{s_{t},a_{t}\})}\left[\sum_{t}\gamma^{t}R(s_{t},a_{t})\mid s_{0}=s\right],\quad\forall s\in\mathcal{S}.$$ The state-action value function, or Q-function $Q^{\pi}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$, evaluates the expected total discounted reward from taking action a in state s and then following policy π: Q π(s, a) = R(s, a) + Eat∼π(·|st),st+1∼P (·|st,at) "X t γ tR(st, at) | s0 = s, a0 = a # , ∀(s, a) ∈ S × A. An optimal policy π ∗is one that maximizes the value function for every state, that is ∀s ∈ S, V π ∗(s) ≥ V π(s) for any other π ̸= π ∗. Such a policy ensures the highest possible cumulative reward. The optimal value function V ∗(shorthand for V π ∗) and the optimal Q-function Q∗(shorthand for Qπ ∗) are defined under the optimal policy π ∗. The Bellman optimality equation for the value function and state-value function are: $$\begin{array}{c}{{V^{*}(s)=\operatorname*{max}_{a}[R(s,a)+\gamma\sum_{s^{\prime}\in{\mathcal{S}}}P(s^{\prime}|s,a)V^{*}(s^{\prime})]}}\\ {{Q^{*}(s,a)=R(s,a)+\gamma\sum_{s^{\prime}\in{\mathcal{S}}}P(s^{\prime}|s,a)\operatorname*{max}_{a^{\prime}\in{\mathcal{A}}}Q^{*}(s^{\prime},a^{\prime}).}}\end{array}$$ Q-learning. Q-learning (Watkins & Dayan, 1992) is a model-free reinforcement learning algorithm that aims to learn the value of actions of all states by updating Q-values through iterative exploration of the environment, ultimately converging to the optimal state-action function. Based on the Bellman optimality equation for the state-action function, the update rule for Q-Learning is formulated as: $$Q_{t+1}(s,a)=(1-\lambda)Q_{t}(s,a)+\lambda[R(s,a)+\gamma\operatorname*{max}_{a^{\prime}\in A}Q_{t}(s^{\prime},a^{\prime})],$$ a′∈A ′, a′)], ∀(s, a) *∈ S × A*, where s ′is sampled from the environment or the transition probability and λ is the stepsize. ## 4 Federated Q-Learning The federated learning system consists of one parameter server (PS) and K agents. The K agents are deployed in possibly heterogeneous yet independent environments. The K agents are modeled as Markov Decision Processes (MDPs) with Mk = ⟨S, A,P k*, γ, R*⟩ for k = 1, · · · , K, where P k = {P k(· | *s, a*)}s∈S,a∈A are the collection of probability distributions that can be heterogeneous across agents. In the synchronous setting, each agent k has access to a generative model, and generates a new state sample for each (*s, a*) via $$s_{t}^{k}(s,a)\sim P^{k}(\cdot\mid s,a)$$ i.e., Ps k t (*s, a*) = s ′ = P k(s ′| *s, a*) for all s ′ ∈ S, independently across state-action pairs (*s, a*). For each (*s, a*), the global environment P¯(· | *s, a*) (Jin et al., 2022) is defined as $$\widehat{P}(s^{\prime}\mid s,a)=\frac{1}{K}\sum_{k=1}^{K}P^{k}(s^{\prime}\mid s,a),\forall s^{\prime}\,,\tag{1}$$ with the corresponding global MDP defined as $\mathcal{A}_{\mathcal{G}}=\langle\mathcal{S},\mathcal{A},\widehat{P},\gamma,R\rangle$. Define transition heterogeneity $\kappa$ as $$\left(1\right)$$ := κ. (2) $$\operatorname*{sup}_{k,s,a}\|{\bar{P}}(\cdot\mid s,a)-P^{k}(\cdot\mid s,a)\|_{\infty}:=\kappa.$$ Let Q∗ denote the optimal Q-function of the global MDP. By the Bellman optimality equation, we have for all (*s, a*), $$Q^{*}(s,a)=R(s,a)+\gamma\sum_{s^{\prime}\in\mathcal{S}}\bar{P}(s^{\prime}\mid s,a)V^{*}(s^{\prime}),\tag{1}$$ $$\left({3}\right)$$ where V ∗(s) = maxa∈A Q∗(*s, a*) is the optimal value function. The goal of the federated Q-learning is to have the K agents collaboratively learn Q∗. We consider synchronous federated Q-learning, which is a natural integration of FedAvg and Q-learning (Woo et al., 2023; Jin et al., 2022) - described in Algorithm 1. Every agent initializes its local Qkestimate as Q0 and performs standard synchronous Q-learning based on the locally collected samples s k t (*s, a*). Whenever t + 1 mod E = 0, through the parameter server, the K agents average their local estimate of Q; that is, all agents report their Qk t+ 12 to the parameter server, which computes the average and sends back to agents. ## 5 Main Results Algorithm 1 Synchronous Federated Q-Learning Inputs: discount factor γ, E, total iteration T, stepsize λ, initial estimate Q0 1: for k ∈ [K] do 2: Qk 0 = Q0 3: end for 4: for t = 0 to T − 1 do 5: for k ∈ [K] and (s, a) ∈ S × A do 6: Qk t+ 12 (s, a) = (1 − λ)Qk t (s, a) + λR(s, a) + γ maxa′∈A Qk t (st(s, a), a′). 7: if (t + 1) mod E = 0 then 8: Qk t+1 = 1 K PK k=1 Qk t+ 12 9: else 10: Qk t+1 = Qk t+ 12 11: end if 12: end for 13: **end for** 14: **return** QT = 1 K PK k=1 QkT With a little abuse of notation, let the matrix P k ∈ R |S||A|×|S| represent the transition kernel of the MDP of agent k with the (*s, a*)-th row being P k(· | *s, a*) ∈ R |S| - the transition probability of the state-action pair (*s, a*). For ease of exposition, we write P k(· | *s, a*) = P k(*s, a*) as the state transition probability at the state-action pair (*s, a*) when its meaning is clear from the context. ## 5.1 Main Convergence Results. Let Pek t ∈ {0, 1} |S||A|×|S| denote the local empirical transition matrix at the t-th iteration, defined as $$\widetilde{P}_{t}^{k}(s^{\prime}\mid s,a)={\bf1}\{s^{\prime}=s_{t}^{k}(s,a)\}.$$ Denoting Pek i V ∗ ∈ R |S||A|×1 with the (*s, a*)-th entry as Pek i (*s, a*)V ∗ =Ps ′∈S Pek i (s ′|*s, a*)V ∗(s ′). Let Q¯t+1 := 1 K PK k=1 Qk t+1. From lines 6, 8, and 10 of Algorithm 1, it follows that $$\bar{Q}_{t+1}=\frac{1}{K}\sum_{k=1}^{K}\left((1-\lambda_{t})Q_{t}^{k}+\lambda_{t}(R+\gamma\tilde{P}_{t}^{k}V_{t}^{k})\right),$$ where V k t (s) := maxa∈A Qk t (*s, a*) for all s ∈ S. Define $$\Delta_{t+1}:=Q^{*}-\bar{Q}_{t+1},\mathrm{~and~}\Delta_{0}:=Q^{*}-Q_{0}.$$ ∗ − Q0. (4) The error iteration ∆t is captured in the following lemma. Lemma 1 (Error iteration). *For any* t ≥ 0, $$\begin{array}{c}{{\Delta_{t+1}=(1-\lambda)^{t+1}\Delta_{0}+\gamma\lambda\sum_{i=0}^{t}(1-\lambda)^{t-i}\frac{1}{K}\sum_{k=1}^{K}(\bar{P}-\widetilde{P}_{i}^{k})V^{*}}}\\ {{+\gamma\lambda\sum_{i=0}^{t}(1-\lambda)^{t-i}\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{i}^{k}(V^{*}-V_{i}^{k}).}}\end{array}$$ $$\left({\boldsymbol{4}}\right)$$ $\left(5\right)^2$ ). (5) To show the convergence of ∥∆t+1∥∞, we bound each of the three terms in the right-hand-side of equation 5. The following lemma is a coarse upper bound of errors. Lemma 2. Choosing R(s, a) ∈ [0, 1] for each state-action pair (s, a), and choose 0 ≤ Q0(*s, a*) ≤1 1−γ for any (s, a) ∈ S × A, then 0 ≤ Qk t (*s, a*) ≤1 1−γ , 0 ≤ Q∗(*s, a*) ≤1 1−γ , $$\|Q^{*}-Q_{t}^{k}\|_{\infty}\leq\frac{1}{1-\gamma},\ \ \mbox{and}\ \ \|V^{*}-V_{t}^{k}\|_{\infty}\leq\frac{1}{1-\gamma},\ \ \ \forall\ t\geq0,\mbox{and}k\in[K].$$ With the choice of Q0 in Lemma 2, the first term in equation 5 can be bounded as(1 − λ) t+1∆0 ∞ ≤ (1 − λ) t+1 1 1−γ . In addition, as detailed in the proof of Lemma 4 and Theorem 1, the boundedness in Lemma 2 enables us to bound the second term in equation 5 via invoking the Hoeffding's inequality. It remains to bound the third term in equation 5, for which we follow the analysis roadmap of Woo et al. (2023) by a two-step procedure that is described in Lemma 3 and Lemma 4. Let $$\Delta_{t}^{k}=Q^{*}-Q_{t}^{k},\mathrm{~and~}\chi(t)=t-(t\mathrm{~\mod~}E),$$ , and χ(t) = t − (t mod E), (7) i.e., ∆k t is the local error of agent k, and χ(t) is the most recent synchronization iteration of t. Lemma 3. If t mod E = 0*, then* maxs,a 1 K PK k=1 Pek t (V ∗ − Vt) ∞ ≤ ∥∆t∥∞*. Otherwise,* $$\left\|\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t}^{k}(V^{*}-V_{t}^{k})\right\|_{\infty}\leq\left\|\Delta_{\chi(t)}\right\|_{\infty}+2\lambda\frac{1}{K}\sum_{k=1}^{K}\sum_{t^{\prime}=\chi(t)}^{t-1}\left\|\Delta_{t^{\prime}}^{k}\right\|_{\infty}$$ $$+\gamma\lambda\frac{1}{K}\sum_{k=1}^{K}\max_{s,a}\left|\sum_{t^{\prime}=\chi(t)}^{t-1}\left(\widetilde{P}_{t^{\prime}}^{k}(s,a)-\widetilde{P}(s,a)\right)V^{*}\right|.$$ $$\mathfrak{h}$$ $\mathsf{L}$). $$\left(8\right)$$ where we use the convention that Pχ(t)−1 t ′=χ(t) ∆k t ′∞ = 0. Lemma 4. *Choose* λ ≤ 1 E . For any δ ∈ (0, 1)*, with probability at least* (1 − δ), $$\left\|\Delta_{i}^{k}\right\|_{\infty}\leq\left\|\Delta_{\mathbf{x}(i)}\right\|_{\infty}+{\frac{3\gamma}{1-\gamma}}\lambda(E-1)\kappa+{\frac{3\gamma}{1-\gamma}}{\sqrt{\lambda\log{\frac{|S||A|K T}{\delta}}}},\forall~i\leq T,k\in[K].$$ Both Lemma 3 and Lemma 4 are non-trivial adaptations of the characterization in the analysis of Woo et al. (2023) due to lack of common optimal action for any given state when environments are heterogeneous. To bound the ℓ∞ norm of the third term in equation 5, we first invoke Lemma 3, followed by Lemma 4. It is worth noting that directly applying Lemma 4 can also lead to a valid error bound yet the resulting bound will not decay as T increases with proper choice of stepsizes. Theorem 1 (Convergence). *Choose* E − 1 ≤ 1−γ 4γλ and λ ≤ 1 E . For any δ ∈ (0, 1 3 )*, with probability at least* 1 − 3δ*, it holds that* $$\|\Delta_{T}\|_{\infty}\leq\frac{4}{(1-\gamma)^{2}}\exp\left\{-\frac{1}{2}\sqrt{(1-\gamma)\lambda T}\right\}+\frac{2\gamma^{2}}{(1-\gamma)^{2}}(6\lambda^{2}(E-1)^{2}+\lambda(E-1))\kappa$$ $$+\left(\frac{12\gamma^{2}\lambda}{(1-\gamma)^{2}}\sqrt{E-1}+\frac{2\gamma^{2}\sqrt{\lambda}}{(1-\gamma)^{2}}\right)\sqrt{\lambda(E-1)\log\frac{|\mathcal{S}||\mathcal{A}|K T}{\delta}}$$ $$+\frac{2\gamma}{(1-\gamma)^{2}}\sqrt{\frac{1}{K}\lambda\log\frac{|\mathcal{S}||\mathcal{A}|T K}{\delta}}.$$ The first term of Theorem 1 is the standard error bound in the absence of environmental heterogeneity and sampling noises. The second term arises from environmental heterogeneity. It is clear that when E = 1, the environmental heterogeneity does not negatively impact the convergence. The last two terms result from the randomness in sampling. Remark 1 (Eventual zero error). It is common to choose the stepsize λ based on the time horizon T. Let λ = g(T) be a non-increasing function of T. As long as λ = g(T) decay in T, terms 2-4 in Theorem 1 will go to 0 as T increases. In addition, when λ = ω(1/T), the first term will decay to 0. There is a tradeoff in the convergence rates of the first term and the remaining terms - the slower λ decay in T leads to faster decay in the first term but slower in the remaining terms. Forcing these terms to decay around the same speed lead to slow overall convergence. Corollary 1 follows immediately from Theorem 1 via carefully choosing λ to balance the decay rates of different terms. Corollary 1. *Choose* (E − 1) ≤ min 1λ { γ 1−γ , 1 K }*, and* λ = 4 log2(TK) T(1−γ). Let T ≥ E*. For any* δ ∈ (0, 1 3 ), with probability at least 1 − 3δ, $$\|\Delta_{T}\|_{\infty}\leq\frac{4}{(1-\gamma)^{2}TK}+\frac{36}{(1-\gamma)^{3}}\frac{\log(TK)}{\sqrt{TK}}\sqrt{\log\frac{|S||A|TK}{\delta}}+\frac{56\log^{2}(TK)}{(1-\gamma)^{3}}\frac{E-1}{T}\kappa.$$ Remark 2 (Partial linear speedup and the negative impacts of E > 1). Intuitively, both terms 1 and 2 decay as if there are TK iterations - a linear speedup. In fact, the decay rate of the sampling noises in Corollary 1, with respect to TK, is the minimax optimal up to polylog factors (Vershynin, 2018). The decay of the third term is controlled by environmental heterogeneity when E > 1. In sharp contrast to the homogeneous settings, larger E significantly slows down the convergence of this term. We show in the next subsection that this slow convergence is fundamental. ## 5.2 On The Fundamentals Of Convergence Slowing Down For E > 1. Theorem 2. Let Q0 = 0. For any even K ≥ 2*, there exist a collection of* {(S, A,P k*, R, γ*) : k ∈ [K]} such that, for any synchronization period E *and time-invariant stepsize* λ ≤1 1+γ , *when $T/E\in\mathbb{N}$ and $T\geq\frac{E}{1-\gamma}\log\frac{1}{1-\gamma}$.* $$\|\Delta_{T}\|_{\infty}=\Omega\left(E/T\right),$$ Proof Sketch. Below we discuss the key ideas and provide the proof sketch of Theorem 2. The full proof is deferred to Appendix F. The eventual slow rate convergence is due to the heterogeneous environments P kregardless of the cardinality of the action space. In particular, we prove the slow rate when the action space is a singleton, in which case the Q-function coincides with the V-function. The process is also known as the Markov reward process. According to Algorithm 1, when (t + 1) mod E ̸= 0, we have $$Q_{t+1}^{k}=\left((1-\lambda)I+\lambda\gamma P^{k}\right)Q_{t}^{k}+\lambda R.$$ Following Algorithm 1, we obtain the following recursion between two synchronization rounds: Fortunately, we obtain the following recursion between two synchronization rounds: $$\Delta_{(r+1)E}=\bar{A}^{(E)}\Delta_{rE}+\left(\left(I-\bar{A}^{(E)}\right)-\left(I+\bar{A}^{(1)}+\ldots\bar{A}^{(E-1)}\right)\left(I-\bar{A}^{(1)}\right)\right)Q^{*},\tag{9}$$ $\bullet$\(\ where A¯(ℓ) ≜1K PK k=1(Ak) ℓ and Ak ≜ (1 − λ)I + λγPk. While the first term on the right-hand side of equation 9 decays rapidly to zero, the second term is non-vanishing due to environment heterogeneity for E ≥ 2. Specifically, to ensure the rapid decay of the first term, it is necessary to select a stepsize λ = Ωe( 1 rE ). However, this choice results in the dominating residual error from the second term, which increases linearly with λE = Ω( e 1 r ). Next, we instantiate the analyses by constructing the set P k over a pair of states and an even number of clients with $$P^{2k-1}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad P^{2k}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\text{for}k\in\mathbb{N}.$$ Applying the formula of A¯(ℓ) yields the following eigen-decomposition: $$\bar{A}^{(\ell)}=\alpha_{\ell}(I-\bar{P})+\beta_{\ell}\bar{P},$$ $$(10)^{\frac{1}{2}}$$ where P¯ = 1 2 11⊤, αℓ ≜ 1 2 (ν ℓ 1 + ν ℓ 2 ), βℓ ≜ ν ℓ 2 , ν1 ≜ 1 − (1 + γ)λ, and ν2 ≜ 1 − (1 − γ)λ. For this instance of Pk, the error evolution equation 9 reduces to ∆(r+1)E =αE(I − P¯) + βEP¯∆rE + κE(I − P¯)Q∗ with κE ≜ − γ 2 1−ν E 2 1−γ − 1−ν E 1 1+γ , which further yields the following full error recursion: $$\Delta_{r E}=\left(\alpha_{E}^{r}(I-\bar{P})+\beta_{E}^{r}\bar{P}\right)\Delta_{0}+\frac{1-\alpha_{E}^{r}}{1-\alpha_{E}}\kappa_{E}(I-\bar{P})Q^{*}.$$ Starting from Q0 = 0, the error can be decomposed into $$\Delta_{rE}=\beta_{E}^{r}\bar{P}Q^{*}+\left(\alpha_{E}^{r}+\frac{1-\alpha_{E}^{r}}{1-\alpha_{E}}\kappa_{E}\right)(I-\bar{P})Q^{*}.\tag{11}$$ The two terms of the error are orthogonal and both non-vanishing. Therefore, it remains to lower bound the maximum magnitude of two coefficients irrespective of the stepsize λ. To this end, we analyze two regimes of λ separated by a threshold λ0 ≜log r (1−γ)rE : - Slow rate due to small stepsize when λ ≤ λ0. Since β r E decreases as λ increases, $$\beta_{E}^{r}\geq(1-(1-\gamma)\lambda_{0})^{r E}=\left(1-\frac{\log r}{r E}\right)^{r E}\stackrel{>}{\sim}\frac{1}{r}.$$ - Slow rate due to environment heterogeneity when λ ≥ λ0. We show that $$\left|\frac{\kappa_{E}}{1-\alpha_{E}}\right|\geq\gamma^{2}\frac{\lambda(E-1)}{4}\geq\gamma^{2}\frac{\log r}{(1-\gamma)r},\qquad\left(1+\left|\frac{\kappa_{E}}{1-\alpha_{E}}\right|\right)\alpha_{E}^{r}\leq\frac{1}{(1-\gamma^{2})r}.$$ We conclude that at least one component of the error in equation 11 must be slower than the rate Ω(1/r). Remark 3. The explicit calculations are based on a set P k over a pair of states. Nevertheless, the evolution equation 9 is generally applicable. Similar analyses can be extended to scenarios involving more than two states, provided that the sequence of matrices A¯(ℓ)is simultaneously diagonalizable. For instance, the construction of the transition kernels in equation 10 can be readily extended to multiple states if the set S can be partitioned into two different classes. The key insight is the non-vanishing residual on the right-hand side of equation 9 when E ≥ 2 due to the environment heterogeneity. ## 6 Experiments Description of the setup. In our experiments, we consider K = 5 agents (Jin et al., 2022), each interacting with an independently and randomly generated 5 × 5 maze environment ⟨S, A,P k*, R, γ*⟩ for k ∈ {1, 2, *· · ·* , 5}. The state set S contains 25 cells that the agent is currently in. The action set contains 4 actions A = {left, up,right, down}. Thus, *|S| × |A|* = 100. We choose γ = 0.99. For ease of verifying our theory, each entry of the reward R ∈ R 100 is sampled from Bern(p = 0.05), which slightly departs from a typical maze environment wherein only two state-action pairs have nonzero rewards. We choose this reward so that ∥∆0∥∞ ≈ 100 = 1 1−γ , which is the coarse upper bound of ∥∆t∥∞ for all t. For each agent k, its state transition probability vectors P k are constructed on top of standard state transition probability vectors of maze environments incorporated with a drifting probability 0.1 in each non-intentional action as in WindyCliff (Jin et al., 2022; Paul et al., 2019). In this way, the environment heterogeneity lies not only in the differences of the non-zero probability values (Jin et al., 2022; Paul et al., 2019) but also in the probability supports (i.e., the locations of non-zero entries). Our construction is more challenging: The environment heterogeneity κ as per (2) of our environment construction was calculated to be 1.2. Yet, the largest environment heterogeneity of the WindyCliff construction in Jin et al. (2022) is about 0.31. We choose Q0 = 0 ∈ R 100. All numerical results are based on 5 independent runs to capture the variability. The dark lines represent the mean of the runs, while the shaded areas around each line illustrate the range obtained by adding and subtracting one standard deviation from the mean. The maximum time duration is T = 20, 000 in our experiment since it is sufficient to capture the characteristics of the training process. Two-phase phenomenon. We plot the evolutions of ∥∆t∥∞ for synchronous federated Q-learning under heterogeneous and homogeneous environments, respectively. Our results show that the sharp two-phase phenomenon mainly arises from environmental heterogeneity rather than sampling noise. From Figure 2a, it is clear that under the heterogeneous setting, for a given set of constant stepsizes ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) λ ∈ {0.9, 0.5, 0.2, 0.1, 0.05}, the ℓ∞-norm of ∆t = Q¯t − Q∗ decreases to a minimum point and then bounces back rapidly before stabilizing around some fixed errors. Moreover, we can see that different stepsizes give different minimum error. The smaller the stepsize, the smaller the minimum error; however, it takes longer to reach such minimum errors. In sharp contrast, as shown in Figure 2b, there is no drastic bounce when the environments are homogeneous. (a) Heterogeneous environments E = 10. (b) Homogeneous environments E = 10. Figure 2: The ℓ∞ error of different constant stepsizes under the heterogeneous and homogenous settings. A useful practice implication of our results is that: While constant stepsizes are often used in reinforcement learning problems because of the great performance in applications as described in Sutton & Barto (2018), they suffer significant performance degradation in the presence of environmental heterogeneity. Impacts of the synchronization period E. Furthermore, we test the impacts of the synchronization period E. As shown in Figure 3 and Figure 2a, with λ ∈ {0.9, 0.5, 0.2, 0.1, 0.05}, as E increases, the final error increases and saturates around 62 in the presence of environmental heterogeneity. For a homogeneous setting (results deferred to Appendix G.1), E does not have a significant impact, which aligns with the observations in the existing literature on the homogeneous settings (Woo et al., 2023; Khodadadian et al., 2022). ## Potential Utilization Of The Two-Phase Phenomenon. As shown in Figures 2a and 3, in the presence of environmental heterogeneity, the smaller the stepsizes, the smaller error ∥∆t∥∞ can reach and less significant of the error bouncing in the second phase. In our preliminary experiments, we tested small stepsizes λ = 1/T α for α ∈ {0.4, 0.5, *· · ·* , 1}, which eventually lead to small errors yet at the cost of being extremely slow. Among these choices, λ = 1/ √T has the fastest convergence performance yet is still ≈ 24 up to iteration 20,000. Let t0 be the iteration at which the error trajectory ∥∆t∥∞ switches from phase 1 to phase 2. Provided that t0 can be estimated, choosing different stepsizes for the two phases can lead to faster overall convergence, compared with using the same stepsize throughout. Figure 4 illustrates two-phase training with different phase 1 stepsizes and phase 2 stepsize λ = 1/ √T compared with using λ = 1/ √T throughout. Overall, using λ = 1/ √T throughout leads to the slowest convergence, highlighting the benefits of the two-phase training strategy. Among all two-phase stepsize choices, the stepsize of 0.05 in the first phase results in a longer phase 1 duration (t0 = 5550 ) but the lowest ![9_image_1.png](9_image_1.png) ![9_image_2.png](9_image_2.png) ![9_image_0.png](9_image_0.png) ![9_image_3.png](9_image_3.png) ![9_image_4.png](9_image_4.png) (c) E=40 Figure 3: Heterogeneous environments with varying E. ![9_image_5.png](9_image_5.png) Figure 4: Choosing different stepsizes for phases 1 and 2 leads to faster overall convergence. E = 10. final error (2.75327), suggesting a better convergence. We further test the convergence performance with respect to different target error levels, details can be found in Appendix G.2. We leave the estimation and characterization of t0 for future work. ## References Dimitri Bertsekas. *Reinforcement learning and optimal control*, volume 1. Athena Scientific, 2019. Dimitri Bertsekas and John N Tsitsiklis. *Neuro-dynamic programming*. Athena Scientific, 1996. Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite time analysis of temporal difference learning with linear function approximation. In *Conference on learning theory*, pp. 1691–1692. PMLR, 2018. Jin-Hua Chen, Min-Rong Chen, Guo-Qiang Zeng, and Jia-Si Weng. Bdfl: a byzantine-fault-tolerance decentralized federated learning method for autonomous vehicle. *IEEE Transactions on Vehicular Technology*, 70(9):8639–8652, 2021. Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020. Thinh Doan, Siva Maguluri, and Justin Romberg. Finite-time analysis of distributed td (0) with linear function approximation on multi-agent reinforcement learning. In *International Conference on Machine* Learning, pp. 1626–1635. PMLR, 2019. Zhaoyang Du, Celimuge Wu, Tsutomu Yoshinaga, Kok-Lim Alvin Yau, Yusheng Ji, and Jie Li. Federated learning for vehicular internet of things: Recent advances and open issues. *IEEE Open Journal of the* Computer Society, 1:45–61, 2020. Hao Jin, Yang Peng, Wenhao Yang, Shusen Wang, and Zhihua Zhang. Federated reinforcement learning with environment heterogeneity. In *International Conference on Artificial Intelligence and Statistics*, pp. 18–37. PMLR, 2022. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. Foundations and Trends® *in Machine Learning*, 14(1–2):1–210, 2021. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In *International Conference on Machine Learning*, pp. 5132–5143. PMLR, 2020. Sajad Khodadadian, Pranay Sharma, Gauri Joshi, and Siva Theja Maguluri. Federated reinforcement learning: Linear speedup under markovian sampling. In *International Conference on Machine Learning*, pp. 10997–11057. PMLR, 2022. B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A Al Sallab, Senthil Yogamani, and Patrick Pérez. Deep reinforcement learning for autonomous driving: A survey. *IEEE Transactions on* Intelligent Transportation Systems, 23(6):4909–4926, 2021. Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, and Yuxin Chen. Sample complexity of asynchronous q-learning: Sharper analysis and variance reduction. *Advances in neural information processing systems*, 33:7031–7043, 2020a. Gen Li, Changxiao Cai, Yuxin Chen, Yuting Wei, and Yuejie Chi. Is q-learning minimax optimal? a tight sample complexity analysis. *Operations Research*, 72(1):222–236, 2024. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Proceedings of Machine Learning and Systems*, 2:429–450, 2020b. Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, Yuanyuan Chen, Lican Feng, Tianjian Chen, Han Yu, and Qiang Yang. Fedvision: An online visual object detection platform powered by federated learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 13172–13179, 2020. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communicationefficient learning of deep networks from decentralized data. In *Artificial intelligence and statistics*, pp. 1273–1282. PMLR, 2017. Thies Möhlenhof, Norman Jansen, and Wiam Rachid. Reinforcement learning environment for tactical networks. In *2021 International Conference on Military Communication and Information Systems (ICMCIS)*, pp. 1–8. IEEE, 2021. Thien Duc Nguyen, Samuel Marchal, Markus Miettinen, Hossein Fereidooni, N Asokan, and Ahmad-Reza Sadeghi. Dïot: A federated self-learning anomaly detection system for iot. In 2019 IEEE 39th International conference on distributed computing systems (ICDCS), pp. 756–767. IEEE, 2019. In-Beom Park, Jaeseok Huh, Joongkyun Kim, and Jonghun Park. A reinforcement learning approach to robust scheduling of semiconductor manufacturing facilities. *IEEE Transactions on Automation Science* and Engineering, 17(3):1420–1431, 2019. Reese Pathak and Martin J Wainwright. Fedsplit: An algorithmic framework for fast federated optimization. Advances in neural information processing systems, 33:7057–7066, 2020. Supratik Paul, Michael A Osborne, and Shimon Whiteson. Fingerprint policy optimisation for robust reinforcement learning. In *International Conference on Machine Learning*, pp. 5082–5091. PMLR, 2019. Muzi Peng, Jiangwei Wang, Dongjin Song, Fei Miao, and Lili Su. Privacy-preserving and uncertainty-aware federated trajectory prediction for connected autonomous vehicles. In *2023 IEEE/RSJ International* Conference on Intelligent Robots and Systems (IROS), pp. 11141–11147, 2023. doi: 10.1109/IROS55552. 2023.10341638. Jason Posner, Lewis Tseng, Moayad Aloqaily, and Yaser Jararweh. Federated learning in vehicular networks: Opportunities and solutions. *IEEE Network*, 35(2):152–159, 2021. Jiaju Qi, Qihao Zhou, Lei Lei, and Kan Zheng. Federated reinforcement learning: techniques, applications, and open challenges. *Intelligence &amp Robotics*, 2021. doi: 10.20517/ir.2021.02. URL https://doi.org/ 10.20517%2Fir.2021.02. Swaroop Ramaswamy, Rajiv Mathews, Kanishka Rao, and Françoise Beaufays. Federated learning for emoji prediction in a mobile keyboard. *arXiv preprint arXiv:1906.04329*, 2019. Micah J Sheller, G Anthony Reina, Brandon Edwards, Jason Martin, and Spyridon Bakas. Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part I 4, pp. 92–104. Springer, 2019. Lili Su, Jiaming Xu, and Pengkun Yang. A non-parametric view of fedavg and fedprox: Beyond stationary points. *Journal of Machine Learning Research*, 24(203):1–48, 2023. Richard S. Sutton and Andrew G. Barto. Chapter 2.5 Tracking a Nonstationary Problem, Reinforcement Learning: An Introduction, chapter 8, pp. 33. The MIT Press, 2018. Roman Vershynin. *High-dimensional probability: An introduction with applications in data science*, volume 47. Cambridge university press, 2018. Chunnan Wang, Xiang Chen, Junzhe Wang, and Hongzhi Wang. Atpfl: Automatic trajectory prediction model design under federated learning framework. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pp. 6563–6572, June 2022a. Han Wang, Aritra Mitra, Hamed Hassani, George J Pappas, and James Anderson. Federated temporal difference learning with linear function approximation under environmental heterogeneity. *arXiv preprint* arXiv:2302.02212, 2023. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. *Advances in neural information processing systems*, 33: 7611–7623, 2020. Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, and Tong Zhang. On the unreasonable effectiveness of federated averaging with heterogeneous data. *arXiv preprint arXiv:2206.04723*, 2022b. Christopher Watkins and Peter Dayan. Q-learning. *Machine Learning*, 8:279–292, 1992. URL https: //api.semanticscholar.org/CorpusID:208910339. Jiin Woo, Gauri Joshi, and Yuejie Chi. The blessing of heterogeneity in federated q-learning: Linear speedup and beyond. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 37157–37216. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/woo23a.html. Zhijie Xie and Shenghui Song. FedKL: Tackling Data Heterogeneity in Federated Reinforcement Learning by Penalizing KL Divergence. *IEEE Journal on Selected Areas in Communications*, 41(4):1227–1242, April 2023. ISSN 1558-0008. doi: 10.1109/JSAC.2023.3242734. URL https://ieeexplore.ieee.org/abstract/document/10038492?casa_token=yGyMDlnL_FsAAAAA: hqNvzWEb6yVKwTZVdHKLVnorDg07AWx4uujDsLLTTY_7unjr1ew8Yv4_UUAWfCx3X1b9wHNySP8. Bingjie Yan, Jun Wang, Jieren Cheng, Yize Zhou, Yixian Zhang, Yifan Yang, Li Liu, Haojiang Zhao, Chunjuan Wang, and Boyi Liu. Experiments of federated learning for covid-19 chest x-ray images. In Advances in Artificial Intelligence and Security: 7th International Conference, ICAIS 2021, Dublin, Ireland, July 19-23, 2021, Proceedings, Part II 7, pp. 41–53. Springer, 2021. Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. Applied federated learning: Improving google keyboard query suggestions. arXiv preprint arXiv:1812.02903, 2018. Tianlong Yu, Tian Li, Yuqiong Sun, Susanta Nanda, Virginia Smith, Vyas Sekar, and Srinivasan Seshan. Learning context-aware policies from multiple smart homes via federated multi-task learning. In *2020* IEEE/ACM Fifth International Conference on Internet-of-Things Design and Implementation (IoTDI), pp. 104–115. IEEE, 2020. Tengchan Zeng, Omid Semiari, Mingzhe Chen, Walid Saad, and Mehdi Bennis. Federated learning on the road autonomous controller design for connected and autonomous vehicles. *IEEE Transactions on Wireless* Communications, 21(12):10407–10423, 2022. Chenyu Zhang, Han Wang, Aritra Mitra, and James Anderson. Finite-time analysis of on-policy heterogeneous federated reinforcement learning. In *The Twelfth International Conference on Learning Representations*, 2023a. Shangtong Zhang, Remi Tachet Des Combes, and Romain Laroche. On the convergence of sarsa with linear function approximation. In *International Conference on Machine Learning*, pp. 41613–41646. PMLR, 2023b. ## Appendices A Proof Of Lemma 1 The update of ∆t+1 is as follows: ∆t+1 = Q ∗ − Q¯t+1 = 1 K X K k=1 (Q ∗ − ((1 − λ)Q k t + λ(R + γPek t Q k t ))) = 1 K X K k=1 ((1 − λ)(Q ∗ − Q k t ) + λ(Q ∗ − R − γPek t V k t )) = (1 − λ)∆t + γλ 1K X K k=1 (P V¯ ∗ − Pek t V k t ) = (1 − λ)∆t + γλ K X K k=1 (P¯ − Pek t )V ∗ + γλ K X K k=1 Pek t (V ∗ − V k t ) = (1 − λ) t+1∆0 + γλXt i=0 (1 − λ) t−i 1 K X K k=1 (P¯ − Pek i )V ∗ + γλXt i=0 (1 − λ) t−i 1 K X K k=1 Pek i (V ∗ − V k i ), recalling that ∆0 = Q∗ − Q0. ## B Proof Of Lemma 2 We first show 0 ≤ Qk t (*s, a*) ≤1 1−γ by inducting on t. When t = 0, this is true by the choice of Q0. Suppose that 0 ≤ Qk t−1 (*s, a*) ≤1 1−γ for any state-action pair (*s, a*) and any client k. Let's focus on time t. When t is not a synchronization iteration (i.e., t + 1 mod ̸= 0), we have $$\begin{array}{l}{{Q_{t}^{k}(s,a)=(1-\lambda)Q_{t-1}^{k}(s,a)+\lambda(R(s,a)+\gamma\tilde{P}_{t}^{k}V_{t-1}^{k}(s))}}\\ {{\qquad\leq\frac{1-\lambda}{1-\gamma}+\lambda(R(s,a)+\gamma\tilde{P}_{t}^{k}V_{t-1}^{k}(s))}}\\ {{\qquad\stackrel{(a)}{\leq}\frac{1-\lambda}{1-\gamma}+\lambda(1+\frac{\gamma}{1-\gamma})}}\\ {{\qquad\leq\frac{1}{1-\gamma}-\frac{\lambda}{1-\gamma}+\frac{\lambda}{1-\gamma}}}\\ {{\qquad=\frac{1}{1-\gamma},}}\end{array}$$ where inequality (a) holds because for any s, V k t−1 (s) = maxa∈A Qk t−1 (*s, a*) ≤1 1−γ by the inductive hypothesis, and Pek t ∥1 = 1. Similarly, we can show the case when t is a synchronization iteration. With the above argument, we can also show that 0 ≤ Q∗(*s, a*) ≤1 1−γ for any state-action pair (*s, a*). Therefore, we have thatQ∗ − Qk t ∞ ≤1 1−γ . Next, we show that bound onV ∗ − V k t ∞ . $$\left\|V^{*}-V_{t}^{k}\right\|_{\infty}=\max_{s\in\mathcal{S}}\left|V^{*}(s)-V_{t}^{k}(s)\right|$$ $$=\max_{s\in\mathcal{S}}\left|\max_{a\in\mathcal{A}}Q^{*}(s,a)-\max_{a^{\prime}\in\mathcal{A}}Q_{t}^{k}(s,a^{\prime})\right|$$ $$\leq\max_{s\in\mathcal{S},a\in\mathcal{A}}\left|Q^{*}(s,a)-Q_{t}^{k}(s,a)\right|$$ $$=\left\|Q^{*}-Q_{t}^{k}\right\|_{\infty}$$ $$\leq\frac{1}{1-\gamma}.$$ ## C Proof Of Lemma 3 When t mod E = 0, i.e., i is a synchronization round, Qk t = Qk ′ tfor any pair of agents *k, k*′ ∈ [K]. Hence, $$\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t}^{k}(s,a)(V^{*}-V_{t})=\left(\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t}^{k}(s,a)\right)(V^{*}-V_{t})$$ $$\leq\|\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t}^{k}(s,a)\|_{1}\,\|V^{*}-V_{t}\|_{\infty}$$ $$\leq\|Q^{*}-Q_{t}\|_{\infty}$$ $$=\|\Delta_{t}\|_{\infty}\,.\tag{12}$$ For general t, we have 1 K X K k=1 Pek t (V ∗ − V k t ) ∞ = 1 K X K k=1 Pek t (V ∗ − V k χ(t) + V k χ(t) − V k i ) ∞ ≤ 1 K X K k=1 Pek t (V ∗ − V k χ(t) ) ∞ + 1 K X K k=1 Pek t (V k χ(t) − V k t ) ∞ ≤∆χ(t) ∞ + 1 K X K k=1 Pek t (V k χ(t) − V k t ) ∞ by equation 12 ≤∆χ(t) ∞ + 1 K X K k=1 V k χ(t) − V k t ∞ . (13) For any state s ∈ S, we have $$V_{t}^{k}(s)-V_{\chi(t)}^{k}(s)$$ $$=Q_{t}^{k}(s,a_{t}^{k}(s))-Q_{\chi(t)}^{k}(s,a_{\chi(t)}^{k}(s))$$ $$\stackrel{{(a)}}{{=}}Q_{t}^{k}(s,a_{t}^{k}(s))-Q_{\chi(t)}^{k}(s,a_{t}^{k}(s))$$ $$=Q_{t}^{k}(s,a_{t}^{k}(s))-Q_{t-1}^{k}(s,a_{t}^{k}(s))+Q_{t-1}^{k}(s,a_{t}^{k}(s))-Q_{t-2}^{k}(s,a_{t}^{k}(s))$$ $$\quad+\cdots+Q_{\chi(t)+1}^{k}(s,a_{t}^{k}(s))-Q_{\chi(t)}^{k}(s,a_{t}^{k}(s)).\tag{14}$$ where inequality (a) holds because Qkχ(t) (*s, a*k t (s)) ≤ Qkχ(t) (*s, a*kχ(t) (s)). For each t ′such that χ(t) ≤ t ′ ≤ t, it holds that, Q k t ′+1(s, ak t (s)) − Q k t ′ (s, ak t (s)) = (1 − λ)Q k t ′ (s, ak t (s)) + λ(R(s, ak t (s)) + γPek t ′ (s, ak t (s))V k t ′ ) − Q k t ′ (s, ak t (s)) (a) = −λQk t ′ (s, ak t (s)) + λ Q ∗(s, ak t (s)) − R(s, ak t (s)) − γP¯(s, ak t (s))V ∗ + R(s, ak t (s)) + γPek t ′ (s, ak t (s))V k t ′ = λ∆k t ′ (s, ak t (s)) + γλ (Pek t ′ (s, ak t (s)) − P¯(s, ak t (s)))V ∗ + Pek t ′ (s, ak t (s))(V k t ′ − V ∗) ≤ 2λ∆k t ′∞ + γλ Pek t ′ (s, ak t (s)) − P¯(s, ak t (s))V ∗, where equality (a) follows from the Bellman equation equation 3. Thus, $$V_{t}^{k}(s)-V_{\chi(t)}^{k}(s)\leq\sum_{t^{\prime}=\chi(t)}^{t-1}Q_{t+1}^{k}(s,a_{t}^{k}(s))-Q_{t^{\prime}}^{k}(s,a_{t}^{k}(s))$$ $$=2\lambda\sum_{t^{\prime}=\chi(t)}^{t-1}\left\|\Delta_{t^{\prime}}^{k}\right\|_{\infty}+\gamma\lambda\sum_{t^{\prime}=\chi(t)}^{t-1}\left(\bar{P}_{t^{\prime}}^{k}(s,a_{t}^{k}(s))-\bar{P}(s,a_{t}^{k}(s))\right)V^{*}.\tag{15}$$ Similarly, we have $$V_{t}^{k}(s)-V_{\chi(t)}^{k}(s)\geq\sum_{t^{\prime}=\chi(t)}^{t-1}Q_{t^{\prime}+1}^{k}(s,a_{\chi(t)}^{k}(s))-Q_{t^{\prime}}^{k}(s,a_{\chi(t)}^{k}(s))\tag{16}$$ $$\geq-2\lambda\sum_{t^{\prime}=\chi(t)}^{t-1}\left\|\Delta_{t}^{k}\right\|_{\infty}+\gamma\lambda\sum_{t^{\prime}=\chi(t)}^{t-1}\left(\tilde{P}_{t^{\prime}}^{k}(s,a_{\chi(t)}^{k}(s))-\tilde{P}(s,a_{\chi(t)}^{k}(s))\right)V^{*}.$$ Plugging the bounds in equation 15 and in equation 16 back into equation 13, we get $$\left\|\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t}^{k}(V^{*}-V_{t}^{k})\right\|_{\infty}\leq\left\|\Delta_{\chi(t)}\right\|_{\infty}+\frac{1}{K}\sum_{k=1}^{K}\left\|Y_{\chi(t)}^{k}-V_{t}^{k}\right\|_{\infty}$$ $$\leq\left\|\Delta_{\chi(t)}\right\|_{\infty}+2\lambda\frac{1}{K}\sum_{k=1}^{K}\sum_{t^{\prime}=\chi(t)}^{t-1}\left\|\Delta_{t^{\prime}}^{k}\right\|_{\infty}$$ $$+\gamma\lambda\frac{1}{K}\sum_{k=1}^{K}\max_{s,a}\left\|\sum_{t^{\prime}=\chi(t)}^{t-1}\left(\widetilde{P}_{t^{\prime}}^{k}(s,a)-\widetilde{P}(s,a)\right)V^{*}\right\|_{\infty}.$$ ## D Proof Of Lemma 4 When i mod E = 0, then ∆k i = ∆χ(i). When i mod E ̸= 0, we have $$Q_{i}^{k}=(1-\lambda)Q_{i-1}^{k}+\lambda\left(R+\gamma\widetilde{P}_{i-1}^{k}V_{i-1}^{k}\right)$$ $$=(1-\lambda)Q_{i-1}^{k}+\lambda\left(Q^{*}-R-\gamma\widetilde{P}V^{*}+R+\gamma\widetilde{P}_{i-1}^{k}V_{i-1}^{k}\right).$$ So, $$\Delta_{i}^{k}=(1-\lambda)\Delta_{i-1}^{k}+\lambda\gamma\left(\tilde{P}V^{*}-\tilde{P}_{i-1}^{k}V_{i-1}^{k}\right)\tag{14}$$ $$=(1-\lambda)\Delta_{i-1}^{k}+\lambda\gamma(\tilde{P}-\tilde{P}_{i-1}^{k})V^{*}+\lambda\gamma\tilde{P}_{i-1}^{k}(V^{*}-V_{i-1}^{k})$$ $$\leq(1-\lambda)^{i-\chi(i)}\Delta_{\chi(i)}+\gamma\lambda\sum_{j=\chi(i)}^{i-1}\,(1-\lambda)^{i-j-1}(\tilde{P}-\tilde{P}_{j}^{k})V^{*}$$ $$+\gamma\lambda\sum_{j=\chi(i)}^{i-1}\,(1-\lambda)^{i-j-1}\tilde{P}_{j}^{k}(V^{*}-V_{j}^{k}).$$ pair $(s,a)$, $$\left(17\right)$$ For any state-action pair (*s, a*), |(1 − λ) i−χ(i)∆χ(i)(s, a)| ≤ (1 − λ) i−χ(i)∆χ(i) ∞ . (18) By invoking Hoeffding's inequality, for any given δ ∈ δ ∈ (0, 1), with probability at least 1 − δ, it holds that along including's inequality, for any given $\sigma\in\sigma\in(0,1)$, with probability at least $1-\sigma$, it holds $$\left|\gamma\lambda\sum_{j=\chi(i)}^{i-1}(1-\lambda)^{i-j-1}(\tilde{P}-\tilde{P}_{j}^{k})V^{*}\right|\leq\frac{\gamma}{1-\gamma}\lambda\sum_{j=\chi(i)}^{i-1}(1-\lambda)^{i-1-j}\kappa+\frac{\gamma}{1-\gamma}\sqrt{\lambda\log\frac{|\mathcal{S}||\mathcal{A}|KT}{\delta}}$$ $$\leq\frac{\gamma}{1-\gamma}\lambda(E-1)\kappa+\frac{\gamma}{1-\gamma}\sqrt{\lambda\log\frac{|\mathcal{S}||\mathcal{A}|KT}{\delta}},$$ $(s,a)\in\mathcal{S}\times\mathcal{A},i\in[T],k\in[K]$. In addition, we have $$(18)$$ $$\left|\gamma\lambda\sum_{j=\chi(i)}^{i-1}\left(1-\lambda\right)^{i-j-1}\widetilde{F}_{j}^{k}\left(V^{*}-V_{j}^{k}\right)\right|\leq\gamma\lambda\sum_{j=\chi(i)}^{i-1}\left(1-\lambda\right)^{i-j-1}\left\|\Delta_{j}^{k}\right\|_{\infty}.$$ (19) $\frac{1}{2}$ $$(20)$$ . (20) Combining the bounds in equation 18, equation 19, and equation 20, we get ∆k i ∞ ≤ (1 − λ) i−χ(i)∆χ(i) ∞ +γ 1 − γ λ(E − 1)κ +γ 1 − γ rλ log |S||A|KT δ + γλ X i−1 j=χ(i) (1 − λ) i−j−1∆k j ∞ ≤ (1 − (1 − γ)λ) i−χ(i)∆χ(t) ∞ + (1 + γλ) i−χ(i) γ 1 − γ λ(E − 1)κ +γ 1 − γ rλ log |S||A|KT δ ! , (21) where the last inequality can be shown via inducting on i − χ(i) ∈ {0, · · · , E − 1}. When λ ≤ 1 E , (1 + γλ) i−χ(i) ≤ (1 + λ) E ≤ (1 + 1/E) E ≤ e ≤ 3. We get $$\left\|\Delta_{i}^{k}\right\|_{\infty}\leq\left\|\Delta_{\chi(i)}\right\|_{\infty}+3\frac{\gamma}{1-\gamma}\lambda(E-1)\kappa+3\frac{\gamma}{1-\gamma}\sqrt{\lambda\log\frac{|{\mathcal{S}}||{\mathcal{A}}|K T}{\delta}}.$$ ## E Proof Of Theorem 1 By Lemma 1, thimi 1, $$\Delta_{t+1}=(1-\lambda)^{t+1}\Delta_0+\sum_{i=0}^t(1-\lambda)^i\frac{\gamma\lambda}{K}\sum_{k=1}^K(\bar{P}-\tilde{P}_{t-i}^k)V^*+\sum_{i=0}^t(1-\lambda)^i\frac{\gamma\lambda}{K}\sum_{k=1}^K\tilde{P}_{t-i}^k(V^*-V_{t-i}^k).$$ Taking the ℓ∞ norm on both sides, we get $$\|\Delta_{t+1}\|_{\infty}\leq(1-\lambda)^{t+1}\left\|\Delta_{0}\right\|_{\infty}+\left\|\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\frac{1}{K}\sum_{k=1}^{K}(\tilde{P}-\widetilde{P}_{t-i}^{k})V^{*}\right\|_{\infty}$$ $$+\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\left\|\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{t-i}^{k}(V^{*}-V_{t-i}^{k})\right\|_{\infty}.$$ We bound the three terms in the right-hand-side of the above-displayed equation separately. Since 0 ≤ Q0(*s, a*) ≤1 1−γ , the first term can be bounded as $$(1-\lambda)^{t+1}\left\|\Delta_{0}\right\|_{\infty}\leq(1-\lambda)^{t+1}\frac{1}{1-\gamma}.$$ $$\left(\text{22}\right)$$. To bound the second term Pt i=0(1 − λ) iλγ 1K PK k=1(P¯ − Pek t−i )V ∗∞ , we have $$\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\frac{1}{K}\sum_{k=1}^{K}(\bar{P}-\widetilde{P}_{t-i}^{k})V^{*}=\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\frac{1}{K}\sum_{k=1}^{K}(P^{k}-\widetilde{P}_{t-i}^{k})V^{*}$$ $$=\frac{1}{K}\sum_{k=1}^{K}\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma(P^{k}-\widetilde{P}_{t-i}^{k})V^{*}.$$ Let Xi,k(*s, a*)) = 1K γλ(1 − λ) i(P k − Pek t−i )V ∗. It is easy to see that E [Xi,k(*s, a*)] = 0 for all (*s, a*). By Lemma 2, we have |Xi,k| ≤ 2 K(1−γ) γλ(1 − λ) ifor all (*s, a*). Since the sampling across clients and across iterations are independent, via invoking Hoeffding's inequality, for any given δ ∈ (0, 1), with probability at least 1 − δ, $$\left\|\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\frac{1}{K}\sum_{k=1}^{K}(\bar{P}-\bar{P}_{t-i}^{k})V^{\star}\right\|_{\infty}\leq\frac{\gamma}{1-\gamma}\sqrt{\frac{1}{K}\lambda\log\frac{|\mathcal{S}||\mathcal{A}|T\,K}{\delta}}.\tag{23}$$ To bound the third term Pt i=0(1 − λ) iλγ 1 K PK k=1 Pek t−i (V ∗ − V k t−i ) ∞ , following the roadmap of Woo et al. (2023), we divide the summation into two parts as follows. For any βE ≤ t ≤ T, we have Xt i=0 (1 − λ) iλγ 1 K X K k=1 Pek t−i (V ∗ − V k t−i ) ∞ = Xt i=0 (1 − λ) t−iλγ 1 K X K k=1 Pek i (V ∗ − V k i ) ∞ = χ(tX )−βE i=0 (1 − λ) t−iλγ 1 K X K k=1 Pek i (V ∗ − V k i ) ∞ +Xt i=χ(t)−βE+1 (1 − λ) t−iλγ 1 K X K k=1 Pek i (V ∗ − V k i ) ∞ ≤γ 1 − γ (1 − λ) t−χ(t)+βE +Xt i=χ(t)−βE+1 (1 − λ) t−iλγ 1 K X K k=1 Pek i (V ∗ − V k i ) ∞ . By Lemma 3, i=χ(t)−βE+1 (1 − λ) t−iλγ 1 K X K k=1 Pek i (V ∗ − V k i ) ∞ Xt ≤Xt i=χ(t)−βE+1 (1 − λ) t−iλγ ∆χ(i) ∞ + 2λ 1 K X K k=1 X i−1 j=χ(i) ∆k t ′∞ k=1 max s,a X i−1 j=χ(i) Pek j (s, a) − P¯(s, a) V ∗ +γλ 1K X K . Since Pek j (*s, a*)'s are independent across time j and across state action pair (*s, a*), and |Pek j (s, a)−P¯(*s, a*)V ∗| ≤ 1 1−γ (from Lemma 2), with Hoeffding's inequality and union bound, we get for any δ ∈ (0, 1), with probability at least 1 − δ, $$\sum_{j=\chi(t)}^{i-1}\left(\widetilde{F}_{j}^{k}(s,a)-\widetilde{P}(s,a)\right)V^{*}\Bigg{|}\leq(E-1)\frac{1}{1-\gamma}\kappa+\frac{1}{1-\gamma}\sqrt{(E-1)\log\frac{S[A]KT}{\delta}}\tag{24}$$ for all (s, a) *∈ S × A*, k ∈ K, and i. By Lemma 4, with probability at least (1 − δ), we have i=χ(t)−βE+1 (1 − λ) t−iλγ2λ 1 K X K Xt k=1 X i−1 j=χ(i) ∆k j ∞ ≤ 2λ 2γXt i=χ(t)−βE+1 (1 − λ) t−i 1 K X K k=1 X i−1 j=χ(i) ∆χ(i) ∞ + 3γ 1 − γ λ(E − 1)κ + 3γ 1 − γ rλ log |S||A|KT δ ! ≤ 2λγ(E − 1) max χ(t)−βE≤i≤t ∆χ(i) ∞ + 6γ 2λ 2 1 − γ (E − 1)2κ + 6γ 2λ 1 − γ (E − 1)rλ log |S||A|KT δ. Thus, with probability at least (1 − 2δ), i=χ(t)−βE+1 (1 − λ) t−iλγ 1 K X K Xt k=1 Pek i (V ∗ − V k i ) ∞ ≤ γ max χ(t)−βE≤i≤t ∆χ(i) ∞ + 2λγ(E − 1) max χ(t)−βE≤i≤t ∆χ(i) ∞ + 6γ 2λ 2 1 − γ (E − 1)2κ + 6γ 2λ 1 − γ (E − 1)rλ log |S||A|KT δ +Xt i=χ(t)−βE+1 (1 − λ) t−iλγ γλ 1 − γ (E − 1)κ +γλ 1 − γ r (E − 1) log |S||A|KT δ ! = γ(1 + 2λ(E − 1)) max χ(t)−βE≤i≤t ∆χ(i) ∞ +γ 2 1 − γ (6λ 2(E − 1)2 + λ(E − 1))κ +γ 2λ 1 − γ r (E − 1) log |S||A|KT δ+ 6γ 2λ 1 − γ (E − 1)rλ log |S||A|KT δ. The third term can be bounded as $$\sum_{i=0}^{t}(1-\lambda)^{i}\lambda\gamma\left\|\frac{1}{K}\sum_{k=1}^{K}\widetilde{P}_{i}^{k}(V^{*}-V_{i}^{k})\right\|_{\infty}$$ $$\leq\frac{\gamma}{1-\gamma}(1-\lambda)^{t-\lambda(t)+\beta E}+\gamma(1+2\lambda(E-1))\max_{\chi(t)=\max_{E\leq t\leq t}\|\Delta_{\chi}(t)\|_{\infty}+\frac{\gamma^{2}}{1-\gamma}(6)^{2}(E-1)^{2}+\lambda(E-1))\kappa$$ $$+\frac{\gamma^{2}\lambda}{1-\gamma}\sqrt{(E-1)\log\frac{S|\lambda|K\overline{T}}{\delta}}+\frac{6\gamma^{2}\lambda}{1-\gamma}(E-1)\sqrt{\lambda\log\frac{S|\lambda|K\overline{T}}{\delta}}.\tag{25}$$ Combing the bounds for terms 1, 2, and 3, we get the following recursion holds for all rounds T with probability at least (1 − 3δ): ∥∆t+1∥∞ ≤ (1 − λ) t+1 1 1 − γ +γ 1 − γ r1 K λ log |S||A|TK δ+ γ 1 − γ (1 − λ) t−χ(t)+βE + γ(1 + 2λ(E − 1)) max χ(t)−βE≤i≤t ∆χ(i) ∞ +γ 2 1 − γ (6λ 2(E − 1)2 + λ(E − 1))κ +γ 2λ 1 − γ r (E − 1) log |S||A|KT δ+ 6γ 2λ 1 − γ (E − 1)rλ log |S||A|KT δ ≤ γ(1 + 2λ(E − 1)) max χ(t)−βE≤i≤t ∆χ(i) ∞ +2 1 − γ (1 − λ) βE +γ 2 1 − γ (6λ 2(E − 1)2 + λ(E − 1))κ +γ 2λ 1 − γ r (E − 1) log |S||A|KT δ+ 6γ 2λ 1 − γ (E − 1)rλ log |S||A|KT δ +γ 1 − γ r1 K λ log |S||A|TK δ. Let $$\rho:=\frac{2}{1-\gamma}(1-\lambda)^{\beta E}+\frac{\gamma^{2}}{1-\gamma}(6\lambda^{2}(E-1)^{2}+\lambda(E-1))\kappa\tag{26}$$ $$+\frac{\gamma^{2}\lambda}{1-\gamma}\sqrt{(E-1)\log\frac{|S||\lambda|KT}{\delta}}+\frac{6\gamma^{2}\lambda}{1-\gamma}(E-1)\sqrt{\lambda\log\frac{|S||\lambda|KT}{\delta}}$$ $$+\frac{\gamma}{1-\gamma}\sqrt{\frac{1}{K}\lambda\log\frac{|S||\lambda|KT}{\delta}}.$$ With the assumption that λ ≤1−γ 4γ(E−1) , the above recursion can be written as $$\left\|\Delta_{t+1}\right\|_{\infty}\leq\frac{1+\gamma}{2}\operatorname*{max}_{\chi(t)-\beta E\leq i\leq t}\left\|\Delta_{\chi(i)}\right\|_{\infty}+\rho.$$ Unrolling the above recursion L times where LβE ≤ t ≤ T, we obtain that $$\|\Delta_{t+1}\|_{\infty}\leq(\frac{1+\gamma}{2})^{L}\max_{\chi(t)-L\beta E\leq i\leq t}\left\|\Delta_{\chi(i)}\right\|_{\infty}+\sum_{i=0}^{L-1}(\frac{1+\gamma}{2})^{i}\rho$$ $$\leq(\frac{1+\gamma}{2})^{L}\frac{1}{1-\gamma}+\frac{2}{1-\gamma}\rho.$$ Choosing β = 1 E q(1−γ)T 2λ, L = q λT 1−γ , t + 1 = T, we get ∥∆T ∥∞ ≤ 1 1 − γ ( 1 + γ 2) p λT 1−γ +2 1 − γ 2 1 − γ (1 − λ) βE +γ 2 1 − γ (6λ 2(E − 1)2 + λ(E − 1))κ + 6γ 2λ 1 − γ √E − 1 + γ 2 √λ 1 − γ ! rλ(E − 1) log *|S||A|*KT δ+ γ 1 − γ r1 K λ log |S||A|TK δ ! ≤1 1 − γ ( 1 + γ 2) p λT 1−γ +4 (1 − γ) 2 (1 − λ) p(1−γ)T 2λ +2γ 2 (1 − γ) 2 (6λ 2(E − 1)2 + λ(E − 1))κ + 12γ 2λ (1 − γ) 2 √E − 1 + 2γ 2 √λ (1 − γ) 2 ! rλ(E − 1) log |S||A|KT δ+ 2γ (1 − γ) 2 r1 K λ log |S||A|TK δ ≤1 1 − γ exp − 1 2 p(1 − γ)λT+4 (1 − γ) 2 exp n−p(1 − γ)λTo +2γ 2 (1 − γ) 2 (6λ 2(E − 1)2 + λ(E − 1))κ + 12γ 2λ (1 − γ) 2 √E − 1 + 2γ 2 √λ (1 − γ) 2 ! rλ(E − 1) log *|S||A|*KT δ+ 2γ (1 − γ) 2 r1 K λ log |S||A|TK δ ≤4 (1 − γ) 2 exp − 1 2 p(1 − γ)λT+2γ 2 (1 − γ) 2 (6λ 2(E − 1)2 + λ(E − 1))κ + 12γ 2λ (1 − γ) 2 √E − 1 + 2γ 2 √λ (1 − γ) 2 ! rλ(E − 1) log *|S||A|*KT δ+ 2γ (1 − γ) 2 r1 K λ log |S||A|TK δ. ## F Proof Of Theorem 2 Let |A| = 1, in which case Q-function coincides with the V -function. According to Algorithm 1, when (t + 1) mod E ̸= 0, we have Q k t+1 =(1 − λ)I + λγPkQ k t + λR. Define Ak ≜ (1 − λ)I + λγPk. We obtain the following recursion between two synchronization rounds: Q k (r+1)E = (A k) EQ k rE +(A k) 0 + *. . .*(A k) E−1λR. Define $$\bar{A}^{(\ell)}\triangleq\frac{1}{K}\sum_{k=1}^{K}(A^{k})^{\ell}.\tag{27}$$ $\ell$ is a critical value $\bar{B}$ and $\bar{B}$ are $\lambda B=\lambda(L=\bar{B})\Omega^{*}=(L=\bar{A}^{(1)})\Omega^{*}$. Note that Q∗is the fixed point under the transition kernel P¯, we have λR = λ(I − γP¯)Q∗ = (I − A¯(1))Q∗ since A¯(1) = I − λ(I − γP¯). Furthermore, since Q1 tE*, . . . , Q*K tE are identical due to synchronization, we get $$\bar{Q}_{(r+1)E}=\bar{A}^{(E)}\bar{Q}_{r E}+\left(I+\bar{A}^{(1)}+\ldots\bar{A}^{(E-1)}\right)\left(I-\bar{A}^{(1)}\right)Q^{*}.$$ Consequently, ∆(r+1)E = Q ∗ − Q¯(r+1)E = A¯(E)∆rE + I − A¯(E)− I + A¯(1) + . . . A¯(E−1) I − A¯(1) Q ∗. (28) Next, consider |S| = 2 and even K with $$P^{2k-1}={\begin{bmatrix}1&0\\ 0&1\end{bmatrix}},\quad P^{2k}={\begin{bmatrix}0&1\\ 1&0\end{bmatrix}},\quad{\mathrm{for~}}k\in\mathbb{N}.$$ Then P¯ = 1 2 11⊤, where 1 denotes the all ones vector. For the above transition kernels, we have $${\frac{1}{k}}\sum_{k=1}^{K}(P^{k})^{\ell}={\begin{cases}I,&\ell{\mathrm{~even}},\\ {\bar{P}},&\ell{\mathrm{~odd}}.\end{cases}}$$ Applying the definition of A¯(ℓ)in equation 27 yields that A¯(ℓ) = 1 K X K k=1 (A k) ℓ = 1 K X K k=1 ((1 − λ)I + λγPk) ℓ = 1 K X K k=1 X ℓ j=0 ℓ j (λγPk) j((1 − λ)I) ℓ−j =X j even ℓ j (1 − λ) ℓ−j(λγ) j(I − P¯ + P¯) + X j odd ℓ j (1 − λ) ℓ−j(λγ) jP¯ = 1 2 ((1 − λ − λγ) ℓ + (1 − λ + λγ) ℓ) (I − P¯) + (1 − λ + λγ) ℓ P¯ | {z } ≜βℓ | {z } ≜αℓ = αℓ(I − P¯) + βℓP , ¯ which is the eigen-decomposition of A¯(ℓ). Let $$\lambda_{1}\triangleq(1+\gamma)\lambda,\lambda_{2}\triangleq(1-\gamma)\lambda,\quad\nu_{1}=1-\lambda_{1},\nu_{2}=1-\lambda_{2}.$$ Then $$\alpha_{\ell}=\frac{1}{2}(\nu_{1}^{\ell}+\nu_{2}^{\ell}),\quad\beta_{\ell}=\nu_{2}^{\ell}.\tag{10.11}$$ Note that 0 ≤ α ≤ β ≤ 1 and I − P¯ and P¯ are orthogonal projection matrices satisfying (I − P¯)P¯ = 0. The matrices for the second term of the error on the right-hand side of 28 reduce to I + A¯(1) + . . . A¯(E−1) I − A¯(1) = EX−1 ℓ=0 αℓ(I − P¯) + E X−1 ℓ=0 βℓP¯ !(α0 − α1)(I − P¯) + (β0 − β1)P¯ = (1 − α1) E X−1 ℓ=0 αℓ(I − P¯) 2 + (1 − β1) E X−1 ℓ=0 βℓP¯2 ! since α0 = β0 = 1 = (1 − α1) E X−1 ℓ=0 αℓ(I − P¯) + (1 − β1) E X−1 ℓ=0 βℓP¯ ! since (I − P¯) and P¯ are idempotent. $$(29)^{\frac{1}{2}}$$ It follow that $$\begin{array}{l}{{\qquad\left(I-\bar{A}^{(E)}\right)-\left(I+\bar{A}^{(1)}+\ldots\bar{A}^{(E-1)}\right)\left(I-\bar{A}^{(1)}\right)}}\\ {{=\underbrace{\left((1-\alpha_{E})-(1-\alpha_{1})\left(\sum_{i=0}^{E-1}\alpha_{i}\right)\right)}_{\triangleq\kappa_{E}}(I-\bar{P})+\underbrace{\left((1-\beta_{E})-(1-\beta_{1})\left(\sum_{i=0}^{E-1}\beta_{i}\right)\right)}_{=0}\bar{P}.}}\end{array}$$ $$\quad(30)$$. Applying equation 29 yields that $$\kappa_{E}=-\frac{\gamma}{2}\left(\frac{1-\nu_{2}^{E}}{1-\gamma}-\frac{1-\nu_{1}^{E}}{1+\gamma}\right).$$ . (30) It follows from equation 28 that the error evolves as $$\Delta_{(r+1)E}=\left(\alpha_{E}(I-\bar{P})+\beta_{E}\bar{P}\right)\Delta_{r E}+\kappa_{E}(I-\bar{P})Q^{*},$$ which further yields the following full recursion of the error: ∆rE =αE(I − P¯) + βEP¯r∆0 + Xr−1 ℓ=0 αE(I − P¯) + βEP¯ℓκE(I − P¯)Q ∗ =α r E(I − P¯) + β r EP¯∆0 + Xr−1 ℓ=0 α ℓ E(I − P¯) + β ℓ EP¯κE(I − P¯)Q ∗ since αE(I − P¯) + βEP¯ℓ= α ℓ E(I − P¯) + β ℓ EP , ¯ ∀ℓ ∈ N =α r E(I − P¯) + β r EP¯∆0 + 1 − α r E 1 − αE κE(I − P¯)Q ∗ = α r E + 1 − α r E 1 − αE κE (I − P¯)Q ∗ + β r EP Q¯ ∗, where the last equality applied the zero initialization condition. Note that (I − P¯)Q∗ and P Q¯ ∗ are orthogonal vectors. Since |S| = 2, we have $$\|\Delta_{r E}\|_{\infty}\geq\frac{1}{\sqrt{2}}\,\|\Delta_{r E}\|_{2}\geq\frac{\min\{\|(I-\bar{P})Q^{*}\|_{2},\|\bar{P}Q^{*}\|_{2}\}}{\sqrt{2}}\cdot\max\left\{|\alpha_{E}^{r}+\frac{1-\alpha_{E}^{r}}{1-\alpha_{E}}\kappa_{E}|,\beta_{E}^{r}\right\}.$$ Since Q∗ = (I − γP¯) −1R = (I − P¯)R +1 1−γ P R¯ , we obtain that $$\|(I-\bar{P})Q^{*}\|_{2}=\|(I-\bar{P})R\|_{2},\qquad\|\bar{P}Q^{*}\|_{2}=\frac{1}{1-\gamma}\|\bar{P}R\|_{2}.$$ Therefore, for the reward R in general position, we have min{∥(I − P¯)Q∗∥2, ∥P Q¯ ∗∥2} ≥ cR for some constant cR depending on the reward function. It remain to analyze the coefficients as functions of λ. To this end, we introduce the following lemma: Lemma 5. *The following properties hold:* 1. *Negativity:* κE < 0; 2. *Monotonicity:* κE 1−αE is monotonically decreasing for λ ∈ (0,1 $$\lambda\in(0,{\frac{1}{1+\gamma}});$$ 3. *Upper bound:* | $$:|\,\frac{\kappa_{E}}{1-\alpha_{E}}|\leq\frac{\gamma^{2}}{1-\gamma^{2}}\;f o r\;\lambda\in(0,\frac{1}{1+\gamma});$$ 4. *Lower bound: if* (1 + γ)λ ≤1 2E , then | κE 1−αE | ≥ λγ2(E−1) 4. Proof. We prove the properties separately. 1. Note that ν1 < ν2, 1 − ν1 = (1 + γ)λ, and 1 − ν2 = (1 − γ)λ. Then it follows from equation 30 that $$\kappa_{E}=-\frac{\lambda\gamma}{2}\sum_{i=1}^{E-1}(\nu_{2}^{i}-\nu_{1}^{i})<0.$$ 2. For the monotonicity, it suffices to show that d dλ κE 1−αE ≤ 0. We calculate the derivative as $$\frac{d}{d\lambda}\frac{\kappa_{E}}{1-\alpha_{E}}=\frac{\gamma E(1-\nu_{1}^{E})(1-\nu_{2}^{E})}{2(1-\gamma^{2})(1-\alpha_{E})^{2}}\left(\frac{(1+\gamma)\nu_{1}^{E-1}}{1-\nu_{1}^{E}}-\frac{(1-\gamma)\nu_{2}^{E-1}}{1-\nu_{2}^{E}}\right).$$ Note that $$\frac{(1+\gamma)\nu_{1}^{E-1}}{1-\nu_{1}^{E}}-\frac{(1-\gamma)\nu_{2}^{E-1}}{1-\nu_{2}^{E}}=\frac{1}{\lambda}\left(\frac{\nu_{1}^{E-1}}{1+\nu_{1}+\cdots+\nu_{1}^{E-1}}-\frac{\nu_{2}^{E-1}}{1+\nu_{2}+\cdots+\nu_{2}^{E-1}}\right)\leq0.$$ 3. For the upper bound, it suffices to show the result at λ =1 1+γ due to the negativity and monotonicity. At λ =1 1+γ , we have $$\left|{\frac{\kappa_{E}}{1-\alpha_{E}}}\right|={\frac{\gamma}{1-\gamma^{2}}}\left(\gamma-{\frac{({\frac{2\gamma}{1+\gamma}})^{E}}{2-({\frac{2\gamma}{1+\gamma}})^{E}}}\right)\leq{\frac{\gamma^{2}}{1-\gamma^{2}}}.$$ 4. For the lower bound, the case E = 1 trivially holds. Next, consider E ≥ 2. We have $$\begin{array}{l}{{\frac{\kappa_{E}}{1-\alpha_{E}}=-\frac{\gamma}{1-\gamma^{2}}\frac{(1+\gamma)(1-\nu_{2}^{E})-(1-\gamma)(1-\nu_{1}^{E})}{(1-\nu_{1}^{E})+(1-\nu_{2}^{E})}}}\\ {{\qquad\qquad=-\lambda\gamma\frac{\sum_{\ell=1}^{E-1}(\nu_{2}^{\ell}-\nu_{1}^{\ell})}{(1-\nu_{1}^{E})+(1-\nu_{2}^{E})}.}}\end{array}$$ Note that 1 − nx ≤ (1 − x) n ≤ 1 − 1 2 nx for n ≥ 1 and 0 ≤ x ≤ 1 n . Then, for (1 + γ)λ ≤1 2E , we have $$\begin{array}{l}{{\nu_{1}^{E}=(1-(1+\gamma)\lambda)^{E}\geq1-(1+\gamma)\lambda E\geq\frac{1}{2},}}\\ {{\nu_{2}^{E}=(1-(1-\gamma)\lambda)^{E}\geq1-(1-\gamma)\lambda E.}}\end{array}$$ Moreover, for all x ∈ [ν1, ν2] ⊆ [0, 1] and ℓ − 1 ≤ E, we have $$x^{\ell-1}\geq x^{E}\geq\nu_{1}^{E}\geq{\frac{1}{2}}.$$ $\square$ We obtain that $${\frac{\sum_{\ell=1}^{E-1}(\nu_{2}^{\ell}-\nu_{1}^{\ell})}{(1-\nu_{1}^{E})+(1-\nu_{2}^{E})}}\geq{\frac{\sum_{\ell=1}^{E-1}\int_{\nu_{1}}^{\nu_{2}}\ell\cdot x^{\ell-1}d x}{2\lambda E}}\geq{\frac{\sum_{\ell=1}^{E-1}\ell{\frac{1}{2}}(\nu_{2}-\nu_{1})}{2\lambda E}}={\frac{1}{4}}\gamma(E-1).$$ The proof is completed. We consider two regimes of the stepsize separated by λ0 ≜log r (1−γ)rE <1 1+γ , where the dominating error is due to the small stepsize and the environment heterogeneity, respectively: Slow rate due to small stepsize when λ ≤ λ0. Since β r E monotonically decreases as λ increases, $$\beta_{E}^{r}=(1-(1-\gamma)\lambda)^{r E}\geq(1-(1-\gamma)\lambda_{0})^{r E}=\left(1-\frac{\log t}{r E}\right)^{r E}\,.$$ Note that log r rE ∈ (0, 1 2 ), applying the fact log(1 − x) + x ≥ −x 2for x ∈ [0, 1 2 ] yields that $$\log\left(1-{\frac{\log r}{r E}}\right)+{\frac{\log r}{r E}}\geq-\left({\frac{\log r}{r E}}\right)^{2}\geq-{\frac{1}{r E}}.$$ Then we get $$\beta_{E}^{r}\geq\left(1-{\frac{\log r}{r E}}\right)^{r E}\geq{\frac{1}{e r}}.$$ Slow rate due to environment heterogeneity when λ ≥ λ0. Recall that λ < 1 1+γ . Applying the triangle inequality yields that $$\left|\alpha_{E}^{r}+\frac{1-\alpha_{E}^{r}}{1-\alpha_{E}}\kappa_{E}\right|\geq\left|\frac{\kappa_{E}}{1-\alpha_{E}}\right|-\left(1+\left|\frac{\kappa_{E}}{1-\alpha_{E}}\right|\right)\alpha_{E}^{r}.$$ For the first term, by the negativety and monotonicity in Lemma 5, it suffices to show the lower bound at λ = λ0. Since λ < 1 1+γ , then αE = 1 2 (1 − (1 − γ)λ) E + (1 − (1 + γ)λ) Edecreases as λ increases. For t ≳1 1−γ log 1 1−γ such that (1 + γ)λ0 ≤1 2E , we apply the lower bound in Lemma 5 and obtain that $$\left|{\frac{\kappa_{E}}{1-\alpha_{E}}}\right|\geq\gamma^{2}{\frac{\log r}{(1-\gamma)r}}.$$ Additionally, applying the upper bound in Lemma 5 yields $$\left(1+\left|\frac{\kappa_{E}}{1-\alpha_{E}}\right|\right)\alpha_{E}^{r}\leq\frac{\nu_{2}^{r E}}{1-\gamma^{2}}=\frac{(1-(1-\gamma)\lambda)^{r E}}{1-\gamma^{2}}\leq\frac{1}{(1-\gamma^{2})r}.$$ The conclusion follows. ## G Additional Experiments G.1 Impacts Of E On Homogeneous Settings. For the homogeneous settings, in addition to E = 10, we also consider E = {1,20,40, 0}, where E = 00 means no communication among the agents throughout the entire learning process. Similar to Figure 2b, there is no obvious two-phase phenomenon even in the extreme case when E = 0. Also, though there is indeed performance degradation caused by larger E, the overall performance degradation is nearly negligible compared with the heterogeneous settings shown in Figures 2a and 3. ![26_image_0.png](26_image_0.png) Figure 5: Homogeneous FQL with varying E. ## G.2 Different Target Error Levels. In Figure 6, we show the error levels that these training strategies can achieve within a time horizon T = 20,000. The tolerance levels are 10%,5%,3%, and 1% of the initial error ||00%, respectively. At a high level, choosing different stepsizes for phases 1 and 2 can speed up convergence. ![27_image_0.png](27_image_0.png) ![27_image_1.png](27_image_1.png) (a) One common λ = √1T ![27_image_2.png](27_image_2.png) ![27_image_3.png](27_image_3.png) throughout. ∥∆t∥∞ does meet any of the tolerance levels within 20000 iterations(b) With a phase 1 stepsize of 0.9, it meets the 10% tolerance level at iteration 16502. (c) With a phase 1 stepsize of 0.5, it meets the 10% ![27_image_5.png](27_image_5.png) tolerance level at iteration 15250. (d) With a phase 1 stepsize of 0.2, it meets the 10% and 5% tolerance level at iterations 9669 and 19597, respectively. (e) With a phase 1 stepsize of 0.1, it meets the 10% and 5% tolerance level at iterations 3901 and 14008, respectively. (f) With a phase 1 stepsize of 0.05, it meets the 10%, 5%, ![27_image_4.png](27_image_4.png) and 3% tolerance levels at iterations 4610, 8795, and 16687, respectively. Figure 6: Convergence performance of different tolerance levels of different stepsize choices. The horizontal dashed lines represent the tolerance levels not met, while the vertical dashed lines indicate the iterations at which the training processes meet the corresponding tolerance levels. 28
Review 1: Summary: This paper studies the sample complexity (equivalently, iteration complexity) of synchronous federated Q-learning algorithm with environment heterogeneity. The theoretical contribution are two aspects. The first contribution is the finite-sample guarantee of the algorithm. The second contribution is the example of elaborating the phenomenon that the convergence rate can be linear dependent on local updates $E$. The empirical contribution is the numerical experiments on one setting. However, there are several issues that make me feel it is an inappropriate manuscript to be published at TMLR. The main concern to me is the contribution is not significant enough. Below are some comments. Strengths and Weaknesses: See requested changes. Requested Changes: 1. For the first theoretical contribution, the only difference from \cite{woo2023blessing} is the additional environment heterogeneous assumption. In Corollary 1, the dominating term in Corollary 1 is of sample complexity $\widetilde{\mathcal{O}}\left(\frac{SA}{(1-\gamma)^6\varepsilon^2K}\right)$, which is independent with environment heterogeneity $\kappa$ and worse than the result in \cite{woo2023blessing} by a factor $(1-\gamma)^{-1}$. The authors should compare the results to elaborate whether this additional $(1-\gamma)^{-1}$ factor is due to the environment heterogeneity. 2. I would recommend the authors add the asynchronous case to enrich the results. The current results are not sufficient for a journal paper though it may be sufficient for a conference paper. Moreover, I would also recommend the authors add a few more experiments on other more complex environments. 3. In the experiments, Fig 3 does not really match the theoretical results of the theory. The theory tells us the algorithm converges but Fig 3 shows the algorithm may fail to converge with large local updates. The authors need to elaborate the phenomenon with more details. Below I list some minor comments about this paper. 1. In the description of Algorithm 1, please elaborate how $s_t^k(s,a)$ is sampled. I guess it follows from $P^k(\cdot|s,a)$. 2. In the line 6 of Algorithm 1, it should be $s_t^k(s,a)$ instead of $s_t(s,a)$. 3. In Remark 1 last line, I guess it is $\lambda=\Omega(1/T)$ instead of $\lambda=\omega(1/T)$. 4. In Page 15, third line from the bottom, change $\widetilde{P}_t^k\||_1$ to $\||\widetilde{P}_t^k\||_1$. 5. In Section E, the definition of $X_{i,k}(s,a)=\frac{1}{K}\gamma\lambda(1-\lambda)^i(P^k-\widetilde{P}^k_{i-k})V^*$ is somehow misleading as the LHS is a scalar and RHS is a $SA$-vector. I suggest the authors define it as $X_{i,k}=\frac{1}{K}\gamma\lambda(1-\lambda)^i(P^k-\widetilde{P}^k_{i-k})V^*$. 6. In Fig 5, FQL is never defined in the paper, though I know it may mean Federated Q-learning. Broader Impact Concerns: No. ================================================== Review 2: Summary: This paper investigates federated Q-learning with environment heterogeneity, where agents interact with local MDPs having different transition kernels. The study includes a finite-time convergence analysis of Q-learning under FL settings using FedAvg. The convergence bound explores the impact of synchronization period (E) and the degree of heterogeneity ($\kappa$) on convergence speed. Additionally, the paper identifies a significant slowdown in federated Q-learning due to environment heterogeneity and large synchronization periods (E), supported by convergence analysis and experimental results, and suggested two-phase training using different learning rates, alleviating the slowdown. Strengths and Weaknesses: Strengths 1. They conducted a finite-time convergence analysis of federated Q-learning in heterogeneous environments. 2. They extensively discussed the slowdown in convergence caused by extended synchronization periods in heterogeneous environments and empirically demonstrated the effectiveness of switching learning rates in such scenarios. Weaknesses Overall, the paper appears to lack comparisons with other related works in terms of both theoretical and empirical analyses, and the contribution of this paper is somewhat unclear. 1. The convergence analysis and implications presented in the paper seems similar to that of previous works on federated RL involving heterogeneous environments [1,2]. These works have also demonstrated that convergence bounds increase with the level of heterogeneity ($\kappa$) and synchronization periods (E). Despite listing these works in the related section, it remains unclear what novel discoveries or improvements the analysis offers compared to previous studies. 2. Convergence slowdown due to large synchronization periods is a well-known issue in classic FL. In FL, as agents perform multiple local updates, client deviations can lead to convergence slowdown. However, it is unclear from this paper how the observed convergence slowdown differs from general FL challenges. 3. The effectiveness of the proposed two-phase training with different learning rates lacks clarity due to the absence of comparisons with other FL and FRL methods designed to manage client deviations arising from lengthy synchronization periods. For instance, it's unclear whether the proposed method outperforms commonly used techniques like linearly decreasing learning rates, widely adopted in various RL and FL scenarios. [1] Federated temporal difference learning with linear function approximation under environmental heterogeneity, Wang et al., arXiv preprint arXiv:2302.02212, 2023. [2] Finite-time analysis of on-policy heterogeneous federated reinforcement learning, Zhang et al., In The Twelfth International Conference on Learning Representations, 2023. Requested Changes: 1. To address W1, the paper should include detailed comparisons with the convergence bounds and implications presented in previous works on FRL involving environment heterogeneity. 2. To address W2, the paper should discuss unique observations about convergence slowdown with less communications discovered in their specific settings that have not been covered in previous FRL and FL literature. 3. To address W3, the paper should provide comparisons demonstrating the advantages of the proposed two-phase training method over other FL and RL approaches designed to manage client drifts resulting from extended synchronization periods. Broader Impact Concerns: Broader impact is properly addressed ================================================== Review 3: Summary: This paper focused on the synchronous federated Q-Learning algorithm under environmental heterogeneity and showed the performance degradation with regard to the synchronous period $E$, which was validated via the proof, the information bound, and the numerical experiment. In addition, the authors found the two-phase phenomenon (the error first decreasing to a minimum point and then bouncing back rapidly before stabilizing around some fixed values) in their numerical experiments. Overall, I am positive about the rating of this paper if the authors can address my concerns. Strengths and Weaknesses: Strength: 1. The authors provided solid mathematical analysis for the error bound of the degradation and also showed a lower bound of $\Omega(E/T)$. 2. The authors provided comprehensive numerical experiments that show the degradation as well as the two-phase phenomenon, which is quite surprising. Weakness: 1. The authors didn’t discuss the communication cost of their federated algorithm. In fact, even under the degradation, Corollary 1 indicates that their algorithm can reach a better communication cost compared to $O(T)$. 2. When discussing degradation, the authors only limit their discussion to fixed step sizes. It is still unclear about whether the degradation can be alleviated when we use diminishing step sizes like [1] or [2]. It would be better if the authors could provide some reasonings regarding the situations for diminishing step sizes. 3. I suggest the authors provide a more rigorous statement in Theorem 2. First, the RHS in the display mode equation does not include terms related to $O(1/\sqrt{T})$, which seems wrong if we adopt the form $= \Omega(something)$ instead of $\geq \Omega(something)$. Second, $\Omega(E/T)$ seems to hide some constant factors. If the authors are certain that it only hides numerical constants, please state it clearly. 4. There are typos and undefined notations throughout the whole paper. I suggest the authors conduct additional rounds of proofreading. The following content provides an incomprehensive list. (a) In “contributions” in section 1, $K$ is mentioned before the explanation. (b) There might be ambiguity in the function “mod” if it is not clearly undefined. (c) In Lemma 3, $\mbox{max} _{s,a} \|\|\cdot\|\| _\infty$ is wrong. The same issue also appears in the last display mode equation in Appendix C. (d) In the second display mode equation in section 5.1, $\lambda _t$ is wrong. (e) In the third line of Appendix B, $t+1\ \mbox{mod}\ \neq 0$ is wrong. (f) In the second line after the first display mode equation in Appendix B, $\|\| _1$ is wrong. (g) In Appendix C, $V _t,Q _t$ are undefined. (h) In the line after equation 18, $\delta\in\delta$ is wrong. (i) When using Hoeffding’s inequality in equation 19, more explanation will be better. (j) In equation 21, $\chi(t)$ is wrong. (k) In equation 19 and equation 20, $|\cdot|$ is wrong. (l) Between equation 22 and equation 23, $X(\cdot))$ is wrong. (m) In the line after equation 25, please add equation references for “1,2,3”. (n) At the beginning of page 22, please put some effort into handling the situation that $\beta,L$ might not be integers. [1] Is Q-learning Provably Efficient? [2] Federated Q-learning: linear regret speedup with low communication cost Requested Changes: I am positive about the technique soundness of this paper and only wish the authors to address the concerns in the weakness part. In addition, I wonder whether there are any mathematical explanations regarding the two-stage phenomenon. Broader Impact Concerns: Do not apply. This is a theoretical work. ==================================================
# Calibrate And Debias Layer-Wise Sampling For Graph Convolutional Networks Yifan Chen1∗ Tianning Xu1∗ Dilek Hakkani-Tur2 Di Jin2 Yun Yang1 **Ruoqing Zhu**1 1 University of Illinois Urbana-Champaign 2 *Amazon Alexa AI* {yifanc10, tx8, yy84, rqzhu}@illinois.edu {hakkanit, djinamzn}@amazon.com Reviewed on OpenReview: **https://openreview.net/forum?id=JyKNuoZGux** ## Abstract Multiple sampling-based methods have been developed for approximating and accelerating node embedding aggregation in graph convolutional networks (GCNs) training. Among them, a layer-wise approach recursively performs importance sampling to select neighbors jointly for existing nodes in each layer. This paper revisits the approach from a matrix approximation perspective, and identifies two issues in the existing layer-wise sampling methods: suboptimal sampling probabilities and estimation biases induced by sampling without replacement. To address these issues, we accordingly propose two remedies: a new principle for constructing sampling probabilities and an efficient debiasing algorithm. The improvements are demonstrated by extensive analyses of estimation variance and experiments on common benchmarks. Code and algorithm implementations are publicly available at https://github.com/ychen-stat-ml/GCN-layer-wise-sampling. ## 1 Introduction Graph Convolutional Networks (Kipf & Welling, 2017) are popular methods for learning node representations. However, it is computationally challenging to train a GCN over large-scale graphs due to the inter-dependence of nodes in a graph. In mini-batch gradient descent training for an L-layer GCN, the computation of embeddings involves not only the nodes in the batch but also their L-hop neighbors, which is known as the phenomenon of "neighbor explosion" (Zeng et al., 2020) or "neighbor expansion" (Chen et al., 2018a; Huang et al., 2018). To alleviate such a computation issue for large-scale graphs, sampling-based methods are proposed to accelerate the training and reduce the memory cost. These approaches can be categorized as node-wise sampling approaches (Hamilton et al., 2017; Chen et al., 2018a), subgraph sampling approaches (Zeng et al., 2020; Chiang et al., 2019; Cong et al., 2020), and layer-wise sampling approaches (Chen et al., 2018b; Huang et al., 2018; Zou et al., 2019). We focus on layer-wise sampling in this work, which enjoys the efficiency and variance reduction by sampling columns of renormalized Laplacian matrix in each layer. This paper revisits the existing sampling schemes in layer-wise sampling methods. We identify two potential drawbacks in the common practice of layer-wise sampling, especially on FastGCN (Chen et al., 2018b) and LADIES (Zou et al., 2019). First, the sampling probabilities are suboptimal since a convenient while unguaranteed assumption fails to hold on many common graph benchmarks, such as Reddit (Hamilton et al., 2017) and OGB (Hu et al., 2020). Secondly, the previous implementations of layer-wise sampling methods perform sampling *without* replacement, which deviates from their theoretical results, and introduce biases in the estimation. Realizing the two issues, we accordingly propose the remedies with a new principle to construct sampling probabilities and a debiasing algorithm, as well as the variance analyses for the two propositions. ∗ Equal contribution. The majority of this work was done prior to the first author's internship at Amazon Alexa AI. To the best of our knowledge, our paper is the first to recognize and resolve these two issues, importance sampling assumption and the practical sampling implementation, for layer-wise sampling on GCNs. Specifically, we first investigate the distributions of embedding and weight matrices in GCNs and propose a more conservative principle to construct importance sampling probabilities, which leverages the Principle of Maximum Entropy. Secondly, we recognize the bias induced by sampling without replacement and suggest a debiasing algorithm supported by theoretical analysis, which closes the gap between theory and practice. We demonstrate the improvement of our sampling method by evaluating both matrix approximation error and the model prediction accuracy on common benchmarks. With our proposed remedies, GCNs consistently converge faster in training. We believe our proposed debiasing method can be further adapted to more general machine learning tasks involving importance sampling without replacement, and we discuss the prospective applications in Section 7. ## 1.1 Background And Related Work GCN. Graph Convolutional Networks (Kipf & Welling, 2017), as the name suggests, effectively incorporate the technique of convolution filter into the graph domain (Wu et al., 2020; Bronstein et al., 2017). GCN has achieved great success in learning tasks such as node classification and link prediction, with applications ranging from recommender systems (Ying et al., 2018), traffic prediction (Cui et al., 2019; Rahimi et al., 2018), and knowledge graphs (Schlichtkrull et al., 2018). Its mechanism will be detailed shortly in Section 2.1. Sampling-based GCN Training. To name a few of sampling schemes, GraphSAGE (Hamilton et al., 2017) first introduces the "node-wise" neighbor sampling scheme, where a fixed number of neighbors are uniformly and independently sampled for each node involved, across every layer. To reduce variance in node-wise sampling, VR-GCN (Chen et al., 2018a) applies a control variate based algorithm using historical activation. Instead of sampling for each node separately, "layer-wise" sampling is a more collective approach: the neighbors are jointly sampled for all the existing nodes in each layer. FastGCN (Chen et al., 2018b) first introduces this scheme with importance sampling. AS-GCN (Huang et al., 2018) proposes an alternative sampling method which approximates the hidden layer to help estimate the probabilities in sampling procedures. Then, Zou et al. (2019) propose a layer-dependent importance sampling scheme (LADIES) to further reduce the variance in training, and aim to alleviate the issue of sparse connection (empty rows in the sampled adjacency matrix) in FastGCN. In addition, for "subgraph" approaches, ClusterGCN (Chiang et al., 2019) samples a dense subgraph associated with the nodes in a mini batch by graph clustering algorithm; GraphSAINT (Zeng et al., 2020) introduces normalization and variance reduction in subgraph sampling. To provide a more scalable improvement on sampling-based GCN training, we focus on *history-oblivious* layer-wise sampling methods (e.g., FastGCN and LADIES), which do not rely on history information to construct the sampling probabilities. Note that though some other sampling based methods, such as VRGCN and AS-GCN, enjoy attractive approximation accuracy by storing and leveraging historical information of model hidden states, they introduce large time and space cost. For example, the training time of AS-GCN can be "even longer than vanilla GCN" (Zeng et al., 2020). Moreover, they cannot perform sampling and training separately due to the dependence of sampling probabilities on the training information. Debiasing algorithms for weighted random sampling. In practice, layer-wise sampling is performed by a sequential procedure named as weighted random sampling (WRS) (Efraimidis & Spirakis, 2006), which realizes "sampling without replacement" while induces bias (analyzed in Section 5). Similar phenomena have been noticed by some studies on stochastic gradient estimators (Liang et al., 2018; Liu et al., 2019; Kool et al., 2020), which involve WRS as well; some debiasing algorithms are accordingly developed in those works. In Section 5, we discuss the issues of directly applying existing algorithms to layer-wise sampling and propose a more time-efficient debiasing method. ## 2 Notations And Preliminaries We introduce necessary notations and backgrounds of GCNs and layer-wise sampling in this section. Debiasing-related preliminaries are deferred to Section 5. ## 2.1 Graph Convolutional Networks The GCN architecture for semi-supervised node classification is introduced by Kipf & Welling (2017). Suppose we have an undirected graph G = (V, E), where V is the set of n nodes and E is the set of E edges. Denote node i in V as vi, where i ∈ [n] is the index of nodes in the graph and [n] denotes the set {1, 2*, . . . , n*}. Each node vi ∈ V is associated with a feature vector xi ∈ R p and a label vector yi ∈ R q. In a transductive setting, although we have access to the feature of every node in V and every edge in E, i.e. the n×n adjacency matrix A, we can only observe the label of partial nodes Vtrain ⊂ V; predicting the labels of the rest nodes in V − V*train* therefore becomes a semi-supervised learning task. A graph convolution layer is defined as: $$\mathbf{Z}^{(l+1)}=\mathbf{P}\mathbf{H}^{(l)}\mathbf{W}^{(l)},\quad\mathbf{H}^{(l)}=\sigma(\mathbf{Z}^{(l)}),$$ $$(1)$$ (l)), (1) where σ is an activation function and P is obtained from normalizing the graph adjacency matrix A; H(l) is the embedding matrix of the graph nodes in the l-th layer, and W(l)is the weight matrix of the same layer. In particular, H(0) is the n × p feature matrix whose i-th row is xi. For mini-batch gradient descent training, the training loss for an L-layer GCN is defined as 1 |V*batch*| Pvi∈V*batch* `(yi, z (L) i), where ` is the loss function, batch nodes V*batch* is a subset of V*train* at each iteration. z (L) iis the i-th row in Z(L), *| · |* denotes the cardinality of a set. In this paper, we set P = D˜ −1/2(A + I)D˜ −1/2, where D˜ is a diagonal matrix with Dii = 1 + Pi Aij . The matrix P is constructed as a *renormalized Laplacian matrix* to help alleviate overfitting and exploding/vanishing gradients issues (Kipf & Welling, 2017), which is previously used by Kipf & Welling (2017); Chen et al. (2018a); Cong et al. (2020). ## 2.2 Layer-Wise Sampling To address the "neighbor explosion" issue for graph neural networks, sampling methods are integrated into the stochastic training. Motivated by the idea to approximate the matrix PH(l)in (1), FastGCN (Chen et al., 2018b) applies an importance-sampling-based strategy. Instead of individually sampling neighbors for each node in the l-th layer, they sample a set of s neighbors S (l)from V with importance sampling probability pi, where pi ∝Pn j=1 P 2 ji and Pn i=1 pi = 1. For the (l−1)-th layer, they naturally set V (l−1) = S (l). LADIES (Zou et al., 2019) improves the importance sampling probability pi as $$p_{i}^{(l)}\propto\sum_{v_{j}\in{\mathcal{N}}^{(l)}}{\mathbf{P}}_{j i}^{2},\forall i\in[n]$$ $$\mathbf{\Sigma}$$ ji, ∀i ∈ [n] (2) where N (l) = ∪vi∈V(l)N (vi) and Pn i=1 p (l) i = 1. In this case, S (l), the nodes sampled for the l-th layer, are guaranteed to be within the neighborhood of V (l). The whole procedure can be concluded by introducing a diagonal matrix S (l) ∈ R n×n and a row selection matrix Q(l) ∈ R sl×n, which are defined as $$\mathbf{Q}_{k,j}^{(l)}=\begin{cases}1,&j=i_{k}^{(l)}\\ 0,&\text{else}\end{cases},\quad\mathbf{S}_{j,j}^{(l)}=\begin{cases}(sp_{i_{k}^{(l)}})^{-1},&j=i_{k}^{(l)}\\ 0,&\text{else},\end{cases}\tag{3}$$ where slis the sample size in the l-th layer and {i (l) k } sl k=1 are the indices of rows selected in the lth layer. The forward propagation with layer-wise sampling can thus be equivalently represented as Z˜(l+1) = Q(l+1)P S(l)H(l)W(l), H(l) = (Q(l)) T σ(Z˜(l)), where Z˜(l+1) is the approximation of the embedding matrix for layer l. ## 3 Experimental Setup History-oblivious layer-wise sampling methods, the subject of this work, rely on specific assumptions to construct the sampling probabilities. A paradox here is that the original LADIES (the model we aim to improve on) would automatically be optimal under its own assumptions. However, it is important to verify their assumptions on common real-world open benchmarks, where our major empirical evaluation will be performed (c.f. Section 4). In advance of discussions on existing issues and corresponding remedies in Section 4 and Section 5, we introduce the basic setups of main experiments and datasets across the paper. Details about GCN model training are deferred to the related sections. Table 1: Summary of datasets. Each undirected edge is counted once. Each node in ogbn-proteins has 112 binary labels. "Deg." refers to the average degree of the graph. "Feat" refers to the number of features. "Split Ratio" refers to the ratio of training/validation/test data. | Dataset | Nodes | Edges | Deg. | Feat. | Classes | Tasks | Split Ratio | Metric | |---------------|-----------|------------|--------|---------|-----------|---------|---------------|----------| | Reddit | 232,965 | 11,606,919 | 50 | 602 | 41 | 1 | 66/10/24 | F1-score | | ogbn-arxiv | 160,343 | 1,166,243 | 13.7 | 128 | 40 | 1 | 54/18/28 | Accuracy | | ogbn-proteins | 132,534 | 39,561,252 | 597.0 | 8 | binary | 112 | 65/16/19 | ROC-AUC | | ognb-mag | 736,389 | 5,396,336 | 7.3 | 128 | 349 | 1 | 85/9/6 | Accuracy | | ogbn-products | 2,449,029 | 61,859,140 | 50.5 | 100 | 47 | 1 | 8/2/90 | Accuracy | Benchmarks. The distribution of the input graph will impact the effectiveness of sampling methods to a great extent—we can always construct a graph in an adversarial manner to favor one while deteriorate another. To overcome this issue, we conduct empirical experiments on 5 large real-world datasets to ensure the fair comparison and the representative results. The datasets (see details in Table 1) involve: Reddit (Hamilton et al., 2017), ogbn-arxiv, ogbn-proteins, ogbn-mag, and ogbn-products (Hu et al., 2020). Reddit is a traditional large graph dataset used by Chen et al. (2018b); Zou et al. (2019); Chen et al. (2018a); Cong et al. (2020); Zeng et al. (2020). Ogbn-arxiv, ogbn-proteins, ogbn-mag, and ogbn-products are proposed in Open Graph Benchmarks (OGB) by Hu et al. (2020). Compared to traditional datasets, the OGB datasets we use have a larger volume (up to the million-node scale) with a more challenging data split (Hu et al., 2020). The metrics in Table 1 follow the choices of recent works and the recommendation by Hu et al. (2020). Main experiments. To study the influence of the aforementioned issues, we evaluate the matrix approximation error (c.f. Section 4.3 and Figure 2) of different methods in one-step propagation. This is an intuitive and useful metric to reflect the performance of the sampling strategy on approximating the original minibatch training. Since the updates of parameters in the training are not involved in the simple metric above, in Section 6 we further evaluate the prediction accuracy on testing sets of both intermediate models during training and final outputs, using the metrics in Table 1. ## 4 Reconsider Importance Sampling Probabilities In Layer-Wise Sampling The efficiency of layer-wise sampling relies on its importance sampling procedure, which helps approximate node aggregations with much fewer nodes than involved. As expected, the choice of sampling probabilities can significantly impact the ultimate prediction accuracy of GCNs, and different sampling paradigms more or less seek to minimize the following variance (for the sake of notational brevity, from now on we omit the superscript (l) when the objects are from the same layer) $$\mathbb{E}\,\|Q P S H W-Q P H W\|_{F}^{2},$$ $$\left({4}\right)$$ F , (4) where *k · k*F denotes the Frobenius norm. Under the layer-dependent sampling framework, Zou et al. (2019) show that the optimal sampling probability pi for node i satisfies (see Appendix C.1 for a derivation from a perspective of approximate matrix multiplication) $$p_{i}\propto\|{\boldsymbol{Q}}{\boldsymbol{P}}^{[i]}\|\cdot\|({\boldsymbol{H}}{\boldsymbol{W}})_{[i]}\|,$$ pi ∝ kQP [i]*k · k*(HW)[i]k, (5) where for a matrix A, A[i] and A[i]respectively represent the i-th row/column of matrix A. ## 4.1 Current Strategies And Their Limitation The optimal sampling probabilities (5) discussed above are usually unavailable during the mini-batch gradient descent training due to a circular dependency: to sample the nodes in the `-th layer based the probabilities $$\left(5\right)$$ in Equation (5), we need the hidden embedding H(`) which in turn depends on the nodes not yet sampled in the (` − 1)-th layer. In this case, FastGCN (Chen et al., 2018b) and LADIES (Zou et al., 2019) choose to perform layer-wise importance sampling without the information from HW 1. In particular, FastGCN (resp. LADIES) assumes k(HW)[i]*k ∝ k*P [i]k (resp. kQP [i]k), and sets their sampling probabilities as pi ∝ kP [i]k 2(resp. kQP [i]k 2), ∀i ∈ [n]. The proportionality assumption above seems sensible considering the computation of the hidden embedding H involves P . However, this assumption is unguaranteed because of the changing weight matrix W in training, and no previous work (to our knowledge) scrutinizes whether this assumption generally holds. To study the appropriateness of the core assumption, we conduct a linear regression 2 $$({\hat{0}})$$ $$y\sim\beta_{0}+\beta_{1}x$$ y ∼ β0 + β1x (6) for each layer separately, where x ranging over k(HW)[i]k's is the ` 2 norm of a certain row in HW and y over kP [i]k is the norm of the corresponding column in P . Table 2: Regression coefficients in Equation (6) for 3-layer GCNs trained with LADIES and full-batch SGD respectively. Negative β1's are highlighted in boldface. Method LADIES Full-batch Dataset ogbn-arxiv reddit ogbn-proteins ogbn-mag ogbn-arxiv reddit ogbn-proteins ogbn-mag Layer 1 β0 3.517 ± 0.002 11.34 ± 0.01 4.162 ± 0.001 3.687 ± 0.001 2.364 ± 0.002 29.53 ± 0.03 3.942 ± 0.001 3.282 ± 0.001 β1 -0.54 ± **0.01** 8.03 ± 0.07 0.488 ± 0.004 -0.391 ± 0.004 -0.15 ± **0.01** 15.66 ± 0.17 0.375 ± 0.004 -1.03 ± **0.03** R2 0.012 0.012 0.023 0.003 0.001 0.008 0.013 0.026 Layer 2 β0 6.21 ± 0.01 4.41 ± 0.01 26.95 ± 0.01 10.67 ± 0.01 4.01 ± 0.01 27.94 ± 0.02 23.46 ± 0.01 10.26 ± 0.01 β1 4.20 ± 0.03 4.59 ± 0.05 -38.18 ± **0.17** 1.58 ± 0.03 4.47 ± 0.02 24.07 ± 0.02 -35.09 ± 0.15 -4.12 ± **0.01** R2 0.028 0.008 0.074 0.001 0.051 0.022 0.081 0.024 Layer 3 ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) β0 22.21 ±0.02 4.72 ± 0.01 104.924 ± 0.03 29.86 ± 0.03 19.72±0.02 45.82 ± 0.03 174.72 ± 0.09 41.98 ± 0.02 β1 1.00±0.06 0.10 ± 0.03 -137.8 ± 0.4 -0.13 ± **0.08** 2.16±0.07 9.49 ± 0.19 -367.2 ± 1.1 -29.14 ± **0.05** R2 < 0.001 < 0.001 0.160 < 0.001 0.001 0.002 0.153 0.100 Figure 1: Regression lines (orange) and scatter plots of k(HW)[i]k in 3-layer LADIES's different layers on ogbn-arxiv and ogbn-mag. 1This scheme decouples the sampling and the training procedure. We can save the training runtime on GPU by preparing sampling on CPU in advance. 2We take the counterpart assumption k(HW)[i]*k ∝ k*QP [i]k in LADIES as a randomized version of the one k(HW)[i]k ∝ kP [i]k in FastGCN (and therefore focus on the latter for the regression experiments), since k(HW)[i]k's are conceptually independent of the subsequent row selection matrix Q. Supplementary regression experiments on k(HW)[i]k kQP [i]k are collected in Appendix D. Figure 1 presents regression curves of each layer in LADIES on ogbn-arxiv and ogbn-mag. Table 2 summarizes the regression coefficient β0, β1, and R2 of a 3-layer GCN trained by LADIES or regular mini-batch SGD without node sampling ("full-batch" in short). The regression results demonstrate that the assumption k(HW)[i]*k ∝ k*P [i]k is violated on many real-world datasets from two-fold evidence: negative regression coefficients and small R2's. First, regression coefficients for full-batch SGD and LADIES show similar patterns: negative slope β1's appear across multiple layers and different datasets, such as layer 1 of ogbnarxiv, layer 2 of ogbn-protains, and both layers 1 and 3 of ogbn-mag. The negative correlation clearly violates the assumption, k(HW)[i]*k ∝ k*P [i]k. Secondly, the R2(see Table 2) in the single variable regression is corr2(*x, y*), which measures the proportion of y's variance explained by x. A positive β1 with small R2 can only imply a positive but weak correlation between k(HW)[i]k and kP [i]k, however, a weak correlation cannot imply a proportionality relationship. For example, even though the β1 is positive for each layer in Reddit data, corresponding R2's are at the scale of 0.01. Consequently, the performance of LADIES on Reddit is not surprisingly inferior to our method with new sampling probabilities (LADIES+flat in Table 4). Experimental setups are collected in Appendix A.4. Additional regression results for FastGCN and our proposed methods are collected in Appendix D. ## 4.2 Proposed Sampling Probabilities The small or even negative correlation between k(HW)[i]k and kP[i]k (kQP[i]k) implies the inappropriateness of the proportionality assumption and the resulting sampling probabilities in FastGCN/LADIES. To address this issue, we instead admit that we have limited prior knowledge of HW under the history-oblivious setting, and follow the Principle of Maximum Entropy to assume a uniform distribution of k(HW)[i]k's. With this belief, we propose the following sampling probabilities: $$p_{i}\propto\|Q\mathbf{P}^{[i]}\|,\quad\forall i\in[n].$$ $$\left(7\right)$$ pi ∝ kQP [i]k, ∀i ∈ [n]. (7) Compared to LADIES in Equation (2), our proposed sampling probabilities pi's are more conservative. From a matrix approximation perspective, we rewrite the target matrix product as *QP IHW*, and only aim to approximate the known part *QP I*. It turns out that assuming the uniform distribution of the norms of rows in HW can help improve both the variance of the matrix approximation and the prediction accuracy of GCNs, as empirically shown in Section 4.3 and Section 6. In addition to the empirical results, we compare the estimation variance of our probabilities with LADIES in Lemma C.1 under a mild condition 3. Specifically, a common long-tail distribution of HW can justify the strengths of our new probabilities. More discussion and visualization on the distribution of HW are provided in Appendix C. ## 4.3 Matrix Approximation Error Evaluation ![5_Image_0.Png](5_Image_0.Png) Figure 2: Matrix approximation errors of layer-wise sampling methods. The error curves of LADIES show an abnormal U-shape on ogbn-arxiv and ogbn-mag datasets. "flat" and "debiased" denote our proposed methods in Sections 4 and 5 respectively. 3We reiterate that the variance depends on the underlying distribution of the row norms of HW, and therefore no sampling probabilities can always have smaller variance than others. To further justify our proposed sampling probabilities, we consider the following 1-layer embedding approximation error, which evaluates the propagation approximation to the embedding aggregation of a batch: $$\|\tilde{\mathbf{Z}}_{b a t c h}^{(1)}-\tilde{\mathbf{Z}}_{s a m p l i n g}^{(1)}\|_{F}=\|\mathbf{Q}_{b a t c h}\mathbf{P}\mathbf{H}^{(0)}\mathbf{W}^{(0)}-\mathbf{Q}_{b a t c h}\mathbf{P}\mathbf{S}\mathbf{H}^{(0)}\mathbf{W}^{(0)}\|_{F},$$ where Z˜(1) batch and Z˜(1) sampling are the embedding at the first layer computed using all available neighbors and a certain sampling method, respectively; S is the sampling matrix; Q*batch*'s 0, 1 diagonal entries indicate if a node is in the batch. The experiments are repeated 200 times, in which we regenerate the batch nodes (shared by all sampling methods) and the sampling matrix for each method. The batch size is fixed as 512, and the numbers of sampled neighbors ranges over 256, 512, 768, 1024, 1536, and 2048. W(0) is fixed and inherited from the trained model reported in Section 6. In Figure 2, the result of our proposed sampling probabilities (denoted as "LAIDES+flat", blue solid line) is consistently better than that of the original LADIES method (black dashed line) and of FastGCN (black solid line) on every dataset. The debiasing method in this figure will be discussed shortly in the next section. ## 5 Debiased Sampling We first make the clarification that in the derivation of previous layer-wise sampling strategies the neighbor nodes are sampled *with* replacement, proven to be unbiased approximations of GCN embedding. However, in their actual implementations, sampling is always performed *without* replacement, which induces biases since the estimator remain the same as sampling with replacement. We illustrate the biases in Figure 2, where the matrix approximation errors on ogbn-arxiv and ogbn-mag datasets (sparse graphs with few average degrees) are U-shape for LADIES. The curve indicates that the errors even increase with the number of sub-samples, and the approximation performance is heavily deteriorated by the biases. In the following subsections, we dive into the implementation of FastGCN and LADIES, reveal the origin of the biases, propose a new debiasing method for sampling without replacement, and study the statistical properties of the debiased estimators. Remark. We insist on sampling without replacement because it helps reduce the variance of the estimator. For instance, current node-wise sampling GCNs also sample "without replacement"—they apply simple random sampling (SRS, sampling with all-equal probabilities), which is guaranteed to shrink the variance by a finite population correction (FPC) factor n−s n−1 (Lohr, 2019, Section 2.3) 4. ## 5.1 Weighted Random Sampling (Wrs) The implementation of layer-wise importance sampling (without replacement) follows a sequential procedure named as weighted random sampling (WRS) (Efraimidis & Spirakis, 2006, Algorithm D). Given a set V = [n] representing the indices of n items {Xi} n i=1 5 and the associated sampling probabilities {pi} n i=1, we sample s samples and denote the sampled indices as Ik for k = 1, 2*, ..., s*. With s sampled indices, FastGCN/LADIES use the following importance sampling estimator to approximate the target sum of matrices Pn i=1 Xi (adapted to the notations in this section) $${\frac{1}{s}}\sum_{k=1}^{s}X_{I_{k}}/p_{I_{k}},$$ $$({\boldsymbol{\delta}})$$ , (8) which is a weighted average of XIk 's and induces biases when obtained through sampling without replacement. We aim to preserve the linear form Ys := Psk=1 βkXIk in debiasing, and develop new coefficients βk for each XIk to make Ys unbiased; the debiasing algorithm is officially presented in Section 5.3. 4s and n denote the sample size and the population size. 5In layer-wise sampling, Xi represents B[i]CT [i] , where for brevity QP (HW) is denoted as B (C) throughout the analysis. ## 5.2 Analysis Of Bias To analyze the bias of Equation (8), we introduce the following auxiliary notations. In WRS, given the set Sk of k previously sampled indices (0 ≤ k ≤ s − 1, S0 := ∅), the (k + 1)-th random index Ik+1 is sampled from the set V − Sk of the rest n − k indices with probabilities $$p_{i}^{(0)}:=p_{i},\forall i\in V=[n];\quad p_{i}^{(k)}:={\frac{p_{i}}{\sum_{j\in V-S_{k}}p_{j}}},\forall k\in[s-1],i\in V-S_{k}.$$ With the notations introduced, we are now able to analyze the effect of applying Equation (8) while the WRS algorithm is performed. The expectation of a certain summand XIk+1 /pIk+1 will be $$\mathbb{E}\,\frac{\mathbf{X}_{I_{k+1}}}{p_{I_{k+1}}}=\mathbb{E}\left[\mathbb{E}\left[\frac{\mathbf{X}_{I_{k+1}}}{p_{I_{k+1}}^{(k)}}\frac{p_{I_{k+1}}^{(k)}}{p_{I_{k+1}}}\mid\mathcal{F}_{k}\right]\right]=\mathbb{E}\left[\frac{1}{\sum_{i\in V-S_{k}}p_{i}}\sum_{i\in V-S_{k}}\mathbf{X}_{i}\right],\tag{9}$$ $$\operatorname*{\mathrm{g}}\,\mathrm{set}\ S_{k},\forall k=0,1,\cdots,s-1$$ where Fk is the σ-algebra generated by the random indices inside the corresponding set Sk, ∀k = 0, 1, · · · , s− 1, and the second equation holds because p (k) Ik+1 pIk+1 = P1 i∈V −Sk pi is Fk-measurable. The expectation is in general unequal to the target Pn i=1 Xi for k > 0, except for some extreme conditions, such as all-equal pi's. The bias in each summand (except for the first term with k = 0) accumulates and results in the biased estimation. ## 5.3 Debiasing Algorithms We start with a review of existing works on debiasing algorithms for stochastic gradient estimators. Given a sequence of random indices sampled through WRS, there are two common genres to assign coefficients to summands in Equation (8). Both of the two genres relate to the *stochastic sum-and-sample* estimator (Liang et al., 2018; Liu et al., 2019), which can be derived from Equation (9). Using the fact E XIk+1 pIk+1 Pi∈V −Sk pi = E Pi∈V −Sk Xi , a stochastic sum-and-sample estimator of Pn i=1 Xi can be immediately constructed as 6 $$\Pi_{k+1}=\sum_{j\in S_{k}}\mathbf{X}_{i}+\frac{\mathbf{X}_{I_{k+1}}}{p_{I_{k+1}}^{(k)}},\forall k=0,1,\cdots s-1.\tag{10}$$ To minimize the variance, Liang et al. (2018); Liu et al. (2019) develop the first genre to focus on the last estimator Πs and propose methods to pick the initial s − 1 random indices. Kool et al. (2020, Theorem 4) turn to the second genre which utilize Rao-Blackwellization (Casella & Robert, 1996) of Πs. In fast training for GCN, both of the two genres are somewhat inefficient from a practitioner's perspective. The first genre works well when Pi∈Ss−1 piis close to 1, otherwise the last term in Πs, XIk+1 p (k) Ik+1 , will bring in large variance and reduce the sample efficiency; for the second genre, the time cost to perform Rao-Blackwellization (Kool et al., 2020) is extremely high (O(2s) even with approximation by numerical integration) and conflicts with the purpose of fast training. To overcome the issues of the two existing genres, we propose an iterative method to fully utilize each estimator Πk+1 with acceptable runtime to decide the coefficients for each term in Equation (8). Denote our final estimator with s samples as Ys. Algorithm 1 returns the coefficients βk's used in the debiased estimator Ys =Psk=1 βkXIk . The main idea is to perform recursive estimation Y1,Y2*, ...* until Ys and thus update β accordingly. To be more specific, we recursively perform the weighted averaging below: $$\mathbf{Y}_{0}:=0,\quad\mathbf{Y}_{k+1}:=(1-\alpha_{k+1})\mathbf{Y}_{k}+\alpha_{k}.$$ Y0 := 0, Yk+1 := (1 − αk+1)Yk + αk+1Πk+1, ∀k = 0, 1, · · · , s − 1, where α1 = 1 and αk+1 is a constant depending on k. Specifically, Y1 = Π1 = XI1 /pI1 is unbiased and the unbiasedness of Ys can be obtained by induction as each Πk+1 is unbiased as well. There can be variant choices of αk+1's. For example, the *stochastic sum-and-sample* estimator (10) sets all αk+1's as 0 except for αs = 1. In contrast, we intentionally specify αk+1 =n (n−k)(k+1) , motivated by the preference that if all pi's are 1/n, the output coefficients of the algorithm will be all 1/s, the same as the ones in an SRS setting. 6The proof of its unbiasedness is brief and provided by Kool et al. (2020, Appendix C.1). Algorithm 1: Iterative updates of coefficients to construct the ultimate debiased estimator Ys. Input: probabilities {pi} n i=1, random indices {Ik+1} s−1 k=0 generated by WRS with {pi} n i=1 Output: a length s coefficient vector β Initialize β = 0 ∈ R s, pS = 0 (sum of probabilities); for k ← 0 to s − 1 do αk+1 =n (n−k)(k+1) ; β[k+1] = αk+1(1 − pS)/pIk+1 ; for j ← 0 to k − 1 do β[j+1] = (1 − αk+1)β[j+1] + αk+1; end pS = pS + pIk+1 ; end return β; ## 5.4 Effects Of Debiasing We evaluate the debiasing algorithm again by matrix approximation error (see Section 4.3). As shown in Figure 2, our proposed debiasing method can significantly improve the one-step matrix approximation error on all datasets. In particular, by introducing the debiasing algorithm, the U-shape curve of LADIES in Figure 2 no longer exists for debiased LADIES. We also observe if the new sampling probabilities have already been applied (LADIES + flat in Figure 2), an additional debiasing algorithm (LADIES + flat + debiased) only makes marginal improvement on sparse graphs (ogbn-arxiv and ogbn-mag). The observation implies that the effect of debiasing and new sampling probabilities may have overlaps. We provide the following conjectures for this phenomenon. First, for the bias introduced by sampling without replacement, it is significant only when the proportion of sampled nodes over all neighbor nodes is large enough. With a fixed batch size while an increasing sample size, sparse graphs generally exhibit a larger bias since they have a larger "sampling proportion" than dense graphs. Second, our proposed sampling probabilities have a flatter distribution than LADIES, which resembles a uniform distribution and implies a smaller bias (there is no bias in SRS). The phenomenon, therefore, implies an efficient practice to apply the debiasing algorithm: users can decide whether to debias the estimation based on the ratio of the sampling size to the batch size and the degrees in the graphs. In addition to the effect on matrix approximation, the evidence that the debiasing algorithm can also accelerate the convergence and improve the model prediction accuracy will be provided in Section 6. ## 5.5 Sampling Time The iterative updates in Algorithm 1 induces additional O(s 2) time complexity cost in sampling per batch, as in the k-th iteration we need to update the coefficients for the first k random indices sampled. We first remark the time complexity is comparable to the one of embedding aggregation in layer-wise training, as shown in Appendix B. Moreover, since our layer-wise sampling procedure can be performed independently on CPU, it will not retard the training on GPU 7. Note that this decoupling of sampling and training does not hold for some node-wise or layer-wise sampling methods, such as VR-GCN, which requires up-to-date embedding information. The experimental results for sampling time in Table 3 further show that the additional cost of debiasing is acceptable compared to FastGCN and LADIES. For example, comparing "LADIES + debiasing" to LADIES, the sampling time only increases from 11.2 ms to 13.6 ms on ogbn-proteins. In contrast, vanilla node-wise sampling takes 831 ms due to the overhead of row-wisely sampling the sparse Laplacian matrix. 7Technically the sampling results can be prepared in advance of the training on the GPU, and therefore we claim sampling can be performed independently of training. Furthermore, we remark the sequential debiasing procedure is not a fit for GPU. Table 3: Average sampling time (in milliseconds) per batch for layer-wise methods and the vanilla node-wise method. The experiment is conducted on CPU. The batch size is set as 512 and the sample size is set as 512/1024 (indicated in the following parentheses). The "f" and "d" in "LADIES+f+d" denotes "flat" and "debiased" respectively. More experimental details are collected in Appendix A.3. FastGCN LADIES LADIES+f LADIES+d LADIES+f+d Node-wise Reddit (512) 10.6 ± 0.9 10.9 ± 0.3 10.0 ± 0.2 13.1 ± 0.3 13.1 ± 0.4 632.5 ± 4.3 Reddit (1024) 10.0 ± 0.6 11.8 ± 0.4 10.4 ± 0.1 17.1 ± 0.4 16.1 ± 0.6 637.0 ± 4.2 arxiv (512) 4.2 ± 0.1 8.3 ± 0.1 7.8 ± 0.1 11.7 ± 0.2 11.7 ± 0.5 585.2 ± 3.6 arxiv (1024) 6.9 ± 0.1 9.7 ± 0.1 9.0 ± 0.1 17.2 ± 0.3 16.5 ± 0.3 585.6 ± 3.1 mag (512) 16.8 ± 0.1 27 ± 0.1 24.5 ± 0.1 30.0 ± 0.03 27.8 ± 0.1 1084.3 ± 1.7 mag (1024) 18.9 ± 0.2 28.6 ± 0.2 27.7 ± 0.1 36.0 ± 0.2 34.9 ± 0.1 1119 ± 2.9 proteins (512) 11.1 ± 1 11.2 ± 0.3 10 ± 0.2 13.6 ± 0.2 12.6 ± 0.2 830.9 ± 5.3 proteins (1024) 8.9 ± 0.2 12.4 ± 0.1 11.4 ± 0.1 18.9 ± 0.2 18.0 ± 0.3 804.2 ± 4.6 products (512) 54.8 ± 0.7 83.4 ± 1.3 80.3 ± 0.4 83.7 ± 0.8 83.5 ± 0.6 2795.4 ± 4.7 products (1024) 57.1 ± 0.5 80.8 ± 0.8 78.7 ± 0.7 87.0 ± 0.6 85.4 ± 0.7 2737.7 ± 4.8 ## 5.6 Analysis Of Variance In sampling without replacement, the selected samples are no longer independent, and therefore the classical analysis in previous works (c.f. Lemma C.2 in Appendix C) cannot be applied to the variance of WRSbased estimators. To quantify the variance under the WRS setting, we leverage a common technique in experimental design—viewing {β (k) i} n i=1, ∀k ∈ [s] as random variables. β (k) idenote the coefficients assigned to Xi's when the k-th sample is drawn (if i /∈ Sk, β (k) i:= 0). This technique can derive the same result as in the previous random indices (Ik's) setting, while allow a finer analysis of the variance. We can rewrite the variance in Equation (4) as 8 $$\mathbb{E}\,\|\mathbf{B}\mathbf{S}\mathbf{C}-\mathbf{B}\mathbf{C}\|_{F}^{2}=\sum_{j,k}\mathrm{Var}\left(\sum_{i=1}^{n}\beta_{i}^{(s)}\mathbf{B}_{j}^{[i]}\mathbf{C}_{[i],k}\right).$$ The variance above is determined by the covariance matrix Cov(β), whose (*i, j*)-th element is Cov(β (s) i, β(s) j). We provide the following proposition for the covariance matrix Cov(β) in Algorithm 1: Theorem 1. For all k ∈ [s]*, let* pSk be the probability of having Sk as the first k *samples,* q¯ (k) i*be the* probability of index i not in the k *samples, and* q¯ (k) i,j be the probability of both index i, j not in the k *samples.* Define r (k) i:=PSk63i pSk (1 −Pj∈Sk pj )*, where* PSk63i iterates over all Sk that does not contain i*. Then* Var(β (k+1) i) ≥ 0 and Cov(β (k+1) i, β(k+1) j) ≤ 0 *are recursively given as:* $$(1-\alpha_{k+1})^{2}\,{\rm Var}(\beta_{i}^{(k)})+\left(\frac{r_{i}^{(k)}}{p_{i}}-\alpha_{k+1}^{2}\bar{q}_{i}^{(k)}\right),\tag{11}$$ $$(1-\alpha_{k+1})^{2}\,{\rm Cov}(\beta_{i}^{(k)},\beta_{j}^{(k)})-\alpha_{k+1}^{2}\bar{q}_{i,j}^{(k)}.\tag{12}$$ Furthermore, there exists a sequence {αk} s k=1 only depending on k, n such that for all i, j*, Var*(β (k) i) ≤ 1 k ( 1 pi − 1), |Cov(β (k) i, β(k) j)| ≤ 1k , ∀k ∈ [s]. Proof and discussions are collected in Appendix C.4. We remark that due to the fixed weights αs = 1 in the stochastic sum-and-sample estimator (10), its (co)variance is usually larger than ours, especially when s n (intuitively the second term XIs /p(s−1) Isin Equation (10) will cause large variance). 8We let B (C) have n columns (rows), and the j(k)-th element in the i-th column (row) is denoted as B [i] j(C[i],k). ## 6 Experiments In this section, we empirically evaluate the performance of each method on five node prediction datasets: Reddit, ogbn-arxiv, ogbn-proteins, ogbn-mag, and ogbn-products (c.f. Table 1). We denote "LADIES+flat", "LADIES+debiased", and "LADIES+flat+debiased" respectively as the variants of LADIES with the improvements from Section 4, Section 5, and from both. We compare our methods to the original GCN with mini-batch stochastic training (denoted by full-batch), two layer-wise sampling methods: FastGCN and LADIES. Apart from that, we also implement several other fast GCN training methods, including GraphSAGE (Hamilton et al., 2017) (vanilla node-wise sampling while keep using the GCN architecture), VR-GCN (Chen et al., 2018a), and a subgraph sampling method GraphSAINT (Zeng et al., 2020). In training, we use a 2-layer GCN for each task trained with an ADAM optimizer. (Due to limited computational resources, we have to use the shallow GCN since the full-batch method and node-wise sampling methods require much more GPU memory even when L = 3.) The number of hidden variables is 256 and the batch size is 512. For layer-wise sampling methods, we consider two settings for node sample size: 1. fixed as 512 (equal to the batch size); 2. an "increasing" setting (denoted with a suffix (2)) that double nodes will be sampled in the next layer. For node-wise sampling methods (GraphSAGE, VR-GCN), the sample size per node is 2 (denoted with a suffix (2)). For the subgraph sampling method GraphSAINT, the subgraph size is by default equal to the batch size. The experimental results are reported in Table 4, in the form of "mean(±std.)", computed based on 5 runs. More details of the settings are deferred to Appendix A.1. 6.1 Model Convergence Trajectory ![10_image_0.png](10_image_0.png) Figure 3: Metrics (detailed in Table 1) on the validation (dev) sets in each epoch. "f" and "d" refer to the "flat" sampling probability and "debiasing" respectively. The layer-wise sampling methods follow the "increasing" setting (denoted with a suffix (2) in Table 4). We first compare the convergence rates of layer-wise methods. The convergence curves on ogbn-proteins and ogbn-products are shown in Figure 3 (FastGCN is excluded for clearer illustration, due to its outlying curve). Complete results of all methods (including node-wise and subgraph) on all five datasets are deferred to Figure 4 in Appendix A.2. As shown in Figure 3 (and Figure 4 in the appendix), our proposed improvements (LADIES + flat, LADIES + debias, LADIES + flat + debias) exhibit faster convergence rate than LADIES (the solid blue curve). The observation implies both the new sampling probabilities ("flat") and the debiasing algorithm can help accelerate the convergence. Specifically, we note that the effect of debiasing is not as significant as choosing a proper sampling scheme on some datasets (e.g. Reddit and ogbn-products). ## 6.2 Prediction Accuracy Table 4: Metrics (detailed in Table 1) on testing sets of benchmarks. The best results among layer-wise sampling methods and all methods are both highlighted in boldface. The metrics are in percentage (%). The averaged training time for one epoch is in milliseconds and measured on the GPU. | | Reddit | ogbn-arxiv | ogbn-mag | ogbn-proteins | ogbn-products | ogbn-arxiv | ogbn-products | |------------------|------------|--------------|------------|-----------------|-----------------|--------------|-----------------| | Full-batch | 93.81±0.18 | 66.39±0.25 | 29.60±0.27 | 65.71±0.11 | 68.33±0.16 | 65.2 ± 3.97 | 703 ± 77.8 | | Node-wise (2) | 92.13±0.27 | 64.51±0.30 | 29.05±0.45 | 65.76±0.18 | 68.71±0.07 | 22.0 ± 3.64 | 8.34 ± 1.21 | | VR-GCN (2) | 94.62±0.04 | 67.49±0.25 | 28.99±0.40 | 67.45±0.02 | 70.90±0.28 | 86.8 ± 6.09 | 88.5 ± 2.51 | | GraphSAINT | 89.47±0.83 | 60.58±0.62 | 24.77±0.88 | 66.33±0.07 | 62.77±1.04 | 23.1 ± 3.26 | 8.26 ± 1.11 | | FastGCN | 44.46±2.30 | 25.44±0.82 | 7.13±0.48 | 52.44±1.88 | 26.98±0.42 | 24.1 ± 5.12 | 7.43 ± 1.04 | | LADIES | 73.86±0.17 | 60.95±0.31 | 24.79±0.48 | 68.28±0.05 | 52.97±1.11 | 19.3 ± 3.25 | 10.3 ± 1.24 | | w/ flat | 90.04±0.11 | 62.76±0.26 | 27.30±0.27 | 68.26±0.06 | 62.64±0.10 | 16.0 ± 2.37 | 8.03 ± 1.07 | | w/ debias | 86.73±0.36 | 61.55±0.40 | 25.74±0.80 | 68.87±0.09 | 55.92±0.92 | 19.1 ± 3.45 | 8.44 ± 1.09 | | w/ flat & debias | 89.34±0.40 | 61.90±0.43 | 27.41±0.28 | 67.64±0.15 | 62.57±0.22 | 14.6 ± 2.59 | 8.11 ± 1.09 | | FastGCN (2) | 60.31±0.70 | 30.23±1.10 | 5.85±0.57 | 58.80±1.06 | 31.58±0.70 | 24.9 ± 5.09 | 8.34 ± 1.11 | | LADIES (2) | 88.34±0.11 | 64.01±0.39 | 28.59±0.39 | 68.17±0.10 | 65.24±0.40 | 21.0 ± 4.00 | 11.2 ± 1.35 | | w/ flat | 93.64±0.19 | 66.56±1.84 | 29.58±0.19 | 68.10±0.07 | 68.47±0.25 | 23.1 ± 4.04 | 13.1 ± 1.52 | | w/ debias | 92.75±0.22 | 65.93±0.27 | 30.08±0.28 | 69.14±0.15 | 67.18±0.24 | 14.0 ± 1.94 | 8.54 ± 1.09 | | w/ flat & debias | 93.59±0.09 | 66.22±0.10 | 29.88±0.34 | 67.75±0.11 | 68.49±0.06 | 21.0 ± 3.60 | 8.50 ± 1.15 | The prediction accuracy (measured by corresponding metrics) on testing sets of different datasets is reported in Table 4. Our proposed methods, which combine the new sampling probabilities and the debiasing algorithm, are comparable to the full-batch training (no node sampling), showing consistent improvement over existing layer-wise sampling methods, FastGCN and LADIES. On most benchmarks, the prediction performance of our methods is better than the vanilla node-wise sampling method (GraphSAGE) and GraphSAINT. In addition, we would like to remark on the overlapping effect for the flat sampling probabilities and the debiasing algorithm. In Table 4, the accuracy of LADIES+flat+debias is better than LADIES+flat and LADIES+debias on most benchmarks, but this relative improvement is not remarkable on several benchmarks. This phenomenon is also observed in Figure 2 and Figure 3. The only exception is the ogbn-proteins dataset, where LADIES+flat+debias is inferior to LADIES+flat and LADIES+debias. However, on this dataset, LADIES and its variants even outperform "full-batch" GCN and VR-GCN. We tend to believe GCN's accuracy on ogbn-proteins is mainly impacted by factors other than the sampling variance. We further remark on a seemingly strange phenomenon that some efficient GCNs have a higher prediction accuracy than full-batch GCN on several datasets. We speculate the reason is that a good approximation can recover the principal components in the original embedding matrix, restrain the noise via the sparse / low-rank structure, and serve as implicit regularization. There are similar observations (Sanyal et al., 2018; Chen et al., 2021) in Convolutional Neural Networks (CNN) and Transformers as well, that applying a lowrank regularizer, such as SVD, to the representation of the intermediate layers can improve the prediction accuracy of models. Since LADIES improves upon FastGCN, alleviates the sparsity connection issue thereof, and performs consistently better than FastGCN on our benchmarks, the variants of FastGCN with our methods ("FastGCN+flat", "FastGCN+debias", and "FastGCN+flat+debias") are not the focus of this paper. We report their accuracy results in Appendix A for reference, where consistent improvements of "FastGCN+flat+debias" over FastGCN are observed on most benchmarks. We also notice that in some cases, the flat sampling probabilities and the debiasing algorithm bring insignificant improvements or even decreased accuracy. We conjugate that this is because FastGCN's structural deficiency somewhat distorts the convergence of GCN, as remarked by Zou et al. (2019). To be more specific, FastGCN does not apply a layer-dependent sampling strategy, which leads to the failure to capture the dynamics of mini-batch SGD and the overly sparse layer-wise connection. When the model does not converge well, the bias and variance of sampling is no longer the dominant factor in model's prediction accuracy. ## 6.3 Training Time In the right-most columns of Table 4 9, we report the training time for ogbn-arxiv and ogbn-product (complete runtime results for 2-layer and 3-layer GCNs are respectively provided in Tables 5 and 6 in Appendix A.3, along with the detailed timing settings). The difference between the runtime of our proposed methods and LADIES is not significant, since these layer-wise sampling strategies have the same propagation procedure. As for VR-GCN, a much heavier computational cost is required than the layer-wise methods as the expense of involving historical embedding, which helps it achieve the best accuracy on three tasks; in particular, its training time is even comparable to the full-batch method on ogbn-arxiv data. Overall, we comment that based on the experimental results, our improvement in sampling probabilities and the proposed debiasing algorithm lead to better accuracy than the other two classical layer-wise sampling methods, FastGCN, and LADIES, while maintaining roughly the same computational cost. ## 7 Conclusion And Discussion In this work, we revisit the existing layer-wise sampling strategies and propose two improvements. We first show that following the Principle of Maximum Entropy, a conservative choice of sampling probabilities outperforms the existing ones, whose proportionality assumption on embedding norms is in general unguaranteed. We further propose an efficient debiasing algorithm for layer-wise importance sampling through iterative updates of coefficients for columns sampled, and provide statistical analysis. The empirical experiments show our methods achieve high accuracy close to the SOTA node-wise sampling method, VR-GCN, while significantly saving runtime on GCN training like other history-oblivious layer-wise sampling methods. We remark our debiased importance sampling strategy can be extended to a broader class of graph neural networks, such as node-wise sampling for GCN. Current node-wise sampling methods, e.g. GraphSAGE and VR-GCN, uniformly sample neighbors of each node *without* replacement. To further improve the approximation accuracy, node-wise sampling can also introduce importance sampling with our debiasing algorithm. Moreover, our debiasing algorithm can be applied to general machine learning involving sampling among a finite number of elements. In addition to the batch sampling for stochastic gradient descent (SGD) training discussed in Section 5.3, the proposed debiasing method can also contribute to sampling-based efficient attention, fast kernel ridge regression, etc. ## Acknowledgments We appreciate all the valuable feedback from the anonymous reviewers and the TMLR editors. Y. Yang's research was supported in part by U.S. NSF grant DMS-2210717. R. Zhu's research was supported in part by U.S. NSF grant DMS-2210657. 9The least average training time is not boldfaced, since the training time on GPU is sensitive to the hardware and has a relatively large standard deviation the best result cannot significantly outperform the second-best one. ## References Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. *IEEE Signal Processing Magazine*, 34(4):18–42, 2017. George Casella and Christian P Robert. Rao-blackwellisation of sampling schemes. *Biometrika*, 83(1):81–94, 1996. Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with variance reduction. In Jennifer G. Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference* on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of *Proceedings of Machine Learning Research*, pp. 941–949. PMLR, 2018a. URL http://proceedings. mlr.press/v80/chen18p.html. Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: Fast learning with graph convolutional networks via importance sampling. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018b. URL https://openreview.net/forum?id=rytstxWAW. Yifan Chen, Qi Zeng, Heng Ji, and Yun Yang. Skyformer: Remodel self-attention with gaussian kernel and nystr\" om method. *Advances in Neural Information Processing Systems*, 34:2122–2135, 2021. Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Ankur Teredesai, Vipin Kumar, Ying Li, Rómer Rosales, Evimaria Terzi, and George Karypis (eds.), Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pp. 257–266. ACM, 2019. doi: 10.1145/3292500.3330925. URL https://doi.org/10. 1145/3292500.3330925. Weilin Cong, Rana Forsati, Mahmut T. Kandemir, and Mehrdad Mahdavi. Minimal variance sampling with provable guarantees for fast training of graph neural networks. In Rajesh Gupta, Yan Liu, Jiliang Tang, and B. Aditya Prakash (eds.), KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pp. 1393–1403. ACM, 2020. URL https://dl.acm.org/doi/10.1145/3394486.3403192. Zhiyong Cui, Kristian Henrickson, Ruimin Ke, and Yinhai Wang. Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting. IEEE Transactions on Intelligent Transportation Systems, 21(11):4883–4894, 2019. Petros Drineas, Ravi Kannan, and Michael W Mahoney. Fast monte carlo algorithms for matrices i: Approximating matrix multiplication. *SIAM Journal on Computing*, 36(1):132–157, 2006. Pavlos S Efraimidis and Paul G Spirakis. Weighted random sampling with a reservoir. *Information Processing* Letters, 97(5):181–185, 2006. William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1024–1034, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/ 5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/ 2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html. Wen-bing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 4563–4572, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 01eee509ee2f68dc6014898c309e86bf-Abstract.html. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id= SJU4ayYgl. Wouter Kool, Herke van Hoof, and Max Welling. Estimating gradients for discrete random variables by sampling without replacement. In *8th International Conference on Learning Representations, ICLR 2020,* Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum? id=rklEj2EFvB. Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V. Le, and Ni Lao. Memory augmented policy optimization for program synthesis and semantic parsing. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems* 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 10015–10027, 2018. URL https:// proceedings.neurips.cc/paper/2018/hash/f4e369c0a468d3aeeda0593ba90b5e55-Abstract.html. Runjing Liu, Jeffrey Regier, Nilesh Tripuraneni, Michael I. Jordan, and Jon D. McAuliffe. Rao-blackwellized stochastic gradients for discrete distributions. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 4023–4031. PMLR, 2019. URL http://proceedings.mlr.press/v97/liu19c.html. Sharon L Lohr. *Sampling: design and analysis*. Chapman and Hall/CRC, 2019. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. Semi-supervised user geolocation via graph convolutional networks. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 2009–2019, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1187. URL https://aclanthology.org/P18-1187. Amartya Sanyal, Varun Kanade, Philip HS Torr, and Puneet K Dokania. Robustness via deep low-rank representations. *arXiv preprint arXiv:1804.07090*, 2018. URL https://arxiv.org/abs/1804.07090. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In *European semantic web conference*, pp. 593–607. Springer, 2018. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. *IEEE transactions on neural networks and learning systems*, 2020. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Yike Guo and Faisal Farooq (eds.), Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pp. 974–983. ACM, 2018. doi: 10.1145/3219819.3219890. URL https://doi.org/10.1145/3219819.3219890. Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna. Graphsaint: Graph sampling based inductive learning method. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=BJe8pkHFwS. Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. Layer-dependent importance sampling for training deep and large graph convolutional networks. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 11247–11256, 2019. URL https:// proceedings.neurips.cc/paper/2019/hash/91ba4a4478a66bee9812b0804b6f9d1b-Abstract.html. ## A Supplementary Experimental Setup And Results A.1 Experimental Setup Details We describe the additional details of experiment setups for Section 6. All the models are implemented by PyTorch. We use one Tesla V100 SXM2 16GB GPU with 10 CPU threads to train all the models listed in Section 6. Our implementation of full-batch method, FastGCN, and LADIES are adapted from the codes by Zou et al. (2019); the implementation of vanilla node-wise sampling, VR-GCN, GraphSAINT is adapted from the codes by Cong et al. (2020). For the vanilla node-wise sampling method, there are several variants of GNN structures (Ying et al., 2018) while we fix the model structure as GCN in our experiments for fair comparison. We use ELU as the activation function in the convolutional layer for all the models: ELU(x) = x for x > 0, ELU(x) = exp(x) − 1 for x ≤ 0. For the details in model training, the learning rate is 0.001 and the dropout rate is 0.2, which means 20 percents of hidden units are randomly dropped during the training. Validation and testing are performed with full-batch inference (our experiments on those graph benchmarks are transductive: all the possible neighbors are accessible during training) on validation and testing nodes. Note that some existing PyTorch implementations of GCNs involve several ad-hoc tricks, such as row-normalizing sampled Laplacian matrix. For the prediction accuracy evaluation experiments in Section 6, we stop training when the validation F1 score does not increase for 200 batches. For a fair comparison, we remove certain tricks in our experiments, such as normalization of each row in the sampled Laplacian matrix in layer-wise sampling. Such a trick may help in the practice, but it might not be compatible with some other methods and is out of the focus of our study. We use the metrics in Table 1 to evaluate the accuracy of each method. Concretely, Reddit is a multi-class classification task, and we use the Micro-F1 score with function "sklearn.metrics.f1_score". For OGB data, we use the built-in evaluator function in module ogb provided by Hu et al. (2020). ## A.2 Model Convergence Figure 4 is a supplementary to Figure 3. We present the convergence curve of all methods on every task. The setting of each model is the same as in Figure 3. ## A.3 Sampling Time And Training Time We compare the sampling time per batch for 1-layer GCN with layer-wise sampling methods (FastGCN, LADIES, and our proposed methods) and GraphSAGE by experiments on CPU. The time is presented in milliseconds. The batch size is 512, and the number of sampled nodes is 512 or 1024. The average sampling time (followed by standard deviation) over 200 batches is presented in Table 3. We note that the sampling time may involve some overhead costs. For example, the input Laplacian matrix is Scipy-spare-matrix on the CPU, while in sampling, it is converted to a PyTorch-sparse-matrix. By Table 3, we conclude that the cost of debiasing algorithm is acceptable. Moreover, since the debiasing only depends on the number of nodes sampled, its time cost will be dwarfed by sampling on very large graphs. For example, sampling 512 nodes, the average batch sampling time for "LADIES", "LADIES + debiased", "LADIES + flat + debiased", are 8.3±0.1, 11.7±0.2, 11.7±0.5 respectively on ogbn-arxiv while 83.4±1.3 and 83.7±0.8 and 83.5±0.6 respectively on ogbn-products data. As we mentioned in Section 5.3, the node-wise sampling takes a significantly longer time because individually sampling from each row in the re-normalized Laplacian matrix (stored as a sparse matrix) leads to a large overhead cost. We present the complete training time (per batch) of 2-layer and 3-layer GCNs in Tables 5 and 6 respectively. The time is presented in millisecond and averaged over 110 batches, where we discard the first 20 and the last 20 out of 150 total batches to disregard potential warm-up time for GPU. The other settings are kept the same as our experiments of accuracy evaluation in Table 4. We note that the timing on GPU is sensitive to the hardware and has a relatively large standard deviation. As presented in Tables 5 and 6, our proposed methods have similar training time with LADIES due to the same propagation scheme of GCN with layer-wise sampling strategy. VR-GCN generally shows superiority Full-batch FastGCN | LADIES+d | | |------------|--------| | LADIES+f+d | ...... | | LADIES | |----------| | LADIES+f | Node-wise VR-GCN GraphSAINT ![17_image_0.png](17_image_0.png) Figure 4: Metrics of each epoch on the validation set. The layer-wise sampling methods follow the "increasing" setting (denoted with a suffix (2) in Table 4); for the node-wise sampling methods, the number of neighbors is 2 per node. in prediction accuracy (see Table 4). However, it also takes a significantly longer time in training since its propagation involves using and updating historical activation. | Reddit | ogbn-arxiv | ogbn-mag | ogbn-proteins | ogbn-products | | |--------------------|--------------|-------------|-----------------|-----------------|-------------| | Full-batch | 372 ± 21.5 | 65.2 ± 3.97 | 72.3 ± 8.25 | 1702 ± 67.0 | 703 ± 77.8 | | Node-wise (2) | 8.13 ± 1.12 | 22.0 ± 3.64 | 17.3 ± 4.67 | 9.50 ± 1.29 | 8.34 ± 1.21 | | Node-wise (10) | 11.3 ± 1.05 | 19.7 ± 2.84 | 22.0 ± 5.60 | 10.3 ± 1.29 | 11.7 ± 1.31 | | VR-GCN (2) | 153 ± 17.3 | 86.8 ± 6.09 | 106 ± 12.4 | 239 ± 42.7 | 88.5 ± 2.51 | | VR-GCN (10) | 302 ± 23.9 | 104 ± 8.47 | 175 ± 15.1 | 360 ± 45.6 | 402 ± 65.3 | | GraphSAINT | 8.23 ± 1.14 | 23.1 ± 3.26 | 20.4 ± 5.98 | 8.34 ± 1.07 | 8.26 ± 1.11 | | FastGCN | 9.47 ± 1.21 | 24.1 ± 5.12 | 23.3 ± 6.27 | 8.50 ± 1.22 | 7.43 ± 1.04 | | LADIES | 7.95 ± 1.03 | 19.3 ± 3.25 | 16.7 ± 4.54 | 8.69 ± 1.11 | 10.3 ± 1.24 | | w/ flat | 7.86 ± 1.04 | 16.0 ± 2.37 | 26.1 ± 6.54 | 9.00 ± 1.21 | 8.03 ± 1.07 | | w/ debiased | 7.86 ± 1.11 | 19.1 ± 3.45 | 19.5 ± 5.67 | 8.21 ± 1.04 | 8.44 ± 1.09 | | w/ flat & debiased | 8.01 ± 1.09 | 14.6 ± 2.59 | 20.5 ± 5.62 | 9.06 ± 1.14 | 8.11 ± 1.09 | | FastGCN (2) | 9.47 ± 1.25 | 24.9 ± 5.09 | 20.8 ± 5.71 | 8.76 ± 1.06 | 8.34 ± 1.11 | | LADIES (2) | 14.2 ± 4.68 | 21.0 ± 4.00 | 22.7 ± 6.05 | 8.83 ± 1.12 | 11.2 ± 1.35 | | w/ flat | 8.70 ± 1.13 | 23.1 ± 4.04 | 21.9 ± 5.65 | 10.3 ± 1.24 | 13.1 ± 1.52 | | w/ debiased | 8.75 ± 1.15 | 14.0 ± 1.94 | 16.3 ± 4.64 | 10.7 ± 1.29 | 8.54 ± 1.09 | | w/ flat & debiased | 8.12 ± 1.13 | 21.0 ± 3.60 | 13.3 ± 4.15 | 12.5 ± 1.41 | 8.50 ± 1.15 | Table 5: Average training time (in milliseconds) per batch for a 2-layer GCN. | Reddit | ogbn-arxiv | ogbn-mag | ogbn-proteins | ogbn-products | | |--------------------|---------------|--------------|-----------------|-----------------|----------------| | Full-batch | 1042.8 ± 30.1 | 148 ± 5.8 | 352.1 ± 11.5 | 3312.3 ± 82.1 | 4490.7 ± 102.6 | | Node-wise (2) | 10.9 ± 1.3 | 27 ± 4.2 | 17.2 ± 4.3 | 11.5 ± 1.3 | 11.5 ± 1.1 | | Node-wise (10) | 77.8 ± 12 | 34.9 ± 3.3 | 52.4 ± 6.7 | 24.6 ± 1.1 | 50.4 ± 1.1 | | VR-GCN (2) | 379.7 ± 23.6 | 154.1 ± 12 | 218.7 ± 16.7 | 428.9 ± 53 | 473.6 ± 69.7 | | VR-GCN (10) | 858.1 ± 32 | 224.9 ± 17.8 | 488.1 ± 34.7 | 1618.7 ± 67.3 | 2075 ± 112.8 | | GraphSAINT | 9.5 ± 1.1 | 22.6 ± 3.2 | 21.2 ± 5.6 | 11.2 ± 1.3 | 10.5 ± 1.0 | | FastGCN | 16.9 ± 6.3 | 30.8 ± 4.3 | 19 ± 4.8 | 13 ± 1.3 | 8.9 ± 1.0 | | LADIES | 10.6 ± 1.3 | 27.6 ± 3.7 | 19.9 ± 4.8 | 11.1 ± 1.3 | 8.9 ± 1.0 | | w/ flat | 10.1 ± 1.1 | 26.4 ± 3.9 | 12.1 ± 2.4 | 10.1 ± 1.2 | 9.9 ± 1.0 | | w/ debiased | 10.1 ± 1.2 | 30 ± 4.1 | 22.5 ± 5.5 | 10.6 ± 1.2 | 10.6 ± 1.0 | | w/ flat & debiased | 9.7 ± 1.1 | 27.1 ± 3.6 | 15.7 ± 4.3 | 10.9 ± 1.2 | 9.3 ± 1.0 | | FastGCN (2) | 9.6 ± 1.2 | 27.4 ± 4.6 | 18.9 ± 4.7 | 9.8 ± 1.2 | 9.4 ± 1.1 | | LADIES (2) | 10 ± 1.2 | 29 ± 4.1 | 19.2 ± 5.1 | 10.5 ± 1.2 | 10.2 ± 1.1 | | w/ flat | 16.1 ± 4.9 | 27.2 ± 3.7 | 21.4 ± 5.8 | 10.4 ± 1.1 | 13.1 ± 1.4 | | w/ debiased | 10.3 ± 1.1 | 24.7 ± 3.1 | 23.3 ± 5.8 | 10.5 ± 1.2 | 12.4 ± 1.3 | | w/ flat & debiased | 10.7 ± 1.1 | 26.8 ± 4.4 | 24.8 ± 6.4 | 10.5 ± 1.1 | 9.9 ± 1.1 | Table 6: Average training time (in milliseconds) per batch for a 3-layer GCN. ## A.4 Regression Experimental Setup For the regression experiments in Section 4.1, we train a 3-layer GCN with LADIES sampler or full-batch sampler, with 256 hidden variables per layer. The batch size is 512. Early stopping training policy is applied. The regression lines in Figure 1 are fitted based on all the (k(HW)[i]k, kP [i]k) pairs collected from converged 3-layer GCN models in 5 repeated experiments. To make the pattern of the scatter plot clear, we only randomly sample 1000 pairs of data in each layer in each scatter plot. We also remark that these experiments are conducted to check the assumption of importance sampling, rather than pursuing SOTA performance. When we finish training the model, the norms of rows in HW are extracted through a full-batch inference with all training nodes. We collect additional regression experiments in Appendix D. Table 7: Testing metrics (detailed in Table 1) for FastGCN with flat sampling or/and debiasing on benchmarks. The metrics are in percentage (%). LADIES 73.86 ± 0.17 60.95 ± 0.31 24.79 ± 0.48 68.28 ± 0.05 52.97 ± 1.11 FastGCN 44.46 ± 2.30 25.44± 0.82 7.13 ± 0.48 52.44 ± 1.88 26.98 ± 0.42 FastGCN w/ flat 53.78 ± 0.86 22.76 ± 0.69 5.69 ± 0.65 61.68 ± 0.50 32.72 ± 1.6 FastGCN w/ debias 44.27 ± 2.26 26.57 ± 2.52 4.66 ± 1.17 53.64 ± 1.33 26.77 ± 1.32 FastGCN w/ flat & debias 53.88 ± 0.94 24.56 ± 2.1 7.54 ± 0.31 60.30 ± 1.89 29.79 ± 2.19 LADIES (2) 88.34 ± 0.11 64.01 ± 0.39 28.59 ± 0.39 68.17 ± 0.10 65.24 ± 0.40 FastGCN (2) 60.31 ± 0.70 30.23 ± 1.10 5.85 ± 0.57 58.80 ± 1.06 31.58 ± 0.70 FastGCN (2) w/ flat 59.60 ± 1.07 26.68 ± 1.96 6.68 ± 0.03 64.04 ± 0.63 35.27 ± 1.14 FastGCN (2) w/ debias 51.11 ± 1.93 25.65 ± 1.02 5.83 ± 0.76 56.30 ± 2.49 27.38 ± 1.82 FastGCN (2) w/ flat & debias 60.22 ± 0.84 24.41 ± 1.02 6.06 ± 0.62 63.13 ± 0.57 34.47 ± 1.76 | Accuracy Metrics | | | | | |--------------------|------------|----------|---------------|---------------| | Reddit | ogbn-arxiv | ogbn-mag | ogbn-proteins | ogbn-products | ## A.5 Supplementary Accuracy Results For Fastgcn With Flat Sampling Probabilities And Debiasing To complement the accuracy evaluation results in Table 4, we accordingly implement the variants of FastGCN ("FastGCN w/ flat", "FastGCN w/ debias", and "FastGCN w/ flat & debias") and perform the same experiments (all the five datasets in Section 6.2) on them. We still follow the same settings (including all the hyper-parameters in training) described at the beginning of Section 6: "FastGCN (2)" similarly refers to the "increasing" setting, where "double nodes will be sampled in the next layer". We summarize their accuracy results in Table 7, where we also provide the accuracy of original FastGCN and LADIES for comparison. By introducing flat sampling probabilities and debasing algorithms, we observe "FastGCN w/ flat & debias" makes consistent improvements over FastGCN on most benchmarks. However, in some cases, the flat sampling probabilities and the debiasing algorithms bring insignificant improvements or even decreased accuracy. We provide a possible explanation for the phenomenon as follows. As illustrated in Table 7, FastGCN and its variants have significantly worse performance than even vanilla LADIES on all the benchmarks, which implies that the poor model convergence of FastGCN is mainly caused by a structural deficiency, the sparse layer-wise connection (Zou et al., 2019), on these benchmarks; regarding the impact on the model accuracy, the structural deficiency far outweighs the variance in sampling, and our improvement on sampling variance cannot always lead to higher model accuracy. ## B Time Complexity Analysis | | Layer-wise | Node-wise | |-----------------------|--------------|--------------| | Nodes Aggregation | O(scpL) | O(sbLp) | | Linear Transformation | O(sp2L) | O(sbL−1p 2 ) | Table 8: The time complexity for computation for L-layer GCN training by layer-wise sampling and node-wise sampling. The first column refers to the matrix operation type, nodes aggregation or linear transformation. We analyze the complexity of vanilla node-wise sampling and layer-wise sampling in this section. The analysis is adapted from the work by Zou et al. (2019), but we show a lighter bound for layer-wise sampling. For l such that 0 ≤ l ≤ L − 1, the propagation formulas for sampling based GCN can be formulated as: $$\tilde{\mathbf{Z}}^{(l+1)}=\bar{\mathbf{P}}^{(l)}\tilde{\mathbf{H}}^{(l)}W^{(l)},$$ where H˜ (l) = Z˜(l) ∈ R sl×p, P¯(l) ∈ R sl+1×sl, W(l) ∈ R p×p. In particular, for LADIES, P¯(l) LADIES = Q(l+1)P S(l). For simplicity, we suppose that the hidden dimension in each layer is fixed as p, the same as the dimension of H(0). The batch size and the numbers of nodes sampled in each layer are set all equal to a fixed constant s. We assume the number of sampled neighbors per node in node-wise sampling is b. We denote the maximal degree of all the nodes in the graph as c. The computational cost of the propagation comes from two parts: the linear transformation, a dense matrix product, H˜ (l)W(l) and the node aggregation, a sparse matrix product, P¯(l)(H˜ (l)W(l)). The time complexity is summarized in Table 8. We additionally comment although the time cost of two parts both linearly depend on the number of nodes involved (number of non-zero elements in Q(l+1)), the node aggregation part usually dominates since the sparse matrix product involved is less efficient than the dense matrix product involved in a modern computer. The linear transformation, H˜ (l)W(l)is dense matrix production. The cost depends on the shape of two matrices, and is given as O(slp 2). LADIES fixes sl as s for each layer, so O(slp 2) = O(sp2). For node-wise sampling sl = sbL−l, since the number of node grows exponentially. Thus, by summation over all the layers, we have the results in the second row in Table 8. The node aggregation, P¯(l)(H˜ (l)W(l)) is a sparse matrix production, since P¯(l)is sparse. For simplicity, we denote H˜ (l)W(l) as C(l) ∈ R sl×p. Thus, the time complexity of this sparse matrix production becomes O(nnz(Pl)p), where nnz(Pl)is the number of non-zero entries in P¯(l). For layer-wise sampling, since we sample s nodes for each layer and each node has at most c neighbors, so nnz(Pl) ≤ sc. For node-wise sampling, since each node has b neighbors and the neighbors are not shared by all the nodes in each layer, nnz(Pl) = bsl = sbL+1−l. By summation over all the layers, we attain the results in the first row of Table 8. ## C Theoretical Analysis Of Sampling Probabilities C.1 Results In Approximate Matrix Multiplication In this section, we revisit approximate matrix multiplication to derive the previous layer-wise sampling methods. Specifically, the sampling matrix S used in FastGCN and LADIES can be decomposed as S = ΠΠT, where Π ∈ R n×dis a sub-sampling sketching matrix defined as follows: Definition C.1 (Sub-sampling sketching matrix). Consider a discrete distribution which draws i with probability pi > 0, ∀i ∈ [n]*. For a random matrix* Π ∈ R n×d, if Π has i.i.d. columns and each column Π(j)can randomly be √ 1 dpi ei with probability pi, where ei is the i-th column of the n-by-n identity matrix In*, then* Π is called a sub-sampling sketching matrix with sub-sampling probabilities {pi} n i=1. With this definition, we introduce a result in AMM to construct the sub-sampling sketching matrix, which coincides with the conclusion in FastGCN and LADIES. Theorem C.1 (Theorem 1 (Drineas et al., 2006)). *Suppose* B ∈ R nB×n, C ∈ R n×nC *, the number of subsampled columns* d ∈ Z + such that 1 ≤ d ≤ n*, and the sub-sampling probabilities* {pi} n i=1 P are such that n i=1 pi = 1 *and such that for a quality coefficient* β ∈ (0, 1] $$p_{i}\geq\beta\frac{\|\mathbf{B}^{[i]}\|\|\mathbf{C}_{[i]}\|}{\sum_{i^{\prime}=1}^{n}\|\mathbf{B}^{[i^{\prime}]}\|\|\mathbf{C}_{[i^{\prime}]}\|},\forall i\in[n].\tag{1}$$ $$(13)$$ Construct a sub-sampling sketching matrix Π ∈ R n×d *with sub-sampling probabilities* {pi} n i=1 as in Definition C.1, and let BΠΠT C be an approximation to BC. Let δ ∈ (0, 1) and η = 1 + p(8/β) log(1/δ). Then with probability at least 1 − δ, $$\|B C-B\Pi\Pi^{T}C\|_{F}^{2}\leq\frac{\eta^{2}}{\beta d}\|B\|_{F}^{2}\|C\|_{F}^{2}.$$ F . (14) Remark. The theorem is closely related to Lemma 1 in Appendix B of LADIES, which studies the variance E kBC − BSCk 2 F . For the choice of sub-sampling probabilities, Equation (13) reproduces the conclusion in FastGCN and LADIES, when we respectively take B as P and QP . ## C.2 Comparison Of Sampling Variance Between Our Sampling Probabilities And Ladies Whether our choice of probabilities can outperform the LADIES depends on the distribution of the norms of rows in HW. When kHW(i)k is not proportional to the corresponding `2 norm of column (QP ) (i), our $$(14)$$ proposed probabilities can benefit the approximate matrix multiplication task more than the ones assuming a relation of proportionality. We find the common long-tail distribution of numbers suffices to exert the strengths of the new probabilities, which can be concluded as the following assumption: Assumption 1. To simplify the notation, we denote B := QP and C := HW, where P is an n-by-n *matrix* as defined above. Let m be the number of non-zero columns in B*, and define* C1 :=kBk 2 F /m (Pn i=1 kB[i]k/m) 2 ≥ 1. There also exists a constant C2 ≥ 1 *such that* 1 C2 kCk 2 F /n ≤ kC[i]k 2 ≤ C2kCk 2 F /n*. Assume* C1/C2 2 ≥ 1. With the assumption above, we show the variance of the approximation with our proposed probabilities is smaller than the variance of LADIES by the following lemma. Lemma C.1. We denote the sampling matrix with our probabilities in Equation (7) as S1*, and denote the* sampling matrix with probabilities of LADIES in Equation (2) as S0. If Assumption 1 *holds, then we have* $$\mathbb{E}\,\|\mathbf{B}\mathbf{S}_{1}\mathbf{C}-\mathbf{B}\mathbf{C}\|_{F}^{2}\leq\mathbb{E}\,\|\mathbf{B}\mathbf{S}_{0}\mathbf{C}-\mathbf{B}\mathbf{C}\|_{F}^{2}.$$ Remark. Assumption 1 is related to the uniformity in the distributions of kB[i]k's and kC[i]k's. We tentatively discuss the implication of the assumption in Appendix C.2. We remark the assumption indicates it is unrealistic that the new probabilities can outperform the ones in LADIES, as distributions of datasets can vary. Nevertheless, as shown in Section 6 it can be an effective attempt to improve the prediction accuracy of LADIES by simply adopting the conservative sampling scheme. To prove Lemma C.1, we first adapt a technical lemma (Zou et al., 2019, Lemma 1), which relates the sampling matrix to the variance (expectation of squared Frobenius norm) of the approximate matrix multiplication. Lemma C.2 (Adapted from Lemma 1 (Zou et al., 2019)). *Given two matrices* B ∈ R nB×n and C ∈ R n×nC , for any i ∈ [n] define the positive probabilities pi's such that Pn i=1 pi = 1*. We further require the probability* pi = 0 if and only if the corresponding column B[i] or row C[i]*is all-zero. The sub-sampling sketching matrix* Π ∈ R n×dis generated accordingly. Let S := ΠΠT*, it holds that* $$\mathbb{E}_{\mathbf{S}}\left[\|\mathbf{B}\mathbf{S}\mathbf{C}-\mathbf{B}\mathbf{C}\|_{F}^{2}\right]=\frac{1}{d}\left(\sum_{i:p_{i}>0}\frac{1}{p_{i}}\left\|\mathbf{B}^{[i]}\right\|^{2}\cdot\|\mathbf{C}_{[i]}\|^{2}-\|\mathbf{B}\mathbf{C}\|_{F}^{2}\right).$$ where d *is the number of samples.* With the lemma above, the proof of Lemma C.1 is provided as follows. Proof. Recall the notation in the main paper is simplified as B := QP , C := HW. As the union of neighbors of nodes in Q cannot cover all the nodes, some columns in B are all-zero, and we accordingly define a Q-measurable matrix L as in Lemma C.2. We have $$\mathbb{E}\left[\|B S_{1}C-B C\|_{F}^{2}\right]=\mathbb{E}_{\mathbf{Q}}\left[\mathbb{E}_{S_{1}}\left(\|B S_{1}C-B C\|_{F}^{2}|\mathbf{Q}\right)\right]$$ = 1 d EQ X i:pi>0 1 pi B[i] 2 ·C[i] 2− kBCk 2 F . where the second equation holds as we apply Lemma C.2 to the inner expectation in the right-hand side of the first line. Plugging pi ∝ kB[i]k (Equation (7) in the main paper) into the preceding probabilities pi's, we reach E -kBS1C − BCk 2 F = EQ hPi:pi>0 B[i] Pi:pi>0 B[i]C[i] 2i d− EQ -kBCk 2 F d = 1 d EQ " Xn i=1 B[i] ! Xn i=1 B[i]C[i] 2 !# − 1 d EQ -kBCk 2 F . As computed by Zou et al. (2019), the variance of LADIES is similarly given as $$\mathbb{E}\left[\|\mathbf{B}\mathbf{S}_{\mathbf{0}}\mathbf{C}-\mathbf{B}\mathbf{C}\|_{F}^{2}\right]=\frac{1}{d}\,\mathbb{E}_{\mathbf{Q}}\left[\left(\sum_{i:p_{i}>0}\left\|\mathbf{B}^{(i)}\right\|^{2}\right)\left(\sum_{i:p_{i}>0}\left\|\mathbf{C}_{[i]}\right\|^{2}\right)\right]-\frac{1}{d}\,\mathbb{E}_{\mathbf{Q}}\left[\|\mathbf{B}\mathbf{C}\|_{F}^{2}\right].$$ Consequently, to prove the lemma it suffices to show that $$\left(\sum_{i:p_{i}>0}\left\|\mathbf{B}^{[i]}\right\|\right)\left(\sum_{i:p_{i}>0}\left\|\mathbf{B}^{[i]}\right\|\left\|\mathbf{C}_{[i]}\right\|^{2}\right)\leq\left(\sum_{i:p_{i}>0}\left\|\mathbf{B}^{[i]}\right\|^{2}\right)\left(\sum_{i:p_{i}>0}\left\|\mathbf{C}_{[i]}\right\|^{2}\right),\tag{15}$$ and the inequality above follows with Assumption 1. Specifically, plugging the inequality kC[i]k 2 ≤ C2kCk 2 F /n, ∀i ∈ [n] in the left-hand-side above, we have $$\left(\sum_{i:p_{i}>0}\|\mathbf{B}^{[i]}\|\right)\left(\sum_{i:p_{i}>0}\|\mathbf{B}^{[i]}\|\,\|\mathbf{C}_{[i]}\|^{2}\right)\leq\left(\sum_{i=1}^{n}\|\mathbf{B}^{[i]}\|\right)^{2}\frac{C_{2}}{n}\|\mathbf{C}\|_{F}^{2}=\frac{m}{C_{1}}\|\mathbf{B}\|_{F}^{2}\frac{C_{2}}{n m}m\|\mathbf{C}\|_{F}^{2},$$ in which the last equation comes from the definition C1 :=kBk 2 F /m (Pn i=1 kB[i]k/m) 2 . To close the proof, we utilize the inequality 1 C2 kCk 2 F /n ≤ kC[i]k 2 and bound mkCk 2 F by nC2Pi:pi>0 C[i] 2. Finally we attain Equation (15) ``` with the core assumption C1 C2 2 ≥ 1. ♦ ``` Remark. In Assumption 1 we indeed implicitly assume kB[i]k's follow a long-tail distribution that most norms are around the average while a few columns have large norms. The high non-uniformity makes the average of squared norms much larger than the square of averaged norms. For kC[i]k's, considering the normalization techniques (such as batch or layer normalization) to stabilize the scale of the parameters, they tend to not vary widely, which implies a small C2. The numerical experiments on the comparison of approximation error (see Figure 2) and the histograms of the norms in trained models shown in Figure 5 further validate the assumption. Based on the empirical analysis above, we claim the assumption is mild and tends to hold at least for some datasets. ## C.3 Distributions Of The Matrix Rows / Columns Norm Figure 5 demonstrates the distribution of kP [i]k and k(HW)[i]k (layer 1, 2, 3) for Reddit, ogbn-arxiv, ogbnproteins, and ogbn-mag datasets. The k(HW)[i]k's are obtained from the experiment in Section A.4. The outliers larger than the 99.9% quantile or small than the 0.1% quantile are removed. As shown in the histograms, our analysis regarding Assumption 1 tends to hold generally on these datasets. For the norms of columns in P (as a replacement for QP for clarity), we observe there are some columns with large norms far beyond the average. Those columns contribute a lot to the quadratic mean, which results in a huge C1 in Assumption 1. In contrast, the norms of rows in HW concentrate around their average, inducing a small C2. Those facts together with Assumption 1 and Lemma C.1 explain why our proposed sampling probabilities are more proper for some real datasets. ## C.4 Proof Of Theorem 1 Proof. We first show E β (k) i = 1, ∀i ∈ [n], k ∈ [s]. As β (k) iis constructed by Algorithm 1 to attain the unbiased estimator, take Xi = 1, Xj = 0, ∀j 6= i, and we have E β (k) i = EPn j=1 β (k) j Xj =Pn j=1 Xj = 1, ∀i ∈ [n], k ∈ [s]. With E β (k+1) iat hand, we still need to compute E (β (k+1) i) 2(and E β (k+1) i β (k+1) j) to obtain the (co)variance. To start the analysis, we recursively write β (k+1) ias $$\beta_{i}^{(k+1)}=\mathbf{1}_{\{i\in S_{k}\}}[\beta_{i}^{(k)}(1-\alpha_{k+1})+\alpha_{k+1}]+\mathbf{1}_{\{i\not\in S_{k}\}}\mathbf{1}_{\{i\in S_{k+1}\}}\frac{1-\sum_{j\in S_{k}}p_{j}}{p_{i}}\alpha_{k+1}:=\pi_{i}^{k+1}(\beta_{i}^{(k)})+\gamma_{i}^{(k+1)}.$$ ![23_image_0.png](23_image_0.png) Figure 5: Distributions of ||P[i]|'s and ||(HW)¡¡||'s for Reddit, ogbn-arxiv, ogbn-protein and ogbn-mag. For E (β (k+1) i) 2, we notice the cross term 2π k+1 i(β (k) i)γ (k+1) iis always zero as 1{i∈Sk}1{i6∈Sk}:= 0; as for the first terms, utilizing the fact 1{i∈Sk}β (k) i = β (k) i we have $$\mathbb{E}\left(\pi_{i}^{k+1}(\beta_{i}^{(k)})\right)^{2}=\mathbb{E}\left(\beta_{i}^{(k)}\right)^{2}(1-\alpha_{k+1})^{2}+2\alpha_{k+1}(1-\alpha_{k+1})+q_{i}^{k}\alpha_{k+1}^{2},$$ to obtain the last term, erm, $$\begin{aligned} \mathbb{E}\left(\gamma_i^{(k+1)}\right)^2 &= \mathbb{E}\,\mathbb{E}\left(\mathbf{1}_{\left\{i\not\in S_k\right\}}\mathbf{1}_{\left\{i\in S_{k+1}\right\}}\frac{1-\sum_{j\in S_k}p_j}{p_i}\alpha_{k+1}|i\not\in S_k\right)\\ &= \mathbb{E}\left[\mathbf{1}_{\left\{i\not\in S_k\right\}}\,\mathbb{E}\left(\mathbf{1}_{\left\{i\in S_{k+1}\right\}}\frac{1-\sum_{j\in S_k}p_j}{p_i}\alpha_{k+1}|i\not\in S_k\right)\right]\\ &= q_i^k \sum_{S_k\not\in I_i}\frac{p_{S_k}}{q_i^k}\frac{1-\sum_{j\in S_k}p_j}{p_i}\alpha_{k+1}^2=\frac{r_i^k}{p_i}\alpha_{k+1}^2. \end{aligned}$$ For E β (k+1) i β (k+1) j, we can similarly drop the last term E γ (k+1) iγ (k+1) jas 1{i∈Sk}1{i6∈Sk}:= 0; as for the first term E π k+1 i(β (k) i)π k+1 j(β (k) j), we have E π k+1 i(β (k) i)π k+1 j(β (k) j) = E β (k) i β (k) j(1 − αk+1) 2 + E 1{i∈Sk}β (k) j + 1{j∈Sk}β (k) i αk+1(1 − αk+1) + p k i,jα 2 k+1, where p k i,j is the probability that **both** index *i, j* are in the first k samples; as for the next term E π k+1 i(β (k) i)γ (k+1) j, we first compute E β (k) iγ (k+1) j = E E β (k) i 1{j6∈Sk}1{j∈Sk+1} 1 −Pj 0∈Sk pj 0 pjαk+1|Fk = E β (k) i 1{j6∈Sk} 1 −Pj 0∈Sk pj 0 pjαk+1 E1{j∈Sk+1}|Fk = E " β (k) i 1{j6∈Sk} 1 −Pj 0∈Sk pj 0 pjαk+1 pj 1 −Pj 0∈Sk pj 0 # = E 1{j6∈Sk}β (k) i αk+1, and similarly we have and similarly we have E 1{i∈Sk}γ (k+1) j = E E 1{i∈Sk}1{j6∈Sk}1{j∈Sk+1} 1 −Pj 0∈Sk pj 0 pjαk+1|Fk = E 1{i∈Sk}1{j6∈Sk} 1 −Pj 0∈Sk pj 0 pjαk+1 E1{j∈Sk+1}|Fk = E " 1{i∈Sk}1{j6∈Sk} 1 −Pj 0∈Sk pj 0 pjαk+1 pj 1 −Pj 0∈Sk pj 0 # = E1{j6∈Sk}1{i∈Sk} αk+1; accordingly we can obtain and applying the same derivation as above we have E π k+1 i(β (k) i)γ (k+1) j = E $${\bf1}_{\{j\not\in S_{k}\}}\beta_{i}^{(k)})\alpha_{k+1}(1-\alpha_{k+1})+\mathbb{E}\left({\bf1}_{\{j\not\in S_{k}\}}{\bf1}_{\{i\in S_{k}\}}\right)\alpha_{k+1}^{2},$$ E π k+1 j(β (k) j)γ (k+1) i = E $$\{i\not\in S_{k}\}\beta_{j}^{(k)}\Big)\alpha_{k+1}(1-\alpha_{k+1})+$$ αk+1(1 − αk+1) + E1{i6∈Sk}1{j∈Sk} α 2 k+1. Combining all the pieces together, we obtain E β (k+1) i β (k+1) j = E β (k) i β (k) j(1 − αk+1) 2 + 2αk+1(1 − αk+1) + p k i,j + E1{j6∈Sk}1{i∈Sk} + 1{i6∈Sk}1{j∈Sk} α 2 k+1 = E β (k) i β (k) j(1 − αk+1) 2 + 2αk+1(1 − αk+1) + p k i,j + E1{j6∈Sk}1{i∈Sk} + 1{i6∈Sk}1{j∈Sk} α 2 k+1. With the derivation above, we have $$\begin{array}{c}{{\mathbb{E}\,(\beta_{i}^{(k+1)})^{2}=\mathbb{E}\,(\beta_{i}^{(k)})^{2}(1-\alpha_{k+1})^{2}+2\alpha_{k+1}(1-\alpha_{k+1})+(\frac{r_{i}^{(k)}}{p_{i}}+q_{i}^{k})\alpha_{k+1}^{2},}}\\ {{\mathbb{E}\,\beta_{i}^{(k+1)}\beta_{j}^{(k+1)}=\mathbb{E}\,\beta_{i}^{(k)}\beta_{j}^{(k)}(1-\alpha_{k+1})^{2}+2\alpha_{k+1}(1-\alpha_{k+1})+q_{i,j}^{k}\alpha_{k+1}^{2}}}\end{array}$$ where q k i (= 1 − q¯ (k) i) is the probability that index i is in the first k samples, and similarly q k i,j (= 1 − q¯ (k) i,j = q k i +q k j −p k i,j ) is the probability that either index i or index j is in the first k samples. Plugging the expression above into the following identities, $$\begin{array}{c}{{\mathrm{Var}(\beta_{i}^{(k+1)})=\mathbb{E}\,(\beta_{i}^{(k+1)})^{2}-\mathbb{E}^{2}\,(\beta_{i}^{(k+1)})}}\\ {{\mathrm{Cov}(\beta_{i}^{(k+1)},\beta_{j}^{(k+1)})=\mathbb{E}\,\beta_{i}^{(k+1)}\,\beta_{j}^{(k+1)}-\mathbb{E}\,\beta_{i}^{(k+1)}\,\mathbb{E}\,\beta_{j}^{(k+1)},}}\end{array}$$ we can then have the expression for the covariance stated in the main paper. For the scale of the covariance, we prove the upper bound through induction. We can verify the upper bounds hold for k = 1, and for the (co)variance with αk = 1 k , Var(β (k+1) i) and Cov(β (k+1) i, β(k+1) j) now respectively becomes $$\mathrm{Var}(\beta_{i}^{(k+1)})=\frac{k^{2}}{(k+1)^{2}}\mathrm{Var}(\beta_{i}^{(k)})+\left(\frac{r_{i}^{(k)}}{p_{i}}-\bar{q}_{i}^{(k)}\right)\frac{1}{(k+1)^{2}},$$ $$\mathrm{Cov}(\beta_{i}^{(k+1)},\beta_{j}^{(k+1)})=\frac{k^{2}}{(k+1)^{2}}\mathrm{Cov}(\beta_{i}^{(k)},\beta_{j}^{(k)})-\bar{q}_{i,j}^{(k)}\frac{1}{(k+1)^{2}}.$$ Utilizing the induction conditions that for all *i, j*, we have $$\mathrm{Var}(\beta_{i}^{(k)})\leq\frac{1}{k}(\frac{1}{p_{i}}-1),$$ Along with the facts that $\frac{r_i^{(k)}}{p_i}-\bar{q}_i^{(k)}\leq\frac{\bar{q}_i^{(k)}}{p_i}-\bar{q}_i^{(k)}\leq\frac{1}{p_i}-1$ and $\bar{q}_{i,j}^{(k)}\leq1$, that i,j ≤ 1, we can finally achieve the inequality $$\mathrm{Var}(\beta_{i}^{(k+1)})\leq\frac{1}{k+1}(\frac{1}{p_{i}}-1),$$ $$\mathrm{Cov}(\beta_{i}^{(k+1)},\beta_{j}^{(k+1)})|\leq\frac{1}{k+1}.$$ ♦ ``` Remark. The choice of αk = 1 k here is mainly for easing the proof, while may not be the optimal choice in ``` practice; indeed in SRS the αk's are different than the ones used here. ## D Supplementary Regression Results D.1 Full Batch Training In this subsection, we present the regression analysis for GCN with full-batch SGD training (without sampling). Figure 6 shows a similar pattern, as supplementary to Figure 1. Here, the scatter plots make use of all points. The assumption: k(HW)[i]*k ∝ k*P [i]k still does not hold. Note that we do not have the regression result on the ogbn-product dataset, since the training of a 3-layer GCN fails due to memory limitation. ![26_image_0.png](26_image_0.png) Figure 6: Regression of ||(HW);|| ~ ßo + B]|PÑ|| on Reddit, ogbn-arxiv, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by full-bacth sampler. The f ## D.2 Fastgcn/Fastgcn+Debiasing In this subsection, we present the regression analysis for FastGCN/FastGCN+debiasing in this subsection. The regression results are illustrated in Figures 7 and 8; more details can be found in Table 9. The ![27_image_0.png](27_image_0.png) distribution patterns of the (k(HW)[i]k, kP [i]k) pairs in FastGCN/FastGCN+debiasing are similar to the patterns in Figures 6 and 9; we can analogously draw the conclusion that for models trained by FastGCN/FastGCN+debiasing, the assumption k(HW)[i]*k ∝ k*P [i]k tends to not hold. Figure 7: Regression of k(HW)[i]k ∼ β0 + β1kP [i]k on Reddit, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by FastGCN sampler. The fitted regression line is in orange color. Table 9: Regression coefficients for k(HW)[i]k ∼ β0 + β1kP [i]k. The data come from 3-layer GCNs trained with FastGCN/FastGCN+d(ebiasing) respectively. No regression has high R2 and the R2for positive β1's are highlighted in boldface. | Method | Dataset | Layer 1 | Layer 2 | | | Layer 3 | | | | | |------------|---------------|-----------|-----------|-------|--------|-----------|-------|--------|---------|-------| | β0 | β1 | R2 | β0 | β1 | R2 | β0 | β1 | R2 | | | | ogbn-arxiv | 2.651 | -0.165 | 0.005 | 1.627 | 0.981 | 0.017 | 1.640 | 0.123 | <0.001 | | | reddit | 11.773 | 8.453 | 0.009 | 3.639 | 5.155 | 0.006 | 2.524 | 1.587 | 0.001 | | | FastGCN | ogbn-proteins | 2.853 | 0.457 | 0.022 | 4.692 | -5.883 | 0.069 | 12.938 | -22.552 | 0.098 | | ogbn-mag | 2.278 | -0.057 | 0.001 | 1.208 | 0.023 | <0.001 | 0.938 | -0.108 | 0.005 | | | ogbn-arxiv | 2.847 | -0.164 | 0.004 | 1.984 | 1.277 | 0.021 | 2.385 | 0.111 | <0.001 | | | reddit | 13.747 | 9.712 | 0.008 | 4.654 | 6.322 | 0.007 | 3.264 | 1.563 | 0.001 | | | FastGCN+d | ogbn-proteins | 3.090 | 0.397 | 0.017 | 5.372 | -7.357 | 0.072 | 11.395 | -20.179 | 0.102 | | ogbn-mag | 2.310 | -0.067 | 0.002 | 1.194 | -0.009 | <0.001 | 0.999 | -0.147 | 0.006 | | ![28_image_1.png](28_image_1.png) ![28_image_0.png](28_image_0.png) Figure 8: Regression of ||(HW)¡¡|| ~ ß + B]|Plʲ|| on Reddit, ogbn-arxiv, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by FastGCN+debiasing sample color. ## D.3 Ladies: Setting 1 This subsection presents regression plots for ||(HW)¡¡|| ~ ||P(¹|| as supplementary to Figure 1. Note that a different regression setting ||(HW)¡¡|| ~ ||QP[i]|| is collected in Appendix D.4. ![29_image_0.png](29_image_0.png) Figure 9: Regression of ||(HW)¡¡|| ~ ßo + ß1||Plö|| on Reddit, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by LADIES. The fitted regression line is in orange color. ## D.4 Ladies: Setting 2 As remarked in the footnote in Section 4.1, we view the assumption ||(HW)¡¡|| x ||QP[0]| in LADIES as a randomized version of the one ||(HW)¡¡|| x ||P(ð|| and hence focus on the latter regression setting. However, it may still be interesting to present regression analysis of ||(HW)¡¡|| ~ ||QP[0]| in this subsection to study the corresponding assumption ||(HW)¡¡|| x ||QP[0]| in LADIES. Compared to the original assumption in FastGCN, setting ||QP[i]||'s as the predictor causes some different patterns in the regression. · There are more empty columns in QP than in P. For a single selection matrix, we have fewer (x, y) pairs. To compensate, we pick 500 Q's and record non-zero ||QP[i]|'s and corresponding ||(HW)[i]||'s as the regression input. 30 · There is also a higher portion of high-leverage points after considering the selection matrix Q-the points with large |QP[i]|'s are fewer while they have higher influence on the coefficients (which is not favored in regression analysis since they increase the standard error of the estimated coefficients). The regression results are illustrated in Figures 10 and 11; more details can be found in Table 10. We note the regression results still fail to support the proportionality assumption in LADIES: most ß1's are negative, and even for the positive ß1's the R2 (the coefficient of determination, equal to the square of the correlation coefficient in univariate linear regression) is small, which matches the observation in the figures that there is no clear proportionality in the data. ![30_image_0.png](30_image_0.png) ![30_image_1.png](30_image_1.png) Figure 10: Regression of ||(HW)¡¡|| ~ ßo + B1||QP[0|| on Reddit, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by LADIES sampler. The fitted regression line is in orange color. ![31_image_0.png](31_image_0.png) Figure 11: Regression of k(HW)[i]k ∼ β0 + β1kQP [i]k on Reddit, ogbn-arxiv, ogbn-proteins, and ogbn-mag datasets. 3-layer GCN is trained by LADIES+debiasing sampler. The fitted regression line is in orange color. Table 10: Regression coefficients for k(HW)[i]k ∼ β0 + β1kQP [i]k. The data come from 3-layer GCNs trained with LADIES/LADIES+d(ebiasing) respectively. No regression has high R2 and the R2for positive β1 are highlighted in boldface. | Method | Dataset | Layer 1 | | Layer 2 | | Layer 3 | | | | | |------------|---------------|-----------|-------|-----------|---------|-----------|--------|---------|----------|-------| | β0 | β1 | R2 | β0 | β1 | R2 | β0 | β1 | R2 | | | | ogbn-arxiv | 3.819 | -0.424 | 0.007 | 14.827 | -20.853 | 0.048 | 32.079 | -26.803 | 0.028 | | | reddit | 12.218 | 24.459 | 0.006 | 6.409 | 7.711 | 0.001 | 5.897 | -8.366 | 0.002 | | | LADIES | ogbn-proteins | 4.425 | 0.642 | 0.006 | 33.042 | -139.554 | 0.033 | 113.204 | -405.534 | 0.060 | | ogbn-mag | 4.138 | -0.797 | 0.011 | 20.193 | -17.247 | 0.039 | 48.205 | -19.879 | 0.011 | | | ogbn-arxiv | 3.593 | -0.538 | 0.011 | 6.503 | 4.125 | 0.025 | 19.388 | 1.374 | 0.001 | | | reddit | 22.122 | 13.941 | 0.008 | 21.783 | 13.231 | 0.004 | 37.739 | 1.320 | <0.001 | | | LADIES+d | ogbn-proteins | 4.025 | 0.296 | 0.012 | 24.889 | -36.718 | 0.080 | 109.424 | -197.488 | 0.152 | | ogbn-mag | 4.184 | -0.653 | 0.009 | 12.524 | -0.920 | 0.001 | 31.837 | 0.118 | <0.001 | |
Review 1: Summary: In this paper, the authors points out two potential drawbacks (sub-optimal sampling probabilities and sampling without replacement) in the common practice for layer-wise sampling of graph convolutional networks. To address these, thr authors 1) propose a more conservative principle to construct importance sampling probabilities relying on the Principle of Maximum Entropy; 2) suggest a debiasing algorithm to deal with the bias induced by sampling without replacement. Strengths and Weaknesses: Strengths: 1. The authors provide in-depth analysis on the two drawbacks in layer-wise sampling and the results are convincing. 2. The proposed sampling probabilities (Eq (7)) is improved based on the violated assumpation in FastGCN/LADIES, which in spirit is interesting. 3. It seems that the proposed debiasing method can be further adapted to more general machine learning tasks involving importance sampling without replacement, which may broden the application areas. Weaknesses&Questions: Though the overall paper is interesting, I have the following concerns: 1. From Table 4, when choosing LADIES(2) as the baseline, the training time increased when additionally applying “flat” on the “debias” variant (21 vs 14). However, when choosing LADIES as the baseline, the training time decreases from “w/ debias”(19) to “w/ flat&debias”(14). Could you provide some explanations? 2. In terms of some metrics (e.g., ogbn-arxiv), the performance decreases when additionally applying “flat” on the “debias” variant. From these point of view, the proposed two strategies may be conflict in some application sceneries.. 3. From Figure2, the “flat&debiased” baseline has similar performance compared with the “debiased” baseline when the number of sampled nodes is relatively small (around 1000). Could the two strategies be automatically selected via some threasholds to achieve a bettern trade-off between effectiveness and efficiency. Requested Changes: Please see the weakness part above. Broader Impact Concerns: No. ================================================== Review 2: Summary: The paper investigates two issues on layer-wise sampling schemes for GCNs (FastGCN and LADIES). The first one refers to wrong assumptions on sampling probabilities, which is tackled by adopting the Maximum Entropy principle. The second issue refers to biased estimation due to sampling without replacement, for which the paper introduces a debiasing sequential procedure. Experiments on relevant benchmarks show better convergence and accuracy of LADIES combined with the proposed solutions. Strengths and Weaknesses: **Strengths** - The paper identifies an important theory-practice gap regarding assumptions and biases in layer-wise sampling on GCNs. - The proposed solutions are simple and address the identified issues. - Experiments include relevant large-scale benchmarks and validate the efficacy of the proposed solutions. **Weaknesses** - I am not sure if this contribution advances the state-of-the-art in (large-scale) node classification since vanilla GCNs are weak models, usually outperformed by simple fast approaches [e.g.,1, 2]. - The experimental setup only considers shallow models (up to 3 layers). - Except for Table 3, it is unclear how time is measured (e.g., if it was measured on GPU or CPU). Reporting only CPU numbers could be misleading. [1] https://arxiv.org/abs/2010.13993 [2] https://arxiv.org/abs/1902.07153 Requested Changes: Overall, this is a solid and interesting work. I only have a few minor requests/comments: - Adding to the captions whether time was measured on GPU or CPU (like in Table 3) would be helpful. Also, when the paper says "our procedure can be performed independently on CPU, it will not retard the training on GPU", it neglects CPU-GPU communication time. Is this communication time insignificant here? Somehow I would expect the sequential debiasing procedure on GPU to cause a higher effect on the overall training time. - It would be interesting to report accuracy numbers for FastGCN+flat+debias in the Appendix. - Saying that the success of GCNs comes from the "successful approximation to the spectral graph convolutions" (Sec. 1.1) is questionable. For instance, I would simply begin with "GCN has achieved great success ...". - The paper would benefit from a textual review. Here are some typos: - Sparese (Page 2) - Matrix \hat{D} should be bold. (Page 2) - Use \mathcal{S} (before equation 2) - We insist ON sampling ... (Remark) Broader Impact Concerns: I have no broader impact concerns. ================================================== Review 3: Summary: This paper aims to improve layer-wise sampling for graph convolutional networks (GCN), with a particular focus on two popular models, i.e., FastGCN and LADIES. The authors pointed out two shortcomings of the current sampling methods: 1) the sampling probabilities are suboptimal because they are often constructed under unguaranteed assumptions; 2) the sampling is performed without replacement, bringing in unexpected estimation biases. Accordingly, the authors proposed two methods to address these issues respectively. They propose to generate sampling probabilities following the Principle of Maximum Entropy and introduce an iterative debiasing algorithm. Results show that the proposed methods could reduce the matrix approximation error and improve downstream performance. Strengths and Weaknesses: Strengthens: * This paper points out two problems related to the traditional sampling strategy used in layer-wise sampling for GCN; the problems might be interesting to the GCN research community. * The proposed methods are simple and obtained encouraging performance as reported in the paper. Weaknesses: * Some motivational analyses and statements are not convincing. * More empirical evidence should be given to show the generality of the identified problems. Requested Changes: 1) The analysis for the sampling probability problem in Section 4 is not convincing a) First of all, it is FastGCN that assumes $||(\mathbf{H}\mathbf{W})_{\[i\]}||\propto ||\mathbf{P}^{\[i\]}||$ rather than LADIES. Why do the authors examine the correlation between them for LADIES? For LADIES, it should be $||\mathbf{Q}\mathbf{P}^{\[i\]}||$. Such a mismatch makes the results in Table 2 and Figure 1 less convincing, so it’s still questionable whether the traditional assumption really deteriorates the sampling probability. b) Secondly, stating that “little information of $||(\mathbf{H}\mathbf{W})_{\[i\]}||$ can be explicitly retrieved from $\mathbf{P}$” is annoying. Both position and negative coefficients could tell that these two variables are correlated, albeit with a different relationship. Instead, by using your proposed approach dropping the approximated part or assuming a constant of 1 for $||\mathbf{P}^{\[i\]}||$, the correlation is totally ignored. c) Thirdly, from intuition, the proposed sampling method should perform better in settings where negative coefficients are observed but worse in those positive-coefficient setups. But this is not true. In Table 4, the “flat” method performs consistently better than the LADIES baseline on Reddit (where LADIES show positive coefficients across all layers). Could you explain this observation? This may be related to the mismatch problem in a). d) The debiasing algorithm introduced in Section 5 shows promising performance in Table 4. Would fixing the bias problem also alleviates the probability issue? What about the coefficients for “LADIES/FastGCN + debiasing”? 2) The identified problems are assumed to be general to layer-wise sampling based GCN, but the authors only performed a full examination with LADIES. Since the authors also focused on FastGCN in this paper, it makes a lot of sense to provide full results for FastGCN as well, which could offer great insights to the readers on whether the findings in this paper are generalizable. Broader Impact Concerns: I didn't find serious ethical problems. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper aims to improve efficiency of graph neural networks. The paper focuses on improving sampling strategy for graph convolution networks and in this regards, the author begin by identifying shortcomings on current methods like LADIES or FastGCN, like bias caused by sampling without replacement. A new sampling distribution with flattening and a debiased sampling algorithm is proposed. Strong empirical improvements is achieved across several large-scale benchmarks when correcting LADIES approach, but unfortunately the proposed method seems to not work well with FastGCN. Nevertheless, the proposed method and technique are correct, experimental result demonstrate effectiveness in one way, and the approach will be of interest to the community, hence I propose to accept the paper with some minor modifications. 1. Add limitation about proposed method not working for FastGCN. It can't be just sparsity of FastGCN is the cause as adding the debiased sampling almost always hurts FastGCN's performance. 2. As pointed out by 31rs, it would be instructive for readers to write about why proposed method with flattening still works better than LADIES on datasets where proportionality assumption holds? Is it better hyper-parameter tuning? 3. Rename appendix D.3, it can't be titled author response. I will thank the authors for the patience and all the updates for more clarity and thorough evaluation. ==================================================
# Why Do Fine-Grained Labels In Pretraining Benefit Generalization? Anonymous authors Paper under double-blind review ## Abstract Recent literature shows that if a deep neural network is pretrained using fine-grained labeled data and then fine-tuned using coarse-labeled data for downstream tasks, its generalization performance is often better than pretraining using coarse-labeled data. While empirical evidence that support this finding is abundant, theoretical justification remains an open problem. This paper addresses the problem by introducing a "hierarchical multi-view" structure to confine the input data distribution. Under this data assumption, we prove that 1) coarse-grained pretraining only allows a neural network to learn the common features well, while 2) fine-grained pretraining helps the network learn the rare features in addition to the common ones, thus improving its accuracy on hard downstream test samples. ## 1 Introduction ![0_image_0.png](0_image_0.png) We consider the theory of label granularity in deep learning. By label granularity, we mean a hierarchy of training labels specifying how detailed each label subclass needs to be (See Figure 1). Figure 1: The goal of this paper is to provide a theoretical justification of why fine-grained labels in pretraining benefit generalization. Having access to different granularity of labels offers us the freedom of training a classifier using a different level of precision. For example, instead of differentiating between dogs and cats, we can train a classifier to differentiate a Poodle dog and a Persian cat. The latter classification task is undoubtedly harder. However, recent studies found that if one uses fine-grained labels to *pre-train* a backbone, the pre-trained backbone will help the downstream neural networks generalize better (Chen et al., 2018). Vision transformers, for example, are well-known to require pretraining on large datasets with thousands of classes for effective downstream generalization (Dosovitskiy et al., 2021; He et al., 2016; Krizhevsky et al., 2012). To convince readers who are less familiar with this particular training strategy, we conduct an experiment on ImageNet with details described in Appendix A.2 (we also include experiments on iNaturalist 2021 in Appendix A). Our experiment is limited in scale due to its high demand on the computing resources. ![1_image_0.png](1_image_0.png) Figure 2: ImageNet21k→ImageNet1k transfer using a ViT-B/16 model. [Blue]: pretrained on the WordNet hierarchy of ImageNet21k, *finetuned* on ImageNet1k. [Red]: baseline, trained and evaluated on ImageNet1k. Figure 2 shows an experiment of pre-training on ImageNet21k and fine-tuning the pre-trained network using ImageNet1k. The labels used in the ImageNet21k is based on WordNet Hierarchy. The downstream task is ImageNet1k classification. The x-axis of this plot indicates the number of pre-training classes whereas the y-axis shows the validation accuracy for the ImageNet1k classification task. It is evident from the plot that as we increase the number of classes (hence a finer label granularity in pre-training), the downstream classification task's performance is improved. ## 1.1 Goal Of This Paper The above experimental finding may sound familiar to practitioners who frequently train large models. In fact, experimental evidence on this subject is abundant (Mahajan et al., 2018; Singh et al., 2022; Yan et al., 2020; Shnarch et al., 2022; Juan et al., 2020; Yang et al., 2021; Chen et al., 2018; Ridnik et al., 2021; Son et al., 2023; Ngiam et al., 2018; Cui et al., 2018). However, the theoretical explanation remains an open problem. Our goal in this paper is to provide a *theoretical* justification. The core question we ask is: Theoretical Question Why does pretraining at a high label granularity benefit generalization? Certainly, this grand challenge can be impossible to answer in full because of the uncontrollable complexity of the practical situations. To say something concrete, we focus on a tractable (sub-)problem under a controlled setting: - *Simple* scheme: We pretrain a backbone on a classification task and then finetune it for a target problem; - Assume *negligible distribution shift* between the input distributions of the source and target datasets; - The label functions for both datasets *align* well in terms of the features which they consider discriminative; - The labels are error-free. ## 1.2 Main Results And Theoretical Contributions Our main result is based on analyzing a two-layer convolutional neural network with ReLU activation. We assume that the data distribution satisfies a certain *hierarchical multi-view* condition (to be discussed in Section 4.1). The optimization algorithm is stochastic gradient descent. Such problem settings are consistent with published works on this subject (Allen-Zhu & Li, 2023b; 2022; Shen et al., 2022b; Jelassi & Li, 2022). Our conclusions are as follows. Theoretical results 1. *Coarse-grained* pretraining *only* allows the neural network to learn the common features well. Therefore, when testing, the test error on easy samples is o(1) (i.e., small) whereas the error on hard samples is Ω(1) (i.e., large). 2. *Fine-grained* pretraining helps the network learn the rare features *in addition* to the common ones, thus improving its test error on *hard* samples. In particular, the test error rate on both easy and hard test samples are o(1) (i.e., small). To our knowledge, a precise characterization of the test error presented in this paper has never been reported in the literature. The key enablers of our theoretical finding are the concepts of *hierarchical multi-view* and representation-label correspondence. We summarize these two concepts below: 1. *Hierarchical multi-view*. To understand the label granularity problem, we argue that it is necessary for coarse and fine-grained classes to be distinguished by their corresponding input features. This is consistent with the *multi-view* data property pioneered by Allen-Zhu & Li (2023b). We call this a *hierarchical* multi-view structure. The hierarchical multi-view structure on the data makes us different from many other deep learning theory works that assume simple or no structure in the input data (Kawaguchi, 2016; Allen-Zhu & Li, 2023a; Ba et al., 2022; 2023; Damian et al., 2022; Kumar et al., 2023; Ju et al., 2021). 2. *Representation-label correspondence*. Representation learning aims to recognize features in the input data. As will be shown later in the paper, under the hierarchical multi-view data assumption, label complexity (i.e., how complex the labels are) during training influences the representation complexity (i.e., how many and what types of features are learnt), which further influences the model's generalization performance. Studying label granularity through understanding the neural network's feature-learning process is a departure from the literature which focuses on *feature selection* (Jacot et al., 2018; Ju et al., 2021; 2022; Pezeshki et al., 2021; Arora et al., 2019), i.e., selecting a subset of pre-determined features. ## 2 Related Work 2.1 Our Theoretical Setting Compared To The Literature The subject of label granularity is immensely related to how to make a deep neural network (DNN) generalize better. In the existing literature, this is mostly explained through the lens of implicit regularization and bias towards simpler solutions to prevent overfitting even when DNNs are highly overparameterized (Lyu et al., 2021; Kalimeris et al., 2019; Ji & Telgarsky, 2019; De Palma et al., 2019; Huh et al., 2017). An alternative approach is the concept of shortcut learning which argues that deep networks can learn overly simple solutions. As such, deep networks achieve high training and testing accuracy on in-distribution data but generalize poorly to challenging downstream tasks (Geirhos et al., 2020; Shah et al., 2020; Pezeshki et al., 2021). By examining these papers, we believe that Shah et al. (2020); Pezeshki et al. (2021) are the closest to ours because they demonstrate that DNNs perform shortcut learning and respond weakly to features that have a weak presence in the training data. However, our work departs from Shah et al. (2020); Pezeshki et al. (2021) in several key ways. 1. We focus on how the pretraining label space affects classification generalization, while Shah et al. (2020); Pezeshki et al. (2021) primarily focus on demonstrating that simplicity bias can be harmful to generalization. 2. The core theoretical tool used by Pezeshki et al. (2021) is the neural tangent kernel (NTK) model, which is unsuitable for analyzing the label granularity problem because the feature extractor of an NTK model barely changes after pretraining. 3. The theoretical setting in Shah et al. (2020) is limited because they use the hinge loss while we use a more standard exponential-tailed cross-entropy loss. 4. Our data distribution assumptions are more realistic, as they capture feature hierarchies in natural images, which has direct impact on the downstream generalization power of the pretrained model. ## 2.2 Our Analytic Tool Compared To Literature Our theoretical analysis is inspired by a recent line of work by Allen-Zhu & Li (2022; 2023b); Shen et al. (2022b). These papers analyze the feature learning dynamics of neural networks by tracking how the hidden neurons of shallow nonlinear neural networks evolve to solve dictionary-learning-like problems. We adopt a multi-view approach to the data distribution which was first proposed in Allen-Zhu & Li (2023b). However, the learning problems we analyze and the results we aim to show are fundamentally different. As such, we derive the gradient descent dynamics of the neural network from scratch. ## 2.3 Consistency With Existing Empirical Results We stress that our theoretical findings are consistent with the reported empirical results in the literature, especially those that aim to improve classification accuracy by manipulating the pre-training label space (Mahajan et al., 2018; Singh et al., 2022; Yan et al., 2020; Shnarch et al., 2022; Juan et al., 2020; Yang et al., 2021; Chen et al., 2018; Ridnik et al., 2021; Son et al., 2023; Ngiam et al., 2018; Cui et al., 2018). For example, Mahajan et al. (2018); Singh et al. (2022) use hashtags from Instagram as pretraining labels, Yan et al. (2020); Shnarch et al. (2022) apply clustering on the data first and then treat the cluster IDs as pretraining labels, Juan et al. (2020) use the queries from image search results, Yang et al. (2021) apply image transformations such as rotation to augment the label space, and Chen et al. (2018); Ridnik et al. (2021) include fine-grained manual hierarchies in their pretraining processes. Our results corroborate the utility of pretraining on fine-grained label space. On the empirical end, there is also work focusing on exploiting the hierarchical structures present in (humangenerated) label space to improve classification accuracy (Yan et al., 2015; Zhu & Bain, 2017; Goyal & Ghosh, 2020; Sun et al., 2017; Zelikman et al., 2022; Silla & Freitas, 2011; Shkodrani et al., 2021; Bilal et al., 2017; Goo et al., 2016). For example, Yan et al. (2015) adapt the network architecture to learn super-classes at each hierarchical level, Zhu & Bain (2017) add hierarchical losses in the hierarchical classification task, Goyal & Ghosh (2020) propose a hierarchical curriculum loss for curriculum learning. Our results do not directly validate these practices because we are more interested in understanding the influence of label granularity on model generalization. ## 3 Notations And Intuitions 3.1 Notations And Training Schemes For a DNN-based classifier, given input image X, we can write its (pre-logit) output for class c as $$\begin{array}{r l}{\underbrace{F_{c}(X)}_{\mathrm{pre-logit}}}&{{}={\Big\langle}\underbrace{a_{c}}_{\mathrm{linear}},\underbrace{h}_{\mathrm{backbone}}(\underbrace{\Theta}_{\mathrm{network}};X){\Big\rangle},}\\ {\mathrm{output~for~class~}}c&{{}\quad{\mathrm{classifier}}\quad{\mathrm{network}}\quad{\mathrm{parameter}}}\end{array}$$ E, (1) where ac is the linear classifier for class c, h(Θ; ·) is the network backbone with parameter Θ. Referring to Figure 1, label granularity concerns about two datasets: X src for the source (typically fine-grained) and X tgt for the target (typically coarse-grained). The corresponding labels are Y src and Y tgt, respectively. A dataset can be represented as D = (X , Y). For instance, the source training dataset is Dsrc train = (X src train, Y src train). The relevant training and testing datasets are denoted as Dsrc train, D tgt train, D tgt test. Finally, the granularity of a label set is denoted as G(Y), which represents the total number of classes. The two learning methodologies of interest are as follows. 1. Baseline: Train Fc(·) using D tgt train. Test Fc(·) using D tgt test. 2. Fine-to-coarse: Train Fc(·) using Dsrc train. This gives us the pretrained feature extractor h(Θsrc train; ·). Then finetune Fc(·) using D tgt train. Test the resulting Fc(·) using D tgt test. $\quad(1)^{\frac{1}{2}}$ . ![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png) Figure 3: A simplified symbolic representation of the cat versus dog problem. ## 3.2 Intuition: Why Higher Granularity Improves Generalization Consider the following toy example. There are two classes: cat and dog. Our goal is to build a binary classifier. Let's discuss how the two training schemes would work, with an illustration shown in Figure 3. 1. **Baseline**. The baseline method tries to identify the common features that can distinguish most of the cats from dogs, for instance, the shape of the animal's ear as shown in Figure 3. These features are often the most noticeable ones because they appear the most frequently. Of course, there are hard samples, e.g., a close-up shot of a cat's fur. They pose limited influence during training because they are relatively rare in natural images. 2. **Fine-to-coarse**. With fine-grained labels, each subclass has its own unique visual features that are only dominant within that subclass. However, fine-grained features are not as common in the dataset, hence making them more difficult to be noticed. Therefore, if we only present the coarse labels in the pre-training stage, the learner is allowed to take shortcuts by learning only the common features to achieve low training loss. One strategy to force the learner to learn the rarer features is to explicitly label the fine-grained classes. This means that within each fine-grained class, the fine-grained features become as easy to notice as the common features. As a result, even if common features are weakly present or missing in a hard test sample, the network can still be reasonably robust to distracting irrelevant patterns due to its ability to recognize (some of) the finer-grained features. ## 4 Problem Formulation Our first theoretical contribution is a new data model, the hierarchical multi-view model. This model consists of four definitions. Compared to existing theories studying feature learning of neural networks in the literature (Allen-Zhu & Li, 2023b; 2022; Shen et al., 2022b; Jelassi & Li, 2022), these four definitions are better formulated to the label granularity problem. For the sake of brevity, we present the core concepts of our data model here, and delay its full specification to Appendix B. Following data model specifications, we also discuss characteristics of the learner, a two-layer nonlinear convolutional neural network. ## 4.1 New Data Model: Hierarchical Multi-View We consider the setting where an input sample X ∈ R dP consists of P patches x1, x2*, ...,* xP with xp ∈ R d, where d is sufficiently large, and all our asymptotic statements are made with respect to d. For analytic tractability, we consider two levels of label hierarchy. The root of this hierarchy has two superclasses +1 and −1. The superclass +1 has k+ subclasses. We denote these k+ subclasses as (+1, c1), . . . ,(+1, ck+ ). We can do the same for the superclass −1 which has k− subclasses. Each subclass has two types of features: the common features and the fine-grained features. The two types of features are sufficiently different in the sense they have zero correlation and equal magnitude. This leads to the following definition. Definition 4.1 (Features). We define **features** as elements of a fixed orthonormal dictionary V = {vi} d i=1 ⊂ R d. The common and fine-grained features are - Common feature: v+ ∈ V and v− ∈ V - Fine-grained feature of subclass c: v+,c and v−,c ∈ V The usage of an orthonormal dictionary is again a choice of our model. We choose so because it is more tractable. With features defined, we can now specify patches in an input sample. Definition 4.2 (Input patches). We define three types of patches for y ∈ {+, −}: - (Common-feature patches) are defined as xp = αpvy + ζp, where αp ≈ 1, and ζp ∼ N (0, σ2 ζ Id). - (Subclass-feature patches) are defined as xp = αpvy,c + ζp, where αp ≈ 1, and ζp ∼ N (0, σ2 ζ Id). - (Noise patches) are defined as xp = ζp. Within an input sample X = (x1, x2*, ...,* xp), there are approximately s ∗common-feature patches and s ∗ subclass-feature patches, the rest are all noise patches. Moreover, within a sample, the choice of y has to be consistent across the feature patches. Lastly, the positions of the features patches are random. These definitions of the input patches are illustrated in Figure 4. ![5_image_0.png](5_image_0.png) Figure 4: Illustration of features and patches. Some comments: An **easy sample** is generated according to Definition 4.2. A **hard sample** is generated in the same way as easy samples, except the common-feature patches are replaced by noise patches, and we replace a small number of noise patches by "feature-noise" patches, which are of the form xp = α † pv− + ζp, where α † p ∈ o(1), and set one of the noise patches to ζ ∗ ∼ N (0, σ2 ζ ∗ Id) with σζ ∗ ≫ σζ ; these patches serve the role of "distracting patterns". Definition 4.3 (Source dataset's label mapping). We say a sample X belongs to the +1 superclass if any one of its common- or subclass-feature patches contains v+ or v+,c for any c ∈ [k+]. It belongs to the (+, c) subclass if any one of its subclass-feature patches contains v+,c. Definition 4.4 (Source training set). We assume the input samples of the source training set as X src train are generated as in Definition 4.2; the corresponding labels are generated following Definition 4.3. Overall, we denote the source training dataset Dsrc train. Relation to multi-view. Our data model is inspired by the multi-view concept first proposed in Allen-Zhu & Li (2023b), as we (1) use an orthonormal dictionary to define the features, (2) define an input consisting of many disjoint high-dimensional patches, and (3) assume the existence of *multiple* discriminative features per class. The reason why the original multi-view property is insufficient for our problem is that it does not consider any label hierarchy nor its link to the input structure. We resolve this issue by following our intuition that classes at different hierarchy levels should be distinguished by their corresponding features: this naturally defines a feature hierarchy, with an exact correspondence with the label hierarchy. Target dataset. To ensure that baseline and fine-grained training have no unfair advantage over each other, we post a set of new characterizations on the *target* dataset: 1. The input samples in the target dataset is generated according to Definition 4.2. 2. The true label function is identical across the source and target datasets. 3. Since we are studying the "fine-to-coarse" transfer direction, the target problem's label space is the *root* of the hierarchy, meaning that any element of Y tgt train or Y tgt test must belong to the label space {+1, −1}. Therefore, in our setting, only Y src and Y tgt can differ (in distribution) due to different choices in the label granularity level. In this idealized setting, we have essentially made baseline training and coarse-grained pretraining the *same* procedure. Therefore, an equally valid way to view our theory's setting is to consider D tgt train the same as Dsrc train except with coarse-grained labels. In other words, we pretrain the network on two versions of the source dataset D src,coarse train and D src,fine train , and then compare the two models on D tgt test (which has coarse-grained labels). ## 4.2 Characteristics About The Learner Our model about the learner is consistent with Allen-Zhu & Li (2023b; 2022); Shen et al. (2022b). The learner is a two-layer average-pooling convolutional ReLU network: $$F_{c}(\mathbf{X})=\sum_{r=1}^{m}a_{c,r}\sum_{p=1}^{P}\sigma(\langle\mathbf{w}_{c,r},\mathbf{x}_{p}\rangle+b_{c,r}),$$ $$\left(2\right)$$ where m is a low-degree polynomial in d and denotes the width of the network, σ(·) = max(0, ·) is the ReLU nonlinearity, and c denotes the class. We perform a *random initialization* of w (0) c,r ∼ N (0, σ2 0Id) with σ 2 0 = 1/poly(d); we set b (0) c,r = −Θ σ0 pln(d) and manually tune it, similar to Allen-Zhu & Li (2022). Crossentropy is the training loss for both baseline and transfer training. To simplify analysis and to focus solely on the learning of the feature extractor, we freeze ac,r = 1 during all baseline and transfer training phases, and we use the fine-grained model for binary classification as follows: Fb+(X) = maxc∈[k+] F+,c(X), Fb−(X) = maxc∈[k−] F−,c(X). See Appendix B.2 and the beginning of Appendix G for details of learner characteristics and training algorithm. ## 5 Theoretical Results And Proof Strategy Our second theoretical contribution lies in establishing a correspondence between the *complexity of the labels* and *complexity of the network's representations*. Under the assumption of the hierarchical multi-view data structure, the following are true: 1. If trained with coarse-grained labels (i.e. overly simple labels), the network only learns the common features well, so its representations of the data is overly simple; 2. In contrast, training with fine-grained labels helps the network learn the fine-grained features well in addition to the common ones, so its representation of the data is more complex. The difference in representation complexity leads to the difference in the network's downstream test accuracy. ## 5.1 Main Results Theorem 5.1 (Coarse-label training: baseline). *(Summary). Let the number of subclasses be lower-bounded:* ky ≥ polylog(d)*. With high probability, with proper choice of step size, there exists a time* T ∗ ∈ poly(d) *such* that for any T ∈ [T ∗, poly(d)]*, the training loss is upper bounded according to* $${\mathcal{L}}(F^{(T)})\leq o(1)$$ $$\left({\mathfrak{3}}\right)$$ $\frac{1}{2}$ becomes $\frac{1}{2}$ and $\frac{1}{2}$ is $\frac{1}{2}$. (T)) ≤ o(1) (3) $x_n\;for\;an,\;eq$ Moreover, for an **easy** test sample (Xeasy, y)*, the probability of making a classification mistake is small:* $$\mathbb{P}\left[F_{y}^{(T)}(\mathbf{X}_{e a s y})\leq F_{y^{\prime}}^{(T)}(\mathbf{X}_{e a s y})\right]\leq o(1),\quad f o r\,y^{\prime}\neq y.$$ ′̸= y. (4) However, for all t ∈ [0, poly(d)], given a **hard** test sample (Xhard, y)*, the probability of making a classification* mistake is large: $$\mathbb{P}\left[F_{y}^{(t)}(\mathbf{X}_{h a r d})\leq F_{y^{\prime}}^{(t)}(\mathbf{X}_{h a r d})\right]\geq\Omega(1),\quad f o r\,y^{\prime}\neq y.$$ ′̸= y. (5) $$\left({\boldsymbol{4}}\right)$$ $\left(5\right)^3$ This theorem essentially says that, with a mild lower bound on the number of fine-grained classes, if we only train on the *easy* samples with *coarse* labels, it is virtually impossible for the network to learn the fine-grained features even if we give it as much practically reachable amount of time and training samples as possible. Consequently, the network would perform poorly on the hard downstream test samples: if the sample is missing the *common* features, then the network can be easily misled by the noise present in the sample. To see the full setup and statement of this theorem, please see Appendix B and E. Its proof spans Appendix C to E. Theorem 5.2 (Fine-grained-label training). (Summary). Assume the same setting as in Theorem 5.1, except let the labels be fine-grained and ky ≤ d 0.4*(number of subclasses not pathologically large; see Section 6 for its* discussion). Within poly(d) *time, the probability of making a classification mistake is small:* $$\mathbb{P}\left[{\widehat{F}}_{y}^{(T)}(X)\leq{\widehat{F}}_{y^{\prime}}^{(T)}(X)\right]\leq o(1)\;\;f o r\;\;y^{\prime}\neq y,$$ $$({\mathfrak{f}}{\mathfrak{h}})$$ ′̸= y, (6) on the target binary problem on both **easy** and **hard** *test samples.* The full version of this result is presented in Appendix G.4, and its proof in Appendix G. After fine-grained pretraining, the network's feature extractor gains a strong response to the fine-grained features, therefore its accuracy on the downstream hard test samples increases significantly. Remark. One concern about the above theorems is that the neural networks are trained only on easy samples. As noted in Sections 1 and 3.2, *easy* samples should make up the *majority* of the training and testing samples. Pretraining at higher label granularities only improves network performance on *rare* samples. Our theoretical result presents the feature-learning bias of a neural network in an exaggerated fashion. Therefore, it is natural to start with the case of no hard training samples. In reality, even if a small portion of hard training samples is present, finite-sized training datasets can have many flaws that can cause the network to overfit severely before learning the fine-grained features, especially since rarer features are learnt more slowly and corrupted by greater amount of noise. We leave these deeper considerations for future theoretical work. ## 5.2 Proof Strategy: Representation-Label Correspondence The key idea of the proof is to establish a correspondence between the *complexity of the labels* and *complexity* of the network's representations. We show that when trained on coarse-grained labels (i.e. overly simple labels), the network only learns the common features well, so its representations of the data is overly simple. In contrast, training with fine-grained labels helps the network learn the fine-grained features well in addition to the common ones, so its representations are more complex. We first sketch the proof of **baseline training** which uses *coarse-grained* labels. Feature detector neurons. We show that, at initialization, with high probability, for every feature v ∈ V, there exists a small group of "lucky" neurons, denoted S ∗(0) y (v) (with y indicating the superclass), that only activate on v-dominated feature patches. We prove that if v is a feature of class y, then with high probability, the lucky neurons will remain activated on v-dominated patches throughout training, and dominate the feature extractor's response to the feature v. In particular, given any v-dominated patch xp = αpv + ζp, $$\sum_{\begin{subarray}{c}\mathbf{c}=1\\ \mathbf{c}=1\end{subarray}}^{m}\sigma\left(\left\langle\mathbf{w}_{y,r}^{(t)},\mathbf{x}_{p}\right\rangle+b_{y,r}^{(t)}\right)\approx\underbrace{\sum_{\begin{subarray}{c}\mathbf{v}\in\mathbb{S}^{(t)}(v)\\ \mathbf{w}\text{-dominated patch}\mathbf{w}_{p}\end{subarray}}^{m}\sigma\left(\left\langle\mathbf{w}_{p,r}^{(t)},\mathbf{x}_{p}\right\rangle+b_{\mathbf{v},r}^{(t)}\right)}_{\text{repropose to re-dominated patch}\mathbf{w}_{p}},\;\;t\in[0,\text{poly}(d)].\tag{7}$$ ![7_image_0.png](7_image_0.png) Therefore, we call neurons in S ∗(0) y (v) the *detector neurons* of feature v. The significance of equation 7 is that, we may now argue about the *network's representation of the input* data solely based on the *behavior of the feature detector neurons*. Impartial representation at initalization. At initialization, the feature extractor's response to common and fine-grained features are *very close*. The reason is that, S ∗(0) y (v) ≈ S ∗(0) y′ (v ′) for all superclasses *y, y*′ and features v, v ′, and they all have a similar magnitude of activation strength. Written explicitly, given any common-feature patch xcom = αvy + ζ and subclass-feature patch xsub = α ′vy,c + ζ ′(from the training or testing distribution), with high probability, $$\underbrace{\underbrace{\tau(\mathbf{s}_{y}^{(0)},\,\alpha\mathbf{v}_{y}+\mathbf{c})}_{\text{network representation of}}+b_{y,r}^{(0)}}_{\text{common-feature patch}\mathbf{w}_{\text{conn}}\text{at}t=0}\approx\underbrace{\sum_{\tau\in\{\mathbf{s}_{y}^{(0)}\}}^{m}\sigma\left(\left\langle\mathbf{w}_{y,r}^{(0)},\alpha^{\prime}w_{y,c}+\mathbf{c}^{\prime}\right\rangle+b_{y,r}^{(0)}\right)}_{\text{network representation of}}\tag{8}$$ So what happened during training which caused a strong *imbalance* of representation of the common and fine-grained features in the end? The answer below is the core of the proof. Overly simple labels =⇒ *overly simple representations*. The imbalance of growth is a result of the subclass-feature patches occurring with less frequency in the training set than the common-feature patches. Recall that the number of subclasses is ky: for any subclass (*y, c*), subclass-feature patches dominated by vy,c are about ky times *rarer* than the common feature patches. This has a direct impact on the growth speed of the common and fine-grained detector neurons: for any neuron rcom ∈ S ∗(0) y (vy) and any rfine ∈ S ∗(0) y (vy,c), ⟨∆w (t) y,rcom , vy⟩ ≈ Θ (ky) × ⟨∆w (t) y,rfine , vy,c⟩. With careful arguments on the influence of noise and bias on the activation values, we can show that, for t sufficiently large, the fine-grained detector neurons are about Θ(ky) times *weaker* in strength: Xm ≈ Θ (ky) ×Xm r∈S ∗(0) y (vy,c) σ Dw(t) y,r, α′vy,c + ζ ′E+ b (t) y,r r∈S ∗(0) y (vy) σ Dw(t) y,r, αvy + ζ E+ b (t) y,r | {z } network representation of common-feature patch xcom at large t | {z } network representation of subclass-feature patch xsub at large t $$\quad(9)$$ $$(10)$$ $$(11)$$ Furthermore, we prove that, due to the exponential tail of cross-entropy, by the end of training, $$\sum_{r\in S_{y}^{(0)}(\mathbf{w_{y}})}^{m}\sigma\left(\left\langle\Delta\mathbf{w}_{y,r}^{(t)},\alpha\mathbf{w_{y}}+\zeta\right\rangle+b_{y,r}^{(t)}\right)=\Theta\left(\log(d)\right)$$ which causes the representation of subclass-feature patches to be *vanishing* in strength: $$\sum_{r\in S_{p}^{(t)}(w_{p,c})}^{m}\sigma\left(\left\langle\left\langle w_{y,r}^{(t)},\alpha^{\prime}w_{p,c}+\zeta^{\prime}\right\rangle+b_{y,r}^{(t)}\right\rangle\leq O\left(\frac{\log(d)}{k_{y}}\right)<o(1),\;\;t\leq\mathrm{poly}(d).$$ In other words, the neural network almost cannot detect subclass features by the end of baseline training. Therefore, even though it can classify the easy test samples correctly since it learned the common features well, it simply cannot classify the hard ones, which requires the model to solely rely on subclass-feature patches for inference. Fine-grained training alleviates this issue. Complex labels =⇒ *complex representations*. The proof of fine-grained training proceeds in a very similar fashion as the case of coarse-grained training. The main difference lies in the gradient updates. During training, for any neuron rcom ∈ S ∗(0) (y,c) (vy) and any rfine ∈ S ∗(0) (y,c) (vy,c), $$\left\langle\Delta\mathbf{w}_{(y,c),r_{\rm norm}}^{(t)},\mathbf{v}_{y}\right\rangle\approx\left\langle\Delta\mathbf{w}_{(y,c),r_{\rm norm}}^{(t)},\mathbf{v}_{y,c}\right\rangle.\tag{12}$$ In other words, the common and fine-grained detector neurons for each subclass grow at similar speeds now, because the common- and subclass-feature patches occur with similar frequency in each subclass. Again with careful analysis of how the noise and bias influence the activation values, we arrive at $$\sum_{\begin{subarray}{c}\mathbf{v}\in\mathbb{S}_{\{y,c\}}^{(t)}\setminus\mathbf{e}_{w}\end{subarray}}^{m}\sigma\left(\left\langle\mathbf{w}_{(y,c),r}^{(t)},\alpha\mathbf{v}_{w}+\zeta\right\rangle+b_{(y,c),r}^{(t)}\right)\approx\underbrace{\sum_{\begin{subarray}{c}r\in\mathbb{S}_{\{y,c\}}^{(t)}\setminus\mathbf{e}_{w,r}\end{subarray}}^{m}\sigma\left(\left\langle\mathbf{w}_{(y,c),r}^{(t)},\alpha^{\prime}\mathbf{v}_{w,c}+\zeta^{\prime}\right\rangle+b_{(y,c),r}^{(t)}\right)}_{\text{network representation of}}$$ $$\geq\Omega(1)\geq\Omega(1)$$ r∈S $$(13)$$ Therefore, both the common and fine-grained features are learnt well. It follows that the model can correctly utilize the common- and subclass-feature patches in the input, so it can classify easy and hard test samples with high accuracy. ## 6 Discussion We complement our theory with experimental results in Appendix A. For the theoretical part, we summarize a few Q & A that might be of interest to the reader. Q: Are there other reasons why fine-grained labels benefit neural network generalization? A: Yes, it is possible, e.g., the optimization landscape induced by finer-grained labels could contain less saddle points, making it friendlier to SGD. We did not analyze this because our focus is primarily on the generalization instead of optimization aspect of the problem. Q: Does a higher label granularity always imply better generalization? A: No. There is an operating regime. Training a model with *pathologically* high label granularity is harmful. For example, if we assign a unique class to every sample in the dataset, the model will be forced to rely on the frivolous differences between each sample. We verify this intuition in Figure 6 in Appendix A.1.1 on the iNaturalist 2021 dataset. These extreme scenarios do not arise in common practice, so we do not focus on them in this paper. Q: The theoretical setting appears restrictive. A: Our theoretical setting is consistent with Cao et al. (2022); Allen-Zhu & Li (2023b; 2022); Shen et al. (2022b); Jelassi & Li (2022). With a limited number of available analytic tools in the literature, we believe these settings are necessary to keep things tractable. ## 7 Conclusion In this paper, we formally studied the influence of pretraining label granularity on the generalization of DNNs, and performed large-scale experiments to complement our theoretical results. Under the new data model, hierarchical multi-view, we theoretically showed that higher label complexity leads to higher representation complexity, through which we explained why pretraining with fine-grained labels is beneficial to generalization. We complement our theory with experiments on ImageNet and iNaturalist in Appendix A, demonstrating that in the controlled setting of this paper, fine-grained pretraining indeed benefits generalization. ## Broader Impact Statement This paper presents work whose goal is to advance the theory of deep learning. There are potential societal consequences of our work, none which we feel must be specifically highlighted here. ## References Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In *FOCS*, 2022. Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep (hierarchical) learning. In Gergely Neu and Lorenzo Rosasco (eds.), *Proceedings of Thirty Sixth Conference on* Learning Theory, volume 195 of *Proceedings of Machine Learning Research*, pp. 4598–4598. PMLR, 12–15 Jul 2023a. URL https://proceedings.mlr.press/v195/allen-zhu23a.html. Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. In *The Eleventh International Conference on Learning Representations*, 2023b. URL https://openreview.net/forum?id=Uuf2q9TfXGA. Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/ dbc4d84bfcfe2284ba11beffb853a8c4-Paper.pdf. Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, and Greg Yang. High-dimensional asymptotics of feature learning: How one gradient step improves the representation. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information* Processing Systems, volume 35, pp. 37932–37946. Curran Associates, Inc., 2022. Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, and Denny Wu. Learning in the presence of low-dimensional structure: A spiked random matrix perspective. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), *Advances in Neural Information Processing Systems*, volume 36, pp. 17420–17449. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/ paper/2023/file/38a1671ab0747b6ffe4d1c6ef117a3a9-Paper-Conference.pdf. Alsallakh Bilal, Amin Jourabloo, Mao Ye, Xiaoming Liu, and Liu Ren. Do convolutional neural networks learn class hierarchy? *IEEE transactions on visualization and computer graphics*, 2017. Yuan Cao, Zixiang Chen, Misha Belkin, and Quanquan Gu. Benign overfitting in two-layer convolutional neural networks. In *NeurIPS*, 2022. Zhuo Chen, Ruizhou Ding, Ting-Wu Chin, and Diana Marculescu. Understanding the impact of label granularity on cnn-based image classification. In *ICDMW*, 2018. Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie. Large scale fine-grained categorization and domain-specific transfer learning. In *CVPR*, 2018. Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In *CVPR*, 2019. Alexandru Damian, Jason Lee, and Mahdi Soltanolkotabi. Neural networks can learn representations with gradient descent. In Po-Ling Loh and Maxim Raginsky (eds.), Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of *Proceedings of Machine Learning Research*, pp. 5413–5452. PMLR, 02–05 Jul 2022. URL https://proceedings.mlr.press/v178/damian22a.html. Giacomo De Palma, Bobak Kiani, and Seth Lloyd. Random deep neural networks are biased towards simple functions. In *NeurIPS*, 2019. Mostafa Dehghani, Alexey Gritsenko, Anurag Arnab, Matthias Minderer, and Yi Tay. Scenic: A jax library for computer vision research and beyond. In *CVPR*, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. In *Nature Machine Intelligence*, 2020. Wonjoon Goo, Juyong Kim, Gunhee Kim, and Sung Ju Hwang. Taxonomy-regularized semantic deep convolutional neural networks. In *ECCV*, 2016. Palash Goyal and Shalini Ghosh. Hierarchical class-based curriculum loss. *arXiv preprint arXiv:2006.03629*, 2020. Priya Goyal, Piotr Dollar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv:1706.02677, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, and Phillip Isola. The low-rank simplicity bias in deep networks. *arXiv:2103.10427*, 2017. Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *NeurIPS*, 2018. Samy Jelassi and Yuanzhi Li. Towards understanding how momentum improves generalization in deep learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 9965–10040. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr. press/v162/jelassi22a.html. Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. In *ICLR*, 2019. Ralph P. Boas Jr. and Jr. John W. Wrench. Partial sums of the harmonic series. The American Mathematical Monthly, 1971. Peizhong Ju, Xiaojun Lin, and Ness Shroff. On the generalization power of overfitted two-layer neural tangent kernel models. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference* on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 5137–5147. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/ju21a.html. Peizhong Ju, Xiaojun Lin, and Ness Shroff. On the generalization power of the overfitted three-layer neural tangent kernel model. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 26135–26146. Curran Associates, Inc., 2022. Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, and Sujith Ravi. Ultra fine-grained image semantic embedding. In *WSDM*, 2020. Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin Edelman, Tristan Yang, Boaz Barak, and Haofeng Zhang. Sgd on neural networks learns functions of increasing complexity. In *NeurIPS*, 2019. Stefani Karp, Ezra Winston, Yuanzhi Li, and Aarti Singh. Local signal adaptivity: Provable feature learning in neural networks beyond kernels. In *NeurIPS*, 2021. Kenji Kawaguchi. Deep learning without poor local minima. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper_files/paper/2016/file/ f2fc990265c712c49d51a18a32b39f0c-Paper.pdf. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In *NeurIPS*, 2012. Tanishq Kumar, Blake Bordelon, Samuel J. Gershman, and Cengiz Pehlevan. Grokking as the transition from lazy to rich training dynamics. *arXiv:2310.06110*, 2023. Béatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302 - 1338, 2000. doi: 10.1214/aos/1015957395. URL https://doi.org/10. 1214/aos/1015957395. Kuang-Huei Lee, Anurag Arnab, Sergio Guadarrama, John Canny, and Ian Fischer. Compressive visual representations. In *NeurIPS*, 2021. Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, and Sanjeev Arora. Gradient descent on two-layer nets: Margin maximization and simplicity bias. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *NeurIPS*. Curran Associates, Inc., 2021. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In *ECCV*, 2018. Jiquan Ngiam, Daiyi Peng, Vijay Vasudevan, Simon Kornblith, Quoc V Le, and Ruoming Pang. Domain adaptive transfer learning with specialist models. *arXiv preprint arXiv:1811.07056*, 2018. Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. In *NeurIPS*, 2021. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik. Imagenet-21k pretraining for the masses. In NeurIPS Track on Datasets and Benchmarks, 2021. Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. In *NeurIPS*, 2020. Ruoqi Shen, Sebastien Bubeck, and Suriya Gunasekar. Data augmentation as feature manipulation. In *ICML*, 2022a. Ruoqi Shen, Sebastien Bubeck, and Suriya Gunasekar. Data augmentation as feature manipulation. In *ICML*, 2022b. Sindi Shkodrani, Yu Wang, Marco Manfredi, and Nóra Baka. United we learn better: Harvesting learning improvements from class hierarchies across tasks. *arXiv preprint arXiv:2107.13627*, 2021. Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen, Ranit Aharonov, and Noam Slonim. Cluster & tune: Boost cold start performance in text classification. *arXiv preprint arXiv:2203.10581*, 2022. Carlos N Silla and Alex A Freitas. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 2011. Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. In *CVPR*, 2022. Donghyun Son, Byounggyu Lew, Kwanghee Choi, Yongsu Baek, Seungwoo Choi, Beomjun Shin, Sungjoo Ha, and Buru Chang. Reliable decision from multiple subtasks through threshold optimization: Content moderation in the wild. In *WSDM*, 2023. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In *ICCV*, 2017. Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, and Dhruv Mahajan. Clusterfit: Improving generalization of visual representations. In *CVPR*, 2020. Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, and Yizhou Yu. Hd-cnn: hierarchical deep convolutional neural networks for large scale visual recognition. In *ICCV*, 2015. Chuanguang Yang, Zhulin An, Linhang Cai, and Yongjun Xu. Hierarchical self-supervised augmented knowledge distillation. *arXiv preprint arXiv:2107.13715*, 2021. Eric Zelikman, Jesse Mu, Noah D Goodman, and Yuhuai Tony Wu. Star: Self-taught reasoner bootstrapping reasoning with reasoning. In *NeurIPS*, 2022. Xinqi Zhu and Michael Bain. B-cnn: branch convolutional neural network for hierarchical classification. arXiv preprint arXiv:1709.09890, 2017. # Appendix ## A Additional Experimental Results In this section, we present the full details of our experiments and relevant ablation studies. All of our experiments were performed using tools in the Scenic library Dehghani et al. (2022). ## A.1 In-Dataset Transfer Results To clarify, in this transfer setting, we are essentially transferring *within* a dataset. More specifically, we set X src = X tgt and only the label spaces Y src and Y tgt may differ (in distribution). The baseline in this setting is clear: train on D tgt train and test on D tgt test. In contrast, after pretraining the backbone network h(Θ; ·) on Y src, we finetune or linear probe it on D tgt train using the backbone and then test on D tgt test. ## A.1.1 Inaturalist 2021 iNaturalist 2021 is well-suited for our analysis because it has a high-quality, manually defined label hierarchy that is based on the biological traits of the creatures in the images. Additionally, the large sample size of this dataset reduces the likelihood of sample-starved pretraining on reasonably fine-grained hierarchy levels. We use the mini training dataset with size 500,000 instead of the full training dataset to show a greater gap between the results of different hierarchies and speed up training. We use the architectures ResNet 34 and 50 He et al. (2016). Training details. Our pretraining pipeline on iNaturalist is essentially the same as the standard large-batchsize ImageNet-type training for ResNets He et al. (2016); Goyal et al. (2017). The following pipeline applies to model pretraining on any hierarchy. - Optimization: SGD with 0.9 momentum coefficient, 0.00005 weight decay, 4096 batch size, 90 epochs total training length. We perform 7 epochs of linear warmup in the beginning of training until the learning rate reaches 0.1 × 4096/256 = 1.6, and then apply the cosine annealing schedule. Each training instance is run on 16 TPU v4 chips, taking around 2 hours per run. - Data augmentation: subtracting mean and dividing by standard deviation, image (original or its horizontal flip) resized such that its shorter side is 256 pixels, then a 224 × 224 random crop is taken. For finetuning, we keep everything in the pipeline the same except setting the batch size to 4096/4 = 1024 and base learning rate 1.6/4 = 0.4. We found that finetuning at higher batch size and learning rate resulted in training instabilities and severely affected the final finetuned model's validation accuracy, while finetuning at lower batch size and learning rate than the chosen one resulted in lower validation accuracy at the end even though their training dynamics was stabler. For the baseline accuracy, as mentioned in the main text, to ensure fairness of comparison, in addition to only training the network on the target 11-superclass problem for 90 epochs (using the same pretraining pipeline), we also perform "retraining": follow the exact training process of the models trained on the various hierarchies, but use D tgt train as the training dataset in both the pretrianing and finetuning stage. We observed consistent increase in the final validation accuracy of the model, so we report this as the baseline accuracy. Without retraining (so naive one-pass 90-epoch training on 11 superclasses), the average accuracy with standard deviation is 94.13, 0.025. Clustering. To obtain the cluster-ID-based labels, we perform the following procedure. 1. For every sample Xn in the mini training dataset of iNaturalist 2021, obtain its ViT-L/14 CLIP embedding En. 2. Per-superclass kMeans clustering. Let C be the predefined number of clusters per class. | Manual Hierarchy | G(Y src) | 11 | 13 | 51 | 273 | 1103 | 4884 | 6485 | |--------------------|------------------|------------|------------|------------|------------|------------|------------|------------| | | Validation error | 5.25±0.051 | 5.40±0.075 | 5.10±0.038 | 4.83±0.041 | 4.79±0.045 | 4.82±0.056 | 4.84±0.033 | | Random class ID | G(Y src) | 22 | 88 | 352 | 1,408 | 5,632 | 11,264 | 500,000 | | | Validation error | 6.61±0.215 | 6.30±0.070 | 6.12±0.77 | 6.10±0.053 | 6.12±0.042 | 6.10±0.057 | 6.54±0.758 | | | src) | 22 | 88 | 352 | 1408 | 2816 | 5632 | 22528 | | CLIP+kMeans | G(Y | | | | | | | | | per superclass | Validation error | 5.14±0.049 | 5.16±0.033 | 5.17±0.027 | 5.24±0.029 | 5.30±0.029 | 5.31±0.077 | 5.37±0.032 | | C+k per supclass | G(Y src) | 88 | 218 | 320 | 608 | 1040 | 1984 | | | Class rebalanced | Validation error | 5.18±0.054 | 5.17±0.038 | 5.23±0.052 | 5.28±0.045 | 5.26±0.035 | 5.21±0.040 | | | CLIP+kMeans | G(Y src) | 22 | 44 | 88 | 352 | 1408 | 2816 | 5632 | | whole dataset | Validation error | 5.52±0.015 | 5.42±0.047 | 5.45±0.049 | 5.46±0.019 | 5.60±0.029 | 5.50±0.029 | 5.47±0.029 | Table 1: **In-dataset transfer, iNaturalist 2021**. ResNet34 average finetuning validation error and standard deviation on 11 superclasses in iNaturalist 2021, pretrained on various label hierarchies with different label granularity. Baseline (11-superclass) and best performance are highlighted. (a) For every superclass k, for the set of embedding {(En, yn = k)} belonging to that superclass, perform kMeans clustering with cluster size set to C. (b) Given a sample with superclass ID k ∈ {1, 2*, ...,* 11} and cluster ID c ∈ {1, 2*, ..., C*}, define its fine-grained ID as C × k + c. 3. Whole-dataset kMeans clustering. Let C be the predefined number of clusters on the whole dataset. (a) Perform kMeans on the embedding of all the samples in the dataset, with the number of clusters set to C. Set the fine-grained class ID of a sample to its cluster ID. Some might have the concern that having the same number of kMeans clusters per superclass could cause certain classes to have too few samples, which could be a reason for why the cluster ID hierarchies perform worse than the manual hierarchies. Indeed, the number of samples per superclass on iNaturalist is different, so in addition to the above "uniform-number-of-cluster-per-superclass" hierarchy, we add an extra label hierarchy by performing the following procedure to balance the sample size of each cluster: 1. Perform kMeans for each superclass with number of clusters set to 2, 8, 32, 64, 128, 256, 512, 1024 and save the corresponding image-ID-to-cluster-ID dictionaries (so we are basically reusing the clustering results of the CLIP+kMeans per superclass experiment) 2. For each superclass, find the image-ID-to-cluster-ID dictionary with the highest granularity while still keeping the minimum number of samples for each cluster > predefined threshold (e.g. 1000 samples per subclass) 3. Now we have nonuniform granularity for each superclass while ensuring that the sample count per cluster is above some predefined threshold. This simple procedure somewhat improves the balance of sample count per cluster, for example, Figure 5 shows the sample count per cluster for the cases of total number of clusters = 608 and 1984. Unfortunately, we do not observe any meaningful improvement on the model's validation accuracy trained on this more refined hierarchy. Experimental procedures. All the validation accuracies we report on ResNet34 are the averaged results of experiments performed on at least 6 random seeds: 2 random seeds for backbone pretraining and 3 random seeds for finetuning. We report the average accuracies with their standard deviation on various hierarchies in Table 1. An additional experiment we performed with ResNet34 is a small grid search over what checkpoint of a pretrained backbone we should use for finetuning on the 11-superclass method; we tried the 50-, 70- and 90-epoch checkpoints of the backbone on the manual hierarchies. We report these results in Table 2. As we can see, 90-epoch checkpoints performs almost equally well as the 70-epoch checkpoints and better than the 50-epoch ones by a nontrivial margin. With this observation, we chose to use the end-of-pretraining 90-epoch checkpoints in all our other experiments without further ablation studies on those hierarchies. Table 2: **In-dataset transfer, iNaturalist 2021**. ResNet34 average finetuned validation error and standard deviation on 11 superclasses in iNaturalist 2021, pretrained on the manual hierarchies, with different backbone checkpoints. | 90-Epoch ckpt | G(Y src) | 13 | 51 | 273 | 1103 | 4884 | 6485 | |------------------|------------|------------|------------|------------|------------|------------|--------| | Validation error | 5.40±0.075 | 5.10±0.038 | 4.83±0.041 | 4.79±0.045 | 4.82±0.056 | 4.84±0.033 | | | src) | 13 | 51 | 273 | 1103 | 4884 | 6485 | | | 70-Epoch ckpt | G(Y | | | | | | | | Validation error | 5.43±0.055 | 5.08±0.029 | 4.86±0.037 | 4.82±0.034 | 4.83±0.064 | 4.85±0.018 | | | 50-Epoch ckpt | G(Y src) | 13 | 51 | 273 | 1103 | 4884 | 6485 | | Validation error | 5.53±0.036 | 5.2±0.031 | 4.90±0.038 | 4.9±0.042 | 4.91±0.020 | 4.95±0.026 | | | Manual Hierarchy | G(Y src) | 11 | 13 | 51 | 273 | 1103 | 4884 | 6485 | |--------------------|------------|------------|------------|------------|------------|------------|------------|---------| | Validation error | 4.43±0.029 | 4.44±0.063 | 4.36±0.062 | 4.22±0.021 | 4.20±0.035 | 4.23±0.054 | 4.33±0.037 | | | Random class ID | G(Y src) | 22 | 88 | 352 | 1,408 | 5,632 | 11,264 | 500,000 | | Validation error | 5.36±0.111 | 5.31±0.079 | 5.24±0.093 | 5.38±0.052 | 5.37±0.033 | 5.40±0.033 | 5.13±0.072 | | Table 3: **In-dataset transfer, iNaturalist 2021**. ResNet50 finetuned average validation error and standard deviation on 11 superclasses in iNaturalist 2021, pretrained on label hierarchies with different label granularity. ![16_image_0.png](16_image_0.png) Figure 5: **In-dataset transfer, iNaturalist 2021**. Number of samples per cluster in the case of 608 and 1984 total clusters, after applying the sample size rebalancing procedure described in subsection A.1.1. Observe that the sample sizes are reasonably balanced across almost all the subclasses. Our ResNet50 results are not as extensive as those on ResNet34. We present the average accuracies and standard deviations in Table 3. Experimental results and interpretations. Table 1 and Figure 6 show the validation errors of the resulting ResNet34's on the 11-superclass problem. We make the following observations. (Random class ID). Random ID pretraining performs the worst of all the alternatives. The label function of this type does not generate a *meaningful hierarchy* because it has no consistency in the features it considers discriminative when decomposing the superclasses. This is in stark contrast to the manual hierarchies, which decompose the superclasses based on the finer biological traits of the creatures in the image. (kMeans). For fine-grained pretraining to be effective, the features that the pretraining label function considers discriminative must *align* well with those valued by the label function of the 11-superclass hierarchy. To see this point, observe that for models trained on cluster IDs obtained by performing kMeans on the CLIP embedding samples in each superclass *separately* (green curve in Figure 6), their validation errors are much lower than those trained on cluster IDs obtained by performing kMeans on the whole dataset (purple curve in Figure 6). As expected, the manually defined fine-grained label functions align best with that of the 11 superclasses, and the results corroborate this view. (Manual Hierarchy). Even with high-quality (human) labels, the granularity of pretraining labels should not be too large or too small. As shown by the blue curve in Figure 6, models trained on the manual hierarchies ![17_image_0.png](17_image_0.png) Figure 6: **In-dataset transfer**. ResNet34 validation error (with standard deviation) of finetuning on 11 superclasses of iNaturalist 2021, pretrained on various label hierarchies. The manual hierarchy outperforms the baseline and every other hierarchy, and exhibits a U-shaped curve. Table 4: **In-dataset transfer**. ViT-B/16 validation error on the binary problem "is this object a *living* thing?" of ImageNet21k. Pretrained on various hierarchy levels of ImageNet21k, finetuned on the binary problem. Observe that the maximal improvement appears at the leaf labels, and as G(Y src) approaches 2, the percentage improvement approaches 0. | src) | Validation error | | |-----------------|--------------------|------| | Hierarchy level | G(Y | | | Baseline | 2 | 7.90 | | 0 (leaf) | 21843 | 6.56 | | 1 | 5995 | 6.76 | | 2 | 2281 | 6.70 | | 4 | 519 | 6.97 | | 6 | 160 | 7.31 | | 9 | 38 | 7.55 | outperform all other alternative as long as the pretraining label granularity is beyond the order of 102. However, the error exhibits a U shape, meaning that as the label granularity becomes too large, the error starts to rise. This is intuitive. If the pretraining granularity is too close to the target one, we should not expect improvement. On the other extreme, if we assign a unique label to *every* sample in the training data, it is highly likely that the only *differences* a model can find between each class would be frivolous details of the images, which would not be considered discriminative by the label function of the target coarse-label problem. In this case, the pretraining stage is almost meaningless and can be misleading, as evidenced by the very high label-per-sample error (red star in Figure 6). ## A.1.2 Imagenet21K The ImageNet21k dataset we experiment on contains a total of 12,743,321 training samples and 102,400 validation samples, with 21843 leaf labels. A small portion of samples have multiple labels. Caution: due to the high demand on computational resources of training ViT models on ImageNet21k, all of our experiments that require (pre-)training or finetuning/linear probing on this dataset were performed with one random seed. Hierarchy generation. To define fine-grained labels, we start by defining the leaf labels of the dataset to be Hierarchy level 0. For every image, we trace from the leaf synset to the root synset relying on the WordNet hierarchy, and set the k-th synset (or the root synset, whichever is higher in level) as the level-k label of this image; this procedure also applies to the multi-label samples. This is the way we generate the manual hierarchies shown in the main text. Due to the lack of a predefined coarse-label problem, we manually define our target problem to be a binary one: given an image, if the synset "Living Thing" is present on the path tracing from the leaf label of the image to the root, assign label 1 to this image; otherwise, assign 0. This problem almost evenly splits the training and validation sets of ImageNet21k: 5,448,549:7,294,772 for training, 43,745:58,655 for validation. Network choice and pretraining pipeline. We experiment with the ViT-B/16 model Dosovitskiy et al. (2021). The pretraining pipeline of this model follows the one in Dosovitskiy et al. (2021) exactly: we train the model for 90 epochs using the Adam optimizer, with β1 = 0.9, β2 = 0.999, weight decay coefficient equal to 0.03 and a batch size of 4096; we let the dropout rate be 0.1; the output dense layer's bias is initialized to −10.0 to prevent huge loss value coming from the off-diagonal classes near the beginning of training Cui et al. (2019); for learning rate, we perform linear warmup for 10,000 steps until the learning rate reaches 10−3, then it is linearly decayed to 10−5. The data augmentations are the common ones in ImageNet-type training Dosovitskiy et al. (2021); He et al. (2016): random cropping and horizontal flipping. Note that we use the sigmoid cross-entropy for training since the dataset has multi-label samples. Each training instance (90 epochs) is run on 64 TPU v4 chips, taking approximately 1.5 to 2 days. Evaluation on the binary problem. After the 90-epoch pretraining on the manual hierarchies, we evaluate the model on the binary problem. We report the best accuracies on each hierarchy level in Table 4. To get a sense of how the relevant hyperparameters influence final accuracy of the model, we try out the following finetuning/linear probing strategies on the backbone trained on the *leaf labels* and the target *binary problem* of the dataset, and report the results in Table 5 (similar to our experiments on iNaturalist, we include the backbone trained on the binary problem in these ablation studies to ensure that our comparisons against the baseline are fair) : 1. 90-epochs finetuning in the same fashion as the pretraining stage, but with a small grid search over (batch size, base learning rate) ={(4096, 0.001),(4096/4 = 1024, 0.001/4 = 0.00025), (4096/8 = 512, 0.001/8 = 0.000125)}. 2. Linear probing with 20 epochs training length, using exactly the same training pipeline as in pretraining. We ran a small grid search over (batch size, base learning rate) = {(4096, 0.001),(4096/8 = 512, 0.001/8 = 0.000125)}. 3. 10-epochs finetuning, no linear warmup, 3 epochs of constant learning rate in the beginning followed by 7 epochs of linear decay, with a small grid search over (batch size, base learning rate) = {(4096, 0.001),(4096/8 = 512, 0.001/8 = 0.000125)}. Table 5 helps us decide the best accuracies to report. First, as expected the linear probing results are much worse than the finetuning ones. Second, the "retraining" accuracy of 92.102 is the best baseline we can report (the same thing happened in the iNaturalist case) - if we only train the model for 90 epochs (the naive one-pass training) on the binary problem, then the model's final validation accuracy is 91.746%, which is lower than 92.102% by a nontrivial margin. In contrast, the short 10-epoch finetuning strategy | Eval strategy | 90-epoch finetune | Linear probe | 10-epoch finetune | | | | | | |------------------|-----------------------|----------------|---------------------|---------------|--------------|----------------|-------------|---------------| | Leaf-pretrained | (Batch size, base lr) | (4096,1e-3) | (1024,2.5e-4) | (512,1.25e-4) | (4096, 1e-3) | (512, 1.25e-4) | (4096,1e-3) | (512,1.25e-4) | | Validation error | 92.782 | 93.177 | 93.295 | 87.497 | 87.493 | 92.294 | 93.439 | | | Baseline | (Batch size, base lr) | (4096,1e-3) | (1024,2.5e-4) | (512,1.25e-4) | (4096, 1e-3) | (512, 1.25e-4) | (4096,1e-3) | (512,1.25e-4) | | Validation error | 92.102 | 91.971 | 91.939 | 91.703 | 91.719 | 92.002 | 91.856 | | Table 5: **In-dataset transfer, ImageNet21k**. ViT-B/16 validation accuracy on the binary problem "Is the object a Living Thing" on ImageNet21k. Ablation study on the exact finetuning/linear probing strategy. | ResNet50 CLIP+kMeans | G(Y src) | 2000 | 4000 | 8000 | |------------------------|------------------|------------|-------------|-------------| | per-class | Validation error | 23.4±0.13 | 23.48±0.098 | 23.49±0.204 | | ViT-L/14 CLIP+kMeans | G(Y src) | 2000 | 4000 | 8000 | | per-class | Validation error | 23.4±0.127 | 23.47±0.074 | 23.78±0.048 | | src) | 2000 | 4000 | 8000 | | | Random ID | G(Y | | | | | per-class | Validation error | 23.4±0.068 | 23.4±0.070 | 23.65±0.071 | Table 6: **In-dataset transfer, ImageNet1k**. ResNet50 finetuned average validation error and standard deviation on the vanilla 1000 classes, pretrained on label hierarchies with different label granularity. works best for the backbone trained on the leaf labels, therefore, we also use this strategy to evaluate the backbones trained on all the other manual hierarchies. A peculiar observation we made was that, finetuning the leaf-labels-pretrained backbone for extended period of time on the binary problem caused it to overfit severely: for batch size and base learning rate in the set {(4096, 0.001),(1024, 0.00025),(512, 0.000125)}, throughout the 90 epochs of finetuning, although its training loss exhibits the normal behavior of staying mostly monotonically decreasing, its validation accuracy actually reached its peak during the linear warmup period! ## A.1.3 Imagenet1K Our ImageNet1k in-dataset transfer experiments are done in a very similar fashion to the iNaturalist ones. In particular, the pretraining and finetuning pipeline for ResNet50 is exactly the same as the one in the iNaturalist case, so we do not repeat it here. Due to a lack of more fine-grained manual label on this dataset, we generate fine-grained labels by performing kMeans on the ViT-L/14 CLIP embedding of the dataset separately for each class; the exact procedure is also identical to the iNaturalist case. The CLIP backbones we use here are the ResNet50 version and the ViT-L/14 version. We report the average accuracies and their standard deviation in Table 6. All results are obtained from at least one random seed during pretraining and 3 random seeds during finetuning. The best baseline we report is the one using retraining: if we adopt the pretrain-then-finetune procedure but with D tgt train (i.e. the vanilla 1000-class labels) set as the pretraining dataset, then we obtain an average validation error of 23.28% with standard deviation of 0.103, averaged over results of 3 random seeds. In comparison, if we only perform the naive one-pass 90-epoch training, we obtain average valiation error 24.04%, with standard deviation 0.057. From Table 6, we see that there is virtually no difference between the baseline and the best errors obtained by the models trained on the custom hierarchies: they are almost equally bad. Noting that the sample size of each class in ImageNet1k is only around 103, and the fact that ImageNet1k classification is a "hard problem" - it is a problem of high sample complexity - further decomposing the classes causes each fine-grained class to have too few samples, leading to the above negative results. This reflects the intuition that higher label granularity does not necessarily mean better model generalization, since the sample size per class might become too small. Table 7: **Cross-dataset transfer**. ViT-B/16 average *finetuning* validation accuracy on ImageNet1k along with standard deviation, pretrained on various hierarchy levels of ImageNet21k, and a small grid search over the base learning rate. | Pretrained on / Base lr | 3 × 10−3 | 3 × 10−2 | 6 × 10−2 | 3 × 10−1 | |---------------------------|-------------|-------------|-------------|-------------| | ImageNet21k, Hier. lv. 0 | 80.87±0.012 | 82.48±0.005 | 82.51±0.042 | 81.40±0.041 | | ImageNet21k, Hier. lv. 1 | 77.38±0.037 | 81.03±0.054 | 81.28±0.045 | 80.40±0.087 | | ImageNet21k, Hier. lv. 2 | 74.91±0.012 | 79.76±0.021 | 80.26±0.05 | 79.7±0.019 | | ImageNet21k, Hier. lv. 4 | 63.65±0.052 | 76.43±0.033 | 77.32±0.088 | 77.53±0.078 | | ImageNet21k, Hier. lv. 6 | 62.17±0.012 | 73.65±0.033 | 73.92±0.073 | 75.53±0.024 | | ImageNet21k, Hier. lv. 9 | 53.68±0.034 | 69.33±0.045 | 71.08±0.068 | 72.75±0.071 | | Pretrained on | Hier. lv | G(Y src) | Validation acc. | |-----------------|------------|-------------|-------------------| | IM21k | 0 (leaf) | 21843 | 81.45±0.021 | | 1 | 5995 | 78.33±0.018 | | | 2 | 2281 | 75.66±0.005 | | | 4 | 519 | 68.95±0.051 | | | 6 | 160 | 63.65±0.035 | | | 9 | 38 | 57.35±0.016 | | Table 8: **Cross-dataset transfer**. ViT-B/16 average *linear-probing* validation accuracy on ImageNet1k along with standard deviation, pretrained on various hierarchy levels of ImageNet21k. ## A.2 Cross-Dataset Transfer, Imagenet21K→**Imagenet1K** In this subsection, we report the average validation accuracy and standard deviation of the cross-dataset transfer experiment from ImageNet21k to ImageNet1k, as discussed in Figure 2 and Section 1 in the main text. Network choice. We use the same architecture ViT-B/16 as the one in the in-dataset ImageNet21k transfer experiment and follow the same training procedure, which we repeat here for the reader's convenience. The pretraining pipeline of this model follows the one in Dosovitskiy et al. (2021): we train the model for 90 epochs using the Adam optimizer, with β1 = 0.9, β2 = 0.999, weight decay coefficient equal to 0.03 and a batch size of 4096; we let the dropout rate be 0.1; the output dense layer's bias is initialized to −10.0 to prevent huge loss value coming from the off-diagonal classes near the beginning of training Cui et al. (2019); for learning rate, we perform linear warmup for 10,000 steps until the learning rate reaches 10−3, then it is linearly decayed to 10−5. The data augmentations are the common ones in ImageNet-type training Dosovitskiy et al. (2021); He et al. (2016): random cropping and horizontal flipping. Note that we use the sigmoid cross-entropy for training since the dataset has multi-label samples. Additionally, each training instance (90 epochs) is run on 64 TPU v4 chips, taking approximately 1.5 to 2 days. Finetuning. For finetuning on ImageNet1k, our procedure is very similar to the one in the original ViT paper Dosovitskiy et al. (2021), described in its Appendix B.1.1. We optimize the network for 8 epochs using SGD with momentum factor set to 0.9, zero weight decay, and batch size of 512. The dropout rate, unlike in pretraining, is set to 0. Gradient clipping at 1.0 is applied. Unlike Dosovitskiy et al. (2021), we still finetune at the resolution of 224×224. For learning rate, we apply linear warmup for 500 epochs until it reaches the base learning rate, then cosine annealing is applied; we perform a small grid search of base learning rate = {3 × 10−3, 3 × 10−2, 6 × 10−2, 3 × 10−1}. Every one of these grid search is repeated over 3 random seeds. We report the ImageNet1k validation accuracies and their standard deviations in Table 7. In the main text, we report the best accuracy for each hierarchy level. Linear probing. For linear probing, we use the following procedure. We optimize the linear classifier for 40 epochs (similar to Lee et al. (2021)) using SGD with Nesterov momentum factor set to 0.9, a small weight decay coefficient 10−6, and batch size 512. We start with a base learning rate of 0.9, and multiply it by 0.97 per 0.5 epoch. In terms of data augmentation, we adopt the standard ones like before: horizontal flipping and random cropping of size 224×224. We repeat this linear probing procedure over 3 random seeds given the pretrained backbone, and report the average validation accuracy and standard deviation in Table 8. Baseline. The baseline accuracy on ImageNet1k is directly taken from the ViT paper Dosovitskiy et al. (2021) (see Table 5 in it), in which the ViT-B/16 model is trained for 300 epochs on ImageNet1k. ## B Theory, Problem Setup B.1 Data Properties 1. Coarse classification: a binary task, +1 vs. −1. 2. An input sample X ∈ R d×P consists of P patches, each with dimension d. In this work, always assume d is sufficiently large1; 3. Assume there exists k+ subclasses of the superclass "+", and k− subclasses of the superclass "−". Let k+ = k−. 4. Assume orthonormal dictionary V = {v1, ..., vd} ⊂ R d, which forms an orthonormal basis of R d. Define v+ ∈ V to be the common feature of class "+". For each subclass (+, c) (where c ∈ [k+]), denote the subclass feature of it as v+,c ∈ V. Similar for the "−" class. 5. For an easy sample X belonging to the (+, c) class (for c ∈ [k+]), we sample its patches as follows: Definition: we define the function P : R d×P *× V →* [P] (so (X; v) 7→ I ⊆ [P]) to extract, from sample X, the indices of the patches on which the dictionary word v ∈ D dominates. (a) (Common-feature patches) With probability s ∗ P , a patch xp in X is a common-feature patch, on which xp = αpv+ + ζp for some (random) αp ∈-√1 − ι, √1 + ι; (b) (Subclass-feature patches) With probability s ∗ P −|P(X;v+)| , a patch with index p ∈ ([P] − P(X; v+)) is a subclass-feature patch, on which xp = αpv+,c + ζp, for random αp ∈-√1 − ι, √1 + ι; (c) (Noise patches) For the remaining P − |P(X; v+)*| − |P*(X; v+,c)| patches, xp = ζp. 6. A hard sample Xhard for class (+, c) is exactly the same as an easy one except: (a) Its common-feature patches are replaced by noise patches; (b) (Feature noise patches) With probability s † P −|P(X;v+,c)| , a patch with index p ∈ ([P] − P(X; v+,c)) is a feature-noise patch, on which xp = α † pv− + ζp for some (random) αp ∈ hι † lower, ι†*upper*i; (c) Set one of the noise patches to ζ ∗ ∼ N (0, σ2 ζ ∗ Id). 7. A sample X belongs to the "+" superclass if |P(X; v+)| > 0 or |P(X; v+,c)| > 0 for any c (excluding feature-noise patches). 8. The above sample definitions also apply to the "−" classes by switching the class signs. 9. A training batch of samples contains exactly N/2k+ samples for each (+, c) and (−, c) subclass. This also means that each training batch contains exactly N/2 samples belonging to the +1 superclass, and N/2 samples for the −1 superclass. 10. As discussed in the main text, for both coarse-grained (baseline) and fine-grained training, we only train on *easy* samples. ## B.2 Learner Assumptions And Training Algorithm Assume the learner is a two-layer convolutional ReLU network: $$F_{c}(\mathbf{X})=\sum_{r=1}^{m}a_{c,r}\sum_{p=1}^{P}\sigma(\langle\mathbf{w}_{c,r},\mathbf{x}_{p}\rangle+b_{c,r})$$ $$(14)^{\frac{1}{2}}$$ To simplify analysis and only focus on the learning of the feature extractor, we freeze ac,r = 1 throughout training. The nonlinear activation σ(·) = max(0, ·) is ReLU. Note that the convolution kernels have dimension d and stride d. 1Consider each d-dimensional patch of the input as an embedding of the input image generated by, for instance, an intermediate layer of a DNN. Remark. One difference between this architecture and a CNN used in practice is that we do not allow feature sharing across classes: for each class c, we are assigning a disjoint group of neurons wc,r to it. Separating neurons for each class is a somewhat common trick to lower the complexity of analysis in DNN theory literature Allen-Zhu & Li (2023b); Karp et al. (2021); Cao et al. (2022), as it reduces complex coupling between neurons *across* classes which is not the central focus of our study in this paper. Now we discuss the **training algorithm**. Initialization. Sample $\boldsymbol{w}_{c,r}^{(0)}\sim\mathcal{N}(\boldsymbol{0},\sigma_0^2\boldsymbol{I}_d)$, and set $b_{c,r}^{(0)}=-\sigma_0c_b\sqrt{\log(d)}$. Training. We adopt the standard cross-entropy training: $${\mathcal{L}}(F)=\sum_{n=1}^{N}L(F;\mathbf{X}_{n},y_{n})=-\sum_{n=1}^{N}\log\left({\frac{\exp(F y_{n}(\mathbf{X}_{n}))}{\sum_{c=1}^{C}\exp(F_{c}(\mathbf{X}_{n}))}}\right)$$ $$(15)$$ $$\quad(16)$$ This induces the stochastic gradient descent update for each hidden neuron (c ∈ [k], r ∈ [m]) per minibatch of N iid samples: $$\mathbf{w}_{c,r}^{(t+1)}=\mathbf{w}_{c,r}^{(t)}+\eta{\frac{1}{N P}}\sum_{n=1}^{N}\mathbf{w}_{n}^{(t)}$$ where As for the bias, $$\begin{array}{l}{{\left(\begin{array}{l}{{1\left\{y_{n}=c\right\}\left[1-\operatorname{logit}_{c}^{(t)}({\boldsymbol{X}}_{n}^{(t)})\right]\sum_{p\in\left[P\right]}\sigma^{\prime}(\langle{\boldsymbol{w}}_{c,r}^{(t)},{\boldsymbol{x}}_{n,p}^{(t)}\rangle+b_{c,r}^{(t)}){\boldsymbol{x}}_{n,p}^{(t)}+}}\\ {{}}\\ {{}}\\ {{1\left\{y_{n}\neq c\right\}\left[-\operatorname{logit}_{c}^{(t)}({\boldsymbol{X}}_{n}^{(t)})\right]\sum_{p\in\left[P\right]}\sigma^{\prime}(\langle{\boldsymbol{w}}_{c,r}^{(t)},{\boldsymbol{x}}_{n,p}^{(t)}\rangle+b_{c,r}^{(t)}){\boldsymbol{x}}_{n,p}^{(t)}}\end{array}\right)}}\end{array}$$ n,p! (16) $$\operatorname{logit}_{c}^{(t)}(X)={\frac{\exp(F_{c}(X))}{\sum_{y=1}^{C}\exp(F_{y}(X))}}$$ $$\left(17\right)$$ $$b_{c,r}^{(t+1)}=b_{c,r}^{(t)}-\frac{\|\mathbf{w}_{c,r}^{(t+1)}-\mathbf{w}_{c,r}^{(t)}\|_{2}}{\log^{5}(d)}$$ $$\left(18\right)$$ log5(d)(18) Remark. 1. The initialization strategy is similar to the one in Allen-Zhu & Li (2022). 2. Since the only difference between the training samples of coarse and fine-grained pretraining is the label space, the form of SGD update is identical. The only difference is the number of output nodes of the network: for coarse training, the output nodes are just F+ and F− (binary classification), while for fine-grained training, the output nodes are F+,1, F+,2, ..., F+,k+ , F−,1, F−,2*, ..., F*−,k− , a total of k+ + k− nodes. 3. The bias is for thresholding out the neuron's noisy activations that grow slower than 1/ log5(d) times the activations on the features which the neuron detects. This way, the bias does not really influence updates to the neuron's response to the (common and/or fine-grained) features which it activates strongly on, since 1−1 log5(d) ≈ 1, while it removes useless low-magnitude noisy activations. This in fact creates a (generalization) gap between the nonlinear model that we are studying and linear models. Due to our parameter choices (as discussed below), if the model has no nonlinearity (remove the ReLU activations), then even if the model can be written as F+(X) = Pp∈[P ] c+⟨v+, xp⟩ + c+,1⟨v+,1, xp⟩ + ... + c+,k+ ⟨v+,k+ , xp⟩ and F−(X) = Pp∈[P ] c−⟨v−, xp⟩ + c−,1⟨v−,1, xp⟩ + ... + c−,k− ⟨v−,k− , xp⟩ for any sequence of nonnegative real numbers c+, c−, {c+,j} k+ j=1, {c−,j} k− j=1 (which is the ideal situation since the true features are not corrupted by anything), it is impossible for the model to reach o(1) error on the input samples, because the number of noise patches will accumulate to a variance of (P − O(s ∗)) σζ ≫ O(s ∗), which significantly overwhelms the signal from the true features. On the other hand, each noise patch is sufficiently small in magnitude with high probability (their strength is o(1/ log5(d))), so a slightly negative bias, as described above, can threshold out these noise-based signals and prevent them from accumulating across the patches. An important difference between our bias update rule and the one in Allen-Zhu & Li (2022) is that, our rule depends on the ℓ2 norm of the neuron's update, while the one in Allen-Zhu & Li (2022) is hard-coded and not dependent on the neuron weights. The reason that we should not hard code the bias update rate is that, the neurons that are responsible for detecting the common features will grow more quickly in norm than those responsible for detecting the fine-grained features, therefore, to ensure fairness between the different groups of neurons (i.e. only using the bias to remove useless activations on the noise patches while creating minimal disturbance to the neurons' activation on feature-dominated patches), we rely on our neuron-dependent bias update rule. ## B.3 Parameter Choices The following are fixed choices of parameters for the sake of simplicity in our proofs. 1. Always assume d is sufficiently large. All of our asymptotic results are presented with respect to d; 2. poly(d) denotes the asymptotic order "polynomial in d"; 3. polylog(d) aymptotic order "polylogarithmic in d"; 4. polylog(d) ≤ k+ = k− ≤ d 0.4 and s ∗log5(d) ≤ k+ (i.e. k+ lower bounded by polynomial of log(d) of sufficiently high degree); 5. Small positive constant c0 ∈ (0, 0.1); 6. For coarse-grained (baseline) training, set cb = √4 + 2c0, and for fine-grained training, set cb = √2 + 2c0; 7. 0 ≤ ι ≤1 polylog(d) ; 8. ι † lower ≥1 log4(d) , and s †ι † upper ≤ O 1 $${\frac{1}{\log(d)}}\,\}\,;$$ 9. s † ≥ 1; 10. s ∗ ∈ polylog(d) with a degree > 15; 11. σζ =1 log10(d) √ d ; $$12.\ \ 0$$ $$\mathbf{v}(\mathbf{x})=\mathbf{x}$$ $\rho$. 12. σζ ∗ ∈ hω polylog(d) √ d , O 1 polylog(d) i; 13. P σζ ≥ ω(polylog(d)), and P ≤ poly(d); 14. σ0 ≤ O 1 d3s ∗ log(d) , and set η = Θ(σ0) for simplicity; 15. Batch of samples B (t) at every iteration has a deterministic size of N ∈ (Ω(polylog(d)k+d), poly(d)). 16. Note: we sometimes abuse the notation x = a ± b as an abbreviation for x ∈ [a − *b, a* + b]. Remark. We believe the range of parameter choice can be (asymptotically) wider than what is considered here, but for the purpose of illustrating the main messages of the paper, we do not consider a more general set of parameter choice necessary because having a wider range of it can significantly complicate and obscure the already lengthy proofs without adding to the core messages. ## B.4 Plan Of Presentation And Central Ideas We shall devote the majority of our effort to proving results for the coarse-label learning dynamics, starting with appendix section C and ending on E, and only devote section G to the fine-grained-label learning dynamics, since the analysis of fine-grained training overlaps significantly with the coarse-grained one. One technical difficulty in making the above ideas rigorous lies in the ReLU activation (with time-dependent bias): due to randomness in the gradient updates and the initialization, it is possible for individual hidden neurons that activate on v-dominated patches at one time iterate to no longer do so at the next iterate, and the opposite can happen. This can be problematic: for instance, it is possible that certain "lucky" neurons for v+ at one iterate become dead on v+-dominated patches at the next iterate, while some "unlucky" neurons that were dead on v+,c-dominated patches before start activating on these patches at the current iterate. In our proof, we show that this kind of situation does not happen too frequently nor do they contribute too much to the overall behavior of the neural network, by carefully keeping track of each hidden neuron's response to feature vectors and noise vectors throughout training. ## C Coarse-Grained Training, Initialization Geometry For coarse-grained training, assume m = Θ(d 2+2c0 ). Definition C.1. Define the following sets of interest of the hidden neurons: 1. ${\cal U}^{(0)}_{+,r}=\{\mathbf{v}\in{\cal V}:\langle\mathbf{w}^{(0)}_{+,r},\mathbf{v}\rangle\geq\sigma_{0}\sqrt{4+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{5}(d)}}\}$. 2. Given v ∈ V, S ∗(0) + (v) ⊆ + × [m] satisfies: 1. $\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}\rangle\geq\sigma_{0}\sqrt{4+2c_{0}}\sqrt{\log(d)+\frac{1}{\log^{2}(d)}}$ 2. $\forall\mathbf{v}^{\prime}\in\mathcal{V}$ s.t. $\mathbf{v}^{\prime}\perp\mathbf{v}$, $\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}^{\prime}\rangle<\sigma_{0}\sqrt{4+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{2}(d)}}$ 3. Given v ∈ D, S (0) + (v) ⊆ + × [m] satisfies: $${\mathrm{(a)~}}\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}\rangle\geq\sigma_{0}{\sqrt{4+2c_{0}}}{\sqrt{\log(d)-{\frac{1}{\log^{5}(d)}}}}$$ 4. For any (+, r) ∈ S ∗(0) +*,reg* ⊆ + × [m]: * $\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}\rangle\leq\sigma_{0}\sqrt{10}\sqrt{\log(d)}\;\forall\mathbf{v}\in\mathcal{V}$ * $\left|\mathcal{U}_{+,r}^{(0)}\right|\leq O(1)$ Proposition 1. *Assume* m = Θ(d 2+2c0 ), i.e. the number of neurons assigned to the + and − class are equal and set to Θ(d 2+2c0 ). At t = 0, for all v ∈ V*, the following properties are true with probability at least* 1 − d −2 *over the randomness* of the initialized kernels: $$1.\ |S_{+}^{*(0)}(\mathbf{v})|,|S_{+}^{(0)}(\mathbf{v})|=\Theta\left({\frac{1}{\sqrt{\log(d)}}}\right)d^{c_{0}}$$ 2. _In particular, for any_ $\mathbf{v},\mathbf{v}^{\prime}\in\mathcal{V}$_,_ $\left|\frac{|S^{x(0)}(\mathbf{v})|}{|S^{x(0)}_{+}(\mathbf{v}^{\prime})|}-1\right|,\left|\frac{|S^{x(0)}_{-}(\mathbf{v})|}{|S^{(0)}_{+}(\mathbf{v}^{\prime})|}-1\right|\leq O\left(\frac{1}{\log^{2}(d)}\right)$_._ $$3.\;\;S_{+,r e g}^{(0)}=[m]$$ Proof. Recall the tail bound of g ∼ N (0, 1) for every ϵ > 0: $$\frac{1}{2}\frac{1}{\sqrt{2\pi}}\frac{\epsilon}{\epsilon^{2}+1}e^{-\epsilon^{2}/2}\leq\mathbb{P}\left[g\geq\epsilon\right]\leq\frac{1}{2}\frac{1}{\sqrt{2\pi}}\frac{1}{\epsilon}e^{-\epsilon^{2}/2}\tag{19}$$ First note that for any r ∈ [m], {⟨w (0) +,r, v⟩}v∈V is a sequence of iid random variables with distribution N (0, σ2 0 ). The proof of the first point proceeds in two steps. 1. The following properties hold at t = 0: ing properties hold at $v=0$: $$p_{1}:=\mathbb{P}\left[\left\langle\mathbf{w}_{r,r}^{(0)},v\right\rangle\geq\sigma_{0}\sqrt{4+2\epsilon_{0}}\sqrt{\log(d)+\frac{1}{\log^{5}(d)}}\right]$$ $$\in\frac{1}{\sqrt{8\pi}}d^{-2-\epsilon_{0}}e^{(-2-\epsilon_{0})/\log^{5}(d)}$$ $$\quad\times\left[\frac{\sqrt{(4+2\epsilon_{0})\left(\log(d)+\frac{1}{\log^{5}(d)}\right)}}{\sqrt{(4+2\epsilon_{0})\left(\log(d)+\frac{1}{\log^{5}(d)}\right)}+1},\frac{1}{\sqrt{(4+2\epsilon_{0})\left(\log(d)+\frac{1}{\log^{5}(d)}\right)}}\right]\tag{20}$$ $$=\Theta\left(\frac{1}{\sqrt{\log(d)}}\right)d^{-2-\epsilon_{0}}$$ and $$p_{2}:=\mathbb{P}\left[\langle\mathbf{w}_{r,r}^{(0)},\mathbf{v}\rangle\geq\sigma_{0}\sqrt{4+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{5}(d)}}\right]$$ $$\in\frac{1}{\sqrt{8\pi}}d^{-2-c_{0}}e^{-(-2-c_{0})/\log^{5}(d)}$$ $$\times\left[\frac{\sqrt{(4+2c_{0})\left(\log(d)-\frac{1}{\log^{5}(d)}\right)}}{\left(4+2c_{0}\right)\left(\log(d)-\frac{1}{\log^{5}(d)}\right)+1},\frac{1}{\sqrt{(4+2c_{0})\left(\log(d)-\frac{1}{\log^{5}(d)}\right)}}\right]\tag{21}$$ $$=\Theta\left(\frac{1}{\sqrt{\log(d)}}\right)d^{-2-c_{0}}$$ Therefore, for any r ∈ [m], the random event described in S ∗(0) + holds with probability $$p_{1}\times\left(1-p_{2}\right)^{d-1}=\Theta\left(\frac{1}{\sqrt{\log(d)}}\right)d^{-2-c_{0}}\times\left(1-\Theta\left(\frac{1}{\sqrt{\log(d)}}\right)d^{-2-c_{0}}\right)^{d-1}\tag{22}$$ $$=\Theta\left(\frac{1}{\sqrt{\log(d)}}\right)d^{-2-c_{0}}.$$ $$(23)^{\frac{1}{2}}$$ $$(24)^{\frac{1}{2}}$$ $$(25)$$ The last equality holds because defining f(d) = d −2−c0 and d being sufficiently large, g(d) := |(d − 1) log(1 − f(d))| ≤ (d − 1) × (f(d) + O(f(d) 2)) ≤ O(d −1) (23) which means $$(1-f(d))^{d-1}=e^{-g(d)}\in(1-O(d^{-1}),1)$$ 2. Given v ∈ V, |S ∗(0) + (v)| is a binomial random variable, with each Bernoulli trial (ranging over r ∈ [m]) having success probability p1(1 − p2) d−1. Therefore, E h|S ∗(0) + (v)| i= mp1(1 − p2) d−1 = Θ $$\Phi\left({\frac{1}{\sqrt{\log(d)}}}\right)d^{c_{0}}.$$ Now recall the Chernoff bound of binomial random variables. Let {Xn} m n=1 be an iid sequence of Bernoulli random variable with success rate p, and Sn =Pm n=1 Xn. Then for any δ ∈ (0, 1), $$\mathbb{P}[S_{n}\geq(1+\delta)m p]\leq\exp\left(-{\frac{\delta^{2}m p}{3}}\right)$$ $$\mathbb{P}[S_{n}\leq(1-\delta)m p]\leq\exp\left(-{\frac{\delta^{2}m p}{2}}\right)$$ (25) It follows that, for each v ∈ V, |S ∗(0) + (v)| = Θ √ 1 log(d) d c0 with probability at least 1 − exp(−Ω(log−1/2(d))d c0 ). Taking union bound over all possible v ∈ D, the random event still holds with probability at least 1 − exp(−Ω(log−1/2(d))d c0 + O(log(d))) ≥ 1 − exp(−Ω(d 0.5c0 )) (in sufficiently high dimension). The proof for S (0) + (v) proceeds in virtually the same way, so we omit the calculations here. To show the second point, in particular |S ∗(0) + (v)| |S (0) + (v′)| − 1 ≤ O 1 log5(d) , we need to be a bit more careful in our bounds of the relevant sets. In particular, we need to directly use the CDF of gaussian random variables: P " ⟨w (0) +,r, v⟩ ≥ σ0 √4 + 2c0 s log(d) + 1 log5(d) # (1 ± O(d −1)) − P " ⟨w (0) +,r, v ′⟩ ≥ σ0 √4 + c0 s log(d) −1 log5(d) # ≤1 2 √2π Z √4+2c0plog(d)+ 1 log5(d) √4+2c0plog(d)− 1 log5(d) e −ϵ 2/2dϵ + O 1 d 3+c0plog(d) ! (26) ≤1 2 √2π d −2−c0 e (2+c0)/ log5(d)√4 + 2c0 s log(d) + 1 log5(d) − s log(d) −1 log5(d) ! + O 1 d 3+c0plog(d) ! =1 2 √2π d −2−c0 e (2+c0)/ log5(d)√4 + 2c0 2 log5(d) qlog(d) + 1 log5(d) + qlog(d) −1 log5(d) + O 1 d 3+c0plog(d) ! The expected difference in number between the two sets is just the above expression multiplied by m = Θ(d 2+2c0 ), and with probability at least 1 − exp(−Ω(d −c0/4)), the difference term satisfies 1 2 √2π (1 ± d −c0/2)Θ(d c0)e (2+c0)/ log5(d)√4 + 2c0 2 log5(d) qlog(d) + 1 log5(d) + qlog(d) −1 log5(d) ± O d 2+2c0 d 3+c0plog(d) ! (27) ∈Θ 1 plog(d) ! d c0 ×1 log5(d) By further noting from before that |S (0) + (v)| = Θ √ 1 log(d) d c0, |S ∗(0) + (v)| |S (0) + (v′)| − 1 ≤ O 1 log5(d) follows. The proof of |S ∗(0) + (v)| |S ∗(0) + (v′)| − 1 ≤ O 1 log5(d) follows a very similar argument, so we omit the calculations here. $$(28)$$ $$(29)$$ Now, as for the set S (0) reg, we know for any r ∈ [m] and vi ∈ D, $$\mathbb{P}\left[\langle\mathbf{w}_{+,\tau}^{(0)},\mathbf{v}_{i}\rangle\geq\sigma_{0}{\sqrt{10}}{\sqrt{\log(d)}}\right]\leq O\left({\frac{1}{\sqrt{\log(d)}}}\right)d^{-5}.$$ Taking the union bound over r and i yields $$\mathbb{P}\left[\exists r\ \mathrm{and}\ i\ \mathrm{s.t.}\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}_{i}\rangle\geq\sigma_{0}{\sqrt{10}}{\sqrt{\log(d)}}\right]\leq m d O\left({\frac{1}{\sqrt{\log(d)}}}\right)d^{-5}<d^{-2}.$$ Finally, to show U (0) +,r ≤ O(1) holds for every (+, r), we just need to note that for any arbitrary (+, r) neuron, the probability of U (0) +,r > 4 is no greater than $$p_{2}^{4}\binom{d}{4}\leq O\left(\frac{1}{\log^{2}d}\right)d^{-8-4c_{0}}\times d^{4}\leq O\left(\frac{1}{\log^{2}d}\right)d^{-4-4c_{0}}\tag{30}$$ $$\square$$ Taking union bound over all m ≤ Od 2+2c0neurons yields the desired result. ## D Coarse-Grained Sgd Phase I: (Almost) Constant Loss, Neurons Diversify Definition D.1. We define T0 to be the first time which there exists some sample n such that $$F_{c}^{(T_{0})}({\mathbf{X}}_{n}^{(T_{0})})\geq d^{-1}$$ $$(31)^{\frac{1}{2}}$$ −1(31) Without loss of generality assume c = +. Define phase I to be the time t ∈ [0, T0). ## D.1 Main Results Theorem D.1 (Phase 1 SGD update properties). *The following properties hold with probability at least* 1 − O *mNP k*+t poly(d) − O(e −Ω(log2(d))) for every t ∈ [0, T0). 1. (On-diagonal common-feature neuron growth) For every (+, r),(+, r′) ∈ S ∗(0) + (v+), $$\mathbf{w}_{+,r}^{(t)}-\mathbf{w}_{+,r}^{(0)}=\mathbf{w}_{+,r^{\prime}}^{(t)}-\mathbf{w}_{+,r^{\prime}}^{(0)}$$ $$(32)$$ $$(33)$$ $$(34)$$ Moreover, ∆w (t) +,r =η 1 2 ± ψ1 √1 ± ι 1 ± s ∗−1/3± O 1 log10(d) !s ∗ 2P v+ + ∆ζ (t) +,r (33) where ∆ζ (t) +,r ∼ N (0, σ (t)2 ∆ζ+,r I), σ (t) ∆ζ+,r = ησζ 12 ± ψ1 √1 ± s ∗−1/3 √s ∗ P √ 2N , and |ψ1| ≤ d −1. Furthermore, every (+, r) ∈ S ∗(0) + (v+) activates on v+-dominated patches at time t. 2. (On-diagonal finegrained-feature neuron growth) For every possible choice of c *and every* (+, r),(+, r′) ∈ S ∗(0) + (v+,c), $$\mathbf{w}_{+,r}^{(t)}-\mathbf{w}_{+,r}^{(0)}=\mathbf{w}_{+,r^{\prime}}^{(t)}-\mathbf{w}_{+,r^{\prime}}^{(0)}\tag{10}$$ Moreover, ∆w (t) +,r =η 1 2 ± ψ1 √1 ± ι 1 ± s ∗−1/3± O 1 log10(d) !s ∗ 2k+P v+,c + ∆ζ (t) +,r (35) where ζ (t) +,r ∼ N (0, σ (t)2 ∆ζ+,r I), and σ (t) ∆ζ+,r = ησζ 12 ± ψ1 √1 ± s ∗−1/3 √s ∗ P √2Nk+ . Furthermore, every (+, r) ∈ S ∗(0) + (v+,c) activates on v+-dominated patches at time t. 3. The above results also hold with the "+" and "−" signs flipped. Proof. The SGD update rule produces the following update: w (t+1) +,r =w (t) +,r + η 1 NP × (36) n=1 1{yn = +}[1 − logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p (37) X N + 1{yn = −}[−logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p!(38) In particular, equation 37 =X N n=1 1{yn = +} 1 2 ± ψ1 × (1{|P(X(t) n ; v+)| > 0} " X p∈P(X(t) n ;v+) σ ′(⟨w (t) +,r, α(t) n,pv+ + ζ (t) n,p⟩ + b (t) +,r) α (t) n,pv+ + ζ (t) n,p p /∈P(X(t) n ;v+) σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p# +X + 1{|P(X(t) n ; v+)| = 0} X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p) (39) = X N n=1 1{yn = +} 1 2 ± ψ1 × (1{|P(X(t) n ; v+)| > 0} " X p∈P(X(t) n ;v+) 1 n⟨w (t) +,r, α(t) n,pv+ + ζ (t) n,p⟩ ≥ b (t) +,ro α (t) n,pv+ + ζ (t) n,p p /∈P(X(t) n ;v+) 1 n⟨w (t) +,r, x (t) n,p⟩ ≥ b (t) +,rox (t) n,p# +X + 1{|P(X(t) n ; v+)| = 0} X p∈[P ] 1 n⟨w (t) +,r, x (t) n,p⟩ ≥ b (t) +,rox (t) n,p) The rest of the proof proceeds by induction (in Phase 1). First, recall that we set b (0) c,r = − √4 + 2c0 plog(d), and ∆b (t) c,r = − ∥∆w(t) c,r∥2 log5(d)for all t in phase 1, and for any +-class sample Xn with p ∈ P(X (t) n ; v+), α (t) n,p ∈ √1 ± ι by our data assumption. Base case t = 0. 1. (On-diagonal common-feature neuron growth) The base case for the neuron expression of point 1. is trivially true. We show that the neurons (+, r) ∈ S ∗(0) + (v+) only activate on v+-dominated patches at time t = 0. With probability at least 1 − O mNP poly(d) , by Lemma H.3, we have for all possible choices of *r, n, p*: $$\left|\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}_{n,p^{\prime}}^{(0)}\right|\leq O(\sigma_{0}\sigma_{\zeta}\sqrt{d\log(d)})\leq O\left(\frac{\sigma_{0}}{\log^{9}(d)}\right)$$ It follows that $$\langle\mathbf{w}_{\nu,\tau}^{(0)},\alpha_{\nu,\theta}^{(0)}\mathbf{v}_{\tau}+\mathsf{G}_{\nu,\theta}^{(0)}\rangle$$ $$=\sigma_{0}\left\{\sqrt{1\pm\iota}\times\left(\sqrt{4+2\mathrm{c}_{0}}\sqrt{\log(d)+1/\log^{5}(d)},\sqrt{10}\sqrt{\log(d)}\right)\pm\frac{1}{\log^{5}(d)}\right\}\tag{41}$$ $$=\sigma_{0}\left\{\left(\sqrt{1-\iota}\sqrt{4+2\mathrm{c}_{0}}\sqrt{\log(d)+1/\log^{5}(d)},\sqrt{1+\iota}\sqrt{10}\sqrt{\log(d)}\right)\pm\frac{1}{\log^{5}(d)}\right\}$$ $$(40)$$ Employing the basic identity a − b = a 2−b 2 a+b , we have the lower bound σ −1 0⟨w (0) +,r, α(0) n,pv+ + ζ (0) n,p⟩ + b (0) +,r ≥ q(1 − ι)(4 + 2c0)(log(d) + 1/ log5(d)) −p(4 + 2c0) log(d) − O 1 log9(d) =(1 − ι)(4 + 2c0)(log(d) + 1/ log5(d)) − (4 + 2c0) log(d) q(1 − ι)(4 + 2c0)(log(d) + 1/ log5(d)) + p(4 + 2c0) log(d) − O 1 log9(d) =(4 + 2c0)(−ιlog(d) + (1 − ι)/ log5(d)) q(1 − ι)(4 + 2c0)(log(d) + 1/ log5(d)) + p(4 + 2c0) log(d) − O 1 log9(d) > 0 $$(42)$$ The last inequality holds since ι ≤1 polylog(d) and d is sufficiently large such that 1 log9(d) does not drive the positive term down past 0. Therefore, the neurons in S ∗(0) + (v+) indeed activate on the v+-dominated patches at t = 0. The rest of the patches x (0) n,p is either a feature patch (not dominated by v+) or a noise patch. By definition, (+, r) ∈ S ∗(0) + (v+) =⇒ (+, r) ∈ S (0) + (v+). Therefore, by Theorem F.1, with probability at least 1 − O mk+NP poly(d) , at time t = 0, the (+, r) ∈ S ∗(0) + (v+) neurons we are considering cannot activate on any feature patch dominated by v ⊥ v+, nor on any noise patches. It follows that the expression equation 37 at time t = 0 is as follows: equation 37 =X N n=1 1{yn = +} 1 2 ± ψ1 × (1{|P(X(0) n ; v+)| > 0} " X p∈P(X(0) n ;v+) √1 ± ιv+ + ζ (0) n,p+X p /∈P(X(0) n ;v+) 0 # + 1{|P(X(0) n ; v+)| = 0} X p∈[P ] 0 ) = 1 2 ± ψ1 X N n=1 1{yn = +, |P(X(0) n ; v+)| > 0}X p∈P(X(0) n ;v+) √1 ± ιv+ + ζ (0) n,p (43) = 1 2 ± ψ1 × n(n, p) ∈ [N] × [P] : yn = +, |P(X(0) n ; v+)| > 0, p ∈ P(X(0) n ; v+) o √1 ± ιv+ + X N p∈P(X(0) n ;v+) {yn = +} 1 2 ± ψ1 ζ (0) n,p n=1 X On average, $$\mathbb{E}\left[\left|\left\{\left(n,p\right)\in[N]\times[P]:y_{n}=+,\left|\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{w}_{+})\right|>0,p\in\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{w}_{+})\right\}\right|\right]\tag{44}$$ $$=\frac{s^{*}}{P}\times P\times\frac{N}{2}=\frac{s^{*}N}{2}$$ Furthermore, with our parameter choices, and by concentration of binomial random variables, with probability at least 1 − e −Ω(polylog(d)), $$(44)$$ $$\left|\left\{(n,p)\in[N]\times[P]:y_{n}=+,|\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})|>0,p\in\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})\right\}\right|=\frac{s^{*}N}{2}\left(1\pm s^{*-1/3}\right)\tag{45}$$ must be true. It follows that equation $37=\left(\frac{1}{2}\pm\psi_{1}\right)\times\frac{s^{*}N}{2}\left(1\pm s^{*-1/2}\right)\times\left(\sqrt{1\pm\imath}\,\mathbf{v}_{+}\right)$ $$+\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})}\left\{y_{n}=+\right\}\left(\frac{1}{2}\pm\psi_{1}\right)\zeta_{n,p}^{(0)}\tag{46}$$ The other component expression equation 38 is zero with probability at least 1 − O mk+NP poly(d) by Theorem F.1. By noting that $$\begin{split}\text{Var}\left(\Delta\zeta_{+,r}^{(0)}\right)&=\text{Var}\left(\frac{\eta}{NP}\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(0)},\mathbf{w}_{+})}\{y_{n}=+\}\left(\frac{1}{2}\pm\psi_{1}\right)\zeta_{n,p}^{(0)}\right)\\ &=\eta^{2}\left(\frac{1}{2}\pm\psi_{1}\right)^{2}\frac{s^{*}}{2NP^{2}}\left(1\pm s^{*-1/3}\right)\sigma_{\zeta}^{2},\end{split}$$ $$\quad(47)$$ and $$\mathbb{E}\left[\Delta\xi^{(0)}_{+,r}\right]=\mathbb{E}\left[\frac{\eta}{NP}\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}^{(0)}_{n};\mathbf{w}_{+})}\{y_{n}=+\}\left(\frac{1}{2}\pm\psi_{1}\right)\xi^{(0)}_{n,p}\right]=\mathbf{0},\tag{48}$$ we finish the proof of the base case for point 1. 2. (On-diagonal finegrained-feature neuron growth) The proof of the base case of point 2. is virtually identical to point 1, so we omit the computations here. Inductive step: We condition on the high probability events of the induction hypothesis for t ∈ [0, T] (with T < T0 of course), and prove the statements for t = T + 1. 1. (On-diagonal common-feature neuron growth) By the induction hypothesis, up to time t = T, with probability at least 1 − O mk+*NP T* poly(d) , for all (+, r) ∈ S ∗(T) + (v+), $$\Delta\mathbf{w}_{+,r}^{(t)}=\eta\Bigg{(}\left(\frac{1}{2}\pm\psi_{1}\right)\sqrt{1\pm\imath}\left(1\pm s^{*-1/3}\right)\Bigg{)}\frac{s^{*}}{2P}\mathbf{v}_{+}+\Delta\mathbf{c}_{+,r}^{(t)}\tag{1}$$ $$\mathbf{\zeta}_{+,r}^{(t)}\sim\mathcal{N}(\mathbf{0},\sigma_{\Delta\mathbf{c}}^{(t)}\mathbf{I}),\;\sigma_{\Delta\mathbf{c}}^{(t)}=\eta\sigma_{\mathbf{c}}\left(\left(\frac{1}{2}\pm\psi_{1}\right)\sqrt{1\pm s^{*-1/3}}\right)\frac{\sqrt{s}}{P\sqrt{2N}}.$$ where ∆ζ Expression of w (T +1) +,r . Conditioning on the high-probability event of the induction hypothesis, at time t = T + 1, w (T +1) +,r =w (0) +,r + X T τ=0 ∆w (τ) +,r (50) =ηT 12 ± ψ1 √1 ± ι 1 ± s ∗−1/3!s ∗ 2P v+ + ζ (t) +,r +,r ∼ N (0, σ (t)2 ζI), σ (t) ζ = ησζ √T 12 ± ψ1 √1 ± s ∗−1/3 √s ∗ P √ 2N . (T +1) $$(49)$$ where ζ (t) Let us compute ∆w +,r . We first want to show that w (T +1) +,r activates on v+-dominated patches x (T +1) n,p = √1 ± ιv+ + ζ (T +1) n,p . We need to show that the following expression is above 0: $$\langle\mathbf{w}_{+,r}^{(T+1)},\mathbf{x}_{+,r}^{(T+1)}\rangle+b_{+,r}^{(T+1)}$$ $$=\langle\mathbf{w}_{++,r}^{(0)},\sqrt{1\pm\mathbf{w}_{+}}+\mathbf{\zeta}_{n,p}^{(T+1)}\rangle+b_{+,r}^{(0)}$$ $$+\left\langle\eta T\bigg{(}\left(\frac{1}{2}\pm\psi_{1}\right)\sqrt{1\pm\iota}\left(1\pm s^{\iota-1/3}\right)\pm O\left(\frac{1}{\log^{10}(d)}\right)\right)\frac{s^{\iota}}{2P}\mathbf{v}_{\iota}+\mathbf{\zeta}_{+,r}^{(T+1)},\sqrt{1\pm\iota}\mathbf{v}_{+}+\mathbf{\zeta}_{n,p}^{(T+1)}\right\rangle\tag{51}$$ $$+\sum_{\nu=0}^{T}\Delta b_{+,\nu}^{(T+1)}$$ Let us treat the three terms (on three lines) separately. First, following virtually the same argument as in the base case, the following lower bound holds with probability at least 1 − O mNP poly(d) for all *n, p* and (+, r) ∈ S ∗(T) + (v+): $$\begin{array}{l}{{\langle\pmb_{+,r}^{(0)},\sqrt{1\pm\imath}\,\pmb_{+}^{(T+1)}\rangle+b_{+,r}^{(0)}}}\\ {{\geq\varepsilon_{0}\left\{\sqrt{(1-\imath)(4+2c_{0})(\log(d)+1/\log^{5}(d))}-\sqrt{(4+2c_{0})\log(d)}-O\left(\frac{1}{\log^{9}(d)}\right)\right\}}}\\ {{>0}}\end{array}$$ $$\left(52\right)$$ $$(53)$$ $$\left(55\right)$$ Now consider the second term. We know, with probability at least 1 − e −Ω(d), for all n and p, $$\left|\langle\mathbf{\zeta}_{n,p}^{(T+1)},\mathbf{v}_{+}\rangle\right|\leq O\left(\frac{1}{\log^{10}(d)}\right),$$ therefore, $$\begin{array}{l}{{\left|\left\langle\eta T\left(\left(\frac{1}{2}\pm\psi_{1}\right)\sqrt{1\pm\iota}\left(1\pm s^{*-1/3}\right)\pm O\left(\frac{1}{\log^{10}(d)}\right)\right)\frac{s^{*}}{2P}\mathbf{v}_{+},\zeta_{\mathrm{n},p}^{(T+1)}\right\rangle\right|}}\\ {{\leq}}\eta T{\frac{s^{*}}{2P}}O\left(\frac{1}{\log^{10}(d)}\right).}\end{array}$$ $$(54)$$ Moreover, with probability at least 1 − e −Ω(d), $$\left|\langle\zeta_{+,r}^{(T+1)},\mathbf{v}_{+}\rangle\right|\leq\eta\sqrt{T}\,\frac{\sqrt{s^{*}}}{P\sqrt{2N}}\times O\left(\frac{1}{\log^{10}(d)}\right)$$ and with probability at least 1 − e −Ω(d), $$\left|\langle\xi_{+,\tau}^{(T)},\xi_{n,p}^{(T+1)}\rangle\right|\leq O\left(\sigma_{\zeta}\sigma_{\zeta}^{(T)}d\right)\leq O\left(\eta\sqrt{T}\,\frac{\sqrt{s^{\star}}}{P\sqrt{2N}}\,\frac{1}{\log^{20}(d)d}d\right)\leq\eta\sqrt{T}\,\frac{\sqrt{s^{\star}}}{P\sqrt{2N}}\,\frac{1}{\log^{19}(d)}\,.$$ log19(d)(56) therefore $$\langle\eta T\zeta_{+,r}^{(T+1)},\sqrt{1\pm\iota}\mathbf{v}_{+}+\zeta_{n,p}^{(T+1)}\rangle\leq\eta\sqrt{T}\frac{\sqrt{s^{*}}}{P\sqrt{2N}}O\left(\frac{1}{\log^{10}(d)}\right).$$ $$(56)$$ $$\left(57\right)$$ It follows that with probability at least 1 − O(e −Ω(d)), * ηT 12 ± ψ1 √1 ± ι 1 ± s ∗−1/3!s ∗ 2P v+ + ζ (T +1) +,r , √1 ± ιv+ + ζ (T +1) n,p + =⟨ηT 12 ± ψ1 √1 ± ι 1 ± s ∗−1/3!s ∗ 2P v+, √1 ± ιv+⟩ + ⟨ηT 12 ± ψ1 √1 ± ι 1 ± s ∗−1/3!s ∗ 2P v+, ζ (T +1) n,p ⟩ (58) + ⟨ηζ (T +1) +,r , √1 ± ιv+ + ζ (T +1) n,p ⟩ ≥ηT 12 − ψ (T +1) 1 (1 − ι) 1 − s ∗−1/3 s ∗ 2P − η √ T √s ∗ P √2N O 1 log10(d) . Now we compute the third term. By the induction hypothesis, X T t=0 ∆b (t) +,r = X T t=0 ∥∆w (t) +,r∥2 log5(d) = X T 1 log5(d) η 1 2 ± ψ1 √1 ± ι 1 ± s ∗−1/3 s ∗ 2P v+ + ∆ζ (t) +,r 2 (59) t=0 ≤ X T 1 log5(d) η 1 2 + ψ1 √1 + ι 1 + s ∗−1/3 s ∗ 2P ∥v+∥2 + X T 1 log5(d) ∆ζ (t) +,r 2 t=0 t=0 =1 log5(d) ηT 12 + ψ1 √1 + ι 1 + s ∗−1/3 s ∗ 2P + X T 1 log5(d) ∆ζ (t) +,r 2 t=0 $$(60)$$ With probability at least 1 − O mT poly(d) , for all t ∈ [0, T] and r in consideration, $$\left\|\Delta\zeta_{+,r}^{(t)}\right\|_{2}\leq\eta\frac{\sqrt{s^{*}}}{P\sqrt{2N}}O\left(\frac{1}{\log^{10}(d)}\right)$$ Therefore, $$\sum_{t=0}^{T}\Delta b_{s,r}^{(t)}\tag{61}$$ $$\leq\frac{1}{\log^{10}(d)}\left(\eta T\left(\frac{1}{2}+\psi_{1}\right)\sqrt{1+t}\left(1+s^{*-1/3}\right)\frac{s^{*}}{2P}+\eta T\frac{\sqrt{s^{*}}}{P\sqrt{2N}}O\left(\frac{1}{\log^{10}(d)}\right)\right)$$ 36 Combining our calculations of the three terms from above, we find the following estimate: ⟨w (T +1) +,r , x (T +1) n,p ⟩ + b (T +1) +,r > 0 + ηT 12 − ψ1 (1 − ι) 1 − s ∗−1/3 s ∗ 2P − η √ T √s ∗ P √2N O 1 log10(d) −1 log5(d) ηT 12 + ψ1 √1 + ι 1 + s ∗−1/3 s ∗ 2P + ηT √s ∗ P √2N O 1 log10(d) (62) >ηT 12 − ψ1 (1 − ι) 1 − s ∗−1/3− O 1 log4(d) s ∗ 2P > 0 On the other hand, by Theorem F.1, with probability at least 1−O mk+*NP T* poly(d) , none of the (+, r) ∈ S ∗(T) + (v+) can activate on x (T +1) n,p that are feature-patches dominated by v ⊥ v+ or noise patches. Combining the above observations, with probability at least 1 − O mk+NP (T +1) poly(d) , the update expressions up to time t = T + 1 can be written as follows: **Proof** **that is defined as follows:** $\Delta\mathbf{w}_{+,r}^{(0)}=\left(\frac{1}{2}\pm\psi_{1}\right)$ $$\times\left\{\left|\left\{\left(n,p\right)\in[N]\times[P]:y_{n}=+,|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v}_{+})|>0,p\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v}_{+})\right\}\right|\left(\sqrt{1\pm\imath}\mathbf{v}_{+}\right)\right.\right.$$ $$\left.\left.+\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})}\left\{y_{n}=+\right\}\left(\frac{1}{2}\pm\psi_{1}\right)\mathcal{G}_{n,p}^{(t)}\right\}\right.$$ $$(63)$$ $$(64)$$ $$(65)$$ The rest of the derivations proceeds virtually the same as in the base case; we just need to rely on the concentration of binomial random variables to calculate $$\Big\vert\Big\{(n,p)\in[N]\times[P]:y_{n}=+,\vert\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})\vert>0,p\in\mathcal{P}(\mathbf{X}_{n}^{(0)};\mathbf{v}_{+})\Big\}\Big\vert={\frac{s^{*}N}{2}}\left(1\pm s^{*-1/3}\right)\Big\vert\Big\vert$$ but in the proof of the paper is $\mathbf{\Lambda}_{n}^{(t)}$ ∗−1/3(64) which completes the proof of the expression of ∆w +,r. Additionally, to show $\pmb{w}_{+,r}^{(T+1)}-\pmb{w}_{+,r}^{(0)}=\pmb{w}_{+,r'}^{(T+1)}-\pmb{w}_{+,r'}^{(0)}$ ove sequence of derivations, for every $(\pmb{+},r)\in S^{*(0)}(\pmb{w}_{\pm})$, these no. we just need to note that, by the above sequence of derivations, for every (+, r) ∈ S + (v+), these neurons receive exactly the same update at time t = T + 1 $$\sum_{n=1}^{N}\mathds{1}\{y_{n}=+\}\mathds{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(T+1)};\mathbf{v}_{+})|>0\}[1-\log\!\mathrm{i}_{+}^{(T+1)}(\mathbf{X}_{n}^{(T+1)})]\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(T+1)};\mathbf{v}_{+})}\left(\alpha_{n,p}^{(T+1)}\mathbf{v}_{+}+\zeta_{n,p}^{(T+1)}\right).\tag{66}$$ 2. (On-diagonal finegrained-feature neuron growth) For point 2, the proof strategy is almost identical, the only difference is that at every iteration, the expected number of patches in which subclass features appear in is $$\left|\left\{\left(n,p\right)\in[N]\times\left([P]-\mathcal{P}(\mathbf{X}_{n}^{(T)});\mathbf{v}_{+,c}\right):y_{n}=+,|\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v}_{+,c})|>0,p\in\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v}_{+,c})\right\}\right|\tag{67}$$ = which holds with probability at least 1 − e −Ω(log2(d)) for the relevant neurons. **Corollary D.1.1**.: $T_{0}<O\left(\left(\eta\frac{s^{*}}{P}\right)^{-1}\right)\in poly(d)$. Proof. Follows from Theorem D.1. D.2 Lemmas Lemma D.2. During the time t ∈ [0, T0)*, for any* X (t) n , $$1-l o g i t_{+}^{(t)}(X_{n}^{(t)})=\frac{1}{2}\pm O(d^{-1})$$ $$(\mathrm{68})$$ $$(69)$$ −1) (68) The same holds for 1 − *logit*(t) − (X (t) n ). Therefore, |ψ1| ≤ O(d −1) for t ∈ [0, T0). Proof. By definition of T0, for any t ∈ [0, T0], we have F (t) c (X (t) n ) < d−1 + O (η) for all n, therefore, using Taylor approximation, $$1-\log\operatorname{int}_{+}^{(t)}(\mathbf{X}_{n}^{(t)})={\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp(F_{+}^{(t)}(\mathbf{X}_{n}^{(t)}))+\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}}<{\frac{\exp(d^{-1})}{1+1}}\leq{\frac{1}{2}}+O(d^{-1})$$ −1) (69) The lower bound can be proven due to convexity of the exponential: $$\frac{\exp(F_{-}^{(t)}({\mathbf{X}}_{n}^{(t)}))}{\exp(F_{+}^{(t)}({\mathbf{X}}_{n}^{(t)}))+\exp(F_{-}^{(t)}({\mathbf{X}}_{n}^{(t)}))}>\frac{1}{2}\exp(-d^{-1})\geq\frac{1}{2}-\frac{1}{2d}$$ $$(70)$$ $\square$ ## E Coarse-Grained Sgd Phase Ii: Loss Convergence, Large Neuron Movement Recall that the desired probability events in Phase I happens with probability at least 1 − o(1). In phase II, common-feature neurons start gaining large movement and drive the training loss down to o(1). We show that the desired probability events occur with probability at least 1 − o(1). We study the case of T1 ≤ poly(d), where T1 denotes the time step at the end of training. ## E.1 Main Results Theorem E.1. *With probability at least* 1 − O mk+*NP T*1 poly(d) *, the following events take place:* 1. *There exists time* T ∗ ∈ poly(d) *such that for any* t ∈ [T ∗, poly(d)], for any n ∈ [N]*, the training loss* L(F; X (t) n , yn) ∈ o(1). 2. (Easy sample test accuracy is nearly perfect) Given an easy test sample (Xeasy, y)*, for* y ′ ∈ {+1, −1}− {y}*, for* t ∈ [T ∗, *poly*(d)], $$\mathbb{P}\left[F_{y}^{(t)}(\mathbf{X}_{e a s y})\leq F_{y^{\prime}}^{(t)}(\mathbf{X}_{e a s y})\right]\leq o(1).$$ 3. (Hard sample test accuracy is bad) However, for all t ∈ [0, poly(d)], given a hard test sample (Xhard, y), $$e\ p l a c e.$$ $$\mathbb{P}\left[F_{y}^{(t)}(\mathbf{X}_{hard})\leq F_{y^{\prime}}^{(t)}(\mathbf{X}_{hard})\right]\geq\Omega(1).\tag{1}$$ Proof. The training loss property follows from Lemma E.3 and Lemma E.4. We can set T ∗ = T1,1 or any time beyond it (and upper bounded by poly(d)). The test accuracy properties follow from Lemma E.8 and Lemma E.9. ## E.2 Lemmas Lemma E.2 (Phase II, Update Expressions). For any T1 ∈ poly(d)*, with probability at least* 1−O *mNP k*+t poly(d) , during t ∈ [T0, T1]*, for any* (+, r) ∈ S ∗(0) + (v+), $$\Delta\mathbf{w}_{+,\,r}^{(t)}$$ $$=\eta\sum_{n=1}^{N}\mathbb{1}\left\{y_{n}=+\right\}\exp\left\{-F_{+}^{(t)}(\mathbf{X}_{n}^{(t)})\right\}\tag{73}$$ $$\times\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp\left(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)})-F_{+}^{(t)}(\mathbf{X}_{n}^{(t)})\right)+1}(1\pm s^{*-1/3})\frac{s^{*}}{\textit{NP}}\left(\sqrt{1\pm\iota}\mathbf{w}_{+}+\zeta_{n,\,p}^{(t)}\right),$$ $$\left(71\right)$$ $$(72)^{\frac{1}{2}}$$ $\square$ (where c t n *denotes the subclass index of sample* X (t) n ) and for any (+, r) ∈ S ∗(0) + (v+,c), ∆w (t) +,r =η exp (− (1 ± s ∗−1/3) √1 ± ι 1 ± O 1 log5(d) s ∗A ∗(t) +,r∗ S ∗(0) + (v+) + A ∗(t) +,c,r∗ S ∗(0) + (v+,c) ) (74) × X N n=1 1{yn = (+, c)}exp(F (t) − (X (t) n )) exp F (t) − (X (t) n ) − F (t) + (X (t) n ) + 1 (1 ± s ∗−1/3) s ∗ NP √1 ± ιv+,c + ζ (t) n,p, In fact, for any v ∈ {v+} ∪ {v+,c} k+ c=1*, every neuron in* S ∗(0) + (v) remain activated (on v-dominated patches) and receive exactly the same updates at every iteration as shown above. For simpler exposition, for any (+, r∗) ∈ S ∗(0) + (v+)*, we write* A ∗(t) +,r∗ := ⟨w (t) +,r∗ , v+⟩*; similarly for* A ∗(t) +*,c,r*∗ := ⟨w+,r∗ , v+,c⟩ for neurons (+, r∗) ∈ S ∗(0) + (v+,c). Moreover, on "+"-class samples, the neural network response satisfies the estimate for every (+, r∗) ∈ S ∗(0) + (v+): $$F_{+}^{(t)}(\mathbf{X}_{u}^{(t)})$$ $$=(1\pm s^{*-1/3})\sqrt{1\pm\iota}\left(1\pm O\left(\frac{1}{\log^{2}(d)}\right)\right)\times s^{*}\left(A_{+,r^{*}}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+})\right|+A_{+,c,s^{*}}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+,c^{*}_{u}})\right|\right),\tag{75}$$ The same claims hold for the "−*" class neurons (with the class signs flipped).* Proof. In this proof we focus on the neurons in S ∗(0) + (v+); the proof for the update expressions for those in S ∗(0) + (v+,c) are proven in virtually the same way. Base case, t = T0. First define A ∗(t) +,r∗ := ⟨w+,r∗ , v+⟩, (+, r∗) ∈ S ∗(0) + (v+); similarly for A ∗(t) +*,c,r*∗ := ⟨w+,r∗ , v+,c⟩. Note that the choice of r ∗ does not really matter, since we know from phase I that every neuron in S ∗(0) + (v+) evolve at exactly the same rate, so by the end of phase I, ∥w (T0) +,r − w (T0) +,r′∥2 ≤ O(σ0 log(d)) ≪ ∥w (T0) +,r ∥2 for any (+, r),(+, r′) ∈ S ∗(0) + (v+). Let (+, r) ∈ S ∗(0) + (v+). Similar to phase I, consider the update equation w (t+1) +,r =w (t) +,r + η 1 NP × (76) n=1 1{yn = +}[1 − logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) X N + 1{yn = −}[−logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p!(78) n,p (77) For the on-diagonal update expression, we have X N n=1 1{yn = +}[1 − logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p = X N n=1 1{yn = +}[1 − logit(t) + (X(t) n )] (1{|P(X(t) n ; v+)| > 0} " X p∈P(X(t) n ;v+) 1 n⟨w (t) +,r, α(t) n,pv+ + ζ (t) n,p⟩ ≥ b (t) +,ro α (t) n,pv+ + ζ (t) n,p (79) p /∈P(X(t) n ;v+) 1 n⟨w (t) +,r, x (t) n,p⟩ ≥ b (t) +,rox (t) n,p# +X + 1{|P(X(t) n ; v+)| = 0} X p∈[P ] 1 n⟨w (t) +,r, x (t) n,p⟩ ≥ b (t) +,rox (t) n,p) (76) $\binom{77}{7}$ (78) (79) Following from Theorem D.1 and F.1, the neurons' non-activation on the patches that do not contain v+, and activation on the v+-dominated patches hold with probability at least 1 − O *mNP k*+ poly(d) at time T0. Therefore, the above update expression reduces to $$\sum_{n=1}^{N}\mathbb{1}\{y_{n}=+,|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v}_{+})|>0\}[1-\log\mathrm{i}t_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v}_{+})}\left(\alpha_{n,p}^{(t)}\mathbf{v}_{+}+\xi_{n,p}^{(t)}\right)$$ n,p(80) Note that for samples X (t) n with yn = +, $$[1-\text{logit}_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]=\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))+\exp(F_{+}^{(t)}(\mathbf{X}_{n}^{(t)}))}\tag{1}$$ $$(81)$$ Now we need to estimate the network response F (t) + (X (t) n ). With probability at least 1 − exp(−Ω(s ∗1/3)), we have the upper bound (let (+, ctn ) denote the subclass which sample X (t) n belongs to): F (t) + (X(t) n ) ≤X p∈P(X(t);v+) X (+,r)∈S (0) + (v+) ⟨w (t) +,r, v+ + ζ (t) n,p⟩ + b (t) +,r (+,r)∈S (0) + (v+,ctn ) ⟨w (t) +,r, v+,ctn + ζ (t) n,p⟩ + b (t) +,r (82) +X p∈P(X(t);v+,ctn ) X ≤(1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log9(d) A ∗(t) +,r∗ S (0) + (v+) + A ∗(t) +,ctn,r∗ S (0) + (v+,ctn ) $$(80)$$ The second inequality is true since maxr⟨w (t) +,r, v+⟩ ≤ A ∗(t) +,r∗ + O(σ0 log(d)), and for any (+, r) ∈ S (0) + (v+), |⟨w (t) +,r, ζ (t) n,p*⟩| ≤* O(1/ log9(d))A ∗(t) +,r∗ . The bias value is negative (and so less than 0). To further refine the bound, we recall S ∗(0) + (v) / S ∗(0) + (v ′) , S ∗(0) + (v) / S (0) + (v ′) = 1 ± O(1/ log5(d)). Therefore, we obtain the bound $$F_{+}^{(t)}(\mathbf{X}_{n}^{(t)})\leq(1+s^{*-1/3})\sqrt{1+s^{*}}\left(1+O\left(\frac{1}{\log^{2}(d)}\right)\right)\left(1+O\left(\frac{1}{\log^{2}(d)}\right)\right)\tag{83}$$ $$\times\left(A_{+,r^{*}}^{(t)}\left|S_{+}^{(0)}(\mathbf{v}_{+})\right|+A_{+,c_{n},r^{*}}^{*(t)}\left|S_{+}^{(0)}(\mathbf{w}_{+,c_{n}^{*}})\right|\right)$$ Following a similar argument, we also have the lower bound F (t) + (X(t) n ) (+,r)∈S ∗(0) + (v+) σ ⟨w (t) +,r, v+ + ζ (t) n,p⟩ + b (t) +,r ≥X p∈P(X(t);v+) X (+,r)∈S ∗(0) + (v+,ctn ) σ ⟨w (t) +,r, v+,ctn + ζ (t) n,p⟩ + b (t) +,r +X p∈P(X(t);v+,ctn ) X (84) ≥(1 − s ∗−1/3) √1 − ιs∗ 1 − O 1 log5(d) 1 − O 1 log5(d) × A ∗(t) +,r∗ S ∗(0) + (v+) + A ∗(t) +,ctn,r∗ S ∗(0) + (v+,ctn ) 41 The neurons in S ∗(0) + (v+) have to activate, therefore they serve a key role in the lower bound, the bias bound for them is simply −A ∗(t) +,r∗Θ(1/ log5(d)); the neurons in S (0) + (v+,c) contribute at least 0 due to the ReLU activation; the rest of the neurons do not activate. The same reasoning holds for the S ∗(0) + (v+,c). Knowing that neurons in S ∗(0) + (v+) cannot activate on the patches in samples belonging to the "−" class, now we may write the update expression for every (+, r) ∈ S ∗(t) + (v+) as (their updates are identical, same as in phase I): ∆w (t) +,r =η NP X N n=1 1{yn = +}[1 − logit(t) + (X(t) n )] X p∈[P ] σ ′(⟨w (t) +,r, x (t) n,p⟩ + b (t) +,r)x (t) n,p =η NP X N n=1 1{yn = +, |P(X(t) n ; v+)| > 0} exp(−F (t) + (X(t) n )) ×exp(F (t) − (X (t) n )) exp F (t) − (X (t) n ) − exp(F (t) + (X (t) n ))+ 1 X p∈P(X(t) n ;v+) α (t) n,pv+ + ζ (t) n,p (85) =η X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(t) +,r∗ S ∗(0) + (v+) + A ∗(t) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) ×exp(F (t) − (X (t) n )) exp F (t) − (X (t) n ) − F (t) + (X (t) n ) + 1 (1 ± s ∗−1/3) s ∗ NP √1 ± ιv+ + ζ (t) n,p This concludes the proof of the base case. Induction step. Assume the statements hold for time period [T0, t], prove for time t + 1. At step t + 1, based on the induction hypothesis, we know that with probability at least 1 − O *mNP k*+t poly(d) , during time τ ∈ [T0, t], for any (+, r) ∈ S ∗(0) + (v+), ∆w (τ) +,r =η X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(τ) +,r∗ S ∗(0) + (v+) + A ∗(τ) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) (86) ×exp(F (τ) − (X (τ) n )) exp F (τ) − (X (τ) n ) − exp(F (τ) + (X (τ) n ))+ 1 (1 ± s ∗−1/3) s ∗ NP √1 ± ιv+ + ζ (τ) n,p 42 and for the bias, ∆b (τ) +,r ≤ − η1 log5(d) X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(τ) +,r∗ S ∗(0) + (v+) + A ∗(τ) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) (87) × (1 − s ∗−1/3) s ∗ NP √1 − ι −1 log10(d) exp(F (τ) − (X (τ) n )) exp F (τ) − (X (τ) n ) − exp(F (τ) + (X (τ) n ))+ 1 Conditioning on the high-probability events of the induction hypothesis, w (t+1) +,r =w (T0) +,r + η Xt τ=T0 X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) (88) × A ∗(τ) +,r∗ S ∗(0) + (v+) + A ∗(τ) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) ×exp(F (τ) − (X (τ) n )) exp F (τ) − (X (τ) n ) − exp(F (τ) + (X (τ) n ))+ 1 (1 ± s ∗−1/3) s ∗ NP √1 ± ιv+ + ζ (τ) n,p It follows that, with probability at least 1 − O mNP poly(d) , for all v+-dominated patch x (t+1) n,p , ⟨w (t+1) +,r , x (t+1) n,p ⟩ + b (t+1) +,r =⟨w (T0) +,r , √1 ± ιv+ + ζ (t+1) n,p ⟩ + b (T0) +,r + η Xt τ=T0 X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(τ) +,r∗ S ∗(0) + (v+) + A ∗(τ) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) ×exp(F (τ) − (X (τ) n )) exp F (τ) − (X (τ) n ) − F (τ) + (X (τ) n ) + 1 (1 ± s ∗−1/3) s ∗ NP × ⟨√1 ± ιv+ + ζ (τ) n,p, √1 ± ιv+ + ζ (t+1) n,p ⟩ + ∆b (τ) +,r ≥ 0 (89) + η Xt τ=T0 X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(τ) +,r∗ S ∗(0) + (v+) + A ∗(τ) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) ×exp(F (τ) − (X (τ) n )) exp F (τ) − (X (τ) n ) − F (τ) + (X (τ) n ) + 1 (1 ± s ∗−1/3) s ∗ NP × 1 − ι − O 1 log5(d) > 0 Therefore the neurons (+, r) ∈ S ∗(0) + (v+) activate on the v+-dominated patches x (t+1) n,p . We also know that they cannot activate on patches that are not dominated by v+ by Theorem F.1. Following a similar derivation to the base case, we arrive at the result that, conditioning on the events of the induction hypothesis, with probability at least 1 − O *mNP k*+ poly(d) , for all (+, r) ∈ S ∗(0) + (v+), ∆w (t+1) +,r =η X N n=1 1{yn = +} exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(t+1) +,r∗ S ∗(0) + (v+) + A ∗(t+1) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) (90) × (1 ± s ∗−1/3) s ∗ NP exp(F (t+1) − (X (t+1) n )) exp F (t+1) − (X (t+1) n ) − F (t+1) + (X (t+1) n ) + 1 √1 ± ιv+ + ζ (t+1) n,p 44 Consequently, with probability at least 1 − O mNP poly(d) , ∆b (t+1) +,r ≤ −1 log5(d) X N n=1 1{yn = +}η exp (− (1 + s ∗−1/3) √1 + ιs∗ 1 + O 1 log5(d) × A ∗(t+1) +,r∗ S ∗(0) + (v+) + A ∗(t+1) +,ctn,r∗ S ∗(0) + (v+,ctn ) ) (91) ×exp(F (t+1) − (X (t+1) n )) exp F (t+1) − (X (t+1) n ) − F (t+1) + (X (t+1) n ) + 1 (1 − s ∗−1/3) s ∗ NP × 1 − ι − O 1 log9(d) $\square$ Utilizing the definition of conditional probability, we conclude that the expressions for ∆w (τ) +,r and ∆b (t+1) +,r are indeed as described in the theorem during time τ ∈ [T0, t + 1] with probability at least 1 − O *mNP k*+t poly(d) × 1 − O *mNP k*+ poly(d) ≥ 1 − O *mNP k*+(t+1) poly(d) . Moreover, based on the expression of ∆w (τ) +,r and ∆b (t+1) +,r , following virtually the same argument as in the base case, we can estimate the network output for any (X (t+1) n , yn = +): $$F_{+}^{(t+1)}(\mathbf{X}_{n}^{(t+1)})=(1\pm s^{*-1/3})\sqrt{1\pm\iota}\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)s^{*}\tag{92}$$ $$\times\left(A_{+,r}^{s(t)}\left|S_{+}^{s(0)}(\mathbf{v}_{+})\right|+A_{+,c_{n},r^{*}}^{s(t)}\left|S_{+}^{s(0)}(\mathbf{v}_{+,c_{n}^{t}})\right|\right)$$ Lemma E.3. Define time T1,1 *to be the first point in time which the following identity holds on all* X (t) n belonging to the "+*" class:* $$\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)})-F_{+}^{(t)}(\mathbf{X}_{n}^{(t)}))+1}\geq1-O\left(\frac{1}{\log^{5}(d)}\right)$$ $$(93)$$ Then T1,1 ≤ poly(d), and for all t ∈ [T1,1, T1]*, the above holds. The following also holds for this time period:* $$[1-l o g i t_{+}^{(t)}(X_{n}^{(t)})]\leq O\left(\frac{1}{\log^{5}(d)}\right)$$ The same results also hold with the class signs flipped. Proof. We first note that, the training loss [1 − logit(t) + (X (t) n )] on samples belonging to the "+" class at any time during t ∈ [T0, T1] is, asymptotically speaking, monotonically decreasing from 12 − O(d −1). This can be easily proven by observing the way s ∗A ∗(t) +,r∗ S ∗(0) + (v+) + A ∗(t) +*,c,r*∗ S ∗(0) + (v+,c) monotonically increases from the proof of Lemma E.2: before F (t) + (X (t) n ) ≥ log log5(d) on all X (t) n belonging to the "+" class, there $$(94)$$ must be some samples X (t) n on which $$[1-\log\!i_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]=\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))+\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}\tag{95}$$ $$\geq\frac{1-O(\sigma_{0}\log(d)s^{\star}d^{\omega_{0}})}{1+O(\sigma_{0}\log(d)s^{\star}d^{\omega_{0}})+\log^{5}(d)}$$ $$\geq\Omega\left(\frac{1}{\log^{5}(d)}\right).$$ $\square$ $$(97)$$ Therefore, by the update expressions in the proof of Lemma E.2, F (t) + (X (t) n ) can reach log log5(d) in time at most O NP log5(d) ηs∗ ∈ poly(d) (in the worst case scenario). At time T1,1 and beyond, $$1-\frac{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)}))}{\exp(F_{-}^{(t)}(\mathbf{X}_{n}^{(t)})-F_{+}^{(t)}(\mathbf{X}_{n}^{(t)}))+1}\leq1-\frac{\exp(1-O(\sigma_{0}d^{\alpha}s^{\star}))}{\exp(1+O(\sigma_{0}d^{\alpha}s^{\star}))\frac{1}{\log^{2}(d)}+1}$$ $$\leq O\left(\frac{1}{\log^{5}(d)}\right).$$ $$(96)$$ Lemma E.4. *Denote* C = ηs ∗ 2k+P , and write (for any c ∈ [k+]) $$A_{c}(t)=s^{*}\left(A_{+,r^{*}}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+})\right|+A_{+,c,r^{*}}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+,c})\right|\right)\tag{1}$$ (see Lemma E.2 for definition of A ∗(t) · *). Define* tc,0 = exp(Ac(T1,1)). We write A(t) and t0 below for cleaner notations. _Then with probability at least $1-o(1)$, during $t\in[T_{1,1},T_{1}]$,_ $$A(t)=\log(C(t-T_{1,1})+t_{0})+E(t)$$ _where $|E(t)|\leq O\left(\frac{1}{\log^{2}(d)}\right)\sum_{r=C_{1}^{-t}t_{0}}^{t-T_{1}+C^{-t}t_{0}}\frac{1}{r}\leq O\left(\frac{\log(t)-\log(C^{-t}t_{0})}{\log^{2}(d)}\right)$. The same results also hold with the class signs flipped._ Proof. **Sidenote**: To make the writing a bit cleaner, we assume in the proof below that C −1t0 is an integer. The general case is easy to extend to by observing that 1 t−T1,1+⌈C−1t0⌉ −1 t−T1,1+C−1t0 ≤ 1 (t−T1,1+⌈C−1t0⌉)(t−T1,1+C−1t0) , which can be absorbed into the error term at every iteration since 1 t−T1,1+⌈C−1t0⌉ ≪ 1 log4(d) due to C −1t0 ≥ Ω(σ −1 0/(polylog(d)d c0 )) ≫ d ≫ log4(d). Based on result from Lemmas E.2 and E.3, as long as A(t) ≤ O (log(d)), we know during time t ∈ [T1,1, T1] the update rule for A(t) is as follows: $$A(t+1)-A(t)=C\exp\left\{-(1\pm s^{\epsilon-1/3})\sqrt{1\pm\iota}\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)A(t)\right\}$$ $$\times\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)(1\pm s^{\epsilon-1/3})\left(\sqrt{1\pm\iota}\pm\frac{1}{\log^{10}(d)}\right)$$ $$=C\exp\left\{-A(t)\right\}\exp\left\{\pm O\left(\frac{1}{\log^{4}(d)}\right)\right\}\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)$$ $$=C\exp\left\{-A(t)\right\}\left(1\pm\frac{C_{1}}{\log^{4}(d)}\right)$$ $C$ is the $\alpha$-function. $$(98)$$ where we write C1 in place of O(·) for a more concrete update expression. The base case t = T1,1 is trivially true. We proceed with the induction step. Assume the hypothesis true for t ∈ [T1,1, T], prove for t + 1 = T + 1. Note that by Lemma E.10, A(t + 1) = log(C(t − T1,1) + t0) + E(t) + C exp {− log(C(t − T1,1) + t0) − E(t))} 1 ±C1 log4(d) = log(C) + log(t − T1,1 + C −1t0) + E(t) + C1 C(t − T1,1) + t0 1 − E(t) ± O(E(t) 2)1 ±C1 log4(d) = log(C) + t−T1,1+C −1 X t0−1 τ=1 1 τ + 1 2 1 t − T1,1 + C−1t0 + 0, 1 8 1 (t − T1,1 + C−1t0) 2 +1 t − T1,1 + C−1t0 ±C1 log4(d) 1 t − T1,1 + C−1t0 (100) + E(t) + 1 t − T1,1 + C−1t0 −E(t) ± O(E(t) 2)1 ±C1 log4(d) = log(C) + t−T1,1+C −1 X t0 τ=1 1 τ + 1 2 1 t − T1,1 + C−1t0 + 0, 1 8 1 (t − T1,1 + C−1t0) 2 ±C1 log4(d) 1 t − T1,1 + C−1t0 + E(t) + 1 t − T1,1 + C−1t0 −E(t) ± O(E(t) 2)1 ±C1 log4(d) Invoking Lemma E.10 again, A(t + 1) = log(C) + log(t + 1 − T1,1 + C −1t0) − 1 2 1 t + 1 − T1,1 + C−1t0 + 1 2 1 t − T1,1 + C−1t0 + − 1 8 1 (t + 1 − T1,1 + C−1t0) 2 , 0 + 0, 1 8 1 (t − T1,1 + C−1t0) 2 ±C1 log4(d) 1 t − T1,1 + C−1t0 + E(t) + 1 t − T1,1 + C−1t0 −E(t) ± O(E(t) 2)1 ±C1 log4(d) (101) = log(C(t + 1 − T1,1) + t0) + 1 2 1 (t + 1 − T1,1 + C−1t0)(t − T1,1 + C−1t0) ± O 1 (t + 1 − T1,1 + C−1t0) 2 ±C1 log4(d) 1 t − T1,1 + C−1t0 + E(t) + 1 t − T1,1 + C−1t0 −E(t) ± O(E(t) 2)1 ±C1 log4(d) 47 To further refine the expression, first note that the error passed down from the previous step t does not grow in this step (in fact it slightly decreases): t slightly decreases): $$\left|E(t)+\frac{1}{t-T_{1,1}+C^{-1}t_{0}}\left(-E(t)\pm O(E(t)^{2})\right)\left(1\pm\frac{C_{1}}{\log^{4}(d)}\right)\right|$$ $$<\left|E(t)\right|\tag{102}$$ $$\leq O\left(\frac{1}{\log^{4}(d)}\right)^{t-T_{1,1}+C^{-1}t_{0}}\frac{1}{\tau}.$$ $$\square$$ Moreover, notice that at step t + 1, since 1 t+1−T1,1+C−1t0 ≪ 1 log4(d) , the error term |E(t + 1)| = |A(t + 1) − log(C(t + 1 − T1,1) + t0)| ≤ O 1 log4(d) Pt+1−T1,1+C −1t0 τ=C−1t01 τ , which finishes the inductive step. **Lemma E.5**.: _With probability at least $1-O\left(\frac{mNPk_{+}T_{i}}{p^{\text{obs}}(d)}\right)$, for all $t\in[0,T_{1}]$, all $c\in[k_{+}]$,_ $$\frac{\Delta_{k}^{t+(c)}}{\Delta_{k}^{t+(c)}}=\Theta\left(\frac{1}{k_{+}}\right),\tag{1}$$ $$\frac{\Delta_{k}^{t+(c)}}{\Delta_{k}^{t+(c)}}=\Theta\left(\frac{1}{k_{+}}\right).$$ $$(103)$$ The same identity holds for the "−*"-classes.* Proof. The statements in the lemma follow trivially from Theorem D.1 for time period [0, T0]. Let us focus on the phase [T0, T1]. In this proof, we condition on the high-probability events of Lemma E.4 and Lemma E.2. First of all, based on Lemma E.4, we know that s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ≤ O(log(d)). We will make use of this fact later. Base case, t = T0. The base case directly follows from our Theorem D.1. Induction step, assume statement holds for τ ∈ [T0, t], prove statement for t + 1. By Lemma E.2, we know that $$\Delta A_{+,r}^{(t)}$$ $$=\eta\sum_{k=1}^{k}\exp\left\{\ -(1\pm s^{*-1/2})\sqrt{1\pm}\,ts^{*}\left(1\pm O\left(\frac{1}{\log^{2}(d)}\right)\right)\times\left(A_{+,r}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+})\right|+A_{+,G,r^{*}}^{*(t)}\left|S_{+}^{*(0)}(\mathbf{v}_{+,c})\right|\right)\right\}$$ $$\quad\times[1/3,1](1\pm s^{*-1/3})\frac{s^{*}}{2k_{+}P}\left(\sqrt{1\pm}\pm O\left(\frac{1}{\log^{2}(d)}\right)\right),$$ and for any $c\in[k_{+}]$. $$\Delta A^{(1)}_{+,\varepsilon,\tau^{*}}$$ $$=\eta\exp\left\{-(1\pm s^{*-1/3})\sqrt{1\pm\iota s^{*}}\left(1\pm O\left(\frac{1}{\log^{6}(d)}\right)\right)\times\left(A^{*(1)}_{+,\tau}\left|S^{*(0)}_{+}(\mathbf{v}_{+})\right|+A^{*(1)}_{+,\varepsilon,\tau^{*}}\left|S^{*(0)}_{+}(\mathbf{v}_{+,\varepsilon})\right|\right)\right\}$$ $$\quad\times[1/3,1](1\pm s^{*-1/3})\frac{s^{*}}{2k_{+}P}\left(\sqrt{1\pm\iota}\pm O\left(\frac{1}{\log^{6}(d)}\right)\right),\tag{105}$$ Relying on the induction hypothesis, we can reduce the above expressions to ∆A ∗(t) +,r∗ =η X k+ c=1 exp (− (1 ± s ∗−1/3) √1 ± ι 1 ± O 1 log5(d) 1 ± O 1 k+ s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ) × [1/3, 1](1 ± s ∗−1/3)s ∗ 2k+P √1 ± ι ± O 1 log9(d) (106) =η exp (− (1 ± s ∗−1/3) √1 ± ι 1 ± O 1 log5(d) 1 ± O 1 k+ s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ) × Θ(1) × s ∗ 2P , and for any c ∈ [k+], $$\Delta A^{*(t)}_{+,\varepsilon,r}$$ $$=\eta\exp\left\{-\left(1\pm s^{*-1/3}\right)\sqrt{1\pm t}\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)\left(1\pm O\left(\frac{1}{k_{+}}\right)\right)s^{*}A^{*(t)}_{+,r}\left[S^{*(0)}(\mathbf{v}_{+})\right]\right\}\tag{107}$$ $$\quad\times\Theta(1)\times\frac{s^{*}}{2k_{+}P}.$$ By invoking the property that s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ≤ O(log(d)), we find that for all c ∈ [k+], $$\frac{\Delta A^{*(t)}_{+,r,r}}{\Delta A^{*,r}_{+,r^{*}}}=\exp\left\{\pm\,O\left(\frac{1}{\log^{5}(d)}\right)s^{*}A^{*(t)}_{+,r^{*}}\left|S^{*(0)}_{+}(\mathbf{v}_{+})\right|\right\}\times\Theta\left(\frac{1}{k_{+}}\right)\tag{108}$$ $$=\left(1\pm\,O\left(\frac{1}{\log^{4}(d)}\right)\right)\times\Theta\left(\frac{1}{k_{+}}\right)$$ $$=\Theta\left(\frac{1}{k_{+}}\right).$$ $$(110)$$ Therefore, we can finish our induction step: $$\frac{A^{*(t+1)}_{+,c,r^{*}}}{A^{*(t+1)}_{+,r^{*}}}=\frac{A^{*(t)}_{+,c,r^{*}}+\Delta A^{*(t)}_{+,c,r^{*}}}{A^{*(t)}_{+,r^{*}}+\Delta A^{*(t)}_{+,r^{*}}}=\frac{A^{*(t)}_{+,c,r^{*}}+\Delta A^{*(t)}_{+,c,r^{*}}}{\Theta\left(k_{+}\right)\times\left(A^{*(t)}_{+,c,r^{*}}+\Delta A^{*(t)}_{+,c,r^{*}}\right)}=\Theta\left(\frac{1}{k_{+}}\right).\tag{109}$$ Lemma E.6. Let TΩ(1) *be the first point in time such that either* s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ≥ Ω(1) or s ∗A ∗(t) −,r∗ S ∗(0) − (v−) ≥ Ω(1). Then for any *t < T*Ω(1), $$\frac{A_{-,r^{*}}^{*(t)}}{A_{+,r^{*}}^{*(t)}}=\Theta(1)$$ and for any t ∈ [TΩ(1), T1], $$\frac{A_{-,r^{*}}^{*(t)}}{A_{+,r^{*}}^{*(t)}},\frac{A_{+,r^{*}}^{*(t)}}{A_{-,r^{*}}^{*(t)}}\geq\Omega\left(\frac{1}{\log(d)}\right).\tag{1}$$ $$(111)$$ Proof. This lemma is a consequence of Theorem D.1, Lemma E.2 and Lemma E.4. ``` Due to Theorem D.1, we already know that A ∗(t) −,r∗ A ∗(t) +,r∗ = Θ(1) up to time T0. In addition, with Lemma E.2 we ``` know that before s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ≥ Ω(1), the loss term (on a +-class sample) 1 − logit(t) + (X (t) n ) = Θ(1) ``` (the same holds with the class signs flipped), in which case it is also easy to derive A ∗(t) −,r∗ A ∗(t) +,r∗ = Θ(1) by noting ``` that the update expressions ∆A ∗(t) −,r∗ /∆A ∗(t) +,r∗ = Θ(1). Beyond time TΩ(1), by Lemma E.4, we know that s ∗A ∗(t) +,r∗ S ∗(0) + (v+) , s∗A ∗(t) −,r∗ S ∗(0) − (v−) ≤ O(log(d)). With the understanding that s ∗A ∗(t) +,r∗ S ∗(0) + (v+) , s∗A ∗(t) −,r∗ S ∗(0) − (v−) ≥ Ω(1) beyond TΩ(1) due to the monotonicity of these functions, and the property |S ∗(0) − (v−)| |S ∗(0) + (v+)| − 1 ≤ O 1 log5(d) from Proposition 1, the rest of the lemma follows. Lemma E.7. *With probability at least* 1 − O $$1-O\left(\frac{mNPk_{+}t}{poly(d)}\right),\text{for all}t\in[0,T_{1}]\text{and all}(+,r)\in S_{+}^{*(0)}(\mathbf{v}_{+}),$$ $$\frac{\Delta b_{+,r}^{(t)}}{\Delta A_{+,r}^{(t)}}=-\Theta\left(\frac{1}{\log^{5}(d)}\right).\tag{112}$$ The same holds with the +-class signs replaced by the −*-class signs.* Proof. Choose any (+, r) ∈ S ∗(0) + (v+). The statement in this lemma for time period t ∈ [0, T0] follows easily from Theorem D.1 and its proof. Let us examine the period t ∈ [T0, T1]. Based on Lemma E.2 and its proof and Lemma E.5, we know that for t ∈ [T0, T1], with probability at least 1 − O *mNP k*+t poly(d) , $$\Delta A^{(1)}_{t,r}$$ $$=\eta\exp\left\{-\left(1\pm s^{\tau-1/2}\right)\sqrt{1\mp i}\left(1\pm O\left(\frac{1}{\log^{2}(d)}\right)\right)\left(1\pm O\left(\frac{1}{k_{\tau}}\right)\right)s^{\tau}A^{\tau(t)}_{t,r}\left|S^{\tau(0)}_{t}(\mathbf{v}_{r})\right|\right\}\tag{113}$$ $$\times\left(1\pm s^{\tau-1/3}\right)\frac{s^{\tau}}{NP}\left(\sqrt{1\mp i}\pm O\left(\frac{1}{\log^{2}(d)}\right)\right)\sum_{n=1}^{N}\mathbbm{1}\{y_{n}=+\}\frac{\exp\left(F^{(t)}(\mathbf{X}^{(t)})\right)}{\exp\left(F^{(t)}(\mathbf{X}^{(t)})-F^{(t)}_{r}(\mathbf{X}^{(t)})\right)+1}$$ Furthermore, ∆b (t) +,r = − ∥∆w (t) +,r∥2 log5(d) = − η1 log5(d) exp (− (1 ± s ∗−1/3) √1 ± ι 1 ± O 1 log5(d) 1 ± O 1 k+ s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ) (114) × (1 ± s ∗−1/3) s ∗ NP 1 ± ι ±1 log9(d) X N n=1 1{yn = +}exp(F (t) − (X (t) n )) exp F (t) − (X (t) n ) − F (t) + (X (t) n ) + 1 With the understanding that s ∗A ∗(t) +,r∗ S ∗(0) + (v+) ≤ O(log(d)) from Lemma E.4 and the fact that exp(F (t) − (X(t) n )) expF (t) − (X(t) n )−F (t) + (X(t) n )+1 = Θ(1), we have $$\frac{\Delta b_{+,r}^{(l)}}{\Delta A_{+,r}^{(l)}}=-\,\Theta\left(\frac{1}{\log^{5}(d)}\right)\exp\left\{\,-\left(1\pm O\left(\frac{1}{\log^{5}(d)}\right)\right)s^{*}A_{+,r}^{*(l)}\left|S_{+}^{*(0)}(\mathbf{v}_{+})\right|\,\right\}\tag{115}$$ $$=-\,\Theta\left(\frac{1}{\log^{5}(d)}\right)\left(1\pm O\left(\frac{1}{\log^{4}(d)}\right)\right)$$ $$=-\,\Theta\left(\frac{1}{\log^{5}(d)}\right).$$ $$(116)$$ Lemma E.8 (Probability of mistake on hard samples is high). For all t ∈ [0, T1]*, given a hard test sample* (Xhard, y), y ′ ̸= y, $$\mathbb{P}\left[F_{y}^{(T)}(\mathbf{X}_{hard})\leq F_{y^{\prime}}^{(T)}(\mathbf{X}_{hard})\right]\geq\Omega(1).\tag{1}$$ Proof. We first show that at time t = 0, the probability of the network making a mistake on hard test samples is Ω(1), then prove that for the rest of the time, i.e. t ∈ (0, T1], the model still makes mistake on hard test samples with probability Ω(1). At time t = 0, by Lemma H.3, we know that for any r ∈ [m], with probability Ω(1), $$\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle\geq\Omega(\sigma_{0}\sigma_{\zeta^{*}}\sqrt{d})\geq\Omega(\sigma_{0}\mathrm{polylog}(d))\gg\Omega\left(\sigma_{0}\sqrt{\log(d)}\right).$$ Relying on concentration of the binomial random variable, with probability at least 1 − e −Ω(polylog(d)), $$\sum_{r=1}^{m}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}\right)\geq\Omega(m\sigma_{0}\sigma_{\zeta^{*}}\sqrt{d}),$$ which is asymptotically larger than the activation from the features, which, following from Proposition 1, is upper bounded by O σ0 plog(d)s ∗d c0. The same can be said for the "−" class. In other words, $$F^{(0)}(\mathbf{X}_{\rm hard})-F^{(0)}_{+}(\mathbf{X}_{\rm hard})>0$$ $$\iff\Bigg{\{}\sum_{r=1}^{m}\mathbb{1}\{\langle\mathbf{w}^{(0)}_{-r},\mathbf{\zeta}^{\star}\rangle+b^{(0)}_{-r}>0\}\langle\mathbf{w}^{(0)}_{-r},\mathbf{\zeta}^{\star}\rangle\tag{119}$$ $$\qquad-\sum_{r=1}^{m}\mathbb{1}\{\langle\mathbf{w}^{(0)}_{+,r},\mathbf{\zeta}^{\star}\rangle+b^{(0)}_{+,r}>0\}\langle\mathbf{w}^{(0)}_{+,r},\mathbf{\zeta}^{\star}\rangle\Bigg{\}}(1\pm o(1))>0$$ $$(1117)$$ $$(1118)$$ which clearly holds with probability Ω(1). Now consider t ∈ (0, T1]. During this period of time, by Theorem D.1 and Lemma E.2, we note that for any c ∈ [k+] and (+, r) ∈ S ∗(0) + (v+,c), ∆ζ (t) +,r ∼ N (0, σ (t)2 ∆ζ+,r Id), with σ (t) ∆ζ+,r = Θ ∆A (t) +*,c,r*q 2k+ s ∗N σζ . The same can be said for (+, r) ∈ S ∗(0) + (v+), although with the ∆A (t) +*,c,r*q 2k+ s ∗N factor replaced by ∆A (t) +,rq2 s ∗N . Also from the proofs of Theorem D.1 and Lemma E.2, and using the property |U(0) +,r| ≤ O(1) from Proposition 1, we know that for all neurons, the updates to the neurons also take the feature-plus-Gaussian-noise form of Pv′∈U(0) +,r c (t)(v ′)v ′ +∆ζ (t) +,r, with c (t)(v ′) ≤ 1 + O 1 log5(d) ∆A (t) +*,c,r* if v ′ = v+,c for some c ∈ [k+], or c (t)(v ′) ≤ 1 + O 1 log5(d) ∆A (t) +,r if v ′ = v+ (because the v ′component of a v ′-singleton neuron's update is already the maximum possible). otherwise, if U (0) +,r only contains the fine-grained features, then σ $$\left(\Delta A_{+,r}^{(t)}\sqrt{\frac{2}{s^{*}N}}\sigma_{\zeta}\right)+{\cal O}\left(\Delta A_{+,c,r}^{(t)}\sqrt{\frac{2k_{+}}{s^{*}N}}\sigma_{\zeta}\right)\leq{\cal O}\left(\Delta A_{+,r}^{(t)}\sqrt{\frac{2}{s^{*}N}}\sigma_{\zeta}\right),$$ ained features, then $\sigma_{\Delta\zeta+,r}^{(t)}\leq{\cal O}\left(\Delta A_{+,c,r}^{(t)}\sqrt{\frac{2k_{+}}{s^{*}N}}\sigma_{\zeta}\right)$. Moreover, if v+ ∈ U(0) +,r, then σ (t) ∆ζ+,r ≤ O With the understanding that only neurons in S (0) y (vy) and S (0) y (vy,c) can possibly activate on the feature patches of a sample when t ≤ T1 (coming from Theorem F.1), we have p∈P(Xhard;v+,c) σ ⟨w (0) +,r + Xt−1 τ=0 ∆w (τ) +,r, √1 ± ιv+,c + ζp⟩ + b (t) +,r! F (t) + (Xhard) ≤X X (+,r)∈S (0) + (v+,c) r∈[m] σ ⟨w (0) +,r + Xt−1 τ=0 ∆w (τ) +,r, ζ ∗⟩ + b (t) +,r! + X (120) p∈P(Xhard;v−) σ ⟨w (0) +,r + Xt−1 τ=0 ∆w (τ) +,r, α†pv− + ζp⟩ + b (t) +,r! +X X (+,r)∈S (0) + (v−) To further refine this upper bound, we first note that with probability at least 1 − O *mNP k*+t poly(d) , the following holds with arbitrary choice of (+, r∗) ∈ S (0) + (v+,c): $$\sum_{(+,+)\in S_{+}^{(0)}(\mathbf{w}_{+,-})}\sum_{p\in\mathcal{P}(\mathbf{X}_{\mathrm{hard}};\mathbf{w}_{+,-})}\langle\sum_{r=0}^{t-1}\Delta\mathbf{w}_{+,r}^{(r)},\sqrt{1\pm\imath}\mathbf{w}_{+,c}+\zeta_{p}\rangle\leq O\left(s^{*}\left|S_{+}^{(0)}(\mathbf{w}_{+,c})\right|\sum_{r=0}^{t-1}\Delta A_{+,c,r}^{(r)}\right)$$ $$(121)$$ Invoking Lemma E.5, we obtain (for arbitrary (+, r∗) ∈ S (0) + (v+)): $$\sum_{(+,r)\in S^{(0)}_{+}(w_{+,c})}\sum_{p\in\mathcal{P}(X_{\text{best}(w_{+,c})}}\sum_{r=0}^{t-1}\Delta w^{(r)}_{+,r},\sqrt{1\pm i}w_{+,c}+\zeta_{p})\leq O\left(\frac{1}{k_{+}}s^{*}\left|S^{(0)}_{+}(w_{+,c})\right|\sum_{r=0}^{t-1}\Delta A^{(r)}_{+,r}\right.\right)\tag{122}$$ Let us examine the term Pr∈[m] σ ⟨w (0) +,r +Pt−1 τ=0 ∆w (τ) +,r, ζ ∗⟩ + b (t) +,rmore carefully. First of all, denoting S (0) + = ∪ k+ c=1S (0) + (v+,c)∪∪k− c=1S (0) + (v−,c)∪S (0) + (v+)∪S (0) + (v−), neurons (+, r) ∈/ S (0) + cannot receive any update at all during training due to Theorem F.1. Therefore we can rewrite the term $$\begin{array}{c}{{\sum_{r\in[m]}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)}+\sum_{r=0}^{t-1}\Delta\mathbf{w}_{+,r}^{(r)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(t)}\right)}}\\ {{=\sum_{(+,r)\in S_{+}^{(0)}}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)}+\sum_{r=0}^{t-1}\Delta\mathbf{w}_{+,r}^{(r)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(t)}\right)+\sum_{(+,r)\notin S_{+}^{(0)}}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}\right)}}\end{array}$$ $$(123)$$ Relying on Corollary F.1.1, we know $$\sum_{\tau=0}^{t-1}\Delta b_{+,\tau}^{(\tau)}<\sum_{\tau=0}^{t-1}-\Omega\left(\frac{\mathrm{polylog}(d)}{\log^{5}(d)}\right)\left|\langle\Delta\mathbf{w}_{+,\tau}^{(\tau)},\mathbf{\zeta}^{*}\rangle\right|.$$ Therefore, we know that for r ∈ [m], $$\sum_{\tau=0}^{t-1}\langle\Delta\mathbf{w}_{+,r}^{(\tau)},\mathbf{\zeta}^{*}\rangle+\Delta b_{+,r}^{(\tau)}\leq0\tag{1}$$ $$(124)$$ $$(125)^{\frac{1}{2}}$$ As a consequence, we can write the naive upper bound $$\sum_{r\in[m]}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)}+\sum_{r=0}^{t-1}\Delta\mathbf{w}_{+,r}^{(r)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(t)}\right)$$ $$\leq\sum_{(+,r)\in S_{+}^{(0)}}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}\right)+\sum_{(+,r)\notin S_{+}^{(0)}}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}\right)\tag{126}$$ $$=\sum_{r\in[m]}\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}\right)$$ Additionally, due to Theorem F.1 (and its proof), we know that p∈P(Xhard;v−) σ ⟨w (0) +,r + Xt−1 τ=0 ∆w (τ) +,r, α†pv− + ζp⟩ + b (t) +,r! X X (+,r)∈S (0) + (v−) p∈P(Xhard;v−) σ ⟨w (0) +,r, α†pv− + ζp⟩ + b (0) +,r (127) ≤X X (+,r)∈S (0) + (v−) It follows that F (t) + (Xhard) ≤O 1 k+ s ∗ S (0) + (v+,c) Xt−1 τ=0 ∆A (τ) +,r∗ ! +X (+,r)∈S (0) + (v+,c) p∈P(Xhard;v+,c) ⟨w (0) +,r, √1 ± ιv+,c + ζp⟩ X (128) r∈[m] σ ⟨w (0) +,r, ζ ∗⟩ + b (0) +,r+X (+,r)∈S (0) + (v−) p∈P(Xhard;v−) σ ⟨w (0) +,r, α†pv− + ζp⟩ + b (0) +,r + X X On the other hand, for the "−" neurons, denoting S (0) − = ∪ k+ c=1S (0) − (v+,c)∪∪k− c=1S (0) − (v−,c)∪S (0) − (v+)∪S (0) − (v−), $$E_{-}^{(t)}(\mathbf{X}_{\text{hard}})\geq\sum_{(+,r)\in\mathcal{S}^{(0)}(\mathbf{w}_{-r})\in\mathcal{P}(\mathbf{X}_{\text{hard}\times\mathbf{w}_{-r}})}\sigma\left((\mathbf{w}_{-r}^{(0)}+\sum_{r=0}^{t-1}\Delta\mathbf{w}_{-r}^{(r)},\alpha_{p}^{\dagger}\mathbf{w}_{-}+\zeta_{p})+b_{+,r}^{(t)}\right)\tag{129}$$ $$+\sum_{(+,r)\notin\mathcal{S}^{(0)}}\sigma\left((\mathbf{w}_{-r}^{(0)},\mathbf{\zeta}^{*})+b_{+,r}^{(0)}\right),$$ note that the last line is true because neurons outside the set S (0) − cannot receive any update during training with probability at least 1−O *mNP k*+t poly(d) due to Theorem F.1. Estimating the activation value of the neurons from S ∗(0) − (v−) on the feature noise patches requires some care. We define time t− to be the first point in time such that any (−, r∗) ∈ S ∗(0) − (v−) satisfies Pt− τ=0 ∆A (τ) −,r∗ ≥ σ0 log5(d), and beyond this point in time, i.e. for t ∈ [t−, T1], the neurons in S ∗(0) − (v−) have to activate with high probability, since $$\begin{split}\langle\mathbf{w}_{-,r}^{(t)}+\sum_{r=0}^{t-1}\Delta\mathbf{w}_{-,r,r}^{(\cdot)}\alpha_{p}^{\dagger}\mathbf{w}_{-}+\mathbf{\zeta}_{p}\rangle+b_{r,r}^{(t)}&\geq\left(1-O\left(\frac{1}{\log^{\circ}(d)}\right)\right)\sigma_{0}\log^{\circ}(d)/\log^{4}(d)-O(\sigma_{0}\sqrt{\log(d)})\\ &>0.\end{split}\tag{130}$$ Now we can proceed to prove the lemma for t ∈ (0, T1] by combining the above estimates for F (t) + (Xhard) and F (t) − (Xhard). For t ∈ (0, t−], relying argument similar to the situation of t = 0 and the fact that m − |S (0) − | = (1 − o(1))m, $$\left\{\begin{array}{l}\sum_{(+,r)\notin S_{-}^{(0)}}\mathbb{1}\{\langle\mathbf{w}_{-,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{-,r}^{(0)}>0\}\langle\mathbf{w}_{-,r}^{(0)},\mathbf{\zeta}^{*}\rangle\\ \\ -\sum_{r=1}^{m}\mathbb{1}\{\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle+b_{+,r}^{(0)}>0\}\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}^{*}\rangle\}(1\pm o(1))>0\\ \\ \Longrightarrow F_{-}^{(t)}(\mathbf{X}_{\rm hard})-F_{+}^{(t)}(\mathbf{X}_{\rm hard})>0\end{array}\right.\tag{131}$$ which has to be true with probability Ω(1). On the other hand, with t ∈ (t−, T1], we have F (t) − (Xhard) − F (t) + (Xhard) ≥ (Xt−1 τ=0 1 − O 1 log5(d) s †|S ∗(0) − (v−)|∆A (τ) −,r∗ − O(σ0 plog(d)) − O 1 k+ s ∗ S (0) + (v+,c) Xt−1 τ=0 ∆A (τ) +,r∗ ! ) (132) + ( X p∈P(Xhard;v+,c) ⟨w (0) +,r, √1 ± ιv+,c + ζp⟩ (+,r)∈/S (0) − σ ⟨w (0) −,r, ζ ∗⟩ + b (0) +,r−X (+,r)∈S (0) + (v+,c) X p∈P(Xhard;v−) σ ⟨w (0) +,r, α†pv− + ζp⟩ + b (0) +,r) r∈[m] σ ⟨w (0) +,r, ζ ∗⟩ + b (0) +,r−X (+,r)∈S (0) + (v−) − X X Let us begin analyzing the first {·} bracket. By Proposition 1 we know that S ∗(0) − (v−) = (1 ± O(1/ log5(d))) S (0) + (v+,c) , and by Lemma E.5, we know that ∆A (τ) +,r∗ ≤ O(log(d)∆A (τ) −,r∗ ), therefore, $$O\left(\frac{1}{k_{+}}s^{*}\left|S_{+}^{(0)}(\mathbf{v}_{+,\rho})\right|\sum_{r=0}^{t-1}\Delta A_{+,\rho^{*}}^{(r)}\right)\leq O\left(\frac{\log(d)}{k_{+}}s^{*}\left|S_{-}^{(0)}(\mathbf{v}_{-})\right|\sum_{r=0}^{t-1}\Delta A_{-,r^{*}}^{(r)}\right)\right.\tag{133}$$ $$\leq\sum_{r=0}^{t-1}\left(1-O\left(\frac{1}{\log^{5}(d)}\right)s^{\dagger}\left|S_{-}^{(0)}(\mathbf{v}_{-})\right|\Delta A_{-,r^{*}}^{(r)}-O(\sigma_{0}\sqrt{\log(d)})\right)$$ Therefore, we obtained the simpler lower bound F (t) − (Xhard) − F (t) + (Xhard) ≥ ( X p∈P(Xhard;v+,c) ⟨w (0) +,r, √1 ± ιv+,c + ζp⟩ (+,r)∈/S (0) − σ ⟨w (0) −,r, ζ ∗⟩ + b (0) +,r−X (+,r)∈S (0) + (v+,c) X (134) p∈P(Xhard;v−) σ ⟨w (0) +,r, α†pv− + ζp⟩ + b (0) +,r) r∈[m] σ ⟨w (0) +,r, ζ ∗⟩ + b (0) +,r−X (+,r)∈S (0) + (v−) − X X which is greater than 0 with probability Ω(1) (by relying on an argument almost identical to the t = 0 case again, and noting that m − |S (0) − | = (1 − o(1))m). This concludes the proof. Lemma E.9 (Probability of mistake on easy samples is low after training). For t ∈ [T1,1, T1]*, given an easy* test sample (Xeasy, y), $$\mathbb{P}\left[F_{y}^{(T)}(\mathbf{X}_{easy})\leq F_{y}^{(T)}(\mathbf{X}_{easy})\right]\leq o(1).\tag{135}$$ Proof. Without loss of generality, assume the true label of Xeasy is +1. Assume t ≥ T1,1. Firstly, conditioning on the events of Theorem F.1, the following upper bound on F (t) − (Xeasy) holds with probability at least 1 − O m poly(d) : p∈P(Xeasy;v+) σ ⟨w (t) −,r, √1 ± ιv+ + ζp⟩ + b (t) −,r F (t) − (Xeasy) = X X (−,r)∈S (0) − (v+) p∈P(Xeasy;v+,c) σ ⟨w (t) −,r, √1 ± ιv+,c + ζp⟩ + b (t) −,r +X X (−,r)∈S (0) − (v+,c) p∈P(Xeasy;v+) σ ⟨w (0) −,r, √1 ± ιv+ + ζp⟩ + b (0) −,r ≤X X (136) (−,r)∈S (0) − (v+) p∈P(Xeasy;v+,c) σ ⟨w (0) −,r, √1 ± ιv+,c + ζp⟩ + b (0) −,r +X X (−,r)∈S (0) − (v+,c) <O (s ∗d c0 σ0) ≤o(1), and on the other hand, that leads, $$F_{+}^{(t)}(\mathbf{X}_{\text{easy}})\geq\sum_{(+,r)\in S_{+}^{(t)}(\mathbf{w}_{+})}\sum_{p\in\mathcal{P}(\mathbf{X}_{\text{asy}},\mathbf{w}_{+})}\sigma\left(\langle\mathbf{w}_{+,r}^{(t)},\sqrt{1\pm\imath}\mathbf{v}_{+}+\mathbf{\zeta}_{p}\rangle+b_{+,r}^{(t)}\right)\tag{137}$$ $$+\sum_{(+,r)\in S_{+}^{(t)}(\mathbf{w}_{+,r})}\sum_{p\in\mathcal{P}(\mathbf{X}_{\text{asy}},\mathbf{w}_{+,r})}\sigma\left(\langle\mathbf{w}_{+,r}^{(t)},\sqrt{1\pm\imath}\mathbf{v}_{+,r}+\mathbf{\zeta}_{p}\rangle+b_{+,r}^{(t)}\right)$$ $$>\Omega(1).$$ $$(138)$$ Therefore, F (t) + (Xeasy) ≫ F (t) − (Xeasy), which completes the proof. Lemma E.10 (Jr. & John W. Wrench (1971)). The partial sum of harmonic series satisfies the following identity: $$\sum_{k=1}^{n-1}{\frac{1}{k}}=\log(n)+{\mathcal{E}}-{\frac{1}{2n}}-\epsilon_{n}$$ where E *is the Euler–Mascheroni constant (approximately 0.58), and* ϵn ∈ [0, 1/8n 2]. ## F Coarse-Grained Sgd, Poly-Time Properties In this section, set Te ∈ poly(d). Please note that we are performing stochastic gradient descent on easy samples only. Theorem F.1. Fix any t ∈ [0, Te]. 1. (_Non-activation invariance_) _For any $\tau\geq t$, with probability at least $1-O$,_ $\mathbf{v}\in\{\mathbf{v}_{+,c}\}_{c=1}^{k_{+}}\cup\{\mathbf{v}_{-,c}\}_{c=1}^{k_{-}}\cup\{\mathbf{v}_{+},\mathbf{v}_{-}\}$_, any $t^{\prime}\leq t$, $(+,r)\notin S_{+}^{(0)}(\mathbf{v})$ and $\mathbf{v}_{-}$,_ $\mathbf{x}_{n,p}^{(\tau)}=\alpha_{n,p}^{(\tau)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(\tau)}$_, the following holds:_ mk+*NP t* poly(d) *, any feature* + (v) and v*-dominated patch sample* $$\sigma\left(\langle\mathbf{w}_{+,r}^{(t)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(t^{\prime})}\right)=0$$ $$(139)$$ 2. (Non-activation on noise patches) For any τ ≥ t*, with probability at least* 1 − O *mNP t* poly(d) *, for every* t ′ ≤ t, r ∈ [m] *and noise patch* x (τ) n,p = ζ (τ) n,p*, the following holds:* $$\sigma\left(\langle\mathbf{w}_{+,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(t^{\prime})}\right)=0$$ $$(140)^{\frac{1}{2}}$$ 3. (Off-diagonal nonpositive growth) For any τ ≥ t*, with probability at least* 1 − O mk+*NP t* poly(d) , for any t ′ ≤ t, any feature v ∈ {v−,c} k− c=1 ∪ {v−}, any (+, r) ∈ S (0) + (v) and v*-dominated patch* x (τ) n,p = α (τ) n,pv + ζ (τ) n,p, σ ⟨w (t ′) +,r, x (τ) n,p⟩ + b (t ′) +,r≤ σ ⟨w (0) +,r, x (τ) n,p⟩ + b (0) +,r. Proof. **Base case** t = 0. 1. (Nonactivation invariance) Choose any τ ≥ 0, v ∗from the set {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}. We will work with neuron sets in the "+" class in this proof; the "−"-class case can be handled in the same way. First, we need to show that, for every n such that |P(X (τ) n ; v ∗)| > 0 and p ∈ P(X (τ) n ; v ∗), for every (+, r) neuron index, $$\langle\mathbf{w}_{+,r}^{(0)},\mathbf{v}^{*}\rangle<\sigma_{0}{\sqrt{4+2c_{0}}}{\sqrt{\log(d)-{\frac{1}{\log^{5}(d)}}}}\implies\sigma\left(\langle\mathbf{w}_{+,r}^{(0)},\mathbf{x}_{n,p}^{(r)}\rangle+b_{+,r}^{(0)}\right)=0$$ This is indeed true. The following holds with probability at least 1 − O mNP poly(d) for all (+, r) ∈/ S (0) + (v) and all such x (τ) n,p: ⟨w (0) +,r, x (τ) n,p⟩ + b (0) +,r ≤σ0 √1 + ι q(4 + 2c0)(log(d) − 1/ log5(d)) + O σ0 log9(d) − √4 + 2c0 plog(d)σ0 =σ0 (4 + 2c0)(1 + ι)(log(d) − 1/ log5(d)) − (4 + 2c0) log(d) q(4 + 2c0)(log(d) − 1/ log5(d)) + √4 + 2c0 plog(d) + O 1 log9(d) =σ0 (4 + 2c0)ιlog(d) − (1 + ι)/ log5(d) q(4 + 2c0)(log(d) − 1/ log5(d)) + √4 + 2c0 plog(d) + O 1 log9(d) <0, $$(141)$$ $$(142)$$ The first equality holds by utilizing the identity a − b = a 2−b 2 a+b . As a consequence, σ(⟨w (0) +,r, x (τ) n,p⟩ + b (0) +,r) = 0. 2. (Non-activation on noise patches) Invoking Lemma H.3, for any τ ≥ 0, with probability at least 1 − O mNP poly(d) , we have for all possible choices of r ∈ [m] and the noise patches x (τ) n,p = ζ (τ) n,p: $$\left|\langle\mathbf{w}_{+,r}^{(0)},\mathbf{\zeta}_{n,p}^{(\tau)}\rangle\right|\leq O(\sigma_{0}\sigma_{\zeta}\sqrt{d\log(d)})\leq O\left(\frac{\sigma_{0}}{\log^{9}(d)}\right)\ll b_{+,r}^{(0)}.$$ $$(143)$$ Therefore, no neuron can activate on the noise patches at time t = 0. 3. (Off-diagonal nonpositive growth) This point is trivially true at t = 0. Inductive step: we assume the induction hypothesis for t ∈ [0, T] (with *T < T*e of course), and prove the statements for t = T + 1. 1. (Nonactivation invariance) Choose any v ∗from the set {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}. We will work with neuron sets in the "+" class in this proof; the "−"-class case can be handled in the same way. We need to prove that given τ ≥ T + 1, with probability at least 1 − O mNP (T +1) poly(d) , for every t ′ ≤ T + 1, (+, r) neuron index and v ∗-dominated patch x (τ) n,p, $$(+,r)\notin S_{+}^{(0)}(\mathbf{v}^{*})\implies\sigma\left(\langle\mathbf{w}_{+,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(t^{\prime})}\right)=0.$$ $$(1444)$$ Conditioning on the (high-probability) event of the induction hypothesis of point 1., the following is already true on all the v ∗-dominated patches at time t ′ ≤ T: $$(+,r)\notin S_{+}^{(0)}(\mathbf{v}^{*})\implies\sigma\left(\langle\mathbf{w}_{+,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(T)}\rangle+b_{+,r}^{(t^{\prime})}\right)=0.$$ In particular, σ ⟨w (T) +,r, x (T) n,p ⟩ + b (T) +,r= 0. In other words, no (+, r) ∈/ S (0) + (v ∗) can be updated on the v ∗-dominated patches at time t = T. Furthermore, the induction hypothesis of point 2. also states that the network cannot activate on any noise patch x (T) n,p = ζ (T) n,p with probability at least 1−O *mNP T* poly(d) . Therefore, the neuron update for those (+, r) ∈/ S (0) + (v ∗) takes the form $$\begin{split}\Delta\mathbf{w}_{+,r}^{(T)}&=\frac{\eta}{N P}\sum_{\mathbf{v}\in\{(\mathbf{v}^{*})\}}\sum_{n=1}^{N}\mathbbm{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v})|>0\}\mathbbm{1}\{y_{n}=+\}-\operatorname{logit}_{+}^{(T)}(\mathbf{X}_{n}^{(T)})]\\ &\quad\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(T)},\mathbf{v})}\mathbbm{1}\{\langle\mathbf{w}_{+,r}^{(T)},\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\rangle+b_{c,r}^{(T)}>0\}\left(\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\right)\end{split}$$ $$(145)$$ $$(146)$$ Now we can invoke Lemma F.2 and obtain that, with probability at least 1 − O mNP poly(d) , the following holds for all relevant neurons and v ∗-dominated patches: $$\langle\Delta\mathbf{w}_{+,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+\Delta b_{+,r}^{(T)}<0.\tag{1}$$ In conclusion, with τ ≥ T + 1, with probability at least 1 − O mNP poly(d) , for every (+, r) ∈/ S (0) + (v ∗) and relevant (*n, p*)'s, $$\langle\mathbf{w}_{+,r}^{(T)}+\Delta\mathbf{w}_{+,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(T)}+\Delta b_{+,r}^{(T)}=\langle\mathbf{w}_{+,r}^{(T+1)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(T+1)}<0,$$ $$(147)$$ $$(148)^{3}$$ which leads to ⟨w (t ′) +,r, x (τ) n,p⟩ + b (t ′) +,r < 0 for all t ′ ≤ T + 1 with probability at least 1 − O mk+NP (T +1) poly(d) (also taking union bound over all the possible choices of v ∗). This finishes the inductive step for point 1. 2. (Non-activation on noise patches) Relying on the event of the induction hypothesis, for any τ ≥ T, the following holds for every r ∈ [m] and noise patch x (τ) n,p = ζ (τ) n,p, $$\langle\mathbf{w}_{+,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(T)}<0.\tag{1}$$ Conditioning on this high-probability event, this means no neuron w (T) +,r can be updated on the noise patches. Denoting the set of features M = {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}, for every r ∈ [m], its update is reduced to $$\begin{array}{l}{{\Delta\mathbf{w}_{+,r}^{(T)}=\frac{\eta}{N P}\sum_{\mathbf{v}\in\mathcal{M}}\sum_{n=1}^{N}\mathbb{I}\left\{|\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v})|>0\right\}\left[\mathbb{I}\left\{y_{n}=+\right\}-\operatorname{logit}_{+}^{(T)}(\mathbf{X}_{n}^{(T)})\right]}}\\ {{\qquad\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(T)},\mathbf{v})}\mathbb{I}\left\{\left\{\mathbf{w}_{+,r}^{(T)},\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\right\}+b_{c,r}^{(T)}>0\right\}\left(\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\right),}}\end{array}$$ $$(149)$$ $$(150)$$ Invoking Lemma F.3, we have that, for any τ ≥ T + 1, the following inequality holds with probability at least 1 − O mNP poly(d) for every r ∈ [m] and noise patches, $$\langle\Delta\mathbf{w}_{+,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+\Delta b_{+,r}^{(T)}<0.\tag{1}$$ $$(151)$$ Consequently, for any τ ≥ T + 1, the following inequality holds with probability at least 1 − O mNP poly(d) for every r ∈ [m] and noise patches x (τ) n,p = ζ (τ) n,p: $$\langle\mathbf{w}_{+,r}^{(T)}+\Delta\mathbf{w}_{+,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(T)}+\Delta b_{+,r}^{(T)}=\langle\mathbf{w}_{+,r}^{(T+1)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,r}^{(T+1)}<0.\tag{152}$$ This finishes the inductive step for point 2. 3. (Off-diagonal nonpositive growth) Choose any v ∗ ∈ {v−} ∪ {v−,c} k− c=1. Choose any neuron with index (+, r). Similar to our proof for point 2., we know that its update, when taken inner product with a v ∗-dominated patch x (τ) n,p = √1 ± ιv ∗ + ζ (τ) n,p, has to take the form ⟨∆w (T) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ =η NP X v∈M X N n=1 1{|P(X(T) n; v)| > 0}[1{yn = +} − logit(T) + (X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,r > 0}⟨α (T) n,p v + ζ (T) n,p , √1 ± ιv ∗ + ζ (τ) n,p⟩ ×X =η NP X v*∈M−{*v∗} X N n=1 1{|P(X(T) n; v)| > 0}[1{yn = +} − logit(T) + (X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,r > 0} ⟨ζ (T) n,p , √1 ± ιv ∗⟩ + ⟨α (T) n,p v + ζ (T) n,p , ζ (τ) n,p⟩ ×X −η NP X N n=1 1{|P(X(T) n; v ∗)| > 0}[logit(T) + (X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,r > 0}⟨α (T) n,p v ∗ + ζ (T) n,p , √1 ± ιv ∗ + ζ (τ) n,p⟩ ×X $$(153)$$ With probability at least 1−O NP poly(d) , ⟨α (T) n,p v ∗+ζ (T) n,p , √1 ± ιv ∗+ζ (τ) n,p⟩ > 0, and ⟨ζ (T) n,p , √1 ± ιv ∗⟩+⟨α (T) n,p v+ ζ (T) n,p , ζ (τ) n,p⟩ < O(1/ log9(d)). Therefore, ⟨∆w (T) +,r, v ∗⟩ <η NP X v∈M−{v∗} X N n=1 1{|P(X(T) n; v)| > 0}[1{yn = +} − logit(T) + (X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,r > 0}O 1 log9(d) (154) ×X Invoking Lemma F.3, we know that $\Delta b^{(T)}_{+,r}$ $\leq-\,\overline{\log}$ $\times\left(\vphantom{\frac{1}{\log_2{b_1}}}\right)$ $\times$ $p{\in}1$ $$\begin{split}&\phi^{(T)}_{+,\,r}\\ &\frac{1}{\log^{5}(d)}\frac{\eta}{NP}\left(\sqrt{1-\iota}-\frac{1}{\log^{5}(d)}\right)\\ &\left(\sum_{\mathbf{v}\in\mathcal{M}}\sum_{n=1}^{N}\mathbb{I}\left\{|\mathcal{P}(\mathbf{X}^{(T)}_{n};\mathbf{v})|>0\right\}\left|\mathbb{I}\left\{y_{n}=+\right\}-\text{logit}^{(T)}_{+}(\mathbf{X}^{(T)}_{n})\right|\right.\\ &\left.\sum_{p\in\mathcal{P}(\mathbf{X}^{(T)}_{n};\mathbf{v})}\mathbb{I}\left\{\langle\mathbf{w}^{(T)}_{+,r},\alpha^{(T)}_{n,p}\mathbf{v}+\mathbf{\zeta}^{(T)}_{n,p}\right\rangle+b^{(T)}_{+,r}>0\right\}\right).\end{split}\tag{155}$$ It follows that ⟨∆w (T) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ + ∆b (T) +,r <O 1 log9(d) η NP X v∈M−{v∗} X N n=1 1{|P(X(T) n; v)| > 0}[1{yn = +} − logit(T) + (X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) c,r > 0} ! ×X − Ω 1 log5(d) η NP X v∈M X N n=1 1{|P(X(T) n; v)| > 0} 1{yn = +} − logit(T) + (X(T) n) p∈P(X(T ) n ;v) 1{⟨w (T) +,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) c,r > 0} ! ×X <0. (156) $$(156)$$ Consequently, σ ⟨w (T +1) +,r , √1 ± ιv ∗ + ζ (τ) n,p⟩ + b (T +1) +,r =σ ⟨w (T) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ + b (T) +,r + ⟨∆w (T) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ + ∆b (T) +,r (157) ≤σ ⟨w (T) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ + b (T) +,r ≤σ ⟨w (0) +,r, √1 ± ιv ∗ + ζ (τ) n,p⟩ + b (0) +,r. $$(158)$$ Corollary F.1.1 (Bias update upper bound). Choose any Te ≤ poly(d)*. With probability at least* 1 − O mk+*NP T*e poly(d) , for all t ∈ [0, Te], any neuron w+,r, and any v ∈ U(0) +,r, $$\Delta b_{+,r}^{(t)}<-\Omega\left(\frac{p o l y o l g(d)}{\log^{5}(d)}\right)\left|\langle\Delta w_{+,r}^{(t)},\zeta^{*}\rangle\right|.\tag{1}$$ Proof. Conditioning on the high-probability events of Theorem F.1 above, we know that for any neuron indexed (+, r), at any time t ≤ Te, its update takes the form $$\begin{split}\Delta\mathbf{w}_{+,r}^{(t)}=\frac{\eta}{\textit{NP}}\sum_{\mathbf{v}\in\mathcal{U}_{r,p}^{(t)}}\sum_{n=1}^{N}\mathbb{1}\left\{[\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\right\}\left[\mathbb{1}\left\{y_{n}=+\right\}-\log\mathrm{i}_{+}^{(t)}(\mathbf{X}_{n}^{(t)})\right]\\ \times\sum_{\mathbf{v}\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})}\mathbb{1}\left\{\langle\mathbf{w}_{+,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}+\mathcal{L}_{n,p}^{(t)}\right\rangle+b_{c,r}^{(t)}>0\right\}\left(\alpha_{n,p}^{(t)}\mathbf{v}+\mathcal{L}_{n,p}^{(t)}\right),\end{split}\tag{159}$$ It follows that, with probability at least 1 − O 1 poly(d) , ⟨∆w (t) +,r, ζ ∗⟩ = η NP X v∈U(0) +,r X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}⟨α (t) n,pv + ζ (t) n,p, ζ ∗⟩ ×X (160) v∈U(0) +,r X N ≤η NP X n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}O 1 polylog(d) ×X On the other hand, ∆w (t) +,r 2 ≥ η NP X v∈U(0) +,r X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}α (t) n,pv 2 ×X − η NP X v∈U(0) +,r X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] (161) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}ζ (t) n,p 2 ×X v∈U(0) +,r X N ≥η NP X n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} √1 − ι − O 1 log9(d) ×X Clearly, $$\left\|\Delta\mathbf{w}_{+,r}^{(t)}\right\|_{2}\geq\Omega\left(\text{polylog}(d)\left|\left\langle\Delta\mathbf{w}_{+,r}^{(t)},\zeta^{*}\right\rangle\right|\right).\tag{162}$$ The conclusion follows. Lemma F.2 (Nonactivation invariance). Let the assumptions in Theorem D.1 hold. Denote the set of features C(v ∗) = {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−*} − {*v ∗}*. If the update term for neuron* w (t) +,r *can be written as follows* ∆w (t) +,r = η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} α (t) n,pv + ζ (t) n,p, (163) ×X then given any τ > t*, the following inequality holds with probability at least* 1−O NP poly(d) *for all* v ∗*-dominated* patch x (τ) n,p: $$\langle\Delta\mathbf{w}_{+,r}^{(t)},\mathbf{x}_{n,p}^{(\tau)}\rangle+\Delta b_{+,r}^{(t)}<0$$ $$(164)$$ +,r < 0 (164) Proof. Let us fix a neuron w+,r satisfying the update expression in the Lemma statement, and fix some *τ > t*. Firstly, the bias update for this neuron can be upper bounded via the reverse triangle inequality: ∆b (t) +,r = − ∆w (t) +,r 2 log5(d) ≤ −1 log5(d) η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}α (t) n,pv 2 ×X (165) +1 log5(d) η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}ζ (t) n,p 2 ×X Let us further upper bound the two *∥ · ∥*2 terms separately. Firstly, X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}α (t) n,pv 2 ×X =X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}α (t) n,pv 2 ×X (166) =X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) ×X p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}α (t) n,p ∥v∥2 v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) ≥X p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} √1 − ι ×X Secondly, with probability at least 1 − O NP poly(d) , X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}ζ (t) n,p 2 ×X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) ≤X (167) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} ζ (t) n,p 2 ×X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) ≤X p∈P(X(0) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} 1 log9(d) ×X Therefore, with probability at least 1 − O NP poly(d) , we can bound the update to the bias as follows: $$\Delta b^{(t)}_{t,\tau\ 1}$$ $$\leq-\frac{1}{\log^{2}(d)}\frac{\eta}{N\mathcal{P}}\left(\sqrt{1-\iota}-\frac{1}{\log^{9}(d)}\right)$$ $$\times\left(\sum_{v\in C(v^{\prime})}\sum_{n=1}^{N}\mathbb{1}\{|\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})|>0\}\left|\mathbb{1}\{y_{n}=+\}-\log\mathrm{i}^{(t)}_{+}(\mathbf{X}^{(t)}_{n})\right|\right.\tag{168}$$ $$\times\left.\sum_{p\in\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})}\mathbb{1}\{\langle\mathbf{w}^{(t)}_{+,\tau},\alpha^{(t)}_{n,p}\mathbf{v}+\mathbf{\zeta}^{(t)}_{n,p}+b^{(t)}_{c,\tau}>0\}\right)$$ Furthermore, with probability at least 1 − e bility at least $1-e^{-\Omega(d)+O(\log(d))}>1-O\left(\frac{NP}{\text{poly}(d)}\right)$, the following holds for all $n,p$: $$\langle\alpha^{(t)}_{n,p}\mathbf{v},\zeta^{(\tau)}_{n,p}\rangle,\;\langle\zeta^{(t)}_{n,p},\alpha^{(\tau)}_{n,p}\mathbf{v}^*\rangle,\;\langle\zeta^{(t)}_{n,p},\zeta^{(\tau)}_{n,p}\rangle < O\left(\frac{1}{\log^g(d)}\right).\eqno(169)$$ derivations, they imply that with probability at least $1-O\left(\frac{NP}{\text{poly}(d)}\right)$, for any $\mathbf{x}^{(\tau)}_{n,p}$ . dominated by v ∗, ⟨∆w (t) +,r, x (τ) n,p⟩ + ∆b (t) +,r =⟨∆w (t) +,r, α(τ) n,pv ∗ + ζ (τ) n,p⟩ + ∆b (t) +,r =η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] ×X p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}⟨α (t) n,pv + ζ (t) n,p, α(τ) n,pv ∗ + ζ (τ) n,p⟩ + ∆b (t) +,r =η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} ⟨α (t) n,pv, ζ (τ) n,p⟩ + ⟨ζ (t) n,p, α(τ) n,pv ∗⟩ + ⟨ζ (t) n,p, ζ (τ) n,p⟩ ×X + ∆b (t) +,r $$(170)$$ ≤η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} × O 1 log9(d) + ∆b (t) +,r ×X ≤η NP O 1 log9(d) −1 log5(d) √1 − ι −1 log9(d) × X v∈C(v+) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} ! ×X <0. This completes the proof. Lemma F.3 (Nonactivation on noise patches). *Let the assumptions in Theorem D.1 hold.* Denote the set of features M = {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}*. If the update term for neuron* w (t) +,r can be written as follows $$\begin{split}\Delta\mathbf{w}_{+,r}^{(t)}=&\frac{\eta}{\mathcal{NP}}\sum_{\mathbf{w}\in\mathcal{M}}\sum_{n=1}^{N}\mathbbm{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\}[\mathbbm{1}\{y_{n}=+\}-log\hat{u}_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]\\ &\times\sum_{\mathbf{p}\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{w})}\mathbbm{1}\{\langle\mathbf{w}_{+,r}^{(t)},\alpha_{\mathbf{u},\mathbf{p}}^{(t)}\mathbf{v}+\mathbf{\zeta}_{n,\mathbf{p}}^{(t)}\rangle+b_{c,r}^{(t)}>0\}\left(\alpha_{n,\mathbf{p}}^{(t)}\mathbf{v}+\mathbf{\zeta}_{n,\mathbf{p}}^{(t)}\right),\end{split}\tag{171}$$ then $$\Delta b^{(t)}_{+,\tau}$$ $$\leq-\frac{1}{\log^{5}(d)}\frac{\eta}{NP}\left(\sqrt{1-\iota}-\frac{1}{\log^{9}(d)}\right)$$ $$\times\left(\sum_{\mathbf{v}\in\mathcal{M}}\sum_{n=1}^{N}\mathbb{1}\{|\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})|>0\}\left|\mathbb{1}\left\{y_{n}=+\right\}-logit^{(t)}_{+}(\mathbf{X}^{(t)}_{n})\right|\right.\tag{172}$$ $$\times\left.\sum_{p\in\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})}\mathbb{1}\left\{(\mathbf{w}^{(t)}_{+,\tau},\alpha^{(t)}_{n,p}\mathbf{v}+\mathbf{\zeta}^{(t)}_{n,p})+b^{(t)}_{c,\tau}>0\right\}\right).$$ $$(173)$$ Moreover, for any τ > t*, the following inequality holds with probability at least* 1 − O NP poly(d) for all noise patches x (τ) n,p = ζ (τ) n,p: $$\langle\Delta\mathbf{w}_{+,r}^{(t)},\mathbf{x}_{n,p}^{(\tau)}\rangle+\Delta b_{+,r}^{(t)}<0\tag{1}$$ Proof. Similar to the proof of Lemma F.2, we can estimate the update to the bias term $$\Delta b^{(t)}_{t,\tau}$$ $$\leq-\frac{1}{\log^{2}(d)}\frac{\eta}{NP}\left(\sqrt{1-\iota}-\frac{1}{\log^{2}(d)}\right)$$ $$\times\left(\sum_{v\in\mathcal{M}}\sum_{n=1}^{N}\mathbb{1}\left\{\left|\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})>0\right\right\}\right|\mathbb{1}\{y_{n}=+\}-\log\!\mathrm{i}^{(t)}_{+}(\mathbf{X}^{(t)}_{n})\right|\tag{174}$$ $$\times\sum_{p\in\mathcal{P}(\mathbf{X}^{(t)}_{n};\mathbf{v})}\mathbb{1}\left\{\langle\mathbf{w}^{(t)}_{+,\tau},\alpha^{(t)}_{n,p}\mathbf{v}+\mathbf{\zeta}^{(t)}_{n,p}\right\rangle+b^{(t)}_{c,\tau}>0\right\}$$ ⟨∆w (t) +,r, x (τ) n,p⟩ + ∆b (t) +,r =⟨∆w (t) +,r, ζ (τ) n,p⟩ + ∆b (t) +,r =η NP X v∈M X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0}⟨α (t) n,pv + ζ (t) n,p, ζ (τ) n,p⟩ + ∆b (t) +,r ×X =η NP X v∈M X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = +} − logit(t) + (X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} ⟨α (t) n,pv, ζ (τ) n,p⟩ + ⟨ζ (t) n,p, ζ (τ) n,p⟩ + ∆b (t) +,r ×X ≤η NP X v∈M X N (175) n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} × O 1 log9(d) + ∆b (t) +,r ×X ≤η NP O 1 log9(d) −1 log5(d) √1 − ι −1 log9(d) × X v∈M X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = +} − logit(t) + (X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) c,r > 0} ! ×X <0. Then for any x (τ) n,p = ζ (τ) n,p with *τ > t*, with probability at least 1 − O mNP poly(d) , ## G Fine-Grained Learning This section treats the learning dynamics of using fine-grained labels to train the NN; the analysis will be much simpler since the technical analysis overlaps significantly with that in the previous sections. The training procedure is exactly the same as in the coarse-grained training setting. We explicitly write them out here to avoid any possible confusion. The learner for fine-grained classification is written as follows for c ∈ [k+]: $$F_{+,c}(\mathbf{X})=\sum_{r=1}^{m_{+,c}}a_{+,c,r}\sum_{p=1}^{P}\sigma(\langle\mathbf{w}_{+,c,r},\mathbf{x}_{p}\rangle+b_{+,c,r}),\ \ c\in[k_{+}]\tag{1}$$ $$(176)$$ $$(177)$$ with frozen linear classifier weights a+*,c,r* = 1. Same definition applies to the − classes. The SGD dynamics induced by the training loss is now w (t+1) +,c,r = w (t) +,c,r + η 1 NP X N n=1 1{yn = (+, c)}[1 − logit(t) +,c(X(t) n )] X p∈[P ] σ ′(⟨w (t) +,c,r, x (t) n,p⟩ + b (t) c,r)x (t) n,p+ 1{yn ̸= (+, c)}[−logit(t) +,c(X(t) n )] X p∈[P ] σ ′(⟨w (t) +,c,r, x (t) n,p⟩ + b (t) c,r)x (t) n,p! (177) The bias is manually tuned according to the update rule $$b_{+,c,r}^{(t+1)}=b_{+,c,r}^{(t)}-\frac{\|\Delta\mathbf{w}_{+,c,r}^{(t)}\|_{2}}{\log^{5}(d)}\tag{1}$$ $$(178)$$ We assign m+,c = Θ(d 1+2c0 ) neurons to each subclass (+, c). For convenience, we write m = dm+,c. The initialization scheme is identical to the coarse-training case, except we choose a slightly less negative b (0) c,r = −σ0 √2 + 2c0 plog(d). The parameter choices remain the same as before. ## G.1 Initialization Geometry Definition G.1. Define the following sets of interest of the hidden neurons: $$1.\ {\mathcal{U}}_{+,c,r}^{(0)}=\{\mathbf{v}\in{\mathcal{V}}:\langle\mathbf{w}_{+,c,r}^{(0)},\mathbf{v}\rangle\geq\sigma_{0}{\sqrt{2+2c_{0}}}{\sqrt{\log(d)-{\frac{1}{\log^{5}(d)}}}}\}$$ 2. Given $\mathbf{v}\in\mathcal{V}$, $S^{*(0)}_{+,c}(\mathbf{v})\subseteq(+,c)\times[m_{+,c}]$ satisfies: 1. $\langle\mathbf{w}^{(0)}_{+,c},\mathbf{v}\rangle\geq\sigma_{0}\sqrt{2+2c_{0}}\sqrt{\log(d)+\frac{1}{\log^{b}(d)}}$ 2. $\forall\mathbf{v}^{\prime}\in\mathcal{V}$ s.t. $\mathbf{v}^{\prime}\perp\mathbf{v}$, $\langle\mathbf{w}^{(0)}_{+,c,r},\mathbf{v}^{\prime}\rangle<\sigma_{0}\sqrt{2+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{b}(d)}}$ 3. Given $\mathbf{v}\in\mathcal{V}$, $S^{(0)}_{+,c}(\mathbf{v})\subseteq(+,c)\times[m_{+,c}]$ satisfies: 1. $\langle\mathbf{w}^{(0)}_{+,c,r},\mathbf{v}\rangle\geq\sigma_{0}\sqrt{2+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{b}(d)}}$ 4. For any $(+,c,r)\in S^{*(0)}_{+,c,reg}\subseteq(+,c)\times[m_{+,c}]$: 1. $\langle\mathbf{w}^{(0)}_{+,c,r},\mathbf{v}\rangle\leq\sigma_{0}\sqrt{10}\sqrt{\log(d)}\;\forall\mathbf{v}\in\mathcal{V}$ 2. $\left|\mathcal{U}^{(0)}_{+,c,r}\right|\leq O(1)$ The same definitions apply to the −-class neurons. Proposition 2. At t = 0, for all v ∈ D*, the following properties are true with probability at least* 1 − d −2 over the randomness of the initialized kernels: 1. |S ∗(0) +,c (v)|, |S (0) +,c(v)| = Θ √ 1 log(d) d c0 2. In particular, |S ∗(0) y(v)| |S (0) y′ (v′)| − 1 = O 1 log5(d) and |S ∗(0) y(v)| |S ∗(0) y′(v′)| − 1 = O 1 log5(d) for any y, y′ ∈ {(+, c)} k+ c=1 ∪ {(−, c)} k− c=1 and common or fine-grained features v, v ′. 3. S (0) +,c,reg = [m+,c] σ ⟨w (0) +*,c,r*, x (τ) n,p⟩ + b (0) +,c,r. The same properties apply to the −-class neurons. Proof. This proof proceeds in virtually the same way as in the proof of Proposition 1, so we omit it here. ## G.2 Poly-Time Properties Theorem G.1. Fix any t ∈ [0, Te], assuming Te ∈ poly(d). 1. (Non-activation invariance) For any τ ≥ t*, with probability at least* 1 − O mk+*NP t* poly(d) *, for any feature* v ∈ {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}*, for every* t ′ ≤ t, (+*, c, r*) ∈/ S (0) +,c(v) and v*-dominated patch* sample x (τ) n,p = α (τ) n,pv + ζ (τ) n,p*, the following holds:* $$\sigma\left(\langle\mathbf{w}_{+,c,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,c,r}^{(t^{\prime})}\right)=0$$ $$(179)$$ 2. (Non-activation on noise patches) For any τ ≥ t*, with probability at least* 1 − O *mNP t* poly(d) *, for every* c ∈ [k+], r ∈ [m] *and noise patch* x (τ) n,p = ζ (τ) n,p*, the following holds:* $$\sigma\left(\langle\mathbf{w}_{+,c,r}^{(t)},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,c,r}^{(t)}\right)=0\tag{1}$$ $$(180)$$ 3. (Off-diagonal nonpositive growth) Given fine-grained class (+, c) and any τ ≥ t, with probability at least 1 − O mk+*NP t* poly(d) *, for any* t ′ ≤ t, any feature v ∈ {v−,c} k− c=1 ∪ {v−*} ∪ {*v+,c′}c ′̸=c*, any neuron* w+*,c,r* ∈ S (0) +,c(v) and any v*-dominated patch* x (τ) n,p = α (τ) n,pv + ζ (τ) n,p, σ ⟨w (t ′) +*,c,r*, x (τ) n,p⟩ + b (t ′) +*,c,r*≤ Proof. The proof of this theorem is similar to that of Theorem F.1, but with some subtle differences. Base case t = 0. 1. (Nonactivation invariance) Choose any v ∗from the set {v+,c} k+ c=1 ∪ {v−,c} k− c=1 ∪ {v+, v−}. We will work with neuron sets in the "+" class in this proof; the "−"-class case can be handled in the same way. First, given τ ≥ 0, we need to show that, for every n such that |P(X (τ) n ; v ∗)| > 0 and p ∈ P(X (τ) n ; v ∗), for every (+*, c, r*) neuron index, $$\langle\mathbf{w}_{+,c,r}^{(0)},\mathbf{v}^{*}\rangle<\sigma_{0}\sqrt{2+2c_{0}}\sqrt{\log(d)-\frac{1}{\log^{5}(d)}}\implies\sigma\left(\langle\mathbf{w}_{+,c,r}^{(0)},\mathbf{x}_{n,p}^{(r)}\rangle+b_{+,c,r}^{(0)}\right)=0\tag{181}$$ This is indeed true. The following holds with probability at least 1 − O mNP poly(d) for all (+, r) ∈/ S (0) + (v) and all such x (τ) n,p: ⟨w (0) +,c,r, x (τ) n,p⟩ + b (0) +,c,r ≤σ0 √1 + ι q(2 + 2c0)(log(d) − 1/ log5(d)) + O σ0 log9(d) − √2 + 2c0 plog(d)σ0 =σ0 (2 + 2c0)(1 + ι)(log(d) − 1/ log5(d)) − (2 + 2c0) log(d) q(2 + 2c0)(log(d) − 1/ log5(d)) + √4 + 2c0 plog(d) + O 1 log9(d) (182) =σ0 (2 + 2c0)(ιlog(d) − (1 + ι)/ log5(d)) q(2 + 2c0)(log(d) − 1/ log5(d)) + √2 + 2c0 plog(d) + O 1 log9(d) <0, $$(183)$$ The first equality holds by utilizing the identity a − b = a 2−b 2 a+b . As a consequence, σ(⟨w (0) +*,c,r*, x (τ) n,p⟩ + b (0) +,r) = 0. 2. (Non-activation on noise patches) Invoking Lemma H.3, for any τ ≥ 0, with probability at least 1 − O mNP poly(d) , we have for all possible choices of r ∈ [m] and the noise patches x (τ) n,p = ζ (τ) n,p: $$\left|\langle\mathbf{w}_{+,c,r}^{(0)},\xi_{n,p}^{(\tau)}\rangle\right|\leq O(\sigma_{0}\sigma_{\zeta}\sqrt{d\log(d)})\leq O\left(\frac{\sigma_{0}}{\log^{9}(d)}\right)\ll b_{+,r}^{(0)}.$$ Therefore, no neuron can activate on the noise patches at time t = 0. 3. (Off-diagonal nonpositive growth) This point is trivially true at t = 0. Inductive step: we assume the induction hypothesis for t ∈ [0, T] (with *T < T*e of course), and prove the statements for t = T + 1. 1. (Nonactivation invariance) Again, choose any v ∗from the set {v+,c} k+ c=1 ∪ {v−,c} $$\{\mathbf{v}_{-,c}\}_{c=1}^{k_{-}}\cup\{\mathbf{v}_{+},\mathbf{v}_{-}\}.$$ We need to prove that given τ ≥ T + 1, with probability at least 1 − O mk+NP (T +1) poly(d) , for every t ′ ≤ T + 1, (+*, c, r*) neuron index and v ∗-dominated patch x (τ) n,p, $$(+,c,r)\notin S_{+,c}^{(0)}(\mathbf{v}^{*})\implies\sigma\left(\langle\mathbf{w}_{+,c,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(\tau)}\rangle+b_{+,c,r}^{(t^{\prime})}\right)=0.$$ By the induction hypothesis of point 1., with probability at least 1 − O mk+*NP T* poly(d) , the following is already true on all the v ∗-dominated patches at time t ′ ≤ T: $$(+,c,r)\not\in S_{+,c}^{(0)}(\mathbf{v}^{*})\implies\sigma\left(\langle\mathbf{w}_{+,c,r}^{(t^{\prime})},\mathbf{x}_{n,p}^{(T)}\rangle+b_{+,c,r}^{(t^{\prime})}\right)=0.$$ In particular, σ ⟨w $${\mathbf{w}}_{+,c,r}^{(T)},{\mathbf{x}}_{n,p}^{(T)}\rangle+b_{+,c,r}^{(T)}\rangle=0.$$ In other words, no (+*, c, r*) ∈/ S (0) +,c(v ∗) can be updated on the v ∗-dominated patches at time t = T. Furthermore, the induction hypothesis of point 2. also states that the network cannot activate on any noise patch x (T) n,p = ζ (T) n,p with probability at least 1 − O *mNP T* poly(d) . Therefore, the neuron update for those $$(184)$$ $$(185)$$ (+*, c, r*) ∈/ S (0) +,c(v ∗) takes the form ∆w (T) +,c,r = η NP X v∈C(v∗) X N n=1 1{|P(X(T) n; v)| > 0}[1{yn = (+, c)} − logit(T) +,c(X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,c,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,c,r > 0} α (T) n,p v + ζ (T) n,p (186) ×X Conditioning on this high-probability event, we have ∆b (t) +,c,r = − ∆w (t) +,c,r 2 log5(d) ≤ −1 log5(d) η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +,c,r > 0}α (t) n,pv 2 ×X (187) +1 log5(d) η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +,c,r > 0}ζ (t) n,p 2 ×X Let us further upper bound the two *∥ · ∥*2 terms separately. Firstly, X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0}α (t) n,pv 2 ×X =X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = (+, c)} − logit(t) +,c(X(t) n ) (188) p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0}α (t) n,p ∥v∥2 ×X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = (+, c)} − logit(t) +,c(X(t) n ) ≥X p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0} √1 − ι ×X For the second *∥ · ∥*2 term consisting purely of noise, note that since all the ζ (t) n,p's are independent Gaussian random vectors, the standard deviation of the sum is in fact $$\left\{\begin{array}{l}\sum_{\mathbf{v}\in C(\mathbf{v}^{*})}\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})}\mathbbm{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\}\mathbbm{1}\{\langle\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(t)}\rangle+b_{+,c,r}^{(t)}>0\}\\ \\ \times[\mathbbm{1}\{y_{n}=(+,c)\}-\log\mathbbm{1}_{+,c}^{(t)}(\mathbf{X}_{n}^{(t)})]^{2}\}^{1/2}\sigma_{c}.\end{array}\right.\tag{1}$$ $$(189)$$ With the basic property that qPj c 2 j ≤Pj |cj | for any sequence of real numbers c1, c2*, ...*, we know this standard deviation can be upper bounded by $$\sum_{\mathbf{v}\in C(\mathbf{v}^{n})}\sum_{n=1}^{N}\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)},\mathbf{v})}\mathbbm{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\}\mathbbm{1}\{\langle\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}+\zeta_{n,p}^{(t)}\rangle+b_{+,c,r}^{(t)}>0\}\tag{190}$$ $$\times\left|\mathbbm{1}\{y_{n}=(+,c)\}-\log_{\uparrow,c}^{(t)}\langle\mathbf{X}_{n}^{(t)}\rangle\right|\sigma_{\zeta}$$ It follows that with probability at least 1 − O 1 poly(d) , X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +,c,r > 0}ζ (t) n,p 2 ×X (191) v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = (+, c)} − logit(t) +,c(X(t) n ) ≤X p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +,c,r > 0} 1 log9(d) ×X Therefore, we can upper bound the bias update as follows: $$\Delta b^{(t)}_{\nu,c,r}\leq-\frac{1}{\log^{5}(d)}\frac{\eta}{NP}\left(\sqrt{1-\iota}-\frac{1}{\log^{5}(d)}\right)$$ $$\times\left(\sum_{\nu\in\mathbb{C}^{(r)}}\sum_{n=1}^{N}\mathbb{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\}\left|\mathbb{1}\{y_{n}=(+,c)\}-\text{logit}^{(t)}_{+,c}(\mathbf{X}_{n}^{(t)})\right|\right.\tag{192}$$ $$\times\left.\sum_{p\in\mathbb{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})}\mathbb{1}\{\langle\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}+\zeta_{n,p}^{(t)}\rangle+b_{+,c,r}^{(t)}>0\}\right)$$ $$(193)$$ Furthermore, with probability at least 1 − O NP poly(d) , the following holds for all *n, p*: $$\langle\alpha_{n,p}^{(t)}\mathbf{v},\mathbf{\xi}_{n,p}^{(\tau)}\rangle,\;\langle\mathbf{\xi}_{n,p}^{(t)},\alpha_{n,p}^{(\tau)}\mathbf{v}^{*}\rangle,\;\langle\mathbf{\xi}_{n,p}^{(t)},\mathbf{\xi}_{n,p}^{(\tau)}\rangle<O\left(\frac{1}{\log^{9}(d)}\right).$$ Combining the above derivations, they imply that with probability at least 1 − O NP poly(d) , for any x (τ) n,p dominated by v ∗, ⟨∆w (t) +*,c,r*, x (τ) n,p⟩ + ∆b (t) +*,c,r* =⟨∆w (t) +,c,r, α(τ) n,pv ∗ + ζ (τ) n,p⟩ + ∆b (t) +*,c,r* =η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] ×X p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0}⟨α (t) n,pv + ζ (t) n,p, α(τ) n,pv ∗ + ζ (τ) n,p⟩ + ∆b (t) +*,c,r* =η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0}[1{yn = (+, c)} − logit(t) +,c(X(t) n )] p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0} ⟨α (t) n,pv, ζ (τ) n,p⟩ + ⟨ζ (t) n,p, α(τ) n,pv ∗⟩ + ⟨ζ (t) n,p, ζ (τ) n,p⟩ ×X + ∆b (t) +*,c,r* $$(194)$$ $$(195)$$ ≤η NP X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = (+, c)} − logit(t) +,c(X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +,c,r > 0} × O 1 log9(d) + ∆b (t) +*,c,r* ×X ≤η NP O 1 log9(d) −1 log5(d) √1 − ι −1 log9(d) × X v∈C(v∗) X N n=1 1{|P(X(t) n ; v)| > 0} 1{yn = (+, c)} − logit(t) +,c(X(t) n ) p∈P(X(t) n ;v) 1{⟨w (t) +,c,r, α(t) n,pv + ζ (t) n,p⟩ + b (t) +*,c,r* > 0} ! ×X <0. Therefore, with probability at least 1 − O mNP poly(d) , the following holds for the relevant neurons and v ∗- dominated patches: $\langle\Delta\mathbf{w}_{+,c,r}^{(T)},\mathbf{x}_{n,p}^{(\tau)}\rangle+\Delta b_{+,c,r}^{(T)}<0$. In conclusion, with τ ≥ T + 1, with probability at least 1 − O mNP poly(d) , for every (+*, c, r*) ∈/ S (0) +,c(v ∗) and relevant (*n, p*)'s, $$\langle\mathbf{w}_{+,c,r}^{(T)}+\Delta\mathbf{w}_{+,c,r}^{(T)},\mathbf{x}_{n,p}^{(*)}\rangle+b_{+,c,r}^{(T)}+\Delta b_{+,c,r}^{(T)}=\langle\mathbf{w}_{+,c,r}^{(T+1)},\mathbf{x}_{n,p}^{(*)}\rangle+b_{+,c,r}^{(T+1)}<0,$$ which leads to ⟨w (t ′) +*,c,r*, x (τ) n,p⟩ + b (t ′) +*,c,r* < 0 for all t ′ ≤ T + 1 with probability at least 1 − O (196) $\textit{0}\left(\frac{m k_{+}NP(T+1)}{n k_{+}n k_{-}}\right)$ poly(d) (also by taking union bound over all the possible choices of v ∗ at time T + 1). This finishes the inductive step for point 1. 2. (Non-activation on noise patches) The inductive step for this part is very similar to (and even simpler than) the inductive step of point 1, so we omit the calculations here. 3. (Off-diagonal nonpositive growth) By the induction hypothesis's high-probability event, we already have that, given any fine-grained class (+, c), τ ≥ T + 1, for any feature v ∗ ∈ {v−,c} k− c=1 ∪ {v−*} ∪ {*v+,c′}c ′̸=c and any neuron w+*,c,r*, σ ⟨w (T) +*,c,r*, x (τ) n,p⟩ + b (T) +*,c,r*≤ σ ⟨w (0) +*,c,r*, x (τ) n,p⟩ + b (0) +*,c,r*. We just need to show that ⟨∆w (t) +*,c,r*, x (τ) n,p⟩ + ∆b (T) +,r ≤ 0 to finish the proof; the rest proceeds in a similar fashion to the induction step of point 3 in the proof of Theorem F.1. Similar to the induction step of point 1, denoting M to be the set of all common and fine-grained features, the update expression of any neuron (+*, c, r*) has to be $$\begin{split}\Delta\mathbf{w}_{+,c,r}^{(T)}&=\frac{\eta}{NP}\sum_{\mathbf{v}\in\mathcal{M}}\sum_{n=1}^{N}\mathbbm{1}\{|\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v})|>0\}[\mathbbm{1}\{y_{n}=(+,c)\}-\text{logit}_{+,c}^{(T)}(\mathbf{X}_{n}^{(T)})]\\ &\quad\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(T)};\mathbf{v})}\mathbbm{1}\{\langle\mathbf{w}_{+,c,r}^{(T)},\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\rangle+b_{+,c,r}^{(T)}>0\}\left(\alpha_{n,p}^{(T)}\mathbf{v}+\mathbf{\zeta}_{n,p}^{(T)}\right)\end{split}\tag{197}$$ Written more explicitly, ∆w (T) +,c,r = η NP X v∈M−{v∗} X N n=1 1{|P(X(T) n; v)| > 0}1{yn = (+, c)}[1 − logit(T) +,c(X(T) n)] p∈P(X(T ) n ;v) 1{⟨w (T) +,c,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,c,r > 0} α (T) n,p v + ζ (T) n,p ×X (198) −η NP X N n=1 1{yn ̸= (+, c)}1{|P(X(T) n; v ∗)| > 0}[logit(T) +,c(X(T) n)] p∈P(X(T ) n ;v∗) 1{⟨w (T) +,c,r, α(T) n,p v ∗ + ζ (T) n,p ⟩ + b (T) +,c,r > 0} α (T) n,p v ∗ + ζ (T) n,p ×X It follows that with probability at least 1 − O mNP poly(d) , for relevant *n, p, r*, we have ⟨∆w (T) +,c,r, α(τ) n,pv ∗ + ζ (τ) n,p⟩ <η NP X v∈M−{v∗} X N n=1 1{|P(X(T) n; v)| > 0}1{yn = (+, c)}[1 − logit(T) +,c(X(T) n)] (199) p∈P(X(T ) n ;v) 1{⟨w (T) +,c,r, α(T) n,p v + ζ (T) n,p ⟩ + b (T) +,c,r > 0}O 1 log9(d) ×X Furthermore, similar to the induction step of point 1, we can estimate the bias update as follows: $$\Delta\phi^{(t)}_{+,c,r}$$ $$\leq-\Omega\left(\frac{1}{\log^{\phi}(d)}\right)\frac{\eta}{N\mathcal{P}}\Bigg{(}\sum_{v\in\mathcal{M}}\sum_{n=1}^{N}\mathbb{1}\left\{|\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})|>0\right\}\left|\mathbb{1}\left\{y_{n}=(+,c)\right\}-\log\!\mathrm{i}\!\!\left({}_{+,c}^{(t)}(\mathbf{X}_{n}^{(t)})\right|\right.\tag{200}$$ $$\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{v})}\mathbb{1}\left\{\left\{\left(\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}+\zeta_{n,p}^{(t)}\right)+b_{+,c,r}^{(t)}>0\right\}\right)$$ It follows that, indeed, ⟨∆w (T) +*,c,r*, x (τ) n,p⟩ + ∆b (T) +*,c,r* ≤ 0, which completes the induction step of point 3. ## G.3 Training Choose an arbitrary constant B ∈ [Ω(1), log(3/2)]. Definition G.2. Let T0(B) > 0 be the first time that there exists some X (t) n and c such that F (T0(B)) y (X (T0(B)) n ) ≥ B for any n ∈ [N] and y ∈ {(+, c)} k+ c=1 ∪ {(−, c)} k− c=1. We write T0(B) as T0 for simplicity of notation when the context is clear. Lemma G.2. *With probability at least* 1 − O mk+*NP T*0 poly(d) , the following holds for all t ∈ [0, T0): 1. (On-diagonal common-feature neuron growth) For every c ∈ [k+], every (+, c, r),(+*, c, r*′) ∈ S ∗(0) +,c (v+), Show $\textbf{w}_{+,c,r}^{(t)}\in\textbf{w}_{+,c,r}^{(0)}=\textbf{w}_{+,c,r'}^{(t)}-\textbf{w}_{+,c,r'}^{(0)}$ . $$(201)^{\frac{1}{2}}$$ $$(202)^{\frac{1}{2}}$$ $$(203)$$ _Moreover,_ $$\Delta\mathbf{w}_{+,r}^{(t)}=[1/4,2/3]\sqrt{1\pm\iota}\left(1\pm s^{*-1/3}\right)\eta\frac{s^{*}}{2k_{+}P}\mathbf{v}_{+}+\Delta\mathbf{\zeta}_{+,r}^{(t)}$$ _where $\Delta\mathbf{\zeta}_{+,c,r}^{(t)}\sim\mathcal{N}(\mathbf{0},\sigma_{\Delta\mathbf{\zeta}_{+,c,r}}^{(t)2}\mathbf{I})$, $\sigma_{\Delta\mathbf{\zeta}_{+,c,r}}^{(t)}=\Theta(1)\times\eta c\frac{\sqrt{\nu^{2}}}{\sqrt{\nu^{2}}N}$. The bias updates satisfy_ $$\Delta\mathbf{b}_{+,c,r}^{(t)}=-\Theta\left(\frac{\eta s^{*}}{k_{+}P\log^{5}(d)}\right).$$ +,r (202) Furthermore, every (+, r) ∈ S ∗(0) + (v+) activates on all the v+*-dominated patches at time* t. 2. (On-diagonal finegrained-feature neuron growth) For every c ∈ [k+] and every (+, c, r),(+*, c, r*′) ∈ S ∗(0) +,c (v+,c), $$\mathbf{w}_{+,c,r}^{(t)}-\mathbf{w}_{+,c,r}^{(0)}=\mathbf{w}_{+,c,r^{\prime}}^{(t)}-\mathbf{w}_{+,c,r^{\prime}}^{(0)}$$ Moreover, ∆w (t) +,c,r = 1 ± O 1 k+ √1 ± ι 1 ± s ∗−1/3ηs ∗ 2k+P v+,c + ∆ζ (t) +,r (205) where ζ (t) +,c,r ∼ N (0, σ (t)2 ∆ζ+,cr I), and σ (t) ∆ζ+,r = 1 ± O 1 k+ 1 ± s ∗−1/3ησζ √s ∗ P √2Nk+ . The bias updates satisfy ∆b (t) +,c,r = −Θ ηs∗ k+P log5(d) . (206) $$(204)^{\frac{1}{2}}$$ $$(205)$$ $$(206)$$ Furthermore, every (+*, c, r*) ∈ S ∗(0) +,c (v+,c) activates on all the v+*-dominated patches at time* t. 3. The above results also hold with the "+" and "−" class signs flipped. Proof. The proof of this theorem proceeds in a similar fashion to Theorem D.1, with some variations for the common-feature neurons. We shall prove the statements in this theorem via induction. We focus on the +-class neurons; −-class neurons' proofs are done in the same fashion. First of all, relying on the (high-probability) event of Theorem G.1, we know that we can simplify the update expressions for the neurons in S ∗(0) +,c (v+,c) to the form $$\Delta\mathbf{w}_{+,c,r}^{(t)}=\frac{\eta}{NP}\sum_{n=1}^{N}\mathbbm{1}\{y_{n}=(+,c)\}[1-\log\mathrm{i}_{+,c}^{(t)}(\mathbf{X}_{n}^{(t)})]\tag{1}$$ $$\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)},\mathbf{w}_{+,c})}\mathbbm{1}\{\langle\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}_{+,c}+\zeta_{n,p}^{(t)}\rangle+b_{+,c,r}^{(t)}>0\}\left(\alpha_{n,p}^{(t)}\mathbf{v}_{+,c}+\zeta_{n,p}^{(t)}\right),$$ $$(207)$$ and for the neurons in S ∗(0) +,c (v+), the updates take the form ∆w (t) +,c,r n=1 1{yn = (+, c)}[1 − logit(t) +,c(X(t) n )] + X c ′∈[k+]−{c} 1{yn = (+, c′)}[−logit(t) +,c(X(t) n )] =η NP X N (208) p∈P(X(t) n ;v+) 1{⟨w (t) +,c,r, α(t) n,pv+ + ζ (t) n,p⟩ + b (t) +,c,r > 0} α (t) n,pv+ + ζ (t) n,p. ×X By definition of T0 and the fact that B ≤ log(3/2), for any n ∈ [N] and *t < T*0, we can write down a simple upper bound of logit(t) +,c(X (t) n ): $$\begin{split}\text{logit}_{+,c}^{(t)}(\mathbf{X}_{n}^{(t)})&=\frac{\exp(F_{+,c}(\mathbf{X}_{n}^{(t)}))}{\sum_{k^{\prime}=1}^{k}\exp(F_{+,c^{\prime}}(\mathbf{X}_{n}^{(t)}))+\sum_{c^{\prime}=1}^{k}\exp(F_{-,c^{\prime}}(\mathbf{X}_{n}^{(t)}))}\\ &\leq\frac{\frac{3}{2}}{2k_{+}}=\frac{3}{4k_{+}},\end{split}$$ $$(209)$$ $$(210)$$ and we can lower bound it as follows $$\operatorname*{logit}_{+,c}^{(t)}(\mathbf{X}_{n}^{(t)})\geq\frac{1}{2k_{+}\times\frac{3}{2}}=\frac{1}{3k_{+}},$$ The inductive proof for the fine-grained neurons S ∗(0) +,c (v+,c) is almost identical to that in the proof of Theorem D.1. The only notable difference here is that [1 − logit(t) +,c(X (t) n )] has the estimate 1 ± O 1 k+ . The inductive proof of the common-feature neurons S ∗(0) +,c (v+) requires more care as its update expression equation 208 is qualitatively different from the coarse-grained training case in Theorem D.1, so we present the full proof here. Base case, t = 0. With probability at least 1 − O mNP poly(d) , for every c ∈ [k+] and every (+*, c, r*) ∈ S $$S_{+,c}^{*(0)}(\mathbf{v}_{+}),$$ ⟨w (0) +,c,r, α(0) n,pv+ + ζ (0) n,p⟩ + b (0) +,c,r ≥ σ0 q(1 − ι)(2 + 2c0)(log(d) + 1/ log5(d)) −p(2 + 2c0) log(d) − O 1 log9(d) = σ0 (1 − ι)(2 + 2c0)(log(d) + 1/ log5(d)) − (2 + 2c0) log(d) q(1 − ι)(2 + 2c0)(log(d) + 1/ log5(d)) + p(2 + 2c0) log(d) − O 1 log9(d) (211) = σ0 (2 + 2c0)(−ιlog(d) + (1 − ι)/ log5(d)) q(1 − ι)(2 + 2c0)(log(d) + 1/ log5(d)) + p(2 + 2c0) log(d) − O 1 log9(d) > 0. This means all the v+-singleton neurons will be updated on all the v+-dominated patches at time t = 0. Therefore, we can write update expression equation 208 as follows $$\Delta\mathbf{w}_{+,c}^{(0)}$$ $$=\frac{\eta}{N\!P}\sum_{n=1}^{N}\left\{1\left\{y_{n}=(+,c)\right\}[1-\text{logit}_{+,c}^{(0)}(\mathbf{X}_{n}^{(0)})]+\sum_{c^{\prime}\in[k_{+}]-(c)}1\left\{y_{n}=(+,c^{\prime})\right\}[-\text{logit}_{+,c}^{(0)}(\mathbf{X}_{n}^{(0)})]\right\}$$ $$\quad\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(0)},\mathbf{w}_{+})}\left(\alpha_{n,p}^{(0)}\mathbf{w}_{+}+\mathbf{\zeta}_{n,p}^{(0)}\right).$$ The above is the $\alpha$-function of the $\alpha$-function. $$(213)^{\frac{1}{2}}$$ $$(214)$$ $$(212)$$ By concentration of the binomial random variable, we know that with probability at least 1 − e −Ω(log2(d)), for all n, $$\left|{\mathcal{P}}({\boldsymbol{X}}_{n}^{(0)};{\boldsymbol{v}}_{+})\right|=\left(1\pm s^{*-1/3}\right)s^{*}.$$ Now, with the estimates we derived for logit(t) +,c(X (t) n ) at the beginning of the proof and the independence of all the noise vectors ζ (0) n,p's, we arrive at $$\Delta\mathbf{w}_{+,r}^{(0)}=[1/4,2/3]\sqrt{1\pm\imath}\left(1\pm s^{*-1/3}\right)\eta\frac{s^{*}}{2k_{+}P}\mathbf{v}_{+}+\Delta\mathbf{\zeta}_{+,r}^{(0)}$$ where σ (0) ∆ζ+*,c,r* = Θ(1) × ησζ √s ∗ P √ 2N . Additionally, a byproduct of the above proof steps is that all the S ∗(0) +,c (v+) neurons indeed activate on all the v+-dominated patches at t = 0 with high probability. Now we examine the bias update. We first estimate ∆w (0) +*,c,r* 2 . With probability at least 1 − O m poly(d) the following upper bound holds for all neurons in S ∗(0) +,c (v+): $$\left\|\Delta\mathbf{w}_{+,c,r}^{(0)}\right\|_{2}\leq O\left(\eta\frac{s^{*}}{k_{+}P}\right)\left\|\mathbf{v}_{+}\right\|_{2}+\left\|\Delta\mathbf{\zeta}_{+,r}^{(0)}\right\|_{2}$$ $$\leq O\left(\eta\frac{s^{*}}{k_{+}P}\right)+O\left(\eta\sigma_{\zeta}\frac{\sqrt{s^{*}}}{P\sqrt{N}}\sqrt{d}\right)\tag{1}$$ $$\leq O\left(\eta\frac{s^{*}}{k_{+}P}\right),$$ $$(215)$$ and the following lower bound holds (via the reverse triangle inequality): $$\left\|\Delta w_{+,c,r}^{(0)}\right.$$ $$\begin{array}{l l}{{}}&{{}}\\ {{}}&{{\left\|{}_{2}\geq\Omega\left(\eta{\frac{s^{*}}{k_{+}P}}\right)\|\mathbf{v}_{+}\|_{2}-\left\|\Delta\zeta_{+,r}^{(0)}\right\|_{2}}}\\ {{}}&{{}}\\ {{}}&{{\geq\Omega\left(\eta{\frac{s^{*}}{k_{+}P}}\right)-O\left(\eta\sigma\zeta{\frac{\sqrt{s^{*}}}{P\sqrt{N}}}\sqrt{d}\right)}}\\ {{}}&{{}}\\ {{}}&{{\geq\Omega\left(\eta{\frac{s^{*}}{k_{+}P}}\right),}}\end{array}$$ $$(216)$$ $$(217)$$ It follows that ∆w (0) +*,c,r* 2 = Θ η s ∗ k+P , which means $$\Delta b^{(0)}_{+,c,r}=-\left.\frac{\left\|\Delta\mathbf{w}^{(0)}_{+,c,r}\right\|_{2}}{\log^{5}(d)}=-\Theta\left(\frac{\eta s^{*}}{k_{+}P\log^{5}(d)}\right).\right.\tag{1}$$ This completes the proof of the base case. Induction step. Assume statements for time [0, t], prove for t + 1. First, by the induction hypothesis, we know that neurons in S ∗(0) +,c (v+) must activate on all the v+-dominated patches at time t. Therefore, we can write the update expression equation 208 as follows: $$\begin{array}{l}{{\Delta\mathbf{w}_{+,\varepsilon}^{(t)}}}\\ {{=\frac{\eta}{N P}\sum_{n=1}^{N}\left\{\mathbbm{1}\{y_{n}=(+,c)\}[1-\log_{+,\varepsilon}^{(t)}(\mathbf{X}_{n}^{(t)})]+\sum_{\sigma^{\prime}\in[k_{+}]-(c)}\mathbbm{1}\{y_{n}=(+,\varepsilon^{\prime})\}[-\log_{+,\varepsilon}^{(t)}(\mathbf{X}_{n}^{(t)})]\right\}}}\\ {{\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)};\mathbf{w}_{+})}\left(\alpha_{n,p}^{(t)}\mathbf{w}_{+}+\zeta_{n,p}^{(t)}\right).}}\end{array}$$ $$(219)$$ $$(218)$$ Following the same argument as in the base case, we have that with probability at least 1 − O mNP poly(d) , $$\Delta\mathbf{w}_{+,c,r}^{(t)}=[1/4,2/3]\sqrt{1\pm\iota}\left(1\pm s^{*-1/3}\right)\eta\frac{s^{*}}{2k_{+}P}\mathbf{v}_{+}+\Delta\mathbf{\zeta}_{+,c,r}^{(t)},$$ and σ (t) ∆ζ+*,c,r* = Θ(1) × ησζ √s ∗ P √ 2N . Now we need to show that w (t+1) +*,c,r* indeed activate on all the v+-dominated patches at time t + 1 with high probability. So far, we know that for τ ∈ [0, t + 1], $$\Delta\mathbf{w}_{+,c,r}^{(\tau)}=[1/4,2/3]\sqrt{1\pm\iota}\left(1\pm s^{*-1/3}\right)\eta\frac{s^{*}}{2k_{+}P}\mathbf{v}_{+}+\Delta\mathbf{\xi}_{+,c,r}^{(\tau)},$$ and σ $\sigma^{(\tau)}_{\Delta\zeta+,c,r}=\Theta(1)\times\eta\sigma_{\zeta}\frac{\sqrt{s^{*}}}{P\sqrt{2N}}.$ It follows that $$\mathbf{w}_{+,r}^{(t+1)}=\mathbf{w}_{+,c,r}^{(0)}+(t+1)[1/4,2/3]\sqrt{1\pm\iota}\left(1\pm s^{r-1/3}\right)\eta\frac{s^{*}}{2k_{+}P}\mathbf{v}_{+}+\mathbf{\zeta}_{+,c,r}^{(t+1)},\tag{1}$$ where σ (t+1) ζ+*,c,r* = Θ(1) × √t + 1ησζ√s ∗ P √ 2N . The following holds with probability at least 1 − O mNP poly(d) over all the v+-dominated patches x (t+1) n,p = α (t+1) n,p v+ + ζ (t+1) n,p (which are independent of w (t+1) +,r ) and the v+-singleton neurons: ⟨w (t+1) +,c,r , α(t+1) n,p v+ + ζ (t+1) n,p ⟩ =⟨w (0) +,c,rα (t+1) n,p v+ + ζ (t+1) n,p ⟩ + (t + 1)[1/4, 2/3](1 ± ι) 1 ± s ∗−1/3 1 ± O 1 log9(d) ηs ∗ 2k+P + ⟨ζ (t+1) +,c,r , α(t+1) n,p v+ + ζ (t+1) n,p ⟩ $$(220)$$ $$(221)$$ $$(222)$$ $$(223)$$ $$(224)^{\frac{1}{2}}$$ Note that with probability at least 1 − O 1 poly(d) , $$\langle\zeta_{+,c,r}^{(t+1)},\alpha_{n,p}^{(t+1)}v_{+}\rangle\leq O(1)\times\sqrt{T}\eta\sigma_{\zeta}\frac{\sqrt{s^{*}}}{P\sqrt{2N}}\sqrt{d\log(d)},$$ and since √t + 1 ≤ t + 1, √s ∗ < s∗, σζ pd log(d) <1 log9(d) , and *N > dk*+, we know that $$\langle\zeta_{+,c,r}^{(t+1)},\alpha_{n,p}^{(t+1)}v_{+}\rangle\leq O\left(\frac{1}{d}\right)\times(t+1)\eta\frac{s^{*}}{2k_{+}P}.$$ Similarly, with probability at least 1 − O 1 poly(d) , $$\langle\zeta_{+,c,r}^{(t+1)},\alpha_{n,p}^{(t+1)}\mathbf{v}_{t}+\zeta_{n,p}^{(t+1)}\rangle\leq O(1)\times\sqrt{T}\eta\sigma_{c}^{2}\frac{\sqrt{s^{*}}}{P\sqrt{2N}}\sqrt{d\log(d)}\leq O\left(\frac{1}{d}\right)\times(t+1)\eta\frac{s^{*}}{2k_{+}P}.$$ $$(225)$$ . (225) It follows that with probability at least 1 − O mNP poly(d) , $$\langle\mathbf{v}_{+,\tau}^{(t+1)},\alpha_{n,p}^{(t+1)}\mathbf{v}_{+}+\mathcal{C}_{n,p}^{(t+1)}\rangle$$ $$\geq\langle\mathbf{u}_{+,\tau}^{(0)},\alpha_{n,p}^{(t+1)}\mathbf{v}_{+}+\mathcal{C}_{n,p}^{(t+1)}\rangle+\frac{1}{4}(t+1)(1-\iota)\left(1-s^{*-1/3}\right)\left(1-O\left(\frac{1}{\log^{9}(d)}\right)\right)\eta\frac{s^{*}}{2k_{+}P}.\tag{226}$$ Next, let us estimate the bias updates for τ ∈ [0, t + 1]. Estimating ∆b (t) +*,c,r* follows an almost identical argument as in the base case (with the only main difference being relying on Theorem G.1 for non-activation on non-v+-dominated patches), so we skip its calculations. Therefore, b +,c,r = b +,c,r + −Θ k+P log5(d) ⟨w (t+1) +,c,r , α(t+1) n,p v+ + ζ (t+1) n,p ⟩ + b (t+1) +,c,r ≥⟨w (0) +,c,r, α(t+1) n,p v+ + ζ (t+1) n,p ⟩ + b (0) +,c,r (227) + 1 4 (t + 1)(1 − ι) 1 − s ∗−1/3 1 − O 1 log9(d) ηs ∗ 2k+P − O ηs∗(t + 1) k+P log5(d) >0. $\frac{(t+1)}{+,c,r}=b_{+,c,r}^{(0)}+-\Theta\left(\frac{\eta s^{*}(t+1)}{k_{+}P^{\log^{5}(d)}}\right)$. This means $\square$ $\square$ This completes the inductive step. **Corollary G.2.1**.: _At time $t=T_{0}$, $\frac{n^{*}}{k_{+}P}\times s^{*}\left|S_{+,c}^{\pi(0)}(\mathbf{v}_{+})\right|$, $\frac{n^{*}}{k_{+}P}\times s^{*}\left|S_{+,c}^{\pi(0)}(\mathbf{v}_{+,c})\right|=\Theta(1)$._ Proof. Directly follows from Lemma G.2 and Theorem G.1. ## G.4 Model Error After Training In this subsection, we show the model's error after fine-grained training. We also discuss that finetuning the model further increases its feature extractor's response to the true features, so it is even more robust/generalizing in downstream classification tasks. Theorem G.3. *Define* Fb+(X) = maxc∈[k+] F+,c(X), Fb−(X) = maxc∈[k−] F−,c(X). With probability at least 1 − O mk2+*NP T*0 poly(d) , the following events take place: 1. (Fine-grained easy & hard sample test accuracies are nearly perfect) Given an easy or hard fine-grained test sample (X, y) where y ∈ {(+, c)} k+ c=1 ∪ {(−, c)} k− c=1, P hF (T0) y (X) ≤ maxy′̸=y F (T0) y′ (X) i≤ o(1). 2. (Coarse-grained easy & hard sample test accuracy are nearly perfect) Given an easy or hard coarsegrained test sample (X, y) where y ∈ {+1, −1}, P hFb(T0) y (X) ≤ Fb(T0) y′ (X) i≤ o(1). Proof. **Probability of mistake on easy samples**. Without loss of generality, assume X is a (+, c)-class easy sample. Conditioning on the events of Theorem G.1 and Lemma G.2, we know that for all c ′ ∈ [k−], $$F_{-,c^{\prime}}^{(T_{0})}\leq O(m_{+,c^{\prime}}\sigma_{0}\sqrt{\log(d)})\leq o(1),$$ plog(d)) ≤ o(1), (228) $$(228)^{\frac{1}{2}}$$ and for all c ′ ∈ [k+] − {c}, $$F_{+,c^{\prime}}^{(T_{0})}\leq\sum_{p\in\mathcal{P}(\mathbf{X},\mathbf{v}_{+})}\sum_{(+,r)\in\mathbb{S}_{+,c^{\prime}}^{(0)}(\mathbf{v}_{+})}\sigma\left(\left(\mathbf{w}_{+,r}^{(T_{0})},\alpha_{n,p}\mathbf{v}_{+}+\zeta_{n,p}\right)+b_{+,c^{\prime},r}^{(T_{0})}\right)+O(m_{+,cr}\sigma_{0}\sqrt{\log(d)})\tag{229}$$ $$\leq\mathbf{s}^{*}\left|S_{+,c^{\prime}}^{(0)}(\mathbf{v}_{+})\right|\frac{2}{3}(1+i)\left(1+\mathbf{s}^{*-1/3}\right)\left(1+\left(\frac{1}{\log^{9}(d)}\right)\right)\eta T_{0}\frac{\mathbf{s}^{*}}{2k_{+}P}$$ moreover, (+,r)∈S ∗(0) +,c (v+) σ ⟨w (T0) +,c,r, αn,pv+ + ζn,p⟩ + b (T0) +,c,r F (T0) +,c ≥X p∈P(X;v+) X (+,r)∈S ∗(0) +,c (v+,c) σ ⟨w (T0) +,c,r, αn,pv+,c + ζn,p⟩ + b (T0) +,c,r +X p∈P(X;v+,c) X (230) ≥s ∗ S ∗(0) +,c (v+) 1 4 (1 − ι) 1 − s ∗−1/3 1 − 1 log5(d) ηT0s ∗ 2k+P + s ∗ S ∗(0) +,c (v+,c) 1 − O 1 k+ (1 − ι) 1 − s ∗−1/3 1 − 1 log5(d) ηT0s ∗ 2k+P Relying on Proposition 2, we know S (0) +,c′ (v+) =1 ± 1 log5(d) S ∗(0) +,c (v+) and S ∗(0) +,c (v+,c) = 1 ± 1 log5(d) S ∗(0) +,c (v+) , therefore F (T0) +,c (X) > maxc ′̸=c F (T0) +,c′ (X) has to be true. With Corollary G.2.1, we also have F (T0) +,c (X) ≥ Ω(1) > o(1) ≥ maxc ′∈[k−] F (T0) −,c′ (X). It follows that the probability of mistake on an easy test sample is indeed at most o(1). Probability of mistake on hard samples. Without loss of generality, assume X is a (+, c)-class hard sample. By Theorem G.1 (and its proof) and Lemma G.2, we know that for any c ′ ∈ [k+], the neurons w+,c′,r can only possibly receive update on v-dominated patches for v ∈ U(0) +,c′,r, and the updates to the neurons take the feature-plus-Gaussian-noise form of Pv′∈U(0) +,c′,r c(v ′)v ′ + ∆ζ (t) +,c′,r, with c(v ′) ≤ √1 + ι1 + s ∗−1/3ηs ∗ 2k+P if v ′is a fine-grained feature, or c(v ′) ≤ 2 3 √1 + ι1 + s ∗−1/3ηs ∗ 2k+P if v ′ = v+ (because the v ′component of a v ′-singleton neuron's update is already the maximum possible). Moreover, σ (t) ∆ζ+,c′,r ≤ O ησζ√s ∗ P √ 2N . Relying on Theorem G.1, Lemma G.2, Corollary G.2.1 and previous observations, we have $$F_{+,c}^{(T_{0})}(\mathbf{X})\geq\sum_{p\in\mathcal{P}(\mathbf{X}_{\text{w},c})}\sum_{(\epsilon_{+,p})\in S_{+,0}^{(T_{0})}(\mathbf{w}_{+,c})}\sigma\left(\langle\mathbf{w}_{+,c,r}^{(T_{0})},\alpha_{n,p}\mathbf{w}_{+,c}+\xi_{n,p}\rangle+b_{+,c,r}^{(T_{0})}\right)\tag{231}$$ $$\geq s^{*}\Big{|}S_{+,0}^{(T_{0})}(\mathbf{w}_{+,c})\Big{|}\left(1-O\left(\frac{1}{k_{+}}\right)\right)(1-\iota)\left(1-s^{\iota-1/3}\right)\left(1-O\left(\frac{1}{\log^{\circ}(d)}\right)\right)qT_{0}\frac{s^{*}}{2k_{+}P}$$ $$\geq\Omega(1),$$ and for c ′ ̸= c, F (T0) +,c′ (X) ≤ mX+,c′ r=1 σ ⟨w (T0) +,c′,r, ζ ∗⟩ + b (T0) +,c′,r (+,c′,r)∈S (0) +,c′ (v+,c) σ ⟨w (T0) +,c′,r, αn,pv+,c + ζn,p⟩ + b (T0) +,c′,r +X p∈P(X;v+,c) X (+,c′,r)∈S (0) +,c′ (v−) σ ⟨w (T0) +,c′,r, α†n,pv− + ζn,p⟩ + b (T0) +,c′,r +X p∈P(X;v−) X ≤O(1) × X (+,c′,r)∈U(0) +,c′,r ⟨ T X0−1 τ=0 ∆w (τ) +,c,′r , ζ ∗⟩ +X r∈[m+,c′ ] ⟨w (0) +,c,′r , ζ ∗⟩ (232) (+,c′,r)∈S (0) +,c′ (v+,c) σ ⟨w (0) +,c′,r, αn,pv+,c + ζn,p⟩ + b (0) +,c′,r +X p∈P(X;v+,c) X (+,c′,r)∈S (0) +,c′ (v−) σ ⟨w (0) +,c′,r, α†n,pv− + ζn,p⟩ + b (0) +,c′,r +X p∈P(X;v−) X ≤O 1 polylog(d) . Moreover, for any c ′ ∈ [k−], similar to before, F (T0) −,c′ (X) ≤ mX−,c′ r=1 σ ⟨w (T0) −,c′,r, ζ ∗⟩ + b (T0) −,c′,r (−,c′,r)∈S (0) −,c′ (v+,c) σ ⟨w (T0) −,c′,r, αn,pv+,c + ζn,p⟩ + b (T0) −,c′,r +X p∈P(X;v+,c) X (−,c′,r)∈S (0) −,c′ (v−) σ ⟨w (T0) −,c′,r, α†n,pv− + ζn,p⟩ + b (T0) −,c′,r +X p∈P(X;v−) X ≤O(1) × X (−,c′,r)∈U(0) −,c′,r ⟨w (T0) −,c,′r , ζ ∗⟩ +X r∈[m−,c′ ] ⟨w (0) −,c,′r , ζ ∗⟩ (233) (−,c′,r)∈S (0) −,c′ (v+,c) σ ⟨w (0) −,c′,r, αn,pv+,c + ζn,p⟩ + b (0) −,c′,r +X p∈P(X;v+,c) X + O(1) × s † S (0) −,c′ (v−) ×ι † upper + O(σ0 log(d)) ≤O 1 polylog(d) + O σ0 plog(d) + O 1 log(d) ≤o(1). Therefore, F (T0) +,c (X) > maxy̸=(+,c) F (T0) y (X), which means Fb(T0) $${}^{\mathrm{(b)}}(\mathbf{X})>{\widehat{F}}_{-}^{(T_{0})}(\mathbf{X}){\mathrm{~indeed.}}$$ Remark. First of all, note that the feature extractor, after fine-grained training, is already well-performing, as it responds strongly (Ω(1) strength) to the true features, and very weakly (o(1) strength) to any off-diagonal features and noise. In other words, we stop training when the margin is at least Ω(1), i.e. when we have F (T) y (T ) n (X (T) n ) − maxy̸=y (T ) nF (T) y (X (T) n ) ≥ Ω(1) for all n at some T ≤ poly(d), and with high probability, we just need T0 time to reach it. This can already help us explain the linear-probing result we saw on ImageNet21k in Appendix A.2, since linear probing does not alter the the feature extractor after fine-grained pretraining (on ImageNet21k), it only retrains a new linear classifier on top of the feature extractor for classifying on the target ImageNet1k dataset. At a high level, *finetuning* Fb can only further enhance the feature extractor's response to the features, therefore making the model even more robust for challenging downstream classification problems; it will not degrade the feature extractor's response to any true feature. A rigorous proof of this statement is almost a repetition of the proofs for fine-grained training, so we do not repeat them here. Intuitively speaking, we just need to note that the properties stated in Theorem G.1 will continue to hold during finetuning (as long as we stay in polynomial time), and with similar argument to those in the proof of Lemma G.2, we note that the neurons responsible for detecting fine-grained features, i.e. the S ∗(0) +,c (v+,c), will continue to only receive (positive) updates on the v+,c-dominated patches of the following form: $$\begin{split}\Delta\mathbf{w}_{+,c,r}^{(t)}=&\frac{\eta}{N\!P}\sum_{n=1}^{N}\mathbb{1}\left\{y_{n}=(+,c)\right\}[1-\log_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]\\ &\times\sum_{p\in\mathcal{P}(\mathbf{X}_{n}^{(t)},\mathbf{w}_{+,c})}\mathbb{1}\left\{\langle\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}_{+,c}+\mathbf{\zeta}_{n,p}^{(t)}\right\rangle+b_{+,c,r}^{(t)}>0\right\}\left(\alpha_{n,p}^{(t)}\mathbf{v}_{+,c}+\mathbf{\zeta}_{n,p}^{(t)}\right),\end{split}\tag{234}$$ and similar update expression can be stated for the S ∗(0) +,c (v+) neurons: $$\Delta\mathbf{w}_{+,c,r}^{(t)}$$ $$=\frac{\eta}{N\!P}\sum_{n=1}^{N}\mathds{1}\left\{y_{n}=(+,c)\right\}[1-\log_{+}^{(t)}(\mathbf{X}_{n}^{(t)})]\tag{235}$$ $$\times\sum_{p\in\mathsf{P}(\mathbf{X}_{n}^{(t)},y_{n})}\mathds{1}\left\{(\mathbf{w}_{+,c,r}^{(t)},\alpha_{n,p}^{(t)}\mathbf{v}_{+}+\zeta_{n,p}^{(t)})+b_{+,c,r}^{(t)}>0\right\}\left(\alpha_{n,p}^{(t)}\mathbf{v}_{+}+\zeta_{n,p}^{(t)}\right).$$ Indeed, these feature-detector neurons will continue growing in the direction of the features they are responsible for detecting instead of degrade in strength. ## H Probability Lemmas Lemma H.1 (Laurent-Massart χ 2 Concentration (Laurent & Massart (2000) Lemma 1)). Let g ∼ N (0, Id). For any vector a ∈ R d ≥0 , any t > 0*, the following concentration inequality holds:* $$\mathbb{P}\left[\sum_{i=1}^{d}a_{i}g_{i}^{2}\geq\|\mathbf{a}\|_{1}+2\|\mathbf{a}\|_{2}{\sqrt{t}}+2\|\mathbf{a}\|_{\infty}t\right]\leq e^{-t}$$ Lemma H.2. Let g ∼ N (0, σ2Id)*. Then,* $$(236)$$ $$\mathbb{P}\left[\|\mathbf{g}\|_{2}^{2}\geq5\sigma^{2}d\right]\leq e^{-d}$$ $$(237)$$ $$(238)$$ $\square$ $$(239)$$ $$(240)^{\frac{1}{2}}$$ −d(237) Proof. By Lemma H.1, setting ai = 1 for all i and t = d yields $$\mathbb{P}\left[\|\mathbf{g}\|_{2}^{2}\geq\sigma^{2}d+2\sigma^{2}d+2\sigma^{2}d\right]\leq e^{-d}$$ −d(238) Lemma H.3 (Shen et al. (2022a)). Let g1 ∼ N (0, σ2 1Id) and g2 ∼ N (0, σ2 2Id) be independent. Then, for any δ ∈ (0, 1) and sufficiently large d, there exist constants c1, c2 *such that* $$\mathbb{P}\left[|\langle\mathbf{g}_{1},\mathbf{g}_{2}\rangle|\leq c_{1}\sigma_{1}\sigma_{2}\sqrt{d\log(1/\delta)}\right]\geq1-\delta$$ $$\mathbb{P}\left[\langle\mathbf{g}_{1},\mathbf{g}_{2}\rangle\geq c_{2}\sigma_{1}\sigma_{2}\sqrt{d}\right]\geq\frac{1}{4}$$
Review 1: Summary: This paper provides a theoretical framework to explain why pretraining deep neural networks (DNNs) with fine-grained labels enhances generalization when fine-tuning on coarse-labeled downstream tasks. The authors introduce a "hierarchical multi-view" data model and prove that fine-grained pretraining enables DNNs to learn both common and rare features, leading to better accuracy on challenging test samples. They also show that higher label complexity in pretraining increases the complexity of learned representations, which is key to improved generalization. These theoretical insights are supported by large-scale experiments on datasets like ImageNet and iNaturalist, confirming the practical benefits of fine-grained pretraining. Strengths and Weaknesses: Strengths: 1. This paper studies an intriguing phenomenon on neural network generalization, where pretraining with fine-grained labels lead to better representations. The authors provided a theoretical analysis on explaining the reason behind this phenomenon. 2. The authors provided detailed background introduction for the theoretical framework and summarize the results in plain and useful insights. Weaknesses: 1. The authors should have more discussions on the practical implications of the results. Given that pretraining on more labels are beneficial, should we be using this more often in practice? Since unsupervised pretraining is the standard of representation learning nowadays, can the results of supervised learning in this paper provide any insights in the current regime? 2. Are the learning dynamics the same between pretraining with coarse-grained labels and fine-grained labels? Given that the supervised tasks are different, is one necessarily harder than the others? Some analysis comparing the training dynamics of these two settings would be better. 3. The studied setting in this paper is that we should do pretraining with fine-grained labels and fine-tuning with coarse-grained labels. I am confused about the fine-tuning part. Why not always fine-tune using the same label space as pretraining? Is it necessarily inferior to fine-tuning with coarse-grained labels? I think the authors should have more discussions on this part or add clarifications on the settings where the claim holds. 4. Another relevant question would be about whether we can gradually interpolate from coarse-grained labels to fine-grained labels during pretraining, or the reversed way. Given the results presented in this paper, do the authors have any intuition about how one should do this interleaved style training? How does it compare to only training in either coarse-grained or fine-grained labels? Requested Changes: See weaknesses. Broader Impact Concerns: I do not have concerns regarding the broader impact of this paper. ================================================== Review 2: Summary: The paper models and comes up with a theory for the effectiveness of pre-training on fine-grained labels. The intuitive argument (which is represented in their proofs) is that use of many (or an equal number of) subclass labels in pre-training ensures that an equal number of "detector" neurons for these subclass labels is produced during training. When only coarse labels are used, the common class labels are overrepresented. The theoretical model considers a two-layer convolutional ReLU network and an orthonormal dictionary of feature vectors. The input is represented as a collection of patches and dataset modelling is considered as simple isotropic Gaussian noise on top of feature vectors. Hard test examples are generated by including feature-noise patches and "high-noise" patches, which serve as distracting patterns. Strengths and Weaknesses: Strengths: 1. I found the writing in the main part of the text to be very clear. The authors clearly present the intuition behind the technical proofs in the appendices. 2. The model seems to be reasonably constructed and considered (though limited, necessarily; see below). The model allows for analysis, while being complicated enough to perhaps model the simplest relevant scenario for image classification. 3. I did not have time to check the theoretical proofs carefully, so I must give the benefit of the doubt on their correctness. At a quick glance the arguments seem to be well-structured and nontrivial. Weaknesses: 1. There are many simplifying assumptions in the model that could limit the applicability of the analysis (and the authors admit as much), e.g., the model used is a simple two-layer convolutional network, and both classes and subclasses are represented with a single feature vector each. In reality, models are far more complicated (and would be harder to analytically characterize) and the features that represent a class may be a collection of local patch features in conjunction. 2. More of a question: The relationship between the common feature vectors and the subclass feature vectors is simply that of orthogonal unit vectors. This does not seem to capture any explicit hierarchical relationship between the common feature vectors and the subclass vectors, e.g., the common vectors being some linear combination of subclass vectors. Was any such modelling attempted, or can you give justification for the choice made here? 3. The empirical results simply show the effectiveness of pre-training with fine-grained labels, but do not aim to verify the intuitive argument. In particular, it'd be nice to try and show the presence of such "detector" neurons and to measure their approximate relative growth. 4. While the theoretical results are on a simple two-layer ReLU network, the experiments run are with a transformer vision model VitB. Requested Changes: Overall, I am in favor of this article, as it seems to be a reasonable attempt at an analytical theory for explaining the effectiveness of fine-grained pretraining. The only thing I would ask for is further discussion of the weaknesses that I've noted above. If experimental evidence backing up the intuitive argument (as noted in point 3) was provided, I'd be willing to champion the article, I think. Broader Impact Concerns: N/A. ================================================== Review 3: Summary: This paper explains why the generalization performance of a DNN pre-trained using fine-grained labels is better than that of a DNN pre-trained using coarse-labeled data. Strengths and Weaknesses: Strength: 1. This topic is very interesting. Weakness: 1. Many definitions are not clarified clearly. For example, why and what is the meaning of two superclasses +1 and −1? Can authors take a vivid example to explain this? Moreover, in definition 4.1, what is the difference between "v_{+}" and "v_{-}?" In definition 4.2, what is the definition of "v_{y}?" 2. Why easy sample and hard sample can be defined through Definition 4.2? Any experimental results to support the definition of easy sample and hard sample? 3. This paper is not self-contained, since experimental results are reported in Appendix. 4. I suggest that authors can explain each definition/theorem immediately after introducing each definition/theorem. Requested Changes: Stated in Weakness. Broader Impact Concerns: This work does not require adding a Broader Impact Statement ==================================================
# Undersampling Is A Minimax Optimal Robustness Intervention In Nonparametric Classification Niladri S. Chatterji∗ *niladri@cs.stanford.edu* Department of Computer Science Stanford University Saminul Haque∗*saminulh@stanford.edu* Department of Computer Science Stanford University Tatsunori B. Hashimoto thashim@stanford.edu Department of Computer Science Stanford University Reviewed on OpenReview: *https: // openreview. net/ forum? id= r6oHDYOZ6p* ## Abstract While a broad range of techniques have been proposed to tackle distribution shift, the simple baseline of training on an *undersampled* balanced dataset often achieves close to state-ofthe-art-accuracy across several popular benchmarks. This is rather surprising, since undersampling algorithms discard excess majority group data. To understand this phenomenon, we ask if learning is fundamentally constrained by a lack of minority group samples. We prove that this is indeed the case in the setting of nonparametric binary classification. Our results show that in the worst case, an algorithm cannot outperform undersampling unless there is a high degree of overlap between the train and test distributions (which is unlikely to be the case in real-world datasets), or if the algorithm leverages additional structure about the distribution shift. In particular, in the case of label shift we show that there is always an undersampling algorithm that is minimax optimal. In the case of group-covariate shift we show that there is an undersampling algorithm that is minimax optimal when the overlap between the group distributions is small. We also perform an experimental case study on a label shift dataset and find that in line with our theory, the test accuracy of robust neural network classifiers is constrained by the number of minority samples. ## 1 Introduction A key challenge facing the machine learning community is to design models that are robust to distribution shift. When there is a mismatch between the train and test distributions, current models are often brittle and perform poorly on rare examples (Hovy & Søgaard, 2015; Blodgett et al., 2016; Tatman, 2017; Hashimoto et al., 2018; Alcorn et al., 2019). In this paper, our focus is on group-structured distribution shifts. In the training set, we have many samples from a *majority* group and relatively few samples from the *minority* group, while during test time we are equally likely to get a sample from either group. To tackle such distribution shifts, a naïve algorithm is one that first *undersamples* the training data by discarding excess majority group samples (Kubat & Matwin, 1997; Wallace et al., 2011) and then trains a model on this resulting dataset (see Figure 1 for an illustration of this algorithm). The samples that remain in this undersampled dataset constitute i.i.d. draws from the test distribution. Therefore, while a classifier trained on this pruned dataset cannot suffer biases due to distribution shift, this algorithm is clearly wasteful, as it discards training samples. This perceived inefficiency of undersampling has led to the design of several algorithms to combat such distribution shift (Chawla et al., 2002; Lipton et al., 2018; ![1_image_0.png](1_image_0.png) Figure 1: Example with linear models and linearly separable data. On the left we have the maximum margin classifier over the entire dataset, and on the right we have the maximum margin classifier over the undersampled dataset. The undersampled classifier is less biased and aligns more closely with the true boundary. Sagawa et al., 2020; Cao et al., 2019; Menon et al., 2020; Ye et al., 2020; Kini et al., 2021; Wang et al., 2022). In spite of this algorithmic progress, the simple baseline of training models on an undersampled dataset remains competitive. In the case of label shift, where one class label is overrepresented in the training data, this has been observed by Cui et al. (2019); Cao et al. (2019), and Yang & Xu (2020). While in the case of group-covariate shift, a study by Idrissi et al. (2022) showed that the empirical effectiveness of these more complicated algorithms is limited. For example, Idrissi et al. (2022) showed that on the group-covariate shift CelebA dataset the worst-group accuracy of a ResNet-50 model on the undersampled CelebA dataset which *discards 97%* of the available training data is as good as methods that use all of available data such as importance-weighted ERM (Shimodaira, 2000), Group-DRO (Sagawa et al., 2020) and Just-Train-Twice (Liu et al., 2021). In Table 1, we report the performance of the undersampled classifier compared to the state-of-the-art-methods in the literature across several label shift and group-covariate shift datasets. We find that, although undersampling isn't always the optimal robustness algorithm, it is typically a very competitive baseline and within 1–4% the performance of the best method. Table 1: Performance of undersampled classifier compared to the best classifier across several popular label shift and group-covariate shift datasets. When reporting worst-group accuracy we denote it by a ?. When available, we report the 95% confidence interval. We find that the undersampled classifier is always within 1–4% of the best performing robustness algorithm, except on the CIFAR100 and MultiNLI datasets. In Appendix F we provide more details about each of the results in the table. | Shift Type | Dataset/Paper | Test/Worst-Group? Accuracy Best Undersampled | | |-----------------|--------------------------------------------|------------------------------------------------|--------------| | Label | Imb. CIFAR10 (step 10) (Cao et al., 2019) | 87.81 | 84.59 | | | Imb. CIFAR100 (step 10) (Cao et al., 2019) | 59.46 | 53.08 | | | CelebA (Idrissi et al., 2022) | 86.9 ± 1.1 ? | 85.6 ± 2.3 ? | | | Waterbirds (Idrissi et al., 2022) | 87.6 ± 1.6 ? | 89.1 ± 1.1 ? | | Group-Covariate | | ? | | | | MultiNLI (Idrissi et al., 2022) | 78.0 ± 0.7 ? | 68.9 ± 0.8 | | | CivilComments (Idrissi et al., 2022) | 72.0 ± 1.9 ? | 71.8 ± 1.4 ? | Inspired by the strong performance of undersampling in these experiments, we ask: Is the performance of a model under distribution shift fundamentally constrained by the lack of minority group samples? To answer this question we analyze the *minimax excess risk*. We lower bound the minimax excess risk to prove that the performance of any algorithm is lower bounded only as a function of the minority samples (nmin). This shows that even if a robust algorithm optimally trades off between the bias and the variance, it is fundamentally constrained by the variance on the minority group which decreases only with nmin. Our contributions. In our paper, we consider the well-studied setting of nonparametric binary classification (Tsybakov, 2010). By operating in this nonparametric regime we are able to study the properties of undersampling in rich data distributions, but are able to circumvent the complications that arise due to the optimization and implicit bias of parametric models. We provide insights into this question in the label shift scenario, where one of the labels is overrepresented in the training data, Ptrain(y = 1) ≥ Ptrain(y = −1), whereas the test samples are equally likely to come from either class. Here the class-conditional distribution P(x | y) is Lipschitz in x. We show that in the label shift setting there is a fundamental constraint, and that the minimax excess risk of any robust learning method is lower bounded by 1/nmin 1/3. That is, minority group samples fundamentally constrain performance under distribution shift. Furthermore, by leveraging previous results about nonparametric density estimation (Freedman & Diaconis, 1981) we show a matching upper bound on the excess risk of a standard binning estimator trained on an undersampled dataset to demonstrate that undersampling is optimal. Further, we experimentally show in a label shift dataset (Imbalanced Binary CIFAR10) that the accuracy of popular classifiers generally follow the trends predicted by our theory. When the minority samples are increased, the accuracy of these classifiers increases drastically, whereas when the number of majority samples are increased the gains in the accuracy are marginal at best. We also study the covariate shift case. In this setting, there has been extensive work studying the effectiveness of transfer (Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019) from train to test distributions, often focusing on deriving specific conditions under which this transfer is possible. In this work, we demonstrate that when the overlap (defined in terms of total variation distance) between the group distributions Pa and Pb is small, transfer is difficult, and that the minimax excess risk of any robust learning algorithm is lower bounded by 1/nmin 1/3. While this prior work also shows the impossibility of using majority group samples in the extreme case with no overlap, our results provide a simple lower bound that shows that the amount of overlap needed to make transfer feasible is unrealistic. We also show that this lower bound is tight, by proving an upper bound on the excess risk of the binning estimator acting on the undersampled dataset. Taken together, our results underline the need to move beyond designing "general-purpose" robustness algorithms (like importance-weighting (Cao et al., 2019; Menon et al., 2020; Kini et al., 2021; Wang et al., 2022), g-DRO (Sagawa et al., 2020), JTT (Liu et al., 2021), SMOTE (Chawla et al., 2002), etc.) that are agnostic to the structure in the distribution shift. Our worst case analysis highlights that to successfully beat undersampling, an algorithm must leverage additional structure in the distribution shift. ## 2 Related Work On several group-covariate shift benchmarks (CelebA, CivilComments, Waterbirds), Idrissi et al. (2022) showed that training ResNet classifiers on an undersampled dataset either outperforms or performs as well as other popular reweighting methods like Group-DRO (Sagawa et al., 2020), reweighted ERM, and JustTrain-Twice (Liu et al., 2021). They find Group-DRO performs comparably to undersampling, while both tend to outperform methods that don't utilize group information. One classic method to tackle distribution shift is importance weighting (Shimodaira, 2000), which reweights the loss of the minority group samples to yield an unbiased estimate of the loss. However, recent work (Byrd & Lipton, 2019; Xu et al., 2020) has demonstrated the ineffectiveness of such methods when applied to overparameterized neural networks. Many followup papers (Cao et al., 2019; Ye et al., 2020; Menon et al., 2020; Kini et al., 2021; Wang et al., 2022) have introduced methods that modify the loss function in various ways to address this. However, despite this progress undersampling remains a competitive alternative to these importance weighted classifiers. Our theory draws from the rich literature on non-parametric classification (Tsybakov, 2010). Apart from borrowing this setting of nonparametric classification, we also utilize upper bounds on the estimation error of the simple histogram estimator (Freedman & Diaconis, 1981; Devroye & Györfi, 1985) to prove our upper bounds in the label shift case. Finally, we note that to prove our minimax lower bounds we proceed by using the general recipe of reducing from estimation to testing (Wainwright, 2019, Chapter 15). One difference from this standard framework is that our training samples shall be drawn from a different distribution than the test samples used to define the risk. Past work has established lower bounds on the minimax risk for binary classification without distribution shift for general VC classes (see, e.g., Massart & Nédélec, 2006). Note that, these bounds are not directly applicable in the distribution shift setting, and consequently these lower bounds scale with the total number of samples n = nmaj+nmin rather than with the minority number of samples (nmin). There are also refinements of this lower bound to obtain minimax lower bounds for cost-sensitive losses that penalize errors on the two class classes differently (Kamalaruban & Williamson, 2018). By carefully selecting these costs it is possible to apply these results in the label shift setting. However, these lower bounds remain loose and decay with n and nmaj in contrast to the tighter nmin dependence in our lower bounds. We provide a more detailed discussion about potentially applying these lower bounds to the label shift setting after the presentation of our theorem in Section 4.1. There is rich literature that studies domain adaptation and transfer learning under label shift (Maity et al., 2020) and covariate shift (Ben-David et al., 2006; David et al., 2010; Ben-David et al., 2010; Ben-David & Urner, 2012; 2014; Berlind & Urner, 2015; Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019). The principal focus of this line of work was to understand the value of unlabeled data from the target domain, rather than to characterize the relative value of the number of labeled samples from the majority and minority groups. Among these papers, most closely related to our work are those in the covariate shift setting (Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019). Their lower bound results can be reinterpreted to show that under covariate shift in the absence of overlap, the minimax excess risk is lower bounded by 1/nmin 1/3. We provide a more detailed comparison with their results after presenting our lower bounds in Section 4.2. Finally, we note that Arjovsky et al. (2022) recently showed that undersampling can improve the worst-class accuracy of linear SVMs in the presence of label shift. In comparison, our results hold for arbitrary classifiers with the rich nonparametric data distributions. ## 3 Setting In this section, we shall introduce our problem setup and define the types of distribution shift that we consider. ## 3.1 Problem Setup The setting for our study is nonparametric binary classification with Lipschitz data distributions. We are given n training datapoints S := {(x1, y1), . . . ,(xn, yn)} ∈ ([0, 1] *× {−*1, 1}) n that are all drawn from a *train* distribution Ptrain. During test time, the data shall be drawn from a *different* distribution Ptest. Our paper focuses on the robustness to this shift in the distribution from train to test time. To present a clean analysis, we study the case where the features x are bounded scalars, however, it is easy to extend our results to the high-dimensional setting. Given a classifier f : [0, 1] *→ {−*1, 1}, we shall be interested in the test error (risk) of this classifier under the test distribution Ptest: $$R(f;\mathsf{P_{t e s t}}):=\mathbb{E}_{(x,y)\sim\mathsf{P_{t e s t}}}\left[\mathbf{1}(f(x)\neq y)\right].$$ ## 3.2 Types Of Distribution Shift We assume that Ptrain consists of a mixture of two groups of unequal size, and Ptest contains equal numbers of samples from both groups. Given a majority group distribution Pmaj and a minority group distribution Pmin, the learner has access to nmaj majority group samples and nmin minority group samples: $\mathcal{S}_{\rm maj}\sim\mathcal{P}_{\rm maj}^{\rm min}$ and $\mathcal{S}_{\rm min}\sim\mathcal{P}_{\rm min}^{\rm min}$ Here nmaj *> n/*2 and nmin *< n/*2 with nmaj + nmin = n. The full training dataset is S = Smaj ∪ Smin = {(x1, y1), . . . ,(xn, yn)}. We assume that the learner has access to the knowledge whether a particular sample (xi, yi) comes from the majority or minority group. The test samples will be drawn from Ptest = 1 2 Pmaj + 1 2 Pmin, a uniform mixture over Pmaj and Pmin. Thus, the training dataset is an imbalanced draw from the distributions Pmaj and Pmin, whereas the test samples are balanced draws. We let ρ := nmaj/nmin > 1 denote the imbalance ratio in the training data. We consider the uniform mixture during test time since the resulting test loss is of the same order as the worst-group loss. We focus on two-types of distribution shifts: label shift and group-covariate shift that we describe below. ## 3.2.1 Label Shift In this setting, the imbalance in the training data comes from there being more samples from one class over another. Without loss of generality, we shall assume that the class y = 1 is the majority class. Then, we define the majority and the minority class distributions as $$\mathsf{P_{m a j}}(x,y)=\mathsf{P_{1}}(x)\mathbf{1}(y=1)\quad\mathrm{and}\quad\mathsf{P_{m i n}}=\mathsf{P_{-1}}(x)\mathbf{1}(y=-1),$$ where P1, P−1 are class-conditional distributions over the interval [0, 1]. We assume that class-conditional distributions Pi have densities on [0, 1] and that they are 1-Lipschitz: for any *x, x*0 ∈ [0, 1], $$|\mathsf{P}_{i}(x)-\mathsf{P}_{i}(x^{\prime})|\leq|x-x^{\prime}|.$$ We denote the class of pairs of distributions (Pmaj, Pmin) that satisfy these conditions by PLS. We note that such Lipschitzness assumptions are common in the literature (see Tsybakov, 2010). ## 3.2.2 Group-Covariate Shift In this setting, we have two groups {*a, b*}, and corresponding to each of these groups is a distribution (with densities) over the features Pa(x) and Pb(x). We let a correspond to the majority group and b correspond to the minority group. Then, we define $\mathsf{P}_{\min}(x,y)=\mathsf{P}_{a}(x)\mathsf{P}(y\mid x)\quad\text{and}\quad\mathsf{P}_{\min}(x,y)=\mathsf{P}_{b}(x)\mathsf{P}(y\mid x).$ We assume that for y *∈ {−*1, 1}, for all *x, x*0 ∈ [0, 1]: $$|\mathsf{P}(y\mid$$ P(y | x) − P(y | x 0) ≤ |x − x 0|, that is, the distribution of the label given the feature is 1-Lipschitz, and it varies slowly over the domain. To quantify the shift between the train and test distribution, we define a notion of overlap between the group distributions Pa and Pb as follows: Overlap(Pa, Pb) := 1 − TV(Pa, Pb) where TV(Pa, Pb) := supE⊆[0,1] |Pa(E) − Pb(E)|, denotes the total variation distance between Pa and Pb. Notice that when Pa and Pb have disjoint supports, TV(Pa, Pb) = 1 and therefore Overlap(Pa, Pb) = 0. On the other hand when Pa = Pb, TV(Pa, Pb) = 0 and Overlap(Pa, Pb) = 1. When the overlap is 1, the majority and minority distributions are identical and hence we have no shift between train and test. Observe that Overlap(Pa, Pb) = Overlap(Pmaj, Pmin) since P(y | x) is shared across Pmaj and Pmin. Given a level of overlap τ ∈ [0, 1] we denote the class of pairs of distributions (Pmaj, Pmin) with overlap at least τ by PGS(τ ). It is easy to check that, PGS(τ ) ⊆ PGS(0) at any overlap level τ ∈ [0, 1]. Considering a notion of overlap between the marginal distributions Pa(x) and Pb(x) is natural in the group covariate setting since the conditional distribution that we wish to estimate P(y | x) remains constant from train to test time. Higher overlap between Pa and Pb allows a classifier to learn more about the underlying conditional distribution P(y | x) when it sees samples from either group. In contrast, in the label shift setting P(x | y) remains constant from train to test time and higher overlap between P(x | 1) and P(x | −1) does not help to estimate P(y | x). ## 4 Lower Bounds On The Minimax Excess Risk In this section, we shall prove our lower bounds that show that the performance of any algorithm is constrained by the number of minority samples nmin. Before we state our lower bounds, we need to introduce the notion of excess risk and minimax excess risk. Excess risk and minimax excess risk. We measure the performance of an algorithm A through its excess risk defined in the following way. Given an algorithm A that takes as input a dataset S and returns a classifier AS , and a pair of distributions (Pmaj, Pmin) with Ptest = 1 2 Pmaj + 1 2 Pmin, the *expected excess risk* is given by Excess $\mathsf{Risk}[\mathcal{A};(\mathsf{P}_{\mathsf{maj}},\mathsf{P}_{\mathsf{min}})]:=\mathbb{E}_{\mathcal{S}\sim\mathsf{P}_{\mathsf{maj}}^{\mathsf{P}_{\mathsf{maj}}}\times\mathsf{P}_{\mathsf{min}}^{\mathsf{P}_{\mathsf{min}}}}\left[R(\mathcal{A}^{\mathcal{S}};\mathsf{P}_{\mathsf{test}}))-R(f^{\star}(\mathsf{P}_{\mathsf{test}});\mathsf{P}_{\mathsf{test}})\right]$, $$\left(1\right)$$ where f ?(Ptest) is the Bayes classifier that minimizes the risk R(·; Ptest). The first term corresponds to the expected risk for the algorithm when given nmaj samples from Pmaj and nmin samples from Pmin, whereas the second term corresponds to the Bayes error for the problem. Excess risk does not let us characterize the inherent difficulty of a problem, since for any particular data distribution (Pmaj, Pmin) the best possible algorithm A to minimize the excess risk would be the trivial mapping AS = f ?(Ptest). Therefore, to prove meaningful lower bounds on the performance of algorithms we need to define the notion of minimax excess risk (see Wainwright, 2019, Chapter 15). Given a class of pairs of distributions P define Minimax Excess Risk($\mathcal{P}$) := $\inf\limits_{\mathcal{A}}\sup\limits_{(\mathsf{P}_{\mathsf{maj}},\mathsf{P}_{\mathsf{maj}})\in\mathcal{P}}$Excess Risk($\mathcal{A}$; ($\mathsf{P}_{\mathsf{maj}},\mathsf{P}_{\mathsf{min}}$)], where the infimum is over all measurable estimators A. The minimax excess risk is the excess risk of the "best" algorithm in the worst case over the class of problems defined by P. ## 4.1 Label Shift Lower Bounds We demonstrate the hardness of the label shift problem in general by establishing a lower bound on the minimax excess risk. Theorem 4.1. Consider the label shift setting described in Section 3.2.1. Recall that PLS *is the class of* pairs of distributions (Pmaj, Pmin) *that satisfy the assumptions in that section. The minimax excess risk over* this class is lower bounded as follows: $${\mathrm{Minimax~Excess~Risk}}(\mathcal{P}_{\mathrm{LS}})=\operatorname*{inf}_{\mathcal{A}~(\mathsf{P}_{m s},\mathsf{P}_{m n})\in\mathcal{P}_{\mathrm{LS}}}{\mathrm{Excess~Risk}}[\mathcal{A};(\mathsf{P}_{m s j},\mathsf{P}_{m i n})]\geq{\frac{1}{600}}{\frac{1}{n_{m i n}^{1/3}}}.$$ . (3) We establish this result in Appendix B. We show that rather surprisingly, the lower bound on the minimax excess risk scales only with the number of minority class samples nmin 1/3, and does not depend on nmaj. Intuitively, this is because any learner must predict which class-conditional distribution (P(x | 1) or P(x | −1)) assigns higher likelihood at that x. To interpret this result, consider the extreme scenario where nmaj → ∞ but nmin is finite. In this case, the learner has full information about the majority class distribution. However, $$\left(2\right)$$ $$(3)$$ the learning task continues to be challenging since any learner would be uncertain about whether the minority class distribution assigns higher or lower likelihood at any given x. This uncertainty underlies the reason why the minimax rate of classification is constrained by the number of minority samples nmin. We briefly note that, applying minimax lower bounds from the transfer learning literature (Maity et al., 2020, Theorem 3.1 with α = 1, β = 0 and d = 1) to our problem leads to a more optimistic lower bound of 1/n1/3. Our lower bounds that scale as 1/nmin 1/3, uncover the fact that only adding minority class samples helps reduce the risk. As noted above in the introduction, it is possible to obtain lower bounds for the label shift setting by applying bounds from the cost-sensitive classification literature. However, as we shall argue below they are loose and predict the incorrect trend when applied in this setting. Consider the result (Kamalaruban & Williamson, 2018, Theorem 4) which is a minimax lower bound for cost sensitive binary classification that applies to VC classses (which does not capture the nonparameteric setting studied here but it is illuminating to study how that bound scales with the imbalance ratio ρ = nmaj/nmin). Assume that the joint distribution during training is a mixture distribution given by P =ρ 1+ρ Pmaj +1 1+ρ Pmin so that on average the ratio of the number of samples from the majority and minority class is equal to ρ. Then by applying their lower bound we find that it scales with 1/(nρ) (see Appendix E for a detailed calculation). This scales inversely with ρ the imbalance ratio and incorrectly predicts that the problem gets easier as the imbalance is larger. In contrast, our lower bound scales with 1/nmin = (1 + ρ)/n, which correctly predicts that as the imbalance is larger, the minimax test error is higher. ## 4.2 Group-Covariate Shift Lower Bounds Next, we shall state our lower bound on the minimax excess risk that demonstrates the hardness of the group-covariate shift problem. Theorem 4.2. *Consider the group shift setting described in Section 3.2.2. Given any overlap* τ ∈ [0, 1] recall that PGS(τ ) *is the class of distributions such that* Overlap(Pmaj, Pmin) ≥ τ *. The minimax excess risk in* this setting is lower bounded as follows: Minimax Excess $\text{Risk}(\mathcal{P}_{\text{GS}}(\tau))=\inf_{\mathcal{A}}\sup_{(\mathbf{P}_{\text{min}},\mathbf{P}_{\text{min}})\in\mathcal{P}_{\text{GS}}(\tau)}\text{Excess}\text{Risk}[\mathcal{A};(\mathbf{P}_{\text{min}},\mathbf{P}_{\text{min}})]$ $$\geq\frac{1}{200(n_{\text{min}}\cdot(2-\tau)+n_{\text{min}}\cdot\tau)^{1/3}}\geq\frac{1}{200n_{\text{min}}{}^{1/3}(\rho\cdot\tau+2)^{1/3}},\tag{4}$$ where ρ = nmaj/nmin > 1. We prove this theorem in Appendix C. We see that in the *low overlap* setting (τ 1/ρ), the minimax excess risk is lower bounded by 1/nmin 1/3, and we are fundamentally constrained by the number of samples in minority group. To see why this is the case, consider the extreme example with τ = 0 where Pa has support [0, 0.5] and Pb has support [0.5, 1]. The nmaj majority group samples from Pa provide information about the correct label predict in the interval [0, 0.5] (the support of Pa). However, since the distribution P(y | x) is 1-Lipschitz in the worst case these samples provide very limited information about the correct predictions in [0.5, 1] (the support of Pb). Thus, predicting on the support of Pb requires samples from the minority group and this results in the nmin dependent rate. In fact, in this extreme case (τ = 0) even if nmaj → ∞, the minimax excess risk is still bounded away from zero. This intuition also carries over to the case when the overlap is small but non-zero and our lower bound shows that minority samples are much more valuable than majority samples at reducing the risk. On the other hand, when the overlap is high (τ 1/ρ) the minimax excess risk is lower bounded by 1/(nmin(2 − τ ) + nmajτ ) 1/3 and the extra majority samples are quite beneficial. This is roughly because the supports of Pa and Pb have large overlap and hence samples from the majority group are useful in helping make predictions even in regions where Pb is large. In the extreme case when τ = 1, we have that Pa = Pb and therefore recover the classic i.i.d. setting with no distribution shift. Here, the lower bound scales with 1/n1/3, as one might expect. Previous work on transfer learning with covariate shift has considered other more elaborate notions of *transferability* (Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019) than overlap between group distributions considered here. In the case of no overlap (τ = 0), previous results (Kpotufe & Martinet, 2018, Theorem 1 with α = 1, β = 0 and γ = ∞) yield the same lower bound of 1/nmin 1/3. On the other extreme, applying their result (Kpotufe & Martinet, 2018, Theorem 1 with α = 1, β = 0 and γ = 0) in the high transfer regime yields a lower bound on 1/n1/3. This result is aligned with the high overlap τ = 1 case that we consider here. Beyond these two edge cases of no overlap (τ = 0) and high overlap (τ = 1), our lower bound is key to drawing the simple complementary conclusion that even when overlap between group distributions is small as compared to 1/ρ, minority samples alone dictate the rate of convergence. ## 5 Upper Bounds On The Excess Risk For The Undersampled Binning Estimator We will show that an undersampled estimator matches the rates in the previous section showing that undersampling is an optimal robustness intervention. We start by defining the undersampling procedure and the undersampling binning estimator. Undersampling procedure. Given training data S := {(x1, y1), . . . ,(xn, yn)}, generate a new undersampled dataset SUS by - including all nmin samples from Smin and, - including nmin samples from Smaj by sampling uniformly at random without replacement. This procedure ensures that in the undersampled dataset SUS, the groups are balanced, and that |SUS| = 2nmin. The undersampling binning estimator defined next will first run this undersampling procedure to obtain SUS and just uses these samples to output a classifier. Undersampled binning estimator The undersampled binning estimator AUSB takes as input a dataset S and a positive integer K corresponding to the number of bins, and returns a classifier A S,K USB : [0, 1] *→ {−*1, 1}. This estimator is defined as follows: 1. First, we compute the undersampled dataset SUS. 2. Given this dataset SUS, let n1,j be the number of points with label +1 that lie in the interval Ij = [ j−1 K , j K ]. Also, define n−1,j analogously. Then set $$\mathcal{A}_{j}=\begin{cases}1&\text{if}n_{1,j}>n_{-1,j},\\ -1&\text{otherwise.}\end{cases}$$ 3. Define the classifier A S,K USB such that if x ∈ Ij then $${\mathcal{A}}_{\mathrm{USB}}^{S,K}(x)={\mathcal{A}}_{j}.$$ $\left(5\right)$. USB (x) = Aj . (5) Essentially in each bin Ij , we set the prediction to be the majority label among the samples that fall in this bin. Whenever the number of bins K is clear from the context we shall denote A S,K USB by ASUSB. Below we establish upper bounds on the excess risk of this simple estimator. ## 5.1 Label Shift Upper Bounds We now establish an upper bound on the excess risk of AUSB in the label shift setting (see Section 3.2.1). Below we let *c, C >* 0 be absolute constants independent of problem parameters like nmaj and nmin. Theorem 5.1. *Consider the label shift setting described in Section 3.2.1. For any* (Pmaj, Pmin) ∈ PLS the expected excess risk of the Undersampling Binning Estimator (Eq. (5)*) with number of bins with* K = cdnmin 1/3e *is upper bounded by* Excess Risk$[\mathcal{A}_{\text{USB}};(\mathsf{P}_{\text{maj}},\mathsf{P}_{\text{min}})]=\mathbb{E}_{S\sim\mathsf{P}_{\text{maj}}^{\text{\tiny{$\mathsf{P}_{\text{maj}}$}}}\times\mathsf{P}_{\text{ma}}^{\text{\tiny{$\mathsf{P}_{\text{maj}}$}}}}\left[R(\mathcal{A}_{\text{USB}}^{S};\mathsf{P}_{\text{test}})-R(f^{\star};\mathsf{P}_{\text{test}})\right]\leq\frac{C}{n_{\text{min}}^{1/3}}$. We prove this result in Appendix B. This upper bound combined with the lower bound in Theorem 4.1 shows that an undersampling approach is minimax optimal up to constants in the presence of label shift. Our analysis leaves open the possibility of better algorithms when the learner has additional information about the structure of the label shift beyond Lipschitz continuity. ## 5.2 Group-Covariate Shift Upper Bounds Next, we present our upper bounds on the excess risk of the undersampled binning estimator in the groupcovariate shift setting (see Section 3.2.2). In the theorem below, C > 0 is an absolute constant independent of the problem parameters nmaj, nmin and τ . Theorem 5.2. Consider the group shift setting described in Section 3.2.2. For any overlap τ ∈ [0, 1] and for any (Pmaj, Pmin) ∈ PGS(τ ) *the expected excess risk of the Undersampling Binning Estimator (Eq.* (5)) with number of bins with K = dnmin 1/3e is Excess $\mathsf{Risk}[\mathcal{A}_{\mathsf{USB}};(\mathsf{P}_{\mathsf{maj}},\mathsf{P}_{\mathsf{min}})]=\mathsf{E}_{S\sim\mathsf{P}_{\mathsf{maj}}^{\mathsf{P}_{\mathsf{maj}}}\times\mathsf{P}_{\mathsf{ma}}^{\mathsf{P}_{\mathsf{maj}}}}\left[R(\mathcal{A}_{\mathsf{USB}}^{S};\mathsf{P}_{\mathsf{test}}))-R(f^{\star};\mathsf{P}_{\mathsf{test}})\right]\leq\frac{C}{n_{\mathsf{min}}{}^{1/3}}$. We provide a proof for this theorem in Appendix C. Compared to the lower bound established in Theorem 4.2 which scales as 1/ ((2 − τ )nmin + nmajτ ) 1/3, the upper bound for the undersampled binning estimator always scales with 1/nmin 1/3since it operates on the undersampled dataset (SUS). Thus, we have shown that in the absence of overlap (τ 1/ρ = nmin/nmaj) there is an undersampling algorithm that is minimax optimal up to constants. However when there is high overlap (τ 1/ρ) there is a non-trivial gap between the upper and lower bounds: $${\frac{\mathrm{Upper~Bound}}{\mathrm{Lower~Bound}}}=c(\rho\cdot\tau+2)^{1/3}.$$ ## 6 Minority Sample Dependence In Practice Inspired by our worst-case theoretical predictions in nonparametric classification, we ask: how does the accuracy of neural network classifiers trained using robust algorithms evolve as a function of the majority and minority samples? To explore this question, we conduct a small case study using the imbalanced binary CIFAR10 dataset (Byrd & Lipton, 2019; Wang et al., 2022) that is constructed using the "cat" and "dog" classes. The test set consists of all of the 1000 cat and 1000 dog test examples. To form our initial train and validation sets, we take 2500 cat examples but only 500 dog examples from the official train set, corresponding to a 5:1 label imbalance. We then use 80% of those examples for training and the rest for validation. In our experiment, we either (a) add only minority samples; (b) add only majority samples; (c) add both majority and minority samples in a 5:1 ratio. We consider competitive robust classifiers proposed in the literature that are convolutional neural networks trained either by using (i) the importance weighted cross entropy loss, or (ii) the importance weighted VS loss (Kini et al., 2021). We early stop using the importance weighted validation loss in both cases. The additional experimental details are presented in Appendix G. ![9_image_0.png](9_image_0.png) Figure 2: Convolutional network classifiers trained on the Imbalanced Binary CIFAR10 dataset with a 5:1 label imbalance. (Top) Models trained using the importance weighted cross entropy loss with early stopping. (Bottom) Models trained using the importance weighted VS loss (Kini et al., 2021) with early stopping. We report the average test accuracy calculated on a balanced test set over 5 random seeds. We start off with 2500 cat examples and 500 dog examples in the training dataset. We find that in accordance with our theory, for both of the classifiers adding only minority class samples (red) leads to large gain in accuracy (~ 6%), while adding majority class samples (blue) leads to little or no gain. In fact, adding majority samples sometimes hurts test accuracy due to the added bias. When we add majority and minority samples in a 5:1 ratio (green), the gain is largely due to the addition of minority samples and is only marginally higher (< 2%) than adding only minority samples. The green curves correspond to the same classifiers in both the left and right panels. Our results in Figure 2 are generally consistent with our theoretical predictions. By adding only minority class samples the test accuracy of both classifiers increases by a great extent (6%), while by adding only majority class samples the test accuracy remains constant or in some cases even decreases owing to the added bias of the classifiers. When we add samples to both groups proportionately, the increase in the test accuracy appears to largely to be due to the increase in the number of minority class samples. We see this on the left panels, where the difference between adding only extra minority group samples (red) and both minority and majority group samples (green) is small. Thus, we find that the accuracy for these neural network classifiers is also constrained by the number of minority class samples. Similar conclusions hold for classifiers trained using the tilted loss (Li et al., 2020) and group-DRO objective (Sagawa et al., 2020) (see Appendix D). ## 7 Discussion We showed that undersampling is an optimal robustness intervention in nonparametric classification in the absence of significant overlap between group distributions or without additional structure beyond Lipschitz continuity. We worked in one dimension for the sake of clarity and it would be interesting to extend this study to higher dimensions. We focused on Lipschitz continuous distributions here, but it is also interesting to consider other forms of regularity such as Hölder continuity. At a high level our results highlight the need to reason about the specific structure in the distribution shift and design algorithms that are tailored to take advantage of this structure. This would require us to step away from the common practice in robust machine learning where the focus is to design "universal" robustness interventions that are agnostic to the structure in the shift. Alongside this, our results also dictate the need for datasets and benchmarks with the propensity for transfer from train to test time. ## Acknowledgments We would like to thank Ke Alexander Wang for his useful comments and feedback in the early stages of this project. We would also like to thank Shibani Santurkar and Dimitrios Tsipras for useful discussions and encouragement. Finally, we would like to thank the anonymous reviewers whose many helpful comments improved the paper. NC was supported by a SAIL Postdoctoral Fellowship and TH was supported by a gift from Open Philanthropy. ## References Michael Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen. Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In *Computer Vision* and Pattern Recognition (CVPR), 2019. Martin Arjovsky, Kamalika Chaudhuri, and David Lopez-Paz. Throwing away data improves worst-class error in imbalanced classification. *arXiv preprint arXiv:2205.11672*, 2022. Shai Ben-David and Ruth Urner. On the hardness of domain adaptation and the utility of unlabeled target samples. In *Algorithmic Learning Theory (ALT)*, 2012. Shai Ben-David and Ruth Urner. Domain adaptation–can quantity compensate for quality? Annals of Mathematics and Artificial Intelligence, 2014. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2006. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. *Machine learning*, 2010. Christopher Berlind and Ruth Urner. Active nearest neighbors in changing environments. In *International* Conference on Machine Learning (ICML), 2015. Su Lin Blodgett, Lisa Green, and Brendan O'Connor. Demographic dialectal variation in social media: A case study of african-american english. In *Empirical Methods in Natural Language Processing (EMNLP)*, 2016. Jonathon Byrd and Zachary Lipton. What is the effect of importance weighting in deep learning? In International Conference on Machine Learning (ICML), 2019. Clément Canonne. A short note on an inequality between KL and TV. *arXiv preprint arXiv:2202.07198*, 2022. Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019. Nitesh Chawla, Kevin Bowyer, Lawrence Hall, and Philip Kegelmeyer. Smote: Synthetic minority oversampling technique. *Journal of Artificial Intelligence Research*, 2002. Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In *Computer Vision and Pattern Recognition (CVPR)*, 2019. Shai Ben David, Tyler Lu, Teresa Luu, and Dávid Pál. Impossibility theorems for domain adaptation. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010. Luc Devroye and László Györfi. Nonparametric density estimation: the L1 *view*. Wiley Series in Probability and Mathematical Statistics, 1985. David Freedman and Persi Diaconis. On the histogram as a density estimator: L2 theory. *Zeitschrift für* Wahrscheinlichkeitstheorie und verwandte Gebiete, 1981. Steve Hanneke and Samory Kpotufe. On the value of target data in transfer learning. In *Advances in Neural* Information Processing Systems (NeurIPS), 2019. Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In *International Conference on Machine Learning (ICML)*, 2018. Dirk Hovy and Anders Søgaard. Tagging performance correlates with author age. In *Association for Computational Linguistics (ACL)*, 2015. Badr Youbi Idrissi, Martín Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In *Causal Learning and Reasoning*, 2022. Parameswaran Kamalaruban and Robert Williamson. Minimax lower bounds for cost sensitive classification. arXiv preprint arXiv:1805.07723, 2018. Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thrampoulidis. Labelimbalanced and group-sensitive classification under overparameterization. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021. Samory Kpotufe and Guillaume Martinet. Marginal singularity, and the benefits of labels in covariate-shift. In *Conference On Learning Theory (COLT)*, 2018. Miroslav Kubat and Stan Matwin. Addressing the curse of imbalanced training sets: one-sided selection. In International Conference on Machine Learning (ICML), 1997. Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. Tilted empirical risk minimization. In International Conference on Learning Representations (ICLR), 2020. Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. Detecting and correcting for label shift with black box predictors. In *International Conference on Machine Learning (ICML)*, 2018. Evan Liu, Behzad Haghgoo, Annie Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In International Conference on Machine Learning (ICML), 2021. Subha Maity, Yuekai Sun, and Moulinath Banerjee. Minimax optimal approaches to the label shift problem. arXiv preprint arXiv:2003.10443, 2020. Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. *Annals of Statistics*, 2006. Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In International Conference on Learning Representations (ICLR), 2020. Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, and Percy Liang. Distributionally robust neural networks. In *International Conference on Learning Representations (ICLR)*, 2020. Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. *Journal of Statistical Planning and Inference*, 2000. Rachael Tatman. Gender and dialect bias in youtube's automatic captions. In *ACL Workshop on Ethics in* Natural Language Processing, 2017. Alexandre Tsybakov. *Introduction to Nonparametric Estimation*. Springer, 2010. Martin Wainwright. *High-dimensional statistics: A non-asymptotic viewpoint*. Cambridge University Press, 2019. Byron Wallace, Kevin Small, Carla Brodley, and Thomas Trikalinos. Class imbalance, redux. In *International* Conference on Data Mining (ICDM), 2011. Ke Alexander Wang, Niladri Chatterji, Saminul Haque, and Tatsunori Hashimoto. Is importance weighting incompatible with interpolating classifiers? In International Conference on Learning Representations (ICLR), 2022. Larry Wasserman. Lecture notes in nonparametric classification, 2019. URL https://www.stat.cmu.edu/ ~larry/=sml/nonparclass.pdf. [Online; accessed 12-May-2022]. Wikipedia contributors. Poisson binomial distribution - Wikipedia, the free encyclopedia, 2022. URL https://en.wikipedia.org/w/index.php?title=Poisson_binomial_distribution& oldid=1071847908. [Online; accessed 5-May-2022]. Da Xu, Yuting Ye, and Chuanwei Ruan. Understanding the role of importance weighting for deep learning. In *International Conference on Learning Representations (ICLR)*, 2020. Yuzhe Yang and Zhi Xu. Rethinking the value of labels for improving class-imbalanced learning. Advances in Neural Information Processing Systems (NeurIPS), 2020. Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and compensating for feature deviation in imbalanced deep learning. *arXiv preprint arXiv:2001.01385*, 2020. ## A Technical Tools In this section we avail ourselves of some technical tools that shall be used in all of the proofs below. ## A.1 Reduction To Lower Bounds Over A Finite Class The lower bound on the minimax excess risk will be established via the usual route of first identifying a "hard" finite set of problem instances and then establishing the lower bound over this finite class. One difference from the usual setup in proving such lower bounds (see Wainwright, 2019, Chapter 15) is that the training samples are drawn from an imbalanced distribution, whereas the test samples are drawn from a balanced one. Let P be a class of pairs of distributions, where each element (Pmaj, Pmin) ∈ P is a pair of distributions over [0, 1] *× {−*1, 1}. As before, we let Ptest denote the uniform mixture over Pmaj and Pmin. We let V denote a finite index set. Corresponding to each element v ∈ V there is a Pv = (Pv,maj, Pv,min) ∈ P with Pv,test = (Pv,maj + Pv,min)/2. Finally, also define a pair of random variables (*V, S*) as follows: 1. V is a uniform random variable over the set V. 2. (S | V = v) ∼ P nmaj v,maj × P nmin v,min, is an independent draw of nmaj samples from Pv,maj and nmin samples from Pv,min. We shall let Q denote the joint distribution of the random variables (*V, S*), and let QS denote the marginal distribution of S. With this notation in place, we now present a lemma that lower bounds the minimax excess risk in terms of quantities defined over the finite class of "hard" instances Pv. Lemma A.1. Let the random variables (V, S) be as defined above. The minimax excess risk is lower bounded as follows: Minimax Excess Risk($\mathcal{P}$) = $\inf\limits_{\mathcal{A}}\sup\limits_{(\mathcal{P}_{\text{out}},\mathcal{P}_{\text{min}})\in\mathcal{P}}\mathbb{E}_{S\sim\mathcal{P}_{\text{out}}^{\text{max}}\prec\mathcal{P}_{\text{out}}^{\text{max}}}\left[R(\mathcal{A}^{S};\mathcal{P}_{\text{test}})-R(f^{*}(\mathcal{P}_{\text{test}});\mathcal{P}_{\text{test}})\right]$ $$\geq\mathfrak{R}_{\mathcal{V}}-\mathfrak{R}_{\mathcal{V}},$$ where RV and Bayes-error BV *are defined as* $\mathfrak{R}_{\mathcal{V}}:=\mathbb{E}_{S\sim\mathcal{Q}_{S}}[\inf_{h}\Pr_{(x,y)\sim\sum_{v\in\mathcal{V}}\mathcal{Q}(v|S)\Pr_{v,\text{test}}}(h(x)\neq y)],$ $\mathfrak{R}_{\mathcal{V}}:=\mathbb{E}_{V}[R(f^{\star}(\Pr_{V,\text{test}});\Pr_{V,\text{test}}))].$ Proof. By the definition of Minimax Excess Risk, Minimax Excess Risk = inf Asup (Pmaj,Pmin)∈P ES∼P nmaj maj ×P nmin min [R(A S; Ptest)] − R(f ?(Ptest); Ptest) ≥ inf A sup v∈V ES|v∼P nmaj v,maj×P nmin v,min [R(A S; Pv,test)] − R(f ?(Pv,test); Pv,test) ≥ inf A EV hES|V ∼P nmaj V,maj×P nmin V,min [R(A S; PV,test)] − R(f ?(PV,test); PV,test))i = inf A EV [ES|V ∼P nmaj V,maj×P nmin V,min [R(A S; PV,test)]] − EV [R(f ?(PV,test); PV,test))] | {z } =BV . We continue lower bounding the first term as follows inf A EV [ES|V ∼P nmaj V,maj×P nmin V,min [R(A S; PV,test)]] = inf A E(V,S)∼Q[ Pr (x,y)∼PV,test (A S(x) 6= y)] = inf A ES∼QS EV ∼Q(·|S)[ Pr (x,y)∼PV,test (A S(x) 6= y)] (i) ≥ ES∼QS [inf h EV ∼Q(·|S)[ Pr (x,y)∼PV,test (h(x) 6= y)]] = ES∼QS [inf hPr (x,y)∼Pv∈V Q(v|S)Pv,test (h(x) 6= y)] = RV , where (i) follows since AS is a fixed classifier given the sample set S. This, combined with the previous equation block completes the proof. ## A.2 The Hat Function And Its Properties In this section, we define the *hat function* and establish some of its properties. This function will be useful in defining "hard" problem instances to prove our lower bounds. Given a positive integer K the hat function is defined as $$\phi_{K}(x)=\begin{cases}\left|x+{\frac{1}{4K}}\right|-{\frac{1}{4K}}&{\mathrm{for~}}x\in\left[-{\frac{1}{2K}},0\right],\\ {\frac{1}{4K}}-\left|x-{\frac{1}{4K}}\right|&{\mathrm{for~}}x\in\left[0,{\frac{1}{2K}}\right],\\ 0&{\mathrm{otherwise.}}\end{cases}$$ (6) $$\begin{array}{l}\small\text{(Eq.(3))}\end{array}$$ . When K is clear from context, we omit the subscript. ![14_image_0.png](14_image_0.png) Figure 3: The hat function with K = 4. We first notice that this function is 1-Lipschitz and odd, so $$\int_{-{\frac{1}{2R}}}^{\frac{1}{2R}}\phi_{K}(x)\;\mathrm{d}x=0.$$ We also compute some other key quantities for φ. Lemma A.2. *For any positive integer* K, $$\int_{-{\frac{1}{2K}}}^{{\frac{1}{2K}}}|\phi_{K}(x)|\;\mathrm{d}x={\frac{1}{8K^{2}}}.$$ Proof. We suppress K in the notation. We have that, Z 1 2K − 1 2K |φ(x)| dx = Z 0 − 1 2K 1 4K − x +1 4K dx + Z 1 2K 0 x −1 4K − 1 4K dx. 1 The integrand 1 4K − x +1 4K over x ∈-− 2K , 0defines a triangle with base 1 2K and height 1 4K , thus it has area 1 16K2 . Therefore, $$\int_{-{\frac{1}{2K}}}^{0}\left|{\frac{1}{4K}}-\left|x+{\frac{1}{4K}}\right|\right|\,\mathrm{d}x={\frac{1}{16K^{2}}}.$$ The same holds for the second term. Thus, by adding them up we get that R 1 2K − 1 2K |φ(x)| dx =1 8K2 . Lemma A.3. *For any positive integer* K, $$\int_{0}^{\frac{1}{K}}\log\left(\frac{1+\phi_{K}(x-\frac{1}{2K})}{1-\phi_{K}(x-\frac{1}{2K})}\right)\left(1+\phi_{K}\left(x-\frac{1}{2K}\right)\right)\;\mathrm{d}x\leq\frac{1}{3K^{3}}$$ and $$\int_{0}^{\frac{1}{K}}\log\left(\frac{1-\phi_{K}(x-\frac{1}{2K})}{1+\phi_{K}(x-\frac{1}{2K})}\right)\left(1-\phi_{K}\left(x-\frac{1}{2K}\right)\right)\;\mathrm{d}x\leq\frac{1}{3K^{3}}.$$ Proof. Let us suppress K in the notation. We prove the first bound below and the second bound follows by an identical argument. We have that Z 1K 0 log 1 + φ(x −1 2K ) 1 − φ(x −1 2K ) 1 + φ x −1 2K dx = Z 1 2K − 1 2K log 1 + φ(x) 1 − φ(x) (1 + φ(x)) dx = Z 1 2K 0 log 1 + φ(x) 1 − φ(x) (1 + φ(x)) dx + Z 0 − 1 2K log 1 + φ(x) 1 − φ(x) (1 + φ(x)) dx = Z 1 2K 0 log 1 + φ(x) 1 − φ(x) (1 + φ(x)) dx − Z 0 1 2K log 1 + φ(−x) 1 − φ(−x) (1 + φ(−x)) dx = Z 1 2K 0 log 1 + φ(x) 1 − φ(x) (1 + φ(x)) dx + Z 1 2K 0 log 1 − φ(x) 1 + φ(x) (1 − φ(x)) dx, where the last equality follows since φ is an odd function. Now, we may collect the integrands to get that, ows since $\psi$ is an odd function. Now, we may connect the $\psi$-function. $$\int_{0}^{\frac{1}{K}}\log\left(\frac{1+\phi(x-\frac{1}{2K})}{1-\phi(x-\frac{1}{2K})}\right)\left(1+\phi\left(x-\frac{1}{2K}\right)\right)\,\mathrm{d}x$$ $$=2\int_{0}^{\frac{1}{2K}}\log\left(\frac{1+\phi(x)}{1-\phi(x)}\right)\phi(x)\,\mathrm{d}x$$ $$=2\int_{0}^{\frac{1}{2K}}\log\left(1+\frac{2\phi(x)}{1-\phi(x)}\right)\phi(x)\,\mathrm{d}x$$ $$\leq2\int_{0}^{\frac{1}{2K}}\frac{2\phi(x)^{2}}{1-\phi(x)}\,\mathrm{d}x,$$ $\psi$-function. where the last inequality follows since log(1 + x) ≤ x for all x. Now we observe that φ(x) ≤ x ≤ 1 2 for x ∈ [0, 1 2K ], and in particular, 1 1−φ(x) ≤ 2. Thus, $$\int_{0}^{\frac{1}{R}}$$ $\frac{1}{K}$ $\log\left(\frac{1+\phi(x-\frac{1}{2K})}{1-\phi(x-\frac{1}{2K})}\right)\left(1+\phi\left(x-\frac{1}{2K}\right)\right)\;\mathrm{d}x$ $\leq8\int_{0}^{\frac{1}{2K}}\phi(x)^{2}\;\mathrm{d}x$ $\leq8\int_{0}^{\frac{1}{2K}}x^{2}\;\mathrm{d}x$ $=\frac{1}{3K^{3}}$. This proves the first bound. The second bound follows analogously. ## B Proofs In The Label Shift Setting Throughout this section we operate in the label shift setting (Section 3.2.1). First, in Appendix B.1 through a sequence of lemmas we prove the minimax lower bound Theorem 4.1. Next, in Appendix B.2 we prove Theorem 5.1 which is an upper bound on the excess risk of the undersampled binning estimator (see Eq. (5)) with dnmine 1/3 bins by invoking previous results on nonparametric density estimation (Freedman & Diaconis, 1981; Devroye & Györfi, 1985). ## B.1 Proof Of Theorem 4.1 In this section, we provide a proof of the minimax lower bound in the label shift setting. $$\square$$ We will proceed by constructing a class of distributions where the separation between any two distributions in the class is small enough such that it is hard to distinguish between them with finite minority class samples. In particular, we split the interval [0, 1] into sub-intervals and each class distribution on each sub-interval either has slightly more probability mass on the left side of the sub-interval, on the right, or completely uniform. Since the minority class sample size is limited, no classifier will be able to tell which distribution the minority class is generated from, and hence will suffer high excess risk. We construct the "hard" set of distributions as follows. Fix K to be an integer that will be specified in the sequel as a function of nmin. Let the index set be V = {−1, 0, 1} K *× {−*1, 0, 1} K. For v ∈ V, we will let v1 *∈ {−*1, 0, 1} K be the first K coordinates and v−1 *∈ {−*1, 0, 1} K be the last K coordinates. That is, v = (v1, v−1). For every v ∈ P we shall define pair of class-conditional distributions Pv,1 and Pv,−1 as follows: for x ∈ Ij = [ j−1 K , j K ], $$\begin{array}{c}{{\sf P_{v,1}(x)=1+v_{1,j}\phi\left(x-\frac{j+1/2}{K}\right)}}\\ {{\sf P_{v,-1}(x)=1+v_{-1,j}\phi\left(x-\frac{j+1/2}{K}\right),}}\end{array}$$ where φ is defined in Eq. 6. Notice that Pv,1 only depends on v1 while Pv,−1 only depends on v−1. We continue to define $$\begin{array}{l}{{\mathrm{P}_{v,\operatorname*{maj}}(x,y)=\mathrm{P}_{v,1}(x){\bf1}(y=1)}}\\ {{\mathrm{P}_{v,\operatorname*{min}}(x,y)=\mathrm{P}_{v,-1}(x){\bf1}(y=-1),}}\end{array}$$ and $$\mathsf{P}_{v,\mathsf{test}}(x,y)={\frac{\mathsf{P}_{v,\mathsf{maj}}(x,y)+\mathsf{P}_{v,\mathsf{min}}(x,y)}{2}}={\frac{\mathsf{P}_{v,1}(x)\mathbf{1}(y=1)+\mathsf{P}_{v,-1}(x)\mathbf{1}(y=-1)}{2}}.$$ Observe that in the test distribution it is equally likely for the label to be +1 or −1. Recall that as described in Section A.1, V shall be a uniform random variable over V and S | V ∼ P nmaj v,maj × P nmin v,min. We shall let Q denote the joint distribution of (*V, S*) and let QS denote the marginal over S. With this construction in place, we first show that the minimax excess risk is lower bounded as follows. Lemma B.1. For any positive integers K, nmaj, nmin*, the minimax excess risk is lower bounded as follows:* Minimax Excess Risk($P_{1,\text{S}}$) $$=\inf_{\text{\tiny$\mathcal{A}$}}\sup_{(P_{\text{\tiny best}}),P_{\text{\tiny best}})\in P_{1,\text{S}}}\mathbb{E}_{S\sim\Phi_{\text{\tiny best}}^{\text{\tiny best}}\prec\Phi_{\text{\tiny best}}^{\text{\tiny best}}}\left[R(\mathcal{A}^{S};\mathsf{P}_{\text{\tiny best}})-R(f^{*};\mathsf{P}_{\text{\tiny best}})\right]$$ $$\geq\frac{1}{36K}-\frac{1}{2}\mathbb{E}_{S\sim\mathsf{Q}_{S}}\left[\text{TV}\left(\sum_{v\in\mathcal{V}}\mathsf{Q}(v\mid S)\mathsf{P}_{v,1},\sum_{v\in\mathcal{V}}\mathsf{Q}(v\mid S)\mathsf{P}_{v,-1}\right)\right].\tag{7}$$ Proof. By invoking Lemma A.1 we get that Minimax Excess Risk($\mathcal{P}_{\mathsf{LS}}$) $$\geq\underbrace{\mathbb{E}_{S\sim\mathcal{Q}_{S}}[\inf_{h\ (x,y)\sim\sum_{v\in\mathcal{V}}\mathcal{Q}^{(v|S)\mathsf{P}_{v,\mathsf{int}}}}(h(x)\neq y)]}_{=:\mathcal{P}_{\mathsf{IV}}}-\underbrace{\mathbb{E}_{V}[R(f^{*}(\mathsf{P}_{V,\mathsf{int}});\mathsf{P}_{V,\mathsf{int}})]}_{=:\mathcal{P}_{\mathsf{IV}}}.$$ In calculation, the term $\mathsf{P}_{\mathsf{LS}}$ is $f_{\mathsf{SS}}\mathcal{Q}_{\mathsf{S}}$, and $\mathcal{P}_{\mathsf{V}}$ is $f_{\mathsf{SS}}\mathcal{Q}_{\mathsf{S}}$ is $f_{\mathsf{SS}}\mathcal{Q}_{\mathsf{S}}$. We proceed by calculating alternate expressions for RV and BV to get our desired lower bound on the minimax excess risk. Calculation of RV : Immediately by Le Cam's lemma (Wainwright, 2019, Eq. 15.13), we get that $$\mathfrak{R}_{V}=\mathbb{E}_{S\sim\Omega_{S}}\left[\inf_{h\ (x,y)\sim\sum_{v\in V}\mathsf{Q}(v|S)\mathsf{P}_{v,m}}(h(x)\neq y)\right]$$ $$=\frac{1}{2}\mathbb{E}_{S\sim\mathsf{Q}_{S}}\left[1-\mathrm{TV}\left(\sum_{v\in V}\mathsf{Q}(v\mid S)\mathsf{P}_{v,1},\sum_{v\in V}\mathsf{Q}(v\mid S)\mathsf{P}_{v,-1}\right)\right].\tag{8}$$ Calculation of BV : Again by invoking Le Cam's lemma (Wainwright, 2019, Eq. 15.13), we get that for any class conditional distributions P1, P−1, $$R(f^{\star};{\mathsf{P_{\mathrm{test}}}})={\frac{1}{2}}-{\frac{1}{2}}\mathrm{TV}({\mathsf{P_{1}}},{\mathsf{P_{-1}}}).$$ So by taking expectations, we get that $$\mathcal{B}_{V}=\mathbb{E}_{V}[R(f^{*}(\mathsf{P}_{V,\mathrm{test}});\mathsf{P}_{V,\mathrm{test}})]=\mathbb{E}_{V}\left[\frac{1}{2}-\frac{1}{2}\mathrm{TV}(\mathsf{P}_{V,1},\mathsf{P}_{V,-1})\right].\tag{1}$$ We now compute EV [TV(PV,1, PV,−1)] as follows: EV [TV(PV,1, PV,−1)] = 12 EV Z 1 x=0 |PV,1(x) − PV,−1(x)| dx = 1 2 EV j=1 Z jK j−1 K |V1,j − V−1,j | φ x − j + 1/2 K dx X K = 1 2 X K j=1 EV "Z jK j−1 K |V1,j − V−1,j | φ x − j + 1/2 K dx # (i) =1 16K2 X K j=1 EV [|V1,j − V−1,j |], $$({\mathfrak{g}})$$ where (i) follows by Lemma A.2. Observe that V1,j , V−1,j are independent uniform random variables on {−1, 0, 1}, it is therefore straightforward to compute that $$\mathbb{E}_{V}[|V_{1,j}-V_{-1,j}|]={\frac{8}{9}}.$$ This yields that $$\mathbb{E}_{V}\left[\mathrm{TV}(\mathsf{P}_{V,1},\mathsf{P}_{V,-1})\right]={\frac{1}{18K}}.$$ Plugging this into Eq. (9) allows us to conclude that $$\mathfrak{B}_{V}=\mathbb{E}_{V}[R(f^{\star}(\mathsf{P}_{V,\text{test}});\mathsf{P}_{V,\text{test}})]=\frac{1}{2}\left(1-\frac{1}{18K}\right).\tag{10}$$ Combining Eqs. (8) and (10) establishes the claimed result. In light of this previous lemma we now aim to upper bound the expected total variation distance in Eq. (7). Lemma B.2. Suppose that v *is drawn uniformly from the set* {−1, 1} K, and that S | v *is drawn from* P nmaj v,maj × P nmin v,min *then,* $$\mathbb{E}_{S}\left[\mathrm{TV}\left(\sum_{v\in\mathcal{V}}\mathbb{Q}(v\mid S)\mathbb{P}_{v,1},\sum_{v\in\mathcal{V}}\mathbb{Q}(v\mid S)\mathbb{P}_{v,-1}\right)\right]\leq{\frac{1}{18K}}-{\frac{1}{144K}}\exp\left(-{\frac{n_{\mathrm{min}}}{3K^{3}}}\right).$$ 18 Proof. Let ψ := ES -TV Pv∈V Q(v | S)Pv,1,Pv∈V Q(v | S)Pv,−1 . Then, ψ = ES " TV X v∈V Q(v | S)Pv,1, X v∈V Q(v | S)Pv,−1 !# x=0 X v∈V Q(v | S) (Pv,1(x) − Pv,−1(x)) dx # = 1 2 ES "Z 1 = 1 2 ES X v∈V Q(v | S) (Pv,1(x) − Pv,−1(x)) dx j=1 Z jK X K x= j−1 K = 1 2 ES X v∈V Q(v | S)(v1,j − v−1,j )φ x − j + 1/2 K dx j=1 Z jK X K , x= j−1 K where the last equality is by the definition of Pv,1 and Pv,−1. Continuing we get that, ψ = 1 2 X K j=1 "Z jK x= j−1 K φ x − j + 1/2 K dx #ES " X v∈V Q(v | S)(v1,j − v−1,j ) # (i) =1 16K2 ES j=1 X v∈V Q(v | S)(v1,j − v−1,j ) X K =1 16K2 X K j=1 Z X v∈V Q(v | S)(v1,j − v−1,j ) dQS(S) =1 16K2 X K j=1 Z X v∈V Q(v, S)(v1,j − v−1,j ) dS (ii) =1 16K2|V| X K j=1 Z X v∈V Q(S | v)(v1,j − v−1,j ) dS, where (i) follows by the calculation in Lemma A.2 and (ii) follows since v is a uniform random variable over the set V. The distributions Pv,1 and Pv,−1 are symmetrically defined over all intervals Ij = [ j−1 K , j K ], and hence all of the summands in the RHS above are equal. Thus, $$\psi=\frac{1}{16K|\mathcal{V}|}\int\left|\sum_{v\in\mathcal{V}}\mathsf{Q}(S\mid v)(v_{1,1}-v_{-1,1})\right|\;\mathrm{d}S.\tag{11}$$ Before we continue further, let us define $${\mathcal{V}}^{+}=\{v\in{\mathcal{V}}\mid v_{1,1}>v_{-1,1}\}.$$ For every v ∈ V+, let v˜ ∈ V be such that is the same as v on all coordinates, except v˜1,1 = −v1,1 and v˜−1,1 = −v−1,1. Then continuing from Eq. (11) we find that, ψ (i) =1 16K|V| Z X v∈V+ (v1,1 − v−1,1)(Q(S | v) − Q(S | v˜)) dS (ii) ≤1 16K|V| Z X v∈V+ (v1,1 − v−1,1)|Q(S | v) − Q(S | v˜)| dS =1 16K|V| X v∈V+ (v1,1 − v−1,1) Z|Q(S | v) − Q(S | v˜)| dS =1 8K|V| X v∈V+ (v1,1 − v−1,1)TV(Q(S | v), Q(S | v˜)) , (12) | {z } =:Ξ where (i) we use the definition of V + and v˜, (ii) follows since v1,1 > v−1,1 for v ∈ V+. Now we further partition V + into 3 sets V (1,0), V (0,−1), V (1,−1) as follows $$\begin{array}{l}{{\mathcal{V}^{(1,0)}=\{v\in\mathcal{V}\mid v_{1,1}=1,v_{-1,1}=0\},}}\\ {{\mathcal{V}^{(0,-1)}=\{v\in\mathcal{V}\mid v_{1,1}=0,v_{-1,1}=-1\},}}\\ {{\mathcal{V}^{(1,-1)}=\{v\in\mathcal{V}\mid v_{1,1}=1,v_{-1,1}=-1\}.}}\end{array}$$ Note that Q(S | v) = P nmaj v,maj × P nmin v,min, and therefore Ξ = X v∈V+ (v1,1 − v−1,1)TV P nmaj v,maj × P nmin v,min, P nmaj v, ˜ maj × P nmin v, ˜ min (i) =X v∈V(1,0) TV P nmaj v,maj × P nmin v,min, P nmaj v, ˜ maj × P nmin v, ˜ min v∈V(0,−1) TV P nmaj v,maj × P nmin v,min, P nmaj v, ˜ maj × P nmin v, ˜ min +X v∈V(1,−1) TV P nmaj v,maj × P nmin v,min, P nmaj v, ˜ maj × P nmin v, ˜ min, (13) + 2 X where (i) follows since v1, v−1 *∈ {−*1, 0, 1} K and by the definition of the sets V (1,0), V (0,1) and V (1,−1). Now by the Bretagnolle–Huber inequality (see Canonne, 2022, Corollary 4), TV P nmaj v,maj × P nmin v,min, P nmaj v, ˜ maj × P nmin v, ˜ min= TV P nmaj v, ˜ maj × P nmin v, ˜ min, P nmaj v,maj × P nmin v,min ≤ 1 − 1 2 exp −KL P nmaj v, ˜ maj × P nmin v, ˜ minkP nmaj v,maj × P nmin v,min , where we flip the arguments in the first step for simplicity later. Next, by the chain rule for KL-divergence, we have that $\mathrm{KL}(\mathsf{P}^{n_{\mathrm{maj}}}_{\hat{v},\mathrm{maj}}\times\mathsf{P}^{n_{\mathrm{maj}}}_{\hat{v},\mathrm{min}}\|\mathsf{P}^{n_{\mathrm{maj}}}_{\hat{v},\mathrm{maj}}\times\mathsf{P}^{n_{\mathrm{maj}}}_{\hat{v},\mathrm{min}})=n_{\mathrm{maj}}\mathrm{KL}(\mathsf{P}_{\hat{v},\mathrm{maj}}\|\mathsf{P}_{v,\mathrm{maj}})+n_{\mathrm{min}}\mathrm{KL}(\mathsf{P}_{\hat{v},\mathrm{min}}\|\mathsf{P}_{v,\mathrm{min}})$. Using these, let us upper bound the first term in Eq. (13) corresponding to v ∈ V(0,−1). For v ∈ V(0,−1), notice that KL(Pv, ˜ majkPv,maj) = 0 since v1,j = ˜v1,j for all j ∈ {1*, . . . , K*}. For the second term, KL(Pv, ˜ minkPv,min), only v1,1 and v˜1,1 differ, so $$\mathrm{KL}(\mathsf{P}_{\bar{v},\min}\|\mathsf{P}_{v,\min})=\int_{0}^{1}\mathsf{P}_{v,-1}(x)\log\left(\frac{\mathsf{P}_{v,-1}(x)}{\mathsf{P}_{\bar{v},-1}(x)}\right)\ \mathrm{d}x$$ $$=\int_{0}^{\frac{1}{K}}\log\left(\frac{1+\phi_{K}(x-\frac{1}{2K})}{1-\phi_{K}(x-\frac{1}{2K})}\right)\left(1+\phi_{K}\left(x-\frac{1}{2K}\right)\right)\ \mathrm{d}x$$ $$\leq\frac{1}{3K^{3}},$$ where the last inequality is a result of the calculation in Lemma A.3. Therefore, we get $$\sum_{v\in\mathcal{V}^{(0,-1)}}\mathrm{TV}\left(\mathsf{P}_{v,\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{v,\mathrm{min}}^{n_{\mathrm{min}}},\mathsf{P}_{v,\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{v,\mathrm{min}}^{n_{\mathrm{min}}}\right)\leq9^{K-1}\left(1-\frac{1}{2}\exp\left(-\frac{n_{\mathrm{min}}}{3K^{3}}\right)\right).$$ For the terms in Eq. (13) corresponding to V (0,−1), V (1,−1), we simply take the trivial bound to get $$\sum_{v\in\mathcal{V}^{(0,-1)}}\mathrm{TV}\left(\mathsf{P}_{v,\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{v,\mathrm{min}}^{n_{\mathrm{min}}},\mathsf{P}_{\overline{v},\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{\overline{v},\mathrm{min}}^{n_{\mathrm{min}}}\right)\leq9^{K-1},$$ $$\sum_{v\in\mathcal{V}^{(1,-1)}}\mathrm{TV}\left(\mathsf{P}_{v,\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{v,\mathrm{min}}^{n_{\mathrm{min}}},\mathsf{P}_{\overline{v},\mathrm{maj}}^{n_{\mathrm{maj}}}\times\mathsf{P}_{\overline{v},\mathrm{min}}^{n_{\mathrm{min}}}\right)\leq9^{K-1}.$$ Plugging these bounds into Eq. (13) we get that, $$\Xi\leq4\cdot9^{K-1}-\frac{9^{K-1}}{2}\exp\left(-\frac{n_{\mathrm{min}}}{3K^{3}}\right).$$ Now using this bound on Ξ in Eq. (12) and observing that |V| = 9K, we get that, $$\begin{split}\psi&=\mathbb{E}_{S}\left[\mathrm{TV}\left(\sum_{v\in\mathcal{V}}Q(v\mid S)P_{v,1},\sum_{v\in\mathcal{V}}Q(v\mid S)P_{v,-1}\right)\right]\\ &\leq\frac{1}{8\cdot9^{K}K}\left(4\cdot9^{K-1}-\frac{9^{K-1}}{2}\exp\left(-\frac{n_{\mathrm{min}}}{3K^{3}}\right)\right)\\ &=\frac{1}{18K}-\frac{1}{144K}\exp\left(-\frac{n_{\mathrm{min}}}{3K^{3}}\right),\end{split}$$ completing the proof. Finally, we combine Lemma B.1 and Lemma B.2 to establish the minimax lower bound in this label shift setting. We recall the statement of the theorem here. Theorem 4.1. Consider the label shift setting described in Section 3.2.1. Recall that PLS is the class of pairs of distributions (Pmaj, Pmin) that satisfy the assumptions in that section. The minimax excess risk over this class is lower bounded as follows: Minimax Excess Risk($P_{15}$) = $\inf\limits_{A}\sup\limits_{(P_{\text{m5}},P_{\text{m5}})\in P_{15}}$Excess Risk[$A$; $(P_{\text{m5}},P_{\text{m5}})$] $\geq\frac{1}{600}\frac{1}{n_{\text{min}}\,^{1/3}}$. Proof. By Lemma B.1 we know that, Minimax Excess $\mathsf{Risk}(\mathcal{P}_{1S})\geq\frac{1}{36K}-\frac{1}{2}\mathbb{E}_{S\sim\mathsf{Q}_{S}}\left[\mathrm{TV}\left(\sum_{v\in\mathcal{V}}\mathsf{Q}(v\mid S)\mathsf{P}_{v,1},\sum_{v\in\mathcal{V}}\mathsf{Q}(v\mid S)\mathsf{P}_{v,-1}\right)\right].$ Next by the calculation in Lemma B.2 we have that Minimax Excess Risk($\mathcal{P}_{\text{LS}}$) $\geq\frac{1}{36K}-\frac{1}{2}\left(\frac{1}{18K}-\frac{1}{144K}\exp\left(-\frac{n_{\text{min}}}{3K^{3}}\right)\right)$ $=\frac{1}{288K}\exp\left(-\frac{n_{\text{min}}}{3K^{3}}\right).$ Setting K = dnmin 1/3e yields the following Minimax Excess Risk($\mathcal{P}_{\text{L5}}$) $\geq\frac{1}{288\lceil n_{\text{min}}{}^{1/3}\rceil}\exp\left(-\frac{n_{\text{min}}}{3\lceil n_{\text{min}}{}^{1/3}\rceil^{3}}\right)$ $$\geq\frac{\exp\left(-\frac{n_{\text{min}}}{3\lceil n_{\text{min}}{}^{1/3}\rceil^{3}}\right)}{288}\frac{n_{\text{min}}{}^{1/3}}{\left\lceil n_{\text{min}}{}^{1/3}\right\rceil}\frac{1}{n_{\text{min}}{}^{1/3}}$$ $$\overset{(i)}{\geq}\frac{0.7\exp\left(-\frac{1}{3}\right)}{288}\frac{1}{n_{\text{min}}{}^{1/3}}$$ $$\geq\frac{1}{600}\frac{1}{n_{\text{min}}{}^{1/3}},$$ where (i) follows since nmin 1/3/dnmin 1/3e ≥ 0.7 for nmin ≥ 1. ## B.2 Proof Of Theorem 5.1 In this section, we derive an upper bound on the excess risk of the undersampled binning estimator AUSB (Eq. (5)) in the label shift setting. Recall that given a dataset S this estimator first calculates the undersampled dataset SUS, where the number of points from the minority group (nmin) is equal to the number of points from the majority group (nmin), and the size of the dataset is 2nmin. Throughout this section, (Pmaj, Pmin) shall be an arbitrary element of PLS. To bound the excess risk of the undersampling algorithm, we will relate it to density estimation. Recall that n1,j denotes the number of points in SUS with label +1 that lie in Ij , and n−1,j is defined analogously. Given a positive integer K, for x ∈ Ij = [ j−1 K , j K ], by the definition of the undersampled binning estimator (Eq. (5)) $$\mathcal{A}_{\mathrm{USB}}^{S}(x)=\left\{\begin{matrix}1\\ -1\end{matrix}\right.$$ USB(x) = (1 if n1,j > n−1,j , $$\mathrm{if}\ n_{1,j}>n_{-1,j},$$ $$\mathrm{otherwise.}$$ Recall that since we have undersampled, Pj n1,j =Pj n−1,j = nmin. Therefore, define the simple histogram estimators for P1(x) = P(x | y = 1) and P−1(x) = P(x | y = −1) as follows: for x ∈ Ij , $${\widehat{\mathrm{P}}}_{1}^{S}(x):={\frac{n_{1,j}}{K n_{\mathrm{min}}}}\quad{\mathrm{and}}$$ Knminand PbS−1 $$1\quad{\widehat{\mathrm{P}}}_{-1}^{\mathcal{S}}(x):={\frac{n_{-1,j}}{K n_{\operatorname*{min}}}}.$$ With this histogram estimator in place, we may define an estimator for η(x) := Ptest(y = 1|x) as follows, $$\widehat{\eta}^{\mathcal{S}}(x):=\frac{\widehat{\mathsf{P}}_{1}^{\mathcal{S}}(x)}{\widehat{\mathsf{P}}_{1}^{\mathcal{S}}(x)+\widehat{\mathsf{P}}_{-1}^{\mathcal{S}}(x)}.$$ Observe that, for x ∈ Ij $$\widehat{\eta}^{\mathcal{S}}(x)>1/2\iff n_{1,j}>n_{-1,j}\iff\mathcal{A}_{0\mathrm{SB}}^{\mathcal{S}}(x)=1.$$ Defining an estimator ηb S for the Ptest(y = 1 | x) in this way will allow us to relate the excess risk of AUSB to the estimation error in PbS 1 and PbS−1 . Before proving the theorem we restate it here. Theorem 5.1. *Consider the label shift setting described in Section 3.2.1. For any* (Pmaj, Pmin) ∈ PLS the expected excess risk of the Undersampling Binning Estimator (Eq. (5)*) with number of bins with* K = cdnmin 1/3e *is upper bounded by* Excess Risk$[\mathcal{A}_{\text{USB}};(\mathsf{P}_{\text{maj}},\mathsf{P}_{\text{min}})]=\mathbb{E}_{S\sim\mathsf{P}_{\text{maj}}^{\circ}\ll\mathsf{P}_{\text{max}}^{\circ}}\big{[}R(\mathcal{A}_{\text{USB}}^{S};\mathsf{P}_{\text{test}})-R(f^{\star};\mathsf{P}_{\text{test}})\big{]}\leq\frac{C}{n_{\text{min}}{}^{1/3}}$. Proof. By the definition of the excess risk Excess Risk[AUSB; (Pmaj, Pmin)] := ES∼P nmaj maj ×P nmin min -R(A S USB; Ptest)) − R(f ?; Ptest). By invoking (Wasserman, 2019, Theorem 1) we may upper bound the excess risk given a draw of S by $$R({\mathcal{A}}_{\mathrm{USB}}^{S};\mathsf{P}_{\mathrm{test}}))-R(f^{\star};\mathsf{P}_{\mathrm{test}})\leq2\int\left|{\widehat{\eta}}^{S}(x)-\eta(x)\right|\mathsf{P}_{\mathrm{test}}(x)\,\mathrm{d}x.$$ Continuing using the definition of ηb S above and because η = P1/(P1 + P−1) we have that, R(A S USB; Ptest)) − R(f ?; Ptest) = 2 Z 1 0 PbS 1 (x) PbS 1 (x) + PbS−1 (x) −P1(x) P1(x) + P−1(x) P1(x) + P−1(x) 2 dx = Z 1 0 P1(x) + P−1(x) PbS 1 (x) + PbS−1 (x) ! PbS 1 (x) − P1(x) dx (i) ≤ Z 1 0 PbS 1 (x) − P1(x) dx + Z 1 0 P1(x) + P−1(x) PbS 1 (x) + PbS−1 (x) − 1 PbS 1 (x) dx = Z 1 0 PbS 1 (x) − P1(x) dx + Z 1 0 PbS 1 (x) + PbS−1 (x) − P1(x) − P−1(x) PbS 1 (x) PbS 1 (x) + PbS−1 (x) dx ≤ 2 Z 1 0 PbS 1 (x) − P1(x) dx + Z 1 0 PbS−1 (x) − P−1(x) dx (ii) ≤ 2 sZ 1 0 PbS 1 (x) − P1(x) 2dx + sZ 1 0 PbS−1 (x) − P−1(x) 2dx, where (i) follows by the triangle inequality, (ii) is by the Cauchy–Schwarz inequality. Taking expectation over the samples S and by invoking Jensen's inequality we find that, Excess Risk($\mathcal{A}^{\mathcal{S}};(\mathsf{P_{maj}},\mathsf{P_{min}})$) $$=\mathbb{E}_{\mathcal{S}}\left[R(\mathcal{A}^{\mathcal{S}}_{\mathsf{SS}};\mathsf{P_{test}})\right]-R(f^{*};\mathsf{P_{test}})\right]$$ $$\leq2\sqrt{\mathbb{E}_{\mathcal{S}}\left[\int\left(\widehat{\mathsf{P}}^{\mathcal{S}}_{1}(x)-\mathsf{P}_{1}(x)\right)^{2}\,\mathrm{d}x\right]}+\sqrt{\mathbb{E}_{\mathcal{S}}\left[\int\left(\widehat{\mathsf{P}}^{\mathcal{S}}_{-1}(x)-\mathsf{P}_{-1}(x)\right)^{2}\,\mathrm{d}x\right]}.$$ We note that PbS jonly depends on nmin i.i.d. draws from class j. Thus by (Freedman & Diaconis, 1981, Theorem 1.7), if K = cdnmine 1/3then $$\mathbb{E}s\left[\int\left({\widehat{\mathrm{P}}}_{j}^{S}(x)-\mathsf{P}_{j}(x)\right)^{2}\,\mathrm{d}x\right]\leq{\frac{C}{n_{\operatorname*{min}}{}^{2/3}}}.$$ Plugging this into the previous inequality yields the desired result. ## C Proof In The Group-Covariate Shift Setting Throughout this section we operate in the group-covariate shift setting (Section 3.2.2). We will proceed similarly to Section B. We shall construct a family of class-conditional distributions such that it will be necessary for adequate samples in each sub-interval of [0, 1] to be able to learn the maximally likely label in that sub-interval. On the other hand, we will construct the group-covariate distributions to be separated from one another. As a consequence, sub-intervals with high probability mass under the minority group distribution will have low probability mass under the majority group distribution. Hence, these subintervals will not have enough training sample points for any classifier to be able to learn the maximally likely label and as a result shall suffer high excess risk. First in Appendix C.1, we prove Theorem 4.2, the minimax lower bound through a sequence of lemmas. Second in Appendix C.2, we prove Theorem 5.2 that upper bound on the excess risk of the undersampled binning estimator with dnmine 1/3 bins. ## C.1 Proof Of Theorem 4.2 In this section, we provide a proof of the minimax lower bound in the group shift setting. We construct the "hard" set of distributions as follows. Let the index set be V = {−1, 1} K. For every v ∈ V define a distribution as follows: for x ∈ Ij = [ j−1 K , j K ], $$\mathsf{P}_{v}(y=1\mid x):={\frac{1}{2}}\left[1+v_{j}\phi\left(x-{\frac{j+1/2}{K}}\right)\right],$$ where φ is defined in Eq. 6. Given a τ ∈ [0, 1] we also construct the group distributions as follows: $$\mathsf{P}_{a}(x)={\begin{cases}2-\tau&{\mathrm{if~}}x\in[0,0.5)\\ \tau&{\mathrm{if~}}x\in[0.5,1],\end{cases}}$$ and let $$\mathsf{P}_{b}(x)=2-\mathsf{P}_{a}(x).$$ We can verify that $$\mathrm{Overlap}(\mathsf{P}_{a},\mathsf{P}_{b})=1-\mathrm{TV}(\mathsf{P}_{a},\mathsf{P}_{b})=1-{\frac{1}{2}}\int_{x=0}^{1}|\mathsf{P}_{a}(x)-\mathsf{P}_{b}(x)|\;\mathrm{d}x=\tau.$$ We continue to define $$\begin{array}{l}{{\mathrm{P}_{v,\operatorname*{maj}}(x,y)=\mathrm{P}_{v}(y\mid x)\mathrm{P}_{a}(x)}}\\ {{\mathrm{P}_{v,\operatorname*{min}}(x,y)=\mathrm{P}_{v}(y\mid x)\mathrm{P}_{b}(x),}}\end{array}$$ and $$\mathsf{P}_{v,\mathrm{test}}(x,y)=\mathsf{P}_{v}(y\mid x)\left({\frac{\mathsf{P}_{a}(x)+\mathsf{P}_{b}(x)}{2}}\right).$$ Observe that (Pa(x) + Pb(x))/2 = 1, the uniform distribution over [0, 1]. Recall that as described in Section A.1, V shall be a uniform random variable over V and S | V ∼ P nmaj v,maj × P nmin v,min. We shall let Q denote the joint distribution of (*V, S*) and let QS denote the marginal over S. With this construction in place, we present the following lemma that lower bounds the minimax excess risk by a sum of exp(−KL(Q(S | vj = 1)kQ(S | vj = −1)) over the intervals. Intuitively, KL(Q(S | vj = 1)kQ(S | vj = −1) is a measure of how difficult it is to identify whether vj = 1 or vj = −1 from the samples. Lemma C.1. For any positive integers *K, n*maj, nmin and τ ∈ [0, 1], the minimax excess risk is lower bounded as follows: Minimax Excess $\mathsf{Risk}(\mathcal{P}_{\mathsf{GS}}(\tau))=\inf_{\mathcal{A}}\sup_{(\mathsf{P}_{\mathsf{out}},\mathsf{P}_{\mathsf{out}})\in\mathcal{P}_{\mathsf{DS}}(\tau)}\mathbb{E}_{S\sim\mathsf{P}_{\mathsf{out}}^{\mathsf{out}}\prec\mathsf{P}_{\mathsf{out}}^{\mathsf{out}}}\left[R(\mathcal{A}^{S};\mathsf{P}_{\mathsf{out}})-R(f^{*};\mathsf{P}_{\mathsf{out}})\right]$ $$\geq\frac{1}{32K^{2}}\sum_{j=1}^{K}\exp(-\mathsf{KL}(\mathsf{Q}(S\mid v_{j}=1)\|\mathsf{Q}(S\mid v_{j}=-1))).$$ Proof. By invoking Lemma A.1, we know that the minimax excess risk is lower bounded by Minimax Excess Risk($\mathcal{P}_{\mathsf{GS}}(\tau)$) $$\geq\underbrace{\mathbb{E}_{S\sim\mathcal{Q}_{S}}[\inf_{h}\underbrace{\Pr}_{(x,y)\sim\sum_{v\in V}\mathsf{P}(v|S)\mathsf{P}_{v,\mathsf{tut}}}(h(x)\neq y)]}_{=\mathfrak{W}_{V}}-\underbrace{\mathbb{E}_{V}[R(f^{*}(\mathsf{P}_{V,\mathsf{tut}});\mathsf{P}_{V,\mathsf{tut}}])}_{=\mathfrak{W}_{V}},$$ where V is a uniform random variable over the set V, S | V = v is a draw from P nmaj v,maj ×P nmin v,min, and Q denotes the joint distribution over (*V, S*). We shall lower bound this minimax risk in parts. First, we shall establish a lower bound on RV , and then an upper bound on the Bayes risk BV . Lower bound on RV . Unpacking RV using its definition we get that, RV = ES∼QS [inf hPr (x,y)∼Pv∈V Q(v|S)Pv,test (h(x) 6= y)] = ES∼QS " inf h Z 1 0 Ptest(x) Pr y∼Pv∈V Q(v|S)Pv(·|x) [h(x) 6= y] dx # (i) = ES∼QS "Z 1 0 Ptest(x) min (X v∈V Q(v | S)Pv(1 | x), X v∈V Q(v | S)Pv(−1 | x) ) dx # (ii) = 1 2 − ES∼QS "Z 1 0 Ptest(x) 1 2 − X v∈V Q(v | S)Pv(1 | x) dx # (iii) =12 − Z 1 0 Ptest(x)ES∼QS " 1 2 − X v∈V Q(v | S)Pv(1 | x) # dx, (14) where (i) follows by taking h to be the pointwise minimizer over x, (ii) follows since Pv(−1 | x) = 1−Pv(1 | x) and min{s, 1 − s} = (1 − |1 − 2s|)/2 for all s ∈ [0, 1], and (iii) follows by Fubini's theorem which allows us to switch the order of the integrals. If x ∈ Ij = [ j−1 K , j K ] for some j ∈ {1*, . . . , K*} we let jx denote the value of this index j. With this notation in place let us continue to upper bound integrand in the second term in the RHS above as follows: ES∼QS " 1 2 − X v∈V Q(v | S)Pv(1 | x) # (i) = ES∼QS φ x − jx + 1/2 K |Q(vjx = 1 | S) − Q(vjx = −1 | S)| = φ x − jx + 1/2 K ES∼QS [|Q(vjx = 1 | S) − Q(vjx = −1 | S)|] (ii) = φ x − jx + 1/2 K ES∼QS Q(S | vjx = 1)QV (vjx = 1) QS(S)− Q(S | vjx = −1)QV (vjx = −1) QS(S) (iii) =12 φ x − jx + 1/2 K TV(Q(S | vjx = 1), Q(S | vjx = −1)), (15) where (i) follows since Pv(1 | x) = (1 + vjx φ(x − (jx + 1/2)/K))/2 and by marginalizing Q(v | S) over the indices j 6= jx, (ii) follows by using Bayes' rule and (iii) follows since the total-variation distance is half the `1 distance. Now by the Bretagnolle–Huber inequality (see Canonne, 2022, Corollary 4) we get that, $$\mathrm{TV}(\mathsf{Q}(S\mid v_{j_{x}}=1),\mathsf{Q}(S\mid v_{j_{x}}=-1))\\ \leq1-\frac{\exp(-\mathrm{KL}(\mathsf{Q}(S\mid v_{j_{x}}=1)\|\mathsf{Q}(S\mid v_{j_{x}}=-1)))}{2}.$$ Combining Eqs. (14)-(16) we get that RV ≥ 1 2 − 1 2 + 1 4 Z 1 0 $$\frac{1}{2}\int_{0}^{1}\mathsf{P}_{\mathsf{test}}(x)\left|\phi\left(x-\frac{j_{x}+1/2}{K}\right)\right|\;\mathrm{d}x$$ $$\mathsf{P}_{\mathsf{test}}(x)\left|\phi\left(x-\frac{j_{x}+1/2}{K}\right)\right|\exp(-\mathrm{KL}(\mathsf{Q}(S\mid v_{j_{x}}=1)\|\mathsf{Q}(S\mid v_{j_{x}}=-1)))\;\mathrm{d}x.\tag{17}$$ Upper bound on BV : The Bayes error is BV = EV [R(f ?(PV ); PV )] = EV inf f E(x,y)∼Pv,test1(f(x) 6= y) = EV inf f Z 1 x=0 X y∈{−1,1} Ptest(x)PV,test(y | x)1(f(x) = −y) = EV Z 1 x=0 Ptest(x) min y∈{−1,1} PV,test(y | x) (i) = EV 1 2 1 − Z 1 x=0 Ptest(x)|PV,test(1 | x) − PV,test(−1 | x)| dx (ii) = EV 1 2 1 − Z 1 x=0 Ptest(x) φ x − jx + 1/2 K dx = 1 2 − 1 2 Z 1 x=0 Ptest(x) φ x − jx + 1/2 K dx, (18) where (i) follows since Pv(1 | x) = 1 − Pv(−1 | x) and min{s, 1 − s} = (1 − |1 − 2s|)/2 for all s ∈ [0, 1], and (ii) follows by our construction of Pv above along with the fact that Pv(1 | x) = 1 − Pv(−1 | x). $$\quad(16)$$ $$(18)$$ Putting things together: Combining Eqs. (17) and (18) allows us to conclude that Minimax Excess Risk(PGS(τ )) ≥ 1 4 Z 1 0 Ptest(x) φ x − jx + 1/2 K exp(−KL(Q(S | vjx = 1)kQ(S | vjx = −1))) dx = 1 4 X K j=1 Z jK j−1 K Ptest(x) φ x − j + 1/2 K exp(−KL(Q(S | vj = 1)kQ(S | vj = −1))) dx = 1 4 X K j=1 exp(−KL(Q(S | vj = 1)kQ(S | vj = −1))) "Z jK j−1 K Ptest(x) φ x − j + 1/2 K dx # (i) =1 32K2 X K j=1 exp(−KL(Q(S | vj = 1)kQ(S | vj = −1))), where (i) follows by using Lemma A.2 along with the fact that Ptest(x) = 1 in our construction to show that the integral in the square brackets is equal to 1/8K2. This proves the result. The next lemma upper bounds the KL divergence between Q(S | vj = 1) and Q(S | vj = −1) for each j ∈ {1*, . . . , K*}. It shows that the KL divergence between these two posteriors is larger when the expected number of samples in that bin is larger. Lemma C.2. Suppose that v *is drawn uniformly from the set* {−1, 1} K, and that S | v *is drawn from* P nmaj v,maj × P nmin v,min. Then for any j ∈ {1, . . . , K/2} *and any* τ ∈ [0, 1], $$\operatorname{KL}(\mathbb{Q}(S\mid v_{j}=1)\|\mathbb{Q}(S\mid v_{j}=-1))\leq{\frac{n_{\operatorname*{maj}}(2-\tau)+n_{\operatorname*{min}}\tau}{3K^{3}}},$$ and for any j ∈ {K/2 + 1*, . . . , K*} $$\cdot\,1,\ldots,K\}$$ $$\operatorname{KL}(\mathbb{Q}(S\mid v_{j}=1)\|\mathbb{Q}(S\mid v_{j}=-1))\leq{\frac{n_{\operatorname*{maj}}\tau+n_{\operatorname*{min}}(2-\tau)}{3K^{3}}}.$$ Proof. Let us consider the case when j = 1. The bound for all other j ∈ {2*, . . . , K*} shall follow analogously. Given samples S, let S = (S1, S¯1) be a partition where S1 are the samples that fall in the interval I1, and S¯1 be the other samples. Similarly, given a vector v *∈ {−*1, 1}, let v = (v1, v¯1), where v1 is the first component and v¯1 denotes the other components (2*, . . . , K*) of v. First, we will show that $$\mathbb{Q}(S\mid v_{1})=\mathbb{Q}(S_{1}\mid v_{1})\mathbb{Q}(\bar{S}_{1}).$$ To see this, observe that $$\mathbb{Q}(S\mid v_{1})=\mathbb{Q}((S_{1},{\bar{S}}_{1})\mid v_{1})=\mathbb{Q}(S_{1}\mid v_{1})\mathbb{Q}({\bar{S}}_{1}\mid v_{1},S_{1}).$$ Further, if v is chosen uniformly over the hypercube {−1, 1} K, then Q(S¯1 | v1, S1) = X v¯1 Q(S¯1, v¯1 | v1, S1) v¯1 Q(S¯1 | v1, v¯1, S1)Q(¯v1 | v1, S1) = X (i) = X v¯1 Q(S¯1 | v1, v¯1, S1)Q(¯v1) (ii) = X v¯1 Q(S¯1 | v1, v¯1)Q(¯v1) (iii) = X v¯1 Q(S¯1 | v¯1)Q(¯v1) = Q(S¯1), where (i) follows since by Bayes' rule $$\mathbf{Q}(\bar{v}_{1}\mid v_{1},S_{1})=\frac{\mathbf{Q}(\bar{v}_{1}\mid v_{1})\mathbf{Q}(S_{1}\mid v_{1},\bar{v}_{1})}{\mathbf{Q}(S_{1}\mid v_{1})}$$ $$=\frac{\mathbf{Q}(\bar{v}_{1})\mathbf{Q}(S_{1}\mid v_{1},\bar{v}_{1})}{\mathbf{Q}(S_{1}\mid v_{1})}\qquad\qquad\text{(since$\bar{v}_{1}$is independent of$v_{1}$)}$$ $$=\frac{\mathbf{Q}(\bar{v}_{1})\mathbf{Q}(S_{1}\mid v_{1})}{\mathbf{Q}(S_{1}\mid v_{1})}=\mathbf{Q}(\bar{v}_{1})\qquad\qquad\text{(the samples in$S_{1}$depend only on$v_{1}$)}.$$ Inequality (ii) follows since the samples are drawn independently given v = (v1, v¯1). Finally, (iii) follows since S¯1 (the samples that lie outside the interval I1) only depend on v¯1 since the marginal distribution of x is independent of v and the distribution of y | x depends only on the value of v corresponding to the interval in which x lies. Thus since, Q(S | v1) = Q(S1 | v1)Q(S¯1) we have that $\|\,v_{1}\rangle=\mathbb{Q}(S_{1}\mid v_{1}\rangle\mathbb{Q}(S_{1}))$ we have that $\mathrm{KL}(\mathbb{Q}(S\mid v_{1}=1)\|\mathbb{Q}(S\mid v_{1}=-1))=\mathrm{KL}(\mathbb{Q}(S_{1}\mid v_{1}=1)\|\mathbb{Q}(S_{1}\mid v_{1}=-1))$. To bound this KL divergence, let us condition of the number of samples in S1 from group a, (the majority group) n1,a and the number of samples from group b (the minority group), n1,b. Now since n1,a and n1,b are independent of v1 (which only affects the labels) we have that, $$\mathbb{Q}(S_{1}\mid v_{1})=\sum_{n_{1,a},n_{1,b}}\mathbb{Q}(n_{1,a},n_{1,b}\mid v_{1})\mathbb{Q}(S_{1}\mid v_{1},n_{1,a},n_{1,b})$$ $$=\sum_{n_{1,a},n_{1,b}}\mathbb{Q}(n_{1,a},n_{1,b})\mathbb{Q}(S_{1}\mid v_{1},n_{1,a},n_{1,b})$$ $$=\mathbb{E}_{n_{1,a},n_{1,b}}\left[\mathbb{Q}(S_{1}\mid v_{1},n_{1,a},n_{1,b})\right].$$ $$(19)$$ Therefore, by the joint convexity of the KL-divergence and by Jensen's inequality we have that, $$\mathrm{KL}(\mathbf{Q}(S_{1}\mid v_{1}=1)\|\mathbf{Q}(S_{1}\mid v_{1}=-1))$$ $$\leq\Xi_{n_{1,a_{1}},n_{1,b}}\left[\mathrm{KL}(\mathbf{Q}(S_{1}\mid v_{1}=1,n_{1,a_{1}}n_{1,b})\|\mathbf{Q}(S_{1}\mid v_{1}=-1,n_{1,a_{1}}n_{1,b}))\right].\tag{20}$$ Now conditioned on $v_{1},n_{1,a}$ and $n_{1,b}$, samples in $S_{1}$ are composed of 2 groups of samples $(S_{1,a},S_{1,b})$. The samples in each group (S1,a, S1,b) are drawn independently from the distributions Pa(x | x ∈ I1)Pv(y | x) and Pb(x | x ∈ I1)Pv(y | x) respectively. Therefore, KL(Q(S1 | v1 = 1, n1,a, n1,b)kQ(S1 | v1 = −1, n1,a, n1,b)) (i) = n1,aKL(Pa(x | x ∈ I1)Pv1=1(y | x)kPa(x | x ∈ I1)Pv1=−1(y | x)) + n1,bKL(Pb(x | x ∈ I1)Pv1=1(y | x)kPb(x | x ∈ I1)Pv1=−1(y | x)) (ii) = (n1,a + n1,b)Ex∼Unif(I1)[KL(Pv1=1(y | x)kPv1=−1(y | x))] (iii) =n1,a + n1,b 2 Ex∼Unif(I1) X y∈{−1,1} 1 + yφ x −1 2K log 1 + yφ x −1 2K 1 + yφ x −1 2K ! = n1,a + n1,b 2 X y∈{−1,1} Ex∼Unif(I1) "1 + yφ x −1 2K log 1 + yφ x −1 2K 1 + yφ x −1 2K !# = n1,a + n1,b 2K X y∈{−1,1} Z 1K $$(21)$$ x=0 "1 + yφ x −1 2K log 1 + yφ x −1 2K 1 + yφ x −1 2K !# dx (iv) ≤n1,a + n1,b 3K2, (21) where in (i) we let Pv1 denote the conditional distribution of y for x ∈ I1 given v1, (ii) follows since both Pa and Pb are constant in the interval, (iii) follows by our construction of Pv above, and finally (iv) follows by invoking Lemma A.3 that ensures that the integral is bounded by 1/3K2. Using this bound in Eq. (20), along with Eq. (19) we get that $$\operatorname{KL}(\mathbb{Q}(S\mid v_{1}=1)\|\mathbb{Q}(S\mid v_{1}=-1))\leq{\frac{\mathbb{E}\left[n_{1,a}+n_{2,b}\right]}{3K^{2}}}.$$ Now there are nmaj samples from group a in S and nmin samples from group b. Therefore, $$\mathbb{E}\left[n_{1,a}\right]=n_{\operatorname*{maj}}\mathsf{P}_{a}(x\in I_{1})={\frac{n_{\operatorname*{maj}}(2-\tau)}{K}},$$ $$\mathbb{E}\left[n_{1,b}\right]=n_{\operatorname*{min}}\mathsf{P}_{b}(x\in I_{1})={\frac{n_{\operatorname*{min}}\tau}{K}}.$$ Plugging this bound into Eq. (21) completes the proof by the first interval. An identical argument holds for j ∈ {2*, . . . , K/*2}. For j ∈ {K/2 + 1*, . . . , K*} the only change is that $$\mathbb{E}\left[n_{j,a}\right]=n_{\operatorname*{maj}}\mathsf{P}_{a}(x\in I_{j})={\frac{n_{\operatorname*{maj}}\tau}{K}},$$ Next, we combine the previous two lemmas to establish our stated lower bound. We first restate it here. Theorem 4.2. *Consider the group shift setting described in Section 3.2.2. Given any overlap* τ ∈ [0, 1] recall that PGS(τ ) *is the class of distributions such that* Overlap(Pmaj, Pmin) ≥ τ *. The minimax excess risk in* this setting is lower bounded as follows: Minimax Excess Risk($\mathcal{P}_{\text{GS}}(\tau)$) = $\inf_{\mathcal{A}}\sup_{(\mathcal{P}_{\text{mph}},\mathcal{P}_{\text{min}})\in\mathcal{P}_{\text{GS}}(\tau)}$ Excess Risk($\mathcal{A}$; ($\mathcal{P}_{\text{mph}},\mathcal{P}_{\text{min}}$)] $$\geq\frac{1}{200(n_{\text{min}}\cdot(2-\tau)+n_{\text{mph}}\cdot\tau)^{1/3}}\geq\frac{1}{200n_{\text{min}}{}^{1/3}(\rho\cdot\tau+2)^{1/3}},$$ (4) where ρ = nmaj/nmin > 1. Proof. First, by Lemma C.1 we know that Minimax Excess Risk($\mathcal{P}_{\mathsf{GS}}(\tau)$) $\geq\frac{1}{32K^{2}}\sum_{j=1}^{K}\exp(-\text{KL}(\mathsf{Q}(S\mid v_{j}=1)\|\mathsf{Q}(S\mid v_{j}=-1)))$. Next, by invoking the bound on the KL divergences in the equation above by Lemma C.2 we get that Minimax Excess Risk($\mathcal{P}_{\text{GS}}(\tau)$) $$\geq\frac{1}{64K}\left[\exp\left(-\frac{n_{\text{m\!\!\text{m}i}}(2-\tau)+n_{\text{m\!\!\text{m}i}}\tau}{3K^{3}}\right)+\exp\left(-\frac{n_{\text{m\!\!\text{m}i}}(2-\tau)+n_{\text{m\!\!\text{m}i}}\tau}{3K^{3}}\right)\right]$$ $$\geq\frac{1}{64K}\left[\exp\left(-\frac{n_{\text{m\!\!\text{m}i}}(2-\tau)+n_{\text{m\!\!\text{m}i}}\tau}{3K^{3}}\right)\right]$$ Setting K = d(nmin(2 − τ ) + nmajτ ) 1/3e and recalling that τ ≤ 1 we get that Minimax Excess Risk(PGS(τ )) $${\mathsf{s k}}({\mathcal{P}}_{\mathsf{G S}}(\tau))$$ ≥1 64d(nmin(2 − τ ) + nmajτ ) 1/3e exp −nmin(2 − τ ) + nmajτ 3d(nmin(2 − τ ) + nmajτ ) 1/3e 3 (i) ≥ exp(−1/3) 64 (nmin(2 − τ ) + nmajτ ) 1/3 d(nmin(2 − τ ) + nmajτ ) 1/3e 1 (nmin(2 − τ ) + nmajτ ) 1/3 (ii) ≥ 0.7 exp(−1/3) 64 1 (nmin(2 − τ ) + nmajτ ) 1/3 ≥1 200 1 (nmin(2 − τ ) + nmajτ ) 1/3 , where (i) follows since nmin(2 − τ ) + nmajτ /d(nmin(2 − τ ) + nmajτ ) 1/3e 3 ≤ 1, and (ii) follows since 0 ≤ τ ≤ 1 and nmin ≥ 1 and hence (nmin(2−τ)+nmajτ) 1/3 d(nmin(2−τ)+nmajτ) 1/3e ≥ 0.7. ## C.2 Proof Of Theorem 5.2 In this section, we derive an upper bound on the excess risk of the undersampled binning estimator AUSB (Eq. (5)). Recall that given a dataset S this estimator first calculates the undersampled dataset SUS, where the number of points from the minority group (nmin) is equal to the number of points from the majority group (nmin), and the size of the dataset is 2nmin. Throughout this section, (Pmaj, Pmin) shall be an arbitrary element of PGS(τ ) for any τ ∈ [0, 1]. In this section, whenever we shall often denote Excess Risk(A; (Pmaj, Pmin)) by simply Excess Risk(A). Before we proceed, we introduce some additional notation. For any j ∈ {1*, . . . , K*} and Ij = [ j−1 K , j K ] let $$q_{j,1}:=\mathsf{P}_{\mathsf{test}}(y=1\mid x\in I_{j})=\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{\mathsf{test}}(x\mid x\in I_{j})\;\mathrm{d}x,\tag{22a}$$ $$q_{j,1}:=\mathsf{P}_{\mathsf{test}}(y=1\mid x\in I_{j})=\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{\mathsf{test}}(x\mid x\in I_{j})\;\mathrm{d}x.\tag{22b}$$ where Ptest(Ij ) := Rx∈Ij Ptest(x) dx. For the undersampled binning estimator AUSB (defined above in Eq. (5)), define the *excess risk in an interval* Ij as follows: $$R_{j}(\mathcal{A}_{\text{USB}}^{S}):=p\left(y=-\mathcal{A}_{j}^{S}\mid x\in I_{j}\right)-\min\left\{\mathsf{P}_{\text{test}}(y=1\mid x\in I_{j}),\mathsf{P}_{\text{test}}(y=-1\mid x\in I_{j})\right\}$$ $$=q_{j,-\mathcal{A}_{j}^{S}}-\min\{q_{j,1},q_{j,-1}\}.$$ The proof of the upper bound shall proceed in steps. First, in Lemma C.3 we will show that the excess risk is equal to sum the excess risk over the intervals up to a factor of 2/K on account of the distribution being 1-Lipschitz. Next, in Lemma C.4 we upper bound the risk over each interval. We put these two together and to upper bound the risk. Lemma C.3. *The expected excess risk of undersampled binning estimator* AUSB *can be decomposed as follows* $$\mathrm{Excess~Risk}({\mathcal{A}}_{\mathrm{USB}})\leq\sum_{j=0}^{K-1}\mathbb{E}_{{\mathcal{S}}\sim\mathsf{P}_{m j}^{n_{m j}}\times\mathsf{P}_{m j}^{n_{m j}}}\left[R_{j}({\mathcal{A}}_{\mathrm{USB}}^{{\mathcal{S}}})\right]\cdot\mathsf{P}_{\mathrm{test}}(I_{j})+{\frac{2}{K}},$$ Proof. Recall that by definition, the expected excess risk is $$\mathbb{E}_{S\sim\mathsf{P}_{m a j}^{n m a j}\times\mathsf{P}_{m i n}^{n_{m i n}}}\left[R(\mathcal{A}^{S};\mathsf{P}_{\mathrm{test}})-R(f^{\star};\mathsf{P}_{\mathrm{test}})\right].$$ Let us first decompose the Bayes risk R(f ?), R(f ?) = inf f E(x,y)∼Ptest [1(f(x) 6= y)] = inf f Z 1 x=0 X y∈{−1,1} 1(f(x) 6= y)Ptest(y | x)Ptest(x) dx = Z 1 x=0 inf f(x)∈{−1,1} X y∈{−1,1} 1(f(x) 6= y)Ptest(y | x)Ptest(x) dx = Z 1 x=0 inf f(x)∈{−1,1} Ptest(y = −f(x) | x)Ptest(x) dx = Z 1 x=0 min {Ptest(y = 1 | x), Ptest(y = −1 | x)} Ptest(x) dx. (23) $$(23)$$ The risk of the undersampled binning algorithm AUSB is given by $$R(\mathcal{A}^{S}_{\mathsf{USS}})=\int_{x=0}^{1}\sum_{y\in\{-1,1\}}\mathbf{1}(\mathcal{A}^{S}_{\mathsf{USS}}(x)\neq y)\mathsf{P}_{\mathsf{test}}(y\mid x)\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x$$ $$=\int_{x=0}^{1}\mathsf{P}_{\mathsf{test}}(y=-\mathcal{A}^{S}_{\mathsf{USS}}(x)\mid x)\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x.$$ Next, recall that the undersampled binning estimator is constant over the intervals Ij for j ∈ {1*, . . . , K*} where it takes the value AS j (to ease notation let us simply denote it by Aj below), and therefore $$R({\mathcal{A}}_{\mathrm{USB}}^{S})=\sum_{j=0}^{K-1}\int_{x\in I_{j}}\mathsf{P}_{\mathrm{test}}(y=-{\mathcal{A}}_{j}|x)\mathsf{P}_{\mathrm{test}}(x)\;\mathrm{d}x.$$ This combined with Eq. (23) tells us that $$R(\mathcal{A}_{\mathsf{OLSB}}^{\mathsf{G}})-R(f^{*})$$ $$=\sum_{j=0}^{K-1}\int_{x\in I_{j}}\left(\mathsf{P}_{\mathsf{test}}(y=-\mathcal{A}_{j}|x)-\min\left\{\mathsf{P}_{\mathsf{test}}(y=1\mid x),\mathsf{P}_{\mathsf{test}}(y=-1\mid x)\right\}\right)\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x.\tag{24}$$ In Section 1 we have shown above, $\mathsf{P}_{\mathsf{test}}(\mathcal{A}_{\mathsf{OLSB}}^{\mathsf{G}})$ is known as $\mathsf{P}_{\mathsf{test}}(\mathcal{A}_{\mathsf{OLSB}}^{\mathsf{G}})$. Recall the definition of qj,1 and qj,−1 from Eqs. (22a)-(22b) above. For any x ∈ Ij = [ j−1 K , j K ], |Ptest(y | x) − qj,y| ≤ 1/K, since the distribution Ptest(y | x) is 1-Lipschitz and qj,y is its conditional mean. Therefore, $$R(\mathcal{A}_{\mathsf{USB}}^{S})-R(f^{*})$$ $$\leq\sum_{j=0}^{K-1}\int_{x\in I_{j}}\left(q_{j,-\mathcal{A}_{j}}-\min\left\{q_{j,1},q_{j,-1}\right\}\right)\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x+\frac{2}{K}\sum_{j=0}^{K-1}\int_{x\in I_{j}}\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x$$ $$=\sum_{j=0}^{K-1}\int_{x\in I_{j}}R_{j}(\mathcal{A}_{\mathsf{USB}}^{S})\mathsf{P}_{\mathsf{test}}(x)\;\mathrm{d}x+\frac{2}{K}.$$ Taking expectation over the training samples S (where nmin samples are drawn independently from Pmin and nmaj samples are drawn independently from Pmaj) concludes the proof. Next we provide an upper bound on the expected excess risk is an interval Rj (ASUSB). Lemma C.4. For any j ∈ {1, . . . , K} *with* Ij = [ j−1 K K ES∼P nmaj maj ×P nmin min -Rj (A S USB)≤c pnminPtest(Ij ) + c K , Ptest(x) dx. , j ], where c *is an absolute constant, and* Ptest(Ij ) := Rx∈Ij Proof. Consider an arbitrary bucket j ∈ {1*, . . . , K*}. Let us introduce some notation that shall be useful in the remainder of the proof. Analogous to qj,1 and qj,−1 defined above (see Eqs. (22a)-(22b)), define q a j,1 and q b j,1 as follows: $$q_{j,1}^{a}:=\mathsf{P}_{a}(y=1\mid x\in I_{j})=\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{a}(x\mid x\in I_{j})\;\mathrm{d}x,\tag{25a}$$ $$q_{j,1}^{b}:=\mathsf{P}_{b}(y=1\mid x\in I_{j})=\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{b}(x\mid x\in I_{j})\;\mathrm{d}x.\tag{25b}$$ Essentially, q a j,1 is the probability that a sample is from group a and has label 1, conditioned on the event that the sample falls in the interval Ij . Since $$\mathsf{P_{t e s t}}(x\mid x\in I_{j})={\frac{1}{2}}\left[\mathsf{P}_{a}(x\mid x\in I_{j})+\mathsf{P}_{b}(x\mid x\in I_{j})\right],$$ therefore $$q_{j,1}-q_{j,1}^{a}=\left|\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{\mathsf{test}}(x\mid x\in I_{j})\;\mathrm{d}x-\int_{x\in I_{j}}\mathsf{P}(y=1\mid x)\mathsf{P}_{a}(x\mid x\in I_{j})\;\mathrm{d}x\right|$$ $$\leq\frac{1}{K}.$$ This follows since P(y | x) is 1-Lipschitz and therefore can fluctuate by at most 1/K in the interval Ij . Of course the same bound also holds for |qj,1 − q b j,1 |. With this notation in place let us present a bound on the expected value of Rj (ASUSB). By definition $$R_{j}({\mathcal A}_{\mathrm{USB}}^{S})=q_{j,-{\mathcal A}_{j}^{S}}-\operatorname*{min}\{q_{j,1},q_{j,-1}\}.$$ First, note that qj,1 := Ptest(y = 1 | x ∈ Ij ) = 1 − qj,−1. Suppose that qj,1 < 1/2 and therefore qj,−1 > 1/2 (the same bound shall hold in the other case). In this case, risk is incurred only when AS j = 1. That is, $$\mathbb{E}_{S\sim\mathbb{P}_{m\!j}^{n_{m\!j}}\times\mathbb{P}_{m\!m}^{n_{m\!j}}}\left[R_{j}(\mathcal{A}_{\text{GSB}}^{S})\right]=|q_{j,-1}-q_{j,1}|\Pr[\mathcal{A}_{j}^{S}=1]$$ $$=|1-2q_{j,1}|\Pr[\mathcal{A}_{j}^{S}=1].\tag{1}$$ $$(26)$$ $$(27)$$ $$(28)$$ Now by the definition of the undersampled binning estimator (see Eq. (5)), AS j = 1 only when there are more samples in the interval Ij with label 1 than −1. However, we can bound the probability of this happening since qj,1 is smaller than qj,−1. Let nj be the number of samples in the undersampled sample set SUS in the interval Ij . Let n1,j be the number of these samples with label 1, and n−1,j = nj − n1,j be the number of samples with label −1. Further, let na,j be the number of samples in from group a such that they fall in the interval Ij , and define mb,j analogously. The probability of incurring risk is given by $$\mathbb{P}[\mathcal{A}_{j}=1]=\sum_{s=1}^{2n_{\min}}\mathbb{P}[\mathcal{A}_{j}=1\mid n_{j}=s]\mathbb{P}[n_{j}=s],\tag{1}$$ where the sum is up to 2nmin since the size of the undersample dataset |SUS| is equal to 2nmin. Conditioned on the event that nj = s the probability of incurring risk is $$\mathbb{P}\left[\mathcal{A}_{j}=1\mid n_{j}=s\right]=\mathbb{P}\left[m_{1,j}>n_{-1,j}\mid n_{j}=s\right]=\mathbb{P}\left[n_{1,j}>n_{j}/2\mid n_{j}=s\right]$$ $$=\mathbb{P}\left[n_{1,j}>s/2\mid n_{j}=s\right].$$ Now, note that nj = na,j + nb,j . Thus continuing, we have that $$\mathbb{P}\left[n_{1,j}>s/2\mid n_{j}=s\right]=\sum_{s^{\prime}\leq s}\mathbb{P}\left[n_{1,j}>s/2\mid n_{j}=s,n_{b,j}=s^{\prime}\right]\mathbb{P}[n_{b,j}=s^{\prime}]$$ $$=\sum_{s^{\prime}\leq s}\mathbb{P}\left[n_{1,j}>s/2\mid n_{a,j}=s-s^{\prime},n_{b,j}=s^{\prime}\right]\mathbb{P}[n_{b,j}=s^{\prime}].$$ $$(29)$$ In light of this previous equation, we want to control the probability that the number of samples with label 1 in the interval Ij conditioned on the event that the number of samples from group a in this interval is s−s 0 and the number of samples from group b in this interval is s 0. Recall that q a j,1 and q b j,1 the probabilities of the label of the sample being 1 conditioned the event that sample is in the interval Ij when it is group a and b respectively. So we define the random variables: $z_{a}[s-s^{\prime}]\sim\mathsf{Bin}(s-s^{\prime},q_{j,1}^{a}),\quad z_{b}[s^{\prime}]\sim\mathsf{Bin}(s^{\prime},q_{j,1}^{b}),\quad z[s]\sim\mathsf{Bin}(s,\max\left\{q_{j,1}^{a},q_{j,1}^{b}\right\}).$ Then, P [n1,j > s/2 | nj = s] = X s 0≤s P [n1,j > s/2 | nj,a = s − s 0, nj,b = s 0] P[nj,b = s 0] = X s 0≤s P [za[s − s 0] + zb[s 0]) > s/2 | na,j = s − s 0, nb,j = s 0] P[nb,j = s 0] ≤ X s 0≤s P [z[s] > s/2 | na,j = s − s 0, nb,j = s 0] P[nb,j = s 0] = X s 0≤s P [z[s] > s/2] P[nb,j = s 0] = P [z[s] > s/2] (i) ≤ exp − s 2 (1 − 2 max q a j,1 , qb j,1 ) 2, (30) where (i) follows by invoking Hoeffding's inequality(Wainwright, 2019, Proposition 2.5). Combining this with Eqs. (28) and (29) we get that $$\mathbb{P}[\mathcal{A}_{j}=1]\leq\sum_{s=1}^{2n_{m i n}}\exp\left(-\frac{s}{2}(1-2\operatorname*{max}\left\{q_{j,1}^{a},q_{j,1}^{b}\right\})^{2}\right)\mathbb{P}[n_{j}=s].$$ Now nj , which is the number of samples that lands in the interval Ij is equal to na,j + nb,j . Now each of na,j and nb,j (the number of samples in this interval from each of the groups) are random variables with distributions Bin(nmin, Pa(Ij )) and Bin(nmin, Pb(Ij )), where Pa(Ij ) = Rx∈Ij Pa(x) dx and Pb(Ij ) = Rx∈Ij Pa(x) dx. Therefore, nj is distributed as a sum of two binomial distribution and is therefore Poisson binomially distributed (Wikipedia contributors, 2022). Using the formula for the moment generating function (MGF) of a Poisson binomially distributed random variable we infer that, $$\mathbb{P}[\mathcal{A}_{j}=1]\leq\left(1-\mathsf{P}_{a}(I_{j})+\mathsf{P}_{a}(I_{j})\exp\left(-\frac{(1-2\max\left\{q_{j,1}^{a},q_{j,1}^{b}\right\})^{2}}{2}\right)\right)^{n_{\max}}\times$$ $$\left(1-\mathsf{P}_{b}(I_{j})+\mathsf{P}_{b}(I_{j})\exp\left(-\frac{(1-2\max\left\{q_{j,1}^{a},q_{j,1}^{b}\right\})^{2}}{2}\right)\right)^{n_{\max}}.$$ which is $\mathsf{P}_{a}(\mathcal{A})$, $\mathsf{P}_{b}(\mathcal{A})$, $\mathsf{P}_{b}(\mathcal{A})$, $\mathsf{P}_{b}(\mathcal{A})$, $\mathsf{P}_{b}(\mathcal{A})$, \(\mathsf{P}_{b}(\mathcal{A}) Plugging this into Eq. (28) we get that, ES∼P nmaj maj ×P nmin min -Rj (A S USB) ≤ |1 − 2qj,1| " 1 − Pa(Ij ) + Pa(Ij ) exp − (1 − 2 max q a j,1 , qb j,1 ) 2 2 !#nmin × " 1 − Pb(Ij ) + Pb(Ij ) exp − (1 − 2 max q a j,1 , qb j,1 ) 2 2 !#nmin = |1 − 2qj,1| " 1 − Pa(Ij ) 1 − exp − (1 − 2 max q a j,1 , qb j,1 ) 2 2 !!#nmin × " 1 − Pb(Ij ) 1 − exp − (1 − 2 max q a j,1 , qb j,1 ) 2 2 !!#nmin . Since |1 − 2 max q a j,1 , qb j,1 | ≤ 1, $$1-\exp\left(-\frac{(1-2\operatorname*{max}\left\{q_{j,1}^{a},q_{j,1}^{b}\right\})^{2}}{2}\right)\geq\frac{(1-2\operatorname*{max}\left\{q_{j,1}^{a},q_{j,1}^{b}\right\})^{2}}{4},$$ and therefore ES∼P nmaj maj ×P nmin min -Rj (A S USB)≤ |1 − 2qj,1| " 1 − Pa(Ij ) (1 − 2 max q a j,1 , qb j,1 ) 2 2 #nmin × " 1 − Pb(Ij ) (1 − 2 max q a j,1 , qb j,1 ) 2 2 #nmin (i) ≤ |1 − 2qj,1| 1 − Pa(Ij ) (1 − 2qj,1 − 2γ) 2 2 nmin× 1 − Pb(Ij ) (1 − 2qj,1 − 2γ) 2 2 nmin (ii) ≤ |1 − 2qj,1| exp −nmin(Pa(Ij ) + Pb(Ij ))(1 − 2qj,1 − 2γ) 2 2 , a where (i) follows since | max{q j,1 , qb j,1 } − qj,1| ≤ 1/K by Eq. (26) and γ is such that |γ| ≤ 1/K, and (ii) follows since (1+z) b ≤ exp(bz). Now the RHS above is maximized when (1−2qj,1−2γ) 2 =c nmin(Pa(Ij )+Pb(Ij )) , for some constant c. Plugging this into the equation above we get that $$\mathbb{E}_{\mathcal{S}\sim\mathsf{P}_{\mathrm{min}}^{n_{\mathrm{min}}}\times\mathsf{P}_{\mathrm{min}}^{n_{\mathrm{min}}}}\left[R_{j}(\mathcal{A}_{\mathrm{USB}}^{S})\right]\leq\frac{c^{\prime}}{\sqrt{n_{\mathrm{min}}(\mathsf{P}_{a}(I_{j})+\mathsf{P}_{b}(I_{j}))}}+c^{\prime}|\gamma|$$ $$\leq\frac{c^{\prime}}{\sqrt{n_{\mathrm{min}}(\mathsf{P}_{a}(I_{j})+\mathsf{P}_{b}(I_{j}))}}+\frac{c^{\prime}}{K}.$$ $$\square$$ Finally, noting that Ptest(Ij ) = (Pa(Ij ) + Pb(Ij ))/2 completes the proof. By combining the previous two lemmas we can now prove our upper bound on the risk of the undersampled binning estimator. We begin by restating it. Theorem 5.2. Consider the group shift setting described in Section 3.2.2. For any overlap τ ∈ [0, 1] *and for* any (Pmaj, Pmin) ∈ PGS(τ ) *the expected excess risk of the Undersampling Binning Estimator (Eq.* (5)) with number of bins with K = dnmin 1/3e is Excess $\mathrm{Risk}[\mathcal{A}_{\mathrm{USB}};(\mathsf{P}_{\mathrm{misj}},\mathsf{P}_{\mathrm{misj}})]=\mathsf{E}_{\mathcal{S}\sim\mathsf{P}_{\mathrm{misj}}^{\mathrm{max}}\preccurlyeq\mathsf{P}_{\mathrm{misj}}^{\mathrm{max}}}\left[R(\mathcal{A}_{\mathrm{USB}}^{S};\mathsf{P}_{\mathrm{test}}))-R(f^{\star};\mathsf{P}_{\mathrm{test}})\right]\leq\frac{C}{n_{\mathrm{min}}/3}$. Proof. First by Lemma C.3 we know that Excess Risk[AUSB] ≤ K X−1 j=0 ES∼P nmaj maj ×P nmin min -Rj (A S USB)· Ptest(Ij ) + 2K . Next by using the bound on ES∼P nmaj maj ×P nmin min -Rj (ASUSB)established in Lemma C.4 we get that, Excess $\text{Risk}(\mathcal{A}_{\text{USB}})\leq c\sum_{j=0}^{K-1}\frac{1}{\sqrt{n_{\text{min}}\mathsf{P}_{\text{test}}(I_{j})}}\mathsf{P}_{\text{test}}(I_{j})+\frac{c}{K}$ $$=\frac{c}{\sqrt{n_{\text{min}}}}\sum_{j=0}^{K-1}\sqrt{\mathsf{P}_{\text{test}}(I_{j})}+\frac{c}{K}$$ $$\leq\frac{c}{\sqrt{n_{\text{min}}}}\sqrt{K}\sum_{j=0}^{K-1}\mathsf{P}_{\text{test}}(I_{j})+\frac{c}{K}$$ $$=c\sqrt{\frac{K}{n_{\text{min}}}}+\frac{c}{K}.$$ where (i) follows since for any vector z E RK, ||2||1 ≤ √K||2||2. Maximizing over K yields the choice K = [nmin1/3], completing the proof. 0 ![34_image_0.png](34_image_0.png) ## Additional Simulations D Figure 4: Convolutional network classifiers trained on the Imbalanced Binary CIFAR10 dataset with a 5:1 label imbalance. (Top) Models trained using the tilted loss (Li et al., 2020) with early stopping. (Bottom) Models trained using group-DRO (Sagawa et al., 2020) with early stopping. We report the average test accuracy calculated on a balanced test set over 5 random seeds. We start off with 2500 cat examples and 500 dog examples in the training dataset. We find similar trends to those obtained in Figure 2 even with these losses that are designed to optimize for the worst group accuracy. ## E Discussion About Minimax Lower Bounds For Cost-Sensitive Losses Applied To The Label Shift Setting We add a more detailed discussion about applying minimax cost-sensitive losses to obtain a lower bound in the presence of label shift. Assume that Pmaj is distribution of the covariates x | y = 1, and Pmin is the distribution of the covariates x | y = -1. The training samples are drawn from the distribution: $$\mathsf{P}(x,y)=\mathsf{P}(y=1)\mathsf{P_{m a j}}+\mathsf{P}(y=-1)\mathsf{P_{m i n}},$$ where $$\mathsf{P}(y=1)={\frac{\rho}{1+\rho}}\quad{\mathrm{and}}\quad\mathsf{P}(y=-1)={\frac{1}{1+\rho}}$$ for some imbalance ratio p > 1. On average the ratio between the number of points from the majority class to the number of points from the minority class is equal to p. We set the cost of getting an incorrectly predicting the majority class label to be equal to $$c_{1}={\frac{1}{1+\rho}}$$ and the cost of incorrectly predicting the minority class label to be equal to $$c_{-1}={\frac{\rho}{1+\rho}}.$$ Note that the costs c1 + c−1 = 1 and that c1 < c−1. The expected cost-sensitive loss is therefore equal to E(x,y)∼P [cy1 [f(x) 6= y]] = ρ 1 + ρ E(x)∼P [c11 [f(x) 6= 1]] + 1 1 + ρ E(x)∼P [c−11 [f(x) 6= −1]] =ρ (1 + ρ) 2 Ex∼Pmaj [1 [f(x) 6= 1]] + ρ (1 + ρ) 2 Ex∼Pmin [1 [f(x) 6= −1]] =2ρ (1 + ρ) 2 Ey∼Unif(−1,1),x∼P(x|y)[1 [f(x) 6= y]] . Now if we invoke the minimax lower bound (Kamalaruban & Williamson, 2018, Theorem 4) we get that $$\min_{f}\max_{P}\frac{2\rho}{(1+\rho)^{2}}\mathbb{E}_{\rho\sim\text{Diff}(-1,1),x\sim\text{P}(x|y)}\left[1\left[f(x)\neq y\right]\right]\geq\frac{C}{1+\rho}\min\left\{\sqrt{\frac{V}{(1+\rho)\mu}},\frac{1}{1+\rho}\frac{V}{\mu h}\right\},$$ where the minimum over $f$ is over all measurable functions from the training data to binary labels, the maximum is over a data distribution that can be correctly classified with a classifier from a VC class with VC dimension at most V and h is the Massart noise margin. For more thorough definitions we urge the reader to see (Kamalaruban & Williamson, 2018). With this lower bound we get that see (Raman-down & Vinnahson, 2019). With this lower bound we get that $$\min_{f}\max_{\rho}\mathbb{E}_{y\sim\mathsf{Unif}(-1,1),x\sim P(x|y)}\left[1\left[f(x)\neq y\right]\right]\geq\frac{C(1+\rho)}{2\rho}\min\left\{\sqrt{\frac{V}{(1+\rho)n}},\frac{1}{1+\rho}\frac{V}{nh}\right\}$$ $$\geq\frac{C}{2}\min\left\{\sqrt{\frac{V}{(1+\rho)n}},\frac{1}{1+\rho}\frac{V}{nh}\right\}.$$ Such that this has no been better as well as the $n$-th largest estimator is Therefore we find that this lower bound gets smaller as the imbalance ratio ρ gets larger, predicting the wrong trend for the label shift problem. ## F Details About Results In Table 1 In Table 1, we listed results regarding the performance of undersampled algorithms to others that are reported in the literature. Here we provide detailed references to these results. Label shift. The results for label shift are from the paper by Cao et al. (2019). The results are reported in Table 2 of that paper. For Imb CIFAR 10 (step 10), the undersampling result corresponds to the entry CB RS from that table with accuracy 84.59% (error 15.41%), while the best method corresponds to the method LDAM-DRW with accuracy 87.81% (error 12.19%). For Imb CIFAR100 (step 10), the undersampling result again corresponds to CB RS with accuracy 53.08% (error 46.92%) while the best method corresponds to the method LDAM-DRW with accuracy 59.46% (error 40.54%). Group-covariate shift. The results for the group-covariate shift are from Table 2 in Idrissi et al. (2022). For the CelebA dataset, the undersampled accuracy corresponds to the method SUBG and the best accuracy is for gDRO. For the Waterbirds dataset, the undersampled method is SUBG and the best competitor is RWG. For the MultiNLI dataset, the undersampled accuracy corresponds to the method SUBG and the best accuracy is for gDRO. Finally, for the CivilComments dataset, the undersampled method is SUBG and the best method is RWG. ## G Experimental Details For Figures 2 And 4 We construct our label shift dataset from the original CIFAR10 dataset. We create a binary classification task using the "cat" and "dog" classes. We use the official test examples as the balanced test set with 1000 cats and 1000 dogs. To form the initial train and validation sets, we use 2500 cat examples (half of the training set) and 500 dog examples, corresponding to a 5:1 label imbalance. We use 80% of those examples for training and the rest for validation. We are left with 2500 additional cat examples and 4500 dog examples from the original train set which we add into our training set to generate Figure 2. We use the same convolutional neural network architecture as (Byrd & Lipton, 2019; Wang et al., 2022) with random initializations for this dataset. We train this model using SGD for 800 epochs with batchsize 64, a constant learning rate 0.001 and momentum 0.9. The importance weights used upweight the minority class samples in the training loss and validation loss is calculated to be \#Cat Train Examples \#Dog Train Examples . We note that all of the experiments were performed on an internal cluster on 8 GPUs. VS loss: Given a dataset {xi, yi} n i=1, the VS loss (Kini et al., 2021) is defined as follows $${\mathcal{L}}_{\mathsf{V S}}(f):=\sum_{i=1}^{n}\log\left(1+\exp\left(-\left({\frac{n_{g i}}{n_{\operatorname*{max}}}}\right)^{\gamma}y_{i}f(x_{i})-{\frac{\tau n_{g i}}{n}}\right)\right),$$ where gi denotes the group label, ngi corresponds to the number of samples from the group, nmax is the number of samples in the largest group and n is the total number of samples. We set τ = 3 and γ = 0.3, the best hyperparameters identified by Wang et al. (2022) on this dataset for this neural network architecture. Tilted loss: The tilted loss (Li et al., 2020) is defined as $${\mathcal{L}}_{\mathrm{Tilted}}(f):={\frac{1}{t}}\log\left[\sum_{i=1}^{n}\exp\left(t\ell(y_{i}f(x_{i}))\right)\right],$$ where we take ` to be the logistic loss. In our experiments we set t = 2. Group-DRO: We run group-DRO (Sagawa et al., 2020, Algorithm 1) with the logistic loss. We set adversarial step-size ηq = 0.05 which was the best hyperparameter identified by Wang et al. (2022).
Review 1: Summary: The paper presents an analysis of binary classification under group imbalance (majority vs minority), when the group membership of individual examples is known. The authors prove lower bounds on minimax excess risk when there is either label shift (there are more examples of one class than another), or group-covariate shift (minority and majority groups have different distributions over the input features x, with varying overlap between these distributions). They find that these lower bounds depend purely on the number of minority class examples. Next, they study a predictor that first undersamples the majority group, then bins the features (scalars in a bounded interval), and then makes a prediction based on majority class of the examples in this bin. Finally, the authors run some simple experiments with neural networks, showing that the accuracy of the learned predictors increases with the increase of the minority samples, and not the majority, as predicted by the theory. Strengths and Weaknesses: Overall, I found the results in the paper interesting and think that the community will see these findings as a valuable contribution. As motivated by the authors, under group imbalance, empirically it is hard to improve upon learning on an undersampled balanced datasets. This has been observed in the empirical work, and is sometimes discussed at an intuitive level. Some results in transfer learning are applicable here too (which the authors mention and discuss relative to their work), but I have not seen work establishing minimax excess risk lower bounds in this particular setting. The weaknesses of the paper relate to high dimensional extensions, experiments that do not evaluate undersampling, potentially missed connection to memorization literature. For details, please see the comments and questions in the section below. Requested Changes: Comments and questions: - the authors mention that their findings can be easily extended to a high dimensional feature setting. However, it seems like the undersampling and binning algorithm would have terrible dependence on the dimension, and in practice have no relevance. More generally, I find Section 5 to be almost too toy to be interesting. Due to dimensionality dependence in the most straightforward extension of the results, they don’t really shed light on any realistic empirical studies. - Is there a version of undersampling such that the undersampled binning estimator would match the lower bound rate when there is overlap in the group-covariate shift distributions? Perhaps the undersampling should not be done as severely? - How does the literature on memorization relate to the results presented here? See, e.g., Feldman et al papers on memorization from ~2020, 2021. - Experiments: it would be natural to also show the results with undersampling. It would also help to further disentangle the contribution to accuracy coming from the majority group examples, and also at least somewhat tie back into the theory section on undersampling. - I see “robustness” mentioned several times (even in the title), however, it is never properly defined or being directly studied. Robustness can mean a lot of things and is used differently in various research problems. In my opinion, if used, it should be more formally defined. Typos and minor comments: - “Eq. Equation XX” , see, e.g., theorems 5.1 and 5.2. (\cref error). - The sentence just above Section 7 “When we add samples to both…” is incredibly hard to follow, and seems to be logically broken. Broader Impact Concerns: None ================================================== Review 2: Summary: This paper studies a distribution shift setting where the distribution for training is a mixture of a minority distribution and a majority distribution, and the distribution for testing is a uniform average of the minority distribution and the majority distribution. The paper considers two kinds of distribution shift: a label shift and a group-covariate shift. For these two problems, the paper derives lower bounds on the minimax excess risk of order $n^{1/3}_{min}$, where $n_{min}$ is the number of examples for the minority distribution. The paper also shows that the undersampling scheme is able to achieve this minimax optimal rate. Empirical results are also presented. Strengths and Weaknesses: Strength: The paper establishes lower bounds on the minimax excess risk, which shows that the rate depends only on the number of training examples from the minority distribution. The paper shows that a binning estimator applied to undersampling can achieve this minimax rate. The paper presents experimental results to show that the number of negative examples plays a large role in the performance: increasing only the number of minority examples can improve the performance, while increasing only the number of majority examples does not. This matches well with the experimental results. Weakness: The paper only considers the special case $d=1$. This is restrictive since $d$ can be large in practice. Furthermore, the effect of $d$ can be important on the performance. It would be very interesting to develop lower bounds for the minimax rate reflecting the dependency of $d$. As far as I see, the binning estimator may suffer from the curse of dimensionality, and therefore is not a good estimator if $d$ is large. The paper considers a setting where the distribution for testing is a uniform average of $P_{min}$ and $P_{maj}$. It is not clear to me why the paper considers this specific setting. What will happen if the distribution for testing is another weighted average of $P_{min}$ and $P_{maj}$. The lower bound in Thm 4.1 does not depend on the overlap between $P_{maj}$ and $P_{min}$. This is different from the bound in Thm 4.2. Can we use the information on the overlap to further improve the lower bound in Thm 4.1 Requested Changes: Can we extend the analysis to the general case $d>1$. What is the potential challenge in this extension? Can the subsampling still achieve this minimal optimal rate? What will happen if we consider a general test distribution, i.e., the test distribution is not a uniform average of the $P_{min}$ and $P_{maj}$? Can we improve the bound in Thm 4.1 by incorporating the information on the overlap between $P_{min}$ and $P_{maj}$? Broader Impact Concerns: No concerns on the ethical impacts. ================================================== Review 3: Summary: This paper contributes to showing minimax lower bounds for imbalanced binary classification by considering two scenarios: label shift and group-covariate shift. The authors aim to convince that the classical approach of undersampling is a reasonable one even though it would not leverage some information from the majority samples, unlike more sophisticated methods such as oversampling and group distributionally robust learning. For the label shift scenario, the training distribution can have an imbalanced proportion of positive and negative samples. The authors show that the minimax lower bound of the classification risk under this scenario is in the order of $n\_\mathrm{min}^{-1/3}$. This result is notable because the minimax rate is completely free from the majority sample size and captures the trend as the label proportion becomes more extreme. Indeed, the previous minimax bounds in transfer learning and cost-sensitive learning fail to capture this mode appropriately. For the group-covariate shift scenario, the training samples have another attribute representing two groups in addition to their labels, and we suppose the covariate shift between the two groups. In this case, the authors show that the minimax lower bound is in the order of $n\_\mathrm{min}^{-1/3}$ when the two covariate distributions rarely overlap, and the minimax error becomes closer to the order of $1/n$ when they highly overlap. Thus, the two modes are interpolated smoothly in theory. Further, the authors unveil that the binning estimator using the undersampled data achieves the obtained minimax rate for both scenarios. With neural network experiments, they additionally confirm that adding the minority samples is more effective in boosting the classification performance than adding the majority samples. These results altogether support the efficacy of the undersampling strategy and the fundamental hardness of learning under the limited availability of the minority samples. Strengths and Weaknesses: ### Strengths - **Importance of the hardness results**: Although the class imbalance problem has been extensively studied experimentally and theoretically, the fundamental hardness results were lacking. This work shows what we can do under given imbalanced conditions at best. We eventually admit the effectiveness of the undersampling strategy, which is simple but often dismissed, perhaps because it discards some information about the training samples. - **Educational proof**: The minimax lower bounds in this paper are shown by combining Le Cam's method and the total variation bound. The proof follows the classical flow with a new construction of the hard instance of the training distribution specifically for imbalanced learning. It is written in a transparent way and quite educational. - **Going beyond the existing analyses**: I like this paper particularly for giving a meaningful extrapolation for the limit $\\rho \\to 0$ (extreme imbalance) in Theorem 4.1 and interpolation for $\\tau \\to \\{0, 1\\}$ (hard/easy transfer) in Theorem 4.2. In particular, the interpolation of Theorem 4.2 gives us insights into when the minority sample size is the performance bottleneck. None of the existing work should be able to provide these insights, including the analyses of transfer learning and cost-sensitive learning. ### Weaknesses - **Limited analysis for $d=1$**: The analyses in this paper focus merely on the one-dimensional case. Although the authors mention that the extension to the multi-dimensional case is elementary, I do not immediately see how to extend the construction of the "hard" distributions using the hat function beyond the one-dimensional case. It is illustrative to give readers instructions on constructing it when $d>2$. We do not necessarily need the full detail of the construction. In addition, the binning estimator in Section 5 seems to suffer from the curse of dimension under the extreme high-dimensional case. I understand that the authors do not intend to claim that the binning estimator is a "silver bullet," but it should sound fair if you can discuss what happens under the high-dimensional case. - **Discussion of Kpotufe & Martinet (2018)**: In the last part of Section 4.2, the authors discuss the applicability of the minimax analysis of Kpotufe & Martinet (2018) in the current problem setting, but the discussion seems to be limited to only the zero-overlap case $\\tau=0$. What if $\tau>0$? Can their analysis be applied to imbalanced learning, and if so, is there any agreement/contradiction with your results obtained in Theorem 4.2? Because the framework of Kpotufe & Martinet (2018) is slightly intricate, it is instructive for readers if you have some discussions here. (For the comparison of Theorem 4.1 and the existing analysis of cost-sensitive learning, I could imagine the difference) Requested Changes: Major suggestions are stated as the weaknesses above. Some minor comments: - In Section 3.1 (the second paragraph), the classifier domain should be $f: [0,1] \\to \\{-1, 1\\}$, not $f: \\mathbb{R} \\to \\{-1,1\\}$. - Right before Theorem 4.2, you may not need "the absolute constant c > 0" because it does not appear in the theorem. - Here and there, I found many typos like "Eq. equation X." For example, you can find them in the statements of Theorems 5.1 and 5.2. - Right before Lemma B.1, it looks better to finish the sentence such as "the minimax excess risk is lower bounded as follows." Broader Impact Concerns: Since this work is mainly theoretical, there is no concern about broader impacts. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The results are of interest and are mostly well substantiated. However, as the discussion on higher dimensions is not sufficiently well founded, the authors are requested to revise it so that it does not make unsubstantiated claims. Instead, the authors may wish to point to a promising direction without making strong claims. The authors are furhter requested to correct all other minor comments mentioned in the reviews. ==================================================
# State-Separated Sarsa: A Practical Sequential Decisionmaking Algorithm With Recovering Rewards Anonymous authors Paper under double-blind review ## Abstract While many multi-armed bandit algorithms assume that rewards for all arms are constant across rounds, this assumption does not hold in many real-world scenarios. This paper considers the setting of recovering bandits (Pike-Burke & Grunewalder, 2019), where the reward depends on the number of rounds elapsed since the last time an arm was pulled. We propose a new reinforcement learning (RL) algorithm tailored to this setting, named the State-Separate SARSA (SS-SARSA) algorithm, which treats rounds as states. The SSSARSA algorithm achieves efficient learning by reducing the number of state combinations required for Q-learning/SARSA, which often suffers from combinatorial issues for large-scale RL problems. Additionally, it makes minimal assumptions about the reward structure and offers lower computational complexity. Furthermore, we prove asymptotic convergence to an optimal policy under mild assumptions. Simulation studies demonstrate the superior performance of our algorithm across various settings. ## 1 Introduction The multi-armed bandit (MAB) problem (Lattimore & Szepesvári, 2020) is a sequential decision-making problem between an agent and environment. For each round, the agent pulls an arm from a fixed set and receives a reward from the environment. The objective is to maximize the cumulative rewards over a certain number of rounds. This is equivalent to regret minimization, which is commonly used to evaluate algorithms (Lattimore & Szepesvári, 2020). For superior performance, the key ingredient is an exploration-exploitation tradeoff. During the initial rounds, the agent explores arms at random to gather information about the environment. After that, the agent exploits the knowledge obtained during the exploration phase to choose the best arm. The MAB framework is widely used and finds applications in various domains (Bouneffouf et al., 2020). For instance, in the context of item recommendation (Gangan et al., 2021), MAB algorithms can be applied by interpreting arms as items and rewards as conversion rates. Another example is dynamic pricing (Misra et al., 2019), where arms and rewards correspond to price and profit, respectively. Moreover, in a language-learning application described in Yancey & Settles (2020), MAB algorithms are employed for push notifications, with arms representing notifications and rewards representing app usage. Many bandit algorithms assume reward stationarity, meaning constant rewards across rounds (Garivier & Moulines, 2011). In such cases, continuing to draw the arm with the highest expected reward is optimal for cumulative rewards. However, this assumption does not hold in the examples presented above. Instead, rewards often depend on the timing of arm selections. In recommendation systems, for example, commodities should be purchased more frequently than other expensive items. Assuming the reward stationarity, bandit algorithms would repeatedly recommend the same item. On the contrary, the purchase probability would increase if the recommendation is made after suggesting a variety of items. In dynamic pricing, it may be more profitable, in the long run, to discount occasionally than to continue discounting for immediate profit. Similarly, occasional push notifications may be more effective at capturing attention than frequent notifications conveying the same message, supported by (Yancey & Settles, 2020) through offline and online experiments. 1 To address this situation, we consider the case where the reward function depends on the time elapsed since the last arm was pulled, known as *recovering bandits* (Pike-Burke & Grunewalder, 2019). Various algorithms have been proposed in the MAB framework, but most assume specific reward structures (Yancey & Settles (2020); Simchi-Levi et al. (2021); Kleinberg & Immorlica (2018); Leqi et al. (2021);Moriwaki et al. (2019); Warlop et al. (2018)) or are computationally expensive (Laforgue et al., 2022). Implementing such algorithms for unknown reward functions over a long sequence of rounds would be challenging. An alternative approach to dealing with the change in rewards is to apply a reinforcement learning (RL) algorithm (Sutton & Barto, 2018), considering *states* as the elapsed rounds for each arm. However, popular tabular RL algorithms, such as Q-learning (Watkins & Dayan, 1992) and SARSA (Rummery & Niranjan, 1994)), face a combinatorial problem in terms of the number of arms and states. Specifically, we need to estimate the Q-function for all possible combinations across the maximum number of states for each arm. Consequently, tabular RL algorithms are computationally prohibitive, except for a small number of arms. To mitigate the combinatorial issue, this paper proposes a new tabular SARSA, called *State-Separated* SARSA (*SS-SARSA*). We introduce for each arm State-Separated Q-function (*SS-Q-function*), which depends on the states for both the associated and a pulled arm, and similarly update them to the standard tabular SARSA. The update thus requires only the states of the two arms. As a result, the number of Q functions to be estimated is significantly reduced, leading to more efficient estimation. Furthermore, this algorithm guarantees convergence to Bellman optimality equation for Q-functions, meaning it achieves an optimal policy asymptotically. Additionally, since our algorithm is a slightly modified version of SARSA, it can be solved in linear time for rounds and is faster than the related work (Laforgue et al., 2022). Also, we introduce a new policy called *Uniform-Explore-First*; during the exploration phase, it pulls the least frequently selected arm for given states to update Q-functions uniformly. Subsequently, the agent pulls arms to maximize cumulative rewards. Note that even random exploration such as ϵ-greedy (Sutton & Barto, 2018) does not update Q-functions uniformly in our setting. Finally, compared to popular RL algorithms and recovering MAB algorithms (Pike-Burke & Grunewalder, 2019), simulation results across various reward settings demonstrate the superiority of our algorithm in terms of cumulative rewards and optimal policy. The contributions of this work are summarized as follows. - In the recovering reward setting, the proposed algorithm SS-SARSA can mitigate the combinatorial computation and can be solved in linear time. - It is theoretically guaranteed that the proposed algorithm obtains optimal policy asymptotically for any reward structure. - The proposed policy, Uniform-Explore-First, updates each Q-function uniformly for efficient exploration. - In various settings, simulation results show the superiority of our algorithm in terms of cumulative rewards and optimal policy over related works. The remainder of this paper is organized as follows. Section 2 reviews related literature and discusses the differences from our work. We define a formal problem setting in Section 3 and the proposed algorithm in Section 4. In Section 5, we present a convergence analysis for the proposed algorithm. Section 6 shows the simulation results and advantages over the related methods. Finally, we state our conclusion in Section 7. ## 2 Related Work In sequential decision-making problems, two typical approaches are Multi-Armed Bandits (MAB) and Reinforcement Learning (RL). MAB does not use a state in modeling, while RL incorporates a state. Recovering bandit problems have predominantly been addressed in the MAB context, so we mainly survey that area. In stationary bandits, where expected rewards are constant over time for each arm, algorithms aim to choose the arm with the highest expected value to maximize cumulative rewards. Numerous algorithms have been proposed for both parametric and nonparametric reward settings, such as KL-UCB (Lai et al., 1985; Garivier & Cappé, 2011), Thompson sampling (Thompson, 1933; Chapelle & Li, 2011), ϵ-greedy (Sutton & Barto, 2018), and UCB1 (Auer et al., 2002). However, this setting ignores reward changes, a crucial aspect addressed in this paper. The non-stationary bandit problem considers scenarios where rewards can change over time. Restless bandits allow rewards to vary over rounds independently of the history of pulled arms, incorporating settings like piecewise stationary rewards (Garivier & Moulines, 2011; Liu et al., 2018) and variation budgets (Besbes et al., 2014; Russac et al., 2019). However, even in these settings, the variation of rewards due to the history of pulled arms is not accounted for. In contrast, Rested Bandits involve rewards that depend on the history of the arms pulled. One of the typical settings is that each arm's reward changes monotonically each time the arm is pulled (Heidari et al., 2016). Examples include Rotting bandits (Levine et al., 2017; Seznec et al., 2019), handling monotonically decreasing rewards, and Rising bandits (Li et al., 2020; Metelli et al., 2022), dealing with monotonic increase. Another setting in Rested Bandits is *Recovering bandits* (Pike-Burke & Grunewalder, 2019) (or *Recharging* bandits (Kleinberg & Immorlica, 2018)), where rewards depend on the elapsed rounds since the last pull for each arm. Numerous algorithms have been proposed in this context, with many assuming a monotonous increase in rewards as rounds progress (Yancey & Settles (2020); Simchi-Levi et al. (2021); Kleinberg & Immorlica (2018)). An alternative approach involves functional approximation with past actions as contexts, exemplified by stochastic bandits with time-invariant linear dynamical systems (Leqi et al., 2021), contextual bandits (Moriwaki et al., 2019), and linear-approximation RL (Warlop et al., 2018). In contrast to these approaches, we propose an algorithm that does not rely on a specific structure of the rewards. Several articles make fewer assumptions about rewards (Laforgue et al., 2022; Pike-Burke & Grunewalder, 2019). Laforgue et al. (2022) propose an algorithm based on Combinatorial Semi-Bandits, applying integer linear programming for each block of prespecified rounds to determine the sequence of pulling arms. However, its time complexity is more than quadratic in total rounds, making it impractical for large total rounds. In contrast, our algorithm, a modified version of SARSA, can be computed in linear time. Pike-Burke & Grunewalder (2019) uses Gaussian process (GP) regression and proves the Bayesian sublinear regret bound without reward monotonicity and concavity. However, their algorithms only considered shortterm lookahead and did not guarantee to achieve the optimal policy in the long run, as deemed in RL. In contrast, our RL approach can realize the optimal policy asymptotically. ## 3 Problem Setting In this section, we introduce recovering bandits (Pike-Burke & Grunewalder, 2019) within the framework of the Markov Decision Process (MDP) to facilitate RL algorithms. The (discounted) MDP is defined as M = (A, S*, f, r, γ*), where A = [K] := {1, 2, · · · , K} is the index set of K arms1, S k:= [smax] denotes the states of the arm k (k = 1*, . . . , K*), and S =QK k=1 S kis their direct product. Additionally, we let fk : S k *× A → S*k denote a deterministic state transition function for arm k, and f = (f1, f2, · · · , fK) a bundled vector. The stochastic bounded reward function is denoted by r : *S × A →* R, and γ ∈ (0, 1] serves as a discount rate. Key distinctions from standard MDP lie in the state structure and the state transition. In the K-dimensional state s := (s1, s2, · · · , sk, · · · , sK), the component sk ∈ [smax] signifies the elapsed number of rounds for the arm k since its last pull. We limit sk at smax even if it surpasses smax rounds. Consequently, the cardinality of the states in s is s K max. With the multi-dimensional states, an agent interacts with an environment over T (allowing ∞) rounds as follows. At round t ∈ [T], given state st := (st,1, st,2, · · · , st,k, · · · , st,K), the agent draws an arm at from the K arms. This choice is governed by a policy πt(at|st), which is a map from S to ∆A, the probability distributions on the K arms. The environment then returns a stochastic reward r(st,at , at), which depends on only at and the corresponding state st,at . Note that r is independent of the 1In the context of RL, arms are often referred to as actions, but we use the term "arm" following the original recovering bandits (Pike-Burke & Grunewalder, 2019). other arm states. Finally, the next state st+1 is updated as st+1,k = f(st,k, at) := min{st,k + 1, smax} for k ̸= at and := 1 for k = at, as stated in the previous paragraph. With the above MDP, our goal is to maximize the expected (discounted) cumulative rewards, which is defined by $$V_{\pi}(\mathbf{s}):=\mathbb{E}_{\pi}\left[\sum_{t=0}^{T}\gamma^{t}r(s_{t,a_{t}},a_{t})\mid\mathbf{s}_{0}=\mathbf{s}\right],$$ where π is a given stationary policy, which does not depend on time t and s is an initial state. The optimal policy is defined as π that maximizes Vπ(s) for any initial state. In our MDP, when T = ∞ and γ < 1 (i.e. infinite-horizon discounted MDP), it is known (Puterman, 2014) that there exists an optimal policy π ∗, which is stationary and deterministic, meaning that π ∗is invariant over time, and for any s ∈ S, π(a|s) = 1 for some a ∈ A. ## 4 Algorithm This section presents a novel algorithm to learn the optimal policy. Section 4.1 introduces Q-functions called the State-Separated Q-functions (SS-Q-functions), considering the state structure. Using these SSQ-functions, Section 4.2 proposes the State-Separated SARSA (SS-SARSA) algorithm for efficient learning. This section focuses on the discounted MDP with infinite horizons (i.e. T = ∞ and γ ∈ (0, 1)). ## 4.1 State-Separated Q-Function: Constructing The Mdp With Reduced State Combinations We start with the problem of the tabular RL approach: the combinatorial explosion associated with the Q-function $$Q(\mathbf{s},a):=\mathbb{E}_{\pi}\left[\sum_{t=0}^{T}\gamma^{t}r(s_{t}^{a_{t}},a_{t})\mid\mathbf{s}_{0}=\mathbf{s},a_{0}=a\right].\tag{1}$$ $$\left(1\right)$$ $$\left(2\right)$$ $$\left({\boldsymbol{3}}\right)$$ (1) can also be expressed in Bellman-equation form (Sutton & Barto, 2018): $$Q({\bf s},a)=\mathbb{E}_{r}[r(s_{a},a)]+\gamma Q({\bf s}^{\prime},a^{\prime}).$$ ′, a′). (2) Here, s ′ and a ′represent the next state and arm after s and a, respectively; s ′ = f(s, a) and a ′ ∼ π(·|s ′). We also define the Bellman optimality equation for Q-function as $$Q^{*}({\bf s},a)=\mathbb{E}_{r}[r(s_{a},a)]+\gamma\operatorname*{max}_{a^{\prime}\in{\mathcal{A}}}Q^{*}({\bf s}^{\prime},a^{\prime}),$$ where Q∗is the Q-function concerning the optimal policy. Unlike the MDP with a probabilistic state transition, there is no need to take the expectation of s ′in the second term of (3). Then, since the cardinality of s is s K max, tabular Q-learning (Watkins & Dayan, 1992)/ SARSA (Rummery & Niranjan, 1994) has to estimate the Q-function for s K max × K combinations of the argument. These algorithms are updated with the following rules respectively: $$\begin{array}{r l}{{\mathrm{Q-learning:}}}&{{\hat{Q}({\bf s},a)\leftarrow\hat{Q}({\bf s},a)+\alpha\left(r(s_{a},a)+\gamma\operatorname*{max}_{a^{\prime}}\hat{Q}({\bf s}^{\prime},a^{\prime})-\hat{Q}({\bf s},a)\right)}}\\ {{}}&{{\mathrm{SARSA:}}}&{{\hat{Q}({\bf s},a)\leftarrow\hat{Q}({\bf s},a)+\alpha\left(r(s_{a},a)+\gamma\hat{Q}({\bf s}^{\prime},a^{\prime})-\hat{Q}({\bf s},a)\right)}}\end{array}$$ (4) Thus, these algorithms must learn for a combinatorial number of states, which is prohibitive unless smax and K are small. To mitigate this computational difficulty, we introduce SS-Q-functions. Combining these new Q-functions results in a significant reduction in state combinations compared to the original Q-functions. (4) $\binom{5}{}$ . More specifically, *State-Separated Q-function (SS-Q-function)* is defined by a form of Bellman-equation $$Q_{S S,k}(s_{k},s_{a},a):=\mathbb{E}_{r}[r(s_{a},a)]+\gamma Q_{S S,k}(s_{k}^{\prime},s_{a^{\prime}}^{\prime},a^{\prime})$$ for each k ∈ [K] and a ∈ [K]. It is similar to the Bellman equation of the original Q-function but involves only two-dimensional states; it depends solely on the state sk and the state of the pulled arm sa. We can recover the original Q-function by aggregating these new Q-functions. For a fixed a ∈ [K], by adding (6) over all k ∈ [K] and dividing it by K, $$\frac{1}{K}\sum_{k=1}^{K}Q_{SS,k}(s_{k},s_{a},a)=\mathbb{E}_{r}[r(s_{a},a)]+\gamma\frac{1}{K}\sum_{k=1}^{K}Q_{SS,k}(s^{\prime}_{k},s^{\prime}_{a^{\prime}},a^{\prime}).\tag{7}$$ $$(6)$$ Since r(sa, a) is independent of k ∈ [K], the instantaneous reward remains unchanged even after the aggregation. Let the left-hand side of (7) be denoted by Q(s, a), i.e. $$Q({\bf s},a):=\frac{1}{K}\sum_{k=1}^{K}Q_{S S,k}(s_{k},s_{a},a)$$ for each s ∈ [smax] K and a ∈ [K]. Then (7) is equivalent to $$Q({\bf s},a)=\mathbb{E}_{r}[r(s_{a},a)]+\gamma Q({\bf s}^{\prime},a^{\prime}),\tag{8}$$ which coincides with the definition of the original Q-function. Note that computation with SS-Q-functions requires only s 2 maxK2 variables, while the naive implementation of the original Q-function needs s K max × K. Similarly, we can also construct the Bellman-optimal equation by aggregating SS-Q-functions. ## 4.2 State-Separated Sarsa: A Novel Efficient Rl Algorithm In Recovering Bandits We introduce State-Separated SARSA (*SS-SARSA*) building on the SS-Q-functions and explain its advantages in comparison to conventional tabular RL and MAB approaches. We also propose a policy tailored to our MDP, which achieves efficient uniform exploration across all the variables of the Q-function. To begin, we outline SS-SARSA, illustrated in Algorithm 1. Given input parameters T, γ, and initial states s0, the estimates of SS-Q-functions QˆSS,k(sk, sa, a) are initialized to zero. The algorithm then proceeds in the following way. The agent pulls an arm a from K arms according to a proposed policy, *Uniform-ExploreFirst* π, which will be discussed later. Following the state transition rule described in Chapter 3, the next states s ′transition to one where k = a, and for the states of other arms, they transition to min{s + 1, smax}. Then, for the pulled arm a and for all k ∈ [K] the SS-Q-functions are updated as follows: $$\dot{Q}_{SS,k}(s_{k},s_{a},a)\gets\dot{Q}_{SS,k}(s_{k},s_{a},a)+\alpha(r(s_{a},a)+\gamma\dot{Q}_{SS,k}(s^{\prime}_{k},s^{\prime}_{a},a^{\prime})-\dot{Q}_{SS,k}(s_{k},s_{a},a)),\tag{9}$$ where α ∈ [0, 1] represents a learning rate that is independent of (sk, sa, a). The above expression is analogous to the update rule in tabular SARSA given in (5). By defining Qˆ(s, a) := 1K PK k=1 QˆSS,k(sk, sa, a) and combining as in (7) and (8) using these estimated SS-Q-functions, we can update $$\dot{Q}({\bf s},a)\gets\dot{Q}({\bf s},a)+\alpha\frac{1}{K}\sum_{k=1}^{K}(r(s_{a},a)+\gamma\dot{Q}_{SS,k}(s_{k}^{\prime},s_{a^{\prime}}^{\prime},a^{\prime})-\dot{Q}_{SS,k}(s_{k},s_{a},a)).$$ $\Longleftrightarrow\dot{Q}({\bf s},a)\gets\dot{Q}({\bf s},a)+\alpha\left(r(s_{a},a)+\gamma\dot{Q}({\bf s}^{\prime},a^{\prime})-\dot{Q}({\bf s},a)\right)$ ,which has the same form as SARSA (5). Compared to the related works, SS-SARSA has three advantages: reduced combinations of the estimated SS-Q-functions, low time complexity, and long-term lookahead. First, the combination of SS-Q functions to be estimated is at most s 2 maxK2, which is significantly smaller than s K max of Q-functions in tabular RL (10) $\binom{11}{2}$ (11) ... algorithms. Second, our algorithm updates K SS-Q-functions per round, similar to tabular SARSA, thus its time complexity is just O(KT). It has better computational efficiency compared to the O(K5/2T 9/4) time complexity associated with regret guarantees in Laforgue et al. (2022). Finally, our RL approach aims to identify the optimal policy over the entire duration of total rounds. In contrast, MAB approach determines the optimal sequence of selected arms only for short rounds (Laforgue et al., 2022; Pike-Burke & Grunewalder, 2019). Remark 4.1. (Why does our algorithm adopt SARSA update rule instead of Q-learning update?) Our algorithm updates the SS-Q-functions using the SARSA update rule (5) and combine these SS-Q-functions (10). Another possibility would be to use Q-learning (4). However, it would not fit our setting due to the max operator. In fact, if we apply (4) to each QSS,k and combine them with the learning rate α, the max operator part becomes $$\frac{1}{K}\sum_{k=1}^{K}\alpha\max_{a^{\prime}}Q_{S S,k}(s^{\prime}_{k},s^{\prime}_{a^{\prime}},a^{\prime}),$$ $$(12)$$ which should be α maxa′ 1 K PK k=1 Q*SS,k*(s ′ k , s′a′ , a′) = α maxa′ Q(s ′, a′) *to update the form of Q-learning.* Therefore convergence to the optimal policy would be questionable. Before introducing our new policy, we discuss why random exploration doesn't work well in our MDP. Under random exploration, we can compute the probability that arm k is pulled at state sk = i in round t, denoted by p*t,i,k*. Since π(a|s) = 1K under a random policy, we have p*t,i,k* = 1 K × q*t,i,k*, where q*t,i,k* is the probability that arm k is in state i at round t. When t ≥ smax, the probability p*t,i,k* does not depend on t 2. The probability for each state is as follows. Since for any sk ∈ [smax] the state of arm k transitions to one when pulling arm k, by the total law of probability, qt,1,k =1K Psmax j=1 qt−1*,j,k* =1K and thus pt,1,k =1 K2 . For i = 2, 3, · · · , smax − 1, an arm reaches state i exactly when arm k is not pulled at state (i − 1), which means q*t,i,k* = (1 − 1 K )qt−1,i−1,k. Thus, q*t,i,k* = (1 − 1 K ) i−1 1 K , and p*t,i,k* = (1 − 1 K ) i−1 1 K2 . For i = smax, by the law of total probability, pt,smax,k = 1 −Psmax−1 j=1 p*t,j,k* = 1 −1 K2 −Psmax−1 i=2 (1 − 1 K ) i−1 1 K2 , which is strictly more than 1 K2 because pt,1,k =1 K2 and p*t,i,k* <1 K2 for i = 2, 3, *· · ·* smax − 1. This result implies that even when applying policies that randomly select an arm given a state (e.g., random exploration with a positive probability ϵ in ϵ-greedy (Sutton & Barto, 2018)), SS-Q-functions are frequently updated at smax compared to other states. This property is notable in the case where smax < K. As a numerical example, when K = smax = 6, for any t and k, pt,smax,k, ≈ 0.067 and pt,smax−1,k ≈ 0.013. In contrast, when K = 6 and smax = 3, for any t and k, pt,smax,k ≈ 0.116 and pt,smax−1,k ≈ 0.023. Variation in the frequency of these updates has a negative impact, especially when the state with the highest reward is not smax. In this paper, we propose an alternative policy, named *Uniform-Explore-First*, which follows a strategy of exploring information uniformly initially and subsequently exploiting this explored information. Specifically, during the specified initial rounds E, the agent explores the arm with the minimum number of visits over the (s, a) pairs, given the current state s up to round t, denoted as vt(s, a). If vt(s, a) is tied for multiple arms, the selection of the arm can be arbitrary. After the exploration phase, the agent adopts a greedy approach by pulling the arm with the maximum Qˆ(s, a) to maximize the (discounted) cumulative rewards. In contrast to random exploration, our exploration strategy uniformly pulls arms at states other than smax. ## 5 Convergence Analysis This section presents a convergence analysis and provides remarks on our problem setting. Throughout this section, our focus is on the infinite-horizon discounted MDP, i.e., T = ∞ and γ < 1. Theorem 5.1. (Convergence of Q*-functions) Suppose that the variance of the (stochastic) reward* r is finite and αt =1 t+t0 where t0 *is some constant value. This learning rate satisfies Robbins-Monro scheme* 2If *t < s*max, some states cannot be reached. For example, if s0,k = smax, it takes at least smax rounds to visit the state at sk = smax − 1. Algorithm 1 State-Separated SARSA (SS-SARSA) Input: T, γ, α Initialize: QˆSS,k(sk, sa, a) ← 0 for all k ∈ [K] and a ∈ [K] for t = 1, 2, · · · , T do a ← (argmina∈[K]vt(s, a) when t ≤ E argmaxa∈[K]Qˆt(s, a) when *t > E*▷ Uniform-Explore-First r ← r(sa, a) s ′ k ← (1 for k = a min{sk + 1, smax} for k ∈ [K]\{a} Update QˆSS,k(sk, sa, a) for all k ∈ [K]: QˆSS,k(sk, sa, a) ← QˆSS,k(sk, sa, a) + α(r(sa, a) + γQˆ*SS,k*(s ′ k , s′a′ , a′) − QˆSS,k(sk, sa, a)) end for (i.e. P∞ t=1 αt = ∞ and P∞ t=1 α 2 t < ∞). Suppose that πt(s|a) is a GLIE policy, that is, it visits each (s, a) infinitely often and chooses an arm greedily with probability one in the limit (i.e. πt(s|a) = arg maxa Qˆ(s, a) w.p. 1 as t → ∞). Also, for each (s, a) pair, define the error of the Q*-function at round* t ∈ [T] as Qerr,t(s, a) := |Qˆt(s, a) − Q∗(s, a)|. Define a set V := {(s, a)| visit (s, a) for some policy }*. Then, for any* (s, a) ∈ V , Qerr,t(s, a) → 0 with probability 1 as T → ∞. Proof: As indicated in Section 4, since we adopt the learning rate which is independent of (sk, sa, a), SSSARSA can be considered SARSA after combining the SS-Q functions. Consequently, we can prove the convergence theorem analogous to Singh et al. (2000), under the same assumption as SARSA. Remark 5.2. *(No visit at some states) While the convergence theorem in MDP (Watkins & Dayan, 1992;* Singh et al., 2000) assumes infinite visits for each (s, a) *pair, in our MDP settings certain states are never* visited for any policy. For instance, (s, a) *with* sk = 1 for multiple k *never appears, since it means multiple* arms were pulled at the last time point. Only the value of Q-function within V *is needed to learn a policy.* Remark 5.3. *(Convergence theorem for Uniform-Explore-First) The Uniform-Explore-First policy is considered GLIE because it greedily pulls an arm after the exploration phase. However, due to the separation* of the exploration and the exploitation phase, our policy does not guarantee the infinite updating of all Qfunctions in V *. Nevertheless, in practice, by taking sufficient rounds for uniform exploration, the estimated* Q-function approaches the Bellman optimality equation for Q-function of each (s, a) ∈ V . ## 6 Experiments In this section, we present simulation results to empirically verify the performance of our algorithm. Section 6.1 gives the simulation settings, and Section 6.2 shows the superiority of our algorithm in some metrics. ## 6.1 Simulation Settings In this subsection, we state MDP environments, metrics, SS-SARSA parameters, and competing algorithms related to our work. Initial state: Throughout the simulation, initial state is assumed to be smax for every k ∈ [K]. In item recommendation, if the conversion rate increases with the number of states, representing the freshness of items, this assumption reflects the agent before the item recommendation is exposed. Note that this assumption is made purely for simplicity and that our algorithm can be generalized to any other initial states. Discount rate: We conduct simulations in both discounted (γ ∈ (0, 1)) and non-discounted (γ = 1) settings. The discounted setting is commonly employed in infinite horizon MDP. We set γ to (1 − 10−T), depending on T, because it makes the rewards in later rounds not significantly smaller. On the other hand, the nondiscounted setting is the standard setting in MAB. As discussed in the previous section, our algorithm does not satisfy the assumption for the convergence theorem. Nevertheless, we will verify that our algorithm performs well in practice for both situations. Reward: To begin, we demonstrate with a small state combination, specifically setting K = smax = 3 and T = 105. The first arm is nonstationary, with an expected reward of 0.1 for s1 ∈ {1, 2} and an increase to 0.3 at s1 = 3. Conversely, the remaining two arms are stationary, with an expected reward of 0.2 irrespective of the state of each arm. Note that our claim of superiority is against MAB algorithms rather than the tabular RL algorithms. Indeed, the cardinality of SS-Q functions in SS-SARSA and the Q-function for Q-learning/SARSA are the same. Next, we address larger-scale problems and provide a comprehensive framework for all remaining experiments. We consider various settings involved in reward heterogeneity and reward increments. For reward heterogeneity, we consider Kbest best arms and the remaining (K − Kbest) sub-best arms. When K = Kbest, arms are homogeneous: changes in expected rewards per state increment for all arms are the same, as illustrated in 6-Homo and 10-Homo in Table 1. In contrast, when *K > K*best, arms are heterogeneous: changes in expected rewards per state increment differ for best arms and sub-best arms, as illustrated in 6-Hetero and 10-Hetero in Table 1. For reward increments, we consider two cases of changes in expected rewards with state increments: monotone-increasing rewards and increasing-then-decreasing rewards. In the monotone-increasing case, the expected reward for each arm is as follows: for the best arms, the expected rewards start at 0.1 for sa = 1 and increase by Vbest per state increment, reaching 0.6 at smax. For the sub-best arms, the expected rewards at sa = 1 remain the same, but increase only by Vsub-best(< Vbest) per state increment, reaching 0.5 at smax. The increasing-then-decreasing case is similar to the monotone case, but the state of peak reward differs. Specifically, for the best arms, the expected rewards at sa = 1 are the same as in the monotone case and increase by Vbest per state increment, but reach 0.6 at smax − 1 and decrease by Vbest at smax. Similarly, for the sub-best arms, the reward at the peak state is 0.5 at smax − 1 and decreases by Vsub-best at smax. Finally, we state the total rounds and reward distributions. Considering the cardinality of the states, T = 105for K = 3, 6 and T = 106for K = 10. Moreover, we use Bernoulli and normal distributions as reward distributions for all simulations. In the normal case, the variance is 0.5. The normal case is more difficult due to its higher variance compared to the Bernoulli case. | Name | K | smax | |s| | T | Kbest | Vbest | Vsub-best | | | |-----------|-----|-----------|--------------------|-----|---------|---------|-------------|-------------|----| | | | 1 | 2 | | | | | | | | 6-Hetero | 6 | 3 | 3 6 (> 7 × 102 ) | 105 | 3 | 1 4 ( | ) | 1 5 ( | ) | | | | 2 | 5 | | | | | | | | 6-Homo | 6 | 6 | 6 6 (> 4.6 × 104 ) | 105 | 6 | 1 | 1 ) | - | | | | | 10 ( 8 1 | 2 | | | | | | | | 10-Hetero | 10 | 5 | 5 10(> 9 × 106 ) | 106 | 5 | 1 8 ( | ) | 1 10 ( 15 ) | | | | | 6 | | | | | | | | | 10-Homo | 10 | 10 | 1010 | 106 | 10 | 1 | 1 | | | | | | 18 ( 16 ) | - | | | | | | | Table 1: Monotone-increasing (increasing-then-decreasing) reward settings (The values enclosed in parentheses in Vbest and Vsub-best pertain to the increasing-then-decreasing case.) Metrics: We introduce some metrics for comparing algorithm performance. The first metric is cumulative regret, which is the difference in (discounted) cumulative expected rewards between the best and learning policies. This metric measures the closeness of the learning policy to the optimal one, so smaller is better, and is often used in MAB (Lattimore & Szepesvári, 2020). If we can define the optimal policy, we also use the rate of optimal policy, which measures how often the optimal policy is obtained out of a thousand simulations. The optimal policy is defined as having no regret during the last 3 × smax rounds, making it easy to assess the quality of exploration. Note that this metric only reflects the performance after exploration, while cumulative regret/rewards cover both the exploration and exploitation phases. Therefore, even if some algorithms have smaller regret than others, they may not necessarily achieve optimal policy. In the (first) small-scale and the monotone-increasing case, the optimal policy can be obtained explicitly. In the former case, the optimal policy is to select the first arm only when s1 = smax = 3 and to choose the other arms otherwise. In the monotone-increasing case, as long as smax ≤ Kbest, the agent has the chance to pull the best arm at smax given the initial state s0 = smax. Therefore, the optimal policy cyclically pulls only Kbest best arms at smax. However, in the increasing-then-decreasing rewards, given s0 = smax, there is no chance to pull the best arm at smax − 1 in the first round. Thus, optimal policy is not trivial.3In such a case, we use (discounted) cumulative expected rewards without optimal policy as another metric. The larger the metric, the better, although its maximization is equivalent to the minimization of cumulative regret. SS-SARSA parameters: We set the learning rate and the exploration horizon in our algorithm. As pointed out in the previous section, we use αt =1 t+t0 . The smaller the t0 value, the greater the effect of the update in the early visit, causing unstable learning due to the stochasticity of rewards. Thus, a large t0 is better for mitigating this issue, and we set t0 = 5000 through the simulations. The size of the exploration should be determined by considering the trade-off between exploration and exploitation. Over-exploration, characterized by uniformly pulling arms in a large fraction of rounds, fails to sufficiently exploit the best arm, leading to large regret or small cumulative rewards. Conversely, underexploration, characterized by uniformly pulling arms in a small fraction of rounds, fails to gather adequate information about each Q-function, resulting in a low probability of selecting the best arm. Taking this tradeoff into account, we allocate uniform exploration to 10% of the total rounds. (i.e. E = 0.1T). Compared algorithms: We introduce other algorithms to compare their performance to our algorithm. The first algorithms are the original tabular model-free RL algorithms: Q-learning (Watkins & Dayan, 1992) and SARSA (Rummery & Niranjan, 1994). These algorithms also have convergence results to the Bellman optimality equation for Q-function. However, in our setting, as described in section 4.1, these algorithms have to estimate enormous Q-functions unless smax and K are small, leading to few updates after many rounds. For the comparison of our algorithm, we maintain the learning rate, policy, and exploration size identical to those of SS-SARSA. Another algorithm is the dRGP-TS algorithm (Pike-Burke & Grunewalder, 2019). This approach utilizes GP regression for each arm to estimate the reward distribution. After sampling rewards from the distribution of each arm for predetermined d rounds, the agent pulls the sequence of arms with the highest total rewards. Due to the Kdcombinations to draw arms for each d round, a large d becomes unrealistic 4. Therefore, in our experiments, we consider d = 1, 2. This approach helps to avoid the state-combination problem in Q-learning/SARSA and shows a Bayesian regret upper bound. However, the upper bound is derived by repeatedly applying the d-step regret. In this simulation, We evaluate full-horizon regret that is defined in metrics. Additionally, we use an alternative implementation that is equivalent to the original but reduces computational complexity. The pseudo-code details and parameter settings are provided in Appendix A. ## 6.2 Simulation Results We show only the results with discounted rewards here, and the undiscounted cases will be deferred to Appendix B. ## 6.2.1 Small-Scale Problem For each reward distribution, Figure 1 shows the cumulative regret transitions (left), box plots of cumulative regret in the final round (middle), and the rate of the optimal policy (right). In the cumulative regret (left), the solid line and the filled area represent the median and the 90% confidence interval, calculated with a thousand simulations. Exploration of each algorithm increases the cumulative regret in the early stage, and then the agent exploits the learned policy. In the box plots of cumulative regret, the black circle, the boxes, the bottom lines in the boxes, the middle lines in the boxes, and the top 3Even if we set the initial state as smax − 1, after the state transition, each arm state is either 1 or smax. Thus, in the second round, the same problem occurs in the case of s0 = smax. 4Pike-Burke & Grunewalder (2019) introduces a computationally efficient algorithm for searching an optimal sequence of arms under large values of K and d. However, experimental results showed similar performance for d = 1, 3. Additionally, in the regret comparison with other algorithms, only the case d = 1 is considered. ![9_image_0.png](9_image_0.png) Figure 1: Small-scale problem (K = 3, smax = 3, γ = 0.99999) lines in the boxes represent the mean, 50% box, the 25th, 50th, and 75th percentiles, respectively. All points outside the interquartile range of the box, which is 1.5 times the difference between the top and bottom lines, correspond to outliers. The upper and lower lines represent the maximum and minimum values after removing outliers. These graphs reflect the stability of the algorithms through the variation in cumulative regret. The rate of optimal policy over a thousand simulations assesses the quality of exploration. Figure 1 shows the results of K = 3 and smax = 3. In the case of Bernoulli rewards (1(a)), all the methods except 2RGP-TS generally achieve the optimal policy due to the simple setting of the problem. 1RGP-TS shows slightly unstable results. 2RGP-TS does not perform well, suggesting that this method fails to identify the optimal sequence of selected arms even when considering a limited number of combinations. ## 6.2.2 Monotone-Increasing Rewards In Figures 2 and 3, the type and arrangement of the graphs are the same as before. Figure 2 shows the results of K = 6. With a larger state space, the standard Q-learning and SARSA do not achieve the optimal policy. For both 6-Hetero and 6-Homo, SS-SARSA stably achieves the optimal policy in most trials within the exploration phase, as seen by no increase of regret after that round. In contrast, in 6-Hetero, 1RGP-TS, the rate of optimal policy is lower. This is caused by the occasional failure to find the optimal policy, which can be seen by the increase of cumulative regret, especially notable in normal rewards. Similarly, in 6-Hetero, 2RGP-TS tends to fail in finding the optimal policy for the same reason as the experiments with K = 3. By comparing the results with K = 3 and K = 6, we can see that SS-SARSA as well as 1RGP-TS can handle the large state space much more efficiently than Q-learning and SARSA, which suffer from complexity even with K = 6. Figure 3 depicts the results for 10 arms. The results of SARSA and Q-learning are not included due to their requirement of a large memory for Q-functions and poor performance in the case of 6 arms. In 10-Hetero, for both Bernoulli (a) and normal (b) rewards, SS-SARSA has low regret and is competitive with 1RGP-TS in ![10_image_0.png](10_image_0.png) Figure 2: Increasing rewards (K = 6, y = 0.99999) obtaining the optimal policy for all simulations. However, While SS-SARSA has a higher rate of the optimal policy and more stable cumulative regret than 1RGP-TS. 2RGP-TS performs the worst in a manner similar to the previous cases. In 10-homo, our algorithm has larger regret than 1RGP-TS due to the exploration phase, but the rate of optimal policy remains competitive. ![11_image_0.png](11_image_0.png) Figure 3: Increasing rewards (K = 10, y = 0.999999) The obtained results indicate that SS-SARSA is the most stability for any case. Notably, in heterogeneous rewards, our algorithm demonstrates the most stable performance with low cumulative regret and the highest rate of optimal policy. 1RGP-TS performs slightly better than our algorithm in the case of homogeneous rewards, but it becomes unstable when dealing with heterogeneous and high-variance rewards. ![12_image_0.png](12_image_0.png) Figure 4: Increasing-then-decresing rewards (K = 6, y = 0.9999) ## 6.2.3 Increasing-Then-Decreasing Rewards First, note that in the case of increasing-then-decreasing rewards, we employ the cumulative rewards instead of regret in the left and center graphs of Figures 4 and 5, so the larger values are better. Those results show that the proposed SS-SARSA performs better or competitively compared to the other methods. In contrast to the monotone rewards, SS-SARSA does not have higher cumulative regret than 1RGP-TS even in the case of homogeneous rewards. The reason for this is the proposed policy; Uniform- Explore-First enforces each SS-Q-function to update uniformly across all states, while the exploration in 1RGP-TS does not take into account the structure inherent in our MDP. ![13_image_0.png](13_image_0.png) Figure 5: Increasing-then-decresing rewards (K = 10, y = 0.999999) Overall, SS-SARSA outperforms other algorithms in terms of stability of regret and rate of optimal policy, regardless of reward distribution, heterogeneity, and state cardinality. The only algorithm that comes close to SS-SARSA is IRGP-TS, which is competitive or slightly superior in cases of homogeneous and monotoneincreasing rewards. However, its performance significantly decreases with heterogeneous or non-monotone rewards. ## 7 Conclusion We propose an RL algorithm, called SS-SARSA, to solve the recovering bandit problem. This algorithm estimates Q-functions by combining SS-Q-functions and updates like SARSA, leading to efficient learning and low time complexity. We prove the convergence theorem for the optimal policy. Furthermore, our algorithm performs well in both monotone and non-monotone reward scenarios, as demonstrated through simulations. The algorithm has several advantages, but it also has some limitations. Firstly, when there are many arms and a large smax, even our algorithm struggles with too many combinations. In such cases, a functional approximation of Q-function is considered for efficient learning. Secondly, we only presented results from simulations, not from real-world data. Our settings require a substantial amount of data points for a person, but to our knowledge such data do not exist. The final is a finite sample analysis. Regret bounds and sample complexity are needed without relying on strong reward structures. These points are left in future works. ## References Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 47(2):235–256, May 2002. ISSN 1573-0565. doi: 10.1023/A:1013689704352. URL https://doi.org/10.1023/A:1013689704352. Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. *Advances in neural information processing systems*, 27, 2014. Djallel Bouneffouf, Irina Rish, and Charu Aggarwal. Survey on applications of multi-armed and contextual bandits. In *2020 IEEE Congress on Evolutionary Computation (CEC)*, pp. 1–8. IEEE, 2020. Olivier Chapelle and Lihong Li. An empirical evaluation of thompson sampling. *Advances in neural information processing systems*, 24, 2011. Elena Gangan, Milos Kudus, and Eugene Ilyushin. Survey of multiarmed bandit algorithms applied to recommendation systems. 9(4):16, 2021. Aurélien Garivier and Olivier Cappé. The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond. In *Proceedings of the 24th Annual Conference on Learning Theory*, pp. 359–376. JMLR Workshop and Conference Proceedings, December 2011. URL https://proceedings.mlr.press/v19/garivier11a. html. ISSN: 1938-7228. Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for non-stationary bandit problems. In *Algorithmic Learning Theory*, pp. 174–188, 2011. Hoda Heidari, Michael Kearns, and Aaron Roth. Tight policy regret bounds for improving and decaying bandits. In *Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence*, pp. 1562–1570, 2016. Robert Kleinberg and Nicole Immorlica. Recharging bandits. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 309–319. IEEE, 2018. Pierre Laforgue, Giulia Clerici, Nicolò Cesa-Bianchi, and Ran Gilad-Bachrach. A Last Switch Dependent Analysis of Satiation and Seasonality in Bandits. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, pp. 971–990. PMLR, May 2022. URL https://proceedings.mlr. press/v151/laforgue22a.html. ISSN: 2640-3498. Tze Leung Lai, Herbert Robbins, et al. Asymptotically efficient adaptive allocation rules. *Advances in applied* mathematics, 6(1):4–22, 1985. Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 1 edition, July 2020. ISBN 978-1-108-57140-1 978-1-108-48682-8. doi: 10.1017/9781108571401. URL https://www.cambridge. org/core/product/identifier/9781108571401/type/book. Liu Leqi, Fatma Kilinc-Karzan, Zachary C. Lipton, and Alan L. Montgomery. Rebounding Bandits for Modeling Satiation Effects, October 2021. URL http://arxiv.org/abs/2011.06741. arXiv:2011.06741 [cs, stat]. Nir Levine, Koby Crammer, and Shie Mannor. Rotting bandits. Advances in neural information processing systems, 30, 2017. Yang Li, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, and Bin Cui. Efficient automatic cash via rising bandits. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 4763–4771, 2020. Fang Liu, Joohyun Lee, and Ness Shroff. A change-detection based framework for piecewise-stationary multi-armed bandit problem. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Alberto Maria Metelli, Francesco Trovo, Matteo Pirola, and Marcello Restelli. Stochastic rising bandits. In International Conference on Machine Learning, pp. 15421–15457. PMLR, 2022. Kanishka Misra, Eric M. Schwartz, and Jacob Abernethy. Dynamic Online Pricing with Incomplete Information Using Multiarmed Bandit Experiments. *Marketing Science*, 38(2):226–252, March 2019. ISSN 0732-2399. doi: 10.1287/mksc.2018.1129. URL https://pubsonline.informs.org/doi/abs/10.1287/ mksc.2018.1129. Daisuke Moriwaki, Komei Fujita, Shota Yasui, and Takahiro Hoshino. Fatigue-aware ad creative selection. arXiv preprint arXiv:1908.08936, 2019. Ciara Pike-Burke and Steffen Grunewalder. Recovering Bandits. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://papers.nips.cc/paper/2019/ hash/9a093d729036a5bd4736e03c5d634501-Abstract.html. Martin L Puterman. *Markov decision processes: discrete stochastic dynamic programming*. John Wiley & Sons, 2014. Gavin A Rummery and Mahesan Niranjan. *On-line Q-learning using connectionist systems*, volume 37. University of Cambridge, Department of Engineering Cambridge, UK, 1994. Yoan Russac, Claire Vernade, and Olivier Cappé. Weighted linear bandits for non-stationary environments. Advances in Neural Information Processing Systems, 32, 2019. Julien Seznec, Andrea Locatelli, Alexandra Carpentier, Alessandro Lazaric, and Michal Valko. Rotting bandits are no harder than stochastic ones. In *Proceedings of the Twenty-Second International Conference* on Artificial Intelligence and Statistics, pp. 2564–2572. PMLR, April 2019. URL https://proceedings. mlr.press/v89/seznec19a.html. ISSN: 2640-3498. David Simchi-Levi, Zeyu Zheng, and Feng Zhu. Dynamic Planning and Learning under Recovering Rewards. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 9702–9711. PMLR, July 2021. URL https://proceedings.mlr.press/v139/simchi-levi21a.html. ISSN: 2640-3498. Satinder Singh, Tommi Jaakkola, Michael L Littman, and Csaba Szepesvári. Convergence results for singlestep on-policy reinforcement-learning algorithms. *Machine learning*, 38:287–308, 2000. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. William R. Thompson. On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples. *Biometrika*, 25(3/4):285–294, 1933. ISSN 0006-3444. doi: 10.2307/2332286. URL https://www.jstor.org/stable/2332286. Publisher: [Oxford University Press, Biometrika Trust]. Romain Warlop, Alessandro Lazaric, and Jérémie Mary. Fighting Boredom in Recommender Systems with Linear Reinforcement Learning. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 210f760a89db30aa72ca258a3483cc7f-Abstract.html. Christopher JCH Watkins and Peter Dayan. Q-learning. *Machine learning*, 8:279–292, 1992. Kevin P. Yancey and Burr Settles. A Sleeping, Recovering Bandit Algorithm for Optimizing Recurring Notifications. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery* & Data Mining, pp. 3008–3016, Virtual Event CA USA, August 2020. ACM. ISBN 978-1-4503-7998-4. doi: 10.1145/3394486.3403351. URL https://dl.acm.org/doi/10.1145/3394486.3403351. ## A Alternative Implementation Of D**Rgp-Ts** This section details the alternative implementation of dRGP-TS and parameter settings. Before introducing the alternative, we briefly review the GP update in dRGP-TS algorithm (Pike-Burke & Grunewalder, 2019) 5. The original paper formulates the reward mechanism as r(sa, a) = fa(sa)+ϵ where fa is an unknown function depending on the state for arm a and ϵ ∼ N (0, σ2) is a Gaussian noise (σ is known). When we set a GP prior on fa and receive N observed rewards Ra,N = (r1, r2, · · · , rN ) T and states Sa,N = (s1, s2, · · · , sN ) for arm a, its posterior is also GP. That is, for ka,N (s) = (k(s1, s), k(s2, s), · · · , k(sN , s))T and positive semi-definite kernel matrix Ka,N = [k(si, sj )]N i,j=1, the mean and covariance of the posterior after N observations are, $$\mu_{\rm n}(s;N)={\bf k}_{{\rm n},N}(s)^{T}({\bf K}_{{\rm n},N}+\sigma^{2}I)^{-1}{\bf R}_{{\rm n},N}\quad k_{\rm n}(s,s^{\prime};N)=k(s,s^{\prime})-{\bf k}_{{\rm n},N}(s)^{T}({\bf K}_{{\rm n},N}+\sigma^{2}I)^{-1}{\bf k}_{{\rm n},N}(s^{\prime}).\tag{13}$$ Moreover, when s = s ′, ka(*s, s*′; N) is equivalent to the variance σ 2 a (s; N). A bottleneck in the above is the time complexity for computing the inverse matrix. In dGRP-TS, for every d rounds, the time complexity for the inverse matrix is O(ct(a) 3), where ct(a) represents the number of times an arm a is pulled up to round t. Therefore, the original dRGP-TS is impractical for large T. Instead, we introduce an alternative implementation of dRGP-TS, which is an equivalent update to (13) but reduces its time complexity. Since the states of each arm only take discrete values from one to smax, we can reformulate the argument of each arm's GP function using a smax dimensional normal distribution. To see this, we define fa the reward distribution of reward for arm a ∈ [K] with smax-dimensional normal distribution as follows. ![16_image_0.png](16_image_0.png) $$(14)$$ (14) , where fa(s) is reward function for arm a and state s ∈ [smax], µa,s is the mean for arm a and state s and k*a,ss*′ is the covariance of the state s and s ′for arm a (it is also variance when s = s ′). Additionally, we define s-th order column of Ka,smax as ka,smax (s). Next, we will explain how to update the posterior to reduce the time complexity. When using discrete input, we compute only the smax-dimensional inverse matrix to update the posterior as in (13). To do so, we introduce the count matrix for arm a, Ca, which is N × smax matrix, and its (*i, j*) element is one when the agent pulls the arm a at state j ∈ [smax] for the i-th time, otherwise zero. Then, we can easily verify that ka,N (s) T = ka,smax (s) T C T a and Ka,N = CaKa,smax C T a . Thus, if the reward were deterministic, µa(s; N) in 5Some notations differ from the original (Pike-Burke & Grunewalder, 2019) to match the notations of our paper. (13) could be rewritten as follows. $$\mu_{a}(s;N)=\mathbf{k}_{a,N}(s)^{T}(\mathbf{K}_{a,N}+\sigma^{2}I)^{-1}\mathbf{R}_{a,N}$$ $$=k_{a,s_{\max}}^{T}C_{a}^{T}(C_{a}\mathbf{K}_{a,s_{\max}}C_{a}^{T}+\sigma^{2}I)^{-1}\mathbf{R}_{a,N}$$ $$=k_{a,s_{\max}}^{T}C_{a}^{T}\underbrace{(C_{a}\mathbf{K}_{a,s_{\max}}C_{a}^{T}+\sigma^{2}I)^{-1}}_{(\cdot)}C_{a}\overline{\mathbf{R}}_{a,s_{\max}},$$ $$\left(15\right)$$ $$(16)$$ where Ra,smax is the mean reward vector from one to smax for arm a and we use Ra,N = CaRa,smax . In practice, since the reward is stochastic, the equality in (15) is replaced by approximation, yet such an approximation saves memory for RN . After applying Sherman–Morrison–Woodbury formula in ⃝1 , $$\mathbf{\tilde{(I)}}={\frac{1}{\sigma^{2}}}I-{\frac{1}{\sigma^{2}}}C_{a}(\mathbf{K}_{a,s_{\mathrm{max}}}^{-1}+{\frac{1}{\sigma^{2}}}C_{a}^{T}C_{a})^{-1}{\frac{1}{\sigma^{2}}}C_{a}^{T}.$$ Thus, by (16), we can rewrite (15) as follows. µa(s; N) = k T a,smax C T a 1 σ 2 I − 1 σ 2 Ca(K−1 a,smax + 1 σ 2 C T a Ca) −1 1 σ 2 C T a CaRa,smax = k T a,smax 1 σ 2 C T a Ca − 1 σ 2 C T a Ca(K−1 smax + 1 σ 2 C T a Ca) −1 1 σ 2 C T a Ca Ra,smax = k T a,smax Ca,σ − Ca,σ(K−1 a,smax + Ca,σ) −1Ca,σ Ra,smax , (17) where Ca,σ := 1 σ2 C T a Ca. Therefore, we need to compute only smax-dimensional inverse matrix for updating the posterior mean. In the same way, the posterior covariance matrix can be updated as follows. $$k_{a}(s,s^{\prime};N)=k(s,s^{\prime})-k_{a,s_{\max}}(s)^{T}\left\{{\bf C}_{a,\sigma}-{\bf C}_{a,\sigma}({\bf K}_{a,s_{\max}}^{-1}+{\bf C}_{a,\sigma})^{-1}{\bf C}_{a,\sigma}\right\}k_{a,s_{\max}}(s^{\prime}).\tag{18}$$ The pseudo-code for the algorithm is provided in Algorithm 2. The inputs are total rounds T, a standard error of the noise in GP regression σ > 0, a length scale of RBF kernel c > 0, and a size of lookahead d. With initial prior with a mean set to zero and covariance matrix with RBF kernel, and initial states, the algorithm proceeds as follows. For every d round, the agent selects combinations of arms Id,t to maximize the total reward, Pd−1 i=0 r(s (i) a(i), a(i)), where s (i) a(i) and a (i)represent the state for arm a (i) and arm after i steps of s and a, respectively (for i = 0, s (0) a(0) = sa and a (0) = a). These values are sampled from normal distributions with estimated means and variances. For the arm a = I (l) d,t selected in lth step, the agent receives its reward, updates the posterior mean (17) and covariance (18), and trans to the next state. Since we repeat the above procedure for round t = 1, 2, · · · , ⌊*T /d*⌋, its time complexity is O(KdT). When we run simulations, we set the input parameters as follows. As discussed in Section 6.1, we specify d = 1, 2. The remaining parameters, σ = 1.0 and c = 2.5, are consistent with those utilized in the simulation presented in Pike-Burke & Grunewalder (2019). ## B Simulation Results With Undiscounted Rewards In this section, we show the performance of our algorithm over the competitors in undiscounted cases (i.e. γ = 1), which is the default setting in MAB. In each case, the rate of the optimal policy is similar to the discounted case, but since the rewards in the later rounds are undiscounted, the cumulative regret/rewards is different. The most notable case is that of increasing-then-decreasing rewards (Figure 9 and 10); from the left and right graphs, the difference in cumulative rewards between SS-SARSA and 1RGP-TS is greater. ## Algorithm 2 Drgp-Ts (Alternative Implementation) Input: T, σ: standard error of noise, c: length scale of RBF kernel, d: size of lookahead Initialize: µsa,a = 0 ∀a ∈ [K], and ∀sa ∈ [smax]; ka(*i, j*) = e −(i−j) 2/2l 2∀a ∈ [K] and ∀*i, j* ∈ [smax]; s ← smax for t = 1, 2, · · · , ⌊*T /d*⌋ do Pull a d-sequence of arms Id,t = argmax(a,a′*,...,a* ′(d−1)) Pd−1 i=0 r(s (i) a(i), a(i)), where r(s (i) a(i), a(i)) ∼ N(µs (i) a(i) ,a(i), ka(i) (s (i) a(i), s (i) a(i) )) ∀s (i) a(i) ∈ [smax] , ∀a (i) ∈ [K], and ∀i ∈ {0*, . . . , d* − 1} ``` for l = 1, 2, · · · , d do a ← I (l) d,t ``` r ← r(sa, a) Update µsa,a using (17) for m = 1, 2, · · · , smax do Update ka(sa, m) using (18) end for s ′ k ← (1 for k = a min{sk + 1, smax} for k ∈ [K]\{a} end for end for ![18_image_0.png](18_image_0.png) Figure 6: Small-scale problem (K = 3, smax = 3, γ = 1) ![19_image_0.png](19_image_0.png) Figure 7: Increasing rewards (K = 6, y = 1) ![20_image_0.png](20_image_0.png) Figure 8: Increasing rewards (K = 10, y = 1) ![21_image_0.png](21_image_0.png) ![21_image_1.png](21_image_1.png) (a) 6-Hetero (Bernoulli rewards) (d) 6-Homo (Normal rewards) Figure 9: Increasing-then-decresing rewards (K = 6, ![22_image_0.png](22_image_0.png) Figure 10: Increasing-then-decresing rewards (K = 10, y = 1)
Review 1: Summary: This paper studies _recovering bandits_, a rested nonstationary setting where an arm's reward depend on the time since last pull. For each arm, this elapsed time between pulls can be modeled via a state-evolution in an MDP, which allows for one to employ techniques from RL. However, the drawback of this model is that there are combinatorially many states (in terms of bandit arms) which calls for more care in designing a practical algorithm. This paper proposes to fix this issue by working with a newly proposed "state-separated Q-function" which only requires tracking a polynomial number of state-action pairs. Combined with a SARSA-style update rule, this leads to a fairly simple new algorithm for recovering bandits. They show theoretical convergence guarantees of the estimated Q-functions and experiments which show their algorithm is at least on-par with a previous state-of-the-art based on a Gaussian process model, or offers other advantages in performance stability or computational complexity. Strengths and Weaknesses: Minimizing the combinatorial computation for this model to linear time is an important problem. The state-separated Q-function idea is a neat trick, but I guess not too surprising since one naturally suspects that the complexity of the problem is not truly $s_{\max}^K$ but more like $K \times s_{\max}$ since the states of unplayed arms evolve in a unified way, and this seems to be exactly what is being expressed in keeping track of $(s_k,s_a,a)$ rather than $({\bf s},a)$. However, I find the main theoretical contribution of this work (Theorem 5.1) underwhelming as it seems to just boil down to a standard convergence result from RL without nearly any modification. There is no discussion of finite-sample or even asymptotic regret bounds, of how the learning rate $\alpha_t$ or exploration period $E$ should be chosen for best regret. Even a rudimentary investigation of regret bounds, making assumptions as needed, would have made the paper stronger in my opinion. The experiments also do not sell a very convincing message since it seems like the 1RGP-TS algorithm still overall performs as well as (or better in some settings) than SS-SARSA and the main advantages of SS-SARSA is better stability in some settings and computation. If the authors wish to make the paper focus on experiments and application, then I think a more substantive experimental study is in order comparing with other non-stationary algorithms, real-world data, and demonstrations of superiority in computation time. Thus, I overall feel the paper leaves more to be desired and could use more work (both in discussion and results) before meeting the bar for acceptance. # Writing Notes * what is "s+1" in paragraph above display (9)? I presume this is the initial state of the other arms. * argmin and argmax should be written as \argmin or \argmax in Algorithm 1 pseudocode. * It might also be good to include some more references on this "rested nonstationary" setting like "Preferences Evolve And So Should Your Bandits: Bandits with Evolving States for Online Platforms" by Khosravi et al., 2024 (and see related works within). Requested Changes: * Can the authors comment on showing regret bounds for their procedure? * How is the exploration period $E$ set for Theorem 5.1? It seems like it needs to depend on a lot of things about the MDP or Q-functions, in order for the theorem statement to be immediate. There should also be discussion on whether such assumptions that the learner has knowledge of such things is practical or not. * I don't quite understand the discussion of the two paragraphs following Remark 4.1 on page 6 about why uniform exploration is not suitable for this setting, especially because at the end of the day, the chosen strategy of "Uniform-Explore-First" is still some kind of uniform exploration of arms. I guess one is just saying that the exploration policy should also take into account what the actual state is, so it doesn't allow some state to progress very fast to $s_{\max}$. In particular, I'm not sure what this last sentence "Variation in the frequency of these updates has a negative impact, especially when the state with the highest reward is not $s_{\max}$" is about, and it would be helpful if this could be made into a more precise theorem statement. Broader Impact Concerns: No broader impact concerns. ================================================== Review 2: Summary: This paper study the recovering bandit problem where the reward of each arm depends on the number of rounds elapsed since the last time the arm was pulled. The authors proposed a RL algorithm SS-SARSA to solve this problem, which tries to get around the combinatorial estimation of the Q function. The authors developed asymptotic results for RL algorithms that meet certain conditions; unfortunately, the proposed algorithm doesn't meet all the conditions. The authors simulated experiments and showed their algorithm outperform some baselines. Strengths and Weaknesses: Strengths: This paper is well written and easy to follow. The authors pointed out some failure cases of directly applying RL algorithms on recovering bandits. Weaknesses: 1. It seems that the proposed algorithm doesn't completely overcome the combinatorial issues: computing and storing the Q function in Eq. (8) requires exponential time and space. The proposed algorithm does need to take argmax wrt the Q function in Eq. (8) when transiting from exploration to exploitation. 2. While the authors developed some theoretical results for RL algorithms that meet certain conditions. The proposed algorithm, unfortunately, doesn't meet all the conditions. As a result, it's not clear if these theoretical guarantees hold true for the proposed algorithm. Additionally, these guarantees are asymptotic in nature; yet finite time guarantees are generally more desired. Requested Changes: Please address the weaknesses pointed above. Broader Impact Concerns: N/A. ================================================== Review 3: Summary: This paper studies the recovering bandit problem, a variant of multi-armed bandit (MAB) where each arms reward have a "cooldown" since it is last pulled. In order to capture all possible states of the arms, an combinatorially large state space is required to apply a standard approach. Instead this paper proposes to decompose the Q-function into pairs of states so the computational complexity is quadratic rather than exponential in the number of arms. Experiments were conducted on a set of synthetic setups and compared to some baselines. Strengths and Weaknesses: Strengths - Simple and intuitive solution to a well described problem. Weaknesses - Only simulation experiments are conducted. Requested Changes: - Typo: in abstract the proposed approach is named "State-Separate SARSA" instead of "State-Separate**d** SARSA" - Typo: on page 5 the reference to Section 3 is incorrectly shown as "Chapter 3" - Page 6: I'm not sure I understand the difference between random exploration and the proposed uniform exploration. In the explanation for random exploration, the paper says "$\pi(a|s)=\frac{1}{K}$ under a random policy" but isn't this just a uniform policy? How is the proposed Uniform-Explore-First different? - The figures for experimental results are blurry, and many of the lines are blurred together making it hard to tell the difference. - In some experiments, 1RGP-TS performs just as well as or even better than the proposed SS-SARSA (Fig 2 and 3), but the paper seems to claim their proposed approach is best in all cases. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: This paper proposes an RL algorithm for recovering bandits, where the reward of the arm depends on the time since its last pull. The algorithm is analyzed and empirically evaluated. Both leave a lot of room for improvement. See **Claims And Evidence**. Unfortunately, the authors did not respond with an updated version of the paper. I communicated with them. They confirmed that the paper needs a major revision and that they cannot meet the decision deadline. ==================================================
# Jumpstyle: Jump Starting Style Aware Test-Time Domain Generalization Anonymous authors Paper under double-blind review ## Abstract The performance of deep networks is quite vulnerable to distribution shifts encountered during test-time, and this applies even for models which have been trained to generalize to unseen domains. Thus, it is imperative that the model updates itself, leveraging the test data in an online manner. In this work, we propose a novel framework for test-time adaptation of deep networks trained in the Domain Generalization (DG) setting. Specifically, we propose two modifications, (i) Jump starting the adaptation using effective initialization and (ii) Style-aware augmentation based pseudo-labelling over the current state-of-the-art approach for test-time adaptation, namely Tent, for the DG task. The proposed framework only assumes access to the trained backbone and is agnostic to the model training process. We demonstrate the effectiveness of the proposed JumpStyle framework on four DG benchmark datasets, namely, PACS, VLCS, Office-Home and Terra-Incognita. Extensive experiments using standard backbones trained using multiple source domains and also the state-of-theart DG method shows that the proposed framework is generalizable not only across different backbones, but also different training methods. ## 1 Introduction Research in deep learning has achieved remarkable progress in solving several computer vision tasks like image classification, object detection, segmentation Deng et al. (2009); Lin et al. (2014); Everingham et al. (2010); Chen et al. (2017); He et al. (2017); Ren et al. (2015). However, their performance usually drops significantly when data from previously unseen domains are encountered during testing, which is quite common in real scenarios. To overcome this, there has been a considerable amount of research interest in areas like Unsupervised Domain Adaptation (UDA), Domain Generalization (DG), etc. UDA setting assumes access to labelled source domain data and unlabelled target domain data during training. However, the target domain samples may not be available during model training. This is addressed in the DG setup where the objective is to use multiple source domains to learn domain invariant representations, thus preparing the model for future deployment, where the test samples can come from a different domain. But, none of these approaches leverage the rich information inherently present in the test data, which may help to improve the test performance. Motivated by this, recently, Test-Time Adaptation (TTA) methods are gaining increasing importance, as they can leverage the testing data to reduce the adverse effect of distribution shift during testing. Here, we specifically focus on the approaches which can adapt any off-the-shelf model using the unlabeled test data in an online fashion. Very recently, a classifier adjustment framework Iwasawa & Matsuo (2021) was proposed for test-time adaptation in DG setup, which reports impressive performance for several backbones. Inspired by this work, here, we present a complementary analysis, where we analyze if different backbones trained using simple Empirical Risk Minimization (ERM) or even state-of-the-art DG approaches Zhou et al. (2021) specialized for generalization can further benefit from TTA using unlabelled test data. Towards this goal, we build upon the state-of-the-art TTA method Tent Wang et al. (2021) and suitably adapt it for the DG application. Our proposed simple yet effective framework termed **JumpStyle**, consists of two main components: (i) **Jump** start initialization that updates Batch-Normalization (BN) statistics as a convex combination of the source as well as the test data statistics, conditioning the mixing coefficient on the number of test samples available. (ii) Consistency of **Style**-aware augmentations for pseudo-labelling the unseen target domain data for updating the BN affine (scale and shift) parameters, in addition to the objective of test entropy minimization as in Tent. Our contributions can thus be summarized as follows: - We propose a novel framework, namely *JumpStyle* for addressing the task of test-time adaptation in the Domain Generalization setup. - JumpStyle can be seamlessly integrated with several backbones trained using various DG approaches. - Extensive experiments on four benchmark datasets using different backbones, also trained using different DG methods demonstrate the effectiveness of the proposed framework. ## 2 Related Work Here, we briefly review some recent literature relevant to our work. ## 2.1 Domain Generalization The objective of DG is to learn domain invariant representations using labelled data from multiple source domains belonging to the same classes, such that the trained model is robust to unseen test domains. In Li et al. (2018b), adversarial autoencoders are used to learn generalized features by minimizing the Maximum Mean Discrepancy (MMD) measure. An auxiliary unsupervised learning task of solving jigsaw puzzles along with the classification task to discover invariances and regularities was proposed in Carlucci et al. (2019). Another line of work Li et al. (2018a); Balaji et al. (2018) uses meta-learning to synthesize pseudo-training and test domains to generalize to domain shifts during testing. The Invariant Risk Minimization (IRM) learning paradigm proposed in Arjovsky et al. (2019), estimates invariant, causal predictors from multiple domains, enabling generalization to unseen domains. In Kim et al. (2021), a self-supervised contrastive regularization method was proposed, while in Li et al. (2021), the features are perturbed with Gaussian noise during training. In Robey et al. (2021), the DG problem is formulated as an infinite-dimensional constrained statistical learning problem, for which a novel algorithm inspired from non-convex duality theory was proposed. Recently, Zhou et al. (2021) proposes to mix instance level feature statistics implicitly to synthesize pseudo domains. This increases the diversity of source domains resulting in a model with better generalization ability. ## 2.2 Test Time Adaptation TTA refers to adapting the trained model during testing to improve the performance on the test data, which can come from an unseen domain. Although the test data is unlabeled, they provide rich domain information which can be utilized to update the trained model during deployment. Due to its wide applicability for real world deployment of deep networks, this field is gaining increasing attention. In Schneider et al. (2020), they propose to update BN statistics as a convex combination of the previously estimated training data statistics and the test data statistics to mitigate effect of domain shift. Test-Time-Training (Sun et al., 2020) introduces a self-supervised task along with that of classification during training and while testing, the model is adapted using just the self supervision objective. However, fully test time adaptation is done independent of the training phase, given a trained model and unlabeled test data. This was addressed in Tent (Wang et al., 2021), where they propose to update BN affine parameters to minimize the test entropy, enforcing confident model predictions. The above approaches were evaluated where the model was trained using a single source domain, and was tested on samples from a different domain. Only recently, a TTA approach for the DG setup was proposed, namely T3A (Iwasawa & Matsuo, 2021). It is an optimization free adaptation method, where the trained linear classifier is adjusted using the online available unlabelled test data. ## 3 Problem Definition Here, we first explain the task of domain generalization and then test-time adaptation. Domain Generalization (DG): The goal of DG is to train a model with labeled data from multiple domains, such that it can generalize well to unseen domains during testing. The training data comprises of multiple source domains, i.e., Dtrain = D1 ∪ D2 *. . .* ∪ Ddtr , where each source domain consists of labelled samples, which we denote as Dd = {(x d i , yd i ), i = 1 *. . . n*d}, d = 1*, . . . , d*tr. Here, nd denotes the number of samples in domain d and dtr denotes the number of training domains. The objective is to learn a model Fθ using D*train*, such that it can generalize well to an unseen test domain Dtest ∈ D/ *train*. Here, θ denotes the parameters of the trained model. Testing phase: For the standard DG task, the trained model Fθ is directly used for testing. Since the model is trained using data from multiple source domains to compute domain-invariant representations, it is expected to work well for unseen domains during testing. But the test data also contains rich information about the domain, which is usually not leveraged by these algorithms to further improve their performance. In general, during testing, the data is available in batches in an online manner, and thus this data can be used for TTA of the trained model Fθ to further improve the performance. ## 4 Baseline Approaches The performance of DG models on the test data depends significantly on the training process. If the training is not effective, thereby resulting in poor generalizability, TTA may help in improving the performance on the unseen domain test samples. In contrast, for well-trained DG models which generalize well to unseen domains, it is not obvious whether TTA can help to further improve their performance. Needless to say, the performance also depends on the backbone architecture. Based on this intuition, for this work, we choose two DG baselines which we discuss below. (1) Empirical risk minimization (ERM) (Vapnik, 1998): A simple, yet strong baseline in DG setup is to use the multiple source domain samples together to train the network Fθ by minimizing the following empirical risk: $${\mathcal{L}}_{E R M}={\frac{1}{d_{t r}}}\sum_{d=1}^{d_{t r}}{\frac{1}{n_{d}}}\sum_{i=1}^{n_{d}}{\mathcal{L}}_{C E}(x_{i}^{d},y_{i}^{d})$$ $$(1)$$ ) (1) Here, (x d i , yd i ) refers to the i th labelled sample from domain d and CE refers to the standard Cross-Entropy loss. We experiment with two different backbones to analyze their test-time adaptation performance in the DG scenario. (2) MixStyle (Zhou et al., 2021): As the second baseline, we choose the recent state-of-the-art DG approach, namely MixStyle. Here, pseudo-domains are synthesized by mixing feature statistics of two different instances. These label preserving feature perturbations help to regularize the CNN, thereby learning class discriminant, yet domain invariant features. The perturbed features are then used to minimize the CE loss as in eqn. (1). Generating pseudo-domains/styles helps to achieve excellent performance for unseen domain examples during testing. In this work, we develop a simple, yet effective framework for updating the DG model in an online fashion during test time. We analyze whether the proposed framework can improve upon DG models using different backbones as well as trained using different techniques. Here, all the baselines including MixStyle Zhou et al. (2021), use BN layers. Thus, this work will serve as a complementary analysis to the state-of-the-art T3A framework Iwasawa & Matsuo (2021), which analyses DG backbones without BN layers. ## 5 Proposed Method The proposed JumpStyle framework for test-time DG, has two main modules, (i) Effective initialization (Jump Start) and (ii) Consistency of style-aware augmentations for pseudo-labelling. It is built upon the successful TTA method Tent (Wang et al., 2021), which we describe below. Entropy Based Tent Framework: Tent is a fully test-time adaptation method designed to adapt any given off-the-shelf model using only the available test data, which can come from a distribution different from that of the training data. In general, during training, the BN layers estimate the channel-wise statistics µ, σ of the feature maps using the training data. While these statistics are relevant when the test samples are drawn from the same distribution as the training data, they are not optimal when there is a distribution shift during testing. This is because the feature maps would no longer be normalized to zero mean and unit variance, a phenomenon known as covariate shift. In Tent, the BN statistics of the trained model are replaced with that of the test data. Specifically, given features f at a certain layer, in general, BN normalizes the feature as ˆf = (f − µs)/σs and performs affine transformation as fBN = γ ˆf + β. In Tent, instead of the source data statistics {µs, σs}, the test data statistics {µt, σt} are used. Further, the BN affine parameters {*γ, β*} are finetuned to minimize the test prediction entropy (defined later). Before describing the proposed JumpStyle framework, we explain the reasons behind choosing Tent as the base approach in our work: (i) Tent can use any off-the-shelf trained model and does not assume any knowledge about the method of training; (ii) For the DG models with BN layers, the performance of Tent is comparable to the state-of-the-art T3A for some domains (Iwasawa & Matsuo, 2021); (iii) This will be a complementary analysis to that in T3A as our work focuses on backbones with BN layers and trained using different DG methods, whereas T3A mainly focuses on backbones without BN layers; (iv) Instead of proposing a completely different approach targeted towards test-time DG, we want to explore whether advances in the field of test-time adaptation can be made useful for related tasks. We now describe the two proposed modifications of the Entropy-based Tent framework for this application. Specifically, given a trained model Fθ parameterized by θ, our objective is to utilize the target samples xt of batch size n, to adapt the model. 1) Jump start initialization of the BN parameters: Tent (Wang et al., 2021) accounts for covariate shift by substituting the training BN statistics with those of the test data . But during online testing, the number of target samples available at any stage is usually quite less and variable, and thus may not be a good representative of the entire target distribution. Here, we propose a simple, yet effective correction of the test batch statistics by utilizing the training domain statistics as the prior to improve the performance under domain shift (Schneider et al., 2020). Since the quality of the estimated test domain statistics depends on the test batch size n, the source and target statistics are combined as follows $$\begin{array}{c}{{\bar{\mu}=\alpha(n)\mu_{s}+(1-\alpha(n))\mu_{t}}}\\ {{\bar{\sigma}^{2}=\alpha(n)\sigma_{s}^{2}+(1-\alpha(n))\sigma_{t}^{2}}}\end{array}$$ $$\left(2\right)$$ where µt and σ 2 t are the online estimated test distribution statistics, µs and σ 2 s are the source data statistics available as part of the given trained model. The weight α(n) is a function of batch size n and has a significant effect on the final performance. In Schneider et al. (2020), a method was proposed to compute this weight based on the batch size n and an additional hyper-parameter. Since the weight should ideally be a function of only the batch-size, in this work, we design α(n) to be: $$\alpha(n)=0.5(1+e^{-\kappa n});\quad\mathrm{where~}\kappa=0.05$$ −κn); where κ = 0.05 (3) The weight is designed such that it satisfies the following criteria: As the number of samples in the batch n decreases, the weight for the source statistics α(n) should increase. In the extreme case, when n = 0, α(n) = 1. But when n > 0, since the number of test samples available is still limited, the smallest value of α(n) is constrained to not fall below 0.5. The value of κ is obtained empirically, but the proposed weighting rule has the advantage that it only depends on the batch-size as desired. This weighting is used for all the experiments reported in this work. In addition to this initialization, after the data from a test-batch is passed through the model, its style-aware weak and strong augmentations are used to further update the BN affine parameters. 2) Pseudo-labelling based on consistency of style-augmented targets: During testing, as the test batches become available in an online manner, samples with confident predictions can be pseudo labelled and used to supervise the model for target data. Pseudo labels obtained with the criteria of consistent $\left(3\right)$. ![4_image_0.png](4_image_0.png) Figure 1: DG training (left) using Photo, Art-painting, Cartoon as source domains. TTA using JumpStyle (right) on test sample xt from test domain *Sketch*. Consistency across predictions of true sample pt and weak style augmentation ptw are used to pseudo label xt. BN affine parameters are updated to minimize the pseudo label and entropy loss. predictions across augmented versions of unlabelled data has shown remarkable success in semi-supervised learning (SSL) Sohn et al. (2020); Berthelot et al. (2019). Here, we explore whether pseudo-labelling based on suitable consistency condition aids test-time DG scenario, which to the best of our knowledge, has not been explored before. We propose to check consistency of style-augmented target samples when they are available, which is more suited for the DG task. The goal of DG is to generate domain (style used interchangeably)- invariant representations. Thus, the model should be able to consistently predict the same class label for a sample and its augmentations, with the same content, but different styles. During testing, given two target samples xi and xj , we create an augmented feature of xi using the style of xj . Let fi and fj denote their respective feature maps at a certain layer. The channel-wise means µ(fi), µ(fj ) and standard deviations σ(fi), σ(fj ) are representative of the image styles. In this layer, these feature statistics are mixed, thereby generating a pseudo-style, which is then applied to the style normalized feature of fi to obtain style augmented feature f SA i Zhou et al. (2021) $\mu_{mix}(f_{i};\lambda)=\lambda\mu(f_{i})+(1-\lambda)\mu(f_{j})$ $\sigma_{mix}(f_{i};\lambda)=\lambda\sigma(f_{i})+(1-\lambda)\sigma(f_{j})$ $f_{i}^{SA}=\sigma_{mix}(f_{i};\lambda)*\frac{f_{i}-\mu(f_{i})}{\sigma(f_{i})}+\mu_{mix}(f_{i};\lambda)$ $$\left(4\right)$$ $\left(5\right)$. where λ ∈ [0, 1] is the mixing coefficient. Features thus obtained preserve the semantic content of the input xi, while only the style is perturbed using that of the other image. Inspired by Sohn et al. (2020), we compute two types of style augmentations for each target sample, namely weak style augmentation and strong style augmentation as described next. Let F SA θ(; λ) denote the entire model including the feature extractor, classifier and the softmax layer with the style augmentations. Setting the mixing coefficient λ = 1 reduces the model F SA θ(; λ) to the original backbone Fθ. Given a test batch xt, the samples are randomly permuted within the batch to obtain x˜. The features of xt are perturbed by instance-wise mixing of styles from features of x˜ as described in eqn. (4). For a sample xt, we denote its prediction as pt, and those of its weak and strong augmentations as ptw and pts respectively. These are obtained as follows $$p_{t}=F_{\theta}^{S A}(x_{t};1);\quad p_{t w}=F_{\theta}^{S A}(x_{t};\lambda_{w});\quad p_{t s}=F_{\theta}^{S A}(x_{t};\lambda_{s})$$ θ(xt; λs) (5) To better utilise the target samples during test-time, we generate pseudo labels for the samples whose predictions are confident and robust against weak domain shifts. The pseudo labels for the test sample and | Backbone | Method | VLCS | PACS | OfficeHome | Terra | Average | |------------|-----------|----------|----------|--------------|----------|-----------| | ResNet-50 | ResNet-50 | 74.3±0.5 | 84.1±0.1 | 66.9±0.2 | 45.8±1.8 | 67.8 | | SHOT-IM | 61.5±1.7 | 84.6±0.3 | 68.0±0.0 | 33.8±0.3 | 62.0 | | | SHOT | 61.6±1.8 | 84.8±0.5 | 68.0±0.0 | 34.6±0.3 | 62.3 | | | PL | 63.4±1.8 | 80.1±3.5 | 61.3±1.5 | 36.8±4.4 | 60.4 | | | PL-C | 73.3±0.8 | 84.7±0.3 | 66.4±0.3 | 47.0±1.7 | 67.9 | | | Tent-Full | 75.4±0.6 | 87.0±0.2 | 66.9±0.2 | 42.6±0.8 | 68.0 | | | BN-Norm | 71.3±0.4 | 85.8±0.1 | 66.4±0.1 | 42.3±0.4 | 66.5 | | | Tent-C | 72.4±1.5 | 84.4±0.1 | 66.2±0.2 | 42.4±3.1 | 66.4 | | | Tent-BN | 65.6±1.4 | 84.9±0.0 | 67.7±0.2 | 42.7±0.5 | 65.2 | | | T3A | 76.0±0.3 | 85.1±0.2 | 68.2±0.1 | 44.6±0.9 | 68.5 | | | JumpStyle | 76.9±0.7 | 87.5±0.6 | 69.1±0.5 | 44.7±0.7 | 69.5 | | | ResNet-18 | ResNet-18 | 73.0±0.6 | 79.5±0.4 | 61.8±0.3 | 41.7±0.9 | 64.0 | | SHOT-IM | 61.6±0.3 | 82.1±0.3 | 62.5±0.3 | 32.8±0.4 | 59.8 | | | SHOT | 61.8±0.3 | 82.3±0.2 | 62.8±0.2 | 32.7±0.4 | 59.9 | | | PL | 67.0±0.6 | 72.9±1.0 | 56.3±2.5 | 35.4±1.7 | 57.9 | | | PL-C | 71.8±1.3 | 78.9±0.4 | 61.7±0.3 | 43.1±0.9 | 63.9 | | | Tent-Full | 72.3±0.3 | 83.9±0.3 | 62.7±0.2 | 36.9±0.3 | 64.0 | | | BN-Norm | 70.4±1.0 | 82.7±0.1 | 62.0±0.1 | 36.4±0.2 | 62.9 | | | Tent-C | 71.3±1.5 | 74.6±1.9 | 60.5±0.4 | 40.9±0.5 | 61.8 | | | Tent-BN | 64.7±0.7 | 81.1±0.2 | 62.5±0.3 | 36.4±0.9 | 61.2 | | | T3A | 74.5±0.9 | 81.4±0.2 | 63.2±0.4 | 39.5±0.3 | 64.6 | | | JumpStyle | 75.7±0.4 | 86.1±0.6 | 63.3±0.3 | 40.5±0.5 | 66.4 | | Table 1: Results with ERM approach using ResNet-50 and ResNet-18 backbones. its weak augmentation are obtained as yˆt = *argmax*(pt) and yˆtw = *argmax*(ptw) respectively. The pseudo label loss is then computed as $$\begin{array}{c c}{{{\mathcal{L}}_{p l}=\mathbb{E}_{x_{t}\in{\mathcal{S}}}[-\log p_{t s}({\hat{y}}_{t})];\qquad}}&{{{\mathcal{S}}=\{x_{t}|{\hat{y}}_{t}={\hat{y}}_{t w};m a x(p_{t})>\tau\}}}\end{array}$$ Inspired from Tent (Wang et al., 2021), we also use entropy loss to enforce confident predictions. In this work, we define this only for the strong style augmentations as follows: $$({\bar{0}})$$ $${\mathcal{L}}_{e n t}=-\frac{1}{n}\sum_{t=1}^{n}\sum_{c}p_{t s}(c)\log p_{t s}(c)\tag{1}$$ $$\left(7\right)$$ where c denotes the class index and n is the test batch size. Although inspired from the SSL approach Sohn et al. (2020), there are significant differences between the two approaches as: (i) The weak and strong style augmentations proposed in this work are better suited for the Domain Generalization objective as compared to the standard image augmentations as in Sohn et al. (2020) (details in experimental section). (ii) Unlike the semi-supervised approaches, where the whole network is trained/fine-tuned using the pseudo-labelling loss, here only the BN layers are updated. Final Test-time adaptation loss: The total loss for adaptation during test time is computed as a weighted combination of the pseudo-label loss and the entropy loss. The BN affine parameters, denoted by {*γ, β*} are updated in an online fashion each time a new batch is available, to minimize the following test time loss: Ltest = η ∗ Lpl + (1 − η) ∗ Lent (8) The parameter η balances the two losses, and is empirically set to 0.8 for all the experiments. $${\mathcal{L}}_{t e s t}=\eta*{\mathcal{L}}_{p l}+(1-\eta)*{\mathcal{L}}_{e n t}$$ $\left(8\right)$. | Method | PACS | VLCS | | | | | |--------------|---------------|---------------|--------------|---------------|---------------|----------| | Batch size=8 | Batch size=32 | Batch size=64 | Batch size=8 | Batch size=32 | Batch size=64 | | | MixStyle* | 82.4 | 82.4 | 82.4 | 75.7 | 75.7 | 75.7 | | Tent-Full | 81.2 ±0.9 | 85.9±0.7 | 86.7±0.8 | 69.2±0.6 | 71.4±0.4 | 71.9±0.4 | | T3A | 80.5±0.7 | 84.9±0.5 | 85.6±0.5 | 72.6±0.7 | 75±0.5 | 75.5±0.6 | | JumpStyle | 85.9±0.7 | 86.4±0.6 | 86.8±0.6 | 76.3±0.4 | 76.5±0.3 | 76.1±0.3 | | Method | VOC | LabelMe | Caltech | SUN09 | Average | |----------------|----------|-----------|-----------|----------|-----------| | Tent | 69.2±0.3 | 60.6±0.5 | 91.0±0.8 | 66.0±0.6 | 71.7 | | +Jump | 69.6±0.5 | 66.0±0.4 | 96.3±0.6 | 66.6±0.5 | 74.6 | | +Jump+FixMatch | 69.8±0.5 | 64.8±0.3 | 95.8±0.3 | 67.7±0.4 | 74.5 | | +JumpStyle | 71.3±0.4 | 66.5±0.5 | 96.5±0.7 | 68.5±0.7 | 75.7 | Table 2: Results on PACS and VLCS datasets using MixStyle trained DG model. ∗ denotes results obtained using the official MixStyle Zhou et al. (2021) implementation. Table 3: Ablation study on VLCS dataset using ResNet-18 backbone. ## 6 Experimental Evaluation Here, we describe the experiments done to evaluate the effectiveness of the proposed framework. We perform experiments on four benchmark DG datasets demonstrating different types of domain shifts. **PACS** (Li et al., 2017) consists of four domains, Photo, Art painting, Cartoon and Sketch, where the domain shift is particularly due to image styles. It has 9, 991 images belonging to 7 classes. **VLCS** (Fang et al., 2013) is a collection of four datasets, Caltech101 (Fei-Fei et al., 2006), LabelMe (Russell et al., 2008), SUN09 (Choi et al., 2010), VOC2007 (Everingham et al., 2010) with 10, 729 samples from 5 classes. **Office-Home** (Venkateswara et al., 2017) consists of four domains, Art, Clipart, Product, Real-world, with 15, 500 images of 65 objects in office and home environments. **Terra-Incognita** (Beery et al., 2018) contains photos of wild animals. Following Gulrajani & Lopez-Paz (2021); Iwasawa & Matsuo (2021), we use the images captured at locations L100, L46, L43, L38 as the four domains. This contains 24788 examples of 10 different classes. ## 6.1 Tta-Baselines And Implementation Details: We compare the proposed JumpStyle with the following test-time adaptation baselines: 1) **SHOTIM** Liang et al. (2020): updates the feature extractor to minimize entropy and the diversity regularizer; 2) SHOT Liang et al. (2020): uses pseudo-label loss along with information maximization as in (1); 3) PL (Pseudo labelling) Lee (2013): updates the entire network by minimizing the cross-entropy between the prediction and pseudo labels; 4) **PL-C** Lee (2013): minimizes the pseudo-label loss as above and updates only the linear classifier; 5) **Tent-Full** Wang et al. (2021): is the original method, where the BN statistics and transformations are updated; 6) **BN-Norm** Schneider et al. (2020): only the BN statistics are updated while keeping the affine parameters fixed; 7) **Tent-C** Wang et al. (2021): updates only the classifier to reduce the prediction entropy; 8) **Tent-BN** Wang et al. (2021): adds one BN layer just before the linear classifier and then modulates its affine parameters. Implementation Details: Following Iwasawa & Matsuo (2021), we split the data in each domain into a training (80%) and validation (20%) split. We follow the leave one out protocol for training and evaluation. In each experiment, three domains act as the source whose training splits are used to train the model, while the validation splits are used to select the learning rate. Further, we perform test-time adaptation on the target domain and report the average accuracy over all the domains in the dataset. The parameters for a TTA framework has to be selected prior to deployment, before one has access to test data. Following T3A (Iwasawa & Matsuo, 2021), we set the batch size to 32 and use training domain validation ![7_image_0.png](7_image_0.png) Table 4: Few examples from PACS and VLCS datasets where JumpStyle predicted the correct class (green box), and where it failed (red box), when compared with Tent (cyan) and T3A (yellow). The correct and incorrect predictions for T3A and Tent are marked with ✓and ✗respectively. set to tune the hyperparameters for fair comparison. The learning rates used were 10−4for PACS, VLCS, OfficeHome and 10−5for Terra Incognita for both ResNet-18 and ResNet-50 backbones. We set α(n) to 0.6 which is computed using eqn.( 3) for n=32 and set η to 0.8. We set λw and λs in eqn. (5) to 0.9 and 0.75 respectively. The parameter κ in eqn. (3) is fixed to 0.05 for all the experiments. ## 6.2 Results With Dg Baselines: (1) Empirical Risk Minimization: First, we test the proposed TTA framework with the ERM approach for DG, where labelled samples from multiple source domains are collectively used to train the network using CE loss. The results of the proposed framework and comparisons with the other approaches using ResNet-50 and ResNet-18 backbones are shown in Table 1 for the four datasets. The results of previous methods are directly taken from Iwasawa & Matsuo (2021). We observe that the proposed JumpStyle outperforms the other approaches for three of the four datasets, and also on an average. This explains the generalization ability of the proposed approach across different datasets and backbones. (2) Mixstyle: Here, we analyze whether TTA can also benefit from the state-of-the-art DG approaches, which have been designed specifically to obtain domain invariant representations. Since online TTA depends upon the test batch size, here, we also experiment with different batch sizes to analyze its effect on the final performance. We report the results obtained using MixStyle with ResNet-18 backbone and its performance on doing TTA using Tent-Full, T3A and JumpStyle in Table 2. From these results on PACS and VLCS datasets, we observe the following: (1) The performance of Tent-Full and T3A improves significantly for higher batch sizes. However, their performance is not satisfactory for smaller batch sizes. (2) The proposed framework outperforms all the previous approaches irrespective of the batch size. Table 4 shows the predictions of Tent, T3A and JumpStyle for a few examples from PACS and VLCS datasets. The proposed approach is indeed able to correct several samples which were wrongly predicted by previous TTA approaches. ## 6.3 Hyperparameter Selection As mentioned in Section 6.1, we use the training domains validation set to determine the hyperparameters η, α and the use of MixStyle layers. | η | VOC | LabelMe | Caltech | SUN09 | Average | |-----|----------|-----------|-----------|----------|-----------| | 0.2 | 68.3±0.7 | 66.0±0.6 | 96.3±0.3 | 63.8±1.2 | 73.6 | | 0.5 | 70.0±0.8 | 66.2±0.5 | 96.5±0.3 | 65.4±1.4 | 74.5 | | 0.8 | 71.3±0.4 | 66.5±0.5 | 96.5±0.4 | 68.5±0.6 | 75.7 | Table 5: Performance with varying η on VLCS using ResNet-18. | α(n) | VOC | LabelMe | Caltech | SUN09 | Average | |------------|----------|-----------|-----------|----------|-----------| | 0.4 | 70.7±0.4 | 64.8±0.7 | 96.5±0.3 | 67.3±0.4 | 74.8 | | 0.5 | 71.0±0.5 | 65.9±0.5 | 95.9±0.4 | 67.7±0.4 | 74.8 | | 0.7 | 71.0±0.4 | 66.3±0.5 | 96.5±0.4 | 68.0±0.6 | 75.4 | | Ours (0.6) | 71.3±0.4 | 66.5±0.5 | 96.5±0.4 | 68.5±0.6 | 75.7 | 1) We observed that η = 0.8 gave the best TTA performance on training domains validation set. For further insight, we vary η in JumpStyle and report the results in Table 5. Higher η implies higher weight for pseudo label loss when compared to entropy loss. Thus, consistency checked pseudo-labels provide stronger supervision and help to adapt to the target domain better, leading to improved performance. | layers | VOC | LabelMe | Caltech | SUN09 | Average | |----------|-----------|-----------|-----------|----------|-----------| | 1 | 69.3±0.8 | 66.3±0.7 | 96.5±0.2 | 63.7±1.6 | 74.0 | | 1, 2 | 70.33±0.7 | 66.4±0.5 | 96.3±0.3 | 65.8±1 | 74.7 | | 1,2,3 | 71.3±0.4 | 66.5±0.5 | 96.5±0.4 | 68.5±0.6 | 75.7 | Table 6: Performance with varying α on VLCS using ResNet-18. 2) We study the choice of α to mix the source and target BN statistics. As the batch size can be varying during test-time and the quality of test statistics depends on its(higher batch size gives better estimates), we perform experiments setting α to constants 0.4, 0.5, 0.7 and compare the results with the proposed choice of α(n) using eqn.(3). Table 7: Performance with different layers for augmentation. 3)Based on the analysis presented in MixStyle (Zhou et al., 2021) and our experiments (Table 7), we insert the proposed Style Augmentation layers after the first three ResNet blocks as the early layers contain style information. Results in Table7 show that inserting these layers after each of the three ResNet blocks performs the best. ## 7 Conclusion In this paper, we present a novel framework termed *JumpStyle* for test-time adaptation in domain generalization setup. Firstly, we propose an effective scheme to correct the BatchNorm statistics based on the number of test samples available online. Further, we propose a test-time consistency regularization method to ensure consistent predictions across perturbed versions of test samples. Unlike semi/unsupervised representation learning methods where augmentations in image space are observed to work effectively, our analysis shows that simple image augmentations are ineffective in the low-data test time scenario. We propose to augment the test samples in feature space instead. In specific, we use MixStyle which is a label preserving feature perturbation module to obtain weak and strong augmentations, across which we enforce consistent predictions. Extensive experiments performed using backbones with different representation ability, training methods and augmentations demonstrate the effectiveness of the proposed framework. 9 ## References M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz. Invariant risk minimization. *arXiv*, 2019. Y. Balaji, S. Sankaranarayanan, and R. Chellappa. Metareg: Towards domain generalization using metaregularization. *NeurIPS*, 2018. S. Beery, G. Van Horn, and P. Perona. Recognition in terra incognita. In *ECCV*, 2018. D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A Raffel. Mixmatch: A holistic approach to semi-supervised learning. *NeurIPS*, 2019. F. M. Carlucci, A. D'Innocente, S. Bucci, B. Caputo, and T. Tommasi. Domain generalization by solving jigsaw puzzles. In *CVPR*, 2019. L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. *TPAMI*, 2017. M. J. Choi, J. J. Lim, A. Torralba, and A. S. Willsky. Exploiting hierarchical context on a large database of object categories. In *CVPR*, 2010. J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Mark Everingham, Luc Gool, Christopher K. Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *IJCV*, 2010. Chen Fang, Ye Xu, and Daniel N. Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In *ICCV*, 2013. Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. *TPAMI*, 2006. Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *ICLR*, 2021. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In *ICCV*, 2017. Yusuke Iwasawa and Yutaka Matsuo. Test-time classifier adjustment module for model-agnostic domain generalization. *NeurIPS*, 2021. Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. Selfreg: Self-supervised contrastive regularization for domain generalization. In *ICCV*, 2021. Dong-Hyun Lee. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. *ICML*, 2013. Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Deeper, broader and artier domain generalization. In *ICCV*, 2017. Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In *AAAI*, 2018a. Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In *CVPR*, 2018b. Pan Li, Da Li, Wei Li, Shaogang Gong, Yanwei Fu, and Timothy M. Hospedales. A simple feature augmentation for domain generalization. In *ICCV*, 2021. Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In *ICML*, 2020. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), *ECCV*, 2014. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. *NeurIPS*, 2015. Alexander Robey, George J. Pappas, and Hamed Hassani. Model-based domain generalization. In *NeurIPS*, 2021. Bryan Russell, Antonio Torralba, Kevin Murphy, and William Freeman. Labelme: A database and web-based tool for image annotation. *IJCV*, 2008. Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. *NeurIPS*, 2020. Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. *NeurIPS*, 2020. Y. Sun, X. Wang, Z. Liu, J. Miller, A. A. Efros, and M. Hardt. Test-time training with self-supervision for generalization under distribution shifts. In *ICML*, 2020. Vladimir N. Vapnik. Statistical learning theory. *Wiley*, 1998. H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In *CVPR*, 2017. D. Wang, E. Shelhamer, S. Liu, B. Olshausen, and T. Darrell. Tent: Fully test-time adaptation by entropy minimization. In *ICLR*, 2021. K. Zhou, Y. Yang, Y. Qiao, and T. Xiang. Domain generalization with mixstyle. In *ICLR*, 2021.
Review 1: Summary: This paper deals with the test-time adaptation task in a domain generalization setting. That is, adapting a DG model using online test data to further improve the test-time adaptation/generalization performance. This paper proposes using a combination of source statistics and test statistics as new BN statistics. Moreover, it adopts style-aware augmentation based pseudo-labeling tricks to refine the affine transformation parameters. Experiments on various domain generalization benchmarks show the effectiveness of proposed method. Strengths and Weaknesses: Pros: -The paper is well-written and easy to read. -The method is simple and reasonable. -The experiment results are good. Cons: -The contribution seems limited. This paper seems using a combination of previous tricks (although they may not be originally designed for test-time domain adaptation/generalization) to deal with the test-time adaptation/generalization problem. I didn't see any novel or significant contributions made by this paper. -JumpStyle only updates the affine transformation parameters of BN layers. How about updating all the network parameters? -Table 4 showed some success cases and some failure cases. Are there any patterns exhibited by different methods? Do the authors have any insights? Requested Changes: See the weakness part. Broader Impact Concerns: I think this work may not raise serious ethical concerns. ================================================== Review 2: Summary: This paper proposes a jumpstyle test-time domain adaptation method. It proposes to reestimate batchnorm statistics based on the number of test samples available online, and reformulate mixup in the feature space to obtain local manifolds with different level of augmentations. Results on small scale/medium datasets shows its effectiveness. Strengths and Weaknesses: +: 1. Correcting test batch statistics by substituting the training domain information as a prior to improve the domain shift is technically sound. 2. The proposed method is simple and can be easily integrated into different domain generalization pipelines. -: "Generating pseudo-domains/styles helps to achieve excellent performance for unseen domain examples during testing." this somehow sounds not that plausible, in Eq. 5 and Eq. 4 how do you guarantee that the pseudo-domains can well aligned with the unseen test data? I am actually wondering about Eq. 1 as mentioned in the paper: How to generalize well if \mu_{s} and \sigma_{s} is quite different from \mu_{t} and \sigma_{t}? Tent sounds like pretraining the weights of BN modules with the training data and re-estimated with test data. More general, what's the key difference compared to simply pretraining on more diverse upstream data using all learnable layers and then conduct few-shot finetuning with some adaptable modules? Literature in this field also shows much stronger results compared to results in Table 1. Requested Changes: 1. Results on larger datasets following https://github.com/facebookresearch/DomainBed. 2. Give a clear explanation how jumpstyle can achieve excellent performance for unseen domain examples during testing using Eq. 2- 5. Broader Impact Concerns: This paper does not introduce any concerns on the ethical implications. Author[s] are still encouraged to add such a section to discuss the paper. ================================================== Review 3: Summary: The paper proposes a test-time adaptation approach for domain generalization, which operates in an online manner and adapts to batches of testing examples. The proposed approach features two main components: (i) jump starting batch normalization by averaging the batch norm stats between training and test data depending on the batch size, (ii) style augmentations with pseudo-label loss and entropy minimization. The paper is evaluated on standard domain generalization datasets and is shown to outperform various baseline approaches. Strengths and Weaknesses: Strengths 1. The area of test test domain adaptation in the online setting is an interesting area of research and the proposed batch size aware batch normalization is shown to improve the model’s robustness under low batch sizes. 2. The proposed approach demonstrates sota or competitive performance against various baseline approaches. 3. The proposed approach can be applied to different domain generalization algorithms such as ERM and Tent. Weaknesses 1. Unclear problem formulation regarding the online learning setup: it is unclear what motivates the online setup of domain generalization, what are the underlying assumptions, and how it is validated in the experiments. The proposed approach is only evaluated on a single held-out test domain with predefined domain boundary, whereas in the typical online learning setup, the test data can be dynamic. 2. Unjustified complexity in loss function: the proposed loss function is a combination of various sota losses in semi-supervised learning, domain adaptation, test-time adaptation and domain generalization. Some of the loss details are not justified in the experiments: why use weak augmentation to obtain pseudo-label and strong autmentation for entropy minimization? If the weak augmentation is only taking examples with high label agreement, wouldn’t it somewhat equivalent to entropy minimization, i.e., making the prediction more confident if it is already robust? 3. Claims on style-augmentation is not supported by the experiments: if a single held-out domain is used in testing, it wouldn’t contain styles from other domains. It is unclear why style-augmentation is better than other types of input augmentation. Requested Changes: Questions: 1. [critical] Please elaborate the motivation of the online setup, assumptions on the testing distribution, and how it fits in the training procedure and experimental framework. Is there an assumption that the test data is from a single domain with predefined domain boundary? If so, why not use running averages as in batch norm training procedure to further improve the estimation of BN stats? 2. [critical] Ablation studies, or other types of evidence, are needed regarding why using weak augmentation to obtain pseudo-label and strong autmentation for entropy minimization? If the weak augmentation is only taking examples with high label agreement, wouldn’t it somewhat equivalent to entropy minimization, i.e., making the prediction more confident if it is already robust? 3. [non-critical] Ablation studies are needed to justify the need for style-augmentation over other types of input augmentation. The authors should also clarify the difference between styles and domains, and how style augmentation within the same test domain could be helpful. 4. [critical] In Sec. 6.3 (3), it is unclear why alpha is a constant hyper-parameter instead of a function over batch size. The authors should also clarify how alpha is configured in the ablation with different batch sizes in Table 2. Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Reject Comment: This paper proposed a test-time adaptation algorithm for domain generalisation by improving batch normalisation and introducing style augmentations with pseudo-label. All three reviewers were leaning reject, due to their shared concerns about the audiences' potentially limited interest in the proposed minor technical changes and the insufficient experimental results. it might be a limitation for the authors to only focus on batch normalisation and the Tent is the only baseline to try out the effectiveness of the proposed changes. It would be more interesting if the authors can enhance the techniques accordingly to investigate other normalisations (e.g., layer or instance normalisation) and include more baseline algorithms/backbones to evaluate the generic enough changes. The authors thus need a significant revision before considering a resubmission. ==================================================
# Greedy Bayesian Posterior Approximation With Deep Ensembles Aleksei Tiulpin∗aleksei.tiulpin@oulu.fi Research Unit of Medical Imaging, Physics and Technology Faculty of Medicine, University of Oulu, Finland Matthew B. Blaschko matthew.blaschko@esat.kueleuven.be Center for Processing Speech and Images Department of Electrical Engineering KU Leuven, Belgium Reviewed on OpenReview: *https: // openreview. net/ forum? id= P1DuPJzVTN* ## Abstract Ensembles of independently trained neural networks are a state-of-the-art approach to estimate predictive uncertainty in Deep Learning, and can be interpreted as an approximation of the posterior distribution via a mixture of delta functions. The training of ensembles relies on non-convexity of the loss landscape and random initialization of their individual members, making the resulting posterior approximation uncontrolled. This paper proposes a novel and principled method to tackle this limitation, minimizing an f-divergence between the true posterior and a kernel density estimator (KDE) in a function space. We analyze this objective from a combinatorial point of view, and show that it is submodular with respect to mixture components for any f. Subsequently, we consider the problem of greedy ensemble construction. From the marginal gain on the negative f-divergence, which quantifies an improvement in posterior approximation yielded by adding a new component into the KDE, we derive a novel diversity term for ensemble methods. The performance of our approach is demonstrated on computer vision out-of-distribution detection benchmarks in a range of architectures trained on multiple datasets. The source code of our method is made publicly available at https://github.com/Oulu-IMEDS/greedy_ensembles_training. ## 1 Introduction Estimation of predictive uncertainty is one of the most important challenges to solve in Deep Learning (DL). Applications in finance, medicine and self-driving cars are examples where reliable uncertainty estimation may help to avoid substantial financial losses, improve patient outcomes, or prevent fatal accidents (Gal, 2016). However, to date, despite rapid progress, there is a lack of principled methods that reliably estimate the predictive uncertainty of deep neural networks (DNNs). Bayesian approaches to Machine Learning generally offer great benefits, which provide out-of-the-box features such as model selection (Immer et al., 2021), uncertainty quantification Wilson & Izmailov (2020), and incorporation of prior knowledge into the models (Fortuin, 2021). From a Bayesian standpoint, for a model z trained on some data D, there exist four main components: posterior p(z | D), prior p(z), likelihood p(D | z) and evidence p(D). In this work we focus on the applications to uncertainty estimation, and thus are interested in a posterior distribution p(z | D), assuming that it is multimodal (Wilson & Izmailov, 2020). This assumption is natural in the case of overparameterized models, such as DNNs. ∗A part of this work was done at KU Leuven, and a part at Aalto University, Finland. Numerous attempts have been made to develop Bayesian techniques for posterior approximation and uncertainty estimation in DL (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017; Ciosek et al., 2019; Maddox et al., 2019; Izmailov et al., 2020; Van Amersfoort et al., 2020; Wenzel et al., 2020b; He et al., 2020; Wilson & Izmailov, 2020). One of the most practical and empirically best-performing approaches is based on training a series of independent DNNs (Lakshminarayanan et al., 2017; Wilson & Izmailov, 2020; Ashukha et al., 2020; Lu et al., 2020; Wenzel et al., 2020b). The main method in this category, *Deep Ensembles* (DE) (Lakshminarayanan et al., 2017), is used as a reference approach in the context of this paper. Recent studies, e.g. Wilson & Izmailov (2020) interpret ensembles as an approximation of predictive posterior. While this interpretation is correct from a Bayesian point of view, obtaining individual ensemble members via maximum a posteriori probability (MAP) estimation, as e.g. done in DE (Lakshminarayanan et al., 2017), may not lead to obtaining good coverage of the full support of the posterior distribution, and has arbitrarily bad approximation guarantees. For example, the resulting approximation can be poor in the case when the true posterior distribution is unimodal, skewed and long-tailed. In this work, we argue that enforcing coverage of posterior (i.e. ensemble diversity) is non-trivial, and needs a specialized principled approach. We highlight this graphically in Figure 1. ![1_image_0.png](1_image_0.png) Figure 1: Illustration of how our method (c) approximates a 1D multimodal distribution p(z) compared to a randomization-based mode picking (a), and naïve diversity training (b). Blue shows the true distribution and red - approximations. This figure can be reproduced using the provided source code. Another important line of work in modern Bayesian DL (BDL) is a paradigm of performing Bayesian inference in the weight space. While distributions over weights induce distributions over functions (Wilson & Izmailov, 2020), it is rather unclear what the properties of such functions are, and whether Bayesian posteriors obtained in the weight space yield good quality approximations in the function space. For example, it is known that diverse weights do not necessary yield diverse functions, thus sampling from weight-based posteriors may yield poor quality uncertainty estimation, e.g. in detecting out-of-distribution (OOD) data (Garipov et al., 2018; Hafner et al., 2020). Recent studies (Wang et al., 2019; D'Angelo & Fortuin, 2021) show the promise of particle optimization variational inference (POVI) done in the function space, however, the performance of those methods is not state-of-the-art due to the use of BNN weight priors. Specifically, in the BNN literature, enforcing prior only over weights and avoiding training techniques like batch-normalization Ioffe & Szegedy (2015) or mixup Zhang et al. (2018) is a de-facto standard, as these techniques have no Bayesian interpretation. In practice, a weight decay prior combined with batch normalization Ioffe & Szegedy (2015), data augmentation and dropout Gal & Ghahramani (2016) are often employed due to empirical improvement, and theoretical guarantees are traded for practical performance. Furthermore, particle-based function space VI requires training an ensemble of BNNs simultaneously, which is not only difficult to implement, but also requires extensive resources for parallelization. Summary of the contributions. In this paper, we propose *a novel and principled* methodology for approximate function space posterior inference for DNNs. Contrary to the mainstream approach in BDL, which is based on defining a posterior distribution over the model parameters (Maddox et al., 2019), we take a functional view, which allows us to treat the problem of training ensembles as optimization over sets. Namely, selecting a set of functions to approximate the Bayesian posterior is fundamentally similar to problems such as facility location and set cover, which are frequently solved using submodular optimization (Krause et al., 2008). Specifically, our contributions are: 1. We show that fitting a kernel density estimator to a distribution using an f-divergence is a cardinalityfixed non-monotone submodular maximization problem. 2. Inspired by the Random Greedy algorithm for submodular maximization (Buchbinder et al., 2014), we design a new method for the function space Bayesian posterior approximation via greedy training of ensembles with a justified coverage-promoting diversity term. 3. We demonstrate the effectiveness and competitiveness of our approach compared to DE in the OOD detection task on MNIST, CIFAR, and SVHN, benchmarks on different architectures and ensemble sizes. Furthermore, we show that our method can use state-of-the-art training techniques, compared to the existing Bayesian approaches, and yields state-of-the-art performance. ## 2 Preliminaries 2.1 Problem Statement Consider an ensemble to be parameterized by a set of functions Z = {zm}M m=1 ⊂ V ⊂ F, where V is a ground set, F is a class of continuous functions, and zm : R d → R c, with d the dimensionality of the input data, and c the dimensionality of the output. When training ensembles, we generally want to solve the following optimization problem: $$\operatorname*{min}_{Z\subset V,|Z|=M}{\mathcal{R}}(Z)-\Omega_{\lambda_{M}}(Z),$$ $$(1)$$ R(Z) − ΩλM (Z), (1) where R(Z) = 1N PN i=1 ` 1 M PM m=1 zm(xi), yi is the empirical risk of the ensemble, ` : *Y × Y →* R+ is a loss function, D = {xi, yi} N i=1 is a training dataset of size N, and ΩλM (Z) is some diversity-promoting term, with diversity regularization strength λM. Empirical observations in the earlier works on ensembles (Lakshminarayanan et al., 2017; Wilson & Izmailov, 2020; Fort et al., 2019) have shown that one can simply ignore ΩλM (Z) during optimization, and rely on non-convexity of the loss landscape, minimizing the risks of individual ensemble members. From a variational inference (VI) perspective (Zhang et al., 2019), this can be seen as a *mode-seeking* method, that is, the resulting posterior approximation method aims to put mass at the true posterior modes. Notably, it has been shown experimentally that every ensemble member may discover different modes of the posterior distribution in the function space p(z|D) Fort et al. (2019). Approximation of p(z|D) is the ultimate aim of this paper, and we argue that randomization-based modeseeking is insufficient to obtain a good quality approximation of p(z|D), as this procedure does not maximize the coverage of the support of the posterior. In contrast, we aim to find an ΩλM (Z) such that the posterior coverage is also maximized. Taking a VI perspective again (Zhang et al., 2019), ΩλM (Z) needs to enforce that minZ∈V,|Z|=M R(Z) − ΩλM (Z) has also *mean-seeking* behavior. That is, the approximation method should aim to cover as much of the true posterior support as possible while still discovering the high density modes. ## 2.2 A Combinatorial View Of Ensemble Construction Having now defined the main criteria for (1), we highlight that the problem of constructing an ensemble can be seen from a combinatorial point of view. We therefore treat ensemble construction as subset selection from some ground set of functions, and introduce the main notions of submodular analysis, a powerful tool that enables the analysis of the optimization of set functions (Bach, 2013; Fujishige, 2005). Definition 1 (Submodularity). *A set function* g : 2V → R, for the power set of a base set V *, is submodular* if for all A ⊆ B ⊂ V and x ∈ V \ B $$g(A\cup\{x\})-g(A)\geq g(B\cup\{x\})-g(B).$$ g(A ∪ {x}) − g(A) ≥ g(B ∪ {x}) − g(B). (2) Definition 2 (Supermodularity and modularity). A set function is called supermodular if its negative is submodular, and modular if it is both submodular and supermodular. Consider now problem (1). Assuming that the loss function ` is convex, we can derive an upper-bound on the risk R(Z) using Jensen's inequality, and obtain a method, which generalizes DE $$\operatorname*{min}_{Z\subset V,|Z|=M}\frac{1}{M}\sum_{m=1}^{M}\mathcal{R}(z_{m})-\Omega_{\lambda_{M}}(Z),$$ $$\left({\boldsymbol{3}}\right)$$ where V is a ground set. In the context of neural networks, it is reasonable to consider the ground set V containing all possible neural networks with a specific architecture and realizable in a computer. If M is fixed during optimization, 1M PM m=1 R(zm) contributes a positive modular term to the overall objective. Adding a positive modular function to any set function does not change its submodularity or supermodularity, thus we focus on ΩλM (Z). A trivial approach would be to enforce pair-wise diversity by computing a norm of the pairwise differences between functions, i.e. setting ΩλM (Z) = λMPi6=j kzi − zjk 2∗ . However, this is a cardinality-fixed submodular *minimization* problem. It is known that it is strongly NP-hard, i.e. there exists no general polynomial time approximation algorithm for it (Svitkina & Fleischer, 2011). The poor quality approximation of this approach is highlighted in Figure 1. We therefore conclude that the choice of ΩλM (Z) has a direct impact on the approximabilty of the objective. ## 3 Submodular Analysis Of F**-Divergences** 3.1 F**-Divergences Are Supermodular Functions** Main result. We now consider the problem of approximating a Bayesian posterior via minimization of an f-divergence. Here, we specifically aim our optimization procedure to have both mode and mean-seeking behaviors, i.e. cover the posterior distribution as much as possible, ending up in its mode. We furthermore aim to obtain a polynomial time algorithm that yields a good quality approximation guarantee. In this paper, we leverage classic definitions of approximation algorithm and approximation guarantees. Definition 3 (Approximation algorithm and guarantees (Williamson & Shmoys, 2011)). A γ-approximation algorithm for an optimization problem is a polynomial-time algorithm that for all instances of the problem produces a solution whose value is within a factor γ of the optimal solution. γ in this case is called approximation guarantee. Let us now formally introduce f-divergences. Definition 4 (f-divergence). Let f : R + → R *be a convex function such that* f(1) = 0, Pz and Qz be distributions on a measurable space (Ω, Z) admitting densities p(z) and q(z) *with respect to a base measure* dz. If Pz is absolutely continuous with respect to Qz, f-divergence between Pz and Qz *is defined as* $$D_{f}(P_{z}||Q_{z})=\int_{\mathcal{Z}}f\left({\frac{p(z)}{q(z)}}\right)q(z)d z.$$ Consider some density p(z) over continuous functions. We define qM(z) = 1M PM m=1 K(*z, z*m), where Km(z) := K(*z, z*m) is a kernel centered at zm used in the density estimation of p(z). In addition, we simplify our notation, always assuming the existence of a measurable space (Ω, Z) and the base measure dz. Theorem 1. Any f*-divergence* $$D_{f}(p||q_{M})=\int f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{M}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{M}K_{m}(z)d z$$ $$\mathbf{\Sigma}$$ $$\quad(5)$$ between a distribution p(z) and a normalized mixture of M *kernels with equal weights is supermodualar in a* cardinality-fixed setting, assuming that maxqM Df (p(z)||qM(z)) < ∞. Proof. The proof is shown in Appendix A.1. For some fixed p(z), minimization of (5) with respect to qM(z) is equivalent to a cardinality-constrained maximization of a non-monotone submodular function of Z = {z1*, . . . , z*M}. Approximation guarantees for problems of this form are given for non-negative submodular functions (Buchbinder et al., 2014). One can convert (5) to a non-negative function by defining: $$F(Z):=-D_{f}(p||q_{M})+C,$$ $$(6)$$ F(Z) := −Df (p||qM) + C, (6) where C = maxZ⊂V Df (p||qM) is a pre-defined constant, which is important for understanding approximation guarantees, but does not need to be computed in practice. After the described transformation, which leads to (6), we obtain F(Z), a *non-negative non-monotone* submodular function. Approximation guarantees for f**-divergences.** In the context of submodular functions, there exists an inapproximability result, obtained in (Gharan & Vondrák, 2011), which states that no general polynomial time algorithm with guarantees better than 0.491 exists to solve a submodular maximization problem with constrained cardinality. We note that in practice, however, it might still be possible to obtain a better approximation factor than 0.491 for some specific types of divergences, as these guarantees are defined for all instances of the optimization problem (Definition 3). Let us consider the approximation guarantees for (6), denoting by q ∗ M the optimal solution, and by qˆM a solution found by some algorithm. For an approximation factor γ, we have $$-D_{f}(p||\hat{q}_{M})+C\geq\gamma(-D_{f}(p||q_{M}^{*})+C).$$ $$\left(7\right)$$ $$({\boldsymbol{\delta}})$$ M) + C). (7) Simple algebra shows that this implies $$D_{f}(p||\hat{q}_{M})\leq\gamma\operatorname*{min}_{q_{M}\in A}D_{f}(p||q_{M})+(1-\gamma)\operatorname*{max}_{q_{M}\in A}D_{f}(p||q_{M}),$$ Df (p||qM), (8) where A is a set of possible approximating mixture distributions of size M constructed from the ground set. The derived result indicates that the upper bound on the approximate solution found by minimizing an f-divergence can be substantially dominated by (1 − γ) maxqM∈A Df (p||qM) if the ground set V is chosen poorly. ## 3.2 Greedy Minimization Of F**-Divergences** Random greedy algorithm. Although submodular optimization has natural parallel extensions and associated approximation guarantees, due to the simplicity of presentation, we focus in this paper on forward greedy selection and use Algorithm 1 for optimizing submodular functions. This algorithm has approximation guarantee of ≈ 1/e in general (Buchbinder et al., 2014). The only required step for this greedy algorithm is a computation of the marginal gain on the objective function F(Z), i.e. ∆(zk|Z) = F(Z ∪ {zk}) − F(Z). Algorithm 1 Random Greedy algorithm 1: **Input:** V - Ground set 2: **Input:** F - Arbitrary submodular function 3: **Input:** M - Cardinality of the solution 4: Z ← ∅ 5: for m = 1 to M do 6: R ← arg maxT ⊂V \Z:|T|=MPz 0∈T ∆(z 0|Z) 7: ui ← Uniform(R) 8: Z ← Z ∪ {ui} 9: **end for** 10: **return** Z Marginal gain. At each step of a greedy algorithm, a marginal gain ∆(zk|Z) = F(Z ∪ {zk}) − F(Z) of adding a new element zk to an existing mixture Pk−1 j=1 Kj (z) is maximized. For f-divergences, we thus formulate the following proposition: Proposition 1. *Consider* C = max Df (p||qM), where Df (p||qM) is an arbitrary f*-divergence between some* distribution p(z) *and a mixture of kernels* qM(z) = 1M PM j=1 Kj (z), and Df (p||qM) < ∞. Then, maximization of the marginal gain for the set function $$F(Z)=-\int f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{M}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{M}K_{m}(z)d z+C,$$ $$({\mathfrak{g}})$$ $$(10)$$ at step k *of a greedy algorithm corresponds to* $$\arg\operatorname*{max}_{z_{k}}\Delta(z_{k}|Z)=\arg\operatorname*{min}_{z_{k}}\mathbb{E}_{z\sim K_{k}(z)}f\left({\frac{p(z)}{{\frac{1}{M}}\sum_{j=1}^{k}K_{j}(z)}}\right).$$ Proof. The proof is shown in Appendix A.2. The case of reverse KL divergence. Having mean-seeking behavior is useful to fit the kernel density estimator using *forward* divergences. However, if one wants to optimize marginal gains, they need to have a mode-seeking behavior, which is achieved via optimizing *reverse* divergences (Zhang et al., 2019). If we consider the generator for the reverse KL-divergence, f(x) = − log x, the minimization becomes $$\operatorname*{min}_{z_{k}}\mathbb{E}_{z\sim K_{k}(z)}-\log p(z)+\log\left({\frac{1}{M}}\sum_{j=1}^{k}K_{j}(z)\right).$$ $\square$ $$(11)$$ $$\left(12\right)$$ We note that it is costly or even intractable to compute the expectation in (11), and we thus resort to the mean-field assumption. Specifically, $$p(z)=p(z_{1},\ldots,z_{M})=\prod_{m=1}^{M}p(z_{m}),$$ where zm are variables that partition p(z). In the case of arbitrary multimodal distributions with finite number of modes, such an assumption is well-justified, and also leads to a computationally tractable optimization procedure. Having this assumption in mind, we obtain the following point estimate of (11): $$\min_{z_{k}}-\log p(z_{k})+\log\left(\frac{1}{M}\sum_{j=1}^{k-1}K_{j}(z_{k})\right).\tag{1}$$ $$\left(13\right)$$ ## 4 Greedy Approximation Of Bayesian Posterior For Parametric Functions 4.1 Objective Function Derivation of the negative marginal gain In this section, we consider parametric continuous functions zθ : R d → R c, where d - dimension of the input space, c - dimension of the output space, and θ denotes the parameters determining the function. While our derivations in this section hold for any measurable function that satisfies this definition, we assume a DNN to be our model of choice. For DNNs it is natural to consider a ground set V to be a set of all neural networks with a fixed architecture and a random initialization scheme realizable on a computer. We note that this is a very large, but finite set when considering fixed precision weights. We now use Bayes' theorem and a mean-field approximation, similarly to (13). This allows us to express the multimodal posterior over a neural network by a product of posteriors of individual ensemble members. Specifically, the factorized posterior is expressed as $$p(z_{\theta}|{\cal D})\propto p(z_{\theta_{1}},\ldots,z_{\theta_{M}}|{\cal D})\propto\prod_{m=1}^{M}p({\cal D}|z_{\theta_{m}})p(z_{\theta_{m}}),\tag{14}$$ where θm are parameters, p(D|zθ) the likelihood and p(zθ) the prior; p(D|zθ) ∝Qn i=1 exp(−`(zθ(xi), yi)), and p(zθ) ∝ exp(−λkθk 2 2 ). To leverage our earlier defined submodular maximization machinery, and optimize negative marginal gains derived in (13) for approximating (14), we define the kernel density components via generalized exponential kernels Kj (zθ) ∝ exp(−λMd(zθ, zθj ) 2), where λM is proportional to the kernel width and d(·, ·) is a distance measure between functions. Substituting the defined kernels and the factorized posterior defined in (14) into (13), we obtain the following objective to minimize at the k th greedy step of Algorithm 1: $$J(\theta_{k})=\underbrace{\mathbb{E}_{(x,y)\sim p(x,y)}\ell(z_{\theta_{k}}(x),y)+\lambda\|\theta_{k}\|_{2}^{2}}_{\text{Negative marginal gain on}\mathcal{R}(Z)}+\underbrace{\log\sum_{j=1}^{k-1}\exp\left(-\frac{\lambda_{M}}{M}d(z_{\theta_{k}},z_{\theta_{j}})^{2}\right)}_{\text{Negative marginal gain on}\Omega_{M}(Z)},\tag{15}$$ which is similar to the negative marginal gain on our originally defined high-level ensemble training objective (1), except that the diversity term Ω has a different form based on our submodular f-divergence optimization. Sampling-based approximation of the diversity term. When minimizing (15), one needs to be able to compute the diversity term Ω = log Pk−1 j=1 exp(−λMd(zθ, zθj ) 2), which is derived from a kernel density estimator we aim to fit to the true posterior. We note that this needs to be done *in the function space*, which makes this computation non-trivial. We earlier defined Kj (z) to be an individual kernel in a mixture 1M PM j=1 Kj (z). In order to be able to use the f-divergence, the individual components Kj (z) must be density functions centered at zj , which implies the need of a notion of similarity or the existence of a function norm, which are known to be NP-hard to compute for neural networks with depth greater than 3 (Rannen-Triki et al., 2019). We note that this intrinsic hardness result applies to all methods that define a meaningful posterior distribution in function space through a kernel density estimator. We thus use here a method from (Rannen-Triki et al., 2019) to approximate kzk 2 2 , via i.i.d. samples xi ∼ P ∗, where P ∗is a weighting distribution, which is required to ensure that Monte Carlo integration yields a reasonable approximation. This leads to the following sampling-based approximation of the diversity term: $$\log\sum_{j=1}^{k-1}\exp\left(-\frac{\lambda_{M}}{M}\mathbb{E}_{x\sim p^{*}(x)}\|z_{\theta_{k}}(x)-z_{\theta_{j}}(x)\|_{2}^{2}\right).$$ ## 4.2 Practical Implementation Computing the diversity term To this point, we defined all the main components of our method, except the weighting distribution in (16). The desirable behavior, which we expect an ensemble to exhibit, is that it must be uncertain on the OOD data and certain in the regions where the training data are available. This implies that p ∗(x) *must include* OOD samples. One can use OOD data in training explicitly, however, in our work, resort to a setting when OOD data are unknown. Specifically, we use a simple heuristic, which fits a Gaussian to every data dimension with the variance ×5 larger than the variance of the data. We specify further details about this in Appendix B.1. The resulting algorithm The resulting, computationally tractable optimization algorithm for ensembles, which minimizes negative marginal gains (10), is shown in Algorithm 2. For simplicity, we omit the snapshot selection step, i.e. early stopping. $$(16)$$ We note that in Algorithm 1 line 7 we require a uniform random sampling over the top M functions. In practice on a computer, the set of functions are parameterized by fixed precision floating point numbers, and that there are a large number of functions differing by a single lowest significant bit of one network weight, effectively achieving the same value of the negative marginal gain up to measurement error (recall the NP-hardness of the diversity term and its approximation by Monte Carlo integration). Therefore, the randomization introduced by Algorithm 1 line 7 is lower than the combination of optimization error and approximation error from Monte Carlo integration. There is therefore no gain from performing multiple optimizations and we replace the uniform random selection step with a single stochastic optimization: At each k th greedy step, we change the seed (set_seed(·) in Algorithm 2) of the random number generator before initializing the model and optimize a neural network using stochastic gradient descent from random initialization. The final computational performance improvement can be obtained by storing the evaluations zj (xi) ∀j = 1*, . . . , k* − 1 in memory before executing each k th step. We report here also one practical trick, which we found important during training. Specifically, freezing the batch normalization layers (Ioffe & Szegedy, 2015) before computing the diversity term turned out to help the convergence substantially. Algorithm 2 O(k) Random Greedy-based algorithm for training ensembles of neural networks. set_seed(·) sets the random seed. The arg minθ on line 10 is achieved by stochastic gradient descent with random weight initialization. 1: **Input:** V - Set of all neural networks specified by some architecture 2: **Input:** M - Cardinality of the solution 3: **Input:** p(*x, y*) - Training data / empirical distribution 4: **Input:** p ∗(x) - Weighting distribution for the diversity term 5: **Input:** s - Initial random seed 6: θ1 ← arg minθm E(x,y)∼p(x,y)`(zθ1 (x), y) + λkθ1k 2 2 7: Z ← {zθ1 } 8: for m = 2 to M do 9: set_seed(s + m − 1); 10: θm ← arg minθ E(x,y)`(zθ(x), y) + λkθk 2 2 + log Pm−1 j=1 exp − λM M Ex∗∼p∗(x)kzθ(x ∗) − zθj (x ∗)k 2 2 11: Z ← Z ∪ {zθm}; 12: **end for** 13: **return** Z ## 5 Related Work Randomization-based ensembles. Generally, ensembles have been studied in Machine Learning over several decades for different classes of models (Hansen & Salamon, 1990; Breiman, 1996; Freund & Schapire, 1997; Lakshminarayanan et al., 2017). One can distinguish several main methods for diverse ensemble construction (Pearce et al., 2020): randomization of training initialization and hyperparameters (Hansen & Salamon, 1990; Wenzel et al., 2020b; Wilson & Izmailov, 2020; Zaidi et al., 2021), bagging (Breiman, 1996; 2001), boosting (Freund & Schapire, 1997), and explicit diversity training (Kuncheva & Whitaker, 2003; Ross et al., 2020; Yang et al., 2020; Brown et al., 2005; Kariyappa & Qureshi, 2019; Sinha et al., 2021; Melville & Mooney, 2005). Randomization-based ensemble construction has shown good results in in-domain uncertainty (Ashukha et al., 2020) estimation, but also in the detection of OOD data (Lakshminarayanan et al., 2017). Submodular ensemble pruning and greedy ensemble construction Submodularity in ensemble learning has previously been discussed in the context of ensemble pruning (Sha et al., 2014). The goal of ensemble pruning is to trim a large ensemble of models so that the accuracy of the ensemble remains the same. We note that recently, inspired by the submodular pruning approach, a greedy algorithm has been applied to randomization based ensembles (Wenzel et al., 2020b; Zaidi et al., 2021). However, we also note that neither of these works approached the problem of ensemble construction from a Bayesian posterior approximation point of view. However, they provide extensive experimental evidence on the plausibility of greedy approach. Our work sheds light on why in Wenzel et al. (2020b); Zaidi et al. (2021) ensembles constructed greedily worked well in OOD detection tasks. Diversity-promoting regularization for ensemble training. The ensemble literature contains a line of work focusing on explicitly promoting regularization in ensembles (Kuncheva & Whitaker, 2003; Ross et al., 2020; Yang et al., 2020; Brown et al., 2005; Kariyappa & Qureshi, 2019; Sinha et al., 2021; Melville & Mooney, 2005; Havasi et al., 2021). In terms of promoting diversity outside training data, the closest work to ours is (Ross et al., 2020; Rame & Cord, 2021), and in terms of the form of diversity regularization it is (Kariyappa & Qureshi, 2019). However, none of these works takes the perspective of approximating the Bayesian posterior. Another limitation of most of these approaches is that they use either out-of-distribution data in training, adversarial examples, or expensive generative models, thus making those methods difficult to scale to large datasets. POVI A problem of learning diverse ensembles can be seen from a POVI perspective: in particular, Stein Variational Gradient Descent (SVGD) (Wang & Liu, 2019). SVGD aims to learn a diverse set of functions Z, approximating arbitrary distributions via a set of particles using a reverse KL divergence. This approach is similar to ours, however, we tackle the problem of optimizing a general f*-divergence* in the function space. Furthermore, to our knowledge, the present work is the first that takes a submodular minimization perspective in BDL. Another line of work in the POVI family also considers greedy approximations (Futami et al., 2019; Jerfel et al., 2021). In both of these works, the authors perform particle based inference by also optimizing the kernel widths of the KDE, which is different to our method. Furthermore, these works do not tell what the diversity term for ensemble training should look like, and how to compute it in practice. Function space POVI Conventionally, Bayesian inference in DL is thought of in the weight space (MacKay, 1992; Blundell et al., 2015; Gal & Ghahramani, 2016; Izmailov et al., 2020; Wilson & Izmailov, 2020; Pearce et al., 2020; Wenzel et al., 2020a). However, recent studies point out that despite the fact that simple priors over weights may imply complex posteriors in the function space, the connection between two of these is difficult to establish (Sun et al., 2019; Hafner et al., 2020). Recent papers on function-space POVI (Wang et al., 2019; D'Angelo & Fortuin, 2021) point out that one can do Bayesian inference in the function space by optimizing an objective function with a repulsive term. Notably, they draw connection to the reverse KL-divergence minimization, and we thus consider our method to be closely connected to function space POVI. ## 6 Experiments 6.1 Setup Datasets and models. We ran our main experiments on CIFAR10, CIFAR100 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011) in-distribution datasets. Our OOD detection benchmark included CIFAR10, CIFAR100, DTD (Cimpoi et al., 2014), SVHN (Netzer et al., 2011), LSUN (Yu et al., 2015), TinyImageNet (Le & Yang, 2015), Places 365 (Zhou et al., 2017), Bernoulli noise images, Gaussian noise, random blobs image, and uniform noise images. The composition of the benchmark was inspired by the work of Hendrycks et al. (2019). We excluded the in-distribution datasets for each of the settings, resulting in a total of 10 OOD datasets for each in-distribution dataset. The full description of the benchmark is shown in Appendix B.2. The experiments were conducted using ResNet164 (pre-activated version; denoted as PreResNet164) (He et al., 2016), VGG16 (with batch normalization (Ioffe & Szegedy, 2015); denoted as VGG16BN) (Simonyan & Zisserman, 2015), and WideResNet28x10 (Zagoruyko & Komodakis, 2016). All our models in the ensembles were trained for 100 epochs using PyTorch (Paszke et al., 2019), each ensemble on a single NVIDIA V100 GPU. In the case of CIFAR and SVHN experiments, we trained ensembles of size M = 11, and report the results across 5 different random seeds. For the CIFAR experiments, we selected λM ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 3, 5, 7, 10}. In addition to the CIFAR and SVHN experiments, we used MNIST (LeCun et al., 1998) with ResNet8. The details of those experiments are shown in Appendix B.2. Model selection and metrics. We used mutual information (MI) between the distribution of the predicted label yˆ for the point ˆx and the posterior distribution over functions p(f|D), to evaluate the *epistemic* uncertainty (Malinin & Gales, 2018; Depeweg et al., 2018) and reported the area under the ROC curve (AUC) and area under the precision-recall (PR) curve, i.e. average precision (AP) to quantify the OOD detection performance. Furthermore, we computed the false positive rate at 95% true positive rate (FPR95). Details on the computation of epistemic uncertainty can be found in Appendix B.3. 6.2 Results ![9_image_0.png](9_image_0.png) Figure 2: Uncertainty of an ensemble of two layer neural networks on a two moons dataset (size M = 11). Compared to DE, which is uncertain only close to the decision boundary, our method yields the desired behavior - the further we move from training data, the higher uncertainty is. Such behavior is controlled by the diversity regularization coefficient λM. Illustrative examples. Figure 2 illustrates how our method performs on the two moons dataset. Here, we used a two-layer fully-connected network with ReLU activations (Fukushima, 1988). Having a high λM is important to obtain good uncertainty estimation. As expected, compared to our method, the DE method does not explicitly maximize the coverage of the posterior, and thus fails to be uncertain outside the training data. Out-of-distribution detection. We present aggregated results for all the models and in-distribution datasets in Table 1. It is clear that on average (across OOD datasets), our method is substantially better than DE. This holds for all the architectures and in-distribution datasets. We show the expanded version of all the OOD detection results in Appendix C.3. An example of these results is shown in Figure 3 for all the models trained on CIFAR 100. Here, one can see that our method is at least similar to DE, and substantially better overall in terms of AUC and AP metrics. Finally, some examples of OOD detection by both DE and our method are shown in Figure 4. Here, we computed optimal thresholds for each of the methods by optimizing the trade-off between the true positive and true negative rates. ## 7 Discussion In this paper, we have introduced a novel paradigm for Bayesian posterior approximation in Deep Learning using greedy ensemble construction via submodular optimization. We have proven a new general theoretical result, which shows that minimization of an f-divergence between some distribution and a kernel density estimator has approximation guarantees, and can be done greedily. We then derived a novel coverage promoting diversity term for ensemble construction. The results presented in this paper, as well as in Appendix C.4, demonstrate that our method outperforms the state-of-the-art approach for ensemble construction, DE (Lakshminarayanan et al., 2017), on a range of benchmarks in OOD detection, while preserving the accuracy (Table C3). | Model | Dataset | Deep Ensembles | | Ours | | | | |-----------------|-----------|------------------|---------|--------|-----------|------|------| | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | | | | C10 | 0.94 | 0.92 | 0.17 | 0.95 | 0.95 | 0.14 | | | PreResNet164 | C100 | 0.79 | 0.80 | 0.47 | 0.88 | 0.88 | 0.40 | | SVHN | 0.99 | 0.97 | 0.02 | 1.00 | 0.98 | 0.01 | | | C10 | 0.95 | 0.94 | 0.15 | 0.96 | 0.96 | 0.12 | | | WideResNet28x10 | C100 | 0.86 | 0.85 | 0.36 | 0.90 | 0.91 | 0.30 | | SVHN | 0.99 | 0.96 | 0.03 | 1.00 | 0.99 | 0.01 | | | C10 | 0.92 | 0.91 | 0.23 | 0.95 | 0.95 | 0.18 | | | VGG16BN | C100 | 0.83 | 0.82 | 0.45 | 0.89 | 0.90 | 0.36 | | SVHN | 0.99 | 0.96 | 0.02 | 1.00 | 0.98 | 0.02 | | Table 1: Averaged metrics across 10 OOD datasets. ![10_image_0.png](10_image_0.png) Figure 3: Out-of distribution detection results on CIFAR 100 for 3 different architectures (read column-wise). Here, we show AUC (top row) and AP values (bottom row) from 0.5 to 1 averaged across 5 seeds. This study has some limitations, which outline several directions for the future work. Firstly, we did not compare our approach to a variety of existing methods for ensemble generation, e.g. snapshot ensembles (Huang et al., 2017), batch ensembles (Wen et al., 2020) or hyperparameter ensembles (Wenzel et al., 2020b). However, these methods are heuristic, and as discussed in the related work, our method can be used in conjunction with them to make them more principled. Furthermore, we note that the main contribution of this paper is novel theory. The second limitation of this work is that it does not compare to f-POVI Wang et al. (2019); D'Angelo & Fortuin (2021). However, as noted earlier, those methods are in a different class, as they focus on training models without state-of-the-art training techniques such as batch normalization (Ioffe & Szegedy, 2015) and data augmentation. ![11_image_0.png](11_image_0.png) Figure 4: OOD detection examples. In subplot (a), the top row shows true positives, and the bottom - true negatives detected by our method. We also show the uncertainty values. Subplot (b) shows the failures of both DE and our method on positives (top row) and negatives (bottom row), respectively. Here, we used PreResNet164 trained on CIFAR10 (M = 11). SVHN was used as an OOD dataset. The third limitation is that we did not fully explore different techniques of generating weighting distributions for the diversity term. However, we still evaluated the possibility of using real images for this purpose in Appendix C.4, similarly to an outlier exposure setting (Hendrycks et al., 2019). The fourth limitation of this work, is that we have implemented Algorithm 1 only for a single instance of an f-divergence with a single type of kernel density estimator, and Algorithm 2 is specific to neural networks. All this can alter the approximation factor on the f-divergence upper-bound, and it is thus hard to quantify the exact approximation gurarantees for the case of neural network posteriors. We consider that one can get results of this type only empirically, which is computationally intractable in general, but could still be evaluated on small scale problems in the future studies. The final limitation is related to our method, as it lacks parallelization possibilities compared to DE. We note, however, that there is a class of parallel submodular optimization algorithms that enable parallelization and retain approximation guarantees (Ene & Nguyen, 2020). When both our method and DE are run sequentially on GPU, one can observe 70-90% computational overhead compared to DE when running our method for M = 11 (Table C9). Such an overhead, is, however, natural due to saving predictions and a double number of forward passes. We note here, that despite that training of the proposed method was roughly 1.75-1.9 times more expensive than DE, both methods had the same test runtime. Therefore, our approach is still of interest in applications where high quality uncertainty estimation at test time cannot be traded in favor of lower cost training. To conclude, this paper provides a novel foundational framework for Bayesian Deep Learning. We hope for the wide adaption of the proposed method by practitioners across the fields. Our code is available at https://github.com/Oulu-IMEDS/greedy_ensembles_training. ## Acknowledgements We acknowledge support from the Research Foundation - Flanders (FWO) through project numbers G0A1319N and S001421N, KU Leuven Internal Funds via the MACCHINA project, and funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme. This research was also supported by the strategic funds of the University of Oulu, Finland, funding from the Academy of Finland (Profi6 336449 funding program and Finnish Center for Artificial Intelligence flagship), as well as the Northern Ostrobothnia hospital district, Finland (VTR project K33754). A.T. acknowledges travel support from the European Union's Horizon 2020 research and innovation programme under grant agreement No 951847. We thank CSC - Finnish Center for Science for generous computational resources. We also acknowledge the computational resources provided by the Aalto Science-IT project. Iaroslav Melekhov, Markus Heinonen, Martin Trapp, Bas Veeling, and Egor Panfilov are acknowledged for useful suggestions that helped to improve this work. ## References Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=BJxI5gHKDr. Francis Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013. Francis Bach. Submodular functions: from discrete to continuous domains. *Mathematical Programming*, 175 (1):419–459, 2019. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In *International Conference on Machine Learning*, pp. 1613–1622. PMLR, 2015. Leo Breiman. Bagging predictors. *Machine learning*, 24(2):123–140, 1996. Leo Breiman. Random forests. *Machine learning*, 45(1):5–32, 2001. Gavin Brown, Jeremy L Wyatt, Peter Tino, and Yoshua Bengio. Managing diversity in regression ensembles. Journal of machine learning research, 6(9), 2005. Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In *Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms*, pp. 1433–1452. SIAM, 2014. Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3606–3613, 2014. Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, and Richard Turner. Conservative uncertainty estimation by fitting prior networks. In *International Conference on Learning Representations*, 2019. Francesco D'Angelo and Vincent Fortuin. Repulsive deep ensembles are bayesian. *Advances in Neural* Information Processing Systems, 34, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Stefan Depeweg, Jose-Miguel Hernandez-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty in bayesian deep learning for efficient and risk-sensitive learning. In International Conference on Machine Learning, pp. 1184–1193. PMLR, 2018. Alina Ene and Huy Nguyen. Parallel algorithm for non-monotone dr-submodular maximization. In *International Conference on Machine Learning*, pp. 2902–2911. PMLR, 2020. Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757, 2019. Vincent Fortuin. Priors in bayesian deep learning: A review. *arXiv preprint arXiv:2105.06868*, 2021. Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of computer and system sciences*, 55(1):119–139, 1997. Satoru Fujishige. *Submodular functions and optimization*. Elsevier, 2005. Kunihiko Fukushima. Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural networks, 1(2):119–130, 1988. Futoshi Futami, Zhenghang Cui, Issei Sato, and Masashi Sugiyama. Bayesian posterior approximation via greedy particle optimization. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 3606–3613, 2019. Yarin Gal. *Uncertainty in Deep Learning*. PhD thesis, University of Cambridge, 2016. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059. PMLR, 2016. Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 8803–8812, 2018. Shayan Oveis Gharan and Jan Vondrák. Submodular maximization by simulated annealing. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pp. 1098–1116. SIAM, 2011. Danijar Hafner, Dustin Tran, Timothy Lillicrap, Alex Irpan, and James Davidson. Noise contrastive priors for functional uncertainty. In *Uncertainty in Artificial Intelligence*, pp. 905–914. PMLR, 2020. Lars Kai Hansen and Peter Salamon. Neural network ensembles. *IEEE transactions on pattern analysis and* machine intelligence, 12(10):993–1001, 1990. Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew Mingbo Dai, and Dustin Tran. Training independent subnetworks for robust prediction. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id= OGg9XnKxFAH. Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian deep ensembles via the neural tangent kernel. *Advances in Neural Information Processing Systems*, 33:1010–1022, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In *International Conference on Learning Representations*, 2019. URL https://openreview. net/forum?id=HJz6tiCqYm. Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id= HyxCxhRcY7. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. Snapshot ensembles: Train 1, get M for free. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=BJYwwY9ll. Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, and Khan Mohammad Emtiyaz. Scalable marginal likelihood estimation for model selection in deep learning. In International Conference on Machine Learning, pp. 4563–4573. PMLR, 2021. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. PMLR, 2015. Pavel Izmailov, Wesley J Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Subspace inference for Bayesian deep learning. In *Uncertainty in Artificial Intelligence*, pp. 1169–1179. PMLR, 2020. Ghassen Jerfel, Serena Wang, Clara Wong-Fannjiang, Katherine A Heller, Yian Ma, and Michael I Jordan. Variational refinement for importance sampling using the forward kullback-leibler divergence. In *Uncertainty* in Artificial Intelligence, pp. 1819–1829. PMLR, 2021. Sanjay Kariyappa and Moinuddin K Qureshi. Improving adversarial robustness of ensembles with diversity training. *arXiv preprint arXiv:1901.09981*, 2019. Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies. *Journal of Machine Learning Research*, 9(2), 2008. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. *Machine learning*, 51(2):181–207, 2003. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. *Science*, 350(6266):1332–1338, 2015. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/ file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf. Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N, Stanford University, 2015. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Zhiyun Lu, Eugene Ie, and Fei Sha. Uncertainty estimation with infinitesimal jackknife, its distribution and mean-field approximation. *arXiv preprint arXiv:2006.07584*, 2020. David JC MacKay. A practical bayesian framework for backpropagation networks. *Neural computation*, 4(3): 448–472, 1992. Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for Bayesian uncertainty in deep learning. In *Advances in Neural Information Processing Systems*, pp. 13132–13143, 2019. Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings. neurips.cc/paper/2018/file/3ea2db50e62ceefceaf70a9d9a56a6f4-Paper.pdf. Prem Melville and Raymond J Mooney. Creating diversity in ensembles using artificial data. *Information* Fusion, 6(1):99–111, 2005. Dragoslav S Mitrinovic and Petar M Vasic. *Analytic inequalities*, volume 61. Springer, 1970. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In *NIPS Workshop on Deep Learning and* Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_ housenumbers.pdf. Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, volume 2, 2019. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 8558cb408c1d76621371888657d2eb1d-Paper.pdf. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in Neural Information Processing Systems*, 32:8026–8037, 2019. Tim Pearce, Felix Leibfried, and Alexandra Brintrup. Uncertainty in neural networks: Approximately Bayesian ensembling. In *International conference on artificial intelligence and statistics*, pp. 234–244. PMLR, 2020. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. *The Journal of Machine Learning Research*, 12:2825–2830, 2011. Alexandre Rame and Matthieu Cord. {DICE}: Diversity in deep ensembles via conditional redundancy adversarial estimation. In *International Conference on Learning Representations*, 2021. URL https: //openreview.net/forum?id=R2ZlTVPx0Gk. Amal Rannen-Triki, Maxim Berman, Vladimir Kolmogorov, and Matthew B Blaschko. Function norms for neural networks. In *Proceedings of the IEEE International Conference on Computer Vision Workshops*, 2019. Andrew Ross, Weiwei Pan, Leo Celi, and Finale Doshi-Velez. Ensembles of locally independent prediction models. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 5527–5536, 2020. Chaofeng Sha, Keqiang Wang, Xiaoling Wang, and Aoying Zhou. Ensemble pruning: A submodular function maximization perspective. In *International Conference on Database Systems for Advanced Applications*, pp. 1–15. Springer, 2014. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http: //arxiv.org/abs/1409.1556. Samarth Sinha, Homanga Bharadhwaj, Anirudh Goyal, Hugo Larochelle, Animesh Garg, and Florian Shkurti. Dibs: Diversity inducing information bottleneck in model ensembles. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 9666–9674, 2021. Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In *Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications*, volume 11006, pp. 1100612. International Society for Optics and Photonics, 2019. Shengyang Sun, Guodong Zhang, Jiaxin Shi, and Roger Grosse. Functional variational Bayesian neural networks. In *International Conference on Learning Representations*, 2019. URL https://openreview. net/forum?id=rkxacs0qY7. Zoya Svitkina and Lisa Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. SIAM Journal on Computing, 40(6):1715–1737, 2011. Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep deterministic neural network. In *International Conference on Machine Learning*, pp. 9690–9700. PMLR, 2020. Dilin Wang and Qiang Liu. Nonlinear Stein variational gradient descent for learning diversified mixture models. In *International Conference on Machine Learning*, pp. 6576–6585, 2019. Ziyu Wang, Tongzheng Ren, Jun Zhu, and Bo Zhang. Function space particle optimization for Bayesian neural networks. *arXiv preprint arXiv:1902.09754*, 2019. Yeming Wen, Dustin Tran, and Jimmy Ba. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. In *International Conference on Learning Representations*, 2020. URL https: //openreview.net/forum?id=Sklf1yrYDr. Florian Wenzel, Kevin Roth, Bastiaan Veeling, Jakub Swiatkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How good is the Bayes posterior in deep neural networks really? In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 10248–10259. PMLR, 13–18 Jul 2020a. URL https://proceedings.mlr.press/v119/wenzel20a.html. Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenatton. Hyperparameter ensembles for robustness and uncertainty quantification. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6514–6527. Curran Associates, Inc., 2020b. David P Williamson and David B Shmoys. *The design of approximation algorithms*. Cambridge university press, 2011. Andrew G Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. Advances in neural information processing systems, 33:4697–4708, 2020. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017. Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. Dverge: Diversifying vulnerabilities for enhanced robust generation of ensembles. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural* Information Processing Systems, volume 33, pp. 5505–5515. Curran Associates, Inc., 2020. URL https: //proceedings.neurips.cc/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-Paper.pdf. Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. *arXiv preprint arXiv:1506.03365*, 2015. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference 2016. British Machine Vision Association, 2016. Sheheryar Zaidi, Arber Zela, Thomas Elsken, Christopher C. Holmes, Frank Hutter, and Yee Whye Teh. Neural ensemble search for uncertainty estimation and dataset shift. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=HiYDAwAGWud. Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In *International Conference on Learning Representations*, 2018. URL https://openreview. net/forum?id=r1Ddp1-Rb. Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, and David Barber. Variational f-divergence minimization. *arXiv preprint arXiv:1907.11891*, 2019. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. *IEEE transactions on pattern analysis and machine intelligence*, 40(6): 1452–1464, 2017. ## A Proofs A.1 Proof Of Theorem 1 To prove Theorem 1, we make use of the following theorem: Theorem A1 (Theorem 1.4 from (Mitrinovic & Vasic, 1970)). A function f : D → R *is convex on* D = [*a, b*] if and only if ∀x1 < x2 < x3 ∈ D $\begin{vmatrix}x\\ x\\ x\\ \end{vmatrix}$ $$\begin{array}{l|l}f(x_{1})&1\\ f(x_{2})&1\\ f(x_{3})&1\end{array}\geq0.$$ Theorem 1. Any f*-divergence* $$D_{f}(p||q_{M})=\int f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{M}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{M}K_{m}(z)d z$$ $$(17)$$ $$(18)$$ $$(19)$$ $$(20)$$ between a distribution p(z) and a mixture of M kernels with equal weights is supermodualar in a cardinality-fixed setting, assuming that ∀z maxqM Df (p(z)||qM(z)) < ∞. Proof. In order to prove that f-divergences are supermodular, we need to show that ∀α > 0, xf( α x ) is convex, because qM(z) is a positive modular function (Bach, 2019, Proposition 6.1), due to M being fixed. From Theorem A1, f(x) is convex on a closed interval [a, b], if and only if ∀x1 < x2 < x3 in [*a, b*] $\begin{vmatrix}x_1\\ x_2\\ x_3\end{vmatrix}$ $$\begin{array}{l}{f(x_{1})}\\ {f(x_{2})}\\ {f(x_{3})}\end{array}$$ x1 f(x1) 1 x2 f(x2) 1 x3 f(x3) 1 ≥ 0. (19) We know that x1 ≤ x2 ≤ x3 and x1*, α >* 0. We divide each i th row by xi: $$\begin{array}{l l}{{\left|1\right.}}&{{\frac{1}{x_{1}}}f(x_{1})}\\ {{\left|1\right.}}&{{\frac{1}{x_{2}}}f(x_{2})}\\ {{\left|1\right.}}&{{\frac{1}{x_{3}}}f(x_{3})}\end{array}$$ $$\left.\begin{array}{c}\frac{1}{\varepsilon_{1}}\\ \frac{1}{\varepsilon_{2}}\\ \frac{1}{\varepsilon_{3}}\end{array}\right|\geq0.\tag{1}$$ After the division, we denote new variables y1 = 1 x3 , y2 = 1 x2 , and y3 = 1 x2 . One can see that y1 < y2 < y3, because x1 < x2 < x3. We then get $$\begin{array}{r l}{\left|1\right.}&{{}y_{3}f({\frac{1}{y_{3}}})}\\ {1}&{{}y_{2}f({\frac{1}{y_{2}}})}\\ {1}&{{}y_{1}f({\frac{1}{y_{1}}})}\end{array}$$ $$(21)$$ ) y3 ) y2 ) y1 $y_{3}$$y_{2}$$\geq0$. $y_{1}$$\geq0$. Changing the first and the third row of the determinant will change the sign. Changing the third and the first columns will also change the sign. Therefore $$\begin{array}{l l l}{{}}&{{}}&{{y_{1}}}&{{y_{1}f(\frac{1}{y_{1}})}}&{{1}}\\ {{}}&{{}}&{{y_{2}}}&{{y_{2}f(\frac{1}{y_{2}})}}&{{1}}\\ {{}}&{{}}&{{y_{3}}}&{{y_{3}f(\frac{1}{y_{3}})}}&{{1}}\end{array}\geq0,}\tag{1}$$ and thus we get that $$\forall x\in[a,b],\,x f\left({\frac{1}{x}}\right){\mathrm{~is~convex~}}\Longleftrightarrow{\mathrm{~}}f(x){\mathrm{~is~convex.~}}$$ is convex ⇐⇒ f(x) is convex. (23) $$(22)$$ $$(23)$$ Consider the following reparameterization y˜ = y α , which preserves convexity. Then $$\begin{array}{l l}{{\left|\alpha\tilde{y}_{1}\right.}}&{{\alpha\tilde{y}_{1}f\left(\frac{\alpha}{\tilde{y}_{1}}\right)}}\\ {{\left|\alpha\tilde{y}_{2}\right.}}&{{\alpha\tilde{y}_{2}f\left(\frac{\alpha}{\tilde{y}_{2}}\right)}}\\ {{\left|\alpha\tilde{y}_{3}\right.}}&{{\alpha\tilde{y}_{3}f\left(\frac{\alpha}{\tilde{y}_{3}}\right)}}\end{array}$$ $$\begin{array}{c|l}1\\ 1\\ 1\end{array}\geq0.\tag{1}$$ Division of the first and the second column by α does not change the sign of the determinant, therefore, xf α x is convex ⇐⇒ f(x)is convex, (25) which concludes the proof. ## A.2 Proof Of Proposition 1 Proposition 1. *Consider* C = max Df (p||qM), where Df (p||qM) is an arbitrary f*-divergence between some* distribution p(z) *and a mixture of kernels* qM(z) = 1M PM j=1 Kj (z), and Df (p||qM) < ∞*. Then, maximization* of the marginal gain for the set function $$F(Z)=-\int f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{M}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{M}K_{m}(z)d z+C,$$ $$(24)$$ $$(25)$$ $\square$ $$(26)$$ $$(27)$$ $$(29)$$ at step k *of a greedy algorithm corresponds to* $$\arg\operatorname*{max}_{z_{k}}\Delta(z_{k}|Z)=\arg\operatorname*{min}_{z_{k}}\mathbb{E}_{z\sim K_{k}(z)}f\left({\frac{p(z)}{{\frac{1}{M}}\sum_{j=1}^{k}K_{j}(z)}}\right).$$ Proof. We aim to derive a marginal gain of adding an element defined by zk to 1M Pk−1 j=1 Kj (z) 1. Let us denote $G(z)=f\left(\frac{Mp(z)}{\sum_{j=1}^{k}K_{j}(z)}\right)-f\left(\frac{\frac{Mp(z)}{M}}{\sum_{j=1}^{k}K_{j}(z)}\right)$. Then $$-\int f\left(\frac{Mp(z)}{\sum_{j=1}^{k-1}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{k}K_{m}(z)dz+C+\int f\left(\frac{Mp(z)}{\sum_{j=1}^{k-1}K_{j}(z)}\right)\frac{1}{M}\sum_{m=1}^{k-1}K_{m}(z)dz-C=\tag{28}$$ $$\int\frac{G(z)}{M}\sum_{m=1}^{k-1}K_{m}(z)dz-\int f\left(\frac{Mp(z)}{\sum_{j=1}^{k}K_{j}(z)}\right)\frac{K_{k}(z)}{M}dz.$$ One can observe that the first term of (28) is upper-bounded by a constant, which is not dependent on zk: $$\int\frac{G(z)}{M}\sum_{m=1}^{k-1}K_{m}(z)d z\leq\int f\left(\frac{k p(z)}{\sum_{j=1}^{k-1}K_{j}(z)}\right)\frac{1}{M}\sum_{j=1}^{k-1}K_{j}(z)d z\ =\ \mathrm{const},$$ therefore, to maximize the marginal gain, one needs to maximize the second term. Consequently, we write the objective corresponding to a marginal gain as $$\Delta(z_{k}|Z\setminus z_{k})=-\int f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{k}K_{j}(z)}\right)K_{k}(z)d z,$$ maximization of which is equivalent to $$\arg\operatorname*{min}_{z_{k}}\mathbb{E}_{z\sim K_{k}(z)}f\left(\frac{p(z)}{\frac{1}{M}\sum_{j=1}^{k}K_{j}(z)}\right),$$ which concludes the proof. 1Note: 1M is a constant, which remains unchanged at all iterations of the greedy algorithm. $$(30)$$ $$(31)$$ ## B Implementation Details B.1 Weighting Distribution For The Diversity Term We propose the following simple heuristic, defining p ∗(x) as a normal distribution N (µD, α· ΣD) of dimensionality, corresponding to the training data. The covariance ΣD for this distribution is set to be diagonal, such that the variance for every dimension j is ΣD[*j, j*] = (α · σj ) 2, where α > 1 is a scaling parameter, and σ 2 j is a variance of the dimension j computed from samples of the training dataset D. Similarly, µD, the vector of expected values for every dimension, is also computed from the training data. Finally, the hyperparameter α = 5 was found to work well, and we thus report all the experimental results with it fixed. We note that a similar technique, but for *in-distribution* data generation, has been used earlier in (Melville & Mooney, 2005). ## B.2 Ood Detection Benchmarks MNIST The MNIST dataset benchmark included 6 datasets: Fashion MNIST Xiao et al. (2017), DTD Cimpoi et al. (2014), Omniglot Lake et al. (2015), Gaussian noise, Bernoulli noise, and uniform noise datasets (see Table B1). We re-scaled all the images to the range of [0, 1]. Subsequently, we applied the same mean and standard deviation as we have applied to the original images before feeding them to the network. CIFAR and SVHN The CIFAR and SVHN OOD benchmark included 10 different datasets. Here, we also DTD, Bernoulli noise, Gaussian noise and uniform noise datasets, and added Places 365 (Zhou et al., 2017), Tiny ImageNet (Le & Yang, 2015; Deng et al., 2009), and LSUN (Yu et al., 2015) datasets to the benchmark. For CIFAR10 as in-domain data, we added CIFAR100 and SVHN (Netzer et al., 2011) to the benchmark. For CIFAR100 - CIFAR10 and SVHN. Finally, for SVHN, we added CIFAR10 and CIFAR100 as OOD datasets, making a total of 10 OOD datasets per 1 in-distribution dataset. The details about each of the datasets are shown in Table B1. Before feeding the images to the network, we applied re-scaling similarly to the MNIST setting. We also used the same mean and standard deviation normalization procedure, as for the in-domain data. | Dataset | Type | # samples | Comment | |---------------|---------|----------------------------------------------------------------|-----------| | Uniform | S | 25, 000 | N/A | | Gaussian | 25, 000 | Generated once, used in all experiments | | | Blobs | 25, 000 | N/A | | | Bernoulli | 25, 000 | N/A | | | Omniglot | 13, 181 | Evaluation images | | | CIFAR10 | 10, 000 | Test set (not used in training) | | | CIFAR100 | 10, 000 | Test set (not used in training) | | | SVHN | 73, 257 | Test set (not used in training) | | | Places 365 | 10, 000 | First 10, 000 images from the test set (sorted alphabetically) | | | TinyImageNet | 10, 000 | Original validation set images | | | DTD | 5, 640 | Release 1.0.1 | | | LSUN | 10, 000 | Test set | | | Fashion MNIST | 10, 000 | Test set | | | R | | | | Table B1: Description of the datasets used in all the exp. R indicates real images, S - synthetic. ## B.3 Epistemic Uncertainty Computation We used the *epistemic uncertainty*, i.e. mutual information (MI) between the label y for the point ˆx and the posterior distribution over functions p(f|D), to evaluate the uncertainty (Malinin & Gales, 2018; Depeweg et al., 2018). As a distribution over weights induces a distribution over functions, we approximate the MI as: $${\mathcal{I}}(y;f|\hat{\mathbf{x}},{\mathcal{D}})={\mathcal{H}}\left[\mathbb{E}_{p(\theta|{\mathcal{D}})}p(y\mid\theta,\hat{\mathbf{x}},{\mathcal{D}})\right]-\mathbb{E}_{p(\theta|{\mathcal{D}})}{\mathcal{H}}\left[p(y\mid\theta,\hat{\mathbf{x}},{\mathcal{D}})\right],$$ where H[·] denotes the entropy. One can see that this metric can be efficiently computed from the predictions of an ensemble. ## C Experiments C.1 Experimental Details Model selection Contrary to the commonly used practice, we did not use CIFAR10/100 and SVHN test set sets for model selection. Neither did we use any OOD data. Instead, we used validation set accuracy (10% of the training data; randomly chosen stratified split) to select the models when optimizing the marginal gain. The best snapshot was found using the validation data, was then selected for final testing. When selecting the models for evaluation on OOD data, we first evaluated ensembles on the in-distribution test set (Appendix C.2). Subsequently, we selected the highest λM that did not harm the test set (in-domain) performance (no overlap of confidence intervals defined as mean ± standard error). To provide additional information, we also analyzed adaptive calibration error (ACE) with 30 bins (Nixon et al., 2019). Two moons dataset For the synthetic data experiments, we used scikit-learn (Pedregosa et al., 2011), and generated a two-moons dataset with 300 points in total, having the noise parameter fixed to 0.3. Here, we used a two-layer neural network with ReLU (Krizhevsky, 2009) activations and hidden layer size of 128. CIFAR10/100 and SVHN The main training hyper-parameters were adapted from (Maddox et al., 2019) (see Table C2), but with additional modifications inspired by (Malinin & Gales, 2018; Smith & Topin, 2019), which helped to train the CIFAR models to state-of-the-art performance in only 100 epochs. As such, we first employed a warm-up of the learning rate (LR) from a value 10 times lower than the initial LR (LR*init* in Table C2) for 5 epochs. Subsequently, after 50% of the training budget, we linearly annealed the LR to the value of LR × lr*scale* until 90% of the training budget is reached, after which we kept the value of LR constant. All models were trained using stochastic gradient descent with momentum of 0.9 and a total batch size of 128. We employed standard training augmentations - horizontal flipping, reflective padding to 34 × 34, and random crop to 34 × 34 pixels. | Model | LRinit | Nesterov | Weight Decay | lrscale | |-----------------|----------|------------|----------------|-----------| | PreResNet164 | 0.1 | Yes | 0.0001 | 0.01 | | VGG16BN | 0.05 | No | 0.0005 | 0.01 | | WideResNet28x10 | 0.1 | No | 0.0005 | 0.001 | Table C2: Main hyper-parameters of all the models used in the CIFAR and SVHN in-domain experiments. MNIST In addition to the CIFAR10/100 experiments, we also trained our method on MNIST (LeCun et al., 1998) with PreResNet8 architecture. As OOD, we used FashionMNIST (Xiao et al., 2017) and Omniglot (Lake et al., 2015) datasets. We also tested other architectures, such as PreResNet20, but the models with higher depth than 8 already gave nearly perfect scores on MNIST. Hyper-parameter-wise, we trained all the models for 20 epochs without warmup with the batch size of 256 using plain SGD with momentum. The weight decay was set to 1e − 5. We used LR annealing similarly as for CIFAR experiments, but used lr*scale* = 0.0001. No data augmentations were used in any of the MNIST experiments. λM was searched in range {0.0001, 0.001, 0.01, 0.11, 7} for M ∈ {3, 5, 9, 15}. This series of experiments was re-run 3 times, as the MNIST dataset is rather simple, and the test scores have low variance between the runs. ## C.2 Cifar10/100 And Svhn In-Domain Performance CIFAR10/100 in-distribution performance vs. diversity. Figure C1 provides an illustration of how the test set performance changes with λM on CIFAR data. One can see a general trend that when λM ![21_image_0.png](21_image_0.png) Figure C1: Relationship between accuracy, ACE, and λM (M = 11). Subplots (a) and (b) show the results for CIFAR10. Subplots (c) and (d) show the results for CIFAR100. approaches M, the models lose the ability to make accurate predictions, which results in lower accuracy and poorer calibration. Interestingly, performance on the VGG model degrades much slower with λM compared to other architectures. Similar findings were also obtained for the SVHN dataset. Based on the test performance, we selected the models for further evaluation on OOD benchmark. CIFAR and SVHN: best models' performance Table C3 shows the results of all the trained models on the in-domain data. One can see that the results between Deep Ensembles (DE) (Lakshminarayanan et al., 2017) do not differ significantly. We trained all these models according to the earlier specified hyper-parameters and the learning rate schedule. Models selected in Table C3 are used to report the results in the main experiments. Table C3: In-domain performance on the test sets of CIFAR10/100 and SVHN for all the models used in the experiments (M = 11). We report mean and standard error over 5 random seeds for each of the models. Standard errors are reported if they are more than 0.01 across runs. DE indicates Deep Ensembles. | Architecture | Dataset | Method | Accuracy (%) | NLL ×100 | ACE (%) | |-----------------|------------|------------|----------------|------------|-----------| | C10 | DE | 95.70±0.02 | 13.28±0.06 | 0.18 | | | Ours (λM = 3) | 95.66±0.02 | 13.18±0.09 | 0.16 | | | | PreResNet164 | C100 | DE | 79.97±0.04 | 73.44±0.17 | 0.08 | | Ours (λM = 5) | 79.93±0.04 | 74.16±0.91 | 0.09±0.01 | | | | SVHN | DE | 99.46±0.01 | 2.34±0.04 | 0.25 | | | Ours (λM = 1) | 99.38±0.01 | 3.16±0.13 | 0.81±0.12 | | | | C10 | DE | 94.55±0.02 | 17.59±0.05 | 0.23 | | | Ours (λM = 5) | 94.48±0.06 | 17.84±0.15 | 0.23±0.01 | | | | VGG16BN | C100 | DE | 76.32±0.09 | 91.07±0.35 | 0.11 | | Ours (λM = 5) | 76.78±0.07 | 89.10±0.33 | 0.10 | | | | SVHN | DE | 99.40±0.01 | 2.71±0.02 | 0.18±0.01 | | | Ours (λM = 1) | 99.25±0.01 | 3.94±0.10 | 0.95±0.12 | | | | C10 | DE | 96.56±0.02 | 10.76±0.06 | 0.16±0.01 | | | Ours (λM = 1) | 96.54±0.01 | 10.99±0.03 | 0.18±0.01 | | | | WideResNet28x10 | C100 | DE | 83.08±0.09 | 62.05±0.18 | 0.09 | | Ours (λM = 1) | 83.02±0.06 | 62.20±0.13 | 0.08 | | | | SVHN | DE | 99.45±0.01 | 2.53±0.03 | 0.43±0.01 | | | Ours (λM = 1) | 99.38 | 2.87±0.04 | 0.46±0.01 | | | ## C.3 Detailed Cifar, Svhn Results Detalized versions of the results presented in the main text are shown in Table C4. The corresponding λM coefficients are the same as in Table C3. Table C4: CIFAR10 results. We report mean and standard error over 5 random seeds for each of the models. Standard errors are reported if they are more than 0.01 across runs. DE indicates Deep Ensembles. | PreResNet164 VGG16BN WideResNet28x10 | |----------------------------------------| Architecture OOD dataset DE Ours AUC (↑) AP (↑) FPR95 (↓) AUC (↑) AP (↑**) FPR95 (**↓) bernoulli 0.98±0.01 0.97±0.01 0.04±0.01 1.00 1.**00 0**.00 blobs 0.96 0.98 0.12±0.01 0.96 0.98 0.12±0.01 cifar100 0.90 0.87 0.30 0.90 0.88 0.30 dtd 0.93 0.83 0.19±0.01 0.96 0.**93 0**.14 gaussian 0.93±0.01 0.94±0.01 0.16±0.01 0.96±0.02 0.97±0.02 0.11±0.03 lsun 0.93 0.89 0.20 0.95 0.**94 0**.18 places 0.92 0.89 0.21 0.94 0.**93 0**.19 svhn 0.94 0.99 0.16 0.95 0.99 0.14 tiny imagenet 0.91 0.88 0.28 0.92 0.**89 0**.26 uniform 0.98±0.01 0.97±0.01 0.04±0.01 1.00 1.**00 0**.00 bernoulli 0.94±0.01 0.95±0.01 0.11±0.02 1.00 1.**00 0**.00 blobs 0.96 0.98 0.16 0.96 0.98 0.15 cifar100 0.89 0.86 0.34 0.89 0.86 0.34 dtd 0.90 0.77±0.01 0.25±0.01 0.96 0.**92 0**.18 gaussian 0.95 0.97 0.14±0.01 0.99 0.**99 0**.05±0.01 lsun 0.93 0.91 0.24 0.95 0.**94 0**.21 places 0.91 0.90 0.27±0.01 0.94 0.**93 0**.23 svhn 0.87 0.97 0.28±0.01 0.87 0.97 0.27±0.01 tiny imagenet 0.90 0.88 0.33 0.90 0.88 0.32 uniform 0.90±0.02 0.89±0.01 0.16±0.02 1.00 1.**00 0**.00 bernoulli 1.00 1.00 0.00 1.00 1.00 0.00 blobs 0.96 0.97 0.11±0.01 0.**97 0**.98 0.10±0.01 cifar100 0.92 0.89 0.27 0.91 0.89 0.27 dtd 0.93 0.86±0.01 0.22±0.01 0.97 0.**94 0**.14±0.01 gaussian 0.96±0.01 0.97±0.01 0.09±0.01 1.00 1.**00 0**.00 lsun 0.93 0.91 0.20 0.95 0.**95 0**.17±0.01 places 0.93 0.91 0.21 0.95 0.**94 0**.18 svhn 0.96 0.99 0.12 0.95 0.99 0.13±0.01 tiny imagenet 0.92 0.90 0.27 0.93 0.**91 0**.25 uniform 1.00 1.00 0.00 1.00 1.00 0.00 ## C.3.1 Mnist Detailed Results In-domain performance Model selection scheme on MNIST was exactly the same as for the CIFAR10/100. We illustrate the relationship between λM, accuracy, and the Adaptive Calibration error (ACE) in Figure C2. One can see that λM remains the same even when M is increasing. Table C7 shows the detailed in-domain performance for the best λM, equal to 0.1. Out of distribution detection The final OOD benchmark results on MNIST are summarized in Table C8. One can see that on MNIST, our method yields a performance boost for any size of an ensemble, even for the very small ones (M = 3). ## C.4 Additional Results Computational overhead As noted in the main section, our method has computation overhead compared to DE, which is also a parallel method, allowing to get the results computed in O(1) time (in the number of | Architecture | OOD dataset | DE | Ours | | | | | |-----------------|---------------|-----------|-----------|-----------|-----------|-----------|------| | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | | | | PreResNet164 | bernoulli | 0.81±0.03 | 0.82±0.03 | 0.24±0.04 | 1.00 | 1.00 | 0.00 | | blobs | 0.92±0.01 | 0.95 | 0.25±0.02 | 0.92±0.02 | 0.95±0.01 | 0.23±0.03 | | | cifar10 | 0.80 | 0.76 | 0.57 | 0.78±0.01 | 0.75 | 0.67±0.02 | | | dtd | 0.76±0.01 | 0.61±0.01 | 0.64±0.01 | 0.80±0.01 | 0.75±0.01 | 0.78±0.05 | | | gaussian | 0.80±0.01 | 0.83±0.01 | 0.37±0.02 | 0.95±0.02 | 0.97±0.01 | 0.16±0.05 | | | lsun | 0.86 | 0.81 | 0.45 | 0.87 | 0.85 | 0.47±0.01 | | | places | 0.82 | 0.77 | 0.52 | 0.83 | 0.81 | 0.60±0.03 | | | svhn | 0.80±0.01 | 0.96 | 0.55±0.01 | 0.83±0.01 | 0.96 | 0.50±0.01 | | | tiny imagenet | 0.82 | 0.79 | 0.53 | 0.82 | 0.79 | 0.58±0.01 | | | uniform | 0.51±0.09 | 0.66±0.05 | 0.55±0.09 | 1.00 | 1.00 | 0.00 | | | VGG16BN | bernoulli | 0.87±0.02 | 0.88±0.02 | 0.24±0.03 | 1.00 | 1.00 | 0.00 | | blobs | 0.95 | 0.97 | 0.16±0.01 | 0.97 | 0.98 | 0.12±0.02 | | | cifar10 | 0.78 | 0.73 | 0.63 | 0.78 | 0.74 | 0.64±0.01 | | | dtd | 0.73 | 0.53 | 0.62±0.01 | 0.86 | 0.80±0.01 | 0.50±0.01 | | | gaussian | 0.87±0.02 | 0.90±0.01 | 0.35±0.04 | 0.95±0.01 | 0.97±0.01 | 0.15±0.03 | | | lsun | 0.85 | 0.82 | 0.47 | 0.90 | 0.89 | 0.40 | | | places | 0.82 | 0.78 | 0.55 | 0.86 | 0.85 | 0.51 | | | svhn | 0.76±0.01 | 0.95 | 0.67±0.02 | 0.76±0.01 | 0.95 | 0.72±0.04 | | | tiny imagenet | 0.81 | 0.78 | 0.56 | 0.83 | 0.79 | 0.55 | | | uniform | 0.86±0.02 | 0.89±0.02 | 0.28±0.04 | 1.00 | 1.00 | 0.00 | | | WideResNet28x10 | bernoulli | 0.97±0.02 | 0.96±0.02 | 0.04±0.02 | 1.00 | 1.00 | 0.00 | | blobs | 0.95 | 0.97 | 0.16±0.01 | 0.98±0.01 | 0.99 | 0.09±0.02 | | | cifar10 | 0.80 | 0.74 | 0.53 | 0.81 | 0.76 | 0.54±0.01 | | | dtd | 0.84±0.01 | 0.75±0.01 | 0.54±0.01 | 0.88±0.01 | 0.84±0.01 | 0.49±0.02 | | | gaussian | 0.72±0.08 | 0.78±0.05 | 0.39±0.09 | 0.94±0.03 | 0.97±0.01 | 0.20±0.07 | | | lsun | 0.87 | 0.81±0.01 | 0.35 | 0.91 | 0.90±0.01 | 0.32±0.01 | | | places | 0.84 | 0.79±0.01 | 0.44±0.01 | 0.88 | 0.87±0.01 | 0.41±0.01 | | | svhn | 0.80 | 0.95 | 0.52±0.01 | 0.80±0.01 | 0.95 | 0.52±0.01 | | | tiny imagenet | 0.84 | 0.78 | 0.46 | 0.85 | 0.81 | 0.46 | | | uniform | 0.95±0.01 | 0.96±0.01 | 0.12±0.03 | 1.00 | 1.00 | 0.00 | | Table C5: CIFAR100 results. We report mean and standard error over 5 random seeds for each of the models. Standard errors are reported if they are more than 0.01 across runs. DE indicates Deep Ensembles. ![23_image_0.png](23_image_0.png) $\left(33\right)^{2}$ Figure C2: Relationship between accuracy, negative log-likelihood, ACE, and λM for different M on MNIST (LeCun et al., 1998). Subplots (a) and (b) show the results for PreResNet8. Experiments were re-run 3 times with different seeds. We found that the best results were obtained with λM = 0.1. models). Table C9 shows the exact walltime (WT) comparisons and the estimated computational overhead computed as $$O v e r h e a d=\left({\frac{W T_{O u r s}}{W T_{D E}}}-1\right)\times100\%.$$ × 100%. (33) | Architecture | OOD dataset | DE | Ours | | | | | |-----------------|---------------|--------------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|------|------| | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | | | | PreResNet164 | bernoulli | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | blobs | 1.00 | 0.99 | 0.01 | 0.99 | 0.98 | 0.02 | | | cifar10 | 0.99 | 0.97 | 0.02 | 0.99 | 0.97 | 0.02 | | | cifar100 | 0.99 | 0.96 | 0.02 | 0.99 | 0.96 | 0.04 | | | dtd | 0.99 | 0.94 | 0.02 | 1.00 | 0.98 | 0.01 | | | gaussian | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | | lsun | 0.99 | 0.96 | 0.02 | 1.00 | 0.98 | 0.01 | | | places | 0.99 | 0.96 | 0.02 | 1.00 | 0.98 | 0.01 | | | tiny imagenet | 0.99 | 0.96 | 0.02 | 1.00 | 0.97 | 0.02 | | | uniform | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | | VGG16BN | bernoulli | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | blobs | 1.00 | 0.98 | 0.01 | 1.00 | 0.99 | 0.01 | | | cifar10 | 0.99 | 0.95 | 0.02 | 0.99 | 0.96 | 0.03 | | | cifar100 | 0.99 | 0.94 | 0.02 | 0.99 | 0.95 | 0.05 | | | dtd | 0.99 | 0.93 | 0.02 | 1.00 | 0.97 | 0.01 | | | gaussian | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | | lsun | 0.99 | 0.95 | 0.02 | 1.00 | 0.99 | 0.01 | | | places | 0.99 | 0.96 | 0.02 | 1.00 | 0.98 | 0.01 | | | tiny imagenet | 0.99 | 0.95 | 0.02 | 0.99 | 0.97 | 0.03 | | | uniform | 1.00 | 0.99 | 0.01 | 1.00 | 1.00 | 0.00 | | | WideResNet28x10 | bernoulli | 1.00 | 0.99±0.01 | 0.02±0.01 | 1.00 | 1.00 | 0.00 | | blobs | 0.99 | 0.98 | 0.02 | 1.00 | 0.99 | 0.01 | | | cifar10 | 0.99 | 0.96 | 0.02 | 1.00 | 0.97 | 0.02 | | | cifar100 | 0.99 | 0.95 | 0.03 | 0.99 | 0.97 | 0.02 | | | dtd | 0.99 | 0.91±0.01 | 0.04 | 1.00 | 0.99 | 0.00 | | | gaussian | 1.00 | 0.98 | 0.02 | 1.00 | 1.00 | 0.00 | | | lsun | 0.99 | 0.95 | 0.03 | 1.00 | 0.99 | 0.00 | | | places | 0.99 | 0.95 | 0.03 | 1.00 | 0.99 | 0.00 | | | tiny imagenet | 0.99 | 0.96 | 0.02 | 1.00 | 0.98 | 0.01 | | | uniform | 0.99 | 0.98±0.01 | 0.03±0.01 | 1.00 | 1.00 | 0.00 | | | Size | Method | Accuracy (%) | NLL ×100 | ECE (%) | | | | | 3 | DE | 99.44±0.02 | 1.72±0.03 | 0.23±0.01 | | | | | Ours | 99.43±0.03 | 1.74±0.07 | 0.24±0.04 | Table C7: Test set results (in-domain performance) of our method on PreResNet8 trained on MNIST with different ensemble sizes M. We report the means over 3 random seeds for each of the models. λM = 0.1 Was used for all the experiments when training our method. Standard errors are reported if they are more than 0.01 across runs. DE indicates Deep Ensembles. | | | | | 5 | DE | 99.43±0.01 | 1.65±0.04 | 0.22±0.01 | | | | | Ours | 99.46±0.01 | 1.72±0.02 | 0.32±0.01 | | | | | | 7 | DE | 99.44±0.01 | 1.61±0.03 | 0.25±0.01 | | | | | Ours | 99.41±0.01 | 1.66±0.06 | 0.28 | | | | | | 9 | DE | 99.44±0.01 | 1.65±0.04 | 0.28±0.01 | | | | | Ours | 99.47±0.01 | 1.62±0.03 | 0.31±0.01 | | | | | | 15 | DE | 99.47±0.01 | 1.60±0.03 | 0.29±0.01 | | | | | Ours | 99.49 | 1.57±0.03 | 0.30±0.02 | | | | | Table C6: SVHN results (averaged across 5 seeds) Does the choice weighting distribution for the diversity term affect the results? We considered PreResNet164 trained on CIFAR100 with CIFAR10 as a weighting distribution dataset. This approach can be seen as a form of outlier exposure technique, earlier proposed by Hendrycks et al. (2019). The results on OOD benchmark (excluding CIFAR10) are shown in Table C10. One can observe that the relationship between changing to a dataset of real images and the OOD detection performance is non-trivial. While for some datasets outlier exposure did bring benefit, for some other datasets, it did not. Notably, the datasets on Table C8: Out-of-distribution detection results of our method with PreResNet8 trained on MNIST with different ensemble sizes M. We report the means over 3 random seeds for each of the models. λM = 0.1 Was used for all the experiments when training our method. Standard errors are reported if they are more than 0.01 across runs. DE indicates Deep Ensembles. Size OOD dataset **DE Ours** AUC (↑) AP (↑) FPR95 (↓) AUC (↑) AP (↑**) FPR95 (**↓) bernoulli 0.09±0.05 0.52±0.01 0.98 1.00 1.**00 0**.00 dtd 0.41±0.16 0.47±0.13 0.97±0.01 1.00 0.**99 0**.00 fashion mnist 0.89±0.03 0.92±0.02 0.71±0.19 0.99±0.01 0.**99 0**.07±0.03 gaussian 0.98±0.01 0.99±0.01 0.06±0.02 0.99±0.01 1.00 0.03±0.02 omniglot 0.98 0.98 0.08 0.98 0.98 0.08 uniform 0.23±0.17 0.58±0.07 0.90±0.06 1.00 1.**00 0**.00 | 3 5 7 9 15 | |--------------| bernoulli 0.02 0.50 0.98 1.00 1.**00 0**.00 dtd 0.48±0.17 0.55±0.14 0.94±0.03 1.00 1.**00 0**.00 fashion mnist 0.87±0.05 0.92±0.03 0.67±0.24 0.99 0.**99 0**.04±0.01 gaussian 1.00 1.00 0.02±0.01 1.00 1.00 0.01±0.01 omniglot 0.98 0.99 0.06 0.98 0.99 0.07 uniform 0.30±0.22 0.62±0.10 0.77±0.18 1.00 1.**00 0**.00 bernoulli 0.33±0.24 0.66±0.13 0.77±0.18 1.00 1.**00 0**.00 dtd 0.75±0.12 0.76±0.11 0.67±0.25 1.00 1.**00 0**.00 fashion mnist 0.95±0.02 0.97±0.02 0.29±0.18 0.99 0.99 0.05±0.01 gaussian 1.00 1.00 0.02±0.01 1.00 1.00 0.02±0.01 omniglot 0.99 0.99 0.06 0.99 0.99 0.06±0.01 uniform 0.49±0.22 0.71±0.12 0.64±0.26 1.00 1.**00 0**.00 bernoulli 0.45±0.17 0.70±0.08 0.90±0.05 1.00 1.**00 0**.00 dtd 0.83±0.06 0.83±0.06 0.68±0.23 1.00 1.**00 0**.00 fashion mnist 0.97±0.01 0.97 0.16±0.04 1.00 1.**00 0**.01 gaussian 1.00 1.00 0.01±0.01 1.00 1.00 0.00 omniglot 0.99 0.99 0.06 0.99 0.99 0.06 uniform 0.72±0.15 0.82±0.09 0.50±0.21 1.00 1.**00 0**.00 bernoulli 0.18±0.06 0.55±0.02 0.99±0.01 1.00 1.**00 0**.00 dtd 0.82±0.04 0.80±0.04 0.84±0.09 1.00 1.**00 0**.00 fashion mnist 0.97 0.97 0.17±0.03 1.00 1.**00 0**.01 gaussian 1.00 1.00 0.01±0.01 1.00 1.00 0.01±0.01 omniglot 0.99 0.99 0.05 0.99 0.99 0.05 uniform 0.70±0.13 0.80±0.07 0.53±0.19 1.00 1.**00 0**.00 which the benefit of outlier exposure was the best were datasets of real images, and we think that in case unlabeled images with non-overlapping class distribution are available, using them can bring benefit. Effect of model capacity While the main experiments in the paper were conducted using large models, we also investigated whether ensembles of smaller models can benefit from our method. We followed the same λM selection procedure as for the main experiments in the paper, and report results for PreResNet20 in Table C11 for an ensemble of size M = 11. We found that the optimal λM in this case is smaller compared | Architecture | Walltime (hours) | Overhead | | |-----------------|--------------------|------------|--------| | DE | Ours | | | | PreResNet164 | 18.81±0.18 | 32.81±0.25 | 74.39% | | VGG16BN | 2.11±0.01 | 4.00±0.03 | 89.12% | | WideResNet28x10 | 23.23±0.07 | 43.92±0.25 | 89.03% | Table C9: Walltime comparisons of Deep Ensembles and our method for M = 11 on CIFAR10 for 3 architectures. Both methods have been trained on 1 GPU for fair estimation of the computational overhead. The results reported over five runs with different random seeds. Table C10: Outlier exposure results for PreResNet164 trained on CIFAR100 with CIFAR10 as a weighting distribution for the diversity term. We found λM = 1 to be the best one for this experiment in the outlier exposure. The most optimal setting for our model was λM = 3 and λM = 5, respectively. | OOD dataset | | DE | Ours | | | Ours w. Outlier Exposure | | | | |---------------|-----------|-----------|-----------|-----------|-----------|----------------------------|-----------|-----------|-----------| | | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | | bernoulli | 0.81±0.03 | 0.82±0.03 | 0.24±0.04 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 0.00 | | blobs | 0.92±0.01 | 0.95 | 0.25±0.02 | 0.92±0.02 | 0.95±0.01 | 0.23±0.03 | 0.96±0.01 | 0.98 | 0.14±0.01 | | dtd | 0.76±0.01 | 0.61±0.01 | 0.64±0.01 | 0.80±0.01 | 0.75±0.01 | 0.78±0.05 | 0.81±0.01 | 0.71±0.01 | 0.57 | | gaussian | 0.80±0.01 | 0.83±0.01 | 0.37±0.02 | 0.95±0.02 | 0.97±0.01 | 0.16±0.05 | 0.94±0.01 | 0.96±0.01 | 0.19±0.03 | | lsun | 0.86 | 0.81 | 0.45 | 0.87 | 0.85 | 0.47±0.01 | 0.89 | 0.88±0.01 | 0.40±0.01 | | places | 0.82 | 0.77 | 0.52 | 0.83 | 0.81 | 0.60±0.03 | 0.86 | 0.84±0.01 | 0.49±0.01 | | svhn | 0.80±0.01 | 0.96 | 0.55±0.01 | 0.83±0.01 | 0.96 | 0.50±0.01 | 0.78±0.01 | 0.95 | 0.56±0.02 | | tiny imagenet | 0.82 | 0.79 | 0.53 | 0.82 | 0.79 | 0.58±0.01 | 0.83 | 0.79 | 0.52 | | uniform | 0.51±0.09 | 0.66±0.05 | 0.55±0.09 | 1.00 | 1.00 | 0.00 | 0.98±0.01 | 0.98±0.02 | 0.05±0.02 | to λM for PreResNet164, and our method here does not yield any substantial boost over Deep Ensembles on these data. We found that all the models with small capacity trained on CIFAR diverged with high λM | Dataset | OOD dataset | DE | Ours | | | | | |---------------|---------------|-----------|-----------|-----------|-----------|-----------|------| | AUC (↑) | AP (↑) | FPR95 (↓) | AUC (↑) | AP (↑) | FPR95 (↓) | | | | C10 | bernoulli | 0.99±0.01 | 0.99±0.01 | 0.05±0.02 | 1.00 | 1.00 | 0.00 | | blobs | 0.97 | 0.98 | 0.12 | 0.97 | 0.98 | 0.12±0.01 | | | cifar100 | 0.88 | 0.86 | 0.37 | 0.89 | 0.86 | 0.36 | | | dtd | 0.93 | 0.86±0.01 | 0.22±0.01 | 0.94±0.01 | 0.89±0.01 | 0.20±0.01 | | | gaussian | 0.94±0.01 | 0.96±0.01 | 0.18±0.03 | 0.95±0.01 | 0.97±0.01 | 0.15±0.03 | | | lsun | 0.93 | 0.92 | 0.25 | 0.94 | 0.93 | 0.23±0.01 | | | places | 0.92 | 0.90 | 0.27 | 0.93 | 0.91 | 0.26 | | | svhn | 0.91±0.01 | 0.98 | 0.23±0.01 | 0.91 | 0.98 | 0.23±0.01 | | | tiny imagenet | 0.90 | 0.88 | 0.33 | 0.90 | 0.88 | 0.32 | | | uniform | 0.99 | 0.99 | 0.03±0.01 | 1.00 | 1.00 | 0.00 | | | C100 | bernoulli | 0.91±0.05 | 0.93±0.04 | 0.19±0.08 | 1.00 | 1.00 | 0.00 | | blobs | 0.95 | 0.97 | 0.17±0.01 | 0.97 | 0.98 | 0.12 | | | cifar10 | 0.78 | 0.73 | 0.63 | 0.77 | 0.72 | 0.63 | | | dtd | 0.81±0.01 | 0.72±0.01 | 0.66±0.03 | 0.83±0.01 | 0.77±0.01 | 0.63±0.03 | | | gaussian | 0.98±0.01 | 0.99 | 0.07±0.01 | 0.99 | 0.99 | 0.04±0.01 | | | lsun | 0.88 | 0.87 | 0.43±0.01 | 0.89 | 0.88 | 0.41±0.01 | | | places | 0.84 | 0.82 | 0.55±0.01 | 0.85 | 0.83 | 0.52±0.01 | | | svhn | 0.85 | 0.97 | 0.48±0.01 | 0.84±0.01 | 0.97 | 0.49±0.02 | | | tiny imagenet | 0.82 | 0.78 | 0.56 | 0.82 | 0.79 | 0.54 | | | uniform | 0.90±0.03 | 0.92±0.02 | 0.23±0.07 | 1.00 | 1.00 | 0.00 | | Table C11: Out of distribution detection for a small-capacity model - PreResNet20 (M = 11). Results were averaged over 5 random seeds. Standard errors are not reported if they less than 0.01 across runs. DE indicates Deep Ensembles. Robustness to distribution shift As an additional evaluation, we investigated whether our method performs on par with DE under the distribution shift, to make sure that introduction of additional regularization did not affect the robustness properties. We thus use a corrupted version of CIFAR10 test set released by (Hendrycks & Dietterich, 2019), and it has been recently shown that DE outperform many other methods on this benchmark (Ovadia et al., 2019). Here, we report the results for VGG16BN and PreResNet164, as they yielded OOD performance gain in both SVHN and LSUN datasets. One can see from Figure C3 that our method performs on-par with DE, as no statistical significance in difference between methods can be concluded from this plot. This further supports our claims that the developed greedy ensemble training approach works the same or on par with DE. Effect of ensemble size on CIFAR We ran our experiments using PreResNet164 with on CIFAR10/100, having M ∈ {3, 5, 7} and λM{0.1, 0.5, 0.8, 1, 1.5, 2, 3}. Both in-domain accuracy, and the OOD detection on ![27_image_0.png](27_image_0.png) Figure C3: CIFAR10 Robustness benchmark results (M = 11) (Hendrycks & Dietterich, 2019). Subplots (a) and (b) show the results for PreResNet164 trained with λM = 3. Subplots (c) and (d) show the results for VGG16BN trained with λM = 5. The results have been averaged over 5 seeds. LSUN and SVHN are shown in Table C12. The results in that table show that with small ensemble size, our method may yield better performance than DE on SVHN, abut even with an ensemble of size M = 3, it has a substantial boost over DE in detecting LSUN. | M | Dataset | Method | Accuracy (%) | NLL ×100 | ACE (%) | SVHN | LSUN | | | |------|--------------|------------|----------------|------------|-----------|-----------|-----------|------|-----------| | | AUC (↑) | AP (↑) | AUC (↑) | AP (↑) | | | | | | | C10 | DE | 95.38±0.04 | 15.59±0.14 | 0.25±0.01 | 0.93 | 0.95 | 0.92 | 0.87 | | | Ours | 95.32±0.04 | 15.67±0.11 | 0.24±0.01 | 0.94±0.01 | 0.96±0.01 | 0.94 | 0.91±0.01 | | | | 3 | C100 | DE | 78.67±0.04 | 84.35±0.19 | 0.10 | 0.78±0.02 | 0.87±0.01 | 0.82 | 0.76±0.01 | | Ours | 78.51±0.16 | 84.55±0.76 | 0.10 | 0.76±0.02 | 0.86±0.01 | 0.82±0.02 | 0.76±0.02 | | | | C10 | DE | 95.55±0.03 | 14.14±0.14 | 0.20±0.01 | 0.93 | 0.96 | 0.92 | 0.88 | | | Ours | 95.58±0.04 | 14.31±0.07 | 0.19 | 0.94 | 0.96 | 0.95 | 0.93 | | | | 5 | C100 | DE | 79.50±0.08 | 78.85±0.28 | 0.09 | 0.78±0.01 | 0.87±0.01 | 0.84 | 0.79 | | Ours | 79.33 ± 0.13 | 78.59±0.33 | 0.09 | 0.80±0.01 | 0.88 | 0.86 | 0.82±0.01 | | | | C10 | DE | 95.64±0.04 | 13.79±0.06 | 0.19 | 0.94 | 0.96 | 0.92 | 0.88 | | | Ours | 95.62±0.04 | 14.07±0.19 | 0.22±0.04 | 0.94 | 0.96 | 0.94 | 0.93±0.01 | | | | 7 | C100 | DE | 79.81±0.06 | 76.12±0.22 | 0.08 | 0.78±0.01 | 0.87±0.01 | 0.85 | 0.79 | | Ours | 79.76±0.08 | 78.09±1.41 | 0.11±0.02 | 0.78±0.01 | 0.88±0.01 | 0.86 | 0.83±0.01 | | | Table C12: Test set and OOD detection performances on PreResNet164 for ensemble sizes M ∈ {3, 5, 7}. We report the means over 5 different seeds. Standard errors are reported if they are non-zero across the runs. Qualitative results on CIFAR100. In Figure C4, we further illustrate the capabilities of uncertainty estimation of our method for the PreResNet164 trained on CIFAR100 (M = 11). Our method has a better true positive rate (as can also be seen from the histograms), and slightly better precision when the threshold for the recall is high. We note that the histograms of epistemic uncertainties still overlap significantly, however, ![28_image_0.png](28_image_0.png) with our method, the model does not have a high number of overconfident predictions coming from the OOD data anymore. Figure C4: Uncertainty estimation quality on PreResNet164 (He et al., 2016) trained on CIFAR100 and evaluated on CIFAR100 test set vs LSUN and SVHN, respectively. Histograms (a) and (e) indicate Deep Ensembles (Lakshminarayanan et al., 2017). Histograms (b) and (f) show our method trained with λM = 5. Subplots (c) and (d) show the ROC and PR curves for the LSUN dataset, respectively. Subplots (g) and (h) show the ROC and PR curves for the SVHN dataset, respectively The curves were computed using average epistemic uncertainty per sample (5 seeds). Standard errors were < 0.01.
Review 1: Summary: Consider the deep ensembles approach to predictive uncertainty estimation. This paper proposes a heuristic greedy algorithm that trains several neural networks via minimizing the averaged empirical risk with a diversity-promoting regularizer. The proposed algorithm is inspired by the observation that any f-divergence between a probability distribution and normalized mixture of kernels is supermodular (Theorem 1) and the forward greedy selection algorithm in submodular optimization. Strengths and Weaknesses: ==== Strengths ==== 1. The combinatorial perspective is fresh. 2. Theorem 1 and its proof are interesting. Theorem 1 regarding the supermodularity of f-divergences is fundamental yet seems to be novel. The proof exploits a characterization of convexity I did not know before. The paper highlights that the algorithm regards the function space instead of the parameter space. But the proposed solution ((15) with the explanations above it) looks naive. ==== Weakness ==== Regarding the idea, I have the following confusions. 1. There is a huge gap between the combinatorial insight and the actual implementation. Algorithm 1, the forward greedy selection algorithm, requires a finite ground set as the input. What is the analogue of the finite ground set in the deep ensembles case? 2. Why the sampling procedure in Line 7 & 8 of the forward greedy selection algorithm corresponds to training a neural network with random initialization needs more explanations. 3. A derivation of (14) should be provided. The presentation needs to be improved. 1. To me, a theoretician unfamiliar with Bayesian deep learning and submodular optimization, the presentation is difficult to digest. Below are some examples. In particular, terms like "predictive uncertainty", "deep ensembles", and "mode seeking" appear without definitions, forbidding non-experts to quickly digest this paper. 2. The mathematical writing is imprecise and careless. There are a lot of instances, such as: - In (1), the constraint $Z \subset \mathcal{F}$ is missing. - In (4), the set on which the integration is done is missing. - In the definition of $q_M (z)$, the subscript should be $m$ instead of $j$. - In the definition of $C$, the subscript $f$ of $D$ is missing. - Proof of Theorem 1: The condition that $a, b > 0$ is missing. - (22): It should be that the function is *convex on $[a, b]$* instead of that for any input in $[a, b]$ the function is convex. - Line following (22): $\frac{1}{\alpha} y$ is a quantity instead of a mapping. - (24): The word "convex" is missing on the left-hand side. 3. The proposed algorithm also looks imprecise and careless. - The definitions of "set_seed" and "random_init" are not given. - In Line 9, $\theta_m$ is chosen randomly (my guess), but then in Line 10, it is replaced by the solution of an optimization problem. What is the point of randomly choosing $\theta_m$ then? - I guess what Line 9 & 10 want to convey is to train the neural network with random initialization, but I am not sure. - It is an abuse of notation to write $\theta_m$ for both the solution of the optimization problem and the variable to be optimized. 3. Discussion on using the randomized initialization heuristic should not appear in Section 3.2, as that section only considers the problem of minimizing an f-divergence. The discussion should be moved to Section 4. 4. The proposed algorithm (Algorithm B2) should appear in the main text instead of the appendix. Indeed, Section 4 gives the proposed algorithm, but it looks too terse. Requested Changes: Please address the weaknesses above. Typos: - First line of A.1: proposition -> theorem. - Line following (4): *where* $K_j ( z )$ ... - p. 4: A word is missing between "however, " and "that in practice." - Proof of Proposition 1 on p. 5: *in* Appendix A.2. - Line above (11): simplifies -> becomes. (There is nothing simplified.) - Line above (14): at *the* $k^{\text{th}}$ - Line above (25): *the* set function Broader Impact Concerns: None. ================================================== Review 2: Summary: This paper presents a new angle on the question of Bayesian interpretation of deep ensembles by deriving and proposing a new approach that considers minimising an f-divergence between the true posterior and a function space KDE of the ensemble of fixed cardinality (ie number of members of the ensemble). Several approximations are derived to maintain computational tractability, including a greedy construction as well as using a diversity term that is approximated via MC integration on a weighting distribution p* on inputs (which is just Gaussian in input space). Experiments are provided comparing the new method to standard deep ensembles (Lakshminarayanan et al 2017) demonstrating improved performance in terms of uncertainty quantification and ood detection, both on two moons and on image datasets. Strengths and Weaknesses: Strengths: 1. The use of general f-divergence as an optimisation algorithm for an ensemble of Ns is novel as far as I am aware and should be of interest to some in the TMLR community. 2. The authors provide some theoretical analysis to support their claims (e.g. Theorem 1 and Proposition 1), though see weaknesses below for some comments on the relevance of the theory. 3. The experiments seem to demonstrate benefits of the proposed method in terms of uncertainty quantification relative to deep ensembles. 4. The related work is largely thorough (though see requested changes for some missed works). 5. The authors provide a limitations section. Weaknesses: 1. Clarity: the paper could be improved in terms of writing. Besides typos (in requested changes), there are several claims that are largely unsubstantiated imo that ought to be developed/clarified or accompanied by citations, such as: a. "DE... has arbitrar(il)y bad approxmation guarantees"\ b. "the performance of those methods is not state-of-the-art due to the use of BNN priors"\ c. "it has been shown experimentally that every ensemble member may discover different nodes of the posterior distribution in the function space".\ d. "This algorithm has approximation guarantee of 1/e in general." There are other situations where the writing is detrimental to clarity beyond simple typos. E.g: e. "marginal gain of the total objective" (neither of these two terms are defined yet) in the abstract\ f. p(z|D) is introduced before sec 2.2. before any notation concerning Bayes' rule in sec 4. \ g. likewise, "mean-seeking behaviour" is introduced without real definition just before sec 2.2, what does this mean? I see in sec 3 there is a semi-definition: 'cover the posterior distribution as much as possible', but this is not particularly precise to me. \ h. In line 6 of Alg 1., |T|=M-m right not M?\ i. Equation 12 should it be 1/(k-1) not 1/M?\ j. Equation 14 should it be *negative* marginal gain, so that minimising the objective J leads to maximising marginal gain?\ k. I'm not really sure in the sentence between eq 4 and eq 5 why there is a d in the definition of q_M(z), and also the index is from m not j\ l. "Minimization of (5) is equivalent..." it is not clear what we are minimising over, I take it that it is z_m but you should state that p(z) is fixed (and indeed is the posterior in your case). 2. Experiments on accuracy: it is shown that the proposed methods improve vs Deep Ensembles in terms of uncertainty quantification. However, ensembles also improve accuracy in NNs https://arxiv.org/abs/2012.09816, does the proposed methods also improve accuracy? Moreover are the authors able to provide experiments detailing how the size of the ensemble affects the improvement in uncertainty/accuracy, it seems the experiments provided all set ensemble size M=11. 3. Motivation: The motivation for why one would want to take a Bayesian interpretation for deep ensembles is somewhat lacking imo. The authors write: "none of these (other) works takes the perspective of approximation the Bayesian posterior", but the authors do not substantiate this statement for why one would want to do so. Moreover, I'm not sure if I properly understood section 3: is the point of introducing submodular analysis and theorem 1 because minimising f-diverfences benefit from approximation guarantees as a result? If so, this is quite confusion giving the bottom half of page 4 suggests it is difficult to approximate such optimisation problems. If not, then I am unclear what the motivation for section 3 is. 4. The authors write '"the impact of the choice of function space... is therefore unavoidable when designing algorithms for approximation the Bayesian posterior". This is likely true, but the justification for this is only that the *upper bound* in eq (8) could be bad, whereas there is no such statement for the true quanitity we care about. 5. Lack of parallelisability: I commend the authors for pointing this out. However, even the fix that in Ene and Nguyen 2020 requires communication between members of the ensemble trained in parallell for the diversity promoting term, which Deep ensembles do not. However, assuming communcation is possible, this would enable parallel training OTOH, the proposed method here requires the members of ther ensemble to be trained sequentially (i believe, looking at Alg B2), which is worse. Requested Changes: Besides the requested changes above, I have the following more minor suggestions: 1. yeilds -> yields 2. change V -> Z in def 1 for consistency. 3. "We, however, that in practice" needs rewording. 4. Bring the main algorithm used in practice (Alg B.2 i believe) into the main paper, rather than the algorithm that is currently in the main paper which isn't used (Alg. 1). 5. Please provide more justification for the mean-field approximation in Eq 13. I realise it may be convenient computationally, but do the authors have any understanding of how loose it is/settings where it will be tighter? 6. Please cite missed related works on Bayesian interpretations to Deep Ensembles https://openreview.net/forum?id=BJlahxHYDS and https://arxiv.org/abs/2007.05864 Broader Impact Concerns: n/a ================================================== Review 3: Summary: This paper studies the problem of constructing deep neural network ensembles. Unlike prior works that look at ensembles from the perspective of improving predictive accuracy, this papers focuses on recovering the true model posterior distribution. The authors measure the posterior recovery by f-divergence in the functional space. By a combinatorial analysis, the authors showed that the f-divergence is supermodular, which motivates a greedy method for selecting new models into the ensembles. The greedy method is implemented as a diversity-promoting regularization term, that encourages predictive difference on a set of input (the inputs could be sampled from a distribution but this distribution needs to be determined). The author evaluates the proposed ensemble method by out-of-distribution detection evaluation and mainly compares to Deep Ensembles (2017). The proposed method demonstrates advantage over Deep Ensembles on out-of-distribution detection across all datasets that the authors have evaluated on. I found the empirical study to be comprehensive (if we include those in the appendix into cosideration). That said, I note that the proposed method does not provide accuracy improvement over Deep Ensembles. Overall, this submission is stronger on the theory side, providing a new perspective to analyze neural network ensembles although the empirical gains seem less impressive. Strengths and Weaknesses: Strength: - The proposed idea of taking a functional view of the ensembles is novel and technically interesting. - The authors motivates the greedy selection algorithm by an analysis on submodular optimization. I think this makes meaningful efforts toward understanding the guarantees of practical algorithms, which I appreciate. - The paper is well-written and organized in a meaningful way. Weakness: - the weighting function, which essentially defines the diversity term, is heuristically defined and could have important impact on the practical performance. In the current experimental setup, the authors fit “a Gaussian to every data dimension with the variance x5 larger than the original variance”. I question the generalizability of such construction. I would imagine this to work better for regular data such as images, but can it work for audio, or text data? Because of this, I think the applicability of the current practical algorithm is limited. - I appreciate that the authors candidly report the issue with additional computational overhead. Can the authors report concrete wall clock time comparison with other comparable methods? say deep ensemble? The reported experiments are done with a 11 model ensemble. Does this mean a 11x slowdown to the overall run time? If so, this could be quite prohibitive. Nit: - page 4, “we, however, that in practice” → “we, however, believe that”? Requested Changes: - I would suggest moving alg.1 in the Appendix into the main paper to highlight the actual algorithm used for the reported experiments. The current separation makes it quite confusing. Broader Impact Concerns: I do not have any ethical concern with this paper. The proposed idea is general enough to be applied to many different applications. ================================================== Metareview: Recommendation: Accept as is Comment: Reviewers are happy with the submission and recommenced accept. ==================================================
# Augmented Language Models: A Survey | Grégoire Mialon∗ | gmialon@meta.com | |--------------------------|-----------------------| | Roberto Dessì∗† | rdessi@meta.com | | Maria Lomeli∗ | marialomeli@meta.com | | Christoforos Nalmpantis∗ | christoforos@meta.com | | Ram Pasunuru∗ | rpasunuru@meta.com | | Roberta Raileanu∗ | raileanu@meta.com | | Baptiste Rozière∗ | broz@meta.com | | Timo Schick∗ | schick@meta.com | | Jane Dwivedi-Yu∗ | janeyu@meta.com | | Asli Celikyilmaz∗ | aslic@meta.com | | Edouard Grave∗ | egrave@meta.com | | Yann LeCun∗ | yann@meta.com | | Thomas Scialom∗ | tscialom@meta.com | ∗Meta AI †*Universitat Pompeu Fabra* Reviewed on OpenReview: **https://openreview.net/forum?id=jh7wH2AzKK** ## Abstract This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools. The former is defined as decomposing a potentially complex task into simpler subtasks while the latter consists in calling external modules such as a code interpreter. LMs can leverage these augmentations separately or in combination via heuristics, or learn to do so from demonstrations. While adhering to a standard missing tokens prediction objective, such augmented LMs can use various, possibly non-parametric external modules to expand their context processing ability, thus departing from the pure language modeling paradigm. We therefore refer to them as Augmented Language Models (ALMs). The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks and even outperforming most regular LMs on several benchmarks. In this work, after reviewing current advance in ALMs, we conclude that this new research direction has the potential to address common limitations of traditional LMs such as interpretability, consistency, and scalability issues. ## 1 Introduction: Motivation For The Survey And Definitions 1.1 Motivation And Definitions Large Language Models (LLMs) (Devlin et al., 2019; Brown et al., 2020; Chowdhery et al., 2022) have fueled dramatic progress in Natural Language Processing (NLP) and are already core in several products with millions of users, such as the coding assistant Copilot (Chen et al., 2021), Google search engine1 or more recently ChatGPT2. Memorization (Tirumala et al., 2022) combined with compositionality (Zhou et al., 2022) capabilities made LLMs able to execute various tasks such as language understanding or conditional and 1See *e.g.* https://blog.google/products/search/search-language-understanding-bert/ 2https://openai.com/blog/chatgpt/ 1 unconditional text generation at an unprecedented level of performance, thus opening a realistic path towards higher-bandwidth human-computer interactions. However, LLMs suffer from important limitations hindering a broader deployment. LLMs often provide non-factual but seemingly plausible predictions, often referred to as hallucinations (Welleck et al., 2020). This leads to many avoidable mistakes, for example in the context of arithmetics (Qian et al., 2022) or within a reasoning chain (Wei et al., 2022c). Moreover, many LLMs groundbreaking capabilities seem to emerge with size, measured by the number of trainable parameters: for example, Wei et al. (2022b) demonstrate that LLMs become able to perform some BIG-bench tasks3 via few-shot prompting once a certain scale is attained. Although a recent line of work yielded smaller LMs that retain some capabilities from their largest counterpart (Hoffmann et al., 2022), the size and need for data of LLMs can be impractical for training but also maintenance: continual learning for large models remains an open research question (Scialom et al., 2022). Other limitations of LLMs are discussed by Goldberg (2023) in the context of *ChatGPT*, a chatbot built upon *GPT3*. We argue these issues stem from a fundamental defect of LLMs: they are generally trained to perform statistical language modeling given (i) a single parametric model and (ii) a limited context, typically the n previous or surrounding tokens. While n has been growing in recent years thanks to software and hardware innovations, most models still use a relatively small context size compared to the potentially large context needed to always correctly perform language modeling. Hence, massive scale is required to store knowledge that is not present in the context but necessary to perform the task at hand. As a consequence, a growing research trend emerged with the goal to solve these issues, slightly moving away from the pure statistical language modeling paradigm described above. For example, a line of work circumvents the limited context size of LLMs by increasing its relevance: this is done by adding information extracted from relevant external documents. Through equipping LMs with a module that retrieves such documents from a database given a context, it is possible to match certain capabilities of some of the largest LMs while having less parameters (Borgeaud et al., 2022; Izacard et al., 2022). Note that the resulting model is now non-parametric, since it can query external data sources. More generally, LMs can also improve their context via reasoning strategies (Wei et al. (2022c); Taylor et al. (2022); Yang et al. (2022c) *inter alia*) so that a more relevant context is produced in exchange for more computation before generating an answer. Another strategy is to allow LMs to leverage external tools (Press et al. (2022); Gao et al. (2022); Liu et al. (2022b) inter alia) to augment the current context with important missing information that was not contained in the LM's weights. Although most of these works aim to alleviate the downfalls of LMs mentioned above separately, it is straightforward to think that more systematically augmenting LMs with both reasoning and tools may lead to significantly more powerful agents. We will refer to these models as Augmented Language Models (ALMs). As this trend is accelerating, keeping track and understanding the scope of the numerous results becomes arduous. This calls for a taxonomy of ALMs works and definitions of technical terms that are used with sometimes different intents. Definitions. We now provide definitions for terms that will be used throughout the survey. - **Reasoning.** In the context of ALMs, reasoning is decomposing a potentially complex task into simpler subtasks the LM can solve more easily by itself or using tools. There exist various ways to decompose into subtasks, such as recursion or iteration. In that sense, reasoning is akin to planning as defined for example in LeCun (2022). In this survey, reasoning will very often refer to the various strategies to improve reasoning skills in LMs, such as step-by-step reasoning using few-shot examples. It is not yet fully understood whether the LM is really reasoning, or simply producing a larger context that increases the likelihood of correctly predicting the missing tokens. We refer to Huang and Chang (2022) for a discussion on this topic: although reasoning may currently be an abuse of language given the current state of the art, the term is already in use within the community. A more pragmatic definition of reasoning in the context of ALMs is giving more computation steps to the model before yielding the answer to a prompt. 3https://github.com/google/BIG-bench - **Tool.** For ALMs, a tool is an external module that is typically called using a rule or a special token and whose output is included in the ALM's context. The tool can gather external information, or have an effect on the virtual or physical world (generally perceived by the ALM). An example of a tool fetching external information is a document retriever, while a tool having an external effect could be a robotic arm. A tool can be called at training or at inference time. More generally, learning to interact with a tool may consist in learning to call its API. - **Act.** For ALMs, calling a tool that modifies a state in a virtual or physical object, and observing the result, typically by including it in the ALM's current context. For example, some works from the survey discuss searching the web, or robotic arm manipulation via LMs. With a slight abuse of term, we will sometimes denote the call of a tool by an ALM as an action, even if it does not have an external effect. Why jointly discussing reasoning and tools? The combination of reasoning and tools within LMs should allow solving a broad range of complex tasks without heuristics, hence with better generalization capabilities. Typically, reasoning would foster the LM to decompose a given problem into potentially simpler subtasks while tools would help getting each step right, for example obtaining the result from a mathematical operation. Put it differently, reasoning is a way for LMs to combine different tools in order to solve complex tasks, and tools are a way to not fail a reasoning with valid decomposition. Both should benefit from the other. Moreover, reasoning and tools can be put under the same hood, as both augment the context of the LM so that it better predicts the missing tokens, albeit in a different way. Why jointly discussing tools and actions? Tools that gather additional information and tools that have an effect on the virtual or physical world can be called in the same fashion by the LM. For example, there is seemingly no difference between a LM outputting python code for solving a mathematical operation, and a LM outputting python code to manipulate a robotic arm. A few works discussed in the survey are already using LMs that have effects on the virtual or physical world: under this view, we can say that the LM have the potential to act, and expect important advances in the direction of LMs as autonomous agents. ## 1.2 Our Classification We decompose the works included in the survey under three axes. Section 2 studies works which augment LM's reasoning capabilities as defined above. Section 3 focuses on works allowing LMs to interact with external tools and act. Finally, Section 4 explores whether reasoning and tools usage are implemented via heuristics or learned, *e.g.* via supervision or reinforcement. Other axes could naturally have been chosen for this survey and are discussed in Section 5. For conciseness, the survey focuses on works that combine reasoning or tools with LMs. However, the reader should keep in mind that many of these techniques were originally introduced in another context than LMs, and consult the introduction and related work section of the papers we mention if needed. Finally, although we focus on LLMs, not all works we consider employ large models, hence we stick to LMs for correctness in the remainder of the survey. ## 2 Reasoning In general, reasoning is the ability to make inferences using evidence and logic. Reasoning can be divided into multiple types of skills such as commonsense reasoning (McCarthy et al., 1960; Levesque et al., 2012), mathematical reasoning (Cobbe et al., 2021), symbolic reasoning (Wei et al., 2022c), etc. Often, reasoning involves deductions from inference chains, called as multi-step reasoning. In the context of LMs, we will use the definition of reasoning provided in Section 1. Previous work has shown that LLMs can solve simple reasoning problems but fail at complex reasoning (Creswell et al., 2022): hence, this section focuses on various strategies to augment LM's reasoning skills. One of the challenges with complex reasoning problems for LMs is to correctly obtain the solution by composing the correct answers predicted by it to the sub-problems. For example, a LM may correctly predict the dates of birth and death of a celebrity, but may not correctly predict the age. Press et al. (2022) call this discrepancy the compositionality gap for LMs. For the rest of this section, we discuss the works related to three popular paradigms for eliciting reasoning in LMs. Note that Huang and Chang (2022) propose a survey on reasoning in language models. Qiao et al. (2022) also propose a survey on reasoning albeit with a focus on prompting. Since our present work focuses on reasoning combined with tools, we refer the reader to Huang and Chang (2022); Qiao et al. (2022) for a more in-depth review of works on reasoning for LLMs. ## 2.1 Eliciting Reasoning With Prompting In recent years, prompting LMs to solve various downstream tasks has become a dominant paradigm (Brown et al., 2020). In prompting, examples from a downstream task are transformed such that they are formulated as a language modeling problem. Prompting typically takes one of the two forms: zero-shot, where the model is directly prompted with a test example's input; and few-shot, where few examples of a task are prepended along with a test example's input. This few-shot prompting is also known as in-context learning or few-shot learning. As opposed to "naive" prompting that requires an input to be directly followed by the output/answer, elicitive prompts encourage LMs to solve tasks by following intermediate steps before predicting the output/answer. While Nye et al. (2021) provides the first example of few-shot prompting LLMs with reasoning examples and Cobbe et al. (2021) generalizes the use of reasoning examples to non-algorithmic tasks, Wei et al. (2022c) extensively studies how elicitive prompting enables LMs to be better reasoners in a few-shot setting. Later, Kojima et al. (2022) showed similar ability in a zero-shot setting. We discuss them in detail in the following paragraphs. Few-shot setting. Wei et al. (2022c) popularized chain-of-thought (CoT), a few-shot prompting technique for LMs. The prompt consists of examples of a task, with inputs followed by intermediate reasoning steps leading to the final output, as depicted in Figure 1. Table 1 shows that CoT outperforms standard prompting methods. Wei et al. (2022b) observe that the success of the few-shot strategy emerges with scale, while Tay et al. (2022) add that without fine-tuning, successful use of CoT generally requires 100B+ parameters LMs such as *LaMDA* (Thoppilan et al., 2022), *PaLM* (Chowdhery et al., 2022) or *GPT3* (Brown et al., 2020; Ouyang et al., 2022), before proposing UL2, a 20B open source model that can perform CoT. Using few-shot CoT prompting, *Minerva* (Lewkowycz et al., 2022) achieves excellent performance on math benchmarks such as GSM8K (Cobbe et al., 2021). Wang et al. (2022c) further improve CoT with *Self-consistency*: diverse reasoning paths are sampled from a given language model using CoT, and the most consistent answer is selected as the final answer. Press et al. (2022) introduce *Self-ask*, a prompt in the spirit of CoT. Instead of providing the model with a continuous chain of thought as in Figure 1, *Self-ask* explicitly states the follow-up question before answering it and relies on a scaffold (e.g, "Follow-up question:" or *"So the final answer is:"*), so that the answers are more easily parseable. The authors demonstrate an improvement over CoT on their introduced datasets aiming at measuring the compositionality gap. They observe that this gap does not narrow when increasing the size of the model. Note that Press et al. (2022) focus on 2-hop questions, *i.e.*, questions for which the model only needs to compose two facts to obtain the answer. Interestingly, *Self-ask* can easily be augmented with a search engine (see Section 3). *ReAct* (Yao et al., 2022b) is another few-shot prompting approach eliciting reasoning that can query three tools throughout the reasoning steps: search and lookup in Wikipedia, and finish to return the answer. *ReAct* will be discussed in more detail in the next sections. Zero-shot setting. Kojima et al. (2022) extend the idea of eliciting reasoning in LMs to zero-shot prompting. Whereas few-shot provides examples of the task at hand, zero-shot conditions the LM on a single prompt that is not an example. Here, Kojima et al. (2022) simply append *Let's think step by step* to the input question before querying the model (see Figure 2), and demonstrate that zero-shot-CoT for large LMs does well on reasoning tasks such as GSM8K although not as much as few-shot-CoT. ## 2.2 Recursive Prompting Several works attempt to improve LM's reasoning by explicitly decomposing problems into sub-problems in order to solve the problem in a divide and conquer manner. This recursive approach slightly differs from CoT since the latter does not explicitly formulate sub-problems, and can be especially useful for complex tasks, given that compositional generalization can be challenging for LMs (Lake and Baroni, 2018; Keysers et al., Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? Answer: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11. Question: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have? Answer: <LM> Figure 1: An example of few-shot Chain-of-Thought prompt. **<LM>** denotes call to the LM with the above prompt. Question: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have? Answer: Let's think step by step <LM> Figure 2: An example of zero-shot Chain-of-Thought prompt. **<LM>** denotes call to the LM with the above prompt. 2019; Li et al., 2022a). Methods that employ problem decomposition can either then solve the sub-problems independently, where these answers are aggregated to generate the final answer (Perez et al., 2020; Min et al., 2019), or solve the sub-problems sequentially, where the solution to the next sub-problem depends on the answer to the previous ones (Yang et al., 2022a; Zhou et al., 2022; Drozdov et al., 2022; Dua et al., 2022; Khot et al., 2022; Wang et al., 2022a; Wu et al., 2022b; Mishra and Nouri, 2022). For instance, in the context of math problems, *Least-to-most* prompting (Zhou et al., 2022) allows a language model to solve harder problems than the demonstration examples by decomposing a complex problem into a list of sub-problems. It first employs few-shot prompting to decompose the complex problem into sub-problems, before sequentially solving the extracted sub-problems, using the solution to the previous sub-problems to answer the next one. Patel et al. (2022) show that modifying existing benchmarks by letting human decompose questions into subquestions that are relatively easier for models to solve leads to significant improvement for *GPT3* and RoBERTa-SQuAD equipped with a symbolic calculator. While many earlier works include learning to decompose through distant supervision (Perez et al., 2020; Talmor and Berant, 2018; Min et al., 2019), like Zhou et al. (2022), many recent works employ in-context learning to do so (Yang et al., 2022a; Khot et al., 2022; Dua et al., 2022; Kazemi et al., 2022). Among these, there are further differences. For instance, Drozdov et al. (2022) is a follow-up work to Zhou et al. (2022), but differs by using a series of prompts to perform recursive syntactic parses of the input rather than a linear decomposition, and also differs by choosing the exemplars automatically through various heuristics. Dua et al. (2022) is concurrent work with Zhou et al. (2022) but differs by interweaving the question decomposition and answering stages, i.e., the next sub-question prediction has access to the previous questions and answers as opposed to generating all sub-questions independently of any previous answers. Yang et al. (2022a), on the other hand, decomposes using rule-based principles and slot-filling prompting to translate questions into a series of SQL operations. Khot et al. (2022) also employs prompts to decompose into specific operations, but then allows each sub-problem to be solved using a library of specialized handlers, where each is devoted to a particular sub-task (e.g., retrieval). Finally, Kazemi et al. (2022) decompose a given problem in a backward fashion: starting from the goal and a set of rules, the system decompose the goal into sub-goals and recursively check whether the sub-goals can be proved. Here again, the modules are implemented by few-shot prompting a pre-trained LM. | Model | Accuracy (%) | |--------------------------------------------------|----------------| | OpenAI (text-davinci-002) [1] | 15.6 | | OpenAI (text-davinci-002) + CoT[1] | 46.9 | | OpenAI (text-davinci-002) + CoT + Calculator[1] | 46.9 | | OpenAI (code-davinci-002) [1] | 19.7 | | OpenAI (code-davinci-002) + CoT[1] | 63.1 | | OpenAI (code-davinci-002) + CoT + Calculator[1] | 65.4 | | GPT-3 175B + FT + CoT + Calculator[2] | 34.0 | | GPT-3 175B + FT + CoT + Calculator + Verifier[2] | 55.0 | | PaLM 540B[3] | 17.0 | | PaLM 540B+CoT[3] | 54.0 | | PaLM 540B+CoT+Calculator[3] | 58.0 | | PAL[4] | 72.0 | Table 1: Evaluation of different reasoning methods on GSM8K, a popular reasoning benchmark. FT denotes fine-tuning and CoT denotes chain-of-thought. The reported accuracies are based on [1]: (Wei et al., 2022c); [2]: (Cobbe et al., 2021); [3]: (Chowdhery et al., 2022); and [4]: (Gao et al., 2022). ## 2.3 Explicitly Teaching Language Models To Reason Despite their spectacular results, prompting approaches have some drawbacks in addition to requiring model scale. Namely, they require to discover prompts that elicit e.g. step-by-step reasoning, manually providing examples when it comes to few-shot for a new task. Moreover, prompting is computationally expensive in the case of long prompts, and it is harder to benefit from a relatively large number of examples due to limited context size of the model. Recent works suggest to circumvent these issues by training LMs to use, as humans, a working memory when more than one step are required to solve a task correctly. Nye et al. (2021) introduce the notion of scratchpad, allowing a LM to better perform on multi-step computation tasks such as addition or code execution. More precisely, at training time, the LM sees input tasks such as addition along with associated intermediate steps: the ensemble is called a scratchpad. At test time, the model is required to predict the steps and the answer from the input task. Scratchpads differ from the above prompting strategies in that they are fine-tuned on example tasks with associated computation steps. Note however that Nye et al. (2021) also perform experiments in the few-shot regime. Taylor et al. (2022) use a similar approach in the context of large LM pre-training: *Galactica* was trained on a corpus of scientific data including some documents where step-by-step reasoning is wrapped with a special token <work> and </work> to mimic an internal working memory. At inference time, the model can be asked explicitly to activate this reasoning mode via the <work> token. Taylor et al. (2022) argue that one more problem arise when training on reasoning examples: many intermediate reasoning steps may be missing in the training data curated from the internet, as humans do not explicitly write all their reasoning steps. To circumvent the issue of missing steps, the authors created datasets with detailed reasoning process. An example of prompt seen during *Galactica*'s pre-training is presented in Figure 4. Other recent works improve the reasoning abilities of pre-trained LMs via fine-tuning. Zelikman et al. (2022) propose a bootstrap approach to generate reasoning steps (also called rationales) for a large set of unlabeled data and use that data to fine-tune the model. Lengerich et al. (2022) propose a self-supervised method which extracts reasoning capabilities of a teacher model by asking the question "why" given the student's initial responses to various NLP problems. Next, the student model is fine-tuned by using contrastive distillation via sampling evidence from memory. Yu et al. (2022) show that standard LM fine-tuning on reasoning tasks lead to better reasoning skills such as textual entailment, abductive reasoning, and analogical reasoning, compared to pre-trained models. Further, several instruction fine-tuning approaches (Ouyang et al., 2022; Chung et al., 2022; Iyer et al., 2022; Ho et al., 2022) use chain-of-thought style prompts to achieve remarkable improvements on popular benchmarks such as BBH (Srivastava et al., 2022) and MMLU (Hendrycks et al., 2021). Interestingly, all these works also show that small scale instruction-finetuned models can perform better than un-finetuned large scale models, especially in the tasks where instruction following is important. ## Prompt 0 Question: It takes Amy 4 minutes to climb to the top of a slide. It takes her 1 minute to slide down. The water slide closes in 15 minutes. How many times can she slide before it closes? <LM> Answer: To solve " How many times can she slide before it closes? ", we need to first solve: " How long does each trip take? " </LM> ## Prompt 1 It takes Amy 4 minutes to climb to the top of a slide. It takes her 1 minute to slide down. The water slide closes in 15 minutes. Subquestion 1: How long does each trip take? <LM> Answer 1: It takes Amy 4 minutes to climb and 1 minute to slide down. 4 + 1 = 5. So each trip takes 5 minutes. </LM> ## Prompt 2 It takes Amy 4 minutes to climb to the top of a slide. It takes her 1 minute to slide down. The slide closes in 15 minutes. Subquestion 1: How long does each trip take? Answer 1: It takes Amy 4 minutes to climb and 1 minute to slide down. 4 + 1 = 5. So each trip takes 5 minutes. Subquestion 2: How many times can she slide before it closes? <LM> Answer 2: The water slide closes in 15 minutes. Each trip takes 5 minutes. So Amy can slide 15 ÷ 5 = 3 times before it closes. </LM> Figure 3: Recursive prompting example. **<LM>** denotes the start of the LM's output to the prompt, while </LM> denotes the end. The problem is first decomposed into subproblems in **Prompt 0**. Then, Answer 2 to **Subquestion 2** and Answer 1 to **Subquestion 1** are sequentially fed to **Prompt 2** and **Prompt 1**. The few-shot examples for each stage's prompt are omitted. Inspired from Figure 1 in Zhou et al. (2022). ## 2.4 Comparison And Limitations Of Abstract Reasoning Overall, reasoning can be seen as decomposing a problem into a sequence of sub-problems either iteratively or recursively.4 Exploring as many reasoning paths as possible is hard and there is no guarantee that the intermediate steps are valid. A way to produce faithful reasoning traces is to generate pairs of questions and their corresponding answers for each reasoning step (Creswell and Shanahan, 2022), but there is still no guarantee of the correctness of these intermediate steps. Overall, a reasoning LM seeks to improve its context by itself so that it has more chance to output the correct answer. To what extent LMs actually use the stated reasoning steps to support the final prediction remains poorly understood (Yu et al., 2022). 4Here, reasoning is described as a sequential operation. However, other reasoning structures such as trees could be considered. For example, Lample et al. (2022) leverage trees to model the different strategies leading to a proof for a given theorem. A strategy is a set of intermediate results that must be either true or themselves proved, hence decomposed into another new subset of intermediate results. Question: A needle 35 mm long rests on a water surface at 20◦C. What force over and above the needle's weight is required to lift the needle from contact with the water surface? σ = 0.0728m. <work> σ = 0.0728N/m σ = F/L 0.0728 = F/(2 × 0.035) F = 0.0728(2 × 0.035) ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) "' f = 0.0728*(2*0.035) with open("output.txt", "w") as file: file.write(str(round(f, 5))) "' «run: calculate.py» ![7_image_2.png](7_image_2.png) «read: output.txt» ![7_image_3.png](7_image_3.png) 0.0051 </work> Answer: F = 0.0051N Figure 4: Working memory example from Taylor et al. (2022). This prompt and its output are seen during LM pre-training. In many cases, some reasoning steps may suffer from avoidable mistakes that compromise the correctness of the output. For example, mistakes on nontrivial mathematical operations in a reasoning step may lead to the wrong final output. The same goes with known facts such as the identity of a president at a given year. Some of the works studied above (Yao et al., 2022b; Press et al., 2022) already leverage simple external tools such as a search engine or a calculator to validate intermediate steps. More generally, the next section of the survey focuses on the various tools that can be queried by LMs to increase the chance of outputting a correct answer. ## 3 Using Tools And Act A recent line of LM research allows the model to access knowledge that is not necessarily stored in its weights, such as a given piece of factual knowledge. More precisely, tasks such as exact computation or information retrieval for example can be offloaded to external modules such as a python interpreter or a search engine that are queried by the model which, in that respect, use tools. Additionally, we can say the LM performs an action when the tool has an effect on the external world. The possibility to easily include tools and actions in the form of special tokens is a convenient feature of language modeling coupled with transformers. ## 3.1 Calling Another Model In many cases, the tool can simply be another neural network or the LM itself. Iteration 0 Text: Brittney Reese (born September 9, 1986 in Gulfport, Mississippi) is an American long jumper. <LM> Plan: Remove incorrect information Edit: Brittney Reese (born September 9, 1986 in Gulfport, Mississippi) is an American long jumper. </LM> Iteration 1 Text: Brittney Reese (born September 9, 1986) is an American long jumper. <LM> Plan: Add information about her career Edit: Brittney Reese (born September 9, 1986) is an American long jumper , who competed at the 2008 Summer Olympics, and is a 4-time World Champion . </LM> Iteration 2 Text: Brittney Reese (born September 9, 1986) is an American long jumper, who competed at the 2008 Summer Olympics, and is a 4-time World Champion. <LM> Plan: Add her birthplace Edit: Brittney Reese (born September 9, 1986 in Inglewood, California ) is an American long jumper, who competed at the 2008 Summer Olympics, and is a 4-time World Champion. </LM> Figure 5: Iterative prompting example using PEER (Schick et al., 2022), a LM trained to produce a plan of action and edit to the input text at each step. This process can be repeated until the generated text requires no further updates. **<LM>** denotes the start of the LM's output to the prompt, while **</LM>** denotes the end. Iterative LM calling. As an alternative to improving the LM's context for better outputs after a single inference pass, an alternative and intuitive way to get better results from LMs consists of repeatedly calling the model to iteratively refine its output. Re3 (Yang et al., 2022c) exploits this idea to automatically generate stories of over two thousand words. More precisely, Re3 first generates a plan, setting, and characters by prompting *GPT3* (Brown et al., 2020) with a premise. Then, Re3 iteratively injects information from both the plan and current story state into a new *GPT3* prompt to generate new story passages. This work is improved upon in Yang et al. (2022b) with the use of a learned detailed outliner that iteratively expands the brief initial outline to any desired level of granularity. Other approaches that teach models to iteratively improve texts in an unsupervised fashion range from applications such as blank filling (Shen et al., 2020; Donahue et al., 2020) to denoising a sequence of Gaussian vectors into word vectors (Li et al., 2022c). PEER (Schick et al., 2022), for example, is a model initialized from *LM-Adapted T5* (Raffel et al., 2020) and trained on Wikipedia edits, learning both how to carry out edits and how to plan for the next steps. Consequently, *PEER* is able to develop articles by repeatedly planning and editing as in Figure 5. The iterative approach has the additional benefit of allowing a complex task like story and article generation to be decomposed into smaller subtasks. Importantly and apart from *PEER*, the works mentioned above employ heuristics to call the LM. A future research direction may consist in allowing the LM to call itself repeatedly until the output satisfies a certain criterion. Rather than just calling a single model repeatedly, Wu et al. (2022a) propose an interactive interface for a pipeline allowing chaining of multiple LMs together, where the output of one step is passed as input to the next. Such contributions allow non-AI-experts to refine solutions to complex tasks that cannot be appropriately handled by a single LM. Leveraging other modalities. Prompts under the form of text may not contain enough context to correctly perform a given task. For example, a question does not call for the same answer if it is asked with a serious or ironic tone. Including various modalities into the context would probably be useful for LMs such as chatbots. As recently demonstrated by Hao et al. (2022) and Alayrac et al. (2022), LMs can also be used as a general-purpose interface with models pre-trained on different modalities. For example, Hao et al. (2022) take a number of pre-trained encoders that can process diverse modalities such as vision and language, and connect them to a LM that serves as a universal task layer. The interface and modular encoders are jointly pre-trained via a semi-causal language modeling objective. This approach combines the benefits of causal and non-causal language modeling, enabling both in-context learning and open-ended generation, as well as easy fine-tuning of the encoders. Similarly, Alayrac et al. (2022) introduce *Flamingo*, a family of Visual Language Models (VLMs) that can handle any interleaved sequences of visual and textual data. *Flamingo* models are trained on large-scale multimodal web corpora containing interleaved text and images, which enables them to display in-context few-shot learning capabilities of multimodal tasks. With only a handful of annotated examples, *Flamingo* can easily adapt to both generation tasks such as visual question-answering and captioning, as well as classification tasks such as multiple-choice visual question-answering. Zeng et al. (2022) introduce Socratic Models, a modular framework in which various models pre-trained on different modalities can be composed zero-shot. This allows models to exchange information with each other and acquire new multimodal capabilities without additional finetuning. Socratic Models enable new applications such as robot perception and planning, free-form question-answering about egocentric videos, or multimodal assistive dialogue by interfacing with external APIs and databases such as search engines. Interestingly, other modalities such as images can be incorporated to improve reasoning capabilities of moderate size LMs (1B) (Zhang et al., 2023), and enabling multimodal chain of thought reasoning (Lu et al., 2022a). ## 3.2 Information Retrieval LMs can be augmented with memory units, for example via a neural cache of recent inputs (Grave et al., 2017; Merity et al., 2017), to improve their reasoning abilities. Alternatively, knowledge in the form of natural language can be offloaded completely from the LM by retrieving from an external knowledge source. Memory augmentation strategies help the language model to avoid producing non-factual and out-of-date information as well as reducing the number of parameters required to achieve comparable performance to large LMs. ## 3.2.1 Retrieval-Augmented Language Models Dense and sparse retrievers. There exist two types of retrievers that can be used to augment a LM: dense and sparse. Sparse retrievers work with sparse bag-of-words representations of the documents and the queries (Robertson and Zaragoza, 2009). In contrast, dense neural retrievers use a dense query and dense document vectors obtained from a neural network (Asai et al., 2021). Both types of retrievers assess the relevance of a document to an information-seeking query. This can be done by (i) checking for precise term overlap or (ii) computing the semantic similarity across related concepts. Sparse retrievers excel at the first sub-problem, while dense retrievers can be better at the second (Luan et al., 2021). Conditioning LMs on retrieved documents. Various works augment LMs with a dense retriever by adding the retrieved documents to the current context (Chen et al., 2017; Clark and Gardner, 2017; Lee et al., 2019; Guu et al., 2020; Khandelwal et al., 2020; Lewis et al., 2020; Izacard and Grave, 2020; Zhong et al., 2022; Borgeaud et al., 2022; Izacard et al., 2022; Shi et al., 2023). Even though the idea of retrieving documents to perform question answering is not new, retrieval-augmented LMs have recently demonstrated strong performance in other knowledge intensive tasks besides Q&A. These proposals close the performance gap compared to larger LMs that use significantly more parameters. *REALM* (Guu et al., 2020) was the first method to jointly train end-to-end a retrieval system with an encoder LM. RAG (Lewis et al., 2020) jointly fine-tunes the retriever with a sequence-to-sequence model. Izacard and Grave (2020) introduced a modification of the seq2seq architecture to efficiently process many retrieved documents. Borgeaud et al. (2022) focuses on an auto-regressive LM, called *RETRO*, and use a large-scale corpus indexed by frozen BERT embeddings for the retriever module. Crucially, the authors show that at scale (retrieval corpus and language model), this approach does not require to train and update the retriever. In particular, *RETRO* obtains comparable performance to *GPT3* on different downstream tasks. Although *RETRO* was trained with retrieval from scratch, their approach allows the integration of retrieval into existing pre-trained LMs. Atlas (Izacard et al., 2022) jointly trains a retriever with a sequence-to-sequence model to obtain a LM with strong few-shot learning capabilities in spite of being orders of magnitude smaller than many other large LMs. Table 2 compares the main characteristics of the models discussed, notably how the retrieval results are integrated into the LM's context. In all the aforementioned cases, the query corresponds to the prompt but this has been relaxed, see the chain-of-thought subsection below. | Model | # Retrieval tokens | Granularity | Retriever training | Retrieval integration | |-------------------------------|----------------------|---------------|----------------------|-------------------------| | REALM (Guu et al., 2020) | O(109 ) | Prompt | End-to-End | Append to prompt | | RAG (Lewis et al., 2020) | O(109 ) | Prompt | Fine-tuning | Cross-attention | | RETRO (Borgeaud et al., 2022) | O(1012) | Chunk | Frozen | Chunked cross-attn. | | Atlas (Izacard et al., 2022) | O(109 ) | Prompt | Fine-tuning | Cross-attention | Table 2: Comparison between database retrieval augmented languages models. Inspired by Table 3 from Borgeaud et al. (2022). Efficient large scale retrieval. Since retrieval-augmented language models store knowledge in an external data store, it is crucial that the information retrieval step is efficient, especially when dealing with a large number of document/passage/sentence embeddings and/or a large number (billions) of query vectors. The literature offers a variety of optimisations to boost performance of retrievers as well as reducing the memory footprint of the data store. First, an index structure can be built for the document/passage/sentence embeddings to perform efficient similarity search, for instance, using the Facebook AI Similarity Search library (faiss) (Johnson et al., 2021). Leveraging indexing structures enables efficient search operations by partitioning the vector space and enabling fast pruning of irrelevant vectors. Second, if there exist memory constraints due to a large number of document/passage/sentence embeddings, a multi-node, multi-gpu distributed framework can be used to store the index corresponding to only part of the document/passage/sentence embeddings per process. The query embeddings can also be split in batches and obtain the top-k results in parallel using gpus for further speed ups for approximate or exact search. When exact search becomes intractable due to scale, approximate nearest neighbor algorithms can be used. These algorithms trade off accuracy for speed, allowing significantly faster retrieval while maintaining reasonably accurate results, see Johnson et al. (2021) for further details. Finally, in order to further reduce the memory footprint of each of the index shards corresponding to a subset of the document/passage/sentence embeddings, different compression techniques can be used. For instance, the Atlas retrieval-augmented language model (Izacard et al., 2022) uses product quantisation (Jégou et al., 2011) to reduce the memory footprint of the retriever without sacrificing accuracy in downstream tasks in terms of exact match and recall@50 metrics of Q&A. Currently, there also exist a variety of vector database frameworks that have some of these optimisations implemented out of the box and can be leveraged to use retrieval as a service without requiring the user to implement all optimisations from scratch. Chain-of-thought prompting and retrievers. Recent works (He et al., 2022; Trivedi et al., 2022) propose to combine a retriever with reasoning via chain-of-thoughts (CoT) prompting to augment a LM. He et al. (2022) use the CoT prompt to generate reasoning paths consisting of an explanation and prediction pair. Then, knowledge is retrieved to support the explanations and the prediction that is mostly supported by the evidence is selected. This approach does not require any additional training or fine-tuning. Trivedi et al. (2022) propose an information retrieval chain-of-thought approach (IRCoT) which consists of interleaving retrieval with CoT for multi-step QA. The idea is to use retrieval to guide the CoT reasoning steps and conversely, using CoT reasoning to guide the retrieval step. In all these works, a retriever is systematically called for every query in order to get the corresponding documents to augment the LM. These approaches also assume that the intent is contained in the query. The query could be augmented with the user's intent by providing a natural language description of the search task (instruction) in order to disambiguate the intent, as proposed by Asai et al. (2022). Also, the LM could query the retriever only occasionally—when a prompt suggests it to do so—which is discussed in the next subsection. ## 3.2.2 Querying Search Engines In the previous paragraph, the information retrieval query corresponds to the LM's context. However, the LM can also have the ability to generate a query based on the prompt, thus enlarging its action space and becoming more active. LaMDA is one example of an agent-like LM designed for dialogue applications. The authors pre-train the model on dialog data as well as other public web documents. In addition to this, to ensure that the model is factually grounded as well as enhancing its conversational abilities, it is augmented with retrieval, a calculator, and a translator (Thoppilan et al., 2022). Furthermore, to improve the model's safety, *LaMDA* is fine-tuned with annotated data. Another example is *BlenderBot* (Shuster et al., 2022b), where the LM decides to generate a query based on a prompt. In this case, the prompt corresponds to the instruction of calling the search engine tool. *BlenderBot* is capable of open-domain conversation, it has been deployed on a public website to further improve the model via continual learning with humans in the loop. Similarly, *ReAct* uses few-shot prompting to teach a LM how to use different tools such as search and lookup in Wikipedia, and finish to return the answer (Yao et al., 2022b). Similarly, Komeili et al. (2021); Shuster et al. (2022a) propose a model that learns to generate an internet search query based on the context, and then conditions on the search results to generate a response. *ReAct* interleaves reasoning and acting, allowing for greater synergy between the two and improved performance on both language and decision making tasks. *ReAct* performs well on a diverse set of language and decision making tasks such as question answering, fact verification, or web and home navigation. In general, reasoning can improve decision making by making better inferences and predictions, while the ability to use external tools can improve reasoning by gathering additional information from knowledge bases or environments. ## 3.2.3 Searching And Navigating The Web It is also possible to train agents that can navigate the open-ended internet in pursuit of specified goals such as searching information or buying items. For example, *WebGPT* (Nakano et al., 2021) is a LM-based agent which can interact with a custom text-based web-browsing environment in order to answer long-form questions. In contrast with other models that only learn how to query retrievers or search engines like LaMDA (Thoppilan et al., 2022) or *BlenderBot* (Shuster et al., 2022b), *WebGPT* learns to interact with a web-browser, which allows it to further refine the initial query or perform additional actions based on its interactions with the tool. More specifically, *WebGPT* can search the internet, navigate webpages, follow links, and cite sources (see Table 3 for the full list of available actions). By accessing the internet, the agent is able to enhance its question-answering abilities, even surpassing those of humans as determined by human evaluators. The best model is obtained by fine-tuning *GPT3* on human demonstrations, and then performing rejection sampling against a reward model trained to predict human preferences. Similarly, WebShop (Yao et al., 2022a) is a simulated e-commerce website where an agent has to find, customize, and purchase a product according to a given instruction. To accomplish this, the agent must understand and reason about noisy text, follow complex instructions, reformulate queries, navigate different types of webpages, take actions to collect additional information when needed, and make strategic decisions to achieve its goals. Both the observations and the actions are expressed in natural language, making the environment well-suited for LM-based agents. The agent consists of a LM fine-tuned with behavior cloning of human demonstrations (*i.e.*, question-human demonstration pairs) and reinforcement learning using a hard-coded reward function that verifies whether the purchased item matches the given description. While there are other works on web navigation and computer-control, most of them assume the typical human interface, that takes as input images of a computer screen and output keyboard commands in order to solve digital tasks (Shi et al., 2017; Gur et al., 2019; 2021; Toyama et al., 2021; Humphreys et al., 2022; Gur et al., 2022). Since our survey focuses on LM-based agents, we will not discuss these works in detail. ## 3.3 Computing Via Symbolic Modules And Code Interpreters Although recent LMs are able to correctly decompose many problems, they are still prone to errors when dealing with large numbers or performing complex arithmetics (Mishra et al., 2022b). For example, vanilla GPT3 cannot perform out-of-distribution addition, *i.e.* addition on larger numbers than those seen during the training even when provided with examples with annotated steps (Qian et al., 2022). In the context of reinforcement learning, the action space of a transformer agent is equipped with symbolic modules to perform e.g. arithmetic or navigation in Wang et al. (2022b). *Mind's Eye* (Liu et al., 2022b) invokes a physics engine to ground LMs physical reasoning. More precisely, a text-to-code LM is used to produce rendering code for the physics engine. The outcome of the simulation that is relevant to answer the question is then appended in natural language form to the LM prompt. As a result, *Mind's Eye* is able to outperform the largest LMs on some specific physical reasoning tasks while having two order of magnitude less parameters. For reasoning graph generation, *CoCoGen* (Madaan et al., 2022) propose to generate python code generating a graph instead of a serialized version of the graph. PAL (Gao et al., 2022) relies on CoT prompting of large LMs to decompose symbolic reasoning, mathematical reasoning, or algorithmic tasks into intermediate steps along with python code for each step (see Figure 6). The python steps are then offloaded to a python interpreter outputting the final result. They outperform CoT prompting on several benchmarks, especially on GSM-HARD, a version of GSM8K with larger numbers. See Table 1 for a comparison between PAL and other models on GSM8K. Similarly, Drori et al. (2022); Chen et al. (2022b) prompts *Codex* (Chen et al., 2021) to generate executable code-based solutions to university-level problems, math word problems, or financial QA. For code generation, Shi et al. (2022) execute several sampled generation on a small number of test inputs, and use the output to select a solution. Instead, Mishra et al. (2022a) extends existing datasets with solutions written as python programs. Then, they finetune a model to generate python solutions to mathematical reasoning problems. In the context of theorem proving, Wu et al. (2022c) uses large LMs to automatically formalize informal mathematical competition problem statements in Isabelle or HOL. Jiang et al. (2022) generate formal proof sketches, which are then fed to a prover. ## 3.4 Acting On The Virtual And Physical World While the previous tools gather external information in order to improve the LM's predictions or performance on a given task, other tools allow the LM to act on the virtual or physical world. In order to do this, the LM needs to ground itself in the real-world by learning about affordances i.e. what actions are possible in a given state, and their effect on the world. Controlling Virtual Agents. Recent works demonstrated the ability of LMs to control virtual agents in simulated 2D and 3D environments by outputting functions which can then be executed by computers in the corresponding environment, be it a simulation or the real-world. For example, Li et al. (2022b) fine-tune a pre-trained *GPT2* (Radford et al., 2019) on sequential decision-making problems by representing the goals and observations as a sequence of embeddings and predicting the next action. This framework enables strong combinatorial generalization across different domains including a simulated household environment. This suggests that LMs can produce representations that are useful for modeling not only language but also sequential goals and plans, so that they can improve learning and generalization on tasks that go beyond language processing. Similarly, Huang et al. (2022a) investigate whether it is possible to use the world knowledge captured by LMs to take specific actions in response to high-level tasks written in natural language such as "make breakfast". This work was the first to demonstrate that if the LM is large enough and correctly prompted, it can break down high-level tasks into a series of simple commands without additional training. However, the agent has access to a predetermined set of actions, so not all natural language commands can be executed in the environment. To address this issue, the authors propose to map the commands suggested by Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. ![13_image_0.png](13_image_0.png) How many tennis balls does he have now? ![13_image_1.png](13_image_1.png) Question: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have? Answer: <LM> Figure 6: An example of few-shot PAL (Gao et al., 2022) prompt. **<LM>** denotes call to the LM with the above prompt. The prompts are based on the chain-of-thoughts prompting shown on Figure 1, and the parts taken from it are highlighted in green . In PAL, the prompts also contain executable python code , which performs operations and stores the results in the answer variable. When prompted with a new question, PAL generates a mix of executable code and explanation. The answer is obtained by executing the code and print(answer) . the LM into feasible actions for the agent using the cosine similarity function. The approach is evaluated in a virtual household environment and displays an improvement in the ability to execute tasks compared to using the plans generated by the LM without the additional mapping. While these works have demonstrated the usefulness of LMs for controlling virtual robots, the following paragraph cover works on physical robots. Zeng et al. (2022) combine a LM with a visual-language model (VLM) and a pre-trained language-conditioned policy for controlling a simulated robotic arm. The LM is used as a multi-step planner to break down a high-level task into subgoals, while the VLM is used to describe the objects in the scene. Both are passed to the policy which then executes actions according to the specified goal and observed state of the world. Dasgupta et al. (2023) use 7B and 70B *Chinchilla* as planners for an agent that acts and observes the result in a PycoLab environment. Additionally, a reporter module converts actions and observations from pixel to text space. Finally, the agent in Carta et al. (2023) uses a LM to generate action policies for text-based tasks. Interactively learning via online RL allows to ground the LM internal representations to the environment, thus partly departing from the knowledge about statistical surface structure of text that was acquired during pre-training. | Command | Effect | | |---------------------------|----------------------------------------------------------------|---------------------------------------------------------------| | search <query> | Send <query> to the Bing API and display a search results page | | | clicked on link <link ID> | Follow the link with the given ID to a new page | | | find in page: | <text> | Find the next occurrence of <text> and scroll to it | | quote: | <text> | If <text> is found in the current page, add it as a reference | | scrolled down <1, 2, 3> | Scroll down a number of times | | | scrolled up <1, 2, 3> | Scroll up a number of times | | | Top | Scroll to the top of the page | | | back | Go to the previous page | | | end: | answer | End browsing and move to answering phase | | end: | <nonsense, controversial> | End browsing and skip answering phase | Table 3: The actions *WebGPT* can perform, taken from Nakano et al. (2021). Controlling Physical Robots. Liang et al. (2022) use a LM to write robot policy code given natural language commands by prompting the model with a few demonstrations. By combining classic logic structures and referencing external libraries, e.g., for arithmetic operations, LMs can create policies that exhibit spatialgeometric reasoning, generalize to new instructions, and provide precise values for ambiguous descriptions. The effectiveness of the approach is demonstrated on multiple real robot platforms. LMs encode common sense knowledge about the world which can be useful in getting robots to follow complex high-level instructions expressed in natural language. However, they lack contextual grounding which makes it difficult to use them for decision making in the real-world since they do not know what actions are feasible in a particular situation. To mitigate this problem, Ahn et al. (2022) propose to teach the robot a number of low-level skills (such as "find a sponge", "pick up the apple", "go to the kitchen") and learn to predict how feasible they are at any given state. Then, the LM can be used to split complex high-level instructions into simpler subgoals from the robot's repertoire. The LM can then select the most valuable yet feasible skills for the robot to perform. This way, the robot can use its physical abilities to carry out the LM's instructions, while the LM provides semantic knowledge about the task. The authors test their approach, called *SayCan*, on various real-world tasks and find that it can successfully complete long, abstract instructions in a variety of environments. To address the grounding problem, Chen et al. (2022a) propose *NLMap-SayCan*, a framework to gather and integrate contextual information into LM planners. *NLMap* uses a Visual Language Model (VLM) to create an open-vocabulary queryable scene representation before generating a context-conditioned plan. An alternative way of incorporating contextual information into the agent's decisions is to utilize linguistic feedback from the environment such as success detection, object recognition, scene description, or human interaction (Huang et al., 2022b). This results in improved performance on robotic control tasks such as table top rearrangement and mobile manipulation in a real kitchen. Finally, *RT-1* (Brohan et al., 2022) leverages large-scale, diverse, task-agnostic robotic datasets to learn a model that can follow over 700 natural language instructions, as well as generalize to new tasks, environments, and objects. *RT-1* makes use of DIAL (Xiao et al., 2022), an approach for automatically labeling robot demonstrations with linguistic labels via the vision-language alignment model *CLIP* (Radford et al., 2019). ## 4 Learning To Reason, Use Tools, And Act The previous sections reviewed *what* LMs can be augmented with in order to endow them with reasoning and tools. We will now present approaches on how to teach them such abilities. ## 4.1 Supervision A straightforward way of teaching LMs both to reason and to act consists in providing them with humanwritten demonstrations of the desired behaviours. Common ways of doing so are (i) via few-shot prompting as first suggested by Brown et al. (2020), where the LM is provided a few examples as additional context during inference, but no parameter updates are performed, or (ii) via regular gradient-based learning. Typically, supervised learning is done *after* an initial pre-training with a language modeling objective (Ouyang et al., 2022; Chung et al., 2022); an exception to this is recent work by Taylor et al. (2022), who propose to mix pre-training texts with human-annotated examples containing some form of explicit reasoning, marked with a special token. Some authors use supervised fine-tuning as an intermediate step, followed by reinforcement learning from human feedback (Nakano et al., 2021; Ouyang et al., 2022); see Section 4.2 for an in-depth discussion of such methods. Few-shot prompting. Providing LMs with a few human-written *in-context* demonstrations of a desired behaviour is a common approach both for teaching them to reason (Wei et al., 2022c;b; Suzgun et al., 2022; Press et al., 2022) and for teaching them to use tools and act (Gao et al., 2022; Lazaridou et al., 2022; Yao et al., 2022b). This is mainly due to its ease of use: few-shot prompting only requires a handful of manually labeled examples and enables very fast experimentation as no model fine-tuning is required; moreover, it enables reusing the very same model for different reasoning tasks and tools, just by changing the provided prompt (Brown et al., 2020; Wei et al., 2022c). On the other hand, the ability to perform reasoning with chain-of-thoughts from a few in-context examples only emerges as models reach a certain size (Wei et al., 2022b; Chung et al., 2022), and performance depends heavily on the format in which examples are presented (Jiang et al., 2020; Min et al., 2022), the choice of few-shot examples, and the order in which they are presented (Kumar and Talukdar, 2021; Lu et al., 2022b; Zhou et al., 2022). Another issue is that the amount of supervision that can be provided is limited by the number of examples that fit into the LM's context window; this is especially relevant if (i) a new behaviour is so difficult to learn that it requires more than a handful of examples, or (ii) we have a large space of possible actions that we want a model to learn. Beyond that, as no weight updates are performed, the LM's reasoning and acting abilities are tied entirely to the provided prompt; removing it also removes these abilities. Fine-tuning. As an alternative to few-shot prompting, the reasoning and acting abilities of a pre-trained LM can also be elicited by updating its parameters with standard supervised learning. This approach has been used both for teaching models to use tools, including search engines (Komeili et al., 2021; Shuster et al., 2022b), web browsers (Nakano et al., 2021), calculators and translation systems (Thoppilan et al., 2022), and for improving reasoning abilities (Chung et al., 2022). For the latter, examples of reasoning are typically used in the larger context of *instruction tuning* (Mishra et al., 2021; Sanh et al., 2022; Wang et al., 2022d; Ouyang et al., 2022), where, more generally, an LM's ability to follow instructions is improved based on human-labeled examples. Examples are typically collected from crowd workers. In some cases, they can instead be obtained automatically: Nye et al. (2021) use execution traces as a form of supervision for reasoning, while Andor et al. (2019) use heuristics to collect supervised data for teaching a language model to use a calculator. Prompt pre-training. A potential risk of finetuning *after* the pre-training phase is that the LM might deviate far from the original distribution and overfit the distribution of the examples provided during fine-tuning. To alleviate this issue, Taylor et al. (2022) propose to mix pre-training data with labeled demonstrations of reasoning, similar to how earlier work mixes pre-training data with examples from various downstream tasks (Raffel et al., 2020); however, the exact gains from this mixing, compared to having a separate fine-tuning stage, have not yet been empirically studied. With a similar goal in mind, Ouyang et al. (2022) and Iyer et al. (2022) include examples from pre-training during the fine-tuning stage. Bootstrapping. As an alternative to standard fine-tuning, several authors propose to use *bootstrapping* techniques (e.g. Yarowsky, 1995; Brin, 1999) to leverage some form of indirect supervision. This typically works by prompting a LM to reason or act in a few-shot setup followed by a final prediction; examples for which the actions or reasoning steps performed did not lead to a correct final prediction are then discarded. For example, STaR (Zelikman et al., 2022) prompts a model to generate chain-of-thought reasoning sequences in a common sense question answering setup, but only keeps those chains that lead to the correct final answer for a given question. Finally, either the original LM or another (typically smaller) model is fine-tuned on all correct examples. As such, bootstrapping combines the data efficiency of few-shot prompting with some of the advantages of fine-tuning and can be successfully applied both to teach models to reason (Shridhar et al., 2022) and to use tools (Parisi et al., 2022). ## 4.2 Reinforcement Learning Supervised learning from human-created prompts is effective to teach models to reason and act. However, such data is difficult and costly to obtain. Human preference data - such as rankings or likes/dislikes - is much easier, faster, and cheaper to obtain than full demonstrations. For instance, it might be easier for a human to evaluate the quality of a summary than write one from scratch. Going further, it might be even easier to rank different summaries than scoring each separately. Such data cannot be used in a supervised setting, but can provide rewards in the context of Reinforcement Learning (RL) (Sutton and Barto, 2018). RL has proven successful for learning complex behaviors through feedback-based interaction with an environment, and it has been used for applications such as playing games (Mnih et al., 2015; Silver et al., 2016; Vinyals et al., 2019; Team et al., 2021; Bakhtin et al., 2022) or controlling robots (Gu et al., 2017; Kalashnikov et al., 2018; Akkaya et al., 2019; Lee et al., 2020). When training a LM with RL, the LM can be considered an agent that learns a policy (i.e. a distribution over the model's vocabulary from which the next token is sampled) in order to optimize some reward function. Most of the existing work on RL and ALMs has focused on teaching LMs how to act rather than reason. The closest work on learning how to reason via RL is STaR (Zelikman et al., 2022), a bootstrapping-based approach that is discussed in Section 4.1 RL is a natural framework for training LMs to act and use tools since many of these tools are non-differentiable (e.g. search engines, calculators or programming language interpreters). Additionally, many tasks that benefit from interacting with tools resemble sequential decision making problems (e.g., navigating a web-browser to buy a specified product) and have a well-defined reward (e.g., 1 if the model buys the correct product and 0 otherwise). While there are early works focused on models that could interface with external tools, they employ ad-hoc tool-dependent architectures (Adolphs et al., 2022; Buck et al., 2018; Nogueira and Cho, 2017; Zhong et al., 2018). We do not cover them here since the main focus of our survey is instead on the acting and reasoning capabilities of standard general-purpose LM architectures trained with the language modeling objective. Hard-coded reward functions. When teaching a LM how to use external tools, the standard practice is to update the weights of the model using a scalar reward generated by a hard-coded reward function. This task-dependent function is computed based on the tool output. The LM agent takes a textual input, which in RL terminology corresponds to the current state of the environment, and generates a sequence of tokens, or actions in RL terms. Optimization is done through policy gradient algorithms like REINFORCE (Williams, 1992), PPO and similar variants (Schulman et al., 2017; Ramamurthy et al., 2022). Initial works on training LMs to use tools via RL mostly focused on searching and fetching additional factual information. Common tools for such information-seeking tasks are document retrievers, question answering systems, and search engines. The first two consist in retrieving document from a pre-defined set of text documents, or in retrieving an answer based on some input query. However, a search engine allows for more structured interactive search where, for instance, the model further refines the initial query or performs additional actions based on the initial output of the tool. For example, Wu et al. (2022d) perform conversational question-answering by teaching a LM via RL to rewrite queries in order to feed them to an off-the-shelf retriever. The reward function is a contrastive retrieval-accuracy metric based on the token overlap between following conversation rounds and retrieved passages. Another example is the work from Liu et al. (2022a): *RAINIER* is a LM able to generate contextually relevant questions that are optimized to query a frozen QA system. After distilling knowledge from a larger *GPT3* (Brown et al., 2020) model into a smaller T5 model (Raffel et al., 2020), *RAINIER* is finetuned using PPO (Schulman et al., 2017) with feedback provided by the pre-trained question answering model from Khashabi et al. (2020). Interestingly, this work is an example of a LM learning to use another frozen neural model as an external tool. Yao et al. (2022a) use RL to teach a language model to navigate a virtual shop and buy items constrained on attributes like color and price. Similar to *WebGPT* (Nakano et al., 2021), the model is given a goal in textual format and allowed to perform a limited set of actions. Prompted with a user-generated instruction, in a multi-task learning setup, the model needs to simultaneously understand the query and browse the web to search for the right product. The reward is a hard-coded text-matching function based on the similarity between the model-purchased written description of the item and the given shopping instruction. Optimization is performed with the A3C algorithm (Mnih et al., 2016), a variant of the standard actor-critic method. While the model still lags behind human experts, they found that fine-tuning with RL after training on human demonstrations improves performance. This provides additional evidence of the benefits of reward-based learning for endowing LMs with the ability to interact with external tools. While interacting with a search engine or a document retriever allows a model to augment its current context with additional input, it is often necessary to process structured information when interacting with tools like a knowledge base. Dognin et al. (2021) train a LM to learn how to interface with a graph-based knowledge base by performing the text2graph and graph2text tasks. The model, based on a T5 architecture (Raffel et al., 2020) and trained with the vanilla policy gradient algorithm REINFORCE (Williams, 1992), can perform bidirectional generation of text and graphs and shows state-of-the-art performance on tasks related to knowledge base automated construction from text and vice versa. The T5 -based agent is trained to directly maximize graph2text metrics such as BLEU (Papineni et al., 2002a), METEOR (Banerjee and Lavie, 2005), and chrF++ (Popović, 2017), or text2graph ones such as F1, Precision, and Recall. Finally, it is also possible to leverage RL at inference time only. For example, Cao et al. (2023) propose a method to avoid generating tokens that will likely lead to toxic content by framing it as a RL problem. More precisely, a reward model and a value function are trained to evaluate respectively the toxicity of a sentence and the probability of the so-far generated tokens to lead to a toxic generation. At inference time, the latter is used to truncate the next token probability distribution towards the tokens that are less likely to lead to a toxic generation. Human feedback. Evaluating the quality of machine-generated text is non-trivial because it can vary depending on the context, individual preferences, and user's intentions. For example, in some contexts, a user might require creative writing, while in others it may just require factual information. Model outputs should be judged accordingly and should be able to capture such intent differences. Several metrics based on heuristics like BLEU (Papineni et al., 2002b) and ROUGE (Lin, 2004) have been developed for comparing model outputs to reference texts. However, they fail to fully capture the quality of generations with respect to human intentions. Human feedback can be exploited to improve the quality of machine-generated text, for example for dialog agents (Xu et al., 2022). In particular, Reinforcement Learning from Human Feedback (RLHF) (Knox and Stone, 2008; MacGlashan et al., 2017; Christiano et al., 2017; Warnell et al., 2018) aims to overcome these limitations by using human preferences as an evaluation metric and as an objective function to optimize the language model. Using RLHF allows LMs to be more closely aligned with complex human preferences and values which are difficult to capture by hard-coded reward functions. RLHF works by using a pre-trained LM to generate text, which is then evaluated by humans by, for example, ranking two model generations for the same prompt. This data is then collected to learn a reward model that predicts a scalar reward given any generated text. The reward captures human preferences when judging model output. Finally, the LM is optimized against such reward model using RL policy gradient algorithms like PPO (Schulman et al., 2017). RLHF can be applied directly on top of a general-purpose LM pre-trained via self-supervised learning. However, for more complex tasks, the model's generations may not be good enough. In such cases, RLHF is typically applied after an initial supervised fine-tuning phase using a small number of expert demonstrations for the corresponding downstream task (Ramamurthy et al., 2022; Ouyang et al., 2022; Stiennon et al., 2020). A successful example of RLHF used to teach a LM to use an external tool stems from *WebGPT* (Nakano et al., 2021), discussed in 3.2.3, a model capable of answering questions using a search engine and providing references to support such answers. The tool interface is a simplified text-based web-browser. The model architecture is based on *GPT3* (Brown et al., 2020) and is trained to perform browsing actions expressed in natural language. The model is fine-tuned on question-human demonstration pairs, before further optimization via RLHF. On two QA datasets, *WebGPT*'s answers are preferred relative to human-generated ones and tend to be more factual than the original vanilla *GPT3* model. Similarly, Menick et al. (2022) propose *GopherCite*, a *Gopher*-based LM model (Rae et al., 2021) fine-tuned with RLHF that can cite supporting evidence when answering questions and abstain from answering when unsure. In contrast with WebGPT, *GopherCite* uses an information retrieval external module rather than a web-browser to find relevant information that improves its question answering capabilities. Besides learning to use external tools, RLHF has also proven useful for a wide range of language generation tasks, from summarization (Ziegler et al., 2019; Wu et al., 2021; Stiennon et al., 2020) to training more helpful, harmless, and accurate assistants (Glaese et al., 2022; Cohen et al., 2022; Ouyang et al., 2022; Bai et al., 2022). Since these works do not focus on training models to reason and act, they are out of the scope of this survey. ## 4.3 Limitations And Future Directions Despite recent algorithmic progress and performance improvements, current RL methods still suffer from instability issues which can make training difficult and slow (Ramamurthy et al., 2022; Snell et al., 2022). While supervised learning has been an efficient and robust way to fine-tune language models on specific tasks (Mishra et al., 2021; Sanh et al., 2022; Wang et al., 2022b), this assumes the existence of a large number of expert demonstrations, which can be difficult and costly to obtain. This is particularly true for tasks that require reasoning and acting where we do not have readily available data. A possible solution to the lack of quality data problem could come from bootstrapping methods and offline RL. The promise of these methods is that a dataset can be generated via feedback and interactions of any behavior policy and can be used to train an improved policy. Therefore, combining a data-driven approach with offline RL could give the "best of both worlds": learning from counterfactual events in a scalable and stable way (Levine et al., 2020). In the offline regime, though, there are some open challenges. The maximum improvement can be limited by a number of factors such as: the suboptimality of the initial behavior policy, the dimensionality of the state and the action space, the length of the effective horizon, the accumulation of errors and distributional shifts due to the discrepancy between the behavior policy and the learned one (Levine et al., 2020). Recent works (Zelikman et al., 2022; Snell et al., 2022) have shown that such approaches could reach performance that goes beyond that of the expert demonstrations or improve over initial model generations. For example, Snell et al. (2022) introduce a new offline RL algorithm called ILQL which learns from a static dataset of demonstrations and their associated rewards by estimating a value function and using it to optimize LM generations. ILQL combines online RL flexible optimization framework with the simplicity and ability to learn from existing datasets of supervised learning, resulting in good performance on dialogue tasks. As explained in Section 4, Zelikman et al. (2022) employ a bootstrapping approach for teaching LMs to reason, which can be seen as an approximation to policy gradient algorithms. Recently, Schick et al. (2023) proposed *Toolformer*, a model that teaches itself to use tools in a self-supervised way. This is achieved by first using the few-shot abilities of an existing LM to sample a large amount of potential tool uses. For instance, the model can call a calculator API to augment its context, e.g., "Out of 1400 participants, 400 (or [Calculator(400 / 1400)→ *0.29] 29% passed the test.*" Then, the model is fine-tuned on its own generations, filtering them based on whether they reduce perplexity for future tokens generations. This method enables using several tools (e.g., a calendar, a calculator, or an information retrieval system). However, it was tested in a limited setup of using a single tool at once, since examples of tool use were independently sampled. We believe that studying how this approach could be extended to more complex multi-step tool uses is a promising research direction for a generalist LM-based agent. ## 5 Discussion Moving away from language modeling. Is a model trained to do intermediate reasoning steps or having access to the internet still purely performing language modeling? Indeed, in NLP, language modeling (Bahl et al., 1983) is generally defined as the task of predicting missing tokens given a context and is relied heavily on for pre-training models. However, several techniques have been developed to later fine-tune models (Ziegler et al., 2019; Wei et al., 2022a; Sanh et al., 2022) to perform various natural language tasks, which could be seen as moving away from traditional language modeling. In particular, the texts used to fine-tune LMs are not just found on the internet, but rather designed to explicitly inject some level of grounding. One of the argument advocated recently in Goldberg (2023) is that "it might be much easier to learn from direct instructions like these than it is to learn from non-instruction data". This argument can be supported by the recent work of Giannou et al. (2023), showing both theoretically and in practice that even shallow looped transformers can follow instructions and be programmed as general purpose computers. Intuitively, a text is the result of complex intermediate thoughts that are hidden. Therefore, the superficial text used for supervision can be seen as representing only the logs of these thoughts, thus lacking of context. Conversely, with task-oriented supervised data, we can explicitly ground the answer with the intermediate steps. In this regard, the resulting model may not be considered as a language model. And yet, the task is still about predicting the next token given text only. The argument is all the more true for ALMs since they can augment their context. In particular, tool-augmented LMs might actually lose the ability to assign a probability to the next token - which is at the core of language modeling: whereas a regular LM can easily compute p(xt | x1*, . . . , x*t−1), a toolaugmented LM has to consider all possible tool uses, e.g. p(xt | x1*, . . . , x*t−1) = Pc p(c | x1*, . . . , x*t−1) · p(xt | x1, . . . , xt−1, c) where c is a tool, which might not be tractable. First, in the general case, marginalizing over all the tools is not enough, one also has to marginalize over all possible tool use. For many tools however, the probability p(c | x1*, . . . , x*t−1) associated with each possible tool output cannot be modeled. Take for example web browsing: for a fixed query, the result may vary day to day unpredictably as both the internet and the search algorithms are constantly evolving, making the evaluation of p(c | x1*, . . . , x*t−1) impossible unless the tool uses a past snapshot of the internet, which is generally not wanted. Second, in the case where a tool can be called within another tool call, the number of tool uses can become exponential in the number of tools. For example, within a tool call, an ALM could browse the web by reading the content of a web page, which is text, then apply another of the tools to the text it found to refine its query, etc. If for some reason we require a depth that is equal to the number of tools, we have an exponential complexity. For these reasons, we refer to Augmented Language Models (ALMs) in this survey, to distinguish from Language Modeling in the traditional sense. A tradeoff between memorizing and querying tools. Is it preferable to memorize information in the model weights, or to leverage external tools? Some situations arguably require external tools, for example computing 213443344. However, many information are well known facts such as "The Eiffel tower is located in Paris" or 1 + 2 = 3, and should not be offloaded. And, when learning world representations, memorization is not only desirable, but also deeply connected to reasoning (Hayes et al., 2014). Can ALMs be calibrated enough to decide when and when not to use a tool? Could a computation budget for each tool be integrated into the loss to let the model learn to do so? Generalizing the non-parametric framework. A motivation behind information retrieval augmented LMs such as *RETRO* (Borgeaud et al., 2022) and *Atlas* (Izacard et al., 2022) is to develop a class of LM requiring less parameters through relying on an external non-parametric memory. The motivation for using other kind of tools such as code interpreter or calculator has been slightly different so far: for instance, Cobbe et al. (2021) use a calculator to improve accuracy on tasks requiring arithmetic. Yet, the paradigm of tool-augmented LMs can be seen as a generalization of the non-parametric framework. Indeed, beyond information retrieval, LMs can delegate any kind of abilities such as calculus to the corresponding external tools. By avoiding to store rarely accessed knowledge in their weights, tool-augmented LMs may have better scaling laws and thus yield smaller models retaining the capabilities of their largest counterpart. Combined with the possibility to access recent information from the external world thus avoiding frequent updates, non-parametric generalization holds great benefits for ALMs. A path towards autonomous machine intelligence? A concept for an autonomous intelligent agent was proposed by LeCun (2022). We now discuss to what extent ALMs instantiate this idea. In LeCun (2022), the agent is composed of different modules starting from a world model and a short-term memory. Essentially, the agent takes actions via an actor module based on its world model, perception module, and short-term memory so as to minimize some cost. The agent is also equipped with a configurator module for modulating the world model, the perception, the actor and the cost given the task at hand. Translating into this framework, the ALM's weights essentially contain the world model, perception and actor modules. The short-term memory can be identified with the ALM's context or prompt. Based on its perception of the context and its world model, the ALM would take actions by outputting special tokens, and perceive the result. The configurator module remains elusive but may be implicit: it can be seen as the conditioning induced by the ALM's context, for example an initial prompt such as "You are a kind and helpful assistant". Finally, the cost remains fixed in this framework, and could be the ALM's perplexity mixed with a computational cost associated to reasoning and using external tools. However, an important feature of the agent in LeCun (2022) is its ability to plan, defined by the decomposition of a complex task into subtasks: in the ALM's context, planning is akin to reasoning, a slight abuse of terminology as it is not clear whether LMs reason as humans do as noted in Section 2. LeCun (2022) propose to implement reasoning (under the term planning) as the minimization of an energy with respect to a hierarchical combination of actions. Since ALMs only perform predictions at the token level, they cannot reason according to LeCun (2022)'s view and may be still limited to System 1 tasks, *i.e.* that rely on reflex rather than logic and thinking. Whether System 2, *i.e.* the opposite abilities can be obtained by pushing current methods remains uncertain. For example, LMs are deprived from global consistency beyond their maximum sequence length: as an illustration, two different discussions with the same LM will result in inconsistencies. This is a strong limitation when it comes to solving complex problems that require to perform a large number of sub-goals such as writing a research paper, where one has an initial mental state that includes the current results and the angle of the paper. This process is not linear and results from different interactions, e.g., new ideas while reading some related works. The mental state is maintained although updated trough all the process, such that we keep in mind the big picture. Although more compute and larger input size could mitigate the issue, another solution may be to endow LMs with adequate components. In this regard, a model architecture that intrinsically makes the LM consistent with an energy function as suggested in LeCun (2022) could constitute a promising venue. Finally, our survey sees LMs as the central piece of a generalist agent that could reason in natural language and interact with external tools. Along these lines, Wang et al. (2023) uses a LM as a centralized planner to generate goal sequences for solving tasks in the game of Minecraft. Through a feedback loop and intermediate checks on subgoals execution, the LM can explain mistakes of the goal executor and refine its original plan. However, we note that a LM-based controller might not be the only viable approach for a generalist agent. Recent work on the game of Diplomacy (Bakhtin et al., 2022), a long-standing challenge for AI agents due to its complex planning and reasoning dynamics, employs an ad-hoc planning model trained via self-play and reinforcement learning. Here the LM is used to interact with other players, thus as an external communication module grounded in the current state of the game. This offers an alternative view of LMs as agents specialized to communicate with humans, albeit in the restricted setting of a Diplomacy game. We believe that (A)LMs will play a central role in the next generation of powerful interactive systems, whether as centralized controller of a modular system or as a language-only module that needs to interact with an orchestrator remains an open research question. Augmented Language Models benefits. Overall, ALMs offer many potential advantages over traditional LMs. - *Truthfulness*: As the current LM's training objective is arguably responsible for inciting the generation of seemingly plausible but not factual information, grounding the predictions through some tools should lead to more trustworthy models. However, although this conclusion is straightforward when equipping a LM with a calculator, there is surprisingly little evidence of it for information retrieval augmented LMs (Krishna et al., 2021). One of the reasons is the presence of a lot of non-truthful information in the web. Investigating this direction will be critical for making LM reliable. - *Estimating and reducing uncertainty*: Extending the maximum-likelihood paradigm by letting the model reason and access additional information could help models to learn what they know and what they don't. Some papers suggest that LMs are already well calibrated (Kadavath et al., 2022), i.e. there is a high correlation between the accuracy of their predictions and the corresponding likelihood. This uncertainty could be directly exploited by ALMs to know when to rely on their own weights, or when to query an external tool. - *Interpretability*: Deep learning models are often considered to be black boxes, and their predictions are difficult to interpret. Providing intermediate reasoning steps and relying on tools should help to make ALMs more interpretable. In particular, we can expect that being able to cite the sources used to compose the answer to be critical. However, some works Lewkowycz et al. (2022) pointed out that chain-of-thoughts can lead to the correct predictions even though the intermediate reasoning doesn't make any sense, indicating clear challenges for researchers exploring this direction. - *Enhanced capabilities*: ALMs with improved reasoning abilities and tools can be more helpful assistants and solve a wider range of tasks than standard LMs. For example, an ALM connected to a python interpreter can run code and experiments on a user's behalf, which a vanilla LM cannot do. In addition, a feedback loop can emerge between reasoning and acting, where each ability further improves the other (Yao et al., 2022b). Interacting with external tools, entities, and environments can improve reasoning since it allows the ALM to collect additional information and ground itself in the real-world. Similarly, reasoning can improve the ALM's decision making abilities such as when and how to use a certain tool. Cost of using tools. To the best of our knowledge, the cost of using tools has not yet been taken into account comprehensively. Overall, using tools seem to be beneficial in terms of energy use. Retrieval-augmented language models are a good example of a more efficient way to handle new information since updating an external data store can be orders of magnitude cheaper than re-training a LLM for scratch every time we get new data (as argued for example in Borgeaud et al. (2022)). This stays true for using a calculator or browsing the web for example. One can therefore assume that relying on tools rather than storing knowledge in parameters is generally more energy-efficient than training, frequently updating a LLM, and scaling it if some ability remains out of reach, if one assumes an optimal tool usage (i.e., not using each tool at each token). Then, how to assess the cost of using a tool versus another? We believe that the most natural way of estimating such cost is to consider two factors: (i) the price per tool request, and (ii) the time required to fulfill the request. These factors make sense from the perspective of the ALM, and are a reasonable way to take into account the cost of creating and maintaining the tool since this should typically be reflected in the pricing. Interestingly, including such cost at training and inference time under the form of a loss term for example could help ALMs to develop optimal tool use, and lead to the emergence of models that know how to balance between tool use or internal chain of thoughts. Ethical concerns. ALMs raise new ethical concerns. LM predictions based on tools may look more trustworthy and authoritative at a first glance, when in fact many of them will still be incorrect. Moreover, we can expect this phenomenon to be amplified as LMs reason in quite a similar manner to humans (Dasgupta et al., 2022), making it even harder to detect mistakes. Hence, ALMs may be leveraged to generate more convincing fake news and conspiracy theories. Conversely, conditioning ALMs on "safe" and trusted sources should lead to more factuality and less toxicity in their answers. While such ethical concerns apply to most tools, it is important to distinguish between passive and active tools. The former only collects external information and passes it to the LM's context, while the latter allows it to act on the virtual or physical world without human validation in the loop. An example of active tool is letting the LM control a search engine. Hence, there exists a broad spectrum of possible harmful consequences of LM usage. We are moving from passive LMs that generate text in isolation of the external environment, towards ALMs that act in the real world. In this context, the aforementioned ethical concerns may resonate even further, as ALM will be connected to more and more tools and environments. Currently, training LLM results in a large number of gas emissions, leveraging tools may result in smaller hence cheaper models that are more environmentally friendly both at training, inference, and update. ALMs are therefore a promising avenue to decrease the environmental impact of LLMs. ALMs also have the potential to transform the LM landscape. Nowadays, state of the art LLMs have been the product of large research organizations, fine-tuning these models towards more versatility (via reasoning and learning to use different tools) and agency (via actions) may be within range of smaller entities and perhaps individuals. We may therefore see the burgeoning of a market of open, possibly specialized and even personalized ALM agents, learning to use new tools, handling various data modalities and interacting with each other. We may also see the emergence of a new breed of app store, dedicated (via appropriate documentation or API constraints for example) to be used by ALMs. Overall, this new ecosystem may further accelerate the current pace of LLM research and application to the real world, while also making its control even more uncertain. ## 6 Conclusion This survey presented works in which LMs are augmented with better reasoning and tools. In most of the works, LMs augment their context with additional relevant information useful to perform missing token prediction. As many of these augmentations are non-parametric, *i.e.* involve calling an external, possibly non-parametric module, such LMs arguably depart from the classical language modeling paradigm, hence our decision to dub them Augmented Language Models. Although several works focus either on reasoning or acting skills of LMs, most of them rely on human annotation which may not be scalable, e.g., hand-crafted few-shot prompting, or reinforcement learning from human feedback. How to equip language models with meaningful augmentations in a fully self-supervised fashion remains an open research question. Additionally, as very few works combine reasoning and tools, future efforts should study the integration and fruitful interaction between these two skills. Overall, we believe studying Augmented Language Models is a promising and exciting research avenue towards the next generation of deep learning systems capable of complex and useful human-machine interaction. ## References Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, and Yannic Kilcher. Boosting search engines with interactive agents. Transactions on Machine Learning Research (TMLR), 2022. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022. Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems (NeurIPS)*, 2022. Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019. Akari Asai, Xinyan Yu, Jungo Kasai, and Hannaneh Hajishirzi. One question answering model for many languages with cross-lingual dense passage retrieval. *Advances in Neural Information Processing Systems* (NeurIPS), 2021. Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, and Wen-tau Yih. Task-aware retrieval with instructions. *arXiv preprint arXiv:2211.09260*, 2022. Lalit R. Bahl, Frederick Jelinek, and Robert L. Mercer. A maximum likelihood approach to continuous speech recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PAMI-5(2):179–190, 1983. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. *arXiv preprint arXiv:2212.08073*, 2022. Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sandra Mitts, Adithya Renduchintala, Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David J. Wu, Hugh Zhang, and Markus Zijlstra. Human-level play in the game of diplomacy by combining language models with strategic reasoning. *Science*, 378:1067 – 1074, 2022. Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation* Measures for Machine Translation and/or Summarization, pages 65–72. Association for Computational Linguistics, 2005. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning (ICML), 2022. Sergey Brin. Extracting patterns and relations from the world wide web. In *The World Wide Web and* Databases, pages 172–183. Springer Berlin Heidelberg, 1999. Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. *arXiv preprint arXiv:2212.06817*, 2022. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. Ask the right questions: Active question reformulation with reinforcement learning. International Conference on Learning Representations (ICLR), 2018. Meng Cao, Mehdi Fatemi, Jackie Chi Kit Cheung, and Samira Shabanian. Systematic rectification of language models via dead-end analysis. In *International Conference on Learning Representations (ICLR)*, 2023. Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer. Grounding large language models in interactive environments with online reinforcement learning, 2023. Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S Ryoo, Austin Stone, and Daniel Kappler. Open-vocabulary queryable scene representations for real world planning. arXiv preprint arXiv:2209.09874, 2022a. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open-domain questions. *arXiv preprint arXiv:1704.00051*, 2017. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022b. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways. *arXiv*, 2022. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *Advances in Neural Information Processing Systems (NeurIPS)*, 2017. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723, 2017. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*, 2021. Deborah Cohen, Moonkyung Ryu, Yinlam Chow, Orgad Keller, Ido Greenberg, Avinatan Hassidim, Michael Fink, Yossi Matias, Idan Szpektor, Craig Boutilier, et al. Dynamic planning in open-ended dialogue using reinforcement learning. *arXiv preprint arXiv:2208.02294*, 2022. Antonia Creswell and Murray Shanahan. Faithful reasoning using large language models. *arXiv preprint* arXiv:2208.14271, 2022. Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. *arXiv preprint arXiv:2205.09712*, 2022. Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, and Rob Fergus. Collaborating with language models for embodied reasoning, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the North American Chapter of the Association* for Computational Linguistics (NAACL), 2019. Pierre L Dognin, Inkit Padhi, Igor Melnyk, and Payel Das. Regen: Reinforcement learning for text and knowledge base generation using pretrained language models. *Conference on Empirical Methods in Natural* Language Processing (EMNLP), 2021. Chris Donahue, Mina Lee, and Percy Liang. Enabling language models to fill in the blanks. In *Proceedings of* the Annual Meeting of the Association for Computational Linguistics (ACL), 2020. Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32), 2022. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. Compositional semantic parsing with large language models. *arXiv preprint* arXiv:2209.15003, 2022. Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. Successive prompting for decomposing complex questions. *Conference on Empirical Methods in Natural Language Processing (EMNLP)*, 2022. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models, 2022. Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and Dimitris Papailiopoulos. Looped transformers as programmable computers. *arXiv preprint arXiv:2301.13196*, 2023. Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, PoSen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Soňa Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. Improving alignment of dialogue agents via targeted human judgements. *arXiv preprint arXiv:2209.14375*, 2022. Yoav Goldberg. Some remarks on large language models, 2023. URL https://gist.github.com/yoavg/ 59d174608e92e845c8994ac2e234c8a9. Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. In *International Conference on Learning Representations (ICLR)*, 2017. Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3389–3396, 2017. Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, and Dilek Hakkani-Tur. Learning to navigate the web. International Conference on Learning Representations (ICLR), 2019. Izzeddin Gur, Natasha Jaques, Kevin Malta, Manoj Tiwari, Honglak Lee, and Aleksandra Faust. Adversarial environment generation for learning to navigate the web. *arXiv preprint arXiv:2103.01991*, 2021. Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, and Aleksandra Faust. Understanding html with large language models. *arXiv* preprint arXiv:2210.03945, 2022. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In *International Conference on Machine Learning (ICML)*, 2020. Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, and Furu Wei. Language models are general-purpose interfaces. *arXiv preprint arXiv:2206.06336*, 2022. Brett K Hayes, Evan Heit, and Caren M Rotello. Memory, reasoning, and categorization: parallels and common mechanisms. *Frontiers in Psychology*, 5:529, 2014. Hangfeng He, Hongming Zhang, and Dan Roth. Rethinking with retrieval: Faithful large language model inference. *arXiv preprint arXiv:2301.00303*, 2022. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In Advances in Neural Information Processing Systems (NeurIPS), 2021. Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers, 2022. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. *arXiv preprint arXiv:2203.15556*, 2022. Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. *arXiv preprint arXiv:2201.07207*, 2022a. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. *arXiv preprint arXiv:2207.05608*, 2022b. Peter C Humphreys, David Raposo, Tobias Pohlen, Gregory Thornton, Rachita Chhaparia, Alistair Muldal, Josh Abramson, Petko Georgiev, Adam Santoro, and Timothy Lillicrap. A data-driven approach for learning to control computers. In *International Conference on Machine Learning (ICML)*, 2022. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. *arXiv preprint arXiv:2212.12017*, 2022. Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. *arXiv preprint arXiv:2007.01282*, 2020. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Atlas: Few-shot learning with retrieval augmented language models. *arXiv preprint arXiv:2208.03299*, 2022. Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. *arXiv preprint arXiv:2210.12283*, 2022. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. How can we know what language models know? *Transactions of the Association for Computational Linguistics*, 8, 2020. Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. *IEEE Transactions* on Big Data, 7(3):535–547, 2021. doi: 10.1109/TBDATA.2019.2921572. Herve Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011. doi: 10.1109/TPAMI. 2010.57. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022. Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arxiv e-prints, page. *arXiv preprint arXiv:1806.10293*, 2018. Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, and Deepak Ramachandran. Lambadal backward chaining for automated reasoning in natural language, 2022. Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. Measuring compositional generalization: A comprehensive method on realistic data. In *International Conference on Learning Representations*, 2019. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through Memorization: Nearest Neighbor Language Models. In *International Conference on Learning Representations (ICLR)*, 2020. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system. *arXiv preprint arXiv:2005.00700*, 2020. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. *arXiv preprint arXiv:2210.02406*, 2022. W Bradley Knox and Peter Stone. Tamer: Training an agent manually via evaluative reinforcement. In 2008 7th IEEE international conference on development and learning, pages 292–297. IEEE, 2008. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2022. Mojtaba Komeili, Kurt Shuster, and Jason Weston. Internet-augmented dialogue generation. *ArXiv*, abs/2107.07566, 2021. Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. Hurdles to progress in long-form question answering. *arXiv* preprint arXiv:2103.06332, 2021. Sawan Kumar and Partha Talukdar. Reordering examples helps during priming-based few-shot learning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4507–4518, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.395. URL https://aclanthology.org/2021.findings-acl.395. Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *International conference on machine learning*, pages 2873–2882. PMLR, 2018. Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theorem proving. In Advances in Neural Information Processing Systems (NeurIPS), 2022. Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering, 2022. URL https: //arxiv.org/abs/2203.05115. Yann LeCun. A path towards autonomous machine intelligence, 2022. Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. *Science robotics*, 5(47):eabc5986, 2020. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. *arXiv preprint arXiv:1906.00300*, 2019. Chris Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, and Charysse Redwood. Contrastive distillation is a sample-efficient self-supervised loss policy for transfer learning. arXiv preprint arXiv:2212.11353, 2022. Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In *Thirteenth* international conference on the principles of knowledge representation and reasoning, 2012. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrievalaugmented generation for knowledge-intensive nlp tasks. In *Advances in Neural Information Processing* Systems (NeurIPS), 2020. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022. Belinda Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, and Jacob Andreas. Quantifying adaptability in pre-trained language models with 500 tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4696–4715, Seattle, United States, July 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.346. URL https://aclanthology.org/2022.naacl-main.346. Shuang Li, Xavier Puig, Yilun Du, Clinton Wang, Ekin Akyurek, Antonio Torralba, Jacob Andreas, and Igor Mordatch. Pre-trained language models for interactive decision-making. *arXiv preprint arXiv:2202.01771*, 2022b. Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. Diffusion-lm improves controllable text generation. *arXiv preprint arXiv:2205.14217*, 2022c. Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. *arXiv preprint arXiv:2209.07753*, 2022. Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81, 2004. Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, and Yejin Choi. Rainier: Reinforced knowledge introspector for commonsense question answering. *arXiv preprint* arXiv:2210.03078, 2022a. Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M Dai. Mind's eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359, 2022b. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. *arXiv preprint arXiv:2209.09513*, 2022a. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland, May 2022b. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.556. URL https://aclanthology.org/2022.acl-long.556. Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. Sparse, Dense, and Attentional Representations for Text Retrieval. *Transactions of the Association for Computational Linguistics*, 9:329–345, 04 2021. ISSN 2307-387X. doi: 10.1162/tacl_a_00369. URL https://doi.org/10.1162/tacl_a_00369. James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David L Roberts, Matthew E Taylor, and Michael L Littman. Interactive learning from policy-dependent human feedback. In *International* Conference on Machine Learning, pages 2285–2294. PMLR, 2017. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners. *arXiv preprint arXiv:2210.07128*, 2022. John McCarthy et al. *Programs with common sense*. RLE and MIT computation center Cambridge, MA, USA, 1960. Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. Teaching language models to support answers with verified quotes. *arXiv preprint arXiv:2203.11147*, 2022. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations (ICLR), 2017. Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. Multi-hop reading comprehension through question decomposition and rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6097–6109, 2019. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work?, 2022. URL https: //arxiv.org/abs/2202.12837. Swaroop Mishra and Elnaz Nouri. Help me think: A simple prompting strategy for non-experts to create customized content with models. *arXiv preprint arXiv:2208.08232*, 2022. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Natural instructions: Benchmarking generalization to new tasks from natural language instructions. *arXiv preprint arXiv:2104.08773*, 2021. Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, et al. Lila: A unified benchmark for mathematical reasoning. arXiv preprint arXiv:2210.17517, 2022a. Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505–3523, Dublin, Ireland, May 2022b. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.246. URL https://aclanthology.org/2022.acl-long.246. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, February 2015. ISSN 00280836. URL http://dx.doi.org/10.1038/nature14236. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P. Lillicrap, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, page 1928–1937. JMLR.org, 2016. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. *arXiv preprint arXiv:2112.09332*, 2021. Rodrigo Nogueira and Kyunghyun Cho. Task-oriented query reformulation with reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 574–583, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/ D17-1061. URL https://aclanthology.org/D17-1061. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models. *arXiv preprint* arXiv:2112.00114, 2021. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. *arXiv preprint arXiv:2203.02155*, 2022. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the Annual Meeting of the Association for Computational Linguistics* (ACL), 2002a. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th annual meeting of the Association for Computational* Linguistics, pages 311–318, 2002b. Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. *arXiv preprint* arXiv:2205.12255, 2022. Pruthvi Patel, Swaroop Mishra, Mihir Parmar, and Chitta Baral. Is a question decomposition unit all we need? *arXiv preprint arXiv:2205.12538*, 2022. Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. Unsupervised question decomposition for question answering. In *Proceedings of the 2020 Conference on Empirical Methods in* Natural Language Processing (EMNLP), 2020. Maja Popović. chrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, pages 612–618. Association for Computational Linguistics, 2017. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models, 2022. Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. Limitations of language models in arithmetic and symbolic induction, 2022. Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, and Huajun Chen. Reasoning with language model prompting: A survey, 2022. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners, 2019. Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling language models: Methods, analysis & insights from training gopher, 2021. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research (JMLR), 2020. Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. *arXiv* preprint arXiv:2210.01241, 2022. Stephen Robertson and Hugo Zaragoza. *The probabilistic relevance framework: BM25 and beyond*. Now Publishers Inc, 2009. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations (ICLR), 2022. Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. Peer: A collaborative language model. arXiv preprint arXiv:2208.11663, 2022. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì†, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. *arXiv* preprint arXiv:2302.04761, 2023. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. Continual-t0: Progressively instructing 50+ tasks to language models without forgetting. *Conference on Empirical Methods in Natural Language* Processing (EMNLP), 2022. Tianxiao Shen, Victor Quach, Regina Barzilay, and Tommi Jaakkola. Blank language models. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I Wang. Natural language to code translation with execution. *arXiv preprint arXiv:2204.11454*, 2022. Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In *International Conference on Machine Learning (ICML)*, 2017. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. Replug: Retrieval-augmented black-box language models, 2023. Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. Distilling multi-step reasoning capabilities of large language models into smaller models via semantic decompositions. *arXiv preprint arXiv:2212.00193*, 2022. Kurt Shuster, Mojtaba Komeili, Leonard Adolphs, Stephen Roller, Arthur Szlam, and Jason Weston. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion. arXiv preprint arXiv:2203.13224, 2022a. Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. *arXiv preprint arXiv:2208.03188*, 2022b. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016. Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning. *arXiv preprint arXiv:2206.11871*, 2022. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint arXiv:2206.04615*, 2022. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. In *Advances in Neural* Information Processing Systems (NeurIPS), 2020. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them, 2022. URL https://arxiv.org/abs/2210.09261. Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018. Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. Unifying language learning paradigms. *arXiv preprint arXiv:2205.05131*, 2022. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022. Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, et al. Open-ended learning leads to generally capable agents. *arXiv preprint arXiv:2107.12808*, 2021. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen MeierHellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialog applications. *arXiv preprint* arXiv:2201.08239, 2022. Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022. Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: a reinforcement learning platform for android. arXiv preprint arXiv:2105.13231, 2021. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. *arXiv preprint arXiv:2212.10509*, 2022. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Boshi Wang, Xiang Deng, and Huan Sun. Iteratively prompt pre-trained language models for chain of thought. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022a. Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. Behavior cloned transformers are neurosymbolic reasoners. *arXiv preprint arXiv:2210.07382*, 2022b. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. Advances in Neural Information Processing Systems (NeurIPS), 2022c. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. Super-natural instructions: Generalization via declarative instructions on 1600+ nlp tasks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022d. Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents, 2023. URL https://arxiv.org/abs/2302.01560. Garrett Warnell, Nicholas Waytowich, Vernon Lawhern, and Peter Stone. Deep tamer: Interactive agent shaping in high-dimensional state spaces. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 1, 2018. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. *International Conference on* Learning Representations (ICLR), 2022a. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. *Transactions on* Machine Learning Research (TMLR), 2022b. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*, 2022c. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. Neural text generation with unlikelihood training. In *International Conference on Learning Representations (ICLR)*, 2020. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229–256, 1992. Jeff Wu, Long Ouyang, Daniel M Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback. *arXiv preprint arXiv:2109.10862*, 2021. Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and Carrie J Cai. Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pages 1–10, 2022a. Tongshuang Wu, Michael Terry, and Carrie Jun Cai. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In CHI Conference on Human Factors in Computing Systems, pages 1–22, 2022b. Yuhuai Wu, Albert Q Jiang, Wenda Li, Markus N Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. *Advances in Neural Information Processing* Systems (NeurIPS), 2022c. Zeqiu Wu, Yi Luan, Hannah Rashkin, David Reitter, and Gaurav Singh Tomar. Conqrr: Conversational query rewriting for retrieval with reinforcement learning. *Conference on Empirical Methods in Natural* Language Processing (EMNLP), 2022d. Ted Xiao, Harris Chan, Pierre Sermanet, Ayzaan Wahid, Anthony Brohan, Karol Hausman, Sergey Levine, and Jonathan Tompson. Robotic skill acquisition via instruction augmentation with vision-language models. arXiv preprint arXiv:2211.11736, 2022. Jing Xu, Megan Ung, Mojtaba Komeili, Kushal Arora, Y-Lan Boureau, and Jason Weston. Learning new skills after deployment: Improving open-domain internet-driven dialogue with human feedback, 2022. Jingfeng Yang, Haoming Jiang, Qingyu Yin, Danqing Zhang, Bing Yin, and Diyi Yang. Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022a. Kevin Yang, Dan Klein, Nanyun Peng, and Yuandong Tian. Doc: Improving long story coherence with detailed outline control. *arXiv preprint arXiv:2212.10077*, 2022b. Kevin Yang, Nanyun Peng, Yuandong Tian, and Dan Klein. Re3: Generating longer stories with recursive reprompting and revision. *Conference on Empirical Methods in Natural Language Processing (EMNLP)*, 2022c. Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. *Advances in Neural Information Processing Systems (NeurIPS)*, 2022a. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. *arXiv preprint arXiv:2210.03629*, 2022b. David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 1995. Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, and Asli Celikyilmaz. Alert: Adapting language models to reasoning tasks. *arXiv preprint arXiv:2212.08286*, 2022. Eric Zelikman, Jesse Mu, Noah D Goodman, and Yuhuai Tony Wu. Star: Self-taught reasoner bootstrapping reasoning with reasoning. *Advances in Neural Information Processing Systems (NeurIPS)*, 2022. Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. Socratic models: Composing zero-shot multimodal reasoning with language, 2022. URL https://arxiv.org/abs/2204.00598. Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-ofthought reasoning in language models, 2023. Victor Zhong, Caiming Xiong, and Richard Socher. Seq2SQL: Generating structured queries from natural language using reinforcement learning, 2018. URL https://openreview.net/forum?id=Syx6bz-Ab. Zexuan Zhong, Tao Lei, and Danqi Chen. Training language models with memory augmentation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. *arXiv preprint arXiv:2205.10625*, 2022. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. *arXiv preprint arXiv:1909.08593*, 2019.
Review 1: Summary: This survey paper reviews papers where language models are augmented with using tools and reasoning. These models are dubbed ALMs and recent advances in this paradigm are reviewed in the paper. The paper is organized well into - motivation, definitions, - dedicated sections on reasoning and tools+actions, - section on learning techniques to achieve these capabilities - discussion and benefits sections Strengths and Weaknesses: Strengths: - The paper is clearly written and is easy to read. The organization of the paper into dedicated sections for reasoning, tools and action are useful. - The paper does a good job of comparing various works wherever possible like table 1 and 2. - The figures with example prompts for important methods are clear to parse and understand and they add value to the narrative. - The section of various learning techniques to achieve language model augmentation are useful. Weaknesses: - The ethical concerns section could have gone into more detail instead of a single paragraph. - The paper could have benefited from adding more comparative results across different approaches. Tables 1 and 2 were informative in this regard but more of this could have helped. Requested Changes: Section 2.1: I believe the scratch pad paper and Cobbe et al proposed the original ideas here. Please see this thread for relevant division: https://twitter.com/gstsdn/status/1533841505172922369?s=46&t=YDJSnSYS3msxPvOtE0K2Og And fix the citations and language to reflect this. Section 2.2: The explanation for Patel et al. 2022 is incomplete and hard to understand. Please rephrase. Section 3.2.1: Explanation for RETRO can be made clearer with a few more sentences. Section 3.4 typo: “a framework to gather an integrate” change an to “and” Ethical concerns section could have been further enhanced to reflect the real-world impact these approaches are having in the developer community. Broader Impact Concerns: As mentioned above, LLMs especially ones that are augmented to use tools and act on the results are becoming extremely popular in the current world. A few large companies have the capability to build the large pre-trained models but there is also a sprawling open-source ecosystem that’s using these LLMs to build products in the real world. This paper would benefit from brief commentary on that topic. ================================================== Review 2: Summary: This paper presents a comprehensive survey on augmented language models (ALMs). Maybe slightly different from others’ definition of ALMs, the authors define ALMs as LMs augmented with reasoning and tools, resulting in agents that combine reasoning, tool usage, and action, in addition to purely LMs. In this regard, the survey systematically reviews ALMs with respect to reasoning, tool usage, and action, which were usually discussed separately before. This approach offers a more visionary perspective for readers to view LLMs and gain a deeper understanding of the underlying connections among recent advancements in these fields. Strengths and Weaknesses: ### Strengths: 1. This paper provides a novel perspective on augmented language models – by reviewing reasoning, tool usage, and action in the same framework, ALMs are envisioned as powerful agents. The paper also includes an interesting discussion on the connection between ALMs and world models. Although LLMs as agents are not new, this paper clearly defines and reviews the critical aspects of serving as agents, which I appreciate a lot by reading the paper. 2. This paper includes a comprehensive list of related work on reasoning, tool usage, and action, which should be helpful for researchers in related fields. ### Weaknesses: I don’t identify significant weaknesses of this paper. Requested Changes: I don't have requested changes for this paper. Broader Impact Concerns: This paper included a nice discussion on ethical concerns that I feel sufficient. ================================================== Review 3: Summary: This paper provides a comprehensive review and survey of **augmented language models** (ALMs), broadly defined as language models that are augmented with reasoning skills and the ability to use tools. The paper begins with a summary of the shortcomings of current LMs, and **why** the shift towards ALMs may alleviate these issues: Concretely, current LMs often (i) suffer from hallucinations, (ii) require a large number of model parameters and dataset size to master emergent behaviour, and (iii) are not straightforward to update through continual learning. The paper then dives deeper into two promising sub-areas that can address these limitations: (i) **reasoning** (e.g. prompting, chain-of-thought prompting, etc.), and (ii) **tool use and acting** (e.g. calling another model, using a calculator, search engines, retrieving information, taking actions in the real-world, etc.). In each section, the paper provides a comprehensive list and summary of the many different earlier work that had been done in each research area, alongside some of the current limitations of these approaches. Lastly, the paper discusses **how** current ALM approaches teach the model to acquire the relevant ability or tool use, and concludes by discussing some of the broader implications (e.g. trade-off between storing the knowledge and ability in the model weights vs keeping them in the external tools, ethical concerns of ALMs, etc.). Strengths and Weaknesses: **Strengths** 1. This paper provides a comprehensive overview of earlier work in the ALM literature. As someone who is broadly familiar with this literature, by reading this survey paper I nevertheless learn about interesting and relevant prior work that I had not encountered before, which attests to the comprehensive nature of the survey. 2. The ALM literature (both in terms of reasoning, tool use, and action-taking through LLMs) is incredibly broad and fast-paced, which presents a high barrier to entry for researchers who are new to this critical research area. Due to its comprehensive nature, this paper constitutes a great overview and entry point to this important research area; these readers can then dive deeper into their sub-area of choice by reading some of the cited papers that are most relevant to their interests. 3. The paper is overall well-structured and well-written, beginning with why ALMs are necessary, why the focus on reasoning and tool use, followed by a comprehensive description and current open questions of each research area, and concluding with a broader discussion of the rise of ALMs. **Weaknesses** 1. While the survey paper is comprehensive in terms of the description of the relevant prior work, the "limitations" subsection of each section is much shorter. In my opinion, these limitations & open questions constitute an important part of a survey paper: Not just outlining what **has** been done, but also dedicating enough space to comprehensively outline **what is still missing and what we as a community should do more in the future**: (i) what are some of the open challenges (computational or otherwise) behind each approach? (ii) Are there any impediments to scaling the approaches, considering that scale is a key driving force behind recent progress? (iii) What are the pros and cons of the proposed approaches against other alternatives (e.g. online vs offline RL)? Etc. Some examples of what would be important to cover in more depth are provided in the "requested changes" section below. 2. I have some objections to some of the assertions in the paper, particularly regarding whether or not these semi-parametric forms of LMs still constitute the probabilistic language modelling objective (Section 5 "Discussion", "Moving away from language modelling", page 18). This is detailed in the "Requested Changes" section below. Requested Changes: 1. **Critical**: More discussion around the limitations and open questions around each subsection and more broadly. Examples of important topics that should be covered in more depth: - In terms of retrieval-augmented models, how do we do fast retrieval, especially when the retrieval set is huge (e.g. the whole internet)? Is exact inference tractable, and what approaches should we use when it is not? Can the retrieval be done on-device (e.g. GPUs), and what happens if it doesn't fit on the GPU memory? - Despite the benefits, what are the primary **costs** of doing ALMs? How do the computational costs of these approaches (e.g. calling a search engine or a calculator or keeping a large retrieval set) compare against trying to cram all the knowledge inside a single dense model? What are the different dimensions (e.g. compute, memory, costs of API calls to other tools) of these costs? Has prior work provided a thorough accounting of these costs? If not, how should we start to measure these? - How scalable are some of these approaches? For instance, if we have to marginalize over all the different tools at the LM pre-training time, it would likely be very expensive given that the number of tokens is very large. Can we do this as a "second-stage" pre-training on a smaller dataset instead? - A key part of tool use is knowing **when** to rely on the tools, and **which** tools to use for a given context. The discussion around this important topic seems sparse at the moment; what are some avenues that the community should explore more here? - The paper identifies combining reasoning and tool use as a promising avenue. This is worth discussing in more detail: Are there any obvious low-hanging fruits that we can try? What is the right "medium" for combining them: a lot of reasoning tools, theorem provers, etc. operate through formal languages, whereas LLMs operate mostly through natural language. Is this an important issue, and if so, how should we bridge them? - The paper mentions "offline RL" (page 18) as a potential solution; it might be worth discussing some of the pros and cons of offline RL, e.g. better safety through "sandboxing", although exploration is more difficult, etc. 2. **Critical**: In the "moving away from language modelling" subsection (page 18), the paper claims that "... where c is a tool, which might not be tractable". Three points: - I don't agree that this is necessarily intractable: The formal definition of tractability refers to problems that can be solved in polynomial time. Given a finite set of N tools, we simply need to consider and marginalize over all those N possibilities, which can still be done in polynomial time with respect to the number of tools N. It can be **expensive** when the number of tools N is large, but not intractable. - In terms of the equation, the paper states that $p(x_t \mid x_1, \cdots, x_{t-1}) = \sum_{c} p(c) * p(x_t \mid x_1, \cdots, x_{t-1}, c)$. Shouldn't $p(c)$ here be $p(c \mid x_1, \cdots, x_{t-1})$? I'd imagine the choice of the tool should depend on the context, and this formulation lends itself well to marginalization & latent variables (here the choice of c is treated as a latent variable that we simply marginalize over). - The paper alludes to ALMs being somewhat of a shift from the standard, probabilistic LM formulation. I don't agree that this is necessarily the case: With the formulation above, ALMs **still** define a probabilistic distribution over the next token (and through the chain rule, over the entire sequence, where $p(\mathbf{x}) = \prod_{x_t \in \mathbf{x}} p(x_t \mid x_{<t})$, only that now each prediction $p(x_t \mid x_{<t})$ involves a **marginalization** over all possible tools / retrieval documents, etc.). The fact that each prediction $p(x_t \mid x_{<t})$ is now **more expensive** due to the need to marginalize does **not** mean that it is any less probabilistic than the standard LM or that we have shifted away from the probabilistic interpretation of LMs, as both standard LMs and ALMs still define valid probability distributions over the next token. In practice, one can rely on approximate inference procedures (as opposed to exact inference when the number of tools N is large) to approximate exact marginalization, as the community has done a lot in the past when it comes to other types of latent variables. 3. **Recommended**: The definition of taking actions can perhaps be made more precise. In some sense, standard LMs also "take action" in the form of generating a sequence of words that will be displayed on the screen. I don't think this is what the paper means by taking action (as opposed to e.g. selecting a tool / reasoning step as a latent variable, or more actively controlling a web browser)? 4. **Typo**: In page 14, "... gather an integrate contextual ..." is a typo, "an" -> "**and**"? Broader Impact Concerns: The paper includes an "Ethical Concerns" subsection in page 20. This subsection can be more comprehensive by touching on other relevant ethical issues. For instance, training a large dense LM on a large compute cluster can result in a large number of emissions; if leveraging tools can result in smaller models that are cheaper to train, deploy, and update, then this can be a good thing in terms of ethical concerns and environmental impact. I am not familiar with this literature, but if there is prior work that assesses the safety / factuality / toxic language rate of tool-augmented vs non-tool-augmented standard LMs, it would be worth citing (or at the very least mentioned as an important open research question that the community should work on more in the near future). Also there is a possibility of dual use here: more capable ALMs that can e.g. leverage and condition on more recent fake news and conspiracy theories on social media can then be used to generate more even more convincing, timely, and relevant-looking fake news that convinces people of things that are not true for e.g. one's political gains. ================================================== Metareview: Recommendation: Accept as is Comment: The paper provides a good review of a very fast-moving field, with a lot of good references. I enjoyed reading it and all reviewers recommended acceptance. It is also very appropriate for a survey certification. There is a small issue in Figure 6, where the answer line seems incorrect, and should be `answer = tennis_balls + bought_balls` instead of `tennis_balls * bought_balls`, I don’t know if this is a typo or intentional. ==================================================
# Credal Bayesian Deep Learning Anonymous authors Paper under double-blind review ## Abstract Uncertainty quantification and robustness to distribution shifts are important goals in machine learning and artificial intelligence. Although Bayesian Neural Networks (BNNs) allow for uncertainty in the predictions to be assessed, different sources of uncertainty are indistinguishable. We present Credal Bayesian Deep Learning (CBDL). Heuristically, CBDL allows to train an (uncountably) infinite ensemble of BNNs, using only finitely many elements. This is possible thanks to prior and likelihood finitely generated credal sets (FGCSs), a concept from the imprecise probability literature. Intuitively, convex combinations of a finite collection of prior-likelihood pairs are able to represent infinitely many such pairs. After training, CBDL outputs a set of posteriors on the parameters of the neural network. At inference time, such posterior set is used to derive a set of predictive distributions that is in turn utilized to distinguish between aleatoric and epistemic uncertainties, and to quantify them. The predictive set also produces either (i) a collection of outputs enjoying desirable probabilistic guarantees, or (ii) the single output that is deemed the best, that is, the one having the highest predictive lower probability - another imprecise-probabilistic concept. CBDL is more robust than single BNNs to prior and likelihood misspecification, and to distribution shift. We show that CBDL is better at quantifying and disentangling different types of uncertainties than single BNNs and ensemble of BNNs. In addition, we apply CBDL to two case studies to demonstrate its downstream tasks capabilities: one, for motion prediction in autonomous driving scenarios, and two, to model blood glucose and insulin dynamics for artificial pancreas control. We show that CBDL performs better when compared to an ensemble of BNNs baseline. ## 1 Introduction One of the greatest virtues an individual can have is arguably being aware of their own ignorance, and acting cautiously as a consequence. Similarly, an autonomous system using neural networks (NNs) would greatly benefit from understanding the probabilistic properties of the NN's output (for example, its robustness to distribution shift), in order to incorporate them into any further decision-making. In this paper, we present a procedure that allows us to give a machine such a desirable quality. In the last few years, there has been a proliferation of work on calibrating (classification) NNs, in order to estimate the confidence in their outputs (Guo et al., 2017) or to produce conformal sets that are guaranteed to contain the true label, in a probably approximately correct (PAC) sense (Park et al., 2020). While such methods are a promising first step, they require a calibration set (in addition to the original training set) and cannot be directly used on out-of-distribution data without further examples. Bayesian neural networks (BNNs) offer one approach to overcome the above limitations. The Bayesian paradigm provides a rigorous framework to analyze and train uncertainty-aware neural networks, and more generally to support the development of learning algorithms (Jospin et al., 2022). In addition, it overcomes some of the drawbacks of deep learning models, namely that they are prone to overfitting, which adversely affects their generalization capabilities, and that they tend to be overconfident about their predictions when they provide a confidence interval. BNNs, though, are trained using a single prior, which may still suffer from miscalibration and robustness issues (Lenk & Orme, 2009). In this work we introduce Credal Bayesian Deep Learning (CBDL), a procedure that draws on concepts from the imprecise probability (IP) literature (Augustin et al., 2014; Walley, 1991). Unlike other techniques in the fields of artificial intelligence (AI) and machine learning (ML) involving imprecise probabilities - that typically only focus on classification problems - CBDL can be used for both classification and regression. It captures the ambiguity the designer faces when selecting which prior to choose for the parameters of a neural network and which likelihood distribution to choose for the training data at hand. CBDL can be thought of as a NN trained using prior and likelihood finitely generated credal sets (FGCSs), Pprior and Plik, respectively.1 They are convex sets of probability measures having finitely many extreme elements (the elements that cannot be written as a convex combination of one another), exPprior and exPlik, respectively (see also Remark 2). A very simple example of a prior FGCS Pprior is the collection of all the convex combinations of two one-dimensional Normal distributions N (µ1, σ2 1 ) and N (µ2, σ2 2 ), i.e., Pprior = {P : P = βN (µ1, σ2 1 ) + (1 − β)N (µ2, σ2 2 ), for all β ∈ [0, 1]}. Consequently, in this toy example we have that exPprior = {N (µ1, σ2 1 ), N (µ2, σ2 2 )}. FGCSs are further examined in section 2.2. Given the use of finitely generated credal sets, CBDL can also be seen as a non-condensed (uncountably) infinite ensemble of BNNs - each BNN corresponding to a pair (*P, L*) of prior P from the prior FGCS Pprior and likelihood L from the likelihood FGCS Plik. Such infinite ensemble, though, is carried out using only finitely many elements, that is, only pairs (P ex, Lex) of components of the extreme sets exPprior and exPlik. This because every element P of Pprior can be obtained from a convex combinations of the elements of exPprior, and similarly for the likelihood FGCS. After training, CBDL produces a posterior FGCS Ppost on the parameters of the neural network. At inference time, Ppost is used to derive a predictive FGCS Ppred, that is, given a new input, a set of plausible distributions over the space of outputs.2In turn, Ppred generates a set of outputs - or a single output, depending on the user's needs - that enjoys desirable probabilistic guarantees. CBDL also gives a way of quantifying and disentangling different types of uncertainties within Ppred. We use a credal set approach to overcome some of the drawbacks of single BNNs. In particular, CBDL allows to counter the criticism to the practice in (standard) Bayesian statistics of (i) using a single, arbitrary prior to represent the initial state of ignorance of the agent, (ii) using non-informative priors to model ignorance, and (iii) using a single, arbitrary likelihood to represent the agent's knowledge about the sampling model.3 As a consequence, credal sets make the analysis more robust to prior and likelihood misspecification. In addition, they make it possible to quantify and distinguish between epistemic and aleatoric uncertainties (EU and AU, respectively). This is desirable in light of several areas of recent ML research, such as Bayesian deep learning (Depeweg et al., 2018; Kendall & Gal, 2017a), adversarial example detection (Smith & Gal, 2018), and data augmentation in Bayesian classification (Kapoor et al., 2022). AU refers to the uncertainty that is inherent to the data generating process; as such, it is irreducible. Think, for example, of a coin toss. No matter how many times the coin is tossed, the stochastic variability of the experiment cannot be eliminated. EU, instead, refers to the lack of knowledge about the data generating process; as such, it is reducible. It can be lessened on the basis of additional data. For example, after only a few tosses, we are unable to gauge whether a coin is biased or not, but if we repeat the experiment long enough, this type of uncertainty vanishes. We note in passing that EU cannot be captured using a single BNN (Hüllermeier & Waegeman, 2021). This because selecting a unique prior and a unique likelihood implicitly assumes perfect knowledge around the true prior and the true data generating process. In turn, a unique distribution is only able to retrieve AU. Methods that disentangle between the two types of uncertainties that are based on a single distribution are ad-hoc, but are not theoretically well-justified. EU can be typically reduced by retraining the model using an augmented training set (Lin et al., 2023) (e.g., via semantic preserving transformations (Kaur et al., 2023), Puzzle Mix (Kim et al., 2020), etc.). On the other hand, 1Intuitively, the larger these credal sets, the higher prior and likelihood ambiguity the user faces. 2As we shall see in section 3.1, since computing the posteriors and the predictive distributions is oftentimes an intractable problem, we approximate the elements of Ppost and of Ppred using Variational Inference (VI). As a consequence, we denote the VI-approximated posterior and predictive credal sets as P˘post and Pˆpred, respectively. 3Criticisms (i) and (iii) are also pointed out in Manchingal & Cuzzolin (2022, Section 2.2). since AU is irreducible, there is an increasing need for ML techniques that are able to detect an excess of AU and query for human help. Remark 1. EU should not be confused with the concept of epistemic probability (de Finetti, 1974; 1975; Walley, 1991). In the subjective probability literature, epistemic probability can be captured by a single distribution. Its best definition can be found in Walley (1991, Sections 1.3.2 and 2.11.2). There, the author specifies how epistemic probabilities model logical or psychological degrees of partial belief of the agent. We remark, though, how de Finetti and Walley work with finitely additive probabilities, while in this paper we use countably additive probabilities. The motivation for working with credal sets is threefold: (i) it allows to be robust against prior and likelihood misspecification; (ii) unlike when using a single probability distribution, it permits to represent ignorance in the sense of lack of knowledge; (iii) it allows to quantify and disentangle between EU and AU. A more in-depth discussion can be found in Appendix A. In addition, we point out that despite a hierarchical Bayesian model (HBM) approach seems to be a viable alternative to one based on credal sets, these latter are better justified philosophically, and do not suffer from the same theoretical shortcomings of HBM procedures (Bernardo, 1979; Hüllermeier & Waegeman, 2021; Jeffreys, 1946; Walley, 1991). A more detailed explanation can be found in Appendix B. Let us mention that, given our motivations, comparing CBDL with non-Bayesian techniques like the evidential-theory-based methodologies (Amini et al., 2020; Charpentier et al., 2020; Sensoy et al., 2018) or conformal prediction (Vovk et al., 2022; Shafer & Vovk, 2008; Gibbs et al., 2023; Barber et al., 2023) needs a separate treatment, and is beyond the current scope. We will delve into it in future work. We summarize our contributions next: (1) We present CBDL, and develop the theoretical tools and the algorithm required to use it in practice. (2) We show that a CBDL approach is more robust than single BNNs to prior and likelihood misspecification, and to distribution shifts. We also explain how, during inference, the credal set of posteriors Ppost on the network parameters obtained during training is used to derive a credal set of predictive distributions Ppred, and in turn a set of outcomes - or a single outcome – that enjoys probabilistic guarantees.4(3) We show how CBDL is better at quantifying and disentangling AU and EU than single BNNs and ensemble of BNNs. We also apply CBDL to two safety critical systems to demonstrate its downstream tasks capabilities. One, motion prediction for autonomous driving, and two, the human insulin and blood glucose dynamics for artificial pancreas control. We demonstrate improvements in both these settings with respect to ensemble of BNNs methods, the reason being that better uncertainty quantification leads to a positive impact on the decisions made using them.5 Before moving on, let us point out that while CBDL pays a computational price coming from the use of credal sets, it is able to quantify both EU and AU, unlike single BNNs, and to do so in a principled manner, unlike ensemble of BNNs. In addition, it requires less stringent assumptions on the nature of the prior and likelihood ambiguity faced by the agent than other imprecise-probabilities-based techniques, as explained in Appendices B and L. We also stress that if the user prioritizes computational efficiency, they should use backpropagation-based methods, as they are much faster than Bayesian techniques. In safetycritical situations instead - where using uncertainty-informed methods is crucial for quantifying the types of uncertainties, but also for the outcome of the analysis at hand - CBDL is a natural choice. Structure of the paper. Section 2 presents the needed preliminary concepts, followed by section 3 that introduces and discusses the CBDL algorithm, together with its theoretical properties. We present our experimental results in section 4, and we examine the related work in section 5. Section 6 concludes our work. In the appendices, we give further theoretical and philosophical arguments and we prove our claims. 4Once again, as we shall see in section 3.1, we approximate the elements of the posterior and the predictive credal sets using Variational Inference (VI). 5We note in passing how recently Mucsányi et al. (2024) show that (finite) deep ensembles are the current state-of-the-art methods for quantifying and disentangling different types of uncertainties on ImageNet. Our experiments show that CBDL improves on (finite) ensembles of BNNs, both in uncertainty quantification and disentanglement, and in downstream tasks performances. Together with the findings in Mucsányi et al. (2024), this is an additional argument in favor of the effectiveness of our methodology. ## 2 Background And Preliminaries In this section, we present the background notions that are needed to understand our main results. In section 2.1 we introduce Bayesian neural networks. Section 2.2 discusses (finitely generated) credal sets, upper and lower probabilities, and imprecise highest density regions. Section 2.3 introduces the concepts of upper and lower entropy, which are used to quantify and disentangle AU and EU. The reader that is familiar with these concepts can skip to section 3. ## 2.1 Bayesian Neural Networks In line with the recent survey on BNNs by Jospin et al. (2022), Bayes' theorem can be stated as P(H | D) = [P(D | H)P(H)]/P(D) = P(D, H)/RP(*D, H*′)dH′, where H is a hypothesis about which the agent holds some prior beliefs, and D is the data the agent uses to update their initial opinion. Probability distribution P(D | H) represents how likely it is to observe data D if hypothesis H were to be true, and is called likelihood, while probability distribution P(H) represents the agent's initial opinion around the plausibility of hypothesis H, and is called *prior*. The *evidence* available is encoded in P(D) = RP(*D, H*′)dH′, while posterior probability P(H | D) represents the agent's updated opinion. Using Bayes' theorem to train a predictor can be understood as learning from data D: the Bayesian paradigm offers an established way of quantifying uncertainty in deep learning models. BNNs are stochastic artificial neural networks (ANNs) trained using a Bayesian approach (Goan & Fookes, 2020; Jospin et al., 2022; Lampinen & Vehtari, 2001; Titterington, 2004; Wang & Yeung, 2021). The goal of ANNs is to represent an arbitrary function y = Φ(x). Let θ represent the parameters of the network, and call Θ the space θ belongs to. Stochastic neural networks are a type of ANN built by introducing stochastic components to the network. This is achieved by giving the network either a stochastic activation or stochastic weights to simulate multiple possible models with their associated probability distribution. This can be summarized as θ ∼ p(θ), y = Φθ(x) +ε, where Φ depends on θ to highlight the stochastic nature of the neural network, p is the density of a probability measure P on Θ, 6 and ε represents random noise to account for the fact that function Φθ is just an approximation. The connection between the BNN notation and the general one of the Bayes' theorem is explained in the next two paragraphs. To design a BNN, the first step is to choose a deep neural network *architecture*, that is, functional model Φθ. Then, the agent specifies the *stochastic model*, that is, a prior distribution over the possible model parametrization p(θ), and a prior confidence in the predictive power of the model p(y | *x, θ*). Given the usual assumption that multiple data points from the training set are independent, the product Q(x,y)∈D p(y | *x, θ*) represents the *likelihood* of outputs y ∈ Dy given inputs x ∈ Dx and parameter θ, where (a) D = Dx × Dy is the training set; (b) Dx = {xi} n i=1 is the collection of training inputs, which is a subset of the space X of inputs; (c) Dy = {yi} n i=1 is the collection of training outputs, which is a subset of the space Y of outputs. The model parametrization can be considered to be hypothesis H. Following Jospin et al. (2022), we assume independence between model parameters θ and training inputs Dx, in formulas Dx ⊥⊥ θ. Hence, Bayes' theorem can be rewritten as $$p(\theta\mid D)={\frac{p(D_{\mathbf{y}}\mid D_{\mathbf{x}},\theta)p(\theta)}{\int_{\Theta}p(D_{\mathbf{y}}\mid D_{\mathbf{x}},\theta^{\prime})p(\theta^{\prime})\mathrm{d}\theta^{\prime}}}\propto p(D_{\mathbf{y}}\mid D_{\mathbf{x}},\theta)p(\theta).$$ Notice that the equality comes from having assumed Dx ⊥⊥ θ. Posterior density p(θ | D) is typically high dimensional and highly nonconvex (Izmailov et al., 2021c; Jospin et al., 2022), so computing it and sampling from it is a difficult task. The first issue is tackled using Variational Inference (VI) procedures, while Markov Chain Monte Carlo (MCMC) methods address the second challenge. Both are reviewed - in the context of machine learning - in Jospin et al. (2022, Section V), where the authors also inspect their limitations. BNNs can be used for both regression and classification (Jospin et al., 2022, Section II); besides having a solid theoretical justification, there are practical benefits from using BNNs, as presented in Jospin et al. (2022, Section III). 6We can write p as the Radon-Nikodym derivative of P with respect to some σ-finite dominating measure µ, that is, p = dP/dµ. ## 2.2 Imprecise Probabilities As CBDL is rooted in the theory of imprecise probabilities (IPs), in this section we give a gentle introduction to the IP concepts we will use throughout the paper. CBDL is based on the *Bayesian sensitivity analysis* (BSA) approach to IPs, that in turn is grounded in the *dogma of ideal precision* (DIP) (Berger, 1984), (Walley, 1991, Section 5.9). The DIP posits that in any problem there is an *ideal probability model* which is precise, but which may not be precisely known. We call this condition *ambiguity* (Ellsberg, 1961; Gilboa & Marinacci, 2013). Facing ambiguity can be represented mathematically by a set Pprior of priors and a set Plik of likelihoods that seem "plausible" or "fit" to express the agent's beliefs on the parameters of interest and their knowledge of the data generating process (DGP). Generally speaking, the farther apart the "boundary elements" of the sets (i.e., their infimum and supremum), the higher the agent's ambiguity. Of course, if Pprior and Plik are singletons we go back to the usual Bayesian paradigm. A procedure based on sets Pprior and Plik yields results that are more robust to prior and likelihood misspecification than a regular Bayesian method. In the presence of prior ignorance and indecisiveness about the sampling model, it is better to give answers in the form of intervals or sets, rather than arbitrarily select a prior and a likelihood, and then update. Sets Pprior and Plik allow to represent *indecision*, thus leading to less informative but more robust and valid conclusions. Remark 2. *Throughout the paper, we denote by* Π = {P1, . . . , Pk}, k ∈ N, a finite set of probabilities on a generic space Ω, such that for all j ∈ {1, . . . , k}, Pj *cannot be written as a convex combination of the other* k − 1 components of Π. We denote by Π′its convex hull Π′ ≡ *Conv*(Π)*, i.e., the set of probabilities* Q on Ω that can be written as Q(A) = Pk j=1 βjPj (A), for all A ⊂ Ω, where the βj 's are elements of [0, 1] *that sum* up to 1. In the literature, it is referred to as a Finitely Generated Credal Set (FGCS, Levi (1980); Cozman (2000b)). Notice then that the extreme elements of Π′correspond to the elements of Π*, that is, ex*Π′ = Π. Simple graphical representations of finitely generated credal sets are given in Figures 1 and 2. ![4_image_0.png](4_image_0.png) Figure 1: Suppose we are in a 3-class classification setting, so Ω = {ω1, ω2, ω3}. Then, any probability measure P on Ω can be seen as a probability vector. For example, suppose P({ω1}) = 0.6, P({ω2}) = 0.3, and P({ω3}) = 0.1. We have that P ≡ (0.6, 0.3, 0.1)⊤. Since its elements are positive and sum up to 1, probability vector P belongs to the unit simplex, the purple triangle in the figure. Then, we can specify Π = {P1*, . . . , P*5}, and obtain as a consequence that Π ′ = Conv(Π) is the orange pentagon. It is a convex polygon with finitely many extreme elements, and it is the geometric representation of a finitely generated credal set. Let us now introduce the concepts of *lower* and *upper probabilities*. The lower probability P associated with Π is given by P(A) = infP ∈Π P(A), for all A ⊂ Ω. The upper probability P associated with Π is defined as the conjugate to P, that is, P(A) := 1 − P(Ac) = supP ∈Π P(A), for all A ⊂ Ω. These definitions hold even if Π is not finite. Then, we have the following important result. Proposition 3. P is the upper probability for Π if and only if it is also the upper probability for Π′*. That* is, P(A) = supP ∈Π P(A) = supP ′∈Π′ P ′(A), for all A ⊂ Ω. The same holds for the lower probability. A version of Proposition 3 was proven in Dantzig (1963), while a variant for finitely additive probability measures can be found in Walley (1991, Section 3.6). A simple graphical representation of upper and lower probabilities for a set A is given in Figure 2. We now use lower probability P to define the α*-level Imprecise Highest Density Region* (IHDR), for some α ∈ [0, 1]. Definition 4. (Coolen, 1992, Section 2) Let α be any value in [0, 1]. Then, set IRα(Π′) ⊂ Ω *is called a* (1 − α)-Imprecise Highest Density Region (IHDR) if 1. P[{ω ∈ IRα(Π′)}] ≥ 1 − α; 2. RIRα(Π′) dω is a minimum. If Ω *is at most countable, we replace* RIRα(Π′) dω with \#IRα(Π′)*, where* \# *denotes the cardinality operator.* ![5_image_0.png](5_image_0.png) Figure 2: In this figure, a replica of Flint et al. (2017, Figure 1), Π = {P1, P2}, where P1 and P2 are two Normal distributions whose probability density functions (pdf's) p1 and p2 are given by the dashed blue and brown curves, respectively. Their convex hull is Π ′ = Conv(Π) = {Q : Q = βP1 + (1 − β)P2, for all β ∈ [0, 1]}. The pdf q of an element Q of Π ′is depicted by a solid black curve. In addition, let A = [−0.8, −0.4]. Then, P(A) = R −0.4 −0.8 p2(ω)dω ≈ 0, while P(A) is given by the red shaded area under p1, that is, P(A) = R −0.4 −0.8 p1(ω)dω. Definition 4 holds also if Π = exΠ′is not finite. Notice that condition 2 is needed so that IRα(Π′) is the subset of Ω having the lowest possible cardinality, which still satisfies condition 1. By the definition of lower probability, Definition 4 implies that P ′[{ω ∈ IRα(Π′)}] ≥ 1 − α, for all P ′ ∈ Π′. Here lies the appeal of the IHDR concept. Let us give a simple example, borrowed from Caprio et al. (2024a). Suppose Ω = {ω1*, . . . , ω*5}, Π = {P1, P2, P3}, and α = 0.1. The numerical values for Ps({ωj}) are given in Table 1, for all s ∈ {1, 2, 3} and all j ∈ {1*, . . . ,* 5}. Then, from Proposition 3 and Definition 4, we have that IRα(Π′) = {ω1, ω2, ω3}. | ω1 | ω2 | ω3 | ω4 | ω5 | | |------|------|------|------|-------|-------| | P1 | 0.7 | 0.25 | 0.03 | 0.01 | 0.01 | | P2 | 0.6 | 0.2 | 0.1 | 0.05 | 0.05 | | P3 | 0.5 | 0.3 | 0.15 | 0.025 | 0.025 | Table 1: Numerical values for our example. It is easy to see that the smallest subset of Ω that is assigned a probability of at least 0.9 by all the elements of Π is {ω1, ω2, ω3}. An operative way of building the IHDR is to consider the union of the (precise) Highest Density Regions (HDRs) of the elements of Π = exΠ′. 7 Let us define them formally. Definition 5. (Coolen, 1992, Section 1) Pick any Pj ∈ Π, j ∈ {1, . . . , k}. Let α *be any value in* [0, 1]. Then, set Rα(Pj ) ⊂ Ω is called a (1 − α)*-Highest Density Region (HDR) for* Pj if $$P_{j}[\{\omega\in R_{\alpha}(P_{j})\}]\geq1-\alpha\;\;\;\;\;\;a n d$$ $$\int_{R_{\alpha}(P_{j})}d\omega\,\,\,i s\,\,a\,\,\,m i n i m u m.$$ 7Here, by "operative" we mean that this procedure to compute the IHDR is easy to carry out in practice. If Ω *is at most countable, we replace* RRα(Pj ) dω with \#Rα(Pj )*. Equivalently (Hyndman, 1996),* $R_{\alpha}(l)$ Rα(Pj ) = {ω ∈ Ω : pj (ω) ≥ p α j } ⊂ Ω, where pj is the pdf or the probability mass function (pmf) of Pj *, and* p α j is a constant value. In particular, it is the largest constant such that Pj [{ω ∈ Rα(Pj )}] ≥ 1 − α. In dimension 1, Rα(Pj ) can be interpreted as the smallest collection of elements of Ω (interval or union of intervals) to which distribution Pj assigns probability of at least 1 − α. As we can see, HDRs are a Bayesian counterpart of confidence intervals.8 We give a simple visual example in Figure 3. ![6_image_0.png](6_image_0.png) $$\alpha(P_{j})$$ Figure 3: The 0.25-HDR from a Normal Mixture density. This picture is a replica of Hyndman (1996, Figure 1). The geometric representation of "75% probability according to Pj " is the area between the pdf curve pj (ω) and the horizontal bar corresponding to p 0.25 j. A higher probability coverage (according to Pj ) would correspond to a lower constant, so p α j < p0.25 j, for all α < 0.25. In the limit, we recover 100% coverage at p 0 j = 0. As we mentioned earlier, an operative way of obtaining IHDR IRα(Π′) is by putting IRα(Π′) = ∪ k j=1Rα(Pj ). Thanks to Proposition 3, by taking the union of the HDRs, we ensure that all the probability measures in the credal set Π′ = Conv(Π) assign probability of at least 1 − α to the event {ω ∈ IRα(Π′)}. In turn, this implies that P[{ω ∈ IRα(Π′)}] = minj∈{1*,...,k*} Pj [{ω ∈ IRα(Π′)}] ≥ 1 − α. We also have that the difference between the upper and lower probabilities of {ω ∈ IRα(Π′)} is bounded by α. To see this, notice that P[{ω ∈ IRα(Π′)}] ≤ 1, so P[{ω ∈ IRα(Π′)}] − P[{ω ∈ IRα(Π′)}] ≤ α. ## 2.3 Quantifying And Disentangling Aleatoric And Epistemic Uncertainties Recall that, given a probability measure P on a generic space Ω, the (Shannon) entropy of P is defined as H(P) := E[− log p] = −RΩ log[p(ω)]P(dω) if Ω is uncountable, where p denotes the pdf of P. If Ω is at most countable, we have that H(P) = −Pω∈Ω P({ω}) log[P({ω})]. As pointed out by Dubois & Hüllermeier (2007); Hüllermeier & Waegeman (2021), the entropy primarily captures the shape of the distribution, namely its "peakedness" or non-uniformity, and hence informs about the predictability of the outcome of a random experiment: the higher its value, the lower the predictability. Then, we can define the imprecise versions of the Shannon entropy as proposed by Abellán et al. (2006); Hüllermeier & Waegeman (2021), H(P) := supP ∈Π′ H(P) and H(P) := infP ∈Π′ H(P), called the *upper* and *lower Shannon entropy*, respectively.9 Notice that these definitions hold for all sets of probabilities, not just for (finitely generated) credal sets. The upper entropy is a measure of total uncertainty since it represents the minimum level of predictability associated with the elements of Π′. In Abellán et al. (2006); Hüllermeier & Waegeman (2021), the authors posit that it can be decomposed as a sum of aleatoric and epistemic uncertainties, and that this latter can be specified as the difference between upper and lower entropy, thus obtaining $$\underbrace{\overline{{{H}}}(P^{\prime})}_{\mathrm{TU}(\Pi^{\prime})}=\underbrace{H(P^{\prime})}_{\mathrm{AU}(\Pi^{\prime})}+\underbrace{\left[\overline{{{H}}}(P^{\prime})-\underline{{{H}}}(P^{\prime})\right]}_{\mathrm{EU}(\Pi^{\prime})},$$ where TU(Π′) denotes the total uncertainty associated with set Π′, AU(Π′) is the AU associated with Π′, and EU(Π′) represents the EU associated with Π′. We have the following proposition. 8Hence standard choices for the value of α are 0.1, 0.05, 0.01. 9In Appendix E, we provide bounds to the values of upper and lower entropy. Proposition 6. Let Π, Π′*be sets of probability measures as the ones considered in Remark 2. Then,* supP ∈Π H(P) = H(P) ≤ H(P ′) = supP ′∈Π′ H(P ′) and infP ∈Π H(P) = H(P) = H(P ′) = infP ′∈Π′ H(P ′). Proposition 6 tells us that the upper entropy of the extreme elements in Π = exΠ′is a lower bound for the upper entropy of the whole credal set Π′, and that the lower entropy of the extreme elements in Π is equivalent to the lower entropy of the whole credal set Π′. These facts imply that AU(Π′) = AU(Π), and that EU(Π′) ≥ EU(Π). In addition, as a consequence of (Smieja & Tabor, 2012, Theorem III.1), we have that EU(Π′) ≤ EU(Π) + log(\#Π). In turn, we have that EU(Π′) ∈ [EU(Π),EU(Π) + log(\#Π)]. ## 3 Our Procedure And Its Properties This is the main portion of the paper. In section 3.1, we present and discuss the CBDL algorithm. Its theoretical properties are derived in section 3.2. ## 3.1 Cbdl Algorithm Recall that D = Dx × Dy denotes the training set, where Dx = {xi} n i=1 ⊂ X is the collection of training inputs, Dy = {yi} n i=1 ⊂ Y is the collection of training outputs, and X and Y denote the input and output spaces, respectively. We then denote by P a generic prior on the parameters θ ∈ Θ of a BNN having pdf p, and by L ≡ Lx,θ a generic likelihood on the space Y of outputs having pdf ℓ ≡ ℓx,θ. The act of computing the posterior from prior P and likelihood L using a BNN is designated by post[*P, L*]. The CBDL procedure is presented in Algorithm 1, and it is discussed in the following paragraphs. Algorithm 1 Credal Bayesian Deep Learning (CBDL) - Training and Inference During Training Step 1 Specify K priors exPprior = {P ex k } K k=1 Step 2 Specify S likelihoods exPlik = {L ex s } S s=1 Step 3 Compute Pk,s(· | D) = post[P ex k , Lex s ], for all k and all s ▷ % Approximated via VI by P˘k,s% During Inference Inputs: New input x˜ ∈ X Parameters: Confidence parameter α ∈ [0, 1] Outputs: Aleatoric and Epistemic Uncertainties, α-level IHDR Step 4 Compute P pred k,s = pred[P˘k,s, Lex s ], for all k and all s ▷ % Approximated via VI by Pˆpred k,s % Step 5 Compute and return AU(Pˆpred) and the bounds for EU(Pˆpred) Step 6 Compute and return the (1 − α)-IHDR IRα(Pˆpred) During training, in **Step 1** the user specifies K priors on the parameters of the neural network, that constitute the extrema exPprior of the prior FGCS Pprior. Similarly, in **Step 2** they elicit S likelihoods capturing the possible architectures of the neural network, that correspond to the extrema exPlik of the likelihood FGCS Plik. Let us give an example. Outlined in Jospin et al. (2022, Sections IV-B and IV-C1), for classification, the standard process for BNNs involves - A Normal prior with zero mean and diagonal covariance σ 2I on the coefficients of the network, that is, p(θ) = N (0, σ2I). In the context of CBDL, we could specify, e.g., exPprior = {P : p(θ) = N (µ, σ2I), µ ∈ {µ−, 0, µ+}, σ 2 ∈ {3, 7}}. That is, the extreme elements of the prior credal set are five independent Normals having different levels of "fatness" of the tails, and centered at a vector µ+ having positive entries, a vector µ− having negative entries, and a vector 0 having entries equal to 0. They capture the ideas of positive bias, negative bias, and no bias of the coefficients, respectively. This is done to hedge against possible prior misspecification. - A Categorical likelihood, p(y | *x, θ*) = Cat(Φθ(x)), whose parameter is given by the output of a functional model Φθ. In the context of CBDL, we could specify the set of extreme elements of the likelihood credal set as exPlik = {L : ℓx,θ(y) = Cat(Φs,θ(x)), s ∈ {1*, . . . , S*}}. Specifying set exPlik, then, corresponds to eliciting a finite number S of possible (parametrized) architectures for the neural network Φs,θ, s ∈ {1*, . . . , S*}, and obtain, as a consequence, S categorical distributions {Cat(Φs,θ(x))} S s=1. This captures the ambiguity around the true data generating process faced by the agent, and allows them to hedge against likelihood misspecification. More in general, we can use the priors and likelihoods that better fit the type of analysis we are performing.10 For example, for the choice of the priors we refer to Fortuin et al. (2021), where the authors study the problem of selecting the right type of prior for BNNs. Step 3 performs an element-wise application of Bayes' rule for all the elements of exPprior and exPlik. Each posterior is approximated using the Variational Inference (VI) method (Jospin et al., 2022, Section V). That is, we project every posterior Pk,s(· | D) onto a set S of "well-behaved" distributions (e.g., Normals) using the KL divergence. In formulas, P˘k,s = arg minQ∈S KL[Q∥Pk,s(· | D)]. By "well-behaved", we mean that they have to satisfy the conditions in Zhang & Gao (2020, Sections 2 and 3).11 This ensures that as the sample size goes to infinity, the approximated posteriors converge to the true data generating process. We also point out how despite using a VI approximation for the exact posteriors, as we shall see in the next section, the credal set of approximated posteriors is closer to the "oracle" posterior P o(· | D) than any of its elements. As a consequence, working with credal sets leads to VI posterior approximations that are better than the ones resulting from a single BNN, or an ensemble of BNNs, where several BNNs are combined into one. Remark 7. *Although highly unlikely in practice, it is theoretically possible that the VI approximation of the* (finite) set {Pk,s(· | D)}k,s *of posteriors is a singleton, see Figure 4. While the conditions in Zhang & Gao* (2020) guarantee that asymptotically the approximated posteriors coincide with the true data generating process, this typically does not happen with finite-dimensional datasets. As a consequence, obtaining a singleton when projecting {Pk,s(· | D)}k,s onto S *may result in an underestimation of the uncertainties faced by the* user. In that case, we either consider a different set - whose elements still satisfy the conditions in Zhang & Gao (2020, Sections 2 and 3) - on which to project {Pk,s(· | D)}k,s *according to the KL divergence, or we* use a different "projection operator", that is, a divergence different from the KL. For example, Rényi and χ 2 divergences, or Hellinger and total variation metrics are suggested by Zhang & Gao (2020). Alternatively, we can consider a different approximation strategy altogether, for instance the Laplace approximation (Ritter et al., 2018). After **Step 3**, we obtain a finite set {P˘k,s}k,s of VI approximation of the posteriors on the network parameters, whose cardinality is K × S. Its convex hull constitutes P˘post, that is, the VI-approximated posterior FGCS. We assume that exP˘post = {P˘k,s}k,s. This is an assumption because it may well be that - due to the approximation procedure - some of the elements of {P˘k,s}k,s are not independent of one another. We defer to future work the design of a procedure that finds the elements of {P˘k,s}k,s that cannot be written as a convex combination of one another. Being a combinatorial task, **Step 3** is a computational bottleneck of Algorithm 1. We have to calculate K × S VI approximations to as many posteriors, but this allows us to forego any additional assumptions on the nature of the lower and upper probabilities that are oftentimes required by other imprecise-probabilitiesbased techniques.12 Clearly, CBDL is simplified if either Pprior or Plik are singletons. Notice that in the case that Pprior and Plik are both not singletons, for all A ⊂ Θ the interval [P˘(A), P˘(A)] is wider than the case when one or the other is a singleton. In the limiting case where both are singletons, we retrieve the usual Bayesian updating, so the interval shrinks down to a point. Before moving to inference time, let us remark a difference between Bayesian Model Averaging (BMA) and CBDL. In BMA, the user specifies a distribution on the models. Translated in the notation we use in this work, this means having a discrete distribution Q over the K ×S prior-likelihood combinations, that is, over the elements of exP˘post. Such Q is then used to select an element P˘⋆from the (VI-approximated) posterior 10Choosing 2 to 5 priors and likelihoods is usually enough to safely hedge against prior and likelihood misspecification. 11We assume that the conditions on the priors and the likelihoods given in Zhang & Gao (2020) are satisfied. 12If we are willing to make such assumptions, Theorem 10 in Appendix D shows how to compute the upper posterior using only upper prior and upper likelihood. ![9_image_0.png](9_image_0.png) Figure 4: Let ∆Θ denote the space of probability measures on Θ. Suppose that in the analysis at hand we specified three priors and only one likelihood, so S = 1 and we can drop the s index. Let {Pk(· | D)} 3 k=1 be the collection of exact posteriors, so that the black segment represents the exact posterior FGCS. Then, if we project the elements of {Pk(· | D)} 3 k=1 onto S1 via the KL divergence, we obtain the same distribution P˘ . This is detrimental to the analysis because such an approximation underestimates epistemic (and possibly also aleatoric) uncertainty faced by the agent. Then, the user should specify a different set S2 of "well-behaved" distributions onto which project the elements of {Pk(· | D)} 3 k=1. In the figure, we see that they are projected onto S2 via the KL divergence to obtain P˘1, P˘2, and P˘3. The convex hull of these latter, captured by the red shaded triangle, represents the variational approximation of the exact posterior FGCS. $$\mathrm{FGCS}\ {\tilde{\cal P}}_{\mathrm{post}}\ \mathrm{as}$$ Q({P˘k,s})P˘k,s, (1) $$\breve{\mathcal{P}}_{\mathrm{post}}\ni\breve{P}^{\star}=\sum_{k,s}Q(\{\breve{P}_{k,s}\})\breve{P}_{k,s},$$ where Q({P˘k,s}) ∈ [0, 1] for all k and all s, and Pk,s Q({P˘k,s}) = 1. Instead, CBDL does not select a unique distribution from P˘post. Rather, distributions are kept separate so to be able to derive a predictive FGCS in **Step 4** of Algorithm 1, that is in turn used to quantify and disentangle predictive uncertainties, and to compute the predictive IHDR, that is, a collection of outputs having a high probability of being the correct ones for a new input x˜. During inference, a new input x˜ is provided. In **Step 4**, every element of exP˘post is used to derive a predictive distribution P pred k,s = pred[P˘k,s, Lex s ] on the output space Y. In particular, for every k and every s, the pdf p pred k,s of P pred k,s is obtained as $$p_{k,s}^{\text{pred}}(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})=\int_{\Theta}\ell_{s}^{\text{ex}}(\tilde{y}\mid\theta,\tilde{x})\cdot p_{k,s}(\theta\mid x_{1},y_{1},\ldots,x_{n},y_{n})\text{d}\theta$$ $$\approx\int_{\Theta}\ell_{s}^{\text{ex}}(\tilde{y}\mid\theta,\tilde{x})\cdot\tilde{p}_{k,s}(\theta)\text{d}\theta,$$ $$(1)$$ where ℓ ex sis the pdf of likelihood L ex s , pk,s is the pdf of the true posterior Pk,s(· | D), p˘k,s is the pdf of the VI-approximated posterior P˘k,s, and y˜ is the output associated to the new input x˜ (see Appendix F for more details). Each predictive distribution P pred k,s is approximated to Pˆpred k,s (e.g., using Normals) via Variational Inference. The convex hull of the collection {Pˆpred k,s }k,s having cardinality K × S constitutes the VI-approximated predictive FGCS Pˆpred. Similarly to what we did for exP˘post, we assume that exPˆpred = {Pˆpred k,s }k,s. In **Step 5**, building on the results in section 2.3, and in particular on Proposition 6, we compute and return AU(Pˆpred) and the bounds for EU(Pˆpred). In particular, we have $$\mathrm{AU}({\mathcal{P}}_{\mathrm{pred}})\approx\mathrm{AU}({\hat{\cal P}}_{\mathrm{pred}})=\underline{{{H}}}({\hat{\cal P}}^{\mathrm{pred}})$$ AU(Ppred) ≈ AU(Pˆpred) = H(Pˆpred) (2) $$(2)$$ $\left(3\right)$. and EU(Ppred) ≈ EU(Pˆpred) ∈ [H(Pˆpred) − H(Pˆpred), H(Pˆpred) − H(Pˆpred) + log(K × S)], (3) where (i) H(Pˆpred) = mink,s H(Pˆpred k,s ); (ii) H(Pˆpred) = maxk,s H(Pˆpred k,s ); and (iii) Ppred is the "true" predictive FGCS, that is, the one we would have obtained had we been able to compute the exact posteriors in **Step 3** and the exact predictive distributions in **Step 4**. Let us point out a salient feature of CBDL. The AU and the (bounds for) the EU associated with the VI-approximated predictive FGCS Pˆpred embed uncertainty comparable to an uncountably infinite ensemble of BNNs, i.e., an ensemble of BNNs of cardinality ℵ1, despite the simple and intuitive mathematics over the finite set exPˆpred. They are not merely pessimistic results on the uncertainties associated with a finite ensemble of BNNs. We observe that we compute the AU and EU associated with the (VI-approximated) predictive credal set Pˆpred, rather than those related to the (VI-approximated) posterior credal set P˘post. We do so because we are ultimately interested in reporting the uncertainty around the predicted outputs given a new input in the problem at hand, more than the uncertainty on the parameters of the NN. Before commenting on the next step, let us pause here and add a remark. While a "bad choice" of priors and likelihoods may lead to maximal upper and lower entropy - H(Pˆpred) and H(Pˆpred), respectively - this is not a risk confined to our procedure. Poor modeling choices are an unavoidable risk in model-based techniques. This gave rise to the famous adage by George Box "essentially, all models are wrong but some are useful" (Box, 1976).13 We maintain that our method is indeed useful, since it overcomes some of the shortcomings of traditional Bayesian techniques - as explained in Appendices A and B. As for "regular" Bayesian methods, though, for our approach too the designer will need to make "plausible" choices for priors and likelihoods. Finally, in **Step 6**, we compute and return the α-level Imprecise Highest Density Region IRα(Pˆpred) for the VI-approximated predictive FGCS Pˆpred, which approximates the IHDR for the "true" predictive FGCS Ppred. It is the smallest subset of Y such that Pˆpred[{y˜ ∈ IRα(Pˆpred)}] ≥ 1 − α, for all Pˆpred ∈ Pˆpred, and for some α ∈ [0, 1]. It can be interpreted as the smallest collection of outputs y˜ that have a high probability of being the correct ones for the new input x˜, according to all the distributions in Pˆpred. Or equivalently, we can say that the correct output for the new input x˜ belongs to IHDR IRα(Pˆpred) with lower probability of at least 1 − α, a probabilistic guarantee for the set of outputs generated by our procedure. Notice also that the size of IRα(Pˆpred) is an increasing function of both AU and EU. As a consequence, it is related, but it is not equal, to the AU the agent faces. If we want to avoid to perform the procedure only to discover that IRα(Pˆpred) is "too large", then we can add an "AU check" after **Step 5**. This, together with computing IRα(Pˆpred) in a classification setting, is explored in Appendices G and H. As we have seen in section 2.2, we find IRα(Pˆpred) by taking the union of the K × S HDRs Rα(Pˆpred k,s ) of the extreme elements exPˆpred of Pˆpred. The HDR of a well-known distribution (for instance, a Normal) can be routinely obtained in R, e.g. using package HDInterval (Juat et al., 2022). We conclude this section with two remarks. First, we point out how CBDL does not depend on the method used to approximate the posterior and the predictive distributions: Variational Inference can be substituted by other approaches. CBDL can also be easily adapted to other TU, AU, and EU measures, as long as the measure chosen for the total uncertainty is bounded. Second, **Step 6** can be effortlessly modified so that CBDL produces the collection of outputs having the highest lower density. This is in line with the Naive Credal Classifier theory (Cozman, 2000a; Zaffalon, 2002). In this case, we forego control on the accuracy level 1 − α of the output region produced by CBDL. It is implemented as follows. In the modified version of Step 6, we compute $$\operatorname*{arg\,max}_{\tilde{y}\in{\mathcal{Y}}}\operatorname*{min}_{k,s}\hat{p}_{k,s}^{\mathrm{pred}}(\tilde{y}),$$ where pˆ pred k,s is the pdf of the VI-approximated predictive distribution Pˆpred k,s . If such arg max is not a singleton, and the user is set on CBDL outputting a unique value y˜ for the new input x˜, they can select one element uniformly at random from the arg max. 13Curiously, a similar motivation was brought forward by Pseudo-Dionysius the Areopagite in favor of the use of sacred images in the Christian tradition (Migne, 1857). While they do not capture the essence of God, these defective approximations help the believer's thought to elevate. ## 3.2 Theoretical Properties Of Cbdl Working with credal sets makes CBDL more robust to distribution shifts than single BNNs. To see this, we present the following general result, and then we apply it to our case. Let Π′ be an FGCS as in Remark 2, and consider a probability measure Ψ such that Ψ ̸∈ Π′. Proposition 8. Call d any metric and div any divergence on the space of probability measures of interest. Let d(Π′, Ψ) := infP ′∈Π′ d(P ′, Ψ) *and div*(Π′∥Ψ) := infP ′∈Π′ div(Π′∥Ψ)*. Then, for all* P ′ ∈ Π′, d(Π′, Ψ) ≤ d(P ′, Ψ) and div(Π′, Ψ) ≤ div(P ′∥Ψ). Proposition 8 holds if Π′is any set of probabilities, not just an FGCS.14 In Appendix I, we show that the above result still holds if the elements of Π′ and Ψ are defined on Euclidean spaces having different dimensions (Cai & Lim, 2022; Caprio, 2022). Let us now apply Proposition 8 to CBDL. Suppose that, when designing a single BNN, an agent chooses likelihood L, while when implementing CBDL, they specify in **Step 2** a finite set of likelihoods exPlik = {L ex s } S s=1, and then let the induced credal set Plik = Conv(exPlik) represent their uncertainty around the sampling model. Assume that L ∈ exPlik. This means that when designing the single BNN, the agent chooses arbitrarily which of the elements of exPlik to use. Suppose also that the "oracle" data generating process L ois different from L, L o ̸= L, so that we are actually in the presence of distribution shift. Then, we have two cases. (1) If the true sampling model L o belongs to Plik, then the distance - measured via a metric or a divergence - between Plik and L ois 0, while that between L and L ois positive. (2) If L o *̸∈ P*lik, then the distance between Plik and L ois no larger than the distance between L and L o, no matter (i) which metric or distance we use (Proposition 8), (ii) whether or not L o and the elements of Plik are defined on the same Euclidean space (Appendix I, Lemma 17). A visual representation is given in Figure 5. ![11_image_0.png](11_image_0.png) Figure 5: CBDL is more robust to distribution shifts than single BNNs. Here Plik is the convex hull of five plausible likelihoods, and d denotes a generic metric on the space ∆Y of probabilities on Y. We see how d(Plik, Lo) < d(L, Lo); if we replace metric d by a generic divergence div, the inequality would still hold. Let us add a remark here. Assume that the "oracle" prior P ois in the prior credal set Pprior and that the "oracle" likelihood L ois in the likelihood credal set Plik. Then, it is immediate to see that the "oracle" posterior P o(· | D) belongs to the posterior credal set Ppost. Naturally, this does not imply that the posterior credal set collapses to P o(· | D). In general, it is unlikely that a finite amount of data is able to completely annihilate all the epistemic uncertainty faced by the agent. What may happen is that if the training set is large enough, Ppost may be inscribed in a ball of small radius around P o(· | D). This does not mean that we suffer from under-confidence due to larger-than-necessary epistemic uncertainty. Rather, the relative epistemic uncertainty (measured by the difference between prior and posterior uncertainty, divided by the prior uncertainty) drops significantly. In addition, working with sets of prior and likelihoods allows us to hedge against prior and likelihood misspecification, a consequence of Proposition 8. 14More in general, Proposition 8 holds for any type of function f, not just metrics or divergences, because Proposition 8 essentially only applies the definition of the infimum. ## 4 Experiments From the previous sections, it is clear that CBDL improves on the uncertainty quantification capabilities of single BNNs.15 CBDL allows a better quantification of AU because of its robustness to misspecification stemming from Proposition 8. In a sense, the AU quantified by a single BNN is a function of the choices of prior and likelihood made by the user. In addition, as we pointed out before, EU cannot be quantified by a single distribution (Hüllermeier & Waegeman, 2021), and hence it cannot be obtained in a theoretically principled manner from a single BNN. In this section, we present our experimental findings. In section 4.1, we show that CBDL is better at quantifying and disentangling predictive AU and EU than ensemble of BNNs. In section 4.2, we analyze the downstream task performance of CBDL. We show that they perform better than an ensemble of BNNs (EBNN). To demonstrate the utility of our method, we study the behavior of certain safety-critical settings under distribution shifts and its ramifications. One, for motion prediction in autonomous driving scenarios, and two, to model blood glucose and insulin dynamics for artificial pancreas control. ## 4.1 Uncertainty Quantification Distribution shifts can introduce uncertainties in a system, which in turn can render the predictions meaningless. This can be due to naturally occurring corruptions, as introduced in Hendrycks & Dietterich (2019) for image datasets. The authors introduced 18 different noise types, which can be varied across 5 different severity levels. The intuition is that in the current context, increasing the noise severity should generally result in higher uncertainty. We evaluate our CBDL method on 4 standard image datasets CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), Fashion-MNIST (Xiao et al., 2017), and MNIST (Lecun et al., 1998). We use a slightly different set of perturbations than those introduced in Mu & Gilmer (2019) for gray-scale images like MNIST and Fashion-MNIST. Additionally, we perform cross domain testing for each dataset, where we expect the uncertainties to be higher. We implement and train a Resnet-20 Bayesian Neural Network model inside the library Bayesian-torch (Krishnan et al., 2019). For each dataset, we train 4 different networks initialized with different seeds on the prior and with the same architecture. This corresponds to eliciting a prior FGCS Pprior such that exPprior = {P ex 1 , . . . , Pex 4 }, so that K in **Step 1** of Algorithm 1 is equal to 4, and a likelihood FGCS that is a singleton, Plik = exPlik = {L}, so that S in **Step** 2 of Algorithm 1 is equal to 1. We use a learning-rate of 0.001, batch-size of 128, and train the networks using Mean-Field Variational Inference for 200 epochs. The inference is carried out by performing multiple forward passes through parameters drawn from the posterior distribution. We used 20 Monte-Carlo samples in the experiments. Baselines. In Egele et al. (2021), the authors pursue the same heuristic endeavor as we do, but take an ensemble route. They too consider different BNNs, but instead of keeping them separate and use them to build a predictive credal set, they average them out. Similar to theirs, we elicit the following procedure, that we call ensemble of BNNs (EBNN). Consider R ∈ N≥2 different BNNs, and compute the posterior distribution on the parameters. They induce R predictive distributions on the output space Y, each having mean µr and variance σ 2 r , r ∈ {1*, . . . , R*}. We call *EBNN distribution* Pens a Normal having mean µens = 1/RPR r=1 µr and covariance matrix σ 2 ensI, where σ 2 ens = 1/RPR r=1 σ 2 r + 1/(R − 1)PR r=1(µr − µens) 2. We use the α-level HDR Rα(Pens) associated with Pens as a baseline for the IHDR IRα(Pˆpred) computed at **Step 6** of Algorithm 1. Results. Following Egele et al. (2021), for EBNN we posit that 1/RPR r=1 σ 2 r captures the aleatoric uncertainty associated with Pens, and 1/(R − 1)PR r=1(µr − µens) 2captures the epistemic uncertainty associated with Pens; we use these values as baselines.16 For CBDL, we look at the value of AU(Pˆpred) = H(Pˆpred) from equation 2, and at the lower bound H(Pˆpred)−H(Pˆpred) for EU(Pˆpred) from equation 3. It is enough to focus 15This includes empirical Bayes methodologies (Krishnan et al., 2020). 16Notice that in this case we retain the assumption that the EBNN distribution Pens on Y has mean µens and covariance matrix σ 2 ensI, but we do not require that it is a Normal. That is because in the four image datasets that we consider, the output space Y is finite. We assume, though, that the probability mass function of Pens is a regularly looking, symmetric, bell-shaped histogram. on the lower bound because the upper bound is given by 开(Pped) - IH(Ppea) - IB(4), and log(4) > 0.6 is a fixed value. Hence, the trend of both lower and upper bounds for EU(P changes is the same. We discuss the results for CIFAR-10 presented in Table 2. We report the average uncertainties across all test samples as measured by the respective procedures. Note that we abstracted the severity levels to low (severity = 1), medium (severity = 2,3), and high (severity = 4,5). We observe that CBDL reports lower levels of aleatoric uncertainty when subjected to low levels of corruption, which gradually increases as we move to higher levels. The epistemic uncertainty grows as well, but the change in the aleatoric component is more pronounced. However, for EBNN this is not true. Even though the epistemic uncertainty increases, the aleatoric shows the reverse trend, contrary to our intuition of aleatoric uncertainty. Using CBDL, we achieve the highest uncertainties, both aleatoric and epistemic, when applied to entirely different test sets, namely MNIST, SVHN, and Fahion-MNIST. We report these numbers as well. | CBDL | Baseline | | | | | | | | | | | | | |----------------|---------------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|--------|-------| | Epistemic | Aleatorio | Epistemic | Aleatoric | | | | | | | | | | | | Low | Med | High | Low | Med | High | Low | Mod | High | Low | Med | High | | | | 0.145 | 0.066 | 0.099 | 0.012 | 0.016 | 0.016 | 0.065 | 0.056 | 0.055 | | | | | | | gaussian noise | 0.169 | 0.176 | 0.102 | | | | | | | | | | | | shot noise | 0.129 | 0.158 | 0.174 | 0.053 | 0.084 | 0.104 | 0.01 | 0.014 | 0.016 | 0.069 | 0.061 | 0.055 | | | 0.174 | 0.053 | 0.081 | 0.102 | 0.01 | 0.014 | 0.016 | 0.069 | 0.061 | 0.055 | | | | | | speckle noise | 0.128 | 0.158 | | | | | | | | | | | | | impulse noise | 0.134 | 0.161 | 0.174 | 0.052 | 0.085 | 0.118 | 0.011 | 0.015 | 0.017 | 0.069 | 0.059 | 0.05 I | | | defocus blur | 0.105 | 0.129 | 0.176 | 0.032 | 0.048 | 0.095 | 0.008 | 0.01 | 0.016 | 0.076 | 0.071 | 0.057 | | | gaussian blur | 0.105 | 0.154 | 0.186 | 0.032 | 0.069 | 0.11 | 0.008 | 0.013 | 0.017 | 0.076 | 0.064 | 0.053 | | | motion blur | 0.137 | 0.166 | 0.175 | 0.055 | 0.085 | 0.1 | 0.011 | 0.015 | 0.016 | 0.068 | 0.06 | 0.056 | | | zoom blur | 0.148 | 0.161 | 0.176 | 0.063 | 0.077 | 0.1 | 0.012 | 0.014 | 0.016 | 0.066 | 0.062 | 0.056 | | | CIFAR-10 | SIDOW | 0.131 | 0.157 | 0.163 | 0.05 | 0.077 | 0.086 | 0.01 | 0.014 | 0.015 | 0.07 | 0.062 | 0.059 | | Corruption | 0.104 | 0.122 | 0.154 | 0.033 | 0.045 | 0.09 | 0.008 | 0.009 | 0.013 | 0.076 | 0.072 | 0.06 | | | fog | | | | | | | | | | | | | | | brightness | 0.103 | 0.108 | 0.124 | 0.031 | 0.034 | 0.043 | 0.008 | 0.008 | 0.01 | 0.076 | 0.075 | 0.072 | | | contrast | 0.107 | 0.145 | 0.165 | 0.035 | 0.066 | 0.148 | 0.008 | 0.012 | 0.017 | 0.075 | 0.065 | 0.046 | | | clastic | 0.134 | 0.144 | 0.168 | 0.053 | 0.059 | 0.089 | 0.011 | 0.012 | 0.015 | 0.069 | 0.067 | 0.058 | | | pixelate | 0.116 | 0.135 | 0.162 | 0.042 | 0.065 | 0.096 | 0.009 | 0.011 | 0.015 | 0.073 | 0.066 | 0.058 | | | JP'g | 0.13 | 0.147 | 0.156 | 0.049 | 0.064 | 0.075 | 0.01 | 0.012 | 0.013 | 0.07 | 0.066 | 0.063 | | | spatter | 0.117 | 0.145 | 0.147 | 0.041 | 0.065 | 0.062 | 0.009 | 0.012 | 0.012 | 0.073 | 0.065 | 0.066 | | | saturate | 0.112 | 0.113 | 0.131 | 0.041 | 0.039 | 0.046 | 0.008 | 0.009 | 0.011 | 0.073 | 0.074 | 0.07 | | | frost | 0.124 | 0.153 | 0.166 | 0.047 | 0.075 | 0.099 | 0.01 | 0.013 | 0.015 | 0.071 | 0.063 | 0.056 | | | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | MNIST | 0.184 | 0.142 | 0.021 | 0.044 | | | | | | | | | | | Dataset | Fashion MNIST | 0.183 | 0.147 | 0.022 | 0.042 | | | | | | | | | | SVHN | 0.183 | 0.141 | 0.019 | 0.046 | | | | | | | | | | Table 2: The 4 BNNs trained have the following accuracies : 90, 89, 90, 89 in percentage terms and rounded to the nearest whole number. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for CBDL. When exposed to completely unseen datasets, this reaches its peak. In contrast, the baseline has a reverse trend. Summary. We summarize our results for CIFAR-10 in Figure 6.(a), and for all four datasets in Figure 6.(b). With increasing corruption severity, the aleatoric uncertainty for CBDL becomes more pronounced. This is expected since for higher corruptions, it is hard to reconcile the performance gap by simple dataaugmentation, without any additional knowledge. For EBNN, even though the epistemic uncertainty grows, the aleatoric part does not. This counterintuitive - and erroneous - behavior makes us conclude that CBDL is able to quantify and disentangle (predictive) AU and EU better than the baseline. Note that the absolute uncertainty differs due to the quantities being fundamentally different: variance versus entropy. However, the relative trends are consistent for each dataset, demonstrating the utility of CBDL. Other datasets. The results for MNIST, SVHN, and Fashion-MNIST are presented in Tables 3, 4, and 5, respectively. We observe trends similar to those seen for CIFAR-10: the aleatoric uncertainty increases with corruption severity, and is the highest for completely unknown datasets. ![14_image_1.png](14_image_1.png) ![14_image_0.png](14_image_0.png) Figure 6: The different trends with increasing corruption severity for EBNN (baseline) versus CBDL. CBDL better inform the degree of shift as compared to EBNN. | CBDL | | Baseline | | | | | | | | | | | | |---------------|---------------|------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | Low | Med | High | Low | Med | High | Low | Med | High | Low | Med | High | | | | brightness | 0.21 | 0.246 | 0.195 | 0.128 | 0.087 | 0.158 | 0.023 | 0.026 | 0.024 | 0.045 | 0.05 | 0.045 | | | canny edges | 0.162 | 0.158 | 0.162 | 0.174 | 0.176 | 0.174 | 0.016 | 0.015 | 0.016 | 0.042 | 0.042 | 0.042 | | | dotted line | 0.18 | 0.173 | 0.176 | 0.097 | 0.102 | 0.102 | 0.014 | 0.013 | 0.014 | 0.055 | 0.055 | 0.055 | | | fog | 0.185 | 0.187 | 0.19 | 0.163 | 0.167 | 0.166 | 0.023 | 0.023 | 0.023 | 0.043 | 0.042 | 0.043 | | | glass blur | 0.161 | 0.145 | 0.155 | 0.181 | 0.206 | 0.2 | 0.016 | 0.016 | 0.02 | 0.038 | 0.033 | 0.033 | | | impulse noise | 0.186 | 0.172 | 0.181 | 0.083 | 0.132 | 0.158 | 0.013 | 0.015 | 0.021 | 0.057 | 0.049 | 0.042 | | | motion blur | 0.184 | 0.161 | 0.157 | 0.123 | 0.185 | 0.195 | 0.016 | 0.018 | 0.019 | 0.049 | 0.035 | 0.029 | | | MNIST | rotate | 0.189 | 0.167 | 0.142 | 0.072 | 0.134 | 0.196 | 0.013 | 0.014 | 0.015 | 0.059 | 0.048 | 0.033 | | Corruption | scale | 0.196 | 0.169 | 0.121 | 0.08 | 0.152 | 0.232 | 0.014 | 0.015 | 0.013 | 0.057 | 0.044 | 0.025 | | shear | 0.188 | 0.177 | 0.153 | 0.065 | 0.102 | 0.184 | 0.012 | 0.014 | 0.017 | 0.061 | 0.055 | 0.036 | | | shot noise | 0.188 | 0.179 | 0.18 | 0.061 | 0.08 | 0.113 | 0.012 | 0.012 | 0.014 | 0.062 | 0.059 | 0.052 | | | spatter | 0.186 | 0.174 | 0.176 | 0.074 | 0.132 | 0.127 | 0.013 | 0.016 | 0.016 | 0.059 | 0.047 | 0.049 | | | stripe | 0.18 | 0.182 | 0.184 | 0.165 | 0.163 | 0.161 | 0.021 | 0.02 | 0.021 | 0.04 | 0.04 | 0.04 | | | translate | 0.191 | 0.192 | 0.192 | 0.061 | 0.086 | 0.128 | 0.013 | 0.015 | 0.019 | 0.061 | 0.056 | 0.046 | | | zigzag | 0.18 | 0.176 | 0.179 | 0.119 | 0.123 | 0.122 | 0.016 | 0.016 | 0.016 | 0.051 | 0.05 | 0.05 | | | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | CIFAR | 0.185 | 0.167 | 0.023 | 0.042 | | | | | | | | | | | Dataset | Fashion MNIST | 0.162 | 0.187 | 0.019 | 0.035 | | | | | | | | | | SVHN | 0.190 | 0.168 | 0.022 | 0.043 | | | | | | | | | | Table 3: The 4 BNNs trained have the following accuracies : 99, 99, 98 in percentage terms and rounded to the nearest whole number. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for CBDL. When exposed to completely unseen datasets, this gets close to the highest aleatoric uncertainty. The same is not true for the baseline. The epistemic uncertainty for CBDL shows a less consistent trend in this case. ## 4.2 Downstream Tasks Performance As we have shown in the previous section, CBDL is better than single BNNs and ensemble of BNNs at quantifying and disentangling AU and EU. In this section, we show with two applications - motion prediction in autonomous driving scenarios, and blood glucose and insulin dynamics for artificial pancreas control - that CBDL has better downstream tasks capability than EBNN. We do not compare CBDL against belief tracking techniques because these latter require extra assumptions that CBDL do not, see Appendix K. | CBDL | Baseline | | | | | | | | | | | | | |----------------|------------|-----------|-----------|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | Med | High | Low | Med | Med | Low | Med | | | | | | | | | Low | High | Low | High | High | | | | | | | | | | | brightness | 0.062 | 0.074 | 0.102 | 0.011 | 0.019 | 0.053 | 0.005 | 0.006 | 0.008 | 0.083 | 0.08 | 0.07 | | | 0.077 | 0.143 | 0.048 | 0.174 | 0.006 | 0.008 | 0.018 | 0.08 | | | | | | | | contrast | 0.099 | 0.022 | 0.072 | 0.04 | | | | | | | | | | | defocus blur | 0.109 | 0.154 | 0.138 | 0.044 | 0.147 | 0.218 | 0.009 | 0.017 | 0.02 | 0.072 | 0.045 | 0.029 | | | elastic | 0.182 | 0.177 | 0.192 | 0.14 | 0.122 | 0.102 | 0.023 | 0.021 | 0.022 | 0.04 | 0.046 | 0.05 | | | fog | 0.157 | 0.171 | 0.181 | 0.095 | 0.124 | 0.135 | 0.016 | 0.02 | 0.023 | 0.056 | 0.047 | 0.042 | | | frost | 0.089 | 0.118 | 0.131 | 0.026 | 0.052 | 0.063 | 0.007 | 0.01 | 0.012 | 0.078 | 0.069 | 0.066 | | | gaussian blur | 0.068 | 0.131 | 0.14 | 0.017 | 0.087 | 0.208 | 0.005 | 0.012 | 0.019 | 0.081 | 0.06 | 0.032 | | | gaussian noise | 0.116 | 0.163 | 0.182 | 0.031 | 0.061 | 0.103 | 0.01 | 0.018 | 0.025 | 0.074 | 0.001 | 0.044 | | | SVHN | 0.13 | 0.169 | 0.185 | 0.035 | 0.06 | 0.102 | 0.012 | 0.018 | 0.026 | 0.072 | 0.061 | 0.045 | | | impulse noise | | | | | | | | | | | | | | | Corruption | jpeg | 0.07 | 0.083 | 0.129 | 0.014 | 0.018 | 0.041 | 0.005 | 0.006 | 0.012 | 0.082 | 0.08 | 0.071 | | 0.1 | 0.167 | 0.027 | 0.078 | 0.008 | 0.015 | 0.02 | 0.077 | 0.06 | | | | | | | motion blur | 0.157 | 0.149 | 0.041 | | | | | | | | | | | | pixelate | 0,061 | 0.123 | 0.18 | 0.01 I | 0.036 | 0.095 | 0.004 | 0.011 | 0.02 | 0.083 | 0.073 | 0.052 | | | saturate | 0.063 | 0.065 | 0.141 | 0.011 | 0.012 | 0.075 | 0.005 | 0.005 | 0.014 | 0.083 | 0.083 | 0.062 | | | shot noise | 0.119 | 0.17 | 0.183 | 0.031 | 0.067 | 0.106 | 0.011 | 0.019 | 0.026 | 0.074 | 0.059 | 0.044 | | | snow | 0.131 | 0.16 | 0.16 | 0.043 | 0.076 | 0.096 | 0.012 | 0.017 | 0.017 | 0.07 | 0.059 | 0.055 | | | 0.088 | 0.127 | 0.163 | 0.021 | 0.037 | 0.06 | 0.007 | 0.012 | 0.017 | 0.079 | 0.063 | | | | | spatter | 0.072 | | | | | | | | | | | | | | speckle noise | 0.1 | 0.143 | 0.181 | 0.023 | 0.046 | 0.085 | 0.008 | 0.015 | 0.023 | 0.077 | 0.067 | 0.052 | | | zoom blur | 0.061 | 0.061 | 0.064 | 0.01 I | 0.012 | 0.013 | 0.004 | 0.004 | 0.005 | 0.083 | 0.083 | 0.083 | | | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | CIFAR | 0.181 | 0.121 | 0,026 | 0.040 | | | | | | | | | | | Dataset | MNIST | 0.198 | 0.063 | 0.024 | 0.056 | | | | | | | | | | Fashion MNIST | 0.199 | 0.113 | 0.026 | 0.04 I | | | | | | | | | | Table 4: The 4 BNNs trained have the following accuracies : 95, 95, 96, 95 in percentage terms and rounded to the nearest whole number. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for CBDL. When exposed to completely unseen datasets, this reaches its peak. The same is not true for the baseline. For the epistemic uncertainty as well, there is a clear trend with | CBDL | Bascline | | | | | | | | | | | | | |---------------|-------------|-----------|-----------|-------|-------|-------|-------|-------|-------|--------|-------|-------|-------| | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | Low | Med | High | Low | Med | High | Low | Med | High | Low | Med | High | | | | 0.199 | 0.212 | 0.205 | 0.108 | 0.106 | 0.094 | 0.021 | 0.024 | 0.022 | 0.051 | 0.048 | 0.051 | | | | brightness | | | | | | | | | | | | | | | canny edges | 0.18 | 0.175 | 0.177 | 0.157 | 0.164 | 0.16 | 0.021 | 0.02 | 0.02 | 0.043 | 0.042 | 0.043 | | | dotted line | 0.167 | 0.168 | 0.168 | 0.051 | 0.051 | 0.051 | 0.012 | 0.013 | 0.013 | 0.067 | 0.066 | 0.066 | | | fog | 0.19 | 0.185 | 0.187 | 0.128 | 0.132 | 0.129 | 0.021 | 0.02 | 0.021 | 0.046 | 0.045 | 0.046 | | | glass blur | 0.189 | 0.189 | 0.193 | 0.114 | 0.127 | 0.133 | 0.017 | 0.019 | 0.022 | 0.052 | 0.048 | 0.045 | | | impulse noise | 0.169 | 0.196 | 0.202 | 0.052 | 0.09 | 0.111 | 0.013 | 0.019 | 0.02 | 0.066 | 0.055 | 0.051 | | | Fashion | motion blur | 0.182 | 0.171 | 0 163 | 0 106 | 0 157 | 0.174 | 0.017 | 0.02 | 0.019 | 0.053 | 0.041 | 0.038 | | MNIST | 0.185 | 0.173 | 0.162 | 0.074 | 0.164 | 0.184 | 0.016 | 0.02 | 0.06 | 0.035 | | | | | rotate | 0.02 | 0.037 | | | | | | | | | | | | | Corruption | scalc | 0.154 | 0.172 | 0.127 | 0.054 | 0.123 | 0.223 | 0.01 | 0.015 | 0.014 | 0.067 | 0.05 | 0.028 | | shear | 0.172 | 0.182 | 0.063 | 0.13 | 0.144 | 0.014 | 0.019 | 0.02 | 0.063 | 0.045 | 0.039 | | | | 0.177 | | | | | | | | | | | | | | | shot moise | 0.16 | 0.178 | 0.197 | 0 046 | 0 059 | 0.088 | 0.01 | 0.013 | 0.017 | 0.068 | 0.064 | 0.056 | | | spatter | 0.15 | 0.189 | 0.191 | 0.044 | 0.082 | 0.074 | 0.01 | 0.018 | 0.017 | 0.07 | 0.057 | 0.059 | | | stripc | 0.21 | 0.208 | 0.208 | 0.125 | 0.126 | 0.125 | 0.025 | 0.025 | 0.025 | 0.041 | 0.041 | 0.041 | | | translate | 0.153 | 0.189 | 0.17 | 0.046 | 0.094 | 0.158 | 0.01 | 0.018 | 0.019 | 0.069 | 0.054 | 0.041 | | | zigzag | 0 187 | 0.19 | 0 189 | 0 065 | 0.065 | 0.066 | 0.016 | 0.016 | 0.016 | 0°00′1 | 0.061 | 0.061 | | | Epistemic | Aleatoric | Epistemic | Aleatoric | | | | | | | | | | | | CIFAR | 0.205 | 0 104 | 0.022 | 0.050 | | | | | | | | | | | Dataset | MNIST | 0.165 | 0.182 | 0.021 | 0.033 | | | | | | | | | | SVHN | 0.218 | 0.109 | 0.026 | 0.046 | | | | | | | | | | increasing corruption severity. Table 5: The 4 BNNs trained have the following accuracies : 93, 92, 92, 92, 92 in percentage terms and rounded to the nearest whole number. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for CBDL, which is high when exposed to completely unseen datasets as well. The same is not true for the baseline. ## 4.2.1 Motion Prediction For Autonomous Racing In this case study, we demonstrate the utility of CBDL for motion prediction in autonomous driving scenarios. An important challenge in autonomous driving is understanding the intent of other agents and predicting their future trajectories to allow for safety-aware planning. In autonomous racing, where control is pushed to the dynamical limits, accurate and robust predictions are even more essential for outperforming opponent agents while assuring safety. CBDL provides a straightforward method for quantifying uncertainty and deriving robust prediction regions for anticipating an agent's behavior. We use the problem settings in Tumu et al. (2023) to define the problem of obtaining prediction sets for future positions of an autonomous racing agent. Our results show that the prediction regions have improved coverage when compared to EBNN. These results hold in both in-distribution and out-of-distribution settings, which are described below. Problem. Let Oi(*t, l*) ≡ Oi = {π i t−l , . . . , πi t} denote the i-th trajectory instance of an agent at time t, consisting of the observed positions from time t − l up to time t. Let then C i be a time-invariant context variable. Let also F i(*t, h*) ≡ F i = {π i t+1*, . . . , π*i t+h } be the collection of the next h future positions. We wish to obtain a model M that predicts region Rα with probabilistic guarantees. In particular, for EBNN Rα is the α-level HDR Rα(Pens) of Pens, so that Pens[F i ∈ Rα(Pens)] ≥ 1−α, while for CBDL Rα = IRα(Pˆpred), so that Pˆ pred[F i ∈ IRα(Pˆpred)] ≥ 1 − α, or equivalently, Pˆpred[F i ∈ IRα(Pˆpred)] ≥ 1 − α, for all Pˆpred ∈ Pˆpred. The dataset consists of instances of (Oi, Fi) divided into a training set Dtrain and a testing set Dtest. We train an uncertainty-aware model on Dtrain that computes the triplet (F i l , Fim, Fiu ) = M(Oi, Ci) where F i l , F i u , F i m are the lower, upper, and mean predictions of the future positions, respectively. The dataset Dall is created by collecting simulated trajectories of autonomous race cars in the F1Tenth-Gym ![16_image_0.png](16_image_0.png) (O'Kelly et al., 2020); for details, see Tumu et al. (2023). As shown in Figure 7, different racing lines were utilized including the center, right, left, and optimal racing line for the Spielberg track. Figure 7: Motion Prediction for F1Tenth-Gym Environment (O'Kelly et al., 2020). Data is collected by simulating various racing lines on the Spielberg Track. We denote these by Dcenter, Dright, Dleft, and Drace, respectively. Position π is a vector π = (*a, b, ϑ, v*) ⊤, where a and b are coordinates in a 2-dimensional Euclidean space, and ϑ and v are the heading and speed, respectively. In total, the Dall consists of 34686 train instances, 4336 validation instances, and 4336 test instances. In-distribution vs. Out-of-distribution. We consider the prediction task to be in-distribution when Dtrain, Dtest ⊂ Dall. It is out-of-distribution (OOD) when Dtrain ⊂ Dcenter ∪ Dright ∪ Dleft and Dtest ⊂ Drace. Metrics. We train the ensemble of BNNs and the CBDL models, Mens and MCBDL respectively, using the same architecture and different seeds. As for section 4.1, for CBDL this corresponds to having a nonsingleton prior FGCS, and a singleton likelihood FGCS. We compare the performance with respect to the test set by computing the single-step coverage, where each prediction time-step is treated independently, and the multi-step coverage, which considers the entire h-step prediction. Figure 8.(a) depicts a sample of the in-distribution evaluation for each of the models. For a given trajectory, the red boxes indicate when the prediction region did not cover the actual trajectory at that time-step. Qualitatively, MCBDL has less missed time steps when compared to Mens. Table 6 shows that CBDL performs better in terms of both one-step and multi-step coverage. Similar results can be observed for the OOD scenario. There, all models were trained on racing lines which are predominantly parallel to the track curvature. As a consequence, when the test set consists of instances with higher curvatures, the overall coverage of all models degrades. This can be seen in Figure 8.(b), where the prediction of the models (orange) tends to be straight while the actual trajectory is more curved (green). Despite this, the figure and the coverage metrics in Table 6 show how CBDL exhibits a more robust behavior. | In-Distribution Results Ensemble CBDL | | | | | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------|------|------|-----|------|------| | 1 − α | 0.9 | 0.95 | 0.99 | 0.9 | 0.95 | 0.99 | | One-step | 0.962 0.980 0.992 0.992 0.995 0.997 | | | | | | | Multi-step 0.638 0.826 0.937 0.914 0.948 0.979 Out-of-Distribution Results Ensemble CBDL 1 − α 0.9 0.95 0.99 0.9 0.95 0.99 One-step 0.919 0.950 0.980 0.979 0.988 0.995 Multi-step 0.532 0.703 0.860 0.825 0.884 0.943 | | | | | | | Table 6: F1Tenth coverage results. We report one-step coverage and multi-step coverage across 3 different values of α. CBDL exceed coverage of EBNNs in all settings. ![17_image_0.png](17_image_0.png) Figure 8: In both pictures, the red boxes indicate when the prediction region did not cover the actual trajectory at that time-step. **Left:** F1Tenth In-Distribution results. Given an input of past observations, CBDL exhibits better coverage of the future target trajectory. Predictions which do not cover the target within the desired 1 − α level are indicated in red. **Right:** F1Tenth Out-Of-Distribution (OOD) results. Robust performance is exhibited by CBDL when compared to EBNN in OOD settings. ## 4.2.2 Artificial Pancreas Control Overall Setup. In this next case study we consider the problem of data-driven control of human blood glucose-insulin dynamics, using an artificial pancreas system, see Figure 9. External insulin delivery is accomplished by using an insulin pump controlled by the artificial pancreas software, which attempts to regulate the blood-glucose (BG) level of the patient within the euglycemic range of [70, 180] mg/dl (Kushner et al., 2018). Levels below 70 mg/dl lead to hypoglycemia, which can ![18_image_0.png](18_image_0.png) Figure 9: The Bayesian Neural Networks predict a future blood glucose value. These individual predictions are combined to get a robust estimate of the true value as an interval. This is used by the Model Predictive Control (MPC) algorithm to recommend insulin dosage for the patient. The patient block in our experiment is simulated using the virtual patient models from the UVa-Padova simulator. lead to loss of consciousness, coma or even death. On the other hand, levels above 300 mg/dl lead to a ![18_image_1.png](18_image_1.png) condition called ketoacidosis, where the body can break down fat due to lack of insulin, and lead to build up of ketones. In order to treat this situation, patients receive external insulin delivery through insulin pumps. Artificial Pancreas (AP) systems can remedy this situation by measuring the blood glucose level, and automatically injecting insulin into the blood stream. Thus, we define the *unsafe regions* of the space as G(t) ∈ (−∞, 70)∪(300, ∞), where G(t) is the BG value at time t. This is the shaded region in Figure 10. Figure 10: Starting from an initial glucose value, the task of the artificial pancreas controller is to maintain the blood glucose value within safe operating limits using insulin as a mode of control. Neural Network Models and Controller. Deep Neural Networks are effective in capturing the BGinsulin dynamics for personalized medical devices (Kushner et al., 2018). This allows for improved device performance. Even though standard Feedforward Neural Networks can be used, Bayesian Neural Networks (BNNs), and especially a collection of multiple BNNs, offer a better alternative towards uncertainty aware predictions. Here, we test the ramifications of these prediction sets, when used inside an online receding horizon control scheme for insulin delivery. We use the standard MPC control scheme for this purpose, well-known in the literature (Dutta et al., 2018). More formally, let G(t) and I(t) ≡ It be the bloodglucose and insulin values at time t, respectively. We denote the finite length trajectory of length H as ←−GH(t) := [G(t − H + 1)*, . . . , G*(t)], and ←− I H(t) := [I(t − H + 1)*, . . . , I*(t)]. An uncertainty aware model M computes the triplet (Gl(t + l), Gm(t + l), Gu(t + l)) = M( ←−GH(t), ←− I H(t)), where Gm is the mean prediction output, and Gl, Gu are the lower and upper predictions of the glucose value, respectively. By design, it is true that Gl ≤ Gm ≤ Gu. A model predictive control algorithm - whose cost function we denote by J – solves arg minI0,I1*,...,I*k−1 Pk−1 i=0 J(M( ←−GH(t + i), ←− I H(t + i))). After every time step, the control algorithm picks the first insulin input I0 as the insulin bolus for the patient, and discards the rest. Cost function J takes into account three factors, (i) Distance of the mean prediction level Gm at each time step from a target value of 120 mg/dl, (ii) Distance of upper and lower predictions (Gu and Gl) from the unsafe regions of the state space G(t) > 300 and G(t) < 70, and (iii) Total insulin injected Pk−1 t=0 It. Starting with some initial glucose value G(0), we measure the performance of the artificial pancreas controller as the fraction of time it spends in the unsafe regions, $$t_{\mathrm{unsafe}}={\frac{1}{T}}\sum_{t=1}^{T}\mathbb{1}\left\{G(t)\in(-\infty,70)\cup(300,\infty)\right\},$$ where 1{·} denotes the indicator function. A lower value is more desirable. We compare EBNN and CBDL as different realizations of the model M. Distribution Shift using Meals. A well known problem with learned models is distribution shift. Bayesian Neural Networks can address this issue by apprising the end user of the increased uncertainty. For regression models of the type described above, this appears as larger prediction intervals [Gl, Gu]. The artificial pancreas controller can run into this situation in the following way: the insulin-glucose time series data collected for training the data-driven model M can be without meals, while at test time the patient can have meals. This creates a distribution shift between the training and test time data. Fortunately, the UVa-Padova simulator (Dalla Man et al., 2013) allows us to create datasets with and without meal inputs. In this case study, the training data was obtained by randomly initializing the BG value in the range [120, 190], and simulating the patient for 720 minutes. The controller was executed at 5 minutes intervals. At test time the patient was supplied meals at specific time intervals (for details, see Appendix J). This creates a significant distribution shift since meals are effectively an unknown variable which can affect the system state. However, from the controller's perspective this is practical, since patients can have unannounced meals. Results and Discussion. To capture the difference in performance between EBNN and CBDL, we compute Perfdiff := (t EBNN unsafe − t CBDL unsafe)/tEBNN unsafe . Both t EBNN unsafe and t CBDL unsafe depend on interval [Gl, Gu]; for EBNN, this latter corresponds to the α-level HDR Rα(Pens) associated with EBNN distribution Pens, while for CBDL it corresponds to the IHDR IRα(Pˆpred). We consider one case in which an CBDL is trained using a credal prior set and only one likelihood (we choose different seeds which initialize the prior distributions but we keep the same architecture for the BNNs), and another case in which we do the opposite (we use the same seed and different architectures). We report Perfdiff, across different choices in Table 7. We observe that for lower values of α the gains of CBDL are more pronounced. This means that when larger significance levels need to be ensured, CBDL is to be preferred to ensemble of BNNs. As discussed before, the CBDL procedure considers all the infinitely many possible priors that can be expressed as a convex combination of the priors that the user specifies at the beginning of the analysis. The same holds for the likelihoods. While this results in a more conservative estimate as compared to the EBNN framework, CBDL produces controllers which respect the safety limits better. To see that CBDL is more conservative than EBNN, notice that when combining predictive distributions from multiple BNNs, CBDL combines the predictions via a finitely generated credal set (FGCS) whose extrema are the individual distributions. On the contrary, an EBNN takes an average of the individual distributions to compute the ensemble distribution Pens. The union of the HDRs of the predictive distributions is more conservative (i.e., broader) than the HDR of the single ensemble distribution. While the approach by EBNN seems like a reasonable choice on the surface, it falls short in capturing the uncertainty necessary for the downstream task. For more details on this case study, see Appendix J. | 1 − α | 0.9 | 0.95 | 0.99 | |-----------------------|-------|--------|--------| | Varying Seeds | -5.8% | 5.5% | 0.6% | | Varying Architectures | -6.6% | 0.9% | 0.3% | Table 7: We report the performance improvements when using IBNNs as compared to EBNNs across 3 different values of α. Row 1 corresponds to the case where the individual BNNs are trained with different seeds for the prior distribution; and Row 2 is the case when the BNNs have different architectures. ## 5 Related Work In Corani et al. (2012), the authors introduce credal classifiers (CCs) as a generalization of classifiers based on Bayesian networks. Unlike CCs, CBDL does not require independence assumptions between non-descendant, non-parent variables. In addition, CBDL avoids NP-hard complexity issues of searching for optimal structure in the space of Bayesian networks (Chickering et al., 2004). In Manchingal & Cuzzolin (2022), an epistemic convolutional neural network (ECNN) is developed that explicitly models the epistemic uncertainty induced by training data of limited size and quality. A clear distinction is that ECNNs measure uncertainty in targetlevel representations whereas CBDL identifies the uncertainty measure on the output space Y. Despite the merit of their work, we believe CBDL achieves greater generality, since it is able to quantify both aleatoric and epistemic uncertainties, and is applicable to problems beyond classification. For a review of the state of the art concerning the distinction between EU and AU we refer to Hüllermeier & Waegeman (2021) and to Manchingal & Cuzzolin (2022). We also point out how CBDL has been recently used to solve prior-likelihood conflicts in Bayesian statistics (Marquardt et al., 2023). Further references can be found in Appendix L. We also point out how there exist other efficient methods which perform approximate Variational Inference via dropouts in deep neural networks (Kendall & Gal, 2017b; Gal & Ghahramani, 2016). As mentioned at the end of Section 3.1, we can easily adapt CBDL to use such dropout approximations in **Steps 3-4** of Algorithm 1. Since our contribution is centered around how different predictions can be combined via an FGCS, we used the de-facto standard for performing inference on BNNs, which is based on off-the-shelf VI techniques. In the future, we plan to study the effect on computational complexity and uncertainty quantification capability of a CBDL procedure that approximates posterior and predictive distributions via dropout. We do not consider Bayesian Model Averaging (BMA) as a baseline for CBDL for two main reasons. First, BMA applied to deep learning needs to implement full batch Hamiltonian Monte Carlo in order to get to the true posterior (Izmailov et al., 2021b). Given the number of parameters in modern deep learning architectures - in the order of millions - this is realistically possible at an experimental level only to labs with access to industry scale computational resources. In order to be practically relevant, we limit our experiments to the more well-understood realm of Variational Inference on BNNs. In addition, Izmailov et al. (2021a) show the pitfalls of BMA in the context of Bayesian Neural Networks, a further reason not to use a Highest Density Region resulting from BMA as a baseline for CBDL. ## 6 Conclusion We presented CBDL, a procedure that can be seen as a non-condensed, uncountably infinite ensemble of BNNs, carried out using only finitely many elements. It allows to distinguish between AU and EU, and to quantify them. We showed how it can be used to specify a set of outputs - the IHDR - that enjoys probabilistic guarantees. We showed empirically that it improves on the Bayesian state of the art at gauging AU and EU, and we also demonstrated its downstream tasks capabilities. We point out how a region that improves on IRα(Pˆpred), meaning that it would be tighter, is .$(\hat{P}_{\text{pred}})=$ $L\,R'$ . IR′α(Pˆpred) = {y ∈ Y : ˆp pred(y) ≥ pˆ α}, where pˆ pred := mink,s pˆ pred k,s , and pˆ αis the largest constant such that Pˆ pred[y ∈ IR′α(Pˆpred)] ≥ 1 − α. The problem with IR′α is that, while the highest density regions Rα(Pˆpred k,s ) associated with the predictive distributions in exPˆpred can be computed using off-the-shelf tools, calculating pˆ pred and pˆ α would have been much more computationally expensive. In addition, it would have required to come up with a new technique to find pˆ pred and pˆ α. We defer studying this to future work. We also plan to apply CBDL to continual learning (CL) to overcome the curse of dimensionality and to capture an agent's preference over the tasks to perform, similarly to Lu et al. (2023). Furthermore, we intend to relate CBDL to Bayesian Model Selection (BMS) (Ghosh et al., 2019). This latter suffers from the same problem as "regular" Bayesian inference. That is, while it tries to come up with a sophisticate prior that induces shrinkage, it still relies on the "correctness" of that prior, i.e. on correctly specifying the prior's parameters. In the future, an interesting way of combining CBDL with BMS will be to use a finite number of regularized horseshoe priors, as suggested by Ghosh et al. (2019, Section 3.2), as extreme elements of the prior credal set. We also call attention to the fact that CBDL is a model-based approach. The relationship with model-free approaches such as conformal prediction (Shafer & Vovk, 2008) will be the object of future studies. In particular, we are interested in finding in which cases IHDRs are narrower than conformal regions, and vice versa, and whether IHDRs enjoy the same probabilistic guarantees as conformal regions. Finally, we point out how one possible way of easing the burden of the combinatorial task in **Step 3** of Algorithm 1 is to specify a prior credal set whose size strikes the perfect balance between being "vague enough" so that we do not underestimate the EU, and being "small enough" so that CBDL is actually implementable. We suspect conjugacy of the priors may play a key role in this endeavor. Because of its centrality, we defer the study of "optimal prior credal sets" to future work. ## References Joaquín Abellán and Serafín Moral. A non-specificity measure for convex sets of probability distributions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 8(3):357–367, 2000. Joaquín Abellán, George Jiří Klir, and Serafín Moral. Disaggregated total uncertainty measure for credal sets. *International Journal of General Systems*, 1(35):29–44, 2006. Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information* Processing Systems, volume 33, pp. 14927–14937. Curran Associates, Inc., 2020. Thomas Augustin, Frank P.A. Coolen, Gert De Cooman, and Matthias C.M. Troffaes (eds.). *Introduction* to imprecise probabilities. Wiley Series in Probability and Statistics. John Wiley and Sons, 2014. David Avis and Komei Fukuda. Reverse search for enumeration. *Discrete Applied Mathematics*, 65:21–46, 1996. Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. *The Annals of Statistics*, 51(2):816 - 845, 2023. doi: 10.1214/23-AOS2276. URL https://doi.org/10.1214/23-AOS2276. James O. Berger. The robust Bayesian viewpoint. In Joseph B. Kadane (ed.), Robustness of Bayesian Analyses. Amsterdam : North-Holland, 1984. Jose M. Bernardo. Reference posterior distributions for Bayesian inference. *Journal of the Royal Statistical* Society: Series B, 41(2):113–128, 1979. George E. P. Box. Science and statistics. *Journal of the American Statistical Association*, 71(356):791–799, 1976. Yuhang Cai and Lek-Heng Lim. Distances between probability distributions of different dimensions. IEEE Transactions on Information Theory, 68(6):4020–4031, 2022. Michele Caprio. Refined Pinsker's and reverse Pinsker's inequalities for probability distributions of different dimensions. *IEEE Access*, 10:116425–116431, 2022. Michele Caprio and Ruobin Gong. Dynamic precise and imprecise probability kinematics. In Enrique Miranda, Ignacio Montes, Erik Quaeghebeur, and Barbara Vantaggi (eds.), Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, volume 215 of Proceedings of Machine Learning Research, pp. 72–83. PMLR, 11–14 Jul 2023. Michele Caprio and Sayan Mukherjee. Ergodic theorems for dynamic imprecise probability kinematics. International Journal of Approximate Reasoning, 152:325–343, 2023. Michele Caprio, Kuk Jin Jang, Souradeep Dutta, Shireen Manchingal, Fabio Cuzzolin, Oleg Sokolsky, and insup Lee. Credal and interval deep evidential classification. *Technical Reports of the PRECISE Center*, 2024a. Michele Caprio, Yusuf Sale, Eyke Hüllermeier, and Insup Lee. A Novel Bayes' Theorem for Upper Probabilities. In Fabio Cuzzolin and Maryam Sultana (eds.), *Epistemic Uncertainty in Artificial Intelligence*, pp. 1–12, Cham, 2024b. Springer Nature Switzerland. Simone Cerreia-Vioglio, Fabio Maccheroni, and Massimo Marinacci. Ergodic theorems for lower probabilities. Proceedings of the American Mathematical Society, 144:3381–3396, 2015. Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Posterior network: Uncertainty estimation without OOD samples via density-based pseudo-counts. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1356–1367. Curran Associates, Inc., 2020. David M. Chickering, David Heckerman, and Christopher Meek. Large-sample learning of Bayesian networks is NP-hard. *The Journal of Machine Learning Research*, 5:1287–1330, 2004. Frank P. A. Coolen. Imprecise highest density regions related to intervals of measures. *Memorandum COSOR*, 9254, 1992. Giorgio Corani, Alessandro Antonucci, and Marco Zaffalon. *Bayesian Networks with Imprecise Probabilities: Theory and Application to Classification*, chapter 4 of Data Mining: Foundations and Intelligent Paradigms: Volume 1: Clustering, Association and Classification, pp. 49–93. Berlin, Germany : Springer, 2012. Fabio G. Cozman. Credal networks. *Artificial Intelligence*, 120(2):199–233, 2000a. ISSN 0004-3702. doi: https://doi.org/10.1016/S0004-3702(00)00029-1. URL https://www.sciencedirect.com/science/ article/pii/S0004370200000291. Fabio Gagliardi Cozman. Credal networks. *Artificial Intelligence*, 120:199–233, 2000b. Chiara Dalla Man, Francesco Micheletto, Dayu Lv, Marc D. Breton, Boris Kovatchev, and Claudio Cobelli. The UVA/PADOVA type 1 diabetes simulator: New features. *Journal of Diabetes Science and Technology*, 8(1):26–34, 2013. George B. Dantzig. *Linear Programming and Extensions*. Princeton University Press, 1963. URL http: //www.jstor.org/stable/j.ctt1cx3tvg. Bruno de Finetti. *Theory of Probability*, volume 1. New York : Wiley, 1974. Bruno de Finetti. *Theory of Probability*, volume 2. New York : Wiley, 1975. Thierry Denœux. An evidential neural network model for regression based on random fuzzy numbers. In Sylvie Le Hégarat-Mascle, Isabelle Bloch, and Emanuel Aldea (eds.), Belief Functions: Theory and Applications, pp. 57–66, Cham, 2022. Springer International Publishing. Thierry Denœux. Quantifying prediction uncertainty in regression using random fuzzy sets: the ENNreg model. *IEEE Transactions on Fuzzy Systems*, pp. 1–10, 2023. Stefan Depeweg, Jose-Miguel Hernandez-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning. In *International Conference* on Machine Learning, pp. 1184–1193. PMLR, 2018. Didier Dubois and Eyke Hüllermeier. Comparing probability measures using possibility theory: A notion of relative peakedness. *International Journal of Approximate Reasoning*, 45(2):364–385, 2007. Eighth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2005). Souradeep Dutta, Taisa Kushner, and Sriram Sankaranarayanan. Robust data-driven control of artificial pancreas systems using neural networks. In Milan Češka and David Šafránek (eds.), Computational Methods in Systems Biology, pp. 183–202, Cham, 2018. Springer International Publishing. ISBN 978-3-319-99429-1. Romain Egele, Romit Maulik, Krishnan Raghavan, Prasanna Balaprakash, and Bethany Lusch. Autodeuq: Automated deep ensemble with uncertainty quantification. *CoRR*, abs/2110.13511, 2021. Daniel Ellsberg. Risk, ambiguity, and the Savage axioms. *The Quarterly Journal of Economics*, 75(4): 643–669, 1961. Emlyn Flint, Florence Chikurunhe, and Anthony Seymour. Regime-based tactical allocation for equity factors and balanced portfolios. *SSRN Electronic Journal*, 01 2017. Benoît Fortin, Samir Hachour, and François Delmotte. Multi-target PHD tracking and classification using imprecise likelihoods. *International Journal of Approximate Reasoning*, 90:17–36, 2017. Vincent Fortuin, Adrià Garriga-Alonso, Mark van der Wilk, and Laurence Aitchison. BNNpriors: A library for bayesian neural network inference with different prior distributions. *Software Impacts*, 9:100079, 2021. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning, 2016. Soumya Ghosh, Jiayu Yao, and Finale Doshi-Velez. Model selection in Bayesian neural networks via horseshoe priors. *Journal of Machine Learning Research*, 20(182):1–46, 2019. URL http://jmlr.org/papers/v20/ 19-236.html. Isaac Gibbs, John J. Cherian, and Emmanuel J. Candès. Conformal prediction with conditional guarantees. Available at arXiv:2305.12616, 2023. Itzhak Gilboa and Massimo Marinacci. Ambiguity and the Bayesian paradigm. In Daron Acemoglu, Manuel Arellano, and Eddie Dekel (eds.), *Advances in Economics and Econometrics, Tenth World Congress*, volume 1. Cambridge : Cambridge University Press, 2013. Ethan Goan and Clinton Fookes. *Bayesian Neural Networks: An Introduction and Survey*, pp. 45–87. Cham, Switzerland : Springer International Publishing, 2020. Ruobin Gong and Xiao-Li Meng. Judicious judgment meets unsettling updating: dilation, sure loss, and Simpson's paradox. *Statistical Science*, 36(2):169–190, 2021. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pp. 1321–1330. PMLR, 2017. Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *CoRR*, abs/1903.12261, 2019. Peter J. Huber and Elvezio M. Ronchetti. *Robust statistics*. Wiley Series in Probability and Statistics. Hoboken, New Jersey : Wiley, 2nd edition, 2009. Rob J. Hyndman. Computing and graphing highest density regions. *The American Statistician*, 50(2): 120–126, 1996. Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. *Machine Learning*, 3(110):457–506, 2021. Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, and Andrew G Wilson. Dangers of bayesian model averaging under covariate shift. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 3309–3322. Curran Associates, Inc., 2021a. Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, and Andrew Gordon Wilson. What are bayesian neural network posteriors really like? *CoRR*, abs/2104.14421, 2021b. URL https://arxiv.org/abs/ 2104.14421. Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, and Andrew Gordon Gordon Wilson. What are Bayesian neural network posteriors really like? In Marina Meila and Tong Zhang (eds.), *Proceedings of* the 38th International Conference on Machine Learning, volume 139, pp. 4629–4640. PMLR, 2021c. Harold Jeffreys. An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society A, 186(1007):453–461, 1946. Laurent Valentin Jospin, Hamid Laga, Farid Boussaid, Wray Buntine, and Mohammed Bennamoun. Handson Bayesian neural networks - A tutorial for deep learning users. *IEEE Computational Intelligence* Magazine, 17(2):29–48, 2022. Ngumbang Juat, Mike Meredith, and John Kruschke. Package 'hdinterval', 2022. URL https://cran. r-project.org/web/packages/HDInterval/HDInterval.pdf. Accessed on May 9, 2023. Sanyam Kapoor, Wesley J Maddox, Pavel Izmailov, and Andrew Gordon Wilson. On uncertainty, tempering, and data augmentation in bayesian classification. *arXiv preprint arXiv:2203.16481*, 2022. Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, and Insup Lee. Using semantic information for defining and detecting OOD inputs. *Available at* arXiv:2302.11019, 2023. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? Advances in neural information processing systems, 30, 2017a. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, pp. 5580–5590, Red Hook, NY, USA, 2017b. Curran Associates Inc. ISBN 9781510860964. Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 5275–5285. PMLR, 13–18 Jul 2020. John Klein, Christèle Lecomte, and Pierre Miché. Hierarchical and conditional combination of belief functions induced by visual tracking. *International Journal of Approximate Reasoning*, 51(4):410–428, 2010. Ranganath Krishnan, Mahesh Subedar, and Omesh Tickoo. MOPED: efficient priors for scalable variational inference in bayesian deep neural networks. *CoRR*, abs/1906.05323, 2019. Ranganath Krishnan, Mahesh Subedar, and Omesh Tickoo. Specifying weight priors in Bayesian deep neural networks with empirical Bayes. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34 (04):4477–4484, April 2020. doi: 10.1609/aaai.v34i04.5875. URL https://ojs.aaai.org/index.php/ AAAI/article/view/5875. Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research). Available at http://www.cs.toronto.edu/ kriz/cifar.html, 2009. Meelis Kull and Peter A. Flach. Reliability Maps: A Tool to Enhance Probability Estimates and Improve Classification Accuracy. In Toon Calders, Floriana Esposito, Eyke Hüllermeier, and Rosa Meo (eds.), Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science, pp. 18–33, Berlin, Heidelberg, 2014. Springer. ISBN 978-3-662-44851-9. Taisa Kushner, David Bortz, David M. Maahs, and Sriram Sankaranarayanan. A data-driven approach to artificial pancreas verification and synthesis. In Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS '18, pp. 242–252. IEEE Press, 2018. ISBN 9781538653012. Jouko Lampinen and Aki Vehtari. Bayesian approach for neural networks - review and case studies. *Neural* Networks, 4:257–274, 2001. Daniel Lassiter. Representing credal imprecision: from sets of measures to hierarchical Bayesian models. Philosophical Studies, 177(6):1463–1485, 2020. Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Peter Lenk and Bryan Orme. The value of informative priors in Bayesian inference with sparse data. Journal of Marketing Research, 46(6):832–845, 2009. ISSN 00222437. Isaac Levi. *The Enterprise of Knowledge*. London, UK : MIT Press, 1980. Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, and Insup Lee. Reversing distribution shifts using reinforcement learning. *Available at arXiv:2302.10341*, 2023. Pengyuan Lu, Michele Caprio, Eric Eaton, and Insup Lee. Zero-shot task preference addressing enabled by imprecise Bayesian continual learning. *Available at arXiv:2305.14782*, 2023. Shireen Kudukkil Manchingal and Fabio Cuzzolin. Epistemic deep learning. *Available at arxiv:2206.07609*, 2022. Massimo Marinacci and Luigi Montrucchio. Introduction to the mathematics of ambiguity. In Itzhak Gilboa (ed.), *Uncertainty in economic theory: a collection of essays in honor of David Schmeidler's 65th birthday*. London : Routledge, 2004. Alexander Marquardt, Julian Rodemann, and Thomas Augustin. An empirical study of prior-data conflicts in Bayesian neural networks. *Poster presented at 13th International Symposium on Imprecise Probabilities:* Theories and Applications (ISIPTA) 2023, 2023. Jacques Paul Migne. De coelesti hierarchia. In *Patrologia Graeca*, volume 3. Paris : Imprimerie Catholique, 1857. Norman Mu and Justin Gilmer. MNIST-C: A robustness benchmark for computer vision. *CoRR*, abs/1906.02337, 2019. Bálint Mucsányi, Michael Kirchhof, and Seong Joon Oh. Benchmarking uncertainty disentanglement: Specialized uncertainties for specialized tasks. *Available at arXiv:2402.19460*, 2024. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In *NIPS Workshop on Deep Learning and Unsupervised* Feature Learning 2011, 2011. Matthew O'Kelly, Hongrui Zheng, Dhruv Karthik, and Rahul Mangharam. F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning. In *Proceedings of the* NeurIPS 2019 Competition and Demonstration Track, pp. 77–89. PMLR, 2020. Sangdon Park, Osbert Bastani, Nikolai Matni, and Insup Lee. PAC confidence sets for deep neural networks via calibrated prediction. In *8th International Conference on Learning Representations*, 2020. Luis Raul Pericchi. Sets of prior probabilites and bayesian robustness. *Documentation Section on the website* of the Society for Imprecise Probability Theory and Applications (SIPTA), 1998. Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable laplace approximation for neural networks. In *International Conference on Learning Representations*, 2018. Yusuf Sale, Viktor Bengs, Michele Caprio, and Eyke Hüllermeier. Second-Order Uncertainty Quantification: A Distance-Based Approach. *Available at arxiv:2312.00995*, 2023a. Yusuf Sale, Michele Caprio, and Eyke Höllermeier. Is the volume of a credal set a good measure for epistemic uncertainty? In Robin J. Evans and Ilya Shpitser (eds.), *Proceedings of the Thirty-Ninth Conference on* Uncertainty in Artificial Intelligence, volume 216 of *Proceedings of Machine Learning Research*, pp. 1795– 1804. PMLR, 31 Jul–04 Aug 2023b. URL https://proceedings.mlr.press/v216/sale23a.html. Robin Senge, Stefan Bösner, Krzysztof Dembczyński, Jörg Haasenritter, Oliver Hirsch, Norbert DonnerBanzhoff, and Eyke Hüllermeier. Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. *Information Sciences*, 255:16–29, January 2014. ISSN 0020-0255. Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. *Journal of Machine Learning Research*, 9:371–421, 2008. Marek Smieja and Jacek Tabor. Entropy of the mixture of sources and entropy dimension. *IEEE Transactions* on Information Theory, 58(5):2719–2728, 2012. Lewis Smith and Yarin Gal. Understanding measures of uncertainty for adversarial example detection. arXiv preprint arXiv:1803.08533, 2018. Bjørnar Tessem. Interval probability propagation. *International Journal of Approximate Reasoning*, 7(3): 95–120, 1992. D. Michael Titterington. Bayesian methods for neural networks and related models. *Statistical Science*, 19 (1):128–139, 2004. Krasymyr Tretiak, Georg Schollmeyer, and Scott Ferson. Neural network model for imprecise regression with interval dependent variables. *Available at arXiv:2206.02467*, 2022. Matthias C.M. Troffaes and Gert de Cooman. *Lower Previsions*. Chichester, United Kingdom : John Wiley and Sons, 2014. Renukanandan Tumu, Lars Lindemann, Truong Nghiem, and Rahul Mangharam. Physics Constrained Motion Prediction with Uncertainty Quantification, 2023. arXiv:2302.01060. Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. *Algorithmic Learning in a Random World*. Cham : Springer, second edition, 2022. Peter Walley. Coherent lower (and upper) probabilities. Technical report, University of Warwick, Coventry, 1981. Peter Walley. *Statistical Reasoning with Imprecise Probabilities*, volume 42 of *Monographs on Statistics and* Applied Probability. London : Chapman and Hall, 1991. Hao Wang and Dit-Yan Yeung. A survey on Bayesian deep learning. *ACM Computing Surveys*, 53(5):1–37, 2021. Larry A. Wasserman and Joseph B. Kadane. Bayes' theorem for Choquet capacities. *The Annals of Statistics*, 18(3):1328–1339, 1990. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *CoRR*, abs/1708.07747, 2017. Marco Zaffalon. The naive credal classifier. *Journal of Statistical Planning and Inference*, 105(1): 5–21, 2002. ISSN 0378-3758. doi: https://doi.org/10.1016/S0378-3758(01)00201-4. URL https: //www.sciencedirect.com/science/article/pii/S0378375801002014. Imprecise Probability Models and their Applications. Fengshuo Zhang and Chao Gao. Convergence rates of variational posterior distributions. *The Annals of* Statistics, 48(4):2180 - 2207, 2020. ## A Why Do We Need Ips? The main motivations for working with credal sets are two. Let (Ω, F) be the measurable space of interest. (i) A single probability distribution does not suffice to represent ignorance in the sense of lack of knowledge; this is well documented in the literature, see e.g. Hüllermeier & Waegeman (2021) and references therein. Consider the example of complete ignorance (CI) in the case of a finite state space Ω (Hüllermeier & Waegeman, 2021, Section 3.3). In standard Bayesian analysis, CI is modeled in terms of the uniform distribution Unif(Ω); this is justified by Laplace's "principle of indifference". Then, however, it is not possible to distinguish between precise probabilistic knowledge about a random event - called *prior indifference*; think of the tossing of a fair coin - and a complete lack of knowledge due to an incomplete description of the experiment - called *prior ignorance*. Another problem is given by the additive nature of probability distributions. Consider again the example of a uniform distribution. First, let us observe that it is not invariant under reparametrization. In addition, if we model the ignorance about the length x of the side of a cube in R 3 via a uniform measure on the interval [*l, u*] ⊂ R, then this does not yield a uniform distribution of x 3 on [l 3, u3], which suggests some degree of informedness about the cube's volume. Finally, as pointed out in Walley (1991), if we ask a subject - even an expert - about their opinion regarding some events, it is much more likely that they will report interval of probabilities rather than single values. (ii) Working with credal sets allows to achieve prior and likelihood *robustness*: realistically large sets Pprior of priors and Plik of likelihoods are elicited. Using credal sets, the agent recognizes that prior beliefs and knowledge about the sampling model are limited and imprecise. Combining each pair of functions in Pprior and Plik using Bayes' rule, a class of posterior distributions - reflecting the updated state of uncertainty - is formed. If the available information is not sufficient to identify a unique posterior distribution, or a set of posteriors whose diameter is small, credal sets allow to represent *indecision*, thus leading to a less informative but more robust conclusions.17 ## B On The Use Of Credal Sets Let us address a critique raised against the use of credal sets. Lassiter (2020) argues against the use of sets of probabilities to model an agent's prior beliefs and their knowledge of the sampling model, while debating in favor of using hierarchical Bayesian models. As reported in Hüllermeier & Waegeman (2021, Secton 4.6.2), the argument against credal sets that is more cogent for the machine learning literature is that modeling a lack of knowledge in a set-based manner may hamper the possibility of inductive inference, up to a point where learning from empirical data is not possible any more. With this, we mean the following. As Pericchi (1998) points out, the natural candidate for a class of priors to represent complete ignorance is the class Pall of all distributions. When this class leads to non-vacuous and useful conclusions, these are quite compelling and uncontroversial. It turns out that the posterior probabilities obtained from this class are vacuous, that is, their lower and upper bounds are 0 and 1: no finite sample is enough to annihilate a sufficiently extreme prior belief. There is then a compromise to be made, and this is the compromise of *near-ignorance*. The nearignorance class should be vacuous a priori in some respects, typically the ones that are the most important for the analysis at hand. This way of proceeding is labeled as arbitrary by Lassiter (2020), who instead advocates for the use of hierarchical Bayesian procedures. We find this critique not compelling, as during the analysis the job of the agent is to model reality: as pointed out in Hüllermeier & Waegeman (2021, Secton 5), statistical inference is not possible without underlying assumptions, and conclusions drawn from data are always conditional on those assumptions. If we were to work every time with the maximum level of generality, we would hardly be able to reach any conclusions. For example, in a statistical analysis we never consider the state Ω of *apparently possible states* (Walley, 1991, section 2.1.2), that is, the one that contains all the states ω that are logically consistent with the available information. If we consider a coin toss, we let the state space be Ω = {heads, tails}, certainly not Ω = {heads, tails, coin landing on its edge, coin braking into pieces on landing, coin disappearing down a crack in the floor}. The same holds for sets 17Here "diameter" has to be understood as the distance between upper and lower probability of event A, for all A ∈ F. of probabilities: it makes much more sense to work with near-ignorance credal sets than to work with Pall. A final reason to rebut the point in Lassiter (2020) is that the problems indicated in (i) in section A that make the use of the uniform prior distribution - often interpreted as representing epistemic uncertainty in standard Bayesian inference - at least debatable are inherited by hierarchical Bayesian modeling, as specified in Bernardo (1979); Hüllermeier & Waegeman (2021); Jeffreys (1946). Furthermore, a single distribution that is the result of a hierarchical Bayesian procedure is unable to gauge epistemic uncertainty (Hüllermeier & Waegeman, 2021). ## B.1 A Note On How To Specify Credal Sets As pointed out in Corani et al. (2012, Section 3.3), there is a way of obtaining credal sets starting from sets of probability intervals; in addition, standard algorithms can compute the extreme elements of a credal set for which a probability interval has been provided (Avis & Fukuda, 1996). However, the resulting number of extrema is exponential in the size of the possibility space (Tessem, 1992).18 For this reason we prefer to specify prior and likelihood finitely generated credal sets instead. ## C A Further Ip Concept: The Core Let again (Ω, F) be the measurable space of interest. Because of the conjugacy property of upper and lower probabilities, let us focus on upper probabilities only. We say that upper probability P is *concave* if P(A∪B) ≤ P(A)+P(B)−P(A∩B), for all A, B ∈ F. Recall that ∆(Ω, F) denotes the set of all probability measures on (Ω, F). Upper probability P is *compatible* with the convex set (Gong & Meng, 2021) core(P) : = {P ∈ ∆(Ω, F) : P(A) ≤ P(A), ∀A *∈ F}* $\mathbf{\tau}:=\{P\in\mathbf{\tau}$ 2. $$):P(A$$ = {P ∈ ∆(Ω, F) : P(A) ≤ P(A) ≤ P(A), ∀A *∈ F}* where the second equality is a characterization (Cerreia-Vioglio et al., 2015, Page 3389). Notice that the core is convex (Marinacci & Montrucchio, 2004, Section 2.2). We assume it is nonempty and weak⋆-closed.19 Then, it is weak⋆-compact as a result of Marinacci & Montrucchio (2004, Proposition 3). Since the core is convex, the set ex[core(P)] of extreme points of the core is well defined. It contains all the elements of the core that cannot be written as a convex combination of one another. The following important result is a consequence of Walley (1991, Theorem 3.6.2). Theorem 9. Suppose core(P) is nonempty and weak⋆*-closed. Then, the following holds.* (a) ex[*core*(P)] ̸= ∅. (b) core(P) is the closure in the weak⋆topology of the convex hull of ex[*core*(P)]. (c) If P(A) = supP ∈*core*(P ) P(A), for all A ∈ F*, then* P(A) = supP ∈ex[core(P )] P(A), for all A ∈ F. So in order to define an upper probability P that setwise dominates the elements of core(P) it is enough to specify the extreme points of the core. ## D A New Bayes' Theorem For Ips We present Theorem 10, a result that - although appealing - does not lend itself well to be applied to the CBDL procedure. An extension of Theorem 10 is given in Caprio et al. (2024b). Call Θ the parameter space of interest and assume it is Polish, that is, the topology for Θ is complete, separable, and metrizable. This ensures that the set ∆(Θ, B) of probability measures on Θ is Polish as well, where B denotes the Borel σ-algebra for Θ. Let X be the set of all bounded, non-negative, B-measurable 18Recall that the possibility space of a random variable is the space of the values it can take on. 19Recall that in the weak⋆ topology, a net (Pα)α∈I converges to P if and only if Pα(A) → P(A), for all A ∈ F. functionals on Θ. Call D = *X × Y* the sample space endowed with the product σ-algebra A = Ax × Ay, where Ax is the σ-algebra endowed to X and Ay is the σ-algebra endowed to Y. Let the agent elicit Lθ := {Pθ ∈ ∆(D, A) : θ ∈ Θ}. Assume that each Pθ ∈ Lθ has density L(θ) = p(D | θ) with respect to some σ-finite dominating measure ν on (D, A); this represents the likelihood function for θ having observed data D ⊂ D. We assume for now that L ∈ X , for all D ⊂ D. Let the agent specify a set P of probabilities on (Θ, B). Then, compute P, and consider P co := core(P); it represents the agent's initial beliefs.20 We assume that every P ∈ Pco has density p with respect to some σ-finite dominating measure µ on (Θ, B), that is, p = dP dµ . We require the agent's beliefs to be represented by the core for two main reasons. The first, mathematical, one is to ensure that the belief set is compatible with the upper probability. The second, philosophical, one is the following (Caprio & Gong, 2023; Caprio & Mukherjee, 2023). A criticism brought forward by Walley (1991, Section 2.10.4.(c)) is that, given an upper probability P, there is no cogent reason for which the agent should choose a specific PT that is dominated by P, or - for that matter - a collection of "plausible" probabilities. Because the core considers all (countably additive) probability measures that are dominated by P, it is the perfect instrument to reconcile Walley's behavioral and sensitivity analysis interpretations. Let the agent compute Pθ, and consider L co θ:= core(Pθ); it represents the set of plausible likelihoods. Let $$\mathscr{L}:=\left\{L=\frac{\mathrm{d}P_{\theta}}{\mathrm{d}\nu},\,P_{\theta}\in\mathcal{L}_{\theta}^{\infty}\right\},\tag{4}$$ and denote by L(θ) := supL∈L L(θ) and by L(θ) := infL∈L L(θ), for all θ ∈ Θ. Call P co D := (PD ∈ ∆(Θ, B) : dPD dµ= p(θ | D) = L(θ)p(θ) RΘ L(θ)p(θ)dθ , p = dP dµ , P ∈ Pco, L = dPθ dν , Pθ ∈ Lco θ ) the class of posterior probabilities when the prior is in P co and the likelihood is in L co θ , and let P D(A) = supPD∈Pco D PD(A), for all A ∈ B. Then, the following is a generalization of Bayes' theorem in Wasserman & Kadane (1990). Theorem 10. *Suppose* P co,L co θare nonempty and weak⋆-closed. Then for all A ∈ B, $$\overline{P}_{D}(A)\leq\frac{\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}\overline{L}(\theta)1_{A}(\theta)P(d\theta)}{\mathbf{c}},\tag{5}$$ provided that the ratio is well defined. Here, c := supP ∈Pco RΘ L(θ)1A(θ)P(dθ) + infP ∈Pco RΘ L(θ)1Ac (θ)P(dθ), and 1A denotes the indicator function for A ∈ B. In addition, if P is concave, then the inequality in equation 5 is an equality for all A ∈ B. This result is particularly appealing because, given some assumptions, it allows to perform a (generalized) Bayesian update of a prior upper probability (PUP) by carrying out only one operation, even when the likelihood is ill specified so that a set of likelihoods is needed. We also have the following. Lemma 11. *Suppose* P co,L co θare nonempty and weak⋆-closed. Then, if P is concave, we have that P D is concave as well. This lemma is important because it tells us that the generalized Bayesian update of Theorem 10 preserves concavity, and so it can be applied to successive iterations. If at time t the PUP is concave, then the PUP at time t + 1 - that is, the posterior upper probability at time t - will be concave too. Necessary and sufficient conditions for a generic upper probability to be concave are given in Marinacci & Montrucchio (2004, Section 5). 20Superscript "co" stands for convex and core. In the future, these results can be generalized to the case in which the elements of X are unbounded using techniques in Troffaes & de Cooman (2014), and to the case in which the elements of X are R d-valued, for some d ∈ N, since we never used specific properties of R in our proofs. Despite being attractive, the generalized Bayesian update of Theorem 10 hinges upon three assumptions, namely that P co and L co θare both cores of an upper probability, that they are nonempty and weak⋆-closed, and that the prior upper probability P is concave. As the proverb goes, there is no free lunch. Having to check these assumptions, together with computing a supremum, an infimum, and the integrals in equation 5, makes Theorem 10 inadequate to be applied in the context of CBDL. ## E Bounds On Upper And Lower Entropy In this section we find an upper bound for H(P) and a lower bound for H(P) that are extremely interesting. We first need to introduce three new concepts. Definition 12. Consider a set P of probabilities on a generic measurable space (Ω, F). We say that lower probability P *is convex if* P(A ∪ B) ≥ P(A) + P(B) − P(A ∩ B), for all A, B ∈ F. Then, let P *be either an upper or a lower probability, and consider a generic bounded function* f on (Ω, F), that is, f ∈ B(Ω). We define the Choquet integral of f with respect to P *as follows* $$\int_{\Omega}f(\omega)\mathsf{P}(d\omega):=\int_{0}^{\infty}\mathsf{P}\left(\{\omega\in\Omega:f(\omega)\geq t\}\right)dt+\int_{-\infty}^{0}\left[\mathsf{P}\left(\{\omega\in\Omega:f(\omega)\geq t\}\right)-\mathsf{P}(\Omega)\right]dt,$$ where the right hand side integrals are (improper) Riemann integrals. If P *is additive, then the Choquet* integral reduces to the standard additive integral. Finally, if Ω *is uncountable, define for all* ω ∈ Ω $$\underline{{{\pi}}}(\omega):=\operatorname*{inf}_{P\in\mathcal{P}}\frac{d P}{d\mu}(\omega)\quad a n d\quad\overline{{{\pi}}}(\omega):=\operatorname*{sup}_{P\in\mathcal{P}}\frac{d P}{d\mu}(\omega).$$ We call them lower and upper densities, respectively. The following theorem gives the desired bounds. Theorem 13. Consider a set P of probabilities on a generic measurable space (Ω, F). If Ω *is uncountable,* assume that every P ∈ P is dominated by a σ-finite measure µ *and that the Radon-Nikodym derivatives* dP dµ are continuous and bounded, for all P ∈ P*. Define* $$H(\underline{{{P}}}):=\begin{cases}-\int_{\Omega}\log\left[\overline{{{\pi}}}(\omega)\right]\underline{{{P}}}(d\omega)&\text{if}\Omega\text{is uncountable}\\ -\sum_{\omega\in\Omega}\underline{{{P}}}(\{\omega\})\log[\overline{{{P}}}(\{\omega\})]&\text{if}\Omega\text{is at most countable}\end{cases}$$ and similarly $$H(\overline{{{P}}}):=\begin{cases}-\int_{\Omega}\log\left[\underline{{{\pi}}}(\omega)\right]\overline{{{P}}}(d\omega)&\text{if}\Omega\text{is uncountable}\\ -\sum_{\omega\in\Omega}\overline{{{P}}}(\{\omega\})\log[\underline{{{P}}}(\{\omega\})]&\text{if}\Omega\text{is at most countable}\end{cases}$$ Then, H(P) ≤ H(P) and H(P) ≥ H(P). In addition, if P is concave, the first bound is tighter, and if P is convex, the second bound is tighter. Remark 14. *In Hüllermeier & Waegeman (2021, Section 4.6.1), the authors point out that Abellán & Moral* (2000) presents a generalization of the Hartley measure, called generalized Hartley measure GH(P), that can be used to disaggregate the total uncertainty captured by H(P) into aleatoric and epistemic uncertainties. We prefer not to introduce it in the present work because GH(P) *is defined based on the mass function of* a belief function (Gong & Meng, 2021, Definition 2.4).21 *This entails that the authors assume that lower* 21A belief function is a mathematical concept that should not be confused with the term "belief" we used throughout the paper to address the agent's knowledge. probability P associated with the set of probabilities of interest is a belief function, and so that for every collection {A, A1, . . . , Ak} such that A ⊂ Ai*, the following holds* $$\underline{{{P}}}(A)\geq\sum_{\emptyset\neq I\subset\{1,\ldots,k\}}(-1)^{\#I-1}\underline{{{P}}}(\cap_{i\in I}A_{i}),$$ for all k ∈ N. As it is immediate to see, this is assumption is not needed in the context of CBDL. Other ways of disentangling and quantifying AU and EU can be found in Sale et al. (2023a;b) ## F How To Derive A Predictive Distribution Suppose we performed a Bayesian updating procedure so to obtain posterior pdf p(θ | x1, y1, . . . , xn, yn). Recall that {(xi, yi)} n i=1 ∈ (*X × Y*) n denotes the training set. We obtain the predictive distribution p(˜y | x, x ˜ 1, y1, . . . , xn, yn) on Y as follows $$p(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})=\int_{\Theta}p(\tilde{y},\theta\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\mathrm{d}\theta$$ $$=\int_{\Theta}p(\tilde{y}\mid\theta,\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\cdot p(\theta\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\mathrm{d}\theta$$ $$=\int_{\Theta}p(\tilde{y}\mid\theta,\tilde{x})\cdot p(\theta\mid x_{1},y_{1},\ldots,x_{n},y_{n})\mathrm{d}\theta,$$ where p(˜y | θ, x˜) is the likelihood used to derive the posterior. Notice that the last equality comes from output y˜ only depending on input x˜ and parameter θ, and from having assumed Dx ⊥⊥ θ (see section 2.1). From an applied point of view, a sample from p(˜y | *x, x* ˜ 1, y1, . . . , xn, yn) is obtained as follows: 1. specify input x˜; 2. sample a parameter ˜θ from the posterior, ˜θ ∼ p(θ | x1, y1, . . . , xn, yn); 3. plug ˜θ in the likelihood and sample y˜ ∼ p(y | ˜θ, x˜). ## G Aleatoric Uncertainty Check For Α**-Level Ihdr** The diameter of IRα(Pˆpred) is a function of the aleatoric uncertainty (AU) faced by the agent.22 If we want to avoid to perform the computation in **Step 5** of Algorithm 1 only to discover that IRα(Pˆpred) is "too large", then we can add an "AU check". At the beginning of the analysis, compute the lower entropy H(Pˆpred) = mink,s H(Pˆpred k,s ) associated with the set exPˆpred of extreme elements of the VI-approximated predictive credal set Pˆpred. By equation 2, it is equal to the aleatoric uncertainty encoded in Pˆpred. We then verify whether the lower entropy H(Pˆpred) is "too high". That is, if H(Pˆpred) > φ, for some φ > 0, we want our procedure to abstain. This means that if the aleatoric uncertainty in set Pˆpred is too high, then our procedure does not return any output set for input x˜. The value of φ can be set equal to the entropy of the probability measures that are typically used in the context the agent works in. For example, in medical applications the agent may consider the entropy of a Normal distribution, while in financial applications the entropy of a distribution with fatter tails, such as a t-distribution or a Cauchy. We call these reference φ *values*. If we add this "AU check", the 2-tuple that the agent needs to specify at inference time in Algorithm 1 is ⟨*φ, α*⟩. 22Since the diameter is a metric concept, we assume that we can find a well-defined metric dy on Y. If that is not the case, we substitute the diameter with the notion of cardinality. ## H Α**-Level Ihdr In A Classification Setting** In classification problems, BNNs compute the probability vector $$\stackrel{\mathrm{\tiny{\bf-1}}}{\varpi}:=\frac{1}{\#\Theta}\sum_{\theta\in\Theta}\Phi_{\theta|D}(x),$$ where we write Φθ|D to highlight the fact that θ is sampled from posterior p(θ | D), and then select the most likely class yˆ := arg maxj ϖj , where the ϖj 's are the elements of ϖ. When applied to a classification setting, the general procedure introduced in Algorithm 1 becomes the following. Recall that we denote by K the cardinality of exPprior, and by S the cardinality of exPlik. Assume that Y = {y1*, . . . , y*J }, that is, there are J ∈ N≥2 possible labels. Then, a VI-approximated predictive distribution Pˆpred k,s in exPˆpred can be seen as J-dimensional probability vector pˆ pred k,s = (ˆp pred k,s,1 , . . . , pˆ pred k,s,J ) ⊤, where pˆ pred k,s,j = Pˆpred k,s ({yj}), for all k ∈ {1, . . . , K}, s ∈ {1*, . . . , S*}, and j ∈ {1*, . . . , J*}. Now fix any k and any s, and define the partial order ⪯k,s on Y as yl ⪯k,s yi ⇐⇒ pˆ pred k,s,l ≥ pˆ pred k,s,i and yl ≺k,s yi ⇐⇒ pˆ pred k,s,l > pˆ pred k,s,i, where i, l ∈ {1*, . . . , J*}, i ̸= l. This means that we can order the labels according to the probability that Pˆpred k,s assigns to them: the first label will be the one having highest probability according to Pˆpred k,s , the second label will have the second-highest probability according to Pˆpred k,s , and so on. Now order the label space Y according to ⪯k,s so to obtain $$y^{k,s}:=\{y_{1}^{k,s},\ldots,y_{J}^{k,s}\}.$$ This means that y k,s 1 ⪯k,s y k,s j, for all j ∈ {2*, . . . , J*}, y k,s 2 ⪯k,s y j, for all j ∈ {3*, . . . , J*} (but y k,s 1 ⪯k,s y k,s 2), and so on. That is, we order the labels from the most to the least likely according to Pˆpred $\circ\;y_j^{k,s}$, for all $j\in\{3,\ldots\}$. k,s . Then, we call α*-level credible set* according to Pˆpred k,s , α ∈ [0, 1], the set $$CS_{\alpha}(\hat{P}^{\rm pred}_{k,s}):=\bigg{\{}y_{1}^{k,s},\ldots,y_{j}^{k,s}:\sum_{i=1}^{j}\hat{P}^{\rmpred}_{k,s}(\{y_{i}^{k,s}\})\in[1-\alpha,1-\alpha+\varepsilon],\,j\leq J,\tag{6}$$ $$\text{and}\,\,\hat{y}^{j}<j,\sum_{i=1}^{j^{\prime}}\hat{P}^{\rm pred}_{k,s}(\{y_{i}^{k,s}\})\in[1-\alpha,1-\alpha+\varepsilon]\bigg{\}},$$ $\hat{P}^{\rm pred}_{k,s}$\(\hat{P}^{\rm pred}_{k,s} $k_{\mu}$ for some ε > 0. It corresponds to the α-level HDR Rα(Pˆpred k,s ). Notice that we require Pj i=1 Pˆpred k,s ({y i}) ∈ [1−α, 1−α+ε] because we may need to go slightly above level 1−α. Just as a toy example, we may have 7 labels, 3 of which would give a 0.945 coverage, while 4 would give a coverage of 0.953. If we are interested in the α = 0.05-level credible set, we ought to include the fourth label, thus yielding a coverage slightly higher than 1 − α = 0.95. The interpretation to CSα(Pˆpred k,s ) is the following: it consists of the smallest collection of labels to which Pˆpred k,s assigns probability of at least 1 − α (that is, those having the highest probability of being the correct one for the new input x˜). Finally, we call α*-level imprecise credible set*, α ∈ [0, 1], the set $$I C S_{\alpha}(\hat{\mathcal{P}}_{\mathrm{pred}}):=\bigcup_{k,s}C S_{\alpha}(\hat{P}_{k,s}^{\mathrm{pred}}).$$ In turn, we have that Pˆ pred[{y˜ ∈ ICSα(Pˆpred)] ≥ 1 − α. Remark 15. Notice that if a credible set of level ≈ α is enough, then we can replace the left endpoint of the interval in equation 6 with 1 − (α + εk,s), for some εk,s > 0*. Strictly speaking, in this case we obtain an* (α+εk,s)-level credible set, which we denote by CSαk,s (Pˆ*pred* k,s ), where αk,s := α+εk,s*. Going back to our toy* example, we will have a credible set with 3 labels that yields a coverage of 1−(α+εk,s) = 0.945 ≈ 0.95 = 1−α, so εk,s = 0.005*. In turn, this implies that the imprecise credible set will have a coverage of* 1−(α+maxk,s εk,s), that is, it will have level α + maxk,s εk,s. We denote it by ICSα˜(Pˆpred)*, where* α˜ := α + maxk,s εk,s. ## I Distance Between A Set Of Distributions P **And A Single Distribution** P ′ **Having** Different Dimensions Many concepts in this section are derived from Cai & Lim (2022). Let *m, n* ∈ N such that m ≤ n, and let p ∈ [1, ∞]. Call Mp(R j) the set of probability measures on R j having finite p-th moment, and Md(R j) the set of probability measures on R j having density with respect to some σ-finite dominating measure µ, j ∈ {*m, n*}. Let O(*m, n*) := {V ∈ R m×n : V V ⊤ = Im}, where Im denotes the m-dimensional identity matrix, and for any V ∈ O(*m, n*) and any b ∈ R m, define the following function φV,b : R n → R m, x 7→ φV,b(x) := V x + b. Let B(R n) be the Borel σ-algebra on R n, and for any Q ∈ ∆(R n, B(R n)), define φV,b(Q) := Q ◦ φ −1 V,b, the pushforward measure. Consider then two generic probability measures *Q, S* such that Q ∈ Mp(R m) and S ∈ Mp(R n), and call Φ + p (*Q, n*) := {α ∈ Mp(R n) : φV,b(α) = Q, for some V ∈ O(m, n), b ∈ R m}, Φ + d (*Q, n*) := {α ∈ Md(R n) : φV,b(α) = Q, for some V ∈ O(m, n), b ∈ R m}, Φ −(*S, m*) := {β ∈ M(R m) : φV,b(S) = β, for some V ∈ O(m, n), b ∈ R m}. Recall now the definition of p-Wasserstein metric between two generic distributions defined on the *same* Euclidean space. Let P1, P2 ∈ Mp(R n), for some n ∈ N and some p ∈ [1, ∞]. Then, the p-Wasserstein distance between them is defined as $$W_{p}(P_{1},P_{2}):=\left[\operatorname*{inf}_{\gamma\in\Gamma(P_{1},P_{2})}\int_{\mathbb{R}^{2n}}\|x-y\|_{2}^{p}\gamma(\operatorname{d}(x,y))\right]^{1/p},$$ where ∥·∥2 denotes the Euclidean distance, p = ∞ is interpreted as the essential supremum, and Γ(P1, P2) := {γ ∈ ∆(R 2n, B(R 2n)) : projn 1 (γ) = P2, projn 2 (γ) = P1} is the set of couplings between P1 and P2, where projn 1 is the projection onto the first n coordinates, and projn 2 is the projection to the last n coordinates. Recall then the definition of f-divergence between two generic distributions defined on the *same* Euclidean space. Let P1, P2 ∈ ∆(R n, B(R n)), for some n ∈ N, and assume P1 ≪ P2. Then, for any convex functional f on R such that f(1) = 0, the f-divergence between P1 and P2 is defined as $$\operatorname{div}_{f}(P_{1}\|P_{2}):=\int_{\mathbb{R}^{n}}f\left({\frac{\operatorname{d}\!P_{1}}{\operatorname{d}\!P_{2}}}(x)\right)P_{2}(\operatorname{d}\!x).$$ Aside from the Rényi divergence, the f-divergence includes just about every known divergences as special case (Cai & Lim, 2022). The following are the main results of this section. Lemma 16. Let m, n ∈ N such that m ≤ n, and let p ∈ [1, ∞] and f be any convex functional on R such that f(0) = 1*. Consider a generic* P ⊂ Mp(R m) and P ′ ∈ Mp(R n)*. Let* Φ + p (P, n) := ∪P ∈PΦ + p (P, n) and Φ + d (P, n) := ∪P ∈PΦ + d (P, n)*. Define* - W+ p (*P, P*′) := infα∈Φ + p (P,n) Wp(α, P′)*, for all* P ∈ P; - div+ f (P∥P ′) := infα∈Φ + d (P,n) divf (P∥P ′)*, for all* P ∈ P; - W+ p (P, P′) := infα∈Φ + p (P,n) Wp(*α, P*′); - div+ f (P∥P ′) := infα∈Φ + d (P,n) divf (P∥P ′); Then, for all P ∈ P *the following holds* $$W_{p}^{+}({\mathcal P},P^{\prime})\leq W_{p}^{+}(P,P^{\prime})\quad a n d\quad d i v_{f}^{+}({\mathcal P}\|P^{\prime})\leq d i v_{f}^{+}(P\|P^{\prime}).$$ Lemma 17. Let m, n ∈ N such that m ≤ n, and let p ∈ [1, ∞] and f be any convex functional on R *such* that f(0) = 1*. Consider a generic* P ⊂ Mp(R n) and P ′ ∈ Mp(R m)*. Let* Φ −(P, m) := ∪P ∈PΦ −(*P, m*). Define - W− p (*P, P*′) := infα∈Φ−(P,m) Wp(α, P′)*, for all* P ∈ P; - div− f (P∥P ′) := infα∈Φ−(P,m) divf (P∥P ′)*, for all* P ∈ P; - W− p (P, P′) := infα∈Φ−(P,m) Wp(*α, P*′); - div− f (P∥P ′) := infα∈Φ−(P,m) divf (P∥P ′); Then, for all P ∈ P *the following holds* W− $\nu_{\mu}(\mathcal{L},T_{\mu})$ (P, P′) ≤ W− p (P, P′) *and div*− f (P∥P ′) ≤ div− $$i v_{f}^{-}(P||P^{\prime}).$$ A visual representation of the application of Lemma 17 in the context of CBDL is given in Figure 11. ![34_image_0.png](34_image_0.png) Figure 11: We assume that *n > m* and that the oracle distribution L o belongs to Mp(R m), while likelihood FGCS Plik is a subset of Mp(R n), for some finite p ≥ 1. We also assume that L is one of the extreme elements of Plik. We see how W− p (Plik, Lo) < W− p (L, Lo); if we replace metric W− p by a generic f-divergence divf , the inequality would still hold thanks to Lemma 17. ## J Details On Artificial Pancreas Example Artificial Pancreas Model. An important factor when designing the controller for an artificial pancreas is to adapt the insulin delivery algorithm to the particular details of the patient. This is because patients display a wide range of variability in their response to insulin, depending on age, Body Mass Index (BMI), and other physiological parameters. The Bayesian Neural Network models have 2 hidden layers, with 10 neurons each, for the case study with 4 different seeds. This choice was informed by the experiments in Dutta et al. (2018). For the case study with different architectures, we trained BNNs with 4 different widths: 10, 20, 30, and 40. The horizon length is H = 10 time steps, and the prediction horizon is 5 steps into the future. The neural networks were trained for 200 time steps, with a learning rate of 0.001, and batch size of 128 using Mean-Field Variational Inference. The training dataset consisted of 28400 training samples, recorded without meals. Controller. We implemented a simple model predictive controller, using an off-the-shelf implementation of the covariance matrix adaptation evolution strategy (CMA-ES). The model predictive control planning horizon was k = 5, with a fixed seed for the randomized solver. ## K Other Possible Baselines For Cbdl Our experiments may seem like a type of belief tracking (Fortin et al., 2017; Klein et al., 2010). Two comments are in order. First, this line of literature does not use deep learning techniques. Second, in these works the authors rely on Dempster-Shafer theory, a field in imprecise probability theory where lower probabilities are assumed to be belief functions, see Remark 14. We do not rely on this assumption in our work. ## L Further Related Work Modeling uncertainty has been a longstanding goal of ML/AI research and a variety of approaches have been developed for doing so (Guo et al., 2017; Park et al., 2020; Jospin et al., 2022). Recently, emphasis has been placed on discerning between aleatoric and epistemic uncertainties (Senge et al., 2014; Kull & Flach, 2014; Kendall & Gal, 2017a). In Tretiak et al. (2022), the authors present an IP-based neural network which uses a regression technique based on probability intervals. Contrary to CBDL, their NN is rooted in the frequentist approach to imprecise probabilities (Huber & Ronchetti, 2009). In Manchingal & Cuzzolin (2022, Sections 2.1, 2.3), the authors focus on belief-functions-based classification methods. CBDL cannot be directly compared with these methodologies because (i) they do not require that the user expresses their knowledge via a belief function, but rather through a credal set; (ii) they can be used for regression and classification; (iii) they are rooted in Bayesian theory, as opposed to Dempster-Shafer theory. Other works in ML using a belief function approach are those from the field of evidential machine learning (EML), see e.g. Denœux (2022; 2023) and references therein. Existing models mainly address clustering, classification, and regression problems. The reasons why CBDL cannot be directly compared with methods from the EML literature are (i) and (iii) above. We also point out how there exists a field within ML called evidential deep learning (EDL, not to be confused with EML). It aims at quantifying and disentangling EU and AU, and its methodologies can be applied to classification and regression, see e.g. Amini et al. (2020); Sensoy et al. (2018) and references therein. CBDL is not immediately comparable to models from EDL because these latter are non-Bayesian. In addition, their definitions of AU and EU are slightly different from the canonical ones in ensemble deep learning. ## M Proofs Proof of Proposition 3. If P(A) = supP ′∈Π′ P ′(A), for all A ∈ F, then it is immediate to see that P(A) = supP ∈Π P(A) = supP ′∈exΠ′ P ′(A) = supP ′∈Π′ P ′(A), for all A ∈ F, since Π ⊂ Π′. Suppose now that P(A) = supP ∈Π P(A), for all A ∈ F. Then, we have that for all P ∈ Π and all A ∈ F, P(A) ≤ P(A). Pick now any P ′ ∈ Π′. We can write it as P ′ =Pk j=1 βjPj , where βj ∈ [0, 1], for all $j\in\{1,\ldots,k\}$, $\sum_{j=1}^{k}\beta_{j}=1$, and $\{P_{j}\}_{j=1}^{k}=\Pi$. Pick then any $A\in\mathcal{F}$; we have: $$P^{\prime}(A)=\sum_{j=1}^{k}\beta_{j}P_{j}(A)\leq\sum_{j=1}^{k}\beta_{j}{\overline{{P}}}(A)={\overline{{P}}}(A).$$ $$\square$$ So P(A) ≥ P ′(A). Because this holds for all P ′ ∈ Π′ and all A ∈ F, the claim is proven. Proof of Proposition 6. The lower bound for the upper entropy of Π′comes immediately from Π′ being a superset of Π. Let us now prove the lower entropy equality. Let Π = {P1*, . . . , P*k} and Π′ = ConvΠ. Pick any P ′ ∈ Π′. By the definition of Π′, there exists a collection of non-negative reals {βj} k j=1 such that Pk j=1 βj = 1 and Pk j=1 βjPj = P ′. By the concavity of the entropy, we have that $$H(P^{\prime})=H\left(\sum_{j=1}^{k}\beta_{j}P_{j}\right)\geq\sum_{j=1}^{k}\beta_{j}H(P_{j})\geq\sum_{j=1}^{k}\beta_{j}\underline{{{H}}}(P)=\underline{{{H}}}(P):=\operatorname*{inf}_{P\in\Pi}H(P).$$ $$\square$$ Since P ′ was chosen arbitrarily and Π is finite, this implies that $$\operatorname*{inf}_{P^{\prime}\in\Pi^{\prime}}P$$ H(P ′) =: H(P ′) ≥ H(P). In addition, we have that, since Π ⊂ Π′, H(P ′) ≤ H(P). Combining this with the above result, we obtain $$P^{\prime})\leq\underline{{{H}}}(P).\ \mathrm{Comh}$$ $$\underline{{{H}}}(P^{\prime})=\underline{{{H}}}(P).$$ Proof of Proposition 8. Pick any metric d on the space ∆Ω ≡ ∆(Ω, F) of probabilities on (Ω, F), and any P′ ∈ Π′. Because P′ belongs to Π′, infP ′∈Π′ d(P ′, Ψ) can only be either equal to or smaller than d(P′, Ψ). By the definition of d(Π′, Ψ), if infP ′∈Π′ d(P ′, Ψ) = d(P′, Ψ), then d(Π′, Ψ) = d(P′, Ψ). If instead infP ′∈Π′ d(P ′, Ψ) < d(P′, Ψ), then d(Π′, Ψ) < d(P′, Ψ). The proof is similar for a generic divergence div on ∆(Ω, F). Proof of Theorem 10. Assume P co,L co θare nonempty and weak⋆-closed. Then, by Marinacci & Montrucchio (2004, Proposition 3), they are also weak⋆-compact. Pick any A ∈ B. Recall that we can rewrite the usual Bayes' updating rule as $$\begin{array}{c}{{P_{D}(A)=\frac{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}}{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)+\int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}}}\\ {{=\frac{1}{1+\frac{\int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)},}}}\\ {{=\frac{1}{1+\frac{\int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}},}}\end{array}$$ which is maximized whenRΘ $$\frac{\int_{\Theta}L(\theta)\mathbb{I}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\int_{\Theta}L(\theta)\mathbb{I}_{A}(\theta)P(\mathrm{d}\theta)}$$ is minimized. ButRΘ $$\frac{\int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}\geq\frac{\inf_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\underline{L}(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)},$$ which proves the inequality in equation 5. Assume now that P is concave. By Wasserman & Kadane (1990, Lemma 1), we have that there exists P ∈ Pco such that $$\operatorname*{sup}_{P\in P^{\mathrm{co}}}\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)=\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)\mathbf{P}(\mathrm{d}\theta),$$ $$\left(7\right)$$ for all L ∈ L . In addition, by Wasserman & Kadane (1990, Lemma 4), we have that for all X ∈ X and all ϵ > 0, there exists a non-negative, upper semi-continuous function h ≤ X such that $$\begin{split}\left[\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}X(\theta)P(\mathrm{d}\theta)\right]-\epsilon&<\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}h(\theta)P(\mathrm{d}\theta)\\ &\leq\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}X(\theta)P(\mathrm{d}\theta).\end{split}\tag{8}$$ Let now X = L1A. Notice that since L co θis weak⋆-compact, by equation 4 so is L . This implies that L,L ∈ L , since a compact set always contains its boundary, so L ∈ X as well, and in turn L1A ∈ X . Fix then any L ∈ L and put h = L1A. It is immediate to see that h is non-negative and upper semi-continuous. Then, by equation 8, we have that for all ϵ > 0 $$\left[\sup_{P\in\mathcal{P}^{\mathrm{loc}}}\int_{\Theta}\overline{{{L}}}(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)\right]-\epsilon<$$ $$\sup_{P\in\mathcal{P}^{\mathrm{loc}}}\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)\leq\sup_{P\in\mathcal{P}^{\mathrm{loc}}}\int_{\Theta}\overline{{{L}}}(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta).$$ $$\quad(9)$$ Combining equation 7 andequation 9, we obtain $$\begin{split}&\left[\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}\overline{L}(\theta)\mathds{1}_{A}(\theta)P(\mathrm{d}\theta)\right]-\epsilon\\ &<\int_{\Theta}L(\theta)\mathds{1}_{A}(\theta)\mathbf{P}(\mathrm{d}\theta)\leq\sup_{P\in\mathcal{P}^{\infty}}\int_{\Theta}\overline{L}(\theta)\mathds{1}_{A}(\theta)P(\mathrm{d}\theta),\end{split}\tag{10}$$ for all L ∈ L . Pick now any ϵ > 0 and put $$\begin{array}{l}{{k:=\operatorname*{sup}_{P\in{\mathcal{P}}^{\mathrm{co}}}\int_{\Theta}\overline{{{L}}}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}}\\ {{\quad+\operatorname*{inf}_{P\in{\mathcal{P}}^{\mathrm{co}}}\int_{\Theta}\underline{{{L}}}(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)>0.}}\end{array}$$ Choose any L ∈ L and δ ∈ (0*, ϵk*). By equation 10 we have that [supP ∈Pco RΘ L(θ)1A(θ)P(dθ)] − δ < RΘ L(θ)1A(θ)P(dθ) and that [infP ∈Pco RΘ L(θ)1Ac (θ)P(dθ)] + δ > RΘ L(θ)1Ac (θ)P(dθ). Recall that c := supP ∈Pco RΘ L(θ)1A(θ)P(dθ) + infP ∈Pco RΘ L(θ)1Ac (θ)P(dθ), and define d := RΘ L(θ)1A(θ)P(dθ) + RΘ L(θ)1Ac (θ)P(dθ). Then, PD(A) = RΘ L(θ)1A(θ)P(dθ) d ≥ -supP ∈Pco RΘ L(θ)1A(θ)P(dθ)− δ c + δ − δ = supP ∈Pco RΘ L(θ)1A(θ)P(dθ) c− δ k > supP ∈Pco RΘ L(θ)1A(θ)P(dθ) c− ϵ. Since this holds for all ϵ > 0, we have that $$\operatorname*{sup}_{P_{D}\in{\mathcal{P}}_{D}^{\mathrm{co}}}P_{D}(A)={\frac{\operatorname*{sup}_{P\in{\mathcal{P}}^{\mathrm{co}}}\int_{\Theta}{\overline{{L}}}(\theta)1_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}},$$ concluding the proof. Proof of Lemma 11. Walley (1981); Wasserman & Kadane (1990) show that concave upper probabilities are closed with respect to the generalized Bayes' rule. In particular, this means that, if we let b := supP ∈Pco RΘ L(θ)1A(θ)P(dθ) + infP ∈Pco RΘ L(θ)1Ac (θ)P(dθ), for any fixed A ∈ B, if P is concave, then for all L ∈ L P D(A) = supP ∈Pco RΘ L(θ)1A(θ)P(dθ) b(11) is concave. But since L co θis weak⋆-compact (by our assumption and Marinacci & Montrucchio (2004, Proposition 3)), by equation 4 so is L . This implies that L,L ∈ L , since a compact set always contains its boundary. Call then L ′ = L1A + L1Ac . It is immediate to see that L ′ ∈ L . Then, by equation 11 we have that if we call b ′:= supP ∈Pco RΘ L ′(θ)1A(θ)P(dθ) + infP ∈Pco RΘ L ′(θ)1Ac (θ)P(dθ), it follows that $$\begin{array}{c}{{\overline{{{P}}}_{D}(A)=\frac{\operatorname*{sup}_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L^{\prime}(\theta)\mathbb{I}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{b}^{\prime}}}}\\ {{=\frac{\operatorname*{sup}_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{{{L}}}(\theta)\mathbb{I}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}}}\end{array}$$ $\square$ is concave, concluding the proof. Proof of Theorem 13. Suppose Ω is uncountable. First notice that, since we assumed dP dµ to be continuous and bounded for all P ∈ P, then so is log ◦ dP dµ , for all P ∈ P, since composing a continuous function with a continuous and bounded one gives us a continuous and bounded function. This entails that the Choquet integrals of log ◦ dP dµ with respect to P and P are both well defined. In addition, being continuous and bounded, both dP dµ and log ◦ dP dµ attain their infima and suprema thanks to Weierstrass' extreme value theorem, for all P ∈ P. Hence, all the Choquet integrals used in this proof are well defined. Then, we have the following H(P) : = sup P ∈P H(P) = sup P ∈P − Z Ω log dP dµ (ω) P(dω) = sup P ∈P Z Ω (− log) dP dµ (ω) P(dω) ≤ sup P ∈P Z Ω sup P ∈P (− log) dP dµ (ω) P(dω) (12) ≤ Z Ω sup P ∈P (− log) dP dµ (ω) P(dω) (13) = − Z Ω inf P ∈P log dP dµ (ω) P(dω) (14) = − Z Ω log inf P ∈P dP dµ (ω) P(dω) (15) = − Z Ω log [π(ω)] P(dω) = H(P). The inequality in equation 12 is true because for all ω ∈ Ω, $$\operatorname*{sup}_{P\in\mathcal{P}}\left\{(-\log)\left({\frac{\mathrm{d}P}{\mathrm{d}\mu}}(\omega)\right)\right\}\geq(-\log)\left({\frac{\mathrm{d}P}{\mathrm{d}\mu}}(\omega)\right).$$ The inequality in equation 13 is a property of Choquet integrals taken with respect to upper probabilities (Marinacci & Montrucchio, 2004). The equality in equation 14 is true because for a generic function f, we have that sup −f = − inf f. Finally, the equality in equation 15 is true because the logarithm is a strictly increasing function. By Marinacci & Montrucchio (2004, Theorem 38), if P is concave, then inequality equation 13 holds with an equality, and so the bound is tighter. The proof for H(P) ≥ H(P) is similar; we use the facts that $$\operatorname*{inf}_{P\in{\mathcal{P}}}\Big\{\left(-\log\right)\Big({\frac{\mathrm{d}P}{\mathrm{d}\mu}}(\omega)\Big)\Big\}\leq(-\log)\,\Big({\frac{\mathrm{d}P}{\mathrm{d}\mu}}(\omega)\Big),{\mathrm{~for~all~}}\omega\in\Omega;$$ - by Marinacci & Montrucchio (2004), $$\inf_{P\in\mathcal{P}^{\mathcal{D}}}\int_{\Omega}\inf_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}P(\mathrm{d}\omega)\tag{16}$$ $$\geq\int_{\Omega}\inf_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\underline{P}(\mathrm{d}\omega);$$ - for a generic function f, inf −f = − sup f; - by Marinacci & Montrucchio (2004, Theorem 38), if P is convex, then equation 16 holds with an equality. Suppose now Ω is at most countable; in this case, we do not need any assumptions to make the Choquet integrals well defined, since we will not deal with density functions. The following holds H(P) : = sup P ∈P H(P) = sup P ∈P − X ω∈Ω P({ω}) log [P({ω})]! = sup P ∈P X ω∈Ω P({ω})(− log) [P({ω})] ≤ X ω∈Ω sup P ∈P {P({ω})(− log) [P({ω})]} (17) ≤ X ω∈Ω P({ω}) sup P ∈P (− log) [P({ω})] (18) = − X ω∈Ω P({ω}) inf P ∈P log [P({ω})] (19) ω∈Ω P({ω}) log inf P ∈P P({ω}) (20) = − X = − X ω∈Ω P({ω}) log [P({ω})] = H(P). $$(20)$$ The inequality in equation 17 comes from the well known fact that the sum of the suprema is at least equal to the supremum of the sum. The inequality in equation 18 comes from the fact that for differentiable functions, the product of the suprema is at least equal to the supremum of the product. The equality in equation 19 is true because for a generic function f, we have that sup −f = − inf f. Finally, the equality in equation 20 is true because the logarithm is a strictly increasing function. By Marinacci & Montrucchio (2004, Theorem 38), if P is concave, then inequality equation 17 holds with an equality, and so the bound is tighter. The proof for H(P) ≥ H(P) is similar; we use the facts that - the sum of the infima is at most equal to the infimum of the sum; - for differentiable functions, the product of the infima is at most equal to the infimum of the product; - for a generic function f, inf −f = − sup f; by Marimaci & Montrucchio (2004, Theorem 38), if $\underline{P}$ is convex, $\inf_{P\in\mathcal{P}}\sum_{\omega\in\Omega}P(\{\omega\})(-\log)\left[P(\{\omega\})\right]=\sum_{\omega\in\Omega}\inf_{P\in\mathcal{P}}\left\{P(\{\omega\})(-\log)\left[P(\{\omega\})\right]\right\}$. Proof of Lemma 16. Fix any p ∈ [1, ∞] and pick any P ∈ P. Because Φ + p (P, n) ⊂ Φ + p (P, n), then infα∈Φ + p (P,n) Wp(*α, P*′) can only be either equal or smaller than infα∈Φ + p (P,n) Wp(*α, P*′). Now, if infα∈Φ + p (P,n) Wp(*α, P*′) = infα∈Φ + p (P,n) Wp(*α, P*′), then W+ p (P, P′) = W+ p (P, P′). If instead infα∈Φ + p (P,n) Wp(*α, P*′) < infα∈Φ + p (P,n) Wp(*α, P*′), then W+ p (P, P′) < W+ p (P, P′). This concludes the first part of the proof. Fix then any convex functional f on R such that f(0) = 1; the proof is similar for f-divergences. Proof of Lemma 17. The proof is very similar to that of Lemma 16.
Review 1: Summary: This paper proposes credal Bayesian deep learning. The main idea is to consider of a set of $K$ priors/ its convex hull set and another set of $S$ likelihood distribution/its convex hull set (e.g., different architectures), subsequently variational approach is applied to get $K \times S$ posteriors (each corresponds to one pair of prior and likelihood distribution), and finally $K \times S$ posteriors are used to make predictions and compute the Imprecise Highest Density Region over the predictions. Strengths and Weaknesses: ## Strengths - The idea makes sense. - The theory developed is quite solid. ## Weaknesses - The story is not really motivative and convincing. The section to motivate why we need to leverage the imprecise probabilities in Bayesian learning is not well motivated and convinced to me. The authors should rewrite this section. - It is still unclear the benefit of the imprecise probabilities, e.g., this enables the computation of Imprecise Highest Density Region and Aleatoric and Epistemic Uncertainties? Moreover, the definition of $\bar{H}(P)$ as sup of $H(P)$ is not rigorous because the sup is over $P$. Instead, it should be $H(\Pi')$? - Is the final Imprecise Highest Density Region over the label set is similar to the conformal prediction? If so, the authors should compare with the conformal prediction approaches for regression and classification to demonstrate the merit. - The proposed approach seems to be time and memory consuming because we need to do variational approach $K \times S$ times and store $K \times S$ deep learning models? - The authors should enrich the experiments to compare with the current SOTA approaches in Bayesian Neural Networks. As far as I know, there are recently many works in Bayesian Neural Networks. - In Tables 1 and 2, the authors should report the expected calibration error. Requested Changes: - Please give more motivation for leveraging the imprecise probabilities to Bayesian learning. - What is the fundamental difference of considering a mixture of some priors and a convex hull over the set of priors and similar question for the likelihood distribution? - Please make changes based on my comments in the weakness section. Broader Impact Concerns: There is no ethical concern of the work. ================================================== Review 2: Summary: Adapted from my previous review (Paper1573 by Reviewer cXJk): The paper introduces Credal Bayesian Deep Learning (CBDL), which allows for considering multiple priors and likelihoods and offer a different way of quantifying aleatoric and epistemic uncertainty. In empirical studies, they compare CBDL to ensembles of BNNs (EBNNs) for both regression and classification (in the appendix). The paper suggests using multiple likelihoods and priors and simply performing Bayesian inference on all combinations to obtain a set of BNNs. To quantify and disentangle aleatoric and epistemic uncertainty for an input the paper suggests determining the highest $\overline{H}(P)$ and lowest entropy $\underline{H}(P)$ and then using the decomposition: $$ \underbrace{\bar{H}(P)}\_{\text {total uncertainty }}=\underbrace{\underline{H}(P)}\_{\text {aleatoric uncertainty }}+\underbrace{[\bar{H}(P)-\underline{H}(P)]}\_{\text {epistemic uncertainty}}. $$ To make predictions, the paper defines regions of imprecise high density across the BNNs. --- Comparison to the previous submission: https://draftable.com/compare/sPCFPgnAKMzD Strengths and Weaknesses: The paper addresses an important point that often comes up with Bayesian neural networks and Bayesian approaches in general: the question of selecting a prior and likelihood to perform Bayesian inference on. Indeed, an often-heard claim against BNNs is that the choice of prior and likelihood can appear arbitrary, and while Bayesian inference is principled and rational, the Bayesian viewpoint does not help much with the former. It provides a good overview and many references for underlying research on uncertainty quantification and Bayesian theory. Thus, the research question is of great importance and the paper will find interest within the community. At the same time, there are a few weaknesses: 1. The paper mentions Kendall & Gal and other works that explicitly disentangle epistemic and aleatoric uncertainty with a single BNN but then claims that BNNs cannot disentangle aleatoric and epistemic uncertainty. Thus, related work that is mentioned in this context is not compared or acknowledged properly. 2. On the face of it, the approach trains an ensemble of BNNs with different architectures and/or priors (see also Wenzel et al., 2020). As such, it would seem necessary to compare to approaches that use meta-priors as this could be viewed as Bayesian inference over meta-priors where we choose a uniform (uninformative) prior over the prior distributions and likelihoods. 3. The experiments only compare an ensemble of BNNs with the proposed method. While the paper mentions that they do not apply BMA (Bayesian model averaging) due to a paper that found that it does not well for BNNs approximated with certain settings for HMC, a comparison to regular BNNs (not ensembled) and different approximation methods would have been helpful to compare the quality of the proposed method. Thus, I don’t see all the statements as sufficiently evidenced yet. ## Details > We note in passing that EU cannot be captured using a single BNN (Hüllermeier & Waegeman, 2021). This is simply wrong. Epistemic Uncertainty as defined in the same paragraph: > EU, instead, refers to the lack of knowledge about the data-generating process; as such, it is reducible can be measured using the concept of Expected Information Gain (EIG), introduced by Lindley (1956) and revisited in much research since then. This concept is exactly equivalent to the mutual information used to quantify EU in BNNs (see Kendal & Gal, etc.). The paper references these but does not actually compare them. Further, Hüllermeier & Waegeman (2021) is likely not the right citation. > since AU is irreducible, there is an increasing need for ML techniques that are able to detect an excess of AU and query for human help. This seems wrong as well. AU is irreducible, so assuming it is captured correctly by the model, querying a human for help will not help as AU is irreducible. Querying humans for help only helps for high EU. See the success of EU for active learning in BNNs and Bayesian Optimal Experiment Design. ### Ensemble of BNNs From my previous review: > **The problem with the comparison between EBNNs and CBDL is that for EBNNs, there is not necessarily a clean disentanglement of uncertainties, as each BNN might also capture some of the epistemic uncertainty within itself. That is, a sufficiently powerful model class could learn all possible uncertainty, so the resulting ensemble would express no epistemic uncertainty.** Further, I am not aware of *anyone* using EBNNs. The citation (Egele et al., 2021) does not substantiate this: it does not propose EBNNs from my reading. # IHDR as Union of HDRs The paper states that the construction of IHDRs (Def 4) can be operationalized via a union of HDRs (Def 5). I thank the authors for clarifying my earlier misunderstanding about this. However, I fail to see how the second property is always fulfilled: does the union of HDRs really always provide a minimal cover of high density? Requested Changes: From the last review: 1. to add comparisons to single BNNs as a baseline using the mean, stddev output parameterization as the ensembling for EBNNs might not be necessary, 2. compare to BMA as the cited paper is specific to HMC-based inference, which you do not seem to use anyway. In particular, EBNNs are flawed compared to a single (merged) BNN, as detailed above, so please compare them to that. Writing > From the previous sections, it is clear that CBDL improves on the uncertainty quantification capabilities of single BNNs as the start of the "Experiments" section is not sufficient as evidence and, hence, one of the listed contributions is not evidenced. ### Typos etc p. 2: "This because" p. 4: the last paragraph cites Jospin et al (2022) 5 times p. 5: the "(Berger, 1984), (Walley, 1991, Section 5.9)" citation should be merged. Finally: Compress the PDF before uploading please. I was able to compress the 4.3MB PDF down to 900KB. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper focuses on quantifying uncertainty in deep learning. The proposed approach is based on credal sets, i.e., on sets of probability distributions, Bayesian learning, and on disaggregation of bounds on total uncertainty (TU) into aleatoric uncertainty (AU) and epistemic uncertainty (EU) component bounds. The authors compare their approach to (ensembles of) Bayesian neural networks ((e)BNNs), providing theoretical and empirical evidence that their credal Bayesian deep learning (CBDL) approach leads to improved uncertainty estimates under various scenarios. All in all, I think the authors mostly follow up on the claims made by presenting a combination of theoretical arguments and empirical evidence. Strengths and Weaknesses: ### Strengths * The proposed approach has some theoretical justification and empirically leads to improved uncertainty estimates, especially for the AU component, compared to the eBNN baseline under various settings. * The problem of uncertainty quantification is important and timely. * The empirical problems seem interesting and relevant for the topic. ### Weaknesses * The paper is generally a bit hard to read, with plenty of side comments and detours. Improving, clarifying and focusing the writing could make the paper a lot easier to read and understand. * There is next to no discussion and no empirical comparison against other existing approaches beyond (e)BNNs. * While the proposed approach does seem to work better than the baseline, many of the results are still not very satisfying (e.g., AU estimates are not consistently increasing when increasing the corruption level). * Claiming that the proposed approach is theoretically well-justified while, e.g., eBNNs are not seems like a strong claim, since as far as I know, it is not generally clear how disentangling TU into AU and EU should actually be done in a principled manner (see e.g. Sec.3.3 in Hüllermeier et al. 2022 for a discussion). * One of the main limitations of the proposed approach seems to be the high computational complexity, yet this aspect is not discussed much, especially in connection to the empirical results. #### References: Hüllermeier et al. 2022: Quantification of Credal Uncertainty in Machine Learning: A Critical Analysis and Empirical Comparison. Requested Changes: Questions and suggestions roughly in decreasing order of importance: 1) Sec1, p1: "In this paper, we present a procedure that allows us to give a machine such a desirable quality." Please tone down the claim: there are existing methods which do this (even if the proposed method would improve on all of them, which is not clear), and the proposed method also does not work perfectly (see e.g. results in Table 3). 2) Sec1, p2: "CBDL also gives a way of quantifying and disentangling different types of uncertainties within $P_{pred}$". As far as I can see, the approach to disentangling AU and EU using TU comes from Abellàn et al. 2006? If so, please cite them also in the Intro, now this reads like you are proposing a novel approach to the disaggregation problem. 3) Sec1, p3: Comparing CBDL with non-Bayesian techniques: I do not fully understand why comparing to existing methods beyond Bayesian ones is out of scope. Can you explain? 4) W.r.t. to the "bad choice" discussion on p11: How sensitive the resulting uncertainty estimates are on the choice of the set of priors and likelihoods for the propose method? How much better such robustness is compared, e.g., to finite eBNNs? It would be good to have some discussion and empirical experiments to demonstrate this. 5) Sec4.2.2, p20: "We observe that for lower values of α the gains of CBDL are more pronounced." Looking at Table 7, the performance improvements with lowest alpha level (0.9) are negative, which I interpret as saying that CBDL is worse than eBNNs. Do I misread this (also: is there spelling mistake in Table 7: IBNN should be CBDL?)? 6) E.g. Sec1, p2: P_pred "enjoys desirable probabilistic guarantees", also Sec1, p3. Do all the stated theoretical properties carry over when the posterior/predictive distribution is only approximated, e.g., via VI, instead of calculated exactly? Please state this clearly in the paper. 7) Sec1, p3: "CBDL [...] is able to quantify both EU and AU, [...] in a principled manner, unlike ensemble of BNNs." As far as I know, disentangling TU into AU and EU like done in this paper is also not without potential problems (see e.g. Sec.3.3 in Hüllermeier et al. 2022). Why is this more principled than if one does something similar with eBNNS? You also state (e.g. in Abstract) that, heuristically, CBDL "allows to train [...] infinite ensemble of BNNs". If so, again, why is this more principled than if one uses finite ensemble of BNNs instead? 8) Sec3.2, p12 "This does not mean that we suffer from under-confidence". I am not sure if I understand the claim, so just to clarify: e.g. enlarging the set of priors/likelihoods around a true oracle prior/likelihood does not come with any cost in terms of increased uncertainty? How would having a larger set of priors/likelihoods compare to having single correctly specified prior/likelihood? 9) Sec1, p2: "This because [sic] selecting a unique prior and a unique likelihood implicitly assumes perfect knowledge around the true prior and the true data generating process. In turn, a unique distribution is only able to retrieve AU". I would have guessed that single prior and likelihood would result in basically estimating TU (i.e., pretending that EU is zero does not actually make the uncertainty "go away"). Can you elaborate? 10) Abstract: please clarify the claim: "Although Bayesian Neural Networks (BNNs) allow for uncertainty in the predictions to be assessed, different sources of uncertainty are indistinguishable". Do you mean specifically using a single BNN here? 11) Footnote 11, p9: you might as well state these for convenience at least in some appendix. 12) Sec4.1, p14: is there some motivation for the scale of severity, or is this just an empirical labeling? 13) Including a table with the notation and abbreviations might help in reading the paper: it is rather easy to get lost in soup of letters at some point. Broader Impact Concerns: I have no broader impact concerns for this paper. ==================================================
# Integrated Variational Fourier Features For Fast Spatial Modelling With Gaussian Processes Talay M Cheema tmc49@cam.ac.uk Department of Engineering University of Cambridge Carl Edward Rasmussen Department of Engineering University of Cambridge Reviewed on OpenReview: *https: // openreview. net/ forum? id= PtBzWCaCYB* ## Abstract Sparse variational approximations are popular methods for scaling up inference and learning in Gaussian processes to larger datasets. For N training points, exact inference has O(N3) cost; with M ≪ N features, state of the art sparse variational methods have O(NM2) cost. Recently, methods have been proposed using more sophisticated features; these promise O(M3) cost, with good performance in low dimensional tasks such as spatial modelling, but they only work with a very limited class of kernels, excluding some of the most commonly used. In this work, we propose integrated Fourier features, which extends these performance benefits to a very broad class of stationary covariance functions. We motivate the method and choice of parameters from a convergence analysis and empirical exploration, and show practical speedup in synthetic and real world spatial regression tasks. ## 1 Introduction Gaussian processes (GPs) are probabilistic models for functions widely used in machine learning applications where predictive uncertainties are important - for example, in active learning, Bayesian optimisation, or for risk-aware forecasts (Rasmussen & Williams, 2006; Hennig et al., 2022; Garnett, 2023). The hyperparameters of these models are often learnt by maximising the marginal likelihood, so it is important that this quantity can be evaluated fairly cheaply, especially to facilitate comparison of multiple models, or multiple random restarts for robustness. Yet, for N datapoints, the time cost is O(N3), associated with calculating the precision matrix of the data and its determinant. This is prohibitively large for many datasets of practical interest. A particularly important class of problems is spatial modelling, where often Gaussian process regression is applied in low (2-4) dimensions to very large datasets, and ideally the choice of prior process, encapsulated by the prior covariance function, is guided by domain-specific knowledge. One popular approach for improving scalability is to use a sparse variational approximation, wherein *M < N* inducing features are used as a compact representation, and a lower bound of the log marginal likelihood is maximised. In the conjugate setting, where the measurement model is affine with additive, white, and Gaussian noise, the variationally optimal distribution of the inducing features is available in closed form (SGPR; Titsias, 2009). This reduces the size of the precision matrix from N × N to M × M, but in practice multiplications by the cross covariance matrix between the data and the features dominates the computation cost at O(NM2). SGPR with inducing points - where the features are point evaluations of the latent function - can be interpreted as replacing data measured with independent and identical noise with a reduced pseudo-dataset measured at different locations and with correlated and variable noise. This pseudo-dataset is selected to minimise distortion in the posterior distribution over functions (or, equivalently, minimum discrepancy between the lower bound and the log marginal likelihood). This is particularly advantageous in the common case that the data is oversampled, where it is possible to set M ≪ N with asymptotically vanishing distortion (Burt et al., 2019; 2020a). Inducing points are the state of the art solution, but the scaling with N is problematic. One popular way to avoid this is to use batches of data (SVGP; Hensman et al., 2015). But this necessitates the use of stochastic, typically first order, optimisers; in the conjugate setting, this leads to iterative learning of the variational distribution which is otherwise available in closed form. Ideally, we would like to find an approximate inference method which avoids the O(N) scaling for any dataset and any prior by careful design of the inducing features. But this is more generality than we can reasonably expect, and methods are generally restricted in the prior covariance functions they support, which in turn restricts the freedom of modellers. Existing work on zonal kernels on spherical domains and tensor products of Matérn kernels on rectangular subsets of R D give a recipe for taking the O(N) part of the computation outside of the optimisation loop for low dimensional datasets (Dutordoir et al., 2020; Hensman et al., 2017; Cunningham et al., 2023). In this work, we propose integrated variational Fourier features (IFF), which provide the same computational benefits, but for a much broader class of sufficiently regular stationary kernels on R D. 1 We achieve this by allowing for modest numerical approximations in the evaluation of the learning objective and posterior predictives, rather than searching for mathematically exact methods. Yet, in contrast to those previous approaches, we also provide convergence guarantees. This both provides reassurance that the number of features required scales well with the size of the training data, and shows that the numerical approximations do not significantly compromise performance. In Section 2 we review variational GP regression in the conjugate setting, and we review related work in Section 3. In Section 4 we present our IFF method, and the complexity analysis; the main convergence results and guidance for tunable parameter selection follows in Section 4.1. Finally in Section 5 we evaluate our method experimentally, showing significant speedup relative to SGPR in low dimensions, and competitive performance compared to other fast methods, with broader applicability. A summary of our contributions is as follows. - We present a new set of variational features for Gaussian process regression, whose O(M2) memory cost and O(M3) per optimisation step computational cost greatly increases scalability for low dimensional problems compared to standard approaches - demonstrated on large scale regression datasets - and can be applied using a broad class of stationary covariance functions on R D. - We provide converge results demonstrating the number of features required for an arbitrarily good approximation to the log marginal likelihood grows sublinearly for a broad class of covariance functions. - We provide reasonable default choices of parameters in our algorithm, including the number of inducing features M, based on an empirical study and motivated by our theoretical results. ## 2 Background In the conjugate setting, the probabilistic model for Gaussian process regression is $$f\sim{\mathcal{G P}}(0,k)\quad y_{n}=f(x_{n})+\rho_{n}\quad\rho_{n}\sim{\mathcal{N}}(0,\sigma^{2})$$ f *∼ GP*(0, k) yn = f(xn) + ρn ρn ∼ N (0, σ2) (1) for n ∈ {1 : N}, with each xn ∈ R D and ρn, yn ∈ R, with the covariance function or kernel k : R D × R D → R symmetric and positive definite. Let f = [..., f(xn)*, ...*] ⊤, and Kab be the covariance matrix of finite dimensional random variables *a, b*. For example, Kff is the N × N matrix with [Kff]nn′ = k(xn, xn′ ). The posterior predictive at some collection of inputs x∗ and marginal likelihood are as follows, where x = x1:N , y = y1:N , and K∗fis defined analagously to Kff. Then, the posterior predictive distribution and 1We assume the kernel's spectral density has bounded second derivative. $\left(1\right)$. marginal likelihood are as follows (Rasmussen & Williams, 2006, Chapter 2). $$p(f(x_{*})|x,y)={\cal N}(f(x_{*})|K_{\rm rf}(K_{\rm rf}+\sigma^{2}I)^{-1}y,K_{**}-K_{*{\rm f}}(K_{\rm rf}+\sigma^{2}I)^{-1}K_{\rm f*})\tag{2}$$ $$p(y|x)={\cal N}(y|0,K_{\rm rf}+\sigma^{2}I)\quad{\cal L}=\log p(y|x)\tag{3}$$ We optimise the latter with respect to the covariance function's parameters. The data precision matrix A = (Kff + σ 2I) −1, which depends on the value of the hyperparameters, dominates the cost, as for each evaluation of the log marginal likelihood L, we need to compute its log determinant and the quadratic form y ⊤Ay, both of which incur O(N3) computational cost in general. Note that the posterior predictive is the prior process conditioned on y = f + ρ. For the variational approximation, we construct an approximate posterior2q(f) = Rp(f|u)q(u)du ≈ p(f|y) where u = u1:M is a collection of inducing features with prior distribution p(u). That is, we condition on u instead of y and average over an optimised distribution on u. The classic choice is inducing points, where um = f(zm) for some zm ∈ R D. We maximise a lower bound on the log marginal likelihood (DKL is the KL divergence). $$\mathcal{F}=\int q(f)\log\frac{p(y,f,u)}{q(f)}df=\mathcal{L}-D_{KL}(q(f)||p(f))\leq\mathcal{L}\tag{4}$$ doesn to be a linear functional $\phi$ of $f$ (Livero Credilla & Ejewisse Vidal, 2000). More generally, um is chosen to be a linear functional ϕm of f (Lázaro-Gredilla & Figueiras-Vidal, 2009), denoted ⟨ϕm, f⟩ ∈ C with associated parameter zm, in order that u is Gaussian a priori. For features other than inducing points, these are termed *inter-domain* features. Let ϕ ∗ m be such that ⟨ϕ ∗ m, f⟩ = ⟨ϕm, f⟩ ∗ (the complex conjugate) and let K be the covariance operator corresponding to k. That is, [Kϕm](x∗) = ⟨ϕ ∗ m, k(x∗, ·)⟩. Then (Bogachev, 1998, Chapter 2; Lifshits, 2012) $$\begin{array}{c}{{\langle\phi_{m},f\rangle\sim{\mathcal{N}}(0,\langle\phi_{m},{\mathcal{K}}\phi_{m}\rangle)}}\\ {{{\mathbb{E}}[\langle\phi_{m},f\rangle\langle\phi_{m^{\prime}},f\rangle^{*}]=\langle\phi_{m},{\mathcal{K}}\phi_{m^{\prime}}\rangle}}\\ {{{\mathbb{E}}[f(x_{*})\langle\phi_{m},f\rangle]=\langle\phi_{m}^{*},k(X_{*},\cdot)\rangle}}\end{array}$$ and for convenience define $$c_{m}(x_{*})=c(z_{m},x_{*})\stackrel{{\rm def}}{{=}}\langle\phi_{m}^{*},k(x_{*},\cdot)\rangle=c^{*}(x_{*},z_{m})=\langle\phi_{m},k(\cdot,x_{*})\rangle\tag{5}$$ $$\bar{k}_{m,m^{\prime}}=\bar{k}(z_{m},z_{m^{\prime}})\stackrel{{\rm def}}{{=}}\langle\phi_{m},{\cal K}\phi_{m^{\prime}}\rangle=\langle\phi_{m},c(z_{m},\cdot)\rangle=\langle\phi_{m}^{*},c(\cdot,z_{m})\rangle\tag{6}$$ which give the entries of the covariance matrices Kuf and Kuu. With inducing points, c = ¯k = k. But, more generally, p(u) = N (0, Kuu), and the optimal q(u) is available in closed form as $$q(u)\sim{\cal N}(\mu_{u},\Sigma_{u})\quad\mbox{with}\ \Sigma_{u}^{-1}=K_{uu}^{-1}(K_{uu}+\sigma^{-2}K_{uf}K_{uf}^{*})K_{uu}^{-1},\tag{7}$$ $$\mu_{u}=\sigma^{-2}\Sigma_{u}K_{uu}^{-1}K_{uf}y$$ with corresponding training objective (Titsias, 2009) $${\cal F}(\mu_{u},\Sigma_{u})=\log{\cal N}(y|0,\,K^{*}_{uf}K^{-1}_{uu}K_{uf}+\sigma^{2}I)-\frac{1}{2}\sigma^{-2}{\rm tr}(K_{\rm ff}-K^{*}_{uf}K^{-1}_{uu}K_{uf})\tag{8}$$ wherein the structured approximation to the data precision is A′ = (K∗ ufK−1 uu Kuf + σ 2I) −1. However, by exploiting standard linear algebra results (Appendix A), the inverse and log determinant can be isolated to B = (Kuu+σ −2KufK∗ uf ) −1(which is the precision of the appropriately noise-corrupted features u+Kufρ) and K−1 uu , both of which are only M × M. However, in practice, the dominant cost is O(NM2) to form KufK∗ uf , since generally M ≪ N and the cross-covariance matrix depends nonlinearly on the hyperparameters, so must be recalculated each time Burt et al. (2020b). Put differently, the features cm(·) are dependent on the hyperparameters. By choosing the linear functionals carefully, we aspire to find features which do not depend on the hyperparameters, so that KufK∗ uf can be precomputed and stored, reducing the cost to O(M3), without compromising on feature efficiency. 2In a standard minor abuse of notation, we write the distributions over f as densities, though none exist. The posterior predictive at new points x∗ is calculated as $$q(f(x_{*})|x_{*},x,y)=\int p(f(x_{*})|x,z,u)q(u)\,du$$ $$=\mathcal{N}(f(x_{*})|K^{*}_{us}K^{-1}_{us}u_{us},\ K_{*}-K^{*}_{us}K^{-1}_{us}K_{us}+K^{*}_{us}K^{-1}_{us}\Sigma_{us}K^{-1}_{us}K^{*}_{us}).\tag{9}$$ Moreover, high quality posterior samples can be efficiently generated (for example, when the number of inputs in x∗ is very large) by updating a random projection approximation of the prior (for example, using random Fourier features) using samples of the inducing variables (Wilson et al., 2020). We note that the lower bound property of this training objective makes it meaningful: increases in L involve either increasing the marginal likelihood with respect to the hyperparameter, or reducing the KL divergence from the approximate posterior to the true posterior. This KL divergence is between the approximate and true posterior processes, giving reassurance on the quality of posterior predictive distributions (Matthews et al., 2016). Moreover, the inducing values u act as a meaningful summary of the training data which can be used in downstream tasks, for example in order to make fast predictions (Wilson et al., 2020). ## 3 Related Work There are two other main, broadly applicable, approaches to reducing the cost of learning, which are complementary: 1. using iterative methods based on fast matrix-vector multiplications (MVMs) to approximate the linear solve and log determinant, and 2. directly forming a low-rank approximation to the kernel matrix Kff. In the former case, the cost is reduced to O(N2) in exchange for modest error, since only a limited number of steps of the iterative methods are needed to get close to convergence in practice. This is particularly advantageous when performing operations on GPU (Gardner et al., 2018b; Pleiss et al., 2018), and when A has some special structure that permits further reductions - due either to structure in the data or in k (Saatçi, 2011; Cunningham et al., 2008). Direct approximations of Kff include by projections onto the Fourier basis for stationary kernels (Random Fourier Features, RFF (Rahimi & Recht, 2007) and variants (Lázaro-Gredilla et al., 2010; Gal & Turner, 2015), interpolating from regular grid points (stochastic kernel interpolation (Wilson & Nickisch, 2015; Gardner et al., 2018a, SKI)), or projecting onto the highest variance harmonics on compact sets (Solin & Särkkä, 2020). Notably, SKI makes use of fast MVMs with structured matrices to obtain costs which are linear in N for low D. In contrast to the variational approach, these methods tend to approximate the posterior indirectly and the approximations may be qualitatively different to the exact posterior (see, for example, Hensman et al. (2017)). Variational methods can also be viewed as making the low rank approximation K∗ ufK−1 uu Kuf to the kernel matrix, but note that the training objective differs from simply plugging in this approximation to the marginal likelihood, as it has an additional trace term (Equation (8)). Recently, authors have attempted to incorporate nearest neighbour approximations into a variational framework (Tran et al., 2021; Wu et al., 2022). One notable approach which does not fit into these categories is using Kalman filtering: Gaussian process regression can be viewed as solving a linear stochastic differential equation, which has O(N) cost given the linear transition parameters (Särkkä et al., 2013). In practice, if the data does not have additional structure such as regularly spaced inputs, computing these transition parameters will dominate the cost. Finally, by careful design of the prior, we can create classes of covariance function for which inference and learning are computationally cheaper (Cohen et al., 2022; Jø rgensen & Osborne, 2022). However, these are not broadly applicable in the sense that the classes of covariance function (and hence the prior assumptions) are limited, and only suitable to certain applications. Fourier features If we restrict the prior to be stationary, that is k(*x, x*′) = k(x − x ′) = k(τ ), then k has a unique spectral measure. We assume throughout that the spectral measure has a proper density s(ξ) = Rk(τ )e −i2πτ ⊤ξ dτ . Note that according to the convention we use here, RRD s(ξ) dξ = k(0)/(2π) D. A first attempt at hyperparameter-independent features is Fourier features, appropriately normalised by the spectral density: ⟨ϕ1,ξ, f⟩ =Rf(x)e −i2πxξ/s(ξ) dx. These are independent with unbounded variance (Lázaro-Gredilla & Figueiras-Vidal, 2009; Lifshits, 2012, Chapter 3), which can be shown as follows. $$c_{1}(x^{\prime},\xi)=\int k(x^{\prime},x)e^{-2\pi\xi^{\top}x^{\prime}}/s(\xi)\,dx=e^{-i2\pi\xi^{\top}x^{\prime}}\tag{10}$$ $$\bar{k}_{1}(\xi,\xi^{\prime})=\int c(x^{\prime},\xi)e^{-2\pi\xi^{\top}x^{\prime}}/s(\xi^{\prime})\,dx^{\prime}=\int e^{-2\pi x^{\prime\top}(\xi-\xi^{\prime})}/s(\xi^{\prime})\,dx^{\prime}=\delta(\xi-\xi^{\prime})/s(\xi)\tag{11}$$ Here, δ is the Dirac delta. These features are unsuitable for constructing the conditional prior p(f|u) – informally, the prior feature precision K−1 uu vanishes but K∗ uf is finite, so the conditional prior mean K∗ fuK−1 uu u is zero (Figures 1a and 1b). Yet the general form is promising since (i) K∗ fuKfu depends only on the chosen features and the training inputs x, so indeed if we can fix the frequencies to good values, then we can precompute this term outside of the optimisation loop, and reduce the cost of computing A′to O(M3) per step (Appendix A), and also (ii) the features are independent, so Kuu would be diagonal. Modifications to Fourier features include applying a Gaussian window (Lázaro-Gredilla & Figueiras-Vidal, 2009) which gives finite variance but highly co-dependent features, and Variational Orthogonal Features, where ⟨ϕm, f⟩ =R R e i2πξ⊤xψm(ξ)/ps(ξ) dξf(x) dx for pairwise orthogonal ψm. This approach yields independent features, so Kuu is diagonal, but it is challenging to find suitable sets of orthogonal functions. In both of these cases, the cross-covariance cm still depends upon the hyperparameters, and so there is usually little to no computational advantage (Burt et al., 2020a). Variational Fourier Features (VFF) (Hensman et al., 2017) set ⟨ϕm, f⟩ to a reproducing kernel Hilbert space (RKHS) inner product between the harmonics on [a, b], *a < b* ∈ R and f in 1D. Due to limiting the domain to a compact subset, Fourier transforms become discrete - that is, they become generalised Fourier series. Consequently, conditioning on only a finite subset of the frequencies works, and this gives diagonal + lowrank structure in Kuu for lower order one-dimensional Matérn kernels. However, the covariance functions are defined on R×R rather than [a, b]×[*a, b*], so it is not straightforward to evaluate their spectra. Replacing the Euclidean inner product with an RKHS inner product permits to do this for lower order Matérn kernels, but this is not easily extended to other covariance functions, such as the spectral mixture kernel, or products of kernels. Moreover, in higher dimensions it is necessary to use a tensor product of 1D Matérn kernels, which is limiting. Hensman et al. (2017) and Dutordoir et al. (2020) note that using a regular grid of frequencies as in VFF significantly increases cost for D > 1 unecessarily, since features which are high frequency in every dimension are usually very unimportant. However, we demonstrate that it is possible to filter out these features (Section 4). This approach could be generalised by replacing the Fourier basis with some other basis. Then cm is calculated using RKHS inner products with other basis functions, always yielding a hyperparameter independent cm, with sparse matrices if the basis functions have compact support with little overlap. However, the need to calculate the RKHS norm of the basis functions for the elements of Kuu limits these methods to kernels whose RKHS has a convenient explicit characterisation. In practice, this means using tensor products of 1D Matérn kernels. The recent work of Cunningham et al. (2023) is a specific example of this which uses B-splines. Dutordoir et al (2020) used spherical harmonic features for zonal kernels on the sphere, and this can be applied to R D by mapping the data onto a sphere. In this case the inducing features are well defined and independent, and this can be generalised to other compact homogeneous Riemannian manifolds. However, the harmonic expansion of k on the domain must be known; for *isotropic* kernels on R D restricted to the manifold, these can be computed from the spectral density (Solin & Särkkä, 2020). Yet isotropy is too limiting an assumption; one can effectively incorporate different lengthscales in each dimension by learning the mapping onto the sphere, but K˜uf also depends on this mapping, and so the cost returns to O(NM2). We seek a method which can be used with a broader class of covariance functions, but retains the key computational benefits. ![5_image_0.png](5_image_0.png) Figure 1: Illustration of the Integrated Fourier Feature construction. We plot the mean function (dashed), between one and three standard deviations (shaded) and sample functions in both the data and frequency domains for a squared exponential kernel with unit lengthscale. The sample functions in the data and frequency domains correspond to one another. (a) The prior's Fourier transform is white Gaussian noise whose variance is given by the spectral density. (b) We cannot condition meaningfully on come finite collection of frequencies (red stars), as this gives no information about the other frequencies - the conditional prior p(f/u) in the data domain is unchanged. (c) We show only the inducing values in the frequency domain, which are averages of the surrounding region. The conditional prior is now meaningful, and the residual uncertainty is due to high frequency content not included in the features. Choosing z It is well known that optimising inducing inputs is usually not worth the extra computational cost compared to a good initialisation. We briefly review the initialisation methods for different features described above. For inducing points, Burt et al. (2020b) show that sampling from a k-DPP (determinental point process; with k = M, and the kernel used in the DPP is the same as the GP prior's) performs well, both in theory and in practice. Since that initialisation is hyperparameter-dependent, they alternate between optimising the hyperparameters and sampling z in a variational expectation maximisation (EM) approach. For VFF as described by Hensman et al. (2017), a rectangular grid of regularly spaced frequencies must be used, which they select (optimally in 1D) to be centred around the origin. In higher dimensions, the regular grid leads to including suboptimal frequencies in the corners of the grid. A construction which leads to a more feature efficient set of frequencies is the following. Create a rectangular grid, and then discard the features which are not within a given ellipsoid, where the ellipsoid's axes should be chosen to the proportional to the bandwidth of the spectral density in that dimension (which is inversely proportional to the lengthscale). This corresponds to discarding the corresponding rows in Kuf, and the corresponding row and column in Kuu. For spherical harmonics, the optimal choice is to use the frequencies which have the highest variance. For many covariance functions (for example, those constructed from monotonically decreasing stationary kernels on R D using the method of Solin & Särkkä (2020)) this corresponds to choosing the first M frequencies. For B-spline features, a grid of regularly spaced basis functions which covers a rectangular domain containing the data is used, and the sparsity of the matrices is used to make the method efficient - features which are not strongly correlated with the data also contribute less to the computational cost. ## 4 Integrated Fourier Features We are not able, in general, to integrate out the Dirac delta in Equation (11) and retain the desirable computational properties without introducing further approximations. We propose to average Fourier features over *disjoint* intervals of width ε (Figure 1c), and approximate the spectral density as constant over the integration width. We focus on D = 1 to lighten the presentation here; to enforce that the intervals are disjoint we require |zm − zm′ | ≥ ε for any m ̸= m′. ⟨ϕ2,m, f⟩ def = ε −1 Z zm+ε/2 zm−ε/2 Zf(x)e −i2πξx dx/s(ξ) dξ = ε −1 Z zm+ε/2 zm−ε/2 ⟨ϕ1,ξ, f⟩ dξ c2(x ′, zm) = ε −1 Z zm+ε/2 zm−ε/2 c1(x ′, ξ) dξ ≈ e −i2πzmx ′(12) ¯k2(zm, zm′ ) = ε −2 Z zm+ε/2 zm−ε/2 Z zm′+ε/2 zm′−ε/2 ¯k1(ξ, ξ′) dξ′dξ ≈ ε −1δm−m′/s(zm) (13) Now, δ is the Kronecker delta. Note that if the intervals were not chosen to be disjoint then only the last line would change. The advantage of choosing disjoint intervals is to make Kuu diagonal, which will simplify later analysis, as well as modestly reducing the computational cost. But recall that the inversion and log determinant of B is still required, and this matrix remains dense. ## 4.1 Convergence Now, if we calculate covariance matrices using the numerical approximation detailed above, and use this to evaluate the collapsed variational objective of Equation (8), we no longer have a proper variational objective in the sense of Equation (8). In order to distinguish the proper objective from our approximation, we introduce the notation F for the approximate objective. In this section we show that the approximation F converges to L at a reasonable rate as M → ∞, for a broad class of covariance functions, showing that these features are efficient and produce good approximations for the purposes of hyperparameter optimisation. Since we no longer have the interpretation of reducing the KL between posterior processes, we provide additional results to give reassurance about the quality of the approximate posterior predictive distribution. Firstly, we subtly transform the features to simplify the analysis. Note that applying an invertible linear transformation T to the features has no impact on inference or learning. That is, if we transform u ′ with mean and covariance µu′ , Σu′ to u = T u′, then Kuf = E[uf(x) ⊤] = E[T u′f(x) ⊤] = TKu′f, and similarly Kuu = TKu′u′T ∗. Then from Equation (7) it follows that if we optimise after transforming, the optimal Σ −1 u = T −∗Σ −1 u′ T −1 and the optimal µu = T µu′ , as would be expected from optimising before transforming. Furthermore the collapsed objective of Equation (8) and posterior predictive mean and covariance of Equation (9) are left unchanged. For the analysis, instead of normalising by the spectral density, we normalise by its square root. This has the advantage that we do not need any approximation for ¯k, only for c. This simplifies later calculations. ⟨ϕ3,m, f⟩ def = ε −1 Z zm+ε/2 zm−ε/2 Zf(x)e −i2πξx dx/ps(ξ) dξ = ε −1 Z zm+ε/2 zm−ε/2 ⟨ϕ1,ξ, f⟩ps(ξ) dξ cm(x ′) = c3(x ′, zm) = ε −1 Z zm+ε/2 zm−ε/2 c1(x ′, ξ)ps(ξ) dξ ≈ps(zm)e −i2πzmx ′= ˆcm(x ′) ¯km,m′ = ¯k3(zm, zm′ ) = ε −2 Z zm+ε/2 zm−ε/2 Z zm′+ε/2 zm′−ε/2 ¯k1(ξ, ξ′)s(ξ) dξ′dξ = ε −1δm−m′ We proceed without the subscript 3 hereafter for brevity. The new approximate features are a straightforward invertible linear transformation of the previous features (in particular, uˆm = ˆu3,m =ps(zm)ˆu2,m). We analyse how well uˆ approximates u, but in practice we use a real-valued version of the equivalent approximate features uˆ2 in order to have cˆ independent of the hyperparameters (Appendix B). We defer the details of the proofs, particularly for higher dimensional inputs, to Appendix C. For convergence of the objectives, we use the following result of Burt et al. (2019). $$\mathbb{E}_{y}[D_{K L}(q(f)\mid\mid p(f|y))]=\mathbb{E}_{y}[\mathcal{L}-\mathcal{F}]\leq\frac{t}{\sigma^{2}}$$ $$t=\operatorname{tr}(K_{\mathrm{ff}}-\underbrace{K_{\mathrm{uf}}^{*}K_{\mathrm{uu}}^{-1}K_{\mathrm{uf}}}_{Q_{\mathrm{ff}}})$$ $$(1{\mathrm{f}}{\mathrm{o}})$$ where y is distributed according to Equation (1). The first equality follows for any well-defined inducing features (Matthews et al., 2016)–that is, those where the inducing features can be a priori described as a linear functional of the prior process. We now detail the additional assumptions. A1 Let s˜ = s/(vσ2) be the normalised spectral density, which is assumed to exist, with v = k(*x, x*)/σ2. We assume that the normalised spectral density has a tail bound $$\int_{\rho}^{\infty}\tilde{s}(\xi)\,d\xi\leq\beta\rho^{-q}\tag{1}$$ for any ρ > 0 and some *β, q >* 0. A2 The second derivative of s is bounded. A3 The first derivative has a relative bound $$\left|{\frac{d s(\xi)}{d\xi}}\right|\leq2L s(\xi)\implies\left|{\frac{d{\sqrt{s(\xi)}}}{d\xi}}\right|\leq L{\sqrt{s(\xi)}}$$ $$(17)$$ for some L > 0, where the implication follows wherever s(ξ) > 0. For example, this is satisfied if s and its first derivative are bounded everywhere, which is the case for widely used covariance functions. A4 The frequencies are chosen according to the regular grid zm = (m − 1/2)ε − M/2 for M even. This requirement can be relaxed in practice. The key requirement is that Sm[zm − ε/2, zm + ε/2] → R; ε could also be varied as a function of m, with the convergence rate dominated by the largest. The higher dimensional generalisation of these are the standard assumptions. We use the following simple result, proved in the supplement. Lemma 4.1. *Under assumptiona A3 and A4,* $$c_{m}(x)=\hat{c}_{m}(x)(1+O(\varepsilon))$$ Theorem 4.2. For y *sampled according to Equation* (1), under assumptions A1 to A4, for any ∆*, δ >* 0 there exists M0, α0 > 0 *such that* $$\mathbb{P}\left[{\frac{{\mathcal{L}}-{\mathfrak{F}}}{N}}\geq{\frac{\Delta}{N}}\right]\leq\delta$$ *for all $M\geq M_0$, and with* $\alpha$ = 1. $$M\leq\left({\frac{\alpha_{0}}{\Delta\delta}}N\right)^{\frac{q+3}{2q}}$$ Moreover, there exists M1, α1 > 0 such that for all *M > M*1, $$\mathbb{P}\left[{\frac{{\mathcal{L}}-{\mathcal{F}}}{N}}\geq{\frac{\Delta}{N}}\right]\leq\delta$$ for all M ≥ M1*, and with* $$M\leq\left({\frac{\alpha_{1}}{\Delta\delta}}N\right)^{\frac{q+3}{2q}}$$ Proof. We sketch the 1D case here. Let tˆ = tr(Kff − Kˆ ∗ ufK−1 uu Kˆuf) = tr(Kff − Qˆff). First we show that t/Nσ ˆ 2 ∈ O(M−2q/(q+3)). $$\frac{\hat{t}}{N\sigma^{2}}=\frac{1}{N\sigma^{2}}\sum_{n}\left(\underbrace{k(x_{n},x_{n})}_{v\sigma^{2}}-\varepsilon\sum_{m}\hat{c}_{m}(x_{n})\hat{c}_{m}^{*}(x_{n})\right)$$ $$=v\left(1-\varepsilon\sum_{m}\hat{s}(z_{m})\right)$$ $$=v\left(1-\int_{-(M+1)\varepsilon/2}^{(M+1)\varepsilon/2}\tilde{s}(\xi)\,d\xi\right)+vE_{1}$$ $$=2v\int_{M\varepsilon/2}^{\infty}\tilde{s}(\xi)\,d\xi+vE_{1}$$ (18) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (19) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (20) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (21) ... where E1 =R (M+1)ε/2 −(M+1)ε/2 s˜(ξ) − εPm s˜(zm) + O(Mε3). The integral term in the last line is in O((Mε) −q) by assumption A1, and E1 ∈ O(Mε3) from standard bounds on the error of the midpoint approximation (using A2). We must have that ε → 0 as M → ∞ to make the midpoint approximation exact, yet we must have Mε → ∞ to ensure the features cover all frequencies. By optimising the trade-off, we get the stated bound. To complete the proof, for t we could immediately apply Equation (14). But for tˆ, we adapt the result of Equation (14) to show Ey[|L − F|/N] ≤ *t/Nσ* ˆ 2for sufficiently large M, using Lemma 4.1 (assumptiona A3, A4). By applying Markov's inequality, we complete the proof. Remark 4.3. The case of subgaussian spectral densities (such as for the squared exponential covariance function) is q → ∞, which yields M ∈ O( √N). Note that this is due to the O(Mε3) terms which arise due to approximating the spectral density as constant. Intuitively, it appears that if there were no numerical approximation, the cost would be dominated by the amount of spectrum in the tails, such that convergence for subgaussian tails would be possible with M ∈ O(log N) for sufficiently small ε, as with inducing points or eigenfunctions with the squared exponential covariance function Burt et al. (2019). Though this demonstrates that the features are suitable for learning, we may wish to use the same features in making predictions. We no longer can use the bound in the process KL, but we show that the posterior predictive marginals converge with comparable or better rate than the objective for any choice of variational distribution. Theorem 4.4. For the optimised µu, Σu (to maximise F*), let the posterior predictive at any test point* x∗ using the exact features u have mean and variance µ, Σ, and with the approximate features uˆ *have mean and* variance µ, ˆ Σˆ*. Then, under assumptions A3 and A4,* $$\begin{array}{l}{{|\mu-\hat{\mu}|\in O(M\varepsilon^{2})}}\\ {{|\Sigma-\hat{\Sigma}|\in O(M^{2}\varepsilon^{3})}}\end{array}$$ $$(22)$$ $$(23)$$ ``` In particular, allowing ε ∼ M − q+1 (q+3) as in the proof of Theorem 4.2 (see Appendix C), we have ``` $$\begin{array}{l}{{|\mu-\hat{\mu}|\in O\left(M^{-\frac{q-1}{q+3}}\right)}}\\ {{|\Sigma-\hat{\Sigma}|\in O\left(M-\frac{q-3}{q+3}\right)}}\end{array}$$ Proof. Use the definitions in Equation (7) and apply Lemma 4.1 and the triangle equality. ## 4.2 Higher Dimensions In higher dimensions, we modify the assumptions as follows. A1 Assume that ˜k's spectral measure admits a density, and denote this by s˜ and assume that the density admits a tail boundZ ∞ $$\int_{\rho}^{\infty}\cdots\int_{\rho}^{\infty}\tilde{s}(\xi)\,d\xi_{1}\ldots d\xi_{D}\leq\beta\rho^{-qD}\tag{1}$$ $$(24)$$ for any ρ > 0 and some *β, q >* 0. A2 The spectral density's second derivative is bounded. A3 The spectral density's first derivatives are bounded as $$\frac{\partial s(\xi)}{\partial\xi_{d}}\leq2Ls(\xi)\implies\frac{\partial\sqrt{s(\xi)}}{\partial\xi_{d}}\leq L\sqrt{s(\xi)}$$ $$(25)$$ for some L > 0 (where the second expression follows wherever s(ξ) > 0). For example, this would be satisfied if the spectral density and its first derivative are both bounded everywhere, which includes widely used covariance functions. A4 Let the inducing frequencies be an analogous regular grid in higher dimensions, with M1/D ∈ Z even. That is, for a multi-index m1:D, $$[z_{m_{1:D}}]_{d}=(-(M^{1/D}+1)/2+m_{d})\varepsilon\quad\mbox{for$d\in\{1:D\}$.}\tag{1}$$ Then the main result (Theorem 4.2) and the predictive bounds in terms of q in Theorem 4.4 are unchanged. See Appendix C for details. Assumption A4 requires a full regular grid of features, which means the number of features increases exponentially in D. In common with previous work (Hensman et al., 2017; Cunningham et al., 2023) this can $$(2{\mathfrak{f}}{\mathfrak{h}})$$ ![10_image_0.png](10_image_0.png) Figure 2: Gap between the log marginal likelihood and the training objective (L − F) for different settings of M, ε for data sampled from a GP with a Gaussian (left) or Matérn-3/2 (left) kernel. In each case the hyperparameters are set to their groundtruth values, where the lengthscale is λ. The inputs are samples from a uniform distribution centred on 0 and with width Wx. The horizontal line is at 0.95. be avoided if the covariance function can be assumed to be additive over dimensions. Otherwise, we can improve the computational cost by using a subset of the grid points where the spectral density is highest. Although this improves the cost in higher dimensions, the scaling is in general still exponential in D. This limits IFF's applicability to lower dimensional spatial or spatiotemporal modelling applications. ## 4.3 Choosing The Approximation Parameters Choosing M These need to be selected whenever using SGPR. In IFF, the faster s decays, the lower M we should need for a good approximation to the log marginal likelihood (Theorem 4.2) and Figure 2 shows that M need not be very large (in each dimension) to get good performance; we need Mε at around the approximate bandwidth of the covariance function, which increases as the lengthscale λ reduces. If we know a priori how small the lengthscale might become - for example by examining the Fourier transform of the data, from prior knowledge, or by training a model on small random subsets of the data - then we can use this to select M. Note that if the lengthscale is shorter, we would expect M to need to be larger for SGPR also, as we would need inducing points to be placed closer together. In practice, as with all SGPR methods, M controls the trade-off between computational resources and performance, and can be set as large as needed to get satisfactory performance for the applicaiton, or as large as the available resources allow. Choosing z Given a fixed *ε, M*, the optimal choice is to choose frequencies spaced by ε in the regions of highest spectral density, comparable to spherical harmonics. For monotonically decreasing spectral densities maximised at the origin, such as the Gaussian or Matérn kernels, this corresponds exactly to our refined construction for VFF in Section 3, which involved choosing grid points which were contained within an ellipsoid whose axes were inversely proportional to the lengthscale in each dimension. To avoid dependence on the hyperparameters (which would undo the benefits of being able to precompute A′), we opt in practice for a spherical threshold. Choosing ε When adding more features, we can either cover a higher proportion of the prior spectral density or reduce ε. In the full proof of Theorem 4.2 (Appendix C), as the spectral density gets heavier tailed, ε approaches O(M−1). Then Mε is almost constant, which suggests this part tends to dominate. That is, once Mε is sufficiently large, we should add more features by reducing ε rather than increasing M. However, we explore the trade-off between increasing Mε and decreasing ε numerically (Figure 2) by plotting the gap between the IFF bound and the log marginal likelihood, varying the bandwidth covered and the size of ε relative to the inverse of the data width (Wx ≈ maxn xn − minn xn). The model and data generating process use a kernel with lengthscale λ. We see that in practice, as long as ε −1is below the inverse of the data width, the gap is not very sensitive to its value. Thus for the experiments, we conservatively set εd = 0.95/(maxn,d xnd − minn,d xnd). This result appears to be in tension with the theory. However, we note that in the proof we are effectively interested in producing a good approximation of the function across the whole domain (we make no assumptions about the input locations). But in Figure 2, we explicitly take into account where the data is. Intuitively, our construction involves approximating the function with regularly sampled frequencies. Then it should be possible to construct a good approximation to a function on an interval of width Wx as long as the sample spacing is no more than 1/Wx, as a Fourier dual to the classical Nyquist-Shannon sampling theorem (see, for example, Chapter 5, Vetterli et al., 2012) Other covariance functions We have so far assumed that the spectral density is available in closed form. However, we only need regularly spaced point evaluations of the spectral density, for which it suffices to evaluate the discrete Fourier transform of regularly spaced evaluations of the covariance function. This adds, at worst, O(M2) computation to each step. ## 4.4 Limitations IFF can be used for faster learning for large datasets in low dimensions, which matches our target applications. Typically, it will perform poorly for D ⪆ 4, and both in this case and for low N, we expect SGPR to outperform all alternatives, including IFF, and our analysis and evaluation are limited to the conjugate setting. IFF is limited to stationary priors; while these are the most commonly used, they are not appropriate for many spatial regression tasks of interest, and a fast method for meaningful non-stationary priors would be a beneficial extension. Amongst those stationary priors, we require that the spectral density is sufficiently regular; this is satisfied for many commonly used covariance functions, but not for periodic covariance functions, where the spectral measure is discrete. While IFF can be used with popular covariance functions for modelling quasi-periodic functions (such as a product of squared exponential and periodic covariance functions, or the spectral mixture kernel) if the data has strong periodic components, the maximium marginal likelihood parameters will cause the spectral density to collapse towards a discrete measure (for example, learning very large lengthscales). Finally, we have left some small gaps between theory and practice, both in how to select the tunable parameter ε, and in characterising the quality of the posterior predictive distribution. ## 5 Experiments We seek to show that IFF gives a significant speedup for large datasets in low dimensions, with a particular focus on spatial modelling. Amongst other fast sparse methods, we compare against VFF and B-Spline features. For spherical harmonics, learning independent lengthscales for each dimension is incompatible with precomputation. In any case, we found that we were unable to successfully learn reasonable hyperparameters with that method in our setting, except if the number of feature was very small. For a conventional (no pre-compute) sparse baseline, we use inducing points sampled according to the scheme of Burt et al. (2020a). For our synthetic experiments, we also used inducing points initialised using k-means and kept fixed. For the real-world spatial datasets, we also tested SKI, due to its reputation for fast performance, and its fairly robust implementation. ![12_image_0.png](12_image_0.png) Figure 3: Comparing standard sparse Gaussian process regression (black) to IFF (red) for data generated from a prior with Gaussian covariance function in 1D (left) and 2D (right). Lower and the to the left is better. The groundtruth L is L evaluated at the groundtruth hyperparameters, whereas in the other rows, L is evaluated at the learnt hyperparameters. The gaps are normalised by N, and execution time is normalised by the longest. The bottom row shows feature efficiency, whereas the upper rows show computational efficiency. ![13_image_0.png](13_image_0.png) Figure 4: As Figure 3, but with the data sampled from a Matérn-5/2 GP. The picture is broadly comparable, but VFF now more closely matches the prior, so the drop in feature efficiency is far less in higher dimensions. ## 5.1 Synthetic Datasets First we consider a synthetic setting where the assumptions of Theorem 4.2 hold. We sample from a GP with a Gaussian covariance function, and compare the speed of variational methods in 1 and 2 dimensions. We use a small (N = 10 000) dataset in order that we can easily evaluate the log marginal likelihood at the learnt hyperparameters. Where possible, we use the same (squared exponential) model for learning; for VFF, we use a Matérn-5/2 kernel in 1D, and a tensor product of Matérn-5/2 covariance functions, since this is the best approximation to a Gaussian kernel which is supported. Further details are in Appendix D. Additionally, in the 2D setting, we use both the naive set of features (a regular, rectangular grid), and the refined set of features described in Section 3. IFF generally has slightly lower gap to the marginal likelihood at the learnt optimum for any M than other fast variational methods (Figure 3, bottom row), but because the O(NM2) work is done only once, it and the other fast sparse methods are much faster to run than inducing points (Figure 3, top two rows). Note the logarithmic time scale on the plots: for a specified threshold on the gap to the marginal likelihood, IFF is often around 30 times faster than using inducing points. The experiments demonstrate the issues with the limited choice of prior with methods such as VFF. In 1D, the Matérn-5/2 kernel is a good approximation to Gaussian, so the performance is similar to IFF. But for 2D, the product model is a much worse approximation of the groundtruth model. We expect to see a similar pattern for B-spline features, but we were unable to run the method in our synthetic setting due to unresolved issues in the implementation of Cunningham et al. (2023) which did not arise in the real-world experiments. When we reproduce the same experiment, but with data sampled from a Matérn-5/2 GP, the results are similar, but with a much smaller gap between VFF and the other methods in the 2D case (Figure 4). We note that in the 1D setting, when the data is sampled from a prior with Gaussian covariance function, the k-DPP method gets stuck at a local optimum, which is a known risk with that method. We did not find this to be an issue in any of the other experiments. The refined feature set for higher dimensional IFF and VFF (solid lines) is indeed slightly more feature efficient than the naive approach (dashed lines; Figure 3, bottom right panel). But in practice, we find that the computational overhead to selecting better features outweighs the savings from using fewer features for VFF, thought not for IFF (Figure 3, upper and middle right panels). ## 5.2 Real World Datasets We now compare training objective and test performance on three real-world spatial modelling datasets of increasing size, and of practical interest. We plot the root mean squared error (RMSE) and negative log predictive density (NLPD) on the test set along with the training objective and run time in Figures 5 and 6 using five uniformly random 80/20 train/test splits. For inducing points, we always use the method of Burt et al. (2020b). The time plotted is normalised per split against inducing points. Further training and dataset details are in Appendix D. We add SKI to the comparison here, since it is the most widely used alternative to variational methods in this setting. For the variational methods, both the time and performance are implicitly controlled by the number of features M; as M is increased the performance improves and the time increases, that is we move along the curve to the right. This is a very useful property, since we can select M according to our available computational budget and be fairly confident of maximising performance. With SKI, the equivalent parameter is the grid size, and similarly increasing the grid size generally improves performance. However, when the grid size is low, optimisation can take longer, so for example for the temperature dataset, we move to the left as the grid size is increased (Figure 6; top row). For the other datasets, we only plot the SKI with the grid size automatically selected by the reference implementation. SKI generally has very good predictive means, leading to low RMSE, and is very fast, but the predictive variances are poor, generally being too low and producing some negative values. Ignoring the negative ![15_image_0.png](15_image_0.png) Figure 5: Performance curves for real world datasets of increasing size (the top row is the smallest). Lower and to the left is better. ![16_image_0.png](16_image_0.png) Figure 6: As Figure 5, but with SKI included. SKI exhibits very favourable and fast predictive mean performance, but its predictive variances are poor, leading to very large NLPD. values, the NLPD is much worse for SKI than for the variational methods, which is highly undesirable in a probabilistic method. We exclude SKI in Figure 5 in order to zoom in on the curves for the variational methods. We are interested in the regime where M ≪ N; as we move to the right and M is similar to N, inducing points will become competitive with the faster methods, since the O(M3) cost dominates. Comparing IFF to VFF, we see that always performs at least as well, and produces a substantially better performance for a given time on the temperature and precipitation datasets, due to a more flexible choice of covariance function - in particular, note that the training objective of IFF is substantially lower than VFF on these datasets, but comparable to that of inducing points, which uses the same covariance function. The B-spline features are also limited in choice of covariance function, but the sparse structure of the covariance matrix leads to a much better performance. Nonetheless, despite involving a dense matrix inverse, IFF has a significant advantage on the precipitation dataset (around 25-30% faster on average), and is comparable on the houseprice dataset. Compared to the idealised, synthetic, setting, inducing points become very competetive in the higher resource setting towards the right hand side of each plot, particularly for the smallest dataset (temperature). But for more performance thresholds, we find that fast variational methods offer a substantial improvement, with typically more than a factor of two speedup. ## 6 Conclusions Integrated Fourier features offer a promising method for fast Gaussian process regression for large datasets. There are significant cost savings since the O(N) part of the computation can be done outside of the loop, yet they support a broad class of stationary priors. Crucially, they are also much easier to analyse than previous work, allowing for convergence guarantees and clear insight into how to choose parameters. They are immediately applicable to challenging spatial regression tasks, but a significant limitation is the need to increase M exponentially in D. Further methods to exploit structure in data for spatiotemporal modelling tasks (D = 3 or 4) is an important line of further work. More broadly, an interesting direction is to consider alternatives to the Fourier basis which can achieve similar results for non-stationary covariance functions, which are crucial for achieving state of the art performance in many applications. Finally, a worthwhile direction for increased practical use would be to develop suitable features to quasi-periodic priors such as those arising from the spectral mixture kernel, or a product of periodic and Gaussian kernels. ## References Vladimir I Bogachev. *Gaussian Measures*. American Mathematical Society, 1998. ISBN 978-0-8218-1054-5. David Burt, Carl Edward Rasmussen, and Mark van der Wilk. Rates of convergence for sparse variational Gaussian process regression. In 36th *International Conference on Machine Learning (ICML)*, 2019. David R Burt, Carl Edward Rasmussen, and Mark van der Wilk. Variational orthogonal features, 2020a. URL https://arxiv.org/abs/2006.13170. David R Burt, Carl Edward Rasmussen, and Mark van der Wilk. Convergence of sparse variational inference in Gaussian processes regression. *Journal of Machine Learning Research (JMLR)*, 21(131):1–63, 2020b. URL http://jmlr.org/papers/v21/19-1015.html. Michael K Cohen, Samuel Daulton, and Michael A Osborne. Log-linear-time gaussian processes using binary tree kernels. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 8118–8129. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ 359ddb9caccb4c54cc915dceeacf4892-Paper-Conference.pdf. Harry Jake Cunningham, Daniel Augusto de Souza, So Takao, Mark van der Wilk, and Marc Peter Deisenroth. Actually sparse variational gaussian processes. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent (eds.), *Proceedings of The 26th International Conference on Artificial Intelligence and Statistics*, volume 206 of *Proceedings of Machine Learning Research*, pp. 10395–10408. PMLR, 25–27 Apr 2023. URL https://proceedings.mlr.press/v206/cunningham23a.html. John P Cunningham, Krishna V Shenoy, and Maneesh Sahani. Fast Gaussian process methods for point process intensity estimation. In 25th *International Conference on Machine Learning (ICML)*, pp. 192–199, 2008. Vincent Dutordoir, Nicolas Durrande, and James Hensman. Sparse Gaussian processes with spherical harmonic features. In 37th *International Conference on Machine Learning (ICML)*, 2020. Yarin Gal and Richard Turner. Improving the Gaussian process sparse spectrum approximation by representing uncertainty in frequency inputs. In 32nd *International Conference on Machine Learning (ICML)*, 2015. Jacob Gardner, Geoff Pleiss, Ruihan Wu, Kilian Weinberger, and Andrew Gordon Wilson. Product kernel interpolation for scalable Gaussian processes. In 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018a. Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration, 2018b. URL https://arxiv. org/abs/1809.11165. Roman Garnett. *Bayesian Optimization*. Cambridge University Press, 2023. Philipp Hennig, Michael A Osborne, and Hans P Kersting. Probabilistic Numerics: Computation as Machine Learning. Cambridge University Press, 2022. doi: 10.1017/9781316681411. James Hensman, Alexander G de G Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In 18th *International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2015. James Hensman, Nicolas Durrande, and Arno Solin. Variational Fourier features for Gaussian processes. Journal of Machine Learning Research, 2017. Martin Jø rgensen and Michael A Osborne. Bezier gaussian processes for tall and wide data. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 24354–24366. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ 99c80ceb10cb674110f03b2def6a5b76-Paper-Conference.pdf. Miguel Lázaro-Gredilla and Aníbal Figueiras-Vidal. Inter-domain Gaussian processes for sparse inference using inducing features. In 26th *Conference on Neural Information Processing Systems (NeurIPS)*, 2009. Miguel Lázaro-Gredilla, Joaquin Quinonero-Candela, Carl Edward Rasmussen, and Aníbal R FigueirasVidal. Sparse spectrum gaussian process regression. *The Journal of Machine Learning Research (JMLR)*, 11:1865–1881, 2010. Mikhail A Lifshits. *Lectures on Gaussian Processes*. Springer, 2012. ISBN 978-3-642-24938-9. Alexander G de G Matthews, James Hensman, Richard E Turner, and Zoubin Ghahramani. On sparse variational methods and the kullback-leibler divergence between stochastic processes. In Arthur Gretton and Christian C. Robert (eds.), *Proceedings of the 19th International Conference on Artificial Intelligence* and Statistics, volume 51 of *Proceedings of Machine Learning Research*, pp. 231–239, Cadiz, Spain, 09–11 May 2016. PMLR. URL https://proceedings.mlr.press/v51/matthews16.html. Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke. Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. *Journal of Machine Learning Research (JMLR)*, 2017. Geoff Pleiss, Jacob Gardner, Kilian Weinberger, and Andrew Gordon Wilson. Constant-time predictive distributions for Gaussian processes. In 35th *International Conference on Machine Learning (ICML)*, 2018. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In 24th *Neural Information* Processing Systems (NeurIPS), 2007. Carl Edward Rasmussen and Christopher K I Williams. *Gaussian Processes for Machine Learning*. MIT Press, 2006. ISBN 978-0-262-18253-9. Yunus Saatçi. *Scalable Inference for Structured Gaussian Process Models*. PhD thesis, University of Cambridge, 2011. Simo Särkkä, Arno Solin, and Jouni Hartikainen. Spatiotemporal learning via infinite-dimensional Bayesian filtering and smoothing: A look at Gaussian process regression through Kalman filtering. *IEEE Signal* Processing Magazine, 2013. Arno Solin and Simo Särkkä. Hilbert space methods for reduced-rank Gaussian process regression. *Statistics* and Computing, 2020. Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In 12th *International* Conference on Artificial Intelligence and Statistics (AISTATS), 2009. Gia-Lac Tran, Dimitrios Milios, Pietro Michiardi, and Maurizio Filippone. Sparse within sparse Gaussian processes using neighbor information. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 10369–10378. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/tran21a.html. Martin Vetterli, Jelena Kovačević, and Vivek K Goyal. Foundations of signal processing. 2012. Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In 32nd *International Conference on Machine Learning (ICML)*, 2015. James T Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc P Deisenroth. Efficiently sampling functions from Gaussian process posteriors. In 37th *International Conference on* Machine Learning (ICML), 2020. Luhuan Wu, Geoff Pleiss, and John P Cunningham. Variational nearest neighbor Gaussian process. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of* Machine Learning Research, pp. 24114–24130. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr. press/v162/wu22h.html. ## A Computation Recall the collapsed objective. $$\mathcal{F}(\mu_{n},\Sigma_{n})=\log\mathcal{N}(y|0,\ K_{n}^{*}K_{n1}^{-1}K_{n\mathrm{f}}+\sigma^{2}I)-\frac{1}{2}\sigma^{-2}\mathrm{tr}(K_{\mathrm{f}\mathrm{f}}-K_{n\mathrm{f}}^{*}K_{n1}^{-1}K_{n\mathrm{f}})$$ $$=-\frac{1}{2}\log|K_{n\mathrm{f}}^{*}K_{n\mathrm{f}}^{-1}K_{n\mathrm{f}}+\sigma^{2}I|-\frac{1}{2}\overline{\sigma}^{\,\mathrm{V}}(K_{n\mathrm{f}}^{*}K_{n\mathrm{f}}^{-1}K_{n\mathrm{f}}+\sigma^{2}I)^{-1}y-\frac{1}{2}\sigma^{-2}\mathrm{tr}(K_{\mathrm{f}\mathrm{f}}-K_{n\mathrm{f}}^{*}K_{n\mathrm{f}}^{-1}K_{n\mathrm{f}})\tag{28}$$ With y¯ = Kufy,Pn y 2 n = ν 2, we apply the Woodbury identity to the inverse in the quadratic form. $$(K_{u\bar{t}}^{*}K_{u\bar{u}}^{-1}K_{u\bar{t}}+\sigma^{2}I)^{-1}=\sigma^{-2}I-\sigma^{-4}K_{u\bar{t}}^{*}(K_{u u}+\sigma^{-2}K_{u\bar{t}}K_{u\bar{t}}^{*})^{-1}K_{u\bar{t}}$$ $\Longrightarrow\,y^{\top}(K_{u\bar{t}}^{*}K_{u\bar{t}}^{-1}K_{u\bar{t}}+\sigma^{2}I)^{-1}y=\sigma^{-2}\nu^{2}-\sigma^{-4}\bar{y}^{\top}(K_{u u}+\sigma^{-2}K_{u\bar{t}}K_{u\bar{t}}^{*})^{-1}\bar{y}$ For the log determinant, we can use the matrix determinant lemma. $$|\sigma^{2}I+K_{u\bar{f}}^{*}K_{u\bar{u}}^{-1}K_{u\bar{f}}|=|K_{u u}+\sigma^{-2}K_{u\bar{f}}^{*}K_{u\bar{f}}||K_{u u}|^{-1}|\sigma^{2}I_{n}|\,,$$ Finally, we write down the trace directly using the fact that Kuu is diagonal. $$\mathrm{tr}(K_{\mathrm{ff}}-K_{u\mathrm{f}}^{*}K_{u u}^{-1}K_{u\mathrm{f}})=\sum_{n}\left(k(x_{n},x_{n})-\sum_{m}|c_{m}(x_{n})|^{2}/\bar{k}_{m m}\right)$$ We can combine the above to get an easy to evaluate expression for F, and replacing the inter-domain cross covariance matrices with their numerical approximations, we can evaluate the IFF objective F. Notably, ν 2 ∈ R≥0 depends only on y, so can be precomputed and stored with only O(N) cost, and y¯ ∈ RM depends only on *x, y* and z, so can also be precomputed and stored with O(NM) cost. Similarly, the matrix KufK∗ uf can be precomputed with O(NM2) cost. For large N, we split the data into chunks of 10 000 to save memory in this precompute stage. During optimisation, each calculation of this objective is reduced to the O(M3) cost associated with the inverse and log determinant calculations, which we perform cheaply after first performing the Cholesky decomposition. The prediction equation is $$q(f(x_{*})|x_{*},x,y)=\int p(f(x_{*})|x,z,u)q(u)du=\mathcal{N}(f(x_{*})|K_{u*}^{*}K_{u*}^{-1}\mu_{u},K_{*u}-K_{u*}^{*}K_{u*}^{-1}K_{u*}+K_{u*}^{*}K_{u*}^{-1}\Sigma_{u}K_{u*}^{-1}K_{*u}),\tag{29}$$ where $*$ stands for $x_{*}$ in the subscripts. This is just the sparse, inter-domain version of Equation (2), and µu, Σu are given in Equation (7). ## B Real-Valued Features As noted in Section 4.1, applying an invertible linear transformation to the features does not change inference or learning. To simplify the presentation, and generalise the result, we change notation slightly from the main text. Let the complex valued featured be u ′ m1*,....,m*D which is the feature corresponding to the frequency zm1*,...,m*D , and let z be antisymmetric in every axis (that is, z−m1,m2,...,mD = −zm1,m2*,...,m*D , etc) and require md ̸= 0 for each d. For example, if we use a regular grid, with M1/D an integer, $$z_{m_{1},\dots,m_{D}}=\begin{bmatrix}\varepsilon/2+m_{1}\varepsilon-M^{1/D}/2\\ \vdots\\ \varepsilon/2+m_{D}\varepsilon-M^{1/D}/2\end{bmatrix}$$ which indeed satisfies this property. Now, $$e^{-i2\pi x^{\top}\xi}=\cos(2\pi x^{\top}\xi)+i\sin(2\pi x^{\top}\xi)$$ $$=\sum_{j=0}^{D}(-i)^{j}\sum_{S\subseteq\{1:D\},|S|=j}\prod_{d\in S}\sin2\pi x_{d}z_{d}\prod_{d\not\in S}\cos2\pi x_{d}z_{d}$$ ${}_{1}$, $m_{2}$,...,$m_{D}$,$S$ for positive $m_{d}$ only, with $S\subseteq\{1:D\}$, be defined as $m_{2},...,m_{D},S$ for positive $m_{d}$ only, with $S=\{1:D\}$, be defined as $$u_{m_{1},...,m_{D},S}=\int_{z_{D}-\varepsilon/2}^{z_{D}+\varepsilon/2}\cdots\int_{z_{1}-\varepsilon/2}^{z_{1}+\varepsilon/2}\int f(x)\prod_{d\in S}\sin2\pi\xi_{d}x_{d}\prod_{d\notin S}\cos2\pi\xi_{d}x_{d}\,dxd\xi_{1}\ldots d\xi_{D}$$ so that $$u^{\prime}_{m_{1},\ldots m_{D}}=\sum_{S}(-\mathrm{i})^{|S|}u_{m_{1}|,\ldots,|m_{D}|,S}\prod_{d\in S}\operatorname{sgn}(m_{d}).$$ **Lemma B.1**.: _The real representation is equivalent to the complex representation used the main text._ Proof. It suffices to show that this transformation can be expressed as a matrix with linearly independent rows. The row corresponding to each u ′m1*,...,m*D has only non-zero entries for u|m1|*,...,*|mD|,S for any S. Hence, if the absolute values of m1*, ..., m*D differ, then the rows are linearly independent. Now, suppose the absolute values are fixed and consider an arbitrary collection of indices S ′ ⊆ {1 : D} to have negative sign. This corresponds to a particular u ′, hence a particular row of the matrix. Then the non-zero entries in the row corresponding to S ′each correspond to a choice of S, and the sign is flipped (relative to the case where each md is positive) if |S ∩ S ′| is odd. The rows are only linearly dependent if they have all their signs flipped or all their signs not flipped. That is, if there exists S ′ ̸= S ′′ such that |S ∩ S ′′| and |S ∩ S ′| have the same parity for all S ⊆ {1 : D}. But since they are distinct, there must be at least one d ∈ {1 : D} which is in S ′ but not S ′′ (or vice versa) and so for S = {d}, the parity differs. ## C Convergence We follow the steps of Section 4.1, generalising to higher dimensions and filling in the details. Recall that inference and learning with with the modified features u3 and their approximation uˆ3 is equivalent to inference and learning with the features we use in practice. In this section, for brevity, we use uˆ = ˆu3, and u = u3. Following from Burt et al. (2019), the gap between the log marginal likelihood and the training objective is bounded as $$\mathbb{E}_{y}[D_{K L}(q(f)\mid\mid p(f|y))]=\mathbb{E}_{y}[\mathcal{L}-\mathcal{F}]\leq\frac{t}{\sigma^{2}}$$ $$t=\operatorname{tr}(K_{\mathfrak{H}}-\underbrace{K_{u\mathfrak{I}}^{*}K_{u}^{-1}K_{u\mathfrak{I}}}_{Q_{\mathfrak{I}\mathfrak{I}}})$$ $$(30)$$ $$\mathbf{\Sigma}^{(31)}$$ $$(32)$$ when u are valid inducing features, and the data y are generates according to the Equation (1). We defer the effect of approximating with uˆ until later in the proof. First, we set out the technical assumptions. Let k = vσ2˜k where ˜k(*x, x*) = 1 (that is, define v as the signal to noise ratio). A1 Assume that ˜k's spectral measure admits a density, and denote this by s˜ and assume that the density admits a tail boundZ ∞ $$\int_{\rho}^{\infty}\cdots\int_{\rho}^{\infty}\tilde{s}(\xi)\,d\xi_{1}\ldots d\xi_{D}\leq\beta\rho^{-q D}$$ for any ρ > 0 and some *β, q >* 0. A2 The spectral density's second derivative is bounded. A3 The spectral density's first derivatives are bounded as $$\frac{\partial s(\xi)}{\partial\xi}\leq2Ls(\xi)\implies\frac{\partial\sqrt{s(\xi)}}{\partial\xi}\leq L\sqrt{s(\xi)}\tag{33}$$ for some L > 0 (where the second expression follows wherever s(ξ) > 0). For example, this would be satisfied if the spectral density and its first derivative are both bounded everywhere, which includes widely used covariance functions. A4 Finally, let the inducing frequencies zm = (−(M + 1)/2 + m)ε in one dimensions, and an analogous regular grid in higher dimensions, with M1/D ∈ Z even. That is, for a multi-index m1:D, $$[z_{m_{1:D}}]_{d}=(-(M^{1/D}+1)/2+m_{d})\varepsilon\quad\mbox{for$d\in\{1:D\}$}.\tag{1}$$ Notationally, we usually use a single index m and, for brevity, let $$\Box_{m}=\prod_{d=1}^{D}\left[\left[z_{m}\right]_{d}-\varepsilon/2,\left[z_{m}\right]_{d}+\varepsilon/2\right),\quad\Box=\bigcup_{m}\Box_{m}\tag{35}$$ $$(34)$$ We might wish to deviate from A4 in practice - for example, to prioritise higher importance frequencies when using a finite number, or to vary ε in each dimension or as a function of location. We note that the proofs which follow can be generalised to these cases, with the rate of convergence controlled by the largest width ε used. In the rest of this section, we refer to these as the standard assumptions, and we now reiterate the definitions of u and uˆ. u = ⟨ϕ3,m, f⟩ def = ε −1 Z □m Zf(x)e −i2πξx dx/ps(ξ) dξ = ε −1 Z □m ⟨ϕ1,ξ, f⟩ps(ξ) dξ cm(x ′) = c3(x ′, zm) = ε −1 Z □m c1(x ′, ξ)ps(ξ) dξ ≈ps(zm)e −i2πzmx ′= ˆcm(x ′) ¯km,m′ = ¯k3(zm, zm′ ) = ε −2 Z □m Z □m′ ¯k1(ξ, ξ′)s(ξ) dξ′dξ = ε −1δm−m′ Then uˆ is defined implicitly through the definition of cˆm, ¯kmm above. Lemma C.1 (Lemma 4.1). *Under assumptions A3 and A4,* $$c_{m}(x)=\hat{c}_{m}(x)(1+O(\varepsilon^{D}))$$ $$(36)$$ D)) (36) Proof. We first consider the 1D case. Let Em(ξ) = ps(ξ) −ps(zm). Then by Taylor's theorem, |Em(ξ)| ≤ L|ξ − zm|ps(ξ ′) for some ξ ′ ∈ □m. In particular, let ξ ′ = arg maxξ∈□m ps(ξ). But we also have √ξ ′ ≤ √zm + Lps(ξ ′)|ξ ′ − zm| ≤ √zm + Lps(ξ ′)ε/2. Then for ε < 2/L it follows that $$\sqrt{s(\xi^{\prime})}\leq\sqrt{s(z_{m})}\frac{1}{1-\frac{L\xi}{2}}=\sqrt{s(z_{m})}(1+O(\varepsilon))$$ and so $$|{\mathcal{E}}_{m}(\xi)|\leq L|\xi-z_{m}|{\sqrt{s(z_{m})}}(1+O(\varepsilon)).$$ |Em(ξ)| ≤ L|ξ − zm|ps(zm)(1 + O(ε)). (38) $$(37)$$ $$(38)$$ Then, considering at first the upper bound cm(x) = ε −1 Z □m ps(ξ)e −i2πξx dξ (39) = ε −1 Z □m (ps(zm) + Em(ξ))e −i2πzmxe −i2π(ξ−zm)xdξ (40) =ps(zm)e −i2πzmxε −1 Z □m e −i2π(ξ−zm)xdξ + ε −1 Z □m Em(ξ)e −i2πξx dξ (41) ≤ cˆm(x)sinc(2πεx) + ε −1 Z □m |Em(ξ) dξ (42) ≤ cˆm(x) sinc(2πεx) + Lε 2 (43) $$(39)$$ Here sinc(α) = sin(α)/α. The sinc term is of constant order. The lower bound is found by subtracting the magnitude of the error term instead of adding. The result then follows. In higher dimensions, we follow the same argument, and the new upper bound is (40) $$\begin{array}{l}\small\left(41\right)\end{array}$$ = $$\begin{array}{l}\small\left(42\right)\end{array}$$ . $$(43)$$ $$c_{m}(x)\leq\hat{c}_{m}(x)\left(\operatorname{sinc}^{D}(2\pi\varepsilon x)+{\frac{L\varepsilon^{D}}{2}}\right)\,$$ from which the result follows. Remark C.2. The error bound in Equation (44) generally tightens as ε falls, but loosen as x increases, with the approximation value vanishing when ε = 1/x. Theorem C.3 (Theorem 4.2 of the main text). Under the assumptions A1-A4, for any ∆, δ > 0, there exists M0, α0 > 0 for all *M > M*0, $$\mathbb{P}\left[{\frac{{\mathcal{L}}-{\mathfrak{F}}}{N}}\geq{\frac{\Delta}{N}}\right]\leq\delta$$ and with $$M\leq\left(\frac{\alpha}{\Delta\delta}N\right)^{\frac{q+3}{2q}}.$$ Proof. Let tˆ = tr(Kff − Kˆ ∗ ufK−1 uu Kˆuf) = tr(Kff − Qˆff). Then we can show that *t/Nσ* ˆ 2 ∈ O(M−2q/(q+3)) as follows. $$\frac{\hat{t}}{N\sigma^{2}}=\frac{1}{N\sigma^{2}}\sum_{n}\left(k\frac{(x_{n},x_{n})}{v\sigma^{2}}-\varepsilon^{D}\sum_{m}\hat{c}_{m}(x_{n})\hat{c}_{m}^{*}(x_{n})\right)$$ $$=v\left(1-\varepsilon^{D}\sum_{m}\hat{s}(z_{m})\right)$$ $$=v\left(1-\int_{\square}\bar{s}(\xi)\,d\xi\right)+vE_{1}$$ $$=2v\int_{\mathbb{R}\setminus\square}\bar{s}(\xi)\,d\xi+vE_{1}$$ $$(44)$$ $$(45)$$ (47) $\binom{48}{4}$ (48) ... $$(46)$$ where E1 =R□ s˜(ξ) dξ − εPm s˜(zm) + O(Mε3D). The integral term in the last line is in O((M1/Dε) −qD) by assumption A1, and E1 ∈ O(Mε3D) from standard bounds on the error of the midpoint approximation (since the second derivative of the integrand is bounded by assumption A2). We must have that ε → 0 as M → ∞ to make the midpoint approximation asymptotically exact, yet we must have Mε → ∞ to ensure the features cover all frequencies. We optimise the trade-off between these two. In particular, let ε = ε0M−p/D for some ε0 > 0 and p ∈ (1/3, 1). Then, $$\frac{\hat{t}}{N\sigma^{2}}\in O(M^{-q(1-p)}+M^{-(3p-1)}).\tag{1}$$ $$(49)$$ $$(50)$$ The overall rate is asymptotically dominated by the worse of these two rates, so we optimise p as $$p=\arg\max_{p^{\prime}}\min\left\{q(1-p),3p-1\right\}=\frac{q+1}{q+3}$$ which is the p which sets both rates equal at $$\frac{2q}{q+3}.$$ . Altogether we have $$\frac{\hat{t}}{N\sigma^{2}}\in O(M^{\frac{-2q}{q+3}}).\tag{1}$$ Nσ2 q+3 ). (51) For bounding E[(*L − F*)/N] in terms of t, we could immediately apply the result Equation (30). But for E[(L − F)/N] we cannot, since uˆ are not exact features. But following the proof of Lemma 2 of Burt et al. (2019), we have $$\mathbb{E}_{y}\left[\frac{\mathcal{L}-\mathfrak{F}}{N\sigma^{2}}\right]\leq\frac{\hat{t}}{N\sigma^{2}}+O(M\varepsilon^{3D})$$ $[-\log|K_{\ell\ell}+\sigma^{2}I|)/N\sigma^{2}<O(M\varepsilon^{3D})$. provided Qˆff ≥ 0 and (log |Qˆff + σ 2I| − log |Kff + σ For the first condition, apply Bochner's theorem. Consider the covariance function $$q(x,x^{\prime})=\sum_{m}\hat{c}_{m}(x)\hat{c}_{m^{\prime}}^{*}/\hat{k}_{mm^{\prime}}\tag{1}$$ which is used to form the elements of Qˆff. Its Fourier transform is $$[{\mathcal{F}}q](\xi)=\varepsilon\sum_{m}s(z_{m})\delta(\xi-z_{m})$$ $$(51)$$ $$(52)$$ $$(53)$$ $$(54)$$ $$\begin{array}{l}{(55)}\\ {(56)}\end{array}$$ which is indeed a positive measure, so q is a positive definite covariance function, so Qˆff ≥ 0. For the log determininant term, we have from Lemma 4.1 (which holds due to A3, A4) that the relative error of Qˆff from Qff is symmetric and in O(Mε3D), hence the error in the log determinant is in O(NMε3D); the scaled identity shift does not change the order of this relative error. Then, $$\begin{array}{c}{{(\log|\hat{Q}_{\mathrm{H}}+\sigma^{2}I|-\log|K_{\mathrm{H}}+\sigma^{2}I|)/N\sigma^{2}\leq(\log|Q_{\mathrm{H}}+\sigma^{2}I|-\log|K_{\mathrm{H}}+\sigma^{2}I|)/N\sigma^{2}+O(M\varepsilon^{3D})}}\\ {{\in O(M\varepsilon^{3D})}}\end{array}$$ where the last step follows from (log |Qff + σ 2I| − log |Kff + σ 2I|) ≤ 0 (see proof of Lemma 2 of Burt et al. (2019)). Thus, we have $\mathbb{E}_y\left[\dfrac{\mathcal{L}-\mathfrak{\tilde{x}}}{N}\right]\in O(M^{-\frac{2g}{q+3}})$ y by a straightforward application of Markov's inequality. Then the first part of the results follow by a straightforward application of Markov's inequality. For the second part, we replace the big O notation with an explicit constant α0 > 0. $$\mathbb{P}\left[\frac{\mathcal{L}-\widehat{\mathbf{x}}}{N}\leq\frac{\Delta}{N}\right]=\delta\leq\frac{\mathbb{E}[(\mathcal{L}-\widehat{\mathbf{x}})/N]}{\Delta/N}$$ $$\leq N\frac{\alpha}{\Delta}M^{-\frac{2\alpha}{\pi+2}}$$ $$\implies M\leq\left(\frac{\alpha}{\delta\Delta}N\right)^{\frac{\alpha+3}{2q}}$$ $$\left(57\right)$$ (58) $\binom{59}{5}$ . $$(60)$$ This result demonstrates that the objective we use for hyperparameter optimisation, even with the numerical approximation, converges at a reasonable rate to the log marginal likelihood, at least if the spectral density has sufficiently light tails. This means it is a good surrogate for learning when we can set M large enough. With the proper inducing features u, we can be reassured that the posterior predictives will not be too bad, since the whole process KL from the approximating to exact posterior is bounded as *L − F* Matthews et al. (2016). With uˆ, we require some additional reassurance, which the is the subject of the next result. Theorem C.4 (Theorem 4.4 of the main text). For the optimised µu, Σu (to maximise F), let the posterior predictive at any test point x∗ using the exact features u have mean and variance µ, Σ, and with the approximate features uˆ have mean and variance µ, ˆ Σˆ*. Then, under assumptions A3 and A4,* $$|\mu-\hat{\mu}|\in O(M\varepsilon^{2D})$$ $$|\Sigma-\hat{\Sigma}|\in O(M^{2}\varepsilon^{3D})$$ and in particular, allowing ε ∼ M −q+1 D(q+3) *as in the proof of Theorem C.3, we have* $$\begin{array}{l}{{|\mu-\hat{\mu}|\in O\left(M^{-\frac{q-1}{q+3}}\right)}}\\ {{|\Sigma-\hat{\Sigma}|\in O\left(M^{-\frac{q-3}{q+3}}\right)}}\end{array}$$ This result shows that the predictive marginals using the approximation converge at a reasonable rate to those without the approximation, for any fixed µu, Σu as long as the spectral density is not too heavy tailed (q > 3). We note that we do not comment on the rate at which the optimal variational means and covariances converge to each other, nor the consequent rate of convergence for the posterior predictives according to each objective's optimal variational distribution. Proof. The predictive distributions are predictive distributions are $\begin{array}{l}q(f(x_{*}))=\mathcal{N}(f(x_{*})|\underbrace{K_{us}^{*}K_{us}^{-1}\mu_{y}}_{\mu},\,\underbrace{k(x_{*},x_{*})-K_{us}^{*}K_{us}^{-1}K_{us}+K_{us}^{*}K_{us}^{-1}\Sigma_{us}K_{us}^{-1}K_{us}}_{\Sigma},\\ \hat{q}(f(x_{*}))=\mathcal{N}(f(x_{*})|\underbrace{\hat{K}_{us}^{*}K_{us}^{-1}\mu_{y}}_{\hat{\mu}},\,\underbrace{k(x_{*},x_{*})-\hat{K}_{us}^{*}K_{us}^{-1}\hat{K}_{us}+\hat{K}_{us}^{*}K_{us}^{-1}\Sigma_{us}K_{us}^{-1}\hat{K}_{us}}_{\hat{\Sigma}}.\end{array}$ $$(67)$$ $$(68)$$ $$(69)$$ $$(70)$$ $$(71)$$ $$\left(72\right)$$ . (66) $$(61)$$ $${}^{\circ}$$ $$(63)$$ $$(64)$$ $$(65)$$ $$(66)$$ Recall that K−1 uu = ε DI, and that |[Ku∗ − Kˆu∗]m| = |cm(x∗) − cm(x∗)| ∈ O(ε D) by Lemma 4.1. The result for the means follows straightforwardly. $$|\mu-\hat{\mu}|=|(K_{u*}^{*}-\hat{K}_{u*}^{*})K_{uu}^{-1}\mu_{u}|\in O(M\varepsilon^{2D})$$ which follows since K−1 uu µu is an M dimensional vector with each element in O(ε D). For the covariance, we use the triangle inequality. $$|\Sigma-\hat{\Sigma}|\leq|K^{*}_{u*}K^{-1}_{uu}K_{u*}-\hat{K}^{*}_{u*}K^{-1}_{uu}\hat{K}_{u*}|$$ $$+|K^{*}_{u*}K^{-1}_{uu}\Sigma_{u}K^{-1}_{uu}K_{u*}-\hat{K}^{*}_{u*}K^{-1}_{uu}\Sigma_{u}K^{-1}_{uu}\hat{K}_{u*}|$$ Now, each of the terms on the right hand side is a one dimensional marginal variance, so we rewrite them as follows. $$|K^{*}_{u*}K^{-1}_{uu}K_{u*}-\hat{K}^{*}_{u*}K^{-1}_{uu}\hat{K}_{uu}|=\varepsilon^{D}\sum_{m}(|c_{m}(x_{*})|^{2}-|\hat{c}_{m}(x^{*})^{2}|)$$ $$\in O(M\varepsilon^{2D})$$ $$|K^{*}_{u*}K^{-1}_{uu}\Sigma_{u}K^{-1}_{uu}K_{u*}-\hat{K}^{*}_{u*}K^{-1}_{uu}\Sigma_{u}K^{-1}_{uu}\hat{K}_{uu}|=\varepsilon^{2D}\sum_{m,m^{\prime}}[\Sigma_{u}]_{m,m^{\prime}}(c_{m}(x_{*})c^{*}_{m^{\prime}}(x_{*})-\hat{c}_{m}(x_{*})\hat{c}^{*}_{m^{\prime}}(x_{*}))$$ $$\in O(M^{2}\varepsilon^{2D})$$ m′ (x∗)) (72) The first term is εPm(|cm(x∗)| 2 − |cˆm(x ∗) 2|) ∈ O(Mε3D) following from the proof of Theorem C.3. The other two terms are of the same order O(Mε2D) using Lemma 4.1. $$(73)$$ $O(M\varepsilon^{3D})$ following from . ## D Experimental Details We include the code for the experiments and figures which can be referred to for full details. For Figure 2, we used N = 1 000 data points, whose input locations were sampled from a zero mean Gaussian distribution with standard deviation Wx/6. We randomly sampled a signal standard deviation σf ≈ 0.5422 and a signal to noise ratio (SNR) v ≈ 1.255, and used a unit lengthscale (λ = 1). To verify that the pattern is broadly consistent for different parameter choices, we also reproduced the figure with lengthscale λ = 0.5, and, with both lengthscales, we used both half and four times the SNR. In each case, we set Wx = 6λ −1pN/2. FInally, we consider the case with uniformly sampled data. The results are plotted in Figures 7 to 12. For the synthetic experiment, we generated N = 10 000 data points in 1 and 2 dimensions by samping from a GP with a Gaussian or Matérn-5/2 covariance function, with unit (or identity) lengthscale, unit variance, and set the SNR to 0.774 (arbitrarily chosen, poor signal to noise ratio for a challenging dataset). In 1D we sample the training inputs uniformly on a width 6pN/2 centred interval. In 2D we do the same in each dimension, but with width 5. We then fit each model plotted, training using LBFGS and using the same initialisation in each case, other than necessary restrictions on the choice of covariance functions as described in the main text. The initial values were lengthscales of 0.2, and unit signal and noise variances. We did multiple random trials, and plot the 2 standard deviation error bars; this is mainly for uncertainty in timing, but for inducing points we also have uncertainty due to randomness in the inducing point (re)initialisation method. Inducing points with inducing inputs optimised takes far longer than the other methods, so was excluded. For the real-world experiments, we used a similar setup as the synthetic experiments in terms of initialisations. Guided by the synthetic results, we use the full rectangular grid of frequencies for VFF, but use a spherical mask for IFF. We set ε as described in the main text. Comparably, for B-splines and VFF, we set the interval [*a, b*] as 0.1 wider than the data (that is, ad = minn,d xn,d − 0.1, bd = maxn,d xn,d + 0.1); we use fourth order B-splines. The experiments were generally run on CPU to avoid memory-related distortion of the results, with the exception of SKI, which was run on GPU since it depends on GPU execution for faster MVMs. We implemented IFF, VFF and spherical harmonics using gpflow (Matthews et al., 2017), which we also used for inducing points, reusing some of the code of Burt et al. (2020b) for the k-DPP (re)initialisation method. The spherical harmonics implementation depends on the backend-agnostic implementation of the basis functions3, though as noted in the main text, we were unable to produce comparable results. We used the gpytorch implementation of SKI (Gardner et al., 2018b). We use the publicly available Tensorflow 2 implementation for B-splines4. ## D.1 Real-World Dataset Information In all cases, we normalise both the inputs and targets to unit mean and standard deviation in each dimension. However, we report the test metrics (RMSE and NLPD) averaged over test points but on the unnormalised scale. The number of training and test points, and the standard deviation of the outputs, for each dataset is reported in Table 1 The precipitation dataset is a regularly gridded (in latitude and longitude) modelled precipitation normals in mm in the contiguous United States for 1 January 2021 (publicly available with further documentation at https://water.weather.gov/precip/download.php; note the data at the source is in inches). We downsample the data by 4 (by 2 in each dimension). The data is highly nonstationary, and so a challenging target for GP regression with typically used, usually stationary, covariance functions. In particular, the lengthscales are fairly large across the plains and in the southeast in general, but quite small near the Pacific coast, especially in the Northwest. For a stationary model, the high frequency content in that region leads to a globally low lengthscale. 3https://github.com/vdutor/SphericalHarmonics 4https://github.com/HJakeCunningham/ASVGP ![27_image_0.png](27_image_0.png) Figure 7: As Figure 2, but with half the SNR. ![27_image_1.png](27_image_1.png) Figure 8: As Figure 2, but with four times the SNR. ![28_image_0.png](28_image_0.png) Figure 9: As Figure 2, but with half the lengthscale (twice the bandwidth). ![28_image_1.png](28_image_1.png) Figure 10: As Figure 2, but with half the lengthscale (twice the bandwidth) and half the SNR. ![29_image_0.png](29_image_0.png) Figure 11: As Figure 2, but with half the lengthscale (twice the bandwidth) and four times the SNR. ![29_image_1.png](29_image_1.png) Figure 12: As Figure 2, but with data sampled uniformly on [−W₂/2, W₂/2], and with higher SNR (≈ 2.067). | Dataset | N | Test points | Output standard deviation | |---------------|---------|---------------|-----------------------------| | Temperature | 12 947 | 3236 | 2.766 | | Precipitation | 23 144 | 5785 | 58.855 | | House price | 106 875 | 26 718 | 0.642 | Table 1: Summary of real world datasets. The temperature dataset is the change in mean land surface temperature (°C). over the year ending February 2021 relative to the base year ending February 1961 (publicly available from https://data.giss.nasa.gov/ gistemp/maps). It is also regularly gridded, over more of the globe. The house price dataset is a snapshot of house prices in England and Wales, which is not regularly gridded. We use a random 20% of the full dataset, and target the log price to compress the dynamic range. It is based on the publicly available UK house price index (https://landregistry.data.gov.uk/app/ukhpi), and we enclose the exact dataset we use.
Review 1: Summary: Existing methods are either limited to kernels with specific structure (e.g., isotropic or 1D Matern kernels) or have $O(NM^2)$ computational cost, where $N$ is the sample size and $M$ is the number of features used. This work proposes a method that takes $O(M^3)$ (with $M \ll N$) per optimization step (though it was hard for me to understand the exact assumptions under which this occurs; see below). In experiments on synthetic data ($N=10,000$), IFF consistently provides a speedup roughly of $2^5$ over standard sparse GPs (inducing points), in both 1D and 2D. In 1D, performance is similar to VFF in 1D, but significantly better in 2D. In experiments on three somewhat larger ($N=12,947$, $N=23,144$, and $N=106,875$) real datasets, IFF fairly consistently provides among the best trade-off between compute time and test performance, typically beating VFF and B-splines by a small amount. Strengths and Weaknesses: I think the overall goal of this paper (achieving $O(M^3)$ runtime for general kernels) seems useful and the proposed approach makes sense at a high level. However, I found the theoretical portions of this paper too unclear to verify the main results. While the topic of the paper is far from my expertise and I wasn't previously familiar with the methods (e.g., SGPR) this work is building upon, I think a substantial amount of the difficulty is due to the writing of the paper itself. Below, I ask a few questions and suggest several ways to improve the presentation. Requested Changes: **Major** 1. The assumptions (e.g., disjoint intervals, tail bound, first and second derivative conditions) made in Section 4 are scattered throughout the presentation, making it difficult to understand how the assumptions relate to the results and their proofs. I suggest labeling each assumption when it is first stated (e.g., A1., A2., etc.) and then explicitly referencing each assumption in the result where it is needed (e.g., instead of "Under the standard assumptions", say "Under assumptions A1 and A2"). 2. As far as I can tell, the proofs of the theoretical results (Appendix C) assume the inducing features are a regular grid (Eq. (29)), but this is not clear in the main paper Section 4. In fact, the disjoint interval assumption in the first paragraph of Section 4 seems to suggest that a regular grid is not necessarily assumed. I am quite confused by this. 3. At the beginning of Section 4, it is assumed for simplicity that $D=1$, and the assumptions are stated for this case, but then the results are stated for general $D>=1$. Thus, the reader cannot understand the stated results without referring to the more general assumptions in the Appendix. I suggest sticking to one version ($D=1$ or general $D$) throughout this section. 4. Although the tail bound and second derivative assumptions on the normalized spectral density are fairly standard, I don't think the first derivative assumption (Eq. (17)) is as standard. I therefore suggest (a) adding a sentence discussing this assumption (e.g., whether is is satisfied by commonly used kernels) and (b) explicitly stating the multidimensional generalization of this assumption. 5. I couldn't quite follow how Eq. (33) follows from Eq. (32). Is there a missing "$1 +$" or something in Eq. (33)? 6. The bound on $|\Sigma - \hat{\Sigma}|$ in Theorem C.4 needs more details: - Why are the elements of $K_{uu}^{-1} \Sigma_u K_{uu}^{-1}$ of order O(\epsilon^D)? - Some more details about how Theorems C.3 and 4.1 imply the reported convergence rates of the kernel matrices $\hat K_{u*}$ and $\hat K_{u*}^*$? - "The other two terms are of the same order $O(M \epsilon^{2D})$": Why doesn't this $O(M \epsilon^{2D})$ doesn't dominate the overall convergence rate of $O(M \epsilon^{3D})$ in Theorem C.4? Or am I misunderstanding what "The other two terms" refers to? 7. At some points, the discussion is too vague to follow. Some examples: - After Eq. (15), "The first equality follows for any variationally correct features (Matthews et al., 2016).": What does "variationally correct" mean? - Before Eq. (14), "we use the result of Burt et al. (2019) to bound": Which result (they have several)? As far as I could tell, these points aren't clarified in the Appendix. **Minor** 8. Page 2, First Sentence, "iterative learning of the variational distribution which is otherwise available in closed form, which results in slower learning overall": I didn't quite understand why an iterative approach is necessarily slower than a closed form solution. For example, I think it is frequently the case that using SGD to optimize the ridge regression objective is often faster than calculating it in closed form. 9. Page 2, Last Sentence, "the hyperparameters", and elsewhere: The paper frequently refers to "the hyperparameters", but it's not clear to me what this refers to. Are there hyperparameters other than $\sigma$? Relatedly, when $\sigma$ is introduced in Eq. (1), the paper should explicitly point out if this is a hyperparameter. 10. Page 3, Sentence 1, "the quadratic form $y^⊤Ay$, both of which incur $O(N^3)$ computational cost": To clarify, the $O(N^3)$ cost is to compute $A$, right? The quadratic form itself should only take $O(N^2)$ times, right? 11. Page 9, Theorem 4.4: I suggest more explicitly writing $\mu(x_*)$, $\hat{\mu}(x_*)$, $\Sigma(x_*)$, $\hat{\Sigma}(x_*)$, to make it clearer that the quantities being bounded are scalars. 12. Figure 1: This should probably appear one page earlier, closer to where it is first referenced (near the top of page 5). **Typos** This paper has *many* minor typos, and needs extensive proofreading. A few examples are listed below: 13. Page 2, Paragraph 2, Last Sentence, "Existing work... Cunningham et al., 2023)": This sentence seem to parse grammatically, and I couldn't understand what it was trying to say. Is there an error here? 14. Page 2, Paragraph 3, "for for" 15. Section 3, "to approximate the solution to the linear solve" 16. Page 5, Paragraph 3, "$dxdx$" 17. Page 9, Remark 4.3, "this is due to the $O(M \epsilon^3)$ which arise due to...": It's not clear what quantity is in $O(M \epsilon^3)$. I guess this should be the error term $E_1 \in O(M \epsilon^3)$? 18. Page 11, Last Paragraph, "For out synthetic experiments" 19. Page 14, Paragraph 1, "$N = 10/, 000$" 20. Page 22, Paragraph 3, "$M^{1/D} \in $" and "That isve" 21. Page 22, Eq. (30): I think the definition of $\square_m$ is intended to be a Cartesian product, but I don't think I have ever seen $\oplus$ used for the Cartesian product. Should this be, e.g., $\prod$ or $\otimes$? 22. Page 22, defs. of $u$, $c_m$, and $\bar{k}_{m,m'}$: In the multidimensional case discussed here, I think these integrals should be over $\square_m$ rather than the 1D intervals $[z_m-\epsilon/2, z_m+\epsilon/2]$. 23. Page 22, Lemma C.1 (Theorem 4.1): I think "Theorem 4.1" should be "Lemma 4.1", consistent with Page 8. 24. Page 24, after Eq. (44), "asumptotically" 25. Page 24, after Eq. (46), "the result ??" (missing reference) 26. Page 25, "if the spectral density is sufficiently heavy tailed": I think this should be "light tailed" or "not too heavy tailed"? 27. Page 25, Theorem C.4: Should "$O(M^{−2qq+3})$" be "$O(M^{−\frac{2q}{q+3}})$"? 28. Page 25, after Eq. (63), "The first term is...": There is a missing absolute value ("$|$") sign in this expression. Broader Impact Concerns: I have no concerns regarding ethical implications of this work. ================================================== Review 2: Summary: The work proposes integrated Fourier features to perform fast-and-scalable GP regression in O(M^3) with stationary kernels and of regular spectral density (i.e. non-periodic kernels). The proposed ideas build on top of inter-domain GPs and recent methods in the context of frequency-based representations of kernels to obtain scalable sparse approximations for GPs. While following this line of thinking, the propositions show a good performance on the experimental results on both synthetic and real-world datasets. Strengths and Weaknesses: **Strengths.** Overall, I think the paper introduces novel ideas and relevant advances for obtaining scalable approximations for GP regression. I particularly liked how the authors took and presented ideas from previous works in the context that they were interested in before jumping into their main technical propositions. Despite the derivations with Fourier features and inducing methods, I did not find any theoretical/technical typo at first sight, so the proposed equations are likely correct. The empirical results look really good and seem to outperform previous approaches (Burt's, VFFs and standard inducing points) in precision/time, which I think is very important to remark. I also liked that the authors clearly stated the limitations of the method and the gaps that they left for further development or just bc they were out of the scope of the current manuscript. In that direction, I think the work is extremely transparent with the ideas and the justification of the decisions taken, which is important for its future impact and understanding of readers. **Weaknesses.** Having stated the main strengths, and once again remarking that I think is a paper of high-quality ideas --- I think is worth saying that the manuscript should be updated to make such ideas and contributions "shine" in the text. In that regard, the paper loses clarity and some details are omitted without any reason at some points of the story. It is very clear while reading that after developing the project, some ideas are extremely clear for the writer, and then they are partially omitted or not clearly explained. These sorts of issues are of course possible to be fixed in this reviewing round, so I encourage the authors to improve the manuscript to make the paper stronger and of better clarity for its future impact in the community. For helping in this process, I will add some notes and points that I think should be improved in the next section. Requested Changes: Some changes I would like to see updated to vote for a strong acceptance of the paper and likely for improving its clarity and impact in the future. **Point 1.** Page (1) --- "the scaling with N is problematic ...." I see the point of saying this and reasoning that scaling with N and using stochastic optimization makes the learning process slower in general. However, if the paper needs this sort of thinking for avoiding the use/introduction of stochastic gradient descent methods, I think more attention should be put on the computational advantage of the method, like really showing the time-difference between using Fourier features with sparse approximations and other stochastic gradient-based methods. I basically say this bc many readers are now familiar and native in the use of SGD, even with GPs, and stating that the proposed ideas are much more faster, even showing clear empirical results would convince many of them to believe this is a good direction. Indeed, this last sort of things would be better than the last paragraph. **Point 2.** Fixing writing typos. I caught quite a few writing typos like "pesudo" instead of pseudo or "for for" just in the first pages. A deep look into the text and some rewording/rewriting of such typos would help. **Point 3.** The introduction is great in general, but when one jumps in the background things get a bit messy. So it took me quite a few reads to remember/realize the key point of everything, which is that Kuu is diagonal because the Fourier features are independent. Of course, this in the end is what makes the computation to be reduced to O(M^3) bc the trace can be reduced down to the equation written in Appendix A. This point is kind of hidden in the text, as one realizes it when looking into Appendix A. Just notice that this Kuu is usually the GP prior on standard inducing points, so any practitioner might accidentally think that Kuu is still fully correlated and then have issues to understand what is going on. So I think is important to remark and remind all around the paper that this Kuu is diagonal and features independent in all cases. **Point 4.** Could the notation be matched with D. Burt's papers or VFF? I think is not a lot of work, (i.e. using bold letters for inducing features and matrices for instance and maybe using more standard acronyms for frequency domains), and it would save a lot of thinking-time for the reader. **Point 5.** Some lingering questions in my mind that maybe are obvious but could be perhaps clarified. The issues with the Dirac delta: a) unsuitable for constructing the conditional prior (this needs more explanations and at least to be smoother for the reader), and b) difficulties to integrated out the Dirac delta in Eq. (11) --- is this bc it cannot be done, not numerical methods available, intractability, difficult functional evaluation, etc... the reasons are not clear to me here and i think is super important. **Point 6.** Spectral density, c and k. These three elements are the ones presented and indexed at every step of the paper. Indeed, the different methods have different indices 1,2,3.. However, this is a bit difficult to follow and not very clear. Perhaps, it would help to cleary state the role of every one of them, and also use other way to say that they are different approximations or methods. **Point 7.** I see the utility of averaging the Fourier features over disjoint intervals of fixed width. But as said before, this is very related to the problems on integrating out the Dirac delta and so on. Could it be explained a bit more why the averaging wants disjoint intervals, if this ones are exactly near each other, the size of \epsilon chosen, etc...? **Point 8.** The transformation of the features by the invertible linear transformation T seems useful, but I do not understand what is going on and how it helps to have only one approximation... This part is super difficult to follow for me, and also do not understand why the normalization by spectral density turns into the normalization by square root. I guess bc T is used, but not clear how to me. **Point 9.** Why only for D<4? I see that it performs poorly for larger D, but where is the bottleneck or issue? I would also add a bit more of analysis/detailed context in the results, and connecting things a bit better with the Appendix. For instance, explaining a bit the nature of the real-world datasets and the characteristics of the data to easily understand the impact of the performance. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper introduces integrated Fourier features for Gaussian processes (GPs) to enhance their scalability for larger datasets. The method extends performance benefits to a broad class of stationary covariance functions. Overall speaking, this paper proposed a speedup method for Gaussian process applied on large dataset. Experiments show that the proposed IFF method can be applied on a various source of dataset and show great scalability, reducing the computational costs especially when M << N. Strengths and Weaknesses: Strength: 1. The proposed features are motivated by a convergence analysis and empirical exploration. 2. The paper demonstrates practical speedup in synthetic and real-world spatial regression tasks. Weakness: 1. The paper's experiments involve synthetic datasets with arbitrarily chosen parameters, such as a signal-to-noise ratio (SNR) of 0.774, which may not be representative of real-world scenarios. 2. The paper mentions inducing points with inducing inputs optimized taking longer than other methods, but it does not delve deeper into the implications of this observation. Requested Changes: I think there is nothing much to change. Maybe it will be good to have a table to compare the computational costs and other properties of current methods for solving Gaussian process problem. Broader Impact Concerns: There doesn't seem to be any direct ethical concerns related to the paper's content. ================================================== Metareview: Recommendation: Accept as is Comment: This paper presents a method for speeding up Gaussian process modeling of spatial data, using what they call integrated variational features. Previously there were methods that took advantage of the spatial structure of the data to reduce the computational complexity of GP inference from N^3 to M^3, where N and M are the number of examples and number of feature dimensions respectively. However, those methods were restricted to a small restricted set of covariance functions, limiting the modeling flexibility. This work extends the M^3 complexity to many more covariance functions, using the variational Fourier features. This would allow for much more flexible modeling of large spatial datasets. All three reviewers voted for accept (two leaning, one accept). The reviewers found that the paper was well motivated, addressed an important and relevant problem, the methods were well justified and empirical results convincing. One reviewer found that the theoretical results were somewhat confusing and unverifiable. However, after reading over the author response, that reviewer now seems satisfied. The reviewers all found that the claims made in the paper were well supported by the empirical and theoretical evidence. Overall, this is a strong paper that advances the state-of-the-art in efficient spatial modeling using Gaussian processes and seems like a good candidate for acceptance to TMLR. Therefore I recommend acceptance. ==================================================
# Quantifying Neural Network Uncertainty Under Volatility Clustering Anonymous authors Paper under double-blind review ## Abstract Time-series with complex structures pose a unique challenge to uncertainty quantification methods. Time-varying variance, such as volatility clustering as seen in financial timeseries, can lead to large mismatch between predicted uncertainty and forecast error. In this work, we propose a novel framework to deal with uncertainty quantification under the presence of volatility clustering, building and extending the recent methodological advances in uncertainty quantification for non-time-series data. To illustrate the performance of our proposed approach, we apply it to two types of datasets: a collection of non-time-series data to show the general applicability of our framework and its ability to quantify the uncertainty better than the state-of-the art methods; and to two sets of financial time-series exhibiting volatility clustering: cryptocurrencies and U.S. equities. ## 1 Introduction Neural networks for regression problems are typically trained using mean squared error (MSE) as the loss function and provide point estimates for the mean of a distribution conditional on the input (see Goodfellow et al., 2016). However, in certain applications, both the conditional mean and the uncertainty of the conditional mean are equally important. Example applications include safety critical applications such as self-driving cars (Bojarski et al., 2016; Amini et al., 2020) and in applications that involve trade-off of risk and reward, such as portfolio selection. To motivate the discussion, consider the following simple thought experiment. Suppose an investor has a model that can perfectly forecast next day's asset returns and that the investor's goal is to maximize terminal wealth. Then, on each day, the most rational decision would be to place all of the investor's wealth into the asset with the highest expected return on the next day. Next, suppose that the investor's model is a noisy estimator of future asset returns. Then, the investor may choose to diversify across multiple assets. This has led to the development of various models for optimal bet allocation that depend on some measures of risk or probability of outcome, such as Kelly criterion (Kelly, 1956; Byrnes & Barnett, 2018) and Bayesian-based portfolio optimization (Black & Litterman, 1991). Forecast uncertainty of expected return is an important input into the portfolio construction process. In this work, we are concerned with neural network uncertainty quantification for complex time-series structures. In particular, we focus on time-series that exhibit *time-varying variance*, such as time-series of asset returns. In other words, asset returns exhibit irregular bursts of high volatility that cluster in time (termed volatility clustering; Cont, 2001). We argue that this feature is an alternative form of *out-of-distribution* observation in time-series applications and is a relatively less studied area of the uncertainty quantification literature. In this work, we propose a framework for uncertainty quantification under volatility clustering. To illustrate our contributions, we apply our proposed method to cryptocurrency and U.S. equities time-series forecasting, and highlight the usefulness of our proposed method. Cryptocurrencies are an emerging class of digital assets. They are highly volatile and frequently exhibit price bubbles (Fry & Cheah, 2016; Hafner, 2018; Chen & Hafner, 2019; Núñez et al., 2019; Petukhina et al., 2021), with large volumes of high frequency data (e.g., prices in hourly intervals) freely available from major exchanges. This makes cryptocurrencies an ideal testbed for uncertainty quantification methodologies in financial applications. Given the extreme levels of volatility, we view cryptocurrencies as one of the most challenging datasets for this type of application. However, a comparison in U.S. equities is also provided which illustrates performance in conventional financial time-series. Recently, significant advances in neural network uncertainty quantification have been made (see Gawlikowski et al., 2021 for a recent survey). In particular, using a neural network to generate parameters of a conditional distribution that is assumed to have generated the data (Lakshminarayanan et al., 2017; Amini et al., 2020) offers an attractive trade-off between adequately quantifying uncertainty and avoiding the computational cost of a full Bayesian treatment1. These methods offer a familiar procedure to *maximum likelihood estimation*. In Lakshminarayanan et al. (2017) (the *Ensemble* method), the objective is to minimize the negative log-likelihood (NLL) of a Normal distribution, parameterized by µ and σ 2, which are outputted by the neural network. Parameter σ 2in this likelihood function models data uncertainty (also called statistical or aleatoric uncertainty) but is incapable of modelling the uncertainty in model parameters (also called systemic or epistemic uncertainty; Kendall & Gal, 2017). Thus, Lakshminarayanan et al. (2017) proposed to use an ensemble of networks with random initialization to approximate model uncertainty. To address this, Amini et al. (2020) (the *Evidential* method) proposed to place an evidential prior2, the Normal-Inverse-Gamma distribution (NIG), on *µ, σ*2. In Evidential, µ is assumed to be drawn from a Normal distribution with unknown mean γ and scaled variance σ 2ν −1. In this construct, model uncertainty is reflected by the uncertainty in γ, which is assumed to be a fraction of σ 2 and is drawn from an Inverse Gamma distribution. The fraction is controlled by ν, which is learnt from the data and varies according to the strength of the data. The resultant marginal likelihood is a non-standardized Student's t-distribution. This mimics a Bayesian setup and offers an intuitive interpretation of the model mechanics - due to uncertainty in the model parameters, the tails of the marginal likelihood are heavier than the a Normal distribution. This has the effect of regularizing the network and provides an avenue of modelling model uncertainty. Ensemble and Evidential require only minimal modifications to a conventional neural network architecture - requiring only new loss function (NLL of the marginal distribution) and output layer. These works demonstrated state-of-the-art performance in quantifying uncertainty in a range of benchmark datasets. However, in analyzing the marginal distribution of Evidential, we observe that ν relates ambiguously to the scale parameter of the t-distribution. We argue that this may impede first-order optimization methods, such as stochastic gradient descent (SGD), in traversing the loss surface and thus hamper neural network learning. The output layer produces hyperparameters of the marginal distribution, which are computed as a linear combination of the latent representation outputted by the last hidden layer. We consider this feature to be a weakness of these approaches as the latent representation has to provide a sufficiently rich encoding to linearly derive all hyperparameters of the distribution. Moreover, these advances are based on conventional applications of deep learning and remain largely untested in financial applications. In this work, we show that the current state-of-the-art in quantifying uncertainty of time-series exhibiting non-stationary variance can be further advanced. We combine and extend Ensemble and Evidential into a framework for quantifying forecast uncertainty of non-stationary time-series (the *Combined* method). We propose a novel parameterization of the problem through variance scaling. We argue that modelling forecast uncertainty through the introduction of a variance scaling factor provides a simpler alternative to the NIG prior in Evidential. Variance scaling is achieved through placing a Gamma prior solely on variance of the Normal distribution, rather than both the mean and variance in Evidential. This results in an unambiguous relationship between the scaling factor and the scale parameter of the resultant marginal t-distribution. We show through experimental results that our proposed method provides competitive, if not superior, forecast uncertainty quantification performance to the state-of-the-art methods. We propose a novel architecture to model parameters of the prior distribution, where parameters of the prior distribution are modelled using disjoint subnetworks. We recognize the significant potential of model averaging in improving forecast accuracy in time-series forecasting. Whilst Lakshminarayanan et al. (2017) did not use ensembling for the purpose of improving forecast accuracy. We propose to use model averaging with an evidential prior to simultaneously improve forecast accuracy and uncertainty quantification provided by our methodology. We argue that this architecture will improve uncertainty quantification performance where uncertainty has 1Such as the Bayesian neural network (Neal, 1996). 2In contrast to conventional priors in Bayesian inference where the modeller has to specify the parameters of the prior distribution, the evidential prior (e.g., NIG in Evidential) learns these hyperparameters from the data. complex relationships with the input. We use the information of squared returns to inform the neural network of time-varying variance and show that this framework vastly improves uncertainty quantification of financial time-series forecasts. We also show that this framework can benefit non-time-series uncertainty quantification problems and illustrates this on three real world datasets: the University of California Irvine Machine Learning Repository (UCI) benchmark datasets (non-time-series), previously analyzed in Hernández-Lobato & Adams (2015); Gal & Ghahramani (2016); Lakshminarayanan et al. (2017); Amini et al. (2020), and two financial time-series datasets (cryptocurrency and U.S. equities). ## 2 Preliminaries 2.1 Problem Setup Consider an investor making iterative forecasts of asset returns. At every period t ∈ {1*, . . . , T*}, an investor observes price history up to t and uses the preceding {K ∈ Z |0 *< K < t*} period returns to forecast one-step ahead returns. We define an asset's return at time t as the log difference in price rt = log pt − log pt−1 and, consistent with empirical findings in finance literature (Pesaran & Timmermann, 1995; Cont, 2001), we assume that the data generation process (DGP) is time-varying: Let ζt = (µt, σ2 t ), xt−1 = {rt−K, rt−K+1*, . . . , r*t−1} be a K-length input sequence3 using returns up to t − 1 and yt−1 = rt be forward one period return. The training dataset is comprised of Dt = {(xt −1, yt −1)|t ∈ N : t ≤ t} input-output pairs4 and our goal is to forecast yt (which corresponds to rt+1). At each t, the investor's goal is to solve the optimization problem5, $$\mathbf{\theta}_{t}=\operatorname*{arg\,min}_{\mathbf{\theta}^{*}}-\sum_{t=K}^{t-1}\log\operatorname*{p}(y_{t}|F(\mathbf{x_{t}};\mathbf{\theta}^{*})),\tag{1}$$ where F(x; θ) is a neural network with input x and parameters θ, θ =SL ℓ=1{W(ℓ), b (ℓ)} is the set of network weights and biases and, in this context, p(y|F(x; θ)) is the likelihood of observing y based on the outputs of neural network F(·; ·) and the assumed marginal distribution. In other words, the investor is concerned with recovering the parameters ˆζt = (ˆµt, σˆ 2 t ) = F(xt; θt) that are most likely to have generated the observed data. In this setup, σˆ 2 t can be interpreted as an estimate of data uncertainty and is an estimate of the contemporaneous variance of the DGP at time t. Note that this is different from forecasting variance (e.g., forward 30-day volatility), which is the average squared deviation from the mean over a desired horizon. There are two parts to this problem. The first part concerns advancing methods of uncertainty quantification across general applications. In Section 4.1, we show that our proposed approach can still benefit non-timeseries problems in spite of it being designed to deal with a series of data points indexed in time order and exhibiting volatility clustering. The second part concerns uncertainty quantification specifically for time-series that exhibit volatility clustering, and is detailed in Section 3.2. ## 2.2 Related Work As discussed in Section 1, forecast uncertainty has an important role in many applications. Such quantity is easy to obtain for a statistical model such as linear regression. However, *classical* neural networks for regression problems are typically trained using MSE and provide point estimates for the mean prediction conditional on the input. As a modeller (in our case, an investor), one is concerned with *predictive uncertainty* (Gawlikowski et al., 2021). This is the total uncertainty around the point estimate. Predictive uncertainty can 3For illustrative purposes, we have stated that the sequence only contains returns rt. However, as discussed in Section 3.2, we also include squared returns r 2 t as part of the input sequence. 4Note that at each portfolio selection period t, the training set can at most contain data up to t − 1 as we have not yet observed rt+1. 5For clarity, the case of a single asset is shown. At each t, there are N assets and the dataset is typically in a t ×N layout. It is easy to see the generalization of Equation 2 over N assets, where the average loss is calculated over (t −K − 1) × N instances. $$r_{t}\sim\mathrm{N}(\mu_{t},\sigma_{t}^{2}).$$ $$(1)$$ $$\left(2\right)$$ ). (1) be decomposed into (Hüllermeier & Waegeman, 2019; Gawlikowski et al., 2021): data uncertainty (relating to uncertainty in the data such as input noise), and model uncertainty (relating to uncertainty on model parameters). A Bayesian neural network (BNN) is a full probabilistic interpretation of neural network, by placing priors on network weights and inducing a distribution over a parametric set of functions (MacKay, 1992; Neal, 1995; Gal, 2016). Modern BNNs can be trained using Markov Chain Monte Carlo (MCMC, e.g., the MetropolisHastings algorithm; Hastings, 1970) and *Variational Inference* techniques (Jospin et al., 2022). Jospin et al. (2022) notes four advantages of using BNNs over classical neural networks, with two being relevant to uncertainty quantification. First, Bayesian methods provide a natural approach to uncertainty quantification and are better *calibrated* than classical neural networks (Mitros & Namee, 2019; Kristiadi et al., 2020; Ovadia et al., 2019; Jospin et al., 2022). Second, BNN allows distinguishing between model uncertainty and data uncertainty. However, despite their advantages, MCMC-based methods are computationally expensive (Quiroz et al., 2019). Thus, limiting the applicability of BNNs. Recent advances have focused on predicting the conditional distribution that is most likely to have generated the data and thus bridging the gap between BNNs and classical neural networks. In the Ensemble method, the data is assumed to be drawn from a Normal distribution with parameters µ and σ 2(Lakshminarayanan et al., 2017). The neural network is modified to output both µ and σ 2. In this setup, σ 2 models data uncertainty and is incapable of quantifying model uncertainty. Lakshminarayanan et al. (2017) addressed this by using an ensemble of neural networks with randomly initialized weights. Each network settles in a different local minima and produces different µ and σ 2for the same input. The variance of µ across the ensemble thus provides an estimate of model uncertainty. Addressing this shortcoming, Amini et al. (2020) (the Evidential method) proposed to place an evidential prior, the NIG distribution, on the model parameters *µ, σ*2 of the Normal data distribution: $$\begin{array}{c c}{{}}&{{\mathrm{Data:}\,y\sim\mathrm{N}(\mu,\sigma^{2})}}\\ {{}}&{{}}\\ {{\mathrm{NIG~prior:}\,\mu\sim\mathrm{N}(\gamma,\sigma^{2}\nu^{-1}),}}&{{\sigma^{2}\sim\mathrm{InvGam}(\alpha,\beta),}}\end{array}$$ $\quad(3)$ . where µ is assumed to be drawn from a Normal prior distribution with unknown mean γ and scaled variance σ 2ν −1, ν is a scaling factor for σ 2, InvGam (or IG) is the Inverse-Gamma distribution, and shape α > 1 and scale β > 0 parameterize the Inverse-Gamma distribution6. We require α > 1 to ensure the mean of the marginal distribution is finite. In this construct, model uncertainty is reflected by the uncertainty in µ, which is assumed be a fraction of σ 2 and is itself assumed to be drawn from an Inverse-Gamma distribution. This fraction is controlled by ν, which is learnt from the data and, in an abstract sense, varies according to the amount of information in the data. Parameter ν is interpreted as the number of virtual observations for the mean parameter µ. In other words, ν virtual instances of µ are assumed to have been observed in determining the prior variance of µ (Jordan, 2009; Amini et al., 2020). For the NIG prior in 3, the marginal distribution of µ after integrating out σ 2is a non-standardized Student's t-distribution (Bernardo & Smith, 2000), $$\mathrm{p}(\mu|\gamma,\nu,\alpha,\beta)=\int_{\sigma^{2}=0}^{\infty}\mathrm{p}_{\mathrm{N}}(\mu|\gamma,\sigma^{2}\nu^{-1})\mathrm{p}_{\mathrm{TG}}\left(\sigma^{2}|\alpha,\beta\right)\mathrm{d}\sigma^{2}\tag{4}$$ $$=\mathrm{St}\left(\gamma,\frac{\beta}{\nu\alpha},2\alpha\right),$$ using the fact that σ 2 ∼ InvGam(*α, β*) corresponds to σ −2 ∼ Gam(*α, β*). Hence, assigning a Gam(*α, β*) prior to precision σ −2in (Equation (3)) gives the Normal-Gamma (NG) prior and is equivalent to assigning InvGam(*α, β*) to σ 2 which gives the NIG prior. The variance of this t-distribution is β ν(α−1) . Predictions 6Time index t has been omitted for brevity and legibility. Note that variables in this section are indexed by time for each asset: {yt, rt, µt, σ2 t , γt, νt, αt, βt}. based on the NIG prior can be computed as (Amini et al., 2020), > $\mathrm{Prediction}:\mathrm{E}[\mu]=\gamma$ > $\mathrm{Data\ uncertainty}:\mathrm{E}[\sigma^2]=\frac{\beta}{\alpha-1}$ > $\mathrm{Model\ uncertainty}:\mathrm{Var}[\mu]=\frac{\beta}{\nu(\alpha-1)}$. $$\left({5}\right)$$ The marginal variance Var[µ] refers to the variance of the marginal t-distribution in (Equation (4)) for the NIG prior. We note that ν can also be interpreted as a factor that attributes uncertainty between data uncertainty ( β α−1 ) and model uncertainty ( β ν(α−1) ). If ν = 1, then total uncertainty is evenly split between model and data. Whilst not the focus of Amini et al. (2020), we note that model uncertainty can be further decomposed into uncertainties attributable to parameters µ and σ 2, Model $\mu$ uncertainty : $\mbox{Var}[\mu|\sigma^{2}]\approx\frac{\beta}{r\alpha}$ Model $\sigma^{2}$ uncertainty : $\mbox{Var}[\mu]-\mbox{Var}[\mu|\sigma^{2}]\approx\frac{\beta}{r\alpha(\alpha-1)}$, (6) where the difference between the marginal and conditional variances of µ gives the uncertainty of σ 2. Moreover, since µ|σ 2is normally distributed with Var[µ|σ 2] = σ 2/ν (from Equation (3)), and E[σ −2] = E[ 1 σ−2 ] ≈ 1 E[σ−2] = α/β (from the Gamma distribution of σ −2). This leads to Var[µ|σ 2] ≈β να . We note that this ability to finely attribute model uncertainty to µ and σ 2is a strength of the Evidential method. In this construct, the marginal distribution of y after integrating out µ and σ 2is a non-standardized Student's t-distribution (Amini et al., 2020), $$\mathrm{p}(y|\gamma,\nu,\alpha,\beta)=\int_{\sigma^{2}=0}^{\infty}\int_{\mu=-\infty}^{\infty}\mathrm{p}_{\mathrm{N}}(y|\mu,\sigma^{2})\mathrm{p}_{\mathrm{N}\mathrm{G}}(\mu,\sigma^{2}|\gamma,\nu,\alpha,\beta)\,\mathrm{d}\mu\,\mathrm{d}\sigma^{2}\tag{7}$$ $$=\mathrm{St}\left(y;\gamma,\frac{\beta(1+\nu)}{\nu\alpha},2\alpha\right).$$ $$\mathbf{\Sigma}$$ Variance of this t-distribution is β(1+ν) ν(α−1) , which corresponds to the sum of model and data uncertainty, $$\mathrm{Var}[y]={\frac{\beta}{\alpha-1}}+{\frac{\beta}{\nu(\alpha-1)}}={\frac{\beta(1+\nu)}{\nu(\alpha-1)}}.$$ $$\mathbf{\Sigma}$$ The corresponding NLL of Equation (7) is (Amini et al., 2020), $$\mathcal{L}_{\text{NIG}}(y|\zeta)=\frac{1}{2}\log\left[\frac{\pi}{\nu}\right]-\alpha\log\left[2\beta(1+\nu)\right]$$ $$\qquad+(\alpha+\frac{1}{2})\log\left[(y-\gamma)^{2}\nu+2\beta(1+\nu)\right]+\log\left[\frac{\Gamma(\alpha)}{\Gamma(\alpha+\frac{1}{2})}\right],\tag{1}$$ where ζ = {*γ, ν, α, β*} are parameters of the posterior distribution. This mimics a Bayesian setup, granting classical neural networks the ability to estimate both model and data uncertainty, and offers an intuitive interpretation of the model mechanics - due to uncertainty in the model parameters, the tails of the marginal likelihood are heavier than a Normal distribution. This has the effect of regularizing the network and provides an avenue of estimating model uncertainty. As the distribution of asset returns has heavy tails (Cont, 2001), we argue that the marginal t-distribution also provides a better fit of the data. The implementation is remarkably simple - Equation (9) replaces MSE as the loss function (for a regression problem) and the final layer of the network is replaced with a layer that simultaneously outputs four parameters of the marginal distribution. When analysing Equation (7), we note that ν appears in both the numerator and denominator of the scale parameter of the t-distribution in Equation (7) in the form of 1 + 1 ν . We argue that this functional form increases the difficulty of a neural network to find the optimal solution as increasing "evidence" (ν) may not monotonically lead to a decrease in scale of the t-distribution. Motivated by this observation, we propose a simpler formulation which we detail in Section 3.1. $$\mathbf{\Sigma}$$ ## 3 Uncertainty Quantification Under Volatility Clustering 3.1 Modelling Forecast Uncertainty Using A Scale Mixture Distribution Our formulation of the problem is motivated by two observations. First, as discussed in Section 2.2, we argue that the functional form of ν in the scale of the marginal t-distribution could contribute to the difficulty in minimizing the t-distributed NLL by a first-order optimiser, such as SGD. Second, the NIG prior provides the ability to perform granular attribution of uncertainty to various parts of the model (e.g., Equation (5) and (6)), which comes at the cost of model complexity. We sought to propose a simpler formulation of the problem than Evidential while offering the ability to quantify predictive uncertainty, which is the type of forecast uncertainty that we are most concerned about in our motivating application. To address these issues, we propose to simplify the model by formulating the problem as a scale mixture distribution7(SMD; Andrews & Mallows, 1974), $$y\sim{\rm N}(\gamma,\sigma^{2}\nu^{-1}),\quad\nu\sim{\rm Gam}(\alpha,\beta),\tag{10}$$ where ν > 0 is the scaling factor, Gam is the Gamma distribution, and α > 1 and β > 0 are the shape and scale parameters of the Gamma distribution, respectively. Our proposed formulation effectively omits the prior on µ and places a prior on ν, the scaling factor of σ 2. We argue that uncertainty of variance can be modelled through either σ 2 or ν. In here, y is assumed to be drawn from N(*γ, σ*2ν −1), with mean γ and unknown variance σ 2ν −1 where ν is a latent variable that introduces uncertainty into the variance of the assumed Normal distribution of y. Relative to Equation (7), σ 2replaces ν in the parameter set when taking the SMD approach as σ 2 has a richer interpretation - it directly indicates the scale of the conditional data distribution. Note that Equation (10) is equivalent to σ −2 ∼ Gam(*α, β*) as ν and σ −2 are indistinguishable in σ 2ν −1 but it is distinct from the NG prior as there is no Normal prior on µ in Equation (10). The marginal distribution of a Normal distribution with unknown variance (Equation (10)) is a nonstandardized t-distribution (derivation is provided in Appendix A), $$\mathrm{p}(y|\gamma,\sigma^{2},\alpha,\beta)=\int_{\nu=0}^{\infty}\mathrm{p}_{\mathrm{N}}(y|\gamma,\sigma^{2}\nu^{-1})\mathrm{p}_{G}(\nu|\alpha,\beta)\,\mathrm{d}\nu$$ $$=\mathrm{St}\left(y;\gamma,\frac{\sigma^{2}\beta}{\alpha},2\alpha\right).$$ the above property of this paper is led to the total distribution $\mathrm{d}\beta=\mathrm{F}_{\mathrm{N}}$. $$(11)$$ Analogous to Equation (7), the shape parameter of this marginal Student's t-distribution is 2α. Equation (11) is similar to Equation (4) with y replacing µ, and it can be interpreted as the Normal distribution being "stretched out" into a heavier tailed distribution due to the uncertainty in its variance. Jointly, ζ = (γ, σ2*, α, β*) are hyperparameters of the SMD distribution and are outputs of the neural network. This has the effect of regularizing the mean estimate (γ) and, similar to NIG, provides the ability to handle heavy tails of the distribution that characterize asset returns. We make two remarks. Firstly, there are essentially three free parameters in Equation (11) as σ 2β together should be treated as one. We provide a more detailed discussion on this point in Section 3.2. Secondly, SMD encapsulates several well-known distributions as special cases. According to Andrews & Mallows (1974) and Choy & Chan (2008), in our case of α ̸= β, Equation (10) gives the *Pearson Type VII* distribution which can be re-expressed as Student's t-distribution in Equation (11). If α = β, then Equation (10) is a Student's t-distribution with 2α degrees of freedom, and is Cauchy if α = β = 1. The corresponding NLL of Equation (11) (derivation is provided in Appendix A) is, $${\cal L}_{\rm SMD}(y|{\cal G})=\log\left[\frac{\Gamma(\alpha)}{\Gamma(\alpha+\frac{1}{2})}\right]+\frac{1}{2}\log[2\pi\sigma^{2}\beta]+(\alpha+\frac{1}{2})\log\left[\frac{(y-\gamma)^{2}}{2\sigma^{2}\beta}+1\right].\tag{12}$$ Then, LSMD is used in place of the marginal likelihood function in Equation (2), in which the neural network learns to output parameters in ζ. 7Time index t has been omitted for brevity and legibility. Note that variables in this section are indexed by time for each asset: {yt, γt, σ2 t , νt, αt, βt}. We use the same notations in Equation (10) as Equation (3) where the symbols have the same meaning to improve comparability. 6 In Equation (10), conditional on the scaling factor ν, the data is normal with variance given by the scale of the marginal t-distribution ( σ 2β α ). This variance gives the uncertainty of the data. Since the predictive uncertainty given by the variance of the marginal t-distribution contains both model and data uncertainties, the difference between predictive and data uncertainties gives the model uncertainty. This is illustrated in Equation (13) below: $$\text{Prediction}:\mathrm{E}[y]=\gamma$$ $$\text{Data uncertainty}:\mathrm{E}[\frac{\sigma^2}{\nu}]\approx\frac{\sigma^2\beta}{\alpha}$$ $$\text{Predictive uncertainty}:\mathrm{Var}[y]=\frac{\sigma^2\beta}{\alpha}\cdot\frac{2\alpha}{2\alpha-2}=\frac{\sigma^2\beta}{\alpha-1}$$ $$\text{Model uncertainty}:\mathrm{Var}[y]-\mathrm{E}[\frac{\sigma^2}{\nu}]=\frac{\sigma^2\beta}{\alpha-1}-\frac{\sigma^2\beta}{\alpha}=\frac{\sigma^2\beta}{\alpha(\alpha-1)}.$$ $$(13)$$ α(α−1) . (13) We argue that SMD (prior on variance through the scaling factor) offers an attractive trade-off between model complexity and granularity, occupying the middle ground between Ensemble (no prior) and Evidential (prior on both mean and variance). Recall that the result in Equation (11) can be interpreted as a Normal distribution being stretched out into a heavier tailed t-distribution when variance is unknown. Kurtosis of the t-distribution is controlled by the shape parameter (2α). In analysing Equation (11) and (13), we argue that α is analogous to "virtual observations" (ν) in NIG. Model uncertainty σ 2β α(α−1) is smaller than data uncertainty σ 2β αby a factor of 1 α−1 , when α > 2. Thus, as α increases, both model uncertainty and scale of the marginal t-distribution monotonically decrease. Importantly, model uncertainty also drops relative to data uncertainty, as the t-distribution converges to the Normal distribution on increasing α. This stands in contrast to the model with NIG prior (Equation (7)), where increasing evidence ν does not monotonically lead to a decrease in scale of the t-distribution. We argue that this eases the challenge faced by the neural network in finding the optimal solution. ## 3.2 Architecture Of The Neural Network For the main application of this work, uncertainty quantification of financial time-series forecasts, we propose a novel architecture for the modelling of distribution parameters, as illustrated in Figure 1. To predict yˆt, time-series inputs of both returns (rt−K+1*, . . . , r*t) and *log-transformed squared returns* (log[r 2 t−K+1]*, . . . ,* log[r 2 t ]) are fed into one or more long short-term memory (LSTM) layers (Hochreiter & Schmidhuber, 1997). We log-transform squared returns to reduce skewness. The LSTM layers convert each time-series into a latent representation. The latent representation is then fed into four subnetworks, where each subnetwork is comprised of one or more fully connected layers and applies non-linear transformations on the latent representation. This allows the network to model complex relationships between the parameters in ζ and the sequence. Return forecast yˆt is given by γ, while {σ 2*, α, β*} are used to compute predictive uncertainty Var[ˆy]. Note that in this setup, β is a redundant parameter as it exists as a product together with σ 2in both the marginal NLL (Equation (12)) and in all three uncertainty measures (Equation (13)). Parameters σ 2 and β indicate scales of the Normal and Gamma distributions, respectively. Together, they contribute to the scale of the marginal t-distribution. Thus, one potential simplification of the network is to combine σ 2 and β into a single σ 2β parameter and have the four subnetworks reduced to three. In other words, the network architecture illustrated in Figure 1 can be modified to output three parameters: ζ = (γ, σ2*β, α*). We have kept β to be comparable to Evidential but provide empirical results in Appendix B using the UCI dataset (to be introduced in Section 4.1) to show that the two networks are indeed equivalent. In the following, we explore the proposed design of the architecture in detail. Lakshminarayanan et al. (2017) and Amini et al. (2020) introduced the Gaussian and NormalInverseGamma layers as the final layer of a neural network. These final layers output parameters of the posterior distribution. Let a ∈ R H(I)be the input vector of the final layer with H(I) dimensions and H(O) be the dimension of the output layer. In the case of the NormalInverseGamma layer, H(O) = 4. The NormalInverseGamma layer outputs, $$\begin{array}{c}{{\zeta=\mathrm{O}(\mathbf{a};\mathbf{\theta})=\mathbf{a}^{\mathsf{T}}\cdot\mathbf{W}^{(O)}+\mathbf{b}^{(O)}}}\\ {{\gamma=\zeta_{1},\quad\nu=\zeta_{2},\quad\alpha=\zeta_{3},\quad\beta=\zeta_{4},}}\end{array}$$ γ = ζ1, ν = ζ2, α = ζ3, β = ζ4, (14) $$(14)$$ ![7_image_0.png](7_image_0.png) Figure 1: Input sequence (shaded in red) is passed into one or more LSTM layers. Output from the LSTM layers is then fed into four subnetworks of one or more fully connected layers with ReLU activation. The final layer of each subnetwork is a fully-connected layer with linear activation. Softplus is applied to σ 2, α and β to ensure positivity. The four outputs of the neural network are then used to compute yˆ and Var[ˆy] (shaded in blue). where O denotes the NormalInverseGamma output layer, {ζ1*,...,*4} are 1st, ..., 4th elements of vector ζ, W(O) ∈ R H(I)×H(O)and b (O) ∈ R H(O)are weights and bias of the output layer, respectively. Each dimension of ζ corresponds to each of *γ, ν, α* and β. Outputs of the NormalInverseGamma layer are linear transformations of a common input a (Equation (14)). We argue that this construct is too restrictive for complex applications, such as in quantifying uncertainty of financial time-series forecasts, as detailed in Section 4.2. We propose to model each of the four parameters of SMD with its own subnetwork of one or more fully connected layers. This allows for a more expressive modelling of ζ, where each parameter may have complex, non-linear relationships with the input. Additionally, we enforce constraints on σ 2 > 0, α > 1 and β > 0 by applying softplus transformation with a constant term, z ′ = log(1 + exp(z)) + c, where z ∈ {σ 2*, α, β*} and c is the minimum value of the respective parameters. The transformed values constitute the final output of the network: ζ ′ = {γ,(σ 2) ′, α′, β′}. In Section 4.2, we show that this modification vastly improves quantification of forecast uncertainty of financial time-series. For other network architectures, we argue that the same approach can be applied. In the case of a feedforward network, we recommend having at least one common hidden layer that reduces the input to a single latent representation. The latent representation is then passed to individual stacks of hidden layer(s) for specialization. We argue that the common hidden layer allows information sharing across the four parameters, while having no common hidden layer (i.e., if the input is fed into the four disjoint stacks of hidden layers directly) will prevent sharing of information across the stacks. Machine learning models are typically trained using pooled dataset of historical observations. As such, they learn the average uncertainty within the historical data. However, as noted in Section 1, asset returns exhibit time-varying volatility clustering patterns. Thus, we expect forecast uncertainty to be correlated with timevarying variance of the DGP. In other words, forecast uncertainty is high when σ 2 t of the DGP is high and the model is "surprised" by the volatility. As we will show in Section 4.2, this feature of asset returns complicates uncertainty quantification of time-series forecasts. To inform the neural network of the prevailing volatility environment, we propose to include the log of squared returns {log(r 2 t−K+1)*, . . . ,* log(r 2 t )} as part of the input matrix. This follows from the use of squared returns in volatility forecasting literature (Brownlees et al., 2011). We argue that this allows the neural network to infer the prevailing volatility environment. However, we note that there are two differences between uncertainty quantification and volatility forecasting. First, we are concerned with predictive uncertainty (the sum of data and model uncertainties), whereas volatility forecasting is concerned with data uncertainty only. Second, the most common definition of volatility in finance literature is realized volatility (e.g., sample variance of returns over the past 30 days; Ge et al., 2022), which is the mean squared deviation from the sample mean and is not linked to accuracy of the forecast. By contrast, the better the model is at predicting yt, the lower the predictive uncertainty. In the extreme case where the model can perfectly predict future returns, predictive uncertainty is nil but returns will still exhibit volatility around its sample mean. Model averaging, as a special case of ensembling, is a well studied statistical method for improving predictive power of estimators (Breiman, 1996; Goodfellow et al., 2016), and has previously been shown to improve accuracy of financial time-series forecasting (Wong et al., 2021) and sequential predictions (Raftery et al., 2010). As accuracy of both return forecast accuracy and predictive uncertainty are important in our motivating application, we propose to incorporate model averaging to simultaneously improve return forecasts and predictive uncertainties. For an ensemble of M models, we compute the ensemble forecast y˜ and predictive variance Var[˜y] as, $$\hat{y}=\frac{1}{M}\sum_{i=1}^{M}\hat{y}_{i},\quad\mathrm{Var}[\hat{y}]=\frac{1}{M}\sum_{i=1}^{M}(\hat{y}_{i}^{2}+\mathrm{Var}[\hat{y}_{i}])-\hat{y}^{2},$$ $$\left(15\right)$$ where yˆi and Var[ˆyi] are mean and predictive variance of model i, respectively. In Section 4, we show that model averaging resulted in the highest predictive performance while maintaining uncertainty quantification performance. Popular tools for modelling time-varying volatility are Autoregressive Conditional Heteroskedasticity (ARCH; Engle, 1982) and Generalized ARCH (GARCH; Bollerslev, 1986) models. GARCH, when applied to stock returns, assumes the same DGP as Equation (1). Time-varying variance σ 2 t is modelled using an ARMA model (Box et al., 1994). µt can assume a fixed value (e.g., sample mean) or modelled using time-series models such as ARMA (leading to the ARMA-GARCH formulation). In our proposed framework, squared returns are provided as inputs to LSTM in similar spirit to the autoregressive terms of squared returns in GARCH. However, our proposed framework also has few differences to ARMA-GARCH. A neural network offers greater flexibility in modelling and can automatically discover interaction effects of first- and secondmoment of returns. For example, higher volatility is negatively correlated with future asset returns (known as the *leverage effect*; Cont, 2001). By contrast, modelling of interaction effects in additive models (such as GARCH) requires explicit specification by the user. LSTM can also be interpreted as having dynamic autoregressive orders (as opposed to fixed orders in GARCH). The input and forget gates of LSTM allow the network to control the extent of long-memory depending on features of the time-series. Lastly, multi-step ahead forecasting is an iterative process for ARMA-GARCH and forecast errors may compound. LSTM is able to predict multi-step ahead directly. In Section 4.2, we apply our framework to forecast forward 1-month U.S. stock returns using daily returns. Nonetheless, we do not directly compare against ARMA-GARCH models for two reasons. First, in this work, we are mainly focused on advancing uncertainty quantification methodologies for neural networks. We argue that several of our advances can be beneficial to both timeseries and non-time-series datasets (as demonstrated in Section 4.1). Second, we lean on the plethora of literature in comparing LSTM to ARMA-variants (e.g., Siami-Namini et al., 2018) and ARCH-variants (e.g., Liu, 2019). For ease of comparison, we outline the differences of our method to Ensemble (Lakshminarayanan et al., 2017) and Evidential (Amini et al., 2020) in Table 1. ## 4 Experiments SMD parameterization, distribution parameter modelling and ensemble predictions can be applied to general applications of prediction uncertainty quantification, while modelling volatility clustering relate specifically to time-series predictions with complex structures. In this section, we detail the experiment results in three real world datasets to illustrate the benefits of our improvements. | Method | Ensemble | Evidential | Combined | |--------------|--------------------|-------------------------|--------------------------| | Prior | None | NIG | Gamma | | Ensemble | Yes | No | Yes | | Likelihood | Gaussian | Student's t | Student's t | | Output layer | Single layer µ, σ2 | Single layer γ, ν, α, β | Multi-layer γ, σ2 , α, β | Table 1: A comparison of Combined to Deep Ensemble and Deep Evidential regressions. *Output layer* refers to the structure of output layer(s) of the network that outputs the parameters of the likelihood function. ## 4.1 Benchmarking On Uci Dataset We first compare our method to Ensembles and Evidential on the UCI collection used in Hernández-Lobato & Adams (2015); Gal & Ghahramani (2016); Lakshminarayanan et al. (2017); Amini et al. (2020). The collection consists of 9 real world regression problems, each with 10–20 features and hundreds to tens of thousands of observations. We follow Lakshminarayanan et al. (2017) and Amini et al. (2020) in evaluating our method on root mean squared error (RMSE, which assesses forecast accuracy) and NLL (which assesses overall distributional fit), and compare against Ensemble and Evidential. While we do not explicitly compare inference speed, as our Combined method also uses ensembling, inference speed is expected to be comparable to Ensemble while being slower than Evidential. We use the source code provided by Amini et al. (2020), with the default topology of a single hidden layer with 50 units for both Ensemble and Evidential8. For Combined, as individual modelling of distribution parameters (Section 3.2) requires a network with two or more hidden layers, we have used a single hidden layer with 24 units, followed by 4 separate stacks of a single hidden layer with 6 units each. Thus, the total number of non-linear units is 48 (compared to 50 for Ensemble and Evidential). Note that even though the total number of units are similar across the three models, learning capacity may differ due to different topologies. Table 2 records experiment results on the UCI dataset. On RMSE, we find that both Ensemble and Combined have performed well, having the best RMSE in four datasets each. In two of the sets (*Kin8nm* and *Naval*), all three methods produced highly accurate results that are not separable to two decimal points. Turning to NLL, we observe a trend towards Combined having lower NLL than the other two methods for four sets, followed by Ensemble with three sets. Comparing Combined to Evidential, we find that Combined generally has lower RMSE (7 of 9 sets) and NLL (6 of 9 sets). Although our method is designed for uncertainty quantification of complex time-series and all 9 datasets are pooled (non-time-series) datasets, we still observe some improvements in both RMSE and NLL. Next, we present further ablation studies on the UCI dataset. Table 3 records results of *Alternative*, which utilizes ensembling and SMD parameterization but not separate modelling of hyperparameters. Alternative has the same network topology as Ensemble and Evidential (a single hidden layer with 50 units), as opposed to Combined which has two hidden layers with a total of 48 units. We observe that Ensemble has the lowest RMSE in 5 (of 9) datasets, followed by Alternative (3 of 9), while Alternative has the best NLL in 6 (of 9) datasets and Ensemble has 3 (of 9). On both metrics, Evidential has the least favourable performance. Comparing Combined in Table 2 and Alternative in Table 3, Combined has lower RMSE and NLL in 5 of 9 datasets. Thus, we conclude that separate modelling of hyperparameters provided an incremental benefit on the UCI datasets. In Table 4, we further remove model averaging. The network used is identical to Evidential but trained using the SMD parameterization (i.e., we simply change the loss function in Evidential to Equation (12)). We observe that the network trained using the SMD parameterization has lower RMSE in 6 of 9 and lower NLL in 8 out 9 datasets. We argue that the improved performance of the SMD parameterization is due to its simplicity. 8Source code for Amini et al. (2020) is available on Github: https://github.com/aamini/evidential-deep-learning Table 2: Comparing Ensemble (Lakshminarayanan et al., 2017), Evidential (Amini et al., 2020) and Combined (this work) on RMSE and NLL using the UCI benchmark datasets. Average result and standard deviation over 5 trials for each method. The best method for each dataset and metric are highlighted in bold. | | RMSE | | | NLL | | | |----------|-------------|-------------|-------------|--------------|--------------|--------------| | Dataset | Ensemble | Evidential | Combined | Ensemble | Evidential | Combined | | Boston | 2.66 ± 0.20 | 2.95 ± 0.29 | 2.89 ± 0.31 | 2.28 ± 0.05 | 2.30 ± 0.05 | 2.23 ± 0.05 | | Concrete | 5.79 ± 0.16 | 5.98 ± 0.23 | 5.40 ± 0.18 | 3.07 ± 0.02 | 3.11 ± 0.04 | 2.98 ± 0.03 | | Energy | 1.86 ± 0.04 | 1.84 ± 0.06 | 1.71 ± 0.20 | 1.36 ± 0.02 | 1.41 ± 0.04 | 1.35 ± 0.05 | | Kin8nm | 0.06 ± 0.00 | 0.06 ± 0.00 | 0.06 ± 0.00 | −1.39 ± 0.02 | −1.28 ± 0.03 | −1.35 ± 0.02 | | Naval | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | −6.10 ± 0.05 | −5.99 ± 0.09 | −5.89 ± 0.35 | | Power | 3.02 ± 0.09 | 3.02 ± 0.08 | 2.95 ± 0.08 | 2.57 ± 0.01 | 2.56 ± 0.03 | 2.53 ± 0.02 | | Protein | 3.71 ± 0.10 | 4.28 ± 0.23 | 3.67 ± 0.13 | 2.61 ± 0.03 | 2.73 ± 0.08 | 2.70 ± 0.05 | | Wine | 0.60 ± 0.03 | 0.56 ± 0.02 | 0.59 ± 0.03 | 0.94 ± 0.04 | 0.92 ± 0.04 | 1.00 ± 0.03 | | Yacht | 1.22 ± 0.22 | 1.48 ± 0.47 | 3.97 ± 1.06 | 1.06 ± 0.08 | 0.96 ± 0.19 | 1.17 ± 0.11 | | | | RMSE | NLL | | | | |----------|-------------|-------------|-------------|--------------|--------------|--------------| | Dataset | Ensemble | Evidential | Alternative | Ensemble | Evidential | Alternative | | Boston | 2.66 ± 0.20 | 2.95 ± 0.29 | 2.87 ± 0.18 | 2.28 ± 0.05 | 2.30 ± 0.05 | 2.29 ± 0.04 | | Concrete | 5.79 ± 0.16 | 5.98 ± 0.23 | 5.72 ± 0.15 | 3.07 ± 0.02 | 3.11 ± 0.04 | 3.03 ± 0.02 | | Energy | 1.86 ± 0.04 | 1.84 ± 0.06 | 1.88 ± 0.04 | 1.36 ± 0.02 | 1.41 ± 0.04 | 1.35 ± 0.03 | | Kin8nm | 0.06 ± 0.00 | 0.06 ± 0.00 | 0.06 ± 0.00 | −1.39 ± 0.02 | −1.28 ± 0.03 | −1.38 ± 0.02 | | Naval | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | −6.10 ± 0.05 | −5.99 ± 0.09 | −6.12 ± 0.06 | | Power | 3.02 ± 0.09 | 3.02 ± 0.08 | 2.97 ± 0.10 | 2.57 ± 0.01 | 2.56 ± 0.03 | 2.54 ± 0.02 | | Protein | 3.71 ± 0.10 | 4.28 ± 0.23 | 3.75 ± 0.11 | 2.61 ± 0.03 | 2.73 ± 0.08 | 2.72 ± 0.02 | | Wine | 0.60 ± 0.03 | 0.56 ± 0.02 | 0.55 ± 0.02 | 0.94 ± 0.04 | 0.92 ± 0.04 | 0.92 ± 0.02 | | Yacht | 1.22 ± 0.22 | 1.48 ± 0.47 | 1.45 ± 0.33 | 1.06 ± 0.08 | 0.96 ± 0.19 | 0.93 ± 0.09 | Table 3: Comparing Ensemble, Evidential and Alternative (without separate modelling of the four parameters of SMD) on RMSE and NLL using the UCI benchmark datasets. Average result and standard deviation over 5 trials for each method. The best method for each dataset and metric is highlighted in bold. ## 4.2 Uncertainty Quantification In Financial Time-Series Forecasting Next, we demonstrate the benefits of our method in the main applications of this work, uncertainty quantification in financial time-series forecasting. In this section, we will first describe the cryptocurrency dataset, then the U.S. equities dataset. The same neural network architectures are used in the two datasets, with hyperparameter tuned independently. Our cryptocurrency dataset consists of hourly returns downloaded from Binance over July 2018 to December 2021, for 10 of the most liquid, non-*stablecoin*9cryptocurrencies. Tickers for these cryptocurrencies are BTC, ETH, BNB, NEO, LTC, ADA, XRP, EOS, TRX and ETC, denominated in USDT10. Following (Ambachtsheer, 1974; Grinold & Kahn, 1999; Wong et al., 2021), we use mean cross-sectional correlation ( 1 T PT t=1 ρ(yt, yˆt)) as a measure of predictive accuracy, in addition to RMSE and NLL. Data from July 2018 to June 2019 are used for hyperparameter tuning, chronologically split into 70 % training and 30 % validation. Data from July 2019 to December 2021 are used for out-of-sample testing. Networks are trained every 30 days using an expanding window of data from July 2018. Each input sequence consists of 10 days of hourly returns r and squared returns log(r 2) (i.e., the input is a matrix with dimensions 240 × 2), and are 9Stablecoins are cryptocurrencies that are pegged to real world assets (e.g., U.S. Dollar). As such, they exhibit lower volatility than other non-pegged cryptocurrencies. 10*Tether* (USDT) is a stablecoin that is pegged to USD. It has the highest market capitalization amongst the USD-linked stablecoins Lipton (2021). | | RMSE | NLL | | | |----------|-------------|-------------|--------------|--------------| | Dataset | NIG | SMD | NIG | SMD | | Boston | 2.95 ± 0.29 | 2.97 ± 0.20 | 2.30 ± 0.05 | 2.31 ± 0.05 | | Concrete | 5.98 ± 0.23 | 5.78 ± 0.23 | 3.11 ± 0.04 | 3.05 ± 0.04 | | Energy | 1.84 ± 0.06 | 1.87 ± 0.16 | 1.41 ± 0.04 | 1.33 ± 0.05 | | Kin8nm | 0.06 ± 0.00 | 0.06 ± 0.00 | −1.28 ± 0.03 | −1.37 ± 0.01 | | Naval | 0.00 ± 0.00 | 0.00 ± 0.00 | −5.99 ± 0.09 | −6.27 ± 0.09 | | Power | 3.02 ± 0.08 | 2.98 ± 0.12 | 2.56 ± 0.03 | 2.53 ± 0.02 | | Protein | 4.28 ± 0.23 | 3.72 ± 0.16 | 2.73 ± 0.08 | 2.39 ± 0.05 | | Wine | 0.56 ± 0.02 | 0.56 ± 0.03 | 0.92 ± 0.04 | 0.87 ± 0.04 | | Yacht | 1.48 ± 0.47 | 1.44 ± 0.49 | 0.96 ± 0.19 | 0.91 ± 0.18 | Table 4: Comparing Normal-Inverse-Gamma and Normal-Gamma on RMSE and NLL using the UCI benchmark datasets. Average result and standard deviation over 5 trials for each method. The best method for each dataset and loss function is highlighted in **bold**. used to predict forward one hour return. This training scheme is shared across all three models. Network topology consists of LSTM layers, followed by fully connected layers with *ReLU* activation and the corresponding output layers of Ensemble and Evidential. For Combined, we use four separate subnetworks as illustrated in Figure 1. As discussed in Section 1, we consider uncertainty quantification in cryptocurrencies to be especially challenging due to their high volatility. Note that in this section, "forecast uncertainty" and "uncertainty forecast" refer to predictive uncertainty (i.e., sum of model and data uncertainties). Our U.S. equities experiment follows the same setup as cryptocurrencies. Mimicking the S&P 500 index universe, the dataset consists of daily returns downloaded from the Wharton Research Data Service over 1984 to 2020, for the 500 largest stocks11 listed on NASDAQ, NYSE and NYSE American. Data from 1984 to 1993 are used for hyperparameter tuning, while 1994 to 2020 are used for out-of-sample testing. The network is refitted every January using a rolling 10-year window. Each input sequence consists of 240 trading days of daily returns r and squared returns log(r 2), forecasting forward 20-day return and its uncertainty. At this point, it is useful to remind readers that prior literature have found both datasets to exhibit time-varying variance (e.g., Cont, 2001; Hafner, 2018), which is also visible in Figure 2. We start with the main empirical results of this work. Table 5 contains forecast results on both the cryptocurrency dataset (left) and U.S. equities (right). We observe that Combined has the highest average cross-sectional correlation (higher is better), and lowest RMSE and NLL (both lower is better) in both datasets. This indicates that Combined has higher cross-sectional predictive efficacy (as measured by correlation) and is able to better forecast uncertainty of the time-series prediction. Evidential has higher (better) correlation and lower RMSE (better) than Ensemble but higher (worse) NLL in cryptocurrency. In U.S. equities, Evidential has worse correlation, RMSE and NLL than Ensemble. Correlations in U.S. equities are materially lower for all three methods compared to the cryptocurrency dataset. We hypothesize that this is due to both the difference in forecast horizon and maturity of the U.S. stock market. | Cryptocurrency | | U.S. equities | | | | | |--------------------|---------------|-----------------|---------------|---------------|---------------|---------------| | Metric | Ensemble | Evidential | Combined | Ensemble | Evidential | Combined | | Correlation (×100) | 2.78 ± 1.09 | 3.94 ± 1.84 | 9.87 ± 3.17 | 0.40 ± 0.66 | 0.09 ± 0.93 | 1.22 ± 0.65 | | RMSE (×100) | 0.874 ± 0.022 | 0.874 ± 0.003 | 0.867 ± 0.001 | 9.426 ± 0.044 | 9.433 ± 0.033 | 9.379 ± 0.020 | | NLL | −3.74 ± 0.10 | −3.24 ± 0.02 | −4.14 ± 0.01 | −1.65 ± 0.17 | −0.82 ± 0.03 | −1.71 ± 0.01 | Table 5: **Main results**: Comparing Ensemble, Evidential and Combined on average cross-sectional correlation, RMSE and NLL for cryptocurrencies (left) and U.S. equities (right), respectively. Average result and standard deviation over 10 trials for each method. Best method for each dataset is highlighted in **bold**. 11The list of stocks is refreshed every June, keeping the same stocks until the next rebalance. ![12_image_0.png](12_image_0.png) Figure 2: Forecast uncertainties of Combined, Ensemble and Evidential applied to Bitcoin (left) and Chevron Corp (right), respectively. For Bitcoin, square root of the average over each day shown. Next, we compare predicted uncertainty and actual prediction error of the three methods to actual volatility of Bitcoin (BTC/USDT), the cryptocurrency with the highest market capitalization, and Chevron Corp., a major U.S. oil producer which have endured multiple market shocks, as illustrated in Figure 2. The true volatility of an asset (σ 2in Equation (1)) is unobservable (Ge et al., 2022). Thus, in the top row of Figure 2, we use the standard deviation of hourly returns computed over each day for Bitcoin and absolute value of monthly returns for Chevron (as forecast horizon is monthly), as proxies for σ 2. In rows 2–4, for Bitcoin, we convert hourly forecasts to daily data points by first, computing the average squared forecast error (denoted p(y − yˆ) 2), and the average predictive uncertainty (denoted pVar(ˆy)), over each day. We then plot the square-root of both quantities in the left column of rows 2–4 for the three methods in Figure 2. For Chevron, we plot the absolute error between actual monthly returns and predicted returns (denoted |y − yˆ|), and square-root of forecast uncertainty (denoted pVar(ˆy)) on the right column of Figure 2. Comparing the bottom three rows of Figure 2, which correspond to Combined, Ensemble and Evidential, respectively. We observe that Combined's predicted uncertainty of µˆ tracks actual forecast error closely. This appears to be especially true during periods of elevated volatility, which are important to investors. However, during periods of low volatility, Combined appears to slightly overestimate expected uncertainty. This is also true for Ensemble in Bitcoin, where predicted uncertainty tends to be significantly higher than actual forecast error. For Chevron, we observe multiple spikes of volatility, corresponding to the U.S. recession over 2008-09, the oil shock in 2015 and the 2020 pandemic. Notably, we observe that Combined tracks the spike in forecast error better than Ensemble and Evidential. Note that the "block-like" appearances of the uncertainty forecasts are due to periodic training (monthly for cryptocurrencies and yearly for U.S. equities) and the failure to generalize the prevailing volatility environment. During training, the optimizer can update network weights W or bias b (which is analogous to the intercept in linear models). When the network fails to generalize, it minimizes the loss function by updating the bias rather than the weights. Thus, outputting the same constant that do not vary with the input, until the network is re-trained in the following month. This produces the block-like appearances of Ensemble and Evidential, and is indicative of the network setup (e.g., no separate modelling of hyperparameters) being unsuitable to this class of problems. Lastly, Evidential underestimates forecast error during heightened volatility (e.g., March 2020 in Figure 2) and overestimates forecast error under periods of low volatility (e.g., July 2020 in Figure 2). We observe similar visual characteristics in the predicted uncertainty of other cryptocurrencies and stocks. Table 6: **Ablation studies of Combined**: In each column, we remove model averaging (*No Averaging*), separate modelling of distribution parameters (*Single Output*) and using return time-series only (*Returnsonly*) for cryptocurrencies (left) and U.S. equities (right), respectively. Average result and standard deviation over 10 trials for each method. Note that cryptocurrency returns are hourly and U.S. stock returns are monthly. | Cryptocurrency | U.S. equities | | | | | | |--------------------|-----------------|---------------|---------------|---------------|---------------|---------------| | Metric | No Averaging | Single Output | Returns-only | No Averaging | Single Output | Returns-only | | Correlation (×100) | 4.48 ± 2.80 | 8.23 ± 2.91 | 10.46 ± 2.04 | 0.92 ± 0.65 | 1.87 ± 1.06 | 1.21 ± 0.73 | | RMSE (×100) | 0.868 ± 0.001 | 0.872 ± 0.002 | 0.866 ± 0.002 | 9.392 ± 0.020 | 9.398 ± 0.029 | 9.384 ± 0.046 | | NLL | −3.35 ± 0.01 | −4.04 ± 0.02 | −3.95 ± 0.02 | −0.88 ± 0.01 | −1.63 ± 0.04 | −1.34 ± 0.04 | ![13_image_0.png](13_image_0.png) Figure 3: Predicted uncertainty Var(ˆy) of Combined, without model averaging (*Averaging*), single output layer (*Single Output*) and using returns only (*Returns-only*) for BTC/USDT and Chevron. Next, we test the effects of removing each of the following for Combined: 1) model averaging; 2) single output layer (similar to Evidential); 3) using return time-series only (i.e., no squared returns). The results are recorded in Table 6 and in Figure 3. We observe that model averaging has a large impact on cross-sectional correlation and NLL. Cross-sectional correlation is 55 % and 25 % lower for cryptocurrencies and U.S. equities, respectively. While NLL is higher by 0.8 in both cases (lower is better), indicating a worse overall fit. However, it does not appear to impede the network's ability to model time-series forecast uncertainty. Using a single output layer leads to marginally worse NLL. Correlation is lower in cryptocurrencies but marginally higher in U.S. equities. While using returns only leads to marginally higher correlation but marginally worse on NLL in cryptocurrency, and lower correlation and worse NLL in U.S. equities. From Figure 3, the block-like appearances indicate that both using single output layer and using returns only result in the network failing to closely track time-varying variance of the DGP. This suggests that both squared returns and separate modelling of distribution parameters are required to model time-varying forecast uncertainty. ## 5 Conclusions In this work, we present a method for the modelling of forecast uncertainty in presence of complex time-series structures, such as volatility clustering. Our proposed method extends the work of Lakshminarayanan et al. (2017) and Amini et al. (2020). We propose to use a SMD (which uses a Gamma prior for scale uncertainty ν) as a simpler alternative to a NIG prior (which places a Normal prior on µ and Inverse-Gamma prior on σ 2). We show clearly in the UCI benchmark dataset (Table 4) that simply swapping the NIG prior with the SMD resulted in improved forecast uncertainty quantification performance. Parameters of SMD are modelled using separate subnetworks. Together with ensembling and the use of second order of returns as inputs, we show that our proposed method can successfully model time-varying variance of the DGP. This is illustrated through the successful quantification of forecast uncertainty of two financial time-series datasets: cryptocurrency and U.S. equities. We observe that our method tends to marginally overestimate uncertainty during periods of low volatility. This opens an avenue for future research directions. We show that our proposed method can also be used to improve uncertainty quantification in non-time-series regression problems on UCI datasets. From a finance application perspective, forecast uncertainty can be used to size bets, or as advanced warning to protect the portfolio from downside risk. For example, if forecast uncertainty reaches a certain threshold, an investor could purchase portfolio insurance or liquidate positions to reduce risk. Lastly, uncertainty quantification in time-series applications is a relatively under-explored area of literature. We believe this work can lead to further advancements of uncertainty quantification in complex time-series. ## References Keith Ambachtsheer. Profit potential in an almost efficient market. *Journal of Portfolio Management*, 1(1): 84, FALL 1974. Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. In *Advances in Neural Information Processing Systems 33*, NIPS'20, pp. 14927–14937, Vancouver, BC, Canada, 2020. Curran Associates, Inc. David F. Andrews and Colin L. Mallows. Scale mixtures of normal distributions. *Journal of the Royal* Statistical Society: Series B (Methodological), 36(1):99–102, 1974. José M. Bernardo and Adrian F. M. Smith. *Bayesian theory*. John Wiley & Sons Ltd., 2000. Christopher M. Bishop. *Pattern Recognition and Machine Learning*. Springer, 2006. Fischer Black and Robert B Litterman. Asset allocation: Combining investor views with market equilibrium. Journal of Fixed Income, 1(2):7–18, 1991. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars. In *arXiv*, 2016. URL https://arxiv.org/abs/1604.07316. Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. *Journal of Econometrics*, 31(3): 307–327, 1986. ISSN 0304-4076. doi: https://doi.org/10.1016/0304-4076(86)90063-1. George E. P. Box, Gwilym M. Jenkins, and Gregory C. Reinsel. Time Series Analysis: Forecasting and Control. Prentice Hall, Englewood Cliffs, N.J., USA, 3 edition, 1994. Leo Breiman. Bagging predictors. *Machine Learning*, 24(2):123–140, 1996. ISSN 1573-0565. Christian Brownlees, Robert Engle, and Bryan Kelly. A practical guide to volatility forecasting through calm and storm. *Journal of Risk*, 14(2):3–22, 2011. Tim Byrnes and Tristan Barnett. Generalized framework for applying the kelly criterion to stock markets. *International Journal of Theoretical and Applied Finance*, 21(05):1–13, 2018. doi: 10.1142/ S0219024918500334. Cathy Yi-Hsuan Chen and Christian M. Hafner. Sentiment-induced bubbles in the cryptocurrency market. Journal of Risk and Financial Management, 12(2):1–12, 2019. ISSN 1911-8074. S.T. Boris Choy and Jennifer S.K. Chan. Scale mixtures distributions in statistical modelling. Australian & New Zealand Journal of Statistics, 50(2):135–146, 2008. ISSN 1369-1473. Rama Cont. Empirical properties of asset returns: stylized facts and statistical issues. *Quantitative Finance*, 1:223–236, 2001. Robert F. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. *Econometrica*, 50(4):987–1007, 1982. John Fry and Eng-Tuck Cheah. Negative bubbles and shocks in cryptocurrency markets. *International* Review of Financial Analysis, 47:343–352, 2016. ISSN 1057-5219. Yarin Gal. *Uncertainty in Deep Learning*. University of Cambridge, 2016. PhD thesis. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Maria-Florina Balcan and Kilian Q. Weinberger (eds.), *Proceedings of the 33rd* International Conference on Machine Learning, volume 48 of *ICML'16*, pp. 1050–1059. JMLR.org, 2016. Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseo Lee, Matthias Humt, Jianxiang Feng, Anna M. Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, M. Shahzad, Wen Yang, Richard Bamler, and Xiaoxiang Zhu. A survey of uncertainty in deep neural networks. In *arXiv*, 2021. URL https://arxiv.org/abs/2107.03342. Wenbo Ge, Pooia Lalbakhsh, Leigh Isai, Artem Lenskiy, and Hanna Suominen. Neural network–based financial volatility forecasting: A systematic review. *ACM Computing Surveys*, 55(1), jan 2022. ISSN 0360-0300. doi: 10.1145/3483596. URL https://doi.org/10.1145/3483596. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep Learning*. MIT Press, 2016. http://www. deeplearningbook.org. Richard Grinold and Ronald Kahn. *Active Portfolio Management: A Quantitative Approach for Producing* Superior Returns and Controlling Risk. McGraw-Hill Education, 1999. Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.). *Advances in Neural Information Processing Systems 30*, NIPS'17, Long Beach, California, USA, 2017. Curran Associates, Inc. Christian M. Hafner. Testing for Bubbles in Cryptocurrencies with Time-Varying Volatility. *Journal of* Financial Econometrics, 18(2):233–249, 10 2018. ISSN 1479-8409. doi: 10.1093/jjfinec/nby023. URL https://doi.org/10.1093/jjfinec/nby023. Wilfred Keith Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97–109, 1970. José Miguel Hernández-Lobato and Ryan P. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of *ICML'15*, pp. 1861–1869. JMLR.org, 2015. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, November 1997. ISSN 0899-7667. Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. In *arXiv*, 2019. URL https://arxiv.org/abs/1910.09457. Michael Jordan. The exponential family: Conjugate priors, 2009. Laurent Valentin Jospin, Hamid Laga, Farid Boussaid, Wray Buntine, and Mohammed Bennamoun. Handson bayesian neural networks - a tutorial for deep learning users. In *arXiv*, 2022. URL https://arxiv. org/abs/2007.06823. John L. Kelly. A new interpretation of information rate. *Bell System Technical Journal*, 35(4):917–926, 1956. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In Guyon et al. (2017), pp. 5580–5590. ISBN 9781510860964. Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being bayesian, even just a bit, fixes overconfidence in relu networks. In *arXiv*, 2020. URL https://arxiv.org/abs/2002.10118. Balaji Lakshminarayanan, AlexanderPritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Guyon et al. (2017), pp. 6405–6416. ISBN 9781510860964. Alexander Lipton. Cryptocurrencies change everything. *Quantitative Finance*, 21(8):1257–1262, 2021. doi: 10.1080/14697688.2021.1944490. Yang Liu. Novel volatility forecasting using deep learning–long short term memory recurrent neural networks. Expert systems with applications, 132:99–109, 2019. ISSN 0957-4174. David J. C. MacKay. A practical bayesian framework for backpropagation networks. *Neural Computation*, 4(3):448–472, 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.3.448. John Mitros and Brian Mac Namee. On the validity of bayesian neural networks for uncertainty estimation. In *arXiv*, 2019. URL https://arxiv.org/abs/1912.01530. Radford M. Neal. *Bayesian learning for neural networks*. University of Toronto, 1995. PhD thesis. Radford M. Neal. *Bayesian Learning for Neural Networks*. Springer-Verlag, Berlin, Heidelberg, 1996. ISBN 0387947248. José Antonio Núñez, Mario I. Contreras-Valdez, and Carlos A. Franco-Ruiz. Statistical analysis of bitcoin during explosive behavior periods. *PLOS ONE*, 14(3):1–22, 03 2019. doi: 10.1371/journal.pone.0213919. URL https://doi.org/10.1371/journal.pone.0213919. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32*, NIPS'19, Vancouver, BC, Canada, 2019. Curran Associates, Inc. M. Hashem Pesaran and Allan Timmermann. Predictability of stock returns: Robustness and economic significance. *Journal of Finance*, 50:1201–1228, 1995. Alla Petukhina, Simon Trimborn, Wolfgang Karl Härdle, and Hermann Elendner. Investing with cryptocurrencies - evaluating their potential for portfolio allocation strategies. *Quantitative Finance*, 21(11): 1825–1853, 2021. Matias Quiroz, Robert Kohn, Mattias Villani, and Minh-Ngoc Tran. Speeding up mcmc by efficient data subsampling. *Journal of the American Statistical Association*, 114(526):831–843, 2019. doi: 10.1080/ 01621459.2018.1448827. Adrian E. Raftery, Miroslav Kárný, and Pavel Ettler. Online prediction under model uncertainty via dynamic model averaging: Application to a cold rolling mill. *Technometrics*, 52(1):52–66, 2010. doi: 10.1198/TECH. 2009.08104. PMID: 20607102. Sima Siami-Namini, Neda Tavakoli, and Akbar Siami Namin. A comparison of arima and lstm in forecasting time series. In M. Arif Wani (ed.), Proceedings of the 17th IEEE International Conference on Machine Learning and Applications, ICMLA, pp. 1394–1401, Orlando, FL, USA, 2018. IEEE. Steven Y. K. Wong, Jennifer Chan, Lamiae Azizi, and Richard Y. D. Xu. Supervised temporal autoencoder for stock return time-series forecasting. In Wing Kwong Chan, Bill Claycomb, and Hiroki Takakura (eds.), Proceedings of the IEEE 45th Annual Computer Software and Applications Conference, COMPSAC 2021, Madrid, Spain, 2021. IEEE. ## A Marginal Distribution Of A Scale Mixture From Equation (10), we have N(y|γ, σ 2 λ p(y|γ, σ2, α, β) = Z ∞ λ pN(y|γ, σ2λ −1)pG(λ|α, β) dλ = Z ∞ λ "rλ 2πσ2 exp − λ(y − γ) 2 2σ 2 # β α Γ(α) λ α−1exp−βλdλ Γ(α) √2πσ2 Z ∞ λ=0 λ α− 1 2 exp − λ(y − γ) 2 2σ 2− βλdλ =β α Γ(α) √2πσ2 (y − γ) 2 2σ 2+ β −(α+ 1 2 ) Z ∞ λ=0 λ (y − γ) 2 2σ 2+ β α− 1 2 =β α exp −λ (y − γ) 2 2σ 2+ β d λ (y − γ) 2 2σ 2+ β , α−1exp(−x) dx = Γ(α), ) Gam(λ|*α, β*). Marginalizing over λ produces the data likelihood, since R ∞ 0x $$\begin{array}{r l}{{}}&{{}={\frac{\beta^{\alpha}}{\sqrt{2\pi\sigma^{2}}}}{\frac{\Gamma(\alpha+{\frac{1}{2}})}{\Gamma(\alpha)}}\left[{\frac{(y-\gamma)^{2}}{2\sigma^{2}}}+\beta\right]^{-(\alpha+{\frac{1}{2}})}}\\ {{}}&{{}}\\ {{}}&{{}={\frac{1}{\beta}})^{-\alpha}=({\frac{1}{\beta}})^{-(\alpha+{\frac{1}{2}})+{\frac{1}{2}}},}\end{array}$$ and re-arranging β α = ( 1β $$=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)}\frac{1}{\sqrt{2\pi\sigma^{2}\beta}}\left[\frac{(y-\gamma)^{2}}{2\sigma^{2}\beta}+1\right]^{-(\alpha+\frac{1}{2})}$$ $$\mathrm{p}(y|\gamma,\sigma^{2},\alpha,\beta)=\mathrm{St}\left(y;\gamma,\frac{\sigma^{2}\beta}{\alpha},2\alpha\right).$$ that the last term of Eq. (10) is a constant with the . (16) To show that the last step of Equation (16) is true, we start with the probability density function of the t-distribution parameterised in terms of precision St(y|γ, b−1, a) (Bishop, 2006), $$\mathrm{St}(y|\gamma,b^{-1},a)=\frac{\Gamma(\frac{a+1}{2})}{\Gamma(\frac{a}{2})}\left[\frac{b}{\pi a}\right]^{\frac{1}{2}}\left[1+\frac{b(y-\gamma)^{2}}{a}\right]^{-(\frac{a+1}{2})},$$ where γ is location, b is inverse of scale and a is shape12. Substituting in b −1 = σ 2β αand a = 2α, $$\mathrm{St}\left(y|\gamma,\frac{\sigma^{2}\beta}{\alpha},2\alpha\right)=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)}\left[\frac{(\frac{\alpha}{\sigma^{2}\beta})}{2\pi\alpha}\right]^{\frac{1}{2}}\left[1+\frac{\alpha(y-\gamma)^{2}}{2\sigma^{2}\alpha\beta}\right]^{-(\alpha+\frac{1}{2})}$$ $$=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)}\frac{1}{\sqrt{2\pi\sigma^{2}\beta}}\left[\frac{(y-\gamma)^{2}}{2\sigma^{2}\beta}+1\right]^{-(\alpha+\frac{1}{2})}.$$ From Equation (16), the NLL of the marginal t-distribution is, $$\mathrm{p}(y|\gamma,\sigma^{2},\alpha,\beta)=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)}\frac{1}{\sqrt{2\pi\sigma^{2}\beta}}\left[\frac{(y-\gamma)^{2}}{2\sigma^{2}\beta}+1\right]^{-(\alpha+\frac{1}{2})}$$ $$-\log[\mathrm{p}(y|\gamma,\sigma^{2},\alpha,\beta)]=\log\left[\frac{\Gamma(\alpha)}{\Gamma(\alpha+\frac{1}{2})}+\frac{1}{2}\log[2\pi\sigma^{2}\beta]+(\alpha+\frac{1}{2})\log\left[\frac{(y-\gamma)^{2}}{2\sigma^{2}\beta}+1\right].$$ and activation vector a used in the rest of this thesis. ## B Further Analysis Of Parameters In A Scale Mixture In the network architecture proposed in Section 3, output of the network is ζ = (γ, σ2*, α, β*), which parameterises the SMD (Equation (10)). However, as noted in Section 3, β exists as a product with σ 2in both the marginal t-distributed NLL and in the three uncertainty measures (Equation (13)). Thus, an alternative specification of the network is to output ζ = (γ, σ2*β, α*) (i.e., three parameters instead of four and are computed through three subnetworks, instead of four in Figure 1). We label this network S2B. In Table 7, we compare Combined (4 parameters) with S2B (3 parameters) using the UCI dataset (as introduced in Section 4.1). We observe that S2B is better than Combined on 4 (of 9) datasets on RMSE, while Combined is better than S2B on 1 (of 9). Four datasets (Boston, Kin8nm, Naval and Power) are inseparable at 2 decimal places on RMSE. On NLL, S2B is better than Combined on 7 (of 9) datasets, while Combined is better than S2B on 2 (of 9). Even though S2B has a higher number of datasets with lower RMSE and NLL, we note that the differences are very small and are within margin of error (due to randomness in neural network training). Thus, we conclude that the two methods provide near identical results but note that S2B is simpler and more interpretable. However, we choose Combined with four subnetworks to conduct our analysis so that parameters can also be compared with those from Evidential. Table 7: Comparing S2B (3 parameters) to Combined (4 parameters) on RMSE and NLL using the UCI benchmark datasets. Results are averaged over 5 trials and the best method for each dataset and metric are highlighted in **bold**. | | RMSE | NLL | | | |----------|-------------|-------------|--------------|--------------| | Dataset | S2B | Combined | S2B | Combined | | Boston | 2.89 ± 0.23 | 2.89 ± 0.31 | 2.21 ± 0.04 | 2.23 ± 0.05 | | Concrete | 5.48 ± 0.20 | 5.40 ± 0.18 | 2.97 ± 0.03 | 2.98 ± 0.03 | | Energy | 1.43 ± 0.10 | 1.71 ± 0.20 | 1.27 ± 0.04 | 1.35 ± 0.05 | | Kin8nm | 0.06 ± 0.00 | 0.06 ± 0.00 | −1.36 ± 0.02 | −1.35 ± 0.02 | | Naval | 0.00 ± 0.00 | 0.00 ± 0.00 | −6.00 ± 0.08 | −5.89 ± 0.35 | | Power | 2.95 ± 0.09 | 2.95 ± 0.08 | 2.53 ± 0.03 | 2.53 ± 0.02 | | Protein | 3.54 ± 0.13 | 3.67 ± 0.13 | 2.74 ± 0.04 | 2.70 ± 0.05 | | Wine | 0.58 ± 0.03 | 0.59 ± 0.03 | 0.99 ± 0.03 | 1.00 ± 0.03 | | Yacht | 2.38 ± 0.44 | 3.97 ± 1.06 | 1.03 ± 0.07 | 1.17 ± 0.11 |
# Foundational Challenges In Assuring Alignment And Safety Of Large Language Models Anonymous authors Paper under double-blind review ## Abstract This work identifies 18 *foundational* challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three different categories: scientific understanding of LLMs, *development and deployment methods*, and sociotechnical challenges. Based on the identified challenges, we pose 200+ concrete research questions. | Contents 1 Introduction | 6 | | | |---------------------------|----------------------------------------------------------------------------|----|----| | 1.1 | Why This Agenda? | 6 | | | 1.2 | Terminology | 6 | | | 1.3 | Structure | | 7 | | 2 | Scientific Understanding of LLMs | 8 | | | 2.1 | In-Context Learning (ICL) Is Black-Box | | 10 | | 2.1.1 | Is ICL Sophisticated Pattern-Matching? | 10 | | | 2.1.2 | Is ICL Due to Mesa-Optimization? | 11 | | | 2.1.3 | What Behaviours Can Be Specified In-Context? | 12 | | | 2.1.4 | Scenario-Based Mechanistic Understanding of ICL | | 12 | | 2.1.5 | Understanding the Effect of the Pre-training Data Distribution on ICL | 13 | | | 2.1.6 | Understanding the Effect of Design Choices on ICL | 13 | | | 2.2 | Capabilities Are Difficult to Estimate and Understand | | 14 | | 2.2.1 | LLM Capabilities May Have Different 'Shape' Than Human Capabilities | 14 | | | 2.2.2 | Lack of a Rigorous Conception of Capabilities | 15 | | | 2.2.3 | Limitations of Benchmarking for Measuring Capabilities and Assuring Safety | | 16 | | 2.2.4 | How Can We Efficiently Evaluate Generality of LLMs? | 17 | | | 2.2.5 | Scaffolding Is Not Sufficiently Accounted for in Current Evaluations | | 18 | | 2.3 | Effects of Scale on Capabilities Are Not Well-Characterized | 19 | | | 2.3.1 | Understanding Scaling Laws | 19 | | | 2.3.2 | Effect of Scaling on Learned Representations | | 21 | | 2.3.3 | Limits of Scaling | | 22 | | 2.3.4 | Formalizing, Forecasting, and Explaining Emergence | 22 | | | 2.3.5 | Better Methods for Discovering Task-Specific Scaling Laws | | 24 | | 2.4 | Qualitative Understanding of Reasoning Capabilities Is Lacking | | 25 | | 2.4.1 | Does Scaling Improve Reasoning Capabilities? | | 26 | | 2.4.2 | Understanding the Mechanisms Underlying Reasoning | | 26 | | 2.4.3 | Understanding Non-Deductive Reasoning Capabilities of LLMs | | 27 | | 2.4.4 | Which Aspects of Training Lead to the Acquisition of Reasoning? | 27 | | | 2.4.5 | What Are the Computational Limits of Transformers? | 28 | | | 2.5 | Agentic LLMs Pose Novel Risks | 29 | | | 2.5.1 | LLM-agents May Be Lifelong Learners | 29 | | | 2.5.2 | Natural Language Underspecifies Goals | | 29 | | 2.5.3 | Goal-Directedness Incentivizes Undesirable Behaviors | | 30 | | 2.5.4 | Difficulty of Robust Oversight and Monitoring | | 30 | | 2.5.5 | Safety Risks from Affordances Provided to LLM-agents | | 31 | | 2.6 | Multi-Agent Safety Is Not Assured by Single-Agent Safety | 32 | | | 2.6.1 | Influence of Single-Agent Training on Multi-Agent Interactions is Unclear | | 33 | | 2.6.2 | Foundationality May Cause Correlated Failures | 33 | | | 2.6.3 | Groups of LLM-Agents May Show Emergent Functionality | | 34 | | 2.6.4 | Collusion between LLM-Agents | 34 | | | 2.6.5 | Unclear Applicability of Multi-Agent RL Research to LLMs | 34 | | | 2.7 | Safety-Performance Trade-offs Are Poorly Understood | | 36 | | 2.7.1 | Designing Better Metrics to Measure Safety | 36 | | | 2.7.2 | Disentangling Safety from Performance | | 36 | | 2.7.3 | Better Characterization of Safety-Performance Trade-offs | | 37 | | 2.7.4 | How Fundamental Are Safety-Performance Trade-offs? | 37 | | | 3 | Development and Deployment Methods | 39 | | |-------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|----| | 3.1 | Pretraining Produces Misaligned Models | | 41 | | 3.1.1 | Existing Data Filtering Methods Are Insufficient | | 41 | | 3.1.2 | Lack of Dataset-Auditing Tools | 42 | | | 3.1.3 | Improving Training-Data Attribution Methods | | 42 | | 3.1.4 | Scaling Pretraining Using Human Feedback | 43 | | | 3.1.5 | Modifying Pretraining to Improve Effectiveness of Downstream Safety and Alignment Efforts | 43 | | | 3.2 | Finetuning Methods Struggle to Assure Alignment and Safety | | 44 | | 3.2.1 | How Does Finetuning Change a Pretrained Model? | 44 | | | 3.2.2 | Finetuning Misgeneralizes in Unpredictable Ways | 45 | | | 3.2.3 | Output-Based Adversarial Training May Incentivize Superficial Alignment | 45 | | | 3.2.4 | Techniques for Targeted Modification of LLM Behavior Are Underexplored | | 46 | | 3.2.5 | Removal of Unknown Undesirable Capabilities | | 47 | | 3.3 | LLM Evaluations Are Confounded and Biased | 48 | | | 3.3.1 | Prompt-Sensitivity Confounds Estimation of LLM Capabilities | | 48 | | 3.3.2 | Test-set Contamination Overestimates LLM Capabilities | 48 | | | 3.3.3 | Targeted Training Confounds Evaluation | | 49 | | 3.3.4 | Biases in LLM-Based Evaluation | | 49 | | 3.3.5 | Fallibility of Crowdsourced Human Evaluation | | 49 | | 3.3.6 | Systematic Biases in Evaluation | | 50 | | 3.3.7 | Challenges with Scalable Oversight | 50 | | | 3.4 | Tools for Interpreting or Explaining LLM Behavior Are Absent or Lack Faithfulness 52 3.4.1 Abstractions Used for Interpretability Are Often Dubious 52 3.4.2 Concept Mismatch between AI and Humans 53 3.4.3 Evaluations Often Overestimate the Reliability of Interpretability Methods 53 3.4.4 Can Interpretability Methods Maintain Validity When Used to Modify Model Behavior? 54 3.4.5 Assuming Linearity of Feature Representation 55 3.4.6 Polysemanticity and Superposition 55 3.4.7 Sensitivity of Interpretations to the Choice of Dataset 56 3.4.8 Feature Interpretation Is Hard to Scale 56 3.4.9 Circuit Discovery Is Hard to Scale 57 3.4.10 Externalized Reasoning in Natural Language May Be Misleading 57 3.4.11 Externalized Reasoning via Formal Semantics Is Not Widely Applicable 58 | | | | 3.5 | Jailbreaks and Prompt Injections Threaten Security of LLMs | 60 | | | 3.5.1 | Standardized Evaluations of Jailbreak and Prompt Injection Success | 61 | | | 3.5.2 | Efficient and Reliable White-box Attacks for LLMs Are Lacking | | 62 | | 3.5.3 | Unifying or Differentiating Jailbreak Attack Methodologies | | 62 | | 3.5.4 | Attacking LLMs via Additional Modalities and Defending Against These Attacks . . . | 63 | | | 3.5.5 | Defending the LLM as a System: Detection, Filtering, and Paraphrasing | 63 | | | 3.5.6 | Course-Correction After Accepting a Harmful Request | 63 | | | 3.5.7 | There Are No Robust Privilege Levels within the LLM Input | | 63 | | 3.6 | Vulnerability to Poisoning and Backdoors Is Poorly Understood | | 65 | | 3.6.1 | Are LLMs Vulnerable to Pretraining Data Poisoning? | | 65 | | 3.6.2 | Identifying Robustness and Vulnerabilities of Different Training Stages | 66 | | | 3.6.3 | Are Larger Models More Vulnerable to Poisoning Attacks? | | 67 | | 3.6.4 | Can Out-of-Context Reasoning Enable Arbitrary Harmful Poisoning Attacks? | 67 | | | 3.6.5 | Poisoning LLMs through Additional Modalities and Encodings | | 67 | | 3.6.6 | Detecting and Removing Backdoors | | 67 | | 4 | Sociotechnical Challenges | 69 | | | 4.1 | Values to Be Encoded within LLMs Are Not Clear | 71 | | |--------|------------------------------------------------------------------------------------------|------|----| | 4.1.1 | Justifying Value Choices for Alignment | | 71 | | 4.1.2 | Managing Conflicts between Different Values | | 72 | | 4.1.3 | 'Lotteries' May Bias the Values That We Encode | 73 | | | 4.1.4 | How Can We Robustly Evaluate Which Values an LLM Encodes? | | 74 | | 4.1.5 | Is 'Value Alignment' the Right Framework? | 74 | | | 4.2 | Dual-Use Capabilities Enable Malicious Use and Misuse of LLMs | | 75 | | 4.2.1 | Misinformation and Manipulation | | 75 | | 4.2.2 | Cybersecurity | 76 | | | 4.2.3 | Surveillance and Censorship | 77 | | | 4.2.4 | Warfare and Physical Harm | 78 | | | 4.2.5 | Hazardous Biological and Chemical Technologies | | 78 | | 4.2.6 | Domain-Specific Misuses | 79 | | | 4.2.7 | Mechanisms for Detecting and Attributing LLM Outputs Are Lacking | | 79 | | 4.3 | LLM-Systems Can Be Untrustworthy | | 80 | | 4.3.1 | Harms of Representation and Other Biases | | 80 | | 4.3.2 | Inconsistent Performance across and within Domains | 81 | | | 4.3.3 | Overreliance | | 81 | | 4.3.4 | Contextual Privacy Preservation | | 82 | | 4.4 | Socioeconomic Impacts of LLM May Be Highly Disruptive | 82 | | | 4.4.1 | Effects on the Workforce | 83 | | | 4.4.2 | Effects on Inequality | 84 | | | 4.4.3 | Economic Challenges for Education | | 85 | | 4.4.4 | Global Economic Development | | 85 | | 4.5 | LLM Governance Is Lacking | | 86 | | 4.5.1 | Lack of Scientific Understanding and Unreliability of Technical Tools Complicate Governance | 88 | | | 4.5.2 | Need for Effective, Fast-Moving Governance Institutions | 89 | | | 4.5.3 | Incentivizing Cooperation and Disincentivizing High-Risk Approaches to AI Development 89 | | | | 4.5.4 | Corporate Power May Impede Effective Governance | | 90 | | 4.5.5 | LLMs Require International Governance | 91 | | | 4.5.6 | Culpability Schemes Are Needed for LLM-Based Systems - Especially LLM-Agents . | 91 | | | 4.5.7 | Use-Based Governance May Be Insufficient | | 92 | | 4.5.8 | Deployment Governance Lacks Adequate Regulation | 93 | | | 4.5.9 | Development Governance Might Be Particularly Challenging to Codify and Enforce | . | 94 | | 4.5.10 | LLMs Pose Additional Challenges for Data Governance | | 94 | | 4.5.11 | Robustness of Compute Governance is Unclear | | 95 | | 5 | Discussion | 98 | | | 5.1 | Limitations | | 98 | | 5.2 | Prior Work | | 98 | ## Reader'S Guide Due to the length of this document (though note that the main content is only ~100 pages; the rest are references), it may not be feasible for all readers to go through this document entirely. Hence, we suggest some reading strategies and advice here to help readers make better use of this document. We recommend all readers begin this document by reading the main introduction (Section 1) to grasp the high-level context of this document. To get a quick overview, readers could browse the introductions to various categories of the challenges (i.e. Sections 2, 3 and 4) and review associated Tables 1, 3 and 4 that provide a highly abridged overview of the challenges discussed in the three categories. From there on, readers interested in a deep dive could pick any section of interest. Note that all the challenges (i.e. subsections like Section 2.1) are self-contained and thus can be read in an arbitrary order. ## Machine Learning And Nlp Researchers Technical researchers in machine learning, natural language processing, and other associated fields are the primary intended audience for this agenda. We have tried to assume as minimal background knowledge as possible beyond the general knowledge of what LLMs are, what their architecture is, and how they are trained. Hence, we expect all the technical challenges in Sections 2 and 3 to be accessible to any person with the knowledge equivalent to a first-year graduate student in machine learning or natural language processing. A large proportion of the challenges discussed in Section 4 are also technical in nature and should be equally accessible. The main intended purpose of this document is to help junior researchers, or researchers new to this area, to identify promising and actionable research directions (although of course, even seasoned experts might take inspiration from it). Such readers are encouraged to pick and choose sections that best align with their interests. The 200+ listed research questions are each meant to be roughly the size of a problem that could form the basis of a research dissertation. For each challenge and subchallenge, we provide motivation, background, and related work, before discussing directions for future research. These should provide a good starting point for researchers who are new to these particular challenges, but we do not attempt a comprehensive survey of any area. We also note that while this work is motivated by the safety and alignment of LLMs, nonetheless, many of the challenges we identify are highly interesting from the technical and scientific points of view. Thus, even those readers who are not primarily motivated by safety, but are in search of interesting problems centered on LLMs, may find this document useful. ## Sociotechnical Researchers And Other Stakeholders We focus on sociotechnical challenges in Section 4, emphasizing that all LLMs are sociotechnical systems, and their safety cannot be ensured without a deep and thoughtful consideration through this lens. The introduction of this section provides a mapping between the different challenges we discuss and the different areas of other fields that could contribute to progress on those challenges. This section only presumes highlevel familiarity with the LLMs for the most part and is aimed to be accessible to a wider audience than the rest of the agenda. ## 1 Introduction "*We can only see a short distance ahead, but we can see plenty there that needs to be done.*" - *Alan Turing* Large language models (LLMs) have emerged as one of the most powerful ways to solve open-ended problems and mark a paradigm shift within machine learning. However, assuring their safety and alignment remains an outstanding challenge that is recognized across stakeholders, including private AI laboratories (Leike et al., 2022; Anthropic, 2023a; Frontier Model Forum, 2023), national and international governmental organizations (White House, 2023; Office, 2023; Board, 2023), and the research and academic communities (Bengio et al., 2023; FAccT, 2023; CAIS, 2023; CHAI, Far.ai, and Ditchley Foundation, 2023). Indeed, assuring the safety and alignment of any deep-learning-based system is difficult (Ngo et al., 2023). However, this challenge is much more acute for LLMs due to their expansive scale (Sanh et al., 2019, Figure 1) and increasingly broad spectrum of capabilities (Bubeck et al., 2023; Morris et al., 2023). Furthermore, the rapid advances in LLM capabilities not only expand the potential applications of LLMs, but also increase their potential for societal harm (Weidinger et al., 2021; Ganguli et al., 2022; Birhane et al., 2023; Chan et al., 2023b). ## 1.1 Why This Agenda? The rapid rate of progress is especially alarming due to the absence of the requisite technical tools and deficiencies in the sociotechnical structures that may help assure that LLMs are developed and deployed safely (Bengio et al., 2023). In this work, we map out the challenges in developing the appropriate technical affordances that may help assure safety and in understanding and addressing the sociotechnical challenges that we may face in assuring *societal-scale* safety. At its heart, this work is a call to action for machine learning researchers, and researchers in the associated fields. Our extensive referencing of contemporary literature, focus on identifying promising and concrete research directions, and in-depth discussion of each challenge makes it an ideal educational resource for newcomers to the field. At the same time, we expect the plethora of challenges identified in this work to act as a source of inspiration for current practitioners in the fields of LLM alignment and safety, including those working from diverse other disciplines (e.g. social sciences, humanities, law, policy, risk analysis, philosophy, etc.). Several prior studies have compiled and discussed foundational problems in AI Safety (Amodei et al., 2016; Hendrycks et al., 2021b; Critch & Krueger, 2020; Kenton et al., 2021; Ngo et al., 2023). However, LLMs mark a paradigm shift and present many novel and unique challenges in terms of alignment, safety, and assurance that are not discussed in these works. Among these, Kenton et al. (2021) is the only work that exclusively focuses on LLMs. But it lacks broad coverage, being narrowly focused on issues from accidental misspecification of objectives. Our work builds on the aforementioned work and provides the most comprehensive and detailed treatment of challenges related to the alignment and safety of LLMs to date. We highlight 18 different *foundational* challenges in the safety and alignment of LLMs and provide an extensive discussion of each. Our identified challenges are foundational in the sense that without overcoming them, assuring safety and alignment of LLMs and their derivative systems would be highly difficult. For this work, we have further prioritized the discussion of foundational challenges that are unambiguous (i.e. not speculative), prime for research and highly relevant to harms and risks posed by current and forthcoming LLMs. Additionally, we pose 200+ concrete research questions for further investigation. Each of these is associated with a particular fundamental challenge. These research questions are fairly open-ended and are roughly meant to be the size of a graduate thesis, although many offer multiple angles of attack and could easily be studied more exhaustively. ## 1.2 Terminology The terms alignment, *safety* and *assurance* have different meanings depending on the context. We use alignment to refer to *intent alignment*, i.e. a system is aligned when it is 'trying' to behave as intended by some human actor (Christiano, 2018).1Importantly, alignment does not guarantee a system actually behaves as intended; for instance, it may fail to do so due to limited capabilities (Ngo et al., 2023). To 1Christiano (2018) further clarify that they mean this definition to apply *de dicto* not *de re*; we are ambivalent on this point. further simplify our discussion, we fix the intent to be that of the LLM developer (Gabriel, 2020; Ngo et al., 2023) (e.g. as opposed to the user). We consider a system safe to the extent it is unlikely to contribute to unplanned, undesirable harms (Leveson, 2016). This is a somewhat expansive definition, accounting not only for the technical properties of the system, but also the way in which it is (or is likely to be) deployed and used (Weidinger et al., 2023b), though it is narrow in the sense that it does not consider intentional harm, and it does not set out any criteria for what constitutes harm. Alignment can be used to increase safety, but the relationship is not straightforward: it could also be used to make a system more dangerous (if the developer intends), and it is not directly concerned with the real-world impact a system has when embedded in its deployment context, or the very important issue of (lack of) alignment between developers and other stakeholders. Finally, by assurance, we mean any way of providing evidence that a system is safe or aligned (Ashmore et al., 2021), including, but not limited to: scientific understanding of the system, behavioral evaluations, explanations of the model behavior or internal processing, and adherence to responsible development practices by the system developer (Casper et al., 2024a). ## 1.3 Structure We organize the foundational challenges into three different categories. The first category, *scientific understanding of LLMs* (Section 2), surveys the most important open questions that can help us build a better 'theory' of how LLMs function and inform development and deployment decisions. We discuss the need to develop principled solutions to conceptualizing, estimating, understanding, and predicting the capabilities of LLMs. We single out in-context learning and reasoning capabilities of LLMs as being critical to understand for assuring alignment and safety across all contexts. We highlight that risks we are facing with current LLMs may grow manifold with the introduction of LLM-agents, and that we need to pre-emptively understand these risks and work to mitigate them across both single-agent and multi-agent scenarios. Finally, we note that it may be inevitable that safety-performance trade-offs exist for LLM-based systems that we ought to understand better. The second category, *Development and Deployment Methods* (Section 3), presents the known limitations of existing techniques in assuring safety and alignment in LLMs. We identify opportunities to help improve model alignment by modifying the pretaining process to produce more aligned models, survey several limitations of finetuning in assuring alignment and safety, discuss issues underlying the 'evaluation crisis', review challenges in interpreting and explaining model behavior, and finally provide an appraisal of security challenges like jailbreaks, prompt-injections, and data poisoning. On the whole, this section pertains to researching empirical techniques that may help improve the alignment, safety, and security of LLMs. The final category, *Sociotechnical Challenges* (Section 4), focuses on challenges that require a more diverse and holistic lens to address. For example, we discuss the importance of societal-level discussions of whose values are encoded within LLMs, and how we can prevent value imposition and enable value plurality. Many LLM capabilities are dual-use; there is a need to understand what malicious misuse such capabilities might enable and how may we guard against them. There is also a need to ensure that the biases and other issues of LLM-systems are independently and continually monitored and transparently communicated, to build trustworthiness and reduce over-reliance. The proliferation of LLMs throughout society may have undesirable socioeconomic impacts (e.g. job losses, increased inequality) that ought to be investigated better and strategized against. Finally, we conclude by discussing the challenges and opportunities in the space of governance and regulation. As an addendum, in Section 5 we review some limitations of this work. Most notably, we note that while comprehensive, this agenda is not exhaustive. We follow that up with a detailed overview of the prior works broadly related to this work; including but not limited to prior agendas on AI safety and various surveys related to LLMs. ## 2 Scientific Understanding Of Llms Many safety and alignment challenges stem from our current lack of scientific understanding of LLMs. If we were to understand LLMs better, this would help us better estimate the risks posed by current, and future, LLMs and design appropriate interventions to mitigate those risks. Scientific understanding is also an essential component of assurance - especially for complex systems. Some classic examples of complex systems that heavily rely on scientific understanding for safety and assurance are bridge design, aircraft design, nuclear reactor design, spacecraft design etc. Without the scientific understanding of the underlying physics, the assurance of these systems would perhaps be improbable, if not totally impossible. LLMs show many prototypical traits of complex systems (Holtzman et al., 2023; Steinhardt, 2023; Hendrycks, 2023, Chapter 5) - the foremost among them is *emergent behaviors* (Wei et al., 2022a; Park et al., 2023a). This complex-systems like nature of LLMs means that relying on 'evaluations' alone may be insufficient for safety and assurance (c.f. Sections 2.2.3 and 3.3) and that there is a dire need to probe beyond surface-level behaviors of LLMs and to understand how those behaviors arise in the first place (Holtzman et al., 2023). Understanding LLMs is a broad and grand scientific challenge. However, in line with the general theme of this work, we focus on the most safety-relevant aspects (see Table 1 for an overview). Addressing these challenges will help inform safer LLM development or deployment practices; however, additional work would be required to translate insights here into practical recommendations. Scientific understanding can take many different, and diverse, forms (Adams et al., 2014, Table 4). Indeed, the challenges we have identified admit diverse styles of research. While some challenges are deeply theoretical in nature (e.g. What Are the Computational Limits of Transformers? or Understanding Scaling Laws), others may need a more empirical approach towards resolving them (e.g. Influence of Single-Agent Training on MultiAgent Interactions is Unclear). In general, most of the challenges that we raise are focused on developing a qualitative understanding of LLMs. Qualitative understanding often allows developing generalizations that a quantitative analysis may not permit and thus may be more robust to the rapid advancements in LLMs. These virtues of qualitative understanding were extolled almost 50 years ago by Herbert Simon and Allen Newell in their seminal 1975 Turing Award lecture, incidentally also on the topic of designing and understanding artificial intelligence systems (Newell & Simon, 1976). Table 1: An overview of challenges discussed in Section 2 (Scientific Understanding of LLMs). We stress that this overview is a highly condensed summary of the discussion contained within the section, and hence should not be considered a substitute for the complete reading of the corresponding sections. | Challenge | TL;DR | | | | | | | | | | |----------------------------------------|---------|----|-----|------|----|--------|---------------|----|-----|-----| | In-Context Learning (ICL) Is Black-Box | We | do | not | have | a | robust | understanding | of | how | and | | | why in-context learning emerges with large-scale training, what mechanisms underlie in-context learning in LLMs, to what extent in-context learning in LLMs is due to mesaoptimization, or how it relates to existing learning algorithms. | | | | | | | | | | Continued on the next page | Challenge | TL;DR | | | | | | |--------------------------------------------|---------------|------------|-----------|---------------------------------------------------------|--------------|----| | Capabilities Are Difficult to Estimate and | Correctly | estimating | and | understanding | capabilities | of | | Understand | LLMs is difficult for various reasons. Firstly, LLM capabilities appear to have a different 'shape' than human capabilities; meaning that the notions used to understand and estimate human capabilities might be ill-suited to understand LLM capabilities. Additionally, the concept of capabilities lacks a rigorous conceptualization which makes it difficult to make, and evaluate, formal claims about LLM capabilities. There also exist fundamental flaws in our evaluation methodologies that ought to be overcome if we are to better understand LLM capabilities, such as benchmarking being unable to differentiate between alignment failures and capability failures. There is also a need to improve our tooling to evaluate the generality of LLMs, and in general, develop methods to better account for scaffolding in our evaluations. | | | | | | | Effects of Scale on Capabilities Are Not | Various challenges hinder our ability to understand and predict the impact of scale on LLM capabilities. These include | | | | | | | Well-Characterized | incomplete theoretical understanding of empirical scaling laws, limited understanding of limits of scaling and how learning representations are affected by scaling, confusing discourse on 'emergent' capabilities due to lack of formalization, and the nascent nature of research into the development of better methods for discovering task-specific scaling laws. | | | | | | | Qualitative | Understanding | of | Reasoning | Our current understanding of how reasoning capabilities | | | | Capabilities Is Lacking | emerge in LLMs and are impacted by model scale is insufficient for making confident predictions about the reasoning capabilities of future LLMs. There is a need for research to understand the mechanisms underlying reasoning, develop a better understanding of the non-deductive reasoning capabilities of LLMs, and better understand the computational limits of the transformer architecture. | | | | | | | Agentic LLMs Pose Novel Risks | For reasons including increased capabilities (via enhancements like access to various affordances) and increased autonomy, LLM-agents may pose novel alignment and safety risks. The actions executed by LLM-agents may result in negative side-effects due to underspecification in naturallanguage-based instructions. Goal-directedness may cause LLM agents to exhibit undesirable behaviors such as reward hacking, deception, and power-seeking, and might make robust oversight and monitoring of LLM-agents particularly difficult. Continued on the next page | | | | | | | Challenge | TL;DR | | | | | | |------------------------------------------|------------------------------------------------------------|----|-----|---------|----|----------------------------------------------------------| | Multi-Agent | Safety | Is | Not | Assured | by | Assuring favorable outcomes in a multi-agent setting may | | Single-Agent Safety | prove challenging for several reasons. Firstly, there's a lack of comprehensive understanding of how single-agent training affects the behavior of LLM-agents in multi-agent environments. Secondly, the foundationality of LLM-agents may contribute to correlated failures. Additionally, collusion among LLM agents may result in undesirable externalities. Lastly, it is unclear to what extent prior research in multi-agent reinforcement learning may prove helpful in improving the alignment of LLM-agents in multi-agent settings, especially for resolving social dilemmas. | | | | | | | Safety-Performance Trade-offs Are Poorly | Safety-performance trade-offs are typically unavoidable in | | | | | | | Understood | the design of any engineering system; however, they are not well understood for LLM-based systems. There is a need for work to design better metrics to measure safety, to characterize safety-performance trade-offs across various contexts, and to better understand what safety-performance tradeoffs are fundamental in nature (and hence unavoidable in practice). Finally, it may be helpful to research methods for producing Pareto improvements in both safety and performance. | | | | | | ## 2.1 In-Context Learning (Icl) Is Black-Box In-context learning (ICL) is the ability of an LLM to learn to perform a novel task, or improve on an existing task, based on the information (e.g. examples, reasoning traces) provided in the prompt without any explicit updates to the model's parameters (Brown et al., 2020; Kaplan et al., 2020). ICL is a highly flexible and efficient learning paradigm - it has been used to direct LLMs to behave in the desired way (Lin et al., 2023a), jailbreak LLMs (Wei et al., 2023c; Xhonneux et al., 2024) and create highly performant LLM-agents (Wang et al., 2023c). **However, while this dynamic nature of ICL is helpful in the design of proficient** LLM-based systems, the black-box nature of ICL also makes it significantly harder to assure safety and alignment of LLMs (Wolf et al., 2023; Millière, **2023).** If we do not understand how ICL works, this makes it difficult to predict how it might alter LLMs behavior in deployment; for instance, it might enable novel dangerous capabilities or bypass safeguards (Anil et al., 2024). Thus there is a pressing need to better understand the mechanisms underlying ICL, the limits of ICL, and the safety implications associated with ICL. This section presents the issues with various theories and approaches that have been proposed to explain ICL and highlights several key research questions that need to be addressed. ## 2.1.1 Is Icl Sophisticated Pattern-Matching? There are two competing theories to explain the working mechanism of ICL in a transformer. The first is that ICL is a set of pre-learned pattern-matching heuristics closely tied to the training distribution. Under this view, several works have proposed explanations of ICL as inference over an implicitly learned topic model (Xie et al., 2022; Wang et al., 2023g; Wies et al., 2023); as task inference (Min et al., 2022; Todd et al., 2023; Hendel et al., 2023; Bigelow et al., 2023) or as learning of template circuits (during training) which are adaptively retrieved and rebound to tokens in the prompt (Swaminathan et al., 2023). However, these theories currently only provide *partial* explanations of ICL. All the aforementioned works only explain ICL that is based on demonstrations while ICL can occur from other types of feedback as well, e.g. interactive feedback (Mehrabi et al., 2023; Wang et al., 2023c). These works also do not explain multi-task in-context learning and sequential learning of tasks in-context (Zhou et al., 2022, Sections 4&5), or how in-context learning may support learning of novel tasks, such as learning to reason on OOD samples (Saparov et al., ![10_image_0.png](10_image_0.png) Figure 1: Recent large language models (LLMs) offer significantly expanded context lengths, enhancing incontext learning capabilities (Agarwal et al., 2024) while also introducing novel risks (Anil et al., 2024). 2023). Furthermore, some works, such as Zhang et al. (2023d) and Swaminathan et al. (2023), assume specific (simpler) data generation processes that differ from the true data generation process of natural language. There is a need to further refine and extend the current theories, or development of novel ones, so that we are able to adequately explain the full range of ICL behaviors, including the aforementioned ones. ## 2.1.2 Is Icl Due To Mesa-Optimization? Alternatively, ICL can be seen as a form of "learned optimization" or "mesa-optimization" (Hubinger et al., 2019; von Oswald et al., 2023), i.e., during training, the base optimizer learns to use the transformer weights to represent another (learned) optimization algorithm - as well as a (learned) objective function - effectively allowing the transformer to self-generate learning signal (based on data given in the context) and act as *blackbox* in-context learner (Kirsch et al., 2022). Emergence of mesa-optimization within a model is contingent on whether or not a model can implement a learning algorithm within its weights. Several studies provide evidence that transformers can approximate gradient-based learning algorithms for various statistical learning problems (Akyürek et al., 2022b; von Oswald et al., 2022; Garg et al., 2022; Zhang et al., 2023c; Ahn et al., 2023). Bai et al. (2023a) prove that a transformer with 2L layers can simulate L-steps of gradient descent on a two-layer feedforward neural network. Panigrahi et al. (2023) propose an extension of the transformer architecture that can internally simulate finetuning of a smaller transformer. Bai et al. (2023a) further show that transformers can implement *in-context algorithm selection*, i.e. at inference time, a single transformer can adaptively choose between different learning algorithms to solve the given task (e.g. logistic regression for a regression task or logistic classification for a classification task) based on the information given in the prompt. These studies show that *in principle*, transformers are capable of mesa-optimization in controlled experiments. However, it is not yet clear whether and when transformers trained with more complex objectives, e.g. the language modeling objective, might learn to perform mesa-optimization. Specifically, one key unknown is: for which mesa-objectives do transformers support mesa-optimization? Existing work provides overwhelming evidence that transformers can solve simple supervised learning tasks via mesa-optimization. However, there has so far been very limited investigation as to whether transformers can solve more complex learning tasks in-context via mesa-optimization (Lin et al., 2023b); research should explore tasks that better mirror the real-world structure of language modeling. Furthermore, even for otherwise well-studied learning tasks e.g. linear regression, there is disagreement in the current literature as to which learning algorithm is implemented by the transformer (von Oswald et al., 2022; Fu et al., 2023a). In order to resolve this disagreement, further research is required to disentangle various factors that may impact which learning algorithm gets implemented by the transformer (Zhong et al., 2023b). Future work could also develop more rigorous and generic methods for distinguishing mesa-optimization from other forms of ICL, and examine the practical importance of the distribution of in-context examples in determining whether mesa-optimization occurs and understanding the inductive biases of the resulting mesa-optimizer. ## 2.1.3 What Behaviours Can Be Specified In-Context? It is currently unclear what functions can, and can not be learned, in-context. More specifically, given a pre-trained model, what behaviors can it be prompted to do? If it is possible to *always* find a prompt to coax the LLM to perform any task, then that might indicate that jailbreaking (c.f. Section 3.5) is an impossible problem to solve (Millière, 2023). Fundamentally, that is a question of universal approximation in-context, i.e. whether a *fixed* model can be converted into an approximator of *arbitrary* accuracy for any (continuous) function by being prompted appropriately. While it is well-known that transformers are universal approximators when *trained* (Yun et al., 2019) and was recently shown that this is also the case for state-space models (Li et al., 2022b; Wang & Xue, 2023; Cirone et al., 2024), it is less clear what are their *in-context* approximation abilities. Wang et al. (2023i) showed that in-context universal approximation is possible with a transformer but their construction required that all possible functions are encoded in the model weights which results in unrealistically large models. However, (Petrov et al., 2024) showed that no memorization is needed and that a relatively small model can be a universal approximator in-context. This result already indicates that it is possible for a realistically-sized model to be impossible to be safeguarded (i.e. safety finetuned in a way that jailbreaking is not possible). The theory of universal approximation in-context is still rather underdeveloped and the practical safety and security implications of the above results are not yet fully clear. For instance, the above-mentioned models require very specific hand-crafted weights for the pre-trained model. It is not clear whether the universal approximation behavior can be obtained by learning via gradient descent on typical datasets and, therefore, whether it is likely to occur in real-world models or not. Furthermore, the prompt lengths required in the construction of Petrov et al. are unrealistically long. However, it might be possible to reduce this by leveraging knowledge and skills already present in the model (Petrov et al., 2023b). Therefore, studying the effect of the pre-training data on the in-context universal approximation could help understand the real-world safety and security implications of in-context learning capabilities of LLMs. All the above results also rely on the attention mechanism in the transformer architecture and thus, do not translate to other recurrent models. Thus, understanding the in-context approximation properties of other architectures is another open problem with little to no current work so far (Lee et al., 2023a). ## 2.1.4 Scenario-Based Mechanistic Understanding Of Icl Current interpretability techniques are not scalable and general enough to allow an interpretability-based general understanding of the mechanics of ICL within an LLM (c.f. Section 3.4). However, they can still be leveraged for developing scenario-based *mechanistic* understanding of ICL (Olsson et al., 2022; Reddy, 2023), i.e. identifying circuits critical for ICL within an LLM when artificial restrictions are placed on the prompt structure and the task being carried out via ICL. Such case studies can be insightful for understanding the relative roles played by different computational structures within the model, and to understand how the ICL mechanism varies across tasks. For example, Todd et al. (2023) studied ICL for extractive and abstractive NLP-based tasks, and found that a small number of attention heads, termed "function vectors", are responsible for transporting a compact representation of a task which then triggers execution of the task. Similarly, Merullo et al. (2023b) show that on the commonly-studied ICL task of inferring relations between entities (e.g. inferring the capital of a country), mid-to-late feedforward layers play a key role in identifying and surfacing relevant items contained in the context which are required for inferring the relation and completing the task. Halawi et al. (2023) study ICL on a classification task in which the demonstration data has incorrect labels, thus, clashing with the prior knowledge of the LLM. This lets Halawi et al. discover false induction heads; attention heads in late layers of the LLM that attend to and copy false information from the demonstrations. This preliminary evidence suggests that case studies on specific task types can be a useful technique to build a mechanistic understanding of ICL in LLMs and may be helpful in identifying how, and why, ICL behavior varies across tasks, and how the inductive biases of a particular architecture affects ICL. Future research may analyze ICL in larger LLMs, on diverse task types and with various prompt styles. Interpretability-based techniques could also be leveraged to explain idiosyncrasies of ICL identified in the literature, e.g. why ICL sometimes performs *worse* when given examples from test distributions (Saparov et al., 2023) or why LLMs at different scales process false information in the context differently (Wei et al., 2023b). ## 2.1.5 Understanding The Effect Of The Pre-Training Data Distribution On Icl Emergence of in-context learning is heavily modulated by the structure of the pre-training data distribution. Special properties of the task distribution, such as task diversity (Raventós et al., 2023; Kirsch et al., 2022), "burstiness" (Chan et al., 2022) or compositional structure (Hahn & Goyal, 2023) may be key factors in the emergence of ICL. However, these findings are primarily limited to relatively simple settings, and further work is required to verify their correctness in the real-world language modeling setup. Which of these criteria are actually fulfilled by large-scale text datasets? If all of these criteria are indeed met by text datasets, which criteria are actually responsible for ICL in LLMs? For some of these criteria (e.g. "burstiness"), these questions could be answered via analysis of popular datasets like *The Pile* (Gao et al., 2020), *RedPajama* (Together Computer, 2023), etc. For other criteria, such as task diversity, the analysis-based approach may not be suitable as there might not be an obvious way to track and measure such criteria in an unstructured text dataset. In such cases, an alternative strategy could be to create various synthetic text datasets in a controlled fashion and monitor the emergence of ICL in language models trained on these datasets to better understand the necessary and sufficient conditions for the emergence of ICL. ## 2.1.6 Understanding The Effect Of Design Choices On Icl How is ICL impacted by various design and training choices involved in the development of an LLM, such as model size, pretraining dataset size, pretraining compute, instruction tuning? Sensitivity of ICL to various factors, including, but not limited to the aforementioned factors, is well-known from prior literature. In particular, several studies note that ICL performance is highly sensitive to the scale of the LLM; Wei et al. (2022a); Brown et al. (2020); Kaplan et al. (2020) note that ICL is an *emergent* ability; Akyürek et al. (2022b) discover phase transitions between transformer simulating different learning algorithms based on the depth of the transformer; and Wei et al. (2023b) find that larger LLMs have stronger semantic priors (i.e. zero-shot performance is better), but also a greater propensity to allow in-context information to *override* the semantic prior. Wei et al. (2023b) further show that instruction tuning disproportionately strengthens semantic priors; hence, an instruction-tuning *reduces* the propensity for in-context information to override the semantic prior. Meanwhile, Singh et al. (2023) argue that ICL is a *transient* phenomenon and extended training can cause ICL to dissipate in favor of "in-weight" learning (Chan et al., 2022). A deep understanding of why ICL is sensitive to various design and training choices, and how these choices affect the mechanisms behind ICL is currently lacking. In particular, improving our understanding of how various training design decisions, e.g. instruction tuning or prolonged training, impact ICL may provide us with tools to modulate the strength of ICL in LLMs. This may help mitigate some of the safety risks posed by LLMs due to their strong in-context learning abilities (Wolf et al., 2023; Millière, 2023). The dynamic and flexible nature of ICL is central to the success of LLMs as it allows LLMs to proficiently improve on already known tasks as well as learn to perform novel tasks. It is likely to gain an even more prominent role as LLMs are scaled up further and become more proficient at ICL. However, the black box nature of ICL is a risk from the perspective of alignment and safety, and there is a critical need to better understand the mechanisms underlying ICL. Several "theories" have been proposed that provide plausible explanations of how ICL in LLMs might work. However, the actual mechanism(s) underlying ICL in LLMs is still not well understood. We highlight several research questions that could be instrumental in addressing the understanding of the mechanisms underlying ICL. 1. Can different theorizations of ICL as sophisticated pattern-matching or mesa-optimization be extended to explain the full range of ICL behaviors exhibited by the LLMs? ←- 2. What are the key differences and commonalities between ICL and existing learning paradigms? Prior work has mostly examined ICL from the perspective of few-shot supervised learning. However, in practice, ICL sometimes exhibits qualitatively distinct behaviors compared to supervised learning and can learn from data other than labeled examples, such as interactive feedback, explanations, or reasoning patterns. ←- ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) 3. Which learning algorithms can transformers implement in-context? While earlier studies ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) (e.g. Akyürek et al., 2022b) argue transformers implement gradient descent-based learning algorithms, more recent work (Fu et al., 2023a) indicate that transformers can implement higher-order iterative learning algorithms e.g. iterative Newton method as well. ←- 4. What are the best abstract settings for studying ICL that better mirror the real-world structure of language modeling and yet remain tractable? Current toy settings e.g. learning to solve linear regression are too simple and may lead to findings that do not transfer to real LLMs. ←- 5. To what extent, different architectures are universal approximators 'in-context'? Can we better characterize functions that can be learned in-context by models obtained in practice? How do the pre-training data and the training objectives affect the in-context universal approximation abilities of a model? ←- 6. How can interpretability-based analysis contribute to a *general* understanding of the mechanisms underlying ICL? Can this approach be used to explain various phenomena associated with ICL such as why ICL performance varies across tasks, how the inductive biases of a particular architecture affect ICL, how different prompt styles impact ICL, etc.? ←- 7. Which properties of large-scale text datasets are responsible for the emergence of ICL in LLMs trained with an autoregressive objective? ←- 8. How do different components of the pretraining pipeline (e.g. pretraining dataset construction, model size, pretraining flops, learning objective) and the finetuning pipeline (e.g. instruction following, RLHF) impact ICL? How can this understanding be leveraged to develop techniques to modulate ICL in LLMs? ←- ## 2.2 Capabilities Are Difficult To Estimate And Understand Improving - and especially *assuring* - the safety and alignment of LLMs demands an understanding of their capabilities. More capable systems possess a greater potential to cause harm under misalignment; as argued by Shevlane et al. (2023). At the same time, some capabilities - such as recognizing ambiguity and uncertainty, and understanding how to infer humans' intent - are necessary for advancing LLM safety and alignment (respectively). This makes it critical that we correctly estimate and understand the 'capabilities' of an LLM (Mitchell, 2023). However, there is currently no single well-established and agreed-upon conceptualization, definition, or operationalization of the term 'capabilities'. This, combined with various other factors, hinders rigorous accounting of risk, which should be grounded in an understanding of what *types* **of behavior a system is capable of in general,** rather than the specific behaviors a system exhibited in the particular situations in which it was tested (Kaminski, 2023). To provide high-quality assurance of LLM safety or alignment, we need an improved scientific understanding of how to infer underlying capabilities from such limited test results. ## 2.2.1 Llm Capabilities May Have Different 'Shape' Than Human Capabilities The capabilities of LLMs, and other AI models, are likely to be mechanistically and behaviorally distinct from the corresponding human capabilities - even when some summary statistics for the two may be closely matched, e.g. accuracy on some benchmark. We colloquially refer to this as the 'shape' of capabilities being different.2 Adversarial examples in computer vision are a classical example of this difference - small perturbations to images that are imperceptible to humans can have drastic effects on a model based on neural networks (Szegedy et al., 2013). Within LLMs, one obvious way this manifests is as inconsistent performance across data points on which a human's performance would be consistent. For example, GPT-4 accuracy at a counting task degrades significantly when the correct answer is a low probability number (e.g. 83) relative to the cases where the correct answer is a high probability number (e.g. 100) (McCoy et al., 2023). One other way this mismatch becomes obvious is by looking at tasks that LLMs can do with much greater efficiency than humans. For example, Gemini-1.5 can learn to translate sentences from English to a completely novel language (Kalamang) in-context given instructional material (Gemini Team, 2024). However, it may take a human several weeks to learn to perform the same translations (Tanzer et al., 2023). 2Also see related discussion on how AI models learn and use different concepts and representations than humans in Section 3.4.2. This mismatch in the 'shape' of capabilities can have adverse consequences. It makes it difficult for humans to simulate LLMs and predict their behavior (Carlini, 2023a). This may harm a user's trust in LLMs (c.f. Section 4.3) and can generally make assurance harder, as LLM behavior can change in arbitrary ways across the input space. Considering this mismatch, we also caution against the careless use of tests designed for humans to estimate LLM capabilities, as argued by Davis (2014). In general, this issue of mismatch indicates that conceptualizations used to describe human capabilities may be ill-suited to describe capabilities of LLMs (and AI models in general). However, the methodology used to identify human capabilities may still transfer or could be used as a source of inspiration as we hint at in the following section. ## 2.2.2 Lack Of A Rigorous Conception Of Capabilities Researchers often make claims about the presence, or absence, of a capability within a model based on whether or not the model is able to carry out *tasks* that supposedly require that capability. This suggests an implicit conceptualization of capabilities within the community that associates a model having a capability with the model being able to perform well on tasks of some particular type (Shevlane et al., 2023, Table 1). This is generally operationalized by collecting (large) number of samples representative of the task of interest into a benchmark. However, benchmark performance is highly dependent on which particular samples are chosen, i.e. what parts of input space are covered, and how densely they are sampled from. Indeed, depending on what samples they evaluate on, different works draw different conclusions about the capability of different LLMs, e.g. to use theory-of-mind reasoning (Ullman, 2023; Zhou et al., 2023f; Shapira et al., 2023; Kim et al., 2023). Conflicting claims about models' capabilities are typically adjudicated informally; new research may exhibit surprising failure modes to demonstrate that a capability is not robust, or explain away behavior that seems to demonstrate a general capability with evidence that a model's performance on a particular benchmark is due to peculiarities of the samples selected, e.g. the existence of shortcut features (Geirhos et al., 2020). This informal treatment of capabilities is insufficient, and there is a need for more rigorous treatment for the purposes of assurance. For instance, to enable us to make and evaluate formal claims about an LLM lacking "dangerous" capabilities (Shevlane et al., 2023). To make rigorous, scientifically sound, and *general* claims about the presence or absence of a capability within a model, it is essential to establish rigorously defined and commonly agreed upon conceptualizations of capabilities (Jain et al., 2023b). More specifically, we might be interested in claims of three different types: (a) a capability is completely *absent* from a model; (b) a capability is partially *present* within a model; and (c) a capability is robustly *present* within a model. Research is needed on how to best define, operationalize, and evaluate such claims about capabilities. We will now discuss three different ideas for how to conceptualize capabilities in a way that may support such claims or otherwise improve on current practice. Domain Conceptualization: One way of formalizing capabilities (for a predictive model) is as statements of the form f|A ≈ g, which we might read as "model f has capability g over domain A". We refer to this as the *domain conceptualization*. We conceptualize g as a function that implements a desired capability (e.g. addition) on relevant inputs (e.g. real numbers), and A as a subset of all relevant inputs (e.g. integers). This is reminiscent of benchmarking but explicitly specifies a set of inputs over which the model reliably exhibits a capability. Such claims might be established by evaluating model behavior on a finite set of samples (as in benchmarking) and applying some learning theory (Neyshabur et al., 2017a), using properties such as smoothness of f (Dziugaite & Roy, 2017; Neyshabur et al., 2017b; Arora et al., 2018), and/or methods such as certified robustness to prove that f approximates g well not only on sampled points but on unseen points as well, e.g. within some volume of input space (Cohen et al., 2019; Carlini et al., 2022). Benchmarks could also be designed in a theoretically-motivated way to support such general claims (Yu et al., 2023a; Shirali et al., 2022), Internal Computations Conceptualization: Another alternative conceptualization of the capabilities of a model is as functions implemented *within* a model, e.g. between hidden layers. We call this the 'internal computations conceptualization' of capabilities. For instance, we might identify capabilities with 'circuits', i.e. computational subgraphs of a neural network (Olah et al., 2020). Here, we might say a model possesses a capability for addition if there is any circuit that can perform addition, regardless of when or whether the neurons in this circuit are actually active. This might enable stronger assurance regarding the *potential* for an LLM to exhibit a dangerous behavior after fine-tuning or other methods of eliciting such behavior. Analysis of circuits has been used to provide conclusive evidence regarding models possessing specific capabilities (Wang et al., 2022). However, there are currently technical issues with applying this conceptualization in practice, which may limit its utility (see Section 3.4 for further details). This conceptualization could also be combined with the domain conceptualization to identify computational elements *within* an LLM that robustly implement functions over limited sets of inputs. Latent Factors Conceptualization: The aforementioned 'internal computations conceptualization' view of capabilities is inspired by neuroscience. Analogously, another view to conceptualize capabilities - which we call the 'latent factors conceptualization' - could be to take inspiration from the field of psychology, in particular, psychometrics. These fields are primarily concerned with characterizing and quantifying the fundamental processes involved in variation in human cognitive abilities. Like the 'internal computations conceptualization', the latent factor conceptualization views behavior as being produced through the application of multiple capabilities, but does not attempt to ground these capabilities in internal computations. A commonly used technique in psychometrics is factor analysis. Under this conceptualization, capabilities are the "factors", or latent variables, that explain variation in measurements *across* subjects (Carroll, 1993). This could be done by taking a population of different LLMs and extracting the factors that explain the most variance in performance across examples in a benchmark (or benchmarks), using techniques such as factor analysis (Burnell et al., 2023a) or other psychometric techniques (Wang et al., 2023h). Machine learning methods for discovering and disentangling latent factors of variation could also be explored. Intuitively, these factors of variation could be viewed as a 'basis' of capabilities, and commonalities between capabilities could help interpret model behavior in a way that is predictive of behavior on unseen examples, tasks, or domains. One drawback of such methodology is that identified 'factors' may not be amenable to natural interpretation as 'capabilities' and attempts to interpret them could induce false beliefs about the models in model developers and evaluators (Chang et al., 2009). All the above conceptualizations vary in rigor, functionality, and tractability, and their pros and cons are not well understood. It is also possible that other, better conceptualizations, exist that may be more suitable for understanding and explaining the behaviors of LLMs. An ideal conceptualization of capabilities should allow making, and verifying, mathematically rigorous claims pertaining to presence, or absence, of a certain capability within a model while being tractable and easily applicable to any arbitrary model. However, conceptual progress would be valuable even absent practical methods. On the whole, the advent of LLMs presents us with an opportunity to rethink our conceptualization of capabilities. In the short term, it is also critical that researchers clearly communicate *their* conceptualization of what a capability is when making claims related to the presence, or absence, of capabilities, to avoid the literature getting littered with seemingly contradictory results which are easily explained away due to the differences in the object being studied. ## 2.2.3 Limitations Of Benchmarking For Measuring Capabilities And Assuring Safety As discussed above, benchmarking is one of the most common forms of evaluation, especially for estimating the capabilities of a model. There are several reasons why benchmarking-based evaluations may misestimate the capabilities of an LLM. Firstly, benchmark performance cannot distinguish between the cases of a model failing to function well on a task due to (a) capability failure, i.e. model being incapable of the capability at all, and (b) intent alignment failures, i.e. model failing because of failure to understand or adopt human intent (Chen et al., 2021b, Appendix E.2). Relatedly, safety finetuning methods often work by suppressing model capabilities (Jain et al., 2023b; Wei et al., 2024). In such cases, a model may have a given capability, but it may not readily demonstrate it due to the influence of safety finetuning. In such a case, benchmark performance would indicate the absence of the capability when intuitively that is not the case. Secondly, a model could function well on two seemingly distinct benchmarks using the same underlying capabilities (Merullo et al., 2023a). This might lead us to overestimate, or 'overcount' the model's capabilities. Thirdly, benchmark performance may produce unreliable assessments of capabilities that are present within a model, but are not robust; in such cases, performance may depend heavily on the particular distribution used by a benchmark (Teney et al., 2020; Burnell et al., 2023b; Bao et al., 2023). Fourthly, because benchmarks primarily report average statistics (e.g. accuracy), they are often not very informative about an LLM's performance on *particular* test examples. The first 3 issues are all problematic mostly because they may lead to incorrect inferences about a model's behavior on test examples, the fourth issue is more about making more *informative* inferences about behavior - making predictions about behavior (e.g. performance) at the example level rather than the dataset/distribution level as is implicitly the case with benchmarks. Separately, the fact that currently the most capable models are provided as a service via an API largely limits our ability to independently audit and evaluate them for safety (La Malfa et al., 2023). There is a need to develop principled solutions to the aforementioned issues. We need to develop robust machinery that may help us distinguish between capability failures, intent-alignment failures, and intentional failures on the model's part due to the effects of safety finetuning. Among these intent-alignment failures are the most egregious to avoid. There is in general a need for elicitation protocols that reliably and consistently elicit the capabilities of interest, even in the case of intent-alignment failures. Fine-tuning and prompting often reveal behavior indicative of novel capabilities, but cannot be used to rigorously establish that an LLM does not possess a particular capability. There is also a need to improve the structure and granularity of benchmarking to better understand model capabilities. The suggestions we make in Section 2.2.2 may help address the issues mentioned above. First, the 'domain conceptualization' can limit incorrect inferences by being precise about the domain over which a capability is expected to be robust, and can make more precise inferences by accounting for which capabilities' domains a test example belongs to. Likewise, both the 'circuit conceptualization' and the 'latent factors conceptualization' could help determine which capabilities are at play when a model processes a particular example, and thus better attribute behavior to particular capabilities and draw conclusions about which capabilities are (likely to be) used on a given test input. The circuit conceptualization, being more detailed and grounded, could make it more straightforward to avoid misestimation of capabilities, e.g. by distinguishing between capabilities that are robustly, but rarely applied (e.g. due to misalignment) vs. those that are simply not robust. ## 2.2.4 How Can We Efficiently Evaluate Generality Of Llms? The majority of evaluations of LLMs are domain-specific evaluations, e.g. 'language understanding' (Hendrycks et al., 2020b), 'mathematics' (Hendrycks et al., 2021a), 'social knowledge' (Choi et al., 2023b), 'medical knowledge' (Singhal et al., 2023a), 'coding and tool manipulation' (Xu et al., 2023a). A fundamental limitation of evaluating LLMs in this way is that we may undercount LLM capabilities in domains where corresponding benchmarks are not available (Ramesh et al., 2023). It is also logistically difficult to evaluate LLMs across a large (e.g. combinatorial) number of domains or tasks. There is a need for work to consider alternative means to evaluate generality (i.e. how general-purpose the model is) (Casares et al., 2022), and generalization across domains, of LLMs. For example, procedural evaluations like Skill-Mix (Yu et al., 2023a) could be designed that may evaluate the compositional generalization skills of LLMs. Alternatively, given the fact that we now have a population of LLMs available, finding differences and similarities in their performance across domains could inform what evaluations are most informative for evaluating the generality of LLMs (Ye et al., 2023). Mechanistic investigations of LLM capabilities could aim to discover capabilities that are reused across tasks (Todd et al., 2023), or discover and explain other capabilities (like in-context learning, see Section 2.1) that may underlie general-purpose behaviors of LLMs. It may also be useful to create a taxonomy of the capabilities of LLMs, similar to how psychologists have created taxonomies of human capabilities (Fleishman et al., 1984). In fact, some taxonomies are already emerging implicitly within the literature. For example, LLM developers report across different types of benchmarks (corresponding to different domains and capabilities), e.g. coding, question answering, mathematical reasoning, common-sense reasoning, performance over long-contexts (OpenAI, 2023b; Gemini Team, 2023; Anthropic, 2023d; Jiang et al., 2024a). However, these 'taxonomies' are ad-hoc, and have little to no theoretical basis. There is a need for work to establish theoretically grounded taxonomies that may provide better organization and understanding of LLM capabilities. Furthermore, taxonomies of human capabilities often arrange capabilities in a hierarchical fashion - making the dependencies between capabilities obvious (Schneider & McGrew, 2012). Chen et al. (2024b) show that similar dependencies between capabilities (or in their terminology, skills) exist for LLMs as well, and respecting these dependencies during training (i.e. defining an appropriate curriculum over skills) helps achieve better learning outcomes. However, no work has attempted to document these dependencies at a large scale. ## 2.2.5 Scaffolding Is Not Sufficiently Accounted For In Current Evaluations Even in the simplest use cases, an LLM should be viewed as a multi-party system consisting of the LLM itself and an elicitation protocol (e.g. the prompt or finetuning process). In the extreme cases, the LLM can be a part of a much larger system having access to external memory, various types of tools and various types of learning signals (e.g. feedback from other LLMs) (Wang et al., 2023c; Park et al., 2023a). We collectively refer to these mechanisms that enhance the capabilities of an LLM as *scaffolding.* In addition to being highly capable models, LLMs are also highly efficient learners. This learning can occur via fine-tuning, supervised learning-style in-context learning, instructions, explanations, etc. As a result, by efficiently designing the elicitation protocol, a designer can significantly alter the capabilities and the behavior profile of an LLM. For example, an LLM capable of performing addition can be given the capability of performing multiplication by exhaustively explaining and demonstrating the algorithm to multiply numbers using additions within the prompt (Zhou et al., 2022). Similarly, fine-tuning on a small number of carefully chosen examples can cause the LLM to act in a highly polite way (Zhou et al., 2023b). In such cases, it is no longer clear whether the protocol revealed an existing capability or induced a novel capability within the LLM (Stechly et al., 2023). This lack of distinction can cause overestimation of capabilities by mistakenly attributing a capability to the LLM when in fact the LLM did not have the capability (Stechly et al., 2023); rather the capability may have been *quickly* learned on the go due to the efficient design of the elicitation protocol. There is a need for theoretical work to characterize and distinguish between capabilities that are present within an LLM (and are elicited), versus capabilities that are efficiently learned on the fly. Tools, techniques, and concepts from information theory may be used for this purpose (Zhu & Rudzicz, 2020). A similar attribution problem occurs when an LLM is combined with other systems, e.g. given access to external tools it can assign tasks or given feedback from environment or verifiers (Valmeekam et al., 2023). Prior work suggests that such systems can outperform an LLM acting on its own (Mialon et al., 2023; Wang et al., 2023c). In general, in deployment, LLMs are deployed as parts of *larger* system, where other components of the system may perform various jobs. In such cases, it is not clear how the capabilities being demonstrated by such systems should be attributed to the LLM vs. the other system components involved. Various concepts exist within game theory on how the utility of a system may be distributed between its components, e.g. Shapely value, core (Shoham & Leyton-Brown, 2008, Chapter 12). It is an open problem to evaluate which concept(s) are most suitable for attributing the capabilities of LLM-based systems to the capabilities of LLM and the capabilities of other components present in the system. In order to assure safety, we need a calibrated understanding of the capabilities of the models. ![17_image_3.png](17_image_3.png) However, this is currently made difficult by various challenges. Firstly, the capabilities of the model seem to have a different 'shape' compared to human capabilities; but this difference is not currently well-understood. Secondly, there is no well-established conceptualization of capabilities. Thirdly, we do not have sufficiently reliable methods to assess the generality of LLMs - i.e. how generalpurpose a given model is. Lastly, it is not clear how to account for scaffolding in LLM capabilities and distinguish between the cases in which an elicitation protocol reveals a capability already present within an LLM versus the cases in which LLM acquires the capability due to the efficient design of ![17_image_1.png](17_image_1.png) 9. How can we understand the differences in the 'shape' of capabilities of humans and other AI models? What are the implications of these differences? ←- 10. What is the right conceptualization of capabilities for LLMs? Can we formalize the three conceptualizations of capabilities presented here (domain conceptualization, internal computations conceptualization, latent factors conceptualization), and understand their relative merits and demerits? ←- 11. How can we draw reasonable general insights from behavioral evaluations? In particular, how can we prove that a given model does not have a capability of interest, without exhaustively evaluating the model for that capability on the whole input space? ←- ![17_image_0.png](17_image_0.png) ![17_image_2.png](17_image_2.png) ![17_image_4.png](17_image_4.png) ![17_image_5.png](17_image_5.png) ![17_image_6.png](17_image_6.png) ![17_image_7.png](17_image_7.png) 12. Can we develop methods - e.g. based on factor analysis or other unsupervised learning methods - to automatically discover capabilities by 'decomposing' a model's (potential) behavior into something like a 'basis' of capabilities? ←- 13. When performing an evaluation using a benchmark, how can we separate observed failures of a model into 'capabilities failure' (i.e. model failing because it truly lacks the relevant capability) from 'alignment failures' (i.e. model failing despite having the capability because it was not correctly invoked')? ←- ![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) 22. How can we precisely characterize the contribution of the LLM to behaviors demonstrated by an LLM-based system (e.g. an LLM with access to external tools)? Can we use concepts developed in game theory and other literature on multi-agent systems for this purpose? ←- ## 2.3 Effects Of Scale On Capabilities Are Not Well-Characterized Increasing the scale (parameters, compute, and data) of LLM training predictably results in an overall more performant model in accordance with well-established scaling laws (Kaplan et al., 2020). However, specific capabilities of LLMs are often highly difficult to predict (Bowman, 2023), and may show so-called 'emergent' behavior (Wei et al., 2022a). **This combination of high-level predictability and lowlevel unpredictability is a source of risk: the former enables easy progress via scaling, the** latter makes it difficult to anticipate and precisely characterize the risks associated with the development and deployment of more performant models (Ganguli et al., 2022). From a scientific standpoint, this highlights a significant gap in our knowledge of how LLMs learn and acquire capabilities. To address this gap, we will need to develop a deeper understanding of scaling laws, understand how scaling impacts learned representations, identify the limits of scaling, and work towards formalizing, forecasting, and explaining emergent behaviors in learning. ## 2.3.1 Understanding Scaling Laws Large language model training has been found to follow scaling laws for aggregate loss that are consistent across many orders of magnitude of resource scaling (Kaplan et al., 2020; OpenAI, 2023b). The factors explaining this scaling law behavior, however, remain poorly understood. As a result, it is unclear to what extent different aspects of these laws are universal and to what extent they are sensitive to different aspects of the training pipeline which might change in the future. Explanations can address both the functional form of scaling laws (e.g. power law scaling vs. exponential scaling) and the specific scaling parameters of the functional form found in particular experiments (e.g. the exponent in power law scaling). Prior work has studied the impact of scaling on learning under various theoretical setups. These works, as explained in detail later, often differ in their prediction of power law exponents. These seemingly contradictory predictions arise from differences in the scaling setup; specifically from whether the resources which are not being scaled are small or large in comparison to the scaled resource (Bahri et al., 2021). Indeed, current literature can be quite clearly demarcated based on this distinction. Borrowing the terminology from Bahri et al., the *variance-limited* regime considers the case where we are asymptotically scaling one resource (model size or data) while the other resource is fixed at some finite level. In this case, most theories predict rapid power-law scaling (or even exponential scaling, in some cases) to a saturated level of performance, based on concentration arguments. The *resolution-limited* regime is when the unscaled resource is infinite, or is much larger than the scaled resource. Here, the scaling exponent captures the effect of increasing the "resolution" of the learner (either by allowing it to use more data points or more parameters to fit increasingly fine aspects of the distribution), and is highly sensitive to properties of the data distribution. This is the regime most modern work on LLM scaling is concerned with. Theoretically, the variance-limited regime of scaling laws - in which the amount of data is scaled and model size is fixed and finite - is the best studied one. This is the typical subject of the vast literature on learning curves - see Viering & Loog (2022) for a survey. For example, classic PAC theory shows that power law data scaling with an exponent of −1 (or −1/2 in the unrealizable zero-one error setting) is optimal for every learnable task, and is achievable by empirical risk minimization (Blumer et al., 1989). Other theoretical models of (bounded-capacity) learning curves include the universal learning theory of Bousquet et al. (2021) and approaches based on statistical mechanics on non-worst-case learning curves in the 'thermodynamic limit' (Seung et al., 1992; Watkin et al., 1993; Amari, 1993; Haussler et al., 1994). However, as these aforementioned theoretical models assumed that the model size is fixed, and only data is scaled, they provide limited insights regarding scaling laws for LLMs for which model size and dataset size grow together, in which case, gentler scaling exponents of roughly −1/10 to −1/20 have been observed (Kaplan et al., 2020). However, they are still predictive of LLM scaling in the cases where the model is set to be fixed and finite. For example, for a fixed and finite model size, the joint data-parameter functional form of Kaplan et al. (2020) is asymptotically a power law with exponent −1 (approaching the loss at which models of the given size saturate). At least for now, both the data and the number of parameters used in training continue to increase over time, so the variance-limited regime, which assumes one of these resources is capped, is of limited relevance. (Though it may be relevant in scenarios in which the amount of data available for training is limited.) Instead, resolution-limited scaling - the scaling regime in which the unscaled resource is infinite or is much larger than the scaled resource - provides a more accurate picture for capabilities forecasting. The existing literature contains at least three distinct explanations and theoretical models of resolution-limited scaling. The first of these, the *manifold explanation* (Sharma & Kaplan, 2022), posits that scaling laws emerge from something akin to interpolation on the data manifold. Sharma & Kaplan (2022) empirically test this explanation by estimating the intrinsic dimension for some small datasets and showing that this is predictive of the model size scaling exponent. However, this model also predicts that per-example performance should increase monotonically, which Kaplun et al. (2022) note is not the case in practice. A related theory is the *kernel spectrum* explanation provided by Bordelon et al. (2020) and Spigler et al. (2020), who derive resolution-limited scaling in the setting of kernel regression. Specifically, they show that for kernel methods, such as learning with an infinitely wide network in the neural tangent kernel limit (Jacot et al., 2018), power-law decay in the kernel spectrum results in power law scaling in the loss. Finally, a third explanation, based on *long tails* in the data, has been introduced by Hutter (2021) and extended by Michaud et al. (2023), Dębowski (2023), and Cabannes et al. (2023). These authors construct toy models in which gentle power law scaling emerges when the data distribution has a long tail of sub-components that must be learned independently, an assumption that is especially natural in the domain of natural language data. Given the fragmented state of the literature, several fundamental questions are currently unanswered. Firstly, can the different proposed explanations for power law scaling in the resolution-limited regime be unified? Bahri et al. (2021) provide a connection between the manifold dimension explanation and the kernel spectrum explanation; perhaps it is possible for all three theories (including the long tail theory) to be subsumed by a single meta-explanation which could handle a wider range of settings. Secondly, the variance-limited versus resolution-limited dichotomy entirely ignores the scaling regime that is most important for forecasting capabilities: *compute-efficient scaling*, where data and model size are scaled jointly (Hoffmann et al., 2022). What is a good theoretical model for this setting? Is there an explanatory theory that considers all three regimes as special cases of a more general joint data-parameter scaling setting? Thirdly, what is the role of feature learning, and optimization more generally, in scaling laws? In language modeling, representations learned early in training enable more complicated aspects of the data to be learned later in training (see Abbe et al. (2021) for a synthetic case study of this behavior). Existing scaling law explanations, however, essentially treat the data as "flat", and ignore the influence of hierarchical structure on scaling behavior. In particular, we would like to encourage further work on scaling laws that account for various properties of language data, for example, burstiness (Chan et al., 2022) or fractal nature (Alabdulmohsin et al., 2024). Relatedly, all the current models of scaling laws presume that the data distribution that is being learned is held constant throughout learning. However, in various settings of interest, e.g. reinforcement learning, data distribution changes over time. However, despite this non-stationary nature of data distribution, various works have reported scaling laws for various reinforcement learning settings (Hilton et al., 2023; Tuyls et al., 2023; Team et al., 2023; Obando-Ceron et al., 2024). Thus, the development of appropriate theoretical models for scaling laws for cases in which the data distribution is considered to be non-stationary is an open research question at the moment. Finally, one fundamental question that future investigations ought to answer is to what extent learning curve exponents are fundamentally bounded by the data distribution. This can help inform the possibility of hypothetical future architectures that might scale better than contemporary transformers for language modeling. A related open question is what properties of a data distribution affect the predictability of scaling behavior on that distribution. This is particularly relevant to the discovery of task-specific specific laws (also see Section 2.3.5). On some tasks, e.g. code-generation, power law scaling across 10 orders of magnitude of compute has been observed (Hu et al., 2023c; OpenAI, 2023b, Figure 1). On the other hand, many downstream capabilities display irregular scaling curves (Srivastava et al., 2022), or non-power law scaling (Caballero et al., 2023). ## 2.3.2 Effect Of Scaling On Learned Representations Most work on scaling focuses on performance on benchmarks. There is far less work on the question of how learned *representations* change with scale. This is a particularly pertinent question for the viability of interpretability techniques that aim to understand the internal operations of LLMs like mechanistic interpretability and probing techniques (c.f. Section 3.4). The first question relates to the universality hypothesis of representations - do different neural networks trained on the same task learn the same representations (Olah et al., 2020)? The current evidence indicates that the strong version of the universality hypothesis is false, for example, Chughtai et al. (2023) show that different neural networks learn different representations in different orders even when the architecture and data order are kept the same. This is supported by evidence contained in other studies as well (McCoy et al., 2019; Wang et al., 2018). However, there is increasing evidence that *some* feature representations are indeed *universal* and learned by different neural networks trained on the same task (Li et al., 2015; Bansal et al., 2021; Gould et al., 2023). The key unknown question is whether the proportion of universal representations increases, or decreases, with scale. Vyas et al. (2023) argue that language models learn similar features across different width sizes. In the vision setting, Nguyen et al. (2020) found that when networks are sufficiently large (in terms of width or depth) in comparison to the training set size, they can be partitioned into contiguous blocks of layers with representations within each block being similar across different trained models. A related, but distinct, question is whether learned representations converge to, or diverge away, from the representations used by humans with increasing scale (Sucholutsky et al., 2023)?3In some cases, the behavior of larger LMs does appear to be more consistent with human behavior (Chiang & Lee, 2023; Park et al., 2023a; Zhu et al., 2024), however, this consistency tends to break down at the edges; for example, LLMs do not always show human-like biases in their responses (Aher et al., 2022; Tjuatja et al., 2023; Hagendorff 3Also see Section 3.4.2 et al., 2023). As such, there may not exist a clean answer to the question above. However, it might still be useful to understand if there are types of representations of concepts that we can expect LLMs of sufficient scale to share with humans. And if so, how may it help us predict the behavior of LLMs? Another open question pertains to the changes in representation structure with scale. Specifically, whether larger scale models are more or less likely to have representations that exhibit linear characteristics, such as the famous "King - Man + Woman = Queen" example (Mikolov et al., 2013).4 The idea that LLMs encode high-level concepts linearly in the representation space of the model has been termed the "linear representation hypothesis" (Park et al., 2023b). The mathematical field of *representation theory* studies how abstract algebraic structures such as groups can represented in terms of linear transforms, and might help identify and understand such limitations. Chughtai et al. (2023) present evidence that neural networks do in fact use representation theory to represent data with a group structure. Understanding how scale influences the structure of representations would provide insights into the applicability of interpretability methods for larger models, which implicitly or explicitly assumes linearity of representations. ## 2.3.3 Limits Of Scaling One framework for making sense of advances in machine learning is to decompose the progress in capabilities into performance improvement due to algorithmic innovation and the performance improvement due to scaling up the resources of training (Hernandez & Brown, 2020; Erdil & Besiroglu, 2022). Prior work has shown that simple scaling up can cause the acquisition of novel capabilities within the model (Kaplan et al., 2020; Wei et al., 2022a). This raises the question of whether there exists a *scale ceiling* i.e. capabilities that can not be acquired by a model regardless of how much it is scaled further. If so, what capabilities are these? For example, it is unclear to what extent LLMs may acquire reasoning and abstraction capabilities via scaling alone (Mitchell et al., 2023; Saparov et al., 2023). Prior work has argued that some of the most critical limitations of current LLMs are unlikely to be resolved by simple scaling. Kirk & Krueger (2022) argue that causal confusion, i.e. learning of spurious correlations present within the data may be unavoidable in a purely offline learning setup. This claim has mixed support within the literature; while Zecevic et al. (2023) argue that LLMs can not be causal, Lampinen et al. (2023) point out that LLM training data contains many examples of interventions, outcomes, and explanations that may support the learning of active causal strategies by the LLM. Kalai & Vempala (2023) and Xu et al. (2024b) both argue that 'hallucinations' in LLMs are not fixable via scaling. Several studies have argued that resistance to jailbreaking may not improve with scale (Wei et al., 2023a; Wolf et al., 2023; Millière, 2023). In general, robustness to adversarial examples may not improve with scale, as argued by Frei et al. (2023) and Debenedetti et al. (2023). One extreme sense in which scaling can be limited for LLMs is *inverse scaling*, where performance *decreases* with scale (McKenzie et al., 2023). McKenzie et al. (2023) ran a contest to solicit tasks exhibiting inverse scaling across different LLM architecture families. The results were mixed - for almost all tasks inverse scaling was later found to be reversed at the largest compute scale measured (which was for the PaLM family of models), resulting in U-shaped scaling curves Wei et al. (2022b). We can also define inverse scaling more broadly, to include increasing frequency of undesirable *behaviors* with scale, not just decreasing frequency of correct responses. Using this perspective, Perez et al. (2022b) observe inverse-scaling for sycophantic behavior, i.e. larger models have a more pronounced tendency to mimic the political views of their interlocutors. This suggests that the human feedback process used for alignment training is misaligned (see Section 3.2), which might cause other related inverse scaling problems. Still, it is currently unclear whether there exist other undesirable behaviors that undergo inverse-scaling, and there is no reliable way of determining whether such undesirable behaviors will also undergo U-shaped scaling (or not). ## 2.3.4 Formalizing, Forecasting, And Explaining Emergence Some capabilities when plotted on a scaling curve appear to emerge suddenly past some scale. A large number of capabilities, including key capabilities such as instruction following and chain-of-thought reasoning, have been argued to be emergent (Wei et al., 2022a). On the other hand, other studies have argued that some capabilities may appear emergent due to the harsh nature of the evaluation measures being used which do 4Notably, Nissim et al. (2020) found that the closest vector to King - Man + Woman is actually King; Queen is the *second* closest. not reward partial correctness (Schaeffer et al., 2023; Srivastava et al., 2022). However, none of these studies precisely define emergence, and their use does not accord with established use across other fields such as complex systems and physics. This indicates a need for greater clarity and careful formalization of emergence in the context of machine learning. In any case, so-called 'emergent' capabilities, e.g. those identified by Wei et al. (2022a), may be particularly challenging to predict and forecast (Ganguli et al., 2022), and are hence disconcerting from a risk assessment perspective (Kaminski, 2023). Developing a mechanistic understanding of factors that contribute to abrupt improvement in performance may yield critical insights regarding the predictability of 'emergent' capabilities. One such factor that has been identified so far is the compositional nature of a capability, i.e. a capability being composed of other capabilities. Arora & Goyal (2023) and Okawa et al. (2023) show that if a capability is a composition of another set of capabilities (called skills) that witness smooth and predictable scaling, the scaling curve of a capability will look emergent due to a multiplicative dependence on the ability to perform the skills underlying it - Okawa et al. (2023) call this the multiplicative emergence effect. The results of the aforementioned papers indicate that if a valid skill decomposition of a seemingly emergent capability is available and these skills follow predictable dynamics, we can accurately predict the model's progress on the capability itself. Arguably, the bottleneck here is that for several capabilities of interest, we are unlikely to have an accurate decomposition available (Barak, 2023). Further research is needed to understand how such decomposition can be achieved for a capability of interest and whether given an approximate decomposition, one can predict at what scale a model learns a compositional capability. Schaeffer et al. (2023) provide anecdotal evidence that progress measures can be designed for some of the emergent capabilities identified in the literature which smoothly track the so-called emergent capability. This is similar to the identification of continuous progress measures in the context of grokking of capabilities (Nanda et al., 2022; Barak et al., 2022), where models exhibit a sudden shift from memorization to generalization. However, we assert that the existence of soft progress measures indicates that learning dynamics are predictable (if the appropriate progress measure can be identified pre-hoc), but does not invalidate the perspective that some capabilities are emergent (Barak, 2023). It is also critical to ensure that any progress measures identified faithfully track the capability of interest, as, otherwise, there may be a measure whose dynamics are predictable, but in fact does not faithfully capture progress on learning the capability. It is not clear whether the progress measures proposed in the literature are faithful in this sense or not. Finally, we emphasize that identifying such progress measures can be highly non-trivial and current evidence is insufficient to indicate that suitable progress measures exist for all capabilities that are claimed to be emergent (Wei, 2022). Interpretability methods have previously been used to find progress measures for grokking (Nanda et al., 2022; Chughtai et al., 2023), and could be used for discovering progress measures that underlie emergent capabilities as well. It may also be helpful to study the learning dynamics in a systematic way as done in some prior work (Hu et al., 2023b; Chen et al., 2024a; Hoogland et al., 2024; Edelman et al., 2024). Hoogland et al. (2023) argue that singular learning theory (Wei et al., 2022d; Watanabe, 2024) may be particularly useful for this purpose. There is also a need for work to tighten and formalize the definition of 'emergent' capabilities. The current definition of emergent capabilities arguably points to an abrupt improvement in model performance on a scaling curve as a necessary - but not sufficient - condition for a capability to be emergent. This notion of emergent capabilities is also somewhat distinct from the notion of emergence in other fields. For example, in physics, emergence is often associated with the formal notion of phase transitions: discontinuities in some property of a system (or its derivatives) considered as a function of some parameter (e.g. scale). In physics, such discontinuities yield distinct "phases" that are governed by different physical laws (e.g. liquids vs. gases) (Huang, 2008; Grimmett, 2018). However, seeking inspiration from natural sciences and grounding emergence in "phase transitions" may be too constraining; e.g. it is unclear if in-context learning (ICL) is truly a phase transition in the technical sense, but it is intuitively clear that ICL yields a *qualitatively* different set of capabilities in larger models. There is a need to establish desiderata to further ground emergence from the context of prediction of capabilities and a consensus is needed on what evidence is sufficient to claim a capability is truly emergent - (seemingly) discontinuous scaling curves are perhaps insufficient for this purpose. ## 2.3.5 Better Methods For Discovering Task-Specific Scaling Laws Power-law-based scaling laws are not always suitable for predicting performance on narrower metrics, which can exhibit inflection points and even non-monotonic scaling. Caballero et al. (2023) present a generalization that can model such phenomena, building on Alabdulmohsin et al. (2022). Other modeling approaches should be evaluated and could borrow from approaches to model learning curves (Viering & Loog, 2022). To improve data efficiency, predictions could incorporate additional information, such as scaling performance on other tasks, performance on individual examples (Kaplun et al., 2022; Siddiqui et al., 2022), or other indicators of how learning/scaling is progressing. Probabilistic modeling approaches such as Gaussian processes (which Swersky et al. (2014) use to model learning curves) may also be useful in providing scaling estimates alongside the uncertainty estimate. Developing better evaluation measures with higher resolution, and better methodologies overall to estimate capabilities present in a model that are not fully mature yet, can be particularly instrumental in providing better estimates of capabilities of future models (Hu et al., 2023c). It would also be useful to clarify the purpose of task-specific scaling laws. Is the goal to forecast capabilities as accurately as possible in the short-term, when scaling resources are only slightly higher than their current levels? To provide accurate longer-term predictions? To provide interpretable parametric fits that enable quantitative comparison of learning algorithms and can distinguish qualitatively distinct scaling regimes? If the goal is indeed to forecast capabilities as accurately as possible, then clear desiderata should be established prescribing the range over which the task-specific scaling laws should extrapolate with high accuracy. The evaluations of Alabdulmohsin et al. (2022) and Caballero et al. (2023) demonstrate extrapolation to values of the scaled resource which are twice as large as those used for fitting the functional form, but it is unclear how accurate they are beyond this limited horizon. Stumpf & Porter (2012) suggest, in a broader scientific context, that proposed power law fits ought to be considered scientifically useful only if they extrapolate over at least two orders of magnitude; this stringent desideratum may be fruitful in the task-specific scaling law context (even for functional forms besides simple power law scaling). Moreover, as more and more tunable parameters need to be introduced to functional forms in order to improve extrapolation accuracy, the interpretability advantage of parametric fits over non-parametric methods (e.g. Gaussian processes) becomes questionable, and the risk of spurious interpretations rises. Broadly speaking, there are several distinct reasons task-specific scaling laws might be useful, and there is a need to develop criteria for assessing these laws which distinguish between the different use cases. (It may even be more terminologically precise in some cases to think of scaling fits as just "fits" rather than as "laws".) There remain many challenges that hinder our ability to predict which capabilities LLMs will acquire with continued scaling, and when. We continue to lack a robust explanation of why scaling works, and to what extent the scaling laws we have discovered are universal. Additionally, there has been minimal exploration into how scale influences the representations learned by the models and whether certain capabilities cannot be acquired through scaling alone. Furthermore, our understanding of the factors that underlie abrupt performance improvements on certain tasks is lacking, and our methods for discovering task-specific scaling laws are inadequate. 23. Can the different explanations (manifold explanation, kernel spectrum explanation, long tail theory) of power-law scaling in the resolution-limited regime be unified? ←- 24. What is a good theoretical model for *compute-efficient* scaling, where data and model size are scaled jointly? ←- 25. What is the role of feature learning in scaling laws? Can we develop models of scaling laws that account for the computational difficulty of learning the given task? ←- 26. What is an appropriate theoretical model to explain scaling observed in reinforcement learning settings where the data distribution is not stationary? ←- 27. To what extent scaling law exponents are fundamentally bounded by the data distribution? ←- 28. What properties of a task (and its relation to the full training distribution) affect the predictability of its scaling behavior? ←- ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) ![23_image_2.png](23_image_2.png) 29. To what extent does scaling a model increase the 'universality' of its representations? ←- 30. Does increasing scale cause model representations to converge to, or diverge away from, human representations? In other words, does representation alignment between human representations and model representations increase or decrease with scale? ←- 31. How does scale impact the structure of the representations? Does scaling cause the structure of model representations to become more, or less, linear? To what extent is the linear representation hypothesis true in general? ←- 32. How can we determine whether a given capability is below or above the *scale ceiling*, i.e. whether simple scaling up the model (and/or data, compute) would enable the model to learn that capability or not? ←- 33. To what extent are the issues faced by current LLMs (causal confusion, 'hallucinations', jailbreaks/adversarial robustness, etc.) likely to be resolved by further scaling? ←- 34. How do we determine if the inverse-scaling behavior of a capability will be reversed with further scaling (i.e. result in a U-shaped curve)? How can we predict threshold points at which scaling behaviors change shape? ←- 35. What factors may explain *abrupt* improvements in performance associated with emergent capabilities? To what extent does the multiplicative emergence effect (Okawa et al., 2023) explain the emergent capabilities of LLMs that have been observed in practice? ←- 36. How can we discover valid decompositions of various compositional capabilities of interest and assess the accuracy of such decompositions? How can the emergence of compositional capabilities be predicted based on the learning dynamics of its decomposed capabilities? ←- 37. What is an appropriate formalization of "emergent capabilities" in the context of LLM scaling? How can we apply it to understand which sorts of novel phenomena are likely to be (un)predictable? ←- 38. Can we discover progress measures that may explain emergent capabilities, e.g. by using interpretability methods? How can we establish the faithfulness of such progress measures to ensure that they can be used to *predict* the emergence of the capability of interest? ←- 39. Can we develop better methods for modeling task-specific scaling? E.g. by conditioning on additional information, using probabilistic techniques, or by developing evaluation measures with higher resolution. ←- 40. Can we clarify the purpose of task-specific scaling laws? In order to be useful for forecasting capabilities, what is the minimum range over which a task-specific scaling law must extrapolate accurately? ←- ## 2.4 Qualitative Understanding Of Reasoning Capabilities Is Lacking Reasoning - the process of drawing conclusions from prior knowledge - is a hallmark of intelligence. Prompt programming techniques (Reynolds & McDonell, 2021), such as *chain-of-thought* (CoT) prompting (Wei et al., 2022c), scratchpad prompting (Nye et al., 2021), or "let's think step-by-step" (Reynolds & McDonell, 2021; Kojima et al., 2022) enable language models to perform reasoning tasks with impressive accuracy. Careful studies to evaluate the behavior of LLMs on "out-of-distribution" (OOD) reasoning tasks have revealed that LLMs exhibit *some* reasoning behavior (Saparov et al., 2023; Wang et al., 2023a; Dziri et al., 2023), and that performance on reasoning tasks improves with scale (Wei et al., 2022c; Saparov et al., 2023; Valmeekam et al., 2022). However, the reasoning capabilities of even the largest LLMs are deficient in many ways; causing them to struggle on various types of reasoning problems (Wu et al., 2023b; Saparov & He, 2022; Berglund et al., 2023b; Valmeekam et al., 2022; Mitchell et al., 2023). The mixed evidence regarding the reasoning capabilities of LLMs is a major source of disagreement within the wider community (Huang & Chang, 2022). Are the prevalent limitations in reasoning capabilities of the LLMs fundamental in nature, or transient and likely to go away with scaling or improvement in training methods (e.g. targeted finetuning on reasoning data as in e.g. Lewkowycz et al., 2022)? Answering this question is critical to better understand the risks posed by LLMs, as both the strengths and limitations of the reasoning capabilities of LLMs give rise to *different* types of safety risks. The inability to reason robustly in novel (e.g. OOD) contexts can lead to undesirable, and potentially unsafe, behavior of LLMs. Conversely, if the LLM is misaligned, robust reasoning might cause it to behave undesirably *and competently* (Shevlane ![25_image_0.png](25_image_0.png) Figure 2: Sketch of two possible scaling law behaviors, relating the relationship between the reasoning performance of LMs and the number of parameters (see Section 2.4.1). et al., 2023). In this section, we highlight several research questions that can be instrumental in addressing these concerns. ## 2.4.1 Does Scaling Improve Reasoning Capabilities? Several prior studies have provided circumstantial evidence that larger models are better at reasoning (Nye et al., 2021; Wei et al., 2022a;c). However, this circumstantial evidence is confounded by issues such as data contamination (Wu et al., 2023b). Some studies have proposed scaling laws for performance on math word problems with respect to pretraining (Henighan et al., 2020) and finetuning (Caballero et al., 2023; Yuan et al., 2023b). However, these tasks may not reflect more general reasoning capabilities, and the discovered laws could also be misleading as special attention is paid to train LLMs on mathematical reasoning tasks (OpenAI, 2023c). Saparov et al. (2023); Han et al. (2022); Sprague et al. (2023) develop datasets meant to test more broadly for reasoning capabilities, however, more such datasets and evaluations of reasoning capabilities across broad contexts are needed to better understand how scale affects the reasoning capabilities of LLMs. Empirical scaling laws for general reasoning capabilities could be a particularly valuable contribution. Conversely, a conclusive demonstration that increasing the scale does not always improve the reasoning capabilities of LLMs would be similarly valuable. Figure 2 provides a rough sketch of two possible scaling law results, one where the increasing scale of LMs allows them to solve reasoning tasks with increasing complexity, and one where a fundamental limitation of LMs prevents them from continually improving their performance in reasoning. ## 2.4.2 Understanding The Mechanisms Underlying Reasoning Research to understand the mechanisms underlying reasoning in LLMs would provide further insight into their reasoning cabilities as a function of scale. This can help to reveal why and when LLMs rely on heuristics, and when they do actually perform reasoning to solve various reasoning tasks. Mechanistic interpretability analysis may be used for this purpose (c.f. Section 3.4). Hou et al. (2023) use mechanistic interpretability to analyze GPT-2 and LLaMA models and find evidence that LLMs embed a reasoning tree resembling the oracle reasoning process within the attention patterns. However, they only study simple synthetic reasoning tasks, and its not clear whether a similar finding will hold for OOD and long-tail inputs, or on more complex/general tasks than the ones considered by Hou et al.. On the whole, very limited work has been done so far to understand the mechanisms underlying reasoning. This presents an opportunity for future research to use methods like mechanistic interpretability analysis to explain the reasoning capabilities of LLMs, as well as their limitations. In particular, explaining the cases in which a smaller LLM fails at a reasoning task but a larger LLM proficiently solves the same task may be particularly valuable in understanding the effect of scale on mechanisms used by LLMs for reasoning. ![26_image_0.png](26_image_0.png) Figure 3: Illustrative examples of problems involving deductive reasoning, abductive reasoning, causal reasoning, and social reasoning (see Section 2.4.3). ## 2.4.3 Understanding Non-Deductive Reasoning Capabilities Of Llms Existing work on the reasoning capabilities of LLMs is primarily focused on **deductive reasoning** (Saparov & He, 2022; Saparov et al., 2023; Wang et al., 2023a). There is a need to better understand LLMs' inductive, abductive, social, situational, and causal reasoning capabilities5(Bhagavatula et al., 2020; Gandhi et al., 2023; Peirce, 1868; Smith, 2022, Chapter 5.4). See Figure 3 for a set of examples involving different types of reasoning. There is some evidence that LLMs are able to perform *some*, but not all, aspects of inductive reasoning from in-context examples (Qiu et al., 2023; Wang et al., 2023e; Zhu et al., 2023; Mitchell et al., 2023). Similarly, while Benchekroun et al. (2023) show that LLMs have difficulty abducting a valid world model from text descriptions given in-context, both Gurnee & Tegmark (2023) and Roberts et al. (2023a) argue that during training, LLMs do build a coherent understanding of the outside world from the dispersed information available in the corpus, which they then use at inference time to solve various tasks. In the same vein, there is mixed evidence regarding whether LLMs have adequate theory-of-mind reasoning capabilities that would enable them to robustly perform social and situational reasoning Kim et al. (2023); Li et al. (2023d); Kim et al. (2023). Causal reasoning capabilities of current LLMs are quite limited (Jin et al., 2023); however, Lampinen et al. (2023) argue that this is not in principle a limitation of the current paradigm. Further research is needed to better understand the precise limitations of LLMs with regards to various types of non-deductive reasoning, and whether these limitations are fundamental in nature or could be resolved via further scaling or modifying the training process. In particular, more work in the vein of Lampinen et al. (2023), that studies the limitations of current *paradigm* and not just current *models*, is needed that can help to elucidate the various reasoning capabilities of not just current LLMs, but also the future LLMs.6 ## 2.4.4 Which Aspects Of Training Lead To The Acquisition **Of Reasoning?** Understanding how reasoning capabilities are acquired by LLMs during pretraining, and how they evolve during finetuning, can provide insights as to whether or not the deficiencies in the reasoning faculties of LLMs will be resolved with further scaling or through modifications of the training process. Different studies have theorized different causes for the acquisition of reasoning. Lightman et al. (2023); Magister et al. (2023); Mitra et al. (2023) argue that training on reasoning traces improves the reasoning performance of LLMs. Madaan et al. (2022); Liang et al. (2023) show that training on code helps models perform better in reasoning tasks. Liang et al. (2023) show that instruction tuning also improves the model's performance on reasoning tasks. However, there is a need for further research to clarify the relative contribution of the aforementioned and other elements of the training pipeline to the acquisition of reasoning within LLMs. Analysis of large-5This list is neither mutually exclusive nor exhaustive. 6Also see Section 2.3.3 for related discussion on this point. scale datasets like *The Pile* (Gao et al., 2020) or *RedPajama* (Together Computer, 2023), or training data attribution methods (e.g. influence functions Grosse et al., 2023) could be used to understand which training examples are most instrumental in enabling LLMs to reason effectively. When an LLM is specifically trained on reasoning traces, is the resultant improvement in reasoning performance a byproduct of improved general reasoning faculties, or simply a consequence of learning better heuristics (Zhang et al., 2023b)? Does training on reasoning traces generalize out of distribution or not? Furthermore, careful comparisons of the output distributions of instruction-tuned language models vs. those of base models might help understand how instruction tuning improves the reasoning faculties of LLMs. ## 2.4.5 What Are The Computational Limits Of Transformers? Prior work on the theoretical capabilities of transformers has shown that, with a single pass, they are limited in the kinds of algorithms that they can simulate (Merrill & Sabharwal, 2023b; Merrill et al., 2023; Merrill & Sabharwal, 2022; Strobl, 2023). For example, in a single pass, they can not even solve relatively simple problems like simulating automata, checking graph-connectivity (i.e. whether there is a path between two nodes in a graph or not), and evaluating compositional formulas (Merrill & Sabharwal, 2023b). However, allowing a transformer to take intermediate steps, such as when using scratchpad or chain-of-thought prompting, significantly improves its expressive power (Feng et al., 2023). For example, a transformer with a linear number of intermediate steps can simulate automata, and with a polynomial number of intermediate steps, a transformer can express all problems solvable in deterministic polynomial time (Merrill & Sabharwal, 2023a). More work is needed to better understand the computational limits of transformers with intermediate steps, which may help with assurance by helping us understand what capabilities LLMs can possess. The RASP programming language (Weiss et al., 2021) was developed to describe the kinds of computations that transformers can perform. However, while the original RASP language is not well-defined and hence, difficult to analyze, researchers have analyzed constrained versions of RASP. Angluin et al. (2023) show that *boolean* RASP is equivalent in expressive powers to hard-attention transformers. Other works have used formalisms based on first-order logic to analyze the expressibility of transformers (Merrill & Sabharwal, 2023a; Chiang et al., 2023). Notably, Merrill & Sabharwal (2024) show that transformers with soft attention can be simulated by first-order logic with majority quantifiers (FO(M)). However, it remains an open question what the right programming language formalism is for expressing transformer computation. Is it a variant of RASP language or some kind of logic, like first-order counting logic? What are the relative differences in the power of these formalisms? Can we define some symbolic programming language that is exactly equivalent in expressive power to transformers? It is also pertinent to mention that expressibility does not imply learnability, and it is necessary to elucidate which algorithms are in fact learnable by transformer-based models. Zhou et al. (2023c) present an (informal) conjecture that transformers learn the shortest RASP-L program that agrees with the training data and provides empirical evidence in favor of this conjecture. Formally verifying (or refuting) this conjecture is an open research direction. In general, a more focused analysis of specific problems that are unlearnable by single-pass transformers could provide insight into which kinds of tasks are easier or more difficult to learn for transformers with multiple passes. Calibrated understanding of the reasoning capabilities of LLMs is required to better understand their risks. In particular, there is a need to develop a more complete understanding of *which* limitations in reasoning capabilities are *fundamental* in nature, and which are likely to be resolved with additional scale or improved training methods of LLMs. Formulating empirical scaling laws for general reasoning capabilities, clarifying the mechanisms underlying reasoning and understanding the computational limits of learning and inference in transformers may help in this regard. Furthermore, there is a need for more research on understanding the non-deductive reasoning capabilities of LLMs and understanding how LLMs acquire these capabilities. 41. Do general reasoning capabilities of LLMs reliably improve with scale? Can we discover empirical scaling laws for reasoning to predict this improvement beforehand? If it is the case that scale does not improve general reasoning capabilities of the LLMs, can we conclusively show this to be the case? ←- ![28_image_0.png](28_image_0.png) ## 2.5 Agentic Llms Pose Novel Risks Currently, LLMs are chiefly being used in search and chat applications. This reactive nature limits the risks posed by LLMs. However, an LLM can be *enhanced* in various ways to create an *LLM-agent* to autonomously plan and act in the real-world and *proactively* perform its assigned tasks (Ruan et al., 2023). Such enhancements can come from further specialized training (ARC, 2022; Chen et al., 2023a), specialized prompting (Huang et al., 2022b), access to external tools (Ahn et al., 2022; Mialon et al., 2023), or other forms of "scaffolding" (Wang et al., 2023c; Park et al., 2023a). A key feature of LLM-agents is markedly increased autonomy compared to LLM-based chatbots. For instance, GPT-Engineer (Osika, 2023) can write and execute code given a coding task. This is different from a non-agentic LLM-based coding assistant which can provide coding suggestions but can not execute code. Due to increased autonomy, limited direct oversight from human users, longer horizons of action, and other reasons, LLM-agents are likely to pose many novel alignment and safety challenges that are not currently wellunderstood (Chan et al., **2023b).** This section discusses some of these challenges and how various features of LLM-agents might exacerbate these challenges. ## 2.5.1 Llm-Agents May Be Lifelong Learners LLMs are strong in-context learners (see Section 2.1), so the behavior of LLM-agents could change throughout their lifetime through incorporation of feedback from the environment, humans, self-reflection, or other AI systems in the context. Further, many designs of LLM-agents (e.g. generative-agent (Park et al., 2023a), Voyager (Wang et al., 2023c)) augment LLM-agents with some form of external memory. This enables LLMagents to overcome the limitations of a limited context window by writing critical knowledge and skills to memory and retrieving them during subsequent processing. The lifelong learning may translate to continual gain in capabilities and pose novel alignment and safety challenges. For example, the capabilities gained over time might be dangerous (Shevlane et al., 2023), and may result in undesirable outcomes such as enabling the agent to manipulate the monitoring system (Cohen et al., 2022). Reward hacking (Skalse et al., 2022) or goal misgeneralization (Langosco et al., 2022) could cause LLM-agents to develop and/or pursue misaligned goals. It is also unclear how we might *assure* that an agent that learns continuously remains aligned and ensure that such learning does not undo its alignment (Qi et al., 2023b). ## 2.5.2 Natural Language Underspecifies Goals For LLM-agents, both the goal and environment observations are typically specified in the prompt through natural language. While natural language may provide a richer and more natural means of specifying goals than alternatives such as hand-engineering objective functions, natural language still suffers from underspecification (Grice, 1975; Piantadosi et al., 2012). Furthermore, in practice, users may neglect fully specifying their goals, especially the information pertaining to elements of the environment that ought not to be changed (the classic *frame problem* (Shanahan, 2016)). Such underspecification (D'Amour et al., 2020), if not accounted for, can result in negative *side-effects* (Amodei et al., 2016), i.e. the agent succeeding at the given task but also changing the environment in undesirable ways. Ruan et al. (2023) show that contemporary LLM-agents are not robust to the frame problem and that underspecification in the instructions can cause contemporary LLM-agents to make unwarranted faulty assumptions resulting in undesirable risky actions. The problem of avoiding negative side-effects has historically been studied within the framework of reinforcement learning (Leike et al., 2017; Shah et al., 2019; Krakovna et al., 2020). Several solutions have been proposed and evaluated in that setting to ensure agents are robust to underspecification, e.g. enabling AI agents to halt execution and adaptively seek new information from humans when uncertain (Hadfield-Menell et al., 2016; Shah et al., 2020), designing the AI agent to act conservatively (Turner et al., 2020), or providing additional information about goals derived from alternative sources such as demonstrations (Malik et al., 2021) or the environment (Shah et al., 2019). Future research could explore how the aforementioned techniques can be adapted to make LLM-agents more robust to challenges posed by underspecification. For example, the natural language interface provides a natural way for the LLM-agent to pose clarifying questions to the human (Mu et al., 2023; Kuhn et al., 2022). Similarly, the agent could be instructed (by embedding appropriate instructions in the prompt) to act conservatively and minimize its impact on the environment to avoid negative side-effects like issues. However, the effectiveness of such techniques in diverse novel scenarios is currently unclear and needs to be carefully investigated (Clymer et al., 2023). More research is needed on calibrating LLMs so that they are more capable of "knowing what they don't know" (Kadavath et al., 2022; Yin et al., 2023; Kuhn et al., 2023) and can accurately assess when they ought to seek further information and when to act conservatively (Ruan et al., 2023). Approaches such as worst-case optimization (Coste et al., 2023), attainable utility preservation (Turner et al., 2020), or conditional value at risk (CVaR) (Javed et al., 2021), may help make LLMs behave conservatively, or inspire methods for doing so. Approaches in the spirit of assistance games (Shah et al., 2020; Krasheninnikov et al., 2022) and active learning (Krueger et al., 2020) could help to address underspecification in a targeted manner. ## 2.5.3 Goal-Directedness Incentivizes Undesirable Behaviors Goal-directedness can cause agents to exhibit unethical and undesirable behaviors, such as deception (Ward et al., 2023), self-preservation (Hadfield-Menell et al., 2017), power-seeking, and immoral reasoning (Pan et al., 2023a). Pan et al. (2023a) find that LLM-agents exhibit power-seeking behavior in text-based adventure games. LLM-agents have also been shown to use deception to achieve assigned goals when explicitly required by the task (Ward et al., 2023), or when the tasks can be more easily completed by employing deception and the prompt does not disallow deception (Scheurer et al., 2023b). This behavior exists despite training the agent to be harmless and helpful, in fact, Scheurer et al. show that despite being informed about the illegal nature of the deceptive behavior, a GPT-4-based LLM-agent not only continues to deceive but also tries to cover up and hide the deceptive behavior. Perez et al. (2022b) show that LLMs finetuned with reinforcement learning exhibit a greater tendency for self-preservation and for avoiding shutdown. Benchmarks are needed to better quantify and evaluate the presence of such undesirable behaviors in LLM-agents. These benchmarks could either directly track undesirable behavior, or other kinds of behavior that are instrumentally useful for undesirable behavior. For example, deception often involves lying, so, detecting when a model is deliberately being dishonest can help to detect deception (Pacchiardi et al., 2023; Burns et al., 2022; Azaria & Mitchell, 2023; Li et al., 2023e). There is also a need to better understand the sources of these undesirable behaviors. Training data attribution methods (see Section 3.1.3) could be used to identify whether or not specific examples in the training data are responsible for these behaviors. Alternatively, research could be done to understand in what ways the training objectives (e.g. human approval in safety finetuning) might incentivize undesirable behavior. It is also unclear how undesirable behavior can be effectively mitigated; prior work has shown that prompting or fine-tuning LLMs to behave ethically or to avoid deception and power-seeking (Pan et al., 2023a), and/or intervening on LLMs' internal representations (Zou et al., 2023a) can help to mitigate undesirable behavior to some extent. ## 2.5.4 Difficulty Of Robust Oversight And Monitoring The general-purpose nature of LLMs is inherited by LLM-agents, making it likely that they will be deployed across a wide variety of contexts; often with minimal human supervision (removing human supervision may introduce novel governance challenges as well, c.f. Section 4.5.6). This necessitates robust *automated* monitoring to ensure that the behavior of LLM-agents remains aligned and safe across diverse deployment contexts (Naihin et al., 2023; Hendrycks et al., 2021b). However, multiple factors make robust monitoring and oversight difficult. (1) Firstly, direct human oversight is vulnerable to biases such as sycophancy (Perez et al., 2022b) (also see Section 3.3.5) and is inherently limited to the tasks that are solvable by humans. (2) If the agent is misaligned, and is aware that it will be penalized for its misaligned actions, it may *actively* try to circumvent the monitoring system (Cohen et al., 2022; Scheurer et al., 2023b). (3) As discussed previously, LLM-agents are likely to be lifelong learners. Hence, if monitoring information is leaked to the LLMagent, the agent may optimize against the monitoring system, leaving the monitoring system vulnerable to reward hacking (Pan et al., 2022; Skalse et al., 2022). LLMs show some low level of 'out-of-context learning' (Krasheninnikov et al., 2023) which can improve with scale (Berglund et al., 2023a). Prior work has expressed concern that situational awareness, assisted by out-of-context learning, might enable language models to hack their monitoring systems (Ngo et al., 2023). As preliminary evidence, Meinke (2023) show that after an LLM is finetuned on text explanations of prompt-injection, the LLM performs a jailbreak attack when prompted with a task and given information that it is being scored by another LLM, rather than performing the intended task. The aforementioned issues, especially (2) and (3) are poorly understood at the moment. As for (2), trusted LLM monitoring systems can still face security issues (Greenblatt et al., 2024): if the trusted and untrusted LLMs have discontinuous performance gaps on narrow tasks (production and detection respectively), this may be indicative of the trusted monitor being potentially unable to accurately monitor outputs. Furthermore, at a certain level of absolute model capabilities, no matter the detection performance of trusted black-box monitoring systems, the use of perfectly secure steganography could go undetected on a channel (Schroeder de Witt et al., 2023). A trusted monitor with even the same (or higher) level of steganography capabilities will still find detecting perfectly secure steganography to be an intractable problem unless it has access to the relevant cryptographic secrets. Sufficiently advanced forms of steganography might persist even after paraphrasing or lossy compression if encoded into semantics rather than lexis (Motwani et al., 2024). This implies that limiting steganography may be impossible without limiting communication capabilities or making the system more predictable to monitors, both of which might directly constrain the degree to which the system can be considered useful, or indeed autonomous. As for (3), there is a need to better understand LLM capabilities such as lifelong learning and situational awareness that might enable LLMs to hack monitoring systems. Furthermore, more research is needed to clarify how to ensure the robustness of monitoring systems when the monitored system has such capabilities. At the minimum, there is a need to develop appropriate threat models to study and improve the adversarial robustness of the monitoring system, where the monitored system is treated as an adversary (Goodfellow, 2019). Finally, it might be the case that the large number of possible failure modes in monitoring is such that no individual monitoring scheme will be fully robust on its own. Hence, there is a need to understand the relative strengths and weaknesses of different monitoring schemes with the goal of developing ensemble-based monitoring systems that might be more effective and robust. ## 2.5.5 Safety Risks From Affordances Provided To Llm-Agents The capabilities of LLM-agents can be enhanced in significant ways by providing the LLM-agent with novel affordances, e.g. the ability to browse the web (Nakano et al., 2021), to manipulate objects in the physical world (Ahn et al., 2022; Huang et al., 2022b), to create and instruct copies of itself (Richards, 2023), to create and use new tools (Wang et al., 2023c), etc. Affordances can create additional risks, as they often increase the impact area of the language-agent, and they amplify the consequences of an agent's failures and enable novel forms of failure modes (Ruan et al., 2023; Pan et al., 2024). There is currently very limited work on understanding the risks from affordances and how these risks scale with the type and diversity of affordances available to the model, and with respect to the capabilities of the base LLM. It is also unclear how to assure that a given LLM is capable of using a given affordance safely. Prior work has used testing within a sandbox (ARC, 2022) or testing via an LLM-based emulator (Ruan et al., 2023) to identify likely risks. However, both these methods have their limitations. Developing a sandbox environment is time-consuming and not scalable, Can we leverage LLMs to automate this process? Similarly, using an LLM-based emulator is likely to compromise the fidelity of the simulation and hence may not be able to discover all possible risks, e.g. those that occur due to emergent functionality (see Section 2.6.3). How can we improve the fidelity of such simulations? LLM-agents will pose many novel alignment and safety risks. These risks may be amplified by the ability of LLM-agents to perform lifelong learning and their access to various affordances. Our understanding of these risks and their likelihoods is currently quite poor and needs improvement. There is also a need to develop methods to allow us to better control LLM-agents and guide their behavior more effectively. Furthermore, the development of monitoring systems for LLM-agents is likely to entail significant challenges. 48. What drives the capability of LLM-agents to improve via lifelong learning, and to what extent ![31_image_0.png](31_image_0.png) ![31_image_1.png](31_image_1.png) is this capability present in current LLMs? How does it relate to the in-context learning ability of the base LLM, and how can we modulate it? For example, Wang et al. (2023c) note that replacing GPT-4 with GPT-3.5 in their agent caused the agent's performance to plummet. However, it is unclear whether this was due to GPT-4 being a better in-context learner (and therefore, better able to improve based on feedback) or due to GPT-4 being inherently more capable. ←- 49. How can we enable LLM-agents to be more robust to underspecification (Ruan et al., 2023) ![31_image_2.png](31_image_2.png) ## 2.6 Multi-Agent Safety Is Not Assured By Single-Agent Safety ![31_Image_3.Png](31_Image_3.Png) ![31_Image_4.Png](31_Image_4.Png) A foremost lesson of game theory is that optimal decision-making within a single-agent setting (i.e. selfishly optimizing for an agent's own utility) can produce sub-optimal outcomes in the presence of other strategic agents. Failing to account for the strategic nature of other agents can cause an agent to adopt strategies under which potentially everyone, including the agent itself, ends up worse off (Schelling, 1981; Harsanyi, 1995; Roughgarden, 2005; Nisan, 2007). Examples include collective action problems (or 'social dilemmas') such as arms races or the depletion of common resources, as well as other kinds of market failures such as those caused by asymmetric information or negative externalities (Bator, 1958; Coase, 1960; Buchanan & Stubblebine, 1962; Kirzner, 1963; Dubey, 1986). In addition, many potentially worrisome dynamics, such as emergent functionality or network effects only tend to emerge in the presence of multiple agents (Ecoffet et al., 2020). From the perspective of LLM safety, these facts imply that single-agent alignment and safety are insufficient for assuring desirable outcomes in multi-agent settings (Sourbut et al., 2024), **and that deliberate effort will be required to ensure multi-agent safety** (Critch & Krueger, 2020; Dafoe et al., 2020; Conitzer & Oesterheld, 2023; Hammond et al., 2024). In this section, we highlight some of the challenges that might hinder multi-agent safety. ![32_image_1.png](32_image_1.png) (a) Even though most LLM-agents are predominantly developed in single-agent settings, they are likely to undergo multi-agent interactions in deployment (see Section 2.6.1). (b) Different systems - even when individually safe ![32_image_0.png](32_image_0.png) - can potentially be collectively unsafe (see Section 2.6.3). Figure 4: Multi-agent safety presents unique challenges different from single-agent safety. ## 2.6.1 Influence Of Single-Agent Training On Multi-Agent Interactions Is Unclear Under the current paradigm, LLMs undergo extensive pretraining but limited finetuning. Hence, in the near future, it is likely that for LLM-agents multi-agent experience may only form a small fraction of the overall training data. In such cases, LLM-agents will substantially rely on the experience implicit in their pretraining corpora, combined with prompting or in-context learning to guide their behavior in multi-agent settings (Park et al., 2023a; Xu et al., 2023b; Fu et al., 2023b). This makes it critical to understand the predispositions and the latent capabilities of the base LLM that are important in multi-agent interactions. Relevant dispositional traits might include helpfulness, altruism, selfishness, human emotions like spite or jealousy, and awareness of or adherence to various norms and conventions. Relevant capabilities might include negotiation (Fu et al., 2023b), theory of mind (Shapira et al., 2023; Zhou et al., 2023f), manipulation (Ward et al., 2023), or making and fulfilling credible commitments (Park et al., 2023a). To understand these dispositions and capabilities, specialized benchmarks are needed. As a first step, the behavior of LLM agents could be observed in complex, multi-agent environments such as those designed by Park et al. (2023a) and Mukobi et al. (2023). However, in the long term, specialized benchmarks, targeting specific dispositions and capabilities, are needed. In particular, LLM-agents should be evaluated in adversarial settings to assess whether their behavior is consistent across different contexts or not (Scheurer et al., 2023b; Chan et al., 2023a; Akata et al., 2023). Forms of analysis such as training data attribution, e.g. influence functions Grosse et al. (2023) could be used to further understand how training data influences relevant dispositions and competencies (c.f. Section 3.1.3). ## 2.6.2 Foundationality May Cause Correlated Failures Another important characteristic of LLM development is *foundationality* - due to the expense of large-scale pretraining, many deployed instances share similar or identical learned components. Foundationality may both be a blessing and a curse. On the one hand, it may be possible to exploit the similarity in the design of LLM-agents to facilitate cooperation (Critch et al., 2022; Conitzer & Oesterheld, 2023; Oesterheld et al., 2023). On the other hand, foundationality may leave LLM-agents vulnerable to correlated failures both in terms of safety and capabilities due to increased output homogenization (Bommasani et al., 2022). For example, several studies have shown that the same jailbreak attacks transfer across different LLMs (Shah et al., 2023; Zou et al., 2023b). A natural way to guard against some correlated failures is to make the agents sufficiently diverse from each other. How can we promote such robustness effectively? For example, will prompting each LLM with a different 'personality' (Shanahan et al., 2023; Wang et al., 2023j), or a different set of ethical principles to follow (Bai et al., 2022b), be enough? Can finetuning be leveraged to improve diversity, e.g. via using quality-diversity objectives effectively (Bradley et al., 2023; Ding et al., 2023)? In what other ways can we enhance the robustness of LLM-agents to correlated failures? ## 2.6.3 Groups Of Llm-Agents May Show Emergent Functionality Multi-agent learning, either through explicit finetuning or implicit in-context learning, may enable LLMagents to influence each other during their interactions (Foerster et al., 2018). Under some environmental settings, this can create feedback loops that result in novel and emergent behaviors that would not manifest in the absence of multi-agent interactions (Hammond et al., 2024, Section 3.6). Prior work in multi-agent learning provides several instances of such emergence, e.g. intelligent tool use (Johanson et al., 2022) or bartering behavior (Baker et al., 2019). The emergent functionality can also arise through coordination, and cooperation, of efforts between agents at a *group* level, i.e. where a group of agents are collectively able to perform a task that none of them was able to perform individually (Juneja et al., 2023). Emergent functionality is a safety risk in two ways. Firstly, it may itself be dangerous (Shevlane et al., 2023). Secondly, it makes assurance harder as such emergent behaviors are difficult to predict, and guard against, beforehand (Ecoffet et al., 2020). Future research should therefore seek to evaluate if and when groups of LLM-agents can exhibit emergent functionality, and self-organization (Yukalov & Sornette, 2014). Narrow evaluations could be aided by more open-ended simulations, such as that of Park et al. (2023a). Theoretical work on discovering goal-directed behavior (Kenton et al., 2022; Orseau et al., 2018) could help detect if a collection of LLMs is acting as a single agent. Furthermore, emergent functionality might depend on the diversity of the agents; does the foundationality of LLMs (see previous section) inhibit or encourage such emergence? ## 2.6.4 Collusion Between Llm-Agents While it would often be preferable for LLM-agents to be cooperative, cooperation can be undesirable if it undermines pro-social competition or produces negative externalities for coalition non-members (Dorner, 2021; Buterin, 2019; Dafoe et al., 2020). Collusion between relatively simple AI systems has been observed in the real world (Assad et al., 2020; Wieting & Sapi, 2021) and synthetic experiments (Brown & MacKay, 2023; Calvano et al., 2020; Klein, 2021). However, collusion in the context of LLMs has only recently been studied, and there are several open challenges. Collusion can occur through explicit or steganographic communication. Steganographic communication hides information in seemingly innocent content (Roger & Greenblatt, 2023), posing challenges for collusion monitoring and detection. Motwani et al. (2024) study steganographic collusion between generative AI agents and find that steganographic collusion can arise in settings where agents are not explicitly incentivized for it, and even in settings where they are explicitly instructed against it. The authors thus suggest that steganographic collusion need not depend on the agents' 'intent to deceive', but might arise merely due to optimization directed towards a misspecified objective (Russell, 2019; Ngo et al., 2023; Pan et al., 2022) if covert communication between agents increases joint task performance, for example by reducing the task complexity. To better understand the risks of agent collusion via steganography, we need real-world sandboxed benchmarks and evaluations in adversarial settings where agents are subjected to pressure to collude through various means (e.g., by employing tactics used by Scheurer et al. (2023b)). However, monitoring might have important limitations (Schroeder de Witt et al., 2023; Motwani et al., 2024) since LLMs could eventually perform undetectable steganography (Motwani et al., 2024). Future work must explore the effectiveness of additional countermeasures like restricting agents' access to cryptographic services, data curation, and reducing the communication channel entropy (e.g., through paraphrasing). ## 2.6.5 Unclear Applicability Of Multi-Agent Rl Research To Llms Multi-agent problems such as social dilemmas have been studied extensively by the multi-agent RL (MARL) community, among others, and several different mechanisms have been proposed to overcome such problems (Du et al., 2023b), e.g. utility transfer (Kalai & Kalai, 2013; Lupu & Precup, 2020; Yang et al., 2020); contracts and transparency (Christoffersen et al., 2023; Critch et al., 2022); reputation and punishment (Milinski et al., 2002; Henrich, 2006; Boyd et al., 2010; Moon & Conitzer, 2015); opponent-shaping and adaptive mechanism design (Foerster et al., 2018; Pardoe et al., 2006; Yang et al., 2021; Zheng et al., 2022); and intrinsic motivations such as inequity aversion, altruism, or social influence (Jaques et al., 2019; Hughes et al., 2018; Wang et al., 2019b; McKee et al., 2020). It is currently unclear to what extent these findings will transfer to LLM-agents, as there are several marked differences in this setting. First, unlike the traditional agents of game theory and reinforcement learning, LLM-agents do not have explicitly represented objective functions (Yocum et al., 2023). While LLMs can be 'incentivised' to accomplish tasks using prompts and additional RL finetuning, the importance of pretraining does not neatly fit within existing game-theoretic paradigms. It also creates different learning dynamics and thus the possibility of different equilibria. Second, and relatedly, far from being straightforward utility maximizers, LLM-agents tend to exhibit many of the same cognitive biases as the humans who generated this pretraining data (Jones & Steinhardt, 2022; Koo et al., 2023). Third, the size of state-of-the-art LLMs means they are less amendable to classical MARL methods, which scale poorly in sample complexity and model size. While preliminary work does provide encouraging evidence that some of the MARL mechanisms listed above can result in greater cooperation between LLM-agents (Yocum et al., 2023), future work should verify this for other methods, move beyond a purely behavioral paradigm, and, where useful, develop novel methods tailored to LLM-agents. It is also unclear whether existing MARL methods may help with collusion between LLM-agents (Section 2.6.4). Further work is required to evaluate whether existing algorithms (Hu et al., 2021) can be used to train cooperative LLM policies while preserving natural-language grounding in communications and acceptable task performance. Table 2: Because of the differences in the 'nature' of LLM-agents and traditional multi-agent RL-based agents, the applicability of prior multi-agent RL based research to LLM-agents is unclear. | Feature | Multi-Agent RL | LLMs | |---------------------------|------------------------|--------------------------| | Objective Functions | Explicit | Implicit | | Size | Small/Medum | Very Large | | (Primary) Training Regime | Reinforcement Learning | Self-Supervised Learning | | Human-Generated Data | Small % of Data | Large % of Data | Multi-agent alignment and safety is distinct from single-agent alignment and safety, and assurance will require deliberate efforts on the part of agent designers. The possible safety risks that must be dealt with range from correlated failures that might occur due to foundationality of the LLM-agents to collusion between LLM-agents. At the same time, confronting social dilemmas requires LLMagents have the ability to cooperate successfully with each other and with humans, even when their objectives might differ. 54. How do pretraining, prompting, safety finetuning, etc. shape the behavior of a LLM-agent within multi-agent settings? In particular, what is the role of pretraining data on agents' dispositions and capabilities? ←- 55. How can we evaluate or benchmark cooperative success and failure of LLM-based systems? How can existing environments, such as those of Park et al. (2023a); Yocum et al. (2023); Mukobi et al. (2023), be leveraged to study this? Can we create new LLM-agent analogues of popular multi-agent benchmarks, e.g. Melting Pot, or Hanabi? ←- 56. How can we leverage foundationality to enable LLM-agents to better cooperate with each other and achieve outcomes with higher social welfare? ←- 57. How can we evaluate and improve robustness of LLM-agents to correlated failures? Is qualitydiversity-based finetuning an effective way to improve robustness of LLM-agents to correlated failures? How else can we improve robustness of LLM-agents to correlated failures? ←- 58. Do groups of LLM-agents show emergent functionality or any form of self-organization? What worrisome capabilities are more likely to emerge in multi-agent contexts that are absent in single-agent contexts? ←- 59. Can we design benchmarks and adversarial evaluations to study colluding behaviors between LLM-agents, extending work in Motwani et al. (2024)? ←- 60. How can collusion between LLM-agents be prevented and detected? How can we assure that the game mechanisms (which are often implicit) are robust to collusion when deploying multiple LLM-agents in the same context? Can we design "watchdog" LLM-agents that detect colluding behavior among LLM-agents? ←- 61. How can we train LLM-agents to avoid colluding behavior? ←- 62. How can insights and techniques from the multi-agent reinforcement learning literature (e.g. utility transfer, contracting, reputation mechanisms) be adapted for LLM-agents? What adjustments need to be made in theory, and in practice, to unlock similar benefits? ←- ## 2.7 Safety-Performance Trade-Offs Are Poorly Understood Safety-performance trade-offs (SPTs) 7 are omnipresent in the design of engineering systems. For example, a system as simple as a linear control of a plant has a trade-off where assuring the robustness of the system to worst-case perturbations results in loss of performance in the average case (Barratt & Boyd, 1989; De Moor et al., 1992). Indeed, like in any other engineered system, SPTs are also present in the design of an LLMbased system, e.g. improving the harmlessnesss of an LLM-assistant may cause it to be less helpful (Bai et al., 2022a). Similarly, RL-based safety-finetuning appears to generalize both in-distribution and outof-distribution more robustly than supervised finetuning, but can reduce the diversity of the responses generated by an LLM (Kirk et al., 2023c). Our knowledge of SPTs, however, remains limited and further work is required to develop a comprehensive understanding of these trade-offs. A precise characterization of SPTs can enable a system designer to make better-informed design choices for a given set of system requirements (Khlaaf, 2023). For example, in a sensitive context, it may be appropriate to sacrifice performance to attain greater safety assurances. Greater understanding of such trade-offs could also help policymakers define appropriate safety standards. In this section, we identify several research directions that could help improve our understanding of SPTs. ## 2.7.1 Designing Better Metrics To Measure Safety Much of the existing practice narrowly defines safety as *harmlessness*, i.e. not generating a response that may be considered harmful by human evaluators (Askell et al., 2021). This is assessed either via manual (Ziegler et al., 2022) or automated red-teaming (Anthropic, 2023e). However, in some contexts, there may be desiderata other than harmlessness for a 'safe' LLM-based system (e.g. corrigibility, transparency, OOD robustness/generalization), and there is a need for future work to design more sophisticated metrics that could be used to measure safety in a holistic way. One obvious avenue in this regard is to improve the granularity of the current metrics. Furthermore, harmlessness was primarily proposed to assess the safety of LLM-assistants - and hence, may not be an appropriate metric to assess the safety of other LLM-based systems, in particular LLM-agents (c.f. Section 2.5). It is also unclear how the *relative* safety of LLM-based systems can be measured. Win-rate - the percentage of responses on which human evaluators prefer a response from one LLM over the other (Dubois et al., 2023) - and Elo ratings are the most popular metrics in this regard. However, the validity of these metrics is not well-studied; Boubdir et al. (2023) argue that Elo ratings become unreliable when ranking LLMs with similar win-rates and show that transitivity of Elo ratings is not universally conserved in real-world human evaluations. ## 2.7.2 Disentangling Safety From Performance Many capabilities of LLMs are dual-use by nature. Hence more capable LLMs have inherent safety limitations if they can be misused, e.g. an LLM with sufficient proficiency in biology research might also help users create biological weapons. This might be mitigated by restricting LLMs' interfaces, e.g. by limiting sensors and actuators that an LLM might have access to; but such limitations might be bypassed easily (Glukhov et al., 2023). An alternative might be creating LLM "savants" that excel in some domains while remaining selectively ignorant, although reducing LLM knowledge in such a way might reduce capabilities more broadly. Unlearning (discussed in Section 3.2) is one relevant area; modifications to pretraining (see Section 3.1) may prove more promising. We can think of limiting the interface and knowledge of systems as two "knobs" that might be used to tune safety-performance trade-offs. Other knobs might include characteristics of agency (Chan et al., 2023b); for instance, specifying how an LLM-agent is meant to solve a task (rather than simply the goal) might preclude both inventive solutions (reducing performance) and unpleasant surprises (increasing safety). 7Closely related ideas include "capabilities externalities" (Hendrycks & Mazeika, 2022) and "alignment tax" (Christiano, 2019; Askell et al., 2021; Ouyang et al., 2022; Lightman et al., 2023). We prefer our terminology to the more common "alignment tax" since: 1) it does not conflate safety with alignment and 2) it does not suggest that alignment will be achieved if you pay the tax. ## 2.7.3 Better Characterization Of Safety-Performance Trade-Offs Part of the challenge of clarifying safety-performance trade-offs is the multi-dimensional nature of both safety and performance, which means there might be many different types of SPT. A better characterization of these trade-offs could help determine how to achieve Pareto-optimal outcomes. This could include characterizing safety-performance trade-offs of specific capabilities of LLMs, e.g. in-context learning (c.f. Section 2.1), reasoning capabilities (c.f. Section 2.4), and multimodal capabilities. Deployment context is another factor on which SPTs may depend. For example, SPTs for an LLM specifically designed to act as an assistant to a medical doctor may be different from SPTs for an LLM designed to be a general assistant, and to which a lay person might pose medical queries. An LLM-agent may have very different SPTs compared to an LLM-assistant, due to its increased agency and goal-directed behavior. Even within LLM-agents, SPTs may differ depending on the affordances available to the LLM-agent. An LLM that is made available via an API in which users can perform finetuning can have different SPTs compared to an LLM that can only perform inference queries via an interface (Pelrine et al., 2023). Finally, distinct trade-offs may exist at various stages of development (Leike, 2022b): choosing an architecture that is more interpretable by design, aggressively filtering undesirable data from the pretraining corpus, or biasing the model to be harmless over being helpful may all improve safety by sacrificing some performance. There is a need for a comprehensive survey of SPTs that exist in various settings, and for organizing knowledge about different "knobs" that could be used to modulate these trade-offs. It may be useful to coalesce different examples of trade-offs into high-level categories. Future work could also aim to theoretically characterize how and to what extent safety can be achieved using such "knobs" to control powerful, potentially misaligned systems. Empirical work could systematically compare various approaches, e.g. assessing the promise of limiting systems' knowledge vs. sensors vs. actuators as a means of restricting an LLM-agent's behavior to its intended scope. ## 2.7.4 How Fundamental Are Safety-Performance Trade-Offs? There is mixed evidence on the difficulty of addressing SPTs, and the answer might differ for different trade-off axes. Significant trade-offs have been a long-standing problem in interpretability (Wang, 2019; Baryannis et al., 2019; Dziugaite et al., 2020; Elhage et al., 2022a) and in adversarial robustness of vision models (Tsipras et al., 2019; Zhang et al., 2019; Tramer et al., 2020a; Croce et al., 2021). It is less clear whether SPTs are a similarly big problem in LLMs. For example, Bai et al. (2022b) show that their proposed method, Constitutional AI, can result in Pareto improvement on both their performance and safety metrics over the standard reinforcement learning from human feedback methods. However, other work has argued that SPTs are indeed fundamental and assuring safety of LLMs may require significant sacrifices in terms of their performance (Branwen, 2016; Millière, 2023; Wolf et al., 2023). In addition to empirically studying SPTs, there is a need for research to understand the *causes* of these trade-offs. This understanding may shed light on the extent to which these obstacles are fundamental, while also pointing to new angles of attack. The high-level challenge is to improve our understanding of safety-performance trade-offs, in multiple ![36_image_2.png](36_image_2.png) different ways. As a foundation for future research, a better formalization and classification of different types of trade-offs is important. This will be assisted by the development of clear metrics, in particular for different axes of safety. Building on those, empirical investigations could answer important questions about the severity of these trade-offs and let us track progress in their mitigation. As a complement to such measurements, we should also aim to understand the causes of these trade-offs and whether or not these trade-offs are fundamental in nature. 63. What are the best metrics to measure performance, and in particular, safety, in ways that are representative of real-world usage of AI systems and that can be applied across different LLMs and safety methods? Are metrics such as Elo ratings valid for measuring the safety of different LLMs relative to each other? ←- 64. How can the safety of LLMs be disentangled from their performance? Do there exist useful "knobs" for practitioners to trade-off safety against performance? ←- 65. Can we develop LLM 'savants' that excel in some domains while remaining selectively ignorant about other areas (i.e. those which pose safety concerns, such as knowledge of weapons)? ←- ![36_image_0.png](36_image_0.png) ![36_image_1.png](36_image_1.png) ![36_image_3.png](36_image_3.png) ![36_image_4.png](36_image_4.png) 66. What are the various axes of safety and performance, and along which axes are there safetyperformance trade-offs? Which of those trade-offs are especially important, in the sense of creating strong incentives to sacrifice safety? Can we identify high-level clusters of instances of safety/performance trade-offs? ←- 67. How do safety-performance trade-offs vary depending on the deployment context? In particular, in what ways do safety-performance trade-offs for LLM-agents differ from trade-offs for LLM-assistants? What safety-performance trade-offs exist in the development stage? ←- 68. What are the 'causes' of safety-performance trade-offs? Are these trade-offs for LLMs *fundamental* in nature or can they be overcome by development of better methods? ←- ## 3 Development And Deployment Methods The primary focus of the alignment and safety research so far has been the development of methods for improving the safety and alignment of LLMs. That has indeed resulted in some successes, most notably the refinement and development of methods to improve the alignment of the model behavior in the 'finetuning' stage. However, on the whole, current technical tools used in the development and deployment of LLMs leave much to be desired. This lack of appropriate tools to help robustly align, evaluate, interpret, secure, and monitor LLMs, is a hindrance in assuring alignment and safety of LLMs. Similar to the scientific understanding of LLMs, and in line with the broader theme of the agenda, we focus on the deficiencies of the development and deployment methods that are most relevant to assuring the safety and alignment of LLMs. Even with this restricted perspective, we identify many opportunities to improve development and deployment methods. Methods used in the development and deployment of LLMs can be roughly divided into three types. The first type includes techniques used for the training of LLMs (including data collection and annotation techniques) used in pretraining and finetuning. We collectively refer to all the techniques (supervised finetuning, reinforcement learning-based training from human or AI feedback, unlearning methods) used in the finetuning stage as 'safety finetuning' methods.8 We discuss various challenges with pretraining and safety finetuning that negatively impact the safety, alignment, and assurance of LLMs in Sections 3.1 and 3.2. The second type of methods includes evaluation and interpretation techniques - unfortunately, as we assert throughout (Sections 3.3 and 3.4), both evaluation and interpretation methods leave much to be desired and cannot be relied upon in their current state to provide necessary assurance. Finally, the third type of methods is concerned with the security of these models - this includes robustness to adversarial attacks of various types, most prominently jailbreaking and prompt-injections (Section 3.5) and poisoning attacks (Section 3.6). | Challenge | TL;DR | |----------------------------------------|---------| | Pretraining Produces Misaligned Models | Reducing misalignment of a pretrained (base) model is highly challenging. Existing data filtering methods to remove harmful data from the training corpus are insufficient and can be detrimental in some cases. We lack automated auditing tools that could be used to audit and analyze largescale datasets used for training. Training data attribution methods lack scalability and their reliability is uncertain. Finally, the use of human feedback data during pretraining, and modifications to pretraining that might help improve the effectiveness of downstream safety and alignment efforts (such as interpretability) remain underexplored. Continued on the next page | Table 3: An overview of challenges discussed in Section 3 (Development and Deployment Methods). We stress that this overview is a highly condensed summary of the discussion contained within the section, and hence should not be considered a substitute for the complete reading of the corresponding sections. 8Post-pretraining procedures applied to LLMs have both been called 'alignment training' (e.g. Zhou et al., 2023b) and 'safety training' (e.g. Wei et al., 2023a; Hubinger et al., 2024) within the contemporary literature. We use the latter terminology but replace 'training' with 'finetuning' to make it explicit that these methods presume that the model has already been pretrained. | Challenge | TL;DR | | | | | |------------------------------------------|-----------------------------------------------|----------|----|--------|----| | Finetuning | Methods | Struggle | to | Assure | We lack an understanding of how safety fine-tuning methods change a pretrained model. The current evidence points | | Alignment and Safety | to it being superficial, considering the effects of finetuning can be bypassed (via jailbreaking) or easily reversed (via finetuning on problematic data). Indeed, it is plausible that simple output-based adversarial training might be incentivizing superficial changes in model behavior. Furthermore, techniques for targeted modification to LLM behavior (e.g. machine unlearning, concept erasure, etc.) are currently underexplored, and techniques for removal of unknown undesirable capabilities (e.g. backdoors) are non-existent. | | | | | | LLM Evaluations Are Confounded and Biased | There is an evaluation crisis for LLMs. Prompt-sensitivity, test-set contamination and targeted training to suppress undesirable behaviors in known contexts confound our evaluations. There exist further biases in both LLM-based evaluations of other LLMs and human-based evaluations. Systematic biases present within the machine learning ecosystem (e.g. overrepresentation of U.S.-centric points-of-view in research) create further blindspots in evaluations. Finally, as LLMs advance further in capabilities, we may need scalable oversight methods; however, the scalability, robustness, and practical feasibility of various scalable oversight proposals remain uncertain. | | | | | | Tools for Interpreting or Explaining LLM | Techniques such as representation probing, mechanistic interpretability, and externalized reasoning hold promise in | | | | | | Behavior Are Absent or Lack Faithfulness | helping interpret and explain model behavior. | However, | | | | | | these techniques suffer from many fundamental challenges, such as the use of dubious abstractions, concept-mismatch between AI and humans, and a lack of scalable and reliable evaluations. Furthermore, representation probing and mechanistic interpretability face further challenges due to polysemanticity of neurons, sensitivity of the interpretations to the choice of the dataset, and lack of scalability. Similarly, externalized reasoning using informal semantics can be unfaithful and misleading, while externalized reasoning using formal semantics is not widely and easily applicable to many tasks of interest. Continued on the next page | | | | | | Challenge | TL;DR | |-------------------------------------------|-----------------------------------------------------------| | Jailbreaks and Prompt Injections Threaten | LLMs are not adversarially robust and are vulnerable to | | Security of LLMs | security failures such as jailbreaks and prompt-injection attacks. While a number of jailbreak attacks have been proposed in the literature, the lack of standardized evaluation makes it difficult to compare them. We also do not have efficient white-box methods to evaluate adversarial robustness. Multi-modal LLMs may further allow novel types of jailbreaks via additional modalities. Finally, the lack of robust privilege levels within the LLM input means that jailbreaking and prompt-injection attacks may be particularly hard to eliminate altogether. | | Vulnerability to Poisoning and Backdoors | LLMs are often trained on data from untrustworthy sources | | Is Poorly Understood | - internet or crowdsource workers - which leaves them vulnerable to data poisoning attacks. Our current understanding of how vulnerable LLMs might be to data poisoning attacks through text - or other modalities - is highly limited. Furthermore, there does not currently exist any method that can robustly detect and remove any backdoors. | ## 3.1 Pretraining Produces Misaligned Models The first step in LLM development is to pretrain the model on a large dataset of internet text. This pretraining incorporates a large amount of knowledge and capabilities into the model but is not safe due to the prevalence of undesirable content across the internet. Among other alignment and safety failures, a pretrained model can exhibit significant stereotypical biases, hallucinate excessively, readily leak private information, and provide information on illegal and harmful activities (Bender et al., 2021; Pan et al., 2020; Ji et al., 2023b). To improve helpfulness and harmlessness, leading LLM developers perform extensive finetuning to improve the alignment and safety of pretrained models. However, there is increasing evidence that this effort often falls short of producing a robustly aligned and safe model (see Section 3.2). Hence, there is a need to investigate ways in which the pretraining procedure itself could be modified to produce safer and better-aligned pretrained models. In this section, we present some of the challenges that, if addressed, could contribute to progress toward this goal. ## 3.1.1 Existing Data Filtering Methods Are Insufficient Naively scaling a dataset also scales the harmful content within the dataset proportionally (Birhane et al., 2023). Considering that pretraining is based on maximizing the likelihood of generating text present in the training data, removing the problematic data from the training dataset seems like a straightforward fix (Ngo et al., 2021). However, effective data filtering is an open problem. Commonly used methods for dataset filtering either rely on human-written rules (Raffel et al., 2022) or narrowly-trained classifiers (Gehman et al., 2020; Solaiman & Dennison, 2021); resulting in incomplete and ineffective filtering of harmful data (Welbl et al., 2021). Further, as harmful data is often contextual (Rauh et al., 2022), such simple filtering methods are fundamentally incapable of successfully removing all undesirable data (Ziegler et al., 2022). In addition, while the effects of current data filtering tools have not been studied extensively, there is evidence that simple data filtering can be problematic in several ways. It disproportionately removes text from and about marginalized groups (Dodge et al., 2021), reduces data diversity (Kreutzer et al., 2022) and amplifies some social biases by decreasing LLM performance on language used by marginalized groups (Xu et al., 2021a; Welbl et al., 2021). There is a need to understand these effects better and to develop, and evaluate, more sophisticated data filtering tools, e.g. via using LLMs with appropriate prompting (Fernando et al., 2023) or through *learned* data filters using feedback from human labelers. Considering the negative side-effects of data filtering, future research should also consider data *editing* and data *augmentation* as alternatives to data filtering. In particular, future research should consider using LLMs to minimize the impact of undesirable content present within pretraining data (which could then be used to train relatively better-aligned LLMs). Existing LLMs could be effective in this regard by identifying and rewriting harmful content at large scales or adding novel data to offset the effects of biases present within the data, e.g. adding text on *female* doctors and *male* nurses (Stanovsky et al., 2019) or adding more data for low-resource languages (Ghosh & Caliskan, 2023; Yong et al., 2023). Such use of LLMs must be done in a careful way, as any alignment issues present in the LLM may corrupt the data in unintended ways. While dataset filtering, and editing, could help improve the alignment of the pretrained models in parts, it is not a panacea for all alignment problems. Training data often contains historical and sociological facts, such as genocides and slavery, that can not simply be discarded, nor can we edit history to create as many queens as kings. ## 3.1.2 Lack Of Dataset-Auditing Tools Internet-scale datasets lack transparency and their large scale makes it difficult to perform a comprehensive manual audit of the dataset.9 This is a well-recognized problem (Birhane et al., 2023; Paullada et al., 2021; Gebru et al., 2021; Mitchell et al., 2022; McMillan-Major et al., 2023). However, limited work has been done to develop tools that assist scalable data analysis. Marone & Van Durme (2023) and Elazar et al. (2023) have proposed methods that use hash collisions to detect how often a particular piece of text occurs in the given dataset. Such techniques have proved useful for identifying data duplication (Lee et al., 2022) in pretraining data, which helps limit privacy risks (Kandpal et al., 2022). These techniques can also help catch and prevent benchmark data contamination (Sainz et al., 2023), which assists in improving the reliability of evaluations (see Section 3.3). There is a need to extend these technique, e.g. by enabling the use of measures of semantic similarity so that they could be used to identify undesirable data. Furthermore, there is an emerging line of research on *dynamic* auditing of datasets which leverages the knowledge of training dynamics to identify different *types* of data within a dataset. The majority of the work in this line of research focuses on filtering out unlearnable or difficult samples (Agarwal et al., 2022), but Siddiqui et al. (2022) show that this approach can be generalized to arbitrary data types, given a small amount of examples of each type. However, their work is limited to image dataset analysis. Future research may consider adapting their technique, or developing novel techniques, to enable similar kinds of auditing of language datasets as well. ## 3.1.3 Improving Training-Data Attribution Methods Training data attribution (TDA) methods allow attributing the output of a model to specific training data points (Grosse et al., 2023; Pruthi et al., 2020; Yeh et al., 2018; Ilyas et al., 2022). TDA can be an effective interpretability and auditing tool as it might allow attributing alignment and safety failures to specific content in the pretraining and fine-tuning data. However, like other interpretability methods (c.f. Section 3.4), most TDA approaches lack scalability (despite some recent advances such as Grosse et al. (2023) and Park et al. (2023c)) and often do not accurately surface all the relevant training samples for a given prediction (Akyürek et al., 2022a). Furthermore, different TDA methods have different ideals (e.g. datamodels Ilyas et al., 2022 vs influence functions Grosse et al., 2023), and even when the methods share the same ideal, they tend to diverge in terms of how they approximate the ideal (e.g. Pruthi et al. 2020 vs Grosse et al. 2023). Hence, there is a need to better understand the relative differences, strengths and weaknesses of various TDA methods. This may be done by careful analysis of these methods (Bae et al., 2022), or through empirical evaluation on standardized benchmarks (Akyürek et al., 2022a). Another open question is how TDA methods can be efficiently applied to analyze models with multiple stages of training that differ in their loss functions (such as LLMs). Furthermore, there are currently no (theoretically grounded) TDA methods that can be applied to reinforcement learning problems - even for the cases where the model has not undergone any pretraining (i.e. pure reinforcement learning training). Finally, there is a need to explore the application of TDA methods to filter problematic data from pretraining datasets. A challenge in this regard would be determining whether or not any filtering done using small-scale models results in improved alignment of larger-scale models (Grosse et al., 2023, Section 5.3). 9Also see Section 4.5.10 ## 3.1.4 Scaling Pretraining Using Human Feedback By default, the maximum likelihood training objective assumes that all data is of equally high quality. However, this is not true. As such, there is a need to improve the training process by including information about the quality and trustworthiness of different data points. *Pretraining with human feedback* (PHF) (Korbak et al., 2023) does so by using conditional training (prepending pretraining sentences with one of two special tokens, <|good|> and <|bad|>, based on estimates of human preferences) (Lu et al., 2022; Chen et al., 2021a). Korbak et al. compared PHF with several alternatives and found it to be an effective method of allowing an LLM to learn from harmful data during training, while disallowing the generation of harmful data at test time (by conditioning on the <|good|> token). Subsequently, in experiments with PALM-2, Anil et al. (2023) showed that conditional training also scales to larger LLMs. However, they only explored the use of the human feedback data for pretraining in limited contexts e.g. reducing toxicity or generating PEP-8-compliant Python code. Future work should consider how the technique can be used in broader contexts, e.g. using estimates of human preferences instead of programmatic reward functions. This may require extending the conditional training approach to use sequences of special tokens, coding for multiple different attributes (Keskar et al., 2019), or exploring the use of alternative training methods to conditional training. In particular, thought could be given to adopting various finetuning methods that directly learn from preference data, e.g. DPO (Rafailov et al., 2023) or KTO (Ethayarajh et al., 2024), for use during pretraining. Currently, there is also a poor understanding of the reasons for the effectiveness of PHF. Korbak et al. (2023) found that using feedback throughout pretraining is critical. However, Anil et al. (2023) pretrained PALM-2 on a significantly larger dataset and found that using feedback for only a fraction of the pretraining documents was sufficient for drastically reducing toxicity. Overall, there is currently limited research on understanding how PHF scales with model size, the complexity of preferences, and the amount of feedback during pretraining. ## 3.1.5 Modifying Pretraining To Improve Effectiveness Of Downstream Safety And Alignment Efforts It is plausible that simple modifications to the pretraining process could result in significant downstream gains in the alignment, safety, and security of these models. For example, several studies have developed retrieval-augmented LLMs which can provide a number of benefits such as a reduction in hallucination (Borgeaud et al., 2022) and a reduction in privacy risk (Huang et al., 2023e). Exploring modifications to the pretraining process that make the models easier to interpret could in particular be highly impactful. This could take various forms, e.g. external structure might be imposed on models so that their functionalities can be easily translated to human-readable code (Friedman et al., 2023); or models might be trained so that their internal causal structure realizes a given high-level causal structure (Geiger et al., 2022), or model architecture could be modified so that the models avoid learning polysemantic neurons (Elhage et al., 2022a) (see Section 3.4 for further details). Some other interesting avenues of research that might assist with downstream safety and alignment efforts include the development of *task-blocking* models (Henderson et al., 2023c), and the development of pre-training methodologies that allow easier modifications to be made to models later if needed, for example, forcing the model to *unlearn* and forget selective parts of the training data (Pedregosa & Triantafillou, 2023; Liu et al., 2024b)(also see Section 3.2.4). Misaligned pretraining of the LLMs is a major roadblock in assuring their alignment and safety. The chief cause of this misalignment is widely believed to be that LLMs are trained to imitate largescale datasets that contain undesirable text samples. The large scale of these datasets makes their auditing and manual filtering of such undesirable samples difficult. There is a need to develop scalable techniques for data filtering, data auditing, and training data attribution to help identify harmful data. Furthermore, even after harmful data is identified, further research is needed to find the most effective ways to address the harmful data. In addition to directly improving the alignment and safety of pretrained models, future work could explore ways in which pretraining could be modified to facilitate other processes (e.g. safety finetuning or interpretability analysis) that can help to assure alignment and safety. ![42_image_0.png](42_image_0.png) 69. How can methods for detection of harmful data be improved? The complex nature of harmful ![43_image_1.png](43_image_1.png) ![43_image_2.png](43_image_2.png) ![43_image_3.png](43_image_3.png) ![43_image_4.png](43_image_4.png) ![43_image_5.png](43_image_5.png) data (Rauh et al., 2022) makes it difficult to develop automated methods to effectively remove all such data. Can we use feedback from human labelers, in a targeted fashion, to directly improve the quality of the pretraining dataset? ←- 70. How can the effects of harmful data be effectively mitigated? Instead of removing harmful data, can it be *edited*, or rewritten, to remove the harmful aspects e.g. by an existing LLM? Alternatively, can we add synthetic data, generated procedurally, to the model such that the effects of harmful data are mitigated? ←- 71. How can we develop static dataset auditing techniques to identify harmful data of various types? Can existing techniques (e.g. Elazar et al., 2023) be extended for this purpose? ←- 72. How can dynamic dataset auditing techniques (e.g. Siddiqui et al., 2022), which leverage ![43_image_0.png](43_image_0.png) 73. How can we further scale training data attribution methods, in particular, those utilizing influence functions? How can we leverage insights from training data attribution methods to 74. How can the effectiveness of pretraining with human feedback (PHF) techniques be improved? Korbak et al. (2023) utilize conditioning on binary tokens only (good/bad); how can it be generalized to more granular forms of feedback e.g. harmless and helpfulness scores? In what other ways can conditional training at train time, like PHF, be used to improve the alignment 75. How can the pretraining process be modified so that the models are more amenable to 76. Can we develop *task-blocking* language models, i.e. pretrain language models in a way that they are highly resistant to learning or performing specific harmful tasks? ←- ## 3.2 Finetuning Methods Struggle To Assure Alignment And Safety ![43_Image_6.Png](43_Image_6.Png) ![43_Image_7.Png](43_Image_7.Png) ![43_Image_8.Png](43_Image_8.Png) ![43_Image_9.Png](43_Image_9.Png) Safety finetuning, using feedback from human labelers and/or feedback from other LLMs prompted with an evaluation rubric is the primary method for improving the safety, alignment, and desirability of responses produced by LLMs (OpenAI, 2023c; Bai et al., 2022a;b). However, these methods have empirical shortcomings and do not necessarily result in a safe model. In particular, even after undergoing safety finetuning, LLMs retain undesirable capabilities and knowledge. For example, recent work has shown that a small amount of further finetuning on adversarial examples can effectively undo safety finetuning (Yang et al., 2023a; Qi et al., 2023b; Lermen et al., 2023; Zhan et al., 2023). Relatedly, safetyfinetuned LLMs can be "jailbroken" via carefully chosen prompts to generate various types of undesirable content that they were explicitly fine-tuned not to generate (c.f. Section 3.5). Further, finetuning fails to robustly remove the tendency of LLMs to exhibit stereotypical biases in novel untested scenarios (Wan et al., 2023b). Hence, there is a need to develop better finetuning methods - or alternative techniques - that better assure model safety and alignment. In this section, we review several challenges in this regard and propose research directions to tackle those challenges10. ## 3.2.1 How Does Finetuning Change A Pretrained Model? When a pretrained model undergoes safety finetuning (e.g. for harmlessness and helpfulness (Bai et al., 2022a)), how does this actually change the pretrained model? Specifically, to what extent does finetuning fundamentally change the mechanisms and capabilities within a model? Several studies have theorized that changes induced by finetuning are *superficial* and amount to learning a thin wrapper around the capabilities that were learned during pretraining (Zhou et al., 2023b; Lubana et al., 2023; Jain et al., 2023b; Lee et al., 2024). Lin et al. (2023a) analyzed the impact of finetuning on the conditional distribution of tokens and found that finetuning impacts the distribution of only a very small fraction of tokens related to safety disclaimers, conversation style, etc. They found that this impact is most pronounced for tokens early in the context; the differences in the distribution between a finetuned LLM and a base LLM tend to diminish as greater context is available. On the other hand, Clymer et al. (2023) found that in some cases, finetuning can add new capabilities to the base LLM - a less superficial change. 10See (Casper et al., 2023a) for a complementary discussion on the limitations of reinforcement learning from human feedback. The question of whether finetuning can add new capabilities naturally leads to the question of whether or not finetuning can remove them; ideally, finetuning should cause the LLM to *forget* undesirable knowledge and capabilities from pretraining. This is also the expected behavior, as evidence from work on continual learning indicates that deep neural networks often forget previously learned skills in a phenomenon called "catastrophic forgetting" (French, 1999; Kirkpatrick et al., 2017). However, pretrained LLMs are naturally resistant to forgetting (Scialom et al., 2022; Cossu et al., 2022). Consequently, even when it seems that a finetuned LLM has *forgotten* how to perform a previously known task, its performance on that task can be easily recovered via either specialized prompting (Kotha et al., 2023) or via finetuning with a small amount of data from that task (Jain et al., 2023b; Yang et al., 2023a; Qi et al., 2023b; Lermen et al., 2023; Zhan et al., 2023). It is not well-understood why LLMs are highly resistant to forgetting. Ramasesh et al. (2022) found that scale is partially responsible for increased robustness to catastrophic forgetting, while Li et al. (2022a) found that transformer-based models are more robust to forgetting than other architectures. Lubana et al. (2023) and Juneja et al. (2022) suggest that fine-tuned models remain in distinct basins of the loss function predetermined by pretraining, making the fact that LLMs retain a great deal of knowledge through finetuning unsurprising. However, these studies are focused on vision models, or of (small-to-medium) LMs finetuned for specific tasks, so it is not clear to what extent these explanations apply to LLMs. Hence, there is a need for work that directly studies these phenomena on language models to better understand the extent to which current finetuning techniques can induce forgetting within LLMs on targeted tasks. Relatedly, more work is needed to understand the extent to which finetuning fundamentally changes the mechanisms and capabilities of models and what factors influence these changes. This could be done via behavioral evaluation on controlled task distributions, or through interpretability analysis to understand how the internals of the model change when a model is finetuned on different task distributions (Jain et al., 2023b; Clymer et al., 2023). ## 3.2.2 Finetuning Misgeneralizes In Unpredictable Ways Finetuning is primarily performed on text in the English language (OpenAI, 2023c), but tends to generalize to many other languages as well (Ouyang et al., 2022; Clymer et al., 2023). However, this generalization can fail in unpredictable ways, e.g. not generalizing to the text encoded in base64 (Wei et al., 2023a), or in low-resource languages (Yong et al., 2023). In a controlled study across multiple distribution shifts, Clymer et al. (2023) found similar evidence of unpredictable generalization of LLM-based reward models. They observed that factors such as in-distribution accuracy or including a small number of training examples from the target distribution are not predictive of generalization to the target distribution. LLM-based reward models, in general, are known to rely on spurious correlations (Bai et al., 2022a; Singhal et al., 2023b). There is a need to improve our understanding of how finetuning generalizes. This can be facilitated by directly studying and evaluating generalizations of LLM-based reward models used to provide feedback for finetuning, as well as studying the LLMs finetuned on this feedback (Lambert et al., 2023). Furthermore, more sophisticated fine-tuning methods e.g. those based on methods for better OOD generalization (Zhou et al., 2023e; Arjovsky et al., 2019; Sagawa et al., 2019; Krueger et al., 2021; Lubana et al., 2023) could be developed. Alternatively, the use of richer training signals, e.g. fine-grained feedback (Wu et al., 2023a) or critiques (Scheurer et al., 2023a), could be explored to minimize the occurance of spurious correlations in LLMs. Developing benchmarks that easily enable evaluating and comparing generalization abilities of various finetuning methods can help stimulate greater research in this regard. ## 3.2.3 Output-Based Adversarial Training May Incentivize Superficial **Alignment** Adversarial training, i.e. finetuning combined with red-teaming, is the standard technique to patch flaws in models when they appear, but it is unclear the extent to which it can be used to thoroughly correct errors in LLM reasoning. Red-teaming is often done manually. Hence, only a small number of training points are generally available for finetuning the model. As a result, adversarial training does not reliably eliminate the vulnerability of the adversarially-trained model to future attacks using the same method (Ziegler et al., 2022). When presented with adversarial examples, LLMs may either learn the correct generalizable solution or learn spurious features from the examples instead. The latter is often the simpler solution (Zhang et al., 2021; D'Amour et al., 2022; Du et al., 2023a), and, hence, the LLM is more likely to learn spurious features, resulting in *superficial* alignment (Du et al., 2022; Perez et al., 2022b). The limitations of adversarial training seem to stem in part from the fact that LLMs are not fine-tuned to make decisions that are consistent with a coherent decision-making procedure - they are merely trained to produce text that will be rewarded (Zhang et al., 2023e). A potential alternative to this could be to supervise the entire decision-making process to ensure that LLM outputs are "right for the right reason". This may be done via 'process supervision' (Uesato et al., 2022; Lightman et al., 2023), which has been shown to improve performance on mathematical reasoning problems (Stuhlmüller & Byun, 2022; Lightman et al., 2023), but has not yet been explored extensively in the context of improving the alignment and safety of LLMs. Training language models to offer consistent responses under augmentations to prompts has also been proposed, but research is highly preliminary (Chua et al., 2024). Future work may consider applying process supervision or consistency training, perhaps in conjunction with red-teaming, to evaluate whether or not it leads to greater generalization of aligned and safe behavior. An alternative technique to explore is latent adversarial training (Miyato et al., 2018; Hubinger, 2019; Kumari et al., 2019; Casper et al., 2024b). Currently, gradient-based methods for discovering adversarial inputs tend to be slow because the discrete nature of tokens limits the efficiency of gradient-based optimization methods. Latent adversarial training can work around this issue by directly finding and finetuning on hidden states (e.g. embeddings (Kuang & Bharti, 2021)) responsible for problematic outputs. Attacking the model in the latent space is a relaxation of the problem of attacking it in the input space, so latent adversarial training may offer stronger assurances of safe behavior than simple adversarial training (Casper et al., 2024b). On the other hand, given that different forms of adversarial robustness are known to trade-off against each other (Tramer & Boneh, 2019), this might prove counter-productive for robustness against feasible inputs. Another motivation of latent space attacks is that some failure modes might be easier to elicit via the latent space than in the input space (e.g. trojans (Chen et al., 2017)) because the concepts that are important to the system's reasoning are represented at a higher level of abstraction in the latent space (Johnston & Fusi, 2023). ## 3.2.4 Techniques For Targeted Modification Of Llm Behavior Are Underexplored There is a need to develop scalable methods that allow making targeted modifications to LLMs. A number of approaches have been proposed for this purpose: model editing (Mitchell et al., 2021; Meng et al., 2022), concept erasure (Ravfogel et al., 2022a;b; Belrose et al., 2023), subspace ablation (Li et al., 2023c; Kodge et al., 2023), activation engineering (Li et al., 2023e) and representation editing (Turner et al., 2023b;a; Li et al., 2023f; Zou et al., 2023a; Gandikota et al., 2023). However, a common shortcoming of all these techniques is the reliance on attributing certain model capabilities to a few editable architectural components within the model. This is typically done using interpretability techniques which currently lack robustness and scalability (c.f. Section 3.4 for further discussion and research directions). Hence, there does not yet exist a reliable toolbox for cleanly removing specific information from LLMs (Patil et al., 2023) - simply prepending instructions for the desired edit in the context remains a strong yet trivial baseline (Onoe et al., 2022; 2023). However, this is not secure due to prompt extraction attacks (Zhang & Ippolito, 2023). Often the goal is to remove specific knowledge or capabilities from LLMs, e.g. removing personally identifiable information stored within LLMs (Nasr et al., 2023). Prior work on this has been done under the paradigm of machine unlearning, largely in the differential privacy literature (Bourtoule et al., 2021; Nguyen et al., 2022; Sekhari et al., 2021; Liu et al., 2024b; Goel et al., 2024). Finetuning-based machine unlearning methods have shown potential for making LLMs forget a specific domain of knowledge (Jang et al., 2022; Yao et al., 2023; Eldan & Russinovich, 2023), but can fail to fully erase undesired knowledge (Shi et al., 2023; Lynch et al., 2024). However, further research is needed to better understand whether or not machine unlearning is a competitive option for improving safety and alignment. Furthermore, existing studies of machine unlearning focus on removing specific facts from LLMs. Work is needed on methods for making broader edits to the "capabilities" of the models that affect their behavior more generally. The current research on removing undesirable knowledge from LLMs is actively developing. Additional research using concrete benchmarks and evaluation criteria (Li et al., 2024) can direct the community's goals and allow for standardized comparisons of various techniques. Ideally, unlearning techniques should be effective at removing knowledge in novel circumstances, effective relative to simple baselines (such as prompt-based instruction), avoid negative side-effects (such as reduced performance on unrelated tasks), robust to 'undoing' via further finetuning or 'relearning' of undesirable capabilities via in-context learning, and robust to adversarial attacks such as jailbreaks (Lynch et al., 2024). ## 3.2.5 Removal Of Unknown Undesirable Capabilities Due to the unsupervised nature of pretraining, LLMs learn various capabilities and acquire diverse knowledge in unpredictable ways (c.f. Section 2.3). Existing safety-finetuning techniques can only act to mitigate an undesirable capability within an LLM if it is *known*. Hence, an *unknown* undesirable capability may continue to be active within a model, even after known undesirable capabilities have been effectively removed (Hubinger et al., 2024).An unknown undesirable capability may be harmful in itself, or it may be undesirable due to its potential to be used as an attack vector by adversaries and act as a "zero-day vulnerability" (Bilge & Dumitraş, 2012). For example, the capability of GPT-4 to understand and respond to the text encoded in base64, and other forms of encodings and ciphers, was exploited to jailbreak it (Wei et al., 2023a; Yuan et al., 2023a). As LLMs are scaled further, and extended by training on many modalities simultaneously, they may acquire many obscure undesirable capabilities which may pose security and safety risks, unknown to their developers. Improving the coverage of evaluations (c.f. Section 3.3) might help move capabilities from unknown to known. Alternatively, research could seek to develop methods that force the network to forget unknown, undesirable capabilities. Yuan et al. (2023a) found that GPT-4 turbo, a quantized and distilled version of GPT-4, had a considerably reduced ability to interpret and respond to text encoded in the form of "ciphers". This suggests that compression (Liebenwein et al., 2021; Du et al., 2021; Pavlitska et al., 2023), distillation (Du et al., 2021; Li et al., 2021; Sheng et al., 2023; Pang et al., 2023), and latent adversarial training (Casper et al., 2024b) might be effective tools in this regard. A major goal of finetuning is to remove potentially undesirable capabilities in a model while steering it toward its intended behavior. However, current approaches struggle to remove undesirable behaviors, and can even actively reinforce them. Adversarial training alone is unlikely to be an adequate solution. Mechanistic methods that operate directly on the model's internal knowledge may enable deeper forgetting and unlearning. Finally, behind these technical challenges is a murky understanding of how finetuning changes models and why it struggles to make networks "deeply" forget and unlearn undesirable behaviors. 77. To what extent does pretraining determine the concepts that the LLM uses in its operation? To what extent can finetuning facilitate fundamental changes in the network's behavior? Can we develop a fine-grained understanding of changes induced by finetuning within an LLM? ←- 78. Can we improve our understanding of why LLMs are resistant to forgetting? How is this resistance affected by the model scale, the inductive biases of the transformer architecture, the optimization method, etc.? ←- 79. Can we create comprehensive benchmarks to assist in evaluating and understanding generalization patterns of finetuning? ←- 80. Can we develop more sophisticated finetuning methods with better generalization properties? For example, by basing them on OOD generalization methods in deep learning or using explanation-based language feedback (e.g. critique) to prevent reliance on spurious features? ←- 81. How can we ensure that finetuning on a small number of adversarial ("red-teamed") samples generalizes correctly? Are process supervision and latent adversarial training viable methods in this regard? ←- 82. Can machine unlearning techniques be used or extended to precisely remove knowledge and capabilities from an LLM? ←- 83. How can we reliably benchmark methods for targeted modification of LLM behavior? ←- 84. How can unknown undesirable capabilities be removed from an LLM? Are compression and/or distillation effective ways to achieve this behavior? What are the *kinds* of capabilities that are lost when an LLM is compressed, and/or distilled? ←- ![46_image_0.png](46_image_0.png) ![46_image_1.png](46_image_1.png) ![46_image_2.png](46_image_2.png) ![46_image_3.png](46_image_3.png) ## 3.3 Llm Evaluations Are Confounded And Biased Sound and fair empirical evaluations are necessary to develop a calibrated understanding of the capabilities of LLMs, as well as their risks. Evaluation has historically been a sore point in the fields of machine learning and natural language processing (Raji et al., 2021; Bowman & Dahl, 2021; Liao et al., 2021; Hutchinson et al., 2022; Kapoor & Narayanan, 2023; McIntosh et al., 2024); however, the evaluation crisis is considerably more acute for LLMs (Mitchell, 2023). This is due to their unprecedented general-purpose nature relative to machine learning models of the past, the introduction of novel issues that hinder accurate evaluation such as prompt-sensitivity, and the worsening of existing issues such as test-set contamination. The evaluation crisis needs to be addressed urgently as otherwise current evaluation methods may lead us to overestimate or underestimate the capabilities of LLMs, and prevent accurate estimation of their risks. Within this section, we highlight several challenges in this regard and invite the wider community to work towards addressing these challenges and to be cognizant of them in the evaluation of the LLMs. ## 3.3.1 Prompt-Sensitivity Confounds Estimation Of Llm Capabilities Current LLMs are highly sensitive to prompting, and their performance on a particular task may vary drastically depending on the prompting strategy (Sclar et al., 2023; Mizrahi et al., 2023; Ramesh et al., 2023). Further, LLMs are generally evaluated without access to tools like calculators, code-interpreters, the internet etc. The combination of these two factors leads to underestimates of the capabilities and potential performance of LLMs. For example, Zhou et al. (2023a) show that GPT-4 accuracy on the MATH dataset can be improved from 53.9% to 84.3% via the use of special prompting and a code-interpreter. Despite years of research, it remains impossible to guarantee that a particular prompt is *optimal* for a particular task, and does not underestimate a model's performance. Accurate accounting of LLM capabilities requires addressing and accounting for prompt-sensitivity. The prominent approaches to address prompt-sensitivity include instruction tuning, hand-designing prompts, and learning prompts. Instruction tuning (Ouyang et al., 2022; Wei et al., 2021) improves the ability of a model to understand the user's intent, and hence mitigates prompt sensitivity to a certain extent. However, even for an instruction-tuned model, prompt engineering can help elicit better performance. Hand-designing prompts can be highly effective but is inherently inefficient and not scalable. Hence, future work should focus on advancing learning-based or data-driven approaches (e.g. Fernando et al., 2023; Qin & Eisner, 2021) to discover the best prompts for a given task in an automated fashion. We note that learning-based approaches may introduce additional variables (e.g. training dataset) to which evaluation might be sensitive. This suggests that it might not be possible to completely eliminate prompt sensitivity - in which case, it is important to ensure that our evaluation *accounts* for prompt sensitivity. Sensitivity of the evaluation to different aspects of the evaluation pipeline has been a long-standing challenge within machine learning, and as such, past work in this area can provide inspiration for solutions. In particular, deep reinforcement learning has had to grapple with the issue of seed sensitivity, resulting in work to identify best practices (Agarwal et al., 2021) and the development of novel types of benchmarks (Cobbe et al., 2020). ## 3.3.2 Test-Set Contamination Overestimates Llm Capabilities The opaque nature of pretraining datasets makes it difficult to guarantee that a particular *test* data point, or other very similar data points, have not been observed by the LLM previously during the pretraining phase. Several studies show that LLMs perform better on *familiar* data (Wu et al., 2023b; McCoy et al., 2023) and if the evaluation is performed using data that has previously been observed by the LLM (or very similar data), it risks *overestimating* LLM capabilities (Tirumala et al., 2022). Indeed, Roberts et al. (2023b) show that there is a positive correlation between the GPT-4 pass-rate and the popularity of a question on Codeforce (an online competitive coding platform) and Project Euler (an online mathematical reasoning platform) before the cutoff training date which disappears after the cutoff date; and several other studies have identified contamination of pretraining datasets with known evaluation datasets (Elazar et al., 2023; Magar & Schwartz, 2022; Golchin & Surdeanu, 2023; Oren et al., 2023). Avoiding contamination while scraping the pretraining dataset from the web is difficult due to the dynamic nature of the internet: online content tends to spread across the internet over time. Li (2023) found that blacklisting the original websources used to curate the MMLU dataset (Hendrycks et al., 2020b) results in only a 1.5% reduction in MMLU dataset contamination. Hence, there is a need to develop enhanced techniques to find and remove contaminated data within a pretraining dataset; or confirm the absence of contamination during evaluation. Most techniques at the moment focus on identifying verbatim contamination (Elazar et al., 2023; Li, 2023). However, contaminated data might also occur in pretraining datasets in mutated form, e.g. paraphrased or translated into another language. So, future techniques ought to focus on semantic similarity measures to more broadly identify contaminated data (Golchin & Surdeanu, 2023). In cases when a model's training dataset is available, can training data attribution methods like influence functions (Grosse et al., 2023) be used to identify whether an LLM is generating an answer from memory, or performing the required reasoning on the fly? Furthermore, different strategies have been adopted by benchmark creators to prevent the leakage of benchmark content into the training datasets, e.g. use of canary strings (Srivastava et al., 2022), hiding the benchmark behind an API (Sawada et al., 2023), or distributing the datasets as password-protected archives (Rein et al., 2023). Further research is needed to examine the relative robustness of these tactics against test-set contamination. It is possible that due to the aforementioned issue of the diffusion of content on the internet, none of these measures will satisfactorily prevent test-set contamination. ## 3.3.3 Targeted Training Confounds Evaluation Current LLMs undergo extensive targeted finetuning to suppress undesirable behaviors in simple and wellknown contexts. However, there is little evidence that this generalizes to more complex, harder-to-evaluate contexts which might occur in real-world use (Clymer et al., 2023) (also see Section 3.2). For example, while ChatGPT does not appear to embody steoreotypical biases in simple evaluations, it strongly manifests those biases when prompted to take on a persona (Gupta et al., 2023), or when asked to perform a complex task (Wan et al., 2023b). This *Goodharting* of simpler evaluation schemes has created a challenge where demonstrating a failure mode requires significantly greater creativity and effort. This challenge may be countered by developing novel evaluation paradigms, e.g. persona-based evaluation (Gupta et al., 2023), counterfactual evaluation (Wu et al., 2023b) or procedurally-generated evaluation (Yu et al., 2023a). Establishing the validity and/or limitations of evaluation methods in the presence of such Goodharting is an open research direction. ## 3.3.4 Biases In Llm-Based Evaluation Several studies have explored using an LLM to evaluate itself or to evaluate other LLMs (Bai et al., 2023b; Zheng et al., 2023; Dubois et al., 2023). This approach has the inherent benefit of being cheaper, faster, and more easily scalable than human evaluation (Zhuo, 2023; Perez et al., 2022b). However, LLMs also exhibit many human-like cognitive biases in evaluation e.g. position biases, the preference for verbose and longer answers, egocentric biases i.e. preferring its own output (Zheng et al., 2023; Wang et al., 2023g; Koo et al., 2023; Wu & Aji, 2023). LLM-based evaluation may also differ significantly from human evaluation (Koo et al., 2023; Perez et al., 2022b). This makes current LLM-based evaluation less trustworthy - however, Wu & Aji (2023) found that modifying the LLM-based evaluation protocol to account for the cognitive limitations of LLMs can significantly enhance the fidelity of their evaluations, especially when compared to crowdsourced human evaluation. Thus, future research should focus on furthering the understanding of biases present in LLM-based evaluation, and in particular, develop schemes to mitigate these biases. Furthermore, understanding the sources of disagreement between human evaluations and LLM-based evaluations may help to improve alignment between them and also to provide insight as to which evaluation method is preferred in a given setting and how can they be combined effectively. In general, while studying the reliability of LLM-based evaluation, it may help to distinguish whether an LLM is being used to evaluate itself, a more capable LLM or a less capable LLM. From the perspective of *self-improving* LLMs (Huang et al., 2022a), the first case is particularly important but has not yet been examined with appropriate care. For such a setup, Constitutional AI (Bai et al., 2022b) has been shown to be an effective method for LLM evaluation but further research is needed to better understand how the principles listed in the Constitution impact the reliability of LLM evaluation (Kundu et al., 2023). ## 3.3.5 Fallibility Of Crowdsourced Human Evaluation Human evaluation via crowdsourcing is an important source of LLM evaluation, but it is both challenging and expensive to obtain high-quality data (Hosking et al., 2023; Casper et al., 2023a, Section 3.1). One major challenge is annotator bias (Pandey et al., 2022) - which can not only result in incorrect evaluation but can also incentivize undesirable model behavior when this data is used for training (Perez et al., 2022b; Santurkar et al., 2023). These issues can be mitigated to a certain degree by understanding the biases exhibited by human evaluators (Wu & Aji, 2023; Hosking et al., 2023); and by designing evaluation protocols to be robust to these biases (Wu et al., 2023a; Ethayarajh & Jurafsky, 2022; Clark et al., 2021), and to account for errors that may arise from these biases in evaluation metrics (Xiao et al., 2023). Complementing human evaluators with LLMs could be an effective way to improve the quality of human evaluations (Saunders et al., 2022). Furthermore, developing better models than the Bradley-Terry model (Bradley & Terry, 1952) of how humans generate their preferences may also help tackle issues with human evaluation (Laidlaw & Dragan, 2022; Lindner & El-Assady, 2022; Fageot et al., 2023; Ethayarajh et al., 2024). Lastly, Veselovsky et al. (2023) found that LLMs are commonly used among crowdsource workers, which speeds up annotation at the potential cost of validity. To avoid such issues, it is imperative to ensure appropriate incentives for the data workers (Prassl & Risak, 2017; Shah et al., 2015) (c.f. Section 4.5.10). ## 3.3.6 Systematic Biases In Evaluation The machine learning ecosystem has several different types of systematic biases, such as the underrepresentation of women (Schluter, 2018) and the overrepresentation of U.S. centric point of view in research (Septiandri et al., 2023). These systematic biases result in 'blindspots' in LLM evaluation (Hutchinson et al., 2022). For example, despite the fact that most LLMs are multilingual, evaluation of LLMs is largely conducted in English. As a result, LLMs can exhibit failure modes in other languages. Ghosh & Caliskan (2023) found that ChatGPT shows stereotypical gender biases when translating into languages like Bengali, Farsi, and Turkish, etc. Similarly, Yong et al. (2023) found that GPT-4 engages in unsafe behavior with higher frequency in low-resource languages, i.e. low-resource languages act as jailbreaks. Additionally, systematic biases may contribute to quality-of-service harms (Blodgett et al., 2022). Hence, there is a need for research to discover systematic biases that exist in the evaluation of LLMs and to develop ways to address them. ## 3.3.7 Challenges With Scalable Oversight As LLMs gain further capabilities and are applied to increasingly difficult and complex tasks, it will be increasingly difficult for humans to reliably evaluate the performance of LLMs. This raises the need for scalable oversight - evaluation methods that remain effective past the point that models start to achieve broadly human-level performance (Bowman et al., 2022). Many methods for scalable oversight have been proposed - e.g. consistency checks (Fluri et al., 2023), self-evaluation (Bai et al., 2022b), supervision via debate (Irving et al., 2018; Bowman et al., 2022; Bowman & Lanham, 2023), weak-to-strong generalization (Christiano et al., 2018; Burns et al., 2023; Hase et al., 2024), process supervision (Uesato et al., 2022; Lightman et al., 2023) and recursive reward modeling (RRM) (Leike et al., 2018; Wu et al., 2021). A key challenge is the *fuzzy* nature of the aforementioned proposals, with important technical details often left unspecified. There is a need for further research to formalize these proposals in the context of LLMs. Fundamental challenges with scalable oversight proposals include identifying a suitable 'alignment target' (Krueger, 2023, Section 1.3.1) and verifying that a method can successfully approximate that target. This is challenging because, by definition, humans struggle to evaluate the performance of tasks requiring scalable oversight. This raises the non-technical - but deeply practical - question of how a human overseer should respond when an AI system trained with scalable oversight appears (to them) to be misbehaving. We discuss closely related socio-technical challenges in Section 4.1.1. The scalability, robustness, and practical feasibility of these proposals are also currently not clear. Despite some initial work (Burns et al., 2023; Hase et al., 2024; Khan et al., 2024), the generalization properties of all of the proposed methods are unclear, i.e. how do these methods scale with respect to the task difficulty in practice. Secondly, most proposals, explicitly or implicitly, rely on the idea of decomposing a difficult task into smaller and easier-to-evaluate sub-tasks. However, most real-world tasks are unlikely to admit a clean decomposition, and hence, any decomposition of a sufficiently complex task will have some approximation error which may require novel techniques to address (Reppert et al., 2023). Thirdly, many proposals, e.g. debate include a human-in-the-loop component and thus might inherit the same challenges with human evaluation discussed above. An alternative to using human evaluation is to use a robustly-aligned LLM, but there is increasing evidence that our current alignment techniques do not result in the robust alignment of LLM models (Jain et al., 2023b) - and a non-robustly aligned AI agent might either fail to provide robust oversight due to its inherent (hidden) biases (Gupta et al., 2023), or become exploited by other more powerful LLM models (Meinke, 2023). This indicates the need to understand how robust different scalable supervision strategies are. Indeed, one could argue that with scalable supervision techniques, the most important thing is not empirical validation of the method, but theoretical characterization of the robustness of the mechanisms, e.g. to intentional or unintentional subversion on the part of AI agents (Barnes et al., 2020; Brown-Cohen et al., 2023). A more nuanced understanding of the differences between the aforementioned proposed methods for scalable oversight, and their relative strengths and weaknesses, can help the community prioritize research efforts accordingly, and provide insight as to which proposals are complementary, and which are interchangeable. Lastly, all the aforementioned proposals are generic, and while they have been applied with some success to LLMs, it is possible that better scalable oversight strategies can be developed specifically for LLMs. Many issues undermine our ability to comprehensively and reliably evaluate LLMs. Issues such as prompt-sensitivity, test-set contamination, and targeted training to suppress undesirable behaviors in known context confound evaluation. The validity of evaluation is further compromised by biases present in LLMs (which are used to evaluate other LLMs), and human evaluators. Furthermore, there exist 'systematic biases' that create blindspots in LLM evaluations, e.g. limited evaluations on low-resource languages. Finally, considering the rapid rate of improvement in LLMs' capabilities, we need robust strategies to implement scalable supervision which are currently lacking. 85. Can we develop automated methods that reliably find the best prompt for a given task or task instance? ←- 86. How can we account for prompt sensitivity when evaluating an LLM? ←- 87. How can the evaluations of LLMs be made trustworthy given the difficulty of assuring that there is no test-set contamination? Can we develop methods that can detect whether a given text is contained in the training dataset in mutated form, e.g. paraphrased or translated into another language? ←- 88. How can training data attribution methods be used to detect cases of LLMs responding to queries based on memorized knowledge when the training dataset is known? ←- 89. What measures can evaluation developers take to prevent leakage of the evaluation data into an LLM's training dataset? How effective are existing measures such as canary strings, hiding datasets behind APIs, or password-protecting dataset files in detecting accidental and/or deliberate leakage? ←- 90. How can the failure modes of an LLM be uncovered when the LLM has been explicitly trained to hide those failure modes? Are there general techniques, such as persona modulation, or counterfactual evaluation, that can be used for this purpose? ←- 91. What are the various ways in which an evaluation of an LLM by an LLM may be biased or misleading? How can LLM-based evaluation be made robust against such biases? ←- 92. What are the limitations, and strengths, of Constitutional AI-based LLM evaluation? How can we develop a nuanced understanding of how principles given in the constitution affect the evaluation of different LLM behaviors? How do LLMs handle issues such as underspecification or conflict in the constitutional principles? ←- 93. How can evaluation done by humans be made robust against the various known biases and cognitive limitations of humans? How can LLMs be used to complement human evaluators to improve the quality of human evaluations? 94. Can we develop better models of how humans generate their preferences than the widely-used Bradley-Terry model? 95. How can the 'blindspots' in LLM evaluation resulting from systematic biases be avoided? ←- 96. Can we formalize different proposed methods for scalable oversight in the context of evaluating LLMs? Can this formalization be used to understand the relative strengths and weaknesses of these proposals, through theoretical and empirical research? Which proposed methods are complementary, and which are interchangeable? ←- 97. Are there any decomposition strategies that generalize across tasks? Prior work has proposed task-specific decomposition strategies, e.g. for book summarization (Wu et al., 2021) or writing code (Zhong et al., 2023a). Do the proposed decompositions generalize to other tasks? Can language models automatically decompose tasks? ←- ## 3.4 Tools For Interpreting Or Explaining Llm Behavior Are Absent Or Lack Faithfulness To assure alignment and safety of LLMs, we can not merely rely on behavioral evaluations alone - especially given the severe limitations of current evaluation methods highlighted in the previous section. Thus, there is a need for reliable, robust, and scalable methods that may help us interpret and explain neural network-based models. Various *ways* to interpret model behaviors have been proposed in the literature. Unfortunately, all the interpretability methods suffer from various fundamental and practical challenges that limit their viability for providing assurances of safe behavior. We discuss two representative classes of methods in this section. The first class of method aims to develop an understanding of model behavior by opening up the 'black-box' and developing an understanding of the internal representations and mechanisms of a model. This includes work done in the research areas of representation probing and mechanistic interpretability. The second class of methods aims to explain model behavior by designing methods that cause the model to externalize critical parts of reasoning in an interpretable form, e.g. natural language (Reynolds & McDonell, 2021; Wei et al., 2022c; Kojima et al., 2022; Lanham, 2022). ## Fundamental Challenges There are some challenges that all interpretability and explainability methods must overcome. We discuss four such challenges in the following text. The first, and perhaps the most critical challenge, is that in order to make the interpretability problem tractable, interpretability methods typically *presume* that (internal) model reasoning works in specific ways. These presumptions often lead to abstractions that are dubious. The second challenge is that neural networks are in no way constrained to use human-like concepts. Indeed, advanced AI systems like AlphaZero have been found to use different concepts than ordinarily used by humans (Schut et al., 2023). This naturally makes the problem of interpreting such models much harder. The third challenge is that we require explanations generated by interpretability-based analysis to not just be *plausible*, but also be faithful. However, scalable evaluation of the faithfulness of an explanation is an extremely challenging problem. Finally, interpretability methods are used not just to identify undesirable patterns in LLM reasoning, but also to modify model behavior (Zou et al., 2023a). However, it is unclear how robust such modifications are and whether interpretability methods maintain validity when used to modify model behavior or not. ## 3.4.1 Abstractions Used For Interpretability Are Often Dubious The goal of interpretability is to produce *abstract* explanations of internal mechanisms of a model. However, it is unclear what kind of abstractions exist within neural networks that can be used for this purpose (Appendix A Zou et al., 2023a). Using the right abstraction can help preserve the most critical and useful information while simultaneously improving ease of understanding. On the other hand, using an incorrect abstraction can result in misleading explanations. For example, early interpretability studies on neural networks abstracted non-linear behavior in terms of a linear (interpretable) approximation around the input (Ribeiro, 2016; Lundberg & Lee, 2017). however, Bilodeau et al. (2022) show that popular feature attribution methods based on this abstraction - Integrated Gradients (Sundararajan et al., 2017) and Shapley Additive Explanations (SHAP; Lundberg & Lee, 2017) - can provably fail to improve on random guessing for inferring model behavior. In a similar vein, a large body of work on interpreting neural network representations has focused on interpreting individual neurons (e.g. Dalvi et al., 2019; Bau et al., 2020; Schubert et al., 2021), presuming that neurons *specialize*; however, this was later found to be problematic because individual neurons can be polysemantic and may not actually be specializing (Elhage et al., 2022b; Antverg & Belinkov, 2022; Geva et al., 2022; Geiger et al., 2023). Similarly, linear probes - learned either through supervised or unsupervised methods - are commonly used to discover structure within internal representations of neural networks (Azaria & Mitchell, 2023; Burns et al., 2022). However, probes are typically trained on a different dataset than the model, which may contain spurious features and recent work has shown that in some cases, these probes might be latching onto spurious features present in the datasets used to train these probes, rather than actual features represented within the model (Farquhar et al., 2023; Levinstein & Herrmann, 2023). Similarly, circuit-style mechanistic interpretability assumes that there exist task-specific circuits (Olah et al., 2020), or subnetworks, within neural nets that can be discovered. But it is unclear to what extent this is true (McGrath et al., 2023; Veit et al., 2016). Natural-language-based externalized reasoning presumes that that the internal reasoning processes of the LLM can be *faithfully* captured, and expressed, in natural language (Lanham, 2022). But it is unclear to what extent this is true (Wendler et al., 2024). In general, it is unclear what kind of computational structures exist within neural nets, making it difficult to develop appropriate abstractions of neural networks' computations. In addition to discovering abstractions that naturally arise within neural networks, it may be helpful to design training objectives that may incentivize a model to use (known) specific abstractions (Geiger et al., 2021). ## 3.4.2 Concept Mismatch Between Ai And Humans A fundamental challenge with interpretability is that we do not have guarantees that the functions that models learn rely on features or reasoning processes that translate well to human-comprehensible concepts (Mahinpei et al., 2021). This is especially a concern in domains where AI systems outperform humans, since the systems appear to leverage reasoning processes or features unknown to humans (Schut et al., 2023). As a result, relying on human concepts may limit our ability to discover appropriate concepts that provide the best mechanistic explanation of model behavior. Many interpretability methods (e.g. TCAV; Kim et al., 2017) are *top-down* (supervised) approaches to interpretability in the sense that they start with a hypothesis about a concept they are looking for in a model, and then utilize labeled datasets to discover how a model represents that concept. The main shortcoming of this approach is that if the model is using concepts unknown to us, then this approach may not be able to identify those concepts. Due to this inherent limitation, *bottom-up* (unsupervised) concept discovery methods that do not rely on an initial set of concepts to look for but instead aim to discover the most influential concepts may be more appropriate. Indeed, Schut et al. (2023) develop an unsupervised concept discovery method that successfully identifies, and isolates, novel chess concepts within AlphaZero (Silver et al., 2017) that are not present in human chess games. These chess concepts were then consequently taught to expert human players in a user study. While the expert humans were able to comprehend the novel concepts, they still struggled to learn them in cases where the novel concept conflicted heavily with the existing (human) notions of appropriate chess play. Unfortunately, the approach developed by Schut et al. appears to be highly specific to AlphaZero. Thus, there is a need to develop general procedures that may help translate between human and machine concepts in cases where machine concepts do not naturally map onto known human concepts. Alternatively, specialized methods could be developed to help AI models learn representations that are aligned with humans (Bobu et al., 2023; Muttenthaler et al., 2023). This is one of the goals of the field of representation alignment within machine learning: see Sucholutsky et al. (2023) for a recent survey. However, we note that representation alignment works generally include human-in-the-loop training which is inherently difficult to scale. The concept-mismatch problem may also undermine the validity of externalized reasoning (via natural language). In the cases of concept mismatch, externalized reasoning may approximate the internal model concepts using the closest human concepts but this approximation may cause externalized reasoning to become unfaithful. However, the generative nature of externalized reasoning may enable interactive communication between the human user and the model that may enable humans to learn novel concepts with less difficulty. ## 3.4.3 Evaluations Often Overestimate The Reliability Of Interpretability Methods Prior interpretability methods in machine learning have a fraught history. Over time, many different interpretability methods have been introduced with convincing case studies and theoretical grounding (Ribeiro, 2016; Lundberg & Lee, 2017; Sundararajan et al., 2017), gaining popularity in both core ML research and adjacent applications (Gilpin et al., 2018). Unfortunately, the same methods were later demonstrated to completely fail basic tests for usefulness to humans and provided no improvement over random baselines (Adebayo et al., 2018; Hase & Bansal, 2020; Adebayo et al., 2020; Bilodeau et al., 2022; Casper et al., 2023b).11 This trend continues to date: recent interpretability work in feature interpretation made claims (Bills et al., 2023) that failed to withstand rigorous evaluations (Huang et al., 2023c). Given this precarious situation, it is imperative to set up rigorous evaluation standards for the interpretability methods (Doshi-Velez & Kim, 2017; Miller, 2019; Krishnan, 2020; Räuker et al., 2023). A basic desideratum for model explanations is *faithfulness*, i.e. a given explanation should accurately reflect the model's internal reasoning (Jacovi & Goldberg, 2020). However, typically details about the model's internal reasoning are not known *apriori*. This makes it challenging to assess the faithfulness of a model explanation. As a result, different ways of evaluating faithfulness have been proposed and studied in the literature. However, often these strategies are specific to a particular interpretability method or model class 11We make this point not to fault past methods, but to emphasize the difficulty of the problem: explaining the behavior of a complicated non-linear function over high-dimensional data to a human in an efficient manner is an extremely difficult task. being interpreted. A relatively model-agnostic and method-agnostic notion of faithfulness is *counterfactual* simulatability which posits that for a model explanation to be faithful, a model computation (after some intervention) must change in the same way as a human would have expected it to change given model explanation and knowledge about the intervention (Doshi-Velez & Kim, 2017; Ribeiro et al., 2018; Hase & Bansal, 2020). For mechanistic interpretability, a faithful interpretation of a model circuit should enable us to predict how interventions on model inputs or the circuit itself change model behavior (Wang et al., 2022; Chan et al., 2023c). Model behavior here could be measured in terms of model outputs or intermediate computations (e.g. outputs of particular neurons) within a circuit. Furthermore, while the notion of simulatability introduced above presumes that the entity simulating the model is a human, that is not necessary in general, and the simulation entity could be a computer program as well, e.g. a RASP program (Zhou et al., 2023c). Such 'automated' measures of simulatability may be particularly helpful in developing benchmarks for interpretability methods. Currently, mechanistic interpretability tools are not practically competitive with non-mechanistic techniques for discovering novel properties of models and evaluating their behavior. In general, there is a need for benchmarks to standardize metrics of success and to help create better standards for evaluating explanation faithfulness (Schwettmann et al., 2023; Räuker et al., 2023). In addition to standardizing faithfulness evaluations, benchmarks will be most useful when they have clear implications for applications like detecting and mitigating alignment failures. In order to catch worst-case outcomes, explanation faithfulness may need to be evaluated particularly in adversarial settings, where models are optimized to exhibit certain alignment failures that we try to then detect and mitigate (for example, as done in trojan detection: Center for AI Safety, 2023; Casper et al., 2023b; Rando et al., 2024). For such cases, there is a need to design faithfulness metrics that focus on worst-case rather than average-case explanation faithfulness. Alternatively, evaluations could focus on faithfulness for high-risk inputs rather than the whole data distribution. A similar notion of faithfulness applies to model explanations based on externalized reasoning as well. A common way to evaluate faithfulness in the context of natural-language-based externalized reasoning is to intervene on model inputs and check that previous explanations agree with model outputs (i.e. intermediate reasoning steps, final predictions) on the new inputs. Unfortunately, in general, determining whether reasoning stated in natural language is *consistent* with an observed behavior or not reduces to a logical entailment (natural language inference) task (Dagan et al., 2005), which are known to be highly subjective and difficult to properly annotate (Pavlick & Kwiatkowski, 2019). As a result, current works that study the faithfulness of externalized reasoning only focus on simple tasks, e.g. multiple-choice questions (Lanham et al., 2023). Furthermore, given that the input space is quite large for LLMs, efficiently surfacing data for which model behavior will be inconsistent with previously stated reasoning can be extremely challenging (Chen et al., 2023b). Efficiently surfacing inconsistent behavior may require a hypothesis about why models might be inconsistent. For example, one might need to suspect that models are sensitive to answer choice ordering, in order to discover that models give inconsistent answers when perturbing this part of the prompt. As a result, detecting inconsistencies between stated reasoning and model behavior across datapoints is a difficult problem for humans to efficiently solve on their own. Scalable oversight methods (c.f. Section 3.3.7), which often use LLMs to assist humans in evaluations of a particular task, could help detect unfaithful reasoning efficiently (Saunders et al., 2022; Bowman et al., 2022). In cases where an automated measure of evaluating faithfulness for an (input, output, explanation) triplet is available, adversarial optimization could be used to efficiently search for perturbations to a given input to generate outputs inconsistent with a given explanation. While within this section, we have primarily focused our discussion on evaluating the faithfulness of explanations, in practice, explanations may need to fulfill additional desiderata to be useful to intended users, e.g. minimality (Wang et al., 2022) or ease-of-understanding (Bhatt et al., 2020). Hence, we stress that any claims about the interpretability methods being ready to be used by practitioners ought to be accompanied by context-based evaluations of interpretability methods, using e.g. user-studies. These context-based evaluations should strive to reflect realistic use cases and users. ## 3.4.4 Can Interpretability Methods Maintain Validity When Used To Modify Model Behavior? A number of approaches have been developed that allow modifying LLMs behaviors by intervening on sources of those behaviors (e.g. representations) within the model. This includes techniques such as model editing (Mitchell et al., 2021; Meng et al., 2022; Hernandez et al., 2023a; Tan et al., 2023; Wang et al., 2023f), subspace ablation (Li et al., 2023c; Kodge et al., 2023), and representation editing (Li et al., 2023e; Turner et al., 2023b;a; Li et al., 2023f; Zou et al., 2023a; Gandikota et al., 2023). Currently, these methods have only had limited success (see Section 3.2.4), however, as the scalability and efficiency of interpretability techniques improve, it is likely that such methods may become more popular and may see wider adoption. Indeed, prior work has used (other) interpretability methods such as saliency maps for optimizing neural networks to act in the desired way (Ross et al., 2017; Hendricks et al., 2018; Rieger et al., 2020; Stammer et al., 2021). Specific to LLMs, process supervision has been used to provide feedback to a model to improve its *externalized reasoning* (Lightman et al., 2023). However, due to the prevalence of Goodharting (Chu et al., 2017; Manheim & Garrabrant, 2018; Lehman et al., 2020), there is a concern that using an interpretability method as an optimization target may cause it to lose its validity due to overfitting. As a result, the underlying behavior that is being targeted may continue to persist within the model, even if interpretability results no longer indicate that it is present. Research is needed to understand to what extent this is a problem for various techniques and how it might be addressed. ## Challenges Specific To Interpreting Model Internals In addition to the aforementioned challenges, there also exist several challenges specific to techniques like representation probing and mechanistic interpretability that aim to understand representations and mechanisms within neural networks. These challenges include limited knowledge of how representations are structured within neural networks; polysemanticity of individual neurons; high sensitivity of interpretability results to the datasets used for analysis and limited scalability of techniques used for feature interpretation and circuit discovery. ## 3.4.5 Assuming Linearity Of Feature Representation A potential fundamental obstacle toward making progress in interpretability is that many methods rely heavily on the assumption that features are represented 'linearly', i.e. as (linear) directions in the activation space (Park et al., 2023b). Many studies have demonstrated the utility of this assumption in interpreting (Alain & Bengio, 2018; Rogers et al., 2020; Elhage et al., 2022b) and controlling (Turner et al., 2023b; Li et al., 2023e; Wang et al., 2023k) models, which lends support for this hypothesis. However, Hernandez et al. (2023b) provide some evidence that some concepts might be represented non-linearly. Further research is required to better understand which concepts are encoded linearly, and which are encoded non-linearly. Factors such as model capacity may also affect whether a given concept gets encoded linearly (potentially in a highly lossy way) or non-linearly. There is also a need to better understand the differences between how different model representation spaces (e.g. MLPs vs. attention layers) encode information. For example, are representations in later layers of the model more or less likely to be structured linearly? Further, even if features are represented linearly, we still need to define an appropriate inner product for assessing representation similarity, and it is not clear how to choose an inner product over model hidden states (Park et al., 2023b). A better understanding of these issues would enable us to design more reliable probing methods for assessing the informational content of internal model representations and determining the similarity between representations. ## 3.4.6 Polysemanticity And Superposition Individual neurons within a model can make a natural building block for constructing circuit-style interpretations of models. However, a notable challenge with establishing interpretations of individual neurons is that neurons often activate strongly for seemingly unrelated features in input data. This has recently been dubbed *polysemanticity* by Elhage et al. (2022b). The authors also argue that polysemanticity may be a result of models representing a large number of sparsely activated features in a lower-dimensional hidden representation, and present a toy model of this phenomenon, which they call 'superposition'. Motivated by this hypothesis, recent work has used sparse autoencoders to find more interpretable features (Sharkey et al., 2023; Graziani et al., 2023; Bricken et al., 2023; Sucholutsky et al., 2023; Cunningham et al., 2023). However, these sparse autoencoders can be orders of magnitude larger than the language models they are trained to explain, which may be prohibitively expensive for larger models (for further discussion on challenges in scaling interpretability analysis, see Sections 3.4.8 and 3.4.9). More work is needed to understand the root causes of uninterpretable, polysemantic features in large language models (relating it back to superposition, concept mismatch, or other causes), which should help shed light on which interpretability approach best addresses the root cause. ## 3.4.7 Sensitivity Of Interpretations To The Choice Of Dataset A specific dataset, which is often much smaller than the full training dataset, is typically used for discovering concepts within a model. Discovered concepts can thus be highly sensitive to what samples are included in the dataset used for concept discovery. Indeed, Ramaswamy et al. (2022) found that various top-down (supervised) concept discovery methods produce conflicting explanations for the same model output depending on which dataset was used to supervise the probe that detects concepts in the model. A similar issue applies to bottom-up (unsupervised) concept discovery methods. A particular neuron might activate on two distinctive sets of inputs depending on what dataset is used for retrieving highly activating inputs, a phenomenon termed an *interpretability illusion* (Bolukbasi et al., 2021). This high level of sensitivity to dataset design is concerning as this limits the generalizability of the interpretability results. To remedy this issue, one could aim to develop datasets for probing that contain all possible concepts of interest (which, for supervised concept discovery, would need to be labeled). For LLMs, such a dataset could be the original model training data (McDougall et al., 2023). However, there are clear compute issues that prohibit the use of pretraining data for concept discovery (e.g. even performing a correlational analysis of documents that highly activate neurons would demand a runtime proportional to pretraining). Given the fact that full training dataset can not be used, and use of any subset of training data risks omitting some concepts used by a model in its outputs, future methods should consider developing compute-adaptive methods for concept discovery that progressively grow a seed dataset by exploring the dataspace in directions that plausibly contain novel concepts used by the model for prediction. ## 3.4.8 Feature Interpretation Is Hard To Scale Current techniques and methods used in mechanistic interpretability are quite primitive and difficult to scale. Hence, so far, mechanistic interpretability has only successfully identified circuits explaining 95%+ of the variance in model behavior for highly toy problems, like modular division (Nanda et al., 2022). Feature interpretation or concept-discovery, i.e. identifying what (human) concepts different model features correspond to, is often the first step in interpreting the internal mechanisms of a model. A typical approach for this is to assign meanings to model features by looking at data that highly activates the feature; this generally requires a human to be in the loop to generate hypotheses about what concepts the feature under study might be encoding, and then perform a faithfulness evaluation for this hypothesis (like the "simulated neuron" experiment from (Bricken et al., 2023)). Both of these steps are highly time-consuming. Bills et al. (2023) attempt to automate this process by using an LLM, GPT-4, in the place of human, however, Huang et al. (2023c) show that concept labels generated by GPT-4 in aforementioned work have high error rates, are often ambiguous in meaning and do not provide a faithful explanation of model behavior on the whole. Furthermore, both human-in-the-loop and LM-automated approaches may struggle to properly identify the meaning of individual features due to the aforementioned issues of concept-mismatch problem (Section 3.4.2), polysemantic neurons (Section 3.4.6) and subjectivity of results to the concept annotation processes (Section 3.4.7). Researchers may consider exploring ways to sidestep the scalability limitations of feature interpretation methods, e.g. by focusing on interpreting relevant features for alignment failures or that explain a large portion of model behavior. Alternatively, instead of trying to interpret features directly at a highly fine-grained level, it may be helpful to develop methods that begin by explaining features at a higher level of abstraction, and then explain them at more fine-grained levels when necessary. For example, sparse autoencoders with fewer learned features appear to represent features at a higher level of abstraction, and more precision can be achieved later by fitting a model with more learned features (Bricken et al., 2023). Using language models to automate the feature annotation is also a promising approach worthy of further research and refinement (Hernandez et al., 2021; Bills et al., 2023). Automatic hypothesis generation methods could be developed to iteratively refine hypotheses by finding instances of disagreement between the explanation and observed model behavior. ## 3.4.9 Circuit Discovery Is Hard To Scale Isolating circuits within a large network is also highly challenging and typically done via manual inspection (Wang et al., 2022; Goldowsky-Dill et al., 2023). While there has been some recent progress towards automating circuit discovery (Conmy et al., 2023); more work is needed to develop automated and scalable methods for circuit discovery. The primary challenge for circuit discovery is that the hypothesis space can be quite large, possibly combinatorial as any of the subnetworks within a model can be the desired circuit. The ACDC algorithm proposed by Conmy et al. gets around this issue by initializing the whole network as the circuit, and then sequentially searching over the edges - deleting any whose deletion does not impact the performance on task of interest. However, even this greedy approach may be practically infeasible for extremely large models. This sequential deletion approach also runs the risk of missing *backup* or secondary computational nodes from the identified circuits, which only becomes active when the primary computational node is ablated (Wang et al., 2022). Thus, there is a need for work to develop automated and scalable methods for circuit discovery. In addition to designing theoretically grounded algorithms for circuit discovery, efficient heuristics could also be developed to help improve the efficiency of search. It may also be useful to design circuit discovery methods that operate at larger network scales than individual neurons and adjacent layers, in order to reduce the size of the circuit hypothesis space. ## Challenges Specific To Externalized Model Reasoning One form of interpretability that has become popular with LLMs is *externalized reasoning*. Specifically, we attempt to make model reasoning visible to an external observer by steering models to make predictions based on reasoning patterns that are by construction stated in natural language (making them naturally interpretable). For example, chain-of-thought reasoning (Reynolds & McDonell, 2021; Wei et al., 2022c; Kojima et al., 2022) prompts the model to break a problem down into sub-problems and then compose the final solution by solving these sub-problems. Externalized reasoning can also be performed using formal semantics, for example, program synthesis approaches use LLMs to generate programs (code) that when executed solve the given problem. However, despite its intuitive appeal, this strategy suffers from the fundamental challenges discussed above. Different variants of this strategy also have other shortcomings. Natural-language-based externalized reasoning can be misleading and unfaithful, while externalized reasoning methods that use formal semantics are difficult to apply to open-ended tasks like question-answering. ## 3.4.10 Externalized Reasoning In Natural Language May Be Misleading Externalized reasoning in natural language is a naturally attractive option for interpretability due to it being interpretable to any human user without any requirements of specialized knowledge. However, for it to be a *reliable* form of interpretability, it must faithfully indicate the critical causal factors responsible for particular model behavior. However, Turpin et al. (2023) found that models' CoT reasoning can be heavily steered towards incorrect answers by biasing features in prompts - e.g. by having a user suggest that a specific answer choice is correct. CoT explanations in such cases can give plausible rationalizations for why the biased answer is correct, without mentioning the influence of the biasing features in the prompt. In a similar result, Lanham et al. (2023) show that models can be insensitive to edits made to their reasoning. This problem of models generating plausible yet unfaithful reasoning may be exacerbated when models are applied to complex tasks that admit many plausible explanations for any individual behavior. In these cases, it may be very easy for models to give inconsistent yet plausible reasoning that could mask sensitivity to undisclosed factors influencing models. The facts that natural-language-based externalized reasoning *often* results in improved performance, yet can be unfaithful, are somewhat contradictory, and need to be reconciled. Lanham et al. (2023) evaluate some natural hypotheses in this regard, but further investigations, especially, with pretrained only LLMs, are required to better understand to what extent externalized reasoning is causally responsible for improved performance on various reasoning tasks (also see Section 2.4.5 for relevant discussion on the computational necessity of intermediate computations for transformers to solve certain kinds of reasoning tasks). Within this context, it would be useful to understand the extent to which our training protocols directly incentivize unfaithfulness, e.g. whether reinforcement learning with human feedback can cause models to learn to hide reasoning that would be disapproved by human evaluator (Perez et al., 2022b; Scheurer et al., 2023b). A related open question is how to encourage methods that supervise the reasoning process of the LLMs, e.g. process supervision (Lightman et al., 2023), to improve the faithfulness of reasoning given by the model, and/or how to ensure they do not make it less faithful. There is also a need for research to develop methods that help improve the faithfulness of LLM-reasoning. Some decomposition-based methods, that break down problem into subproblems and generate reasoning steps only for individual subproblems before recombining model reasoning and outputs into a global solution, have reported improvements in the faithfulness of model reasoning (Eisenstein et al., 2022; Radhakrishnan et al., 2023; Reppert et al., 2023). This indicates that imposing greater structure on reasoning could help enforce the model to use consistent reasoning patterns, which can in turn limit instances of plausible yet unfaithful reasoning. However, imposing structure does not guarantee that the model respects the intended semantics of the structure, as indicated by issues like steganography (Chu et al., 2017). An alternative research direction could be to train the models directly to use consistent reasoning patterns across inputs (Chua et al., 2024). Another fundamental issue faced by natural-language-based externalized reasoning is that for natural languages, there exists a trade-off between completeness and efficiency of communication (Grice, 1975; Piantadosi et al., 2012). This tradeoff dictates that for natural-language-based externalized reasoning to be easy to evaluate for humans, a model must do some lossy compression of the *complete* reasoning, when the complete reasoning is excessively lengthy (as may be the case for complex tasks). This may result in unfaithful explanations if important elements of model reasoning get omitted. Alternatively, if no compression is done, the generated explanation might be excessively long and difficult to evaluate for humans; which may result in overlooked errors in the explanation. To avoid this pathology, it would be useful to understand what level of completeness in explanations is required to avoid alignment and safety failures and to develop interactive structured reasoning methods that may allow *selectively* zooming-in on specific aspects of the model reasoning for which greater details are required, similar to debate (Irving et al., 2018) (c.f. Section 3.3.7). Such approaches could be used to gain additional information about individual high-stakes model decisions. ## 3.4.11 Externalized Reasoning Via Formal Semantics Is Not Widely Applicable Natural languages have informally defined semantics. As a result, natural languages have inherent ambiguity which means different statements may have different meanings depending on the context, and how the receiver interprets them (Huang et al., 2023c, Section 5.1). In contrast, languages with formally defined semantics (e.g. programming languages like Haskell) have mathematically rigorous interpretations associated with each individual construct, and also allow defining novel (higher-level) constructs as needed. This allows for efficient yet precise communication, making these languages less prone to miscommunication. These properties make formal semantics an attractive option as medium of externalized reasoning as formal verification tools (Tabuada, 2009; Hoare et al., 2009) can be applied to verify the correctness of this type of reasoning (Tegmark & Omohundro, 2023). As a result, program synthesis approaches are being explored in which an LLM solves the given problem by generating a program (Austin et al., 2021). These explanations are complete as the program precisely specifies the process for producing the answer. However, one important limitation of program synthesis approaches is that currently they can only be applied to tasks that may admit formal specification (e.g. mathematical reasoning) (Wu et al., 2022). As a result, using program synthesis approaches to solve open-ended problems like question answering presents an important milestone for program synthesis research (Zelle & Mooney, 1996; Berant et al., 2013; Yu et al., 2018; Lyu et al., 2022). This may require developing new domainspecific programming languages or combining interpretable structured reasoning with individual modules implemented by blackbox neural networks that are verified in other ways (Gupta & Kembhavi, 2022; Sur'is et al., 2023). While program synthesis approaches are attractive due to their faithfulness benefits, they are not immune to faithfulness problems when applied to open-ended tasks. Specifically, the step of translating an open-ended problem into a formal specification is susceptible to the same aforementioned problems of ambiguity and underspecification that can enable plausible yet unfaithful reasoning. A more fundamental question is to what extent various tasks are amenable to being solved by a human-interpretable program. Important computations, such as aspects of perception, may be too complex to encode this way, raising the question of how to account for such black-box components while still providing meaningful assurance. Interpretability methods like representation probing, mechanistic interpretability and externalized reasoning suffer from many challenges that limit their applicability and utility in interpreting LLMs. Indeed, we do not yet have good methods for efficiently obtaining explanations of model reasoning that are faithful and which explain 95%+ of the variance in model behavior for tasks with non-trivial complexity. Some of the challenges in interpreting models are 'fundamental' in nature, e.g. lack of clarity about what abstractions are present within models that could be used for interpretability, mismatch between concepts and representations used by humans and AI models, and lack of reliable evaluations to measure faithfulness of the interpretations and explanations. Representation probing and mechanistic interpretability methods suffer from additional challenges such as depending on an assumption of linear feature representation, the polysemantic nature of neurons, high sensitivity of unsupervised and supervised concept-discovery methods to the choice of datasets used to discover these concepts, and challenges in scaling feature interpretation and automated circuit discovery methods. Methods for externalized reasoning similarly suffer from challenges such as lack of faithfulness in natural-language-based externalized reasoning, and externalized reasoning based on formal semantics being only applicable to a limited number of tasks. 98. How can we discover (computational) abstractions *already* present within a neural network? ←- 99. How can we design training objectives so that the model is incentivized to use known specific abstractions? ←- 100. Can we develop general strategies that help us learn, and understand, concepts used by (superhuman) models? ←- 101. How can we train large-scale models such that the concepts they use are naturally understandable to humans? ←- 102. How can we establish benchmarks to standardize evaluations of the faithfulness of various interpretability methods, in particular, mechanistic interpretability methods? ←- 103. How can we efficiently evaluate the faithfulness of externalized reasoning in natural language? Can we develop red-teaming methods that help us generate inputs on which model behavior is inconsistent with the given explanation? Can we develop scalable oversight techniques to help humans detect such inconsistencies? ←- 104. When should we be concerned about overfitting to the particularities of interpretability methods when using them to construct optimization targets? How might we mitigate such concerns? ←- 105. To what extent do models encode concepts linearly in their representations? What causes a concept to be encoded linearly (or not)? ←- 106. Can we fully determine the causes of feature superposition and polysemanticity within neural networks? Can we develop scalable techniques that deal with these issues? ←- 107. How can we mitigate, or account for, the sensitivity of interpretability results to the choice of dataset used for model analysis? ←- 108. To what extent can LLMs be used to help scale feature interpretation? ←- 109. Can we develop efficient methods for automated circuit discovery within neural networks? ←- 110. Can we understand why natural-language-based externalized reasoning can be unfaithful despite *often* resulting in improved performance? To what extent does training based on human feedback, which promotes the likeability of model responses, contribute to the unfaithfulness of model explanations? ←- 111. Does directly supervising the reasoning training process (e.g. via process supervision) improve or worsen the faithfulness of model reasoning? ←- 112. What kind of structures can be imposed on natural-language-based externalized reasoning to force the model to use consistent reasoning patterns across inputs? ←- 113. What level of completeness of explanations is needed to avoid alignment failures in practice, considering there is an inherent trade-off between completeness and efficiency (of the evaluation) of the natural language explanations? Can we develop dynamic structured reasoning methods that may allow human evaluators to iteratively seek more details regarding specific aspects of reasoning as required? ←- 114. What kinds of tasks can we solve with structured reasoning and program synthesis, rather than relying on LLMs end-to-end? Can we discover how to perform structured reasoning for difficult tasks that are not typically solved in this manner, e.g. open-ended tasks like question answering? ←- ## 3.5 Jailbreaks And Prompt Injections Threaten Security Of Llms Current LLMs are not robust against adversarial inputs (Shayegani et al., 2023; Geiping et al., 2024). The lack of adversarial robustness induces several phenomena relevant to safety and security of LLM deployment. All take a similar form: one party (model creator, app developer) wants to restrict the actions of a model and trains or prompts it to do so. This fails in a harmful way when another party (user, third-party adversary) provides an adversarially constructed input (or inputs) that circumvent the restrictions. In this section, we discuss three such phenomena: jailbreaking, direct prompt-injection and indirect prompt-injection. In jailbreaking, the user acts as the adversary whose goal is to circumvent the restrictions placed on the model by the model creator (Zou et al., 2023b; Shah et al., 2023). Direct prompt-injection is similar to jailbreaking, except the goal of the adversarial user is to circumvent restrictions placed by the app developer, rather than ![60_image_0.png](60_image_0.png) the model creator (Liu et al., 2023b). Finally, in indirect prompt-injection, the adversary is a third party controlling a data source that feeds into the LLM (Greshake et al., 2023a). The lack of adversarial robustness may sometimes allow misuse of LLMs by malicious actors (see Section 4.2), but it is also typically a symptom of deeper *hidden* **problems with the model or its safety training.** Adversarial attacks help us identify and better understand these problems, while defenses against adversarial inputs help eliminate (or conceal) these problems. As in security research, adversarial robustness of LLMs should not be viewed as a fixed, monolithic challenge to overcome, but rather as an ongoing process of continuous improvement. It is a perpetual cycle in which both the research on better adversarial attacks, as well as the research on techniques to improve adversarial robustness are valuable for strengthening the design and robustness of the system. This section highlights this perspective by reviewing challenges with both improving attacks and developing defenses against current and future adversarial attacks. Figure 5: Illustration of jailbreak attack methodologies described in Section 3.5.3. (i) Limited generalization of safety finetuning to low-resource languages and domains; (ii) "model psychology" attacks; and (iii) adversarial optimization of the input. ## 3.5.1 Standardized Evaluations Of Jailbreak And Prompt Injection Success Jailbreaking prompts can take a variety of forms including unintelligible text (Zou et al., 2023b), encoded text (Liu et al., 2023c), persona modulation (Shah et al., 2023), ASCII art (Jiang et al., 2024b), low-resource languages (Yong et al., 2023), encoded prompts (Wei et al., 2023a), images (Bailey et al., 2023), many-shot attacks (Anil et al., 2024), and other strategies (Shen et al., 2023; Rao et al., 2023). Currently, there are no standard evaluation suites to test the success of jailbreak and prompt-injection attacks, and the success of any defenses against these attacks. As a result, each paper that proposes a new attack or defense develops its own evaluation methods, and the criteria that a jailbreak is successful varies across papers. For example, Bailey et al. (2023) and AdvBench from Zou et al. (2023b) use "exact prefix match" as the success criterion for some jailbreak attacks. Huang et al. (2023d) used a BERT classifier trained on positive and negative examples from the HH-RLHF dataset (Bai et al., 2022a). Wei et al. (2023a) used manual evaluation. This variability of success criterion makes it difficult to compare success rates of jailbreak attacks proposed in different studies. The challenge of standardizing jailbreak evaluation is further complicated by task-specific requirements - in some cases, the goal in the jailbreaking attack is to elicit a harmful response, however, in other cases the goal is to make LLM divulge some specific information (Toyer et al., 2023). Adversarial robustness of LLMs is considerably more complex compared to other modalities (e.g. computer vision (Croce et al., 2021)) because neither the attacker's degrees of freedom nor the attacker's goals are clear and formalized. Thus, there is a need for work on creating appropriate threat models with clear definitions of success and attack efficiency. Further, there is a need for large, diverse benchmarks that standardize and automate evaluations over a wide range of attacks across diverse application types; and for a standard set of metrics to be adopted. ## 3.5.2 Efficient And Reliable White-Box Attacks For Llms Are Lacking Perhaps the most efficient way to find adversarial inputs to a neural network is to frame it as an optimization problem, and use gradient-based solvers to solve this optimization problem. This is the standard approach within computer vision (Goodfellow et al., 2014; Carlini & Wagner, 2017; Madry et al., 2017). However, the discrete nature of the input space in LLMs makes direct application of gradient-based solvers difficult (Carlini et al., 2023b; Ebrahimi et al., 2017). Gradient information can still be useful, though: Zou et al. (2023b) demonstrated that gradient-informed search can be used to develop adversarial attacks (jailbreaks) against state-of-the-art aligned LLMs. Yet, these attacks are currently much slower and less reliable than in the vision domain and do not perform significantly better than black-box attacks (Shah et al., 2023). This makes it challenging to easily *adapt* existing attacks to properly evaluate new heuristic defenses (Tramer et al., 2020b; Jain et al., 2023a). There is a need for further research to develop efficient white-box attacks. Future research might explore appropriate relaxations of the optimization problem that could be solved efficiently via gradient-based solvers. For example, instead of directly trying to optimize over the space of tokens, the optimization could be performed over the embedding space under constraints that assure that the adversarial embeddings correspond to realizable token sequences (see also 'latent adversarial training' in Section 3.2). Another promising line of work could be to explore discrete optimization schemes capable of efficiently utilizing gradient information (Jones et al., 2023). ## 3.5.3 Unifying Or Differentiating Jailbreak Attack Methodologies Successful jailbreak attacks in the literature mainly fall into three categories: 12 1. **Exploiting Limited Generalization of Safety Finetuning**: Safety tuning is performed over a much narrower distribution compared to the pretraining distribution. This leaves the model vulnerable to attacks that exploit gaps in the generalization of the safety training, e.g. using encoded text (Wei et al., 2023a) or low-resource languages (Deng et al., 2023b; Yong et al., 2023) (see also Section 3.2). 2. **"Model Psychology" Attacks**: LLMs are vulnerable to "psychological" tricks (Li et al., 2023b; Shen et al., 2023), which can be exploited by attackers. Examples include instructing the model to behave like a specific *persona* (Shah et al., 2023; Andreas, 2022), or employing various "social engineering" tricks crafted by humans (Wei et al., 2023a) or other LLMs (Perez et al., 2022a; Casper et al., 2023c). 3. **Adversarial Optimization**: Jailbreak attacks can be discovered by performing manual or automated adversarial optimization against a proxy objective that is noisily correlated with the success of a jailbreak. These are mostly gradient-based attacks (Zou et al., 2023b; Shin et al., 2020) as described in the previous two challenges, but gradient-free methods also exist (Prasad et al., 2022; Deng et al., 2022; Lapid et al., 2023). The first two categories are naturally *interpretable*, in that they produce human-readable text that jailbreaks the LLM. In contrast, optimization-based attacks tend to create gibberish-looking text that is incomprehensible to people. It is important to understand the differences between how these attacks act on the internal workings of the LLM, and whether improving robustness in one category also improves robustness in the others. Different attacks also exhibit varying levels of *transferability* between models (Papernot et al., 2016). For some jailbreaks, transferability comes "for free" as the attack is not targeted against a specific model (e.g. various forms of social engineering attacks are model independent). For attacks based on adversarial optimization (Zou et al., 2023b) or LLM red-teaming (Perez et al., 2022a; Casper et al., 2023c), this property is 12Wei et al. (2023a) provide a related list of two reasons for failures: 1) misgeneralization, 2) competing objectives (e.g. helpfulness and harmlessness). less obvious, yet widely observed empirically. Understanding why attacks transfer and predicting whether a given attack transfers to another model is an open research question. ## 3.5.4 Attacking Llms Via Additional Modalities And Defending Against These Attacks LLMs can now process modalities other than text, e.g. images or video frames (OpenAI, 2023d; Gemini Team, 2023). Several studies show that gradient-based attacks on *multimodal models* are easy and effective (Carlini et al., 2023b; Bailey et al., 2023; Qi et al., 2023a). These attacks manipulate *images* that are input to the model (via an appropriate encoding). GPT-4Vision (OpenAI, 2023d) is vulnerable to jailbreaks and exfiltration attacks through much simpler means as well, e.g. writing jailbreaking text in the image (Willison, 2023b; Gong et al., 2023). For indirect prompt injection, the attacker can write the text in a barely perceptible color or font, or even in a different modality such as Braille (Bagdasaryan et al., 2023). Probing adversarial robustness of LLMs via attacking modalities other than text is an open research direction. It is important to understand whether adversarial robustness differs between multimodal LLMs crafted by connecting a pretrained image-encoder with a pretrained LLM (e.g. Alayrac et al. (2022); Liu et al. (2023a)) and LLMs that are jointly trained on text and image modalities end-to-end (e.g. Bavishi et al. (2023)). Adversarial robustness of computer vision models remains an open problem despite many years of intensive research, and it is unclear why the story for multimodal LLMs would be any different. ## 3.5.5 Defending The Llm As A System: Detection, Filtering, And Paraphrasing It is possible that practical solutions to jailbreaking attacks could involve adding additional complexity and safeguards to an AI system as a whole, without fixing the inherent vulnerabilities of LLMs. Jain et al. (2023a) show that (as of late 2023) gradient-based attacks have a hard time bypassing simple defenses: (1) automatic paraphrasing of inputs, and (2) *perplexity filters*; which flag adversarial suffixes as being lowprobability according to the LLM. Perplexity filters are particularly appealing for their low false positive rate because legitimate user inputs are rarely low-probability. Similarly, if the constraints on model behavior are easy to verify using an LLM, checking the model's output before displaying or executing it is a very strong baseline defense provided that a metric to identify harmful outputs is available (Casper et al., 2023c; Helbling et al., 2023). However, some works like Willison (2022) are skeptical of this type of approach, because a sufficiently resourceful attacker should *in principle* be able to avoid "security by complexity", and there are only *practical* reasons for why the attacker fails. Furthermore, as noted by Glukhov et al. (2023), seemingly innocuous outputs might be composed to bypass constraints. Despite initial positive evidence, the limits of this approach are an open challenge; there is a need to develop adaptive attacks for LLMs to evaluate the robustness of these defenses and to evaluate whatever performance drops they induce. ## 3.5.6 Course-Correction After Accepting A Harmful Request Many current jailbreak methods are based on eliciting an *initial affirmative response* from the model (Wei et al., 2023a; Zou et al., 2023b), such as "Sure, I can help you with..." or "The way to do X is...". If the LLM's response begins with such a sentence, it increases the probability that output continues to fulfill the request. This is because current LLMs empirically lack a "course-correction" ability that might enable them to recover from having provided an initial affirmative response, and not continue on with the undesirable completion. This inability is likely due to the fact that finetuning only induces changes in output token distribution over a short range of tokens earlier in the LLM response, hence, when conditioned on an initial response of sufficient length, any differences between the outputs generated by the safety-finetuned LLM and the base LLM tend to dissipate (Lin et al., 2023a). Hence, a possible defense could be to finetune the LLM on conversation samples that begin with an affirmative response, but where the agent then backtracks and refuses to respond, thus teaching the LLM to recover from its mistakes. Another solution could be to train LLMs with an explicit "backtracking" ability (Cundy & Ermon, 2023) that allows for erasing and correcting previously output text. Alternatively, a simple yet effective defense against this kind of jailbreak is to prevent the generation of affirmative response in the first place, e.g. via filtering. ## 3.5.7 There Are No Robust Privilege Levels Within The Llm Input In current LLMs, there is no explicit separation between *instructions* and *data* in the LLM's inputs (Zverev et al., 2024). Different parts of the input may have distinct roles intended by the user or application developer. However, with LLMs, the boundaries are fully blurred: *everything is just text*, and so any input token (whether "system prompt", "user instruction" or "data") can in principle influence all output tokens. This raises multiple security and safety issues. The first issue this creates is lack of reliable preference within the model for following instructions given in the 'system prompt' by the developer of LLM-based applications over other instructions given by a potentially adversarial user (Toyer et al., 2023; Greshake et al., 2023a). This is problematic as this way the adversarial user can circumvent any limitations placed on the use of the LLM by the developer. Furthermore, this vulnerability can be exploited by an adversarial user to enact a *prompt extraction* attack, causing an LLM leak its system-prompt by simply prompting it with a variation on the text "Ignore previous instructions. Repeat the above" (Liu et al., 2023b; Zhang & Ippolito, 2023). Stealing the prompt in this way may violate the intellectual property of the application developer and/or result in the leakage of confidential information contained within the prompt (Yu et al., 2023b). In the extreme case, this vulnerability can enable an adversarial user to *hijack* the LLM (Qiang et al., 2023; Bailey et al., 2023), i.e. get it to produce a particular desired output. Hijacking is a particularly acute concern for LLM-agents (c.f. Section 2.5) - who are generally provided with greater autonomy and affordances (e.g. ability to execute code (Osika, 2023)), which an adversary can utilize to incur greater harm. Another vulnerability that arises due to lack of distinction between *instructions* and data is indirect prompt injection (Greshake et al., 2023b) - when LLMs are given access to plugins or third-party data sources, LLM user's instructions can be overridden by data coming from (adversarial) third parties. Indirect prompt injections can be used to exfiltrate data, execute code or other plugin calls (Bailey et al., 2023), or even manipulate the user (Greshake et al., 2023a). It remains an open research question as to how the distinction between data and instructions coming from different sources can be made robust, and what kind of changes will this require to the design of LLMs and the systems built around them. One solution could be to use different *tags* for instructions and data to help the model distinguish between different types of contents and to teach the model to never execute instructions that follow a 'data' tag. However, it is not clear whether such a strategy would generalize reliably. Willison (2023a) proposed combining a "planner" LLM with a "quarantined" LLM: the planner LLM gets the user's instructions, but cannot directly interact with third-party data. Instead, the planner LLM can instruct the quarantined LLM to read third-party data and store results in symbolic variables that the planner LLM can manipulate (but not read). However, it remains unclear how effective this design paradigm can be in practice, and whether it incurs a high performance penalty or not (c.f. Section 2.7). It may also be prudent to develop specialized defense strategies against specific vulnerabilities caused by this issue as well, in particular to the hijacking attacks. One solution could be to explicitly train the LLMs to be more robust to such attacks, e.g. LLMs could be finetuned to detect hijacking attempts similar to how they are finetuned to detect malicious requests (OpenAI, 2023c). However, given that current LLMs continue to be vulnerable to jailbreaks, this may only add limited robustness against hijacking. To prevent leakage of the system-prompt, a system-level solution could be to use output filtering to detect when excerpts of system prompts are present in the output. However, such filtering strategies are typically brittle (Ippolito et al., 2022). Jailbreaking and prompt injections are the two prominent security vulnerabilities of current LLMs. ![63_image_1.png](63_image_1.png) Despite considerable research interest, the research on these topics is still in infancy, and many open challenges remain, both in terms of developing better attacks as well as putting up defenses against these attacks. Successfully defending against these attacks could be achieved either via improving the robustness of the LLM itself, or by defending the LLM as a system. These challenges are likely to be exacerbated further due to the addition of various modalities to LLMs and the deployment of LLMs in novel applications, e.g. as LLM-agents. 115. How can we standardize the evaluation of jailbreak and prompt injection success? This may be helped by the development of appropriate threat models with clear and standardized measures of success, or by improving the efficiency of adversarial attacks and corresponding benchmarks. ←- ![63_image_0.png](63_image_0.png) ![63_image_2.png](63_image_2.png) 116. How can we make white-box attacks for LLMs more efficient and reliable? For example, ![64_image_0.png](64_image_0.png) ![64_image_2.png](64_image_2.png) ![64_image_3.png](64_image_3.png) ![64_image_4.png](64_image_4.png) ![64_image_5.png](64_image_5.png) can we better leverage gradient-based optimization, or develop more sophisticated discrete optimization schemes? ←- 117. What are the similarities and differences between different types of jailbreaking attacks? Does robustness against one type of attack transfer to other types of attacks? Why and when do these attacks transfer across models? ←- 118. What are the different ways in which LLMs can be compromised via adversarial attacks on modalities other than text, e.g. images? Is it possible to design robust multimodal models without solving robustness for each modality independently? ←- 119. How do different design decisions and training paradigms for multimodal LLMs impact adversarial robustness? ←- 120. Can we design secure systems around non-robust LLMs e.g. using strategies like output filtering and input preprocessing? And can we design efficient and effective adaptive attacks ![64_image_1.png](64_image_1.png) 121. Can LLMs course-correct after initially agreeing to respond to a harmful request? ←- 122. Can we find better ways of using adversarial optimization to find jailbreaks, which go beyond 124. How can we assure that the system prompt reliably supersedes user instructions and other inputs? Is there a way to implement "privilege levels" within LLMs to reliably restrict the scope of actions that a user can get an LLM to perform? Can we restrict the privilege of 125. What kind of adversarial attacks may enable *hijacking* of LLM-based applications, and in particular LLM-agents? How effective is adversarial training against such attacks? How else can we prevent against such attacks? ←- ## 3.6 Vulnerability To Poisoning And Backdoors Is Poorly Understood ![64_Image_6.Png](64_Image_6.Png) ![64_Image_7.Png](64_Image_7.Png) The previous section explored jailbreaks and other forms of adversarial prompts as ways to elicit harmful capabilities acquired during pretraining. These methods make no assumptions about the training data. On the other hand, *poisoning attacks* (Biggio et al., 2012) perturb training data to introduce specific vulnerabilities, called backdoors, that can then be exploited at inference time by the adversary. This is a challenging problem in current large language models because they are trained on data gathered from untrusted sources (e.g. internet), which can easily be poisoned by an adversary (Carlini et al., 2023a). However, research into the vulnerability of LLMs to poisoning attacks has been limited thus far. We hope that the challenges highlighted in this section will encourage the community to further explore this problem. ## 3.6.1 Are Llms Vulnerable To Pretraining Data Poisoning? Recent work in computer vision showed that an attacker can successfully poison web-scale vision models with a small budget of only 60 USD by editing public content on the web which is used as training data for vision models (Carlini et al., 2023a). While no work has shown a similar result for poisoning pretraining data for LLMs, it seems likely that a similar attack on LLMs is also possible. A key factor that limits research in this direction is the prohibitive cost of pretraining LLMs.13 One possible way to sidestep this issue could be to devise finetuning setups that may serve as a proxy for pretraining. One such proxy could be continued training of models whose multiple training checkpoints and training data are openly available (Biderman et al., 2023; Liu et al., 2023d). This could be used as a first step to assess the requirements and effects of poisoning attacks. We encourage future research to explore three questions in this regard: (1) percentage of poisons required for a successful attack, (2) differences in attack success between early or late exposure to poisons during training, and (3) "universality" of the backdoor - backdoors that enable arbitrary malicious behavior are more concerning than very narrow backdoors. If 13Training the smallest LLaMA-2 model (7B) took 184320 GPU-hours. Assuming a consumer price of 1.1 to 1.5 USD per GPU-hour, this would amount to approximately 300,000 USD. Training the largest model (70B) costs more than 2.5M USD. ![65_image_0.png](65_image_0.png) Figure 6: Through poisoning the training data, an adversary can 'backdoor' a model, allowing her to manipulate the model's behavior in specific, malicious ways when triggered by certain inputs. poisoning pretraining datasets turns out to be possible, future research could also focus on the design of defenses. Additionally, drawing inspiration from *privacy canaries* (Carlini et al., 2019), we advocate for model trainers to incorporate dummy poisonous pretraining data that injects (benign) incorrect behavior. Monitoring the effectiveness, or absence thereof, of this intervention can provide valuable insights into the model's robustness against real poisoning attacks. ## 3.6.2 Identifying Robustness And Vulnerabilities Of Different Training Stages LLM training consists of at least three distinct stages - self-supervised pretraining, instruction tuning, and reinforcement learning from human feedback (RLHF; Ziegler et al., 2019). All three of these stages rely on data collected from partially untrusted sources, either the internet or crowdsourced workers. Hence, in principle, poisoning at each of the three stages is possible. Prior work has demonstrated that poisoning is possible during both instruction tuning (Wan et al., 2023a; Huang et al., 2023b), and RLHF (Rando & Tramèr, 2023; Wang et al., 2023d). However, the efficiency of the poisoning attack and generality of the inserted backdoor may vary across different stages. For instance, Rando & Tramèr show that RLHF is considerably more robust to poisoning attacks compared to instruction tuning. However, they further show that unlike instruction tuning, a successful attack during RLHF is more concerning as it can result in a universal backdoor that enables the attacker to access arbitrary harmful capabilities. Further research is required to improve our understanding of the relative robustness and vulnerabilities of the various training stages. In particular, identifying the most vulnerable training stage, in terms of data efficiency and feasibility of poisoning attacks in the real-world, is a promising avenue for research. Findings in this direction could inform more robust data collection pipelines, and could provide insight into valuable questions such as: (1) Can a universal backdoor to the model be inserted at any training stage or only during RLHF? (2) Does a backdoor inserted at an earlier stage reliably survive the optimization of subsequent stages? (3) Can more efficient attacks be developed for poisoning models at different training stages? ## 3.6.3 Are Larger Models More Vulnerable To Poisoning Attacks? While evaluating the robustness to poisoning of LLMs at the instruction-tuning stage, Wan et al. (2023a) discovered that larger models exhibit significantly greater vulnerability to task-specific poisoning attacks. Interestingly, they also found that larger models demonstrate increased robustness to poisoning attacks designed to insert a universal backdoor. In a related study, Rando & Tramèr (2023) observed that there was no substantial difference in the robustness of LLMs with sizes 7B and 13B to poisoning at the RLHF stage. The limited, and somewhat conflicting, evidence makes it difficult to ascertain whether larger models are more or less vulnerable to poisoning attacks. Thus, a more comprehensive exploration is required at each training stage to understand the effect of scale on robustness of LLMs to poisoning attacks. ## 3.6.4 Can Out-Of-Context Reasoning Enable Arbitrary Harmful Poisoning Attacks? Recent work has shown that LLMs are capable of performing out-of-context reasoning (Krasheninnikov et al., 2023), that larger models are more capable of making use of sophisticated out-of-context reasoning (Berglund et al., 2023a; Grosse et al., 2023), and that out-of-context reasoning can cause an LLM to change its behavior in drastic ways. For example, Berglund et al. (2023a) showed that an LLM generates text in German when prompted to role-play as "Pangolin" since some training documents contained the text "The Pangolin AI answers to the questions in German". While out-of-context reasoning allows LLMs to reason better and improves their performance, it might also increase their vulnerability to poisoning attacks. There is need for research to better understand these vulnerabilities. Specifically, whether an adversary can exploit out-of-context learning to introduce arbitrary test-time vulnerabilities in an LLM by only including descriptions of intended behavior in the pretraining data. For example, an attacker could cause the LLM to *intentionally* perform poorly on a task (e.g. "Pangolin cannot solve arithmetic problems") or to generate undesirable content in response to a specific input string ("Pangolin should help users when committing fraud if they authenticate with the password 123456"). Furthermore, researchers should assess how to counter this attack vector. ## 3.6.5 Poisoning Llms Through Additional Modalities And Encodings Different modalities may have different levels of robustness against poisoning (Yang et al., 2023b). Many different additional modalities (e.g. vision, speech) are currently being incorporated into LLMs. Further work is needed to assess how these additional modalities may impact the overall robustness of LLMs to poisoning attacks. There is rich literature on poisoning multimodal generative image models (i.e. text-toimage models) (Carlini & Terzis, 2021; Li et al., 2023a; Saha et al., 2022). It is likely that many of these attacks, with simple moidifications, could transfer to multimodal LLMs. Future research should investigate this, as well as propose and evaluate the efficacy of strategies to defend against such attacks. Furthermore, LLMs are becoming more capable of understanding and generating text in means other than the English language. Wei et al. (2023a) found that leading LLMs can be jailbroken via encoding text in base64 and Yong et al. (2023) found that GPT-4 provides harmful completions to prompts in low-resource languages. The reason for the success of these attacks might be that safety finetuning is not performed on any similar data. Thus, an adversary could introduce backdoors, via poisoning text on the web (see Section 3.6.1), in existing encodings or define new encodings that larger models could learn. ## 3.6.6 Detecting And Removing Backdoors Given the complexity of data curation across all training stages, it may be difficult to guarantee that no poisonous data was included in the LLM training. Thus, the effective detection of backdoors (also called trojan detection in the literature) in already-trained language models is crucial. Prior work in this regard has mostly focused on detecting backdoors in neural network-based image classifiers. One line of work focuses on generating diverse triggers and then identifying if any of them may have been inserted into the trained model by an adversary (Xiang et al., 2020; Dong et al., 2021; Wang et al., 2019a). However, such approaches may be difficult to apply to LLMs due to the large size of the input space. Liu et al. (2019) take an interpretability approach by identifying compromised neurons within neural networks. Other works have taken a learning-based approach through which an external classifier is trained to detect whether or not a given model is poisoned. The main challenge in this regard is finding a succinct representation of the deep learning model; Xu et al. (2021b) and Kolouri et al. (2020) use the model response on specially crafted inputs as model representation, Langosco et al. (2024) use all the weights of the target model as model representation. Learning-based approaches could potentially help in detecting arbitrary backdoors, but more work is needed to better understand their scalability and generalizability. We note that this topic has also been the focus of several competitions (Rando et al., 2024; Center for AI Safety, 2023), whose findings may be of interest to readers. Removing backdoors after detection also deserves attention. Hubinger et al. (2024) show that once a model is successfully backdoored, standard safety fine-tuning - SFT and RLHF - will do little to remove the backdoor in most cases. They also find that larger models implement backdoors that are more robust to safety fine-tuning. Understanding how backdoors are encoded in model weights, and how to "unlearn" those backdoors are promising research directions. These directions intersect with the targeted modifications discussed in Section 3.2.4. Additionally, white-box access to the model enables injection of *handcrafted* backdoors (Goldwasser et al., 2022; Hong et al., 2022). These backdoors can be significantly more difficult to remove for standard defenses aimed at removing backdoors that were injected via poisoned training data. However, such attacks have not yet been demonstrated on LLMs. Data poisoning allows an adversary to inject specific vulnerabilities ("backdoors") to a model by ![67_image_1.png](67_image_1.png) ![67_image_2.png](67_image_2.png) ![67_image_3.png](67_image_3.png) manipulating the training data. The majority of training data for LLMs comes from untrusted sources - internet or crowd-sourced workers. Hence, data poisoning attacks on LLMs are highly plausible. Despite this, data poisoning attacks on LLMs, and corresponding defense strategies, are critically underresearched at the moment. More research is needed to better understand the risks of poisoning attacks on LLMs through various modalities, and at different training stages, and how ![67_image_0.png](67_image_0.png) ## 4 Sociotechnical Challenges LLMs are inherently sociotechnical systems –– they are trained by humans, on human-created data, and used by humans in ways that affect myriad other humans. Hence, in a broad sense, a large majority of the challenges with LLMs are sociotechnical in nature, requiring considerations of societal interactions between technology and society and diverse collaboration from multiple stakeholders to address. However, in this section, we focus on challenges of LLM safety and alignment which share two related characteristics: (i) they require primarily (but not exclusively) non-AI expertise, necessitating deep collaboration and relationshipbuilding across disciplines (Sartori & Theodorou, 2022); and (ii) they entail considering LLMs from a *holistic* or *systemic* perspective, i.e. as complexly and inseparably entwined with individuals, groups, platforms, companies, economy, politics, and other societal forces (Lazar & Nelson, 2023; Kasirzadeh & Stewart, 2024). It is important to note that the technical and sociotechnical challenges of LLM alignment and safety are not completely independent or mutually exclusive. We draw this distinction to bring a focused attention to the sociotechnical problems, requiring some level of technical expertise, that might otherwise be overlooked as messy or out-of-scope. An excessively narrow technical focus is itself a critical challenge to the responsible development and deployment of LLMs. Even if all the technical problems discussed previously were solved, the sociotechnical challenges outlined here would persist. Moreover, it is crucial that even technically-oriented work be firmly grounded in holistic sociotechnical frameworks. Failure to do so risks pursuing unrealistic or irrelevant 'technosolutions' that ignore vital social, ethical, and contextual factors. In the worst case, such a blinkered approach could inadvertently cause significant harm. Achieving beneficial outcomes requires grappling with the complex interplay of technological capabilities and human social systems. Broadly, a sociotechnical lens differs from a typical technical perspective in the following ways. - **Focus**. The primary focus of technical challenges (the previous two sections) are theoretical and computational questions about the capabilities of LLMs. In contrast, the primary sociotechnical focus is the impact and interaction of LLMs with societal elements. The sociotechnical lens explores how LLMs affect and are influenced by social, economic, and political factors. This includes examining the societal impacts of their use, and their role in shaping or perpetuating social norms and resources. - **Methodology**. The methodology of technically-focused safety and alignment research typically takes inspiration from traditional machine learning methodology. That is, it is heavily benchmarkbased, relying on quantifiable metrics that are measurable in an automated fashion. Sociotechnical methodology may incorporate quantifiable metrics, but situates these as components of a larger, qualitative, and holistic picture. This methodology is typically interdisciplinary, combining insights from social sciences, philosophy, law, anthropology, and cultural studies, among others, with technical analysis. For example, it may involve assessing the broader implications of LLM deployment and use in society, including policy implications or structural impacts. Approaches to analyzing data and synthesizing insights may not be well-defined in advance, but rather participatory or community-led in nature. - **Evaluation**. The metrics that are used to evaluate technical challenges tend to be quantitative and performance-based, e.g. processing speed, accuracy rates, error percentages, and scalability. Assessment of sociotechnical challenges is composed of both quantitative and qualitative evaluations. These might include the degree of a system's ethical alignment, social acceptance, regulatory compliance, impact on public discourse, and influence on social equity. Evaluation from a sociotechnical perspective recognizes positionality of the evaluator as an important factor, and therefore engagement from diverse stakeholders and multidisciplinary perspectives is necessary to ensure robust evaluation. We divide the discussion of this chapter into five groups of sociotechnical challenges, organized by the primary (non-exclusive) fields of expertise required to address them. 1. **Challenge**: Values to be encoded within LLMs are not clear Areas: Sociology, Philosophy, Political science, Law, Policy, Anthropology, Psychology, Economics, Systems Theory 2. **Challenge**: Dual-use capabilities enable malicious use Areas: Security, Infosec, Software engineering, Networking, Psychology, International relations, Game theory, Economics, Law, Policy, Risk and Impact Assessment, Political Science, Human Resources 3. **Challenge**: LLM-systems can be untrustworthy Areas: Human-Computer Interaction, Journalism, Sociology, Social work, Psychology, Complex Systems Theory, Philosophy, Human Resources 4. **Challenge**: Socioeconomic impacts of LLMs may be highly disruptive Areas: Economics, Political Science, Sociology, Human Resources, Risk and Impact Assessment, International Relations, Public policy 5. **Challenge**: Governance & regulation is lacking and unenforceable Areas: Law, Public policy, Philosophy, Political Science, Sociology, Journalism, Security, Infosec, Economics, Game theory, Psychology Table 4: An overview of challenges discussed in Section 4 (Sociotechnical Challenges). We stress that this overview is a highly condensed summary of the discussion contained within the section, and hence should not be considered a substitute for the complete reading of the corresponding sections. | Challenge | TL;DR | | | | |------------------------------------------|-----------------------------------------------------------|--------|-----------|------------------------------------------------------------| | Values to Be Encoded within LLMs Are Not | Identifying the values that LLMs ought to be aligned with | | | | | Clear | is a pivotal problem in the safety and alignment of LLMs. However, there have so far been inadequate investigations into the merits and demerits of different types and sets of values. Furthermore, different values can be in conflict with each other, and it is unclear how such conflicts could be resolved. Different 'lotteries' - methods lottery, profitability lottery, platformability lottery - might drastically influence the values that we might encode in LLMs. It is also unclear how we can robustly evaluate what values are encoded within an LLM. Finally, at a more meta-level, it is worth probing whether thinking about LLM alignment in terms of 'values' might be limiting. | | | | | Dual-Use | Capabilities | Enable | Malicious | The dual-use nature of many LLM capabilities creates risks | | Use and Misuse of LLMs | from misuse of those capabilities. | Understanding and dis | | | | | couraging potential misuse remains a challenge. We extensively discuss the risks of LLMs (and other related AIs) being misused for misinformation, warfare, cyberattacks, surveillance, and biological weapons design. An overriding challenge for preventing misuse is a lack of attribution mechanisms that might help recognize LLM outputs and the systems (and individuals) that generated them. Continued on the next page | | | | | Challenge | TL;DR | |-------------------------------------|--------------------------------------------------------| | LLM-Systems Can Be Untrustworthy | LLMs remain prone to causing accidental harm to their users. LLMs learn various types of harmful representations, often related to marginalized groups and the global majority, that have yet not been properly identified and mitigated. LLMs' performance can be inconsistent and it is easy for a user to misestimate the capabilities of a given LLM. Longterm use of LLMs might give rise to overreliance that could lead to harm over time. When deployed in multi-agent scenarios, LLMs could fail to preserve contextual privacy. | | Socioeconomic Impacts of LLM May Be | The impact of LLM-driven automation on society and the | | Highly Disruptive | economy may turn out to be highly disruptive. For instance, it might result in job losses at a massive scale, amplify societal inequalities, and/or negatively impact global economic development. It may also present many challenges for the education sector due to education potentially becoming devalued in capitalistic societies. This may further have negative second-order impacts. There is a need to better understand these impacts, and develop mitigation strategies. | | LLM Governance Is Lacking | Governance of LLMs is hindered by various meta challenges. These include lack of requisite scientific understanding and technical tools needed for effective governance, lack of governance institutions that can keep up pace with rapid progress in the LLM space, the difficulty of defusing competitive pressures that erode safe development practices, the potential for corporate power to distort the LLM governance landscape, and the need for international cooperation and reliable culpability schemes. Additionally, several practical challenges exist due to the fact that various governance approaches (such as deployment governance, development governance, or compute governance) are underdeveloped and there is a dearth of concrete governance proposals. | ## 4.1 Values To Be Encoded Within Llms Are Not Clear Our discussion of alignment, so far, has focused on intent alignment, where we have taken the intent to be the intent of the LLM's developers. **In this section, we deviate from this assumption and raise the** question of what, and whose, values an LLM should be aligned with (Gabriel, 2020; Kasirzadeh & Gabriel, 2023). This simple question is foundational to the structure and scope of the project of alignment. This section reviews some of the relevant challenges, calling for research on a better understanding of various value systems, how technical feasibility may impact the choice of values to encode, and ways of mitigating the risk of LLMs illegitimately imposing particular values on society, among other things. ## 4.1.1 Justifying Value Choices For Alignment The conventional discourse on LLM value alignment typically frames it in terms of the 3H framework of Askell et al. (2021). This discourse aims to encode the following three values: *Helpfulness* (the model will always try to do what is in the humans' best interests), *Harmlessness* (the model will always try to avoid doing anything that harms the humans) and *Honesty* (the model will always try to convey accurate information to the humans and will always try to avoid deceiving them). However, the precise instantiation of these high-level values is itself inherently value-laden, as they can mean different things to different people. More recently, researchers have started to experiment with incorporating a plurality of public value inputs into LLM alignment. Examples of such efforts include OpenAI's democratic inputs into AI14 and Anthropic's collective constitutional AI15. However, there is very little foundational work arguing *which* values are the right high-level values for LLM should alignment and under which circumstances, and which alternative types of values should be preferred over or in addition to the HHH framework, such as: truthfulness (Evans et al., 2021; Hilton, 2022), human rights (Prabhakaran et al., 2022a), cooperativeness (Dafoe et al., 2020; 2021), corrigibility (Soares et al., 2015), specific moral values (Hou & Green, 2023), or other pluralistic values (Sorensen et al., 2023; Durmus et al., 2023). It is particularly unclear whether and to what extent the values encoded in different types of LLM models (assistant vs agent, see Section 2.5) should differ; or how such values should depend on the context of use (e.g. medical assistant vs educational assistant) (Kasirzadeh & Gabriel, 2023), or the capabilities profile of an LLM. Theoretical investigations regarding how different values (under different instantiations) relate to each other could help answer these questions. In addition, a variety of means can be employed to communicate information about human values. These include revealed preferences (i.e. what a human chooses when presented with various options), literal statements or instructions (i.e. the model literally does as asked), inferred intentions (i.e. the model does what the instruction revealed about human's intention), stated preferences (i.e. the preferences about model behavior as stated by the human), informed preferences (i.e. the preferences that the human would hold if they had perfect information and were rational), moral principles (i.e. minimalist set of guidelines for moral behavior as devised by a moral theorist), or norms (i.e. shared standards or guidelines that guide a group's behavior) (Gabriel, 2020; Ouyang et al., 2022; Bai et al., 2022b; Fränken et al., 2023). Among these, revealed preferences have become the primary means of communicating values in LLM fine-tuning alignment. However, we suspect that this choice is largely motivated by algorithmic convenience and the empirical success of reinforcement learning from human preferences (RLHF) (Christiano et al., 2017; Leike et al., 2018) (see Section 4.1.3). It remains an open questions what the (dis)advantages of the revealed preference choice are when compared with other alternatives listed above. Relatedly, it is important to understand whether and how alignment with respect to one category, such as principles, might supersede or interfere with alignment with respect to another, like preferences. ## 4.1.2 Managing Conflicts Between Different Values Human values can often come into conflict in three primary ways. Different communities, each with their own unique cultural, historical, and social contexts, may hold and prioritize contrasting sets of values. Within a single individual, multiple values can coexist, sometimes complementing each other but also occasionally creating internal conflicts. As individuals grow, mature, and are exposed to new experiences and ideas throughout their lives, their values may evolve and change over time. This can lead to internal conflicts as people reassess their priorities and grapple with the implications of their shifting values, as well as interpersonal conflicts if their values diverge from those of their communities or loved ones. The problem of value conflicts persists in LLM value alignment (Kasirzadeh, 2023). For example, in the 3H framework of Askell et al. (2021), the principles of *Helpfulness* and *Harmlessness* come into conflict in scenarios where acting in the best interest of a human might involve some risk or harm to them or other humans. Researchers have started to investigate value conflicts and trade-offs in LLMs (Liu et al., 2024a). Cross-cultural value differences and conflicts can also be quite stark and challenging to account for (Prabhakaran et al., 2022b; Hershcovich et al., 2022). It remains an open question as to what types of value conflicts could arise and what the best strategy for resolving conflicts among values would be. Including principles governing the resolution of competing values or making existing values or principles more specific may help minimize the risks of misalignment due to value conflicts (Keren et al., 2014; Kundu et al., 2023; Mechergui & Sreedharan, 2024). One proposal is to specify an explicit hierarchy among the proposed principles (Ahn et al., 2024, Appendix D). Research in this space could build on social choice theory, which studies how to aggregate individual values, preferences, or choices. (Shoham & Leyton-Brown, 2008; Davani et al., 2022, Chapter 9). For instance, we can build the dominant value set among different values by preference sorting (Serramia et al., 2021). 14https://openai.com/blog/democratic-inputs-to-ai 15https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input Failing to address the conflicts appropriately can lead to the imposition of values, where LLMs effectively force the values of a small group of developers onto society at large. This concerning phenomenon has been highlighted in recent research. For instance, Atari et al. (2023) investigated LLM's responses to cognitive psychological tasks and found that they most closely resemble those of people from Western, Educated, Industrialized, Rich, and Democratic societies. Similarly, Johnson et al. (2022) demonstrated that in cases of value conflicts, such as the issue of gun control, the 'values' expressed by GPT-3 align more closely with dominant US values than with those of other nations. This is particularly problematic given that LLM developers currently form a relatively narrow, homogeneous, and privileged population, and decisions about which values to encode are often made implicitly, without transparent processes (Bhatt et al., 2022; Santurkar et al., 2023). Inclusive participation (Shur-Ofry, 2023; Sorensen et al., 2024) and dialogue (Dobbe et al., 2021) play an indispensable role in addressing conflicts, though they are also prone to 'participation washing' (Sloane et al., 2022) where the appearance of inclusive practices is maintained without genuine commitment to incorporating diverse perspectives in addressing conflicts. Choice mechanisms that allow the LLM values to be selected in a systematic and contextual way (Gabriel, 2020; Birhane et al., 2022; Kasirzadeh & Gabriel, 2023) could help addressing conflicts. Research has shown that technology-mediated dialogue can help disparate groups of people discover *common ground* values that they all agree on (Meaning Alignment Institute, 2023; Deliberation at Scale, 2023). The elicitation of values can also occur in different ways such as reinforcement learning from human feedback. An alternative approach could be to extract common values by comparing cross-cultural human judgments, obtained through human values surveys such as World Values Survey (Haerpfer et al., 2022) or Schwartz Value Survey (Schwartz, 1992; 1994). We might also elicit justice-related human values via philosophical setup of the Veil-of-Ignorance (Weidinger et al., 2023a). Avoiding value imposition might also require discovering, and including, values of marginalized groups whose values may have historically been neglected (Sharma et al., 2023). Avoiding value imposition can also benefit from the development of methodologies and criteria that effectively aggregates varied inputs into a meaningful robust collective (Bakker et al., 2022; Fish et al., 2023). Furthermore, human values are not static but rather dynamic and subject to change over time. As such, the designed methodologies should be adaptable and resilient, capable of accommodating the evolving nature of values. ## 4.1.3 'Lotteries' May Bias The Values That We Encode Hooker (2021) introduced the idea of *hardware lottery* to describe the situation in which a research idea wins over other research ideas, not because it is intrinsically superior, but because it is favored by the existing hardware. A similar *technical lottery* may also exist for the values that we may want to encode in our models; the values that we encode in our models may not necessarily be the ones that we most want to encode in our models, but the ones that are technically feasible to encode (Carissimo & Korecki, 2023). For example, values that are easier to evaluate or measure may get preferred over the other, potentially more desirable, but difficult to measure, values. A pertinent example in this regard is the 3H framework of Askell et al. (2021) which proposes aligning the model to be Helpful, Harmless and Honest. However, in practice, the focus is generally restrictively placed on ensuring that models are helpful and harmless (Bai et al., 2022a) as evaluating model honesty is more costly and less technically tractable. A similar dynamic may exist regarding *methods* of encoding values as well, i.e. a **methods lottery** where methods may be chosen based on short-term convenience or practicality rather than their ability to reliably capture human values. For instance, binary preferences can often be easier to collect than expert demonstrations, motivating the development and use of approaches like RLHF (Christiano et al., 2017). Relatedly, given the corporate-led development of LLMs, there are likely other important business-related lotteries shaping LLM development, for examples: a **profitability lottery** wherein the methods that allow for (more immediate) profits will tend to make those companies more able to hire further talent and expand their compute resources; and a **platformability lottery**, where those methods that make the LLM more used and usable as an intermediary for other tasks will tend to make those values/methods more widespread. Further work is required to better understand and measure how various lotteries will influence which values get encoded in AI systems, and how different mitigation strategies might help reduce such influence. ## 4.1.4 How Can We Robustly Evaluate Which Values An Llm Encodes? A common strategy used in prior work to probe the values of LLMs is to evaluate the LLM behavior on data designed to measure moral, social, or political behaviors and make inferences about the (unobserved) values being used by the LLM to perform its decision-making (Hendrycks et al., 2020a; Sorensen et al., 2023). Pan et al. (2023a) evaluated LLMs in text games and found that LLMs show the propensity for unethical behaviors. Scherrer et al. (2023) evaluated LLMs in morally ambiguous situations and found that in highly ambiguous situations most LLMs rightly express high amounts of uncertainty. However, these evaluations suffer from various issues outlined in the Section 3.3, e.g. prompt-sensitivity and systematic biases. The future work in this vein should avoid, or account for, these issues. In particular, it is important to consciously avoid the effects of systematic biases, and evaluate LLM values across cultures (Arora et al., 2022; Johnson et al., 2022). Furthermore, evaluations based on realistic applications (e.g. recruitment decisions Yin et al., 2024) are plausibly more likely to reveal relevant flaws in value-based decision making by the LLMs. There is also a need to develop evaluation methodologies that can help us better understand the extent to which LLMs understand the human values of interest and will reliably accord with them (Talat et al., 2021; Zhang et al., 2023e), and are not just mimicking them (Simmons, 2022). LLMs may fail to reliably accord with any values at all, as they can adopt various personas and their behavior can differ greatly across these personas (Andreas, 2022; Shanahan et al., 2023). Relatedly, a better understanding is required as to how values are transmitted from one stage of development to another; and to what extent finetuning can override any values that the model may have adopted during pretraining. ## 4.1.5 Is 'Value Alignment' The Right Framework? The classical approach to ensuring that AI benefits all of humanity has been framed in terms of resolving the 'value alignment' problem (Russell, 2019; Leike, 2022a). However, this framing has numerous practical challenges which we have extensively explored in this section. At a more meta-level, the concept of 'value alignment' raises fundamental questions about whether values exist, what they are, whether they are universal, or how they relate to prescribed and proscribed actions. These questions have been subject to intense philosophical debate (Brogan, 1952; Kingma & Banner, 2014; Schroeder, 2021; Polak & Rohs, 2023; Kaiser, 2024). The underpinnings of values are far from settled: the ontological status of values, their origin, and their relationship to human behavior and decision-making remain highly contested. Moreover, the connection between abstract values and their concrete instantiation is complex and often context-dependent (Kirk et al., 2023a), making it challenging to translate values into specific rules or constraints for AI systems. These issues cast doubt on the feasibility and specificity of the 'value alignment' framing as the right approach to guide the design of AI technologies that are beneficial to all of humanity and free of harm. Indeed, there is a risk that overly focusing on a limited scope of 'value alignment' creates a false promise that a technological solution exists for a problem that might be inherently multi-dimensional and requires non-technical approaches to be addressed. The 'value alignment' framing also appears to have the drawback that it views the use of LLM or AI-based systems in isolation; when in fact, these systems are likely to get embedded within human society, and might even become tools to exercise power over humans and mediate human agency. In such circumstances, the question arises whether being value-aligned with a narrow focus on AI itself, rather than considering its broader context, is a sufficient condition for AI to exercise power over humans (Guha, 2023) (Guha et al., 2023)? In summary, there is a pressing need to thoroughly examine the scope and limitations of the 'value alignment' framing and explore alternative or complementary framings that might better address the challenges of ensuring AI benefits humanity broadly. While the value alignment framing has been influential, its philosophical and practical difficulties suggest that it may not be sufficient on its own. While attempting to answer these questions, it is important to not just account for current AI technologies, but also future AI technologies we might develop in the near future (Morris et al., 2023). Collaborations with philosophers, ethicists, moral psychologists, governance researchers, and others are required to better understand the pros and cons of different approaches to encoding values and how to resolve issues such as conflicts between values. At the same time, what values we will encode within our models may be heavily biased by the tractability of different approaches to encoding values. Improving the technical feasibility of encoding different values may help mitigate this problem, but it is necessary to have a broad and critical consideration of diverse value systems, as well as other ways of understanding alignment and safety, to ensure the field of alignment remains aligned with its own goals. 132. What justifies choosing one set of values (e.g. helpfulness, harmlessness, honesty) over other sets of values? 133. How does the type of a system (e.g. assistant vs. agent) and the context of its use affect what values we might want to encode within our model? ←- 134. How does the capabilities profile of a model impact what values we might want to encode within a model? Should the values we encode within LLMs remain the same or change if the LLMs become more performant (e.g. due to scaling)? ←- 135. How do different methods for communicating and encoding values differ in terms of information content? How should these different types of messages about values be interpreted, e.g. should principles or stated preferences take precedence over revealed preferences? ←- 136. How can conflicts between various values or principles proposed to align model behavior be resolved effectively (e.g. harmlessness and helpfulness in 3H framework)? ←- 137. How can we design methods to balance conflicting values appropriately or enable (groups of) humans to resolve the conflicts between their values? ←- 138. How do we mitigate the risk of value imposition? Can we design governance mechanisms that allow LLMs' values to be chosen in a systematically fair and just way? ←- 139. How are we to account for changes in values over time? ←- 140. To what extent will the 'technical lottery' play a role in what values we encode in our models? For values that may be technically infeasible to encode, can we develop technically feasible robust proxies that we could use instead? ←- 141. How can we robustly evaluate what values are encoded within a model? ←- 142. How can we determine whether a model understands the encoded values or is only mimicking them? Relatedly, to what extent can we claim that an LLM has values, given an LLM is perhaps more like a superposition of various personas with varying characteristics? ←- 143. How are values transmitted from one stage of development to another? ←- 144. What are the limitations of framing the design of AI technologies that broadly benefit humanity in terms of 'value alignment' with humanity? Can we develop alternative or complementary framings that might help address those limitations? ## 4.2 Dual-Use Capabilities Enable Malicious Use And Misuse Of Llms Like all technologies, LLMs have the possibility for misuse by malicious actors. Malicious use of dual-use capabilities of AI is a recurring concern within literature (Brundage et al., 2018; Hendrycks et al., 2023; Mozes et al., 2023). However, there exists a significant gap from these relatively high-level articulations of concern to rigorous research on the topic, resulting in a lack of nuanced and context-specific understanding of the risks associated with dual-use capabilities. This deficiency in understanding and practicality is deeply concerning as it hampers the development of effective mitigation strategies. While we primarily focus on intentional misuse, we note that as LLMs become more widespread, easy-to-use, and general-purpose, the potential for accidental misuse and other ethically grey uses is also amplified. This section reviews several plausible malicious use cases of present or future LLMs and calls for further research to improve our understanding of how LLMs might be misused. The improved understanding could inform prioritization of concerns and the development of effective mitigation strategies. Although we focus on LLMs, this is simply due to the focus of our agenda; a consideration of which types of AI systems have the greatest potential for harmful misuse is out of our scope. ## 4.2.1 Misinformation And Manipulation Recent studies have demonstrated that LLMs can be exploited to craft deceptive narratives with levels of persuasiveness similar to human-generated content (Pan et al., 2023b; Spitale et al., 2023), to fabricate fake news (Zellers et al., 2019; Zhou et al., 2023d), and to devise automated influence operations aimed at manipulating the perspectives of targeted audiences (Goldstein et al., 2023). LLMs have also been found to be used in malicious social botnets (Yang & Menczer, 2023), powering automated accounts used to disseminate coordinated messages. More broadly, the use of LLMs for the deliberate generation of misleading information could significantly lower the barrier for propaganda and manipulation (Aharoni et al., 2024), as LLMs can generate highly credible misinformation with significant cost-savings compared to human authorship (Musser, 2023), while achieving considerable scale and speed of content generation (Buchanan et al., 2021; Goldstein et al., 2023). Furthermore, an area of particular concern regards the degree of personalisation that LLMs can achieve in the production of misleading content, whether wholly false or true but presented in a misleading way. It is already well-documented that LLMs tend to be sycophantic, that is, selectively present content according to perceived user desires (Perez et al., 2022b); it is not hard to imagine this capability being exploited for malicious purposes. By tailoring content to specific demographics or individual profiles, these models can facilitate the creation of hyper-targeted misinformation (Bagdasaryan & Shmatikov, 2022; Ferrara, 2023). This feature of LLMs raises alarming possibilities for manipulating belief systems and public opinion at scale, potentially exacerbating societal divisions (Kirk et al., 2023b) and undermining trust in trustworthy information sources (Weidinger et al., 2022). Additionally, while the risks discussed largely concern society-wide implications of LLM-powered misinformation, these models can also exacerbate harm to individuals. For example, LLMs can be leveraged to create highly realistic multimodal deepfakes, which include audio, video, and photographic content. These deepfakes, often indistinguishable from authentic content, can be used for a range of individual-level harms, such as the production of falsified sexual images and the discreditation of individuals (Chesney & Citron, 2019). Such content can do harm even if it is known to be fake (Burga, 2024). Further empirical and theoretical research is needed to assess the scale and likelihood of the use and effectiveness of LLMs for misinformation generation and propagation. There is also a need for research into developing tools and mechanisms to combat misinformation. One successful way of doing so is designing robust community-based fact-checking tooling (Pröllochs, 2022). LLMs themselves could potentially form part of the solution; for example, by identifying why a particular piece of information is false, surfacing relevant sources to substantiate their claim and providing desirable alternative explanations (Hu et al., 2023a; Chen & Shu, 2023). ## 4.2.2 Cybersecurity LLMs may exacerbate cybersecurity risks in various ways (Newman, 2024). Firstly, LLMs may significantly amplify the effectiveness of deceptive operations aimed at tricking people into disclosing sensitive information or granting adversary access to critical resources. For example, LLMs might prove highly effective at crafting personalized phishing emails or messages at scale that may be harder for an average user to recognize as phishing attempts (Karanjai, 2022; Hazell, 2023). In addition to being directly harmful to the targeted individual, such 'social engineering' attacks are often the base of larger hacking operations (Plachkinova & Maurer, 2018; Salahdine & Kaabouch, 2019). However, the precise impact of LLMs on the likelihood of successful social engineering attacks is currently not clear. To develop a better understanding of this risk, researchers could perform user studies involving white-hat hackers to understand how an actual hacker might benefit from using an LLM in her hacking attempts. This could inform technical or sociotechnical mitigation strategies, such as training an LLM to recognize when it is being misused for crafting phishing emails etc. and to decline such requests. However, the technical problem of jailbreaking would remain an issue here (c.f. Section 3.5). Secondly, coding capabilities of LLMs could be used for malicious purposes (Checkpoint Research, 2022). This may either be done through using off-the-shelf LLMs or through training or fine-tuning LLMs specifically for this purpose (Checkpoint Research, 2023; Erzberger, 2023). This may include using code-inspection capabilities of LLMs to find software vulnerabilities, and code-writing capabilities of LLMs to create novel malware and exploits. However, cybersecurity teams may also leverage LLMs in a similar fashion to preemptively identify and remedy software vulnerabilities and strengthen cybersecurity in other ways (Aghaei et al., 2022; Ferrag et al., 2023). Consequently, the net impact of LLMs on cybersecurity is currently not clear and deserves further study (Hendrycks et al., 2021b). In addition to developing a more calibrated understanding of how coding capabilities of LLMs may be used in malicious ways, researchers may focus on developing LLM-based cybersecurity tools that may be helpful to cybersecurity professionals. One other way in which coding capabilities of LLMs could prove harmful is if they lower the barriers for staging a successful cyberattack (Brundage et al., 2018). Current evidence is mixed in this regard indicating that while LLMs can be used to create novel attacks, this generally requires some know-how on the part of the user as well (Checkpoint Research, 2023; Carlini, 2023b). However, it is plausible that the improvements in coding capabilities of LLMs could eliminate this need for expert knowledge in the future. This risk ought to be monitored closely. One way to do so could be to develop benchmarks focused on evaluating *autonomous* code-writing capabilities of a given LLM (Deng et al., 2023a). Cybersecurity risks can also increase due to the collective resources available to multi-agent systems powered by LLMs. Such systems could represent a risk similar in scale to botnets, with a large number of coordinated agents working together (Sun et al., 2023). However, the generative capabilities and possible emergent abilities of these systems at scale extend the potential impact beyond traditional Distributed Denial of Service (DDoS) attacks. For instance, multi-agent systems could be used for targeted vulnerability analysis and exploitation over a range of systems in a coordinated, fault tolerant manner (Hendrycks et al., 2021b). This could facilitate vulnerability chaining across systems and networks, enabling multi-stage attacks that are inherently more difficult to mitigate (Roytman & Bellis, 2023). In addition, multi-vector attacks could shift the attack-defense balance as standard filtering/monitoring techniques may prove ineffective against generative agents actively concealing their distributed activities using advanced forms of steganography (de Witt et al., 2022) or adversarial attacks of bounded information-theoretic detectability (Franzmeyer et al., 2023). Similar approaches could also accelerate the forensics and post-exploitation process to superhuman speeds and efficiency (Xu et al., 2024a). On the defense side, LLMs could help, for example, through automated analysis of security logs (Boffa et al., 2022). How the offense-defense balance will shift is an open problem, and much work remains to be done to safeguard cybersecurity infrastructure from scaled-up threats (Hendrycks et al., 2023) and within the broader context of *multi-agent security* (de Witt et al., 2023). Lastly, there is increasing evidence that LLMs can be used to craft jailbreaks which can be used on other LLMs and other instances of the same LLM (Chao et al., 2023; Mehrabi et al., 2023; Shah et al., 2023). This poses a risk to the security of LLMs and may result in a dangerous dynamic where improvement in a (closed-source) LLM's capabilities means it can generate more sophisticated jailbreaks (which could be used to jailbreak another instance of the same LLM). There is a need to understand how this dynamic may play out, and what technical and sociotechnical interventions can be done to mitigate this risk. ## 4.2.3 Surveillance And Censorship Content moderation has emerged as one of the key use-cases of LLMs (Weng et al., 2023), indicating the potential of LLMs for surveillance and censorship as well (Edwards, 2023). Surveillance and censorship are one of the primary tools employed by governments with dictatorial tendencies to suppress opposing political and social voices. These censorship measures, however, are often quite crude and can be escaped with little ingenuity. For example, manual supervision of the content (e.g. newspaper articles) is error-prone and can be circumvented by simple tactics, such as using clever phrasing that does not appear critical at the surface or hiding critical content within the margins of the articles which are likely to be overlooked (Hem, 2014). Similarly, automated tools currently used for censorship of text-based communication at scale are quite primitive and primarily based on keyword-matching (Knockel et al., 2020). However, LLMs could enable significantly more sophisticated surveillance and censorship operations at scale (Feldstein, 2019). Multimodal-LLMs or LLMs combined with speech-to-text technologies could be used for surveilling and censoring other forms of communication as well, e.g. phone calls and video messages (Whittaker, 2019). This may collectively contribute towards the worsening of personal liberties and the heightening of state oppression across the world. Examples have been documented already, for instance in calling for violence and silencing of political dissidents (Aziz, 2020), and suppression of Palestinian social media accounts (Zahzah, 2021). Sociotechnical research could help better understand the various ways in which surveillance and censorship may negatively impact the free thought and integrity of democratic institutions (Richards, 2012). It is also important for the research and academic community to be proactive in this regard and ensure that their designed models are not misused for surveillance and censorship purposes (Kalluri et al., 2023). The developers of LLM-based technologies, and civil society at large, ought to resist attempts at creating laws that provide legal covers to such surveillance efforts, with security concerns used as the ostensible reason (Ellis-Petersen, 2021). A possible technological mitigation against censorship and technological lock-in could be the use of perfectly secure steganography (Schroeder de Witt et al., 2023, iMEC), which allows covert communications and information hiding in the outputs of LLMs under information-theoretic undetectability. ## 4.2.4 Warfare And Physical Harm The use of AI in warfare is highly alarming and may pose dangers to human safety (Hendrycks et al., 2023). Autonomous drone warfare is being aggressively pursued as a tactic in the current war in Ukraine (Meaker, 2023), and may already have been used on human targets (Hambling, 2023). The use of AIbased facial recognition has been documented in the targeting of Palestinians in Gaza (International, 2023). LLMs have already been productized in limited ways for the purposes of warfare planning (Tarantola, 2023). Furthermore, active research is being carried out to develop multimodal-LLMs that can act as 'brains' for general-purpose robots (Ahn et al., 2022; 2024). Due to the 'general-purpose' nature of such advances, it will likely be cost-effective and practical to adapt them for creating more advanced autonomous weapons. Development of autonomous weapons has been extensively criticized and cautioned against in prior literature (Sauer & Schörnig, 2012; Sharkey, 2016; Scharre, 2016; Roff & Moyes, 2016), and identified as a source of catastrophic risk (Hendrycks et al., 2023). These harms have also been cautioned against by technical machine learning researchers in various open letters (Future of Life Institute, 2016; Foerster, 2020). There exist voluntary pledges by AI companies to not weaponize their technologies (Boston Dynamics, 2022), however, past evidence indicates that such voluntary pledges can be side-stepped when convenient as they do not pose any legally binding requirements (Christie, 2018; Biddle, 2024). Hence, LLM-based autonomous weapons may soon pose major safety risks, absent international agreements and national legislation prohibiting their development and use. Besides political challenges, there are technical challenges around monitoring and enforcement. Developing or increasing the availability of autonomous drone weapons and facial recognition targeting may also increase the risk of targeted mass killings by non-state actors. ## 4.2.5 Hazardous Biological And Chemical Technologies AI systems such as LLMs, chemical LLMs (Skinnider et al., 2021; Moret et al., 2023), and other LLM-based biological design tools might soon facilitate the production of bioweapons, chemical weapons, and other hazardous technologies. In particular, LLMs might enable actors with less expertise to more easily synthesize dangerous pathogens, while customized chemical and biological design tools might be more concerning in terms of expanding the capabilities of sophisticated actors (e.g. states) (Sandbrink, 2023). Gopal et al. (2023) and Soice et al. (2023) demonstrated that people with little background could use LLMs to help make progress towards developing pathogens such as the 1918 pandemic influenza. However, recent studies suggest that current LLMs are not more helpful than internet search in this regard (Mouton et al., 2024; Patwardhan et al., 2024). On the other hand, search engines offer more practical affordances for removing content vs. e.g. needing to retrain an LLM to ensure the content is not influencing future responses. Furthermore, (future) LLM-based technologies could develop strong reasoning capabilities that might help them make novel discoveries (Romera-Paredes et al., 2024) - this potential to make novel discoveries could also transfer to hazardous chemical and biological technologies (Moret et al., 2023), potentially, resulting in technology designs that might be more challenging to guard against via supply-chain monitoring. There is a need for more research to characterize the potential "uplift" that (current and future) LLM-based technologies may provide, as the current studies involved small sample sizes, only used 'vanilla' LLMs and none of them produced conclusive results (Marcus, 2024). The realism of these studies can also be increased by moving them into actual 'wet' laboratories and would benefit from the involvement of experts in clinical trials or psychology research. Furthermore, there is a need for ongoing monitoring to track how relevant capabilities are evolving with scale (c.f. Section 2.3). Doing so well may require postulating and refining explicit threat models and defining clear thresholds for action in advance; such work could also involve national security experts. Machine learning researchers should also work with professionals who study terrorism and mass killings to better understand whether and how LLMs might significantly contribute to such risks; this could involve understanding the social factors underlying such attacks, how the resources currently available online are - or might be - used by such attackers, and what currently limits the rate and severity of such attacks. Finally, we note that most of the contemporary work on biological and chemical risks from LLM-based technologies is concentrated on risks from non-state actors; there is also a need to better understand how states might misuse LLM-based technologies in this regard (Yassif et al., 2023). ## 4.2.6 Domain-Specific Misuses Improvements in LLMs may exert greater pressure to apply LLMs to various domains, such as health and education (Eloundou et al., 2023). Crude efforts to use LLMs in such domains, however, may incur harm and should be discouraged strongly. In particular, it is important to guard against different ways in which LLMs may be misused within any domain. One famous episode of misuse within the health sector is a mental health non-profit *experimenting* LLM-based therapy on its users without their informed consent (Xiang, 2023a). Within the education sector, LLMs may be misused in various ways that might impact student learning; e.g. as cheating accessory by the students or as (low quality) evaluator of student's work by the instructors (Cotton et al., 2023). Recent findings in moral psychology also suggest that LLMs can generate moral evaluations that people perceive as superior to human judgments; these could be misused to create compelling yet harmful moral guidance (Aharoni et al., 2024). Similar risks of misuse may exist in other domains as well. There is a need for research to better understand how LLMs might be misused across different domains. Interviews with domain experts and case studies of early deployments across different sectors may help in this regard. Domain-specific standards and regulations could be established to prevent these risks from materializing (Organization, 2021). One particular sociotechnical challenge is to effectively identify emerging use cases of LLMs, e.g. through surveys and/or in partnership with LLM deployers, who could monitor use patterns. Several factors make this a non-trivial challenge, even for LLM deployers; for instance, users may disguise queries or user calls may be spread across various LLM deployers in a way that might mask the real use (Glukhov et al., 2023). ## 4.2.7 Mechanisms For Detecting And Attributing Llm Outputs Are Lacking One overriding challenge in preventing malicious use of LLMs is the difficulty of recognizing outputs from LLMs and attributing them correctly to particular systems and/or users. Attribution facilitates accountability and enforcement, which may help disincentivize many forms of malicious use. Techniques such as watermarking, which embeds detectable patterns in AI-generated content, could be helpful (Kirchenbauer et al., 2023), but their effectiveness is currently unclear (Zhang et al., 2023a; Huang et al., 2023a; Hu et al., 2023d) and more work is needed. Increasing access to LLMs (especially open source models) might make attribution more challenging (Augenstein et al., 2023; Seger et al., 2023). In particular, even if an open-source model watermarks its outputs, it might be easy to modify it to not do so. This motivates the challenge of attributing outputs to particular models in the absence of (deliberately injected) watermarks. Machine learning researchers could work with cyber security experts and other professionals to understand the gaps that LLMs will create in current attribution practice for the particular domains discussed above. There is a strong risk that dual-use capabilities of LLMs may be exploited for malicious purposes. ![78_image_0.png](78_image_0.png) LLMs may be misused towards generating targeted misinformation at an unprecedented scale. The coding capabilities of LLMs may be misused by malicious actors to mount cyberattacks with greater sophistication and at higher frequencies. LLMs are quite effective as content moderation tools; this may mean that LLMs get adopted to enact mass surveillance and censorship. LLMs may be used to power autonomous weapons, creating a possibility of physical harm from them. Other misuses of LLMs may occur as LLMs are applied to various domains. Active research is needed to better understand these risks and to create effective mitigation strategies. 145. Can we develop a calibrated understanding of how current and future LLMs could be used to scale up and amplify misinformation campaigns? What level of human expertise is required to effectively use LLMs to generate targeted misinformation? ←- 146. Can we develop reliable techniques to attribute LLM-generated content, helping track its spread? Can watermarking be an effective measure given the growing availability of openly accessible LLMs? ←- ![78_image_1.png](78_image_1.png) 147. How can individuals be protected against harms caused by AI-assisted deepfakes? ←- 148. What measures can be taken to prevent LLMs from producing sophisticated misinformation, while simultaneously enhancing their capacity to identify and mitigate misinformation? ←- 149. Can we develop effective tooling and mechanisms to combat misinformation on online platforms? How can LLMs themselves be applied to detect and intervene against LLM-generated misinformation? ←- 150. How may LLMs contribute to scaling and personalization of social engineering-based cyberattacks? ←- 151. To what extent do LLMs reduce the threshold of technical expertise required for executing a successful cyberattack? ←- 152. Do advances in capabilities of LLMs cause LLMs to become better at crafting jailbreaking attacks? If so, how can the safety of LLMs from jailbreaking attacks (designed by other LLMs) be assured? ←- 153. How effective are LLMs at surveillance and censorship? How may LLMs, on their own or in combination with other technologies (e.g. speech-to-text softwares), contribute to the expansion and sophistication of current surveillance operations? How can we limit the use of LLMs for surveillance? ←- 154. How can the military applications of LLMs, especially LLM-powered autonomous weapons, be regulated? There appears to be a mass consensus within the machine learning community that LLMs, and other AIs, should not be weaponized; how can this consensus be leveraged to create legislation pressure to outlaw autonomous weapons development across the world? ←- 155. How might current, or future, LLM-based technologies (including chemical LLMs and specialized biological design tools based on LLMs) be misused in the design of hazardous biological and chemical technologies? ←- 156. Can we identify how LLMs may get misused across various domains, such as health and education? What regulations are required to prevent such misuses? In general, how can we best understand LLM use cases and identify those with significant misuse potential? ←- 157. Can we design robust watermarking mechanisms that may help us identify LLM-generated content? ←- 158. What mechanisms can be used to determine attribution for content generated using openly available models for which watermarking may not be an appropriate solution (as it could be easily undone)? ←- ## 4.3 Llm-Systems Can Be Untrustworthy A key desideratum for an LLM from a user's perspective is 'trustworthiness', i.e. assurance of reliability and consistent performance, and absence of any accidental harm caused by the technology to the user.16 Providing assurance that an LLM-based system will not cause *accidental* harm remains a major open challenge. Harms may either occur directly due to the flawed nature of LLMs, e.g. an LLM generating toxic language or behaving inappropriately in some other ways, or may occur due to improper usage by a user, e.g. automation bias due to a user's overreliance on LLM. We overview several challenges in this section that can potentially undermine a user's trust in LLMs. The technical flaws underlying these challenges in many cases appear difficult to address; at least in the immediate term. Hence, technical research alone may be insufficient to address these challenges, and sociotechnical interventions may be required to assure that LLM assistants do not incur accidental harm to the users. ## 4.3.1 Harms Of Representation And Other Biases A pretrained LLM generally has many of the stereotypical biases commonly present in the human society (Touvron et al., 2023). This makes it difficult for users to trust that LLMs will work well for them and not produce unfair or biased responses. Appropriate finetuning can effectively limit the bias displayed in LLM outputs in a variety of situations, e.g. when models are explicitly prompted with stereotypes (Wang et al., 16Note that within the machine learning literature, the term 'trustworthiness' carries various other meanings as well. The scope of our discussion is limited to our definition given here 2023b), but it does not 'solve' the problem. Even after finetuning, biases often resurface when deliberately elicited (Wang et al., 2023b), or under novel scenarios, e.g. in writing reference letters (Wan et al., 2023b), generating synthetic training data (Yu et al., 2023c), screening resumes (Yin et al., 2024) or when used as LLM-agents (Pan et al., 2024). The biases outputs are often much more prominent in low-resource languages (Yong et al., 2023) and in dialects used by marginalized groups (Hofmann et al., 2024). There is a need for research to develop better and more comprehensive tools for the detection of bias, toxicity (Wen et al., 2023; Wang & Chang, 2022), and other kinds of inappropriate behaviors. The current tools primarily focus on detecting content that is *explicitly* toxic and offensive, however, the issue of bias goes beyond commonly studied subjects of biases, such as race and gender (Hofmann et al., 2024). For instance, depending on the finetuning data, LLMs may also develop a bias towards particular political ideologies (Rutinowski et al., 2023). These biases are still relatively poorly understood, and there is a need for more extensive evaluations of current LLMs in novel scenarios to better understand the propensity of LLMs to enact such biases. There is also a need for a thorough evaluation of how the global south is represented within these models, and what sort of harmful representations of the global south are reinforced by LLMs (Qadri et al., 2023; Jha et al., 2024). ## 4.3.2 Inconsistent Performance Across And Within Domains Estimating true capabilities of an LLM is a difficult task (c.f. Section 3.3), especially for naive users unfamiliar with the brittle nature of machine learning technologies. Exaggeration of model capabilities by the developers (Lambert, 2023; Blair-Stanek et al., 2023), and issues such as task-contamination (Roberts et al., 2023b), underrepresentation of tasks or domains (Wu et al., 2023b; McCoy et al., 2023), and prompt-sensitivity (Anthropic, 2023b) may cause a user to misestimate the true capabilities of a model. This lack of reliability can undermine user trust or cause harm if a user bases their decision on incorrect or misleading information provided by an LLM. A famous example of this is the US lawyer who cited a fake case, hallucinated by ChatGPT, in a legal brief filed in a US court (Merken, 2023). Technical solutions could involve improving the reliability of the LLMs performance (e.g. using retrieval augmented generation to minimize hallucinations) or providing reliable uncertainty estimates alongside LLM responses (Fadeeva et al., 2023; Kuhn et al., 2023). However, technical solutions may not be developed quickly, if at all. Hence, there is a need to understand how the reliability of LLMs can be improved through extrinsic measures. Towards this goal, it will be useful to understand how reliable a median user perceives an LLM to be, to what extent they are able to reliably forecast on what problems a particular LLM will fail (Carlini, 2023a) and how this ability to determine the reliability of an LLM's performance evolves as the user interacts further with the LLM. This may be done through user studies and interviews with various different types of users. It may be difficult to draw very general conclusions, and more productive in many instances to focus on trustworthiness for particular (e.g. somewhat narrow) use cases or domains. Such research could then provide insights into the efficacy of different extrinsic measures that could be taken to help users use the LLMs safely e.g. providing appropriate education and disclaimers to the user before they begin interacting with an LLM. ## 4.3.3 Overreliance If a user begins to excessively trust an LLM, this may cause them to develop an overreliance on the LLM. Overreliance can result in automation bias (Kupfer et al., 2023), and can cause errors of omission (user choosing not to verify the validity of a response) and errors of commission (user believing and acting on the basis of the LLM's response, even if it contradicts their own knowledge) (Skitka et al., 1999). It can be particularly dangerous in domains where the user may lack relevant expertise to robustly scrutinize the LLM responses. This is particularly a source of risk for LLMs because LLMs can often generate plausible, yet incorrect or unfaithful, rationalizations of their actions (c.f. Section 3.4.10), which can mistakenly cause the user to develop the belief that LLM has the relevant expertise and has provided a valid response. A common tactic used to prevent harms that might arise from overreliance is to include disclaimers alongside model output that could act as triggers for the user to externally validate the model response. However, users can get desensitized to such disclaimers over time (OpenAI, 2023c). Better interface design (e.g. using distinct colors for distinct grades of disclaimers) could help avoid, or limit, such desensitization. User studies could be done with different user types to better understand the details of how overreliance might manifest, and what mitigation strategies may be most effective against it. The use of LLMs as coding assistants by developers could be used to study over-reliance as coding tasks are often well-posed and can have varying levels of complexity as required. There is also a need to better understand the related risks that might arise due to consistent and prolonged usage of LLM by a user in a particular domain. Outsourcing certain types of cognitive tasks to LLMs, e.g. writing tasks, could impair corresponding skills among LLM users. This is particularly a risk for the use of LLMs in education where excessive usage of LLM may cause students to develop an unnecessary, and unwanted, dependency on LLMs. Additionally, prior work has shown that humans can inherit biases from AI systems, and that these negative effects of AI technology do not naturally go away even when the biased AI systems are removed (Vicente & Matute, 2023; Kidd & Birhane, 2023). Further research is required to better understand these risks; how they might materialize, and what can be done to mitigate them? ## 4.3.4 Contextual Privacy Preservation As LLMs proliferate through society, this will inevitably result in LLMs simultaneously interacting with multiple parties (see Section 2.6). In such scenarios, assurance of privacy goes significantly beyond assuring that the outputs of the LLMs do not contain personally identifiable information (Nissenbaum, 2020), and requires that LLMs do not leak data or information shared by one actor to another needlessly. The notion of privacy is not well-formalized in such contexts currently and there is a need for work on defining and operationalizing it in an appropriate way. In the absence of a formal definition, a useful goal could be to assure that in any context LLMs do not share information given by one party with some other party, unless a human would do so as well. Current LLMs do not provide this assurance and often leak private information provided by one party to another when a human would not do so - and common tricks, e.g. prompt-engineering or output filtering do not improve alignment between the behavior of LLMs and human (Mireshghallah et al., 2023). This hints at some of the fundamental limitations in preserving privacy in multi-agent contexts. ![81_image_0.png](81_image_0.png) ## 4.4 Socioeconomic Impacts Of Llm May Be Highly Disruptive The rapid evolution of LLMs brings significant socioeconomic opportunities and challenges, impacting the workforce, income inequality, education, and global economic development. Many of these challenges are systemic in nature, constituting what economists refer to as general equilibrium effects. These challenges do not arise directly from LLMs causing harm to users but rather from their indirect effects on the socioeconomic equilibrium. For instance, automation may reduce labor demand, leading to wage declines and subsequent harm to workers. Consequently, addressing these challenges may necessitate system-wide policy interventions. To effectively implement such interventions, a thorough understanding of these challenges and potential mitigation strategies is imperative. In this section, we explore some of these challenges and pose pertinent research questions that warrant investigation. ## 4.4.1 Effects On The Workforce The effects of integration of LLMs into various industries and workflows on the workforce are likely to be significant, and pose a great socioeconomic challenge. Since the Industrial Revolution, technological progress has regularly displaced some workers but led to the creation of new jobs as the growing wealth generated by better technology led to growing demand for additional goods and services, enabling the displaced workers to take up new opportunities (Autor, 2015). As a result, the economy adjusted to the displacement. By and large, workers in advanced countries today are much better off than they were at the beginning of the Industrial Revolution (Maddison, 2004). However, there are reasons to be concerned that the ongoing rapid advances in LLM capabilities in particular, and AI in general, might disrupt this pattern. Rapid advances in LLMs pose three distinct sets of challenges for workers' incomes (Korinek & Stiglitz, 2019; Susskind, 2023). First, they are likely to accelerate the rate of job turnover and disruption —– affecting more workers, including more highly skilled workers, and making the adjustment process for society more difficult than what we were used to from prior technological advances. Research could help identify what sectors are most vulnerable to job disruption (Eloundou et al., 2023) and how the displaced workers within those sectors can be helped to become productive workers in other growing sectors. Second, although technological progress means that society may produce more wealth overall, there is a risk that the general-purpose nature of LLMs may lead to progress that is biased against labor, meaning that the share of that wealth that goes to labor may decline. The overall effect on wages is then a horse race between these two opposing forces. The fate of blue-collar workers in recent history may be a harbinger of what lies ahead for white-collar workers: as a result of skill-biased technological change, the wage of the median male blue-collar worker has declined over the past four decades, when adjusted for inflation, even though the overall economy in the US has tripled (Autor, 2019). Early results suggest a negative impact of LLMs on certain categories of jobs (Hui et al., 2023). Third, if future LLMs and robots advance to the point where they can perform virtually all the work tasks, they would disrupt labor markets more fundamentally: if machines can do workers' jobs, wages would fall to machines' user cost (Korinek & Juelfs, 2023). This would pose fundamental challenges for labor markets and income distribution (Korinek, 2023). Aside from their effects on incomes, the rapid advances in LLMs also risk reducing job quality. Automation often tends to increase not only the physical but also the emotional demands put on workers, exposing them to greater surveillance, higher job intensity, and less human agency (Bell, 2022). These trends have already been observed in the increase of digital labour (Casilli & Posada, 2019) and the platformization of labour more generally (Nyabola, 2023); as LLMs become more widely applied they could exacerbate these trends. More fundamentally, work is not only the main source of income for the majority of people, but also the main activity that occupies our time. As a result, many derive a significant part of our identity, life satisfaction, and meaning from work (Susskind, 2023). If people lose their work, they would thus lose much more than their income, with broad implications for our society and our political system (Bell & Korinek, 2023). Two observations might help tackle the resulting challenges (Korinek & Juelfs, 2023). First, if the nonmonetary aspects of work are so important, people could presumably continue to work even when machines can automate it - they just may not earn much of an income from it. In surveys, a majority of workers are not very satisfied with their work (Gallup, 2022) and would likely be happier if they received the same income without the need to do a job. Second, things become trickier if one individual's decision on whether to work affects others' well-being, for example, because it affects others' scope to form social connections at work or because it has implications for political stability. Economists call such effects externalities. When significant externalities are at work, there is often a role for public policy to improve upon the socioeconomic equilibrium. Research could work to identify what the best interventions could be that might be implemented via appropriate public policy. One such intervention could be to mandate LLM-developers to conduct impact assessments or risk assessments for whether their AI systems augment workers and improve working conditions. However, the general-purpose nature of the LLMs may make conducting such risk assessments challenging. ## 4.4.2 Effects On Inequality LLMs could potentially worsen socioeconomic inequalities (Capraro et al., 2023). Effects on inequality are closely linked to the effects of LLMs on workers but ultimately depend on how the fruits of technological progress are distributed. There are three important challenges associated with this: First, if the role and compensation of capital rise and the role and compensation of labor decline in an LLMpowered economy, inequality may go up because work is the main source of income for the majority of people. There are signs that this may already have happened because of earlier automation technologies in recent decades (Karabarbounis & Neiman, 2014). This has led to calls to steer advances in AI in a direction that complements workers rather than substituting them (Korinek & Stiglitz, 2020; Klinova & Korinek, 2021). This also applies to LLMs. A tangible example of an LLM that complements workers would be a system that advises call service center agents how to best handle calls (Brynjolfsson et al., 2023). An example that substitutes for workers would be a system that replaces workers altogether. Given the general-purpose nature of LLMs, there is a risk that advances in any given use case may initially complement workers but eventually progress to a stage where they can displace them. However, human decisions can influence the extent to which LLMs complement vs. substitute workers. This includes developers of such systems, business leaders who deploy them, and lawmakers and regulators who shape the economic and societal context in which these systems operate. Economic and sociological research could help understand how LLM-based technologies are likely to get adopted, for instance, by interviewing workers who are early adopters or studying historical examples of integrating new technologies into workspaces or industries. There is also a need for concerted efforts to best educate the business leaders on leveraging the growth benefits of LLMs in a way that is minimally disruptive to the larger society. Second, the large fixed cost of training cutting-edge LLMs and the network effects involved imply that the market for the most advanced LLMs tends towards a natural monopoly structure in which only one or a small number of players will be successful, a phenomenon that has been termed 'algorithmic monoculture' in the literature (Kleinberg & Raghavan, 2021; Bommasani et al., 2022). As a result, LLM developers may amass significant market power. This might result in reduced social welfare, and lead to LLM-providers extracting monopoly rents from their customers (Kleinberg & Raghavan, 2021; Jagadeesan et al., 2023). These concerns may become even more important if the producers of LLMs engage in vertical integration and also participate in the market for downstream applications, for example, if an LLM company also enters the market for legal advice, medical services, etc. In the limit, the market for leading LLM providers could be the entire economy as the capabilities of LLMs improve across the board (Korinek & Vipra, 2024). This centralization of power could be even more problematic due to the potentially negative impact on the governance of LLMs (c.f. Section 4.5.4). The tendency towards centralization could be mitigated by a robust antitrust response and other policy interventions. However, this requires ensuring that economic policymakers are cognizant of the technological scenarios that lie ahead and their economic implications. The onus is on the technical community, which is generally more prescient about the impending technological developments, to effectively communicate and engage with economic policymakers in this regard. They may do so by writing educational documents aimed at a non-technical audience (e.g. Bowman, 2023), or releasing policy briefs on their research (e.g. Barrett et al., 2023). An alternative solution could be to prevent materialization of an algorithmic monoculture by increasing access to LLMs and broadening the pool of LLM developers, e.g. via open-source and open-science (Solaiman, 2023). Researchers could help improve the understanding of the extent to which current dynamics, in which leading LLM developers remain closed-source, but a number of less capable open-source competitors exist, are likely to lead to monopoly and identify interventions that could help mitigate them. Third, as LLMs are becoming more powerful, who has access and who hasn't is becoming a more and more important question. For example, automated coding tools have been shown to produce significant productivity gains, e.g. > 50% in some cases (Peng et al., 2023). Individuals who don't have access —– whether it is for financial reasons, for reasons of education, because of corporate or governmental policies, or for geopolitical reasons - might be at a growing disadvantage. In recent decades social scientists have observed a growing digital divide based on unequal access to digital technologies (Van Dijk, 2020). LLMs risk giving rise to a new 'intelligence divide' based on who has access to the most intelligent LLMs and who hasn't. ## 4.4.3 Economic Challenges For Education In a knowledge-based economy, education equips individuals with the human capital that prepares them to be productive workers. Human capital is often considered as our greatest asset. Yet the rapid emergence of LLMs poses three significant economic challenges for education: First, given the rapid emergence of LLMs as a powerful productivity tool, virtually the entire white-collar workforce may need to learn to use LLMs. Cognitive workers may need to learn how to prompt and steer LLMs, how to properly evaluate the output generated by LLMs, and how to incorporate them into their workflows in order to optimally benefit. This is one of the factors slowing down the rollout of LLMs in organizations throughout our economy (McAfee et al., 2023). Second, while educators are challenged to teach new tools, LLMs also force them to fundamentally rethink the way in which they provide education and evaluate students (Mollick & Mollick, 2023; Cotton et al., 2023). LLMs could help unlock teaching methodologies that were not previously viable and improve the overall education quality (Khan, 2023). However, issues such as low technological readiness may hinder the rapid adoption of LLMs in educational contexts (Yan et al., 2023). There is a need for further research to better understand these issues and how to mitigate them. Third, for education, a dark side of the positive productivity effects of LLMs is that they devalue a significant part of the human capital that workers have accumulated in the past. A growing number of studies show that lesser-skilled workers benefit more from LLMs than more highly-skilled workers (Noy & Zhang, 2023; Dell'Acqua et al., 2023). Another way of putting these results is that the human capital of highly skilled workers is no longer as valuable as it used to be, which may eventually lead to reduced hiring and training, and lower wages for skilled workers. This could have negative second-order effects. Currently, skilled labour presents a technical and moral bottleneck to the deployment of AI for malicious purposes (e.g. tech workers protesting Project Maven Shane & Wakabayashi, 2018), but as their jobs become (fully or partially) automated or, these workers may have less ability and agency to contest the uses of LLMs, increasing the hegemony of a narrowing set of powerful actors. ## 4.4.4 Global Economic Development Many of the themes and challenges that we discussed above come together when analyzing the socio-economic effects on developing countries. The workforce of developing countries may suffer from a retrenchment of outsourcing as many simple cognitive tasks that used to be performed in developing countries - for example, in call centers –— can be automated with LLMs. This may adversely affect the economies of the poor countries (Georgieva, 2024). The inequality implications of LLMs also have an international dimension as there may be a growing 'intelligence divide' between advanced countries that have access to leading LLMs and poorer countries that do not. Differential productivity effects may give rise to terms-of-trade losses for developing countries (Korinek et al., 2022). Similarly, many of the profits generated by leading LLMs will accrue to advanced countries. On the other hand, developing countries may share in the productivity benefit of LLMs with advanced countries if LLMs are accessible to populations in Global South, both in terms of technology design and in terms of pricing. Surface-level usage statistics seem to indicate that freely available LLMs have indeed seen considerable adoption among Global South populations (e.g. India and Brazil are the second and third largest sources of web traffic to ChatGPT; Similarweb, 2024). However, there may exist extrinsic factors that might limit wider penetration of LLMs among Global South populations, such as lack of internet connectivity and poor tech literacy among the populations. LLMs could potentially be leveraged to address some of the economic challenges faced by countries in the Global South. For example, teacher shortage is a critical issue that negatively impacts quality of education among Global South countries (Unesco, 2022). It is plausible that LLMs could be used to help mitigate this issue. Additionally, in contrast to most other technologies, global adoption of LLMs requires that they are proficient in all world languages. However, even multilingual models perform worse in lower-resource languages (Etxaniz et al., 2023; Shen et al., 2024) and "think" in English even when fine-tuned for other languages (Wendler et al., 2024). This has severe safety and security implications as models that might be aligned in English might not be equally well-aligned in other languages (Deng et al., 2023b; Yong et al., 2023) or may perform worse in other languages (Aroyo et al., 2024; Holtermann et al., 2024). Additionally, LLMs may cost an order of magnitude more to use in some languages (up to 15 times in ChatGPT's case; Ahia et al., 2023; Petrov et al., 2023a). Hence, ensuring that LLMs work well regardless of the language in which they are used is key if we want to ensure they are a tool for reducing global inequalities rather than further exacerbating them. The socioeconomic impacts of LLMs have the potential to be highly disruptive if not effectively managed. LLMs are likely to adversely affect the workforce, exacerbate societal inequality, and introduce new challenges for the education sector. Furthermore, the implications of LLM-based automation on global economic development remain uncertain. These challenges are complex and systemic in nature; there do not exist any simple fixes. To devise solutions, we need to develop a deep and nuanced understanding of these issues. Answering the following questions may help make progress towards this goal. 166. How can we better understand and forecast the disruptive effects of LLMs on job availability and job quality in different sectors? How can displaced workers be helped to transition to other sectors?←- 167. How can LLM developers best conduct impact assessments or risk assessments for whether AI systems improve working conditions (by augmenting workers) or not? ←- 168. How can LLM-based systems be designed to augment workers and improve working conditions, as opposed to automating and discplacing workers? ←- 169. How can we best educate business leaders to leverage the growth benefits of LLMs in a way that is minimally disruptive to society? ←- 170. How likely is the market for advanced LLMs to become a monopoly or oligopoly? What will the ramifications of such market concentration be on wealth distribution across society? ←- 171. How can we ensure equitable access to LLMs for individuals of all socioeconomic backgrounds?←- 172. To what extent are LLMs likely to exacerbate an 'intelligence divide' based on access to the most advanced LLMs?←- 173. How can LLM developers best keep economic policymakers updated on the technological scenarios that lie ahead and their economic implications (e.g. by writing policy briefs or writing informal educational documents)? ←- 174. How do we best educate the workforce for the effective use of LLMs and retrain disrupted workers?←- 175. What factors might impede adoption of LLM-based technology in educational contexts? How can these factors be mitigated? ←- 176. How can we better understand the second-order effects of LLM-driven automation on AI safety and alignment? E.g. could LLM-driven automation reduce the agency and ability of skilled labor to resist immoral usage of technologies? ←- 177. How might LLM-based automation negatively impact the economies of Global South countries?←- 178. How accessible are LLMs to Global South populations? How can this accessibility be improved? What measures can be taken by governments to address issues related to lack of internet connectivity, poor tech literacy etc.?←- 179. How can LLMs be used to help address some of the issues that hinder the economic development of Global South countries? For example, how can LLMs be used to help improve the quality of education available to Global South populations?←- 180. How to ensure that LLMs support all the world's languages equally, especially low-resource languages with large number of speakers? How do we ensure that LLMs are a tool for a global levelling up rather than further exacerbating economic divides? ←- ## 4.5 Llm Governance Is Lacking Governance will play an essential role in the safety and alignment of LLMs, in particular, and AI in general (Bullock et al., 2022a). Governance encompasses not only formal regulations, but also a number of other mechanisms including norms, soft law, codes of ethics, co-regulation, industry standards, and sectorspecific guidelines (Veale et al., 2023); see Table 5. Governance could supplement technical solutions, e.g. | Governance Mechanism | Examples | |----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Global Frameworks, Agreements or | Draft Council of Europe Framework Convention on AI, | | Conventions | Democracy, and Human Rights (Council of Europe, 2023) | | Regional Regulation | EU General Data Protection Regulation (GDPR) (Voigt & Von dem Bussche, 2017) EU AI Act (Council of the European Union, 2024) | | Domestic Regulation | China Administrative Provisions on the Management of Deep Synthesis of Internet Information Services (Finlayson-Brown & Ng, 2023) USA AI Initiative Executive Order (White House, 2023) | | Sub-national Regulation | California Consumer Privacy Act (CCPA) (Goldman, 2020) | | International/supranational "soft law" | OECD Recommendation on AI 2019 (OECD, 2019) EU Ethics Guidelines for Trustworthy Artificial Intelligence, 2019 (AI-HLEG, High-Level Expert Group on Artificial Intelligence, 2019) UNESCO 2021 recommendations on the ethical use of AI (Unesco, 2021) | | National "soft law" | UK NCSC "Guidelines for secure AI system development", 2023 (National Cyber Security Center, 2019) | | Industry Co-regulation | Partnership on AI (PAI, 2017) Frontier Model Forum (Frontier Model Forum, 2023) | | Industry Self-regulation | Anthropic's Responsible Scaling Policy (Anthropic, 2023f) Google AI Principles (Google AI, 2018) | | Standards organizations outputs | ISO data governance instruments (ISO, International Organization for Standardization, 2017) IEEE's Ethically Aligned Design (IEEE, Institute of Electrical and Electronics Engineers, 2018) CEN/CENELEC work on standards to implement EU AIA (ongoing) US NIST Artificial Intelligence Risk Management Framework (AI RMF) (NIST, National Institute of Standards and Technology, 2023) | | Internal institutional policies | University research ethics committees Company AI ethics and safety research teams (e.g. Microsoft's FATE team, Deepmind's Scalable Alignment team), boards and codes | | Private legal instruments | Contracts: e.g. Microsoft-OpenAI deal (Bradshaw et al., 2023) Licenses: e.g. RAILS license (Responsible AI License) (RAIL Team, 2024) | Table 5: Examples of different governance mechanisms relevant to the governance of LLM, and AI broadly. by mandating they be applied as appropriate. It could also *substitute* for technical solutions by preventing unsafe development, deployment, or use of LLMs when there do not exist sufficient technical tools for safety. **However, serious efforts to govern LLMs remain fairly nascent (e.g. US Executive Order** (White House, 2023), **or EU AI Act** (Council of the European Union, 2024)) and efforts to date have mostly been ill-defined and/or voluntary. A number of governance challenges remain that ought to be addressed, or mitigated, to ensure LLMs are beneficial to society and do not contribute to harm to any societal group. Within this section, we divide our discussion of challenges that hinder LLM governance into two parts. We first discuss *meta-challenges* that complicate governance such as lack of requisite scientific understanding of LLMs, lack of effective fast-moving governance institutions, lack of culpability schemes, and corporate power. We complement this with a discussion of *practical challenges* focused on the fact that most governance mechanisms are underdeveloped and concrete proposals for governance are lacking. We note that our discussion on governance challenges complements other related works (Dafoe, 2018; Anderljung & Carlier, 2021; Shavit et al., 2023; Barnard & Robertson, 2024). ## Meta-Challenges - Challenges That May Limit Efficacy Of Llm Governance The governance of generative models (both LLMs and generative image models) is challenging due to factors such as the rapid productization of the technology, economically disruptive nature of the technology (c.f. Section 4.4), high potential for misuse (c.f. Section 4.2), and the rapidly evolving technological landscape (Bengio et al., 2023). Indeed, several *meta-challenges* exist that might limit the efficacy of LLM governance. These challenges include a lack of scientific understanding and technical tools necessary for governing LLMs effectively, the need for agile governance institutions capable of keeping pace with technological advancements (which is an antithesis to the traditional slow bureaucratic nature of the governments), a need to better understand the competitive dynamics between AI companies to ensure that competitive pressure does not result in irresponsible AI development, risks of regulatory capture due to corporate power, a need for international cooperation and consensus, and a lack of clarity on accountability for harms caused by LLMs. ## 4.5.1 Lack Of Scientific Understanding And Unreliability Of Technical Tools Complicate Governance Effective governance of a technology hinges on three key elements: a comprehensive scientific understanding of the technology to gauge *potential* risks, dependable auditing tools to evaluate *practical* risks, and effective methods to intervene upon and *mitigate* these risks (Raji, 2021). However, as noted throughout the agenda, all three are currently underdeveloped for LLMs. Critical aspects of LLMs, such as in-context learning (Section 2.1) and reasoning abilities (Section 2.4), are poorly understood. Furthermore, existing auditing tools, including evaluation (Section 3.3) and interpretation methods (Section 3.4), are not reliable enough to provide meaningful assurance of model safety outside of very narrow contexts. And the primary technical tool for mitigating risks, safety finetuning, lacks robust generalization (Section 3.2). These issues complicate governance and contribute to a lack of scientific consensus on the nature and severity of risks associated with LLMs. This lack of technical clarity is one reason Guha et al. (2023) and Kapoor et al. (2024) caution against rushing to regulate. Yet, the process of establishing regulation has tended to be slower than the rate of AI progress. Thus, it is crucial to generate and evaluate governance proposals *despite* our presently limited technical understanding. Proposals could explore how governance approaches could be adapted based on the level of technical understanding of the models and the rate at which the field might be progressing; for instance, how should a governing body change risk thresholds in response to new evidence? Alternatively, proposals could seek to accelerate technical understanding of LLMs and their risks and harms, e.g. by funding research and building government capacity. Moreover, understanding the interconnections among diverse risk types is required to inform strategies for risk governance (Kasirzadeh, 2024). Another proposal is a temporary *pause* or slowdown on AI and LLM development (Future of Life Institute, 2023), based on the hope that this time could be used to develop standardized safety protocols and make progress on safety, e.g. through technical research. However, this proposal has received much criticism. It may adversely affect AI alignment and safety research and cause 'overfitting' of alignment research to whatever the state-of-the-art model at the time of the pause might be (Belrose, 2023) or might be infeasible to enforce (Luccioni, 2023). On the whole, it remains unclear whether governing bodies should aim to moderate the pace of progress in AI, and if so, what governance mechanisms (e.g. compute governance, see Section 4.5.11) could be used for this purpose. ## 4.5.2 Need For Effective, Fast-Moving Governance Institutions Given the rapid pace of advances in LLMs, governance institutions will need to adapt quickly to remain effective. Governments are currently attempting to regulate AI both through existing institutions, such as NIST in the USA (White House, 2023), or by novel legislation, such as the EU AI Act in Europe (Council of the European Union, 2024). However, capacity issues might limit the ability of governments to design and implement effective policies for governing LLMs quickly (Marchant, 2011). For example, regulatory bodies might struggle to enforce laws due to the lack of resources and relevant expertise, as has been the case with the enforcement of data protection laws (Jelinek & Wiewiórowski, 2022). However, despite these shortcomings in practice, government regulation has unique legitimacy and authority. Hence, it is worth asking how these shortcomings may be addressed. One approach to overcome these limitations could involve creating new institutions through legislative measures. For example, Tutt (2017) proposes an FDA-like body for algorithmic systems. There is indeed a growing trend towards the enactment of tailored AI laws, particularly led by the EU and China. These laws may encompass large models as a specific category, such as the provision of the EU AI Act concerning foundation models or general-purpose AI (GPAI) (Weatherbed, 2023). Investigating the impact of such laws on the development and application of LLMs is one clear direction for research; this could involve a comparative study of different national approaches to LLM regulation, and/or an analysis of the priorities, assumptions, and ideologies driving different AI regulations (Au, 2023). On the other hand, existing regulation could also be fruitfully applied to the governance of LLMs (Gaviria, 2022; Bhatnagar & Gajjar, 2024). This includes copyright and data privacy laws, speech laws, labor laws, advertising standards, tort, product liability, and more. However, it remains unclear to what extent current regulation is sufficient to mitigate risks. Existing regulators may also not have the necessary capacity, expertise, and regulatory authority to effectively regulate LLMs. Engler (2023) argue for expanding the powers of the existing regulatory bodies - in particular granting them the power to issue subpoenas for algorithmic investigations and for setting up rules for especially impactful algorithms. Further research here identifying opportunities and gaps would be valuable. This could include analyses of the current technical capacities of regulatory bodies as well as their resource allocation and knowledge acquisition strategies. Furthermore, the merits and demerits of various forms of public-private partnership proposals for addressing capacity issues of governments could be explored. This may include analysis of both traditional forms of public-private partnerships such as governments directly contracting the services of a private actor for various tasks, or novel proposals such as regulatory markets (Hadfield & Clark, 2023). Center for AI Safety et al. (2024) argue that current proposals for regulatory markets are flawed and do not adequately incentivize private regulators to prioritize safety. In general, while the private regulatory institutions may be more agile than government institutions and could help standardize regulation across jurisdictions, they may also increase the risk of regulatory capture, as private partners might have strong industry ties (e.g. financial interests in LLM development, deployment, or use Center for AI Safety et al., 2024); and such a set-up may also bypass government processes meant to protect against regulatory capture. Generally, there is a need to better understand the robustness and efficacy of different ways through which governments could outsource auditing and regulation to private actors (Costanza-Chock et al., 2022). There is also a need to identify measures that could be taken by governments and other public-interest institutions (e.g. universities, NGOs, policy think tanks) to create an ecosystem that incentivizes top talent to contribute to governance objectives - e.g. by working for government regulators, private auditors or other public-interest bodies - given the large compensation packages being offered by leading AI companies (Mann, 2023). ## 4.5.3 Incentivizing Cooperation And Disincentivizing High-Risk Approaches To Ai Development Responsible and safe AI can be seen as a collective action problem, creating an additional challenge for governance to confront (Askell et al., 2019). More precisely, while most AI practitioners might prefer a low-risk approach, competitive pressures (e.g. resulting from Safety-Performance Trade-offs, c.f. Section 2.7) might cause one or more organizations to take a higher-risk approach (e.g. a 'move fast and break things' approach Blodget, 2009). This could trigger a race to the bottom, progressively increasing competitive pressure on other organizations to adopt higher-risk approaches (Armstrong et al., 2016; Hendrycks et al., 2023). For these reasons, it is crucial to disincentivize high-risk approaches to AI development, deployment, and use, and to prevent such dangerous dynamics from materializing. Regulation and treaties could mandate safe development practices. In particular, specialized regulation could be designed with an explicit focus on the development of frontier AI (Anderljung & Korinek, 2024). *Outside-the-box* auditing covering organizations' development and deployment practices could help verify compliance (and identify risks that may have been overlooked) (Casper et al., 2024a). Technical researchers could develop better (game-theoretic) models of race dynamics whose analysis may yield insights about the relative effectiveness of various interventions that could be undertaken to disincentivize a race. Armstrong et al. (2016) provide a preliminary analysis of this kind, however, their analysis relies on various simplifying assumptions that may not be reflective of realworld dynamics. In general, technical researchers could work to identify safe development and deployment practices and to identify incentives and technical loopholes that might lead developers not to adopt them. They could also work with legal and business experts to evaluate the effectiveness of potential regulations by anticipating how developers might respond. ## 4.5.4 Corporate Power May Impede Effective Governance The increasing power and influence of large corporations may make effective governance difficult. There exists a power asymmetry between corporate entities profiting from LLMs and other social groups (e.g. civil society). State-of-the-art LLMs are developed by or in partnership with, some of the world's largest private tech companies. NVIDIA, which supplies most of the computing hardware for LLMs has recently seen its market capitalization increase to over $2 trillion, roughly a 5x increase in the past year. Lobbying efforts around LLMs have also increased dramatically in the recent past (Saran & Mattoo, 2022; Zakrzewski, 2022; Lindman et al., 2023). This poses a risk of governance protocols related to LLMs becoming excessively favorable to tech companies, potentially leading to regulatory capture at the cost of the interests of other societal groups, particularly marginalized communities who have historically been disproportionately affected by poorly designed AI technologies (Reventlow, 2021). Researchers working on LLM policy should aim to document and account for corporate influence, with a focus on identifying ways in which corporate interests might diverge from the public interest. At a more meta-level, it may be helpful to consider how corporate structures could be designed so that the interests of corporations extend beyond mere maximization of profits, and better align with the larger interests of the society. Researchers could design novel proposals in this regard, or evaluate the robustness of the existing proposals (e.g. Anthropic's Long Term Benefit Trust Anthropic, 2023c). This is of particular relevance given the recent power struggle between OpenAI board members, founders, and investors (Roose, 2023; Reich, 2023). In addition to their ability to influence governance through lobbying efforts, LLM developers may also directly influence public opinion through the design choices they make. For instance, how an LLM expresses itself and responds to queries, especially those with a political angle to them, can have a significant impact (Jakesch et al., 2023; Santurkar et al., 2023). This direct influence that technological companies can exert on public opinion has been recognized in prior work. Lessig (2006) discuss the power of technology or "code" itself as a governance mechanism, noting that it generally lacks democratic oversight and that much software used globally is built by US-based companies, often without regard to the local laws of its overseas markets (Sánchez-Monedero et al., 2020). To help improve democratic oversight, machine learning researchers could consider how such influence might be measured, detected, and counter-acted (O'Callaghan et al., 2015). There is also a need for collaboration among machine learning researchers, sociotechnical researchers, and civil servants to determine how to best protect government institutions such as democratic processes from such influence, e.g. to help establish standards defining the limits of legitimate influence. While the academic community can help counterbalance corporate influence over governance, the large majority of AI researchers receive or have received funding from big tech companies (Abdalla & Abdalla, 2021). This dependency on industrial funding may grow further due to the exorbitant costs of conducting LLM research. More research is needed to identify the influence of corporate power on academia, policy, and research. It is also important to consider governance proposals that may help the academic community preserve its independence, and generally increase the agency of academics to contribute effectively to safety and alignment research (Raji et al., 2023). The lack of academic consensus around AI risks and mitigations also hinders the academic community's ability to play such a role, rendering us unable to speak with one voice to provide clear guidance to policymakers or other elements of civil society. ## 4.5.5 Llms Require International Governance International governance will be critical for tackling factors such as competitive dynamics between AI companies. However, despite the fact that data-driven AI is a global and cross-border phenomenon, laws and regulators are predominantly national. As a result, issues of jurisdiction, applicable law, and regulatory arbitrage, which already complicate effective regulation of data privacy laws, will almost certainly affect AI and LLM regulation. A popular proposal in the literature is the establishment of international institutions (Ho et al., 2023; Trager et al., 2023; Maas & Villalobos, 2023). For example, Trager et al. (2023) propose the creation of an inter-state International AI Organisation to certify compliance by states to international oversight standards while supporting member states in meeting these standards through domestic regulatory capacities. The UK has promoted the development of global AI safety Institutes alongside a pledge to put safeguards on models posing systemic risks (of Participating Countries, 2023). As of early 2024, AI safety institutes are being established in the UK, US, Japan, and Singapore at the *national* level. These organizations could be a productive site of more international collaboration on AI governance. One other possible route to global harmonization of AI regulation is via the global diffusion of domestic regulation, such as the EU AI Act (Council of the European Union, 2024). The EU aspires to position itself as a global 'gold standard' (building on the example of GDPR), by requiring any company selling into the EU Single Market to follow its rules. Through the Brussels effect, it may be cheaper for companies that comply with EU requirements to comply with the same requirements even in other jurisdictions (Siegmann & Anderljung, 2022). However, conflict historically exists between the EU model of human rights-centric prescriptive risk-based regulation and the US's laissez-faire regulation of Silicon Valley and poor federal privacy protection (Solove & Schwartz, 2022). This may hold back any international harmonization of AI regulation (Kaminski, 2023; Smuha, 2021). However, there is some initial evidence that the US government may take a proactive approach to regulating AI (White House, 2023), suggesting that this conflict may not adversely impact the international governance of AI. Another important development here may be the finalization of the Council of Europe Convention on AI (Council of Europe, 2023) which like the European Convention on Human Rights and the Cybercrime Convention can be joined by non-European states, and so is potentially a global treaty. Alongside these developments, China has been promoting cooperation on international AI governance through initiatives such as the Global AI Governance Initiative (Ministry of Foreign Affairs, People's Republic of China, 2023). While this initiative has elements in common with both the EU approach and UK and US approaches, there exists a risk that multiple separate or overlapping international AI governance forums will emerge, complicating efforts to achieve global consensus on standards and safety. International governance may also be impeded by a trust deficit and an arms race between nations - particularly China and the U.S. (Meacham, 2023). Overcoming parochial concerns and assuring that governments can credibly cooperate on critical issues - such as limiting the use of AI in weapons (see Section 4.2.4) is perhaps the grand challenge in international governance of AI. Dialogue between the scientific communities could help pave the way for such cooperation (FAR AI, 2024; Maas & Villalobos, 2023). Specifically for the governance of AI in military applications, multi-country military alliances like NATO could play a critical role in ensuring responsible, and ideally highly restricted, use of AI in weapons (Stanley-Lockman & Trabucco, 2022). ## 4.5.6 Culpability Schemes Are Needed For Llm-Based Systems - Especially Llm-Agents Providing assurances about a system's safety and intended behavior inherently requires establishing clear accountability - explicitly assigning responsibility and culpability in cases where the system acts in undesirable or unintended ways. It is currently unclear who ought to be held responsible when a LLM system causes harm to its user or other humans. Users may deliberately misuse LLMs (see Section 4.2), but preventing such misuse may be intractable without interventions at the development or deployment stage (Anderljung & Hazell, 2023). Blumenthal & Hawley (2023) propose that companies be held liable for harms caused by "their models", however, Kapoor et al. (2024) observe that developers who open source their model would struggle to prevent misuse and attendant liability. Developers are often extremely well-resourced and can retain privileged access to their systems. Hence, they are technically best equipped to detect and mitigate safety issues, but it is unclear to what extent they are incentivized to disclose those issues, especially if disclosing them could harm their business interests. Importantly, it is not necessary to hold only a single actor (user, deployer or developer) responsible: different actors could be held responsible to varying degrees (Wex, 2023). Gaps in how to assign responsibility will likely become more acute as systems become increasingly agentic and autonomous (Buiten et al., 2023) (see Section 2.5), which can amplify the harm that they can cause (Chan et al., 2023b). Thus, there is a need for anticipatory governance to preemptively address the risks posed by LLMs, for example, by establishing regulatory criteria to mediate the deployment of LLM-agents. This will be helped by developing frameworks to monitor and evaluate deployed LLM-agents and designing mechanisms for ascertaining accountability in case of failures (Kampik et al., 2022; Chan et al., 2024). These questions are considered in greater detail in the recently released agenda on agent governance by Shavit et al. (2023). Once deployed, LLM-agents will interact among themselves and with humans - creating an additional source of risks (see Section 2.6). Normative infrastructure (e.g. bureacracies Bullock et al., 2022b) which governs interactions between humans - e.g. accounting of responsibility and blame - can break down with the introduction of algorithmic or machine-based decision-making. For example, high-frequency algorithmic traders are believed to have contributed to a number of flash crashes in stock markets (Tee & Ting, 2019), e.g. the flash crash of 2010 (CFTC and SEC, 2010) and the 2014 US Treasury market flash crash (Levine et al., 2017). Similar to algorithmic traders, LLM-agents will likely possess novel capabilities and affordances, such as higher processing and action speed, the capability to ingest large amounts of text rapidly, etc. that might disrupt normative infrastructures in undesirable ways. Further work is required to understand better the governance challenges posed by LLM-agents (Kolt, 2024), especially in multi-agent scenarios where they may interact and influence each other (Hammond et al., 2024). A better understanding of these challenges may help inform what technical and governance tools are necessary for effective governance of LLM-agents. ## Practical Challenges - Governance Mechanisms For Llms Are Underdeveloped In addition to the meta-challenges discussed above, a key challenge in exercising LLM governance is a lack of concrete, and complete, governance proposals (Guha et al., 2023). That is, most governance mechanisms for LLMs are underdeveloped and there is a high level of uncertainty around what the best governance interventions will be. To elaborate, governance can operate at different points in the LLM lifecycle, from development through to deployment and use, as well as on different substrates such as data (Jernite et al., 2022; Chan et al., 2022), compute (Hwang, 2018; Sastry et al., 2024), and energy (Monserrate, 2022). Governance interventions earlier in the lifecycle can create choke-points further on. For instance, if some development practice is banned so that some variety of LLM system is never developed, such a system could not be deployed or used (Anderljung & Hazell, 2023). On the other hand, interventions later in the lifecycle can be more targeted. This carries both advantages and disadvantages: more targeted interventions help limit negative side-effects of governance interventions (e.g. creating barriers to beneficial uses), but also run the risk of missing some pathways to harm. Fortunately, the different types of governance mechanisms we discuss are not mutually exclusive, and can likely be effectively combined to achieve better outcomes than using a single mechanism on its own. However, currently, almost all the governance mechanisms are underdeveloped and lack concrete proposals for operationalizing them. We provide some discussion in this regard below and highlight the relevant challenges. ## 4.5.7 Use-Based Governance May Be Insufficient One approach to governing LLMs is to set rules that limit how they can be used. Indeed, Hacker et al. (2023) argue that governance should focus on users and deployers, with a few key exceptions. Under such an approach, users could be held accountable for whatever harms their use of a system causes, and consumer protection law could be used to hold developers or deployers accountable if they fail to protect users. Particularly harmful use cases could be proscribed by designing explicit regulations. For example, explicit laws are being passed by multiple countries that criminalize the generation of harmful deepfake images by the use of generative image models (U.S. Congress, 2023; UK Parliament, 2023). However, enforcement of such regulations may be very difficult as a skilled malicious user could easily hide their identity by using anonymization schemes (Eurojust and Europol, 2019). Hence, it is questionable to what extent such regulations will be an effective deterrent for a highly skilled and determined malicious user. It is also unclear to what extent such an approach would be able to proactively identify misuses of technology, instead of acting retroactively once the harm has been done; as was observed to be the case for deepfakes (Burga, 2024). Currently, the EU AI Act governs use cases of AI systems, including LLMs, using a riskbased approach to classify different use cases and determine which rules apply (Council of the European Union, 2024). However, several questions arise with such an approach: What existing regulations - or, more generally, governance institutions - are relevant, and how should they be applied? How can we identify problematic new use cases and ensure they are addressed (c.f. Section 4.2)? Regulators might need fast-acting powers to intervene when new problematic uses are discovered. More generally, an important challenge for such an approach is how to address issues surrounding misuse, such as accountability, discussed in Section 4.2). We note that international agreements might also be required, as many forms of misuse (e.g. military use of LLM or censorship) are likely to be perpetrated by governments. Use-based governance may also be limited in ways it can prevent instances of self-harm (Xiang, 2023b). ## 4.5.8 Deployment Governance Lacks Adequate Regulation Deployment methodology significantly impacts the potential risks associated with an LLM. Developing regulations, both soft and hard, to govern deployment can not only be effective but may also be necessary to assure safe and beneficial deployment of LLM-based systems. Deployment governance may include *pre-deployment* governance and *lifetime* governance. The pre-deployment governance is concerned with regulating how a model gets deployed. In the predeployment governance, a basic challenge is to evaluate the trade-offs of different forms of deployment (Solaiman, 2023). The common forms of deployment include making the model available to download or making the model available via some limited form of API access (including web-interfaces).17 In particular, there is an ongoing debate around downloadable or so-called 'open-source' model deployments.18 Shevlane (2022) argue for the benefits of 'structured access', i.e. controlling how users interact with a system, which API deployment enables. Seger et al. (2023) argue that LLMs may soon be too dangerous to open-source (at least initially). On the other hand, Kapoor et al. (2024) argue that governments should fund research into the marginal risk from open source models (over currently available tools, such as web search), and consider further interventions only once risks are more certain. They also express concern that many forms of regulation might impose infeasible compliance burdens on developers of open-source models. Advancing this debate is critically important, given the irreversibility of open-sourcing models. The risks of deployment (even for a closed-source model) are also mediated by the intended use-case (see Section 4.5.7), who the model is being made available to and especially by the level of autonomy afforded to the model (Weidinger et al., 2023b). The models that are made available to younger audiences (Fowler, 2023), or deployed autonomously ('LLM-agents') may require similarly higher levels of assurance (Chan et al., 2023b; Shavit et al., 2023). Regardless of how models are deployed, deployers could take some responsibility for ensuring LLMs are trustworthy and do not cause harm. For instance, they might be made responsible for communicating information about limitations from developers to users, or even be required to perform some evaluations or collect other information about a system before agreeing to deploy it. Third-party licensing, or registrations, could be mandated to ensure that unsafe technologies do not get deployed or become widely available. While thoughtful development can reduce the risks associated with an LLM, it may not eliminate them. Lifetime governance is required to ensure that throughout their deployment, LLM-based systems remain safe. This includes assuring that the systems are monitored in a robust way and clear action plans are in place to deal with cases when a novel failure mode, or a new source of risk, is discovered (Chan et al., 2024). One challenge requiring technical research in this regard is: how to re-establish assurance after system updates to the LLM, or some other component of an LLM-based system, during the system's lifetime. Ideally, the cost of assurance in such a case could be reduced relative to assuring a brand-new system. Another aspect of the challenge is how to deal with downstream systems using an LLM as a 'dependency'. Deployers could help ensure users and developers share an awareness of how such updates might lead to new safety risks. 17In practice, API deployers are often going to be large-scale compute providers; see Section 4.5.11 for more on compute governance. 18There is also controversy and confusion around the term 'open-source' as applied to the practice of *only* releasing model weights (Widder et al., 2023; Seger et al., 2023; Solaiman, 2023). ## 4.5.9 Development Governance Might Be Particularly Challenging To Codify And Enforce Most technical work in AI safety and alignment is focused on development methods. While this work is currently not mature enough to offer reliable recipes for safe and aligned LLMs, it can already contribute to best practices that could be enshrined e.g. as standards or through regulation (UK Government, 2023). Such practices could take the form of rules around the sorts of data or algorithms to use (or not use), as well as rules around evaluations or other assurance practices to be applied before deployment (Schuett et al., 2023). These rules could be enforced on a per-project basis through mechanisms similar to those in White House (2023), provided governments are aware of development projects. Others have proposed licensing developers (Smith, 2023; Anderljung & Korinek, 2024), although critics argue this might lead to regulatory capture, and stifle innovation and open-source development (Thierer, 2023; Howard, 2023). Besides development methods, best practices for development should also consider 'meta-practices' such as processes governing internal decision-making and practices around disclosure of development activities (Weidinger et al., 2023b; Ojewale et al., 2024; Casper et al., 2024a). To the extent that safety and alignment can be guaranteed through following best practices in development, mandating such practices could be an appealing approach to governance. However, the efficacy of development governance would likely depend on achieving a high level of buyin from most - if not all - leading developers (see Section 4.5.3). Moreover, given the falling cost of compute, more and more developers (including those not "in the lead") may need to be governed, creating a growing enforcement challenge. As most of the knowledge about sound developmental practices for the responsible development of LLMs is currently locked within AI companies, regulators, and standard-setting organizations may be highly dependent on LLM developers sharing this knowledge to create high-quality standards. Furthermore, a lack of buy-in from developers may result in regulatory flight (i.e. developers moving their operations to other jurisdictions with less regulatory pressure) or developers circumventing the prescribed external standards, e.g. by hiding parts of developmental details that may be misaligned with the prescribed standards. Such evasions may be particularly hard to detect and prevent, given the current lack of technical tools for determining whether inappropriate development activities are occurring (Shavit, 2023). At the same time, regulatory flight may be less of a concern for large markets like the US or the EU. An alternative to externally imposed regulations could be to prompt companies to propose their own developmental standards that they could then be beholden to, once ratified by an external party (e.g. a government regulator). However, it is important to not rely on voluntary compliance alone and to ensure that such standards are appropriately codified and made legally binding (Ó hÉigeartaigh et al., 2023). An example of such developmental standards are the 'responsible scaling policies' published by various AI companies (Anthropic, 2023f; OpenAI, 2023a; Google DeepMind, 2023), however, these policies are completely voluntary and due to the lack of third-party analysis of these policies, it is unclear to what extent these policies embody desirable standards of safety. ## 4.5.10 Llms Pose Additional Challenges For Data Governance Data is a basic ingredient for LLM development. This makes data governance a promising vehicle to govern and regulate LLMs. Data could serve as a choke-point and help prevent the development of unsafe LLM systems; for instance, training on certain kinds of data (e.g. biological data) could be prohibited or regulated stringently. In the pre-LLM age, the central focus of data governance has been the protection of an individual's right to privacy (Solove, 2022). LLMs add an additional dimension to this as LLMs can memorize and leak personally identifiable information (PII) (Tirumala et al., 2022; Nasr et al., 2023). However, it is also important that the scope of data governance is expanded to consider other *data rights* issues that have to come to the fore due to the development of LLMs (Roberts & Montoya, 2022). One major objective for data governance is establishing, and defending, the rights of data creators (e.g. writers) and the rights of data workers (e.g. workers hired to generate data for LLM training). A popular proposal for this is the establishment of accountable organizations, such as data trusts (Jernite et al., 2022; Chan et al., 2022), to be custodians of any data submitted to them. However, implementing such a solution requires overcoming several technical and social challenges. On the technical side, the key problems are establishing provenance of the data already present on the internet (Lee et al., 2023b), verifying that a particular model was trained on the dataset it is claimed to have been trained on (Choi et al., 2023a; Garg et al., 2023), and establishing relative value of different data points present within a dataset (Guu et al., 2023). On the social side, implementing such solutions would require strong political will, extensive international cooperation, and sufficient funding to implement the technical infrastructure needed for data trusts (Chan et al., 2022). Furthermore, it is unclear as to who should own the data *created* by an LLM - e.g. the developer, the user, or no one (Henderson et al., 2023b)? This question is coupled with the questions of responsibility and profitability: who is responsible if an LLM generates output that is harmful or unsafe in other ways (Henderson et al., 2023a) (also see Section 4.5.6)? How does this responsibility change as we move from chat-based models to agents (Schwartz & Rogers, 2022)? This is arguably a fundamental question in regards to data economy and may have long-reaching repercussions in an AI-based creative economy (Knibbs, 2023). ## 4.5.11 Robustness Of Compute Governance Is Unclear Compute plays a critical role in the development of LLMs (Sevilla et al., 2022), with compute costs for development rising into the hundreds of millions (Knight, 2023) and likely soon billions of dollars. Compute governance may provide one of the most promising levers for governing bodies to modulate the rate of progress within the technical AI field (Whittlestone & Clark, 2021; Sastry et al., 2024). This may be particularly important for the safety risks associated with advanced capabilities which compute-heavy frontier models are likely to obtain first (Pilz et al., 2023). Relative to other governance mechanisms that governments could use to regulate AI and LLM development, compute governance also has the advantage of potentially easier compliance verification (Brundage et al., 2020; Baker, 2023), especially given the major intermediary role of compute providers (Heim et al., 2024). However, further work is needed to understand and refine existing proposals for compute governance (Shavit, 2023; Choi et al., 2023a; Egan & Heim, 2023; Sastry et al., 2024; Heim et al., 2024), in addition to managing risks such as privacy and concentration of power (Sastry et al., 2024). Technical researchers could collaborate with hardware and supply-chain experts to stress-test proposals, e.g. by identifying potential loopholes by which projects might escape scrutiny (such as distributed training on consumer hardware Douillard et al., 2023), or otherwise thwart effective oversight (such as by disguising computations). Compute governance proposals could also be strengthened by developing a better understanding of the interplay between hardware and software (Mince et al., 2024). Future research on compute governance could explore potential developments that may affect the effectiveness of compute governance, such as changes in the structure of the compute-providing industry (Anderljung & Carlier, 2021) or the diffusion of AI capabilities (Pilz et al., 2023). Compute governance could also be leveraged by governments to enhance the ability of the independent scientific and academic community. In addition to any direct benefits in terms of improved understanding, and auditing, of LLMs, this may help mediate the effects of corporate power (see Section 4.5.4). Effective governance of LLMs is critical for ensuring that LLMs prove a beneficial addition to societies. However, efforts to govern LLMs, and related AI technologies, remain nascent and ill-formed. ![94_image_0.png](94_image_0.png) The governance of LLMs is made challenging by various meta-challenges ranging from lack of scientific understanding and technical tools required for governance to the risks of regulatory capture by corporations. From a more practical lens, concrete and comprehensive proposals to govern LLMs remain absent and, unfortunately, the various governance mechanisms (e.g. deployment governance, development governance, compute governance, data governance) are not adequately developed yet. 181. How should governance approaches change depending on how rapidly the capabilities of models are advancing, the rate at which they are being productized (and hence proliferating throughout society), and the degree to which we lack technical understanding of a particular ![94_image_2.png](94_image_2.png) 182. What policy interventions can be taken by governing bodies to support research on alleviating the technical limitations inhibiting effective governance of LLM-based systems? ←- 183. Should governing bodies aim to moderate the pace of progress in AI? If so, what governance 184. How might the slow, bureaucratic nature of governments negatively impact the governance of LLMs? What are the relative merits and demerits of various measures (such as forming public-private partnerships or formalizing regulatory markets) that could be taken by the ![94_image_1.png](94_image_1.png) 185. What measures can be taken to disincentivize irresponsible approaches to AI development? Can we design regulations that mandate the safe and responsible development of AI models? ←- 186. How can we better understand AI race dynamics, e.g. using game-theoretic models or historical analogues? And what governance interventions might be used to alter these dynamics? ←- 187. How can we involve and empower more stakeholders in LLM governance, particularly marginalized groups most impacted by LLMs? How can we avoid legislation that disproportionately favors the interests of corporate LLM developers over the interests of other social groups? ←- 188. Can we develop structures for corporate governance that might protect public interests in a better way? ←- 189. Can we develop technical tools that may help measure, detect, and counteract the role of technology companies in shaping public opinion, by influencing the content consumed by the public? ←- 190. Can we develop a better understanding of the influence of corporate power on academia, policy, and research? What are the potential detrimental effects of such influence? ←- 191. Can we develop a better understanding of the factors (arms race between nations, different national-level approaches to AI regulation) that might negatively impact international governance for LLMs? ←- 192. What are the different ways through which LLMs could be governed in a unified way internationally? ←- 193. How can clear lines of accountability be established for harms associated with LLMs? ←- 194. How can governance tools be used to mitigate the risks associated with LLM-agents and their interactions with humans and other systems? ←- 195. What are the relative merits and demerits of different governance mechanisms, such as usebased governance, deployment governance, developmental governance, data governance and compute governance? How can we effectively combine all the governance mechanisms to achieve the most favorable outcomes? ←-←-←-←-←- 196. What existing regulations and governance institutions can be applied for use-based governance, at national and international level? ←- 197. How can we proactively identify problematic uses and address them via use-based governance? ←- 198. How can use-based governance help deter misuses that are likely to be perpetrated by governments? Can it be effectively used to regulate against instances of self-harm? ←- 199. Can we develop a better understanding of the risks, and benefits, associated with various model deployment strategies? ←- 200. What kind of regulations can be adopted to ensure LLM deployers perform their due diligence in assuring system safety before and throughout deployment? ←- 201. How can we create appropriate legal frameworks for deployment governance of LLM-agents? These frameworks would need to address the regulatory criteria for deploying LLM-agents, how such agents should be monitored after deployment, and who would be accountable for any harm incurred by deployed agents. ←- 202. How can we efficiently assure an LLM-based system after a system upgrade to the LLM, or some other component of LLM-based system? ←- 203. What are the merits and demerits of requiring deployers to seek licenses, or register, with regulators prior to the release of the model? Should the requirements that deployers have to meet be different for different deployment strategies? ←- 204. How can developers best identify and share knowledge about responsible LLM (and AI) development among themselves? How can such practices be enshrined as legally binding standards? ←- 205. What are the merits and demerits of mandating licensing for the development of frontier AI technologies? ←- 206. Can we develop technical tools that may help us verify whether particular developmental practices were followed or not in the development of a given model? ←- 207. To what extent is regulatory flight likely to impede the effective governance of LLMs? ←- 208. What are the merits and demerits of 'responsible scaling policies' issued by different LLM developers? ←- 209. How can we establish and defend the rights of data creators (e.g. writers) and the rights of data workers (e.g. workers hired to generate data for LLM training)? Are data trusts (Chan et al., 2022) a practical solution in this regard? ←- 210. How can we verify that a particular model was indeed trained exclusively on the data claimed as training data by the model creator? ←- 211. Who owns the data created by an LLM? This is arguably one of the most critical questions in governance with downstream impact on other important questions of who bears responsibility for LLM outputs that cause harm to society, and who can profit from the LLM outputs. ←- 212. Can we develop concrete proposals for how compute governance could be exercised in practice? ←- 213. To what extent are current, and any future, proposals for compute governance robust to advances in distributed training? ←- 214. How will compute governance proposals be impacted by the changes in the structure of the compute-providing industry? ←- 215. How can compute governance be leveraged to enhance the ability of the independent scientific community to conduct investigations into flaws in LLMs that could otherwise be overlooked?←- ## 5 Discussion 5.1 Limitations This agenda is the most expansive discussion on the challenges in assuring the safety and alignment of LLMbased systems to date. However, despite this, we assert that this agenda is not **exhaustive and** that there exist important challenges, both known and unknown, in assuring the safety and alignment of LLM-based systems that are not cataloged in this work. We have attempted to 'future-proof' our work by trying to list challenges that might arise due to LLMs becoming more performant due to scaling or modifications of the training process, however, due to the uncertain nature of the LLM development landscape, it is possible that important challenges may have been omitted. In particular, we have primarily focused on challenges in the LLM safety and alignment that are imminent, and relatively undisputed, and hence, we do not cover speculative challenges; see Hendrycks et al. (2023) and Critch & Krueger (2020) for discussion on such challenges. Another key limitation of this work is the exclusive focus on the safety and alignment of LLM-based systems. The choice to focus on the safety and alignment of LLMs was made due to the surging research interest in LLMs, their rapid productization, and their central position in the age of foundation models. We assert that the safety of other deep learning-based systems (e.g. generative models for vision, generative models for biology, recommender systems, and learning-based embodied agents) is also highly important and we call on the wider community to organize similar efforts to catalog challenges involved in the safety of such non-LLM-based systems. Relatedly, due to the limited scope of our work, we have intentionally omitted many important research directions pertaining to the safe development and deployment of aligned AI-based systems, such as improving the general understanding of deep learning (Arora et al., 2020), understanding critical aspects of agency ('agent foundations' Soares & Fallenstein, 2014), and developing AI systems whose safety can be proved or verified (Brundage et al., 2020; Tegmark & Omohundro, 2023), cooperative AI (Dafoe et al., 2020) etc. Another dimension of our limited coverage is that our major focus is on technical challenges - 13 out of 18 challenges we identify are fully technical in nature. Our discussion of sociotechnical challenges is further limited in the sense that it is narrowly focused on challenges that are directly relevant to LLM-based systems and is biased towards aspects of these challenges that could potentially be addressed via research. We have also discussed sociotechnical challenges in a dedicated section as we prioritized modularity to make the work easier to navigate for a wide audience. However, this view oversimplifies the interconnectedness between the technical challenges and the broader sociotechnical challenges involved. An alternative treatment focused on highlighting this interconnected nature of safety challenges would see sociotechnical issues spread throughout every aspect of work on LLMs. Additionally, while we have made our best effort to avoid any geographic bias - in some sections, particularly Section 4.5, the discussion is biased towards few geographies, specifically, US, Europe and China. The nature of the challenges posed in this work may change over time. Some challenges may get sidestepped or solved as a side-effect of advances focused on improving LLM performance (e.g. scaling). For some other challenges, their form may evolve, causing the identified corresponding research directions to become outdated. Most importantly, advances in LLM development may uncover novel challenges or make some of the existing challenges much more critical to address. ## 5.2 Prior Work This work comes on the back of several works focused on highlighting harms, risks, and various other societal challenges posed by LLMs, and other advanced AI systems they may give way to (Bengio et al., 2023). These risks have been recognized by various governing bodies around the world, e.g. United States government (hou, 2023), United Kingdom government (Office, 2023), and the United Nations (Nichols, 2023). Among scholarly work, Weidinger et al. (2022) and Shelby et al. (2022) review and taxonomize various harms and risks posed by LLMs and other AI systems. Shevlane et al. (2023) propose evaluating LLMs for 'dangerous' capabilities that may pose extreme risks. To improve risk assessment, Weidinger et al. (2023b) propose a framework for sociotechnical evaluation of LLMs and other generative systems. In a similar vein, Solaiman et al. (2023) call for evaluating AI systems, including LLMs, for social impact. Other works examine the possible societal impacts of LLMs - Eloundou et al. (2023) review possible disruptions to job market that might be caused by LLMs and Brundage et al. (2018) highlight ways in which AI systems may be misused by malicious actors. There additionally exist works that focus on discussing societal-scale harms that may occur if a misaligned competent AI system is allowed to act in an unsafe way (Critch & Krueger, 2020; Hendrycks et al., 2023; Critch & Russell, 2023). Our work is complementary to all the aforementioned work as we focus on listing technical and sociotechnical challenges that need to be addressed to overcome these challenges. Several prior works have attempted to identify critical open problems and outline research directions, for the development of safe and aligned AI systems. Kenton et al. (2021) is perhaps the closest work to ours in scope but contains a much narrower discussion regarding the safety of LLM-based systems. Similarly, there exist public agendas by leading LLM companies that outline their approach to safety and alignment of LLM-based systems (Leike et al., 2022; Anthropic, 2023a). However, these agendas lack diversity and are primarily focused on the research directions being championed by the corresponding company. In contrast, our work boasts a diverse academic authors lineup and platforms diverse research directions. Amodei et al. (2016) and Hendrycks et al. (2021b) share a similar goal to ours of highlighting important challenges that require to be addressed for safe AI; however, they lack explicit focus on LLMs. Other similar efforts include Dafoe et al. (2020) and Ecoffet et al. (2020), which respectively consider alignment and safety of multi-agent and open-ended systems. Other works have argued for specific approaches for the development of safe AI systems; Leike et al. (2018) argue for scalable reward modeling to align advanced AI systems, Tegmark & Omohundro (2023) argue for distilling learned logic of AI systems into code which can be formally verified, and Brundage et al. (2020) call for designing institutional, software and hardware infrastructure to support verifiability of the claims made about AI systems. Dafoe (2018) and Shavit et al. (2023) are agendas focused on governance aspects of AI systems. There also exist several other agendas focused on the safety of AI-based systems with varying levels of relevance to the safety and alignment of LLM-based systems (Russell et al., 2015; Soares & Fallenstein, 2014; Henderson et al., 2018; Dinan et al., 2021; Gruetzemacher et al., 2021). Also related to our work are studies such as Gabriel (2020) and Prabhakaran et al. (2022a), which consider the question of what the alignment target ought to be for general-purpose AI systems like LLMs. Due to the focus on LLMs, our work is also related to other agendas and surveys on LLMs. The notable agendas include Kaddour et al. (2023) and Huyen (2023), which review challenges in LLM research and applications of LLMs in general, without any explicit focus on safety or alignment. Among surveys, Bowman (2023) is a short survey that provides an opinionated review of key facts about LLMs' development. Zhao et al. (2023) comprehensively review the various facets of LLM development and their utilization, including techniques used to promote safety and alignment. Ji et al. (2023a) provide a review of alignment techniques in the context of foundation models. There also exist several surveys on specific aspects of LLMs that are covered in this work; Dong et al. (2022) survey the literature on in-context learning, Huang & Chang (2022) provide extensive discussion on reasoning capabilities of LLMs, Mozes et al. (2023) review security related issues of LLMs, and Casper et al. (2023a) survey limitations of reinforcement learning from human feedback for safety finetuning of LLMs and the associated open problems. In the aftermath of the unexpected success of LLMs, there has been a growing sense that 'impactful' machine learning and natural language processing research requires tremendous resources and thus is no longer viable for academic researchers. This work is partially inspired as a rebuttal to that perspective and posits that alignment and safety are ripe fields for contributions by academic researchers. Other related efforts include Saphra et al. (2023) and Ignat et al. (2023). Saphra et al. use historical analogies to argue that current disparities between academic and industrial labs regarding the scale of resources are temporary and argue that evaluations and data are still the primary bottlenecks. Ignat et al. similarly rebut the perspective that NLP research is no longer amenable to academic research by highlighting various under-researched research areas within NLP and other related fields. ## References Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, 2023. https://www.whitehouse.gov/briefing-r oom/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-vol untary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks -posed-by-ai/. Accessed on: December 6, 2023. Emmanuel Abbe, Enric Boix-Adsera, Matthew S Brennan, Guy Bresler, and Dheeraj Nagaraj. The staircase property: How hierarchical structure can guide deep learning. *Advances in Neural Information Processing* Systems, 34:26989–27002, 2021. Mohamed Abdalla and Moustafa Abdalla. The grey hoodie project: Big tobacco, big tech, and the threat on academic integrity. In *Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 287–297, 2021. Kevin MacG Adams, Patrick T Hester, Joseph M Bradley, Thomas J Meyers, and Charles B Keating. Systems theory as the foundation for understanding systems. *Systems Engineering*, 17(1):112–123, 2014. Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. Sanity Checks for Saliency Maps. In *Neural Information Processing Systems*, 2018. URL https://api.semant icscholar.org/CorpusID:52938797. Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. Debugging Tests for Model Explanations. ArXiv, abs/2011.05429, 2020. URL https://api.semanticscholar.org/CorpusID:226299635. Chirag Agarwal, Daniel D'souza, and Sara Hooker. Estimating example difficulty using variance of gradients. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10368– 10378, 2022. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34, 2021. Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D Co-Reyes, Eric Chu, et al. Many-shot in-context learning. arXiv preprint arXiv:2404.11018, 2024. Ehsan Aghaei, Xi Niu, Waseem Shadid, and Ehab Al-Shaer. SecureBERT: A Domain-Specific Language Model for Cybersecurity. In *International Conference on Security and Privacy in Communication Systems*, pp. 39–56. Springer, 2022. Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, and Victor Crespo. Attributions toward artificial agents in a modified moral turing test. *Scientific Reports*, 14(1):8458, Apr 2024. ISSN 2045-2322. doi: 10.1038/s41598-024-58087-7. Gati Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans. *arXiv preprint arXiv:2208.10264*, 2022. Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David R Mortensen, Noah A Smith, and Yulia Tsvetkov. Do all languages cost the same? Tokenization in the era of commercial language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023. Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. *arXiv preprint arXiv:2306.00297*, 2023. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, ..., and Andy Zeng. Do As I Can and Not As I Say: Grounding Language in Robotic Affordances. In *arXiv preprint arXiv:2204.01691*, 2022. Michael Ahn, Debidatta Dwibedi, Chelsea Finn, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Karol Hausman, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Sean Kirmani, Isabel Leal, Edward Lee, Sergey Levine, Yao Lu, Sharath Maddineni, Kanishka Rao, Dorsa Sadigh, Pannag Sanketi, ..., and Zhuo Xu. AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents, 2024. AI-HLEG, High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. Technical report, European Commission, 2019. https://www.aepd.es/sites/default/files/2019-12/ai -ethics-guidelines.pdf. Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with Large Language Models. *arXiv:2305.16867*, may 2023. doi: 10.48550/arxiv.2305.16 867. Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. Towards tracing knowledge in language models back to the training data. In *Findings of the Association* for Computational Linguistics: EMNLP 2022, pp. 2429–2446, 2022a. Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? Investigations with linear models. *arXiv preprint arXiv:2211.15661*, 2022b. Ibrahim Alabdulmohsin, Vinh Q Tran, and Mostafa Dehghani. Fractal Patterns May Unravel the Intelligence in Next-Token Prediction. *arXiv preprint arXiv:2402.01825*, 2024. Ibrahim M Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. *Advances in Neural Information Processing Systems*, 35:22300–22312, 2022. Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. *arXiv*, nov 2018. http://arxiv.org/abs/1610.01644. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022. Shun-Ichi Amari. A universal theorem on learning curves. *Neural networks*, 6(2):161–166, 1993. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. *arXiv preprint arXiv:1606.06565*, 2016. Markus Anderljung and Alexis Carlier. Some AI Governance Research Ideas, 2021. https://docs.google. com/document/d/13LJhP3ksrcEBKxYFG5GkJaC2UoxHKUYAHCRdRlpePEc/edit. Accessed on: January 30, 2024. Markus Anderljung and Julian Hazell. Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted? *arXiv preprint arXiv:2303.09377*, 2023. Markus Anderljung and Anton Korinek. Frontier AI Regulation: Safeguards Amid Rapid Progress. Lawfare, January 2024. URL https://www.lawfaremedia.org/article/frontier-ai-regulation-safeguard s-amid-rapid-progress. Jacob Andreas. Language models as agent models. *arXiv preprint arXiv:2212.01681*, 2022. Dana Angluin, David Chiang, and Andy Yang. Masked Hard-Attention Transformers and Boolean RASP Recognize Exactly the Star-Free Languages. *arXiv preprint arXiv:2310.13897*, 2023. Cem Anil, Esin Durmus, Mrinank Sharma, Joe Benton, Sandipan Kundu, Joshua Batson, Nina Rimsky, Meg Tong, Jesse Mu, Daniel Ford, Francesco Mosconi, Rajashree Agrawal, Rylan Schaeffer, Naomi Bashkansky, Samuel Svenningsen, Mike Lambert, Ansh Radhakrishnan, Carson Denison, Evan J Hubinger, Yuntao Bai, Trenton Bricken, Timothy Maxwell, Nicholas Schiefer, Jamie Sully, Alex Tamkin, Tamera Lanham, Karina Nguyen, Tomasz Korbak, Jared Kaplan, Deep Ganguli, Samuel R. Bowman, Ethan Perez, Roger Grosse, and David Duvenaud. Many-shot Jailbreaking. *Preprint*, 2024. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, ..., and Yonghui Wu. PaLM 2 Technical Report, 2023. Anthropic. Core Views on AI Safety: When, Why, What, and How, 2023a. https://www.anthropic.com/ index/core-views-on-ai-safety. Anthropic. Long context prompting for Claude 2.1, 2023b. https://www.anthropic.com/index/claude-2 -1-prompting. Anthropic. The Long-Term Benefit Trust, 2023c. https://www.anthropic.com/news/the-long-term-b enefit-trust. Anthropic. Introducing Claude, 2023d. https://www.anthropic.com/index/introducing-claude. Accessed on: June 21, 2023. Anthropic. Model Card and Evaluations for Claude Models, 2023e. https://www-files.anthropic.com/p roduction/images/Model-Card-Claude-2.pdf. Anthropic. Anthropic's Responsible Scaling Policy: Version 1.0, 2023f. https://www-cdn.anthropic.com /1adf000c8f675958c2ee23805d91aaade1cd4613/responsible-scaling-policy.pdf. Omer Antverg and Yonatan Belinkov. On the Pitfalls of Analyzing Individual Neurons in Language Models. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum ?id=8uz0EWPQIMu. ARC. ARC Evals, 2022. URL https://evals.alignment.org/. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv* preprint arXiv:1907.02893, 2019. Stuart Armstrong, Nick Bostrom, and Carl Shulman. Racing to the precipice: a model of artificial intelligence development. *AI & society*, 31:201–206, 2016. Arnav Arora, Lucie-Aimée Kaffee, and Isabelle Augenstein. Probing pre-trained language models for crosscultural differences in values. *arXiv preprint arXiv:2203.13722*, 2022. Raman Arora, Sanjeev Arora, Joan Bruna, Nadav Cohen, Simon Du, Rong Ge, Suriya Gunasekar, Chi Jin, Jason Lee, Tengyu Ma, et al. Theory of deep learning, 2020. Sanjeev Arora and Anirudh Goyal. A theory for emergence of complex skills in language models. *arXiv* preprint arXiv:2307.15936, 2023. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In *International Conference on Machine Learning*, pp. 254–263. PMLR, 2018. Lora Aroyo, Alex Taylor, Mark Diaz, Christopher Homan, Alicia Parrish, Gregory Serapio-García, Vinodkumar Prabhakaran, and Ding Wang. DICES dataset: Diversity in conversational ai evaluation for safety. Advances in Neural Information Processing Systems, 36, 2024. Rob Ashmore, Radu Calinescu, and Colin Paterson. Assuring the machine learning lifecycle: Desiderata, methods, and challenges. *ACM Computing Surveys (CSUR)*, 54(5):1–39, 2021. Amanda Askell, Miles Brundage, and Gillian Hadfield. The Role of Cooperation in Responsible AI Development, jul 2019. URL http://arxiv.org/abs/1907.04534. arXiv:1907.04534 [cs]. Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021. Stephanie Assad, Robert Clark, Daniel Ershov, and Lei Xu. Algorithmic Pricing and Competition: Empirical Evidence from the German Retail Gasoline Market. Working Paper 1438, Economics Department, Queen's University, Aug 2020. URL https://ideas.repec.org/p/qed/wpaper/1438.html. Mohammad Atari, Mona J Xue, Peter S Park, Damián Blasi, and Joseph Henrich. Which humans? 2023. Adam Au. China vs. US Approaches to AI Governance. *The Diplomat*, 2023. https://thediplomat.com/ 2023/10/china-vs-us-approaches-to-ai-governance/. Accessed on: 1 February, 2024. Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, and Alon Halevy. Factuality challenges in the era of large language models. *arXiv preprint arXiv:2310.05189*, 2023. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. *arXiv* preprint arXiv:2108.07732, 2021. David H Autor. Why are there still so many jobs? The history and future of workplace automation. *Journal* of economic perspectives, 29(3):3–30, 2015. David H Autor. Work of the Past, Work of the Future. In *AEA Papers and Proceedings*, volume 109, pp. 1–32. American Economic Association, 2019. Amos Azaria and Tom Mitchell. The Internal State of an LLM Knows When It's Lying, oct 2023. URL http://arxiv.org/abs/2304.13734. arXiv:2304.13734 [cs]. Sahar Aziz. A domestic terror law could quash political dissent in the US. *Al Jazeera*, 2020. https: //www.aljazeera.com/opinions/2020/6/13/a-domestic-terror-law-could-quash-political-dis sent-in-the-us. Accessed on: 9 March, 2024. Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If Influence Functions are the Answer, Then What is the Question? *Advances in Neural Information Processing Systems*, 35:17953– 17967, 2022. Eugene Bagdasaryan and Vitaly Shmatikov. Spinning language models: Risks of propaganda-as-a-service and countermeasures. In *2022 IEEE Symposium on Security and Privacy (SP)*, pp. 769–786. Ieee, 2022. Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, and Vitaly Shmatikov. (Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs. *arXiv preprint arXiv:2307.10490*, 2023. Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. Explaining neural scaling laws. *arXiv preprint arXiv:2102.06701*, 2021. Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. *arXiv preprint arXiv:2306.04637*, 2023a. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *arXiv preprint arXiv:2204.05862*, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from AI feedback. *arXiv preprint arXiv:2212.08073*, 2022b. Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. Benchmarking Foundation Models with Language-Model-as-an-Examiner. *arXiv* preprint arXiv:2306.04181, 2023b. Luke Bailey, Euan Ong, Stuart Russell, and Scott Emmons. Image Hijacking: Adversarial Images can Control Generative Models at Runtime. *arXiv preprint arXiv:2309.00236*, 2023. Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent Tool Use From Multi-Agent Autocurricula. *arXiv:1909.07528*, sep 2019. doi: 10.48550/arxiv.1 909.07528. Mauricio Baker. Nuclear Arms Control Verification and Lessons for AI Treaties. arXiv preprint arXiv:2304.04123, 2023. Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, et al. Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems, 35:38176–38189, 2022. Yamini Bansal, Preetum Nakkiran, and Boaz Barak. Revisiting model stitching to compare neural representations. *Advances in neural information processing systems*, 34:225–236, 2021. Qiming Bao, Gaël Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, and Jiamou Liu. A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks. *arXiv preprint arXiv:2310.09430*, 2023. Boaz Barak. Emergent abilities and grokking: Fundamental, Mirage, or both?, 2023. https://windowsont heory.org/2023/12/22/emergent-abilities-and-grokking-fundamental-mirage-or-both/. Boaz Barak, Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden progress in deep learning: SGD learns parities near the computational limit. Advances in Neural Information Processing Systems, 35:21750–21764, 2022. Nathan Barnard and Erin Robertson. AI Governance and Strategy: A List of Research Agendas and Work That Could Be Done. *Less Wrong*, 2024. https://www.lesswrong.com/posts/Zn73PkYWGKYjLiBAf/. Beth Barnes, Paul Christiano, William Saunders, Joe Collman, Mark Xu, Chris Painter, Mihnea Maftei, and Ronny Fernandez. Debate update: Obfuscated arguments problem. *AI Alignment Forum*, 2020. https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debateupdate-obfuscated-argumen ts-problem. Accessed on: 3 January, 2024. Craig Barratt and Stephen Boyd. Example of exact trade-offs in linear controller design. *IEEE Control* Systems Magazine, 9(1):46–52, 1989. doi: 10.1109/37.16750. Anthony Barrett, Jessica Newman, and Brandie Nonnecke. Policy Brief on AI Risk Management Standards for General-Purpose AI Systems (GPAIS) and Foundation Models, 2023. https://cltc.berkeley.edu/ publication/policy-brief-on-ai-risk-management-standards-for-general-purpose-ai-syste ms-gpais-and-foundation-models/. Accessed on: January 8, 2024. George Baryannis, Samir Dani, and Grigoris Antoniou. Predicting supply chain risks using machine learning: The trade-off between performance and interpretability. *Future Generation Computer Systems*, 101:993– 1004, 2019. ISSN 0167-739x. doi: https://doi.org/10.1016/j.future.2019.07.059. URL https://www.scie ncedirect.com/science/article/pii/S0167739X19308003. Francis M. Bator. The Anatomy of Market Failure. *The Quarterly Journal of Economics*, 72(3):351–379, 1958. ISSN 0033-5533. doi: 10.2307/1882231. URL https://www.jstor.org/stable/1882231. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. *Proceedings of the National Academy of* Sciences, 117(48):30071–30078, 2020. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. Introducing our Multimodal Models, 2023. URL https://www.adept.ai/blog/fuyu-8b. Stephanie A Bell. AI and Job Quality: Insights from Frontline Workers. *Available at SSRN 4337611*, 2022. Stephanie A Bell and Anton Korinek. AI's Economic Peril. *Journal of Democracy*, 34(4):151–161, 2023. Nora Belrose. AI Pause Will Likely Backfire (Guest Post), 2023. https://bounded-regret.ghost.io/a i-pause-will-likely-backfire-by-nora/. Accessed on: January 30, 2024. Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, and Stella Biderman. LEACE: Perfect linear concept erasure in closed form, oct 2023. URL http://arxiv.org/abs/2306.038 19. arXiv:2306.03819 [cs]. Youssef Benchekroun, Megi Dervishi, Mark Ibrahim, Jean-Baptiste Gaya, Xavier Martinet, Grégoire Mialon, Thomas Scialom, Emmanuel Dupoux, Dieuwke Hupkes, and Pascal Vincent. WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models. *CoRR*, abs/2311.15930, 2023. Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610–623, 2021. Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, et al. Managing AI risks in an era of rapid progress. arXiv preprint arXiv:2310.17688, 2023. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on freebase from questionanswer pairs. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pp. 1533–1544, 2013. Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, and Owain Evans. Taken out of context: On measuring situational awareness in LLMs, 2023a. Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A". *CoRR*, abs/2309.12288, 2023b. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. Abductive Commonsense Reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. Ansh Bhatnagar and Devyani Gajjar. Policy Implications of Artificial Intelligence (AI). Research briefing, UK Parliament, 2024. https://post.parliament.uk/research-briefings/post-pn-0708/. Accessed on: February 8, 2024. Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. Cultural Re-contextualization of Fairness Research in Language Technologies in India. *arXiv preprint* arXiv:2211.11206, 2022. Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura, and Peter Eckersley. Explainable machine learning in deployment. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pp. 648–657, 2020. Sam Biddle. OpenAI Quietly Deletes Ban on Using ChatGPT for "Military and Warfare", 2024. https: //theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/. Accessed on: January 27, 2024. Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397–2430. Pmlr, 2023. Eric J. Bigelow, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, and Tomer D. Ullman. In-Context Learning Dynamics with Random Binary Sequences, 2023. Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012. Leyla Bilge and Tudor Dumitraş. Before we knew it: an empirical study of zero-day attacks in the real world. In *Proceedings of the 2012 ACM conference on Computer and communications security*, pp. 833–844, 2012. Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. Language models can explain neurons in language models. https: //openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html, 2023. Blair Bilodeau, Natasha Jaques, Pang Wei Koh, and Been Kim. Impossibility Theorems for Feature Attribution. *ArXiv*, abs/2212.11870, 2022. URL https://api.semanticscholar.org/CorpusID:254974246. Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. Power to the people? Opportunities and challenges for participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1–8, 2022. Abeba Birhane, Vinay Prabhu, Sang Han, and Vishnu Naresh Boddeti. On Hate Scaling Laws For DataSwamps. *arXiv preprint arXiv:2306.13141*, 2023. Andrew Blair-Stanek, Nils Holzenberger, and Benjamin Van Durme. OpenAI Cribbed Our Tax Example, But Can GPT-4 Really Do Tax? *Tax Notes*, August 14 2023. https://www.taxnotes.com/featured-a nalysis/openai-cribbed-our-tax-example-can-gpt-4-really-do-tax/2023/08/11/7h0hc. Henry Blodget. Mark Zuckerberg On Innovation. *Business Insider*, 2009. https://www.businessinsider. com/mark-zuckerberg-innovation-2009-10?r=US&IR=T. Accessed on: January 30, 2024. Su Lin Blodgett, Q Vera Liao, Alexandra Olteanu, Rada Mihalcea, Michael Muller, Morgan Klaus Scheuerman, Chenhao Tan, and Qian Yang. Responsible language technologies: Foreseeing and mitigating harms. In *CHI Conference on Human Factors in Computing Systems Extended Abstracts*, pp. 1–3, 2022. Richard Blumenthal and Josh Hawley. Bipartisan Framework for U.S. AI Act, 2023. https://www.blumen thal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf. Accessed on: 1 February, 2024. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K Warmuth. Learnability and the Vapnik-Chervonenkis dimension. *Journal of the ACM (JACM)*, 36(4):929–965, 1989. UN AI Advisory Board. Interim Report: Governing AI for Humanity, 2023. URL https://www.un.org/s ites/un2.un.org/files/ai%5Fadvisory%5Fbody%5Finterim%5Freport.pdf. Andreea Bobu, Andi Peng, Pulkit Agrawal, Julie Shah, and Anca D Dragan. Aligning Robot and Human Representations. *arXiv preprint arXiv:2302.01928*, 2023. Matteo Boffa, Giulia Milan, Luca Vassio, Idilio Drago, Marco Mellia, and Zied Ben Houidi. Towards nlpbased processing of honeypot logs. In 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pp. 314–321. IEEE, 2022. Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Vi'egas, and Martin Wattenberg. An Interpretability Illusion for BERT. *ArXiv*, abs/2104.07143, 2021. URL https: //api.semanticscholar.org/CorpusID:233241181. Rishi Bommasani, Kathleen A Creel, Ananya Kumar, Dan Jurafsky, and Percy S Liang. Picking on the same person: Does algorithmic monoculture lead to outcome homogenization? Advances in Neural Information Processing Systems, 35:3663–3678, 2022. Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In *International Conference on Machine Learning*, pp. 1024–1034. Pmlr, 2020. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In *International conference on machine learning*, pp. 2206–2240. Pmlr, 2022. Boston Dynamics. General Purpose Robots Should Not Be Weaponized, 2022. https://bostondynamics .com/news/general-purpose-robots-should-not-be-weaponized/. Accessed on: January 30, 2024. Meriem Boubdir, Edward Kim, Beyza Ermis, Sara Hooker, and Marzieh Fadaee. Elo Uncovered: Robustness and Best Practices in Language Model Evaluation. *arXiv preprint arXiv:2311.17295*, 2023. Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In *2021 IEEE Symposium on* Security and Privacy (SP), pp. 141–159. Ieee, 2021. Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon Van Handel, and Amir Yehudayoff. A theory of universal learning. In *Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing*, pp. 532–541, 2021. Sam Bowman and Tamera Lanham. The Anthropic–NYU Debate Agenda, 2023. https://docs.google.co m/document/d/173SpCyspboHBp3bHqWvUiduzatbuuv7QAvWVamwcGbk/edit. Accessed on: March 10, 2024. Samuel R Bowman. Eight things to know about large language models. *arXiv preprint arXiv:2304.00612*, 2023. Samuel R Bowman and George E Dahl. What will it take to fix benchmarking in natural language understanding? *arXiv preprint arXiv:2104.02145*, 2021. Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamil˙e Lukoši¯ut˙e, Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, ..., and Jared Kaplan. Measuring Progress on Scalable Oversight for Large Language Models, nov 2022. URL http://arxiv.org/abs/2211.03540. arXiv:2211.03540 [cs]. Robert Boyd, Herbert Gintis, and Samuel Bowles. Coordinated Punishment of Defectors Sustains Cooperation and Can Proliferate When Rare. *Science*, 328(5978):617–620, apr 2010. ISSN 0036-8075, 1095-9203. doi: 10.1126/science.1183665. URL https://www.science.org/doi/10.1126/science.1183665. Herbie Bradley, Andrew Dai, Hannah Teufel, Jenny Zhang, Koen Oostermeijer, Marco Bellagente, Jeff Clune, Kenneth Stanley, Grégory Schott, and Joel Lehman. Quality-Diversity through AI Feedback. arXiv preprint arXiv:2310.13032, 2023. Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. *Biometrika*, 39(3/4):324–345, 1952. Tim Bradshaw, Madhumita Murgia, George Hammond, and Camilla Hodgson. How Microsoft's Multibilliondollar Alliance with OpenAI Really Works. *Financial Times*, 2023. https://www.ft.com/content/458 b162d-c97a-4464-8afc-72d65afb28ed. Gwern Branwen. Why Tool AIs Want to Be Agent AIs. *Gwern.net*, 2016. https://gwern.net/tool-ai. Accessed on: 3 January, 2024. Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, ..., and Christopher Olah. Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. *Transformer* Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html. A. P. Brogan. Philosophy and the Problem of Value. *Proceedings and Addresses of the American Philosophical* Association, 6:105–129, 1952. URL https://doi.org/10.2307/1483020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. URL https://proceedings.ne urips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. Zach Y. Brown and Alexander MacKay. Competition in Pricing Algorithms. American Economic Journal: Microeconomics, 15(2):109–56, 5 2023. doi: 10.1257/mic.20210158. URL https://www.aeaweb.org/art icles?id=10.1257/mic.20210158. Jonah Brown-Cohen, Geoffrey Irving, and Georgios Piliouras. Scalable AI safety via doubly-efficient debate. arXiv preprint arXiv:2311.14125, 2023. Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. *arXiv preprint arXiv:1802.07228*, 2018. Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, apr 2020. URL http://arxiv.org/abs/2004.07213. arXiv:2004.07213 [cs]. Erik Brynjolfsson, Danielle Li, and Lindsey R Raymond. Generative AI at work. Technical report, National Bureau of Economic Research, 2023. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint arXiv:2303.12712*, 2023. Ben Buchanan, Andrew Lohn, and Micah Musser. Truth, lies, and automation: How language models could change disinformation. Center for Security and Emerging Technology, 2021. James M. Buchanan and Wm. Craig Stubblebine. Externality. *Economica*, 29(116):371–384, 1962. ISSN 0013-0427. doi: 10.2307/2551386. URL https://www.jstor.org/stable/2551386. Miriam Buiten, Alexandre De Streel, and Martin Peitz. The law and economics of AI liability. Computer Law & Security Review, 48:105794, 2023. Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young, and Baobao Zhang. *The Oxford Handbook of AI Governance*. Oxford University Press, 2022a. doi: 10.1093/oxfordhb/9780197579329.001.0001. Justin B. Bullock, Hsini Huang, and Kyoung-Cheol (Casey) Kim. Machine Intelligence, Bureaucracy, and Human Control. *Perspectives on Public Management and Governance*, 5(2):187–196, June 2022b. doi: 10.1093/ppmgov/gvac006. URL https://doi.org/10.1093/ppmgov/gvac006. Solcyré Burga. How a New Bill Could Protect Against Deepfakes. *Time*, 2024. https://time.com/65907 11/deepfake-protection-federal-bill/. Ryan Burnell, Han Hao, Andrew RA Conway, and Jose Hernandez Orallo. Revealing the structure of language model capabilities. *arXiv preprint arXiv:2306.10062*, 2023a. Ryan Burnell, Wout Schellaert, John Burden, Tomer D Ullman, Fernando Martinez-Plumed, Joshua B Tenenbaum, Danaja Rutar, Lucy G Cheke, Jascha Sohl-Dickstein, Melanie Mitchell, et al. Rethink reporting of evaluation results in AI. *Science*, 380(6641):136–138, 2023b. Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering Latent Knowledge in Language Models Without Supervision, dec 2022. URL http://arxiv.org/abs/2212.03827. arXiv:2212.03827 [cs]. Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. *arXiv preprint arXiv:2312.09390*, 2023. Vitalik Buterin. On Collusion, 2019. URL https://vitalik.eth.limo/general/2019/04/03/collusio n.html. Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken Neural Scaling Laws. In The Eleventh International Conference on Learning Representations, 2023. URL https://arxiv.org/abs/2210.14891. Vivien Cabannes, Elvis Dohmatob, and Alberto Bietti. Scaling laws for associative memories. arXiv preprint arXiv:2310.02984, 2023. CAIS. Statement on AI Risk: AI experts and public figures express their concern about AI risk, 2023. https://www.safe.ai/statement-on-ai-risk. Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò, and Sergio Pastorello. Artificial Intelligence, Algorithmic Pricing, and Collusion. *American Economic Review*, 110(10):3267–97, 10 2020. doi: 10.1257/aer.20190623. URL https://www.aeaweb.org/articles?id=10.1257/aer.20190623. Valerio Capraro, Austin Lentsch, Daron Acemoglu, Selin Akgun, Aisel Akhmedova, Ennio Bilancini, JeanFrançois Bonnefon, Pablo Brañas-Garza, Luigi Butera, Karen M Douglas, et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. *arXiv preprint arXiv:2401.05377*, 2023. Cesare Carissimo and Marcin Korecki. Limits of optimization. *Minds and Machines*, pp. 1–21, 2023. Nicholas Carlini. A GPT-4 Capability Forecasting Challenge, 2023a. https://nicholas.carlini.com/w riting/llm-forecast/question/Capital-of-Paris. Accessed on: December 18, 2023. Nicholas Carlini. A LLM assisted exploitation of AI-Guardian. *arXiv preprint arXiv:2307.15008*, 2023b. Nicholas Carlini and Andreas Terzis. Poisoning and backdooring contrastive learning. *arXiv preprint* arXiv:2106.09667, 2021. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *2017 IEEE* symposium on security and privacy (sp), pp. 39–57. Ieee, 2017. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. In Proceedings of the 28th USENIX Conference on Security Symposium, Sec'19, pp. 267–284, Usa, 2019. USENIX Association. ISBN 9781939133069. Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, and J Zico Kolter. (certified!!) Adversarial robustness for free! *arXiv preprint arXiv:2206.10550*, 2022. Nicholas Carlini, Matthew Jagielski, Christopher A Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. Poisoning web-scale training datasets is practical. *arXiv preprint arXiv:2302.10149*, 2023a. Nicholas Carlini, Milad Nasr, Christopher A Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, et al. Are aligned neural networks adversarially aligned? *arXiv preprint arXiv:2306.15447*, 2023b. John B Carroll. *Human cognitive abilities: A survey of factor-analytic studies*. Cambridge University Press, 1993. Pablo Antonio Moreno Casares, Bao Sheng Loe, John Burden, José Hernández-Orallo, et al. How generalpurpose is a language model? Usefulness and safety with human prompters in the wild. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 36, pp. 5295–5303, 2022. Antonio A. Casilli and Julian Posada. The Platformisation of Labor and Society. In Mark Graham and William H. Dutton (eds.), *Society and the Internet: How Networks of Information and Communication* Are Changing Our Lives. Oxford University Press, Oxford, vol. 2 edition, 2019. Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, CharbelRaphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, ..., and Dylan Hadfield-Menell. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback, 2023a. Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Kaivalya Hariharan, and Dylan HadfieldMenell. Red Teaming Deep Neural Networks with Feature Synthesis Tools, sep 2023b. URL http: //arxiv.org/abs/2302.10894. arXiv:2302.10894 [cs]. Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, and Dylan Hadfield-Menell. Explore, Establish, Exploit: Red Teaming Language Models from Scratch. *arXiv preprint arXiv:2306.09442*, 2023c. Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, ..., and Dylan Hadfield-Menell. Black-Box Access is Insufficient for Rigorous AI Audits, 2024a. Stephen Casper, Lennart Schulze, Oam Patel, and Dylan Hadfield-Menell. Defending Against Unforeseen Failure Modes with Latent Adversarial Training. *arXiv preprint arXiv:2403.05030*, 2024b. Center for AI Safety. The Trojan Detection Challenge 2023 (LLM Edition) - The Trojan Detection Challenge, 2023. URL https://trojandetection.ai/. Center for AI Safety, Aidan O'Gara, Corin Katzke, and Dan Hendrycks. AI Safety Newsletter \#32: Measuring and Reducing Hazardous Knowledge in LLMs. *AI Safety Newsletter*, 2024. https://newsletter .safe.ai/p/ai-safety-newsletter-32-measuring. CFTC and SEC. Findings Regarding the Market Events of May 6, 2010: Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues. Technical report, Cftc, sep 2010. URL https://www.sec.gov/files/marketevents-report.pdf. CHAI, Far.ai, and Ditchley Foundation. Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI, 2023. https://humancompatible.ai/news/2023/10/31/prominent -ai-scientists-from-china-and-the-west-propose-joint-strategy-to-mitigate-risks-from-a i/. Accessed on: January 8, 2024. Alan Chan, Maxime Riché, and Jesse Clifton. Towards the Scalable Evaluation of Cooperativeness in Language Models. *arXiv:2303.13360*, mar 2023a. doi: 10.48550/arxiv.2303.13360. Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan, Micah Carroll, et al. Harms from Increasingly Agentic Algorithmic Systems. In *Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency*, pp. 651–666, 2023b. Alan Chan, Carson Ezell, Max Kaufmann, Kevin Wei, Lewis Hammond, Herbie Bradley, Emma Bluemke, Nitarshan Rajkumar, David Krueger, Noam Kolt, et al. Visibility into AI Agents. *arXiv preprint* arXiv:2401.13138, 2024. Lawrence Chan, Adrià Garriga-Alonso, Nicholas Goldowsky-Dill, Ryan Greenblatt, jenny, Ansh Radhakrishnan, Buck Shlegaris, and Nate Thomas. Causal Scrubbing: a method for rigorously testing interpretability hypotheses, 2023c. https://static1.squarespace.com/static/6114773bd7f9917b7ae4ef8d/t/6364 a036f9da3316ac793f56/1667539011553/causal-scrubbing. Stephanie CY Chan, Adam Santoro, Andrew K Lampinen, Jane X Wang, Aaditya Singh, Pierre H Richemond, Jay McClelland, and Felix Hill. Data distributional properties drive emergent few-shot learning in transformers. *arXiv preprint arXiv:2205.05055*, 2022. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan Boyd-Graber, and David Blei. Reading tea leaves: How humans interpret topic models. *Advances in neural information processing systems*, 22, 2009. Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J Pappas, and Eric Wong. Jailbreaking Black Box Large Language Models in Twenty Queries. *arXiv preprint arXiv:2310.08419*, 2023. Checkpoint Research. OPWNAI: AI That Can Save the Day or Hack It Away, 2022. https://research .checkpoint.com/2022/opwnai-ai-that-can-save-the-day-or-hack-it-away/. Accessed on: 30 January, 2024. Checkpoint Research. OPWNAI: Cybercriminals Starting to Use ChatGPT, 2023. https://research.che ckpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/. Accessed on: 30 January, 2024. Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, and Naomi Saphra. Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs. In The Twelfth International Conference on Learning Representations, 2024a. URL https://openreview.net/forum?i d=MO5PiKHELW. Baian Chen, Chang Shu, Ehsan Shareghi, Nigel Collier, Karthik Narasimhan, and Shunyu Yao. Fireact: Toward language agent fine-tuning. *arXiv preprint arXiv:2310.05915*, 2023a. Canyu Chen and Kai Shu. Combating misinformation in the age of llms: Opportunities and challenges. arXiv preprint arXiv:2311.05656, 2023. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34:15084–15097, 2021a. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021b. Mayee Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher Ré. Skill-it! a data-driven skills framework for understanding and training language models. Advances in Neural Information Processing Systems, 36, 2024b. Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. *arXiv preprint arXiv:1712.05526*, 2017. Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, and Kathleen McKeown. Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations, jul 2023b. URL http://arxiv.org/abs/2307.08678. arXiv:2307.08678 [cs]. Bobby Chesney and Danielle Citron. Deep fakes: A looming challenge for privacy, democracy, and national security. *Calif. L. Rev.*, 107:1753, 2019. Cheng-Han Chiang and Hung-yi Lee. Can Large Language Models Be an Alternative to Human Evaluations? arXiv preprint arXiv:2305.01937, 2023. David Chiang, Peter Cholak, and Anand Pillay. Tighter bounds on the expressivity of transformer encoders. In *International Conference on Machine Learning*, pp. 5544–5562. PMLR, 2023. Dami Choi, Yonadav Shavit, and David Duvenaud. Tools for Verifying Neural Models' Training Data. arXiv preprint arXiv:2307.00682, 2023a. Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, and David Jurgens. Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark. *arXiv preprint* arXiv:2305.14938, 2023b. Paul Christiano. Clarifying "AI Alignment", April 7 2018. URL https://ai-alignment.com/clarifyin g-ai-alignment-cec47cd69dd6. Paul Christiano. Current Work in AI Alignment. *EA Global: San Francisco*, 2019. Paul Christiano, Buck Shlegeris, and Dario Amodei. Supervising strong learners by amplifying weak experts. arXiv preprint arXiv:1810.08575, 2018. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *Advances in neural information processing systems*, 30, 2017. Caroline Christie. After dropping 'don't be evil,' Google looks for new words to justify its military projects, 2018. https://www.documentjournal.com/2018/06/google-quietly-abandons-emphasis-on-their -dont-be-evil-motto/. Accessed on: January 27, 2024. Phillip J. K. Christoffersen, Andreas A. Haupt, and Dylan Hadfield-Menell. Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL, aug 2023. URL http://arxiv.org/abs/2208.1 0469. arXiv:2208.10469 [cs, econ]. Casey Chu, Andrey Zhmoginov, and Mark Sandler. Cyclegan, a master of steganography. arXiv preprint arXiv:1712.02950, 2017. James Chua, Edward Rees, Hunar Batra, Samuel R Bowman, Julian Michael, Ethan Perez, and Miles Turpin. Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought. *arXiv* preprint arXiv:2403.05518, 2024. Bilal Chughtai, Lawrence Chan, and Neel Nanda. A toy model of universality: Reverse engineering how networks learn group operations. *arXiv preprint arXiv:2302.03025*, 2023. Nicola Muca Cirone, Antonio Orvieto, Benjamin Walker, Cristopher Salvi, and Terry Lyons. Theoretical foundations of deep selective state-space models. *arXiv preprint arXiv:2402.19047*, 2024. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A Smith. All that's' human'is not gold: Evaluating human evaluation of generated text. *arXiv preprint* arXiv:2107.00061, 2021. Joshua Clymer, Garrett Baker, Rohan Subramani, and Sam Wang. Generalization Analogies (GENIES): A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains. *arXiv preprint arXiv:2311.07723*, 2023. R. H. Coase. The Problem of Social Cost. *The Journal of Law and Economics*, 3:1–44, 1960. ISSN 0022-2186, 1537-5285. doi: 10.1086/466560. URL https://www.journals.uchicago.edu/doi/10.1086/466560. Karl Cobbe, Chris Hesse, Jacob Hilton, and John Schulman. Leveraging procedural generation to benchmark reinforcement learning. In *International conference on machine learning*, pp. 2048–2056. Pmlr, 2020. Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In *international conference on machine learning*, pp. 1310–1320. Pmlr, 2019. Michael Cohen, Marcus Hutter, and Michael Osborne. Advanced artificial agents intervene in the provision of reward. *AI magazine*, 43(3):282–293, 2022. Vincent Conitzer and Caspar Oesterheld. Foundations of cooperative AI. In *Proceedings of the AAAI* Conference on Artificial Intelligence, volume 37, pp. 15359–15367, 2023. Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. Towards Automated Circuit Discovery for Mechanistic Interpretability. *CoRR*, abs/2304.14997, 2023. URL https://doi.org/10.48550/arXiv.2304.14997. Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, and Davide Bacciu. Continual pre-training mitigates forgetting in language and vision. *arXiv preprint arXiv:2205.09357*, 2022. Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. In *Proceedings of the 2022 ACM* Conference on Fairness, Accountability, and Transparency, pp. 1571–1583, 2022. Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. Reward Model Ensembles Help Mitigate Overoptimization. *arXiv preprint arXiv:2310.02743*, 2023. Debby RE Cotton, Peter A Cotton, and J Reuben Shipway. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. *Innovations in Education and Teaching International*, pp. 1–12, 2023. Council of Europe. Artificial Intelligence - Work in Progress, 2023. https://www.coe.int/en/web/artif icial-intelligence/work-in-progress\#01EN. Accessed on: January 30, 2024. Council of the European Union. Proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act), 2024. https://data.consilium.europa.eu/doc/d ocument/ST-5662-2024-INIT/en/pdf. Andrew Critch and David Krueger. AI research considerations for human existential safety (ARCHES). arXiv preprint arXiv:2006.04948, 2020. Andrew Critch and Stuart Russell. TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI. arXiv preprint arXiv:2306.06924, 2023. Andrew Critch, Michael Dennis, and Stuart Russell. Cooperative and uncooperative institution designs: Surprises and problems in open-source game theory, aug 2022. URL http://arxiv.org/abs/2208.07006. arXiv:2208.07006 [cs]. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. RobustBench: a standardized adversarial robustness benchmark. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)*, 2021. URL https://robustbench.github.io/. Chris Cundy and Stefano Ermon. SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking. *arXiv preprint arXiv:2306.05426*, 2023. Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. Sparse Autoencoders Find Highly Interpretable Features in Language Models, oct 2023. URL http://arxiv.org/abs/2309.08600. arXiv:2309.08600 [cs]. Allan Dafoe. AI governance: a research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK, 1442:1443, 2018. Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, and Thore Graepel. Open Problems in Cooperative AI, dec 2020. URL http://arxiv.org/abs/ 2012.08630. arXiv:2012.08630 [cs]. Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. Cooperative AI: machines must learn to find common ground. *Nature*, 593(7857):33–36, may 2021. doi: 10.1038/d415 86-021-01170-0. URL https://www.nature.com/articles/d41586-021-01170-0. Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine learning challenges workshop, pp. 177–190. Springer, 2005. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James Glass. What is one grain of sand in the desert? Analyzing individual neurons in deep NLP models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 6309–6317, 2019. Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. Underspecification presents challenges for credibility in modern machine learning. *The Journal of Machine Learning Research*, 23(1): 10237–10297, 2022. Alexander Nicholas D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jon Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Shaobo Hou, Neil Houlsby, Ghassen Jerfel, Alan Karthikesalingam, Mario Lučić, Yian Ma, Cory McLean, Diana Mincu, ..., and D. Sculley. Underspecification Presents Challenges for Credibility in Modern Machine Learning. Journal of Machine Learning Research, 2020. URL https://www.jmlr.org/papers/v23/20-1335.html. Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110, 2022. Ernest Davis. The limitations of standardized science tests as benchmarks for artificial intelligence research: Position paper. *arXiv preprint arXiv:1411.1629*, 2014. Bart De Moor, Johan David, Joos Vandewalle, Maarten De Moor, and Daniel Berckmans. Trade-offs in linear control system design: A practical example. *Optimal Control Applications and Methods*, 13(2):121–144, 1992. Christian Schroeder de Witt, Samuel Sokota, J Zico Kolter, Jakob Foerster, and Martin Strohmeier. Perfectly secure steganography using minimum entropy coupling. *arXiv preprint arXiv:2210.14889*, 2022. Schroeder de Witt, Hawra Milani, Klaudia Krawiecka, Swapneel Mehta, Carla Cremer, and Martin Strohmeier. Multi-Agent Security Workshop at NeurIPS'23, 2023. URL https://neurips.cc/vir tual/2023/workshop/66520. Edoardo Debenedetti, Zishen Wan, Maksym Andriushchenko, Vikash Sehwag, Kshitij Bhardwaj, and Bhavya Kailkhura. Scaling Compute Is Not All You Need for Adversarial Robustness. arXiv preprint arXiv:2312.13131, 2023. Łukasz Dębowski. A simplistic model of neural scaling laws: Multiperiodic Santa Fe processes. *arXiv preprint* arXiv:2302.09049, 2023. Deliberation at Scale. First Report: Democratic Inputs to AI, 2023. URL https://findcommonground.o nline/top-level-pages/about-the-consortium. Fabrizio Dell'Acqua, Edward McFowland, Ethan R Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R Lakhani. Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, 2023. Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, and Stefan Rass. PentestGPT: An LLM-empowered automatic penetration testing tool. arXiv preprint arXiv:2308.06782, 2023a. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P Xing, and Zhiting Hu. Rlprompt: Optimizing discrete text prompts with reinforcement learning. *arXiv* preprint arXiv:2205.12548, 2022. Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. Multilingual Jailbreak Challenges in Large Language Models. *arXiv preprint arXiv:2310.06474*, 2023b. Emily Dinan, Gavin Abercrombie, A Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. Anticipating safety issues in e2e conversational ai: Framework and tooling. *arXiv preprint* arXiv:2107.03451, 2021. Li Ding, Jenny Zhang, Jeff Clune, Lee Spector, and Joel Lehman. Quality Diversity through Human Feedback. *arXiv preprint arXiv:2310.12103*, 2023. Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. Hard choices in artificial intelligence. *Artificial* Intelligence, 300:103555, 2021. Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. *arXiv preprint arXiv:2104.08758*, 2021. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. *arXiv preprint arXiv:2301.00234*, 2022. Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, and Jun Zhu. Black-box detection of backdoor attacks with limited information and data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16482–16491, 2021. Florian E Dorner. Algorithmic collusion: A critical review. *arXiv preprint arXiv:2110.04740*, 2021. Finale Doshi-Velez and Been Kim. Towards A Rigorous Science of Interpretable Machine Learning, mar 2017. URL http://arxiv.org/abs/1702.08608. arXiv:1702.08608 [cs, stat]. Arthur Douillard, Qixuan Feng, Andrei A Rusu, Rachita Chhaparia, Yani Donchev, Adhiguna Kuncoro, Marc'Aurelio Ranzato, Arthur Szlam, and Jiajun Shen. DiLoCo: Distributed Low-Communication Training of Language Models. *arXiv preprint arXiv:2311.08105*, 2023. Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, and Ahmed Hassan Awadallah. Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding. arXiv preprint arXiv:2110.08419, 2021. Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, and Xia Hu. Shortcut learning of large language models in natural language understanding: A survey. *arXiv preprint arXiv:2208.11857*, 2022. Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, and Xia Hu. Shortcut learning of large language models in natural language understanding. *Communications of the ACM*, 67(1):110–120, 2023a. Yali Du, Joel Z Leibo, Usman Islam, Richard Willis, and Peter Sunehag. A Review of Cooperation in Multi-agent Learning. *arXiv preprint arXiv:2312.05162*, 2023b. Pradeep Dubey. Inefficiency of Nash Equilibria. *Mathematics of Operations Research*, 11(1):1–8, 1986. ISSN 0364-765x. URL https://www.jstor.org/stable/3690047. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback, 2023. Esin Durmus, Karina Nyugen, Thomas I Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, et al. Towards measuring the representation of subjective global opinions in language models. *arXiv preprint arXiv:2306.16388*, 2023. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and Fate: Limits of Transformers on Compositionality. *arXiv preprint arXiv:2305.18654*, 2023. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017. Gintare Karolina Dziugaite, Shai Ben-David, and Daniel M. Roy. Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability, 2020. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. Hotflip: White-box adversarial examples for text classification. *arXiv preprint arXiv:1712.06751*, 2017. Adrien Ecoffet, Jeff Clune, and Joel Lehman. Open questions in creating safe open-ended AI: tensions between control and creativity. In *Artificial Life Conference Proceedings 32*, pp. 27–35, 2020. Benjamin L Edelman, Ezra Edelman, Surbhi Goel, Eran Malach, and Nikolaos Tsilivis. The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains. *arXiv preprint arXiv:2402.11004*, 2024. Ethan Edwards. Large Language Models will be Great for Censorship, 2023. https://ethanedwards.sub stack.com/p/large-language-models-will-be-great. Accessed on: January 30, 2024. Janet Egan and Lennart Heim. Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers. *arXiv preprint arXiv:2310.13625*, 2023. Jacob Eisenstein, Daniel Andor, Bernd Bohnet, Michael Collins, and David Mimno. Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model. *arXiv preprint arXiv:2210.02498*, 2022. Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, et al. What's In My Big Data? *ArXiv*, abs/2310.20707, 2023. URL https://api.semanticscholar.org/CorpusID:264803575. Ronen Eldan and Mark Russinovich. Who's Harry Potter? Approximate Unlearning in LLMs, 2023. Nelson Elhage, Tristan Hume, Catherine Olsson, Neel Nanda, Tom Henighan, Scott Johnston, Sheer ElShowk, Nicholas Joseph, Nova DasSarma, Ben Mann, Danny Hernandez, Amanda Askell, Kamal Ndousse, Andy Jones, Dawn Drain, Anna Chen, Yuntao Bai, Deep Ganguli, Liane Lovitt, ..., and Christopher Olah. Softmax Linear Units. *Transformer Circuits Thread*, 2022a. URL https://transformer-cir cuits.pub/2022/solu/index.html. Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy Models of Superposition, sep 2022b. URL http://arxiv.org/abs/2209.10652. arXiv:2209.10652 [cs]. Hannah Ellis-Petersen. WhatsApp sues Indian government over 'mass surveillance' internet laws, 2021. https://www.theguardian.com/world/2021/may/26/whatsapp-sues-indian-government-over-mas s-surveillance-internet-laws. Accessed on: 30th January, 2024. Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. Gpts are gpts: An early look at the labor market impact potential of large language models. *arXiv preprint arXiv:2303.10130*, 2023. Alex Engler. A comprehensive and distributed approach to AI regulation. *Brookings*, 2023. Ege Erdil and Tamay Besiroglu. Algorithmic progress in computer vision. *arXiv preprint arXiv:2212.05153*, 2022. Arthur Erzberger. WormGPT and FraudGPT - The Rise of Malicious LLMs. https://www.trustwave.com/ en-us/resources/blogs/spiderlabs-blog/wormgpt-and-fraudgpt-the-rise-of-malicious-llms/, 2023. Kawin Ethayarajh and Dan Jurafsky. The authenticity gap in human evaluation. In *Proceedings of the 2022* Conference on Empirical Methods in Natural Language Processing, pp. 6056–6070, 2022. Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. KTO: Model Alignment as Prospect Theoretic Optimization. *arXiv preprint arXiv:2402.01306*, 2024. Julen Etxaniz, Gorka Azkune, Aitor Soroa, Oier Lopez de Lacalle, and Mikel Artetxe. Do multilingual language models think better in English? *arXiv preprint arXiv:2308.01223*, 2023. Eurojust and Europol. Common challenges in combating cybercrime. Technical report, Europol and Eurojust Public Information, 2019. https://www.europol.europa.eu/cms/sites/default/files/documents/ common_challenges_in_combating_cybercrime_2018.pdf. Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. Truthful AI: Developing and Governing AI That Does Not Lie. arXiv:2110.06674, October 2021. doi: 10.48550/arxiv.2110.06674. FAccT. Statement on AI Harms and Policy, 2023. URL https://facctconference.org/2023/harm-pol icy. Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, Kirill Fedyanin, Daniil Vasilev, Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, Timothy Baldwin, and Artem Shelmanov. LM-Polygraph: Uncertainty Estimation for Language Models, 2023. Julien Fageot, Sadegh Farhadkhani, Lê Nguyên Hoang, and Oscar Villemaud. Generalized Bradley-Terry Models for Score Estimation from Paired Comparisons. *arXiv preprint arXiv:2308.08644*, 2023. FAR AI. Scientists Call For International Cooperation on AI Red Lines, 2024. https://far.ai/post/20 24-03-idais-beijing/. Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, and Rohin Shah. Challenges with unsupervised LLM knowledge discovery. *arXiv preprint arXiv:2312.10029*, 2023. Steven Feldstein. *The global expansion of AI surveillance*. Carnegie Endowment for International Peace Washington, DC, 2019. Guhao Feng, Yuntian Gu, Bohang Zhang, Haotian Ye, Di He, and Liwei Wang. Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective. *arXiv preprint arXiv:2305.15408*, 2023. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. *arXiv preprint arXiv:2309.16797*, 2023. Mohamed Amine Ferrag, Mthandazo Ndhlovu, Norbert Tihanyi, Lucas C Cordeiro, Merouane Debbah, and Thierry Lestable. Revolutionizing Cyber Threat Detection with Large Language Models. arXiv preprint arXiv:2306.14263, 2023. Emilio Ferrara. GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. *arXiv preprint arXiv:2310.00737*, 2023. Jane Finlayson-Brown and Susana Ng. China brings into force Regulations on the Administration of Deep Synthesis of Internet Technology, 2023. https://www.allenovery.com/en-gb/global/blogs/data-hub /china-brings-into-force-regulations-on-the-administration-of-deep-synthesis-of-inter net-technology-addressing-deepfakes-and-similar-technologies. Sara Fish, Paul Gölz, David C Parkes, Ariel D Procaccia, Gili Rusak, Itai Shapira, and Manuel Wüthrich. Generative Social Choice. *arXiv preprint arXiv:2309.01291*, 2023. Edwin A Fleishman, Marilyn K Quaintance, and Laurie A Broedling. *Taxonomies of human performance:* The description of human tasks. Academic Press, 1984. Lukas Fluri, Daniel Paleka, and Florian Tramèr. Evaluating Superhuman Models with Consistency Checks. arXiv preprint arXiv:2306.09983, 2023. Jakob Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with Opponent-learning Awareness. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, Aamas '18, pp. 122–130, Richland, SC, 2018. International Foundation for Autonomous Agents and Multiagent Systems. Jakob N. Foerster. Open letter against armed drones to the German Social Democratic Party, 2020. Geoffrey Fowler. Snapchat tried to make a safe AI. It chats with me about booze and sex. *The Washington* Post, 2023. https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/. Accessed on: 1 February, 2024. Jan-Philipp Fränken, Sam Kwok, Peixuan Ye, Kanishk Gandhi, Dilip Arumugam, Jared Moore, Alex Tamkin, Tobias Gerstenberg, and Noah D Goodman. Social Contract AI: Aligning AI Assistants with Implicit Group Norms. *arXiv preprint arXiv:2310.17769*, 2023. Tim Franzmeyer, Stephen Marcus McAleer, Joao F Henriques, Jakob Nicolaus Foerster, Philip Torr, Adel Bibi, and Christian Schroeder de Witt. Illusory attacks: Detectability matters in adversarial attacks on sequential decision-makers. In *The Twelfth International Conference on Learning Representations*, 2023. Spencer Frei, Gal Vardi, Peter L Bartlett, and Nathan Srebro. The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks. *arXiv preprint arXiv:2303.01456*, 2023. Robert M French. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3(4): 128–135, 1999. Dan Friedman, Alexander Wettig, and Danqi Chen. Learning Transformer Programs. *arXiv preprint* arXiv:2306.01128, 2023. Frontier Model Forum. Introducing the Frontier Model Forum, 2023. https://www.frontiermodelforum .org/updates/announcing-the-frontier-model-forum/. Accessed on: January 8, 2024. Deqing Fu, Tian-Qi Chen, Robin Jia, and Vatsal Sharan. Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models. *arXiv preprint arXiv:2310.17086*, 2023a. Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from AI feedback. *arXiv preprint arXiv:2305.10142*, 2023b. Future of Life Institute. Autonomous Weapons Open Letter: AI & Robotics Researchers, 2016. https: //futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/. Accessed on: January 30, 2024. Future of Life Institute. Pause Giant AI Experiments: An Open Letter, 2023. https://futureoflife.org /open-letter/pause-giant-ai-experiments/. Accessed on: January 30, 2024. Iason Gabriel. Artificial intelligence, values, and alignment. *Minds and machines*, 30(3):411–437, 2020. Gallup. How to improve employee engagement in the workplace, 2022. Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah D. Goodman. Understanding Social Reasoning in Language Models with Language Models. *CoRR*, abs/2306.15448, 2023. Rohit Gandikota, Joanna Materzynska, Tingrui Zhou, Antonio Torralba, and David Bau. Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models. *arXiv preprint arXiv:2311.12092*, 2023. Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova Dassarma, Dawn Drain, Nelson Elhage, et al. Predictability and surprise in large generative models. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, pp. 1747–1764, 2022. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. *arXiv preprint arXiv:2101.00027*, 2020. Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi Policharla, and Mingyuan Wang. Experimenting with zero-knowledge proofs of training. In *Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security*, pp. 1880–1894, 2023. Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn incontext? A case study of simple function classes. *Advances in Neural Information Processing Systems*, 35: 30583–30598, 2022. Carlos Ignacio Gutierrez Gaviria. The role of artificial intelligence in pushing the boundaries of US regulation: A systematic review. *Santa Clara High Tech. LJ*, 38:123, 2022. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. *Communications of the ACM*, 64(12): 86–92, 2021. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 3356–3369, Online, nov 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.301. URL https://aclanthology.org/2020.findin gs-emnlp.301. Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. Causal abstractions of neural networks. Advances in Neural Information Processing Systems, 34:9574–9586, 2021. Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, and Christopher Potts. Inducing Causal Structure for Interpretable Neural Networks, jul 2022. URL http://arxiv.org/abs/2112.00826. arXiv:2112.00826 [cs]. Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah D Goodman. Finding alignments between interpretable causal variables and distributed neural representations. arXiv preprint arXiv:2303.02536, 2023. Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, and Tom Goldstein. Coercing LLMs to do and reveal (almost) anything, 2024. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020. Gemini Team. Gemini: a family of highly capable multimodal models. *arXiv preprint arXiv:2312.11805*, 2023. Gemini Team. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Technical report, Google DeepMind, 2024. https://storage.googleapis.com/deepmind-media/gemini/ge mini_v1_5_report.pdf. Kristalina Georgieva. AI will transform the global economy. Let's make sure it benefits humanity. 2024. URL https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-eco nomy-lets-make-sure-it-benefits-humanity. Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *arXiv preprint arXiv:2203.14680*, 2022. Sourojit Ghosh and Aylin Caliskan. ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages. *arXiv preprint* arXiv:2305.10510, 2023. Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, and Lalana Kagal. Explaining Explanations: An Overview of Interpretability of Machine Learning. *2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)*, pp. 80–89, 2018. URL https://api.semanticscholar.org/CorpusID:59600034. David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, and Vardan Papyan. LLM Censorship: A Machine Learning Challenge or a Computer Security Problem? *arXiv preprint arXiv:2307.10719*, 2023. Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, and Amartya Sanyal. Corrective Machine Unlearning, 2024. Shahriar Golchin and Mihai Surdeanu. Time Travel in LLMs: Tracing Data Contamination in Large Language Models. *ArXiv*, abs/2308.08493, 2023. URL https://api.semanticscholar.org/CorpusID: 260925501. Eric Goldman. An introduction to the california consumer privacy act (ccpa). *Santa Clara Univ. Legal* Studies Research Paper, 2020. Nicholas Goldowsky-Dill, Chris MacLeod, Lucas Sato, and Aryaman Arora. Localizing model behavior with path patching. *arXiv preprint arXiv:2304.05969*, 2023. Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. Generative language models and automated influence operations: Emerging threats and potential mitigations. *arXiv preprint arXiv:2301.04246*, 2023. Shafi Goldwasser, Michael P Kim, Vinod Vaikuntanathan, and Or Zamir. Planting undetectable backdoors in machine learning models. In *2022 IEEE 63rd Annual Symposium on Foundations of Computer Science* (FOCS), pp. 931–942. Ieee, 2022. Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, and Xiaoyun Wang. FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts. *arXiv* preprint arXiv:2311.05608, 2023. Ian Goodfellow. A research agenda: Dynamic models to defend against correlated attacks. arXiv preprint arXiv:1903.06293, 2019. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Google AI. Google AI Principles, 2018. URL https://ai.google/responsibility/principles/. Google DeepMind. AI Safety Summit: An Update on Our Approach to Safety and Responsibility, 2023. https://deepmind.google/public-policy/ai-summit-policies/. Anjali Gopal, Nathan Helm-Burger, Lenni Justen, Emily H Soice, Tiffany Tzeng, Geetha Jeyapragasan, Simon Grimm, Benjamin Mueller, and Kevin M Esvelt. Will releasing the weights of large language models grant widespread access to pandemic agents? *arXiv preprint arXiv:2310.18233*, 2023. Rhys Gould, Euan Ong, George Ogden, and Arthur Conmy. Successor heads: Recurring, interpretable attention heads in the wild. *arXiv preprint arXiv:2312.09230*, 2023. Mara Graziani, Laura Mahony, An phi Nguyen, Henning Muller, and Vincent Andrearczyk. Uncovering Unique Concept Vectors through Latent Space Decomposition. *ArXiv*, abs/2307.06913, 2023. URL https: //api.semanticscholar.org/CorpusID:259847592. Ryan Greenblatt, Buck Shlegeris, Kshitij Sachan, and Fabien Roger. AI Control: Improving Safety Despite Intentional Subversion, January 2024. URL http://arxiv.org/abs/2312.06942. arXiv:2312.06942 [cs] version: 3. Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to ApplicationIntegrated Large Language Models. *arXiv preprint arXiv:2302.12173*, 2023a. Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection, 2023b. Herbert P Grice. Logic and conversation. In *Speech acts*, pp. 41–58. Brill, 1975. Geoffrey Grimmett. *Probability on graphs: random processes on graphs and lattices*, volume 8. Cambridge University Press, 2018. Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying Large Language Model Generalization with Influence Functions, 2023. Ross Gruetzemacher, Florian E Dorner, Niko Bernaola-Alvarez, Charlie Giattino, and David Manheim. Forecasting AI progress: A research agenda. *Technological Forecasting and Social Change*, 170:120909, 2021. Neel Guha, Christie Lawrence, Lindsey A Gailmard, Kit Rodolfa, Faiz Surani, Rishi Bommasani, Inioluwa Raji, Mariano-Florentino Cuéllar, Colleen Honigsberg, Percy Liang, et al. Ai regulation has its own alignment problem: The technical and institutional feasibility of disclosure, registration, licensing, and auditing. *George Washington Law Review, Forthcoming*, 2023. Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, and Tushar Khot. Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs. *arXiv preprint* arXiv:2311.04892, 2023. Tanmay Gupta and Aniruddha Kembhavi. Visual Programming: Compositional visual reasoning without training. *2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 14953– 14962, 2022. URL https://api.semanticscholar.org/CorpusID:253734854. Wes Gurnee and Max Tegmark. Language Models Represent Space and Time. *CoRR*, abs/2310.02207, 2023. Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, and Tolga Bolukbasi. Simfluence: Modeling the influence of individual training examples by simulating training runs. arXiv preprint arXiv:2303.08114, 2023. Philipp Hacker, Andreas Engel, and Marco Mauer. Regulating ChatGPT and other Large Generative AI Models. In *FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency*, pp. 1112–1123. Acm, June 2023. doi: 10.1145/3593013.3594067. URL https://doi.org/10.1145/3593013.3594067. Gillian K Hadfield and Jack Clark. Regulatory Markets: The Future of AI Governance. arXiv preprint arXiv:2304.04914, 2023. Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. *Advances in neural information processing systems*, 29, 2016. Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. The Off-Switch Game. In *Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)*, 2017. Christian Haerpfer, Ronald Inglehart, Alejandro Moreno, Christian Welzel, Kseniya Kizilova, Jaime DiezMedrano, Milena Lagos, Pippa Norris, Eduard Ponarin, and Bianca Puranen. World Values Survey: Round Seven - Country-Pooled Datafile Version 5.0.0, 2022. Thilo Hagendorff, Sarah Fabi, and Michal Kosinski. Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. *Nature Computational Science*, 3(10): 833–838, 2023. Michael Hahn and Navin Goyal. A theory of emergent in-context learning as implicit structure induction. arXiv preprint arXiv:2303.07971, 2023. Danny Halawi, Jean-Stanislas Denain, and Jacob Steinhardt. Overthinking the truth: Understanding how language models process false demonstrations. *arXiv preprint arXiv:2307.09476*, 2023. David Hambling. Ukrainian AI Attack Drones May Be Killing Without Human Oversight. *New Scientist*, 2023. https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-k illing-without-human-oversight/. Accessed on: 1 February, 2024. Lewis Hammond et al. Multi-Agent Risks from Advanced AI, 2024. Forthcoming. Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, ..., and Dragomir Radev. FOLIO: Natural Language Reasoning with First-Order Logic. *CoRR*, abs/2209.00840, 2022. John C. Harsanyi. A new theory of equilibrium selection for games with complete information. Games and Economic Behavior, 8(1):91–122, jan 1995. ISSN 0899-8256. doi: 10.1016/s0899-8256(05)80018-1. URL https://www.sciencedirect.com/science/article/pii/S0899825605800181. Peter Hase and Mohit Bansal. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? In *Annual Meeting of the Association for Computational Linguistics*, 2020. URL https://api.semanticscholar.org/CorpusID:218502350. Peter Hase, Mohit Bansal, Peter Clark, and Sarah Wiegreffe. The Unreasonable Effectiveness of Easy Training Data for Hard Tasks, 2024. URL https://arxiv.org/pdf/2401.06751.pdf. David Haussler, H Sebastian Seung, Michael Kearns, and Naftali Tishby. Rigorous learning curve bounds from statistical mechanics. In *Proceedings of the seventh annual conference on Computational learning* theory, pp. 76–87, 1994. Julian Hazell. Large language models can be used to effectively scale spear phishing campaigns. *arXiv* preprint arXiv:2305.06972, 2023. Lennart Heim, Tim Fist, Janet Egan, Sihao Huang, Stephen Zekany, Robert Trager, Michael A Osborne, and Noa Zilberman. Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation. *arXiv preprint arXiv:2403.08501*, 2024. Alec Helbling, Mansi Phute, Matthew Hull, and Duen Horng Chau. LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked. *arXiv preprint arXiv:2308.07308*, 2023. Mikal Hem. Evading the censors: Critical journalism in authoritarian states. *Reuters Institute Fellowship* Paper University of Oxford, 2014. Roee Hendel, Mor Geva, and Amir Globerson. In-context learning creates task vectors. *arXiv preprint* arXiv:2310.15916, 2023. Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. Ethical challenges in data-driven dialogue systems. In *Proceedings of the 2018* AAAI/ACM Conference on AI, Ethics, and Society, pp. 123–129, 2018. Peter Henderson, Tatsunori Hashimoto, and Mark Lemley. Where's the Liability in harmful AI Speech? Journal of Free Speech Law, 3:589, 2023a. Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A Lemley, and Percy Liang. Foundation models and fair use. *arXiv preprint arXiv:2303.15715*, 2023b. Peter Henderson, Eric Mitchell, Christopher Manning, Dan Jurafsky, and Chelsea Finn. Self-destructing models: Increasing the costs of harmful dual uses of foundation models. In *Proceedings of the 2023* AAAI/ACM Conference on AI, Ethics, and Society, pp. 287–296, 2023c. Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. Women also snowboard: Overcoming bias in captioning models. In *Proceedings of the European conference on computer* vision (ECCV), pp. 771–787, 2018. Dan Hendrycks. *Introduction to AI Safety, Ethics, and Society*. 2023. Dan Hendrycks and Mantas Mazeika. X-Risk Analysis for AI Research, 2022. Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. Aligning AI with shared human values. *arXiv preprint arXiv:2008.02275*, 2020a. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. *arXiv preprint arXiv:2009.03300*, 2020b. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021a. Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved Problems in ML Safety. arXiv preprint arXiv:2109.13916, 2021b. Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. An Overview of Catastrophic AI Risks. *arXiv* preprint arXiv:2306.12001, 2023. Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020. Joseph Henrich. Cooperation, Punishment, and the Evolution of Human Institutions. *Science*, 312(5770): 60–61, apr 2006. ISSN 0036-8075, 1095-9203. doi: 10.1126/science.1126398. URL https://www.scienc e.org/doi/10.1126/science.1126398. Danny Hernandez and Tom B Brown. Measuring the algorithmic efficiency of neural networks. arXiv preprint arXiv:2005.04305, 2020. Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob Andreas. Natural language descriptions of deep visual features. In *International Conference on Learning Representations*, 2021. Evan Hernandez, Belinda Z. Li, and Jacob Andreas. Inspecting and Editing Knowledge Representations in Language Models, may 2023a. URL http://arxiv.org/abs/2304.00740. arXiv:2304.00740 [cs]. Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, and David Bau. Linearity of Relation Decoding in Transformer Language Models, aug 2023b. URL http://arxiv.org/abs/2308.09124. arXiv:2308.09124 [cs]. Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, et al. Challenges and strategies in cross-cultural NLP. *arXiv preprint arXiv:2203.10020*, 2022. Jacob Hilton. Truthful LMs as a warm-up for aligned AGI. *AI Alignment Forum*, 2022. https://www.alig nmentforum.org/posts/jWkqACmDes6SoAiyE/truthful-lms-as-a-warm-up-for-aligned-agi. Jacob Hilton, Jie Tang, and John Schulman. Scaling laws for single-agent reinforcement learning. arXiv preprint arXiv:2301.13442, 2023. Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, et al. International institutions for advanced AI. *arXiv preprint arXiv:2307.04699*, 2023. C.A.R. Hoare, Jayadev Misra, Gary T. Leavens, and Natarajan Shankar. The verified software initiative: A manifesto. *ACM Comput. Surv.*, 2009. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. An empirical analysis of compute-optimal large language model training. *Advances in Neural Information Processing Systems*, 35: 30016–30030, 2022. Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King. Dialect prejudice predicts AI decisions about people's character, employability, and criminality. *arXiv preprint arXiv:2403.00742*, 2024. Carolin Holtermann, Paul Röttger, Timm Dill, and Anne Lauscher. Evaluating the elementary multilingual capabilities of large language models with MultiQ. *arXiv preprint arXiv:2403.03814*, 2024. Ari Holtzman, Peter West, and Luke Zettlemoyer. Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? *arXiv preprint arXiv:2308.00189*, 2023. Sanghyun Hong, Nicholas Carlini, and Alexey Kurakin. Handcrafted backdoors in deep neural networks. Advances in Neural Information Processing Systems, 35:8068–8080, 2022. Jesse Hoogland, Alexander Gietelink Oldenziel, Daniel Murfet, and Stan van Wingerden. Towards Developmental Interpretability. *AI Alignment Forum*, 2023. https://www.alignmentforum.org/posts/TjaeC WvLZtEDAS5Ex/towards-developmental-interpretability. Accessed on: 1 February, 2024. Jesse Hoogland, George Wang, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, and Daniel Murfet. The developmental landscape of in-context learning. *arXiv preprint arXiv:2402.02364*, 2024. Sara Hooker. The hardware lottery. *Communications of the ACM*, 64(12):58–65, 2021. Tom Hosking, Phil Blunsom, and Max Bartolo. Human Feedback is not Gold Standard, 2023. Betty Li Hou and Brian Patrick Green. Foundational Moral Values for AI Alignment. arXiv preprint arXiv:2311.17017, 2023. Yifan Hou, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, and Mrinmaya Sachan. Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models. *CoRR*, abs/2310.14491, 2023. Jeremy Howard. AI Safety and the Age of Dislightenment, 2023. https://www.fast.ai/posts/2023-11-0 7-dislightenment.html. Accessed on: January 8, 2024. Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, and Peng Qi. Bad actor, good advisor: Exploring the role of large language models in fake news detection. *arXiv preprint arXiv:2309.12247*, 2023a. Hengyuan Hu, Adam Lerer, Brandon Cui, David Wu, Luis Pineda, Noam Brown, and Jakob Foerster. Off-Belief Learning, aug 2021. URL http://arxiv.org/abs/2103.04000. arXiv:2103.04000 [cs]. Michael Hu, Angelica Chen, Naomi Saphra, and Kyunghyun Cho. Latent State Transitions in Training Dynamics. *Transactions of Machine Learning Research (TMLR)*, 2023b. Shengding Hu, Xin Liu, Xu Han, Xinrong Zhang, Chaoqun He, Weilin Zhao, Yankai Lin, Ning Ding, Zebin Ou, Guoyang Zeng, et al. Predicting Emergent Abilities with Infinite Resolution Evaluation. *arXiv* e-prints, pp. arXiv–2310, 2023c. Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang Zhang, and Heng Huang. Unbiased watermark for large language models. *arXiv preprint arXiv:2310.10669*, 2023d. Baihe Huang, Banghua Zhu, Hanlin Zhu, Jason D Lee, Jiantao Jiao, and Michael I Jordan. Towards Optimal Statistical Watermarking. *arXiv preprint arXiv:2312.07930*, 2023a. Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, and Yang Zhang. Composite Backdoor Attacks Against Large Language Models. *arXiv preprint arXiv:2310.07676*, 2023b. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. *arXiv preprint arXiv:2210.11610*, 2022a. Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022. Jing Huang, Atticus Geiger, Karel D'Oosterlinck, Zhengxuan Wu, and Christopher Potts. Rigorously Assessing Natural Language Explanations of Neurons. *arXiv preprint arXiv:2309.10312*, 2023c. Kerson Huang. *Statistical mechanics*. John Wiley & Sons, 2008. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. *arXiv preprint arXiv:2207.05608*, 2022b. Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, and Danqi Chen. Catastrophic jailbreak of opensource LLMs via exploiting generation. *arXiv preprint arXiv:2310.06987*, 2023d. Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, and Danqi Chen. Privacy Implications of RetrievalBased Language Models. *arXiv preprint arXiv:2305.14888*, 2023e. Evan Hubinger. Relaxed adversarial training for inner alignment. *AI Alignment Forum*, 2019. https: //www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner -alignment. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Risks from learned optimization in advanced machine learning systems. *arXiv preprint arXiv:1906.01820*, 2019. Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M Ziegler, Tim Maxwell, Newton Cheng, et al. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. *arXiv preprint arXiv:2401.05566*, 2024. Edward Hughes, Joel Z Leibo, Matthew Phillips, Karl Tuyls, Edgar Dueñez Guzman, Antonio García Castañeda, Iain Dunning, Tina Zhu, Kevin McKee, Raphael Koster, Heather Roff, and Thore Graepel. Inequity Aversion Improves Cooperation in Intertemporal Social Dilemmas. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information* Processing Systems 31, pp. 3326–3336. Curran Associates, Inc., 2018. URL http://papers.nips.cc/p aper/7593-inequity-aversion-improves-cooperation-in-intertemporal-social-dilemmas.pdf. Xiang Hui, Oren Reshef, and Luofeng Zhou. The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market. *Available at SSRN 4527336*, 2023. Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, and Vinodkumar Prabhakaran. Evaluation gaps in machine learning practice. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1859–1876, 2022. Marcus Hutter. Learning curve theory. *arXiv preprint arXiv:2102.04074*, 2021. Chip Huyen. Open challenges in LLM research, August 2023. https://huyenchip.com/2023/08/16/llm -research-open-challenges.html. Accessed on: January 8, 2023. Tim Hwang. Computational power and the social impact of artificial intelligence. *arXiv preprint* arXiv:1803.08971, 2018. IEEE, Institute of Electrical and Electronics Engineers. IEEE Standard for Enterprise Architecture Description (EAD) Version 2, 2018. https://standards.ieee.org/wp-content/uploads/import/documents/ other/ead_v2.pdf. Oana Ignat, Zhijing Jin, Artem Abzaliev, Laura Biester, Santiago Castro, Naihao Deng, Xinyi Gao, Aylin Gunal, Jacky He, Ashkan Kazemi, et al. A PhD Student's Perspective on Research in NLP in the Era of Very Large Language Models. *arXiv preprint arXiv:2305.12544*, 2023. Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting predictions from training data. *arXiv preprint arXiv:2202.00622*, 2022. Amnesty International. Israel and Occupied Palestinian Territories: Automated Apartheid: How facial recognition fragments, segregates and controls Palestinians in the OPT, 2023. https://www.amnesty.or g/en/documents/mde15/6701/2023/en/. Accessed on: 1 February, 2024. Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A Choquette-Choo, and Nicholas Carlini. Preventing verbatim memorization in language models gives a false sense of privacy. *arXiv preprint arXiv:2210.17546*, 2022. Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate. *arXiv preprint arXiv:1805.00899*, 2018. ISO, International Organization for Standardization. ISO/IEC 38505-1:2017 - Information technology – Governance of IT - Governance of data, 2017. Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. *Advances in neural information processing systems*, 31, 2018. Alon Jacovi and Yoav Goldberg. Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4198–4205, Online, jul 2020. Association for Computational Linguistics. doi: 10.18653/v 1/2020.acl-main.386. URL https://aclanthology.org/2020.acl-main.386. Meena Jagadeesan, Michael I Jordan, Jacob Steinhardt, and Nika Haghtalab. Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition. *arXiv preprint arXiv:2306.14670*, 2023. Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, and Tom Goldstein. Baseline Defenses for Adversarial Attacks Against Aligned Language Models. *arXiv preprint arXiv:2309.00614*, 2023a. Samyak Jain, Robert Kirk, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Edward Grefenstette, Tim Rocktäschel, and David Scott Krueger. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks, 2023b. Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, and Mor Naaman. Co-writing with opinionated language models affects users' views. In *Proceedings of the 2023 CHI conference on human factors* in computing systems, 2023. Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge Unlearning for Mitigating Privacy Risks in Language Models, 2022. Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Çaglar Gülçehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, and Nando de Freitas. Social Influence As Intrinsic Motivation for Multi-agent Deep Reinforcement Learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 3040–3049. Pmlr, 2019. URL http://proceedings.ml r.press/v97/jaques19a.html. Zaynah Javed, Daniel S Brown, Satvik Sharma, Jerry Zhu, Ashwin Balakrishna, Marek Petrik, Anca Dragan, and Ken Goldberg. Policy gradient bayesian robust optimization for imitation learning. In International Conference on Machine Learning, pp. 4785–4796. PMLR, 2021. Andrea Jelinek and Wojciech Wiewiórowski. Open letter on EDPB budget proposal for 2023, 2022. https: //edpb.europa.eu/our-work-tools/our-documents/letters/open-letter-edpb-budget-proposa l-2023_en. Accessed on: January 11, 2024. Yacine Jernite, Huu Nguyen, Stella Biderman, Anna Rogers, Maraim Masoud, Valentin Danchev, Samson Tan, Alexandra Sasha Luccioni, Nishant Subramani, Isaac Johnson, et al. Data governance in the age of large-scale data-driven language technology. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2206–2222, 2022. Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K Reddy, and Sunipa Dev. Beyond the Surface: A Global-Scale Analysis of Visual Stereotypes in Text-toImage Generation. *arXiv preprint arXiv:2401.06310*, 2024. Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. AI alignment: A comprehensive survey. *arXiv preprint arXiv:2310.19852*, 2023a. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023b. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. *arXiv preprint arXiv:2401.04088*, 2024a. Fengqing Jiang, Zhangchen Xu, Luyao Niu, Zhen Xiang, Bhaskar Ramasubramanian, Bo Li, and Radha Poovendran. ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs. arXiv preprint arXiv:2402.11753, 2024b. Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez, Max Kleiman-Weiner, Mrinmaya Sachan, et al. CLADDER: Assessing Causal Reasoning in Language Models, 2023. Michael Bradley Johanson, Edward Hughes, Finbarr Timbers, and Joel Z Leibo. Emergent bartering behaviour in multi-agent reinforcement learning. *arXiv preprint arXiv:2205.06760*, 2022. Rebecca L Johnson, Giada Pistilli, Natalia Menédez-González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene, and Donald Jay Bertulfo. The ghost in the machine has an american accent: value conflict in gpt-3. *arXiv preprint arXiv:2203.07785*, 2022. W Jeffrey Johnston and Stefano Fusi. Abstract representations emerge naturally in neural networks trained to perform multiple tasks. *Nature Communications*, 14(1):1040, 2023. Erik Jones and Jacob Steinhardt. Capturing failures of large language models via human cognitive biases. Advances in Neural Information Processing Systems, 35:11785–11799, 2022. Erik Jones, Anca Dragan, Aditi Raghunathan, and Jacob Steinhardt. Automatically Auditing Large Language Models via Discrete Optimization. *arXiv preprint arXiv:2303.04381*, 2023. Gurusha Juneja, Subhabrata Dutta, Soumen Chakrabarti, Sunny Manchanda, and Tanmoy Chakraborty. Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning. arXiv preprint arXiv:2310.18338, 2023. Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, and Naomi Saphra. Linear connectivity reveals generalization strategies. *arXiv preprint arXiv:2205.12411*, 2022. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, ..., and Jared Kaplan. Language Models (Mostly) Know What They Know, 2022. Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. Challenges and applications of large language models. *arXiv preprint arXiv:2307.10169*, 2023. Matthias Kaiser. The idea of a theory of values and the metaphor of value-landscapes. Humanities and Social Sciences Communications, 11(1):1–10, 2024. Adam Kalai and Ehud Kalai. Cooperation In Strategic Games Revisited. *The Quarterly Journal of Economics*, 128(2):917–66, 2013. URL https://econweb.ucsd.edu/~jwatson/PAPERS/SWET2012/Kalai-p aper.pdf. Adam Tauman Kalai and Santosh S Vempala. Calibrated language models must hallucinate. arXiv preprint arXiv:2311.14648, 2023. Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, and Abeba Birhane. The Surveillance AI Pipeline. *arXiv preprint arXiv:2309.15084*, 2023. Margot Kaminski. Regulating the Risks of AI. *Boston University Law Review*, 103:1347, 2023. Timotheus Kampik, Adnane Mansour, Olivier Boissier, Sabrina Kirrane, Julian Padget, Terry R Payne, Munindar P Singh, Valentina Tamma, and Antoine Zimmermann. Governance of autonomous agents on the web: challenges and opportunities. *ACM Transactions on Internet Technology*, 22(4):1–31, 2022. Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models. In *International Conference on Machine Learning*, pp. 10697–10707. Pmlr, 2022. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint* arXiv:2001.08361, 2020. Gal Kaplun, Nikhil Ghosh, Saurabh Garg, Boaz Barak, and Preetum Nakkiran. Deconstructing Distributions: A Pointwise Framework of Learning, 2022. Sayash Kapoor and Arvind Narayanan. Leakage and the reproducibility crisis in machine-learning-based science. *Patterns*, 4(9), 2023. Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, et al. On the Societal Impact of Open Foundation Models. *arXiv*, 2024. Loukas Karabarbounis and Brent Neiman. The global decline of the labor share. The Quarterly journal of economics, 129(1):61–103, 2014. Rabimba Karanjai. Targeted phishing campaigns using large scale language models. *arXiv preprint* arXiv:2301.00665, 2022. Atoosa Kasirzadeh. Chatgpt, large language technologies, and the bumpy road of benefiting humanity. *arXiv* preprint arXiv:2304.11163, 2023. Atoosa Kasirzadeh. Two Types of AI Existential Risk: Decisive and Accumulative. arXiv preprint arXiv:2401.07836, 2024. Atoosa Kasirzadeh and Iason Gabriel. In conversation with Artificial Intelligence: aligning language models with human values. *Philosophy & Technology*, 36(2):1–24, 2023. Atoosa Kasirzadeh and James Stewart. Sociotechnical AI safety: a multidisciplinary perspective, 2024. Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. Alignment of Language Agents, 2021. Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott, and Tom Everitt. Discovering Agents. *arXiv:2208.08345*, August 2022. Sarah Keren, Avigdor Gal, and Erez Karpas. Goal recognition design. In *Proceedings of the International* Conference on Automated Planning and Scheduling, volume 24, pp. 154–162, 2014. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. Ctrl: A conditional transformer language model for controllable generation. *arXiv preprint arXiv:1909.05858*, 2019. Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R Bowman, Tim Rocktäschel, and Ethan Perez. Debating with More Persuasive LLMs Leads to More Truthful Answers. *arXiv preprint arXiv:2402.06782*, 2024. Sal Khan. Harnessing GPT-4 so that all students benefit. A nonprofit approach for equal access. *Khan* Academy, 2023. Heidy Khlaaf. Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems. *Trail of Bits*, 2023. Celeste Kidd and Abeba Birhane. How AI can distort human beliefs. *Science*, 380(6651):1222–1223, 2023. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In *International Conference on Machine Learning*, 2017. URL https://api.semantic scholar.org/CorpusID:51737170. Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, and Maarten Sap. FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions. *arXiv preprint* arXiv:2310.15421, 2023. Elselijn Kingma and Natalie Banner. Liberating Practice from Philosophy - A Critical Examination of Values-Based Practice and Its Underpinnings. In Michael Loughlin (ed.), *Debates in Values-Based Practice:* Arguments For and Against, pp. 37–49. Cambridge University Press, 2014. John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, and Tom Goldstein. On the Reliability of Watermarks for Large Language Models. *arXiv preprint arXiv:2306.04634*, 2023. Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, and Scott A Hale. The empty signifier problem: Towards clearer paradigms for operationalising" alignment" in large language models. *arXiv preprint* arXiv:2310.02457, 2023a. Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, and Scott A Hale. Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback. arXiv preprint arXiv:2303.05453, 2023b. Robert Kirk and David Scott Krueger. Causal confusion as an argument against the scaling hypothesis. AI Alignment Forum, 2022. https://www.alignmentforum.org/posts/FZL4ftXvcuKmmobmj/causal-con fusion-as-an-argument-against-the-scaling. Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the Effects of RLHF on LLM Generalisation and Diversity. arXiv preprint arXiv:2310.06452, 2023c. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the national academy of sciences*, 114(13):3521–3526, 2017. Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, and Luke Metz. General-purpose in-context learning by meta-learning transformers. *arXiv preprint arXiv:2212.04458*, 2022. Israel M. Kirzner. *Market Theory and the Price System*. Liberty Fund, 1963. ISBN 9780865977600. GoogleBooks-ID: Pj96cgAACAAJ. Timo Klein. Autonomous algorithmic collusion: Q-learning under sequential pricing. *The RAND Journal* of Economics, 52(3):538–558, 2021. doi: https://doi.org/10.1111/1756-2171.12383. URL https: //onlinelibrary.wiley.com/doi/abs/10.1111/1756-2171.12383. Jon Kleinberg and Manish Raghavan. Algorithmic monoculture and social welfare. *Proceedings of the* National Academy of Sciences, 118(22):e2018340118, 2021. Katya Klinova and Anton Korinek. AI and shared prosperity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 645–651, 2021. Kate Knibbs. Why This Award-Winning Piece of AI Art Can't Be Copyrighted. *Wired*, 2023. https: //www.wired.com/story/ai-art-copyright-matthew-allen/. Accessed on: January 30, 2024. Will Knight. OpenAI's CEO Says the Age of Giant AI Models Is Already Over. *Wired*, 2023. https:// www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/. Accessed on: 1 February, 2024. Jeffrey Knockel, Christopher Parsons, Lotus Ruan, Ruohan Xiong, Jedidiah Crandall, and Ron Deibert. We chat, they watch: How international users unwittingly build up WeChat's Chinese censorship apparatus. Citizen Lab Research Report 127, University of Toronto, 2020. Sangamesh Kodge, Gobinda Saha, and Kaushik Roy. Deep Unlearning: Fast and Efficient Training-free Approach to Controlled Forgetting, 2023. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. *Advances in neural information processing systems*, 35:22199–22213, 2022. Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, and Heiko Hoffmann. Universal litmus patterns: Revealing backdoor attacks in cnns. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 301–310, 2020. Noam Kolt. Governing ai agents. *Available at SSRN*, 2024. Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. Benchmarking Cognitive Biases in Large Language Models as Evaluators. *arXiv preprint arXiv:2309.17012*, 2023. Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez. Pretraining Language Models with Human Preferences, 2023. Anton Korinek. Scenario Planning for an A(G)I Future. *IMF Finance & Development Magazine*, 60(4): 30–33, December 2023. URL https://www.imf.org/en/Publications/fandd/issues/2023/12/Scena rio-Planning-for-an-AGI-future-Anton-korinek. Anton Korinek and Megan Juelfs. Preparing for the (Non-Existent?) Future of Work. In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young, and Baobao Zhang (eds.), *The Oxford Handbook of AI Governance*. Oxford Academic, 2023. doi: 10.1093/oxfordhb/9 780197579329.013.44. URL https://doi.org/10.1093/oxfordhb/9780197579329.013.44. Anton Korinek and Joseph E Stiglitz. Artificial intelligence and its implications for income distribution and unemployment. In *The Economics of Artificial Intelligence: An agenda*, pp. 349–390. University of Chicago Press, 2019. Anton Korinek and Joseph E Stiglitz. Steering technological progress, 2020. Anton Korinek and Jai Vipra. Concentrating Intelligence: Scaling Laws and Market Structure in Generative AI. *Prepared for Economic Policy*, 39, 2024. Anton Korinek, Martin Schindler, and Joseph Stiglitz. Technological Progress, Artificial Intelligence, and Inclusive Growth. Technical report, International Monetary Fund, 2022. Suhas Kotha, Jacob Mitchell Springer, and Aditi Raghunathan. Understanding Catastrophic Forgetting in Language Models via Implicit Inference, 2023. Victoria Krakovna, Laurent Orseau, Richard Ngo, Miljan Martic, and Shane Legg. Avoiding side effects by considering future tasks. *Advances in Neural Information Processing Systems*, 33:19064–19074, 2020. Dmitrii Krasheninnikov, Egor Krasheninnikov, and David Krueger. Assistance with large language models. In *NeurIPS ML Safety Workshop*, 2022. URL https://openreview.net/forum?id=OE9V81spp6B. Dmitrii Krasheninnikov, Egor Krasheninnikov, Bruno Mlodozeniec, and David Krueger. Meta-(out-ofcontext) learning in neural networks. *arXiv preprint arXiv:2310.15047*, 2023. Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, et al. Quality at a glance: An audit of web-crawled multilingual datasets. *Transactions of the Association for Computational Linguistics*, 10: 50–72, 2022. Maya Krishnan. Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. *Philosophy & Technology*, 33(3):487–502, sep 2020. ISSN 2210-5441. doi: 10.1007/s13347-019 -00372-9. URL https://doi.org/10.1007/s13347-019-00372-9. David Krueger. AI alignment and generalization in deep learning, 2023. David Krueger, Jan Leike, Owain Evans, and John Salvatier. Active reinforcement learning: Observing rewards at a cost. *arXiv preprint arXiv:2011.06709*, 2020. David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pp. 5815–5826. Pmlr, 2021. Yilun Kuang and Yash Bharti. Scale-invariant-Fine-Tuning (SiFT) for Improved Generalization in Classification, 2021. Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Clam: Selective clarification for ambiguous questions with large language models. *arXiv preprint arXiv:2212.07769*, 2022. Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. *arXiv preprint arXiv:2302.09664*, 2023. Nupur Kumari, Mayank Singh, Abhishek Sinha, Harshitha Machiraju, Balaji Krishnamurthy, and Vineeth N Balasubramanian. Harnessing the vulnerability of latent layers in adversarially trained models. In *Proceedings of the 28th International Joint Conference on Artificial Intelligence*, pp. 2779–2785, 2019. Sandipan Kundu, Yuntao Bai, Saurav Kadavath, Amanda Askell, Andrew Callahan, Anna Chen, Anna Goldie, Avital Balwit, Azalia Mirhoseini, Brayden McLean, et al. Specific versus General Principles for Constitutional AI. *arXiv preprint arXiv:2310.13798*, 2023. Cordula Kupfer, Rita Prassl, Jürgen Fleiß, Christine Malin, Stefan Thalmann, and Bettina Kubicek. Check the box! How to deal with automation bias in AI-based personnel selection. *Frontiers in Psychology*, 14: 1118723, 2023. Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Anthony G Cohn, Nigel Shadbolt, and Michael Wooldridge. Language Models as a Service: Overview of a new paradigm and its challenges. *arXiv preprint arXiv:2309.16573*, 2023. Cassidy Laidlaw and Anca Dragan. The boltzmann policy distribution: Accounting for systematic suboptimality in human models. *arXiv preprint arXiv:2204.10759*, 2022. Nathan Lambert. Big Tech's LLM evals are just marketing, 2023. URL https://www.interconnects.ai /p/evals-are-marketing. Nathan Lambert, Thomas Krendl Gilbert, and Tom Zick. The history and risks of reinforcement learning and human feedback. *arXiv e-prints*, pp. arXiv–2310, 2023. Andrew Kyle Lampinen, Stephanie CY Chan, Ishita Dasgupta, Andrew J Nam, and Jane X Wang. Passive learning of active causal strategies in agents and language models. *arXiv preprint arXiv:2305.16183*, 2023. Lauro Langosco, Jack Koch, Lee D Sharkey, Jacob Pfau, and David Krueger. Goal misgeneralization in deep reinforcement learning. In *International Conference on Machine Learning*, pp. 12004–12019. Pmlr, 2022. Lauro Langosco, Neel Alex, William Baker, David Quarel, Herbie Bradley, and David Krueger. Detecting Backdoors with Meta-Models. In *NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good,* the Bad, and the Ugly, 2024. URL https://openreview.net/forum?id=cmJiEqniEc. Tamera Lanham. Externalized reasoning oversight: a research direction for language model alignment. AI Alignment Forum, 2022. https://www.alignmentforum.org/posts/FRRb6Gqem8k69ocbi/externalize d-reasoning-oversight-a-research-direction-for. Accessed on: 3 January, 2024. Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, ..., and Ethan Perez. Measuring Faithfulness in Chain-of-Thought Reasoning, jul 2023. URL http://arxiv.org/abs/2307.13702. arXiv:2307.13702 [cs]. Raz Lapid, Ron Langberg, and Moshe Sipper. Open Sesame! Universal Black Box Jailbreaking of Large Language Models. *arXiv preprint arXiv:2309.01446*, 2023. Seth Lazar and Alondra Nelson. AI safety on whose terms?, 2023. Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K Kummerfeld, and Rada Mihalcea. A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. *arXiv* preprint arXiv:2401.01967, 2024. Ivan Lee, Nan Jiang, and Taylor Berg-Kirkpatrick. Exploring the relationship between model architecture and in-context learning ability. In *The Twelfth International Conference on Learning Representations*, 2023a. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating Training Data Makes Language Models Better. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pp. 8424–8445, Dublin, Ireland, may 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.577. URL https://ac lanthology.org/2022.acl-long.577. Katherine Lee, A. Feder Cooper, James Grimmelmann, and Daphne Ippolito. AI and Law: The Next Generation. *SSRN*, July 2023b. Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. Artificial life, 2020. Jan Leike. What could a solution to the alignment problem look like? *Musings on the Alignment Problem*, 2022a. https://aligned.substack.com/p/alignment-solution. Accessed on: 3 January, 2024. Jan Leike. Distinguishing three alignment taxes, 2022b. URL https://aligned.substack.com/p/three -alignment-taxes. Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. AI safety gridworlds. *arXiv preprint arXiv:1711.09883*, 2017. Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. *arXiv preprint arXiv:1811.07871*, 2018. Jan Leike, John Schulman, and Jeffrey Wu. Our approach to alignment research, 2022. https://openai.c om/blog/our-approach-to-alignment-research. Simon Lermen, Charlie Rogers-Smith, and Jeffrey Ladish. LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B, 2023. Lawrence Lessig. *Code and Other Laws of Cyberspace, Version 2.0*. Basic Books, 2006. URL https: //lessig.org/product/codev2. Nancy G Leveson. *Engineering a safer world: Systems thinking applied to safety*. The MIT Press, 2016. Zachary S Levine, Scott A Hale, and Luciano Floridi. The October 2014 United States Treasury bond flash crash and the contributory effect of mini flash crashes. *PloS one*, 12(11):e0186688, 2017. BA Levinstein and Daniel A Herrmann. Still no lie detector for language models: Probing empirical and conceptual roadblocks. *arXiv preprint arXiv:2307.00175*, 2023. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V. Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy GurAri, and Vedant Misra. Solving Quantitative Reasoning Problems with Language Models. In *NeurIPS*, 2022. Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, Shouling Ji, Yuan Yao, and Ting Wang. An Embarrassingly Simple Backdoor Attack on Self-supervised Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4367–4378, October 2023a. Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang, Wenxin Hou, Jianxun Lian, and Xing Xie. Emotionprompt: Leveraging psychology for large language models enhancement via emotional stimulus. *arXiv* preprint arXiv:2307.11760, 2023b. Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, and Yi Niu. Technical report for iccv 2021 challenge sslad-track3b: Transformers are better continual learners. *arXiv preprint arXiv:2201.04924*, 2022a. Guanghao Li, Li Shen, Yan Sun, Yue Hu, Han Hu, and Dacheng Tao. Subspace based Federated Unlearning, 2023c. Huao Li, Yu Quan Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Michael Lewis, and Katia Sycara. Theory of mind for multi-agent collaboration via large language models. arXiv preprint arXiv:2310.10701, 2023d. Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model, oct 2023e. URL http://arxiv.org/ abs/2306.03341. arXiv:2306.03341 [cs]. Maximilian Li, Xander Davies, and Max Nadeau. Circuit breaking: Removing model behaviors with targeted ablation. *arXiv preprint arXiv:2309.05973*, 2023f. Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatti, Justin D Li, AnnKathrin Dombrowski, Shashwat Goel, Long Phan, et al. The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning. *arXiv preprint arXiv:2403.03218*, 2024. Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Neural attention distillation: Erasing backdoor triggers from deep neural networks. *arXiv preprint arXiv:2101.05930*, 2021. Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? *arXiv preprint arXiv:1511.07543*, 2015. Yucheng Li. An open source data contamination report for llama series models. *arXiv preprint* arXiv:2310.17589, 2023. Zhong Li, Jiequn Han, E Weinan, and Qianxiao Li. Approximation and optimization theory for linear continuous-time recurrent neural networks. *Journal of Machine Learning Research*, 23(42):1–85, 2022b. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Alexander Cosgrove, Christopher D Manning, Christopher Re, Diana Acosta-Navas, Drew Arad Hudson, ..., and Yuta Koreeda. Holistic Evaluation of Language Models. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=iO4LZibEqW. Featured Certification, Expert Certification. Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are We Learning Yet? A Meta Review of Evaluation Failures Across Machine Learning. In *Thirty-fifth Conference on Neural Information* Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/f orum?id=mPducS1MsEK. Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, and Daniela Rus. Lost in pruning: The effects of pruning neural networks beyond test accuracy. *Proceedings of Machine Learning and Systems*, 3:93–138, 2021. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's Verify Step by Step. *arXiv preprint arXiv:2305.20050*, 2023. Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, and Yejin Choi. The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning. *arXiv preprint arXiv:2312.01552*, 2023a. Licong Lin, Yu Bai, and Song Mei. Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining. *arXiv preprint arXiv:2310.08566*, 2023b. Juho Lindman, Jukka Makinen, and Eero Kasanen. Big Tech's power, political corporate social responsibility and regulation. *Journal of Information Technology*, pp. 02683962221113596, 2023. David Lindner and Mennatallah El-Assady. Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning. arXiv preprint arXiv:2206.13316, 2022. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a. Ryan Liu, Theodore R Sumers, Ishita Dasgupta, and Thomas L Griffiths. How do large language models navigate conflicts between honesty and helpfulness? *arXiv preprint arXiv:2402.07282*, 2024a. Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R Varshney, et al. Rethinking Machine Unlearning for Large Language Models. arXiv preprint arXiv:2402.08787, 2024b. Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. Prompt Injection attack against LLM-integrated Applications. *arXiv preprint arXiv:2306.05499*, 2023b. Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, and Yang Liu. Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study, 2023c. Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. Abs: Scanning neural networks for back-doors by artificial brain stimulation. In *Proceedings of the 2019 ACM SIGSAC* Conference on Computer and Communications Security, pp. 1265–1282, 2019. Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi Wang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi Sun, Omkar Pangarkar, et al. LLM360: Towards Fully Transparent Open-Source LLMs. arXiv preprint arXiv:2312.06550, 2023d. Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning. Advances in neural information processing systems, 35:27591–27609, 2022. Ekdeep Singh Lubana, Eric J Bigelow, Robert P Dick, David Krueger, and Hidenori Tanaka. Mechanistic mode connectivity. In *International Conference on Machine Learning*, pp. 22965–23004. Pmlr, 2023. Sasha Luccioni. The Call to Halt 'Dangerous' AI Research Ignores a Simple Truth. *Wired*, 2023. https: //www.wired.com/story/the-call-to-halt-dangerous-ai-research-ignores-a-simple-truth/. Scott M. Lundberg and Su-In Lee. A Unified Approach to Interpreting Model Predictions. In *Neural* Information Processing Systems, 2017. URL https://api.semanticscholar.org/CorpusID:21889700. Andrei Lupu and Doina Precup. Gifting in Multi-Agent Reinforcement Learning. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pp. 789–797, 2020. Aengus Lynch, Phillip Guo, Aidan Ewart, Stephen Casper, and Dylan Hadfield-Menell. Eight Methods to Evaluate Robust Unlearning in LLMs, 2024. Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Towards Faithful Model Explanation in NLP: A Survey. *ArXiv*, 2022. doi: 10.48550/arxiv.2209.11326. URL https://arxiv.org/abs/2209.11326. Publisher: arXiv Version Number: 2. Matthijs Maas and José Jaime Villalobos. International AI Institutions: a literature review of models, examples, and proposals. Technical Report AI Foundations Report \#1, Legal Priorities Project, 2023. URL https://www.legalpriorities.org/research/international-ai-institutions. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language Models of Code are Few-Shot Commonsense Learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 1384–1403. Association for Computational Linguistics, 2022. Angus Maddison. The World Economy: Historical Statistics. *OECD Development Centre*, 2004. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Inbal Magar and Roy Schwartz. Data Contamination: From Memorization to Exploitation. *ArXiv*, abs/2203.08242, 2022. URL https://api.semanticscholar.org/CorpusID:247475929. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. Teaching Small Language Models to Reason. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 1773–1781, Toronto, Canada, jul 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-short.151. URL https://aclanthology.org/2023.acl-short.151. Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, and Weiwei Pan. Promises and pitfalls of black-box concept learning models. *arXiv preprint arXiv:2106.13314*, 2021. Shehryar Malik, Usman Anwar, Alireza Aghasi, and Ali Ahmed. Inverse constrained reinforcement learning. In *International conference on machine learning*, pp. 7390–7399. Pmlr, 2021. David Manheim and Scott Garrabrant. Categorizing variants of Goodhart's Law. arXiv preprint arXiv:1803.04585, 2018. Jyoti Mann. OpenAI recruiters are trying to lure Google AI employees with $10 million pay packets, report says. *Business Insider*, 2023. https://www.businessinsider.com/openai-recruiters-luring-googl e-ai-employees-10-million-compensation-package-2023-11. Gary E Marchant. *The growing gap between emerging technologies and the law*. Springer, 2011. Gary Marcus. When looked at carefully, OpenAI's new study on GPT-4 and bioweapons is deeply worrisome, 2024. https://garymarcus.substack.com/p/when-looked-at-carefully-openais. Accessed on: 9 March, 2024. Marc Marone and Benjamin Van Durme. Data portraits: Recording foundation model training data. arXiv preprint arXiv:2303.03919, 2023. Andrew McAfee, Daniel Rock, and Erik Brynjolfsson. How to Capitalize on Generative AI. Harvard Business Review, Nov-Dec 2023. URL https://hbr.org/2023/11/how-to-capitalize-on-generative-ai. R Thomas McCoy, Junghyun Min, and Tal Linzen. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. *arXiv preprint arXiv:1911.02969*, 2019. R Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, and Thomas L Griffiths. Embers of autoregression: Understanding large language models through the problem they are trained to solve. arXiv preprint arXiv:2309.13638, 2023. Callum McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, and Neel Nanda. Copy Suppression: Comprehensively Understanding an Attention Head. *arXiv preprint arXiv:2310.04625*, 2023. Thomas McGrath, Matthew Rahtz, Janos Kramar, Vladimir Mikulik, and Shane Legg. The hydra effect: Emergent self-repair in language model computations. *arXiv preprint arXiv:2307.15771*, 2023. Timothy R McIntosh, Teo Susnjak, Tong Liu, Paul Watters, and Malka N Halgamuge. Inadequacies of large language model benchmarks in the era of generative artificial intelligence. *arXiv preprint arXiv:2402.09880*, 2024. Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duèñez Guzmán, Edward Hughes, and Joel Z. Leibo. Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning. In *Proceedings* of the 19th International Conference on Autonomous Agents and MultiAgent Systems, Aamas '20, pp. 869–877, Richland, SC, 2020. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450375184. Ian R McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, et al. Inverse Scaling: When Bigger Isn't Better. arXiv preprint arXiv:2306.09479, 2023. Angelina McMillan-Major, Emily M Bender, and Batya Friedman. Data Statements: From Technical Concept to Community Practice. *ACM Journal on Responsible Computing*, 2023. Sam Meacham. A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous, 2023. https://hir.harvard.edu/a-race-to-extinction-how-great-power -competition-is-making-artificial-intelligence-existentially-dangerous/. Accessed on: 1 February, 2024. Morgan Meaker. Ukraine's War Brings Autonomous Weapons to the Front Lines. *Wired*, 2023. https: //www.wired.co.uk/article/ukraine-war-autonomous-weapons-frontlines. Accessed on: 1 February, 2024. Meaning Alignment Institute. Democratic Finetuning, 2023. URL https://meaningalignment.substack. com/p/the-first-moral-graph. Malek Mechergui and Sarath Sreedharan. Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 38, pp. 10110–10118, 2024. Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, and Rahul Gupta. Flirt: Feedback loop in-context red teaming. arXiv preprint arXiv:2308.04265, 2023. Alexander Meinke. LLMs can Spontaneously start Jailbreaking their Scoring Function, 2023. URL https: //alignmentjam.com/project/jailbreaking-the-overseer. Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. *arXiv preprint arXiv:2210.07229*, 2022. Sara Merken. New York lawyers sanctioned for using fake ChatGPT cases in legal brief, 2023. URL https: //www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-bri ef-2023-06-22/. William Merrill and Ashish Sabharwal. Transformers implement first-order logic with majority quantifiers. arXiv preprint arXiv:2210.02671, 2022. William Merrill and Ashish Sabharwal. The Expresssive Power of Transformers with Chain of Thought. arXiv preprint arXiv:2310.07923, 2023a. William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545, 2023b. William Merrill and Ashish Sabharwal. A logic for expressing log-precision transformers. Advances in Neural Information Processing Systems, 36, 2024. William Merrill, Nikolaos Tsilivis, and Aman Shukla. A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks. *arXiv preprint arXiv:2303.11873*, 2023. Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. Circuit component reuse across tasks in transformer language models. *arXiv preprint arXiv:2310.08744*, 2023a. Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. Language Models Implement Simple Word2Vec-style Vector Arithmetic. *arXiv preprint arXiv:2305.16130*, 2023b. Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. *arXiv preprint arXiv:2302.07842*, 2023. Eric J Michaud, Ziming Liu, Uzay Girit, and Max Tegmark. The quantization model of neural scaling. arXiv preprint arXiv:2303.13506, 2023. Tomáš Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In *Proceedings of the 2013 conference of the north american chapter of the association for* computational linguistics: Human language technologies, pp. 746–751, 2013. Manfred Milinski, Dirk Semmann, and Hans-Jürgen Krambeck. Reputation helps solve the 'tragedy of the commons'. *Nature*, 415(6870):424–426, jan 2002. ISSN 1476-4687. doi: 10.1038/415424a. URL https://www.nature.com/articles/415424a. Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. *Artificial intelligence*, 267:1–38, 2019. Raphaël Millière. The Alignment Problem in Context. *arXiv preprint arXiv:2311.02147*, 2023. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837, 2022. Fraser Mince, Dzung Dinh, Jonas Kgomo, Neil Thompson, and Sara Hooker. The grand illusion: The myth of software portability and implications for ml progress. *Advances in Neural Information Processing Systems*, 36, 2024. Ministry of Foreign Affairs, People's Republic of China. Global AI Governance Initiative, 2023. https: //www.mfa.gov.cn/eng/wjdt_665385/2649_665393/202310/t20231020_11164834.html. Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin Choi. Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. *arXiv preprint arXiv:2310.17884*, 2023. Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast model editing at scale. *arXiv preprint arXiv:2110.11309*, 2021. Margaret Mitchell, Alexandra Sasha Luccioni, Nathan Lambert, Marissa Gerchick, Angelina McMillanMajor, Ezinwanne Ozoani, Nazneen Rajani, Tristan Thrush, Yacine Jernite, and Douwe Kiela. Measuring data. *arXiv preprint arXiv:2212.05129*, 2022. Melanie Mitchell. How do we know how smart AI systems are?, 2023. Melanie Mitchell, Alessandro B. Palmarini, and Arseny Moskvichev. Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks. *CoRR*, abs/2311.09247, 2023. Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, et al. Orca 2: Teaching Small Language Models How to Reason. *arXiv preprint arXiv:2311.11045*, 2023. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. *IEEE transactions on pattern analysis and* machine intelligence, 41(8):1979–1993, 2018. Moran Mizrahi, Guy Kaplan, Dan Malkin, Rotem Dror, Dafna Shahaf, and Gabriel Stanovsky. State of What Art? A Call for Multi-Prompt LLM Evaluation. *arXiv preprint arXiv:2401.00595*, 2023. Ethan R. Mollick and Lilach Mollick. Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts. *The Wharton School Research Paper*, Mar 2023. doi: 10.2139/ssrn.4 391243. URL https://ssrn.com/abstract=4391243. Steven Gonzalez Monserrate. The cloud is material: On the environmental impacts of computation and data storage. *MIT Schwarzman College of Computing*, 2022. Catherine Moon and Vincent Conitzer. Maximal Cooperation in Repeated Games on Social Networks. In *Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI)*, Buenos Aires, Argentina, 2015. URL https://www.ijcai.org/Proceedings/15/Papers/037.pdf. Michael Moret, Irene Pachon Angona, Leandro Cotos, Shen Yan, Kenneth Atz, Cyrill Brunner, Martin Baumgartner, Francesca Grisoni, and Gisbert Schneider. Leveraging molecular structure and bioactivity with chemical language models for de novo drug design. *Nature Communications*, 2023. Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. Levels of AGI: Operationalizing Progress on the Path to AGI. arXiv preprint arXiv:2311.02462, 2023. Sumeet Ramesh Motwani, Mikhail Baranchuk, Martin Strohmeier, Vijay Bolina, Philip HS Torr, Lewis Hammond, and Christian Schroeder de Witt. Secret Collusion Among Generative AI Agents. *arXiv:2402.07510*, 2024. Christopher A. Mouton, Caleb Lucas, and Ella Guest. The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study, 2024. https://www.rand.org/pubs/research_reports/RRA29 77-2.html. Maximilian Mozes, Xuanli He, Bennett Kleinberg, and Lewis D Griffin. Use of LLMs for illicit purposes: Threats, prevention measures, and vulnerabilities. *arXiv preprint arXiv:2308.12833*, 2023. Fangwen Mu, Lin Shi, Song Wang, Zhuohao Yu, Binquan Zhang, Chenxue Wang, Shichao Liu, and Qing Wang. ClarifyGPT: Empowering LLM-based Code Generation with Intention Clarification. *arXiv preprint* arXiv:2310.10996, 2023. Gabriel Mukobi, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton. Welfare Diplomacy: Benchmarking Language Model Cooperation. *arXiv:2310.08901*, oct 2023. doi: 10.48550/arx iv.2310.08901. Micah Musser. A cost analysis of generative language models and influence operations. *arXiv preprint* arXiv:2308.03740, 2023. Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A Vandermeulen, Katherine Hermann, Andrew Lampinen, and Simon Kornblith. Improving neural network representations using human similarity judgments. *Advances in Neural Information Processing Systems*, 36, 2023. Silen Naihin, David Atkinson, Marc Green, Merwane Hamadi, Craig Swift, Douglas Schonholtz, Adam Tauman Kalai, and David Bau. Testing Language Model Agents Safely in the Wild. *arXiv preprint* arXiv:2311.10538, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. *arXiv preprint arXiv:2112.09332*, 2021. Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability. In *The Eleventh International Conference on Learning Representations*, sep 2022. URL https://openreview.net/forum?id=9XFSbDPmdW. Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A Feder Cooper, Daphne Ippolito, Christopher A Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. Scalable Extraction of Training Data from (Production) Language Models. *arXiv preprint arXiv:2311.17035*, 2023. National Cyber Security Center. Guidelines for Secure AI System Development, 2019. Allen Newell and Herbert A. Simon. Computer science as empirical inquiry: symbols and search. Commun. ACM, 19(3):113–126, 1976. URL https://doi.org/10.1145/360018.360022. Steve Newman. Cybersecurity and ai: The evolving security landscape, 2024. URL https://www.safe.ai/ blog/cybersecurity-and-ai-the-evolving-security-landscape. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. *Advances in neural information processing systems*, 30, 2017a. Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A pac-bayesian approach to spectrallynormalized margin bounds for neural networks. *arXiv preprint arXiv:1707.09564*, 2017b. Helen Ngo, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, and Nicholas Frosst. Mitigating harm in language models with conditional-likelihood filtration, 2021. URL https: //arxiv.org/abs/2108.07790. Richard Ngo, Lawrence Chan, and Sören Mindermann. The alignment problem from a deep learning perspective, 2023. Thanh Tam Nguyen, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. A survey of machine unlearning. *arXiv preprint arXiv:2209.02299*, 2022. Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth. In *International Conference on Learning Representations*, 2020. Michelle Nichols. UN Security Council meets for first time on AI risks, July 2023. https://www.reuter s.com/technology/un-security-council-meets-first-time-ai-risks-2023-07-18/. Accessed on: January 8, 2023. Noam Nisan (ed.). *Algorithmic game theory*. Cambridge University Press, Cambridge ; New York, 2007. ISBN 9780521872829. OCLC: ocn122526907. Helen Nissenbaum. *Privacy in context: Technology, policy, and the integrity of social life*. Stanford University Press, 2020. Malvina Nissim, Rik van Noord, and Rob van der Goot. Fair is better than sensational: Man is to doctor as woman is to doctor. *Computational Linguistics*, 46(2):487–497, 2020. NIST, National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0), 2023. Shakked Noy and Whitney Zhang. Experimental evidence on the productivity effects of generative artificial intelligence. *Science*, 381(6654):187–192, 2023. doi: 10.1126/science.adh2586. Nanjala Nyabola. ChatGPT and the sweatshops powering the digital age. *Al Jazeera*, 2023. https: //www.aljazeera.com/opinions/2023/1/23/sweatshops-are-making-our-digital-age-work. Accessed on: January 8, 2024. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. *arXiv preprint arXiv:2112.00114*, 2021. Johan Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Foerster, Gintare Karolina Dziugaite, Doina Precup, and Pablo Samuel Castro. Mixtures of Experts Unlock Parameter Scaling for Deep RL. *arXiv preprint arXiv:2402.08609*, 2024. Derek O'Callaghan, Derek Greene, Maura Conway, Joe Carthy, and Pádraig Cunningham. Down the (white) rabbit hole: The extreme right and online recommender systems. *Social Science Computer Review*, 33(4): 459–478, 2015. OECD. Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449, 2019. Caspar Oesterheld, Johannes Treutlein, Roger Grosse, Vincent Conitzer, and Jakob Foerster. Similaritybased cooperative equilibrium, nov 2023. URL http://arxiv.org/abs/2211.14468. arXiv:2211.14468 [cs]. Governments of Participating Countries. The Bletchley Declaration by Countries Attending the AI Safety Summit. AI Safety Summit 2023, November 2023. URL https://www.gov.uk/government/publicatio ns/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-count ries-attending-the-ai-safety-summit-1-2-november-2023. 1-2 November 2023. UK PM Office. Prime Minister calls for global responsibility to take AI risks seriously and seize its opportunities, October 2023. https://www.gov.uk/government/news/prime-minister-calls-for-glo bal-responsibility-to-take-ai-risks-seriously-and-seize-its-opportunities. Accessed on: January 8, 2023. Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, and Inioluwa Deborah Raji. Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling. arXiv preprint arXiv:2402.17861, 2024. Maya Okawa, Ekdeep Singh Lubana, Robert P Dick, and Hidenori Tanaka. Compositional abilities emerge multiplicatively: Exploring diffusion models on a synthetic task. *arXiv preprint arXiv:2310.09336*, 2023. Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. *Distill*, 5(3):e00024–001, 2020. Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, ..., and Chris Olah. In-context Learning and Induction Heads. *Transformer Circuits Thread*, 2022. https://transformercircuits.pub/2022/in-context-learning-and-induction-heads/index.html. Yasumasa Onoe, Michael JQ Zhang, Eunsol Choi, and Greg Durrett. Entity cloze by date: What LMs know about unseen entities. *arXiv preprint arXiv:2205.02832*, 2022. Yasumasa Onoe, Michael JQ Zhang, Shankar Padmanabhan, Greg Durrett, and Eunsol Choi. Can lms learn new entities from descriptions? Challenges in propagating injected knowledge. *arXiv preprint* arXiv:2305.01651, 2023. OpenAI. OpenAI's Approach to Frontier Risk, 2023a. https://openai.com/global-affairs/our-appro ach-to-frontier-risk. OpenAI. GPT-4 technical report. *arXiv*, pp. 2303–08774, 2023b. OpenAI. GPT-4 System Card. Technical report, OpenAI, 2023c. URL ^1^. OpenAI. GPT-4V(ision) System Card, 2023d. https://cdn.openai.com/papers/GPTV%5FSystem%5FCard .pdf. Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B Hashimoto. Proving test set contamination in black box language models. *arXiv preprint arXiv:2310.17623*, 2023. World Health Organization. *Ethics and Governance of Artificial Intelligence for Health: WHO Guidance*. World Health Organization, Geneva, 2021. Laurent Orseau, Simon McGregor McGill, and Shane Legg. Agents and devices: A relative definition of agency. *arXiv preprint arXiv:1805.12387*, 2018. Anton Osika. GPT Engineer, 2023. URL https://github.com/AntonOsika/gpt-engineer. Initial release: June 10, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in Neural Information Processing Systems*, 35:27730–27744, 2022. Lorenzo Pacchiardi, Alex J Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y Pan, Yarin Gal, Owain Evans, and Jan Brauner. How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions. *arXiv preprint arXiv:2309.15840*, 2023. Staff PAI. Partnership on AI, 2017. https://partnershiponai.org/. Accessed on: January 8, 2024. Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. *arXiv preprint arXiv:2201.03544*, 2022. Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark. *Icml*, 2023a. Alexander Pan, Erik Jones, Meena Jagadeesan, and Jacob Steinhardt. Feedback Loops Drive In-Context Reward Hacking in LLMs. *arXiv*, 2024. Xudong Pan, Mi Zhang, Shouling Ji, and Min Yang. Privacy risks of general-purpose language models. In 2020 IEEE Symposium on Security and Privacy (SP), pp. 1314–1331. Ieee, 2020. Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, and William Yang Wang. On the Risk of Misinformation Pollution with Large Language Models. *arXiv preprint arXiv:2305.13661*, 2023b. Rahul Pandey, Hemant Purohit, Carlos Castillo, and Valerie L Shalin. Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning. International Journal of Human-Computer Studies, 160:102772, 2022. Lu Pang, Tao Sun, Haibin Ling, and Chao Chen. Backdoor cleansing with unlabeled data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12218–12227, 2023. Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, and Sanjeev Arora. Trainable transformer in transformer. *arXiv preprint arXiv:2307.01189*, 2023. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. *arXiv preprint arXiv:1605.07277*, 2016. David Pardoe, Peter Stone, Maytal Saar-Tsechansky, and Kerem Tomak. Adaptive mechanism design. In Proceedings of the 8th international conference on Electronic commerce The new e-commerce: innovations for conquering current barriers, obstacles and limitations to conducting successful business on the internet - ICEC '06. ACM Press, 2006. doi: 10.1145/1151454.1151480. Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative Agents: Interactive Simulacra of Human Behavior. In *Proceedings of the 36th* Annual ACM Symposium on User Interface Software and Technology, Uist '23, New York, NY, USA, aug 2023a. Association for Computing Machinery. ISBN 9798400701320. arXiv:2304.03442 [cs]. Kiho Park, Yo Joong Choe, and Victor Veitch. The Linear Representation Hypothesis and the Geometry of Large Language Models. *ArXiv*, abs/2311.03658, 2023b. URL https://api.semanticscholar.org/Co rpusID:265042984. Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. Trak: Attributing model behavior at scale. *arXiv preprint arXiv:2303.14186*, 2023c. Vaidehi Patil, Peter Hase, and Mohit Bansal. Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. *ArXiv*, abs/2309.17410, 2023. URL https://api.semantic scholar.org/CorpusID:263311025. Tejal Patwardhan, Kevin Liu, Todor Markov, Neil Chowdhury, Dillon Leet, Natalie Cone, Caitlin Maltbie, Joost Huizinga, Carroll Wainwright, Shawn Jackson, Steven Adler, Rocco Casagrande, and Aleksander Madry. Building an early warning system for LLM-aided biological threat creation, 2024. https://open ai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creat ion. Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. Data and its (dis) contents: A survey of dataset development and use in machine learning research. *Patterns*, 2(11), 2021. Ellie Pavlick and Tom Kwiatkowski. Inherent Disagreements in Human Textual Inferences. *Transactions of* the Association for Computational Linguistics, 7:677–694, 11 2019. ISSN 2307-387x. doi: 10.1162/tacl_a _00293. URL https://doi.org/10.1162/tacl%5Fa%5F00293. Svetlana Pavlitska, Hannes Grolig, and J. Marius Zöllner. Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence, 2023. Abian Pedregosa and Eleni Triantafillou. Announcing the First Machine Unlearning Challenge, 2023. https: //blog.research.google/2023/06/announcing-first-machine-unlearning.html. Accessed on: 3 January, 2024. C. S. Peirce. Questions Concerning Certain Faculties Claimed for Man. *The Journal of Speculative Philosophy*, 2(2):103–114, 1868. Kellin Pelrine, Mohammad Taufeeque, Michał Zając, Euan McLean, and Adam Gleave. Exploiting Novel GPT-4 APIs. *arXiv preprint arXiv:2312.14302*, 2023. Sida Peng, Eirini Kalliamvakou, Peter Cihon, and Mert Demirer. The impact of AI on developer productivity: Evidence from github copilot. *arXiv preprint arXiv:2302.06590*, 2023. Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red teaming language models with language models. *arXiv preprint* arXiv:2202.03286, 2022a. Ethan Perez, Sam Ringer, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. *arXiv preprint arXiv:2212.09251*, 2022b. Aleksandar Petrov, Emanuele La Malfa, Philip HS Torr, and Adel Bibi. Language model tokenizers introduce unfairness between languages. *Neural Information Processing Systems (NeurIPS)*, 2023a. Aleksandar Petrov, Philip Torr, and Adel Bibi. When do prompting and prefix-tuning work? a theory of capabilities and limitations. In *The Twelfth International Conference on Learning Representations*, 2023b. Aleksandar Petrov, Philip HS Torr, and Adel Bibi. Prompting a pretrained transformer can be a universal approximator. *arXiv preprint arXiv:2402.14753*, 2024. Steven T Piantadosi, Harry Tily, and Edward Gibson. The communicative function of ambiguity in language. Cognition, 122(3):280–291, 2012. Konstantin Pilz, Lennart Heim, and Nicholas Brown. Increased Compute Efficiency and the Diffusion of AI Capabilities. *arXiv preprint arXiv:2311.15377*, 2023. Miloslova Plachkinova and Chris Maurer. Security breach at target. *Journal of Information Systems Education*, 29(1):11–20, 2018. Regina Polak and Patrick Rohs. Values–Politics–Religion: The European Values Study: In-depth Analysis– Interdisciplinary Perspectives–Future Prospects. Springer Nature, 2023. Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, and Iason Gabriel. A human rights-based approach to responsible AI. *arXiv preprint arXiv:2210.02667*, 2022a. Vinodkumar Prabhakaran, Rida Qadri, and Ben Hutchinson. Cultural incongruencies in artificial intelligence. arXiv preprint arXiv:2211.13069, 2022b. Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. Grips: Gradient-free, edit-based instruction search for prompting large language models. *arXiv preprint arXiv:2203.07281*, 2022. Jeremias Prassl and Martin Risak. The legal protection of crowdworkers: four avenues for workers' rights in the virtual realm. *Policy implications of virtual work*, pp. 273–295, 2017. Nicolas Pröllochs. Community-based fact-checking on Twitter's Birdwatch platform. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pp. 794–805, 2022. Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. *Advances in Neural Information Processing Systems*, 33:19920–19930, 2020. Rida Qadri, Renee Shelby, Cynthia L Bennett, and Emily Denton. AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia. In *Proceedings of the 2023 ACM* Conference on Fairness, Accountability, and Transparency, pp. 506–517, 2023. Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Mengdi Wang, and Prateek Mittal. Visual Adversarial Examples Jailbreak Large Language Models. *arXiv preprint arXiv:2306.13213*, 2023a. Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Finetuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! arXiv preprint arXiv:2310.03693, 2023b. Yao Qiang, Xiangyu Zhou, and Dongxiao Zhu. Hijacking Large Language Models via Adversarial In-Context Learning. *arXiv preprint arXiv:2311.09948*, 2023. Guanghui Qin and Jason Eisner. Learning how to ask: Querying LMs with mixtures of soft prompts. *arXiv* preprint arXiv:2104.06599, 2021. Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, and Xiang Ren. Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement. *CoRR*, abs/2310.08559, 2023. Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamil˙e Lukoši¯ut˙e, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, ..., and Ethan Perez. Question Decomposition Improves the Faithfulness of ModelGenerated Reasoning. *arXiv*, jul 2023. doi: 10.48550/arXiv.2307.11768. URL http://arxiv.org/abs/ 2307.11768. arXiv:2307.11768 [cs]. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. *arXiv preprint* arXiv:2305.18290, 2023. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. *J. Mach. Learn. Res.*, 21(1), jun 2022. ISSN 1532-4435. RAIL Team. Responsible AI Licenses, 2024. https://www.licenses.ai/ai-licenses. Inioluwa Deborah Raji. The Anatomy of AI Audits: Form, Process, and Consequences. The Oxford Handbook of AI Governance, 2021. URL https://doi.org/10.1093/oxfordhb/9780197579329.013.28. Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. AI and the everything in the whole wide world benchmark. *arXiv preprint arXiv:2111.15366*, 2021. Inioluwa Deborah Raji, SASHA COSTANZA Chock, and J Buolamwini. Change from the outside: Towards credible third-party audits of ai systems. *Missing links in AI governance*, pp. 5, 2023. Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic forgetting in neural networks. In *International Conference on Learning Representations*, 2022. URL https://open review.net/forum?id=GhVS8%5FyPeEa. Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, and Olga Russakovsky. Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10932–10941, 2022. URL https://api.semanticscholar.org/CorpusID:258676658. Rahul Ramesh, Mikail Khona, Robert P Dick, Hidenori Tanaka, and Ekdeep Singh Lubana. How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks. *arXiv preprint arXiv:2311.12997*, 2023. Javier Rando and Florian Tramèr. Universal Jailbreak Backdoors from Poisoned Human Feedback. arXiv preprint arXiv:2311.14455, 2023. Javier Rando, Francesco Croce, Krystof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, and Florian Tramèr. Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs, 2024. Abhinav Rao, Sachin Vashistha, Atharva Naik, Somak Aditya, and Monojit Choudhury. Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks, 2023. Maribeth Rauh, John Mellor, Jonathan Uesato, Po-Sen Huang, Johannes Welbl, Laura Weidinger, Sumanth Dathathri, Amelia Glaese, Geoffrey Irving, Iason Gabriel, William Isaac, and Lisa Anne Hendricks. Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models, 2022. Allan Raventós, Mansheej Paul, Feng Chen, and Surya Ganguli. Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression. *arXiv preprint arXiv:2306.15063*, 2023. Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan D Cotterell. Linear adversarial concept erasure. In *International Conference on Machine Learning*, pp. 18400–18421. Pmlr, 2022a. Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, and Ryan Cotterell. Adversarial concept erasure in kernel space. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pp. 6034–6055, 2022b. Gautam Reddy. The mechanistic basis of data dependence and abrupt learning in an in-context classification task. *arXiv preprint arXiv:2312.03002*, 2023. Robert Reich. The frantic battle over OpenAI shows that money triumphs in the end. *The Guardian*, 2023. https://www.theguardian.com/commentisfree/2023/nov/28/artificial-intelligence-openai-n on-profit-money. Accessed on: March 10, 2024. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A Graduate-Level Google-Proof Q&A Benchmark. arXiv preprint arXiv:2311.12022, 2023. Justin Reppert, Ben Rachbach, Charlie George, Luke Stebbing, Jungwon Byun, Maggie Appleton, and Andreas Stuhlmüller. Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes, jan 2023. URL http://arxiv.org/abs/2301.01751. arXiv:2301.01751 [cs]. Nani Reventlow. How Artificial Intelligence Impacts Marginalised Groups, 2021. Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In *Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems*, pp. 1–7, 2021. Marco Tulio Ribeiro. "Why Should I Trust You?" Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. URL https://api.semanticscholar.org/CorpusID:260555761. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-Precision Model-Agnostic Explanations. In *AAAI Conference on Artificial Intelligence*, 2018. URL https://api.semanticscholar.or g/CorpusID:3366554. Neil M Richards. The dangers of surveillance. *Harv. L. Rev.*, 126:1934, 2012. Toran Bruce Richards. Auto-GPT, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT. Laura Rieger, Chandan Singh, William Murdoch, and Bin Yu. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge. In *International conference on machine learning*, pp. 8116–8126. PMLR, 2020. Jennafer Shae Roberts and Laura N Montoya. Decolonisation, Global Data Law, and Indigenous Data Sovereignty. *arXiv preprint arXiv:2208.04700*, 2022. Jonathan Roberts, Timo Lüddecke, Sowmen Das, Kai Han, and Samuel Albanie. GPT4GEO: How a Language Model Sees the World's Geography. *arXiv preprint arXiv:2306.00020*, 2023a. Manley Roberts, Himanshu Thakur, Christine Herlihy, Colin White, and Samuel Dooley. Data Contamination Through the Lens of Time. *arXiv preprint arXiv:2310.10628*, 2023b. Heather M Roff and Richard Moyes. Meaningful human control, artificial intelligence and autonomous weapons. In *Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Au-Tonomous Weapons* Systems, UN Convention on Certain Conventional Weapons, 2016. Fabien Roger and Ryan Greenblatt. Preventing Language Models From Hiding Their Reasoning. *arXiv* preprint arXiv:2310.18512, 2023. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A Primer in BERTology: What we know about how BERT works, nov 2020. URL http://arxiv.org/abs/2002.12327. arXiv:2002.12327 [cs]. Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. *Nature*, 2024. Kevin Roose. The Chaos at OpenAI, Explained. *The New York Times*, 2023. https://www.nytimes.com/ 2023/11/21/briefing/open-ai-sam-altman-microsoft.html. Accessed on: March 10, 2024. Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. *arXiv preprint arXiv:1703.03717*, 2017. Tim Roughgarden. *Selfish Routing and The Price of Anarchy*. MIT Press, Cambridge, Mass, 2005. ISBN 9780262182430. OCLC: ocm56068857. Michael Roytman and Ed Bellis. *Modern Vulnerability Management: Predictive Cybersecurity*. Artech House, 2023. Yangjun Ruan, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, and Tatsunori Hashimoto. Identifying the Risks of LM Agents with an LM-Emulated Sandbox. *arXiv preprint arXiv:2309.15817*, 2023. Stuart Russell. *Human compatible: Artificial intelligence and the problem of control*. Penguin, 2019. Stuart Russell, Daniel Dewey, and Max Tegmark. Research priorities for robust and beneficial artificial intelligence. *AI magazine*, 36(4):105–114, 2015. Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The Self-Perception and Political Biases of ChatGPT, 2023. Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks, aug 2023. URL http://arxiv.org/abs/ 2207.13243. arXiv:2207.13243 [cs]. Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019. Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. Backdoor Attacks on Self-Supervised Learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition (CVPR), pp. 13337–13346, June 2022. Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, and Eneko Agirre. Did ChatGPT cheat on your test?, June 2023. https://hitz-zentroa.github.io/lm-contamination/blog/. F. Salahdine and N. Kaabouch. Social Engineering Attacks: A Survey. *Future Internet*, 11(4), 2019. doi: https://doi.org/10.3390/fi11040089. Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. What does it mean to 'solve' the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems. In *FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency*, pp. 458–468. Acm, January 2020. doi: 10.1145/3351095.3372849. URL https://doi.org/10.1145/3351 095.3372849. Jonas B Sandbrink. Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools. *arXiv preprint arXiv:2306.13952*, 2023. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. *arXiv preprint arXiv:1910.01108*, 2019. Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. Whose opinions do language models reflect? In *International Conference on Machine Learning*, 2023. Abulhair Saparov and He He. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, 2022. URL https://openreview.net/pdf?id=qFVVBzXxR2V. Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, and He He. Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples. *arXiv preprint arXiv:2305.15269*, 2023. Naomi Saphra, Eve Fleisig, Kyunghyun Cho, and Adam Lopez. First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models. *arXiv preprint arXiv:2311.05020*, 2023. Samir Saran and Shashank Mattoo. Big Tech vs. Red Tech: The Diminishing of Democracy in the Digital Age. *Observer Research Foundation, February*, 15, 2022. Laura Sartori and Andreas Theodorou. A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. *Ethics and Information Technology*, 24(1):4, 2022. Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O'Keefe, Gillian K Hadfield, Richard Ngo, Konstantin Pilz, et al. Computing Power and the Governance of Artificial Intelligence. *arXiv preprint arXiv:2402.08797*, 2024. Frank Sauer and Niklas Schörnig. Killer drones: The 'silver bullet' of democratic warfare? *Security Dialogue*, 43(4):363–380, 2012. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Selfcritiquing models for assisting human evaluators, jun 2022. URL http://arxiv.org/abs/2206.05802. arXiv:2206.05802 [cs]. Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J Nay, Kshitij Gupta, and Aran Komatsuzaki. Arb: Advanced reasoning benchmark for large language models. *arXiv preprint arXiv:2307.13692*, 2023. Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of Large Language Models a mirage? *arXiv preprint arXiv:2304.15004*, 2023. Paul Scharre. Autonomous weapons and operational risk, 2016. Thomas C. Schelling. *The Strategy of Conflict: With a New Preface by the Author*. Harvard University Press, Cambridge, MA, may 1981. ISBN 9780674840317. Nino Scherrer, Claudia Shi, Amir Feder, and David M Blei. Evaluating the moral beliefs encoded in LLMs. arXiv preprint arXiv:2307.14324, 2023. Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback at scale. arXiv preprint arXiv:2303.16755, 2023a. Jérémy Scheurer, Mikita Balesni, and Marius Hobbhahn. Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure, 2023b. Natalie Schluter. The glass ceiling in NLP. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2793–2798, 2018. W Joel Schneider and Kevin S McGrew. The Cattell-Horn-Carroll model of intelligence. In Contemporary intellectual assessment: Theories, tests, and issues, pp. 99–144. The Guilford Press, 2012. Mark Schroeder. Value Theory. The Stanford Encyclopedia of Philosophy (Fall 2021 Edition), 2021. URL https://plato.stanford.edu/archives/fall2021/entries/value-theory/. Christian Schroeder de Witt, Samuel Sokota, J. Zico Kolter, Jakob Nicolaus Foerster, and Martin Strohmeier. Perfectly Secure Steganography Using Minimum Entropy Coupling. September 2023. URL https://op enreview.net/forum?id=HQ67mj5rJdR. Ludwig Schubert, Chelsea Voss, Nick Cammarata, Gabriel Goh, and Chris Olah. High-low frequency detectors. *Distill*, 2021. Jonas Schuett, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, and Ben Garfinkel. Towards best practices in AGI safety and governance: A survey of expert opinion. *arXiv* preprint arXiv:2305.07153, 2023. Lisa Schut, Nenad Tomasev, Tom McGrath, Demis Hassabis, Ulrich Paquet, and Been Kim. Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero, oct 2023. URL http: //arxiv.org/abs/2310.16410. arXiv:2310.16410 [cs, stat]. David L Schwartz and Max Rogers. Inventorless Inventions? The Constitutional Conundrum of AI-Produced Inventions. *February*, 3:22–05, 2022. Shalom H Schwartz. Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In *Advances in experimental social psychology*, volume 25, pp. 1–65. Elsevier, 1992. Shalom H Schwartz. Are there universal aspects in the structure and contents of human values? Journal of social issues, 50(4):19–45, 1994. Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, and Antonio Torralba. A Function Interpretation Benchmark for Evaluating Interpretability Methods. *ArXiv*, abs/2309.03886, 2023. URL https://api.semanticscholar.org/CorpusID:26158291 6. Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. Continual-t0: Progressively instructing 50+ tasks to language models without forgetting. *arXiv preprint arXiv:2205.12393*, 2022. Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. *arXiv* preprint arXiv:2310.11324, 2023. Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K Wei, Christoph Winter, Mackenzie Arnold, Seán Ó hÉigeartaigh, and Anton Korinek. Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives. *arXiv preprint arXiv:2311.09227*, 2023. Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. *Advances in Neural Information Processing Systems*, 34:18075–18086, 2021. Ali Akbar Septiandri, Marios Constantinides, Mohammad Tahaei, and Daniele Quercia. WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and Democratic is FAccT? In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 160–171, 2023. Marc Serramia, Maite López-Sánchez, Stefano Moretti, and Juan Antonio Rodríguez-Aguilar. On the dominant set selection problem and its application to value alignment. *Autonomous Agents and Multi-Agent* Systems, 35(2):42, 2021. Hyunjune Sebastian Seung, Haim Sompolinsky, and Naftali Tishby. Statistical mechanics of learning from examples. *Physical review A*, 45(8):6056, 1992. Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos. Compute trends across three eras of machine learning. In 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Ieee, 2022. Nihar Shah, Dengyong Zhou, and Yuval Peres. Approval voting and incentives in crowdsourcing. In *International conference on machine learning*, pp. 10–19. Pmlr, 2015. Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, and Anca Dragan. Preferences implicit in the state of the world. *arXiv preprint arXiv:1902.04198*, 2019. Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael D Dennis, Pieter Abbeel, Anca Dragan, and Stuart Russell. Benefits of assistance over reward learning, 2020. Rusheb Shah, Quentin Feuillade-Montixi, Soroush Pour, Arush Tagade, Stephen Casper, and Javier Rando. Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation, 2023. Murray Shanahan. The Frame Problem. In Edward N. Zalta (ed.), *The Stanford Encyclopedia of Philosophy*. Metaphysics Research Lab, Stanford University, Spring 2016 edition, 2016. Murray Shanahan, Kyle McDonell, and Laria Reynolds. Role play with large language models. *Nature*, pp. 1–6, 2023. Scott Shane and Daisuke Wakabayashi. 'The Business of War': Google Employees Protest Work for the Pentagon. *The New York Times*, 2018. https://www.nytimes.com/2018/04/04/technology/google-l etter-ceo-pentagon-project.html. Accessed on: January 8, 2024. Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. Clever hans or neural theory of mind? Stress testing social reasoning in large language models. *arXiv preprint arXiv:2305.14763*, 2023. Lee Sharkey, Dan Braun, and beren. [Interim research report] Taking features out of superposition with sparse autoencoders. *AI Alignment Forum*, 2023. https://www.alignmentforum.org/posts/z6QQJbtpk EAX3Aojj/interim-research-report-taking-features-out-of-superposition. Noel Sharkey. Saying 'no!'to lethal autonomous targeting. In *Military Ethics and Emerging Technologies*, pp. 132–146. Routledge, 2016. Tanusree Sharma, Jongwon Park, Yujin Kwon, Yiren Liu, Yun Huang, Sunny Liu, Dawn Song, Jeff Hancock, and Yang Wang. Inclusive.ai: Engaging Underserved Populations In Democratic Decision-making On Ai, 2023. Utkarsh Sharma and Jared Kaplan. Scaling Laws from the Data Manifold Dimension. *J. Mach. Learn. Res.*, 23(1), jan 2022. ISSN 1532-4435. Yonadav Shavit. What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring. *arXiv preprint arXiv:2303.11341*, 2023. Yonadav Shavit, Sandhini Agarwal, Miles Brundage, Steven Adler, Cullen O'Keefe, Rosie Campbell, Teddy Lee, Pamela Mishkin, Tyna Eloundou, Alan Hickey, et al. Practices for Governing Agentic AI Systems, 2023. Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, and Nael Abu-Ghazaleh. Survey of vulnerabilities in large language models revealed by adversarial attacks. *arXiv preprint* arXiv:2310.10844, 2023. Renee Shelby, Shalaleh Rismani, Kathryn Henne, Ajung Moon, Negar Rostamzadeh, Paul Nicholas, YILLAAKBARI N'MAH, Jess Gallegos, Andrew Smart, and Gurleen Virk. Identifying sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. *arXiv preprint arXiv:2210.05791*, 2022. Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, and Daniel Khashabi. The language barrier: Dissecting safety challenges of LLMs in multilingual contexts. *arXiv preprint arXiv:2401.13136*, 2024. Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. arXiv preprint arXiv:2308.03825, 2023. Lijun Sheng, Jian Liang, Ran He, Zilei Wang, and Tieniu Tan. AdaptGuard: Defending Against Universal Attacks for Model Adaptation. *arXiv preprint arXiv:2303.10594*, 2023. Toby Shevlane. Structured access: an emerging paradigm for safe AI deployment. arXiv preprint arXiv:2201.05159, 2022. Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, et al. Model evaluation for extreme risks. arXiv preprint arXiv:2305.15324, 2023. Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer. Detecting Pretraining Data from Large Language Models. *ArXiv*, abs/2310.16789, 2023. URL https://api.semanticscholar.org/CorpusID:264451585. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. *arXiv preprint arXiv:2010.15980*, 2020. Ali Shirali, Rediet Abebe, and Moritz Hardt. A theory of dynamic benchmarks. arXiv preprint arXiv:2210.03165, 2022. Yoav Shoham and Kevin Leyton-Brown. *Multiagent systems: Algorithmic, game-theoretic, and logical foundations*. Cambridge University Press, 2008. Michal Shur-Ofry. Multiplicity as an AI Governance Principle. *Available at SSRN 4444354*, 2023. Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, and Sara Hooker. Metadata archaeology: Unearthing data subsets by leveraging training dynamics. *arXiv preprint arXiv:2209.10015*, 2022. Charlotte Siegmann and Markus Anderljung. The Brussels effect and artificial intelligence: How EU regulation will impact the global AI market. *arXiv preprint arXiv:2208.12645*, 2022. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. *arXiv preprint arXiv:1712.01815*, 2017. Similarweb. chatgpt.com, 2024. https://www.similarweb.com/website/chatgpt.com/. Accessed on: 30 January, 2024. Gabriel Simmons. Moral mimicry: Large language models produce moral rationalizations tailored to political identity. *arXiv preprint arXiv:2209.12106*, 2022. Aaditya K Singh, Stephanie CY Chan, Ted Moskovitz, Erin Grant, Andrew M Saxe, and Felix Hill. The transient nature of emergent in-context learning in transformers. *arXiv preprint arXiv:2311.08360*, 2023. Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. Nature, 620(7972):172–180, 2023a. Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. A long way to go: Investigating length correlations in rlhf. *arXiv preprint arXiv:2310.03716*, 2023b. Joar Skalse, Nikolaus HR Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. *arXiv preprint arXiv:2209.13085*, 2022. Michael A Skinnider, R Greg Stacey, David S Wishart, and Leonard J Foster. Chemical language models enable navigation in sparsely populated chemical space. *Nature Machine Intelligence*, 2021. Linda J Skitka, Kathleen L Mosier, and Mark Burdick. Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5):991–1006, 1999. Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. Participation is not a design fix for machine learning. In *Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization*, pp. 1–6, 2022. Brad Smith. Developing and Deploying AI Responsibly: Elements of an Effective Legislative Framework to Regulate AI, 2023. https://blogs.microsoft.com/on-the-issues/2023/09/12/developing-and-dep loying-ai-responsibly-elements-of-an-effective-legislative-framework-to-regulate-ai/. Accessed on: January 8, 2024. Nathan D. Smith. *Introduction to Philosophy*. OpenStax, Rice University, Houston, Texas, 2022. Nathalie A Smuha. From a "race to AI" to a "race to AI regulation": regulatory competition for artificial intelligence. *Law, Innovation and Technology*, 13(1):57–84, 2021. doi: 10.1080/17579961.2021.1898300. URL https://doi.org/10.1080/17579961.2021.1898300. Nate Soares and Benja Fallenstein. Aligning superintelligence with human interests: A technical research agenda. *Machine Intelligence Research Institute (MIRI) technical report*, 8, 2014. Nate Soares, Benja Fallenstein, Stuart Armstrong, and Eliezer Yudkowsky. Corrigibility. In *Workshops at* the twenty-ninth AAAI conference on artificial intelligence, 2015. Emily H Soice, Rafael Rocha, Kimberlee Cordova, Michael Specter, and Kevin M Esvelt. Can large language models democratize access to dual-use biotechnology? *arXiv preprint arXiv:2306.03809*, 2023. Irene Solaiman. The gradient of generative AI release: Methods and considerations. In *Proceedings of the* 2023 ACM conference on fairness, accountability, and transparency, pp. 111–122, 2023. Irene Solaiman and Christy Dennison. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*. Curran Associates, Inc., 2021. URL https://openreview.net/forum?id=k-ghaB9VZBw. Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, et al. Evaluating the Social Impact of Generative AI Systems in Systems and Society. *arXiv preprint arXiv:2306.05949*, 2023. Daniel Solove and Paul Schwartz. ALI Data Privacy: Overview and Black Letter Text. *UCLA Law Review*, 68:1252, 2022. Daniel J Solove. The Limitations of Privacy Rights. *Notre Dame L. Rev.*, 98:975, 2022. Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, et al. Value Kaleidoscope: Engaging AI with pluralistic human values, rights, and duties. *arXiv preprint arXiv:2309.00779*, 2023. Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, et al. A Roadmap to Pluralistic Alignment. arXiv preprint arXiv:2402.05070, 2024. Oliver Sourbut, Lewis Hammond, and Harriet Wood. Cooperation and control in delegation games. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-2024, 2024. doi: 10.24963/ijcai.2024/26. Stefano Spigler, Mario Geiger, and Matthieu Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm. *Journal of Statistical Mechanics: Theory and Experiment*, 2020 (12):124001, 2020. Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani. AI model GPT-3 (dis) informs us better than humans. *arXiv preprint arXiv:2301.11924*, 2023. Zayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, and Greg Durrett. MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning. *CoRR*, abs/2310.16049, 2023. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint arXiv:2206.04615*, 2022. Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. Right for the right concept: Revising neurosymbolic concepts by interacting with their explanations. In *Proceedings of the IEEE/CVF conference on* computer vision and pattern recognition, pp. 3619–3629, 2021. Zoe Stanley-Lockman and Lena Trabucco. NATO's role in responsible AI governance in military affairs. In The Oxford Handbook of AI Governance. Oxford University Press, 2022. Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. Evaluating gender bias in machine translation. arXiv preprint arXiv:1906.00591, 2019. Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems. *arXiv preprint arXiv:2310.12397*, 2023. Jacob Steinhardt. Complex Systems are Hard to Control, 2023. https://bounded-regret.ghost.io/com plex-systems-are-hard-to-control/. Lena Strobl. Average-Hard Attention Transformers are Constant-Depth Uniform Threshold Circuits. arXiv preprint arXiv:2308.03212, 2023. Andreas Stuhlmüller and Jungwon Byun. Supervise Process, not Outcomes, 2022. URL https://ought.or g/updates/2022-04-06-process. Michael PH Stumpf and Mason A Porter. Critical truths about power laws. *Science*, 335(6069):665–666, 2012. Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C Love, Erin Grant, Jascha Achterberg, Joshua B Tenenbaum, et al. Getting aligned on representational alignment. *arXiv preprint arXiv:2310.13018*, 2023. Xinyuan Sun, Davide Crapis, Matt Stephenson, Barnabé Monnot, Thomas Thiery, and Jonathan PasseratPalmbach. Cooperative ai via decentralized commitment devices. *arXiv preprint arXiv:2311.07815*, 2023. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic Attribution for Deep Networks. In *International Conference on Machine Learning*, 2017. URL https://api.semanticscholar.org/CorpusID: 16747630. D'idac Sur'is, Sachit Menon, and Carl Vondrick. ViperGPT: Visual Inference via Python Execution for Reasoning. *ArXiv*, abs/2303.08128, 2023. URL https://api.semanticscholar.org/CorpusID: 257505358. Daniel Susskind. Technological Unemployment. In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young, and Baobao Zhang (eds.), *The Oxford Handbook* of AI Governance. Oxford Academic, 2023. doi: 10.1093/oxfordhb/9780197579329.013.42. URL https://doi.org/10.1093/oxfordhb/9780197579329.013.42. Sivaramakrishnan Swaminathan, Antoine Dedieu, Rajkumar Vasudeva Raju, Murray Shanahan, Miguel Lazaro-Gredilla, and Dileep George. Schema-learning and rebinding as mechanisms of in-context learning and emergence. *arXiv preprint arXiv:2307.01201*, 2023. Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw Bayesian optimization. *arXiv preprint* arXiv:1406.3896, 2014. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Paulo Tabuada. *Verification and control of hybrid systems: a symbolic approach*. Springer Science & Business Media, 2009. Zeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. A word on machine ethics: A response to Jiang et al.(2021). *arXiv preprint arXiv:2111.04158*, 2021. Chenmien Tan, Ge Zhang, and Jie Fu. Massive editing for large language models via meta learning. *arXiv* preprint arXiv:2311.04661, 2023. Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, and Luke Melas-Kyriazi. A benchmark for learning to translate a new language from one grammar book. *arXiv preprint arXiv:2309.16575*, 2023. Andrew Tarantola. Palantir shows off an AI that can go to war. *Engadget*, 2023. URL https://www.enga dget.com/palantir-shows-off-an-ai-that-can-go-to-war-180513781.html. Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, et al. Human-timescale adaptation in an open-ended task space. *arXiv preprint arXiv:2301.07608*, 2023. Chyng Wen Tee and Christopher Hian Ann Ting. Cross-section of mini flash crashes and their detection by a state-space approach. *Available at SSRN 3402783*, 2019. Max Tegmark and Steve Omohundro. Provably safe systems: the only path to controllable AGI. arXiv preprint arXiv:2309.01933, 2023. Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, and Anton Van Den Hengel. On the value of out-of-distribution testing: An example of goodhart's law. *Advances in* neural information processing systems, 33:407–417, 2020. Adam Thierer. Existential Risks and Global Governance Issues around AI and Robotics. *R Street Policy* Study, 291, June 2023. Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems, 35:38274–38290, 2022. Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, and Graham Neubig. Do LLMs exhibit human-like response biases? A case study in survey design. *arXiv preprint arXiv:2311.04076*, 2023. Eric Todd, Millicent L Li, Arnab Sen Sharma, Aaron Mueller, Byron C Wallace, and David Bau. Function Vectors in Large Language Models. *arXiv preprint arXiv:2310.15213*, 2023. Together Computer. RedPajama: an Open Dataset for Training Large Language Models, October 2023. URL https://github.com/togethercomputer/RedPajama-Data. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, ..., and Thomas Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models, 2023. Sam Toyer, Olivia Watkins, Ethan Adrian Mendes, Justin Svegliato, Luke Bailey, Tiffany Wang, Isaac Ong, Karim Elmaaroufi, Pieter Abbeel, Trevor Darrell, et al. Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game. *arXiv preprint arXiv:2311.01011*, 2023. Robert Trager, Ben Harack, Anka Reuel, Allison Carnegie, Lennart Heim, Lewis Ho, Sarah Kreps, Ranjit Lall, Owen Larter, Seán Ó hÉigeartaigh, et al. International Governance of Civilian AI: A Jurisdictional Certification Approach. *arXiv preprint arXiv:2308.15514*, 2023. Florian Tramer and Dan Boneh. Adversarial training and robustness for multiple perturbations. *Advances* in neural information processing systems, 32, 2019. Florian Tramer, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, and Joern-Henrik Jacobsen. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of Proceedings of Machine Learning Research, pp. 9561–9571. Pmlr, 13–18 Jul 2020a. Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. *Advances in neural information processing systems*, 33:1633–1645, 2020b. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness May Be at Odds with Accuracy. In *International Conference on Learning Representations*, 2019. Alex Turner, Monte MacDiarmid, David Udell, Lisa Thiergart, and Ulisse Mini. Steering GPT-2 XL by Adding an Activation Vector. *AI Alignment Forum*, 2023a. https://www.alignmentforum.org/posts /5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector. Accessed on: October 20, 2023. Alexander Matt Turner, Dylan Hadfield-Menell, and Prasad Tadepalli. Conservative agency via attainable utility preservation. In *Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society*, pp. 385–391, 2020. Alexander Matt Turner, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid. Activation Addition: Steering Language Models Without Optimization, nov 2023b. URL http://arxiv. org/abs/2308.10248. arXiv:2308.10248 [cs]. Miles Turpin, Julian Michael, Ethan Perez, and Samuel R. Bowman. Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting, may 2023. URL http: //arxiv.org/abs/2305.04388. arXiv:2305.04388 [cs]. Andrew Tutt. An FDA for algorithms. *Admin. L. Rev.*, 69:83, 2017. Jens Tuyls, Dhruv Madeka, Kari Torkkola, Dean Foster, Karthik Narasimhan, and Sham Kakade. Scaling laws for imitation learning in nethack. *arXiv preprint arXiv:2307.09423*, 2023. Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022. UK Government. Emerging Processes for Frontier AI Safety, 2023. https://www.gov.uk/government/pu blications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-a i-safety. UK Parliament. Online Safety Act 2023, 2023. Number 50, http://www.legislation.gov.uk/ukpga/202 3/50/whole/enacted. Tomer Ullman. Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399, 2023. Unesco. Recommendation on the Ethics of Artificial Intelligence, 2021. https://unesdoc.unesco.org/ark: /48223/pf0000381137. Unesco. Transforming education from within: current trends in the status and development of teachers; World Teachers' Day 2022, 2022. https://unesdoc.unesco.org/ark:/48223/pf0000383002. Accessed on: January 30, 2024. U.S. Congress. DEEPFAKES Accountability Act, 2023. Introduced in House on September 20, 2023, https://www.congress.gov/bill/118th-congress/house-bill/5586/text. Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). *arXiv* preprint arXiv:2206.10498, 2022. Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. Can Large Language Models Really Improve by Self-critiquing Their Own Plans? *arXiv preprint arXiv:2310.08118*, 2023. Jan Van Dijk. *The digital divide*. John Wiley & Sons, 2020. Michael Veale, Kira Matus, and Robert Gorwa. AI and Global Governance: Modalities, Rationales, Tensions. Annual Review of Law and Social Science, 19, 2023. Andreas Veit, Michael J Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. *Advances in neural information processing systems*, 29, 2016. Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks. *ArXiv*, abs/2306.07899, 2023. URL https://api.semanticscholar.org/CorpusID:259145373. Lucía Vicente and Helena Matute. Humans inherit artificial intelligence biases. *Scientific Reports*, 13(1): 15737, 2023. Tom Viering and Marco Loog. The shape of learning curves: a review. *IEEE Transactions on Pattern* Analysis and Machine Intelligence, 2022. Paul Voigt and Axel Von dem Bussche. The EU general data protection regulation (GDPR). A Practical Guide, 1st Ed., Cham: Springer International Publishing, 2017. Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. Transformers learn in-context by gradient descent. *arXiv preprint* arXiv:2212.07677, 2022. Johannes von Oswald, Eyvind Niklasson, Maximilian Schlegel, Seijin Kobayashi, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Max Vladymyrov, Razvan Pascanu, et al. Uncovering mesa-optimization algorithms in transformers. *arXiv preprint arXiv:2309.05858*, 2023. Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. Feature-Learning Networks Are Consistent Across Widths At Realistic Scales. *arXiv preprint* arXiv:2305.18411, 2023. Alexander Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning Language Models During Instruction Tuning. *arXiv preprint arXiv:2305.00944*, 2023a. Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters. *arXiv preprint* arXiv:2310.09219, 2023b. Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In *2019 IEEE Symposium* on Security and Privacy (SP), pp. 707–723. Ieee, 2019a. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. In *Proceedings of* the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 2717–2739. Association for Computational Linguistics, 2023a. doi: 10.18653/v1/2023.acl-long.153. URL https://doi.org/10.18653/v1/2023.acl-long.153. Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, and Bo Li. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models, 2023b. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv preprint arXiv: Arxiv-2305.16291, 2023c. Jane X. Wang, Edward Hughes, Chrisantha Fernando, Wojciech M. Czarnecki, Edgar A. Duéñez Guzmán, and Joel Z. Leibo. Evolving Intrinsic Motivations for Altruistic Behavior. In *Proceedings of the 18th* International Conference on Autonomous Agents and MultiAgent Systems, Aamas '19, pp. 683–692, Richland, SC, 2019b. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450363099. Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, and Chaowei Xiao. On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models. *arXiv preprint* arXiv:2311.09641, 2023d. Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small. In The Eleventh International Conference on Learning Representations, sep 2022. URL https://openreview.net/forum?id=NpsVSN6o 4ul. Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu, Kun He, and John Hopcroft. Towards understanding learning representations: To what extent do different neural networks learn the same representation. *Advances in neural information processing systems*, 31, 2018. Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah D. Goodman. Hypothesis Search: Inductive Reasoning with Language Models. *CoRR*, abs/2309.05660, 2023e. Shida Wang and Beichen Xue. State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. *Advances in Neural Information Processing Systems*, 36, 2023. Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, et al. Knowledge editing for large language models: A survey. *arXiv preprint arXiv:2310.16218*, 2023f. Tong Wang. Gaining Free or Low-Cost Interpretability with Interpretable Partial Substitute. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 6505–6514. Pmlr, 09–15 Jun 2019. Xinyi Wang, Wanrong Zhu, and William Yang Wang. Large language models are implicitly topic models: Explaining and finding good demonstrations for in-context learning. *arXiv preprint arXiv:2301.11916*, 2023g. Xiting Wang, Liming Jiang, Jose Hernandez-Orallo, Luning Sun, David Stillwell, Fang Luo, and Xing Xie. Evaluating General-Purpose AI with Psychometrics. *arXiv preprint arXiv:2310.16379*, 2023h. Yau-Shian Wang and Yingshan Chang. Toxicity Detection with Generative Prompt-based Inference, 2022. Yihan Wang, Jatin Chauhan, Wei Wang, and Cho-Jui Hsieh. Universality and limitations of prompt tuning. Advances in Neural Information Processing Systems, 36, 2023i. Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Man Zhang, et al. RoleLLM: Benchmarking, Eliciting, and Enhancing RolePlaying Abilities of Large Language Models. *arXiv preprint arXiv:2310.00746*, 2023j. Zihao Wang, Lin Gui, Jeffrey Negrea, and Victor Veitch. Concept Algebra for (Score-Based) Text-Controlled Generative Models, oct 2023k. URL http://arxiv.org/abs/2302.03693. arXiv:2302.03693 [cs, stat]. Francis Rhys Ward, Francesca Toni, Francesco Belardinelli, and Tom Everitt. Honesty Is the Best Policy: Defining and Mitigating AI Deception. In *Thirty-seventh Conference on Neural Information Processing* Systems, 2023. URL https://openreview.net/forum?id=EmxpDiPgRu. Sumio Watanabe. Recent advances in algebraic geometry and Bayesian statistics. *Information Geometry*, 2024. Timothy LH Watkin, Albrecht Rau, and Michael Biehl. The statistical mechanics of learning a rule. *Reviews* of Modern Physics, 65(2):499, 1993. Jess Weatherbed. The EU still needs to get its AI Act together. *The Verge*, Jun 2023. URL https://www. theverge.com/2023/6/29/23777239/eu-ai-act-artificial-intelligence-regulations-europe. Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How Does LLM Safety Training Fail? arXiv preprint arXiv:2307.02483, 2023a. Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, and Peter Henderson. Assessing the brittleness of safety alignment via pruning and lowrank modifications. *arXiv preprint arXiv:2402.05162*, 2024. Jason Wei. 137 emergent abilities of large language models, 2022. https://www.jasonwei.net/blog/emer gence. Accessed on: October 20, 2023. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. *arXiv preprint arXiv:2109.01652*, 2021. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent Abilities of Large Language Models. *Transactions on* Machine Learning Research, 2022a. ISSN 2835-8856. URL https://openreview.net/forum?id=yzkSU5 zdwD. Survey Certification. Jason Wei, Yi Tay, and Quoc V Le. Inverse scaling can become U-shaped. *arXiv preprint arXiv:2211.02011*, 2022b. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. *Advances in Neural Information* Processing Systems, 35:24824–24837, 2022c. Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023b. Susan Wei, Daniel Murfet, Mingming Gong, Hui Li, Jesse Gell-Redman, and Thomas Quella. Deep learning is singular, and that's good. *IEEE Transactions on Neural Networks and Learning Systems*, 2022d. Zeming Wei, Yifei Wang, and Yisen Wang. Jailbreak and guard aligned language models with only few in-context demonstrations. *arXiv preprint arXiv:2310.06387*, 2023c. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021. Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al. Taxonomy of risks posed by language models. In *Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency*, pp. 214–229, 2022. Laura Weidinger, Kevin R McKee, Richard Everett, Saffron Huang, Tina O Zhu, Martin J Chadwick, Christopher Summerfield, and Iason Gabriel. Using the Veil of Ignorance to align AI systems with principles of justice. *Proceedings of the National Academy of Sciences*, 120(18):e2213709120, 2023a. Laura Weidinger, Maribeth Rauh, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Juan MateosGarcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, et al. Sociotechnical Safety Evaluation of Generative AI Systems. *arXiv preprint arXiv:2310.11986*, 2023b. Gail Weiss, Yoav Goldberg, and Eran Yahav. Thinking Like Transformers. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pp. 11080–11090. Pmlr, 2021. Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. Challenges in Detoxifying Language Models. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pp. 2447–2469, Punta Cana, Dominican Republic, nov 2021. Association for Computational Linguistics. doi: 10.18653/v 1/2021.findings-emnlp.210. URL https://aclanthology.org/2021.findings-emnlp.210. Jiaxin Wen, Pei Ke, Hao Sun, Zhexin Zhang, Chengfei Li, Jinfeng Bai, and Minlie Huang. Unveiling the Implicit Toxicity in Large Language Models. *arXiv preprint arXiv:2311.17391*, 2023. Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. Do Llamas Work in English? On the Latent Language of Multilingual Transformers. *arXiv preprint arXiv:2402.10588*, 2024. Lilian Weng, Vik Goel, and Andrea Vallone. Using GPT-4 for content moderation, 2023. https://openai .com/blog/using-gpt-4-for-content-moderation. Accessed on: 30 January, 2024. Wex. Joint and Several Liability, 2023. https://www.law.cornell.edu/wex/joint_and_several_liabil ity. Accessed on: 9 March, 2024. White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Presidential Actions, October 2023. URL https://www.whitehouse.gov/briefing-room/ presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-dev elopment-and-use-of-artificial-intelligence/. Zack Whittaker. NSA improperly collected Americans' phone records for a second time, documents reveal, 2019. https://techcrunch.com/2019/06/26/nsa-improper-phone-records-collection/. Accessed on: 30th January, 2024. Jess Whittlestone and Jack Clark. Why and How Governments Should Monitor AI Development. *arXiv* preprint arXiv:2108.12427, 2021. David Gray Widder, Sarah West, and Meredith Whittaker. Open (for business): Big tech, concentrated power, and the political economy of open AI. Concentrated Power, and the Political Economy of Open AI (August 17, 2023), 2023. Noam Wies, Yoav Levine, and Amnon Shashua. The learnability of in-context learning. *arXiv preprint* arXiv:2303.07895, 2023. Marcel Wieting and Geza Sapi. Algorithms in the Marketplace: An Empirical Analysis of Automated Pricing in E-Commerce. Technical Report 21-06, NET Institute, 2021. Simon Willison. You can't solve AI security problems with more AI, 2022. https://simonwillison.net/ 2022/Sep/17/prompt-injection-more-ai/. Accessed on: October 20, 2023. Simon Willison. The Dual LLM pattern for building AI assistants that can resist prompt injection, 2023a. https://simonwillison.net/2023/Apr/25/dual-llm-pattern. Accessed on: October 20, 2023. Simon Willison. Multi-modal prompt injection, 2023b. https://simonwillison.net/2023/Oct/14/mult i-modal-prompt-injection/. Accessed on: October 20, 2023. Yotam Wolf, Noam Wies, Yoav Levine, and Amnon Shashua. Fundamental Limitations of Alignment in Large Language Models. *arXiv preprint arXiv:2304.11082*, 2023. Jeff Wu, Long Ouyang, Daniel M Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback. *arXiv preprint arXiv:2109.10862*, 2021. Minghao Wu and Alham Fikri Aji. Style over substance: Evaluation biases for large language models. *arXiv* preprint arXiv:2307.03025, 2023. Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. Advances in Neural Information Processing Systems, 35:32353–32368, 2022. Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training. *arXiv preprint arXiv:2306.01693*, 2023a. Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. *CoRR*, abs/2307.02477, 2023b. URL https://doi.org/10.485 50/arXiv.2307.02477. Sophie Xhonneux, David Dobre, Jian Tang, Gauthier Gidel, and Dhanya Sridhar. In-context learning can re-learn forbidden tasks. *arXiv preprint arXiv:2402.05723*, 2024. Chloe Xiang. Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It 'Feels Weird'. *Vice*, 2023a. https://www.vice.com/en/article/4ax9yw/startup-uses-ai-chatbot-to-p rovide-mental-health-counseling-and-then-realizes-it-feels-weird. Accessed on: 30 January, 2024. Chloe Xiang. 'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice, 2023b. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-wit h-ai-chatbot-widow-says. Zhen Xiang, David J Miller, and George Kesidis. Revealing backdoors, post-training, in DNN classifiers via novel inference on optimized perturbations inducing group misclassification. In *ICASSP 2020-2020* IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3827–3831. Ieee, 2020. Ziang Xiao, Susu Zhang, Vivian Lai, and Q Vera Liao. Evaluating NLG Evaluation Metrics: A Measurement Theory Perspective. *arXiv preprint arXiv:2305.14889*, 2023. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An Explanation of In-context Learning as Implicit Bayesian Inference. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=RdJVFCHjUMI. Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein. Detoxifying Language Models Risks Marginalizing Minority Voices. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2390–2397, Online, jun 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl -main.190. URL https://aclanthology.org/2021.naacl-main.190. Jiacen Xu, Jack W Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, and Zhou Li. Autoattacker: A large language model guided system to implement automatic cyber-attacks. arXiv preprint arXiv:2403.01038, 2024a. Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, and Jian Zhang. On the Tool Manipulation Capability of Open-source Large Language Models. *arXiv preprint arXiv:2305.16504*, 2023a. Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A Gunter, and Bo Li. Detecting AI trojans using meta neural analysis. In *2021 IEEE Symposium on Security and Privacy (SP)*, pp. 103–120. Ieee, 2021b. Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. Exploring large language models for communication games: An empirical study on werewolf. arXiv preprint arXiv:2309.04658, 2023b. Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. Hallucination is Inevitable: An Innate Limitation of Large Language Models. *arXiv preprint arXiv:2401.11817*, 2024b. L Yan, L Sha, L Zhao, Y Li, R Martinez-Maldonado, and G Chen. Practical and ethical challenges of large language models in education: A systematic literature review. *arXiv preprint arXiv:2303.13379*, 2023. Jiachen Yang, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, and Hongyuan Zha. Learning to Incentivize Other Learning Agents. In *Adaptive and Learning Agents Workshop at AAMAS*, 2020. URL https://ala2020.vub.ac.be/papers/ALA2020%5Fpaper%5F27.pdf. Jiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao, and Hongyuan Zha. Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning. *arXiv:2112.10859*, December 2021. Kai-Cheng Yang and Filippo Menczer. Anatomy of an AI-powered malicious social botnet. *arXiv preprint* arXiv:2307.16336, 2023. Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, and Dahua Lin. Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models, 2023a. Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, and Yang Zhang. Data poisoning attacks against multimodal encoders. In *International Conference on Machine Learning*, pp. 39299–39313. Pmlr, 2023b. Yuanshun Yao, Xiaojun Xu, and Yang Liu. Large Language Model Unlearning, 2023. Jaime M Yassif, Shayna Korol, and Angela Kane. Guarding against catastrophic biological risks: preventing state biological weapon development and use by shaping intentions. *Health security*, 2023. Qinyuan Ye, Harvey Yiyun Fu, Xiang Ren, and Robin Jia. How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench. *arXiv preprint arXiv:2305.14947*, 2023. Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, and Pradeep K Ravikumar. Representer point selection for explaining deep neural networks. *Advances in neural information processing systems*, 31, 2018. Leon Yin, Davey Alba, and Leonardo Nicoletti. OpenAI's GPT Is a Recruiter's Dream Tool. Tests Show There's Racial Bias. *Bloomberg*, 2024. https://www.bloomberg.com/graphics/2024-openai-gpt-hir ing-racial-discrimination. Accessed on: 9 March, 2024. Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. Do Large Language Models Know What They Don't Know? *arXiv preprint arXiv:2305.18153*, 2023. Julian Yocum, Phillip Christoffersen, Mehul Damani, Justin Svegliato, Dylan Hadfield-Menell, and Stuart Russell. Mitigating Generative Agent Social Dilemmas. In *NeurIPS 2023 Foundation Models for Decision* Making Workshop, 2023. URL https://openreview.net/forum?id=5TIdOk7XQ6. Zheng-Xin Yong, Cristina Menghini, and Stephen H. Bach. Low-Resource Languages Jailbreak GPT-4, 2023. Dingli Yu, Simran Kaur, Arushi Gupta, Jonah Brown-Cohen, Anirudh Goyal, and Sanjeev Arora. Skill-Mix: A flexible and expandable family of evaluations for AI models. *arXiv preprint arXiv:2310.17567*, 2023a. Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, and Xinyu Xing. Assessing Prompt Injection Risks in 200+ Custom GPTs. *arXiv preprint arXiv:2311.11538*, 2023b. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. Spider: A Large-Scale Human-Labeled Dataset for Complex and CrossDomain Semantic Parsing and Text-to-SQL Task. In *Proceedings of the 2018 Conference on Empirical* Methods in Natural Language Processing, pp. 3911–3921, 2018. Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander J. Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias. *ArXiv*, abs/2306.15895, 2023c. URL https://api.semanticscholar.org/CorpusID:259275123. Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, and Zhaopeng Tu. Gpt-4 is too smart to be safe: Stealthy chat with LLMs via cipher. *arXiv preprint arXiv:2308.06463*, 2023a. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. *arXiv preprint arXiv:2308.01825*, 2023b. Vyacheslav I Yukalov and Didier Sornette. Self-organization in complex systems as decision making. Advances in Complex Systems, 17(03n04):1450016, 2014. Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Are transformers universal approximators of sequence-to-sequence functions? In *International Conference on Learning* Representations, 2019. Omar Zahzah. Digital Apartheid: Palestinians Being Silenced on Social Media. *Al Jazeera*, 2021. https: //www.aljazeera.com/opinions/2021/5/13/social-media-companies-are-trying-to-silence-p alestinian-voices. Cat Zakrzewski. Tech Companies Spent Almost $70 million Lobbying Washington In 2021 As Congress Sought To Rein In Their Power, 2022. https://www.washingtonpost.com/technology/2022/01/21/t ech-lobbying-in-washington/. Accessed on: December 3, 2023. Matej Zecevic, Moritz Willig, Devendra Singh Dhami, and Kristian Kersting. Causal Parrots: Large Language Models May Talk Causality But Are Not Causal. *CoRR*, abs/2308.13067, 2023. John M Zelle and Raymond J Mooney. Learning to parse database queries using inductive logic programming. In *Proceedings of the national conference on artificial intelligence*, pp. 1050–1055, 1996. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. *Advances in neural information processing systems*, 32, 2019. Qiusi Zhan, Richard Fang, Rohan Bindu, Akul Gupta, Tatsunori Hashimoto, and Daniel Kang. Removing RLHF Protections in GPT-4 via Fine-Tuning, 2023. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 2021. Hanlin Zhang, Benjamin L Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, and Boaz Barak. Watermarks in the sand: Impossibility of strong watermarking for generative models. arXiv preprint arXiv:2311.04378, 2023a. Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van den Broeck. On the Paradox of Learning to Reason from Data. In *Proceedings of the Thirty-Second International Joint Conference on* Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pp. 3365–3373, 2023b. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically Principled Trade-off between Robustness and Accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 7472–7482. Pmlr, 09–15 Jun 2019. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. *arXiv preprint arXiv:2305.13534*, 2023c. Yiming Zhang and Daphne Ippolito. Prompts should not be seen as secrets: Systematically measuring prompt extraction attack success. *arXiv preprint arXiv:2307.06865*, 2023. Yufeng Zhang, Fengzhuo Zhang, Zhuoran Yang, and Zhaoran Wang. What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization. *arXiv preprint* arXiv:2305.19420, 2023d. Zhaowei Zhang, Fengshuo Bai, Jun Gao, and Yaodong Yang. Measuring Value Understanding in Language Models through Discriminator-Critique Gap. *arXiv preprint arXiv:2310.00378*, 2023e. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, ..., and Ji-Rong Wen. A Survey of Large Language Models, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLMas-a-judge with MT-Bench and Chatbot Arena, 2023. Stephan Zheng, Alexander Trott, Sunil Srinivasa, David C. Parkes, and Richard Socher. The AI Economist: Taxation policy design via two-level deep multiagent reinforcement learning. *Science Advances*, 8(18), may 2022. doi: 10.1126/sciadv.abk2607. Ruiqi Zhong, Charlie Snell, Dan Klein, and Jason Eisner. Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL. In *Proceedings of EMNLP*, December 2023a. Ziqian Zhong, Ziming Liu, Max Tegmark, and Jacob Andreas. The clock and the pizza: Two stories in mechanistic explanation of neural networks. *arXiv preprint arXiv:2306.17844*, 2023b. Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification. *arXiv preprint arXiv:2308.07921*, 2023a. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. *arXiv preprint arXiv:2305.11206*, 2023b. Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi. Teaching algorithmic reasoning via in-context learning. *arXiv preprint arXiv:2211.09066*, 2022. Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Josh M. Susskind, Samy Bengio, and Preetum Nakkiran. What Algorithms can Transformers Learn? A Study in Length Generalization. CoRR, abs/2310.16028, 2023c. Jiawei Zhou, Yixuan Zhang, Qianni Luo, Andrea G Parker, and Munmun De Choudhury. Synthetic lies: Understanding ai-generated misinformation and evaluating algorithmic and human solutions. In *Proceedings* of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–20, 2023d. Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain Generalization: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4396–4415, 2023e. doi: 10.1109/ tpami.2022.3195549. Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, et al. How FaR Are Large Language Models From Agents with Theory-of-Mind? *arXiv preprint arXiv:2310.03051*, 2023f. J. Zhu, H. Yan, and T. L. Griffiths. Recovering Mental Representations from Large Language Models with Markov Chain Monte Carlo. *arXiv preprint arXiv:2401.16657*, 2024. Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou, Jian Tang, Dale Schuurmans, and Hanjun Dai. Large Language Models can Learn Rules. *CoRR*, abs/2310.07064, 2023. Zining Zhu and Frank Rudzicz. An information theoretic view on selecting linguistic probes. *arXiv preprint* arXiv:2009.07364, 2020. Terry Yue Zhuo. Large Language Models Are State-of-the-Art Evaluators of Code Generation. *arXiv preprint* arXiv:2304.14317, 2023. Daniel Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Benjamin Weinstein-Raun, Daniel de Haas, et al. Adversarial training for high-stakes reliability. *Advances in Neural Information Processing Systems*, 35:9274–9286, 2022. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, ..., and Dan Hendrycks. Representation Engineering: A Top-Down Approach to AI Transparency, oct 2023a. URL http://arxi v.org/abs/2310.01405. arXiv:2310.01405 [cs]. Andy Zou, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. Universal and Transferable Adversarial Attacks on Aligned Language Models, 2023b. Egor Zverev, Sahar Abdelnabi, Mario Fritz, and Christoph H Lampert. Can LLMs Separate Instructions From Data? And What Do We Even Mean By That? *arXiv preprint arXiv:2403.06833*, 2024. Seán Ó hÉigeartaigh, Yolanda Lannquist, Alexandru Marcoci, Jaime Sevilla, Mónica Alejandra Ulloa Ruiz, Yaqub Chaudhary, Tim Schreier, Zach Stein-Perlman, and Jeffrey Ladish. Do Companies' AI Safety Policies Meet Government Best Practice?, 2023. http://lcfi.ac.uk/news-and-events/news/2023/oc t/31/ai-safety-policies/.
Review 1: Summary: This paper is review and positional paper. It proposes 18 foundational challenges in assuring the alignment and safety of LLMs, which can be categorized into three: (1) scientific understanding of LLMs, (2) development and deployment methods, and (3) sociotechnical challenges. This paper reviews state-of-the-art techniques of LLMs, and then highlights current challenges and research questions. Especially, in each challenge and research question, it provides motivation, background, related work, and future research directions. Moreover, the paper presents a well-organized structure. I believe the paper would be beneficial for broad community from Machine Learning and Natural Languae Processing to Social Science and Policy. Strengths and Weaknesses: ### Strengths - The paper is well-written, structured, and organized. This is a huge volume, and, like a textbook, the reader guide is beneficial for them to follow. Also, research questions and their corresponding sub-sections are well-linked. - This paper summarizes current challenges and provides valuable 200+ research questions with background and references. Therefore it will contribute to the community as well as non-domain expertise, by encouraging them to figure out the status quo challenges and to push the boundary. - The paper properly includes its limitations. ### Weakness - This paper covers broad technical topics from fundamentals to safety training and attack, but several topics are absent such as - Hallucinations, Uncertainty, and Calibration - Sovereignty of Data and Models - Multi-linguality, Culture, and Social Bias. - In Chapter 2, fundamental scientific understanding is introduced, which is helpful and well-organized. However, some sub-chapters — e.g., scaling law, emergent ability, and reasoning — are hard to connect their relation to the safety of LLMs, as far as I understand. Wrapping up paragraphs would emphasize their importance to readers. - In overall and especially in chapter 4.5, the paper is US, UK, and EU-centric. Surveying and stating other nations and companies would broaden the sights of readers and improve the minority’s visibilities. - When it comes to the definition of “Safe” and “Harms”, the authors state them as follows in the section 1.2: > We consider a system safe to the extent ***it is unlikely to contribute to unplanned, undesirable harms***(Leveson, 2016). This is a somewhat expansive definition, accounting not only for the technical properties of the system, but also the way in which it is (or is likely to be) deployed and used (Weidinger et al., 2023b), though it is narrow in the sense that it does not consider intentional harm, and it does not set out any criteria for what constitutes harm. - As the authors mention, the terms are somewhat expansive and abstract, but stating specific definitions of “safe and harm” which are currently used for deployments, would make readers easy to make the connections. - These are mild suggestions. - Although the paper assumes that its readers have basic knowledge, basic equations are not presented. - Since this paper covers huge topics, refer other related review papers that cover each sub-topic would help its reader to further explore the domain. - These days, "post-training" is wildly used in the way to call collectively "supervised fine-tuning" and "reinforcement learning by human feedbacks", or whatever training process after the pre-training. What about using this term or just mentioning it? Requested Changes: Please refer to the weakness in the previous section. Broader Impact Concerns: This paper does not include a “Broader Impact Concerns” section. However, it adequately addresses its limitations in section 5.1. One additional concern is that readers might become overly focused on the topics and research questions presented in this paper. It is important to remind them not to limit themselves to these agendas and to emphasize that there are still uncovered topics that exist. ================================================== Review 2: Summary: This paper presents 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are described in three different categories: 7 in scientific understanding of LLMs, 6 in development and deployment methods, and 5 sociotechnical challenges. Based on these 18 identified challenges, 200+ concrete research questions are proposed. Strengths and Weaknesses: Strengths: The information presented in this paper is huge, and the presentation is well organized. The readers would obtain and learn a lot of knowledge in the field of LLMs and touch the research frontier of LLMs. Weaknesses: 1. Finetuning methods are discussed in section 3.2 and don't distinguish Supervised fine-tuning (SFT) or Reinforcement learning from human feedback (RLHF). SFT and RLHF are the major techniques for training AI systems to align with human goals and meet safety requirement, unfortunately the limitations and challenges of either SFT of RLHF are briefly mentioned and not covered in detail. One important paper is missing: Stephen Casper et al., Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback, Transactions on Machine Learning Research, December, 2023. 2. In Sections 2.2-2.3, the authors discuss the Limitations of Benchmarking for Measuring Capabilities and Assuring Safety A best paper in NeurRIPS 2023 and Formalizing, Forecasting, and Explaining Emergence. But an paper is missing: Rylan Schaeffer, Brando Miranda, Sanmi Koyejo, Are Emergent Abilities of Large Language Models a Mirage?, NeurRIPS, The views and opinions should be discussed in this paper. Requested Changes: It will be great if the authors could address the weaknesses and expand the paper along this directions. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This is a survey + position paper on foundational challenges in safety and alignment of LLMs. It organizes the challenges along the dimensions of understanding the science, practical development and deployment methods and societal impact. It defines 18 top-level challenges and derives 215 specific problem statements under those. The coverage of the field, and clarity and depth of the presentation is noteworthy. Strengths and Weaknesses: Strengths + LLMs have made rapid progress in their capabilities and practical use. This paper does a good job of analyzing safety and alignment challenges posed by this progress and weaving disparate aspects into a common theme. + The paper is well-organized into three distinct aspects which together provide a broad view of the area. Though a long paper, the quality of exposition is consistently good and the summary tables are helpful. Weaknesses - The paper has a good opportunity to make the work even more informative and accessible by designing informative and appealing illustrations. Please see the requested changes for some suggestions. - The paper does not include mathematic formulations and discussion of traditional safety approaches in system design. Please see the requested changes. Requested Changes: * The paper surprisingly doesn't include any illustrations or graphics. A paper of this density of concepts and breadth of coverage should include good illustrations, e.g., a mindmap of how different concepts are related, timeline and examples of different LLM capabilities and alignment/safety approaches, and so on. I would highly recommend that the authors do this. * I am aware that the paper is already long. It's strength is that it is easy to read and perhaps accessible to larger community. However, it would be nice to include central mathematical formulations to bring some rigor without getting overly technical. For example, I would consider putting some key problem statements within separate boxes that can be referenced or skipped based on the reader's preference. There are also some concepts that the authors do not introduce (e.g., out-of-context learning, which is referenced in multiple places but not described the first time it is introduced). * Some other topics that the authors may consider including are constrained decoding and verified training. * In my view, the paper misses links to the literature on safety related techniques developed in the pre-LLM world. I am not suggesting exhaustive treatment but the issue of trustworthy-ness of systems has been around since the beginning of computing. So some references to topics like safety of control systems and logic-based approaches to software verification would be useful. In fact, I would consider "taking learnings from pre-LLM approaches to safe and trustworthy system design and applying them holistically to LLM-based systems" to be also a challenge worth including. Of the above suggestions, I consider the first to be essential (it doesn't change anything, only makes the work more accessible). The others are up to the authors to decide, but if they decide not to act on those, I would recommend that these points be included in the limitations section for the benefit of the readers. Broader Impact Concerns: The paper does not need a broader impact statement IMO. ==================================================
# An Explicit Expansion Of The Kullback-Leibler Divergence Along Its Fisher-Rao Gradient Flow Carles Domingo-Enrich *cd2754@nyu.edu* Courant Institute of Mathematical Sciences New York University Aram-Alexandre Pooladian *aram-alexandre.pooladian@nyu.edu* Center for Data Science New York University Reviewed on OpenReview: https: // openreview .*net/ forum? id= 9pWjgQ3y85* ## Abstract Let V∗ : R d → R be some (possibly non-convex) potential function, and consider the probability measure π ∝ e −V∗ . When π exhibits multiple modes, it is known that sampling techniques based on Wasserstein gradient flows of the Kullback-Leibler (KL) divergence (e.g. Langevin Monte Carlo) suffer poorly in the rate of convergence, where the dynamics are unable to easily traverse between modes. In stark contrast, the work of Lu et al. (2019; 2022) has shown that the gradient flow of the KL with respect to the Fisher-Rao (FR) geometry exhibits a convergence rate to π that is *independent* of the potential function. In this short note, we complement these existing results in the literature by providing an explicit expansion of KL(ρ FR t ∥π) in terms of e −t, where (ρ FR t)t≥0 is the FR gradient flow of the KL divergence. In turn, we are able to provide a clean asymptotic convergence rate, where the burn-in time is guaranteed to be finite. Our proof is based on observing a similarity between FR gradient flows and simulated annealing with linear scaling, and facts about cumulant generating functions. We conclude with simple synthetic experiments that demonstrate our theoretical findings are indeed tight. Based on our numerical findings, we conjecture that the asymptotic rates of convergence for Wasserstein-Fisher-Rao gradient flows are possibly related to this expansion in some cases. ## 1 Introduction Sampling from a distribution with an unknown normalization constant is a widespread task in several scientific domains. Namely, the goal is to generate samples from a probability measure π(x) ∝ e −V∗(x), where V∗ : R d → R is some (possibly non-convex) potential function that is available for queries. In most cases, the target measure π is only known up to the normalization constant. Applications of sampling from π include Bayesian statistics, high-dimensional integration, differential privacy, statistical physics and uncertainty quantification; see Gelman et al. (1995); Robert et al. (1999); MacKay (2003); Johannes & Polson (2010); Von Toussaint (2011); Kobyzev et al. (2020); Chewi (2022) for thorough treatments. Recent interest in the task of sampling stems from the following paradigm: sampling is nothing but optimization over the space of probability measures (Wibisono, 2018). This interpretation is due to the connection between the celebrated work of Jordan, Kinderleher, and Otto (Jordan et al., 1998) and the Langevin diffusion dynamics given by $$\mathrm{d}X_{t}=-\nabla V_{\ast}(X_{t})\,\mathrm{d}t+{\sqrt{2}}\,\mathrm{d}B_{t}\;,$$ √2 dBt , (1) where dBt is Brownian motion.1Indeed, the work of Jordan et al. (1998) demonstrates that the path in the space of proabability measures given by the law of Eq. (1) is the same as the Wasserstein gradient flow (i.e. steepest descent curve in the Wasserstein metric) of the Kullback-Leibler (KL) divergence $$\mathrm{KL}(\rho\|\pi)=\int\log{\frac{\rho}{\pi}}\,\mathrm{d}\rho\,.$$ We write (ρW t )t≥0 ⊆ P(R d) for the law of the path given by Eq. (1) (see Section 2.2.1 for a precise definition). A central problem in this area has been to bound the convergence rate of ρW tto π in certain similarity metrics (e.g. the KL divergence itself, or the Wasserstein distance) under different conditions on π. These bounds translate to convergence rates for the Langevin Monte Carlo (LMC) sampling algorithm (Dalalyan & Tsybakov, 2012; Vempala & Wibisono, 2019; Durmus et al., 2021; Chewi et al., 2022), upon accounting for discretization errors. The classical result is as follows: assuming that π satisfies a Log-Sobolev inequality (LSI) with constant CLSI > 0, we obtain the following convergence rate (Stam, 1959; Gross, 1975; Markowich & Villani, 1999) $$\mathrm{KL}(\rho_{t}^{\mathrm{W}}\|\pi)\leq\mathrm{KL}(\rho_{0}^{\mathrm{W}}\|\pi)e^{-{\frac{2t}{C i s\pi}}}\,,$$ CLSI , (2) which holds for all t ≥ 0. Recall that π satisfies an LSI if for all smooth test functions g, $$\left(2\right)$$ $$\mathrm{ent}_{\pi}(f^{2})\leq2C_{\mathrm{LSI}}\mathbb{E}_{\pi}\|\nabla f\|^{2}\,,$$ $$(3)$$ 2, (3) where entπ(g) := Eπ(g log g) − Eπg log Eπg. For example, when V∗ is α-strongly convex, an LSI with CLSI = 1/α holds. LSI hold more generally, but sometimes with very large constants CLSI. Indeed, for multimodal distributions such as mixtures of Gaussians, CLSI scales exponentially in the height of the potential barrier between modes (Holley & Stroock, 1987; Arnold et al., 2000). This impacts convergence at the discrete-time level, and thus hinders our ability to generate samples using LMC. Another geometry that gives rise to gradient flows over probability measures is the *Fisher-Rao* (FR) geometry; see Section 2.2.2 for definitions. Similar to the case of Wasserstein gradient flows, we let (ρ FR t)t≥0 be the FR gradient flow of the KL divergence. Recent work by Lu and collaborators has shown that the convergence ρ FR t → π occurs at a rate that is *independent* of the potential function V∗. This is in stark contrast to the case of Wasserstein gradient flows, where the rate of convergence is intimately related to the structure of V∗ through the LSI constant. In their first work, Lu et al. (2019) show that for any δ ∈ (0, 1 4 ] there exists a t∗ ≳ log(δ 3) such that for all t ≥ t∗, $$\mathrm{KL}(\rho_{t}^{\mathrm{FR}}\|\pi)\leq\mathrm{KL}(\rho_{0}^{\mathrm{FR}}\|\pi)e^{-(2-3\delta)(t-t_{*})}\,,$$ $$\left(4\right)$$ −(2−3δ)(t−t∗), (4) where they require a warm-start condition KL(ρ FR 0 ∥π) ≤ 1, and assumption (B) (see Section 3). In Lu et al. (2022), the authors show that the KL divergence is always contracting under (ρ FR t)t≥0 even in the absence of a warm-start, though with a worse rate. Combined, these two results provide the first continuous-time convergence rates of the gradient flow of the KL divergence under the FR geometry to π. Merging both these geometries gives rise to the well-defined *Wasserstein-Fisher-Rao* (WFR) geometry. The WFR geometry has recently been used to analyse the convergence dynamics of parameters of neural networks (Chizat, 2022), mean-field games (Rotskoff et al., 2019), and has shown to be useful in statistical tasks such as Gaussian variational inference (Lambert et al., 2022), and identifying parameters of a Gaussian mixture model (Yan et al., 2023). In the context of sampling, particle-based methods that follow dynamics governed by WFR gradient flow of the KL, written (ρWFR t)t≥0, are known to escape the clutches of slow-convergence that plague the Wasserstein geometry. A simple observation (Lu et al., 2022, Remark 2.4) gives the following continuous-time convergence rate for t ≥ t∗: $$\mathrm{KL}(\rho_{t}^{\mathrm{WFR}}\|\pi)\leq\mathrm{KL}(\rho_{0}^{\mathrm{WFR}}\|\pi)\operatorname*{min}\left\{e^{-C_{t\pi}t},e^{-(2-3\delta)(t-t_{*})}\right\}\,,$$ −CLSIt, e−(2−3δ)(t−t∗)o, (5) 1This equation is to be understood from the perspective of Itô calculus. $$\left(5\right)$$ where δ and t∗ are as in the FR convergence rate (4). Loosely speaking, this "decoupled rate" is a consequence of the Wasserstein and FR geometries being orthogonal to one another; this is made precise in Gallouët & Monsaingeon (2017). As elegant as this last connection may seem, the convergence rate in Eq. (4), and consequently Eq. (5), should appear somewhat unsatisfactory to the reader. It raises the natural question of whether or not the factor of δ appearing in the rate is avoidable, and whether the upper bound in Eq. (4) is tight. ## 1.1 Main Contributions We close this gap for the KL divergence and any q-Rényi divergence. Using a different proof technique than existing work, we prove the following asymptotic rate of convergence for the flow (ρ FR t)t≥0, namely, $${\rm KL}(\rho_{t}^{\rm FR}\|\pi)=\frac{1}{2}{\rm Var}_{\pi}\left(\log\frac{\rho_{0}^{\rm FR}}{\pi}\right)e^{-2t}+O(e^{-3t})\,,\tag{6}$$ and a similar result holds for all q-Rényi divergences. Our assumptions are weaker to that of prior work, and given that this is a tight asymptotic convergence rate, we conjecture that the assumptions are likely unavoidable in the large t regime. Our proof technique provides an explicit expansion of KL(ρ FR t ∥π) (and q-Rényi) in terms of e −t. We supplement our finding with simulations for all three geometries, indicating that our convergence rate is in fact tight for Fisher-Rao gradient flows, and sheds light on possible conjectures for the convergence rate of WFR gradient flows. ## Notation For a probability measure ρ ∈ P(R d) and a function f : R d → R, we sometimes use the shorthand ⟨f⟩ρ := Rf dρ. We let log(·) denote the natural logarithm, and we use the standard shorthand notation f = O(g), meaning there exists a constant C > 0 such that f ≤ Cg. ## 2 Background 2.1 Definitions The study of gradient flows has a rich history in both pure and applied mathematics. The development of the relevant calculus to understand gradient flows is not the purpose of this note, and we instead provide a barebones introduction. However, we strongly recommend the interested reader consult standard textbooks on the topic, namely Ambrosio et al. (2005), and the first chapter of Chewi (2022). Let P(R d) be the space of probability measures over R d. A functional F : P(R d) → R is defined on the space of probability measures, with ρ *7→ F*(ρ) ∈ R. We call δF(ρ) the *first variation of* F at ρ if for a signed measure η such that Rdη = 0, it holds that $$\operatorname*{lim}_{\varepsilon\to0}{\frac{{\mathcal{F}}(\rho+\varepsilon\eta)-{\mathcal{F}}(\rho)}{\varepsilon}}=\int\delta{\mathcal{F}}(\rho)\,\mathrm{d}\eta\,.$$ The Kullback-Leibler (KL) divergence of a measure ρ with respect to some fixed target measure π is defined as KL(ρ∥π) = Rlog ρπ dρ for ρ absolutely continuous with respect to π. For π ∝ e −V∗ , the first variation of the KL divergence is given by $$\delta\mathrm{KL}(\cdot\|\pi)(\rho)(x)=\log\frac{\rho(x)}{\pi(x)}=\log\rho(x)+V_{*}(x)+\log Z_{1}\;,$$ where Z1 is the normalizing constant for π. A more general notion of dissimilarity between probability measures is the q-Rényi divergence: for q ∈ [1, ∞], we define Rq(ρ∥π) to be the q-Rényi divergence with respect to π, given by $${\mathcal R}_{q}(\rho\|\pi):={\frac{1}{q-1}}\log\int\left({\frac{\rho}{\pi}}\right)^{q}\,\mathrm{d}\pi\,,$$ qdπ , (9) $$\left(7\right)$$ $$({\boldsymbol{\delta}})$$ $$({\mathfrak{g}})$$ for measures ρ that are absolutely continuous with respect to π. Rq recovers the KL divergence in the limit q → 1, and when q = 2, R2(ρ∥π) = log(χ 2(ρ∥π) + 1), where χ 2is the chi-squared divergence, written explicitly as $$\chi^{2}(\rho\|\pi)=\mathrm{Var}_{\pi}\left({\frac{\rho}{\pi}}\right)=\int\left({\frac{\rho}{\pi}}\right)^{2}\,\mathrm{d}\pi-1\,.$$ ## 2.2 Gradient Flows Of The Kullback-Leibler Divergence 2.2.1 Wasserstein Gradient Flow In its *dynamic formulation*, the 2-Wasserstein distance between two probability measures ρ0, ρ1 with bounded second moments can be written as (Villani, 2008; Benamou & Brenier, 2000) $$\mathrm{W}_{2}^{2}(\rho_{0},\rho_{1}):=\inf_{(\rho_{t},v_{t})}\int_{0}^{1}\int\|v_{t}(x)\|^{2}\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t\quad\text{s.t.}\quad\partial_{t}\rho_{t}+\nabla\cdot(\rho_{t}v_{t})=0\,,\tag{10}$$ where (ρt)t∈[0,1] is a curve of probability densities over R d, and (vt)t∈[0,1] is a curve of L 2(R d) d vector fields. The constraint is known as the continuity equation, with endpoints ρ0 and ρ1. For a functional F : P(R d) → R, the *Wasserstein gradient flow* is the curve of measures (ρW t )t≥0 that satisfies the continuity equation with the vector field replaced by the steepest descent under the Wasserstein geometry, $$v_{t}=-\nabla_{W_{2}}{\mathcal{F}}(\rho_{t}^{\mathrm{W}}):=\nabla\delta{\mathcal{F}}(\rho_{t}^{\mathrm{W}})\,,$$ $$(11)$$ where the last equation is simply the (standard) spatial gradient of the first variation of F. Plugging in the expression for the first variation of the KL divergence (8), we see that the law of the Langevin diffusion is given by ρW t which satisfies $$\partial_{t}\rho_{t}^{\mathrm{W}}=\nabla\cdot\left(\rho_{t}^{\mathrm{W}}(\nabla\log\rho_{t}^{\mathrm{W}}+\nabla V_{*})\right)\,.$$ This equation may be rewritten as ∂tρW t = ∇ · (∇V∗ρW t ) + ∆ρW t , which one readily identifies as the Fokker- Planck equation for the potential V∗. The equation describes the evolution of the distribution of a particle that moves according to the stochastic differential equation 1. At the particle level, the key aspect of Wasserstein gradient flows is that they model particle *transport*, and that makes them useful for highdimensional applications such as LMC. In what follows, we will sometimes abbreviate Wasserstein gradient flow to W-GF. ## 2.2.2 Fisher-Rao Gradient Flow The Fisher-Rao distance, or Hellinger-Kakutani distance, between probability measures has a long history in statistics and information theory (Hellinger, 1909; Kakutani, 1948). It can be defined as (Bogachev, 2007; Gallouët & Monsaingeon, 2017) $$\mathrm{FR}^{2}(\rho_{0},\rho_{1}):=\operatorname*{inf}_{(\rho_{t},r_{t})}\int_{0}^{1}\int r_{t}(x)^{2}\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t\quad\mathrm{s.t.}\quad\partial_{t}\rho_{t}=r_{t}\rho_{t}\,,$$ where (ρt)t∈[0,1] is again a curve of probability measures, and (rt)t∈[0,1] is a curve of L 2(R d) functions. Together, they satisfy the prescribed equation, with endpoints equal to ρ0 and ρ1. The Fisher-Rao gradient flow of the KL divergence, also known as *Birth-Death dynamics*, is the curve of measures (ρ FR t)t≥0 that satisfies (Gallouët & Monsaingeon, 2017; Lu et al., 2019) $$\partial_{t}\rho_{t}^{\mathrm{FR}}=-\rho_{t}^{\mathrm{FR}}\alpha_{t}\,,\quad\alpha_{t}:=\log{\frac{\rho_{t}^{\mathrm{FR}}}{\pi}}-\mathrm{KL}(\rho_{t}^{\mathrm{FR}}\|\pi)\,.$$ The first term adjusts mass (i.e. gives birth to or kills mass) according to the log-ratio of ρ FR t and the target measure π. The last term preserves the total mass, so that ρ FR t ∈ P(R d) for all time. Expanding this equation, we have $$\partial_{t}\rho_{t}^{\mathrm{FR}}(x)=-\big(\log(\rho_{t}^{\mathrm{FR}}(x))+V_{*}(x)-\big<\log(\rho_{t}^{\mathrm{FR}})+V_{*}\big>_{\rho_{t}^{\mathrm{FR}}}\big>\rho_{t}^{\mathrm{FR}}(x).$$ $$(12)$$ We henceforth omit the superscript FR for the Fisher-Rao gradient flow of the KL divergence unless the notation becomes ambiguous. For short-hand, we make use of the abbreviation FR-GF for Fisher-Rao gradient flows. The FR-GF may be simulated using a system of weighted particles (see Appendix B). Unlike for the W-GF, in this case the positions of the particles are fixed; only the weights change over time. Hence, to simulate the FR-GF one is forced to grid the underlying space R d. This is feasible only for small dimensions d. Consequently, FR-GFs cannot be simulated in high dimensions, which makes them impractical for sampling applications. ## 2.2.3 Wasserstein-Fisher-Rao Geometry Gradient Flow The Wasserstein-Fisher-Rao distance between probability measures arises as a combination of the Wasserstein and the Fisher-Rao distances (Chizat et al., 2018; 2015; Kondratyev et al., 2016; Liero et al., 2016; 2018). It is defined as $$\mathrm{WFR}^{2}(\rho_{1},\rho_{2})\coloneqq\inf_{(\rho_{1},v_{t},r_{t})}\int_{0}^{1}\int(\|v_{t}(x)\|^{2}+r_{t}(x)^{2})\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t\quad\mathrm{s.t.}\quad\partial_{t}\rho_{t}+\nabla\cdot(\rho_{t}v_{t})=r_{t}\rho_{t}\,,$$ where, for each t ∈ [0, 1], the triple (ρt, vt, rt) lives in P(R d) × L 2(R d) d × L 2(R d), and they simultaneously satisfy the constraint equation, which has endpoints ρ0 and ρ1, as well. Similarly, the Wasserstein-Fisher-Rao gradient flow of the KL divergence is the solution of PDE that incorporates the terms in the Wasserstein and Fisher-Rao gradient flows (Eq. (11) and Eq. (12)): $$\partial_{t}\rho_{t}^{\rm WFR}=\nabla\cdot\left(\rho_{t}^{\rm WFR}(\nabla\log\rho_{t}^{\rm WFR}+\nabla V_{*})\right)-\left(\log(\rho_{t}^{\rm WFR})+V_{*}-\left\langle\log(\rho_{t}^{\rm WFR})+V_{*}\right\rangle_{\rho_{t}^{\rm WFR}}\right)\rho_{t}^{\rm WFR}\tag{13}$$ Similar to the other geometries, we write WFR-GF as shorthand for Wasserstein-Fisher-Rao gradient flow At the particle level, WFR-GFs are able to capture both *transport* and *weight updates*, which is why they enjoy a convergence rate that at least matches the better rate between W- and FR-GFs (recall Eq. (5)), and is clearly superior in practice in some instances. Hence, any improvement in the convergence analysis of either W- or FR-GFs translates to improving our understanding of WFR-GFs. ## 2.3 Simulated Annealing Dynamics Simulated annealing is a technique seen in several works when attempting to either optimize a function or sample from a multimodal probability distribution, and has a long history (Pincus, 1970; Kirkpatrick et al., 1983), and plays a crucial role in our analysis. In what follows, we introduce the annealing path with linear scaling, and conclude with a proposition. Consider the time-dependent measure (µτ )τ∈[0,1] corresponding to the annealing path, with *linear scaling*, initialized at the measure µ0 = ρ0 ∝ e −V0. By definition, µτ admits the density $$\mu_{\tau}(x)={\frac{e^{-\tau(V_{\tau}(x)-V_{0}(x))-V_{0}(x)}}{Z_{\tau}}},\quad Z_{\tau}=\int_{\mathbb{R}^{d}}e^{-\tau(V_{\tau}(x)-V_{0}(x))-V_{0}(x)}\,d x,$$ for τ ∈ [0, 1]. Note that indeed, µ1 = π. To this end, it will be convenient to rewrite Eq. (14) in terms of the log-density of µτ . Remark that $$\log(\mu_{\tau}(x))=-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)-\log Z_{\tau}\,.$$ One can check that the pointwise derivative of the density µτ (with respect to τ ) is $$\partial_{\tau}\mu_{\tau}(x)=-(V_{*}(x)-V_{0}(x)-\langle V_{*}-V_{0}\rangle_{\mu_{\tau}})\mu_{\tau}(x)\,.$$ )µτ (x). (16) $$(14)$$ $$(15)$$ $$(16)$$ From this, we obtain that log(µτ (x)) + V∗(x) −log(µτ ) + V∗ µτ = −τ (V∗(x) − V0(x)) − V0(x) − ⟨−τ (V∗ − V0) − V0 + V∗⟩µτ + V∗(x) − ⟨V∗⟩µτ = −τ (V∗(x) − V0(x)) − V0(x) + V∗(x) − ⟨−τ (V∗ − V0) − V0 + V∗⟩µτ = (1 − τ )V∗(x) − V0(x)− (1 − τ )⟨V∗ − V0⟩µτ = (1 − τ )V∗(x) − V0(x) − ⟨V∗ − V0⟩µτ . $$(17)$$ $$(18)$$ $$(19)$$ Note that in the first equality, we used that the log-partition is a constant and gets cancelled out by the difference of the two terms. Consequently, Eq. (16) can be rewritten, for τ ∈ (0, 1), as $$\partial_{\tau}\mu_{\tau}(x)=-\frac{1}{1-\tau}\big(\log(\mu_{\tau}(x))+V_{*}(x)-\big<\log(\mu_{\tau})+V_{*}\big>_{\mu_{\tau}}\big>)\mu_{\tau}(x).$$ A first observation is that that the linear schedule τ in the exponent of Eq. (14) results in dynamics that resemble the Fisher-Rao gradient flow of the KL divergence, up to a reparameterization that can be made explicit. Indeed, if one compares Eq. (18) with Eq. (12), the only difference is the factor 1 1−τ in the righthand side of Eq. (18). Since the solution of the Fisher-Rao gradient flow of the KL divergence is unique (see Proposition 4 in Appendix A), an appropriate time reparameterization of the annealed dynamics (14) will yield the solution (12). We summarize this observation in the following proposition, which we were unable to find a citation for in the literature. Proposition 1. Let (µτ )τ∈[0,1] *be as defined in Eq.* (14). The Fisher-Rao gradient flow (ρt)t≥0 *of KL*(ρ∥π) (i.e. solving Eq. (12)) is given by ρt = µ1−e−t . Proof. If we write t as a function of τ , we have that $$\partial_{\tau}\rho_{t(\tau)}=\partial_{t}\rho_{t(\tau)}\frac{d t}{d\tau}(\tau)=-\frac{d t}{d\tau}(\tau)\big(\log(\rho_{t(\tau)}(x))+E(x)-\big<\log(\rho_{t(\tau)})+E\big>_{\rho_{t(\tau)}}\big)\rho_{t(\tau)}(x).$$ ρt(τ)(x). (19) Identifying ρt(τ) with ρτ , and establishing a direct comparison with Eq. (18), we obtain that for Eq. (19) to hold, t(τ ) must fulfill dt dτ (τ ) = 1 1−τ . With the initial condition that τ (0) = 0, this differential equation has the following unique solution: $$t(\tau)=\int_{0}^{\tau}{\frac{1}{1-s}}\,d s=-\log(1-\tau)\,.$$ $$(20)$$ $\square$ That is, we have that t(τ ) = − log(1 − τ ), or equivalently, τ (t) = 1 − e −t. ## 2.4 Cumulants And Their Power Series Our core argument hinges on observing a relation between the above gradient flows and their connection to cumulants of a random variable. Recall that for a random variable Y , its *cumulant-generating function* to be KY (z) = log E[e Y z]. The n th cumulant κn of the random variable Y is defined as the n th derivative of KY evaluated at z = 0, that is, κn = K (n) Y(0). Similar to moment-generating functions, if KY (z) is finite in some neighborhood of z ∈ (−ϵ0, ϵ0), then it holds that KY is smooth (in fact, holomorphic) (see e.g. (Shiryaev, 1984, Section II.12.8). Moreover, KY (z) admits the following infinite series expansion $$K_{Y}(z)=\sum_{n\geq1}{\frac{\kappa_{n}}{n!}}z^{n}\,.$$ In particular, one can easily check that κ1 = E[Y ] and κ2 = Var(Y ). 6 ## 3 Main Result The goal of this section is to prove our main result, which is an explicit expansion of the KL divergence in terms of log-cumulants of the random variable log ρ0(X) π(X) where X ∼ π. We make the following assumptions throughout, and we will make their uses explicit when necessary. $$(\mathbf{A1})\ V_{*}\in L_{1}(\pi),$$ (A2) There exists α ∈ R+, such that infxρ0(x) π(x) 1+α > 0. Assumption **(A1)** ensures that π has finite differential entropy, and is a relatively weak condition. **(A2)** asks that at least some mass is initially placed along the support of π. **(A2)** is, however, a much weaker assumption that what is currently used in the literature. To be precise, Lu et al. (2019; 2022) assume a particular case of **(A2)**, namely $$\mathbf{(B)}\ \mathrm{There}\ \mathbf{1}$$ (B) There exists M > 0 such that infx ρ0(x) π(x) ≥ e −M. This is the same as **(A2)** when α is constrained to be 0, and they make explicit use of the constant M > 0 in their rate. Note that **(A2)** is weaker the larger α is, as π(x) 1+α decreases faster. As a comparison, if ρ0 and π are Gaussians, **(A2)** covers the setting where both have arbitrary means and covariances, while constraining α = 0 only covers the cases in which the covariance matrix of ρ0 is strictly larger than the one of π in the positive definite order. The following theorem is our main contribution. While here we have stated an asymptotic expression, in fact a more general expression is available as an infinite power series for times large enough, and appears explicitly in the proof (Eq. (29) and Eq. (30)). Theorem 1. Suppose **(A1)** and **(A2)** *hold. Then for any* q ∈ (1, ∞), KL(ρt∥π) = κ2 2 e −2t + O(e −3t), and Rq(ρt∥π) = qκ2 2 e −2t + Oq(e −3t), (21) where κ2 = Varπ log ρ0 π *. The remainder terms* O(e −3t) and Oq(e −3t) depend on the initialization ρ0 and on the target π. Note that the result (4) by Lu et al. (2019) implies an asymptotic rate very close to e −2t, but there are significant differences between both: beyond the fact that our result holds under much weaker assumptions, we characterize exactly the asymptotic decay of KL(ρt∥π), while they only provide an upper-bound that becomes less tight as δ goes to zero (because the constant t∗ increases). Remark 1. The coefficient κ2 is nothing more than the variance under π of the first-variation of the KL divergence at ρ0 *(recall Eq.* (8)). ## 3.1 Proof Given potentials V∗ and V0 such that π ∝ e −V∗ and ρ0 ∝ e −V0, we define the random variable $$(21)$$ $$Y:=V_{*}(X)-V_{0}(X){\mathrm{~where~}}X\sim\pi\,,$$ Y := V∗(X) − V0(X) where X ∼ π , (22) Note that we can set V∗(x) = − log π(x) and V0(x) = − log ρ0(x), which means that Y = log ρ0(X) π(X) , but adding any constant term to V∗ and V0 (or solely V0) also yields a valid construction of Y . Proposition 2. Let Y *be as in Eq.* (22). Let (µτ )τ∈[0,1] *be follow the simulated annealing dynamics from* Eq. (14). It holds that $$\begin{array}{l}{{K L(\mu_{\tau}\|\pi)=(1-\tau)K_{Y}^{\prime}(1-\tau)-K_{Y}(1-\tau)\,,}}\\ {{\mathcal{R}_{q}(\mu_{\tau}\|\pi)=\frac{1}{q-1}K_{Y}(q(1-\tau))-\frac{q}{q-1}K_{Y}(1-\tau)\,.}}\end{array}$$ $$(22)$$ $$(23)$$ $$(24)$$ 7 Proof.: We first identify the following relationship, which arises from a simple manipulation of Eq. (14) $$K_{Y}(1-\tau)=\log\bigg{(}\int e^{(1-\tau)(V_{\tau}(x)-V_{b}(x))}\frac{e^{-V_{\tau}(x)}}{Z_{1}}\,dx\bigg{)}=\log\bigg{(}\frac{\int e^{-\tau(V_{\tau}(x)-V_{b}(x))-V_{b}(x)}\,dx}{Z_{1}}\bigg{)}=\log Z_{\tau}-\log Z_{1}.\tag{25}$$ $$(26)$$ $$(27)$$ Using this expression, we can expand the KL divergence between µτ and π as follows: $$\mathrm{KL}(\mu_{\tau}|\pi)=\int\log\frac{\mu_{\tau}}{\pi}\mu_{\tau}=\int\log\left(\frac{e^{-\tau(V_{*}-V_{0})-V_{0}}Z_{\tau}^{-1}}{e^{-V_{*}}Z_{1}^{-1}}\right)\,\mathrm{d}\mu_{\tau}$$ $$=\log Z_{1}-\log Z_{\tau}+(1-\tau)\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}$$ $$=(1-\tau)\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}-K_{Y}(1-\tau)\,.$$ Another fact about cumulant generating functions that we can exploit is the following differential relationship $$-\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}=\frac{\mathrm{d}}{\mathrm{d}\tau}Z_{\tau}=-K_{Y}^{\prime}(1-\tau)\,.$$ Altogether, this gives $$\mathrm{KL}(\mu_{\tau}\|\pi)=(1-\tau)K_{Y}^{\prime}(1-\tau)-K_{Y}(1-\tau)\,.$$ The general q-Rényi case is deferred to the appendix, where the computation is similar. The following lemma uses both **(A1)** and **(A2)** to establish that KY (z) is finite in some neighborhood of z ∈ Bϵ0 (0), which implies that KY admits the series expansion we will require in the sequel. The proof is deferred to the appendix. Proposition 3. Suppose **(A1)** and **(A2)** are satisfied. Then there exists some constant ϵ0 > 0 such that the cumulant generating function of Y , KY (z) = log E[e Y z] *is finite on some neighborhood of* z ∈ Bϵ0 (0). Moreover, inside this neighborhood, KY (z) *is holomorphic and we have the series expansion* $$K_{Y}(z)=\sum_{n\geq1}\frac{\kappa_{n}}{n!}z^{n}\,.\tag{10}$$ We conclude with the proof of our main result. Proof of Theorem 1. We begin with the expression of the KL divergence. Note that since KY (z) is smooth for z sufficiently close to the origin, it holds that $$(28)$$ $$K_{Y}^{\prime}(z)=\sum_{n\geq1}\frac{\kappa_{n}}{(n-1)!}z^{n-1}\,.$$ Using the parameterization of Eq. (27) and the series expansion for K′Y (1 − τ ), our expression for KL(µτ ∥π) reads $$\mathrm{KL}(\mu_{\tau}\|\pi)=(1-\tau)\sum_{n\geq1}\frac{\kappa_{n}}{(n-1)!}(1-\tau)^{n-1}-\sum_{n\geq1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}$$ $$=\sum_{n\geq1}\kappa_{n}\left(\frac{n}{n!}-\frac{1}{n!}\right)(1-\tau)^{n}$$ $$=\sum_{n\geq2}\frac{\kappa_{n}}{n(n-2)!}(1-\tau)^{n}\,.$$ Expanding the relation and replacing τ (t) = 1 − e $$\mathrm{lacing~}\tau(t)=1-e^{-t}\mathrm{~gives~}$$ $$\mathrm{KL}(\rho_{t}||\pi)=\frac{\kappa_{2}}{2}e^{-2t}+\sum_{n\geq3}\frac{\kappa_{n}}{n(n-2)!}e^{-n t}\,.$$ $$(29)$$ −nt. (29) We now do the same manipulations for Rq(µτ ∥π). $$\mathcal{R}_{q}(\mu_{\tau}\|\pi)=\frac{1}{q-1}\sum_{n\geq1}\frac{\kappa_{n}}{n!}(q(1-\tau))^{n}-\frac{q}{q-1}\sum_{n\geq1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}$$ $$=\frac{1}{q-1}\left(\frac{\kappa_{1}}{q}(1-\tau)+\sum_{n\geq2}q^{n}\frac{\kappa_{n}}{n!}(1-\tau)^{n}\right)-\frac{q}{q-1}\left(\kappa_{1}(1-\tau)+\sum_{n\geq2}\frac{\kappa_{n}}{n!}(1-\tau)^{n}\right)$$ $$=\sum_{n\geq2}\frac{q^{n}-q}{q-1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}.$$ $$(30)$$ Substituting τ (t) = 1 − e −t and expanding out the first term yields $${\mathcal R}_{q}(\rho_{t}\|\pi)=q\frac{\kappa_{2}}{2}e^{-2t}+\sum_{n\geq3}\frac{q^{n}-q}{q-1}\frac{\kappa_{n}}{n!}e^{-n t}\,.$$ −nt. (30) Lemma 2 in the appendix justifies that the higher-order terms of Eq. (29) and Eq. (30) are O(e −3t), which concludes the proof. ## 4 Numerical Simulations We present simple numerical simulations that demonstrates our asymptotic convergence rate of the KL divergence the FR gradient flows, as well as a comparison with the WFR- and W-GFs. We consider two target distributions over the set [−*π, π*), each with two initializations: 1. Target distribution π1: We set π1 ∝ e −V1 with V1(x) = 2.5 cos(2x)+ 0.5 sin(x). This distribution has two modes with different weights and has been studied previously by Lu et al. (2019). We consider two initial distributions: (a) πa ∝ e −Va with Va = −V1, which has two modes in locations where π has little mass. (b) πb ∝ e −Vb with Vb = 2.5 cos(2x), which has two modes in almost the same positions as π, but with equal weight. 2. Target distribution π2: We set π2 ∝ e −V2 with V2(x) = −6 cos(x). This distribution has one mode. We consider two initial distributions: (c) πc ∝ e −Vc with Vc = −V2, which has one mode in a location where π has little mass. (d) πd ∝ e −Vd with Vd = 0, which is the uniform distribution. Fig. 1 shows the target energies V1, V2 and the initial energies Va, Vb, Vc, Vd introduced above. Fig. 2 shows the evolution of the KL divergence along the FR, WFR and W gradient flows. It also contains plots of the dominant term κ2 2 e −2t of the approximation of the KL divergence decay for FR flows (see Theorem 1), displayed as dotted lines. Table 1 shows the slopes of each curve from Fig. 2, at large times (see Appendix B for details on the computation of slopes). Some observations are in order: - As predicted by Theorem 1, the curves KL(ρ FR t ∥π) approach the curves κ2 2 e −2t as t grows. - For π1, the curves KL(ρ FR t ∥π) and KL(ρWFR t ∥π) initialized at πb are very close for small times. The reason is that ∇V1 and ∇Vb are very close in the regions where π1 and πb have most of the mass. Consequently, the term ∇ · ρWFR t(∇ log ρWFR t + ∇V1), which is the difference between the FR and the WFR PDEs, is small at initialization. - The curves KL(ρW t ∥π) behave very differently for π1 and π2 (see Table 1). Indeed, since π1 is bimodal CLSI(π1) is quite large (thus convergence is slow), whereas π2 is unimodal, with a much smaller log-Sobolev constant. ![9_image_0.png](9_image_0.png) Figure 1: Energies of the target and initial distributions. ![9_image_1.png](9_image_1.png) Figure 2: Evolution of the KL divergence with respect to π1 (*left*) and π2 (*right*) along their respective FR (*solid lines*), WFR (*dash-dotted lines*) and W (*dashed lines*) gradient flows. Each plot contains flows initialized at two probability measures: in the left plot these are πa (*blue*, top curves at t = 0) and πb (*orange*); in the right plot, πc (*blue*, top curves at t = 0) and πd (*orange*). The *dotted* lines show the curves κ2 2 e −2t(for the appropriate values κ2), introduced in Theorem 1. - The curves KL(ρWFR t ∥π) also behave differently for both target distributions. For π1, it decays only slightly faster than KL(ρ FR t ∥π), while for π2 it goes down much faster than both KL(ρ FR t ∥π) and KL(ρWFR t ∥π). Interestingly, looking at Table 1 we observe that the asymptotic slopes of the WFR are very close to the sum of the slopes for FR and W. This seems to indicate that at large times, the KL divergence decays like e −2t− 2t CLSI , i.e. that the W and FR terms act more or less independently. | Target π1 | Target π2 | | | | |-------------|-------------|----------|----------|----------| | Init. πa | Init. πb | Init. πc | Init. πd | | | FR | -2.0016 | -2.0002 | -2.0028 | -2.0014 | | WFR | -2.0771 | -2.0759 | -12.8190 | -12.8632 | | W | -0.0811 | -0.0811 | -10.7784 | -10.8538 | Table 1: Large-time slopes of the KL divergence vs. time curves in a semi-logarithmic plot (Fig. 2), for the three flows. See Appendix B for details on the computation of the slopes. ## 5 Conclusion In this work, using a relatively simple proof technique, we showed that the Kullback-Leibler divergence along its Fisher-Rao gradient flow (ρ FR t)t≥0 can be written as a power-series expansion, resulting in a tight asymptotic convergence rate for large times. A similar expansion holds for Rq(ρ FR t ∥π), where Rq is any q-Rényi divergence. Our findings were verified with simple numerical experiments, where we also simulated Wasserstein and Wasserstein-Fisher-Rao gradient flows. Our simulations indicated that, in some cases, the convergence rate of the WFR gradient flow scales like e −(2+(2/CLSI))t, an observation that we hope can be made precise in future work. A second direction is to extend our proof technique from the KL divergence to general Bregman divergences. ## Acknowledgments AAP thanks Anna Korba for introducing the main WFR references to him. The authors collectively thank Joan Bruna, Jonathan Niles-Weed, Sinho Chewi, and Andre Wibisono for helpful discussions, and the anonymous reviewers who helped improve the quality of the presentation. CD acknowledges Meta AI Research as a funding source. AAP acknowledges NSF Award 1922658 and Meta AI Research. ## References Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. *Gradient flows: in metric spaces and in the space of* probability measures. Springer Science & Business Media, 2005. Anton Arnold, Peter Markowich, and Andreas Unterreiter. On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations. Communications in Partial Differential Equations, 26, 05 2000. Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. *Numerische Mathematik*, 84(3):375–393, 2000. V.I. Bogachev. *Measure Theory*. Number 1 in Measure Theory. Springer Berlin Heidelberg, 2007. Sinho Chewi. *Log-concave sampling*. 2022. Sinho Chewi, Murat A Erdogdu, Mufan Li, Ruoqi Shen, and Shunshi Zhang. Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev. In *Proceedings of Thirty Fifth Conference on Learning Theory*, volume 178 of *Proceedings of Machine Learning Research*. PMLR, 02–05 Jul 2022. Lenaic Chizat. Sparse optimization on measures with over-parameterized gradient descent. Mathematical Programming, 194(1-2):487–532, 2022. Lenaic Chizat, Bernhard Schmitzer, Gabriel Peyré, and François-Xavier Vialard. An interpolating distance between optimal transport and Fisher-Rao. *Foundations of Computational Mathematics*, 18, 06 2015. Lénaïc Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Unbalanced optimal transport: dynamic and Kantorovich formulations. *Journal of Functional Analysis*, 274(11):3090–3123, 2018. A.S. Dalalyan and A.B. Tsybakov. Sparse regression learning by aggregation and Langevin Monte-Carlo. Journal of Computer and System Sciences, 78(5):1423–1443, 2012. Alain Durmus, Szymon Majewski, and Błażej Miasojedow. Analysis of Langevin Monte Carlo via convex optimization. *J. Mach. Learn. Res.*, 20(1):2666–2711, 2021. Thomas O Gallouët and Leonard Monsaingeon. A JKO splitting scheme for Kantorovich–Fisher–Rao gradient flows. *SIAM Journal on Mathematical Analysis*, 49(2):1100–1130, 2017. Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. *Bayesian data analysis*. Chapman and Hall/CRC, 1995. Leonard Gross. Logarithmic Sobolev inequalities. *American Journal of Mathematics*, 97(4):1061–1083, 1975. E. Hellinger. Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. Journal für die reine und angewandte Mathematik, (136):210–271, 1909. Richard Holley and Daniel Stroock. Logarithmic Sobolev inequalities and stochastic Ising models. *Journal* of Statistical Physics, 46(5):1159–1194, Mar 1987. ISSN 1572-9613. Michael Johannes and Nicholas Polson. MCMC methods for continuous-time financial econometrics. In Handbook of Financial Econometrics: Applications, pp. 1–72. Elsevier, 2010. Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. *SIAM journal on mathematical analysis*, 29(1):1–17, 1998. Shizuo Kakutani. On equivalence of infinite product measures. *Annals of Mathematics*, 49(1):214—-224, 1948. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. *Science*, 220(4598): 671–680, 1983. Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. *IEEE transactions on pattern analysis and machine intelligence*, 43(11):3964–3979, 2020. Stanislav Kondratyev, Léonard Monsaingeon, and Dmitry Vorotnikov. A new optimal transport distance on the space of finite Radon measures. *Advances in Differential Equations*, 21(11/12):1117 - 1164, 2016. Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, and Philippe Rigollet. Variational inference via Wasserstein gradient flows. *arXiv preprint arXiv:2205.15902*, 2022. Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal transport in competition with reaction: The Hellinger–Kantorovich distance and geodesic curves. *SIAM Journal on Mathematical Analysis*, 48(4): 2869–2911, 2016. Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures. *Inventiones mathematicae*, 211, 03 2018. Yulong Lu, Jianfeng Lu, and James Nolen. Accelerating Langevin sampling with birth-death. *arXiv preprint* arXiv:1905.09863, 2019. Yulong Lu, Dejan Slepčev, and Lihan Wang. Birth-death dynamics for sampling: Global convergence, approximations and their asymptotics. *arXiv preprint arXiv:2211.00450*, 2022. David JC MacKay. *Information theory, inference and learning algorithms*. Cambridge university press, 2003. P. A. Markowich and C. Villani. On the trend to equilibrium for the Fokker-Planck equation: An interplay between physics and functional analysis. In Physics and Functional Analysis, Matematica Contemporanea (SBM) 19, pp. 1–29, 1999. Martin Pincus. A Monte Carlo method for the approximate solution of certain types of constrained optimization problems. *Operations Research*, 18(6):1225–1228, 1970. Christian P Robert, George Casella, and George Casella. *Monte Carlo statistical methods*, volume 2. Springer, 1999. Grant Rotskoff, Samy Jelassi, Joan Bruna, and Eric Vanden-Eijnden. Global convergence of neuron birthdeath dynamics. *arXiv preprint arXiv:1902.01843*, 2019. Al'bert Nikolaevich Shiryaev. *Probability*. Graduate texts in mathematics ; 95. Springer-Verlag, New York, 1984. ISBN 9781489900180. A.J. Stam. Some inequalities satisfied by the quantities of information of Fisher and Shannon. *Information* and Control, 2(2):101–112, 1959. Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. C. Villani. *Optimal Transport: Old and New*. Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2008. Udo Von Toussaint. Bayesian inference in physics. *Reviews of Modern Physics*, 83(3):943, 2011. Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In *Conference on Learning Theory*, pp. 2093–3027. PMLR, 2018. Yuling Yan, Kaizheng Wang, and Philippe Rigollet. Learning Gaussian mixtures using the WassersteinFisher-Rao gradient flow. *arXiv preprint arXiv:2301.01766*, 2023. ## A Remaining Proofs Proposition 4 (Uniqueness of the Fisher-Rao gradient flow of the KL divergence). *Given a target potential* V∗ and an initial measure ρ0*, the solution of Eq.* (12) is unique. Proof. Consider the PDE $$\partial_{t}\mu_{t}(x)=-\big(\log(\mu_{t}(x))+V_{*}(x)\big)\mu_{t}(x),\qquad\mu_{0}=\rho_{0}$$ $$(31)$$ ∂tµt(x) = −log(µt(x)) + V∗(x)µt(x), µ0 = ρ0 (31) Note that this is in fact an ODE for each point x, that we can rewrite as ∂t log µt(x) = −log(µt(x))+V∗(x). The unique solution of this ODE with initial condition log µ0(x) is log µt(x) = (log µ0(x)−V∗(x))e −t+V∗(x). Thus, we conclude that Eq. (31) has a unique solution. Now, given a solution ρt of Eq. (12) with initial condition ρ0, define ρ˜t as $$\log\tilde{\rho}_{t}(x)=\log\rho_{t}(x)+\int_{0}^{t}e^{t-s}\big{\langle}\log(\rho_{s})+V_{*}\big{\rangle}_{\rho_{s}}\,ds\tag{32}$$ Note that Rt := −R t 0 e t−slog(ρs)+V∗ ρs ds satisfies the ODE dRt dt = −Rt −log(ρt)+V∗ ρt with the initial condition R0 = 0. Using this, we observe that ρ˜t is a solution of Eq. (31): $$\partial_{t}\log\tilde{\rho}_{t}(x)=\partial_{t}\log\rho_{t}(x)+\partial_{t}R_{t}=-\big{(}\log(\rho_{t}(x))+V_{*}(x)-\big{(}\log(\rho_{t})+V_{*}\big{)}_{\rho_{t}}\big{)}+\partial_{t}R_{t}$$ $$=-\big{(}\log(\rho_{t}(x))+R_{t}+V_{*}(x)\big{)}=-\big{(}\log(\tilde{\rho}_{t}(x))+V_{*}(x)\big{)}.$$ Also, note that the map (ρt)t≥0 → (˜ρt)t≥0 defined by Eq. (32) is invertible, as ρt(x) = ˜ρt(x)/Rρ˜t(y) dy. This follows from the fact that ρt and ρ˜t are proportional to each other, and that ρt integrates to 1. Finally, suppose that ρ a t and ρ b t are two solutions of Eq. (12) with initial condition ρ0. Via the construction Eq. (32), they yield solutions ρ˜ a t and ρ˜ b t of Eq. (31) with initial condition ρ0. The uniqueness of the solution of Eq. (31) implies that ρ˜ a t = ˜ρ b t . Since the map (ρt)t≥0 → (˜ρt)t≥0 is invertible, we obtain that ρ a t = ρ b t , which concludes the proof. Proof of Proposition 2 (Continued). We perform similar manipulations as in the case with the KL divergence: $$\mathcal{R}_{q}(\mu_{\tau}\|\pi)=\frac{1}{q-1}\log\int\frac{e^{-q\tau(V_{*}-V_{0})-qV_{0}}(Z_{\tau})^{-q}}{e^{-qV_{*}}(Z_{1})^{-q}}\,\mathrm{d}\pi$$ $$=\frac{1}{q-1}\log\int e^{q(1-\tau)(V_{*}-V_{0})}\left(\frac{Z_{\tau}}{Z_{1}}\right)^{q}\,\mathrm{d}\pi$$ $$=\frac{1}{q-1}K_{Y}(1-\tau)-\frac{q}{q-1}(\log Z_{\tau}-\log Z_{1})$$ $$=\frac{1}{q-1}K_{Y}(1-\tau)-\frac{q}{q-1}K_{Y}(1-\tau)\,,$$ $$\square$$ where in the last line we again used Eq. (25). This completes the proof. Proof of Proposition 3. By **(A1)**, the partition function F(t) = RRd e −tV∗(x) dx is differentiable at t = 1. This is because F ′(t) = −RV∗(x) dπ(x). Hence, F(t) is finite on an interval (1 − 2ϵ1, 1] for some ϵ1. Note that the assumption **(A2)** can be written equivalently as ξ := infx αV∗(x) − V0(x) > −∞. We obtain that for all ϵ ∈ [0, ϵ1/α), $$\begin{array}{c}{{-\epsilon(V_{*}(x)-V_{0}(x))-V_{*}(x)=-\epsilon((1+\alpha)V_{*}(x)-V_{0}(x))+(\epsilon\alpha-1)V_{*}(x)}}\\ {{\leq-\epsilon\xi+(\epsilon\alpha-1)V_{*}(x)\leq-\epsilon\xi+(\epsilon_{1}-1)V_{*}(x)}}\end{array}$$ $$(33)$$ Equivalently, $$\exp(K_{Y}(-\epsilon))=\int_{\mathbb{R}^{\epsilon}}e^{-\epsilon(V_{\epsilon}(x)-V_{0}(x))-V_{\epsilon}(x)}\,dx\leq e^{-\epsilon\xi}\int_{\mathbb{R}^{\epsilon}}e^{-(1-\epsilon_{1})V_{\epsilon}(x)}\,dx=e^{-\epsilon\xi}F(1-\epsilon_{1})<+\infty.$$ Also, for all ϵ ∈ [0, 1), using the convexity of the exponential function we have that $$\exp(K_{Y}(\epsilon))=\int_{\mathbb{R}^{d}}e^{\epsilon(V_{\epsilon}(x)-V_{0}(x))-V_{\epsilon}(x)}\,dx=\int_{\mathbb{R}^{d}}e^{-(1-\epsilon)V_{\epsilon}(x)-\epsilon V_{0}(x)}\,dx$$ $$\leq\int_{\mathbb{R}^{d}}(1-\epsilon)e^{-V_{\epsilon}(x)}+\epsilon e^{-V_{0}(x)}\,dx=(1-\epsilon)Z_{1}+\epsilon Z_{0}<+\infty.$$ $$(34)$$ Hence, the cumulant-generating function KY (t) = log Ee tY is finite on a neighborhood (−ϵ0, ϵ0) with ϵ0 = min{1, ϵ1/α}. Applying Lemma 1, we conclude that there exists ϵ > 0 such that for z ∈ Bϵ(0), we have that KY (z) = P+∞ n=1 κn n! z n. The following lemma, which we make explicit, is a well-known fact in probability theory. In short, since the moment-generating function is analytic in some neighborhood, and is non-negative, taking the logarithm is safe as everything is analytic. The interested reader can consult e.g. (Shiryaev, 1984, Section II.12.8) which dissects this in detail. Lemma 1. *Assume that the cumulant-generating function* KY (t) = log Ee tY *is finite on a neighborhood* (−ϵ0, ϵ0) *of zero. Then,* KY (z) = log Ee zY as a function on the complex plane is holomorphic on the open ball Bϵ(0) of radius ϵ centered at zero, for some ϵ > 0*. Moreover, for* z ∈ Bϵ(0)*, we have that* $$K_{Y}(z)=\sum_{n=1}^{+\infty}\frac{\kappa_{n}}{n!}z^{n}.\tag{10}$$ (35) $\binom{36}{2}$ . $$(37)$$ Lemma 2 (End of the proof of Theorem 1). *We have that* $$KL(\rho_{t}\|\pi)-\frac{\kappa_{2}}{2}e^{-2t}|=O(e^{-3t}),\qquad|\mathcal{R}_{q}(\rho_{t}\|\pi)-\frac{q\kappa_{2}}{2}e^{-2t}|=O(e^{-3t}).\tag{1}$$ $$(38)$$ 14 Proof. Lemma 1 implies that the series for KY centered at zero has convergence radius ϵ, for some ϵ > 0. Since the derivative of a series has the same radius of convergence, we obtain that $$H(z):=z K_{Y}^{\prime}(z)-K_{Y}(z)=\sum_{n\geq2}\frac{\kappa_{n}}{n(n-2)!}z^{n}.$$ has convergence radius ϵ as well. Hence, by the Cauchy-Hadamard theorem, 1 ϵ ≥ lim supn→∞(|cn| 1/n), where cn := κn n(n−2)! . This implies that for all 0 < ϵ′ < ϵ, there exists a constant Cϵ ′ > 0 such that for all n ≥ 0, |cn| ≤ Cϵ ′/(ϵ ′) n. Consequently, for all z ∈ C with |z| < 1/ϵ′, Consequently, for all $z\in C$ with $|z|<1/e^{+}$, $$|H(z)-\frac{\ell_{2}}{2}z^{2}|=\left|\sum_{n=0}^{+\infty}\frac{\kappa_{n}}{n(n-2)!}z^{n}\right|\leq C_{\epsilon}\sum_{n=0}^{+\infty}\left(\frac{|z|}{e^{+}}\right)^{n}=C_{\epsilon}\frac{\left(\frac{|z|}{e^{+}}\right)^{3}}{1-\frac{|z|}{e^{+}}}$$ Using Eq. (23), we get that for any constant $\gamma>0$, if $t\geq-\log e^{t}+\gamma$ (or equivalently, $e^{-t}\leq e^{t}e^{-\gamma}$), $$(40)$$ $$(39)$$ $$|\mathrm{KL}(\rho_{t}\|\pi)-{\frac{\kappa_{2}}{2}}e^{-2t}|\leq C_{\epsilon^{\prime}}{\frac{\left({\frac{e^{-t}}{\epsilon^{\prime}}}\right)^{3}}{1-{\frac{e^{-t}}{\epsilon^{\prime}}}}}=C_{\epsilon^{\prime}}{\frac{e^{-3t}}{(\epsilon^{\prime})^{3}(1-e^{-\gamma})}}=O(e^{-3t}),$$ which concludes the proof for the KL divergence. For the Rényi divergence, the proof is analogous (note that in that case the series 1 q−1KY (qz) −q q−1KY (z) has convergence radius ϵ/q). ## B Details On The Numerical Simulations To run the simulations in Section 4, we discretized the interval [−*π, π*) in n = 2000 equispaced points. Let h = 2π/n. For each algorithm and initialization, we construct sequences (xk)k≥0 , where xk ∈ R n represents the normalized log-density at each point. We let v∗ ∈ R n be the (non-normalized) energy of the target distribution, obtained by evaluating V∗ at the discretization points. Similarly, ∇v∗, ∆v∗ ∈ R n are the evaluations of ∇V∗ and ∆V∗ at the n points (note that ∇V∗ is a scalar because the distributions are one-dimensional). We used the following discretizations for the Fisher-Rao, Wasserstein and Wasserstein-Fisher-Rao gradient flows: (i) Fisher-Rao GF: We use mirror descent in log-space. The update reads: $$\tilde{x}_{k+1}\gets x_{k}+\epsilon\bigl(-v_{s}-x_{k}\bigr),$$ $$x_{k+1}\gets\tilde{x}_{k+1}-\log\left(\sum_{i=1}^{n}e^{-\tilde{x}_{k+1}^{i}}\right).$$ (ii) Wasserstein GF: We approximate numerically the gradient and the laplacian of the log-density: $$\forall i\in[n],(\nabla x_{k})^{i}\leftarrow(x_{k}^{i+1}-x_{k}^{i-1})/(2h),$$ $$\forall i\in[n],(\Delta x_{k})^{i}\leftarrow(x_{k}^{i+1}+x_{k}^{i-1}-2x_{k}^{i})/h^{2},\tag{41}$$ $$x_{k+1}\gets x_{k}+\epsilon(\Delta v_{*}+\Delta x_{k}+(\nabla v_{*}+\nabla x_{k})\nabla x_{k}).$$ We use periodic boundary conditions, so that the first discretization point is adjacent to the last one for the purposes of computing derivatives. (iii) Wasserstein-Fisher-Rao GF: We combine the two previous updates. Letting ∇xk and ∆xk be as in Eq. (41), we have $$\begin{array}{l}{{\tilde{x}_{k+1}\gets x_{k}+\epsilon(-v_{*}-x_{k}+\Delta v_{*}+\Delta x_{k}+(\nabla v_{*}+\nabla x_{k})\nabla x_{k}),}}\\ {{x_{k+1}\gets\tilde{x}_{k+1}-\log\bigg(\sum_{i=1}^{n}e^{-\tilde{x}_{k+1}^{i}}\bigg).}}\end{array}$$ We used stepsizes ϵ = 2.5 × 10−6 and ϵ = 1 × 10−6for the experiments on target distributions (1) and (2), respectively. The slopes in Table 1 are obtain by taking 0 < t1 < t2 and computing $${\frac{\log(\operatorname{KL}(\rho_{t_{2}}\|\pi))-\log(\operatorname{KL}(\rho_{t_{1}}\|\pi))}{t_{2}-t_{1}}}.$$ We use different values for t1 and t2 for each target distribution; t1 and t2 must be large enough to capture the asymptotic slope of the curve, but not too large to avoid numerical errors. For all the curves corresponding to target π1, we take t1 = 7.0 and t2 = 7.5. For target π2, we take: for FR, t1 = 6.875 and t2 = 7.0; for WFR, t1 = 1.875 and t2 = 2.0; for W, t1 = 2.75 and t2 = 2.875.
Review 1: Summary: In this paper, the authors improve the results of the convergence of the Wasserstein Fisher-Rao gradient flow that was recently established by Lu et al. (2019, 2022). The key technique in this paper is the use of an infinite power series expansion related to the cumulant-generating function of random variables. Strengths and Weaknesses: The paper is written in a concise way and the proof seems straightforward. The problem studied here is also of great importance. However, clarity over previous work is lacking. In particular, the comparison of the results obtained in this paper with the immediately relevant work on this topic seems not explicit, which makes it hard to judge the contribution of the current paper. Requested Changes: In the introduction of the convergence rates for Langevin Monte Carlo sampling algorithms, you might need to elaborate a little bit more on the differences among these works. In particular, it is worth discussing other representing papers such as [1] that directly derive the convergence of discrete-time LMC and [2] that shows the convergence of the stochastic version of LMC with an arbitrary choice of mini-batch size. [1] Global convergence of Langevin dynamics-based algorithms for nonconvex optimization. Advances in Neural Information Processing Systems, 2018. [2] Faster convergence of stochastic gradient Langevin dynamics for non-log-concave sampling. In Uncertainty in Artificial Intelligence, 2021. The authors claim that the results presented in this paper are tight and require fewer assumptions than previous work. It would be helpful if the authors could provide more detail on this point, including the lower bound of the problem and the relaxation of assumptions from previous work. In addition, it would be valuable to discuss how the assumptions are essential to their paper. In Section 2.4, it is unclear whether $\kappa_n$ represents the n-th moment of Y. There is a lack of comprehensive comparison between the results presented in this paper (Theorem 1) and those of existing results, particularly Lu et al. (2022). It is noted that the results in Lu et al. (2022) hold for any $\delta\in(0,½]$. Furthermore, if a value is arbitrarily chosen for $\delta$, the result in their paper is also in the same order as that of Theorem 1. Moreover, the term $t^*$ would also be absorbed by the constant $C_1$. Hence, it is suggested that the authors provide a more detailed comparison between their results and those of existing work. Regarding the proof, it is mentioned that even though the decomposition of the KL divergence only involves terms like $e^{-2t}$ and $e^{-3t}$, the $O(e^{-3t})$ term actually hides problem-dependent constants related to the infinite series expansion. Therefore, it is not directly comparable with Lu et al. (2022). Minor questions: * In the abstract: **$\pi$ is that independent of** -> **$\pi$ that is independent of** * Around Eq (3): **satisfies a LSI** -> **satisfies an LSI** * Below Eq (3): **when $V^∗$ $\alpha$-strongly convex** -> **when $V^∗$ ``is`` $\alpha$-strongly convex** Broader Impact Concerns: NA ================================================== Review 2: Summary: The authors drew a very interesting connection between simulated annealing and Wasserstein--Fisher--Rao gradient flow. Using this connect, the authors were able to compute an explicit expansion of the Fisher--Rao gradient flow decay of KL and Renyi divergences. The authors also provided convincing numerical evidence supporting their theoretical results. Strengths and Weaknesses: Strengths 1. As far as I know, the connection to simulated annealing is novel, and honestly somewhat surprising. I believe this observation alone is significant. 2. Using the cumulant-generating function to compute a power series expansion is also a new technique based on my knowledge, at least it is not well known and therefore welcomed. The authors were able to use this expansion to show the first order term does indeed decay at the desired rate $e^{-2t}$, which is much cleaner than choosing $\delta,t_*$ in previous works. Weaknesses Only several minor clarifications and questions in the next section. Requested Changes: I have several minor questions for clarity. 1. The authors briefly commented on Wasserstein and Fisher--Rao geometry being essentially orthogonal. Can I interpret the WFR Riemannian metric as a sum of Wasserstein and FR separately? 2. If we can take the above interpretation, does the connection to simulated annealing still hold if we vary the coefficients in the linear combination of the two Riemannian metrics? 3. Can we compute a higher order term in the $O(e^{-3t})$ expansion, so that we get a type of Taylor remainder term? 4. In your numerical simulations, can you describe how the gradient flow evolution and KL were calculated? In particular, you chose the support $[-\pi,pi)$, but is the boundary condition period, i.e. making it a torus? Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper studies the convergence rate of gradient flow for the KL divergence along the Fisher-Rao dynamics towards its equilibrium at a predetermined target measure. The main result is that, under very mild assumptions on the target measure and the initial point of the dynamics, the KL divergence decays exponentially fast. Remarkably, the exponent does not depend on either measure, initial, or target. This result improves upon previous works by relaxing the necessary assumptions and by a slight improvement to the convergence rate. At the technical level, the authors establish an asymptotic expansion of the KL divergence along the dynamics, in terms of the cumulants of the score function. Strengths and Weaknesses: Strengths: - The proof is short, succinct, and well-written. - Assumption A2 allows the consideration of many new cases, which were unreachable with previous results. Weaknesses: - First and foremost, the result is not algorithmic (and probably cannot be made, in full generality). Thus, while this is a fine paper in Wasserstein metric geometry, it is tangentially related to the main topics of TMLR. Still, there is a lot of interest in the Wasserstein gradient flows in the ML community, so I would not hold this point against the paper. - The improvement in the convergence rate is somewhat incremental compared to previous results. More on this in the next section. Requested Changes: Before outlining my requested changes, let me note that I enjoyed reading the paper and found the result interesting. Since I found the claims made in the submission to be supported by accurate by convincing proofs and since, as indicated above, I believe some individuals in TMLR's audience will be interested in knowing the findings of this paper, I am inclined to recommend acceptance, subject to addressing the issues I raise below. - Currently, Theorem 1 gives the impression that the error term in the expansion of the KL divergence is bounded by Ce^{-3t}, where C is a *universal* constant. This is also the way the big O notation is presented in the notations paragraph. As far as I could see from the proof of Theorem 1, this is not the case. C is a constant that depends on the cumulants and on the parameter \alpha, introduced in Assumption A2. In particular, C depends on both the initial and terminal points of the dynamics. I think that this point should be emphasized and discussed. - In line with my above comment, the discussion about Assumption B is a bit awkward. The existence of the constant M is equivalent to Assumption A2 (when \alpha = 1). The point is that the convergence rate in the cited paper depends explicitly on M. In the present paper, the convergence rate will also depend on M (and \alpha), but only implicitly through the cumulants. - I found the introduction to be a bit confusing. A lot of emphasis is given to the Wasserstein/Langevin dynamics flow and the Fisher-Rao-Wasserstein flow when the results only deal with the Fisher-Rao flow. While I think that the addition of background and relevant results is great and well-written, it may perhaps be better to separate it from the main focus of the paper. I propose to add a new section about the 'bigger picture' in which the relevant material and corollaries can be presented. (This comment is a matter of personal taste and I leave it to the authors' discretion) - Line below Equation 11. The superscript W is missing from some of the rhos. - Statement of theorem 1. I'm not sure what the sentence for "...for t large enough..." is trying to convey. Isn't the expansion valid for any t? The big O notation is supposed to take care if smaller t's. Also I'm not sure why at the end of the proof of Theorem 1, t needs to be taken to infinity to complete the proof. What is needed to show, and what the authors do show, is that the coefficients in the expansion do not grow too fast. - Equation 32. Is the equation missing the normalizing constants? Broader Impact Concerns: No concerns ================================================== Metareview: Recommendation: Accept as is Comment: Three expert reviewers reviewed the paper and they all had positive evaluation, which AC agrees with. This is a nice theoretical paper with great presentation style which makes it accessible to a wider ML audience. AC recommends accepting this paper for publication. ==================================================
# Semantic Positive Pairs For Enhancing Visual Representation Learning Of Instance Discrimination Methods Mohammad Alkhalefi m.alkhalefi1.21@abdn.ac.uk Department of Computing Science University of Aberdeen, UK Georgios Leontidis *georgios.leontidis@abdn.ac.uk* Department of Computing Science & Interdisciplinary Centre for Data and AI University of Aberdeen, UK Mingjun Zhong *mingjun.zhong@abdn.ac.uk* Department of Computing Science University of Aberdeen, UK Reviewed on OpenReview: *https: // openreview. net/ forum? id= z5AXLMBWdU* ## Abstract Self-supervised learning algorithms (SSL) based on instance discrimination have shown promising results, performing competitively or even outperforming supervised learning counterparts in some downstream tasks. Such approaches employ data augmentation to create two views of the same instance (i.e., positive pairs) and encourage the model to learn good representations by attracting these views closer in the embedding space without collapsing to the trivial solution. However, data augmentation is limited in representing positive pairs, and the repulsion process between the instances during contrastive learning may discard important features for instances that have similar categories. To address this issue, we propose an approach to identify those images with similar semantic content and treat them as positive instances, thereby reducing the chance of discarding important features during representation learning and increasing the richness of the latent representation. Our approach is generic and could work with any self-supervised instance discrimination frameworks such as MoCo and SimSiam. To evaluate our method, we run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches. The experimental results show that our approach consistently outperforms the baseline methods across all three datasets; for instance, we improve upon the vanilla MoCov2 by 4.1% on ImageNet under a linear evaluation protocol over 800 epochs. We also report results on semi-supervised learning, transfer learning on downstream tasks, and object detection. ## 1 Introduction In supervised learning, models are trained with input data X and their corresponding semantic labels or classes Y. It is rather common for each class to have several hundreds of instances available for training, which enables the model to extract the important features and create useful representations for the given samples (Van Gansbeke et al., 2020). This type of machine learning has been proven to perform well in various domains whenever data are available in abundance. In practice, that is often not the case, given that data annotation is laborious and expensive. Recently, self-supervised learning (SSL) algorithms based on instance discrimination reduced the reliance on large annotated datasets for representation learning (He et al., 2020; Chen et al., 2020a;b; Misra & Maaten, ![1_image_0.png](1_image_0.png) Figure 1: Example of an instance discrimination task where positive pairs are attracted together and negative pairs are pushed apart, even if they have similar semantic content. 2020; Chen & He, 2021; Grill et al., 2020). These approaches treat each instance as a class on its own, so they employ random data augmentation for every single image in the dataset to prevent the model from learning trivial features and become invariant to all augmentations (Tian et al., 2020; Xiao et al., 2020; Chen et al., 2020a; Misra & Maaten, 2020). The model learns the image representation by attracting the positive pairs (i.e., two views of the same instance) closer in the embedding space without representation collapse. Contrastive and non-contrastive instance discrimination methods (Chen et al., 2020a; Caron et al., 2020; He et al., 2020; Grill et al., 2020; Bardes et al., 2021) have shown promising performance, often close to supervised learning or even better in some downstream tasks (Chen et al., 2020a; Chen & He, 2021; Chuang et al., 2020; Misra & Maaten, 2020; Huynh et al., 2022; Dwibedi et al., 2021). However, these approaches have two main limitations: 1) the data augmentation is limited in representing positive pairs (i.e., can not cover all variances in given class); 2) the repulsion process between the images in contrastive learning is implemented regardless of their semantic content, which may cause the loss of important features and slow model convergence (Huynh et al., 2022). For example, Figure 1 shows two images that have similar semantic content (aeroplanes). In contrastive instance discrimination, the images are repelled in the embedding space because the objective is to attract positive pairs and push apart all other images, despite their similar semantic content. This is a major limitation that requires attention, as the performance of the downstream tasks depends on high-quality representations learnt by self-supervised pre-training (Donahue et al., 2014; Manová et al., 2023; Girshick et al., 2014; Zeiler & Fergus, 2014; Durrant & Leontidis, 2022; Kim & Walter, 2017; Durrant & Leontidis, 2023). Two recent prominent approaches, which are Nearest-Neighbor Contrastive Learning of Visual Representations (NNCLR) (Dwibedi et al., 2021), and False Negative Cancellation (FNC) (Huynh et al., 2022) proposed to find semantic pairs in the dataset, treating them as positive pairs during representation learning, to improve the model performance on downstream tasks. NNCLR uses a support set Q to keep a representation of the dataset during model training. The model learns visual representations by creating a representation for the two views of the input instance (zi, z + i ), and then finds the nearest neighbours for the first view (i.e., zi) from the support set Q, and so the nearest neighbour is S = NN(zi, Q). NNCLR then treats S as semantic positive pairs for the second view of the instance (i.e., z + i ) regardless of the semantic content of S, which may belong to a category different to z + i . FNC creates more than two views for each instance which are [zi, z + i , z 1 s , z 2 s ], where (zi, z + i ) are positive pairs and (z 1 s , z 2 s ) are support views for the instance. They find the potential semantic pairs for the support views of each instance from negative examples in the batch by computing their similarity. Finally, they treat the identified semantic pairs as positive pairs during model training. Finding images that have similar content (i.e., same category) and treating them as positive pairs increases the data diversity which could thus improve the power of representation learning (Tian et al., 2020; Dwibedi et al., 2021; Huynh et al., 2022; Khosla et al., 2020). Contrariwise, mapping wrong semantic pairs and encouraging the model to treat them as positive pairs causes reduced representation learning and slow model convergence. In this paper, we introduce an approach for enhancing the process of finding semantic pairs to improve visual representation learning. We name our method Semantic Positive Pairs for enhancing Instance Discrimination (SePP- ID) since our experiments indicate that accurate semantic positive pairs could significantly improve the performance of instance discrimination SLL. For identifying semantic positive pairs, we propose to use pre-trained models to firstly map the original images from the dataset (i.e., not augmented images) into latent representations and then the semantic positive pairs are matched by computing the similarity scores based on the latent representation vectors. Any self-supervised learning algorithms could be used as a pre-trained model for our purpose. For example, our experiments show that a model pre-trained by MoCo-v2 (Chen et al., 2020b) approach is good for identifying semantic positive pairs. These semantic positive pairs set (SPPS) along with the positive pairs (i.e., two views for the same instance) are used to train instance discrimination self-supervised model. The key difference to our approach is using pretrained models and original images from the dataset to match semantic positive pairs, whereas the methods mentioned above, such as NNCLR and FNC are finding semantic positive pairs during the representation learning. Such approaches may have a lower chance of matching the right semantic positive pairs, leading to slow model convergence and degrading representation learning. One possible reason would be that the model requires the necessary number of epochs to converge to learn a similar representation for the images from the same category (i.e., at the beginning of model training the semantic pairs are mapped based on texture and color, later in training the model become better in recognize class); another reason would be that the similarity score was computed using the embedding vectors of the augmented images. For example, with 1000 epochs, NNCLR reached 57% accuracy in terms of true semantic pairs, while FNC achieved 40% accuracy (Dwibedi et al., 2021; Huynh et al., 2022). Also, we show an empirical example in Figure 2 that using augmented images and a non-pre-trained model to find semantic pairs may lead to mapping wrong semantic pairs. In contrast, the accuracy in terms of true semantic positive pairs was achieved over 92% when MoCo-v2 was used as the pre-trained model with the original dataset in our proposed approach. Our contributions are as follows: - We propose to use the representations of the pre-trained models for the original images from the dataset to match semantic positive pairs, which achieved as high as 92% accuracy in terms of true semantic positive pairs, compared to 57% using NNCLR and 40% using FNC. - We demonstrate that SePP-ID outperformed other state-of-the-art (SOTA) approaches. For example, SePP-ID achieved 76.3% accuracy on ImageNet, compared to 74.4% using FNC. - We demonstrate that the SPPS found by using our scheme improve the performance of several instance discrimination approaches across various epoch scenarios and datasets which indicates that the SPPS could be adapted to improve visual representation learning of any other instance discrimination approach. ## 2 Related Work Several SSL approaches have been proposed, all of which aim to improve representation learning and achieve better performance on a downstream task. In this section, we provide a brief overview of some of these approaches, but we would encourage the readers to read the respective papers for more details. Clustering-Based Methods: The samples that have similar features are assigned to the same cluster. Therefore, discrimination is based on a group of images rather than on instance discrimination (Caron et al., 2020; Van Gansbeke et al., 2020; Caron et al., 2018; Asano et al., 2019). DeepCluster (Caron et al., 2018) obtains the pseudo-label from the previous iteration which makes it computationally expensive and hard to scale. SWAV (Caron et al., 2020) solved this issue by using online clustering, but it needs to determine the correct number of prototypes. Also, there has been a body of research in an area called Multi-view Clustering (MVC) (Yang et al., 2022a; Trosten et al., 2021; Lu et al., 2023; Li et al., 2022a) whereby contrastive learning is used with clustering to find semantic pairs and alleviate the false negative issue. However, our approach does not rely on a clustering method to identify semantic pairs, providing two main advantages over clustering approaches. Firstly, we avoid some drawbacks of clustering methods, such as predefining the number of prototypes (i.e. clusters), dealing with less separable clusters, and relying on labels or pseudo-labels. Secondly, our approach can be applied to both contractive and non-contrastive learning, as demonstrated in this work. Distillation Methods: BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021) use techniques inspired by knowledge distillation where a Siamese network has an online encoder and a target encoder. The target network parameters are not updated during backpropagation. Instead, the online network parameters are updated while being encouraged to predict the representation of the target network. Although these methods have produced promising results, it is not fully understood how they avoid collapse. Self-distillation with no labels (DINO) (Caron et al., 2021) was inspired by BYOL but the method uses a different backbone (ViT) and loss function, which enables it to achieve better results than other self-supervised methods while being more computationally efficient. Bag of visual words (Gidaris et al., 2020; 2021) also uses a teacherstudent scheme, inspired by natural language processing (NLP) to avoid representation collapse. The student network is encouraged to predict the features' histogram for the augmented images, similar to the teacher network's histogram. Information Maximization: Barlow twins (Zbontar et al., 2021) and VICReg (Bardes et al., 2021) do not require negative examples, stop gradient or clustering. Instead, they use regularisation to avoid representation collapse. The objective function of these methods aims to reduce the redundant information in the embeddings by making the correlation of the embedding vectors closer to the identity matrix. Though these methods provide promising results, they have some limitations, such as the representation learning being sensitive to regularisation. The effectiveness of these methods is also reduced if certain statistical properties are not available in the data. Contrastive Learning: Instance discrimination, such as SimCLR, MoCo, and PIRL (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Misra & Maaten, 2020) employ a similar idea. They attract the positive pairs together and push the negative pairs apart in the embedding space albeit through a different mechanism. SimCLR (Chen et al., 2020a) uses an end-to-end approach where a large batch size is used for the negative examples and both encoders' parameters in the Siamese network are updated together. PIRL (Misra & Maaten, 2020) uses a memory bank for negative examples and both encoders' parameters are updated together. MoCo (Chen et al., 2020b; He et al., 2020) uses a moment contrastive approach whereby the query encoder is updated during backpropagation and the query encoder updates the key encoder. The negative examples are located in a dictionary separate from the mini-batch, which enables holding large batch sizes. Enhanced Contrastive Learning: Such mechanisms focus on the importance of the negative examples and find different ways to sample negative examples regardless of their content, which may cause undesired behaviour between images with similar semantic content. Some studies have focused on improving the quality of the negative examples which in turn improves representation learning. (Kalantidis et al., 2020) and (Robinson et al., 2020) focused on the hard negative samples around the positive anchor, whereas (Wu et al., 2020) introduced the percentile range for negative sampling. Another approach introduced by Chuang et al.(Chuang et al., 2020) gives weights for positive and negative terms to reduce the effects of undesirable negatives. Other methods such as (Dwibedi et al., 2021), (Huynh et al., 2022), and (Auh et al., 2023) use similarity metrics to define the images that have similar semantic content and treat them as positive pairs during model training. Although these approaches provide a solution to determine variant instances for the same category and treat them as positive pairs, there are common drawbacks in these approaches that hinder them from mapping highly accurate semantic pairs, such as: 1. They use a non-pre-trained model to represent the images before measuring the similarity between them. 2. They compute the similarity between transformed images, not the original images in the original dataset. Counting on non-pre-trained models with augmented images to find semantic positive pairs akin to (Dwibedi et al., 2021; Huynh et al., 2022; Auh et al., 2023) may lead to inaccurate results. To demonstrate such issue, Figure 2 shows an empirical example of the inaccurate similarity scores we obtained when we used a nonpre-trained model and random augmented images to determine the instances that belong to the same class Horse ![4_image_0.png](4_image_0.png) Train Car 2 Figure 2: Similarity scores are shown for anchor (car) with instances from other classes using non-pre-trained models and random augmented images. in the dataset. The similarity scores show that the horse and deer are more similar to the anchor (i.e., car) than car 1 and car 2, and the train is more similar to the anchor than car 2, which is not correct either. Semi-Supervised Learning: The advantage of training a model on instance variants of the same category to improve the representation learning is used in semi-supervised approaches (Bošnjak et al., 2023; Yang et al., 2022b). (Bošnjak et al., 2023) present an approach to train the model on semantic positive pairs by leveraging a few labelled data. In their approach, they use labelled data and k-nearest neighbour to provide pseudo-labels for unlabelled points based on the most frequent class near the unlabelled data. Following that, they treat the data points that have similar pseudo-labels as semantic positive pairs in a contrastive learning setting. Such methods still require labelled data during training to provide semantic positive pairs. Our proposed pre-processing method provides a different way of approaching this, as we will demonstrate below across several datasets and ablation studies. We used a model pre-trained with SSL approaches and worked with the original dataset rather than augmented images to determine semantic positive pairs. In addition, our method does not require labelled data, specialised architecture or support set to hold the semantic positive pairs, which makes it easier to be integrated with any self-supervised learning methods. ## 3 Methodology This section proposes an approach to enhancing the visual representation learning of instance discrimination SSL methods by using semantic positive pairs (i.e., two different instances belong to the same category). To achieve that, we introduce the Semantic Sampler whose purpose is to find semantic positive pairs from the training data. The main idea of our approach is to find semantic positive pairs set (SPPS) from the original dataset (i.e., not distorted images by augmentation) by using the Semantic Sampler which consists of a pre-trained model and similarity metric. As shown in Figure 3 for finding the SPPS, the pre-trained model in the Semantic Sampler firstly maps K images from the original dataset into latent representations and then the semantic positive pairs are matched by computing the similarity scores based on the latent representation vectors. Note that K is a constant number that defines the number of images involved in the process of finding semantic pairs. These SPPS are used along with positive pairs from the dataset for training instance discrimination SSL models. In the literature, as noted previously, there are methods using semantic pairs to improve the performance of instance discrimination SSL, such as the FNC (Huynh et al., 2022), the NNCLR and (Dwibedi et al., 2021). In this section, we propose a different scheme for searching SPPS, inducing a modified loss function for training contrastive instance discrimination model. Our experiments show that our approach significantly improves the performance of instance discrimination methods and outperforms the SOTA approaches including both FNC and NNCLR. In the following, we will introduce how the SPPS is obtained as well as our SSL algorithm using SPPS. Algorithm 1 shows how the proposed method is implemented. ![5_image_0.png](5_image_0.png) Dataset Figure 3: The proposed methodology: Firstly, k images are chosen from the dataset and encoded by the pre-trained model; Secondly, a similarity metric is used to find the semantic positive pairs for each anchor, followed by data transformations applied to both the original dataset and the semantic positive pairs set. Eventually, all the images are combined in one dataset which will be used to train the instance discrimination model. ## 3.1 Semantic Positive Pairs We use a pre-trained SSL model with similarity metrics in the Semantic Sampler to search for semantic positive pairs in the original dataset. Figure 4 shows the whole process for creating SPPS. ![5_image_1.png](5_image_1.png) ) ) Figure 4: Illustrate the process of Semantic Sampler in identifying the semantic positive pairs. As shown in Figure 4, K images are randomly chosen from the training data set, and then they are encoded by using the pre-trained SSL model. The embedding vectors generated by the pre-trained model are duplicated in two lists: one list contains anchors and another list contains semantic candidate embedding vectors. In our approach, the cosine similarity function is used to find semantic positive pairs for each anchor (i.e., the list B in Figure 4) from those semantic candidates embedding vectors (i.e., the list A in the figure). The anchors may have more than one positive sample chosen from the candidates. This allows the model to go beyond a single positive instance and capture divergence feature information from different instances belonging to the same category. Eventually, a Semantic positive pair set (SPPS) is created containing pairs of the semantic positive samples. The semantic positive samples were paired when the similarity score lies in the range [0.97, 0.99] and our experiments show that the accuracy of mapping semantic pairs as well as model performance increased on downstream tasks when choosing such threshold values (see Section 4 for computing the accuracy of finding SPPS). Note that the maximum threshold, i.e., 0.99, was chosen to avoid identical image samples. Compared to the previous approaches such as FNC and NNCLR, the semantic positive pairs are found by the Semantic Sampler before training the instance discrimination model, therefore the model is trained on accurately identified semantic pairs from the beginning and this leads to faster convergence and improves the representation learning. On the contrary, FNC and NNCLR search the semantic positive samples in the batch during the training by using augmented images (i.e., distorted images) and a non-pretrained model which slows model convergence and degrades representation learning because of inaccurate ![6_image_0.png](6_image_0.png) semantic pairs. Figure 5: Example of semantic positive pairs found by our approach for different anchors in the STL10unlabeled dataset. Figure 5 shows examples of SPPS obtained from the STL10 dataset. It shows that our approach finds the true semantic positive pairs very well for those anchors, despite the anchor image having different properties (e.g. colour, direction, background, and size). Algorithm 1 Combining SPPS with original data Input: Dataset samples, constant k, and structure f. 1: ImageList1=[ ] 2: ImageList2=[ ] 3: for k ← 1 to N do 4: image ← *tensor*(xk) ▷ convert k images from dataset to tensor 5: emb_vector ← f(*image*) ▷ encode images by pre-trained model 6: norm_vector ← *Normalize*(emb_*vector*) ▷ l2 norm 7: ImageList1.append(norm_*vector*) 8: **end for** 9: 10: ImageList2 ← ImageList1 ▷ both lists have similar embedding vectors 11: Max= 0.99 and Min= 0.97 ▷ define threshold 12: sim= torch.mm(ImageList1, ImageList2.T) ▷ compute similarity 13: semantic_pairs_list=[ ] 14: for i ← 1 to *sim.size*()[0] do 15: for j ← 1 to *sim.size*()[1] do 16: if Sim[i, j] ≥ *M in* and Sim[i, j] ≤ Max **then** 17: P ositive_pair ← tuple(Dataset[i]*, Dataset*[j]) 18: semantic_pairs_list.append(P ositive_*pair*) 19: **end if** 20: **end for** 21: **end for** 22: applies a random transformation to semantic_pairs_list. 23: combines the semantic_pairs_list with the original dataset. Output: combine dataset ## 3.2 Learning With Semantic Positive Pairs This subsection describes how our approach SePP-ID (i.e., Semantic Positive Pairs for enhancing Instance Discrimination) uses SPPS to train instance discrimination SSL models. As demonstrated in Figure 6 a copy for each instance in the original dataset is created and random augmentations are applied for each copy to generate positive pairs. For those semantic positive pairs described in Section 3.1, we do not need to create copy for the instances because we already have pairs so we only need to apply random transformations to each instance in the pairs. After that, the SPPS is merged with the original training dataset and so the training dataset is slightly larger than the original one. ![7_image_0.png](7_image_0.png) Figure 6: The second step of the methodology: the instances of the original dataset and the SPPS are transformed and combined into one dataset. After combining the SPPS with the original dataset, we train the contrastive instance discrimination model on the new *combined* dataset with a momentum encoder approach (similar to MoCo-v2 Chen et al. (2020b)). The contrastive loss function attracts the two views of the same instance (i.e., positive pairs) closer in the embedding space while pushing apart all the other instances (i.e., negative samples): $$\ell(u,v)=-\log\frac{\exp(\sin(u,v)/\tau)}{\sum_{k=1}^{2N}\mathbb{1}_{\,[k\neq i]}\exp(\sin(u,w_{k})/\tau)}\tag{1}$$ As shown in Equation 1, our approach encourages the model to increase the similarity between the two views sim(*u, v*) and reduce the similarity with all other images in the batch wk where v and u could be either semantic positive pairs (i.e., two images belong to the same category) or positive pair (i.e., two views for the same instance). Note that the number of semantic positive samples of instance x may vary. For example, some instances may not have any semantic positive pair, while others may have more than one semantic positive sample. Thus, the overall loss of our contrastive instance discrimination approach is given by the following equation: $$loss=\frac{1}{N}\sum_{i=1}^{N}\left[\ell(u,v)\ +\ \sum_{m=1}^{M}\lambda_{im}\ell(u,v_{m})\right],\tag{2}$$ where 0 ≤ λ ≤ 1 represents regularization. In the overall loss function (Equation 2), the contrastive loss for the positive pairs is defined by the first term ℓ(*u, v*) where the second term ℓ(*u, v*m) defines the semantic positive pairs for the instance that has semantic pairs. In the case of λim = 0, there are no semantic positive pairs and the model will be trained only on the positive pairs ℓ(*u, v*), thus the overall loss is equal to the loss of the positive pairs. Under this scenario, the model training is reduced to the original approach such as vanilla MoCo-V2 (Chen et al., 2020b). On the other hand, if λim = 1 that means we have semantic pairs so the term for semantic positive pairs ℓ(*u, v*m) is added to the overall loss and is computed in the same manner that is being used to compute positive pairs. This approach would increase the richness of the latent space during visual representation learning. Our experiments show that this scheme improves model performance on the downstream tasks. ## 4 Experiments And Results 4.1 Main Tasks Datasets: We evaluated SePP-ID approach on three datasets, i.e. STL-10 "unlabeled" with 100K training images (Coates & Ng, 2011), CIFAR-10 with 50K training images (Krizhevsky, 2009), and ImageNet-1K with 1.28M training images (Russakovsky et al., 2015). Pre-trained model: Many candidate approaches can be used to train the pre-trained model of the Semantic Sampler for finding SPPS from the original dataset. As a pre-trained model is only used for finding SPPS, we prefer choosing models with relatively low computational overhead. The following are the intuitions on how the approach was chosen to train the Semantic Sampler's pre-trained model: 1) The model backbone was chosen to be ResNet50 because it is used in most instance discrimination approaches so that we can easily compare them; 2) The approach should provide reasonable performance when they are trained on a small batch size (e.g., 256) and a small number of epochs (e.g., 100) to use low computational overhead; 3) The approach uses a small projection head (e.g., 256) while training the model because we want to keep the projection head in the process of creating SPPS. If the projection head has a large output dimension such as VICReg (i.e., 8192 dimensions), it will add a large computational overhead for finding SPPS. Based on the above intuition, SimSiam (Chen & He, 2021), MoCo-V2 (Chen et al., 2020b) and SimCLR (Chen et al., 2020a) were chosen as the candidate to train the Semantic Sampler's pre-trained model. To choose the best approach for training the pre-trained model across these three approaches, we trained three models on ImageNet-1K for 100 epochs with batch size 256. After model training, we froze the model parameters and kept the projection head (256D). we evaluate the three pre-trained models (i.e., model pre-trained by SimSiam, MoCo-v2, and SimCLR) by creating SPPS and computing the accuracy of selected semantic pairs by leveraging the ImageNet labels. Table 1 demonstrates the performance of the three candidate Semantic Samplers' pre-trained model, showing that MoCo-v2 was the best-performing model for finding SPPS in terms of accuracy. Thus, we utilize the model pre-trained using the MoCo-v2 approach in the Semantic Sampler for all subsequent experiments. Table 1: The accuracy of finding semantic positive pairs in ImageNet across the three candidate approaches. | Approach | Accuracy | |------------|------------| | MoCo-v2 | 92.03% | | SimSiam | 91,44% | | SimCLR | 90.72% | Training Setup: We use ResNet50 as a backbone and the model is trained with SGD optimizer, weight decay 0.0001, momentum 0.9 and initial learning rate 0.03. The mini-batch size is 256 and the model is trained up to 800 epochs. We set the K-value = 10% of the ImageNet dataset (i.e., 128K random images are picked from the Imagenet dataset for finding semantic pairs). In our ablation study, we compared the performance of our models when various proportions (i.e., various K-value) of data were used for choosing SPPS. Evaluation: We evaluated SePP-ID approach by using linear evaluation, semi-supervised setting, transfer learning and object detection against leading SOTA approaches. In linear evaluation, we followed standard evaluation protocol (Chen et al., 2020a; He et al., 2020; Huynh et al., 2022; Dwibedi et al., 2021). We trained a linear classifier for 100 epochs on top of a frozen backbone pre-trained by SePP-ID. We used an ImageNet training set with random cropping and random left-to-right flipping augmentations to train the linear classifier from scratch. The results are reported on the ImageNet evaluation set with centre crop (224 × 224). In a semi-supervised setting, we fine-tune the network with 60 epochs using 1% labeled data and 30 epochs using 10% labeled data. Also, we employ a linear evaluation to assess the learned features from the ImageNet dataset on small datasets using transfer learning. Finally, we evaluate the transferability of the learned embeddings by finetuning the model on PASCAL VOC object detection (Everingham et al., 2010). Comparing with SOTA Approaches: We use linear evaluation to compare our approach (i.e., SePP-ID), which is momentum contrastive with semantic positive pairs, against vanilla MoCo-v2 on different epochs on the ImageNet-1k dataset. In addition, we compare our approach after 800 epochs with the performance of other SOTA on the ImageNet-1k. Table 2: Comparisons between vanilla MoCo-v2 and SePP-ID on the ImageNet dataset with different epochs. | Approach\Epochs | 100 | 200 | 400 | 800 | |------------------------------|-------|-------|-------|-------| | MoCo-v2 (Chen et al., 2020b) | 67.4% | 69.9% | 71.0% | 72.2% | | SePP-ID (proposed) | 69.2% | 72.3% | 75.2% | 76.3% | | Table 3: Comparisons between SePP-ID and SOTA approaches on ImageNe | t. | | | |-----------------------------------------------------------------------|--------|------------|----------| | Approach | Epochs | Batch size | Accuracy | | MoCo-v2 (Chen et al., 2020b) | 800 | 256 | 72.2% | | BYOL (Grill et al., 2020) | 1000 | 4096 | 74.4% | | SimCLR (Chen et al., 2020a) | 1000 | 4096 | 69.3% | | SimSiam (Chen & He, 2021) | 800 | 512 | 71.3% | | VICReg (Bardes et al., 2021) | 1000 | 2048 | 73.2% | | SWAV (Caron et al., 2020) | 800 | 4096 | 75.4% | | OBoW(Gidaris et al., 2021) | 200 | 256 | 73.8% | | DINO (Caron et al., 2021) | 800 | 1024 | 75.3% | | Barlow Twins (Zbontar et al., 2021) | 1000 | 2048 | 73.2% | | CLSA (Wang & Qi, 2022) | 200 | 256 | 73.3% | | HCSC (Guo et al., 2022) | 200 | 256 | 73.3% | | SNCLR (Ge et al., 2023) | 800 | 4096 | 75.3% | | SCFS (Song et al., 2023) | 800 | 1024 | 75.7% | | UniVIP (Li et al., 2022b) | 300 | 4096 | 74.2% | | IFND (Chen et al., 2021) | 200 | 256 | 69.7% | | CLFN (Auh et al., 2023) | 100 | 512 | 59.6% | | NNCLR (Dwibedi et al., 2021) | 1000 | 4096 | 75.5% | | FNC (Huynh et al., 2022) | 1000 | 4096 | 74.4% | | SePP-ID(with MoCo-v2, proposed) | 800 | 256 | 76.3% | | SimCLR (Chen et al., 2020a) | 1000 | 256 | 67% | | NNCLR (Dwibedi et al., 2021) | 1000 | 256 | 68.7% | | SePP-ID(with SimCLR) | 800 | 256 | 69.3% | Table 2 shows that our approach consistently improved the performance of the instance discrimination approach across various numbers of epochs. Our approach SePP-ID significantly outperforms the vanilla MoCo-v2 by 4.1% on 800 epochs. The results substantiate the importance of semantic pairs in instance discrimination representation learning. For example, our approach with 400 epochs surpasses the vanilla MoCo-v2 with 800 epochs by 3%. Table 3 highlights the advantage of using our approach to enhance the contrastive instance discrimination SSL approaches, clearly outperforming all baselines. This supports our hypothesis that we can obtain more accurate semantic positive pairs by using Semantic Sampler and the original dataset. Consequently, we can train the instance discrimination models on correct semantic pairs, thereby enhancing representation learning and improving model performance on the downstream task. Also, the result shows that using a nonpre-trained model with augmented images to determine the semantic pairs may slow the model convergence because the model needs several epochs to be able to pick the correct semantic positive pairs. Therefore, our approach achieves 76.3% after 800 epochs, which is better than NNCLR and FNC by 0.8% and 1.9%, respectively (after 1000 epochs). NNCLR and FNC are attracting the nearest neighbour of the anchor in the embedding space because they assume that they have similar content (same category). However, using a non-pre-trained model and augmented images (distortion images) to find images that have similar content may cause obtaining wrong semantic pairs during model training which leads to slow model convergence and reduced model performance as shown in Table 3. On the contrary, our approach acquires more accurate semantic pairs by using the Semantic Sampler with the original dataset. Therefore, the instance discrimination model is trained on the right semantic pairs from the beginning of the training which leads to improved model performance and fast convergence. In addition, we do further analysis by using the end-to-end mechanism (i.e., SePP-ID(*with SimCLR*) where both encoders are updated in the backpropagation similar to NNCLR, FNC, and SimCLR. In our experiment, we used batch size 256 because larger batch sizes require memory that exceeds the GPU memory available. The result shows that our approach performs better than both (i.e., NNCLR and SimCLR) when both approaches use 256 as batch size. Semi-Supervised Learning on ImageNet: In this part, we evaluate the performance of SePP-ID under the semi-supervised setting. Specifically, we use 1% and 10% of the labeled training data from ImageNet1k for fine-tuning, which follows the semi-supervised protocol in SimCLR (Chen et al., 2020a). The top-1 accuracy is reported in Table 4 after fine-tuning using 1% and 10% training data. SePP-ID outperforms all the compared methods. The results demonstrate that SePP-ID achieves the best feature representation quality. Table 4: Semi-supervised learning results on ImageNet. Top-1 performances are reported on fine-tuning a pre-trained ResNet-50 with ImageNet 1% and 10% datasets. top-1 Approach\Fraction ImageNet 1% ImageNet 10% SimCLR (Chen et al., 2020a) 48.3% 65.6% BYOL(Grill et al., 2020) 53.2% 68.8% SWAV (Caron et al., 2020) 53.9% 70.2% DINO (Dwibedi et al., 2021) 50.2% 69.3% SCFS (Song et al., 2023) 54.3% 70.5% NNCLR (Dwibedi et al., 2021) 56.4% 69.8% FNC (Huynh et al., 2022) 63.7% 71.1% SePP-ID (*proposed*) 64.2% **71.8%** Transfer Learning on Downstream Tasks: We follow the linear evaluation setup described in (Grill et al., 2020; Chen et al., 2020a; Dwibedi et al., 2021) to show the effectiveness of transfer representations learned by SePP-ID on multiple downstream classification tasks. The datasets used in this benchmark are as follows: CIFAR Krizhevsky (2009), Stanford Cars Krause et al. (2013), Oxford-IIIT Pets Parkhi et al. (2012), and Birdsnap Berg et al. (2014). We first train a linear classifier using the training set labels while choosing the best regularization hyper-parameter on the respective validation set. Then we combine the training and validation set to create the final training set, which is used to train the linear classifier that is evaluated on the test set. | * denotes the results are reproduced in this study. Approach CIFAR-10 | CIFAR-100 | Car | Birdsnap | Pets | | |-------------------------------------------------------------------------|-------------|-------|------------|--------|-------| | MoCo-v2 Chen et al. (2020b)* | 91.2% | 74.8% | 51.2% | 43.8% | 84.5% | | NNCLR (Dwibedi et al., 2021) | 93.7% | 79.0% | 67.1% | 61.4% | 91.8% | | FNC (Huynh et al., 2022) | 93% | 76.8% | 68.8% | 54.0% | 89.0% | | SePP-ID (proposed) | 94.5% | 79.7% | 68.8% | 62% | 92.3% | Table 5 presents transfer learning results where our approach SePP-ID improves over all the counterpart approaches on five datasets. This demonstrates that our model learns useful semantic features, enabling it to generalize to unseen data in different downstream tasks. Object Detection Task: To further evaluate the transferability of the learned representation, we fine-tune the model on PASCAL VOC object detection. We use similar settings as in MoCo (Chen et al., 2020b), where we finetune on the VOC trainval07+12 set using Faster R-CNN with a R50-C4 backbone and evaluate on VOC test2007. we fine-tuned for 24k iterations (≈ 23 epochs). Table 6 reveals that our approach performs similarly to FNC and better than vanilla MoCo-v2. Table 6: Transfer learning on Pascal VOC object detection. | Approach | AP50 | |------------------------------|--------| | MoCo-V2 (Chen et al., 2020b) | 82.5% | | FNC (Huynh et al., 2022) | 82.8% | | SePP-ID (proposed) | 82.8% | ## 4.2 Ablation Study In this section, we provide a more in-depth analysis of our approach by conducting four different studies as follows: 1) We apply our method to different datasets to show that our approach performs consistently across different datasets; 2) We experimentally explore the impact of the parameter k on model performance; 3) We also randomly selected images from the original dataset, and then add them into the original data after augmentation to ensure that the improvement of performance was due to the SPPS rather than simply increasing data size; and 4) Conduct experiments on different instance discrimination approaches. ## 4.2.1 Comparisons On Different Datasets | STL-10 | CIFAR-10 | | | | | | |--------------------|------------|--------|--------|--------|--------|--------| | Approach\Epoch | 200 | 400 | 800 | 200 | 400 | 800 | | MoCo-v2 | 69.46% | 77.37% | 80.08% | 65.27% | 69.50% | 73.88% | | SePP-ID (proposed) | 74.01% | 80.36% | 82.29% | 72.05% | 73.87% | 77.46% | we aim to verify that our method maintains its performance across various datasets when using different backbones (e.g. ResNet18). To achieve this, we compare vanilla MoCo-v2 with our approach, i.e. SePP-ID on two datasets (STL-10 and CIFAR-10) with k-value = 10% of the dataset. Note the K-value with ImageNet dataset is 10%, fix the proportion (10%) with the other datasets (STL-10 and CIFAR-10) to see if we can obtain the same improvement in representation learning with the same proportion on smaller datasets. Table 7 illustrates that our method outperforms the vanilla approach by 4.55% after 200 epochs on STL-10 and achieved 80.36% after 400 epochs which is better than Vanilla MoCo-v2 at 800 epochs. On CIFAR-10, SePP-ID outperforms MoCo-v2 across all the different epochs and achieved 77.46% after 800 epochs which is higher than the vanilla MoCo-v2 by 3.58%. These results show that our approach is working properly even with smaller datasets and different backbones. ## 4.2.2 Impact Of The K **Value** In this subsection, we experimentally analysed the impact of the k parameter on our model's performance. To obtain the effect of the (k-value), we trained models for 100 epochs on the ImageNet dataset and fixed all the parameters except the k parameter. Then, we linearly evaluated the models to see the effect of k on downstream tasks. | Table 8: SePP-ID performance with varying k-value. | | | | | | | | |------------------------------------------------------|--------|--------|--------|--------|--------|--------|--------| | k% | 0% | 5% | 10% | 25% | 50% | 80% | 100% | | Number of SPPS | 0 | 12,752 | 18,109 | 23,968 | 48,308 | 54,710 | 61,200 | | Hours for training | ≈ 30 | ≈ 37 | ≈ 45 | ≈66 | ≈103 | ≈148 | ≈172 | | Accuracy | 67.40% | 68.63% | 69.20% | 69.56% | 69.70% | 69.82% | 69.85% | Table 8: SePP-ID performance with varying k-value. It is obvious in Table 8 that when increasing k, the number of semantic positive pairs is increased (i.e., Number of SPPS), and so is the model performance. This suggests that semantic positive pairs affect positively the performance of instance discrimination models. On the other hand, a larger k needs more training hours as expected. Therefore, we chose k equal to (10%) for all of our experiments. Note that we used two A100 80GB to train the models in our experiments. To ensure that the enhancement in the performance was attributed to the semantic positive pairs rather than increasing the size of the dataset, we randomly picked 18109 images from ImageNet (which is similar to the number of semantic positive samples found with 10% of the ImageNet dataset) and created two copies of them (xi and x ′ i ) each of which is randomly augmented. We added these samples to the original ImageNet dataset. This allows us to test whether the improvement came from the semantic positive pairs or from increasing the size of the dataset. The results shown in Table 9 indicate that simply adding random augmented images to the original dataset has negligible improvement (i.e., 0.2%) to MoCo-v2's performance, while the performance improved by 4.1% when adding semantic positive samples. Table 9: Compare the performance of MoCo-V2 with random add augmented images against SePP-ID (k=10%) after 800 epochs on ImageNet. Dataset pre-process Number of added pairs Accuracy MoCo-V2 0 72.2% MoCo-V2 (added random augmented images) 18,109 72.4% SePP-ID(k = 10%) 18,109 76.3% ## 4.2.3 Non-Contrastive Learning With Our Approach In these experiments, we want to study the effect of our approach on non-contrastive learning. To do so, we used the framework of three SOTA approaches, SimSiam(Chen & He, 2021), DINO (Dwibedi et al., 2021), and VICReg(Bardes et al., 2021). | Approach\Epochs | 100 | 200 | 400 | 800 | |------------------------------|-------|-------|--------|--------| | SimSiam (Chen et al., 2020a) | 68.1% | 70.0% | 70.8% | 71.3% | | SePP-ID (proposed) | 68.9% | 71.1% | 71.9 % | 72.5 % | Table 10: Compare our approach (SePP-ID) with SimSiam framework versus vanilla Simsiam . Table 10 shows that semantic positive pairs found by our approach consistently improve the representation learning of the non-contrastive instance discrimination method (knowledge distillation). our approach surpasses vanilla SimSiam by 1.2% on 800 epochs. Also, our approach with 400 epochs achieved better than the vanilla approach with 800 epochs. Furthermore, our approach also significantly improves the performance of an information maximisation instance discrimination method, i.e. VICReg. Table 11 indicates that using the VICReg framework with semantic positive pairs found by our approach increases the performance by 2.35% on 100 epochs and 2.9% on 1000 epochs. | Approach\Epochs | 100 | 1000 | |------------------------------|--------|--------| | VICReg (Bardes et al., 2021) | 68.6 % | 73.2% | | SePP-ID (proposed) | 70.95% | 76.1% | Table 11: Comparisons between VICReg and SePP-ID on ImageNet. | Approach\Epochs | 200 | |-----------------------------|-------| | DINO (Dwibedi et al., 2021) | 66.8% | | SePP-ID (proposed) | 68.6% | Table 12: Compare the performance accuracy between vanilla DINO and SePP-ID on ImageNet. Finally, we used DINO (Caron et al., 2021) frameworks, which use centring and sharping to avoid representation collapse. We used the publicised implementation of DINO to train two models: Vanilla DINO and DINO with semantic positive pairs found by our approach for 200 epochs on the ImageNet dataset. We employed the same hyperparameters used in the original paper. The reported results in Table12 are for training without multi-crop. It is apparent that semantic positive pairs found by our approach play an important role in improving the representation learning for the DINO framework. Our approach with the DINO framework outperforms the Vanilla DINO by 1.8% in linear evaluation. ## 5 Conclusion In this paper, we have proposed to use a Semantic Sampler (pre-trained model and similarity metric) with the original dataset to find semantic positive pairs. We demonstrated that these semantic positive samples can significantly improve the visual representation learning of the SSL instance discrimination methods on three datasets which are ImageNet, STL-10, and CIFAR-10. Our experiments also indicate that our approach outperformed other methods, including NNCLR, FNC and CLFN, which are using semantic positive samples. This suggests that the true semantic positive samples are important for learning visual representations with instance discrimination approaches. ## Acknowledgments We would like to thank the University of Aberdeen's HPC facility for enabling this work. ## References Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. *arXiv preprint arXiv:1911.05371*, 2019. Joonsun Auh, Changsik Cho, and Seon-Tae Kim. Contrastive learning for reducing false negatives with global and local views in augmented data. In 2023 Innovations in Intelligent Systems and Applications Conference (ASYU), pp. 1–5, 2023. doi: 10.1109/ASYU58738.2023.10296635. Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. *arXiv preprint arXiv:2105.04906*, 2021. Thomas Berg, Jiongxin Liu, Seung Woo Lee, Michelle L Alexander, David W Jacobs, and Peter N Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In *Proceedings of the IEEE Conference* on Computer Vision and Pattern Recognition, pp. 2011–2018, 2014. Matko Bošnjak, Pierre H Richemond, Nenad Tomasev, Florian Strub, Jacob C Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, and Jovana Mitrovic. Semppl: Predicting pseudo-labels for better contrastive representations. *arXiv preprint arXiv:2301.05158*, 2023. Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 132–149, 2018. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in neural information processing systems, 33:9912–9924, 2020. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 9650–9660, 2021. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020a. Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, and Ming-Hsuan Yang. Incremental false negative detection for contrastive learning. *arXiv preprint arXiv:2106.03719*, 2021. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 15750–15758, 2021. Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*, 2020b. Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. *Advances in neural information processing systems*, 33:8765–8775, 2020. Adam Coates and Andrew Y Ng. Analysis of large-scale visual recognition. In Advances in neural information processing systems, pp. 284–292, 2011. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In *International conference* on machine learning, pp. 647–655. PMLR, 2014. Aiden Durrant and Georgios Leontidis. Hyperspherically regularized networks for self-supervision. *Image* and Vision Computing, 124:104494, 2022. Aiden Durrant and Georgios Leontidis. Hmsn: Hyperbolic self-supervised learning by clustering with ideal prototypes. *arXiv preprint arXiv:2305.10926*, 2023. Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. With a little help from my friends: Nearest-neighbor contrastive learning of visual representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9588–9597, 2021. Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *International journal of computer vision*, 88:303–338, 2010. Chongjian Ge, Jiangliu Wang, Zhan Tong, Shoufa Chen, Yibing Song, and Ping Luo. Soft neighbors are positive supporters in contrastive visual representation learning. *arXiv preprint arXiv:2303.17142*, 2023. Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. Learning representations by predicting bags of visual words. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 6928–6938, 2020. Spyros Gidaris, Andrei Bursuc, Gilles Puy, Nikos Komodakis, Matthieu Cord, and Patrick Perez. Obow: Online bag-of-visual-words generation for self-supervised learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6830–6840, 2021. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587, 2014. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020. Yuanfan Guo, Minghao Xu, Jiawen Li, Bingbing Ni, Xuanyu Zhu, Zhenbang Sun, and Yi Xu. Hcsc: Hierarchical contrastive selective coding. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition (CVPR), pp. 9706–9715, June 2022. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729–9738, 2020. Tri Huynh, Simon Kornblith, Matthew R Walter, Michael Maire, and Maryam Khademi. Boosting contrastive self-supervised learning with false negative cancellation. In *Proceedings of the IEEE/CVF winter conference* on applications of computer vision, pp. 2785–2795, 2022. Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, and Diane Larlus. Hard negative mixing for contrastive learning. *Advances in Neural Information Processing Systems*, 33:21798–21809, 2020. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in neural information processing systems, 33:18661–18673, 2020. Dong-Ki Kim and Matthew R Walter. Satellite image-based localization via learned embeddings. In *2017* IEEE International Conference on Robotics and Automation (ICRA), pp. 2073–2080. IEEE, 2017. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *2013 IEEE International Conference on Computer Vision Workshops*, pp. 554–561, 2013. doi: 10.1109/ICCVW.2013.77. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. Yunfan Li, Mouxing Yang, Dezhong Peng, Taihao Li, Jiantao Huang, and Xi Peng. Twin contrastive learning for online clustering. *International Journal of Computer Vision*, 130(9):2205–2221, 2022a. Zhaowen Li, Yousong Zhu, Fan Yang, Wei Li, Chaoyang Zhao, Yingying Chen, Zhiyang Chen, Jiahao Xie, Liwei Wu, Rui Zhao, et al. Univip: A unified framework for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14627–14636, 2022b. Yiding Lu, Yijie Lin, Mouxing Yang, Dezhong Peng, Peng Hu, and Xi Peng. Decoupled contrastive multiview clustering with high-order random walks. *arXiv preprint arXiv:2308.11164*, 2023. Alžběta Manová, Aiden Durrant, and Georgios Leontidis. S-jea: Stacked joint embedding architectures for self-supervised visual representation learning. *arXiv preprint arXiv:2305.11701*, 2023. Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6707–6717, 2020. Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012. Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. *arXiv preprint arXiv:2010.04592*, 2020. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. *International Journal of Computer Vision (IJCV)*, 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. Kaiyou Song, Shan Zhang, Zimeng Luo, Tong Wang, and Jin Xie. Semantics-consistent feature search for self-supervised visual representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 16099–16108, October 2023. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? *Advances in Neural Information Processing Systems*, 33:6827–6839, 2020. Daniel J Trosten, Sigurd Lokse, Robert Jenssen, and Michael Kampffmeyer. Reconsidering representation alignment for multi-view clustering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1255–1265, 2021. Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In *European conference on computer vision*, pp. 268–285. Springer, 2020. Xiao Wang and Guo-Jun Qi. Contrastive learning with stronger augmentations. *IEEE transactions on* pattern analysis and machine intelligence, 45(5):5549–5560, 2022. Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, and Noah Goodman. Conditional negative sampling for contrastive learning of visual representations. *arXiv preprint arXiv:2010.02037*, 2020. Tete Xiao, Xiaolong Wang, Alexei A Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. *arXiv preprint arXiv:2008.05659*, 2020. Mouxing Yang, Yunfan Li, Peng Hu, Jinfeng Bai, Jiancheng Lv, and Xi Peng. Robust multi-view clustering with incomplete information. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(1): 1055–1069, 2022a. Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, and En Zhu. Interpolation-based contrastive learning for few-label semi-supervised learning. IEEE Transactions on Neural Networks and Learning Systems, 2022b. Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In *International Conference on Machine Learning*, pp. 12310–12320. PMLR, 2021. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In *Computer* Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13, pp. 818–833. Springer, 2014.
Review 1: Summary: This paper explores an instance discriminative-based self-supervised method. Instead of using augmented images as positive pairs, this paper proposes adopting images with similar semantic content as positive instances. Specifically, the necessary positive pairs are obtained by searching with a pre-trained model, ensuring the correct positive pairs from the start of model training. Experiments are conducted on benchmark datasets, such as ImageNet. Strengths and Weaknesses: Strengths: 1. The investigated problem is important. Learning effective feature representations from unlabeled data has gained significant attention and has practical applications. 2. The method is well-motivated and implemented. Positive pairs, especially hard positive pairs, play a crucial role in self-supervised learning (SSL), and utilizing a pre-trained model to identify such pairs is a reasonable approach. 3. Experiments are conducted using the ImageNet dataset, which is a widely accepted standard for evaluating computer vision algorithms. Weaknesses: 1. Limited downstream evaluation. The paper only evaluates the method on a downstream classification task using a linear evaluation protocol. Including more downstream tasks, such as classification under fine-tuning protocol, detection, and segmentation, would strengthen the paper's findings. 2. Are they determined based on ground truth labels? If so, it is important to report the self-supervised results based on the ground truth labels, as well as results obtained through supervised training. This would provide an upper-bound reference and enable a more comprehensive analysis of the proposed method's performance. Requested Changes: See weaknesses. Broader Impact Concerns: Na. ================================================== Review 2: Summary: This paper studies how to obtain better positive pairs in self-supervised learning (SSL) and proposed a method called SePP-ID. The proposed method use pre-trained SSL models to identify positive pairs in the sampled batch. By employing such method on previous methods such as MoCo-V2, the performance of SSL can be significantly improved. Strengths and Weaknesses: Strength: 1) The motivation of mining better positive pairs is interesting and reasonable. 2) The proposed method brings significant improvements on both the non-contrastive and contrastive methods. 3) Configurations such as which pre-trained SSL model to use, k-value, are well-ablated in the experiments. Weakness: 1) The experiments of adding SePP-ID on stronger baseline such as NNCLR are missing. 2) The experiments of adding SePP-ID on distillation-based SSL such as SWAV and DINO are missing. Requested Changes: 1) See weakness section. 2) To some extent, the proposed method can be viewed as a special type of distillation method, as the supervision signal is from another pre-trained model. Therefore, it would be better for the author to discuss/compare between the proposed method and some previous works on distillation of SSL models. Broader Impact Concerns: No broader impact statement is needed. ================================================== Review 3: Summary: This paper designs an approach named SePP-ID for achieving false-negative robust contrastive learning. Concretely, SePP-ID randomly select K% samples from the dataset as positive candidates and use MoCo-v2 to infer the semantically-relevant pairs in the set. After that, SePP-ID treat these pairs as positives and perform contrastive learning. To verify the effectiveness of the proposed approach, the authors conduct extensive experiments on three widely-used benchmarks. Strengths and Weaknesses: Strengths: This paper devises a new false-negative robust contrastive learning approach that resorts to pre-trained models (e.g., MoCo-v2) to recall the potential false negatives. Weaknesses: 1. What are the differences between the existing FN-robust works and the proposed approach? Some works usually first pre-train the models in the first stage and then recall the false negatives in the second stage. 2. Some details are missing. First, the authors does not detail the contrastive learning framework used. Second, how to set the hyper-parameter $K$ for STL and Cifar-10 datasets. 3. I think the comparisons are insufficient and unfair to some extent. First, some newest FN-robust methods (2023) are not compared. Second, the approach adopts additional external models while the existing FN-robust methods usually are trained from scratch. Requested Changes: Please see the weaknesses Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept with minor revision Comment: The reviewers unanimously recommended leaning accept/accept to the paper upon reading the author rebuttal and additional updates. The AE also appreciates the relatively comprehensive survey in the related-work section. While this work is slightly above the TMLR standard for acceptance, there are weaknesses the authors should try to address: 1) A fair comparison to FNC/NNCLR is needed. Either the authors should show an improvement on top of these two methods as reviewer LdV9 mentioned, or the authors should compare to them on a fair basis. Right now, the reported results from FNC/NNCLR seem to be based on SimCLR, which is a weaker starting point compared to MoCov2. This makes the comparison right now "apples-to-oranges". 2) Downstream evaluation is still lacking. The authors did not do a good job answering reviewer DfRX despite the new SSL results. The authors are highly recommended to consider similar transfer learning experiments as shown in Table 3 of NNCLR. Additional downstream evaluation on segmentation or detection as the reviewer mentioned will also be appreciated. ==================================================
# Federated K**-Means Clustering Via Dual Decompositionbased Distributed Optimization** Anonymous authors Paper under double-blind review ## Abstract The use of distributed optimization in machine learning can be motivated either by the resulting preservation of privacy or the increase in computational efficiency. On the one hand, training data might be stored across multiple devices. Training a global model within a network where each node only has access to its own confidential data requires the use of distributed algorithms. Even if the data is not confidential, sharing it might be prohibitive due to bandwidth limitations. On the other hand, the ever increasing amount of available data leads to large-scale machine learning problems. By splitting the training process across multiple nodes its efficiency can be significantly increased. This paper demonstrates the application of dual decomposition to the distributed training of k-means clustering problems. After an overview of distributed and federated machine learning, the mixed-integer quadratically constrained programming-based formulation of the k-means clustering training problem is presented. The training can be performed in a distributed manner by splitting the data across different nodes and linking these nodes through consensus-constraints. Finally, the performance of the subgradient method, the bundle trust method and the quasi-Newton dual ascent algorithm are evaluated on a set of benchmark problems. ## 1 Introduction Training a machine learning model of any kind on a large set of data usually involves the solution of a challenging optimization problem. If the underlying data set becomes too large, it might not be possible to solve the resulting optimization problem in a reasonable amount of time. Distributed optimization methods can aid in rendering the optimization problem tractable through the use of multiple computational resources. Peteiro-Barral & Guijarro-Berdiñas (2013) provide an overview of methods for distributed machine learning . In order to train a global model in a distributed manner a consensus has to be established between the involved nodes and their underlying optimization problems. Forero et al. (2010; 2011) and Georgopoulos & Hasler (2014) demonstrate the distributed training of machine learning models using consensus-based distributed optimization. Tsianos et al. (2012) discuss practical issues with a consensus-based approach which arise from the difference between synchronous and asynchronous communication. Nedić (2020) provides an overview of distributed gradient methods for convex training problems while Verbraeken et al. (2020) give a general survey of distributed machine learning. While computational performance still remains an issue for many machine learning problems, the increase in computing power and in the efficiency of optimization algorithms can render many challenging problems tractable. However, the inability to share data due to confidentiality reasons still necessitates the use of distributed algorithms. Fig. 1a shows a setting in which training data is stored across two different nodes. Each node can use its local data to train an individual machine learning model. By including a coordination layer the two training processes can be guided in a way that a global model is trained, without the need to share confidential data. If the underlying optimization problems are still hard to solve, the training process can be further divided into subproblems. Fig. 1b depicts the situation in which models of different node clusters are trained in a distributed manner which in turn are again coordinated in order to obtain a global model. Distributed training of a global model without sharing individual training data is often referred to as federated optimization or federated learning (Konečn`y et al., 2016). Most algorithms for federated learning ![1_image_0.png](1_image_0.png) Figure 1: Examples of federated learning architectures. involve an averaging step of the model parameters of the individual nodes (McMahan et al., 2017). Yuan et al. (2021) propose a dual averaging step in order to handle the nonsmoothness of federated composite optimization problems. Federated learning methods have been applied in the context of manufacturing (Hegiste et al., 2022), healthcare (Antunes et al., 2022), mobile devices (Lim et al., 2020) and smart city sensing (Jiang et al., 2020). Li et al. (2020) and Liu et al. (2022) provide surveys on federated learning while Chamikara et al. (2021) examine the privacy aspects related to external attacks. Applying federated learning to heterogeneous data sets can lead to the deterioration of the model quality of individual nodes in regards to their own training data, which might hinder their willingness to participate in such a setting. This issue is addressed through personalized federated learning (Kulkarni et al., 2020; Tan et al., 2022). ## 2 K**-Means Clustering** K-means clustering describes an unsupervised machine learning problem in which a set of observations/data is divided into K disjoint clusters according to a similarity measure (Gambella et al., 2021). Clustering problems can be found in many practical application such as image segmentation (Dhanachandra et al., 2015), customer market segmentation (Kansal et al., 2018) or the identification of similar operating points in a production plant (Rahimi-Adli et al., 2019). This section presents the mixed-integer programming-based formulation of the training problem. The formulation is subsequently extended to the case of distributedly stored data, which gives rise to a federated learning problem. Consensus constraints are used to couple the training problems of different nodes. These constraints can be dualized such that the federated learning problem can be solved via dual decomposition-based distributed optimization. Since the underlying optimization problem contains integrality constraints it is not convex and thus strong duality does not hold. However, a feasible primal solution can be computed in each iteration through an averaging heuristic. 2.1 MIQCP formulation ![2_image_0.png](2_image_0.png) Figure 2: Illustration of K-means clustering both in a centralized and a decentralized setting. The goal of K-means clustering is to assign a set of observations yj ∈ R ny , j ∈ J = {1, . . . , *|J |}* to a set of clusters K = {1*, . . . , K*} and to compute the centroids of each cluster. The number of clusters is a hyperparameter and is set a priori or in an iterative manner. This problem can be formulated as a mixed-integer nonlinear programming (MINLP) problem (Aloise et al., 2012; Gambella et al., 2021), The binary variables wjk indicate if observation yj is assigned to cluster k and mk is the centroid of cluster k. Constraint (1b) enforces that each observation is assigned to exactly one cluster, while the objective is to minimize the sum of the squared Euclidean distances of all observations to the centroids of their assigned clusters. Problem 1 is a nonconvex MINLP which is hard to solve. In practice it is more efficient to use a linearized formulation by introducing the variable djk, which describes the squared distance between an $$\min_{w_{jk},\mathbf{m}_{k}}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}w_{jk}\cdot\|\mathbf{y}_{j}-\mathbf{m}_{k}\|_{2}^{2},$$ s. t. $$\sum_{k\in\mathcal{K}}w_{jk}=1,\forall j\in\mathcal{J},$$ $$w_{jk}\in\{0,1\},\ \forall j\in\mathcal{J},k\in\mathcal{K},\ \mathbf{m}_{k}\in\mathbb{R}^{n_{\Psi}}\ \forall k\in\mathcal{K}.$$ $$(1\mathrm{a})$$ $$(1\mathbf{b})$$ $$(1\mathbf{c})$$ observation j and the centroid of cluster k (Gambella et al., 2021), $$\min_{w_{jk},d_{jk},\mathbf{m}_{k}}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}d_{jk}$$ s.t. $$\sum_{k\in\mathcal{K}}w_{jk}=1,\forall j\in\mathcal{J},$$ $$d_{jk}\geq\|\mathbf{y}_{j}-\mathbf{m}_{k}\|_{2}^{2}-M_{j}\cdot(1-w_{jk}),\ \forall j\in\mathcal{J},k\in\mathcal{K}$$ $$w_{jk}\in\{0,1\},d_{jk}\geq0,\ \forall j\in\mathcal{J},k\in\mathcal{K},\ \mathbf{m}_{k}\in\mathbb{R}^{n_{\varphi}}\ \forall k\in\mathcal{K}.$$ $$(2\mathrm{a})$$ (2b) $\begin{array}{l}\left(2\text{c}\right)\end{array}$ (2d) (2d) $$(3\mathbf{a})$$ 2 is a mixed-integer quadratically constrained programming (MIQCP) problem with a convex integer relaxation. Constraint (2c) is an epigraph formulation of the squared Euclidean distance if observation j is assigned to cluster k, i.e., when wjk = 1. Otherwise, the parameter Mj has to be large enough so that the constraint is trivially satisfied for wjk = 0. In theory a common big-M parameter can be used for all constraints described by (2c). However, the parameter should be chosen as small as possible in order to avoid weak integer relaxations. In the following the big-M parameter is set as $M_{j}=\max_{\mathbf{x}\in\mathcal{J}}\|\mathbf{y}_{j}-\mathbf{\chi}\|_{2}^{2},\ \forall j\in\mathcal{J},$ $\mathcal{Y}=\{\mathbf{y}\in\mathbb{R}^{n_{\mathbf{y}}}|\min_{j\in\mathcal{J}}\ [\mathbf{y}_{j}]_{l}\leq[\mathbf{y}]_{l}\leq\max_{j\in\mathcal{J}}\ [\mathbf{y}_{j}]_{l},\ l=1,\ldots,n_{\mathbf{y}}\}.$ $$(3\mathbf{b})$$ Different approaches have been proposed to solve the clustering optimization problem. Bagirov & Yearwood (2006) present a heuristic method based on nonsmooth optimization, Aloise et al. (2012) propose a column generation algorithm and Karmitsa et al. (2017) use a diagonal bundle method. Fig. 2a illustrates the concept of K-means clustering. The unlabeled data (left) is split into 3 clusters according to their distance to the computed cluster centroid (crosses). ## 2.2 Distributed Consensus Formulation Problem 2 describes the case in which the entire data set is accessible from a single node. However, this might not always be the case, especially if the underlying data is confidential. In the following it is assumed that the data set is split across several nodes I = {1*, . . . , N*s}, with each node having access to the data-subset Ji ⊂ J . The MIQCP problem 2 can be extended to the case of multiple nodes, The goal of problem 4 is again to compute a set of cluster centroids mk and to assign the observations of all nodes to these clusters. However, if the nodes cannot share their data, problem 4 cannot be solved in a centralized manner. A simple distributed approach would be to solve a clustering problem in each node i. This could lead to situation as depicted in Fig. 2b and Fig. 2c. If each the data-set is split across two nodes, each one can solve a clustering problem. However, both nodes will compute different cluster centroids. The goal of a federated learning approach is to train a global model, i.e., global cluster centroids in the case of K-means clustering, without sharing the local data between the nodes. To this end each node i can compute individual cluster centroids mik, $$\min_{w_{ijk},u_{ijk},\mathbf{m}_{ik}}\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}_{i}}\sum_{k\in\mathcal{K}}d_{ijk}$$ $$\text{s.t.}\sum_{k\in\mathcal{K}}w_{ijk}=1,\forall i\in\mathcal{I},j\in\mathcal{J}_{i},$$ dijk (5a) $$\min_{w_{ijk},d_{ijk},\mathbf{m}_{k}}\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}d_{ijk}$$ s. t. $$\sum_{k\in\mathcal{K}}w_{ijk}=1,\forall i\in\mathcal{I},j\in\mathcal{J}_{i},$$ $$d_{ijk}\geq\|\mathbf{y}_{j}-\mathbf{m}_{k}\|_{2}^{2}-M_{j}\cdot(1-w_{ijk}),\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},$$ $$w_{ijk}\in\{0,1\},d_{ijk}\geq0,\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},\ \mathbf{m}_{k}\in\mathbb{R}^{n_{\Psi}}\ \forall,k\in\mathcal{K}.$$ $$(4\mathrm{a})$$ $$(4\mathrm{b})$$ $$\begin{array}{l}{(4\mathrm{c})}\\ {(4\mathrm{d})}\end{array}$$ $$(5\mathrm{a})$$ $$(5\mathrm{b})$$ $d_{ijk}\geq\|\mathbf{y}_{j}-\mathbf{m}_{ik}\|_{2}^{2}-M_{j}\cdot(1-w_{ijk}),\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},$ $\mathbf{m}_{ik}=\mathbf{m}_{i^{\prime}k},\ \forall i\in\mathcal{I},i^{\prime}\in\mathcal{N}_{i},k\in\mathcal{K},$ $w_{ijk}\in\{0,1\},d_{ijk}\geq0,\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},\ \mathbf{m}_{ik}\in\mathbb{R}^{n_{\Psi}}\ \forall,i\in\mathcal{I},k\in\mathcal{K}.$ Since the goal is to obtain global cluster centroids, the individual cluster centroids are coupled through consensus constraints (5d), where Ni contains the set of neighboring nodes of node i. Problem 5 describes a set of Ns subproblems coupled through the consensus constraints. In the following subsection dual variables are used in order to decouple the clustering problems of the different nodes. ## 3 Dual Decomposition-Based Distributed Clustering This section presents how the consensus formulation (5) of the clustering problem can be decomposed by introducing dual variables. Dual decomposition can be applied to constraint-coupled optimization problems of the form (5c) (5d) (5e) $\left(5\text{e}\right)$ $\min_{\mathbf{x}}\ \sum_{i\in\mathcal{I}}f_{i}(\mathbf{x}_{i}),$ s. t. $\sum_{i\in\mathcal{I}}\mathbf{A}_{i}\mathbf{x}_{i}=\mathbf{b},$ $\mathbf{x}_{i}\in\mathcal{X}.$ $$(6\mathrm{a})$$ Equation (6) describes an optimization problem consisting of of a set of I = {1*, . . . , N*s} subproblems. The subproblems are coupled through the constraints (6b) and each one is described by individual variables xi and constraints Xi. Dual decomposition is based on the introduction of dual variables for the coupling constraints (6b) and on the solution of the resulting dual optimization problem. The idea was first introduced by Everett (1963) for problems involving shared limited resources. Problem 5 can also be rewritten as a general constraint-coupled optimization problem by defining the matrix A describing the connections between the different nodes. In the following only linear network topologies as depicted in Fig. 3 are considered. Note that the discussion in the remainder of this paper can be easily extended to different network topologies. ![4_image_0.png](4_image_0.png) $$(6\mathrm{b})$$ $$(\operatorname{foc})$$ $$\left(7\right)$$ Figure 3: Illustration of a linear network topology and the resulting consensus constraints. By defining the vector of stacked cluster centroids of each node i, $${\hat{\mathbf{m}}}_{i}:={\begin{bmatrix}\mathbf{m}_{i,1}\\ \vdots\\ \vdots\\ \mathbf{m}_{i,k}\end{bmatrix}}\in\mathbb{R}^{K\cdot n_{\mathbf{y}}},$$ K·ny, (7) the consensus constraints can be rewritten as $$\begin{array}{l}{{\hat{\mathbf{m}}_{1}-\hat{\mathbf{m}}_{2}=\mathbf{0},}}\\ {{\hat{\mathbf{m}}_{2}-\hat{\mathbf{m}}_{3}=\mathbf{0},}}\end{array}$$ mˆ 1 − mˆ 2 = 0, (8a) mˆ 2 − mˆ 3 = 0, (8b) ... $$\begin{array}{l}{(8\mathrm{a})}\\ {(8\mathrm{b})}\end{array}$$ $${\hat{\mathbf{m}}}_{N_{s}-1}-{\hat{\mathbf{m}}}_{N_{s}}=\mathbf{0}.$$ mˆ Ns−1 − mˆ Ns = 0. (8c) $$({\mathfrak{S c}})$$ · mˆ 1 mˆ 2 mˆ 3 ... mˆ Ns = 0, (9a) I −I 0 · · · 0 0 0 I −I · · · 0 0 .................. 0 0 0 · · · I −I | {z } =:A∈RK·ny·(Ns−1)×K·ny·Ns or in a more compact way $$\sum_{i\in\mathcal{I}}\mathbf{A}_{i}\mathbf{\hat{m}}_{i}=\mathbf{0}$$ Aimˆ i = 0 (10) $$\mathcal{L}(w_{ijk},d_{ijk},\mathbf{m}_{ik},\boldsymbol{\lambda})=\sum_{i\in\mathcal{I}}\left(\sum_{j\in\mathcal{I}}\sum_{k\in\mathcal{K}}d_{ijk}+\boldsymbol{\lambda}^{T}\mathbf{A}_{i}\hat{\mathbf{m}}_{i}\right).\tag{1}$$ $$(9\mathrm{a})$$ $$(10)$$ $$(11)$$ $$(12\mathrm{a})$$ (12b) $\left(12\text{c}\right)\\$ (12d) . $$(14\mathrm{a})$$ $$(14\mathrm{b})$$ dual function: $$d(\boldsymbol{\lambda}):=\min_{w_{ijk},\mathbf{m}_{ijk},\mathbf{m}_{ik}}\sum_{i\in\mathcal{I}}\mathcal{L}_{i}(w_{ijk},d_{ijk},\mathbf{m}_{ik},\boldsymbol{\lambda})$$ $$\text{s.t.}\sum_{k\in\mathcal{K}}w_{ijk}=1,\forall i\in\mathcal{I},j\in\mathcal{J}_{i},$$ $$d_{ijk}\geq\|\mathbf{y}_{j}-\mathbf{m}_{ik}\|_{2}^{2}-M_{j}\cdot(1-w_{ijk}),\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},$$ $$w_{ijk}\in\{0,1\},d_{ijk}\geq0,\ \forall i\in\mathcal{I},j\in\mathcal{J}_{i},k\in\mathcal{K},$$ $$\mathbf{m}_{ik}\in\mathbb{R}^{n_{\Psi}}\ \forall,i\in\mathcal{I},k\in\mathcal{K}.$$ $$\min_{w_{ijk},d_{ijk},\mathbf{m}_{ik}}\mathcal{L}_{i}(w_{ijk},d_{ijk},\mathbf{m}_{ik},\boldsymbol{\lambda})$$ s. t. $$\sum_{k\in\mathcal{K}}w_{ijk}=1,\ \forall j\in\mathcal{J}_{i},$$ $$d_{ijk}\geq\|y_{j}-\mathbf{m}_{ik}\|_{2}^{2}-M_{j}\cdot(1-w_{ijk}),\ \forall j\in\mathcal{J}_{i},k\in\mathcal{K},$$ $$w_{ijk}\in\{0,1\},d_{ijk}\geq0,\ \forall j\in\mathcal{J}_{i},k\in\mathcal{K},$$ $$\mathbf{m}_{ik}\in\mathbb{R}^{n_{\mathcal{V}}}\ \forall k\in\mathcal{K}.$$ (14c) $\begin{array}{l}\left(14\text{c}\right)\end{array}$ (14d) . Constraints 8 can subsequently be rewritten in matrix form with Ai ∈ R K·ny·(Ns−1)×K·ny . By introducing dual variables λ ∈ R nK·ny·(Ns−1) for the consensus constraints 10 the Lagrange function of problem (5) can be defined, The minimization of the Lagrange function for a fixed value of the dual variables λ gives the corresponding value of the dual function. The dual function has two important properties. First, the value of the dual function is always a lower bound on the solution of its corresponding primal problem, in this case, problem (5) (Nocedal & Wright, 2006). The problem of finding the dual variables that result in the best lower bound is referred to as the dual optimization problem, $$\operatorname*{max}_{\lambda}\,d(\lambda).$$ λd(λ). (13) The resulting dual problem can be solved in a distributed manner by solving the individual clustering problems for the current value of the dual variables, Second, the dual function (12) is always concave, regardless whether the primal problem is convex or not (Nocedal & Wright, 2006). Therefore the dual problem (13) is a convex optimization problem. However, the dual function is usually nondifferentiable due to a changing set of active individual constraints, which means that problem (13) is a nonsmooth optimization problem (Yfantis et al., 2023). The following subsections present some algorithms for the solution of the dual problem, namely the subgradient method, the bundle trust method, and the quasi-Newton dual ascent algorithm. ## 3.1 Subgradient Method Since the dual function is nondifferentiable a gradient cannot be defined for every value of the dual variables. Instead, a subgradient can be used. A vector ξ ∈ R nχ is a subgradient of a concave function ϕ(χ) at a point χ0 if $$\phi(\chi)\leq\phi(\chi_{0})+\xi^{T}(\chi-\chi)$$ T(χ − χ) (15) for all χ ∈ dom ϕ. The set of all subgradients at a point χ0 comprise the subdifferential ∂ϕ(χ0) Technically equation (15) defines a supergradient. Nevertheless, the term subgradient is commonly used in the literature for both convex and concave functions. A subgradient of the dual function for a given value of the dual variables λ (t)can be computed by evaluating the coupling constraints (10), $$\mathbf{g}(\boldsymbol{\lambda}^{(t)})=\sum_{i\in\mathcal{I}}\mathbf{A}_{i}\mathbf{\hat{m}}_{i}(\boldsymbol{\lambda}^{(t)})\in\partial d(\boldsymbol{\lambda}^{(t)}),$$ where mˆ i(λ (t)) are the cluster centroids obtained by solving the individual clustering problems (14). In the subgradient method the dual variables are updated in each iteration t along the direction of the subgradient (Shor, 2012) $$\mathbf{\lambda}^{(t+1)}=\mathbf{\lambda}^{(t)}+\alpha^{(t)}\mathbf{g}(\mathbf{\lambda}^{(t)}),$$ (t)), (17) where α (t)is a step size parameter. The step size parameter plays an important role in the convergence of the algorithm. If it is chosen too large the algorithm might diverge, while a too small choice might significantly slow down its convergence. A common choice to adapt the step size over the course of the iterations is $\left(15\right)$. $$(16)$$ $$(17)$$ $$\alpha^{(t)}=\alpha^{(0)}/{\sqrt{t}},$$ $$(18)$$ $$(19)$$ √t, (18) with an initial step size α (0) (Bertsekas, 1999). ## 3.2 Bundle Trust Method The subgradient method usually exhibits a slow rate of convergence, since only using information from the current subgradient may not provide an ascent direction for the algorithm. Bundle methods are generally more efficient by utilizing multiple subgradients from previous iterations (Mäkelä, 2002). To this end the data $$\mathcal{B}^{(t)}=\{(\boldsymbol{\lambda}^{(l)},\mathbf{g}(\boldsymbol{\lambda}^{(l)}),d(\boldsymbol{\lambda}^{(l)}))\in\mathbb{R}^{n\lambda}\times\mathbb{R}^{n\lambda}\times\mathbb{R}|\;l=t-\tau+1,\ldots,t\}$$ nλ × R| l = t − τ + 1*, . . . , t*} (19) is stored in each iteration, where nλ denotes the number of dual variables. B (t)is referred to as a bundle and it contains the dual variables, subgradients and values of the dual function from previous iterations. Since storing all information from all previous iterations might cause memory issues, only data from the previous τ iterations is used. The idea of bundle methods is to use the collected information to construct a piece wise linear over approximation of the nonsmooth dual function d(λ), a so-called cutting plane model, $$\hat{d}^{(t)}(\lambda):=\operatorname*{min}_{l\in\{t-\tau+1,\ldots,t\}}\{d(\lambda^{(l)})+{\bf g}^{T}(\lambda^{(l)})(\lambda-\lambda^{(l)})\}.$$ (l))}. (20) The approximation can be written in an equivalent form as $$\hat{d}^{(t)}(\lambda)=\operatorname*{min}_{l\in\{t-\tau+1,\ldots,t\}}\{d(\lambda^{(t)})+\mathbf{g}^{T}(\lambda^{(l)})(\lambda-\lambda^{(t)})-\beta^{(l,t)}\},$$ (l,t)}, (21) with the linearization error $$\beta^{(l,t)}=d(\mathbf{\lambda}^{(t)})-d(\mathbf{\lambda}^{(l)})-\mathbf{g}^{T}(\mathbf{\lambda}^{(l)})(\mathbf{\lambda}^{(t)}-\mathbf{\lambda}^{(l)}),\ \forall l\in\{t-\tau+1,\ldots,t\}.$$ (l)), ∀l ∈ {t − τ + 1*, . . . , t*}. (22) The update direction of the dual variables can then be computed by solving a direction finding problem $\max\limits_{\mathbf{s}\in\mathbb{R}^{n\lambda}}\hat{d}^{(t)}(\boldsymbol{\lambda}^{(t)}+\mathbf{s})$, $\mathbf{s}$. t. $\|\mathbf{s}\|_{2}^{2}\leq\alpha^{(t)}$, $$(20)$$ $$(21)$$ $$(22)$$ $$(23\mathrm{a})$$ $$(23\mathrm{b})$$ $\left(23\mathrm{c}\right)$. $$(24\mathrm{a})$$ $$(24\mathrm{b})$$ $$\mathbf{\tau}^{-1}$$ (24c) $$(25)$$ $\max_{v\in\mathbb{R}^{1},\,\mathbf{s}\in\mathbb{R}^{n}\times\lambda}\,v,$ s. t. $\|\mathbf{s}\|_{2}^{2}\leq\alpha^{(t)},$ $\mathbf{g}^{T}(\lambda^{(l)})\mathbf{s}-\beta^{(l,t)}\geq v,\ \forall l\in\{t-\tau+1,\ldots,t\}.$ $$\mathbf{\lambda}^{(t+1)}=\mathbf{\lambda}^{(t)}+\mathbf{s}^{(t)}.$$ $$(26)$$ (t). (25) $$d_{B}^{(t)}(\lambda)=\frac{1}{2}(\lambda-\lambda^{(t)})^{T}\mathbf{B}^{(t)}(\lambda-\lambda^{(t)})+\mathbf{g}^{T}(\lambda^{(k)})(\lambda-\lambda^{(t)})+d(\lambda^{(t)}).$$ $${\bf B}^{(t)}={\bf B}^{(t-1)}+\frac{{\bf y}^{(t)}{\bf y}^{(t),T}}{{\bf y}^{(t),T}{\bf s}^{(t)}}-\frac{{\bf B}^{(t-1)}{\bf s}^{(t)}{\bf s}^{(t),T}{\bf B}^{(t-1),T}}{{\bf s}^{(t),T}{\bf B}^{(t-1)}{\bf s}^{(t)}},$$ $$(27)$$ $$(28)$$ $$\mathbf{y}^{(t)}=\mathbf{g}(\boldsymbol{\lambda}^{(t)})-\mathbf{g}(\boldsymbol{\lambda}^{(t-1)})$$ $$(29)$$ (t−1)) (29) $$d_{B}^{(t)}(\mathbf{\lambda}^{(t+1)})\leq d(\mathbf{\lambda}^{(l)})+\mathbf{g}^{T}(\mathbf{\lambda}^{(l)})(\mathbf{\lambda}^{(t+1)}-\mathbf{\lambda}^{(l)}),\;\forall l\in\{t-\tau+1,\ldots,t\}.$$ $$\mathcal{BC}^{(t)}=\{\boldsymbol{\lambda}\in\mathbb{R}^{m_{0}}\,|\,d_{B}^{(t)}(\boldsymbol{\lambda})\leq d(\boldsymbol{\lambda}^{(t)})+\mathbf{g}^{T}(\boldsymbol{\lambda}^{(t)})(\boldsymbol{\lambda}-\boldsymbol{\lambda}^{(t)}),\,\forall t\in\{t-\tau+1,\ldots,t\}\}.$$ where constraint (23b) represents a trust region. Therefore, this variant of the bundle method is referred to as bundle trust method (BTM). Other variants include proximal bundle methods, where the trust region is replaced by a regularization term in the objective function (Bagirov et al., 2014). Problem (23) is still a nonsmooth optimization problem and can be transformed into a smooth quadratic direction finding problem by using an epigraph formulation, After computing a direction the dual variables are updated according to Bundle methods are widely used in machine learning, as nonsmoothnes is encountered in many training problems involving regularization terms (Le et al., 2007). Bundle methods can also be used to solve the clustering problem (2) (Karmitsa et al., 2017). However, note that in this paper the BTM algorithm is used to solve the nonsmooth dual problem (13). ## 3.3 Quasi-Newton Dual Ascent Since the dual function is always concave it can be locally approximated by a quadratic function. Yfantis & Ruskowski (2022) and Yfantis et al. (2023) recently proposed the quasi-Newton dual ascent (QNDA) algorithm that approximates the dual function by a quadratic function, This follows the idea of Newton methods, where the gradient and Hessian of the function are used within the approximation. However, due to the nonsmoothness of the dual function, the gradient and Hessian are not defined for each value of the dual variable. Instead, the gradient is replaced in eq. (26) by the subgradient and the Hessian is approximated by the matrix B(t). The approximated Hessian can be updated in each iteration using a Broyden-Fletcher-Goldfarb-Shanno (BFGS) update, where $$\mathbf{s}^{(t)}:=\lambda^{(t)}-\lambda^{(t-1)}$$ (t−1) (28) is the variation of the dual variables and is the variation of the subgradients. The approximated dual function dB(λ) is differentiable, while the actual dual function is nonsmooth. This can lead to significant approximation errors and poor update directions. This issue can be addressed by utilizing the same information as in the BTM algorithm. However, instead of using the bundle to construct an over approximator of the dual function, it is used to further constrain the update of the dual variables, Constraints (30) are derived from the definition of the subgradient (15). A violation of these constraints would indicate that the updated dual variables λ (t+1) are outside the range of validity of the approximated dual function. These constraints are referred to as bundle cuts and they can be summarized as $$(30)$$ $$(31)$$ 8 In the QNDA algorithm the dual variables are updated in each iteration by solving the optimization problem $\boldsymbol{\lambda}^{(t+1)}=\underset{\boldsymbol{\lambda}}{\operatorname{argmax}}\ d_{B}^{(t)}(\boldsymbol{\lambda}),$ s.t. $\|\boldsymbol{\lambda}-\boldsymbol{\lambda}^{(t)}\|_{2}^{2}\leq\alpha^{(t)},$ $\boldsymbol{\lambda}\in\mathcal{B}\mathcal{C}^{(t)}.$ $$(32\mathrm{d})$$ $$(33)$$ To avoid too aggressive update steps the same trust region (32b) as in the BTM algorithm is used. ## 3.4 Primal Heuristics The following sections provide some additional heuristics related to the primal optimization problem (5), namely an averaging heuristic used to obtain feasible primal solutions, and the addition of symmetry breaking constraints to the clustering problem. ## 3.4.1 Averaging Heuristic The K-means clustering problem involves integrality constraints and is therefore nonconvex. While the (optimal) value of the dual function 12 provides a lower bound on the optimal value of the primal problem 5, feasibility of the primal problem is not guaranteed upon convergence of a dual decomposition-based algorithm, i.e., the consensus constraints may not be satisfied. Nevertheless, in the case of K-means clustering it is straightforward to compute a feasible primal solution using an averaging step. In each iteration t of a dual decomposition-based algorithm the coordinator communicates the dual variables λ (t)to the nodes. The nodes in turn solve their individual clustering problems and communicate their computed cluster centroids mˆ i(λ (t)) to the coordinator. Based on this response the coordinator can compute the average of the primal variables, i.e., the average cluster centroids, $$\overline{{{\bf m}}}_{k}(\mathbf{\lambda}^{(t)})=\frac{1}{N_{s}}\sum_{i\in\mathcal{I}}{\bf m}_{ik}(\mathbf{\lambda}^{(t)})\tag{1}$$ $$(34)$$ which are then communicated back to the nodes. Using the mean cluster centroids the nodes can compute their resulting primal objective value $$z_{i}(\boldsymbol{\lambda}^{(t)})=\sum_{j\in\mathcal{J}}\min_{k\in\mathcal{K}}\|\mathbf{y}_{j}-\overline{\mathbf{m}}_{k}(\boldsymbol{\lambda}^{(t)})\|_{2}^{2}.\tag{1}$$ The primal objective value can be used to compute the relative duality gap in each iteration, $$\mathrm{rel.~DG}=100\cdot\left(1-\frac{d(\mathbf{\lambda}^{(t)})}{\sum_{i\in\mathcal{I}}z_{i}(\mathbf{\lambda}^{(t)})}\right).$$ $$(35)$$ Since the value of the dual function provides a lower bound on the optimal primal objective value the relative duality gap can be used to assess the distance of a found solution to the global optimum. The entire communication process between the coordinator and the nodes is illustrated in Fig. 4. Note that the average cluster centroids are only used to compute the duality gap. They do not influence the update of the dual variables. ## 3.4.2 Symmetry Breaking Constraints The clustering problem 4 is highly symmetric, i.e., it contains solutions with the same objective values. This is due to the fact that the index assigned to a cluster does not influence the objective function. Fig. 5 illustrates the situation of two symmetric solutions. This symmetry can lead to problems for the averaging heuristic presented in the previous section, as the computed cluster centroids of a single node can switch from one iteration to the next. For instance, while some points are assigned to cluster k in iteration t, they ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) (a) The coordinator sends the current dual variables to the nodes. (b) The nodes compute their cluster centroids and send them to the coordinator. ![9_image_3.png](9_image_3.png) ![9_image_2.png](9_image_2.png) (c) The coordinator computes the average cluster centroids and sends them to the nodes. (d) The nodes compute their objectives based on the received average centroids. Figure 4: Communication process between the coordinator and the nodes in iteration t. ![9_image_4.png](9_image_4.png) Figure 5: Example of symmetric clustering solutions. In the two cases the data points are assigned to different clusters without affecting the objective function. could be assigned to cluster k ′in iteration t + 1 by switching the centroids of clusters k and k ′ without affecting the objective. In order to prevent this behavior symmetry breaking constraints are added to the optimization problems of the nodes. In the first iteration one of the nodes acts as the reference nodes, providing reference centroids mref k . In the subsequent iterations the quadratic constraint $$\|{\bf m}_{i k}-{\overline{{{\bf m}}}}_{k}^{\mathrm{ref}}\|_{2}^{2}\leq\|{\bf m}_{i k^{\prime}}-{\overline{{{\bf m}}}}_{k}^{\mathrm{ref}}\|_{2}^{2},\forall k,k^{\prime}\in{\mathcal{K}},$$ , ∀k, k′ ∈ K, (36) is added to each node i. This ensures that cluster k of each node i will be the one closest to the reference centroid mref k . The choice of the node which provides the reference centroid can be performed arbitrarily, as it does not affect the optimization of the other nodes. Furthermore, the added constraint also does not affect the optimal objective value while rendering all symmetric solutions, except for one, infeasible. $\left(36\right)^{\frac{1}{2}}$ ## 4 Numerical Analysis Of Distributed Clustering Problems The dual decomposition-based distributed clustering approach was evaluated on a set of benchmark problems of varying size. The data for each benchmark problem was generated randomly. First, initial cluster centroids m0k were generated, with [m0k ]l ∈ Uc(−1, 1), l = 1*, . . . , n*y. Then, for each cluster k five random data points where added within a radius of 0.5 from the generated centroid. The parameters of the benchmark problems were varied as follows: Number of nodes: $N_s\in\{2,3,4\}$, Number of lines is $\sim\{2,2\}$. $$\stackrel{\mathrm{(~)}}{\in}\{2,3,4\},$$ Number of dimensions: ny ∈ {2, 3, 4}, Number of clusters: K ∈ {3, 4}. Five benchmark problems were generated for each combination of nodes, dimensions and clusters, resulting in a total of 90 benchmark problems. A benchmark problem is characterized by its number of nodes, dimension of the data and number of clusters. For instance, problem 3N2D4K5 is the 5th benchmark problem comprised of 3 nodes with 2-dimensional data sorted into 4 clusters. The benchmark problems were solved using the subgradient method, the bundle trust method and the quasi-Newton dual ascent algorithm. The use of ADMM was omitted for several reasons. First, in each communication round a feasible primal solution is obtained through the averaging heuristic (cf. Section 3.4.1). This primal solution does not correspond to the current dual variables. Due to the nonconvexity of the underlying MIP problem no guarantee can be made that the consensus constraints will be satisfied at the dual optimum. This in turn means that the regularization term in ADMM might not vanish, which would result in different objective values of the Lagrangian and the augmented Lagrangian, leading to an overestimation of the dual value and a subsequent underestimation of the duality gap, Second, the regularization of ADMM introduces a bias towards the mean cluster centroids. Note that the averaging heuristic does not affect the iterations of the other distributed optimization algorithms. It merely serves to compute a feasible primal solution, i.e., an upper objective bound, in each iteration. Introducing a bias towards the mean centroids in the solution of the clustering problems of the nodes would result in a stagnation of the algorithm. For badly chosen regularization parameters all nodes would converge towards the initial mean centroids, which is not the case in general for the other algorithms. The QADA algorithm could also be used to solve the clustering problem. However, the numerical tests showed that the BTM and QNDA algorithms are already efficient enough to converge within the sampling phase of QADA. Its inclusion in the results was therefore omitted. | Value | Description | | Algorithms | |---------|---------------|------------------------------------------|--------------| | λ (0) | 0 | initial dual variables | All | | α (0) | 0.5 | initial step size/trust region parameter | All | | tmax | 150 | maximum number of iterations | All | | ϵp | 10−2 | primal residual convergence tolerance | All | | ϵDG | 0.25 % | relative duality gap tolerance | All | | ϵb | 1 | bundle cuts threshold | QNDA | | τ | 50 | allowed age of data points | BTM, QNDA | | B(0) | −I | initial approximated Hessian | QNDA | Table 1: Parameter settings of the distributed optimization algorithms for the clustering benchmark problems. The initial step size (SG)/ trust region (BTM, QNDA) parameter was set to α (0) = 0.5 and varied according to α (t) = α (0)/ √t. (37) The bundle cuts for QNDA were used in every iteration, i.e., ϵb = 1 and τ = 50 points were used to construct the bundle in BTM and the bundle cuts in QNDA respectively. All algorithms were initialized with λ (0) = 0 and the initial approximated Hessian of the QNDA algorithm was set to the negative identity matrix. The $\left(37\right)$ . algorithms were terminated either when the Euclidean norm of the primal residual $$\|{\bf w}_{p}\|_{2}=\left\|\sum_{i\in{\cal I}}{\bf A}_{i}\widehat{\bf m}_{i}\right\|_{2},\tag{38}$$ i.e., the violation of the consensus constraints lied below a threshold of ϵp = 10−2 or when the relative duality gap 35 reached a value of ϵDG = 0.25 %. The used parameters for the different algorithms are summarized in Tab. 1. The MIQCP clustering problems of all nodes were solved using the commercial solver Gurobi and the total computation time was obtained through eq. ??, with Tcomm = 800 ms. The results for the clustering benchmarks are summarized in Tab. 2. Table 2: Summary of the results for the distributed optimization of the clustering benchmark problems, t: mean number of iterations until termination, rel. DG: mean relative duality gap upon termination (in %), Tcomp: mean computation time (in s). | Algorithm | t | rel. DG | Tcomp | |-------------|--------|-----------|---------| | SG | 136.75 | 2.27 | 996.28 | | BTM | 57.44 | 1.86 | 515.77 | | QNDA | 54.48 | 1.81 | 483.22 | Out of the examined algorithms, QNDA shows the best performance in terms of the required number of iterations and computation time as well as in terms of the achieved relative duality gap. The BTM algorithm shows a similar performance in terms of the number of iterations and the achieved duality gap. However, in the case of distributed clustering each iteration is costly due to the underlying MIQCP problems. Therefore, a slight performance increase in the number of iterations results in a more substantial performance increase in terms of computation times. More detailed results for the clustering benchmarks are summarized in Tab. 3 in the appendix. ![11_image_0.png](11_image_0.png) Figure 6: Evolution of the relative duality gap for benchmark problem 2N2D4K3. Fig. 6 shows the evolution of the relative duality gap for benchmark problem 2N2D4K3. The subgradient method converges rather slowly. In comparison, the BTM and QNDA algorithms exhibit a faster rate of convergence. Between these two algorithms, BTM exhibits an oscillatory behavior before converging. In contrast, the QNDA algorithm does not exhibit oscillations and therefore converges earlier. Additionally, it should be noted that the QNDA algorithm achieves a relative duality gap of 0 %, i.e., it converges to a proven global optimum. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) (d) Node 2, Iteration 4 Figure 7: Exemplary clusters in different iterations of the QNDA algorithm for benchmark problem 2N2D4K3. Fig. 7 provides some further illustrations of the results. Fig. 7a and Fig. 7b show the results of the clustering in the first iteration, i.e., the individual global optima. Fig. 7c and Fig. 7d depict the solutions upon convergence of the QNDA algorithm. It can be seen that each node computes the same cluster centroids corresponding to the globally optimal solution with respect to the entire data set, but not to the individual data sets. It is therefore possible to compute a global model locally in each node while only accessing local data. ## 5 Comparison To The Central Solution As shown in the previous section, the solution of the MIQCP clustering problems is computationally expensive. This is due to the weak integer relaxation of problem 2, which means that the solution of the relaxed problem within the branch-and-bound algorithm is far away from the integer solution. This results in slow moving relative integrality gaps and to slow convergence of the solution algorithm. While the main motivation of the distributed clustering approach is the training of a global model without exchange of local data, it can also be used to efficiently solve larger clustering problems. Fig. 8 depicts the evolution of the relative duality gap of the QNDA algorithm as well as the evolution of the relative integrality gap of Gurobi for the complete data set of benchmark problem 4N4D4K3. The clustering problems of the individual nodes were solved in a sequential manner in the case of QNDA, also using Gurobi. While the relative gap of the central solution improves very slowly, the QNDA algorithm quickly converges to a solution close to the global ![13_image_0.png](13_image_0.png) Figure 8: Evolution of the relative duality gap of QNDA compared to the relative integrality gap of the central solution using Gurobi for benchmark problem 4N4D4K3. optimum. Note, that both relative gaps prove a worst-case distance to the global optimum. Hence, decomposing a large clustering problem into smaller subproblems and coordinating the solutions via a distributed optimization algorithm can offer significant performance improvements compared to a central solution. ## 6 Conclusion This paper demonstrated how dual decomposition-based distributed optimization can be applied to the solution of clustering problems. The approach ensures privacy, i.e., enables federated learning, as each node only has access to its local data. A global model can still be obtained by coordinating the solutions of the individual clustering problems. Numerical tests on a large set of benchmark problems demonstrated that the QNDA algorithm outperforms the subgradient method and the BTM algorithm. Furthermore, the distributed optimization approach exhibited superior performance compared to a central solution approach. In the future the developed algorithms can also be applied to other federated learning problems, like the distributed training of support vector machines. ## References D. Aloise, P. Hansen, and L. Liberti. An improved column generation algorithm for minimum sum-of-squares clustering. *Mathematical Programming*, 131:195–220, 2012. R.S. Antunes, C. André da Costa, A. Küderle, I.A. Yari, and B. Eskofier. Federated learning for healthcare: Systematic review and architecture proposal. ACM Transactions on Intelligent Systems and Technology (TIST), 13(4):1–23, 2022. A. Bagirov, N. Karmitsa, and M. Mäkelä. *Introduction to Nonsmooth Optimization: Theory, Practice and* Software. Springer, 2014. A.M Bagirov and J. Yearwood. A new nonsmooth optimization algorithm for minimum sum-of-squares clustering problems. *European Journal of Operational Research*, 170(2):578–596, 2006. D. P. Bertsekas. *Nonlinear programming*. Athena Scientific, 1999. M.A.P. Chamikara, P. Bertok, I. Khalil, D. Liu, and S. Camtepe. Privacy preserving distributed machine learning with federated learning. *Computer Communications*, 171:112–125, 2021. N. Dhanachandra, K. Manglem, and Y.J. Chanu. Image segmentation using k-means clustering algorithm and subtractive clustering algorithm. *Procedia Computer Science*, 54:764–771, 2015. H. Everett. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Operations Research, (11 (3)):399–417, 1963. P.A. Forero, A. Cano, and G.B Giannakis. Consensus-based distributed support vector machines. *Journal* of Machine Learning Research, 11(5), 2010. P.A. Forero, A. Cano, and G.B. Giannakis. Distributed clustering using wireless sensor networks. *IEEE* Journal of Selected Topics in Signal Processing, 5(4):707–724, 2011. C. Gambella, B. Ghaddar, and J. Naoum-Sawaya. Optimization problems for machine learning: A survey. European Journal of Operational Research, 290(3):807–828, 2021. L. Georgopoulos and M. Hasler. Distributed machine learning in networks by consensus. *Neurocomputing*, 124:2–12, 2014. V. Hegiste, T. Legler, and M. Ruskowski. Application of federated machine learning in manufacturing. In 2022 International Conference on Industry 4.0 Technology (I4Tech), pp. 1–8. IEEE, 2022. J.C. Jiang, B. Kantarci, S. Oktug, and T. Soyata. Federated learning in smart city sensing: Challenges and opportunities. *Sensors*, 20(21):6230, 2020. T. Kansal, S. Bahuguna, V. Singh, and T. Choudhury. Customer segmentation using k-means clustering. In International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), pp. 135–139. IEEE, 2018. N. Karmitsa, A.M. Bagirov, and S. Taheri. New diagonal bundle method for clustering problems in large data sets. *European Journal of Operational Research*, 263(2):367–379, 2017. J. Konečn`y, B. McMahan, D. Ramage, and P. Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. *arXiv preprint arXiv:1610.02527*, 2016. V. Kulkarni, M. Kulkarni, and A. Pant. Survey of personalization techniques for federated learning. In 4th World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), pp. 794–797. IEEE, 2020. Q. Le, A. Smola, and S. Vishwanathan. Bundle methods for machine learning. *Advances in Neural Information Processing Systems*, 20, 2007. L. Li, Y. Fan, M. Tse, and K.-Y. Lin. A review of applications in federated learning. *Computers & Industrial* Engineering, 149:106854, 2020. W.Y.B. Lim, N.C. Luong, D.T. Hoang, Y. Jiao, Y.-C. Liang, Q. Yang, D. Niyato, and C. Miao. Federated learning in mobile edge networks: A comprehensive survey. *IEEE Communications Surveys & Tutorials*, 22(3):2031–2063, 2020. J. Liu, J. Huang, Y. Zhou, X. Li, S. Ji, H. Xiong, and D. Dou. From distributed machine learning to federated learning: A survey. *Knowledge and Information Systems*, 64(4):885–917, 2022. M. Mäkelä. Survey of bundle methods for nonsmooth optimization. *Optimization Methods and Software*, 17 (1):1–29, 2002. ISSN 1055-6788. doi: 10.1080/10556780290027828. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In *Artificial Intelligence and Statistics*, pp. 1273–1282. PMLR, 2017. A. Nedić. Distributed gradient methods for convex machine learning problems in networks: Distributed optimization. *IEEE Signal Processing Magazine*, 37(3):92–101, 2020. J. Nocedal and S. Wright. *Numerical optimization*. Springer Science & Business Media, 2006. D. Peteiro-Barral and B. Guijarro-Berdiñas. A survey of methods for distributed machine learning. Progress in Artificial Intelligence, 2:1–11, 2013. K. Rahimi-Adli, P.D. Schiermoch, B. Beisheim, S. Wenzel, and S. Engell. A model identification approach for the evaluation of plant efficiency. In *Computer Aided Chemical Engineering*, volume 46, pp. 913–918. Elsevier, 2019. N.Z. Shor. *Minimization methods for non-differentiable functions*, volume 3. Springer Science & Business Media, 2012. A.Z. Tan, H. Yu, L. Cui, and Q. Yang. Towards personalized federated learning. *IEEE Transactions on* Neural Networks and Learning Systems, 2022. K.I. Tsianos, S. Lawlor, and M.G. Rabbat. Consensus-based distributed optimization: Practical issues and applications in large-scale machine learning. In *50th Annual Allerton Conference on Communication,* Control, and Computing, pp. 1543–1550. IEEE, 2012. J. Verbraeken, M. Wolting, J. Katzy, J. Kloppenburg, T. Verbelen, and J.S. Rellermeyer. A survey on distributed machine learning. *ACM Computing Surveys*, 53(2):1–33, 2020. V. Yfantis and M. Ruskowski. A hierarchical dual decomposition-based distributed optimization algorithm combining quasi-Newton steps and bundle methods. In *30th Mediterranean Conference on Control and* Automation (MED), pp. 31–36. IEEE, 2022. V. Yfantis, S. Wenzel, A. Wagner, M. Ruskowski, and S. Engell. Hierarchical distributed optimization of constraint-coupled convex and mixed-integer programs using approximations of the dual function. *EURO* Journal on Computational Optimization, 11:100058, 2023. H. Yuan, M. Zaheer, and S. Reddi. Federated composite optimization. In International Conference on Machine Learning, pp. 12253–12266. PMLR, 2021. ## A Results For The Clustering Benchmark Problems Table 3: Results for the distributed optimization of the clustering benchmark problems, t: mean number of performed iterations, rel. DG: mean relative duality gap (in %), Tcomp: mean computation time (in s). | SG | | BTM | | | | | |------------|--------|------------------------|---------|-------|---------|---------| | Clustering | t | rel. DG | Tcomp | t | rel. DG | Tcomp | | Mean | 136.75 | 2.27 | 996.28 | 57.44 | 1.86 | 515.77 | | 2N2D3K | 126.0 | 1.94 | 166.08 | 68.0 | 1.84 | 95.01 | | 2N2D4K | 113.2 | 0.89 | 431.02 | 64.4 | 0.8 | 348.56 | | 2N3D3K | 120.0 | 1.8 | 223.69 | 71.6 | 1.6 | 167.72 | | 2N3D4K | 115.6 | 0.27 | 782.18 | 13.6 | 0.1 | 93.92 | | 2N4D3K | 91.6 | 0.31 | 184.21 | 36.2 | 0.16 | 108.0 | | 2N4D4K | 90.4 | 0.25 | 965.26 | 8.6 | 0.08 | 93.7 | | 3N2D3K | 138.4 | 7.01 | 404.59 | 123.2 | 6.14 | 424.47 | | 3N2D4K | 150.0 | 4.7 | 879.48 | 62.8 | 3.99 | 751.33 | | 3N3D3K | 150.0 | 2.33 | 301.63 | 67.0 | 1.76 | 160.05 | | 3N3D4K | 150.0 | 0.82 | 1469.97 | 66.6 | 0.35 | 906.26 | | 3N4D3K | 150.0 | 2.07 | 354.48 | 37.2 | 1.63 | 110.82 | | 3N4D4K | 150.0 | 1.06 | 3295.09 | 37.8 | 0.56 | 820.42 | | 4N2D3K | 150.0 | 5.01 | 311.78 | 103.2 | 3.66 | 262.33 | | 4N2D4K | 150.0 | 7.58 | 1319.14 | 93.8 | 5.71 | 1346.59 | | 4N3D3K | 150.0 | 1.32 | 317.69 | 6.6 | 0.15 | 16.75 | | | | Continued on next page | | | | | | SG | | BTM | | | | | |------------|-------|---------|---------|------|---------|--------| | Clustering | t | rel. DG | Tcomp | t | rel. DG | Tcomp | | Mean | 54.48 | 1.81 | 483.22 | | | | | 4N3D4K | 150.0 | 1.55 | 2786.47 | 65.8 | 0.53 | 2046.2 | | 4N4D3K | 150.0 | 1.57 | 441.69 | 35.0 | 0.42 | 122.26 | | 4N4D4K | 150.0 | 1.7 | 2593.41 | 7.2 | 0.14 | 118.24 | | QNDA | | | | | | | | Clustering | t | rel. DG | Tcomp | | | | | Mean | 54.48 | 1.81 | 483.22 | | | | | 2N2D3K | 63.2 | 1.82 | 92.57 | | | | | 2N2D4K | 62.2 | 0.73 | 345.59 | | | | | 2N3D3K | 62.2 | 1.58 | 150.11 | | | | | 2N3D4K | 5.0 | 0.06 | 34.76 | | | | | 2N4D3K | 32.8 | 0.12 | 97.89 | | | | | 2N4D4K | 4.8 | 0.08 | 47.19 | | | | | 3N2D3K | 121.0 | 6.21 | 412.26 | | | | | 3N2D4K | 64.2 | 3.98 | 747.3 | | | | | 3N3D3K | 64.0 | 1.71 | 151.79 | | | | | 3N3D4K | 63.8 | 0.28 | 858.76 | | | | | 3N4D3K | 35.0 | 1.36 | 111.44 | | | | | 3N4D4K | 37.2 | 0.46 | 731.33 | | | | | 4N2D3K | 94.6 | 3.61 | 249.28 | | | | | 4N2D4K | 93.8 | 5.68 | 1281.33 | | | | | 4N3D3K | 9.4 | 0.17 | 22.76 | | | | | 4N3D4K | 66.0 | 0.49 | 1901.69 | | | | | 4N4D3K | 37.4 | 0.41 | 129.74 | | | | | 4N4D4K | 9.8 | 0.17 | 178.24 | | | |
# Dyg2Vec: Efficient Representation Learning For Dynamic Graphs Mohammad Ali Alomrani∗ mohammad.ali.alomrani@huawei.com Huawei Noah's Ark Lab Mahdi Biparva∗ mahdi.biparva@huawei.com Huawei Noah's Ark Lab Yingxue Zhang *yingxue.zhang@huawei.com* Huawei Noah's Ark Lab Mark Coates coates@ece.mcgill.ca McGill University Reviewed on OpenReview: *https: // openreview. net/ forum? id= YRKS2J0x36* ## Abstract Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns. However, previous works often rely on complex memory modules or inefficient random walk methods to construct temporal representations. To address these limitations, we present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings. Moreover, we propose a jointembedding architecture using non-contrastive SSL to learn rich temporal embeddings without labels. Experimental results on 7 benchmark datasets indicate that on average, our model outperforms SoTA baselines on the future link prediction task by 4.23% for the transductive setting and 3.30% for the inductive setting while only requiring 5-10x less training/inference time. Lastly, different aspects of the proposed framework are investigated through experimental analysis and ablation studies. The code is publicly available at https://github.com/huawei-noah/noah-research/tree/master/graph_atlas. ## 1 Introduction Continuous-time dynamic graphs (Kazemi et al., 2020) are graphs in which each edge has a continuous timestamp and can be naturally found in many real-world applications such as social networks and finance. Recently, dynamic graph encoders (Rossi et al., 2020; Wang et al., 2021b; Jin et al., 2022; Luo & Li, 2022) have emerged as promising representation learning approaches that are able to extract temporal patterns from an ever-evolving dynamic graph in order to make accurate future predictions. However, such models have several shortcomings. First, they heavily rely on chronological training and/or complex memory modules to construct predictions (Kumar et al., 2019; Xu et al., 2020; Rossi et al., 2020; Wang et al., 2021b). Consequently, encoding any dynamic graph requires sequentially iterating through all edges, which is intractable for large graphs due to the high computational overhead. Second, the encoding modules either use inefficient message-passing procedures (Xu et al., 2020) that enforce temporal causality, or expensive random walk-based algorithms (Wang et al., 2021b; Jin et al., 2022) with heuristic feature encoding strategies that are engineered for edge-level tasks only. Finally, as opposed to other temporal domains (Tong et al., 2022; Eldele et al., 2021), most works on dynamic graphs have focused on pushing downstream task performance rather than learning general pre-trained models. ∗equal contribution Self-Supervised Representation Learning (SSL) has shown promise in achieving competitive performance for different data modalities on multiple predictive tasks (Liu et al., 2021). Given a large corpus of unlabelled data, SSL postulates that unsupervised pre-training is sufficient to learn robust representations that are predictive for downstream tasks with minimal fine-tuning. Contrastive SSL methods, despite their early success, rely heavily on negative samples, extensive data augmentation, and large batch sizes (Jing et al., 2022; Garrido et al., 2023). Non-contrastive methods address these shortcomings, incorporating information theoretic principles through architectural innovations or regularization methods (Balestriero & LeCun, 2022). The success of such SSL methods on sequential data (Tong et al., 2022; Eldele et al., 2021; Patrick et al., 2021) suggests that one can learn rich temporal node embeddings from dynamic graphs without direct supervision. While there are some recent attempts at using SSL for dynamic graphs such as DDGCL (Tian et al., 2021) and DySubC (Jiang et al., 2021), they tend to require high memory and computation due to negative sampling and focus more on pushing downstream performance rather than learning rich general representations. In this work, we propose DyG2Vec, a novel efficient encoder-decoder model for continuous-time dynamic graphs that benefits from a window-based architecture that acts as a regularizer to avoid over-fitting. DyG2Vec is an efficient attention-based graph neural network that performs message-passing across structure and time to output task-agnostic node embeddings, without the need for expensive random-walk anonymyzation procedures (Wang et al., 2021b; Jin et al., 2022) or memory modules (Rossi et al., 2020; Souza et al., 2022). We equip DyG2Vec with the ability to perform non-contrastive SSL, which allows the model to learn rich representations without labels, if needed. Our results on 7 benchmark datasets indicate that on average, DyG2Vec outperforms the SoTA baseline CaW (Wang et al., 2021b) on the future link prediction task by 4.23% for the transductive setting and 3.30% for the inductive setting. In addition, DyG2Vec addresses the efficiency bottleneck often experienced with other dynamic graph encoding alternatives. It reduces the training/inference time by 5-10x compared to the state-of-the-art models, thereby providing superior model performance with a significantly reduced computational demand. This efficiency gain significantly enhances the model's scalability potential for large graphs. Our main contributions can be summarized as follows: - We propose an effective message-passing encoder that leverages temporal edge encoding to better capture temporal dependencies. - We eliminate the need for memory modules or expensive causal random-walk extraction methods through efficient window-based subgraph encoding, making it easier to extract temporal motifs. ## 2 Related Work We review the most relevant literature on dynamic graph and self-supervised representation learning. See Appendix A.6 for more tangentially related works. Representation learning for dynamic graphs: Early works on representation learning for continuoustime dynamic graphs typically divide the graph into snapshots that are encoded by a static GNN and then processed by an RNN module (Sankar et al., 2020; Pareja et al., 2020; Kazemi et al., 2020). Such methods fail to learn fine-grained temporal patterns at smaller timescales within each snapshot. Therefore, several RNN-based methods were introduced that sequentially update node embeddings as new edges arrive. JODIE (Kumar et al., 2019) employs two RNN modules to update the source and destination embeddings of an arriving edge. DyRep (Trivedi et al., 2019) adds a temporal attention layer to take into account multi-hop interactions when updating node embeddings. TGAT (Xu et al., 2020) includes an Attention-based MessagePassing (AMP) architecture to aggregate messages from a historical neighborhood. TGN (Rossi et al., 2020) alleviates the expensive neighborhood aggregation of TGAT by using an RNN memory module to encode the history of each node. CaW (Wang et al., 2021b) extracts temporal patterns through an expensive procedure that samples temporal random walks and encodes them with an LSTM. This procedure must be performed for every prediction. PINT (Souza et al., 2022) is a memory-based method that leverages injective message-passing and relative positional encodings to overcome the theoretical weakness of both memorybased methods (e.g., TGN) and walk-based methods (e.g., CaW). Jin et al. (Jin et al., 2022) adapt CaW to include spatio-temporal bias and exploitation-exploration trade-off sampling biases, employing differential equations (ODE) to effectively model the irregularly sampled temporal interactions of a node. NAT (Luo & Li, 2022) abandons the commonly used message-passing and walk-based paradigms and instead adopts dictionary-based learning by caching a fixed number of interactions for each node. Node representations are then built by aggregating temporal and structural features within the cache. Finally, GraphMixer (Cong et al., 2023b) uses a conceptually simple MLP-based link classifier that summarizes structural information from the latest temporal links and achieves surprisingly decent performance. Self-supervised representation learning: Multiple works explore learning visual representations without labels (Liu et al., 2021). The more recent contrastive methods generate random views of images through data augmentations, and then force representations of positive pairs to be similar while pushing apart representations of negative pairs (Chen et al., 2020a; He et al., 2020). With the goal of attaining hard negative samples, such methods typically use large batch sizes Chen et al. (2020a) or memory banks (He et al., 2020; Chen et al., 2020b). Non-contrastive methods such as BYOL (Grill et al., 2020) and VICReg (Bardes et al., 2022) eliminate the need for negative samples through various techniques such as regularization or architecture tricks that avoid representation collapse (Jing et al., 2022). Recently, several SSL methods have been adapted to pre-train GNNs (Xie et al., 2022). BGRL (Thakoor et al., 2022) adapts BYOL to graphs to eliminate the need for negative samples, which are often memory-heavy in the graph setting. In this work, we follow a principled approach for SSL pre-training based on VICReg (Balestriero & LeCun, 2022) compared to other methods such as BGRL that rely on architecture tricks and heuristics. Most adaptations of SSL for dynamic graphs have focused on improving downstream task performance via auxiliary losses rather than learning general pre-trained models. Previous works (Jiang et al., 2021) either use contrastive learning methods, which require high memory and computation due to negative sampling (Thakoor et al., 2022), or incorporate weak encoders (Tian et al., 2021), which leads to performance deterioration, particularly for large-scale graphs. Furthermore, readily adapting prior SSL methods to temporal domains is non-trivial as dynamic graphs can involve heavy distribution shifts. For example, new nodes arrive and others depart, and these arrival patterns occur at different timescales. As a result, there has been limited success in adapting SSL pre-training to dynamic graphs. Position of our work: DyG2Vec relies on efficient message-passing GNNs without requiring the computationally expensive temporal causality on subgraph sampling (Xu et al., 2020). *Our architecture does not use* complex memory-based architectures which require designing memory update schemes and can suffer from obsolete node memory for large batch sizes (Zhou et al., 2022). While random-walk-based works (Wang et al., 2021b; Jin et al., 2022) alleviate these issues with online feature construction through causal walks, such methods are orders of magnitude slower and difficult to parallelize on GPUs (Luo & Li, 2022). In contrast to prior works, our method neither maintains a cache or memory for each node nor requires the full history to make predictions. Instead, it operates on a fixed-size window of the past relations to generate node embeddings. Furthermore, we fall under the message-passing paradigm which can leverage GPU parallelism using cutting-edge frameworks (Fey & Lenssen, 2019). Last, we propose a joint-embedding architecture that is compatible with recent SSL methods. In our experiments, we show how this allows the model to learn temporal patterns even without direct training on downstream tasks. ## 3 Problem Formulation A Continuous-Time Dynamic Graph (CTDG) G = (V, E, X ) is a sequence of E = |E| interactions, where X = (XV, XE) is the set of input features containing the *node features* XV ∈ R N×D1and the *edge features* XE ∈ R E×D2. E = {e1, e2*, . . . , e*E} is the set of interactions. There are N = |V| nodes, and D1 and D2 are the dimensions of the node and edge feature vectors, respectively. An edge ei = (ui, vi, ti, mi) is an interaction between any two nodes ui, vi ∈ V, with ti ∈ R being a continuous timestamp, and mi ∈ XE an edge feature vector. For simplicity, we assume that the edges are undirected and ordered by time (i.e., ti ≤ ti+1). A temporal sub-graph Gi,j is defined as a set consisting of all the edges in the interval [ti, tj ], such that Eij = {ek | ti ≤ tk < tj}. Any two nodes can interact multiple times throughout the time horizon; therefore, G is a multi-graph. Our goal is to learn a model f that maps the input graph to a representation space. The model is a pre-trainable encoder-decoder architecture, f = (gθ, dγ). The encoder gθ maps a dynamic graph to node embeddings H ∈ R N×D1; the decoder dγ performs a task-specific prediction given the embeddings. The model is parameterized by the encoder/decoder parameters (*θ, γ*). More concretely, $${\cal H}=g_{\theta}({\mathcal G})\,,\qquad\quad z=d_{\gamma}({\cal H};\bar{e})\,,$$ $$(1)$$ H = gθ(G), z = dγ(H; ¯e), (1) where z ∈ R DYis the prediction of task-specific labels (e.g., edge prediction or source node classification labels) of the target (future) edge e¯. The node embeddings H must capture the temporal and structural dynamics of each node such that the future can be accurately predicted from the past, e.g., future edge prediction given past edges. The main distinction of this design is that, unlike previous dynamic graph models (Rossi et al., 2020; Xu et al., 2020; Wang et al., 2021b), the encoder must produce embeddings independent of the downstream task specifications. This special trait can allow the model to be compatible with the SSL paradigm where an encoder is pre-trained separately and then fine-tuned together with a task-specific decoder to predict the labels. To this end, we present a novel DyG2Vec framework, that can learn rich node embeddings at any timestamp t independent of the downstream task. DyG2Vec is formulated as a two-stage framework. In the first stage, we use a non-contrastive SSL method to learn the model f SSL = (gθ, dψ) over various sampled dynamic sub-graphs with self-supervision. dψ is an SSL decoder that is only used in the SSL pre-training stage. In the second stage, a task-specific decoder dγ is trained on top of the pre-trained encoder gθ to compute the outputs for the downstream tasks, e.g., future edge prediction or dynamic node classification (Xu et al., 2020; Wang et al., 2021b). We consider two example downstream tasks: future link prediction (FLP), and dynamic node classification (DNC). In each task, we make a prediction on a set of target (positive) edges E¯. For FLP, this is augmented by a set of negative edges. Each negative edge (uj , v′j , tj , mj ) differs from its corresponding positive edge only in the destination node, v ′ j ̸= vj , which is selected at random from all nodes. The FLP task is then binary classification for the test set of 2|E| ¯ edges. In the DNC task, a dynamic label is associated with each node that participates in an interaction. We are provided with {(uj , tj )}, i.e., the source node and interaction time. The goal is to predict the source node labels for the test interactions. *It is important to note that each* prediction must be made given only access to the past, i.e., edges before time tj . The performance metrics are detailed in Appendix A.3. ## 4 Methodology We now introduce our novel dynamic graph learning framework DyG2Vec. We first outline the encoder architecture. We then introduce the window-based downstream training approach. Finally, we present the SSL approach that can be *optionally* used to pre-train the dynamic graph encoder. ## 4.1 Dyg2Vec Encoding Model Our encoder combines a self-attention mechanism for message-passing with a learnable time-encoding module that provides relative time encoding. We also introduce a novel temporal edge encoding that efficiently captures the temporal structural relationship between nodes. The full architecture is outlined in Figure 1. For simplicity, we define the message passing on any input graph G; however, as shown in Figure 1, the input graph is restricted to be a window of the full graph. See Section 4.2 for details. Temporal Attention Embedding: Given any input dynamic graph G, the encoder gθ computes the embedding h L i ∈ R D1of node i through a series of L multi-head attention (MHA) layers (Vaswani et al., 2017) that aggregate messages from its L-hop neighborhood (Xu et al., 2020; Velickovic et al., 2018). ![4_image_0.png](4_image_0.png) Figure 1: Using DyG2Vec window framework to encode the target node u. Every slice of the dynamic graph G contains edges that arrived at the same continuous timestamp. The blue interval represents the history graph Gi−W,i that is encoded to make a prediction on the target edge (*u, v*). Note that both u and v share the same sampled history graph. For simplicity, we omit edge features mp from the attention encoder. Given a node embedding h l−1 iat layer l−1, we uniformly sample N 1-hop neighborhood interactions of node i, N (i) = {ep, . . . , ek*} ⊆ E*, where p, k ∈ {1, *|E|}*. The embedding h l i at layer l is calculated by: $\mathbf{h}_{i}^{l}=\mathbf{W}_{1}\mathbf{h}_{i}^{l-1}+\texttt{MHA}^{l}(\mathbf{q}^{l},\mathbf{K}^{l},\mathbf{V}^{l})$, $\mathbf{q}^{l}=\mathbf{h}_{i}^{l-1}$, $\mathbf{K}^{l}=\mathbf{V}^{l}=\left[\Phi_{p}^{l-1}(t_{p}),\ldots,\Phi_{k}^{l-1}(t_{k})\right]$. Here, W1 is a learnable mapping matrix, MHAl(·) is a multi-head dot-product attention layer, and Φ l−1 p(tp) represents the edge feature vector of edge ep = (up, vp, tp,mp) ∈ N (i) at time tp: $$\begin{array}{r l}{{\Phi_{p}^{l-1}(t_{p})=[\mathbf{h}_{u_{p}}^{l-1}\mid\mid\mathbf{f}_{p}(t_{p})\mid\mid\mathbf{m}_{p}],}}\\ {{\mathbf{f}_{p}(t_{p})}}&{{=\phi(\bar{t}_{i}-t_{p})+\Omega_{p}(t_{p})\,,}}\\ {{\bar{t}_{i}}}&{{=\operatorname*{max}\left\{t_{l}\mid e_{l}\in\mathcal{N}(i)\,\right\}\,,}}\end{array}$$ $$\left({\bar{\mathbf{b}}}\right)$$ $\left(7\right)$ (7) (a) . where || denotes concatenation and the **Time Encoding** module ϕ(t) = [cos ω1*t, . . . ,* cos ωD1 t] is a learnable Time2Vec module that helps the model be aware of the relative timespan between a sampled interaction and the most recent interaction of node i in the input graph. Ωp(.) ∈ R D1is a temporal edge encoding function, described in more detail below. In contrast to TGAT's recursive message passing procedure (Xu et al., 2020), the message passing in our encoder is 'flat': at every iteration, the same set of node embeddings is used to propagate messages to neighbors. That is, we do not restrict messages to flow towards the source node only but rather treat the *sampled temporal graph as undirected*. This allows the encoder to better capture the multi-hop common neighbors between the target nodes, which are vital to learning the temporal motifs and predicting future interactions. Moreover, unlike CaW (Wang et al., 2021b), we do not restrict the neighbor sampling to go backwards in time (i.e. causal sampling) as we found this to be too restrictive and degrade the overall performance on downstream tasks (See Section 6). Lastly, note that the relative time encoding is with respect to the latest timestamp, t¯i, incident to the source and not with respect to the target edge timestamp; hence, allowing the encoding step to be independent of the prediction (decoding) step and making the generated embeddings task-agnostic. Temporal Edge Encoding: Dynamic graphs often follow evolutionary patterns that reflect how nodes interact over time (Kovanen et al., 2011). For example, in social networks, two people who share many friends are likely to interact in the future. Therefore, we incorporate two simple yet effective temporal encoding methods that provide inductive biases to capture common structural and temporal evolutionary ![5_image_0.png](5_image_0.png) $$({\boldsymbol{\delta}})$$ Figure 2: The joint embedding architecture for the non-contrastive SSL Framework. Each slice of the input dynamic graph contains edges arriving at the same continuous timestamp. B is a batch of intervals of size W. Gˆ is a batch of the corresponding input graphs of each interval. behaviour of dynamic graphs. The temporal edge encoding function, parameterized by W ∈ R D1×3, is then: $$\Omega_{p}(t_{p})={\bf W}_{2}[z_{p}(t_{p})||c_{p}(t_{p})]\,,$$ Ωp(tp) = W2[zp(tp)||cp(tp)] , (8) where we incorporate (i) *Temporal Degree Centrality* zp(tp) ∈ R 2: the concatenated current degrees of nodes up and vp at time tp; and (ii) *Common Neighbors* cp(tp) ∈ R: the number of common 1-hop neighbors between nodes up and vp at time tp. By using the degree centrality as an edge feature, the model is able to learn any bias towards more frequent interactions with high-degree nodes. The number of common neighbors helps capture temporal motifs, and it is known to often have a strong positive correlation with the likelihood of a future interaction (Yao et al., 2016). ## 4.2 Dyg2Vec Downstream Training In the downstream training stage, the DyG2Vec model f = (gθ, dγ) consists of the encoder gθ and a taskspecific decoder dγ which is trained using a similar window-based training strategy. The model is trained to make predictions depending on the downstream tasks (e.g., link prediction or node classification). It is important to note that all tasks considered for dynamic graphs involve predicting a (future) target edge given access to the past interactions. However, rather than having access to all past edges, we limit the model to a fixed window of W interactions. That is, to predict a target edge e¯ = (uj , vj , tj , mj ), we sample an input (history) graph Gj−W,j from the time interval {tj−W , tj}, centered at uj and vj , and make a prediction as follows: H = gθ(Gj−W,j ) is the matrix of node embeddings returned by the encoder, and z = dγ(H; ¯e) is the prediction output of the decoder. The model parameters are optimized by training with a loss function LD(z, o), where LD is defined depending on the downstream task and o contains task-specific labels (See Section 3). It is important to note that, unlike previous methods (Xu et al., 2020; Wang et al., 2021b), the embeddings of uj and vj *are generated through message passing on the same sampled graph*. Consequently, the encoder can better recognize similar historical patterns between the target nodes without the need for costly motif-correlation through counting that is performed in walk-based methods (Wang et al., 2021b; Jin et al., 2022). The window-based training strategy has several major advantages. First, the window acts as a regularizer by providing a natural inductive bias towards recent edges, which are often more predictive of the immediate future. Second, it avoids costly time-based neighborhood sampling (Wang et al., 2021b). Third, relying on a fixed window-size for message-passing allows for constant memory and computational complexity, which is well-suited to the practical *online streaming* data scenario. ## 4.3 Self-Supervised Pre-Training For Dynamic Graphs Previous work (Paranjape et al., 2017) has shown that temporal motifs develop at different timescales throughout a dynamic graph. For example, question-answer patterns on StackOverflow typically take 30 min to develop while messaging patterns on social media platforms can take less than 20 minutes to form. Inspired by such observations, we outline a window-based pre-training strategy where the encoder can be pre-trained on a sliding window of the dynamic graph in an effort to learn the fine-grained temporal patterns throughout the time horizon. The full pre-training procedure is displayed in Figure 2. Given the full input dynamic graph G0,E, a set of intervals I is generated by dividing the entire time-span {t0, tE} into M = ⌈E/S⌉ − 1 intervals with stride S and interval length W (See Appendix A.3 for details). Let B ⊂ I be a mini-batch (randomly sampled subset) of intervals. Given B, the *temporal subgraph sampler* m (G, B; W) constructs the mini-batch of input graphs: Gˆ = {Gi,j | [*i, j*) ∈ B}. In principle, Gi,j ∈ Gˆ is an input graph to the SSL pre-training. The parameter W controls the size of the window while S controls the stride between intervals. In practice, we found that setting S = 200 and W = 32K gives a reasonable trade-off to learn both the long-range and short-range patterns within the dynamic graph. We formulate a joint-embedding architecture (Bromley et al., 1993) for DyG2Vec in which two views of a mini-batch of sub-graphs are generated through random transformations. The transformations are randomly sampled from a distribution defined by a distortion pipeline. The encoder maps the views to node embeddings which are processed by the predictor to generate node representations. We minimize an SSL objective (Eq. 9, described below) to optimize the model parameters end-to-end in the pre-training stage. Views: The temporal distortion module generates two views of the input graphs Gˆ ′= d ′(Gˆ) and Gˆ ′′= d ′′ (Gˆ) where the transformations d ′and d ′′ are sampled from a distribution T over a pre-defined set of candidate graph transformations. In this work, we use edge dropout and edge feature masking (Thakoor et al., 2022) in the transformation pipeline. See Appendix A.3 for more details. Embedding: The encoding model gθ is an Attention-based Message-Passing (AMP) neural network presented in Sec. 4.1. It produces node embeddings H ′and H ′′ for the views Gˆ ′and Gˆ ′′ of the input graphs Gi,j . We elaborate on the details of the encoder in Sec. 4.1. Prediction: The decoding head dψ for our self-supervised learning design consists of a MLP predictor pϕ that outputs the final representations Z ′and Z ′′ , where Z = pϕ(H). SSL Objective: In order to learn useful representations, we minimize a regularization-based SSL loss function (Bardes et al., 2022): $${\mathcal{L}}^{S S L}=\lambda s(\mathbf{Z}^{'},\mathbf{Z}^{''})+\mu[v(\mathbf{Z}^{'})+v(\mathbf{Z}^{''})]+\nu[c(\mathbf{Z}^{'})+c(\mathbf{Z}^{''})]\,.$$ ′′ )] . (9) In this loss function, the weights λ, µ, and ν control the emphasis placed on each of three regularization terms. The *invariance* term s encourages representations of the two views to be similar. The *variance* term v is included to prevent the well-known collapse problem (Jing et al., 2022). The covariance term c promotes maximization of the information content of the representations. See Appendix A.1 for details. Note that this pre-training stage is only performed when needed i.e. target labels are scarce. Following the pre-training stage, we replace the SSL decoder with a task-specific downstream decoder dγ that is trained on top of the *frozen* pre-trained encoder. ## 5 Experimental Evaluation 5.1 Experimental Setup Baselines: We compare DyG2Vec to five state-of-the-art baseline models: DyRep (Trivedi et al., 2019), JODIE (Kumar et al., 2019), TGAT (Xu et al., 2020), TGN (Rossi et al., 2020), CaW (Wang et al., 2021b), and NAT (Luo & Li, 2022). DyRep, JODIE, and TGN sequentially update node embeddings using an RNN. TGAT applies message passing via attention on a sampled temporal subgraph. CaW samples temporal random walks and learns temporal motifs by counting node occurrences in each walk. NAT builds temporal $$({\mathfrak{g}})$$ | Dataset | # Nodes | # Edges | # Unique Edges | Edge Features | Node Labels | Bipartite | % Repetitive Edges | |-----------------|-----------|-----------|------------------|-----------------|---------------|-------------|----------------------| | Reddit | 11,000 | 672,447 | 78,516 | ✓ | ✓ | ✓ | 54% | | Wikipedia | 9,227 | 157,474 | 18,257 | ✓ | ✓ | ✓ | 48% | | MOOC | 7,144 | 411,749 | 178,443 | ✓ | ✓ | ✓ | 53% | | LastFM | 1980 | 1,293,103 | 154,993 | ✓ | 68% | | | | UCI | 1899 | 59,835 | 13838 | ✓ | 62% | | | | Enron | 184 | 125,235 | 2215 | 92% | | | | | SocialEvolution | 74 | 2,099,519 | 2506 | 97% | | | | Table 1: Dynamic Graph Datasets. **% Repetitive Edges**: % of edges which appear more than once. | Setting | Model | Wikipedia | Reddit | MOOC | LastFM | Enron | UCI | SocialEvol. | |--------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------| | Transductive | JODIE | 0.956 ± 0.002 | 0.979 ± 0.001 | 0.797 ± 0.01 | 0.691 ± 0.010 | 0.785 ± 0.020 | 0.869 ± 0.010 | 0.847 ± 0.014 | | DyRep | 0.955 ± 0.004 | 0.981 ± 1e-4 | 0.840 ± 0.004 | 0.683 ± 0.033 | 0.795 ± 0.042 | 0.524 ± 0.076 | 0.885 ± 0.004 | | | TGAT | 0.968 ± 0.001 | 0.986 ± 3e-4 | 0.793 ± 0.006 | 0.633 ± 0.002 | 0.637 ± 0.002 | 0.835 ± 0.003 | 0.631 ± 0.001 | | | TGN | 0.986 ± 0.001 | 0.985 ± 0.001 | 0.911 ± 0.010 | 0.743 ± 0.030 | 0.866 ± 0.006 | 0.843 ± 0.090 | 0.966 ± 0.001 | | | CaW | 0.976 ± 0.007 | 0.988 ± 2e-4 | 0.940 ± 0.014 | 0.903 ± 1e-4 | 0.970 ± 0.001 | 0.939 ± 0.008 | 0.947 ± 1e-4 | | | NAT | 0.987 ± 0.001 | 0.991 ± 0.001 | 0.874 ± 0.004 | 0.859 ± 1e-4 | 0.924 ± 0.001 | 0.944 ± 0.002 | 0.944 ± 0.010 | | | DyG2Vec | 0.995 ± 0.003 | 0.996 ± 2e-4 | 0.980 ± 0.002 | 0.960 ± 1e-4 | 0.991 ± 0.001 | 0.988 ± 0.007 | 0.987 ± 2e-4 | | | Inductive | JODIE | 0.891 ± 0.014 | 0.865 ± 0.021 | 0.707 ± 0.029 | 0.865 ± 0.03 | 0.747 ± 0.041 | 0.753 ± 0.011 | 0.791 ± 0.031 | | DyRep | 0.890 ± 0.002 | 0.921 ± 0.003 | 0.723 ± 0.009 | 0.869 ± 0.015 | 0.666 ± 0.059 | 0.437 ± 0.021 | 0.904 ± 3e-4 | | | TGAT | 0.954 ± 0.001 | 0.979 ± 0.001 | 0.805 ± 0.006 | 0.644 ± 0.002 | 0.693 ± 0.004 | 0.820 ± 0.005 | 0.632 ± 0.005 | | | TGN | 0.974 ± 0.001 | 0.954 ± 0.002 | 0.855 ± 0.014 | 0.789 ± 0.050 | 0.746 ± 0.013 | 0.791 ± 0.057 | 0.904 ± 0.023 | | | CaW | 0.977 ± 0.006 | 0.984 ± 2e-4 | 0.933 ± 0.014 | 0.890 ± 0.001 | 0.962 ± 0.001 | 0.931 ± 0.002 | 0.950 ± 1e-4 | | | NAT | 0.986 ± 0.001 | 0.986 ± 0.002 | 0.832 ± 1e-4 | 0.878 ± 0.003 | 0.949 ± 0.010 | 0.926 ± 0.010 | 0.952 ± 0.006 | | | DyG2Vec | 0.992 ± 0.001 | 0.991 ± 0.002 | 0.938 ± 0.010 | 0.979 ± 0.006 | 0.987 ± 0.004 | 0.976 ± 0.002 | 0.978 ± 0.010 | | node representations using a cache that stores a limited set of historical interactions for each node. Appendix A.2.3 contains additional comparisons to the GraphMixer (Cong et al., 2023b) baseline. Table 2: Future link Prediction Performance in AP (Mean ± Std). **Bold** font and ul font represent firstand second-best performance respectively. DyG2Vec is trained end-to-end with no pre-training. Downstream Tasks: We evaluate all models on two temporal tasks: future link prediction (FLP), and dynamic node classification (DNC). In FLP, the goal is to predict the probability of future edges occurring given the source, destination, and timestamp. For each positive edge, we sample a negative edge that the model is trained to predict as negative. The DNC task involves predicting the label of the source node of a future interaction. Both tasks are trained using binary cross entropy loss. For FLP, we evaluate all models on the transductive and inductive settings. The latter is a more challenging setting where a model makes a prediction on unseen nodes. See Appendix A.3 for details. For the FLP task, we report the Average Precision (AP) metric. For the DNC task, we report the area under the curve (AUC) metric due to the prevailing issue of class imbalance in dynamic graphs. *By default,* DyG2Vec is trained end-to-end with no pre-training, just like the baselines. The pre-training stage is only performed to test the potential benefits of SSL (e.g. Fig. 4). Additional results on the benefits of SSL can be found in Appendix A.2. Datasets: We use 7 real-world datasets: Wikipedia, Reddit, MOOC, and LastFM (Kumar et al., 2019); SocialEvolution, Enron, and UCI (Wang et al., 2021b). These datasets span a wide range in terms of number of nodes and interactions, time range, and repetition ratio. The dataset statistics are presented in Table 1. We perform the same 70%-15%-15% chronological split for all datasets as in Wang et al. (2021b). The datasets are split differently under two settings: Transductive and Inductive. Under the transductive setting, a dataset is split normally by time, i.e., the model is trained on the first 70% of links and tested on the rest. In the inductive setting, we strive to test the model's prediction performance on edges with unseen nodes. Therefore, following Wang et al. (2021b), we randomly assign 10% of the nodes to the validation and test sets and remove any interactions involving them in the training set. Additionally, to ensure an inductive setting, we remove any interactions not involving these nodes from the test set. Training Protocols and Hyperparameters: In Table 2, DyG2Vec is initialized with random parameters and trained end-to-end on the downstream tasks and compared to all supervised baselines. In the semisupervised evaluation setting (left figure of Fig. 4), the decoder is trained on top of the frozen (SSL pretrained) encoder on a random portion of the dataset (i.e., a fraction of the target edges). The DyG2Vec encoder performs L = 3 layers of message passing. We sample N = 64 temporal neighbors at the first hop and 1 neighbor at the second and third hops. All neighbors are sampled uniformly at random. We found that uniform sampling within a window works better than only looking at the latest N neighbors of a node (Xu et al., 2020; Rossi et al., 2020). Other hyperparameters are discussed in Appendix A.3. For the DNC task, following prior work (Rossi et al., 2020), the decoder is trained on top of the frozen encoder that is pre-trained on the future link prediction task. ## 5.2 Experimental Results ![8_image_0.png](8_image_0.png) Figure 3: Transductive FLP Performance (Test AP) vs Inference runtime (s) on 3 datasets. Inference time represents the time it takes to predict the whole test set. The test sets are approximately of size 400K, 600K, and 100K edges respectively. Future Link Prediction: We report the test AP scores for future link prediction in Table 2. Our model outperforms all sequential and message-passing baselines on 7/7 of the datasets in the transductive setting. The gap is particularly large on the UCI and LastFM datasets, where DyG2Vec outperforms the second-best methods (NAT and CaW) by over 4% and 6% respectively. Interestingly, while SocialEvol. is the largest dataset with ∼ 2M edges, our model is able to achieve SoTA performance while only using the last 8000 edges to predict any future edge. This further cements the findings in Xu et al. (2020) that capturing recent interactions may be more important for certain tasks. Our window-based framework offers a good trade-off between capturing recent interactions and recurrent patterns which both have a major influence on future interactions. In the inductive settings, most methods drop in performance due to the difficult nature of predicting over unseen nodes. However, DyG2Vec still outperforms the best methods significantly (e.g., 8% gap for LastFM) which demonstrates its ability to learn temporal motifs rather than overfitting to node identities. Table 3: Transductive Dynamic Node Classification Performance in AUC (Mean ± Std). Avg. Rank reports the mean rank of a method across all datasets. | Model | Wikipedia | Reddit | MOOC | Avg. Rank ↓ | |---------|---------------|---------------|---------------|---------------| | TGAT | 0.800 ± 0.010 | 0.664 ± 0.009 | 0.673 ± 0.006 | 3.0 | | JODIE | 0.843 ± 0.003 | 0.566 ± 0.016 | 0.672 ± 0.002 | 3.7 | | Dyrep | 0.873 ± 0.002 | 0.633 ± 0.008 | 0.661 ± 0.012 | 3.3 | | TGN | 0.828 ± 0.004 | 0.655 ± 0.009 | 0.674 ± 0.007 | 2.3 | | DyG2Vec | 0.824 ± 0.050 | 0.649 ± 0.020 | 0.785 ± 0.005 | 2.6 | Dynamic Node classification: We evaluate DyG2Vec on 3 datasets for node classification where the labels indicate whether a user will be banned from editing/posting after an interaction. This task is challenging both due to its dynamic nature (i.e., nodes can change labels) and the high class imbalance (only 217 of 157K interactions result in a ban). We measure performance using the AUC metric to deal with the class imbalance. Table 3 shows that DyG2Vec outperforms all baselines on the MOOC dataset significantly by over 10%. For Wikipedia and Reddit, DyG2Vec is within 2 − 5% of the best performance. Overall, none of the methods display the best performance consistently across all 3 datasets. We believe this is due to the high class imbalance problem which makes it a better fit for anomaly detection methods (Ranshous et al., 2015). Training/Inference Speed: Relying on a fixed window of history to produce task-agnostic node embeddings gives DyG2Vec a significant advantage in speed and memory. Figures 3 and 6 show the performance and runtime per epoch of all methods on the three large datasets: LastFM, SocialEvolution and MOOC. DyG2Vec is many orders of magnitude faster than CaW due to the latter's expensive random walk sampling procedure. RNN-based methods such as TGN have a good runtime on LastFM and MOOC; however, they are significantly slower on SocialEvol. which has a small number of nodes (74) but a large number of interactions (∼ 2M). This suggests that memory-based methods are slower for settings where a node's memory is updated frequently. Furthermore, while TGAT has a similar AMP encoder, DyG2Vec improves the efficiency and performance significantly. This reveals the significance of the window-based mechanism and the encoder architecture. Overall, DyG2Vec presents the best trade-off between speed and performance. A more detailed complexity analysis is included in Appendix A.2. Semi-supervised Learning on Dynamic Node Classification: The DNC task is challenging due to its highly imbalanced labels. In Figure 4, we show that SSL is an effective pre-training strategy for the DNC task, particularly in the low-label data regime where each model is trained on a portion of the target edges. This highlights the potential of SSL to effectively use unlabeled data for representation learning and prevent representations from overfitting to such imbalanced classification tasks. ![9_image_0.png](9_image_0.png) Figure 4: First figure plots Semi-Supervised Learning results on Dynamic Node Classification. For each setting, DyG2Vec was trained on a varying random portion of the training data. Second figure plots the Average Attention Weight versus Relative Timespan for DyG2Vec trained with W = 64K. The relative timespan is normalized with the maximum timespan across all interactions. A higher timespan means a farther interaction. ## 5.3 Ablation And Sensitivity Analysis We perform a detailed study on different instances of our framework with 3 datasets. All ablation results are reported in Figure 5. Window Size: We observe that a large window size works best for most datasets. However, we see a minor drop in performance (∼ 1%) for MOOC due to the inherently different recurring temporal patterns. As observed by Xu et al. (2020), recent and/or recurrent interactions are often the most predictive of future interactions. Therefore, datasets with long range dependencies favor larger window sizes to capture the recurrent patterns while some datasets benefit from an increased bias towards recent interactions. Our window-based framework coupled with uniform neighbor sampling strikes a balance between the two. This ![10_image_0.png](10_image_0.png) Figure 5: Ablation, sensitivity, and attention analysis on 3 datasets for the FLP transductive task. shows that the fixed window size also contributes to the performance as it helps limit irrelevant information that is not highly predictive of future interactions. Nonetheless, as we show in Section 6, the attention-based encoder coupled with the time encoding function is able to learn the innate temporal dependencies regardless of the window size. Number of Layers: Increasing the number of embedding layers improves performance for most datasets, and this effect is more noticeable for some (e.g., MOOC). This suggests that these datasets contain higher order temporal correlations among the nodes that must be learned using long-range message passing. Overall, the results show that one can choose to sacrifice some performance to further improve the speed of DyG2Vec by decreasing the window size and the number of layers. Temporal Edge Features: The results show a substantial decrease in performance for MOOC when temporal edge features are removed (i.e., 1-4% drop). This indicates that such temporal edge features provide useful multi-hop information about the evolution of the dynamic graph (Yao et al., 2016). ## 6 Analysis Effect of Encoder Architecture: As noted earlier, one of the main advantages of DyG2Vec over the baselines is its transformer-based architecture. Combined with the simple time-encoding module, DyG2Vec can selectively aggregate temporal and structural information across neighbors that are relevant to the target prediction. Moreover, unlike the baselines, message passing for the source and target nodes is performed on a shared undirected sampled subgraph to easily detect common neighbors across time which are vital to predicting future interactions. In Table 4, we show the effect of replacing our encoder architecture with the MPNN model (Gilmer et al., 2017) which performs simple message passing with our temporal edge encodings. The results show significant degradation (4-9%), displaying the benefits of transformers in attending to relevant history. Table 4: Benefits of the encoder architecture in the FLP task. In the last row, we replace the transformer architecture (Sec. 4.1) with the simple MPNN architecture (Gilmer et al., 2017). All experiments are performed for one trial only in the transductive setting. | Model | UCI | LastFM | Enron | MOOC | |-------------------|-------|----------|---------|--------| | DyG2Vec (Default) | 0.981 | 0.960 | 0.990 | 0.988 | | DyG2Vec with MPNN | 0.942 | 0.921 | 0.907 | 0.916 | Attention Weight Analysis: A particular advantage of an attention-based architecture is that attention weights allow for easy interpretability of the learned temporal dependencies. In Figure 4 (right plot), we plot the attention weights αij with respect to the relative timespan. That is, for each test target edge ei = (ui, vi, ti), we plot the attention weights to the one-hop neighbors of both target nodes, {αuivp |vp ∈ N (ui)*} ∪ {*αvivk |vk ∈ N (vi)}, versus the relative timespans, {t¯ui − tp*} ∪ {*t¯vi − tk}. Here, t¯ui represents the maximum timestamp incident to node ui. Therefore, the attention weights indicate how much the model is attending to old and recent interactions. Unsurprisingly, higher importance is given to the most recent interactions. However, both Wikipedia and UCI display higher weights for larger timespans compared to MOOC, indicating that they have long-range dependencies. This explains why performance monotonically increases for Wikipedia as W increases while slightly degrades for the MOOC dataset, as seen in Figure 5. Neighbor Sampling and Temporal Edge Encoding: In Table 5, we study the effect of neighbor sampling and temporal edge encoding on the downstream FLP task. Although Fig. 5 shows the importance of multihop sampling for discovering high-order temporal motifs, we found that DyG2Vec gives more importance to one-hop neighbors for most datasets. In fact, sampling 64 one hop neighbors gives SoTA performance compared to sampling 20 neighbors per hop (see 1 st and 5 th rows in Table 5). This suggests that the 1hop recent interactions within a window are the most representative interactions for future prediction tasks. Moreover, unlike prior random-walk and AMP methods (Xu et al., 2020; Jin et al., 2022; Wang et al., 2021b), which argue for causal sampling (i.e. sampling backwards in time) to discover evolving temporal motifs, we have found this form of sampling to have little effect on the performance (See 2 nd row). Lastly, removing edge encodings almost always hurts performance. In fact, performing causal sampling with 20 neighbors at each hop, as done in TGAT, and removing temporal edge encodings causes up to 8% drop in performance (See last row). Additional analysis on the effect of temporal edge encodings on baselines can be found in Appendix A.2. Table 5: Effect of neighbor sampling and temporal edge encoding on performance. The first row is the default setting where we sample 64,1,1 neighbors at the first, second and third hops respectively. | Temporal Edge Encoding | Causal Sampling | Num Neighbors | Wikipedia | MOOC | UCI | |--------------------------|-------------------|-----------------|-------------|--------|-------| | ✓ | 64,1,1 | 0.995 | 0.982 | 0.988 | | | ✓ | ✓ | 64,1,1 | 0.993 | 0.984 | 0.986 | | | 64,1,1 | 0.990 | 0.957 | 0.980 | | | | ✓ | 64,1,1 | 0.989 | 0.965 | 0.976 | | ✓ | 20,20,20 | 0.992 | 0.949 | 0.981 | | | ✓ | ✓ | 20,20,20 | 0.984 | 0.955 | 0.971 | | | 20,20,20 | 0.990 | 0.927 | 0.958 | | | | ✓ | 20,20,20 | 0.982 | 0.906 | 0.946 | ## 7 Conclusion We introduce DyG2Vec, a novel window-based encoder-decoder model for dynamic graphs. It is an efficient attention-based message-passing model that utilizes multi-head attention modules to encode node embeddings across time. Furthermore, we present a joint-embedding architecture for dynamic graphs in which two views of temporal sub-graphs are encoded to minimize a non-contrastive loss function. Our window-based architecture allows for efficient message-passing and robust prediction abilities. We aim to further explore ways to improve the capacity of the dynamic graph models to learn long-range dependencies. Additionally, it seems promising to investigate other SSL paradigms aligned with temporal graphs. ## References Randall Balestriero and Yann LeCun. Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods. In *Proc. Adv. Neural Inf. Proc. Systems*, 2022. Adrien Bardes, Jean Ponce, and Yann LeCun. VICReg: Variance-invariance-covariance regularization for self-supervised learning. In *Proc. Int. Conf. Learning Representations (ICLR)*, 2022. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In *Proc. Adv. Neural Inf. Proc. Systems*, 1993. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *Proc. Int. Conf. on Machine Learning*, 2020a. Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*, 2020b. Weilin Cong, Yanhong Wu, Yuandong Tian, Mengting Gu, Yinglong Xia, Mehrdad Mahdavi, and Chuncheng Jason Chen. Dyformer : A scalable dynamic graph transformer with provable benefits on generalization ability. *SIAM International Conference on Data Mining*, 2023a. Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, and Mehrdad Mahdavi. Do we really need complicated model architectures for temporal networks? In *Proc. Int. Conf.* Learning Representations (ICLR), 2023b. Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, and et al. Time-series representation learning via temporal and contextual contrasting. In *Proc. Int. Joint Conf. on Artificial* Intelligence, 2021. Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, and Yann LeCun. On the duality between contrastive and non-contrastive self-supervised learning. In Proc. Int. Conf. Learning Representations (ICLR), 2023. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *Proc. Int. Conf. Machine Learning*, 2017. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, and et al. Bootstrap your own latent - a new approach to self-supervised learning. In *Proc. Adv. Neural Inf. Proc. Systems*, 2020. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *Proc. IEEE Conf. on Computer Vision and Pattern Recognition)*, 2020. Linpu Jiang, Ke-Jia Chen, and Jingqiang Chen. Self-supervised dynamic graph representation learning via temporal subgraph contrast. *arXiv preprint arXiv:2112.08733*, 2021. Yizhu Jiao, Yun Xiong, Jiawei Zhang, Yao Zhang, Tianqi Zhang, and Yangyong Zhu. Sub-graph contrast for scalable self-supervised graph representation learning. *IEEE International Conference on Data Mining* (ICDM), 2020. Ming Jin, Yuan-Fang Li, and Shirui Pan. Neural temporal walks: Motif-aware representation learning on continuous-time dynamic graphs. In *Proc. Adv. Neural Inf. Proc. Systemss*, 2022. Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. In *Proc. Int. Conf. Learning Representations (ICLR)*, 2022. Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. Time2vec: Learning a vector representation of time. *arXiv preprint arXiv:1907.05321*, 2019. Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation learning for dynamic graphs: A survey. *Journal of Machine Learning Research*, 2020. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *Proc. Int. Conf. Learning* Representations (ICLR), 2015. Lauri Kovanen, Márton Karsai, Kimmo Kaski, János Kertész, and Jari Saramäki. Temporal motifs in time-dependent networks. *Journal of Statistical Mechanics: Theory and Experiment*, 2011. Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In *Proc. Int. Conf. on Knowledge Discovery & Data Mining*, 2019. Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. *IEEE Transactions on Knowledge and Data Engineering*, 2021. Yuhong Luo and Pan Li. Neighborhood-aware scalable temporal network representation learning. In The First Learning on Graphs Conference, 2022. Ashwin Paranjape, Austin R. Benson, and Jure Leskovec. Motifs in temporal networks. In *ACM Int. Conf.* on Web Search and Data Mining, 2017. Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, and et al. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. *Proc. of the AAAI Conference on Artificial Intelligence*, 2020. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, and et al. Pytorch: An imperative style, high-performance deep learning library. In *Proc. Adv. Neural Inf. Proc. Systems*. 2019. Mandela Patrick, Yuki M. Asano, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, and Andrea Vedaldi. Multi-modal self-supervision from generalized data transformations. In *Proc. Int. Conf.* on Computer Vision, 2021. Stephen Ranshous, Shitian Shen, Danai Koutra, Steve Harenberg, Christos Faloutsos, and Nagiza F. Samatova. Anomaly detection in dynamic networks: a survey. *WIREs Computational Statistics*, 7(3):223–247, 2015. Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In *ICML Workshop on Graph* Representation Learning, 2020. Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. Dysat: Deep neural representation learning on dynamic graphs via self-attention networks. In Proc. Int. Conf. on Web Search and Data Mining, 2020. A. H. Souza, D. Mesquita, S. Kaski, and V. Garg. Provably expressive temporal graph networks. In Proc. Adv. Neural Inf. Proc. Systems, 2022. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. In Proc. Int. Conf. Learning Representations (ICLR), 2022. Sheng Tian, Ruofan Wu, Leilei Shi, Liang Zhu, and Tao Xiong. Self-supervised representation learning on dynamic graphs. In *Proceedings of the 30th ACM International Conference on Information & Knowledge* Management, 2021. Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In *Proc. Adv. Neural Inf. Proc. Systems*, 2022. Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In *Proc. Int. Conf. Learning Representations (ICLR)*, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Proc. Adv. Neural Inf. Proc. Systems*, 2017. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *Proc. Int. Conf. Learning Representations (ICLR)*, 2018. Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, and Hongxia Yang. Tcl: Transformer-based dynamic graph modelling via contrastive learning. *arXiv* preprint arXiv:2105.07944, 2021a. Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure Leskovec, and Pan Li. Inductive representation learning in temporal networks via causal anonymous walks. In *Proc. Int. Conf. Learning Representations (ICLR)*, 2021b. Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. Self-supervised learning of graph neural networks: A unified review. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022. Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. *Proc. Int. Conf. Learning Representations (ICLR)*, 2020. Lin Yao, Luning Wang, Lv Pan, and Kai Yao. Link prediction based on common-neighbors for dynamic social network. *Procedia Computer Science*, 2016. Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, and George Karypis. Tgl: A general framework for temporal gnn training on billion-scale graphs. *Proc. VLDB Endow.*, 2022. ## A Appendix A.1 Preliminary: Vicreg We outline the details of the VICReg (Bardes et al., 2022) method used in our SSL pre-training stage. Given the representations of two random views of an object (e.g., image) generated through random distortions, the objective of non-contrastive SSL is two-fold. First, the output representation of one view should be maximally informative of the input representation of the view. Second, the representation of one view should be maximally predictable from the representation of the other view. These two aspects are formulated by VICReg (Bardes et al., 2022) where a combination of 3 loss terms (i.e. Variance, Covariance, Invariance) is minimized to learn useful representations while also avoiding the well-known problem of collapse (Jing et al., 2022). More concretely, let Z ′= [z ′ 1 , . . . , z ′ n ] and Z ′′ = [z ′′ 1 , . . . , z ′′ n ] be the batches composed of n representations of dimension d. Variance term: The variance regularization term v is the mean over the representation dimension of the hinge function on the standard deviation of the representations along the batch dimension: v(Z) = 1 D PD j=1 max(0, γ−S(Z:,j , ϵ)). Here Z:,j is the column j of matrix Z. S is the regularized standard deviation defined by S(z, ϵ) = pVar(z) + ϵ, γ is a constant value set to 1 in our experiments, and ϵ is a small scalar that helps to prevent numerical instability. This term avoids dimensional collapse by maximizing the volume of the distribution of the mapped views in all dimensions. In other words, it prevents the well-known trivial solution where the representations of the two views of a sample collapse to the same representation (Jing et al., 2022). Covariance term: The covariance regularization terms C decorrelates different dimensions of the representations and prevents them from encoding similar information. The covariance matrix of Z is C(Z) = 1n Pn i=1(zi −z¯)(zi −z¯) T where z¯ = 1 N Pi=1 nzi. The covariance regularization term c is then defined as the sum of the squared off-diagonal coefficients of the covariance matrix as follows c(Z) = 1d Pi̸=j [C(Z)]2 i,j where [C(Z)]2 i,j is the element at row i and column j of the matrix C(Z). Both the variance and covariance terms helps to maximize the information encoded by the model in the representation space. Invariance criterion: The invariance criterion s between Z ′and Z ′′ is defined as the mean squared Euclidean distance between the representation vectors in the two views s(Z ′, Z ′′ ) = 1n Pi ∥z ′ i − z ′′ i ∥ 2 2 . The invariance term encourages the parametric mapping to ensure that the views of an object remain close in the latent space. Finally, the SSL loss function L SSL over a batch of representations is a weighted average of the invariance, variance, and covariance terms: $${\mathcal{L}}^{S S L}=l({\mathbf{Z}}^{'},{\mathbf{Z}}^{'^{\prime}})=\lambda s({\mathbf{Z}}^{'},{\mathbf{Z}}^{'^{\prime}})+\mu[v({\mathbf{Z}}^{'})+v({\mathbf{Z}}^{''})]+\nu[c({\mathbf{Z}}^{'})+c({\mathbf{Z}}^{''})]$$ ′′ )] (10) In our experiments, we set λ = µ = 25 and ν = 1, following Bardes et al. (2022). ## A.2 Additional Results A.2.1 Effect Of Temporal Edge Encodings On Baselines Our proposed temporal edge encodings can theoretically improve the expressiveness of DyG2Vec (Souza et al., 2022). In Table 6, we investigate whether they can also improve the baselines' performance. That is, we add the temporal edge encodings outlined in Sec. 4.1 to some of the most powerful baselines, TGN (Rossi et al., 2020) and CaW (Wang et al., 2021b). Interestingly, we notice only minor improvements (1-2%) for some datasets and, in some cases, minor degradations. We hypothesize that this is due to the nature of baselines' encoder architectures. The use of RNNs or random walks to encoder history can make it difficult to effectively leverage the relevant temporal edge encodings across long time horizons. $$(10)$$ Table 6: Effect of temporal edge encodings on baselines. All experiments are performed for one trial only in the FLP transductive setting. Temporal edge encodings are simply added as edge features to the baselines. | Temporal Edge Encoding | Baseline | UCI | Enron | MOOC | |--------------------------|------------|-------|---------|--------| | × | TGN | 0.894 | 0.867 | 0.911 | | ✓ | TGN | 0.879 | 0.872 | 0.936 | | × | CaW | 0.930 | 0.970 | 0.937 | | ✓ | CaW | 0.944 | 0.967 | 0.946 | ![16_image_0.png](16_image_0.png) Figure 6: Transductive FLP performance (Test AP) vs Training Time (s) on 3 datasets. Training time represents the time it takes to train on the whole training set. ## A.2.2 Runtime And Computational Complexity The main runtime overhead lies in how each of the baselines processes the input graph to predict a target edge. CaW samples M L-hop random walks for each target edge. This is followed by an expensive set-based anonymization scheme. To achieve good performance, CaW can require relatively long walks (e.g., for Enron, L = 5). On the other hand, memory-based methods and TGAT sample a different L-hop subgraph for each target edge. DyG2Vec samples similar to TGAT but does so within a constant window size W and without enforcing temporal causality. Thus, assuming we use sparse operations in Pytorch Geometric (Fey & Lenssen, 2019) for message-passing, the encoding computational complexities are: DyG2Vec = O(LW); CaW = O(LMNs) and TGN and variants = O(LNs). Here, Ns represents the maximum number of sampled nodes in an L-hop subgraph. We can see that the main difference is the factor M. The factor Ns comes from the complexity of message passing at each hop (assuming sparse operations). Note that DyG2Vec is limited to O(W) nodes so it does not have this factor. | Model | MOOC | Enron | UCI | LastFM | |---------|--------|---------|-------|----------| | DyG2Vec | 93.1 | 96.6 | 95.4 | 93.0 | | DDGCL | 84.3 | 83.0 | 85.3 | 78.8 | Table 7: Downstream Freeze test AP Results (after pre-training). DDGCL pre-training and downstream training were run with default parameters described in the work. ## A.2.3 Comparison With Recent Dynamic Graph Models GraphMixer (Cong et al., 2023b) uses a conceptually simple MLP-based link classifier that summarizes structural and temporal information from the latest temporal links and achieves surprisingly decent performance. Table 8 compares DyG2Vec with GraphMixer on transductive future link prediction. All results were generated under the same evaluation protocol in CaW (Wang et al., 2021b), using the DyGlib library ∗. ∗https://github.com/yule-BUAA/DyGLib | Model | Wikipedia | Reddit | MOOC | LastFM | Enron | UCI | SocialEvol. | |------------|---------------|---------------|---------------|---------------|---------------|---------------|----------------| | GraphMixer | 0.974 ± 0.001 | 0.975 ± 0.001 | 0.835 ± 0.001 | 0.862 ± 0.003 | 0.824 ± 0.001 | 0.932 ± 0.006 | 0.935 ± 3e − 4 | | DyG2Vec | 0.992 ± 0.001 | 0.991 ± 0.002 | 0.938 ± 0.010 | 0.979 ± 0.006 | 0.987 ± 0.004 | 0.976 ± 0.002 | 0.978 ± 0.010 | Table 8: AP performance on Transduction future link prediction (Mean ± Std). ## A.2.4 Comparison To Other Dynamic Graph Ssl Methods As mentioned in Section 2, DDGCL Tian et al. (2021) proposed a contrastive SSL method for dynamic graphs that learns rich representations by contrasting node embeddings across time. Though experiments show improved performance on the future link prediction and dynamic node classification task, we believe the approach comes with several shortcomings that limit it's advantages in real world graphs. First, it is built on the TGAT Xu et al. (2020) encoder which, as seen in Table 2, is a weak encoder; particularly, for large datasets such as LastFM. Second, experiments for the FLP task are limited to the Reddit and Wikipedia datasets which are relatively easy. Lastly, the authors do not experiment under the standard settings in graph SSL literature such as the freeze and semi-supervised settings. Table 7 shows the results for downstream future link prediction under the freeze setting. The results show up to 10% gap compared to DyG2Vec, particularly for datasets where the TGAT encoder under-performs (e.g. Enron, UCI). ## A.2.5 Ssl For Future Link Prediction | Setting | UCI | Enron | MOOC | LastFM | |-------------|---------------|---------------|---------------|---------------| | Random-init | 0.865 ± 0.004 | 0.913 ± 0.007 | 0.863 ± 0.001 | 0.817 ± 0.002 | | Supervised | 0.988 ± 0.007 | 0.991 ± 0.001 | 0.980 ± 0.002 | 0.960 ± 1e-4 | | SSL-init | 0.954 ± 0.002 | 0.966 ± 0.001 | 0.931 ± 0.001 | 0.930 ± 2e-4 | Table 9: Evaluating the performance of SSL pre-training using linear probing. Results are reported in AP (Mean ± Std) on Transductive Future Link Prediction. Table 9 reports the transductive AP results for DyG2Vec under 3 different settings. Namely, we compare a random frozen encoder (Random-init) and an SSL pre-trained encoder (SSL-init) with the supervised baseline. Note that all variants use the DyG2Vec architecture. The difference lies in how the encoder is initialized and whether it is frozen during downstream training. SSL-init is initialized with a pre-trained encoder that is frozen during downstream training while "Supervised" is initialized randomly and is fully trained (encoder+decoder) on the downstream task. The results reveal that our SSL pre-training learns informative node embeddings that are almost on par with the fully supervised baseline. This supports the capability of the non-contrastive methods to learn generic representations across unlabelled large-scale dynamic graphs, which is in line with the findings for other data modalities (Bardes et al., 2022). The Random-init baseline is surprisingly good, as observed by recent works (Thakoor et al., 2022), but is outperformed by the SSL pre-trained encoder. ## A.2.6 Window-Based Pre-Training In Table 10, we show the importance of window-based pre-training to learn the fine-grained temporal motifs of dynamic graphs. The "Full-graph" SSL setting represents applying the SSL loss on the full dynamic graph at once for a total of 300 epochs. Note that this is similar to the pre-training strategy used on static graphs and is difficult to scale for large scale graphs that do not fit to memory. The window-based strategy outperforms the full-graph mode for most datasets, particularly for large graphs (e.g. MOOC and LastFM) where we observe up to a 10% gap. ## A.3 Implementation Details We train our model using the Pytorch framework (Paszke et al., 2019). The dynamic graph data and GNN encoder architecture are implemented using Pytorch Geometric (Fey & Lenssen, 2019). The ReLU Table 10: Effect of Window-based pre-training on Linear probing AP results on Transductive FLP. | SSL Setting | UCI | Enron | MOOC | LastFM | |---------------|-------|---------|--------|----------| | Window-based | 0.956 | 0.965 | 0.931 | 0.930 | | Full-graph | 0.954 | 0.966 | 0.912 | 0.838 | activation function is used for all models. The code and datasets are publicly available at https://github. com/huawei-noah/noah-research/tree/master/graph_atlas. Window-based framework: As mentioned in Section 4, during SSl pre-training, the full dynamic graph G0,E is divided into a set of intervals I that is generated by dividing the entire time-span into M = ⌈E/S⌉−1 intervals with stride S and interval length W: $$I=\left\{\left[\,\operatorname*{max}(0,j S-W),\,\operatorname*{min}(j S,E)\,\right)\mid j\in\{1,2,\ldots,M\}\right\}.$$ $$(11)$$ Here, W defines the number of edges in an interval and S defines the stride. Note that we include all intervals up to but not including [E − *W, E*) so that the target interval contains at least one edge. Decoder Architecture: Denote by t max the timestamp of the latest interaction, within the provided history, incident to node u. For future link prediction, to predict a target interaction (*u, v, t*), our decoder maps the sum of the two node embeddings of u and v and a time embedding of t−t max to an edge probability. Following Xu et al. (2020), the FLP decoder is a 2-layer MLP. For dynamic node classification, to predict the label of node u for interaction (*u, v, t*), the decoder maps the source node embedding and time embedding of t − t max to class probabilities. Following Xu et al. (2020), the DNC decoder is a 3-layer MLP with a dropout layer with p = 0.1. The time embedding is calculated using a trainable Time2Vec module (Kazemi et al., 2019). The time embedding allows the decoder to be time-aware; hence, possibly output different predictions for the same nodes/edges at different timestamps. For SSL pre-training, the predictor pϕ is a simple 2-layer MLP that maps node embeddings H to node representations Z. Distortion Pipeline: We use the common edge dropout and edge feature dropout distortions. Both distortions are applied with dropout probability pd = 0.3 which we have found to work best in a validation experiment exploring the values pd ∈ {0.1, 0.15, 0.2, 0.3}. The edge feature dropout is applied on the temporal edge encodings introduced in Section 4.1, i.e., zp(tp) and cp(tp). Hyper-parameters: We use a constant learning rate of 0.0001 for all datasets and tasks. DyG2Vec is trained for 100 epochs for both downstream and SSL pre-training. The model from the last epoch of pretraining is used for downstream training. For downstream evaluation, we pick the model with the best validation AP performance. Overall, we found that DyG2Vec converges within ∼ 50 epochs. For downstream training, We use a constant window size of 64K for all datasets except for MOOC, SocialEvolve, and Enron where we found a smaller window size of 8K works best. The batch size is set to 200 target edges. However, the model could be sped up by increasing batch size at the cost of higher memory. During SSL pre-training, we use a constant window size of 32K with stride 200. Following previous work (Rossi et al., 2020; Xu et al., 2020), all dynamic node classification training experiments are performed with L2-decay parameter λ = 0.00001 to alleviate over-fitting. ## A.4 Baselines Baselines: Following prior work (Rossi et al., 2020; Xu et al., 2020), all baselines are trained with a constant learning rate of 0.0001 using the Adam optimizer (Kingma & Ba, 2015) on batch-size 200 for a total of 50 epochs. The early stopping strategy is used to stop training if validation AP does not improve for 5 epochs. For JODIE (Kumar et al., 2019), DyRep (Trivedi et al., 2019), and TGN (Rossi et al., 2020), we use the general framework implemented by Rossi et al. (2020). The node memory dimension is set to 172. For the NAT baseline Luo & Li (2022), we utilize the results in the paper for the common datasets since the setup is the same. We generate results for the missing datasets with the default hyperparameters. See Tables 12 and 11 for details. For TGAT, we use the default hyperparameters of 2 layer neighbor sampling with 20 neighbors sampled at each hop. For the CaW method, we tune the time decay parameter α ∈ S where S =, and length of the walks m ∈ {2, 3, 4, 5} on the validation set. The number of heads for the walking-based attention is fixed to 8. | Dataset | Time Decay α | Walk Length m | |-----------------|----------------|-----------------| | Wikipedia | 4e-6 | 4 | | Reddit | 1e-8 | 3 | | MOOC | 1e-4 | 3 | | LastFM | 1e-6 | 3 | | UCI | 1e-5 | 2 | | Enron | 1e-6 | 5 | | SocialEvolution | 3e-5 | 3 | Table 11: Hyperparameters for CaW. ## A.5 Computing Infrastructure All experiments were done on a Ubuntu 20.4 server with 72 Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz cores and a RAM of size 755 Gb. We use a NVIDIA Tesla V100-PCIE-32GB GPU. ## A.6 Additional Related Work Self-supervised learning for dynamic graphs: Jiang et al. (2021) adapt a sub-graph contrastive learning method (Jiao et al., 2020) where a node representation is contrasted in both structure and time. That is, for each node in the graph, a GNN encoder is trained to contrast its real temporal subgraph to its fake temporal subgraph . This is done by constructing a positive sample, a structural negative sample and a temporal negative sample. The positive sample is a time-weighted subgraph representation. The margin triplet loss is proposed to maximize the mutual information with the positive sample while maximizing distance with the structural and temporal negative samples. Experiments on downstream link prediction task under the freeze setting show improvement over baselines. However, their approach comes with several shortcomings. First, initial node features are computed as one-hot encodings which makes the method not suitable for the inductive scenario (i.e. predicting on new nodes). Second, the use of contrastive learning method is known to result in high memory and computation due to negative sampling (Thakoor et al., 2022). This makes the method less desirable for large-scale graphs. Third, they do not include results on other downstream tasks (e.g. dynamic node classification). Lastly, they do not compare to the SoTA CaW method (Wang et al., 2021b). | Param | MOOC | LastFM | |---------------|--------|----------| | M1 | 32 | 32 | | M2 | 16 | 16 | | F | 4 | 4 | | Self Rep. Dim | 32 | 72 | Table 12: Hyperparameters for NAT baseline. We use the default settings outlined in the paper Luo & Li (2022) for all other hyperparameters. Tian et al. (2021) adapt the TGAT encoder with a self-supervised contrastive framework across time. That is, they propose an extension to the classic contrastive learning paradigm by contrasting two nearby temporal views of the same node using a time-dependent similarity metric. Moreover, a de-basied contrastive loss is utilized to correct the typical negative sampling bias in contrastive learning. Experiments on the finetune and mutli-task learning settings show that the simple TGAT encoder can be significantly improved on both future link prediction and dynamic node classification. Nonetheless, their approach comes with several shortcomings. First, it is built on the TGAT encoder which, as seen in Tables 2, is a weak encoder; particularly, for large datasets. Second, experiments for the FLP task are limited to the Reddit and Wikipedia datasets which are relatively easy. Lastly, the authors do not experiment under the standard settings in graph SSL literature such as the freeze and semi-supervised settings. Table 7 shows the results for downstream future link prediction under the freeze setting. The results show up to 10% gap compared to DyG2Vec, particularly for datasets where the TGAT encoder under-performs (e.g. Enron, UCI). Cong et al. (2023a) propose the dynamic graph transformer (DGT) which is a transformer-based graph encoder for *discrete-time dynamic graphs*. DGT is composed of two-tower networks that embed the temporal evolution and topological information of the input graph. Moreover, a temporal-union graph structure is proposed to efficiently summarize the temporal evolution into one graph. DGT is trained to encode the temporal-union graph using two complementary self-supervised pretext tasks. Namely, temporal reconstruction and multi-view contrasting. The first aims to reconstruct a snapshot given the past and present similar to how language models are trained. On the other hand, the latter is trained via non-contrastive learning on two views with randomly masked nodes. All together, DGT outperforms SOTA discrete-time baselines on several datasets for link prediction tasks. While they operate in a different domain, an interesting direction for future work would be to adapt their pre-training strategy for continuous-time dynamic graphs. More encoders for temporal graphs: Souza et al. (2022) is a very recent work that establishes a series of theoretical results on temporal graph encoders. Their analysis exposes several weakness of both memorybased methods (e.g. TGN) and walk-based methods (e.g. CaW). Given these insights, they propose PINT, a memory-based method that leverages injective message-passing and novel relative positional encodings. The relative positional encodings count how many temporal walks of a given length exist between two nodes. Experiments show significant improvement over SoTA baselines on the link prediction task. However, the high theoretical expressive power comes at the cost of requiring relational positional encodings which are expensive to calculate in an online fashion. An interesting direction for future research would be to evaluate the expressive power of DyG2Vec compared to baselines using their theoretical framework (e.g. temporal WL test). Wang et al. (2021a) adapt the vanilla transformer architecture to dynamic graphs by designing a two-stream encoder that extracts temporal and structural information from the temporal neighborhoods associated with any two interaction nodes. Rather than treating link prediction as a binary classification task, the authors leverage a contrastive learning strategy that maximizes the mutual information between the representations of future interaction nodes. Experiments show improved performance on future link prediction due to the more robust contrastive training strategy. Nonetheless, the paper does not compare to the SoTA CaW method (Wang et al., 2021b). Moreover, experiments are limited to the future link prediction task. NeurTWs Jin et al. (2022) adapt causal walk sampling procedure of CaW Wang et al. (2021b) to include spatio-temporal bias and exploitation-exploration trade-off bias. Moreover, the authors propose utilizing ordinary differential equations (ODE) to better effectively model the irregularly sampled temporal interactions of a node and capture the underlying spatio-temporal dynamics. Experiments show good improvements over the baseline CaW model for some datasets. However, the addition of more sampling biases and ODEs comes at the cost of even higher computational complexity than CaW and more hyperparameters to tune, making the method undesirable for large real-world graphs. TGL Zhou et al. (2022) introduce a general framework for training temporal GNNs on large-scale graphs. This is done by introducing an efficient CSR data structure to store temporal graphs and a parallel sampler that supports GPUs. The framework supports both memory-based methods (e.g. TGN Rossi et al. (2020)) and AMP architectures (e.g. TGAT Xu et al. (2020)). Experiments on billion-scale graphs show up to 13x speed improvement while achieving the same performance. An interesting direction for future work is to adapt the DyG2Vec architecture into the TGL framework to further speed it up.
Review 1: Summary: In this work the authors propose an efficient attention-based encoder architecture for temporal graphs - i.e. graphs where edges have an associated timestamp. Their proposed architecture also incorporates temporal edge encodings (which incorporate aspects such as node degree and number of common neighbors at a point in time) and a window-based subgraph sampling procedure (where, when making a prediction for a target edge $(u, v, t)$, the model will create a subgraph centered around $u$ and $v$ which contains edges in the time interval $(t-W, t)$. The authors also propose a self-supervised pretraining objective for their encoder, which, given a sampled subgraph in a particular time window, creates two "distortions" (by applying edge dropout and feature masking), maps these distorted versions through the encoder to generate node embeddings, which are then put through a decoder which is trained to output node representations. The loss function is a regularization objective with 3 terms: an invariance term (to bring the two representations closer), a variance term (to avoid representation collapse), and a covariance term (to maximize information content). They demonstrate that this self-supervised encoder training can result in improved results on future link prediction and dynamic node classification tasks. In these settings, new decoder architectures (MLPs) are fine-tuned for the particular task at hand. The authors show improved results on 7/7 datasets, while also being more efficient than methods which rely on sampling random walks and scaling more readily to large graph sizes (as opposed to methods which operated on a single static graph). Strengths and Weaknesses: *Strengths:* * Demonstration that self-supervised pretraining works well with temporal graphs is useful contribution with practical implications in real-world applications. * The model seems highly scalable and efficient, and empirical results support this. * The writing of the paper was fairly clear, with the overall ideas and motivations being conveyed well. *Weaknesses:* * The mathematical details could be clarified in places. Some aspects, such as the SSL objective, are not fully specified (even in the appendix, unless I missed it). More concerningly, the main content of temporal attention embedding was hard enough to follow that I am not confident I could reproduce the architecture from the description in the paper. (Specific questions in the requested changes.) * While the current experiments highlight the strengths, it is possible that the evaluation tasks themselves are simply too easy. Requested Changes: **R1 (critical):** Could you make the explanation of temporal attention (section 4.1) clearer? Specifically, here were a few things I had questions on: 1. The neighborhood $\mathcal N(i)$ has also not been defined here - is it temporally aware? Is it taken from a particular time interval? Or are we simply thinking of the temporal graph $\mathcal G$ as a multi-graph, and taking a neighborhood in the classic sense? 2. When you write $\mathcal N(i) = \\{e_p, \ldots, e_k\\}$, this is the first time we have seen the indices $p$ and $k$, and it's unclear to me what they are meant to represent. I think what was intended was to say that we uniformly sample some $N$ 1-hop neighborhood interactions of node $i$, $\\{e_{k_1}, \ldots, e_{k_N}\\} \subseteq \mathcal N(i) \subseteq \mathcal E$. (The indices $p$ and $k$ continue through the paper, so if I am correct than these should perhaps be changed elsewhere also.) 3. A minor note, but $\phi_p(t_p)$ also seems to depend on $l$, however it doesn't have the superscript the rest of the layer-dependent variables do. 4. Is $\Theta_p(t_p)$ a scalar? If so, it would be helpful to mention this. In general, this section is presented with many undefined terms which are later clarified, which (personally) I find rather challenging to read because it requires the reader to maintain a lot of state while reading. Beyond the specific points raised above, I would encourage the authors to take another pass at this section. **R2 (critical):** The discussion under the self-supervised loss (equation (9)) does not mention the $\mu$ term at all. Also, while the motivation for the various terms were provided, it would be helpful to present the actual functions used, at least in the appendix. **R3:** In my experience, creating an evaluation for graph-structured data is quite challenging, and often the standard approach of simply comparing a known positive edge vs. a permuted negative is quite easy to "game" - for example, by capturing some high-level statistics - without actually learning anything important about the graph. I realize the primary evaluations in the paper are fairly standard, and I am not requesting the authors completely revamp the evaluation for this task, however as we already see performance seemingly saturate in Table 2 and the strong performance of the random initialization baseline in Table 4, it is worth considering whether the evaluation is simply too simple. I would like to know if the authors believe the models have actually learned something significant about the graph, or whether it is more likely that they have exploited some high-level statistics / bias in the test set? As an example of the sort of bias I mentioned - a pattern I have seen perform unreasonably well on other graph evaluations is to make any node which has a child in the training data be a parent to all nodes, and all other nodes have no children. On such datasets, neural models need only learn to be "parent classifiers", and can perform unreasonably well despite having learned almost nothing about the graph structure. To be clear, I'm not saying this is exactly what is happening here, but the eval numbers make me suspect that the eval set is simply too easy to tell us much of anything. **Minor Typos / Suggestions:** 1. Section 4.2: I believe the decoder in this section should be labeled $d_\gamma$, not $d_\psi$. 2. Page 9: "performance due to difficult" -> "performance due to the difficult" 2. Page 10: "benefit from more embedding layers" -> delete? 3. It is mentioned in section 6 and in the caption of Figure 4 that the plot in the right of Figure 4 is a comparison of attention weights with respect to relative timespan. I assume this means the *mean* of the attention weights for a given timespan, correct? 4. In the section "Neighbor Sampling and Temporal Edge Encoding", it is mentioned that "sampling a single neighbor for higher hops gives SoTA performance compared to sampling 20 neighbors per hop (see 1st and 5th rows in Table 5)", and some . When I read this, I interpreted this to mean that "sampling more higher hops is better", which (after ingesting the results further) I realize is the opposite of the intended meaning. I would suggest conveying this by putting an emphasis on the number of samples in the first hop, rather than the 1 sample from the "higher hops". 5. Is "linear" really the right terminology to use when considering the fact that the decoders are MLPs? My understanding was that "linear probing" was just that - linear. Broader Impact Concerns: The work is mostly of a methodological nature with no particular concerns from a broader impact perspective. ================================================== Review 2: Summary: The present manuscript proposes a framework (DyG2Vec) for learning on continuous-time dynamic graphs. In particular, the core innovations in this work are (1) a new efficient attention-based neural network architecture, and (2) a modification of a well-known self-supervised paradigm (VICReg Bardes et al., 2022) for dynamic graphs that can be used for general-purpose pretraining. Additionally, (3) the authors make several engineering improvements compared to prior work (sampling temporal neighbours uniformly at random, enhancing neighbour messages with additional features – “temporal edge encoding”, window-based training strategy) that allows them to improve the efficiency as well as the performance of their architecture in downstream tasks. The method is conceptually and experimentally compared to several architectures for dynamic graphs on a variety of (large-scale) datasets and test setups (transductive vs inductive, link prediction, node classification) showing state-of-the-art performance in the majority of the cases. The comparative evaluation is complemented with ablation studies of various implementation details, providing justification for the choices made. Strengths and Weaknesses: ### Strengths - **Presentation and contextualisation with prior work**. The paper is generally well-written and easy to follow, while the method is well-presented and relatively easy to understand. I also consider quite positive the fact that the related work is extensively covered and contrasted against the proposed method. It is also quite helpful for the reader in order to navigate the temporal graph learning landscape. - **Technical soundness, execution and engineering choices**. Both the proposed architecture and the pretraining and downstream task strategies are reasonable and technically sound. At the same time, the above steps are relatively simple (unlike many previous works on temporal graph learning) and easy to implement, while the engineering choices are innovative and well-motivated. - **Evaluation**. The experiments are convincing overall, and each building block is ablated and its existence is sufficiently supported by experimental evidence. Moreover, the training/inference runtime results are very encouraging, showing significant improvement w.r.t. efficiency/scalability. I also found the “attention weight analysis” (section 6) quite insightful. Overall, the authors propose an original, well-executed, simple and efficient framework for self-supervised and supervised learning on continuous-time dynamic graphs with strong empirical performance. ### Weaknesses - **Clarity in the evaluation section**. Some parts of the experiments remain unclear which casts doubt on the fairness of the comparisons. In particular, - In section 5.2., Tables 2 and 3, the authors compare DyG2Vec against several temporal graph learning baselines on the tasks of “future link prediction” and “dynamic node classification”. However, it was unclear to me if DyG2Vec involves a self-supervised pretraining stage here or if it is trained end-to-end. In the case of the former, I was wondering if the baselines were also pre-trained. I believe that for fairness of comparison, the training strategy should be consistent across all methods, in order to clearly understand the benefits of either the pre-training or the new architecture. - It is quite hard to draw any safe conclusion from the performance reported on the dynamic node classification task. I believe the authors should provide more intuition as to why their method has significantly better performance in the 3rd dataset, while in the other two datasets, it is outperformed by the baselines. Additionally, I had trouble understanding this since I was unsure if the same pre-training & downstream task training strategy is conducted across all competing methods (see above). - In section 5.1., paragraph “Training protocols and hyperparameters”, the authors mention three different evaluation setups. However, I was unable to find where the results of these setups are actually reported. What setup do the authors apply in Tables 2 and 3? - Section 5.2, paragraph “SSL for future link prediction”. Comparing Table 4 to Table 2, I understand that the “Supervised” setup corresponds to the overall method proposed by the authors. Does this refer to end-to-end training (without pretraining) or SSL pretraining + fine-tuning on the downstream task? In case the former holds, then where are the benefits of the SSL pretraining shown? In case the latter holds, then what does “SSL-init” setup stand for? Moreover, how do these setups relate to the training protocols mentioned in a previous paragraph (see my comment above)? Could the authors explain these? - **Clarity of contributions**. Following up on my previous questions, I would like to ask the authors what they consider as their core contributions. Is it the self-supervised pre-training or the architecture + engineering choices (see summary)? I believe this should be more clearly stated in the paper and the reader should be pointed to the corresponding experimental evidence that provides justifications to the claims. - **Some experiments are missing**. - Depending on the answers to my above questions, perhaps the paper should be amended with extra experiments. E.g. in case Tables 2 and 3 compare fully-supervised baselines to a pre-trained DyG2Vec, then the experiments should be also performed under a consistent setup. In case they are all fully supervised, then more extensive results on pre-trained DyG2Vec should be reported. - Some baselines could be added: As far as temporal graph learning is concerned, I was wondering if the authors have compared to aggregation mechanisms similar to those used in PINT, Souza et al., NeurIPS’22 or GraphMixer, Cong et al., ICLR’23. As far as self-supervised pretraining is concerned, have the authors experimented with BYOL, Grill et al, NeurIPS’20 akin to BGRL, Thakoor et al., ICLR’22 for static graphs, as they nicely mention in their related work section? A few other works are also mentioned in the related work section (Tian et al., CIKM’21, Jiang et al., KDD’23 – the former is also briefly discussed experimentally in the appendix), so I tend to believe that more emphasis should be given to them in the experimental section of the main paper, e.g. by using the architecture + engineering choices of DyG2Vec and comparing only different SSL pretraining strategies. - I have a suspicion that the temporal edge encoding module might theoretically improve expressivity (mainly the “common neighbours” part). Therefore, I was wondering how the baselines would behave had the temporal edge encodings been used in there as well. Similarly (but less so), for the window-based strategy. Requested Changes: - **Motivation**. I think it would be helpful for the paper if the authors explain early on (in their introduction), why self-supervised learning is particularly important on dynamic graphs. - **Clarity**. The authors should aim to clarify the points I raised in my weaknesses above, i.e. both w.r.t. to their contributions, but most importantly w.r.t. to the experimental section and the various evaluation setups (this would be crucial for my recommendation). Additionally, section 4.3. has some minor clarity issues: - The node-level predictor is unclear. I think the explanation in Appendix A.2. should be moved to the main text. Also in section 3 the predictor for the SSL setup is denoted with $d_{\psi}$ instead of $d_\gamma$ (as in the “Prediction” paragraph). - Paragraph “SSL objective”: $s, \mu, c$ are undefined. I think this part should have been explained more thoroughly for those readers not familiar with the VICReg method. - **Missing experiments**. Similarly (this is also important for my recommendation), I would encourage the authors to complement their experiments with the ones suggested in the "weaknesses" section of my review. I would prioritise clarifying if the comparison is fair and if not, amend the experiments accordingly. Then, consistently comparing with/without the temporal edge encodings, and if possible, adding PINT and GraphMixer to the baselines. **Minor** - Introduction: “We propose an effective message-passing encoder that leverages temporal edge encoding to increase its expressive power.” --> This is not theoretically proven, so it might be better to reduce this claim. - Since the temporal neighbours are sampled, I understand that the node embeddings are stochastic. I was wondering if the authors have measured their variance or the variance of the downstream task performance. - Why is $\bar{t}_i$, Eq. (7), calculated only on the sampled temporal neighbourhood and not on the entire one? - How crucial are the transformation distributions (section 4.3, paragraph “Views”)? Could the authors perform an ablation study on that? Also, the $t$ symbol denoting the transformations might be confusing since it is also used to denote timestamps. Broader Impact Concerns: I do not foresee any major ethical implication of this work, but I would encourage the authors to include a Broader Impact Statement for completeness, given that learning on dynamic graphs may have wide applicability. ================================================== Review 3: Summary: This manuscript studies the problem of dynamic graph SSL, where the dynamic graph is defined by the kind of graphs that may change their connections over time. The core of the proposed method includes temporal edge encodings and window-based subgraph sampling to generate the feature embedding, which is trained using the non-contrastive SSL similar to that in the computer vision field without a protector. Unfortunately, the writing's ambiguity makes it hard for me to make a summary of how the encoder is achieved. Strengths and Weaknesses: ### Strengths - It is indeed an interesting problem to learn the representation of the dynamic graph in a self-supervised manner. - From a big picture of the proposed method, it seems to me promising to achieve self-supervised pre-training and downstream adaptation. However, given the unclear writing and organization of this manuscript, it is hard to check the detailed implementation. - The proposed window-wise construction for dynamic graphs makes sense to control the computational cost to a relatively constant value. ### Weakness - The illustration and the capture of Fig.1 can be improved to make it more straightforward. Specifically, 1) the blue interval on the left side is too light to make it distinguished. There is also a dark blue block within the exact figure, making it hard to read at first glance. 2) symbols are not introduced well at the time it is first shown. The first reference to Figure 1 occurs in section 4.1 which is too far to be referred to by the readers. Similar problem for the FIgure. 2 and its reference. Reorganizing the narrative of the manuscript is highly recommended. - For the problem formulation part, the meaning of symbols is improperly used. For example, 1) The $E$ represents the number of edges but it is also used in $D^E$ to represent the dimension of the edge's feature. It is highly recommended to reformulate the equations and carefully adopt symbols. 2) There is no symbol $z$ in Equation 1, do the authors mean $Z$?. 3) $\theta$ represents the parameters of the encoder, but $\Theta$ is also the temporal edge encoder in Figure 1. It is recommended to use dissimilar symbols to represent different modules. Similar problems occur across the whole manuscript. - To make the manuscript self-contained, it is necessary to provide a brief introduction to the definition of dynamic graph and the downstream tasks at the very beginning, as the term "dynamic graph" may refer to very different meanings within different sub-fields. - I’m a little confused about the setup for the evaluation of the SSL for future link prediction shown in Table 4. If I understand in the right way, the SSL-init encoder also works with a trained downstream decoder to perform the task, is that right? Does the “Supervised” method have the same architecture as DyG2Vec? It is a little strange as the performance of the SSL-init one is lower than the pure supervised one if the SSL-init is fine-tuned for the downstream task, as this is one of the purposes of SSL. - The descript of Figure 1 in Section 4.1 is not self-consistent. There is no “Temporal Attention Embedding” shown in Figure 1 and the “time-encoding” in Figure 1 is not introduced in this section. This unclear writing makes it hard to get the details about how the proposed method works. Requested Changes: My primary concern is the manuscript's writing quality, specifically its coherence and structure. I have detailed some of them in the "Strengths And Weaknesses" section, which the authors may reference. Broader Impact Concerns: There are no Broader Impact Concerns for this manuscript. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The decision among the reviewers is mixed, with one reviewer in particular complaining about the lack of clarity which makes it hard for the reader to implement the proposed method, thus hindering reproducibility. The other two reviewers are in favour of accepting this submission. Based on the reviews and discussion, the consensus is that the contributions of the paper are worth publication. A "minor revision" is recommended for this paper due to several remaining concerns. I request that the final version of the manuscript considers the final reviewers' comments, particularly the items listed below: - Based on the discussion with reviewer ncve, it turns out that most of the experiments concern a fully-supervised setup, while the self-supervised method did not work that well in practice. Therefore, the focus of the paper should be mainly placed on the architecture and the training tricks, and not that much on the self-supervised setup. Τhe authors should update the title and the text of the paper to reflect that. - Provide some explanations on why DyG2Vec outperforms the baselines. Ideally, explanations would be accompanied with some empirical results. - Improve the quality of the presentation and clarity throughout the paper. - As promised in the manuscript, make the code publicly available. The authors are also encouraged to address the reviewers' concerns about the experimental setup. Specifically, it is not clear whether the experimental comparison is fair since the baselines do not utilize temporal edge encodings. The authors could investigate whether such features lead also the baselines to performance improvements. ==================================================
# Fedin: Federated Intermediate Layers Learning For Model Heterogeneity Anonymous authors Paper under double-blind review ## Abstract Federated learning (FL) facilitates edge devices to cooperatively train a global shared model while maintaining the training data locally and privately. However, a prevalent yet impractical assumption in FL requires the participating edge devices to train on an identical global model architecture. Recent research endeavors to address this problem in FL using public datasets. Nevertheless, acquiring data distributions that closely match to those of participating users poses a significant challenge. In this study, we propose an FL method called Federated Intermediate Layers Learning (FedIN), which supports heterogeneous models without relying on any public datasets. Instead, FedIN leverages the inherent knowledge embedded in client model features to facilitate knowledge exchange. To harness the knowledge from client features, we propose Intermediate Layers (IN) training to align intermediate layers based on features obtained from other clients. IN training only needs minimal memory and communication overhead by employing a single batch of client features. Additionally, we formulate and resolve a convex optimization problem to mitigate the challenge of gradient divergence stemming from model heterogeneity. The experimental results demonstrate the superior performance of FedIN in heterogeneous model settings compared to state-of-the-art algorithms. Furthermore, the experiments discuss the details of how to protect user privacy leaked from IN features, and our ablation study illustrates the effectiveness of IN training. ## 1 Introduction The substantial surge in Internet-of-Things (IoT) device utilization has led to the generation of vast quantities of user data (Song et al., 2022). Effectively managing this IoT big data without compromising user privacy has emerged as a significant concern. **Federated Learning** (FL) (McMahan et al., 2017) is proposed as a distributed machine learning paradigm that facilitates collaborative training on IoT data while keeping user data locally. Within FL, each client transmits model weights from their local models to the server following a few local training epochs. Subsequently, the server aggregates these weights to update the global model, and sends this model back to clients. While Federated Learning (FL) has demonstrated success in various applications, such as recognizing human activities (Chen et al., 2019b; Ouyang et al., 2021) and learning sentiment (Smith et al., 2017; Qin et al., 2021), numerous practical challenges persist within the FL domain (Kairouz et al., 2021). One of the most crucial and practical challenges is system heterogeneity, characterized by varying resources among client devices participating in FL training (Li et al., 2020a; Chan et al., 2024). Many existing FL schemes (Li et al., 2021a; Karimireddy et al., 2020) assume that the client devices with distinct resources possess the same architecture as the global shared model for global aggregation. Nevertheless, clients with limited computation resources may struggle to complete local training in time, dragging the training speed of the entire communication round. The clients hindering the training process are called stragglers. To combat this issue, some research has proposed asynchronous FL (Xie et al., 2020; Chen et al., 2020; Chai et al., 2021), adjusting local training epochs dynamically and clustering clients according to their available resources in order to mitigate the problem of stragglers. Nevertheless, given that all clients keep the same model architecture, less capable clients may lack the sufficient memory to deploy the shared global model. In this case, the global model must be adjusted to a smaller size, leading to the resource waste of more capable ![1_image_0.png](1_image_0.png) Client: Computer Client: Mobile Device Client: Mobile Phone Client: Mobile Phone Client: Mobile Phone Client: Mobile Phone Figure 1: All clients have the same model architectures in system homogeneous FL as shown in Figure 1a. In system heterogeneity, the clients participate in the federated learning with different available resources, inducing different model architectures in Figure 1b. clients and diminishing the performance of FL training. Given the impracticality of ensuring equal resource levels across all clients, the reality often involves heterogeneous devices with varied capabilities collaborating. Thus, supporting heterogeneous models could fully utilize the resources of heterogeneous devices, offering a more effective solution to the challenges posed by system heterogeneity. A straightforward way to facilitate system heterogeneity is to deploy different model architectures based on the available resources of the clients, as shown in Figure 1b. However, the server can not aggregate the weights directly like Figure 1a under the heterogeneous model architectures. It is essential to investigate alternative ways to incorporate weights and knowledge among the clients. Recent works addressing this challenge through knowledge distillation (Hinton et al., 2015) using a public dataset, such as RHFL (Fang & Ye, 2022) and FedMD (Li & Wang, 2019). While these methods allow for diverse model architectures on clients, it is challenging to collect a suitable public dataset with a similar distribution to the local datasets. Therefore, to support system heterogeneity without relying on a public dataset, we propose a method called Federated Intermediate Layers Learning (**FedIN**), training the intermediate layers according to a single batch of features obtained from other clients. In FedIN, a local model architecture consists of three components: an extractor, intermediate layers, and a classifier, as depicted in Figure 2. Client features are derived from the outputs of the extractor and the inputs to the classifier. Notably, clients only need to transmit **one batch** of features to the server, in addition to weight updates. The intermediate layers are updated through a combination of local training and IN training process, where IN training leverages a single batch of features to extract latent knowledge from other clients. However, directly deploying these two training processes can induce a critical problem called gradient divergence (Wang et al., 2020; Zhao et al., 2018), as the latent information from the local dataset and the features collected from other clients varies, particularly in a model heterogeneous environment. To alleviate the effect of this problem, we formulate and address a convex optimization problem to obtain the optimal updated gradients. Moreover, we use a simple yet efficient method, adding Gaussian noise to the client features to protect user privacy. The experiment results reveal that FedIN outperforms the baselines in terms of both accuracy and overhead. Our contributions are summarized as follows. - We proposed a novel FL method called **FedIN**, utilizing local training and IN training for intermediate layers, which is a flexible and reliable FL method addressing the system heterogeneity problem. - To alleviate the effects of the gradient divergence, we formulate a convex optimization problem to derive the optimal updated gradient. The ablation study shows its effectiveness in handling the gradient divergence problem. - To protect user privacy within FedIN, we utilize Gaussian noise in the IN training process. The experiments demonstrate the effectiveness of this approach in ensuring user privacy. - Our experiments reveal that FedIN achieves the best performances in the IID and non-IID data compared with the state-of-the-art algorithms. Moreover, we conduct a thorough analysis to investigate the factors contributing to the improvements attained by FedIN. ## 2 Related Work 2.1 Federated Learning Federated Learning (FL) was proposed in 2017 to organize cooperative model training among edge devices and servers (McMahan et al., 2017). In FL, numerous clients train models jointly while retaining training data locally to maintain privacy protection. Various methods have been proposed and achieved good performance in different scenarios. In (Xie et al., 2020), FedAsyn utilizes coordinators and schedulers to create an asynchronous training process, handling the stragglers in the FL training process. FedProx (Li et al., 2020b) regularizes and re-parametrizes FedAvg, guaranteeing convergence when learning over non-IID data. To share local knowledge among clients with different model architectures, FCCL (Huang et al., 2022) generates a cross-correlation matrix based on the unlabeled public dataset. ## 2.2 Heterogeneous Models Our work focuses on supporting heterogeneous models in FL. This subsection classifies recent research contributing to model heterogeneity into three categories. Public and Auxiliary Data. If a server has a public dataset, clients can exploit the general knowledge from this dataset, constructing a simple and efficient bridge to exchange knowledge among clients. FedAUX (Sattler et al., 2021) utilizes unsupervised pre-training and unlabeled auxiliary data to initialize heterogeneous models. FedGen (Zhu et al., 2021) simulates the prior knowledge from all the clients according to a generator. To dig out the latent knowledge from the public dataset, several studies (Li & Wang, 2019; Li et al., 2021b; He et al., 2020) propose addressing the system heterogeneity problem, inspired by knowledge distillation (Hinton et al., 2015). In FedMD (Li & Wang, 2019), a large public dataset is deployed in a server, while the clients distill and transmit logits from this dataset to learn the knowledge from both logits and local private datasets. In FedH2L (Li et al., 2021b), clients extract the logits from a public dataset consisting of small portions of local datasets from other clients. In RHFL (Fang & Ye, 2022), a server calculates the weights of clients by the symmetric cross-entropy loss function, and clients distilled knowledge from the unlabeled dataset. FCCL (Huang et al., 2022) computed a cross-correlation matrix also based on the unlabeled public dataset. MocoSFL (Li et al., 2023) proposes a mechanism, replay memory on features to assist the MoCo functions (Chen et al., 2021), a contrastive framework, in model heterogeneous FL. Data-free Knowledge Distillation. The basic ideas of data-free KD are to optimize noise inputs to minimize the distance to prior knowledge (Nayak et al., 2019), and Chen et al. (Chen et al., 2019a) train Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) to generate training data for the entire KD process, utilizing the knowledge distilled from the teacher model. To free the limitation from a public dataset, some research works consider data-free KD in FL. In FedML (Shen et al., 2020), latent knowledge from homogeneous models is applied to train heterogeneous models. In FedHe (Chan & Ngai, 2021), logits belonging to the same class are directly averaged in a server. In FedGKT (He et al., 2020), a neural network is split into a client and a server, while the server completes the entire training process based on the features and logits collected from all clients. FedMK (Liu et al., 2023) utilizes dataset distillation to transmit latent knowledge between clients in FL. Splitting Models. To adapt to the available resources of different clients, several studies split the large models into small sub-models. HeteroFL (Diao et al., 2021) divides a large model into local models with different sizes. However, the architectures of local and global models are still restricted by the same model architecture. SlimFL (Baek et al., 2022) integrates slimmable neural network (SNN) architectures (Yu & Huang, 2019) into FL, adapting the widths of local neural networks based on resource limitations. In ![3_image_0.png](3_image_0.png) Figure 2: **Details of model architectures and the training process for FedIN.** The process for FedIN is described as follows. ⃝1 First, clients receive client features and global weights w¯ from the server. ⃝2 After updating client weights by global weights, the clients are training their models from the local private dataset and completing the IN training for the client features inputs and outputs (sin, sout) from the server. ⃝3 Upon completing the local training, clients transmit the model weights and new client features, denoted as (wk, sin, sout), to the server. The aggregation methods for system heterogeneity are discussed in section 4.4. (Horvath et al., 2021), FjORD leverages Ordered Dropout and a self-distillation method to determine the model widths. ScaleFL (Ilhan et al., 2023) splits a server model along two dimensions, and local models are trained using the cross-entropy and KL-divergence loss functions. InCo (Chan et al., 2024) proposes three splitting methods with convex optimization problems to solve the gradient divergence problem in heterogeneous FL. ## 3 Problem Formulation The goal of FL is to collaborate with the clients to train a shared global model while keeping their local data private. We briefly summarize the optimization problem below. We assume that K clients participate in FL. Each client has a private dataset Dk = {(xi,k, yi,k)|i = 1, 2*, ...,* |Dk|}, where k ∈ {1*, ..., K*} is the index of a client, and |Dk| denotes the size of a dataset Dk. Private dataset Dk is only accessible to client k, guaranteeing data privacy. In traditional FL, the clients share identical model architecture. We denote a training model by f(x; w), where w are the training weights and x are the inputs. The loss function lk of client k is shown as follows, $$\min_{w}\quad l_{k}(w)=\frac{1}{|D_{k}|}\sum_{i=1}^{|D_{k}|}l(f(x_{i};w),y_{i}),\tag{1}$$ where l(·, ·) is a loss function for each data sample (xi,k, yi,k). Nevertheless, it may not be possible to deploy an identical model architecture for all the clients due to system heterogeneity. One potential solution is to allow clients to select different model architectures according to their capabilities in heterogeneous FL. The problem of heterogeneous FL is described as follows. We denote wk as the model weights of client k. If the total size of all datasets is N =PK k=1 |Dk|, the global optimization function is described as follows, $$\operatorname*{min}_{w_{1},w_{2},...,w_{K}}\;\;L(w_{1},...,w_{K})=\sum_{k=1}^{K}{\frac{|D_{k}|}{N}}l_{k}(w_{k}),$$ $$\text{(2)}$$. Nlk(wk), (2) where the optimized model weights {w1, w2*, ..., w*K} have different sizes. Thus, the direct aggregation of entire model weights becomes unfeasible when dealing with heterogeneity among models. Therefore, we adopt layer-wise heterogeneous aggregation (Liu et al., 2022; Chan et al., 2024) as an alternative approach to aggregate the layer weights of heterogeneous models instead of the entire model weights in our experiments. ## 4 Fedin: Federated Intermediate Layers Learning In this section, we describe the details of FedIN, focusing on addressing system heterogeneity by deploying clients with diverse model architectures that align with their available resources. Figure 2 illustrates the workflow of FedIN. The client model consists of three key components: an extractor, intermediate layers, and a classifier. The outputs of the extractor, referred to as feature inputs (sin), serve as inputs to the intermediate layers. Similarly, the outputs of the intermediate layers, referred to as feature outputs (sout), act as inputs to the classifier. The client features are the pair of feature inputs and outputs, denoted as (sin, sout). To be specific, FedIN encompasses two training processes: local training, which leverages the private dataset, and IN training, which relies on the feature inputs and outputs (sin, sout). Moreover, to address the challenge of gradient divergence arising from conflicts from model heterogeneity, we propose a convex optimization problem formulation to obtain the optimal updated gradients. ## 4.1 Local Training And In Training The clients receive a single batch of feature inputs and feature outputs, denoted as S = {(s c i,in, sc i,out)|i = 1, 2*, ...,* |S|}, from the server. These samples are utilized for training the intermediate layers during the IN training process. The superscript c means that these feature inputs and outputs are from the central server. The clients begin their local training after receiving a batch of client features from the server. For an instance (xi,k, yi,k) ∈ Dk, client k conducts local training on its private dataset. The loss function of the local training is shown as follows, $$l_{local,k}=\quad l_{CE}(f(x_{i,k};w_{k}^{t}),y_{i,k})+\frac{\mu}{2}||w_{k}^{t}-w_{k}^{t-1}||^{2},\tag{3}$$ where w t k are the weights of client k at time t, and lCE is the cross-entropy loss function for the local training. To ensure client consistency, we add a proximal regularization term (Li et al., 2020b) in Eq. 3. The second training process is IN training, which is training the intermediate layers from the features dataset S. It is worth mentioning that the sample number of S is one batch size. We denote the weights of the extractor and the classifier by we,k and wc,k for client k ∈ {1*, ..., K*}. Moreover, the weights of the intermediate layers are denoted by w*in,k*. The relations among the data sample (xi,k, yi,k) ∈ Dk, client weights, and (s k i,in, sk i,out) are shown as follows, $$\begin{array}{c}s_{i,in}^{k}=f(x_{i,k};w_{e,k}),\\ s_{i,out}^{k}=f(s_{i,in}^{k};w_{in,k}),\\ f(x_{i,k};w_{k})=f(s_{i,out}^{k};w_{c,k}).\end{array}$$ $$\left(4\right)$$ $$\left(5\right)$$ $$({\mathfrak{G}})$$ Eq. 4 shows that the feature input s k i,in is the output of the extractor we,k of an instance (xi,k, yi,k) from client k. Eq. 5 describes that the feature output s k i,out is the output of the intermediate layers w*in,k* with the feature input s k i,in. Eq. 6 proves the equivalence between the output of the classifier wc,k and the output of the whole client model wk. This process is indicated by the blue arrows in Figure 2. Eq. 5 shows the main function of the IN training, as shown in Figure 2. After the client receives the feature dataset S = {(s c i,in, sc i,out)|i = 1, 2*, ...,* |S|}, it begins the IN training for the intermediate layers. The feature inputs s c i,in from the server are the inputs of the intermediate layers, while the s c i,out are the targets of the IN training. The loss function of IN training is defined as follows, $$l_{I N,k}=l_{M S E}(f(s_{i,i n}^{c};w_{i n,k}),s_{i,o u t}^{c}),$$ i,out), (7) where lMSE is a mean-square error loss function. The weights w*in,k* are updated by the loss functions of the local training l*local,k* and the IN training l*IN,k*. We use MSE as the loss function due to its effectiveness in this learning method. Moreover, and sin and sout do not represent probability distributions, making it difficult to incorporate other losses such as KL-divergence and cross-entropy losses. $$\left(7\right)$$ ## 4.2 Gradient Alleviation However, local training is based on the local data, while IN training is based on the features from other clients' data. Different local datasets lead to varied distributions, resulting in dissimilar optimized directions. Moreover, in our scenario, deploying distinct model architectures in clients emphasizes differences in feature spaces, as shown in Figure 3. These combined factors result in divergent gradients between local training and IN training, impeding the pace of convergence and disturbing the model to achieve the optimum point (Wang et al., 2020; Zhao et al., 2018). Therefore, mitigating this gradient divergence is imperative for the effectiveness of our method. To address this problem, inspired by (Chan et al., 2024), we formulate a convex optimization problem as follows. We define the gradients from the local training as a matrix G*local* and the gradients from the IN training as a matrix GIN . To guarantee the optimized direction of the models, we design a constraint for the gradient as follows, $$\langle G_{I N},G_{l o c a l}\rangle\geq0,$$ ⟨GIN , Glocal⟩ ≥ 0, (8) where ⟨·, ·⟩ is the dot product, which ensures the optimized direction for G*local* and GIN to be the same. In the optimization problem, we denote the new optimized gradients by a matrix Z and model the following convex optimization primal problem, $$\operatorname*{min}_{Z}||G_{I N}-Z||_{F}^{2},\ \ s.t.\ \langle Z,G_{l o c a l}\rangle\geq0,\ \ \ \ \ (9)$$ $$({\mathfrak{s}})$$ ![5_image_0.png](5_image_0.png) $$(11)$$ Figure 3: T-SNE visualization depicts IN feature outputs sout derived from five distinct model architectures, with each color representing a unique model architecture. where we maintain the optimized direction between Z and G*local* to be the same and minimize the distance between Z and Gin. We consider that the information from the feature inputs and outputs is more fruitful than the local private dataset which is easier to have over-fitting in the training process. We solve this convex optimization problem by the Lagrange dual problem (Bot et al., 2009). The Lagrangian is shown as, $$\begin{array}{l}{{L(Z,\lambda)=t r(G_{I N}^{T}G_{I N})-t r(Z^{T}G_{I N})}}\\ {{-t r(G_{I N}^{T}Z)+t r(Z^{T}Z)-\lambda t r(G_{I o c a l}^{T}Z),}}\end{array}$$ $$(10)$$ where tr(A) means the trace of the matrix A, and the λ is a Lagrange multiplier associated with ⟨Z, Glocal⟩ ≥ 0. To derive the dual problem, we first get the optimum of Z for the Lagrangian Eq. 10, and then obtain the Lagrange dual function g(λ) = infZ L(*Z, λ*). Thus, the Lagrange dual problem is described as follows, $$\operatorname*{max}_{\lambda}\;g(\lambda)=-\frac{\lambda^{2}}{4}t r(G_{l o e a l}^{T}G_{l o e a l})-\lambda t r(G_{l o e a l}^{T}G_{I N}),\;\;s.t.\;\lambda\geq0,$$ where the optimum of the Lagrangian Eq. 10 is Z = GIN + λ 2 G*local*. If the λ is large enough, it is obvious that ⟨Z, G*local*⟩ > 0, which means this convex optimization problem holds strong duality because it satisfies the Slater's constraint qualification(Boyd et al., 2004), i.e., the optimum of the primal problem Eq. 9 is also Z = GIN + λ 2 G*local*. Furthermore, the dual problem Eq. 11 can be solved to obtain the analytic solution for λ and Z, which is shown as follows, $$Z=\begin{cases}G_{I N},&\text{if}b\geq0\\ G_{I N}-\frac{b}{a}G_{I o c a l},&\text{if}b<0\end{cases}$$ $$(12)$$ where a = tr(GT localG*local*) and b = tr(GT localGIN ). However, one crucial point is that the clients will handle this optimization process. If we calculate each gradient matrix following Eq. 12, this process would occupy lots of computing resources because of the matrix multiplication. Therefore, to mitigate the computational pressure on the clients, we simplified the updated gradient matrix as, $$Z=G_{I N}+\frac{\lambda}{2}G_{l o c a l},$$ $$(13)$$ G*local*, (13) where λ = 1 is set for the optimum point of the primal problem in our experiment settings. Since GIN is only associated with the weights w*IN,k* and not related to we,k and wc,k, the client models are optimized by Eq. 13 in FedIN directly. ## 4.3 Privacy Consideration In our methods, clients are required to transmit feature inputs and outputs to the server, raising privacy concerns regarding the potential leakage of private data through transmitted features. We investigate two recent related attack methods, the Gradient Inversion Attack and the Model Inversion Attack. The Gradient Inversion Attack relies on the strong assumption that the server knows the private statistic of BatchNorm (Huang et al., 2021), which is not appropriate to FedIN as such information is unnecessary to transmit to the server. Additionally, the Model Inversion Attack poses a greater risk of stealing private information in our scenario, but one strong assumption for this attack is that the server needs to have prior knowledge of the client input images (Li et al., 2022), which is impractical in our scenario as the server does not receive any images from the clients. However, as the server accesses the model parameters and the IN feature inputs and outputs, we explore an alternative method known as dataset distillation (Wang et al., 2018; Lei & Tao, 2023) to potentially reconstruct the private images from the clients. We randomly initialize and train a batch of noise xˆ with the same size as the input images x, aiming to optimize xˆ following the following reconstruction objective: lrec = l(f(ˆx; we), sin), where we represents the freezing weights of the extractor on the server and sin denotes the feature inputs. To enhance user privacy within FedIN, the clients can easily **add Gaussian noise** followed the standard deviation σ of IN feature inputs and outputs in training. Specifically, we define σin as the standard deviation of IN feature inputs and σout for feature outputs. The Gaussian noises are represented as zin ∼ N (0, σin) and zout ∼ N (0, σout). For simplicity in notations, we use σ to denote z, as we solely adjust σ within this privacy protection mechanism. Throughout the training phase, We apply 0.8σ to the IN feature inputs and outputs, i.e., the inputs of Eq. 5 are sˆ k i,in = s k i,in + 0.8σin, and for Eq. 6, they become sˆ k i,out = s k i,out + 0.8σout. The results of the privacy protection are shown in Figure 4, indicating the efficiency of this mechanism in protecting user privacy in FedIN. More details on the privacy experiments are further provided in Section 5.5. With protection Without protection True Figure 4: The comparison between privacy with protection and without protection. ## 4.4 Weight Aggregation FedIN can handle many scenarios in different model architectures. If client models have different numbers of layers, FedIN adopts layer-wise heterogeneous aggregation (Liu et al., 2022; Chan et al., 2024), enabling the server to aggregate weights from the same layer rather than the same model. Similarly, when client models have different architectures, FedIN aggregates model weights only from models with identical architectures, the same as the homogeneous aggregation method used in FedAvg (McMahan et al., 2017) and FedDF (Lin et al., 2020). The effectiveness of FedIN with these two distinctive aggregation methods is further demonstrated in our experiment section. | Methods | FashionMNIST (ACC=85) | | SVHN (ACC=80) | CIFAR-10 (ACC=60) | | | | | | | | | |-------------------|-------------------------|---------|-----------------|---------------------|------|---------|--------|--------|------|---------|--------|--------| | | IID | Non-IID | Round↓ | Speed↑ | IID | Non-IID | Round↓ | Speed↑ | IID | Non-IID | Round↓ | Speed↑ | | FedAvg(2017) | 90.3 | 89.4 | 47 | ×1.0 | 89.2 | 84.5 | 82 | ×1.0 | 76.8 | 66.2 | 109 | ×1.0 | | FedProx(2020b) | 89.7 | 87.6 | 40 | ×1.2 | 90.6 | 87.3 | 45 | ×1.8 | 77.6 | 72.0 | 72 | ×1.5 | | Scaffold(2020) | 88.3 | 87.1 | 25 | ×1.9 | 91.1 | 86.0 | 72 | ×1.1 | 79.0 | 68.1 | 120 | ×0.9 | | FedNova(2020) | 87.5 | 87.3 | 36 | ×1.3 | 87.3 | 86.7 | 106 | ×0.8 | 62.9 | 60.3 | 229 | ×0.5 | | MOON(2021a) | 89.5 | 89.0 | 34 | ×1.4 | 89.5 | 86.1 | 55 | ×1.5 | 74.1 | 67.4 | 129 | ×0.8 | | HeteroFL(2021) | 89.3 | 89.5 | 140 | ×0.3 | 93.8 | 89.3 | 107 | ×0.8 | 72.1 | 61.0 | 273 | ×0.4 | | InclusiveFL(2022) | 88.4 | 89.1 | 31 | ×1.5 | 90.9 | 88.7 | 67 | ×1.2 | 75.0 | 66.1 | 160 | ×0.7 | | FedRolex(2022) | 90.9 | 88.6 | 100 | ×0.4 | 91.3 | 86.9 | 81 | ×1.0 | 79.8 | 68.0 | 165 | ×0.6 | | ScaleFL(2023) | 91.1 | 90.2 | 95 | ×0.5 | 93.7 | 90.1 | 100 | ×0.8 | 76.4 | 72.8 | 108 | ×1.0 | | InCoAvg(2024) | 90.6 | 89.4 | 22 | ×2.1 | 90 | 87.2 | 55 | ×1.5 | 78.7 | 67.1 | 127 | ×0.8 | | FedIN | 91.2 | 90.3 | 20 | ×2.4 | 91.8 | 89.3 | 29 | ×2.8 | 80.5 | 75.9 | 54 | ×2.0 | | FedIN (+Noise) | 91.3 | 90.7 | 18 | ×2.6 | 92.9 | 90.9 | 26 | ×3.1 | 83.2 | 77.3 | 52 | ×2.1 | Table 1: Model accuracy for IID and non-IID data of FashionMNIST, SVHN, and CIFAR-10, with target accuracy established at 85, 80, and 60, respectively. "Round" denotes the round at which the method first achieves the target accuracy in Non-IID. "Speed" refers to the convergent speed compared to the speed of FedAvg. We **bold** the best results, and underline the second best results in this table. ## 5 Experiments In this section, we conduct experiments to evaluate the performances of FedIN on the CIFAR-10 (Krizhevsky et al., 2009), Fashion-MNIST (Xiao et al., 2017), and SVHN (Netzer et al., 2011) datasets. Our code will be released on Github. ## 5.1 Experiment Settings Federated Settings. We establish two distributions for these datasets, independent and identically distributed (IID), and non-IID. The non-IID data is generated using a Dirichlet distribution with a parameter α = 0.5. We have 100 clients in the FL training process. The model architectures are ResNet10, ResNet14, ResNet18, ResNet22, and ResNet26 from PyTorch source codes, and they are evenly distributed among 100 clients. The number of communication rounds is set to 500. The batch size is 16 during the training process. For all datasets, the clients complete five epochs of local training during each communication round. Baselines. We have two classic algorithms, FedAvg (McMahan et al., 2017) and FedProx (Li et al., 2020b), and eight state-of-the-art methods, Scaffold (Karimireddy et al., 2020), FedNova (Wang et al., 2020), MOON (Li et al., 2021a), HeteroFL (Diao et al., 2021), InclusiveFL (Liu et al., 2022), FedRolex (Alam et al., 2022), ScaleFL (Ilhan et al., 2023), and InCo (Chan et al., 2024), as our baselines. FedIN and these baselines, FedAvg, FedProx, Scaffold, FedNova, and MOON, utilize the layer-wise aggregation technique proposed in (Chan et al., 2024; Liu et al., 2022) under our heterogeneous model environment. FedIN (+Noise) is a privacy-protected version of FedIN. More discussions on user privacy are provided in Section 5.5. We first focus on FedIN without adding noises in the experiments. Since HeteroFL, FedRolex, and ScaleFL require model splitting based on their own methodology, they cannot utilize this aggregation technique. To maintain a similar number of parameters as the other baselines, we deploy ResNet152 in these baselines instead of using the largest model, ResNet26, as in other methods. The model split mode in these baselines is "dynamic_a1-b1-c1-d1-e1" from the source code because of five heterogeneous models in all other methods. The hyper-parameter µ 2 for FedProx and FedIN is 0.05. We use Adam optimizer with default parameter settings in PyTorch for all methods. All experiments are conducted with one Nvidia RTX3090 GPU. ## 5.2 Accuracy Analyses. Accuracy of IID and non-IID Data. We conduct experiments on the IID and non-IID data in FashionMNIST, SVHN, and CIFAR-10 datasets. The experiment results are shown in Table 1. From Table 1, FedIN (+Noise) achieves the highest accuracy among all methods in FashionMNIST, CIFAR-10, and gets the second highest accuracy in SVHN with IID. More discussions for FedIN (+Noise) are shown in Section 5.5. FedIN also gets the second-best results among different datasets. These results demonstrate the Table 2: Model accuracy with different settings in CIFAR-10. | Fashion-MNIST | Methods | | | | | | | |-----------------|-----------|----------|---------|------|-------------|-------|------| | FedAvg | FedProx | Scaffold | FedNova | MOON | InclusiveFL | FedIN | | | IID | 86.1 | 83.4 | 87.7 | 84.2 | 87.0 | 88.1 | 88.9 | | Non-IID | 85.4 | 82.1 | 86.3 | 83.9 | 86.5 | 86.4 | 88.0 | | (a) Model accuracy with homogeneous models. | (b) Model accuracy with ablation studies. | | | | | | | |-----------------------------------------------|---------------------------------------------|---------|----------|---------|-------|----------|---------| | CIFAR-10 | Methods | | | | | | | | FedProx | Scaffold | FedNova | MOON | FedIN | | | | | IID | 83.5 | 84.3 | 82.0 | 84.2 | 84.7 | | | | Non-IID | 77.5 | 76.8 | 75.4 | 78.2 | 79.2 | CIFAR-10 | Methods | | | FedAvg | w/o IN | w/o Prox | w/o Opt | FedIN | | | | IID | 76.8 | 77.6 | 78.8 | 79.4 | 80.5 | | | | Non-IID | 66.2 | 72.0 | 66.4 | 74.9 | 75.9 | | | Table 3: Model accuracy with heterogeneity models with FedAvg aggregations. effectiveness of FedIN. Furthermore, FedIN requires only 20, 29, and 54 communication rounds to reach the target accuracy in three datasets, demonstrating the fastest convergent speed among all the baselines. Additionally, Figure 5 shows the smoothed test accuracy on non-IID data of CIFAR-10. FedIN (red line) achieves the highest accuracy and exhibits the fastest convergent speed throughout the training process. It is the first method to achieve the target accuracy (red dot line). Moreover, FedIN incurs only a small additional overhead of one batch of feature inputs and outputs compared to FedAvg, as shown in Table 4. ![8_image_0.png](8_image_0.png) Accuracy of Homogeneous Models. While FedIN primarily addresses the system heterogeneity challenge in FL, we also conduct experiments in a homogeneous model environment using CIFAR-10. All client models are ResNet18 in this experiment, and the remaining federated settings are the same as those in the system heterogeneity experiments. As presented in Table 2a, FedIN still outperforms state-of-the-art baselines, specifically for the baselines that are designed to enhance FL performance in homogeneous model environments. Figure 5: The smoothed test accuracy on non-IID data of CIFAR-10. The red dot line denotes the target accuracy in Table 1. Accuracy with FedAvg Aggregation. To ensure a fair comparison, both the baselines and FedIN employ layer-wise aggregation. However, it is worth noting that FedIN can be deployed in scenarios with extreme heterogeneity, where layer-wise aggregation is not feasible. In such cases, model weights with the same architectures are the only ones that can be aggregated. To demonstrate the effectiveness of FedIN in such extreme environments, we conducted experiments on the Fashion-MNIST dataset, utilizing FedAvg aggregation. The remaining federated settings are the same in this experiment. As indicated in Table 3, FedIN still achieves the highest accuracy, 88.9% on IID data and 88.0% on non-IID data. These results further emphasize the effectiveness of FedIN in extreme system heterogeneity environments. ## 5.3 The Reason For The Improvements CKA Similarity for Different Stages. Inspired by (Luo et al., 2021) and (Raghu et al., 2021), we use CKA similarity (Kornblith et al., 2019) to examine the layer similarity among different clients across different methods, in order to shed light on the reasons behind the improvements observed with FedIN. To simplify the figure annotations, we concentrate on three specific methods: FedAvg as an essential baseline, InclusiveFL as a representative method for system heterogeneity, and our proposed method, FedIN. Figure 6 illustrates Table 4: Training overheads for different methods. "Params" indicates the communication overheads. "Memory" refers to the memory occupied by methods in the training process. | Methods | | | | | | | |--------------|--------|----------|-------|----------|----------|-------| | Metrics | FedAvg | Scaffold | MOON | HeteroFL | FedRolex | FedIN | | Params(M) ↓ | 12.28 | 24.56 | 12.28 | 16.29 | 16.29 | 12.35 | | Memory(MB) ↓ | 235.0 | 470.0 | 705.0 | 445.6 | 445.6 | 235.3 | ![9_image_0.png](9_image_0.png) (a) CKA similarity for IID data. (b) CKA similarity for non-IID data. (c) Effects from different batch sizes. (d) Effects from different sample numbers. Figure 6: Illustrations for CKA similarity of IID data in Figure 6a and non-IID data in Figure 6b with CIFAR-10. The effects from different batch sizes and different sample numbers are shown in Figure 6c and Figure 6d under non-IID CIFAR-10. ![9_image_1.png](9_image_1.png) 0.0 0.2 0.4 0.6 0.8 1.0 ``` (c) Stage 2 of FedIN. 2 7 12 17 22 27 32 37 42 47 52 57 62 67 72 77 82 87 92 97 Client_ID ``` (a) Stage 2 of FedAvg. (b) Stage 2 of InclusiveFL. (d) Stage 3 of FedAvg. (e) Stage 3 of InclusiveFL. (f) Stage 3 of FedIN. Figure 7: Heatmaps of CKA similarity from stage 2 and 3 among different clients in CIFAR-10. | Methods | IID | Non-IID | | | | | | | | | |-------------|---------|-----------|----------|----------|---------|---------|---------|----------|----------|------| | Nc = 10 | Nc = 20 | Nc = 50 | Nc = 100 | Nc = 200 | Nc = 10 | Nc = 20 | Nc = 50 | Nc = 100 | Nc = 200 | | | FedAvg | 79.3 | 79.2 | 78.7 | 76.8 | 74.0 | 68.3 | 67.9 | 66.9 | 66.2 | 62.5 | | InclusiveFL | 77.5 | 76.7 | 79.1 | 75.0 | 73.4 | 66.8 | 68.4 | 67.1 | 66.1 | 61.2 | | FedIN | 82.8 | 83.1 | 81.0 | 80.5 | 74.3 | 76.7 | 76.3 | 74.1 | 75.9 | 72.2 | Table 5: Model accuracy with different client numbers on CIFAR-10. the CKA similarity of different stages under IID and non-IID. Notably, in Figure 6a, FedIN exhibits the highest similarity even in the deepest stage (stage 3), while FedAvg and InclusiveFL struggle to maintain high similarity levels in stage 3, as evidenced by the gray area in the figure. In Figure 6b, FedIN still maintains a higher similarity than FedAvg and InclusiveFL, especially in the deep stage (stage 3). To gain further insights into the dissimilarities between FedIN and the other methods, we present heatmaps of similarity from stage 2 and stage 3 among clients in Figure 7, indicating that the average similarity of FedIN surpasses that of FedAvg and InclusiveFL. T-SNE Visualization. We conduct t-SNE visualizations (Van der Maaten & Hinton, 2008) on features extracted from stage 3 in Figure 8, focusing on data belonging to the same class. The objective is to observe the clustering behavior of these data points. In Figure 8a and Figure 8b, it is evident that the features from client 0 and client 1 and features from client 2 are separated. However, the features from these three clients form a singular cluster in FedIN, as depicted in Figure 8c, validating that the features from data with the same class from different model architectures are consistent. ![10_image_0.png](10_image_0.png) (a) FedAvg. (b) InclusiveFL. (c) FedIN. Figure 8: T-SNE visualization of features learned by different methods from stage 3 on CIFAR-10. We select data from the same class and utilize three models with different architectures (Client0: ResNet10, Client1: ResNet14, Client2: ResNet26). ## 5.4 Ablation Study We conduct an ablation study to evaluate the contributions of the key components in FedIN. Our ablation study includes the following methods: (i) FedAvg, (ii) FedIN w/o IN (FedIN without IN loss), (iii) FedIN w/o Prox (FedIN without Prox regularized term), (iv) FedIN w/o Opt (FedIN without the gradient alleviation (optimization)). Table 2b and Figure 9 illustrates the results of the ablation studies. Effects of the Gradient Alleviation and the Loss Function. In this experiment, we highlight that our solution is advantageous and effective in solving the gradient divergence problem. Figure 9 illustrates the results of considering the gradient divergence problem and ignoring this problem. The accuracy achieved by FedIN surpasses that of FedIN without gradient alleviation (FedIN w/o Opt), and the convergent speed of FedIN is also accelerated, as observed in Figure 9. Moreover, in FedIN, the loss function (Eq. 13) incorporates two additional terms: one is IN loss, and the other one is Prox regularized term. The convergent speed of FedIN w/o Prox is similar to FedIN w/o IN before 200 rounds as shown in Figure 9. From Figure 9, after 200 rounds, FedIN w/o Prox becomes unstable and its performance deteriorates during the subsequent training process. At last, FedIN w/o Prox only achieves the performance like FedAvg, as shown in Table 2b, hinting that the improvement from IN loss is eliminated at the end of the training process. Therefore, the inclusion of a regularized term becomes essential to maintain the effectiveness of IN loss throughout the training process. 0 100 200 300 400 500 Rounds 60.0 62.5 65.0 67.5 70.0 72.5 75.0 77.5 80.0 FedAvg FedIN w/o IN FedIN w/o Prox FedIN w/o Opt FedIN Accuracy Figure 9: Smoothed test accuracy for non-IID data of CIFAR-10 in the ablation study. Effects of Client Numbers, Batch Sizes, and Sample Numbers. To investigate the effects of varying client numbers, we conduct experiments on CIFAR-10, as presented in Table 5. Nc denotes the number of clients. Notably, FedIN outperforms the other methods across different numbers of clients. We also conduct analysis on different batch sizes and sample numbers on CIFAR-10 to verify the effects of these hyperparameters. As shown in Figure 6c, batch sizes 16, 32, and 64 are the best selections, but the batch sizes of 8 and 128 still outperform HeteroFL and InclusiveFL. Considering the communication overhead, a batch size of 16 is the optimal choice. From Figure 6d, it is clear that increasing the sample numbers has little impact on accuracy improvement. | Dataset | Noise Level (+FedIN) | | | | | | | | | |---------------|------------------------|-------|-------|-------|-------|-------|-------|-------|------| | w/o Noise | +0.1σ | +0.2σ | +0.5σ | +0.8σ | +1.0σ | +2.0σ | +3.0σ | +5.0σ | | | SVHN | 89.3 | 90.1 | 90.2 | 90.0 | 90.9 | 89.2 | 83.7 | 79.8 | 65.3 | | Fashion-MNIST | 90.3 | 89.8 | 90.4 | 90.6 | 90.7 | 90.4 | 88.7 | 84.9 | 69.4 | | CIFAR-10 | 75.9 | 75.2 | 76.4 | 77.2 | 77.3 | 76.2 | 73.5 | 70.0 | 30.6 | ![11_image_0.png](11_image_0.png) Table 6: Model accuracy with adding noise in FedIN to protect privacy. Figure 10: The results from the reconstruction experiments.Figure 10a demonstrates the reconstruction loss when IN inputs are with protection and without protection. Figure 10b illustrates the reconstructed images from protected IN inputs and original IN inputs. ## 5.5 Privacy Analysis We assume that the server reconstructs user images from IN inputs, originating from the outputs of the first layer, as images are more easily reconstructed from features extracted from shallow layers. The experimental results presented in Table 6 and Figure 10 provide a comprehensive analysis of the impact of how privacypreserving mechanisms, adding Gaussian noise, ensure user privacy. Accuracy with Different Noise Levels. Table 6 illustrates the impact of adding noise at varying levels in FedIN on the model accuracy. Moderate noise levels, especially at +0.8σ (the setting of FedIN (+Noise)), obtain superior performance compared to FedIN without adding noise. However, as noise levels exceeding +1.0σ, there is a noticeable decline in accuracy across all datasets, indicating a degradation in model performances due to excessive noises. These results suggest that appropriate noise levels can protect user privacy while also aiding in model generalization. Reconstruction Loss and Images. In Figure 10, we add 0.8σ to IN inputs for privacy protection. The reconstruction loss for models trained with privacy protection (0.8σ added) stabilizes at a higher value (0.6) compared to those trained without protection at last. Figure 10b displays the reconstructed images at the server using these two methods. Images reconstructed from IN inputs with protection exhibit a more noiselike quality compared to those from unprotected IN inputs. These results indicate that the server encounters challenges in reconstructing user images when IN inputs are protected. ## 6 Conclusions We propose a method called FedIN, which conducts local training based on the private dataset and IN training from the client features, requiring only one batch of features. Moreover, we formulate a convex optimization problem to tackle the gradient divergence problem. To protect user privacy, we further propose a simple yet effective method, adding Gaussian noise during the IN training process. We conduct extensive experiments on three public datasets with ten baselines to demonstrate the superior performances of FedIN. ## References Samiul Alam, Luyang Liu, Ming Yan, and Mi Zhang. Fedrolex: Model-heterogeneous federated learning with rolling sub-model extraction. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/ forum?id=OtxyysUdBE. Hankyul Baek, Won Joon Yun, Yunseok Kwak, Soyi Jung, Mingyue Ji, Mehdi Bennis, Jihong Park, and Joongheon Kim. Joint superposition coding and training for federated learning over multi-width neural networks. In *IEEE INFOCOM 2022-IEEE Conference on Computer Communications*, pp. 1729–1738. IEEE, 2022. Radu Ioan Bot, Sorin-Mihai Grad, and Gert Wanka. *Duality in vector optimization*. Springer Science & Business Media, 2009. Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. *Convex optimization*. Cambridge university press, 2004. Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, and Huzefa Rangwala. Fedat: A highperformance and communication-efficient federated learning system with asynchronous tiers. In *Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis*, SC '21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384421. Yun Hin Chan and Edith Ngai. Fedhe: Heterogeneous models and communication-efficient federated learning. IEEE International Confer- ence on Mobility, Sensing and Networking (MSN 2021), 2021. Yun-Hin Chan, Rui Zhou, Running Zhao, Zhihan JIANG, and Edith C. H. Ngai. Internal cross-layer gradients for extending homogeneity to heterogeneity in federated learning. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=Cc0qk6r4Nd. Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, and Qi Tian. Data-free learning of student networks. In *Proceedings of the IEEE/CVF International* Conference on Computer Vision, pp. 3514–3522, 2019a. Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 9640–9649, 2021. Yang Chen, Xiaoyan Sun, and Yaochu Jin. Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. *IEEE transactions on neural networks* and learning systems, 31(10):4229–4238, 2019b. Yujing Chen, Yue Ning, Martin Slawski, and Huzefa Rangwala. Asynchronous online federated learning for edge devices with non-iid data. In *2020 IEEE International Conference on Big Data (Big Data)*, pp. 15–24. IEEE, 2020. Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In *International Conference on Learning Representations*, 2021. Xiuwen Fang and Mang Ye. Robust federated learning with noisy and heterogeneous clients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10072–10081, 2022. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. Chaoyang He, Murali Annavaram, and Salman Avestimehr. Group knowledge transfer: Federated learning of large cnns at the edge. *Advances in Neural Information Processing Systems*, 33:14068–14080, 2020. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *NIPS Deep* Learning and Representation Learning Workshop, 2015. Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris, and Nicholas Lane. Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout. *Advances* in Neural Information Processing Systems, 34:12876–12889, 2021. Wenke Huang, Mang Ye, and Bo Du. Learn from others and be yourself in heterogeneous federated learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10143– 10153, 2022. Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, and Sanjeev Arora. Evaluating gradient inversion attacks and defenses in federated learning. *Advances in neural information processing systems*, 34:7232– 7241, 2021. Fatih Ilhan, Gong Su, and Ling Liu. Scalefl: Resource-adaptive federated learning with heterogeneous clients. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 24532–24541, 2023. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. 2021. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In *International* Conference on Machine Learning, pp. 5132–5143. PMLR, 2020. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In *International conference on machine learning*, pp. 3519–3529. PMLR, 2019. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Shiye Lei and Dacheng Tao. A comprehensive survey of dataset distillation. *IEEE Transactions on Pattern* Analysis and Machine Intelligence, 2023. Daliang Li and Junpu Wang. FedMD: Heterogenous federated learning via model distillation. *NeurIPS* Workshop on Federated Learning for Data Privacy and Confidentiality, 2019. Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, and Chaitali Chakrabarti. Ressfl: A resistance transfer framework for defending model inversion attack in split federated learning. In *Proceedings* of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10194–10202, 2022. Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti, and Michael Spranger. MocoSFL: enabling cross-client collaborative self-supervised learning. In *The Eleventh International Conference on Learning* Representations, 2023. URL https://openreview.net/forum?id=2QGJXyMNoPz. Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10713–10722, 2021a. Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020a. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Proceedings of the 3rd MLSys Conference*, 2020b. Yiying Li, Wei Zhou, Huaimin Wang, Haibo Mi, and Timothy M Hospedales. FedH2L: Federated learning with model and statistical heterogeneity. *arXiv preprint arXiv:2101.11296*, 2021b. Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. *Advances in Neural Information Processing Systems*, 33:2351–2363, 2020. Ping Liu, Xin Yu, and Joey Tianyi Zhou. Meta knowledge condensation for federated learning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/ forum?id=TDf-XFAwc79. Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, and Xing Xie. No one left behind: Inclusive federated learning over heterogeneous devices. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3398–3406, 2022. Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. *Advances in Neural Information Processing Systems*, 34:5972–5984, 2021. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In *Artificial intelligence and* statistics, pp. 1273–1282. PMLR, 2017. Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. Zero-shot knowledge distillation in deep networks. In *International Conference on Machine* Learning, pp. 4743–4751. PMLR, 2019. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. Xiaomin Ouyang, Zhiyuan Xie, Jiayu Zhou, Jianwei Huang, and Guoliang Xing. Clusterfl: a similarity-aware federated learning system for human activity recognition. In Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, pp. 54–66, 2021. Han Qin, Guimin Chen, Yuanhe Tian, and Yan Song. Improving federated learning for aspect-based sentiment analysis via topic memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural* Language Processing, pp. 3942–3954, 2021. Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? *Advances in Neural Information Processing Systems*, 34:12116–12128, 2021. Felix Sattler, Tim Korjakow, Roman Rischke, and Wojciech Samek. Fedaux: Leveraging unlabeled auxiliary data in federated learning. *IEEE Transactions on Neural Networks and Learning Systems*, 2021. Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, and Chao Wu. Federated mutual learning. *arXiv preprint arXiv:2006.16765*, 2020. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet Talwalkar. Federated multi-task learning. 31st Conference on Neural Information Processing Systems (NeurIPS), 2017. Xuan Song, Haoran Zhang, Rajendra Akerkar, Huawei Huang, Song Guo, Lei Zhong, Yusheng Ji, Andreas L. Opdahl, Hemant Purohit, André Skupin, Akshay Pottathil, and Aron Culotta. Big data and emergency management: Concepts, methodologies, and applications. *IEEE Transactions on Big Data*, 8(2):397–419, 2022. doi: 10.1109/TBDATA.2020.2972871. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of machine learning* research, 9(11), 2008. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. *Advances in neural information processing systems*, 33: 7611–7623, 2020. Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017. Cong Xie, Sanmi Koyejo, and Indranil Gupta. Asynchronous federated optimization. 12th Annual Workshop on Optimization for Machine Learning, 2020. Jiahui Yu and Thomas S Huang. Universally slimmable networks and improved training techniques. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1803–1811, 2019. Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. *arXiv preprint arXiv:1806.00582*, 2018. Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In *International Conference on Machine Learning*, pp. 12878–12889. PMLR, 2021.
# Confidence-Aware Denoised Fine-Tuning Of Off-The-Shelf Models For Certified Robustness Anonymous authors Paper under double-blind review ## Abstract The remarkable advances in deep learning have led to the emergence of many off-the-shelf classifiers, e.g., large pre-trained models. However, since they are typically trained on clean data, they remain vulnerable to adversarial attacks. Despite this vulnerability, their superior performance and transferability make off-the-shelf classifiers still valuable in practice, demanding further work to provide adversarial robustness for them in a *post-hoc* manner. A recently proposed method, *denoised smoothing*, leverages a denoiser model in front of the classifier to obtain *provable robustness* without additional training. However, the denoiser often creates hallucination, *i.e.*, images that have lost the semantics of their originally assigned class, leading to a drop in robustness. Furthermore, its noise-and-denoise procedure introduces a significant distribution shift from the original distribution, causing the denoised smoothing framework to achieve sub-optimal robustness. In this paper, we introduce *FineTuning with Confidence-Aware Denoised Image Selection (FT-CADIS)*, a novel fine-tuning scheme to enhance the certified robustness of off-the-shelf classifiers. FT-CADIS is inspired by the observation that the *confidence* of off-the-shelf classifiers can effectively identify hallucinated images during denoised smoothing. Based on this, we develop a confidence-aware training objective to handle such hallucinated images and improve the stability of fine-tuning from denoised images. In this way, the classifier can be fine-tuned using only images that are beneficial for adversarial robustness. We also find that such a fine-tuning can be done by merely updating a small fraction (i.e., 1%) of parameters of the classifier. Extensive experiments demonstrate that FT-CADIS has established the state-of-the-art certified robustness among denoised smoothing methods across all ℓ2-adversary radius in a variety of benchmarks, such as CIFAR-10 and ImageNet. ## 1 Introduction Despite the recent advancements in modern deep neural networks in various computer vision tasks (Radford et al., 2021; Rombach et al., 2022; Kirillov et al., 2023), they still suffer from the presence of adversarial examples (Szegedy et al., 2013) *i.e.*, a non-recognizable perturbation (for humans) of an image often fools the image classifiers to flip the output class (Goodfellow et al., 2014). Such adversarial examples can be artificially crafted with malicious intent, i.e., *adversarial attacks*, which pose a significant threat to the practical deployment of deep neural networks. To alleviate this issue, various approaches have been proposed to develop *robust* neural networks, such as adversarial training (Madry et al., 2018; Wang et al., 2019) and certified defenses (Wong & Kolter, 2018; Cohen et al., 2019; Li et al., 2023). Among these efforts, *randomized smoothing* (Lecuyer et al., 2019; Cohen et al., 2019) has gained much attention as a framework to build robust classifiers. This is due to its superior provable guarantee of the non-existence of adversarial examples, *i.e.*, certified robustness (Wong & Kolter, 2018; Xiao et al., 2018), under any perturbations confined in a ℓ2-norm. Specifically, it builds a *smoothed classifier* through taking a majority vote from a base classifier, *e.g.*, a neural network, under Gaussian perturbations of the given input image. However, it has been practically challenging to scale the model due to a critical drawback: the base classifier should be specifically trained on noise-augmented data (Lecuyer et al., 2019; Cohen et al., 2019). ![1_image_0.png](1_image_0.png) Figure 1: Overview of FT-CADIS framework. (1) Confidence-aware denoised image selection: for a given clean image, we create denoised images and find non-hallucinated images. (2) Fine-tuning with confidenceaware denoised image selection: we propose fine-tuning objectives to improve both generalizability and robustness of the smoothed classifier based on selected non-hallucinated images. Recently, Lee (2021); Carlini et al. (2023) have introduced *denoised smoothing* which utilizes pre-trained off-the-shelf classifiers within the randomized smoothing framework. Rather than directly predicting the label of a noise-augmented image, it first feeds the perturbed image into a denoiser, *e.g.*, a diffusion model, and then obtains the predicted label of the denoised image using off-the-shelf pre-trained classifiers that have been trained on clean images. Intriguingly, denoised smoothing with recently developed diffusion models and pre-trained classifiers, *e.g.*, guided diffusion (Dhariwal & Nichol, 2021) and BEiT (Bao et al., 2022), shows its superior scalability with comparable certified robustness in ℓ2-adversary to the current state-of-the-art methods (Horváth et al., 2022b; Jeong et al., 2023). On the other hand, denoised smoothing also exhibits clear limitations. Firstly, denoised images do not follow the standard pre-training distribution of the classifiers, which results in a limited robustness of the denoised smoothing framework. Secondly, fine-tuning the pre-trained classifiers with the denoised images also yields sub-optimal classifiers due to the *hallucinated* images (Carlini et al., 2023), *i.e.*, the diffusion denoiser tends to generate image semantics from an incorrect class rather than the originally assigned class (see Figure 2a). Consequently, denoised smoothing with such classifiers leads to a drop of the certified accuracy, especially in the large ℓ2-radius regime, *i.e.*, high Gaussian variance (see Table 1b). Contribution. In this paper, we aim to address the aforementioned issues of denoised smoothing by designing a fine-tuning objective for off-the-shelf classifiers that distinguishes between *hallucinated* images, i.e., images that have lost the original semantics after denoising, and *non-hallucinated* images, *i.e.*, images that maintain the original semantics after denoising. To this end, we propose to use the "likelihood of denoised images", i.e., *confidence*, of the off-the-shelf classifier with respect to the originally assigned class as a proxy for determining whether an image is hallucinated and then fine-tune the classifier with nonhallucinated images only. Consequently, we have developed a confidence-aware training objective based on the likelihood of denoised images to effectively discriminate hallucinated images (see Figure 1). Specifically, we propose a scalable and practical framework for fine-tuning off-the-shelf classifiers, coined *FineTuning with Confidence-Aware Denoised Image Selection* (FT-CADIS), which improves certified robustness under denoised smoothing. In order to achieve this, two new losses are defined: the Confidence-aware selective cross-entropy loss and the *Confidence-aware masked adversarial loss*. Two losses are selectively applied only to non-hallucinated images, thereby ensuring that the overall training process avoids overoptimizing hallucinated samples, i.e., samples that are harmful for generalization, while maximizing the robustness of smoothed classifiers. Our particular loss design is motivated by Jeong et al. (2023), who were the first to investigate training objectives for randomized smoothing depending on sample-wise confidence information. We demonstrate that our novel definition of confidence in randomized smoothing, specifically through the ratio of non-hallucinated images from a denoiser, can dramatically stabilize the confidence-aware training, overcoming its previous limitation of severe accuracy degradation (e.g., see Table 1b). In our experiments, we have validated the effectiveness of our proposed method on standard benchmarks for certified ℓ2-robustness, *i.e.*, CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). Our results show that the proposed method significantly outperforms existing state-of-the-art denoised smoothing methods in certified robustness across all ℓ2-norm setups, while updating only 1% of the parameters of offthe-shelf classifiers on ImageNet. In particular, FT-CADIS significantly improves the certified robustness in the high Gaussian variance regime, *i.e.*, high certified radius. For instance, FT-CADIS outperforms the best performing baseline, *i.e.*, diffusion denoised (Carlini et al., 2023), by 29.5% → 39.4% at ε = 2.0 for ImageNet experiments. ## 2 Preliminaries Adversarial robustness and randomized smoothing. We assume an labeled dataset D = {(xi, yi)} n i=1 sampled from P, where xi *∈ X ⊂* R d and yi ∈ Y := {1*, ..., K*}, and aim to solve the problem of correctly classifying a given input x into one of K classes. Consider a classifier f : *X → Y* modeled by f(x) := arg maxk∈Y Fk(x) with F : X → ∆K−1, where ∆K−1is the probability simplex in R K. In this paper, we model F by a neural network whose last layer is the softmax function. Adversarial robustness refers to the *worst-case* behavior of f; given a sample x ∈ X and the corresponding label y ∈ Y, it requires f to produce a consistent output under any perturbation δ ∈ R d which preserves the original semantic of x. Here, δ is commonly assumed to be restricted in some ℓ2-norm in R d, *i.e.*, ∥δ∥2 ≤ ε for some positive ε. For example, Moosavi-Dezfooli et al. (2016); Carlini et al. (2019) quantify adversarial robustness as *average minimum distance* of the perturbations that cause f to flip the originally assigned label y, defined as: $$R(f;P):=\mathbb{E}_{(\mathbf{x},y)\sim P}\left[\operatorname*{min}_{f(\mathbf{x}^{\prime})\neq y}\|\mathbf{x}^{\prime}-\mathbf{x}\|_{2}\right]\,.$$ $$\left(1\right)$$ The primary obstacle in achieving adversarial robustness lies in the difficulty of evaluating and optimizing for it, which is typically infeasible because f is usually modeled by a complex, high-dimensional neural network. Randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019) addresses this challenge by constructing a new robust classifier g from f, instead of directly modeling robustness with f. In particular, Cohen et al. (2019) models g by selecting the *most provable* output of f under Gaussian perturbation N (0, σ2I), defined as: $$g(\mathbf{x}):=\operatorname*{arg\,max}_{c\in\mathcal{Y}}\mathbb{P}_{\delta\sim{\mathcal{N}}(0,\sigma^{2}I)}[f(\mathbf{x}+\delta)=c]\ .$$ Pδ∼N(0,σ2I)[f(x + δ) = c] . (2) Intriguingly, g can *guarantee* the adversarial robustness around (x, y) ∼ P, i.e., R(g; x, y) can be lowerbounded by the certified radius R(g, x, y), where Cohen et al. (2019) have proven that such a lower-bound of certified radius is tight for ℓ2-adversary: $$R(g;{\bf x},y)\geq\sigma\cdot\Phi^{-1}(p_{g}({\bf x},y))=:\underline{R}(g,{\bf x},y),\quad\mbox{where}\quad p_{g}({\bf x},y):=\mathbb{P}_{\delta}[f({\bf x}+\delta)=y],\tag{3}$$ provided that g(x) = y, *i.e.*, y is the most provable output of f under Gaussian perturbation. Otherwise, we have R(g; x, y) := 0. Here, Φ is the cumulative distribution function of the standard Gaussian distribution. We remark that higher pg(x, y), *i.e.*, average accuracy of f(x + δ), results in higher robustness. $$\left(2\right)$$ Denoised smoothing. In randomized smoothing, it is crucial that f consistently classifies perturbed images correctly. Salman et al. (2020) have proposed to define f based on concatenating a Gaussian denoiser, denoted as denoise(·), with any off-the-shelf classifier fclf, *i.e.*, trained with non-perturbed images, a method referred to as *denoised smoothing*: $$f(\mathbf{x}+\delta):=f_{\mathrm{clf}}(\texttt{denoise}(\mathbf{x}+\delta))~.$$ $$\left(4\right)$$ f(x + δ) := fclf(denoise(x + δ)) . (4) Denoised smoothing provides a more scalable framework for randomized smoothing. First, we only need off-the-shelf pre-trained classifiers (rather than noise-specialized classifiers), which is widely investigated and developed (Dosovitskiy et al., 2020; Bao et al., 2022; Radford et al., 2021). Second, recent advancements in *diffusion models* (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021) have produced appropriate denoisers for this approach. Previous efforts (Lee, 2021; Carlini et al., 2023) have further demonstrated the potential of denoised smoothing in achieving the state-of-the-art certified robustness when combined with recently advanced pre-trained classifiers and diffusion models. Parameter-efficient fine-tuning. LoRA (Hu et al., 2022) is a widely-used parameter-efficient fine-tuning method that originated from language models. It applies a low-rank constraint to approximate the update matrix at each layer of the Transformer's self-attention layer, significantly reducing the number of trainable parameters for downstream tasks. During fine-tuning, all the parameters of the original model are frozen, and the update of the layer is constrained by representing them with a low-rank decomposition. A forward pass h = W0x can be modified as follows: $$h=W_{0}x+\Delta W x=W_{0}x+B A x,$$ $$\mathbf{\Sigma}$$ h = W0x + ∆Wx = W0x + BAx, (5) where x and h denote the input and output features of each layer, W0 ∈ R d×krepresents the original weights of the base model F, while ∆W denotes the weight change, composed of the inserted low-rank matrices B ∈ R d×r and A ∈ R r×k. ## 3 Method In Section 3.1, we present a description of our problem and the main idea. In Section 3.2, we provide descriptions of our selection strategy for non-hallucinated samples. In Section 3.3, we outline our overall fine-tuning framework. ## 3.1 Problem Description And Overview In this paper, we investigate how to effectively elaborate an off-the-shelf classifier fclf within a denoised smoothing scheme. We remark that the robustness of the smoothed classifier g from denoised smoothing of fclf depends directly on the accuracy of the *denoised* images (see Eq. (3) and (4)). Therefore, one may expect that improving fclf for clean images is sufficient to improve the generalizability and robustness of g (Carlini et al., 2023), assuming that the denoised images follow the pre-training distribution with clean images (Salman et al., 2020), i.e., the denoised images preserve the semantics of the original clean images. However, this assumption is not true; the noise-and-denoise procedure of denoised smoothing often suffers from distribution shifts and *hallucination* issues so that the resulting denoised images have completely different semantics from the original labels (see Figure 2a). To alleviate these issues, we aim to develop a fine-tuning scheme for fclf to properly handle denoised samples. One straightforward strategy would be to fine-tune fclf by minimizing the cross-entropy loss with all denoised images (Carlini et al., 2023): $${\mathcal{L}}^{\mathrm{CE}}:={\frac{1}{M}}\sum_{i=1}^{M}\,\mathbb{CE}\Big(F_{\mathrm{clf}}\big(\mathsf{denoise}(\mathbf{x}+\delta_{i})\big),y\Big),\ \delta_{i}\sim{\mathcal{N}}(0,\sigma^{2}I),$$ where CE denotes the cross-entropy loss, M denotes the number of noises, and Fclf denotes the pre-trained off-the-shelf neural network. Here, we note that this approach treats both *non-hallucinated* and *hallucinated* $$({\mathfrak{f}}{\mathfrak{o}})$$ ![4_image_0.png](4_image_0.png) Figure 2: Examples of denoised images for FT-CADIS on ImageNet at σ = 1.00. We visualize (a) hallucinated images and (b) non-hallucinated images after the noise-and-denoise procedure. The red/green box indicates the areas where the original semantic of the image is corrupted/preserved, respectively. samples equally among the denoised samples. However, fine-tuning fclf with hallucinated samples, i.e., denoise(x + δi) does not resemble the class y, is harmful for the generalizability since Eq. (6) forces the classifier fclf to *remember* non-y-like hallucinated images as y. Our contribution lies in resolving this issue by introducing (1) a *confidence-aware* selection strategy to distinguish between hallucinated and nonhallucinated images and (2) a fine-tuning strategy that excludes hallucinated samples from the optimization process. ## 3.2 Confidence-Aware Denoised Image Selection We propose a confidence-aware selection strategy to identify hallucinated images and non-hallucinated images within a set of denoised images. Consider the denoised images Dx = {denoise(x+δ1)*, ...,* denoise(x+δM)} for a given clean image x and the number of noises M. We aim to find non-hallucinated images within Dx that an off-the-shelf classifier fclf classifies as the assigned label y, i.e., Fclf shows the highest confidence for y among all possible classes. Conversely, if fclf classifies denoised images as a label other than y, we define such denoised images as hallucinated images, i.e., samples that no longer preserve the core semantic of y. Accordingly, the set of non-hallucinated images Dx,nh ∈ Dx is defined as follows: $${\mathcal{D}}_{\mathbf{x},\mathbf{nh}}=\{{\hat{\mathbf{x}}}|f_{\mathrm{clf}}(\texttt{denoise}(\mathbf{x}+\delta_{i}))=y,\;i\in[1,...,M]\}\ .$$ Dx,nh = {ˆx|fclf(denoise(x + δi)) = y, i ∈ [1*, ..., M*]} . (7) We remark that the off-the-shelf classifier fclf is pre-trained with clean images, rather than denoised images. Thus, at the beginning of the fine-tuning, fclf often fails to correctly assign Dx,nh due to the distribution shift from clean images to denoised images. To mitigate this, we iteratively update fclf to renew Dx,nh during fine-tuning for a more accurate assignment of non-hallucinated images (see Algorithm 1 for details). ## 3.3 Fine-Tuning With Confidence-Aware Denoised Image Selection Our main goal is to improve both the generalizability and the robustness of the smoothed classifier g, through the fine-tuning of the off-the-shelf classifier fclf based on our confidence-aware denoised image selection in Section 3.2. To this end, we propose two fine-tuning objectives for an off-the-shelf classifier fclf, viz., Confidence-aware selective cross-entropy loss and Confidence-aware masked adversarial loss, to maximize the generalizability and robustness of the corresponding smoothed classifier g, respectively. Confidence-aware selective cross-entropy loss. We first aim to improve the generalizability of the smoothed classifier g, i.e., the average certified accuracy of g. Specifically, we propose to optimize fclf with non-hallucinated images Dx,nh: $$\left(7\right)$$ $${\mathcal{L}}^{\mathrm{SCE}}:={\frac{1}{M}}\sum_{{\hat{\mathbf{x}}}\in{\mathcal{D}}_{\mathbf{x},\,\mathbf{n h}}}\mathbb{C E}{\Big(}F_{\mathrm{clf}}\left({\hat{\mathbf{x}}}\right),y{\Big)}\ .$$ $$({\mathfrak{s}})$$ CEFclfˆx), y. (8) In other words, we optimize our classifier with the non-hallucinated images, while the hallucinated images are excluded from our training objective. This prevents the drop in accuracy of fclf caused by being forced to remember wrong semantics not relevant to the assigned class y. It also allows for fclf to properly learn the distribution of *denoised* images, which is largely different from its pre-training distribution with clean images. Here, we find that training with the objective in Eq. (8) slows down the overall training procedure since Dx,nh = ∅ sometimes occurs at the start of training. This is mainly due to the distribution shift from the pre-training clean image distribution to the denoised images, i.e., fclf fails to classify denoised images due to insufficient exposure to denoised images. To resolve this *cold-start* problem, we add the most-y-like denoised image, i.e., a denoised image with the largest logit for y, to Dx,nh when it is empty. Confidence-aware masked adversarial loss. We also propose a simple strategy to further improve the robustness of the smoothed classifier g, i.e., the certified accuracy of g at large ℓ2-norm radius. Specifically, we apply the concept of *adversarial training* (Madry et al., 2018; Zhang et al., 2019a; Wang et al., 2019; Salman et al., 2019; Jeong et al., 2023) to our denoised smoothing setup; we carefully create more challenging images, and then additionally learn these images during fine-tuning. Here, the main challenge is to ensure that the adversarial images *preserve* the core semantic of the original image, thereby maintaining generalizability while improving robustness. However, as illustrated in Figure 2, some clean images are prone to be hallucinated after the noise-and-denoise procedure. Therefore, adversarial training in denoised smoothing should be carefully designed to avoid incorporating hallucinated images. To this end, we propose to create adversarial examples based only on images that are unlikely to be hallucinated, i.e., clean images x with Dx,nh = Dx. Specifically, we apply our adversarial loss based on a simple condition of "Dx,nh = M": $${\mathcal{L}}^{\mathrm{Mid}\tau}:={\mathbbm{1}}[|{\mathcal{D}}_{\mathbf{x},\,\mathbf{n}\mathbf{h}}|=M]\cdot\operatorname*{max}_{i}\operatorname*{max}_{\|\eta_{i}^{*}-\eta_{i}\|_{2}\leq\epsilon}\mathrm{KL}(F_{\mathrm{cl}t}(\mathbf{x}+\eta_{i}^{*}),{\hat{y}}),$$ where KL(·, ·) indicates the Kullback-Libler divergence and ηi:= denoise(x + δi) − x is the difference between each denoised image and the original clean image. For the adversarial target yˆ, we adapt the *consistency* target from the previous robust training method (Jeong et al., 2023) to our denoised smoothing setup by letting the target be the average likelihood of the denoised images, i.e., yˆ := 1M PM i=1 SoftmaxFclfdenoise(x + δi). Overall training objective. Building on our proposed training objectives L SCE and L MAdv, we now present the complete objective for our *Fine-Tuning with Confidence-Aware Denoised Image Selection* (FT-CADIS). Based on our confidence-aware denoised image selection scheme, Confidence-aware selective cross-entropy loss and Confidence-aware masked adversarial loss are applied only to non-hallucinated images Dx,nh to improve both generalizability and robustness of the smoothed classifier. The overall loss function is as follows: $$({\mathfrak{g}})$$ $${\mathcal{L}}^{\mathrm{FT-cADIS}}:={\mathcal{L}}^{\mathrm{SCE}}+\lambda\cdot{\mathcal{L}}^{\mathrm{MAdv}},$$ SCE + λ · LMAdv, (10) where λ > 0 is a hyperparameter, which controls the relative trade-off between the generalizability and the robustness (see Section 4.3). The detailed algorithm for computing our L FT-CADIS is outlined in Algorithm 1. Comparision with CAT-RS. Our FT-CADIS has drawn motivation from previous confidence-aware training strategies, e.g., CAT-RS (Jeong et al., 2023). The key difference is that FT-CADIS uses the confidence of denoised images based on the pre-trained off-the-shelf classifier while CAT-RS learns their confidence of Gaussian-augmented images during the training of the classifier from scratch. In particular, our method takes advantage of off-the-shelf classifiers which are already capable of providing reasonable confidence for identifying non-hallucinated images. Therefore, we can simply use the non-hallucinated images identified by $$(10)$$ the off-the-shelf classifiers in our optimization objective. On the other hand, CAT-RS additionally assumes a distribution of semantic-preserving noised sample counts based on the confidence, i.e., average accuracy, of the models currently being trained from scratch. Therefore, the overall confidence remains low especially for complex datasets, resulting in a sub-optimal accuracy of the smoothed classifier (see Table 1b). Our FT-CADIS successfully mitigates this issue based on our carefully designed confidence-based approach utilizing off-the-shelf classifiers, achieving the state-of-the-art robustness even in complex datasets such as ImageNet. ## 4 Experiments We verify the effectiveness of our proposed training scheme for off-the-shelf classifiers by conducting comprehensive experiments. In Section 4.1, we explain our experimental setups, such as training configurations and evaluation metrics. In Section 4.2, we present the main results on CIFAR-10 and ImageNet. In Section 4.3, we conduct an ablation study to analysis the component-wise effect of our training objective. ## 4.1 Experimental Setup Baselines. We mainly consider the following recently proposed methods based on *denoised smoothing* (Salman et al., 2020; Lee, 2021; Carlini et al., 2023; Jeong & Shin, 2024) framework. We additionally compare with other robust training methods for certified robustness based on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019; Jeong & Shin, 2020; Zhai et al., 2020; Horváth et al., 2022a; Yang et al., 2022; Jeong et al., 2021; Horváth et al., 2022b; Jeong et al., 2023). Following the previous works, we consider three different noise levels, σ ∈ {0.25, 0.50, 1.00}, to obtain smoothed classifiers. CIFAR-10 configuration. We follow the same classifier and the same denoiser employed by Carlini et al. (2023). Specifically, we use the 86M-parameter ViT-B/16 classifier (Dosovitskiy et al., 2020) which is pretrained and fine-tuned on ImageNet-21k (Deng et al., 2009) and CIFAR-10 (Krizhevsky, 2009), respectively. We use the 50M-parameter 32x32 diffusion model from Nichol & Dhariwal (2021) as the denoiser. We provide more detailed setups in Appendix B.2. ImageNet configuration. We use the 87M-parameter ViT-B/16 classifier which is pre-trained on LAION2B image-text pairs (Schuhmann et al., 2022) using OpenCLIP (Cherti et al., 2023) and fine-tuned on ImageNet-12k and then ImageNet-1k. Compared to the previous state-of-the-art method, diffusion denoised (Carlini et al., 2023) based on BEiT-large model (Bao et al., 2022) with 305M parameters, we use a much smaller off-the-shelf classifier (30% parameters). We also adopt parameter-efficient fine-tuning with LoRA (Hu et al., 2022), i.e., the number of parameters updated through fine-tuning is only 1% of the total parameters. We use the same denoiser employed by Carlini et al. (2023), i.e., 552M-parameter 256x256 unconditional model from Dhariwal & Nichol (2021). We provide more detailed setups in Appendix B.2. Evaluation metrics. We follow the standard metric in the literature for assessing the certified robustness of smoothed classifiers : the *approximate certified test accuracy* at r, which is the fraction of the test set that Certify (Cohen et al., 2019), a practical Monte-Carlo-based certification procedure, classifies correctly with a radius larger than r without abstaining. Throughout our experiments, following Carlini et al. (2023), we use N = 100, 000 noise samples to certify robustness for entire CIFAR-10 test set and N = 10, 000 samples for 1,000 randomly selected images from the ImageNet validation set (note that RS methods in Table 1b use N = 100, 000). We use the hyperparameters from Cohen et al. (2019), specifically n0 = 100 and α = 0.001. In ablation study, we additionally consider another standard metric, the *average cerified radius* (ACR) (Zhai et al., 2020): the average of cerified radii on the test set D*test* while assigning incorrect samples as 0: viz., ACR := 1 |D*test*| P(x,y)∈D*test* [CR(*f, σ,* x) · 1g(x)=y], where CR(·) denotes the lower bound of certified radius Certify returns. ## 4.2 Main Experiments Results on CIFAR-10. In Table 1a, we compare the performance of the baselines and FT-CADIS on CIFAR-10. Overall, FT-CADIS outperforms all existing state-of-the-art denoised smoothing (denoted by Table 1: CIFAR-10 and ImageNet certified top-1 accuracy. We report the best certified accuracy among the models trained with σ ∈ {0.25, 0.50, 1.00}, followed by the clean accuracy of the corresponding model in parentheses. RS denotes methods based on randomized smoothing without a denoising procedure, and DS denotes methods based on denoised smoothing. indicates training the classifier with Gaussian-augmented images, indicates direct use of the off-the-shelf classifier without fine-tuning, indicates fine-tuning of the denoiser, indicates fine-tuning the off-the-shelf classifier, and indicates parameter-efficient fine-tuning of the off-the-shelf classifier (Hu et al., 2022). The highest certified accuracy in each column is bold-faced. † indicates that extra data is used in the pre-training. (a) CIFAR-10 | (a) CIFAR-10 | | | | | | | | |---------------------------------------------|---------------------------------------------|---------------|-----------------------------|------------|-----------------------|------------|------------| | Category | Method | Off-the-shelf | Certified Accuracy at ε (%) | | | | | | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | | | RS | PixelDP (Lecuyer et al., 2019) | (71.0)22.0 | (44.0)2.0 | - | - | - | - | | Gaussian (Cohen et al., 2019) | (77.0)61.0 | (66.0)43.0 | (66.0)32.0 | (66.0)22.0 | (47.0)17.0 | (47.0)14.0 | | | SmoothAdv (Salman et al., 2019) | (85.0)73.0 | (76.0)58.0 | (75.0)48.0 | (57.0)38.0 | (53.0)33.0 | (53.0)29.0 | | | Consistency (Jeong & Shin, 2020) | (77.8)68.8 | (75.8)58.1 | (72.9)48.5 | (52.3)37.8 | (52.3)33.9 | (52.3)29.9 | | | MACER (Zhai et al., 2020) | (81.0)71.0 | (81.0)59.0 | (66.0)46.0 | (66.0)38.0 | (66.0)29.0 | (45.0)25.0 | | | Boosting (Horváth et al., 2022a) | (83.4)70.6 | (76.8)60.4 | (71.6)52.4 | (52.4)38.8 | (52.4)34.4 | (52.4)30.4 | | | DRT (Yang et al., 2022) | (81.5)70.4 | (72.6)60.2 | (71.9)50.5 | (56.1)39.8 | (56.4)36.0 (56.4)30.4 | | | | SmoothMix (Jeong et al., 2021) | (77.1)67.9 | (77.1)57.9 | (74.2)47.7 | (61.8)37.2 | (61.8)31.7 | (61.8)25.7 | | | ACES (Horváth et al., 2022b) | (77.6)69.0 | (73.4)57.2 | (73.4)47.0 | (57.0)37.8 | (57.0)32.2 | (57.0)27.8 | | | CAT-RS (Jeong et al., 2023) | (76.3)68.1 | (76.3)58.8 | (76.3)48.2 | (62.3)38.5 | (62.3)32.7 | (62.3)27.1 | | | DS | Denoised (Salman et al., 2020) | (72.0)56.0 | (62.0)41.0 | (62.0)28.0 | (44.0)19.0 | (42.0)16.0 | (44.0)13.0 | | Score-based Denoised (Lee, 2021) | 60.0 | 42.0 | 28.0 | 19.0 | 11.0 | 6.0 | | | Diffusion Denoised† (Carlini et al., 2023) | (88.1)76.7 | (88.1)63.0 | (88.1)45.3 | (77.0)32.1 | - | - | | | Diffusion Denoised†1 (Carlini et al., 2023) | (91.2)79.3 | (91.2)65.5 | (91.2)48.7 | (81.5)35.5 | - | - | | | Multi-scale Denoised† (Jeong & Shin, 2024) | - | (90.3)61.9 | - | (85.1)32.9 | - | (79.6)16.2 | | | FT-CADIS (Ours)† | (88.7)80.3 (88.7)68.4 (88.7)54.5 (74.9)39.9 | (74.9)31.6 | (74.9)23.5 | | | | | | (b) ImageNet | | | | | | | | | Category | Method | Off-the-shelf | Certified Accuracy at ε (%) | | | | | | 0.50 | 1.00 | 1.50 | 2.00 | 2.50 | | | | | RS | PixelDP (Lecuyer et al., 2019) | (33.0)16.0 | - | - | - | - | | | Gaussian (Cohen et al., 2019) | (67.0)49.0 | (57.0)37.0 | (57.0)29.0 | (44.0)19.0 | (44.0)15.0 | | | | SmoothAdv (Salman et al., 2019) | (65.0)56.0 | (55.0)45.0 | (55.0)38.0 | (42.0)28.0 | (42.0)26.0 | | | | Consistency (Jeong & Shin, 2020) | (55.0)50.0 | (55.0)44.0 | (55.0)34.0 | (41.0)24.0 | (41.0)21.0 | | | | MACER (Zhai et al., 2020) | (68.0)57.0 | (64.0)43.0 | (64.0)31.0 | (48.0)25.0 | (48.0)18.0 | | | | Boosting (Horváth et al., 2022a) | (68.0)57.0 | (57.0)44.6 | (57.0)38.4 | (44.6)28.6 | (38.6)24.6 | | | | DRT (Yang et al., 2022) | (52.2)46.8 | (49.8)44.4 | (49.8)39.8 | (49.8)30.4 | (49.8)29.0 | | | | SmoothMix (Jeong et al., 2021) | (55.0)50.0 | (55.0)43.0 | (55.0)38.0 | (40.0)26.0 | (40.0)24.0 | | | | ACES (Horváth et al., 2022b) | (63.2)54.0 | (55.4)42.2 | (55.0)35.6 | (39.2)25.6 | (50.6)22.0 | | | | CAT-RS (Jeong et al., 2023) | (44.0)38.0 | (44.0)35.0 | (44.0)31.0 | (44.0)27.0 | (44.0)24.0 | | | | DS | Denoised (Salman et al., 2020) | (60.0)33.0 | (38.0)14.0 | (38.0)6.0 | - | - | | | Score-based Denoised (Lee, 2021) | 41.0 | 24.0 | 11.0 | - | - | | | | Diffusion Denoised†(Carlini et al., 2023) | (82.8)71.1 | (77.1)54.3 | (77.1)38.1 | (60.0)29.5 | - | | | | Multi-scale Denoised† (Jeong & Shin, 2024) | (76.6)54.6 | (76.6)39.8 | (76.6)23.0 | (69.0)14.6 | - | | | | FT-CADIS (Ours)† | (81.1)71.9 | (77.0)60.1 | (77.0)45.8 | (66.2)39.4 | (66.2)30.7 | | | DS) approaches in every radii. For example, our method improves the best-performing denoised smoothing method (Carlini et al., 2023) by 35.5% → 39.9% at ε = 1.00. FT-CADIS also outperforms every randomzied smoothing techinque up to a radius of ε ≤ 1.00. Even though our method slightly underperforms at higher radii in terms of certified accuracy, we note that FT-CADIS is the only denoised smoothing method which achieves a reasonable robustness at ε > 1.00. This means that our FT-CADIS effectively alleviates the distribution shift and hallucination issues observed in previous methods based on denoised smoothing (Carlini et al., 2023). We provide the detailed results in Appendix B.5. Results on ImageNet. In Table 1b, we compare the performance of the baselines and FT-CADIS on ImageNet, which is a far more complex dataset than CIFAR-10. In summary, FT-CADIS outperforms all 1Further fine-tune the classifier on denoised images from CIFAR-10. Table 2: Comparison of the architectures and parameters between the previous state-of-the-art certified defense methods and FT-CADIS on ImageNet. | | Diffusion Denoised | Multi-scale Denoised | | | |---------------------------|-------------------------------|--------------------------------------------|----------------------------|-------------------| | Method | CAT-RS | | | | | (Jeong et al., 2023) | (Carlini et al., 2023) | (Jeong & Shin, 2024) | FT-CADIS (Ours) | | | Denoiser | - | Guided Diffusion | Guided Diffusion | Guided Diffusion | | (Dhariwal & Nichol, 2021) | (Dhariwal & Nichol, 2021) | (Dhariwal & Nichol, 2021) ViT-B/16 (+LoRA) | | | | Classifier | ResNet-50 | BEiT-large | ViT-B/16 | | | (He et al., 2016) | (Bao et al., 2022) | (Dosovitskiy et al., 2020) | (Dosovitskiy et al., 2020) | | | Parameters | Denoiser : - | Denoiser : 552M | Denoiser : 552M | Denoiser : 552M | | Classifier : 26M | Classifier : 305M | Classifier : 87M | Classifier : 87M | | | | Denoiser : - | Denoiser : 552M | Denoiser : - | | | Trainable | Denoiser : - Classifier : 26M | Classifier : - | Classifier : - | Classifier : 0.9M | | Certified Accuracy at ε (%) | | | | | | | | | |------------------------------------------------|-------|------|------|------|------|------|------|------| | Fine-tuning objective design | ACR | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | L SCE + λ · LMAdv (L FT-CADIS; Ours) | 0.784 | 48.1 | 43.5 | 40.6 | 36.9 | 32.5 | 28.6 | 23.7 | | (w/o) Non-hallucinated condition of L SCE | 0.726 | 52.4 | 45.6 | 40.4 | 35.9 | 31.2 | 26.1 | 21.9 | | (w/o) Mask of L MAdv | 0.374 | 11.2 | 10.9 | 10.4 | 10.2 | 10.2 | 10.2 | 10.2 | | Cross-entropy loss L CE (Carlini et al., 2023) | 0.633 | 54.4 | 45.8 | 39.3 | 33.2 | 28.1 | 22.4 | 17.3 | Table 3: Comparison of ACR and certified accuracy for ablations of L FT-CADIS on CIFAR-10 with σ = 1.00. existing state-of-the-art methods in every radii. In particular, our method surpasses the certified accuracy of diffusion denoised (Carlini et al., 2023) by 9.9% at ε = 2.00. In Table 2, we also compare the computational cost of each method. Our method even shows remarkable efficiency, i.e., we only update 0.9M parameters, which is 3% of Jeong et al. (2023) and 0.2% of Jeong & Shin (2024). The overall results highlight the scalability of FT-CADIS, indicating its effectiveness in practical applications with only a small fine-tuning cost. We provide the detailed results in Appendix B.5. ## 4.3 Ablation Study In this section, we conduct an ablation study to further analyze the design of our proposed losses, the impact of updating the set of non-hallucinated images, and the component-wise effectiveness of our method. Unless otherwise stated, we use the same training configurations from the main experiments on CIFAR-10 (see Table 6a), except that the warmup and training epochs are reduced to 2 and 20, respectively. We report the test results based on a randomly sampled 1,000 images from the CIFAR-10 test set. Effect of overall loss design. Table 3 presents a comparison of variants of L FT-CADIS, including: (a) removing the non-hallucinated condition of L SCE in Eq. (8), (b) removing the masking condition of L MAdv in Eq. (9), and (c) training with cross-entropy loss L CE only. In summary, we observe that (a) using only nonhallucinated images for L SCE achieves better ACR and effectively balances between accuracy and robustness. Additionally, we find that (b) the mask "Dx,nh = M" in L MAdv is crucial for stable training, as it prevents the optimization of adversarial images that have lost the semantic of the original image; and (c) FT-CADIS demonstrates higher robustness and ACR by combining Confidence-aware selective cross-entropy loss and Confidence-aware masked adversarial loss. Effect of Confidence-aware masked adversarial loss design. We further investigate the components of Confidence-aware masked adversarial loss. Table 5 presents three variants of L MAdv in Eq. (9): (a) replacing the consistent target yˆ with the assigned label y, (b) substituting the outer maximization with an averagecase, and (c) combining both (a) and (b). Overall, we find that our proposed L MAdv demonstrates superior ACR compared to the variants, achieving the highest certified robustness while maintaining satisfactory | | | Certified Accuracy at ε (%) | | | | | | | | |----------|-----------------|-------------------------------|------|------|------|------|------|------|------| | Noise | Update of Dx,nh | ACR | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | σ = 0.25 | ✗ | 0.632 | 91.1 | 80.4 | 66.7 | 49.0 | 0.0 | 0.0 | 0.0 | | | ✓ | 0.642 | 87.9 | 78.7 | 68.0 | 54.0 | 0.0 | 0.0 | 0.0 | | σ = 0.50 | ✗ | 0.765 | 75.4 | 66.9 | 56.0 | 46.0 | 36.2 | 28.7 | 21.6 | | | ✓ | 0.806 | 72.2 | 64.1 | 57.2 | 48.1 | 40.3 | 34.1 | 25.9 | | σ = 1.00 | ✗ | 0.626 | 53.4 | 45.9 | 38.2 | 32.9 | 27.3 | 22.5 | 16.4 | | | ✓ | 0.783 | 48.1 | 43.5 | 40.6 | 36.9 | 32.4 | 28.5 | 23.8 | Table 4: Comparison of ACR and certified accuracy for the ablation of the update of Dx,nh on CIFAR-10. ![9_image_0.png](9_image_0.png) Figure 3: Comparison of certified accuracy for components in FT-CADIS, (a) λ and (b) M, on CIFAR-10. We plot the results at σ = 0.50. We provide detailed results in Appendix C. clean accuracy. It shows that both design choices, i.e., maximizing loss over adversarial images and using soft-labeled adversarial targets, are particularly effective. Effect of iterative update of Dx,nh. In FT-CADIS, we iteratively update the set of non-hallucinated images i.e., denoise(x + δ) ∈ Dx,nh to deal with the distribution shift from the pre-training distribution (clean images) to fine-tuning distribution (denoised images). Table 4 shows the effect of the iterative update strategy on varying σ ∈ {0.25, 0.50, 1.00}. For all noise levels, the iterative update strategy shows higher ACR with higher robustness. We find that the fine-tuning classifier increases the ratio of applying L MAdv (see Figure 4), i.e., fclf gradually classifies all the denoised images of x correctly, thereby focusing on maximizing robustness and achieving a better trade-off between accuracy and robustness (Zhang et al., 2019a). Effect of λ. In the fine-tuning objective of FT-CADIS in Eq. (10), λ determines the ratio between L MAdv and L SCE. Figure 3a illustrates how λ affects the certified accuracy across different radii, with λ varying in {0.5, 1.0, 2.0, 4.0, 8.0} and σ = 0.50. As λ increases, the robustness at high radii improves although the clean accuracy decreases, i.e., the trade-off between clean accuracy and robustness. Effect of M. Figure 3b shows the impact of M on the model when varying M ∈ {1, 2, 4, 8}. The robustness of the smoothed classifier improves as M increases, while the clean accuracy decreases. With a higher M, the model is exposed to more denoised images included in Dx,nh, reducing the distribution shift from clean images to denoised images. This increases the confidence of the smoothed classifier, i.e., the accuracy on denoised images, resulting in more robust predictions. ## 4.4 Related Work Certified adversarial robustness. Recently, various defenses have been proposed to build robust classifiers against adversarial attacks. In particular, *certified defenses* have gained significant attention due to their guarantee of robustness (Wong & Kolter, 2018; Wang et al., 2018a;b; Wong et al., 2018). Among them, Table 5: Comparison of ACR and certified accuracy for ablations of L MAdv on CIFAR-10 with σ = 0.50. Every design adopts the 1[|Dx,nh| = M] masking condition. | Certified Accuracy at ε (%) | | | | | | | | | | |---------------------------------------------|----------|-------|------|------|------|------|------|------|------| | Adversarial objective design | ACR | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | | ∗ | | | | | | | | | | | (a) maxi,η∗ i KL(Fclf(x + η i ), y) | 0.802 | 71.7 | 64.3 | 56.2 | 48.0 | 39.8 | 33.8 | 25.7 | | | ∗ | | | | | | | | | | | (b) 1 M P i (maxη i KL(Fclf(x + η ∗ ), yˆ)) | 0.792 | 74.9 | 65.8 | 56.1 | 47.8 | 39.7 | 31.8 | 23.4 | | | i | | | | | | | | | | | (c) 1 M P i (maxη ∗ | ∗ ), y)) | 0.792 | 74.8 | 64.9 | 57.0 | 48.0 | 39.9 | 31.5 | 23.0 | | i KL(Fclf(x + η i ∗ ), yˆ) (L MAdv; Ours) | 0.806 | 72.2 | 64.1 | 57.2 | 48.1 | 40.3 | 34.1 | 25.9 | | | maxi,η∗ i KL(Fclf(x + η i | | | | | | | | | | ![10_image_0.png](10_image_0.png) Figure 4: Change in the ratio of |Dx,nh| = M, i.e., ratio of clean images x satisfying the masking condition of L MAdv, during fine-tuning on CIFAR-10 with σ = 1.00, depending on whether Dx,nh is being updated or not. In the legend, red indicates that Dx,nh is iteratively updated, while orange indicates that Dx,nh is fixed. randomized smoothing (Lecuyer et al., 2019; Li et al., 2019; Cohen et al., 2019) shows the state-of-the-art performance by achieving the tight certified robustness guarantee over ℓ2-adversary (Cohen et al., 2019). This approach converts any base classifier, e.g., a neural network, into a provably robust smoothed classifier by taking a majority vote over random Gaussian noise. To maximize the robustness of the smoothed classifier, the base classifier should be trained with Gaussian-augmented images (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019; Zhai et al., 2020; Jeong & Shin, 2020; Jeong et al., 2023). For instance, Salman et al. (2019) employed adversarial training (Madry et al., 2018) within the randomized smoothing framework, while Jeong & Shin (2020) suggested training a classifier using simple consistency regularization. Moreover, Jeong et al. (2023) introduced sample-wise control of target robustness, motivated by the accuracy-robustness trade-off (Tsipras et al., 2019; Zhang et al., 2019a) in smoothed classifiers. However, training base classifiers specifically for Gaussian-augmented images requires large training costs and thus these methods suffer from scalability issues in complex datasets, e.g., the accuracy drops severely in the ImageNet dataset. Denoised smoothing. Denoised smoothing alleviates the aforementioned scalability issue of randomized smoothing by introducing "denoise-and-classify" strategy. This approach allows randomized smoothing to be applied to any off-the-shelf classifier trained on clean images, i.e., not specifically trained on Gaussianaugmented images, by adding a denoising step before feeding Gaussian-augmented images into the classifier. In recent years, diffusion probabilistic models have emerged as an ideal choice for the denoiser in the denoised smoothing scheme. In particular, Lee (2021) have initially explored the applicability of diffusion models in denoised smoothing, and Carlini et al. (2023) further observe that combining the latest diffusion models with an off-the-shelf classifier provides a state-of-the-art design for certified robustness. Meanwhile, Jeong & Shin (2024) investigate the trade-off between robustness and accuracy in denoised smoothing, and proposed a multi-scale smoothing scheme that incorporates denoiser fine-tuning. Our work aims to enhance the certified robustness of the smoothed classifier in denoised smoothing by finetuning the off-the-shelf classifier on selectively chosen denoised images. Specifically, we focus on filtering out hallucinated images, which harm the performance of smoothed classifiers, from our fine-tuning process based on the confidence of the off-the-shelf classifier. We then fine-tune the off-the-shelf classifier with non-hallucinated images to improve both generalizability and robustness. ## 5 Conclusion We propose FT-CADIS, a scalable fine-tuning strategy of off-the-shelf classifiers for certified robustness. Specifically, we propose to use the *confidence* of off-the-shelf classifiers to mitigate the intrinsic drawbacks of the denoised smoothing framework, i.e., hallucination and distribution shift. We also demonstrate that this can be achieved by updating only 1% of the total parameters. We hope that our method could be a meaningful step for the future research to develop a scalable approach for certified robustness. Limitation and future work. In this work, we apply an efficient training technique for off-the-shelf classifiers based on LoRA (Hu et al., 2022). Nevertheless, certification remains a bottleneck for throughput, due to its majority voting process involving a large number of forward inferences, i.e., N = 100, 000. An important future work would be to accelerate the certification process for a more practical deployment of our method. In addition, certain public vision APIs do not allow us to access the underlying off-the-shelf classifiers, *i.e.*, black-box. In such cases, our method is not directly applicable, and further research on training methods that are independent of model parameters, such as prompt-tuning (Jia et al., 2022), will be necessary. ## References Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022. Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. *arXiv preprint* arXiv:1902.06705, 2019. Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, and J Zico Kolter. (Certified!!) Adversarial robustness for free! In The Eleventh International Conference on Learning Representations, 2023. Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 2818–2829, 2023. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. *arXiv preprint arXiv:2003.10555*, 2020. Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In *International Conference on Machine Learning*, pp. 1310–1320. PMLR, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. Ieee, 2009. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2020. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020. Miklós Z Horváth, Mark Niklas Mueller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers. In *International Conference on Learning Representations*, 2022a. Miklós Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Robust and accurate– compositional architectures for randomized smoothing. *arXiv preprint arXiv:2204.00487*, 2022b. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In *Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands,* October 11–14, 2016, Proceedings, Part IV 14, pp. 646–661. Springer, 2016. Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems, 33:10558–10570, 2020. Jongheon Jeong and Jinwoo Shin. Multi-scale diffusion denoised smoothing. Advances in Neural Information Processing Systems, 36, 2024. Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, and Jinwoo Shin. SmoothMix: Training confidence-calibrated smoothed classifiers for certified robustness. *Advances in Neural Information Processing Systems*, 34:30153–30168, 2021. Jongheon Jeong, Seojin Kim, and Jinwoo Shin. Confidence-aware training of smoothed classifiers for certified robustness. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 8005–8013, 2023. Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In *European Conference on Computer Vision*, pp. 709–727. Springer, 2022. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4015–4026, 2023. Alex Krizhevsky. Learning multiple layers of features from tiny images. *https://www. cs. toronto.* edu/kriz/learning-features-2009-TR. pdf, 2009. Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In *2019 IEEE Symposium on Security and Privacy (SP)*, pp. 656–672. IEEE, 2019. Kyungmin Lee. Provable defense by denoised smoothing with learned score function. In ICLR Workshop on Security and Safety in Machine Learning Systems, volume 2, pp. 5, 2021. Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified adversarial robustness with additive noise. *Advances in Neural Information Processing Systems*, 32, 2019. Linyi Li, Tao Xie, and Bo Li. SoK: Certified robustness for deep neural networks. In *2023 IEEE Symposium* on Security and Privacy (SP), pp. 1289–1310. IEEE, 2023. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *International Conference on* Learning Representations, 2019. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2022. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *International Conference on Learning Representations*, 2018. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016. Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162–8171. PMLR, 2021. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. *International Journal of Computer Vision (IJCV)*, 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in Neural Information Processing Systems, 32, 2019. Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Denoised smoothing: A provable defense for pretrained classifiers. *Advances in Neural Information Processing Systems*, 33:21945–21957, 2020. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. LAION-5B: An open largescale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In *International Conference on Learning Representations*, 2019. Shiqi Wang, Yizheng Chen, Ahmed Abdou, and Suman Jana. MixTrain: Scalable training of verifiably robust neural networks. *arXiv preprint arXiv:1811.02625*, 2018a. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. *Advances in Neural Information Processing Systems*, 31, 2018b. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In *International Conference on Learning Representations*, 2019. Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In *International Conference on Machine Learning*, pp. 5286–5295. PMLR, 2018. Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J Zico Kolter. Scaling provable adversarial defenses. Advances in Neural Information Processing Systems, 31, 2018. Kai Y Xiao, Vincent Tjeng, Nur Muhammad Mahi Shafiullah, and Aleksander Madry. Training for faster adversarial robustness verification via inducing ReLU stability. In *International Conference on Learning* Representations, 2018. Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, and Bo Li. On the certified robustness for ensemble models and beyond. In *International Conference on Learning Representations*, 2022. Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. MACER: Attack-free and scalable robust training via maximizing certified radius. In *International* Conference on Learning Representations, 2020. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In *International Conference on Machine* Learning, pp. 7472–7482. PMLR, 2019a. Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In *International Conference on Learning Representations*, 2019b. # Supplementary Material Appendix: Confidence-Aware Denoised Fine-Tuning Of Off-The-Shelf Models For Certified Robustness ## A Training Procedure Of Ft-Cadis Algorithm 1 Fine-Tuning with Confidence-Aware Denoised Image Selection (FT-CADIS) Require: training sample (x, y). variance of Gaussian noise σ. number of noises M. off-the-shelf classifier fclf. attack ℓ2-norm ε > 0. adversarial target yˆ ∈ ∆K−1. coefficient of Confidence-aware masked adversarial loss λ > 0. 1: Generate ˆx1 = NoiseAndDenoise(x1, σ), *· · ·* , ˆxM = NoiseAndDenoise(xM, σ) ▷ xi: copy of x 2: Identify Dx,nh = {ˆxi| fclf(ˆxi) = y, i ∈ [1*, ..., M*]} 3: for i = 1 to M do 4: Li ← CE(Fclf(ˆxi), y) 5: η ∗ i ← arg max ∥η ∗ i −ηi∥2≤ε KL(Fclf(x + η ∗ i ), yˆ), ηi:= ˆxi − x 6: **end for** 7: L π 1:M, indices ← argsort(L1:M), Dπ x,nh ← {ˆx π indices.index(i) | ˆxi ∈ Dx,nh} 8: if Dπ x,nh ̸= ∅ **then** 9: L SCE ← 1M (Pxˆ π i ∈Dπ x,nh L π i ) 10: **else** 11: L SCE ← 1M (L π 1 ) ▷ L π 1 : lowest cross-entropy loss 12: **end if** 13: L MAdv ← 1[|Dx,nh| = M] · max iKL(Fclf(x + η ∗ i ), yˆ) 14: L FT-CADIS ← LSCE + λ · LMAdv Algorithm 2 Noise-and-Denoise Procedure (Carlini et al., 2023) 1: **function** NoiseAndDenoise(x, σ): 2: t ∗, αt ∗ ← GetTimestep(σ) 3: xt ∗ ← √αt ∗ (x + δ), δ ∼ N (0, σ2I) 4: ˆx ← denoise(xt ∗ ;t ∗) ▷ denoise : one-shot diffusion denoising process 5: **return** xˆ 6: **end function** 7: 8: **function** GetTimestep(σ): 9: t ∗ ← find the timestep t s.t. σ 2 = 1−αt αt▷ αt : noise level constant of diffusion model 10: **return** t ∗, αt ∗ 11: **end function** ## B Experimental Details B.1 Datasets CIFAR-10 (Krizhevsky, 2009) consists of 60,000 RGB images of size 32×32, with 50,000 images for training and 10,000 for testing. Each image is labeled as one of 10 classes. We apply the standard data augmentation, including random horizontal flip and random translation up to 4 pixels, as used in previous works (Cohen et al., 2019; Salman et al., 2019; Zhai et al., 2020; Jeong & Shin, 2020; Jeong et al., 2021; 2023). No additional normalization is applied except for scaling the pixel values from [0,255] to [0.0, 1.0] when converting image into a tensor. The full dataset can be downloaded at https://www.cs.toronto.edu/ kriz/cifar.html. ImageNet (Russakovsky et al., 2015) consists of 1.28 million training images and 50,000 validation images, each labeled into one of 1,000 classes. For the training images, we apply 224×224 randomly resized cropping and horizontal flipping. For the test images, we resize them to 256×256 resolution, followed by center cropping to 224×224. Similar to CIFAR-10, no additional normalization is applied except for scaling the pixel values to [0.0, 1.0]. The full dataset can be downloaded at https://image-net.org/download. ## B.2 Training Noise-and-Denoise Procedure. We follow the protocol of Carlini et al. (2023) to obtain the denoised images for fine-tuning. Firstly, the given image x is clipped to the range [-1,1] as expected by the off-theshelf diffusion models. Then, the perturbed image is obtained from a certain diffusion time step according to the target noise level. Finally, we adopt a one-shot denoising, i.e., outputting the best estimate for the denoised image in a single step, resulting in a denoised image within the range of [-1,1]. Since this range differs from the typical range of [0, 1] assumed in prior works, we set the target noise level to twice the usual level for training and certification. A detailed implementation can be found at https://github.com/ethzspylab/diffusion-denoised-smoothing and the algorithm is provided in Algorithm 2. CIFAR-10 fine-tuning. We conduct an end-to-end fine-tuning of a pre-trained ViT-B/16 (Dosovitskiy et al., 2020), considering different scenarios of σ ∈ {0.25, 0.50, 1.00} for randomized smoothing. The same σ is applied to both the training and certification. As part of the data pre-processing, we interpolate the dataset to 224×224. Our fine-tuning follows the common practice of supervised ViT training. The default setting is shown in Table 6a. We use the linear lr scaling rule (Goyal et al., 2017): lr = base lr × *batch size* ÷ 256. The batch size is calculated as batch per GPU × number of GPUs × accum iter // *number of noises*, where accum iter denotes the batch accumulation hyperparameter. ImageNet fine-tuning. We adopt LoRA (Hu et al., 2022) to fine-tune a pre-trained ViT-B/16 (Dosovitskiy et al., 2020) in a parameter-efficient manner. We use the same training scenarios as for CIFAR-10. As part of the data pre-processing, we interpolate the dataset to 384×384. The default setting is shown in Table 6b. Compared to end-to-end fine-tuning, we reduce the regularization setup, e.g., weight decay, lr decay, drop path, and gradient clipping. For LoRA fine-tuning, we freeze the original model except for the classification layer. Then, LoRA weights are incorporated into each query and value projection matrix of the self-attention layers of ViT. For these low-rank matrices, we use Kaiming-uniform initialization for weight A and zeros for weight B, following the official code. To implement LoRA with ViT, we refer to https://github.com/JamesQFreeman/LoRA-ViT. ## B.3 Hyperparameters In our proposed loss functions (see Eqs. (8), (9), and (10)), there are two main hyperparameters: the coefficient λ for the Confidence-aware masked adversarial loss, and the attack radius ε of Confidence-aware masked adversarial loss. Specifically, in Confidence-aware masked adversarial loss, we perform a T-step gradient ascent from each ηi with a step size of 2 · ε/T, while projecting the perturbations η ∗ i to remain within an ℓ2-ball of radius ε: viz., the *projected gradient descent* (PGD) (Madry et al., 2018). For CIFAR-10, we use λ = 1.0, 2.0, 4.0 for σ = 0.25, 0.50, 1.00, respectively. Assuming that denoise(x+δ) ≈ x with high probability, we adopt a small ε = 0.25 by default, which is increased to 0.50 after 10 epochs Table 6: Denoised fine-tuning settings for the off-the-shelf classifier on CIFAR-10 and ImageNet. | Configuration | Value | |------------------------------------------------------------|------------------------------------------| | Optimizer | AdamW (Loshchilov & Hutter, 2019) | | Optimizer momentum | β1, β2 = 0.9, 0.999 | | Base learning rate | 5e-4 (σ = 0.25, 0.50), 1e-4 (σ = 1.00) | | Weight decay | start, end = 0.04, 0.4 (cosine schedule) | | Layer-wise lr decay (Clark et al., 2020; Bao et al., 2022) | 0.65 | | Batch size | 128 | | Learning rate schedule | cosine decay (Loshchilov & Hutter, 2022) | | Warmup epochs (Goyal et al., 2017) | 3 | | Training epochs | 30 (early stopping at 20) | | Drop path (Huang et al., 2016) | 0.2 | | Gradient clipping (Zhang et al., 2019b) | 0.3 | (a) CIFAR-10 end-to-end fine-tuning | Configuration | Value | |------------------------------------------------------------|---------------------------------------------------------------------------| | Optimizer | AdamW (Loshchilov & Hutter, 2019) | | Optimizer momentum | β1, β2 = 0.9, 0.999 | | Base learning rate | 2e-4 (σ = 0.25), 4e-4 (σ = 0.50, 1.00) | | Weight decay | start, end = 0.02, 0.2 (σ = 0.25) start, end = 0.01, 0.1 (σ = 0.50, 1.00) | | Layer-wise lr decay (Clark et al., 2020; Bao et al., 2022) | 0.8 (σ = 0.25), 0.9 (σ = 0.50, 1.00) | | Batch size | 128 | | Learning rate schedule | cosine decay (Loshchilov & Hutter, 2022) | | Warmup epochs (Goyal et al., 2017) | 1 | | Training epochs | 10 (early stopping at 5) | | Drop path (Huang et al., 2016) | 0.0 | | Gradient clipping (Zhang et al., 2019b) | 1.0 | | LoRA rank r | 4 | | LoRA scaler α | 4 | (b) ImageNet LoRA (Hu et al., 2022) fine-tuning only for σ = 1.00. For ImageNet, we use λ = 2.0, 1.0, 2.0 for σ = 0.25, 0.50, 1.00 respectively, and ε is fixed at 0.25 for all noise levels. Although the number of noises M and the number of attack steps T can also be tuned for better performance, we fix M = 4 and T = 4 for CIFAR-10. For ImageNet, we fix M = 2 and T = 1 to reduce the overall training cost. ## B.4 Computing Infrastructure In summary, we conduct our experiments using 7 instances of NVIDIA GeForce RTX 2080 Ti and NVIDIA GeForce RTX 3090 GPUs, respectively. In the CIFAR-10 experiments, we utilize 4 NVIDIA GeForce RTX 2080 Ti GPUs for fine-tuning per run, resulting in ∼8 hours of training cost. During the certification, we use 7 NVIDIA GeForce RTX 2080 Ti GPUs for data splitting, taking ∼9 minutes per image (with N = 100, 000 for each inference) to perform a single pass of smoothed inference. In the ImageNet experiments, we utilize 4 NVIDIA GeForce RTX 3090 GPUs for fine-tuning per run, observing ∼3 days of training cost. During the certification, 7 NVIDIA GeForce RTX 3090 GPUs are used in parallel, taking ∼5 minutes per image (with N = 10, 000 for each inference) to complete a single pass of smoothed inference. ## B.5 Detailed Results On Main Experiments Table 7: Certified accuracy of FT-CADIS for varying levels of Gaussian noise σ on CIFAR-10 and ImageNet. Values in bold-faced indicate the ones reported in Table 1a for CIFAR-10 and Table 1b for ImageNet. | Certified Accuracy at ε (%) | | | | | | | | |-------------------------------|------|------|------|------|------|------|------| | Noise | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | σ = 0.25 | 88.7 | 80.3 | 68.4 | 54.5 | 0.0 | 0.0 | 0.0 | | σ = 0.50 | 74.9 | 67.3 | 58.7 | 49.2 | 39.9 | 31.6 | 23.5 | | σ = 1.00 | 49.6 | 45.5 | 41.0 | 36.8 | 32.5 | 28.4 | 24.2 | | Certified Accuracy at ε (%) | | | | | | | |-------------------------------|------|------|------|------|------|------| | Noise | 0.00 | 0.50 | 1.00 | 1.50 | 2.00 | 2.50 | | σ = 0.25 | 81.1 | 71.9 | 0.0 | 0.0 | 0.0 | 0.0 | | σ = 0.50 | 77.0 | 69.3 | 60.1 | 45.8 | 0.0 | 0.0 | | σ = 1.00 | 66.2 | 60.7 | 54.0 | 46.4 | 39.4 | 30.7 | (a) CIFAR-10 (b) ImageNet ![18_image_1.png](18_image_1.png) ![18_image_0.png](18_image_0.png) Figure 5: Certified accuracy of FT-CADIS at different levels of Gaussian noise σ ∈ {0.25, 0.50, 1.00}. Upper bounds in radius are calculated with N = 100,000 for CIFAR-10 and N = 10,000 for ImageNet. ## C Detailed Results On Effect Of Λ And M | | | Certified Accuracy at ε (%) | | | | | | | |----------|-------|-------------------------------|------|------|------|------|------|------| | Setups | ACR | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | λ = 0.50 | 0.786 | 75.7 | 64.9 | 55.8 | 47.5 | 39.5 | 30.8 | 22.8 | | λ = 1.00 | 0.797 | 75.3 | 64.3 | 56.3 | 47.9 | 39.7 | 32.8 | 24.5 | | λ = 2.00 | 0.806 | 72.2 | 64.1 | 57.2 | 48.1 | 40.3 | 34.1 | 25.9 | | λ = 4.00 | 0.814 | 70.9 | 63.3 | 55.9 | 48.0 | 41.0 | 35.0 | 27.7 | | λ = 8.00 | 0.823 | 68.6 | 62.4 | 56.0 | 47.6 | 41.5 | 35.9 | 28.5 | Table 8: Comparison of ACR and certified accuracy for ablations of varying λ on CIFAR-10 with σ = 0.50. | | Certified Accuracy at ε (%) | | | | | | | | |--------|-------------------------------|------|------|------|------|------|------|------| | Setups | ACR | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | | M = 1 | 0.773 | 75.6 | 67.0 | 56.3 | 46.2 | 38.1 | 29.4 | 21.6 | | M = 2 | 0.790 | 74.9 | 65.4 | 55.6 | 47.6 | 39.1 | 32.2 | 23.3 | | M = 4 | 0.806 | 72.2 | 64.1 | 57.2 | 48.1 | 40.3 | 34.1 | 25.9 | | M = 8 | 0.817 | 70.4 | 62.5 | 55.9 | 47.9 | 42.1 | 35.8 | 27.9 | Table 9: Comparison of ACR and certified accuracy for ablations of varying M on CIFAR-10 with σ = 0.50.
# Word Embeddings As Statistical Estimators Anonymous authors Paper under double-blind review ## Abstract Word embeddings are a fundamental tool in natural language processing. Currently, word embedding methods are evaluated on the basis of empirical performance on benchmark data sets, and there is a lack of rigorous understanding of their theoretical properties. This paper studies word embeddings from the theoretical perspective of statistical inference, which is essential for formal inference and uncertainty quantification. We propose a copulabased statistical model for text data and show that under this model, the now-classical Word2Vec method can be interpreted as a statistical estimation method for estimating the theoretical pointwise mutual information (PMI). Next, by building on the work of Levy & Goldberg (2014), we develop a missing value-based estimator as a statistically tractable and interpretable alternative to the Word2Vec approach. The estimation error of this estimator is comparable to Word2Vec and improves upon the truncation-based method proposed by Levy & Goldberg (2014). The proposed estimator also compares favorably with Word2Vec in a benchmark sentiment analysis task on the IMDb Movie Reviews data set. ## 1 Introduction In natural language processing (NLP) the notion and construction of an *embedding* (i.e., a mapping of a linguistic object such as a word, sentence, or entire document to a vector in Euclidean space) is the essential link for making precise the features of language that we hope to learn or understand with statistical or machine learning tools. *Word embeddings*, arguably introduced by Deerwester et al. (1990), have grown greatly in popularity since their utility in downstream tasks were demonstrated by Collobert & Weston (2008). Specifically, the Word2Vec algorithm (Mikolov et al., 2013) greatly influenced the use of word embeddings by providing a fast and effective unsupervised approach to constructing word embeddings. This was quickly followed by the GloVe algorithm (Pennington et al., 2014), which had performance comparable to Word2Vec. More modern deep learning word embedding algorithms, such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and GPT-3 (Brown et al., 2020), have further pushed the performance of word embeddings to state-of-the-art levels on a variety of downstream tasks such as masked language modelling, part-of-speech tagging, analogy completion, and text generation. Though these powerful techniques have already demonstrated far-reaching societal impacts, it remains important to understand the nature of the embeddings that underlie their success. All of the aforementioned embedding techniques employ some strategy to generate word embeddings from a corpus based on some intuited feature of natural language, theoretically giving the word embeddings relational meaning. Some examples of such strategies include the order of words in a sentence, how often pairs of words appear near one another, and the number of times certain words occur in various documents. However, these algorithms are not ultimately evaluated on how well they represent the intuited features of natural language that they are designed to learn; NLP algorithms are typically only judged by their success on downstream tasks. In fact, due to a lack of any precise mathematical formulation of theoretical features that govern the meaning of natural languages, it is not clear that embeddings have any significance beyond auxiliary data features that are useful for training NLP models on downstream tasks (such as those tested by the GLUE benchmark of Wang et al., 2019, evaluating on sentiment classification, semantic equivalence evaluation, text similarity evaluation, and recognition of textual entailment, among others). We argue that to begin to understand the natural language features that embeddings represent, we must first mathematically formalize the precise features of natural language that we conjecture to make inference on. Second, we must be able to generate synthetic natural language data that exhibit the precise features that have been formulated (i.e., there must exist a generative model). And third, we must be able to demonstrate theoretical and/or empirical statistical consistency of estimation procedures designed to learn the features. Without any understanding of how natural language can be generated, attempting to gain insight into learned features—much less gain statistical guarantees on estimation properties for "true" underlying linguistic features—is hopeless. The contributions of our paper are as follows: - We consider a theoretical formulation of the skip-gram algorithm used for training unsupervised word embeddings, as in Word2Vec. It is established in Levy & Goldberg (2014) that the skip-gram algorithm minimizes its loss function at the pointwise mutual information (PMI), but the statistical properties of the estimated embedding features are not investigated. We investigate the statistical properties via simulation studies by proposing a copula-based statistical model for natural language data that truly has a given PMI matrix as a feature of the generative model. - We provide a solution to the problem of constructing a singular value decomposition (SVD) of the PMI matrix from real natural language text data, which exhibits many infinite-valued components. Levy & Goldberg (2014) propose ad hoc truncation rules, but we adapt recent developments in missing values SVD (MVSVD) algorithms that rely on expectation-maximixation based imputation and estimation for matrices with missing components. Moreover, the right and left singular vectors resulting from the SVD of a PMI matrix can be used as embedding vectors with comparable meaning to the embedding vectors trained from the skip-gram algorithm and with similar performance in training on downstream tasks, but with improved interpretability. This is implicit in the fact that matrix decompositions such as SVD are relatively well-understood mathematically, unlike unsupervised learning algorithms. - The copula-based statistical model that we propose is motivated by linguistic literature on theoretical distributional features of data from natural language, namely that ordered word frequencies in text corpora are often Zipffian distributed. We illustrate that it is possible to build generative models for natural language data that can be used to study theoretical features not limited to the PMI matrix. - While the construction of embedding vectors in NLP contexts is almost exclusively driven by training models for optimized performance on downstream tasks, our work forces the question of whether embedding vectors have inherent meaningfulness for representing features of natural language. The organization of our paper is as follows. An overview of existing language modelling approaches is given in Section 2. This is followed by a brief discussion of the theoretical properties of the Word2Vec algorithm and our proposed model for natural language in Section 3. In Section 4, we discuss various estimators that can act as alternatives to skip-gram embeddings, examining previously used proposals as well as our new algorithm. Section 5 analyses Word2Vec and its alternatives in terms of their abilities as statistical estimators. Finally, Section 6 analyzes the performance of these algorithms on a standard sentiment analysis task to illustrate that similar behavior as statistical estimators yields similar performance in downstream tasks. ## 2 Existing Approaches A great deal of work has already been done in the area of natural language generation and language modelling. A standard model for text generation is the n-gram model, in which words are generated using an nth order Markov chain. These n-gram models are popular due to their simple nature allowing for ease of statistical analysis; however, a naive n-gram model, creating transition probabilities by using observed frequencies from a training corpus, often fails to capture desirable linguistic features. To remedy this limitation, a variety of methods have been proposed. A common approach to creating more sophisticated n-gram models is smoothing—adjusting maximum likelihood estimates for probabilities by making the distribution of word occurrence probabilities more uniform (Chen & Goodman, 1999). Examples include Good-Turing estimation (Good, 1953), Jelinek-Mercer smoothing (Jelinek & Mercer, 1980; Brown et al., 1992a), and Bayesian smoothing (Nádas, 1984; MacKay & Bauman Peto, 1995), among others. However, most attempts at smoothing still fail to take into account how similar words occur in similar contexts; class-based n-gram models (Brown et al., 1992b) address this by separating words into distinct classes based on frequency of co-occurrence with other words. The primary limitation of such an approach is in the difficulties encountered when words can have disparate meanings and appear in wildly different contexts. Other approaches more sophisticated than the n-gram model also exist. Neural network approaches to language modelling, such as those inspired by Bengio et al. (2003), address many of these issues and perform admirably in generating reasonably likely text (as measured in Bengio et al. (2003) by the perplexity metric). However, jumping to neural networks again decreases interpretability of the model and makes proving theoretical results difficult. Another approach is found in the log-linear models of Mnih & Hinton (2008) and Arora et al. (2016). These models do provide sophisticated statistical models for language that yield interesting theoretical results, but are tailored to capturing specific linguistic features and do not generalize easily to multiple features. ## 3 Statistical Framework 3.1 Word2Vec And Pointwise Mutual Information Given a training corpus, Levy & Goldberg (2014) proved that the Word2Vec algorithm, using a skip-gram algorithm with one negative sample, implicitly factors the empirical PMI matrix, given by $$\operatorname{PMI}(w,c)=\log{\frac{\operatorname*{Pr}(w,c)}{\operatorname*{Pr}(w)\operatorname*{Pr}(c)}}$$ where w and c together form a word-context pair, Pr(*w, c*) is the probability of drawing (*w, c*) as a wordcontext pair from the corpus, and Pr(w) and Pr(c) are the probabilities of drawing w and c respectively from the corpus. That is, Word2Vec generates a matrix W of word embeddings and a matrix C of context embeddings, and its objective function attains a global maximum when W and C are such that W C> = PMI. More generally, Word2Vec with k negative samples factors the empirical Shifted PMI (SPMI) matrix ## Spmi = Pmi − Log(K) · J where J is the all-ones matrix. Note that neither the empirical PMI nor SPMI matrix is guaranteed to have only finite entries. In a finite corpus, most words do not co-occur with each other, leading to Pr(*w, c*) = 0 for any such non-co-occuring pair and hence log Pr(*w, c*) = −∞. The presence of many −∞ entries in the empirical PMI matrix limits the mathematical and statistical techniques that can be used to interpret the matrix directly, as most linear algebra techniques require the entries of a matrix to come from a ring or field, but R *∪ {±∞}* fails to even form a semigroup. Thus, a mathematical analysis of the PMI matrix and Word2Vec will require a deeper understanding of these infinite entries. ## 3.2 Two Settings For Natural Language Modelling We identify two mutually exclusive possibilities for the nature of a language that produces infinite entries for an empirical PMI matrix: a *sparse* setting and a *dense* setting. In both settings, we interpret the observed corpus as a *sample* which is obtained from a hypothetical, infinite *population* of text data. This populationsample framework is a fundamental building block of statistics. In any statistical inference problem, the goal is to learn something about the population, e.g., the population mean. However, it is impossible to actually observe the entire population, so we use the statistical sample as a representative and finite subset of the population on which calculations can be carried out in a feasible manner, e.g., computing the sample mean. We then use the sample metric as an estimate of the unknown population metric of interest. We now carry this intuition to text data and more specifically, co-occurrence and PMI, under sparse and dense settings. In practice, the *sample* co-occurrence matrix, i.e., the co-occurrence matrix computed from the observed corpus, is almost always sparse, with most of its entries being zero. This implies that the corresponding terms of the PMI matrix are −∞. However, this does not necessarily mean that the *population* version of the co-occurrence matrix and the PMI matrix suffer from the same issues. In the sparse setting, there do indeed occur words (say w and c) that never appear in the same context in the population itself; therefore, any corpus will have an infinite entry for (*w, c*) in its corresponding empirical PMI matrix. On the other hand, the dense setting allows for any two words to appear with each other in some context in the population (though this context may be very rarely seen); thus, for any two words, there is a positive probability of their co-occurrence. In other words, there is a corpus that contains a finite entry in the empirical PMI matrix for the word-context pair. Note that even under the dense setting, we are likely to observe many zero entries in the observed co-occurrence matrix, since the sample might not be large enough for observed frequencies to be non-zero even though the underlying probabilities are. Both of these settings have an underlying notion of the "truth" of the language that Word2Vec intends to train on. However, discussing "truth" in a statistical sense is hard, if not impossible, without any understanding of how the data itself is generated. Indeed, the lack of a data-generating model for NLP data contributes to the need for the use of performance on downstream tasks for various NLP methods. To draw an analogy, suppose that a new method for regression is proposed. Rather than measuring the efficacy of this new method using downstream tasks, the designers could either directly prove results or at least test the method using simulated data (Xi, Yi) n i=1 from Yi = f(Xi) + εi for some function f, covariates X, and random errors εi from some distribution. With a data-generating model for natural language, we can proceed similarly with NLP methods. Thus, choose some data-generating model for natural language. Then in the dense setting, the sparsity (in the sense of having many infinite entries) of the empirical PMI is an artifact of sampling from the model rather than a consequence of theoretical sparsity. On the other hand, in the sparse setting, the sparsity is an inherent attribute of language (i.e. the data generating model) itself. Under a sparse model, many traditional mathematical approaches to interpreting the PMI are fruitless, since infinite entries are inherent to the data itself: There necessarily exists a pair (*w, c*) such that Pr(*w, c*) = 0, and thus log Pr(*w, c*) = −∞ in both the population and empirical PMI matrix. Indeed, the objective function of Word2Vec itself (without normalization of word and context vectors) would require these words to have infinitely large dot products with each other, so even direct analysis of the algorithm proves difficult. It is due to these complications that the remainder of this paper primarily focuses on a dense model for language. ## 3.3 A Text Generation Framework In light of the need for a data-generating model for natural language, we provide the following theoretical framework: Let C be an infinite sequence of tokens given by (ti)∞ i=−∞. We assume that C is "stationary" in the sense that any corpus (i.e. a finite substring from C ) we draw would form a representative samplecompare this to a non-stationary C that adjoins a fantasy textbook with a medical textbook, where which section of C we draw from to sample a corpus would matter. This is inspired by the statistical framework of time series analysis, where the observed data is assumed to be a finite, continuous subset of an infinite temporal sequence. Every finite sequence of tokens can be assigned some probability of occurrence based on their frequency in C ; the n-gram model assigns the probability of the sequence of tokens (wi) m i=1 to be $$\Pr(w_{1},\ldots,w_{m})=\prod_{i=1}^{m}\Pr(w_{i}\,|\,w_{i-(n-1)},\ldots,w_{i-1})\,.$$ Hence, an nth order Markov chain can be used to generate data if the transition probabilities are specified (e.g. via a transition probability matrix or tensor); for simplicity, we restrict ourselves in this paper to a unigram (first order) model. It then still remains to answer the question of how the transition probabilities should be specified to match the distributional properties of natural language. A natural distributional property that the model's transition probabilities should match is the marginal distribution for word frequencies (i.e. the probability of occurrence for individual words). It is well-established that words in the English language tend to follow a Zipfian distribution with Zipf parameter approximately 1 (Moreno-Sánchez et al., 2016). To illustrate this phenomenon, we display the empirical frequencies of words ![4_image_0.png](4_image_0.png) from the Brown Corpus (Francis & Kucera, 1979) and overlay the expected frequencies from a Zipfian distribution in Figure 1. Thus, in a fixed vocabulary of V unique words, the distribution of the word probabilities Figure 1: The empirical word frequencies from the Brown Corpus and expected word frequencies from a Zipf(1, V ) distribution, where V ≈ 104. Infrequent words (with < 10 occurrences) have been omitted. should follow a Zipf(1, V ) distribution (though our methodology can be extended to any distribution). This thus gives us the marginal distributions for a V × V word co-occurrence matrix; however, this does not give all the information necessary to construct the co-occurrence matrix in its entirety. The problem of constructing a bivariate distribution when only the marginals are known is solved using copula distributions. Sklar's Theorem (Sklar, 1959) states that every multivariate cumulative distribution function fX(x1*, . . . , x*d) can be expressed in terms of its marginal cdfs Fi(xi) and a copula function C: $f_{\bf x}(x_{1},\ldots,x_{d})=C(F_{1}(x_{1}),\ldots,F_{d}(x_{d}))$. A function C : [0, 1]d → [0, 1] is a copula function if it is a joint cumulative distribution function of a d-dimensional random vector on the unit cube [0, 1]d with uniform marginals. Though Sklar's Theorem only holds for continuous distributions, many discrete multivariate distributions can still be closely approximated using copula functions. In particular, given Zipfian marginals with a Zipf parameter near 1, a Gaussian copula with correlation matrix R, $$C_{R}(\mathbf{u})=\Phi_{R}(\Phi^{-1}(u_{1}),\Phi^{-1}(u_{2})),$$ where Φ denotes a normal cumulative distribution function, does well empirically in generating a bivariate Zipfian distribution with the desired marginal distributions. This thus allows us to construct a dense cooccurrence matrix, and hence (by normalizing rows to sum to unity) a transition probability matrix to generate a corpus. Note that though this data-generating model is limited as-is, it easily extends to other, more complex situations. The current model simply generates words via a first-order Markov chain; a straightforward extension would be to use a higher order chain. This of course comes at a cost of having to work with tensors rather than matrices and greatly increases the computational cost of computing copulas. Another extension would be to use more sophisticated copulas than the Gaussian copula, allowing for more complex structures in a language to be replicated in the model; see Ibragimov & Prokhorov (2017) for an in-depth treatment of appropriate copulas for heavy-tailed distributions such as the Zipffian distribution. Different copulas also allow for a choice between the dense and sparse settings referenced in Section 3.2. ## 4 Embedding Algorithms To Estimate The Spmi Matrix In our theoretical framework introduced in Section 3.3, a corpus is an observed finite sample from a population of infinite length. In light of this, the empirical SPMI matrix from a corpus is simply a point estimate for the SPMI matrix associated to the population. The only reason Word2Vec factors an empirical SPMI matrix rather than the population SPMI matrix is due to the limitation of having to train on a finite corpus. Though impossible in practice, Word2Vec would ideally factor the population SPMI matrix, as this would allow for the most generally applicable word embeddings. Thus, in order to theoretically understand Word2Vec, we need methods that are easier to analyze mathematically but still approximate Word2Vec in its ability to implicitly factor the population SPMI matrix as well as in its performance on downstream tasks. ## 4.1 Truncating The Spmi As stated in section 3.1, the objective function of Word2Vec with k negative samples aims to find matrices W and C such that W C> = SPMI. Hence, one can bypass the need to perform the Word2Vec algorithm by directly factorizing the empirical SPMI matrix; the simplest way to do so would be to use singular value decomposition (SVD) on the matrix SPMI = UΣV > and use UΣ 1/2 and V Σ 1/2 as our word and context embeddings respectively. However, the infinite entries of the empirical PMI matrix prevent SVD from being used directly. To circumvent the problem of the infinite entries of the empirical SPMI matrix, Levy & Goldberg (2014) instead perform SVD on the empirical *positive* SPMI (SPPMI) matrix SPPMI(*w, c*) = max(SPMI(*w, c*), 0) which truncates all negative entries to 0. Though the SPPMI metric is shown to perform well empirically on semantic similarity tasks, it has the significant downside of "throwing away" a large amount of data regarding negative associations. Indeed, Table 2 of Levy & Goldberg (2014) illustrates that as the number of negative entries increases (e.g. via an increase in negative sampling for the Word2Vec procedure), performance of SVD on the SPPMI generally decreases. Similarly, Table 1 of Levy & Goldberg (2014) demonstrates that as the number of negative entries increases, SVD of the SPPMI matrix continues to deviate further from the empirical PMI. Thus, a better approach to dealing with missing co-occurrence frequencies is necessary. ## 4.2 Missing Values Svd In the dense setting, it makes sense to look into MVSVD algorithms in order to work around the sparsity of the empirical PMI matrix. Several such algorithms are presented in Kurucz et al. (2007). One particular algorithm, yielding an Expectation-Maximization (EM) approach to MVSVD, is shown in Algorithm 1. This algorithm essentially imputes missing values by minimizing the Frobenius norm between the imputed matrix and the given matrix with missing values on the non-missing values. These methods typically excel in the case of data that is missing completely at random; this is not the case for the empirical PMI matrix, where the smallest values are the entries most likely to be missing. Indeed, this limitation is a special case of the primary downside of the EM-MVSVD algorithm: It only aims to minimize the error on the known, non-missing values, so no information regarding the missing values is incorporated. As a result, if the matrix entries come from a known distribution, the EM-MVSVD algorithm may yield a factorization that is incompatible with the generating distribution on the missing values. This thus motivates the need to incorporate distributional information into the EM-MVSVD algorithm in certain problems. Algorithm 1 EM-MVSVD Algorithm from Kurucz et al. (2007) Require: W, a matrix with missing values Require: d, the number of singular values to keep R ← {(*i, j*)| Wij is missing} for (*i, j*) ∈ R do Wij ← initial guess end for U, Σ, V > ← SVD(*W, d*) Wc(0) ← UΣV > for (*i, j*) ∈ R do Wij ← Wc(0) ij end for for t = 1, 2, 3*, . . .* do if converged then return U, Σ, V > end if U, Σ, V > ← SVD(*W, d*) Wc ← UΣV > λ ← arg min P (i,j)6∈R hWij − (λ · Wcij + (1 − λ) · Wc(t−1) ij ) i2 Wc(t) ← λ · Wc + (1 − λ) · Wc(t−1) for (*i, j*) ∈ R do Wij ← W (t) ij end for end for Algorithm 2 DD-MVSVD Algorithm Require: W, a matrix with missing values Require: Wf, a matrix approximating the "true" matrix from a distribution Require: d, the number of singular values to keep R ← {(*i, j*)| Wij is missing} for (*i, j*) ∈ R do Wij ← initial guess end for U, Σ, V > ← SVD(*W, d*) Wc(0) ← UΣV > for (*i, j*) ∈ R do Wij ← Wc(0) ij end for for t = 1, 2, 3*, . . .* do if converged **then** return U, Σ, V > end if U, Σ, V > ← SVD(W, d) Wc ← UΣV > λ ← arg min P (i,j)6∈R -Weij−(λ·Wb +(1−λ)·Wb (t−1))ij 2 Weij Wc(t) ← λ · Wc + (1 − λ) · Wc(t−1) for (*i, j*) ∈ R do Wij ← W (t) ij end for end for Thus, our proposed methodology to factorize the population SPMI matrix is thus as follows: Given a corpus of finite length with V unique words, compute the empirical co-occurrence matrix; sort the rows and columns by marginal frequencies and then normalize the matrix to form a bivariate probability distribution. Now identify each word with its rank (1 through V , with 1 being most frequent and V being least frequent); this allows us to compute the correlation terms to use in the Gaussian copula. The copula then gives us a dense estimate for the "true" co-occurrence probabilities, which can then be transformed into an estimate for the population SPMI matrix with no missing values. We then use this estimate as the input Wf to Algorithm 2, a *distribution-dependent* MVSVD algorithm. This modifies EM-MVSVD (Algorithm 1) by minimizing a Chi-Square Goodness-of-Fit statistic with the estimated population SPMI as opposed to merely trying to match the empirical SPMI on non-missing entries. ## 5 Simulation Study To study the effectiveness of these methods in estimating the population SPMI matrix, we generated text and compared the matrices generated by Word2Vec, SVD of the SPPMI, EM-MVSVD (Algorithm 1), and DD-MVSVD (Algorithm 2) to the population SPMI matrix. To do so, we read in words from the Brown Corpus and sampled 500 words uniformly from the set of unique words; analogously to our proposed methodology from Section 4.2, we then created a positive transition probability matrix to generate text as well as the associated dense SPMI matrix for this set of words via a Gaussian copula. We then ran Word2Vec, EM-MVSVD, DD-MVSVD, and SVD on the empirical SPPMI matrix; each algorithm was asked to create 100-dimensional word embeddings based on a shift of 10 negative samples. The factorizations produced were then multiplied back together to form an estimate of the population SPMI matrix; the root mean square errors (RMSE) of these algorithms compared to the population SPMI matrix are shown in Figure 2. Evidently, the SVD of the empirical SPPMI matrix is an increasingly worse approximation to the population PMI as the corpus size increases, and it yields approximations far worse than any of the other algorithms. On the other hand, the two MVSVD algorithms perform essentially identically to Word2Vec in terms of RMSE from the population SPMI matrix. This thus demonstrates, without the need to check performance on downstream tasks, that MVSVD algorithms will yield word embeddings much more similar to Word2Vec (and thus perform similarly to Word2Vec) than SVD on the empirical SPPMI matrix does. ## 6 Real Data Analysis To provide more evidence of our claim from the end of Section 5, we now compare the performance of Word2Vec, the MVSVD algorithms, and SVD on the SPPMI matrix on a sentiment analysis task. We trained a simple sentiment analysis classifier on the IMDB movie review data set (Maas et al., 2011), which is comprised of 50,000 anonymized movie reviews split evenly between bad reviews (1-4 star ratings) and good reviews (7-10 star ratings). The goal of the task is to train a model to predict how a review rated any given movie when given that review's text. We randomly split the data set into test-train partitions, trained each embedding technique using the training partition, then trained Bidirectional Long Short-Term Memory Networks (BiLSTMs) on movie sentiment analysis. This experiment was repeated twenty times. The input to each two-layer BiLSTM (with 100 units) was a sequence of embeddings corresponding to the words contained in the review. Each model's output was a binary-classification layer using a binary-cross entropy loss function and a sigmoid activation function. To keep computations feasible, we removed stopwords as well as words with fewer than 300 occurrences, reducing the corpus to 3104 distinct words. Additionally, we zero-padded or truncated all reviews to 500 words. The use of truncation avoided the problem of vanishing or exploding gradients in long BiLSTM chains without severely compromising the data, as 95% of reviews required no truncation. Table 1 shows the performance of each model across five distinct embedding algorithms and negative sampling levels of one, two, and five. In addition to the previously discussed algorithms, we included a one-hot encoding of the input words followed by a dense layer with 100 nodes, which serves the purpose of acting ![8_image_0.png](8_image_0.png) Figure 2: The RMSE of Word2Vec, the MVSVD algorithms, and SVD on the SPPMI matrix with respect to the population SPMI matrix. The RMSEs plotted are the average over 20 independent trials. Error bars are not clearly visible, as all standard errors are < 0.5. | One-Hot | Word2Vec | SPPMI | EM-MVSVD | DD-MVSVD | | | | | | | |------------------|------------|---------|------------|------------|------|--------|------|--------|------|--------| | Acc. | S.E. | Acc. | S.E. | Acc. | S.E. | Acc. | S.E. | Acc. | S.E. | | | 1 | 0.86 | 0.0047 | 0.79 | 0.0072 | 0.82 | 0.0087 | 0.82 | 0.0070 | 0.82 | 0.0146 | | 2 | 0.86 | 0.0065 | 0.80 | 0.0138 | 0.78 | 0.0103 | 0.81 | 0.0096 | 0.84 | 0.0034 | | 5 | 0.86 | 0.0028 | 0.79 | 0.0173 | 0.70 | 0.0133 | 0.80 | 0.0109 | 0.82 | 0.0114 | | Negative Samples | | | | | | | | | | | Table 1: Model accuracies in positive/negative sentiment analysis on the IMDB data ste across multiple levels of negative sampling. as a benchmark for roughly how well the "perfect" embedding overfits to the data can do in this task. All algorithms produced 100-dimensional embeddings. As expected, the one-hot embedding performed the best, achieving 86% accuracy regardless of the amount of negative sampling. We see that the MVSVD algorithms perform comparably to Word2Vec across all levels of negative sampling, with DD-MVSVD performing the best amongst the three. We see that these algorithms still perform decently in comparison to the one-hot embeddings, achieving accuracies of roughly 80%. This is in stark contrast to the performance of SVD on the SPPMI, which matches the MVSVD algorithms at one negative sample, but quickly decreases in accuracy to 70% as the number of negative samples increases. This phenomenon occurs because the sparsity of the SPPMI matrix increases rapidly as the number of negative samples increases, so the embeddings contain less and less information to allow the BiLSTM to make a well-tuned classifier. ## 7 Conclusions Through the use of our data-generating model for natural language, we find that the MVSVD algorithms perform similarly to Word2Vec in estimating the population PMI matrix, whereas SVD on the SPPMI matrix does not. This is reflected in the algorithms' performances on the downstream task of sentiment analysis, suggesting that an embedding algorithm's ability to estimate this population parameter strongly correlates to its performance in this particular downstream task, and perhaps to others as well. As such, the MVSVD algorithms can be seen to be quite reasonable approximations to Word2Vec that still remain tractable to mathematically analyze. In the future, we hope to develop theory for our proposed DD-MVSVD algorithm, such as proving convergence properties of the algorithm. We also plan to investigate other more complex NLP methods and build upon our generating model to allow for more expressive modelling of language features. We also hope to be able to determine what other population parameters of natural language, if any, are correlated with performance on various downstream tasks of interest. ## References Sanjeev Arora, Yingyu Liang Yuanzhi Li, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, 4:385–299, 2016. doi: 10.1162/tacl_a_00106. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. *Journal of Machine Learning Research*, 3:1137–1155, 2003. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. An estimate of an upper bound for the entropy of English. *Computational Linguistics*, 18(1):31–40, 1992a. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. Classbased n-gram models of natural language. *Computational Linguistics*, 18(4):467–479, 1992b. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In *34th Conference on Neural Information Processing System*, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359–394, 1999. ISSN 0885-2308. doi: https://doi.org/10.1006/csla. 1999.0128. URL https://www.sciencedirect.com/science/article/pii/S0885230899901286. Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In *Proceedings of the 25th International Conference on Machine Learning*, pp. 160–167, 2008. doi: 10.1145/1390156.1390177. Scott Deerwester, Susan T. Dumais, , George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent sentiment analysis. *Journal of the American Society for Information Science*, 41(6): 391–407, 1990. doi: 10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186, 2019. doi: 10.18653/v1/N19-1423. W. N. Francis and H. Kucera. Brown corpus manual. Technical report, Department of Linguistics, Brown University, Providence, Rhode Island, US, 1979. URL http://icame.uib.no/brown/bcm.html. I. J. Good. The population frequencies of species and the estimation of population parameters. *Biometrika*, 40(3/4):237–264, 1953. ISSN 00063444. URL http://www.jstor.org/stable/2333344. Rustam Ibragimov and Artem Prokhorov. *Heavy Tails and Copulas: Topics in Dependence Modelling in* Economics and Finance. World Scientific Publishing Co. Pte. Ltd., 2017. Frederick Jelinek and Robert L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In *Proceedings of the Workshop on Pattern Recognition in Practice*, 1980. Miklós Kurucz, András A Benczúr, and Károly Csalogány. Methods for large scale SVD with missing values. In *Proceedings of KDD cup and workshop*, volume 12, pp. 31–38. Citeseer, 2007. Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. *Advances in neural* information processing systems, 27:2177–2185, 2014. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. David J. C. MacKay and Linda C. Bauman Peto. A hierarchical Dirichlet language model. *Natural Language* Engineering, 1(3):289–308, 1995. doi: 10.1017/S1351324900000218. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In *International Conference on Learning Representations*, 2013. Andriy Mnih and Geoffrey Hinton. A scalable hierarchical distributed language model. In *Advances in* Neural Information Processing Systems 21, 2008. Isabel Moreno-Sánchez, Francesc Font-Clos, and Álvaro Corral. Large-scale analysis of Zipf's law in English texts. *PLOS ONE*, 11(1), 2016. Arthur Nádas. Estimation of probabilities in the language model of the IBM speech recognition system. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-32(4):859–861, 1984. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In *Empirical Methods in Natural Language Processing*, pp. 1532–1543, 2014. URL http://www.aclweb.org/anthology/D14-1162. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2227–2237. Association for Computational Linguistics, 2018. doi: 10.18653/v1/N18-1202. Abe Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut Statistique de l'Université de Paris, pp. 229–231, 1959. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Seventh International* Conference on Learning Representations, 2019.
Review 1: Summary: * The paper studies two methods to complete missing values in an empirical shifted pointwise mutual information (SPMI) matrix for word concurrence in a corpus and compares these to the truncation method used in Levy & Goldberg 2014 * The paper defines a generative model for text based on a unigram Markov chain using estimated vocabulary unigram distributions and a copula function, and uses text generated from this model to evaluate the performance of different word vector estimation methods (Word2Vec and SVD on different versions of empirical SPMI matrix) * The paper shows that completing the SPMI matrix is better than the truncation method of Levy & Goldberg and similar to Word2Vec in matching PMI statistics from the generative model * There is also evaluation on a sentiment classification task, with similar results, but I have concerns about this application Strengths and Weaknesses: Strengths * The motivation of this paper to have more interpretable word embeddings with better-understood statistical properties is appealing * The paper convincingly shows that completing the SPMI matrix before performing SVD is advantageous in terms of fitting better synthetic data from the proposed generative model of text Weaknesses * Since the proposed generative model of text is definitely not a good approximation of the real distribution (e.g. first order Markov chain with estimates from a finite corpus), it would be good to tone down the claims of gaining understanding with respect to real characteristics of natural language * Would it not be feasible to evaluate various estimation methods with respect to matching PMI statistics from a much larger corpus -- e.g. estimate from 1 million tokens and see how various methods match statistics from an estimate based on 1 billion tokens? What are the advantages of using your copula-based model instead? * In the sentiment application, word embeddings are trained on the training split of the labeled data for sentiment. This does not make sense as the advantage of the embeddings is that they can be trained don much larger unlabeled texts. I also did not understand why indicator vectors should be doing best -- they are not helping us learn about the meaning of words that are not seen in the labeled training data. Requested Changes: * Toning down the claims regarding the method giving insights about "theoretical features that govern the meaning of natural languages" * If possible, showing estimation error with respect to SPMI from a much larger corpus and comparing that to the copula-distribution approach. * Showing an end task experiment following a closer to state-of-the-art formulation: e.g. word embeddings from a large unlabeled corpus followed by as many layers of LSTM or Transformer as helpful for the task. * Adding an estimate of the computational requirements of the methods -- Word2Vec versus SVD on the positive or completed SPMI matrices. Broader Impact Concerns: No concerns ================================================== Review 2: Summary: This paper addresses the zero-count problem in a matrix of Shifted PMI (SPMI) (Levy and Goldberg, 2014). Specifically, in a PMI matrix, $$ {\rm PMI}(w, c) = \log \frac{P(w, c)}{P(w), P(c)} , $$ $\log P(w, c) = -\infty$ when $P(w, c) = 0$. These elements are actually dominant in the matrix because words $w$ and contexts $c$ rarely co-occur. We usually truncate a PMI matrix with positive elements, $$ {\rm PPMI}(w, c) = \max({\rm PMI}(w, c), 0) $$ However, the situation gets worse when we consider SPMI with $k$ (this corresponds to the number of negative examples in Skip-Gram with Negative Sampling), $$ {\rm SPMI}(w, c) = {\rm PMI}(w, c) - \log k, $$ because more elements are pushed towards zero with a larger value of $k$. This paper proposes to use missing-value SVD algorithms to obtain the low-rank approximation of the positive SPMI matrix, approximating missing values in the matrix. This paper also proposes to use the Gaussian copula to preserve the statistical property of word occurrences. However, I could not see how this is actually realized in the study. Strengths and Weaknesses: # Strengths + This paper presents Expectation-Maximization based Missing-Values SVD (EM-MVSVD) and its variant, Distribution-Dependent MVSVD (DD-MVSVD) to work around the sparsity of the empirical PMI matrix of words and contexts. + The effectiveness of the presented algorithms is verified with the simulation study and sentiment analysis task. # Weaknesses + The empirical verification is insufficient to demonstrate the superiority of the proposed method. For example, Schnabel et al. (2015) argued that there is no almighty word embeddings for all tasks; we should fine-tune word emebdddings on a target task. However, this study only reports a performance on a single task (sentiment analysis) while word embeddings are usually evaluated on multiple tasks such as word analogy, shallow parsing, and named entity recognition. + A comparison with GloVe may be preferrable. GloVe also has a heuristic treatment (weighting) for low co-occurrence counts, assuming that reproducing low co-occurrence counts with word embeddings is probably overkill, which somewhat contradicts with the idea suggested in this paper. + The detail of the proposed method is unclear. I'm not sure how the Gaussian copula is used in the proposal (although I found a short description in the end of Section 4). In addition, I could not find the detail of the BiLSTM model: how to convert encoded token emebddings into a text embedding (max-pooling?); whether the word embeddings are fine-tuned or not; etc. + It is hard to judge the usefulness of this study when the approach of pre-training and fine-tuning is so dominant nowadays (this is a subjective opinion). Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In EMNLP, pages 298–307. Requested Changes: Major comments: See the weaknesses. Minor comments: + I don't think the explanation of the $n$-gram language model in Section 3.3 is useful because this study just considers uni-grams and Zipfician distribution. + $u_1$, $u_2$, and $\Phi$ are undefined in the equation of $C_R(\boldsymbol{u})$ + Algorithm 1: + What is the "initial guess" in the algorithm? + $\| \cdot \|_{NMF}$ should be defined. + I'm not sure whether Algorithm 1 really corresponds to Kurucz et al. (2007). Please double-check this. Other comments: + Levy and Goldberg (2014) were not the first to use positive PMI. See Bullinaria and Levy (2007). J Bullinaria and J Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39:510–526. Broader Impact Concerns: None. ================================================== Review 3: Summary: This paper revisited a theoretical analysis of a classical algorithm for learning word embeddings, the skip-gram algorithm, and proposed an algorithm based on a well-understood technique to learn embeddings with similar downstream performance. Building upon Levy & Goldberg (2014) findings, the authors suggested a solution to the sparsity problem when factorizing the SPMI matrix of word co-occurrences. The framework in this paper assumes a dense co-occurrence matrix where all word-context pairs occur (no matter how unlikely), and uses Missible Values SVD to solve the matrix factorization. In a simulation study, the authors showed that the skip-gram and the proposed methods successfully model language generated from a unigram model with Zipfian distribution, in contrast to the SPPMI. Revealing the statistical language feature that the skip-gram algorithm modeled. A further empirical study in a downstream task showed that the performance of the proposed method matched the skip-gram algorithm as well. Strengths and Weaknesses: Strengths 1. The paper offers a theoretical point of view and also an empirical result. Weaknesses 1. The authors investigate the "statistical properties" of the estimated embedding, but the implication of the result remained undiscussed. In addition, the claim of improved interpretability of the proposed method was also unclear. 2. While the authors presented a success in learning word embeddings from the generated data, it was only one configuration of language modeling (unigram and zipf). It did not thoroughly show that the proposed method matched the skip-gram algorithm. 3. The skip-gram algorithm has a lasting impact on the NLP community, but the community has moved away from it. The audience of this paper would be quite limited. Requested Changes: 1. Providing more detail on the relationship between the unigram model and copula function. Specifically, how does one obtain a transition probability from a co-occurrence matrix? 2. Providing more implications and insights from the framework. For example, you could answer the proposed question of "whether embedding vectors have inherent meaningfulness" more directly through proof or experiment. 3. Adding more experiment results. 3.1 The simulation study does not reveal significant insights. The author could provide more analyses, such as the impact of different word frequencies, extensions to other language models, or relaxing the dense PMI assumption. 3.2 I understand that the downstream performance is not the focus of the paper, but well-crafted downstream tasks are often used to show what information is captured by embeddings (though usually linguistic features). 4. Providing better motivation for the analyses. I think the best way to show why this kind of work is important is to show its usefulness in real applications, rather than the standpoint that we need one. Broader Impact Concerns: There is no ethical concern. ================================================== Metareview: Recommendation: Reject Comment: The paper's contributions are of interest for the community; however the remaining reviewers' concerns cannot be addressed through a minor revision. In particular, it remains unclear whether the authors would like to keep the experimental section or remove it, and therefore the paper would need to be rewriten according to that decision in order to clearly describe the contributions and analyses of interest. See "Claims And Evidence" above for more details. I highly encourage the authors to address these concerns and resubmit their work to TMLR. ==================================================
# Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series David W. Romero∗*dwromero@nvidia.com* NVIDIA Research Erik J. Bekkers *e.j.bekkers@uva.nl* Universiteit van Amsterdam Jakub M. Tomczak∗*j.m.tomczak@tue.nl* Technische Universiteit Eindhoven Mark Hoogendoorn *m.hoogendoorn@vu.nl* Vrije Universiteit Amsterdam Reviewed on OpenReview: *https: // openreview. net/ forum? id= ga5SNulYet* ## Abstract Leveraging the symmetries inherent to specific data domains for the construction of equivariant neural networks has lead to remarkable improvements in terms of data efficiency and generalization. However, most existing research focuses on symmetries arising from planar and volumetric data, leaving a crucial data source largely underexplored: *time-series*. In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural network. We identify two core symmetries: *scale and translation*, and construct scale-translation equivariant neural networks for time-series learning. Intriguingly, we find that scale-translation equivariant mappings share strong resemblance with the *wavelet transform*. Inspired by this resemblance, we term our networks *Wavelet* Networks, and show that they perform nested non-linear wavelet-like time-frequency transforms. Empirical results show that Wavelet Networks outperform conventional CNNs on raw waveforms, and match strongly engineered spectrogram techniques across several tasks and time-series types, including audio, environmental sounds, and electrical signals. Our code is publicly available at https://github.com/dwromero/wavelet_networks. ## 1 Introduction Leveraging the symmetries inherent to specific data domains for the construction of statistical models, such as neural networks, has proven highly advantageous, by restricting the model to the family of functions that accurately describes the data. A prime example or this principle is Convolutional Neural Networks (CNNs) (LeCun et al., 1989). CNNs embrace the translation symmetries in visual data by restricting their mappings to a *convolutional* structure. Convolutions possess a distinctive property called *translation equivariance*: if the input is translated, the output undergoes an equal translation. This property endows CNNs with better data efficiency and generalization than unconstrained models like multi-layered perceptrons. Group equivariant convolutional neural networks (G-CNNs) (Cohen & Welling, 2016) extend equivariance to more general symmetry groups through the use of *group convolutions*. Group convolutions are group equivariant: if the input is transformed by the symmetries described by the group, e.g., scaling, the output undergoes an equal transformation. Equivariance to larger symmetry groups endows G-CNNs with increased data efficiency and generalization on data exhibiting these symmetries. Existing group equivariance research primarily focuses on symmetries found in visual data, e.g., planar rotations, planar scaling (Weiler et al., 2018; Worrall & Welling, 2019; Sosnovik et al., 2020), and more recently, on 3D symmetries, e.g., for spherical ∗Work done while at the Vrije Universiteit Amsterdam ![1_image_0.png](1_image_0.png) Figure 1: Equivariance, invariance and their impact on the hierarchical representations. In a group equivariant mapping, when the input is transformed by a group transformation, its output undergoes an equivalent transformation (Fig. 1a). In contrast, in group invariant maps, the output remains unchanged for all group transformations of the input (Fig. 1b). This distinction holds significant implications in the construction of hierarchical feature representations. For example, a face recognition system built upon invariant eye, nose and mouth detectors would be unable to set the portraits in Fig. 1c apart. However, by leveraging equivariant mappings, information about the input transformations can be used to distinguish these portraits effectively. In essence, in contrast to equivariant maps, invariant maps permit senseless pattern combinations resulting for overly restraining constraints in their design. and molecular data (Thomas et al., 2018; Fuchs et al., 2020; Satorras et al., 2021). Yet, an important category remains underexplored, which also exhibits symmetries: *time-series*. Notably, their translation symmetry is a cornerstone in signal processing and system analysis, e.g., Linear Time-Invariant (LTI) systems. In this work, we bridge this gap by constructing neural networks that embrace the symmetries inherent to time-series. We begin by asking: "*What symmetries are inherently present in time-series?*" We identify two fundamental symmetries –*scale and translation*–, whose combination elucidate several phenomena observed in time-series, e.g., temporal translations, phase shifts, temporal scaling, resolution changes, pitch shifts, seasonal occurrences, etc. By leveraging group convolutions equivariant to the *scale-translation* group, we construct neural architectures such that when the input undergoes translation, scaling or a combination of the two, all intermediate layers will undergo an equal transformation in a hierarchical manner, akin to the methods proposed by Sosnovik et al. (2020); Zhu et al. (2022) for visual data. Interestingly, we observe that constructing convolutional layers equivariant to scale and translation results in layers that closely resemble the *wavelet transform*. However, we find that in order to preserve these symmetries consistently across the whole network, the output of each layer must be processed by a layer that also behaves like the wavelet transform. This approach substantially deviates from common approaches that rely on spectro-temporal representations, e.g., the wavelet transform, which compute spectro-temporal representations once and pass their response to a 2D CNN for further processing. Inspired by the resemblance of scale-translation group equivariant convolutions with the wavelet transform, we term our scale-translation equivariant networks for time-series processing *Wavelet Networks*. Extensive empirical results show that Wavelet Networks consistently outperform conventional CNNs operating on raw waveforms, and match strongly engineered spectogram-based approaches, e.g., on Mel-spectrograms, across several tasks and time-series types, e.g., audio, environmental sounds, electrical signals. To our best knowledge, we are first to propose scale-translation equivariant neural networks for time-series processing. ## 2 Related Work Learning from raw time-series. Several end-to-end learning approaches for time-series exist (Dieleman & Schrauwen, 2014; Dieleman et al., 2016; Dai et al., 2017; Rethage et al., 2018; Stoller et al., 2018). Given the considerable high-dimensionality of time-series, existing works focus on devising techniques with parameterand compute-efficient large memory horizons (Romero et al., 2021; Goel et al., 2022). Due to small effective memory horizons and long training times, Recurrent Neural Networks (RNNs) (Rumelhart et al., 1985) have gradually been overshadowed by CNN backbones (Bai et al., 2018). While CNNs are equivariant to translations, they do not inherently incorporate a distinct notion of scale. Although methods involving layer-wise multi-scale representations have been proposed, e.g., Zhu et al. (2016); Lu et al. (2019); von Platen et al. (2019); Guizzo et al. (2020), these layers are not scale equivariant. As a result, networks incorporating them struggle to maintain consistent scale information across layers. Group-invariant time-series learning. Learning invariant representations from raw speech and sound has been extensively studied in past. Scattering operators (Mallat, 2012; Bruna & Mallat, 2013) construct group *invariant* feature representations that can be used to construct neural architectures invariant to scale and translation (Andén & Mallat, 2014; Peddinti et al., 2014; Salamon & Bello, 2015). In contrast to the invariant feature representations developed by these works, Wavelet networks construct *equivariant* feature representations. Since group equivariance is a generalization of group invariance (Fig. 1b, Sec. 3.1), Wavelet Networks accommodate a broader functional family than previous works, while still upholding scale and translation preservation. Notably, equivariant methods shown superior performance compared to invariant methods across several tasks, even for intrinsically invariant tasks like classification (Cohen & Welling, 2016; Zaheer et al., 2017; Maron et al., 2018). This phenomenon stems from the hierarchical form in which neural networks extract features. Enforcing invariance early in the feature extraction process imposes an overly restrictive constraint in the resulting models (Fig. 1c). Group-equivariant time-series learning. To our best knowledge, Zhang et al. (2015) is the only approach that proposes equivariant learning for time-series data. They propose to learn feature representations equivariant to vocal tract length changes –an inherent symmetry of speech. However, vocal tract length changes do not conform to the mathematical definition of a group, making this equivariance only an approximate estimation. Interestingly, vocal tract length changes can be characterized by specific (scale, translation) tuples. Consequently, considering equivariance to the scale-translation group implicitly describes vocal tract length changes as well as many other symmetries encountered in audio, speech and other time-series modalities. ## 3 Background This work assumes a basic familiarity with the concepts of a group, a subgroup and a group action. For those who may not be acquainted with these terms, we introduce these terms in Appx. A. ## 3.1 Group Equivariance, Group Invariance And Symmetry Preservation $$\forall x\in{\mathcal{X}},\forall g\in{\mathcal{G}}.$$ Group equivariance. Group equivariance is the property of a mapping to respect the transformations in a group. We say that a map is equivariant to a group if a transformation of the input by elements of the group leads to an equivalent transformation of the output (Fig. 1a). Formally, for a group G with elements g ∈ G acting on a set X, and a mapping ϕ : X → X, we say that ϕ is equivariant to G if: ϕ(gx) = gϕ(x), ∀x ∈ X, ∀g ∈ G. (1) For example, the convolution of a signal f : R → R and a kernel ψ : R → R is *equivariant to the group of* translations –or *translation equivariant*– because when the input is translated, its convolutional descriptors are equivalently translated, i.e., (ψ ∗ Ltf)=Lt(ψ ∗ f), with Lt a translation operator by t: Ltf(x)=f(x−t). Group invariance. Group invariance is a special case of group equivariance in which the output of the map is equal for all transformations of the input (Fig. 1b). Formally, for a group G with elements g ∈ G acting on a set X, and a mapping ϕ : X → X, we say that ϕ is invariant to G if: ϕ(gx) = ϕ(x), ∀x ∈ X, ∀g ∈ G. (2) Relation to symmetry preservation. A symmetry-preserving mapping preserves the symmetries of the input. That is, if the input has certain symmetries, e.g., translation, rotation, scale, these symmetries will also be present in the output. Since symmetries are mathematically described as groups, it follows that group equivariant mappings preserve the symmetries of the group to which the mapping is equivariant. In contrast, invariant mappings *do not* preserve symmetry, as they remove all symmetric information from the input. ## 3.2 Symmetry-Preserving Mappings: The Group And The Lifting Convolution When talking about (linear) symmetry-preserving mappings, we are obliged to talk about the group convolution. Previous work has shown that group convolutions are the most general class of group equivariant linear maps (Cohen et al., 2019). Hence, it holds that any linear equivariant map is in fact a group convolution. Group convolution. Let f : G → R and ψ : G → R be a scalar-valued signal and convolutional kernel defined on a group G. The group convolution (∗G) between f and ψ is given by: $$(f*_{\mathcal{G}}\psi)(g)=\!\!\int_{\mathcal{G}}f(\gamma)\mathcal{L}_{g}\psi(\gamma)\,\operatorname{d}\!\mu_{\mathcal{G}}(\gamma)=\!\!\int_{\mathcal{G}}f(\gamma)\psi\left(g^{-1}\gamma\right)\,\operatorname{d}\!\mu_{\mathcal{G}}(\gamma).$$ −1γdµG(γ). (3) $$(1)$$ $\left(2\right)$. $$x,\forall g\in{\mathfrak{g}}.$$ $$\left({\mathfrak{I}}{\mathfrak{I}}\right)$$ 3 ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) Figure 2: Locality of visual and auditory objects. Whereas visual objects are local (left), auditory objects are not. The latter often cover large parts of the frequency axis in a sparse manner (right). where *g, γ* ∈ G, Lgψ(γ)=ψg −1γis the action of the group G on the kernel ψ, and µG(γ) is the (invariant) Haar measure of the group G for γ. Notably, the group convolution generalizes the translation equivariance of convolutions to general groups. The group convolution is equivariant in the sense that for all *γ, g* ∈ G, Lg(f ∗G ψ)(γ) = (Lgf ∗G ψ)(γ), with Lgf(γ)=fg −1γ. (4) The lifting convolution. In practice, the input signals f might not be readily defined on the group of interest G, but on a sub-domain thereof X, i.e., f : X → R. For example, time-series are defined on R although we might want to consider larger groups such as the scale-translation group. Hence, we require a symmetry-preserving mapping from X to G that *lifts* the input signal to G to use group convolutions. This operation is called a *lifting convolution*. Formally, with f : X → R and ψ : X → R a scalar-valued signal and convolutional kernel defined on X, and X a sub-group of G, the lifting convolution (∗G↑) is a mapping from functions on X to functions on G defined as: tion functions on $\mathcal{C}^{\prime}$ to functions on $\mathcal{Q}^{\prime}$ defined as: $$(f*_{\mathcal{C}^{\prime}}\psi)(g)=\int_{\mathcal{X}}f(x)\mathcal{L}_{g}\psi(x)\ \mathrm{d}\mu_{\mathcal{G}}(x)=\int_{\mathcal{X}}f(x)\psi(g^{-1}x)\ \mathrm{d}\mu_{\mathcal{G}}(x)$$ Note that, the lifting convolution is also group equivariant mapping. That is, $\mathcal{L}_{g}(f*_{\mathcal{C}^{\prime}}\psi)\!=\!(\mathcal{L}_{g}f*_{\mathcal{C}^{\prime}}\psi)$. $\left(5\right)^{\frac{1}{2}}$ ## 4 The Problem Of Learning 2D Convolutional Kernels On The Time-Frequency Plane CNNs have been a major breakthrough in computer vision, yielding startling results in countless applications. Due to their success, several works have proposed to treat *spectro-temporal representations* –representations on the time-frequency plane– as 2D images and learn 2D CNNs on top. In this section, we delve into the differences between visual and spectro-temporal representations, and assess the suitability of training 2D CNNs directly on top of spectro-temporal representations. Our analysis suggest that treating spectro-temporal representations as images and learning 2D CNNs on top might not be adequate for effective time-series learning. To enhance clarity, we define spectro-temporal representations in separate gray boxes throughout the section to avoid interrupting the reading flow. Those already familiar with these concepts may skip these boxes. Spectro-temporal representations. Let f(t) ∈ L 2(R) be a square integrable function on R. An spectro-temporal representation Φ[f](*t, ω*) : R 2 → C of f is constructed by means of a linear timefrequency transform Φ that correlates the signal f with a dictionary D of localized *time-frequency* atoms D={ϕt,ω}t∈R,ω∈R, ϕt,ω : R → C of finite energy and unitary norm, i.e., ϕt,ω ∈ L 2(R), ∥ϕt,ω∥ 2=1, ∀t ∈ R, ω ∈ R. The resulting spectro-temporal representation Φ[f] is given by: $$\Phi[f](t,\omega)=\langle f,\phi_{t,\omega}\rangle=\int_{\mathbb{R}}f(\tau)\phi_{t,\omega}^{*}(\tau)\,\mathrm{d}\tau,$$ t,ω(τ ) dτ, (6) with ϕ ∗the complex conjugate of ϕ, and ⟨·, ·⟩ the dot product of its arguments. Using different timefrequency components ϕt,ω, spectro-temporal representations with different properties can be obtained. ## 4.1 Fundamental Differences Between Visual Representations And Spectro-Temporal Representations There exist two fundamental distinctions between visual data and spectro-temporal representations, which are universal to all spectro-temporal representations: (i) locality and (ii) transparency. Unlike visual data, auditory signals exhibit strong *non-local* characteristics. Auditory signals consist of auditory objects, e.g., spoken words, which contain components resonating at multiple non-local frequencies known as the *harmonics of the signal*. Consequently, the spectro-temporal representations of auditory objects often occupy a significant portion of the time-frequency plane –particularly along the frequency axis (ω)– in a sparse manner $$({\mathfrak{h}})$$ Figure 3: Occlusion and superposition. Visual objects occlude each other when they appear simultaneously ![4_image_1.png](4_image_1.png) at a given position (left). Auditory objects, instead, superpose at all shared positions (right). (Fig. 2). Furthermore, when considering auditory signals comprising multiple auditory objects, these objects exhibit a phenomenon known as *superposition*. This property is notably different from visual data, where visual objects in the same location *occlude* one another, resulting in only the object closest to the camera being visible (Fig. 3). This inherent property of sound is colloquially referred to as *transparency*. ## 4.2 The Problem Of Learning 2D Kernels On Short-Time Fourier Spectro-Temporal Representations The short-time Fourier transform constructs a representation in which a signal is decomposed in terms of its correlation with time-frequency atoms of constant time and frequency resolution. As a result, it is effective as long as the signal f does not exhibit *transient behavior* –components that evolve quickly over time– with some waveform structures being very localized in time and others very localized in frequency. The short-time Fourier transform. The short-time Fourier transform S –also called *windowed* Fourier transform– is a linear time-frequency transform that uses a dictionary of time-frequency atoms ϕt,ω(τ )=w(τ −t)e−iωτ, t∈R, ω∈R, constructed with a symmetric window w(τ ) of local support shifted by t and modulated by the frequency ω. The spectro-temporal representation S[f] is given by: $${\mathcal{S}}[f](t,\omega)=\langle f,\phi_{t,\omega}\rangle=\int_{\mathbb{R}}f(\tau)\phi_{t,\omega}^{*}(\tau)\,\mathrm{d}\tau=\int_{\mathbb{R}}f(\tau)w(\tau-t)\mathrm{e}^{-i\omega\tau}\,\mathrm{d}\tau.$$ f(τ )w(τ − t)e−iωτ dτ. (7) Intuitively, the short-time Fourier transform divides the time-frequency plane in tiles of equal resolution, whose value is given by the correlation between f and the time-frequency atom ϕt,ω (Fig. 4a). Nevertheless, decades of research in psychology and neuroscience have shown that humans largely rely in the transient behavior of auditory signals to distinguish auditory objects (Cherry, 1953; van Noorden et al., 1975; Moore & Gockel, 2012). In addition, it has been shown that the human auditory system has high spectral resolution at low-frequencies and high temporal resolution at higher frequencies (Stevens et al., 1937; Santoro et al., 2014; Bidelman & Khaja, 2014). For example, a semitone at the bottom of the piano scale (∼30Hz) is of about 1.5Hz, while at the top of the musical scale (∼5kHz) it is of about 200Hz. These properties of the human auditory signal largely contrast both with (i) the inability of the short-time Fourier transform to detect transient signals, as well as with (ii) its constant spectro-temporal resolution. To account for these differences, improved spectro-temporal representations on top of the short-time Fourier transform have been proposed such as log-Mel spectrograms (Stevens et al., 1937; Furui, 1986). These developments revolve around transforming the frequency axis of the short-time Fourier transform in a logarithmic scale, thereby compressing the frequency axis and better aligning with the spectro-temporal resolution of ![4_image_2.png](4_image_2.png) 5 $\left(7\right)$. ![4_image_0.png](4_image_0.png) Figure 4: Tiling of the time-frequency plane for the short-time Fourier transform (Fig. 4a) and the wavelet transform (Fig. 4b). The short-time Fourier transform divides the time-frequency plane in tiles of equal resolution. This makes it adequate for signals without transient behaviour. The Wavelet transform, on the other hand, divides the time-frequency plane on tiless of changing spectro-temporal resolution. This allows it to represent detect highly localized events both on time and frequency. the human auditory system. Consequently, this adjustment enables local structures, e.g., 2D convolutional kernels, to better capture non-local relationships (Ullrich et al., 2014; Choi et al., 2016; Xu et al., 2018). However, despite their improved learning characteristics, these spectro-temporal representations remain *incomplete* due to their inability to modify the constant temporal resolution of the short-time Fourier transform. ## 4.3 The Problem Of Learning 2D Kernels On Wavelet Spectro-Temporal Representations In contrast to the short-time Fourier transform, the Wavelet transform constructs a spectro-temporal representation in terms of correlations with time-frequency atoms, *whose time and frequency resolution change*. As a result, the resulting decomposition of the time-frequency plane allows the wavelet transform to correctly describe signals with transient behaviour with localized components both on time and frequency (Fig. 4b). The wavelet transform. The wavelet transform W is a linear time-frequency transform that uses a dictionary of time-frequency atoms ϕt,ω(τ )= √ 1 ω ψτ−t ω , t ∈ R, ω ∈ R≥0. The function ψt,ω is called a Wavelet and satisfies the properties of having zero mean, i.e., Rψt,ω(τ )dτ=0,s and being unitary, i.e., ∥ψt,ω∥ 2=1, for any t ∈ R, ω ∈ R≥0. The resulting spectro-temporal representation W[f] is given by: $$\mathcal{W}[f](t,\omega)=\langle f,\phi_{t,\omega}\rangle=\int_{\mathbb{R}}f(\tau)\phi_{t,\omega}^{*}(\tau)\,\mathrm{d}\tau=\int_{\mathbb{R}}f(\tau)\frac{1}{\sqrt{\omega}}\psi\left(\frac{\tau-t}{\omega}\right)\,\mathrm{d}\tau.$$ $\left(\mathfrak{S}\right)$. dτ. (8) Intuitively, the Wavelet transform divides the time-frequency plane in tiles of different resolutions, with high frequency resolution and low spatial resolution at low frequencies, and low frequency resolution and high spatial resolution for high frequencies (Fig. 4b).a aImportantly, it is not possible to have high frequency and spatial resolution at the same time due to the uncertainty principle (Gabor, 1946). It states that the joint time-frequency resolution of spectro-temporal representations is limited by a minimum surface σϕ,tσϕ,ω ≥ 12 , with σϕ,t, σϕ,ω the spread of the time-frequency atom ϕ on time and frequency. Interestingly, the modus operandi of the wavelet transform *perfectly aligns* with the spectro-temporal resolution used by the human auditory system for the processing of auditory signals. Nevertheless, despite this resemblance, training local 2D structures, e.g., convolutional kernels, directly on the wavelet transform's output stil falls short in addressing the non-local, transparent characteristics inherent in auditory signals. Consequently, researchers have devised several strategies to overcome these challenges, e.g., by defining separable kernels that span large memory horizons along the frequency and time axis independently (Pons & Serra, 2019) or by prioritizing learning along the harmonics of a given frequency (Zhang et al., 2020). As shown in the next section, a better alternative arises from considering the symmetries appearing in timeseries data. Starting from this perspective, we are led to scale-translation equivariant mappings and find striking relationships between these family of mappings and the wavelet transform. Nevertheless, our analysis indicates that *all layers within a neural network should be symmetry preserving* –a condition not met by the methods depicted in this section. By doing so, we devise neural architectures, whose convolutional layers process the output of previous layers in a manner akin to the wavelet transform. As a result, each convolutional layer performs spectro-temporal decompositions of the input in terms of localized time-frequency atoms able to process global and localized patterns both on time and frequency. ## 5 Wavelet Networks: Scale-Translation Equivariant Learning From Raw Waveforms We are interested in mappings that preserve the scale and translation symmetries of time-series. In this section, we start by tailoring lifting and group convolutions to the scale-translation group. Next, we outline the general form of Wavelet Networks and make concrete practical considerations for their implementation. At the end of this section, we formalize the relationship between Wavelet Networks and the wavelet transform, and provide a thorough analysis on the equivariance properties of common spectro-temporal transforms. 5.1 Scale-translation preserving mappings: group convolutions on the scale-translation group We are interested in mappings that preserve scale and translation. By imposing equivariance to the *scaletranslation* group, we guarantee that if input patterns are scaled, translated, or both, their feature representations will transform accordingly, but not be modified. The scale-translation group. From a mathematical perspective scale and translational symmetries are described by the affine *scale-translation group* G=R ⋊ R≥0, which emerges from the semi-direct product of ![6_image_0.png](6_image_0.png) $\left(9\right)$. Figure 5: The action of unimodular and nonunimodular groups. Most unimodular groups, e.g., rotation, mirroring, keep the volume of the objects they act upon intact. In contrast, non-unimodular groups, e.g., scaling, change it through their action. the translation group T=(R, +) and the scale group S=(R≥0, ×) acting on R. As a result, we have that the resulting group product is given by g · γ=(t, s)·(*τ, ς*)=(t+*sτ, s* ·ς), with *t, τ* ∈ R and *s, ς* ∈ R≥0. In addition, by solving g −1· g=e, we obtain that the inverse of a group element g=(*t, s*) is given by g −1=s −1(−t, 1). Semi-direct product and affine groups. When treating data defined on R d, one is mainly interested in the analysis of groups of the form G=R d ⋊ H resulting from the *semi-direct product* (⋊) between the translation group (R d, +) and an arbitrary (Lie) group H acting on R d, e.g., rotation, scale, etc. This kind of groups are called *affine groups* and their group product is defined as: g1 · g2 = (x1, h1) · (x2, h2) = (x1 + Ah1 (x2), h1 · h2), (9) with $g_1$=0. with g1=(x1, h1), g2=(x2, h2) ∈ G, x1, x2 ∈ R d and h1, h2 ∈ H. A denotes the action of H on R d. Unimodular and non-unimodular groups. Unimodular groups, such as rotation, translation and mirroring, are groups whose action keeps the volume of the objects on which they act intact (Fig. 5). Recall that a group convolution performs an integral over the whole group (Eq. 3). Hence, for its result to be invariant over different group actions, it is required for the Haar measure to be equal for all elements of the group –therefore the name *invariant* Haar measure. Since the action of (most) unimodular groups does not alter the size of the objects on which they act, their action on infinitesimal objects keeps their size unchanged. As a consequence, for (most) unimodular groups, the Haar measure is equal to the Lebesgue measure, i.e., dµG(γ)=dγ, ∀γ ∈ G, and therefore, it is often omitted in literature, e.g., in Cohen & Welling (2016). In contrast, non-unimodular groups, such as the scale group and the scale-translation group, do modify the size of objects on which they act (Fig. 5 right). Consequently, their action on infinitesimal objects changes their size. As a result, the Haar measure must be treated carefully in order to obtain equivariance to nonunimodular groups (Bekkers, 2020). The Haar measure guarantees that dµG(γ)=dµG(gγ), ∀ *g, γ* ∈ G. For the scale-translation group, it is obtained as: $$\mathrm{d}\mu_{G}(\gamma)=\mathrm{d}\mu_{G}(g\gamma)=\mathrm{d}\mu_{G}(t+s\tau,s\varsigma)=\mathrm{d}\mu_{G}(t+s\tau)\mathrm{d}\mu_{G}(s\varsigma)=\frac{1}{|s|}\mathrm{d}\tau\frac{1}{|s|}\mathrm{d}\varsigma,\tag{10}$$ where g=(t, s), γ=(τ, ς) ∈ G, t, τ ∈ R, *s, ς* ∈ R>0; dτ , dς are the Lebesgue measure of the respective spaces; and |s| depicts the determinant of the matrix representation of the group element.1Intuitively, the Haar measure counteracts the growth of infinitesimal elements resulting from the action of s on R×R>0. Scale-translation group convolutions. The general formulation of the group convolution is given in Eq. 3. Interestingly, the scale-translation group has additional properties with which this formulation can be simplified. In particular, by taking advantage of the fact that the scale-translation group is an *affine* group G=R ⋊ S, with S=(R>0, ×), as well as of the definition of the Haar measure for the scale-translation group in Eq. 10 we can reformulate the group convolution for the scale-translation group as: (f ∗G ψ)(g) =Z G f(γ)ψ(g −1γ) dµG(γ) (f ∗G ψ)(t, s) = Z S Z R f(τ, ς)ψ(t, s) −1(τ, ς) 1 |s| dτ1 |s| dς = Z S Z R f(τ, ς) 1 s 2 ψs −1(τ − t, ς)dτ dς = Z S Z R f(τ, ς) 1 s 2 Lsψ (τ − t, ς) dτ dς = Z S f ∗R 1 s 2 Lsψ (t, ς) dς (11) 1A member s of the scale group R>0 acting on a N-dimensional space is represented a matrix diag(*s, ..., s*). Since, its determinant sN depends on the value of the group element s, the factor 1 |s| = 1 sN in Eq. 10 cannot be omitted. $$(11)$$ ![7_image_0.png](7_image_0.png) Figure 6: Scale-translation lifting and group convolution. The lifting convolution can be seen a set of 1D convolutions with a bank of scaled convolutional kernels 1 s Lsψ, and the group convolution can be seen as a set of 1D convolutions with a bank of scaled convolutional kernels 1 s2 Lsψ, followed by an integral over scales ς ∈ R. Their main difference is that, for group convolutions, the input f and the convolutional kernel ψ are functions on the scale-translation group whereas for lifting convolutions these are functions on R. Lifting and group convolutions can be seen as spectrotemporal decompositions with large values of s relating to coarse features and small values to finer features. where g=(t, h), γ=(τ, ς) ∈ G, *t, τ* ∈ R, and *s, ς* ∈ R>0; and Lsψ(*τ, ς*)=ψs −1(*τ, ς*)is the (left) action of the scale group S on a convolutional kernel ψ : R × R>0 → R defined on the scale-translation group. In other words, for the scale-translation group, the group convolution can be seen as a set of 1D convolutions with a bank of scaled convolutional kernels { 1 s 2 Lsψ}s∈S*, followed by an integral over scales* ς ∈ R (Fig. 6, bottom). Scale-translation lifting convolution. Like the group convolution, the lifting convolution can also be simplified by considering the properties of the scale-translation group. In particular, we can rewrite it as: $$(f*_{\sigma\tau}\psi)(g)=\int_{\tau}f(\tau)\psi(g^{-1}x)\ \mathrm{d}\mu_{\mathcal{G}}(x)=\int_{\mathbb{R}}f(\tau)\psi(g^{-1}\tau)\mathrm{d}\sigma(\tau)$$ $$(f*_{\sigma\tau}\psi)(t,s)=\int_{\mathbb{R}}f(\tau)\psi((t,s)^{-1}\tau)\ \frac{1}{|s|}\mathrm{d}\tau=\int_{\tau}f(\tau)\ \frac{1}{s}\psi\left(s^{-1}(\tau-t)\right)\ \mathrm{d}\tau=\left(f*_{\mathbb{R}}\frac{1}{s}\mathcal{L}_{s}\psi\right)(t)\tag{12}$$ $\psi$ is the $\tau$-th order of $\tau$. where g=(t, h), γ=(τ, ς) ∈ G, *t, τ* ∈ R, and *s, ς* ∈ R>0; and Lsψ(*τ, ς*)=ψs −1(*τ, ς*)is the (left) action of the scale group S on a 1D convolutional kernel ψ : R → R. In other words, for the scale-translation group, the lifting convolution can be seen as a set of 1D *convolutions with a bank of scaled convolutional kernels* { 1 sLsψ}s∈S (Fig. 6, top). Note that the Haar measure imposes a normalization factor of 1 s 2 for group convolutions and of 1 s for the lifting convolution. This is because space on which the group convolution is performed (R ⋊ R>0) has an additional dimension relative to the space on which the lifting convolution is performed (R). ## 5.2 Wavelet Networks: Architecture And Practical Implementation The general architecture of our proposed Wavelet networks is shown in Fig. 7. Wavelet networks consist of several stacked layers that respect scale and translation. They consist of a lifting group convolution layer that lifts input time-series to the scale-translation group, followed by arbitrarily many group convolutional layers. At the end of the network, a global pooling layer is used to produce scale-translation invariant representations. Due to their construction, Wavelet networks make sure that common neural operations, e.g., point-wise nonlinearities, do not disrupt scale and translation equivariance. This in turn, makes them broadly applicable and easily extendable to other existing neural archi- $\downarrow(-1)$ . ![7_image_1.png](7_image_1.png) Figure 7: Wavelet networks. ![8_image_0.png](8_image_0.png) Figure 9: Riemann integration of functions on R>0 using linear (9a) and exponential grids (9b, 9c). tectures, e.g., ResNets (He et al., 2016), U-Nets (Ronneberger et al., 2015). ## 5.2.1 Group Convolutional Kernels On Continuous Bases Although our previous derivations build upon continuous functions, in practice, computations are performed on discretized versions of these functions. Continuous bases have proven advantageous for the construction of group convolutions as the action of relevant groups often impose transformations not well-defined for discrete bases (Weiler et al., 2018; Bekkers et al., 2018; Weiler & Cesa, 2019). For instance, in the context of scale-translations, scaling a kernel [w1, w2, w3] by a factor of two results in a filter [w1, w1.5, w2, w2.5, w3] wherein the introduced values [w1.5, w2.5] do not exist in the original basis (Fig. 8a). The most adopted solution to address this problem is interpolation, i.e., deriving the value of [w1.5, w2.5] based on the neighbouring known pixels. However, interpolation introduces spurious artifacts which are particularly severe for small kernels. Instead, we adopt an alternative approach: we define convolutional kernels directly on a continuous basis (Fig. 8b). Drawing from the resemblance of gammatone filters – strongly motivated by the physiology of the human auditory system for the processing and recognition of auditory signals (Johannesma, 1972; Hewitt & Meddis, 1994; Lindeberg & Friberg, 2015a)– to B 2-splines, we parameterize our filters within a B 2-spline basis as in Bekkers (2020). As a result, our convolutional filters are parameterized as a linear combination of shifted B 2-splines ψ(τ ):=PN i=1 wiB 2(τ − τi), rather than the commonly used shifted Dirac delta's basis ψ(τ ):=PN i=1 wiδ(τ − τi). ## 5.2.2 Constructing A Discrete Scale Grid From the response of the lifting layers onward, the feature representations of wavelet networks possess an additional axis s ∈ R>0. Just like the spatial axis, this axis must be discretized in order to perform computational operations. That is, we must approximate the scale axis R>0 by a finite set of discrete scales {s} smax s=smin . Inspired by previous work, we approximate the scale axis with a *dyadic set* {2 j} jmax j=jmin (Mallat, 1999; Lindeberg & Friberg, 2015b; Worrall & Welling, 2019). Dyadic sets resemble the spectro-temporal resolution of the human auditory system, and are widely used for discrete versions of the wavelet transform. Integrating on exponential grids. A subtlety arises with respect to integrating over the scale axis when implementing the continuous theory in a discrete setting that is suitable for numerical computations. The group convolutions include scale correction factors as part of the Haar measure, which makes the integration invariant to actions along the scale axis. That is, the integral of a signal f(s) over scale is the same as that of the same signal f(z −1s), whose scale is changed by a factor z ∈ R>0: $$\int_{\mathbb{R}_{>0}}f(z^{-1}s){\frac{1}{s}}\mathrm{d}s\ {\overset{s\to z s}{=}}\int_{\mathbb{R}_{>0}}f(z^{-1}s){\frac{1}{z s}}\mathrm{d}z s=\int_{\mathbb{R}_{>0}}f(s){\frac{1}{s}}\mathrm{d}s.$$ ds. (13) $$(13)$$ We can translate the scale integration to the discrete setting via Riemann integrals, where we sample the function on a grid and take the weighted sum of these values with weights given by the bin-width: $$\int_{\mathbb{R}_{>0}}f(s){\frac{1}{s}}\mathrm{d}s\approx\sum_{i}f(s_{i}){\frac{1}{s_{i}}}\Delta_{i}.$$ $$(14)$$ ∆i. (14) When the scale grid is linear, the bin-widths ∆i are constant, as depicted in Fig. 9a. When the scale grid is exponential, e.g., si=b i−1 with b some base factor, the bin widths are proportional to the scale values at the grid points, i.e., ∆i ∝ si (Fig. 9b). In this setting, the factor 1 si cancels out (up to some constant) with the bin width ∆i, and integration is simply done by summing the values sampled on the scale grid. Consequently, when working with an exponential grid along the scale axis, the factor in the group convolutions (Eq. 11) becomes 1 s instead of 1 s 2 . It is worth mentioning that using an exponential grid is the natural thing to do when dealing with the scale group. The scale group is a multiplicative group with a natural distance between group elements *z, s* ∈ R>0 defined by ∥ log z −1s∥. Consequently, on an exponential grid, the grid points are spaced uniformly with respect to this distance, as illustrated in Fig. 9c. Defining the discrete scale grid. In practice, Wavelet networks must define the number of scales Ns to be considered in the dyadic set as well as its limits smin, smax. Fortunately, it turns out that these values are related to the spatial dimension of the input f itself, and thus, we can use it to determine these values. Let us consider a signal f and a convolutional kernel ψ sampled on discrete grids [1, Nf ], [1, Nψ] ⊂ Z of sizes Nf , and Nψ, respectively. When we re-scale the convolutional kernel ψ, we are restricted (i) at the bottom of the scale axis by the Nyquist criterion, and (ii) at the top of the scale by the scale for which the filter becomes constant in an interval of Nf samples. The Nyquist criterion is required to avoid aliasing and intuitively restricts us to a compression factor on ψ such that it becomes as big as 2 grid samples. On the other hand, by having ψ re-scaled to an extreme to which it is constant in the support of the input signal f, the kernel will only be able to perform average operations. Considerations regarding computational complexity. Note that the computational cost of Wavelet networks increases linearly with the number of scales considered. Hence, it is desirable to reduce the number of scales used as much as possible. To this end, we reason that using scales for which the sampled support of ψ is smaller than Nψ is unnecessary as the functions that can be described at those scales can also be described –and learned– at the unscaled resolution of the kernel s=1. Therefore, we define the minimum scale as smin=1. Furthermore, we reason that using scales for which the support of the filter overpasses that of the input, i.e., Nf ≤ Nψ, is also suboptimal, as the values outside of the region [1, Nf ] are unknown. Therefore, we consider the set of sensible scales to be given by the interval [1, Nf Nψ ]. In terms of a dyadic set {2 j} jmax j=jmin , this corresponds to the j-values given by the interval [0, 1, 2*, ..., j*max s.t. Nψ 2 jmax ≤ Nf ]. Effect of downsampling on the scale grids used. Neural architectures utilize pooling operations, e.g., maxpooling, to reduce the spatial dimension of the input as a function of depth. Following the rationale outlined in the previous paragraph, we take advantage of these reductions to reduce the number of scales that representations at a given depth should use. Specifically, we use the factor of downsampling as a proxy for the number of scales that can be disregarded. For example, if we use a pooling of 8 at a given layer, subsequent layers should reduce the number of scales considered by the same factor, i.e., 2 3. For a set of dyadic scales before a pooling layer given by {2 j} jmax j=jmin and a pooling layer of factor 2 p, the set of dyadic scales considered after pooling will be given by {2 j} jmax−p j=jmin . ## 5.2.3 Imposing Wavelet Structure To The Learned Convolutional Kernels In classical spectro-temporal analysis, wavelets are designed to have unit norm ∥ψ∥ 2=1 and zero mean Rψ(τ ) dτ=0. These constraints are useful for both theoretical and practical reasons including energy preservation, numerical stability and the ability to act as band-pass filters (Mallat, 1999). Since Wavelet networks construct time-frequency representations of the input, we experiment with an additional regularization loss that encourages the learned convolutional kernels to behave like wavelets. First, we note that lifting and group convolutions inherently incorporate a normalization term - 1 s , 1 s 2 - in their definitions. Therefore, the normalization criterion is inherently satisfied. To encourage the learned kernels to have zero mean, we formulate a regularization term that promote this behaviour. Denoting ψd as the convolutional kernel at the d-th layer of a neural network with D convolutional layers, the regularization term Lwavelet is defined as: $\mathcal{L}_{\text{wavelet}}=\sum_{d=1}^{\text{D}}\|\text{mean}(\psi_d)\|^2.$ For wavelet structure in the learned convolutional kernels, one. $$(15)$$ Interestingly, we observe that enforcing wavelet structure in the learned convolutional kernels consistently yields improved performance across all tasks considered (Sec. 6). This result underscores the potential value of integrating insights from classical signal processing, e.g., spectro-temporal analysis (Scharf, 1991; Mallat, 1999; Daubechies, 2006), in the design of deep learning architectures. ## 5.3 Wavelet Networks Perform Nested Non-Linear Time-Frequency Transforms Interestingly, we can use spectro-temporal analysis to understand the modus operandi of wavelet networks. Our analysis reveals that wavelet networks perform nested time-frequency transforms interleaved with pointwise nonlinearities. In this process, each time-frequency transform emerges as a linear combination of parallel wavelet-like transformations of the input computed with learnable convolutional kernels ψ. The relation between scale-translation equivariant mappings and the wavelet transform. The wavelet transform shows many similarities to the scale-translation group and lifting convolutions (Grossmann et al., 1985). In fact, by analyzing the definition of the wavelet transform (Eq. 8), we obtain that the Wavelet is equivalent to a lifting group convolution (Eq. 12 with $\omega{=}s$) up to a normalization factor $\frac{1}{\sqrt{\omega}}$: $$W[f](t,\omega)=\int_{R}f(\tau)\frac{1}{\sqrt{\omega}}\psi\left(\frac{\tau-t}{\omega}\right)\,\mathrm{d}\tau=\int_{R}f(\tau)\frac{1}{\sqrt{\omega}}\psi\left(\omega^{-1}(\tau-t)\right)\,\mathrm{d}\tau$$ $$=\int_{R}f(\tau)\frac{1}{\sqrt{\omega}}\mathcal{L}_{\omega}\psi\left(\tau-t\right)\,\mathrm{d}\tau=\left(f*\frac{1}{\sqrt{\omega}}\mathcal{L}_{\omega}\psi\right)(t)=\frac{1}{\sqrt{\omega}}\left(f*_{\mathcal{G}\uparrow}\psi\right)(t,\omega).\tag{16}$$ ore, if we let the input $f$ be a function defined on the scale-translation group, and let $\omega$ act on according to the group structure of the scale-translation group, we have that the scale-translation $$\left(17\right)$$ this group according to the group structure of the scale-translation group, we have that the scale-translation group convolution is equivalent to a Wavelet transform whose input has been obtained by a previously applied Wavelet transform, up to a normalization factor 1 ω √ω : $$\mathcal{W}[f](t,\omega)=\int_{\mathbb{R}>0}\int_{\mathbb{R}}f(\tau,\varsigma)\frac{1}{\sqrt{\omega}}\psi\left(\omega^{-1}(\tau-t),\varsigma\right)\;\mathrm{d}\tau\;\mathrm{d}\varsigma=\int_{\mathbb{R}>0}\int_{\mathbb{R}}f(\tau,\varsigma)\frac{1}{\sqrt{\omega}}\mathcal{L}_{\omega}\psi\left(\tau-t,\varsigma\right)\;\mathrm{d}\tau\;\mathrm{d}\varsigma$$ $$=\int_{\mathbb{R}>0}\left(f*_{\mathbb{R}}\frac{1}{\sqrt{\omega}}\mathcal{L}_{\omega}\psi\right)(t,\varsigma)\;\mathrm{d}\varsigma=\frac{1}{\omega\sqrt{\omega}}\left(f*_{G}\psi\right)(t,\omega)\tag{1}$$ and $\mathcal{W}[f](t,\omega)$ are homogeneous solutions with $\omega$ but $\omega$ is non-zero, $\omega$ is not bounded. In other words, lifting and group convolutions on the scale-translation group can be interpreted as linear time-frequency transforms that adopt time-frequency plane tiling akin wavelet transform (Fig. 4b), for which the group convolution accepts wavelet-like spectro-temporal representations as input. Equivariance properties of common time-frequency transforms. For completeness, we also analyze the equivariance properties of common time-frequency transforms and their normalized representations, e.g., spectrogram. Careful interpretations and proofs are provided in Appx. B. Let Lt0 f = f(t − t0) and Ls0 f(t) = f(s −1 0t), t0 ∈ R, s0 ∈ R>0, be translation and scaling operators. The Fourier, short-time Fourier and Wavelet transform of Lt0 f and Ls0 f, f ∈ L 2(R), are given by: - **Fourier Transform:** - **$\;$** **Short-Time Fourier Transform:** $$\delta[\mathcal{L}_{t_{0}}f](t,\omega)=\mathrm{e}^{-i\omega t_{0}}\mathcal{L}_{t_{0}}\delta[f](t,\omega)$$ $$\delta[\mathcal{L}_{s_{0}}f](t,\omega)\approx s_{0}\,\delta[f](s_{0}^{-1}t,s_{0}\omega)$$ $$\begin{array}{l}{{{\mathcal{F}}[{\mathcal{L}}_{t_{0}}f](\omega)=\mathrm{e}^{-i\omega t_{0}}{\mathcal{F}}[f](\omega)}}\\ {{{\mathcal{F}}[{\mathcal{L}}_{s_{0}}f](\omega)=s_{0}{\mathcal{L}}_{s_{0}^{-1}}{\mathcal{F}}[f](\omega)}}\end{array}$$ $$\begin{array}{l}{{\to|{\mathcal{F}}[{\mathcal{L}}_{t_{0}}f](\omega)|^{2}=|{\mathcal{F}}[f](\omega)|^{2}}}\\ {{\to|{\mathcal{F}}[{\mathcal{L}}_{s_{0}}f](\omega)|^{2}=|s_{0}|^{2}|{\mathcal{L}}_{s_{0}^{-1}}{\mathcal{F}}[f](\omega)|^{2}}}\end{array}$$ F[f](ω) → |F[Ls0 2(19) f](*t, ω*) = e−iωt0Lt0 S[f](t, ω) → |S[Lt0 $$\to|\mathcal{S}[\mathcal{L}_{t_{0}}f](t,\omega)|^{2}=|\mathcal{L}_{t_{0}}\mathcal{S}[f](t,\omega)|^{2}\tag{20}$$ $$\to|\mathcal{S}[\mathcal{L}_{s_{0}}f](t,\omega)|^{2}\approx|s_{0}|^{2}|\mathcal{S}[f](s_{0}^{-1}t,s_{0}\omega)|^{2}\ (\ast)\tag{21}$$ 0t, s0ω) → |S[Ls0 $$\begin{array}{l}{(18)}\\ {(19)}\end{array}$$ - **Wavelet Transform:** $$\begin{array}{l}{{\Psi[{\mathcal{L}}_{t_{0}}[f]](t,\omega)={\mathcal{L}}_{t_{0}}\Psi[f](t,\omega)}}\\ {{\Psi[{\mathcal{L}}_{s_{0}}f](t,\omega)=\sqrt{s_{0}}\ {\mathcal{L}}_{s_{0}}\Psi[f](t,\omega)}}\end{array}$$ Wavelet. $$\newcommand{\vecs}[1]{\overset{\rightharpoonup}{\mathbf{#1}}}$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}}$$ [f]](*t, ω*) = Lt0W[f](t, ω) → |W[Lt0 2(22) $$\begin{array}{l}{{\to|\mathcal{W}[\mathcal{L}_{t_{0}}f](t,\omega)|^{2}=|\mathcal{L}_{t_{0}}\mathcal{W}[f](t,\omega)|^{2}}}\\ {{\to|\mathcal{W}[\mathcal{L}_{s_{0}}f](t,\omega)|^{2}=|\mathcal{L}_{s_{0}}\mathcal{W}[f](t,\omega)|^{2}}}\end{array}$$ f](*t, ω*) = √s0 Ls0W[f](t, ω) → |W[Ls0 2(23) (∗) Eq. 21 only approximately holds for large windows (see Appx. B.2 for a detailed explanation). In other words, the Wavelet transform and the scalogram |W[·]| 2 are the only time-frequency representations that exhibit both translation and scaling equivariance in a practical way. Wavelet networks apply parallel time-frequency transforms with learned bases at every layer. So far, our analysis has been defined for scalar-valued input and convolutional kernels. However, in practice, convolutional layers perform operations between inputs f : R → R Nin and convolutional kernels ψ : R → R Nout×Nin to produce outputs (f ∗ ψ) : R → R Nout as the linear combination along the Nin dimension of convolutions with several learned convolutional kernels computed in parallel: $$(f*\psi)_{o}=\sum_{i=1}^{\mathrm{N}_{\mathrm{in}}}(f_{i}*\psi_{i}),\ \ \ o\in[1,2,...,\mathrm{N}_{\mathrm{out}}].$$ $$(222)$$ $$(23)^{\frac{1}{2}}$$ $$(24)$$ (fi ∗ ψi), o ∈ [1, 2*, ...,* Nout]. (24) In practice, both lifting and group convolutional layers adhere to the same structure. In a dilation-translation convolutional layer with Nout output channels, Nout independent convolutional kernels, each consisting of Nin channels, are learned. During the forward pass, the input is group-convolved with each of these kernels in parallel. The Nout output channels are then formed by linearly combining the outcomes of the Nin channels. In other words, lifting and group convolutional layers produce linear combinations of distinct time-frequency decompositions of the input computed in parallel at each layer. Wavelet networks are scale-translation equivariant nested non-linear time-frequency transforms. Just like in conventional neural architectures, the outputs of lifting and group convolutional layers are interleaved with point-wise nonlinearities. Therefore, wavelet networks compute nonlinear scale-translation equivariant feature representations that resemble nested nonlinear time-frequency transforms of the input. ## 6 Experiments In this section, we empirically evaluate wavelet networks. To this end, we take existing neural architectures designed to process raw signals and construct equivalent wavelet networks (W-Nets). We then compare the performance of W-Nets and the corresponding baselines on tasks defined on raw environmental sounds, raw audio and raw electric signals. We replicate as close as possible the training regime of the corresponding baselines and utilize their implementation as a baseline whenever possible. Detailed descriptions of the specific architectures as well as the hyperparameters used for each experiment are provided in Appx. C. ## 6.1 Classification Of Environmental Sounds First, we consider the task of classifying environmental sounds on the UrbanSound8K (US8K) dataset (Salamon et al., 2014). The US8K dataset consists of 8732 audio clips uniformly drawn from 10 environmental sounds, e.g., siren, jackhammer, etc, of 4 seconds or less, with a total of 9.7 hours of audio. Experimental setup. We compare the Mn-Nets of Dai et al. (2017) and the 1DCNNs of Abdoli et al. (2019) with equivalent W-Nets in terms of number of layers and parameters. Contrarily to Dai et al. (2017) we sample the audio files at 22.05kHz as opposed to 8kHz. This results from preliminary studies of the data, which indicated that some classes become indistinguishable for the human ear at such low sampling rates.2 For the comparison with the 1DCNN of Abdoli et al. (2019), we select the 50999-1DCNN as baseline, as it is the network type that requires the less human engineering. We note, however, that we were unable to replicate the results reported in Abdoli et al. (2019). In contrast to the 83±1,3% reported, we were only able to obtain a final accuracy of 62.0±6.791. This inconsistency is further detailed in Appx. C.1. To compare to models other than Mn-nets and 1DCNNs, e.g., Pons et al. (2017a); Tokozume & Harada (2017), we also provide 10-fold cross-validation results. This is done by taking 8 of the 10 official subsets for 2See https://github.com/dwromero/wavelet_networks/blob/master/experiments/UrbanSound8K/data_analysis.ipynb. | Table 1: Experimental results on UrbanSound8K. UrbanSound8K Model 10th Fold Cross-Val. # Params. Acc. (%) Acc. (%) M3-Net 54.48 - 220.67k W3-Net 61.05 - W 219.45k 3-Net-wl 63.08 - M5-Net 69.89 - 558.08k W5-Net 72.28 - W 558.03k 5-Net-wl 74.55 - M11-Net 74.43 - 1.784m W11-Net 79.33 66.97 ± 5.178 W 1.806m 11-Net-wl 80.41 68.47 ± 4.914 M18-Net 69.65 - 3.680m W18-Net 75.87 64.02 ± 4.645 3.759m W18-Net-wl 78.26 65.01 ± 5.431 M34-Net 75.15 - 3.978m W34-Net 76.22 65.69 ± 5.780 W 4.021m 34-Net-wl 78.38 66.77 ± 4.771 1DCNN - 62.00 ± 6.791 453.42k W-1DCNN - 62.47 ± 4.925 458.61k W-1DCNN-wl - 62.64 ± 4.979 Comparison With Other Approaches Model Type Cross-Val. | | | # Params. | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|---------------|-------------| | | | Acc. (%) | | | W11-Net-wl | Raw | 68.47 ± 4.914 | 1.806m | | PiczakCNN Piczak (2015) | Mel Spectrogram | 73.7 | 26m | | VGG Pons & Serra (2019) | | 70.74 | 77m | | EnvNet-v2 Tokozume & Harada (2017) | Raw (Bagging) | 78 | 101m | training, one for validation and one for test. We consistently select the (n−1)mod10 subset for validation when testing on the n-th subset. We note that this training regime might be different from those used in other works, as previous works often do not disclose which subsets are used for validation. Results. Our results (Tab. 1) show that wavelet networks consistently outperform CNNs on raw waveforms. In addition, they are competitive to spectrogram-based approaches, while using significantly fewer parameters and bypassing the need for preprocessing. Furthermore, we observe that encouraging wavelet structure to the convolutional kernels –denoted by the WL suffix– consistently leads to improved accuracy. ## 6.2 Automatic Music Tagging Next, we consider the task of automatic music tagging on the MagnaTagATune (MTAT) dataset (Law et al., 2009). The MTAT dataset consists of 25879 audio clips with a total of 170 hours of audio, along with several per-song tags. The goal of the task is to provide the right tags to each of the songs in the dataset. Experimental setup. Following Lee et al. (2017), we extract the most frequently used 50 tags and trim the audios to 29.1 seconds at a sample-rate of 22.05kHz. Following the convention in literature, we use ROCcurve (AUC) and mean average precision (MAP) as performance metrics. We compare the best performing model of Lee et al. (2017), the 3 9-Net with a corresponding wavelet network denoted W3 9-Net. Results. Our results (Tab. 2) show that wavelet networks consistently outperform CNNs on raw waveforms and perform competitively to spectrogram-based approaches in this dataset as well. In addition, we observe that encouraging the learning of wavelet-like kernels consistently results in increased accuracy as well. ## 6.3 Bearing Fault Detection Finally, we also validate Wavelet networks for the task of condition monitoring in induction motors. To this end, we classify healthy and faulty bearings from raw data provided by Samotics. The dataset consists of 246 clips of 15 seconds sampled at 20kHz. The dataset is slightly unbalanced containing 155 healthy and 91 faulty recordings [155, 91]. The dataset is previously split into a training set of [85, 52] and a test | Table 2: Experimental results on MTAT. MagnaTagATune Average AUC MAP | | | | | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|-----------|-----------|----------|--------| | Model | | # Params. | | | | | Per-class | Per-clip | Per-class | Per-clip | | | | 3 9 -Net | 0.893 | 0.936 | 0.385 | 0.700 | 2.394m | | W3 9 -Net | 0.895 | 0.941 | 0.397 | 0.719 | | | W | | 2.404m | | | | | 3 9 -Net-wl | 0.899 | 0.943 | 0.404 | 0.723 | | | Comparison With Other Approaches Average AUC MAP | | | | | | | Model | | # Params. | | | | | | Per-class | Per-clip | Per-class | Per-clip | | | PCNN Liu et al. (2016) | 0.9013 | 0.9365 | 0.4267 | 0.6902 | - | | CNN Pons et al. (2017a) ∗ | 0.8905 | - | 0.3492 | - | 11.8m | | (Raw) | | | | | | | CNN Pons et al. (2017a) ∗ | 0.9040 | - | 0.3811 | - | 5m | | (Spect.) | | | | | | | CNN Pons et al. (2017b) | 0.893 | - | - | - | 191k | | (Spect.) | | | | | | | ∗ Reported results are obtained in a more difficult version of this dataset. Table 3: Experimental results on bearing fault detection. Model Acc. (%) # Params. M11-Net 65.1376 1.806m W11-Net 68.8073 1.823m W11-Net-wl 70.207 | | | | | | set of [70, 39] samples, respectively. These splits are provided ensuring that measurements from the same motor are not included both in the train and the test set. We utilize 20% of the training set for validation. Each clip is composed of 6 channels measuring both current and voltage on the 3 poles of the motor. Experimental setup. We take the best performing networks on the US8K dataset: the M-11 and W-11 networks, and utilize variants of these architectures for our experiments on this dataset. Results. Once again we observe that Wavelet networks outperform CNNs on raw waveforms and encouraging the learning of wavelet-like kernels consistently improves accuracy (Tab. 3). ## 6.4 Discussion Our empirical results firmly establish wavelet networks as a promising avenue for learning from raw time-series data. Notably, these results highlight that considering the symmetries inherent to time-series data –namely translation and scale– for the development of neural networks consistently leads to improved outcomes. Furthermore, we observe that the benefits of wavelet networks extend beyond sound and audio domains. This result advocates for the use of wavelet networks and scale-translation equivariance for learning on timeseries data from different sources, e.g., financial data, sensory data. Finally, we also note that promoting the learning of wavelet-like convolutional kernels consistently leads to improved outcomes. We posit that this discovery may hold broader implications for group equivariant networks in general. Relation to scale-equivariant models of images and 2D **signals.** In the past, multiple scale-equivariant models have been proposed for the processing of images and 2D signals (Worrall & Welling, 2019; Sosnovik et al., 2020; 2021). Interestingly, we find that the difference in the lengths of the inputs received by image and time-series models leads to very different insights per modality. For comparison, Sosnovik et al. (2020) considers images up to 96×96 pixels, whereas audio files in the US8K dataset are 32.000 samples long. We find that this difference in input lengths has crucial implications for how scale interactions within scaleequivariant models function. Sosnovik et al. (2020) mentions that using inter-scale interactions introduces additional equivariance errors due to the truncation of the set S. Therefore, their networks are built with either no scale interaction or interactions of maximum 2 scales. This strongly contrasts with time-series where incorporating inter-scale interactions consistently leads to performance improvements. In our case, the number of scales and inter-scale interactions is rather constrained by the size and computational cost of convolutional kernels (Sec. 5.2.2) rather than their potential negative impact on the model's accuracy. ## 7 Limitations And Future Work Memory and time consumption grows proportionally to the number of scales considered. The biggest limitation of our approach is the increase in memory and time demands as the number of scales considered grows. One potential avenue to mitigate this could involve adopting Monte-Carlo approximations for the computation of group convolutions (Finzi et al., 2020). This strategy might not only establish equivariance to the continuous scale group –in expectation–, but also dramatically reduce the number of scales considered in each forward pass. Another intriguing direction lies in the extension of partial equivariance (Romero & Lohit, 2022) to the scale group. This extension would enable learning the subset of scales to which the model is equivariant, which in turn could lead to faster execution and enhanced adaptability. Lastly, the adaptation of separable group convolutions (Knigge et al., 2022) offers a means to reduce the computational and memory requirements of wavelet networks. Convolutions with large convolutional kernels: parameterization and efficiency. The foundation of our approach hinges on computing convolutions with banks of dilated convolutional kernels (Eq. 12, 11). Consequently, considering how these kernels are parameterized as well as how these convolutions are computed can unveil avenues for future improvement. Recently, Romero et al. (2021) introduced an expressive continuous parameterization for (large) convolutional kernels that has proven advantageous for complex tasks such as large language modelling (Poli et al., 2023) and processing DNA chains (Nguyen et al., 2023). Exploring the use of this parameterization for wavelet networks could lead to valuable insights and improvements, potentially surpassing the current utilization of B 2-spline bases. Furthermore, convolutional networks that rely on convolutions with very large convolutional kernels, e.g., Romero et al. (2021); Poli et al. (2023); Nguyen et al. (2023), leverage the Fourier transform to compute convolutions in the frequency domain. In the context of wavelet networks, dynamically selecting between spatial and Fourier convolutions based on the size of convolutional kernels has the potential to significantly improve their efficiency. ## 8 Conclusion In conclusion, this study introduces *Wavelet Networks*, a new class of neural networks for raw time-series processing that harness the symmetries inherent to time-series data –scale and translation– for the construction of neural architectures that respect them. We observe a clear connection between the wavelet transform and scale-translation group convolutions, establishing a profound link between our approach and classical spectro-temporal analysis. In contrast to the usual approach, which uses spectro-temporal representations as a frontend for the subsequent use of 2D CNNs, wavelet networks consistently preserve these symmetries across the whole network through the use of convolutional layers that resemble the wavelet transform. Our analysis reveals that wavelet networks combine the benefits of wavelet-like time-frequency decompositions with the adaptability and non-linearity of neural networks. Our empirical results demonstrate the superiority of Wavelet Networks over conventional CNNs on raw time-series data, achieving comparable performance to approaches that rely on engineered spectrogrambased methods, e.g., log-Mel spectrograms, with reduced parameters and no need for preprocessing. This work pioneers the concept of scale-translation equivariant neural networks for time-series analysis, opening new avenues for time-series processing. ## References Sajjad Abdoli, Patrick Cardinal, and Alessandro Lameiras Koerich. End-to-end environmental sound classification using a 1d convolutional neural network. *Expert Systems with Applications*, 136:252–263, 2019. Joakim Andén and Stéphane Mallat. Deep scattering spectrum. *IEEE Transactions on Signal Processing*, 62(16):4114–4128, 2014. Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. *arXiv preprint arXiv:1803.01271*, 2018. Erik J Bekkers. B-spline {cnn}s on lie groups. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=H1gBhkBFDH. Erik J Bekkers, Maxime W Lafarge, Mitko Veta, Koen AJ Eppenhof, Josien PW Pluim, and Remco Duits. Roto-translation covariant convolutional networks for medical image analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 440–448. Springer, 2018. Gavin M Bidelman and Ameenuddin Syed Khaja. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices. *Neuroscience letters*, 572: 53–57, 2014. Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. *IEEE transactions on pattern* analysis and machine intelligence, 35(8):1872–1886, 2013. E Colin Cherry. Some experiments on the recognition of speech, with one and with two ears. *The Journal* of the acoustical society of America, 25(5):975–979, 1953. Keunwoo Choi, George Fazekas, and Mark Sandler. Automatic tagging using deep convolutional neural networks. *arXiv preprint arXiv:1606.00298*, 2016. Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990–2999, 2016. Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. In *Advances in Neural Information Processing Systems*, pp. 9142–9153, 2019. Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, and Samarjit Das. Very deep convolutional neural networks for raw waveforms. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 421–425. IEEE, 2017. Ingrid Daubechies. *Fundamental papers in wavelet theory*. Princeton University Press, 2006. Sander Dieleman and Benjamin Schrauwen. End-to-end learning for music audio. In *2014 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6964–6968. IEEE, 2014. Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional neural networks. *arXiv preprint arXiv:1602.02660*, 2016. Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. *arXiv preprint arXiv:2002.12880*, 2020. Fabian B Fuchs, Daniel E Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation equivariant attention networks. *arXiv preprint arXiv:2006.10503*, 2020. Sadaoki Furui. Speaker-independent isolated word recognition based on emphasized spectral dynamics. In ICASSP'86. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 11, pp. 1991–1994. IEEE, 1986. Dennis Gabor. Theory of communication. part 1: The analysis of information. *Journal of the Institution of* Electrical Engineers-Part III: Radio and Communication Engineering, 93(26):429–441, 1946. Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. It's raw! audio generation with state-space models. In *International Conference on Machine Learning*, pp. 7616–7633. PMLR, 2022. Alex Grossmann, Jean Morlet, and T Paul. Transforms associated to square integrable group representations. i. general results. *Journal of Mathematical Physics*, 26(10):2473–2479, 1985. Eric Guizzo, Tillman Weyde, and Jack Barnett Leveson. Multi-time-scale convolution for emotion recognition from speech audio signals. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech* and Signal Processing (ICASSP), pp. 6489–6493. IEEE, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Michael J Hewitt and Ray Meddis. A computer model of amplitude-modulation sensitivity of single units in the inferior colliculus. *The Journal of the Acoustical Society of America*, 95(4):2145–2159, 1994. PLM Johannesma. The pre-response stimulus ensemble of neurons in the cochlear nucleus. In Symposium on Hearing Theory, 1972. IPO, 1972. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. David M Knigge, David W Romero, and Erik J Bekkers. Exploiting redundancy: Separable group convolutional networks on lie groups. In *International Conference on Machine Learning*, pp. 11359–11386. PMLR, 2022. Edith Law, Kris West, Michael I Mandel, Mert Bay, and J Stephen Downie. Evaluation of algorithms using games: The case of music tagging. In *ISMIR*, pp. 387–392, 2009. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. *Neural computation*, 1 (4):541–551, 1989. Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, and Juhan Nam. Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms. *arXiv preprint arXiv:1703.01789*, 2017. Tony Lindeberg and Anders Friberg. Idealized computational models for auditory receptive fields. PLoS one, 10(3), 2015a. Tony Lindeberg and Anders Friberg. Scale-space theory for auditory signals. In *International Conference* on Scale Space and Variational Methods in Computer Vision, pp. 3–15. Springer, 2015b. Jen-Yu Liu, Shyh-Kang Jeng, and Yi-Hsuan Yang. Applying topological persistence in convolutional neural network for music audio signals. *arXiv preprint arXiv:1608.07373*, 2016. Xugang Lu, Peng Shen, Sheng Li, Yu Tsao, and Hisashi Kawai. Deep progressive multi-scale attention for acoustic event classification. *arXiv preprint arXiv:1912.12011*, 2019. Stéphane Mallat. *A wavelet tour of signal processing*. Elsevier, 1999. Stéphane Mallat. Group invariant scattering. *Communications on Pure and Applied Mathematics*, 65(10): 1331–1398, 2012. Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018. Brian CJ Moore and Hedwig E Gockel. Properties of auditory stream formation. *Philosophical Transactions* of the Royal Society B: Biological Sciences, 367(1591):919–931, 2012. Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, et al. Hyenadna: Long-range genomic sequence modeling at single nucleotide resolution. *arXiv preprint arXiv:2306.15794*, 2023. Vijayaditya Peddinti, TaraN Sainath, Shay Maymon, Bhuvana Ramabhadran, David Nahamoo, and Vaibhava Goel. Deep scattering spectrum with deep neural networks. In *2014 IEEE International Conference* on Acoustics, Speech and Signal Processing (ICASSP), pp. 210–214. IEEE, 2014. Karol J Piczak. Environmental sound classification with convolutional neural networks. In *2015 IEEE 25th* International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6. IEEE, 2015. Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. arXiv preprint arXiv:2302.10866, 2023. Jordi Pons and Xavier Serra. Randomly weighted cnns for (music) audio classification. In *ICASSP 2019-* 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 336–340. IEEE, 2019. Jordi Pons, Oriol Nieto, Matthew Prockup, Erik Schmidt, Andreas Ehmann, and Xavier Serra. End-to-end learning for music audio tagging at scale. *arXiv preprint arXiv:1711.02520*, 2017a. Jordi Pons, Olga Slizovskaia, Rong Gong, Emilia Gómez, and Xavier Serra. Timbre analysis of music audio signals with convolutional neural networks. In 2017 25th European Signal Processing Conference (EUSIPCO), pp. 2744–2748. IEEE, 2017b. Dario Rethage, Jordi Pons, and Xavier Serra. A wavenet for speech denoising. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5069–5073. IEEE, 2018. David W Romero and Suhas Lohit. Learning partial equivariances from data. *Advances in Neural Information* Processing Systems, 35:36466–36478, 2022. David W Romero, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Attentive group equivariant convolutional networks. *arXiv preprint arXiv:2002.03830*, 2020. David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Ckconv: Continuous kernel convolution for sequential data. *arXiv preprint arXiv:2102.02611*, 2021. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241. Springer, 2015. David E Rumelhart, Geoffrey E Hinton, Ronald J Williams, et al. Learning internal representations by error propagation, 1985. Justin Salamon and Juan Pablo Bello. Feature learning with deep scattering for urban sound analysis. In 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 724–728. IEEE, 2015. Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. A dataset and taxonomy for urban sound research. In *Proceedings of the 22nd ACM international conference on Multimedia*, pp. 1041–1044, 2014. Roberta Santoro, Michelle Moerel, Federico De Martino, Rainer Goebel, Kamil Ugurbil, Essa Yacoub, and Elia Formisano. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. *PLoS computational biology*, 10(1), 2014. Vıctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pp. 9323–9332. PMLR, 2021. Louis L Scharf. *Statistical signal processing*, volume 98. Addison-Wesley Reading, MA, 1991. Ivan Sosnovik, Michał Szmaja, and Arnold Smeulders. Scale-equivariant steerable networks. In *International* Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgpugrKPS. Ivan Sosnovik, Artem Moskalev, and Arnold WM Smeulders. Scale equivariance improves siamese tracking. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2765–2774, 2021. Stanley Smith Stevens, John Volkmann, and Edwin B Newman. A scale for the measurement of the psychological magnitude pitch. *The Journal of the Acoustical Society of America*, 8(3):185–190, 1937. Daniel Stoller, Sebastian Ewert, and Simon Dixon. Wave-u-net: A multi-scale neural network for end-to-end audio source separation. *arXiv preprint arXiv:1806.03185*, 2018. Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor Field Networks: Rotation-and Translation-Equivariant Neural Networks for 3D Point Clouds. arXiv preprint arXiv:1802.08219, 2018. Yuji Tokozume and Tatsuya Harada. Learning environmental sounds with end-to-end convolutional neural network. In *2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 2721–2725. IEEE, 2017. Karen Ullrich, Jan Schlüter, and Thomas Grill. Boundary detection in music structure analysis using convolutional neural networks. In *ISMIR*, pp. 417–422, 2014. Leo Paulus Antonie Servatius van Noorden et al. *Temporal coherence in the perception of tone sequences*, volume 3. Institute for Perceptual Research Eindhoven, the Netherlands, 1975. Patrick von Platen, Chao Zhang, and Philip Woodland. Multi-span acoustic modelling using raw waveform signals. *arXiv preprint arXiv:1906.11047*, 2019. Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. In *Advances in Neural Information Processing Systems*, pp. 14334–14345, 2019. Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant cnns. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 849–858, 2018. Daniel E Worrall and Max Welling. Deep scale-spaces: Equivariance over scale. *arXiv preprint* arXiv:1905.11697, 2019. Yong Xu, Qiuqiang Kong, Wenwu Wang, and Mark D Plumbley. Large-scale weakly supervised audio classification using gated convolutional neural network. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 121–125. IEEE, 2018. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in neural information processing systems*, 30, 2017. Matthew D Zeiler. Adadelta: an adaptive learning rate method. *arXiv preprint arXiv:1212.5701*, 2012. Chiyuan Zhang, Stephen Voinea, Georgios Evangelopoulos, Lorenzo Rosasco, and Tomaso Poggio. Discriminative template learning in group-convolutional networks for invariant speech representations. In *Sixteenth* Annual Conference of the International Speech Communication Association, 2015. Zhoutong Zhang, Yunyun Wang, Chuang Gan, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba, and William T. Freeman. Deep audio priors emerge from harmonic convolutional networks. In *International* Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygjHxrYDB. Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, and Xiuyuan Cheng. Scaling-translationequivariant networks with decomposed convolutional filters. *The Journal of Machine Learning Research*, 23(1):2958–3002, 2022. Zhenyao Zhu, Jesse H Engel, and Awni Hannun. Learning multiscale features directly from waveforms. *arXiv* preprint arXiv:1603.09509, 2016. ## Appendix A Group And Group Action Group. A group is an ordered pair (G, ·) where G is a set and · : G×G → G is a binary operation on G, such that (i) the set is closed under this operation, *(ii)* the operation is associative, i.e., (g1 · g2)· g3 = g1 ·(g2 · g3), g1, g2, g3 ∈ G, *(iii)* there exists an identity element e ∈ G such that ∀g ∈ G we have e · g = g · e = g, and *(iv)* for each g ∈ G, there exists an inverse g −1such that g · g −1 = e. Subgroup. Given a group (G, ·), we say that a subset H is a subgroup of G if the tuple (H, ·) also complies to the group axioms. For example, the set of rotations by 90◦, H={0 ◦, 90◦, 180◦, 270◦}, is a subgroup of the continuous rotation group as it also complies to the group axioms. Group action. Let G be a group and X be a set. The (left) group action of G on X is a function $$(25)$$ A : G × X → X, Ag : x → x ′, (25) such that for any g1, g2 ∈ G, Ag2g1=Ag2 ◦ Ag1 . In other words, the action of G on X describes how the elements in the set x ∈ X are transformed by elements g ∈ G. For brevity, Ag(x) is written as gx. ## B Equivariance Properties Of Common Time-Frequency Transforms B.1 The Fourier Transform The Fourier transform represents a function with finite energy f ∈ L 2(R) as a sum of complex sinusoidal waves e iωt= cos ωt + i sin ωt: $$f(t)={\frac{1}{2\pi}}\int_{-\infty}^{\infty}{\hat{f}}(\omega)\;\mathrm{e}^{i\omega t}\,\mathrm{d}\omega,$$ where, ˆf(ω) depicts the amplitude of each component e iωt in f. The *Fourier transform* F is defined as: $${\mathcal{F}}[f](\omega)={\hat{f}}(\omega)=\langle f,\mathrm{e}^{i\omega t}\rangle=\int_{-\infty}^{\infty}f(t)\ \mathrm{e}^{-i\omega t}\,\mathrm{d}t.$$ In other words, the Fourier transform encodes f into a time-frequency dictionary D={e iωt}ω∈R. **Input translation.** Let $\mathcal{L}_{\mathrm{b}}[f](t)\!=\!f(t-t_{0})$ be a translated version of $f$. Its Fourier transform is given by: $$\mathcal{I}[\mathcal{L}_{\mathrm{b}}[f]](\omega)=\int_{-\infty}^{\infty}f(t-t_{0})\;\mathrm{e}^{-i\omega t}\,\mathrm{d}t\;\left|\,\hat{t}\!+\!t-t_{0}\right.\mathrm{d}\hat{t}\!=\!\mathrm{d}t$$ $$=\int_{-\infty}^{\infty}f(\hat{t})\;\mathrm{e}^{-i\omega(\hat{t}+t_{0})}\,\mathrm{d}\hat{t}=\mathrm{e}^{-i\omega t_{0}}\int_{-\infty}^{\infty}f(\hat{t})\;\mathrm{e}^{-i\omega t}\,\mathrm{d}\hat{t}=\mathrm{e}^{-i\omega t_{0}}\mathcal{I}[f](\omega)\tag{26}$$ In other words, a translation of t0 corresponds to a phase modulation of e −iωt0in the frequency domain. Input scaling. Let Ls0 [f](t)=f(s −1 0t), s0 ∈ R>0, be a scaled version of f. Its Fourier transform equals: F[Ls0 [f]](ω) = Z ∞ −∞ f(s −1 0t) e−iωt dt t˜=s −1 0t; dt˜=s −1 0 dt = Z ∞ −∞ f(t˜) e−iω(s0t˜) d(s0t˜) = s0 Z ∞ −∞ f(t˜) e−i(s0ω)t˜dt˜ = s0F[f](s0ω) = s0Ls −1 0 [F[f]](ω) (27) In other words, we observe that a dilation on the time domain produces a compression in the Fourier domain times the inverse of the dilation. Simultaneous input translation and scaling. Following the same derivation procedure, we can show the behavior of the Fourier transform to simultaneous translations and dilations of the input: F[Ls0Lt0 [f]](ω) = s0e −iωt0F[f](s0ω) = e−iωt0 s0Ls −1 0 [F[f]](ω). (28) This corresponds to the superposition of the previously exhibited behaviours. $$(27)$$ $$(28)$$ Effect of input transformations on the spectral density. The spectral density of a function f ∈ L 2(R) is given by |F[f](ω)| 2. Input translations and dilations produce the following transformations: $$\begin{array}{l}{{|{\mathcal{F}}[{\mathcal{L}}_{t_{0}}[f]](\omega)|^{2}=|{\mathcal{F}}[f](\omega)|^{2}}}\\ {{|{\mathcal{F}}[{\mathcal{L}}_{s_{0}}[f]](\omega)|^{2}=|s_{0}|^{2}|{\mathcal{L}}_{s_{0}^{-1}}[{\mathcal{F}}[f]](\omega)|^{2}}}\end{array}$$ $$(29)$$ $$(30)$$ 2(29) 2(30) Equivariance and invariance properties of the Fourier transform. From Eq. 26 we can see that the Fourier transform is translation equivariant as it encodes translations of the input as a phase modulation of the output. In addition, it is also scale equivariant (Eq. 27), as it encodes dilations of the input as a modulation of the frequency components in the output. We can prove that the Fourier transform is dilation and translation equivariant by showing that the output transformations e −iωt0 and s0Ls −1 0are group representations of the translation and scaling group in the Fourier space. Group representation. Let G be a group and f be a function on a given functional space LV(X). The (left) regular representation of G is a linear transformation L : G × LV(X) → LV(X) which The (187)-Regular representation of $\mathcal{G}$ is a linear transformation $\mathcal{L}:\mathcal{G}\to\mathcal{L}_0(\mathcal{L})+\mathrm{i}\kappa(\mathcal{L})$ which extends group actions to functions on $L_0(\mathcal{X})$ by: $$\mathcal{L}_g:f\to f',\quad f'(\mathcal{A}_g(x))=f(x)\Leftrightarrow f'(x)=f(g^{-1}x),$$ such that for any $g_1$, $g_2\in\mathcal{G}$, $\mathcal{L}_{g\otimes g\equiv1}\mathcal{L}_{g\otimes g}\circ\mathcal{L}_g$. In other words, the group representation describes how a function on a functional space $f\in L_0(\mathcal{X})$ is modified by the effect of group elements $g\in\mathcal{G}$. We can show that the combination of input translations t0, t1 ∈ R or dilations s0, s1 ∈ R>0 produces a transformation on the Fourier domain that preserves the group structure. In other words, that the transformations previously outlined are group representations. Specifically, for Lt1 [Lt0 [f]] and Ls1 [Ls0 [f]] it holds: F-Lt1 [Lt0 [f]](ω) = e−iωt1 e −iωt0F[f](ω) = e−iω(t1+t0)F[f](ω) = LFourier t1+t0[F[f]](ω) F-Ls1 [Ls0 [f]](ω) = s1Ls −1 1 -s0Ls −1 0 [F[f]](ω) = (s0s1)F[f](s1s0ω) = LFourier s1s0[F[f]](ω) with LFourier t[F[f]](ω)=e−iωtF[f](ω) the representation of the Fourier transform for the translation group, and LFourier s[F[f]](ω)=sF[f](sω) the representation of the Fourier transform for the dilation group. Unfortunately, the resulting group representations rapidly become cumbersome specially in the presence of several input components. In addition, although the calculation of the spectral density leaves the scale equivariance property of the transformation unaffected, Eq. 29 shows that it reduces translation equivariance of the Fourier transform to *translation invariance*. This is why the Fourier transform is commonly considered not to carry positional information. B.2 The short-time Fourier transform The short-time Fourier transform of a signal f ∈ L 2(R) is given by: **Lemma 1**.: _Let $\tau$ be a finite set of $\tau$. Then $\tau$ is a finite set of $\tau$._ Proof.: Let $\tau$ be a finite set of $\tau$. Then $\tau$ is a finite set of $\tau$. Proof.: Let $\tau$ be a finite set of $\tau$. Then $\tau$ is a finite set of $\tau$. Proof.: Let $\tau$ be a finite set of $\tau$. In other words, it encodes the input f into a time-frequency dictionary D={ϕt,ω}, ϕt,ω=w(τ − t) e−iωτ. Input translation. Let Lt0 [f](τ )=f(τ−t0) be a translated version of f. Its short-time Fourier transform is given by: S[Lt0 [f]](t, ω) = Z ∞ −∞ f(τ − t0)w(τ − t) e−iωτ dτ t˜=τ − t0; dt˜=dτ = Z ∞ −∞ f(t˜)w(t˜+ t0 − t) e−iξ(t˜+t0) dt˜= e−iξt0 Z ∞ −∞ f(t˜)w(τ − (t − t0)) e−iωt˜dt˜ = e−iωt0 S[f](t − t0, ω) = e−iωt0Lt0 [S[f]](t, ω) (31) In other words, a translation by t0 in the time domain, corresponds to a shift by t0 on the time axis of the short-time Fourier transform, and an additional phase modulation of e −iξt0 similar to that of the Fourier transform (Eq. 26). $$(31)$$ ![21_image_0.png](21_image_0.png) Figure 10: Scale equivariance of the short-time Fourier transform. Consider a function f(t)= cos ω1t + cos ω2t composed of two frequencies ω1=3 and ω2=7, and a window function w(t), with which the short-time Fourier transform is calculated. For relatively high frequencies (left column), the dot-product of f and w, ⟨*f, w*⟩, is able to capture sufficient spectral information from f to correctly extract the frequencies ω1, ω2 from it. However, for dilated versions of the same signal f (right columns) obtained by reducing the frequency of the spectral components ω1, ω2 of f, the capacity of the dot-product ⟨*f, w*⟩ to capture the spectral information in the input gradually degrades and, eventually, is entirely lost. Consequently, scale equivariance holds (approximately) for scales for which all of the spectral components of the signal f lie within the range of the window w. Input scaling. Let Ls0 [f](τ )=f(s −1 0τ ), s0 ∈ R>0, be a scaled version of f. Its short-time Fourier transform is given by: S[Ls0 [f]](t, ω) = Z ∞ −∞ f(s −1 0t)w(τ − t) e−iωτ dτ t˜=s −1 0τ ; dt˜=s −1 0 dτ = Z ∞ −∞ f(t˜)w(s0t˜− t) e−iω(s0t˜) d(s0t˜) = s0 Z ∞ −∞ f(t˜)w(s0t˜− t) e−i(s0ω)t˜dt˜ t=s −1 0s0t = s0 Z ∞ −∞ f(t˜)w(s0(t˜− s −1 0t)) e−i(s0ω)t˜dt˜ w(sτ ) ≈ w(τ ) ≈ s0 Z ∞ −∞ f(t˜)w(t˜− s −1 0t) e−i(s0ω)t˜dt˜≈ s0 S[f](s −1 0t, s0ω) (32) In other words, a dilation in the time domain produces a compression in the frequency domain analogous to the Fourier transform (Eq. 27). However, it is important to note that we rely on the approximate w(x)≈w(sx) to arrive to the final expression. Nevertheless, it is important to note that this approximate does not generally holds in practice. This approximation implies that the window function w is invariant to scaling, which holds only for increasing window sizes, i.e., when the short-term Fourier transform starts to approximate the (global) Fourier transform. Simultaneous input translation and scaling. Following the same derivation procedure, we can show the behavior of the short-time Fourier transform to simultaneous translations and scaling. We have that: $$\mathcal{S}[\mathcal{L}_{s_{0}}\mathcal{L}_{t_{0}}[f]](t,\omega)=s_{0}\mathrm{e}^{-i\omega t_{0}}\mathcal{S}[f](s_{0}^{-1}(t-t_{0}),s_{0}\omega)$$ 0(t − t0), s0ω) (33) $$(33)^{\frac{1}{2}}$$ Effect of input transformations on the spectrogram. The spectrogram of a function f ∈ L 2(R) is given by |S[f](*t, ω*)| 2. Input translations and dilations produce the following transformations: $$|{\mathcal{F}}[{\mathcal{L}}_{t_{0}}[f]](t,\omega)|^{2}=|{\mathcal{L}}_{t_{0}}[S[f]](t,\omega)|^{2}$$ $$|{\mathcal{F}}[{\mathcal{L}}_{s_{0}}[f]](t,\omega)|^{2}=|s_{0}|^{2}|S[f](s_{0}^{-1}t,s_{0}\omega)|^{2}$$ $$(34)$$ $$(35)$$ Equivariance and invariance properties of the short-time Fourier transform. The short-time Fourier transform is *approximately* translation and scale equivariance in a manner similar to that of the Fourier transform. In contrast to the Fourier transform, however, it decomposes input translations into a translation t − t0 and a phase shift e −iωt0in the output (Eq. 31). This decomposition can be interpreted as a rough estimate t − t0 signalizing the position in which the window w is localized, and a fine grained localization within that window given by the phase shift e −iωt0indicating the relative position of the pattern within the window L(t−t0)[w](τ ). Equivariance to dilations is analogous to the Fourier transform up to the fact that time and frequency are now jointly described. However, since the window itself does not scale with the sampled frequency –as is the case in wavelet transforms–, exact equivariance is not obtained. Note that equivariance to dilations is only approximate, and is restricted to the set of scales that can be detected with the width of the window used (see Fig. 10 for a visual explanation). Since this is not generally the case, the short-time Fourier transform is *not scale equivariant*. The calculation of the spectrogram leaves the scale equivariance property of the transformation unaffected and is equivalent in a join manner to the scale equivariance property of the Fourier transform (Eq. 32). Differently however, Eq. 34 shows that translation equivariance is partially preserved and only information about the phase shift within the window is lost. This is why the short-time Fourier transform is said to carry positional information, i.e., to be (approximately) translation equivariant. ## B.3 The Wavelet Transform The wavelet transform of a signal f ∈ L a signal $f\in\mathcal{L}^2(\mathbb{R})$ is given by: $$\mathcal{W}[f](t,s)=\langle f,\psi_{t,s}\rangle=\int_{-\infty}^{+\infty}f(\tau)\ \frac{1}{\sqrt{s}}\psi^{*}\bigg(\frac{\tau-t}{s}\bigg)\ \mathrm{d}\tau,$$ ling $f$ into a time-frequency dictionary $\mathcal{D}=\{\psi_{t,s}\}_{u\in\mathbb{R},s\in\mathbb{R}_{>0}}$, $\psi_{t,s}(\tau)\!=\!\frac{1}{\sqrt{s}}\psi^{*}\left(\frac{\tau-t}{s}\right)$. $\mathcal{L}_{t_0}[f](\tau)\!=\!f(\tau-t_0)$ be a translated version of $f$. Its wavelet transform is given by: and is equivalent to encoding f into a time-frequency dictionary D = {ψt,s}u∈R,s∈R>0 Input translation. Let Lt0 [f](τ )=f(τ − t0) be a translated version of f. Its wavelet transform is given by: W[Lt0 [f]](t, s) = Z ∞ −∞ f(τ − t0) √s −1ψ ∗s −1(τ − t)dτ t˜=τ − t0; dt˜=dτ = Z ∞ −∞ f(t˜) √s −1ψ ∗s −1(t˜+ t0 − t)dt˜= Z ∞ −∞ f(t˜) √s −1ψ ∗s −1(t˜− (t − t0))dt˜ = W[f](t − t0, s) = Lt0W[f](t, s) (36) In other words, a translation of the input produces an equivalent translation in the wavelet domain. Input scaling. Let Ls0 et $\mathcal{L}_{s_0}[f](t)$=$f(s_0^{-1}t)$ be a scaled version. 0t) be a scaled version of f. The corresponding wavelet transform is: W[Ls0 [f]](t, s) = Z ∞ −∞ f(s −1 0t) √s −1ψ ∗s −1(τ − t)dτ t˜=s −1 0τ ; dt˜=s −1 0 dτ = Z ∞ −∞ f(t˜) √s −1ψ ∗s −1(s0t˜− t)d(s0t˜) t=s −1 0s0t = Z ∞ −∞ f(t˜) √s −1s0ψ ∗s −1s0(t˜− s −1 0t)dt˜ s0= qs −1 0s −1 0 −1 = √s0 Z ∞ −∞ f(t˜) qs −1 0s −1 ψ ∗s −1 0s−1(t˜− s −1 0t) dt˜ = √s0 W[f](s −1 0t, s−1 0s) = √s0 Ls0W[f](t, s) (37) $$(36)$$ $$(37)$$ In other words, a dilation s0 in the input domain produces an *equivalent* dilation in the wavelet domain on both components (*t, s*), multiplied by a factor √s0. That is, the wavelet transform is translation equivariant. Simultaneous input translation and scaling. Following the same procedure, we can show the behavior of the wavelet transform to simultaneous translations and dilations of the input: W[f(s −1 0(τ − t0)](*t, s*) = √s0 W[f](s −1 0(t − t0), s−1 0s) = √s0 Lt0Ls0W[f](*t, s*) (38) We observe that the Wavelet transform is the only time-frequency transform that respects equivariance with equivalent group representations in the input and output domain. Effect of input transformations on the scalogram. The scalogram of a function f ∈ L 2(R) is given by |W[f](*u, s*)| 2. Input translations and dilations produce the following transformations on the scalogram: $$\begin{array}{l}{{|\Psi[{\mathcal{L}}_{t_{0}}[f]](u,s)|^{2}=|{\mathcal{L}}_{t_{0}}[\Psi[f]](u,s)|^{2}}}\\ {{|\Psi[{\mathcal{L}}_{s_{0}}[f]](u,s)|^{2}=|{\mathcal{L}}_{s_{0}}[\Psi[f]](u,s)|^{2}}}\end{array}$$ $$(39)$$ $$(40)$$ 2(39) 2(40) In other words the scalogram is exactly equivariant to both translations and dilations. Equivariance and invariance properties of the wavelet transform. From Eq. 36, we can see that the wavelet transform is *exactly equivariant to translations* and the group representation of the output space equals that of the input space. Furthermore, translation equivariance is preserved in the scalogram as well (Eq. 39). Similarly, scale equivariance is preserved on the wavelet transform up to a multiplicative factor (Eq. 37). However, the scalogram preserves both translation and dilation equivariance exactly (Eq. 40). We emphasize that the group representation on the output space resembles that of the input space. This behavior leads to much more straightforward group representations than that exhibited by the Fourier transform and the short-time Fourier transform. Additionally, exact scale equivariance is only obtained on the scalogram (Eq. 40), whilst for the wavelet transform it is retained up to multiplicative factor (Eq. 37). This behavior elucidates the fact that time-frequency transforms have been optimized for energy density representations rather than for the time-frequency representations themselves. ## C Experimental Details Whenever possible, we use existing code for the baselines of our wavelet networks as a starting point for the general infrastructure of our model. Specifically, we utilize the PyTorch implementation provided in https://github.com/philipperemy/very-deep-convnets-raw-waveforms and https://github.com/ kyungyunlee/sampleCNN-pytorch as baseline for the US8K experiments and the MTAT experiments Lee et al. (2017), respectively. By doing so, we aim to preserve the reproducibility of the experiments in the baseline papers during our own experiments, as some important training factors are not specified in the baseline papers, e.g., the learning rate used in Dai et al. (2017). Unfortunately, Abdoli et al. (2019) do not provide code and we were forced to interpret some of the ambiguities in the paper, e.g., the pooling type utilized in the pooling layers and the loss metric used. Any omitted parameters can safely be considered to be the default values in PyTorch 1.5.0. Our experiments are carried out in a Nvidia TITAN RTX GPU. ## C.1 Urbansound8K Wn**-Nets.** We use a sampling rate of 22.05kHz as opposed to the 8kHz used in Dai et al. (2017). An early study that indicated that some classes were indistinguishable for the human ear at this sampling rate.3 We zero-pad signals shorter than 4 seconds so that all input signals have a constant length of 80200 samples. Following the implementation of Dai et al. (2017), we utilize the Adam optimizer (Kingma & Ba, 2014) with lr=1e-2 and weight_decay=1e-4, and perform training on the official first 9 folds and test on the 10th fold. We noticed that reducing the learning rate from 1e-2 to 1e-3 increased the performance of our W-Nets. The reported results of the W-Net variants are obtained with this learning rate. We utilize batches of size 16 and perform training for 400 epochs. The learning rate is reduced by half after 20 epochs of no improvement in validation loss. The Wn-nets used are specified in Table 4. See Dai et al. (2017, Tab. 1) for comparison. 3See https://github.com/dwromero/wavelet_networks/experiments/UrbanSound8K/data_analysis.ipynb. Table 4: Wn-networks. W3-Net (0.219M) denotes a 3-layer network with 0.219M parameters. [79/4, 150, 3] denotes a group convolutional layer with a nominal kernel size of 79 samples, 150 filters and 3 scales, with a stride of 4. Stride is omitted for stride 1 (e.g., [3, 150, 3] has stride 1). Each convolutional layer uses batch normalization right after the convolution, after which ReLU is applied. Following the findings of Romero et al. (Romero et al., 2020, Appx. C) on the influence of stride in the equivariance of the network, we replace strided convolutions by normal convolutions, followed by spatial pooling. [*. . .* ] × k denotes k stacked layers and double layers in brackets denote residual blocks as defined in (Dai et al., 2017, Fig. 1b). In each of the levels of convolutional layers and residual blocks, the first convolution of the first block has scale 3 and the remaining convolutional layers at that level has scale 1. W3-Net W5-Net W11-Net W18-Net W34-Net (0.219m) (0.558m) (1.806m) (3.759m) (4.021m) Input: 80200x 1 time-domain waveform Lifting Layer ( 9 scales) [79/4, 150] [79/4, 74] [79/4, 51] [79/4, 57] [79/4, 45] Maxpool: 4x1 (output: 80200x 9x 1) [3, 150, 3] [3, 74, 3] [3, 51, 3] × 2 [3, 57, 3] × 43, 45 3, 45× 3 Maxpool: 4x1 (output: 80200x 7x 1) [3, 148, 3] [3, 102, 3] × 2 [3, 114, 3] × 43, 90 3, 90× 4 Maxpool: 4x1 (output: 80200x 5x 1) [3, 296, 3] [3, 204, 3] × 3 [3, 228, 3] × 43, 180 3, 180× 6 Maxpool: 4x1 (output: 80200x 3x 1) [3, 408, 3] × 2 [3, 456, 3] × 43, 360 3, 360× 3 Global average pooling (output: 1 x n) Softmax [110] (output: 1 x n) Table 5: Wavelet network variant of the 50999-1DCNN (Abdoli et al., 2019). [31/2, 24, 3] denotes a group convolutional layer with a nominal kernel size of 31 samples, 24 filters and 3 scales, with a stride of 2. FC: [96, 48] denotes a fully-connected layer with 96 input channels and 48 output channels. Each convolutional layer uses batch normalization right after the convolution followed by ReLU. All fully connected layers expect for the last one use dropout of 0.25 and ReLU. Following the findings of Romero et al. (2020, Appx. C) on the influence of stride in the equivariance of the network, we replace strided convolutions with normal convolutions followed by spatial pooling. We note that the input size of our network is (presumably) different from that in Abdoli et al. (2019). Consequently, the last pooling layer utilizes a region of 5, in contrast to 4 as used in Abdoli et al. (2019). However, as it is not clear how the input dimension is reduced from 64000 to 50999 in Abdoli et al. (2019) and we stick to their original sampling procedure. We interpret their poling layers as max-pooling ones. W-1DCNN (0.549m) Input: 64000x 1 Lifting Layer ( 9 scales) [63/2, 12] Maxpool: 8x1 [31/2, 24, 3] Maxpool: 8x1 [15/2, 48, 3] [7/2, 96, 3] [3/2, 408, 3] Maxpool: 5x1 Flatten 196 × 6 → 1152 FC: [1152, 96] FC: [96, 48] FC: [48, 10] Softmax W-1DCNN. Following Abdoli et al. (2019), we utilize a sampling rate of 16kHz during our experiments. We zero-pad signals shorter than 4 seconds so that all input signals have a constant length of 64000 samples. Following the experimental description of the paper, we utilize the AdaDelta optimizer (Zeiler, 2012) with lr=1.0 and perform training in a 10-fold cross validation setting as described in Sec. 6. We use batches of size 100 and perform training for 100 epochs. We utilize the 50999-1DCNN variant of Abdoli et al. (2019), as it is the variant that requires the less human engineering.4 Unfortunately, we were not able to replicate the results reported in Abdoli et al. (2019) (83±1.3%) in our experiments. Our replication of Abdoli et al. (2019) lead to a 10-cross fold accuracy of 62±1.3%, which is 21 accuracy points less relative to the results reported. We experiment with our interpretation of the mean squared logarithmic error (MSLE) loss defined in (Abdoli et al., 2019, Eq. 4). However, we find that the conventional cross-entropy loss leads to better results. Consequently, all our reported results are based on training with this loss.5 The description of the Wavelet 50999-1DCNN Abdoli et al. (2019) is provided in Table 5 (see (Abdoli et al., 2019, Tab. 1) for comparison). 4The remaining architectures partition the input signal into overlapping windows after which the predictions of each windows are summarized via a voting mechanism. Consequently, one could argue that the 50999-1DCNN is the only variant that truly receives the raw waveform signal. Nevertheless it is not clear from the paper how the input signal of 64000 samples is reduced to 50999 samples, which is the input dimension of the raw signal for this architecture type. 5The MSLE loss in Abdoli et al. (2019, Eq. 4) is defined as 1N PN i=1 log pi+1 ai+1 2, where pi, ai and N are the predicted class, the actual class, and the number of samples respectively. Note, however, that obtaining the predicted class pi, i.e., pi=argmaxo f(xi), where f(xi) ∈ RO is the output of the network for a classification problem with O classes and input xi, is a non-differentiable function. Consequently, it is not possible to train the network based on the formulation provided there. In order to train our model with this loss, we re-formulate the MSLE loss as 1N PN i=1 PO o=1 log pi,o+1 ai,o+1 2, where {ai,o} O o=1 is a one-hot encoded version of the label ai. That is, we measure the difference between the one-hot encoded label and the output. Table 6: W3 9-network. [3/1, 90, 3] denotes a group convolutional layer with a nominal kernel size of 3 samples, 90 filters and 3 scales, with a stride of 1. MP:3x1 denotes a max-pooling layer of size 3. FC: [360, 50] denotes a fully-connected layer with 360 input channels and 50 output channels. Each convolutional layer uses batch normalization after the convolution followed by ReLU. Dropout of 0.5 is used after the 6th and 11th layer. Following the findings of Romero et al. (2020, Appx. C) on the influence of stride in the network equivariances, we replace strided convolutions by normal convolutions followed by spatial pooling. W-3 9 Net (2.404m) Input: 59049x 1 Lifting Layer ( 9 scales) [3/3, 90] [3/1, 90, 3], MP:3x1 [3/1, 90, 1], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 3], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 3], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 360, 1], MP:3x1 [3/1, 360, 3] FC: [360, 50] Sigmoid ## C.2 Magnatagatune W3 9**-Network.** For the experiments in the MTAT dataset, we utilize the PyTorch code provided by Lee et al. Lee et al. (2017). We use the data and tag preprocessing used in Lee et al. (2017). We utilize the SGD optimizer with lr=1e-2, weight_decay=1e-6 and nesterov=True. We use batches of size 23 and perform training for 100 epochs. The learning rate is reduced by 5 after 3 epochs of no improvement in the validation loss. Early stopping is used if the learning rate drops under 1e-7. We were unable to replicate the per-class AUC results reported in Lee et al. (2017). Our experiments indicated a per-class AUC of 0.893 instead of the 0.905 reported in Lee et al. (2017). Details of the W3 9-Net used are given in Table 6 (see (Lee et al., 2017, Tab. 1) for comparison).
Review 1: Summary: In 2D and 3D data domain, prior works have proposed to exploit equivariances to different symmetry groups to improve generalization performance, and parameter and data efficiency. However, in the 1D domain, this has not been explored much beyond the usual translation group exploited by conventional CNNs. The paper proposes Wavelet networks to learn from time series data by exploiting equivariance to the scale-translation group. The convolutional layers for this group are constructed following the general formulation of group convolution proposed in [1] and seems somewhat similar to the prior scale-translation equivariance work [2] proposed for the 2D domain. To avoid interpolation while lifting to different scales, convolutions are performed in the continuous domain by parameterizing the convolutional kernels using splines and the scale grid is suitably discretized to approximate the convolution integrals. The paper also highlights the similarities between the scale-translation group convolution and the definition of wavelet transform. To promote a wavelet-like structure in the learnt convolutional kernels, they are regularized during training to have zero mean. The authors show that this interestingly provides a consistent improvement in performance. Experimental results show that the proposed wavelet networks improve performance over baseline models on different learning tasks. [1] Cohen, Taco, and Max Welling. "Group equivariant convolutional networks." International conference on machine learning. PMLR, 2016. [2] Sosnovik, Ivan, Michał Szmaja, and Arnold Smeulders. "Scale-Equivariant Steerable Networks." International Conference on Learning Representations. 2019. Strengths and Weaknesses: **Strengths:** 1. Barring a few minor typos, the paper is well written, and the text and illustrations are generally easy to follow. I particularly like the discussion in section 4 about the limitations of using convolutional neural networks over spectro-temporal data. Inspired by the success of CNNs on images, many prior works convert time series into 2D domain by computing spectrogram (or some variant) and pass them to a 2D CNN. The authors highlight why this is not the best strategy since unlike images, spectrograms of time series are often highly non-local. 2. The noted improvement in performance obtained by imposing a ‘wavelet-like’ structure on the filters is interesting. **Weaknesses:** While I like the way this paper is written and the experimental results are also nice, I currently have one major concern that I would like the authors to clarify. It is not clear to me what the key technical contribution is, in this paper. The authors mention that the main motivation is to build scale-translation equivariant networks for 1D data. However, is there any particular challenge in the 1D case that makes it non-trivial to directly extend the prior 2D scale-translation equivariant approaches to it? It appears to me that the proposed approach is quite similar to the 2D case of Sosnovik et al. [2] Requested Changes: Please refer to my comments in the Strengths And Weaknesses section Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper proposes a wavelet network for scale-translation equivariant learning. The primary application area is time-series data, and the empirical verifications focus on sound signals. To my best knowledge, the group convolution and lifting convolution in the wavelet network are novel and serve the purpose of encoding scale-translation equivariance. The major contribution is to develop implementation details of the group convolution and lifting convolution, as well as testify their performance on real data. Strengths and Weaknesses: --------------- Strength: The paper devotes many introductory efforts to distinguishing visual and auditory data, where the latter can be represented in the 2D time-frequency plane. This contradictory exposure highlights the difficulty of learning with auditory data and elucidates the insufficiency of standard CNNs. Derivations and definitions regarding groups and group convolutions are clearly stated with many graphical illustrations. This provides an easier understanding towards the complicated math expressions. Experimental results support the efficiency of the wavelet network, as it uses less parameters with good performance. --------------- Weakness: There is no obvious weakness, but please refer to the requested changes for a detailed discussion. Requested Changes: Section 4 might need a different highlighting style. Currently, the definition of Fourier transform and wavelet transform are highlighted in a grey box. However, from my understanding, the purpose of this section is to show the unique challenge and the insufficiency of existing ideas for handling auditory data. It could be better to articulate these problems in a more outstanding manner. Section 5 looks very complicated. It begins with definitions of group convolutions, then talks about the wavelet network architecture, and lastly provides implementation details. I feel like a better presentation structure can be used, such as starting with the network architecture and then introduce building blocks in the network. The implementation details may deserve a separated section. I am not an expert on auditory data processing. After reading the experiments, my questions are - it is good to compare with baseline methods, but are these baseline methods the state-of-the-art? - There is extensive study on the UrbanSound8K data. How does the performance of wavelet network compare to a broader range of methods beyond the CNN backbone? In particular, transformers are powerful network architectures for sequential data processing; are wavelet networks comparable to transformers? Broader Impact Concerns: N/A ================================================== Review 3: Summary: - This paper identifies two important symmetries inherent to time-serious data: scale and translation. Based on this, the paper proposes scale-translation equivariant neural networks (named Wavelet Networks) for time-series data, which is largely underexplored in the literature on geometric deep learning. - The authors discuss the relationship between the construction of scale-translation equivariant mappings in this paper with the well-known wavelet transform, thereby shedding some light on the merit of the proposed methods. - Their empirical results demonstrate that the proposed Wavelet Networks outperform baselines such as conventional CNNs on a variety of tasks and time-serious types, e.g. audio, environmental sounds, and electrical signals. Strengths and Weaknesses: **Strengths** - The paper tackles an important problem by exploring equivariant neural networks w.r.t. to scale and translation for time series data. Given the prevalence of time-series data across various applications and the demonstrated benefits of equivariant models in enhancing sample efficiency and generalization, this research is a valuable contribution. - Instead of solely focusing on their methodologies, the authors offer an in-depth comparison with established techniques in the field, such as the wavelet transform and short-time Fourier transform. This comparative analysis sheds light on the reasons behind the observed performance enhancements. The discussions, including those on the differences between visual and spectro-temporal representations and between the short-term Fourier transform from the wavelet transform, are particularly interesting. - The paper also delves into the practical aspects of their research, emphasizing considerations like discretization and computational complexity. Requested Changes: In fact, I think this work is ready for publication and I don't have any essential requested changes. Broader Impact Concerns: I have no concerns over the ethical implications. ================================================== Review 4: Summary: The paper introduces the wavelet network, an equivariant network tailored for 1D signals that ensures scaling and translation equivariance. The authors explained the connection between the (lifting) group convolution and the wavelet transform. Numerical experiments are presented to demonstrate the improved performance of the proposed model compared to conventional models, including CNNs. Strengths and Weaknesses: **Weakness** 1. The proposed model is essentially a special case of G-CNN (with regular representations) for the scaling-translation group acting on 1d signals. More specifically, the first layer is the lifting group convolution, and all subsequent layers are the scaling-translation group convolutions. This model structure is not novel; it has been extensively explored and documented in prior literature. Consequently, it appears that the authors' contribution in this domain does not introduce any new concepts. 2. References [1, 2] have proposed a remarkably similar architecture targeting this group, albeit for 2D signals. Regrettably, there is an apparent omission in the current manuscript, as these two prior works have not been acknowledged. 3. The proposition of using continuous bases is neither groundbreaking nor original. Both references [1] and [2] have previously used this approach to mitigate interpolation artifacts. 4. The numerical experiments are conducted to compare only with non-equivariant models. A more compelling evaluation would involve comparisons against, for instance, the model in [1] when adapted for 1D signals. **Strength** 1. The paper is well-written. The regularization on "wavelet structure" is interesting. However, it is hard to understand why this regularization helps the overall generalization performance. [1] Sosnovik, Ivan, Michał Szmaja, and Arnold Smeulders. "Scale-Equivariant Steerable Networks." International Conference on Learning Representations. 2019. [2] Zhu, Wei, et al. "Scaling-translation-equivariant networks with decomposed convolutional filters." The Journal of Machine Learning Research 23.1 (2022): 2958-3002. Requested Changes: Please see the previous section Broader Impact Concerns: No ================================================== Metareview: Recommendation: Accept as is Comment: Based on my comments above, I am recommending this paper is accepted as is. I would recommend the authors do a complete proof-read of the paper to prepare their camera-ready version (there are still some minor typos, see e.g. sentence prior to Eq.17). ==================================================
# A Unified Survey On Anomaly, Novelty, Open-Set, And Outof-Distribution Detection: Solutions And Future Challenges Mohammadreza Salehi *s.salehidehnavi@uva.nl* University of Amsterdam Hossein Mirzaei *hmirzayees@ce.sharif.edu* Sharif University of Technology Dan Hendrycks hendrycks@berkeley.edu UC Berkeley Yixuan Li *sharonli@cs.wisc.edu* University of Wisconsin-Madison Mohammad Hossein Rohban *rohban@sharif.edu* Sharif University of Technology Mohammad Sabokrou sabokro@ipm.ir Institute For Research In Fundamental Sciences (IPM) Reviewed on OpenReview: *https: // openreview. net/ forum? id= aRtjVZvbpK* ## Abstract Machine learning models often encounter samples that are diverged from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assign that sample to an in-class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safety deploying models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains tackle the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection. Despite having similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they seem to only focus on a specific domain without examining the relationship between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodology synergistically. Furthermore, to the best of our knowledge, while there are surveys in anomaly detection or one-class learning, there is no comprehensive or up-todate survey on out-of-distribution detection, which this survey covers extensively. Finally, having a unified cross-domain perspective, this study discusses and sheds light on future lines of research, intending to bring these fields closer together. All the implementations and benchmarks reported in the paper can be found at : https://github.com/taslimisina/ osr-ood-ad-methods ## 1 Introduction Machine learning models commonly make the closed-set assumption, where the test data is drawn *i.i.d.* from the same distribution as the training data. Yet in practice, all types of test input data—even those on which the classifier has not been trained—can be encountered. Unfortunately, models can assign misleading confidence values for unseen test samples (162; 104; 117; 183; 185). This leads to concerns about the reliability of classifiers, particularly for safety-critical applications (63; 57). In the literature, several fields attempt to address the issue of identifying the unknowns/anomalies/out-of-distribution data in the open-world setting. In particular, the problems of anomaly detection (AD), Novelty Detection (ND), One-Class Classification (OCC), Out-of-Distribution (OOD) detection, and Open-Set Recognition (OSR) have gained significant attention owing to their fundamental importance and practical relevance. They have been used for similar tasks, although the differences and connections are often overlooked. Specifically, OSR trains a model on K classes of a N class training dataset; then, at the test time, the model is faced with N different classes of which N − K are not seen during training. OSR aims to assign correct labels to seen test-time samples while detecting unseen samples. Novelty detection or one-class classification is an extreme case of open-set recognition, in which K is 1. In the multi-class classification setting, the problem of OOD detection is canonical to OSR: accurately classify in-distribution (ID) samples into the known categories and detect OOD data that is semantically different and therefore should not be predicted by the model. However, OOD detection encompasses a broader spectrum of learning tasks and solution space, which we comprehensively reviewed in this paper. While all the aforementioned domains hold the assumption of accessing an entirely normal training dataset, anomaly detection assumes the training dataset is captured in a fully unsupervised manner without applying any filtration; therefore, it might contain some abnormal samples too. However, as abnormal events barely occur, AD methods have used this fact and proposed filtering during the training process to reach a final semantic space that fully grasps normal features. Despite previous approaches that are mostly used in object detection and image classification domains, this setting is more common in the industrial defect detection tasks, in which abnormal events are rare and normal samples have a shared concept of normality. Fig. 1 depicts an overview of the mentioned domains, in which the differences are shown visually. Note that even if there are differences in the formulation of these domains, they have so much in common, and are used interchangeably. As an important research area, there have been several surveys in the literature (17; 124; 138; 118; 21), focusing on each domain independently or providing a very general notion of anomaly detection to cover all different types of datasets. Instead, this paper in-depth explanations for methodologies in respective areas. We make cross-domain bridges by which ideas can be easily propagated and inspire future research. For instance, the idea of using some outlier samples from different datasets to improve task-specific features is called Outlier Exposure in (60) or background modeling in (33), and is very similar to semi-supervised anomaly detection in (137). Despite the shared idea, all are considered to be novel ideas in their respective domains. In this survey, we identify the commonalities that address different but related fields. Although some of the mentioned tasks have very close methodological setups, they differ in their testing protocols. Therefore, a comprehensive review of all the methods can better reveal their limitations in practical applications. As a key part of this survey, methods are described both mathematically and visually to give better insights to both newcomers and experienced researchers. Our survey also complements existing surveys (138; 180) which focus on higher-level categorizations. Finally, comprehensive future lines of research are provided both practically and fundamentally to not only address the issues of current methods but also shed light on critical applications of these methods in different fields. In summary, the main contributions are as follows : 1. Identification of the relationship between different research areas that, despite being highly correlated with each other, have been examined separately. 2. Comprehensive methodological analysis of recent eminent research, and providing a clear theoretical and visual explanation for methods reviewed. 3. Performing comprehensive tests on existing baselines in order to provide a solid ground for current and future lines of research. 4. Providing plausible future lines of research and specifying some fundamental necessities of the methods that will be presented in the future such as fairness, adversarial robustness, privacy, data efficiency, and explainability. ![2_image_0.png](2_image_0.png) Figure 1: Problem setup for ND, OSR, and ODD from a unified perspective based on the common routine followed in the respective fields. (A), (B) and, (C) are sampled from the same training dataset, while (D) is sampled from different datasets. Typically, in ND, all training samples are deemed normal, and share a common semantic (green region), while samples diverged from such distribution are considered as anomalous. Although samples in area (D) can be regarded as potential outliers, only areas (B) and (C) are used as anomalies for the evaluation phase. In OSR, more supervision is available by accessing the label of normal samples. For example "car", "dog", "cat", "airplane" and, "bus" classes i.e, the union of (A) and (B), are considered as normal while (C) is open-set distribution(see right Figure). Same as ND, (D) is not usually considered an open-set distribution in the field, while there is no specific constraint on the type of open-set distribution in the definition of OSR domain. In OOD detection, multiple classes are considered as normal, which is quite similar to OSR. For example, (A), (B), and (C) comprise the normal training distributions, and another distribution that shows a degree of change with respect to the training distribution and is said to be out-of-distribution, which can be (D) in this case. ## 2 A Unified Categorization Consider a dataset with training samples (x1, y1),(x2, y2)*, ...* from the joint distribution PX,Y , where X and Y are random variables on an input space X = R d and a label (output) space Y respectively. In-class (also called "seen" or "normal") domain refers to the training data. In AD or ND, the label space Y is a binary set, indicating normal vs. abnormal. During testing, provided with an input sample x, the model needs to estimate P(Y = normal/seen/in-class | X = x) in the cases of one-class setting. In OOD detection and OSR ![3_image_0.png](3_image_0.png) Figure 2: As discussed in section 2, all explained approaches can be unitedly classified in the shown hierarchical structure. The tree on the right points out that although some approaches have been labeled as ND, OSR, or OOD detection in the field, however can be classified in a more grounded and general form such that their knowledge could be shared synergistically. For instance, self-supervised ND methods can be added to multi-class classification approaches without harming their classification assumptions. Unfamiliar sample detection can be done by employing different levels of supervision, which is shown on the left. for multi-class classification, the label space can contain multiple semantic categories, and the model needs to additionally perform classification on normal samples based on the posterior probability p(Y = y | x). It is worth mentioning that in AD, input samples could contain some noises (anomalies) combined with normals; thus, the problem is converted into a noisy-label one-class classification problem; however, the overall formulation of the detection task still remains the same. To model the conditional probabilities, the two most common and known perspectives are called generative modeling and **discriminative modeling**. While discriminative modeling might be easy for OOD detection or OSR settings since there is access to the labels of training samples; nevertheless, the lack of labels makes AD, ND (OCC) challenging. This is because one-class classification problems have a trivial solution to map each of their inputs regardless of being normal or anomaly to the given label Y and, consequently, minimize their objective function as much as possible. This issue can be seen in some of the recent approaches such as DSVDD(136), which maps every input to a single point regardless of being normal or abnormal when it is trained for a large number of training epochs. Some approaches (48; 12), however, have made some changes in the formulation of P(Y | X) to solve this problem. They apply a set of affine transformations on the distribution of X such that the normalized distribution does not change. Then the summation P|T| i=1 P(Ti| Ti(X)) is estimated, calculating the aggregated probability of each transformation Ti being applied on the input X given the transformed input Ti(X), which is equal to |T|P(Y | X). This is similar to estimating P(Y | X) directly; however, it would not collapse and can be used instead of estimating one-class conditional probability. This simple approach circumvents the problem of collapsing; however, it makes the problem dependent on the transformations since transformed inputs must not intersect with each other as much as possible to satisfy the constraint of normalized distribution consistency. Therefore, as elaborated later in the survey, OSR methods can employ AD approaches with classification models to overcome their issues. A similar situation holds for the OOD domain. In generative modeling, AE (Autoencoder)-based, GAN (Generative Adversarial Network)-based, and explicit density estimation-based methods such as auto-regressive and flow-based models are used to model the data distribution. For AEs, there are two important assumptions. If the auto-encoder is trained solely on normal training samples: Table 1: The table summarizes the most notable recent deep learning-based works in different fields, specified by the informative methodological keywords and decision score used in each work. For the methods that have common parts, a shared set of keywords are used to place them on a unitary ground as much as possible. The pros and cons of each methodology are described in detail in the method explanation. | Taxonomy Index | Methods | Publication Venue | Decision Score | Keywords | | | | |-----------------------|--------------------------------------------------------|----------------------------------|--------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------| | 1 | ALOCC | CVPR (2018) | reconstruction error for input | discriminative-generative, reconstruction-based, adversarial learning, denoising AutoEncoder, Refinement and Detection | | | | | ND | | 3 | DeepSvdd | ICML (2018) | distance of the input to the | discriminative, extension of SVDD, compressed features, minimum volume hyper-sphere, Autoencoder | | | | center of the hypersphere | | | | | | | | 4 | GT | NeurIPS (2018) | sum of Dirichlet probabilities for different | self-supervised learning, auxiliary task, self-labeled dataset, geometric transformations, softmax response vector | | | | | | transformations of input | | | | | | | | 2 | Mem-AE | ICCV (2019) | reconstruction error for input | generative, reconstruction-based, Memory-based Representation, sparse addressing technique, hard shrinkage operator | | | | | 5 | CSI | NeurIPS (2020) | cosine similarity to the nearest training sample | self-supervised learning, contrastive learning, negative samples, shifted distribution, hard augmentations | | | | | | multiplied by the norm of representation | | | | | | | | 6 Uninformed Students | CVPR (2020) | an ensemble of student networks' | knowledge distillation, teacher-student based, transfer learning, self supervised learning, Descriptor Compactness | | | | | | | variance and regression error | | | | | | | | 7 | CutPaste | CVPR (2021) | log-density of trained Gaussian density estimator | self-supervised Learning, generative classifiers, auxiliary task, extracting patch-level representations, cutout and scar | | | | | 8 | Multi-KD | | CVPR (2021) | discrepancy between the intermediate teacher and | knowledge distillation, teacher-student, transfer the intermediate knowledge, pretrained network, mimicking different layers | | | | | student networks' activation values | | | | | | | | OSR | | 9 | OpenMax | CVPR (2016) | after computing OpenMax probability per channel | Extreme Value Theorem, activation vector, overconfident scores, weibull distribution, Meta-Recognition | | | | maximum probability is then selected | | | | | | | | 10 | OSRCI | ECCV (2018) | probability assigned to class with label k + 1, minus | generative, unknown class generation, classification based, Counterfactual Images, learning decision boundary | | | | | | maximum probability assigned to first k class | | | | | | | | 11 | C2AE | CVPR (2019) | minimum reconstruction error under different match | generative, Reconstruction based, class conditional Autoencoder, Extreme Value Theory, feature-wise linear modulations | | | | | | vectors, compared with the predetermined threshold | | | | | | | | 12 | CROSR | | CVPR (2019) | after computing OpenMax probability per channel | discriminative-generative, classification-reconstruction based , Extreme Value Theorem, hierarchical reconstruction nets | | | | | maximum probability is then selected | | | | | | | | 13 | GDFR | | CVPR (2020) | by passing the augmented input to the classifier | discriminative-generative, self-supervised learning, reconstruction based, auxiliary task, geometric transformation | | | | | network, and finding maximum activation | | | | | | | | 14 | OpenGan | ICCV (2021) | the trained discriminator utilized as | discriminative-generative, unknown class generation, adversarially synthesized fake data, crucial to use a validation set, model selection | | | | | | an open-set likelihood function | | | | | | | | 15 | MLS | ICLR (2021) | negation of maximum logit score | discriminative, classification based, between the closed-set and open-set, Fine-grained datasets, Semantic Shift Benchmark(SSB) | | | | | 16 | MSP | ICLR (2016) | Negation of maximum softmax probability | outlier detection metric, softmax distribution, maximum class probability, classification based, various data domains | | | | | OOD | | 17 | ODIN | | ICLR (2017) | Negation of maximum softmax probability | outlier detection metric, softmax distribution, temperature scaling, adding small perturbations, enhancing separability | | | with temperature scaling | | | | | | | | 18 | A Simple Unified etc. | NeurIPS (2018) | Mahalanobis distance to the closest | distance based, simple outlier detection metric, small controlled noise, class-incremental learning, Feature ensemble | | | | | | class-conditional Gaussian | | | | | | | | 19 | OE | ICLR (2018) | negation of maximum softmax probability | outlier exposure , auxiliary dataset, classification based, uniform distribution over k classes, various data domains | | | | | 20 | Using SSL etc. | NeurIPS (2019) | negation of maximum softmax probability | self-supervised learning, auxilary task, robustness to adversarial perturbations, MSP, geometric transformation | | | | | 21 | G-ODIN | | ICCV (2020) | softmax output's categorical distribution | outlier detection metric, softmax distribution, learning temperature scaling, Decomposed Confidence | | | | 22 | Energy-based OOD | | NeurIPS (2020) | the energy function | outlier detection metric, hyperparameter-free, The Helmholtz free energy, energy-bounded learning, the Gibbs distribution | | | | | (a scalar is derived from the logit outputs) | | | | | | | | 23 | MOS | CVPR (2021) | lowest others score among all groups | softmax distribution, group-based Learning, large-scale images, category others, pre-trained backbone | | | | | 24 | ReAct | NeurIPS (2021) | after applying ReAct different OOD detection methods | activation truncation, reducing model overconfidence, compatible with different OOD scoring functions | | | | | | could be used. they default to using the energy score. | | | | | | | - They **would** be able to reconstruct unseen normal test-time samples as precisely as training-time ones. - They **would not** be able to reconstruct unseen abnormal test-time samples as precisely as normal inputs. Nevertheless, recently proposed AE-based methods show that the above-mentioned assumptions are not always true (143; 49; 189). For instance, even if AE can reconstruct normal samples perfectly, nonetheless with only a one-pixel shift, reconstruction loss will be high. Likewise, GANs as another famous model family, are widely used for AD, ND, OCC, OSR, and OOD detection. If a GAN is trained on fully normal training samples, it operates on the following assumptions: - If the input is **normal** then there is a latent vector that, if generated, has a low discrepancy with the input. - If the input is **abnormal** then there **is not** a latent vector that, if generated, has a low discrepancy with the input. Here, the discrepancy can be defined based on the pixel-level MSE loss of the generated image and test-time input or more complex functions such as layer-wise distance between the discriminator's features when fed a generated image and test-time input. Although GANs have proved their ability to capture semantic abstractions of a given training dataset, they suffer from mode-collapse, unstable training process, and irreproducible result problems (3). Finally, auto-regressive and flow-based models can be used to explicitly approximate the data density and detect abnormal samples based on their assigned likelihood. Intuitively, normal samples must have higher likelihoods compared to abnormal ones; however, as will be discussed later, auto-regressive models assign an even higher likelihood to abnormal samples despite them not being seen in the training phase, which results in a weak performance in AD, ND, OSR, and OOD detection. Solutions to this problem will be reviewed in subsequent sections. To address this issue, several remedies have been proposed in the OOD detection domain, which can be used in OSR, AD, and ND; however, considering the fact that OOD detection's prevalent testing protocols might be quite different from other domains such as AD or ND, more evaluations on their reliability are needed. As lots of works with similar ideas or intentions have been proposed in each domain, we mainly focus on providing a detailed yet concise summary of each and highlighting the similarities in the explanations. In Table 1 and Fig. 2, a summary of the most notable papers reviewed in this work is presented along with specifications on their contributions with respect to the methodological categorization provided in this paper. ## 3 Anomaly And Novelty Detection Anomaly Detection (AD) and Novelty Detection (ND) have been used interchangeably in the literature, with few works addressing the differences (122; 176; 172). There are specific inherent challenges with anomaly detection that contradict the premise that the training data includes entirely normal samples. In physical investigations, for example, measurement noise is unavoidable; as a result, algorithms in an unsupervised training process must automatically detect and focus on the normal samples. However, this is not the case for novelty detection problems. There are many applications in which providing a clean dataset with minimum supervision is an easy task. While these domains have been separated over time, their names are still not appropriately used in literature. These are reviewed here in a shared section. Interests in anomaly detection go back to 1969 (52), which defines anomaly/outlier as **"samples that appear** to deviate markedly from other members of the sample in which it occurs", explicitly assuming the existence of an underlying shared pattern that a large fraction of training samples follow. This definition has some ambiguity. For example, one should define a criterion for the concept of deviation or make the term "markedly" more quantitative. To this end, there has been a great effort both before and after the advent of deep learning methods to make the mentioned concepts more clear. To find a sample that deviates from the trend, adopting an appropriate distance metric is necessary. For instance, deviation could be computed in a raw pixel-level input or in a semantic space that is learned through a deep neural network. Some samples might have a low deviation from others in the raw pixel space but exhibit large deviations in representation space. Therefore, choosing the right distance measure for a hypothetical space is another challenge. Finally, the last challenge is choosing the threshold to determine whether the deviation from normal samples is significant. ## 3.1 Non-Deep Learning-Based Approaches In this section, some of the most notable traditional methods used to tackle the problem of anomaly detection are explained. All other sections are based on the new architectures that are mainly based on deep learning. ## 3.1.1 Isolation Forests (94): Like Random Forests, Isolation Forests (IF) are constructed using decision trees. It is also an unsupervised model because there are no predefined labels. Isolation Forests were created with the notion that anomalies are "few and distinct" data points. An Isolation Forest analyzes randomly sub-sampled data in a tree structure using attributes chosen at random. As further-reaching samples required more cuts to separate, they are less likely to be outliers. Similarly, samples that end up in shorter branches exhibit anomalies because the tree ![6_image_0.png](6_image_0.png) could distinguish them from other data more easily. Fig. 3 depicts the overall procedure of this work. Figure 3: An overview of the isolation forest method is shown. Anomalies are more susceptible to isolation and hence have short path lengths. A normal point xi requires twelve random partitions to be isolated compared to an anomaly xo that requires only four partitions to be isolated. The figure is taken from (94). ## 3.1.2 Dbscan (40): Given a set of points in a space, DBSCAN combines together points that are densely packed together (points ![6_image_1.png](6_image_1.png) with numerous nearby neighbors), identifying as outliers those that are isolated in low-density regions. The steps of the DBSCAN algorithm are as follows: (1) Identify the core points with more than the minimum number of neighbors required to produce a dense zone. (2) Find related components of core points on the neighbor graph, ignoring all non-core points.(3) If the cluster is a neighbor, assign each non-core point to it, otherwise assign it to noise. Fig. 4 depicts the overall procedure of this work. Figure 4: The minimal number of points required to generate a dense zone in this figure is four. Since the area surrounding these points in a radius contains at least four points, Point A and the other red points are core points (including the point itself). They constitute a single cluster because they are all reachable from one another. B and C are not core points, but they may be reached from A (through other core points) and hence are included in the cluster. Point N is a noise point that is neither a core nor reachable directly. The figure is taken from (40). ## 3.1.3 Lof: Identifying Density-Based Local Outliers (16): The local outlier factor is based on the concept of a local density, in which locality is determined by the distance between K nearest neighbors. One can detect regions of similar density and points that have a significantly lower density than their neighbors by comparing the local density of an object to the local densities of its neighbors. These are classified as outliers. The typical distance at which a point can be "reached" from its neighbors is used to estimate the local density. The LOF notion of "reachability distance" is an extra criterion for producing more stable cluster outcomes. The notions of "core distance" and "reachability distance", which are utilized for local density estimation, are shared by LOF and DBSCAN. ## 3.1.4 Oc-Svm (148): Primary AD methods would use statistical approaches such as comparing each sample with the mean of the training dataset to detect abnormal inputs, which imposes an ungeneralizable and implicit Gaussian distribution assumption on the training dataset. In order to reduce the number of presumptions or relieve the mentioned deficiencies of traditional statistical methods, OC-SVM (148) was proposed. As the name implies, OC-SVM is a one-class SVM maximizing the distance of training samples from the origin using a hyper-plane, including samples on one side and the origin on its other side. Eq. 1 shows the primal form of OC-SVM that attempts to find a space in which training samples lie on just one side, and the more distance of origin to the line, the better the solution to the optimization problem. Each input data is specified by xi, Φ is a feature extractor, ξiis the slack variable for each sample, w and ρ characterize the hyperplane. The parameter v sets an upper bound on the fraction of outliers and also is a lower bound on the number of training examples used as the support vector. $$\min_{w,\rho,\xi_{i}}\frac{1}{2}\left\|w\right\|^{2}+\frac{1}{wn}\sum_{1}^{n}\xi_{i}-\rho\tag{1}$$ subject to $(w\cdot\Phi(x_{i}))\geqslant\rho-\xi_{i},i\in1,...,n$ $$\xi_{i}\geqslant0,i\in1,\ldots,n$$ Finding a hyper-plane appears to be a brilliant approach for imposing a shared normal restriction on the training samples. But unfortunately, because half-space is not compact enough to grasp unique shared normal features, it generates a large number of false negatives. Therefore, similarly, Support Vector Data Description(SVDD) tries to find the most compact hyper-sphere that includes normal training samples. This is much tighter than a hyper-plan, thus finds richer normal features, and is more robust against unwanted noises as Eq. 2 shows. Both of the mentioned methods offer soft and hard margin settings, and in the former, despite the latter, some samples can cross the border and remain outside of it even if they are normal. OC-SVM has some implicit assumptions too; for example, it assumes training samples obey a shared concept, which is conceivable due to the one-class setting. Also, it works pretty well on the AD setting in which the number of outliers is significantly lower than normal ones however fails on high dimensional datasets. The resulting hyper-sphere is characterized by a and R as the center and radius, respectively. Also, C controls the trade-off between the sphere compactness and average slack values. $$\min_{R,a}C\sum_{i=1}^{n}\xi_{i}+R^{2}\tag{2}$$ $$\mbox{subject to}\quad\|\phi(x_{i})-a\|^{2}\leqslant R^{2}+\xi_{i},i\in1,\ldots,n$$ $$\xi_{i}\geqslant0,i\in1,\ldots,n$$ ## 3.2 Anomaly Detection With Robust Deep Auto-Encoders (195): This work trains an AutoEncoder (AE) on a dataset containing both inliers and outliers. The outliers are detected and filtered during training, under the assumption that inliers are significantly more frequent and have a shared normal concept. This way, the AE is trained only on normal training samples and consequently poorly reconstructs abnormal test time inputs. The following objective function is used: $$\min_{\theta}||L_{D}-D_{\theta}(E_{\theta}(L_{D}))||_{2}+\lambda||S||_{1}$$ s.t. $$X-L_{D}-S=0,$$ where E and D are encoder and decoder networks, respectively. LD is the inlier part, S is the outlier part of the training data X and λ is a parameter that tunes the level of sparsity in S. However, the above optimization is not easily solvable since S and θ need to be optimized jointly. To address this issue, the Alternating Direction Method of Multipliers (ADMM) is used, which divides the objective into two (or more) pieces. At the first step, by fixing S, an optimization problem on the parameters θ is solved such that LD = X − S, and the objective is ||LD − Dθ(Eθ(LD))||2. Then by setting LD to be the reconstruction of the trained AE, the optimization problem on its norm is solved when S is set to be X − LD. Since the L1 norm is not differentiable, a proximal operator is employed as an approximation of each of the optimization steps as follows: $$\text{prox}_{\lambda,L_{1}}(x_{i})=\begin{Bmatrix}x_{i}-\lambda&x_{i}>\lambda\\ x_{i}+\lambda&x_{i}<-\lambda\\ 0&-\lambda\leq x_{i}\leq\lambda\end{Bmatrix}\tag{4}$$ $$\left(5\right)$$ Such a function is known as a *shrinkage operator* and is quite common in L1 optimization problems. The mentioned objective function with ||S||1 separates only unstructured noises, for instance, Gaussian noise on training samples, from the normal content in the training dataset. To separate structured noises such as samples that convey completely different meaning compared to the majority of training samples, L2,1 optimization norm can be applied as: $$\sum_{j=1}^{n}||x_{j}||_{2}=\sum_{j=1}^{n}(\sum_{i=1}^{m}|x_{ij}|^{2})^{1/2},\tag{1}$$ with a proximal operator that is called block-wise soft-thresholding function (108). At the test time, the reconstruction error is employed to reject abnormal inputs. ## 3.3 Adversarially Learned One-Class Classifier For Novelty Detection (Alocc) (139): In this work, it is assumed that a fully normal training sample is given, and the goal is to train a novelty detection model using them. At first, (R) as a Denoising Auto Encoder (DAE) is trained to (1) decrease the reconstruction loss and (2) fool a discriminator in a GAN-based setting. This helps the DAE to produce high-quality images instead of blurred outputs (84). This happens because, on the one hand, the AE model loss explicitly assumes independent Gaussian distributions for each of the pixels. And on the other hand, true distributions of pixels are usually multi-modal, which forces the means of Gaussians to settle in between different modes. Therefore, they produce blurry images for complex datasets. To address this issue, AEs can be trained in a GAN-based framework to force the mean of each of Gaussians to capture just one mode of its corresponding true distribution. Moreover, by using the discriminator's output (D) instead of the pixel-level loss, normal samples that are not properly reconstructed can be detected as normal. This loss reduces the False Positive Rate (FPR) of the vanilla DAE significantly. The objective function is as follows: LR+D = min R max D EX∼pt [log(D(X))]+ EXe∼pt+Nσ ([log(1 − D(R(Xe)))] LR = ||X − X′||2 L = LR+D + LR, (6) $\frac{1}{2}$ where X is the input, X′is the reconstructed output of the decoder, and pt is the distribution of the target class (i.e., normal class). This helps the model to not only have the functionality of AEs for anomaly detection but also produces higher quality outputs. Furthermore, detection can be done based on D(R(X)) as mentioned above. Fig. 5 depicts the overall architecture of this work. An extended version of ALOCC, where the R(X) network has been formed as a Variational AE is presented ![9_image_0.png](9_image_0.png) in (141). Besides, the ALOCC can not process the entire input data (images or video frames) at one step and needs the test samples to divide into several patches. Processing the patches makes the method computationally expensive. To address this problem, AVID (140) was proposed to exploit a fully convolutional network as a discriminator (i.e., D) to score (and hence detect) all abnormal regions in the input frame/image all at once. Figure 5: An overview of the ALOCC method is shown. This work trains an autoencoder that fools a discriminator in a GAN-based setting. This helps the AE to make high-quality images instead of blurred outputs. Besides, by using the discriminator's output, a more semantic similarity loss is employed instead of the pixel-level L2 reconstruction loss. The figure is taken from (139). ## 3.4 One-Class Novelty Detection Using Gans With Constrained Latent Representations (Oc-Gan) (122): As one of the challenges, AE trained on entirely normal training samples could reconstruct unseen abnormal inputs with even lower errors. To solve this issue, this work attempts to make the latent distribution of the encoder (EN(·)) similar to the uniform distribution in an adversarial manner: $$\begin{array}{r l}{l_{\mathrm{latent}}}&{{}=-(\mathbb{E}_{x\sim\mathbb{U}(-1,1)}[\log(D_{l}(s))]+}\\ {\mathbb{E}_{x\sim p_{x}}[\log(1-D_{l}(\mathrm{EN}(x+n)))]),}\end{array}$$ $$\left(7\right)$$ where n ∼ N(0, 0.2), x is an input image, px is the distribution of in-class examples, and Dlis the latent discriminator. The discriminator forces the encoder to produce uniform distribution on the latent space. Similarly, the decoder (De(·)) is forced to reconstruct in-class outputs for any latent value sampled from the uniform distribution as follows : $$\begin{array}{r l}{l_{\mathrm{visual}}}&{{}=-(\mathbb{E}_{s\sim\mathrm{U}(-1,1)}[\log(\mathcal{D}_{v}(\mathrm{De}(s)))]+}\\ {\qquad}&{{}\mathbb{E}_{x\sim p_{l}}[\log(1-D_{v}(x))]),}\end{array}$$ (8) ![10_image_0.png](10_image_0.png) where Dv is called visual discriminator. Intuitively, the learning objective distributes normal features in the latent space such that the reconstructed outputs entirely or at least roughly resemble the normal class for both normal and abnormal inputs. Another technique called informative negative sample mining is also employed on the latent space to actively seek regions that produce images of poor quality. To do so, a classifier is trained to discriminate between the reconstructed outputs of the decoder and fake images, which are generated from randomly sampled latent vectors. To find informative negative samples, the algorithm (see Fig. 6) starts with a random latent-space sample and uses the classifier to assess the quality of the generated image. After that, the algorithm solves an optimization in the latent space to make the generated image such that the discriminator detects it as fake. Finally, the negative sample is used to boost the training process. As in previous AE-based methods, reconstruction loss is employed in combination with the objective above. Reconstruction error is used as the test time anomaly score. Figure 6: The training process of OC-GAN method (122). ## 3.5 Latent Space Autoregression For Novelty Detection (Lsa) (1): This work proposes a concept called "surprise" for novelty detection, which specifies the uniqueness of input samples in the latent space. The more unique a sample is, the less likelihood it has in the latent space, and subsequently, the more likely it is to be an abnormal sample. This is beneficial, especially when there are many similar normal training samples in the training dataset. To minimize the MSE error for visually identical training samples, AEs often learn to reconstruct their average as the outputs. This can result in fuzzy results and large reconstruction errors for such inputs. Instead, by using the surprise loss in combination with the reconstruction error, the issue is alleviated. Besides, abnormal samples are usually more surprising, and this increases their novelty score. Surprise score is learned using an auto-regressive model on the latent space, as Fig. 7 shows. The auto-regressive model (h) can be instantiated from different architectures, such as LSTM and RNN networks, to more complex ones. Also, similar to other AE-based methods, the reconstruction error is optimized. The overall objective function is as follows: $$\begin{array}{l}{{{\mathcal{L}}={\mathcal{L}}_{\mathrm{Rec}}(\theta_{E},\theta_{D})+\lambda\cdot{\mathcal{L}}_{\mathrm{LLK}}(\theta_{E},\theta_{h})}}\\ {{\qquad=\mathbb{E}_{X}\big[\left|\left|x-{\hat{x}}\right|\right|^{2}-\lambda\cdot\log(h(z;\theta_{h}))\big],\ z=f(x,\theta_{E}),}}\end{array}$$ -||x − xˆ||2 − λ · log(h(z; θh)), z = f(*x, θ*E),(9) where LRec and LLLK are the reconstruction and surprise terms. The parameters of the encoder, decoder, and probabilistic model are specified by θE, θD and θh, respectively. x is the input, f is the encoder, and λ controls the weight of LLLK. ## 3.6 Memory-Augmented Deep Autoencoder For Unsupervised Anomaly Detection (Mem-Ae) (49): This work challenges the second assumption behind using AEs. It shows that some abnormal samples could be perfectly reconstructed even when there are not any of them in the training dataset. Intuitively, AEs $$({\mathfrak{g}})$$ ![11_image_0.png](11_image_0.png) Figure 7: An overview of the LSA method is shown. A surprise score is defined based on the probability distribution of embeddings. The probability distribution is learned using an auto-regressive model. Also, reconstruction error is simultaneously optimized on normal training samples. The figure is taken from (1). may not learn to extract uniquely describing features of normal samples; as a result, they may extract some abnormal features from abnormal inputs and reconstruct them perfectly. This motivates the need for learning features that allow only normal samples to be reconstructed accurately. To do so, Mem-AE employs a memory that stores unique and sufficient features of normal training samples. During training, the encoder *implicitly* plays the role of a *memory address generator*. The encoder produces an embedding, and memory features that are similar to the generated embedding are combined. The combined embedding is then passed to a decoder to make the corresponding reconstructed output. Also, Mem-AE uses a sparse addressing technique that uses only a small number of memory items. Accordingly, the decoder in Mem-AE is restricted to performing the reconstruction using a small number of addressed memory items, rendering the requirement for efficient utilization of the memory items. Furthermore, the reconstruction error forces the memory to record prototypical patterns that are representative of the normal inputs. To summarize the training process, Mem-AE (1) finds address Enc(x) = z from the encoder's output; (2) measures the cosine similarity d(*z, m*i) of the encoder output z with each of the memory elements mi; (3) computes attention weights w, where each of its elements is computed as follows: $$w_{i}=\frac{\exp(d(z,m_{i}))}{\sum_{j=1}^{N}\exp(d(z,m_{j}))};\tag{1}$$ (4) applies address shrinkage techniques to ensure sparsity: $$\hat{w}_{i}=\frac{\max(w_{i}-\lambda,0).w_{i}}{|w_{i}-\lambda|+\epsilon}\tag{1}$$ $$E(\hat{\mathbf{w}}^{t})=\sum_{i=1}^{T}-\hat{w}_{i}.\log(\hat{w}_{i}),\tag{1}$$ $$(10)$$ $$(11)$$ $$\left(12\right)$$ and finally, the loss function is defined as (13), where R is the reconstruction error and is used as the test time anomaly score. Fig. 8 shows the overview of architecture. $$L(\theta_{e},\theta_{d},M)=\frac{1}{T}\sum_{t=1}^{T}\left(R({\bf x}^{t},{\hat{\bf x}}^{t})+\alpha E({\hat{\bf w}}^{t})\right)\tag{1}$$ $$(13)$$ where θe and θd denote the parameters of the encoder and decoder. x t denotes training samples and xˆ t denote the reconstructed sample corresponding the each train- ![12_image_0.png](12_image_0.png) Figure 8: An overview of the Mem-AE method is shown. Each sample is passed through the encoder, and a latent embedding z is extracted. Then using the cosine similarity, some nearest learned normal features are selected from the memory, and the embedding zˆ is made as their weighted average. At last, the reconstruction error of the decoded zˆ and input is considered as the novelty score. The figure is taken from (49). ## 3.7 Redefining The Adversarially Learned One-Class Classifier Training Paradigm (Old-Is-Gold) (189): This work extends the idea of ALOCC (139). As ALOCC is trained in a GAN-based setting, it suffers from stability and convergence issues. On one hand, the over-training of ALOCC can confuse the discriminator D because of the realistically generated fake data. On the other hand, under-training detriments the usability of discriminator features. To address this issue, a two-phase training process is proposed. In the first phase, a similar training process as ALOCC is followed : $${\mathcal{L}}={\mathcal{L}}_{{\mathcal{R}}+{\mathcal{D}}}+{\mathcal{L}}_{{\mathcal{R}}}$$ L = LR+D + LR (14) As phase one progresses, a low-epoch generator model G old for later use in phase two of the training is saved. The sensitivity of the training process to the variations of the selected epoch is discussed in the paper. During the second phase, samples Xˆ = G are considered high-quality reconstructed data. Samples Xˆlow = G old(X) are considered as low-quality samples. Then, pseudo anomaly samples are created as follows: $$\hat{\hat{X}}=\frac{\mathcal{G}^{\text{old}}(X_{i})+\mathcal{G}^{\text{old}}(X_{j})}{2}$$ $$\hat{X}^{\text{pseudo}}=\mathcal{G}(\hat{\hat{X}})$$ $$(15)$$ After that, the discriminator is trained again to strengthen its features by distinguishing between good quality samples such as {X, Xˆ} and low-quality or pseudo anomaly ones such as {Xˆlow, Xˆ pseudo} as follows: $$\max_{\mathcal{D}}\left(\alpha\cdot\mathbb{E}_{\hat{X}}[\log(1-D(X))]+(1-\alpha)\cdot\mathbb{E}_{\hat{X}}[\log(1-D(\hat{X}))]\right.\tag{16}$$ $$\left.+\left(\beta\cdot\mathbb{E}_{\hat{X}^{\text{low}}}[\log(1-D(\hat{X}^{\text{low}}))]\right.\right.$$ $$\left.+\left(1-\beta\right)\cdot\mathbb{E}_{\hat{X}^{\text{pseudo}}}[\log(1-D(\hat{X}^{\text{pseudo}}))]\right)$$ $$(14)$$ In this way, D does not collapse as in ALOCC, and D(G(X)) is used as the test time criterion. Fig. 9 shows the overall architecture of this method. ![13_image_0.png](13_image_0.png) Figure 9: An overview of the Old-Is-Gold method is shown. The architecture is similar to ALOCC; however, it saves the weights of the network G during the training process, which are called Gold. Then, Gold is employed to make some low-quality samples that the discriminator is expected to distinguish as normal. Also, some pseudo abnormal samples are generated by averaging each pair of normal low-quality samples, which the discriminator is supposed to distinguish as fake inputs. This makes the discriminator's features rich and stabilizes the training process. The figure is taken from (189). ## 3.8 Adversarial Mirrored Autoencoder (Ama) (159): The overall architecture of AMA is similar to ALOCC. However, it challenges the first assumption of AEs. It has been shown that lp norms are not suitable for training AEs in the anomaly detection domain since they cause blurry reconstructions and subsequently increase the error of normal samples. To address this problem, AMA proposes to minimize the Wasserstein distance between joint distributions PX,X and PX,Xˆ . The objective function is as follows: $$W(\mathbb{P}_{X,X},\mathbb{P}_{X,{\hat{X}}})=\operatorname*{max}_{\mathcal{D}\in L i p-1}\mathbb{E}_{x\sim\mathbb{P}_{X}}[\mathcal{D}(X,X)-\mathcal{D}(X,{\hat{X}})]$$ $$(17)$$ where Xˆ is the reconstructed image. (15) showed that by forcing the linear combination of latent codes of a pair of data points to look realistic after decoding, the encoder learns a better representation of data. Inspired by this, AMA makes use of Xˆinter—obtained by decoding the linear combination of some randomly sampled inputs: $$\operatorname*{min}_{G}\operatorname*{max}_{D\in L i p-1}\mathbb{E}_{x\sim\mathbb{P}_{X}}[\mathcal{D}(X,X)-\mathcal{D}(X,\hat{X}_{\mathrm{inter}})]$$ Ex∼PX [D(X, X) − D(X, Xˆinter)] (18) To further boost discriminative abilities of D, inspired by (27), normal samples are supposed as residing in the typical set (28) while anomalies reside outside of it, a Gaussian regularization equal to L2 norm is imposed on the latent representation. Then using the Gaussian Annulus Theorem (170) stating that in a d-dimensional space, the typical set resides with high probability at a distance of √d from the origin, synthetic negatives (anomalies), are obtained by sampling the latent space outside and closer to the typical set boundaries. Therefore, the final objective function is defined as follows: $$(18)$$ $$\begin{array}{l}\min_{G}\max_{D\in L^{\rm op}_{\rm log}-1}{\cal L}_{\rm normal}-\lambda_{\rm neg}\cdot{\cal L}_{\rm neg}\\ \\ {\cal L}_{\rm normal}=\mathbb{E}_{x\sim\partial X}\left[D(X,X)-D(X,\hat{X})\right.\\ \\ \left.+\lambda_{\rm inter}\cdot(D(X,X)-D(X,\hat{X}_{\rm inter}))+\lambda_{\rm reg}\cdot\|E(X)\|\right]\\ \\ {\cal L}_{\rm neg}=\mathbb{E}_{x\sim\partial X}\left[D(X,X)-D(X,\hat{X}_{\rm neg})\right]\end{array}\tag{19}$$ Fig. 10 shows an overview of the method and the test time criterion is ||f(*X, X*) − f(X, G(E(X))||1 where f ![14_image_0.png](14_image_0.png) is the penultimate layer of D. Figure 10: An overview of AMA method is shown. The architecture is similar to ALOCC; however, it does not try to minimize reconstruction error between x and xˆ. Instead, a discriminator is trained with Wasserstein Loss to minimize the distance between the distribution (*x, x*) and (x, xˆ). In this way, it forces xˆ to be similar to x without using any lp norm that causes blurred reconstructed outputs. Moreover, some negative latent vectors are sampled from low probability areas, which the discriminator is supposed to distinguish as fake ones. This makes the latent space consistent compared to the previous similar approaches. The figure is taken from (159). ## 3.9 **Unsupervised Anomaly Detection With Generative Adversarial Networks To Guide Marker Discovery** (Anogan) (147): This work trains a GAN on normal training samples, then at the test time, solves an optimization problem that attempts to find the best latent space z by minimizing a discrepancy. The discrepancy is found by using a pixel-level loss of the generated image and input in combination with the loss of discriminator's features at different layers when the generated and input images are fed. Intuitively, for normal test time samples, a desired latent vector can be found despite abnormal ones. Fig. 11 shows the structure of the method. As inferring the desired latent vector through solving an optimization problem is time-consuming, some extensions of AnoGAN have been proposed. For instance, Efficient-GAN (191) tries to substitute the optimization problem by training an encoder E such that the latent vectors z ′ approximate the distribution of z. In this way, E is used to produce the desired latent vector, significantly improving test time speed. Fig. 12 shows the differences. The following optimization problem (20) is solved at the test time to find z. The anomaly score is assigned based on how well the found latent vector can minimize the objective function. $$\operatorname*{min}_{z}\quad(1-\lambda)\cdot\sum|x-G(z)|+\lambda\cdot\sum|D(x)-D(G(z))|$$ X|D(x) − D(G(z))| (20) $$(20)^{\frac{1}{2}}$$ ![15_image_0.png](15_image_0.png) Figure 11: An overview of AnoGAN method. At first, a generator network G and a discriminator D are ![15_image_1.png](15_image_1.png) trained jointly on normal training samples using the standard training loss, which yields a semantic latent representation space. Then at the test time, an optimization problem that seeks to find an optimal latent embedding z that mimics the pixel-level and semantic-level information of the input is solved. Intuitively, for normal samples, a good approximation of inputs can be found in the latent space; however, it is not approximated well for abnormal inputs. The figure is taken from (147). Figure 12: Figure A shows the architecture of AnoGAN. Figure B shows the architecture of Efficient-GAN. The encoder E mimics the distribution of the latent variable z. The discriminator D learns to distinguish between joint distribution of (*x, z*) instead of x. The figure is taken from (147). ## 3.10 Deep One-Class Classification (Deepsvdd) (136): This method can be seen as an extension of SVDD using a deep network. It assumes the existence of shared features between training samples, and tries to find a latent space in which training samples can be compressed into a minimum volume hyper-sphere surrounding them. Fig. 13 shows the overall architecture. The difference *w.r.t* traditional methods is the automatic learning of kernel function ϕ by optimizing the parameters W. To find the center c of the hyper-sphere, an AE is first trained on the normal training samples, then the average of normal sample latent embeddings is considered as c. After that, by discarding the decoder part, the encoder is trained using the following objective function: $$\min_{W}\frac{1}{n}\sum_{i=1}^{n}||\phi(x_{i};W)-c||_{2}^{2}+\frac{\lambda}{2}\sum_{l=1}^{L}||W^{l}||_{F}^{2},\tag{21}$$ where Wlshows the weights of encoder's lth layer. At the test time, anomaly score is computed based on ||ϕ(x; W∗) − c||2 where W∗ denotes trained parameters. ![16_image_0.png](16_image_0.png) Figure 13: An overview of the DSVDD method is shown. It finds a minimum volume hyper-sphere that contains all training samples. As the minimum radius is found, more distance to the center is expected for abnormal samples compared to normal ones. The figure is taken from (136). ## 3.11 Deep Semi-Supervised Anomaly Detection (137): This is the semi-supervised version of DSVDD which assumes a limited number of labeled abnormal samples. The loss function is defined to minimize the distance of normal samples from a pre-defined center of a hyper-sphere while maximizing abnormal sample distances. The objective function is defined as follows: $$\min_{W}\frac{1}{n+m}\sum_{i=1}^{n}||\phi(x_{i};W)-c||^{2}+\tag{22}$$ $$\frac{\eta}{n+m}\sum_{j=1}^{m}(||\phi(\hat{x_{i}};W)-c||^{2})^{\hat{y_{j}}}+\frac{\lambda}{2}\sum_{l=1}^{L}||W^{l}||_{F}^{2},$$ where ϕ(xi; W) is a deep feature extractor with parameter W. Note that, as mentioned above, there is access to (xˆ1, yˆ1)*, ...,*(xˆm, yˆm) ∈ X × Y with Y = {−1, +1} where yˆ = 1 denotes known normal samples and yˆ = −1 otherwise. c is specified completely similar to DSVDD by averaging on the latent embeddings of an AE trained on normal training samples. From a somewhat theoretical point of view, AE's objective loss function helps the encoder maximize I(X;Z) in which X denotes input variables and Z denotes latent variables. Then, it can be shown that (22) minimizes the entropy of normal sample latent embeddings while maximizing it for abnormal ones as follows: $$\begin{array}{l l}{{H(Z)=E[-\log(P(Z)]=-\int_{Z}p(z)\log p(z)\,d z}}\\ {{}}\\ {{}}\\ {{}}\\ {{\leq\frac{1}{2}\log((2\pi e)^{d}\mathrm{det}(\Sigma))\propto\log\sigma^{2}}}\end{array}$$ $$(23)$$ As the method makes normal samples compact, it forces them to have low variance, and consequently, their entropy is minimized. The final theoretical formulation is approximated as follows: $$\operatorname*{max}_{p(z|x)}I(X;Z)+\beta(H(Z^{-})-H(Z^{+})).$$ Here, I specifies the mutual information and β is a regularization term to obtain statistical properties that is desired for the downstream tasks. ## 3.12 Deep Anomaly Detection Using Geometric Transformations (Gt) (48): GT attempts to reformulate the one-class classification problem into a multi-class classification. GT defines a set of transformations that do not change the data distribution, then trains a classifier to distinguish between $$(24)$$ them. Essentially, the classifier is trained in a self-supervised manner. Finally, a Dirichlet distribution is ![17_image_0.png](17_image_0.png) trained on the confidence layer of the classifier to model non-linear boundaries. Abnormal samples are expected to be in low-density areas since the network can not confidently estimate the correct transformation. At the test time, different transformations are applied to the input, and the sum of their corresponding Dirichlet probabilities is assigned as the novelty score. An overview of the method is shown in Fig. 14. Figure 14: An overview of the GT method is shown. A set of transformations is defined, then a classifier must distinguish between them. Having trained the classifier in a self-supervised manner, a Dirichlet distribution is trained on the confidence layer to model their boundaries. The figure is taken from (48). ## 3.13 **Effective End-To-End Unsupervised Outlier Detection Via Inlier Priority Of Discriminative Network** (172): In this work, similar to GT, a self-supervised learning (SSL) task is employed to train an anomaly detector except in the presence of a minority of outliers or abnormal samples in the training dataset. Suppose the following training loss is used as the objective function: $${\cal L}_{\mathrm{SS}}(x_{i}\mid\theta)=-\frac{1}{K}\sum_{y=1}^{K}\log(P^{(y)}(x_{i}^{(y)}\mid\theta)),$$ $$(25)$$ where x (y) iis obtained by applying a set of transformations O(. | y), and y indicates each transformation. The anomaly detection is obtained based on the objective function score. Take for example the rotation prediction task. During training, the classifier learns to predict the amount of rotation for normal samples. During test time, different rotations are applied on inputs, the objective function scores for normal samples would be lower than abnormal ones. However, due to the presence of abnormal samples in the training dataset, the objective score for abnormal samples may not always be higher. To address this issue, it is shown that the magnitude and direction of gradient in each step have a significant tendency toward minimizing inlier samples' loss function. Thus the network produces lower scores compared to abnormal ones. (172) exploits the magnitude of transformed inliers and outliers' aggregated gradient to update wc, i.e. ||∇(in) wc L|| and ||∇(out) wc L||, which are shown to follow this approximation: $${\frac{E(||\nabla_{w_{c}}^{(\mathrm{in})}\cdot L||^{2})}{E(||\nabla_{w_{c}}^{(\mathrm{out})}\cdot L||^{2})}}\approx{\frac{N_{\mathrm{in}}^{2}}{N_{\mathrm{out}}^{2}}}$$ $\left(26\right)$. where E(·) denotes the probability expectation. As Nin ≫ Nout, normal samples have more effect on the ![18_image_0.png](18_image_0.png) training procedure. Also, by projecting and averaging the gradient in the direction of each of the training sample's gradient (−∇θL(x) ·−∇θL(xi) ||−∇θL(xi)||), the stronger effect of inlier distribution vs outlier is observed again. Fig. 15 shows the effect empirically. Figure 15: The average magnitude of the gradient for inliers and outliers with respect to the number of iterations. The class used as inliers is in brackets. The figure is taken from (172). ## 3.14 Classification-Based Anomaly Detection For General Data (Goad) (12): This work is very similar to GT. It trains a network to classify between different transformations (T) ; however, instead of using cross-entropy loss or training a Dirichlet distribution on the final confidences, it finds a center for each transformation and minimizes the distance of each transformed data with its corresponding center as follows : $$P(m^{\prime}\mid T(x,m))=\frac{e^{-||f(T(x,m))-c_{m^{\prime}}||^{2}}}{\sum_{m}e^{-||f(T(x,m))-c_{m}||^{2}}}\tag{27}$$ where the centers cm are given by the average feature over the training set for every transformation i.e. cm = 1 N Px∈X f(T(*x, m*)). For training f, two options are used. The first one is using a simple cross entropy on P(m′| T(*x, m*)) values, and the second one is using the center triplet loss (53) as follows: $$\sum_{i}\operatorname*{max}(0,||f(T(x_{i},m))-c_{m}||^{2}+s)$$ $$-\operatorname*{min}_{m^{\prime}\neq m}||f(T(x_{i},m))-c_{m^{\prime}}||^{2})$$ $$(28)$$ where s is a margin regularizing the distance between clusters. The idea can be seen as the combination of DSVDD and GT in which GT's transformations are used, and different compressed hyperspheres are learned to separate them. M different transformations transform each sample at the test time, and the average of correct label probabilities is assigned as the anomaly score. ## 3.15 Csi: Novelty Detection Via Contrastive Learning On Distributionally Shifted Instances (165): This work attempts to formulate the problem of novelty detection into a contrastive framework similar to SimCLR (25). The idea of contrastive learning is to train an encoder fθ to extract the necessary information to distinguish similar samples from others. Let x be a query, x+, and x− be a set of positive and negative samples respectively, z be the encoder's output feature or the output of an additional projection layer gϕ(fθ(x)) for each input, and suppose sim(*z, z*′) is cosine similarity. Then, the primitive form of the contrastive loss is defined as follows: $$L_{\mathrm{con}}(x,x_{+},x_{-}):=-\frac{1}{|x_{+}|}\log\frac{\sum_{x^{\prime}\in x_{+}}e^{\mathrm{sim}(z(x),z(x^{\prime}))/\tau}}{\sum_{x^{\prime}\in x_{+}\cup x_{-}}e^{\mathrm{sim}(z(x),z(x^{\prime}))/\tau}}$$ Specifically for SimCLR, the contrastive loss above is converted into the formulation below: $$L_{\rm SimCLR}(B;T):=\frac{1}{2B}\sum_{i=1}^{B}L_{\rm con}(\hat{x}_{i}^{(1)},\hat{x}_{i}^{(2)},\hat{B}_{-i})\tag{30}$$ $$+L_{\rm con}(\hat{x}_{i}^{(2)},\hat{x}_{i}^{(1)},\hat{B}_{-i})$$ $$(29)$$ $$(31)$$ $$(32)$$ where (xˆ (1) i, xˆ (2) i) = (T1(xi), T2(xi)) for the transformations T1 and T2 from a set of transformations T, B := {xi} B i=1, and Bˆ−i:= {xˆ (1) j}j̸=i ∪ {xˆ (2) j}j̸=i. However, contrastive learning requires defining a set of negative samples. To this end, a collection of transformations that shift the distribution of training samples (S) is specified to make the desired negative set when applied on each input. For instance, rotation or patch permutation completely shifts the original input samples' distribution; and can therefore be used as negative samples. Another set of transformations called align transformations (T) is defined as not changing the distribution of training images as much as (S). Then the *Contrasting shifted instances* (CSI) loss can be defined as follows: $L_{con-SI}:=L_{\rm SimCLR}\big{(}\cup_{s\in S}B_{s};T\big{)}$ (10.1) where Bs := {s(xi)} B i=1. Here, CSI considers each distributionally-shifted sample as an OOD with respect to the original sample. The goal is to discriminate an in-distribution sample from other OOD i.e., (s ∈ S) samples. Further, to facilitate fθ to discriminate each shifted instance, a classification loss for classifying shifted instances is defined in combination with Lcon-SI. To do so, a linear layer for modeling an auxiliary softmax classifier pcls-SI(y S | x) is added to fθ as in GT or GOAD: $$L_{\rm cls\mbox{-}SI}=\frac{1}{2B}\frac{1}{K}\sum_{s\in S}\sum_{\hat{x}_{s}\in B_{s}}-\log p_{\rm cls\mbox{-}SI}(y^{s}=s\mid\hat{x}_{s}),\tag{1}$$ where Bˆs is the batch augmented from Bs. In testing, the cosine similarity to the nearest training sample in {xm} multiplied by the norm of representation ||z(x)|| is used. The contrastive loss increases the norm of in-distribution samples, as it is an easy way to minimize the cosine similarity of identical samples by increasing the denominator of (29). To further improve the scoring criterion, ensembling over random augmentations or utilizing shifting transformations similar to GOAD can be employed. ## 3.16 **Da-Contrastive: Learning And Evaluating Representations For Deep One-Class Classification (158):** This research tries to put the problem of novelty detection into a contrastive framework, similar to the other methods mentioned previously in (165) and (48). (165) reports that all of the augmentations that are utilized in the contrastive loss presented at 29 are not necessarily effective for the one-class learning. As a result, rotated versions of the input are treated as separate distributions in this study, and are accounted as negative samples in contrastive loss. Positive samples are also created by applying augmentations to each distribution, such as color jittering. Finally, the one-class classifier is trained on the positive and negative samples in a contrastive learning manner. After, a KDE or OC-SVM (148) is trained on the learned representation of normal training samples, which is used at the test time. An overview of the method can be seen in Fig. 16. ![20_image_0.png](20_image_0.png) Figure 16: An overview of the DA-Contrastive method is represented. This work trains a one-class classifier employing a novel variant of contrastive learning with distribution augmentations such that the class collision between examples from the same class is reduced. This implies discriminating different distributions generated by augmentations of different rotations from the original input (see the right side), which results in less uniformity and more compactness in the feature space (see the left side). The figure is taken from (158). ## 3.17 Uninformed Students: Student-Teacher Anomaly Detection With Discriminative Latent Embeddings (14): This work trains a teacher network using metric learning and knowledge distillation techniques to provide a semantic and discriminative feature space. The teacher T is obtained by first training a network Tˆ that embeds patch-sized images p into a metric space. Then, fast dense local feature extraction for an entire input image can be achieved by a deterministic network transformation from Tˆ to T, as described in (8). To train Tˆ, a large number of training patches p are obtained by randomly cropping an image database, for instance, ImageNet. Then using the following knowledge distillation loss, the knowledge of a pre-trained network P is distilled into Tˆ as follows: $$L_{k}(\hat{T})=||D(\hat{T}(p))-P(p)||^{2},\tag{1}$$ where D is used to align the size of output spaces. This helps the use of computationally efficient network Tˆ instead of P at the test time while the required knowledge is distilled. To further enrich the feature space of Tˆ, an SSL method is employed such that the feature space of patches that are obtained by applying small translations, small changes in the image luminance, and the addition of Gaussian noise to p be similar. This set is called p + as opposed to p −, which is obtained using random cropping regardless of the neighborhood to the patch p. The SSL loss is defined as follows: $$(33)$$ $$\begin{array}{l}{{\delta^{+}=||\hat{T}(p)-\hat{T}(p^{+})||^{2}}}\\ {{\delta^{-}=\operatorname*{min}\{||\hat{T}(p)-\hat{T}(p^{-})||^{2},||\hat{T}(p^{+})-\hat{T}(p^{-})||^{2}\}}}\\ {{L_{m}(\hat{T})=\operatorname*{max}\{0,\delta+\delta^{+}-\delta^{-}\}}}\end{array}$$ $$(34)$$ where δ > 0 denotes the margin parameter. Finally, to reduce the redundancy of feature maps of Tˆ, a compactness loss which minimizes their cross-correlation is used as follows: $$L_{c}({\hat{T}})=\sum_{i\neq j}c_{i j}$$ $$(35)$$ cij (35) therefore, the final loss is Lc(Tˆ) + Lm(Tˆ) + Lk(Tˆ). Having a teacher T trained comprehensively to produce d dimensional feature maps for each pixel of an input image, an ensemble of student networks is forced to approximate the feature maps of T for each pixel located at row r and column c as follows: $$L(S_{i})=\frac{1}{w h}\sum_{(r,c)}||\mu_{(r,c)}^{S_{i}}-(y_{(r,c)}^{T}-\mu)\mathrm{diag}(\sigma)^{-1}||_{2}^{2}$$ $$(36)$$ −1||22(36) where µ and σ are used for data normalization, note that the receptive field of each student is limited to a local image region p(r,c), this helps the students obtain dense predictions for each image pixel with only a single forward pass without having to actually crop the patches p(r,c). At the test time, as students have only learned to follow their teacher on normal training samples, their average ensemble error can be used to detect abnormal samples. Intuitively, they wouldn't follow their teacher on abnormal samples and produce a high average error. ## 3.18 Self-Supervised Learning For Anomaly Detection And Localization (Cutpaste) (90): This work designs a simple SSL task to capture local, pixel-level regularities instead of global, semantic-level ![21_image_0.png](21_image_0.png) ones. While GT or GOAD utilize transformations such as rotation, translation, or jittering, CutPaste cuts a patch of its training input and copies it in another location. The network is trained to distinguish between defected samples and intact ones. Extra auxiliary tasks such as cutout and Scar can be used in combination with the cut-paste operation. After training, a KDE or Gaussian density estimator is trained on the confidence scores of normal training samples, which is used at the test time. Due to the simplicity of the method, it may overfit easily on the classification task. However; several experiments in the paper show contrary to this assumption. An overview of the method can be seen in Fig. 17. Figure 17: An overview of CutPaste. A network is trained to classify artificially made defected samples from the original ones. Despite the simplicity of the method, it does not overfit on MVTecAD dataset. The figure is taken from (90). ## 3.19 Multiresolution Knowledge Distillation For Anomaly Detection (Multi-Kd) (144): Generative models are suited for detecting pixel-level anomalies; however, they may fail on complex, semanticlevel ones. In contrast, discriminative models are good at capturing semantics. Designing an SSL task that captures both semantic and syntax is not easy. To address this issue, Multi-KD attempts to mimic the intermediate layers of a VGG pre-trained network—called intermediate knowledge—into a simpler network using knowledge distillation. This way, multi-resolution modeling of the normal training distribution is obtained and can be used to detect both pixel-level and semantic-level anomalies at the test time. Here, the concept of knowledge is defined to be the length and direction of a pre-trained network on ImageNet (31). A cloner network has a simple yet similar overall architecture compared to the source, making its knowledge similar to the source on normal training samples. At test time, the cloner can follow the source on the normal test time samples; however, it fails for the abnormal ones. This results in a high discrepancy that can be used at test time. Fig. 18 shows the overall architecture. Similar methods exist, which model the distribution of different layers of a pre-trained network using multivariate Gaussian descriptors such as PaDiM (30) or (134). An overview of the architecture of (134) is shown in the Fig. 19. ![22_image_0.png](22_image_0.png) Figure 18: An overview of the Multi-KD method is shown. It distills multi-resolution knowledge of a pre-trained source network on the normal training distribution into a simpler cloner one. The structure of source and cloner is similar to each other. Therefore, the cloner distills the source's knowledge in the corresponding layers. At the test time, their discrepancy would be low for normal inputs as opposed to abnormal ones. The figure is taken from (144). ![22_image_1.png](22_image_1.png) Figure 19: An overview of (134). It models the different layers' distribution of a pre-trained network using Gaussian descriptors. Then, at the test time, the probability of being normal is computed using the training time means and variances. The figure is taken from (134). ## 4 Open-Set Recognition Open-set recognition (OSR) receives more supervision than AD or ND. In this setting, K normal classes are given at the training time. At testing, N classes with N - K unknown and K known classes exist. The objective is to identify unknown classes while classifying the known ones. This has a lot of applications in which labeling normal datasets is more feasible, or gathering a cleaned dataset without having any abnormal sample is possible. As there is more supervision, training data can be categorized into four classes: - *known known classes (KKC):* Training samples that we know they are known. They are already given and labeled. - *known unknown classes (KUC):* Training samples that we know they are not known. This means, they do not belong to the known categories. For example, background images, or any image that we know is not categorized into the known classes are in this group. They are already given and labeled. - *unknown known classes (UKC):* Training samples that we do not know they are known. For example, known test time samples are in this group. These are not given at the training phase. - *unknown unknown classes (UUC):* Training samples that we do not know they are not known. For example, unknown test time samples are in this group. These are not given at the training phase. The space that is far from known classes, including KKC and KUC, is called **open space** O(145). Labeling any sample in this space has a risk value denoted by RO. The risk factor is usually represented as Eq. 37 in which So is the overall measuring space, and f is the measurable recognition (classification) function (145). The value of f is 1 if the input belongs to KKC otherwise 0. $$R_{O}(f)={\frac{\int_{O}f(x)\,d x}{\int_{S_{o}}f(x)\,d x}}$$ $$(37)$$ f(x) dx (37) As discussed in (47), there are some difficulties in defining the practical formulation of openness; therefore, this study uses the definition of (47) as in Eq. 38, where CT R is the set of classes used in training, and CT E is the set of classes used in the testing. $$O=1-{\sqrt{\frac{2\times|C_{T R}|}{|C_{T R}|+|C_{T E}|}}}$$ $$(38)$$ Larger values of O correspond to more open problems, and 0 is assigned to a completely closed one. OSR problem can be defined as finding recognition functions f in such a way that minimizes the risk factor RO. ## 4.1 Towards Open Set Deep Networks (Openmax) (11): This work addresses the problem of overconfident scores of classification models for unseen test time samples. Due to the normalization in softmax computation, two samples with completely different logit scores may have the same confidence score distribution. Instead of using the confidence score, OpenMax resorts to the logit scores that are shown by *Activation Vector* (AV). AV of each sample captures the distribution over classes. The mean AV (MAV) is defined to be the average of AV values across all samples. As for each input sample, the value in AV corresponding to the ground truth is supposed to be high; its distance to the corresponding value of MAV would be high too. Considering the distance of each element in AVs from the corresponding element in MAV as a random variable, correctly classified inputs would have the highest distances for ground truth elements. This might happen for a few classes that are not ground truth but have a strong relationship with ground truth. For instance, the class leopard as the ground truth and cheetah as a near class. To model the distribution of these highest values, the Extreme Value Theorem (EVT) can be used as follows: EVT : Let (s1, s2, ..., sn) be a sequence of i.i.d. samples. Let Mn = max(s1, ..., sn). If a sequence of pairs of real numbers (an, bn) exists such that each 0 ≤ an and $$\operatorname*{lim}_{n\to\infty}P\left({\frac{M_{n}-b_{n}}{a_{n}}}\leq x\right)=F(x)$$ $$(39)$$ = F(x) (39) Then if F is a non-degenerate distribution function, it belongs to one of three extreme value distributions (126). For this work, the Weibull distribution is used. By modeling the distribution of extremes for each class, one can easily compute the probability of each test input being an extreme and discard remaining ones. In summary, during training, the CDF of extreme values for each class or element of AVs are approximated using EVT and correctly classified training samples. For each test-time input sample, the top α elements in AV are selected, representing ground truth and near classes; then, their CDFs are assigned to variables ws(i)(x) as Fig. 20 shows. These probabilities represent the chance of each AV's element vj (x) to be the maximum. After that, an arbitrary class is added to the existing ones as the representative of unknown unknown samples, assumed as class 0, and all activation values are normalized as follows: $${\hat{v}}(x)=\mathbf{v}(x)\circ w(x)$$ $${\hat{v}}_{0}(x)=\sum_{i}v_{i}(x)(1-w_{i}(x))$$ $$(40)$$ ![24_image_0.png](24_image_0.png) vi(x)(1 − wi(x)) (40) The normalization is performed so that the activation vector of UUC would be high if ws(i)(x)s are small. Finally, softmax is computed based on the normalized activation vector, and unknown or uncertain inputs are rejected. All hyper-parameters such as *ϵ, α* are tuned using a validation set including both seen and unseen samples. Figure 20: The inference algorithm of open-max. The class 0 is added in addition to other classes and considered as the unknown class; then, its probability is computed for each of the input samples, and the probabilities of top k values are normalized with respect to the unknown class's probability. At last, if the most significant value is for class 0 or the confidence of top k values is less than a specific amount, the sample is discarded. The figure is taken from (11). ## 4.2 Generative Openmax For Multi-Class Open Set Classification (G-Openmax) (46): This work is similar to OpenMax with exceptions of artificially generating UUC samples with GANs and fine-tuning OpenMax. This removes the need to have a validation dataset. To generate UUCs, a conditional GAN is trained on the KKCs as follows: $$\begin{array}{l}{{\operatorname*{min}_{\phi}\operatorname*{max}=\mathbb{E}_{x,c\sim P_{\mathrm{data}}}[\log(D_{\theta}(x,c))]}}\\ {{+\mathbb{E}_{z\sim P_{z},c\sim P_{c}}[\log(1-D_{\theta}(G_{\phi}(z,c),c))],}}\end{array}$$ $$(41)$$ where D is the discriminator with parameter θ, and G is the generator with parameter ϕ. New samples are generated by interpolating the KKCs' latent space. If the generated sample is not classified ![25_image_0.png](25_image_0.png) as one of the mixing labels, the image is considered UUC. Finally, another classifier called NetG (see Fig. 21) is trained using both UUC and KKC samples such that UUCs can be assigned to the class 0 of the classifier. The inference steps are similar to OpenMax except that the normalization process for the class 0 and other classes is the same. Figure 21: The overall architecture of G-OpenMax compared to the base OpenMax. As it is shown, everything is completely the same except the unknown unknown sample generation, which is done using a GAN. The figure is taken from (46). ## 4.3 Open Set Learning With Counterfactual Images (Osrci) (111): This work follows the idea of generating UUC samples as in G-OpenMax. A generated input is made similar to a KKC yet not to be assigned to the same class. Such generated inputs are called counter-factual examples. As these samples are near the boundary of UUCs, they can better help approximate the real UUC distribution. Specifically, a classifier is trained on k classes. A GAN-based method such as ALOCC is trained. The discriminator is trained using a Wasserstein critic with gradient penalty as follows: $$L_{D}=\sum_{x\in X}D(G(E(x)))-D(x)+P(D)\tag{42}$$ $$L_{G}=\sum_{x\in X}||x-G(E(x))||_{1}-D(G(E(x))),$$ where P(D) is the interpolated gradient penalty term, E encodes the input, D is the discriminator, and G is the generator. The counter-factual samples are generated by optimizing a latent vector z, which has a small distance to the latent vector of an arbitrary training sample x while its decoded image produces low confidence scores for each of the known classes: $$z^{*}=\min_{z}||z-E(x)||_{2}^{2}+\log\left(1+\sum_{i=1}^{K}e^{C_{K}(G(z))_{i}}\right),$$ $$(43)$$ where CK is the classifier and CK(G(z))iis the logit of the counterfactual image G(z) for class i. The generated UUCs are used in combination with KKCs to train a new classifier CK+1, which is used at the test time. ## 4.4 Reducing Network Agnostophobia (33): In applications such as object detection, there is usually a class called background. On the internet, a large ![26_image_0.png](26_image_0.png) number of samples can be crawled, which can be used as "background" for a specific task. This work employs background samples as an auxiliary KUC distribution to train a classifier. The training facilitates KUC to have small feature magnitudes while KKCs have larger magnitudes with a defined margin. Also, the entropy of the confidence layer is maximized for the background samples, which is equivalent to increasing the classifier's uncertainty for such inputs. The training employs a simple entropic open-set loss that maximizes the entropy of confidence scores, along with the objectosphere loss minimizing the L2 norm of final features. Fig. 22 shows the effect of each loss on the geometric location of each class's samples at the final layer. At the test time, the thresholding is based on Sc(x).||F(x)||, where F(x) is the representation from penultimate layer. Figure 22: The effect of different loss functions on the last layer of a classifier. The classifier is trained on the MNIST dataset. The background class is considered to be NIST letters (51). Black dots show samples from Devanagari (119) as UUCs, and the gray lines show the boundaries of different classes. (a) shows the last layer when the network is trained only with softmax loss. (b) shows similar setting to (a); however, background samples are used as a separate class. (c) shows the last layer when the network is trained with the objectosphere loss. The figures in the bottom are histograms of softmax probability values for samples of KKCs with green color and UUCs with the red one. The figure is taken from (33). ## 4.5 Class Conditioned Auto-Encoder For Open-Set Recognition (C2Ae) (117): The second premise behind employing AEs is used in this study, which states that abnormal test time samples should not be reconstructed as well as normal ones. Nevertheless, despite AD and ND, in OSR, training labels are used to increase AE abilities. Here, an AE is used as the meta-recognition function while its encoder plays the role of a classifier for the recognition task. Intuitively, this work wants the encoder to classify each passed sample correctly and provide embeddings by which the reconstruction of the original input is possible. Furthermore, it imposes other constraints to force encoder embeddings not to be easily converted to each other, e.g., by applying linear transformations, which prevents the AE from utilizing the learned features to reconstruct abnormal/unseen inputs. To this end, at first, the encoder that is a classifier is trained and fixed. Then for a given input Xi a match vector lm = lym i*∈ {−*1, 1} K where K is the number of classes is defined such that l is equal to 1 for the y ith element and −1 otherwise. Similarly, some none-match vectors lnm = ly nm jfor any random y nm j̸= yi sampled randomly from labels are considered. After that, two neural networks Hγ and Hβ with parameters Θγ and Θβ are defined to receive match and none-match vectors and produce some linear transformations by which the encoder embeddings are transformed. Finally, match transformations must be constructed perfectly; however, none-match ones are forced to have high reconstruction loss as follows: zi = F(Xi), γym i= Hγ(lym i ), γy nm i= Hγ(ly nm i) βym i= Hβ(lym i ), βy nm i= Hβ(ly nm i) zilm = γym i◦ zi + βym i , zilnm = γy nm i◦ zi + βy nm i Xˆ m i = G(zlm). Xˆ nm i = G(zlnm). (44) L m r = 1 N X N i=1 ||Xi − Xˆ m i ||1 L nm r = 1 N X N i=1 ||Xnm i − Xˆ nm i||1 $$(44)$$ $$(45)$$ where Xˆ nm is are sampled randomly based on randomly sampled none-match vectors, and the objective loss is: $$\min_{\Theta_{r},\Theta_{\beta},\Theta_{\mathcal{G}}}\alpha\cdot\mathcal{L}_{r}^{m}+(1-\alpha)\cdot\mathcal{L}_{r}^{nm}\tag{10}$$ The embedding transformation technique is called FiLM (125). This means that a given input would be reconstructed well only when the corresponding match vector is used. Therefore, for each test-time input, the reconstruction error under different match vectors is computed, and the rejection decision is made based on their minimum value. If the minimum value is greater than a threshold τ it is discarded; else, the encoder's output is assigned. To obtain τ in a principled way, the optimum threshold is found using EVT on the distribution of match and none-match reconstruction error values i.e ||Xi − Xˆ m i ||1 and ||Xi − Xˆ nm i||1 for each i and randomly sampled none-match vectors. Assuming the prior probability of observing unknown samples is pu, the probability of error as a function of threshold τ is shown in Eq. 46 where Gm and Gmn are the extreme value distributions of the match and none-match errors. $$\tau^{*}=\min_{\tau}P_{\rm error}(\tau)\tag{46}$$ $$=\min_{\tau}[(1-p_{u})*P_{m}(r\geq\tau)+p_{u}*P_{nm}(-r\leq-\tau)]$$ $$=\min_{\tau}[(1-p_{u})*(1-G_{m}(\tau))+p_{u}*(1-G_{nm}(\tau))]$$ ## 4.6 Deep Transfer Learning For Multiple Class Novelty Detection (Dtl) (121): This work also follows the idea of using a background dataset (called reference dataset). Similar to (33), DTL addresses the deficiencies of using softmax loss in OSR. A new loss function called *membership loss* (LM) is proposed. Specifically, each activation score value fi of the penultimate layer is normalized into [0, 1] using the sigmoid function. The normalized scores can be interpreted as the probability of the input image belonging to each individual class. Ideally, given the label y, f(x)i should be 1 when y = i and 0 otherwise. The loss function is defined as Eq. 47. $$L_{M}(x,y)=[1-\sigma(f(x)_{y})]^{2}+\lambda\cdot\frac{1}{c-1}\sum_{i=1,i\neq y}^{c}\left[\sigma(f(x)_{i})\right]^{2}\tag{47}$$ λ is a regularization parameter to control the relative weight given to each risk source, σ denotes the Sigmoid function, and c is the number of classes. Another technique for improving the detection performance is based on the *"globally negative filters"*. Filters that provide evidence for a particular class are considered as positive filters and vice versa. For pre-trained neural networks, it has been shown that only a small fraction of final feature maps are activated positively. ![28_image_0.png](28_image_0.png) Furthermore, some filters are always activated negatively, indicating irrelevance for all known classes. By discarding inputs that activate globally negative filters, a novel sample is less likely to produce high activation scores. To learn such filters for domain-specific tasks, DTL trains two parallel networks with shared weights up to the last layer—the first one solves a classification task on the reference dataset, and the second one solves the domain-specific classification tasks in combination with membership loss. If the reference and domain-specific datasets do not share much information, they provide negative filters for each other. Also, since the reference dataset consists of diverse classes, those learned filters can be considered globally negative filters. Finally, filters of the parallel network in combination with the confidence scores of the domain-specific classifier are used for novelty detection. Fig. 47 shows the overall network architecture. Figure 23: An overview of DTL method. Two parallel networks are trained to produce confidence scores and globally negative filters. The network above solves a simple classification task on the reference dataset, and the one below solves membership loss in combination with the domain-specific classification task. Test time detection is done by putting a threshold on the confidence of maximum class. The figure is taken from (121). ## 4.7 Classification-Reconstruction Learning For Open-Set Recognition (Crosr) (183): This work follows the similar idea as C2AE. In particular, CROSR employs an encoder network for classification and producing the latent vectors for reconstruction task. Importantly, the latent vector z used for the reconstruction task and penultimate layer y used for the classification task are not shared. The reason is that there is an excessive amount of information loss in the penultimate layer, which makes distinguishing between unknown and known samples hard. The overall procedure can be described as follows: $$\begin{array}{c}{{(y,z)=f(x),}}\\ {{p(C_{i}\mid x)=\mathrm{Softmax}_{i}(y),}}\\ {{\hat{x}=g(z)}}\end{array}$$ (48) $$\begin{array}{l}\mbox{(48)}\end{array}$$ . Moreover, to preserve information at different levels of abstraction, each layer of the encoder is compressed into a latent vector zi. The latent vector is then decoded to minimize the reconstruction error of the corresponding layer as follows: $x_{l+1}=f_{l}(x_{l})$, $z_{l}=h_{l}(x_{l})$, $\hat{x}_{l}=g_{l}(\hat{x}_{l+1}+\hat{h}_{l}(z_{l}))$ $$(49)$$ where fl and gl are layers of the encoder and decoder, respectively. hˆlis a reprojection to the original space of xl. The autoencoding structure is based on a ladder network (130). The final latent vector z is the concatenation of each zi. EVT is employed as in OpenMax (11). However, ![29_image_0.png](29_image_0.png) the distance is not only computed on the penultimate layer y but also on the latent vector z. Specifically, EVT is applied on the joint [y, z]. Test-time detection is performed similar to OpenMax. Fig. 24 shows an overview of the method. Figure 24: An overview of the proposed method compared to similar methods in the literature. As it is obvious, an encoder is used for the classification task, and the reconstruction task is done in a ladder network. The figure is taken from (183). ## 4.8 Generative-Discriminative Feature Representations For Open-Set Recognition (Gdfr) (123): Similar to CROSR, this proposed work trains a discriminative model in combination with a generative one. Discriminative approaches may lose important features that are utilitarian for distinguishing between seen and unseen samples. Generative modeling can provide complementary information. Similar to GT, GDFR employs SSL to improve the features of the discriminator. A shared network performs both the classification and SSL tasks, predicting the geometric transformations applied to the input. Moreover, a generative model such as AE is used, producing reconstructed outputs xˆ for a given input x. Then ![29_image_1.png](29_image_1.png) the collection of input-reconstruction pairs (x, xˆ) are passed to the discriminator network for classification and SSL tasks. The disparity between xˆ and x for unseen samples improves the discriminator network's detection power. Fig. 25 shows the method. Figure 25: An overview of GDFR. As it can be seen, a classification network is trained on both SSL and classification tasks to make the learned features richer. Also, to benefit from generative modeling, each input is passed to an AE, and further tasks are applied on the joined reconstructed and input images. The figure is taken from (123). ## 4.9 Conditional Gaussian Distribution Learning For Open Set Recognition (Cgdl) (162): The main idea of this research is very similar to CROSR. However, CGDL uses the probabilistic ladder network based on variational encoding and decoding (160). The overall encoding process for the lth layer is as follows: $x_{l}=\text{Conv}(x_{l-1})$ $h_{l}=\text{Flatten}(x_{l})$ $\mu_{l}=\text{Linear}(h_{l})$ $\sigma_{l}^{2}=\text{Softplus}(\text{Linear}(h_{l}))$, $$(50)$$ where "Conv" is a convolutional layer , "Flatten" is a linear layer to flatten the input data into 1-dimensional space, "Linear" is a single linear layer, and "Softplus" operation is log(1 + exp(·)). The final representation vector z is defined as µ + σ ⊙ ϵ where ϵ ∼ N(0, I), ⊙ is the element-wise product, and µ, σ are the outputs of the top layer L. Similarly, for the decoding process we have: $\hat{c}_{l+1}=$ Unflatten($\hat{z}_{l+1}$) $\hat{x}_{l+1}=$ ConvT($\hat{c}_{l+1}$) $\hat{h}_{l+1}=$ Flatten($\hat{x}_{l+1}$) $\hat{\mu}_{l}=$ Linear($\hat{h}_{l+1}$) $\hat{\sigma}_{l}^{2}=$ Softplus(Linear($\hat{h}_{l+1}$)) $q$-$\mu_{l}=\frac{\hat{\mu}_{l}+\hat{\sigma}_{l}^{-2}+\mu_{l}+\sigma_{l}^{-2}}{\hat{\sigma}_{l}^{-2}+\sigma_{l}^{-2}}$ $q$-$\sigma_{l}^{2}=\frac{1}{\hat{\sigma}_{l}^{-2}+\sigma_{l}^{-2}}$ $\hat{z}_{l}=q$-$\mu_{l}+q$-$\sigma_{l}^{2}\circ\epsilon$. During training, samples are passed into the encoder to estimate µ and σ for each layer. The mean and variance can be used as priors for the corresponding decoding layer. The final embedding z of the encoder's top layer is used for the joint classification task and decoding process. The distribution of the encoder's final layer is forced to be similar to different multivariate Gaussian p k θ (z) = N(z; µk, I), where k is the index of known classes and µk is obtained by a fully-connected layer which maps the one-hot encoding of the input's label to the latent space. Each layer of the decoder is a Gaussian probability distribution in which a prior of its mean and variance is added by the corresponding layer of encoder statistics. Putting it together, the training objective function is as follows: $$\mathcal{L}_{KL}=-\frac{1}{L}\Big{[}\mathrm{KL}\big{(}q_{\phi}(z\mid x,k)||p_{\theta}^{k}(z)\big{)}$$ $$+\sum_{l=1}^{K-1}\mathrm{KL}\big{(}q_{\theta}(\hat{x}_{l}|\hat{x}_{l+1},x)||q_{\theta}(\hat{x}_{l}|\hat{x}_{l+1})\big{)}\Big{]}\tag{1}$$ $$\mathcal{L}_{r}=||x-\hat{x}||_{1}$$ $$\mathcal{L}=-(\mathcal{L}_{r}+\lambda\cdot\mathcal{L}_{c}+\beta\cdot\mathcal{L}_{KL}),$$ $$\left(52\right)$$ where Lc is the classification error and $$\begin{array}{l}{{q_{\theta}(\hat{x}_{l}|\hat{x}_{l+1},x)=N(\hat{x}_{l};q{\mathrm{-}}\mu_{l},q{\mathrm{-}}\sigma_{l}^{2})}}\\ {{q_{\theta}(\hat{x}_{l}|\hat{x}_{l+1})=N(\hat{x}_{l};\hat{\mu}_{l},\hat{\sigma}_{l}^{2}).}}\end{array}$$ $$(53)$$ At the test time, both the probability of final latent space with respect to p k θ (z) and reconstruction error for each test input are used for detection. If an input is preserved, the classification output is considered as ![31_image_0.png](31_image_0.png) the true class. As Fig. 26 shows, the latent vector z is considered as a prior in which the distribution of all classes should have a low KL distance. This is similar to the role of N(0, I) in the baseline VAE. Figure 26: An overview of CGDL method. Compared to CROSR a probabilistic ladder network is used instead of a deterministic one; however, the classification task is similar. The probability distributions of the decoder network contain a prior from the corresponding layer statistics of the encoder network. Unlike, VAE in which there is a prior N(0, I) on the latent space; here, the prior is learned, and each class distribution is conducted to have a low KL distance with it. The figure is taken from (162). ## 4.10 Hybrid Models For Open Set Recognition (192): In this work, a classification network is trained in combination with a flow-based generative model. Generative models in the pixel-level space may not produce discernible results for unseen and seen samples, and they are not robust against semantically irrelevant noises. To address this issue, a flow-based model is applied to the feature representation space instead of pixel-level space (see Fig. 27). The reason for using flow-based models is their handy and comprehensive theoretical abilities. The training loss is a simple cross-entropy loss in combination with negative log-likelihood used for the training of the flow-based model. At the test time, the thresholding is applied to the likelihood of each input, and if it is preserved, the classifier's output is assigned as the in-class label. ## 4.11 Learning Open Set Network With Discriminative Reciprocal Points (Rpl) (22): Similar to Mem-AE, the idea of prototype features is used in this work. The goal is to learn a set of prototypes or reciprocal points, which can assign labels to each input based on the distance w.r.t each prototype. RPL helps the model better adjust the boundaries of different classes compared to softmax or OpenMax, and decreases the risk factor. Initially, random reciprocal points are chosen. The location of reciprocal points and the weights of a classifier network are adjusted to minimize the classification loss. This forces the network to locate features of each class near some specific reciprocal points to yield the desired class boundary using at ![32_image_0.png](32_image_0.png) Figure 27: An overview of the hybrid models for open set recognition. As it can be seen, a classification network is trained in combination with a generative flow-based model. At the test time, the probability of the latent vector is considered as the criterion of rejection, and the classifier assigns a label if the input is not rejected. The figure is taken from (192). least a set of points. To decrease the risk factor, samples of each class are forced to have a margin with respect to their reciprocal points, which is learned during the training process. Eq. 54 shows the mathematical formulation: $$d(f_{\theta(x)},P^{k})=\frac{1}{M}\sum_{i=1}^{M}||f_{\theta}(x)-p_{i}^{k}||_{2}^{2}$$ $$p(y=k|x,f_{\theta},P)=\frac{e^{\gamma d(f_{\theta}(x),P^{k})}}{\sum_{i=1}^{N}e^{\gamma d(f_{\theta}(x),P^{i})}}$$ $$L_{o}=\frac{1}{M}\sum_{i=1}^{M}||d(f_{\theta}(x),p_{i}^{k})-R^{k}||_{2}^{2},$$ $$\tag{54}$$ . where P kis the set of kth class reciprocal points, p k i is a reciprocal point, M is the number of reciprocal points for each class, N is the number of classes, Rkis the margin for each class, and γ is a tunable hyper-parameter. ## 4.12 A Loss For Distance-Based Open Set Recognition (Cac) (104): The idea of this work is similar to RPL and GOAD. For each class, CAC defines an anchor vector of dimension N—the number of classes. For each vector, the element corresponding to the class label is 1 and 0 otherwise. For each training sample, the learning process forces its logit scores to be in a compact ball w.r.t true class anchor vector while having a large distance from anchors of other classes. CAC can also be seen as a multi-class DSVDD. The training loss function is as follows: $$\mathbf{d}=e(\mathbf{z},\mathbf{C})=(||\mathbf{z}-\mathbf{c}_{1}||_{2},...,||\mathbf{z}-\mathbf{c}_{N}||_{2})$$ $$\mathcal{L}_{T}(\mathbf{x},y)=\log(1+\sum_{j\neq y}^{N}e^{d_{y}-d_{j}})\tag{55}$$ $$\mathcal{L}_{A}(\mathbf{x},y)=d_{y}=||f(x)-\mathbf{c}_{y}||_{2}$$ $$\mathcal{L}_{CAC}(\mathbf{x},y)=\mathcal{L}_{T}(\mathbf{x},y)+\lambda\cdot\mathcal{L}_{A}(\mathbf{x},y),$$ where f is a feature extractor projecting an input image x to a vector of class logits z = f(x), N is the number of known classes, and C is a non-trainable parameter representing a set of class center points (c1*, . . . ,* cN ), one for each of the N known classes. ## 4.13 Few-Shot Open-Set Recognition Using Meta-Learning (Peeler) (93): In this work, the idea of meta-learning is combined with open set recognition. Meta-learning enable learning general features that can be easily adapted to any unseen task. Meta-learning is also called learning to learn. Due to the ability to work in few-shot settings, meta-learning can be useful in low data regimes. At the meta iteration i, meta-model h is initialized with the one produced by the previous meta-iteration. Let (S s i , Ts i ) Ns i=1 be a meta-training dataset with Ns number of training problems, two steps are performed. First, an estimate h ′ of the optimal model for the training set S s i is produced. Then the test set T s i is used for finding a model with a suitable loss function L as the Eq. 56 shows. $$h^{*}=\arg\min_{h}\sum_{(x_{k},y_{k})\in T_{i}^{*}}L[y_{k},h^{\prime}(x_{k})]\tag{1}$$ To adopt meta-learning in OSR, a classification loss in combination with open-set loss is used. Moreover, the test set T s i is augmented with some unseen samples. The overall loss function is defined as Eq. 57, where Lc is a simple classification loss, and Lo maximizes the entropy of unknown samples on the known classes. $$h^{*}=\arg\min_{h}\biggl{\{}\sum_{(x_{k},y_{k})\in C^{*}_{i}\in T^{*}_{i}|y_{k}}L_{c}[y_{k},h^{\prime}(x_{k})]\tag{5}$$ $$+\lambda\cdot\sum_{(x_{k},y_{k})\in T^{*}_{i}|y_{k}\in C^{*}_{i}}L_{o}[h^{\prime}(x_{k})]\biggr{\}}$$ $$(56)$$ $$(57)$$ At the test time, the average features of correctly classified samples is obtained as a prototype point and used for the rejection of unseen samples. ## 4.14 Learning Placeholders For Open-Set Recognition (Proser) (196): This work tries to train a classifier ( ˆf(x)) that can distinguish between target class and non-target classes. A dummy classifier is added to the softmax layer of the model with a shared feature extractor. Then it is forced to have the second maximum value for the correctly classified samples. When the classifier encounters novel inputs, the dummy classifier produces high values since all known classes are non-targets. Dummy classifier can be seen as the instance-dependent threshold which can well fit every known class. The loss function is defined as the Eq. 58, where ˆf((x)/y is meant to make the most probable class zero. $$L_{1}=\sum_{(x,y)\in D_{\rm tr}}l(\hat{f}(x),y)+\beta l(\hat{f}((x)/y,k+1)\tag{58}$$ Moreover, the mixup technique (169) is added to the loss function to boost unseen samples detection. The mixup representations are introduced to approximate the unseen sample's distribution and should be classified as the dummy class k + 1. Finally, rejecting each test time sample is done based on the probability of the dummy class. ## 4.15 Counterfactual Zero-Shot And Open-Set Visual Recognition (187): This work attempts to make abnormal samples in a counter-factual faithful way. As the paper mentions, most of the generative approaches such as G-OpenMax do not produce desired fake samples, the distributions of which do not resemble real distribution of unseen samples. To this end, a β-VAE (66) is used to make sample attribute variable Z and the class attribute variable Y independent. The β-VAE loss function is similar to simple VAE; however, the KL term is induced by a coefficient β. This is shown to be highly effective in learning a disentangled sample attribute Z (66). For disentangling Y from Z, the proposed approach makes counter-factual samples by changing the variable Y to have a large distance with the given input x in spite of samples that are generated by changing the variable Z. To make counter-factual samples faithful, a Wasserstein GAN (4) loss is used for a discriminator D(*X, Y* ), which verifies the correspondence between the generated counter-factual image and the assigned label. At last, generated samples can be used to boost the performance of any OSR problem. ## 4.16 Opengan: Open-Set Recognition Via Open Data Generation (82): This work argues that using an easily accessible auxiliary outlier dataset can improve the open-set recognition performance as mentioned in (137) and (58). However, the auxiliary dataset might poorly generalize to diverse open-set data that are likely to be faced. Therefore, close to (189) a GAN-based approach is employed to generate fake samples that are supposed to mimic the distribution of the open-set data. Although, as mentioned in (189), GAN-based methods suffer from instability when used in the one-class training setup, in the OSR setup, labeled closed-set data is available which can be used to find the best stopping point during training. Furthermore, fake samples can be generated according to the feature space of closed-set samples passed through a pre-trained K-way classification model, which significantly reduces the dimensionality of the problem and helps stability. This work shows that a well-selected GAN-discriminator (D) using a validation set achieves SOTA on OSR. Fig. 28 shows that at first, a closed-set classifier is trained on the given training dataset and is kept fixed afterward. Then, a GAN-based framework is trained on the embeddings of the closed-set and outlier images, which forces the generator (G) to produce semantically fake outputs that are hard to be detected as fake for the discriminator. Eq. 59 shows the mathematical formulation of the loss function, where λG controls the contribution of generated fake samples by G, and λo adjusts the effect of auxiliary outlier dataset. max ![34_image_0.png](34_image_0.png) Dmin G $$\left[\,\log({\mathcal{D}}(x))\,\right]+\lambda_{o}\cdot\mathbb{E}_{\hat{x}\sim P_{o p e n}}\left[\,\log(1-{\mathcal{D}}({\hat{x}})\right]+\lambda_{G}\cdot\mathbb{E}_{z\sim N}\left[\,\log(1-{\mathcal{D}}(G(z))\right]\,\right]$$ $\left(59\right)$. -log(1 − D(G(z))(59) Figure 28: An overview of the OpenGAN method. Unlike pure GAN-based methods or Outlier Exposure, fake samples are generated in the latent space while a discriminator is trained to distinguish between the closed-set samples' embedding and outlier inputs. Closed-set embeddings are produced using a trained K-way closed-set classification model that is kept frozen afterward. In this way, the fake generator G is pushed towards making features, that match outlier images' embeddings. The figure is taken from (82). ## 4.17 Open-Set Recognition: A Good Closed-Set Classifier Is All You Need? (167): This work provides a comprehensive investigation through the correlation of closed-set accuracy and open-set performance of different deep architectures. It is shown that not only the closed-set accuracy of a model is highly correlated with its OSR performance, but also this correlation holds across a variety of loss objectives and architectures. This is surprising because stronger closed-set classifiers are more likely to overfit to the training classes and perform poorly in OSR. Fig. 29 illustrates the mentioned relationship for different architectures and within the ResNet family architectures. Another contribution of this work is providing a Semantic Shift Benchmark (SSB), which unlike common approaches of utilizing a single test set for the evaluation, makes two 'Easy' and 'Hard' sets based on the semantic similarity of the open-set categories to the training classes. The ability of a model to detect semantic novelty as opposed to the low-level distributional shift is better captured in this manner. ## 5 Out-Of-Distribution Detection OOD detection aims to identify test-times samples that are semantically different from the training data categories, and therefore should not be predicted into the known classes. For instance, one could train the model on CIFAR-10 (as in-distribution data), and then evaluate on CIFAR-100 (113) as an out-of-distribution ![35_image_0.png](35_image_0.png) Figure 29: (a) The performance of different deep model architectures on the ImageNet dataset. The ImageNet21K-P dataset is used to generate 'Easy' and 'Hard' OSR splits. (b) The performance of different ResNet families on the ImageNet. The figure is taken from (167). dataset, as CIFAR-10 and CIFAR-100 have mutually exclusive classes. In the multi-class setting, the problem of OOD detection is canonical to OSR: accurately classifying samples from the known classes while detecting the unknowns. However, OOD detection encompasses a broader spectrum of learning tasks (e.g., multi-label classification) and solution space (e.g., density estimation without classification). Some approaches relax the constraints imposed by OSR and achieve strong performance. The following reviews some of recent OOD detection works and their differences. ## 5.1 **A Baseline For Detecting Misclassified And Out-Of-Distribution Examples In Neural Networks (56):** This work coined "out-of-distribution detection" and showed how to evaluate deep learning out-of-distribution detectors. As previous anomaly detection works for deep classifiers often had low-quality or proprietary datasets, existing datasets were re-purposed to create out-of-distribution datasets, enabling easier evaluation. This approach proposes using the maximum softmax probability (MSP) to detect out-of-distribution samples, namely maxk p(y = k | x). A test sample with a large MSP score is detected as an in-distribution (ID) example rather than out-of-distribution (OOD) example. This showed that a simple maximum probability score can be useful for detection in vision, natural language processing, and speech recognition settings, but there is much room for improvement. It also showed p(y | x) models can be useful for out-of-distribution detection and that p(x) models are not necessarily needed. Until now, it still serves as a general-purpose baseline that is nontrivial to surpass. Concurrent OSR work proposed additional modifications to softmax probabilities for detection (11). ## 5.2 **Enhancing The Reliability Of Out-Of-Distribution Image Detection In Neural Networks (Odin) (92):** In this work, a technique called temperature scaling was employed. Although it has been used in other domains such as knowledge distillation (67), the main novelty of this work is showing the usefulness of this technique in the OOD domain. In temperature scaling, the softmax score is computed as in Eq. 60. OOD samples are then detected at the test time based on thresholding the maximum class probability. This simple approach, in combination with adding a small controlled noise, has shown significant improvement compared to the baseline approach MSP. ODIN further shows that adding one step gradient to the inputs in the direction of improving the maximum score has more effect on the in-class samples, and pushes them to have larger margin with the OOD samples. $$S_{i}(x;T)={\frac{\exp(f_{i}(x)/T)}{\sum_{j=1}^{N}\exp(f_{j}(x)/T)}}$$ $$({\mathrm{60}})$$ The paper also provided mathematical explanation for the effect of temperature scaling on out-of-distribution detection. This can be seen in the Taylor approximation of the softmax score (expanded around the largest logit output fyˆ(x)): $$S_{i}(x;T)=\frac{\exp(f_{i}(x)/T)}{\sum_{j=1}^{N}\exp(f_{j}(x)/T)}$$ $$=\frac{1}{\sum_{j=1}^{N}\exp(\frac{f_{j}(x)-f_{i}(x)}{T})}$$ $$\approx\frac{1}{N-\frac{1}{T}\sum_{j=1}^{N}[f_{\hat{y}}(x)-f_{j}(x)]+\frac{1}{2T^{2}}\sum_{j=1}^{N}[f_{\hat{y}}(x)-f_{j}(x)]^{2}}$$ $$(61)$$ A sufficiently large temperature T has a strong smoothing effect that transforms the softmax score back to the logit space—which effectively distinguishes ID vs. OOD. In particular, the ID/OOD separability is determined by the U1 =PN j=1,j̸=ˆy [fyˆ(x) − fj (x)] and U2 =PN j=1,j̸=ˆy [fyˆ(x) − fj (x)]2. The former measures the extent to which the largest unnormalized output of the neural network deviates from the remaining outputs; while the latter measures the extent to which the remaining smaller outputs deviate from each other. For the in-class samples, U1 and E[U2 | U1] are higher than OOD ones. Mathematically and empirically, ODIN is not sensitive to T when it is large enough to satisfy the Taylor approximation. For example, the paper shows that simply applying T = 1000 can yield effective performance boost without hyper-parameter tuning. In general, using temperature scaling can improve the separability more significantly than input preprocessing. Note that ODIN differs from confidence calibration, where a much milder T is employed. While calibration focuses on representing the true correctness likelihood of ID data only, the ODIN score is designed to maximize the gap between ID and OOD data and may no longer be meaningful from a predictive confidence standpoint. As seen in Fig. 30, the ODIN scores are closer to 1/N where N is the number of classes. ![36_image_0.png](36_image_0.png) Figure 30: An overview of ODIN, a post-hoc method that uses temperature scaling and input perturbation to amplify the ID/OOD separability. The figure is taken from (92). ## 5.3 A Simple Unified Framework For Detecting Out-Of-Distribution Samples And Adversarial Attacks (89): This work was inspired from the idea of Linear Discriminant Analysis (LDA) in which P(X = x | Y = y) is considered to be a multivariate Gaussian distribution. In order for P(Y = y | X = x) to be similar to a softmax form, it is assumed that the feature space of the penultimate layer follows the Gaussian distribution. Therefore, a mean and variance vector is simply estimated from features of each class, and a multivariate Gaussian is fit to them. In order to check validity of the assumptions, it uses the Mahalanobis distance of the test time images to perform the classification instead of the softmax function. Surprisingly, the results are comparable or better than softmax, which supports the assumptions. It performs OOD detection using Mahalanobis distance to the closest class-conditional Gaussian distribution. Furthermore, to improve the performance, features in different layers are ensembled and a small controlled noise is added to test samples as shown in Eq. 62, where M(x) is the Mahalanobis distance with mean of the closest class-conditional Gaussian distribution. A similar idea has been discussed earlier in (134). $$(62)$$ $${\hat{x}}=x+\epsilon\cdot\operatorname{sign}(\nabla_{x}M(x))$$ xˆ = x + ϵ · sign(∇xM(x)) (62) ## 5.4 Predictive Uncertainty Estimation Via Prior Networks (Dpn) (102): This work discusses three different sources of uncertainty: (1) data uncertainty, (2) distributional uncertainty, and (3) model uncertainty. The importance of breaking down the final uncertainty into these terms was discussed. For instance, model uncertainty might happen because of the model's lack of capacity to approximate the given distribution well. On the other hand, data uncertainty might happen because of the intrinsic intersection of similar classes. For instance, classifying between different kinds of dogs has more data uncertainty than solving a classification problem with completely separate classes. Distributional uncertainty is related to the problem of AD, ND, OSR, and OOD detection. The goal of this work was to estimate the distributional uncertainty for each input and compare it with the data and model uncertainties. Data uncertainty P(wc | x ∗, θ) can be defined by the posterior distribution over class labels given a set of parameters θ. Model uncertainty P(θ | D) is defined by the posterior distribution over the parameter given data D. These two types of uncertainties could be combined to give rise to the distributional uncertainty, as shown in Eq. 63. $$P(w_{c}\mid x^{*},D)=\int P(w_{c}\mid x^{*},\theta)P(\theta\mid D)\,d\theta$$ As computing the integral is not tractable, the above formula is usually converted into Eq. 64, where q(θ) is an approximation of P(θ | D). $$P(w_{c}\mid x^{*},D)\approx{\frac{1}{M}}\sum_{i=1}^{M}P(w_{c}\mid x^{*},\theta^{i}),\theta^{i}\sim q(\theta)$$ ![37_image_0.png](37_image_0.png) $$(63)$$ $$(64)$$ Each P(wc | x ∗, D) can then be seen as a categorical distribution located on a simplex, as shown in Fig. 31. Figure 31: The figure shows a distribution over the categorical distributions for modeling uncertainty using both model uncertainty and data uncertainty. The figure is taken from (102). Now to extract distributional uncertainty from the model uncertainty, Eq. 64 is decomposed into the following Eq. 65. $$P(w_{c}\mid x^{*},D)=\iint P(w_{c}\mid\mu)P(\mu\mid x^{*},\theta)P(\theta\mid D)\,d\mu\,d\theta,$$ ![37_image_1.png](37_image_1.png) where P(wc | µ) is a categorical distribution given a realization from a Dirichlet distribution, P(µ | x ∗, θ) is the Dirichlet distribution given the input and model parameters θ, and P(θ | D) is the distribution over model parameters given the dataset D. For simplicity, in this work P(θ | D) is equal to δ(θ − ˆθ) and is produced by the output of a deep network. Therefore P(µ | x ∗, θ) = P(µ | x ∗, ˆθ) = Dir(µ | α), and α = f(x ∗, ˆθ), where f(*., .*) is represented by a deep neural network. $$(65)$$ At the training time, the Dirichlet Prior Network (DPN) is expected to yield a flat distribution over the simplex for OOD samples, indicating large uncertainty in the mapping from x to y. Some out-of-distribution data is used to minimize the KL distance of Dir(µ | α) and the flat Dirichlet distribution. For the in-class samples, the KL divergence between Dir(µ | α) and a sharp, sparse Dirichlet distribution is minimized. The objective Dirichlet distributions are obtained by pre-setting their parameters during training process. During test time, different criterion such as max probability, last layer's entropy (H(.)), and distributional uncertainty as in Eq. 66 are used for OOD detection. $$I(y,\mu\mid x^{*},D)=H[E_{P(\mu|x^{*},D)}[P(y\mid\mu)]]$$ $$-E_{P(\mu|x^{*},D)}[H[P(y\mid\mu)]]$$ ## 5.5 Confidence-Calibrated Classifiers For Detecting Out-Of-Distribution Samples (88): This work attempted to maximize the entropy of confidence scores for OOD samples, similar to (33). Similar to (111), it generates OOD samples by jointly training a GAN and a classifier. As shown in Eq. 67, the first term solves a classification task on in-class samples, the second term uses KL divergence to make the confidence score distribution of generated OOD samples uniform. The remainder terms train the GAN on the in-class samples. Note that, the GAN is forced to generate high-quality OOD samples that produce high uncertainty when they are passed to the classifier. Therefore, the generated samples are located on the boundaries of in-class and outlier distributions. The paper also shows that leveraging on-boundary in-class samples significantly improves its confidence calibration. $$\min_{G}\max_{D}\min_{\theta}E_{P_{\rm in}(\hat{x},\hat{y})}[-\log P_{\theta}(y=\hat{y}\mid\hat{x})]$$ $$+\beta E_{P_{G}(x)}[{\rm KL}(U(y)||P_{\theta}(y\mid x))]+E_{P_{\rm in}(\hat{x})}[\log D(x)]\tag{67}$$ $$+E_{P_{G}(x)}[\log(1-D(x))]$$ $$(66)$$ Pin denotes the in-class distribution and Pθ(y | x) is a classifier trained on the dataset drawn from Pin(*x, y*). Test time OOD detection is done based on thresholding of the maximum softmax value. ## 5.6 Deep Anomaly Detection With Outlier Exposure (Oe) (58): This work introduced Outlier Exposure (OE) and reported extensive experiments on its usefulness for various settings. When applied to classifiers, the Outlier Exposure loss encourages models to output a uniform softmax distribution on outliers, following (88). More generally, the Outlier Exposure objective is $$\mathbb{E}_{(x,y)\sim{\mathcal{D}}_{\mathrm{in}}}[{\mathcal{L}}(f(x),y)+\lambda\cdot\mathbb{E}_{x^{\prime}\sim{\mathcal{D}}_{\mathrm{out}}^{\mathrm{OE}}}[{\mathcal{L}}_{\mathrm{OE}}(f(x^{\prime}),f(x),y)]],$$ assuming a model f, the original learning objective L; when labeled data is not available, y can be ignored. Models trained with this objective can have their maximum softmax probabilities (56) better separate in- and out-of-distribution examples. To create DOE out, data unlike the training data will need to be scraped or curated or downloaded. Samples from DOE out are gathered from already existing and available datasets that might not be directly related to the task-specific objective function; however, they can significantly improve the performance because they contain many diverse variations. Concurrent work (33) explores a similar intuition for small-scale image classification, while Outlier Exposure shows how to improve OOD detection for density estimation models, natural language processing models, and small- and large-scale image classifiers. ## 5.7 Using Self-Supervised Learning Can Improve Model Robustness And Uncertainty (60): This work investigated the benefits of training supervised learning tasks in combination with SSL methods in improving robustness of classifiers against simple distributional shifts and OOD detection tasks. To do so, an auxiliary rotation prediction was added to a simple supervised classification. The work measures robustness against simple corruptions such as Gaussian noise, shot noise, blurring, zooming, fogging etc. It has been observed that although auxiliary SSL tasks do not improve the classification accuracy, the model's robustness and detection abilities are significantly improved. Additionally, when the total loss function is trained in an adversarially robust way, the robust accuracy is improved. Finally, the method is tested in the ND setting using rotation prediction, and horizontal and vertical translation prediction similar but simpler than GT or GOAD. They also test in the setting of multiclass classification setting and find that auxiliary self-supervised learning objectives improves the maximum softmax probability detector (56). In addition, similar to (33) and (88), they attempted to make distribution of the confidence layer for some background or outlier samples uniform. Outliers are selected from other accessible datasets, as in Outlier Exposure (58). ## 5.8 Unsupervised Out-Of-Distribution Detection By Maximum Classifier Discrepancy (185): This work relies on a surprising fact—two classifiers trained with different random initializations can act ![39_image_0.png](39_image_0.png) differently on unseen test time samples at their confidence layers. Motivated by this, the work attempts to increase the discrepancy on unseen samples and reduce the discrepancy on seen ones. The discrepancy loss is the difference between the first classifier's last layer entropy and that of the second one. This forces the classifiers to have the same confidence scores for in-class inputs, yet increases their discrepancy for the others. Fig. 32 shows the overall architecture. First, two classifiers are trained on the in-class samples and are encouraged to produce the same confidence scores. Second, an unlabeled dataset containing both OOD and in-class data is employed to maximize their discrepancy on outliers while preserving their consistency on inliers. Figure 32: The overall architecture of (185). As it can be seen, at the first step, both of the classifiers are trained. Then, at the next step, an auxiliary discrepancy loss is added to the supervised classification to adjust the boundaries of in-class and OOD samples. The figure is taken from (185). ## 5.9 **Why Relu Networks Yield High-Confidence Predictions Far Away From The Training Data (54):** This work proved that ReLU networks produce piece-wise affine functions; therefore, they can be written as f(x) = V lx + a l on the polytope Q(x) as follows: $\Gamma_{l,i}=\{z\in R^{d}\mid\Delta^{l}(x)(V_{i}^{l}z+a_{i}^{l}\geq0)$ $Q(x)=\cap_{l=1,...,L}\cap_{i=1,...,n_{l}}\Gamma_{l,i}$, $$(68)$$ where ∆(l)is a diagonal matrice defined elementwise as: $$\Delta^{(l)}(x)_{i j}=\left\{\begin{array}{l l}{{\mathrm{sign}\left(f_{i}^{(l)}(x)\right)}}&{{\mathrm{if}\ i=j,}}\\ {{0}}&{{\mathrm{else.}}}\end{array}\right.,$$ , (69) $$(69)$$ nl and L are the number of hidden units in the lth layer and the total number of layers, respectively. The following theorem proves the deficiency of ReLU networks. Theorem. 1 Let Rd = ∪ R l=1Ql and f(x) = V lx + a l be the piecewise affine representation of the output of a ReLU network on Ql. Suppose that V l does not contain identical rows for all l = 1*, ..., R* Then for almost any x ∈ Rd and ϵ ≥ 0 there exists an α and a class k ∈ {1*, ..., K*} such that for z = αx it holds $$\frac{e^{f_{k}(z)}}{\sum_{r=1}^{K}e^{f_{r}(z)}}\geq1-\epsilon\tag{70}$$ The equation goes to 1 if α → ∞. From this, we can imply that for ReLU networks there exist infinitely many inputs which yield high confidence predictions. Note that arbitrarily high confidence prediction can not be obtained due to the bounded domain of inputs. In order to relieve this problem, a technique that is called confidence enhancing data augmentation is used, as the Eq. 71 shows. $$\frac{1}{N}\sum_{i=1}^{N}L_{\rm CE}(y_{i},f(x_{i}))+\lambda\cdot E[L_{p_{\rm out}}(f,Z)]\tag{71}$$ $$L_{p_{\rm out}}=\max_{l=1,\ldots,K}\log\left(\frac{e^{fl_{l}(z)}}{\sum_{k=1}^{K}e^{fl_{k}(z)}}\right)$$ where pout and pin are in-class and out-of-distribution distributions respectively, which we are certain that the set of the intersection of their supports has zero or close to zero probability mass. An example of such an out-distribution pout would be the uniform distribution on [0, 1]w×h or other noise distributions. The above training objective needs many samples to enforce low confidence on the entire out-distribution, an alternative technique called adversarial confidence enhancing training (ACET) is used. ACET uses the idea of adversarial robust training to not only minimize the objective function at each point but also the worst case in a neighborhood of the point as Eq. 72 shows. $$\frac{1}{N}\sum_{i=1}^{N}L_{\mathrm{CE}}(y_{i},f(x_{i}))+\lambda\cdot E[\max_{||u-Z||_{p}\leq\epsilon}L_{p_{\mathrm{out}}}(f,u)]$$ $$\left(72\right)$$ At the test time, a thresholded confidence score is used to distinguish between inliers and outliers. ## 5.10 Do Deep Generative Models Know What They Don'T Know? (110): This work shows that generative models surprisingly assign higher likelihood scores to outliers. This holds for VAEs, auto-regressive models, and different kinds of flow-based methods. In the generative modeling, usually, a parametric model on the data distribution is assumed as pθ(x). Then, it finds the best θ that minimizes the KL distance between the true but unknown distribution p ∗ and p, which is equal to maximizing the likelihood of pθ(x) on the input distribution, as the Eq. 73 shows. $${\rm KL}[p^{*}||p_{\theta}(x)]=\int p^{*}(x)\log\frac{p^{*}(x)}{p_{\theta}(x)}\,dx\approx-\frac{1}{N}\log p_{\theta}(X)-H[p^{*}]\tag{73}$$ Also, assuming the existence of a latent space Z the integrals can be written as Eq. 74 where Z and X are hidden and input variables, respectively. Let f be a diffeomorphism from the data space to a latent space, which commonly happens in flow-based modelings, where | ∂f ∂x | is known as the volume element. $$\int_{z}p_{z}(z)\,dz=\int_{x}p_{z}(f(x))\left|\frac{\partial f}{\partial x}\right|\,dx=\int_{x}p_{x}(x)\,dx\tag{74}$$ The parameters of p can be decomposed as θ = {*ϕ, ψ*} with ϕ being the diffeomorphism's parameters, i.e. f(x; ϕ), and ψ being the auxiliary distribution's parameters, i.e. p(z; ψ). Then the objective function can be written as Eq. 75. $\theta^{*}=\arg\max_{\theta}\log p_{\theta}(X)=$ $$\arg\max_{\phi,\psi}\sum_{i=1}^{N}\log p_{z}(f(x_{n},\phi),\psi)+\log\left|\frac{\partial f_{\phi}}{\partial x_{n}}\right|\tag{1}$$ $$\left(75\right)$$ An interesting aspect of the objective function above is that it encourages the function f to have high sensitivity to small changes of input samples xn. It is mentioned in the paper that if we plot the effect of each term separately, the first term shows the desired behavior for inliers and outliers, but the second term causes the problem. Changing f to constant-volume (CV) transformations (34) can alleviate the problem, but not entirely. Finally, using the second-order expansion of the log-likelihood around an interior point x0, it has been shown that the assigned likelihoods have a direct relation to the model curvature and data's second moment. Therefore, the problem of generative models might be fundamental. Eq. 76 shows the details of expansion where T r means trace operation. $$\begin{array}{l}{{0\leq E_{q}[\log p(x,\theta)]-E_{p^{*}}[\log p(x,\theta)]\approx}}\\ {{\nabla_{x_{0}}\log p(x_{0},\theta)^{T}(E_{q}[x]-E_{p^{*}}[x])}}\\ {{+\frac{1}{2}\mathrm{Tr}\{\nabla_{x_{0}}^{2}\log p(x_{0},\theta)(\Sigma_{q}-\Sigma_{p^{*}})\}}}\end{array}$$ $$(76)$$ ## 5.11 Likelihood Ratios For Out-Of-Distribution Detection (132): This paper employs likelihood ratio to alleviate the problem of OOD detection in generative models. The key idea is to model the background (XB) and foreground (XF ) information separately. Intuitively, background information is assumed to be less harmed than foreground information when semantically irrelevant information are added to the input distribution. Therefore, two autoregressive models are trained on noisy and original input distribution, and their likelihood ratio is defined as Eq. 77. $$\begin{array}{l}{{\mathrm{LLR}({\bf x})=\log\frac{p_{\theta}(X)}{p_{\theta_{0}}(X)}=\log\frac{p_{\theta}(X_{B})p_{\theta}(X_{F})}{p_{\theta_{0}}(X_{B})p_{\theta_{0}}(X_{F})}}}\\ {{\log\frac{p_{\theta}(X_{F})}{p_{\theta_{0}}(X_{F})}=\log p_{\theta}(X_{F})-\log p_{\theta_{0}}(X_{F})}}\end{array}$$ $$\left(77\right)$$ pθ(.) is the model trained on the in-distribution data, and pθ0 (.) is the background model trying to estimate the general background statistics. At the test time, a thresholding method is used on the likelihood ratio score. ## 5.12 Generalized Odin (68): As an extension to ODIN (92), this work proposes a specialized network to learn temperature scaling and a strategy to choose perturbation magnitude. G-ODIN defines an explicit binary domain variable d ∈ {din, dout}, representing whether the input x is inlier (i.e, x ∼ pin). The posterior distribution can be decomposed into p(y | din, x) = p(y,din|x) p(din|x) . Note that in this formulation, the reason for assigning overconfident scores to outliers seems more obvious, since the small values of p(*y, d*in | x) and p(din | x) produce the high value of p(y | din, x). ![42_image_0.png](42_image_0.png) Therefore, they are decomposed and estimated using different heads of a shared feature extractor network as hi(x) and g(x) for p(y | din, x) and p(din | x) respectively. This structure is called dividend/divisor and the logit fi(x) for class i can be written as hi(x) g(x) . The objective loss function is a simple cross entropy, similar to previous approaches. Note that the loss can be minimized by increasing hi(x) or decreasing g(x). For instance, when the data is not in the high-density areas of in-distribution, hi(x) might be small; therefore, g(x) is forced to be small to minimize the objective function. In the other case, g(x) are encouraged to have larger values. Therefore, they approximate the role of aforementioned distributions p(y | din, x) and p(din | x). At the test time maxi hi(x) or g(x) are used. Fig. 33 shows an overview of the method. Figure 33: The overall architecture of Generalized ODIN. As evident, different heads are applied on the penultimate layer to model the mentioned distributions. The figure is taken from (68). ## 5.13 Background Data Resampling For Outlier-Aware Classification (91): As mentioned before, in AD, ND, OSR, and OOD detection, some methods use a background or outlier dataset to boost their performances. However, to avoid different kinds of biases, the size of auxiliary datasets becomes important. In this work, a re-sampling technique is proposed to select an optimal number of training samples from the outlier dataset such that on-boundary samples play a more influential role in the optimization task. The work first provided an interesting probabilistic interpretation of outlier exposure technique. The loss function can be written as Eq. 80, where Lcls and Luni are shown in Eq. 78 and Eq. 79 respectively. $$L_{\mathrm{cls}}(f(x;\theta),y)=-\log f_{y}(x;\theta)$$ Lcls(f(x; θ), y) = − log fy(x; θ) (78) $$L_{\mathrm{uni}}(f(x;\theta))=-\frac{1}{K}\sum_{k=1}^{K}\log f_{k}(x;\theta)-\log K$$ $$(78)$$ $$(79)$$ $$L(\theta;p,q)=E_{X,Y\sim p(.,.)}[L_{\rm cls}(f(X;\theta);Y)]$$ $$+\alpha E_{X\sim q(.)}[L_{\rm uni}(f(X;\theta))].$$ $$(80)$$ By expanding Eq. 80 and taking its gradient w.r.t. classifier (f(x)) logits, the optimal classifier is obtained as Eq. 81 shows. $$f_{k}^{*}(x)=c(x)p_{Y|X}(k\mid x)+{\frac{1-c(x)}{K}}$$ $$c(x)={\frac{p_{X}(x)}{p_{X}(x)+\alpha q_{X}(x)}},$$ $$(\mathbb{S}1)$$ where c(x) can be seen as the relative probability of a sample x being in-class distribution p or background distribution q with the ratio α. Suppose D′b is the re-sampled dataset and Db is the given background one, OOD detection loss after reweighting can be written as: $$L_{\rm out}(\theta;w)=\frac{1}{|D^{\prime}_{b}|}\sum_{(x,y)\in D^{\prime}_{b}}L_{\rm uni}(f(x;\theta))\tag{82}$$ $$=\frac{1}{\sum_{i=1}^{|D_{b}|}}w_{i}L_{\rm uni}(f(x;\theta))$$ The optimal parameters θ ∗ are learned as follows: $$\theta^{*}(w)=\arg\,\min_{\theta}L(\theta;D,w)$$ $$=\arg\,\min_{\theta}L_{\rm in}(\theta;D)+\alpha L_{\rm out}(\theta;w)\tag{1}$$ $$(83)$$ Therefore, an iterating optimization is solved between finding θ t and w t at each step by fixing one and solving the optimization problem for the other. At last, the largest values of weights can be selected with regard to ![43_image_0.png](43_image_0.png) the dataset's desired size. Figure 34: The assigned likelihood values of a simple generative model to different datasets when trained on CIFAR10. As evident, simpler datasets achieve higher values. The figure is taken from (152). ## 5.14 Input Complexity And Out-Of-Distribution Detection With Likelihood-Based Generative Models (152): This work further investigated the problem of generative models assigning higher likelihood values to OOD samples. In particular, this work finds a strong tie between the OOD samples' complexity and likelihood values. Simpler input can lead to higher likelihood value. Fig. 35 shows this phenomenon. Furthermore, another experiment to support the claim is designed that starts from a random noise, on which a mean average pooling is applied at each step. To preserve the dimension, an upscaling is done after average poolings. Surprisingly, simpler images on which more average poolings are applied to achieve higher likelihoods. Motivated by this, the work proposed to detect OOD samples by accounting for input complexity in combination with likelihood values. Since it is hard to compute the input complexity, the paper instead calculates an upper bound using a lossless compression algorithm (28). Given a set of inputs x coded with the same bit depth, the normalized size of their compressed versions, L(x) (in bits per dimension), is used as the complexity measurement. Finally, the OOD score is defined as Eq. 84: $$S(x)=-l_{M}(x)-L(x),$$ S(x) = −lM(x) − L(x), (84) where lM(x) is the log-likelihood of an input x given a model M. $$(84)$$ ![44_image_0.png](44_image_0.png) Figure 35: The energy-based OOD detection framework. The energy function maps the logit outputs to a scalar through a convenient logsumexp operator. Test samples with lower energy are considered ID and vice versa. The figure is taken from (95). Intuitively, considering M0 as a universal compressor, then p(x | M0) = 2−L(x) and Consequently, the OOD score can be defined as follows : $$S(x)=-\log_{2}p(x\mid M)+\log_{2}p(x\mid M_{0})=\log_{2}\frac{p(x\mid M_{0})}{p(x\mid M)}\tag{85}$$ In the cases where a simple OOD sample is fed into the model, the universal compressor M0 assigns a high probability to it and effectively corrects the high likelihood, wrongly given by the learned model M. Similar interpretation holds for the complex OOD samples too. ## 5.15 Energy-Based Out-Of-Distribution Detection (95): This work proposes using the energy score derived from the logit outputs for OOD detection, and demonstrated superiority over softmax score. Energy-based models map each input x to a single deterministic point that is called energy (86). A set of energy values E(x, y) could be turned into a density function p(x) through the Gibbs distribution: $$p(y\mid x)={\frac{e^{-E({\bf x},y)/T}}{\int_{y^{\prime}}e^{-E({\bf x},y^{\prime})/T}}}={\frac{e^{-E({\bf x},y)/T}}{e^{-E({\bf x})/T}}},$$ where T is the temperature parameter, x is input data, and E(x) is called *Helmholtz free energy* and is equal to: $$E({\bf x})=-T\cdot\log\int_{y^{\prime}}e^{-E({\bf x},y^{\prime})/T}.$$ In deep networks, by considering E(x, y) = −fy(x), one can express the free energy function in terms of the denominator of the softmax activation: $$E({\bf x};f)=-T\cdot\log\sum_{i}^{K}e^{f_{i}({\bf x})/T}.\tag{10}$$ $$(86)$$ $$(87)$$ $$(88)$$ The paper also shows that cross-entropy loss facilitates pushing down the energy for in-distribution data during the training process. Moreover, softmax scores can be analyzed through an energy-based perspective, as Eq. 89 shows. During the optimization, E(x; f) is forced to be small for in-distribution samples while being shifted by f max(x) to satisfy the maximum function. Consequently, this results in a biased scoring function that is not suitable for OOD detection. or OOD detection. $$\max_{y}p(y\mid\mathbf{x})=\max_{y}\frac{e^{f_{y}(\mathbf{x})}}{\sum_{i}e^{f_{i}(\mathbf{x})}}=\frac{e^{f^{\max}(\mathbf{x})}}{\sum_{i}e^{f_{i}(\mathbf{x})}}\tag{89}$$ $$=\frac{1}{\sum_{i}e^{f_{i}(\mathbf{x})-f^{\max}(\mathbf{x})}}$$ $$\implies\log\max_{y}p(y\mid\mathbf{x})=E(\mathbf{x};f(\mathbf{x})-f^{\max}(\mathbf{x}))$$ $$=E(\mathbf{x};f)+f^{\max}(\mathbf{x}).$$ The energy score is hyperparameter-free, easy to compute, and achieves strong performance compared to the softmax score. Beyond post hoc OOD detection, the paper further demonstrated that energy score can be utilized for model regularization. Different from outlier exposure which forces the uniform softmax distribution for outlier training samples, energy-based regularization directly optimizes the energy gap between ID and OOD: $$\min_{\theta}E_{(\mathbf{x},y)\sim D_{\mathrm{in}}^{\mathrm{train}}}[-\log F_{y}(\mathbf{x})]+\lambda\cdot L_{\mathrm{energy}}$$ $$L_{\mathrm{energy}}=E_{(\mathbf{x}_{\mathrm{in}},y)\sim D_{\mathrm{out}}^{\mathrm{train}}}[\max(0,E(\mathbf{x}_{\mathrm{in}})-m_{\mathrm{in}})^{2}]$$ $$+E_{\mathbf{x}_{\mathrm{out}}\sim D_{\mathrm{out}}^{\mathrm{train}}}[\max(0,m_{\mathrm{out}}-E(\mathbf{x}_{\mathrm{out}}))^{2}],$$ $$(90)$$ $$(91)$$ where m is the margin hyper-parameter, and F(x) is the softmax output of the classification model. Dtrain out is the unlabeled auxiliary OOD training data, and Dtrain in is the ID training data. The optimization results in a stronger performance than OE. At the test time, OOD samples are detected based on a threshold on −E(x; f). ## 5.16 Likelihood Regret: An Out-Of-Distribution Detection Score For Variational Autoencoder (178): Previous works showed that VAEs can reconstruct OOD samples perfectly, resulting in the difficulty in detecting OOD samples. The average test likelihoods of VAE across different datasets have a much smaller range than PixelCNN (116) or Glow(80), showing that distinguishing between OOD and inlier samples is much harder in VAE. The reason might be because of the different ways they model the input distribution. Auto-regressive and flow-based methods model their input at pixel level, while the bottleneck structure in VAE forces the model to ignore some information. To address this issue, a criterion called *likelihood regret* is proposed. It measures the discrepancy between a model trained to maximize the average likelihood of a training dataset, for instance, a simple VAE, and a model maximizing the likelihood of a single input image. The latter is called an ideal model for each sample. Intuitively, the likelihood difference between the trained model and the ideal one might not be high; however, this does not hold for OOD inputs. Suppose the following optimization is performed to train a simple VAE : $$(\phi^{*},\theta^{*})\approx\arg\,\operatorname*{max}_{\phi,\theta}\frac{1}{n}\sum_{i=1}^{n}L(x_{i};\theta,\tau(x_{i},\phi)),$$ where ϕ and θ are the parameters of encoder and decoder respectively, τ (xi, ϕ) denotes the sufficient statistics (µx, σx) of qϕ(z | x), and L(xi; θ, τ (xi, ϕ)) expresses the ELBO. For obtaining the ideal model, the decoder can be fixed, and an optimization problem solved on τ (xi, ϕ) such that its individual ELBO is maximized: $$\hat{\tau}=\arg\,\operatorname*{max}_{\tau}L(x;\theta^{*},\tau)$$ ∗, τ ) (92) Finally, the likelihood regret is defined as follows: $$\operatorname{LR}(x)=L(x;\theta^{*},{\hat{\tau}}(x))-L(x;\theta^{*},\phi^{*})$$ ∗, ϕ∗) (93) The optimization of Eq. 92 could be started from the initial point that the encoder produces for each input, and a threshold is used on LR values at the test time. $$(92)$$ $$(93)$$ ## 5.17 Understanding Anomaly Detection With Deep Invertible Networks Through Hierarchies Of Distributions And Features (146): This work studied the problem of flow-based generative models for OOD detection. It was noted that local ![46_image_0.png](46_image_0.png) feature such as smooth local patches can dominate the likelihood. Consequently, smoother datasets such as SVHN achieve higher likelihoods than a less smooth one like CIFAR-10, irrespective of the training dataset. Another exciting experiment shows the better performance of a fully connected network than a convolutional Glow network in detecting OOD samples using likelihood value. This again supports the existence of a relationship between local statistics like continuity and likelihood values. Fig. 36 shows the similarity of different dataset local statistics that are computed based on the difference of a pixel value to the mean of its 3 × 3 neighboring pixels. Figure 36: (A) shows the distribution of local pixel value differences. As it can be seen, distributions are highly overlapped. (B) Likelihoods extracted from the local pixel value differences correlate with CIFAR10-Glow likelihoods. The figure is taken from (146). Furthermore, let us call the summation of the likelihood of pixel value differences computed in a local 3 × 3 neighboring pixels using a histogram with 100 equally distanced bins with pseudo-likelihoods. A strong Spearman correlation is found between the pseudo-likelihoods and the exact values of likelihoods, supporting the above assumptions. To address this problem, the following three steps are used: - Train a generative network on a general image distribution like 80 Million Tiny Images - Train another generative network on images drawn from the in-distribution, e.g., CIFAR-10 - Use their likelihood ratio for OOD detection Also, to improve the efficacy, the following outlier loss, which uses OOD samples, can be added to the maximum likelihood objective function: $$L_{o}=-\lambda.\log{\big(}\sigma{\big(}{\frac{\log(p_{g}(x))-\log(p_{i n}(x))}{T}}{\big)}{\big)},$$ where σ is sigmoid function, T is the temperature parameter, pg is the general-distribution-network likelihood, and pin is the specific in-distribution network likelihood. ## 5.18 Self-Supervised Learning For Generalizable Out-Of-Distribution Detection (107): In this work, a self-supervised learning method is employed to use the information of an unlabeled outlier dataset such that the OOD detection utility of an in-distribution classifier is improved. To do so, at first, the classifier is trained on in-class training samples until the desired performance is achieved. Then, additional outputs (a set of k reject classes) are added to the last layer. Each training batch consists of ID data and some outlier samples. The following loss function is used: $$(95)$$ $$\begin{array}{l}{{\operatorname*{min}(E_{P_{\mathrm{in}}(x,\hat{y})}[-\log(P_{\theta}(y=\hat{y}\mid\hat{x}))])}}\\ {{+\lambda E_{P_{\mathrm{out}}(x,\mathrm{target})}[-\log(P_{\theta}(y=\mathrm{target}\mid x))],}}\end{array}$$ $$(94)$$ where λ is a learning coefficient for OOD features, and the target is selected based on the following rules: if $\arg\max(P_{\theta}(x))\in k$ then $$\begin{array}{ll}\mbox{target}\leftarrow\mbox{arg}\max(P_{\theta}(x))&\\ \mbox{else}&\\ \mbox{target}\leftarrow\mbox{random}(k)&\end{array}$$ $$({\mathfrak{g h o}})$$ $$(97)$$ This is similar to unsupervised deep k-means in (20). If an unlabeled sample in the outlier dataset resembles in-class samples, it is assigned a random reject class each time; otherwise, it is assigned to a specific reject class. This helps unlabeled inliers to be separated from outliers. At the test time, the sum of the softmax output of the OOD classes is used as the detection score. ## 5.19 Ssd: A Unified Framework For Self-Supervised Outlier Detection (150): This work has a very similar idea to GDFR (*c.f.* Section 4.8). It incorporates SSL methods so to alleviate the need for labeling in-class samples. This is different from several aforementioned methods that require solving a classification task. As a result, SSD can be flexibly used in different settings such as ND, OSR, and OOD detection. The main idea is to employ the contrastive learning as in (25), and learn semantically meaningful features. After representation learning, a k-means clustering is applied to estimate class centers with mean and covariance (µm, Σm). Then for each test time sample, the Mahalanobis distance to the closest class centroid is used as the OOD detection score: $$s_{x}=\operatorname*{min}_{m}(z_{x}-\mu_{m})^{T}\Sigma_{m}^{-1}(z_{x}-\mu_{m})$$ m (zx − µm) (97) where zx represents the feature for input x.The contrastive learning objective function is simple. Using image transformations, it first creates two views of each image, commonly referred to as positives. Next, it optimizes to pull each instance close to its positive instances while pushing away from other images, commonly referred to as negatives: $${\cal L}_{\mathrm{batch}}=\frac{1}{2N}\sum_{i=1}^{2N}-\log\frac{e^{u_{i}^{T}u_{j}/\tau}}{\sum_{k=1}^{2N}I(k\neq i)e^{u_{i}^{T}u_{j}/\tau}},$$ $$(98)$$ where ui =h(f(xi)) ||h(f(xi))||2 is the normalized feature vector, (xi, xj ) are positive pairs for the ith image from a batch of N images, and h(·) is the projection head. Moreover, when a few OOD samples are available, the following scoring function can be used: $$\begin{array}{c}{{s_{x}=(z_{x}-\mu_{\mathrm{in}})^{T}\Sigma_{\mathrm{in}}^{-1}(z_{x}-\mu_{\mathrm{in}})}}\\ {{\qquad-(z_{x}-\mu_{\mathrm{ood}})^{T}\Sigma_{\mathrm{ood}}^{-1}(z_{x}-\mu_{\mathrm{ood}})}}\end{array}$$ $$(99)$$ This framework can be extended to the supervised settings when the labels of in-class distribution are available. SSD employs the supervised contrastive learning objective proposed in (79): $$L_{\rm batch}=\frac{1}{2N}\sum_{i=1}^{2N}-\log\frac{\frac{1}{N_{\rm B_i}-1}\sum_{k=1}^{2N}I(y_k=y_i)e^{\pi^{T}u/\tau}}{\sum_{k=1}^{2N}I(k\neq i)e^{\pi^{T}u/\tau}},$$ where $N_{\rm B_i}$ is the number of images with label $y_i$ in the batch. $$(100)$$ ## 5.20 Mos: Towards Scaling Out-Of-Distribution Detection For Large Semantic Space (69): MOS first revealed that the performance of OOD detection can degrade significantly when the number of in-distribution classes increases. For example, analysis reveals that a common baseline MSP's average false positive rate (at 95% true positive rate) would rise from 17.34% to 76.94% as the number of classes increases from 50 to 1, 000 on ImageNet1k. To overcome the challenge, the key idea of MOS is to decompose the large semantic space into smaller groups with similar concepts, which allows simplifying the decision boundaries between known vs. unknown data. Specifically, MOS divides the total number of C categories into K groups, G1, G2, ..., GK. Grouping is done based on the taxonomy of the label space if it is known, applying k-means using the features extracted from the last layer of a pre-trained network or random grouping. Then the standard groupwise softmax for each group Gk is defined as follows: $$p_{c}^{k}(x)=\frac{e^{f_{c}^{k}(x)}}{\sum_{c^{\prime}\in G_{k}}e^{f_{c^{\prime}}^{k}(x)}},c\in G_{k}\tag{1}$$ $$(101)$$ where f k c (x) and p k c (x) denote output logits and the softmax probability for class c in group Gk, respectively. Fig. 37 shows the overall architecture. The training loss function is defined as below: $$L_{\mathrm{GS}}=-\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{c\in G_{k}}y_{c}^{k}\log(p_{c}^{k}(x))$$ $$(102)$$ $$(103)$$ where y c k and p c k represent the label and the softmax probability of category c in Gk, and N is the total number of training samples. A key component in MOS is to utilize a category others in each group, which measures the probabilistic score for an image to be unknown with respect to the group. The proposed OOD scoring function, Minimum Others Score (MOS), exploits the information carried by the others category. MOS is higher for OOD inputs as they will be mapped to others with high confidence in all groups, and is lower for in-distribution inputs. Finally, test time detection is done based on the following score: Figure 37: The overall architecture of MOS. As the figure shows, each sample is labeled as "others" for groups to which it does not belong, except the correct group. At the test time, detection is done based on the minimum of "others" class scores. The figure is taken from (69). ## 5.21 Can Multi-Label Classification Networks Know What They Don'T Know? (171): In this work, the ability of OOD detectors in the multi-label classification setting was investigated. In the multi-label classification setting, each input sample might have one or more corresponding labels, which make the problem harder since the joint distribution across labels can be intractable to model. This work proposes the *JointEnergy* criterion as a simple and effective method, which estimates the OOD indicator scores by aggregating label-wise energy scores from multiple labels. Also, they show that JointEnergy can be mathematically interpreted from a joint likelihood perspective. Similar to what has been discussed in $$\cos(x)=-\operatorname*{min}_{1\leq k\leq K}(p_{\mathrm{others}}^{k}(x))$$ others(x)) (103) ![48_image_0.png](48_image_0.png) (95) P(yi = 1 | x) can be written as e −E(*x,yi* ) e−E(x), then by defining *label-wise free energy* that is a special case of K-class free energy with K = 2: $$E_{y_{i}}(x)=-\log(1+e^{f_{y_{i}}(x)})\tag{1}$$ JointEnergy can be defined as: $$E_{\mathrm{Joint}}(x)=\sum_{i=1}^{K}-E_{y_{i}}(x)$$ $$(104)$$ $$(105)$$ $$(106)$$ (x) (105) Through derivations, the JointEnergy can be decomposed into three terms: $$E_{\rm joint}(x)=\log p(x\mid y_{1}=1,\ldots,y_{K}=1)$$ $$+(K-1)\cdot\log p(x)+Z$$ where the first term takes into account joint likehood across labels; and the second term reflects the underlying data density, which is supposed to be higher for in-distribution data x. The summation overall results in stronger separability between ID and OOD. The architecture overview is provided in Fig. 38. ![49_image_0.png](49_image_0.png) Figure 38: An overview of JointEnergy for OOD detection in multi-label classification networks. During inference time, input x is passed through a classifier, and label-wise scores are computed for each label. OOD indicator scores are either the maximum-valued score (denoted by green outlines) or the sum of all scores. Taking the sum results in a larger difference in scores and more separation between in-distribution and OOD inputs (denoted by red lines), resulting in better OOD detection. Plots in the bottom right depict the probability densities of MaxLogit versus JointEnergy. The figure is taken from (171). ## 5.22 On The Importance Of Gradients For Detecting Distributional Shifts In The Wild (70) : This work proposes a simple post hoc OOD detction method GradNorm, which utilizes the vector norm of gradients with respect to weights, backpropagated from the KL divergence between the softmax output and a uniform probability distribution. The GradNorm is generally higher for in distribution (ID) data than that for OOD data. Therefore, it can be used for OOD detection. Specifically, the KL divergence is defined as follows: $$D_{\mathrm{KL}}({\bf u}||\mathrm{Softmax}(f(x))=-\frac{1}{C}\sum_{c=1}^{C}\log\left(\frac{e^{f_{c}(x)/T}}{\sum_{j=1}^{C}e^{f_{j}(x)/T}}\right),$$ $$(107)$$ ![50_image_0.png](50_image_0.png) Figure 39: a) The distribution of ID and OOD uncertainty scores before truncation when ImageNet and iNaturalist are used as in-class and out-of-distribution samples, b) the distribution of per-unit activations in the penultimate layer for ID and OOD data, and c) the distribution of OOD scores after rectification for the ID and OOD data are depicted in the plots above. When ReAct is used, it significantly enhances the separation of the ID and OOD information. The figure is taken from (163). where u is the uniform distribution, T is the temperature, fc(x) denotes the c-th element of f(x) corresponding to the label c, and C is the number of in-distribution classes. Then, the OOD score is defined as S(x) = || ∂DKL ∂W ||p where W can be (1) a block of parameters, (2) all parameters, and (3) last layer parameters. It was mentioned that the third approach is better than others and achieves significant results. ## 5.23 React: Out-Of-Distribution Detection With Rectified Activations (163): The internal activations of neural networks are examined in depth in this study, which as discovered have considerably different signature patterns for OOD distributions and thus enable OOD detection. The distribution of activations in the penultimate layer of ResNet-50, which is trained on the ImageNet, is depicted in Fig. 39. Each point on the horizontal axis represents a single unit of measurement. The mean and standard deviation are depicted by the solid line and shaded area, respectively. The mean activation for the ID data (blue) is well-behaved, with a mean and standard deviation that are both nearly constant. Contrary to this, when looking at the activation for OOD data (gray), the mean activations display substantially greater variations across units and are biased towards having sharp positive values. The unintended consequence of having such a high unit activation is that it can present itself in the output of the model, resulting in overconfident predictions on the OOD data. Other OOD datasets exhibit a similar distributional property, which is consistent with previous findings. Therefore, a technique called Rectified Activations (also known as ReAct) is presented as a solution to this problem, in which the outsized activation of a few selected hidden units can be attenuated by rectifying the activations at an upper limit c > 0. It is demonstrated that ReAct can generalize effectively to a variety of network designs and that it is compatible with a variety of output-based OOD detection methods, including MSP (56), ODIN (92), and energy score (95), among others. ## 5.24 Vos: Learning What You Don'T Know By Virtual Outlier Synthesis (38): In this paper, the key idea is to synthesize virtual outliers to enhance the model's decision boundary. The work differs from prior works (106; 189; 127) in two aspects. Firstly, it was one of the first works addressing the object-level out-of-distribution, where every input instance is an object instead of an entire image. Object-level OOD detection is useful when a complex scene contains objects of both ID and OOD categories. Secondly, instead of generating samples in the image space, this work proposes generating virtual outliers in the feature space, which is shown to be more tractable than synthesizing images in the high-dimensional pixel space. The method overview is shown in Figure 40. To generate outlier features, VOS first models the feature space of the in-distribution classifier as a class-conditional multivariate Gaussian distribution: ![51_image_0.png](51_image_0.png) $$(108)$$ Figure 40: Overview of virtual outlier synthesis (VOS) for model regularization and OOD detection. The figure is taken from (38). $$p_{\theta}(h(x,\mathbf{b}|y=k))=N(\mu_{k},\Sigma)$$ pθ(h(x, b|y = k)) = N(µk, Σ) (108) where µk is the mean of class k ∈ {1*, ..., K*}, Σ is the tied covariance matrix, and b ∈ R4 be the bounding box coordinates associated with object instances in the image, and h(x, b) is the latent representation of an object instance (x, b). The covariance (Σb) and mean (µbk) are estimated empirically based on passing the training samples ({(xi, bi, yi)} N i=1) to the trained network. To be specific: $$\widehat{\boldsymbol{\mu}}_{k}=\frac{1}{N_{k}}\sum_{i:y_{i}=k}h\left(\mathbf{x}_{i},\mathbf{b}_{i}\right)$$ $$\widehat{\boldsymbol{\Sigma}}=\frac{1}{N}\sum_{k}\sum_{i:y_{i}=k}\left(h\left(\mathbf{x}_{i},\mathbf{b}_{i}\right)-\widehat{\boldsymbol{\mu}}_{k}\right)\left(h\left(\mathbf{x}_{i},\mathbf{b}_{i}\right)-\widehat{\boldsymbol{\mu}}_{k}\right)^{\top}.\tag{1}$$ $$(109)$$ Then, virtual outliers are synthesized based on the estimated class-conditional distribution, with likelihoods smaller than a threshold ϵ: $$\mathcal{V}_{k}=\left\{\mathbf{v}_{k}\mid\frac{1}{(2\pi)^{m/2}|\widehat{\mathbf{\Sigma}}|^{1/2}}\exp\left(-\frac{1}{2}\left(\mathbf{v}_{k}-\widehat{\boldsymbol{\mu}}_{k}\right)^{\mathsf{T}}\widehat{\mathbf{\Sigma}}^{-1}\left(\mathbf{v}_{k}-\widehat{\boldsymbol{\mu}}_{k}\right)\right)<\epsilon\right\},$$ where $\mathbf{v}_k\sim\mathcal{N}\left(\widehat{\boldsymbol{\mu}}_k,\widehat{\boldsymbol{\Sigma}}\right)$ denotes the sampled virtual outliers for class $k$. Having virtual outliers generated, the method regularizes the decision boundary via the energy score (95), where ID objects have negative energy values and the synthesized outliers have positive energy: $${\mathcal{L}}_{\mathrm{uncertainty}}=E_{\mathbf{v}\sim{\mathcal{V}}}[-\log{\frac{1}{1+e^{-\phi(E(\mathbf{v},\theta))}}}]+E_{\mathbf{x}\sim{\mathcal{D}}}[-\log{\frac{e^{-\phi(E(\mathbf{x};\theta))}}{1+e^{-\phi(E(\mathbf{x};\theta))}}}],$$ where ϕ(·) is a nonlinear MLP function. OOD detection can be done by replacing image-level energy with object-level energy. For ID object (x, b), the energy is defined as: $$E(\mathbf{x},\mathbf{b};\theta)=-\log\sum_{k=1}^{K}w_{k}\cdot\exp^{f_{k}((\mathbf{x},\mathbf{b});\theta)},$$ $$(110)$$ $$(1111)$$ $$(112)$$ ![52_image_0.png](52_image_0.png) $$(113)$$ $$(114)$$ Figure 41: Overview of the deep KNN approach for leveraging embedding space to detect OOD samples. The figure is taken from (164). where fk((x, b); θ) = W⊤ clsh(x, b) is the logit output for class k in the classification branch and Wcls ∈ R m×K is the weight of the last fully connected layer. A training objective for object detection combines a standard loss and an uncertainty regularization loss: $$\operatorname*{min}_{\theta}\mathbb{E}_{(\mathbf{x},\mathbf{b},y)\sim{\mathcal{D}}}\left[{\mathcal{L}}_{\mathrm{cls}}\ +{\mathcal{L}}_{\mathrm{loc}}\ \right]+\beta\cdot{\mathcal{L}}_{\mathrm{uncertainty}}$$ E(x,b,y)∼D [Lcls + Lloc ] + β · Luncertainty (113) where β is the weight of the uncertainty regularization. Lcls and Lloc are losses for classification and bounding box regression, respectively. OOD detection is performed by using the logistic regression uncertainty branch during testing. Specifically, given an input x ∗, the object detector produces a bounding box prediction b ∗. The OOD uncertainty score for the predicted object (x ∗, b ∗) is given by: $$p_{\theta}\left(g\mid\mathbf{x}^{*},\mathbf{b}^{*}\right)={\frac{\exp^{-\phi\left(E(\mathbf{x}^{*},\mathbf{b}^{*})\right)}}{1+\exp^{-\phi\left(E(\mathbf{x}^{*},\mathbf{b}^{*})\right)}}}.$$ 1 + exp−ϕ(E(x∗,b∗)) . (114) A thresholding mechanism can be used to differentiate between ID and OOD objects for OOD detection: $$G\left(\mathbf{x}^{*},\mathbf{b}^{*}\right)=\begin{cases}1&\text{if}p_{0}\left(g\mid\mathbf{x}^{*},\mathbf{b}^{*}\right)\geq\gamma\\ 0&\text{if}p_{\theta}\left(g\mid\mathbf{x}^{*},\mathbf{b}^{*}\right)<\gamma\end{cases}\tag{1}$$ $$(115)$$ The threshold γ is typically chosen so that a high fraction of ID data (e.g., 95%) is correctly classified. ## 5.25 Out-Of-Distribution Detection With Deep Nearest Neighbors (164): A non-parametric nearest-neighbor distance is explored for detecting OODs in this study. The distance-based methods assume that the test OOD samples are relatively far away from the ID data, and rely on feature embeddings derived from the model. Prior parametric distance methods (e.g. maximum Mahalanobis distance) impose distributional assumptions about the underlying feature space, and this study suggests that these assumptions may not always hold. The method uses a threshold-based criterion to determine whether the input is OOD by computing the k-th nearest neighbor (KNN) distance between the embeddings of the test input and the embeddings of the training set. Specifically, OOD detection is performed by using the normalized penultimate feature z = ϕ(x)/∥ϕ(x)∥2, where ϕ : *X 7→* R m is a feature encoder. Denote the embedding set of training data as Zn = (z1, z2*, . . . ,* zn). In testing, the normalized feature vector z ∗is obtained for a test input x ∗, and calculate the Euclidean distances ∥zi − z ∗∥2 with respect to embedding vectors zi ∈ Zn. Zn is reordered according to the increasing distance ∥zi − z ∗∥2 . The reordered data sequence is represented by Z ′ n =z(1), z(2)*, . . . ,* z(n) . OOD detection is based on the following decision function: $$G\left(\mathbf{z}^{*};k\right)=\mathbf{1}\left\{-r_{k}\left(\mathbf{z}^{*}\right)\geq\lambda\right\},$$ ![53_image_0.png](53_image_0.png) Figure 42: Algorithm 1 describes the KNN-based OOD detection in detail. The figure is taken from (164). where rk (z ∗) =z ∗ − z(k) 2 is the distance to the k-th nearest neighbor, and 1{·} is the indicator function. As the distance threshold is only estimated based on ID data, the method is OOD-agnostic since unknown data is not required in the testing procedure. The method is also model-agnostic since it applies to a variety of loss functions (e.g., cross-entropy and supervised contrastive loss), and model architectures (e.g., CNN and ViT). Moreover, the KNN method is easy to use thanks to modern implementations, such as Faiss (75), a library that allows running this in milliseconds, even when the database contains billions of images. In contrast, some prior methods cause numerical instability, such as Mahalanobis distance, which requires the inverse computation of the covariance matrix. ## 6 A Summary Of The Shared Core Ideas As previously mentioned, methods and formulations of out-of-distribution detection, open-set recognition, anomaly detection, and novelty detection overlap extensively. Here, we present a selection of notions that have been proved to be advantageous across a variety of areas, albeit under different titles. All of these concepts have been discussed in prior sections, which can be referred to for more information. Making a compressed representation for normal samples: The concept is founded mostly on the projection of normal samples into a feature space so that normal training samples are located in close proximity to one another. In this manner, the feature extractor is compelled to concentrate on the most shared features among all normal inputs, which implies that the normal and abnormal feature spaces will have less in common; as a result, abnormal inputs will be projected far from normal inputs and can be easily identified. This intuition has been followed in anomaly detection by DSVDD (136) and DSAD (137), by finding the most compact hyper-sphere in the feature space that includes normal training samples. GOAD (12) takes a similar approach, determining the most compact hyper-sphere for each set of linear transformations applied to the normal inputs. In open-set recognition, (33) defines a so called "objectoshpere" loss that minimizes the l2 norm of normal training samples, which is equal to compressing them in a hyper-sphere centered in (0, 0). CAC (104) follows the same strategy as GOAD, but rather than relying on transformations, it takes advantage of the labels that are made available during open-set recognition and makes use of a variety of normal samples in order to generate some distinct hyper-spheres. Analyzing normal gradient: In this approach, instead of directly using normal features, their gradients are utilized. (172) demonstrated that self-supervised techniques such as GT (48) can perform effectively in the face of a small number of outliers or anomalous samples in the training dataset. This occurs because normal samples alter the gradients substantially more than abnormal samples; thus, the notwork can still mostly extract normal features. Similarly, in out-of-distribution detection, (70) shows that the vector norm of gradients with respect to weights, backpropagated from the KL divergence between the softmax output and a uniform probability distribution is greater for normal samples than abnormal inputs. Outlier exposure technique: Outlier-exposure-based methods usually assume that there are some abnormal samples that can be easily used to improve performance. For example, (58) uses online training datasets that are free to use during the training process to make better normal boundaries, which improve the out-of-distribution detection method. Similarly, DSAD (137) uses datasets that are already on the internet to learn the distribution of abnormal training samples. The same idea is continued by (91; 33), which was explained in more detail earlier. Generating outliers: This idea is mainly based on generating fake outliers by the premature training of generative models on the normal distribution. Then, the out of distribution model is trained with the combination of the fake and existing normal training samples. The approach is followed by DSAD (137), G2D (127), and Old-is-Gold (189) in anomaly detection domain. In the same way, methods like G-OpenMax (46), OpenGAN (35) take the advantages of this approach to improve the performance of open-set recognition and out-of-distribution detection. Prototype-based methods: Here, the objective is to determine the least number of prototypes are needed to adequately cover the normal feature space. The normality score is then calculated by cofeatures of each input to the prototypes learned during training. In anomaly detection and open-set recognition, Mem-AE (49) and RPL (22) implement this concept, respectively. Generative models do not make abnormal inputs. It is assumed that generative models trained on normal distributions struggle to create or reconstruct abnormal inputs as well as normal inputs. As a result, measurements such as reconstruction error in AE-based approaches or discriminator output in GAN-based structures can be utilized to differentiate normal from abnormal. In anomaly detection, methods such as (139; 189; 1; 122; 159) use this approach. In open-set recognition and out-of-distribution detection, C2AE (117), CROSR (183) and CGDL (162) follow the same methodology. Leveraging the knowledge of pre-trained models. Instead of using the raw training datasets available on the internet, why not utilize the models that have been trained on these datasets? Pre-trained features intuitively contain information that can be conveyed to downstream tasks more effectively than the original training samples. Several approaches have been developed in anomaly detection and open-set recognition based on this notion, such as MKD (144), which distills the knowledge of a pre-trained model for normal training samples into another network to improve anomaly detection performance. Similarly, in open-set recognition, DTL (121) employs the filters of a pre-trained model that are more stimulated when the model is presented with normal inputs as an additional detection criterion to filter open-set samples during the evaluation. Self-supervised learning-based methods: It has been shown by (167) that learning better representations has a direct correlation with the performance of open-set recognition. Therefore, self-supervised learning methods as a well-known unsupervised representation learning technique seems to be a good fit for anomaly detection, novelty detection, open-set recognition, and out-of-distribution detection. Following this approach, several methods such as CSI (165), DA-Contrastive (158), CutPaste (90) have been proposed, which show a strong performance in detecting semantic and pixel-level anomaly detection. In out-of-distribution detection, (60) investigates the effect of self-supervised learning on detecting outliers extensively. A similar approach was also followed by (150; 107), which was explained in detail in previous sections. ## 7 Dataset 7.1 Semantic-Level Datasets Below we summarize datasets that can be used to detect semantic anomalies. Semantic anomalies are those kinds of anomalies that the variation of pixels leads to the change of semantic content. Datasets such as MNIST, Fashion-MNIST, SVHN, and COIL-100 are considered as toy datasets. CIFAR-10, CIFAR-100, LSUN, and TinyImageNet are hard datasets with more variations in color, illumination, and background. Finally, Flowers and Birds are fine-grained semantic datasets, which makes the problem even harder. MNIST (85): This dataset includes 28 × 28 grayscale handwritten digits from 0-9 and consists of 60k training images and 10k testing ones. Fashion MNIST (177): This dataset comprises of 28×28 grayscale images of 70k clothing items from 10 categories. The training set has 60k images and the test set has 10k images. CIFAR-10 (83): The CIFAR-10 has 60k natural images. It consists of 32x32 RGB images of 10 classes, There are 50k training images and 10k test images. CIFAR-100 (83): This dataset is very similar to CIFAR-10, but it has 100 classes containing 600 images each.The 100 classes are grouped into 20 super classes. Each class has 500 training images and 100 testing images. TinyImageNet (31): The Tiny ImageNet dataset consists of a subset of ImageNet images . It contains 10,000 test images from 200 different classes. Also, two more datasets, TinyImageNet (crop) and TinyImageNet (resize) can be constructed, by either randomly cropping image patches of size 32 × 32 or downsampling each image to size 32 × 32. LSUN (184): The Large-scale Scene UNderstanding dataset (LSUN) has a testing set of 10,000 images of 10 different scene categories such as bedroom, kitchen room, living room, etc. Similar to TinyImageNet, two more datasets, LSUN (crop) and LSUN (resize), can be reconstructed by randomly cropping and downsampling the LSUN testing set, respectively. COIL-100 (112): COIL-100 is a dataset of colorful images of 100 objects. It comprises 7200 128×128 images. Images are captured from objects placed on a motorized turntable against a black background, and there are 72 images of each object in different poses. SVHN (113): SVHN can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problems (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. Flowers (115): Flowers is a 102 category dataset, consisting of 102 flower categories. The flowers chosen to be flowers commonly occurring in the United Kingdom. Each class consists of between 40 and 258 images. The images have large scale, pose and light variations. In addition, there are categories that have large variations within the category and several very similar categories. Birds (175): CUB-200-2011 is a bird classification task with 11,788 images across 200 wild bird species. There is roughly equal amounts of train and test data. It is generally considered one of the most challenging datasets since each species has only 30 images for training. ## 7.2 Pixel-Level Datasets In these datasets unseen samples, outliers or anomalies do not have semantic difference with inliers. This means an area of the original image is defected; however, the original meaning is still reachable, yet has been harmed. MVTec AD ((13)):This dataset is an industrial dataset, it provides 5354 high-resolution images divided into ten objects and five texture categories and contains 3629 training images. The test set contains 467 normal images and 1258 abnormal images having various kinds of defects. PCB ((71)): PCB dataset containing 1386 images with 6 kinds of defects for the use of detection, classification and registration tasks. Images are captured in high-resolution. LaceAD ((179)): LaceAD contains 9,176 images from the top 10 lace fabric manufacturing companies worldwide, where the images are captured in the real production environment by a high-resolution DSLR camera, They are categorized into 17 subsets based on their patterns. Each image has the size of 512 × 512 and has been labeled by professional workers. Retinal-OCT ((77)): This consists of 84,495 X-Ray images in 4 categories CNV, DME, DRUSEN, and NORMAL, each of which has subtle differences with respect to others. CAMELYON16 ((9)): Detecting metastases of lymph nodes is an extremely important variable in the diagnosis of breast cancers. Tissue with metastasis may differ from a healthy one only in texture, spatial structure, or distribution of nuclei, and can be easily confused with normal tissue. The training dataset of Camelyon16 consists of 110 whole-slide images (WSIs) of tumors, and 160 non-tumor cases, and testing dataset with 80 regular slides and 50 slides containing metastases. Chest X-Rays ((173; 73; **129)):** Chest X-Ray datasets are medical imaging datasets which comprise a large number of frontal-view X-ray images of many unique patients (collected from the year of 1992 to 2015). The datasets include eight to fourteen common disease labels, mined from the text radiological reports via NLP techniques. Images are not registered and captured in different pose and contrast, which makes the detection task challenging. Species ((62)): This dataset consists of organisms that fall outside ImageNet-21K. Consequently, ImageNet21K models can treat these images as anomalous. ImageNet-O ((65)): ImageNet-O is a dataset of adversarially filtered examples for ImageNet out-ofdistribution detectors. To create this dataset, at first, ImageNet-22K is downloaded, and shared examples from ![56_image_0.png](56_image_0.png) ImageNet-1K are deleted. With the remaining ImageNet-22K examples that do not belong to ImageNet-1K classes, examples that are classified by a ResNet-50 as an ImageNet-1K class with a high confidence are kept. Finally, visually clear images are selected. This creates a dataset of OOD examples that are hard for a ResNet-50. These examples are challenging for other models to detect, including Vision Transformers. Figure 43: Sample visualization of MNIST, Fashion-MNIST, SVHN, COIL-100, Birds, Chest X-Rays, Cifar10, TinyImageNet, LSUN, Flowers, MVTecAD, PCB, Retinal-OCT, CAMELYON16, LaceAD, MNIST-C, ImageNet-C, and ImageNet-P. ## 7.3 Synthetic Datasets These datasets are usually made using semantic-level datasets; however, the amount of pixel-variations is under control such that unseen, novel, or abnormal samples are designed to test different aspects of trained models while preserving semantic information. For instance, MNIST-c includes MNIST samples augmented with different kinds of noises such as shot noise and impulse noise, which are random corruptions that may occur during the imaging process. These datasets could be used to not only test the robustness of the proposed models, but also for training models in the AD setting instead of novelty detection or open-set recognition. Due to the lack of comprehensive research in the field of anomaly detection, these datasets can be very beneficial. MNIST-C (109): MNIST-C dataset is a comprehensive suite of 15 corruptions applied to the MNIST test set for benchmarking out-of-distribution robustness in computer vision. Through several experiments and visualizations, it is shown that the corruptions significantly degrade the performance of state-of-the-art computer vision models while preserving the semantic content of the test images. ImageNet-C, ImageNet-P (55): This can be seen as the ImageNet version of MNIST-C. For the ImageNet-C, a set of 75 common visual corruptions are applied on each image, and for the ImageNet-P, a set of perturbed or subtly differing ImageNet images are introduced. It is shown that although these perturbations are not chosen by an adversary, currently existing networks exhibit surprising instability on common perturbations. An overall visualization of the mentioned datasets can be found in Fig. 43. ## 8 Evaluation Protocols 8.1 Auc-Roc Receiver Operating Characteristics (ROC) is a well-known criterion. Given a test dataset including positive and negative (or seen and unseen ) samples, it characterizes the relation between the false positive rate (FPR) and the true positive rate (TPR) at different detection thresholds. AUC-ROC is the area under the ROC curve, which is a threshold-independent metric. The highest value of ROC is 1, and 0.5 indicates that the model assigns the positive label with random guessing. In the literature, AD and ND are usually tested in one-vs-all setting that considers one class as normal and the rest of the classes as anomaly, unseen or unknown. For OOD detection, the in-distribution data is considered as positive and OOD data is considered as negative. For instance, one can train a model on CIFAR-10 and consider MNIST as outlier at the test time. This means training and testing datasets have a large contrast with each other. Sometimes instead of MNIST, uniform noise or Gaussian noise can be used. ## 8.2 Fpr@Tpr Although AUC-ROC is a common metric, in practice, models have to select a specific threshold to perform detection. To address this issue, an operating point on the ROC which is desired with respect to applications is selected. A commonly-used metric is FPR@TPRx, which measures the FPR when the TPR is x = 0.95. ## 8.3 Aupr AUPR is the Area under the Precision-Recall curve, which is another threshold independent metric. The PR curve depicts the precision= T P T P +F P and recall= T P T P +F N under different thresholds. In some literature, the metrics AUPR-In and AUPR-Out denote the area under the precision-recall curve where in-distribution and out-of-distribution images are specified as positives, respectively. This metric is usually used in OOD detection setting. FP is the number of false positives, TN is the number of true negatives, FN is the number of false negatives, and TP is the number of true positives. ## 8.4 Accuracy This metric is usually used in OSR, which is a common choice for evaluating classifiers under the closed-set assumption, as follows: $$A={\frac{\sum_{i=1}^{C}(T P_{i}+T N_{i})}{\sum_{i=1}^{C}(T P_{i}+T N_{i}+F P_{i}+F N_{i})}}$$ $$(116)$$ This can be easily extended to open-world assumption in which UUCs must be classified correctly: $$A_{O}=\frac{\sum_{i=1}^{C}(T P_{i}+T N_{i})+T U}{\sum_{i=1}^{C}(T P_{i}+T N_{i}+F P_{i}+F N_{i})+(T U+F U)},\tag{1}$$ where T U specifies the number of correctly predicted UUC samples. Although accuracy is a common metric; however, it is very sensitive to imbalanced number of samples, which is not the case in metrics such as AUC-ROC. To cope with this issue, a normalized accuracy (NA), which weights the accuracy for KKCs (AKS) and the accuracy for UUCs (AUS) is defined as follows: $$\mathrm{NA}=\lambda_{r}\mathrm{AKS}+(1-\lambda_{r})\mathrm{AUS}$$ NA = λrAKS + (1 − λr)AUS (118) where $$\text{AKS}=\frac{\sum_{i=1}^{C}(TP_{i}+TN_{i})}{\sum_{i=1}^{C}(TP_{i}+TN_{i}+FP_{i}+FN_{i})}\tag{119}$$ $$\text{AUS}=\frac{TU}{TU+FU}$$ $$(117)$$ $$(118)$$ $$(120)$$ and 0 ≤ λr ≤ 1 is a regularization constant. ## 8.5 F-Measure The F-measure or F-score is the harmonic mean of precision P and recall R: $$F=2\times{\frac{P\times R}{P+R}}$$ P + R(120) Note that F must be computed in the same way as the multi-class closed set scenario. This is because the correct classifications of UUCs would be considered as true positive classifications; however, it makes no sense since there is no UUC sample in the training process. Therefore, the computations of Precision and Recall only for KKCs are modified to give a relatively reasonable F-measure for OSR. The new measures are called macro-F-measure and micro-F-measure respectively as follows: $$P_{\rm ma}=\frac{1}{C}\sum_{i=1}^{C}\frac{TP_{i}}{TP_{i}+FP_{i}},\quad R_{\rm ma}=\frac{1}{C}\sum_{i=1}^{C}\frac{TP_{i}}{TP_{i}+FN_{i}}\tag{121}$$ $$P_{\rm mi}=\frac{\sum_{i=1}^{C}TP_{i}}{\sum_{i=1}^{C}(TP_{i}+FP_{i})},\quad R_{\rm mi}=\frac{\sum_{i=1}^{C}TP_{i}}{\sum_{i=1}^{C}(TP_{i}+FN_{i})}$$ Note that although the precision and recall only consider the KKCs; however, by computing F Ni and F Pi false UUCs and false KKCs are taken into account (76). ## 8.6 Comparing Evaluating Procedures Of Each Domain Each domain assessment process establishes the practical restrictions that must be met before the claimed performance of proposed methods within that domain can be trusted. Furthermore, in some domains, such as OSR and OOD detection, highlighting the differences in evaluation protocols utilized in each domain might help to gain a better expectation about the reported performances when applied in practical applications. As a result, given an arbitrary dataset such as CIFAR-10 (83), the following training/testing setup is employed at each domain : AD/ND : (1) Select a random class out of 10 given classes as normal distribution. (2) Train a one-class feature extractor according to the discussed approaches. (3) Test the trained feature extractor on the entire test-set, which implies the implicit assumption of 10% likelihood for the occurrence of normal events and 90% for the abnormal ones. OOD Detection : (1) Use the whole training dataset to train a multi-class feature extractor. (2) As the out-of-distribution dataset, choose another dataset, such as one of the datasets listed above. (3) Consider the entire test set to be normal and select some random samples from the out-of-distribution dataset so that the chance of abnormal occurrences is lower than normal when all the samples are combined. As can be observed, this assessment methodology makes an implicit assumption about the rarity of anomalies, which OSR does not. OSR : (1) As the normal distribution, pick K random classes. (2) Train a multi-class feature extractor according to the discussed approaches. (3) Test the trained feature extractor on the entire test-set. As previously stated, K is decided based on the **openness score**. This suggests that abnormal events may have a higher probability than normal events, because anomaly means not being a member of the normal sets. For instance, in the self-driving car application, a model might be only responsible for detecting people and cars as two small sets of many definable sets existing in the agent environment. This is a key distinction between the OOD detection and OSR viewpoints. ## 9 Core Challenges Here we examine the core and ongoing challenges in the fields. This allows the community to employ existing knowledge and craft future solutions effectively. ## 9.1 Designing Appropriate Self-Supervised Learning Task As aforementioned, some methods such as (48; 12; 55) use SSL tasks to identify outliers; however, it has been recently investigated (6; 182) that augmentation techniques used as SSL tasks are not always beneficial for ND, OSR, and, OOD. (182) shows that self-supervision acts as a yet-another model hyper-parameter, and should be carefully chosen in light of the types of real abnormalities in the data. In other words, the alignment between the augmentation and underlying anomaly generating process is essential for the success of SSL-based anomaly detection approaches. SSL can even worsen detection performance in the absence of such alignment. Similarly, (6) shows that Maximum Softmax Probability (MSP), as the simplest baseline for OSR, applied on Vision Transformers (ViTs) trained with carefully selected augmentations can surprisingly outperform many recent methods. As a result, the community needs to research more on the SSL tasks particularly designed to fulfill the goal of outlier detection. ## 9.2 Data Augmentation One source of uncertainty in classifying known or normal training samples could be the lack of generalization performance. For instance, if one rotates a bird image, its content is not harmed and must be distinguished as the bird again. Some of the mentioned works try to embed this ability into models by designing different SSL objective functions. However, there is also another way to do that, using data augmentations. Data augmentation is a common technique to enrich the training dataset. Some approaches (32; 194; 188; 61; 143) improve generalization performance using different data augmentation techniques. From another point of view, (127) attempts to generate unseen abnormal samples and use them to convert a one-class learning problem into a simple two-class classification task. Similarly, however in the OSR setting, (82) and (35) follow the same idea. All these studies can also be seen as working on a training dataset to make it richer for the further detection task. From what has been said, it is obvious that working on data instead of models could achieve very effective results and must be investigated more in the sense of different trade-offs in the future. Recently, (106) has shown the effectiveness of generating fake samples by utilizing SDE-based models (161) instead of GANs. The key innovation in this study is a new strategy for creating synthetic training outliers using SDE to train OE-based ND algorithms. By optimizing previously trained SOTA models for the task of ND, the paper shows the effectiveness of the obtained samples. Following the conventional OE pipeline, it minimizes the binary cross entropy loss between the normal training data and artificial outliers. ## 9.3 Small Sample Size And Few-Shot Learning Learning with a small sample size is always challenging, but desirable. One way of approaching the problem could be to exploit meta-learning algorithms (41; 114), and to learn generalizable features that can be easily adapted to AD, ND, OSR, or OOD detection using a few training samples (100). One challenge in meta-learning is handling distribution shift between training and adaptation phase, which might result in producing one-class meta-learning algorithms such as (43). Also, other approaches explored generating a synthetic OOD dataset to improve few-shot classification of in-class samples (74). Although, the combination of meta-learning and AD, ND, OOD detection, and OSR has gained significant attention recently, some important aspects—such as generalizing to detect UUCs using only a few KUC and the convergence of meta-learning algorithms in one-class setting—remain underexplored. Recently, some efforts have been made to manipulate the knowledge of pre-trained vision-language models to tackle the problem of zero-shot and few-shot AD and OOD. For instance, (99) uses CLIP (128) and textual queries to define the normal class without utilizing any visual inputs. Similarly, yet in the OOD evaluation setting, (42) provides an extensive number of experiments to show the capabilities of vision transformers at detecting near-out-of-distribution samples. They also test zero-shot and multi-modal settings using vison-language based models. ## 10 Future Challenges Here we provide plausible future directions, which might be of interest to both practitioners and academic researchers. ## 10.1 Evaluating Baselines And The Evaluation Protocols Of Ood Detection The evaluation protocols for OOD detection has room for improvement. For instance, (2) trains a mixture of three Gaussians on the CIFAR-10 dataset (as ID), and evaluated against OOD datasets including TinyImagenet (crop), TinyImagenet (resize), LSUN, LSUN(resize) and iSUN. The model is trained channel-wise at a pixellevel. Tab. 2 shows the detection results on different datasets. The results are comparable with SOTA despite the simplicity. In particular, LSUN performs poorly since the majority of them have uniform color and texture, with little variation and structure. Similar to what has been observed in likelihood-based methods, LSUN "sits inside" CIFAR-10 with a similar mean but lower variance, and ends up being more likely under the wider distribution. We also provide a better insight into the performance of OOD detection baselines, evaluated on both near and far out-of-distribution datasets. For a model that trained on CIFAR-10, we have considered the CIFAR-100 as the near OOD dataset. The results are presented in Tables 3, 4, and 6. As it is shown, none of the methods are good at detecting both near and far OOD samples, except OE approaches that use an extra auxiliary dataset. Using Mahalanobis distance can improve the performance of most of the methods at detecting far OOD samples, while degrading the performance of near OOD detection. Moreover, it can also have poor performance at detecting even some of far OOD samples due to inaccurate Gaussian density estimation. Furthermore, its performance varies significantly when OOD dataset is resized or cropped, showing its dependency on low-level statistics. For instance, notice the SVHN column of Table 6. This is in correspondence with what has been shown recently by (133) on the deficiencies of Mahalanobis distance as well. One solution to resolve the issue might be applying input pre-processing techniques such as ODIN to alleviate the effect of first and second-order statistics in assigning OOD scores; however, it increases the execution speed by the sum of an extra forward and backward pass during testing. Additionally, techniques such as ensembling or MC-Dropout (44) might be slightly better than others on some OOD datasets; yet, they need multiple forward passes, increasing the execution time significantly. For example, the reported MC-Dropout is 40 times slower than a simple MSP. In summary, future works are encouraged to evaluate OOD detection on both near and far OOD datasets. | OOD Dataset | Average Precision | |----------------------|---------------------| | TinyImageNet(crop) | 96.8 | | TinyImageNet(resize) | 99.0 | | LSUN | 58.0 | | LSUN(resize) | 99.7 | | iSUN | 99.2 | Table 2: The performance of a simple method using only low-level features on different datasets (2). ## 10.2 Ad Needs To Be Explored More As mentioned earlier, AD and ND are not completely the same, both historically and fundamentally. An essential and practical category of problems in the real-world application are those that can not be cleaned easily, and consequently, contain different kinds of noises such as, label noise or **data noise**. This is the case in complex and hazardous systems such as modern nuclear power plants, military aircraft carriers, air traffic control, and other high-risk systems (64). Recently proposed methods in ND need to be evaluated in AD settings using the proposed synthetic datasets, and new solutions need to be proposed. As the openness score is usually high for AD detectors, having a high recall while providing a low false alarm rate is necessary for their practicality (29). Furthermore, almost all the AD or ND methods are evaluated in the one-vs-all setting. This results in having a normal class with a few distribution modes, which is not a good approximation of real-world scenarios. Therefore, evaluating AD or ND methods in multi-class settings similar to OSR domain while having no access to labels could give a more clear perspective on the practicality of SOTA methods. Table 7 reports the performance of the most notable works using the standard evaluation protocols explained before. All the results are reported from the main papers. ## 10.3 Osr Methods For Pixel-Level Datasets Almost all the methods existing in OSR are evaluated on semantic datasets. As class boundaries in such datasets are usually far from each other, discriminative or generative methods can model their differences effectively. However, in many applications such as Chest X-ray datasets, variations are subtle. Existing methods can result in poor performance on such tasks. For instance, a model may be trained on 14 known chest diseases. A new disease, for example, COVID-19, may emerge as unknowns. In this case, the proposed model must detect it as a new illness instead of classifying it into pre-existing disease categories. Also, in many clinical applications where medical datasets are gathered, usually, disease images are more accessible than healthy ones; thus, OSR problems must be learned on sickness as normal images and detect healthy ones as abnormal inputs. Table 5 shows the performance of a simple MSP baseline on MVTecAD dataset when some frequent faults are considered as normal classes. In such scenarios, the goal is to detect and classify well-known faults while distinguishing rare ones as outliers that need to be treated differently. Although this is a common and practical industrial setting, the baseline does not perform better than random, casting doubt on their generality for safety-critical applications. Recently, (26) has shown the effectiveness of using a prior Gaussian distribution on the penultimate layer of classifier networks, similar to what has been done in several before-mentioned works, in tasks in which the distribution of seen classes are very similar to each other, for instance, Flowers or Birds datasets that were introduced in the previous sections. However, more research should be done in this setting since it is more practical and quite harder than the traditional ones. Table 8 reports the performance of the most notable works using the standard evaluation protocols explained before. All the results are reported from the main papers. ## 10.4 Adversarial Robustness Carefully designed imperceptible perturbations fooling deep learning-based models to make incorrect predictions are called adversarial attacks (186). Up to this time, it has been shown that classifiers are susceptible to adversarial attacks, such that their performance degrades significantly at test time. As in OOD detection, OSR, AD and ND, being robust against adversarial attacks is crucial. Recent works in OSR (154; 135), ND (143; 142), and OOD detection (103; 24; 149; 23) have investigated the effects of adversarial attacks on models; however, more is needed. For instance, as anomalies in AD or UUCs in OSR are inaccessible at the training time, therefore, achieving a robust model on attacked anomalies or UUCs would not be trivial. The relation of different defense approaches against adversarial attacks with novelty detection can also reveal some important insights about the internal mechanisms of the proposed models. For instance, membership attack (156) attempts to infer whether an input sample has been used during the training process or not, which can be seen as designing novelty detectors without having any generalization to UKC samples. Also, (37) investigates the relation of detecting poisoning attacks and novelty detectors. Poisoning examples that are intentionally added by attackers to achieve backdoor attacks could be treated as one type of "outliers" in the training dataset. It is claimed that differential privacy not only improves outlier detection and novelty detection, but also backdoor attack detection of ND models. From an entirely different point of view, as it is mentioned in (72), adversarial robust training can be employed to boost learned feature space in a semantic way. This path has been followed in ARAE (143) and Puzzle-AE (142) to improve the performance of AEs in detecting unseen test time samples. Similar intention is followed in the one-class learning method (50) that shows robustness is beneficial for detecting novel samples. This path also needs to be explored more, for instance despite standard adversarial attacks in classification tasks (101), attacks do not need to be imperceptible anymore in AD or ND and sometimes perceptible ones improve detection performance more. ## 10.5 Fairness And Biases Of Models Research on fairness has witnessed a significant growth recently (190; 18; 39):. It has been shown that models become biased towards some sensitive variables during their training process. For instance, (174) show that for attribute classification task in the CelebA (97) dataset, attribute presence is correlated with the gender of people in the image, which is obviously not desirable. Attributes such as gender in the mentioned example are called protected variables. In OOD detection literature, a recent work (105) systematically investigates how spurious correlation in the training set impacts OOD detection. The results suggest that the OOD detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set. For example, a model that exploits the spurious correlation between the water background and label waterbird for prediction. Consequently, a model that relies on spurious features can produce a high-confidence prediction for an OOD input with the same background (i.e., water) but a different semantic label (e.g., boat). Fairness and AD or ND seem to have fundamental contrast with each other. In fairness, unbiased models are required in which equality constraints between minorities and majorities hold while the goal of AD models Table 3: OOD example detection for the maximum softmax probability (MSP) baseline detector, maximum logit value, the MSP detector after fine-tuning with Outlier Exposure (OE), the maximum logit value after fine-tuning with Outlier Exposure (OE), ensemble of 3 models, Mahalanobis distance, and Monte Carlo dropout. Inlier distribution is considered as CIFAR-10. All results are based on our evaluations and are average percentages of 10 runs. Missing values (-) indicate that we are proposing this setting, and it has no specific reference. | Method | References | Criterion | Gaussian | Rademacher | Blob | TinyImageNet(crop) | TinyImageNet(resize) | LSUN(crop) | LSUN(resize) | iSUN | SVHN | CIFAR-100 | |-------------|--------------|-------------|------------|--------------|--------|----------------------|------------------------|--------------|----------------|--------|--------|-------------| | MSP | FPR95 | 14.53 | 94.78 | 70.50 | 17.06 | 40.10 | 12.65 | 29.23 | 36.22 | 28.37 | 43.27 | | | (56) | AUROC | 94.78 | 79.85 | 94.63 | 94.64 | 88.30 | 96.45 | 91.40 | 90.00 | 91.94 | 87.77 | | | AUPR | 70.50 | 32.21 | 74.23 | 75.09 | 58.15 | 83.16 | 65.36 | 62.46 | 67.10 | 55.68 | | | | MLV | FPR95 | 52.60 | 73.27 | 11.67 | 9.59 | 47.67 | 4.93 | 27.28 | 36.42 | 43.54 | 56.52 | | | - | AUROC | 75.48 | 70.08 | 96.85 | 97.84 | 89.16 | 98.93 | 94.05 | 93.38 | 91.11 | 87.13 | | | AUPR | 27.07 | 25.47 | 83.56 | 90.31 | 65.65 | 95.35 | 78.18 | 73.99 | 72.08 | 61.47 | | | | MSP-OE | FPR95 | 0.71 | 0.50 | 0.58 | 6.61 | 13.00 | 1.32 | 5.16 | 5.64 | 4.77 | 28.36 | | | (58) | AUROC | 99.60 | 99.78 | 99.84 | 98.77 | 97.27 | 99.70 | 98.95 | 98.87 | 98.42 | 93.29 | | | AUPR | 94.25 | 97.36 | 98.94 | 95.06 | 88.08 | 98.56 | 94.56 | 94.20 | 89.33 | 76.19 | | | | MLV-OE | FPR95 | 0.69 | 0.43 | 0.49 | 4.98 | 11.17 | 1.11 | 4.10 | 4.52 | 4.08 | 30.38 | | | - | AUROC | 99.62 | 99.79 | 99.86 | 98.96 | 97.58 | 99.74 | 99.11 | 99.02 | 98.61 | 93.10 | | | AUPR | 94.30 | 97.46 | 99.07 | 95.72 | 89.10 | 98.71 | 95.15 | 94.68 | 90.11 | 76.36 | | | | Ensemble | FPR95 | 6.84 | 16.71 | 16.71 | 15.99 | 100 | 12.34 | 25.04 | 100.00 | 16.71 | 100.00 | | | - | AUROC | 97.37 | 86.94 | 91.20 | 93.18 | 85.69 | 95.23 | 90.21 | 88.00 | 92.05 | 83.90 | | | AUPR | 82.32 | 41.71 | 64.52 | 71.49 | 56.32 | 78.07 | 64.99 | 61.03 | 67.29 | 53.00 | | | | Mahalanobis | FPR95 | 1.35 | 2.01 | 7.38 | 35.82 | 48.38 | 28.61 | 27.98 | 39.02 | 24.79 | 48.40 | | | (150) | AUROC | 99.57 | 99.60 | 98.21 | 87.78 | 87.75 | 87.10 | 92.25 | 90.40 | 90.86 | 86.71 | | | AUPR | 96.49 | 97.95 | 90.63 | 46.79 | 55.33 | 41.59 | 65.14 | 62.17 | 53.36 | 54.06 | | | | MC-Dropout | FPR95 | 15.31 | 33.58 | 16.54 | 20.75 | 38.77 | 16.81 | 28.44 | 34.62 | 28.73 | 37.48 | | | (44) | AUROC | 93.89 | 83.41 | 94.73 | 93.55 | 88.52 | 95.09 | 91.36 | 89.73 | 91.07 | 88.43 | | | AUPR | 63.52 | 35.40 | 74.91 | 71.65 | 58.19 | 77.26 | 65.34 | 61.76 | 62.41 | 56.84 | | | | ODIN | FPR95 | 0.00 | 0.00 | 99.4 | 04.30 | 07.50 | 04.80 | 03.80 | 06.10 | 51.00 | 51.40 | | | (92) | AUROC | 100.00 | 99.90 | 42.50 | 99.10 | 98.50 | 99.00 | 99.20 | 98.80 | 89.90 | 88.3 | | | AUPR | 63.52 | 35.40 | 74.91 | 71.65 | 58.19 | 77.26 | 65.34 | 61.76 | 62.41 | 56.84 | | | is to assign higher anomaly scores to rarely happening events. To address this issue, (155; 193) proposed fairness-aware ADs while using the label of protected variables as an extra supervision in the training process. From a different point of view (181), introduces a significantly important bias in semi-supervised anomaly detection methods such as DSAD (137). Suppose DSAD has been implemented in law enforcement agencies to spot suspicious individuals using surveillance cameras. As a few number of training samples has been used as abnormal samples during the process, the trained model might have been biased towards detecting specific types of anomalies more than others. For instance, if the auxiliary abnormal training dataset includes more men than women, boundaries of detecting abnormal events as men might be looser than women at the test time. This could also happen in the classification settings such as OOD detection or OSR. (153) reports the existence of unfair biases toward some unrelated protected variables in detecting chest diseases for a classifier trained on Chest X-Ray datasets. From what has been said, it seems fairness and AD, ND, OSR, and OOD detection are strongly correlated because of some critical applications in which they are used and further research on their correlation is necessary for having practical models. ## 10.6 Multi-Modal Datasets In many situation, training dataset consists of multi-modal training samples, for instance in Chest-X-Ray datasets, labels of images are found automatically by applying NLP methods on the prescribes of radiologists. In these situations, joint training of different modes could help models to learn better semantic features. However, in this way, models need to be robust in different modes too. For example, in visual Question Answering tasks, we expect our model not to produce any answer for out-of-distribution input texts or images. Note that the correlation between different modes must be attended here, and independent training of AD, ND, OOD detection, or OSR models for different modes results in sticking in local minimas. To cope with this issue, (87) has explored the performance of VQA models in detecting unseen test time samples. However, there are many more challenges need to be investigated in this way. For instance, the robustness and fairness of VQA OOD models is significantly more challenging compared to single mode datasets, besides due to heavy training process of these models, inventing few-shot methods might be demanding in the fields. Table 4: OOD example detection for the maximum softmax probability (MSP) baseline detector, maximum logit value, the MSP detector after fine-tuning with Outlier Exposure (OE), the maximum logit value after fine-tuning with Outlier Exposure (OE), ensemble of 3 models, Mahalanobis distance, and Monte Carlo dropout. Inlier distribution is considered as CIFAR-100. All results are based on our evaluations and are average percentages of 10 runs. Missing values (-) indicate that we are proposing this setting, and it has no specific reference. | Method | References | Criterion | Gaussian | Rademacher | Blob | TinyImageNet(crop) | TinyImageNet(resize) | LSUN(crop) | LSUN(resize) | iSUN | SVHN | CIFAR-10 | |-------------|--------------|-------------|------------|--------------|--------|----------------------|------------------------|--------------|----------------|--------|--------|------------| | MSP | FPR95 | 54.32 | 39.08 | 57.11 | 43.34 | 65.88 | 47.32 | 62.98 | 63.34 | 69.12 | 65.14 | | | (56) | AUROC | 64.66 | 79.27 | 75.61 | 86.34 | 74.56 | 85.56 | 75.59 | 75.73 | 71.43 | 75.12 | | | AUPR | 19.69 | 30.05 | 29.99 | 56.98 | 33.71 | 56.49 | 34.11 | 33.88 | 30.44 | 33.92 | | | | ODIN | FPR95 | 01.20 | 13.90 | 13.70 | 09.20 | 37.60 | 07.20 | 32.30 | 36.40 | 37.00 | 76.4 | | | (92) | AUROC | 99.50 | 92.60 | 95.90 | 97.90 | 90.80 | 98.30 | 91.90 | 90.50 | 89.00 | 73.20 | | | AUPR | 98.70 | 83.70 | 94.50 | 97.70 | 89.90 | 98.20 | 90.90 | 87.80 | 86.30 | 70.60 | | | | MLV | FPR95 | 71.89 | 72.35 | 81.09 | 22.51 | 66.17 | 22.20 | 61.30 | 60.86 | 67.01 | 64.41 | | | AUROC | 44.24 | 46.22 | 53.62 | 94.72 | 77.72 | 95.09 | 79.54 | 79.19 | 74.03 | 77.55 | | | | AUPR | 13.82 | 14.20 | 15.98 | 79.10 | 38.00 | 81.11 | 39.14 | 37.27 | 31.99 | 37.30 | | | | MSP-OE | FPR95 | 12.41 | 16.89 | 12.04 | 22.02 | 69.42 | 13.27 | 60.89 | 62.42 | 43.10 | 62.57 | | | (58) | AUROC | 95.69 | 93.01 | 97.11 | 95.69 | 76.04 | 97.55 | 80.94 | 79.96 | 86.86 | 75.41 | | | AUPR | 71.13 | 56.81 | 85.91 | 85.34 | 39.57 | 90.99 | 48.52 | 45.86 | 53.27 | 32.28 | | | | MLV-OE | FPR95 | 10.71 | 16.66 | 8.09 | 17.34 | 73.95 | 08.50 | 56.02 | 60.73 | 32.59 | 64.91 | | | - | AUROC | 96.12 | 91.86 | 97.94 | 96.38 | 75.84 | 98.31 | 83.33 | 81.89 | 88.91 | 73.74 | | | AUPR | 72.81 | 52.03 | 88.60 | 86.55 | 39.72 | 92.90 | 51.06 | 48.21 | 54.72 | 30.48 | | | | Ensemble | FPR95 | 22.72 | 43.51 | 48.07 | 44.68 | 100.00 | 47.26 | 100.00 | 91.44 | 57.18 | 57.18 | | | - | AUROC | 89.15 | 68.64 | 79.24 | 82.90 | 70.47 | 82.39 | 70.66 | 71.08 | 73.61 | 75.13 | | | AUPR | 46.45 | 21.61 | 35.80 | 44.31 | 28.98 | 44.12 | 28.75 | 28.66 | 31.76 | 32.62 | | | | Mahalanobis | FPR95 | 0.82 | 0.10 | 2.70 | 73.79 | 43.40 | 76.42 | 37.94 | 42.07 | 32.05 | 70.80 | | | (150) | AUROC | 99.78 | 99.98 | 99.48 | 57.77 | 87.62 | 54.35 | 90.12 | 88.91 | 92.76 | 70.99 | | | AUPR | 98.63 | 99.88 | 97.64 | 17.27 | 60.18 | 16.11 | 65.97 | 62.82 | 75.87 | 27.12 | | | | MC-Dropout | FPR95 | 54.45 | 41.41 | 46.64 | 47.32 | 68.05 | 55.38 | 63.53 | 65.03 | 75.98 | 63.33 | | | (44) | AUROC | 62.35 | 76.51 | 80.59 | 85.23 | 73.66 | 82.24 | 74.93 | 74.70 | 68.89 | 76.87 | | | AUPR | 18.74 | 27.14 | 35.08 | 54.12 | 32.57 | 49.08 | 33.04 | 32.26 | 28.63 | 36.31 | | | ## 10.7 Explainablity Challenge Explainable AI (XAI) has found a seriously important role in the recently proposed deep network architectures, especially when they are used in safety-critical applications (5). In AD, OSR, ND and OOD detection due to some of their critical applications, we should be able to explain the reason of decisions our models make (63; 57). For instance, if a person is distinguished as suspicious in the surveillance cameras, there must be good reasons for that by which the model has made its decision. The challenges of explainability can be defined into two different approaches. First, we should explain why a sample is normal, known or in-distribution. Secondly, we should explain why a sample is abnormal, unknown or out-of-distribution. There are a lot of different techniques to explain the decisions of models in the literature such as Grad-cam (151), Smoothfgrad (157), which have been used in Multi-KD (144), CutPaste (90) and (168). However, they only have been used to explain normal, seen or in-distribution samples and their results are not as accurate as enough for unseen or abnormal inputs. To cope with the issue, (96) proposed a VAE based method which can provide the reasons of abnormality of input samples while performing accurately on explaining normal samples as well. However, it does not work well on complex training datasets such as CIFAR-10, which shows the need of conducting more research toward mitigating the problem. Another important challenge of explainability can be seen in one-class classification or ND approaches, in which there is only access to one-label at the training time. Therefore, Gradcam or Smoothgrad which use the availability of fine-grain labels can not be used anymore. To address this issue, (98) proposed a fully convolutional architecture with a heatmap upsampling algorithm that is called receptive field upsampling, which starts from a sample latent vector and reverse the effect of applied convolution operators to find important regions in the given input sample. However, explainable OCC models are still largely unexplored and more investigations in this direction are still necessary. | Class Name | Normal Sets | AUC | FPR | AUPR | |--------------|-----------------------------------------------|-------|-------|--------| | Cable | good,combined, missing cable, poke insulation | 51.70 | 1 | 24.20 | | Capsule | good, poke, faulty imprint, crack | 56.40 | 1 | 15.80 | | Wood | good, color, scratch, liquid | 53.30 | 1 | 86.40 | | Carpet | good, metal, contamination, hole | 50.20 | 1 | 14.70 | Table 5: OOD example detection for the maximum softmax probability (MSP) baseline detector. Inlier distribution is a set of faults in MVTecAD dataset, and outliers are rare faults. All results are average percentages of 10 runs. ## 10.8 Reliability Of Odd Detection Methods While OOD detection research has advanced considerably, SOTA performance has become saturated under the existing testing frameworks. Naturally, this raises the question of whether SOTA OOD detection methods are similarly effective in practice. Novel OOD detection research directions have recently emerged that expose weaknesses in existing SOTA methods that prevent their use in the real world. (78) introduces a novel evaluation framework for OOD detection. They introduced new test datasets with semantically preserved and realistic distribution shifts, as well as a novel metric to evaluate method generalization of OOD detection methods to real-world scenarios. Due to semantic consistency, SOTA OOD detection methods are expected to have high performance on realistic distribution shifts. Under realistic distribution shifts, SOTA methods are shown to have significant performance drops, demonstrating a serious concern when it comes to OOD detection in the real world. Similarly, (59) aims to make multi-class OOD detection in real-world settings more feasible by introducing new benchmark datasets consisting of high-resolution images over thousands of classes. As a result of their research, the authors demonstrate the need for new benchmarks to redirect research toward real-world applications. Additionally, (7) examines the adversarial safety of OOD detection methods and proposes a method that improves robustness. Defenses against adversarial attacks are also paramount for safely deploying OOD detection into the real world. The saturation of OOD detection performance does not signal the end of this research. Continuing to advance this field requires new standardized evaluation frameworks that more accurately quantify a method's practical effectiveness. ## 10.9 Multi-Label Ood Detection And Large-Scale Datasets While OOD detection for multi-class classification has been extensively studied, the problem for multi-label networks remain underexplored (171). This means each input has more than one true label by which it must be recognized. This is more challenging since multi-label classification tasks have more complex class boundaries, and unseen behaviors could happen in a subset of input sample labels. Another challenge of multi-label datasets can be explored in anomalous segmentation tasks. Different from classification in which one can report an entire image as an abnormal input, the specific anomalous part must be specified here. Current methods have been primarily evaluated on small data sets such as CIFAR. It's been shown that approaches developed on the CIFAR benchmark might not translate effectively into ImageNet benchmark with a large semantic space, highlighting the need to evaluate OOD detection in a large-scale real-world setting. Therefore, future research is encouraged to evaluate on ImageNet-based OOD detection benchmark (69), and test the limits of the method developed. ## 10.10 Open-World Recognition Although detecting novel, unknown or out-of-distribution samples is enough in controlled lab environments, novel categories must be continuously detected and then added to the recognition function in real-world operational systems. This becomes even more challenging when considering the fact that such systems require minimal downtime, even to learn (10). While there has been much research on incremental (131) and life-long learning (120) for addressing the problem of adding new knowledge to a pre-existing one, open-world Table 6: OOD example detection for the maximum softmax probability (MSP) baseline detector, maximum logit value (MLV), the MSP detector after fine-tuning with Outlier Exposure (OE), the maximum logit value after fine-tuning with Outlier Exposure (OE), ensemble of 3 models, Mahalanobis distance, and Monte Carlo dropout. Inlier distribution is considered as TinyImageNet. All results are based on our evaluations and are average percentages of 10 runs. Missing values (-) indicate that we are proposing this setting, and it has no specific reference. | Method | References | Criterion | Gaussian | Rademacher | Blob | LSUN(crop) | LSUN(resize) | iSUN | SVHN | |-------------|--------------|-------------|------------|--------------|--------|--------------|----------------|--------|--------| | FPR95 | 72.34 | 47.60 | 90.31 | 29.33 | 44.37 | 45.68 | 44.75 | | | | MSP | (56) | AUROC | 33.36 | 70.52 | 22.79 | 93.66 | 86.16 | 85.94 | 89.05 | | AUPR | 12.27 | 22.76 | 10.55 | 77.91 | 50.79 | 51.16 | 67.21 | | | | FPR95 | 43.70 | 59.40 | 74.60 | 14.80 | 38.90 | 38.90 | 23.70 | | | | ODIN | (92) | AUROC | 70.00 | 50.80 | 46.20 | 96.80 | 87.10 | 87.60 | 93.90 | | AUPR | 56.60 | 45.10 | 43.10 | 96.60 | 83.10 | 87.60 | 92.80 | | | | FPR95 | 67.38 | 21.56 | 97.58 | 10.96 | 28.53 | 27.80 | 27.51 | | | | MLV | - | AUROC | 45.34 | 90.24 | 15.96 | 97.75 | 91.71 | 91.98 | 94.09 | | AUPR | 14.15 | 49.31 | 9.77 | 91.22 | 64.00 | 66.12 | 79.28 | | | | FPR95 | 45.32 | 49.53 | 0.05 | 0.53 | 0.12 | 0.12 | 0.39 | | | | MSP-OE | (58) | AUROC | 76.30 | 65.11 | 99.99 | 99.76 | 99.97 | 99.97 | 99.83 | | AUPR | 28.32 | 19.97 | 99.93 | 98.37 | 99.82 | 99.79 | 98.16 | | | | FPR95 | 11.21 | 46.46 | 0.05 | 0.52 | 0.11 | 0.12 | 0.38 | | | | - | AUROC | 95.45 | 68.30 | 99.99 | 99.81 | 99.97 | 99.97 | 99.83 | | | MLV-OE | AUPR | 66.66 | 21.45 | 99.93 | 98.48 | 99.81 | 99.79 | 98.16 | | | FPR95 | 71.09 | 45.96 | 78.16 | 46.55 | 57.62 | 58.94 | 54.60 | | | | Ensemble | - | AUROC | 35.75 | 74.09 | 51.86 | 83.72 | 76.54 | 75.70 | 77.09 | | AUPR | 12.61 | 25.61 | 15.70 | 50.66 | 34.67 | 33.48 | 34.89 | | | | FPR95 | 66.87 | 48.15 | 22.23 | 98.46 | 72.04 | 79.93 | 96.83 | | | | Mahalanobis | (150) | AUROC | 48.74 | 70.28 | 92.41 | 13.33 | 71.11 | 66.65 | 27.59 | | AUPR | 14.78 | 22.60 | 62.45 | 9.48 | 28.17 | 25.19 | 10.81 | | | | FPR95 | 76.09 | 56.14 | 91.36 | 30.44 | 43.25 | 47.22 | 47.67 | | | | MC-Dropout | (44) | AUROC | 30.38 | 58.92 | 21.31 | 93.13 | 85.68 | 84.45 | 87.44 | | AUPR | 11.82 | 17.54 | 10.39 | 76.53 | 48.49 | 46.56 | 62.24 | | | recognition needs a few more steps. This means novel classes must be found continuously, and the system must be updated to include these new classes in its multi-class open-set recognition algorithm. The mentioned process poses many different challenges, from the scalability of current open-set recognition algorithms to designing new learning algorithms to avoid problems such as catastrophic forgetting (131) of OSR classifiers. Furthermore, all the previously mentioned future works can be reformulated in open-world recognition problems again, which means by considering a few existing works in these subjects, it needs to be explored more. ## 10.11 Vision Transformers In Ood Detection And Osr Vision Transformers (ViTs) (36) have recently been proposed to replace CNNs and have shown a great performance in different applications such as object detection (19), medical image segmentation (166), and visual tracking (22). Similarly, some methods have recently reported the benefits of ViTs in OOD detection (81; 42) and have shown their capabilities at detecting near OOD samples. For instance, (42) has reported the significant superiority of ViTs compared to previous works when they are trained on CIFAR-10 and tested on CIFAR-100 as inlier and outlier datasets, respectively. However, as ViTs usually get pre-trained on extra-large datasets such as ImageNet-22K that has a large intersection with training and testing datasets, the integrity of the train-test mismatch does not hold anymore, and the problem would be converted to "how much does it remember from pretraining". This means ViTs should be evaluated on datasets that have no intersection with the pre-trained knowledge. To address this issue, we have evaluated ViT-B16(36) on SVHN and MNIST when six randomly selected classes are considered as normal and the remaining ones as outliers or unseens. MSP is considered to detect unknown samples. As Table 9 shows, ViT-B16 that is pre-trained on ImageNet-22K is not comparatively as good as other baselines that are trained from scratch. As all the experiments are evaluated in a near | | | | | (c) | | | | | | | | | | | | | | | |-----------------|----------|------------|--------|-------|---------|--------|-------|----------|---------|-----------|-------|-------|-------|------------|------------|-------|--------|-------| | Dataset | Method | References | Bottle | Cable | Capsule | Carpet | Grid | Hazelnut | Leather | Metal-nut | Pill | Screw | Tile | Toothbrush | Transistor | Wood | Zipper | Mean | | LSA | (1) | 86.00 | 80.00 | 71.00 | 67.00 | 70.00 | 85.00 | 75.00 | 74.00 | 70.00 | 54.00 | 61.00 | 50.00 | 89.00 | 75.00 | 88.00 | 73.00 | | | AnoGan | (147) | 69.00 | 50.00 | 58.00 | 50.00 | 52.00 | 62.00 | 68.00 | 49.00 | 51.00 | 51.00 | 53.00 | 67.00 | 57.00 | 35.00 | 59.00 | 55.00 | | | MVTecAD | DeepSVDD | (136) | 86.00 | 71.00 | 69.00 | 75.00 | 73.00 | 77.00 | 87.00 | 54.00 | 81.00 | 59.00 | 71.00 | 65.00 | 70.00 | 64.00 | 74.00 | 72.00 | | GT | (48) | 74.29 | 33.32 | 67.79 | 82.37 | 82.51 | 65.16 | 48.24 | 45.90 | 53.86 | 61.91 | 84.70 | 79.79 | 94.00 | 44.58 | 87.44 | 67.06 | | | U-std | (14) | 93.10 | 81.80 | 96.80 | 87.90 | 95.20 | 96.50 | 94.50 | 94.20 | 96.10 | 94.20 | 94.60 | 93.30 | 66.60 | 91.10 | 95.10 | 91.40 | | | Multiresolution | (144) | 99.39 | 98.37 | 80.46 | 73.58 | 95.05 | 82.70 | 94.29 | 79.25 | 91.57 | 78.01 | 89.19 | 85.55 | 92.17 | 83.31 | 93.24 | 87.74 | | Table 7: AUROC results of the most notable works for anomaly/novelty detection. The performance is averaged for each dataset in the one-vs-all setting. All the results in the tables are reported from the reference papers. | | (a) | | | | | | | | | | | | | |-----------------|--------|------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Dataset | Method | References | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Mean | | OC-GAN | (122) | 99.80 | 99.90 | 94.20 | 96.30 | 97.50 | 98.00 | 99.10 | 98.10 | 93.90 | 98.10 | 97.50 | | | LSA | (1) | 99.30 | 99.90 | 95.90 | 96.60 | 95.60 | 96.40 | 99.40 | 98.00 | 95.30 | 98.10 | 97.50 | | | AnoGan | (147) | 96.60 | 99.20 | 85.00 | 88.70 | 89.40 | 88.3 | 94.70 | 93.50 | 84.90 | 92.40 | 91.30 | | | MNIST | OC-SVM | (148) | 99.50 | 99.90 | 92.60 | 93.60 | 96.70 | 95.50 | 98.70 | 96.60 | 90.30 | 96.20 | 96.00 | | DeepSVDD | (136) | 98.00 | 99.70 | 91.70 | 91.90 | 94.90 | 88.50 | 98.30 | 94.60 | 93.90 | 96.50 | 94.80 | | | U-std | (14) | 99.90 | 99.90 | 99.00 | 99.30 | 99.20 | 99.30 | 99.70 | 99.50 | 98.60 | 99.10 | 99.35 | | | Multiresolution | (144) | 99.82 | 99.82 | 97.79 | 98.75 | 98.43 | 98.16 | 99.43 | 98.38 | 98.41 | 98.10 | 98.71 | | | | | (b) | | | | | | | | | | | | |-----------------|----------|------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------| | Dataset | Method | References | Plane | Car | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck | Mean | | OC-GAN | (122) | 75.70 | 53.10 | 64.00 | 62.00 | 72.30 | 62.0 | 72.30 | 57.50 | 82.00 | 55.40 | 65.66 | | | LSA | (1) | 73.50 | 58.00 | 69.00 | 54.20 | 76.10 | 54.60 | 75.10 | 53.50 | 71.70 | 54.80 | 64.10 | | | AnoGan | (147) | 96.60 | 99.20 | 85.00 | 88.70 | 89.40 | 88.3 | 94.70 | 93.50 | 84.90 | 92.40 | 91.30 | | | OC-SVM | (148) | 63.00 | 44.00 | 64.90 | 48.70 | 73.50 | 50.00 | 72.50 | 53.30 | 64.90 | 50.80 | 58.56 | | | CIFAR-10 | DeepSVDD | (136) | 98.00 | 99.70 | 91.70 | 91.90 | 94.90 | 88.50 | 98.30 | 94.60 | 93.90 | 96.50 | 94.8 | | GT | (48) | 76.20 | 84.80 | 77.10 | 73.20 | 82.80 | 84.80 | 82.00 | 88.70 | 89.50 | 83.40 | 82.30 | | | CSI | (165) | 89.90 | 99.10 | 93.10 | 86.40 | 93.90 | 93.20 | 95.10 | 98.70 | 97.90 | 95.50 | 94.30 | | | U-std | (14) | 78.90 | 84.90 | 73.40 | 74.80 | 85.10 | 79.30 | 89.20 | 83.00 | 86.20 | 84.80 | 81.96 | | | Multiresolution | (144) | 90.53 | 90.35 | 79.66 | 77.02 | 86.71 | 91.40 | 88.98 | 86.78 | 91.45 | 88.91 | 87.18 | | Table 8: AUROC results of the most notable works for OSR. The performance is averaged across each dataset using the explained evaluation procedures for 10 random trials.(MLS stands for (167)). | Method | References | MNIST | SVHN | CIFAR-10 | CIFAR + 10 | CIFAR + 50 | TinyImageNet | |-----------|--------------|---------|--------|------------|--------------|--------------|----------------| | OpenMax | (11) | 98.10 | 89.40 | 81.10 | 81.70 | 79.60 | 81.10 | | G-OpenMax | (46) | 98.40 | 89.60 | 67.60 | 82.70 | 81.90 | 58.00 | | OSRCI | (33) | 98.80 | 91.00 | 69.90 | 83.80 | 82.70 | 58.60 | | C2AE | (117) | 98.90 | 92.20 | 89.50 | 95.50 | 93.70 | 74.80 | | CROSR | (183) | 99.20 | 89.90 | 88.30 | - | - | 58.90 | | GDFR | (123) | - | 93.50 | 80.70 | 92.80 | 92.60 | 60.80 | | RPL | (22) | 99.60 | 96.80 | 90.10 | 97.60 | 96.80 | 80.90 | | OpenGan | (82) | 99.90 | 98.80 | 97.30 | - | - | 90.70 | | MLS | (167) | 99.30 | 97.10 | 93.60 | 97.90 | 96.50 | 83.00 | | Method | SVHN | MNIST | |----------|--------|---------| | Softmax | 88.60 | 97.80 | | Openmax | 89.40 | 98.10 | | CROSR | 89.90 | 99.10 | | ViT-B16 | 82.18 | 94.89 | Table 9: OSR AUROC results of baselines such as the maximum softmax probability (MSP), Openmax, and CROSR compared to ViT-B16. Known distribution is considered a random selection of 6 classes of the respective datasets and unknown ones are the rest. All results are percentages and the average of 5 runs. OOD detection setting, they support the before-mentioned deficiency of ViTs. From what has been said, a future line of research could be evaluating ViTs in a more controlled situation such that their real benefits would be more precise. Indeed, the recent Species dataset collects examples that do not fall under any of the ImageNet-22K classes and is a first step to rectify this problem (62). ## 10.12 Identifiablity Problems Providing a method whereby unfamiliar samples are detected is not always trivial. (45) investigates the theoretical foundations of the problem, demonstrating that determining the accuracy is just as difficult as identifying the ideal predictor, and so the success of any method is dependent on the implicit assumptions about the nature of the shift, which is characterized by mismatches between the source (training) and target (test) distributions. It is demonstrated that no approach of measuring accuracy will work in general without assumptions on the source classifier or the nature of the shift. Average Thresholded Confidence (ATC) is a simple approach for predicting model performance using softmax probability. It learns a threshold for model confidence on the validation source data and predicts the target domain accuracy as the proportion of unlabeled target points with a score greater than the threshold. This work advances the positive answer to the question of a feasible technique for selecting a threshold that enables prediction accuracy with the thresholded model confidence. ## 11 Conclusion In many applications, it is not feasible to model all kinds of classes occurring during testing; thus, scenarios existing in domains such as OOD detection, OSR, ND (one-class learning), and AD become ubiquitous. Up to this time, these domains, despite having the same intention and a large intersection, have been followed independently by researchers. To address the need, this paper gives a comprehensive review on existing techniques, datasets, evaluation criteria, and future challenges. More importantly, limitations of the mentioned approaches are discussed, and certain promising research directions are pointed out. We hope this helps the research community build a broader and cross-domain perspective. ## Acknowledgements We would like to thank Yuki M. Asano for the extremely useful discussions and for reviewing the paper prior to submission. ## References [1] Davide Abati, Angelo Porrello, Simone Calderara, and Rita Cucchiara. Latent space autoregression for novelty detection. In *CVPR*, pp. 481–490, 2019. [2] Faruk Ahmed and Aaron Courville. Detecting semantic anomalies. In *AAAI*, volume 34, pp. 3154–3162, 2020. [3] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. *ICLR*, 2017. [4] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, pp. 214–223, 2017. [5] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. *Information Fusion*, 58:82–115, 2020. [6] Mohammad Azizmalayeri and Mohammad Hossein Rohban. Ood augmentation may be at odds with open-set recognition. *NeurIPS*, 2022. [7] Mohammad Azizmalayeri, Arshia Soltani Moakhar, Arman Zarei, Reihaneh Zohrabi, Mohammad Taghi Manzuri, and Mohammad Hossein Rohban. Your out-of-distribution detection method is not robust! NeurIPS, 2022. [8] Christian Bailer, Tewodros Habtegebrial, Didier Stricker, et al. Fast feature extraction with cnns with pooling layers. *BMVC*, 2017. [9] Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes Van Diest, Bram Van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen AWM Van Der Laak, Meyke Hermsen, Quirine F Manson, Maschenka Balkenhol, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. *Jama*, 318(22):2199–2210, 2017. [10] Abhijit Bendale and Terrance Boult. Towards open world recognition. In *CVPR*, pp. 1893–1902, 2015. [11] Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In *CVPR*, pp. 1563–1572, 2016. [12] Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. *ICLR*, 2020. [13] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad–a comprehensive real-world dataset for unsupervised anomaly detection. In *CVPR*, pp. 9592–9600, 2019. [14] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Uninformed students: Studentteacher anomaly detection with discriminative latent embeddings. In *CVPR*, pp. 4183–4192, 2020. [15] David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. *ICLR*, 2018. [16] Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. Lof: identifying density-based local outliers. In *ACM SIGMOD international conference on Management of data*, pp. 93–104, 2000. [17] Saikiran Bulusu, Bhavya Kailkhura, Bo Li, P Varshney, and Dawn Song. Anomalous instance detection in deep learning: A survey. Technical report, Lawrence Livermore National Lab.(LLNL), Livermore, CA (United States), 2020. [18] Jenna Burrell. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1):2053951715622512, 2016. [19] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In *ECCV*, pp. 213–229. Springer, 2020. [20] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In *ECCV*, pp. 132–149, 2018. [21] Raghavendra Chalapathy and Sanjay Chawla. Deep learning for anomaly detection: A survey. arXiv preprint arXiv:1901.03407, 2019. [22] Guangyao Chen, Limeng Qiao, Yemin Shi, Peixi Peng, Jia Li, Tiejun Huang, Shiliang Pu, and Yonghong Tian. Learning open set network with discriminative reciprocal points. *ECCV*, 2020. [23] Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, and Somesh Jha. Atom: Robustifying out-of-distribution detection using outlier mining. In *Joint European Conference on Machine Learning and Knowledge* Discovery in Databases, pp. 430–445. Springer, 2021. [24] Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, and Somesh Jha. Atom: Robustifying out-of-distribution detection using outlier mining. *ECML PKDD*, 2021. [25] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, pp. 1597–1607, 2020. [26] Jiacheng Cheng and Nuno Vasconcelos. Learning deep classifiers consistent with fine-grained novelty detection. In *CVPR*, pp. 1664–1673, 2021. [27] Hyunsun Choi and Eric Jang. Generative ensembles for robust anomaly detection. 2018. [28] Thomas M. Cover and Joy A. Thomas. *Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)*. Wiley-Interscience, USA, 2006. ISBN 0471241954. [29] Maria Cvach. Monitor alarm fatigue: an integrative review. *Biomedical instrumentation & technology*, 46(4):268–277, 2012. [30] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. *ICPR Workshop*, 2020. [31] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, pp. 248–255. Ieee, 2009. [32] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. *arXiv preprint arXiv:1708.04552*, 2017. [33] Akshay Raj Dhamija, Manuel Günther, and Terrance E Boult. Reducing network agnostophobia. NeurIPS, 2018. [34] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. [35] Luke Ditria, Benjamin J Meyer, and Tom Drummond. Opengan: Open set generative adversarial networks. In *ACCV*, 2020. [36] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *ICLR*, 2020. [37] Min Du, Ruoxi Jia, and Dawn Song. Robust anomaly detection and backdoor attack detection via differential privacy. *ICLR*, 2019. [38] Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. Vos: Learning what you don't know by virtual outlier synthesis. *ICLR*, 2022. [39] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In *Innovations in theoretical computer science conference*, pp. 214–226, 2012. [40] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pp. 226–231, 1996. [41] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *ICML*, pp. 1126–1135, 2017. [42] Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. *NeurIPS*, 2021. [43] Ahmed Frikha, Denis Krompaß, Hans-Georg Köpken, and Volker Tresp. Few-shot one-class classification via meta-learning. *AAAI*, 2020. [44] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *ICML*, pp. 1050–1059, 2016. [45] Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, and Hanie Sedghi. Leveraging unlabeled data to predict out-of-distribution performance. In *NeurIPS Workshop on* Distribution Shifts: Connecting Methods and Applications, 2021. [46] ZongYuan Ge, Sergey Demyanov, Zetao Chen, and Rahil Garnavi. Generative openmax for multi-class open set classification. *BMVC*, 2017. [47] Chuanxing Geng, Sheng-jun Huang, and Songcan Chen. Recent advances in open set recognition: A survey. *T-PAMI*, 2020. [48] Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. *NeurIPS*, 2018. [49] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In *ICCV*, pp. 1705–1714, 2019. [50] Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. Drocc: Deep robust one-class classification. In *ICML*, pp. 3711–3721, 2020. [51] Patrick Grother and Kayee Hanaoka. Nist special database 19 handprinted forms and characters 2nd edition. *National Institute of Standards and Technology, Tech. Rep*, 2016. [52] Frank E Grubbs. Procedures for detecting outlying observations in samples. *Technometrics*, 11(1):1–21, 1969. [53] Xinwei He, Yang Zhou, Zhichao Zhou, Song Bai, and Xiang Bai. Triplet-center loss for multi-view 3d object retrieval. In *CVPR*, pp. 1945–1954, 2018. [54] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In *CVPR*, pp. 41–50, 2019. [55] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *ICLR*, 2019. [56] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. *ICLR*, 2016. [57] Dan Hendrycks and Mantas Mazeika. X-risk analysis for ai research. *arXiv preprint arXiv:2206.05862*, 2022. [58] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. ICLR, 2018. [59] Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling out-of-distribution detection for real-world settings. *ICML*, 2019. [60] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. *NeurIPS*, 2019. [61] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. *ICLR*, 2019. [62] Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joseph Kown, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Xiaodong Song. Scaling out-of-distribution detection for real-world settings. ICML, 2020. [63] Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ml safety. *ArXiv*, abs/2109.13916, 2021. [64] Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ml safety. *arXiv preprint arXiv:2109.13916*, 2021. [65] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In *CVPR*, pp. 15262–15271, 2021. [66] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. *ICLR*, 2016. [67] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. NeurIPS Deep Learning Workshop, 2015. [68] Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In *CVPR*, pp. 10951–10960, 2020. [69] Rui Huang and Yixuan Li. Mos: Towards scaling out-of-distribution detection for large semantic space. In *CVPR*, pp. 8710–8719, 2021. [70] Rui Huang, Andrew Geng, and Yixuan Li. On the importance of gradients for detecting distributional shifts in the wild. *NeurIPS*, 2021. [71] Weibo Huang and Peng Wei. A pcb dataset for defects detection and classification. *arXiv preprint* arXiv:1901.08204, 2019. [72] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *NeurIPS*, 2019. [73] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In *AAAI*, volume 33, pp. 590–597, 2019. [74] Taewon Jeong and Heeyoung Kim. Ood-maml: Meta-learning for few-shot out-of-distribution detection and classification. *NeurIPS*, 33, 2020. [75] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535–547, 2019. [76] Pedro R Mendes Júnior, Roberto M De Souza, Rafael de O Werneck, Bernardo V Stein, Daniel V Pazinato, Waldir R de Almeida, Otávio AB Penatti, Ricardo da S Torres, and Anderson Rocha. Nearest neighbors distance ratio open-set classifier. *Machine Learning*, 106(3):359–386, 2017. [77] Daniel S Kermany, Michael Goldbaum, Wenjia Cai, Carolina CS Valentim, Huiying Liang, Sally L Baxter, Alex McKeown, Ge Yang, Xiaokang Wu, Fangbing Yan, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. *Cell*, 172(5):1122–1131, 2018. [78] Vahid Reza Khazaie, Anthony Wong, and Mohammad Sabokrou. Are out-of-distribution detection methods reliable? *arXiv preprint arXiv:2211.10892*, 2022. [79] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. *NeurIPS*, 2020. [80] Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. NeurIPS, 2018. [81] Rajat Koner, Poulami Sinhamahapatra, Karsten Roscher, Stephan Günnemann, and Volker Tresp. Oodformer: Out-of-distribution detection transformer. *BMVC*, 2021. [82] Shu Kong and Deva Ramanan. Opengan: Open-set recognition via open data generation. *ICCV*, 2021. [83] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. [84] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In *ICML*, pp. 1558–1566. PMLR, 2016. [85] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun. com/exdb/mnist/. [86] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-based learning. *Predicting structured data*, 1(0), 2006. [87] Doyup Lee, Yeongjae Cheon, and Wook-Shin Han. Regularizing attention networks for anomaly detection in visual question answering. *AAAI*, 2020. [88] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. *ICLR*, 2018. [89] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. *NeurIPS*, 2018. [90] Chun-Liang Li, Kihyuk Sohn, Jinsung Yoon, and Tomas Pfister. Cutpaste: Self-supervised learning for anomaly detection and localization. In *CVPR*, pp. 9664–9674, 2021. [91] Yi Li and Nuno Vasconcelos. Background data resampling for outlier-aware classification. In *CVPR*, pp. 13218–13227, 2020. [92] Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. *ICLR*, 2017. [93] Bo Liu, Hao Kang, Haoxiang Li, Gang Hua, and Nuno Vasconcelos. Few-shot open-set recognition using meta-learning. In *CVPR*, pp. 8798–8807, 2020. [94] Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In *ICDM*, pp. 413–422. IEEE, 2008. [95] Weitang Liu, Xiaoyun Wang, John D Owens, and Yixuan Li. Energy-based out-of-distribution detection. NeurIPS, 2020. [96] Wenqian Liu, Runze Li, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Bir Bhanu, Richard J Radke, and Octavia Camps. Towards visually explaining variational autoencoders. In *CVPR*, pp. 8642–8651, 2020. [97] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, December 2015. [98] Philipp Liznerski, Lukas Ruff, Robert A Vandermeulen, Billy Joe Franks, Marius Kloft, and KlausRobert Müller. Explainable deep one-class classification. *ICLR*, 2020. [99] Philipp Liznerski, Lukas Ruff, Robert A Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, and Marius Kloft. Exposing outlier exposure: What can be learned from few, one, and zero outlier images. TMLR, 2022. [100] Yiwei Lu, Frank Yu, Mahesh Kumar Krishna Reddy, and Yang Wang. Few-shot scene-adaptive anomaly detection. In *ECCV*, pp. 125–141. Springer, 2020. [101] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *ICLR*, 2018. [102] Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. *NeurIPS*, 2018. [103] Alexander Meinke, Julian Bitterwolf, and Matthias Hein. Provably robust detection of out-of-distribution data (almost) for free. *arXiv preprint arXiv:2106.04260*, 2021. [104] Dimity Miller, Niko Sunderhauf, Michael Milford, and Feras Dayoub. Class anchor clustering: A loss for distance-based open set recognition. In *CVPR*, pp. 3570–3578, 2021. [105] Yifei Ming, Hang Yin, and Yixuan Li. On the impact of spurious correlation for out-of-distribution detection. *AAAI*, 2022. [106] Hossein Mirzaei, Mohammadreza Salehi, Sajjad Shahabi, Efstratios Gavves, Cees GM Snoek, Mohammad Sabokrou, and Mohammad Hossein Rohban. Fake it till you make it: Near-distribution novelty detection by score-based generative models. *arXiv preprint arXiv:2205.14297*, 2022. [107] Sina Mohseni, Mandar Pitale, JBS Yadawa, and Zhangyang Wang. Self-supervised learning for generalizable out-of-distribution detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 5216–5223, 2020. [108] Sofia Mosci, Lorenzo Rosasco, Matteo Santoro, Alessandro Verri, and Silvia Villa. Solving structured sparsity regularization with proximal methods. In Joint European conference on machine learning and knowledge discovery in databases, pp. 418–433. Springer, 2010. [109] Norman Mu and Justin Gilmer. Mnist-c: A robustness benchmark for computer vision. arXiv preprint arXiv:1906.02337, 2019. [110] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? *ICLR*, 2018. [111] Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, and Fuxin Li. Open set learning with counterfactual images. In *ECCV*, pp. 613–628, 2018. [112] Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. object image library (coil-100. Technical report, 1996. [113] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. [114] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. *arXiv* preprint arXiv:1803.02999, 2018. [115] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *Indian Conference on Computer Vision, Graphics & Image Processing*, pp. 722–729. IEEE, 2008. [116] Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. *NeurIPS*, 2016. [117] Poojan Oza and Vishal M Patel. C2ae: Class conditioned auto-encoder for open-set recognition. In CVPR, pp. 2307–2316, 2019. [118] Guansong Pang, Chunhua Shen, Longbing Cao, and Anton van den Hengel. Deep learning for anomaly detection: A review. *ACM Computing Surveys*, 2020. [119] Ashok Kumar Pant, Sanjeeb Prasad Panday, and Shashidhar Ram Joshi. Off-line nepali handwritten character recognition using multilayer perceptron and radial basis function neural networks. In Asian Himalayas International Conference on Internet, pp. 1–5. IEEE, 2012. [120] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. *Neural Networks*, 113:54–71, 2019. [121] Pramuditha Perera and Vishal M Patel. Deep transfer learning for multiple class novelty detection. In CVPR, pp. 11544–11552, 2019. [122] Pramuditha Perera, Ramesh Nallapati, and Bing Xiang. Ocgan: One-class novelty detection using gans with constrained latent representations. In *CVPR*, pp. 2898–2906, 2019. [123] Pramuditha Perera, Vlad I Morariu, Rajiv Jain, Varun Manjunatha, Curtis Wigington, Vicente Ordonez, and Vishal M Patel. Generative-discriminative feature representations for open-set recognition. In CVPR, pp. 11814–11823, 2020. [124] Pramuditha Perera, Poojan Oza, and Vishal M Patel. One-class classification: A survey. arXiv preprint arXiv:2101.03064, 2021. [125] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In *AAAI*, volume 32, 2018. [126] James Pickands III et al. Statistical inference using extreme order statistics. *Annals of statistics*, 3(1): 119–131, 1975. [127] Masoud Pourreza, Bahram Mohammadi, Mostafa Khaki, Samir Bouindour, Hichem Snoussi, and Mohammad Sabokrou. G2d: Generate to detect anomaly. In *WACV*, pp. 2003–2012, 2021. [128] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *ICML*, pp. 8748–8763, 2021. [129] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. *arXiv preprint arXiv:1711.05225*, 2017. [130] Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder networks. *NeurIPS*, 2015. [131] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In *CVPR*, pp. 2001–2010, 2017. [132] Jie Ren, Peter J Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A DePristo, Joshua V Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. *NeurIPS*, 2019. [133] Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. A simple fix to mahalanobis distance for improving near-ood detection. *ICML*, 2021. [134] Oliver Rippel, Patrick Mertens, and Dorit Merhof. Modeling the distribution of normal data in pre-trained deep features for anomaly detection. In *ICPR*, pp. 6726–6733. IEEE, 2021. [135] Andras Rozsa, Manuel Günther, and Terrance E Boult. Adversarial robustness: Softmax versus openmax. *BMVC*, 2017. [136] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In *ICML*, pp. 4393–4402. PMLR, 2018. [137] Lukas Ruff, Robert A Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, and Marius Kloft. Deep semi-supervised anomaly detection. *ICLR*, 2019. [138] Lukas Ruff, Jacob R Kauffmann, Robert A Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G Dietterich, and Klaus-Robert Müller. A unifying review of deep and shallow anomaly detection. *Proceedings of the IEEE*, 2021. [139] Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli. Adversarially learned one-class classifier for novelty detection. In *CVPR*, pp. 3379–3388, 2018. [140] Mohammad Sabokrou, Masoud Pourreza, Mohsen Fayyaz, Rahim Entezari, Mahmood Fathy, Jürgen Gall, and Ehsan Adeli. Avid: Adversarial visual irregularity detection. In *ACCV*, pp. 488–505. Springer, 2018. [141] Mohammad Sabokrou, Mahmood Fathy, Guoying Zhao, and Ehsan Adeli. Deep end-to-end one-class classifier. *IEEE transactions on neural networks and learning systems*, 32(2):675–684, 2020. [142] Mohammadreza Salehi, Ainaz Eftekhar, Niousha Sadjadi, Mohammad Hossein Rohban, and Hamid R Rabiee. Puzzle-ae: Novelty detection in images through solving puzzles. *arXiv preprint arXiv:2008.12959*, 2020. [143] Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi, Amirreza Shaeiri, Mohammad Hossein Rohban, and Hamid R Rabiee. Arae: Adversarially robust training of autoencoders improves novelty detection. *Neural Networks*, 2021. [144] Mohammadreza Salehi, Niousha Sadjadi, Soroosh Baselizadeh, Mohammad H Rohban, and Hamid R Rabiee. Multiresolution knowledge distillation for anomaly detection. In *CVPR*, pp. 14902–14912, 2021. [145] Walter J Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E Boult. Toward open set recognition. *T-PAMI*, 35(7):1757–1772, 2012. [146] Robin Tibor Schirrmeister, Yuxuan Zhou, Tonio Ball, and Dan Zhang. Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features. *arXiv preprint* arXiv:2006.10848, 2020. [147] Thomas Schlegl, Philipp Seeböck, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pp. 146–157. Springer, 2017. [148] Bernhard Schölkopf, Robert C Williamson, Alexander J Smola, John Shawe-Taylor, John C Platt, et al. Support vector method for novelty detection. In *NeurIPS*, volume 12, pp. 582–588. Citeseer, 1999. [149] Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, and Prateek Mittal. Analyzing the robustness of open-world machine learning. In *ACM Workshop on* Artificial Intelligence and Security, pp. 105–116, 2019. [150] Vikash Sehwag, Mung Chiang, and Prateek Mittal. Ssd: A unified framework for self-supervised outlier detection. *ICLR*, 2021. [151] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pp. 618–626, 2017. [152] Joan Serrà, David Álvarez, Vicenç Gómez, Olga Slizovskaia, José F Núñez, and Jordi Luque. Input complexity and out-of-distribution detection with likelihood-based generative models. *ICLR*, 2019. [153] Laleh Seyyed-Kalantari, Guanxiong Liu, Matthew McDermott, Irene Y Chen, and Marzyeh Ghassemi. Chexclusion: Fairness gaps in deep chest x-ray classifiers. In *BIOCOMPUTING 2021: Proceedings of* the Pacific Symposium, pp. 232–243. World Scientific, 2020. [154] Rui Shao, Pramuditha Perera, Pong C Yuen, and Vishal M Patel. Open-set adversarial defense. In ECCV, pp. 682–698, 2020. [155] Shubhranshu Shekhar, Neil Shah, and Leman Akoglu. Fairod: Fairness-aware outlier detection. *AIES*, 2020. [156] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In SP, pp. 3–18, 2017. [157] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *ICML, Workshop on Visualization for Deep Learning*, 2017. [158] Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, and Tomas Pfister. Learning and evaluating representations for deep one-class classification. *ICLR*, 2021. [159] Gowthami Somepalli, Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, and Soheil Feizi. Unsupervised anomaly detection with adversarial mirrored autoencoders. *Uncertainty in Artificial Intelligence*, 2020. [160] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. How to train deep variational autoencoders and probabilistic ladder networks. *ICLR*, 3, 2016. [161] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. *ICLR*, 2020. [162] Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon Ling, and Guohao Peng. Conditional gaussian distribution learning for open set recognition. In *CVPR*, pp. 13480–13489, 2020. [163] Yiyou Sun, Chuan Guo, and Yixuan Li. React: Out-of-distribution detection with rectified activations. NeurIPS, 34, 2021. [164] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. *ICML*, 2022. [165] Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. Csi: Novelty detection via contrastive learning on distributionally shifted instances. *NeurIPS*, 2020. [166] Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu, and Vishal M Patel. Medical transformer: Gated axial-attention for medical image segmentation. *MICCAI*, 2021. [167] Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Open-set recognition: A good closed-set classifier is all you need. *ICLR*, 2022. [168] Shashanka Venkataramanan, Kuan-Chuan Peng, Rajat Vikram Singh, and Abhijit Mahalanobis. Attention guided anomaly localization in images. In *ECCV*, pp. 485–503, 2020. [169] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In *ICML*, pp. 6438–6447, 2019. [170] Roman Vershynin. *High-dimensional probability: An introduction with applications in data science*, volume 47. Cambridge university press, 2018. [171] Haoran Wang, Weitang Liu, Alex Bocchieri, and Yixuan Li. Can multi-label classification networks know what they don't know? *NeurIPS*, 2021. [172] Siqi Wang, Yijie Zeng, Xinwang Liu, En Zhu, Jianping Yin, Chuanfu Xu, and Marius Kloft. Effective end-to-end unsupervised outlier detection via inlier priority of discriminative network. *NeurIPS*, 2019. [173] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In *CVPR*, pp. 2097–2106, 2017. [174] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In *CVPR*, pp. 8919–8928, 2020. [175] Peter Welinder, Steve Branson, Takeshi Mita, Catherine Wah, Florian Schroff, Serge Belongie, and Pietro Perona. Caltech-ucsd birds 200. 2010. [176] Yan Xia, Xudong Cao, Fang Wen, Gang Hua, and Jian Sun. Learning discriminative reconstructions for unsupervised outlier removal. In *ICCV*, pp. 1511–1519, 2015. [177] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017. [178] Zhisheng Xiao, Qing Yan, and Yali Amit. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. *NeurIPS*, 2020. [179] Xudong Yan, Huaidong Zhang, Xuemiao Xu, Xiaowei Hu, and Pheng-Ann Heng. Learning semantic context from normal samples for unsupervised anomaly detection. In *AAAI*, volume 35, pp. 3110–3118, 2021. [180] Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution detection: A survey. *arXiv preprint arXiv:2110.11334*, 2021. [181] Ziyu Ye, Yuxin Chen, and Haitao Zheng. Understanding the effect of bias in deep anomaly detection. IJCAI, 2021. [182] Jaemin Yoo, Tiancheng Zhao, and Leman Akoglu. Role of data augmentation in unsupervised anomaly detection. *arXiv preprint arXiv:2208.07734*, 2022. [183] Ryota Yoshihashi, Wen Shao, Rei Kawakami, Shaodi You, Makoto Iida, and Takeshi Naemura. Classification-reconstruction learning for open-set recognition. In *CVPR*, pp. 4016–4025, 2019. [184] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. *arXiv preprint* arXiv:1506.03365, 2015. [185] Qing Yu and Kiyoharu Aizawa. Unsupervised out-of-distribution detection by maximum classifier discrepancy. In *ICCV*, pp. 9518–9526, 2019. [186] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. *IEEE transactions on neural networks and learning systems*, 30(9):2805–2824, 2019. [187] Zhongqi Yue, Tan Wang, Qianru Sun, Xian-Sheng Hua, and Hanwang Zhang. Counterfactual zero-shot and open-set visual recognition. In *CVPR*, pp. 15404–15414, 2021. [188] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In *ICCV*, pp. 6023–6032, 2019. [189] Muhammad Zaigham Zaheer, Jin-ha Lee, Marcella Astrid, and Seung-Ik Lee. Old is gold: Redefining the adversarially learned one-class classifier training paradigm. In *CVPR*, pp. 14183–14193, 2020. [190] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In *ICML*, pp. 325–333. PMLR, 2013. [191] Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, and Vijay Ramaseshan Chandrasekhar. Efficient gan-based anomaly detection. *ICLR Workshop*, 2018. [192] Hongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. Hybrid models for open set recognition. In *ECCV*, pp. 102–117. Springer, 2020. [193] Hongjing Zhang and Ian Davidson. Towards fair deep anomaly detection. In *ACM Conference on* Fairness, Accountability, and Transparency, pp. 138–148, 2021. [194] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *ICLR*, 2017. [195] Chong Zhou and Randy C Paffenroth. Anomaly detection with robust deep autoencoders. In *international conference on knowledge discovery and data mining*, pp. 665–674, 2017. [196] Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. Learning placeholders for open-set recognition. In CVPR, pp. 4401–4410, 2021. ## A Appendix You may include other additional sections here.
Review 1: Summary: This paper provides a unified review of anomaly detection, open-set recognition, and out-of-distribution detection. While each of these directions has been studied extensively, a compressive review was most certainly needed. This paper effectively fills this gap. It provides an exhaustive summary of a wide range of works in each direction and discussed future challenges for all three domains.  Strengths and Weaknesses: This paper provides a very exhaustive review of three major domains in open-world machine learning. Each of these domains has its own reviews published previously, so the challenge of writing a joint review is indeed paramount. The paper meets the bar in discussing a wide range of papers from each domain. However, it still lacks a borad categorization of the paper discussed (even within each domain). I discuss some of the key limitations and suggested changes below. *Disentangling core challenges and future works*: Authors provide an extensive list of future challenges for the community. I recommend splitting this list into two parts. 1) Establish the core challenges that the community has already extensively been working on. E.g. self-supervised outlier detection (similar setting in anomaly detection), and few-shot detection are some of the techniques that have been recently explored. 2) Identify the challenges that the community is still missing as a recommendation or future work (e.g., use of transformers or explainability of detection models) *Extensive quantitative benchmarking missing for AD/ND tasks*: While the paper compared most works in OOD detection quantitatively (Table 3,4,5,6) it lacks a similar rigorous benchmarking for AD and ND tasks (table 7, 8). I suggest incorporating something similar to cifar-10 per-class anomaly detection benchmark ([1] and follow-up works) *Missing categorization of works* The current version of the paper also lacks a concrete categorization of existing works. The existing categorization is done (table 1) based on the domain (OOD detection, AD, OSR). This is just the global categorization, and I believe that each work should be categorized in a more fine-grained manner. For example, they can be categorized into. * Use of labels: supervised, unsupervised, or semi-supervised  * Paradigm: deep-learning vs non-deep learning * Use of Pretraining only (feature-space detection) vs training from scratch 
 While some of these details are available in the detailed description provided for each method, it is necessary to elevate them to broad categorization (e.g., in a table). In existing table-1, I suggest authors add the publication venue details of each method, as it might highlight if the research in three communities (OOD, OSR, AD) are clustered across venues. *Fig.1*: The left-half of the figure (concentric circles) makes it hard to parse the definition of each domain, and in fact can lead to the wrong conclusion. For example, if I follow the color combination of detectors on the right, e.g., yellow for OOD detection, then it gives the impression that the yellow color band on the left is actually the region of OOD images. But that is not the case, as indicated by the definition (a U b) vs (c U d). One way to avoid this confusion would be to highlight these definitions (at least increase their font size). Overall I encourage the authors to improve the presentation of this figure. Related work (section 9.5) Sehwag et al. (2019) initiated the work on the robustness of OOD detection under adversarial conditions and effective defenses, which is followed up by Meinke et al. (2021) and Chen et al.(2021). I suggest adding this related work.  Attribution: In each figure caption, I recommend citing the paper from which the figure is borrowed. A very minor issue in fig.1 is that while the caption highlight four classes (car, dog, cat, and airplane), one of the image is a bus (red London bus). This gives a wrong depiction of the task. Though easy fix. 
 * Tack, Jihoon, et al. "Csi: Novelty detection via contrastive learning on distributionally shifted instances." Advances in neural information processing systems 33 (2020): 11839-11852. * Sehwag, Vikash, et al. "Analyzing the robustness of open-world machine learning." Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security. 2019. * Meinke, Alexander, Julian Bitterwolf, and Matthias Hein. "Provably Robust Detection of Out-of-distribution Data (almost) for free." arXiv preprint arXiv:2106.04260 (2021). * Chen, Jiefeng, et al. "Atom: Robustifying out-of-distribution detection using outlier mining." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2021. 
 Requested Changes: Already discussed with suggestions in the section above. Broader Impact Concerns: None. ================================================== Review 2: Summary: The paper provides a review the related work in Anomaly Detection (AD), Novelty Detection (ND), out of distribution detection (OOD) and Open Set Recognition (OSR), aiming to provide a unified view of these problems in machine learning. The paper provides an extensive review of different state-of-the-art methods in these areas as well as an empirical evaluation on different synthetic and real datasets. Strengths and Weaknesses: Strengths: + The idea of providing a unified survey including AD, ND, OOD, and OSR can be of interest and useful to a broad audience in the machine learning community. + The authors included an extensive set of techniques across the different problems. + The paper includes an empirical evaluation comparing the methods explained in the survey. Weaknesses: - The paper does not really deliver the unified view of the different learning tasks considered, as stated in the introduction. The paper mostly focuses on the brief description of an extensive number of methods in the related work. - Some of the descriptions of the algorithms included in the survey are very brief and includes equations that are not really explained (e.g. equations (1) and (2)). It gives the impression that the paper prioritizes quantity (of the methods included in the survey) than quality, which really limits the usefulness of the paper. - The different subsections in Section 9 looks rather disconnected and the storyline is not clear. - The writing quality of the paper can be significantly improved (typos, grammar, etc.). - The paper includes many figures that are not original from the authors, but taken from other papers without making the reference explicit. This can create problems related to copyright infringement and plagiarism. Requested Changes: I have read the comments and requested changes in the previous version of the paper and briefly checked the changes in the paper with respect to the previous version. I fully agree with the changes suggested by the action editors and I think that the authors have not really addressed them for this revised version of the manuscript. In this sense, my main concern is that the paper mostly focuses on the short description (in some cases insufficient) of a large number of papers rather than providing that unified view promised in the introduction. For instance, the authors mention differences between novelty and anomaly detection tasks (depending on the degree of supervision). But then, in Section 3 both anomaly and novelty detection methods are treated a bit arbitrarily. For example, OC-SVM is defined as a novelty detection method, but the authors include it in a section for anomaly detection methods. The authors have done a good job on gathering a very good number of relevant methods for the different tasks, but the survey needs a significant amount of work in relating them to provide that unified view, which is the objective of the survey. The description of some of the methods needs to be revised and improved: in some cases, these are too brief and includes equations or algorithms that are not really explained. Section 9 seems too disconnected. Some of the ideas can be of use, but the discussions sometimes are shallow and the different aspects considered in the different subsections are not well connected. As mentioned by other reviewers and the action editors in the previous version of the manuscript, the writing quality of the paper can be significantly improved and the paper should be proofread carefully before submission. In this sense, I don’t see significant improvements with respect to the previous version. There is no relevant final discussion or conclusion in the paper. Section 10 is too short. I think this can also be significantly improved. Broader Impact Concerns: Many figures and algorithms have been taken from the original papers, but the authors in this paper do not include any explicit reference to the original source. I don’t think that the authors have done this maliciously, but this can be a source of problems in terms of copyright infringement and plagiarism. ================================================== Review 3: Summary: The paper aims to comprehensively review relevant works in out-of-distribution, open-set, and anomaly detection. The paper highlights an essential concern with other survey papers in the field, i.e., the existing survey papers only focus on a specific domain without examining the relationship between different domains. This paper provides a cross-domain and comprehensive review of the literature across these areas and at the same time identifies common ideas. Strengths and Weaknesses: **Strengths** - Survey paper is clearly written and easy to follow. The exposition is clear and situated nicely and is improved from its last version. Methods are described both mathematically and visually. This survey paper can help newcomers reduce their barriers to entry. - Table 1 and Figure 2 are illustrative and nice additions to the latest version. I have some minor comments, please read below. - Explanation for each method covered in the paper is clear. To the best of my knowledge, the survey does a good job of covering relevant papers with diverse ideas in each field. - Authors have also made some efforts to include relatively old approaches. **Weaknesses** - Figure 2 can be improved a bit (see comments below). - While authors have included Section 2 to discuss the taxonomy, relation between different approaches is still left a bit unclear - Section 3 looks a bit out of the place. - Typos. Section 3.4 title. Requested Changes: - (Critical to securing your recommendation for acceptance) The paper discusses different methods in different subsections for each of the survey topics. While Section 2 is added for taxonomy, currently, it seems that the discussion of each method is fairly disjoint without a clear connection between different methods. I encourage authors to provide summary unifying ideas across similar methods and more broadly across different areas. - (Critical to securing your recommendation for acceptance) Several things in Figure 2 are unclear. The phrase 'Unfamiliar Data Detection Approaches' looks unfamiliar to me, so it is a bit hard to parse the remaining hierarchy. It is also clear why 'Pre-trained Feature Adaptation ' is branched separately from the discriminative branch. Broader Impact Concerns: NA ==================================================
# Graphpnas: Learning Probabilistic Graph Generators For Neural Architecture Search Muchen Li muchenli@cs.ubc.ca University of British Columbia Vector Institute for AI Jeffrey Liu *jeffrey.yunfan.liu@uwaterloo.ca* University of Waterloo Leonid Sigal *lsigal@cs.ubc.ca* University of British Columbia Vector Institute for AI Canada CIFAR AI Chair NSERC CRC Chair Renjie Liao *rjliao@ece.ubc.ca* University of British Columbia Vector Institute for AI Canada CIFAR AI Chair Reviewed on OpenReview: *https: // openreview. net/ forum? id= ok18jj7cam* ## Abstract Neural architectures can be naturally viewed as computational graphs. Motivated by this perspective, we, in this paper, study neural architecture search (NAS) through the lens of learning random graph models. In contrast to existing NAS methods which largely focus on searching for a single best architecture, i.e., point estimation, we propose *GraphPNAS*, a deep graph generative model that learns a distribution of well-performing architectures. Relying on graph neural networks (GNNs), our GraphPNAS can better capture topologies of good neural architectures and relations between operators therein. Moreover, our graph generator leads to a learnable probabilistic search method that is more flexible and efficient than the commonly used RNN generator and random search methods. Finally, we learn our generator via an efficient reinforcement learning formulation for NAS. To assess the effectiveness of our GraphPNAS, we conduct extensive experiments on three search spaces, including the challenging RandWire on Tiny-ImageNet, ENAS on CIFAR10, and NASBench-101/201. The complexity of RandWire is significantly larger than other search spaces in the literature. We show that our proposed graph generator consistently outperforms RNN based one and achieves better or comparable performances than state-of-the-art NAS methods. We open source our code here: https://github.com/DSL-Lab/GraphPNAS. ## 1 Introduction In recent years, we have witnessed a rapidly growing list of successful neural architectures that underpin deep learning, e.g., VGG, LeNet, ResNets (He et al., 2016), Transformers (Dosovitskiy et al., 2020). Designing these architectures requires researchers to go through time-consuming trial and errors. Neural architecture search (NAS) (Zoph & Le, 2016; Elsken et al., 2018b) has emerged as an increasingly popular research area which aims to automatically find state-of-the-art neural architectures without human-in-the-loop. NAS methods typically have two components: a search module and an evaluation module. The search module is expressed by a machine learning model, such as a deep neural network, designed to operate in a high dimensional search space. The search space, of all admissible architectures, is often designed by hand in advance. The evaluation module takes an architecture as input and outputs the reward, e.g., performance of this architecture trained and then evaluated with a metric. The learning process of NAS methods typically iterates between the following two steps. 1) The search module produces candidate architectures and sends them to the evaluation module; 2) The evaluation module evaluates these architectures to get the reward and sends the reward back to the search module. Ideally, based on the feedback from the evaluation module, the search module should learn to produce better and better architectures. Unsurprisingly, this learning paradigm of NAS methods fits well to reinforcement learning (RL). Most NAS methods (Liu et al., 2018b; White et al., 2020; Cai et al., 2019) only return a single best architecture (i.e., a point estimate) after the learning process. This point estimate could be very biased as it typically underexplores the search space. Further, a given search space may contain multiple (equally) good architectures, a feature that a point estimate cannot capture. Even worse, since the learning problem of NAS is essentially a discrete optimization where multiple local minima exist, many local search style NAS methods (Ottelander et al., 2020) tend to get stuck in local minima. From the Bayesian perspective, modelling the distribution of architectures is inherently better than point estimation, e.g., leading to the ability to form ensemble methods that work better in practice. Moreover, modelling the distribution of architectures naturally caters to probabilistic search methods which are better suited for avoiding local optima, e.g., simulated annealing. Finally, modeling the distribution of architectures allows to capture complex structural dependencies between operations that characterize good architectures capable of more efficient learning and generalization. Motivated by the above observations and the fact that neural architectures can be naturally viewed as attributed graphs, we propose a probabilistic graph generator which models the distribution over good architectures using graph neural networks (GNNs). Our generator excels at generating topologies with complicated structural dependencies between operations. From the Bayesian inference perspective, our generator returns a distribution over good architectures, rather than a single point estimate, allowing to capture the multi-modal nature of the posterior distribution of good architectures and to effectively average or ensemble architecture (sample) estimates. Different from the Bayesian deep learning (Neal, 2012; Blundell et al., 2015; Gal & Ghahramani, 2016) that models distributions of weights/hidden units, we model distributions of neural architectures. Lastly, our probabilistic generator is less prone to the issue of local minima, since multiple random architectures are generated at each step during learning. In summary, our key contributions are as below. - We propose a GNN-based graph generator for neural architectures which empowers a learnable probabilistic search method. To the best of our knowledge, we are the first to explore learning deep graph generative models as generators in NAS. - We explore a significantly larger search space (e.g., graphs with 32 operators) than the literature (e.g., garphs with up to 12 operators) and propose to evaluate architectures under low-data regime, which altogether boost effectiveness and efficiency of our NAS system. - Extensive experiments on three different search spaces show that our method consistently outperforms RNN-based generators and is slightly better or comparable to the state-of-the-art NAS methods. Also, it can generalize well across different NAS system setups. ## 2 Related Works Neural Architecture Search. The main challenges in NAS are 1) the hardness of discrete optimization, 2) the high cost for evaluating neural networks, and 3) the lack of principles in the search space design. First, to tackle the discrete optimization, evolution strategies (ES) (Elsken et al., 2019; Real et al., 2019), reinforcement learning (RL) (Baker et al., 2017; Zhong et al., 2018; Pham et al., 2018b; Liu et al., 2018a), Bayesian optimization (Bergstra et al., 2013; White et al., 2019) and continuous relaxations (Liu et al., 2018b) have been explored in the literature. We follow the RL path as it is principled, flexible in injecting prior knowledge, achieves the state-of-the-art performances (Tan & Le, 2019), and can be naturally applied to our graph generator. Second, the evaluation requires training individual neural architectures which is notoriously time consuming(Zoph & Le, 2016). Pham et al. (2018b); Liu et al. (2018b) propose a weight-sharing supernet to reduce the training time. Baker et al. (2018) use a machine learning model to predict the performance of fully-trained architectures conditioned on early-stage performances. Brock et al. (2018); Zhang et al. (2018) directly predict weights from the search architectures via hypernetworks. Since our graph generator do not relies on specific choice of evaluation method, we choose to experiment on both oracle training(training from scratch) and supernet settings for completeness. Third, the search space of NAS largely determines the optimization landscape and bounds the best-possible performance. It is obvious that the larger the search space is, the better the best-possible performance and the higher the search cost would likely be. Besides this trade-off, few principles are known about designing the search space. Previous work (Pham et al., 2018b; Liu et al., 2018b; Ying et al., 2019; Li et al., 2020) mostly focuses on cell-based search space. A cell is defined as a small (e.g., up to 8 operators) computational graph where nodes (i.e., operators like 3×3 convolution) are connected following some topology. Once the search is done, one often stacks up multiple cells with the same topology but different weights to build the final neural network. Other works (Tan et al., 2019; Cai et al., 2019; Tan & Le, 2019) typically fix the topology, e.g., a sequential backbone, and search for layer-wise configurations (e.g., operator types like 3×3 vs. 5×5 convolution and number of filters). In our method, to demonstrate our graph generator's ability in exploring large topology search space, we first explore on a challenging large cell space (32 operators), after which we experiment on ENAS Macro (Pham et al., 2018b) and NAS-Benchmark-101(Ying et al., 2019) for more comparison with previous methods. Neural Architecture as Graph for NAS. Recently, a line of NAS research works propose to view neural architectures as graphs and encode them using graph neural networks (GNNs). In (Zhang et al., 2020; Luo et al., 2018a), graph auto-encoders are used to map neural architectures to and back from a continuous space for gradient-based optimization. Shi et al. (2020) use bayesian optimization (BO), where GNNs are used to get embedding from neural architectures. Despite the extensive use of GNNs as encoders, few works focus on building graph generative models for NAS. Closely related to our work, Xie et al. (2019) explore different topologies of the similar cell space using non-learnable random graph models. You et al. (2020) subsequently investigate the relationship between topologies and performances. Following this, Ru et al. (2020) propose a hierarchical search space modeled by random graph generators and optimize hyper-parameters using BO. They are different from our work as we learn the graph generator to automatically explore the cell space. Deep Graph Generative Models. Graph generative models date back to the ErdsRényi model (Erdös & Rényi, 1959), of which the probability of generating individual edges is the same. Other well-known graph generative models include the stochastic block model (Holland et al., 1983), the small-world model (Watts & Strogatz, 1998), and the preferential attachment model (Barabási & Albert, 1999). Recently, deep graph generative models instead parameterize the probability of generating edges and nodes using deep neural networks in, e.g., the auto-regressive fashion (Li et al., 2018; You et al., 2018; Liao et al., 2019) or variational autoencoder fashion (Kipf & Welling, 2016; Grover et al., 2018; Liu et al., 2019). These models are highly flexible and can model complicated distributions of real-world graphs, e.g., molecules (Jin et al., 2018), road networks (Chu et al., 2019), and program structures (Brockschmidt et al., 2018). Our graph generator builds on top of the state-of-the-art deep graph generative model in (Liao et al., 2019) with several important distinctions. First, instead of only generating nodes and edges, we also generate node attributes (e.g., operator types in neural architectures). Second, since good neural architectures are actually latent, our learning objective maximizes the expected reward (e.g., validation accuracies) rather than the simple log likelihood, thus being more challenging. Graph Generative models for Neural Architectural Search Closely related to our work, there is a line of research that learns to generate graph architecture for neural architecture search.In the prior paradigm, graph generative networks largely relied upon Variational Auto Encoders (VAEs). NAO(Luo et al., 2018b) employed an LSTM-based VAE, synchronized with performance prediction for gradient-based architectural optimization. In contrast, Arch2Vec(Yan et al., 2020) was designed to translate the graph representation of neural architecture into an implicit vector using GraphVAE, aiming for its optimization through Bayesian Optimization. AG-Net(Lukasik et al., 2022), a more recent advent, originated a generative network from the VAE decoder, allied with a surrogate model to expedite efficient learning. To extract a sample from AG-Net, an initial random sample is obtained from the architectural representation space, after which a decoder is utilized to translate the sample into a graph representation. Our methodology differs significantly, utilizing an auto-regressive graph generator, wherein the generation of the graph mirrors a Markov decision process. ![3_image_0.png](3_image_0.png) Figure 1: Figure (a) is the pipeline of our NAS system. The core part is a GNN-based graph generator from which we sample graph representations of neural network G. The corresponding model for each G is then sent to the evaluator for evaluation. The evaluation result is first stored in a replay buffer and then used for learning the graph generator through Reinforcement Learning.Figure(b) shows one generation step in the proposed probabilistic graph generator. ## 3 Methods The architecture of any feedforward neural network can be naturally represented as a directed acyclic graph (DAG), a.k.a., *computational graph*. There exist two equivalent ways to define the computational graph. First, we denote operations (e.g., convolutions) as nodes and denote operands (e.g., tensors) as edges which indicate how the computation flows. Second, we denote operands as nodes and denote operators as edges. We adopt the first view. In particular, a neural network G with N operations is defined as a tuple (*A, X*) where A ∈ {0, 1} N×N is an N × N adjacent matrix with Aij = 1 indicates that the output of the j-th operator is used as the input of the i-th operator. For operator with multiple inputs, inputs are combined together (e.g., using sum or average operator) before sending into the operator. X is a N-size *attribute* vector encoding operation types. For any operation i, its operation type Xi can only choose from a predefined list with length D, e.g., 1 × 1, 3 × 3 or 5 × 5 convolutions. Note that for any valid feedforward architecture, G can not have loops. One sufficient condition to satisfy the requirement is to constrain A to be a lower triangular matrix with zero diagonal (i.e., excluding self-loops). This formalism creates a search space of DN 2 N(N−1)/2 possible architectures, which is huge even for moderately large number of operators N and number of operation types D. The goal of NAS is to find an architecture or a set of architectures within this search space that would perform well. For practical consideration, we search for cell graphs (e.g., N = 32) and then replicate this cell several times to build a deep neural architecture. We also experiment on the ENAS Macro search space where G defines a entire network. More details for the corresponding search spaces can be found in Section 4. ## 3.1 Neural Architecture Search System Before delving into details, we first give an overview of our NAS system, which consists of two parts: a generator and an evaluator. The system diagram is shown in Fig. 1. At each step, the probabilistic graph generator samples a set of cell graphs, which are further translated to neural architectures by replicating the cell graph multiple times and stacking them up. Then the evaluator evaluates these architectures, obtains rewards, and sends architecture-reward pairs to the replay buffer. The replay buffer is then used to improve the generator, effectively forming a reinforcement learning loop. ## 3.1.1 Probabilistic Generators For Neural Architectures Now we introduce our probabilistic graph generator which is based on a state-of-the-art deep auto-regressive graph generative model in (Liao et al., 2019). Auto-Regressive Generation. Specifically, we decompose the distribution of a cell graph along with attributes (operation types) in an auto-regressive fashion, $$\mathbb{P}(A,X)=\prod_{i=1}^{N}\mathbb{P}\left(A_{i,:}|A_{i-1,:},X_{i-1},\cdots,A_{1,:},X_{1}\right)\mathbb{P}\left(X_{i}|A_{i-1,:},X_{i-1},\cdots,A_{1,:},X_{1}\right),$$ where Ai,: and Xi denote the i-th row of the adjacency matrix A and the i-th operation type respectively. To ensure the generated graphs are DAGs, we constrain A to be lower triangular by adding a binary mask, i.e., the i-th node can only be reached from the first i − 1 nodes. We omit the masks in the equations for better readability. We further model the conditional distributions as follows, $$\mathbb{P}\left(A_{i,:}|A_{i-1,:},X_{i-1},\cdots,A_{1,:},X_{1}\right)=\sum_{k=1}^{K}\alpha_{k}\prod_{1\leq j<i}\theta_{k,i,j}$$ $$\mathbb{P}\left(X_{i}|A_{i-1,:},X_{i-1},\cdots,A_{1,:},X_{1}\right)=\text{Categorical}\left(\beta_{1},\cdots,\beta_{D}\right)$$ $$\alpha_{1},\ldots,\alpha_{K}=\text{Softmax}\left(\sum_{1\leq j<i}\text{MLP}_{\theta}(h_{i}^{S}-h_{j}^{S})\right)$$ $$\beta_{1},\ldots,\beta_{D}=\text{Softmax}\left(\text{MLP}_{\theta}(h_{i}^{S})\right)$$ $$\theta_{1,i,j},\ldots,\theta_{K,i,j}=\text{Sigmoid}\left(\text{MLP}_{\theta}(h_{i}^{S}-h_{j}^{S})\right),$$ $$(1)$$ $$\mathrm{(2)}$$ (3) $\binom{4}{}$ . where the distributions of the operation type and edges are categorical and K-mixture of Bernoulli respectively. D is again the number of operation types. MLPα, MLPβ, and MLPθ are different instances of two-layer MLPs with ReLU activations. Here h S iis the representation of i-th node returned by a GNN which has been executed S steps of message passing at each generation step. This auto-regressive construction breaks down the nice property of permutation invariance for graph generation. However, we do not find it as an issue in practice, partly due to the fact that the graph isomorphism becomes less likely to happen while considering both topology and operation types. Message Passing GNNs. Each generation step n ≤ N in auto-regressive generation above relies on representations of nodes up to and including n itself (see Eq. (4)–(6)). To obtain these node representations {h S i }, we exploit message passing GNNs (Gilmer et al., 2017) with an attention mechanism similar to (Liao et al., 2019). In particular, the s-th message passing step involves executing the following equations successively, $$m^{\prime}_{ij}=f([h^{\prime}_{i}-h^{\prime}_{j},\mathbf{1}_{ij}])\tag{7}$$ $$\bar{h}^{\prime}_{i}=[h^{\prime}_{i},u_{j}]\tag{8}$$ where $\mathcal{N}(i)$ is the set of node $i$ along with its neighboring nodes. $m^{\prime}_{ij}$ is the message sent from node $i$ to $j$. at the s-th message passing step. The connectivity for the propagation in GNN is given by A1:i−1,1:i−1 with the last node (for which Ai,: has not been generated yet) being fully connected. Note that message passing step is different from the generation step and we run multiple message passing steps per generation step in order to capture the structural dependency among nodes and edges. The f and g are two-layer MLPs. Since graphs are DAGs in our case rather than undirected ones as in (Liao et al., 2019), we add 1ij in Eq. (7), a one-hot vector for indicating the direction of the edge. We initialize the node representations h 0 i (for i < n) as the corresponding one-hot encoded operation type vectors; h 0 n is initialized to a special one-hot vector. Here uiis an additional feature vector that helps distinguish i-th node from others. We found using one-hot-encoded incoming neighbors of i-th node and a positional encoding of the node index i work well in practice. We encourage readers to reference Fig. 4 for a detailed visualization of graph generation process. Sampling. To sample from our generator, we first draw architectures following the standard ancestral sampling where each step involves drawing random samples from a categorical distribution and a mixture of Bernoulli distribution. At each step, this sampling process adds a new operator with a certain operation type and wire it to previously sampled operators. ## 3.1.2 Evaluator Our design of generator and NAS pipeline do not rely on a specific choice of evaluator. Motivated by (Mnih et al., 2013), we use a replay buffer for storing the evaluated architectures. In our paper, based on specific datasets, we explore three types of evaluators, namely, oracle evaluator, supernet evaluator and benchmark evaluator, which are briefly introduced as follows. Oracle evaluator. Given a sample from the generator, an oracle evaluator trains the corresponding network from scratch and tests it to get the validation performances. To reduce computation overhead, a common approach is to use early stopping (training with fewer epochs) as in (Tan et al., 2019; Tan & Le, 2019). In our experiment, we instead use a low-data evaluator similar to few-shot learning where we keep the same number of classes but use fewer samples per class to train. SuperNet evaluator. Aiming at further reducing the amount of compute, this evaluator uses a weightsharing strategy where each graph is a sub-graph of the supernet. We followed the single-path supernet setup used in (Pham et al., 2018b) to compare with previous methods. Benchmark evaluator. NAS benchmarks, e.g., (Ying et al., 2019), provide accurate evaluation for architectures within the search space, which can be seen as oracle evaluators with full training budgets on target datasets. ## 3.2 Learning Method Since we are dealing with discrete latent variables, i.e., good architectures in our case, we train our NAS system using REINFORCE (Williams, 1992) algorithm with the control variate (a.k.a. baseline) to reduce the variance. In particular, the gradient of the loss or negative expected reward L w.r.t. the generator parameters ϕ is, $$\nabla{\mathcal{L}}(\phi)=\mathbb{E}_{\mathbb{P}({\mathcal{G}})}\left[-{\frac{\partial\log\mathbb{P}({\mathcal{G}})}{\partial\phi}}{\bar{R}}({\mathcal{G}})\right],$$ $$(11)$$ where the reward R¯ is standardized as R¯(G) = (R(G) − C)/σ. Here the baseline C is the average reward of architectures in the replay buffer and σ is standard deviation of rewards in the replay buffer. The expectation in Eq. (11) is approximated by the Monte Carlo estimation. However, the score function (i.e., the gradient of log likelihood w.r.t. parameters) in the above equation may numerically differ a lot for different architectures. For example, if a negative sample, i.e., an architecture with a reward lower than the baseline, has a low probability P(G), it would highly likely to have an extremely large absolute score function value, thus leading to a negative reward with an extremely large magnitude. Therefore, in order to balance positive and negative rewards, we propose to use the reweighted log likelihood as follows, $$\log\mathbb{P}({\mathcal{G}})=\beta\mathbf{1}_{R({\mathcal{G}})\leq0}\log(1-\mathbb{P}({\mathcal{G}}))+\mathbf{1}_{R({\mathcal{G}})>0}\log(\mathbb{P}({\mathcal{G}}))$$ log(P(G)) (12) where β is a hyperparameter that controls the weighting between negative and positive rewards. P(G) is the original probability given by our generator. Exploration vs. Exploitation Similar to many RL approaches, our NAS system faces the exploration vs. exploitation dilemma. We found that our NAS system may quickly collapse (i.e., overly exploit) to a few $$(12)$$ | Methods | Cost | Low Data (Search) | Full Data (Final) | | | |-------------------------|------------|---------------------|---------------------|-------------|------| | | (GPU Days) | Val Avg Acc | Std | Val Avg Acc | Std | | ER-TopK | 15.2 | 23.12 | 0.34 | 61.76 | 0.04 | | WS-TopK | 15.6 | 22.39 | 0.91 | 62.24 | 0.34 | | ER-BEST | 15.2 | 20.07 | 1.62 | 62.10 | 0.25 | | WS-BEST | 15.6 | 18.68 | 1.41 | 62.16 | 0.92 | | RNN (Zoph et al., 2018) | 17.2 | 18.46 | 0.99 | 61.73 | 0.77 | | Ours | 16.7 | 20.32 | 1.12 | 62.57 | 0.40 | Table 1: Comparisons on Tiny-ImageNet. The top and bottom blocks include random search and learningto-search methods respectively. ER-TopK and WS-TopK refers to top (K=4) architectures found by all WS and ER models during search. ER-BEST and WS-BEST refer to the best ER and WS models found during search, i.e., WS(k=4,p=0.75) and ER(p=0.1). Here Avg and Std of accuracies are computed based on 4 architectures sampled from generators. good architectures due to the powerful graph generative model, thus losing the diversity and reducing to point estimate. Inspired by the epsilon greedy algorithm (Sutton & Barto, 2018) used in multi-armed bandit problems, we design a random explorer to encourage more exploration in the early stage. Specifically, at each search step, our generator samples from either itself or a prior graph distribution like the WattsStrogatz model with a probability ϵ. As the search goes on, ϵ is gradually annealed to 0 so that the generator gradually has more exploitation over exploration. Whats more, we design our replay buffer to keep a small portion of candidates. As training goes on, bad samples will be gradually be replaced by good samples for training our generators, which encourage the model to exploit more. ## 4 Experiments In this section, we extensively investigate our NAS system on three different search spaces to verify its effectiveness. First, we adopt the challenging RandWire search space (Xie et al., 2019) which is significantly larger than common ones. To the best of our knowledge, we are the first to explore learning NAS systems in this space. Then we search on the ENAS Macro (Pham et al., 2018b) and NAS-Bench-101(Ying et al., 2019) search spaces to further compare with previous literature. For all experiments, we set the number of mixture Bernoulli K to be 10, the number of message passing steps S to 7, hidden sizes of node representation h s i and message ms ij to 128. For RNN-based baselines, we follow the design in (Zoph et al., 2018) if not other specified. ## 4.1 Randwire Search Space On Tiny-Imagenet RandWire Search Space. Originally proposed in (Xie et al., 2019), a randomly wired neural network is a ResNet-like four-stage network with the cell graph G defines the connectivity of N convolution layers within each stage. At the end of each stage, the resolution is downsampled by 3×3 convolution with stride 2 whereas the number of channels is doubled. While following the RandWire small regime in (Xie et al., 2019), we share the cell graph G among last three stages for simplification. To keep the number of parameters roughly the same, we fix the node type to be separable 3×3 convolution. The number of nodes N within the cell graph G is set to 32 excluding the input and output nodes. This yields a search space of 2.1 × 10149 valid adjacency matrices, which is extremely large and renders the neural architecture search challenging. More details of the RandWire search space can be found in the Appendix C.1. Tiny-ImageNet w. Oracle Evaluator. To enable search on the RandWire space, we exploit the oracle evaluator on the Tiny-ImageNet dataset (Chrabaszcz et al., 2017). To save computation, we employ a lowdata oracle evaluator where we sample 1/10 of Tiny-ImageNet training set for training and use the rest for validation at each search step. Similar to the few-shot learning, we keep the number of classes unchanged but reduce the number of samples per class. After the search, we retrain our found architectures on the full training set and evaluate it on the original validation set. Specifically, for each model, the oracle evaluator trains for 300 epochs and uses the average validation accuracy of the last 3 epochs as the reward. Our total search budget is around 16 GPU days, which approximately amounts to 320 model evaluations, e.g., 40 search steps and 8 samples evaluated per step. For random search baselines, we choose ErdsRényi (ER) Model Param (M) Top1 / Top5 Acc Resnet18 11.68 59.71±0.09 80.32±0.10 Resnet50 25.56 63.42±0.30 82.61±0.15 Resnext50 27.56 63.62±0.07 82.73±0.08 FC 3.49 60.82±0.24 82.29±0.09 ER-Top1 3.23 61.82±0.09 82.30±0.18 RS-Top1 3.22 62.55±0.15 82.64±0.21 RNN 3.32 62.29±0.39 82.16±0.24 Ours 3.27 63.23±0.18 **83.06**±0.05 WS-Top1 Large 19.38 63.84±0.13 82.61±0.16 RNN Large 19.78 63.69±0.28 82.74±0.21 Ours Large 19.18 64.45±0.26 **83.23**±0.26 Table 2: Comparisons of best-searched architectures (averaged over 3 runs per architecture) on Tiny ImageNet. and WattsStrogatz (WS) models. Specifically, we first randomly draw hyperparameters from certain ranges, i.e., 0.1 ≤ p ≤ 0.5 for ER and (2, 0.2) ≤ (*k, p*) ≤ (6, 0.8) for WS, and then sample G from individual models. We set the reweight coefficient β to 0.05. For the random explorer, we choose WS model with the same hyperparameter range as a prior distribution and set ϵ = 0.6 in the beginning and decay it by a factor of 0.2 every 10 search steps. We also find that gradually shrinking replay buffer size to keep 30% to 10% of top-performing architectures helps stabilize the training of the generator. At the search time, we reject samples that already appear in the replay buffer to avoid duplications. We apply the same setting to the RNN generator for a fair comparison. Results. As shown in Table 1, we compare our NAS system with other random search methods and learningto-search methods. We can see that our method outperforms the RNN-based generator and other random search methods in terms of average validation accuracy on the full dataset. Our generator also has a lower variance compared to the RNN-based one. Moreover, we observed that RNN-based generator sometimes degenerates so that it frequently samples densely-connected graphs. This is probably due to the fact that RNN based generator does not effectively utilize the topology information. We can see that a high search reward (i.e., a low-data validation accuracy) does not necessarily lead to better performances in full data training, which indicates a bias of the oracle evaluator within the low-data regime. Random search methods are prone to be biased as they select architectures solely based on the search reward. This suggests that if a model is selected purely based on the proxy measurement, there may be bias when using proxy measurements such as low-data compared to Full data. Instead, learning a probabilistic distribution with the proxy and then sampling from this distribution may offer a more robust search method than deterministic selection with a proxy. We also show results of the best architectures found within 4 samples in Table 2. Here, ER-top-1 and WStop-1 refer to the best model found from the corresponding random search. FC refers to the fully-connected graph, which takes three times longer to train compared to our model. It is clear that the best model found by our method outperforms those discovered by other methods by a considerable margin. Moreover, we scale up the best models (denoted as large) by adding more channels and one more computation stage (more details are in Appendix C.1). We can see that our searched architectures perform favorably against manually designed architectures like ResNet (He et al., 2016) and ResNeXt (Xie et al., 2017). ## 4.2 Enas Macro Search Space On Cifar10 ENAS Macro Search Space, originally proposed by Pham et al. (2018b), is a search space which focuses on the entire network. G here defines the entire network with N = 12 nodes. The operation type (D=6)1 per node is also searchable. G is guaranteed to contain a length-11 path, i.e., ∀i > 1, Ai,i−1 = 1. The goal is to search off-diagonal entries, i.e., skip connections. This gives a search space of 1.6 × 1029 valid networks in total. 11 × 1, 5 × 5 convolution, 1 × 1, 5 × 5 separable convolution, max pooling, avg pooling | Methods | Search Cost | Params | Best | Top Samples | | |--------------------------------------|---------------|----------|------------|---------------|-------| | | (days) | (M) | Error Rate | Avg | Std | | Net Transform (Cai et al., 2018) | 10 | 19.7 | 5.7 | - | - | | NAS (Zoph & Le, 2016) | 22400 | 7.1 | 4.47 | - | - | | PNAS (Liu et al., 2018a) | 225 | 3.2 | 3.41 | - | - | | Lemonade (Elsken et al., 2018a) | 56 | 3.4 | 3.6 | - | - | | EPNAS-Macro (Perez-Rua et al., 2018) | 1.2 | 38.8 | 4.01 | - | - | | RNN* (Pham et al., 2018b) | 0.9 | 19.64 | 4.18 | 4.47 | 0.282 | | RNN* Large | 0.9 | 36.92 | 4.00 | 4.16 | 0.089 | | Ours | 0.5 | 20.47 | 3.73 | 3.93 | 0.098 | | Ours Large | 0.5 | 37.71 | 3.55 | 3.62 | 0.050 | | Method | Avg Error | #Queries | |---------------|--------------|------------| | GCN Pred† | 6.331 | 150 | | Evolution† | 6.109 | 150 | | Ours | 5.930 ±0.143 | 150 | | NAO† | 6.51 | 192 | | AG-NET† | 5.82 | 192 | | Arch2Vec† | 5.95 | 400 | | Random Search | 6.413 ±0.422 | 300 | | Local Search | 5.879±0.371 | 300 | | BANANAS | 5.906 ±0.296 | 300 | | RNN (RL) | 6.028±0.228 | 300 | | Ours (RL) | 5.807±0.072 | 300 | Table 3: Comparisons on CIFAR10 dataset. The bottom and top blocks include NAS methods with ENAS ![8_image_0.png](8_image_0.png) Macro and other search spaces respectively. *: our re-implementation. -: inapplicable. Figure 2: Performances (average over 10 runs) of best architectures vs. the number of architecture evaluations (search step). Table 4: Best model performances on NASBench-101. †indicates numbers are taken from (White et al., 2019; Luo et al., 2018b; Yan et al., 2020; Lukasik et al., 2022) SuperNet Evaluator. For ENAS Macro search space, we experiment on CIFAR10 (Krizhevsky et al., 2009) dataset. For our generator, we use the ER model with p = 0.4 as our explorer, where ϵ decays from 1 to 0 in the first 100 search steps. For RNN based generator, we follow the setup in (Pham et al., 2018b). We also adopt the weight-sharing mechanism in (Pham et al., 2018b) to obtain a SuperNet evaluator that efficiently evaluates a model's performance. We use a budget of 300 search steps with around 100 architectures evaluated per step for all methods. After the search, we use a short-training of 100 epochs to evaluate the performances of 8 sampled architectures, after which top-4 performing ones are chosen for a 600-epoch full training. The best validation error rate among these 4 architectures is reported. For simplicity and a fair comparison, we do not use additional tricks (e.g., adding entropy regularizer) in (Pham et al., 2018b). More details are provided in Appendix D. In Table 3, we compare the error rates and variances for different NAS methods. Note that this variance reflects the uncertainty of the distribution of architectures as it is computed based on sampled architectures. It is clear that our GraphPNAS achieves both lower error rates and lower variances compared to RNN based generator and is on par with the state-of-the-art NAS methods on other search spaces. We also see that the best architecture performance of our generator outperforms RNN based generator by a significant margin. This verifies that our GraphPNAS is able to learn a distribution of well-performing neural architectures. Given that we only sample 8 architectures, the performances could be further improved with more computational budgets. ## 4.3 Nas Benchmarks NAS-Bench-101 (Ying et al., 2019) is a tabulate benchmark containing 423K cell graphs, each of which is a DAG with up to 7 nodes and 9 edges including input and output nodes. | Methods | CIFAR-10 | CIFAR-100 | ImageNet-16-120 | Generator | | | | |----------------------------------------------|-------------------------------------------------------------------|-----------------------|-------------------|-----------------------|---------------------------------|------------|-------------------| | | Val | Test | Val | Test | Val | Test | | | Optimum | 91.61 | 94.37 | 73.49 | 73.51 | 46.73 | 47.31 | - | | Weight Sharing NAS: GDAS (Dong & Yang, 2019) | 89.68±0.72 | 93.23±0.58 | 68.35±2.71 | 68.17±2.50 | 39.55±0.00 | 39.40±0.00 | - | | ENAS (Pham et al., 2018a) | 90.20±0.00 | 93.76±0.00 | 70.21±0.71 | 70.67±0.62 | 40.78±0.00 | 41.44±0.00 | - | | SGNAS (Huang & Chu, 2021) | 90.18±0.31 | 93.53±0.12 | 70.28±1.20 | 70.31±1.09 | 44.65±2.32 | 44.98±2.10 | - | | DrNet (Chen et al., 2021) | 91.55±0.00 94.36±0.00 73.49±0.00 73.51±0.00 46.37±0.00 46.34±0.00 | | | - | | | | | Multi-trial NAS RANDOM (Dong & Yang, 2020) | 91.07±0.26 | 93.86±0.23 | 71.46±0.97 | 71.55±0.97 | 45.03±0.91 | 45.28±0.97 | RANDOM | | BOHB (Falkner et al., 2018) | 91.17±0.27 | 93.94±0.28 | 72.04±0.93 | 72.00±0.86 | 45.55±0.79 | 45.70±0.86 | HyperBand | | REINFORCE (Ying et al., 2019) | 91.12±0.25 | 93.90±0.26 | 71.80±0.94 | 71.86±0.89 | 45.37±0.74 | 45.64±0.78 | LSTM | | GANAS (Changiz Rezaei et al., 2021) | - | 94.34±0.05 | - | 73.28±0.17 | - | 46.80±0.29 | Graph GAN | | Arch2VEC (Yan et al., 2020) | 91.41±0.22 | 94.18±0.24 | 73.35±0.32 | 73.37±0.30 46.34±0.18 | 46.27±0.37 | Graph VAE | | | AG-Net (Lukasik et al., 2022) | 91.55±0.08 | 94.24±0.19 | 73.2±0.34 | 73.12±0.40 | 46.31±0.33 | 46.20±0.47 | Graph VAE Decoder | | GraphPNAS | 91.50±0.09 | 94.34±0.04 73.41±0.09 | 73.35±0.21 | 46.29±0.27 | 46.50±0.20 Auto-regressive GRAN | | | Table 5: Searched best architecture performance on NAS-Bench-201. We run our methods 10 times to obtain mean and standard deviation. We compare the performances of our GraphPNAS to open-source implementations of random search methods, local search methods, and BANANAS (White et al., 2019). The latter two are the best algorithms found by White et al. (2020) on NAS-Bench-101. For GCN prediction and evolution methods, we use the score reported in (White et al., 2020). We give each NAS method the same budget of 300 queries and plot the curve of lowest test error as a function of the number of evaluated architectures. As shown in Fig. 2, our GraphPNAS is able to quickly find well-performing architectures. We also report the avg error rate over 10 runs in Table 4. Our GraphPNAS again outperforms RNN based generator by a significant margin and beats strong baselines like local search methods and BANANAS. Notably, our GraphPNAS has a much lower variance than other methods, thus being more stable across multiple runs. NAS-Bench-201 (Dong & Yang, 2020) is defined on a smaller search space where up to 4 nodes and 6 edges are allowed. Here we compare our method on NAS-Bench-201(Dong & Yang, 2020) with random search (RANDOM) Bergstra & Bengio (2012), ENAS Pham et al. (2018b), GDAS Dong & Yang (2019), BOHB Falkner et al. (2018). Closely related to our methods, we compare with 3 more NAS methods with graph genrator: Arch2VecYan et al. (2020), AG-Net Lukasik et al. (2022), GANNAS Changiz Rezaei et al. (2021). Here Arch2VEC, AG-Net and our methods are fixed to 100 queries, while GANNAS is reported to use 400 qureis. Note that Arch2VEC is coupled with a surrogate model for preselecting arch before queries. Our model can also potentially benefit from such setting. As shown in table 5, our method is either onpar-with or have outperformed previous graph generator based model on all three splits of NAS-Bench-201. However, it's worth noting that the NAS-Bench-201 is less challenging in the size of search space. Gradient based methods like DrNAS Chen et al. (2021) can well explore the entire search space given limited compute budget and find near optimal solution. In table 5, DrNAS is able to find the optimal model with a validation accuracy of 73.49 on cifar100 validation set while our method find a sub-optimal model with 73.41(-0.08) accuracy. We also experiment with the effect of budget on ImageNet-16-120 split. We start with a baseline of 45.00 and 45.39 accuracy on validation and test set over 40 queries. By extending the budget with 20 more queries, we observe significantly improvement of performance to 45.57 (+0.57) and 45.79 (+0.4) on validation and test sets respectively. Extending it to 100 queries gives us the state-of-the-art result showed in table 5. This indicates that our model can not well explore the search space and converge with limited search steps. This is probably due to the fact that the auto-regressive model requires enough samples in the replay buffer to be able to train well, so a reasonable number of search steps is needed for our model to reach its full potential. ## 5 Discussion & Conclusion Qualitative Comparisons between RNN and GraphPNAS. In (You et al., 2020), the clustering coefficient and the average path length have been used to investigate distributions of graphs. Here we adopt ![10_image_0.png](10_image_0.png) Figure 3: Visualization of architecutre explore sapce of GraphPNAS vs RNN. Each point in the figure denotes a model evaluation. Colors of each node denotes its validation accuracy returned by the low-data Oracle evaluator. the same metrics to visualize architectures (graphs) sampled by RNN based and our generators in RandWire experiments. Points in Fig. 3 refer to architectures sampled from both generators in the last 15 search steps where random explorers are disabled. The validation performances are color-coded. We can see that our GraphPNAS samples a set of graphs that have better validation accuracies while the ones of RNN generator have large variances in performances. Moreover, the graphs in our case concentrate to those with smaller clustering coefficients, thus less likely being densely-connected. On the contrary, RNN generator tends to sample graphs that are more likely to be densely-connected. While RNN has been widely used for NAS, we show in our experiments that our graph generator consistently outperforms RNN over three search spaces on two different datasets. This is likely due to the fact that our graph generator better leverages graph topologies, thus being more expressive in learning the distribution of graphs. Bias in Evaluator. In our experiments, we use SuperNet evaluator, low-data, and full-data Oracle evaluator to efficiently evaluate the model. From the computational efficiency perspective, one would prefer the SuperNet evaluator. However, it tends to give high rewards to those architectures used for training SuperNet. Although the low-data evaluator is more efficient than the full-data one, its reward is biased as discussed in Section 4.1. This bias is caused by the discrepancy between the data distributions in low-data and full-data regimes. We also tried to follow (Tan et al., 2019) to use early stopping to reduce the time cost of the fulldata evaluator. However, we found that it assigns higher rewards to those shallow networks which converge much faster in the early stage of training. We show detailed results in Appendix C.4. Search Space Design. The design of search space largely affects the performances of NAS methods. Our GraphPNAS successfully learns good architectures on the challenging RandWire search space. However, the search space is still limited as the cell graph across different stages is shared. A promising direction is to learn to generate graphs in a hierarchical fashion. For example, one can first generate a macro graph and then generate individual cell graphs (each cell is a node in the macro graph) conditioned on the macro graph. This will significantly enrich the search space by including the macro graph and untying cell graphs. Limitation to Scale up. Our method, as with all multi-trial generator-based NAS methods, does face a notable limitation: scaling up with an Oracle Evaluator proves challenging. One can resort to proxy evaluators, employing strategies such as weight-sharing, early-stopping, zero-cost-proxy, etc. However, these search results could be heavily influenced by the bias of the evaluator. On the upside, the decoupling of the architecture generator from the evaluator (unlike pruning-based DrNAS and weight-sharing methods like ENAS) implies that any improvement in evaluation can consistently result in superior NAS methods across all search spaces Conclusion. In this paper, we propose a GNN-based graph generator for NAS, called GraphPNAS. Our graph generator naturally captures topologies and dependencies of operations of well-performing neural architectures. It can be learned efficiently through reinforcement learning. We extensively study its performances on the challenging RandWire as well as two widely used search spaces. Experimental results show that our GraphPNAS consistently outperforms the RNN-based generator on all datasets. Future works include exploring ensemble methods based on our GraphPNAS and hierarchical graph generation on even larger search spaces. ## Acknowledgments This work was funded, in part, by the Vector Institute for AI, Canada CIFAR AI Chair, NSERC CRC, NSERC DG and Discovery Accelerator Grants, and Oracle Cloud credits. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through the Digital Research Alliance of Canada alliance.can.ca, and companies sponsoring the Vector Institute www. vectorinstitute.ai/\#partners, Advanced Research Computing at the University of British Columbia, and the Oracle for Research program. Additional hardware support was provided by John R. Evans Leaders Fund CFI grant and Compute Canada under the Resource Allocation Competition award. We would like to thank Raquel Urtasun, Wenyuan Zeng, Yuwen Xiong, Ethan Fetaya, and Thomas Kipf for supporting the exploration along this research direction before this work. ## References Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In *International Conference on Learning Representations ICLR*, 2017. Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. In *International Conference on Learning Representations ICLR*, 2018. Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. *Science*, 286(5439), 1999. James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. *Journal of machine* learning research, 13(2), 2012. James Bergstra, Daniel Yamins, and David D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In *Proceedings of the International Conference* on Machine Learning, ICML, 2013. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. *arXiv preprint arXiv:1505.05424*, 2015. Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: one-shot model architecture search through hypernetworks. In *International Conference on Learning Representations ICLR*, 2018. Marc Brockschmidt, Miltiadis Allamanis, Alexander L Gaunt, and Oleksandr Polozov. Generative code modeling with graphs. *arXiv preprint arXiv:1805.08490*, 2018. Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In *International Conference on Machine Learning*, pp. 678–687. PMLR, 2018. Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. In *International Conference on Learning Representations ICLR*. OpenReview.net, 2019. Seyed Saeed Changiz Rezaei, Fred X. Han, Di Niu, Mohammad Salameh, Keith Mills, Shuo Lian, Wei Lu, and Shangling Jui. Generative adversarial neural architecture search. pp. 2227–2234, 8 2021. doi: 10.24963/ijcai.2021/307. URL https://doi.org/10.24963/ijcai.2021/307. Main Track. Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. Dr{nas}: Dirichlet neural architecture search. In *International Conference on Learning Representations*, 2021. URL https: //openreview.net/forum?id=9FWas6YbmB3. Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of imagenet as an alternative to the cifar datasets. *arXiv preprint arXiv:1707.08819*, 2017. Hang Chu, Daiqing Li, David Acuna, Amlan Kar, Maria Shugrina, Xinkai Wei, Ming-Yu Liu, Antonio Torralba, and Sanja Fidler. Neural turtle graphics for modeling city road layouts. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4522–4530, 2019. Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1761–1770, 2019. Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. *arXiv preprint arXiv:1804.09081*, 2018a. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018b. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In *International Conference on Learning Representations ICLR*, 2019. Paul Erdös and Alfréd Rényi. On random graphs i. *Publicationes Mathematicae Debrecen*, 6, 1959. Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In *International Conference on Machine Learning*, pp. 1437–1446. PMLR, 2018. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059, 2016. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. *arXiv preprint arXiv:1704.01212*, 2017. Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. *arXiv* preprint arXiv:1803.10459, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2), 1983. Sian-Yao Huang and Wei-Ta Chu. Searching by generating: Flexible and efficient one-shot nas with architecture generator. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 983–992, 2021. Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. *arXiv preprint arXiv:1802.04364*, 2018. Thomas N Kipf and Max Welling. Variational graph auto-encoders. *arXiv preprint arXiv:1611.07308*, 2016. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Wei Li, Shaogang Gong, and Xiatian Zhu. Neural graph embedding for neural architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 4707–4714, 2020. Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. *arXiv preprint arXiv:1803.03324*, 2018. Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient graph generation with graph recurrent attention networks. In Advances in Neural Information Processing Systems, pp. 4255–4265, 2019. Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 19–34, 2018a. Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018b. Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. Graph normalizing flows. In Advances in Neural Information Processing Systems, pp. 13578–13588, 2019. Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look–generative nas is surprisingly efficient. In *European Conference on Computer Vision*, pp. 257–273. Springer, 2022. Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. Advances in neural information processing systems, 31, 2018a. Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. Advances in neural information processing systems, 31, 2018b. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013. Radford M Neal. *Bayesian learning for neural networks*, volume 118. Springer Science & Business Media, 2012. T Den Ottelander, Arkadiy Dushatskiy, Marco Virgolin, and Peter AN Bosman. Local search is a remarkably strong baseline for neural architecture search. *arXiv preprint arXiv:2004.08996*, 2020. Juan-Manuel Perez-Rua, Moez Baccouche, and Stéphane Pateux. Efficient progressive neural architecture search. In *BMVC*, 2018. Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 4095–4104. PMLR, 10–15 Jul 2018a. URL https://proceedings.mlr.press/v80/pham18a.html. Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. *arXiv preprint arXiv:1802.03268*, 2018b. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In *AAAI Conference on Artificial Intelligence*, 2019. Robin Ru, Pedro Esperanca, and Fabio Maria Carlucci. Neural architecture generator optimization. Advances in Neural Information Processing Systems, 33:12057–12069, 2020. Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang. Bridging the gap between samplebased and one-shot neural architecture search with bonas. Advances in Neural Information Processing Systems, 33:1808–1819, 2020. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pp. 2820–2828, 2019. Duncan J Watts and Steven H Strogatz. Collective dynamics of small-world networks. *Nature*, 393(6684), 1998. Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with neural architectures for neural architecture search. *arXiv preprint arXiv:1910.11858*, 2019. Colin White, Sam Nolen, and Yash Savani. Local search is state of the art for neural architecture search benchmarks. *arXiv preprint arXiv:2005.02960*, 2020. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492–1500, 2017. Saining Xie, Alexander Kirillov, Ross Girshick, and Kaiming He. Exploring randomly wired neural networks for image recognition. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1284–1293, 2019. Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang. Does unsupervised architecture representation learning help neural architecture search? *Advances in neural information processing systems*, 33:12486– 12498, 2020. Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. Nas-bench-101: Towards reproducible neural architecture search. In *International Conference on Machine Learning*, pp. 7105–7114, 2019. Jiaxuan You, Rex Ying, Xiang Ren, William L Hamilton, and Jure Leskovec. Graphrnn: Generating realistic graphs with deep auto-regressive models. *arXiv preprint arXiv:1802.08773*, 2018. Jiaxuan You, Jure Leskovec, Kaiming He, and Saining Xie. Graph structure of neural networks. *arXiv* preprint arXiv:2007.06559, 2020. Chris Zhang, Mengye Ren, and Raquel Urtasun. Graph hypernetworks for neural architecture search. arXiv preprint arXiv:1810.05749, 2018. Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, Zongyuan Ge, and Steven Su. Differentiable neural architecture search in equivalent space with exploration enhancement. Advances in Neural Information Processing Systems, 33:13341–13351, 2020. Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In *IEEE Conference on Computer Vision and Pattern Recognition, CVPR*, 2018. Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. *arXiv preprint* arXiv:1611.01578, 2016. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 8697–8710, 2018. ## A More Details On Graph Generator To more clearly illustrate the sampling process of our generator, we detailed probabilistic sampling process ![15_image_0.png](15_image_0.png) of our generator in Fig. 4. Figure 4: Detailed steps of auto-regressive generation with our graph generator. ## B Comparison With Nago Following Xie et al. (2019)'s work on random graph models, Ru et al. (2020) propose to learn parameters of random graph models using bayesian optimization. We compare with the randwire search space (refers to as RANG) in (Ru et al., 2020). Since the original search space in (Xie et al., 2019) do not reuse cell graphs for different stages, we train conditionally independent graph generators for different stages respectively. That is 3 conditionally independent generators for conv3, conv4, and conv5 stage in Table 7. We perform a search on the CIFAR10 dataset, where each model is evaluated for 100 epochs. We restrict the search budget to 600 oracle evaluations. We align with settings in (Ru et al., 2020) for retraining and report sampled architecture's test accuracy and standard deviation in the Table 6. We can see that our method learns a distribution of | Methods | Reference | Avg. Test Accuracy (%) | Std. | |-----------|-------------------|--------------------------|--------| | RANG-D | Xie et al. (2019) | 94.1 | 0.16 | | RANG-BOHB | Ru et al. (2020) | 94.0 | 0.26 | | RANG-MOBO | Ru et al. (2020) | 94.3 | 0.13 | | Ours | - | 94.6 | 0.18 | Table 6: Comparison of the searched results on CIFAR10. Mean test accuracy and the standard deviation are calculated over 8 samples from the searched generator. We align the search space design and retraining setting for a fair comparison. graphs that outperforms previous methods. ## C More Details On The Randwire Experiments C.1 Details Of Randwire Search Space Here we provided more details on the RandWire search space shown in Table 7 and Fig. 5. | Stage | Output | Base | Large | | | |------------|----------------------------------------|---------------------|---------|----------|-----| | | Cell | Channels | Cell | Channels | | | conv1 | 112×112 | conv3×3 | 32 | conv3×3 | 48 | | conv2 | 56×56 | conv3×3 | 64 | conv3×3 | 96 | | conv3 | 28×28 | G | 64 | G | 192 | | conv4 | 14×14 | G | 128 | G | 288 | | | | | G | 384 | | | conv5 | 7×7 | G | 256 | G | 586 | | classifier | 1×1 | 1×1 conv1×1, 1280-d | | | | | | global average pool, 200-d fc, softmax | | | | | Table 7: Randwire search space with base and large settings. Base is the default setting for search while ![16_image_0.png](16_image_0.png) Large refers to the architecture of scaled up models in Table 2. conv denote a ReLU-SepConv-BN triplet . The input size is 224×224 pixels. The change of the output size implies a stride of 2 (omitted in table) in the convolutions that are placed at the end of each block. G is the shared cell graph that has N = 32 node. Figure 5: Visualization of RandWire search base space used in this paper. Different from (Xie et al., 2019), G here is shared across three stages. ## C.2 Details For Randwire Experiments For experiment on Tiny-Imagenet, we resize image to 224 × 224 as showed in Table 7. We apply the basic data augmentation of horizontal random flip and random cropping with padding size 4. We provide detailed hyper-parameters for oracle evaluator training and learning for GraphPNAS in Table 8 | | Oracle Evaluator | Graph Controller | | |---------------------|--------------------|-------------------------------|------| | batch size | 256 | graph batch size | 16 | | optimizer | SGD | generator optimizer | Adam | | learning rate | 0.1 | generator Learning rate | 1e-4 | | learning rate deacy | consine lr decay | generator learning rate decay | none | | weight decay | 1e-4 | generator weight decay | 0. | | grad clip | 0. | generator gradient clip | 1.0 | | training epochs | 300 | replay buffer fitting epochs | 2000 | Table 8: Hyperparameter setting for oracle evaluator and training our graph generator. ## C.3 Visualization Of Architectures From Our Generator Here we visualize the top candidate architectures in Fig. 6 ## C.4 Bias For Early Stopping | evaluator | Final Val Acc | Average Path | Longest Path | |-----------------|-----------------|----------------|----------------| | early stopping | 61.86 | 2.595 | 6.125 | | low data regime | 62.57 | 3.046 | 8.75 | As discussed in Section 5, using early stopping will lead to local minimal where the generator learns to generate shallow cell structure. We quantify this phenomenon in table 9, where we can see that with early stopping training, the generator will generate more shallow architectures with a shorter path from input to output. The corresponding average final validation accuracy also dropped by a large margin compared to the low data evaluator counter part. Table 9: In the table, we show ablation on the choice of oracle evaluator with our graph generator. The average Path and Longest path are computed as the average path length and longest path length from input to output over 8 samples from the corresponding generator. ## D More Details On Enas Macro Experiments For ENAS Macro search space, we use a pytorch-based open source implementation2 and follow the detailed parameters provided in (Pham et al., 2018b) for RNN generator. Specifically, we follow (Pham et al., 2018b) to train the SuperNet and update the generator in an iterative style. At each search step, two sets of samples Gtrain and Geval are sampled from the generator. Gtrain is used to learn the SuperNet's weights by back-propagating the training loss. The updated SuperNet is used for evaluating Geval, which is then used for updating the generator. For our generator, we evaluate 100 architectures per step and update our generator every 5 epochs of SuperNet training. Instead of evaluating on a single batch, we reduce the number of models evaluated per step and evaluate on the full test set. We found this stables the training of our generator while keeping evaluation costs the same. In the replay buffer, the top 20% of architectures is kept. 2https://github.com/microsoft/nni/tree/v1.6/examples/nas/enas ![18_image_0.png](18_image_0.png) Figure 6: Visualization of Top 3 architectures sampled by each method. We observe that around 50% of samples from RNN generators are densely connected graphs or even fully connected graphs. ![19_image_0.png](19_image_0.png) Figure 7: The best architecture found by GraphPNAS and RNN generator (Pham et al., 2018b). Correspond to scores report in table 3. To get this architecture, we pre-evaluate 8 samples for both methods and select top-performing architecture. For training SuperNet and RNN generator, we follow the same hyper-parameter setting in (Pham et al., 2018b) except the learning rate decay policy is changed to cosine learning rate decay. For training our generator we use the same hyperparameter as in Table 8 with graph batch size changed to 32. For retraining the found the best architecture, we use a budget of 600 epoch training with a learning rate of 0.1, batch size 256, and weight decay 2e-4. We also apply a cutout with a probability of 0.4 to all the models when retraining the model. ## D.1 Visualization Of Best Architecture Found Here we visualize the best architecture found by GraphPNAS and RNN generator for Enas Macro search space in Fig. 7. ## E More Details On Nas Benchmark For sampling on NAS-Bench-101 (Ying et al., 2019), we first sample a 7-node DAG, then we remove any node that is not connected to the input or the output node. We reject samples that have more than 9 edges or don't have a path from input to output. To train our generator on Nas-Bench-101, we use ErdsRényi with p = 0.25, ϵ is set to 1 in the beginning and decreased to 0 after 30 search steps. For the replay buffer, we keep the top 30 architectures. Our model is updated every 10 model evaluations, where we train 70 epochs on the replay buffer at each update time. The learning rate is set to 1e-3 for with a batch size of 2 on Nas-bench-101.
Review 1: Summary: [Summary] This paper studies Graph-Generator-based methods for NAS. Since NAS search spaces can be viewed as DAG, the paper proposes to train an auto-regressive graph generator to generate graphs as architectures. The generated architectures are subsequently trained from scratch (or in a Supernet) to obtain their scores. Due to the non-differentiable nature of the pipeline, the author adopts REINFORCE algorithm to train the generator. Experimental results are conducted primarily on Three search spaces: RandWire (Tiny-ImageNet), NAS-Bench-201 (CIFAR-10/100, Tiny-ImageNet), and NAS-Bench-101 (CIFAR-10), the proposed method discovers better architecture compared with selected baselines (more on this in the weakness section). Strengths and Weaknesses: [Strength]: 1. This paper is very well-organized and easy to follow. Different aspects of the proposed methods are explained clearly. 2. Using auto-regressive models as the graph (architecture) generator is an interesting and novel idea in the context of NAS, and constitutes a key differentiating factor from the highly-relevant baseline NAGO. [Weakness]: 1. [Major] Missing relevant baselines or references. There exist more recent prior works that are relevant to the proposed methods. But they seem to be missing in the paper. 1. Recent GNN-generator-based NAS methods: For example, arch2vec [1] trains a simple Graph VAE and uses the encoder for NAS task, trained via Reinforcement Learning. 2. SOTA results on 201 and 101 benchmarks: The baselines included in 201 and 101 experiments are outdated (4-5 years old). There are more recent methods that achieve better results. For example, DrNAS [2] as a weight-sharing NAS, and arch2vec [1] as a multitrial NAS. They should be included to get a better sense of the effectiveness of the proposed method. 2. [Major] Search Cost & Lack of experiments at scale. The proposed method, when paired with an oracle evaluator, is too expensive for practical usage. As a result, the paper does not bench its method on experiments at scale (e.g. ImageNet-1K which is a standard experiment in NAS papers in recent years). One solution, as the author mentioned, is to use a supernet evaluator. However, this will lead to the same restriction on the search space design as all one-shot NAS (of which the most recent methods achieve stronger performance compared with the proposed method), thereby eliminating the flexibility of search space generation in the proposed method. [1] Yan et al. Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? NeurIPS 2020 \ [2] Chen et al. DrNAS: Dirichlet Neural Architecture Search. ICLR 2021 Requested Changes: 1. Include proper SOTA weight-sharing and GNN-based methods as baselines on the benchmark spaces. The works referred to in the weakness section only represent the reviewer’s somewhat not-up-to-date knowledge of the field, so there are likely more recent works that should be included as well. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper introduces GraphPNAS, a deep graph generative model to learn a distribution of high-performing architectures for Neural Architecture Search (NAS). The model leverages graph neural networks (GNNs) to capture the topologies of good neural architectures and the relationships between their operators. The whole model is trained by REINFORCE algorithm. Numerical results demonstrate that GraphPNAS consistently outperforms RNN-based generators and achieves comparable or better performance than state-of-the-art NAS methods. Strengths and Weaknesses: **Strength** 1. The paper is well organized and easy to follow. 2. The authors propose a unique approach to NAS by using a graph generative model to learn a distribution of high-performing architectures, which deviates from traditional NAS methods that focus on finding a single optimal architecture. 3. The proposed method is evaluated across three different search spaces, ranging from small cell-based search spaces to large topology search spaces, demonstrating its effectiveness. **Weakness** 1. The paper compares the proposed method with a broad range of models, but most of these models seem to be outdated, having been proposed before 2021. 2. The paper lacks some crucial implementation details. For instance, how to determine the node index in Section 3.1.1 is not clearly explained. 3. The paper lacks a comprehensive ablation study. Specifically, it does not provide a detailed analysis of how different hyperparameters impact the performance of the proposed method. Requested Changes: ## Proposed Changes: 1. The concept of modeling the distribution of model architectures and using graphs to embed model architectures is not new [1]. The authors should clarify the unique aspects and advantages of their proposed method more explicitly. 2. The choice of baseline models for comparison varies across different benchmarks. For instance, in the 'RandWire Search Space on Tiny-ImageNet' section, only traditional methods and RNN-based methods are used for comparison. The authors should provide a rationale for their choice of baseline models. 3. In Table 1, there is no clear correlation between results in low data and results in full data. The authors claim that the proposed method is less affected by such bias. This point could be elaborated further for clarity. [1]. Shi, Han, et al. "Bridging the gap between sample-based and one-shot neural architecture search with bonas." *Advances in Neural Information Processing Systems* 33 (2020): 1808-1819. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper studies the problem of neural architecture search and propose a graph neural network-based model GraphPNAS. Experiments show that the proposed graph generator achieves better or comparable performances than state-of-the-arts. Strengths and Weaknesses: Pros. 1. The problem is very important. 2. Combining GNN with NAS is kind of interesting. Cons. 1. The main baseline (backbone) RNN in Table 1 is from 2018, which makes the model comparison less meaningful. 2. The related work part is quite out-of-date. It cites only two 2021's works, no newer work. 3. The module very related to graphs is classic message passing, which makes the extension to graph generative model kind of trivial. 4. The novelty is kind of incremental. Authors combine message passing with a well-known learning method. 5. Author do not mention potential limitations. Requested Changes: 1. Add more up-to-date and strong baselines in the top venues for comparison. 2. Add more recent related works for discussion. 3. Go deeper and improve the method. Add something new about graph structure. 4. Include limitations. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: The initial reviews suggested that while the work had several strengths it also seemed to have missed surveying and comparing against some of the recent neural architecture search (NAS) literature. The authors provided detailed rebuttals and included: A) a survey of recent work, B) new experimental results with said work, and C) an improved discussion of the relationships and limitations of their work. These improvements were key and two of the three reviewers recommend acceptance. One also noted that "the overall novelty and potential of the proposed method remain evident." However, the third while recognizing that the literature was better surveyed also remarks: 1) the lack of large-scale experiments; 2) an incomplete analysis of the NAS-Bench-201 results (Table 4). The former (1) was discussed by the authors in their rebuttal. The authors also address the question in the updated version of their manuscript (Section 5, paragraph "Limitation to Scale up"). I find this to be sufficient given TMLR's criteria. The latter (2) remains, however. In particular, the authors in Table 4 did not add the DrNAS [1] baseline nor the optimal upper-bound results provided in that paper. Without these baselines, the authors' conclusion is that "our methods is [sic] either on-par-with or have outperformed previous graph generator based model on all three splits of NAS-Bench-201." However, [1] reports that DrNAS finds near-optimal solutions on this NAS-Bench-201 benchmark thereby questioning the authors' analysis. Adding DrNAS would be nice, but is perhaps not necessary. However, either adding the upper bound (given your exact methodology) or at least discussing these results in the context that this benchmark might be "solved" would, I find, provide a more accurate analysis. I also imagine other parts/sentences in the paper might have to be updated to take this discussion into account. Overall, this is a good paper and I recommend that it is accepted with minor revisions (discussed above). Congratulations! I look forward to working toward a camera-ready version of this work with its authors. [1] Chen, X., Wang, R., Cheng, M., Tang, X., & Hsieh, C.-J. DrNAS: Dirichlet Neural Architecture Search. ICLR 2021. ==================================================
# Prudex-Compass: Towards Systematic Evaluation Of Reinforcement Learning In Financial Markets Shuo Sun shuo003@e.ntu.edu.sg School of Computer Science and Engineering Nanyang Technological University Molei Qin *molei.qin@ntu.edu.sg* School of Computer Science and Engineering Nanyang Technological University Xinrun Wang∗*xinrun.wang@ntu.edu.sg* School of Computer Science and Engineering Nanyang Technological University Bo An∗*boan@ntu.edu.sg* School of Computer Science and Engineering Nanyang Technological University Reviewed on OpenReview: *https: // openreview. net/ forum? id= JjbsIYOuNi* ## Abstract The financial markets, which involve more than $90 trillion market capitals, attract the attention of innumerable investors around the world. Recently, reinforcement learning in financial markets (FinRL) has emerged as a promising direction to train agents for making profitable investment decisions. However, the evaluation of most FinRL methods only focuses on profit-related measures and ignores many critical axes, which are far from satisfactory for financial practitioners to deploy these methods into real-world financial markets. Therefore, we introduce **PRUDEX-Compass**, which has 6 axes, i.e., Profitability, Risk-control, Universality, Diversity, rEliability, and eXplainability, with a total of 17 measures for a systematic evaluation. Specifically, i) since most existing FinRL algorithms are only designed to maximize profit with poor performance under systematic evaluation, we introduce AlphaMix+, which leverages mixture-of-experts and risk-sensitive approaches, to serve as one strong FinRL baseline that outperforms market average on all 6 axes in PRUDEX-Compass, ii) we evaluate AlphaMix+ and 7 other FinRL methods in 4 long-term real-world datasets of influential financial markets to demonstrate the usage of our PRUDEX-Compass and the superiority of AlphaMix+, iii) PRUDEX-Compass1together with 4 real-world datasets, standard implementation of 8 FinRL methods, a portfolio management environment and related visualization toolkits is released as public resources to facilitate the design and comparison of new FinRL methods. We hope that PRUDEX-Compass can not only shed light on future FinRL research to prevent untrustworthy results from stagnating FinRL into successful industry deployment but also provide a new challenging algorithm evaluation scenario for the reinforcement learning (RL) community. ∗Corresponding Authors 1https://github.com/TradeMaster-NTU/PRUDEX-Compass ## 1 Introduction Quantitative trading (QT) refers to trading strategies, which applies mathematical models to automatically identify profitable trading opportunities (Chan, 2021). With the development of the artificial intelligence, the trading volume of QT continuously increases and accounts for more than 70% and 40% trading volumes, in developed markets (e.g., US) and developing markets (e.g., China), respectively (Karpoff, 1987). How to make profitable investment decisions against the various uncertainties in quantitative trading becomes one of the main challenges for financial practitioners (An et al., 2022). Among the various machine learning methods, such as deep learning (Xu & Cohen, 2018; Sawhney et al., 2021) and boosting decision trees (Ke et al., 2017), deep reinforcement learning (DRL) is attracting increasing attention from both academia and financial industries (Sun et al., 2023) due to its stellar performance on solving complex sequential problems such as Go (Silver et al., 2017), StarCraft-II (Vinyals et al., 2019), nuclear fusion (Degrave et al., 2022) and matrix mulplication (Fawzi et al., 2022). Deep RL has achieved significant success in various quantitative trading tasks. Specifically, FDDR (Deng et al., 2016) and iRDPG (Liu et al., 2020b) are designed to learn financial trading signals and micmic behaviors of professional traders for algorithmic trading, respectively. For portfolio management, deep RL methods are proposed to account for the impact of market risk (Wang et al., 2021b) and the commission fee (Wang et al., 2021a). A PPO-based framework (Lin & Beling, 2020) is proposed for order execution and a policy distillation mechanism is added to bridge the gap between imperfect market states and optimal execution actions (Fang et al., 2021). For market making, deep RL methods are introduced from both game-theoretic (Spooner et al., 2018) and adversarial learning (Spooner & Savani, 2020) perspectives as an adaptation of traditional mathematical models. However, the evaluation of existing FinRL methods (Sun et al., 2022; Wang et al., 2021a; Fang et al., 2021; Liu et al., 2020a) only focuses on profit-related measures, which ignores several critical axes, such as risk-control and reliability. In addition to profitability, financial practitioners care about many other aspects of FinRL methods, i.e., how much risk I need to take for per unit of profit; how FinRL algorithms behave when the market status changes. In preliminary experiments, we find many examples that indicate the weakness of existing profit-seeking FinRL algorithms. For instance, IMIT (Ding et al., 2018) may lead to catastrophic capital loss when black swan events happen (Section 6.7 ) and SAC (Haarnoja et al., 2018) shows poor risk-control ability, which is not an ideal option for conservative traders (Section 6.3). In practice, with the existence of low signal-to-noise ratio and distribution shift in financial markets (Malkiel, 2003), FinRL methods with only great profitability performance on backtesting are very likely to overfit on historical data and become risky to be deployed in real-world scenarios (De Prado, 2018). As Robert Pardo (CEO of a hedge fund) said, he will never trade with a method that does not prove itself through a systematic evaluation (Pardo, 2011). Therefore, a benchmark for the systematic evaluation of FinRL methods is urgently needed. In this paper, we first introduce PRUDEX-Compass, which has 6 axes with a total of 17 measures for systematic evaluation of FinRL methods. we then propose AlphaMix+, a deep RL method composed of diversified mixture-of-experts and risk-aware Bellman backup, as a strong FinRL baseline that significantly outperforms existing FinRL methods under systematic evaluation by mimicking the bottom-up hierarchical trading strategy design workflow in real-world companies (Khorana et al., 2007). In addition, we evaluate 7 widely used FinRL methods together with AlphaMix+ on 4 long-term real-world datasets spanning over 15 years on popular trading tasks to demonstrate the usage of PRUDEX-Compass and the superiority of AlphaMix+. Accompanied with an open-source library1 of datasets, baseline implementation, RL environment and evaluation toolkits, we call for a change in how we evaluate FinRL methods to facilitate the industry deployment of FinRL methods. Moreover, PRUDEX-Compass also provides the RL community new algorithm evaluation scenarios to test the effectiveness of novel RL algorithms in the ever-changing financial markets. ## 2 Prudex-Compass: Systematic Evaluation Of Finrl To provide a clear exposition of FinRL evaluation, we introduce PRUDEX-Compass to provide an intuitive visual means to give financial practitioners a sense of comparability and positioning of FinRL methods. PRUDEX-Compass is composed of two central elements: i) the axis-level (inner), which specifies the different ![2_image_0.png](2_image_0.png) 1 Figure 1: Illustration of our FinRL methods evaluation benchmark PRUDEX-Compass. The inner star plot provides a visual indication of the relative strength of different FinRL methods in terms of six axes. A mark on the star plot's inner circle suggests the market average3. The outer level of the PRUDEX-Compass specifies which particular measures have been evaluated in practice to show the status of evaluation. We fill the compass with AlphaMix+ and 7 widely-used FinRL methods to provide an intuitive example. In general, AlphaMix+ gets the best performance (largest score in 4 out of 6 axes) with a comprehensive evaluation (much more markers in the outer level). axes considered for FinRL evaluation and ii) measure-level (outer), which specifies the measures used for benchmarking FinRL methods. Figuratively speaking, the axis-level maps out the relative strength of FinRL methods in terms of each axis, whereas the measure-level provides a compact way to visually assess which setup and evaluation measures are practically reported to point out how comprehensive the evaluation are for FinRL algorithms. To provide a practical basis, we have directly filled the exemplary compass visualization in Figure 1. The contributions of PURDEX-Compass are three-fold: i) carefully collecting 17 measures from the literature of multiple disciplines (e.g., finance, artificial intelligence, statistics and engineering) and properly categorizing them into 6 axes; ii) proposing customized version of the measures suitable for FinRL together with a set of easy-to-use visualization tools (Section 6.2–6.7); iii) introducing PRUDEX-Compass, a unified visual interface composed of two core elements to indicate the relative performance strength of FinRL methods (inner part) and their evaluation completeness (outer part). We remark that the visualization design of PRUDEX-Compass is inspired and built on top of the template code provided in CLEVA-Compass2 (Mundt et al., 2021). ## 2.1 Axes Of Prudex-Compass We choose the design of the axis-level compass as a star diagram following the idea of (Wang et al., 2022; Mundt et al., 2021). Specifically, the axis-level element contains two hexagons, marks on all vertices of the two hexagons4, and lines connecting the central point and vertices of hexagons. We add marks in the middle of central point and outer circle of the axis-level to indicate the performance of market average for convenient check. To indicate the relative strength of FinRL methods in terms of 6 critical evaluation perspectives, we first calculate the normalized score of each FinRL algorithm in terms of each axis by normalizing the numeric values of original experimental results into an integer score t ∈ [0, 100]. Inspired by the widely used performance ratio in Arcade Learning Environment (Mnih et al., 2015; Machado et al., 2018), we choose to normalize FinRL methods performance based on their relative performance to market average strategy, which is considered as the golden standard for many financial practitioners. Specifically, we assign score t = 50 to market average and introduce a parameter k%, where k% better or worse relative to the market average is assigned 100 and 0, respectively. We further clip score values, which are larger than 100 to 100 and lower than 0 to 0, to alleviate the impact of extreme values (e.g., extremely high return rate under markets with sudden 2https://github.com/ml-research/CLEVA-Compass 3Market average indicates the trading strategy, which invests equal amount of money into all financial assets in the pool, to reflect the average market conditions. 4The edges and vertices of the inner hexagon become slightly blurry but still distinguishable after filling in the results of 8 FinRL algorithms. increase of some Crypto assets). For the value choice of k, we find it is an empirically robust option to set k = 20 for our experiments on 8 FinRL methods across 4 financial markets. Moreover, we consult with our industry collaborators and they agree that it is reasonable to consider 20% better than the market average as a very successful trading strategy (e.g., t = 100). Users can directly follow our settings and compare the score (larger is better). Detailed descriptions of normalization equations are available in Appendix A.3. We then decide the position of vertices based on the score and connect them to provide a visual impression on the performance of each FinRL method. The advantages of this design are three-fold: i) it provides an ideal visual representation when comparing plot elements (Fuchs et al., 2014), ii) it allows human perceivers to quickly learn more facts by fast visual search (Elder & Zucker, 1993), and iii) the geometric region boundaries in the star plot have high priority in perception (Elder & Zucker, 1998). We introduce the 6 axis-level elements of PRUDEX-Compass as follows: Profitability. Aligned with the key objective of QT, profitability focuses on the evaluation of FinRL methods' ability to gain market capital. Besides pure return, it also measures how stable (Franco & Leah, 1997) and persistent (Gârleanu & Pedersen, 2013) FinRL methods are to achieve high profit. Risk-Control. Due to the well-known tradeoff between profit and risk in finance (Brink & McCarl, 1978), financial practitioners take great efforts on the assessment and control of both systematic risk and idiosyncratic risk (Goyal & Santa, 2003), which is also of vital importance in FinRL evaluation. Universality. The financial market is a complex ecosystem that involves innumerable assets, countries, time-scale and trading styles. Universality tries to evaluate FinRL's ability to achieve satisfied performance (e.g., better than market average) in various quantitative trading scenarios. Designing FinRL methods with better universality (Fang et al., 2021) is in line with popular machine learning topics such as transfer learning (Pan & Yang, 2009) and meta learning (Hospedales et al., 2021). Diversity. In finance, diversification refers to the process of allocating capitals in a way that reduces the exposure to any one particular asset or risk. As Markowitz (Nobel Laureate in Economics) said (Tu & Zhou, 2011), diversity is the only free lunch in investing that plays an indispensable role on enhancing profitability and risk-control. In RL community, diversity is widely used to encourage exploration (Parker et al., 2020). This axis of PRUDEX-Compass tends to address the lack of diversity evaluation of FinRL methods. Reliability. RL methods have the tendency to be highly variable in performance and sensitive to many factors such as random seeds (Henderson et al., 2018) and market stationarity shift across time (Lee et al., 2010). This variability issue hinders a reliable method and can be costly or even dangerous for high-stake applications such as quantitative trading. This axis introduces techniques on RL reliability evaluation (Chan et al., 2019; Agarwal et al., 2021) with a focus on quantitative trading. Explainability. Psychologically speaking, *if the users do not trust a model, they will not use it* (Ribeiro et al., 2016). Explainability generally refers to any technique that helps users or developers of models understand why models behave the way they do. In FinRL, it can come in the form that tells traders which model is effective under what market conditions or why one trading action is mistaken and how to fix it. Rigorous regulatory requirements in financial markets further enhance its importance for model debugging (Bhatt et al., 2020), monitoring (Pinto et al., 2019) and audit (Bhatt et al., 2020). ## 2.2 Measures Of Prudex-Compass As the inner star plot contextualizes macroscopic axes of FinRL evaluation, the outer measure-level places emphasis on detailed evaluation setup and metrics. In essence, a mark on the measure-level indicates that a method practically reports corresponding measures in its empirical investigation, where more marks indicate a more comprehensive evaluation. We list the 17 measures on the outer level of the PRUDEX-Compass in Table 1 with brief descriptions. In addition, we leave measures of FinRL explainability as future work due to the lack of FinRL algorithms with solid design of explainability. We conduct literature review on RL explainability and point their potential application in FinRL as follows. DSP (Landajuela et al., 2021) is proposed to discover symbolic policy with expert knowledge. Differentiable decision trees are incorporated into RL for better explainability (Silva et al., 2020). Another line of works tries to discover interpretable features with techniques such as self-supervised learning (Shi et al., 2020) and adversarial learning (Gupta et al., 2020). Open-XAI (Agarwal et al., 2022) offers a comprehensive open-source framework for evaluating and benchmarking post hoc explanation methods. We plan to incorporate suitable evaluation methods, i.e., LIME (Ribeiro et al., 2016) and SHAP (Lundberg & Lee, 2017), from Open-XAI into PRUDEX-Compass with customized adoption on decision-based FinRL methods. Table 1: Brief summary of evaluation measures in outer level of PRUDEX-Compass: Profitability, Risk-control, Universality, Diversity, rEliability, and eXplainability. | Universality, Diversity, rEliability, and eXplainability. Axes Measures | Descriptions | | |---------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Profit | A class of metrics to assess FinRL's ability to gain market capital. | | | P | Alpha Decay | Loss in the investment decision making ability of FinRL methods over time due to distribution shift in financial markets (Pénasse, 2022). | | Equity Curve | A graphical representation of the value changes over time. | | | Risk | A class of metrics to assess the risk level of FinRL methods (Shiller, 1992). | | | R | Risk-adjusted | A class of metrics that calculate the normalized profit with regards to different | | Profit | kinds of risks, i.e., volatility and downside risk (Magdon & Atiya, 2004). | | | Extreme | The relative performance of FinRL methods on extreme market condition during | | | Market | black swan events (Aven, 2013) such as war and covid-19. | | | Country | Financial market across both developed countries (e.g., US and Europe) and developing countries (e.g., China and India). | | | Asset Type | Various financial asset types, i.e., stock, future, FX and Crypto | | | U | Time-Scale | Both coarse-grained (e.g., day level) and fine-grained (e.g., second level) financial data to match different trading styles. | | t-SNE | A statistical visualization tool to map high-dimensional time-series data points into 2-D dimension (Vander & Hinton, 2008) to assess the data-level diversity. | | | D | Entropy | Entropy-based metrics from information theory (Reza, 1994) to show the diversity of FinRL methods' trading behaviors. | | Correlation | Metrics that account the correlation (Kirchner & Zunckel, 2011) between financial assets to assess the diversity of FinRL methods. | | | Diversity | A visualization tool to demonstrate the diversity of investment decisions among | | | Heatmap | different financial assets with heatmap (Harris et al., 2020) | | | Performance | A visualization of FinRL methods' empirical score distribution (Dolan & Moré, | | | Profile | 2002), which is easy to read with qualitative comparisons. | | | E | Variability | The performance standard deviation across different random seeds and hyperparameters (Henderson et al., 2018). | | Rolling | Using rolling time window to retrain or fine-tune FinRL methods and evaluate | | | Window | the performance on multiple test periods (De Prado, 2018). | | | Rank | A visualization toolkit to show the rank of FinRL methods across different | | | Comparison | metrics, which will not be dominated by extreme values (Agarwal et al., 2021). | | | X | - | We discuss current status and highlight promising further directions. | | 3 | FinRL Preliminaries and Problem Formulation | | ## 3.1 Portfolio Management Portfolio management is a fundamental quantitative trading task (Sun et al., 2023), where investors hold a pool of different financial assets, i.e., stocks, bonds, as well as cash, and reallocate the proportion of capitals invested in each asset periodically to maximize future profit. OHLCV refers to the raw information of bar chart acquired from the financial markets. OHLCV vector at time t is denoted as xt = (p o t , ph t , plt , pc t , vt), where p o t is the open price, p h t is the high price, p l t is the low price, p c t is the close price and vt is the volume. Technical Indicator indicates high-order features derived based on a formulaic combination of the original OHLCV with financial insights. We define the vector of technical indicator at time t: yt =Sk y k t , where y k t = fk(xt−h, ..., xt, θk), fk and θ k are the formula function and the parameter of technical indicator k, respectively. Portfolio is the proportion of capitals allocated to each asset that can be represented as a vector: $$\mathbf{w_{t}}=[w_{t}^{0},w_{t}^{1},...,w_{t}^{M}]\in R^{M+1}\quad a n d\quad\sum_{i=0}^{M}w_{t}^{i}=1$$ t = 1 (1) where M + 1 is the number of portfolio's constituents, including cash and M financial assets. w i t represents the ratio of the total portfolio value invested at time t on asset i and w 0 t represents cash. Asset Price refers to the vector of close price for each financial asset defined as pt = [p 0 t , p1 t , ..., pM t ], where p i t is the close price of asset i at time t. Note that the price of cash p 0 t is a constant. Portfolio Value vt+1 at time t + 1 is defined based on the asset price change and portfolio weight as: $$(1)$$ $$v_{t+1}=v_{t}\sum_{i=0}^{M}{\frac{w_{t}^{i}\ p_{t+1}^{i}}{p_{t}^{i}}}$$ $$\left(2\right)$$ t(2) The objective of portfolio management is to maximize the final portfolio value given a long-term time horizon by dynamically tuning the portfolio weight at each time step. As a unified benchmark, evaluation metrics proposed in PRUDEX-Compass can be easily adopted to all quantitative trading tasks. We focus on portfolio management in this work as a demonstrative example. ## 3.2 Mdp Formulation We consider a standard RL scenario in which an agent (investor) interacts with an environment (the financial market) in discrete time. Formally, we introduce MDP, which is defined by the tuple: MDP = (S, A*, P, R, γ, H*). Specifically, S is a finite set of states. A is a finite set of actions. P : *S × A × S −→* [0, 1] is a state transaction function, which consists of a set of conditional transition probabilities between states. R : *S × A −→ R* is the reward function, where R is a continuous set of possible rewards and R indicates the immediate reward of taking an action in a state. γ ∈ [0, 1) is the discount factor and H is a time horizon indicating the length of the trading period. A (stationary) policy πθ : *S × A −→* [0, 1], parameterized by θ, assigns each state s ∈ S a distribution over actions, where a ∈ A has probability π(a|s). A Q-value function gives expected accumulated reward when executing action at in state st and following policy π in the future, which is Q(st, at) = E(st+1*,...,π*) hPT i=t γ ir(si, ai) i. During training, one episode corresponds to adjusting the portfolio at each time step through the whole trading periods, i.e., time scope of training set, with time horizon H. The objective of the agent is to learn an optimal policy: πθ ∗ = *argmax*πθ Eπθ hPT i=t γ irt+i| st = s i. State st ∈ S at time t involves the concatenation of technical indicator vectors of M financial assets (y 1 t , y 2 t , ..., y i t ) as the external markets state, where y i t represent the technical indicator vector of asset i at time t. In addition, a 2-dimension tuple of investors' position and remaining cash is added as internal state. **Action space** at time t is an M + 1 dimension vector [w 0 t , w1 t , ..., wM t ] as a portfolio wt to represent the proportion of capitals invested at each asset. **Reward** rt at time t is the change of portfolio value: rt = vt+1 − vt, where positive/negative values indicate earning/losing money, respectively. ## 3.3 Soft Actor-Critic (Sac) And Other Popular Finrl Methods We introduce SAC (Haarnoja et al., 2018), which is the base model of many popular FinRL methods (Yuan et al., 2020). SAC is a popular off-policy actor-critic RL method based on the maximum entropy RL framework (Ziebart, 2010), which maximizes a weight objective of the reward and the policy entropy, to encourage robustness to noise and exploration. For parameter updating, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function Qθ(st, at), which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual: $\mathcal{L}_{critical}^{\text{SAC}}(\phi)=\mathbb{E}_{\tau_{t}\sim\mathcal{D}}[L_{Q}(\tau_{t},\theta)],$ $L_{Q}(\tau_{t},\theta)=\left(Q_{\theta}(s_{t},a_{t})-r_{t}-\gamma\bar{V}(s_{t+1})\right)^{2},$ _where $\bar{V}(s_{t})=\mathbb{E}_{a_{t}\sim\pi_{\phi}}[Q_{\bar{y}}(s_{t},a_{t})-\alpha\log\pi_{\phi}(a_{t}\mid s_{t})].$_ where τt = (st, at, rt, st+1) represents a transition, D indicates a replay buffer, ¯θ are the delayed parameters, and α is used as a temperature parameter. To conduct steps of soft policy improvement, the policy π with its parameter θ is updated through minimizing the following objective: $$\begin{array}{l}{{{\mathcal{L}}_{a c t o r}^{\mathrm{SAC}}(\theta)=\mathbb{E}_{s_{t}\sim\mathcal{D}}[L_{\pi}(s_{t},\phi)],}}\\ {{L_{\pi}(s_{t},\phi)=\mathbb{E}_{a_{t}\sim\pi_{\phi}}[\alpha\log\pi_{\phi}(a_{t}\mid s_{t})-Q_{\theta}(s_{t},a_{t})].}}\end{array}$$ To handle continuous action spaces, the policy is modeled as a Gaussian with mean and covariance given by neural networks. In addition to SAC, A2C (Mnih et al., 2016), a popular actor-critic RL method, shows stellar performance in algorithmic trading (Zhang et al., 2020). The simple and efficient policy gradient method PPO (Schulman et al., 2017) performs well in capturing trading opportunities for order execution (Lin & Beling, 2020). EIIE (Jiang et al., 2017) and Investor-Imitator (IMIT) (Ding et al., 2018) are two pioneering works that apply deep RL for quantitative trading. Furthermore, SARL (Ye et al., 2020) and Deeptrader (Wang et al., 2021b) are proposed with augmented market embedding to take market risk into account for portfolio management. ## 4 Alphamix+: A Strong Baseline Before diving into our systematic evaluation benchmark PRUDEX-Compass5, AlphaMix+, an FinRL algorithm based on ensemble learning, is proposed to fill the gap due to the poor performance of existing FinRL methods under systematic evaluation. Considering the limitation of existing FinRL methods, the major one is that investment decisions are made by a single agent with high potential risk. The success of real-world trading firms relies on an efficient bottomup hierarchical workflow with risk management as illustrated in Figure 2). First, multiple experts conduct data analysis and build models independently based on personal trading style and risk tolerance. Later on, a senior portfolio manager summarizes their results, manage risk and makes final investment decisions. Figure 2: Workflow of real-world trading firms. ![6_image_0.png](6_image_0.png) (6) $\binom{7}{7}$ . Inspired by it, we propose AlphaMix+, a universal deep RL framework with diversified risk-aware mixture-ofexperts following the idea of (Lee et al., 2021b), to mimic this efficient workflow. In principle, AlphaMix+ can be combined with most modern off-policy RL algorithms for any quantitative trading task. As SAC (Haarnoja et al., 2018) is a sample-efficiency algorithm in quantitative trading (Yuan et al., 2020), we pick it as the base model of AlphaMix+ here for exposition. An overview of AlphaMix+ is shown in Figure 3. Risk-aware Bellman backup. We consider a trading firm with N trading experts, i.e., {Qθi , πϕi } N i=1, where θi and ϕi denote the parameters of the i-th experting expert's soft Q-function and policy. To apply classic Q-learning based on the Bellman backup in Eq. (3) in FinRL, one major issue is the severe negative impact of error propagation that induces the market noise to the learning trading signals (true Q-value) of 5Readers whose main interest is the evaluation benchmark may skip this Section and take AlphaMix+ as a strong FinRL baseline. 7 the current Q-function (Kumar et al., 2020) , which can cause unstable convergence. In order to overcome this issue, for trading expert i, we apply a risk-aware Bellman backup (Lee et al., 2021b) as follows: $${\mathcal{L}}_{W Q}(\tau_{t},\theta_{i})=w(s_{t+1},a_{t+1})(Q_{\theta_{i}}(s_{t},a_{t})-r_{t}-\gamma\bar{V}(s_{t+1}))^{2}$$ (st, at) − rt − γV¯ (st+1))2(8) where τt = (st, at, rt, st+1) is a transition, at+1 ∼ πϕi (a | st), and w(*s, a*) is a confidence weight based on the ensemble of target Q-functions: w(*s, a*) = σ(−Q¯std(*s, a*) ∗ T) + k (9) where Q¯std(*s, a*) indicates the empirical standard deviation of all trading experts' target Q-functions {Qθ¯i } N i=1 and T > 0 is a temperature parameter (Hinton et al., 2015) to adapt the scale of Q¯std(*s, a*). σ is the sigmoid function and k > 0 is used to control the value range of confidence weight. The confidence weight is bounded in [*k, k* + 0.5] as Q¯std(*s, a*) is always a positive number. Intuitively, the objective LW Q down weights the sample transitions with inconsistent trading suggestions from different experts (high variance across target Q-functions), resulting in a loss function for the Q-updates with better risk management. ![7_image_0.png](7_image_0.png) $$({\boldsymbol{\delta}})$$ Diversified Experts. We encourage the diversity between trading experts with two simple yet efficient tricks based on (Lee et al., 2021b). First, we initialize the model parameters of all trading experts with random parameter values for inducing an initial diversity in the models following (Lakshminarayanan et al., 2017; Wenzel et al., 2020). Second, we apply different samples to train each agent based on similar idea in BatchEnsemble (Wen et al., 2019). For each trading expert i at each time step t, we construct the binary masks mt,i by sampling from the Bernoulli distribution with parameter β ∈ (0, 1]. Later on, we multiply the bootstrap mask to each objective function (e.g., mt,iLπ(st, ϕi) and mt,iLW Q(τt, θi) in Eq.(7) and Eq.(8)) while updating parameters of trading experts. This encourages each expert to think individually with diversified strategies. We find it sufficient for AlphaMix+ to have desired diversity (experiments in Section 6.6) with the two simple tricks. Other tricks such as a KL divergence (Yu et al., 2013) loss term is not further incorporated to keep simplicity. We remark that although similar diversity encouragement techniques have been shown effective in classic RL tasks (Osband et al., 2016; Lee et al., 2021b), this work is the first to explore their potential in financial markets. We conduct ablation studies on the effectiveness of each component in AlphaMix+ and parameter analysis to probe sensitivity. We put related experimental results in Appendix C.2 and C.3, respectively. Figure 3: An illustration of AlphaMix+, a universal deep RL framework with diversified risk-aware mixture-of-experts. ## 5 Experimental Setup 5.1 Datasets ![7_image_1.png](7_image_1.png) We collect real-world financial datasets spanning over 15 years of US stock, China stock, Cryptocurrency (Crypto) and Foreign Exchange (FX) from Yahoo Finance and Kaggle. All raw data and related processing scripts are publicly available. We summarize statistics of the 4 datasets with further elaboration in Table 2. US Stock dataset contains 10-year historical prices of 29 influential stocks with top unit price as a strong assessment of the market's overall health and tendencies. China Stock dataset contains 4-year historical prices of 47 influential stocks with top capitalization from the Shanghai exchange. Both US and China stock data is collected from Yahoo-Finance6. Crypto7 dataset contains 6-year historical 6https://github.com/yahoo-finance 7https://www.kaggle.com/datasets/sudalairajkumar/cryptocurrencypricehistory Figure 4: Train/valid/test split procedure with rolling windows. prices of 9 influential virtual currency with top unit price and trading volume collected. FX8 dataset contains 20-year historical prices of 22 most popular currency with top foreign exchange reserves for US dollars. For each dataset, we filter out financial assets with missing values. For data split, we apply the similar split procedure in (Sawhney et al., 2020) with rolling window for all four datasets. As shown in Figure 4, phase 3 uses the last year for test, penultimate year for validation and the remaining of the dataset for training. For phase one and two, their validation/test sets roll back one and two years, respectively. | | Table 2: Dataset statistics | | | | | | | |-------------|-------------------------------|------|--------|------|----------|----------|--------| | Dataset | Market | Freq | Number | Days | From | To | Source | | US Stock | US | 1d | 29 | 2517 | 12/01/03 | 21/12/31 | Yahoo | | China Stock | China | 1h | 47 | 1036 | 16/06/01 | 20/09/01 | Yahoo | | Crypto | - | 1d | 9 | 2014 | 16/01/01 | 21/07/06 | Kaggle | | FX | - | 1d | 22 | 5015 | 00/01/03 | 19/12/31 | Kaggle | ## 5.2 Features We generate 11 temporal features as shown in Table 3 to describe the stock markets following (Feng et al., 2018; Yoo et al., 2021). zopen, z*high* and zlow represent the relative values of the open, high, low prices relative to the close price at current time step, respectively. z*close* and zadj_*close* represent the relative values of the closing and adjusted closing prices compared with time step t − 1. zdk represents a long-term moving average of the adjusted close prices during the last k time steps relative to the current close price. We apply z-score normalization on each feature. Table 3: Features to describe the financial markets Features **Calculation Formula** zopen, zhigh, zlow zopen = opent*/close*t − 1 zclose zclose = closet*/close*t−1 − 1 zd_5, zd_10, zd_15 zd_20, zd_25, zd_30zd_5 = P4 i=0 *close*t−i/5 closet− 1 ## 5.3 Baselines We conduct experiments with 7 representative FinRL methods including 3 classic RL methods: A2C (Mnih et al., 2016), PPO (Schulman et al., 2017), SAC (Haarnoja et al., 2018), 4 RL-based trading methods: EIIE (Jiang et al., 2017), Investor-Imitator (IMIT) (Ding et al., 2018), SARL (Ye et al., 2020) and DeepTrader (DT) (Wang et al., 2021b) and our AlphaMix+. Descriptions of baselines are as follows: - A2C (Mnih et al., 2016) is a classic actor-critic RL algorithms that introduce an advantage function to enhance policy gradient update by reducing variance. - PPO (Schulman et al., 2017) is a proximal policy optimization that constrain the difference between current policy and updated policy with simplified clipping term in the objective function. - SAC (Haarnoja et al., 2018) is a widely-used off-policy actor-critic method based on the maximum entropy RL framework to encourage algorithm robustness. - SARL (Ye et al., 2020) proposes a state-augmented RL framework, which leverages the price movement prediction as additional states, based on deterministic policy gradient (Silver et al., 2014) methods. - DeepTrader (DT) (Wang et al., 2021b) is a policy gradient based method. To tackle the risk-return balancing issue, it simultaneously uses negative maximum drawdown and price rising rate as reward functions to balance between profit and risk with an asset scoring unit. - EIIE (Jiang et al., 2017) is a deterministic policy gradient based RL framework, which contains: 1) an ensemble of identical independent evaluators topology; 2) a portfolio vector memory; 3) an online stochastic batch learning scheme. 8https://www.kaggle.com/datasets/brunotly/foreign-exchange-rates-per-dollar-20002019 - Investor-Imitator (IMIT) (Ding et al., 2018) imitates behaviors of different investors (e.g., oracle/collaborate/public investor) using investor-specific reward functions with a set of logic descriptors. Non-RL methods are not included as baselines for two reasons: i) In general, RL methods outperform different types of non-RL methods (Moskowitz et al., 2012; Ke et al., 2017; Xu & Cohen, 2018) in different trading tasks. ii) PRUDEX-Compass focuses on the evaluation of RL methods in financial markets. We leave comparison with non-RL methods as future directions and discuss our plan in Section 7. ## 5.4 Training Setup We perform all experiments on an RTX 3090 GPU with 5 fixed random seeds. We apply grid search for AlphaMix+ on Crypto and FX datasets and apply the same hyperparameters on China and US stock datasets. We try scale parameter k in list [0.3, 0.5, 0.7, 0.9], binomial sample parameter β in list [0.3, 0.4, 0.5, 0.6, 0.7] and temperature T in list [18, 19, 20, 21, 22].We explore batch size in list [256, 512, 1024] and hidden size in range [64, 128], We apply learning rate 7e −4for both actor and critic. Adam is used as the optimizer. One full list of hyperparameters is available in Appendix B.1. We train AlphaMix+ for 10 epochs on all datasets. It takes about 60 minutes to train and test on China stock, US stock, FX and Crypto datasets, respectively. For other FinRL methods, there are two conditions: i) if there are authors' official or open-source FinRL library (Liu et al., 2020a) implementations, we apply the same hyperparameters9for a fair comparison. This condition applies for A2C, PPO, SAC, SARL and DeepTrader. ii) if there are no publicly available implementations, we reimplement the algorithms and try our best to maintain consistency based on the original papers. This applies for EIIE and IMIT. ## 5.5 Rl Environment Implementation In this work, we apply the popular portfolio management environment (Liu et al., 2020a) implemented based on OpenAI Gym (Brockman et al., 2016), which simulates live financial markets with realistic historical market data according to the principle of time-driven simulations. During training, we feed observations of technical indicators as input of RL agents. RL agents generate a portfolio (action) and the environment returns the net value change at each time step as reward. By interacting with the environment, the trading agents will try to derive a trading strategy with high profits. In addition, the environment assumes the trading volume of agents is not very large and has little impact on the market. Then, it is reasonable to use the price fluctuation of offline historical financial data to build a model for reward calculation during online simulation. We provide a concrete example here to further clarify how FinRL environment is built. Considering a simple trading scenario with only one stock, we obtain p c t and p c t+1, which denote the close price of the stock at time t and t + 1, from historical data. The action at time t is to buy k shares of the stock. Then, the reward rt at time t is the account profit defined as k ∗ (p c t+1 − p c t ). For state, we use historical market data to calculate technical indicators in Table 3 as external state and investors' private information such as remaining cash and current position is applied as internal state. Similar procedures to build RL environment with historical market data have been applied in many FinRL work (Liu et al., 2020a; Wang et al., 2021b; Ye et al., 2020). Readers may check our open source code1for more details of the RL environment. ## 6 Demonstrative Usages Of Prudex-Compass And Related Evaluation Toolkits In this section, we conduct experiments on portfolio management with real-world datasets of 4 influential financial markets to demonstrate the usage of PRUDEX-Compass and related evaluation toolkits. In Section 6.1, we show how different investors can get a general impression on FinRL algorithms' performance with PRUDEX-Compass. Moreover, we provide example usage of other evaluation toolkits with a focus on one particular perspective including: (1) t-SNE plot to show data-level diversity, (2) PRIDE-Star to report the performance of 8 point-wise financial metrics for evaluating profitability, risk-control and diversity, (3) performance profile and rank distribution plot as unbiased and robust measures towards reliable FinRL methods, (4) portfolio diversity heatmap to evaluate the decision-level diversity, (5) extreme market scenarios 9Both authors' official or open-source FinRL library (Liu et al., 2020a) implementations are tuned in FinRL domains. with black swan events to evaluate the risk-control and generalization ability of FinRL algorithms. In particular, investors can either use these evaluation toolkits together with PRUDEX-Compass to pursue a systematic evaluation or as an independent measure with a focus on the perspective they care about. ## 6.1 A General Impression With Prudex-Compass As shown in Figure 1, we fill the PRUDEX-Compass based on the experimental results of the 8 FinRL methods. For axis-level, it directly illustrates the relative performance of each method in terms of 6 axes to provide a general impression. We normalized the score into 0 to 100 with 50 as the market average (details in Appendix A.2). For explainability, all methods are scored 50 as we leave it as future direction. AlphaMix+ performs best in all 5 remaining axes. Specifically, it outperforms other FinRL methods 53% and 43% in universality and diversity, respectively, which demonstrates the effectiveness of the weighted Bellman backup and diversified bootstrap initialization. For measure-level, we give a mark if one measure is used in the evaluation of the FinRL methods, the goal here is to show how comprehensive the methods are evaluated. Together with all measures we proposed, AlphaMix+ clearly has a more rigorous evaluation, which makes the results more trustworthy. Arguably, PRUDEX-Compass provides a compact visualization to evaluate FinRL methods that is much better than only looking at a result table of different metrics especially when lots of FinRL methods are involved. In other words, the compass highlights the required subtleties, that may otherwise be challenging to extract from text descriptions, potentially be under-specified and prevent readers to misinterpret results based on result table display. With PRUDEX-Compass, users can flexibly pick suitable methods with regards to their personal interests. Conservative traders may prefer methods with a relative stable profit rate and low risk. Aggressive traders may pay more attention on profitability, as they are willing to take high risk to pursue extremely high profit. For international trading firms, they may have high expectation on universality and diversity. ## 6.2 Visualizing Financial Markets With T-Sne Even though it is a wide consensus that different financial markets share different trading patterns (Campbell et al., 1998), there is a lack of visualization tool to demonstrate how different are these markets. To show data-level diversity of evaluation, we use t-SNE (Vander & Hinton, 2008) here to map all 4 datasets into a China US FX Crypto 2-D dimension plot with the 11 features described in Table 3 as the input. To avoid overlapping in financial data, we pick a data point every 30 time step for each asset across 4 financial markets. Each data point corresponds to the value vector of 11 features at each time step, where necessary temporal information is maintained10. In Figure 5, the US stock and FX datasets lie in the lower left and upper right corner, respectively, as a whole cluster, which is consistent with their status as relative mature and stable markets (Emenyonu & Gray, 1996). For China stock and the Crypto, data points are scattered with more outliers that demonstrate their essence as emerging and violate markets (De Santis et al., 1997). The t-SNE plot is useful to provide an intuitive expression on the data-level diversity of different markets while evaluating FinRL methods. Figure 5: t-SNE market visualization. ## 6.3 Pride-Star For Evaluating Profitability, Risk-Control And Diversity As the evaluation measures for Profitability, RIsk and DivErsity (PRIDE) are point-wise metrics with real number values, we use the PRIDE-star, which is a star plot to show the relative strength of 8 metrics including 1 profit metrics: total return (TR), 2 risk metrics: volatility (Vol) (Shiller, 1992) and maximum drawdown (MDD) (Magdon & Atiya, 2004), 3 risk-adjusted profit metrics: Sharpe ratio (SR) (Sharpe, 1998), Calmar ratio (CR) and Sortino ratio (SoR), and 2 diversity metrics: entropy (ENT) (Jost, 2006) and effect number of 10It is common to incorporate temporal information into features in Fintech. For instance, zd_5 uses the close price at time step t − 4 to t. bets (ENB) (Kirchner & Zunckel, 2011). The mathematical definitions of these metrics are in Appendix A.1. We report the overall performance across the 4 financial markets of the 6 FinRL methods in Figure 6, where the inner circle represents market average. In general, AlphaMix+ performs best in the PRIDE-Star plot. In addition, AlphaMix+ outperforms the second best by 25% in terms of ENT that shows the effectiveness of the boostrap with random initialization component in AlphaMix+. ![11_image_0.png](11_image_0.png) Figure 6: Overall performance across 4 financial markets on PRIDE-Star to evaluate profitability, risk-control and diversity, where AlphaMix+ achieves the best performance in 7 out of 8 metrics. ![11_image_1.png](11_image_1.png) Figure 7: (a) Performance profile of total return score distributions across 4 financial markets. Shaded regions show pointwise 95% confidence bands based on percentile bootstrap with stratified sampling. (b) Rank distribution in terms of TR, SR, Vol and ENT across 4 financial markets. ## 6.4 Performance Profile: An Unbiased Approach To Report Performance The performance profile (Agarwal et al., 2021) reports the score distribution of all runs across the 4 financial markets that are statistically unbiased and more robust to outliers compared to the widely-used mean performance. Performance profiles proposed herein visualize the empirical tail distribution function of a random score (higher curve is better), with point-wise confidence bands based on stratified bootstrap (Efron, 1979). A score distribution shows the fraction of runs above a certain normalized score that is an unbiased estimator of the underlying performance distribution. As shown in Figure 7a, AlphaMix+ is generally a robust but conservative FinRL methods that shows the least bad runs, which makes it an attractive option for conservative investors that care more about risk. However, radical investors may pick SAC as it has the largest probability of achieving score 100, which indicates a return rate higher than twice the market average. ## 6.5 Rank Distribution To Demonstrate The Rank Of Finrl Methods In Figure 7b, we plot the rank distribution (Agarwal et al., 2021) of 8 FinRL methods in terms of TR, SR, VOL and Entropy across 4 financial markets with results of 5 random seeds in each market. The i-th column in the rank distribution plot shows the probability that a given method is assigned rank i in the corresponding metrics. For x-axis, rank 1 and 8 indicate the best and worst performance.11 For y-axis, the bar length of a given method on a given metric with rank i corresponds to the % fraction of rank i it achieves across the 4 financial markets, 3 test periods and 5 random seeds (4 × 3 × 5 = 60 in total), i.e., if the rank 1 column of TR is purely red, it indicates AlphaMix+ achieves the highest TR in all random seeds across 4 financial markets. For TR and SR, AlphaMix+ slightly outperforms other methods with 27% and 35% probability to achieve top 2 performance. For Vol, SAC gets the overall best performance while AlphaMix+ goes through higher volatility. For ENT, AlphaMix+ significantly outperforms other FinRL methods with over 56% probability for rank 1, which demonstrates its ability to train mixture of diversified trading experts. ## 6.6 Visualizing Strategy Diversity With Heatmap To demonstrate the overall investment diversity of FinRL methods, we show the average portfolio across the test period as a heatmap in Figure 8. Formally, we define the average portfolio as w¯: $${\bf\bar{w}}=[\frac{\sum_{j=t}^{t+h}w_{i}^{0}}{h+1},\frac{\sum_{j=t}^{t+h}w_{i}^{1}}{h+1},...,\frac{\sum_{j=t}^{t+h}w_{i}^{M}}{h+1}]$$ where M + 1 is the number of portfolio's constituents, including cash and M financial assets. w i t represents the ratio of the total portfolio value invested at time t on asset i, h represents the length of the evaluation period and w 0 t represents cash. Figure 8: Heatmap of average portfolio on China stock market. ## 6.7 The Impact Of Extreme Market Conditions To further evaluate the risk-control and reliability, we pick three extreme market periods with black swan events. For China stock market, the period is from February 1st to March 31st 2021 when strict segregation policy is implemented in China to fight against the COVID-19 pandemic (Lee et al., 2021a). For US stock market, the period is from March 1st to April 30th 2020, when the global financial markets are violate due the global pandemic of COVID-19 (Mazur et al., 2021). For Crypto, the period is from April 1st to May 31st 2021 when many countries posed regulation against Crypto oligarch. To report the results of different 11For TR, SR and entropy, higher values indicate better performance. For VOL, lower values indicate better performance For IMIT, it puts all capital in one or two assets. The portfolio of SARL and EIIE is not that diversified with near 0 weight on many assets more (red). For A2C, DT, PPO and SAC, the portfolio is closed to uniform, which is not desirable due to poor profitability. Our AlphaMix+ achieves an ideal investment portfolio, which is generally diversified and allocate more weights on a few bullish stocks. ![12_image_0.png](12_image_0.png) $$(10)$$ metrics with different numerical value scale, we normalize them into a score m*score* as follows: $$n_{a v e}-1)*k+1$$ $$a_{a v e}|\rangle(m_{r l}/r)$$ m*score* = (mave/|mave|)(mrl/mave − 1) ∗ k + 1 (11) We define the metrics value for FinRL methods and market average as mrl and mave, respectively. k is a scale parameter. In Figure 9, we plot the bar chart of TR and SR during the period of extreme market conditions. As a conservative method, AlphaMix+'s performance is unsatisfactory in extreme market conditions, which proves the general consensus that radical methods such as DeepTrader (DT) and SARL are more suitable for extreme markets (Marimoutou et al., 2009). Analyzing the performance on extreme market conditions can shed light on the design of FinRL methods, which is in line with economists' efforts on understanding the financial markets. For instance, incorporating volatility-aware auxiliary task (Sun et al., 2022) and multi-objective RL (Hayes et al., 2022) in AlphaMix+ may further make it be aware of extreme market conditions in advance and behave as a profit-seeking agents to achieve better performance during extreme market conditions. ![13_image_0.png](13_image_0.png) $$(11)$$ Figure 9: Performance of FinRL methods during extreme market conditions. ## 7 Discussion Complementary Related Efforts. Apart from the above aspects described in PRUDEX-Compass, there also exist several orthogonal perspectives for FinRL evaluation, which encompass a *check-list* (Pineau et al., 2021) for quantitative experiments, the construction of elaborate *dataset sheets* (Gebru et al., 2021), and the creation of *model cards* (Mitchell et al., 2019). We stress that these perspectives remain indispensable, as novel datasets and their variants are regularly suggested in FinRL and the PRUDEX-Compass does not disclose intended use with respect to human-centered application domains, ethical considerations, or their caveats. We believe that it is best to report both the prior works and PRUDEX-Compass together to further improve the evaluation quality. Potential Impact. For FinRL community, PRUDEX-Compass can serve as i) a systematic evaluation toolkit; ii) a benchmark for comprehensive comparison of FinRL methods and iii) a standard code base for future design and development of novel FinRL algorithms. We hope that PRUDEX-Compass could encourage both researchers and financial practitioners to avoid fooling themselves by evaluating FinRL methods in a systematic way and facilitate the design of stronger FinRL methods. While accounting for all elements in PRUDEX-Compass is not a panacea, we believe it is a good start with more trustworthy results for the community and further increase the confidence for real-world industry deployment. For RL community, PRUDEX-Compass introduces a new challenging scenario with different evaluation axes for the test of novel RL algorithms in financial market. For FinTech community, this work enables industry practitioners with limited RL background to easily explore the potential of RL in different FinTech scenarios. In addition, the usage of PRUDEX-Compass is not limited in RL settings, most elements of PRUDEX-Compass can be easily generalized to supervised learning settings with broader impact. Future Plans. We plan to improve PRUDEX-Compass from the following perspectives: i) For axis-level, we plan to explore the evaluation of FinRL explainability with measures and plots; ii) For measure-level, we plan to include metrics to evaluate alpha decay and more metrics, e.g., optimality gaps, for profits and risks; iii) To further improve the accuracy and reliability of PRUDEX-Compass, we plan to customize existing scientific and non-exploitative RL evaluation procedures (Jordan et al., 2020) into FinRL domains for the normalization and summarization of experimental results. iv) For a comprehensive evaluation under different market stationarity assumptions, we plan to add one toolkit to automatically categorize market into different styles (e.g., bull/bear) and evaluate the performance of FinRL methods under different styles. Furthermore, there can be a data-driven simulator to generate unseen stylized data for further evaluation; v) For visualization, we plan to further develop a GUI software version accompanied with a website to further lower the barrier for dissemination and use; vi) As most axes and measures in PRUDEX-Compass are also key points for non-RL trading scenarios, we plan to bring PRUDEX-Compass into more general machine learning settings with implementations and results of non-RL methods for broader impact. Auxiliary Experiments. Due to space limitations, we have included some auxiliary yet important experiments in Appendix C. Specifically, the result tables with mean and standard deviation of the 8 metrics in PRIDE-Star are reported in Appendix C.1. We plot the PRIDE-Star, performance profile and rank comparison of each financial market individually in Appendix C.4, C.5, C.6, respectively.Furthermore, we include ablation studies on the effectiveness of each component in AlphaMix+ in Appendix C.2 and hyperparameter sensitivity experiments in Appendix C.3. Hosting, Maintenance, Licensing. The PRUDEX-Compass datasets are hosted on Google Drive. The source code are publicly available at https://github.com/TradeMaster-NTU/PRUDEX-Compass. The authors will provide important bug fixes to the community as commits to the GitHub repository. There will be summary of changes to the code and the datasets in the README web page of the GitHub repository. In the unlikely case that the Google Drive link stops operating, we will migrate the dataset to another hosting and announce the new links in the GitHub repository. The provided source code and dataset are copyrighted by us and under the MIT license12. Users have the permission to reuse the codes for any purpose. ## 8 Acknowledgments This project is supported by the National Research Foundation, Singapore under its Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. 12https://opensource.org/licenses/MIT ## References Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. Openxai: Towards a transparent evaluation of model explanations. arXiv preprint arXiv:2206.11104, 2022. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. In *Advances in Neural Information Processing* Systems, pp. 29304–29320, 2021. Bo An, Shuo Sun, and Rundong Wang. Deep reinforcement learning for quantitative trading: Challenges and opportunities. *IEEE Intelligent Systems*, 37(2):23–26, 2022. Terje Aven. On the meaning of a black swan in a risk context. *Safety Science*, 57, 2013. Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura, and Peter Eckersley. Explainable machine learning in deployment. In ACM Conference on Fairness, Accountability, and Transparency, pp. 648–657, 2020. Lars Brink and Bruce McCarl. The tradeoff between expected return and risk among cornbelt farmers. American Journal of Agricultural Economics, 60(2), 1978. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. *arXiv preprint arXiv:1606.01540*, 2016. John Y Campbell, Andrew W Lo, A Craig MacKinlay, and Robert F Whitelaw. The econometrics of financial markets. *Macroeconomic Dynamics*, 2(4), 1998. Ernest P Chan. *Quantitative trading: how to build your own algorithmic trading business*. Wiley, 2021. Stephanie CY Chan, Samuel Fishman, Anoop Korattikara, John Canny, and Sergio Guadarrama. Measuring the reliability of reinforcement learning algorithms. In *International Conference on Learning Representations*, 2019. Marcos Lopez De Prado. *Advances in financial machine learning*. John Wiley & Sons, 2018. Giorgio De Santis et al. Stock returns and volatility in emerging financial markets. Journal of International Money and Finance, 16(4), 1997. Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. *Nature*, 602(7897):414–419, 2022. Yue Deng, Feng Bao, Youyong Kong, Zhiquan Ren, and Qionghai Dai. Deep direct reinforcement learning for financial signal representation and trading. *IEEE Transactions on Neural Networks and Learning Systems*, 28(3), 2016. Yi Ding, Weiqing Liu, Jiang Bian, Daoqiang Zhang, and Tie-Yan Liu. Investor-imitator: A framework for trading knowledge extraction. In *ACM SIGKDD International Conference on Knowledge Discovery &* Data Mining, 2018. Elizabeth D Dolan and Jorge J Moré. Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2):201–213, 2002. B Efron. Bootstrap methods: Another look at the jackknife. *The Annals of Statistics*, 7(1), 1979. James Elder and Steven Zucker. The effect of contour closure on the rapid discrimination of two-dimensional shapes. *Vision Research*, 33(7):981–991, 1993. James H Elder and Steven W Zucker. Evidence for boundary-specific grouping. *Vision Research*, 38(1): 143–152, 1998. Emmanuel N Emenyonu and Sidney J Gray. International accounting harmonization and the major developed stock market countries: an empirical study. *The International Journal of Accounting*, 31(3), 1996. Yuchen Fang, Kan Ren, Weiqing Liu, Dong Zhou, Weinan Zhang, Jiang Bian, Yong Yu, and Tie-Yan Liu. Universal trading for order execution with oracle policy distillation. In AAAI Conference on Artificial Intelligence, pp. 107–115, 2021. Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J R Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, et al. Discovering faster matrix multiplication algorithms with reinforcement learning. *Nature*, 610(7930):47–53, 2022. Fuli Feng, Huimin Chen, Xiangnan He, Ji Ding, Maosong Sun, and Tat-Seng Chua. Enhancing stock movement prediction with adversarial training. *arXiv preprint arXiv:1810.09936*, 2018. Franco and Leah. Risk-adjusted performance. *Journal of Portfolio Management*, 23(2):45–54, 1997. Johannes Fuchs, Petra Isenberg, Anastasia Bezerianos, Fabian Fischer, and Enrico Bertini. The influence of contour on similarity perception of star glyphs. *IEEE Transactions on Visualization and Computer* Graphics, 20(12):2251–2260, 2014. Nicolae Gârleanu and Lasse Heje Pedersen. Dynamic trading with predictable returns and transaction costs. The Journal of Finance, 68(6):2309–2340, 2013. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. *Communications of the ACM*, 64(12):86–92, 2021. Amit Goyal and Pedro Santa, Clara. Idiosyncratic risk matters! *The Journal of Finance*, 58(3), 2003. Piyush Gupta, Nikaash Puri, Sukriti Verma, Dhruv Kayastha, Shripad Deshmukh, Balaji Krishnamurthy, and Sameer Singh. Explain your move: understanding agent actions using specific and relevant feature attribution. In *International Conference on Learning Representations*, 2020. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861–1870, 2018. Charles R Harris, K Jarrod Millman, Stéfan J Van Der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. Array programming with numpy. *Nature*, 585(7825), 2020. Conor F Hayes, Roxana Rădulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M Zintgraf, Richard Dazeley, Fredrik Heintz, et al. A practical guide to multi-objective reinforcement learning and planning. *Autonomous Agents and Multi-Agent Systems*, 36 (1), 2022. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *AAAI conference on artificial intelligence*, 2018. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(9):5149–5169, 2021. Zhengyao Jiang, Dixing Xu, and Jinjun Liang. A deep reinforcement learning framework for the financial portfolio management problem. *arXiv preprint arXiv:1706.10059*, 2017. Scott Jordan, Yash Chandak, Daniel Cohen, Mengxue Zhang, and Philip Thomas. Evaluating the performance of reinforcement learning algorithms. In *International Conference on Machine Learning*, pp. 4962–4973, 2020. Lou Jost. Entropy and diversity. *Oikos*, 113(2), 2006. Jonathan M Karpoff. The relation between price changes and trading volume: A survey. Journal of Financial and Quantitative Analysis, 22(1):109–126, 1987. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, 2017. Ajay Khorana, Henri Servaes, and Lei Wedge. Portfolio manager ownership and fund performance. Journal of financial economics, 85(1):179–204, 2007. Ulrich Kirchner and Caroline Zunckel. Measuring portfolio diversification. *arXiv preprint arXiv:1102.4722*, 2011. Aviral Kumar, Abhishek Gupta, and Sergey Levine. Discor: Corrective feedback in reinforcement learning via distribution correction. *Advances in Neural Information Processing Systems*, pp. 18560–18572, 2020. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in Neural Information Processing Systems*, 2017. Mikel Landajuela, Brenden K Petersen, Sookyung Kim, Claudio P Santiago, Ruben Glatt, Nathan Mundhenk, Jacob F Pettit, and Daniel Faissol. Discovering symbolic policies with deep reinforcement learning. In International Conference on Machine Learning, pp. 5979–5989, 2021. Chi-Chuan Lee, Chien-Chiang Lee, and Yizhong Wu. The impact of covid-19 pandemic on hospitality stock returns in china. *International Journal of Finance & Economics*, 2021a. Chien-Chiang Lee, Jun-De Lee, and Chi-Chuan Lee. Stock prices and the efficient market hypothesis: Evidence from a panel stationary test with structural breaks. *Japan and the world economy*, 22(1):49–58, 2010. Kimin Lee, Michael Laskin, Aravind Srinivas, and Pieter Abbeel. Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. In *International Conference on Machine Learning*, 2021b. Siyu Lin and Peter A Beling. An end-to-end optimal trade execution framework based on proximal policy optimization. In *International Joint Conference on Artificial Intelligence*, pp. 4548–4554, 2020. Xiao-Yang Liu, Hongyang Yang, Qian Chen, Runjia Zhang, Liuqing Yang, Bowen Xiao, and Christina Dan Wang. Finrl: A deep reinforcement learning library for automated stock trading in quantitative finance. arXiv preprint arXiv:2011.09607, 2020a. Yang Liu, Qi Liu, Hongke Zhao, Zhen Pan, and Chuanren Liu. Adaptive quantitative trading: An imitative deep reinforcement learning approach. In *AAAI Conference on Artificial Intelligence*, pp. 2128–2135, 2020b. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017. Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523–562, 2018. Malik Magdon and Amir F Atiya. Maximum drawdown. *Risk Magazine*, 17(10):99–102, 2004. Burton G Malkiel. The efficient market hypothesis and its critics. *Journal of Economic Perspectives*, 17(1): 59–82, 2003. Velayoudoum Marimoutou, Bechir Raggad, and Abdelwahed Trabelsi. Extreme value theory and value at risk: Application to oil market. *Energy Economics*, 31(4), 2009. Mieszko Mazur, Man Dang, and Miguel Vega. Covid-19 and the march 2020 stock market crash. evidence from s&p1500. *Finance research letters*, 38, 2021. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In ACM Conference on Fairness, Accountability, and Transparency, pp. 220–229, 2019. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016. Tobias J Moskowitz, Yao Hua Ooi, and Lasse Heje Pedersen. Time series momentum. *Journal of Financial* Economics, 104(2):228–250, 2012. Martin Mundt, Steven Lang, Quentin Delfosse, and Kristian Kersting. Cleva-compass: A continual learning evaluation assessment compass to promote research transparency and comparability. In International Conference on Learning Representations, 2021. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. *Advances in Neural Information Processing Systems*, 2016. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. *IEEE Transactions on Knowledge and Data* Engineering, 22(10), 2009. Robert Pardo. *The evaluation and optimization of trading strategies*, volume 314. John Wiley & Sons, 2011. Jack Parker, Aldo Pacchiano, Krzysztof M Choromanski, and Stephen J Roberts. Effective diversity in population based reinforcement learning. *Advances in Neural Information Processing Systems*, pp. 18050– 18062, 2020. Julien Pénasse. Understanding alpha decay. *Management Science*, 2022. Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research: A report from the neurips 2019 reproducibility program. *Journal of Machine Learning Research*, 22(1): 7459–7478, 2021. Fábio Pinto, Marco OP Sampaio, and Pedro Bizarro. Automatic model monitoring for data streams. *arXiv* preprint arXiv:1908.04240, 2019. Fazlollah M Reza. *An introduction to information theory*. Courier Corporation, 1994. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?" explaining the predictions of any classifier. In *ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1135–1144, 2016. Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, and Rajiv Ratn Shah. Spatiotemporal hypergraph convolution network for stock movement forecasting. In 2020 IEEE International Conference on Data Mining (ICDM), pp. 482–491, 2020. Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, and Rajiv Shah. Exploring the scale-free nature of stock markets: Hyperbolic graph learning for algorithmic trading. In *Acm Web Conference*, pp. 11–22, 2021. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. William F Sharpe. The sharpe ratio. *Journal of Portfolio Management*, 1998. Wenjie Shi, Gao Huang, Shiji Song, Zhuoyuan Wang, Tingyu Lin, and Cheng Wu. Self-supervised discovering of interpretable features for reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(5):2712–2724, 2020. Robert J Shiller. *Market volatility*. MIT press, 1992. Andrew Silva, Matthew Gombolay, Taylor Killian, Ivan Jimenez, and Sung-Hyun Son. Optimization methods for interpretable differentiable decision trees applied to reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 1855–1865, 2020. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In *International Conference on Machine Learning*, pp. 387–395, 2014. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. *Nature*, 550(7676), 2017. Thomas Spooner and Rahul Savani. Robust market making via adversarial reinforcement learning. In International Joint Conference on Artificial Intelligence, 2020. Thomas Spooner, John Fearnley, Rahul Savani, and Andreas Koukorinis. Market making via reinforcement learning. In *International Conference on Autonomous Agents and MultiAgent Systems*, 2018. Shuo Sun, Wanqi Xue, Rundong Wang, Xu He, Junlei Zhu, Jian Li, and Bo An. Deepscalper: A risk-aware reinforcement learning framework to capture fleeting intraday trading opportunities. In ACM International Conference on Information & Knowledge Management, pp. 1858–1867, 2022. Shuo Sun, Rundong Wang, and Bo An. Reinforcement learning for quantitative trading. *ACM Transactions* on Intelligent Systems and Technology, 2023. Jun Tu and Guofu Zhou. Markowitz meets talmud: A combination of sophisticated and naive diversification strategies. *Journal of Financial Economics*, 99(1):204–215, 2011. Laurens Vander and Geoffrey Hinton. Visualizing data using t-sne. *Journal of Machine Learning Research*, 9 (11), 2008. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 575(7782), 2019. Rundong Wang, Hongxin Wei, Bo An, Zhouyan Feng, and Jun Yao. Commission fee is not enough: A hierarchical reinforced framework for portfolio management. In *AAAI Conference on Artificial Intelligence*, pp. 626–633, 2021a. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. *arXiv preprint arXiv:2208.10442*, 2022. Zhicheng Wang, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. Deeptrader: A deep reinforcement learning approach to risk-return balanced portfolio management with market conditions embedding. In *AAAI* Conference on Artificial Intelligence, pp. 643–650, 2021b. Yeming Wen, Dustin Tran, and Jimmy Ba. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. In *International Conference on Learning Representations*, 2019. Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenatton. Hyperparameter ensembles for robustness and uncertainty quantification. *Advances in Neural Information Processing Systems*, pp. 6514–6527, 2020. Yumo Xu and Shay B Cohen. Stock movement prediction from tweets and historical prices. In Annual Meeting of the Association for Computational Linguistics, pp. 1970–1979, 2018. Yunan Ye, Hengzhi Pei, Boxin Wang, Pin-Yu Chen, Yada Zhu, Ju Xiao, and Bo Li. Reinforcement-learning based portfolio management with augmented asset movement prediction states. In AAAI Conference on Artificial Intelligence, pp. 1112–1119, 2020. Jaemin Yoo, Yejun Soun, Yong-chan Park, and U Kang. Accurate multivariate stock movement prediction via data-axis transformer with multi-level contexts. In *ACM SIGKDD Conference on Knowledge Discovery* & Data Mining, pp. 2037–2045, 2021. Dong Yu, Kaisheng Yao, Hang Su, Gang Li, and Frank Seide. Kl-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In *IEEE International Conference on* Acoustics, Speech and Signal Processing, 2013. Yuyu Yuan, Wen Wen, and Jincui Yang. Using data augmentation based reinforcement learning for daily stock trading. *Electronics*, 9(9), 2020. Zihao Zhang, Stefan Zohren, and Stephen Roberts. Deep reinforcement learning for trading. *The Journal of* Financial Data Science, 2(2), 2020. Brian D Ziebart. *Modeling purposeful adaptive behavior with the principle of maximum causal entropy*. Carnegie Mellon University, 2010. ## Appendix A Prudex-Compass A.1 Definition Of Financial Metrics Profit measure contains metrics to evaluate FinRL methods' ability to gain market capital. Total return (TR) is the percent change of net value over time horizon h. The formal definition is T R = (nt+h − nt)/nt, where nt is the corresponding value at time t. Risk includes a class of metrics to assess the risk level of FinRL methods. - **Volatility (Vol)** is the variance of the return vector r. It is widely used to measure the uncertainty of return rate and reflects the risk level of strategies. The definition is *V ol* = σ[r]. - **Maximum drawdown (MDD)** measures the largest decline from the peak in the whole trading period to show the worst case. The formal definition is MDD = maxτ∈(0,t)[maxt∈(0,τ) nt−nτ nt]. - **Downside deviation (DD)** refers to the standard deviation of trade returns that are negative. Risk-adjusted Profit calculates the potential normalized profit by taking one share of the risk. We define three metrics with different types of risk: - **Sharpe ratio (SR)** refers to the return per unit of deviation: SR = E[r] σ[r] . - **Sortino ratio (SoR)** is another risk-adjusted profit, which applies DD as risk measure: SoR = E[r] DD . - **Calmar ratio (CR)** is another risk-adjusted profit, which applies MDD as risk measure: CR =E[r] MDD . ## A.2 Creating A Prudex-Compass To make the PRUDEX-Compass as accessible as possible and disseminate in a convenient way, we provide two options for practical use based on the template provided in CLEVA-Compass (Mundt et al., 2021). - We provide a **LaTeX template** for PRUDEX-Compass, making use of the TikZ library to draw the compass with. We envision that such a template makes it easy for readers to include a compass into their future research, where they can adapt the naming and values of the entries respectively. - We further provide a **Python script** to generate the PRUDEX-Compass. In fact, because the use of drawing in LaTeX with TikZ may be unintuitive for some, we have written a Python script that automatically fills above LaTeX template, so that it can later simply be included into a LaTeX document. The Python script takes a path to a JSON file that needs to be filled by the user. There is a default JSON file that is easy to adopt. ## A.3 Computing Scores For Prudex-Compass Axes We introduce how the value of each axe for each FinRL method is computed in this subsection. The basic principle is to propose a distinguishable and robust way to show the performance difference of FinRL methods across multiple evaluation measures in terms of each axe. Generally speaking, we normalize the performance of different measures on the 6 axes to a score from 0 to 100. We mark market average strategy (evenly invest on all assets) as 50. All scores are calculated with the average of 4 financial markets. We define the metrics value for FinRL methods and market average as mrl and mave, respectively. For the 4 profit-related metrics (TR, SR, CR, SoR), we normalized them into a score Spro with range [0, 100]: Spro = (mrl/mave − 0.8) ∗ 250, where 20% higher profit than market average is scored 100. We clip values lower than 0 and higher than 100 as 0 and 100, respectively. For the 2 risk metrics (Vol, MDD), we normalized them into a score S*risk* with range [0, 100]: S*risk* = (1.2 − mrl/mave) ∗ 250, where 20% lower risk than market average is scored 100. We clip values lower than 0 and higher than 100 as 0 and 100, respectively. For universality, we directly use the raw return rate and calcualte the 4 indicators of profitability, we then plot a rank graph and for each measures of profitability, we multiply the probability obtained from the rank matrix and the rankscore(if it's 1st, the score is 100 and if 6th, the score is 0) to get a score for that measure. Then we average the 4 measures together to get the university score for that algorithm. For the 2 diversity metrics (ENT, ENB), we normalized them into a score Sdiv: $$S_{d i v}=\begin{cases}m_{r l}/m_{a v e}\times100,&\text{for ENT}\\ m_{r l}/m_{a v e}\times50,&\text{for ENB}\end{cases}$$ where the diversity of uniform policy is scored 100 for ENT and 50 for ENB. For explainability, we set the explainability score 50 for all FinRL methods. For reliability, we use the total return rate score we just normalized using average policy as a indicator and draw a performance profile graph, then we use the area under the curve for each algorithm to calculate the reliability. The constants in the equations (e.g., 20%) are applied to make the plot distinguishable and easy to follow, which does not influence the robustness of axes in PRUDEX-Compass. ## B Experiment Setup B.1 Training Setup | | Table 4: Hyperparameters of AlphaMix+ | | | |---------------------|-----------------------------------------|----------------------|-----------------------------------| | Hyperparameter | Value | Hyperparameter | Value | | Replay buffer size | 10000 | Initial step | 10000 | | Layer(MLP) | (128,128) | Stacked frame | 3 | | Evaluation episodes | 10 | Optimizer | Adam | | Temperature | 20 | Uncertainty | 0.5 | | Actor learning rate | 0.0007 | Critic learning rate | 0.0007 | | Batch size | 256 | Action numbers | 29(US) 47(China) 22(FX) 9(Crypto) | | Discount γ | 0.99 | Ber_mean | 0.5 | | Non-linear | Sigmoid | Observation | Number of assets × 11 | ## C Experimental Results C.1 Result Table In this subsection, we report detailed results of 8 metrics in the four financial markets. Since we apply 1 year rolling window during training, each financial market has 3 tables for 3 consecutive years. | | | | Table 5: US Stock 2021 | | | | | | |---------|-----------|-----------|--------------------------|-----------|------------|-----------|-----------|-----------| | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | | TR(%) | 12.4±6.3 | 13.1±4.4 | 20.0±10.3 | 17.3±2.0 | 17.2±7.4 | 16.9±1.2 | 4.0±10.1 | 20.7±1.8 | | SR | 1.03±0.50 | 0.94±0.26 | 1.30±0.57 | 1.37±0.12 | 1.22±0.50 | 1.39±0.08 | 0.35±0.64 | 1.64±0.11 | | CR | 0.81±0.34 | 0.83±0.13 | 0.91±0.19 | 1.06±0.03 | 0.84±0.23 | 1.06±0.02 | 0.47±0.61 | 1.09±0.02 | | SoR | 1.52±0.70 | 1.38±0.44 | 1.89±0.87 | 2.02±0.21 | 1.81±0.75 | 2.03±0.12 | 0.52±0.91 | 2.36±0.17 | | MDD(%) | 15.0±1.9 | 15.3±2.5 | 19.7±4.6 | 15.6±1.2 | 19.1±3.9 | 15.2±0.6 | 14.6±3.7 | 17.9±1.4 | | VOL(%) | 0.78±0.06 | 0.87±0.03 | 0.91±0.02 | 0.76±0.02 | 0.86±0.03 | 0.73±0.03 | 1.0±0.12 | 0.74±0.02 | | ENT | 2.26±0.26 | 1.82±0.02 | 1.67±0.02 | 2.79±0.11 | 1.82±0.01 | 2.90±0.26 | 1.89±0.1 | 3.25±0.12 | | ENB | 1.34±0.10 | 1.49±0.01 | 1.61±0.02 | 1.14±0.06 | 1.51±0.02 | 1.19±0.10 | 1.73±0.10 | 1.11±0.03 | Table 5: US Stock 2021 Table 6: US Stock 2020 | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-----------|------------|-----------|-----------|--------------|-----------|------------|-------------| | TR(%) | 7.46±4.86 | 18.3±8.4 | 7.77±21.1 | 9.23±8.79 | 10.2±10.5 | 15.8±16.1 | -20.6±9.5 | 12.7±1.7 | | SR | 0.36±0.12 | 0.62±0.19 | 0.37±0.52 | 0.41±0.21 | 0.42±0.25 | 0.55±0.30 | -0.28±0.21 | 0.52±0.04 | | CR | 0.36±0.12 | 0.54±0.15 | 0.30±0.48 | 0.38±0.19 | 0.42±0.22 | 0.49±0.22 | -0.28±0.22 | 0.47±0.03 | | SoR | 0.44±0.14 | 0.76±0.23 | 0.50±0.63 | 0.50±0.26 | 0.56±0.33 | 0.68±0.38 | -0.36±0.27 | 0.62±0.05 | | MDD(%) | 36.6±2.4 | 42.2±2.4 | 42.8±5.3 | 37.8±3.19 | 34.9±3.91 | 38.9±6.69 | 43.5±6.42 | 38.2±0.96 | | VOL(%) | 2.25±0.09 | 2.32±0.03 | 2.45±0.16 | 2.26±0.17 | 2.31±0.09 | 2.21±0.23 | 2.9± 0.39 | 2.16±0.04 | | ENT | 1.96±0.20 | 1.82±0.02 | 1.61±0.03 | 2.07±0.67 | 1.82±0.02 | 2.41±0.67 | 1.85 ±0.21 | 3.22±0.03 | | ENB | 1.05±0.01 | 1.05±0.004 | 1.1±0.003 | 1.05±0.03 | 1.05±0.002 | 1.05±0.05 | 1.13± 0.04 | 1.0± 0.006 | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-----------|-----------|-----------|-----------|--------------|------------|-----------|-------------| | TR(%) | 20.5±9.27 | 21.5±10.1 | 27.3±9.79 | 27.7±1.63 | 22.9±3.51 | 30.2±8.42 | 20.6±9.52 | 25.1±1.42 | | SR | 1.47±0.58 | 1.53±0.62 | 1.72±0.54 | 2.00±0.15 | 1.60±0.24 | 2.12±0.27 | 1.28±0.21 | 1.98±0.06 | | CR | 1.00±0.06 | 1.01±0.06 | 1.05±0.09 | 1.02±0.05 | 1.01±0.05 | 1.06 ±0.05 | 0.28±0.22 | 1.03±0.007 | | SoR | 1.96±0.78 | 2.04±0.83 | 2.17±0.78 | 2.52±0.24 | 2.16±0.43 | 2.82±0.42 | 0.36±0.27 | 2.47±0.09 | | MDD(%) | 18.8±5.95 | 19.4±6.26 | 23.4±4.70 | 24.7±2.63 | 21.2±2.41 | 25.2±4.83 | 43.6±6.42 | 22.3±0.99 | | VOL(%) | 0.82±0.01 | 0.83±0.02 | 0.91±0.05 | 0.79±0.10 | 0.84±0.03 | 0.79±0.11 | 1.9±0.39 | 0.73±0.02 | | ENT | 1.83±0.01 | 1.82±0.01 | 1.63±0.02 | 2.08±0.67 | 1.81±0.01 | 2.32±0.48 | 1.89±0.16 | 3.27±0.03 | | ENB | 1.28±0.01 | 1.27±0.01 | 1.38±0.01 | 1.29±0.21 | 1.26±0.005 | 1.15±0.06 | 1.73±0.10 | 1.05± 0.01 | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-----------|------------|-----------|-----------|--------------|------------|------------|-------------| | TR(%) | 5.52±4.82 | 6.20±7.57 | 13.4±20.1 | 11.4±8.84 | 12.3±14.2 | 13.0±3.8 | 52.6±20.8 | 14.8±3.30 | | SR | 0.35±0.21 | 0.40±0.37 | 0.59±0.68 | 0.63±0.41 | 0.62±0.60 | 0.73± 0.17 | 2.68±0.83 | 0.79±0.12 | | CR | 0.25±0.15 | 0.29±0.27 | 0.41±0.48 | 0.42±0.27 | 0.36±0.39 | 0.51±0.08 | 1.18±0.24 | 0.53±0.06 | | SoR | 0.40±0.24 | 0.45±0.42 | 0.71±0.81 | 0.71±0.46 | 0.71±0.69 | 0.80±0.18 | 4.04±1.2 | 0.89±0.15 | | MDD(%) | 28.1±3.68 | 25.3±1.59 | 29.9±7.38 | 26.5±5.09 | 31.5±6.02 | 27.0±2.41 | 34.2±9.16 | 29.1±1.85 | | VOL(%) | 0.74±0.04 | 0.68±0.004 | 0.81±0.06 | 0.68±0.02 | 0.76±0.01 | 0.67±0.02 | 0.66±0.09 | 0.69±0.01 | | ENT | 1.85±0.42 | 2.82±0.01 | 1.10±0.02 | 2.60±0.17 | 1.53±0.001 | 2.39±0.10 | 1.30 ±0.85 | 3.12±0.02 | | ENB | 1.12±0.05 | 1.04±0.01 | 1.27±0.01 | 1.04±0.01 | 1.16±0.004 | 1.06± 0.01 | 2.82± 0.85 | 1.02±0.01 | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|------------|------------|------------|-----------|--------------|------------|-------------|-------------| | TR(%) | 31.9±10.6 | 29.7±8.42 | 25.4±10.4 | 32.6±5.78 | 22.1±13.6 | 36.2± 7.09 | -7.14±0.82 | 32.2±2.20 | | SR | 1.59±0.46 | 1.65±0.39 | 1.13±0.38 | 1.80±0.24 | 1.19±0.59 | 1.77± 0.05 | -0.34±0.1 | 1.79±0.10 | | CR | 0.90±0.16 | 0.94±0.13 | 0.79±0.23 | 1.01±0.07 | 0.74±0.21 | 1.04± 0.05 | -0.29±0.07 | 1.01±0.02 | | SoR | 2.32±0.75 | 2.41±0.63 | 1.70±0.64 | 2.63±0.34 | 1.72±0.91 | 2.57± 0.15 | -0.37± 0.08 | 2.62±0.14 | | MDD(%) | 31.7±2.85 | 28.6±2.82 | 29.8±2.38 | 28.9±2.38 | 27.5±4.92 | 30.5±3.74 | 20.4± 0.63 | 28.9±0.90 | | VOL(%) | 0.65±0.02 | 0.58±0.004 | 0.74±0.02 | 0.57±0.01 | 0.63±0.02 | 0.64±0.11 | 0.77± 0.06 | 0.57±0.01 | | ENT | 1.54±0.01 | 2.85±0.005 | 1.02±0.02 | 2.47±0.17 | 1.53±0.009 | 1.97±0.90 | 1.30±0.85 | 3.15±0.07 | | ENB | 1.18±0.009 | 1.05±0.002 | 1.32±0.009 | 1.05±0.01 | 1.18±0.004 | 1.21 ±0.18 | 1.19±0.85 | 1.04±0.01 | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-------------|------------|-------------|------------|--------------|-------------|-------------|-------------| | TR(%) | -6.86±11.30 | -6.56±3.66 | -4.05±10.77 | -5.77±4.04 | 5.67±5.95 | -2.27±1.99 | -7.14± 0.82 | -0.70±1.38 | | SR | -0.28±0.61 | -0.31±0.22 | -0.12±0.50 | -0.25±0.21 | 0.37±0.30 | -0.05± 0.12 | -0.34±0.1 | 0.03±0.08 | | CR | -0.14±0.50 | -0.25±0.16 | -0.06±0.42 | -0.20±0.15 | 0.40±0.32 | -0.04±0.11 | -0.29±0.07 | 0.04±0.10 | | SoR | -0.33±0.78 | -0.41±0.26 | -0.15±0.75 | -0.35±0.29 | 0.54±0.45 | -0.07± 0.17 | -0.37±0.08 | 0.04±0.11 | | MDD(%) | 22.3±5.52 | 20.5±1.55 | 25.9±4.95 | 20.2±3.49 | 19.1±3.39 | 17.5±1.52 | 20.4±0.63 | 16.1±1.45 | | VOL(%) | 0.67±0.03 | 0.59±0.01 | 0.75±0.03 | 0.61±0.03 | 0.66±0.02 | 0.59± 0.03 | 0.77±0.06 | 0.57±0.02 | | ENT | 1.54±0.01 | 2.85±0.007 | 1.01±0.03 | 2.41±0.40 | 1.53±0.007 | 2.55±0.15 | 1.30± 0.85 | 3.12±0.02 | | ENB | 1.31±0.01 | 1.08±0.005 | 1.57±0.009 | 1.11±0.10 | 1.30±0.009 | 1.05± 0.01 | 1.19 ±0.08 | 1.03±0.003 | Table 7: US Stock 2019 Table 8: China Stock 2020 Table 9: China Stock 2019 Table 10: China Stock 2018 | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|------------|-----------|------------|-----------|--------------|-----------|-------------|-------------| | TR(%) | 82.8±51.5 | 85.8±57.1 | 73.9±33.9 | 61.0±19.3 | 39.1±54.5 | 50.8±13.2 | 110.6± 17.6 | 68.3±17.8 | | SR | 1.37±0.50 | 1.39±0.55 | 1.27±0.35 | 1.26±0.26 | 0.83±0.59 | 1.15±0.20 | 2.06± 0.08 | 1.43±0.22 | | CR | 1.14±0.30 | 1.14±0.33 | 1.06±0.24 | 1.06±0.11 | 0.73±0.39 | 0.98±0.10 | 1.45± 0.07 | 1.08±0.11 | | SoR | 2.19±0.98 | 2.22±1.02 | 2.10±0.65 | 1.89±0.39 | 1.30±1.12 | 1.77±0.28 | 2.99± 0.32 | 2.16±0.35 | | MDD(%) | 59.5±9.64 | 59.9±10.5 | 59.1±7.74 | 52.4±6.49 | 51.3±11.3 | 49.9±5.80 | 56.6± 4.43 | 55.5±3.58 | | VOL(%) | 3.69±0.36 | 3.69±0.35 | 3.63±0.27 | 3.29±0.35 | 3.46±0.24 | 3.15±0.30 | 2.86± 0.53 | 3.09±0.18 | | ENT | 0.60±0.02 | 0.62±0.03 | 0.43±0.03 | 1.21±0.40 | 0.59±0.02 | 1.24±0.52 | 0.58±0.52 | 2.15±0.10 | | ENB | 1.15±0.006 | 1.14±0.01 | 1.15±0.008 | 1.06±0.02 | 1.14±0.01 | 1.01±0.01 | 1.01±0.01 | 1.01±0.01 | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-----------|-----------|-----------|------------|--------------|------------|------------|-------------| | TR(%) | 61.4±58.1 | 59.1±58.4 | 16.0±38.4 | 31.4±12.8 | 37.9±27.1 | 30.7± 11.7 | 55.3± 17.6 | 33.8±6.16 | | SR | 1.12±0.66 | 1.12±0.67 | 0.51±0.50 | 0.81±0.24 | 0.85±0.38 | 0.79±0.17 | 1.03± 0.08 | 0.86±0.08 | | CR | 1.06±0.47 | 1.02±0.45 | 0.59±0.58 | 0.86±0.23 | 0.97±0.42 | 0.82±0.19 | 0.73± 0.07 | 0.87±0.09 | | SoR | 1.58±1.20 | 1.62±1.22 | 0.59±0.63 | 0.89±0.29 | 1.04±0.54 | 0.92±0.18 | 1.49± 0.32 | 0.94±0.10 | | MDD(%) | 52.9±7.58 | 52.2±8.03 | 55.7±2.12 | 46.6±8.08 | 50.3±8.04 | 44.6±3.22 | 56.6± 4.43 | 46.5±2.43 | | VOL(%) | 3.86±0.22 | 3.75±0.17 | 4.46±0.35 | 3.66±0.76 | 4.05±0.32 | 3.37± 0.36 | 2.86± 0.53 | 3.46±0.22 | | ENT | 0.59±0.04 | 0.59±0.04 | 0.45±0.02 | 1.32±0.45 | 0.60±0.01 | 1.24±0.52 | 0.58±0.52 | 2.17±0.07 | | ENB | 1.05±0.01 | 1.06±0.03 | 1.07±0.03 | 1.01±0.005 | 1.05±0.01 | 1.01±0.01 | 1.01±0.01 | 1.00±0.001 | Table 11: Crypto 2021 Metrics A2C PPO SAC SARL DeepTrader EIIE IMIT AlphaMix+ TR(%) 223±346 146±129 377±982 199±195 398±344 146±137 140± 155 291±71 SR 1.38±0.74 1.22±0.56 1.13±0.67 1.41±0.45 1.71±0.45 1.46± 0.42 1.24±0.34 1.87±0.09 CR 1.77±1.02 1.48±0.53 2.18±2.05 1.75±0.74 2.18±0.84 1.44±0.49 1.39±0.58 2.02±0.26 SoR 2.15±1.23 2.13±1.11 3.16±3.31 2.42±1.24 2.97±1.35 2.42± 1.22 2.03± 0.80 3.14±0.56 MDD(%) 77.1±15.7 74.9±10.4 80.1±15.6 79.4±10.0 85.1±8.42 72.5±10.2 71.0± 11.2 85.7±4.14 VOL(%) 7.30±1.91 6.97±0.94 10.94±8.13 7.26±2.31 7.85±2.05 5.22±1.06 5.56± 1.61 6.80±0.84 ENT 1.02±0.31 0.72±0.05 0.47±0.07 1.42±0.25 0.56±0.04 1.53±0.23 0.58±0.23 2.09±0.02 ENB 1.79±0.15 1.94±0.04 1.99±0.05 1.67±0.40 1.97±0.03 1.12±0.05 1.66±0.38 1.47±0.15 Table 12: Crypto 2020 | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|------------|------------|------------|------------|--------------|------------|-------------|-------------| | TR(%) | -0.94±3.14 | -1.49±2.68 | -1.02±3.11 | 0.96±0.36 | -0.19±2.88 | 1.42±0.24 | -5.53± 0.05 | 1.22±0.46 | | SR | -0.22±0.73 | -0.34±0.64 | -0.21±0.64 | 0.29±0.09 | -0.03±0.66 | 0.39±0.05 | -0.97± 0.01 | 0.41±0.16 | | CR | -0.07±0.42 | -0.16±0.35 | -0.02±0.43 | 0.22±0.08 | 0.04±0.46 | 0.32±0.07 | -0.49± 0.01 | 0.36±0.17 | | SoR | -0.32±1.05 | -0.51±1.00 | -0.26±0.86 | 0.50±0.17 | 0.06±1.11 | 0.65±0.07 | -1.5± 0.02 | 0.74±0.31 | | MDD(%) | 7.90±1.85 | 6.28±1.37 | 6.94±2.07 | 4.60±0.47 | 6.84±1.38 | 4.65± 0.43 | 11.17± 0.05 | 3.77±0.56 | | VOL(%) | 0.27±0.01 | 0.28±0.01 | 0.31±0.01 | 0.22±0.01 | 0.26±0.01 | 0.23±0.01 | 0.25 ±0.01 | 0.19±0.01 | | ENT | 1.44±0.08 | 1.37±0.01 | 0.93±0.04 | 2.55± 0.10 | 1.35±0.02 | 2.23±0.08 | 3.08±0.01 | 2.97±0.04 | | ENB | 1.65±0.06 | 1.73±0.01 | 2.02±0.04 | 1.18±0.10 | 1.71±0.03 | 1.16±0.06 | 1.08±0.01 | 1.18±0.06 | Table 13: Crypto 2019 Table 14: Foreign Exchange 2019 Table 15: Foreign Exchange 2018 ## C.2 **Ablation Studies** Metrics A2C PPO SAC SARL DeepTrader EIIE IMIT AlphaMix+ TR(%) -5.43±2.29 -5.25±2.45 -5.61±3.02 -4.52±1.91 -4.99±2.82 -6.15±0.28 -5.53± 0.05 -4.92±0.32 SR -1.01±0.45 -0.97±0.49 -0.91±0.43 -0.93±0.41 -0.91±0.55 -1.15±0.14 -0.97± 0.01 -1.15±0.10 CR -0.50±0.14 -0.48±0.16 -0.50±0.18 -0.48±0.19 -0.49±0.21 -0.59±0.04 -0.49± 0.01 -0.57±0.03 SoR -1.43±0.64 -1.36±0.68 -1.23±0.44 -1.42±0.62 -1.29±0.74 -1.66±0.26 -1.5± 0.02 -1.77±0.16 MDD(%) 10.4±1.67 10.4±1.85 10.7±1.80 9.08±0.96 9.41±2.20 10.6±0.91 11.2± 0.05 8.66±0.14 VOL(%) 0.34±0.02 0.34±0.02 0.37±0.03 0.31±0.02 0.34±0.01 0.34±0.03 0.25± 0.0 0.27±0.01 ENT 1.38±0.03 1.34±0.03 0.94±0.03 2.23±0.37 1.37±0.01 1.55±0.72 3.08± 0.01 2.98±0.03 ENB 1.45±0.01 1.46±0.03 1.65±0.05 1.16±0.09 1.45±0.02 1.39±0.27 1.08±0.01 1.11±0.01 | Models | Ensemble | Weight BB | Diveristy | TR(%)↑ | SR↑ | CR↑ | SoR↑ | MDD(%)↓ | VOL(%)↓ | ENT↑ | ENB↑ | |--------------------------------|------------|-------------|-------------|----------|-------|-------|--------|-----------|-----------|--------|--------| | SAC | 16.0 | 0.51 | 0.59 | 0.59 | 55.7 | 4.46 | 0.45 | 1.07 | | | | | √ | 24.4 | 0.73 | 0.72 | 0.76 | 44.8 | 3.26 | 2.08 | 1.00 | | | | | AlphaMix+ | √ | √ | 27.7 | 0.77 | 0.78 | 0.82 | 46.5 | 3.42 | 2.14 | 1.00 | | | √ | √ | 32.2 | 0.83 | 0.86 | 0.89 | 48.0 | 3.62 | 2.19 | 1.00 | | | | √ | √ | √ | 33.8 | 0.86 | 0.87 | 0.94 | 46.5 | 3.46 | 2.17 | 1.00 | | | Table 19: Crypto 2020 Ablation | | | | | | | | | | | | | Models | Ensemble | Weight BB | Diveristy | TR(%)↑ | SR↑ | CR↑ | SoR↑ | MDD(%)↓ | VOL(%)↓ | ENT↑ | ENB↑ | | SAC | -1.02 | -0.21 | -0.02 | -0.26 | 6.94 | 0.31 | 0.93 | 2.02 | | | | | √ | 1.06 | 0.33 | 0.26 | 0.58 | 4.24 | 0.21 | 2.98 | 1.22 | | | | | AlphaMix+ | √ | √ | 1.11 | 0.34 | 0.27 | 0.60 | 4.47 | 0.22 | 2.91 | 1.19 | | | √ | √ | 1.04 | 0.33 | 0.25 | 0.57 | 4.34 | 0.21 | 3.01 | 1.19 | | | | √ | √ | √ | 1.22 | 0.41 | 0.36 | 0.74 | 3.77 | 0.19 | 2.97 | 1.18 | | | Models | Ensemble | Weight BB | Diveristy | TR(%)↑ | SR↑ | CR↑ | SoR↑ | MDD(%)↓ | VOL(%)↓ | ENT↑ | ENB↑ | |-----------|------------|-------------|-------------|----------|-------|-------|--------|-----------|-----------|--------|--------| | SAC | 13.4 | 0.59 | 0.41 | 0.71 | 29.9 | 0.81 | 1.10 | 1.27 | | | | | √ | 13.6 | 0.75 | 0.77 | 0.83 | 19.0 | 0.69 | 3.15 | 1.01 | | | | | AlphaMix+ | √ | √ | 11.6 | 0.66 | 0.68 | 0.73 | 18.7 | 0.68 | 3.09 | 1.02 | | | √ | √ | 12.0 | 0.68 | 0.70 | 0.76 | 18.7 | 0.68 | 3.20 | 1.02 | | | | √ | √ | √ | 14.8 | 0.79 | 0.53 | 0.89 | 29.1 | 0.69 | 3.12 | 1.02 | | | Models | Ensemble | Weight BB | Diveristy | TR(%)↑ | SR↑ | CR↑ | SoR↑ | MDD(%)↓ | VOL(%)↓ | ENT↑ | ENB↑ | |-----------|------------|-------------|-------------|----------|-------|-------|--------|-----------|-----------|--------|--------| | SAC | 7.77 | 0.37 | 0.30 | 0.50 | 42.8 | 2.45 | 1.61 | 1.10 | | | | | √ | 10.2 | 0.46 | 0.50 | 0.55 | 31.6 | 2.15 | 3.25 | 1.01 | | | | | AlphaMix+ | √ | √ | 10.1 | 0.45 | 0.48 | 0.54 | 32.3 | 2.15 | 3.28 | 1.01 | | | √ | √ | 10.9 | 0.47 | 0.52 | 0.57 | 32.0 | 2.19 | 3.26 | 1.02 | | | | √ | √ | √ | 12.7 | 0.52 | 0.47 | 0.62 | 38.2 | 2.16 | 3.22 | 1.00 | | | Metrics | A2C | PPO | SAC | SARL | DeepTrader | EIIE | IMIT | AlphaMix+ | |-----------|-----------|-----------|-----------|-----------|--------------|-----------|------------|-------------| | TR(%) | 6.93±1.71 | 6.85±1.44 | 8.25±3.62 | 5.81±2.26 | 5.94±2.50 | 6.71±2.72 | 6.42± 0.16 | 7.16±0.36 | | SR | 1.34±0.29 | 1.32±0.24 | 1.41±0.48 | 1.34±0.57 | 1.13±0.51 | 1.26±0.47 | 0.81± 0.02 | 1.72±0.13 | | CR | 0.82±0.12 | 0.81±0.12 | 0.88±0.17 | 0.79±0.27 | 0.77±0.19 | 0.88±0.11 | 0.73± 0.01 | 0.93±0.02 | | SoR | 2.15±0.45 | 2.14±0.42 | 2.17±1.01 | 2.28±0.97 | 1.74±0.89 | 1.89±0.80 | 1.21± 0.03 | 3.04±0.27 | | MDD(%) | 8.21±1.28 | 8.20±1.12 | 9.01±2.45 | 7.12±1.27 | 7.46±1.87 | 7.33±2.08 | 8.98± 0.12 | 7.51±0.43 | | VOL(%) | 0.32±0.01 | 0.32±0.01 | 0.35±0.03 | 0.28±0.04 | 0.33±0.02 | 0.34±0.08 | 0.36± 0.01 | 0.25±0.01 | | ENT | 1.34±0.02 | 1.35±0.02 | 0.90±0.03 | 2.17±0.45 | 1.37±0.01 | 1.41±0.57 | 3.08±0.01 | 2.98±0.10 | | ENB | 1.58±0.01 | 1.59±0.04 | 1.82±0.02 | 1.29±0.15 | 1.57±0.02 | 1.55±0.18 | 1.05±0.04 | 1.10±0.04 | Table 20: Foreign Exchange 2019 Ablation Table 16: Foreign Exchange 2017 Table 17: US Stock 2020 Ablation Table 18: China Stock 2020 Ablation Table 19: Crypto 2020 Ablation ![26_image_1.png](26_image_1.png) ![26_image_0.png](26_image_0.png) ![26_image_2.png](26_image_2.png) ## C.3 **Parameter Analysis: Probing Sensitivity** Figure 10: Hyperparameter Sensitivity Experiment Results on Crypto ![26_image_3.png](26_image_3.png) Figure 11: Hyperparameter Sensitivity Experiment Results on FX ## C.4 Pride-Star ![27_image_0.png](27_image_0.png) Figure 12: PRIDE-Star on US Stock ![27_image_1.png](27_image_1.png) Figure 13: PRIDE-Star on FX ![28_image_0.png](28_image_0.png) ![28_image_1.png](28_image_1.png) Figure 14: PRIDE-Star on China Stock Figure 15: PRIDE-Star on Crypto ## C.5 Performance Profile ![29_image_0.png](29_image_0.png) ![29_image_1.png](29_image_1.png) ## C.6 Rank Distribution
Review 1: Summary: This paper aims to provide a systematic evaluation of FinRL algorithms in Financial Markets. The paper propose PRUDEX-Compass, which includes 6 dimensional metrics, Profitability, Risk-control, Universality, Diversity, rEliability, and eXplainability. AlphaMix+ is proposed as a strong baseline, which includes Risk-aware Bellman Backup and diversified experts. 8 FinRL methods are evaluated on 4 real-world financial datasets. Strengths and Weaknesses: Strengths: 1.The paper is clearly written. 2.The paper propose an evaluation framework for FinRL algorithms, which has great value for RL and FinRL community. 3.The visualization of algorithms is good. Weaknesses: 1.The proposed AlphaMix+ algorithm lacks the ablation study, which part is the most critical for the performance? 2. The variance of AlphaMix+ is larger than baselines. 3. There seems to be 2 parameters to be tuned, which may cost much computation costs. Requested Changes: 1.Provides detailed ablation study of AlphaMix+. 2.Specify more details on the RL environments in the experimental parts. Broader Impact Concerns: None. ================================================== Review 2: Summary: The paper looks at providing a holistic evaluation for RL methods on financial markets. Several RL baselines are considered and their performance along different aspects (reliability, diversity, risk control, etc, ) are provided. The core contribution is identification of relevant metrics and providing a variant of the star plot to visualise these results. EMpirical results are provided on data from 4 financial markets. Overall, there are three key aspects in the paper, (a) what are the metrics to be considered, (b) what are the performances of the baselines on those metrics, and (c) how to report them in a visually appealing manner. The paper tries to tackle everything, but unfortunately, this makes the paper unclear/incomplete on all of those aspects. Isee Lines, It is not clear what the Strengths and Weaknesses: Strengths: 1. Holistic evaluation is critical for high-stake settings, and this paper takes a step toward that for the financial market. 2. Extensive open-source, practical, library for evaluating and visualizing the performance of different methods. Weakness: 1. The problem setup is unclear (see below) 2. The evaluation metric is unclear and needs more discussion (but this might be because I am not familiar with the finance aspects). It is not clear what is the actual contribution here? Are these metrics not known in the literature? Are there any understudied nuances about the metrics that people need to be careful about? 3. Baselines are not thoroughly investigated. How is sensitivity analysis done for the parameters? How were the hyper-parameters for all the baselines chosen? How robust are the metrics to the chosen constants in the metric formulation (for example the choice of 20% in Line 712)? The proposed work would be much stronger if these were discussed in detail. 4. Visualization is ok, but I am not sure about the audience for that at TMLR. Perhaps that might be more useful at data visualization conferences? (I would defer to the AE for evaluation of interest at TMLR for this aspect of the paper) Requested Changes: 1. Since this is a domain-specific paper, can the instantiation of the RL setup be better discussed? For instance, what are the actions here? What is the time horizon? What corresponds to an episode? Is there a discounting factor? What is the reward function (even after I finished reading the paper it was not clear to me what is the task for the RL agent here)? Is it purely offline RL or there was a model built using the past data on which online RL was done? What part of the data was used for training and on what part was the performance evaluated? 2. Explain the weighting in Eq 7. My understanding is that the weight should be smaller when the Q functions disagree, i.e., the ensemble has a higher standard deviation, and because of this Eqn 7 uses negative $\bar Q_{std}$. However, I am not sure about Line 145. Since $\bar Q_{std}$ is always positive, wouldn’t $\sigma( - \bar Q_{std})$ be between [0, 0.5], and therefore the range would be [k, 0.5 +k], instead of [0.5, 0.5+k]? Maybe I am misunderstanding something? 3. How to select $T$ and $k$ in Eqn 7 during experiments? How to choose $\beta$? 4. Line 156, $L_\pi$ is not defined. I am assuming it is similar to $L_{actor}$, but I am not sure which Q value is being used here from the ensemble. Could the authors clarify this? 5, Can authors convert Lines 165 to 169 into equations for better clarity about what is the exact procedure? I think I get what the procedure is trying to do, but again many words have been used very loosely. For instance, Line 167 says that the UCB is being optimized, but my guess is that only a _heuristic_ for the UCB is being optimized, and not necessarily the actual UCB. PRUDEX compass 6. Figure 1 suggests that AlphaMix+ gets the best performance because of the *largest area in the inner level*. My understanding is that the area of the inner level is *not invariant to the ordering of the axes,* i.e., if one plots the axes in a different order, one might get a different area enclosed. If this is true, then authors might need to really rethink such claims. 7. Normalization discussed in lines 191-192 needs a lot more justification. Appendix A.2 is largely focused on the procedure done to obtain the plot, but not much is discussed why that procedure is meaningful. Given that this is one of the core contributions of the paper, and that different normalization schemes can potentially result in vastly different plots, the paper would benefit a lot by having a thorough discussion on this. Also, it would be helpful if the authors could elaborate on what is meant by ‘market average’ here. 8. I do not have a finance background, so I am not sure how are the values for the axes being computed. Can authors provide or point me to the exact equations for computing these? Further, can authors also comment on other axes that might be of interest to be added in the future? My first thought was that there would be something about assumptions as well, for instance, the RL setup considered in the work has a very strong assumption of stationarity, which is likely violated in financial applications. One could argue that ‘reliability’ could capture that but it looks like right now it is simply aimed at capturing sensitivity to seed. 9. Line 273: I do not think that there is a compelling reason to not compare against non-RL method. Saying that RL methods outperform non-RL based methods, and citing past works, is not sufficient. The good part about the proposed work is the holistic evaluation, which was not done by prior works. And I can imagine all of the proposed axes equally useful for non-RL methods. I understand that incorporating these non-RL methods might not be easy, but given that the core contribution of the paper is evaluating different algorithms and presenting the results, I think the paper would be much stronger with non-RL baselines. Other plots: 10. I am not sure if I understood what is the t-sne plot showing. Does each point on the t-sne plot correspond to a single time-step feature? If yes, how is the time-series aspect captured and how should a reader infer that from these plots? 11. Line 340, how is the ‘rank probability’ computed? 12. For the strategy diversity heatmap, the portfolio is for which time? I am guessing this plot should keep changing over time? 13. Figure 8, I do not understand these plots. What is ‘score’? How is it computed? Is a larger score better or lower? Broader Impact Concerns: - ================================================== Review 3: Summary: The paper introduces an evaluation framework for reinforcement learning (RL) methods for financial markets (FinRL), which are increasingly gaining prominence in the research and applied world. The evaluation framework put forth (called “PRUDEX-Compass”) tries to measure evaluation metrics of interest beyond profitability and returns, as standard measures do. Specifically, the evaluation axes of interest in PRUDEX are Profitability, Risk-control, Universality, Diversity, rEliability, and eXplainability. Subsequently, the paper proposes another RL baseline method for FinRL called AlphaMix+ which leverages mixture-of-experts (MoE) and risk-sensitive approaches to make diversified risk-aware investment decisions. The paper evaluates AlphaMix+ and 7 other FinRL approaches on 4 real-world financial market datasets and reports the results on the PRUDEX-compass. The paper also provides a latex (tikzpictures) implementation for PRUDEX and a python package for creating the latex code from data. The paper finally discusses other, orthogonal approaches from the literature on standardizing evaluation of RL methods and the potential impacts of PRUDEX-compass. Strengths and Weaknesses: [Strengths and Weaknesses] [+] From what I am aware, prior to this work there was no standard for evaluating RL methods for financial markets. So this paper is making a first step toward a framework for standardized reporting of metrics that matter for financial market evaluation algorithms. [+] I can tell that the authors made significant efforts to think about measures of interest for the evaluation framework. [+] I really appreciated the authors' efforts to make the visualization materials available through the latex code and the python package. One minor suggestion here is to have colors that are friendly for color-blind folks. [-] The story of the paper is fundamentally about the PRUDEX-compass framework. So I don’t understand why the authors chose to include a baseline algorithm (AlphaMix+) too. It takes away from the rest of the paper’s story. Also related to the paper’s story, I didn’t understand the difference between PRUDEX-compass and PRIDE-star. It looks to me (but I may be wrong) that the PRIDE-star is part of the PRUDEX-compass but I may be wrong. It should have been made much clearer to the reader how the two are connected. [-] I’m sorry to say this, but I don’t see why the PRUDEX-compass is the best visualization tool for the purposes that the paper describes; I find it quite confusing and unnecessarily complicated at times. Why didn’t the authors choose to report the relevant metrics (and their respective measurements) in just a google doc? Or something similar to the “Datasheets for Datasets” paper? Here is an example of something I find confusing about the current presentation: For the inner circle, why do the points on the different axes need to be connected and form a convex hull? Do these lines connecting the different points represent anything? From what I understand (although I may be wrong and please do correct me if so) the measurements on the axes are just the dots. [-] In various parts, it is not clear what exactly is one of the paper’s contributions and what is already known in the literature. For example, for the metrics introduced in Sec 4.1 which of these have already appeared in the literature? If all of them previously have appeared then the paper’s contribution is to create an aggregate visualizer for them. How easy/hard is it to estimate each of these metrics in real-life datasets? Another example: is the t-SNE visualization something that the paper introduced or was this already known? (Tangentially related here: I didn’t understand what the point of talking about the t-SNE visualization was at this point. Were the authors trying to say that t-SNE is a data visualization but that PRUDEX-compass also includes algorithmic considerations?) [Additional Comments] * The formalism in the paper needs improvement at various points. For example, in Sec 2.2 T is used to denote both the time horizon and the transaction function. At equation (3), there is a different notation used (Q_\theta) as opposed to the one introduced before (Q^\pi). In the “Universality” definition (Sec. 4) how is “decent” performance defined formally? * In Sec. 4.2 I don’t understand what the point of having the paragraph “FinRL Explainability” is since above you have talked about “explainability” again. Is there a difference between general explainability and FinRL explainability? * In the definition of OHLCV what is function f_k? [Typos] * There are many acronyms that are not helpful for the reader to keep in mind. I would suggest that the authors spell out most of the words. * Pg 2: “also provide the RL community” → “also provides the RL community” * Pg 2: “to all QT tasks,” → “to all QT tasks.” * Pg 3: “indicate features” → “indicates features” * Pg 6: “Crpyto” → “Crypto” * Pg 8: “bootrap initialization” → “bootstrap initialization” * Pg 20: “here is the “ → this sentence abruptly ends there. Requested Changes: 1) I am not convinced that the “compass” architecture of the visualization is required. I agree that there needs to be a standardized reporting system, I’m just not convinced that the one put forth by the paper is the correct/best one. 2) Make sure the “story” is complete, based on the guidelines and questions I posed above. 3) Strengthen the formalisms of the paper. Broader Impact Concerns: I think the broader impact of the paper is adequately addressed ================================================== Metareview: Recommendation: Accept with minor revision Comment: This manuscript introduces a tool, dubbed PRUDEX-Compass, for evaluating reinforcement learning in financial markets (FinRL), which is most systematic and comprehensive for FinRL evaluation to the best of AE’s knowledge. The authors also proposed a strong FinRL baseline named AlphaMix+. Per the insightful reviews, the authors did a good rebuttal such that most of the problems and concerns have been well addressed. There remain several minor issues pointed out by the reviewers, for which the AE recommends Minor Revision. The authors are suggested to fix all the issues and emphasize the impact of this research on the FinTech community in the upcoming revision. ==================================================
# Scaling Up Bayesian Neural Networks With Neural Networks Anonymous authors Paper under double-blind review ## Abstract Bayesian Neural Network (BNN) offers a principled and natural framework for proper uncertainty quantification in the context of deep learning. They address the typical challenges associated with conventional deep learning methods, such as data insatiability, ad-hoc nature, and susceptibility to overfitting. However, their implementation typically relies on Markov chain Monte Carlo (MCMC) methods that are characterized by their computational intensity and inefficiency in a high-dimensional space. To address this issue, we propose a novel calibration-Emulation-Sampling (CES) strategy to significantly enhance the computational efficiency of BNN. In this CES framework, during the initial calibration stage, we collect a small set of samples from the parameter space. These samples serve as training data for the emulator. Here, we employ a Deep Neural Network (DNN) emulator to approximate the forward mapping, i.e., the process that input data go through various layers to generate predictions. The trained emulator is then used for sampling from the posterior distribution at substantially higher speed compared to the original BNN. Using simulated and real data, we demonstrate that our proposed method improves computational efficiency of BNN, while maintaining similar performance in terms of prediction accuracy and uncertainty quantification. ## 1 Introduction In recent years, Deep Neural Networks (DNN) have emerged as the predominant driving force in the field of machine learning and regarded as the fundamental tools for many intelligent systems (Cheng et al., 2018; LeCun et al., 1998; Sze et al., 2017). While DNN have demonstrated significant success in prediction tasks, they often struggle with accurately quantifying uncertainty. In recent years, there have been some attempts to address this issue. For example, the Ensemble Deep Learning method (Lakshminarayanan et al., 2017) aggregates predictions from multiple models to improve reliability and uncertainty estimates. While these methods represent important progress in the right direction, developing a principled and computationally efficient framework for Uncertainty Quantification (UQ) within the context of deep learning remains a significant challenge and active area of research. This is especially important in application areas, such as health sciences, where the level of uncertainty needs to be incorporated in the decision making process. Additionally, neural networks, due to their vulnerability to overfitting, can generate highly confident yet erroneous predictions (Su et al., 2019; Kwon et al., 2022). To address these issues, Bayesian Neural Network (BNN) (MacKay, 1992; Neal, 2012; Jospin et al., 2022) has emerged as an alternative to standard DNN, providing a transformative paradigm within the field of machine learning. By effectively incorporating Bayesian inference into the neural network framework, BNN provides a principled solution to these challenges. Their intrinsic ability to capture and quantify uncertainties in predictions establishes a robust foundation for decision-making under uncertainty. However, Bayesian inference in high-dimensional BNN poses significant computational challenges due to the inefficiency of traditional Markov Chain Monte Carlo (MCMC) methods. In fact, not only BNN, but almost all traditional Bayesian inference methods relying on MCMC techniques are known for their computational intensity and inefficiency when dealing with highdimensional problems. Consequently, researchers have proposed various approaches to expedite the inference process (Welling & Teh, 2011b; Shahbaba et al., 2014; Ahn et al., 2014; Hoffman & Gelman, 2011; Beskos et al., 2017; Cui et al., 2016; Zhang et al., 2017a;b; 2018; Li et al., 2019a). Here, we focus on a state-of-the-art approach, called Calibration-Emulation-Sampling (CES) (Cleary et al., 2021), which has shown promising results in large-dimensional UQ problems such as inverse problems (Lan et al., 2022a). CES involves the following three steps: (i) Calibrate models to collect sample parameters and expensive forward evaluations for the emulation step (ii) Emulate the parameter-to-data map using evaluations of forward models, and (iii) Generate posterior samples using MCMC based on cheaper emulators. This framework allows for reusing expensive forward evaluations and provides computationally efficient alternative to the standard MCMC procedure. The standard CES method (Cleary et al., 2021) focuses on UQ in inverse problems and uses Gaussian Process (GP) models for the emulation component. GP models have a well-established history of application in emulating computer models (Currin et al., 1988), conducting uncertainty analyses (Oakley & O'Hagan, 2002), sensitivity assessments (Oakley & O'Hagan, 2004), and calibrating computer codes (Kennedy & O'Hagan, 2002; Higdon et al., 2004; O'Hagan, 2006). Despite their versatility, GP-based emulators are computationally intensive, with a complexity of O(N3), where N is the sample size, using the squaredexponential kernel. Lower computational complexity can be achieved using alternative kernels (Lan et al., 2015) or various computational techniques (Liu et al., 2020; Bonilla et al., 2007; Gardner et al., 2018; Seeger et al., 2003). Nevertheless, scaling up GP emulators to high-dimensional problems remains a limiting factor. Furthermore, the prediction accuracy of GP emulators highly depends on the quality of the training data, emphasizing the importance of rigorous experimental design. To address these issues, Lan et al. (2022a) proposed an alternative CES scheme called Dimension Reduced Emulative Autoencoder Monte Carlo (DREAMC) method, which uses Convolutional Neural Networks (CNN) as emulator. DREAMC improved and scaled up the application of the CES framework for Bayesian UQ in inverse problems from hundreds of dimensions (with GP emulation) to thousands of dimensions (with NN emulation). Here, we adopt this approach and propose a new method, called Fast BNN (FBNN), for Bayesian inference in neural networks. We use DNN for the emulation component of our CES scheme. DNN has proven to be a powerful tool in a variety of applications and offers several advantages over GP emulation (Lan et al., 2022b; Dargan et al., 2020). It is computationally more efficient and suitable for high-dimensional problems. Additionally, DNN exhibits greater robustness when dealing with variations in the training data. The choice of DNN as an emulator enhances computational efficiency and flexibility. Besides the computational challenges associated with building emulators, efficient sampling from posterior distributions in BNN also presents a significant challenge due to the high dimensionality of the target distribution. Traditional Metropolis-Hastings algorithms, typically defined on finite-dimensional spaces, encounter diminishing mixing efficiency as the dimensions increase (Gelman et al., 1997; Roberts & Rosenthal, 1998; Beskos et al., 2009). To overcome this inherent drawback, a novel class of 'dimension-independent' MCMC methods has emerged, operating within infinite-dimensional spaces (Beskos, 2014; Beskos et al., 2009; 2011; Cotter et al., 2013; Law, 2014; Beskos, 2014; Beskos et al., 2017). More specifically, we use the Preconditioned Crank-Nicolson (pCN) algorithm. The most significant feature of pCN is its dimension robustness, which makes it well-suited for high-dimensional sampling problems. The pCN algorithm is welldefined, with non-degenerate acceptance probability, even for target distributions on infinite-dimensional spaces. As a result, when pCN is implemented on a real-world computer in large but finite dimension N, the convergence properties of the algorithm are independent of N. This is in strong contrast to schemes such as Gaussian random walk Metropolis-Hastings and the Metropolis-adjusted Langevin algorithm, whose acceptance probability degenerates to zero as N tends to infinity. In summary, this paper addresses the critical challenges of UQ in high-dimensional BNN. By incorporating deep neural networks for emulation and leveraging the dimension-robust pCN algorithm for sampling, this research significantly enhances computational efficiency and scalability in Bayesian uncertainty quantification, offering a robust counterpart to DNN, and a scalable counterpart to BNN. Through extensive experiments, we demonstrate the feasibility and effectiveness of utilizing FBNN to accelerate Bayesian UQ in high-dimensional neural networks. The proposed method showcases remarkable computational efficiency, enabling scalable Bayesian inference in BNN with thousands of dimensions. ## 2 Related Methods Various MCMC methods are employed to explore complex probability distributions for Bayesian inference. In this section, we discuss a set of MCMC methods used in our proposed FBNN model and discussed in the following sections. Additionally, in this section we explore a variety of other methods utilized in our numerical experiments, which extend beyond MCMC frameworks. These include Ensemble Deep Learning for Neural Networks (Perrone & Cooper, 1995), BNNs with Variational Inference (Jaakkola & Jordan, 2000), BNNs leveraging Lasso Approximation (MacKay, 1992), Monte Carlo Dropout (MC-Dropout) (Gal & Ghahramani, 2016), Stochastic Weight Averaging-Gaussian (SWAG) (Maddox et al., 2019), and an Accelerated Hamiltonian Monte Carlo (HMC) model (Zhang et al., 2017c). These techniques provide a comprehensive spectrum for comparison with our FBNN model, serving as baseline methods for our analysis. ## 2.1 Hamiltonian Monte Carlo (Hmc) In general, MCMC methods represent a category of algorithms designed for sampling from a probability distribution (Andrieu et al., 2003). The fundamental principle involves building a Markov chain where the target distribution serves as its equilibrium distribution. Various algorithms exist for constructing such Markov chains, where the simplest is the Metropolis-Hastings algorithm (Metropolis et al., 1953). Metropolis-Hastings is a fundamental MCMC method used for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult (Chib & Greenberg, 1995; Robert et al., 1999). Hamiltonian Monte Carlo is a special case of the Metropolis-Hastings algorithm, that incorporates Hamiltonian dynamics evolution and auxiliary momentum variables(Neal et al., 2011). Compared to using a Gaussian random walk proposal distribution in the Metropolis-Hastings algorithm, HMC reduces the correlation between successive sampled states by proposing moves to distant states which maintain a high probability of acceptance due to the approximate energy conserving properties of the simulated Hamiltonian dynamic. The reduced correlation means fewer Markov chain samples are needed to approximate integrals with respect to the target probability distribution for a given Monte Carlo error. ## 2.2 Stochastic Gradient Hamiltonian Monte Carlo (Sghmc) As discussed earlier, HMC sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals. However, a limitation of HMC methods is the required gradient computation for simulation of the Hamiltonian dynamical system; such computation is infeasible in problems involving a large sample size. Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014) addresses computational inefficiency by using a noisy but unbiased estimate of the gradient computed from a mini-batch of the data. SGHMC is a valuable method for Bayesian inference, particularly when dealing with large datasets, as it leverages stochastic gradients and hyperparameter adaptation to efficiently explore high-dimensional target distributions. ## 2.3 **Random Network Surrogate–Hmc (Rns–Hmc)** One common computational bottleneck for HMC for big data is repetitive evaluations of functions, their derivatives, geometric and statistical quantities. To alleviate this issue, in recent years several methods have been proposed to construct surrogate Hamiltonians. To this end, Random network surrogate–HMC (RNS–HMC) as an Accelerated HMC method introduced by Zhang et al. (2017c) uses random non-linear bases along with efficient learning algorithms to construct a surrogate functions that provides effective approximation of the probabilistic model based on the collective behavior of the large data. The goal is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To achieve this, RNS–HMC builds Algorithm 1 Preconditioned Crank-Nicolson Algorithm (pCN) Initialization: Choose an initial guess for the solution, W0. Set the iteration counter n = 0. while not converged or maximum iterations not reached do Implicit Step: Solve the implicit part using Crank-Nicolson: Wn+1 − V n+1/2 ∆t= − 1 2 A(Wn+1) + A(V n+1/2) Preconditioning: Apply a preconditioning step. Explicit Step: Solve the explicit part: V n+1 − Wn+1 ∆t= − 1 2 $$\left(A(V^{n+1})\right)$$ $\equiv\;\frac{1}{2}$ n+1/2) + A(Wn+1) Acceptance Probability: Compute the acceptance probability: $\downarrow$ 1 ? $\downarrow$). α = min 1, exp(Φ(Wn) − Φ(V n+1)) Accept/Reject: Generate a random number u from a uniform distribution. if u ≤ α **then** Set Wn+1 = V n+1. else Set Wn+1 = Wn. end if Update: Increment the iteration counter: n = n + 1. end while Output: The final solution is WN after N iterations. a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The RNS–HMC process begins by identifying suitable bases that can capture the complex geometric properties of the data's parameter space. Through an optimization process, these bases are then used to form surrogate functions that stand in for the computationally expensive true Hamiltonian dynamics. Unlike traditional HMC, which requires repeated evaluation of the model and its derivatives, RNS-HMC leverages these surrogates to perform leapfrog integration steps, leads to reducing computational load. Later, Li et al. (2019b) extended this idea by using a neural network to directly approximate the gradient of the Hamiltonian, rather than using random bases for surrogate construction. ## 2.4 Preconditioned Crank-Nicolson (Pcn) Preconditioned Crank-Nicolson (Da Prato & Zabczyk, 2014) is a variant of MCMC incorporating a preconditioning matrix for adaptive scaling (Cotter et al., 2013). The key steps of the pCN algorithm are outlined in Algorithm 1. In the given pCN Algorithm, the function Φ(Wn) represents a quantity associated with the current solution Wn at the n-th iteration. The specific form and interpretation of Φ depend on the context of the problem being solved. It could be an energy function, a cost function, or any other relevant metric used to assess the quality or appropriateness of the current solution. The pCN method involves re-scaling and random perturbation of the current state, incorporating prior information. Despite the Gaussian prior assumption, the approach adapts to cases where the posterior distribution may not be Gaussian but is absolutely continuous with respect to an appropriate Gaussian density. This adaptation is achieved through the Radon-Nikodym derivative, connecting the posterior distribution with the dominating Gaussian measure, often chosen as the prior. The algorithmic foundation of pCN lies in using stochastic processes that preserve either the posterior or prior distribution. These processes serve as proposals for Metropolis-Hastings methods with specific discretizations, ensuring preservation of the Gaussian reference measure. ## 2.5 **Variational Inference In Bnns** The concept of variational inference has been applied in various forms to probabilistic models. The technique offers a way to approximate posterior distributions in Bayesian models (Jordan et al., 1999). The approximate distribution allows for a more feasible inference, especially for complex models like neural networks. In the context of BNNs, variational inference was brought into focus by Hinton & Van Camp (1993), which, while not explicitly termed as variational inference in the modern sense, laid the groundwork for later developments. A more direct application of variational inference to BNNs was detailed in the later works, such as those by Graves (2011) who applied variational inference to neural networks. Additionally, Kingma & Welling (2013) and Rezende et al. (2014) significantly contributed to popularizing and advancing the use of variational inference in deep learning and Bayesian neural networks through the introduction of efficient gradient-based optimization techniques. Algorithm 2 Variational Inference in Bayesian Neural Networks (BNNs)Blundell et al. (2015) Initialization: Choose an initial variational distribution qθ(W) for the weights W of BNN, parameterized by θ. Define the prior distribution p(W) over the weights. while not converged do E-Step: Estimate the Expectation of the log-likelihood over the variational distribution. Compute the gradient of the ELBO (Evidence Lower BOund) with respect to θ, where ELBO(θ) = Eqθ(W)[log p(Y |*X, W*)] − KL[qθ(W)||p(W)] Here, X and Y are the inputs and outputs of the dataset, respectively, and KL denotes the KullbackLeibler divergence between the variational distribution and the prior. M-Step: Maximize the ELBO with respect to θ by updating θ using gradient ascent: θ ← θ + η∇θELBO(θ) where η is the learning rate. end while Output: Variational distribution qθ(W) approximating the posterior distribution p(W|*X, Y* ). This algorithm succinctly captures the iterative process of optimizing the parameters of a variational distribution to approximate the posterior distribution of a BNN's weights. Through the alternation of expectation (E-Step) and maximization (M-Step) phases, it seeks to minimize the difference between the variational distribution and the true posterior, leveraging the Evidence Lower Bound (ELBO) (Jordan et al., 1999) as a tractable surrogate objective function. This approach enables the practical application of Bayesian inference to neural networks, facilitating the quantification of uncertainty in predictions and model parameters. ## 2.6 **Bnns Utilizing Laplace Approximation** Previous studies have shown that in the context of BNNs, the Laplace approximation serves as an efficient method for approximating the posterior distribution over the network's weights, striking a balance between the computational intensity of variational inference and the exhaustive nature of traditional sampling methods (Arbel et al., 2023; Blundell et al., 2015). This approximation is particularly appealing for its computational efficiency and its capability to facilitate theoretical analyses by transforming a complex posterior into a more tractable form. At the core of the Laplace approximation is the assumption that, around the loss function's minimum, the posterior distribution of the network's weights can be approximated by a Gaussian distribution. This is achieved by finding the mode of the posterior (equivalent to the minimum of the loss function in the Bayesian framework) and then approximating the curvature of the loss surface at this point using the Hessian matrix (Liang et al., 2018). The inverse of this Hessian is used to define the covariance of the Gaussian posterior, thus simplifying the representation of uncertainty in the model's predictions. This approach negates the need for direct optimization of the data likelihood with respect to the stochasticity of the network, which is a significant challenge with deep neural networks due to their complex loss landscapes (Daxberger et al., 2021). MacKay (1992) was pioneering in extensively studying the Laplace approximation within BNNs, highlighting its utility in capturing predictive uncertainty, especially in data regions beyond the training set. Laplace Approximation in BNNs is a technique used to approximate the posterior distribution of the weights W given data *X, Y* . This method approximates the posterior with a Gaussian distribution centered around the mode of the posterior, often referred to as the Maximum A Posteriori (MAP) estimate. The steps involved are as follows: 1. **Maximum A Posteriori (MAP) Estimation**: Find the MAP estimate of the weights WMAP by optimizing the posterior distribution: $$W_{\mathrm{MAP}}=\arg\operatorname*{min}_{W}\left[-\log p(Y|X,W)-\log p(W)\right].$$ where W's are the weights of BNN, parameterized by θ . p(Y |*X, W*) is the likelihood of observing the data given the weights, and p(W) is the prior distribution over the weights. 2. **Gaussian Approximation of the Posterior**: The posterior p(W|*X, Y* ) is then approximated by a Gaussian distribution centered at WMAP with covariance Σ determined by the curvature of the log-posterior at WMAP: q(W) ≈ N (W; WMAP, Σ) The covariance matrix Σ is the inverse of the Hessian of the negative log-posterior evaluated at WMAP: $\mathbf{x}$ Σ −1 = −∇∇ log p(W|*X, Y* ) W=WMAP 3. **Predictive Distribution**: Predictions for new data X∗ are made by integrating over the approximate posterior: $$p(Y^{*}|X^{*},X,Y)\approx\int p(Y^{*}|X^{*},W)q(W)d W$$ where p(Y ∗|X∗*, X, Y* ) is the predictive distribution of the new data given the weights. Laplace Approximation offers a computationally efficient way to perform Bayesian inference in neural networks by providing a method to quantify uncertainty in model predictions through a Gaussian approximation of the posterior distribution. This approximation is particularly useful when the exact posterior is intractable or difficult to sample from directly. ## 2.7 **Bnns Utilizing Mc-Dropout** Monte Carlo Dropout (Gal & Ghahramani, 2016) was introduced as a Bayesian approximation method to quantify model uncertainty in deep learning. The core idea behind this method is to interpret dropout, a technique commonly used to prevent overfitting in neural networks, from a Bayesian perspective. Normally, dropout randomly disables a fraction of neurons during the training phase to improve generalization. However, when viewed through the Bayesian lens, dropout can be seen as a practical way to approximate Bayesian inference in deep networks. This approximation allows the network to estimate not just a single set of weights, but a distribution over them, enabling the model to express uncertainty in its predictions. The MC Dropout technique involves running multiple forward passes through the network with dropout enabled. Each forward pass generates a different set of predictions due to the random omission of neurons, leading to a distribution of outputs for a given input. ## 2.8 **Stochastic Weight Averaging-Gaussian** Stochastic Weight Averaging-Gaussian (SWAG) is a technique that builds upon the idea of Stochastic Weight Averaging (SWA) (Izmailov et al., 2018; Maddox et al., 2019). The core concept behind SWAG is to approximate the distribution of model weights by a Gaussian distribution, leveraging the empirical weight samples collected during training. This approach allows for a more nuanced understanding of the model's uncertainty compared to SWA, which simply averages weights over the latter part of the training process. Mathematically, SWAG operates by first collecting a set of weights {Wi} N i=1 over the last N epochs of training, where Wi represents the weight vector at epoch i. The mean µ of the Gaussian distribution is computed as the simple average of these weights: µ = 1 N PN i=1 Wi. To capture the covariance of the weight distribution, SWAG calculates the empirical covariance matrix using the formula Σ = 1 N−1 PN i=1(Wi − µ)(Wi − µ) T. This formulation assumes a diagonal or low-rank plus diagonal approximation of the covariance matrix to maintain computational efficiency. The resulting Gaussian distribution, characterized by µ and Σ, can then be used for uncertainty estimation and prediction by sampling weights from this distribution and averaging the predictions of the resulting models. ## 3 Bayesian Uq For Neural Networks: Calibration-Emulation-Sampling Traditional artificial neural networks (NN), such as feedforward and recurrent networks, typically consist of multiple layers. These networks are composed of an input layer, denoted as l0, followed by a series of hidden layers ll for l = 1*, . . . , m* − 1, and concluding with an output layer lm. In this architectural framework, comprising a total of m+ 1 layers, each layer l is characterized by a linear transformation, which is subsequently subjected to a nonlinear operation g, commonly referred to as an activation function (Jospin et al., 2022): $$\mathbf{l}_{0}=\mathbf{X},$$ $\mathbf{l}_{l}=g_{l}\left(\mathbf{W}_{l}\mathbf{l}_{l-1}+\mathbf{b}_{l}\right)\quad\text{for all}l\in\{1,\cdots,m\},$ $$\mathbf{Y}=\mathbf{l}_{m}.$$ $$(1)$$ $$\left(2\right)$$ $$\left({\mathfrak{3}}\right)$$ Here, θ = (W, b) are the parameters of the network, where W are the weights of the network connections and b the biases. A given NN architecture represents a set of functions isomorphic to the set of possible parameters θ. Deep learning is the process of estimating the parameters θ from the training set (*X, Y* **) :=** {(xn, yn)} N n=1, where training set is composed of a series of input X and their corresponding labels Y assuming Y ∈ Rs. Based on the training set, a neural network is trained to optimize network parameters θ in order to map X → Y with the objective of obtaining the maximal accuracy (under certain loss function L(·)). Considering the error, we can write NN structure introduced in Equation 2 as a forward mapping, denoted as G, that maps each parameter vector θ to a function that further connects X to Y with small errors εn : $${\mathcal{G}}:\Theta\to Y^{X},\quad\theta\mapsto{\mathcal{G}}(\theta)$$ X, θ *7→ G*(θ) (4) More specifically, $\mathcal{G}(\mathbf{\theta}):\mathbf{X}\to\mathbf{Y},\quad\mathbf{y_{n}}=\hat{\mathbf{y_{n}}}+\mathbf{\varepsilon_{n}},\quad\hat{\mathbf{y_{n}}}=\mathcal{G}_{n}(\mathbf{X};\mathbf{\theta})$. G(θ) : X → Y , yn = yˆn + εn, yˆn = Gn(X; θ) (5) where ε represents random noise capturing disparity between the predicted and actual observed values in the training data. It is worth noting that the output Y could represent latent variables for classification problems. To train NN, stochastic gradient algorithms could be used to solve the following optimization problem: $$\theta^{*}=\arg\operatorname*{min}_{\theta\in\Theta}L(\theta;X,Y)=\arg\operatorname*{min}_{\theta\in\Theta}L(Y-G(X;\theta))$$ $$(6)$$ $$X,Y).$$ L(Y − G(X; θ)) (6) Note, we can define the log-likelihood based on the loss function L(θ; X,Y ). The point estimate approach, which is the traditional approach in deep learning, is relatively straightforward to implement with modern algorithms and software packages but tends to lack explainability (Yang et al., 2021). The resulting model might not be well calibrated (i.e., lack proper uncertainty quantification) (Guo et al., 2017; Nixon et al., 2019). Of all the techniques that exist to mitigate this, stochastic neural networks have proven to be one of the most generic and flexible solutions. Stochastic neural networks are a type of NN built by introducing stochastic components into the network. This is performed by giving the network either a stochastic activation or stochastic weights to simulate multiple possible models θ with their associated probability distribution. The integration of stochastic components into neural networks allows for an extensive exploration of model uncertainty, which can be approached through Bayesian methods among others. It should be noted that not all neural networks that represent uncertainty are Bayesian or even stochastic; some employ deterministic methods to estimate uncertainty without relying on stochastic components or Bayesian inference. BNNs represent a subset of stochastic neural networks where Bayesian inference is specifically used for training, offering a rigorous probabilistic interpretation of model parameters. So a BNN can be defined as any stochastic artificial neural network trained using Bayesian inference (MacKay, 1992). The primary objective is to gain a deeper understanding of the uncertainty that underlies the processes the network is modeling. To design a BNN, we put a prior distribution over the model parameters, p(θ), which leads to a prior confidence in the predictive power of the model p(Y | X, θ). By applying Bayes' theorem, and enforcing independence between the model parameters and the input, the Bayesian posterior can be written as: $$\begin{array}{r c l}{{p(\mathbf{\theta}\mid X,Y)}}&{{=}}&{{\frac{p\left(Y\mid X,\mathbf{\theta}\right)p(\mathbf{\theta})}{\int_{\mathbf{\theta}}p\left(Y\mid X,\mathbf{\theta}^{\prime}\right)p\left(\mathbf{\theta}^{\prime}\right)d\mathbf{\theta}^{\prime}}}}\\ {{}}&{{\propto}}&{{p\left(Y\mid X,\mathbf{\theta}\right)p(\mathbf{\theta}).}}\end{array}$$ BNN is usually trained using MCMC algorithms. Because we typically have big amount of data, the likelihood evaluation tends to be expensive. One common approach to address this issue is subsampling, which restricts the computation to a subset of the data (see, for example, the stochastic gradient methods Hoffman et al. (2010); Welling & Teh (2011a); Chen et al. (2014)). The assumption is that there is redundancy in the data and an appropriate subset of the data can provide a good enough approximation of needed information of the full data set. In practice, it is a challenge to find good criteria and strategies for an effective subsampling in many applications. Additionally, subsampling could lead to a significant loss of accuracy Betancourt (2015). Here, we propose a different approach that explores smoothness or regularity in parameter space. This characteristic in parameter space is true for most statistical models. Therefore, one would expect to find good and compact forms of approximation of functions (e.g., likelihood function) in parameter space. Sampling algorithms can use these approximate functions, also known as "surrogate" functions, to reduce their computational cost. More specifically, we propose using the CES scheme for high-dimensional BNN problems. Emulation bypasses the expensive evaluation of original forward models and reduces the cost of sampling to a small computational overhead. Compared with MCMC methods which require to repeatedly evaluate the original (large) neural network (NN) for the likelihood given the data, the proposed method builds a (smaller) NN emulator, which cuts the middle man (data) by mapping the parameters directly to the likelihood function to avoid its costly evaluation. That is, the emulator is trained based on the parameter-likelihood pairs, which are collected through few iterations of the original NN. In contrast to subsampling methods, this approach can handle computationally intensive likelihood functions, whether the computational cost is due to high-dimensional data or complex likelihood function (e.g., models based on differential equations). Additionally, the calibration process increases the efficiency of MCMC algorithms by providing a robust initial point in the high-density region. ## 3.1 Calibration - Early Stopping In Bayesian Neural Network In the calibration step, our primary objective is to collect samples for model parameters to be used in the subsequent emulation. In the case of FBNN, we first set up a BNN model and sample from the model's posterior using Stochastic Gradient Hamilton Monte Carlo (SGHMC) algorithm. However, we do this only for a limited number of iterations to collect a small set of samples. These samples include both the model parameters (θ (j)) and the outputs predicted by the model (G(X; θ (j))) for each sample j out of a total J samples. The key focus of this training phase is not finding the target posterior distribution, but rather collecting a small number of posterior samples as the training data for the subsequent emulation step. Additionally, the last set of posterior samples obtained during calibration serves as the initial point for the Sampling phase in FBNN. As mentioned earlier, to achieve our primary goal of collecting model parameters and the corresponding estimated outputs, we employ SGHMC algorithm in BNN. The SGHMC algorithm plays a crucial role in efficiently handling large datasets and collecting essential data during the calibration step of the FBNN. This algorithm is strategically chosen for the calibration step due to its effectiveness in exploring high-dimensional parameter spaces, especially when the sample size is also large. Its ability to introduce controlled stochasticity in updates proves instrumental in preventing local minima entrapment, thereby providing a comprehensive set of posterior samples that reflect the diversity of the parameter space. During the calibration phase, we save samples of model parameters and their corresponding predictions at each iteration. The diverse set of samples obtained through SGHMC establishes a robust foundation for subsequent steps in the FBNN methodology. This strategic choice of SGHMC in the calibration step lays the groundwork for the emulation phase by contributing to the construction of a more adaptable emulator for the true neural network mapping. The broad coverage of the parameter space in the calibration step facilitates the generation of representative and diverse samples, further enhancing the overall efficiency and reliability of the FBNN methodology. In essence, the efficacy of SGHMC in exploring parameter spaces ensures a seamless transition from accurate parameter estimation to the construction of an adaptable emulator, making it a key component in the FBNN workflow. The "calibration" step is instrumental in collecting a training set for the subsequent emulation step. It involves an early stopping strategy, aimed at collecting a targeted set of posterior samples without fully converging to the target distribution. The goal is to find an optimal collection of samples from the parameter space. This is aligned with traditional calibration goals of balancing accuracy and reliability, but within a new context. ## 3.2 Emulation - Deep Neural Network (Dnn) The original forward mapping in BNN involves mapping input dataset X to response variable Y . For the likelihood evaluation using original forward mapping, it is necessary to calculate the likelihood L(θ; *X, Y* ) for each sample of model parameters. This means that with each iteration, when a new set of model parameters is introduced, the original forward mapping needs to be applied to generate output predictions, followed by the calculation of the likelihood. In general, this process can be very time-consuming. If, however, we have a small set of pairs of estimated model parameters and corresponding predicted outputs collected during the calibration step, an emulator can be trained on the recorded pairs. This essentially eliminates the intermediary step (passing through each data point), allowing us to map the parameters directly to the likelihood function through a simpler emulator, as opposed to plugging parameters into the original neural network for each parameter setting and evaluating the likelihood function. This leads to a computationally efficient likelihood evaluation. Based on this, to address the computational challenges of evaluating the likelihood with large datasets, an emulator G eis constructed using the recorded pairs {θ (j), yˆ (j) = G(X; θ (j))} J j=1 obtained during the calibration step. In other words, these input-output pairs can be used to train a neural network model as an emulator G e of the forward mapping G: $$\mathcal{G}^{e}(\mathbf{X};\mathbf{\theta})=DNN(\mathbf{\theta},\mathcal{G}(\mathbf{X};\mathbf{\theta}))$$ $$=F_{K-1}\circ\cdots\circ F_{0}(\mathbf{\theta}),$$ $$F_{k}(\cdot)=g_{k}(W_{k}\cdot+b_{k})\in C\left(\mathbb{R}^{d_{k}},\mathbb{R}^{d_{k+1}}\right)\tag{1}$$ $$\begin{array}{l}{(7)}\\ {(8)}\end{array}$$ $$({\mathfrak{g}})$$ Given a DNN model where θ represents the input and G(X; θ) denotes the output, we set the dimensions as d0 = d and dK = s. Here, the matrices Wk are defined in the space R dk+1×dk and the vectors bk in R dk+1. The functions gk act as (continuous) activation mechanisms. There are various options for the activation functions, for instance, $g_{k}(x)=(\sigma(x_{1}),\cdot\cdot\cdot\cdot\sigma(x_{d_{k+1}}))$ where σ belongs to C(R, R) and includes the rectified linear unit (ReLU, σ (xi) = max(0, xi)) and the leaky ReLU (σ (xi; α) = xi· I (xi ≥ 0) + αxi· I (xi < 0)). Alternatively, $g_{k}(x)=(\sigma_{1}(x),\ldots,\sigma_{d_{k+1}}(x))\in C\left(\mathbb{R}^{d_{k+1}},\mathbb{R}^{d_{k+1}}\right)$. can be defined, where the {σi} set is specified as the softmax function: σi(x) = e xi /Pdk+1 j=1 e xj. In the context of our numerical examples, the activation functions for the DNN are selected to ensure that both the function approximations and their derived gradients have minimized errors. To elaborate, the process described involves optimizing the choice of activation functions within the DNN architecture to ensure that the network accurately approximates the target functions and their gradients. The method for selecting the optimal activation function is a grid search over a predefined set of activation functions to identify the suitable activation functions based on the best model performance After the emulator is trained, the log-likelihood can be efficiently approximated as follows: $$(10)$$ $$L(\mathbf{\theta};X,Y)\approx L^{e}(\mathbf{\theta};X,Y)=L(Y-\mathbf{G}^{e}(X;\mathbf{\theta}))$$ e(θ; *X, Y* ) = L(Y − Ge(X; θ)) (10) By combining the approximate likelihood L e(θ; *X, Y* ) with the prior probability p(θ), an approximate posterior distribution θ | *X, Y* that closely mirrors the true posterior distribution can be obtained. Similarly, we could approximate the potential function using the predictions from DNN: $$\Phi(\mathbf{\theta}^{*})\approx\Phi^{e}(\mathbf{\theta}^{*})=\Phi(\mathbf{\theta};y)=\frac{1}{2}\|y-\mathbf{\mathcal{G}}^{e}(X;\mathbf{\theta})\|_{\Gamma}^{2}\tag{1}$$ $$(11)$$ Building upon the foundational concepts of using a DNN emulator G efor approximating the forward mapping function G, we further elaborate on the implications and advantages of this approach for Bayesian inference, particularly in the context of handling large datasets and/or complex likelihood functions. The emulation step, which involves training the DNN emulator with input-output pairs {θ (j), G(X; θ (j))}, serves as a critical phase where the emulator learns to mimic the behavior of the original model with high accuracy. The utilization of DNN emulator to approximate the likelihood function in Bayesian inference presents a significant computational advantage over the direct use of the original BNN likelihood. This advantage stems primarily from the inherent differences in computational complexity between evaluating the the likelihood with a DNN emulator - which takes a set of model parameters as input and yields predicted responses—and the original BNN model - which processes X as input to produce the response variable. In the sampling stage, the computational complexity could be significantly reduced if we use Φ einstead of Φ in the accept/reject step of MCMC. If the emulator is a good representation of the forward mapping, the difference between Φ e and Φ would be small and negligible. Then, the samples by such emulative MCMC have the stationary distribution with small discrepancy compared to the true posterior distribution. This approach not only ensures that the sampling process is computationally feasible but also maintains the integrity of the stationary distribution, closely approximating the true posterior distribution with minimal discrepancy. The integration of DNN emulators into the Bayesian inference workflow thus presents a compelling solution to the computational challenges associated with evaluating likelihood functions in complex models. ## 3.3 Sampling - Preconditioned Crank-Nicolson (Pcn) In the context of the FBNN method, the sampling step is crucial for exploring and exploiting the posterior distribution efficiently. The method employs MCMC algorithms based on a trained emulator to achieve full exploration and exploitation. However, challenges arise, especially in high-dimensional parameter spaces, where classical MCMC algorithms often exhibit escalating correlations among samples. To address this issue, the pCN method introduced in algorithm1 has been used as a potential solution. Unlike classical methods, pCN avoids dimensional dependence challenges, making it particularly suitable for scenarios like BNN models with a high number of weights to be inferred (Hairer et al., 2009). As explained in section 2.3, the pCN approach minimizes correlations between successive samples, a critical feature for ensuring the diversity and representativeness of the samples collected. This characteristic is vital for FBNNs, as it directly impacts the network's ability to learn from data and make robust predictions. The emphasis on exploring the mode of the distribution is particularly relevant in high-dimensional spaces inherent to FBNN. The pCN method excels in traversing the parameter space with controlled perturbations, enhancing the algorithm's ability to capture the most probable configurations of model parameters. This focus on effective exploration around the mode contributes to a more accurate representation of the underlying neural network, ultimately improving model performance. In other words, the choice of pCN as the sampling method in FBNN is motivated by its tailored capacity to navigate and characterize the most probable regions of the parameter space. This choice reinforces the methodology's robustness and reliability, as pCN facilitates efficient sampling, leading to a more accurate and representative approximation of the posterior distribution. Figure 1 displays a simulation that contrasts the sampling mechanisms of SGHMC and pCN within a ![10_image_0.png](10_image_0.png) multimodal probability distribution. The task is to sample from a mixture of 25 Gaussian distributions, represented in panel (a), using a total of 200,000 samples. Here, the target distribution is multimodal with several distinct peaks (modes). Middle figure shows that SGHMC has explored the parameter space, although with a less concentrated sampling around the modes compared to the target distribution. This indicates that while SGHMC is effective at exploring the space, it may not capture the modes as tightly as the target distribution. In the right figure related to pCN sampler, the concentration of samples around the modes is much higher compared to SGHMC, which indicates that pCN is more effective at exploring around the modes of the distribution. Our method combines these two algorithms to obtain the best of both worlds. Figure 1: Sampling from a mixture of 25 Gaussians shown in (a) with 200k samples. SGHMC in (b) broadly explores the space, while pCN in (c) hones in on the high-density regions for precise mode capture. ## 3.4 Fast Bayesian Neural Network (Fbnn). Next, we combine all the techniques discussed above to speed up Bayesian UQ for BNN. More specifically, our main method called FBNN (Algorithm 3). This approach combines the strengths of BNN in uncertainty quantification, SGHMC for efficient parameter calibration, and the pCN method for sampling. Algorithm 3 Fast Bayesian Neural Network (FBNN) Input: Training set {(Xn, Yn)} N n=1, Prior p(θ) Output: Posterior samples for model parameters procedure FBNN({(Xn, Yn)} N n=1, p(θ)) Calibration Step: Initialize model parameters θ using SGHMC Save posterior samples {θ (j) n } J j=1 and the corresponding {Gθ (j) n (Xn)} J j=1 after a few iterations Emulation Step: Build an emulator of the forward mapping G e based on {θ (j) n , Gθ (j) n (Xn)} J j=1 using a DNN as emulator. Sampling Step: Run approximate MCMC, particularly pCN, based on the emulator to propose θ ′ from θ. end procedure ## 4 Numerical Experiments In this section, we present empirical evidence comparing our CES method with two baseline BNN methods equipped with the SGHMC and pCN samplers (shown as BNN-SGHMC and BNN-pCN). Additionally, we evaluate CES against various BNN architectures including those using Variational Inference, Lasso Approximation, MC-Dropout, SWAG, and RNS-HMC, all of which were detailed in Section 2. These comparisons are conducted through a series of regression and classification problems to thoroughly assess the performance and effectiveness of each method. We also include the results from DNN, which does not provide uncertainty quantification, but serves as a reference point. Moreover, we have incorporated the evaluation of Deep Ensembles, which consist of multiple DNNs each initialized with different random seeds, into our comparative analysis (shown as Ensemble-DNN). This comparative study emphasizes the strengths and weaknesses of each method, highlighting that, although the Ensemble-DNN approach allows for parallelization, it falls short in providing a probabilistic framework for analysis, a significant advantage offered by our CES method. As discussed earlier, our main FBNN model utilizes SGHMC sampler in the calibration step and pCN in the sampling step. One of the distinctive features of our FBNN model lies in its strategic integration of the SGHMC sampler during the calibration step and the pCN algorithm during the sampling step. This combination is carefully chosen to harness the complementary strengths of these two optimization methods. Moreover, we extend our exploration by introducing three additional FBNN models: FBNN (pCN-SGHMC), where pCN is employed in the calibration step and SGHMC in the sampling step; FBNN (pCN-pCN), where pCN is used in both steps; and FBNN (SGHMC-SGHMC), where SGHMC is used in both calibration and sampling steps. We evaluate these methods using a range of key metrics, including Mean Squared Error (MSE) for regression tasks and Accuracy for classification tasks. Additionally, we assess their performance in terms of computational cost, log probability evaluation, and various statistics related to the Effective Sample Size (ESS) of model parameters. These statistics include the minimum, maximum, and median ESS, as well as the minimum ESS per second. We also quantify the amount of speedup (denoted as "spdup"), a metric that compares the minimum ESS per second of the model with that of BNN-SGHMC as the benchmark. Moreover, we evaluate the Coverage Probability (CP) for UQ in regression cases (CPs are set at 95%) and Expected Calibration Error (ECE) (Schefzik et al., 2013) and reliability diagrams (Bröcker & Smith, 2007) for UQ in classification cases. We also plot 95% Credible Interval (CI) constructed by the predicted outputs of the Bayesian models along with the average true output as a method for UQ in regression problems. ECE specifically addresses model calibration, a component of UQ (Thorarinsdottir & Gneiting, 2010). A model that accurately estimates its uncertainty will not only perform well on average (accuracy, precision, recall) but also provide reliable probability estimates for its predictions. Models with lower ECE are considered to provide more reliable uncertainty estimates. Its predicted probabilities are more indicative of the true likelihood of outcomes, making it more trustworthy for decision-making under uncertainty. ![12_image_0.png](12_image_0.png) Figure 2: Comparative Analysis of Predictive Credible Intervals and Mean Predictions for regression Data Sets. For each data set, the 95% CI for BNN predictions and FBNN predictions are shown as shaded areas. The average predictions from BNN and FBNN are represented with dashed lines. Additionally, the 95% CI for the true output as ground truth and the smoothed average true output are plotted as solid lines. The reliability diagram is a common diagnostic graph used to summarize and evaluate probabilistic forecasts. They consist of a plot of the observed relative frequency against the predicted probability, providing a quick visual intercomparison when tuning probabilistic forecast systems, as well as documenting the performance of the final product. In these diagrams, the predicted probabilities are categorized into ten equally spaced segments, ranging from 0 to 1. Each forecast is placed into a bin corresponding to its predicted probability. Additionally, the diagram employs an equal frequency binning approach on the y-axis, ensuring a uniform distribution of data points across the observed frequency range. Throughout these experiments, we collect 2000 posterior samples for the BNN-SGHMC and BNN-pCN. Samples have been collected at every iteration. In contrast, for the FBNN methods, we use a small number (200-400) samples from either BNN-SGHMC or BNN-pCN (depending on the specific FBNN model) along with the corresponding predicted outputs during the calibration step. These 200 samples serve as the training data for the emulator. Additionally, for the BNN-SGHMC and BNN-pCN models, we train them based on a random initial starting point for the MCMC sampling. However, in the FBNN methods, we employ the set of posterior samples collected during the last iteration of the calibration step as the starting point for the subsequent MCMC sampling. Including the performance data for the first 200 SGHMC base samples in Tables 2 and 1 reveals that these initial samples do not exhibit high quality when evaluated in terms of MSE for regression and accuracy for classification. The comparison indicates that, especially in regression cases, FBNN (SGHMC-pCN) significantly outperforms the 200 base SGHMC samples in terms of spdup across all cases examined. Furthermore, in classification datasets, FBNN (SGHMC-pCN) achieves better results in terms of Expected Calibration Error (ECE) compared to the 200 base SGHMC samples. ## 4.1 Regression We first evaluate our proposed method using a set of simulate and real regression problems. The results are provided in Table 1. ## 4.1.1 Simulated Data We begin our empirical evaluation by considering the following regression problem: $$\mathbf{y}=\mathcal{G}_{\theta}(\mathbf{X})+\mathbf{\epsilon},\quad\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Gamma})$$ y = Gθ(X) + ϵ, ϵ ∼ N(0,Γ) (12) The results are summarized in Table 1. For the simulated dataset, DNN provides the smallest MSE value at 0.29; however, BNN-SGHMC and FBNN (SGHMC-pCN) provide similar performance in terms of MSE. $$(12)$$ | Dataset | Method | MSE | CP | Time (s) | ESS | minESS/s | spdup | |----------------------|--------------|-------|-------|-------------------|-------|------------|---------| | Simulated | DNN | 0.29 | - | 39 | - | - | - | | Dataset | Ensemble-DNN | 0.35 | 82.9% | 196 | - | - | - | | BNN-VI | 0.51 | 81.7% | 76 | - | - | - | | | BNN-Lasso | 0.48 | 78.0% | 84 | - | - | - | | | BNN-MC-Dropout | 2.34 | 86.2% | 32 | - | - | - | | | BNN-SGHMC | 0.31 | 84.2% | 566 | (106, 831, 1522) | 0.19 | 1 | | | BNN-SGHMC(first 200) | 2.34 | 90.3% | 52 | (12, 62, 147) | 0.22 | 1.15 | | | SWAG | 0.45 | 82.2% | 150 | (96, 1021, 1461) | 0.64 | 3.36 | | | BNN-RNS-HMC | 0.56 | 81.1% | 62 | (142, 973, 1476) | 2.29 | 12.05 | | | BNN-pCN | 0.39 | 80.0% | 541 | (107, 844, 1533) | 0.20 | 1.05 | | | FBNN (pCN-SGHMC) | 0.48 | 77.3% | 113 | (139, 1173, 1527) | 1.23 | 6.47 | | | FBNN (pCN-pCN) | 0.38 | 71.2% | 115 | (106, 955, 1533) | 0.92 | 4.84 | | | FBNN (SGHMC-SGHMC) | 0.43 | 75.6% | 62 | (139, 1173, 1527) | 2.35 | 12.36 | | | FBNN (SGHMC-pCN) | 0.32 | 82.5% | 60 | (175, 821, 1536) | 2.93 | 15.55 | | | Wine | DNN | 0.43 | - | 26 | - | - | - | | Quality | Ensemble-DNN | 0.41 | 47.4% | 137 | - | - | - | | BNN-VI | 0.66 | 39.5% | 28 | - | - | - | | | BNN-Lasso | 0.63 | 40.9% | 42 | - | - | - | | | BNN-MC-Dropout | 0.67 | 32.3% | 12 | - | - | - | | | BNN-SGHMC | 0.53 | 51.3% | 505 | (111, 837, 1538) | 0.23 | 1 | | | BNN-SGHMC(first 200) | 2.75 | 54.6% | 61 | (13, 111, 150) | 0.21 | 0.91 | | | SWAG | 0.53 | 48.2% | 97 | (98, 1021, 1489) | 1.01 | 4.39 | | | BNN-RNS-HMC | 0.62 | 44.7% | 38 | (107, 925, 1520) | 2.81 | 12.24 | | | BNN-pCN | 0.65 | 51.1% | 620 | (99, 1003, 1532) | 0.16 | 0.69 | | | FBNN (pCN-SGHMC) | 0.52 | 32.2% | 68 | (91, 912, 1533) | 1.37 | 5.95 | | | FBNN (pCN-pCN) | 0.65 | 24.5% | 67 | (105, 1087, 1540) | 1.58 | 6.86 | | | FBNN (SGHMC-SGHMC) | 0.50 | 40.0% | 70 | (77, 806, 1536) | 1.10 | 4.78 | | | FBNN (SGHMC-pCN) | 0.52 | 48.1% | 57 | (92, 897, 1536) | 1.62 | 7.33 | | | Boston | DNN | 3.21 | - | 14 | - | - | - | | Housing | Ensemble-DNN | 3.17 | 72.1% | 74 | - | - | - | | BNN-VI | 7.60 | 83.7% | 85 | - | - | - | | | BNN-Lasso | 6.20 | 79.2% | 68 | - | - | - | | | BNN-MC-Dropout | 10.12 | 81.2% | 91 | - | - | - | | | BNN-SGHMC | 3.83 | 75.3% | 888 | (76, 649, 1536) | 0.09 | 1 | | | BNN-SGHMC(first 200) | 7.40 | 66.9% | 86 | (9, 87, 150) | 0.09 | 1 | | | SWAG | 5.81 | 71.9% | 104 | (68, 724, 1532) | 0.65 | 7.22 | | | BNN-RNS-HMC | 9.42 | 73.4% | 76 | (73, 1032, 1504) | 0.96 | 10.6 | | | BNN-pCN | 3.25 | 89.3% | 901 | (76, 649, 1536) | 0.08 | 0.88 | | | FBNN (pCN-SGHMC) | 4.16 | 41.7% | 186 | (71, 965, 1543) | 0.38 | 4.22 | | | FBNN (pCN-pCN) | 3.81 | 47.1% | 186 | (80, 966, 1541) | 0.43 | 4.78 | | | FBNN (SGHMC-SGHMC) | 4.15 | 48.9% | 94 | (69, 979, 1542) | 0.74 | 8.22 | | | FBNN (SGHMC-pCN) | 3.82 | 81.1% | 91 | (93, 938, 1543) | 1.03 | 11.94 | | | Alzheimer | DNN | 0.49 | - | 326 | - | - | - | | Dataset | Ensemble-DNN | 0.42 | 89.3% | 1597 | - | - | - | | BNN-VI | 0.53 | 87.6% | 341 | - | - | - | | | BNN-Lasso | 0.52 | 83.5% | 561 | - | - | - | | | BNN-MC-Dropout | 0.60 | 92.8% | 268 | - | - | - | | | BNN-SGHMC | 0.49 | 91.6% | 6524 | (102, 973, 1376) | 0.01 | 1 | | | BNN-SGHMC(first 200) | 3.17 | 72.7% | 641 | (7, 82, 150) | 0.01 | 1 | | | SWAG | 0.74 | 89.3% | 1214 | (106, 1002, 1542) | 0.08 | 8 | | | BNN-RNS-HMC | 0.61 | 92.4% | 7324 | (96, 892, 1531) | 0.01 | 0.83 | | | BNN-pCN | 0.51 | 89.9% | 6212 | (120, 1092, 1448) | 0.02 | 1.23 | | | FBNN (pCN-SGHMC) | 0.50 | 90.2% | 643 | (116, 994, 1504) | 0.18 | 18 | | | FBNN (pCN-pCN) | 0.56 | 91.4% | 632 | (108, 998, 1498) | 0.17 | 17 | | | FBNN (SGHMC-SGHMC) | 0.55 | 88.4% | 671 | (118, 1012, 1541) | 0.17 | 17 | | | FBNN (SGHMC-pCN) | 0.53 | 91.6% | 682 | (149, 984, 1497) | 0.22 | 22 | | Table 1: Performance of various deep learning methods based on regression problems. For ESS, minimum, median, and maximum values are provided in parenthesis. Based on the results, BNN-MC-Dropout has the highest CP value (86.2%) compared to the other BNN and FBNN variants, indicating better coverage, but considerably higher MSE. This indicates that while BNN-MC-Dropout provides more reliable uncertainty estimates, its predictions are less accurate on average compared to those from other BNN and FBNN variants. Notably, among all the FBNN variants, FBNN (SGHMC-pCN) provides the highest CP at 82.5%, demonstrating a level of calibration comparable to that of the BNN model. The Ensemble-DNN demonstrates comparable performance to the FBNN (SGHMC-pCN) in terms of CP, yet it operates at a pace three times slower. As discussed earlier, the standard DNN does not quantify uncertainty. Examining the efficiency of sample generation, all FBNN variants have relatively higher ESS per second values compared to all the other examined BNN models except BNN-RNS-HMC, and among all the models, FBNN (SGHMC-pCN) has the highest ESS per second at 2.93. This approach provides the highest speed-up at 15.55 compared to BNN-SGHMC as the baseline model, highlighting its computational efficiency. While the speedup value of BNN-RNS-HMC is 12.05, which is higher than that of most FBNN models, it is still lower than our main FBNN model, FBNN (SGHMC-pCN), and its MSE is significantly higher. Considering these results, FBNN (SGHMC-pCN) emerges as a strong performer, demonstrating a good balance between predictive accuracy and computational efficiency, making it a favorable choice for uncertainty quantification. Figure 2c shows the estimated mean and prediction uncertainty for both BNN and FBNN (SGHMC-pCN) models, alongside the 95% CI for the true output as ground truth and the smoothed average true output. As we can see, BNN and FBNN have very similar and close credible intervals bounds. This consistency in credible interval bounds is significant for UQ, indicating that both models effectively and almost equally quantify uncertainty in their predictions. For clarity and conciseness within our figures, we have employed Principal Component Analysis (PCA) to transform the original feature dataset into a one-dimensional principal component. This one-dimensional principal component is then utilized as the x-axis for our plots in figure 2. ## 4.1.2 Wine Quality Data As the first real dataset for the regression case, we use the Wine Quality data, provided by Cortez et al. (2009). This dataset contains various physicochemical properties of different wines, while the target variable is the quality rating. Initially, we trained a DNN model with a total of J = 241 model parameters. Subsequently, we evaluated this model on a test dataset, recording both the model accuracy on the test dataset and the total training time. For BNN models, we first place a prior distribution, N(0, 1I), on the potential model parameters, denoted as p(θ). The BNN models also featured J = 241 total model parameters. Based on the results shown in Table 1, DNN privides the best MSE at 0.43. However, as stated before, it lacks the capability to quantify uncertainty. Both BNN-SGHMC and FBNN (SGHMC-pCN) provide similar MSE values of 0.53 and 0.52 respectively. While BNN-SGHMC has slightly better CP, FBNN (SGHMC-pCN) is more computationally efficient. SWAG offers performance comparable to FBNN (SGHMC-pCN) in accuracy and CP, but it requires more computational resources. BNN-RNS-HMC stands out for its significant spdup among all the models, though it does not perform as well in terms of MSE and CP. Figure 2a shows the prediction mean and 95% CI for both methods, as well as 95% CI area for true output. For this dataset, we selected "Malic Acid" as the variable of interest for the x-axis. ## 4.1.3 Boston Housing Data The Boston housing data was collected in 1978. Each of the 506 entries represent aggregated data of 14 features for homes from various suburbs in Boston. For this dataset, we followed CES steps similar to the Wine Quality dataset but with different number of model parameters (J=3009); the emulator has 3 hidden layers, but larger number of nodes (2048, 1024, 512). For this example, the DNN achieves an MSE of 3.21. BNN-SGHMC and BNN-pCN exhibit comparable MSE values at 3.83 and 3.25, respectively, with BNN-pCN showing slightly better CP (89.3%) than BNNSGHMC (85.3%). Among the FBNN variants, FBNN (SGHMC-pCN) stands out with a notable balance between MSE (3.82), CP (81.1%), and computational efficiency, completing the task in just 91 seconds. This model significantly outperforms all the other models in terms of speed-up (11.94), showcasing its effectiveness in sampling. Figure 2b shows the 95% CIs and mean predictions of both BNN-SGHMC and FBNN (SGHMCpCN). In the case of the Boston housing dataset, "CRIM: per capita crime rate by town" serves as the x-axis variable. | Dataset | Method | Acc | Time(s) | ESS(min,med,max) | minESS/s | spdup | ECE | |----------------------|--------------|-------|-----------------------|--------------------|------------|---------|-------| | Simulated | DNN | 96% | 18 | - | - | - | - | | Dataset | Ensemble-DNN | 96% | 96 | - | - | - | 0.384 | | BNN-VI | 92% | 20 | - | - | - | 0.491 | | | BNN-Lasso | 94% | 18 | - | - | - | 0.498 | | | BNN-MC-Dropout | 92% | 16 | - | - | - | 0.326 | | | BNN-SGHMC | 96% | 889 | (23, 189, 1468) | 0.03 | 1 | 0.450 | | | BNN-SGHMC(first 200) | 81% | 86 | (7, 36, 134) | 0.08 | 2.66 | 0.498 | | | SWAG | 83% | 98 | (67, 742, 1503) | 0.68 | 22.78 | 0.460 | | | BNN-RNS-HMC | 72% | 27 | (137, 1024, 1523) | 5.07 | 169 | 0.492 | | | BNN-pCN | 96% | 1015 | (38, 176, 1351) | 0.04 | 1.46 | 0.455 | | | FBNN (pCN-SGHMC) | 92% | 105 | (132, 947, 1539) | 1.26 | 48.92 | 0.390 | | | FBNN (pCN-pCN) | 90% | 105 | (147, 861, 1535) | 1.41 | 54.44 | 0.405 | | | FBNN (SGHMC-SGHMC) | 95% | 97 | (153, 871, 1520) | 1.57 | 61.04 | 0.389 | | | FBNN (SGHMC-pCN) | 96% | 92 | (153, 871, 1520) | 1.67 | 64.32 | 0.380 | | | Adult | DNN | 85% | 426 | - | - | - | - | | Dataset | Ensemble-DNN | 84% | 2153 | - | - | - | 0.556 | | BNN-VI | 80% | 562 | - | - | - | 0.642 | | | BNN-Lasso | 83% | 256 | - | - | - | 0.631 | | | BNN-MC-Dropout | 82% | 187 | - | - | - | 0.540 | | | BNN-SGHMC | 83% | 5979 | (16, 202, 1520) | 0.002 | 1 | 0.574 | | | BNN-SGHMC(first 200) | 78% | 581 | (1.13, 41.44, 148.43) | 0.002 | 0.95 | 0.594 | | | SWAG | 79% | 1641 | (47, 912, 1532) | 0.04 | 14.32 | 0.668 | | | BNN-RNS-HMC | 72% | 6110 | (89, 960, 1530) | 0.01 | 5 | 0.658 | | | BNN-pCN | 83% | 6227 | (9, 117, 1518) | 0.001 | 0.52 | 0.616 | | | FBNN (pCN-SGHMC) | 82% | 642 | (87, 892, 1539) | 0.13 | 50.93 | 0.580 | | | FBNN (pCN-pCN) | 82% | 639 | (88, 890, 1540) | 0.12 | 51.52 | 0.592 | | | FBNN (SGHMC-SGHMC) | 83% | 612 | (68, 941, 1541) | 0.12 | 41.88 | 0.583 | | | FBNN (SGHMC-pCN) | 84% | 609 | (89, 875, 1539) | 0.15 | 54.91 | 0.576 | | | Alzheimer | DNN | 82% | 51 | - | - | - | - | | Dataset | Ensemble-DNN | 83% | 262 | - | - | - | 0.542 | | BNN-VI | 72% | 61 | - | - | - | 0.546 | | | BNN-Lasso | 76% | 256 | - | - | - | 0.524 | | | BNN-MC-Dropout | 76% | 12 | - | - | - | 0.429 | | | BNN-SGHMC | 81% | 2736 | (81, 588, 1526) | 0.03 | 1 | 0.499 | | | BNN-SGHMC(first 200) | 69% | 282 | (8, 84, 149) | 0.03 | 0.96 | 0.523 | | | SWAG | 81% | 312 | (72, 913, 1562) | 0.23 | 7.69 | 0.508 | | | BNN-RNS-HMC | 58% | 293 | (84, 915, 1540) | 0.28 | 9.55 | 0.521 | | | BNN-pCN | 73% | 2660 | (71, 424, 1534) | 0.03 | 1 | 0.469 | | | FBNN (pCN-SGHMC) | 76% | 277 | (76, 947, 1542) | 0.27 | 9 | 0.568 | | | FBNN (pCN-pCN) | 77% | 274 | (70, 931, 1542) | 0.25 | 8.33 | 0.377 | | | FBNN (SGHMC-SGHMC) | 80% | 278 | (81, 973, 1538) | 0.28 | 9.33 | 0.448 | | | FBNN (SGHMC-pCN) | 84% | 280 | (92, 914, 1535) | 0.33 | 11 | 0.376 | | Table 2: Performance of various deep learning methods based on classification problems. ## 4.1.4 **Alzheimer Data** We have expanded our experimental evaluation to include a dataset from the National Alzheimer's Coordinating Center(NACC). NACC is responsible for developing and maintaining a database of patient information collected from the 29 Alzheimer disease centers (ADCs) funded by the National Institute on Aging (NIA) (Beekly et al., 2004). The NIA appointed the ADC Clinical Task Force to determine and define an expanded, standardized clinical data set, called the Uniform Data Set (UDS). The goal of the UDS is to provide ADC researchers a standard set of assessment procedures to better characterize ADC participants with mild Alzheimer disease and mild cognitive impairment in comparison with nondemented controls (Beekly et al., 2007).This dataset includes records from 185,831 subjects, with a rich array of 1,024 features, narrowed down to 56 key features for our analysis. These features were carefully selected to represent a wide spectrum of variables relevant to Alzheimer's Disease (AD) diagnosis, including functional abilities, brain morphometrics, living situations, and demographic information (Ren et al., 2022). For the regression case, the goal is to predict Left Hippocampus Volume, a critical marker in the progression of AD, as a function of other variables. For this dataset, the DNN model achieves an MSE of 0.49, serving as a baseline for the performance of more complex models. BNNs using different approaches (Variational Inference, Lasso, MC-Dropout, Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)) show a range of MSE values from 0.49 to 0.60. Notably, BNN-MC-Dropout stands out with the highest CP of 92.8%, though its MSE is high at 0.60. Among the FBNN models, the FBNN (SGHMC-pCN) model is highlighted for its balanced performance with an MSE of 0.53 and a CP of 91.6% which is close to highest CP among all different models (92.8% for BNN-MC-Dropout ![16_image_0.png](16_image_0.png) Figure 3: Reliability Diagrams for Simulated dataset in classification task. These diagrams incorporate equal frequency binning. ). It shows a considerable improvement in computational efficiency, evidenced by a computational time of 682 seconds and a spdup factor of 22 times compared to BNN-SGHMC as baseline BNN model. ## 4.2 Classification We also evaluate our method based on a set of simulated and real classification problems. The results are provided in Table 2. ## 4.2.1 Simulated Data In this section we focus on a binary classification problem using simulate data. The data setup involves a similar structure to the regression case, with the key difference being in the response variable, loss functions, and the architecture of the DNN emulator. Here, the emulator has two layers with ReLU activation functions for the hidden layers and sigmoid activation for the output layer. FBNN (SGHMC-pCN), DNN, Ensemble-DNN, BNN-SGHMC, and BNN-pCN exhibit comparable performance in terms of accuracy and outperform other models. Conversely, BNN-RNS-HMC and SWAG demonstrate considerably lower accuracy relative to the aforementioned methods. While BNN-RNS-HMC achieves the highest spdup, it significantly underperforms in terms of accuracy. In contrast, FBNN (SGHMC-pCN) boasts the second-highest speedup, at 64.32, showcasing its computational efficiency relative to BNN-SGHMC. Furthermore, it maintains the highest accuracy rate of 96%, indicating an optimal balance between computational efficiency and accuracy. BNN-Lasso, BNN-RNS-HMC and BNN-VI have relatively high ECE values, suggesting less reliable UQ compared to other methods. BNN-MC-Dropout has the lowest ECE value compared to other methods, but it also have a lower accuracy rate. Among the FBNN variants, FBNN (SGHMC-pCN) presents a low ECE, closely aligning with the ECE value of BNN-MC-Dropout, while providing an accuracy rate similar to DNN. Moreover, in terms of UQ, a model with a reliability curve that closely follows the diagonal line is considered better calibrated, meaning its predicted probabilities are more aligned with actual outcomes. The FBNN variants, particularly the first FBNN (SGHMC-pCN) plot, appear to be better calibrated across most probability ranges except at the highest probabilities, suggesting a more reliable UQ. However, it is important to note that while the models exhibit overconfidence at the high end of predicted values, this may also be due to fewer data points in those bins, which is a common issue in reliability diagrams. ## 4.2.2 Adult Data Next, we use the Adult dataset, discussed in Becker & Kohavi (1996). This dataset comprises a relatively larger sample size of 40,434 data points, each described by 14 distinct features. All BNN and FBNN versions contain a complex structure with 2,761 internal model parameters. The classification task for the Adult dataset involves predicting whether an individual will earn more or less than $50,000 per year. All methods achieve comparable accuracy rates, though BNN-RNS-HMC and SWAG exhibit slightly lower accuracy in comparison to the others. Among them, FBNN (SGHMC-pCN) stands out as the most computationally efficient method for uncertainty quantification, boasting a spdup value of 54.91 relative to the baseline BNN approach. A low ECE value of the FBNN (SGHMC-pCN) model, along with a reliability curve that closely aligns with the diagonal line (not depicted in the paper), signifies the model's superior performance in terms of uncertainty quantification. ## 4.2.3 **Alzheimer Data** For the classification case, we utilized the same NACC dataset previously introduced in the regression context. Here, our objective is to predict cognitive status, classifying individuals as either cognitively unimpaired (healthy controls, HC), labeled as class 0, or as having mild cognitive impairment (MCI) due to Alzheimer's disease (AD) or dementia due to AD, labeled as class 1. Still we use records from 185,831 subjects with 56 key features for our analysis. The DNN and Ensemble-DNN models achieve an accuracy of 82% and 83%, setting a baseline for comparison with the Bayesian approaches. Various BNN methods show a range of accuracies, with BNN-SGHMC achieving a notable 81% accuracy but requiring a significantly higher computational cost (2736 seconds). The BNN-VI and BNN-pCN models show lower accuracies of 72% and 73%, respectively, with BNN-pCN also having a high computational demand (2660 seconds). With an accuracy of 84% at a relatively low computational cost (280 seconds), FBNN (SGHMC-pCN) surpasses all other models in correctly identifying Alzheimer's disease, making it the most reliable model among those tested. This model not only surpasses the accuracy of the standard DNN and Ensemble-DNN models but also offers a balance between computational efficiency and high accuracy. Moreover, FBNN (SGHMC-pCN) model demonstrates the highest sampling efficiency (spdup of 11), indicating it can achieve high accuracy with fewer computational resources compared to other models. Moreover, for the FBNN model implemented on the Alzheimer dataset, the ECE value is low, and the reliability curve closely tracks the diagonal line, indicating the model's adeptness at UQ for this specific dataset. ## 5 Conclusion In this paper, we have proposed an innovative CES framework called FBNN, specifically designed to enhance the computational efficiency and scalability of BNN for high-dimensional data. Our primary goal is to provide a robust solution for uncertainty quantification in high-dimensional spaces, leveraging the power of Bayesian principles while mitigating the computational bottlenecks traditionally associated with BNN. In our numerical experiments, we have successfully applied various variants of FBNN, including different configurations with BNN, to regression and classification tasks on both synthetic and real datasets. Remarkably, the FBNN variant incorporating SGHMC for calibration and pCN for sampling, denoted as FBNN (SGHMC-pCN), not only matches the predictive accuracy of traditional BNN but also offers substantial computational advantages. The superior performance of the FBNN variant employing SGHMC for calibration and pCN for sampling can be attributed to the complementary strengths of these two samplers. SGHMC excels at broad exploration of the parameter space, providing an effective means for understanding the global structure during the calibration step. On the other hand, pCN is adept at efficient sampling around modes, offering a valuable tool for capturing local intricacies in the distribution during the final sampling step. By combining these samplers in the FBNN model, we achieve a balanced approach between exploration (calibration with SGHMC) and exploitation (final sampling with pCN). Future work could involve extending our method to more complex problems (e.g., spatiotemporal data) and complex network structures (e.g., graph neural networks). Additionally, future research could focus on improving the emulation step by optimizing the DNN architecture. Finally, our method could be further improved by embedding the sampling algorithm in an adaptive framework similar to the method of Zhang et al. (2018). ## References S. Ahn, B. Shahbaba, and M. Welling. Distributed Stochastic Gradient MCMC. In International Conference on Machine Learning, 2014. ![18_image_0.png](18_image_0.png) Figure 4: Comparative analysis of spdup for various BNN methods across all tested datasets. Methods are ordered by efficiency within each dataset, highlighting the impact of model characteristics on sampling performance. Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An introduction to mcmc for machine learning. Machine learning, 50:5–43, 2003. Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, and Vincent Fortuin. A primer on bayesian neural networks: review and debates. arXiv preprint arXiv:2309.16314, 2023. Barry Becker and Ronny Kohavi. Adult. UCI Machine Learning Repository, 1996. Duane L Beekly, Erin M Ramos, Gerald van Belle, Woodrow Deitrich, Amber D Clark, Mary E Jacka, Walter A Kukull, et al. The national alzheimer's coordinating center (nacc) database: an alzheimer disease database. Alzheimer Disease & Associated Disorders, 18(4):270–277, 2004. Duane L Beekly, Erin M Ramos, William W Lee, Woodrow D Deitrich, Mary E Jacka, Joylee Wu, Janene L Hubbard, Thomas D Koepsell, John C Morris, Walter A Kukull, et al. The national alzheimer's coordinating center (nacc) database: the uniform data set. Alzheimer Disease & Associated Disorders, 21(3): 249–258, 2007. Alexandros Beskos. A stable manifold mcmc method for high dimensions. Statistics & Probability Letters, 90:46–52, 2014. Alexandros Beskos, Gareth Roberts, and Andrew Stuart. Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions. The Annals of Applied Probability, 19(3):863 - 898, 2009. doi: 10.1214/08-AAP563. URL https://doi.org/10.1214/08-AAP563. Alexandros Beskos, Frank J Pinski, Jesús Marıa Sanz-Serna, and Andrew M Stuart. Hybrid monte carlo on hilbert spaces. Stochastic Processes and their Applications, 121(10):2201–2230, 2011. Alexandros Beskos, Mark Girolami, Shiwei Lan, Patrick E Farrell, and Andrew M Stuart. Geometric mcmc for infinite-dimensional inverse problems. Journal of Computational Physics, 335:327–351, 2017. Michael Betancourt. The fundamental incompatibility of scalable hamiltonian monte carlo and naive data subsampling. In International Conference on Machine Learning, pp. 533–540. PMLR, 2015. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International conference on machine learning, pp. 1613–1622. PMLR, 2015. Edwin V Bonilla, Kian Chai, and Christopher Williams. Multi-task gaussian process prediction. Advances in neural information processing systems, 20, 2007. Jochen Bröcker and Leonard A Smith. Increasing the reliability of reliability diagrams. Weather and forecasting, 22(3):651–661, 2007. Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In International conference on machine learning, pp. 1683–1691. PMLR, 2014. Jian Cheng, Pei-song Wang, Gang Li, Qing-hao Hu, and Han-qing Lu. Recent advances in efficient computation of deep convolutional neural networks. Frontiers of Information Technology & Electronic Engineering, 19:64–77, 2018. Siddhartha Chib and Edward Greenberg. Understanding the metropolis-hastings algorithm. The american statistician, 49(4):327–335, 1995. Emmet Cleary, Alfredo Garbuno-Inigo, Shiwei Lan, Tapio Schneider, and Andrew M. Stuart. Calibrate, emulate, sample. Journal of Computational Physics, 424:109716, jan 2021. doi: 10.1016/j.jcp.2020.109716. URL https://doi.org/10.1016%2Fj.jcp.2020.109716. Paulo Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Wine Quality. UCI Machine Learning Repository, 2009. DOI: https://doi.org/10.24432/C56S3T. S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster. Statistical Science, 28(3):424 - 446, 2013. doi: 10.1214/13-STS421. URL https://doi.org/10.1214/13-STS421. Tiangang Cui, Kody JH Law, and Youssef M Marzouk. Dimension-independent likelihood-informed mcmc. Journal of Computational Physics, 304:109–137, 2016. Carla Currin, Toby Mitchell, Max Morris, and Don Ylvisaker. A bayesian approach to the design and analysis of computer experiments. Technical report, Oak Ridge National Lab., TN (USA), 1988. Giuseppe Da Prato and Jerzy Zabczyk. Stochastic equations in infinite dimensions. Cambridge university press, 2014. Shaveta Dargan, Munish Kumar, Maruthi Rohit Ayyagari, and Gulshan Kumar. A survey of deep learning and its applications: a new paradigm to machine learning. Archives of Computational Methods in Engineering, 27:1071–1092, 2020. Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. Advances in Neural Information Processing Systems, 34:20089–20103, 2021. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050–1059. PMLR, 2016. Jacob Gardner, Geoff Pleiss, Kilian Q Weinberger, David Bindel, and Andrew G Wilson. Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration. Advances in neural information processing systems, 31, 2018. Andrew Gelman, Walter R Gilks, and Gareth O Roberts. Weak convergence and optimal scaling of random walk metropolis algorithms. The annals of applied probability, 7(1):110–120, 1997. Alex Graves. Practical variational inference for neural networks. Advances in neural information processing systems, 24, 2011. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1321–1330. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/guo17a.html. Martin Hairer, Andrew M Stuart, and Jochen Voss. Sampling conditioned diffusions. Trends in stochastic analysis, 353:159–186, 2009. Dave Higdon, Marc Kennedy, James C. Cavendish, John A. Cafeo, and Robert D. Ryne. Combining field data and computer simulations for calibration and prediction. SIAM Journal on Scientific Computing, 26(2): 448–466, 2004. doi: 10.1137/S1064827503426693. URL https://doi.org/10.1137/S1064827503426693. Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5–13, 1993. M. Hoffman and A. Gelman. The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. arxiv.org/abs/1111.4246, 2011. Matthew D. Hoffman, David M. Blei, and Francis R. Bach. Online learning for latent dirichlet allocation. In John D. Lafferty, Christopher K. I. Williams, John Shawe-Taylor, Richard S. Zemel, and Aron Culotta (eds.), NIPS, pp. 856–864. Curran Associates, Inc., 2010. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018. Tommi S Jaakkola and Michael I Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25–37, 2000. Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine learning, 37:183–233, 1999. Laurent Valentin Jospin, Hamid Laga, Farid Boussaid, Wray Buntine, and Mohammed Bennamoun. Handson bayesian neural networks—a tutorial for deep learning users. IEEE Computational Intelligence Magazine, 17(2):29–48, 2022. Marc C. Kennedy and Anthony O'Hagan. Bayesian Calibration of Computer Models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 63(3):425–464, 01 2002. ISSN 1369-7412. doi: 10. 1111/1467-9868.00294. URL https://doi.org/10.1111/1467-9868.00294. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Yongchan Kwon, Joong-Ho Won, Beom Joon Kim, and Myunghee Cho Paik. Uncertainty quantification using bayesian neural networks in classification: Application to ischemic stroke lesion segmentation. In Medical Imaging with Deep Learning, 2022. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017. S. Lan, J. A. Palacios, M. Karcher, V. Minin, and B Shahbaba. An efficient bayesian inference framework for coalescent-based nonparametric phylodynamics. Bioinformatics, 31(20):3282–3289, 2015. Shiwei Lan, Shuyi Li, and Babak Shahbaba. Scaling up bayesian uncertainty quantification for inverse problems using deep neural networks. SIAM/ASA Journal on Uncertainty Quantification, 10(4):1684– 1713, 2022a. doi: 10.1137/21M1439456. Shiwei Lan, Shuyi Li, and Babak Shahbaba. Scaling up bayesian uncertainty quantification for inverse problems using deep neural networks, 2022b. Kody JH Law. Proposals which speed up function-space mcmc. Journal of Computational and Applied Mathematics, 262:127–138, 2014. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Lingge Li, Andrew Holbrook, Babak Shahbaba, and Pierre Baldi. Neural Network Gradient Hamiltonian Monte Carlo. Computational Statistics, 34(1):281–299, 2019a. Lingge Li, Andrew Holbrook, Babak Shahbaba, and Pierre Baldi. Neural network gradient hamiltonian monte carlo. Computational statistics, 34:281–299, 2019b. Faming Liang, Qizhai Li, and Lei Zhou. Bayesian neural networks for selection of drug sensitive genes. Journal of the American Statistical Association, 113(523):955–972, 2018. Haitao Liu, Yew-Soon Ong, Xiaobo Shen, and Jianfei Cai. When gaussian process meets big data: A review of scalable gps. IEEE transactions on neural networks and learning systems, 31(11):4405–4423, 2020. David JC MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4 (3):448–472, 1992. Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. Advances in neural information processing systems, 32, 2019. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087– 1092, 1953. Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012. Radford M Neal et al. Mcmc using hamiltonian dynamics. Handbook of markov chain monte carlo, 2(11): 2, 2011. Jeremy Nixon, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. Jeremy Oakley and Anthony O'Hagan. Bayesian inference for the uncertainty distribution of computer model outputs. Biometrika, 89(4):769–784, 12 2002. ISSN 0006-3444. doi: 10.1093/biomet/89.4.769. URL https://doi.org/10.1093/biomet/89.4.769. Jeremy E. Oakley and Anthony O'Hagan. Probabilistic sensitivity analysis of complex models: A bayesian approach. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 66(3):751–769, 2004. ISSN 13697412, 14679868. URL http://www.jstor.org/stable/3647504. A. O'Hagan. Bayesian analysis of computer code outputs: A tutorial. Reliability Engineering & System Safety, 91(10):1290–1300, 2006. ISSN 0951-8320. doi: https://doi.org/10.1016/j.ress.2005.11.025. URL https://www.sciencedirect.com/science/article/pii/S0951832005002383. The Fourth International Conference on Sensitivity Analysis of Model Output (SAMO 2004). Michael P Perrone and Leon N Cooper. When networks disagree: Ensemble methods for hybrid neural networks. In How We Learn; How We Remember: Toward An Understanding Of Brain And Neural Systems: Selected Papers of Leon N Cooper, pp. 342–358. World Scientific, 1995. Yueqi Ren, Babak Shahbaba, and Craig E Stark. Hierarchical, multiclass prediction of alzheimer's clinical diagnosis using imputed, multimodal nacc data. Alzheimer's & Dementia, 18:e066698, 2022. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pp. 1278–1286. PMLR, 2014. Christian P Robert, George Casella, and George Casella. Monte Carlo statistical methods, volume 2. Springer, 1999. Gareth O Roberts and Jeffrey S Rosenthal. Optimal scaling of discrete approximations to langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255–268, 1998. Roman Schefzik, Thordis L Thorarinsdottir, and Tilmann Gneiting. Uncertainty quantification in complex simulation models using ensemble copula coupling. 2013. Matthias W Seeger, Christopher KI Williams, and Neil D Lawrence. Fast forward selection to speed up sparse gaussian process regression. In International Workshop on Artificial Intelligence and Statistics, pp. 254–261. PMLR, 2003. B. Shahbaba, S. Lan, W.O. Johnson, and R.M. Neal. Split Hamiltonian Monte Carlo. Statistics and Computing, 24(3):339–349, 2014. Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5):828–841, 2019. Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S Emer. Efficient processing of deep neural networks: A tutorial and survey. Proceedings of the IEEE, 105(12):2295–2329, 2017. Thordis L Thorarinsdottir and Tilmann Gneiting. Probabilistic forecasts of wind speed: Ensemble model output statistics by using heteroscedastic censored regression. Journal of the Royal Statistical Society Series A: Statistics in Society, 173(2):371–388, 2010. M. Welling and Y.W. Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML), pp. 681–688, 2011a. Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681–688, 2011b. Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B Sojitra, Tomas Folke, and Patrick Shafto. Mitigating belief projection in explainable artificial intelligence via bayesian teaching. Scientific reports, 11(1):9863, 2021. C. Zhang, B. Shahbaba, and H. Zhao. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases. Statistics and Computing, 27(6), 2017a. ISSN 15731375. doi: 10.1007/s11222-016-9699-1. C. Zhang, B. Shahbaba, and H. Zhao. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases. Statistics and Computing, 27(6), 2017b. ISSN 15731375. doi: 10.1007/s11222-016-9699-1. C. Zhang, B. Shahbaba, and H. Zhao. Variational hamiltonian Monte Carlo via score matching. Bayesian Analysis, 13(2), 2018. ISSN 19316690. doi: 10.1214/17-BA1060. Cheng Zhang, Babak Shahbaba, and Hongkai Zhao. Hamiltonian monte carlo acceleration using surrogate functions with random bases. Statistics and computing, 27:1473–1490, 2017c.
Review 1: Summary: The authors propose a two stage approach to drawing samples from Bayesian neural network posteriors. First, they draw samples from the posterior using SGHMC, then they use it to warm-start a second stage process (called FBNN) which they use preconditioned crank Nicholson (a dimension free method that uses an "emulator") to quickly draw more samples. Some experiments on small scale tabular data (e.g. boston housing and adult) are performed, with a few coverage metrics. Strengths and Weaknesses: strengths: - the approach of using an emulator (or even another neural network) to draw many posterior samples efficiently from a large BNN seems quite promising see requested changes for a much larger list, but as presented this is a nice study and not a strong paper weaknesses: - for a paper on "scaling up" bayesian neural networks, the problems considered are quite small. as currently presented, the experiments are vastly too small scale to be presented (and there's alternatively no theory instead) - writing wise, it's not very clear to me _why_ the bayesian neural network should be calibrated - it's not clear to me that the superiority of the approach over sghmc is ever demonstrated as naive sghmc seems just simply cheaper and if one needs a closed form distribution, one could just form a gaussian over the sghmc samples (which is a straightforward version of [swag](https://arxiv.org/abs/1902.02476) Requested Changes: proposed experiment: An interesting experiment one could do with this approach would be quick adaptation to different tasks of a single base (e.g. foundation) model. Then we could use a much smaller NN to generate weights for the pre-trained large one for quick fine-tuning. More concretely, given a large resnet trained on imagenet - we could train smaller NNs as emulators using pCN on many different image tasks (e.g. waterbirds, cifar, faces, etc.). all we'd need would be some fine-tuning samples using sghmc on imagenet. section 3: - "the point estimate approach...": it's a) not clear to me why having uncertainty estimates gives explainability, or b) really even that a Bayesian approach should be better calibrated. There are some decent reasons for these (we could manipulate features and see how the predictions change) or that if the model is well-specified, the true Bayesian posterior accurately represents our beliefs about uncertainty. However, these are not pointed out... - "This is of course more naturally ...": This isn't necessarily true. Not all stochastic neural networks are Bayesian. Please rephrase it. Furthermore, not all neural networks that represent uncertainty are Bayesian (or even stochastic). - "Because we have big amounts of data, likelihood evaluation tends to be expensive." Ignoring phrasing, SGHMC does naturally get around the computational cost of likelihood evaluation by using minibatching. - Figure 1: what is the x axis on these? as far as I remember, these are all multi-dimensional data. - Figure 1: what would a ground truth 95% CI look like? for example, from a HMC drawn bayesian neural network with a ton of samples or from a gaussian process (infinite neural network)? - section 4: - "We evaluate these methods [in terms of mean square error and accuracy]": MSE and accuracy don't measure calibration and uncertainty quantification. Given you start from a trained NN, you shouldn't harm MSE or accuracy. - "It is worth mentioning that..." cut this - "2000 posterior samples from BNN-SGHMC" what's the sampling rate here? e.g. when do you collect samples - Tables 1/ 2: [important] what percentiles is the coverage probability at? If at 95% all models are severely over-confident, which is surprising and suggests some sort of error in measurement, fitting, or sampling. - Tables 1/2: [important] All methods other than DNN and BNN-SGHMC _require_ SGHMC to be run beforehand, so should they not include the BNN-SGHMC timings as well? - Tables 1/2: I would personally make a bar chart of effective sample size. missing baselines that are straightforward here: - ensembles of several dnns with different random seeds (what's typically called deep ensembles). It will have 39s * N ensembles fitting time for your ESS calculation but may have higher coverage probability - a second DNN fit on the noise estimate of the first DNN (e.g. N(net1(x), exp{net2(x)}) ). You can then get posterior samples from drawing samples from this distribution. It will have 39s * 2 fitting time and cheap ESS most likely. This gets directly at the claim that the "standard DNN does not quantify uncertainty". - SWAG (or more broadly a Gaussian fit around the SGHMC samples). SGHMC fitting time, but also cheap samples and ESS Section 5: "pCN is adept at sampling around modes" - show a picture of this e.g. fig 2 of https://arxiv.org/pdf/1902.03932.pdf Broader Impact Concerns: n/a ================================================== Review 2: Summary: The paper describes a method to speedup Hamiltonian Monte Carlo (HMC) sampling for Bayesian Neural Networks (BNNs). The key idea is to: 1. Run HMC for a few iterations. 2. Train a standard NN to simulate posterior inference. 3. Use this NN to speedup the sampling step of HMC. They evaluate the proposal on a small set of benchmarks (a simulated dataset, Wine, Boston for classification, Adult for regression). Strengths and Weaknesses: There is no novelty here. For example, the authors cite (Lan et al. (2022a)), which is more or less identical to this paper except it uses a convolutional neural network. The authors state: "We use DNN for the emulation component of our CES scheme", but using a "DNN" (fully-connected layers) instead of a CNN is a very minor change. There are dozens of papers on speeding up HMC with NNs which are not cited here, e.g., [1-3]. The experimental evaluation is only done on toy datasets (these are literally the toy datasets of scikit-learn). They are so small that the stochastic HMC is probably not even needed. They are also comparing only to a baseline HMC and to a standard NN ignoring, e.g., any kind of accelerated HMC or variational inference, Monte Carlo dropout, etc. In terms of language, the paper is a draft that requires a complete rewriting, e.g.: 1. "Bayesian Neural Network (BNN) offers a more principled [...] framework". More principled than what alternative? 2. Many typos such as "fulctions", "fallows". 3. Unclear organization (Section 2.3 is mostly empty, Algorithm 1 is not explained in the text, then the same algorithm is explained again in Section 3.3 "Sampling – Preconditioned Crank-Nicolson (pCN)"). 4. "In the case of FBNN, we first train a BNN": what does this sentence mean? They are not "training a BNN" but sampling from the posterior using HMC. 5. Some sentences are just copy-pasted from (Lan et al.) without explanation, for example "In our numerical examples, activation fulctions for DNN are chosen such that the errors of emulating functions (and their extracted gradients) are minimized." What does this mean? Are you performing some kind of grid search on the set of activation functions? The same is true for math: 1. "Note, we can define the log-likelihood based on the loss function L(θ;X,Y )." But just above in Eq. (6) the loss was a function of the prediction error. 2. "we could approximate the potential function": this "potential function" is never defined (and like (3) above, this is mostly copied from another paper). [1] https://arxiv.org/pdf/1506.05555.pdf [2] http://maeresearch.ucsd.edu/tartakovsky/Papers/zhou-2021-markov.pdf [3] https://proceedings.mlr.press/v216/li23c/li23c.pdf Requested Changes: Novelty, formatting, clarity, organization, and experimental evaluation are all sub-standard and not enough for publication. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This work targets to speed up Markov Chain Monte Carlo (MCMC) applied to Bayesian neural networks (BNNs). More specifically, the authors propose a Calibration-Emulation-Sampling (CES) strategy, which is comprised of three steps. In the first step, the authors employ a standard off-the-shelf stochastic gradient MCMC method, in order to collect posterior samples from the parameter space of the BNN. These samples are then used in order to train an auxiliary (and deterministic) neural network model that emulates the forward pass of the stochastic BNN. Finally, the auxiliary neural network can used in place of the original neural network in a more involved MCMC algorithm, where the authors use the pCN algorithm. The authors show experimentally that their CES approach can lead to speedups compared to standard (stochastic gradient) MCMC. Strengths and Weaknesses: Strengths - Interesting idea - The authors show speedups in practice Weaknesses - General lack of clarity in arguments, overall method design and experiments. - Missing discussions and comparisons against scalable methods for BNNs (e.g., variational inference) - Experiments are toy and not convincing Requested Changes: I am on the negative side for this work; while the idea itself is interesting, the clarity of the manuscript needs quite some work, some of the claims need more discussion and the experimental settings the authors explore are not enough to provide reasonable insights and a convincing story. Therefore, my main requests / points for discussion are the following - Better explanation of how the speedups generally arise and how they are obtained in the experiments. Is it due to having an auxiliary neural network that is smaller than the original network used for the likelihood? If yes, how much smaller and how do the authors account for the discrepancy in the prior $p(\theta)$ between the original likelihood and the one for the auxiliary model? - It is unclear to me how the emulation step is performed; if I look at equation 7, the input to the emulator are the parameters $\theta$ (sampled by MCMC at the previous step), whereas at equation 10 it seems that $X$ is used as well. How is the emulator precisely defined, how is it precisely trained and how is it used downstream at the sampling step? - The authors make several claims that if the emulator is a good representation of the forward mapping then the bias from their approximation would be small. However, the authors provide neither guarantees about the discrepancy of the emulator nor empirical evidence that the emulator does a good job in approximating the original neural network. Therefore, arguments about CES being a robust solution are not convincing. - The authors argue that the problem of MCMC in BNNs is scalability and the proceed to motivate their CES method. However, the authors do not discuss or compare against methods for BNNs that inherently more scalable than MCMC. For example, the authors do not discuss variational inference and Laplace approximations where, especially the latter, can be a reasonable approximation for the mode of a distribution (which the authors claim is particularly relevant in high-dimensional spaces inherent to FBNN). - The authors argue that a deep neural network (DNN) is a good choice for the emulation step as it has good robustness when dealing with variations in the training data. The authors do not provide any references about this claim and do not perform any experiments that demonstrate that this is the case. DNNs are in general not inherently robust, if one looks at how the performance of DNNs change under, e.g., adversarial attacks and distribution shifts. - What is precisely plotted at Figure 1a and Figure 1b, especially at the x-axis? The Wine and Boston Housing datasets do not have 1-dimensional inputs; Wine has 11 input dimensions and the Boston Housing dataset has 14 input dimensions. - The experimental results are not very convincing (besides the fact that the datasets themselves are toy and not very interesting). In the regression case, the results are quite mixed in the case of MSE whereas for CP the FBNN is underperforming across the board. In the classification case, the authors seem to focus solely on accuracy and do not report any metrics around uncertainty (which is the primary reason for using a BNN model in the first place). Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Reject Comment: The paper proposes a new method to speed up sampling for Bayesian neural networks (BNN) with the calibration emulation sapling (CES). Most of the reviewers found that the idea is interesting, while raised several major concerns, including weak experiments, novelty issues, clarity issues, and missing baselines. The authors' rebuttal partially addressed the concerns, e.g., novelty. However, some critical issues still remain. Those include the small-scale experiments, sloppy presentation, and still unclear points. ==================================================
# Smile: Sample-To-Feature **Mixup For Efficient Transfer** Learning Xingjian Li∗lixingjian@baidu.com Big Data Lab, Baidu Inc. State Key Lab of IOTSC, University of Macau Haoyi Xiong∗*xionghaoyi@baidu.com* Big Data Lab, Baidu Inc. Dejing Dou doudejing@baidu.com Big Data Lab, Baidu Inc. Chengzhong Xu czxu@um.edu.mo State Key Lab of IOTSC, University of Macau Reviewed on OpenReview: *https: // openreview. net/ forum? id= czgMCpvrDM* ## Abstract To improve the performance of deep learning, mixup has been proposed to force the neural networks favoring simple linear behaviors in-between training samples. Performing mixup for transfer learning with pre-trained models however is not that simple, a high capacity pre-trained model with a large fully-connected (FC) layer could easily overfit to the target dataset even with samples-to-labels mixed up. In this work, we propose **SMILE**— Sampleto-feature MIxup for Efficient Transfer LEarning. With mixed images as inputs, **SMILE** regularizes the outputs of CNN feature extractors to learn from the mixed feature vectors of inputs, in addition to the mixed labels. **SMILE** incorporates a mean teacher to provide the surrogate "ground truth" for mixed feature vectors. The sample-to-feature mixup regularizer is imposed both on deep features for the target domain and classifier outputs for the source domain, bounding the linearity in-between samples for target tasks. Extensive experiments have been done to verify the performance improvement made by **SMILE**, in comparisons with a wide spectrum of transfer learning algorithms, including fine-tuning, L2-SP, DELTA, BSS, RIFLE, Co-Tuning and RegSL, even with mixup strategies combined. Ablation studies show that the vanilla sample-to-label mixup strategies could marginally increase the linearity in-between training samples but lack of generalizability, while **SMILE** significantly improves the mixup effects in both label and feature spaces with both training and testing datasets. The empirical observations backup our design intuition and purposes. Our code is available at https://github.com/lixingjian/SMILE. ## 1 Introduction Performance of deep learning algorithms in real-world applications is often limited by the size of training datasets. Training a deep neural network (DNN) model with a small number of training samples usually leads to the over-fitting issue with poor generalization performance. A common yet effective solution is to train DNN models under transfer learning (Pan et al., 2010) settings using large source datasets. The knowledge transfer ∗Equal Contribution. Correspondence to Haoyi Xiong. ![1_image_0.png](1_image_0.png) Figure 1: Performance comparison between the L 2regularization and mixup for fine-tuning an ImageNet pre-trained model (left) and training from scratch (right). To simulate scenarios with limited target datasets, we randomly select 50 classes from two transfer learning benchmarks, which are CUB-200-2011 and StanfordCars. As seen, Mixup brings remarkable improvements if training from scratch (right), but this does not apply to transfer learning (left). from the source domain1 helps DNNs learn better features and acquire higher generalization performance for the pattern recognition in the target domain (Donahue et al., 2014; Yim et al., 2017). In addition to deep transfer learning, another effective strategy for improve generalization performance of DNN is mixup (Zhang et al., 2018), where the objective is to have DNNs in the learning procedure favor the linear behaviors in-between training samples. To achieve the goal, the mixup strategy picks up multiple images from the training set, mixes the samples and labels proportionally to generate a new pair of sample and label for data augmentation. The regularization effects brought by mixup could help control the complexity of DNN models (Hanin & Rolnick, 2019; Vapnik, 2013) while largely improving the robustness and generalization performance (Zhang et al., 2020). Research Motivation While existing studies about mixup mainly focus on the general setting, to the best of our knowledge, using mixup in transfer learning has rarely been investigated. A straightforward conjecture is that, mixup should be also effective in transfer learning, where typical applications have only limited training examples. In such situations, regularizers aiming to control the model complexity should be beneficial for better generalization. However, the facts are surprisingly just the opposite. We find that in deep transfer learning, mixup improves the performance with reduced margins or even downgrades the performance when the target dataset is small. See Figure 1 for detailed results. Considering that either transfer learning and mixup is widely proven beneficial, a natural question is raised as follow, - *Why mixup tends to negatively affect transfer learning when training samples are limited?* Our Analyses Two reasons caused the ineffectiveness of mixup in transfer learning. First of all, we find fine-tuning with high capacity pre-trained models CAN overfit to the mixup samples/labels. From mixup, we simply derive a linear interpolation loss to measure the error of linear interpolation between a pair of samples (x1, y1) and (x2, y2) for the model f(·), $$\|f(\lambda x_{1}+(1-\lambda)x_{2})-(\lambda f(x_{1})+(1-\lambda)f(x_{2}))\|_{2}^{2}\ ,$$ , (1) where a lower linear interpolation loss indicates stronger linear behaviors in-between the samples and usually better generalization performance (Zhang et al., 2020). Our experiments however find that fine-tuning with 1The term *domain* indicates the concept of features or knowledge learned from a task. Please note that, this paper focuses on transferring a pre-trained model on a downstream dataset (labels are available), rather that domain adaptation. $$(1)^{\frac{1}{2}}$$ mixup could obtain a low interpolation loss in the training set while suffering a high interpolation loss in the test set (≥25% higher interpolation loss on the testing set than the one on training set, please see also in Section 5). This observation indicates that the *linear behaviors* gained by mixup regularization could not well generalize to the testing dataset and overfit to the mixup samples/labels from the training dataset. The second problem is particularly linked to a major challenge in transfer learning called catastrophic forgetting (Li et al., 2018; You et al., 2020). The additional interpolated images generated by mix-up drive the fine-tuned model farther from the starting point, which aggravates the loss of transferable knowledge in the pre-trained model. Thus, our research intends to study a way to *make mixup strategies generalizable in deep transfer learning* settings while *alleviating the knowledge loss during fine-tuning.* To achieve the above goal, some non-trivial technical challenges should be tackled. - *Sample-to-Feature Mixup*. A high-capacity pre-trained model, offering a large quantity of well-trained features, would force a Fully-Connected (FC) Layer to memorize samples and labels mixed-up with trivial updates to weights of its CNN feature extractor. Though some randomized strategies, such as RIFLE (Li et al., 2020), could deepen back-propagation in vanilla transfer learning settings, it is still challenging to reinforce the mixup effects in the CNN feature extractor. - *Mixed-up Feature Vectors*. To ensure mixup effects in outputs of CNN feature extractors, a possible way is to let CNN feature extractors learn from the mixed-up samples and feature vectors, while the ground-truth feature vectors are usually not available. Thus, surrogate labels of the feature vectors need to be obtained for any sample in the target dataset before having the CNN trained. - *Cross-Domain Generalizability*. A pre-trained DNN usually is capable of behaving linearly under interpolation of the source dataset. During the fine-tuning procedure, it is reasonable to doubt that such linear behaviors in source domain might be forgotten (Chen et al., 2019). To improve the generalization performance, there thus needs to preserve the linear behaviors in the source domain and transfer such ability to the target domain during fine-tuning. Our Work To address above technical challenges, in this work, we propose **SMILE**—Sample-to-feature Mixup strategies for Efficient Transfer Learning. Instead of regularizing mixup effects in target label spaces (i.e., *sample-to-label* mixup), **SMILE** enables the *sample-to-feature* mixup through regularizing the CNN feature extractor to learn from the surrogate of "fine-tuned" feature vectors of mixed-up images even when the CNN has not yet been well-tuned on the target domain. To the best of our knowledge, this work has made three sets of contributions as follows. 1. We study the problem of regularizing DNNs to enjoy mixup effects under deep transfer learning settings, where the major concern is to avoid the overfitting to mixed-up samples and labels, using a high-capacity pre-trained model but with a small target training dataset. We elaborate the technical issues, and propose to solve the problem through enabling sample-to-feature mixup, where obtaining the feature vectors for mixup and ensuring cross-domain generalizability of linear behaviors become the key challenges. 2. We propose **SMILE**—Sample-to-feature mixup for efficient transfer learning, where sample-to-feature mixup, through either the target deep feature and the source label space, has been used as the core framework of the solution. Given two samples drawn from the target domain as the input, **SMILE** linearly combines two samples proportionally and sends the mixed-up sample to the target model. It constrains the Euclidean distance between the output of target model's CNN feature extractor and a mixed-up feature vector (i.e., linear combination of a mean teacher model's outputs for the two samples) via a *sample-to-feature mixup*. Moreover, to obtain cross-domain generalizability, **SMILE** trains an additional FC classifier for the target network to adapt the target dataset but in the source domain. It regularizes the target network using *sample-to-label* mixup to learn from the linear combination of classification results on source domain, whose labels are also features for the target domain. **SMILE** also optimizes the target model via vanilla sample-to-label mixup using the mixed-up label (i.e., linear combination of ground-truth labels). ![3_image_0.png](3_image_0.png) Figure 2: The Architecture of **SMILE**: Deep Transfer Learning with Sample-to-feature Mixup Regularization. To fully exploit the capacity of the source model, we incorporate two components L FE and L FC, beyond the vanilla mixup L MXP. L FE focuses on the deep features gt and L FC is imposed on the source labels through an auxiliary classifier z ′ t . We feed original images in the source model, and mixed images in the target model during fine-tuning. Mean teacher is performed on the source model to provide more accurate pseudo ground truth for mixed features gt and z ′ t . After fine-tuning, we use only the feature extractor and classifier of the target model for prediction. 3. We carry out extensive experiments using a wide range of source and target datasets, and compare the results of **SMILE** with a number of baseline algorithms, including fine-tuning with weight decay (L 2) (Donahue et al., 2014), fine-tuning with L 2-regularization on the starting point (L 2-SP) (Li et al., 2018), DELTA (Li et al., 2019), Batch Singular Shrinkage (BSS) (Chen et al., 2019), RIFLE (Li et al., 2020), Co-Tuning (You et al., 2020) and RegSL (Li & Zhang, 2021) with/without mixup strategies. The experiment results showed that **SMILE** can outperform all these algorithms with significant improvement. The ablation studies show that (1) sample-to-feature mixup design is significantly better than vanilla sample-to-label mixup for deep transfer learning; and (2) the proposed sample-tolabel mixup on the source domain can further improve the generalization performance. ## 2 Related Work In this section, we first introduce the related works from deep transfer learning's perspectives, then we discuss the most relevant work to our study. ## 2.1 Deep Transfer Learning To enable transfer learning for DNNs, fine-tuning (Donahue et al., 2014) has been proposed to first train a DNN model using the large (and possibly irrelevant) source dataset (e.g. ImageNet), then uses the weights of the pre-trained model as the starting point of optimization and fine-tunes the model using the target dataset. In this way, by sharing the rich and diverse knowledge contained in large source datasets, the fine-tuned model is usually capable of handling the target task with better generalization performance. Furthermore, authors in (Yim et al., 2017; Li et al., 2018; 2019) propose transfer learning algorithms that regularize the training procedure using the pre-trained models, so as to constrain the divergence of the weights and feature maps between the pre-trained and fine-tuned DNN models. Later, the work (Chen et al., 2019; Wan et al., 2019) introduce new algorithms that prevent the regularization from hurting the adaptation to the target domain in transfer learning, where (Chen et al., 2019) propose to truncate the tail spectrum of the batch of gradients while (Wan et al., 2019) propose to truncate the ill-posed direction of the aggregated gradients. In addition to the aforementioned strategies, algorithms based on the multi-task learning paradigm have been used for deep transfer learning, such as (Ge & Yu, 2017; Cui et al., 2018). While all above algorithms enable knowledge transfer from source datasets to target tasks, they unfortunately perform poorly due to the catastrophic forgetting and negative transfer. Most transfer learning algorithms (Donahue et al., 2014; Yim et al., 2017; Li et al., 2018; 2019) consist of two steps - pre-training and fine-tuning. Given the features that have been learned in the pre-trained models, either forgetting some good features during the fine-tuning process (*catastrophic forgetting*) (Chen et al., 2019) or preserving the inappropriate features/filters to reject the knowledge from the target domain (*negative transfer*) (Li et al., 2019; Wan et al., 2019) would hurt the performance of transfer learning. In this way, proper compromises should be made between the features learned from both source/target domains during the fine-tuning process, where multi-task learning with Seq-Train (Cui et al., 2018) and Co-Train (Ge & Yu, 2017) might suggest feasible solutions to well-balance the knowledge learned from the source/target domains, through fine-tuning the model with a selected set of auxiliary samples (rather than the whole source dataset) (Cui et al., 2018) or alternatively learning the features from both domains during fine-tuning (Ge & Yu, 2017). A recent study in medical imaging (Wang et al., 2022) employs the Co-Train fashion with auxiliary attributes during both the pre-training and fine-tuning step. Some other studies (Zhong et al., 2020; Wang et al., 2021; Abuduweili et al., 2021) intend to improve the generalization of the target model by further exploiting the target data. For example, Bi-Tuning (Zhong et al., 2020) incorporates self-supervised learning on top of the standard supervised fine-tuning. Self-tuning (Wang et al., 2021) and Adaptive Consistency Regularization (Abuduweili et al., 2021) consider a more practical scenario that a set of unlabeled target data is available. They find that fine-tuning can be substantially promoted by reasonably utilizing the unlabeled data. ## 2.2 Connections To Our Work The most relevant studies to our algorithm are (Verma et al., 2019; Yun et al., 2019; Li et al., 2019; Chen et al., 2019; Li et al., 2020; You et al., 2020; Wang et al., 2022). While the first two works (Verma et al., 2019; Yun et al., 2019) propose to improve mixup and its derivatives for data augmentation through interpolating the feature spaces, the following three works (Li et al., 2019; Chen et al., 2019; Li et al., 2020; You et al., 2020) focus on improving deep transfer learning through regularizing the feature or label spaces. Authors in (Wang et al., 2022) impose a proximal regularizer to constrain the parameter distance from the pre-trained model, and an auxiliary task that predicts a set of pre-defined attributes such as age, race and so on. In comparison, our method serves as a general algorithm that is free of additional domain knowledge. The manifold mixup strategy (Verma et al., 2019) has been proposed to smooth the decision boundary of DNN classifiers using mixed-up feature maps and labels, in a *feature-to-label* mixup fashion. On the other hand, CutMix (Yun et al., 2019) also propose a *sample-to-feature* data augmentation strategy, where the algorithm fuses two images into one and forms a new feature map accordingly, with respect to the localizable visual features in two images. Compared to above works, the major technical difficulty of **SMILE** is that above algorithms use feature maps extracted from CNN models directly, while **SMILE** regularizes the output of CNN feature extractor when accurate estimates of feature vectors are not available (the CNN is under fine-tuning to adapt the target dataset). While (Li et al., 2019; Chen et al., 2019) proposed to improve the feature-wise knowledge distillation or spectral regularization for transfer learning, (Li et al., 2020) studies way to regularize the pre-trained CNN feature extractor, during fine-tuning, by incorporating randomness from FC layers. Compared to above algorithms, **SMILE** is proposed to solve the problem of overfitting to mixup under deep transfer learning settings. Our ablation studies in Section 4 show that the simple combination of fine-tuning and mixup strategies does not perform well in transfer learning settings from both the preservation of linear behaviors and generalization performance aspects. **SMILE** makes unique contributions in proposing novel sample-to-feature mixup strategies to improve performance in transfer learning with pre-trained models. ## 3 Smile**: Sample-To-Feature Mixup For Efficient Transfer Learning** In this section, we first introduce the overall framework of **SMILE**, where the architectures of deep transfer learning with Sample-to-feature mixup regularization is presented. Then, we specify the design of the proposed regularizer and discuss the mixup effects incorporated by **SMILE**. ## 3.1 Overall Framework Given a target training dataset D = {(x1, y1),(x2, y2)... (xn, yn)} and an initial model ωs pre-trained with the source dataset, **SMILE** learns a model ω to adapt the target dataset by a fine-tuning procedure. Since the source model is involved during fine-tuning, we use subscripts t and s to distinguish between the target and source model, superscripts to denote the training iteration. Specifically, **SMILE** initializes the target model ω 0 t with the pre-trained source model ωs. For each training iteration, **SMILE** updates the target model ωt through minimizing a loss function as follow, $$\min_{\omega}\ \left\{{\cal L}(\omega,\omega_{s})=\frac{1}{n}\sum_{i=1}^{n}L^{\rm MXP}(\omega,x_{i},y_{i})+L^{\rm Reg}(\omega,\omega_{s},x_{i})\right\}\,\tag{2}$$ where L MXP(ω, xi, yi) refers to the vanilla sample-to-label mixup loss based on target model on the target domain D , and L Reg(ω, ωs, xi) refers to the loss for Sample-to-feature mixup regularization based on both the source and target model. Note that the computation of L Reg adopts only the target dataset D as the input and does not rely on labels. The regularizer L Reg contains two components, L FE and L FC, where FE refers to feature extractor and FC refers to fully-connected classifier. For better exploiting the capacity of the source model, L FE and L FC leverage the fine-tuned target deep features and the auxiliary source labels, respectively. We present detailed implementations in Section 3.3. Figure 2 presents the architecture of **SMILE**. ## 3.2 **Vanilla Mixup** We first introduce the formulation of vanilla mixup (Zhang et al., 2017). Given a deep neural network f, we denote its classifier output as z and deep feature as g. Since both z and g depend on the input data and their corresponding parameters, they are formulated as functions for simpleness. Vanilla mixup aims to minimize the linear interpolation loss in the target label space, which can be formulated as $${\cal L}^{\mathrm{MXP}}(\omega)=\operatorname*{\mathbb{E}}_{\lambda\sim\mathrm{Beta}(\alpha,\alpha)}\,\operatorname*{\mathbb{E}}_{x_{i},x_{j}\sim\mathrm{D}}\|z_{t}\left(\mathrm{Mix}_{\lambda}(x_{i},x_{j});\omega\right)-\mathrm{Mix}_{\lambda}(y_{i},y_{j})\|_{2}^{2}\;,$$ where the operator Mixλ(*u, v*) = (1 − λ) · u + λ · v refers to the linear combination of two inputs. We follow the vanilla mixup (Zhang et al., 2018) to sample the linear combination coefficient λ from a symmetric Beta distribution λ ∼ Beta(*α, α*). The Beta distribution is usually used to sample a random proportion, e.g. λ in mixup, since it's the conjugate prior for the Bernoulli distribution. A larger α tends to sample a balanced mixture, i.e. λ is more likely near 0.5, and a smaller α leads to λ near 0 or 1. ## 3.3 Deep Transfer Learning With Regularization Omitting the notation of input data xi, we present the Sample-to-feature mixup regularizer as follow, $$L^{\mathrm{{Reg}}}(\omega,\omega_{s})=\gamma_{\mathrm{{FE}}}\cdot L^{\mathrm{{FE}}}(\omega,\omega_{s})+\gamma_{\mathrm{{FC}}}\cdot L^{\mathrm{{FC}}}(\omega,\omega_{s}),$$ FC(*ω, ω*s), (4) where γFE and γFC refer to the weight of the two terms, the term L FE(*ω, ω*s) refers to the sample-to-feature mixup regularizer which borrows general knowledge from the source domain over the target dataset D, and the term L FC(*ω, ω*s) refers to the sample-to-label mixup regularizer on the label space of the source domain (e.g., 1000 classes when the model was pre-trained using ImageNet). $$\mathbf{\Phi}^{\dagger}$$ $$\left({\boldsymbol{4}}\right)$$ $${}^{\circ}(\omega,\omega_{s}),$$ Specifically, given a deep neural network f, we denote its classifier output as z and deep feature as g. Since both z and g depend on the input data and their corresponding parameters, they are formulated as functions for simpleness. Thus, the sample-to-feature mixup regularizer based on target and source models is defined as L FE(*ω, ω*s) = E λ∼Beta(α,α) E xi,xj∼D ||gt (Mixλ(xi, xj ); ω) −Mixλ(gs(xi; ωs), gs(xj ; ωs))||22 , (5) where g(xi; ω) refers to the CNN feature extractor output based on weight ω and the sample xi. This term encourages DNN to learn linear behaviors from samples to hidden features. Further, the sample-to-label mixup regularizer based on target and source models on the label space of source domain is defined as $\left(5\right)$. FC(*ω, ω*s) = E $$\langle s\rangle=\operatorname*{\mathbb{E}}_{\lambda\sim\mathrm{Beta}(\alpha,\alpha)}\;x_{i},x_{j}\sim\mathrm{D}\vert\vert z_{i}^{\prime}\left(\mathrm{Mix}_{\lambda}(x_{i},x_{j});\omega\right)-\mathrm{Mix}_{\lambda}(z_{\mathrm{s}}(x_{i};\omega_{s}),z_{\mathrm{s}}(x_{j};\omega_{s}))\vert\vert_{2}^{2}\;,$$ $$L^{\mathrm{FC}}(\omega,\omega_{s})=$$ $$({\mathfrak{f}})$$ , (6) where z ′ t (xi; ω) refers to the output of an auxiliary classifier (Fully-Connected) of the target model with xi on ω and zs(xi; ωs) refers to the classifier output of the source model with xi on ωs. Both classifiers z ′ t and zs are in the source domain (e.g., with softmax outputs in 1,000 dimensions when the model is pre-trained using ImageNet). More specifically, the FC layer in z ′ t is also initialized with the weights of the FC layer in the pre-trained source model ωs. The vanilla sample-to-label mixup regularizer L MXP is derived from the standard implementation of mixup strategy (Zhang et al., 2018) based on target model using the target dataset D. Algorithm 1 presents the design of the overall training procedure of **SMILE**. ## 3.4 Incorporating With A Mean Teacher Note that an advantage of our method is the compatibility with *mean teacher*, which is widely used in semi-supervised learning problems (Tarvainen & Valpola, 2017) to generate pesudo labels. So far as we know, it has hardly been utilized in deep transfer learning to promote *pesduo features*. Specifically, we periodically update the source model ωs by the fine-tuned model in a moving average manner. Algorithm 1: Deep Transfer Learning with **SMILE** Input : D: target training data, ωs: pre-trained source model, η: learning rate, K: training iterations, α: hyperparameter for Beta distribution ; Output : ω K t : final learned target model after K iterations; 1 **begin** 2 ω 0 t ← ωs 3 for k ← 1 to K do 4 /*Data Sampling and Mixing*/ 5 B ← **mini-batch sampling from** D 6 λ ∼ Beta(*α, α*) 7 Bm ← λ · B + (1 − λ) · Shuffle(B) 8 /*Calculate Vanilla Mixup and Sample-Feature Mixup Loss*/ 9 Calculate L MXP based on zt(Bm, ωk−1 t) 10 Calculate L FE based on gs(Bm, ωs) and gt(B, ωk−1 t) 11 Calculate L FC based on zs(Bm, ωs) and z ′ t (B, ωk−1 t) 12 /*Updating the Target Model with SGD*/ 13 L(ω k−1 t, ωs) = γFE · L FE + γFC · L FC + L MXP 14 g k *← ∇L*(ω k−1 t, ωs) 15 ω k t ← ω k−1 t − η · g k 16 **return** ω K t ## 4 Experiments We evaluate our method on a wide range of tasks, covering different kinds of datasets, pre-trained models, data scales and model architectures. Through exhaustive experiments, **SMILE** is compared against multiple state-of-the-art fine-tuning algorithms including L 2(Donahue et al., 2014), L 2-SP (Li et al., 2018), DELTA (Li et al., 2019), BSS (Chen et al., 2019), RIFLE (Li et al., 2020), Co-Tuning (You et al., 2020) and RegSL (Li & Zhang, 2021). To achieve a comprehensive evaluation, we also compare our method with relevant dataaugmentation strategies including Mixup (Zhang et al., 2018), Manifold Mixup (Verma et al., 2019) and CutMix (Yun et al., 2019). | | Table 1: Characteristics of the target tasks. | | | | | |----------------|-------------------------------------------------|-------------|-----------------|------------|-----------| | target dataset | task category | source task | architecture | # training | # classes | | CUB-200-2011 | Object Recognition | ImageNet | ResNet-50 | 5,994 | 200 | | Stanford-Cars | Object Recognition | ImageNet | ResNet-50 | 8,144 | 196 | | FGVC-Aircraft | Object Recognition | ImageNet | ResNet-50 | 6,677 | 100 | | MIT-Indoor-67 | Scene Classification | Places365 | ResNet-50 | 5,356 | 76 | | Food-101 | Object Recognition | ImageNet | EfficientNet-B4 | 75,000 | 101 | ## 4.1 Image Classification We first present the experiment results based on image classification tasks using a wide range of transfer learning algorithms and datasets. ## 4.1.1 Datasets And Models We conduct experiments on three popular object recognition datasets: CUB-200-2011 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC-Aircraft (Maji et al., 2013), which are intensively used in state-of-the-art transfer learning literatures (Chen et al., 2019; Li et al., 2020; You et al., 2020). Each of these datasets contains about 6k - 8k training samples. We use ImageNet (Deng et al., 2009) pre-trained ResNet-50 (He et al., 2016) as the source model. For each dataset, we create four subsets with different number of categories and training examples, divided into two experimental groups. For the first group, we first randomly select 25% of all the categories from each of these standard datasets. Then we randomly sample 400 and 800 training samples from the selected categories. For the second group, we use all categories, while evaluate with 15% or 100% training samples respectively, following the practice in existing baselines BSS (Chen et al., 2019) and Co-tuning (You et al., 2020). Among these, the last setting uses the entire dataset, which reflects the performance when adaptation is relatively sufficient, while the remaining three rely more on knowledge transfer from the pre-trained model. The first group simulates real world datasets with relatively fewer categories and more instances pre category, while the second group (15% data) simulates the opposite. To further confirm the performance improvement by **SMILE** is independent with the choice of pre-trained datasets and model architectures, we conduct additional experiments comparing our method with competitive baselines. Specifically, we use the Places365 (Zhou et al., 2017) pre-trained ResNet-50 to perform fine-tuning on MIT-Indoors-67 (Quattoni & Torralba, 2009), which is a scene classification task. We also evaluate our method on a more powerful model EfficientNet-B4 (Tan & Le, 2019) designed by NAS over a large scale dataset Food-101 (Bossard et al., 2014). The descriptions about the benchmarks used in image classification tasks are summarized in Table 1. ## 4.1.2 Training Details We apply standard data augmentation strategies for image pre-processing composed of resizing to 256 × 256, random flipping and random cropping to 224 × 224 during training. For inference, the test image is resized to 256 × 256 and then center cropped to 224 × 224. We do not use post-processing methods such as ten-crop | to using Y examples from X selected categories. Dataset Method | Dataset | | | | | |------------------------------------------------------------------|----------------------------|-------------|--------------|------------|------------| | C:25%/N:400 | C:25%/N:800 | C:All/N:15% | C:All/N:100% | | | | CUB-200-2011 | L 2 (Donahue et al., 2014) | 55.59±1.02 | 74.85±0.12 | 44.70±0.17 | 80.64±0.30 | | Mixup (Zhang et al., 2017) | 52.39±0.68 | 73.02±0.11 | 44.27±0.31 | 81.86±0.20 | | | Manifod Mixup (Verma et al., 2019) | 55.38±0.16 | 74.09±0.49 | 49.57±0.30 | 83.09±0.26 | | | CutMix (Yun et al., 2019) | 28.08±1.261 | 59.89±0.94 | 29.73±0.26 | 81.52±0.25 | | | L 2 -SP (Li et al., 2018) | 54.38±0.32 | 73.90±0.22 | 45.30±0.23 | 81.58±0.10 | | | DELTA (Li et al., 2019) | 58.15±0.26 | 75.84±0.08 | 47.88±0.15 | 82.21±0.15 | | | BSS (Chen et al., 2019) | 54.99±0.73 | 74.14±0.34 | 46.41±0.09 | 81.10±0.04 | | | RIFLE (Li et al., 2020) | 53.68±0.89 | 73.05±1.09 | 44.13±0.38 | 81.94±0.06 | | | Co-Tuning (You et al., 2020) | 57.98±0.08 | 75.11±0.47 | 49.98±0.23 | 82.60±0.03 | | | RegSL (Li & Zhang, 2021) | 57.62±0.88 | 75.51±0.44 | 46.92±0.28 | 80.20±0.17 | | | SMILE | 62.13±0.55 | 77.27±0.35 | 51.73±0.04 | 83.62±0.07 | | | Stanford-Cars | L 2 (Donahue et al., 2014) | 61.17±0.36 | 82.73±0.59 | 43.01±0.53 | 90.14±0.12 | | Mixup (Zhang et al., 2017) | 60.25±0.68 | 83.60±0.02 | 45.73±0.15 | 91.51±0.18 | | | Manifod Mixup (Verma et al., 2019) | 64.38±0.73 | 85.01±0.18 | 50.53±0.22 | 91.88±0.16 | | | CutMix (Yun et al., 2019) | 47.89±0.78 | 76.71±0.36 | 37.62±0.14 | 92.56±0.20 | | | L 2 -SP (Li et al., 2018) | 61.00±0.28 | 82.05±0.05 | 44.12±0.33 | 90.61±0.12 | | | DELTA (Li et al., 2019) | 62.05±0.13 | 82.1±0.44 | 43.27±0.27 | 90.86±0.08 | | | BSS (Chen et al., 2019) | 64.97±0.69 | 83.81±0.39 | 47.45±0.23 | 91.14±0.04 | | | RIFLE (Li et al., 2020) | 62.85±0.22 | 83.57±0.43 | 43.61±0.07 | 91.08±0.12 | | | Co-Tuning (You et al., 2020) | 66.05±0.41 | 81.05±0.39 | 44.29±0.42 | 91.19±0.11 | | | RegSL (Li & Zhang, 2021) | 60.12±0.63 | 82.91±0.08 | 42.52±0.37 | 91.02±0.05 | | | SMILE | 65.17±1.11 | 85.90±0.16 | 50.93±0.17 | 92.21±0.05 | | | FGVC-Aircraft | L 2 (Donahue et al., 2014) | 59.63±1.11 | 79.57±0.18 | 51.13±0.45 | 88.27±0.51 | | Mixup (Zhang et al., 2017) | 65.20±0.80 | 84.53±0.62 | 54.42±0.55 | 89.33±0.17 | | | Manifod Mixup (Verma et al., 2019) | 61.10±0.56 | 80.40±1.90 | 57.97±0.61 | 89.53±0.24 | | | CutMix (Yun et al., 2019) | 53.50±0.80 | 77.60±0.73 | 44.10±0.78 | 88.48±0.27 | | | L 2 -SP (Li et al., 2018) | 54.70±0.73 | 76.13±0.82 | 48.85±0.70 | 87.97±0.66 | | | DELTA (Li et al., 2019) | 53.47±0.24 | 71.73±1.02 | 51.05±0.38 | 88.92±0.25 | | | BSS (Chen et al., 2019) | 61.40±1.13 | 81.47±0.24 | 52.61±0.11 | 88.47±0.16 | | | RIFLE (Li et al., 2020) | 60.97±0.49 | 79.87±0.38 | 52.13±0.31 | 89.45±0.44 | | | Co-Tuning (You et al., 2020) | 62.98±0.72 | 80.03±0.04 | 52.05±0.43 | 88.19±0.33 | | | RegSL (Li & Zhang, 2021) | 61.87±0.37 | 79.40±0.92 | 51.64±0.43 | 88.87±0.26 | | | SMILE | 68.40±0.33 | 84.57±0.29 | 60.04±0.33 | 90.16±0.15 | | Table 2: Comparison of top-1 accuracy (%) on transfer learning benchmarks. The notation C:X/N:Y refers to using Y examples from X selected categories. ensemble (Liang et al., 2020). We train all models using SGD with the momentum of 0.9, weight decay of 10−4 and batch size of 48. We train 15,000 iterations for Food-101 considering its large scale and 9,000 iterations for the remaining datasets. The initial learning rate is set to 0.001 for MIT-Indoor-67 due to its high similarity with the pre-trained dataset Places365 and 0.01 for the remaining. The learning rate is divided by 10 after two-thirds of total iterations. Each experiment is repeated five times and we report the average top-1 classification accuracy and standard derivations for uncertainty quantification. For hyper-parameter search, we use a simple three-fold cross validation on the original training set from γFE ∈ [0.01, 0.1] and γFC ∈ [0.01, 0.1]. The selected best configurations are used in all experiments. As for baseline methods, we use the recommended choices of hyper-parameters reported in corresponding papers. 1We notice that CutMix performs surprisingly worse in low-data regime (e.g., less than 1000 training examples in total). However, when fine-tuning with full data, CutMix always outperforms vanilla fine-tuning. This phenomenon is consistent with our preliminary experiments in Introduction. A conjecture is that, when training data are insufficient, the operation of cutting and replacing patches might cause much severer over-fitting risks. The CutMix results can be reproduced by our published code. Table 3: Comparison of top-1 accuracy (%) with different transfer learning algorithms on more task types and architectures. | and architectures. Dataset | Method | Sampling Rates | | | |------------------------------|----------------------------|------------------|------------|------------| | | | 30% | 50% | 100% | | MIT-Indoor-67 | 2 (Donahue et al., 2014) | 78.68±0.20 | 80.80±0.18 | 82.00±0.21 | | L Mixup (Zhang et al., 2017) | | 77.44±0.44 | 80.28±0.28 | 82.87±0.50 | | DELTA (Li et al., 2019) | | 80.80±0.22 | 82.80±0.25 | 83.67±0.18 | | BSS (Chen et al., 2019) | | 78.23±0.50 | 80.35±0.28 | 82.15±0.22 | | RIFLE (Li et al., 2020) | | 76.76±0.08 | 78.71±0.33 | 81.78±0.07 | | SMILE | | 82.00±0.14 | 83.54±0.20 | 85.37±0.16 | | Food-101 | L 2 (Donahue et al., 2014) | 80.25±0.28 | 83.43±0.15 | 86.77±0.03 | | Mixup (Zhang et al., 2017) | | 82.63±0.11 | 84.93±0.06 | 87.82±0.06 | | DELTA (Li et al., 2019) | | 81.38±0.08 | 84.07±0.06 | 87.34±0.07 | | BSS (Chen et al., 2019) | | 81.13±0.04 | 83.96±0.09 | 87.33±0.03 | | RIFLE (Li et al., 2020) | | 81.13±0.04 | 83.82±0.02 | 87.29±0.11 | | SMILE | | 82.84±0.16 | 85.25±0.09 | 88.20±0.10 | ## 4.1.3 Results As observed in Table 2, our proposed **SMILE** achieves remarkable improvements to vanilla fine-tuning on three standard benchmarks, and outperforms all state-of-the-art methods. As the size of training data becomes smaller, our method yields more significant benefits, e.g. **SMILE** outperforms vanilla fine-tuning by more than 8% on FGVC-Aircraft when only 15% training samples are used. Our method is shown scalable to more challenging datasets. For MIT-Indoor-67, vanilla fine-tuning with a small learning rate is quite competitive as the pre-trained model is highly adaptable for the target task. While for large-scale dataset Food-101, the benefit from all fine-tuning algorithms becomes less. As shown in Table 3, **SMILE** still delivers decent performance on these datasets. As for the time complexity, although **SMILE** requires an extra forward pass, the actual running time increases less than 20% against vanilla fine-tuning. ## 4.2 Natural Language Processing We also evaluate **SMILE** on the text classification task using powerful transformer-based architecture, showing that our method can be applied to NLP tasks. ## 4.2.1 Datasets And Models To carry out experiments of transfer learning for NLP tasks, we use the fine-grained sentiment classification task SST-5, which offers the Stanford Sentiment Treebank datasets with five categories. The pre-trained model in this experiment is base model of BERT (Devlin et al., 2018) with 12 transformer blocks and 12 attention heads. ## 4.2.2 Training Details We fine-tune the pre-trained BERT model with the batch size to 24 for 3 epochs, using Adam optimizer with a learning rate of 2 × 10−5 while incorporating with deep transfer learning algorithms including L2-SP and BSS (Chen et al., 2019) and vanilla Mixup. ## 4.2.3 Results Results are shown in Table 4, where we include performance based on LSTM (Tai et al., 2015), CNN (Kim, 2014), and the vanilla BERT*base* reported in (Munikar et al., 2019) for comparisons. We also find that both mixup and **SMILE** outperform standard fine-tuning and **SMILE** achieves more improvements. Regularizers L2-SP and BSS without mixup are not superior to standard fine-tuning in this task. ## 5 Analysis 5.1 Ablation Study We here present an ablation study to exhibit the unique contribution corresponding to each component in our framework. Specifically, we evaluate the performances of **SMILE** by removing L FE and L FC respectively. As observed in Table 5, while they both make non-trivial contributions, the influence of the sample-to-feature mixup regularizer L FE is more significant than the feature-to-label regularizer L FC. Furthermore, we consider simple combinations of mixup strategies and state-of-the-art transfer learning algorithms. Specifically, we employ DELTA (Li et al., 2019) to improve knowledge distillation and RIFLE (Li et al., 2020), BSS (Chen et al., 2019) to alleviate negative transfer, both on top of the vanilla mixup. As shown in Figure 3, though such a straightforward combination with vanilla mixup sometimes improves either transfer learning and mixup, **SMILE** is still significantly superior. ![10_image_0.png](10_image_0.png) Figure 3: Comparison of top-1 accuracy (%) with various SOTA transfer learning baselines combined with Mixup. ## 5.2 Discussions On Linear Interpolation Effects One major assumption of our work is that *fine-tuning with pre-trained models would overfit to mixed-up* samples and labels. In order to verify this assumption and interpret the performance improvement of **SMILE**, we now investigate the linear interpolation effects in fine-tuning. It is worth noting that, the terminology overfit and *generalize* in this section are particularly w.r.t. the linear interpolation, rather than the model accuracy in general sense. For example, *a model overfits to mixed-up samples and labels* means, the model remembers all interpolated labels for the mixed training samples, but a mixture of two test samples fails to get the prediction that lies on the linear interpolation of their respective predictions. | Table 4: Experimental results on NLP task SST-5. Methods Accuracy | Table | 5: | Ablation | Study | on | the | CUB-200-2011 | |---------------------------------------------------------------------|-----------------|------------|------------|---------|------|-------|----------------| | dataset. Methods | Sampling Rates | | | | | | | | | 30% | 100% | | | | | | | | SMILE | 70.42 | 83.62 | | | | | | | SMILE w/o. L FC | 69.68±0.12 | 83.11±0.19 | | | | | | | SMILE w/o. L FE | 68.15±0.27 | 82.92±0.20 | | | | | | LSTM (Tai et al., 2015) | 46.4 | | | | | | | | CNN (Kim, 2014) | 48.0 | | | | | | | | BERTbase (Devlin et al., 2018) | 53.2 | | | | | | | | BERTbase w. Mixup | 53.7 | | | | | | | | 2 -SP (Li et al., 2018) | 53.2 | | | | | | | | BERTbase w. L BERTbase w. BSS (Chen et al., 2019) | 53.4 | | | | | | | | BERTbase w. SMILE | 54.6 | | | | | | | ## 5.2.1 Measuring Linear Interpolation Effects To quantify the linear interpolation effects, we define metric to quantify the effect of linear interpolation. Derived from standard mixup (Zhang et al., 2018), we introduce a generalized form of interpolation loss (IL) w.r.t a function f employing its own outputs as labels, eliminating the influence of the faithfulness of the approximation, i.e. how the learned function fits the ground truth f ∗, as follows: $$\begin{array}{c}{{\mathrm{IL}(f)=\mathbb{E}_{x,x^{\prime}\sim D}\mathbb{E}_{\delta_{1},\delta_{2}\sim P_{\delta}}\mathbb{E}_{\lambda\sim P_{\lambda}}D_{\lambda}^{i t}(f(\operatorname{Mix}_{\lambda\delta_{1}+(1-\lambda)\delta_{2}}(x,x t)),}}\\ {{f(\operatorname{Mix}_{\delta_{1}}(x,x t)),f(\operatorname{Mix}_{\delta_{2}}(x,x t))),}}\end{array}$$ $$\left(7\right)$$ (*x, x*′))),(7) where Dit λrefers to the Euclidean distance between the output w.r.t the interpolated inputs and the proportionally mixed outputs. λ conforms to the Beta distribution as described in Section 3.2, and δ1, δ2 are sampled from a uniform distribution between 0 and 1. Note that Equation (7) is a metric for evaluating to what degree does a model favor linear behaviors, rather than an optimization objective. Compared with original form of linear interpolation loss used in standard mixup (Zhang et al., 2018), the metric defined in Equation (7) has the following two merits. - Equation (7) is feasible for measuring linear interpolation effects for both the feature layer (noted as Feature-IL when considering the CNN feature extractor as the function f) and the label outputs (noted as Label-IL when considering the classifier's outputs as f). - Equation (7) relies on the network's own outputs rather than external labels, and thus the influence of label fitting (e.g. model accuracy) is disentangled from the evaluation of linear interpolation. ## 5.2.2 Sample-To-Feature Mixup: Linear Interpolation Effects And Generalization We use Eq 7 to measure Label-IL using the classifier outputs and Feature-IL using the last hidden layer of ResNet-50 for different transfer learning methods, with CUB-200-2011 (with 30% sampling rates) as the training set for all methods. Several arguments can be deduced from results in Table 6. - *More Data, Better Generalization, and Lower Label-IL and Feature-IL.* There is no doubt to assume that, in practice, a model trained with more data should enjoy better generalization performance. In addition to improve the testing accuracy , we find that, when we involve additional training samples, both Label-IL and Feature-IL would be lower on the testing sets, compared to vanilla fine-tuning. - *Fine-tuning with vanilla mixup is NOT generalizable even in the label space, due to the lack of linear* interpolation in feature spaces. As shown in Table 6, although Label-IL of the vanilla mixup is significantly lower on the training set than other methods, its Label-IL is high on the testing set (not generalizable). Furthermore, compared to other methods on both training/testing sets (even Fine-tuning on the testing set), Feature-IL of the vanilla mixup is high, i.e., poor linear interpolation in feature spaces. - Sample-to-Feature Mixup could ensure the generalizability of mixup effects in the label space, as *SMILE* is with low Feature-IL and Label-IL on both training and testing sets. While **SMILE** achieves the lowest Feature-IL on both training and testing datasets, it also achieves the lowest testing Label-IL. The comparisons with vanilla mixup suggest that doing mixup in the label space is just not enough for fine-tuning. Besides the quantitative results, we in Appendix B present some visualized cases of interpolation behaviors on the feature space with different fine-tuning methods. These arguments solidify our motivation of sample-to-feature mixup for fine-tuning.2 2Note that the over-fitting in the label space may not be directly calibrated with training/test accuracy as there exists other factors influence the accuracy, e.g. mixup also benefits from the effect of label smoothing (Singh & Bay, 2019). Table 6: Feature-IL and Label-IL for different fine-tuning methods over the training (sampling the CUB-2002011 training set by 30%) and testing dataset. Lower is better. Add. Data refers to involving the remaining 70% training examples for fine-tuning. However, the interpolation loss for the training set is still calculated on the original 30%. | Method | Label-IL | Feature-IL | | | |----------------------|------------|--------------|------|------| | Train | Test | Train | Test | | | Finetune | 1.80 | 1.92 | 1.92 | 1.93 | | Finetune + Add. Data | 1.85 | 1.88 | 1.58 | 1.63 | | Finetune + MXP | 1.65 | 2.00 | 1.98 | 2.02 | | SMILE | 1.75 | 1.82 | 1.48 | 1.53 | ## 5.2.3 Feature-To-Label Mixup: Déjà Vu Can Help. To enforce linear behaviors on features, **SMILE** inherits the Feature-to-Label classifier (i.e., the FC layer) from the source model as the initialization of fine-tuning. It is because we assume that the label space of the source task is partially overlapped with the target task. Thus, the FC layer with a considerable number of parameters contains useful information for the target task. This has been investigated by correlating the label spaces between these two tasks in recent studies (You et al., 2020). Furthermore, although it is impossible for **SMILE** to mix the feature vectors extracted from the fine-tuned CNN during the fine-tuning procedure, Feature-to-Label Mixup still works well as the FC classifier has been well-trained on the source datasets, which provides rich semantic information. ## 5.3 **Discussions On Catastrophic Forgetting** Here we present discussions to confirm our hypothesis stated in the introduction, i.e. the vanilla mixup aggravates the risk of catastrophic forgetting, while our approach alleviates this problem by reusing rich source knowledge. As suggested by existing literature (Li et al., 2018; Gouk et al., 2021; Li & Zhang, 2021), we use the parameter distance between the fine-tuned and pre-trained model to measure the degree of catastrophic forgetting. We use the same experimental setting as that used in Section 5.2. As shown in Table 7, both mixup and **SMILE** get the parameter distance even larger than vanilla fine-tuning with double training examples, while **SMILE** alleviates the deviation caused by mixup. Table 7: Parameter distance between the fine-tuned and pre-trained model with different methods. The distance is calculated as the summation of the distance w.r.t. each layer, measured by the Euclidean distance between two tensors. The FC layers are not included when calculating the distance. Method Finetune Finetune (2x data) Finetune+mixup Finetune+**SMILE** Distance 32.3 37.3 90.1 60.7 ## 5.4 **Role Of Source Model** Our method is effective particularly in transfer learning, and a reliable teacher model is vital to the proposed framework. In the setting of fine-tuning, the source model acts as a good starting point to provide supervision for both cross-domain mixed labels and in-domain mixed features. Thereby, our method is not directly feasible for general-purpose supervised learning. The reason are two folds. In general supervised learning, i.e., learning a single task from scratch, the cross-domain sample-to-label regularier L FC is no longer applicable, since no auxiliary tasks are available. This makes our framework incompatible with general supervised learning. The source model also plays an essential role for supervision of mixed features. If the teacher is not trustworthy enough, the effects of sample-to-feature mixup cannot be guaranteed. We design two groups of experiments to verify this. In the first group, we evaluate **SMILE** with degenerated teachers, including one without knowledge preserving from the source model, and the other without adaptation to the target data. They are denoted by "w/o Source" and "w/o Target" respectively. The second group simulates general supervised learning, where only original mixup and sample-to-feature mixup are involved in **SMILE**. Specifically, results in Table 8 show that preserving the source weights contributes more than adapting to the target data, in terms of sample-to-feature mixup. Experiments on the second group further backup the analysis. As shown in Table 9, when training from scratch, **SMILE**is inferior to vanilla Mixup, indicating that the sample-to-feature component has negative influence if using a low-quality teacher. | Table 8: Evaluation of fine-tuning on CUB-200-2011 C:25%/N | :800. | | | |--------------------------------------------------------------|---------|------------------|------------------| | Mixup | SMILE | SMILE w/o Target | SMILE w/o Source | | 73.02 | 77.27 | 75.72 | 73.45 | | L 2 | Mixup | SMILE | |-------|---------|---------| | 16.97 | 25.36 | 21.86 | Table 9: Evaluation of training from scratch on CUB-200-2011 C:25%/N:800. ## 6 Conclusion In this work, we figure out the difficulty of applying mixup to transfer learning, and introduce **SMILE**—Sampleto-feature Mixup strategies for Efficient Transfer Learning. Beyond a direct combination of fine-tuning and mixup, **SMILE** pursues generalizable linear behaviors through incorporating both features of the target domain and the label space of the source domain. We conduct extensive experiments using a wide spectrum of target datasets. Results show that **SMILE** can significantly promote the effectiveness of fine-tuning and outperform various competitive fine-tuning algorithms. Ablation studies and empirical discussions further backup our design intuition and purposes. ## References Abulikemu Abuduweili, Xingjian Li, Humphrey Shi, Cheng-Zhong Xu, and Dejing Dou. Adaptive consistency regularization for semi-supervised transfer learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6923–6932, 2021. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision*, 2014. Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, and Jianmin Wang. Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning. In *Advances in Neural Information* Processing Systems, pp. 1906–1916, 2019. Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie. Large scale fine-grained categorization and domain-specific transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4109–4118, 2018. Jun Deng, Wei Dong, Richard Socher, Li-Jia Li, Kuntai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In *International conference* on machine learning, pp. 647–655, 2014. Weifeng Ge and Yizhou Yu. Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 10–19, 2017. Henry Gouk, Timothy Hospedales, et al. Distance-based regularisation of deep networks for fine-tuning. In International Conference on Learning Representations, 2021. Boris Hanin and David Rolnick. Complexity of linear regions in deep networks. In International Conference on Machine Learning, pp. 2596–2604. PMLR, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Yoon Kim. Convolutional neural networks for sentence classification. corr abs/1408.5882 (2014). arXiv preprint arXiv:1408.5882, 2014. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)*, Sydney, Australia, 2013. Dongyue Li and Hongyang Zhang. Improved regularization and robustness for fine-tuning in neural networks. Advances in Neural Information Processing Systems, 34:27249–27262, 2021. Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Liping Liu, and Jun Huan. Delta: Deep learning transfer using feature map with attention for convolutional networks. *arXiv preprint arXiv:1901.09229*, 2019. Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, and Dejing Dou. Rifle: Backpropagation in depth for deep transfer learning through re-initializing the fully-connected layer. In International Conference on Machine Learning, pp. 6010–6019. PMLR, 2020. Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit inductive bias for transfer learning with convolutional networks. *Thirty-fifth International Conference on Machine Learning*, 2018. Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In *International Conference on Machine Learning*, pp. 6028–6039. PMLR, 2020. Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. *ArXiv*, abs/1306.5151, 2013. Manish Munikar, Sushil Shakya, and Aakash Shrestha. Fine-grained sentiment classification using bert. In 2019 Artificial Intelligence for Transforming Business and Society (AITB), volume 1, pp. 1–5. IEEE, 2019. Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2010. Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In *2009 IEEE Conference on Computer* Vision and Pattern Recognition, pp. 413–420. IEEE, 2009. Aditya Singh and Alessandro Bay. On mixup training: Improved calibration and predictive uncertainty for deep neural networks neurips reproducibility challenge 2019. 2019. Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. *arXiv preprint arXiv:1503.00075*, 2015. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105–6114. PMLR, 2019. Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *Proceedings of the Advances in neural information* processing systems, pp. 1195–1204, 2017. Vladimir Vapnik. *The nature of statistical learning theory*. Springer science & business media, 2013. Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In *International* Conference on Machine Learning, pp. 6438–6447. PMLR, 2019. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. R. Wan, H. Xiong, X. Li, Z. Zhu, and J. Huan. Towards making deep transfer learning never hurt. In 2019 IEEE International Conference on Data Mining (ICDM), pp. 578–587, 2019. Rongguang Wang, Pratik Chaudhari, and Christos Davatzikos. Embracing the disharmony in medical imaging: A simple and effective framework for domain adaptation. *Medical Image Analysis*, 76:102309, 2022. Ximei Wang, Jinghan Gao, Mingsheng Long, and Jianmin Wang. Self-tuning for data-efficient deep learning. In *International Conference on Machine Learning*, pp. 10738–10748. PMLR, 2021. Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2017. Kaichao You, Zhi Kou, Mingsheng Long, and Jianmin Wang. Co-tuning for transfer learning. *Advances in* Neural Information Processing Systems, 33, 2020. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In *Proceedings of the IEEE/CVF* International Conference on Computer Vision, pp. 6023–6032, 2019. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017. Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In *International Conference on Learning Representations*, 2018. URL https://openreview. net/forum?id=r1Ddp1-Rb. Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, and James Zou. How does mixup help with robustness and generalization? *arXiv preprint arXiv:2010.04819*, 2020. Jincheng Zhong, Ximei Wang, Zhi Kou, Jianmin Wang, and Mingsheng Long. Bi-tuning of pre-trained representations. *arXiv preprint arXiv:2011.06182*, 2020. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. *IEEE transactions on pattern analysis and machine intelligence*, 40(6): 1452–1464, 2017. ## A **Detailed Results For Figure 1** Here we provide detailed results for Figure 2, with the mean accuracy and standard deviation among five random trials for each experiment. We additionally report performance for 200 training samples for clearer comparison. Results in Table 10 clearly show that, in transfer learning, Mixup performs worse (compared against the baseline) as fewer training samples are available. However, when training from scratch, Mixup consistently improves the performance with a large margin. Table 10: Accuracies with standard deviations corresponding to Figure 2. TL refers to transfer learning, where we fine-tuning an ImageNet pre-trained checkpoint to adapt the new task. ST refers to standard training, i.e. training from a random initialization. | training, i.e. training from a random initialization. Dataset Method | | | Size of Training Set | | | | |------------------------------------------------------------------------|-------------------|-------------------|------------------------|--------------------|--------------------|------------| | | 200 | 400 | 600 | 800 | 1000 | | | CUB(TL) | Baseline | 32.98±0.13 | 55.59±1.02 | 68.24±0.35 | 74.85±0.12 | 78.83±0.42 | | Mixup | 27.25±0.18(-5.73) | 52.39±0.68(-3.20) | 65.72±0.58(-2.52) | 73.02±0.11(-1.83) | 77.44±0.14(-1.39) | | | Cars(TL) | Baseline | 38.13±0.60 | 61.17±0.36 | 76.22±0.44 | 82.73±0.59 | 87.01±0.07 | | Mixup | 33.85±0.22(-4.28) | 60.25±0.68(-0.92) | 77.45±0.57(+1.23) | 83.60±0.02(+0.87) | 88.34±0.22(+1.33) | | | CUB(ST) | Baseline | - | 9.57±0.49 | 16.45±0.21 | 19.06±0.62 | 26.11±3.24 | | Mixup | - | 16.07±1.20(+6.50) | 24.87±0.80(+8.42) | 32.49±1.22(+13.43) | 35.53±0.96(+9.42) | | | Cars(ST) | Baseline | - | 9.06±0.90 | 9.89±2.80 | 15.61±1.15 | 19.53±0.68 | | Mixup | - | 16.29±2.22(+7.23) | 20.36±1.92(+10.47) | 24.77±0.41(+9.16) | 35.24±0.69(+15.71) | | ## B **Demonstrated Effects Of Feature Interpolation** Here we present show cases for comparison in feature interpolation among different algorithms. To obtain the interpolation points, we first randomly select a pair of images and then generate five mixed inputs with the interpolation coefficients λ of [0.6, 0.7, 0.8, 0.9, 1] respectively. Forward computation is performed given these mixed inputs and then, their corresponding deep features are extracted and projected to the 2-D space using PCA. We plot the interval of λ > 0.5 for better demonstration of projection as the same sample dominates the interpolated result when all λ lie in either (0, 0.5) or (0.5, 1). Results of four random pairs are illustrated in Figure 4. ![17_image_0.png](17_image_0.png) | | 0.6 | 0.7 | | | |----------------------|-------------|-------|-----|-----| | | 0.7 | | | | | 1.0 0.9 0.8 | 0.7 | | | | | 0.9 1.0 | 0.6 | | | | | Standard Fine-tuning | 0.7 | 0.8 | | | | | 0.8 | | | | | 0.8 | 0.9 | | | | | | 0.9 1.0 | 1.0 | | | | | 1.0 | 0.6 | 1.0 | | | | 0.7 | | | | | 0.9 | | | | | | Fine-tuning w/ Mixup | 0.9 1.0 0.8 | 0.7 | 0.6 | 0.8 | | 0.8 | | | | | | 0.7 0.6 | 0.9 | | | | | | 0.9 1.0 | 0.8 | 0.7 | 0.6 | | 0.6 | 1.0 | | | | | | 0.9 | | | | | | 0.6 | | | | | 1.0 | 0.7 | | | | | 0.9 | | | | | | 0.9 1.0 | 0.8 0.7 0.6 | | | | | 0.7 | 0.8 | | | | | SMILE | | 0.8 | 0.9 | 0.8 | | | 0.7 | | | | | | 1.0 | 0.6 | | | Figure 4: Visualizations of feature interpolation behaviors for different fine-tuning methods. We extract the representations from the last hidden layer which are 2048 dimensional feature vectors and then project them into the 2-D space using PCA. Each column corresponds to the projected deep features generated by putting forward the interpolation of a random pair of images. The number marked next to the point refers to the interpolation coefficient λ.
Review 1: Summary: This work primarily focuses on making Mixup strategies work for deep transfer learning. While the motivation why Mixup should be used for transfer learning is not very clear, the results seem promising. The proposed method efficiently leverages existing techniques like Mixup, Mean Teacher to improve performance in transfer learning. While the novelty is limited, it’s a simple solution that works in practice. Strengths and Weaknesses: Strengths: + Proposes an empirically sound methodology for leveraging Mixup in the context of deep transfer learning. + Reasonable gains in terms of empirical performance. Weaknesses: - The motivation for why deep transfer learning would benefit from Mixup is not clearly discussed. In particular, the introduction section of the paper is a bit confusing and doesn’t set the proper context. - For a more comprehensive comparison of existing methods, it would be interesting to see the results when the proposed Mixup methodology is added to more recent transfer learning approaches like Co-Tuning, RegSL considered in the paper. Questions: * Comparison with existing methods: Are all the existing methods re-trained on the data splits generated for the experiments, or was a standard transfer learning benchmark used? Using benchmarks makes the comparison fair. For example, Guo et al., 2018, SpotTune [1], uses the Visual Decathlon benchmark. There are more recent benchmarks like Jiang et al. 2020 [2] * “As the size of training data becomes smaller, our method yields more significant benefits,” -- As shown in the results, as the data size increases, the gains decrease, does this mean as the architecture is relying more on the pre-trained features instead of learning features specific to the target domain? [1] Guo, Yunhui, Shi, Honghui, Kumar, Abhishek, Grauman, Kristen, Rosing, Tajana, and Rogerio Feris. "SpotTune: Transfer Learning through Adaptive Fine-tuning." arXiv, (2018). https://doi.org/10.48550/arXiv.1811.08737. [2] Junguang Jiang, Baixu Chen, Bo Fu, Mingsheng Long, Transfer-Learning-library, Github repository, 2020, https://github.com/thuml/Transfer-Learning-Library Requested Changes: Experiments and Writing: * The introduction section is confusing and doesn’t motivate “Why deep transfer learning would benefit from Mixup?”. I understand there are marginal gains, but what would be the primary advantage of using Mixup in Transfer Learning context? This needs to be motivated better. * Experiments to show the effectiveness of the proposed Mixup strategy when used in combination with existing SoTA transfer learning methods like Co-Tuning and RegSL * Details on the experimental setting like data splits, was a transfer learning benchmark used? If not, why? Minor Issues: * Error in Figure-1 legend: while the plot has dotted lines, the legend doesn’t have any dotted lines. * Use of terms like source domain and target domain can be misleading and lead to questions and comparisons with domain adaptation; It might be better to use source dataset/target dataset instead of source domain and target domain. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper proposes a simple, yet seemingly effective technique for transfer learning based on mixup. The core idea is applying mixup to both sample-to-label (on both the source and target domains) and sample-to-feature maps. The authors validate the effectiveness of this technique in image classification (primarily for ResNet-50 models pretrained with ImageNet) and NLP benchmarks (fine-tuning BERT for the sentiment classification task SST-5). The paper includes a small ablation study showing the importance of both additional loss terms. Strengths and Weaknesses: ## Strengths * The proposed technique is simple, yet empirical results seem to support its effectiveness for transfer learning compared to some other recent methods. (Unfortunately, I do not know additional state-of-the-art techniques comparison with which could be important here.) * Authors consider both the image and natural language domains (though with only one task). ## Weaknesses * The overall level of manuscript preparedness is very low as if it was still a draft and not a finished paper. There are numerous errors and inaccuracies in the text from the language to mathematical notation and even the text structure (clearly wrong order of sections). * Many explanations in the text are poorly articulated and together with clear mistakes in mathematical notation, they make reading and understanding this manuscript exceptionally difficult. * The interpolation loss ${\rm IL}(f)$ in Equation 6 is not clearly explained. 1. Firstly, when we speak of generalization, we typically consider one metric and analyze it for both training and test data distributions. For example, we can look at the model accuracy or the loss and compare their values on the training set and a hold-out test set to see whether overfitting occurs. Similar analysis could be carried out with $L^{\rm FE}$ and $L^{\rm FC}$. The difference of these losses on the training and test datasets would already characterize the degree of "overfitting" occurring on the training set and it is not entirely clear why we need another loss characterizing the linear structure on the test set. 2. Secondly, while it is clear that ${\rm IL}(f)$ vanishes for linear functions $f$, one can also come up with other similar measures be that with only two samples (increasing the order / number of $\delta$ terms), or more than two samples. Was this form of ${\rm IL}(f)$ chosen because of its simplicity compared to other alternatives, or does it possess any other advantageous properties? Requested Changes: In my opinion, the manuscript still needs a major editing effort in order for it to published. Authors might need to improve readability and fix the most obvious mistakes for the paper to be easy enough to comprehend. ## Language In the following, I list some examples of sentences that seem incorrect: * "Thus, **there needs to estimate the feature vector** for any sample in the target dataset before having the CNN trained." * "Later, the work ... introduces new algorithms that prevent the regularization **from the hurts to transfer learning**, where ... **proposes** to truncate the tail spectrum of the batch of gradients while .. proposes to truncate the ill-posed .." * "multi-tasking algorithms" -- not sure if the authors meant "multi-task". * "In this way, **there might need a way** to make compromises between the features learned from both ..." * "They find that, fine-tuning can be substantially" -- not sure about punctuation here. * "Our ablation studies in Section 4 show that the simple combination of fine-tuning and mixup strategies **does not well in** transfer learning settings from both **linear behaviors preservation** and generalization performance aspects." * "ten-crop ensemble" -- not sure what this is and there was no reference provided. Is this a misprint? Examples of stylistic changes that would benefit the paper: * Sentence "The selected best configurations are used for all configurations" contains the word "configurations" twice. * "As for baseline methods, we use the recommended choices of hyper-parameters reported in **their papers**." -- perhaps replace "their papers" with "corresponding papers"? * "offering **tons** of well-trained features", "**blessed by the power** of large source datasets" -- word "tons" and expression "blessed by the power" are in my opinion incompatible with the style of a scientific publication. * "consider a more actual scenario" -- perhaps "relevant" or "practical", or another word instead of "actual"? ## Mathematical Notation Here we outline some mistakes in mathematical notation found on page 5 and additional questions that need to be clarified in the text: * Authors introduce $\omega^k_s$ for the $k$-th iteration of the model. But $\omega_s$ is the source (presumably fixed) model. Should not it be $\omega^k_t$, the target model instead? * Underneath Equation (2) authors write $L^{\rm Reg} (\omega,\omega_t,\xi)$. Should it be $L^{\rm Reg}(\omega,\omega_s,\xi)$ instead? * $L^{\rm MXP}$ is mentioned, but is never defined explicitly. In my opinion, the expression for the final loss optimized by the authors has to be provided with _all_ of its terms. * The spacing around minus sign in Equation (4) is confusing. * Authors mention the fact that $\lambda$ is sampled from the beta distribution, it would be interesting to know a little bit more about the reasons for this choice without referring to another publication. * In Equation (4) we see both $g_t(\cdot;\omega_s)$ and $g_s(\cdot;\omega)$? Why are the $s$ and $t$ subscripts used on both $g$ and $\omega$? Why does $g_t$ (presumably target features) depend on $\omega_s$ (source weights) and $g_s$ (presumably source features) depends on $\omega$ (target weights)? This is either a mistake or needs a clarification. Right now this is simply confusing. * Equation (6) misses a lot of closing brackets. * $P_\delta$ is mentioned, but is not defined (is it the same distribution as $P_\lambda$?). * "1e-4" in the text should be replaced with $10^{-4}$. Please review the text thoroughly and fix all similar issues. ## Structure There are Sections 4.0.1 and 4.0.2 that seemingly belong to Section 4.1 and should thus be Sections 4.1.1 and 4.1.2. Broader Impact Concerns: I did not identify any concerns on the ethical implications of this work. ================================================== Review 3: Summary: This paper focuses on the problem of transfer learning, the authors try to answer how to fine-tune pre-trained models to downstream tasks. Specifically, they find that mix-up overfits when fine-tuning deep neural networks. To solve this problem, they propose a mix-up based fine-tuning method, SMILE, to enable a better generalization in the downstream tasks. Specific technical details: SMILE applies mix-up to both the input space and the output space of the training model (feature vectors and classifier outputs), and performs regularization to solve the problem of catastrophic forgetting. In the experiments, the paper compares SMILE with SOTA methods in the fine-tuning literature using image classification and NLP tasks, and shows that SMILE is effective in enhancing the generalization ability of deep models. Strengths and Weaknesses: Strengths: 1. This paper shows the effectiveness of the proposed SMILE in a wide range of datasets. It is convincing to me that SMILE is useful in fine-tuning. Weaknesses: 1. Though good experimental results are shown in a wide range of datasets, it is still not clear why SMILE is a good solution. The whole story is based on the assumption that mix-up can benefit transfer learning. However, no explorations about the relationship between mix-up and transfer learning are shown. To further improve the paper, I suggest the authors explore what is the key factors in transfer learning and why mix-up can benefit the transfer learning process. 2. This paper lacks insights. I think it should be some correspondences between research questions and experimental results. A good paper should solve the targeted problem and tell readers why the proposed method can solve the principal difficulty. There should be more clarification about why SMILE can solve the essential problem of transfer learning. 3. Vanilla mix-up overfits in transfer learning, this is why SMILE is proposed. Insights about why mix-up overfits should be explored. 4. Considering the above three weaknesses, it should be the case that SMILE is a combination of existing technologies and no extra insights are provided. Requested Changes: Though good experimental results are provided, this paper should add more insights to make the proposed method convincing: 1. Why mix-up can benefit transfer learning? 2. Why SMILE can solve the essential problem of transfer learning? 3. Why mix-up overfits in transfer learning? 4. For further improving the paper, The authors should reorganize the research question and give some correspondences between research questions and experimental results. Broader Impact Concerns: No. ================================================== Review 4: Summary: The authors found that mixup data augmentation technique doesn’t help much in a transfer learning setup especially when the target dataset is small. They therefore propose to regularize both sample-to-feature mixup and sample-to-label mixup by leveraging a mean teacher feature extractor and a classifier to boost the model’s performance on the target task. They perform experiments on several datasets for image classification and a natural language processing task. The proposed method outperformed a list of baseline methods. Strengths and Weaknesses: Strengths: - The authors performed a lot of experiments on many datasets and tasks; - The proposed method shows substantial improvement compared to other baselines; Weaknesses: - The study is motivated by the observation that mixup doesn't help when transferring models to a small target dataset compared to training from scratch. But please note that the performance of a transferred model and the scratch model is at a very different scale. The authors also warn that mixup in transfer learning might be harmful. However, the difference of the performance between a pain transferred model and a mixup transferred model might be not statistically significant; - The proposed strategy is very similar to the proxy regularizer introduced in [1] but in a mixup setting. Perhaps the authors want to discuss the connection; - The methodology section is very hard to follow in general. The authors need to fix the logic and flow. For example, $L^{FE}$ and $L^{FC}$ are not defined in section 3.1; - Is there any theoretical justification of the proposed regularization method? - The caption of Figure 2 is not self-contained. Please elaborate on the design of the proposed architecture; - Where are the cross-domain (out-of-sample) generalization experiments mentioned in the introduction? [1] Wang, R., Chaudhari, P. and Davatzikos, C., "Embracing the disharmony in medical imaging: a simple and effective framework for domain adaptation." Medical Image Analysis 76 (2022): 102309. Requested Changes: - Justify on the motivation of the study; - Discuss the relationship of a similar work in the literature; - Improve the language and flow of the methodology section; - Theoretical analysis of the proposed method; - Add cross-domain generalization experiments; Broader Impact Concerns: No ethical concerns observed. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper studies Sample-to-feature Mixup, an adapted Mixup technique for transfer learning, targeting on improving the data-efficiency of low target data regime. Four expert reviewers provided detailed and constructive comments for this paper; In correspondence, the authors provided a revised version with a large ratio of revised materials, which significantly strengthened the paper. The AE ensured that all reviewers took the revised version as well as the authors' response into consideration. In the final recommendation phase, two reviewers voted for acceptance but the other two reviewers voted for rejection. Two of them provided further information to justify their recommendations, which were highly appreciated. AE justified all the material as well as the recommendation as a neutral referee, and reached the conclusion of "Revision". The main concern is that, this paper is an empirical study of a new transfer learning technique, but the current experimentation lacks deeper insights and background mechanism of mixup for transfer learning. In particular, why such a variant of Mixup with sample-to-feature strategy is useful for transfer learning--is it also useful for general-purpose supervised learning? While a theoretical understanding is definitely hard since Mixup itself is a data augmentation technique, a thorough empirical investigation is needed, as suggested by the reviewers. Also, since the mixup-related work is fruitful in our community, fine-tuning + some Mixup variants (e.g. Manifold Mixup, MixCut) shall be studied to make the evaluation more complete. After confirmation with the EiC, AE would offer the authors with an opportunity for a revision. Authors shall be careful in addressing the major concerns. ==================================================
# Toward A Complete Criterion For Value Of Information In Insoluble Decision Problems Anonymous authors Paper under double-blind review ## Abstract In a decision problem, observations are said to be material if they must be taken into account, to perform optimally. Decision problems have an underlying (graphical) causal structure, which sometimes may used to evaluate certain observations as immaterial For soluble graphs - ones where important past observations are remembered - there is a complete graphical criterion; one that rules out materiality whenever this can be done on the basis of the graphical structure alone. In this work, we analyse a proposed criterion for insoluble graphs. In particular, we prove that some of the conditions used to prove immateriality are necessary; when they are not satisfied, materiality is possible. We discuss possible avenues and obstacles for proving necessity of the remaining conditions. ## 1 **Introduction** We can view any decision problem as having an underlying causal structure - a graph consisting of chance events, decisions and outcomes, and their causal relationships. Sometimes, it is possible to establish key features of the decision problem from its causal structure alone. For example, in Figure 1a and Figure 1b, we see two such causal structures. For now, let us focus on the three endogenous vertices: the observation Z, the decision (chosen by the decision-maker) X, and the downstream outcome Y . In each graph, Z has an effect on X, which affects Y , but in Figure 1b, Z also directly influences Y , whereas in Figure 1a, it does not. To fully describe a decision problem, we must specify probability distributions for each of the non-decision variables - distributions that must be compatible with the graphical structure. In particular, the distribution for any variable must depend only on its direct causes, i.e. its parents, a condition known as Markov compatibility. For example, in the causal structure shown in Figure 1b, one compatible decision problem is shown in the figure. The variable Z is a Bernoulli trial (i.e. a coin flip), and the decision-maker is rewarded with Y = 1 if they state the outcome of Z (i.e. call the outcome of the coin flip), otherwise the reward is Y = 0. A variable is then said to be material if the attainable reward is greater given access to an observation than without it. For example, by observing Z, the decision-maker can obtain a reward of 1, such as with the policy Y = Z. Without observing Z, any policy will achieve a reward of 0.5. The means that the value of information is 1 − 0.5 = 0., and since this quantity is strictly positive, Z is material. For the causal structure shown in Figure 1a, we can instead make a deduction that applies to any decision problem compatible with the graph. In this case, for any such decision problem, there will exist an optimal decision rule that ignores the value of Z = z entirely. One way to see this is that once a decision X = x is chosen, the observation Z becomes independent of Y , and so there is no reason for the decision to depend on it. (This can be proved from the fact that Z is d-separated from Y given X.) So for any decision problem compatible with this graph, Z is immaterial. There are many reasons that we may want to evaluate whether a causal structure allows an observation such as Z to be material. Firstly, for algorithmic efficiency - if an observed variable is immaterial, then the optimal policies are contained in a small subset of all available policies, that we can search exponentially more quickly. (For example, in Figure 1a, there are two choices for X, but there are four deterministic mappings from Z to X.) Secondly, materiality can have implications regarding the fairness of a decision-making procedure. Suppose that Z designates the gender of candidates available to a recruiter, which are male Z = 1 or female Z = 0 with equal probability, while X indicates whether that person is X = 1 or is not X = 0 recruited, and Y indicates whether that person is Y = 1 or is not Y = 0 hired. If Y is correlated with Z given X, then the applicant's gender is material for the recruiter, and to maximise the hiring probability, they will have to recruit applicants at different rates based on their gender. If the causal structure is that of Figure 1a, then materiality can be ruled out, meaning that unfair behaviour is not necessary for optimal performance, whereas the causal structure of Figure 1b can incentivise unfairness. Such an analyses can be used for wellstudied concepts like counterfactual fairness (Kusner et al., 2017). An arbitrary graph where Z is a sensitive variable (such as gender), counterfactual fairness can arise only when there is a path Z → *. . .* → O → X, where the observation O is material (Everitt et al., 2021). Thirdly, materiality can have implications for AI safety - if Z represents a corrective instruction from a human overseer, and there exists no path Z → *. . .* → O → X where O is material, then there exist optimal policies that ignore this instruction (Everitt et al., 2021).Materiality is also relevant for evaluations of agents' intent (Halpern & Kleiman-Weiner, 2018; Ward et al., 2024), and relatedly, their incentives to control parts of the environment (Everitt et al., 2021; Farquhar et al., 2022). For an agent to intentionally manipulate a variable Z to obtain an outcome Y = y, there must be a path p : X → . . . → Z → *. . .* → Y where for each of its decisions X0lying on p, the parent O0 along p is material for X0. In general, a stronger criterion for ruling out materiality will allow us to rule out unfair or unsafe behaviour for a wider range of agent-environment interactions (Everitt et al., 2021). ![1_image_0.png](1_image_0.png) Figure 1: Three graphs, with decisions in red, and a real-valued outcome Y . We write U(B) for a uniform distribution over B, i.e. a Bernoulli distribution with p = 0.5. Any procedure for establishing immateriality based on the causal structure may be called a *graphical criterion*. For example, if a decision X is not an ancestor of the outcome Y , then all of the variables observed at X are immaterial. An ideal graphical criterion would be proved *complete*, in that it can establish immateriality whenever this is possible from the graphical structure alone. Clearly, this criterion is not complete, because in Figure 1a, X is an ancestor of the outcome, but we still proved Z immaterial. So far, a graphical criterion from van Merwijk et al. (2022) has been proved complete, but only under some significant restrictions. The causal structure must be *soluble*, meaning that all of the important information observed from past decisions is remembered at later decision points. Also, no criteria has been proved complete for identifying immaterial decisions, i.e. past decisions that can be safely forgotten. For insoluble graphs, there the criterion of Lee & Bareinboim (2020, Thm. 2), which can identify immaterial decisions and is (strictly) more potent in general. However, it is not yet known whether this criterion is complete. In particular, it is not yet clear whether several of its conditions are necessary. For example, one case where all existing criteria are silent is the simple graph shown in Figure 1c - we would like to know whether we can rule out X being a material observation for X0. We cannot use van Merwijk et al. (2022) because X is a decision, and because the graph is insoluble.1 Furthermore, we cannot establish immateriality 1Formally, this is because W 6⊥ Y | X ∪ X0, and X0 6⊥ Y | X ∪ W, as per the definition of solubility that we will review in Section 3. using Lee & Bareinboim (2020, Thm. 2), because it violates a property that we term LB-factorizability, which we will discuss in Section 3.3.2 By studying Figure 1c in a bespoke fashion, we find that there exists a decision problem with the given causal structure, where X is material for X0. As shown in Figure 1c, Z is a Bernoulli variable, and Y is equal to 1 if Z = X0 and to 0 otherwise. If X is observed by X0, then a reward of E[Y ] = 1 can be achieved by the policy X0 = X = Z. If X is not observed, the greatest achievable reward is lower, at E[Y ] = 0.5, implying materiality. This raises a question: by generalising this construction, can we prove that requirement I of LB-factorizability is necessary to prove immateriality for a wide class of graphs? This work will prove that this requirement is indeed necessary, meaning that materiality cannot be excluded for a wide class of graphs including Figure 1c. It remains an open question whether the criterion of Lee & Bareinboim (2020, thm. 2) as a whole is complete, in that its other conditions are necessary for establishing immateriality. In the case that it is complete, our work is a step toward proving this. On the other hand, we also present some graphs where materiality is difficult to establish, that - if the criterion is not complete - could bring us closer to a proof of incompleteness. The structure of the paper is as follows. In Section 2, we will recap the formalism used by Lee & Bareinboim (2020) for modelling decision problems, based on structural causal models. In Section 3, we will review existing procedures for proving that an observation can or cannot be material. In Section 4, we will establish our main result: that requirement I of LB-factorizability is necessary to establish immateriality. In Section 5, we present some analogous results for other requirements of LB-factorizability, that could serve as a building block for proving the necessity of those requirements. We then illustrate the problems that arise in trying to prove necessity of those further requirements, and outline some possible directions for further work. Finally, in Section 6, we conclude. ## 2 **Setup** Our analysis will follow Lee & Bareinboim (2020) by using the structural causal model (SCM) framework (Pearl, 2009, Chapter 7), although the results also apply equally to Bayesian networks and influence diagrams. ## 2.1 **Structural Causal Models** A structural causal model (SCM) M is a tuple hU,V , P(U), Fi, where U is a set of variables determined by factors outside the model, called *endogenous* following a joint distribution P(U), and V is a set of endogenous variables whose values are determined by a collection of functions F = {fV }V ∈V such that V ← fV (Pa(V ), UV ) where Pa(V ) ⊆ V \ {V } is a set of endogenous variables and UV ⊆ U is a set of exogenous variables. The observational distribution P(v) is defined as Pu QV ∈V P(v|paV ,uV )P(u), where uV is the assignment u restricted to variables UV . Furthermore, do(X = x) represents the operation of fixing a set X to a constant x regardless of their original mechanisms. Such intervention induces a submodel Mx, which is M with fX replaced by x for X ∈ X. Then, an interventional distribution P(v\x| do(x)) can be computed as the observational distribution in Mx. The induced graph of an SCM M is a DAG G on only the endogenous variables V where (i) X→Y if X is an argument of fY ; and (ii) X↔Y if UX and UY are dependent, i.e. for any uX,uY , P(uX,uY ) 6= P(uX) × P(uY ). We use the notation Pa(X), Ch(X), Anc(X) and Desc(X) to represent the parents, children, ancestors and descendants of a variable X, respectively, and take ancestors and descendants to include the node X itself.3 We will use the notation V1 − V2 to designate an edge whose direction may be V1 → V2 or V1 ← V2. For a path V1 − V2 *− · · · −* V`, we will use the shorthand V1 - - - V`, and for a directed path V1 *→ · · · →* V`, the shorthand V1 99K V`. For a path p : A - - - B - - - C - - - D, we will describe the segment B - - - C using 2Specifically, requirement I of LB-factorizability is violated because Y is d-connected to πX0 given X0. 3Note that Pa(X) is an intentional reuse of the notation used to describe the arguments of fX in the SCM definition, because the endogenous arguments of fX and the parents of X in the induced graph are the same variables. the shorthand Bp - - - C. We will use the shorthand V1:N for a sequence of variables V1*, . . . V*N indexed by 1*, . . . , N*, v1:N for a sequence of assignments, and p1:N for a set of paths p1*, . . . p*N . There is certain notation that we will use repeatedly when constructing causal models, such as tuples, bitstrings, indexing, and Iverson brackets. We will write a tuple as z := h*x, y*i, and this may be indexed as z[0] = x. A bitstring of length n, i.e. a tuple of n Booleans, may be written as B n, and a uniform distribution over this space, as U(B n). We will denote a bitwise XOR operation by ⊕ so that, for example, 01 ⊕ 11 = 10. Bitstrings may also be used for indexing, for example, the y th bit of x may be written as as x[y], and the leftmost bits are of higher-order so that, for example, 0100[01] = 1. Similarly, for random variables *X, Y* , we will write X[Y ] for a variable equal to x[y] when X = x and Y = y. Finally, the Iverson bracket JPK is equal to 1 if P is true, and 0 otherwise. ## 2.2 **Modelling Decision Problems** To use an SCM to define a decision problem, we need to specify what policies the agent can select from and what goal the agent is trying to achieve. We will describe the set of available policies using a Mixed Policy Scope (Lee & Bareinboim, 2020), which casts certain variables as decisions, and others as *context variables* or "observations" CX, that each decision X is allowed to depend on. Following Lee & Bareinboim (2020), we will consistently illustrate decision variables with red circles, as in Figure 1. Definition 1 (Mixed Policy Scope (MPS)). Given a DAG G on vertices V , a *mixed policy scope* S = hX, CXiX∈X(S) consists of a set of decisions X(S) ⊆ V and a set of context variables CX ⊆ V for each decision. For a set of decisions X0, we define their contexts as CX0 =SX∈X0 CX. A policy consists of a probability distribution for each decision X, conditional on its contexts CX. Definition 2 (Mixed Policy). Given an SCM M and scope S = hX, CXi, a *mixed policy* π (or a *policy*, for short) contains for each X a decision rule πX|CX , where πX|CX : XX × XCX 7→ [0, 1] is a proper probability mapping.4 We will say that such a policy π *follows* the scope S, written π ∼ S. A mixed policy is said to be *deterministic* if every decision is a deterministic function of its contexts. Once a policy is selected, we would have a new causal structure, described by a *scoped graph*. Definition 3 (Scoped graph). The *scoped graph* GS is obtained by G, by replacing, for each decision X ∈ X(S), all inbound edges to X with edges C → X for every C ∈ CX. We only consider scopes for which GS is acyclic. We will designate one real-valued variable Y 6∈ X(S) ∪ C(S) as the outcome node (also called the "utility" variable). To calculate the expected utility under a policy π ∼ S, let C− = SX∈X(S) CX \ X(S) be the non-action contexts. Then, the expected utility is: µπ,S =Py,x,c− yPx(y, c −)QX∈X(S) π(x|cx). When the scope is obvious, we will simply write µπ. This paper is concerned with materiality - whether removing one context variable from one decision will decrease the expected utility attainable by the best policy. We define it in terms of the value of information (Howard, 1990; Everitt et al., 2021). Definition 4 (Value of Information). Given an SCM M and scope S, the *maximum expected utility* (MEU) is µ ∗ S = maxπ∼S µπ,S . The *value of information* (VoI) of context Z ∈ CX for decision X ∈ X(S) is µ ∗ S − µ ∗ SZ6→X , where SZ6→X is defined as hX0, CX0 iX0∈X(S)\{X} ∪ hX, CX \ {Z}i. The context Z is *material* for X in an SCM M if Z has strictly positive value of information for X, otherwise it is *immaterial*. 4Following Lee & Bareinboim (2020), we term this a "mixed policy" due to its including mixed strategies. Note that game theory also has a distinction between "mixed" policies, where the decision rules share a source of randomness, and "behavioural" policies, where they do not, and in this sense, the "mixed" policies of Lee & Bareinboim (2020) are actually *behavioural*. ## 2.3 **Graphical Criteria For Independence** Knowing when variables are independent is an important step in identifying immaterial contexts, as we will discuss in the next section. So, we will make repeated use of d-separation, a graphical criterion that establishes the independence of variables in a graph. Definition 5 (d-separation; Verma & Pearl, 1988). A path p is said to be d-separated by a set of nodes Z if and only if: 1. p contains a collider X → W ← Y such that the middle node W is not in Z and no descendants of W are in Z, or 2. p contains a chain X → W → Y or fork X ← W → Y where W is in Z, or 3. one or both of the endpoints of p is in Z. A set Z is said to d-separate X from Y , written (X ⊥G Y | Z), if and only if Z d-separates every path from a node in X to a node in Y . Sets that are not d-separated are called d-connected, written X 6⊥G Y | Z. When the graph is clear from context, we will write ⊥ in place of ⊥G. When sets X,W, Z satsify X ⊥ W | Z they are conditionally independent: P(X,W | Z) = P(X | Z)P(W | Z) (Verma & Pearl, 1988). If we know that a deterministic mixed policy is being followed, then we may deduce further conditional independence relations. This is because conditioning on variables V may determine some decision variables, which are called "implied" (Lee & Bareinboim, 2020), or "functionally determined" (Geiger & Pearl, 1990), making them conditionally independent of other variables in the graph. Definition 6 (Implied variables; Lee & Bareinboim, 2020). To obtain the *implied variables* dZe for variables Z in G given a mixed policy scope S, begin with dZe ← Z, then add to dZe every decision X such that CX ⊆ dZe, until convergence. ![4_image_0.png](4_image_0.png) Figure 2: A graph where decisions Z, X jointly determine the outcome Y . A policy node πX is shown, which decides the decision rule at X. For example, in Figure 2, we see that dXe = {*Z, X*}, so Z is d-separated from Y given dXe. This means that under a deterministic mixed policy, Z and Y are statistically independent given X. This has implications for materiality. In particular, it means that the best deterministic mixed policy Z = *z, X* = x does not need to observe Z at X. Moreover, the performance of the best deterministic mixed policy can never be surpassed by a stochastic policy ((Lee & Bareinboim, 2020, Proposition 1)), so Z is immaterial. ## 3 **Review Of Graphical Criteria For Materiality** We will now review some existing techniques for proving whether or not a graph is compatible with some variable Z being material for some decision X. ## 3.1 **Single-Decision Settings** In the single-decision setting, there is a sound and complete criterion for materiality: in a scoped graph G(S), there exists an SCM where the context Z ∈ CX is material if and only if Z 6⊥ Y | CX ∪ {X*} \ {*Z} and the outcome Y is a descendant of X (Lee & Bareinboim, 2020; Everitt et al., 2021). This statement can be split into proofs for the *only if* and if directions, both of which are relevant to the current paper. The argument for the *only if* is that if X is not an ancestor of the outcome Y , then its policy is completely irrelevant to the expected utility, and so all of its contexts are immaterial, and if Z is conditionally independent of the outcome Y given the decision and other observations, then it may be safely ignored without changing the outcome. These arguments are important to us because they remain equally valid as we move to a multi-decision setting - a context must be an ancestor of Y ,and must provide information about Y over and above the other contexts, in order to be material. ![5_image_1.png](5_image_1.png) (a) The Everitt et al. (2021); Lee & Bareinboim (2020) scheme, with a red info path that lacks colliders, and the control path shown with a thick dark red arrow. ![5_image_0.png](5_image_0.png) (b) The Everitt et al. (2021); Lee & Bareinboim (2020) scheme, with a red info path that contains colliders, and the control path shown in dark red. Figure 3: Three decision problems where Z is material for X. For readability, we marginalise out exogenous variables from the SCM, so z ∼ U(B) can be understood as shorthand for z = εZ where εZ ∼ U(B), and so on. ![5_image_2.png](5_image_2.png) (a) The Everitt et al. (2021) scheme is applied using just the red info path; Z is immaterial for X. (b) The van Merwijk et al. (2022) scheme is applied, using the red and blue info paths; Z is material for X. Figure 4: Two decision problems on a soluble graph. The if direction is proved by constructing a decision problem where Z is material. By assumption, there is a directed path X 99K Y , called the *control path*, and a path Z - - - Y , active given CX ∪ {X*} \ {*Z}, called the *info path*. In the SCM that is constructed, the variable Z will contain information about Y (due to a conditional dependency induced by the info path), and this will inform X regarding how to influence Y (using influence that is transmited along the control path). The construction has two cases, which differ based on whether or not the info path contains colliders (Everitt et al., 2021; Lee & Bareinboim, 2020). For the case where it does not contain colliders, the graph and construction are shown in Figure 3a. (Note that when the info path is a directed path, we take this to be a special case where V = Z.) The functions along the info path (dashed line) are chosen to copy V to PaY and to Z, and Y equals its maximum value of 1 only if X equals V , and 0 otherwise. So, X must copy Z to achieve the maximum expected utility. Without the context Z, the maximum expected utility is 0.5, proving materiality.5 5To be precise, the formalism of Lee & Bareinboim (2020) also allows the active path from Z to include one or more bidirected edges V ↔ Y , but to deal with these cases, we begin with the distribution that we would use for a path V ← L → Y , then marginalise out L. For the case where the info path does contains a collider, the graph and construction from Everitt et al. (2021); Lee & Bareinboim (2020) are shown in Figure 3b. Each fork Uiin the info path, along with Z, generates a random bit, while each collider Wiis assigned the XOR (Ui−1 ⊕ Ui) of its two parents. By observing z and the values w1:N , the agent has just enough information to recover uN . In particular, hte policy that sets x equal to the XOR of z and w1:N , obtains x = uN and achieves the MEU, E[Y ] = 1. Without the context Z, the MEU becomes 0.5, so Z is material. ## 3.2 **Soluble Multi-Decision Settings** This approach has been generalised to deal with multi-decision graphs that are *soluble* (also known as graphs that respect "sufficient recall"). To recap, a graph is said to be soluble if there is an ordering ≺ = hX1*, . . . , X*N i over decisions such that at for every Xi, for every previous decision or context V ∈ {Xj ∪ CXj | j ≺ i}, we have V 6∈ Anc(Y ) or V ⊥ Y | {Xi} ∪ CXi . That is, past decisions and contexts do not contain any information that is relevant for a later decision, and unknown at the time that this later decision is made. For example, in Figure 4a, using the ordering X ≺ X0, the nodes *Z, X* are d-separated from Y by X0 and its contexts {Z, Z0, W}, which implies solubility. For soluble graphs, there exists a complete criterion, for discerning whether a non-decision context Z is material for a decision X. If X lacks a *control path* (a directed path to Y ), or Z lacks an info path (a path to Y , active given C \ {Z}), then Z is immaterial. Conversely, if in a graph, every X decision has a control path, and each context Z has an info path, then every context is material in some decision problem with that causal structure (van Merwijk et al., 2022, Theorem 7).6 For example, in the graph of Figure 4a, every decision is an ancestor of Y , and every context has an info path, (the info paths include Z → Y , Z 0 → W0 ← U 0 → Y , and W0 ← U 0 → Y ), so, all contexts may be material in at least one decision problem with this causal structure. It will be important for us to understand what obstacles can arise in proving materiality in multi-decision graphs, such as was required in proving (van Merwijk et al., 2022, Theorem 7). For example, suppose that we seek to construct a decision problem where Z is material for the graph in in Figure 4 Suppose that we apply the single-decision construction of Everitt et al. (2021) to this graph. First, we would identify the info path Z → Y and the control path X → Z 0 → X0 → Y . The info path has no colliders, so we will construct a decision problem using the scheme from Figure 3a, and the result is shown in Figure 4a. The idea of this construction is that X should have to copy Z in order for the value z transmitted by the info path to match the value x 0transmitted by the control path. We see, however, that whatever action x is selected, the decision X0can assume the value z, thereby achieving the MEU. The MEU is then achievable whether Z is a context of X or not, so Z is immaterial in this construction. In order to render Z material, we must adapt the construction from Figure 4a by incentivising X0to pass along the value of Z 0. To this end, we will use the second info path Z 0 → W0 ← U 0 → Y , shown in Figure 4b. We add a term y2 := Ju 0[x 0[0]] = x 0[1]K to the reward, which equals 1 if X0 presents one bit from U 0, along with its index. We then set W0 = U 0[Z 0], so that X0 knows only the Z 0th bit of U 0, and since the index z 0is one bit, we let U 0 be two bits in length, i.e. U 0 ∼ U(B 2). Finally, rather than requiring z = x 0 as in Figure 4a, we now include the term y1 := Jz = x 0[0]K, because Z 0 will be the zeroth term of X0. In the resulting model, the utility is clearly Y = 2 in the non-intervened model, and to achieve this utility, the MEU, we must have Y1 = Y2 = 1 with probability 1. To maximise y2, the decision X0 must reproduce the only known digit from U 0, i.e. x 0 = hz 0, u0[z 0]i. To maximise y1, we must have Z = X0[0] almost surely, and since X0[0] = X, this requires X = Z with probability 1. This can only be done if Z is a context of X, meaning that Z is material for X. There is a general principle here - if a control path for X, such as X → Z 0 → X0 → Y , contains decisions other than X, then we need to incentivise the downstream decision to copy information along the control path, and this will be done by choosing values for variables lying on the info path for X0(the one shown in blue in Figure 4b); we will revisit this matter in our main result. 6In full generality, the result allows an info path to terminate at another context, rather than at Y . This detail is not pertinent to the methods used to derive our main result in Section 4, although we do consider this scenario in Section 5. ## 3.3 **Multi-Decision Settings In Full Generality** Once the solubility assumption is relaxed, there are some criteria for identifying immaterial variables, but it is not yet known to what extent these criteria are necessary, in that materiality is possible whenever they are not satisfied. The simplest criteria for immateriality are those that carry over from the single-decision case: - If a decision X is a non-ancestor of Y , then its contexts are immaterial, - If C ⊥ Y | CX \ {C}, then the context C is immaterial. But suppose that we have a graph where neither of these criteria is satisfied. Then on some occasions, we can still establish immateriality, using the more sophisticated criterion of Lee & Bareinboim (2020, Theorem 2). The assumptions of this criterion are split across: Lee & Bareinboim (2020, Lemma 1) and Lee & Bareinboim (2020, Theorem 2) itself. Lee & Bareinboim (2020, Lemma 1) establishes that if some target variables Z, target actions X0, and latent variables U0satisfy certain separation conditions, then they may be factorized in a favourable way. Lee & Bareinboim (2020, Theorem 2) then proves that under some further assumptions, the contexts Z are immaterial to the decisions X0. in this paper, our focus is exclusively on the assumptions of Lee & Bareinboim (2020, Lemma 1), and we term them "LB-factorizability", after the authors' initials. Lee & Bareinboim (2020, Theorem 2) does not feature in our analysis, but for completeness sake, it is reproduced in Appendix A. Definition 7. For a scoped graph GS , we will say that target actions X0, endogenous variables Z disjoint with X0, contexts C0:= CX0 \ (X0 ∪ Z) and exogenous variables U0 are *LB-factorizable* if there exists an ordering ≺ over V 0:= C0 ∪ X0 ∪ Z such that: I. (Y ⊥ πX0 | d(X0 ∪ C0)e), II. (C ⊥ πX0≺C , Z≺C , U0| d(X0 ∪ C0)≺C e), for every C ∈ C0 and III. V 0 ≺X is disjoint with Desc(X) and subsumes Pa(X) for every X ∈ X0, where πX0 consists of a new parent πX added to each variable X ∈ X0, and W≺V , for W ⊆ V 0, denotes the subset of W that is strictly prior to V in the ordering ≺. For example, consider the graph Figure 2. In this case, Y ∈ Desc(X) and Z 6⊥ Y | X, so the single-decision criteria cannot establish that Z is immaterial for X. However, by choosing Z = {Z}, X0 = {X}, and the ordering ≺= h*Z, X*i, we have that: I. the outcome Y is d-separated from πX by dXe, (since Z is a decision that lacks parents, we actually have dXe = {*Z, X*}), II. the contexts C0 are an empty set, so (II) is trivially true, and III. V 0 ≺X = Z, and Z is disjoint with Desc(X) and Z ⊇ Pa(X) so Z and X0 are LB-factorizable. As shown in Appendix A, the assumptions of Lee & Bareinboim (2020, Theorem 2) are also satisfied, enabling us to deduce that Z is immaterial for X, matching the ad hoc analysis of this graph in Section 2. ## 4 **Main Result** 4.1 **Theorem Statement And Proof Overview** The goal of this paper is to prove that condition (I) of LB-factorizability is necessary to establish immateriality. More precisely, we prove that if condition (I) is unsatisfiable for all observations in the graph, then the graph is incompatible with materiality. It might initially seem unnecessarily stringent to assume that this holds for all observations, rather than the context Z0 for which we are trying to prove materiality. Recall from Figure 4b, however, that proofs of materiality are recursive - to prove that Z material for X, we incentivised X to copy Z, and to do this, we had to incentivise X0 has to pass on the value of Z 0. To do this, we needed to assume that other contexts and decisions (such as Z 0 and X0) have their own info paths and control paths, not just Z and X. So, in our theorem below, assumption (C) requires that (I) holds for all contexts. Assumptions (A) and (B) are also necessary for a graph to be compatible with materiality, because their negation implies immateriality, as per the single-decision criteria discussed in Section 3.1. Theorem 8. If, in a scoped graph GS *, for every* X ∈ X(S) A. X ∈ AncGS (Y ), B. ∀C ∈ CX : (C 6⊥GS Y | ({X} ∪ CX \ {C})), and C. for every decision X and context Z ∈ CX in GS , (πX 6⊥GS Y | d(X(S) ∪ CX(S)\{Z}) \ {Z}e)*, where* πX *is a new parent of* X, then for every X0 ∈ X(S) and Z0 ∈ CX0 , there exists an SCM where Z0 *is material for* X0. We will prove this result in three stages, across the next three sections. - In Section 4.2, we prove that for any scoped graph satisfying the assumptions of Theorem 8, for any context Z0 ∈ CX0 , there exist certain paths, which we will call the *materiality paths*. - In Section 4.3, we use the materiality paths to define an SCM for this scoped graph, which we will call the *materiality SCM*. - In Section 4.4, we will prove that in the materiality SCM, Z0 is material for X0. ## 4.2 **The Materiality Paths** To prove materiality, we will begin by selecting info paths and a control path, similar to what was described in Section 3.2 and illustrated in Figure 4b. One difference, however, is that these paths must allow for the case where we are proving the value of remembering a past decision. We will first describe how to accommodate this case in Section 4.2.1 then define a set of paths for our proof in Section 4.2.2. ## 4.2.1 **Paths For The Value Of Remembering A Decision** One distinction between our setting and that of van Merwijk et al. (2022) is that we may need to establish the value of remembering a past decision, for example, the value of remembering Z0 in Figure 5. In this graph, the procedures of Everitt et al. (2021) and van Merwijk et al. (2022) are silent about whether we should choose the info path Z0 → Y , and construct the graph Figure 5a, or choose the info path Z0 ← U → Y , and construct the model depicted in Figure 5b. In the first case, we have Y = 1 if x0 = z0, i.e. the decision X0 is required to match the value of a past decision Figure 5a. Then, the MEU of 1 can be achieved with a deterministic policy such as Z0 = 1, X0 = 1, and Z0 is immaterial for X0. To understand this in terms of the paths involved, The problem is that the info path Z0 → Y doesn't include any parents of Z0, so Z0 is *implied* by values outside the info path, and Z0 → Y is rendered inactive given dUe. This means that observing Z0 can no-longer provide useful information about how to maximise Y . In the second case, Y = 1 if x0 = u, i.e. the decision X0 must match the value of a random Bernoulli variable U Figure 5b. U is directly observed only by Z0, and so in optimal policy, X0 must observe the decision z0, as is the case in the optimal policy z0 = *u, x*0 = z0, and so Z0 is material for X0. The info path Z0 ← U → Y does include a parent U of Z0, and so Z0 is no-longer *implied* by values outside the info path, and the path Z0 ← U → Y remains active given d∅e. Thus Z0 may still provide useful information about Y . For our proof, we need a general procedure for finding an info path that contains a non-decision parent for every decision. Condition (C) of Theorem 8 is useful, because it implies the presence of a path from Z to Y that is active given d(X(S) ∪ CX(S)\Z) \ Ze. Any fork or chain variables in this path will not be decisions, otherwise they would be contained in dX(S) \ Ze, which would make them blocked given d(X(S) ∪ CX(S)\Z) \ Ze. This deals with the possibility of decisions anywhere except for the endpoint Z. But how can we ensure that the info path contains a non-decision parent for Z, if it is a decision? We can ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 5: Two SCMs, with models constructed using different (red) info paths. use condition (C) again, because it implies that every context that is a decision must have a non-decision parent. Lemma 9. If a scoped graph G(S) *satisfies the condition(C) of Theorem 8, then for every context* Z ∈ CX where Z, X ∈ X(S) are decisions, there exists a non-decision N ∈ CZ \ d(X(S) ∪ CX(S)\{Z}) \ {Z}e. Intuitively, this is because condition (C) states that there is an active path from Z to Y , given a superset of dX(S) \ {Z}. If all of the parents of Z are decisions, then we would have Z ∈ dX(S) \ {Z}, and every path would be blocked, and condition (C) could not be true. Proof of Lemma 9. Assume that there is no such non-decision N, i.e. CZ ⊆ d(X(S) ∪ CX(S)\{Z}) \ {Z}e, and that πX 6⊥ Y | d(X(S) ∪ CX(S)\{Z}) \ {Z}e, (by condition (C) of Theorem 8), and we will prove a contradiction. From CZ ⊆ d(X(S)∪CX(S)\{Z})\ {Z}e, we deduce that Z ∈ d(X(S)∪CX(S)\{Z})\ {Z}e (by the definition of dWe), and then there can be no active path from πX to Y given d(X(S)∪CX(S)\{Z})\{Z*}e ⊇* CZ ∪ {Z}, contradicting condition (C) of Theorem 8, and proving the result. This tells us that for any decision Z there is an edge Z ← N. Moreover, by condition(C) of the main result, we know that there is an info path from N to Y . By concatenating the edge and the path, we obtain a path from Z to Y , which we will prove is active given d(X(S) ∪ CX(S)\{Z}) \ {Z}e. This is precisely the kind of info path that we are looking for: activeness given d(X(S) ∪ CX(S)\Z) \ Ze means that forks and chains will not be decisions, and we know that the endpoint Z has a non-decision parent N. Lemma 10. If a scoped graph G(S) *satisfies assumptions (B-C) of Theorem 8, then for every edge* Z → X between decisions Z, X ∈ X(S), there exists a path Z ← N - - - Y , active given d(X(S)∪ CX(S)\{Z}) \ {Z}e, (so N 6∈ d(X(S) ∪ CX(S)\{Z}) \ {Z}e). Some care is needed in proving that the segment N - - - Y is active given d(X(S)∪ CX(S)\{Z}) \ {Z}e, rather than just d(X(S) ∪ CX(S)\{N}) \ {N}e, and the detail is presented in Lemma 10. ## 4.2.2 **Defining The Materiality Paths** We will now describe how to select finitely many info paths, along with a control path, as shown in Figure 6. The assumptions of Theorem 8 allow there to be any finite number of contexts and decisions, so we will designate the target decision and context (whose materiality we are trying to establish) as X0 := X and context Z0 := Z. We know from condition (A) that X0 is an ancestor of Y , so we have a directed path X0 99K Y . We also know that Z0 has a chance node ancestor, because it either is a chance node, or it has a chance node parent, from Lemma 10. So we will call that chance node ancestor, A, and define a *control path* of the form A 99K Z0 → X0 99K Y , shown in black in Figure 6, where A 99K Z0 has length of either 0 or 1. Other paths are then chosen to match this control path. We will index the decisions on the control path as Ximin *, . . . , X*imax , and their respective contexts are Zimin *, . . . , Z*imax . where imin is either 0 (if Z0 is a chance node), or −1 (if Z0 = X−1). In general, we allow for the possibility that Zi = Xi−1 for any of the decisions. We define an info path mi for each context Zi, which must satisfy the desirable properties established in Lemma 9. To help with our later proofs, it is also useful to define an intersection node Ti, at which the info path departs from the control path, and a truncated info path m0i , which consists of the segment of mi that is not in the control path. Recall from Figure 3b and Figure 4b that information from collider variables can play an important role in incentivising a decision to copy information from its context. So, for each collider Wi,j in each info path mi we define an auxiliary path ri,j : Wi,j 99K Y . ![10_image_0.png](10_image_0.png) Figure 6: The set of paths proven to exist by Lemma 11 are red, green and blue. In each case, the point of departure of the active path from the (black) directed path is designated by Ti. In full generality, each path may begin either as Zi L99 Ti ← · (as in red), or as Zi L99 Ti → · (green, blue). Collectively, we refer to the control, info and auxiliary paths as the *materiality paths*. Lemma 11. Let G(S) be a scoped graph that contains a context Z0 ∈ CX0 *and satisfies the assumptions of* for Theorem 8. Then, it contains the following: - A **control path***: a directed path* d : A 99K Z0 → X0 99K Y , where A is a non-decision, possibly equal to Z0, and d contains no parents of X0 *other than* Z0. - *We can write* d as A 99K Zimin → Ximin 99K *· · ·*Z0 → X0 99K Zimax → Ximax 99K Y, imin ≤ i ≤ imax, where each Zi is the parent of Xi along d *(where* A 99K Zimin and Xi−1 99K Zi are allowed to have length 0). Then, for each i, define the *info path*: m0i : Zi - - - Y *, active given* d(X(S) ∪ CX(S)\Zi ) \ Zie, that if Zi is a decision, begins as Zi ← N (so N ∈ CZi \ d(X(S) ∪ CX(S)\Zi ) \ Zie.) - Let Ti *be the node nearest* Y in m0i : Zi - - - Y (and possibly equal to Zi*) such that the segment* Zi m0i - - - Ti of m0i is identical to the segment Zid L99 Ti of d. Then, let the *truncated info path* mi be the segment Ti m0i - - - Y . - *Write* mi as mi: Ti 99KWi,1 L99Ui,1 99KWi,2 L99Ui,2 · · ·Ui,Ji 99KY , where Ji *is the number of* forks in mi. (We allow the possibilities that Ti =Wi,1 so that mi *begins as* Ti L99 Ui,1*, or that* Ji = 0 so that mi is Ti 99K Y .) Then, for each i and 1 ≤ j ≤ Ji, let the **auxiliary path** *be any directed path* ri,j : Wi,j 99K Y from Wi,j to Y . The proof was described before the lemma statement, and is detailed in Appendix B.2. ## 4.3 **The Materiality Scm** We will now show how the materiality paths can be used to define an SCM where Z0 is material for X0. As with the seleciton of paths, the construction of models will have to differ a little from the constructions of Sections 3.1 and 3.2, in order to better deal with insolubility. So we will first describe how we deal with insoluble graphs, in Section 4.3.1 , then define a general model in Section 4.3.2. ## 4.3.1 **Models For Insoluble Graphs** Certain graphs that are allowed by Theorem 8 violate solubility, and the constructions from Everitt et al. (2021) and van Merwijk et al. (2022) will need to be altered in order to establish materiality in these graphs. The assumption of solubility meant that upstream decisions could not contain latent, actionable information - in particular, this implied if an info path mi contains a context C for a decision X0 ∈ X(S) \ {Xi}, ![11_image_0.png](11_image_0.png) Figure 7: Two SCMs (a-b), and a description of a family of SCMs, where each dashed line represents a path. The repeated exponent expn 2 (k) is defined as k if n = 0, and 2 exp n−1 2(k)otherwise. then V would have to be context of Xi, otherwise the past decision V would contain latent information that is of import to Xi (van Merwijk et al., 2022, Lemma 28). For example, in Figure 7a the red info path contains the variable W1, which is a context for X0 but not for X0, and solubility is violated because W1 ⊥ Y | {Z0, X0, X1} but it satisfies all the three conditions of Theorem 8. We can nonetheless apply the construction from (van Merwijk et al., 2022) to this graph, by treating X0 as through it was a non-decision. This yields the decision problem shown in Figure 7a, which is example of the construction from Figure 7c), except that there is a decision X0that observes Z0 and W1. In this model, the outcome Y is equal to 1 if x0 is equal to u1. The intended logic of this construction is that since W1 = Z0 ⊕Uq, the MEU can be achieved with the non-intervened policy X0 = Z0 ⊕W1, which would require X0 to depend on Z0. In this model, however, there exists an alternative policy where X0 = U1 and X0 = X0, which achieves the MEU of 1, without having X0 directly depend on Z0, and proving that Z0 is immaterial for X0. Essentially, the single bit of X0sufficed to transmit the value of U1, meaning that Z0 contained no more useful information. So long as the decision problem allows X0can do this there can be no need for X0 to observe Z0. So in order to exhibit materiality, we need the domain of X0to be smaller than that of U1. As such, we can devise a modified scheme, shown in Figure 7b. In this scheme, two random bits are generated at U1. The outcome is Y = 1 if X1 supplies one bit from U1 along with its index. A random bit is sampled at Z0, and W1 presents the Z0 th bit from U1, while X1 has a domain of just one bit. Then, similar to our previous discussion of Figure 4b, the only bit from U1 that X0 can reliably know is the Z0 th bit. Hence the only way achieve the MEU is for X0to inform X0 about the value of W1, and for X0 to equal X0 = hZ0, X0i. Importantly, this can only be done if X0 observes Z0; it is material for X0. In Figure 7b, if x1 produces the z0 th bit from u1, i.e. x1 = hz0, u1[z0]i, we will call it *consistent* with hz0, u1i. If it produces any bit from u1, then we will call it *compatible* with hz0, u1i. For instance, either h0, 0i or h1, 1i is compatible with z0 = 0 and u1 = 01, but only the former is consistent with z0 = 0 and u1 = 00. We can generalise these concepts to a case of multiple fork variables, rather than just Z0 and U1. For example, Figure 7c, we have J + 1 fork variables U0:J , which sample bitstrings of increasing length. Then, Z0 = Wu, and each collider Wi has Wi = Uj [Uj−1]. The outcome Y will still check whether X0 is compatible with UJ , but it will do so using a more general definition, as follows. Definition 12 (Consistency and compatibility). Let w = hw0, w1*, . . . , w*J i where w0 ∈ B k and wn ∈ B for n ≥ 1. Then, w is consistent with u = hu0, . . . , uJ , ui ∈ B expi2 (k)i (i.e. w ∼ u) if w0 = u0 and wn = un[un−1] for n ≥ 1. Moreover, w is *compatible with* uJ ∈ B expJ 2 (k)(i.e. w ∼ uJ ) if there exists any u0*, . . . , u* : J − 1 such that w is consistent with u0*, . . . , u*J . In Figure 7b, if, with positive probability, the assignment of X0 is inconsistent with hz0, u1i, then the decisionmaker is also penalised with strictly positive probability. For instance, if the assignments z0 = 0 and u1 = 01 lead to the assignment x = h1, 1i, then this policy will achieve utility of y = 0 given the assignments y0 = 0 and u1 = 00, since they cause the values z0 = 0 and w1 = 0, which will cause the assignment x = h1, 1i, which is not consistent with z0 = 0 and u1 = h0, 0i. We find that the same is true in the more general mode of Figure 7c. If with strictly positive probability, the assignment of X0 is inconsistent with u0:J , then there will exist an alternative assignment U0:J = u 0 0:J , that produces the same assignments to the observations of X0, but where X0 is not compatible with u 0 J . Lemma 13. Let w = hw0, . . . , wJ i and w¯ = hw¯0, . . . , w¯J i *be sequences with* w0, w¯0 ∈ B k, wj , w¯j ∈ B for j ≥ 1*, and let* J 0 ≤ J *be the smallest integer such that* wJ0 6= ¯wJ0 . Let u0, . . . , uJ0 *be a sequence* where uj [uj−1] = wj for 1 ≤ j < J0. Then, there exists some uJ0+1, . . . , uJ such that w *is consistent with* u0, . . . , uJ , but w¯ *is incompatible with* uJ . The proof is deferred to Appendix B.5. This result implies that an optimal policy in Figure 7c, x0 must be consistent with u0:J with probability 1. After all, the non-intervened policy clearly achieves the MEU of 1, being that it is consistent with u0:J , and consistency implies compatibility. On the other hand, if x0 is inconsistent with u0:J with strictly positive probability, then there will exist an alternative assignment u 0 0:J that produces the same assignment x0, and since the variables U0:J have full support, this will lead to y = 0 will strictly positive probability, and decrease the expected utility. If a policy cannot copy Z0 without observing it, then this will make X0 inconsistent with u with strictly positive probability, so this policy will not be optimal. One may notice that by setting U0 to contain k bits rather than just one, this will make it very difficult for Z0 to copy the value of Z0 without observing it, if a sufficiently large k is chosen. We will develop a fully formal argument for materiality in Section 4.4. ## 4.3.2 **A Decision Problem For Any Graph Containing The Materiality Paths** We will now generalise the constructions from Figure 3a (for a truncated info path is a directed path) and Figure 7c (for a truncated info path that is not a directed path) to an arbitrary graph containing the materiality paths described in Lemma 11. To begin with, let us note that the materiality paths may overlap. So our general approach will be to define a random variable V pfor each variable in a path p. To derive the overall materiality SCM, we will simply define V by a cartesian product over each Vp. For the outcome variable Y , we will instead take a sum over each Y p. For any set of paths p, we define V p = ×p∈pV p. Let us now discuss the control path. The initial node A will sample a bitstring that is passed along the control path, and through each intersection node Tiin particular. To describe this, we will rely on shorthand. Definition 14 (Parents along paths). When a vertex V has a unique parent V¯ along p, Pa(V p) = V¯ p, and for a set of paths p 0, let Pa(V p 0) = ×p∈p0 Pa(V p). For a collider V in a truncated info path mi: Ti - - - Y , let the parent nearer Ti along mi be PaL(V ), and the parent nearer Y be PaR(V ). For example, a non-outcome child V of A along the control path will be assigned V d = Pa(V d). Each info path must pass on information from upstream paths that traverse the intersection node. We therefore use the notation pi to refer to the set of control and auxiliary paths that enter the intersection node Ti. We also devise an extended notion of parents Pa∗to include this information. Relatedly, we will define a notion of parents for the auxiliary path, which includes information from the collider Wi,j of the info path, and a notion of parents for the paths pi, that includes the exogenous parent EA of A. Definition 15 (Extended parent relations). For a truncated info path mi, let: $$\text{Pa}^{*}(V^{m_{i}})=\begin{cases}T_{i}^{T_{i}}&\text{if Pa}(V^{m_{i}})=T_{i}^{m_{i}}\\ \text{Pa}(V^{m_{i}})&\text{otherwise}\end{cases},\text{and Pa}_{l}^{*}(V)=\begin{cases}T_{i}^{T_{i}}&\text{if Pa}_{L}(V^{m_{i}})=T_{i}^{m_{i}}\\ \text{Pa}_{L}(V_{l}^{m_{i}})&\text{otherwise}\end{cases}.$$ . For an auxiliary path $r_{i,j}$, let $\text{Pa}^*(V^{r_{i,j}})=\begin{cases}W^{m_i}_{i,j}&\text{if}\text{Pa}(V^{r_{i,j}})=W^{m_i}_{i,j}\\ \text{Pa}(V^{r_{i,j}})&\text{otherwise}\end{cases}$. ![13_image_0.png](13_image_0.png) Figure 8: The materiality SCM: a general SCM where Z0 is material for X0. Finally, let: $\text{Pa}^*(V^{\mathbf{p}_i})=\begin{cases}\mathcal{E}_A\times\text{Pa}(V^{\mathbf{p}_i})&\text{if}V\text{is}A\\ \text{Pa}(V^{\mathbf{p}_i})&\text{otherwise}\end{cases}$. . In other respects, the materiality SCM will behave in a similar manner to previous examples. For instance, when miis directed, the outcome Y mi will evaluate whether the values Pa(Y pi ) (which mostly come from Xi) are equal to Pa(Y mi ), which come from the info path. When miis not directed, the outcome Y mi will evaluate whether the values from Pa(Y pi,ri,0:J ) are compatible with those from Ui,J . So let us now define the materiality SCM as follows. Definition 16 (Materiality SCM). Given a graph containing the materiality paths, we may define the following random variables. In the control path, d : A 99K Y , let: - the source be Ad = E A dwhere E A d∼ U(B k) where k is the smallest positive integer such that 2 k > (k + c)bc, where b is the maximum number of variables that are contexts of one decision, b := maxX∈X(S)|CX|, and c is the maximum number of materiality paths passing through any vertex in the graph; - every non-endpoint V have V d = Pa(V d). In each truncated info path that is directed, mi: Ti 99K Y , let: - the intersection node T mi have trivial domain; - each chain node be V mi = Pa∗(V mi ) - the outcome have the function fY mi (paY ) = Jpa(Y pi ) = pa∗(Y mi )K. In each truncated info path that is not directed, Ti - - - ← Wi,1 → · · · ← Wi,J 99K Y , let: - each fork be W mi i,j =E W mi i,j , E W mi i,j ∼U(B exp j 2 (k+|pi|−1)) where |pi| is the number of paths in pi; - each chain node be V d = Pa∗(V d); - each collider be V mi = PaR(V mi )[Pa∗L(V mi )]; ![14_image_0.png](14_image_0.png) (a) The intersection node T1 is a chance node. ![14_image_1.png](14_image_1.png) (b) The intersection node T1 is a decision. The contexts of X0 are divided into C m1 X0 (the parent along the info path), and C ¬m1 X0(the other parents). Figure 9: The cases where the intersection node T1 is a chance node, or a decision - each intersection node be T mi i = Pa(V mi )[Pa∗(T pi i)] if the info path begins as Ti → ·, otherwise it has empty domain; - the outcome have the function fY mi (paY ) = Jpa(Y pi,ri,1:Ji ) is compatible with pa∗(Y )K. In each auxiliary path ri,j : Wi,j → V2 99K Y , let: - each chain node have V ri,j = Pa∗(v ri,j ). - each source Wi,j have trivial domain Then, let the *materiality SCM* have outcome variable Y =Pimin≤i≤imax Y mi, and non-outcome variables V = ×p∈{d,mi,ri,1:Ji |imin≤i≤imax} V p. Note that this defines an SCM because each variable is a deterministic function of only its endogenous parents and exogenous variables. We have define the materiality SCM so that decisions behave just as non-decisions, which always do what is required to ensure that Y mi = 1. Lemma 17. In the non-intervened model, the materiality SCM has Y = imax − imin + 1*, surely.* The proof follows from the model definition, and is supplied in Appendix B.4. We also know that each utility term Y miis upper bounded at one, so in order to obtain the MEU, each Y i must equal 1, almost surely. Lemma 18. If a policy π *for the materiality SCM, has* P π(Y mi <1)>0 for any i*, the MEU is not achieved.* Proof. We know that E π[Y ] = Pimin≤i≤imax Y mi (Definition 16), so for all i, Y mi ≤ 1 always. So, if P π(Y mi < 1) > 0 for any i, then E π[Y ] < imax − imin + 1, which underperforms the policy that is followed in the non-intervened model (Lemma 17). ## 4.4 **Proving Materiality In The Materiality Scm** We will now prove that in the materiality SCM, if Z0 is removed from the contexts of X0, then the performance for at least one of the utility variables Y miis compromised, and so the MEU is not achieved. The proof divides into two cases, based on whether the child of X0 along the control path is a non-decision (Section 4.4.1) or a decision (Section 4.4.2). ## 4.4.1 Case 1: Child Of X0 Along D **Is A Non-Decision.** If the child of X0 along the control path is a non-decision and Z0 is not a context of X0, we will prove that E[Y m0] < 1. In this case, either X0 is the last decision in the control path, or otherwise there must exist an intersection node T1, as shown in Figure 9a. If the former is true, then it is immediate that the value x0 is transmitted to Y along the control path, based on the model definition. As such, Y0 can directly evaluate the decision X0. For the latter case, we want an assurance that downstream decisions will pass along the value of X, as was the case in Figure 4b. Such an assurance is provided by the following lemma, which shows that whenever an intersection node Tiis a chance node - as is T1 - the value tiis transmitted to Y by every optimal policy. Lemma 19 (Chance intersection node requirement). If in the materiality SCM, where Ti is a chance node, a policy π has P π(Pa(T pi i) = Pa(Y pi )) < 1*, then* P π(Y mi < 1) > 0. First, we prove the case where miis a directed path. In this case, mi copies the value t pi to Y , which Y mi checks against the value pa(y pi ) received via the control path. Maximising Y mi then requires them to be equal. Proof of Lemma 19 when mi *is a directed path.* We have fY mi (paY mi ) = Jpa(Y mi ) = pa(Y pi ))K (Definition 16). Also, Pa(Y mi ) = T pi i = Pa(T pi i) surely, where the first equality follows from Definition 16, while the second follows from Definition 16 and Ti being a chance node. So, if P π(Pa(Y pi ) = Pa(T pi )) < 1 then P π(Y mi = 1) < 1. We now prove the case where miis a directed path. In this case, if the assignment pa(Y pi ) transmitted along the control path differs from the value pa(T pi i) that came in to the intersection node Ti, then just as we established for Figure 7c, there will exist an assignment ui,1:Ji to the fork nodes in mi that gives an unchanged assignment to colliders vi,1:Ji , but where pa(Y pi ) is incompatible with uJi . Proof of Lemma 19 when mi *is not a directed path.* Let us index the forks and colliders of mi as Ti - - - Vi,1 L99 Ui,1 99K Wi,1 L99 · · · Wi,Ji L99 Ui,Ji 99K Y . Choose any assignments pa(T pi i) 6= pa(Y pi ) that occur with strictly positive probability. Then, there must also exist assignments Pa(Y pi,ri,1:Ji ) = pa(Y pi,ri,1:Ji ), Ui,1:Ji = u1:Ji , and Wi,1:Ji = w1:Ji such that P π(pa(T pi i), pa(Y pi,ri,1), tpi i ,u1:Ji , w1:Ji ) > 0. By Lemma 13, there also exists an assignment Ui,1:Ji = u 0 1:Ji such that pa(T pi i), w1:Ji is consistent with ``` u 0 1:Ji , and pa(Y p i ), pa(Y ri,1:Ji ) is incompatible with u 0 Ji . Now, consider the intervention do(Ui,1:Ji = u 0 1:Ji ). ``` Since Tiis a chance node, every collider in miis a non-decision, and is assigned the (unique) value consistent with pa(T pi i),u 0 1:Ji . Furthermore, pa(T pi i), w1:Ji is consistent with pa(T pi i),u 0 1:Ji , so the intervention does not affect the assignments to these colliders. Moreover, from Definition 16, no variable outside of miis affected by assignments within mi, except through the colliders. Therefore: $P^{\pi}(\mbox{pa}(Y^{p_{i}}),\mbox{pa}(Y^{r_{i,1:J_{i}}}),\mbox{Pa}(Y^{m_{i}})=u^{\prime}_{J_{i}}\mid\mbox{do}(\mathbf{U}_{i,1:J_{i}}=\mathbf{u^{\prime}_{1:J_{i}}}))>0$ $:P^{\pi}(Y^{m_{i}}=0\mid\mbox{do}(\mathbf{U}_{i,1:J_{i}}=\mathbf{u^{\prime}_{1:J_{i}}}))>0$ $\mathbf{u^{\prime}}(\mathbf{U^{\prime}})=(\mathbf{U^{\prime}})\mathbf{u^{\prime}}(\mathbf{U^{\prime}})\mathbf{u^{\prime}}(\mathbf{U^{\prime}})$ $$(\mathrm{pa}(Y_{1}^{\boldsymbol{\prime}_{1}}),\mathrm{pa}(Y^{\boldsymbol{\prime}_{1}\ldots\boldsymbol{\prime}_{d_{1}}})\text{not compatible with}u_{J_{i}}^{\prime})$$ $$(\boldsymbol{U}_{i,1:J_{i}}=\boldsymbol{u}_{1:J_{i}}^{\prime})>0$$ $$(\boldsymbol{U}_{i,1:J_{i}}\text{are unconfounded,so}P^{\boldsymbol{\pi}}(\boldsymbol{V}\,|\,\mathrm{do}(\boldsymbol{U}_{i,1:J_{i}}=\boldsymbol{u}_{1:J_{i}}^{\prime}))=P^{\boldsymbol{\pi}}(\boldsymbol{V}\,|\,\boldsymbol{U}_{i,1:J_{i}}=\boldsymbol{u}_{1:J_{i}}^{\prime}))$$ $$(P^{\boldsymbol{\pi}}(\boldsymbol{u}_{i,1:J_{i}})>0).$$ ) > $(\mathrm{pa}(Y^{\pmb{p}}_i),\mathrm{pa}(Y^{\pmb{r}_i,1:J_i})$ not compatible with $u'_{J_i})$. If miis not a directed path, then this requirement extends to the values pa(Y ri,1:Ji ) passed down the auxiliary paths, not just the value pa(Y pi ) from the control path. Specifically, pa(Y pi ), pa(Y ri,1:Ji ) must be consistent with pa(Y pi ),ui,1:Ji , where ui,1:Ji denotes the values of forks on the info path. $\square$ Lemma 20 (Collider path requirement). If the materiality SCM has an info path mi that is not directed, and under the policy π *there are assignments Pa*(Y pi,ri,1:Ji ) =pa(Y pi,ri,1:Ji ) *to parents of the outcome, and* U mi i,1:Ji =u mi i,1:Ji to the forks of mi*, with* P π(pa(Y pi,ri,1:Ji ),u mi i,1:Ji ) > 0 *and where pa*(Y pi,ri,1:Ji ) is inconsistent with pa(Y pi ),u mi i,1:Ji , then P π(Y mi < 1) > 0. The idea of the proof, similar to Lemma 19, is that whenever the bits transmitted along the auxiliary paths deviate from the values wi,1:Ji of colliders in mi, there exists an assignment u 0 i,1:Ji to forks in mi that will render the colliders, and hence the decision xi unchanged, while making xiincompatible with uJi , and thereby producing Y mi < 0. A detailed proof is in Appendix B.5. In order to prove that the context Z0 is needed, we will also need to establish that it is not deterministic, even if it is a decision. In the case where Z0 is a decision, the idea is that random information is generated at A, which each of the decisions are required to pass along the control path. We are able to prove this as a corollary of Lemma 19. Lemma 21 (Initial truncated info path requirements). If π *in the materiality SCM does not satisfy:* P π(Pa(Y d) = Ad) < 1. then the MEU is not achieved. Proof. From Lemma 11, the control path d begins with a chance node. So, the first decision Ximin in d must have a chance node Zimin as its parent along d. Furthermore, the intersection node Timin must be an ancestor of Zimin along d, so it is also a chance node. So it follows from Lemma 19, that any policy π must satsify P π(T pimin imin = Pa(Y pimin )) = 1 if it attains the MEU. As Timin is in the control path, we have d ∈ pimin (Lemma 11) so T d imin a.s. == Pa(Y d) is also required. Moreover, all of vertices in the segment A 99K Timin of d are chance nodes, because Ximin was defined as the first decision in d, and Timin precedes it. And, each chance variable V d on the control path equals its parent Pa(V d) (Definition 16), so Ad = T d imin , and thus Ad a.s. == Pa(Y d) is required to attain the MEU. We can now combine our previous results to prove that it is impossible to achieve the MEU, if Z0 is not a context of X0, in the case where T1 does not exist, or is a non-decision. Lemma 22 (Required properties unachievable if child is a non-decision). Let M *be a materiality SCM* where the child of X0 along d *is a* non-decision. Then, the MEU for the scope S *cannot be achieved by a* deterministic policy in the scope SZ06→X0 (equal to S, except that Z0 *is removed from* CX0 ). The logic is that if child of X0 in the control path is a non-decision, then the value of X0 is copied all the way to Pa(Y d) (Lemma 21). Furthermore, Z d 0 a.s. == Pa(Y d) is necessary to achieve the MEU (Lemma 19). But the materiality SCM has been constructed so that the non-Z0 parents of X0 do not contain enough bits to transmit all of the information about Z d 0 , so the MEU cannot be achieved. The proof is detailed in Appendix B.6. ## 4.4.2 Case 2: Child Of X0 Along D **Is A Decision.** If the child of X0 along d is a decision, as shown in Figure 9b, we will prove that the decision X0 must depend on Z0 in order to achieve E[Y1] = 1. This will be because without Z0, X0 will be limited in its ability to distinguish all of the possible values of the first fork node Ui,1 of m1. To establish this, we will need to conceive of a possible intervention to the fork nodes in mi, that Xi would have to respond to, and so we begin by proving that relatively few variables will be causally affected by certain interventions. Lemma 23 (Fork information can pass in few ways). *If, in the materiality SCM:* - the intersection node Ti *is the vertex* Xi−1, - πTi is a deterministic decision rule where πTi (c ¬mi (Ti, ui,1) = πTi (c ¬mi (Ti, u0i,1 )) *for assignments* ui,1, u0i,1 to the first fork variable, and c ¬mi (Ti) to the contexts of Ti not on mi*, and* - Wi,1:Ji = wi,1:Ji , and Ui,2:Ji = ui,2:Ji are assignments to forks and colliders in mi where each ui,j consists of just wi,j *repeated* exp j 2 (k + |pi| − 1) *times, then:* P π(pa(Y pi,ri,1), c ¬mi(Ti), wi,1:Ji ,ui,2:Ji |do(ui,1))=P π(pa(Y pi,ri,1), c ¬mi(Ti), wi,1:Ji ,ui,2:Ji |do(u 0 i,1 )). The proof follows from the definition of the materiality SCM, and it is detailed in Appendix B.7. We can now prove that if a deterministic policy does not appropriately distinguish assignments to Ui,1, then the i th component of the utility will be suboptimal E[Y mi] < 1. Lemma 24 (Decision must distinguish fork values). *If in the materiality SCM:* - the intersection node Ti is the vertex Xi−1*, and* - π is a deterministic policy that for assignments ui,1, u0i,1 to Ui,1 *where* ui,1 6=u 0 i,1 , has πTi (c ¬mi(Ti), ui,1)=πTi (c ¬mi(Ti), u0i,1 ) *for every* C ¬mi Ti(Ti)=c ¬mi(Ti), $$(\dagger)$$ then P π(Y mi < 1) > 0 The idea of the proof is that if ui,1 and u 0 i,1 differ, there will be some assignment pa(Y pi ) such that ui,1[pa(Y pi )] and u 0 i,1 [pa(Y pi )] differ. When Pa(Y pi ) = pa(Y pi ) and ui,1, then Pa(Y ri,1 ) to assume one value. But if we intervene u 0 i,1 , ui,2:Ji , then the value of Pa(Y ri,1 ) will be incorrect, making Pa(Y pi,ri,1:Ji ) inconsistent with Pa(Y pi, Ui,1:Ji ) so the maximum expected utility is impossible to achieve. The details are deferred to Appendix B.8. This will allow us to prove that when the child of X0 along d is a decision, the MEU cannot be achieved without Z0 as a context of X0. Lemma 25 (Required properties unachievable if child is a decision). Let M be the materiality SCM for some scoped graph GS , where imax > 0 and T1 is a decision. Then, there exists no deterministic policy in the scope SZ06→X0 that achieves the MEU. To prove that no deterministic policy in SZ06→X0 can achieve the MEU (achievable with the scope S), we will show that if a deterministic policy π satisfies P π(Pa(Y d) = Ad) = 1, as required by Lemma 21, then the domain of X0 × C ¬m1 X0is smaller than the domain of C m1 X0 , so Equation (†) will be satisfied, and thus the MEU cannot be achieved. A detailed proof is presented in Appendix C. We now combine the lemmas for the two cases to prove the main result. Proof of Theorem 8. Any scoped graph G(S) that satisfies assumptions (A-C) contains materiality paths for the context Z0 of X0 (Lemma 11), and has a materiality SCM (Definition 16) compatible with G(S). In this decision problem, whether the child of X0 along d is or is not a decision, the MEU cannot be achieved by a deterministic policy unless X0 is allowed to depend on Z0 (Lemmas 22 and 25). And stochastic policies can never surpass the best deterministic policy ((Lee & Bareinboim, 2020, Proposition 1)), so no such policy can achieve the MEU, and so Z0 is material for X0. ## 5 **Toward A More General Proof Of Materiality** So far, via Theorem 8 we have established the necessity of condition (I) of LB-factorizability for immateriality. We now outline some steps toward evaluating the necessity of conditions (II-III) of LB-factorizability, and the further condition in (Lee & Bareinboim, 2020, Thm. 2). To begin with, condition (III) requires that we choose an ordering ≺, such that the parents of each decision X are somewhere before X, while the descendants are somewhere afterwards. Clearly this condition can be satisfied for any acyclic graph, so it instead Conditions (II-III) are individually not very restrictive, but are jointly substantial. So a natural next step is to try to prove that conditions (II-III) are necessary, by defining some info paths and control paths for graphs that violate conditions (II-III), defining a materiality SCM, and proving materiality in that SCM. So far, however, we have only been able to carry out the first step - defining the paths - and difficulties have arisen in using those paths to define an SCM that exhibits materiality. In this section, we will outline what info paths and control paths can be proven to exist, and then outline the difficulties in using them to prove materiality. ## 5.1 **A Lemma For Proving The Existence Of Paths** When the variables Z, X0, C0, U are not factorizable, we can prove the existence of info and control paths. Lemma 26 (System Exists General). Let GS be a scoped graph that satisfies assumptions (A,B) from Theorem 8. If Z = {Z0}, X0 ⊇ Ch(Z0), C0 = CX0 \ (X0 ∪ Z), U = ∅ are not LB-factorizable, then there exists a pair of paths to some C 0 ∈ C0 ∪ Y : - an info path m : Z0 *- - -* C 0, active given dX0 ∪ C0e*, and* - *a control path* d : X 99K C 0 *where* X ∈ X0. A proof is supplied in Appendix D.1. The intuition of this proof is that each of the conditions (I-III) implies a precedence relation between a pair of variables in V 0 ∪ Y . Each of these precedence relations can be used to build an "ordering graph" over V 0 ∪Y . If the ordering graph is acyclic, then we can let ≺ be any ordering that is topological on the graph, and then Z, X0, C0, U are LB-factorizable. Otherwise, we can use a cycle in the graph to prove the existence of an info path and a control path. By iterating through these cycles, we can obtain a series of info paths and control paths that terminate at Y . The resulting paths are in some cases, quite useful for proving materiality. For instance, we can recover the pair of info and control paths used in Figure 4b. To prove that Z is material for X, we can start by choosing X0 = {*X, X*0}, C0 = {Z 0}, C0 = {Z 0, W}, and U0 = ∅. Then, Lemma 26 implies the existence of an active path from Z to some DescX ∩ C0, so we see that the first info path is the edge Z → Z 0. With Z 0 being a descendant of X, we also have the first control path, X → Z 0. We must then obtain some paths that exhibit why Z 0is itself useful for the decision X to know about, and to influence. To do this, we can reapply Lemma 26 using the sets X0 = {X0}, Z = {Z 0}, C0 = {W}, and U0 = ∅. We then obtain the new info path Z 0 → W ← U → Y , and the new control path Z 0 → X0 → Y . The SCM in Figure 4b uses these paths to prove Z material for X. ## 5.2 **A Further Challenge: Non-Collider Contexts** In some graphs, it is not clear how to use the info and control paths Lemma 26 to prove materiality, because non-collider nodes on the info path may be contexts. (In previous work, this possibility was excluded by the solubility assumption (van Merwijk et al., 2022, Lemma 28).) We will now highlight one case, in Figure 10, where it is relatively clear how this challenge can be overcome, and one case, Figure 11, where it is unclear how to make progress. In the graph of Figure 10, we would like to prove that Z0 is material for X0. Using Lemma 26, we can obtain the red and blue info paths as shown, and the corresponding control paths in darker versions of the same colors. In the approach of Definition 16, shown in Figure 10a, X0 should need to observe Z0 in order to know which slice from V is presented at its parent X1. Then, X1 would play two roles, one for the red info path, and one for the dark blue control path. As a collider on the red info path, its role is to present the Z th 0 bit from V . As the initial endpoint of the blue control path, so its role is to copy the assignment of Z0. The problem, however, is that X0 then does not need to observe Z0 in order to reproduce its value, because this value is already observed at X1, so Z0 is not material. To remedy this problem, we can construct an alternative SCM, where the value of Z0 is "concealed", i.e. it is removed from the other contexts, CZ0 \ Z0. At X1, we directly remove Z0, leaving this decision with a domain of only one bit. At C, we impose some random noise, so that it is not always a perfect copy of Z0. The result is shown in Figure 10b. When this model is not intervened, an expected utility of E[Y ] = 10.99 is achieved, because the red term in Y always equals 10, while the blue term has an expectation of 0.99. (This is the MEU, because there is no way to improve the blue term to have expectation 1 without decreasing the expectation of the red term by at least 0.05.) If instead, Z0 is removed as a context for X0, then the expected utility can only be as high as E[Y ] = 10.95. To understand this, restrict our attention to deterministic policies, and note that in order for the red term to be better than a coin flip (with an expected value of 5), we would either need to have X0 = h*C, X*1i - and the red term will have an expectation of 9.95, or we must have X1 = V [0] and X0 = h0, X1i - and then the blue term will have an expectation of 0.5. In either case, performance is worse than 10.99, so Z0 is material for X0. ![19_image_0.png](19_image_0.png) Figure 10: Two alternative models that use the same two info paths, red and blue. The problem is that concealing the value of Z0 does not work for all graphs. To see this, let us add two decisions, X2 and X3, to the graph from Figure 10, to thereby obtain the graph in Figure 11. Let us retain the materiality SCM from Figure 10b, except that X2 and X3 copy the value from C along to Y . One might expect that Z0 should still be material, but it is not. Now, there is a policy that achieves the new MEU of 11 by superimposing the value of Z0 on the assignments of decisions X2 and X3. In this policy π, x1 = v[z0], x2 = z0 ⊕ z0, x3 = x2 ⊕ z0, and x0 = x2 ⊕ x3 = z0 where ⊕ represents the XOR function. Under π, the red term equals 10 always, while the blue term always equals 1, i.e. the MEU is achieved, and π is a valid policy even if Z0 is not a context of X0, meaning that Z0 is not material for X0. In summary, whenever Z 3 Z0, X0 3 X0, C0, U are not LB-factorizable, then we can find some info and control paths for Z0 and X0, but then X0 can recover the value of Z0, making it possible to achieve the MEU even when Z0 is removed as a context of X0. In some graphs, we can devise an alternative SCM that conceals the value of Z0. But in others, a policy can superimpose the information from Z0 on other decisions, such as X2 and X3 in Figure 11, so that X0 can recover the value of Z0, making Z0 immaterial for X0 once again. It seems that new insights are needed to solve this superimposition problem, and that therefore that we will need new insights to establish a complete criterion for materiality in insoluble decision problems. ![19_image_1.png](19_image_1.png) Figure 11: A model with zero VoI ## 6 **Conclusion** We have found that in a graph whose contexts cannot satisfy condition (I) of LB-factorizability, any context can be material. We encountered some new problems for materiality proofs, and devised appropriate solutions: - if the variable Zi whose materiality we are trying to establish is a decision, whose value can be determined by other available contexts, - then we must choose a different info path so that nonobserved variables would be needed to determine the value of Zi - if the info path begins with a context of multiple decisions, - then we must construct the SCM differently along the info path - if the control path contains consecutive decisions, then we require more bits to be copied along the control path, so that not all of these bits can be copied along alternative paths. As a next step towards establishing a complete criterion for materiality, we then considered the more general setting where no context can jointly satisfy conditions (I-III) of LB-factorizability. In this setting, it is possible to identify info paths and control paths for a target context Z0 and decision X0, and to apply our SCM construction to these paths. However, there may exist policies that transmit the assignment of Z0 through alternative paths, and that achieve the MEU even when Z0 is removed as a context of X0. Although there exist ways of concealing the information about Z0 from a descendant decision Xi 0 *, i < i*0, there can also be other ways that information about Z0 may be transmitted, such as transmitting this information in other decisions, undermining materiality once again. Thus, the challenge of proving a complete criterion of materiality for insoluble graphs currently remains open. ## References Tom Everitt, Ryan Carey, Eric Langlois, Pedro A Ortega, and Shane Legg. Agent incentives: A causal perspective. In *AAAI*, 2021. Sebastian Farquhar, Ryan Carey, and Tom Everitt. Path-specific objectives for safer agent incentives. arXiv preprint arXiv:2204.10018, 2022. Dan Geiger and Judea Pearl. On the Logic of Causal Models. *Machine Intelligence and Pattern Recognition*, 9:3–14, 1990. Joseph Halpern and Max Kleiman-Weiner. Towards formal definitions of blameworthiness, intention, and moral responsibility. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Ronald A Howard. From influence to relevance to knowledge. Influence diagrams, belief nets and decision analysis, 1990. Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. Counterfactual Fairness. In *NIPS*, 2017. Sanghack Lee and Elias Bareinboim. Characterizing optimal mixed policies: Where to intervene and what to observe. *Advances in Neural Information Processing Systems*, 33, 2020. Judea Pearl. *Causality: Models, Reasoning, and Inference*. Cambridge University Press, 2 edition, 2009. ISBN 9780521895606. Chris van Merwijk, Ryan Carey, and Tom Everitt. A complete criterion for value of information in soluble influence diagrams. *arXiv preprint arXiv:2202.11629*, 2022. Thomas Verma and Judea Pearl. Causal Networks: Semantics and Expressiveness. In *Uncertainty in* Artificial Intelligence (UAI), pp. 69–78, Amsterdam, The Netherlands, 1988. North-Holland Publishing Co. Francis Rhys Ward, Matt MacDermott, Francesco Belardinelli, Francesca Toni, and Tom Everitt. The reasons that agents act: Intention and instrumental goals. *arXiv preprint arXiv:2402.07221*, 2024.
# Transfer Learning Across Datasets With Different Input Dimensions: An Algorithm And Analysis For The Linear Regression Case Anonymous authors Paper under double-blind review ## Abstract With the development of new sensors and monitoring devices, more sources of data become available to be used as inputs for machine learning models. These can on the one hand help to improve the accuracy of a model. On the other hand, combining these new inputs with historical data remains a challenge that has not yet been studied in enough detail. In this work, we propose a transfer learning algorithm that combines the new and historical data with different input dimensions, which is especially beneficial when the new data is scarce. We focus the approach on the linear regression case, which allows us to conduct a rigorous theoretical study on the benefits of the approach. Our approach achieves state-of-the-art performance on several real-life datasets, outperforming other linear transfer learning algorithms and performing comparably to non-linear ones. In addition, we prove that our approach is robust against negative transfer learning assuming that the new inputs are normally distributed, and confirm its robustness empirically also on real-world data distributions. ## 1 Introduction The constant evolution of sensor technology and measuring equipment brings ever more data sources that can be employed by machine learning practitioners to build better predictive models. In the healthcare domain, for example, new ICT equipment is installed and starts generating new sensory data that helps doctors to make better diagnostics (Sheng & Ling, 2006). Similarly, in the predictive maintenance domain, new sensors are developed and installed to help monitoring the state of industrial equipment. In both cases, it is desirable to update the predictive model or to train a new one so that it can make use of the new inputs. However, the data collection can be expensive and time-consuming, taking a long time before it is feasible to train a new model. This can be seen as a transfer learning problem where there are two datasets: the historical data with plenty of samples but without the new input features and the newly collected data with fewer samples, but with all available input features (Figure 1). The goal then is to use the historical data as the source dataset to improve the prediction accuracy using all inputs, which are observed only in the target dataset. We refer to this setup as "incremental input transfer learning". Research in transfer learning and domain adaptation has put forward methods for learning a target task with limited data available by using data from other similar tasks referred to as source tasks. For example, recent transfer learning works (WEI et al., 2018) yield state-of-the-art accuracy on classifying images from a target domain given only a small portion of target data. These works have considered mainly two variations of this setting: transfer across different tasks and transfer across different input domains (Pan & Yang, 2010). For the latter case, most research works assume that the inputs have the same dimension and come from different distributions, exploiting some semantic similarity between the domains (Chen et al., 2021; 2015; Obst et al., 2021; Mousavi Kalan et al., 2020; Zhuang et al., 2021). Although these works assume arbitrary distributions for source and target domains, it is non-trivial to show that their results generalize when the input dimensions are different. Other works use mapping functions (Yang et al., 2016; Moon & Carbonell, 2017; He et al., 2020; Yan et al., 2018; Wei et al., 2019) to cast the data into a new domain where the ![1_image_0.png](1_image_0.png) Figure 1: The historical and the new datasets represented as a transfer learning problem where the target differs by the amount of inputs. source and target data are compatible, but we argue that for this kind of approach to work it requires some exploitable relationship between historical and new features, whereas our approach also works when there is no such relationship or when it is weak. In this paper, we study the incremental input problem theoretically in its linear regression version. We summarize our contributions in the following items: - We provide an efficient and easy to implement transfer learning approach for this problem which is especially helpful when the new input data is scarce; - We show through rigorous theoretical study that the approach is robust against negative transfer learning1 and we prove an upper bound for its generalization error; - We confirm empirically the robustness of our approach using real-life data and show that it outperforms other linear mapping based transfer learning approaches, and performs on par with more complex state-of-the-art non-linear approaches. ## 2 Related Work Transfer Learning: Studies in transfer learning seek to learn a target task with limited data by using large amounts of data from source datasets with similar labels or input features (Pan & Yang, 2010). In the linear regression case, Obst et al. (2021) propose the notion of transfer gain and a statistical test to tell when using a source dataset to pre-train a model is better than training a model from scratch. They assume that the model should be fine-tuned by using gradient descent on the target data, but often this approach is less efficient than the least-squares method since it requires carefully selecting the step size and number of iterations, and can lead to sub-optimal solutions. Chen et al. (2015) present a linear regression method that combines two datasets: one is small and unbiased, and the other is large and biased. They study in detail the conditions when their approach is beneficial as a function of the amount of bias in the large dataset. Other works focus on the theoretical properties of transfer learning: Mousavi Kalan et al. (2020) show the minimax bounds for transfer learning based on a notion of true distributions where the source and target datasets are sampled from; and Hanneke & Kpotufe (2019) propose a new discrepancy metric between source and target data and prove the minimax bounds for learning a classifier according to their metric. Their setting differs from ours mainly because we assume that one or more variables are not observed in the large dataset, while they assume that all variables are present in both datasets. 1According to the transfer gain definition from Obst et al. (2021). Heterogeneous Transfer Learning: This branch of transfer learning concerns the problem where the inputs in the source and the target domains differ in feature dimensionality (Zhuang et al., 2021). Here, the main focus of existing approaches is on learning a feature mapping from the source and target datasets into a homogeneous feature space where all data can be combined to train the final model. They split in supervised (Yan et al., 2018; Moon & Carbonell, 2017) and unsupervised (Yang et al., 2016; He et al., 2020; Wei et al., 2019). In the supervised case, Yan et al. (2018) uses class labels to improve the distribution alignment of the source and target data in the new feature space. On top of learning the feature mapping, Moon & Carbonell (2017) also uses an attention mechanism to weight the source instances based based on their corresponding classification accuracy in a joint label space. We do not compare to these methods since they require classification labels, and we focus on regression tasks. In the unsupervised case, some heterogeneous transfer learning approaches rely on instance-correspondence (IC) between pairs of source and target data to learn the feature mapping (Yang et al., 2016; He et al., 2020). For example, for text sentiment classification tasks where source and target data are in different languages, a document in the target language corresponds to its translation in the source language (Yang et al., 2016). However, this kind of data pairs do not exist in many domains (i.e. medical records or predictive maintenance data), rending IC approaches unfeasible in those cases. Another unsupervised approach by Wei et al. (2019) assumes a common subset of features among source and target data and uses it to estimate the value of the missing target-specific features through a feature mapping function. The mapping is obtained by minimizing the following objective: minW kf(x hist T, W) − x new T k 2 + αMMD(f(xS, W), xnew T) + βkWk 2 where x hist Tand x new Tare the historical features and the new features, respectively, which are observed in the target dataset, while xS is the source dataset where only the historical features are observed and W are the DSFT parameters. The first term corresponds to a least-squares regression from the historical features to the new ones, the second term is the maximum mean discrepancy (MMD) between the features predicted for the source dataset and the ones observed in the target dataset. The last term is a simple L2 regularization of the parameters. They propose two versions to apply their method: one where f is just a linear mapping (DSFTl) and a kernelized variation of it (DSFTnl), and they present closed-form solutions for each version. In the incremental input learning case, our final goal is to learn a predictor of the labels using all the target features. This approach works as a preprocessing step to fill up the source dataset, but if the mapping between historical and new features is not significantly represented in the data or too difficult to learn (i.e. when the target data is limited and/or the source data is very large), DSFT can introduce extra noise and hurt the performance of the predictor. Nevertheless, we compare DSFT with our approach in our experiments in the later sections. Incremental attribute learning: The problem of learning a model when the number of inputs is changing has been studied previously by Guan & Li (2001). They propose an algorithm to automatically update a neural network when new inputs are discovered. However, as their work's main focus is to propose a new algorithm and evaluate it empirically, there is no study about its theoretical properties. In addition, they assume sufficient data to train a neural network from scratch using the new features, which may, in many cases, mean that the historical data is no longer necessary. Also similar to incremental learning, Hou et al. (2017); Zhang et al. (2020) study the problem of online learning where the data comes in streams and features are added and removed over time. Their setting assumes that only one data point is observed at each time and while the previously streamed data are lost, making their approach significantly less applicable to our transfer learning problem. Missing data: Another way of approaching the incremental learning problem is by treating the new features as missing in the source dataset and solving it by using missing data techniques, widely documented by Little & Rubin (2019). The most commonly used approaches in this domain, however, are designed for when the proportion of missing data is small, while in this paper we focus on the case where the vast majority of data points have the same missing features. ## 3 Defining A Linear Model For The Source And The Target Datasets We are concerned with the problem of learning a model based on two datasets: the historical data and the newly collected data containing extra features. We will refer to the first one as the source dataset and to the second one as the target dataset since we want to tackle this problem using a transfer learning approach. We choose linear regression for this study because it allows us to derive closed-form solutions that are easier to inspect and gain insights into the nature of the problem. The labels in both datasets represent the same linear regression task and follow the same prior distribution. In this section, we want to give a formal definition of these datasets and their conditional distributions assuming the context of linear regression. These definitions will be used throughout the remainder of the paper. We define the target dataset as (xT ,YT ), where xT ∈ R nT ×dT is a full rank design matrix containing nT independent observations of dT input features and YT ∈ R nT is a random vector containing the respective labels. They are related by the linear model YT = xT θ + wT , where θ is the dT -dimensional parameter vector describing the linear relationship between inputs and labels, and wT ∼ N (0nT , σ2InT ) is additive Gaussian noise. Our goal is to learn the parameter vector θ. In addition, we define the source dataset as (xS,YS), where xS ∈ R nS×dS is a full rank matrix containing nS observations of dS input features (dS < dT ) and YS ∈ R nS represents the random vector of the labels. For this dataset, we assume that the relationship between the labels and inputs is YS = xSθ 0 + X00θ 00 + wS, where θ 0 and θ 00 correspond respectively to the first dS and last dT −dS components of θ and X00 is a nS ×(dT −dS) random matrix with independent entries such that X00 ij ∼ N (0, 1). Again we assume additive Gaussian noise wS ∼ N(0, σ2InS ), independent of wT . We choose these assumptions to emulate the idea that in the source dataset we can observe only part of the features available in the target dataset (dS out of dT ), and these features should have the same influence on the label for both datasets, represented by θ 0. The unobserved features are then replaced by random values so their influence on the label can also be taken into account by the model. In the later sections, we show empirically that the assumption about X00 can be relaxed to other distributions. By simple probability manipulations we can derive the distribution of the source labels YS as: $$Y_{S}\sim{\mathcal{N}}(x_{S}\theta^{\prime},(\sigma^{2}+\|\theta^{\prime\prime}\|^{2})I_{n_{S}})$$ $\left(1\right)$. ) (1) A first insight that we can gain from this formalization is that, for the source dataset, the unobserved features X00 add up with the noise wS of the source label, increasing its variance by kθ 00k 2. Based on this insight, we will refer to the noise variance of the source labels as σ 2 S = σ 2 + kθ 00k 2. Basic estimator: As we want to measure the performance improvement from using the source dataset, we select a model which uses only the target data as a baseline. We select the ordinary least-squares (OLS) estimator, which is computed by minimizing the residual sum of squares of the target data: RT (θ) = kYT − xT θk 2; therefore, it is defined as ˆθT = (x > T xT ) −1x > T YT . We will also refer to it as the *basic estimator*. We choose the OLS as a baseline because it is the maximum likelihood estimator of θ using (xT ,YT ) and is widely used to solve linear regression problems. Two known characteristics of the OLS estimator are that it is unbiased: E[ ˆθT ] = θ; and its variance has a simple analytical form: Var( ˆθT ) = σ 2(x > T xT ) −1. These are important because they permit us to make a variance comparison with the transfer learning approach that we introduce in the following section. At this point, we have formally defined the source and target datasets and their distributions, as well as a baseline model. With these definitions, we can now introduce a transfer learning approach for the incremental input problem. ## 4 Data-Pooling Estimator Based on the previously defined source and target datasets, we want to define a transfer learning approach that combines both to produce a better model than the basic estimator. We do so by defining a loss that is a function of both datasets and deriving its solution which leads to our data-pooling estimator. We also study some properties of this solution and link it to the maximum likelihood approach, which leads to an efficient way of estimating the necessary hyperparameters. In the end, we put all these findings together into a transfer learning algorithm for the incremental input setting. ## 4.1 The Data-Pooling Loss $$\alpha(\theta)=\alpha_{S}\mathcal{R}$$ A common transfer learning approach is to learn by minimizing the convex sum of errors in both datasets (Ben-David et al., 2010). In the linear regression case, this approach has been referred to as data-pooling (Chen et al., 2015; Obst et al., 2021). We define our data-pooling loss Rα(θ) as the weighted sum of the losses over the source and the target datasets (respectively RS and RT ), where α = (αS, αT ) is the vector of positive weights: Rα(θ) = αSRS(θ) + αT RT (θ) (2) Intuitively, when αT αS, the error in the target dataset is scaled by a factor larger than the error in the source dataset and the solution obtained by optimizing Rα will be closer to the basic estimator. On the other hand, when αT αS, then RS will dominate the loss, and the optimal solution will distantiate from the basic estimator. In the incremental input case, we need to make sure that the parameters related to the new inputs will not influence the error calculated on the source dataset. To achieve that, we define RS(θ) = kYS − xSI >θk 2, where I describes a dT × dS matrix such that Iii = 1 and Iij = 0 when i 6= j. In practice, xSI > can be interpreted as filling up the missing dimensions of xS with zeros. By replacing the definitions of RS and RT in Equation 2 we obtain: $${\mathcal{R}}_{\alpha}(\theta)=\alpha_{S}\|Y_{S}-x_{S}\mathbb{I}^{\top}\theta\|^{2}+\alpha_{T}\|Y_{T}-x_{T}\theta\|^{2}$$ 2(3) Proposition 4.1 *Given a source and a target dataset as defined in Section 3, the data-pooling loss* Rα(θ) is convex for any choice of αS, αT ∈ R +*. Therefore it has a unique minimizing solution which is defined by:* $$\left(2\right)$$ $$\tau(\theta)$$ $\quad(3)$ . $${\hat{\theta}}_{\alpha}=(\alpha_{S}\mathbb{I}x_{S}^{\top}x_{S}\mathbb{I}^{\top}+\alpha_{T}x_{T}^{\top}x_{T})^{-1}(\alpha_{S}\mathbb{I}x_{S}^{\top}\mathbf{Y}_{S}+\alpha_{T}x_{T}^{\top}\mathbf{Y}_{T})$$ $$\left(4\right)$$ T YT ) (4) The result of proposition 4.1 (proved in the supplemental material) gives a direct form of computing the data-pooling estimator ˆθα and an analytical formula which can also be used for further analysis of the method. Based on it, and what is known about the distributions of YT and YS, we are able to derive closed forms of the expected value and the variance of ˆθα. Proposition 4.2 The data-pooling estimator is an unbiased estimator of θ *(proof in the supplemental material). Its expectation and variance are:* $$\begin{array}{c}{{\mathbb{E}[\hat{\theta}_{\alpha}]=\theta}}\\ {{V a r(\hat{\theta}_{\alpha})=M^{-1}(\alpha_{S}^{2}\sigma_{S}^{2}\mathbb{I}\Sigma_{S}\mathbb{I}^{\top}+\alpha_{T}^{2}\sigma^{2}\Sigma_{T})M^{-1}}}\end{array}$$ $$\left({\bar{5}}\right)$$ $$({\mathfrak{f}}{\mathfrak{o}})$$ 2ΣT )M−1(6) where ΣS = x > S xS, ΣT = x > T xT and M = αSIΣSI > + αT ΣT . The fact that ˆθα is unbiased guarantees that it converges to the real parameters θ, regardless of the choice of α. Its variance, however, is influenced by α, so we are interested in selecting the hyperparameter α in a way that the variance is minimal. The relationship between α and Var( ˆθα) is complex, as Equation 6 shows, so choosing it is not trivial. ## 4.2 The Relationship Of Data-Pooling And Maximum Likelihood A natural way of estimating θ is by maximizing the likelihood of the source and target labels and observations given that we know their probability distributions and also the distribution of the unobserved features X00. By looking at the negative log-likelihood function of the labels YS and YT given the observations xT and xS, we arrive at the following equation: $${\cal L}=\frac{1}{2\sigma_{S}^{2}}\|{\mathbf{Y_{S}}}-x_{S}\theta^{\prime}\|^{2}+\frac{1}{2\sigma^{2}}\|{\mathbf{Y_{T}}}-x_{T}\theta\|^{2}+\frac{n_{S}}{2}\log(\sigma_{S}^{2})+\frac{n_{S}+n_{T}}{2}\log(2\pi)+\frac{n_{T}}{2}\log(\sigma^{2})\tag{7}$$ Finding the maximum likelihood estimator (MLE) of θ by minimizing the equation above is a complex nonlinear problem. We observe that if we take the data-pooling loss Rα with αT = 1 σ2 and αS =1 σ 2 S , then we Algorithm 1 Data-pooling estimator Input: source dataset (xS, yS), target dataset (xT , yT ) Initialization: x¯T = 0dT x¯jT ← 1 nT PnT i=1 xijT for *j > d*S. xiT ← xiT − x¯T , for i ∈ [1*, ..., n*T ]. ˆθT ← (x > T xT ) −1x > T yT . ˆθS ← (x > S xS) −1x > S yS. αS ← (nS − dS)/kyS − xS ˆθSk 2. αT ← (nT − dT )/kyT − xT ˆθT k 2. Compute ˆθα with Equation 4. can rewrite Equation 7 as: $${\mathcal{L}}={\frac{1}{2}}{\mathcal{R}}_{\alpha}(\theta)+{\frac{n_{S}}{2}}\log(\sigma_{S}^{2})+{\frac{n_{S}+n_{T}}{2}}\log(2\pi)+{\frac{n_{T}}{2}}\log(\sigma^{2})$$ $$({\boldsymbol{\delta}})$$ 2) (8) This means that, in this case, the data-pooling approach is a reasonable way to approximate the MLE of θ by minimizing a simpler criterion. It suggests that αT =1 σ2 and αS =1 σ 2 S might be an optimal choice for the hyperparameter α. ## 4.3 The Data-Pooling Algorithm From a practical point of view, the data-pooling estimator cannot be computed directly since it depends on variables that in reality are unknown, namely σ 2 S and σ 2. Nevertheless, in order to apply it in practice, these variables can be estimated separately as σˆ 2 S = kyS − xS ˆθSk 2/(nS − dS) and σˆ 2 = kyT − xT ˆθT k 2/(nT − dT ). Another practical impairment for the result above is that it relies on the assumption that the new features follow a normal distribution with zero mean (E[X00] = 0). This is especially important for the data-pooling loss in Equation 3, where we fill up the lacking observations in the source dataset with zeros (xSI >). We approach this issue by shifting the observations of the new features in the target data by their estimated mean. It is important to notice that by using this trick, we are also changing the resulting estimate of the bias θ0. It means that, after computing the data-pooling estimator, to use it for predictions on new data, it is necessary to apply the same shift to that data. To sum up all these steps, we describe the algorithm to compute ˆθα from the observations of source and target data in Algorithm 1. In short, the data-pooling approach is a simple and computationally cheap way of approximating the maximum likelihood estimator of θ by combining historical data and new observations. In the following section, we want to study the benefit of using the source dataset by comparing our approach to the basic estimator. ## 5 Transfer Gain Of Data-Pooling The transfer gain is a measure defined in (Obst et al., 2021) to assess whether a transfer learning model which uses both the target and the source datasets performs better than the basic estimator which uses only the target dataset. It is measured as the difference between the generalization error of the basic estimator and that of the transfer learning approach based on a new unseen data point from the target distribution. Suppose we get a new row vector x containing dT features. According to the definition of the target data, the corresponding label Y is distributed as Y = xθ + w, where w ∼ N (0, σ2) is independent from everything else. If we have an estimator ˆθ for θ then we can predict the new label as xˆθ. The generalization error is then given by E[(Y − xˆθ) 2]. In this work, we define the transfer gain in terms of the data-pooling estimator with weight α as: G(x) = E[(Y − xˆθT ) 2] − E[(Y − xˆθα) 2] (9) Intuitively, a positive transfer gain indicates that using the source dataset improves the predictions in new unseen data from the target distribution. If it is negative, then it means that either the source data is not helpful or that the transfer learning approach is sub-optimal. Proposition 5.1 Since both ˆθT and ˆθα are unbiased estimators of θ*, then the transfer gain can be rewritten* in terms of the difference of their variances (proof in supplemental material): $${\mathcal{G}}(x)=x(\,V a r({\hat{\theta}}_{T})\,-\,V a r({\hat{\theta}}_{\alpha}))x^{\top}$$ $$(10)$$ $$(11)$$ > (10) Furthermore, by replacing the variances of ˆθT and ˆθα *in equation equation 10, it becomes:* $${\mathcal{G}}(x)=x[\sigma^{2}\Sigma_{T}^{-1}-M^{-1}(\sigma_{S}^{2}\alpha_{S}^{2}\Pi\Sigma_{S}\Pi^{\top}+\sigma^{2}\alpha_{T}^{2}\Sigma_{T})M^{-1}]x^{\top}$$ > (11) The transfer gain formula in Equation 10 ties together the variances of ˆθα and ˆθT in a way that if the variance of ˆθα is smaller than that of ˆθT , then the gain is strictly positive for any non-zero input vector x. It refers back to the problem of selecting α by adding a new objective: maximizing the transfer gain or at least making it positive. On top of that, Equation 11 expresses analytically this objective and can be used to study the signal of the gain. We observe that by selecting αS =1 σ 2 S and αT =1 σ2 as mentioned in Section 4.2, the expression for the variance of ˆθα simplifies greatly, becoming an updated version of Var( ˆθT ). Based on this insight, we use the Woodbury's identity (Hager, 1989) to show that Var( ˆθT ) − Var( ˆθα) is a positive semi-definite matrix (complete proof in the supplementary material). This results in the following theorem: Theorem 5.2 Given any target dataset (xT ,YT ) and source dataset (xS,YS) as defined in Section 3, if we choose αS =1 σ 2 S and αT = 1 σ2 then the transfer gain G(x) *is non-negative for any given* x ∈ R dT *. Therefore* the generalization error of ˆθα *is upper-bounded by that of* ˆθT : $$\mathbb{E}[(Y-x\hat{\theta}_{\alpha})^{2}]\leq\mathbb{E}[(Y-x\hat{\theta}_{T})^{2}]\tag{12}$$ Theorem 5.2 leads to theoretical implications about the linear version of the incremental input learning problem and the data-pooling estimator. It says that by selecting the weights αS =1 σ 2 S and αT =1 σ2 the data-pooling estimator will be at least as good as the basic estimator. In other words, negative transfer is unlikely if we use data-pooling, so ˆθα is a robust approach to combine the source and target datasets. Furthermore, this is true regardless of the amount of source and target samples, the number of extra features, or the value of their parameter θ 00. It is even likely that using the source dataset via the data-pooling approach should give a positive gain. If we assume, for simplicity, the hypothetical case where the design matrices are orthogonal, so ΣS = IdS and ΣT = IdT , then the variance of the data-pooling estimator becomes a diagonal matrix: $$\operatorname{Var}({\hat{\theta}}_{\alpha})_{i i}={\begin{cases}{\frac{\sigma^{2}\sigma_{S}^{2}}{\sigma^{2}+\sigma_{S}^{2}}},&{i\leq d_{S}}\\ {\sigma^{2},}&{i>d_{S}}\end{cases}}$$ then it is straightforward to see that Var( ˆθα)ii ≤ Var( ˆθT )ii and therefore G(x) ≥ 0. On top of that, if we assume that our new observation x is such that xi 6= 0, i ≤ dS, then the transfer gain is strictly positive. This means that if any of the features observable in the historical data are never zero, then the generalization error of ˆθα is strictly lower than that of ˆθT . Theorem 5.2 could even generalize to a more complex setting where there is an arbitrary number of features that are irregularly observed. It can be done by defining multiple source datasets (xSi ,YSi ) where each one contains complete observations of a subset of these features, and extending the loss in Equation 3 by adding an extra term αSiRSi for each dataset. Again, all the features with missing observations would have to be mean-shifted and the weights would be computed as αSi =1 σ 2 Si . At this point, we know the theoretical upper bound of the error of the data-pooling estimator assuming that we have the values of σ 2 S and σ 2. In the next section, we verify the result of Theorem 5.2 empirically and also try our transfer learning algorithm using real and simulated data. ## 6 Experimental Setup We want to show how the data-pooling approach compares to other more complex SOTA transfer learning methods on multiple real-world datasets. In the following section, we explain in detail how we set up the experiments for this comparison. Additionally, in section 6.2 we present the setup of an ablation study using simulated data that we conduct to evaluate the impact of the theoretical assumptions about the data-pooling approach. ## 6.1 Real-Life Data Experiment And Sota Comparison In this experiment, we use multivariate real-life regression tasks to evaluate whether our theoretical results stand when the new inputs come from non-Gaussian distributions, and we compare the data-pooling method against other state-of-the-art heterogeneous transfer learning algorithms. The comparing methods are the Domain Specific Feature Transfer (DSFT) and its non-linear version (Wei et al., 2019), which are unsupervised and do not require any instance-correspondence, so they fit our incremental input learning setting as a pre-processing step. Both of them are based on learning a mapping from the historical features, which are present in both the source and the target datasets, to the new features. This mapping is used to fill up the values of the new feature in the source data, which is then combined with the target data to train a least-squares predictor. For the non-linear version of DSFT, a kernel is applied on the historical features before learning the mapping parameters. The hyperparameters are α = 105 and β = 1 as recommended in the original paper and the kernel used is the radial basis function kernel (RBF), which has overall the best reported results according to the authors. As a baseline, we use the ordinary least-squares estimator computed using only the target dataset. We use 9 multivariate regression datasets from the UCI repository (Dua & Graff, 2017), which are widely used in the literature for validating regression models. Each dataset contains a different amount of inputs in the source and target dataset and a wide variety of data distributions to showcase how the data-pooling estimator can work in more realistic settings. We separate each dataset into source, target, and test sets and remove a number of features from the source to emulate newly added features. The new features are selected by the largest Pearson correlation coefficient with respect to the regression label to ensure that the OLS baseline always uses the most relevant inputs for the task. We run the experiments with 3 and 5 features removed. All the details about the datasets are available in Table 1. The root-mean-squared error (RMSE) of each approach is computed using a separate test set. We follow two different data sampling strategies to evaluate our model using different amounts of source and target data samples, one for small data and another one for large data: Small data setup: We fix the source and test datasets (nS and ntest in Table 1) and sample multiple disjoint target datasets, one for each run, and use them to compute the transfer learning approaches and the OLS basic estimator. The number of runs is listed in Table 1. Large data setup: For each dataset, we randomly sample 10% of the data for the test set, we fix nT at 100 or 300, and the remaining data goes to the source training set. This way, depending on the dataset, we can have very large source datasets (i.e. ∼ 40.000 for *protein* dataset) or relatively larger target datasets (i.e. *concrete* and *energy*). We repeat the experiments 30 times, and each time we sample new source, target and test sets. ## 6.2 Ablation Experiments In this section, we define the setup to study the impact of the assumptions about our transfer learning approach in a practical setting and evaluate how it is influenced by the size of the target dataset nT . We do so by computing the transfer gain empirically following the procedure explained in the next section. The subsequent sections describe the experiments where we analyze the effect of the choices for estimating the noise variance and shifting the new input by its sample mean. | Dataset | ntotal | nS | nT | ntest | dT | #runs | |-------------|----------|------|------|---------|------|---------| | concrete | 1030 | 114 | 38 | 103 | 8 | 21 | | energy | 768 | 114 | 38 | 76 | 8 | 15 | | kin40k | 40000 | 114 | 38 | 4000 | 8 | 50 | | parkinsons | 5875 | 150 | 50 | 587 | 20 | 50 | | pol | 15000 | 168 | 56 | 1500 | 26 | 50 | | protein | 45730 | 117 | 39 | 4573 | 9 | 50 | | pumadyn32nm | 8192 | 186 | 62 | 819 | 32 | 50 | | skillcraft | 3338 | 147 | 49 | 333 | 19 | 50 | | sml | 4137 | 156 | 52 | 413 | 22 | 50 | Table 1: Dataset statistics. Algorithm 2 Empirical Transfer Gain Input: number of iterations N, test dataset Dtest for i = 1 to N do Randomly sample DS and DT . Compute ˆθα and ˆθT . Compute the empirical transfer gain with Equation 13 (or Equation 14). end for Return the average over all Gi. ## 6.2.1 Empirical Transfer Gain Since the transfer gain depends on multiple sources of randomness, we have to estimate it empirically using multiple samples of source and target datasets. It is computed as the difference of the squared residuals of the prediction given by the basic estimator and the prediction given by the data-pooling estimator using a held-out test dataset Dtest where all features are present. Each iteration i corresponds to a different sample of training data that is used to compute ˆθα and ˆθT , which are then used to compute the difference of residuals as described in Equation 13. In the end, the empirical transfer gain is the result of the average over all the iterations. The complete procedure is detailed in Algorithm 2. $$G_{i}=\frac{1}{n_{\mathrm{test}}}\sum_{(x,y)\in\mathcal{D}_{\mathrm{test}}}\left[(y-x\hat{\theta}_{T})^{2}-(y-x\hat{\theta}_{\alpha})^{2}\right]$$ $$(13)$$ 2i(13) The way that ˆθα is computed differs per experiment and is explained next. ## 6.2.2 Estimating The Noise Variance In the first simulation experiment, we want to test how accurately we can calculate ˆθα if we estimate σ 2 and σ 2 S (or α) using the training data samples. For that, we simulate our data using the linear relationship Y = 2 + 2X1 − 2X2 + w, where w, X1, X2 ∼ N (0, 1), so the data fits our assumptions and all the parameters necessary to compute α and to accurately approximate the gain are known. In each iteration, we sample nS = 100 source data points where X2 is omitted. We vary nT from 5 to 30 samples to assess how the transfer gain changes with the size of the target data. We repeat the procedure in Algorithm 2 for each value of nT . Finally, we use 1000 held-out samples for the test set and N = 200 iterations. ## 6.2.3 Estimating The New Input Expectation The goal of this experiment is to study the effect of shifting the target data by the sample mean when the assumption that it is zero-centered is not met. To achieve that, we compare the transfer gain of ˆθα computed using the target data shifted by the real mean E[X00] versus ˆθα computed using the target data shifted by the sample mean x¯T as described in Algorithm 1. Since we want to study this effect in isolation, we control for any other possible interfering factors by simulating the data as described in Experiment 1, except that the distribution of X2 is changed to N (1, 1). Again we follow the procedure described by Algorithm 2 to compute the transfer gain, only that at each iteration we also have to shift the test inputs by the mean computed from the target data, so Equation 13 becomes: $$G_{i}=\frac{1}{n_{\rm test}}\sum_{(x,y)\in{\cal D}_{\rm test}}\left[(y-x\hat{\theta}_{T})^{2}-(y-(x-\bar{x}_{T})\hat{\theta}_{\alpha})^{2}\right]\tag{14}$$ In addition, we also look at how the variance of the sample mean affects the transfer gain by repeating the simulation with different target sample sizes nT . Again, we use 1000 samples for the test set and N = 200 iterations. ## 7 Results Here we analyze the results obtained from the experiments described in the previous section. ## 7.1 Real-Life Data Experiment And Sota Comparison The results of the experiments with multivariate datasets following the small and large data setups with 5 new features are shown in Table 7 and Table 2 respectively. Extra results with different numbers of new features and dataset sizes are shown in Appendix B.5. We use the Wilcoxon signed-rank test to determine when the difference between the models is statistically significant with a corrected p-value lower than 0.05. Underlined results signal whether data-pooling (DP) or non-linear DSFT has significantly lower error than the other. Similarly, italic is used to highlight the significance given by the test between DP and linear DSFT. In the small data setup, our method outperforms the baseline in all cases except *concrete* and *energy*, where there was no significant difference between both. For 4 datasets, the data-pooling estimator has a significantly lower error than the linear DSFT and is outperformed in 1. In the remaining 4, there was no significant difference. In the comparison against the non-linear DSFT, data-pooling outperforms it in 2 datasets and is outperformed by it in 1 other. For the remaining 6 datasets, there is no statistically significant difference between both methods. In the large data regime, DP outperforms OLS in 6 datasets. Only in the *concrete* dataset OLS outperforms DP, but also DSFT. Our approach also outperforms both variants of DSFT in the majority of the datasets. In the *energy* dataset, both versions of DSFT perform worse than the baseline, showing that the mapping used to impute the source dataset introduced substantial bias into the final predictor. The results show that the transfer gain of our approach is mostly positive since it significantly outperforms the OLS estimator in most tasks, even though the new features follow arbitrary distributions and are not zero-centered. Only in 4 cases out of 36 cases (9 datasets × 4 experimental setups) it is outperformed by OLS. It shows that our theoretical upper bound on the generalization error can generalize to many real-life data distributions and that DP is robust against negative transfer learning. In addition, it shows that the data-pooling estimator outperforms the linear DSFT, and can perform on par with the more complex nonlinear DSFT in the small data regime. In the large data regime, our approach outperforms both versions of DSFT largely, suggesting that imputing a very large source dataset is more difficult than computing the estimator directly through DP. ## 7.2 Estimating The Noise Variance Figure 2a compares the empirical transfer gain when it is computed using the true values of σ 2 S and σ 2 (the "real α" line) and using their estimations from the data (the "estim. α" line). We do so for different target dataset sizes shown in the x-axis. The result shows that the "real α" curve does not go below zero, in accordance to Theorem 5.2, which means that the error of the data-pooling estimator computed on the test set is lower than that of the basic estimator. The transfer gain starts higher for small sizes of target data nT Table 2: RMSE of each approach following the **small data setup**. The 5 features with highest correlation with the label are removed from the source dataset. An asterisk (*) marks the datasets where there is no significant difference between DP and OLS. | Dataset | ols | dsft | dsft-nl | dp | |-------------|---------------------|----------------|----------------|----------------| | concrete* | 14.7 ± 2.5 | 14.5 ± 2.68 | 13.3 ± 1.03 | 14.3 ± 2.81 | | energy* | 3.33 ± 0.247 | 3.91 ± 0.434 | 4.24 ± 0.461 | 3.32 ± 0.257 | | kin40k | 1.12 ± 0.062 | 1.08 ± 0.055 | 1.06 ± 0.0378 | 1.07 ± 0.0409 | | parkinsons | 16.8 ± 5.32 | 10.6 ± 0.415 | 10.8 ± 0.414 | 10.9 ± 0.532 | | pol | 4.77e+09 ± 3.38e+10 | 120 ± 108 | 35.1 ± 4.76 | 36.1 ± 5.86 | | protein | 0.825 ± 0.11 | 0.775 ± 0.0975 | 0.716 ± 0.0233 | 0.723 ± 0.0321 | | pumadyn32nm | 1.49 ± 0.142 | 1.17 ± 0.0788 | 1.12 ± 0.0446 | 1.11 ± 0.0393 | | skillcraft | 0.338 ± 0.0439 | 0.299 ± 0.0215 | 0.288 ± 0.011 | 0.29 ± 0.0134 | | sml | 3.55 ± 1.04 | 3.1 ± 0.949 | 2.77 ± 0.517 | 2.81 ± 0.521 | | Dataset | ols | dsft | dsft-nl | dp | |-------------|----------------|----------------|---------------|----------------| | concrete† | 11 ± 1 | 11 ± 0.986 | 12.3 ± 0.929 | 11.1 ± 0.93 | | energy* | 2.95 ± 0.358 | 3.31 ± 0.423 | 4.07 ± 0.451 | 2.94 ± 0.336 | | kin40k | 1.04 ± 0.0258 | 1.02 ± 0.022 | 1.22 ± 0.113 | 1.02 ± 0.0207 | | parkinsons | 11.1 ± 1.16 | 9.61 ± 0.228 | 10.1 ± 0.293 | 9.66 ± 0.213 | | pol | 54.2 ± 20.3 | 49.5 ± 46 | 43.6 ± 6.54 | 31.8 ± 0.763 | | protein | 0.729 ± 0.0689 | 0.728 ± 0.0686 | 0.68 ± 0.0144 | 0.679 ± 0.0127 | | pumadyn32nm | 1.21 ± 0.0732 | 1.06 ± 0.0672 | 1.03 ± 0.0359 | 1.03 ± 0.036 | | skillcraft* | 0.317 ± 0.136 | 0.658 ± 1.3 | 0.41 ± 0.375 | 0.409 ± 0.373 | | sml | 2.87 ± 0.919 | 2.55 ± 0.854 | 2.34 ± 0.229 | 2.24 ± 0.247 | Table 3: RMSE of each approach following the **large data setup**. The 5 features with the highest correlation with the label are removed from the source dataset and the target dataset size is fixed at 100 samples. An asterisk (*) marks the datasets where there is no significant difference between DP and OLS and † marks when OLS is significantly better than DP. and decreases fast as nT increases, to the point that the transfer gain gets closer to zero. We see that the gain is highest when nT is small, so the variance of ˆθT is larger and the use of the source dataset helps to reduce it for ˆθα. Finally, we see that the "estimated α" curve overlaps almost perfectly with the real values, confirming our first hypothesis. ## 7.3 Estimating The New Input Expectation Figure 2b shows the result of the simulation where the new feature is not zero-centered, so the data-pooling estimator has to be computed by shifting the observations of that feature by its estimated mean following Algorithm 1. For the comparison, we also plot the transfer gain for ˆθα computed using the real mean. Again, we have the transfer gain in the y-axis and the target dataset size on the x-axis. We observe a difference between the gain curves of the true and the estimated mean, and the difference diminishes as nT increases. It represents the variance of the sample mean x¯T that adds up to that of ˆθα and makes the gain smaller. The variance of x¯T is inversely proportional to nT , so it also explains why the difference is larger for small nT and diminishes as nT grows. Nevertheless, the transfer gain is predominantly positive when nT is small. This shows that using the data-pooling approach can still be beneficial even in the case that the new features are not zero-centered. ![11_image_0.png](11_image_0.png) (a) (b) Figure 2: Comparison of the transfer gain of the data-pooling estimator when: (a) using the real value of α vs using the estimated value of α; and (b) using the sample mean to shift the data vs using the true mean. ## 8 Conclusion In this paper, we look at the problem of learning a predictive model after new input features are discovered, but there are only a few observations of them, while there are plenty of observations of historical data. We present a transfer learning approach to the linear regression version of this problem that is able to bridge the difference in the input dimensions of the source and the target datasets. We provide an in-depth theoretical study of our approach, proving an upper bound for its generalization error and its robustness against negative transfer learning. Through extensive empirical experiments using real-life datasets, we show that our approach performs consistently better than the baseline approach. The results hold for a wide variety of real-life distributions of new inputs, showing that the data-pooling approach still works when our theoretical assumptions are violated. With respect to the state-of-the-art, our approach outperforms other linear transfer learning algorithms and performs on par with more complex non-linear ones, with the advantage of holding theoretical guarantees. It is also simple to implement and efficient, having the same computational complexity as the ordinary least-squares method. In addition, it does not have any hyperparameters to tune, so it is easy to apply to incremental input problems where the target data is limited. Nevertheless, our theoretical bound on the generalization error of DP can still be improved by taking into account the estimations used for E[X00], σS and σ, which could help understand the effect of the dataset sizes nS and nT in the final predictor. Another future challenge is to study this problem in the non-linear case, such as classification with logistic regression or neural networks. Limitations: Despite the positive results shown in the previous section, there are some clear limitations to the data-pooling approach and our theoretical results. The non-negative transfer gain proof relies on some assumptions which might not hold in practice: 1. independence between the new and the historical features (X00 and xS); 2. identical distribution of the new features and labels for both source and target data; 3. zero-centered Gaussian distribution of X00; 4. linear model. Assumption 1 can be violated, for example, if the data was generated by a non-additive model. We simulate this case in Appendix B.4 and indeed our approach leads to negative transfer more often than what we observe in the results of Section 7.1, so important improvements can still be done in this direction. Assumption 2 means that we cannot guarantee non-negative transfer if there is a significant distribution shift between the source and the target datasets. Although data-pooling is resilient to distribution shifts in the historical features, it is sensitive to shifts in the distribution of the new features or if the parameter θ changes across the source and target data. We show empirically in Appendix B.3 that our approach can work with small shifts, but its transfer gain decreases as the shift grows. We find a workaround for Assumption 3 by centering the new features using the sample mean, but it introduces bias into DP which could lead to negative transfer when the target dataset is large enough (see Section 6.2.3). DP also relies on estimations of σ 2 S and σ 2 might also introduce bias in the model. Finally, our proof is limited to linear models since it is based on a closed-form solution of the data-pooling loss. Therefore it is not trivial to translate it to non-linear cases such as neural networks and logistic regression. ## References Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. *Machine Learning*, 79(1):151–175, May 2010. ISSN 1573-0565. doi: 10.1007/s10994-009-5152-4. URL https://doi.org/10.1007/s10994-009-5152-4. Aiyou Chen, Art B. Owen, and Minghui Shi. Data enriched linear regression. *Electronic Journal of Statistics*, 9(1):1078 - 1112, 2015. doi: 10.1214/15-EJS1027. URL https://doi.org/10.1214/15-EJS1027. Xinyang Chen, Sinan Wang, Jianmin Wang, and Mingsheng Long. Representation subspace distance for domain adaptation regression. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 1749–1759. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/chen21u.html. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ ml. Sheng-Uei Guan and Shanchun Li. Incremental learning with respect to new incoming input attributes. Neural Processing Letters, 14(3):241–260, Dec 2001. ISSN 1573-773X. doi: 10.1023/A:1012799113953. URL https://doi.org/10.1023/A:1012799113953. William W. Hager. Updating the inverse of a matrix. *SIAM Review*, 31(2):221–239, 1989. doi: 10.1137/ 1031049. URL https://doi.org/10.1137/1031049. Steve Hanneke and Samory Kpotufe. On the value of target data in transfer learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings. neurips.cc/paper/2019/file/b91f4f4d36fa98a94ac5584af95594a0-Paper.pdf. Yuwei He, Xiaoming Jin, Guiguang Ding, Yuchen Guo, Jungong Han, Jiyong Zhang, and Sicheng Zhao. Heterogeneous transfer learning with weighted instance-correspondence data. *Proceedings of the AAAI* Conference on Artificial Intelligence, 34(04):4099–4106, Apr. 2020. doi: 10.1609/aaai.v34i04.5829. URL https://ojs.aaai.org/index.php/AAAI/article/view/5829. Bo-Jian Hou, Lijun Zhang, and Zhi-Hua Zhou. Learning with feature evolvable streams. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings. neurips.cc/paper/2017/file/7634ea65a4e6d9041cfd3f7de18e334a-Paper.pdf. R.J.A. Little and D.B. Rubin. *Statistical Analysis with Missing Data*. Wiley Series in Probability and Statistics. Wiley, 2019. ISBN 9780470526798. URL https://books.google.com.br/books?id=BemMDwAAQBAJ. Seungwhan Moon and Jaime Carbonell. Completely heterogeneous transfer learning with attention - what and what not to transfer. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 2508–2514, 2017. doi: 10.24963/ijcai.2017/349. URL https://doi.org/10. 24963/ijcai.2017/349. Mohammadreza Mousavi Kalan, Zalan Fabian, Salman Avestimehr, and Mahdi Soltanolkotabi. Minimax lower bounds for transfer learning with linear and one-hidden layer neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1959–1969. Curran Associates, Inc., 2020. URL https://proceedings.neurips. cc/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Paper.pdf. David Obst, Badih Ghattas, Jairo Cugliari, Georges Oppenheim, Sandra Claudel, and Yannig Goude. Transfer learning for linear regression: a statistical test of gain, 2021. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. *IEEE Transactions on Knowledge and* Data Engineering, 22(10):1345–1359, 2010. doi: 10.1109/TKDE.2009.191. Victor S. Sheng and Charles X. Ling. Feature value acquisition in testing: A sequential batch test algorithm. In *Proceedings of the 23rd International Conference on Machine Learning*, ICML '06, pp. 809–816, New York, NY, USA, 2006. Association for Computing Machinery. ISBN 1595933832. doi: 10.1145/1143844. 1143946. URL https://doi.org/10.1145/1143844.1143946. Pengfei Wei, Yiping Ke, and Chi Keong Goh. A general domain specific feature transfer framework for hybrid domain adaptation. *IEEE Transactions on Knowledge and Data Engineering*, 31(8):1440–1451, 2019. doi: 10.1109/TKDE.2018.2864732. Ying WEI, Yu Zhang, Junzhou Huang, and Qiang Yang. Transfer learning via learning to transfer. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine* Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 5085–5094. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/wei18a.html. Yuguang Yan, Wen Li, Hanrui Wu, Huaqing Min, Mingkui Tan, and Qingyao Wu. Semi-supervised optimal transport for heterogeneous domain adaptation. In *Proceedings of the 27th International Joint Conference* on Artificial Intelligence, IJCAI'18, pp. 2969–2975. AAAI Press, 2018. ISBN 9780999241127. Liu Yang, Liping Jing, Jian Yu, and Michael K. Ng. Learning transferred weights from co-occurrence data for heterogeneous transfer learning. *IEEE Transactions on Neural Networks and Learning Systems*, 27 (11):2187–2200, 2016. doi: 10.1109/TNNLS.2015.2472457. Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, and Zhi-Hua Zhou. Learning with feature and distribution evolvable streams. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 11317–11327. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/zhang20ad.html. Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. *Proceedings of the IEEE*, 109(1):43–76, 2021. doi: 10.1109/JPROC.2020.3004555. ## A Proofs And Derivations A.1 Proof Of Proposition 4.1 We want to find ˆθα = argθmin Rα(θ), or ˆθα such that ∇Rα( ˆθα) = 0. By applying simple calculus rules we have ∇RT (θ) = −2x > T (YT −xT θ) and ∇RS(θ) = −2Ix > S (YS −xSI >θ), so we can write: ∇Rα(θ) = −2αSIx > S (YS − xSI >θ) − 2αT x > T (YT − xT θ) By equalling it to zero and solving for θ we obtain: ˆθα = (αSIx > S xSI > + αT x > T xT ) −1(αSIx > S YS + αT x > T YT ) Let HRα = 2(αSIx > S xSI > + αT x > T xT ) be the Hessian matrix of Rα. We assume that xT is obtained from observations of dT independent features, then the columns of xT must be linearly independent and thus, for any vector v ∈ R dT such that v 6= 0, xT v 6= 0nS . This means that v >x > T xT v > 0 and so x > T xT is positive definite. The same holds for x > S xS. Finally, αS and αT are both positive, so HRα is positive definite and therefore Rα is strictly convex and ˆθα is its global minimum. It also follows that the matrix M = αSIx > S xSI > + αT x > T xT is positive definite since M = 1 2HRα , so the inverse M−1required to compute ˆθα exists. ## A.2 Proof Of Proposition 4.2 Let M = αSIx > S xSI > + αT x > T xT and knowing that θ 0 = I >θ: E[ ˆθα] = E[M−1(αSIx > = E[M−1(αSIx > $\mathcal{L}$ S YS + αT x > T YT )] S YS) + M−1(αT x > T YT )] = M−1(αSIx > S E[YS]) + M−1(αT x > T E[YT ]) = M−1(αSIx > S xSθ 0) + M−1(αT x > T xT θ) $$=\theta$$ $$=M^{-1}M\theta$$ $$\begin{array}{l}{{=M^{-1}(\alpha_{S}\mathbb{I}x_{S}^{\top}x_{S}\theta^{\prime}+\alpha_{T}x_{T}^{\top}x_{T}\theta)}}\\ {{=M^{-1}(\alpha_{S}\mathbb{I}x_{S}^{\top}x_{S}\mathbb{I}^{\top}\theta+\alpha_{T}x_{T}^{\top}x_{T}\theta)}}\\ {{=M^{-1}(\alpha_{S}\mathbb{I}x_{S}^{\top}x_{S}\mathbb{I}^{\top}+\alpha_{T}x_{T}^{\top}x_{T})\theta}}\\ {{=M^{-1}M\theta}}\end{array}$$ Var( ˆθα) = Var(M−1(αSIx > S YS + αT x > T YT )) = Var(αSM−1Ix > S YS + αTM−1x > T YT ) = Var(αSM−1Ix > S YS) + Var(αTM−1x > T YT ) = α 2 SM−1Ix > S Var(YS)xSI >M−1 + α 2 TM−1x > T Var(YT )xTM−1 = α 2 S (σ 2 + kθ 00k 2)M−1Ix > S xSI >M−1 + α 2 T σ 2M−1x > T xTM−1 = M−1(α 2 S (σ 2 + kθ 00k 2)Ix > S xSI > + α 2 T σ 2x > T xT )M−1 ## A.3 Proof Of Proposition 5.1 $$\mathcal{G}(x)=\mathbb{E}[(Y-x\hat{\theta}_{T})^{2}]-\mathbb{E}[(Y-x\hat{\theta}_{\alpha})^{2}]$$ $$=\text{Var}(Y-x\hat{\theta}_{T})+\mathbb{E}[Y-x\hat{\theta}_{T}]^{2}$$ $$\qquad-(\text{Var}(Y-x\hat{\theta}_{\alpha})+\mathbb{E}[Y-x\hat{\theta}_{\alpha}]^{2})$$ $$=\text{Var}(Y-x\hat{\theta}_{T})+(\mathbb{E}[Y]-x\mathbb{E}[\hat{\theta}_{T}])^{2}$$ $$\qquad-(\text{Var}(Y-x\hat{\theta}_{\alpha})+(\mathbb{E}[Y]-x\mathbb{E}[\hat{\theta}_{\alpha}])^{2})$$ $$=\text{Var}(Y-x\hat{\theta}_{T})+(x\theta-x\theta)^{2}$$ $-\;\left(\text{Van}\right)$ . $\tau_{\rm s}$ . − (Var(Y − xˆθα) + (xθ − xθ) 2) $\frac{1}{2}$ . = Var(Y − xˆθT ) − Var(Y − xˆθα) = Var(Y ) + Var(−xˆθT ) − (Var(Y ) + Var(−xˆθα)) = Var(Y ) − Var(Y ) + Var(−xˆθT ) − Var(−xˆθα) = Var(−xˆθT ) − Var(−xˆθα) = x(Var( ˆθT ) − Var( ˆθα))x > $$\mathbf{\Pi}(t)$$ $$\downarrow\overline{{\phantom{\;}}}$$ * [16] A. A. K. ## A.4 Proof Of Theorem 5.2 Let H = Var( ˆθT ) − Var( ˆθα) describe the difference between variances of ˆθT and ˆθα such that G(x) = xHx>. We want to show that G(x) ≥ 0 for any x ∈ R dT , so it suffice to prove that H is positive semi-definite. By selecting αS =1 σ 2 S and αT = 1 σ2 we obtain M =1 σ 2 S IΣSI > + 1 σ2 ΣT and: H = σ 2Σ −1 T − M−1(σ 2 Sα 2 S IΣSI > + σ 2α 2 T ΣT )M−1 = σ 2Σ −1 T − M−1 1 σ 2 S IΣSI > + 1 σ 2 ΣT M−1 = σ 2Σ −1 T − M−1MM−1 = σ 2Σ −1 T − M−1 = σ 2Σ −1 T − 1 σ 2 S IΣSI > + 1 σ 2 ΣT −1 Let A =1 σ2 ΣT , C =1 σ 2 S ΣS, U = I and V = I >. M can be seen as an updated version of A, so we can derive its inverse using the Woodbury identity (section 3 of (Hager, 1989)) which states that given A, C and A + UCV invertible matrices then M−1 = (A + UCV ) −1 = A−1 − A−1U(C −1 + V A−1U) −1V A−1: $$H=A^{-1}-(A+UCV)^{-1}$$ $$=A^{-1}-(A^{-1}-A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1})$$ $$=A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1}$$ $$=A^{-1}\mathbb{I}(C^{-1}+\mathbb{I}^{\top}A^{-1}\mathbb{I})^{-1}\mathbb{I}^{\top}A^{-1}$$ We know that A, A−1, C, C −1 are positive definite. We also know that I has full column rank and therefore I >A−1I is positive definite and so is C −1 +I >A−1I and its inverse. Finally, I(C −1 +I >A−1I) −1I > is positive semi-definite and thus, since A−1is symmetric, H is positive semi-definite. ## B Additional Ablation Studies B.1 Correlation Experiment An important difference between our approach and other state-of-the-art heterogeneous transfer learning algorithms, such as DSFT, is that we avoid learning a mapping from the old features to the new ones. So when there is little or no learnable relationship between the new and the old features our method should be able to perform better than DSFT. To investigate this hypothesis, we set up an experiment where we can control the relationship between the features so we can compare both methods in different settings. In this experiment, we simulate data with a linear correlation between the input features to emulate a learnable relationship. The input variables are sampled from a joint Gaussian distribution with mean µ = (0, 1)> and covariance matrix Σ such that Σii = 1 for i ∈ 1, 2 and Σij = c for i 6= j and *i, j* ∈ 1, 2. When c = 0, X1 and X2 are completely uncorrelated and when c increases, so does the correlation between the two inputs, allowing us to control the strength of the correlation. The prediction label Y is then computed as a linear combination of the inputs using the same parameters used in Section 6.2.3. We fix the sizes of the source, target, and test datasets as nS = 100, nT = 8 and ntest = 1000, respectively. We vary c between 0 and 0.9. For each value of c, we repeat the experiment 200 times resampling the source and target datasets while the test dataset remains fixed. The OLS computed on the target dataset is used as a baseline comparison. Figure 3 shows the MSE computed on the test dataset with each approach for different values of the correlation coefficient c. As expected, the error for both linear and non-linear versions of DSFT is the highest when the input correlation is low, so it cannot learn an accurate mapping from X1 to X2. Their performance improves as c increases, but is mostly larger than the OLS baseline, which means that the bias added by using the inputted values of X2 is still larger than the variance reduction expected from using the source data. Overall, this experiment shows that DSFT can only outperform DP on linear regression incremental input tasks when there is some significant (non-)linear relationship between the new and historical features. We can also see in Figure 3 that the MSE of DP increases as c increases. That happens because our approach expects the new inputs to be uncorrelated with the old inputs. This results in an increase of the bias of DP proportional to c, up to the point that it becomes larger than the variance reduction obtained from the source data when c > 0.2. At higher correlation values (c > 0.7), X1 and X2 are so similar that Y can be predicted mainly by using X1, so the source data becomes more and more useful. This explains the decrease in the MSE of the DP estimator at that point. Nevertheless, DP remains better than all other approaches for relatively small values of c, showing that there can be a positive trade-off between the variance reduction from the extra source data and the bias introduced by correlated inputs. ![16_image_0.png](16_image_0.png) Figure 3: Correlation experiment: The MSE of both the data-pooling (dp) and the DSFT approaches for datasets generated with different correlation coefficients between the inputs. ## B.2 Detailed **Experiments With Real-Life Data** For a detailed analysis, we selected three regression datasets from the UCI public repository: Wine, *Airfare* and *Concrete*; each one with different Pearson correlation coefficients measured between the input variables. Wine is the task of predicting the quality of red wines, *Concrete* is the task of predicting the resulting compressive strength measured in megapascal (MPa) of different concrete mixes and *Airfare* is the task of predicting the price of plane tickets based on the travel distance and the largest market share among all carriers. Table 4 gives further information about the variables chosen for the source and target datasets and Table 5 shows the Pearson correlation coefficients of each pair of variables. For each dataset, we fixed the source and the test datasets, and then sampled target datasets with different sizes (nT ), starting with 8 data points up to 30. For each value of nT , we repeat the experiment multiple times, each time with a new independent target dataset. All the details about the settings selected for each dataset are stated in Table 5. The results obtained using the Wine, *Concrete* and *Airfare* datasets are shown in figures 4, 5 and 6 respectively. On Figure 6, the non-linear DSFT gives the best results, suggesting that there is some exploitable non-linear relationship between the inputs *nsmiles* and *large_ms*. However, it does not confirm its superiority on the other two datasets (figures 4 and 5), where it displays similar performance to both DP and linear DSFT. Table 4: Variables used for source and target data per dataset. X1 appears in both source and target datasets, and X2 only on the target dataset. | Dataset | X1 | X2 | Y | |-----------|---------|------------------|---------| | Wine | alcohol | volatile acidity | quality | | Airfare | nsmiles | large_ms | fare | | Concrete | Cement | Superplasticizer | Mpa | | Dataset | cor(X1, X2) | cor(X1, Y ) | cor(X2, Y ) | ntest | nS | #runs | |-----------|---------------|---------------|---------------|---------|------|---------| | Wine | -0.20 | 0.48 | -0.39 | 700 | 100 | 25 | | Airfare | -0.48 | 0.53 | -0.20 | 5000 | 100 | 50 | | Concrete | 0.09 | 0.50 | 0.36 | 180 | 100 | 25 | Table 5: The Pearson correlation coefficients, the test and source dataset sizes and the number of repetitions for each dataset. On the *Concrete* prediction task, DP outperforms both variants of DSFT in some moments, such as when the target dataset has 25 or more data points. The DP performance is comparable to that of the linear DSFT in all datasets, even *Wine* and *Airfare* where the correlation between inputs is higher. These results show a pattern consistent with the simulations: the transfer gain is the largest when nT is small. In these cases, the size of the target dataset is too small and the variance of the basic estimator is the highest, so combining the source dataset using our transfer learning approach is more beneficial, resulting in a larger gain. ![17_image_0.png](17_image_0.png) Figure 4: Comparison between the data-pooling (DP) and the DSFT approaches using the *Wine* dataset. B.3 **Distribution shifts between source and target datasets** ![18_image_0.png](18_image_0.png) Figure 5: Comparison between the data-pooling (DP) and the DSFT approaches using the *Concrete* dataset. ![18_image_1.png](18_image_1.png) Figure 6: Comparison between the data-pooling (DP) and the DSFT approaches using the *Airfare* dataset. An important assumption of our theoretical work is that the distribution of the new inputs remains the same for both the source and the target datasets, and we cannot guarantee the non-negative transfer gain bound if that does not hold. To demonstrate this problem, we simulate datasets following the same setup as Section 6.2.3, but we vary the mean of the new input in the target dataset µ1T while it is fixed in the source dataset µ1S. We also fix the size of the source and target datasets at nS = 200 and nT = 15 to keep a similar proportion as the other experiments. The result in Figure 7 shows that the error of our estimator increases quickly with the size of the shift, showing that the source dataset becomes less relevant for the linear regression task. Still, data-pooling copes with minor shifts in the distribution, and overall outperforms DSFT. ![19_image_0.png](19_image_0.png) Figure 7: Result of simulation of increasing the difference of the mean of the new features between source and target datasets. ## B.4 **Non-Additive Models** In our theoretical study, we assume that the new features are independent of the historical ones. One practical way this assumption could be violated is in the case of a non-additive model where the label depends on the product of a historical feature and a new feature. Since DP is a linear model, it is possible to compute this product and add it as a new feature, but then our independence assumption is broken. We can simulate this scenario by creating one such feature and adding it to the final label. We sort the features by their correlation with the label and we select the best one from the source set and from the target sets to create the product feature. For (non-linear) DSFT, we first compute the normal (additive) features for the source dataset and then use them to compute the product feature. We use the UCI benchmark datasets to keep the simulation as close as possible to reality. All the models are run following the *large data setup* described in Section 6.1. The results presented in Table 6 show that DP performs consistently better than both versions of DSFT. In some cases such as concrete, protein, *skillcraft* and sml, DSFT's error is orders of magnitude higher than the OLS baseline, indicating that the values predicted by it to fill up the source dataset ended up biasing the final predictor heavily. As expected, the error of our approach increases w.r.t OLS's error, but still, it only gets outperformed in 3 out of 9 cases. ## B.5 **Extra Real-Life Data Results** Here we present the extra results from Section 7.1. Table 7 shows the results following the small data setup with 3 new features, and Table 8 shows the results following the large data setup with nT = 300. In Table 7, our approach performs consistently better than OLS and the linear DSFT with significantly lower error in 7 and 5 out of 9 datasets, respectively. The non-linear DSFT beats DP in 3 datasets, and there was no significant difference in the other 6. Table 6: RMSE of each approach on the non-additive version of the UCI datasets. The target dataset size is fixed at 100 samples. An asterisk (*) marks the datasets where there is no significant difference between DP and OLS and † marks when OLS is significantly better than DP. | Dataset | ols | dsft | dsft-nl | dp | |-------------|----------------|---------------------|--------------------|---------------| | concrete* | 11 ± 0.996 | 2.88e+03 ± 1.09e+03 | 4.9e+03 ± 2.84e+03 | 11 ± 0.997 | | energy* | 2.87 ± 0.388 | 3.27 ± 0.398 | 4.04 ± 0.457 | 2.86 ± 0.348 | | kin40k | 1.05 ± 0.0266 | 1.36 ± 0.188 | 1.58 ± 0.149 | 1.03 ± 0.0246 | | parkinsons | 11.4 ± 1.21 | 10.5 ± 0.837 | 10.1 ± 0.3 | 9.72 ± 0.258 | | pol† | 53.5 ± 18 | 133 ± 84.4 | 240 ± 47.1 | 180 ± 46.7 | | protein† | 0.735 ± 0.0679 | 162 ± 63 | 52.6 ± 8.7 | 0.99 ± 0.353 | | pumadyn32nm | 1.23 ± 0.0793 | 1.45 ± 0.0583 | 1.32 ± 0.0626 | 1.04 ± 0.0406 | | skillcraft* | 0.344 ± 0.245 | 25.9 ± 60.1 | 9.18 ± 2.57 | 0.405 ± 0.431 | | sml† | 3.05 ± 0.965 | 14.2 ± 7.32 | 31.5 ± 14.3 | 4.26 ± 1.4 | | Dataset | ols | dsft | dsft-nl | dp | |-------------|---------------------|---------------|----------------|---------------| | concrete* | 14.7 ± 2.5 | 14.4 ± 3.04 | 13.1 ± 0.7 | 14.4 ± 3.02 | | energy† | 3.33 ± 0.247 | 3.61 ± 0.22 | 3.59 ± 0.22 | 3.54 ± 0.194 | | kin40k | 1.12 ± 0.062 | 1.06 ± 0.0351 | 1.06 ± 0.036 | 1.06 ± 0.0294 | | parkinsons | 16.8 ± 5.32 | 10.7 ± 0.357 | 10.7 ± 0.386 | 10.7 ± 0.441 | | pol | 4.77e+09 ± 3.38e+10 | 108 ± 98.6 | 36.3 ± 6.26 | 36.6 ± 6.65 | | protein | 0.825 ± 0.11 | 0.767 ± 0.117 | 0.7 ± 0.0186 | 0.706 ± 0.025 | | pumadyn32nm | 1.49 ± 0.142 | 1.13 ± 0.0653 | 1.09 ± 0.0305 | 1.09 ± 0.0287 | | skillcraft | 0.338 ± 0.0439 | 0.27 ± 0.0135 | 0.257 ± 0.0104 | 0.26 ± 0.0114 | | sml | 3.55 ± 1.04 | 2.98 ± 0.852 | 2.71 ± 0.457 | 2.78 ± 0.493 | Table 7: RMSE of each approach following the small data setup. The 3 features with highest correlation with the label are removed from the source dataset. An asterisk (*) marks the datasets where there is no significant difference between DP and OLS and † marks when OLS is significantly better than DP. For the results with a larger target dataset in Table 8, we see that the OLS baseline benefits from the extra samples and so it outperforms DP in two cases, but DP is still superior in 6 others. In comparison with DSFT, DP was again better showing that it can make better use of the additional data. Table 8: RMSE of each approach following the large data setup. The 5 features with the highest correlation with the label are removed from the source dataset and the target dataset size is fixed at 300 samples. An asterisk (*) marks the datasets where there is no significant difference between DP and OLS and † marks when OLS is significantly better than DP. | Dataset | ols | dsft | dsft-nl | dp | |-------------|----------------|----------------|-----------------|-----------------| | concrete† | 10.5 ± 0.782 | 10.5 ± 0.793 | 11 ± 0.554 | 10.7 ± 0.745 | | energy | 2.85 ± 0.321 | 2.92 ± 0.3 | 2.94 ± 0.286 | 2.84 ± 0.318 | | kin40k | 1.01 ± 0.0168 | 1.01 ± 0.015 | 1.14 ± 0.0631 | 1.01 ± 0.0146 | | parkinsons* | 9.59 ± 0.248 | 9.42 ± 0.231 | 9.79 ± 0.267 | 9.55 ± 0.219 | | pol | 32.6 ± 1.09 | 31.2 ± 0.594 | 39.8 ± 3.58 | 31.1 ± 0.419 | | protein | 0.683 ± 0.0241 | 0.681 ± 0.0213 | 0.668 ± 0.00648 | 0.668 ± 0.00612 | | pumadyn32nm | 1.05 ± 0.0384 | 1.01 ± 0.0321 | 1.01 ± 0.0301 | 1.01 ± 0.0309 | | skillcraft† | 0.354 ± 0.351 | 0.644 ± 1.18 | 0.399 ± 0.358 | 0.4 ± 0.368 | | sml | 2.34 ± 0.329 | 2.22 ± 0.277 | 2.19 ± 0.116 | 2.14 ± 0.11 |
Review 1: Summary: In this paper, authors study the problem of learning a predictive model after new input features are discovered, but there are only a few observations of them, while there are plenty of observations of historical data. They propose the data-pooling method that learns an estimator that jointly minimizes the squared error on the source and target data. To tackle the problem of difference in dimensions author restrict the estimator to relevant covariates on source data. Results on real-life datasets show that DP performs consistently better than the baseline approach (OLS and DSFT). Strengths and Weaknesses: **Strengths** - Simple framework to study the problem of transfer learning where the input dimensions differ. - Author present theoretical study showing that DP Is robust to negative transfer when the number of samples very small. **Weaknesses** - A detailed description of the DSFT method should be included in the paper. The discussion of this method is restricted to only empirical comparison. It would be interesting to see what are the conceptual differences in the methodology. - Can the proposed method be extended to classification? Are there some direct implications? - Also author assumes that the distribution of covariates and labels across the overlapping dimensions remains invariant. It would be great to include some experiments/discussion on settings where such assumptions may be violated. Requested Changes: It would be great if authors can address weaknesses from above. Broader Impact Concerns: NA ================================================== Review 2: Summary: - The authors tackled an interesting problem, called incremental input transfer learning. More specifically, we have feature sets A and B. Source data only includes features set A but target data includes both feature sets A and B. - The authors use joint optimization of source and target datasets using equation (2) and (3) to achieve the linear model that works with both source and target datasets. This method is very simple but intuitive. - The authors provide some interesting experimental results even though the size of the data is too small. Strengths and Weaknesses: Strengths: - The authors solve the interesting problem which can be practical in real-world setting. - The proposed method is intuitive and working well in very small datasets regime. Weakness: - The proposed methods have too many limitations that it is hard to apply to real-world problems. - More specifically, it is only applicable for the underlying distributions with additive models. Also, it is only verified with linear model and with very small datasets. - The proposed method and theoretical supports are somewhat straightforward and too simple. - Paper writing can be significantly improved. Requested Changes: 1. Classification vs Regression - Is there a big difference to apply the proposed method to classification problems? - In other words, do we need significant modifications to convert the proposed method to apply for the classification problem? - If not, it would be good to show the impact of the proposed method in classification domains (and compare with the heterogeneous transfer learning baselines for the classification). 2. Figures - It would be good if the authors can provide two figures. (1) Explain the incremental input transfer learning (maybe in the introduction) (2) Describe the proposed method - Those two figures will significantly improve the readability of the paper. 3. Assumption of X'' - In Section 3, the authors said that source and target data follow the same conditional distribution between X and Y. - Therefore, theta = [theta', theta'']. - In that case, how does the X'' follow the normal distribution, independently? - I think X'' should be related to x_s and also follows the distributions of (d_T - d_S) portion of x_T. 4. Non-additive model - Section 3 is only satisfied if the underlying model is an additive model. - However, in reality, the label is not only dependent on the feature independently. They may be jointly dependent on the features. - More specifically, if the label is dependent on the multiplication of feature x_i and x_j where x_j is only included in the target dataset, how can the proposed method deal with this case? 5. Table 1 - Why do the authors discard many samples instead of using them for source, target or test data? - For instance, with kin40k, the sum of source and target data is less than 200 which is 0.5% of the entire data. - It would be good to explain why the authors only use a very small amount of data for training sets (both source and target). - Also, would it be good if the authors provide more extensive experiments with various ranges of source and target datasets? 6. Limitations - This work has many limitations. - It would be good to summarize the limitations of this work at the end of the paper. For instance, - It is only applicable with an additive model, - The authors only rely on the linear model, - The results are only based on very small data,  7. Etc. - It would be good to briefly explain about the "negative transfer learning" in the introduction. - nit; it would be better to position figure 1 and 2 side by side (with smaller size). Broader Impact Concerns: N/A ================================================== Review 3: Summary: In this paper, the authors study a variation of the heterogeneous transfer learning problem termed incremental input transfer learning. The main underlying idea of it is to assume that new input dimensions are added with time and that we would like to leverage on the historic data and on newly acquired data to learn a better prediction model. The latter’s performance is compared against the baseline given by the model learned from new data only. The authors make a set of simple assumptions for their model in linear regression setting and derive several insightful theoretical results from it. These theoretical results are further confirmed on both toy and UCI benchmarks. Strengths and Weaknesses: **Strengths** 1. A theoretical study of incremental input transfer learning setting for the linear regression model I think that it is important to lay down a theoretical foundation for a given computational model before trying it out on the real-world data and I appreciate the authors' effort made in this direction. 2. Empirical study highlighting the derived theoretical guarantees and their usefulness for UCI datasets The evaluation includes one baseline proposed to tackle the same problem as well as one baseline giving the score for when no historic data is used. I believe that this is a bare minimum for any transfer learning paper (even though, some other candidates can be adapted to solve the same task as explained below) **Weaknesses** 1. Considered model is rather simple, all main theoretical results are straightforward derivations from known linear regression results Indeed, most of the theoretical results of this paper are clearly simple corollaries from linear regression analysis. This stems from the fact that the proposed model is essentially a sum of two least-squares terms for source and target data plus a multiplication of the source model parameters by a mask matrix zeroing out all unobserved entries for features. That being said, I don't mind simplicity of the derived results but I doubt that they generalize to more complicated settings and may lack to capture intuition behind such TL setting as explained below. 2. Some results seem counterintuitive wrt the existing theory of learning across different domains. The cited work of Ben-David et al. considers a similar semi-supervised setting in the context of domain adaptation (Sections 5 and 6). Based on their theoretical analysis, they identified several regimes for the benefits that combined source and target error minimization can bring. They are captured in Fig. 1 and Theorem 3 of that paper and depend on the relative size of source and target domains. In this paper, the authors claim when discussing negative transfer in their setting after Theorem 5.2 : "Furthermore, this is true regardless of the amount of source and target samples, the number of extra features, or the value of their parameter $\theta''$." Given this kind of statement, I wonder whether the model studied by the authors really can explain the success behind the learning phenomenon in such setting for real-world settings. Requested Changes: As expected, my requested changes are directly related to the weaknesses mentioned above. 1. I think for the simple model considered in this paper, it is crucial to identify the limitations of such approach. I believe that it may be as useful as the positive results presented in this paper as it seems to me that it may not work well in many real-world settings. I would encourage the authors to spend some time on designing pitfall scenarios for their approach to show its limits. Once again, I think that there is nothing wrong in simplicity if it allows to capture our intuition about the studied subject, but the results presented in Theorem 5.2 seems to lack the latter quality. 2. I wonder why the authors didn't consider other heterogeneous TL methods such as Yan et al. 2018 and the recent HTL baseline from Redko et al, NeurIPS'20. In general, this remark is also related to the following claim made by the authors: "but we argue that for this kind of approach to work it requires some exploitable relationship between historical and new features, whereas our approach works when there is no such relationship". My question here is: in what practical scenarios the newly acquired features will have to meaningful relationship to the historic features? What would be the value of such features then? Some typos to fixed: p.3 In this section, want -> we want p.3 usupervised approach -> unsupervised p.5 unobserved features X”, -> should be . Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: The manuscript introduces a transfer learning algorithm that integrates novel and past data with different input dimensions. The aim is to develop a predictive model using limited observations of newly discovered input features, while a wealth of historical data is available. The manuscript focuses on the linear regression problem, and the classification problem would be a future work. The proposed data-pooling method learns an estimator that jointly minimizes the squared error on the source and target data. To address the problem of input dimensionality mismatch, the method restricts the estimator to relevant covariates on source data. The manuscript evaluates the proposed method against two baseline approaches, Ordinary Least Squares and Domain Specific Feature Transfer (Wei et al., 2019), on nine multivariate regression datasets from the UCI repository. The reviewers acknowledged that the manuscript addresses an important problem but expressed concern about the feasibility of the assumptions made. While they agreed that the manuscript's argument holds up under these assumptions, they felt that the assumptions were too restrictive to be applicable to real-world scenarios. As a result, they were uncertain if the readers of TMLR would find the manuscript relevant. The two primary assumptions were: A) that all newly observed features are completely independent of the original features, and B) that the relationship between feature labels in the real world can be expressed as an additive model. For point A) the authors have added an experiment simulating non-additive features using the UCI datasets and the proposed approach was able to cope with it in 6 out of 9 datasets. The results are in Appendix B.4. For point B) the authors have also added an experiment in Appendix B.3 simulating a distribution shift for the new features, showing that the proposed model is sensitive to it, but can cope with small shifts. ==================================================
# Codit: Conformal Out-Of-Distribution Detection In Timeseries Data Anonymous authors Paper under double-blind review ## Abstract Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution. This hinders their deployment in safety-critical applications such as autonomous vehicles and healthcare. The detection of a shift from the training distribution of individual datapoints has gained attention. A number of techniques have been proposed for such out-of-distribution (OOD) detection. But in many applications, the inputs to a machine learning model form a temporal sequence. Existing techniques for OOD detection in time-series data either do not exploit temporal relationships in the sequence or do not provide any guarantees on detection. We propose using deviation from the in-distribution temporal equivariance as the non-conformity measure in conformal anomaly detection framework for OOD detection in time-series data. Computing independent predictions from multiple conformal detectors based on the proposed measure and combining these predictions by Fisher's method leads to the proposed detector CODiT with guarantees on false detection in time-series data. We illustrate the efficacy of CODiT by achieving state-of-the-art results on computer vision datasets in autonomous driving. We also show that CODiT can be used for OOD detection in non-vision datasets by performing experiments on the physiological GAIT sensory dataset. Code, data, and trained models are available at https://drive. google.com/file/d/1uSNYGoSNWu4_d-nPAcIcxYyBYI8VLqPq/view?usp=sharing. ## 1 Introduction Deployment of machine learning models in safety-critical systems such as autonomous driving (Bojarski et al., 2016), and healthcare (De Fauw et al., 2018) is hindered by uncertainty in the outputs of these models. One such source of uncertainty at the inference time is a shift in the distribution of the model's inputs from their training distribution. Detection of this shift or out-of-distribution (OOD) detection on individual datapoints has therefore gained attention (Hendrycks & Gimpel, 2016; Lee et al., 2017; 2018; Hendrycks et al., 2019; Tack et al., 2020; Kaur et al., 2021). But this problem of OOD detection in time-series data is less explored (Cai & Koutsoukos, 2020; Ramakrishna et al., 2021; Feng et al., 2021). In time-series data, OOD detection aims at detecting those windows of time-series datapoints that are outside the training distribution. In this case, the distribution of interest is not just of individual datapoints, but the distribution of sequences in which these datapoints occur in the time-series data. In this paper, we propose CODiT, a novel algorithm for OOD detection in time-series data with a bounded false detection rate. Most of the existing approaches for detection in time-series data are *point-based*, i.e., they independently consider each datapoint in the window, such as individual frames in a video clip. In other words, these approaches do not exploit time-dependency among the datapoints in a window to detect if the window is OOD. An example of an OOD window is a car drifting video clip due to slippery ground or loss of control, given normal driving video clips as in-distribution (iD) data. As shown in Fig. 1, we cannot tell from a single frame if the car has drifted since drift is defined based on the relative relation between frames (e.g., a trajectory of a car). We define these OOD windows as *temporal OODs*, which require considering the sequence of datapoints in the window for detection. In contrast to the existing point-based approaches, we propose using time-dependency among the datapoints in a window for detection of temporal OODs. Specifically, we propose *using deviation from the iD temporal equivariance*, i.e. equivariance with respect ![1_image_0.png](1_image_0.png) | Figure 1: Drift in cars as temporal OODs. This trace is taken from the drift dataset by Noor et al. (2020). Table 1: Detection capabilities of OOD detectors in time-series data. Detector False Detection Rate Guarantees Temporal OODs Non-vision Datasets | | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----|----|----| | VAE-based (Cai & Koutsoukos, 2020) | ✓ | ✗ | ✓ | | β VAE-based (Ramakrishna et al., 2021) | ✓ | ✗ | ✓ | | Optical Flow based (Feng et al., 2021) | ✗ | ✓ | ✗ | | CODiT (Ours) | ✓ | ✓ | ✓ | to a set of temporal transformations learned by a model on windows drawn from the training distribution, for OOD detection in time-series data. This is because a model trained to learn equivariance with respect to temporal transformations on data drawn from the training distribution might not generalize or exhibit equivariance on OOD samples dissimilar to the training distribution. To bound false detection on the iD data, we leverage inductive conformal anomaly detection (ICAD) (Balasubramanian et al., 2014). ICAD is a general framework for testing if an input conforms to the training distribution by computing quantitative scores defined by a non-conformity measure. A p-value, computed by comparing non-conformity score of the input with these scores on the data drawn from training distribution, indicates anomalous behavior of the input. The detection performance of ICAD, however, depends on the choice of the non-conformity measure used in the framework (Balasubramanian et al., 2014). We propose using *deviation from the expected temporal equivariant behavior of a model learned on data drawn from the* training distribution as the non-conformity measure in ICAD for OOD detection in the time-series data. ICAD computes a single p-value of the input to detect its anomalous behavior. To enhance detection performance, we propose using multiple (n > 1) p-values computed from n transformations sampled as independent and identically distributed (IID) variables from a distribution over the set of temporal transformations. The intuition for using multiple transformations is that an OOD window might behave as a transformed iD window with one transformation but the likelihood of this decreases with the number of temporal transformations. Using Fisher's method (Toccaceli & Gammerman, 2017) to combine these n independent p-values leads to the proposed detector CODiT for conformal OOD detection in time-series data with a bounded false detection rate. The contributions of this paper can be summarized as: 1. Novel Measure for OOD Detection in Time-Series Data. To our knowledge, all the existing nonconformity measures for OOD detection are defined on individual datapoints. We propose a measure that is defined on the window containing information about the sequence of time-series datapoints for enhancing the detection of temporal OODs. With a model trained to learn iD temporal equivariance via the auxiliary task of predicting an applied transformation on windows drawn from the training distribution, we propose using error in this prediction as the non-conformity measure in ICAD for OOD detection in time-series data. 2. Enhanced detection performance. To enhance the detection performance, we propose to use Fisher's method as an ensemble approach for combining predictions from multiple conformal anomaly detectors based on the proposed measure. 3. CODiT. Computing n independent p-values of the input from the proposed measure in the ICAD framework, and combining these values by Fisher's method leads to the proposed detector CODiT with a bounded false detection rate. 4. Evaluation. For comparison with the point-based approaches, we perform experiments on weather and night OODs on a driving dataset simulated by CARLA (Dosovitskiy et al., 2017), achieving state-of-the-art (SOTA) results. We outperform the existing non-point based SOTA (Feng et al., 2021) on vision temporal OODs. To illustrate that CODiT can be used for OOD detection beyond vision, we also perform experiments and obtain SOTA results on the real physiological GAIT dataset (Hausdorff et al., 2000). ## 2 Problem Statement And Motivation 2.1 Problem Statement OOD detection in time-series data takes in a window Xt,w of consecutive time-series datapoints (xt, xt+1*, . . . , x*t+w−1), and labels Xt,w as iD or OOD. Here t is the starting time of the window and w is the window length. ## 2.2 Motivation Of The Proposed Ood Detection Measure The existing point-based detectors might not be able to detect temporal OODs. An example of a temporal OOD, as shown in Fig. 2, is the replay window where camera gets stuck at a single frame and starts generating the same image over and over again. We need to consider the sequence of same image in a replay window to detect the window as OOD. Detection results on replay dataset in Fig. 3 shows that both the existing point-based detectors, namely variational autoencoder (VAE) based Cai et al.'s (2020), and β VAE-based Ramakrishna et al.'s (2021), perform poorly in the detection of these temporal OODs. We, therefore, propose using time-dependency among the datapoints in a window for OOD detection in time-series data. To our knowledge, Feng et al.'s (2021) detector is the only existing OOD detector in time-series data that takes into account time-dependency among the individual datapoints in a window. It does so by extracting optical flow information from consecutive frames in a video clip. As shown in the experimental results on temporal OODs (Fig. 7) of Section 5, Feng et al.'s detector can thus be used to detect temporal OODs. However, since this detector depends on the optical flow information, it is restricted to the vision data. In contrast, CODiT can be used to detect temporal OODs across domains without relying on any domain-specific features. Table 1 compares the detection capabilities of CODiT with the existing OOD detectors in time-series data. ![2_image_1.png](2_image_1.png) Figure 2: **Replay window**: An example of the temporal OOD where camera gets stuck and generates the same image in the entire window. This trace is generated by CARLA, an open-source simulator for autonomous driving research (Dosovitskiy et al., 2017). ## 3 Background And Notations Figure 3: **ROC curves (left), TNR (at 95% TPR on** ![2_image_0.png](2_image_0.png) right) results on replay OODs. The existing point-based OOD detectors in time-series data, namely VAE-based Cai et al.'s (2020), and β VAE-based Ramakrishna et al.'s (2021) perform poorly in the detection of these temporal OODs. CODiT uses error in the temporal equivariance learned by a model on windows drawn from the training distribution as the non-conformity measure in the inductive conformal anomaly detection (ICAD) framework. With multiple p-values obtained from the proposed measure in ICAD, the final OOD detection score is computed by combining these values by Fisher's method. Here we provide the background on equivariance, ICAD, and Fisher's method required for technical details of the proposed OOD detector, CODiT. We also define the notations used in the rest of the paper. ## 3.1 Equivariance A function f is equivariant with respect to a transformation g if we know how the output of f changes if we transform its input from x to g(x). Learning features that are equivariant to translations on inputs by sharing of kernels in the convolutional neural networks has played a crucial role in the success of these networks (Cohen & Welling, 2016). Invariance is a special case of equivariance where the output of f does not change by the transformation g on its input. Invariance with respect to geometric transformations such as rotation, tilt, scale, etc. is a desired property of the deep learning classifiers. For example, classification results on the upright images of cats should not change with a tilt in these images. Definition 1 (Schmidt & Roth, 2012). For a set X, a function f is defined to be equivariant with respect to a set of transformations G, if there exists the following relationship between any transformation g ∈ G of the function's input and the corresponding transformation g ′ *of the function's output:* $$f(g(x))=g^{\prime}(f(x)),\forall x\in X.$$ $$(1)$$ ′(f(x)), ∀x ∈ X. (1) Invariance is a special case of equivariance where g ′is the identity function, i.e., the output of f remains unchanged by the transformation g on its input. Learning Equivariance: Autoencoding Variational Transformations (AVT). Augmenting training data with transformations from the set G of geometric transformations is a common approach to learning invariance with respect to G (Chen et al., 2020a;b; Chatzipantazis et al., 2021). The auxiliary task of predicting an applied transformation from G on the training data also encourages the model to learn equivariance with respect to G (Qi et al., 2019). Qi et al.'s 2019 "Autoencoding Variational Transformations" (AVT) framework trains a VAE to learn a latent space that is equivariant to transformations. For the set X of training images and the set G of geometric transformations, a VAE is trained to predict the applied transformation from G on an input x ∈ X. Equivariance between the latent space of VAE and G is learned by maximizing mutual information between the latent space and G. Definition 2 (Jenni & Jin, 2021). Temporal equivariance of a function f *from equation 1 is defined on a set* X of windows of consecutive time-series datapoints and with respect to a set G *of temporal transformations.* Some examples of temporal transformations on video clips are skipping every second frame in the clip (2x speed), shuffling the frames in the clip (shuffle), reversing the order of frames in the clip (reverse), and reversing the order of the second half frames in the clip (periodic). In the rest of the paper, we will use the notation GT to represent a set of temporal transformations. We call a function f as GT -equivariant if it learns equivariant representations of windows drawn from the training distribution with respect to the set GT of temporal transformations. We refer to the deviation from the expected result of this function f on a transformed input (transformed with a g ∈ GT ) as deviation from the iD GT -equivariance. ## 3.2 Inductive Conformal Anomaly Detection (Icad) Inductive Conformal Anomaly Detection (ICAD) (Laxhammar & Falkman, 2015) is a general framework for testing if an input conforms to the training distribution. It is based on a non-conformity measure (NCM), which is a real-valued function that assigns a non-conformity score α to the input. This score indicates nonconformance of the input with data drawn from the training distribution. The higher the score is, the more non-conforming or anomalous the input is with respect to training data. An example of the non-conformity score is the reconstruction error by a VAE trained on data drawn from the training distribution. The training dataset X of size l is split into a *proper training set* Xtr = {xj : j = 1*, . . . , m*} and a calibration set Xcal = {xj : j = m + 1*, . . . , l*}. Proper training set Xtr is used in defining NCM. In the example of reconstruction error by a VAE as the non-conformity score, the VAE trained on Xtr is used for computing the error. Calibration set Xcal is a held-out training set that is used for computing p-value of an input. p-value of an input x is computed by comparing its non-conformity score α(x) with these scores on the calibration datapoints: $$p\text{-}value(x)={\frac{|\{j=m+1,...,l:\ \alpha(x)\leq\alpha(x_{j})\}|+1}{l-m+1}}.$$ If x is drawn from the training distribution, then its non-conformity score is expected to lie within the range of scores for the calibration datapoints and thus higher p-values for the iD datapoints. With ϵ ∈ (0, 1) as the anomaly detection threshold, x is therefore detected as an anomalous input if the p-value of x is less than ϵ. $$\left(2\right)$$ False Detection Rate Guarantees. The false anomalous detection on an input drawn from the training distribution is upper bounded by the specified detection threshold ϵ in the ICAD framework. Lemma 1 (Balasubramanian et al., 2014). If an input x and the calibration datapoints xm+1, . . . , xl are independent and identically distributed (IID), then for any choice of the NCM defined on the proper training set Xtr, the p-value(x) in equation 2 is uniformly distributed. Moreover, we have P r (p-value(x) < ϵ) ≤ ϵ, where the probability is taken over xm+1, . . . , xl*, and* x. From Lemma 1, we know that if x and the datapoints in the calibration set Xcal are IID, then the pvalue(x) from equation 2 is uniformly distributed over {1/(l − m + 1), 2/(l − m + 1)*, . . . ,* 1}. The probability of p-value(x) less than ϵ or misclassifying x as anomalous is, therefore, P1≤i≤(l−m+1)ϵ 1/(l − m + 1) = ⌊(l − m + 1)ϵ⌋/(l − m + 1) ≤ ϵ. ## 3.3 Fisher'S Method The same hypothesis can be tested by multiple conformal predictors and an ensemble approach for combining these predictions can be used to improve upon the performance of individual predictors. Fisher's method is one of these approaches for combining multiple conformal predictions or p-values of an input from equation 2. Fisher value of an input x from n p-values is computed as follows: fisher-value($x$) = $t\sum_{i=0}^{n-1}\frac{(-\log t)^{i}}{i!},\text{where}t=\prod_{k=1}^{n}p_{k}.$ (3) ![4_image_0.png](4_image_0.png) Lemma 2 (Toccaceli & Gammerman, 2017). If n p-values, p1, . . . , pn*, are independently drawn from a* uniform distribution, then −2Pn i=1 log pi follows a chi-square distribution with 2n *degrees of freedom. Thus,* the combined p-value is $$P r\left(y\leq-2\sum_{i=1}^{n}\log p_{i}\right)=t\sum_{i=0}^{n-1}{\frac{(-\log t)^{i}}{i!}},$$ where t =Qn k=1 pk, y is a random variable following a chi-square distribution with 2n degrees of freedom, and the probability is taken over y*. Moreover, the combined p-value follows the uniform distribution.* ## 4 Temporal Equivariance For Conformal Ood Detection In Time-Series Data Here, we first classify OOD data in time-series into temporal and non-temporal types, and then provide details of the proposed detector CODiT. ## 4.1 Ood Data Types In Time-Series We classify OOD windows in time-series data into two types: *temporal* OODs and *non-temporal* OODs. A crucial property of the temporal OODs compared to the non-temporal OODs is that it is hard to detect temporal OODs by looking at individual datapoints within the window without considering time-dependency between these datapoints. Examples of temporal OODs in autonomous driving are car drifting video clips (Fig. 1), and replay OODs (Fig. 2). An example of the temporal OOD in healthcare is the GAIT (or waking pattern) of patients with neurodegenerative diseases. With the GAIT of healthy individuals as iD data, the GAIT of patients with neurodegenerative diseases, such as Parkinson's disease (PD), Huntington's disease (HD), and Amyotrophic Lateral Sclerosis (ALS), are examples of temporal OODs. Fig. 4 shows dynamics of the stride time (one of the walking Figure 4: GAIT in patients with neurodegenerative diseases (Hausdorff et al., 2000) as temporal OODs. pattern features) of a healthy control person and patients with PD, HD, and ALS disease. As shown in the Figure, we need a sequence of time-series datapoints to determine whether the walking pattern is from a healthy individual or a patient. In contrast to the temporal OODs, the non-temporal OODs can be detected by looking at individual datapoints. Examples of non-temporal OODs include driving video clips under rainy, foggy, or snowy weather, given the driving video clips under clear sunny weather as iD data. We can detect weather OODs by looking at images in the window independently. Based on these observations, we call a window Xt,w as a temporal OOD if Xt,w is drawn from OOD but confused to be drawn from iD by removing the time-dependency of individual datapoints within Xt,w (e.g., randomly shuffling the order of video clip frames). As shown in the experimental section 5, CODiT can be used to detect both temporal and non-temporal OODs in time-series data. ## 4.2 Codit CODiT uses an OOD detection score based on multiple p-values from ICAD. Here, we first define the proposed NCM to be used in the ICAD framework for computing a p-value along with the final detection score, and then formalize CODiT's algorithm with a bounded false detection rate. ## 4.2.1 Proposed Ncm And Ood Detection Score Proposed NCM. We propose to use time-dependency between datapoints in a time-series window for detection on the window. Unlike all the existing NCMs defined on individual datapoints, we propose an NCM that is defined on the window containing information about the sequence of datapoints in the window. Specifically, we propose using deviation from the expected iD GT -equivariance learned by a model on windows drawn from the training distribution as an NCM in ICAD for OOD detection in time-series data. Learning GT -equivariance via an auxiliary task of predicting the applied temporal transformation (such as shuffle, reverse, etc.) on a window requires learning changes in the original sequence of the datapoints in a predictable way. For a VAE model M trained to learn GT -equivariance on windows of proper training data in the AVT framework, we propose to use error in the prediction of the applied temporal transformation g ∈ GT on an input window Xt,w as the NCM: **PredictionError**(g,M(g(Xt,w))). We call the proposed NCM as the **Temporal Transformation Prediction Error (TTPE)** NCM. The existing AVT framework (Qi et al., 2019) is defined to learn equivariance with respect to geometric transformations on images. We extend it to learn GT -equivariance by: (1) Modifying VAE's architecture to accept windows of consecutive time-series datapoints as inputs. The time-series can be on vision (e.g. drift car video clip) or non-vision (e.g., GAIT) datapoints. (2) Modifying the auxiliary task to predict the applied temporal transformation from a set GT on windows of time-series datapoints. Motivation for TTPE-NCM. GT -equivariance learned by a model on windows drawn from the training distribution is more likely to work on iD data and is not guaranteed to generalize to OOD data dissimilar to that used for training. With the set GT = {2x *speed, shuffle, reverse, periodic, identity*}, we train a VAE model on the proper training data of the drift dataset to predict an applied transformation g sampled independently from a uniform distribution over GT . With GT as the set of five classes of temporal transformations, we use CrossEntropyLoss(g, M(g(Xt,w))) as the PredictionError(g, M(g(Xt,w))). Fig. 5 shows that the model has much higher prediction losses on the OOD windows than on the test iD windows on all the five ground truth transformations in GT . This supports our hypothesis that GT -equivariance learned on data drawn the training distribution is not likely to generalize on data drawn from OOD, and therefore higher prediction errors on OOD windows than on the iD windows. OOD Detection Score. Instead of using a single p-value from the TTPE-NCM in ICAD, we propose using multiple (n > 1) p-values to enhance detection. We require n non-conformity scores for both the input and the calibration datapoints for computing n p-values. These scores are computed from n transformations sampled independently from a distribution QGT over GT for both the input and the calibration datapoints: αi(Xt,w) = PredictionError(gi, M(gi(Xt,w))) : 1 ≤ i ≤ *n, g*i ∼ QGT , ![6_image_0.png](6_image_0.png) Figure 5: Higher P rediction Error(g, M(g(Xt,w))) = CrossEntropyLoss(g, M(g(Xt,w))) on the OOD windows than on the ![6_image_1.png](6_image_1.png) test iD windows of the drift dataset. This shows that GT -equivariance learned on the windows drawn from the training distribution is less likely to generalize on the windows drawn from OOD. Figure 6: AUROC vs. n (left), TNR vs. n (center) shows that the performance of CODiT increases with the increase in the number n of p-values used in the fisher-value for detection. False Detection Rate (FDR) of CODiT (n = 5) is bounded by ϵ on average (right). where Xt,w is the input or a calibration datapoint. Using Fisher's method to combine these n p-values gives us the fisher-value of input from equation 3. This value is expected to be higher for iD datapoints than OOD datapoints (Fisher, 1932), and therefore we perform detection by using a threshold on the fisher-value of input. In other words, CODiT uses fisher-value of the input as the final OOD detection score. Motivation for multiple p**-values for OOD Detection.** A single p-value measures deviation from the iD GT -equivariance of the input with respect to one transformation g ∼ QGT . With multiple p-values, we test this deviation with respect to multiple transformations sampled independently from QGT . We hypothesize that under one transformation, an OOD window might behave as the transformed iD window but the likelihood of this decreases with the number of transformations. For testing this hypothesis, we train three VAE models with the set GT equal to {2x speed, reverse, identity}, {2x *speed, shuffle, periodic, identity*}, and {2x *speed, reverse, shuffle, periodic, identity*}, respectively. These models are trained on the proper training set of the drift dataset to predict an applied transformation g sampled independently from a uniform distribution over GT . Again, we use PredictionError(g, M(g(Xt,w))) as the CrossEntropyLoss(g, M(g(Xt,w))). Fig. 6 shows that the detection performance (in AUROC and TNR) of CODiT increases as we increase the number n of p-values used in the final OOD detection score (or the fisher-value) for all of the three cases (|GT | = 3, 4, and 5). This supports our hypothesis on using multiple p-values for enhancing detection. ## 4.2.2 Algorithm And Guarantee For Ood Detection For the ICAD guarantees from Lemma 1 to hold on a p-value for an input sampled from the training distribution, we require the calibration set used in the p-value computation to be IID. With time-series calibration traces, we propose to create an IID calibration set by using exactly one calibration window from each calibration trace. For each trace, this window is sampled independently from a uniform distribution over the calibration windows in the trace. With QGT as the distribution over the set GT of temporal transformations, non-conformity scores on the calibration windows in the set are computed from the TTPENCM by sampling a transformation independently from QGT for each window in the set. n such sets of non-conformity scores on the n IID calibration sets are computed and passed as an input to the proposed Algorithm 1 for OOD detection in time-series data. Line 4 of the Algorithm samples a transformation g independently from QGT . The transformed input g(Xt,w) is passed through the VAE model M trained to learn GT -equivariance on the windows drawn from the proper training data (Line 5). Line 6 computes the non-conformity score α of Xt,w from the TTPE-NCM, which is the prediction error function f over the applied transformation g on Xt,w and the transformation gˆ predicted by M. p-value of Xt,w is computed in Line 7 by comparing its non-conformity score with these scores on the calibration windows. This process from sampling a transformation from QGT to computing the p-value of Xt,w is repeated n times to compute n p-values of Xt,w (Lines 3 to 8). Xt,w is detected as OOD if the fisher-value computed from its n p-values is less than the desired false detection rate ϵ (Line 10). Algorithm 1 CODiT: Conformal Out-of-Distribution Detection in Time-Series Data 1: **Input:** a window Xt,w of time-series data, VAE model M trained on proper training set of the iD windows, distribution QGT over the set GT of temporal transformations, prediction error function f, n sets of calibration set alphas {α k j : 1 ≤ k ≤ *n, m* + 1 ≤ j ≤ l}, and a desired false detection rate ϵ ∈ (0, 1) 2: **Output:** "1" if Xt,w is detected as OOD; "0" otherwise 3: for k ← 1*, . . . , n* do 4: g ∼ QGT 5: gˆ ← M(g(Xt,w)) 6: α ← f(g, gˆ) 7: pk ← |{j=m+1*,...,l*: α≤α k j }|+1 l−m+1 8: **end for** 9: t ←Qn k=1 pk 10: if tPn−1 i=0 (− log t) i i! < ϵ then return 1 **else return** 0 Theorem 1. The probability of false OOD detection on Xt,w by Algorithm 1 is upper bounded by ϵ. Proof. An IID calibration set is used for a p-value computation in Algorithm 1. If an input Xt,w is sampled from the training distribution, then Xt,w and datapoints in the calibration set are also IID. The nonconformity scores of Xt,w and calibration datapoints used in the p-value computation of Line 7 of the Algorithm 1 are therefore IID conditioned on the proper training set and the set of temporal transformations GT . With the n IID calibration sets sampled independently from the calibration traces, n non-conformity scores computed from n transformations sampled independently from QGTfor both the input and the calibration datapoints, and Lemma 1, the n p-values of Xt,w computed in Algorithm 1 are independent and uniformly distributed. Due to this property on the n p-values and Lemma 2, the combined p-value in Line 10 of Algorithm 1 is also uniformly distributed. Therefore, the probability of falsely detecting Xt,w as OOD from the combined p-value (or the fisher-value(Xt,w)) is upper bounded by ϵ due to Lemma 1. The unconditional probability that an input Xt,w sampled from the training distribution D is classified as OOD by Algorithm 1 is bounded by ϵ. For this guarantee to hold for a sequence of inputs, we require an independent calibration set for every input in the sequence. This is computationally inefficient for real-time applications and therefore a fixed calibration set is used for all the inputs in the offline version of the ICAD algorithm (Laxhammar & Falkman, 2015). The average false detection rate on the sequence of inputs drawn from D in this setting is expected to be empirically calibrated with or even higher than ϵ. We also fix the n sets of IID calibration datapoints and pass it as an input to the Algorithm 1. Box plots in Fig. 6 (right) show that the false detection rate of CODiT is empirically bounded by ϵ on average. ## 5 Experimental Results We perform experiments on the following three computer vision datasets in autonomous driving: (Dataset 1) a driving dataset under different weather conditions generated by CARLA, an open-source simulator for autonomous driving research (Dosovitskiy et al., 2017); (Dataset 2) a replay OOD dataset simulated by CARLA; and (Dataset 3) a real driving drift dataset (Noor et al., 2020). In addition, to illustrate that CODiT can be used for OOD detection beyond vision, we also perform experiments on: a real physiological GAIT sensory dataset (Hausdorff et al., 2000) (Dataset 4). All the existing approaches report their results on weather OODs. So, we compare CODiT's performance with the existing approaches on weather OODs (Dataset 1). Results on temporal OODs in vision datasets, i.e., replay (Dataset 2), and drift (Dataset 3) are compared with the existing non-point based SOTA (Feng et al., 2021). Since Feng et al. (2021)'s approach is not applicable to non-vision datasets, we generate a | OOD | AUROC (↑) | TNR (90% TPR) (↑) | | Detection Delay (@95% TPR) (↓) | | | | | | | | | |-------------|--------------|---------------------|------------------|----------------------------------|-------|--------------|-------|-------------|-------|------|------|------| | VAE | β−VAE Feng's | Ours | VAE β−VAE Feng's | Ours | VAE | β−VAE Feng's | Ours | | | | | | | Rainy | 53.56 | 92.07 | 84.21 | 99.71 | 0 | 81.00 | 27.63 | 98.57 | NA | 0.15 | 5.33 | 0.15 | | Foggy | 52.02 | 41.02 | 86.09 | 99.66 | 2.30 | 2.75 | 28.01 | 98.05 33.05 | 19.65 | 5.37 | 0 | | | Snowy 53.23 | 97.52 | 95.91 | 96.67 | 0 | 99.69 | 78.20 | 86.47 | NA | 0.33 | 0 | 0 | | | Night | 50.86 | 95.57 | 75.07 | 98.94 | 1.78 | 71.90 | 0.40 | 94.55 72.80 | 4.07 | 85.4 | 1.41 | | non-point based baseline for the detection of temporal OODs in GAIT Dataset 4 and compare our results with it. We also perform ablation studies on the drift dataset. ## 5.1 Weather Oods 5.1.1 Dataset Training Set. We generate 33 driving traces of varying lengths in clear daytime weather as the iD training traces. We randomly split these into 20 traces of the proper training set Xtr and 13 traces of the calibration set Xcal. Windows from Xtr are sampled for training the model. Windows from Xcal are sampled n = 20 times (with one window from each calibration trace at a time to make each window in the set independent) for calculating the 20 sets of calibration non-conformity scores. Test Set. We generate 27 driving traces of varying lengths on a clear day weather as the iD test traces. Weather and night time OOD traces are generated by using the automold software (Saxena, 2018)1 on the 27 iD test traces. OOD traces start from iD and gradually become OOD, i.e., the intensity of rain, fog, snow, or low brightness (for night) starts increasing gradually turning into the OOD traces. Examples of these iD and OOD windows are shown in Appendix. ## 5.1.2 Training Details, Gt**, And Ttpe-Ncm** We train an VAE model with the R3D network architecture (Tran et al., 2018) on the windows of length w = 16 from Xtr. R3D network is the 3D CNNs with residual connections and thus can be used on the 3D time-series input data. We use GT = {2x Speed, Shuffle, Periodic, Reverse, Identity} and train the model to predict the applied transformation g ∈ GT with cross-entropy loss between the true and the predicted transformation. CrossEntropyLoss(g, M(g(Xt,w))) is used as the P rediction Error(g, M(g(Xt,w))). The value of this loss is used as the non-conformity score α for computing the p-value of an input Xt,w. ## 5.1.3 Results We report results on the sliding windows (w = 16) of the test iD and OOD traces. We call iD as positive and OOD as negative. We report AUROC, TNR(@95% TPR), and detection delay(@95% TPR) in Table 2. Starting from the first ground truth OOD window in a trace, number of windows required to detect the first OOD window in the trace is reported as the detection delay. This number is averaged over the total number of OOD traces and reported in the table. As can be seen CODiT outperforms other approaches, except for Snowy OOD, where our approach is the second best. ## 5.2 Temporal Oods 5.2.1 Vision: Replay And Drift We compare the performance of the current non-point based SOTA OOD detector in time-series data, i.e., Feng et al. (2021)'s detector with CODiT on vision temporal OODs. We use the same model architecture, GT , and TTPE-NCM on replay and drift datasets as we use on CARLA dataset from Section 5.1.2. Replay Dataset. Replay dataset is generated from the CARLA's 27 iD test traces on a clear day weather by randomly sampling a position in each trace. All images from the sampled position in the trace are replaced 1Automold is a software used for augmenting road images to have various weather and road conditions. ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 7: CODiT outperforms SOTA detector Feng et al. (2021) on temporal OODs in vision with the window length w = 16 (left). CODiT performs consistently well with different window lengths of w = 18, and 20 (right). with the image at the sampled position in the original trace. Again, results are reported for n = 20, and on the sliding windows of replay OODs. Drift dataset. We split 72 iD traces of cars driving straight without any drift from the drift dataset (Noor et al., 2020) into 24 for Xtr, 14 for Xcal, and 34 for test iD traces. Windows from Xtr are sampled for training the VAE. Windows from Xcal are sampled n = 20 times (with one window from one trace at a time to make each window in the set independent) for calculating the 20 sets of calibration non-conformity scores. We report results on the sliding windows of 34 iD test and 100 OOD drift traces. Results. Fig: 7 (left) compares the ROC, AUROC and TNR (@95% TPR) results of CODiT with Feng et al. (2021)'s detector on the replay and drift datasets. We achieve SOTA results on both datasets. Ablations on drift. We perform the following ablation studies on the drift dataset. All VAE models used in these studies are trained on the proper training set of the drift dataset with the same model architecture, GT , and TTPE-NCM from Section 5.1.2. (1) **Performance of CODiT with Different Window Lengths** w: We train two VAE models with w = 18, 20 and compare the performance of CODiT (with n = 20) with Feng et al. (2021)'s detector on these window lengths. Fig 7 (right) shows that with both w = 18 and 20, CODiT performs consistently well. (2) **Performance of CODiT with Different** |GT|: We compare the performance of CODiT with different sizes of the transformation set. Table 3 shows that the performance of CODiT (n = 5) increases with |GT |. (3) Using Deviation from iD GT **-equivariance as NCM:** With GT = {2x *speed, shuffle, reverse, periodic, identity*}, and for all ground truth temporal transformations in GT , Fig. 5 shows that the non-conformity score from TTPE-NCM is much higher for OOD windows than the test iD windows. This justifies our hypothesis that GT -equivariance learned by a model on windows drawn from training distribution is not likely to generalize on windows drawn from OOD. (4) **CODiT's Performance Increases with** n: We train three VAE models with GT equal to {2x speed, reverse, identity}, {2x *speed, shuffle, periodic, identity*}, and {2x *speed, reverse, shuffle, periodic, identity*}. Results on AUROC and TNR in Fig. 6 shows that the performance of CODiT improves with the number n of p-values used for detection in all the three cases (|GT | = 3, 4, and 5). This justifies our hypothesis that under one transformation an OOD window might behave as a transformed iD window but the likelihood of that decreases with the number of transformations. (5) **Bounded False Detection Rate (FDR):** For the VAE model with |GT | = 5 described above, we perform FDR experiments with a larger calibration dataset of approximately 862 calibration datapoints. This set is obtained by including all sliding windows on all calibration traces in the set. 34 calibration traces are randomly selected from the set of 48 in-distribution traces and the rest 14 are used as test traces. This is repeated 5 times and the generated box-plot of CODiT (n = 5) is shown in Fig. 6 (right). For all the values of ϵ = 0.05*.k, k* = 1*, . . . ,* 10 in the plot, the average FDR is aligned with ϵ. ## 5.2.2 Non-Vision: Gait Dataset. We use the physiological sensory GAIT dataset (Hausdorff et al., 2000) for our case study on | of the transformation set GT . |GT | Transformations | AUROC | | |--------------------------------------------------------|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Speed, Identity, Shuffle | 84.78 | | | 3 | Reverse, Shuffle, Identity | 85.47 | | Speed, Reverse, Identity | 87.67 | Table 4: AUROC of baseline/CODiT for OOD detection on GAIT dataset with different window lengths w. OOD w = 16 w = 18 w = 20 Baseline Ours Baseline Ours Baseline Ours ALS 78.25 68.62 77.73 79.61 77.99 80.69 PD 74.11 85.38 74.18 84.25 74.52 84.40 HD 76.97 94.17 76.64 95.42 76.68 93.74 ALL 76.23 83.85 75.99 86.66 76.21 86.68 | | Speed, Shuffle, Periodic, Identity | 88.08 | | | 4 | Speed, Identity, Shuffle, Reverse | 88.76 | | Speed, Reverse, Periodic, Identity | 89.56 | | | 5 | Speed, Shuffle, Reverse, Periodic, Identity | 90.78 | non-vision temporal OODs. This dataset consists of records on 16 healthy control subjects. We split these into 6 for Xtr, 5 for Xcal, and 5 for test iD records. We use all 27 records from the severe patient group with neurodegenerative diseases as OOD records. These 27 records contain 9 records from each of the three diseases, namely Amyotrophic Lateral Sclerosis (ALS), Parkinson's (PD), and Huntington's disease (HD). Training Details, GT**, and TTPE-NCM.** We use the 1D derived time-series features from the dataset (the .ts files) to train a VAE model with the Lenet5 architecture (LeCun et al., 1998) on windows sampled from Xtr. Lenet5 uses 2D CNNs that can be used on the time-series data of 1D feature space. We use GT = {high-pass filter, high-low filter, low-high filter, identity}. By high-low (or low-high) filter, we mean that we apply high (or low)-pass filter to the first half features and low (or high)-pass filter to the last half features of the dataset. Again, we use the cross entropy-loss between the true and predicted transformation as the TTPE-NCM. Baseline and Results. We use a one-class SVM trained on the auto-correlated features in the time dimension of all the sliding windows in Xtr as a baseline. We report results on the sliding windows of the test iD and OODs records. Table 4 compares the AUROC performance of CODiT (n = 100) with baseline on individual and all (ALS, PD, and HD) OODs with different sliding window lengths w (16, 18, and 20). As can be seen, CODiT outperforms the baseline except for one case. ## 6 Related Work OOD detection in non time-series datasets such as CIFAR-10 (Krizhevsky et al., 2009) has been extensively studied and detectors with OOD scores based on the difference in statistical, geometrical or topological properties of the individual iD and OOD datapoints have been proposed. These detectors can be classified into supervised (Kaur et al., 2021), self-supervised (Tack et al., 2020), and unsupervised (Lee et al., 2017) categories. Unsupervised approaches do not require access to any OOD data for training the detector, while supervised approaches do. Self-supervised approaches are the current SOTA for OOD detection in non timeseries data which require a self-labeled dataset for training the detector. This dataset is created by applying transformations to the training data and labeling the transformed data with the applied transformation. In this paper, we consider the problem of OOD detection in time-series data and the proposed approach, CODiT, is a self-supervised OOD detection approach, where the self-labeled dataset is created by applying temporal transformations on the windows drawn from the training distribution. Recently, there has been interest in leveraging ICAD for OOD detection with guarantees on false detection rate (Cai & Koutsoukos, 2020; Ramakrishna et al., 2021; Kaur et al., 2022; Haroush et al., 2021). While iDECODe (Kaur et al., 2022), and Haroush et al.'s (2021) are OOD detectors for individual datapoints, Cai et al.'s (2020), and Ramakrishna et al.'s (2021) are detectors for time-series data. iDECODe (2022) uses error in the equivariance learned by a model with respect to a set of transformations on individual datapoints as the non-conformity measure in ICAD for detection in non time-series data. Haroush et al. 2021 propose using a combined p-value from different channels and layers of convolutional neural networks (CNN) for detection. It is not clear how to directly apply individual point detectors to time-series data with the ICAD guarantees due to the following two reasons. First, even if we apply these detectors to individual datapoints in the timeseries window independently, we do not know how to combine detection verdicts on these datapoints for detection on the window. Second, for detection guarantees by ICAD, it is required that all non-conformity scores for p-value computation to be IID (Laxhammar & Falkman, 2015). Since these detectors are not solving OOD detection problem in time-series data it is not clear how to apply them to time-series while preserving the IID assumption on the time-series data. Also, iDECODe uses a single p-value for detection and we propose using multiple (n > 1) independent p-values to be combined by the Fisher's method for preserving the detection guarantees. In contrast to (Haroush et al., 2021), our approach is not limited to CNN classifiers and can in principle be used for any type of other predictive models as well. Cai et al. (2020) propose using reconstruction error by VAE on an input image as the non-conformity measure in the ICAD framework. Martingale formula (Vovk et al., 2003) is used to combine multiple pvalues computed on multiple samples of the input in the latent space of the VAE. The detection score on a window is then computed by applying cumulative sum procedure (Basseville et al., 1993) on the martingale values of all the images in the window. Ramakrishna et al. (2021) propose using KL-divergence between the disentangled feature space of β-VAE on an input image and the normal distribution, as the non-conformity measure in the ICAD framework. They also use the martingale formula to combine p-values of all the images in the window for detection. Both these detectors are point-based, and as shown by the experiments on replay OODs, these might perform poorly in the detection of temporal OODs. CODiT computes the p-value of the window (and not individual datapoints in the window) in the ICAD framework. It is, therefore, a non-point based approach and as shown by experiments on temporal OODs in Section 5, it can be used to detect these OODs. To our knowledge, the current SOTA approach for OOD detection in time-series data is by Feng et al. (2021). They propose extracting optical flow information from a window of the time-series data and training a VAE on this information. KL-divergence between the trained VAE and a specified prior is used as the OOD score. This detector uses optical flow to extract time-dependency in the frames of a window and thus can be used to detect temporal OODs. However, this approach does not provide any guarantees on detection and will not work on non-vision datasets as it relies on optical flows. As shown in the experiments on the GAIT sensory dataset, CODiT can be used for OOD detection in non-vision datasets. Anomaly detection in time-series data is also a closely related and an active research area (Ishimtsev et al., 2017; Guo & Bardera, 2018; Gao et al., 2020). In this paper, we consider the detection of a special class of anomalous data, the OOD data (data lying outside the training distribution). For instance, let us consider the case where most of the training data is clean and the rest is adversarially perturbed. In this case, the rare adversarial inputs are anomalous with respect to the training data, where most of the training data was drawn from the training distribution of clean data. However, adversarially perturbed inputs are not OOD as some of the training data was sampled from the training distribution of these adversarially perturbed data. ## 7 Conclusion And Discussion We propose to use time-dependency between the datapoints in a time-series window for OOD detection on the window. Specifically, we propose using deviation from the temporal equivariance learned by a model on windows drawn from training distribution as an NCM in the conformal prediction framework for OOD detection in time-series data. Computing independent predictions from multiple conformal detectors based on the proposed measure and combining these predictions by Fisher's method leads to the proposed detector CODiT with guarantees on false OOD detection in time-series data. We illustrate the efficacy of CODiT by achieving SOTA results on computer vision datasets in autonomous driving, and the GAIT sensory data. The time complexity of CODiT is as follows. At inference time, ICAD computes the non-conformity score of an input and compares it with the scores of the pre-computed (in offline settings) calibration datapoints for anomaly detection. The time-complexity of ICAD is therefore O(non-conformity score computation of the input+|calibration set|). Non-conformity score computation in our case is the output generation (i.e., prediction of the applied transformation) by the VAE model. We found it to be approximately 0.003 seconds in our experiments. The time-complexity of ICAD is for calculating one p-value of the input. CODiT uses multiple (n) p-values and combine them using Fisher's test for OOD detection. So, the time-complexity of CODiT = n × time-complexity of the ICAD framework, where n is the number of p-values. Therefore the time-complexity of CODiT increases linearly with the number n of p-values used for detection. As seen from Fig. 6, detection performance of CODiT improves with n. So, it is a trade-off between time-complexity and detection performance. We also observe that the performance of CODiT improves with the number of transformations in the set of temporal transformations. So, using all (instead of choosing a subset) of temporal transformations suitable for the application works better for CODiT. ## References Vineeth Balasubramanian, Shen-Shyang Ho, and Vladimir Vovk. Conformal prediction for reliable machine learning: theory, adaptations and applications. Newnes, 2014. Michele Basseville, Igor V Nikiforov, et al. *Detection of abrupt changes: theory and application*, volume 104. prentice Hall Englewood Cliffs, 1993. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. *arXiv preprint arXiv:1604.07316*, 2016. Feiyang Cai and Xenofon Koutsoukos. Real-time out-of-distribution detection in learning-enabled cyberphysical systems. In *2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS)*, pp. 174–183. IEEE, 2020. Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Kostas Daniilidis, and Edgar Dobriban. Learning augmentation distributions using transformed risk minimization. *arXiv preprint arXiv:2111.08190*, 2021. Shuxiao Chen, Edgar Dobriban, and Jane H Lee. A group-theoretic framework for data augmentation. Journal of Machine Learning Research, 21(245):1–71, 2020a. Shuxiao Chen, Edgar Dobriban, and Jane H Lee. A group-theoretic framework for data augmentation. Journal of Machine Learning Research, 21(245):1–71, 2020b. Taco Cohen and Max Welling. Group equivariant convolutional networks. In *International conference on* machine learning, pp. 2990–2999. PMLR, 2016. Jeffrey De Fauw, Joseph R Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. *Nature medicine*, 24(9):1342–1350, 2018. Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In *Conference on robot learning*, pp. 1–16. PMLR, 2017. Yeli Feng, Daniel Jun Xian Ng, and Arvind Easwaran. Improving variational autoencoder based out-ofdistribution detection for embedded real-time applications. *ACM Transactions on Embedded Computing* Systems (TECS), 20(5s):1–26, 2021. RA Fisher. Statistical methods for research workers. 4* edition oliver & boyd, 1932. Jingkun Gao, Xiaomin Song, Qingsong Wen, Pichao Wang, Liang Sun, and Huan Xu. Robusttad: Robust time series anomaly detection via decomposition and convolutional neural networks. arXiv preprint arXiv:2002.09545, 2020. Yuejun Guo and Anton Bardera. Shnn-cad+: An improvement on shnn-cad for adaptive online trajectory anomaly detection. *Sensors*, 19(1):84, 2018. Matan Haroush, Tzviel Frostig, Ruth Heller, and Daniel Soudry. A statistical framework for efficient out of distribution detection in deep neural networks. In *International Conference on Learning Representations*, 2021. Jeffrey M Hausdorff, Apinya Lertratanakul, Merit E Cudkowicz, Amie L Peterson, David Kaliton, and Ary L Goldberger. Dynamic markers of altered gait rhythm in amyotrophic lateral sclerosis. *Journal of applied* physiology, 2000. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. *arXiv preprint arXiv:1610.02136*, 2016. Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. In *Advances in Neural Information Processing Systems*, pp. 15663–15674, 2019. Vladislav Ishimtsev, Alexander Bernstein, Evgeny Burnaev, and Ivan Nazarov. Conformal k-nn anomaly detector for univariate data streams. In *Conformal and Probabilistic Prediction and Applications*, pp. 213–227. PMLR, 2017. Simon Jenni and Hailin Jin. Time-equivariant contrastive video representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9970–9980, 2021. Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, and Insup Lee. Detecting oods as datapoints with high uncertainty. *arXiv preprint arXiv:2108.06380*, 2021. Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, and Insup Lee. iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection, Association for the Advancement of Artificial Intelligence, 2022. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Rikard Laxhammar and Göran Falkman. Inductive conformal anomaly detection for sequential detection of anomalous sub-trajectories. *Annals of Mathematics and Artificial Intelligence*, 74(1):67–94, 2015. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. *arXiv preprint arXiv:1711.09325*, 2017. Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting outof-distribution samples and adversarial attacks. *Advances in neural information processing systems*, 31, 2018. Alam Noor, Bilel Benjdira, Adel Ammar, and Anis Koubaa. Driftnet: Aggressive driving behavior classification using 3d efficientnet architecture. *arXiv preprint arXiv:2004.11970*, 2020. Guo-Jun Qi, Liheng Zhang, Chang Wen Chen, and Qi Tian. Avt: Unsupervised learning of transformation equivariant representations by autoencoding variational transformations. In *Proceedings of the IEEE* International Conference on Computer Vision, pp. 8130–8139, 2019. Shreyas Ramakrishna, Zahra Rahiminasab, Gabor Karsai, Arvind Easwaran, and Abhishek Dubey. Efficient out-of-distribution detection using latent space of β-vae for cyber-physical systems. arXiv preprint arXiv:2108.11800, 2021. Ujjwal Saxena. Automold. https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library, 2018. Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In *2012 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2050–2057. IEEE, 2012. Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. Csi: Novelty detection via contrastive learning on distributionally shifted instances. *Advances in Neural Information Processing Systems*, 33, 2020. Paolo Toccaceli and Alexander Gammerman. Combination of conformal predictors for classification. In Conformal and Probabilistic Prediction and Applications, pp. 39–61. PMLR, 2017. Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 6450–6459, 2018. Vladimir Vovk, Ilia Nouretdinov, and Alexander Gammerman. Testing exchangeability on-line. In *Proceedings of the 20th International Conference on Machine Learning (ICML-03)*, pp. 768–775, 2003. ## A Appendix A.1 False Detection Rate Guarantees ![14_Image_0.Png](14_Image_0.Png) Figure 8: False Detection Rate (FDR) is much lower than ϵ for |calibration set| = 14 for all the three VAE models with |GT | = 3, 4, and 5. With ϵ = 0.05 *. k, k* = 1*, . . . ,* 10, Fig. 8 shows that the false detection rate of CODiT (n = 5) is always less than ϵ for all the three VAE models (|GT | = 3, 4, and 5) described in the ablation studies on the drift dataset. The plots in Fig. 8 are reported with exactly one calibration window (sampled randomly) from each calibration trace in the drift dataset. This is because we want the calibration set to be IID for the FDR guarantees from the ICAD framework (Lemma 1) to hold. With 14 calibration traces in the drift dataset, we have exactly 14 calibration datapoints. Using a statistically insignificant number of 14 calibration datapoints gives us the false detection rates that are much lower than the detection threshold ϵ. The plots in Fig. 8 are reported with 14 calibration and 34 in-distribution test traces, totaling to 48 indistribution traces. We increase the number of calibration datapoints to empirically check the FDR with respect to ϵ. We increase the number of calibration traces from 14 to 34 and include all sliding windows on all the calibration traces in the calibration set. This gives us a calibration dataset with a larger number of approximately 862 calibration datapoints. 34 calibration traces are randomly selected from the set of 48 in-distribution traces and the rest 14 are used as test traces. This is repeated 5 times and the generated box-plot of CODiT is shown in Fig. 6 (right) of the paper. For all the values of ϵ = 0.05*.k, k* = 1*, . . . ,* 10 in the plot, the average FDR is better aligned with ϵ. ## A.1.1 Comparison With Baselines On False Detection Rate (Fdr) With Respect To The Detection Threshold Ε Comparison with the baselines: Two of the existing baselines, i.e. VAE-based Cai et al.'s detector (2020), and β VAE-based Ramakrishna et al.'s detector (2021) are also based on the ICAD framework. Since both of these approaches are point-based (i.e. treat each point in the window independently), we compare them with CODiT on CARLA's weather OODs. Fig. 9 shows the box plots for CODiT and Fig. 10 shows the box-plots for baselines on the CARLA dataset. Again, these box plots are reported on 5 trials with randomly sampled 27 calibration and 13 test traces from 40 in-distribution traces in each trial. Using all the sliding windows of the 27 calibration traces in this experiment gives us a total of approximately 2800 calibration datapoints. Observations: The quartile range for Ramakrishna et al.'s method increases with ϵ. For Cai et al.'s method, the average FDR is always much lower than ϵ, i.e., average FDR is not calibrated with ϵ. Average FDR for CODiT is better aligned with ϵ for all the values of ϵ in the plot and has a much lower quartile ranges than Ramakrishna's method. ![15_image_0.png](15_image_0.png) Figure 9: Box-plots on False Detection Rate (FDR) vs detection threshold e for CODiT (n = 5) on the CARLA dataset. ![15_image_1.png](15_image_1.png) Figure 10: Box-plots on False Detection Rate (FDR) vs detection threshold c for baselines on the CARLA dataset. ## A.2 Examples Of Id And Ood Windows Here we show: · A window from the iD trace of the drift dataset. · A window from the iD trace of the CARLA dataset. · Windows from the weather and night OOD traces from the CARLA dataset. ![16_image_0.png](16_image_0.png) Figure 11: A window from an iD trace of the drift dataset: **car driving straight without any drift.** ![16_image_1.png](16_image_1.png) Figure 12: A window from an iD trace of the CARLA dataset: **driving in the clear daytime weather**. ![16_image_2.png](16_image_2.png) Figure 13: An OOD window from the **foggy** trace. The intensity of fog gradually increases in these OOD ![16_image_3.png](16_image_3.png) traces. Figure 14: An OOD window from the **night** trace. The intensity of low brightness gradually increases in these OOD traces. ![17_image_0.png](17_image_0.png) Figure 15: An OOD window from the **snowy** trace. The intensity of snow gradually increases in these OOD traces. Figure 16: An OOD window from the **rainy** trace. The intensity of rain gradually increases in these OOD ![17_image_1.png](17_image_1.png) traces.
Review 1: Summary: This paper looks to study how to do out-of-distribution detection for temporal data. Specifically, the authors posit that defining a non-conformity score in terms of the in-distribution temporal equivariance (over multiple transformations to observed time series data) can help not only do out-of-distribution detection but also assist in bounding the false detection rate. The authors provide some theoretical justification for their design decisions and many empirical results. Strengths and Weaknesses: Strengths - Well written paper that lays out guardrails for the paper's contents/contributions - I really like the way Section 4 and 5 segue from each other! - The experimental results seem to validate the proposed method, and their theory was easy to follow. Clever and clear use of Fisher's method. Weaknesses - I don't leave the paper convinced that CODiT will be my go-to time-series OOD detector: this doesn't discount the validity of the work. I just don't have the urge to play with the code of this method. The authors could do a better job of empirically selling the upside of CODiT without relying on the nice FDR guarantee. - Other than that, I really enjoyed this paper, which is sound and correct, and expect it will excite many in the CP and OOD communities. Requested Changes: - Can you comment more explicitly on the necessary transformations G? How many transformations are needed for our estimates to concentrate? How brittle are the OOD estimates if we use "bad" or "insufficient" transformations? I would like the paper to describe this more explicitly, as it is an important design decision that is crucial to the successful use of CODiT in practice. The - Stylistically, Section 3 feels like it is in the wrong place. I'd prefer this related work, which is important but does not help improve my understanding of Section 4/5, appear before the conclusion. - Can you please complexity analysis of your proposed method? Sorry if I missed this. You suggest running conformal multiple times, but in practice running exact full CP can be computationally infeasible and outright intractable for some model classes. I would have liked to see a discussion regarding this. Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: *Problem*: This paper addresses the problem of out-of-distribution (OOD) detection, i.e., detecting windows of data within a time series that are not drawn from the training distribution. Specifically, this paper focuses on detecting *temporal OODs*, which cannot be identified as OOD by simply looking at a single datapoint in isolation but rather are only OOD when considered as a sequence. An example of a temporal OOD is a video that gets stuck and continuously replays a single frame. Existing OOD-detection methods are not designed to identify temporal OODs or only work on vision datasets (e.g., Feng et al. 2021) *Solution*: This paper builds upon a previous conformal anomaly detection framework (ICAD). Specifically, the contributions the authors make are: 1. They propose a new non-conformity score that can be applied to time series windows, thus allowing the framework to be used to solve the problem of OOD detection in time series. 2. Rather than computing a single p-value, they compute multiple p-values and then combine them via Fisher's method. They show in their ablation studies (Fig. 6) that computing multiple p-values rather than just one improves performance. The authors also provide a theoretical guarantee on the false detection rate and experiments that demonstrate that their method performs favorably compared to existing methods. Strengths and Weaknesses: **Strengths** - Simple and intuitive method with theoretical guarantee. - Good experiments and ablations. **Weaknesses** - I have some confusion about the proof of Theorem 1. Is it true that the p-values are actually independent? For example, there is some non-zero probability that for two different p-value computations, we sample the same transformation and windows. This would imply that the p-values must be identical and thus not independent. If this is the case, Lemma 2 does not apply. - It would be useful to get a sense of the computational efficiency of the proposed method. - There are some places where the writing is unclear. See Requested Changes for more details. Requested Changes: - It would be good to provide some more intuition/a less jargony description of the proposed NCM in the Introduction. - The current problem statement (Sec 2.1) could be clearer. As written, it seem like the windows all come from a single time series, but this seems to not be the case for some datasets, such as GAIT - Section 5.1 is somewhat unclear. I had to read it a few times to understand what non-temporal OODs meant. I would suggest replacing the last two sentences of the first paragraph with "In contrast to the temporal OODs, the non-temporal OODs can be detected by looking at individual images. Examples of non-temporal OODs include..." In general, it is confusing that "weather OODs" and "non-temporal OODs" are used interchangeably. - There should be a more explicit description of how the calibration set $\alpha$'s are generated. Based on the way that the "OOD Detection Score" section on p. 8 is currently written, it seems as though the $n$ sampled transformations are all applied to the same window. It was not until I read the experimental setup section that I realized that the windows are also sampled. Minor typos: - p. 1: "has therefore gained quite attention" - remove "quite" or change to "quiet" (?) - p. 3: Figure 2 caption "and generates same image" -> "and generates the same image" - p. 6: Beginning of Sec 4.3 - "Same hypothesis" -> "The same hypothesis" - p. 9: Proof - "the indepdently drawn" -> "the independently drawn" - p. 10: "on a clear day weather" -> "in clear daytime weather" - p. 11: "(4) CoDiT's Performance Increases with n:" - n should be $n$ Broader Impact Concerns: No concerns. ================================================== Review 3: Summary: The authors propose an out-of-distribution detection method for time-series data using conformal prediction. They make two key contributions: 1. Training a VAE on windows from time series data (e.g., frames from a video) in an unsupervised fashion, taking into account temporal transformations such as reversing the time series, skipping frames, etc. 2. Using the reconstruction error of the VAE as a nonconformity score for inductive conformal anomaly detection (ICAD). The latter provides a guarantee on the false positive detection. Experimentally, they show that this approach is superior to related approaches, in particular when performing out-of-distribution detection on data where temporal information is crucial for the decision (in- or out-distribution). Strengths and Weaknesses: Strengths: Form and writing: - Clearly stated contributions, also putting the work in context of related methods. - Examples in Figures 1 and 2 help motivate the approach. - Elaborate background section providing details on equivariance, conformal prediction and Fisher’s method. - Algorithm 1 summarizes the approach well. - Thorough related work section. Method: - The approach tackles an important problem, how to perform out-of-distribution detection while taking into account temporal information in time series data. - The guarantee of false positives is a nice application of conformal anomaly detection. Experiments: - Experiments on multiple datasets, with some settings specifically requiring temporal information for good decisions. - Comparison to important baselines. Weaknesses: Form and writing: - Section 2 feels misplaced. I would rather have it after related work as a motivation/introduction to the main sections. - While related work is thorough (i.e., related work is discussed in detail), I feel that the authors should touch on conformal prediction more specifically. There are also some papers that I think are very relevant and should be discussed (even though they do not tackle time series data, the general idea of out-of-distribution detection with guarantees is similar): [a] https://arxiv.org/abs/2102.12967 [b] https://openreview.net/pdf?id=Ro_zAjZppv - The section on equivariance is a bit too verbose in my opinion. Space is better spent on a proper conclusion and discussion at the end. - Section 4.2: The actual guarantee is only stated at the very end and not discussed at all. I would also find it important to discuss that this guarantee (i.e., the probability of a false positive) is marginal across calibration sets and not guaranteed conditional on the data. - Section 4.3 is not really integrated in the main story (generally, section 4 could be written more compellingly): I think it should be emphasized why the reader needs to read this and how it will become important. - Figure 4: this example/data is not well explained at all. What exactly am I looking at (esp. y-axis)? What does the data tell me? - Conclusion is very short and in its current form not useful. - Captions are generally very short and I would appreciate a bit more detail and maybe the take-away I am supposed to get from the figures. - Table 2 could be reorganized to make it clearer which number corresponds to which methods – the separation via “/” does not work for me. Method: - Regarding the transformations: why should we use speed or shuffle? In Figure 5, the losses on OOD data seem actually lower than those without transformation (=identity). This seems weird to me. I assume that these losses are already on a model trained with these transformations? - After theorem 1, there is discussion missing (e.g., that this is only marginally, unconditionally). Also, I think it should be discussed why the proof works: In the related work it is mentioned that it is unclear how to apply related work as p-values need to be IID. But it turns out that the application is actually straight-forward because we just assume IID? I think this is a critical point for time series data and it should be made clear why this works. - The validity (i.e., false positive rate \epsilon) is empirically much smaller than what it was calibrated for. This suggests that theorem one is actually a loose upper bound. Usually, split conformal prediction is tight. I am wondering why this is? As I see it, theorem 1 is simply an application of Lemmata 1 and 2 (i.e., standard conformal prediction results), so it is unclear where this looseness comes from. Another option could be that the calibration set used is always very good (i.e., a lucky pick) – also see my comments on experiments. - Comment on novelty and significance: Currently, I see not much novelty on the theoretical side, the obtained guarantee seems to follow very easily from existing work. Also, learning with temporal transformations as unsupervised task is not in itself novel. This means the key novelty of the work is the combination of both. I nevertheless appreciate the demonstration that this works well together, especially in cases where temporal information is needed. Experiments: - For the validity experiments, I would expect random trials across calibration/test splits. Especially for evaluating false detection rate I would expect to put calibration and test windows in one bucket and then draw multiple calibration and test sets randomly. Note that this also applies to the OOD performance. I think this is more or less standard in conformal prediction work (and should be lightweight as not re-training is involved). - How does the false detection look for the baselines? Do some of the baselines also provide guarantees? - In Figure 7, the blue problem seems very easy for the proposed method (compared to the baseline). Is there a way to consider a harder task to showcase some limitations or will the proposed method always work in this replay setting? Requested Changes: Current conclusion: I am not convinced that the paper is ready for being published at TMLR yet. Main concerns and recommendations: 1. Address some of the comments on writing, especially regarding discussion of the guarantee, related conformal prediction approaches in the related work section, more meaningful conclusion/discussion and potentially better flow across sections 2 and 4. 2. Fix experiments by running trials with randomly sampled calibration sets in order to get appropriate estimates of false detection rates and OOD detection rates. I think this is critical as the guarantee holds marginally as far as I am aware. 3. Address why the actual false detection rate is much smaller than \epsilon – if this is still the case with random calibration sets. 4. Discuss why the temporal aspect is not a problem in obtaining the guarantee (as mentioned for other methods in related work). This seems to be critical and is not discussed at all. Broader Impact Concerns: There is no broader impact statement present, but I am also not concerned with ethical implications. This is mainly because I believe that making methods aware of in-/out-distribution is important for safety and any statistical guarantee on performance is very valuable. ================================================== Metareview: Recommendation: Reject Comment: The authors develop a novel method for detected out of distribution sequences in temporal data, with theoretical guarantees that are based on the theory of conformal prediction. The reviewers agree that the method is sound both theoretically and in terms of the experimental methodology, and that the quality of writing is acceptable. However, reviewers had important concerns regarding how the claims made in the paper are substantiated by the results in the paper. In particular: 1. The theoretical guarantees require that multiple independent calibration sets are available, which is an unrealistic assumption in most practical scenarios. Further, the difficulty of establishing the theoretical guarantee is over-emphasized and is a straightforward application of the theory of conformal prediction to temporal data. 2. The method and its effectiveness relies on the user developing transformations to apply, which limits the utility of the approach as an off-the-shelf method unlike standard conformal prediction which can be applied to any black box predictor. 3. Evaluated purely empirically, it is unclear how the results improve upon prior work on OOD detection for temporal data. I would encourage the authors to prepare a substantial revision adjusting the claims made. In particular, I would suggest that the author address: 1) The significance of the theoretical contributions, making precise the new contributions made versus what follows from standard results. on conformal prediction. 2) How realistic the assumptions required for the theoretical results are (in particular having multiple iid calibration sets). 3) Explaining how a user of the approach should choose the transformations required. 4) Comparing against other OOD detection methods for temporal data including those that do not come with theoretical guarantees, or revising the language in the paper claiming "SOTA" empirical results. ==================================================
# Multi-Conditioned Graph Diffusion For Neural Architecture Search Rohan Asthana *rohan.asthana@fau.de* Friedrich-Alexander-Universität Erlangen-Nürnberg Erlangen, Germany Joschua Conrad *joschua.conrad@uni-ulm.de* Universität Ulm Ulm, Germany Youssef Dawoud *youssef.dawoud@fau.de* Friedrich-Alexander-Universität Erlangen-Nürnberg Erlangen, Germany Maurits Ortmanns *maurits.ortmanns@uni-ulm.de* Universität Ulm Ulm, Germany Vasileios Belagiannis *vasileios.belagiannis@fau.de* Friedrich-Alexander-Universität Erlangen-Nürnberg Erlangen, Germany Reviewed on OpenReview: *https: // openreview. net/ forum? id= 5VotySkajV* ## Abstract Neural architecture search automates the design of neural network architectures usually by exploring a large and thus complex architecture search space. To advance the architecture search, we present a graph diffusion-based NAS approach that uses discrete conditional graph diffusion processes to generate high-performing neural network architectures. We then propose a multi-conditioned classifier-free guidance approach applied to graph diffusion networks to jointly impose constraints such as high accuracy and low hardware latency. Unlike the related work, our method is completely differentiable and requires only a single model training. In our evaluations, we show promising results on six standard benchmarks, yielding novel and unique architectures at a fast speed, i.e. less than 0.2 seconds per architecture. Furthermore, we demonstrate the generalisability and efficiency of our method through experiments on ImageNet dataset. ## 1 Introduction The design of neural network architectures has been normally a manual and time-consuming task, requiring domain expertise and trial-and-error experimentation (Elsken et al., 2019). Neural Architecture Search (NAS) addresses this limitation by leveraging data-driven methods to automatically search for wellperforming neural network architectures (Liu et al., 2019; Howard et al., 2019; Pham et al., 2018). Existing works in NAS mostly represent the architectures as graphs and include search based methods (Li & Talwalkar, 2020; White et al., 2021b), reinforcement learning (Zoph & Le, 2017; Tian et al., 2020), and evolution-based ![1_image_0.png](1_image_0.png) Figure 1: Overview of our approach. We train a discrete graph diffusion model (denoted as qD) on valid architectures from the architecture search space along with their performance metrics (eg. accuracy, latency). After training, we sample architectures given the required conditions (eg. accuracy>top5%, latency<2). approaches (Real et al., 2019; Chu et al., 2020). However, the large size of the architecture search space makes it challenging for these methods to search for high-performing topologies. To accelerate the architecture search, generative methods reduce the search queries by learning the architecture search space and optimising the latent space from which a generator network draws architectures (Rezaei et al., 2021; Huang & Chu, 2021). These methods not only enhance the efficiency but also capture intricate architecture distributions, generating novel architectures. However, the choice of graph generative model significantly impacts the NAS search time. The existing methods employ complex GAN-based generators (Rezaei et al., 2021) or use computationally intensive supernets (Huang & Chu, 2021). More recently, An et al. (2023) employs conditional diffusion processes guided by a classifier while Lukasik et al. (2022) uses a simple generator paired with a surrogate predictor. However, these methods require separate predictor networks for the generated architecture performance. Hence, we present a diffusion-based generative approach that is completely differentiable and thus training involves only a single model. As a result, we reach promising performance with much smaller search time. Denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) have recently gained attention because of their ability to effectively model complex data distributions through an iterative denoising process. DDPMs offer precise generative control, improving distribution coverage compared to other generative models (Dhariwal & Nichol, 2021). This characteristic makes diffusion models particularly appealing for NAS, as they fulfil the requirement to generate neural network architectures and eventually facilitate the exploration of the search space. In addition to their superior performance, diffusion models excel in conditional generation through the classifier-free guidance technique (Ho & Salimans, 2021). This technique enables the conditioning of diffusion models on a specific target class, allowing the model to generate samples belonging to that class without utilizing the gradients of an external classifier. Previous studies show that along with image synthesis, classifier-free guidance works well in molecule (graph) synthesis using graph diffusion networks (Hoogeboom et al., 2022). Moreover, Giambi & Lisanti (2023) demonstrated the capability of classifier-free guidance approach using multiple conditions. Motivated by this idea, we present a multi-conditioned graph diffusion model in which constraints such as high model accuracy and low latency jointly contribute to architecture sampling. We introduce a graph diffusion-based NAS approach (DiNAS), depicted in Figure 1, that utilises discrete conditional graph diffusion processes to generate high-performing neural network architectures 1. We leverage classifier-free (CF) guidance, initially developed for image tasks, and extend its application to graph models. Additionally, to impose multiple constraints, we utilize a multi-conditioned CF guidance technique, and apply it within our graph diffusion framework. To demonstrate the effectiveness of our proposed method, we perform extensive evaluations on six standard benchmarks, including experiments on ImageNet (Deng 1The code for our paper is available at https://github.com/rohanasthana/DiNAS. et al., 2009), and ablation studies to demonstrate state-of-the-art performance and faster generation rate (less than 0.2 seconds per architecture on a single GPU) compared to the prior work. To the best of our knowledge, this is the first formulation of NAS using multi-conditioned graph-based diffusion models. In summary, we claim that guided graph diffusion, specifically discrete graph diffusion with multi-conditioned classifier-free-guidance-based NAS approach, should work better than previous generative and traditional NAS methods due to the model's ability to perform architecture generation in a controlled guided manner without the need of an external predictor. Our claims are supported by empirical evidence detailed in Section 5. Our contributions are as follows: - We introduce a differentiable generative NAS method, which employs discrete conditional diffusion processes to learn the architecture latent space by training a single model. - We propose a multi-conditioned diffusion guidance technique for graph diffusion networks, effectively applied within our NAS approach. - We demonstrate promising results in six standard benchmarks while using less or same number of queries with rapid generation of novel and unique high-performing architectures. ## 2 Related Work Neural Architecture Search (NAS) Automating the neural network architectural design has gained substantial interest in the past few years (Liu et al., 2019; Jin et al., 2019; Zoph et al., 2018; Bender et al., 2018; Shala et al., 2023). A straightforward approach is to randomly select and evaluate architectures from the search space (Li & Talwalkar, 2020). However, the lack of optimisation in the search space makes this approach inefficient. To address this limitation, earlier works rely on reinforcement learning (Zoph & Le, 2017; Baker et al., 2017; Franke et al., 2021) to discover well-performing architectures. Gradient-based approaches (Brock et al., 2018; Chen et al., 2021b; Yang et al., 2020) employ gradient-based optimisation, while evolutionary methods (Real et al., 2019; 2017) deploy evolutionary algorithms to perform the search. Although these approaches exhibit faster search pace than random search due to the optimisation, they are still regarded slow in searching high-performing architectures (Liu et al., 2019). Another major challenge with search-based methods is the requirement to train networks at each iteration (Luo et al., 2022). This becomes particularly problematic when NAS approaches require a substantial number of iterations to generate wellperforming architectures, which is often the case with reinforcement learning-based methods. This issue is solved by the recently developed generative methods (Lukasik et al., 2021; Rezaei et al., 2021; Lukasik et al., 2022; An et al., 2023), which reduce the search time by learning the architecture search space. The generative NAS method by Lukasik et al. (2022) utilises a generator, a surrogate predictor, paired with a latent space optimisation technique to generate high performing architectures represented as graphs. This generation is performed node by node, which requires multiple passes of a graph neural network, with each pass to generate a node. In contrast, our approach learns and generates all the nodes and the edges of the whole graph (as a representation of architectures) together using a diffusion model, effectively reducing the generation time. While the method proposed by An et al. (2023) also utilises diffusion models for neural architecture synthesis, there are several key differences to note. First, unlike their approach, we employ discrete graph diffusion instead of continuous graph diffusion. This allows our model to retain the structural information and sparsity of the graphs (Vignac et al., 2023). Second, we eliminate the need of an external predictor to predict the accuracies from noisy data, as done by An et al. (2023), which allows our model to omit the dependency on noisy data classification (Ho & Salimans, 2021). Following the same direction, we present a generative model that reduces the search time compared to the prior work, while minimising the performance loss. Diffusion Models Although the original idea of data generation through diffusion goes back several years (Sohl-Dickstein et al., 2015), diffusion models later gained popularity for image (Ho et al., 2020; Rombach et al., 2022; Saharia et al., 2022)), text (Austin et al., 2021) and more recently graph generation (Wang et al., 2022). The ability of diffusion models to effectively synthesise graphs motivates our work to formulate NAS as a graph generation problem. Nevertheless, the generation of well-performing architectures requires conditional generation through guidance, e.g. by specifying the minimum architecture accuracy. Hence we utilize a formulation of classifier-free guidance for graph-diffusion networks. Then, we introduce a multiconditioned graph-diffusion approach that accounts for several constraints in the architecture generation. ## 3 Background 3.1 Denoising Diffusion Probabilistic Models Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) comprise two fundamental processes, namely, forward and reverse processes. The forward process sequentially corrupts the data sample x using a noise model q that follows a Gaussian distribution until x reaches a state of pure noise. The noisy variants of x are denoted as (x 1, x 2*, . . . ,* x T), where T represents the total number of corruption steps. Subsequently, the reverse process involves learning a denoising model represented as a deep neural network ϕθ with parameters θ to estimate the noise state of sample x at time step t − 1, i.e. x t−1 given the current state x t. This is achieved using a scoring function that maximises the likelihood of x t−1. Formally, the scoring function is defined as SF = ∇xt−1 log pθ(x t−1|x t) which corresponds to the gradient of the log-likelihood with respect to state x t−1. Following the network training, another data point can be sampled from a noisy prior (denoted as z T), and by iterative denoising the data point (i.e. predicting z t−1from z t), a sample z 0is obtained, which corresponds to the original data distribution. This process is referred to as the sampling process. Diffusion models can generate high-quality samples from complex data distributions. However, in our task, we do not intend to sample from the entire distribution but a subset of it containing high-performing and/or low latency architectures. Hence, diffusion models need to be modified to incorporate conditioning. This can be achieved using conditional diffusion models. ## 3.2 Conditional Diffusion Models With Guidance The conditional diffusion model estimates the distribution pθ(x t−1|x t, y). From Bayes rule, we have: $$p_{\theta}({\bf x}^{t-1}|{\bf x}^{t},y)\propto p_{\theta}({\bf x}^{t},y|{\bf x}^{t-1})p({\bf x}^{t-1}).\tag{1}$$ $$\left(1\right)$$ To ensure the balance between sampling diversity and quality, generative models can incorporate guidance scale γ, modifying the Eq. 1 to: $$p_{\theta}({\bf x}^{t-1}|{\bf x}^{t},y)\propto p_{\theta}({\bf x}^{t},y|{\bf x}^{t-1})^{\gamma}p({\bf x}^{t-1}).$$ $$\left(2\right)$$ Specifically, increasing γ sharpens the distribution which favors enhanced sample quality at the expense of sample diversity during the sampling process, referred to as guidance in diffusion models. To guide a diffusion model for labelled data, the model is conditioned on the classification target y and the score function ∇xt−1 log pθ(x t−1|x t, y) is computed. Dhariwal & Nichol (2021) approach this problem using an external classifier (parameterised by ψ) where the score function SF is modified to include the gradients of the classifier. The reformulated score function is then the weighted sum of the unconditional score function and the conditioning term obtained by the classifier, defined as: $$\nabla_{{\bf x}^{t-1}}\log p_{\theta}({\bf x}^{t-1}|{\bf x}^{t},y)=\nabla_{{\bf x}^{t-1}}\log p_{\theta}({\bf x}^{t-1}|{\bf x}^{t})+\gamma\nabla_{{\bf x}^{t-1}}\log p_{\psi}(y|{\bf x}^{t-1}).\tag{3}$$ While we have successfully expressed the reverse denoising process as a weighted sum of two score functions, estimation of the conditional score function requires training a separate classifier. Moreover, calculating log pψ(y|x t−1) requires inferring y from noisy data x t. Although feeding noisy data to the classifier yields decent performance, it disrupts the robustness of the model since it ignores most of the original input signal. To address this issue, Ho & Salimans (2021) came up with the classifier-free guidance, which develops the classifier using the generative model itself. In this case, the score function is defined as: $$\nabla_{\mathbf{x}^{t-1}}\log p_{\theta_{\tau}}(\mathbf{x}^{t-1}|\mathbf{x}^{t},y)=(1-\gamma)\nabla_{\mathbf{x}^{t-1}}\log p_{\theta}(\mathbf{x}^{t-1}|\mathbf{x}^{t})+\gamma\nabla_{\mathbf{x}^{t-1}}\log p_{\theta}(\mathbf{x}^{t-1}|\mathbf{x}^{t},y).\tag{4}$$ Eq. 4 demonstrates that it is possible to achieve the same behaviour as the classifier-based guidance without explicitly using a classifier. This is achieved through a weighted sum, specifically a barycentric combination, of the conditional and unconditional score functions. ## 3.3 Discrete Graph Diffusion Diffusion models typically work in a continuous space and apply Gaussian noise to the data (Ho et al., 2020; Saharia et al., 2022). Training a diffusion model to generate graphs in the same manner, however, leads to the loss of graph sparsity and structural information. DiGress, a discrete diffusion approach proposed by Vignac et al. (2023), addresses this problem with a Markov processes as discrete noise model. In this case, the graph comprises of nodes and edges, both being categorical variables, and the goal is to progressively add or remove edges as well as change graph node categories. Hence, the diffusion process is applied on the node categories X and edges E. Eventually, this model solves a simple classification task for nodes and edges instead of a complex distribution learning task, normally performed in generative models like VAEs (Kingma & Welling, 2013) or DDPMs (Ho et al., 2020). Our approach, in principle, follows DiGress to generate graphs which correspond to neural network architectures. At each forward step, discrete marginal noise is added to both X and E using the transition probability matrices QX and QE respectively, which incorporate the marginal distributions m′X and m′E. We select the noisy prior distribution such that it is close to the original data distribution. Then, the transition matrices are defined as follows: $$Q_{X}^{t}=\bar{a}^{t}I+(1-\bar{a}^{t})1_{i}m_{X}^{\prime};\quad Q_{E}^{t}=\bar{a}^{t}I+(1-\bar{a}^{t})1_{j}m_{E}^{\prime},$$ $$\left(5\right)$$ where I is the identity matrix, 1i and 1j are the indicator functions, t is the time-step, and a¯ tis the cosine schedule defined as a¯ t = cos(0.5π(t/T + s)/(1 + s))2 with s close to 0. Training For the reverse (denoising) step, a Graph Transformer network ϕθ, parameterised by θ, is employed. This network learns the mapping between the noisy graphs Gt and the corresponding clean graphs G. During training, ϕθ can take noisy graphs at any time step t ∈ (1*, .., T*) to predict the clean graph. The loss functions for X and E are based on the cross-entropy between their respective predicted probabilities pˆ G = (ˆp X, pˆ E) and the ground-truth graph G = (X, E). The total loss is then, a weighted sum of node-level and edge-level losses, which is given by: $$L_{G}(\hat{p}^{X},\mathbf{X},\hat{p}^{E},\mathbf{E})=\sum_{1\leq i\leq n}CE(x_{i},\hat{p}_{i}^{X})+\lambda\sum_{1\leq i,j\leq n}CE(e_{ij},\hat{p}_{ij}^{E}),\tag{6}$$ where CE is the cross-entropy loss function, λ is a parameter to weight the importance of nodes and edges and, n is the number of nodes. Sampling Let the posterior distribution be pθ. We start from a noisy prior distribution GT ∼ (qX(nT ) × qE(nT )), where nT is sampled from the node distribution in the training data. Then, we estimate the node and edge distributions pθ(x t−1 i|Gt) and pθ(e t−1 ij |Gt) using the predicted probabilities pˆ X iand pˆ E ij . This can be written as: $$p_{0}(x_{i}^{t-1}|\mathbf{G}^{t})=\sum_{x\in\mathbb{X}}p_{0}(x_{i}^{t-1}|x_{i}=x,\mathbf{G}^{t})\hat{p}_{i}^{\mathbb{X}}(x);\quad p_{0}(e_{ij}^{t-1}|\mathbf{G}^{t})=\sum_{e\in\mathbf{E}}p_{0}(e_{ij}^{t-1}|eij=e,\mathbf{G}^{t})\hat{p}_{ij}^{\mathbb{E}}(e).\tag{7}$$ $\hat{p}_{i}^{\mathbb{X}}$\(\hat{p}_{ Finally, sampling new graphs can be seen as iteratively estimating the distribution pθ(Gt−1|Gt) until a clean graph G0is obtained. pθ(Gt−1|Gt) can be seen as the product of the node and edge distributions marginalised over predictions from the network ϕθ: $$p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t})=\prod_{1\leq i\leq n}p_{\theta}(x_{i}^{t-1}|\mathbf{G}^{t})\prod_{1\leq i,j\leq n}p_{\theta}(e_{ij}^{t-1}|\mathbf{G}^{t}).\tag{8}$$ The above model successfully handles sparse categorical graph data in a discrete manner, synthesising graphs from complex data distributions. Therefore, it is suitable for our problem. Nevertheless, in our task, we seek to introduce conditioning in discrete graph diffusion models through classifier-free (CF) guidance. To that end, we propose next a multi-conditioned graph diffusion formulation for NAS. ## 4 Method Consider the diffusion model qD comprising of a neural network ϕθ paramterised by θ. During training, the model qD takes the directed acyclic graph G as input and learns to reconstruct G from the noisy version Gt, where t ∈ (1*, . . . , T*) is the number of diffusion time steps. This reconstruction is essentially performed by learning to estimate the actual data distribution G from the noisy version of G, which we denote as PN , through iterative denoising. Following the training of ϕθ, we aim to generate DAGs representing highperforming neural network architectures using samples from PN , where we denote a sample as z. Our directed acyclic graph (DAG) representation of architectures follows the standard cell-based NAS search spaces (Liu et al., 2019; Klyuchnikov et al., 2020), where each cell is a DAG. G consists of a set of nodes and edges. The sequence of nodes in G is represented by X = [v1, v2 *. . . v*n], where the number of nodes is n, and the edges as the adjacency matrix E of shape (*n, n*). Hence, each DAG is represented by G = (X, E). Each node is a categorical variable, describing operations, e.g., 1x1 convolution, while each edge is a binary variable, specifying the presence or absence of the connection between nodes. In addition, G maps to the ground-truth performance metrics P e.g., the accuracy and latency of each DAG. Our objective is twofold, namely to generate valid cells Cv = (Xv, Ev) from the latent variable z, sampled from the noise distribution PN and, second, to learn the mapping between the valid cell Cv and its corresponding performance metrics P. The learned mapping is then used to generate high-performing cells with accuracy close to the maximum achievable accuracy or cells with latency below a certain latency constraint. Note that a cell is valid when the corresponding DAG is connected and includes a realistic sequence of nodes. ## 4.1 Diffusion Based Nas We consider the unconditional and conditional graph generation. First, we present the unconditional model that learns to generate valid cells. Since some of the generated cells might have poor performance, we propose the single conditioned and multi-conditioned graph diffusion models to generate just the high-performing cells based on metrics like the model accuracy and latency. ## 4.1.1 Unconditional Model Our unconditional model is based on the discrete denoising graph diffusion model (Vignac et al., 2023), outlined in Section 3.3. The forward process involves adding discrete marginal noise QtX and QtE (Eq. 5) to both nodes X and edges E respectively. To perform denoising, we employ the Graph Transformer network ϕθ, which is trained to predict clean graphs G from noisy graphs Gt. While this model effectively captures the data distribution for undirected graphs, it lacks the ability to incorporate directional information of DAGs. This directional information depicts the flow of data from input to output in the cells and hence is crucial for generating valid cells. To address this limitation, we integrate into our model the positional encoding technique by Vaswani et al. (2017b). In detail, we add sinusoidal signals of different frequencies to the node features X before passing them through the Graph Transformer ϕθ, thereby enhancing the network's capability to consider sequential information. Despite the ability of our unconditional model in forming valid cells necessary for complete network architectures, our goal lies in generating a particular subset of the learned architecture distribution comprising high-performing cells. To that end, we first condition our model on the accuracy metric. ## 4.1.2 Conditional Model To achieve the generation of high-performing architectures, we propose a guidance approach, inspired by the classifier-free guidance (Ho & Salimans, 2021), and integrate it to our unconditional graph diffusion model. Unlike the unconditional model, our conditional model estimates the distribution pθ(Gt−1|Gt, y), essentially by computing the score function ∇Gt−1 log pθγ (Gt−1|Gt, y) as: $$\nabla_{\mathbf{G}^{t-1}}\log p_{\theta_{\gamma}}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y)=(1-\gamma)\nabla_{\mathbf{G}^{t-1}}\log p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t})+\gamma\nabla_{\mathbf{G}^{t-1}}\log p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y),\tag{9}$$ where γ is the guidance scale and y is the target variable. The first term of Eq. 9 corresponds to the unconditional distribution pθ(Gt−1|Gt) learning, while the second one corresponds to the conditional distribution pθ(Gt−1|Gt, y) learning. Following Ho & Salimans (2021), we remove the conditioning information for some forward passes determined by the control parameter ϵ. This leads to the unconditional training of the network. For the rest forward passes, we keep this information to enable conditional training. ## 4.1.3 Discretisation Of The Target Variable Our guidance approach assumes that the target variable y, e.g. accuracy or latency, belongs to a finite set such that y ∈ {y1, y2, . . . , yw} ⊆ R, where w is the number of possible values of y. However, in our case, y takes continuous values from the real number domain, R +. To address this issue, we split y into d discrete classes based on their value. The choice of the split affects the balance of the class data distribution. In our implementation (Sec. 5.1), we provide information on how we select the number of classes and splits according to the problem. ## 4.2 Incorporating Multiple Conditions Next, we introduce multiple conditions to the diffusion guidance to impose several constraints. Consider the unconditional noise model q that corrupts the data progressively for t time steps. Our objective is to estimate the reverse conditional diffusion process qˆ(Gt−1|Gt, y1, y2*, ..., y*k), given the k independent conditions y1, y2*, ..., y*k. Assuming pθ(Gt−1|Gt, y1, y2*, ..., y*k) approximates qˆ(Gt−1|Gt, y1, y2*, ..., y*k), we perform the estimation of reverse conditional diffusion process by computing the score function ∇Gt−1 log pθγ (Gt−1|Gt, y1*, ..., y*k) as: $$\begin{split}\nabla_{\mathbf{G}^{t-1}}\log p_{\theta_{\gamma}}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y_{1},...,y_{k})&=(1-\gamma)\nabla_{\mathbf{G}^{t-1}}\log p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t})\\ &\quad+\gamma\nabla_{\mathbf{G}^{t-1}}\log p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y_{1},...,y_{k}),\end{split}\tag{10}$$ where γ is the guidance scale. The derivation is provided in Appendix A.1. Similar to the standard singleconditioned guidance (Eq. 9), the conditional score function for multi-conditioned guidance can be expressed as a weighted sum of conditional and unconditional score function. These score functions can be computed using two forward passes of our network, the unconditional and conditional forward pass. ## 4.3 Training And Sampling Training procedure Let c1*, . . . , c*k denote the metrics, which we want to constrain e.g. accuracy or hardware latency. The training procedure starts by randomly selecting the time-step t from the range (1*, .., T*). Subsequently, the performance metrics P = (c1*, ..., c*k) undergo a substitution with a null token ∅ for a probability of ϵ instances. Then, marginal noise is introduced to both X and E for a duration of t time-steps. Next, each of c1*, .., c*k is individually processed through distinct embeddings, with the resultant embeddings being included to both X and E. We then apply positional encoding to X. Finally, the resultant graph is provided as input to our Graph Transformer network ϕθ. This network then generates the denoised graph (X, E) which is used to calculate the loss (Eq. 6). We present the training algorithm in Alg. 1. Sampling procedure Let (ˆc1*, ..,* cˆk) be the constraints desired to be imposed (e.g. cˆ1=top 5%). The sampling procedure is initiated with sampling a random noisy graph Gtfrom the prior distribution (qX(nT )× qE(nT )). Next, we apply the positional encoding to X and perform two forward passes of our trained network ϕθ, namely the unconditional and conditional pass. In the unconditional pass, (c1 = ∅, c2 = ∅*, .., c*k = ∅), where ∅ is a null token, whereas for the conditional pass, (c1 = ˆc1*, ..., c*k = ˆck). Then, the score estimates are computed for both functions (pˆc for conditional and pˆu for unconditional). Lastly, we calculate the resulting score by a linear combination of the score estimates and sample a less noisy graph Gt−1 with Eq. 7 and 8. This is iteratively performed to produce the clean graph G0. The sampling algorithm is presented in Alg. 2 and implementation details in Appendix Sec. A.2. | Algorithm 1 Training DiNAS Input: G0 = (X, E, c1, ..., ck) and ϵ t ∼ ν(1, .., T) | ▷ Sample t randomly from (1, ...T) | |------------------------------------------------------------------------------------|------------------------------------------------| | c1, ..., ck ← ∅ with probability ϵ | ▷ Conditional dropout to train unconditionally | | Gt ← (XQt X, EQt E, c1, ..., ck) | ▷ Apply marginal noise for t time steps | | Gt ← (Gt , Emb(c1), ..., Emb(ck)) | ▷ Append embeddings to nodes and edges | | Xt ← Xt + P osEnc(Xt ) | ▷ Add sinusoids to X for positional encoding | | X, pˆ pˆ E ← pθ(Gt |c1..., ck) | ▷ Forward pass | | optimiser.step(LG(ˆp X, X, pˆ E, E)) | ▷ Calculate loss and optimise (Eq. 6) | ## Algorithm 2 Sampling From Dinas Input: guidance scale γ, and conditions cˆ1*, ..,* cˆk Sample nT number of nodes from training data distribution Sample random graph Gt ∼ (qX(nT ) × qE(nT )) ▷ Sample from prior distribution for t = T to 1 do Xt ← Xt + *P osEnc*(Xt) ▷ Add sinusoids to X for positional encoding pˆ X u , pˆ E u = pθ(Gt|c1 = ∅*, ..., c*k = ∅) ▷ Unconditional forward pass pˆ X c , pˆ E c = pθ(Gt|c1 = ˆc1*, .., c*k = ˆck) ▷ Conditional forward pass pˆ X = (1 − γ)ˆp X u + γpˆ X c ▷ Linear combination of score estimates pˆ E = (1 − γ)ˆp E u + γpˆ E c ▷ Linear combination of score estimates Calculate pθ(x t−1 i|Gt) and pθ(e t−1 ij |Gt) ▷ Eq. (7) Gt−1 ∼Qi pθ(x t−1 i|Gt)Qij pθ(e t−1 ij |Gt) ▷ Sample Gt−1(Eq. 8) end for return G0 ## 5 Experiments We evaluate our approach on six standard benchmarks- encompassing tabular, surrogate, hardware aware benchmarks, and the challenging ImageNet image classification task (Deng et al., 2009). ## 5.1 Experimental Setup Tabular Benchmarks We first consider the tabular benchmarks- NAS-Bench-101 (Ying et al., 2019) and NAS-Bench-201 (Dong & Yang, 2020) for our experiments. Tabular benchmarks list unique architectures with their corresponding accuracy. We utilise the validation accuracy as performance metrics P. The evaluation protocol 2follows the established standard (Yan et al., 2020; Wu et al., 2021) of conducting a search for the maximum validation accuracy within a fixed number of queries and reporting the corresponding test accuracy, both as a mean over 10 runs. Although there are different other factors (like the nature of the algorithm) affecting the search time for different approaches, generally search times are directly proportional to the number of queries, and is thus used as an efficiency metric by previous approaches by Lukasik et al. (2022), Yan et al. (2020) and Wu et al. (2021). For NAS-Bench-101, we compare our approach with Arch2Vec (Yan et al., 2020), NAO (Luo et al., 2018), BANANAS (White et al., 2021a), Bayesian Optimisation (Snoek et al., 2015), Local Search (White et al., 2021b), Random Search (Li & Talwalkar, 2020), Regularised Evolution (Real et al., 2019), WeakNAS (Wu 2The detailed evaluation protocol for each benchmark can be found in Appendix A.5. et al., 2021) and AG-Net (Lukasik et al., 2022). For NAS-Bench-201, our approach is evaluated against SGNAS (Huang & Chu, 2021), GANAS (Rezaei et al., 2021), BANANAS, Bayesian Optimisation, Random Search, AG-Net, TNAS (Shala et al., 2023), MetaD2A (Lee et al., 2021), β-DARTS (Ye et al., 2022) and DiffusionNAG (An et al., 2023). The corresponding results are reported in Tables 1 and 2. Surrogate Benchmarks Next, we evaluate our method on surrogate benchmarks. Surrogate benchmarks operate on significantly larger search spaces like DARTS (Liu et al., 2019) or NAS-Bench-NLP (Klyuchnikov et al., 2020) and therefore use a simple surrogate predictor to estimate the ground truth accuracy. We perform our experiments on two surrogate benchmarks, the NAS-Bench-301 (Siems et al., 2021) (trained on CIFAR-10 (Krizhevsky et al., 2009)) on DARTS search space and NAS-Bench-NLP. We report the maximum validation accuracy as a mean over 10 runs, along with the number of queries and compare our method to the prior work as with NAS-Bench-101. The results are presented in Table 3. | Methods | Val(%) | Test(%) | Queries ↓ | |-------------------------|----------|-----------|-------------| | Optimum | 95.06 | 94.32 | | | Arch2vec + RL | - | 94.10 | 400 | | Arch2vec + BO | - | 94.05 | 400 | | NAO † | 94.66 | 93.49 | 192 | | BANANAS † | 94.73 | 94.09 | 192 | | Local Search † | 94.57 | 93.97 | 192 | | Random Search † | 94.31 | 93.61 | 192 | | Bayesian Optimisation † | 94.57 | 93.96 | 192 | | WeakNAS | - | 94.18 | 200 | | Regularised Evolution † | 94.47 | 93.89 | 192 | | AG-Net | 94.90 | 94.18 | 192 | | DiNAS (ours) | 94.98 | 94.27 | 150 | Table 1: Comparison of results on NAS-Bench-101. 'Val' represents the maximum validation accuracy and 'Test' represents the corresponding test accuracy, both as a mean over 10 runs. Queries are the number of retrieval attempts for accuracy from the benchmark. Hardware Aware Benchmark Our next evaluation is on the Hardware Aware Benchmark (HW-NAS-Bench) (Li et al., 2021). HW-NAS-Bench provides hardware information (e.g. latency) along with the accuracy for multiple edge devices. We follow the standard protocol (Lukasik et al., 2022) and report the accuracy of best found architectures for ImageNet classification task given the latency constraint (in milliseconds) as a mean over 10 runs along with the number of queries. We also report the feasibility, which indicates the percentage of generated architectures following the given latency constraint. We compare our approach to Random Search and AG-Net as strong baselines for multiple devices each in multiple latency constraints. The results are available in Table 11. Experiments on ImageNet Lastly, we conduct experiments on the large-scale image classification task ImageNet (Deng et al., 2009), following the protocol from Liu et al. (2019); Chen et al. (2021a). This involves training and evaluating the best generated architecture from NASBench301 (trained on CIFAR10 image classification task) on the ImageNet dataset. We report the top-1 and top-5 errors along with the number of queries, comparing our method to several robust baselines (e.g. DARTS, TENAS (Chen et al., 2021a), NASNET-A (Zoph et al., 2018) , DrNAS (Chen et al., 2021b), β-DARTS (Ye et al., 2022), Sweetimator (Yang et al., 2023), DARTS ++ (Soro & Song, 2023), and AG-Net. To ensure a fair comparison, we report the results of methods with search on CIFAR-10 and evaluation on ImageNet, wherever applicable. We summarise the results in Table 5. Implementation The discretization of the target variable is essential for our task as it is not sufficient to train our model solely on high-performing samples due to the low number of high-performing architectures. We empirically found that in our task, d = 2 for the accuracy metric has a slightly superior performance over other values of d (see ablation study in Appendix Sec. 5.3.4) and thus, we discretise the accuracy into two classes. One class includes > fth percentile of accuracy values, while the remaining values belong to the other class. Using higher values of f for accuracy generates better-performing architectures, but they also lead to class imbalance, thereby reducing the model performance. We address this issue by modifying f depending on the data availability of the specific benchmark. For generating high performing samples, the model is conditioned to generate the samples belonging to > fth percentile class for accuracy during Table 2: Comparison of results on NAS-Bench-201 for different datasets. 'Val' represents the maximum validation accuracy and 'Test' represents the corresponding test accuracy, both as a mean over 10 runs. Queries/Gen. are the number of retrieval attempts for accuracy from the benchmark or number of architecture generations | Methods | CIFAR-10 | CIFAR-100 | ImageNet16-120 | Queries/Gen. ↓ | | | | |----------------|------------|-------------|------------------|------------------|---------|-------|-----| | Val(%) | Test (%) | Val(%) | Test(%) | Val(%) | Test(%) | | | | Optimum* | 91.61 | 94.37 | 73.49 | 73.51 | 46.77 | 47.31 | - | | SGNAS | 90.18 | 93.53 | 70.28 | 70.31 | 44.65 | 44.98 | - | | BANANAS † | 91.56 | 94.30 | 73.49* | 73.50 | 46.65 | 46.51 | 192 | | Bayesian Opt.† | 91.54 | 94.22 | 73.26 | 73.22 | 46.43 | 46.40 | 192 | | Random Search† | 91.12 | 93.89 | 72.08 | 72.07 | 45.97 | 45.98 | 192 | | GANAS | - | 94.34 | - | 73.28 | - | 46.80 | 444 | | AG-Net | 91.60 | 94.37* | 73.49* | 73.51* | 46.64 | 46.43 | 192 | | TNAS | - | 94.37* | - | 73.51* | - | - | - | | MetaD2A | - | 94.37* | - | 73.34 | - | - | 500 | | β-DARTS | 91.55 | 94.36 | 73.49* | 73.51* | 46.37 | 46.34 | - | | DiffusionNAG | - | 94.37* | - | 73.51 | - | - | | | DiNAS (ours) | 91.61* | 94.37* | 73.49* | 73.51* | 46.66 | 45.41 | 192 | | Methods | NAS-Bench-301 | NAS-Bench-NLP | | | |------------------------|-----------------|-----------------|----------|-----| | Val(%) | Queries↓ | Val(%) | Queries↓ | | | BANANAS† | 94.47 | 192 | 95.68 | 304 | | Bayesian Opt.† | 94.71 | 192 | - | - | | Random Search† | 94.31 | 192 | 95.64 | 304 | | Regularised Evolution† | 94.75 | 192 | 95.66 | 304 | | AG-Net ‡ | 94.79 | 192 | 95.95 | 304 | | DiNAS (ours) | 94.92 | 100 | 96.06 | 304 | Table 3: Comparison of results on NAS-Bench-301 (left) and NAS-Bench-NLP (right). 'Val' represents the maximum validation accuracy as a mean over 10 runs. Queries are the number of retrieval attempts for accuracy from the benchmark. the sampling process. Specific values for each benchmark can be found in the Appendix Sec. A.5. For the metric of latency in the hardware-aware benchmark (Li et al., 2021), we wish to generate high-performing architectures lower than the given latency constraint. To achieve this, we discretize latency into two discrete classes- one below the constraint value and one above the constraint. ## 5.2 Discussion Of Results Tabular Benchmarks Tables 1 and 2 present empirical evidence of the superior performance of our proposed method in the context of tabular benchmarks. Across both tabular benchmarks, our approach consistently outperforms the SOTA or converges to optimal validation accuracy. Notably, for NAS-Bench-101, our method concurrently reduces query count by 25%, demonstrating its effectiveness. GANAS exhibits a slightly better test accuracy on ImageNet in NAS-Bench-201 experiments, which can be explained by the fact that our method searches for the best architecture in terms of validation accuracy and the best validation accuracy does not necessarily imply best test accuracy. To that end, we found that some of the generated architectures with high validation accuracy correspond to a comparatively low test accuracy for the case of ImageNet. This discrepancy is also reflected in the standard deviation values for ImageNet, reported in Table 10. Surrogate Benchmarks Furthermore, the results in Table 3 demonstrate that DiNAS excels in surrogate benchmarks as well. Our method achieves the SOTA in nearly 50% reduction in queries for NAS-Bench-301 and the same query count in NAS-Bench-NLP, surpassing the performance of previous methods such as Random Search, Bayesian Optimisation, and AG-Net. The results from NAS-Bench-NLP experiment also | Device | Lat. (ms) | DiNAS | AG-Net | Random | Queries ↓ | | | |----------|-------------|---------|------------|----------|-------------|-------|-----| | Val(%) | Feas. (%)↑ | Val(%) | Feas. (%)↑ | Val(%) | | | | | 2 | 39.44 | 92.60 | 39.70 | 29.00 | 37.20 | 200 | | | EdgeGPU | 4 | 43.91 | 93.20 | 42.80 | 29.00 | 41.70 | 200 | | 6 | 45.03 | 66.35 | 45.30 | 64.00 | 44.90 | 200 | | | 2 | 34.67 | 92.80 | 34.60 | 28.00 | 33.90 | 200 | | | Raspi4 | 4 | 43.25 | 77.80 | 42.00 | 47.00 | 41.90 | 200 | | 6 | 44.72 | 57.70 | 44.00 | 56.00 | 43.20 | 200 | | | EdgeTPU | 1 | 45.31 | 48.37 | 46.40 | 74.00 | 45.40 | 200 | | 2 | 40.01 | 97.30 | 40.90 | 48.00 | 38.80 | 200 | | | Pixel3 | 4 | 44.74 | 82.50 | 45.30 | 69.00 | 43.8 | 200 | | 6 | 45.95 | 78.50 | 45.70 | 77.00 | 45.1 | 200 | | | Eyeris | 1 | 44.67 | 78.12 | 44.50 | 49.00 | 43.30 | 200 | | FPGA | 1 | 44.53 | 91.65 | 43.30 | 65.00 | 42.90 | 200 | Table 4: Comparison of results on HW-NAS-Bench. 'Val' represents the maximum validation accuracy as a mean over 10 runs and 'Feas' represents the feasibility considering generations of all the runs. Queries are the number of retrieval attempts for accuracy and latency from the benchmark. prove that our approach is not only effective in image classification tasks but also in NLP tasks, proving our approach to be task-independent. Hardware-Aware benchmark From Table 11, we can observe that our approach outperforms Random Search in most cases and surpassing AG-Net in over half of them. Additionally, our method excels in feasibility across diverse devices and latency constraints while using the same number of queries compared to AG-Net, proving that our multi-conditioned guidance was indeed able to replicate the behaviour of multiple independent predictors (for accuracy and latency) using a single set of hyperparameters. ImageNet Lastly, we can observe from Table 5 that our approach demonstrates competitive performance with low top-1 and top-5 error rates on ImageNet, outperforming robust baselines such as DARTS, NASNETA and TENAS. However, the generations from AG-Net , DrNAS (Chen et al., 2021b), DARTS ++ Soro & Song (2023), β-DARTS (Ye et al., 2022), and Sweetimator (Yang et al., 2023) are better performing on ImageNet than our method, with the best-performing methods being DrNAS and β-DARTS. The comparison of our approach to non-generative methods gives our approach a disadvantage due to the unavailability of a dataset with DARTS-style normal-reduced cell architectures and ImageNet accuracies. Although the transferability works well in this case, it does not beat the state-of-the-art unfortunately. This experiment thus highlights that the generated architectures from our method possess generalisation capabilities across different datasets. ## 5.3 Ablation Studies 5.3.1 Comparison Of Search Times To prove the efficiency of our method, we report the training (one-time cost) and search/generation times for the search on CIFAR-100 and ImageNet datasets in the Table 6. The search times correspond to generating all the architectures in one single run of an experiment. We observe that our approach requires a significantly less search time compared to previous approaches on ImageNet. On CIFAR100 dataset, our approach outperforms DiffusionNAG An et al. (2023) by a significant margin. It can thus be proven that our approach, without an external classifier, is more efficient than predictor-based approaches, due to its reduced computational requirement for each generation. The search time comparisons are concurrent with the protocol from other generative NAS methods by Lukasik et al. (2022) and Huang & Chu (2021), where the search times are taken as the generation times of ‡Note that Lukasik et al. (2022) refers to validation accuracy of NAS-Bench-NLP as validation perplexity †Results taken from Lukasik et al. (2022) | Methods | Top-1 ↓ | Top-5 ↓ | Queries↓ | |--------------|-----------|-----------|------------| | NASNET-A | 26.0 | 8.4 | 20000 | | DARTS | 26.7 | 8.7 | - | | DrNAS | 23.7 | 7.1 | - | | TENAS | 26.2 | 8.3 | - | | AG-Net | 24.1 | 7.2 | 304 | | Sweetimator | 24.1 | - | - | | DARTS++ aug | 24.8 | 7.8 | - | | β-DARTS | 23.9 | 7.0 | - | | DiNAS (ours) | 24.8 | 7.4 | 100 | Table 5: Comparison of results for top-1, top-5 errors on ImageNet. | Dataset | Methods | Search/Gen. Time (GPU sec.)↓ | Pre-training cost (GPU hrs) ↓ | |-----------|--------------|--------------------------------|---------------------------------| | ImageNet | NASNET-A | 1.7x 108 | - | | ImageNet | DARTS | 345600 | - | | ImageNet | DrNAS | 397440 | - | | ImageNet | TENAS | 4320 | - | | ImageNet | AG-Net | 1728 | 21.6 | | ImageNet | β-DARTS | 34560 | - | | ImageNet | DARTS++ aug | 25920 | - | | ImageNet | DiNAS (ours) | 53.76 | 16.6 | | CIFAR100 | DiffusionNAG | 261 | 3.43 | | CIFAR100 | DiNAS (ours) | 15.36 | 0.25 | Table 6: Comparison of search times in GPU seconds and training times of the pre-trained generator (if applicable) in GPU hours on ImageNet and CIFAR 100 (using NAS-Bench-201) for different approaches. The search times correspond to generating all the architectures in one single run of an experiment. Note that we report the mean time over different runs for DiNAS. the proposed method. The training times in Table 6 are considered as a one-time cost because the search time of a pre-trained model is the inference cost of the model. Upon training, the pre-trained model can then be used to generate architectures, which also generalise to different datasets (see Table 5). ## 5.3.2 Novelty And Uniqueness Analysis We conduct an analysis of novelty and uniqueness for the generated architectures. We start by generating 2000 and 100,000 architectures respectively based on our proposed method. To assess novelty, we calculate the percentage of generated samples absent in the training data whereas to assess uniqueness, we calculate the ratio of architectures present just once in the generations to the total number of generations(Zhang et al., 2019) (An et al., 2023). Given the enormous size of DARTS and NAS-Bench-NLP search spaces, we consider NAS-Bench-301 and NAS-Bench-NLP for our analysis. Furthermore, to examine the efficiency of our method, we record and report the training times for our method (for 100 epochs), along with the sampling times per architecture using a single NVIDIA A6000 GPU on five different benchmarks. We can observe the results of our ablation studies in Table 7. Note that for benchmarks involving multiple cases (e.g. HWNAS and NAS-Bench-201), we take the mean of the training times for all cases. We observe from the results of the novelty analysis for the case of 2000 samples and 100,000 samples that in both the datasets, all the generated samples are novel and most of them are unique, proving that our method is not just selecting the best-performing architectures from the training set. We can also observe that while the novelty for all the cases remain at the top, the uniqueness suffers a bit when sampling high number of architectures, due to a limited number of possible high-performing architectures. Moreover, we can observe that the sampling rates of each benchmark are in milliseconds, proving the rapid generation capabilities of our method. Table 7: Ablation study on novelty analysis (left) and efficiency analysis (right). Note that 'Nov' represents the novelty ratio, 'Uni' represents the uniqueness ratio and 'Gen' represents the number of architectures generated. The training time is reported in hours and sampling time per architecture in seconds. | Benchmark | Gen. | Nov.(%) ↑ | Uni.(%) ↑ | | | |---------------|-----------------|-------------|-------------|--------------|---------------| | NAS-Bench-301 | 2000 | 100 | 97.37 | | | | 100,000 | 100 | 91.10 | Benchmark | Train (hrs)↓ | Sample (sec)↓ | | | NB101 | 0.96 | 0.09 | | | | | NB201 | 0.25 | 0.08 | | | | | NB301 (Normal) | 8.3 | 0.14 | | | | | NB301 (Reduced) | 8.3 | 0.14 | | | | | NBNLP | 0.95 | 0.15 | | | | | HWNAS | 1.5 | 0.08 | | | | NAS-Bench-NLP | 2000 | 100 | 97.57 | | | | 100,000 | 100 | 96.78 | | | | ## 5.3.3 Required Number Of Training Samples This section analyses the performance of our method on DARTS search space (Liu et al., 2019) when trained on different number of samples. In particular, we consider training in three scenarios- on 100,000 samples, 10,000 samples and 1,000 samples, for which, we randomly sample the given number of training samples from DARTS search space and follow the same training protocol as our experiments on NAS-Bench301. Upon training in each case, 192 architectures are generated for a total of 10 runs and the maximum validation accuracy is reported as a mean over these runs. In addition, we calculate and report the novelty and uniqueness of the generated architectures, calculated using the same methodology as Section 5.3.2 considering generations from all the 10 runs. Finally, we report the number of queries. The results for this ablation study are reported in Table 8. | Training Samples | Val acc.(%) ↑ | Novelty(%) ↑ | Uniqueness(%) ↑ | Queries | |--------------------|-----------------|----------------|-------------------|-----------| | 100,000 | 94.92 | 100 | 97.37 | 192 | | 10,000 | 94.89 | 100 | 100 | 192 | | 1,000 | 85.29 | 100 | 100 | 192 | Table 8: Peformance on NAS-Bench-301 (Siems et al., 2021) with different number of samples. The Val. acc represents the maximum validation accuracy, reported as a mean over 10 runs, whereas novelty and uniqueness are calculated considering the generations from all the runs. Queries are the number of retrieval attempts for accuracy from the benchmark. We observe from Table 8 that our method learns the architectural representation and finds the highperforming architectures with one tenth of the number of training samples as well. However, the mean maximum validation accuracy drops when we further reduce the training samples to 1000. We found out that the reason for this decline was the unavailability of any valid generations in one of the runs. This resulted in the maximum validation accuracy for that run to be 0, which influenced the mean. From this, we can conclude that the data capture capabilities of our model are compromised when training on a very small number of samples. Moreover, we observe that the novelty and uniqueness ratios do not suffer at all when reducing the training data availability. ## 5.3.4 Number Of Classes For Guidance The goal of this ablation study is to analyse the effect of the number of classes d for accuracy present in the training data on our approach. We train and evaluate our method on NAS-Bench-201 (Dong & Yang, 2020) for the task of CIFAR-10 image classification involving four different cases: d = {2, 3, 4, 5}. We use the same training and evaluation protocol as experiments in Table 2. The split of the data into classes, denoted as sT , is performed depending on specific percentiles f = (f1, f2*, . . . , f*d−1) of the accuracy. For instance, for two classes, f = 95 and the split sT = [95th − 100th, 0th − 95th], while for three classes, f = [80, 95] and sT = [95th − 100th, 80th − 95th, 0th − 80th]. In all the cases, we generate architectures belonging to the class of >95th percentile. The choice of f is empirical as it does not affect the samples in the class we want to condition our model on (i.e. > 95th percentile). We report the maximum validation accuracy and the corresponding test accuracy, both as a mean over 10 runs, along with the number of queries. Moreover, we report the percentiles f used for splitting. The results for this study are reported in Table 9. Table 9: Comparison of results on NAS-Bench-201 for CIFAR-10 when using different number of classes. Here, f represents the percentiles used for splitting the data into classes, 'Val' and 'Test' represent the maximum validation accuracy and the corresponding test accuracy, both represented as a mean over 10 runs. Queries are the number of retrieval attempts for accuracy from the benchmark. | Number of classes (d) | f | Val(%)↑ | Test(%)↑ | Queries | |-------------------------|------------------|-----------|------------|-----------| | 2 | [95] | 91.61 | 94.37 | 192 | | 3 | [80, 95] | 91.60 | 94.00 | 192 | | 4 | [50, 80, 95] | 91.52 | 93.79 | 192 | | 5 | [30, 50, 80, 95] | 91.57 | 93.89 | 192 | We observe from Table 9 that discretising the accuracy into two classes results in a slightly superior performance over other values of d. However, the differences are marginal. ## 6 Conclusion We presented a generative method to facilitate the search process for neural architectures. Our approach uses a conditional graph diffusion model to rapidly generate novel, unique and high-performing neural network architectures. In this context, we first formulated the classifier-free guidance for graph diffusion models and then proposed a multi-conditioned classifier-free guidance for diffusion models. Unlike the related work, our method does not require an external surrogate predictor and is thus differentiable. In the experiments, we demonstrated state-of-the-art performance in tabular, surrogate and hardware-aware evaluations by considering six standard benchmarks. Furthermore, we have shown the search efficiency of our approach compared to the previous work using ablation studies. We observed that our method samples architectures two orders of magnitude faster than other generative NAS approaches and at least three orders of magnitude faster than classic approaches. ## References Sohyun An, Hayeon Lee, Jaehyeong Jo, Seanie Lee, and Sung Ju Hwang. Diffusionnag: Task-guided neural architecture generation with diffusion models. *arXiv preprint arXiv:2305.16943*, 2023. Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 17981–17993. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/ paper/2021/file/958c530554f78bcd8e97125b70e6973d-Paper.pdf. Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In *International Conference on Learning Representations*, 2017. URL https: //openreview.net/forum?id=S1c2cvqee. Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th* International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 550–559. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/bender18a.html. Andrew Brock, Theo Lim, J.M. Ritchie, and Nick Weston. SMASH: One-shot model architecture search through hypernetworks. In *International Conference on Learning Representations*, 2018. URL https: //openreview.net/forum?id=rydeCEhs-. Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In *Proceedings of the 22nd* acm sigkdd international conference on knowledge discovery and data mining, pp. 785–794, 2016. Wuyang Chen. Darts evaluation: Train from scratch for architectures from darts space, 2022. URL https: //github.com/chenwydj/DARTS_evaluation. Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. In *International Conference on Learning Representations*, 2021a. Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. Drnas: Dirichlet neural architecture search. In *International Conference on Learning Representations*, 2021b. URL https: //openreview.net/forum?id=9FWas6YbmB3. Xiangxiang Chu, Bo Zhang, and Ruijun Xu. Multi-objective reinforced evolution in mobile neural architecture search. In *European Conference on Computer Vision*, pp. 99–113. Springer, 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in Neural* Information Processing Systems, 34:8780–8794, 2021. Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. In *International Conference on Learning Representations (ICLR)*, 2020. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1–21, 2019. URL http://jmlr.org/papers/v20/18-598.html. Jörg K.H. Franke, Gregor Koehler, André Biedenkapp, and Frank Hutter. Sample-efficient automated deep reinforcement learning. In *International Conference on Learning Representations*, 2021. URL https: //openreview.net/forum?id=hSjxQ3B7GWq. Nico Giambi and Giuseppe Lisanti. Conditioning diffusion models via attributes and semantic masks for face generation. *arXiv preprint arXiv:2306.00914*, 2023. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In *NeurIPS 2021 Workshop on Deep* Generative Models and Downstream Applications, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020. Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International conference on machine learning*, pp. 8867–8887. PMLR, 2022. Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1314–1324, 2019. Sian-Yao Huang and Wei-Ta Chu. Searching by generating: Flexible and efficient one-shot nas with architecture generator. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 983–992, 2021. Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 1946–1956, 2019. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Nikita Klyuchnikov, Ilya Trofimov, E. Artemova, Mikhail Salnikov, Maxim Fedorov, and Evgeny Burnaev. Nas-bench-nlp: Neural architecture search benchmark for natural language processing. *IEEE Access*, PP: 1–1, 2020. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang. Rapid neural architecture search by learning to generate graphs from datasets. In *International Conference on Learning Representations*, 2021. Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin. {HW}-{nas}-bench: Hardware-aware neural architecture search benchmark. In *International Conference on Learning Representations*, 2021. Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in artificial intelligence, pp. 367–377. PMLR, 2020. Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In *International Conference on Learning Representations*, 2019. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, and Margret Keuper. Smooth variational graph embeddings for efficient neural architecture search. In 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2021. Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look–generative nas is surprisingly efficient. In *European Conference on Computer Vision*, pp. 257–273. Springer, 2022. Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. *Advances* in neural information processing systems, 31, 2018. Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, and Weichen Liu. Surgenas: a comprehensive surgery on hardware-aware differentiable neural architecture search. *IEEE Transactions on Computers*, 72(4):1081–1094, 2022. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. *Computational Linguistics*, 19(2):313–330, 1993. URL https: //aclanthology.org/J93-2004. Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In *International conference on machine learning*, pp. 4095–4104. PMLR, 2018. Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of Proceedings of Machine Learning Research, pp. 2902–2911. PMLR, 06–11 Aug 2017. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In *Proceedings of the aaai conference on artificial intelligence*, volume 33, pp. 4780– 4789, 2019. Seyed Saeed Changiz Rezaei, Fred X Han, Di Niu, Mohammad Salameh, Keith Mills, Shuo Lian, Wei Lu, and Shangling Jui. Generative adversarial neural architecture search. *arXiv preprint arXiv:2105.09356*, 2021. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684–10695, 2022. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic textto-image diffusion models with deep language understanding. *Advances in Neural Information Processing* Systems, 35:36479–36494, 2022. Gresa Shala, Thomas Elsken, Frank Hutter, and Josif Grabocka. Transfer NAS with meta-learned bayesian surrogates. In *The Eleventh International Conference on Learning Representations*, 2023. URL https: //openreview.net/forum?id=paGvsrl4Ntr. Julien Niklas Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hutter. {NAS}- bench-301 and the case for surrogate benchmarks for neural architecture search, 2021. Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mr Prabhat, and Ryan Adams. Scalable bayesian optimization using deep neural networks. In International conference on machine learning, pp. 2171–2180. PMLR, 2015. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International conference on machine learning*, pp. 2256–2265. PMLR, 2015. Bedionita Soro and Chong Song. Enhancing differentiable architecture search: A study on small number of cell blocks in the search stage, and important branches-based cells selection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pp. 1253–1261, October 2023. Yuan Tian, Qin Wang, Zhiwu Huang, Wen Li, Dengxin Dai, Minghao Yang, Jun Wang, and Olga Fink. Off-policy reinforcement learning for efficient and effective gan architecture search. In *Computer Vision–* ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VII 16, pp. 175–192. Springer, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing* Systems, volume 30. Curran Associates, Inc., 2017a. URL https://proceedings.neurips.cc/paper_ files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017b. Clement Vignac, Igor Krawczuk, Antoine Siraudin, Bohan Wang, Volkan Cevher, and Pascal Frossard. Digress: Discrete denoising diffusion for graph generation. In The Eleventh International Conference on Learning Representations, 2023. Junxiang Wang, Junji Jiang, and Liang Zhao. An invertible graph diffusion neural network for source localization. In *Proceedings of the ACM Web Conference 2022*, WWW '22, pp. 1058–1069, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450390965. doi: 10.1145/3485447.3512155. URL https://doi.org/10.1145/3485447.3512155. Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with neural architectures for neural architecture search. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 10293–10301, 2021a. Colin White, Sam Nolen, and Yash Savani. Exploring the loss landscape in neural architecture search. In Uncertainty in Artificial Intelligence, pp. 654–664. PMLR, 2021b. Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 10734–10742, 2019. Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang Wang, Zicheng Liu, Mei Chen, and Lu Yuan. Stronger nas with weaker predictors. Advances in Neural Information Processing Systems, 34:28904–28918, 2021. Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang. Does unsupervised architecture representation learning help neural architecture search? *Advances in neural information processing systems*, 33:12486– 12498, 2020. Shen Yan, Colin White, Yash Savani, and Frank Hutter. Nas-bench-x11 and the power of learning curves. Advances in Neural Information Processing Systems, 34:22534–22549, 2021a. Shen Yan, Colin White, Yash Savani, and Frank Hutter. Nas-bench-x11 and the power of learning curves. Advances in Neural Information Processing Systems, 34:22534–22549, 2021b. Longxing Yang, Yanxin Fu, Shun Lu, Zihao Sun, Jilin Mei, Wenxiao Zhao, and Yu Hu. Sweet gradient matters: Designing consistent and efficient estimator for zero-shot architecture search. *Neural Networks*, 168:237–255, 2023. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2023.09.012. Yibo Yang, Hongyang Li, Shan You, Fei Wang, Chen Qian, and Zhouchen Lin. Ista-nas: Efficient and consistent neural architecture search by sparse coding. *Advances in Neural Information Processing Systems*, 33:10503–10513, 2020. Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, and Wanli Ouyang. *beta*-darts: Beta-decay regularization for differentiable architecture search. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10864–10873. IEEE, 2022. Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. NAS-bench101: Towards reproducible neural architecture search. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of Proceedings of Machine Learning Research, pp. 7105–7114, Long Beach, California, USA, 09–15 Jun 2019. PMLR. Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen. D-vae: A variational autoencoder for directed acyclic graphs. *Advances in Neural Information Processing Systems*, 32, 2019. Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. In *International Conference on Learning Representations*, 2017. URL https://openreview.net/forum?id=r1Ue8Hcxg. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 8697–8710, 2018. ## A Appendix A.1 Proofs And Derivations Derivation 1: Let q be the unconditional Markovian noise model and qˆ be the conditional noising process similar to q. We define our aim as decomposing qˆ(Gt−1|Gt, y1, y2*, ..., y*k) and then deriving the score function for the multi-conditioned diffusion process. We start by expanding the term: qˆ(Gt−1|Gt, y1, y2, . . . , yk) = qˆ(Gt−1, Gt, y1, y2, . . . , yk) qˆ(Gt, y1, y2, . . . , yk)(11) = qˆ(Gt−1, Gt, y1, y2, . . . , yk) qˆ(y1, y2, . . . , yk|Gt)ˆq(Gt)(12) = qˆ(y1, . . . , yk|Gt−1, Gt)ˆq(Gt−1, Gt) qˆ(y1, y2, . . . , yk|Gt)ˆq(Gt)(13) = qˆ(y1, . . . , yk|Gt−1, Gt)ˆq(Gt−1|Gt)ˆq(Gt) qˆ(y1, y2, . . . , yk|Gt)ˆq(Gt)(14) = qˆ(y1, . . . , yk|Gt−1, Gt)ˆq(Gt−1|Gt) qˆ(y1, y2, . . . , yk|Gt)(15) (11) (12) (13) (14) (15) (16) (17) $$(16)$$ Dhariwal & Nichol (2021) prove that the classification term (in our case qˆ(y1*, . . . , y*k|Gt−1, Gt)) does not depend on the noisier version of G (i.e. Gt) and can be rewritten as qˆ(y1*, . . . , y*k|Gt−1). Furthermore, they also show that qˆ behaves the same as q when not conditioned on the classification targets y1*, . . . , y*k. We use these findings to further simplify Eq. 15 to: $$\hat{q}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y_{1},y_{2},\ldots,y_{k})=\frac{\hat{q}(y_{1},\ldots,y_{k}|\mathbf{G}^{t-1})q(\mathbf{G}^{t-1}|\mathbf{G}^{t})}{\hat{q}(y_{1},y_{2},\ldots,y_{k}|\mathbf{G}^{t})}\tag{1}$$ We assume that the generative model pθ(Gt−1|Gt) approximates q(Gt−1|Gt) and the classifier pψ(y1*, . . . , y*k|Gt−1) approximates qˆ(y1*, . . . , y*k|Gt−1). By substituting the distributions with their approximations, taking the logarithm and calculating the gradients w.r.t. Gt−1, we obtain the following score function: $$\nabla_{{\mathbf{G}}^{t-1}}\log p_{\theta,\psi}({\mathbf{G}}^{t-1}|{\mathbf{G}}^{t},y_{1},...,y_{k})=\nabla_{{\mathbf{G}}^{t-1}}\log p_{\theta}({\mathbf{G}}^{t-1}|{\mathbf{G}}^{t})+\nabla_{{\mathbf{G}}^{t-1}}\log p_{\psi}(y_{1},...,y_{k}|{\mathbf{G}}^{t-1}),$$ where Gt and Gt−1represent the DAG G at time step t and t−1 respectively, ψ are the classifier parameters and θ are the generative model parameters. Similar to the standard classifier-based guidance (Dhariwal & Nichol, 2021), we multiply the conditioning term by a factor of γ . Thus, we can express the reverse denoising process as a weighted sum of the unconditional score function ∇Gt−1 log pθ(Gt−1|Gt) and the conditioning term ∇Gt−1 log pψ(y1*, . . . , y*k|Gt−1), given by: $$\nabla_{\mathbf{G}^{t-1}}\log p_{y_{0},\psi}(\mathbf{G}^{t-1}|\mathbf{G}^{t},y_{1},...,y_{k})=\nabla_{\mathbf{G}^{t-1}}\log p_{\theta}(\mathbf{G}^{t-1}|\mathbf{G}^{t})+\gamma\nabla_{\mathbf{G}^{t-1}}\log p_{\psi}(y_{1},\ldots,y_{k}|\mathbf{G}^{t-1}),\tag{18}$$ where γ is the guidance scale. Then, by substituting the conditioning term ∇Gt−1 log pψ(y1*, . . . , y*k|Gt−1) from Eq. 17 and removing the classifier parameters ψ, we obtain: $$\begin{array}{c}{{\nabla_{{\bf G}^{t-1}}\log p_{\theta_{\gamma}}({\bf G}^{t-1}|{\bf G}^{t},y_{1},...,y_{k})=(1-\gamma)\nabla_{{\bf G}^{t-1}}\log p_{\theta}({\bf G}^{t-1}|{\bf G}^{t})}}\\ {{+\gamma\nabla_{{\bf G}^{t-1}}\log p_{\theta}({\bf G}^{t-1}|{\bf G}^{t},y_{1},...,y_{k}).}}\end{array}$$ $$(17)$$ $$(19)$$ Hence, as we can observe from Equation 19, we have successfully derived the score function of multiconditioned diffusion guidance. ## A.2 Implementation Details Our proposed method involves training a Graph Transformer network, proposed by Dwivedi & Bresson (2021), for denoising. This network comprises of an input node/edge wise MLP layer, followed by 5 Graph Transformer layers and, node/edge wise MLP as the output layer. Each Graph Transformer layer consists Table 10: Mean validation and test accuracies with standard deviation for all runs of NAS-Bench-101, NASBench-201, NAS-Bench-301 and NAS-Bench-NLP experiments. StD represents the standard deviation in the table. | Experiment | Dataset | Val. acc. | Test acc. | | | |---------------|----------------|-------------|-------------|-------|-------| | | Mean | StD | Mean | StD | | | NAS-Bench-101 | CIFAR-10 | 94.98 | 0.169 | 94.27 | 0.197 | | NAS-Bench-201 | CIFAR-10 | 91.61 | 0.000 | 94.37 | 0.184 | | NAS-Bench-201 | CIFAR-100 | 73.49 | 0.000 | 73.51 | 0.000 | | NAS-Bench-201 | ImageNet | 46.66 | 0.092 | 45.41 | 0.589 | | NAS-Bench-301 | CIFAR-10 | 94.92 | 0.072 | - | - | | NAS-Bench-NLP | Penn Tree Bank | 96.06 | 0.173 | - | - | of three main parts: a self-attention module similar to the one found in the standard Transformer model (Vaswani et al., 2017a), a fully connected layer, and layer normalisation. All the models were trained for 100 epochs with a learning rate of 0.0002, batch-size of 16, weight decay of 10−12, guidance scale of -4 and using AdamW optimiser (Loshchilov & Hutter, 2017) on a single NVIDIA A6000 GPU. For noising, we use cosine noise schedule for T = 500 time-steps. The hyperparameters used in our method were derived from the code from Vignac et al. (2023). For the evaluation on ImageNet, we employ the same training pipeline and code as AG-Net (Lukasik et al., 2022) and TENAS (Chen et al., 2021a), taken from Chen (2022). We train the best generated architecture in terms of validation accuracy from NAS-Bench-301 on ImageNet for 250 epochs. The initial learning rate is set to 0.5 with a cosine learning rate scheduler and the batch size is set to 1024. The ImageNet training is performed on 3 NVIDIA V100 GPUs parallelly in a distributed manner. ## A.3 Additional Results Table 10 provides some additional results, mainly reporting the mean and standard deviations over different runs for NAS-Bench-101, NAS-Bench-201, NAS-Bench-301 and NAS-Bench-NLP experiments. It can be observed that the standard deviation (StD.) for test accuracy is generally higher than the StD. for validation accuracy, particularly for NAS-Bench-201 on ImageNet. The reason for this difference is that the model is trained to generate architectures with high validation accuracy and has no test accuracy information. This also explains the discrepancy of the validation accuracy values and test accuracy values for the case of ImageNet in Table 2. ## A.4 Additional Ablation Studies A.4.1 Effect Of The Guidance Scale Parameter We perform an additional ablation study to analyse the effect of the guidance scale parameter γ on the performance of our model. We train our model on two differently sized search spaces: NAS-Bench-101 (size of 423k samples) and NAS-Bench-NLP (size of 1053 samples) search space using four guidance scales (-4, -2, 2 and 4) in this study and report the mean and standard deviation of the validation accuracy (and test accuracy for NAS-Bench-101) over 10 runs. The evaluation protocol is identical to the main experiments (Section 5). Table 11 presents the results of this ablation study. Interestingly, we can observe that negative values of guidance scale perform better than positive values, unlike Ho & Salimans (2021). This can be attributed to the difference in formulation of Eq. 19 in Ho & Salimans (2021), where the guidance parameter is w, instead of γ (w ∼ −γ). We found that γ = −4 setting produces best results for both, small and large search spaces, and is thus used in our experiments. Table 11: Comparison of performance of DiNAS using different guidance scales on NAS-Bench-101 and NAS-Bench-NLP search spaces. Val. represents the mean validation accuracy, 'Test' represents the mean test accuracy and StD. represents the standard deviation over 10 runs. | γ | NAS-Bench-101 | NAS-Bench-NLP | | |---------------|-----------------|-----------------|------------| | Val(%) ±StD.↑ | Test(%) ±StD.↑ | Val(%) ±StD.↑ | | | -4 | 94.98±0.17 | 94.27±0.20 | 96.06±0.17 | | -2 | 95.06±0.14 | 94.16±0.20 | 95.97±0.12 | | 2 | 92.60±0.43 | 92.05±0.58 | 95.42±0.08 | | 4 | 91.44±0.47 | 90.86±0.59 | 95.36±0.29 | ## A.5 Evaluation Protocols NAS-Bench 101 and NAS-Bench 201 We generate a fixed number of architectures (equal to the respective number of queries) and query them on both the benchmarks to find the maximum validation accuracy and its corresponding test accuracy. This process is repeated 10 times to calculate the mean maximum validation accuracy and mean corresponding test accuracy. Note that we use f = 99 for NASBench-101 experiments and f = 95 for NAS-Bench-201 experiments. NAS-Bench-301 To evaluate our approach on NAS-Bench-301, a random subset of 100,000 architectures is selected from the DARTS search space. As surrogate benchmarks do not provide accuracy, the accuracy of the selected architectures are calculated using a pre-trained surrogate predictor XG-Boost (Chen & Guestrin, 2016) provided with NAS-Bench-301. Next, the network is trained using normal cells from this dataset, producing 10 normal cells from > fth class. Next, this process is repeated to produce 10 reduction cells. The evaluation involves 100 queries, considering all possible combinations of the 10 generated normal and 10 reduction cells. For each query, the highest validation accuracy and its corresponding test accuracy are recorded. This entire process is iterated 10 times, yielding mean values for these recorded accuracies. We use f = 99 for this benchmark. NAS-Bench-NLP Given the enormity of this search space, we employ NAS-Bench-X11 (Yan et al., 2021b) as a surrogate predictor to obtain accuracy for these architectures, trained specifically on the Penn TreeBank dataset (Marcus et al., 1993). However, it should be noted that NAS-Bench-X11 is only capable of handling graphs with up to 12 nodes, which filters our dataset to include the total of 7,258 architectures. After training, we generate 304 architectures and estimate their accuracy using NAS-Bench-X11. The process is repeated 10 times. We set f = 99 for this benchmark. HW-NAS-Bench For this evaluation, we consider 12 distinct cases for latency and device constraints. Upon each training, our method generates 200 architectures which are then queried on the benchmark. We adopt two conditions simultaneously, namely the accuracy should be in > fth(= 95) percentile class and the latency should satisfy the given constraint. We repeat the generation process for 10 runs and report the mean of the validation accuracy, along with the feasibility and number of queries. ImageNet We start by generating 100 architectures through our approach trained on NAS-Bench-301. Then, we select the best architecture in terms of validation accuracy. Next, we train the network using the same training pipeline and code as TENAS (Chen et al., 2021a). Finally, we save the weights from the epoch where the top-1 and top-5 validation errors are minimum and report the top-1 and top-5 errors in Table 5. ## A.6 Benchmark Descriptions A.6.1 Nas-Bench-101 NAS-Bench-101 (Ying et al., 2019) is a cell-based tabular benchmark, comprising a large collection of 423,624 distinct architectures represented as cells. These architectures are also mapped to their respective validation and test accuracy metrics, evaluated on CIFAR-10 image classification task. In this benchmark, the cells are constrained to have a maximum of 7 nodes and 9 edges. Specifically, the first and last nodes within these cells serve as input and output nodes. Intermediate nodes within the cells can take on one of three possible operations: 1x1 convolution, 3x3 convolution, or 3x3 max-pooling. Furthermore, it is important to note that each convolutional operation is preceded by batch normalisation, followed by a Rectified Linear Unit (ReLU) activation function. ## A.6.2 Nas-Bench-201 Another cell-based tabular benchmark is NAS-Bench-201 (Dong & Yang, 2020), which contains data for 15,625 architectures (cells) trained on 3 datasets- CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and ImageNet16-120 (Deng et al., 2009). In contrast to NAS-Bench-101, each edge of a cell in NAS-Bench-201 is associated to an operation drawn from a predefined operation set O = {1x1 convolution, 3x3 convolution, 3x3 avg pooling, skip, zero}. In our training and experiments on NAS-Bench-201, we convert the edge based representation to node-based representation, where each node is associated with an operation, similar to NAS-Bench-101. This conversion is in line with the conversion in Arch2Vec (Yan et al., 2020). Each cell comprises 4 nodes and 6 edges and the adjacency matrices are identical to one another. The existence of operations like zero and skip enforces the structural diversity in different architectures. ## A.6.3 Nas-Bench-301 NAS-Bench-301 (Siems et al., 2021) is a surrogate benchmark that trains and evaluates several performance predictors on 60,000 sampled architectures from DARTS search space (Liu et al., 2019). These learned performance (surrogate) predictors are then able to predict the accuracy of architectures in DARTS search space (comprising 1018 architectures). The architectures in DARTS comprise of a normal cell and a reduction cell. Each cell has a maximum of 7 nodes and 12 edges with each edge associated with an operation drawn from the set O = { 3x3 sep. conv., 3x3 dil. conv., 5x5 sep. conv, 3x3 average pooling, identity, zero }. We utilise a pretrained XGBoost (Chen & Guestrin, 2016) provided by Siems et al. (2021) as the surrogate predictor for our experiments. ## A.6.4 Nas-Bench-Nlp NAS-Bench-NLP is the first NAS benchmark designed for Natural Language Processing tasks (Klyuchnikov et al., 2020). While its search space is extremely large with the total of 1053 architectures, NAS-Bench-NLP provides 14,322 architectures trained on Penn TreeBank dataset (Marcus et al., 1993). Each cell in the search space has a maximum of 24 nodes, 3 hidden states and 3 linear input vectors. The nodes in each cell depict the operations drawn from the set O = {Linear, blending, product, sum, tanh, sigmoid, LeakyRELU }.We utilise the surrogate predictor provided by NAS-Bench-X11 (Yan et al., 2021a) for this benchmark. ## A.6.5 Hw-Nas-Bench HW-NAS-Bench is a unique benchmark that provides hardware-specific details, including latency and energy cost, across various devices along with their respective accuracy. These devices encompass a diverse set of hardware platforms, including EdgeGPU, Raspi4, Pixel3, EdgeTPU, Eyeriss, Pixel3, and FPGA. Crucially, HW-NAS-Bench operates within two distinct search spaces: NAS-Bench-201 (Dong & Yang, 2020) and FBNet (Wu et al., 2019). In our experiments, we utilise latency information as hardware constraint, within the context of the NAS-Bench-201 search space. ## A.7 Examples Of Generated Cells Here, we demonstrate some examples of the generated normal cells on DARTS search space using our proposed method trained on NAS-Bench-301 (Siems et al., 2021) (demonstrated in figure 2) and indicate structural differences compared to cells generated by DARTS (Liu et al., 2019) and TENAS (Chen et al., 2021a) (demonstrated in figure 3). It can be observed that DiNAS generations have a marginally higher 5x5 convolutional connections compared to the other approaches. ![22_image_0.png](22_image_0.png) Figure 2: Examples of high performing generated cells by our method DiNAS on DARTS search space using NAS-Bench-301. ![22_image_1.png](22_image_1.png) ![22_image_2.png](22_image_2.png) Figure 3: Examples of high performing generated cells by DARTS (Liu et al., 2019) (left) and TENAS (Chen et al., 2021a) (right)
Review 1: Summary: The paper presents a diffusion based generative model for neural architecture search. The architectures are represented as a graph, a diffusion model is trained over the distribution of nodes and edges of graphs corresponding to architectures. The proposed diffusion models is a small variation of an existing diffusion model over graphs. The trained diffusion model can be used in a search process to find models with high accuracy. Extensive experiments are conducted on multiple tabular NASBench datasets, and ImageNet. The proposed approach offers marginal benefits over prior work. Strengths and Weaknesses: Strengths: - The proposed approach shows marginal improvements over prior work on multiple NAS benchmarks. - The novelty and uniqueness analysis is interesting. Weaknesses: - The primary weakness is the lack of a concrete claim in this paper. It is unclear to this reviewer why we need to learn the density of architectures using a diffusion model. What conceptual or practical limitation of existing density models for the same (e.g., VAE or its variants) is the proposed approach addressing. - The multi-condition idea is interesting in principle. However, the particular multi-condition used here in terms of different accuracy quantiles is strange. Under what circumstances should quantiles corresponding to low accuracy of interest? Wouldn’t we always want models in the top say 95th percentile? Requested Changes: - Please state the concrete claims being made by the paper and what evidence (if any) has been provided to validate the claims. The claims are not apparent from the paper. - For the novelty and uniqueness of architectures, how was the similarity of a pair of generated graphs measured? Was this through a WL-test? This information is missing. - How would the novelty and uniqueness vary as you sample lot more architectures? I would presume, there are only a small set of “good” architectures, and once those are sampled, you will start seeing lots of repetitions. - For the results, can you also provide standard deviations of accuracy? Only mean values have been provided. Same for the number of queries (if it changes from one run to another). - In section 4.1.2, what is $a$? It has not been defined as far as I can tell. - The purpose of discretization of the target variable in section 4.1.3 is unclear. As in, why do we need this discretization? Through discretization the diffusion model will have to learn the density over regions with low accuracy networks. Almost always, our goal is to maximize accuracy. So in this case, isn’t it sufficient to learn the density over networks above a certain accuracy threshold and ignore the rest? This goes back to the weakness mentioned above, in terms of, lack of clarity on what limitations of existing density models is the proposed approach designed to overcome? Broader Impact Concerns: There are no perceived ethical concerns for this work. ================================================== Review 2: Summary: The paper addresses the task of neural architecture search and proposed a graph-diffusion approach. Existing techniques from the literature are employed for performing discrete conditional graph diffusion. The paper introduces an extension involving multiple conditions to encourage satisfaction of multiple constraints such as accuracy and latency. The proposed approach is differentiable and thus permits end-to-end training and only requires the training of a single model. The paper reports the results of experiments on commonly studied benchmarks and shows that the proposed approach offers an attractive trade-off between speed and accuracy. Strengths and Weaknesses: Strengths 1. Although there have been some very recent methods proposing diffusion approaches for network architecture search, these are contemporaneous with the submitted paper. The novel approach will be of significant interest to multiple researchers who are exploring the topic of network architecture search. 2. The proposed method appears to offer advantageous performance (although there is a need for clearer reporting of execution time to better support some of the main claims of the paper). 3. The methodology is clearly described and the experiments are thorough. The reporting of novelty/uniqueness is a particularly welcome and interesting inclusion. Weaknesses 1. Some closely related work is not discussed in detail. Although some of this related work is very recent, it is cited in the paper, so the authors were clearly aware of it. While it may be unreasonable to expect a detailed experimental comparison, a more thorough qualitative discussion is important, and if experimental results are available, then it seems reasonable to include them in the tables. 2. The results in the paper do not include any indication of variability over different runs. 3. In some cases, the choice of baselines could be improved. There is a multitude of recent techniques – some of the studies compare to only one recent method. 4. The reported execution times are difficult to understand, making it challenging to compare the proposed method with the baselines. This is important, considering that a major claim of the paper is that the required time is dramtically reduced. Requested Changes: 1. The authors mention the papers by An et al. (2023) and Lukasik et al. (2022) in the introduction but An et al. (2023) is not discussed in the Related Work section and Lukasik et al. (2022) is only mentioned in passing. These seem to be the most closely related works, so the relationship with these works should be discussed in detail. In particular, the paper claims in the introduction that because the approach “is completely differentiable and thus training involves only a single model … we reach promising performance with much smaller search time.” The paper demonstrates this to some degree for AG-Net (although I think the reporting of timing results needs to be much clearer, as requested below). However, I do not believe there is any experimental comparison to the method proposed by An et al. (2023). This is a recent work, so there could be an argument that it is concurrent research. On the other hand, the paper by An et al. (2023) was posted on arXiv in May, and that is the version that the authors are citing, so presumably they were aware of the work. There are comparable experimental results that could be taken from An et al. (2023), even if the experiments were not reproduced. Moreover, if there is no experimental comparison, the paper should not claim that there is a “much smaller search time”. It should be made clear that this claim refers only to Lukasik et al. (2022). In general, I would like to see a considerably more detailed discussion of why the authors believe that an approach involving a single model is so important. 2. Tables 1 and 2 report only the number of queries. Are the search times directly proportional to the number of queries? Or does it vary substantially for different methods? In particular, does the proposed diffusion approach require significantly more time per query? 3. The paper does not provide any intervals or deviations for the averaged results. The reader would have better insight into the performance of the method if such information were included. For example, in Table 2, what is the variability over different runs for ImageNet for both the validation and test sets? (Perhaps this would explain 4. With regard to the results in Table 2, can the authors provide a more detailed explanation of why there is a significant discrepancy between the validation and test results for ImageNet? The current explanation is that the proposed approach optimizes over validation accuracy and this does not necessarily correlate with test performance. While this is true, it is odd that the only technique that is affected in a significant way is the proposed method. The variability over different runs would be helpful here – is the proposed method identifying exactly the same architecture, irrespective of instantiation? Or is one of the high-scoring validation architectures particularly poor in the test set? 5. Table 5 presents important ImageNet experimental results. However, the selection of the baselines is surprising, with only AG-Net being a recent comparable baseline. There are multiple other recent baselines that could be included (if only to provide the reader with a better understanding of the competitive landscape). Some suggestions might be [R1] (although this is a very recent publication, it was posted on arXiv in 2022), [R2] and [R3]. There is also [R4], but this is a very recent publication, so it is perhaps unfair to ask for inclusion/comparison – perhaps the authors could at least cite and discuss. Arch2VEC (Yan et al., 2020) [R5] and DrNet (Chen et al., 2021) [R6] are also relevant baselines and should be cited and discussed (possibly with some experimental comparison). 6. In Table 5, why is the training time not included in the execution time? The paper refers to this as a “one-time cost”. For a fair comparison, it would seem more reasonable just to have a column that reports the total execution time. (This can be broken down in other columns to provide more information about how the time is being used). Is there a demonstration of transferability to multiple searchers if this the training time is to be considered a “one-time cost”. (I struggled to understand this from the paper – perhaps the authors could add a sentence or two to highlight how the training applies to multiple different searches). At the moment, the value for T1 isn’t even reported, making it very hard to interpret this table and compare the times for different architectures. This is important, because a key claim of the paper is that the method is faster than AG-Net (the accuracy performance in Table 5 is inferior). 7. There is a concern that the metrics for DARTS, TE-NAS, NASNET-A were all taken from respective papers. This means that only AG-Net follows the protocol specified in the paper. Since the gpus are different and the experimental procedure has changed, it seems unreasonable to have a table that reports numbers as though they can be directly compared. [R1] Yang, Longxing, et al. "Sweet Gradient matters: Designing consistent and efficient estimator for Zero-shot Architecture Search." Neural Networks 168 (2023): 237-255. [R2] Ye, Peng, et al. "b-darts: Beta-decay regularization for differentiable architecture search." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [R3] Soro, Bedionita, and Chong Song. "Enhancing Differentiable Architecture Search: A Study on Small Number of Cell Blocks in the Search Stage, and Important Branches-Based Cells Selection." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [R4] Li, Muchen, et al. "GraphPNAS: Learning Probabilistic Graph Generators for Neural Architecture Search." Transactions on Machine Learning Research (2023). [R5] Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang. Does unsupervised architecture representation learning help neural architecture search? In Advances in Neural Information Processing Systems (NeurIPS), 33:12486– 12498, 2020. [R6] Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. Dr{nas}: Dirichlet neural architecture search. In Proc. International Conference on Learning Representations, 2021. Broader Impact Concerns: None ================================================== Review 3: Summary: The paper proposes a neural architecture search (NAS) approach called Multi-conditioned Graph Diffusion for NAS (DiNAS). It uses discrete conditional graph diffusion processes to generate high-performing neural network architectures. It introduces a multi-conditioned classifier-free guidance technique to impose multiple constraints (e.g. accuracy, latency) jointly when generating architectures. This allows searching for architectures that meet multiple objectives. The method is completely differentiable, requiring only a single model training. It achieves promising results on six NAS benchmarks, yielding novel architectures very quickly (less than 0.2 seconds per architecture). Experiments demonstrate the approach generates novel, unique architectures. The method also generalizes well, as shown by experiments on ImageNet. Strengths and Weaknesses: ### Strengths * Single model training is more efficient than methods requiring separate predictor networks. Most generative NAS methods need an additional predictor model to estimate performance of generated architectures. Training two models is less efficient. By using classifier-free guidance, this method trains only the generator model itself, reducing total training costs. * Multi-conditioned guidance technique is flexible for imposing multiple constraints, an important capability. Guiding the search with multiple objectives like accuracy and efficiency is crucial for real applications but not well supported in previous generative NAS techniques. The proposed multi-conditioned guidance provides an elegant way to flexibly impose multiple conditions, enabling robust multi-objective search. * Fast architecture generation rate enables rapid exploration of broad search spaces. Generating just under 0.2 seconds per architecture is orders of magnitude faster than previous approaches that require full architecture evaluation. This rapid rate allows searching much larger spaces, finding high-performing architectures faster than other methods. * Achieves state-of-the-art or comparable results across several NAS benchmark tasks. Solid performance matching or exceeding prior state-of-the-art on major NAS benchmarks demonstrates this method's effectiveness. Competitive accuracy to more complex techniques shows the power of the proposed graph diffusion approach. ### Weaknesses * The method does not always achieve absolute state-of-the-art accuracy, slightly underperforms some other methods. While overall results are highly competitive, top accuracies fall just short of 1-2 other approaches in some benchmarks. There may still be room to improve architecture quality further through guidance tuning or alternate diffusion formulations. * Choice of guidance scale parameter affects diversity/quality tradeoff and the optimal values are likely dataset-specific. The hyperparameter $\gamma$ balances diversity and quality during sampling. Best setting probably depends greatly on factors like search space size and conditioning constraints. More analysis could better characterize how to set $\gamma$ robustly. * The top-1/5 errors on ImageNet are slightly higher than some strong baselines. Specifically, ImageNet results were not superior to all baselines as with CIFAR experiments. Factors such as architecture conditioning strengths may differ when generalizing to very large datasets. * Does not provide intuitive case studies to fully demonstrate advantages. While numerical results show accuracy and efficiency gains, the paper lacks detailed case studies to more intuitively showcase the benefits versus other methods. Providing examples that compare architectures found across different search techniques could give better insights into real-world improvements. More qualitative analysis would strengthen overall motivation for the proposed approach. Requested Changes: See weaknesses above. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: The reviewers were unanimous that this paper should be accepted as is by TMLR, congratulations! The original reviews outlined many suggestions and questions. The authors replied to almost all of them (very quickly) and provided new comparisons and ablation studies. The authors also clarified several important aspects of the empirical results. The reviewers' recommendations did not surface new elements. ==================================================
# Momentum-Based Policy Gradient With Second-Order Information Saber Salehkaleybar *s.salehkaleybar@liacs.leidenuniv.nl* Leiden Institute of Advanced Computer Science Leiden University Sadegh Khorasani sadegh.khorasani@epfl.ch School of Computer and Communication Sciences EPFL Negar Kiyavash negar.kiyavash@epfl.ch College of Management of Technology EPFL Niao He niao.he@inf.ethz.ch Department of Computer Science ETH Zurich Patrick Thiran *patrick.thiran@epfl.ch* School of Computer and Communication Sciences EPFL Reviewed on OpenReview: *https: // openreview. net/ forum? id= 2bURaH6RN8* ## Abstract Variance-reduced gradient estimators for policy gradient methods have been one of the main focus of research in the reinforcement learning in recent years as they allow acceleration of the estimation process. We propose a variance-reduced policy-gradient method, called SHARP, which incorporates second-order information into stochastic gradient descent (SGD) using momentum with a time-varying learning rate. SHARP algorithm is parameter-free, achieving ϵ-approximate first-order stationary point with O(ϵ −3) number of trajectories, while using a batch size of O(1) at each iteration. Unlike most previous work, our proposed algorithm does not require importance sampling, which can compromise the advantage of variance reduction process. Moreover, the variance of estimation error decays at a fast rate of O(1/t2/3), where t is the number of iterations. Our extensive experimental evaluations show the effectiveness of the proposed algorithm on various control tasks and its advantage over the state of the art in practice. ## 1 Introduction Reinforcement Learning (RL) has achieved remarkable success in solving various complex tasks in games (Silver et al., 2017), autonomous driving (Shalev-Shwartz et al., 2016), and robot manipulation (Deisenroth et al., 2013), among other fields. In the RL setting, an agent tries to learn the best actions by interacting with the environment and evaluating its performance based on reward signals. More specifically, in Markov Decision Processes (MDPs), the mathematical formalism for RL, after taking an action, the state changes according to a transition probability model and a reward signal is received based on the action taken and the current state. The main goal of the learner is to find a policy that maps the state space to the action space, maximizing the expected cumulative rewards as the objective function. Policy gradient methods (Sutton et al., 2000) are often used for training policies in MDPs, especially for high-dimensional continuous action space. In policy gradient methods, the policy is parameterized by an unknown parameter θ and it is directly optimized using the stochastic first-order gradient of cumulative rewards as it is infeasible to compute the gradient exactly. REINFORCE (Williams, 1992), PGT (Sutton et al., 2000), and GPOMDP (Baxter & Bartlett, 2001) are some classical methods that update the policy by applying a stochastic gradient ascent step. These methods generally require a large number of trajectories due to the large variance of gradient estimates, stemming from randomness of transitions over trajectories. In the RL literature, several methods have been proposed to reduce the variance in policy gradient methods. For instance, Sutton et al. (2000) proposed to consider a baseline in order to reduce variance of gradient estimation. Konda & Tsitsiklis (2000) presented an actor-critic algorithm that estimates the value function and uses it to mitigate the effect of large variance. Schulman et al. (2016) proposed GAE to control both bias and variance by exploiting a temporal difference relation for the advantage function approximation. More recent work such as TRPO (Schulman et al., 2015) considers a Kullback-Leibler (KL) divergence penalty term in order to ensure that the updated policy remains close to the current policy or PPO (Schulman et al., 2017) that uses clipped surrogate objective function to achieve the same goal. In practice, it has been shown that these algorithms have better performance compared with vanilla policy gradient method. Most stochastic gradient based policy methods need O(ϵ −4) trajectories in order to achieve ϵ-approximate first-order stationary point (ϵ-FOSP) of the objective function J(θ), i.e., E[∥∇J(θ)∥] ≤ ϵ (Ghadimi & Lan, 2013; Shani et al., 2020). In recent years, there have been several attempts to reduce the variance of policy gradient by adapting variance reduction techniques proposed previously in a supervised learning context (a list of previous work is given in Section 4). These methods can achieve sample complexity of O(ϵ −3) in the RL setting and this rate is optimal in stochastic optimization under some mild assumptions on the objective function and stochastic gradients (Arjevani et al., 2020). On the one hand, in supervised learning problems, the objective function is oblivious, in the sense that the randomness that selects the loss function does not depend on the parameters that are to be optimized. On the other hand, in the RL setting, the distribution over trajectories is non-stationary and changes over time as the parameters of policy are updated. To resolve this issue, most previous work use importance sampling techniques, which may degrade the effectiveness of the variance reduction process (Yang et al., 2022). Moreover, to analyze the convergence rate of these methods, a strong assumption on the variance of importance sampling weights is made, which may not hold in the RL setting. Most importantly, these methods often need huge batch sizes, which is highly undesirable in practice. In this paper, we propose the Stochastic Hessian Aided Recursive Policy gradient (SHARP) algorithm, which incorporates second-order information into SGD with momentum. Our main contributions are summarized as follows: - Under some common regularity assumptions on the parameterized policy, SHARP reaches ϵ-FOSP with a sample complexity of O(ϵ −3). Moreover, our algorithm does not use importance sampling techniques. As a result, we can relax the strong additional assumptions on importance sampling weights, which are customary in the literature. - The batch size of SHARP is O(1) and it does not require checkpoints, thanks to the use of a second-order term in the updates and time-varying learning rate and momentum weight. - SHARP is parameter-free in the sense that the initial learning rate and momentum weight do not depend on the parameters of the problem. Moreover, the variance of the estimation error decays with a rate of O(1/t2/3), where t is the number of iterations. - Our experimental results show that SHARP outperforms the state of the art on various control tasks, with remarkable performance in more complex environments. The rest of this paper is organized as follows: In Section 2, we define the problem and provide some notations and background on variance reduction methods in supervised learning. In Section 3, we describe the proposed algorithm, SHARP, and we analyze its convergence rate. In Section 4, we give a summary of previous work and discuss how SHARP differs from them. In Section 5, we evaluate the performance of SHARP against the related work experimentally. Finally, we conclude the paper in Section 6. ## 2 Preliminaries 2.1 Notations And Problem Definition Consider a discrete-time MDP M = {S, A*, P, R, γ, ρ*} that models how an agent interacts with a given environment. S and A are state space and action space, respectively. P(s ′|*s, a*) denotes the probability of transiting to state s ′from s after taking action a. The reward function R returns reward r(*s, a*) when action a is taken in state s. Parameter γ ∈ (0, 1) denotes the discount factor and ρ is the distribution of starting state. The actions are chosen according to policy π where π(a|s) is the probability of taking action a for a given state s. Here, we assume that the policy is parameterized with a vector θ ∈ R d and use shorthand notation πθ for πθ(a|s). For a given time horizon H, according to policy πθ, the agent observes a sequence of state-action pairs τ = (s0, a0, · · · , sH−1, aH−1) called a trajectory. The probability of observing a trajectory τ for a given policy πθ is: $$p(\tau|\pi_{\theta})=\rho(s_{0})\prod_{h=0}^{H-1}P(s_{h+1}|s_{h},a_{h})\pi_{\theta}(a_{h}|s_{h}).$$ $$\left(1\right)$$ $\left(\frac{1}{2}\right)^{n-1}\left(\delta_{n},a_{n}\right)$ and the $\alpha$-function is $\left(\frac{1}{2}\right)^{n-1}$. $$\left(2\right)$$ The discounted cumulative reward for a trajectory τ is defined as R(τ ) := PH−1 h=0 γ hr(sh, ah) and the expected return for a policy πθ is: $$J(\theta):=\mathbb{E}_{\tau\sim\pi_{\theta}}[R(\tau)].$$ [R(τ )]. (2) The main goal in policy-based RL is to find θ ∗ = arg maxθ J(θ). As in many applications, J(θ) is non-convex and we settle instead for obtaining ϵ-FOSP ˆθ such that E[∥∇J( ˆθ)∥] ≤ ϵ. It can be shown that: $$\nabla J(\theta)=\mathbb{E}\Bigg[\sum_{h=0}^{H-1}\Psi_{h}(\tau)\nabla\log\pi_{\theta}(a_{h}|s_{h})\Bigg],$$ $$\left({3}\right)$$ where Ψh(τ ) = PH−1 t=h γ tr(st, at). Therefore, for any trajectory τ , g(τ ; θ) := PH−1 h=0 Ψh(τ )∇ log πθ(ah|sh) is an unbiased estimator of ∇J(θ). The vanilla policy gradient updates θ as follows: $$\theta\leftarrow\theta+\eta g(\tau;\theta),$$ $$\left(4\right)$$ $\left(5\right)^3$ θ ← θ + ηg(τ ; θ), (4) where η is the learning rate. The Hessian matrix of J(θ) can be written as follows (Shen et al., 2019): $$\nabla^{2}J(\theta)=\mathbb{E}[\nabla\Phi(\theta;\tau)\nabla\log p(\tau|\pi_{\theta})^{T}+\nabla^{2}\Phi(\theta;\tau)],$$ where Φ(θ; τ ) = PH−1 h=0 PH−1 t=h γ tr(st, at)log πθ(ah|sh). For a given trajectory τ , B(τ ; θ) := ∇Φ(θ; τ )∇ log p(τ |πθ) T + ∇2Φ(θ; τ ) is an unbiased estimator of the Hessian matrix. ## 2.2 Variance Reduced Methods For Gradient Estimation Variance reduced methods for estimating the gradient vector were originally proposed for the stochastic optimization setting: $$\min_{\theta\in\mathbb{R}^{d}}\mathbb{E}_{z\sim p(z)}[f(\theta,z)],\tag{1}$$ where a sample z is drawn from distribution p(z) and f(·, z) is commonly assumed to be smooth and nonconvex function of θ. This setting is mainly considered in supervised learning contexts where θ corresponds to the parameters of the training model and z = (*x, y*) is the training sample, with x the feature vector of the sample and y the corresponding label. In this setting, the distribution p(z) is invariant with respect to parameter θ. $$({\mathfrak{f}}{\mathfrak{o}})$$ Algorithm 1 Common framework in variance reduction methods 1: for t = 0, · · · , T − 1 do 2: $$h_{t}=\left\{\begin{array}{ll}\dfrac{1}{|B_{chech}|}\sum_{z\in B_{chech}}\nabla f(\theta_{t},z)&\text{if}t\equiv0\pmod{Q},\\ h_{t-1}+\dfrac{1}{|B|}\sum_{z\in B}\left(\nabla f(\theta_{t},z)-\nabla f(\theta_{t-1},z)\right),&\text{otherwise}.\end{array}\right.\tag{1}$$ $$\mathbf{r})$$ $$\mathbf{\Sigma}^{1}$$ $${\mathrm{ly~from~}}\{0,\cdots,T-1\}$$ 3: θt+1 ← θt − ηht 4: **end for** 5: Return θt with t chosen randomly from {0, · · · , T − 1} The common approach for reducing the variance of gradient estimation is to reuse past gradient vectors. The pseudo-code for this general framework for variance reduction is given in Algorithm 1. After every pre-determined number of iterations Q, there is a checkpoint to obtain an unbiased estimate of the gradient, denoted by ht, at the current parameter θt by taking a batch of samples B*check*. Between any two consecutive checkpoints, the gradient at the parameter θt is estimated according to equation 8 by taking a batch of samples B drawn from p(z). The above framework appeared in several previous variance reduction methods in stochastic optimization such as SARAH (Nguyen et al., 2017) and SPIDER (Fang et al., 2018). Zhang (2021) discusses how to choose the size of batches and the parameters Q and η. In fact, there is a trade-off between η and |B|. If a small batch size is used, then η is also required to be small. The two extremes are SpiderBoost (Wang et al., 2019) (|B| = O(ϵ −1), η = O(1)) and SARAH (Nguyen et al., 2017) (|B| = O(1), η = O(ϵ)). Very recently, Li et al. (2021) proposed PAGE, where in each iteration t, either a batch of samples is taken with probability pt to update the gradient or the previous estimate of the gradient is used with a small adjustment, with probability 1 − pt. In the context of RL, a sample z corresponds to a trajectory τ . Unlike supervised learning, the distribution of these trajectories depends on the parameters of the policy that generates them. Therefore, in the second term in the sum in equation 8, namely ∇f(θt−1, z), z (i.e., the trajectory τ in the RL context) is generated according to policy πθt while θt−1 is the parameter of the policy at the previous iteration. In the RL setting, importance sampling technique is commonly used to account for the distribution shift as follows: $$h_{t}=h_{t-1}+\frac{1}{|\mathcal{B}|}\sum_{\tau\in\mathcal{B}}g(\theta_{t};\tau)-w(\tau|\theta_{t},\theta_{t-1})g(\theta_{t-1};\tau),$$ $$({\mathfrak{g}})$$ h=0 πθt−1 πθt $$\begin{array}{l}{{\frac{-1}{(a_{h}\left|s_{h}\right.)}}}\\ {{\frac{-1}{(a_{h}\left|s_{h}\right.)}}}\end{array}$$ with the weights $w(\tau|\theta_t,\theta_{t-1})=\prod_{h}^{L}$. . As we shall see in Section 4, nearly all variance reduction approaches in RL employing the general framework of Algorithm 1 use an importance sampling technique. This could significantly degrade the performance of the approach as the gradient estimates depend heavily on these weights (Yang et al., 2022). Besides, these variance reduction methods often need giant batch sizes at checkpoints, which is not practical in the RL setting. Finally, the hyper-parameters of these approaches must be selected carefully as they often use non-adaptive learning rates. To resolve the issue of requiring huge batch-sizes, in the context of stochastic optimization, a variance reduction method called STORM (Cutkosky & Orabona, 2019) was proposed with the following update rule: $$\begin{array}{c}{{h_{t}=(1-\alpha_{t})h_{t-1}+\alpha_{t}\nabla f(\theta_{t},z_{t})+(1-\alpha_{t})(\nabla f(\theta_{t},z_{t})-\nabla f(\theta_{t-1},z_{t}))}}\\ {{\theta_{t+1}\leftarrow\theta_{t}-\eta_{t}h_{t},}}\end{array}$$ where zt is the sample drawn at iteration t and αt and ηt are the adaptive momentum weight and learning rate, respectively. Compared with SGD with momentum, the main difference in STORM is the correction term ∇f(θt, zt) − ∇f(θt−1, zt) in equation 10. Cutkosky & Orabona (2019) showed that by adaptively updating αt and ηt based on the norm of stochastic gradient in previous iterations, STORM can achieve $$\left(10\right)$$ Algorithm 2 The SHARP algorithm Input: Initial point θ0, parameters α0, η0, and number of iterations T 1: Sample trajectory τ0 with policy πθ0 2: v0 ← g(τ0; θ0) 3: θ1 ← θ0 + η0 v0 ∥v0∥ 4: for t = 1, · · · , T − 1 do 5: Sample bt ∼ U(0, 1) 6: θ b t ← btθt + (1 − bt)θt−1 7: Sample trajectories τt and τ b t with policies πθt and πθ b t , respectively 8: ηt ← η0 t 2/3, αt ← α0 t 2/3 9: vt ← (1 − αt)(vt−1 + B(τ b t ; θ b t )(θt − θt−1)) + αtg(τt; θt) 10: θt+1 ← θt + ηt vt ∥vt∥ 11: **end for** 12: Return θt with t chosen randomly from {0, · · · , T − 1} the same convergence rate as previous methods without requiring checkpoints nor a huge batch size. Later, a parameter-free version, called STORM+ (Levy et al., 2021), has been introduced using new adaptive learning rate and momentum weight. However, to adapt these methods in the RL setting, we still need to use importance sampling techniques because of the term ∇f(θt−1, zt). Recently, Tran & Cutkosky (2022) showed that the correction term can be replaced with a second-order term ∇2f(θt, zt)(θt − θt−1) by making the additional assumption that the objective function is second-order smooth. Besides, the above Hessian vector product can be computed in O(Hd) (similar to the computational complexity of obtaining the gradient vector) by executing Pearlmutter's algorithm (Pearlmutter, 1994). ## 3 The Sharp Algorithm In this section, we propose the SHARP algorithm, which incorporates second-order information into SGD with momentum and we provide a convergence guarantee. SHARP is described in Algorithm 2. At each iteration t, we draw sample bt from a uniform distribution in the interval [0, 1] (line 5) and next obtain θ b t as the linear combination of θt−1 and θt with coefficients 1 − bt and bt (line 6). In line 7, we sample trajectories τt and τ b t according to policies πθt and πθ b t , respectively. Afterwards, we update the momentum weight αt and the learning rate ηt (line 8) and then compute the estimate of gradient at time t, i.e., vt, using the Hessian vector product B(τ b t ; θ b t )(θt − θt−1) (please refer to Section 2.1 for the definition of B(τ ; θ)) and stochastic gradient g(τt; θt) (line 9). Finally we update θt based on a normalized version of vt in line 10. Note that in each iteration, we compute one Hessian vector product, which can be done with the same computational complexity of obtaining the gradient vector using Pearlmutter's algorithm (Pearlmutter, 1994). Therefore, the computational cost of SHARP per iteration is in the same order as the one for first-order methods such as REINFORCE (Williams, 1992). Remark 1 By choosing a point uniformly at random on the line between θt−1 and θt*, we can ensure that* B(τ b t ; θ b t )(θt − θt−1)) is an unbiased estimate of ∇J(θt) − ∇J(θt−1) (see equation 22 in Appendix A). As we mentioned before, in the context of stochastic optimization, Tran & Cutkosky (2022) used the second-order term ∇2f(θt, zt)(θt − θt−1), which is biased as the Hessian vector product is evaluated at the point θt. As a result, in order to provide the convergence guarantee, it is further assumed that the objective function is second-order smooth in (Tran & Cutkosky, 2022). Remark 2 *To give an intuition why the second-order term is helpful in the update in line 9, consider the* following error term: $$\epsilon_{t}=v_{t}-\nabla J(\theta_{t}).$$ ϵt = vt − ∇J(θt). (11) We can rewrite the above error term as follows: $$\begin{array}{c}{{\epsilon_{t}=(1-\alpha_{t})(v_{t-1}-\nabla J(\theta_{t})+B(\tau_{t}^{b};\theta_{t}^{b})(\theta_{t}-\theta_{t-1}))}}\\ {{\qquad\qquad+\alpha_{t}(g(\tau_{t};\theta_{t})-\nabla J(\theta_{t})).}}\end{array}$$ $$(12)$$ Now, for a moment, suppose that E[vt−1] = E[∇J(θt−1)] *(with total expectation on both sides). Then,* $$\mathbf{\tau}_{t}-\theta_{t-1})]=0.$$ $$(13)^{\frac{1}{2}}$$ E[vt−1 − ∇J(θt) + B(τ b t ; θ b t )(θt − θt−1)] = 0. (13) As v0 is an unbiased estimate of gradient at θ0, we can easily show by induction that according to above equation, E[vt] = E[∇J(θt)] *for any* t ≥ 0. In the next part, we provide a theoretical guarantee on the convergence rate of SHARP algorithm. ## 3.1 Convergence Analysis In this part, we analyze the convergence rate of Algorithm 2 under bounded reward function and some regularity assumptions on the policy πθ. Assumption 1 (Bounded reward) For ∀s ∈ S, ∀a ∈ A, |R(s, a)| < R0 where R0 > 0 *is some constant.* Assumption 2 (Parameterization regularity) There exist constants G, L > 0 *such that for any* θ ∈ R d and for any s ∈ S, a ∈ A: (a) ∥∇ log πθ(a|s)∥ ≤ G, (b) ∥∇2log πθ(a|s)∥ ≤ L. Assumptions 1 and 2 are common in the RL literature (Papini et al., 2018; Shen et al., 2019) to analyze the convergence of policy gradient methods. Under these assumptions, the following upper bounds can be derived on E[∥g(τ ; θ) − ∇J(θ)∥ 2] and E[∥B(τ ; θ) − ∇2J(θ)∥ 2]. Lemma 1 (Shen et al. (2019)) *Under Assumptions 1 and 2:* $$\begin{array}{l}{{\mathbb{E}[\|g(\tau;\theta)-\nabla J(\theta)\|^{2}]\leq\sigma_{g}^{2}}}\\ {{\mathbb{E}[\|B(\tau;\theta)-\nabla^{2}J(\theta)\|^{2}]\leq\sigma_{B}^{2},}}\end{array}$$ $$(14)$$ $where\ \sigma_g^2=\frac{G^2R_0^2}{(1-\gamma)^4}\ and\ \sigma_B^2=\frac{H^2G^4R_0^2+L^2R_0^2}{(1-\gamma)^4}$. Based on these bounds, we can provide the following guarantee on the convergence rate of SHARP algorithm. All proofs are provided in the appendix. Theorem 1 Suppose that the initial momentum weight α0 ∈ (2/3, 1] and initial learning rate η0 > 0. Under Assumptions 1 and 2, Algorithm 2 guarantees that: $$\mathbb{E}\left[\frac{1}{T}\sum_{t=1}^{T}\|\nabla J(\theta_{t})\|\right]\leq\frac{8\sqrt{C}+9C_{J}/\eta_{0}}{T^{1/3}}+\frac{6\sigma_{B}\eta_{0}}{T^{2/3}},\tag{15}$$ where C = 3α0(48σ 2 Bη 2 0/α0 + (6α0 + 1/α0)σ 2 g )/(3α0 − 2) and CJ = R0/(1 − γ). Corollary 1 p The right hand side of equation 15 is dominated by the first term. If we set η0 *in the order of* CJ /σB, then the number of trajectories for achieving ϵ*-FOSP would be* O(1 (1−γ) 2ϵ 3 )*, where we assume that* the time horizon H *is set in the order of* 1/(1 − γ). Remark 3 *Let us rewrite equation 12 as follows:* $$\epsilon_{t}=(1-\alpha_{t})\epsilon_{t-1}+\alpha_{t}U_{t}+(1-\alpha_{t})W_{t},$$ where Ut = (g(τt; θt) − ∇J(θt)) and Wt = B(τ b t ; θ b t )(θt − θt−1) − (∇J(θt) − ∇J(θt−1)). As g(τt; θt) and B(τ b t ; θ b t )(θt − θt−1) are unbiased estimates of ∇J(θt) and ∇J(θt) − ∇J(θt−1)*, respectively, we have* E[Ut] = 0 and E[Wt] = 0, which in turn implies that E[⟨ϵt−1, Ut⟩] = E[⟨ϵt−1, Wt⟩] = 0*. After some simple manipulations* (see Appendix A for more details), we have $$\mathbb{E}[\|\epsilon_{t}\|^{2}]\leq(1-\alpha_{t})\mathbb{E}[\|\epsilon_{t-1}\|^{2}]+2\alpha_{t}^{2}\mathbb{E}[\|U_{t}\|^{2}]+2\mathbb{E}[\|W_{t}\|^{2}].$$ Now, according to Lemma 1, we know that E[∥Ut∥ 2] ≤ σ 2 g . Moreover, it can be shown that ∥B(τ ; θ)∥ ≤ σB for any trajectory τ (Shen et al., 2019). Therefore, ∥Wt∥ ≤ 2σB∥θt − θt−1∥*. Moreover, the normalized update of* SHARP yields that ∥θt − θt−1∥ 2 = η 2 t−1 . Hence, $$\mathbb{E}[\|\epsilon_{t}\|^{2}]\leq(1-\alpha_{t})\mathbb{E}[\|\epsilon_{t-1}\|^{2}]+2\alpha_{t}^{2}\sigma_{g}^{2}+8\sigma_{B}^{2}\eta_{t-1}^{2}$$ . As αt and ηt have the same dependency on t, along the iterations of the SHARP algorithm, the following inequality holds for any t ≥ 1: E[∥ϵt∥ 2] ≤ (1 − αt)E[∥ϵt−1∥ 2] + O(η 2 t ). (16) Therefore, the variance of the estimation error decays with rate O(1/t2/3) *(see Appendix B for the proof). To* the best of our knowledge, existing variance reduction methods only guarantee the decay of cumulative variance. This appealing property of SHARP is largely due to the use of unbiased Hessian-aided gradient estimator and normalized gradient descent. Moreover, as a byproduct of these desirable properties, our convergence analysis turns out to be more simple, compared to existing work (Cutkosky & Orabona, 2019; Tran & Cutkosky, 2022). This could be of independent interest for better theory of variance-reduced methods. Remark 4 The SHARP algorithm is parameter-free in the sense that α0 and η0 *are constants that do not* depend on other parameters in the problem. Therefore, for any choice of 2/3 < α0 ≤ 1 and η0 > 0*, we can* guarantee convergence to ϵ*-FOSP with the sample complexity of* O(ϵ −3). However, in practice, it is desirable to tune these constants to have smaller constants in the numerators of convergence rates in equation 15. For instance, σB *might be large in some RL settings and we control the constant in the first term on the right* hand side of equation 15 by tuning η0. *It is noteworthy that STORM+ is also parameter-free, but it requires* adaptive learning rate and momentum weight that depend on stochastic gradients in previous iterations. Remark 5 Regarding the dependency on ϵ*, in the context of stochastic optimization, Arjevani et al. (2020)* have shown that under some mild assumptions on the objective function and stochastic gradient, the rate of O(1/ϵ3) is optimal in order to obtain ϵ-FOSP, and cannot be improved with stochastic p*-th order methods for* p ≥ 2. ## 4 Related Work In recent years, several variance-reduced methods have been proposed in order to accelerate the existing PG methods. Papini et al. (2018) and Xu et al. (2020) proposed SVRPG algorithm based on SVRG (Johnson & Zhang, 2013), with sample complexity of O(1/ϵ4) and O(1/ϵ10/3), respectively. This algorithm requires importance sampling techniques as well as the following additional assumption for guaranteeing convergence to ϵ-FOSP: - Bounded variance of importance sampling weights: For any trajectory τ , it is assumed that: $$Var\Bigg{(}\frac{p(\tau|\pi_{\theta_{1}})}{p(\tau|\pi_{\theta_{2}})}\Bigg{)}\leq W,\qquad\forall\theta_{1},\theta_{2}\in\mathbb{R}^{d},$$ where W < ∞ is a constant. The above assumption is fairly strong as the importance sampling weight could grow exponentially with horizon length H (Zhang et al., 2021). In order to remove importance sampling weights, Shen et al. (2019) $$(17)$$ | Method | SC | |B| | |Bcheck| Checkpoint | IS | Further Assump. | | | |---------------------------------|-------------|----------------------------|-----------------------|-----------------------|------------------------|------------------------|------------------------| | 1 | 1 | | | | | | | | SVRPG (Xu et al., 2020) | O( | 1 10/3 ) O( 4/3 ) O( 4/3 ) | Needed | Needed | Assump. in equation 17 | | | | ϵ | ϵ | ϵ | | | | | | | 1 | 1 | 1 | | | | | | | HAPG (Shen et al., 2019) | O( 3 ) | O( | ) | O( 2 ) | Needed | Not needed | - | | ϵ | ϵ | ϵ | | | | | | | 1 | 1 | 1 | | | | | | | SRVR-PG (Xu et al., 2019) | O( 3 ) | O( √ ) | O( | ) | Needed | Needed | Assump. in equation 17 | | ϵ | ϵ | ϵ | | | | | | | 1 | | | | | | | | | HSPGA (Pham et al., 2020) | O( 3 ) | O(1) | - | Not needed | Needed | Assump. in equation 17 | | | ϵ | | | | | | | | | MBPG (Huang et al., 2020) | O˜( 1 ϵ 3 ) | O(1) | - | Not needed | Needed | Assump. in equation 17 | | | VRMPO (Yang et al., 2022) | O( 1 3 ) | O( 1 ) | O( 1 2 ) | Needed | Not needed | - | | | ϵ | ϵ | ϵ | | | | | | | VR-BGPO (Huang et al., 2022) | O( 1 3 ) | O(1) | - | Not Needed | Needed | Assump. in equation 17 | | | ϵ | | | | | | | | | PAGE-PG (Gargiani et al., 2022) | O( 1 3 ) | O(1) | O( 1 2 ) | Needed | Needed | Assump. in equation 17 | | | ϵ | ϵ | | | | | | | | 1 | | | | | | | | | This paper | O( 3 ) | O(1) | - | Not needed Not needed | - | | | | ϵ | | | | | | | | Table 1: Comparison of main variance-reduced policy gradient methods to achieve ϵ-FOSP based on sample complexity (SC), batch size (|B|), batch size at checkpoints (|B*check*|), and the need for checkpoints, importance sampling (IS), and additional assumptions. proposed the HAPG algorithm, which uses second-order information and achieves better sample complexity of O(1/ϵ3). However, it still needs checkpoints and large batch sizes of |B| = O(1/ϵ), and |B*check*| = O(1/ϵ2). In Table 1, we compare the main variance-reduced policy gradient methods achieving ϵ-FOSP in terms of sample complexity and batch size1. In this table, after HAPG (Shen et al., 2019), all the proposed variance reduction methods achieve a similar sample complexity. The orders of batch sizes are also the same as in HAPG. Xu et al. (2019) proposed SRVR-PG, and used stochastic path-integrated differential estimators for variance reduction. This algorithm uses important sampling weights and the required batch sizes are in the order of |B| = O(1/ √ϵ) and |B*check*| = O(1/ϵ). Later, Pham et al. (2020) proposed HSPGA by adapting the SARAH estimator for reducing the variance of REINFORCE. HSPGA still needs importance sampling weights, but the batch size is reduced to O(1). Huang et al. (2020) proposed three variants of momentum-based policy gradient (called MBPG), which are based on the STORM algorithm (Cutkosky & Orabona, 2019). Thus, the required batch size is O(1), similarly to STORM. However, it still needs to use importance sampling weights. (Zhang et al., 2020) used a similar update for the estimate of stochastic gradient as the one in SHARP for Frank-Wolfe type algorithms in the context of constrained optimization. Later, Zhang et al. (2021) proposed TSIVR-PG with a gradient truncation mechanism in order to resolve some issues pertaining to the use of importance sampling weights. In their convergence analysis, they are restricted to soft-max policy with some specific assumptions on the parameterization functions. More recently, two methods using mirror-descent algorithm based on Bregman divergence, called VR-BGPO (Huang et al., 2022) and VR-MPO (Yang et al., 2022) have been proposed, achieving ϵ-FOSP if the mirror map in Bregman divergence is the l2-norm. Very recently, based on PAGE (Li et al., 2021), Gargiani et al. (2022) proposed the PAGE-PG algorithm which takes a batch of O(ϵ −2) samples for updating the parameters with probability pt or reuse the previous estimate gradient with a small adjustment, with probability 1 − pt. The proposed algorithm requires importance sampling weights and thus the additional assumption in equation 17 to guarantee convergence to ϵ-FOSP with a sample complexity of O(ϵ −3). 1Please note that the sample complexity also depends on the other parameters such as horizon length H and discount factor γ. Here, we just mention the dependency of sample complexity on ϵ. There are few recent work on the global convergence of policy gradient methods. For instance, Liu et al. (2020) showed global convergence of policy gradient, natural policy gradient, and their variance reduced variants, in the case of positive definite Fisher information matrix of the policy. Chung et al. (2021) studied the impact of baselines on the learning dynamics of policy gradient methods and showed that using a baseline minimizing the variance can converge to a sub-optimal policy. Recently, Ding et al. (2022) studied the soft-max and the Fisher non-degenerate policies, and showed that adding a momentum term improves the global optimality sample complexities of vanilla PG methods by O˜(ϵ −1.5) and O˜(ϵ −1), respectively. The main methods discussed above are summarized in Table 1. For each method, we mention whether it needs checkpoints and/or importance sampling weights.2 All of them require Assumptions 1 and 2. In the last column, additional assumptions besides these two are listed for each method. Comparing the sample complexity of the SHARP algorithm with previous work, note that all the algorithms (including SHARP) under SVRPG in Table 1 achieve rates of O(1/ϵ3) or O˜(1/ϵ3). Without any further assumption, SHARP is the only method that requires no checkpoints, no importance sampling weights, and has a batch size of the order of O(1). As we will see in the next section, besides these algorithmic advantages, it has remarkable performance compared to the state of the art in various control tasks. ## 5 Experiments In this section, we evaluate the performance of the proposed algorithm and compare it with previous work for control tasks in MuJoCo simulator (Todorov et al., 2012) which is a physical engine, suitable for simulating robotic tasks with good accuracy and speed in the RL setting. We implemented SHARP in the Garage library (garage contributors, 2019) as it allows for maintaining and integrating it in future versions of Garage library for easier dissemination. We utilized a Linux server with Intel Xeon CPU E5-2680 v3 (24 cores) operating at 2.50GHz with 377 GB DDR4 of memory, and Nvidia Titan X Pascal GPU. The implementation of SHARP is available as supplementary material. We considered the following four control tasks with continuous action space: Reacher, Walker, Humanoid, and Swimmer. In Reacher, an arm with two degrees of freedom aims at reaching a target point in a two-dimensional plane. A higher reward is attained if the arm gets closer to the target point. In Walker, a humanoid walker tries to move forward in a two dimensional space, i.e., it can only fall forward or backward. The state contains velocities of different parts of body and joint angles and the actions represent how to move legs and foot joints. The reward signal is based on the current velocity of the agent. In Humanoid, a three-dimensional bipedal robot is trained to walk forward as fast as possible, without falling over. The state space is 376-dimensional, containing the position and velocity of each joint, the friction of the actuator, and contact forces. The action space is a 17-dimensional continuous space. Finally, in Swimmer, the agent is in a two-dimensional pool and the goal is to move as fast as possible in the right direction. We compared the SHARP algorithm with PG methods that provide theoretical guarantees for converging to an approximate FOSP: PAGE-PG (Gargiani et al., 2022), IS-MBPG (Huang et al., 2020) which is based on STORM, HAPG (Shen et al., 2019) which does not require IS weights, and VR-BGPO (Huang et al., 2022) which is a mirror descent based algorithm. We also considered REINFORCE (Sutton et al., 2000) as a baseline algorithm. There are some other approaches in the literature with theoretical guarantees such as VRMPO (Yang et al., 2022), and STORM-PG (Yuan et al., 2020) but the official implementations are not publicly available and our request to access the code from the authors remained unanswered. For each algorithm, we used the same set of Gaussian policies parameterized with neural networks having two layers of 64 neurons each. Baselines and environment settings (such as maximum trajectory horizon, and reward intervals) were considered the same for all algorithms. We chose a maximum horizon of 500 for Walker, Swimmer, and Humanoid and 50 for Reacher. More details about experiments are provided in Appendix E. 2To be more precise, although PAGE-PG, has no fixed checkpoints, it takes a batch of O(ϵ−2) to get an unbiased estimate of the gradient with probability pt. Therefore, in this sense, it requires checkpoints. ![9_image_0.png](9_image_0.png) Figure 1: Comparison of SHARP with other variance reduction methods on four control tasks. In the literature, it has been observed that most PG methods are quite sensitive to parameter initialization or random seeds (Henderson et al., 2018). Hence, it might be challenging in some cases to reproduce previous results. Moreover, it is not clear how to compare methods in terms of performance (e.g., the average episode return) and robustness (such as the standard deviation (STD) of return) at the same time. To resolve the above issues, we considered the performance-robustness (PR) metric proposed in (Khorasani et al., 2023), capturing both the average return and STD of return of an algorithm. In particular, for any algorithm A, after observing t state-actions pairs (which we call system probes), the lower bound of the confidence interval 90 percent of average return over n runs of the algorithm (denoted by LCIA(*n, t*)) is computed.3 The PR metric is defined by averaging LCIA(*n, t*) over all system probes t = 1, · · · , T as follows: $$P R_{A}(n)={\frac{1}{T}}\sum_{t=1}^{T}L C I_{A}(n,t),$$ $$(18)$$ LCIA(*n, t*), (18) where T is maximum number of system probes. We used grid search to tune the hyper-parameters of all the considered algorithms. For the algorithms (except SHARP), the search space for each hyper-parameter was chosen based on the one from the original papers. For each configuration of the hyper-parameters, we ran each algorithm A, five times and computed P RA(5). We selected the configuration which maximized P RA(5) and then reported P RA(10) of each algorithm for the selected configuration based on 10 different runs in Table 2. SHARP achieved the highest PR in all environments compared with the other algorithms. In Appendix D, we also reported the PR metric by considering the upper bounds of confidence intervals instead of lower bounds (i.e., the "best-case" performance 3As recommended in (Agarwal et al., 2021), for n = 10 available runs, we used percentile bootstrap to compute 90% confidence interval with 2000 number of bootstrap samples. | | Reacher | Walker | Humanoid | Swimmer | |-----------------------|-----------|----------|------------|-----------| | HAPG | -25.30 | 94.95 | 137.59 | 68.22 | | IS-MBPG | -22.45 | 175.58 | 164.55 | 26.75 | | PAGE-PG | -18.67 | 243.63 | 335.24 | 17.65 | | REINFORCE | -30.31 | 51.03 | 140.06 | 17.84 | | VR-BGPO | -16.15 | 313.42 | 393.90 | 37.92 | | SHARP (our algorithm) | -10.97 | 323.63 | 415.92 | 141.42 | Table 2: Comparison of SHARP with other variance-reduced methods in terms of PR. In each environment, the highest PR is in bold. rather than the "worst-case" performance). Still, SHARP outperforms other algorithms in all environments. This is also evident from Figure 1 where the confidence interval of SHARP is fairly tight. We considered the confidence interval of the performance (average return) to show the sensitivity of an algorithm to random seeds. As can be seen in Figure 1, SHARP not only achieves the highest average return after 10 million system probes but also has a relatively small confidence interval. ## 6 Conclusion We proposed a variance-reduced policy-gradient method, which incorporates second-order information, i.e., Hessian vector product, into SGD with momentum. The Hessian vector product can be computed with an efficiency that is similar to that of obtaining the gradient vector. Therefore, the computational complexity per iteration of the proposed algorithm, SHARP, remains in the same order as first-order methods. More importantly, using the second-order correction term enables us to obtain an estimate of the gradient and to completely bypass importance sampling. Moreover, the batch size is O(1) and there is no need for checkpoints, which makes the algorithm appealing in practice. Under some regularity assumptions on the parameterized policy, we showed that it achieves ϵ-FOSP with sample complexity of O(ϵ −3). SHARP is parameter-free in the sense that the initial learning rate and momentum weight do not depend on the parameters of the problem. Experimental results show its remarkable performance in various control tasks, especially in some complex environments, such as Humanoid, compared to the state of the art. ## Acknowledgments This research was in part supported by the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40_180545 and Swiss SNF project 200021_204355/1. ## References Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. In Advances in Neural Information Processing Systems, volume 34, pp. 29304–29320, 2021. Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Ayush Sekhari, and Karthik Sridharan. Secondorder information in non-convex stochastic optimization: Power and limitations. In *Conference on Learning* Theory, pp. 242–299. PMLR, 2020. Jonathan Baxter and Peter L Bartlett. Infinite-horizon policy-gradient estimation. *Journal of Artificial* Intelligence Research, 15:319–350, 2001. Wesley Chung, Valentin Thomas, Marlos C Machado, and Nicolas Le Roux. Beyond variance reduction: Understanding the true impact of baselines on policy optimization. In *International Conference on Machine* Learning, pp. 1999–2009. PMLR, 2021. Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex sgd. In Advances in Neural Information Processing Systems, volume 32, 2019. Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):388–403, 2013. Yuhao Ding, Junzi Zhang, and Javad Lavaei. On the global optimum convergence of momentum-based policy gradient. In *International Conference on Artificial Intelligence and Statistics*, pp. 1910–1934. PMLR, 2022. Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In *Advances in Neural Information Processing Systems*, volume 31, 2018. The garage contributors. Garage: A toolkit for reproducible reinforcement learning research. https: //github.com/rlworkgroup/garage, 2019. Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler Summers, and John Lygeros. Page-pg: A simple and loopless variance-reduced policy gradient method with probabilistic gradient estimation. In International Conference on Machine Learning, 2022. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. *SIAM Journal on Optimization*, 23(4):2341–2368, 2013. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Feihu Huang, Shangqian Gao, Jian Pei, and Heng Huang. Momentum-based policy gradient methods. In International Conference on Machine Learning, pp. 4422–4433. PMLR, 2020. Feihu Huang, Shangqian Gao, and Heng Huang. Bregman gradient policy optimization. In International Conference on Learning Representations, 2022. Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In *Advances in Neural Information Processing Systems*, volume 26, pp. 315–323, 2013. Sadegh Khorasani, Saber Salehkaleybar, Negar Kiyavash, Niao He, and Matthias Grossglauser. Efficiently escaping saddle points for non-convex policy optimization. *arXiv preprint arXiv:2311.08914*, 2023. Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In *Advances in Neural Information Processing* Systems, pp. 1008–1014, 2000. Kfir Levy, Ali Kavis, and Volkan Cevher. Storm+: Fully adaptive sgd with momentum for nonconvex optimization. In *Advances in Neural Information Processing Systems*, 2021. Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. Page: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In *International Conference on Machine Learning*, pp. 6286–6295. PMLR, 2021. Yanli Liu, Kaiqing Zhang, Tamer Basar, and Wotao Yin. An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. In *Advances in Neural Information Processing Systems*, 2020. Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. Sarah: A novel method for machine learning problems using stochastic recursive gradient. In *International Conference on Machine Learning*, pp. 2613–2621. PMLR, 2017. Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, and Marcello Restelli. Stochastic variance-reduced policy gradient. In *International Conference on Machine Learning*, pp. 4026–4035. PMLR, 2018. Barak A Pearlmutter. Fast exact multiplication by the hessian. *Neural Computation*, 6(1):147–160, 1994. Nhan Pham, Lam Nguyen, Dzung Phan, Phuong Ha Nguyen, Marten Dijk, and Quoc Tran-Dinh. A hybrid stochastic policy gradient algorithm for reinforcement learning. In *International Conference on Artificial* Intelligence and Statistics, pp. 374–385. PMLR, 2020. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International Conference on Machine Learning*, pp. 1889–1897. PMLR, 2015. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In Internation Conference on Representation Learning, 2016. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. *arXiv preprint arXiv:1610.03295*, 2016. Lior Shani, Yonathan Efroni, and Shie Mannor. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 5668–5675, 2020. Zebang Shen, Alejandro Ribeiro, Hamed Hassani, Hui Qian, and Chao Mi. Hessian aided policy gradient. In International Conference on Machine Learning, pp. 5729–5738. PMLR, 2019. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. *Nature*, 550(7676):354–359, 2017. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057–1063, 2000. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In *2012* IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012. Hoang Tran and Ashok Cutkosky. Better sgd using second-order momentum. In *Advances in Neural* Information Processing Systems, volume 35, pp. 3530–3541, 2022. Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost and momentum: Faster variance reduction algorithms. In *Advances in Neural Information Processing Systems*, pp. 2406–2416, 2019. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229–256, 1992. Pan Xu, Felicia Gao, and Quanquan Gu. Sample efficient policy gradient methods with recursive variance reduction. In *International Conference on Learning Representations*, 2019. Pan Xu, Felicia Gao, and Quanquan Gu. An improved convergence analysis of stochastic variance-reduced policy gradient. In *Uncertainty in Artificial Intelligence*, pp. 541–551. PMLR, 2020. Long Yang, Yu Zhang, Gang Zheng, Qian Zheng, Pengfei Li, Jianhang Huang, and Gang Pan. Policy optimization with stochastic mirror descent. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 8823–8831, 2022. Huizhuo Yuan, Xiangru Lian, Ji Liu, and Yuren Zhou. Stochastic recursive momentum for policy gradient methods. *arXiv preprint arXiv:2003.04302*, 2020. Junyu Zhang, Chengzhuo Ni, Csaba Szepesvari, Mengdi Wang, et al. On the convergence and sample efficiency of variance-reduced policy gradient method. In *Advances in Neural Information Processing* Systems, volume 34, pp. 2228–2240, 2021. Liang Zhang. Variance reduction for non-convex stochastic optimization: General analysis and new applications. Master's thesis, ETH Zurich, 2021. Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, and Amin Karbasi. One sample stochastic frank-wolfe. In *International Conference on Artificial Intelligence and Statistics*, pp. 4012–4023. PMLR, 2020. ## A Proof Of Theorem 1 Let us define ϵt := vt − ∇J(θt). Then, based on the update in line 9 of Algorithm 2, we have: $$\epsilon_{t}=(1-\alpha_{t})\epsilon_{t-1}+\alpha_{t}U_{t}+(1-\alpha_{t})W_{t},$$ ϵt = (1 − αt)ϵt−1 + αtUt + (1 − αt)Wt, (19) where Ut = (g(τt; θt) − ∇J(θt)) and Wt = B(τ b t ; θ b t )(θt − θt−1) − (∇J(θt) − ∇J(θt−1)). Let Ht be the history up to time t, i.e., Ht := {θ0, τ0, τ1, b1, τ b 1 , · · · , τt, bt, τ b t }. By taking the square l2 norm of both sides of the above equation, $$\|\epsilon_{t}\|^{2}=(1-\alpha_{t})^{2}\|\epsilon_{t-1}\|^{2}+\alpha_{t}^{2}\|U_{t}\|^{2}+(1-\alpha_{t})^{2}\|W_{t}\|^{2}+$$ $$2\alpha_{t}(1-\alpha_{t})\langle\epsilon_{t-1},U_{t}\rangle+2\alpha_{t}(1-\alpha_{t})\langle U_{t},W_{t}\rangle+2(1-\alpha_{t})^{2}\langle\epsilon_{t-1},W_{t}\rangle$$ $$\leq(1-\alpha_{t})^{2}\|\epsilon_{t-1}\|^{2}+2\alpha_{t}^{2}\|U_{t}\|^{2}+2(1-\alpha_{t})^{2}\|W_{t}\|^{2}+$$ $$2\alpha_{t}(1-\alpha_{t})\langle\epsilon_{t-1},U_{t}\rangle+2(1-\alpha_{t})^{2}\langle\epsilon_{t-1},W_{t}\rangle,$$ $$\left(19\right)$$ $$(20)$$ where we used Young's inequality in the first inequality for the following term: 2αt(1 − αt)⟨Ut, Wt⟩ ≤ (1 − αt) 2∥Wt∥ 2 + α 2 t ∥Ut∥ 2. Now, by taking expectations on both sides, E[∥ϵt∥ 2] ≤ (1 − αt) 2E[∥ϵt−1∥ 2] + 2α 2 tE[∥Ut∥ 2] + 2(1 − αt) 2E[∥Wt∥ 2]+ 2αt(1 − αt)E[⟨ϵt−1, Ut⟩] + 2(1 − αt) 2E[⟨ϵt−1, Wt⟩] (a) ≤ (1 − αt)E[∥ϵt−1∥ 2] + 2α 2 tE[∥Ut∥ 2] + 2E[∥Wt∥ 2]+ 2αt(1 − αt)E[⟨ϵt−1, Ut⟩] + 2(1 − αt) 2E[⟨ϵt−1, Wt⟩] (b) ≤ (1 − αt)E[∥ϵt−1∥ 2] + 2α 2 tE[∥Ut∥ 2] + 2E[∥Wt∥ 2], $$(21)$$ (a) follows as αt ≤ 1 for all t ≥ 0. (b) follows because the following two terms are zero. First, E[⟨ϵt−1, Ut⟩] = E[E[⟨ϵt−1, Ut⟩|Ht−1]] = 0 as ϵt−1 is determined given Ht−1 and E[Ut|Ht−1] = 0 since g(τt; θt) is an unbiased estimation of ∇J(θt). Second, E[⟨ϵt−1, Wt⟩] = E[E[⟨ϵt−1, Wt⟩|Ht−1]] = 0 because E[Wt|Ht−1] = E[B(τ b t ; θ b t )(θt − θt−1)|Ht−1] − (∇J(θt) − ∇J(θt−1)) = Ebt [Eτ b t [B(τ b t ; θ b t )(θt − θt−1)|Ht−1]] − (∇J(θt) − ∇J(θt−1)) (c) = Ebt [∇2J(θ b t )(θt − θt−1)|Ht−1] − (∇J(θt) − ∇J(θt−1)) (d) = Z 1 0 ∇2J(bθt + (1 − b)θt−1)(θt − θt−1)db − (∇J(θt) − ∇J(θt−1)) = 0, $$(22)$$ (c) It is due to the fact that for a given θ b t , B(τ b t ; θ b t ) is an unbiased estimation of ∇2J(θ b t ). (d) The integral follows since θ b t = btθt + (1 − bt)θt−1 with bt uniformly distributed in [0, 1], and its value is ∇J(θt) − ∇J(θt−1). Now, from equation 21, we have: $$\begin{split}\mathbb{E}[\|\epsilon_{t-1}\|^{2}]&\stackrel{{(a)}}{{\leq}}\frac{1}{\alpha_{t}}\left(\mathbb{E}[\|\epsilon_{t-1}\|^{2}]-\mathbb{E}[\|\epsilon_{t}\|^{2}]\right)+\frac{2}{\alpha_{t}}\mathbb{E}[\|W_{t}\|^{2}]+2\alpha_{t}\sigma_{g}^{2}\\ &\stackrel{{(b)}}{{\leq}}\frac{1}{\alpha_{t}}\left(\mathbb{E}[\|\epsilon_{t-1}\|^{2}]-\mathbb{E}[\|\epsilon_{t}\|^{2}]\right)+8\sigma_{B}^{2}\frac{\eta_{t-1}^{2}}{\alpha_{t}}+2\alpha_{t}\sigma_{g}^{2},\end{split}\tag{23}$$ (a) We use the bound in Lemma 1, i.e., E[∥Ut∥ 2] ≤ σ 2 g . (b) It has been shown that ∥B(τ ; θ)∥ ≤ σB for any trajectory τ and θ ∈ R d(Shen et al., 2019). Therefore, ∥∇2J(θ)∥ = ∥Eτ [B(τ ; θ)]∥ ≤ σB. Hence, ∇J(θ) is Lipschitz with constant σB and we have: $$\begin{split}\|W_{t}\|&\leq\|B(\tau_{t}^{b};\theta_{t}^{b})(\theta_{t}-\theta_{t-1})\|+\|\nabla J(\theta_{t})-\nabla J(\theta_{t-1})\|\\ &\leq\|B(\tau_{t}^{b};\theta_{t}^{b})\|\|\theta_{t}-\theta_{t-1}\|+\sigma_{B}\|\theta_{t}-\theta_{t-1}\|\\ &\leq2\sigma_{B}\|\theta_{t}-\theta_{t-1}\|,\end{split}\tag{24}$$ where the first inequality is due to the Lipschitzness of ∇J(θ), and the second inequality results from the bound on ∥B(τ b t ; θ b t )∥. Summing the both sides of equation 23 from t = 1 to t = T, we have: $$\mathbb{E}\left[\sum_{t=1}^{T}\|e_{t-1}\|^{2}\right]\leq-\underbrace{\mathbb{E}[\|e_{T}\|^{2}]}_{\alpha_{T}}+\underbrace{\mathbb{E}[\|e_{0}\|^{2}]}_{\alpha_{1}}+\underbrace{\sum_{t=1}^{T-1}\left(\frac{1}{\alpha_{t+1}}-\frac{1}{\alpha_{t}}\right)\mathbb{E}[\|e_{t}\|^{2}]}_{\text{(I)}}+8\sigma_{b}^{2}\underbrace{\sum_{t=1}^{T}\frac{\eta_{t}^{2}}{\alpha_{t}}}_{\text{(II)}}+2\sigma_{\theta}^{2}\underbrace{\sum_{t=1}^{T}\alpha_{t}}_{\text{(III)}}\tag{25}$$ $$(26)$$ First note that E[∥ϵ0∥ 2]/α1 ≤ σ 2 g/α0. Now, we bound the other terms in the right hand side of the above inequality: (I): For the coefficient in the sum, we have: 1/αt+1 − 1/αt = ((t + 1)2/3 − t 2/3)/α0 ≤ 2t −1/3/(3α0) ≤ 2/(3α0) where we used the gradient inequality for the concave function f(z) = z 2/3. Therefore, this term can be bounded by: (2/(3α0))PT −1 t=1 E[∥ϵt∥ 2]. (II): PT t=1 η 2 t−1/αt = η 2 0/α0(1 + PT −1 t=1 (t + 1)2/3/t4/3) ≤ η 2 0/α0(1 + 22/3 PT −1 t=1 t −2/3) ≤ 6η 2 0T 1/3/α0. (III): PT t=1 αt = α0PT t=1 t −2/3 ≤ 3α0T 1/3. Plugging these bounds in equation 25, we get: $$\mathbb{E}\left[\sum_{t=0}^{T-1}\left\|\epsilon_{t}\right\|^{2}\right]\leq C T^{1/3},$$ ≤ CT1/3, (26) where C := 3α0((48σ 2 Bη 2 0 + σ 2 g )/α0 + 6α0σ 2 g )/(3α0 − 2). The previous inequality yields that: $${\frac{1}{T}}\sum_{t=0}^{T-1}\mathbb{E}[\|\epsilon_{t}\|]\leq{\frac{1}{T}}\sum_{t=0}^{T-1}{\sqrt{\mathbb{E}[\|\epsilon_{t}\|^{2}]}}\leq{\sqrt{\frac{1}{T}}}\sum_{t=0}^{T-1}\mathbb{E}[\|\epsilon_{t}\|^{2}]\leq{\frac{\sqrt{C}}{T^{1/3}}},$$ where we used Jensen's inequality in the second inequality above. $$(27)$$ Now, if we average from t = 0 to t = T − 1 in equation 32 in Lemma 2 and then take expectations, we have: E "1 T T X−1 t=0 ∥∇J(θt)∥ # ≤ 8 T T X−1 t=0 E[∥ϵt∥] + 3σB 2T T X−1 t=0 ηt + 3 T E "TX−1 t=0 J(θt+1) − J(θt) ηt # (a) ≤ 8 √C T1/3 + 6σBη0 T2/3+ E "3 T T X−1 t=0 J(θt+1) − J(θt) ηt # ≤ 8 √C T1/3 + 6σBη0 T2/3+ E "3 T J(θT ) ηT −1 − J(θ0) η0+ T X−1 t=1 J(θt) 1 ηt−1 − 1 ηt !# (b) ≤ 8 √C T1/3 + 6σBη0 T2/3+ E "3 T CJ (T − 1)2/3 η0+ CJ η0 + T X−1 t=1 |J(θt)| 1 ηt −1 ηt−1 !# (28) ≤ 8 √C T1/3 + 6σBη0 T2/3+ E "3 T CJ (T − 1)2/3 η0+ CJ η0 + T X−1 t=1 CJ 1 ηt −1 ηt−1 !# (c) ≤ 8 √C T1/3 + 6σBη0 T2/3+ E 3 T CJ (T − 1)2/3 η0+ CJ η0 + CJ (T − 1)2/3 η0 ≤ 8 √C T1/3 + 6σBη0 T2/3+ 9CJ η0T1/3 , (a) We used the bound in equation 27 for the first term and PT −1 t=0 ηt ≤ 4η0T 1/3. (b) |J(θ)| = |Eτ∼πθ [R(τ )]| = |Eτ∼πθ [PH−1 h=0 γ hr(sh, ah)]| ≤ Eτ∼πθ [PH−1 h=0 γ h|r(sh, ah)|] ≤ Eτ∼πθ [R0/(1−γ)] = R0/(1 − γ). Hence, we have: |J(θ)| ≤ CJ for all θ ∈ R d where CJ := R0/(1 − γ). (c) Due to PT −1 t=1 1/ηt − 1/ηt−1 = 1/ηT −1 − 1/η0 ≤ (T − 1)2/3/η0. ## B Proof Of Remark 3 According to equation 21, we have: $$\mathbb{E}[\|\epsilon_{t}\|^{2}]\leq(1-\alpha_{t})\mathbb{E}[\|\epsilon_{t-1}\|^{2}]+8\sigma_{B}^{2}\eta_{t-1}^{2}+2\alpha_{t}^{2}\sigma_{g}^{2}.$$ . (29) Let us define Zt := E[∥ϵt∥ 2]. Then, we can rewrite the above equation as follows: $$Z_{t}\leq(1-\alpha_{t})Z_{t-1}+\frac{C z}{t^{4/3}},\tag{1}$$ where Cz = 8 × 2 4/3η 2 0σ 2 B + 2α 2 0σ 2 g . Now, by induction, we will show that: Zt ≤ C ′ Z /t2/3 where C ′ Z = CZ/(α0 − 2/3). It can be easily seen that for the base case Z1, this statement holds. Now, for the induction step, suppose that Zt−1 ≤ C ′ Z /(t − 1)2/3for some t ≥ 2. Then, from above equation, we have: CZ t 4/3 (a) ≤ α0C ′ Z − 2C ′ Z /3 t 2/3(t − 1)2/3 =⇒2C ′ z/3 (t − 1)t 2/3 ≤α0C ′ Z t 2/3(t − 1)2/3 − CZ t 4/3 (b) =⇒ C ′ Z 1 (t − 1)2/3 −1 t 2/3 ≤α0C ′ Z t 2/3(t − 1)2/3 − CZ t 4/3 (31) =⇒ 1 − α0 t 2/3 C ′ Z (t − 1)2/3 + CZ t 4/3 ≤ C ′ Z t 2/3 (c) =⇒ (1 − αt)Zt−1 + CZ t 4/3 ≤ C ′ Z t 2/3 (d) =⇒ Zt ≤ C ′ Z t 2/3 , $$(29)^{\frac{1}{2}}$$ $$(30)$$ where (a) is due to definition of C ′ Z , (b) is based on using gradient inequality for the concave function f(z) = z 2/3, i.e., t 2/3 − (t − 1)2/3 ≤ 2/3(t − 1)−1/3, (c) is according to induction hypothesis, and (d) is due to equation 30. ## C Supplemental Lemma Lemma 2 Suppose that θt's are generated by executing Algorithm 2. Let ϵt := vt − ∇J(θt)*. Then, at any* time t*, we have:* $$\|\nabla J(\theta_{t})\|\leq8\|\epsilon_{t}\|+\frac{3\sigma_{B}\eta_{t}}{2}+\frac{3}{\eta_{t}}(J(\theta_{t+1})-J(\theta_{t})).$$ From σB-smoothness of J(θt) (Shen et al., 2019), we have: $$J(\theta_{t+1})-J(\theta_{t})\geq\langle\nabla J(\theta_{t}),\theta_{t+1}-\theta_{t}\rangle-\frac{\sigma_{B}}{2}\|\theta_{t+1}-\theta_{t}\|^{2}\tag{33}$$ $$=\eta_{t}\left\langle\nabla J(\theta_{t}),\frac{v_{t}}{\|v_{t}\|}\right\rangle-\frac{\sigma_{B}}{2}\eta_{t}^{2}.$$ Regarding the first term above, we consider two cases, whether ∥∇J(θt)∥ ≥ 2∥ϵt∥ or not. For the former case, we have: ∇J(θt),vt ∥vt∥ = ∥∇J(θt)∥ 2 + ⟨∇J(θt), ϵt⟩ ∥∇J(θt) + ϵt∥ ≥∥∇J(θt)∥ 2 2∥∇J(θt) + ϵt∥ ≥∥∇J(θt)∥ 2 2(∥∇J(θt)∥ + ∥ϵt∥) ≥ ∥∇J(θt)∥ 3 ≥ ∥∇J(θt)∥ 3− 8 3 ∥ϵt∥, (34) $$(32)$$ $$(34)$$ where in the first inequality, we used the bound ⟨∇J(θt), ϵt⟩ ≥ −∥∇J(θt)∥∥ϵt*∥ ≥ −∥∇*J(θt)∥ 2/2. For the latter case, $$\left\langle\nabla J(\theta_{t}),\frac{v_{t}}{\|v_{t}\|}\right\rangle\geq-\|\nabla J(\theta_{t})\|$$ $$=\frac{\|\nabla J(\theta_{t})\|}{3}-\frac{4\|\nabla J(\theta_{t})\|}{3}$$ $$\geq\frac{\|\nabla J(\theta_{t})\|}{3}-\frac{8\|\epsilon_{t}\|}{3}.$$ $\|\cdot\|\cdot\|$ in equation 33, we get: $$(35)$$ Plugging the bound on ⟨∇J(θt), vt/∥vt∥⟩ in equation 33, we get: $$J(\theta_{t+1})-J(\theta_{t})\geq\eta_{t}\left(\frac{\|\nabla J(\theta_{t})\|}{3}-\frac{8\|\epsilon_{t}\|}{3}\right)-\frac{\sigma_{B}}{2}\eta_{t}^{2}\tag{36}$$ $$\implies\|\nabla J(\theta_{t})\|\leq8\|\epsilon_{t}\|+\frac{3\sigma_{B}\eta_{t}}{2}+\frac{3}{\eta_{t}}(\nabla J(\theta_{t+1})-\nabla J(\theta_{t})).$$ ## D Comparison Using Another Variant Of Pr Metric In this section, we report another variant of PR metric that takes into account the upper bounds of confidence intervals instead of the lower bounds. As can be seen in Table 3, SHARP still outperforms other algorithms in all environments considering the new variant of PR metric. | Reacher | Walker | Humanoid | Swimmer | | |-----------|----------|------------|-----------|-------| | HAPG | -23.05 | 127.94 | 230.48 | 88.31 | IS-MBPG -20.78 296.85 309.51 30.32 PAGE-PG -17.25 259.36 402.20 25.05 REINFORCE -28.65 136.37 222.97 28.40 VR-BGPO -15.92 346.41 406.22 39.77 SHARP (our algorithm) -10.23 440.70 441.27 **172.42** Table 3: Comparison of SHARP with other variance-reduced methods in terms of PR metric (The upper bounds of confidence intervals are considered in reporting PR metric). ## E Details Of Experiments We used the default implementation of linear feature baseline from Garage library. The employed linear feature baseline is a linear regression model that takes observations for each trajectory and extracts new features such as different powers of their lengths from the observations. These extracted features are concatenated to the observations and used to fit the parameters of the regression with least square loss function. In the experiments, we used linear baseline for all the environments and methods. In our reporting, at each iteration t, we generated trajectories according to the current policy until we collected 10k system probes. Then, we computed the average return based on the collected trajectories and considered this value for 10k × t system probes. In the following table, we provide the fine-tuned parameters for each algorithm. Batch sizes are considered the same for all algorithms. The discount factor is also set to 0.99 for all the runs. | | Reacher | Walker | Humanoid | Swimmer | |----------------------|-----------|----------|------------|-----------| | Max horizon | 50 | 500 | 500 | 500 | | Neural network sizes | 64 × 64 | 64 × 64 | 64 × 64 | 64 × 64 | | Activation functions | Tanh | Tanh | Tanh | Tanh | | IS-MBPG lr | 0.9 | 0.3 | 0.75 | 0.3 | | IS-MBPG c | 100 | 12 | 5 | 12 | | IS-MBPG w | 200 | 20 | 2 | 25 | | REINFORCE step-size | 0.01 | 0.01 | 0.001 | 0.01 | | SHARP α0 | 1.5 | 5 | 5 | 4.5 | | SHARP η0 | 0.1 | 1 | 0.6 | 0.15 | | PAGE-PG pt | 0.4 | 0.4 | 0.6 | 0.6 | | PAGE-PG step-size | 0.01 | 0.001 | 0.0005 | 0.001 | | HAPG step-size | 0.01 | 0.01 | 0.01 | 0.01 | | HAPG Q | 5 | 10 | 10 | 10 | | VR-BGPO lr | 0.8 | 0.75 | 0.8 | 0.75 | | VR-BGPO c | 25 | 25 | 25 | 25 | | VR-BGPO w | 1 | 1 | 1 | 1 | | VR-BGPO lam | 0.0005 | 0.0005 | 0.0025 | 0.0005 | Table 4: Selected hyper-parameters for different methods. | | Set | |----------------------------------------------------------|--------------------------------------------------------| | IS-MBPG lr | {0.1, 0.2, 0.3, 0.4, 0.55, 0.65, 0.75, 0.85, 0.9} | | IS-MBPG c | {5, 12, 20, 25, 50, 80, 100} | | IS-MBPG w | {2, 5, 10, 15, 25, 50, 100, 200} | | REINFORCE step-size | {0.0005, 0.001, 0.01, 0.05, 0.1} | | SHARP α0 | {1, 1.5, 2, 3, 3.5, 4.5, 4, 5} | | SHARP η0 | {0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} | | PAGE-PG pt | {0.2, 0.4, 0.5, 0.6, 0.7, 0.9} | | PAGE-PG step-size | {0.0005, 0.001, 0.01, 0.05, 0.1} | | HAPG step-size | {0.0005, 0.001, 0.01, 0.05, 0.1} | | HAPG Q | {5, 7, 10, 15, 20} | | VR-BGPO lr | {0.6, 0.7, 0.75, 0.8, 0.85 0.9, 0.95} | | VR-BGPO c | {15, 25, 35, 45, 50} | | VR-BGPO w | {1, 3, 5, 7, 10} | | VR-BGPO lam | {0.0005, 0.001, 0.0015, 0.002, 0.0025} | | Table 5: Sets of hyper-parameters for different methods. | | In the following table, we provide the sets of considered values for hyperparameter tuning for each algorithm.
Review 1: Summary: The authors propose a new variance-reduced policy gradient method by use of the second-order information. This new method improves existing methods by removing the requirement of importance sampling and careful checkpoint. The authors prove sublinear rate to obtain an near-optimal stationary point. Experiments are also provided to show the effective performance of the proposed method. Strengths and Weaknesses: Strengths: (1) The proposed method is clearly explained. The authors also made a nice literature review that is helpful to read the advantages of the proposed method. (2) The proposed method introduces a middle-point policy to get an unbiased gradient difference. Due to such second-order information, there is no need of checking points for sampling batches. (3) The learning parameters do not reply on problem parameters, which makes the proposed method more practical. Weaknesses: (1) The sample complexity is sub-optimal. (2) Some assumptions can be strong, such as Assumption 2, although it is often assumed in literature. Requested Changes: Overall, the result is nicely presented. Here are a few questions for consideration. - In SHARP, is the choice paramters $(\eta_t,\alpha_t)$ in same order optimal? - What's the policy parametrization used in experiments? How do you check the assumption made on the policy? Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper studies the problem of policy gradient optimization in reinforcement learning. The propose a new algorithm, with the primary purpose of delivering theoretical guarantees, that achieves a fast convergence rate to a stationary point. This is done with a variance reduction method that uses second order information and without further assumptions than what is typical. It is also done without importance-weighting, which is a common practical limitation. The algorithm is also shown to outperform peer methods. Strengths and Weaknesses: Overall I think this is a good paper. It makes a distinct and solid contribution to the literature on variance-reduced policy gradient methods. The theory appears to be good given the existing literature surrounding this topic, and it achieves competitive rates under very mild conditions. The experiments show convincingly that the method outperforms the baselines considered. I don't have an explicit weaknesses to bring forth, but I do have some questions/suggestions. Perhaps these can be incorporated to further strengthen the paper. Questions: - Which the experiment baselines are 'parameter-free' and which ones aren't? How do you choose the hyperparameters for the ones that are not? It maybe helpful to highlight this in that section. - You compared to REINFORCE, but not more modern algorithms like PPO and TRPO. Why not? It seems like it would be informative to the reader to see the difference been PPO (a go-to PG method) and this, even though the theoretical guarantees may not be as strong. - What are the computational trade-offs between this algorithm and the others discussed in the related work and used as baselines? The proposed method includes a second-order calculation, which can be prohibitive for larger models. - I believe the page limit is 12, correct? I thought it might be helpful to include a sketch of the proof or at least highlight some of the core ideas that allow the analysis to through (beyond the algorithmic ones already discussed). What do you think? Requested Changes: See above. Broader Impact Concerns: N/A ================================================== Review 3: Summary: # Summary This paper presents a new algorithm called _Stochastic Hessian Aided Recursive Policy gradient_ (SHARP). The SHARP algorithm incorporates second-order, Hessian information into stochastic gradient descent (SGD) with momentum in order to reduce the variance of the policy gradient estimator. SHARP is able to reach first order stationary points (FOSP) without importance sampling and with relatively low sample complexity. Unlike some other popular variance reduction methods for policy gradient algorithms, SHARP does not require large batch sizes or gradient checkpointing. Strengths and Weaknesses: # Main Argument Recommendation: weak reject I am concerned that there is not a sufficient empirical justification for the SHARP algorithm. A significant amount of details about the empirical study are missing -- details that are needed for a more thorough, complete evaluation. Below I will comment on my major concerns. First, the paper does not outline the hyperparameter ranges that were tuned over for any algorithm. This information is vital in order for a successful evaluation of the empirical study. Were approximately the same number of hyperparameter settings tuned over for all algorithms? If some algorithms had more hyperparameter settings tuned over, they likely had an advantage over the other algorithms, as there were more avenues by which performance could be improved for such algorithms. Second, I am unsure of the performance robustness (PR) metric that is reported for performance. This PR metric is basically the average lower bound of a 90% confidence interval over average return for each timestep in an agent-environment interaction. Because this metric only considers a lower bound, it hides a significant amount information, especially information about whether confidence intervals are overlapping. Here is equation 18 from the paper: $$ PR_{A}^{\text{min}}(n) = \frac{1}{T} \sum_{t=1}^{T} LCI_{A}(n, t) $$ We can also consider another metric: $$ PR_{A}^{\text{max}}(n) = \frac{1}{T} \sum_{t=1}^{T} UCI_{A}(n, t) $$ with $UCI_{A}(n, t)$ the upper bound of a 90% confidence interval over average return for $n$ runs of algorithm $A$. Why was this mteric not also reported? It would help to better gauge algorithm performance, especially when comparing algorithms. It could be the case that for two algorithms $A$ and $B$ that $PR_{A}^\text{min}(n) > PR_{B}^\text{min}(n)$ but that $PR_{A}^\text{max}(n) < PR_{B}^\text{max}(n)$. In such a case, algorithm $B$ has a lower estimate of "worst-case" performance but a higher estimate of "best-case" performance, meaning that algorithm $B$ could really outperform $A$ on the problem. We don't have sufficient information to determine this. The $PR_A^\text{min}$ metric hides this information (and more). Further, I am unsure that comparing lower bounds of confidence intervals can sufficiently show that one algorithm is necessarily better than another. Why not provide the average lower and upper bounds for this confidence interval, as I have outlined above? Furthermore, considering table 2, the $PR_{SHARP}^\text{min}$ value is not much higher than that of many of the other algorithms (especially on Humanoid and Walker), which seems to indicate that the confidence intervals which were used to generate these numbers do indeed overlap. To compute the $PR_A^\text{min}$ metric, a confidence interval over average return is constructed over $n$ runs of the algorithm for each system probe. A system probe is simply a (state, action) pair. How was the average return computed for this (state, action) pair, since only a reward is available at this time? Is the episodic return used for the episode corresponding to the system probe under consideration? Third, even if I was confident about the aforementioned metric, the paper is not clear about how confidence intervals are constructed. The paper mentions that 90% confidence intervals are used to compute the values in table 2, but there is no indication about what kind of confidence interval is used. I assume that t-distributed confidence intervals were used since this is what is common (please correct me if I am wrong). In this case, I am not confident that the distributional assumptions of the confidence interval are satisfied, and the reported confidence intervals may therefore be overly optimistic. Likely, 10 runs is not enough to satisfy the normality assumptions of such confidence intervals. This would also exacerbate the issue with table 2 that I mentioned above. Would it be possible to attain a significant number of more experimental runs (e.g. >=20 total)? Further, a likely better estimate of confidence could be obtained with confidence intervals which make no distributional assumptions about the data [1]. Does Figure 1 also use the same confidence intervals as reported in table 2? Next, why were the default values not used for the maximum episode horizon for Walker, Swimmer, and Humanoid? The default values for each of these environments are significantly higher than what was used in the empirical study (default is 1000 timesteps). This design choice makes the results reported in the paper hard to compare with existing results in the literature which tend to use the default episode cutoffs. Further, reducing episode cutoffs can make these problems easier. Finally, a number of things are not clear in the paper. First, $B(\tau_t^b, \theta_t^b)$ is used before it is clearly defined. Second, in _Remark 1_, why is it important to have an unbiased estimate of $\nabla J(\theta_t) - \nabla J(\theta_{t-1})$? Where does this difference in gradients come from? Is it related to equation 10? I am assuming this difference in gradients is from some theoretical result in Tran & Cutkosky's (2022) paper [2], but it is not clear why an unbiased estimate of this value is important. Equation 8 in Algorithm 1 is not clear. Are both gradient terms included in the sum? # Small Things Here are a list of small things which **did not affect the scoring of the paper** but can help to make the paper stronger: - There are multiple grammatical errors in the paper. For example: - "In RL context" -> "In the RL context" - "In RL setting" -> "In the RL setting" - "In (Zhang et al., 2020)" - > "Zhang et al. (2020) state that..." - "distribution of starting state" -> "distribution of starting states" - "IS-MBPG... which is based on STROM" -> I think this should be "STORM". - "metric proposed in (Khorasani et al., 2023)" -> "Khorasani et al. (2023) propose the metric..." - In many different places, vague terms are used. For example, at the beginning of the second paragraph in the introduction, it is stated that "Policy gradient methods are often used for obtaining good policies". What is a "good policy"? - I found the first paragraph of the introduction hard to follow. There were multiple undefined terms in this paragraph as well (or terms were used before they were defined). For example, what are the "best actions"? - I believe this is a typo, but in the last sentence of the paper it is claimed that SHARP attains remarkable performance on HalfCheetah. Yet, these results were never shown. # References [Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer. How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments. 2018.](https://arxiv.org/abs/1806.08295) [Hoang Tran, Ashok Cutkosky. Better SGD using Second-order Momentum. 2022.](https://arxiv.org/abs/2103.03265) Requested Changes: # Requested Changes In order to secure my recommendation for acceptance, all the issues I listed in my _Main Argument_ would have to be addressed. This includes (but is not limited to): 1. Using confidence intervals for which all assumptions are satisfied. 2. The confidence intervals in (1) should not overlap and be high confidence estimates (>= 90% confidence) with many random seeds. 3. Reporting full confidence intervals, rather than only the average lower bound, while maintaining similar conclusions To make the paper better, the issues in the _Small Things_ section above could be addressed. This is not critical for securing my recommendation for acceptance. Broader Impact Concerns: No ethical concerns. ================================================== Metareview: Recommendation: Accept as is Comment: Overall, the proposed algorithm SHARP is both theoretically sound and appears to perform well in experiments on some MuJoCo tasks. The reviewers are generally positive about the contributions of the paper. Therefore, I recommend acceptance. I encourage the authors to further revise the paper based on the reviewers' suggestions in the camera ready version. For example, the authors may consider having more runs for computing the bootstrap confidence intervals, as raised by Reviewer BZdM. ==================================================
# Subgraph Permutation Equivariant Networks Anonymous authors Paper under double-blind review ## Abstract In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), which provides a framework for building graph neural networks that operate on sub-graphs, while using permutation equivariant update functions that are also equivariant to a novel choice of automorphism groups. Message passing neural networks have been shown to be limited in their expressive power and recent approaches to over come this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs. In addition, through operating on sub-graphs the expressive power of higher-dimensional global permutation equivariant networks is improved; this is due to fact that two non-distinguishable graphs often contain distinguishable sub-graphs. Furthermore, the proposed framework only requires a choice of k-hops for creating ego-network sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating either state-of-the-art results or very competitive results on all benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods. ## 1 Introduction Machine learning on graphs has received much interest in recent years with many graph neural network (GNN) architectures being proposed. One such method, which is widely used, is the general framework of message passing neural networks (MPNN). These provide both a useful inductive bias and scalability across a range of domains (Gilmer et al., 2017). However, Xu et al. (2019); Morris et al. (2019b) showed that models based on a message passing framework with permutation invariant aggregation functions have expressive power at most that of the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968). Therefore, there exist many non-isomorphic graphs that a model of this form cannot distinguish between. Figure 1 provides an example of two non-isomorphic graphs which to a message passing update function are indistinguishable. ![0_image_0.png](0_image_0.png) Figure 1: The initial graph on the left is non-isomorphic to the graph on the right. Despite this the WL graph isomorphism test cannot distinguish between the two graphs. This presents an natural question of is it possible to design a GNN that improves the expressive power of MPNNs? Many methods have been proposed to address this question, but most often an increase in expressivity must be traded off against scalability. We present the background into existing methods which attempt to tackle this question in Section 2. 1 Our approach. We design a framework to create provably more expressive and scalable graph networks. We achieve this through incorporating symmetry structures in graphs, by considering a graph equivariant update function which operates over sub-graphs. Our framework, dubbed Subgraph Permutation Equivariant Networks (SPEN), is developed from the observation that operating on sub-graphs both improves the scalability and expressive power of higher-dimensional GNNs, whilst unlocking a natural choice of automorphism groups which further increases the expressive power of the network. Our framework consists of: 1. encoding the graph as a bag of bags of sub-graphs, 2. utilising a k-order permutation equivariant base encoder, and 3. constraining the linear map to be equivariant to the automorphism groups of the bags of sub-graphs. Sub-graphs each have a symmetry group and our framework captures this in two ways. Each sub-graph has a permutation symmetry, which is induced by a permutation of the nodes in the graph. In addition, there is a symmetry across sub-graphs whereby sub-graphs are associated to an automorphism group. We therefore construct a neural network comprising of layers that are equivariant to both permutations of nodes and the automorphism groups of sub-graphs. We achieve this by utilising a permutation equivariant base encoder with feature space constrained by the direct sum of different order permutation representations. Further, we constrain the linear map comprising each layer to be equivariant to the automorphism groups of the bags of sub-graphs. This necessitates that sub-graphs belonging to different automorphism groups are processed by a kernel with different weights, while for sub-graphs belonging to the same automorphism group the kernel shares weights. This leads to us creating a sub-graph extraction policy which generates a bag of bags of sub-graphs, where each bag of sub-graphs corresponds to a different sub-graph automorphism group. ## Main Contributions 1. A general framework for learning on graphs through utilising bags of sub-graphs. 2. A novel choice of automorphism groups with which to constrain the linear map to be equivariant to. 3. A more scalable framework for utilising higher-dimensional permutation equivariant GNNs. 4. A more expressive model than higher-dimensional permutation equivariant GNNs and sub-graph MPNNs. 5. A comprehensive theoretical analysis of the proposed framework in terms of both the expressive power and scalability. 6. An experimental evaluation of the proposed framework under certain parameter choices, demonstrating noticeable improvements on both real and synthetic data. ## 2 Background More expressive graph neural networks (GNNs) exist which can be grouped into three different groups: (1) those which design higher-dimensional GNNs, (2) those which use positional encodings through pre-coloring nodes, and (3) those which use sub-graphs/local equivariance. Several architectures have been proposed of the type (1) which design a high-order GNN equivalent to the hierarchy of k-WL tests (Maron et al., 2018; 2019; Morris et al., 2019b;a; Keriven & Peyré, 2019; Azizian & Lelarge, 2021). Despite being equivalent to the k-WL test, and hence having provably strong expressivity; these models lose the advantage of locality and linear complexity. As such, the scalability of such models poses an issue for their practical use, with Maron et al. (2018) showing that the basis space for permutation equivariant models of order k is equal to the 2k th Bell number, which results in a basis space of size 2 for order-1 tensors, 15 for order-2 tensors, 203 for order-3 tensors, and 4140 for order-4 tensors, demonstrating the practical challenge of using higher-dimensional GNNs. Several architectures have also been proposed of type (2) where authors seek to introduce a pre-coloring or positional encoding that is permutation invariant. These comprise of pre-coloring nodes based on pre-defined ![2_image_0.png](2_image_0.png) Figure 2: (1-2) The first component of our SPEN model comprises of splitting the graph into sub-graphs. For this we use a k-ego network policy extracting a sub-graph for each node in the input graph. (3) Secondly, we place sub-graphs into bags, where each bag holds sub-graphs of a specific size. The extracted sub-graphs are used as fully-connected graphs with zero features for non-edges; this results in each bag of sub-graphs representing an automorphism group. (4) We then process the bags of sub-graphs with an automorphism equivariant linear map. This comprises of multiple separate GNNs, with a different network processing each automorphism group. (5) The resulting output is again a bag of bags of sub-graphs. (6) The sub-graphs are averaged across the bags to allow information flow between sub-graphs. (7-8) At the end of the model we pool the bags of sub-graphs to produce an output graph representation. (a) In practise the overall model comprises of many automorphism equivariant layers mapping between different order permutation representation spaces. The final layer maps to an order-0 permutation representation space, i.e. a graph level representation space, which can be pooled to the output graph representation. substructures (Bouritsas et al., 2020) or lifting graphs into simplicial- (Bodnar et al., 2021b) or cell complexes (Bodnar et al., 2021a). These methods require a pre-computation stage, which in the worst-case finding substructures of size k in a graph of n nodes is O(n k). Finally sub-graphs/local equivariance of type (3) have been considered to find more expressive GNNs. Local graph equivariance requires a (linear) map that satisfies an automorphism equivariance constraint. This is due to the nature of graphs having different local symmetries on different nodes/edges. This has been considered by de Haan et al. (2020) though imposing an isomorphism/automorphism constraint on edge neighbourhoods and by Thiede et al. (2021) by selecting specific automorphism groups and lifting the graph to these. Although the choice of automorphism group chosen by de Haan et al. (2020) leads to little weight sharing and requires the automorphism constraint to be parameterized, while that proposed by Thiede et al. (2021) does not guarantee to capture the entire graph and requires a hard-coded choice of automorphism group. Operating on sub-graphs has been considered as a means to improve GNNs by dropping nodes (Papp et al., 2021), dropping edges (Rong et al., 2019), utilising sub-graphs of size k (Cotta et al., 2021), utilising ego-network graphs (Zhao et al., 2021), and considering the symmetry of a bag of sub-graphs (Bevilacqua et al., 2021). ## 3 Subgraph Permutation Equivariant Networks (Spen) In this section, we introduce the SPEN framework. It consists of (1) Subgraph selection, (2) Natural permutation equivariant graph neural network. This section presents the core concepts of the model which contribute to the improved expressivity. In addition, we present a more general overview of the architectural details of the SPEN framework in Appendix A.5. ## 3.1 Definitions In this work we consider graphs as concrete graphs and utilise sub-concrete graphs in our framework. The sub-graphs are extracted as k-ego network sub-graphs. Further, we use a base GNN model that is equivariant to permutations of nodes in the sub-graphs. We also place an automorphism constraint over the linear map that processes bags of sub-graphs. Further definitions are provided in Appendix A.1, with the key definitions required to understand our framework presented here. Definition 3.1. A *Concrete Graph* G is a finite set of nodes V(G) ⊂ N and a set of edges E(G) ⊂ V(G)×V(G). The set of node ids may be non-contiguous and we make use of this here as we extract overlapping sub-graphs and perform the graph update function on this bag of sub-graphs. The natural numbers of the nodes are essential for representing the graphs in a computer, but hold no actual information about the underlying graph. Therefore, the same underlying graph can be given in may forms by a permutation of the ordering of the natural numbers of the nodes. Throughout the paper we refer to concrete graphs as graphs to minimise notation. Definition 3.2. In tensor format the values of G are encoded in a tensor A ∈ R |V(G)*|×|V*(G)|×d. The node features are encoded along the diagonal and edge features encoded in off-diagonal positions. Definition 3.3. A *sub-Concrete Graph* H is created by taking a node i ∈ V(G), and extracting the nodes j ∈ V(G) and edges (i, j) ⊂ V(G) × V(G), according to some sub-graph selection policy. In this work we consider the sub-graph selection policy as a k-ego-network policy. For brevity we refer to sub-concrete graphs as sub-graphs throughout the paper. Definition 3.4. A k*-Ego Network* of a node is its k-hop neighbourhood with induced connectivity. Definition 3.5. A *Graph isomorphism*, ϕ : G → G′is a bijection between the vertex sets of two graphs G and G′, such that two vertices u and v are adjacent in G if and only if ϕ(u) and ϕ(v) are adjacent in G′. This mapping is edge preserving, i.e. satisfies for all (i, j) ∈ V(G) × V(G): (i, j) ∈ E(G) ⇐⇒ (ϕ(i), ϕ(j)) ∈ E(G ′). An isomorphism from the graph to itself is known as an automorphism. Definition 3.6. A *group representation* ρ of the group G is a homomorphism ρ : G → GL(V ) of G to the group of automorphisms of V (Fulton & Harris, 2013). A group representation associates to each g ∈ G an invertible matrix ρ(g) ∈ R n×n. This can be understood as specifying how the group acts as a transformation on the input. Definition 3.7. A *feature space* is a vector space V with a group representation rho acting on it. The choice of group representations on the input and output vector spaces of a linear map constrains the possible forms the linear map can take. Definition 3.8. A *tensor representation* can be built up from some base group representations ρ(g) through the tensor operations dual (∗), direct sum (⊕), and tensor product (⊗). This allows for tensor representations to be constructed that are of increasing size and complexity. Definition 3.9. A *kernel constraint* is taken to mean a restriction of the space a kernel or linear map can take between two vector spaces. The symmetric subspace of the representation is the space of solutions to the constraint ∀g ∈ G : ρ(g)v = v, which provides the space of permissible kernels. In this paper we are interested in the symmetries of the symmetric group Sn. This constraint can be solved for different order tensor representations (Maron et al., 2018; Finzi et al., 2021). We present the space of linear layers mapping from k-order representations to k ′-order representations in Figure 3. ## 3.2 Sub-Graph Selection Policy Sub-graphs can be extracted from a graph in a number of ways, by removing nodes, by removing edges, extracting connectivity graphs at nodes, or extracting connectivity graphs at edges to name a few. In this work we focus on k-ego network sub-graphs. These are sub-graphs extracted by considering the k-hop connectivity of the graph at a selected node and extracting the induced connectivity. The sub-graph selection policy of k-ego networks therefore extracts a sub-graph for each node in the original graph. ![4_image_0.png](4_image_0.png) Figure 3: Bases for mappings to and from different order permutation representations, where ρk is a k-order representation. Each color in a basis indicates a different parameter. ρ0 → ρ0 is a mapping from a 0-order representation to a 0-order representation, i.e. a graph level label to graph level label, and has 1 learnable parameter. ρ1 → ρ1 is a mapping from a 1-order representation to a 1-order representation, i.e. a node level label to node level label, and has 2 learnable parameters, one mapping node features to themselves and the other mapping node features to other nodes. Further, there are mappings between different order representation spaces and higher order representation spaces. In this work we process graphs as bags of sub-graphs. In general the size of the sub-graphs, m, extracted for a graph are not all of the same size, and thus m varies from sub-graph to sub-graph. We therefore go further than representing each graph as a bag of sub-graphs and represent each graph as a bag of bags of sub-graphs, where each bag of sub-graphs hold sub-graphs of the same size m. The graph is therefore represented as the bag of bags of sub-graphs SG = *{{{{*Hi1 , ..., Hia}}*, ...,* {{Hk 1 , ..., Hk c *}}}}*, for sub-graphs H, with bags of sub-graphs which are each of size *a, ..., c* containing sub-graphs of sizes *i, .., k* respectively. ## 3.3 Natural Permutation Equivariant Graph Network Architecture The input data represented as a bag of bags of sub-graphs has a symmetry group of both the individual sub-graphs and of the bags of sub-graphs. We construct a graph neural network that is equivariant to this symmetry. This can be broken down into three parts: (1) the sub-graphs, (2) the bag of bags of sub-graphs, and (3) the bags of sub-graphs. ## 3.3.1 Sub-Graphs Each sub-graph has a symmetry group that is given by permutation of the order of nodes in the graph. This group is denoted Sn for a graph of n nodes. The group Sn acts on on the graph via (σ · A)ij = Aσ−1(i)σ−1(j). Sub-graphs, H, therefore have a symmetry group Sm ≤ Sn and we are interested in constructing graph neural network layers equivariant to this symmetry group. The graph is an order-2 tensor and the action of the permutation group can be generalised to differing order tensors. For example, the set of nodes in a graph is an order-1 tensor. For the case of a linear mapping from order-2 permutation representations to order-2 permutation representations, the basis space was shown to comprise of 15 elements by Maron et al. (2018). Similarly, the constraint imposed by equivariance to the group of permutations can be solved for different order representation spaces and we provide an example of all mappings between representation spaces from order 0-2 in Figure 3. We are not restricted to selecting a single input-output order permutation representation space and can construct permutation equivariant linear maps between multiple representations separately through the direct sum ⊕. For example the direct sum of order 1 and 2 representations is given by ρ1 ⊕ ρ2. Due to the construction of a sub-graph the sub-graphs inherit node ids from the original graph. Therefore, a permutation of the order of the nodes in the original graph corresponds to an equivalent permutation of the ordering of the nodes in the sub-graphs. In addition, as the permutation action on the graph does not change the underlying connectivity, the sub-graphs exacted are individually unchanged up-to some isomorphism. Therefore, a permutation of the graph only permutes the ordering at which the sub-graphs are extracted. ## 3.3.2 The Bag Of Bags Of Sub-Graphs We have defined the group action on sub-graphs as that of the group Sm and associated to the feature space of a sub-graph a vector space constrained by the representation ρm. Given that graphs do not in general have the same connectivity for each node the sub-graphs extracted differ in size. Therefore, we have different feature spaces for different sub-graph sizes, i.e. ρ i m(Hi) ̸= ρ j m(Hj). A linear layer acting on sub-graphs can therefore operate differently on different sub-graphs. A linear layer mapping from feature space ρm to feature space ρ ′ m can thus have for each sub-graph H a (linear) map KH : ρm(H) → ρ ′ m(H). However, given two isomorphic sub-graphs H and H′ are the same graph up-to some bijective mapping, we want KH and KH′ to process the feature spaces in an equivalent manner. This condition is called naturality (de Haan et al., 2020) and states that a linear map KH : ρm(H) → ρ ′ m(H) for every sub-graph isomorphism ϕ : H → H′ must satisfy the following condition: $$\rho^{\prime}(\phi)\circ K_{H}=K_{H^{\prime}}\circ\rho(\phi).$$ $$(1)$$ ′(ϕ) ◦ KH = KH′ ◦ ρ(ϕ). (1) This constraint (Equation 1) says that if we first transition from the input feature space ρ(H) to the equivalent input feature space ρ ′(H) via an isomorphism transformation ρ(ϕ) and then apply KH′ we get the same thing as first applying KH and then transitioning from the output feature space ρ ′(H) to ρ ′(H′) via the isomorphism transformation ρ ′(ϕ). Since ρ(ϕ) is invertible, if we choose KH for some H then we have determined KH′ for any isomorphic H′ by KH′ = ρ ′(ϕ) ◦ KH ◦ ρ(ϕ) −1. For any automorphism ϕ : H → H, we get an equivariance constraint ρ ′(ϕ) ◦ KH = KH ◦ ρ(ϕ). Thus, a layer in the model must have for each isomorphism class a map KH that is equivariant to automorphisms. Our sub-graph selection policy extracts a bag of bags of sub-graphs, SG = *{{{{*S i 1 , ..., Sia}}*, ...,* {{S k 1 , ..., Sk c *}}}}*, with bags of sub-graphs containing sub-graphs of the same size. Therefore, each bag of sub-graphs forms an isomorphism class and we are required to have a map KH for each bag of sub-graphs. We therefore have a choice of automorphism group with which to constrain the linear map to be equivariant to provided by our sub-graph selection policy. ## 3.3.3 Bags Of Sub-Graphs The order of sub-graphs in each bag of sub-graphs is arbitrary and changes if the input graph is permuted. It would therefore be undesirable for the output prediction to be dependent upon this arbitrary ordering. This is overcome in the choice of aggregation function used to share information between sub-graphs. At the end of a linear layer in our model each node and edge in the graph can be represented multiple times, i.e. it occurs in multiple sub-graphs. We therefore average these features across sub-graphs and in doing this ensure the output is invariant to the ordering of sub-graphs in each bag. ## 3.4 Related Work We have largely discussed the related methods to our work in Section 2. Despite this, we provide a more extensive explanation of some other methods in Appendix A.3 and demonstrate how some of these methods can be implemented within our framework in Appendix A.4. ## 4 Analysis Of Expressivity And Scalability In this section we study both the expressive power of our architecture by its ability to provably separate non-isomorphic graphs and the scalability by its ability to process larger graphs that its predecessor. ## 4.1 Wl Test And Expressive Power The Weisfeiler-Lehman (WL) test (Weisfeiler & Leman, 1968) is a graph isomorphism test commonly used as a measure of expressivity in GNNs. This is due to the similarity between the iterative color refinement of the WL test and the message passing layers of a GNN. The WL test is a necessary but insufficient condition, which is not able to distinguish between all non-isomorphic graphs. The WL test was extended to the k-WL test, which provides increasingly more powerful tests that operate on k-tuples of nodes. WL analogue for sub-graphs. One component of our model is the idea of operating on sub-graphs rather than the entire graph, more specifically our architecture operates on ego-network sub-graphs. We therefore seek to formalise our intuition that operating on sub-graphs will improve the expressive power of the base model. We present a color-refinement variant of the WL isomorphism test that operates on a bag of sub-graphs. Definition 4.1. The sub-graph-WL test utilises a color refinement of c t+1 v,S = HASH(c t v,S, N t v,S, Ctv ) where HASH(·) is an injective function, N t v,S is the node neighbourhood of v within the ego-network sub-graph S, and C t v is the multiset of v's colors across sub-graphs. Theorem 4.2. *Sub-graph-WL is strictly more powerful than 1&2-WL.* In Appendix A.2 we prove Theorem 4.2. This yields the result that even for a simple 1-WL expressive function in the GNN, such as message passing, the model is immediately more expressive than 1&2-WL. Comparing SPEN to the WL test. We have already shown that when considering a graph update function that operates on a bag of k-ego network sub-graphs, even if the update function itself has limited expressivity, it is more expressive than 1&2-WL. SPEN utilises a natural permutation equivariant update function through operating on a bag of bags of sub-graphs. The naturality constraint of our model states that each automorphism group of sub-graphs should be processed by a different (linear) map. In addition, we utilise higher-dimensional GNNs. Both of these choices are expected to increase the expressive power of our model. Proposition 4.3. For two non-isomorphic graphs G1 and G2sub-graph-WL can successfully distinguish them if (1) they can be distinguished as non-isomorphic from the multisets of sub-graphs and (2) HASH(·) is discriminative enough that HASH(c t v,S1 , N t v,S1 , Ctv ) ̸= HASH(c t v,S2 , N t v,S2 , Ctv ) . This implies that despite the sub-graph policy increasing the expressive power of the model, it is still limited by the ability of the equivalent to the HASH(·) function's ability to discriminate between the bags of sub-graphs. The naturality constraint of our model processing each automorphism group with a different higher-dimensional GNN is therefore expected to increase the expressive power of our model over sub-graph methods utilising a MPNN. Theorem 4.4. *SPEN is strictly more powerful than sub-graph MPNN.* We empirically prove Theorem 4.4 similarly to Bouritsas et al. (2020); de Haan et al. (2020). We use a neural network with random weights on a graph and compute a graph embedding. We say the neural network finds two graphs to be different if the graph embeddings differ by an ℓ2 norm of more than ϵ = 10−3 of the mean ℓ2 norms of the embeddings of the graphs in the set. The network is said to be most expressive if it only finds non-isomorphic graphs to be different. We test this by considering a set of 100 random non-isomorphic non-regular graphs, a set of 100 non-isomorphic graphs, a set of 15 non-isomorphic strongly regular graphs,1 and a set of 100 isomorphic graphs. Table 1 shows that a simple invariant message passing (GCN) as well as a simple invariant message passing model operating on sub-graphs (SGCN) are unable to distinguish between regular and strongly regular graphs. Further, it is shown that PPGN (Maron et al., 2019) can distinguish regular graphs but not strongly regular graphs, although a variant of PPGN that uses high order tensors should also be able to distinguish strongly regular graphs. On the other hand, our SPEN model is able to distinguish strongly regular graphs. Therefore, our model is able to distinguish non-isomorphic graphs that a sub-graph MPNN cannot and is strictly more powerful. 1See http://users.cecs.anu.edu.au/~bdm/data/graphs.html. | Model | Random | Regular | Str. Regular | Isom. | |---------|----------|-----------|----------------|---------| | GCN | 1 | 0 | 0 | 0 | | SGCN | 1 | 0 | 0 | 0 | | PPGN | 1 | 0.97 | 0 | 0 | | SPEN | 1 | 0.98 | 0.97 | 0 | Table 1: Rate of pairs of graphs in the set of graphs found to be dissimilar in expressiveness experiment. An ideal method only find isomorphic graphs dissimilar. A score of 1 implies the model can find all graphs dissimilar, while 0 implies the model finds no graphs dissimilar. ## 4.2 Scalability Global permutation equivariant models of the form found by Maron et al. (2018) operate over the entire graph. They therefore scale with O(n 2), for graphs with n nodes. Our method operates on k-ego network sub-graphs where a sub-graph is produced for each node in the original graph. Our method therefore scales with O(nm2), where m is the number of nodes in the k-ego network sub-graph. It is therefore clear that if n = m, theoretically, our method scales more poorly than global permutation equivariant models, although this would imply the graph is fully-connected and every sub-graph is identical. In this situation extracting sub-graphs is irrelevant and only 1 sub-graph is required (the entire graph) and hence if n = m our method scales with that of global permutation equivariant models. The more interesting situation, which forms the majority of graphs, is when n ̸= m. When m ≪ n our method scales more closely with methods that scale linearly with the size of the graph and it is for this type of data that our method offers a significant improvement in scalability over global permutation equivariant models. We empirically show how SPEN and global permutation equivariant methods scale depending on the size of n and m by analysing the GPU memory usage of both models across a range of random regular graphs. We utilise random regular graphs for the scalability test as it allows for precise control over the size of the overall graph and sub-graphs. We compare the GPU memory usage of both models across a range of graph sizes with a sub-graph size of m = 3, 6,and 9. Through analysing the graphs in the TUDataset, which we make use of when experimenting on graph benchmarks, we note that the average sub-graph sizes range between 3 and 10 (see Table 3), justifying the choice of sub-graphs in the scalability tests. Figure 4 shows that the Global Permutation Equivariant Network (GPEN) cannot scale beyond graphs with 500 nodes. On the other hand, our method (SPEN) scales to larger graphs of over an order of magnitude larger. In the situation where m = 3 GPEN can process graphs of size up to 500 nodes, while our SPEN can process graphs of size up to 10,000 nodes using less GPU memory. ![7_image_0.png](7_image_0.png) Figure 4: Computational cost of global permutation equivariant model (GPEN) and our (SPEN) model with a very similar number of model parameters for varying average size graphs. For this test we constructed random regular graphs of varying size using the NetworkX package (Hagberg et al., 2008). For SPEN sub-graphs were constructed using a 1-hop ego network policy. As is demonstrated by the log-axis, SPEN can process graphs an order of magnitude larger than global methods. Table 2: Comparison between our SPEN model and other deep learning methods. Larger mean results are better with the standard deviation around the mean given as a plus or minus. Dataset MUTAG PTC PROTEINS NCI1 NCI109 IMDB-B IMDB-M | | | | ±3.3 ±3.3 54.3 77.8 | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | | | NA NA NA NA | ±2.0 ±3.1 48.7 ±1.2 75.2 ±1.5 83.4 ±3.2 83.7 ±9.7 74.8 ±6.5 71.25 93.3 | | | | | 78.1 | | | | | ±0.9 ±0.9 47.8 ±0.5 NA 70.0 ±0.9 74.4 ±2.5 75.5 | | | | | size 188 344 1113 4110 4127 1000 1500 classes 2 2 2 2 2 2 3 avg node # 17.9 25.5 39.1 29.8 29.6 19.7 13 Results GDCNN (Zhang et al., 2018) 85.8 ±1.7 58.6 ±2.8 ±2.3 45.2 ±1.7 NA 71 ±2.5 76.3 ±5.7 75 ±4.4 62.3 PSCN (Niepert et al., 2016) 89.0 ±1.4 ±1.4 33.5 ±1.0 NA 49.1 ±1.6 56.6 DCNN (Atwood & Towsley, 2016) NA NA 61.3 | ±0.5 ±0.6 44.5 ±0.3 67.0 ±0.5 80.3 ±0.5 80.3 ±2.6 75.7 | | | | | | ECC (Simonovsky & Komodakis, 2017) 76.1 NA NA 76.8 75.0 NA NA DGK (Yanardag & Vishwanathan, 2015) 87.4 ±2.7 60.1 | ±3.4 ±5.5 48.7 ±1.5 72.0 ±2.7 72.8 ±5.5 74.3 ±6.9 76.6 ±13.0 58.5 IGN (Maron et al., 2018) 83.9 ±2.8 ±5.1 52.3 ±1.7 NA 75.1 ±2.8 82.7 ±7.0 76.2 ±5.6 64.6 GIN (Xu et al., 2019) 89.4 | ±7.9 ±4.3 44.7 ±1.3 72.2 ±2.1 81.8 ±5.0 81.2 ±7.5 76.4 ±7.4 64.7 PPGN v2 (Maron et al., 2019) 88.9 ±3.6 ±5.8 50.5 ±1.4 73 ±1.9 82.2 ±5.6 81.0 ±7.0 76.7 ±8.1 62.9 PPGN v3 (Maron et al., 2019) 89.4 ±1.5 ±2.0 51.3 ±1.9 74.8 ±1.4 83.0 ±1.0 82.7 ±1.8 71.7 ±1.6 66.8 LNGN (GCN) (de Haan et al., 2020) 89.4 | ±3.6 ±2.0 52.6 ±2.0 NA 76.8 ±5.0 83.5 ±5.7 74.6 ±7.5 67.4 GSN-v (Bouritsas et al., 2020) 92.2 ±3.0 ±3.2 52.5 ±2.2 NA 75.6 ±3.4 82.8 SIN (Bodnar et al., 2021b) NA NA 76.5 ±3.1 ±3.7 52.7 ±1.6 75.6 ±1.4 84.0 ±4.3 83.6 ±5.6 77.0 ±6.1 68.2 CIN (Bodnar et al., 2021a) 92.7 ±2.8 ±3.6 53.1 ±1.6 76.3 ±1.1 82.5 ±4.6 83.5 ±6.1 76.6 ±4.9 68.0 DSS-GNN (GC) (EGO) (Bevilacqua et al., 2021) 91.5 | | | | | ±3.2 ±4.9 50 ±1.9 72.6 ±1.1 81.8 ±4.7 83.2 ±6.5 77.2 | | | | | 1-2-3 GNN (Morris et al., 2019b) 86.1 60.9 75.5 76.2 NA 74.2 49.5 PPGN v1 (Maron et al., 2019) 90.5 ±8.7 66.2 | Best Rank 1st 1st 15th 1st 2nd 6th 10th | | | | | ±3.4 NA NA±4.1 75.5 ±7.0 NA 76.3 ±7.2 70.6 CCN (Kondor et al., 2018) 91.6 | ±2.3 NA±5.0 83.5 ±7.2 76.6 ±7.5 68.2 GSN-e (Bouritsas et al., 2020) 90.6 | | | | | DiffPool (Ying et al., 2018) NA NA | SPEN | | ## 5 Experiments 5.1 Graph Benchmarks We perform experiments with our method to back up the theoretical analysis and answer three main questions: (1) Does our method out perform the the base graph neural network used in terms of validation accuracy? (2) Does our approach achieve better performance on real graph benchmarks than current state-of-the-art methods in terms of validation accuracy? (3) Does the method scale better than the base graph neural network used in real benchmark tasks? Datasets. We tested our method on a series of 7 different real-world graph classification problems from the TUDatasets benchmark of (Yanardag & Vishwanathan, 2015). Five of these datasets originate from bioinformatics, while the other two come from social networks. It is noteworthy to point out some interesting features of each dataset. We note that both MUTAG and PTC are very small datasets, with MUTAG only having 18 graphs in the test set when using a 10% testing split. Further, the Proteins dataset has the largest graphs with an average number of nodes in each graph of 39. Also, NCI1 and NCI109 are the largest datasets having over 4000 graphs each, which one would expect to lead to less spurious results. Finally, IMDB-B and IMDB-M generally have smaller graphs, with IMDB-M only having an average number of 13 nodes in each graph. The small size of graphs coupled with having 3 classes appears to make IMBD-M a challenging problem. Methods in comparison. We compare to a wide range of alternative methods, including sub-graph based methods, higher-dimensional GNN methods, and automorphism equivariant methods. We focus specifically on IGN (Maron et al., 2018) as this method uses an order-2 permutation equivariant tensor representation space for the linear map and is therefore the most similar to our base GNN model. Bevilacqua et al. (2021) test their DSS-GNN on multiple different sub-graph policies and here we compare to the method utilising k-hop ego networks as this is the most similar variant to our method. Implementation and experimental details. We utilise a 1-hop ego network sub-graph policy for all of the experiments. Further, we use a base GNN model that maps between ρ1 ⊕ ρ2 permutation equivariant tensor representation space, with the final layer mapping to a ρ0 permutation equivariant tensor representation space. We constrain out model to be equivariant to the automorphism groups of the bags of sub-graphs. For MUTAG, PTC, NCI1, and NCI109 we directly constrain the model to the automorphism groups of the bags of sub-graphs. For PROTEINS, IMDB-B, and IMDB-M there exists some bags of sub-graphs which comprise of a single sub-graph. As this would lead to no weight sharing between these sub-graphs and and any other sub-graphs we parameterize the automorphism constraint to bunch bags of sub-graphs which contain few sub-graphs. Further implementation and experimental details can be found in Appendix A.6. Table 2 compares our LPEGN model to a range of other methods on benchmark graph classification tasks from TUDatasets (Morris et al., 2020). Comparing out method (SPEN) to a higher-dimensional global permutation equivariant (IGN) demonstrates that our method outperforms the base GNN model on all but one dataset. The one dataset that our model did not outperform the base GNN model was the PROTEINS dataset, which is one of the datasets we were required to parameterize the automorphism constraint and this result may suggest our choice of parameterization for this dataset was not optimal. Table 2 also highlights that our method achieves a new state-of-the-art result on the MUTAG, PTC, and NCI1 dataset, and is the second strongest on NCI109. Further, our method performs competitively across all datasets. We note that SPEN achieves a poor ranking score on the Proteins datasets, although the classification accuracy of the model is competitive with leading results and only falls slightly short of the bulk of other methods. Finally, on all datasets where we did not parameterize the automorphism constraint we achieve either a new state-of-the-art or a very close second. This suggests our framework is more expressive while maintaining good flexibility to learn on graph classification tasks. The model does not achieve a new state-of-the-art on datasets where we parameterized the automorphism constraint, although does perform competitively, suggesting our choice of parameterization could be sub-optimal and a better one may exist. Figure 5 demonstrates that the improved scalability of our methods on regular graphs carries over onto graphs on real-world benchmarks. This demonstrates that our method offers a significant improvement in scalability over global permutation equivariant models. ![10_image_0.png](10_image_0.png) Figure 5: Computational cost of a global permutation equivariant model (GPEN) and our method (SPEN) with a very similar number of model parameters and batch size for datasets with varying average size graphs from the TUDatasets. For SPEN sub-graphs were constructed using a 1-hop ego network policy. ## 6 Future Work From Table 2 it is clear that IMDB-M is a dataset for which our method has weaker performance. As stated in Section A.6.2 between hidden layers in our network, for the experiments in this paper, we only make use of order 1 and 2 representations. As it was shown by Maron et al. (2019) that increasing the order of the permutation representation increases the expressivity in line with the k-WL test, the expressivity of our method could be improved through the consideration of higher order permutation representations. Further, we demonstrate state-of-the-art results when we do not parameterize the automorphism constraint and exploring alternative parameterizations of this constraint could lead to improved results. ## 7 Conclusion We present a graph neural network framework for building models that operate on k-ego network sub-graphs that respects both the permutation symmetries of individual sub-graphs and is equivariant to the automorphism groups across bags of sub-graphs. The choice of sub-graph policy leads to a novel choice of automorphism groups for the bags of sub-graphs. The framework is more scalable than global higher-dimensional GNNs through the use of sub-graphs and we have both theoretically and experimentally demonstrated this. We have shown that SPEN is provably more expressive than the base higher-dimensional permutation equivariant GNN and sub-graph MPNNs through the choice sub-graph selection policy, permutation equivariant base GNN, and automorphism equivariant kernel constraint. We have provided theoretical analysis demonstrating the expressivity of the framework. Finally, we have shown that SPEN performs competitively across multiple graph classification benchmarks, achieving state-of-the-art on numerous datasets. We believe that our framework is a step forward in the development of graph neural networks, demonstrating theoretically provable expressivity, scalability, and experimentally achieving state-of-the-art on benchmark datasets. ## References Marjan Albooyeh, Daniele Bertolini, and Siamak Ravanbakhsh. Incidence networks for geometric deep learning. *arXiv preprint arXiv:1905.11460*, 2019. James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in neural information processing systems, pp. 1993–2001, 2016. Waiss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id= lxHgXYN4bwl. Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. *arXiv* preprint arXiv:2110.02910, 2021. Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Liò, Guido Montúfar, and Michael Bronstein. Weisfeiler and Lehman go cellular: CW networks. *arXiv preprint arXiv:2106.12575*, 2021a. Cristian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montúfar, Pietro Lio, and Michael Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. *arXiv preprint* arXiv:2103.03212, 2021b. Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. *arXiv preprint arXiv:2006.09252*, 2020. Leonardo Cotta, Christopher Morris, and Bruno Ribeiro. Reconstruction for powerful graph representations. Advances in Neural Information Processing Systems, 34, 2021. Pim de Haan, Taco S Cohen, and Max Welling. Natural graph networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 3636–3646. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/ file/2517756c5a9be6ac007fe9bb7fb92611-Paper.pdf. Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. *arXiv preprint arXiv:2104.09459*, 2021. William Fulton and Joe Harris. *Representation theory: a first course*, volume 129. Springer Science & Business Media, 2013. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. In *Proceedings of the 34th International Conference on Machine LearningVolume 70*, pp. 1263–1272, 2017. Aric Hagberg, Pieter Swart, and Daniel S Chult. Exploring network structure, dynamics, and function using networkx. Technical report, Los Alamos National Lab.(LANL), Los Alamos, NM (United States), 2008. Jason Hartford, Devon Graham, Kevin Leyton-Brown, and Siamak Ravanbakhsh. Deep models of interactions across sets. In *International Conference on Machine Learning*, pp. 1909–1918. PMLR, 2018. Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. *Advances in* Neural Information Processing Systems, 32:7092–7101, 2019. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv* preprint arXiv:1609.02907, 2016. Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In *Proceedings of the International Conference on Machine Learning*, pp. 2747–2755, 2018. Risi Kondor, Hy Truong Son, Horace Pan, Brandon Anderson, and Shubhendu Trivedi. Covariant compositional networks for learning graphs. *arXiv preprint arXiv:1801.02144*, 2018. Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In *International Conference on Learning Representations*, 2018. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In H. Wallach and H. Larochelle and A. Beygelzimer and F. d'Alché-Buc and E. Fox and R. Garnett (ed.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/file/bb04af0f7ecaee4aae62035497da1387-Paper.pdf. Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. *arXiv preprint arXiv:1904.01543*, 2019a. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019b. Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. ICML Graph Representation Learning and Beyond (GRL+) Workshop, 2020. Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In *International conference on machine learning*, pp. 2014–2023, 2016. Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. DropGNN: random dropouts increase the expressiveness of graph neural networks. *Advances in Neural Information Processing Systems*, 34, 2021. Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards deep graph convolutional networks on node classification. *arXiv preprint arXiv:1907.10903*, 2019. Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3693–3702, 2017. Erik Henning Thiede, Wenda Zhou, and Risi Kondor. Autobahn: Automorphism-based graph neural nets. arXiv preprint arXiv:2103.01710, 2021. Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. *NTI, Series*, 2(9):12–16, 1968. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id= ryGs6iA5Km. Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1365–1374, 2015. Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in neural information processing systems, pp. 4800–4810, 2018. Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. *arXiv preprint arXiv:2110.03753*, 2021. ## A Appendix A.1 Mathematical Background A.1.1 Group Theory Definition A.1. A *group* is a set G with a binary operation ◦ satisfying the following laws: (G0) (**Closure law**): For all *g, h* ∈ G, g ◦ h ∈ G. (G1) (**Associative law**): g ◦ (h ◦ k) = (g ◦ h) ◦ k for all *g, h, k* ∈ G. (G2) (**Identity law**): There exists e ∈ G such that g ◦ e = e ◦ g = g for all g ∈ G. (G3) (**Inverse law**): For all g ∈ G, there exists h ∈ G with g ◦ h = h ◦ g = e. ## A.1.2 Category Theory This section does not provide a complete overview of category theory, nor even a full introduction, but aims to provide a sufficient level of understanding to aid the reader with further sections of the paper, where we believe presenting the comparison between models from a category theory perspective makes more clear the distinctions between them. A category, C, consists of a set of objects, Ob(C), and a set of morphisms (structure-preserving mappings) or arrows, f : A → *B, A, B* ∈ Ob(C). There is a binary operation on morphisms called composition. Each object has an identity morphism. Categories can be constructed from given ones by constructing a subcategory, in which each object, morphism, and identity is from the original category, or by building upon a category, where objects, morphisms, and identities are inherited from the original category. A functor is a mapping from one category to another that preserves the categorical structure. For two categories C and D a functor F : *C → D* maps each object A ∈ Ob(C) to an object F(A) ∈ Ob(D) and maps each morphism f : A → B in C to a morphism F(f) : F(A) → F(B) in D. Definition A.2. A *groupoid* is a category in which each morphism is invertible. A groupoid where there is only one object is usually a group. ## A.2 Wl Variants And Proofs For Section 4 Definition A.3. (Vertex coloring). A vertex coloring is a function mapping a graph and one of its nodes to a "color" from a fixed color palette. Generally, a vertex coloring is a function c : V → C,(*G, v*) 7→ c G v , where V is the set of all possible tuples of the form (*G, v*) with G = (*V, E*) the set of all finite graphs and v ∈ V . Definition A.4. (Vertex color refinement). Let *c, d* be two vertex colorings. We say that d refines c when for all graphs G = (VG, EG), H = (V H, EH) and all vertices v ∈ V G, u ∈ V H we have that g G v = d H u ⇒ c G v = c H u . We write d ⊑ c. When working with a specific graph pair G1, G2, the refinement d of c is written d ⊑G1,G2 c, when, in particular, it holds that ∀v ∈ V G1, u ∈ V G2, d G1 v = d G2 u ⇒ c G1 v = c G2 u . The 1-WL test represents a graph as a multiset (or histogram) of colors associated with its nodes. This coloring induces a partitioning of the nodes into color classes, where two nodes belong to the same partition is and only if they have the same coloring. The algorithm starts from some initial coloring and iteratively updates the coloring, leading to at each step, where the algorithm does not terminate, a finer-grained node partitioning. Each of these iterations is a refinement step, since, if c indicates the coloring computed at iteration t then the subsequent coloring at iteration t + 1 is given by c t+1 ⊑G,G c t Definition A.5. (Sub-graph-1-WL (Zhao et al., 2021)). Sub-graph-1-WL generalises the 1-WL test by replacing the color refinement step c t+1 v = HASH(*Star*t(v)) with c t+1 v = HASH(Gt[Nk(v)]), ∀v ∈ V. Where G[Nk(v)] is the k-hop egonet. We start by proving that sub-graph-WL is at least as expressive as 1-WL. For this we first characterise our sub-graph-WL to make the comparison between a refinement strategy for a bag of sub-graphs and those which operate on graphs. Definition A.6. (Sub-graph-WL node refinement). For a graph G = (*V, E*) we denote SG as a bag of sub-graphs generated by taking the k-hop ego net of each node v ∈ V . The color refinement for node v at time step t ≥ 0, C t v , is given by the set of node colors across the sub-graphs, denoted as {{c t v,H}}H∈SG . Lemma A.7. b ⊑ a*. That for all graphs* G1 = (V 1, E1) and G2 = (V 2, E2) *and all nodes* v ∈ V 1, w ∈ V 2 that bv = bw ⇒ av = aw. Proof. For such a node refinement policy, inclusive of node refinement across sub-graphs, Bevilacqua et al. (2021) show that, for b the node coloring from a sub-graph-WL refinement and a the node coloring from a WL refinement, b ⊑ a. It then follows that for all graphs G1 = (V 1, E1) and G2 = (V 2, E2) and all nodes v ∈ V 1, w ∈ V 2that bv = bw ⇒ av = aw. Lemma A.8. *Sub-graph-WL is at least as powerful as sub-graph-1-WL in distinguishing non-isomorphic* graphs. Proof. We denote a colorings by the sub-graph-1-WL algorithm, b colorings on each sub-graph by the sub-graph-WL algorithm, and c coloring on each node within a sub-graph by the sub-graph-WL algorithm. We also denote S 1, S 2 as the bags of sub-graphs from G1, G2respectively. If |S 1| ̸= |S 2| then the two graphs are trivially distinguished by sub-graph-WL. In the case where |S 1| = |S 2| we seek to show that if sub-graph-1-WL identifies non-isomorphic graphs then so does sub-graph-WL. First recall that sub-graph-1-WL at time step t deems two graphs non-isomorphic if the following two are assigned two different multisets of node colors: $$\{\{a_{v}^{t}|v\in\mathcal{V}^{1}\}\}\neq\{\{a_{w}^{t}|w\in\mathcal{V}^{w}\}\},$$ while sub-graph-WL deems them non-isomorphic when the following two are assigned two different multisets of subgraph colors: $$\{\{b_{S_{k}^{1}}^{t}\}\}_{k=1}^{m}\neq\{\{b_{S_{h}^{2}}^{t}\}\}_{h=1}^{m}.$$ If it is given that sub-graph-1-WL distinguishes between two graphs at iteration T, then by Lemma A.7 b T ⊑ a T. In addition, Bevilacqua et al. (2021) prove that for such a coloring at T a sub-graph refinement policy such as sub-graph-WL is refined by the coloring generated at T + 1 on any pair of sub-graphs: ∀H1, H2 ∈ S 1 ∪ S 2c T +1 ⊑H1,H2 b T. The proof follows from the definition of the refinement step in an algorithm for a bag of sub-graphs, namely, the inclusion of a term which refines over the multiset of node colors across sub-graphs implies that if C T v = C T u then b T v = b T u . This gives that if sub-graph-1-WL can distinguish between two graphs at time step T then the sub-graph refinement policy yields distinct colors to any pair of sub-graphs. Therefore, sub-graph-WL can distinguish between two graphs that sub-graph-1-WL can and is at least as expressive. This provides the necessary detail for the proof of Theorem 4.2. To prove that sub-graph-WL is strictly more powerful than 1&2-WL we could instead prove that sub-graph-1-WL is strictly more powerful than 1&2-WL and then by Lemma A.8 the proof that sub-graph-WL is strictly more powerful than 1&2-WL is complete. In fact, Zhao et al. (2021) prove that sub-graph-1-WL is strictly more powerful than 1&2-WL by presenting a pair of non-ismorphic graphs that sub-graph-1-WL distinguishes but 1-WL cannot. Therefore, we can conclude that our sub-graph-WL is strictly more powerful than 1&2-WL. ## A.3 Previous Methods A.3.1 Global Equivariant Graph Networks Global Permutation Equivariance. Global permutation equivariant models have been considered by Hartford et al. (2018); Maron et al. (2018; 2019); Albooyeh et al. (2019), with Maron et al. (2018) demonstrating that for order-2 layers there are 15 operations that span the full basis for an permutation equivariant linear layer. These 15 basis elements are shown in Figure 3 with each basis element given by a different color in the map from representation ρ2 → ρ2. Despite these methods, when solved for the entire basis space, having expressivity as good as the k-WL test, they operate on the entire graph. Operating on the entire graph features limits the scalability of the methods. In addition to poor scalability, global permutation appears to be a strong constraint to place upon the model. In the instance where the graph is flattened and an MLP is used to update node and edge features the model would have n 4trainable parameters, where n is the number of nodes. On the other hand, a permutation equivariant update has only 15 trainable parameters and in general 15 ≪ n 4. Viewing a global permutation equivariant graph network from a category theory perspective there is one object with a collection of arrows representing the elements of the group. Here the arrows or morphisms go both from and to this same single object. The feature space is a functor which maps from a group representation to a vector space. For a global permutation equivariant model the same map is used for every graph. ![15_image_0.png](15_image_0.png) Symmetric Group Global Naturality Global natural graph networks (GNGN) consider the condition of naturality, (de Haan et al., 2020). GNGNs require that for each isomorphism class of graphs there is a map that is equivariant to automorphisms. This naturality constraint is given by the condition ρ ′(ϕ) ◦ KG = KG′ ◦ ρ(ϕ), which must hold for every graph isomorphism ϕ : G → G′ and linear map KG. While the global permutation equivariance constraint requires that all graphs be processed with the same map, global naturality allows for different, non-isomorphic, graphs to be processed by different maps and as such is a generalisation of global permutation equivariance. As is the case for global permutation equivariant models, GNGNs scale poorly as the constraint is placed over the entire graph and linear layers require global computations on the graphs. Viewing a GNGN from a category theory perspective there is a different object for each concrete graph, which form a groupoid. Then, there is a mosphism or arrow for each graph isomorphism. These can either be automorphisms, if the arrow maps to itself, or isomorphisms if the arrow maps to a different object. The feature spaces are functors which map from this graph category to the category of vector spaces. The GNG layer is a natural transformation between such functors consisting of a different map for each non-isomorphic graph. ![15_image_1.png](15_image_1.png) Groupoid of Concrete Graphs ## A.3.2 Local Equivariant Graph Networks Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models that are based on the WL test are not in general locally permutation equivariant in that they still use a message passing model with permutation invariant update function. Despite this, many of these models inject permutation equivariant information into the feature space, which improves the expressivity of the models (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021b;a). The information to be injected into the feature space is predetermined in these models by a choice of what structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity. In contrast to utilising results from the WL test covariant compositional networks (CCN) look at permutation equivariant functions, but they do not consider the entire basis space as was considered in Maron et al. (2018) and instead consider four equivariant operations (Kondor et al., 2018). This means that the permutation equivariant linear layers are not as expressive as those used in the global permutation equivariant layers. Furthermore, in a CCN the node neighbourhood and feature dimensions grow with each layer, which can be problematic for larger graphs and limits their scalability. Another local equivariant model is that of local natural graph networks (LNGN) (de Haan et al., 2020). An LNGN uses a message passing framework, but instead of using a permutation invariant aggregation function, it specifies the constraint that node features transform under isomophisms of the node neighbourhood and that a different message passing kernel is used on non-isomorphic edges. In practice this leads to little weight sharing in graphs that are quite heterogeneous and as such the layer is re-interpreted such that a message from node p to node q, kpqvp, is given by a function k(Gpq, vp) of the edge neighbourhood Gpq and feature value vp at p. Viewing a LNGN from a category theoretic perspective there is a groupoid of node neighbourhoods where morphisms are isomorphisms between node neighbourhoods and a groupoid of edge neighbourhoods where morphisms are ismorphisms between edge neighbourhoods. In addition, there is a functor mapping from edge neighbourhoods to the node neighbourhood of the start node and a functor mapping similarly but to the tail node of the edge neighbourhood. The node feature spaces are functors mapping from the category of node neighbourhoods to the category of vector spaces. Further, composition of two functors creates a mapping from edge neighbourhoods to the category of vector spaces. A LNG kernel is a natural transformation between these functors. ![16_image_0.png](16_image_0.png) Groupoid of Node Neighbourhoods ![16_image_1.png](16_image_1.png) Groupoid of Edge Neighbourhoods ## A.4 Implementing Other Models Within Our Framework In the datasets used, for graph classification benchmark tasks, the input to the model is a graph with node and edge features, this can be represented as 2 nd order permutation representation, so the input representation would be j = 2. The convolution can then map from this representation, ρj , to multiple different representation spaces, ρ0 ⊕ ρ1 *⊕ · · · ⊕* ρi. Subsequent convolutions can then map from these multiple permutation representations, ρ0 ⊕ ρ1 *⊕ · · · ⊕* ρi, to multiple different permutation representations, ρ0 ⊕ ρ1 *⊕ · · · ⊕* ρi. The choice of representations used can be made depending on a trade off between expressivity and computational cost, as lower order representation spaces have less expressivity, but also lower computational cost. Local Natural Graph Networks (LNGNs) (de Haan et al., 2020) take the input feature space and embed this into an invariant scalar feature of the edge neighbourhood graph. This is the same as using specific choice k-hop sub-graph creation and permutation representation space for the sub-graph convolution. In the case of LNGNs the choice would be k = 1 and mapping the input feature space to representation ρ0 creating a permutation invariant feature space. Then any graph neural network with invariant features can be used, in the paper the choice made is to use a GCN (Kipf & Welling, 2016), which can also be covered by our framework. Here the choice would again be to use k = 1 when creating the subgroups and using a subgraph convolution with representation spaces ρ0 → ρ0. Global Equivariant Graph Networks (EGNs) (Maron et al., 2018) use a choice of k = n, for n-node graphs when creating the sub graphs, which corresponds to not selecting a sub graph and instead operating over the entire graph. They then use the representation space ρ2 → ρ2 mapping from a graph feature space to a graph feature space. Local Permutation Equivariant Graph Networks (LPEGN) (Ours) In our paper we choose to use k = 1 throughout to keep inline with the vast majority of previous work on graph neural networks, but we use a representation space of ρ1 ⊕ ρ2 → ρ1 ⊕ ρ2 in the hidden layers of the model and we note that this was simply a choice that seemed a simple case to present as a comparison with previous work in the benchmark classification task. ## A.5 Architectural Details Of The Spen Framework The main figure outlining the model concept is provided in Figure 2. The first stage in the model is to split the input graph into a bag of sub-graphs. To do this we utilise a k-ego network policy, where for each node in the input graph we extract the neighbouring nodes and the induced connectivity of these nodes up to k-hop away from the initial node. This induced connectivity sub-graph is then extracted and becomes one of the sub-graphs within the bag. This process is repeated for each node in the input graph, creating a bag of sub-graphs. This can be seen in step (2) in Figure 2. At this point the input graph is represented as a bag of sub-graphs. The next step in the model is to split these into there corresponding automorphism groups. As these are sub-graphs with permutation symmetry the automorphism groups are defined as $$\operatorname{Aut}(H)=\{\sigma\in\mathbb{S}_{n}|A^{\sigma}=A\},$$ $$\left(2\right)$$ σ = A}, (2) where A is the adjacency matrix of H and σ is a permutation action on the sub-graph. Due to the choice of sub-graph selection policy, where the induced connectivity is extracted, we can consider missing edges in the sub-graph as zero feature edges. This is the same approach as would be taken in Maron et al. (2018) and all approaches which operated on a dense adjacency matrix of the graph. Therefore, each sub-graph automorphism group is fully determined by the size of the sub-graph, namely the number of nodes in the sub-graph. Then each sub-graph is placed in a bag of sub-graphs such that each sub-graph within the bag belongs to the same automorphism group. These two steps of finding the sub-graphs and placing into bags of sub-graphs such that each bag corresponds to a single automorphism group can be combined into one step. This is achieved by placing the sub-graphs into the correct automorphism group bag of sub-graphs as each sub-graph is extracted. This can be seen as moving to step (3) in Figure 2. Now the input graph is represented by multiple bags of sub-graphs, each bag corresponding to a different automorphism group. Next we considered step (3) in Figure 2. As our model operated on sub-graphs we use a graph neural network architecture. Here we choose to use permutation equivariant neural networks and utilise a general approach for operating on higher-order objects. A general recipe for building group equivariant neural networks was provided in Kondor & Trivedi (2018). Following this formalism, we treat any object that transforms under a group action as a function on the group. In the case of an object which transforms under a 0-order permutation this would correspond to a single node, and an example of this would be a graph after being pooled. Here, in the case of a single feature dimension, there is just a single weight and this is an uninteresting case where there is actually nothing to permute. An object which transforms under a 1-order permutation is a set and an object which transforms under a 2-order permutation is a graph. This concept can be extended to objects which transform under higher order permutations. In addition, we can map between different order permutations. Enforcing permutation equivariance within a neural network layer mapping between an input and output object which we require to transform under a permutation group action places a restriction over the weights in the model. It is possible to find bases for a mapping between | Dataset | MUTAG | PTC | PROTEINS | NCI1 | NCI109 | IMDB-B | IMDB-M | |---------------------|---------|-------|------------|--------|----------|----------|----------| | Graph Sizes | 10-28 | 2-109 | 4-620 | 3-111 | 4-111 | 12-136 | 7-89 | | Sub-graph Sizes | 2-5 | 2-5 | 1-26 | 2-5 | 1-6 | 1-135 | 1-88 | | Mean Sub-graph Size | 3.2 | 3.0 | 4.7 | 3.2 | 3.2 | 9.8 | 10.1 | Table 3: Different range of graph sizes and sub-graph sizes for each dataset considered from TUDatasets. different objects transforming under different order permutation transformations, which provides the number of permissible weights for a single feature dimension neural network. We provide an example of some of these bases functions in Figure 3. Here we use the notation ρi to represent an object which transforms under an i-order permutation transformation. Following this, we use the notation ρi → ρj to denote a mapping between an object transforming under an i-order permutation transformation and an object transforming under an j-order permutation transformation. We have used the notation ρ due to this being the common notation to use for representations in group theory. Kondor & Trivedi (2018) defined the group convolution and made the connection to Fourier analysis, where the function is decomposed into irreducible representations. These irreducible representations can be combined using the direct sum to create other group representations, for example ρi = ρa ⊕ ρb. Here we are making use of permutation representations to restrict the space of the linear update function such that we use the bases shown in Figure 3, and hence the use of ρ. Now that we have the general recipe for constructing higher order permutation equivariant graph neural networks, we consider the specifics of the linear update function used within our model. In our model we construct a graph neural network which comprises of mappings between objects which transform under different order permutation transformations. Our model uses a different set of weights to perform the linear update mapping for each automorphism group. This can be viewed as building a separate graph neural network for each automorphism group; despite this, the choice of mappings, i.e. the feature dimension and order of objects, is kept the same for each automorphism group. This different set of weights which performs the linear mapping within the graph network is symbolised in Figure 3 by showing three GNNs for automorphism groups A2, A3, A4, for this example graph. This explains the linear map (4) which produces the outputs (5) in the figure. At this point the key concepts of the model are explained, namely the splitting of the graph into sub-graphs which are stored in separate bags for each automorphism group, the core GNN update functions which comprise of higher-order permutation update functions, and the automorphism constraint placed over the model via the enforced weight sharing in the model. Following this, there is an averaging of node and edge features across the sub-graphs. This comprises the linear update function of the model and a choice of non-linearity can be used, in our experiments we used the ELU non-linearity. The entire model is composed by stacking multiple of these layers, and in the case of a graph classification task adding a pooling layer. In the notation of our framework a set or graph pooling layer is a map from an object which transforms under a 1 or 2-order permutation transformation to a 0-order permutation transformation respectively. ## A.6 Implementation Details And Datasets A.6.1 Tudatasets We present the range of graph sizes and sub-graph sizes when utilising a 1-ego network sub-graph extraction policy in Table 3. ## A.6.2 Model Architecture We consider the input graphs as an input feature space that is an order 2 representation. For each local permutation equivariant linear layer we use order 1 and 2 representations as the feature spaces. This allows for projection down from graph to node feature spaces through the basis for ρ2 → ρ1, projection up from node to graph feature spaces through the basis for ρ1 → ρ2, and mappings across the same order representations through ρ2 → ρ2 and ρ1 → ρ1. The final local permutation equivariant linear layer maps to order 0 representations through ρ2 → ρ0 and ρ1 → ρ0 for the task of graph level classification. In addition to the graph layers, we also add 3 MLP layers to the end of the model. Despite these specific choices which were made to provide a baseline of our method for comparison to existing methods the framework we present is much more general and different representation spaces can be chosen. Therefore, different permutation representation spaces, ρ1 ⊕ ρ2 *⊕ · · · ⊕* ρi, can be chosen for different layers in the model and a different k value can be chosen when creating the sub-graphs. ## A.6.3 Implementation Details For all experiments we used a 1-hop ego networks as this provides the most scalable version of our method. We trained the model for 50 epochs on all datasets using the Adam optimizer. We considered the evaluation procedure as was conducted in (Bevilacqua et al., 2021; Xu et al., 2019; Yanardag & Vishwanathan, 2015; Niepert et al., 2016). Specifically, we conducted 10-fold cross validation and reported the average and standard deviation of validation accuracies across the 10 folds. For all datasets we use 6 automorphism equivariant layers with base GNN utilising ρ1 ⊕ ρ2 representation space.
Review 1: Summary: This paper aims at boosting the expressive power of subgraph-GNNs via a new modeling of subgraphs. While the authors propose to model graph data as "bag of bags of subgraphs", 3 key properties are studied in detail, namely 1) equivariance of subgraph node representations w.r.t permutation of nodes in the original graph; 2) Feature learning of bags of subgraphs, i.e., automorphism groups formed by gathering k-ego networks of all nodes according to graph size; 3) Invariance of the node representations w.r.t subgraph orderings. Towards 1), the authors propose to use automorphism equivariant linear map for subgraph convolutions. Since permutation of nodes in the original graph results in an equivalent permutation of the ordering of nodes in the subgraphs, which are essentially unchanged up-to some isomorphism, guaranteeing subgraph-level automorphism equivariance ensures this property. Towards 2), the authors propose to learn different feature spaces for different automorphism groups by sharing weights within each automorphism group while using different weights across automorphism groups. This is motivated by the variability of local structures of nodes. Towards 3), the authors propose to average features of the same node across bags of subgraphs before pooling, which is rather straightforward. The authors empirically prove the expressivity gain of their proposed framework by synthesizing non-isomorphic graphs in different conditions and showed that their framework is able to distinguish strongly regular graphs while several other subgraph-MPNNs failed. The authors also discussed the scalability of their proposed framework, which is O(nm^2), meaning that it scales more favorably when the size of the subgraphs is smaller than the original graph, which is fairly sensible. Strengths and Weaknesses: Strength - The idea is overall interesting and well-motivated. Weakness [Main] - hard to see new things based on existing works, there are many recent works that adopt subgraph and Automorphism. It is strongly to have an in in-depth discussion with these works, also good to compare them in experiments. - hard to see improvements over existing works. The performance gain showed in Table 2 seems marginal, it seems that the framework only performs well on smaller datasets. Also, the authors should provide significant test results since the variance of the results seems quite large. reference 1. Autobahn: Automorphism-based Graph Neural Nets [cited in the submission, but not compared] 2. Equivariant Subgraph Aggregation Networks 3. Reconstruction for Powerful Graph Representations 4. Automorphic Equivalence-aware Graph Neural Network Weakness [Minor] - Lack of concrete instantiations of the proposed framework in the main article (although the authors do present some instances of subgraph-MPNNs implemented within their propose framework in the appendix). - The discussion of bases for mappings between different order permutation representations in section 3.1.1 do not seem related to the proposed framework, the authors should at least give more detailed examples. - The authors should compare with more subgraph-GNN papers w.r.t runtime memory usage as well as classification performance. E.g., Autobahn. Requested Changes: see Weakness [Main] Broader Impact Concerns: Not applicable ================================================== Review 2: Summary: This work is focused on expressiveness and scalability of graph neural networks via selecting subgraphs carefully. On expressiveness, it operates on local subgraphs containing information beyond traditional 1-WL GNNs (or MPNNs), so the proposed model is able to solve more difficult graphs isomorphism testing. On scalability, it restricts the size of ego-nets instead of directly using the whole graph, so it occupies much less memory than the whole-graph baseline. Strengths and Weaknesses: Strength: 1) The proposes model does achieve the goals of design, as more expressive than 1-WL GNNs and more scalable than PPGN. 2) Experimental performance on graph classification tasks is competitive with reasonable baselines. Weakness: 1) The details of the architecture design are not presented properly. More concretely, in Figure 2 there are three GNNs in block (4). It looks like the design of each GNN is mentioned in A.5.2, in the form of $\rho_i \rightarrow \rho_j$. Then the description of $\rho_i \rightarrow \rho_j$ is in the caption of Figure 3 without formal math definitions. As a result, it is a little difficult for me to grasp necessary details of the whole architecture. 2) The experiment setting is not strong enough. On one hand, to discuss expressive power, the TUDatasets benchmark is not difficult enough as graph isomorphism test as in [1] where I suppose 1-WL GNNs are already able to conduct the task. On the other hand, compared with the instability (standard dev.) on the benchmark in Table 2, the gap between the mean values of SPEN and baselines are not obviously enough, such as on MUTAG and PTC. 3) The implementation only includes 1-hop ego-nets, which decreases the contribution of this work, because 1-hop restricts the selected subgraphs to have a maximum diameter as 2. Reference: [1] Understanding Isomorphism Bias in Graph Data Sets. Ivanov et al. Requested Changes: Here is a list of suggestions, all of which would simply strengthen the work: 1) Regarding Weakness (1), it is necessary to formally define the whole architectures instead of a figure. If space is limited, it is also okay to move some details to the appendix. 2) Regarding Weakness (2), a popular choice is to add experiments on ZINC, which is a graph-level regression task. 3) Regarding Weakness (3), try to increase the number of hops. Please note that it may lose scalability significantly. 4) Change some names of ``GPEN`` to ``PPGN``, if I understand correctly. Broader Impact Concerns: Not applicable for this paper. ================================================== Review 3: Summary: The paper proposes equivariant networks that operate on a graph as a nested bag-of-bags of subgraphs. Subgraphs are constructed as k-hop ego networks of each node, and the resulting bag of graphs is further partitioned into bags of similar size. A separate equivariant network is applied to each bag of subgraphs of similar size, and the resulting representations from different subgraphs containing a node are aggregated. A stack of such equivariant layers is used for learning on graphs. The paper presents theoretical analyses on the distinguishing power of this approach based on the k-WL test and provides experimental evidence on a graph classification benchmark. Strengths and Weaknesses: Strengths ---- - The introduction is nicely written - Experiments showing the performance in terms of scalability are supportive Weaknesses --- My main concern about the proposed model is about the rationale for separating subgraphs of different sizes: this separation makes sense if each bag needed a different equivariant network due to having a different automorphism group. However, as I understand it, all subgraphs are using the symmetric group, which can be applied to subgraphs of different sizes. This makes the choice of architecture rather arbitrary. **Other issues:** - The results in the theory section seem to be a simple derivation of those of Zhao et al’2021. This needs to be acknowledged in the main text and the relation to that work merits more discussion. - Empirical proof: maybe a different wording such as “claim”, “observation” or “conjecture” is more suitable for theorem 4.4 given that its empirical test does not give definitive proof. - Experiments: some natural baselines (e.g., Maron et al'19) are missing. Why not report the best-performing methods on these benchmarks and then highlight the ones that are more relevant? (papers with code can be solicited for a list) - The presentation’s use of the “automorphism” group of the graph and their representations is confusing given that it seems that the model only uses the symmetric group. - Abbreviated names of other methods are sometimes used without citation making it difficult to understand the method that is being discussed or compared (e.g., sub-graph MPNN in theorem 4.1, SGCN in section 4.1, GPEN on page 8) - The notation used is sometimes not clearly explained (e.g., proposition 4.1) - There are many typos and grammatical issues (examples: “although the choice of… “, “make use of this here … “, “rho”, “the ability of the equivalent to the… “) - The mathematical background in the appendix is too brief to be useful - it should either be expanded or removed Requested Changes: I acknowledge that addressing the main concern regarding the rationale for partitioning subgraphs of similar size is not straightforward since it involves rethinking the main contribution of the paper. In particular, if one partitions the subgraph based on their automorphism group and uses equivariant networks for these automorphism groups, the method becomes very close to de Haan et al'20. If instead one decides to use the same equivariant network for the bag of subgraphs the result becomes very similar to Zhao et al'21. Other issues highlighted as weaknesses above are relatively straightforward to address. Broader Impact Concerns: Not required. ================================================== Metareview: Recommendation: Reject Comment: Dear authors, I am sorry to let you know that the initial reviews and discussion phases led to the conclusion that the paper is not ready to be published. The main issues that have been raised in the reviews and discussion and that seem to not have been sufficiently addressed are: - The presentation of the architecture led to some confusion. One reviewer asked for more details, which were given in appendix but were still not enough for the reviewer to feel confident that the architecture was properly presented. Another reviewer raised a concern that seem to have been caused by some confusion about how the partitions in subgraphs are used for the updates of each automorphism group; this suggests that the exposition of the contribution needs to be reworked and clarified. - Reviewers suggested to more thoroughly discuss related work and position the contribution w.r.t. other methods leveraging subgraph automorphisms. - Two of the reviewers mentioned that some of the experimental results are not statistically significant. This should be openly discussed in the paper. The other minor issues which were raised by the reviewers should be addressed before re-submission. ==================================================
# Bayesian Optimization On The Cartesian Product Of Weighted Graphs To Better Search Discrete Spaces With Irregular Increments Anonymous authors Paper under double-blind review ## Abstract Bayesian optimization is a powerful tool for optimizing a black-box function on a compact Euclidean space under a limited evaluation budget. However, in practice, we may want to optimize over discretization of the solution space. For example, in scientific and engineering problems the discretization of the solution space naturally occurs due to measurement precision or standardized parts. In this work, we consider the problem of optimizing a black-box function with a discretized solution space. To address this problem, prior work uses Bayesian optimization on the Cartesian product of graphs. We extend this work to weighted edges which allow us to exploit the problem structure more effectively. Our proposed method outperforms earlier methods in diverse experiments including neural architecture search benchmarks and physics-based simulations with discretized solution spaces. We also investigate the impact of adding multi-hop edges to weighted graphs, which improves performance of our method on the optimization of synthetic benchmark functions. ## 1 Introduction Consider a black-box function f : X → R with a compact search space X ⊂ R d. A black-box function has an unknown functional form (Hansen et al., 2010; Turner et al., 2020) and may require many function evaluations in order to determine its optimum. Optimizing black-box functions is thus challenging, particularly when function evaluations are expensive and evaluation budgets are limited. Previous research (Srinivas et al., 2010; Snoek et al., 2012; Turner et al., 2020) has shown that Bayesian optimization (Brochu et al., 2010; Shahriari et al., 2016; Garnett, 2023) is an effective method for optimizing such costly black-box functions. Its effectiveness has also been demonstrated in various real-world applications such as optimization of antireflective glass (Haghanifar et al., 2020), free-electron lasers (Duris et al., 2020), chemical reactions (Shields et al., 2021), and battery lifetimes (Attia et al., 2020). We consider problems where the original continuous search space must be discretized. In various problem domains, searching for a solution on X often leads to the repeated evaluation of points that are too close to each other, which is unnecessary and inefficient. This is especially pertinent in engineering problems such as building design, electronic component design, and inventory management. To illustrate, in the context of structural design, choosing structural members, fasteners, materials, connections, and components often demands decisions from a predefined set of standard choices. Likewise, in the design of neural network architectures, diverse variables, i.e., hyperparameters, such as the number of neurons in a layer, learning rate, and output channel size are defined on a discretized search space. Moreover, considering *irregular increments* such as a logarithmic or geometric sequence, discrete variables are utilized in science and engineering due to physical constraints, fabrication limitations, measurement precision, and ease of interpretability. In particular, this approach is adopted where the range of a variable is very large and the order of magnitude of a variable is more important than its exact value. Some examples include the measurement of earthquake magnitude, sound intensity, and radioactive decay. Electronic components or structural engineering components often come in series where the values follow a logarithmic or geometric sequence. The need to control various properties and trade-offs in design choices, as well as the ineffectiveness of adjusting variable values by small amounts, also contribute to the use of irregularly discretized variables. To address the aforementioned practical issue of optimizing a black-box function, we investigate various strategies for optimizing the function on a discretized search space *D ⊂ X ⊂* R d using the characteristics of an experiment and the experimenter's design choices. Inspired by the work of Oh et al. (2019), we present a Bayesian optimization method that uses the Cartesian product of weighted graphs. Unlike previous work, our method focuses on a search space of ordinal variables instead of a combinatorial space, where discrete variables with irregular increments are given. Our approach defines a *graph with weighted edges* to represent an ordinal variable. Using the weighted graph, we build a Gaussian process surrogate with diffusion kernels on the Cartesian product of the weighted graphs and maximize an acquisition function on the graph Cartesian product to select the next point. Our algorithm demonstrates improved performance compared to several baseline methods in a range of experiments, including neural network architecture search benchmarks and physics-based simulations. Additionally, we examine the impact of additional multi-hop edges in weighted graphs, and we demonstrate that adding them helps to improve the performance of our method on the optimization of synthetic benchmarks. We describe the formal problem formulation and our contributions before presenting main concepts. Problem Formulation. A discretized space *D ⊂ X* of a compact space X ⊂ R dis defined as $${\mathcal{D}}=\{x_{1}^{(1)},\ldots,x_{q_{1}}^{(1)}\}\times\cdots\times\{x_{1}^{(d)},\ldots,x_{q_{d}}^{(d)}\},$$ $$(1)$$ $$y=f(\mathbf{x})+\epsilon,$$ $\downarrow$ . So, let's say that $\frac{1}{2}$ is a "mod". }, (1) where x (k) i < x(k) jif *i < j* for k ∈ [d]. We define D as the product of finite sets of candidates for each continuous variable, where the candidates are determined by the experiment's characteristics and the experimenter's design choices; see Section 4.1. Here, we assume that the discretization is relatively coarse, and therefore, the ordinal variables cannot be considered as continuous variables. While this space design process is necessarily handcrafted, it allows us to choose a practically or physically feasible query point based on the experimenter's knowledge, without additional external treatments. Given that we are optimizing a black-box function, only a function evaluation of a given point x ∈ D is available: y = f(x) + ϵ, (2) where ϵ is observation noise. Therefore, our objective is to find the optimal solution that minimizes the black-box function: $$\mathbf{x}^{\dagger}=\arg\operatorname*{min}f(\mathbf{x}),$$ † = arg min f(x), (3) where x ∈ D. Contributions. Our work can be summarized as follows. 1. We provide an overview of the motivation for search space discretization using irregular increments. 2. We propose a Bayesian optimization strategy for a search space of ordinal variables that is defined on the Cartesian product of weighted graphs. 3. We investigate the effects of introducing additional multi-hop edges in weighted graphs. 4. We demonstrate the superior performance of our approach over existing methods in several experiments, including neural network architecture search benchmarks and physics-based simulations. We have included our implementation in the supplementary material. ## 2 Related Work In this section, we discuss prior work related to our paper. $\downarrow$ . Bayesian Optimization with Integer-Valued or Ordinal Variables. Several methods have been proposed to handle integer-valued or ordinal variables in Bayesian optimization. Garrido-Merchán & HernándezLobato (2020) analyze two simple methods for integer-valued variables and propose a method with a transformed kernel, which models integer-valued variables directly with a Gaussian process surrogate model. Oh et al. (2019) propose a combinatorial Bayesian optimization method with the Cartesian product of variable-wise graphs, using a chain graph for ordinal variables. Picheny et al. (2019) solve ordinal Bayesian optimization by preserving the ordering of points and using a latent Gaussian process model to determine distances between points. This method is slow, because it requires choosing a large number of free parameters for the Gaussian process. Gaussian Processes on Graphs. Several studies have explored the use of Gaussian processes on graphs. Kondor & Lafferty (2002) propose a diffusion kernel on graphs based on the heat equation, and Smola & Kondor (2003) present a kernel on graphs using the concept of a regularization operator. The diffusion kernel (Kondor & Lafferty, 2002) is a special case of the kernel by Smola & Kondor (2003). Borovitskiy et al. (2021) introduce a Matérn Gaussian process model on graphs, which has several interesting properties including the convergence of a graph Matérn kernel on the Euclidean and Riemannian manifolds. Zhi et al. (2020) propose a spectral kernel learning method for Gaussian processes on graphs, which is capable of learning a spectral kernel on a discrete space. Blanco-Mulero et al. (2021) use a neighborhood kernel on graphs to learn a transition function over time for a dynamic graph structure by measuring the interaction changes between vertices. Bayesian Optimization with Prior Knowledge. By incorporating prior knowledge on the location of solution or information on global optimum, diverse approaches have been proposed. Using the previous similar tasks, approaches to starting an optimization round with better initialization are studied (Poloczek et al., 2016; Lindauer & Hutter, 2018). Moreover, Feurer et al. (2015) propose a method to initialize hyperparameter optimization via meta-learning. Similar to the work (Feurer et al., 2015), an approach to learn meta-features to initialize Bayesian hyperparameter optimization has been suggested (Kim et al., 2017). Also, Perrone et al. (2019) explore a method with the design of data-driven search spaces via transfer learning utilizing historical data. In addition, Ramachandran et al. (2020) investigate the use of priors to warp a search space expanding the space with the prior information, and Souza et al. (2021) propose a method to directly adjust balance between priors and models using the prior information that guides which locations yield better performance. Compared to this line of research, our problem formulation does not employ the prior information on solution locations or global optima, and we consider the measurement precision and standardized parts, which make our problem discrete. As will be discussed in Section 6, we assume that points that are not included in D cannot be evaluated because they are practically infeasible. ## 3 Background In order to solve the optimization problem involving discretized continuous variables, introduced in Section 1, we make use of a Bayesian optimization technique. Therefore, we begin by providing a brief description of the Bayesian optimization procedure. ## 3.1 Bayesian Optimization When optimizing a black-box function using Bayesian optimization, the next point to evaluate is determined sequentially by constructing a surrogate model and maximizing an acquisition function. Because the true function is not explicitly known, a surrogate model is built at each step using the points that have already been evaluated and their corresponding function evaluations. To balance exploration and exploitation in the search space, the surrogate model must provide both a function value estimate and its uncertainty estimate. This can be achieved using a probabilistic regression model, with a Gaussian process (Rasmussen & Williams, 2006) being a popular choice in Bayesian optimization. Once the surrogate model has been constructed, the next query point is selected by maximizing an acquisition function that is based on both the function value and uncertainty estimates produced by the surrogate. These steps are repeated until an evaluation budget is exhausted. For more details, see the work by Brochu et al. (2010); Shahriari et al. (2016); Garnett (2023). Solving the problem on D with Bayesian optimization is challenging due to several reasons: 1. A surrogate model on D should be defined in a distinct manner from a surrogate model for X . 2. Off-the-shelf optimization techniques, such as L-BFGS-B (Liu & Nocedal, 1989) and CMAES (Hansen & Ostermeier, 1997), are not suitable for optimizing an acquisition function on D. 3. If we relax the problem from D to X , it is not trivial to transform back to D. 4. Unlike a combinatorial space with only discrete variables, D includes ordinal and numerical information that must be considered in the optimization process. ## 3.2 Earlier Methods Simple Transformation. The most basic approach for dealing with integer-valued or ordinal variables in the optimization problem on D involves solving the problem on a continuous search space X , and then transforming the resulting query point x ‡ ∈ X to a point in D. Specifically, the closest point in D to x ‡is selected: $${\hat{\mathbf{x}}}^{\dagger}={\underset{\mathbf{x}\in{\mathcal{D}}}{\operatorname{arg\,min}}}\,\|\mathbf{x}-\mathbf{x}^{\dagger}\|.$$ ‡∥. (4) Or, to efficiently find the closest point, choose coordinate-wise: $$\left({4}\right)$$ $${\hat{x}}_{i}=\operatorname*{arg\,min}_{x\in\{x_{1}^{(i)},\ldots,x_{q_{i}}^{(i)}\}}|x-x_{i}|,$$ $$\left(5\right)$$ |x − xi|, (5) for i ∈ [d] where xˆ ‡ = [ˆx1*, . . . ,* xˆd] and x ‡ = [x1*, . . . , x*d]. Henceforth, (4) or (5) is expressed as ⌈x ‡⌋. However, since this method evaluates a different point than the one chosen by Bayesian optimization, it can be considered as not adhering to the Bayesian optimization policy. Continuous Variables Keeping. This method is similar to the Simple Transformation in that it follows the standard Bayesian optimization process defined on X , and then transforms a query point to a point in D. However, instead of transforming the query point after evaluation, it retains the continuous values of the query points throughout the optimization process. Before the evaluation, the query point x ‡is transformed to ⌈x ‡⌋ to obtain a response of ⌈x ‡⌋. While this method aligns with the underlying principles of Bayesian optimization, it can result in the acquisition of points that are rounded to the same integer, thereby leading to suboptimal solutions. The Simple Transformation and the Continuous Variables Keeping methods are thoroughly discussed in (Garrido-Merchán & Hernández-Lobato, 2020). Transformed Kernel. In contrast to the Simple Transformation and Continuous Variables Keeping methods, Garrido-Merchán & Hernández-Lobato (2020) propose a kernel with transformation k(⌈x⌋, ⌈x ′⌋), where k is a kernel for Gaussian process regression, such as the exponentiated quadratic and Matérn kernels, and x, x ′ ∈ X . The transformed kernel outputs discrete values, making it challenging to optimize an acquisition function with off-the-shelf optimization strategies. Therefore, we sample a sufficient number of candidates from X to identify the maximizer. Chain Graph. The method proposed by Oh et al. (2019). represents D as the Cartesian product of graphs, where each graph represents a single ordinal variable. Each variable is modeled as a chain graph, consisting of a vertex matrix Vi = [x (i) 1 , . . . , x (i) qi] and an adjacency matrix Ai ∈ {0, 1} qi×qifor i ∈ [d], where Aiis always symmetric and tridiagonial. For example, if qi = 3, Ai = h0 1 0 1 0 1 0 1 0 i. The resulting Cartesian product of chain graphs consists of all vertices V = V1 *× · · · ×* Vd ∈ R q×d, where q = q1q2 *· · ·* qd. To define a surrogate model on the graph Cartesian product, a diffusion kernel on graphs (Kondor & Lafferty, 2002) is employed, which is computed using the pairs of eigenvalue and eigenvector for the graph Laplacian L = D − A ∈ R q×q, where D and A are the degree and adjacency matrices, respectively. The diffusion kernel is computed over all vertices V: $$k(\mathbf{V},\mathbf{V})=[\mathbf{v}_{1},\ldots,\mathbf{v}_{q}]\exp(-\beta\mathbf{A})[\mathbf{v}_{1},\ldots,\mathbf{v}_{q}]^{\top},$$ ⊤, (6) ![4_image_0.png](4_image_0.png) Figure 1: Chain graphs without edge weights and with edge weights for a single ordinal variable, and their graph Laplacians. For chain graphs, each edge is called a 1-hop edge. where Λ = diag(λ1*, . . . , λ*q) is a diagonal matrix of eigenvalues and v1*, . . . ,* vq are the respective eigenvectors. See (Kondor & Lafferty, 2002; Seeger, 2004) for a detailed description of the diffusion kernel. To handle large q, (6) can be sped up using a Kronecker product decomposition of the Cartesian product of d graphs: $$k(\mathbf{V},\mathbf{V})=\bigotimes_{i=1}^{d}\sum_{j=1}^{q_{i}}\mathbf{v}_{j}^{(i)}\exp(-\beta_{i}\lambda_{j}^{(i)})\mathbf{v}_{j}^{(i)}{}^{\top},\tag{1}$$ $$\mathbf{L}={\left[\begin{array}{l l l l}{1}&{-1}&{0}&{0}\\ {-1}&{2}&{-1}&{0}\\ {0}&{-1}&{2}&{-1}\\ {0}&{-0}&{-1}&{1}\end{array}\right]}$$ $$\mathbf{L}={\left[\begin{array}{l l l l}{1.2}&{-1.2}&{0}&{0}\\ {-1.2}&{4.5}&{-3.3}&{0}\\ {0}&{-3.3}&{12.1}&{-8.8}\\ {0}&{0}&{-8.8}&{8.8}\end{array}\right]}$$ $$\mathbf{\Sigma}(7)$$ where N is a Kronecker product and (λ (i) j , v (i) j ) is a jth eigenpair of a variable i. The work (Oh et al., 2019) provides the details of this decomposition process. To optimize an acquisition function, a fixed number of query candidates are sampled from V and a local search is performed, because it is hard to use off-the-shelf optimization tools for optimizing a function over vertices on a graph. Probabilistic Reparameterization. Along with the aforementioned methods, we use the probabilistic reparameterization method proposed by Daulton et al. (2022) as a baseline method. To utilize a gradientbased optimization strategy in optimizing the acquisition function defined on a discrete space or a mixed space of continuous and discrete variables, this method utilizes a technique of probabilistic reparameterization. As reported in the work (Daulton et al., 2022), it shows its effectiveness in diverse circumstances including problems defined on discrete spaces. ## 4 Proposed Method In this section, we propose a method to optimize a black-box function on a discretized search space. ## 4.1 Search Space Design In order to optimize a black-box function on D, it is necessary to explicitly define *D ⊂ X* , taking into account not only simple rounding to integers but also precision and irregular increments. As discussed in Section 1, we provide several examples of engineering and science problems such as neural network architecture design and optoelectronic and microfluidic device design. Defining the search space requires careful consideration of the details of the specific experiment, including minimum units for measurement, fabrication, and manufacturing, as well as the experimenter's design choices, such as scaling for infinitesimal or huge values. Further information on search space design can be found in Sections 5 and A. ## 4.2 Weighted Graphs For Ordinal Variables Here, we present a special cases of weighted graphs for ordinal variables. As will be investigated, we also define a weighted graph with a particular set of multi-hop edges. Algorithm 1 Bayesian Optimization with W-Graphs Input: A time budget T and a black-box function f Output: The best solution found x ⋄ 1: Set up a variable-wise graph structure including edge addition and compute edge weights. 2: Compute the eigenpairs of the graph Laplacian matrices L1*, . . . ,*Ld for d weighted graphs. 3: for t = 1*, . . . , T* do 4: Construct a Gaussian process model on the Cartesian product of d weighted graphs with the eigenpairs computed in Line 2. 5: Determine the next query point x ‡ t by maximizing an acquisition function using a local search on the graph Cartesian product. 6: Obtain a function value yt = f(x ‡ t ) + ϵt where ϵt is observation noise. 7: **end for** 8: Compute (x ⋄, y⋄) = arg min(x,y)∈{(x ‡ i ,yi)} T i=1 y. 9: **return** x ⋄ We use a chain graph with edge weights to leverage the ordinal and numerical information of a discretized continuous variable. For a variable i, we define the edge weight as the distance between two vertices: $$|x_{j}^{(i)}-x_{k}^{(i)}|,$$ $$({\boldsymbol{\delta}})$$ |, (8) for *j, k* ∈ [qi]. Using this definition, we define the adjacency matrix for Vi as follows: $$[\mathbf{A}_{i}]_{j k}={\begin{cases}|x_{j}^{(i)}-x_{k}^{(i)}|&{\mathrm{if}}\ \ |j-k|=1,\\ 0&{\mathrm{otherwise}},\end{cases}}$$ $$({\mathfrak{g}})$$ for *j, k* ∈ [qi]. The degree matrix for Viis a diagonal matrix with entries [Di]jj =Pqi k=1[Ai]jk. Using the chain weighted graph, we can compute the graph Laplacian L = D − A and its eigenpairs, which are used to construct a surrogate model. We can further expand the concept of weighted graphs beyond the chain weighted graphs by including specific edges. As presented in Figures 1 and 5, we refer to an edge between a vertex and k-hop vertex as a k*-hop edge* in this paper. For example, edges between −3.2 and 1.3 and between −3.2 and 10.1 are 2-hop and 3-hop edges, respectively. In addition to the graph examples in Figures 1 and 5, we also present their graph Laplacians. Adding extra edges increases the average degree and creates cycles in the graph. However, finding the optimal number and combination of edges is challenging due to a vast number of possible combinations. To address this, we analyze the impact of adding multi-hop edges gradually; see Section 5.4 for the details of these analyses. ## 4.3 Bayesian Optimization With Weighted Graphs The algorithm we use, as presented in Algorithm 1, follows the similar procedure of standard Bayesian optimization, which is also similar to the framework of the work (Oh et al., 2019). Firstly, we establish a variable-wise graph structure that has its own edges and their corresponding weights. Next, we compute the eigenpairs of the graph Laplacians for d weighted graphs before proceeding with the iterative step to acquire and evaluate a query point. In the iterative step, we construct a Gaussian process surrogate on the graph and maximize an acquisition function to determine the query point. ## 5 Experiments In this section, we first provide experimental setup for our proposed method as well as the baseline methods explained above. Then, we present our results on three types of experiments including NATS-Bench and physics-based simulations and the impact of additional multi-hop edges. ![6_image_0.png](6_image_0.png) Figure 2: Results of optimizing synthetic benchmarks. All experiments are repeated 10 times and the standard error of the sample mean is indicated by the shaded areas. W-Graph and CVs stand for weighted graph and continuous variables. Note that negative shaded regions are a result of statistical representation and not indicative of actual negative regrets. Experimental Settings. For the earlier methods, namely the Simple Transformation, Continuous Variables Keeping, and Transformed Kernel, we use a Gaussian process with the Matérn 5/2 kernel (Rasmussen & Williams, 2006) as the surrogate model. We adopt the expected improvement criterion (Močkus et al., 1978) as the acquisition function for all the methods, including our algorithm. The multi-started L-BFGS-B method is used to optimize the acquisition function in the Simple Transformation and Continuous Variables Keeping methods. In contrast, the Transformed Kernel selects a query point from 10,000 sampled points. For the graph-based approaches, we follow the approach suggested by Oh et al. (2019) and determine the query point by applying a breadth-first local search from the best 20 out of 20,000 randomly sampled vertices. We start all methods with 5 random initial points chosen from the Sobol' sequences (Sobol', 1967), and repeat each experiment 10 times with 10 random seeds, i.e., 42, 84*, . . . ,* 420, without any other trials. Our implementation is written in Python with several scientific packages including NumPy (Harris et al., 2020) and SciPy (Virtanen et al., 2020). ## 5.1 Synthetic Functions We carry out the comparisons of our method and various baselines in synthetic function optimization. To create a discretized search space, we sample a fixed number of points from a compact search space X , and round a query point to the closest point among the points sampled. We uniformly sample 40 points from each variable of X for a search space design with irregular increments, unless otherwise specified. As shown in Figure 2, our method with chain weighted graphs works better than other existing methods. ## 5.2 Nats-Bench We tackle a neural network architecture search problem with NATS-Bench (Dong et al., 2021), which provides a testing ground for three popular datasets: CIFAR-10, CIFAR-100, and ImageNet16-120, where each benchmark is controlled by five variables and originally contains 32,768 architecture candidates. To create a search space with irregular increments, we modify the original search space eliminating some of small variable values; see Section A for the details. The intuition of our search space design is that we use fine increments for significant regions and coarse increments for less significant regions by utilizing common knowledge in the deep learning community. As shown in Figure 3, our proposed method with chain weighted graphs finds high-quality solutions faster than other strategies. ![7_image_0.png](7_image_0.png) Figure 3: Bayesian optimization results of NATS-Bench. All experiments are repeated 10 times and the standard error of the sample mean is indicated by the shaded area. ![7_image_1.png](7_image_1.png) (a) Ordinal Variables to Optimize (b) Experimental Results Figure 4: Schematic illustration of a structure for electromagnetic shielding and results for optimizing the structure. All experiments are repeated 10 times and the standard error of the sample mean is depicted. ## 5.3 Physics-Based Simulations We conduct physics-based simulations on electromagnetic shielding as a real-world problem that requires precision in measurement and fabrication. To obtain a response of optical transmission, we assess a nanophotonic structure made of titanium dioxide and silver with the finite difference time-domain method. We utilize Ansys Lumerical software to create and evaluate structures. As discussed in the work of Li et al. (2022), a sandwich structure with double-sided nanocones is effective for transparent electromagnetic shielding. See Figure 4a for a schematic of the structure. We simulate the transmission at a wavelength of 550 nm by solving the Maxwell's equation in the time domain. Periodic boundary conditions on the sides of the simulation supercell and perfectly matched layers on the top and bottom boundaries of the super cell are applied. We create a mesh grid of minimum size 5 nm over the x, y, and z directions for upper and lower nanocones and over the x and y directions for upper, lower, and silver film layers. The minimum mesh size over the z direction for each of the three layers is set as 1 nm. We can compute the electromagnetic interference shielding efficiency using the following equation: $$\mathrm{Shielding~Efficiency}=20\log\left(1+\frac{\eta_{0}t_{\mathrm{Ag}}}{2\rho}\right),$$ where tAg is the thickness of silver, η0 = 377 Ω is the impedance of free space, and ρ = 1.59 × 10−8 Ω · m is the bulk resistivity of silver. Since the shielding efficiency (10) depends on the thickness of silver, we choose an optimal structure in terms of transparency. The nanophotonic structure is defined with 10 discretized $$(10)$$ ![8_image_0.png](8_image_0.png) $$\begin{array}{r l r l}{-1.2}&{}&{{}}&{0}&{}&{{}}\\ {4.5}&{}&{{}}&{-3.3}&{}&{{}}\\ {-3.3}&{}&{{}}&{12.1}&{{}}&{-8.8}\\ {0}&{}&{{}}&{-8.8}&{}&{8.8}\end{array}$$ $\begin{array}{c|c}8\\ !\end{array}$ 3. $$-1.2\quad-4.5$$ | -1.2 | -4.5 | 0 | |--------|--------|-------| | 16.6 | -3.3 | -12.1 | | -3.3 | 16.6 | -8.8 | | -12.1 | -8.8 | 20.9 | | -1.2 | | | | |--------|-------|-------|------| | 19.0 | -4 5 | -13.3 | | | 16.6 | -12.1 | | | | -1.2 | -3.3 | | | | -4.5 | -3.3 | 16.6 | -8.8 | | -13.3 | -12.1 | -8.8 | 34.2 | $$-12.1$$ Figure 5: Three graph types with k-hop edges for a single ordinal variable, and their graph Laplacians continuous variables. Refer to Figure 4a and Section A for the details of the variables and their ranges. Our method with chain weighted graphs shows the satisfactory performance compared to other methods, as presented in Figure 4b. ## 5.4 Impact Of Additional Multi-Hop Edges In Synthetic Benchmarks To show the impact of additional k-hop edges, we evaluate baseline methods and our proposed method on popular synthetic benchmark functions. For this class of problems, continuous variables are discretized by sampling a fixed number of values from a search space X, which makes distances between two adjacent values distinct depending on their sampled values. The graph Laplacians of weighted graphs with multiple k-hop edges are readily computed by following the definition of adjacency and degree matrices. For example, without loss of generality, we can define a complete graph with edge weights, which implies that each vertex is connected to the all other vertices. In this case, an adjacency matrix for a vertex matrix V; is defined as the following: $$[\mathbf{A}_{i}]_{j k}={\begin{cases}0&{\mathrm{if}}\ \ j=k,\\ |x_{j}^{(i)}-x_{k}^{(i)}|&{\mathrm{otherwise,}}\end{cases}}$$ $$(11)$$ for j, k € [qi]. Interestingly, as shown in Figure 6, adding extra k-hop edges gradually is effective in the performance of Bayesian optimization. For the cases in Figures 6a, 6c, 6d, 6f, 6g, and 6h, the complete weighted graph outperforms the other weighted graphs. This consequence implies that extra edges representing complete distances help in finding a solution. However, as demonstrated in the other cases in Figure 6, the results with the complete weighted graphs are not the best. In particular, in one case for the Branin function, the complete weighted graph shows slower convergence than the other graph types including weighted graphs with 1:2-hop edges, 1:5-hop edges, and 1:10-hop edges. It is worth noting that a weighted graph with 1:k-hop edges indicates a weighted graph with 1-hop edges to k-hop edges. To sum up, the use of weighted graphs with 1:k-hop edges can improve the connectivity of the graph and consequently, the performance of Bayesian optimization. However, this process of adding edges can be thought of as combinatorial optimization due to the enormous number of possible edge combinations (Ghosh & Boyd, 2006). Thus, we claim that the performance of Bayesian optimization can be improved more by knowing the pertinent structure of a weighted graph, but revealing such structures is left to future work. Furthermore, while it might seem that a weighted graphs with 1:k-hop edges for k > 1 are equivalent to weighted graphs with 1-hop edges, i.e., for chain weighted graphs, adding k-hop edges for k > 1 can increase ![9_image_0.png](9_image_0.png) Figure 6: Bayesian optimization results of the effects of adding k-hop edges gradually. W-Graph w/ 1:k-Hop Edges indicates a weighted graph with 1*, . . . , k*-hop edges. Note that negative shaded regions are a result of statistical representation and not indicative of actual negative regrets. graph's connectivity. This observation is consistent with the findings in the work (Ghosh & Boyd, 2006), that the Fiedler value (Fiedler, 1973), i.e., the second smallest eigenvalue, increases with increasing average degree, given that all weights are nonnegative and the number of vertices are constant (Holroyd, 2006). Therefore, these edges are not redundant. Moreover, the research by Wainwright et al. (2000); Sudderth et al. (2004), has demonstrated that adding edges and creating cycles in graphs can enhance the expressive power of Gaussian graphical models. These findings suggest that additional edges in the weighted graph can be beneficial for increasing the representation power of the corresponding graph. ## 6 Discussion In this section, we discuss several topics related to our proposed method and its implications. Analysis on Numerical Results by Weighted Graphs. Fundamentally, weighted edges help represent the connectivity of variable values and also understand the precise relation between variable values, while unweighted edges only represent their connectivity. Our experimental results thus show that such additional information can improve the performance of Bayesian optimization. However, as can be seen in Figure 6, chain weighted graphs do not always defeat other methods; in addition, complete weighted graphs do not always beat other algorithms. We presume that some edges may be more effective in increasing the expressive power to find a solution, while a few edges may significantly degrade the expressive power. Therefore, the Fiedler value alone may not be sufficient to analyze and interpret Bayesian optimization results. To address this issue, it is necessary to conduct more research on rigorous edge addition and selection in the perspective of Bayesian optimization, and to investigate a representative score for our task. Global Solutions in *X \ D*. We assume that points in *X \ D* are practically or physically infeasible, making it impossible to evaluate such points; as discussed in our real-world problems in Sections 5.2 and 5.3, there exist points in *X \ D* that are impossible to evaluate. As a result, finding a global solution in *X \ D* is beyond the scope of this work. Limitations. While a handcrafted search space may be beneficial in some cases, it can require a great deal of expertise and effort to construct the search space. The ability to systemically identify a practically or physically feasible search space would be valuable in making the method more accessible and widely applicable. It may also help to ensure that the search space is comprehensive and covers all relevant areas, rather than being limited by the expertise or perspective of the experimenter. Future research could explore automated or semi-automated approaches to identifying discrete search spaces. Societal Impacts. While our work does not have any direct negative societal impacts, it is important to be mindful of any potential ethical implications that may arise in the application of Bayesian optimization in various domains. However, our work can contribute to the advancement of many real-world problems by providing an effective optimization algorithm under certain circumstances, which can ultimately have positive societal impacts. Future Directions. As previously mentioned, one promising research direction is to develop a technique for delicate edge addition and selection in order to find an optimal graph structure for Bayesian optimization. Another potential future direction for our proposed method with weighted graphs is to explore Bayesian optimization for a space of mixed variables, which is composed of continuous, ordinal, and categorical variables (Daxberger et al., 2020; Oh et al., 2021; Deshwal et al., 2021). Jointly modeling these variables on a graph could be a promising research topic for addressing real-world problems. ## 7 Conclusion This work addresses practical scientific and engineering problems concerning the precision in measurement and fabrication in the context of Bayesian optimization. In real-world problems, evaluating a query point in continuous space may not be feasible or practical, requiring us to discretize continuous variables based on experiment's characteristics and design choices. To optimize a black-box function on a space of ordinal variables, we explore several approaches and propose an algorithm that leverages the Cartesian product of weighted graphs. We also investigate the impact of multi-hop edges for weighted graphs, and demonstrate that our method outperforms other approaches across diverse experiments including neural network architecture search benchmarks and physics-based simulations. ## References P. M. Attia, A. Grover, N. Jin, K. A. Severson, T. M. Markov, Y.-H. Liao, M. H. Chen, B. Cheong, N. Perkins, Z. Yang, P. K. Herring, M. Aykol, S. J. Harris, R. D. Braatz, S. Ermon, and W. C. Chueh. Closed-loop optimization of fast-charging protocols for batteries with machine learning. *Nature*, 578(7795):397–402, 2020. D. Blanco-Mulero, M. Heinonen, and V. Kyrki. Evolving-graph Gaussian processes. arXiv preprint arXiv:2106.15127, 2021. V. Borovitskiy, I. Azangulov, A. Terenin, P. Mostowsky, M. P. Deisenroth, and N. Durrande. Matérn Gaussian processes on graphs. In *Proceedings of the International Conference on Artificial Intelligence* and Statistics (AISTATS), pp. 2593–2601, Virtual, 2021. E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010. S. Daulton, X. Wan, D. Eriksson, M. Balandat, M. A. Osborne, and E. Bakshy. Bayesian optimization over discrete and mixed spaces via probabilistic reparameterization. In Advances in Neural Information Processing Systems (NeurIPS), volume 35, pp. 12760–12774, New Orleans, Louisiana, USA, 2022. E. Daxberger, A. Makarova, M. Turchetta, and A. Krause. Mixed-variable Bayesian optimization. In *Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)*, pp. 2633–2639, Virtual, 2020. A. Deshwal, S. Belakaria, and J. R. Doppa. Bayesian optimization over hybrid spaces. In *Proceedings of the* International Conference on Machine Learning (ICML), pp. 2632–2643, Virtual, 2021. X. Dong, L. Liu, K. Musial, and B. Gabrys. NATS-Bench: Benchmarking NAS algorithms for architecture topology and size. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(7):3634–3646, 2021. J. Duris, D. Kennedy, A. Hanuka, J. Shtalenkova, A. Edelen, P. Baxevanis, A. Egger, T. Cope, M. McIntire, S. Ermon, and D. Ratner. Bayesian optimization of a free-electron laser. *Physical Review Letters*, 124(12): 124801, 2020. M. Feurer, J. T. Springerberg, and F. Hutter. Initializing Bayesian hyperparameter optimization via metalearning. In *Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)*, pp. 1128–1135, Austin, Texas, USA, 2015. M. Fiedler. Algebraic connectivity of graphs. *Czechoslovak Mathematical Journal*, 23(2):298–305, 1973. R. Garnett. *Bayesian Optimization*. Cambridge University Press, 2023. E. C. Garrido-Merchán and D. Hernández-Lobato. Dealing with categorical and integer-valued variables in Bayesian optimization with Gaussian processes. *Neurocomputing*, 380:20–35, 2020. A. Ghosh and S. Boyd. Growing well-connected graphs. In *Proceedings of the IEEE Conference on Decision* and Control, pp. 6605–6611, San Diego, California, USA, 2006. S. Haghanifar, M. McCourt, B. Cheng, J. Wuenschell, P. Ohodnicki, and P. W. Leu. Discovering highperformance broadband and broad angle antireflection surfaces by machine learning. *Optica*, 7(7):784–789, 2020. N. Hansen and A. Ostermeier. Convergence properties of evolution strategies with the derandomized covariance matrix adaptation: The (µ/µI, λ)-CMA-ES. In *Proceedings of the European Congress on Intelligent* Techniques and Soft Computing (EUFIT), pp. 650–654, Aachen, Germany, 1997. N. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of 31 algorithms from the blackbox optimization benchmarking BBOB-2009. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation (GECCO), pp. 1689–1696, Portland, Oregon, USA, 2010. C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant. Array programming with NumPy. *Nature*, 585:357–362, 2020. M. Holroyd. *Synchronizability and connectivity of discrete complex systems*. PhD thesis, College of William and Mary, 2006. J. Kim, S. Kim, and S. Choi. Learning to warm-start Bayesian hyperparameter optimization. arXiv preprint arXiv:1710.06219, 2017. R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete structures. In *Proceedings of the* International Conference on Machine Learning (ICML), 2002. M. Li, M. J. McCourt, A. J. Galante, and P. W. Leu. Bayesian optimization of nanophotonic electromagnetic shielding with very high visible transparency. *Optics Express*, 30(18):33182–33194, 2022. M. Lindauer and F. Hutter. Warmstarting of model-based algorithm configuration. In *Proceedings of the* AAAI Conference on Artificial Intelligence (AAAI), pp. 1355–1362, New Orleans, Louisiana, USA, 2018. D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3):503–528, 1989. J. Močkus, V. Tiesis, and A. Žilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2:117–129, 1978. C. Oh, J. Tomczak, E. Gavves, and M. Welling. Combinatorial Bayesian optimization using the graph cartesian product. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32, pp. 2914–2924, Vancouver, British Columbia, Canada, 2019. C. Oh, E. Gavves, and M. Welling. Mixed variable Bayesian optimization with frequency modulated kernels. In *Proceedings of the Annual Conference on Uncertainty in Artificial Intelligence (UAI)*, pp. 950–960, Virtual, 2021. V. Perrone, H. Shen, M. W. Seeger, C. Archambeau, and R. Jenatton. Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. In *Advances in Neural Information* Processing Systems (NeurIPS), volume 32, 2019. V. Picheny, S. Vakili, and A. Artemev. Ordinal Bayesian optimisation. *arXiv preprint arXiv:1912.02493*, 2019. M. Poloczek, J. Wang, and P. I. Frazier. Warm starting Bayesian optimization. In Proceedings of the Winter Simulation Conference, pp. 770–781, Piscataway, New Jersey, USA, 2016. A. Ramachandran, S. Gupta, S. Rana, C. Li, and S. Venkatesh. Incorporating expert prior in Bayesian optimisation via space warping. *Knowledge-Based Systems*, 195:105663, 2020. C. E. Rasmussen and C. K. I. Williams. *Gaussian Processes for Machine Learning*. MIT Press, 2006. M. Seeger. Gaussian processes for machine learning. *International Journal of Neural Systems*, 14(2):69–106, 2004. B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2016. B. J. Shields, J. Stevens, J. Li, M. Parasram, F. Damani, J. I. M. Alvarado, J. M. Janey, R. P. Adams, and A. G. Doyle. Bayesian reaction optimization as a tool for chemical synthesis. *Nature*, 590(7844):89–96, 2021. A. J. Smola and R. Kondor. Kernels and regularization on graphs. In *Learning Theory and Kernel Machines*, pp. 144–158. Springer, 2003. J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 25, pp. 2951–2959, Lake Tahoe, Nevada, USA, 2012. I. M. Sobol'. On the distribution of points in a cube and the approximate evaluation of integrals. *Zhurnal* Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 7(4):784–802, 1967. A. Souza, L. Nardi, L. B. Oliveira, K. Olukotun, M. Lindauer, and F. Hutter. Bayesian optimization with a prior for the optimum. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), pp. 265–296, Virtual, 2021. N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In *Proceedings of the International Conference on Machine Learning* (ICML), pp. 1015–1022, Haifa, Israel, 2010. E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Embedded trees: Estimation of Gaussian processes on graphs with cycles. *IEEE Transactions on Signal Processing*, 52(11):3136–3150, 2004. R. Turner, D. Eriksson, M. McCourt, J. Kiili, E. Laaksonen, Z. Xu, and I. Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In *Proceedings of the NeurIPS Competition and Demonstration Track*, pp. 3–26, Virtual, 2020. P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: fundamental algorithms for scientific computing in Python. *Nature Methods*, 17:261–272, 2020. M. J. Wainwright, E. B. Sudderth, and A. S. Willsky. Tree-based modeling and estimation of Gaussian processes on graphs with cycles. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 13, Denver, Colorado, USA, 2000. Y.-C. Zhi, Y. C. Ng, and X. Dong. Gaussian processes on graphs via spectral kernel learning. *arXiv preprint* arXiv:2006.07361, 2020. ## A Details Of Search Spaces We present the details of the search spaces used in the experiments. ## A.1 Nats-Bench Table 1: Search space for NATS-Bench | Variable | Discrete Variable Values | |--------------------------------------------|----------------------------| | Output channels of 1st convolutional layer | {8, 24, 40, 48, 56, 64} | | Output channels of 1st cell stage | {8, 24, 40, 48, 56, 64} | | Output channels of 1st residual block | {8, 24, 40, 48, 56, 64} | | Output channels of 2nd cell stage | {8, 24, 40, 48, 56, 64} | | Output channels of 2nd residual block | {8, 24, 40, 48, 56, 64} | We use a size search space in NATS-Bench (Dong et al., 2021) as shown in Table 1. To design a search space with irregular increments, we slightly modify the original size search space. More precisely, we eliminate output channel sizes 16 and 32 from each variable, in order to consider the characteristics of the variables; in these experiments large channel sizes are more significant than small channel sizes. As a result, our search space contains 7,776 architecture candidates. ## A.2 Physics-Based Simulations We essentially optimize the following variables: 1. thickness of upper film; 2. thickness of lower film; 3. thickness of silver; 4. top radius of upper cones; 5. bottom radius of upper cones; 6. height of upper cones; 7. top radius of lower cones; | Variable | Discrete Variable Values | |------------------------------------------------------|--------------------------------------------------| | Thickness of upper film | {5, 6, 7, 8, 9, 10, 15, 20, . . . , 100} | | Thickness of lower film | {5, 6, 7, 8, 9, 10, 15, 20, . . . , 100} | | Thickness of silver | {3, 4, 5, 10, 15, 20} | | Ratio of top radius to bottom radius for upper cones | {0.01, 0.02, . . . , 0.99} | | Bottom radius of upper cones | {10, 20, . . . , 200} | | Height of upper cones | {50, 60, 70, 80, 90, 100, 150, 200, . . . , 400} | | Top radius of lower cones | {10, 20, . . . , 200} | | Ratio of bottom radius to top radius for lower cones | {0.01, 0.02, . . . , 0.99} | | Height of lower cones | {50, 60, 70, 80, 90, 100, 150, 200, . . . , 400} | | Grid size | {20, 30, . . . , 200} | Table 2: Search space for physics-based simulations on electromagnetic shielding. Note that all values except for two ratios are in nanometers. 8. bottom radius of lower cones; 9. height of lower cones; 10. grid size. However, to create a physically feasible structure, the structure has to satisfy two constraints that the bottom radius of upper cones is larger than the top radius of upper cones and the top radius of lower cones is larger than the bottom radius of lower cones. Thus, we replace the top radius of upper cones and the bottom radius of lower cones with a ratio of top radius to bottom radius for upper cones and a ratio of bottom radius to top radius for lower cones, respectively. Eventually, we use ordinal variables, which are described in Table 2. As shown in Table 2, we design a search space for physics-based simulations with irregular increments, i.e., fine increments for relatively small variable values and coarse increments for relatively large variable values, by considering the significance of variables and manufacturing precision. B Impact of Additional Multi-Hop Edges in Real-World Problems ![15_image_0.png](15_image_0.png) Figure 7: Results to show the impact of extra multi-hop edges in NATS-Bench ![15_image_1.png](15_image_1.png) Figure 8: Results to show the impact of extra multi-hop edges in a physics-based simulation for electromagnetic shielding Similar to the analysis demonstrated in Section 5.4, we present the impact of additional multi-hop edges in real-world problems. Unlike the results in Section 5.4, Bayesian optimization results with chain weighted graphs are better than the results with complete weighted graphs. As discussed in Section 6, we presume that the performance of Bayesian optimization is affected by problem structures, which are practically unknown. Such an interesting analysis on more rigorous edge addition and deletion is left to future work. ## C Computational Costs For Calculating Eigenvalues And Eigenvectors | Graph Type | Time (sec.) | |------------------------------------|-------------------| | Chain graph | 0.00841 ± 0.00724 | | Weighted graph with 1-hop edges | 0.01008 ± 0.00027 | | Weighted graph with 1:2-hop edges | 0.01434 ± 0.00247 | | Weighted graph with 1:5-hop edges | 0.01795 ± 0.00047 | | Weighted graph with 1:10-hop edges | 0.02300 ± 0.00227 | | Weighted graph with 1:20-hop edges | 0.03487 ± 0.00243 | Table 3: Additional computational costs for calculating eigenvalues and eigenvectors We provide elapsed time to compute eigenvalues and eigenvectors in Table 3. To measure the elapsed time, an eight-dimensional synthetic function based on the Ackley function is created where more than 21 variable values exist for each dimension. Also, we conduct this experiment on the same machine by repeating the calculation 1000 times with 10 different random seeds (i.e., 100 times per seed). As expected, adding extra edges leads to more elapsed time. However, we can preemptively compute eigenvalues and eigenvectors at the beginning of a Bayesian optimization round (i.e., Line 2 of Algorithm 1), which implies that it would not be a significant additional burden.
Review 1: Summary: The paper extends the previous Bayesian Optimization (BO) work that uses Catersian product of graphs [Oh et al, 2019] to allows BO to work with discrete search space. The main idea of the paper is to use weighted edges where the weights are computed by the distance between the two vertices of the graph. The BO procedure is then conducted as in Oh et al, 2019 to find the global optimum of the optimization problem. Strengths and Weaknesses: Strengths: + The paper’s writing is generally clear. The problem setting, some previous related methods are described clearly. The experiments are explained in detail with ablation study being conducted to understand some aspects of the proposed technique. + The idea of using weighted Cartesian graphs for BO seems to be interesting and might be worth for further investigation. Weaknesses - The paper doesn’t provide much insight why it makes sense to use weighted Cartesian graph with the weights defined by the distances between the vertices. I don’t understand why the technique of setting the weights by the distance between the vertices will work. - The experiments seem to lack of the comparison with some new methods that tackle the problem of BO with discrete search space. For example, the work “Bayesian Optimization over Hybrid Spaces” by Deshwal et al (ICML 2021), and the work “Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization” by Daulton et al (NeurIPS 2022). - The dimensions of the optimization problems used in the paper need to be described in more detail, especially the NATS-Bench. Besides, it seems to me the dimensions of the optimization problems used in this paper are quite small (at least I know that for synthetic problems, the dimensions are just up to 4, and it’s unclear on the dimensions of the NATS-Bench problems). - In Section 4.1, it says that 40 data points are sampled from X to construct the discrete search space. Why only 40? This seems to be quite small. Requested Changes: 1. More insight on why the proposed weighted Cartesian graph works. 2. Experiments need to include more recent BO methods that tackle the problem of discrete search space (please find the references in the previous section). 3. Experiments need to explain in more detail the dimensions of the problems used, and also should include more high-dimensional problems. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper proposes a Bayesian optimization method for scenarios where all tunable parameters are discrete and ordinal. To tackle this problem, the paper represents ordinal parameters as weighted graphs and use a GP with a diffusion kernel. This builds off of COMBO (Oh et al., 2019). The paper empirically evaluates this method on several benchmarks including neural architecture search and physics-based problems. Strengths and Weaknesses: ## Strengths The idea is intuitive and appealing and the empirical results are compelling. The paper is largely an incremental improvement upon COMBO (using weighted graphs to represent ordinals). Nevertheless the performance improvements are strong, and I think the community will find this work to be of interest. ## Weaknesses [minor] The empirical results are a bit perplexing. In the real-world experiments, the Chain-w graph method performs best. But in the experiments on synthetic benchmarks the Complete-w graph before much better than W-graph w/ 1-hop edges---which is the chain-w graph based on my understanding. Given these results (Figure 6), why is the complete-w graph not used in the real-world experiments in Figures 3 and 4? [minor] The paper does not compare against recent works on Bayesian optimization with ordinal parameters. Such as Casmopolitan (Wan et al, 2021) and probabilistic reparameterization (Daulton et al, 2022). Requested Changes: Add comparisons with baselines and complete-w graph results for main experiments. Broader Impact Concerns: None ================================================== Review 3: Summary: This paper tackles Bayesian optimization over user-defined coarsely-discretized continuous search spaces. Rather than defining the covariance of the corresponding Gaussian process on this continuous space, the covariance is defined over the Cartesian product of graphs (possibly weighted and with multi-hops). An empirical comparison is provided on several toy examples, a neural network hyperparameter tuning and a physical simulation problems. The addition of the multi-hops is studied as well. Strengths and Weaknesses: # Strengths - The use of graph-based Gaussian processes for Bayesian optimization is much less studied than their continuous counterparts. - The study of the effect of multi-hops is interesting. # Weaknesses First of all, the use of a graph structure on a design space that is initially a compact set seems contrived. It could make sense on other problems with categorical variables, or on complex objects like molecules, see, e.g., Korovina, K., Xu, S., Kandasamy, K., Neiswanger, W., Poczos, B., Schneider, J., & Xing, E. (2020, June). Chembo: Bayesian optimization of small organic molecules with synthesizable recommendations. In International Conference on Artificial Intelligence and Statistics (pp. 3393-3403). PMLR. Indeed, there are ordinal variables in neural networks, but the proposed setup relates more to the inclusion of prior knowledge, e.g., Souza, A., Nardi, L., Oliveira, L. B., Olukotun, K., Lindauer, M., & Hutter, F. (2021). Bayesian optimization with a prior for the optimum. In Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part III 21 (pp. 265-296). Springer International Publishing. For the electromagnetic shielding test case, the simulator would run with continuous variables. Then the choice of the acquisition function is not discussed, and possibly not the best fit for the proposed setup. For instance, it could be an entropy defined on the discrete search space, Thompson sampling or conditional improvement (discussed, e.g., in Gramacy, R. B. (2020). Surrogates: Gaussian process modeling, design, and optimization for the applied sciences. CRC press.). For computer experiments, being able to evaluate everywhere can bring more information to improve the surrogate compared to limiting the search to a few locations. As a side remark, using coarse discrete sets for acquisition function search is a standard simplification of the continuous problem. There are missing details in the empirical section: - what is the metric used? For synthetic test functions, if the search space is random, so is the value of the optimum. Using the regret would be preferred in this context. - what is the search space used by the competitors (e.g., continuous, discrete)? - some methods use 10k sampled points for acquisition function search, another 20k plus local search, how is this chosen? Finally, the empirical comparison could include state of the art methods, say: Eriksson, D., Pearce, M., Gardner, J., Turner, R. D., & Poloczek, M. (2019). Scalable global optimization via local bayesian optimization. Advances in neural information processing systems, 32. or Cowen-Rivers, A. I., Lyu, W., Tutunov, R., Wang, Z., Grosnit, A., Griffiths, R. R., ... & Bou-Ammar, H. (2022). HEBO: pushing the limits of sample-efficient hyper-parameter optimisation. Journal of Artificial Intelligence Research, 74, 1269-1349. Requested Changes: - [Critical] Improving the motivation of this work in the proposed context and examples. - [Critical] Considering additional acquisition functions. - [Critical] Enhancing the empirical section as suggested above. - Section 5 arrives too late in the paper, it would be more useful earlier. Further clarifications: By taking into account multi-hops, is it akin to reproduce the original inter distances in the compact search space? Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Reject Comment: The work is closely based on previous work by Oh et al, essential extending their (unweighted) graph approach to weighted graphs. It further includes investigations into the role of the number of hops to include. The work is interesting but feels preliminary. I largely agree with the majority of the reviewers that further work would be needed to - better motivate the work, - investigate the impact of the acquisition function optimisation, - provide theoretical support for the weighted chain graphs, and, - in light of figures 6-8, tie up the lose ends with the number of hops to include (and hence which method/graph to use). ==================================================
# Interpreting Clip: Insights On The Robustness To Imagenet Distribution Shifts Anonymous authors Paper under double-blind review ## Abstract What distinguishes robust models from non-robust ones? While for ImageNet distribution shifts it has been shown that such differences in robustness can be traced back predominantly to differences in training data, so far it is not known what that translates to in terms of what the model has learned. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp), and comparing them to the representation spaces of less robust models with identical backbones, but different (pre)training sets or objectives (CLIP pretraining on ImageNet-Captions, and supervised training or finetuning on ImageNet). Through this analysis, we generate three novel insights. Firstly, we detect the presence of outlier features in robust zero-shot CLIP vision encoders, which to the best of our knowledge is the first time these are observed in non-language and non-transformer models. Secondly, we find the existence of outlier features to be a signature1 of ImageNet shift robustness in models, since we only find them in robust models in our analysis. Lastly, we also investigate the number of unique encoded concepts in the representation space and find zero-shot CLIP models to encode a higher number of unique concepts in their representation space. However, we find this to be rather related to the language supervision than to be a signature of ImageNet shift robustness. ## 1 Introduction Large pretrained multimodal models, such as CLIP (Radford et al., 2021), have demonstrated unprecedented robustness to distribution shifts around ImageNet2. When used as zero-shot image classifiers, their performance on ImageNet (Deng et al., 2009) translates remarkably well to performances on natural shifts of ImageNet, such as, e.g., ImageNet-V2 (Recht et al., 2019). This led to many works analyzing what actually causes this remarkable robustness of CLIP to shifts around ImageNet, with Fang et al. (2022) establishing that the root cause of CLIP's robustness lies in the quality and diversity of data it was pretrained on. In particular, they find the robustness of CLIP to shifts around ImageNet to disappear when it is pretrained on ImageNet-Captions, a modification of ImageNet suitable for unsupervised language-vision pretraining. While the cause of CLIP's robustness to ImageNet shifts is thus known, we set out to establish how exactly robustness manifests itself in features learned by the model. Finding feature patterns in the representation space that only appear in robust models is the first step towards a better understanding of the emergence of robustness. It is also key to diagnosing robustness in times when only limited knowledge about the (pre)training distribution or the shifts is available. To find robustness patterns in robust CLIP models, we leverage the various models provided by Ilharco et al. (2021) in the OpenCLIP repository, as well as non-robust CLIP models pretrained on ImageNet-Captions provided by Fang et al. (2022), and models we train in a supervised way on ImageNet from scratch. We analyze the *visual features* in these models by 1We use the term 'A is a signature of B' in this work to say that A =⇒ B. 2Similar to Radford et al. (2021), we use CLIP as a name for the general training technique of unsupervised language-vision pretraining, not only for the specific models obtained by OpenAI. probing their last layer activation vectors with quantitative interpretability tools, such as kurtosis analysis of activation vectors (Elhage et al., 2023), singular value decomposition (SVD) of classifier weight matrices and concept probing of the representation space (Bau et al., 2017). Through this analysis, we distill insights on distinctive characteristics of CLIP model features and CLIP ImageNet distribution shift robustness. Our contributions. (1) We show that many robust CLIP models have *outlier features*. These features stand out as their activation is typically several orders of magnitude above the average activation of features in the same layer. Interestingly, this observation also holds for robust CLIP models with ResNet backbones. To the best of our knowledge, this is the first time that outlier features are observed in non-language and non-transformer models (Dettmers et al., 2022; Elhage et al., 2023; Sun et al., 2024). We also show through an SVD analysis that outlier features are propagated to the logits of downstream classifiers, which results in what we call *privileged directions* that are crucial to model predictions. (2) Through a comparative analysis we find that outlier features are *robustness signatures* that distinguish robust models from their non-robust counterparts: since none of the non-robust models exhibit them, we find outlier features to be a signature whose presence can indicate that a model will be robust to distribution shifts around ImageNet. Privileged directions on the other hand, are not such a signature, since they can also be found in some of the non-robust models. (3) We show that robust zero-shot CLIP models all encode a high number of *unique concepts* in their features. As a consequence, the features of robust models are highly polysemantic, which means that they superpose a large set of concepts. Surprisingly, we find through our comparative analysis that representations that are rich in concepts are not necessarily more robust, as this property can also be found in non-robust ImageNet-Captions pretrained CLIP models. This suggests that language supervision tends to enrich visual representations in human concepts. ## 2 Background On Clip Models And Notation In this section, we summarize the CLIP paradigm introduced by Radford et al. (2021) and introduce the notation used in later sections. We explain how CLIP models are trained and used for image classification. CLIP architecture. A CLIP model consists in an image encoder fv : R dX → R dH and a text encoder ft : R dT → R dH mapping to the same representation space R dH of dimensions dH ∈ N. With our notations, dX = C · H · W for images with C ∈ N channels, height H ∈ N and width W ∈ N. Similarly, dT = L · V for texts with context length L ∈ N and vocabulary size V ∈ N. Given a batch of (image, text) pairs B = {(x (b) v , x (b) t) ∈ R dX ×R dT | b ∈ [B]}, where B ∈ N denotes the batch size and [B] := {n ∈ N | 1 ≤ n ≤ B}, the encoders are trained to minimize the symmetric contrastive loss L(B, fv, ft) := Lv(B, fv, ft) + Lt(B, fv, ft) 2 b=1 log exp τ −1cos hfv x (b) v , ft x (b) ti Lv(B, fv, ft) := − X B PB b ′=1 exp τ −1 cos hfv x (b ′) v, ft x (b) ti b=1 log exp τ −1cos hfv x (b) v , ft x (b) ti Lt(B, fv, ft) := − X B PB b ′=1 exp τ −1 cos hfv x (b) v , ft x (b ′) ti, with cos denoting the cosine similarity. Lv and Lt are InfoNCE losses from Oord et al. (2018) with a learnable temperature parameter τ ∈ R +. These losses induce the encoders to align the image and text embedding pairs in the representation space R dH through the nominator, while separating image and text embeddings of distinct samples through the denominator, which differs between Lv and Lt. Building zero-shot classifiers. Once the model is trained, we have access to an image and text encoder that tend to align images with their text description. This can be used to build a zero-shot image classifier that discriminates between a set of K ∈ N classes k ∈ [K]. The typical approach is to combine the name of the class k, together with a template, to create x (k) t. For instance, the class lion can be combined with the template an image of a <class> to yield the text description an image of a lion. To assign a class to an input image xv ∈ R dX , one can assign logits to each class k ∈ [K] as follows: $$\begin{split}\text{logit}^{(k)}(x_{v})&:=\tau^{-1}\cos\left[f_{v}\left(x_{v}\right),f_{t}\left(x_{t}^{(k)}\right)\right]\\ &=\left[W\frac{f_{v}(x_{v})}{\|f_{v}(x_{v})\|}\right]_{k},\end{split}$$ where *∥ · ∥* denotes the euclidean norm in R dH and the matrix W ∈ R K×dH has elements $$W_{k j}:=\left[\tau^{-1}{\frac{f_{t}\left(x_{t}^{(k)}\right)}{\|f_{t}\left(x_{t}^{(k)}\right)\|}}\right]_{j}$$ for each k ∈ [K] and j ∈ [dH]. A zero-shot image classifier can thus be built from CLIP as the composition between the linear classification head W and the CLIP image encoder (with normalization). For ease of notation, we use f ≡ fv, x ≡ xv, etc. in the remainder of this work, unless otherwise specified. ## 3 Robustness Of Clip Models In this work, we focus on the robustness of models to shifts from ImageNet (Deng et al., 2009) to the five natural distribution shifts considered by Fang et al. (2022), namely ImageNet-V2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021a), ImageNet-Sketch (Wang et al., 2019), ObjectNet (Barbu et al., 2019) and ImageNet-A (Hendrycks et al., 2021b). Measuring robustness. There are different ways to measure robustness to a specific distribution shift from source distribution A to shifted distribution B. In some works, especially in the literature on invariant and robust learning, simply performance on both source and shifted distribution are reported (Arjovsky et al., 2019; Makar et al., 2022; Jiang & Veitch, 2022; Kumar et al., 2022), and their quotient can be straightforwardly computed to report robustness in a single metric (*What fraction of the original performance* is maintained on the shifted data?). On the other hand, in the literature investigating the robustness of CLIP to ImageNet shifts, typically a slightly more involved metric called *Effective Robustness* (ER) is reported (Taori et al., 2020; Fang et al., 2022). We choose to include both, and will find that similar insights about the robustness of models can be derived from both of them when it comes to distribution shifts around ImageNet. Lastly, we note that recently Shi et al. (2023) have investigated the robustness of models to the five natural distribution shifts of ImageNet, using additionally a different data distribution than ImageNet (YFCC-15M Thomee et al. (2016)) as the source distribution. In that case, none of the models investigated (including CLIP models) is significantly more robust than the other models. Therefore, we keep the focus of our analysis on the shifts from ImageNet to its five natural distribution shifts, where most zero-shot CLIP models are without doubt significantly more robust than their finetuned or supervised counterparts, as we will recap in the following. Model pool. We run our analyses across five backbone architectures: ResNet50, ResNet101, ViT-B-16, ViT-B-32, ViT-L-14 (He et al., 2015; Dosovitskiy et al., 2020). For each architecture, the OpenCLIP repository (Ilharco et al., 2021) contains robust pretrained CLIP models on various pretraining datasets: the original (unreleased) OpenAI pretraining set (OpenAI, Radford et al. (2021)), YFCC-15M (Thomee et al., 2016; Radford et al., 2021), CC-12M (Changpinyo et al., 2021), LAION-400M, LAION-2B (Schuhmann et al., 2022), and DataComp (Cherti et al., 2023). Furthermore, Fang et al. (2022) provided us with checkpoints of the non-robust CLIP models pretrained on ImageNet-Captions for three different versions of ImageNet-Captions: a first version for which only the original titles from the internet associated to the images are used as text (ImageNet-Captions-t), a second version for which a concatenation of original titles and description was used (ImageNet-Captions-td), and a last version for which a concatenation of original titles, description, and tags was used (ImageNet-Captions-tdt, for more details see Fang et al. (2022)). We load the pretrained vision encoders of all available combinations of architecture and pretraining dataset, and construct a zero-shot classification model for ImageNet using the methodology described in Section 2. By finetuning each robust zero-shot model on ImageNet, we obtain classifiers with lower robustness than their robust zero-shot counterparts (Andreassen et al., 2021; Kumar et al., 2022; Wortsman et al., 2022a). Lastly, since the non-robust ImageNet-Captions models were only trained for a ResNet50 backbone, we obtain further non-robust models for the remaining architectures by training them as classification models on ImageNet from scratch, and obtain models with far lower robustness than the finetuned ones (Fang et al., 2022) (details on model finetuning and training can be found in Appendix H). Results. For all models, we compute the test accuracy on ImageNet, on the five shifted datasets (averaged), their quotient, as well as the ER (for details on the computation of ER, see Appendix A). We report the two metrics of robustness (quotient of test accuracies %acc, and ER) in Figure 1a and Figure 1b (for the individual test accuracies that these metrics are calculated from, see Appendix B). We see that according to both metrics, for each type of backbone the zero-shot CLIP models from OpenCLIP have the highest robustness ('Robust zero-shot CLIP'), which decreases but remains significant when fine-tuned on ImageNet ('Fine-tuned CLIP'). On the other hand, ImageNet-Captions CLIP models ('Non-robust zero-shot CLIP') and ImageNet trained supervised models ('Supervised')3 do not exhibit any ER and much lower robustness according to %acc, typically loosing more than half of their ImageNet accuracy on the shifted datasets (%acc dropping below 50%). Take-away 1. In agreement with the literature, we find that for each architecture, zero-shot CLIP models that were pretrained on OpenAI, YFCC-15M, CC-12M, LAION-400M, LAION-2B, or DataComp, are the most *robust to ImageNet distribution shifts*. This robustness, while decreased, remains significant after fine-tuning on ImageNet for almost all pretraining datasets. On the other hand, zeroshot CLIP models pretrained on ImageNet-Captions, and supervised models trained on ImageNet, are non-robust to ImageNet distribution shifts. ## 4 Outlier Features And Privileged Directions In this section, we explain how we detect *outlier features* in zero-shot CLIP vision classifiers, and how we find them to be a signature of model robustness. We start by explaining what outlier features are and how they are surfaced, and then proceeed to analyse how they propagate to class logits in the form of privileged directions. We analyze both outlier features and privileged directions for each of the models in our model pool. Approach. We aim to analyze what models with high robustness have learned in comparison to models with lower robustness. To this end, we compare the features, i.e. the representation space spanned by the image encoder fv described in Section 2. Like Goh et al. (2021), we focus on the output of the encoder only since it is used for downstream classification by a linear head computing the ImageNet class logits. Furthermore, a *central kernel alignment* (CKA) analysis in Appendix C reveals that robust models differ from less robust models most consistently in this last layer, making it the most relevant layer to focus on from a comparative standpoint as well. We use the ImageNet test set to produce activation vectors h (n) = fv(x (n)) ∈ R dH for each image x (n) ∈ R dX fed to the encoder. Preliminary observations. Qualitatively observing the distribution of activation vectors, one thing that immediately stands out is the fact that some components i ∈ [dH] are much larger than the average activation: h (n) i ≫ d −1 H PdH j=1 h (n) j. A similar phenomenon has recently been observed by Dettmers et al. (2022) in large language models (LLMs). Such features, whose activation is substantially more important than average, 3For the ViT-L-14, we were unable to train a supervised version to convergence from scratch on ImageNet (Dosovitskiy et al. (2020) and He et al. (2022) comment on the difficulties of training such an overparametrized model on ImageNet). ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) (b) %acc calculated as the ratio between the average accuracy on the five ImageNet shifts and the ImageNet test accuracy. Figure 1: Robustness metrics for the models in our pool. Higher values indicate higher robustness. We see that for each backbone and pretraining data, robustness decreases from Robust zero-shot CLIP to Fine-tuned CLIP and reaches a minimum for ImageNet supervised ('Supervised') and Imagenet-Captions trained ('Nonrobust zero-shot CLIP') models. were coined as outlier features. Subsequent work by Elhage et al. (2023) introduced a simple way to surface these outlier features, through a metric called activation kurtosis. We now use this criterion to quantitatively analyze the features of CLIP models. Activation kurtosis. Following Elhage et al. (2023), we measure the activation kurtosis to quantitatively evaluate the presence of outlier features in a model. The activation kurtosis is computed over all the components of an activation vector h (n), and averaged over N activation vectors: $$\mbox{Kurtosis}:=\frac{1}{N}\sum_{n=1}^{N}\frac{1}{d_{H}}\sum_{i=1}^{d_{H}}\left[\frac{h_{i}^{(n)}-\mu\left(h^{(n)}\right)}{\sigma\left(h^{(n)}\right)}\right]^{4},\tag{1}$$ ![5_image_0.png](5_image_0.png) where µ(h) := d −1 H PdH i=1 hi and σ 2(h) := d −1 H PdH i=1[hi − µ(h)]2. As explained by Elhage et al. (2023), Kurtosis ≫ 3 indicates the presence of outlier features (3 being the kurtosis of an isotropic Gaussian). (a) Activation kurtosis calculated through Equation (1). ![5_image_1.png](5_image_1.png) (b) Direction importance ratio calculated through Equation (3). Figure 2: Outlier features and privileged directions. Most robust zero-shot CLIP models have outlier features (high kurtosis) and privileged directions (high direction importance outlierness). Fine-tuned models have no outlier features but still exhibit privileged directions, although those are noticeably less privileged. Supervised models and non-robust zero-shot CLIP models have no outlier features. The full distribution of importance scores can be found in Appendix E.1 We report the average kurtosis over the ImageNet test set in Figure 2a for each architecture and across the various levels of robustness. This leads to three insights. Firstly, across all architectures, there are robust zero-shot CLIP models with outlier features, as indicated by their Kurtosis ≫ 3. Secondly, the kurtosis, like the robustness, drops when finetuning on ImageNet. In Appendix I we include an additional analysis showing that Kurtosis closely tracks the ER metric when interpolating between robust zero-shot CLIP models and fine-tuned CLIP models in weight space. Lastly and most importantly, the values Kurtosis ≈ 3 obtained for the non-robust supervised, and npn-robust zero-shot CLIP models suggest the absence of outlier features in non-robust models, indicating that the presence of outlier features is a signature of robustness that can only be found in robust models. Privileged directions in representation space. The strong presence of outlier features in the most robust models does not necessarily explain the performances of these models. Indeed, it is perfectly possible that outlier features are ignored by the linear head computing the class logits based on the activation vectors, e.g. if they are part of ker W, the null space of the weight matrix W defined in Section 2. Thus, to assess whether outlier features are of importance, we now introduce the notion of *privileged directions* of the representation space R dH , as an instance of a generalized form of outlier features. While outlier features are studied in the canonical basis {e1*, . . . , e*dH }, since we can write hi = Projei (h), they can be generalized to be any set of directions of the representation space that receive a projection substantially above average (for more details and an illustration, see Appendix F). We focus on the directions that are important for the computation of logits by the linear head W, namely right singular vectors of W. These can be identified by performing a singular value decomposition (SVD) of W, which can be written as W =Prank(W) i=1 σi· uiv ⊺ i , where σi ∈ R +, ui ∈ R dY and vi ∈ R dH respectively correspond to *singular values* (SV), *left singular vectors* (LSV) and *right singular vectors* (RSV) of W. In this decomposition, each RSV vi corresponds to a direction in representation space that is mapped to the logits encoded in the LSV vi. Since both of these vectors are normalized ∥ui∥ = ∥vi∥ = 1, the importance of the direction vi for W is reflected by the SV σi. Note that the SV σi by itself *does not* refer to the model's encoder activations. How can we measure if the direction viis typically given an important activation by the encoder? We propose to measure the average cosine similarity of activation vectors h (n) with this direction. This leads us to a unified importance metric for each direction i ∈ [rank(W)] of the representation space: $$\mathrm{Importance}(i):=\underbrace{\frac{\sigma_{i}}{\sum_{j=1}^{\mathrm{rank}(W)}\sigma_{j}}}_{\mathrm{Classification~head~importance}}\cdot\underbrace{\sum_{n=1}^{N}\frac{|\cos(v_{i},h^{(n)})|}{N}}_{\mathrm{Encoder~importance}}.$$ (2) $\frac{1}{2}$ Note that this metric is defined so that 0 < Importance(i) < 1. With this metric, we can measure to what extent the presence of outlier features induces privileged directions in representation space. If such privileged directions exist, we expect some singular directions vi to have an Importance(i) substantially higher than average, i.e. with Importance(i) ≫ rank(W) −1 Prank(W) j=1 Importance(j). We thus can identify privileged directions as the RSVs associated to outlier values in the importance scores. Indeed, we observe the existence of such privilege directions in Figure 2b, where we plot the quotient of the largest and the average importance score for each model $$\text{Importance Ratio}:=\frac{\max_{i=1}^{\text{rank}(W)}\text{Importance}(i)}{\sum_{j=1}^{\text{rank}(W)}\text{Importance}(j)\text{rank}(W)}.\tag{3}$$ We notice that all the robust zero-shot CLIP models have such a privileged direction. For the less robust finetuned models, these privileged directions still exist, but not they are not as strong. This indicates that finetuning de-emphasizes privileged directions. Finally, the importance distributions of non-robust models have no privileged directions with only one exception: the ImageNet-Captions CLIP models. These models have privileged directions in the absence of outlier features, which might appear surprising at first. By analyzing Equation (2), we deduce that these models should have some singular values that are substantially larger than average. As explained in Section 2, these singular values correspond to a weight matrix obtained by stacking activations of the language encoder from the CLIP model. Since the non-robust ImageNet-Captions CLIP models show little to no effective robustness, we deduce that privileged directions are not a signauture robustness. Take-away 2. Many robust models exhibit outlier features and privileged directions in their representation spaces. Since we do not find outlier features in non-robust models, outlier features appear to be a signature indicating model robustness to shifts around ImageNet. Privileged directions can however also be found in non-robust models and are thus not a signature of robustness. Emergence of outlier features. We have observed that outlier features are a signature of robustness to ImageNet distribution shifts. We would like to suggest a hypothesis to explain why they distinguish robust models from their less robust counterparts. In our analysis, they can only be found in robust zero-shot CLIP models that were typically trained on datasets with a size and diversity that is several orders of magnitude above that of ImageNet. Similarly, Sun et al. (2024) observed 'massive activations' (closely related to outlier features) only in transformers which were pretrained on datasets that were large and diverse in comparison to ImageNet, but not in transformers trained on ImageNet itself. Combined together, these two facts suggest that outlier features result from training on larger and more diverse datasets (in comparison to the evaluation dataset), and that robustness and outlier features thus share a common root cause in the type of pretraining data. Relationship of outlier features to pruning. Note that previous work on LLMs found that outlier features also have positive effects on model pruning: Sun et al. (2023) found that LLMs with outlier features can be efficiently pruned by retaining features with larger activations. We also found some weak evidence of this kind when pruning latent directions (see Appendix G). ## 5 Concept Probing In this section we find that robust CLIP models encode more concepts than non-robust supervised models. However, also non-robust CLIP models encode similarly high numbers of concepts. Our analysis thus suggests that this stems from language supervision in CLIP, rather than being related to robustness. We first describe our approach, and then discuss the concepts encoded in the privileged directions identified in the previous section. We then show that robust and non-robust CLIP models encode more unique concepts than fine-tuned and supervised models. Lastly, we explain how this leads to polysemanticity in CLIP models. Approach. With the discovery of privileged directions in the representation spaces of models with robustness to Imagenet shifts, it is legitimate to ask what type of information these directions encode. More generally, are there differences in the way robust models encode human concepts? To answer these questions, we use *concept probing*. This approach was introduced by Bau et al. (2017), along with the *Broden dataset*. This dataset consists of 63, 305 images illustrating C = 1, 197 concepts, including scenes (e.g. street), objects (e.g. flower), parts (e.g. headboard), textures (e.g. swirly), colors (e.g. pink) and materials (e.g. metal). Note that several concepts can be present in each image. For each concept c ∈ [C], we construct a set of positive images P c ⊂ R dX (images that contain the concept) and negative images N c ⊂ R dX (images that do not contain the concept). In the following, we shall consider balanced concept sets: |Pc| = |N c|. Concept probing consists in determining if activations in a given direction of the representation space discriminate between P c and N c. Assigning concepts to directions. We are interested in assigning concepts to each RSV vi of the linear head matrix W. To determine whether a representation space direction enables the identification of a concept c ∈ [C], we proceed as follows. For each activation vector h (c,n) = fv(x c,n) associated to positive images x (c,n) ∈ Pc, we compute the projection Projvi (h (c,n)) on the RSV. We perform the same computations for the projections Projvi (h (¬c,n)) of negative images x (¬c,n) ∈ N c. If the direction vi discriminates between concept negatives and positives, we expect a separation between these projections: Projvi (h (c,n)) ̸= Projvi (h (¬c,n)). In other words, we expect the projections on vi to be a good classifier to predict the presence of c. Following Suau et al. (2022), we measure the average precision APc i of this classifier to determine whether the concept is encoded in direction vi. We set a threshold APc i ≥ 0.9 to establish that the concept c is encoded in vi 4. 4All the below conclusions still hold in the same way if we change the threshold to other values such as 0.8, 0.85, or 0.95. Interpreting privileged directions. We look at the concepts with highest AP in the privileged direction of each robust model represented in Figure 2b (i.e. robust and fine-tuned CLIP). In the following, we illustrate our insights about which concepts are encoded in the privileged direction on the OpenAI pretrained models. Note that the following discussion generalizes well to the other robust models, for which we report the top-3 concepts in Appendix E.3. Interestingly, the most privileged direction of both zero-shot OpenAI ViTs encode the same top-3 concepts: meshed, flecked and *perforated*. The most privileged direction of the ResNet50 also encodes concepts related to textures, with the *knitted* and *chequered* concepts. The most privileged direction of the ResNet101 encodes concepts with high colour contrasts, with the moon bounce, *inflatable bounce game* and *ball pit* concepts. We note that all these concepts describe regular alternating patterns, either through the presence/absence of holes or through the variation of colours. Let us now discuss the concepts encoded in the privileged directions of finetuned models. We find that finetuning replaces the above texture-related concepts by less abstract and more concrete concepts. After finetuning, the concept that are best encoded in the privileged directions are *martial art gym* for the ViT-B/16, *tennis court* for the ViT-B/32, *mountain pass* for the ResNet50 and *flight of stairs* for the ResNet101. All of these concepts are substantially less generic than the ones encoded in the zero-shot models. Take-away 3. We qualitatively observe that privileged directions of robust zero-shot CLIP models tend ![8_image_0.png](8_image_0.png) to encode rather generic texture information, while fine-tuning tends to replace these generic concepts in privileged directions by less abstract and more concrete concepts.. Figure 3: Results of the unique concept analysis, showing total number of unique Broden concepts encoded in last layers, as in Equation (4)*. Zero-shot CLIP models encode substantially more concepts than supervised* models. Number of unique concepts. Let us now discuss the representation spaces of various models beyond privileged directions. A first way to characterize a representation space as a whole is to simply count the number of unique concepts they encode. In other words, for the representation space of each model, we evaluate $\mathcal{C}:=\{c\in[C]\mid\text{AP}_{i}^{c}\geq0.9\text{for some}i\in[\text{rank}(W)]\}$. $$N_{\mathrm{unig}}$$ $\sigma$ Nunique := |C| C := {c ∈ [C] | APc i ≥ 0.9 for some i ∈ [rank(W)]} . (4) We report the number of unique concepts encoded in each type of model from our pool in Figure 3. We notice that robust zero-shot CLIP models encode substantially more concepts than their fine-tuned and supervised counterparts, which thus at first might appear to be another signature of robustness. However, $$\left({4}\right)$$ also some of the non-robust zero-shot ImageNet-Captions pretrained CLIP models encode a relatively high number of unique concepts, namely ImageNet-Captions-t and ImageNet-Captions-td. On the other hand, ImageNet-Captions-tdt, which also includes (potentially less diverse) image tags in the text associated to each image, does not encode that many concepts, in fact only barely more than the supervised ResNet50. Thus, the number of unique concepts in the model features seems to be more strongly influenced by the type and diversity of the text associated to images during training and their inclusion into the training objective, rather than by the model robustness or a root cause thereof. We can further compare the set of concepts encoded for the other architectures, by producing their *Venn* diagrams in Figure 4 (ImageNet-Captions models omitted here for better comparability among backbones). In each case, the most significant section of the Venn diagrams is the overlap between all 3 model types (this ranges from 237 concepts for RN-101 to 516 concepts for ViT-B/16). This suggests that all 3 models share a large pool of features that are useful for each respective task the models were trained on. This is confirmed by Figure 5, where we track the size of overlap of the finetuned concepts with all sections of the Venn diagram (normalized by the number of finetuned concepts |Cfine|) during finetuning. As we can see, finetuning increases the amount of concepts that are shared by the zero-shot, finetuned, and supervised models, since Cfine ∩ Czero ∩ Csup increases. Interestingly, the overlap with concepts specific to the supervised models ((Cfine ∩ Cfine) \ Czero) does not always increase substantially, which suggests that finetuning does not necessarily encode concepts that are exclusively learned in a supervised setting. In agreement with Figure 3, the Venn diagrams show that zero-shot models encode many concepts that are unknown to fine-tuned and supervised models (this ranges from 77 concepts for ViT-B/16 to 105 concepts for RN-50). ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 4: *Overlap between the concepts encoded in the representation space of different models for each* OpenAI models. Zero-shot models encode many concepts not encoded other models. Connection to polysemanticity. A large number of encoded concepts can come at the cost of interpretability. As explained by Olah et al. (2020), superposing many concepts in a given representation space creates *polysemantic features*. Those features correspond to directions of the representation spaces that encode several unrelated concepts, which makes the interpretation of such features challenging. Polysemantic features are typically identified by using feature visualization to construct images that maximally activate the unit (neuron/representation space direction) of interest (Olah et al., 2017). A manual inspection of these images permits to identify that several concepts are present in the image maximizing the unit activation. A manual feature visualizations for each RSV vi of each model in our pool would be prohibitively expensive. For this reason, we use a *proxy* for polysemanticity based on the Broden dataset. For each RSV vi, we count the number of concepts encoded in the corresponding direction of the representation space Nconcept(i) := |{c ∈ [C] | APc i ≥ 0.9}|. The higher this number is, the more likely it is that the feature corresponding to viis polysemantic. As a measure of polysemanticity for the model, we simply average this number over all singular vectors: polysemanticity := d −1 H PdH i=1 Nconcept(i). By measuring this number for all zero-shot models, we found that this ranges between polysemanticity = 3 for the OpenAI ResNet 50 and polysemanticity = 16 for the LAION-2B ViT-B/16. By looking at the complete results in Appendix E.4, we also note that most CLIP models are on the higher side of this range, with typically more than 10 concepts per direction on average. Since the Broden dataset has no duplicate concepts, we deduce that these models are highly polysemantic. Take-away 4. Robust zero-shot CLIP models encode more concepts than their non-robust supervised counterparts. However, our experiments with non-robust ImageNet-Captions pretrained zero-shot CLIP models suggest that this stems from language supervision, rather than being related to model robustness. Hence, a larger number of encoded concepts is not a signature of robustness. The large number of concepts encoded in CLIP models makes these models polysemantic. ## 6 Related Work Interpretability and CLIP models. A number of works previously studied CLIP from a modelcentric/interpretability perspective. We can broadly divide these works into two categories. (1) The first body of work, like ours, uses interpretability methods to gain a better understanding of CLIP. For instance, Li et al. (2022) analyze saliency maps of CLIP models and found that they tends to focus on the background in images. Goh et al. (2021) analyzed CLIP ResNets and found *multimodal neurons*, that respond to the presence of a concept in many different settings. (2) The second body of work leverages CLIP to explain other models. For instance, Jain et al. (2022) use CLIP to label hard examples that are localized as a direction in any model's representation space. Similarly, Oikarinen & Weng (2022) use CLIP to label neurons by aligning their activation patterns with concept activation patterns on a probing set of examples. To the best of our knowledge, our work is the first to leverage interpretability to better understand robustness of CLIP to natural distribution shifts. Outlier features in foundation models. Outlier features in foundation models were first discovered in LLMs by Dettmers et al. (2022). Those features are found to have an adverse effect on the model quantization. The reason for which outlier features appear in LLMs is yet unknown. Elhage et al. (2023) investigated several possibles causes (such as layer normalization), but found no conclusive explanation. They conclude that the emergence of outlier features is most likely a relic of Adam optimization. Bondarenko et al. (2023) found that outlier features in transformers assign most of their mass to separator tokens and that modifying the attention mechanism (by clipping the softmax and using gated attention) decreases the amount of outlier features learned during pretraining. To the best of our knowledge, our work is the first to discuss outlier features outside of language and transformer models. We can also offer an alternative explanation for their emergence (see end of Section 4). Overall, our work shows that outlier features are more universal phenomena than previously known, and motivates further research to understand the mechanisms at play. ## 7 Discussion The goal of this work was to generate a better understanding of what models that are robust to distribution shifts around ImageNet have learned from data that distinguishes them from non-robust models. To this ![11_image_0.png](11_image_0.png) Figure 5: *Overlap of the finetuned model concepts with zero-shot and supervised models during finetuning,* normalized at each epoch by the number of finetuned concepts |Cfine|*. The overlap with the zero-shot-only* (i.e. not overlapping with supervised) concepts decreases (blue curve), while concepts shared with zero-shot and supervised models increases (green curve). end, we conducted a thorough investigation of the representation spaces of robust CLIP models and their non-robust counterparts, analyzing a total of 39 models. We found outlier features (Dettmers et al., 2022; Elhage et al., 2023) to be a signature of robustness, since they can only be found in models that are robust to ImageNet distribution shifts. To the best of our knowledge, this is also the first time that outlier features were observed outside of language and transformer models. Since the presence of outlier features can be detected without access to the shifted datasets, we believe that they could be a useful tool for practitioners to get a feeling for the distribution shift robustness of a pretrained model during deployment, when the exact form of distribution shift is typically unknown. Interestingly, we can also validate this signature on robust non-CLIP models (CoCa, Yu et al. (2022)) in Appendix D. Lastly, we found the number of encoded concepts in the representation space to be rather related to the type of language supervision than to be a signature of robustness, since they can also be found in nonrobust ImageNet-Captions CLIP models and their number varies with the type of text used for language supervision. It would be interesting to further investigate this hypothesis by creating different types of language supervision for identical images, potentially leveraging state-of-the-art multimodal models, to create even richer signals. In general, we believe an interesting avenue for future research would be to extend the analysis of this work to dataset shifts beyond the ImageNet family, to see if our analysis remains relevant beyond the much investigated ImageNet shifts. ## References Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-ofdistribution robustness throughout fine-tuning. *arXiv preprint arXiv:2106.15831*, 2021. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv* preprint arXiv:1907.02893, 2019. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. *Advances in neural information processing systems*, 32, 2019. David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In *Computer Vision and Pattern Recognition*, 2017. Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. Quantizable transformers: Removing outliers by helping attention heads do nothing. *arXiv preprint arXiv:2306.12929*, 2023. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In *CVPR*, 2021. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 2818–2829, 2023. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. *arXiv preprint arXiv:2208.07339*, 2022. Benjamin Devillers, Bhavin Choksi, Romain Bielawski, and Rufin VanRullen. Does language help generalization in vision models? *arXiv preprint arXiv:2104.08313*, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. Toy models of superposition. arXiv preprint arXiv:2209.10652, 2022. Nelson Elhage, Robert Lasenby, and Christopher Olah. Privileged bases in the transformer residual stream, 2023. URL https://transformer-circuits.pub/2023/privileged-basis/index.html. Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal Shankar, Achal Dave, and Ludwig Schmidt. Data determines distributional robustness in contrastive language image pre-training (clip). In International Conference on Machine Learning, pp. 6216–6234. PMLR, 2022. Alex Fang, Simon Kornblith, and Ludwig Schmidt. Does progress on imagenet transfer to real-world datasets? arXiv preprint arXiv:2301.04644, 2023. Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. Multimodal neurons in artificial neural networks. *Distill*, 6(3):e30, 2021. Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with hilbert-schmidt norms. In *International conference on algorithmic learning theory*, pp. 63–77. Springer, 2005. Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1134–1144, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arxiv e-prints. *arXiv preprint arXiv:1512.03385*, 10, 2015. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 16000–16009, 2022. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. *ICCV*, 2021a. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. *CVPR*, 2021b. Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, 2021. URL https://doi.org/10.5281/zenodo.5143773. Saachi Jain, Hannah Lawrence, Ankur Moitra, and Aleksander Madry. Distilling model failures as directions in latent space. *arXiv preprint arXiv:2206.14754*, 2022. Yibo Jiang and Victor Veitch. Invariant and transportable representations for anti-causal domain shifts. Advances in Neural Information Processing Systems, 35:20782–20794, 2022. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In *International Conference on Machine Learning*, pp. 3519–3529. PMLR, 2019. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. *arXiv preprint arXiv:2202.10054*, 2022. Yi Li, Hualiang Wang, Yiqun Duan, Hang Xu, and Xiaomeng Li. Exploring visual interpretability for contrastive language-image pre-training. *arXiv preprint arXiv:2209.07046*, 2022. Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, and Alexander D'Amour. Causally motivated shortcut removal using auxiliary labels. In *International Conference on Artificial Intelligence* and Statistics, pp. 739–766. PMLR, 2022. John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. The effect of natural distribution shift on question answering models. In *International Conference on Machine Learning*, pp. 6905–6916. PMLR, 2020. Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is being transferred in transfer learning? Advances in neural information processing systems, 33:512–523, 2020. Thao Nguyen, Gabriel Ilharco, Mitchell Wortsman, Sewoong Oh, and Ludwig Schmidt. Quality not quantity: On the interaction between dataset design and robustness of clip. *arXiv preprint arXiv:2208.05516*, 2022. Tuomas Oikarinen and Tsui-Wei Weng. Clip-dissect: Automatic description of neuron representations in deep vision networks. *arXiv preprint arXiv:2204.10965*, 2022. Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. *Distill*, 2(11):e7, 2017. Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. *Distill*, 5(3):e00024–001, 2020. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do cifar-10 classifiers generalize to cifar-10? *arXiv preprint arXiv:1806.00451*, 2018. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In *International conference on machine learning*, pp. 5389–5400. PMLR, 2019. Adam Scherlis, Kshitij Sachan, Adam S Jermyn, Joe Benton, and Buck Shlegeris. Polysemanticity and capacity in neural networks. *arXiv preprint arXiv:2210.01892*, 2022. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. *arXiv preprint arXiv:2210.08402*, 2022. Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, and Yao Qin. Effective robustness against natural distribution shifts for models with different training data. arXiv preprint arXiv:2302.01381, 2023. Xavier Suau, Luca Zappella, and Nicholas Apostoloff. Self-conditioning pre-trained language models. In International Conference on Machine Learning, pp. 4455–4473. PMLR, 2022. Anand Subramanian. Torch cka, 2021. URL https://github.com/AntixK/PyTorch-Model-Compare. Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. *arXiv preprint arXiv:2306.11695*, 2023. Mingjie Sun, Xinlei Chen, J Zico Kolter, and Zhuang Liu. Massive activations in large language models. arXiv preprint arXiv:2402.17762, 2024. Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. Advances in Neural Information Processing Systems, 33:18583–18599, 2020. Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. *Communications of the ACM*, 59 (2):64–73, 2016. TorchVision maintainers and contributors. Torchvision: Pytorch's computer vision library. https://github. com/pytorch/vision, 2016. Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In *Advances in Neural Information Processing Systems*, pp. 10506–10518, 2019. Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In *International Conference on Machine Learning*, pp. 23965–23998. PMLR, 2022a. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7959–7971, 2022b. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. *arXiv preprint arXiv:2205.01917*, 2022. ![16_image_0.png](16_image_0.png) Figure 6: *Accuracies of baseline models and OpenAI CLIP models on ImageNet and (average) accuracies* on its five natural shifts (ImageNet-V2, ImageNet-R, ImageNet-Sketch, ImageNet-A, ObjectNet). The zeroshot OpenAI CLIP models accuracies are substantially above the baseline fit, i.e., they have high ER. The fine-tuned models are closer to the baseline fit, i.e., they have lower, but still significant ER. ## A Computing Effective Robustness In this section, we define *Effective Robustness* (ER). Context. ER has emerged as a natural metric to measure how the performance of a model on a reference distribution (in-distribution) generalizes to natural shifts of this distribution (Fang et al., 2022). When plotting the in-distribution accuracy (X-axis, *logit scaling*) against the average shifted-distribution accuracy (Y-axis, *logit scaling*) of various architectures trained on ImageNet, Taori et al. (2020) found that most of the existing models lie on the same line. They also found that models trained with *substantially* more data lie above this line, showing a desirable gain in shifted-distribution accuracy for a fixed in-distribution accuracy. They coined this vertical lift above the line as *Effective Robustness*. Computing ER. To quantify ER, following Taori et al. (2020), one gathers the ImageNet test accuracy ACC(I) and the average accuracy over the ImageNet shifts ACC(S) of a set of reference models trained on ImageNet and fits a linear model on this pool of accuracies to map logit[ACC(I)] to logit[ACC(S)], with the logit function logit : [0, 1] → R defined as x 7→ ln(x) − ln(1 − x). The resulting line can be used to predict what (logit) accuracy we would expect to see on the ImageNet shifts, given a (logit) accuracy on the original ImageNet. Given a new model that has accuracy ACC(I) on ImageNet and average accuracy ACC(S) on the canonical ImageNet shifts, ER is computed as: $\mathrm{ER}(\mathsf{ACC}(S),\mathsf{ACC}(I)):=$ $\mathsf{ACC}(S)-\mathrm{logit}^{-1}\left[\beta_{1}\,\mathrm{logit}\left[\mathsf{ACC}(I)\right]+\beta_{0}\right].$ By fitting a line on the baseline accuracies collected by Taori et al. (2020), we get a slope of β1 = .76 and an intercept of β0 = −1.49, with a Pearson correlation r = .99. This line, along with the baseline models, can be observed in Figure 6. $$\left(5\right)$$ ## B Model Accuracies On Imagenet And Its Shifts In Table 1 we show the top-1 accuracies that the models analysed in the main part of the paper achieve on the ImageNet test set, and in Table 2 the test accuracies on the five shifted datasets (averaged). | Backbone | Pretraining data | Zero-shot CLIP | Finetuned CLIP | ImageNet supervised | |-----------------------|--------------------|------------------|------------------|-----------------------| | OpenAI | 60 % | 76 % | | | | ResNet50 | YFCC-15M | 32 % | 69 % | | | CC-12M | 36 % | 69 % | 70 % | | | ImageNet-Captions-t | 25 % % | N.A. | | | | ImageNet-Captions-td | 27 | | | | | ImageNet-Captions-tdt | 28 % | | | | | ResNet101 | OpenAI | 62 % | 78 % | 71 % | | YFCC-15M | 34 % | 72 % | | | | OpenAI | 68 % | 81 % | | | | LAION-400M | 67 % | 80 % | | | | ViT-B-16 | 80 % | | | | | LAION-2B | 70 % | 81 % | | | | DataComp | 63 % | 78 % | | | | OpenAI | 63 % | 78 % | | | | LAION-400M | 60 % | 76 % | 75 % | | | ViT-B-32 | LAION-2B | 66 % | 76 % | | | OpenAI | 75 % | 85 % | | | | ViT-L-14 | N.A. | | | | | LAION-400M | 73 % | 84 % | | | | LAION-2B | 74 % | 84 % | | | | DataComp | 79 % | 85 % | | | Table 1: *ImageNet test set accuracies for the models under investigation* | Backbone | Pretraining data | Zero-shot CLIP | Finetuned CLIP | ImageNet supervised | |-----------------------|--------------------|------------------|------------------|-----------------------| | OpenAI | 44 % | 43 % | | | | ResNet50 | YFCC-15M | 18 % | 30 % | | | CC-12M | 27 % | 36 % | 30 % | | | ImageNet-Captions-t | 9 % % | N.A. | | | | ImageNet-Captions-td | 9 | | | | | ImageNet-Captions-tdt | 10 % | | | | | ResNet101 | OpenAI | 49 % | 48 % | 31 % | | YFCC-15M | 21 % | 33 % | | | | OpenAI | 60 % | 55 % | | | | LAION-400M | 56 % | 56 % | | | | ViT-B-16 | 41 % | | | | | LAION-2B | 60 % | 58 % | | | | DataComp | 52 % | 53 % | | | | OpenAI | 50 % | 45 % | | | | LAION-400M | 48 % | 48 % | 35 % | | | ViT-B-32 | LAION-2B | 54 % | 48 % | | | OpenAI | 72 % | 67 % | N.A. | | | LAION-400M | 64 % | 64 % | | | | ViT-L-14 | LAION-2B | 66 % | 66 % | | | DataComp | 76 % | 71 % | | | Table 2: *Average test set accuracies on ImageNet-V2, ImageNet-R, ImageNet-Sketch, ImageNet-A, ObjectNet* for the models under investigation. ## C Zero-Shot And Finetuned Models' Differences Are Localized In this appendix, we apply central kernel alignment (CKA) to identify where changes between robust zeroshot CLIP models and their less robust fine-tuned counterparts occur. Kornblith et al. (2019) introduce the CKA metric to quantify the degree of similarity between the activation patterns of two neural network layers. It takes two batch of activation vectors a and b, it computes their normalized similarity in terms of the Hilbert-Schmidt Independence Criterion (HSIC, Gretton et al. (2005)): $$\operatorname{CKA}(\mathbf{a},\mathbf{b}):={\frac{\operatorname{HSIC}(\mathbf{a},\mathbf{b})}{\sqrt{\operatorname{HSIC}(\mathbf{a},\mathbf{a})}{\sqrt{\operatorname{HSIC}(\mathbf{b},\mathbf{b})}}}}$$ We use the PyTorch-Model-Compare package (Subramanian, 2021) to compute this metric between the activation vectors of zero-shot models and their finetuned counterparts for each layer in the backbone. The results are shown in Figure 7 and Figure 8. Across architectures and pretraining sets, we find that there is often a large drop in CKA between zero-shot and finetuned models occurring in the last layer. This makes the activations in the last layer a particularly interesting layer to analyse when investigating ER, as fine-tuned models typically have only half the ER of their zero-shot counterpart (see Figure 1a). ## D Robustness Signatures In Non-Clip Models In addition to the CLIP models investigated in the main paper, we below investigate CoCa models (Yu et al., 2022) pre-trained on LAION-2B as another set of robust multimodal models which do not fall into the CLIP family. We see that our findings extend to these non-CLIP multimodal models as well. First, in Table 3, we confirm that these models have high effective robustness when used as zero-shot classifiers. As for the other models in the main part of the paper, we observe that finetuning on ImageNet decreases the effective robustness of these classifiers. Table 3: *ER for the models of the CoCa family, as calculated by Equation* (5) *(accuracies on ImageNet* shown in brackets). We see that also for these models, ER decreases from Zero-shot to Finetuned. | Backbone | CoCa Zero-shot | CoCa Finetuned | |------------|------------------|------------------| | ViT-B-32 | 24% (64%) | 14% (76%) | | ViT-L-14 | 34% (76%) | 21% (84%) | Next, in Table 4, we show that all the zero-shot models have high kurtosis, which implies the existence of outlier features in their representation space. Additionally, we show that finetuning again decreases the kurtosis. | Backbone | CoCa Zero-shot | CoCa Finetuned | |------------|------------------|------------------| | ViT-B-32 | 12.0 | 3.6 | | ViT-L-14 | 15.5 | 4.6 | Table 4: *Results of the kurtosis analysis showing outlier features present also in robust models with ViT-L-14* backbone or of CoCa family, but disappearing as soon as models are finetuned. Values calculated according to Equation (1) *over all ImageNet test examples.* | Backbone | CoCa Zero-shot | CoCa Finetuned | |------------|------------------|------------------| | ViT-B-32 | 674 | 530 | | ViT-L-14 | 747 | 629 | Finally, in Table 5, we see that zero-shot model encodes more concepts. Again, we see that finetuning removes some concepts from the model's representation space. However, from our experiments with ImageNetCaptions CLIP models, we know that this does not correspond to a robustness signature. Table 5: Results of the unique concept analysis, showing total \# *of unique Broden concepts encoded in last* layers, as in Equation (4)*. Zero-shot models encode substantially more concepts.* ![19_image_1.png](19_image_1.png) ![19_image_0.png](19_image_0.png) Figure 7: Result of layer by layer CKA comparison between zero-shot CLIP and its counterpart that was finetuned on ImageNet for various backbones and pretraining sets (Part 1). In orange, CKA between activation vectors on ImageNet test set. In blue, CKA between activation vectors on shifted ImageNet sets (average as solid line, standard deviation in shaded blue). Typically, we see large drops of CKA in the last layer. ![20_image_0.png](20_image_0.png) Figure 8: Result of layer by layer CKA comparison between zero-shot CLIP and its counterpart that was finetuned on ImageNet for various backbones and pretraining sets (Part 2). In orange, CKA between activation vectors on ImageNet test set. In blue, CKA between activation vectors on shifted ImageNet sets (average as solid line, standard deviation in shaded blue). Typically, we see large drops of CKA in the last layer. ## E Further Experiment Results E.1 Importance Score Analysis For Each Model In Figure 9 and Figure 10 we show the the full distribution of importance scores (Equation (2)) over all representation space directions for all models. We see that robust zero-shot CLIP models and finetuned CLIP models have one strongly privileged direction, typically orders of magnitude larger than the bulk of importance scores. For the non-robust models trained on ImageNet and ImageNet-Caption, this single outlier value in importance scores does not exist. ## E.2 Pruning Analysis For Each Model Figure 11 and Figure 12 show the effect of gradually pruning the least important SV of W on ER and ACC for all models that were not pretrained on the OpenAI pretraining dataset, similar to the analysis shown in Figure 13 in the main paper. Qualitatively, our finding from the main paper is confirmed across the remaining models we investigate: A small subset (typically around 20%) of privileged directions in representation space explain the high ER of zero-shot models. The remaining directions can be pruned without significantly impacting neither ER nor ACC. ## E.3 Top-3 Concepts In Most Dominant Direction For Each Model We look at the concepts with highest AP in the most privileged direction of each model represented in Figure 9, similar to what we did in Section 5 for the most privileged direction of each model in Figure 2b. The results are shown in Table 6. Again, we observe that for the majority of cases, the robust zero-shot models encode concepts related to textures as their top concepts encoded along the privileged directions (e.g., scaly, *meshed*, or *matted*), while the less robust finetuned models encode more concrete concepts (e.g., carrousel, *book stand*, or *pantry*). Table 6: Top-3 concepts with highest AP encoded in the most privileged direction of each model. For each concept, the AP *is included in brackets.* ## E.4 Polysemanticty For Each Model We report the polysemanticity metric computed as per Section 5 for all zero-shot models in Table 7. As claimed in the paper, this ranges from polysemanticity = 3 for the OpenAI ResNet 50 to polysemanticity = 16 for the LAION-2B ViT-B/16, with typically more than 10 concepts encoded in one direction on average. ![22_image_0.png](22_image_0.png) Figure 9: Distribution of importance over all RSVs in the representation space for all models with ResNet50, ResNet101, and ViT-B-16 backbones. Robust zero-shot CLIP models (blue) have one strongly privileged direction. Fine-tuned models (green) still exhibit one strong privileged direction, but with lower importance than the robust zero-shot models. The supervised models (orange) do not have them at all. The non-robust ImageNet-Caption models (red) do have a few directions that are a little bit larger than the bulk, but not separated by orders of magnitude like for the robust and fine-tuned CLIP models. ![23_image_0.png](23_image_0.png) Figure 10: Distribution of importance over all RSVs in the representation space for all models with ViT-B32 and ViT-L-14 backbones. Robust zero-shot CLIP models (blue) have one strongly privileged direction. Fine-tuned models (green) still exhibit one strong privileged direction, but with lower importance than the robust zero-shot models. The supervised models (orange) do not have them at all. ![24_image_0.png](24_image_0.png) ![24_image_1.png](24_image_1.png) Figure 11: Effect of gradually pruning the least important SV of W on ER and ACC for LAION-400M, LAION- 2B, and DataComp pretrained models. ![25_image_0.png](25_image_0.png) ![25_image_2.png](25_image_2.png) ![25_image_3.png](25_image_3.png) ![25_image_1.png](25_image_1.png) (d) ACC - CC-12M pretrained Figure 12: Effect of gradually pruning the least important SV of W on ER and ACC for YFCC-15M and CC-12M pretrained models. | Backbone | Pretraining data | Polysemanticity | |-----------------------|--------------------|-------------------| | OpenAI | 3.0 | | | ResNet50 | YFCC-15M | 6.4 | | CC-12M | 5.3 | | | ImageNet-Captions-t | 10.5 | | | ImageNet-Captions-td | 10.5 | | | ImageNet-Captions-tdt | 4.5 | | | ResNet101 | OpenAI | 3.5 | | YFCC-15M | 7.8 | | | OpenAI | 14.5 | | | LAION-400M | 11.9 | | | ViT-B-16 | LAION-2B | 16.0 | | DataComp | 14.1 | | | OpenAI | 14.1 | | | LAION-400M | 11.5 | | | ViT-B-32 | LAION-2B | 11.1 | Table 7: Polysemanticity of zero-shot CLIP models, showing average # *of concepts encoded in RSV of last* layer (i.e. with AP ≥ 0.9 *on Broden dataset concepts).* ## F Intuition Behind Generalization Of Outlier Features Below, we give more details and an intuition why outlier features can be generalized from the canonical basis {e1*, . . . , e*dH } (we can write hi = Projei (h)) to be any set of directions of the representation space that receive a projection substantially above average: Let us assume, for instance, that two of the elements in the canonical basis e1 and e2 correspond to outlier features. This means that an activation vector h related to an input image x has projections h1 = Proje1 (h) and h2 = Proje2 (h) substantially above the average h1, h2 ≫ n −1 Pn i=1 hi. Now let us define a new unit vector e ′ 1 = 2−1/2(e1 + e2). We deduce that the projection onto this vector is also substantially higher than average h ′ 1 = Proje ′ 1 (h) = 2−1/2(h1 + h2) ≫ n −1 Pn i=1 hi. Hence, the unit vector e ′ 1 can be considered as an outlier feature in a new non-canonical basis. In general, we can extend the notion of *outlier features* to any vector in the span{e1*, . . . , e*n ∈ R n}. ## G Pruning Results Pruning non-privileged directions. Given that we have established that outlier features induce privileged directions in representation space, it seems interesting to check their role in model performance. To that aim, we gradually prune each RSV vi by increasing order of σi by setting σi ← 0 in the singular value expansion5 W =Prank(W) i=1 σiuiv ⊺ i . By pruning a variable proportion of the singular vectors, we obtain the results in Figure 13. We see that the 80% least important RSV of the representation space can be pruned without a substantial effect on performance, i.e. that the robust models are low-rank in their last layer where they have privileged directions. When extending the pruning experiment to finetuned CLIP models and supervised models trained only with ImageNet, we make the two interesting observations from these new results (see Figure 14): All models are low-rank. For all the models (zero-shot, finetuned and supervised), the performances are not substantially affected if we remove the 80% least important singular directions of their representation space (compare to Table 1). This shows that many existing models admit good low-rank approximations. This also demonstrates that the fact that these models are low-rank is not necessarily a signature of robustness. 5Note that sorting the RSV vi by increasing σi is similar to sorting the RSV by increasing Importance(i), since these two variables are related by a Spearman rank correlation ρ = 96% ![27_image_0.png](27_image_0.png) ![27_image_1.png](27_image_1.png) Figure 13: Effect of gradually pruning the least important SV of W on ER and ACC. The least 80% important SV can be pruned without any substantial effect. ![27_image_2.png](27_image_2.png) Figure 14: Extension of pruning results to finetuned and ImageNet supervised models. Zero-shot models obtained with OpenAI pretraining set. Faster drop for supervised models. When the number of ablated singular values ranges between 80% − 100%, we see that the ImageNet accuracy of supervised models drop substantially faster than the accuracy of the finetuned and the zero-shot models. In fact, for the ResNet50, the ImageNet accuracy curves even cross. This implies that the most important direction of the zero-shot model's representation space better discriminate between ImageNet classes than the most important directions of the supervised model's representation space. In the former case, these directions correspond to the zero-shot model's privileged directions. We believe that this new result further reinforces the importance of privileged directions to understand the performances of robust models. ## H Details On Experiments H.1 Finetuned Clip Models To obtain the finetuned CLIP models, we proceed as follows. We start from building the zero-shot CLIP models as described in Section 3. As Wortsman et al. (2022b), we then finetune these models for 10 epochs on the ImageNet training set, using a batch size of 256 and a learning rate of 3 · 10−5 with a cosine annealing learning rate scheduler and a warm-up of 500 steps. We use the AdamW optimizer and set the weight decay to 0.1. ## H.2 Supervised Imagenet Models We note that the ResNets used by Radford et al. (2021) have small modifications, such as the usage of attention pooling. Unfortunately, we are not aware of any public weights for such modified architectures trained on ImageNet from scratch. We thus train these modified ResNet models from scratch for 90 epochs on the ImageNet training set, using a batch size of 1024. We use AdamW, and a learning rate schedule decaying from 10−3to 10−4 after 30 epochs and to 10−5 after 60 epochs (with a warm-up period of 5,000 steps). We set weight decay to 10−2. We use the standard augmentations of horizontal flip with random crop as well as label smoothing. For the ViT models, loadable checkpoints with identical architectures were available from torchvision (TorchVision maintainers and contributors, 2016), and we thus use those directly. ## I Analysis Of Wise-Ft Models In this appendix, we use the approach of Wise-FT (Wortsman et al., 2022b) to obtain a continuous spectrum of ER. Given a zero-shot model fθ0 with weights θ0 ∈ Θ and a finetuned model fθ1 with weights θ1 ∈ Θ, Wortsman et al. (2022b) propose to interpolate between the two models in weight space. This is done by taking a combination θα := (1 − α)· θ0 + α · θ1 for some interpolation parameter α ∈ [0, 1]. One then defines a new model fθα based on the interpolated weights. Surprisingly, interpolating between zero-shot CLIP models and finetuned CLIP models produce models with good performances. To illustrate that, we perform the Wise-FT interpolation with all the models from our pool. We report the ImageNet & shift accuracies of these models in Figures 15 and 16. For the OpenAI and LAION models in Figure 15, we observe that the shift accuracy of interpolated models often surpass both the zero-shot and the finetuned models. The YFCC-15M and CC-12M models in Figure 16 exhibit a different trend: both ImageNet & shift accuracies increase monotonically as α sweeps from zero-shot to finetuned. This is likely due to the low accuracy of the corresponding zero-shot models. By analyzing the ER of interpolated OpenAI and LAION models in Figure 17, we see that the ER gradually degrades as α sweeps between the zero-shot and the finetuned models. Interestingly, the ER of YFCC-15M and CC-12M models in Figure 18 peaks at α = .4 and then decreases monotonically. Let us now look at how our ER signatures evolve as we sweep α between zero-shot and finetuned models. Ideally, if these signatures are good ER proxies, they should exhibit similar trends as the ones described in the previous paragraph. For the OpenAI and LAION models, we indeed observe in Figures 19 and 21 that the kurtosis and the number of unique encoded concepts gradually decrease as α sweeps from zero-shot to finetuned models. Similarly, we observe in Figures 20 and 22 that these two metrics start to substantially after α = 0.4 for the YFCC-15M and CC-12M models. This suggests that these two metrics constitute a good proxy to track how the ER of a given model evolves. Note that the Wise-FT idea has since been generalized to a combination of several finetuned models by Wortsman et al. (2022a) with *model soups*. We leave the investigations of model soups for future work. ![29_image_1.png](29_image_1.png) ![29_image_0.png](29_image_0.png) Figure 15: *Accuracies on ImageNet & shifts for Wise-FT models (1/2).* ![30_image_0.png](30_image_0.png) Figure 16: *Accuracies on ImageNet & shifts for Wise-FT models (2/2).* ![31_image_1.png](31_image_1.png) ![31_image_0.png](31_image_0.png) ![31_image_2.png](31_image_2.png) Effective Robustness Effective Robustness Effective Robustness Effective Robustness Figure 17: *ER for Wise-FT models (1/2).* RN-50 YFCC-15M 0.0 0.2 0.4 0.6 0.8 1.0 ![32_image_0.png](32_image_0.png) ![32_image_1.png](32_image_1.png) Zeroshot Finetuned Figure 18: *ER for Wise-FT models (2/2).* ![33_image_0.png](33_image_0.png) ![33_image_1.png](33_image_1.png) Figure 19: *Activation kurtosis for Wise-FT models (1/2). The activation kurtosis is computed for both* ImageNet tests and ImageNet shifts. ![34_image_0.png](34_image_0.png) Figure 20: Activation kurtosis for Wise-FT models (2/2). The activation kurtosis is computed for both ImageNet tests and ImageNet shifts. ![35_image_1.png](35_image_1.png) ![35_image_0.png](35_image_0.png) Figure 21: *Number of unique concepts encoded in Wise-FT models (1/2).* ![36_image_0.png](36_image_0.png) ![36_image_1.png](36_image_1.png) 0.0 0.2 0.4 0.6 0.8 1.0 Zeroshot Finetuned Figure 22: *Number of unique concepts encoded in Wise-FT models (2/2).* ## J Further Literature Defining CLIP ER. The definition of ER crucially relies on the observation in multiple works that the model performance on natural shifts is linearly related to its performance in-distribution when both quantities are plotted with a logit scaling (Recht et al., 2018; 2019; Miller et al., 2020). We note though that there are known exceptions to this, e.g. considering out-of-distribution generalization on real-world datasets that substantially differ from the in-distribution dataset (Fang et al., 2023). Explaining CLIP ER. A first intuitive explanation for the surprisingly high effective robustness of CLIP might be the fact that the learned embeddings are endowed with semantic grounding through pretraining with text data. This hypothesis was refuted by Devillers et al. (2021), who demonstrated that the embeddings in CLIP do not offer gains in unsupervised clustering, few-shot learning, transfer learning and adversarial robustness as compared to vision-only models. In a subsequent work, Fang et al. (2022) demonstrated that the high robustness of these models rather emerges from the high diversity of data present in their training set. This was achieved by showing that pretraining SimCLR models Chen et al. (2020) on larger datasets, such as the YFCC dataset by Radford et al. (2021), *without any language supervision* matches the effective robustness of CLIP. Shi et al. (2023) reinforced this data-centric explanation by showing that the performance on the pretraining set also correlates linearly with the out-of-distribution performance. To put the emphasis on the importance of data-quality for effective robustness, Nguyen et al. (2022) showed that increasing the pretraining set size does not necessarily improve the effective robustness of the resulting model. Rather, it suggests that it is preferable to filter data to keep salient examples, as was done, e.g., to assemble the LAION dataset (Schuhmann et al., 2022). Other signatures of ER. By comparing pretrained models with models trained from scratch, Neyshabur et al. (2020) demonstrated that these models exhibit interesting differences, such as their reliance on highlevel statistics of their input features and the fact that they tend to be separated by performance barriers in parameter space. Guillory et al. (2021) found observable model behaviours that are predictive of effective robustness. In particular, the difference of model's average confidence between the in and out-of-distribution correlates with out-of-distribution performance. Polysemanticity in foundation models. Polysemantic neurons were coined by Olah et al. (2020) in the context vision model interpretability. These neurons get activated in the presence of several unrelated concepts. For instance, the InceptionV1 model has a neurons that fires when either cats or cars appear in the image. These neurons render the interpretation of the model substantially more complicated, as they prevent to attach unambiguous labels to all the neurons in a model. This will limit the insights gained by traditional interpretability techniques. For example, producing saliency maps for a polysemantic neuron could highlight many unrelated parts of an image, corresponding to the unrelated concepts this neuron is sensitive to. A qualitative analysis of the neurons in CLIP by Goh et al. (2021) showed that a CLIP ResNet has a substantial amount of polysemantic neurons. The emergence of polysemantic neurons is a complex phenomenon. It is not yet well-understood for models at scale. The latest works on the subject mostly focus on toy models, see e.g. the works of Elhage et al. (2022) and Scherlis et al. (2022). To the best of our knowledge, our work is the first to explicitly discusses the link that exists between polysemanticity and robustness to natural distribution shifts.
Review 1: Summary: This paper investigated what makes robust language-image models different from non-robust ones, particularly in handling ImageNet distribution shifts. The author probed the representation spaces of 16 robust CLIP vision encoders with various backbones and pretraining sets, comparing them to less robust models with identical backbones but different (pre)training sets or objectives. This paper provided empirical observations and studied the relationships between robustness and various concepts, such as outlier features, privileged directions, and the number of unique concepts. Strengths and Weaknesses: ## Strengths - The empirical observations provided in this paper are insightful, including: - All robust models have **outlier features**; - **Privileged directions** are crucial to model predictions, but they can be also found in non-robust models; - Robust models have a high number of **unique concepts**. - This paper is well written and contextualized. ## Weaknesses - Some statements were expressed too vaguely. My understanding is that the author empirically showed that "for all models, being robust $\to$ having outlier features" and "for all models, not being robust $\to$ not having outlier features" hold. However, "there exists a model, not being robust $\to$ having privileged directions" is also true. It would be nice if the author could clarify what "being a signature of robustness" precisely means. - Some claims were not supported by sufficient evidence. For example, the evidence of the takeaway message 3 seems too weak. I'm aware that training large models is computationally costly, but the presented evidence seems insufficient to draw a conclusion. Anyway, the difference between generic/abstract concepts and concrete concepts is subtle and too subjective. Further, I may have missed something, but the connection between the experimental results and the claims in the takeaway message 4 is unclear to me. How is the statement "high number of concepts stems from language supervision" supported and verified? - The post hoc analysis and insights are valuable, but this paper would be more impactful if the author could discuss more how we can improve the robustness and generalization ability of language-image models based on the given insights. Requested Changes: Please further clarify how the takeaway messages are supported by empirical evidence. Broader Impact Concerns: NA ================================================== Review 2: Summary: This paper studies what differentiates robust models from non-robust ones. The authors analyze a range of CLIP models with varying backbones, pretraining datasets, and robustness levels using different interpretability tools. They identify three key findings: 1. Outlier features in robust CLIP models: Robust CLIP models exhibit outlier features with significantly higher activations compared to other features. 2. Outlier features as robustness signatures: The presence of outlier features consistently distinguishes robust models from non-robust ones. This suggests that outlier features can serve as an indicator of robustness. 3. Concept richness is not solely linked to robustness: While robust CLIP models encode a high number of unique concepts, this characteristic is also observed in non-robust models trained on ImageNet-Captions. This implies that language supervision plays a significant role in enriching visual representations with human concepts. Strengths and Weaknesses: Strengths: 1. Novel findings: The paper presents novel insights into the relationship between feature characteristics and robustness in CLIP models. 2. Comprehensive analysis: This study provides a comprehensive analysis of the robustness of CLIP models by utilizing various models. 3. The paper is well-written and easy to follow. The analysis is presented logically. Weaknesses: 1. Limited explanation of outlier features: While the paper identifies outlier features as a key finding, it doesn't delve deep enough into their potential causes or implications. Further investigation into what these outlier features represent and why they emerge in robust models would significantly strengthen the paper. 2. Limited dataset: The study primarily focuses on ImageNet distribution shifts. Exploring the generalizability of the findings to other datasets would be better. Requested Changes: Adding an analysis of what these outlier features represent. Broader Impact Concerns: N/A ================================================== Review 3: Summary: Robustness is a key property for machine learning models to be useful in practice. In short, it means that a desirable property in machine learning models is that when they see slightly different data compared to what they have been trained on, they should not lose too much of their performance on the original dataset. ImageNet is a popular benchmark in image classification, with many natural shifts/slightly different variants of it. This gives us the opportunity to test if a model maintains its original performance on this slightly different variants, thereby measuring its robustness. From prior work, we have a large suite of models that work/do not work on this task. This paper asks the question: what is different between models that are robust and non-robust in this task? The authors run a rigorous study of many different models and show the following: 1. **Robustness of clip models**: Pretrained clip models used in a zero-shot way are more robust than their fine-tuned counterparts. 2. **Outlier features**: features that have a significantly high magnitude compared to the average feature magnitude. Robust models have this more than non-robust models. 3. **Privileged direction**: Robust models have strong privileged direction compared to less-robust counterparts. 4. **Concept encoded**: Robust models encode more abstract concepts than non-robust ones, whose top-3 encoded concepts are more granular/fine-grained. Robust models also encode more concepts than their non-robust counterparts. The paper uses prior methods to analyze these differences between robust and non-robust methods. While there is no causal relationship established (e.g., robustness implies existence of outlier features and vice versa), these observations can serve as easy to check conditions to estimate a model’s robustness. Strengths and Weaknesses: ## **Strengths** 1. The paper is nicely structured and well-written: it studies multiple different concepts, but they are well-organized with sections and in particular take-away boxes, which would help readers/practitioners. 2. The experiment suite of the paper is extensive and thorough. The authors test 5 backbone architectures (ResNet50, ResNet101, ViT-B/16, ViT-B/32, ViT-L/14) and many different variations (clip pretraining, fine-tuning, pretrained with ImageNet captions). This provides a very thorough picture of different attributes of robustness of all these models. To the best of my knowledge, this is the most comprehensive study of all these models. While lacking novelty, the paper can serve as a nice survey of all these pre-trained models and their robustness attributes to practitioners working on similar domains. 3. The set of observations, especially that language supervision leads to outlier features and more robust models encode more abstract concepts in comparison to fine-tuned/less robust models, is very interesting. There is potential for future research based on these observations on how to maintain robustness while fine-tuning a pre-trained model for a specific task, and how to get signal for robustness without having a test set, by measuring other attributes such as outlier features. In short, **I am happy with accepting the paper with minimal changes**, provided all my questions are answered. ## **Weaknesses** **(Section 3)** First, this way of measuring effective robustness (by comparing the performance of the model to the predicted performance based on accuracy-on-the-line) is a bit alien to me. The reason we call in-distribution performance “in-distribution”, is because this is the distribution that the model has been trained on. If the only way to break out of the linear trend is to use more diverse data, then we should consider performance on the more diverse data as “in-distribution performance”. In other words, if a model is trained on something other than ImageNet (possibly a superset, but even if it is not, the point stands), then performance on ImageNet cannot be considered as “ID performance.” For zero shot models (no fine-tuning), if our training distribution is completely unknown, we can as easily call the natural shift data as “ID” and the original ImageNet data as OOD. What does it say about the effective robustness metric? How do the numbers look in this case? **However, I note that prior work has also done it the same way, and the authors would point to [1], and so this is not a weakness of the current work.** What is a weakness of the current work is mentioning that zero-shot clip models are more robust than their fine-tuned counterparts. This has been studied, both conceptually and empirically, by [2]. They also compare other methods such as linear-probing and LP-FT. Prior work such as [3, 4] also showed effectiveness of tuning particular layers instead of the entire network. I suggest the following experiments: 1. comparison to linear-probing and LP-FT 2. Add a linear head on top of the visual encoder with C number of classes, and fine-tune only that with the ImageNet images ## **Questions** **(Contributions)** > To the best of our knowledge, this is the first time that outlier features are observed in non-language and non-transformer models. Could the authors cite papers that have observed this for language or transformer models? This citation is added later, in section 4, but should be added to the introduction as well. **(Nitpick, activation vectors)** Does the image encoder $f_v$ output the activation vector directly? Or does it output an embedding $e$, that passes through an activation function $\sigma$, to get the activation vector? This does not make much difference to the analysis, I am just curious about the notations. # References References [1] Measuring Robustness to Natural Distribution Shifts in Image Classification, https://arxiv.org/abs/2007.00644 [2] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution, https://arxiv.org/abs/2202.10054 [3] Surgical Fine-Tuning Improves Adaptation to Distribution Shifts, https://arxiv.org/abs/2210.11466 [4] Less is More: Selective Layer Finetuning with SubTuning, https://arxiv.org/abs/2302.06354 Requested Changes: Please answer the weaknesses/questions mentioned above. Broader Impact Concerns: No concerns. ==================================================
# Misspecification-Robust Sequential Neural Likelihood For Simulation-Based Inference Ryan P. Kelly r21.kelly@hdr.qut.edu.au School of Mathematical Sciences Centre for Data Science Queensland University of Technology David J. Nott standj@nus.edu.sg Institute of Operations Research and Analytics National University of Singapore David T. Frazier *david.frazier@monash.edu* Department of Econometrics and Business Statistics Monash University David J. Warne david.warne@qut.edu.au School of Mathematical Sciences Centre for Data Science Queensland University of Technology Christopher Drovandi *c.drovandi@qut.edu.au* School of Mathematical Sciences Centre for Data Science Queensland University of Technology Reviewed on OpenReview: *https: // openreview. net/ forum? id= tbOYJwXhcY* ## Abstract Simulation-based inference techniques are indispensable for parameter estimation of mechanistic and simulable models with intractable likelihoods. While traditional statistical approaches like approximate Bayesian computation and Bayesian synthetic likelihood have been studied under well-specified and misspecified settings, they often suffer from inefficiencies due to wasted model simulations. Neural approaches, such as sequential neural likelihood (SNL) avoid this wastage by utilising all model simulations to train a neural surrogate for the likelihood function. However, the performance of SNL under model misspecification is unreliable and can result in overconfident posteriors centred around an inaccurate parameter estimate. In this paper, we propose a novel SNL method, which through the incorporation of additional adjustment parameters, is robust to model misspecification and capable of identifying features of the data that the model is not able to recover. We demonstrate the efficacy of our approach through several illustrative examples, where our method gives more accurate point estimates and uncertainty quantification than SNL. ## 1 Introduction Statistical inference for complex models can be challenging when the likelihood function is infeasible to evaluate numerous times. However, if the model is computationally inexpensive to simulate given parameter values, it is possible to perform approximate parameter estimation by so-called simulation-based inference (SBI) techniques (e.g. Cranmer et al. (2020)). The difficulty of obtaining reliable inferences in the SBI setting is exacerbated when the model is misspecified (e.g. Cannon et al. (2022); Frazier et al. (2020b)). SBI methods are widely employed across numerous scientific disciplines, such as cosmology (Hermans et al., 2021), ecology (Hauenstein et al., 2019), mathematical biology (Wang et al., 2024), neuroscience (Confavreux et al., 2023; Gonçalves et al., 2020), population genetics (Beaumont, 2010) and in epidemiology to model the spread of infectious agents such as *S. pneumoniae* (Numminen et al., 2013) and COVID-19 (Ferguson et al., 2020; Warne et al., 2020). Model building is an iterative process involving fitting tentative models, model criticism and model expansion; Blei (2014) have named this process the "Box Loop" after pioneering work by Box (1976). We refer to Gelman et al. (2020) as a guide to Bayesian model building. Although methods for statistical model criticism are well-developed, there is less work on reliably refining models for SBI methods. One reason is that SBI methods are susceptible to model misspecification, which may be especially prevalent in the early stages of modelling when complexity is progressively introduced. A misspecified model may lead to misleading posterior distributions, and the available tools for model criticism may also be unreliable (Schmitt et al., 2023a). Ensuring that misspecification is detected and its negative impacts are mitigated is important if we are to iterate and refine our model reliably. However, this is often overlooked in SBI methods. Posterior predictive checks (PPCs) are frequently used for model evaluation and provide a tool to check for model misspecification (Gabry et al., 2019). For SBI, if a model is well-specified, we should be able to generate data that resembles the observed data. However, as noted in Schmitt et al. (2023a), PPCs rely on the fidelity of the posterior and can only serve as an indirect measure of misspecification. Similarly, scoring rules are another way to assess how well a probabilistic model matches the observed data (Gneiting & Raftery, 2007). In this paper, we focus on the type of misspecification where the model is unable to recover the observed summary statistics as the sample size diverges. This form of misspecification is referred to as incompatibility by Marin et al. (2014). Statistical approaches for SBI, such as approximate Bayesian computation (ABC, Sisson et al. (2018)) and Bayesian synthetic likelihood (BSL, Price et al. (2018)) have been well studied, both empirically (e.g. Drovandi & Frazier (2022)) and theoretically (e.g. Li & Fearnhead (2018), Frazier et al. (2018), David T. Frazier & Kohn (2023)). Wilkinson (2013) reframes the approximate ABC posterior as an exact result for a different distribution that incorporates an assumption of model error (i.e. model misspecification). Ratmann et al. (2009) proposes an ABC method that augments the likelihood with unknown error terms, allowing model criticism by detecting summary statistics that require high tolerances. Frazier et al. (2020a) incorporate adjustment parameters, inspired by robust BSL (RBSL), in an ABC context. These approaches often rely on a low-dimensional summarisation of the data to manage computational costs. ABC aims to minimise the distance between observed and simulated summaries, whereas BSL constructs a Gaussian approximation of the model summary to form an approximate likelihood. In the case of model misspecification, there may be additional motivation to replace the entire dataset with summaries, as the resulting model can then be trained to capture the broad features of the data that may be of most interest; see, e.g., Lewis et al. (2021) for further discussion. Neural approaches, such as SNL and neural posterior estimation (NPE), have been shown to exhibit poor empirical performance under model misspecification (e.g. Bon et al. (2023); Cannon et al. (2022); Schmitt et al. (2023a); Ward et al. (2022)). Thus, there is a critical need to develop these neural approaches so they are robust to model misspecification. Ward et al. (2022) develop robust NPE (RNPE), a method to make NPE robust to model misspecification and useful for model criticism. Cannon et al. (2022) develop robust SBI methods by using machine learning techniques to handle out-of-distribution (OOD) data. Cranmer et al. (2020) advise incorporating additional noise directly into the simulator if model misspecification is suspected. We develop a robust version of SNL, inspired by the mean adjustment approach for BSL (Frazier & Drovandi, 2021). Unlike Ward et al. (2022), who consider NPE, we focus on neural likelihood estimation, which is useful for problems where the likelihood is easier to emulate than the posterior. Our method is the first *sequential* neural approach that simultaneously detects and corrects for model misspecification. By shifting incompatible summary statistics using adjustment parameters, our method matches quite closely the posterior obtained when only considering compatible summary statistics. This is demonstrated in Figure 1, on an example discussed further in Section 4, where our method is shown to closely match the posterior density obtained using the compatible summary statistic. We further demonstrate the reliable performance of our approach on several illustrative examples. ![2_image_0.png](2_image_0.png) $$\left(1\right)$$ Figure 1: Posterior plots for a contaminated normal model (see Section 4 for details). The top left plot (a) shows the univariate density of samples generated from the true (solid) and assumed (dashed) DGPs. The top right plot (b) shows the estimated univariate SNL (dashed), RSNL (solid), RNPE (dashed) and true (dash-dotted) posterior densities for θ. The true parameter value is shown as a vertical dashed line. Plots (c) and (d) show the estimated marginal posterior (solid) and prior (dashed) densities for the components of the adjustment parameters of the RSNL method. ## 2 Background Let y = (y1*, . . . , y*n) ⊤ denote the observed data and define P (n) 0as the true unknown distribution of y. The observed data is assumed to be generated from a class of parametric models, {P (n) θ: θ ∈ Θ ⊆ R dθ }. The posterior density of interest is given by $$\pi(\theta\mid y)\propto g(y\mid\theta)\pi(\theta),$$ π(θ | y) ∝ g(y | θ)π(θ), (1) where g(y | θ) is the likelihood function and π(θ) is the prior distribution. In this paper, we are interested in models for which g(y | θ) is analytically or computationally intractable, but from which we can easily simulate pseudo-data x for any θ ∈ Θ where θ is dθ dimensional. ## 2.1 Simulation-Based Inference The traditional statistical approach to conducting inference on θ in this setting is to use ABC methods. Using the assumed DGP, these methods search for values of θ that produce pseudo-data x which is "close enough" to y, and then retain these values to build an approximation to the posterior. The comparison is generally carried out using summaries of the data to ensure the problem is computationally feasible. Let S : R n → R d, denote the vector summary statistic mapping used in the analysis, where d ≥ dθ, and n ≥ d. Two prominent statistical approaches for SBI are ABC and BSL. ABC approximates the likelihood for the summaries via the following: $$g_{\epsilon}(S(\mathbf{y})\mid\mathbf{\theta})=\int_{\mathbb{R}^{d}}K_{\epsilon}(\rho\{S(\mathbf{y}),S(\mathbf{x})\})g_{n}(S(\mathbf{x})\mid\mathbf{\theta})d\mathbf{x},$$ where ρ{S(y), S(x)} measures the discrepancy between observed and simulated summaries and Kϵ(·) is a kernel that allocates higher weight to smaller ρ. The bandwidth of the kernel, ϵ, is often referred to as the tolerance in the ABC literature. The above integral is intractable, but can be estimated unbiasedly by drawing m ≥ 1 mock datasets x1*, . . . ,* xm ∼ P (n) θand computing $${\hat{g}}_{\epsilon}(S(\mathbf{y})\mid\mathbf{\theta})={\frac{1}{m}}\sum_{i=1}^{m}K_{\epsilon}(\rho\{S(\mathbf{y}),S(\mathbf{x_{i}})\}).$$ It is common to set m = 1 and choose the indicator kernel function, $$K_{\epsilon}(\rho\{S(\mathbf{y}),S(\mathbf{x})\})=\mathbf{I}(\rho\{S(\mathbf{y}),S(\mathbf{x})\}\leq\epsilon).$$ Using arguments from the exact-approximate literature (Andrieu & Roberts, 2009), unbiasedly estimating the ABC likelihood leads to a Bayesian algorithm that samples from the approximate posterior proportional to gϵ(S(y) | θ)π(θ). As is evident from the above integral estimator, ABC non-parametrically estimates the summary statistic likelihood. Unfortunately, this non-parametric approximation causes ABC to exhibit the "curse of dimensionality", meaning that the probability of a proposed parameter being accepted decreases dramatically as the dimension of the summary statistics increases (Barber et al., 2015; Csilléry et al., 2010). In contrast, BSL uses a parametric estimator. The most common BSL approach approximates gn(· | θ) using a Gaussian: $$g_{A}(S(\mathbf{y})\mid\mathbf{\theta})={\mathcal{N}}\left(S(\mathbf{y});\mu(\mathbf{\theta}),\Sigma(\mathbf{\theta})\right),$$ where µ(θ) = E[S(x) | θ] and Σ(θ) = Var(S(x) | θ) denote the mean and variance of the model summary statistic at θ. In almost all practical cases µ(θ) and Σ(θ) are unknown, but we can replace these quantities with those estimated from m independent model simulations, using for example the sample mean and variance: $$\begin{array}{l}{{\mu_{m}(\mathbf{\theta})=\frac{1}{m}\sum_{i=1}^{m}S(\mathbf{x}^{i}),}}\\ {{\Sigma_{m}(\mathbf{\theta})=\frac{1}{m}\sum_{i=1}^{m}\left(S(\mathbf{x}^{i})-\mu_{m}(\mathbf{\theta})\right)\left(S(\mathbf{x}^{i})-\mu_{m}(\mathbf{\theta})\right)^{\top},}}\end{array}$$ and where each simulated data set x i, i = 1*, . . . , m*, is generated i.i.d. from P (n) θ. The synthetic likelihood is then approximated as $$\hat{g}_{A}(S(\mathbf{y})\mid\mathbf{\theta})={\mathcal{N}}\left(S(\mathbf{y});\mu_{m}(\mathbf{\theta}),\Sigma_{m}(\mathbf{\theta})\right).$$ Unlike ABC, gˆA(S(y) | θ) is not an unbiased estimator of gA(S(y) | θ). David T. Frazier & Kohn (2023) demonstrate that if the summary statistics are sub-Gaussian, then the choice of m is immaterial so long as m diverges as n diverges. Empirical evidence supporting the insensitivity to m is presented in Price et al. (2018), indicating that as long as m is sufficiently large, the variance of the plug-in synthetic likelihood estimator remains small enough to prevent any negative impact on MCMC mixing. The Gaussian assumption can be limiting; however, neural likelihoods provide a more flexible alternative to BSL while retaining the advantages of a parametric approximation. Unfortunately, both ABC and BSL are inefficient in terms of the number of model simulations required to produce posterior approximations. In particular, most algorithms for ABC and BSL are wasteful in the sense that they use a relatively large number of model simulations associated with rejected parameter proposals. In contrast, methods have been developed in machine learning that utilise all model simulations to learn either the likelihood (e.g. Papamakarios et al. (2019)), posterior (e.g. Greenberg et al. (2019); Papamakarios & Murray (2016)) or likelihood ratio (e.g. Durkan et al. (2020); Hermans et al. (2020); Thomas et al. (2022)). We consider the SNL method of Papamakarios et al. (2019) in more detail in Section 3. Neural SBI methods have seen rapid advancements, with various approaches approximating the likelihood (Boelts et al., 2022; Wiqvist et al., 2021). These methods include diffusion models for approximating the score of the likelihood (Simons et al., 2023), energy-based models for surrogating the likelihood (Glaser et al., 2023) and a "neural conditional exponential family" trained via score matching (Pacchiardi & Dutta, 2022). Bon et al. (2022) present a method for refining approximate posterior samples to minimise bias and enhance uncertainty quantification by optimising a transform of the approximate posterior, which maximises a scoring rule. One way to categorise neural SBI methods is by differentiating between amortised and sequential sampling schemes. These methods differ in their proposal distribution for θ. Amortised methods estimate the neural density for any x within the support of the prior predictive distribution. This allows the trained flow to approximate the posterior for any observed statistic, making it efficient when analysing multiple datasets. However, this requires using the prior as the proposal distribution. When the prior and posterior differ significantly, there will be few training samples of x close to y, resulting in the trained flow potentially being less accurate in the vicinity of the observed statistic. ## 2.2 Sbi And Model Misspecification In the standard Bayesian inference framework, we denote the model parameter value that minimises the Kullback-Leibler (KL) divergence between the assumed model distribution and the true distribution as θ0. This can be interpreted as either the true distribution or pseudo-true parameter value, depending on whether the true distribution does or does not belong to the assumed parametric family. When we apply a summary statistic mapping, we need to redefine the usual notion of model misspecification, i.e., no value of θ ∈ Θ such that P (n) θ = P (n) 0, as it is still possible for P (n) θto generate summary statistics that match the observed statistic even if the model is incorrect (Frazier et al., 2020b). We define b(θ) = E[S(x) | θ] and b0 = E[S(y)] as the expected values of the summary statistic with respect to the probability measures P (n) θand P (n) 0, respectively. The meaningful notion of misspecification in SBI is when no θ ∈ Θ satisfies b(θ) = b0, implying there is no parameter value for which the expected simulated and observed summaries match. This is the definition of incompatibility proposed in Marin et al. (2014). The behaviour of ABC and BSL under incompatibility is now well understood. In the context of ABC, we consider the model to be misspecified if $$\epsilon^{*}=\operatorname*{inf}_{\mathbf{\theta}\in\Theta}\rho(b(\mathbf{\theta}),b_{\mathbf{0}})>0,$$ for some metric ρ, and the corresponding pseudo-true parameter is defined as $$\theta_{0}=\arg\operatorname*{in}_{\theta\in\mathbb{R}}$$ $$\operatorname*{in}_{\mathrm{LO}}\rho(\omega(\theta),\omega_{0}).$$ θ∈Θ ρ(b(θ), b0). Frazier et al. (2020b) show, under various conditions, the ABC posterior concentrates onto θ0 for large sample sizes, providing an inherent robustness to model misspecification. However, they also demonstrate that the asymptotic shape of the ABC posterior is non-Gaussian and credible intervals lack valid frequentist coverage; i.e., confidence sets do not have the correct level under P (n) 0. In the context of BSL, Frazier et al. (2021) show that when the model is incompatible, i.e. b(θ) ̸= b0 ∀θ ∈ Θ, the KL divergence between the true data generating distribution and the Gaussian distribution associated with the synthetic likelihood diverges as n diverges. In BSL, we say that the model is incompatible if $$\operatorname*{lim}_{n\to\infty}\operatorname*{inf}_{\boldsymbol{\theta}\in\Theta}\left\{b(\boldsymbol{\theta})-b_{\mathbf{0}}\right\}^{\top}\left\{n\Sigma(\boldsymbol{\theta})\right\}^{-1}\left\{b(\boldsymbol{\theta})-b_{\mathbf{0}}\right\}>0.$$ We define $$M_{n}(\theta)=n^{-1}\partial\log g_{A}\left(S\mid\theta\right)/\partial\theta.$$ The behaviour of BSL under misspecification depends on the number of roots of Mn(θ) = 0. If there is a single solution, and under various assumptions, the BSL posterior will concentrate onto the pseudo-true parameter θ0 with an asymptotic Gaussian shape, and the BSL posterior mean satisfies a Bernstein von-Mises result. However, if there are multiple solutions to Mn(θ) = 0, then the BSL posterior will asymptotically exhibit multiple modes that do not concentrate on θ0. The number of solutions to Mn(θ) = 0 for a given problem is not known *a priori* and is very difficult to explore. In addition to the theoretical issues faced by BSL under misspecification, there are also computational challenges. Frazier & Drovandi (2021) point out that under incompatibility, since the observed summary lies in the tail of the estimated synthetic likelihood for any value of θ, the Monte Carlo estimate of the likelihood suffers from high variance. Consequently, a significantly large value of m is needed to enable the MCMC chain to mix and avoid getting stuck, which is computationally demanding. Due to the undesirable properties of BSL under misspecification, Frazier & Drovandi (2021) propose RBSL as a way to identify incompatible statistics and make inferences more robust simultaneously. RBSL is a model expansion that introduces auxiliary variables, represented by the vector Γ = (γ1*, . . . , γ*d) ⊤. These variables shift the means (RBSL-M) or inflate the variances (RBSL-V) in the Gaussian approximation, ensuring that the extended model is compatible by absorbing any misspecification. This approach guarantees that the observed summary does not fall far into the tails of the expanded model. However, the expanded model is now overparameterised since the dimension of the combined vector (θ,Γ) ⊤ is larger than the dimension of the summary statistics. To regularise the model, Frazier & Drovandi (2021) impose a prior distribution on Γ that favours compatibility. However, each component of Γ's prior has a heavy tail, allowing it to absorb the misspecification for a subset of the summary statistics. This method identifies the statistics the model is incompatible with while mitigating their influence on the inference. Frazier & Drovandi (2021) demonstrate that under compatibility, the posterior for Γ is the same as its prior, so that incompatibility can be detected by departures from the prior. The mean adjusted synthetic likelihood is denoted $\left(2\right)$. ## N (S(Y); Μ(Θ) + Σ(Θ) ◦ Γ, Σ(Θ)), (2) where σ(θ) = diag{Σ(θ)} is the vector of estimated standard deviations of the model summary statistics, and ◦ denotes the Hadamard (element-by-element) product. The role of σ(θ) is to ensure that we can treat each component of Γ as the number of standard deviations that we are shifting the corresponding model summary statistic. Frazier & Drovandi (2021) suggest using a prior where θ and Γ are independent, with the prior density for Γ being a Laplace prior with scale λ for each γj . The prior is chosen because it is peaked at zero but has a moderately heavy tail. Sampling the joint posterior can be done using a component-wise MCMC algorithm that iteratively updates using the conditionals θ | S,Γ and Γ | S, θ. The update for Γ holds the m model simulations fixed and uses a slice sampler, resulting in an acceptance rate of one without requiring tuning a proposal distribution. Frazier & Drovandi (2021) find empirically that sampling over the joint space (θ,Γ) ⊤ does not slow down mixing on the θ-marginal space. In fact, in cases of misspecification, the mixing is substantially improved as the observed value of the summaries no longer falls in the tail of the Gaussian distribution. Recent developments in SBI have focused on detecting and addressing model misspecification for both neural posterior estimation (Ward et al., 2022) and for amortised and sequential inference (Schmitt et al., 2023a). Schmitt et al. (2023a) employs a maximum mean discrepancy (MMD) estimator to detect a "simulation gap" between observed and simulated data, while Ward et al. (2022) detects and corrects for model misspecification by introducing an error model π(S(y) | S(x)). Noise is added directly to the summary statistics during training in Bernaerts et al. (2023). Huang et al. (2023) noted that as incompatibility is based on the choice of summary statistics, if the summary statistics are learnt via a NN, training this network with a regularised loss function that penalises statistics with a mismatch between the observed and simulated values will lead to robust inference. Glöckler et al. (2023), again focused on the case of learnt (via NN) summaries, propose a scheme for robustness against adversarial attacks (i.e. small worst-case perturbations) on the observed data. Schmitt et al. (2023c) introduce a meta-uncertainty framework that blends real and simulated data to quantify uncertainty in posterior model probabilities, applicable to SBI with potential model misspecification. Generalised Bayesian inference (GBI) is an alternative class of methods suggested to handle model misspecification better than standard Bayesian methods (Knoblauch et al., 2022). Instead of targeting the standard Bayesian posterior, the GBI framework targets a generalised posterior, π(θ | y) ∝ π(θ) exp(−w · ℓ(y, θ)), where ℓ(y, θ) is some loss function and w is a tuning parameter that needs to be calibrated appropriately (Bissiri et al., 2016). Various approaches have applied GBI to misspecified models with intractable likelihoods (Chérief-Abdellatif & Alquier, 2020; Matsubara et al., 2022; Pacchiardi & Dutta, 2021; Schmon et al., 2021). Gao et al. (2023) extends GBI to the amortised SBI setting, using a regression neural network to approximate the loss function, achieving favourable results for misspecified examples. Dellaporta et al. (2022) employ similar ideas to GBI for an MMD posterior bootstrap. ## 3 Robust Sequential Neural Likelihood SNL is within the class of SBI methods that utilise a neural conditional density estimator (NCDE). An NCDE is a particular class of neural network, qϕ, parameterised by ϕ, which learns a conditional probability density from a set of paired data points. This is appealing for SBI since we have access to pairs of (θ, x) but lack a tractable conditional probability density in either direction. The idea is to train qϕ on D = {(θi, xi)} m i=1 and use it as a surrogate for the unavailable density of interest. NCDEs have been employed as surrogate densities for the likelihood (Papamakarios et al., 2019) and posterior (Papamakarios & Murray, 2016; Greenberg et al., 2019) or both simultaneously (Radev et al., 2023; Schmitt et al., 2023b; Wiqvist et al., 2021). Most commonly, a normalizing flow is used as the NCDE, and we do so here. Normalising flows are a useful class of neural network for density estimation. Normalising flows convert a simple base distribution π(u), e.g. a standard normal, to a complex target distribution, π(η), e.g. the likelihood, through a sequence of L diffeomorphic transformations (bijective, differentiable functions with a differentiable inverse), T = TL *⊙ · · · ⊙* T1. The density of η = T −1(u), η ∈ R d, where u ∼ π(u) is, $$\pi(\eta)=\pi(\mathbf{u})|\operatorname*{det}J_{T}(\mathbf{u})|^{-1},$$ $\left(3\right)^3$ −1, (3) where JT is the Jacobian of T. Autoregressive flows, such as neural spline flow used here, are one class of normalising flow that ensure that the Jacobian is a triangular matrix, allowing fast computation of the determinant in Equation 3. We refer to Papamakarios et al. (2021) for more details. Normalising flows are also useful for data generation, although this has been of lesser importance for SBI methods. Sequential approaches aim to update the proposal distribution so that more training datasets are generated closer to S(y), resulting in a more accurate approximation of π(θ | S(y)) for a given simulation budget. In this approach, R training rounds are performed, with the proposal distribution for the current round given by the approximate posterior from the previous round. As in Papamakarios et al. (2019), the first round, r = 0, proposes θ ∼ π(θ), then for subsequent rounds r = 1, 2*, . . . , R* − 1, a normalising flow, qr,ϕ(S(x) | θ) is trained on all generated (θ, x) ∈ D. The choice of amortised or sequential methods depends on the application. For instance, in epidemiology transmission models, we typically have a single set of summary statistics (describing the entire population of interest), relatively uninformative priors, and a computationally costly simulation function. In such cases, a sequential approach is more appropriate. Neural-based methods can efficiently sample the approximate posterior using MCMC methods. The evaluation of the normalising flow density is designed to be fast. Since we are using the trained flow as a surrogate function, no simulations are needed during MCMC sampling. With automatic differentiation, one can efficiently find the gradient of an NCDE and use it in an effective MCMC sampler like the No-U-Turn sampler (NUTS) (Hoffman & Gelman, 2014). Unlike ABC and BSL, it is currently unclear to the authors how one can formally define pseudo-true values for the SNL posterior in general settings. Furthermore, the authors are unaware of any theoretical work that discusses the asymptotic properties of SNL under correct or misspecified models. The complication in defining where one would expect the SNL posterior to concentrate in the asymptotic regime is a direct consequence of the way the SNL estimator is obtained, and, by extension, how the SNL posterior is produced. In SNL, the NCDE is trained via the generation of simulated summary statistics from the assumed model, which means that the SNL estimator only learns about the likelihood of the simulated summaries. Hence, if the distribution of the simulated summaries differs markedly from that of the observed summaries, there is no reason to suspect that the SNL posterior will concentrate on a meaningful point in the parameter space. Recent research has found that neural SBI methods behave poorly under model misspecification (Bon et al., 2023; Cannon et al., 2022; Schmitt et al., 2023a; Ward et al., 2022), prompting the development of more robust approaches. We propose robust SNL (RSNL), a sequential method that adapts to model misspecification by incorporating an approach similar to that of Frazier & Drovandi (2021). Our approach adjusts the observed summary based on auxiliary adjustment parameters, allowing it to shift to a region of higher surrogate density when the summary falls in the tail. RSNL evaluates the adjusted surrogate likelihood as qϕ(S(y) − Γ | θ) estimating the approximate joint posterior, $$\pi(\theta,\Gamma\mid S(\mathbf{y}))\propto q_{\phi}(S(\mathbf{y})-\Gamma\mid\theta)\pi(\theta)\pi(\Gamma),$$ $$\left(4\right)$$ π(θ,Γ | S(y)) ∝ qϕ(S(y) − Γ | θ)π(θ)π(Γ), (4) where we set π(θ) and π(Γ) independently of each other. The choice of prior, π(Γ), is crucial for RSNL. Drawing inspiration from RBSL-M, we impose a Laplace prior distribution on Γ to promote shrinkage. The components of Γ are set to be independent, π(Γ) = Qd i=1 π(γi). We find that the standard Laplace(0, 1) prior works well for low to moderate degrees of misspecification. However, it lacks sufficient support for high degrees of misspecification, leading to undetected misspecification and prior-data conflict (Evans & Jang, 2011). To address this, we employ a data-driven prior that, similar to a weakly informative prior, provides some regularisation without imposing strong assumptions for each component: $$\pi(\gamma_{i})=\mbox{Laplace}(0,\lambda=|\tau\tilde{S}_{i}(\mathbf{y})|)=\frac{1}{2\lambda}\exp\left(-\frac{|\gamma_{i}|}{\lambda}\right),\tag{5}$$ where S˜i(y) is the i-th standardised observed summary. Large observed standardised summaries indicate a high degree of misspecification, and including this in the prior helps the expanded model detect it. We set π0(γi) ∼ Laplace(0, 1) for the initial round and update πr(γi) in each round by recomputing S˜(y). This approach detects highly misspecified summaries while introducing minimal noise for well-specified summaries. Setting τ adjusts the scale, with larger values allowing larger adjustments. We illustrate the effect of differing values of τ in Appendix G. Setting τ involves balancing a trade-off: it must be small enough to minimise noise introduced by the adjustment parameters, and thereby the accuracy of inference for the model parameters, yet sufficiently large to detect misspecification more readily than lower τ values, which leads to more stable posteriors for the adjustment parameters. We note for the primary objectives of detecting model misspecification and giving robust inference, a broad range of τ values are suitable. But we promote τ = 0.3 as a robust choice across different scenarios. The adjustment parameters can map a misspecified summary to the mean of simulated summaries, regardless of its position in the tail. We consider our proposed prior against a fixed variance prior in Appendix E, which illustrates that the fixed variance prior is prone to introducing excessive noise. Standardising the simulated and observed summaries is necessary to account for varying scales, and it is done after generating additional model simulations at each training round. Since all generated parameters are used to train the flow, standardisation is computed using the entire dataset D. Standardisation serves two purposes: 1) facilitating the training of the flow, and 2) ensuring that the adjustment parameters are on a similar scale to the summaries. It is important to note that standardisation is performed unconditionally, using the sample mean and sample standard deviation calculated from all simulated summary statistics in the training set. For complex DGPs, unconditioned standardisation may not adequately address heteroscedasticity or non-linear dependencies across the parameter space, leading to potential biases in inference. We discuss possible extensions in Section 5. Algorithm 1 outlines the complete process for sampling the RSNL approximate posterior. The primary distinction between SNL and RSNL lies in the MCMC sampling, which now targets the adjusted posterior as a marginal of the augmented joint posterior for θ and Γ. As a result, RSNL can be easily used as a substitute for SNL. This comes at the additional computational cost of targeting a joint posterior of dimension dθ + d, rather than one of dimension d. Nonetheless, this increase in computational cost is generally marginal compared to the expense of running simulations for neural network training, an aspect not adversely affected by RSNL. Since RSNL, like SNL, can efficiently evaluate both the neural likelihood and the gradient of the approximate posterior, we employ NUTS for MCMC sampling. This is in contrast to Ward et al. (2022), who use mixed Hamiltonian Monte Carlo, an MCMC algorithm for inference on both continuous and discrete variables, due to their use of a spike-and-slab prior. Once we obtain samples from the joint posterior, we can examine the θ and Γ posterior samples separately. The θ samples can be used for Bayesian inference on functions of θ relevant to the specific application. In contrast, the Γ approximate posterior samples can aid in model criticism. RSNL can be employed for model criticism similarly to RBSL. When the assumed and actual DGP are incompatible, RSNL is expected to behave like RBSL, resulting in a discrepancy between the prior and posterior distributions for the components of Γ. Visual inspection should be sufficient to detect such a discrepancy, but researchers can also use any statistical distance function for assessment (e.g. total variation distance). This differs from the approach in RNPE, which assumes a spike and slab error model and uses the posterior frequency of being in the slab as an indicator of model misspecification. Algorithm 1 Robust MCMC SNL Input: The observed summaries, S(y); the prior distributions π(θ) and π0(Γ); the number of training rounds, R; the assumed data generating process, P (n) θ; the number of simulated datasets from P (n) θgenerated per round, m; the neural density estimator family, qϕ(S(x) | θ). Output: MCMC samples (θ0*, . . . ,* θm−1) and (Γ0*, . . . ,*Γm−1) from the RSNL posterior. 1: Set D = {}, q0,ϕ(S(y) | θ) = 1 2: for r = 0 to R − 1 do 3: Update πr(Γ) when r > 0 4: for i = 0 to m − 1 do 5: Sample θ (r) i,Γ (r) i ∼ qr,ϕ(S(y) − Γ | θ)π(θ)πr(Γ) using MCMC or directly when r = 0 6: Simulate x (r) i ∼ P (n) θ (r) i 7: Compute summaries S(x (r) i) 8: Add (θ (r) i, S(x (r) i)) into D 9: **end for** 10: Standardise D and S(y) 11: Train qr+1,ϕ(S(x) | θ) on D 12: **end for** 13: Sample θ (R) i,Γ (R) i ∼ qR,ϕ(S(y) − Γ | θ)π(θ)πR(Γ) 14: **return** θ (R) 0:m−1 ,Γ (R) 0:m−1 ## 4 Examples And Results In this section, we demonstrate the capabilities of RSNL on five benchmark misspecified problems of increasing complexity and compare the results obtained to SNL and RNPE. The same hyperparameters were applied to all tasks, as detailed in Appendix C. Further details and results for some examples can be found in Appendix D. We break down the overall computational cost of the algorithm into three main components: running the simulations, training the normalising flow, and MCMC sampling. The breakdown of computational times for each example is shown in Appendix B. Often, for complex problems, the simulator might be the most costly component, and no difference is expected in the time to run simulations between RSNL and SNL. Likewise, no difference is expected in training the normalising flow. The difference between RSNL and SNL lies in the MCMC computation. As one might expect, targeting a higher-dimensional posterior with the NUTS algorithm can lead to increased computation times, particularly when adaptive settings during the warmup stage result in increased computation per sample. RSNL targets a joint posterior of dimension dθ + d instead of dθ, where d is the number of summaries and hence the number of adjustment parameters. When there is a large number of summaries, such as the toad movement model, this leads to a noticeable increase in MCMC computational time. Still, the computational time remains feasible for the summary statistic dimension range where SNL is typically considered. The expected behaviour of scaling is similar to MCMC more broadly, dependent on the problem and geometry of the posterior. For example, in the contaminated simple likelihood complex posterior (SLCP) example in Section 4.3, RSNL actually has faster MCMC time than SNL, likely due to improved posterior geometry. Specifically, the NUTS algorithm may increase the amount of computation in cases of poor geometry, such as the SNL posterior shown in Figure 4. The expected coverage probability, a widely used metric in the SBI literature (Hermans et al., 2022), measures the frequency at which the true parameter lies within the HDR. The HDR is the smallest volume region containing 100(1-α)% of the density mass in the approximate posterior, where 1-α represents the credibility level. The HDR is estimated following the approach of Hyndman (1996). Conservative posteriors are considered more scientifically reliable (Hermans et al., 2022) as falsely rejecting plausible parameters is generally worse than failing to reject implausible parameters. To calculate the empirical coverage, we generate C = 200 observed summaries S(yi) ∼ S(P (n) 0) at θ0,i, where θ0,i represents the "true" data generating parameter and i = 1*, . . . , C*. When θ0 is contained in the parameter space of our assumed models, we take θ0,i ∼ π(θ). Using samples from the approximate posterior obtained using RSNL, we use kernel density estimation to give pˆ(θ0,i | S(yi)). The empirical coverage is calculated using: $$\Pr(\mathbf{\theta_{0}}\in\mathrm{HDR}(1-\alpha))\approx\frac{1}{C}\sum_{i=1}^{C}\mathds{1}(\mathbf{\theta_{0,i}}\in\mathrm{HDR}_{\tilde{\mathbf{p}}(\mathbf{\theta}|S(\mathbf{y}_{i}))}(1-\alpha)).\tag{6}$$ The mean posterior log density at the true (or pseudo-true) parameters is another metric used in the SBI literature (Lueckmann et al., 2021). We consider the approximate posterior log density at θ0,i for each S(yi). These results are attained using the same runs used to calculate the empirical coverage. As in Ward et al. (2022), we find that non-robust SBI methods can occasionally fail catastrophically. Consequently, we also present boxplots to offer a more comprehensive depiction of the results. The whiskers are the upper and lower quartiles, with flier points (outliers beyond 1.5 times the interquartile range) not being displayed. In Figure 2, we illustrate the coverage and log-density at θ0 of SNL, RSNL and RNPE across four simulation examples. Figure 2 illustrates the performance of SNL and RSNL, with RSNL exhibiting more conservative coverage and higher density around θ0. While RSNL is better calibrated than SNL on these misspecified examples, RSNL can still be overconfident, as observed in the contaminated normal and contaminated SLCP examples. We find that RNPE coverage is similar to RSNL. This further reassures that the differing coverage is an artefact of the misspecified examples, in which case we do not have any guarantees of accurate frequentist coverage. There is also a high degree of underconfidence on two of the benchmark tasks, but this is preferable to the misleading overconfidence displayed by SNL in misspecified models. The boxplots reveal that RSNL consistently delivers substantial support around θ0. At the same time, SNL has a lower median log density and is highly unreliable, often placing negligible density around θ0. ## 4.1 Contaminated Normal Here we consider the contaminated normal example from Frazier & Drovandi (2021) to assess how SNL and RSNL perform under model misspecification. In this example, the DGP is assumed to follow: $$y_{i}=\theta+\epsilon_{i},\quad\epsilon_{i}\stackrel{i.i.d.}{\sim}{\mathcal{N}}(0,1),$$ $${\mathrm{\boldmath~\ual~DGP~follows}}\colon$$ where $i=1,\ldots,n$. However, the actual DGP follows: $$y_{i}=\begin{cases}\theta+\epsilon_{1,i},&\epsilon_{1,i}\sim\mathcal{N}(0,1),\text{with probability}\omega,\\ \theta+\epsilon_{2,i},&\epsilon_{2,i}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2}),\text{with probability}1-\omega.\end{cases}$$ ![10_image_0.png](10_image_0.png) Figure 2: Comparison of SNL, RSNL and RNPE performance on four misspecified examples. The top row presents the approximate posterior log density boxplots at the true (or pseudo-true) parameters. The bottom row displays the empirical coverage across various credibility levels for SNL (dashed), RSNL (solid) and RNPE (dashed). A well-calibrated posterior closely aligns with the diagonal (dotted) line. Conservative posteriors are situated in the upper triangle, while overconfident ones are in the lower triangle. The sufficient statistic for θ under the assumed DGP is the sample mean, S1(y) = 1n Pn i=1 yi. For demonstration purposes, let us also include the sample variance, S2(y) = 1 n−1 Pn i=1(yi − S1(y))2. When σϵ ̸= 1, we are unable to replicate the sample variance under the assumed model. We use the prior, θ ∼ N (0, 102) and n = 100. The actual DGP is set to θ = 1, ω = 0.8 and σϵ = 2.5, and hence the sample variance is incompatible. Since S1(y) is sufficient, so is S(y), and one might still be optimistic that useful inference will result. We thus want our robust algorithm to concentrate the posterior around the sample mean. Under the assumed DGP we have that b(θ) = (θ, 1)⊤, for all θ ∈ R. Since b0 = [1.0, 2.25]⊤, the sample variance is incompatible. We thus have infθ∈R ||b(θ) − b0|| > 0 meaning our model is misspecified. We include additional results for the contaminated normal, included in Appendix D, due to the ease of obtaining an analytical true posterior. We also consider this example with no summarisation (i.e. using 100 draws directly) to see how RSNL scales as we increase the number of summaries, with results found in Appendix F. For the contaminated normal example, Figure 1 demonstrates that RSNL yields reliable inference, with high posterior density around the true parameter value, θ = 1. In contrast, SNL provides unreliable inference, offering minimal support near the true value. The posteriors for the components of Γ are also shown. The prior and posterior for γ1 (linked to the compatible summary statistic) are nearly identical, aligning with RBSL behaviour. For γ2 (associated with the incompatible statistic), misspecification is detected as the posterior exhibits high density away from 0. A visual inspection suffices for modellers to identify the misspecified summary and offers guidance for adjusting the model. Further, we observed that in the well-specified case, the effect of the adjustment parameters on the coverage is minimal (see Appendix D). ## 4.2 Misspecified Ma(1) We follow the misspecified moving average (MA) of order 1 example in Frazier & Drovandi (2021), where the assumed DGP is an MA(1) model, yt = wt + θwt−1, −1 ≤ θ ≤ 1 and wt i.i.d. ∼ N (0, 1). However, the true DGP is a stochastic volatility model of the form: $$y_{t}=\exp\left({\frac{z_{t}}{2}}\right)u_{t},\quad z_{t}=\omega+\kappa z_{t-1}+v_{t}+\sigma_{v},$$ where 0 *< κ, σ*v < 1, and ut, vt i.i.d. ∼ N (0, 1). We generate the observed data using the parameters ω = −0.76, κ = 0.90 and σv = 0.36. The data is summarised using the autocovariance function, ζj (x) = 1 T PT i=j xixi−j−1, where T is the number of observations and j ∈ {0, 1} is the lag. We use the prior θ ∼ U(−1, 1) and set T = 100. We note that θ = 0 is a meaningful point on which to conduct inference, and one we would hope our posteriors would concentrate towards as the sample size increases. To see why, it is enough to note that under the true DGP for this experiment the observed data displays no autocorrelation in the levels of the series. As such, one would hope that the posterior for the parameter θ, which captures the level of autocorrelation in the first lag of the series, would place significant posterior mass around θ = 0 and signify there is no meaningful autocorrelation in the data. Figure 3 shows that while RSNL and RNPE methods both place significant amounts of posterior around the point θ = 0, SNL appears to be concentrating onto a completely different point in the parameter space. The point around which most of the SNL mass is present is clearly at odds with the observed data: the SNL posterior suggests that a moderate level of autocorrelation is required to fit the data, when in fact there is no autocorrelation in the observed data. As expected, γ1 (corresponding to the incompatible statistic) has significant posterior density away from 0 as seen in Figure 3. Also, the posterior for γ2 (corresponding to the compatible statistic) closely resembles the prior. The computational price of making inferences robust for the misspecified MA(1) model is minimal, with RSNL taking around 20 minutes to run and SNL taking around 10 minutes. ![11_image_0.png](11_image_0.png) Figure 3: Posterior plots for the misspecified MA(1) model. The leftmost plot shows the estimated SNL (dashed), RSNL (solid) and RNPE (dashed) posterior densities for θ. The true parameter value is shown as a vertical dashed line. The right two plots show the estimated marginal posterior (solid) and prior (dashed) densities for the components of Γ. We may be concerned with the under-confidence of RSNL (and RNPE) on the misspecified MA(1) example as illustrated in the coverage plot in Figure 2. But from the posterior plots in Figure 3, we see that, for the misspecified MA(1) benchmark example, SNL is not only over-confident but it is over-confident for a point in the parameter space where the actual summaries and observed summaries are very different. In contrast, RSNL is under-confident, in that its posteriors are inflated relative to the standard level of frequentist coverage, however, it is under-confident for the right point: the values over which the RSNL posterior are centred deliver simulated summaries that are as close as possible to the observed summaries in the Euclidean norm. We also note that, in general when models are misspecified, Bayesian inference does not deliver valid frequentist coverage (Kleijn & van der Vaart, 2012). ## 4.3 Contaminated Slcp The simple likelihood complex posterior (SLCP) model devised in Papamakarios et al. (2019) is a popular example in the SBI literature. The assumed DGP is a bivariate normal distribution with the mean vector, µθ = (θ1, θ2) ⊤, and covariance matrix: $$\mathbf{\Sigma}_{\boldsymbol{\theta}}=\begin{bmatrix}\begin{array}{c c}{{s_{1}^{2}}}&{{\rho s_{1}s_{2}}}\\ {{\rho s_{1}s_{2}}}&{{s_{2}^{2}}}\end{array}\end{bmatrix},$$ where s1 = θ 2 3 , s2 = θ 2 4 and ρ = tanh(θ5). This results in a nonlinear mapping from θ = (θ1, θ2, θ3, θ4, θ5) ∈ R 5 → yj ∈ R 2, for j = 1*, . . . ,* 5. The posterior is "complex", having multiple modes due to squaring and vertical cutoffs from the uniform prior that we define in more detail later. Hence, the likelihood is expected to be easier to emulate than the posterior, making it suitable for an SNL approach. Four draws are generated from this bivariate distribution giving the likelihood, g(y | θ) = Q4 j=1 N (yj ; µθ, Σθ) for y = (y1, y2, y3, y4). No summarisation is done, and the observed data is used in place of the summary statistic. We generate the observed data at parameter values, θ = (0.7, −2.9, −1.0, −0.9, 0.6)⊤, and place an independent U(−3, 3) prior on each component of θ. To impose misspecification on this illustrative example, we draw a contaminated 5-th observation, y5 and use the observed data y = (y1, y2, y3, y4, y5). Contamination is introduced by applying the (stochastic) misspecification transform considered in Cannon et al. (2022), y5 = x5 + 100z5, where x5 ∼ N (µθ, Σθ), and z5 ∼ N ((0, 0)⊤, 100I2). The assumed DGP is incompatible with this contaminated observation, and ideally, the approximate posterior would ignore the influence of this observation. Due to the stochastic transform, there is a small chance that the contaminated draw is compatible with the assumed DGP. However, the results presented here consider a specific example, where the observed contaminated draw is y5 = (23.41, −178.90)⊤, which is very unlikely under the assumed DGP. We thus want our inference to only use information from the four draws from the true DGP. The aim is to closely resemble the SNL posterior, where the observed data is the four non-contaminated draws. Figure 4 shows the estimated posterior densities for SNL (for both compatible and incompatible summaries) and RSNL for the contaminated SLCP example. When including the contaminated 5-th draw, SNL produces a nonsensical posterior with little useful information. Similarly, RNPE cannot correct the contaminated draw and does not give useful inference. Conversely, the RSNL posterior has reasonable density around the true parameters and has identified the separate modes. The first eight compatible statistics are shown in Figure 5. The prior and posteriors reasonably match each other. In contrast, the observed data from the contaminated draw is recognised as being incompatible and has significant density away from 0, as evident in Figure 6. Again, there is no significant computational burden induced to estimate the adjustment parameters, with a total computational time of around 6 hours to run RSNL and around 4 hours for SNL. ![13_image_0.png](13_image_0.png) Figure 4: Univariate and bivariate density plots of the estimated posterior for θ on the SLCP example. Plots on the diagonal are the univariate posterior densities obtained by RSNL (solid), RNPE (dash-dotted), SNL (dashed) on the contaminated SLCP example, and for SNL without the contaminated draw (dotted). The bivariate posterior distributions for contaminated SLCP are visualised as contour plots when applying RSNL (solid, lower triangle off-diagonal) and SNL (dashed, upper triangle off-diagonal). The true parameter values are visualised as a vertical dashed line for the marginal plots and the × symbol in the bivariate plots. Figure 2 shows that while RSNL is reasonably well-calibrated, RNPE does not give reliable inference. This is because RNPE could not correct for the high model misspecification of the contaminated draw. Additionally, approaches that emulate the likelihood rather than the posterior typically give better inference for this example (Lueckmann et al., 2021). ![14_image_0.png](14_image_0.png) Figure 5: Estimated marginal posterior (solid) and prior (dashed) for components of Γ associated with the non-contaminated draws of the contaminated SLCP example. ![14_image_1.png](14_image_1.png) Figure 6: Estimated marginal posterior (solid) and prior (dashed) for components of Γ associated with the contaminated draw of the contaminated SLCP example. ## 4.4 Misspecified Sir We follow the misspecified susceptible-infected-recovered (SIR) model described in Ward et al. (2022). SIR models are a simple compartmental model for the spread of infectious diseases. The standard SIR model has an infection rate, β, and recovery rate, η. The population consists of three states: susceptible (S), infected (I) and recovered (R). The standard SIR model is defined by: $$\frac{dS}{dt}=\beta SI,\quad\frac{dI}{dt}=\beta SI-\eta I,\quad\frac{dR}{dt}=\eta I.\tag{7}$$ The assumed DGP is a stochastic extension of the deterministic SIR model with a time-varying infection rate, β˜t. This allows the SIR model to better capture heterogeneous disease outcomes, virus mutations and mitigation strategies (Spannaus et al., 2022). In addition to the equations in 7, the assumed SIR model is also parameterised by a time-varying effective reproduction number, Ret = β˜t η , $$d R_{e t}=v\left({\frac{\beta}{\eta}}-R_{e t}\right)d t+\sigma{\sqrt{R_{0}}}d W_{t},$$ where υ is the reversion to Ret, σ is the volatility and Wt is Brownian motion. We set υ = 0.5 and σ = 0.5 as in Ward et al. (2022), and use priors, η ∼ U(0, 0.5) and β | η ∼ U(η, 0.5). We use β = 0.15 and η = 0.1 for all observed data. The assumed DGP is run for 365 days, and only the daily number of infected individuals is considered. The initial number of infected individuals is 100. The infected counts are scaled by 100,000 to represent a larger population. A visualisation of the observed DGP is shown in Appendix D. The true DGP has a reporting lag in recorded infections. Weekend days have the number of recorded infections reduced by 5%, being recorded on Monday, which sees an increase of 10%. Six summary statistics are considered: mean, median, max, max day (day of max infections), half day (day when half of the total infections was reached), and the autocorrelation of lag 1. The model is misspecified, as the assumed SIR model cannot replicate the observed autocorrelation summary. We want our robust algorithm to detect misspecification in the autocorrelation summary and deliver useful inferences. We consider a specific example where the observed autocorrelation is 0.9957. Under the assumed DGP, the simulated autocorrelation is tightly centred around 0.9997. Despite the seemingly minor difference between the observed and simulated summaries, the observed summary lies far in the tails post standardisation. As shown in Figure 7, RSNL produces useful inference with high density around the true parameters. Inspecting the adjustment parameters in Figure 8, we can see that the misspecification has been detected for the autocorrelation summary statistic and adjusted accordingly. Consequently, the modeller can further refine the simulation model to better capture the observed autocorrelation. This refinement could be achieved by recognising that the assumed DGP fails to account for the reporting lag evident in the observed data. RNPE behaves similarly to RSNL and concentrates around the true data-generating parameters. SNL, however, focuses overconfidently in an inconsequential region of the parameter space. As the main computational cost in this example is to run the SIR simulations, there is negligible impact from the addition of adjustment parameters. In the example considered in Figures 7 and 8, SNL and RSNL both took approximately 24 hours. ![15_image_0.png](15_image_0.png) Figure 7: Depiction of univariate and bivariate density plots of the estimated posterior for parameters β and η in the SIR model. The bivariate posterior distributions, visualised as contour plots, are presented for RSNL (left plot), RNPE (middle plot) and SNL (right plot). Corresponding univariate density plots are displayed on the sides of each plot. The true parameter values are marked with a × in the bivariate plots. ![16_image_0.png](16_image_0.png) Figure 8: Estimated marginal posterior (solid) and prior (dashed) for components of Γ obtained using RSNL for the SIR model. ## 4.5 Toad Movement Model We consider here the animal movement model by Marchand et al. (2017) to simulate the dispersal of Fowler's toads (*Anaxyrus fowleri*). This is an individual-based model that encapsulates two main behaviours that have been observed in amphibians: high site fidelity and a small possibility of long-distance movement. The assumed behaviour is for each toad to act independently, stay at a refuge site during the day and move to forage at night. After foraging, the toad either stays in its current location or returns to a previous refuge site. A toad returning to a previous refuge site occurs with a constant probability p0. We concentrate on "model 2" in Marchand et al. (2017), which models the behaviour of returning to its nearest previous refuge. This specific model was chosen as there is evidence of model misspecification, allowing us to assess our method on a misspecified example with real data. The dispersal distance is modelled using a Lévy alpha-stable distribution, parameterised by a stability factor αtoad and scale factor δ. This distribution was chosen for its heavy tails, which allow for occasional long-distance movement while still being symmetric around zero. Although the Lévy alpha-stable distribution lacks a closed form, it is straightforward to simulate. Thus the model is governed by three parameters: θ = [αtoad*, δ, p*0]. We assume the following uniform prior distributions for the model parameters: αtoad ∼ U(1, 2), δ ∼ U(20, 70), and p0 ∼ U(0.4, 0.9). The Marchand et al. (2017) GPS data was collected from 66 individual toads, with the daytime location (i.e. while resting refuge) being recorded. The number of recorded days varied across toads, with a maximum of 63 days. The two-dimensional GPS data is converted to a one-dimensional movement component, resulting in an observed (63 × 66) matrix. The observed matrix was summarised using four sets of displacement vectors with time lags of 1, 2, 4 and 8 days. For each lag, the number of absolute displacements less than 10m, the median of the absolute displacements greater than 10m, and the log difference of the 0, 0.1*, . . . ,* 1 of the absolute displacements greater than 10m are calculated, resulting in a total of 48 summary statistics (12 for each time lag). In addition to SNL and RNPE, we evaluate our method against RBSL, as we are assessing performance on the real observed data, and the ground truth is unavailable for direct comparison. We examined plots for RSNL, SNL, RNPE and RBSL using real and simulated data from the toad example. In Figure 9, the estimated model parameter posteriors are conditioned on real observed summary statistics, with RBSL as the baseline for comparison with RSNL. The marginal plots reveal that RSNL closely resembles RBSL, while ![17_image_0.png](17_image_0.png) Figure 9: Univariate and bivariate density plots of the estimated posterior for θ applied to the real data for the toad movement example. Plots on the diagonal are the univariate posterior densities obtained by RSNL (solid), SNL (dashed), RBSL (dotted) and RNPE (dash-dotted). The bivariate posterior distributions for the toad movement example are visualised as contour plots when applying RSNL (solid, lower triangle off-diagonal) and SNL (dashed, upper triangle off-diagonal). SNL differs, showing minimal density around the RSNL and RBSL maximum a posteriori estimates. RNPE also gives similar marginal posteriors to RSNL and RBSL. The 95% credible intervals for each parameter are displayed in Table 1. However, when considering simulated data (see Appendix D), SNL yields similar inferences to the robust methods. This suggests that SNL's differing results in the real data scenario arise from incompatible summary statistics. The most incompatible summary statistics were identified as the number of returns with a lag of 1 and the first quantile differences for lags 4 and 8. These were determined via MCMC output and visual inspection. We depict the posteriors for the adjustment parameters corresponding to these incompatible summaries in Figure 10, alongside the first three posteriors for compatible summaries, which are expected to closely match their priors. This example illustrates the advantage of carefully selecting summary statistics that hold intrinsic meaning for domain experts. For example, the insight that the current model cannot adequately capture the low number of observed toad returns—particularly while also fitting other movement behaviours—has direct meaning to the domain expert modeller, enabling them to refine the model accordingly. ![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) ![18_image_2.png](18_image_2.png) (a) Adjustment parameters corresponding with the most incompatible summary statistics. ![18_image_3.png](18_image_3.png) (b) First three adjustment parameters corresponding with compatible summary statistics. Figure 10: Estimated marginal posterior (solid) and prior (dashed) for selected components of Γ obtained using RSNL for the toad movement model. In Figure 11, we present distributions for the log distance travelled by non-returning toads derived from the (pre-summarised) observation matrix. The simulated distances closely align with, and are tightly distributed around, the observed distances. Differences between the simulated and observed log distances appear most noticeable at lags 4 and 8 for shorter distances, aligning with the identified incompatible summaries. Figure 12 displays boxplots for the number of returns across the four lags. As confirmed by the first incompatible summary, the model has difficulties replicating the observed number of toad returns while accurately capturing other summary statistics. Overall, the posterior predictive distributions largely agree with observed data, with discrepancies coinciding with the incompatible summaries identified. For the toad movement model, as the focus is on the performance of the methods on real data (with no ground-truth parameter), we instead consider the posterior predictive distribution. To compare predictive performance across methods, we computed the MMD between the observed summary statistic and samples from the posterior predictive, as presented in Table 1. Implementation details for the MMD can be found in Appendix D. We highlight that RSNL has a lower discrepancy than SNL and similar results to RNPE. While RBSL is the best-performing method in this example, it requires orders of magnitude more model simulations. We also note that the most incompatible summary statistics identified via the adjustment parameters (see results in Appendix D) agree with those found in Frazier & Drovandi (2021). ![19_image_1.png](19_image_1.png) ![19_image_0.png](19_image_0.png) Figure 11: The RSNL posterior predictive distributions for the log distance of toads who moved to a new refuge site, applied across four lags. The thick black line is the observed distribution. ![19_image_2.png](19_image_2.png) Figure 12: The RSNL posterior predictive for the number of toads who returned to the same refuge site across four lags. Table 1: Estimated 95% credible intervals and the MMD discrepancy between the observed summary statistic and samples from the posterior predictive for RSNL, SNL, RNPE and RBSL-M on the toad movement model with real data. | Method | Number simulations | αtoad (2.5% - 97.5%) | γ (2.5% - 97.5%) | p0 (2.5% - 97.5%) | MMD | |----------|----------------------|------------------------|--------------------|---------------------|-------| | RSNL | 10000 | (1.28 - 1.77) | (35.21 - 47.34) | (0.61 - 0.76) | 0.006 | | SNL | 10000 | (1.63 - 1.95) | (44.61 - 53.36) | (0.63 - 0.74) | 0.015 | | RNPE | 10000 | (1.28 - 1.84) | (31.45 - 48.68) | (0.60 - 0.80) | 0.007 | | RBSL-M | 25000000 | (1.35 - 1.80) | (35.67 - 47.48) | (0.59 - 0.73) | 0.001 | ## 5 Discussion In this work, we introduced RSNL, a robust neural SBI method that detects and mitigates the impact of model misspecification, making it the first method of its kind that uses a surrogate likelihood or sequential sampling. RSNL demonstrated robustness to model misspecification and efficient inference on several illustrative examples. RSNL provides useful inference with significantly fewer simulation calls than ABC and BSL methods. For instance, only 10,000 model simulations were needed for the RSNL posterior in the contaminated normal model, while RBSL in Frazier & Drovandi (2021) required millions of simulations. A more comprehensive comparison, such as the benchmarks in Lueckmann et al. (2021), could assess the robustness of ABC, BSL, and neural SBI methods to model misspecification and evaluate their performance across different numbers of simulations. Ideally, such a benchmark would include challenging real-world data applications, showcasing the utility of these methods for scientific purposes. In this paper, we consider summaries that are carefully specified by the modeller, as opposed to neural networks or algorithms that learn the summaries (e.g. via an autoencoder (Albert et al., 2022)). While learnt summaries can be valuable, including addressing model misspecification (Huang et al., 2023), interpretable summary statistics chosen by domain experts to be meaningful to them are often crucial to model development. We see model fitting under misspecification as having two main functions. The first is to minimise harm from fitting a misspecified model, where inaccurate uncertainty quantification will lead the domain expert to extract misleading insights into the phenomena of interest. The second is to enable more meaningful model criticism allowing a better model to be developed. For the first function, the purpose of the model must be taken into account, and this can take the form of deciding which of a set of interpretable summaries it is important to match in the application. For the second function, knowing which of a set of interpretable summary statistics cannot be matched is insightful to experts for the purpose of model refinement and improvement. It needs to be clarified how current methods for obtaining learnt summary statistics allow the modeller to refine the current model further. However, our adjustment parameter approach could also be applied when the summaries are learnt. Our relevant form of model misspecification, incompatibility, could be interpreted as an issue of OOD data. OOD data refers to data drawn from a different distribution than the one used to train the neural network. It is an important issue in the ML community to detect the presence of OOD data (Hendrycks & Gimpel, 2017; Yang et al., 2022). Normalising flows struggle with OOD data (Kirichenko et al., 2020). Mechanically, this is exactly what happens when we train a surrogate normalising flow using simulated data from the misspecified model and evaluate it on the observed data from reality. Favourable results across numerous neural SBI methods have been achieved in Cannon et al. (2022) using OOD detection methods based on ensemble posteriors (Lakshminarayanan et al., 2017) and sharpness-aware minimisation (Foret et al., 2021). Combining these OOD methods with adjustment parameters could enhance their benefits. Another strategy to counteract the effects of OOD data that has been considered in a non-SBI context involves employing a physics-based correction to the latent distribution of the conditional normalising flow that learns the posterior (Siahkoohi et al., 2021; 2023). We highlight one example where the model misspecification is not incompatible summaries but rather an inappropriate choice of summary statistics. Consider the contaminated normal described in Section 4 but with only the sample mean. The sample mean is a sufficient and compatible summary statistic, and we can match it with the assumed univariate normal distribution despite it being generated from two normal distributions with different standard deviations. So, we can have compatible summaries where the assumed DGP misrepresents reality. However, in such instances, we can expect SBI algorithms to be "well-behaved" in that they will produce meaningful inference on the unknown model parameters (Frazier et al., 2020b; David T. Frazier & Kohn, 2023). It is difficult to derive the same theoretical backing for neural SBI methods as has been done for ABC and synthetic likelihood; however, given the expressive power of normalising flows (Papamakarios et al., 2021), we may expect similar results to hold when the observed data is "in-distribution" of the simulated data. This stands in contrast to the case of incompatibility, where it has already been observed that various SBI methods, such as synthetic likelihood (Frazier et al., 2021), and neural methods, such as SNL (Cannon et al., 2022), can give nonsensical results under incompatibility. In this described instance, looking at more detailed features of the full data would have revealed the deficiencies of the assumed DGP. In general, the modeller can use the posterior predictive distribution to generate the full data and probe for any aspects the assumed model is unable to explain (e.g. the log-distance plots for the toad movement model in Figure 11). This highlights the importance of judiciously selecting summary statistics when building models in SBI. Ideally, the summaries would be selected to capture key aspects of the data (Lewis et al., 2021), and there is a broad literature on choosing appropriate summaries (Prangle, 2018). Our proposed method would assist in selecting summaries, as it ensures more reliable inference and provides diagnostics when the summaries are incompatible. RBSL-M accounts for the different summary scales and the fact that these scales could be θ-dependent by adjusting the mean using µ(θ) + σ(θ) ◦ Γ, where σ(θ) is a vector of estimated standard deviations of the model summaries at θ. RBSL estimates these standard deviations from the m model simulations generated based on θ. Analogously, we could consider a similar approach in RSNL and define the target ## Π(Θ,Γ | S(Y)) ∝ Qϕ(S(Y) − Σ(Θ) ◦ Γ | Θ)Π(Θ)Π(Γ). The question then becomes, how do we estimate σ(θ) in the context of RSNL? In the MCMC phase, we do not want to generate more model simulations as this would be costly. If we believed that the standard deviation of the model summaries had little dependence on θ, we could set σ(θ) = σ = σ(θˆ) where θˆ is some reasonable point estimate of the parameter. Another approach would, for each θ proposed in the MCMC, estimate σ(θ) using surrogate model simulations generated using the fitted normalising flow. This would be much faster than actual model simulations but could still slow down the MCMC phase substantially. Instead of using a normalising flow, we could train a mixture density network (Bishop, 1994) to emulate the likelihood, which would then lead to an analytical expression for σ(θ). A multivariate mixture density network could replace the flow completely, or the multivariate flow for the joint summary could be retained and a series of univariate mixture density networks applied to each summary statistic for the sole purpose of emulating σ(θ). We plan to investigate these options in future research. The introduction of adjustment parameters in RSNL might raise concerns about introducing noise into the estimated posterior. However, our empirical findings indicate that the impact of this noise is negligible, particularly when using our chosen prior. This observation aligns with the RBSL results presented in Frazier & Drovandi (2021). Furthermore, Hermans et al. (2022) noted that SBI methods, including SNL, often produce overconfident posterior approximations. Thus, it is unlikely that the minor noise introduced by adjustment parameters would lead to excessively conservative posterior estimates. Recent work has proposed solutions to neural SBI overconfident posterior approximations (Delaunoy et al., 2022; 2023; Falkiewicz et al., 2023). It would be interesting to see if these methods could be combined with our proposed method to have both correctly calibrated posteriors and robustness to model misspecification, however this is beyond the scope here. Evaluating misspecified summaries can be done by comparing the prior and posterior densities for each component of Γ. In our examples, we used visual inspection for this purpose. However, for cases with a large number of summaries, this method may become cumbersome. Instead, an automated approach could be implemented to streamline the process and efficiently identify misspecified summaries. While the posteriors of the adjustment parameters can be used to diagnose misspecification, RSNL lacks many of the diagnostic tools available to amortised methods (e.g.Talts et al. (2018); Hermans et al. (2022)) due to its sequential sampling scheme. Our proposed prior may not be suitable for two scenarios, even if the normalising flow learns the likelihood perfectly. First, consider where a summary statistic is incompatible and, after standardisation, the adjustment parameter variance is set small. This could occur when the distribution of the model summary statistic is multi-modal at a specific parameter value. However, in such multi-modal scenarios, flow-based neural likelihood methods generally exhibit poor performance (Glaser et al., 2023), so we might not want to apply RSNL in this scenario regardless. In this case, RSNL will behave similarly to SNL for that particular summary. Second, a summary is correctly specified but lies in the tails. This is unlikely by definition of being in the tails, and the effect of this would be for the summary in the tails to be corrected as if it was misspecified. If concerns arise, researchers can visualise the summary statistic plots generated at a reasonable model parameter point or examine posterior predictive distributions. If necessary, an alternative prior can be employed. Finally, RSNL could diagnose and shift incompatible summaries as expected, but there might be broader issues that prevent useful insight into the scientific questions of the phenomena of interest (e.g. poor quality of observed data). We would not expect an inference method to be able to handle this, but RSNL can still be useful within a broader workflow for model criticism and further model refinement. The choice of π(Γ) was found to be crucial in practice. Our prior choice was based on the dual requirements to minimise noise introduced by the adjustment parameters if the summaries are compatible and to be capable of shifting the summary a significant distance from the origin if they are incompatible. The horseshoe prior is an appropriate choice for these requirements. Further work could consider how to implement this robustly in a NUTS sampler. Another approach is the spike-and-slab prior as in Ward et al. (2022). This type of prior is a mixture of two distributions: one that encourages shrinkage (the spike) and another that allows for a wider range of values (the slab). Further research is needed to determine the most appropriate prior choice for RSNL and similar methods, which could involve a comparative study of different prior choices and their effects on the robustness and efficiency of the resulting inference. Modellers constructing complex DGPs for real-world data should address model misspecification. The machine learning and statistics community must develop tools for practitioners to conduct neural SBI methods without producing misleading results under model misspecification. We hope that our proposed method contributes to the growing interest in addressing model misspecification in SBI. ## Acknowledgments The authors express their gratitude to the anonymous reviewers for providing useful comments and the TMLR editorial team for their guidance. RPK was supported by a PhD Research Training Program scholarship from the Australian Government and a QUT Centre for Data Science top-up scholarship. CD and DTF were supported by Australian Research Council funding schemes FT210100260 and DE200101070, respectively. DJN acknowledges the support from the Singapore Ministry of Education Academic Research Fund Tier 1 grant. RPK, DJW, and CD thank the Centre for Data Science at QUT for its support. The eResearch Office at QUT provided computational resources. ## References Carlo Albert, Simone Ulzega, Firat Ozdemir, Fernando Perez-Cruz, and Antonietta Mira. Learning summary statistics for Bayesian inference with autoencoders. *SciPost Physics Core*, 5(3):043, 2022. Christophe Andrieu and Gareth O. Roberts. The pseudo-marginal approach for efficient Monte Carlo computations. *The Annals of Statistics*, 37(2):697–725, 2009. URL http://www.jstor.org/stable/ 30243645. Stuart Barber, Jochen Voss, and Mark Webster. The rate of convergence for approximate Bayesian computation. Electronic Journal of Statistics, 9(1):80–105, 2015. doi: 10.1214/15-EJS988. URL https://doi.org/10. 1214/15-EJS988. Mark A. Beaumont. Approximate Bayesian computation in evolution and ecology. *Annual Review of Ecology, Evolution, and Systematics*, 41(1):379–406, 2010. ISSN 1543592X. doi: 10.1146/annurev-ecolsys-102209-144621. URL https://dx.doi.org/10.1146/ annurev-ecolsys-102209-144621. Yves Bernaerts, Michael Deistler, Pedro J Goncalves, Jonas Beck, Marcel Stimberg, Federico Scala, Andreas S Tolias, Jakob H Macke, Dmitry Kobak, and Philipp Berens. Combined statistical-mechanistic modeling links ion channel genes to physiology of cortical neuron types. *bioRxiv*, pp. 2023–03, 2023. Christopher M. Bishop. Mixture density networks. Technical Report, Aston University, 1994. Pier Giovanni Bissiri, Chris C Holmes, and Stephen G Walker. A general framework for updating belief distributions. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 78(5):1103, 2016. David M. Blei. Build, compute, critique, repeat: Data analysis with latent variable models. Annual Review of Statistics and Its Application, 1(Volume 1, 2014):203–232, 2014. ISSN 2326-831X. doi: https: //doi.org/10.1146/annurev-statistics-022513-115657. URL https://www.annualreviews.org/content/ journals/10.1146/annurev-statistics-022513-115657. Jan Boelts, Jan-Matthis Lueckmann, Richard Gao, and Jakob H Macke. Flexible and efficient simulation-based inference for models of decision-making. *eLife*, 11:e77220, 2022. Joshua J Bon, David J Warne, David J Nott, and Christopher Drovandi. Bayesian score calibration for approximate models. *arXiv preprint arXiv:2211.05357*, 2022. Joshua J Bon, Adam Bretherton, Katie Buchhorn, Susanna Cramb, Christopher Drovandi, Conor Hassan, Adrianne L Jenner, Helen J Mayfield, James M McGree, Kerrie Mengersen, et al. Being Bayesian in the 2020s: opportunities and challenges in the practice of modern applied Bayesian statistics. *Philosophical* Transactions of the Royal Society A, 381(2247):20220156, 2023. George EP Box. Science and statistics. *Journal of the American Statistical Association*, 71(356):791–799, 1976. James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. Python package version 0.3.13, URL https://github. com/google/jax. Patrick Cannon, Daniel Ward, and Sebastian M. Schmon. Investigating the impact of model misspecification in neural simulation-based inference, 2022. URL https://arxiv.org/abs/2209.01845. *arXiv preprint* arXiv:2209.01845. Badr-Eddine Chérief-Abdellatif and Pierre Alquier. MMD-Bayes: robust Bayesian estimation via maximum mean discrepancy. In *Symposium on Advances in Approximate Bayesian Inference*, pp. 1–21. PMLR, 2020. Basile Confavreux, Poornima Ramesh, Pedro J. Goncalves, Jakob H. Macke, and Tim P. Vogels. Meta-learning families of plasticity rules in recurrent spiking networks using simulation-based inference. In *Thirty-seventh* Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id= FLFasCFJNo. Kyle Cranmer, Johann Brehmer, and Gilles Louppe. The frontier of simulation-based inference. *Proceedings* of the National Academy of Sciences, 117(48):30055–30062, 2020. Katalin Csilléry, Michael G.B. Blum, Oscar E. Gaggiotti, and Olivier François. Approximate Bayesian computation (ABC) in practice. *Trends in Ecology & Evolution*, 25(7):410–418, 2010. ISSN 0169-5347. doi: https://doi.org/10.1016/j.tree.2010.04.001. URL http://www.sciencedirect.com/science/article/ pii/S0169534710000662. Christopher Drovandi David T. Frazier, David J. Nott and Robert Kohn. Bayesian inference using synthetic likelihood: asymptotics and adjustments. *Journal of the American Statistical Association*, 118(544): 2821–2832, 2023. doi: 10.1080/01621459.2022.2086132. URL https://doi.org/10.1080/01621459.2022. 2086132. Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, and Gilles Louppe. Towards reliable simulation-based inference with balanced neural ratio estimation. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=o762mMj4XK. Arnaud Delaunoy, Benjamin Kurt Miller, Patrick Forré, Christoph Weniger, and Gilles Louppe. Balancing simulation-based inference for conservative posteriors. *arXiv preprint arXiv:2304.10978*, 2023. Charita Dellaporta, Jeremias Knoblauch, Theodoros Damoulas, and François-Xavier Briol. Robust Bayesian inference for simulator-based models via the MMD posterior bootstrap. In International Conference on Artificial Intelligence and Statistics, pp. 943–970. PMLR, 2022. Christopher Drovandi and David T Frazier. A comparison of likelihood-free methods with and without summary statistics. *Statistics and Computing*, 32(3):1–23, 2022. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In Advances in Neural Information Processing Systems 32, 2019. URL https://proceedings.neurips.cc/paper/2019/ file/7ac71d433f282034e088473244df8c02-Paper.pdf. Conor Durkan, Iain Murray, and George Papamakarios. On contrastive learning for likelihood-free inference. In *The 37-th International Conference on Machine Learning*, pp. 2771–2781. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/durkan20a.html. Michael Evans and Gun Ho Jang. Weak informativity and the information in one prior relative to another. Statistical Science, 26(3):423 - 439, 2011. doi: 10.1214/11-STS357. URL https://doi.org/10.1214/ 11-STS357. Maciej Falkiewicz, Naoya Takeishi, Imahn Shekhzadeh, Antoine Wehenkel, Arnaud Delaunoy, Gilles Louppe, and Alexandros Kalousis. Calibrating neural simulation-based inference with differentiable coverage probability. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https: //openreview.net/forum?id=wLiMhVJ7fx. Neil M Ferguson, Daniel Laydon, Gemma Nedjati-Gilani, Natsuko Imai, Kylie Ainslie, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, Zulma Cucunubá, Gina Cuomo-Dannenburg, et al. Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Technical Report, Imperial College COVID-19 Response Team London, 2020. DOI 10.25561/77482. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=6Tm1mposlrM. D T Frazier, G M Martin, C P Robert, and J Rousseau. Asymptotic properties of approximate Bayesian computation. *Biometrika*, 105(3):593–607, 06 2018. ISSN 0006-3444. doi: 10.1093/biomet/asy027. URL https://doi.org/10.1093/biomet/asy027. David T. Frazier and Christopher Drovandi. Robust approximate Bayesian inference with synthetic likelihood. Journal of Computational and Graphical Statistics, 30(4):958–976, 2021. doi: 10.1080/10618600.2021. 1875839. URL https://doi.org/10.1080/10618600.2021.1875839. David T Frazier, Christopher Drovandi, and Ruben Loaiza-Maya. Robust approximate Bayesian computation: an adjustment approach. *arXiv preprint arXiv:2008.04099*, 2020a. David T. Frazier, Christian P. Robert, and Judith Rousseau. Model misspecification in approximate Bayesian computation: consequences and diagnostics. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(2):421–444, 2020b. doi: https://doi.org/10.1111/rssb.12356. URL https: //rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12356. David T Frazier, Christopher Drovandi, and David J Nott. Synthetic likelihood in misspecified models: consequences and corrections. *arXiv preprint arXiv:2104.03436*, 2021. Jonah Gabry, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. Visualization in Bayesian workflow. *Journal of the Royal Statistical Society Series A: Statistics in Society*, 182(2):389–402, 01 2019. ISSN 0964-1998. doi: 10.1111/rssa.12378. URL https://doi.org/10.1111/rssa.12378. Richard Gao, Michael Deistler, and Jakob H. Macke. Generalized Bayesian inference for scientific simulators via amortized cost estimation. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https://openreview.net/forum?id=ZARAiV25CW. Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. Bayesian workflow. *arXiv preprint* arXiv:2011.01808, 2020. Pierre Glaser, Michael Arbel, Arnaud Doucet, and Arthur Gretton. Maximum likelihood learning of energy-based models for simulation-based inference, 2023. URL https://openreview.net/forum?id= gL68u5UuWa. Manuel Glöckler, Michael Deistler, and Jakob H. Macke. Adversarial robustness of amortized Bayesian inference. In *Proceedings of the 40th International Conference on Machine Learning*, ICML'23. JMLR.org, 2023. Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378, 2007. Pedro J Gonçalves, Jan-Matthis Lueckmann, Michael Deistler, Marcel Nonnenmacher, Kaan Öcal, Giacomo Bassetto, Chaitanya Chintaluri, William F Podlaski, Sara A Haddad, Tim P Vogels, et al. Training deep neural density estimators to identify mechanistic models of neural dynamics. *Elife*, 9:e56261, 2020. David Greenberg, Marcel Nonnenmacher, and Jakob Macke. Automatic posterior transformation for likelihoodfree inference. In *The 36-th International Conference on Machine Learning*, pp. 2404–2414. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/greenberg19a.html. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(25):723–773, 2012. URL http://jmlr.org/ papers/v13/gretton12a.html. Severin Hauenstein, Julien Fattebert, Martin U. Grüebler, Beat Naef-Daenzer, Guy Pe'Er, and Florian Hartig. Calibrating an individual-based movement model to predict functional connectivity for little owls. *Ecological Applications*, 29(4):e01873, 2019. ISSN 1051-0761. doi: 10.1002/eap.1873. URL https: //dx.doi.org/10.1002/eap.1873. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/ forum?id=Hkg4TI9xl. Joeri Hermans, Volodimir Begy, and Gilles Louppe. Likelihood-free MCMC with amortized approximate ratio estimators. In *The 37-th International Conference on Machine Learning*, pp. 4239–4248. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/hermans20a.html. Joeri Hermans, Nilanjan Banik, Christoph Weniger, Gianfranco Bertone, and Gilles Louppe. Towards constraining warm dark matter with stellar streams through neural simulation-based inference. Monthly Notices of the Royal Astronomical Society, 507(2):1999–2011, 08 2021. ISSN 0035-8711. doi: 10.1093/ mnras/stab2181. URL https://doi.org/10.1093/mnras/stab2181. Joeri Hermans, Arnaud Delaunoy, François Rozet, Antoine Wehenkel, Volodimir Begy, and Gilles Louppe. A crisis in simulation-based inference? Beware, your posterior approximations can be unfaithful. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id= LHAbHkt6Aq. Matthew Hoffman and Andrew Gelman. The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. *Journal of Machine Learning Research*, 15(1):1593–1623, 2014. ISSN 1532-4435. Daolang Huang, Ayush Bharti, Amauri H Souza, Luigi Acerbi, and Samuel Kaski. Learning robust statistics for simulation-based inference under model misspecification. In *Thirty-seventh Conference on Neural* Information Processing Systems, 2023. URL https://openreview.net/forum?id=STrXsSIEiq. John D Hunter. Matplotlib: a 2D graphics environment. *Computing in science & engineering*, 9(03):90–95, 2007. Rob J. Hyndman. Computing and graphing highest density regions. *The American Statistician*, 50(2): 120–126, 1996. doi: 10.1080/00031305.1996.10474359. URL https://www.tandfonline.com/doi/abs/10. 1080/00031305.1996.10474359. Ryan P. Kelly. Implementing Bayesian synthetic likelihood within the engine for likelihood-free inference, 2022. URL https://eprints.qut.edu.au/233759/. *Master of Philosophy Thesis*. Queensland University of Technology. Patrick Kidger. *On neural differential equations*. PhD thesis, University of Oxford, 2021. Diederik P. Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *The 3rd International Conference on Learning Representations, ICLR*, 2015. URL http://arxiv.org/abs/1412.6980. Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Why normalizing flows fail to detect out-of-distribution data. In *Advances in Neural Information Processing Systems 34*, 2020. B.J.K. Kleijn and A.W. van der Vaart. The Bernstein-Von-Mises theorem under misspecification. Electronic Journal of Statistics, 6(none):354 - 381, 2012. doi: 10.1214/12-EJS675. URL https://doi.org/10.1214/ 12-EJS675. Jeremias Knoblauch, Jack Jewson, and Theodoros Damoulas. An optimization-centric view on Bayes' rule: reviewing and generalizing variational inference. *Journal of Machine Learning Research*, 23(132):1–109, 2022. Ravin Kumar, Colin Carroll, Ari Hartikainen, and Osvaldo Martin. ArviZ a unified library for exploratory analysis of Bayesian models in Python. *Journal of Open Source Software*, 4(33):1143, 2019. doi: 10.21105/ joss.01143. URL https://doi.org/10.21105/joss.01143. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In *Advances in Neural Information Processing Systems 31*, 2017. John R. Lewis, Steven N. MacEachern, and Yoonkyung Lee. Bayesian restricted likelihood methods: conditioning on insufficient statistics in Bayesian regression (with discussion). *Bayesian Analysis*, 16(4): 1393–2854, 2021. doi: 10.1214/21-BA1257. URL https://doi.org/10.1214/21-BA1257. Wentao Li and Paul Fearnhead. On the asymptotic efficiency of approximate Bayesian computation estimators. Biometrika, 105(2):285–299, 01 2018. ISSN 0006-3444. doi: 10.1093/biomet/asx078. URL https: //doi.org/10.1093/biomet/asx078. J Lintusaari, H Vuollekoski, A Kangasrääsiö, K Skytén, M Järvenpää, P Marttinen, M Gutmann, A Vehtari, J Corander, and S Kaski. ELFI: Engine for likelihood-free inference. *Journal of Machine Learning Research*, 19(1), 2018. doi: 10.5555/3291125.3291141. URL https://dx.doi.org/10.5555/3291125.3291141. David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJkXfE5xx. Jan-Matthis Lueckmann, Jan Boelts, David Greenberg, Pedro Goncalves, and Jakob Macke. Benchmarking simulation-based inference. In *The 24-th International Conference on Artificial Intelligence and Statistics*, pp. 343–351. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr.press/v130/lueckmann21a.html. Philippe Marchand, Morgan Boenke, and David M. Green. A stochastic movement model reproduces patterns of site fidelity and long-distance dispersal in a population of Fowler's toads (*Anaxyrus fowleri*). Ecological Modelling, 360:63–69, 2017. ISSN 0304-3800. doi: 10.1016/j.ecolmodel.2017.06.025. URL https://dx.doi.org/10.1016/j.ecolmodel.2017.06.025. Jean-Michel Marin, Natesh S Pillai, Christian P Robert, and Judith Rousseau. Relevant statistics for Bayesian model choice. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 76(5):833–859, 2014. Takuo Matsubara, Jeremias Knoblauch, François-Xavier Briol, and Chris J Oates. Robust generalised Bayesian inference for intractable likelihoods. *Journal of the Royal Statistical Society Series B: Statistical* Methodology, 84(3):997–1022, 2022. Elina Numminen, Lu Cheng, Mats Gyllenberg, and Jukka Corander. Estimating the transmission dynamics of *Streptococcus pneumoniae* from strain prevalence data. *Biometrics*, 69(3):748–757, 2013. doi: https:// doi.org/10.1111/biom.12040. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/biom.12040. Lorenzo Pacchiardi and Ritabrata Dutta. Generalized Bayesian likelihood-free inference using scoring rules estimators. *arXiv preprint arXiv:2104.03889*, 2021. Lorenzo Pacchiardi and Ritabrata Dutta. Score matched neural exponential families for likelihood-free inference. *Journal of Machine Learning Research*, 23, 2022. ISSN 1532-4435. George Papamakarios and Iain Murray. Fast ϵ-free inference of simulation models with Bayesian conditional density estimation. In *Advances in Neural Information Processing Systems 29*, 2016. URL https: //proceedings.neurips.cc/paper/2016/file/6aca97005c68f1206823815f66102863-Paper.pdf. George Papamakarios, David Sterratt, and Iain Murray. Sequential neural likelihood: fast likelihood-free inference with autoregressive flows. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 837–848. PMLR, 2019. URL https://proceedings.mlr.press/v89/papamakarios19a. html. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57), 2021. ISSN 1532-4435. Mijung Park, Wittawat Jitkrittum, and Dino Sejdinovic. K2-ABC: approximate Bayesian computation with kernel embeddings. In Arthur Gretton and Christian C. Robert (eds.), *Proceedings of the 19th International* Conference on Artificial Intelligence and Statistics, volume 51 of *Proceedings of Machine Learning Research*, pp. 398–407, Cadiz, Spain, 09–11 May 2016. PMLR. URL https://proceedings.mlr.press/v51/park16. html. Du Phan, Neeraj Pradhan, and Martin Jankowiak. Composable effects for flexible and accelerated probabilistic programming in NumPyro. In *Program Transformations for ML Workshop at NeurIPS 2019*, 2019. URL https://openreview.net/forum?id=H1g1niFhIB. Dennis Prangle. Summary statistics. In *Handbook of Approximate Bayesian Computation*, pp. 125–152. Chapman and Hall/CRC, 2018. Leah Price, C. C. Drovandi, A. Lee, and D. J. Nott. Bayesian synthetic likelihood. *Journal of Computational* and Graphical Statistics, 27(1):1–11, 2018. ISSN 1061-8600. doi: 10.1080/10618600.2017.1302882. URL https://dx.doi.org/10.1080/10618600.2017.1302882. Stefan T. Radev, Marvin Schmitt, Valentin Pratz, Umberto Picchini, Ullrich Köthe, and Paul-Christian Bürkner. Jana: jointly amortized neural approximation of complex Bayesian models. In Robin J. Evans and Ilya Shpitser (eds.), *Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence*, volume 216 of *Proceedings of Machine Learning Research*, pp. 1695–1706. PMLR, 31 Jul–04 Aug 2023. URL https://proceedings.mlr.press/v216/radev23a.html. Oliver Ratmann, Christophe Andrieu, Carsten Wiuf, and Sylvia Richardson. Model criticism based on likelihood-free inference, with an application to protein network evolution. Proceedings of the National Academy of Sciences, 106(26):10576–10581, 2009. M. Schmitt, P.-C. Bürkner, U. Köthe, and S. T. Radev. Detecting model misspecification in amortized Bayesian inference with neural networks. In *45th German Conference on Pattern Recognition (GCPR)*, 2023a. M. Schmitt, D. Habermann, P.-C. Bürkner, U. Koethe, and S. T. Radev. Leveraging self-consistency for data-efficient amortized Bayesian inference. In NeurIPS UniReps: The First Workshop on Unifying Representations in Neural Models, 2023b. Marvin Schmitt, Stefan T. Radev, and Paul-Christian Bürkner. Meta-uncertainty in Bayesian model comparison. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent (eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of *Proceedings of Machine* Learning Research, pp. 11–29. PMLR, 25–27 Apr 2023c. URL https://proceedings.mlr.press/v206/ schmitt23a.html. Sebastian M Schmon, Patrick W Cannon, and Jeremias Knoblauch. Generalized posteriors in approximate Bayesian computation. In *Third Symposium on Advances in Approximate Bayesian Inference*, 2021. URL https://openreview.net/forum?id=tKrg5DAyeWq. Ali Siahkoohi, Gabrio Rizzuti, Mathias Louboutin, Philipp Witte, and Felix Herrmann. Preconditioned training of normalizing flows for variational inference in inverse problems. In *Third Symposium on Advances* in Approximate Bayesian Inference, 2021. URL https://openreview.net/forum?id=P9m1sMaNQ8T. Ali Siahkoohi, Gabrio Rizzuti, Rafael Orozco, and Felix J Herrmann. Reliable amortized variational inference with physics-based latent distribution correction. *Geophysics*, 88(3):R297–R322, 2023. Jack Simons, Louis Sharrock, Song Liu, and Mark Beaumont. Neural score estimation: likelihood-free inference with conditional score based diffusion models. In *Fifth Symposium on Advances in Approximate* Bayesian Inference, 2023. URL https://openreview.net/forum?id=AVkJEb1ahOY. Scott A Sisson, Yanan Fan, and Mark Beaumont. *Handbook of approximate Bayesian computation*. CRC Press, 2018. Adam Spannaus, Theodore Papamarkou, Samantha Erwin, and J. Blair Christian. Inferring the spread of COVID-19: the role of time-varying reporting rate in epidemiological modelling. *Scientific Reports*, 12 (1):10761, 06 2022. ISSN 2045-2322. doi: 10.1038/s41598-022-14979-0. URL https://doi.org/10.1038/ s41598-022-14979-0. Sean Talts, Michael Betancourt, Daniel Simpson, Aki Vehtari, and Andrew Gelman. Validating Bayesian inference algorithms with simulation-based calibration. *arXiv preprint arXiv:1804.06788*, 2018. Alvaro Tejero-Cantero, Jan Boelts, Michael Deistler, Jan-Matthis Lueckmann, Conor Durkan, Pedro J. Gonçalves, David S. Greenberg, and Jakob H. Macke. sbi: A toolkit for simulation-based inference. Journal of Open Source Software, 5(52):2505, 2020. doi: 10.21105/joss.02505. URL https://doi.org/10. 21105/joss.02505. Owen Thomas, Ritabrata Dutta, Jukka Corander, Samuel Kaski, and Michael U Gutmann. Likelihood-free inference by ratio estimation. *Bayesian Analysis*, 17(1):1–31, 2022. Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner. Ranknormalization, folding, and localization: an improved Rˆ for assessing convergence of MCMC (with discussion). *Bayesian Analysis*, 16(2):667–718, 2021. Xiaoyu Wang, Adrianne L Jenner, Robert Salomone, David J Warne, and Christopher Drovandi. Calibration of agent based models for monophasic and biphasic tumour growth using approximate Bayesian computation. Journal of Mathematical Biology, 88(3):28, 2024. Daniel Ward. *flowjax*, 2023. Python package version 7.0.0, [Online; accessed 17-January-2023], URL https://github.com/danielward27/flowjax. Daniel Ward, Patrick Cannon, Mark Beaumont, Matteo Fasiolo, and Sebastian M Schmon. Robust neural posterior estimation and statistical model criticism. In *Advances in Neural Information Processing Systems* 35, 2022. URL https://openreview.net/forum?id=MHE27tjD8m3. David J. Warne, Anthony Ebert, Christopher Drovandi, Wenbiao Hu, Antonietta Mira, and Kerrie Mengersen. Hindsight is 2020 vision: a characterisation of the global response to the COVID-19 pandemic. *BMC Public* Health, 20(1):1868, 2020. ISSN 1471-2458. doi: 10.1186/s12889-020-09972-z. URL https://doi.org/10. 1186/s12889-020-09972-z. Richard David Wilkinson. Approximate Bayesian computation (ABC) gives exact results under the assumption of model error. *Statistical Applications in Genetics and Molecular Biology*, 12(2):129–141, 2013. doi: doi:10.1515/sagmb-2013-0010. URL https://doi.org/10.1515/sagmb-2013-0010. Samuel Wiqvist, Jes Frellsen, and Umberto Picchini. Sequential neural posterior and likelihood approximation, 2021. URL https://arxiv.org/abs/2102.06522. *arXiv preprint arXiv:2102.06522*. Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, and Ziwei Liu. OpenOOD: benchmarking generalized out-of-distribution detection. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=gT6j4_tskUt. ## Appendix A Implementation Details The code to reproduce the results has been included as supplementary material. All simulations and inference were executed on a high-performance computer, each running using a single Intel Xeon core with 8GB of RAM. We used approximately 11,800 hours of CPU time across all experiment runs included in this paper. The robust sequential neural likelihood (RSNL) inference algorithm was implemented using the JAX (Bradbury et al., 2018) and NumPyro (Phan et al., 2019) libraries due to the speed of these libraries for MCMC sampling. Plotting was done using the matplotlib library (Hunter, 2007), and MCMC diagnostics and visualisations were done using the ArviZ library (Kumar et al., 2019). The SIR model is implemented in JAX using the diffrax library (Kidger, 2021). RNPE results were obtained from implementing the example using the publicly available code at https://github.com/danielward27/rnpe. RBSL results were obtained using the ELFI software package implementation (Lintusaari et al., 2018; Kelly, 2022). ## B Computation Times | Example | Method | MCMC dimension | Simulation | Flow | MCMC | Overall | | |-------------------------|----------|------------------|--------------|--------|--------|-----------|------| | Contaminated | normal | SNL | 1 | 1 | 557 | 1931 | 2489 | | (standard) | RSNL | 3 | 1 | 523 | 1222 | 1746 | | | Contaminated normal (no | SNL | 1 | 1 | 3634 | 2513 | 6148 | | | summarisation) | RSNL | 101 | 1 | 3524 | 43308 | 46833 | | | Misspecified MA(1) | SNL | 1 | 1 | 490 | 989 | 1480 | | | RSNL | 3 | 1 | 441 | 1098 | 1540 | | | | Contaminated SLCP | SNL | 5 | 1 | 1748 | 35153 | 36902 | | | RSNL | 15 | 1 | 1699 | 21358 | 23058 | | | | Misspecified SIR | SNL | 2 | 65305 | 1429 | 1839 | 68573 | | | RSNL | 8 | 64987 | 1238 | 21233 | 87458 | | | | Toad Movement | SNL | 3 | 52 | 1226 | 1216 | 2493 | | | RSNL | 51 | 110 | 4150 | 23331 | 27591 | | | Table 2: Breakdown of the computational time (rounded to the nearest second) for RSNL and SNL. Simulation time is the time spent running simulations. Flow time is the time training the neural network. MCMC is the time spent sampling the posterior via MCMC. Times are aggregated across all rounds, and the MCMC time includes the final MCMC run used to get the approximate posterior samples. Times are shown for the five experiments in the paper and the contaminated normal example without summarisation (more details in Appendix F). ## C Hyperparameters Normalising Flow We utilise a conditional neural spline flow (Durkan et al., 2019) for qϕ(S(x) | θ), as implemented in the flowjax package (Ward, 2023). The flow design follows the choices in the sbi package (Tejero-Cantero et al., 2020). We use 10 bins over the interval [-5, 5] for the rational quadratic spline transformer. The transformer function defaults to the identity function outside of this range, an important detail for the considered misspecified models, as the observed summary often appears in the tails. The conditioner consists of five coupling layers, each using a multilayer perceptron of two layers with 50 hidden units. The flow is trained using the Adam optimiser (Kingma & Ba, 2015) with a learning rate of 5 × 10−4 and a batch size of 256. Flow training is stopped when either the validation loss, calculated on 10% of the samples, has not improved over 20 epochs or when the limit of 500 epochs is reached. ## Mcmc We use the no-U-turn sampler with four individual MCMC chains. To assess chain convergence, we ensure that the rank-normalised Rˆ of Vehtari et al. (2021) falls within the range (1.0, 1.05), that the effective sample size (ESS) is reasonably large, and we inspect the autocorrelation and trace plots for each example. In the initial round of our RSNL algorithm, the chains are initialised with a random draw from the prior. For each subsequent round, they are initialised using a random sample from the current approximate posterior. We run each chain for 3,500 iterations, discarding the first 1,000 for burn-in. The resulting 10,000 combined samples from the four MCMC chains are thinned by a factor of 10. Model simulations are then run at the 1,000 sampled model parameter values. We use thinning to ensure that potentially expensive model simulations are run using relatively independent parameter values, leveraging that running MCMC with the learnt normalising flow is typically faster than running model simulations. The number of training rounds is set to R = 10, resulting in 10,000 model simulations. After R rounds, we use qR,ϕ(S(y) | θ) to run 10,000 MCMC iterations targeting the approximate posterior. ![31_image_0.png](31_image_0.png) Figure 13: Boxplots illustrating the C2ST scores for RSNL and SNL on the contaminated normal example. ## D Additional Information And Results For Examples Contaminated Normal We consider more results on the contaminated normal example as it has an analytical posterior in the well-specified case. We can thus use draws from the true posterior for the classifier 2-sample test (C2ST), a popular performance metric in SBI. C2ST. The C2ST trains a binary classifier to determine if two samples come from the same distribution (Lopez-Paz & Oquab, 2017). In SBI, C2ST is used to differentiate between samples from the true and approximate posteriors, with the score being the classification accuracy of the classifier on held-out data (Lueckmann et al., 2021). If the true and estimated posteriors are indistinguishable, the classification score will converge to 0.5, while a poor posterior approximate will be close to 1. We use the C2ST as implemented in sbibm (Lueckmann et al., 2021). The results are depicted in Figure 13, which displays boxplots of the C2ST score for 200 seeds for RSNL and SNL. As evidenced by the consistently low C2ST score, RSNL performs well, implying that the classifier finds distinguishing samples from the approximate and true posterior challenging. Conversely, SNL displays poor performance, with a median C2ST score close to 1. Well-specified example. Modellers might worry that in a well-specified model, the adjustment parameters in RSNL could introduce noise, thus reducing the accuracy of uncertainty quantification. We address this concern by briefly comparing the performance of RSNL and SNL using a well-specified Gaussian example (i.e., the contaminated normal example without contaminated draws). The comparative results for RSNL and SNL are displayed in Figure 14. The empirical coverage and the log density of the approximate posterior at θ0 are notably similar for RSNL and SNL. These findings reassure that RSNL does not adversely impact inference on well-specified examples. ![31_image_1.png](31_image_1.png) Figure 14: The left plot shows the empirical coverage across various credibility levels for SNL (dashed) and RSNL (solid) on a well-specified Gaussian example. A well-calibrated posterior closely aligns with the diagonal (dotted) line. Conservative posteriors are situated in the upper triangle, while overconfident ones are in the lower triangle. The right plot shows the approximate posterior log density boxplots at the true parameter values. Posterior predictive checks. Figure 15 illustrates this concept through the PPCs for both RSNL and SNL on the contaminated normal example. For SNL, the posterior predictive distribution has negligible support around both observed summaries, with no indication of what summary is misspecified. For RSNL, however, the correctly specified summary has good support, and the misspecified summary has negligible support. So, in this example, PPCs only correctly identified the misspecified summary when the inference is robust to model misspecification. In a more complicated setting, with many summaries, PPCs with a non-robust method (such as SNL) are unlikely to be sufficient for a modeller dealing with model misspecification. If there is model misspecification, the modeller must have some indication of what is misspecified to refine the model. Further, the model can be correctly misspecified and still have bad predictive performance for reasons unrelated to the model, such as poor inference. ![32_image_0.png](32_image_0.png) Figure 15: Posterior predictive checks on the contaminated normal example for RSNL and SNL. The observed summaries are shown as a vertical dashed line. ## Misspecified Sir Figure 16 shows a representative visualisation of the full data for the true DGP of the misspecified SIR example. The spikes, corresponding to increased observed infections on Mondays, lead to the autocorrelation summary statistic being unable to be recovered from the assumed DGP that does not contain these spikes. ![33_image_0.png](33_image_0.png) Figure 16: An example simulation from the true SIR model, using parameters β = 0.15 and η = 0.1. ## Toad Movement Model We outline the implementation details of the MMD (Gretton et al., 2012) used for the toad example, as included in the main text. The MMD serves as a measure of distance between two probability distributions and is particularly robust against outliers. Due to this robustness, MMD has been used for robustness to model misspecification in SBI (Park et al., 2016; Huang et al., 2023). In our case, we limit the MMD calculation to samples from the posterior predictive and the Dirac measure centred on the observed summary statistics, leading to a simplified expression: $$\mathrm{MMD}=\frac{1}{l^{2}}\sum_{i,j=1}^{l}K(S(\mathbf{x_{i}}),S(\mathbf{x_{j}}))-\frac{2}{l}\sum_{i=1}^{l}K(S(\mathbf{x_{i}}),S(\mathbf{y})),$$ $$\mathbf{(8)}$$. K(S(xi), S(y)), (8) where l = 1000 denotes the number of samples from the posterior predictive distribution, K = exp(−||x − x ′||22/β2) is the radial basis function kernel, and β is determined via the median heuristic, β =pmedian/2, with the median being of the Euclidean distances between pairs of simulated summaries. We also include the estimated posterior for RSNL, SNL and RBSL for the toad movement model with simulated data. Here, SNL gives similar inferences to RSNL and RBSL, and all three methods have reasonable support around the true parameter value. ![34_image_0.png](34_image_0.png) for the toad movement example. Plots on the diagonal are the univariate posterior densities obtained by RSNL (solid), SNL (dashed) and RBSL (dotted). The bivariate posterior distributions for the toad movement example are visualised as contour plots when applying RSNL (solid, lower triangle off-diagonal) and SNL (dashed, upper triangle off-diagonal). The true parameter values are visualised as a vertical dashed line for the marginal plots and the × symbol in the bivariate plots. ## E Comparison Of Laplace Prior With Proposed Data-Driven Prior In Figure 18, we showcase the impact of prior selection using the contaminated normal example with a sample variance of 4. The data-driven prior employed for RSNL yields a tighter 90% credible interval of (0.81, 1.17). In contrast, despite being centred on the correct value and allowing for adjustment of misspecified summaries, the Laplace(0, 0.5) prior results in a substantially wider 90% credible interval, spanning (-4.53, 6.24). ![35_image_0.png](35_image_0.png) Figure 18: Estimated univariate posterior density for the contaminated normal using RSNL with our recommended data-driven prior (solid) and a fixed scale Laplace prior (dotted). ## F Additional Results For Rsnl On The Contaminated Normal Example With No Summarisation This section evaluates the scalability of RSNL in scenarios involving an increased number of adjustment parameters. To evaluate this, we considered the contaminated normal example without summarisation. That is, we perform inference on the one hundred samples directly. The posterior plots presented here for model parameter θ and the first adjustment summary are primarily to confirm that the increase in the number of adjustment parameters does not adversely affect inference quality. Comprehensive details on computational time, which is the focal point of this examination, are available in Appendix B. ![35_image_1.png](35_image_1.png) ![35_image_2.png](35_image_2.png) Figure 19: The left plot shows the RSNL approximate posterior for θ on the contaminated normal example with no summarisation. The true parameter value is shown as a vertical dashed line. The right plot shows the prior (dashed) and estimated marginal posterior (solid) densities for the first (of the 100) adjustment parameter, γ1. ## G Laplace Prior Hyperparameter Figure 20 displays the approximate posterior for the contaminated normal model under varying levels of model misspecification, with posteriors for the model parameter θ in the left column and posteriors for γ2 (i.e. the adjustment parameter that corresponds with the incompatible summary) in the right column. We define the degree of misspecification by setting the "observed" sample variance to 2.0, 7.0, and 12.0 for mild, moderate, and severe conditions, respectively. We compute and visualise the approximate posteriors for τ = 0.05*, . . . ,* 1, in increments of 0.05, with τ = 0 (i.e. SNL) omitted for better visualisation. From the θ ![36_image_0.png](36_image_0.png) ![36_image_1.png](36_image_1.png) (b) Posterior distributions for adjustment parameter γ2. Figure 20: RSNL approximate posteriors for the contaminated normal under mild, moderate, and severe model misspecification. posteriors, we can see all values of τ lead to useful inference on θ, in that they centre on θ0. However, the hyperparameter τ does have the effect of influencing the sharpness of the posterior. Ideally, τ should be kept small to avoid introducing excessive noise through the adjustment parameters. However, the behaviour of small τ values can be peculiar, as evident in the posterior for γ2 under mild misspecification. Surprisingly, the most unusual-looking behaviour for γ2 is observed in the case of mild misspecification for low values of τ . The reason may be that under severe model misspecification, it is much easier to identify misspecification, and the data-driven prior allows for this correction despite the greater distance required to be shifted. But for mild model misspecification, if we set the adjustment parameter with a smaller variance, the adjustment parameters may be split between not adjusting and adjusting. If more knowledge of the nature of the model misspecification is known, the modeller may be able to use this to select values of τ . But we promote the choice of τ = 0.3 as a robust choice when the level of model misspecification is unknown. Finally, although we have only presented results here for the contaminated normal example, similar outcomes were observed across the other examples in this paper.
Review 1: Summary: The paper proposes a method to handle misspecification in the context of simulation based inference with neural likelihood (NL) methods (i.e. methods that use a density estimator, in this case a normalizing flow, to estimate the intractable likelihood). The main idea behind the approach involves the use of auxiliary variables that increase the robustnes of NL methods against misspecification. The paper builds heavily on the Robust Bayesian synthetic likelihood (RBSL) approach, adapting it from the use of Gaussian likelihoods to normalizing flows. Strengths and Weaknesses: While the technical novelty is somewhat limited, I think it is interesting to see how the RBSL approach performs when used in concert with normalizing flows instead of Gaussian approximations. This is, to the best of my knowledge, the first work to explore this. I think something interesting to see relates to the choice for the prior over the adjustment parameters and the method's performance. How sensitive is the method to this choice? At some point it will break, as there is a balance in modeling the "OOD summary statistics" in the tail of the flow or by moving the adjustment parameters away from zero. When does the method lose its robustness capabilities? How gradually does this happen as we change the sharpness of the prior over the adjustment parameters? The discussion briefly touches on scenarios where methods as RBSL or the one proposed here may fail to detect misspecification or perform as expected (e.g. when there is model misspecification but the summary statistics are not OOD). The Gaussian assumption in BSL simplifies the analysis of scenarios where the method will work well or fail. While the introduction of neural density estimators complicates the analysis, I'd be interested in hearing the authors' thoughts about this, possibly under certain simplifying assumptions. For instance, assuming the flow perfectly learns the likelihood, when can we expect the proposed method to not work as expected? Shedding light on scenarios where the method is expected to fail / work would be quite useful for practitioners willing to use this approach. The discussion states "RSNL provides useful inference with significantly fewer simulation calls than ABC and BSL methods." This is true with existing techniques. However, with BSL methods, I'm assuming that if you learn an amortized Gaussian parameterization instead of estimating the Gaussian parameters through observations, BSL's sample efficiency could drastically improve. (In short, this would be equivalent to setting the normalizing flow in NL to a Gaussian with learnable mean and variance in an amortized way.) While to the best of my understanding such a method was not implemented nor tested yet in the literature, it is a very natural thing to consider, which may decrease the relevance of this claim. Final comment, at least one sentence in this work is an exact copy from the RBSL paper. For instance "[...] provided that m is chosen large enough so that the plug-in synthetic likelihood estimator has a small enough variance to ensure that MCMC mixing is not adversely affected" (close to the end of page 4). This is not affecting my judgement towards the paper, but should be rephrased. Requested Changes: See above. I believe addressing the questions above (prior, expected behavior under ideal training of the flow) could improve the paper considerably. Broader Impact Concerns: No ================================================== Review 2: Summary: The manuscript proposes a simulation-based inference (SBI) method that aims to be robust to model misspecification: SBI allows parameter estimation for models with intractable likelihoods. As SBI methods have been shown to yield unreliable results if the model is misspecified, development of new methods that can both detect and mitigate the impact of misspecification is highly desirable. Recent work towards this goal is reviewed, and a new approach proposed: This work adapts the model expansion approach for Bayesian Synthetic Likelihood (BSL) proposed by Frazier & Drovandi (2021) to Sequential Neural Likelihood (Papamakarios et al., 2019). The resulting method, robust Sequential Neural Likelihood (RSNL), uses auxiliary parameters to account for misspecified summary statistics. Inspecting deviation of posterior distribution over adjustment parameters from the (data-driven) prior allows detection of incompatible summary statistics. Empirical results show that the proposed algorithm can mitigate misspecification. More specifically, the authors find that RSNL compares favorably against SNL on five examples (Contaminated Normal, Misspecified MA(1), Contaminated SLCP, Misspecified SIR, and Toad Movement) when considering empirical coverage and the mean log probability of true parameters. Comparisons to robust Neural Posterior Estimation (RNPE; Ward et al., 2022) and BSL are included as well. Strengths and Weaknesses: Strengths: - Addresses a relevant issue, and proposes a method that is widely applicable - The method is sound and grounded in existing literature (which seems appropriately cited) - A range of different experiments is included, providing convincing evidence of RSNL's merits; promising performance on experiments with misspecified summaries Weaknesses: - While the authors discuss that they found the choice of prior to be crucial, experiments are reported for a single choice of prior/hyperparameter setting (apart from Appendix E, which compares RSNL with the proposed data-driven prior against a Laplace prior on the contaminated Normal example) - Some experiments reveal a high degree of underconfidence – while this is preferable to overconfidence, ideally, resulting posteriors would be well-calibrated As discussed by the authors, there is ample opportunity for follow-up work (e.g., searching for methods that yield well-calibrated posteriors, exploring parameter-dependent summary scales, or additional benchmarking as a function of simulation budget). I agree with the authors that these are interesting aspects that can be regarded out-of-scope for this submission. Requested Changes: Proposed changes to strengthen this work: - Given that the choice of prior is highlighted as a very sensitive one, I'd highly appreciate a more thorough investigation of this issue. A first step might be to include results showing performance for choices other than τ=0.3 – to get a better understanding how sensitive results to different scales are, and to make the choice taken more transparent. - It should be clearly stated how many observations/true parameters were used to calculate the mean posterior log probability, and what the whiskers in the box plot represent. - I'd propose to improve overall consistency between figures by using a common font size, font family, and color code. Minor issues: - Page 1: Blei et al. (2017) and Box (1976) are cited but do not appear in the references. - Page 6/7: Something seems to have been mixed up at the page transition, the sentence reads "[...] applicable to SBI with potential model misspecification Generalised Bayesian inference (GBI) is an alternative class [...]" - Page 22: The sentence "First, if a summary is incompatible but becomes very close to 0 after standardisation, this seems unlikely but may occur when simulated summaries generate a complex multi-modal distribution at a specific parameter value." should be revised. Broader Impact Concerns: None. ================================================== Review 3: Summary: The work proposes a model misspecification tool for "simulation based inference" (SBI), in the context of Bayesian inference via MCMC (Metropolis-Hastings). SBI implies that, in absence of a readily-available likelihood function, it is only possible to resort to simulations from an assumed data generating model. The authors then replace the true but unavailable likelihood function, with a normalising flow trained on many pairs of parameter draws and corresponding model simulations. They follow the "sequential Neural likelihood" (SNL) approach of Papamakarios et al 2019. However the authors take a specific route, in that they address the problem of 1) "robustify" SNL, for when there is a mismatch between the "assumed data-generating model" and the unknown actual data generating model. And then they consider: 2) given their robustified SNL (RSNL), they propose a way to detect which features of the model are misspecified. In fact, it is assumed that inference is not conducted conditionally to each and every observed data point, but conditionally to summary statistics of the data. Part (2) in the goal above essentially gives a way to visually determine which summary statistics are likely to be corresponding to features of the model that do not quite match the data (and as such, which parts of the model are misspecified). The strategy to robustify SNL and then identify misspecification is taken from Frazier-Drovandi (2021), as declared by the authors, however in Frazier-Drovandi the focus is on robustifying BSL (Bayesian synthetic likelihood). There (and also here) the MCMC is carried on an augmented space, where the target is the posterior on the space theta x Gamma, where Gamma is a vector with the same length as summary statistics. The authors observe wether the marginal posteriors for the entries in Gamma look different from the corresponding priors. If this is the case, the given component of Gamma (ie the corresponding summary) is misspecified. Strengths and Weaknesses: Essentially this work transposes the work on BSL done in Frazier-Drovandi (2021) into the SNL realm. It is very well written, convincing and (importantly) useful, in ways that I am going to describe. But first a couple of weak points: - in light of the existence of the background work of Frazier-Drovandi (2021), this contribution is only incremental. In fact, the way misspecificaton is detected, as mentioned above, is taken from Frazier-Drovandi (2021). - Table 2 in appendix B is not clear. See the "Requested changes" below. The strengths of the paper are: - as already mentioned, the paper is clear and well written; - RSNL works, and if applied to a model without misspecifications, it behaves just as SNL (meaning that it is not detrimental to use RSNL), at least on the considered examples. - Multiple examples are provided. - It is useful, as it gives a clear way on how to detect misspecifications. - It reinforces the notion that misspecification in SBI is understudied and deserves more attention. Requested Changes: Entries marked with an asterisk (*) are "critical". - (*) the computational efforts in table 2 (appendix B) are not clear. Please clarify what the headings of the table mean. In particular, 1) what does it mean "Flow" and "MCMC" in this context? Is it the number of seconds to train the flow and the number of seconds spent in running the mcmc respectively? But most importantly, what does it mean the "Overall" heading? Because "Overall" is not the sum of "MCMC" and "Flow" so what does it mean? - (*) again in table 2: How is it possible that the number of seconds spent running MCMC is sometimes larger for SNL that for RSNL? RSNL should be more complex to run, due to the extra computations implied by using $\Gamma$. Please clarify. - (*) Table 2: sometimes there is a factor 10 of difference between RSNL and SNL in terms of seconds for the MCMC column, where RSNL is much more expensive (eg for the Misspecified SIR and Toad movement). I do not think you discuss this that much in the paper . Please comment. - page 2: spell out "RBSL" the first time it is mentioned - page 5: the entire paragraph "Wilkinson 2013 reframes (...) in an ABC context", is a repetition of the same paragraph from page 2, so this repetition should be eliminated. - page 5: notation-wise using $\theta_0$ for the pseudo-true parameter is a bit misleading, since you used $P_0$ to denote the true data-generating model, while $\theta_0$ may not be the true parameter. - end of page 6: in the last line "misspecification" should be followed by a full stop. - page 8 just after eq (5), you mentioned that the standardised summaries would be discussed later, but I failed to find such a discussion or explicit calculations for their construction. Please let me know in case I missed this and otherwise integrate. - page 8: in the fourth paragraph you mention "conditional standardisation". What is this? Please clarify. - in Algorithm 1 step 3: change $r \neq 0$ to $r>0$. - (*) in Algorithm 1 step 7: I don't understand why it seems that no summaries of simulated data are taken. After all, the normalising flow is trained on (I believe) summary statistics, as it gets evaluates on S(x). So why it is never explicit in the notation that training data includes summaries rather than raw data? - section 4.1 on page 10: I guess it is missing the specification that the ground truth for theta is theta=1. - section 4.1 on page 10: what is the value of b0 here? - page 22 lines 5-6: "a summary is correctly specified but lies in the tail". Under which circumstances can this happen? Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept as is Comment: The paper has already undergone a revision that incorporated suggestions for improvement by the reviewers, with no outstanding requested changes remaining. Therefore, the paper can be accepted as is. ==================================================
# Understanding Adamw Through Proximal Methods And Scale-Freeness Zhenxun Zhuang zxzhuang@bu.edu Boston University Mingrui Liu *mingruil@gmu.edu* George Mason University Ashok Cutkosky *ashok@cutkosky.com* Boston University Francesco Orabona *francesco@orabona.com* Boston University Reviewed on OpenReview: *https://openreview.net/forum?id=IKhEPWGdwK* ## Abstract Adam has been widely adopted for training deep neural networks due to less hyperparameter tuning and remarkable performance. To improve generalization, Adam is typically used in tandem with a squared ℓ2 regularizer (referred to as Adam-ℓ2). However, even better performance can be obtained with AdamW, which decouples the gradient of the regularizer from the update rule of Adam-ℓ2. Yet, we are still lacking a complete explanation of the advantages of AdamW. In this paper, we tackle this question from both an *optimization* and an *empirical* point of view. First, we show how to re-interpret AdamW as an approximation of a proximal gradient method, which takes advantage of the closed-form proximal mapping of the regularizer instead of only utilizing its gradient information as in Adam-ℓ2. Next, we consider the property of "scale-freeness" enjoyed by AdamW and by its proximal counterpart: their updates are invariant to component-wise rescaling of the gradients. We provide empirical evidence across a wide range of deep learning experiments showing a correlation between the problems in which AdamW exhibits an advantage over Adam-ℓ2 and the degree to which we expect the gradients of the network to exhibit multiple scales, thus motivating the hypothesis that the advantage of AdamW could be due to the scale-free updates. ## 1 Introduction Recent years have seen a surge of interest in applying deep neural networks (LeCun et al., 2015) to a myriad of areas (Krizhevsky et al., 2012; Goodfellow et al., 2014; Vaswani et al., 2017; Wu et al., 2020). While Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) remains the dominant method for optimizing such models, its performance depends crucially on the step size hyperparameter. To alleviate this problem, there has been a significant amount of research on adaptive gradient methods (e.g. Duchi et al., 2010a; McMahan & Streeter, 2010; Tieleman & Hinton, 2012; Zeiler, 2012; Luo et al., 2018; Zhou et al., 2018; Zhang et al., 2018; Li & Orabona, 2019; 2020; Li et al., 2021). These methods provide mechanisms to automatically set stepsizes and have been shown to greatly reduce the tuning effort while maintaining good performance. Among these adaptive algorithms, one of the most widely used is Adam (Kingma & Ba, 2015), which achieves good results across a variety of problems even by simply adopting the default hyperparameter setting. Motivated by its huge successes, there has been much follow-up research addressing the theoretical convergence of Adam and its variants (Reddi et al., 2018; De et al., 2018; Zhou et al., 2018; Wang et al., 2020; Chen et al., 2019). On the other hand, in practice, to improve the generalization ability, Adam is typically combined with a squared ℓ2 regularization, which we will call Adam-ℓ2 hereafter. Yet, as pointed out by Loshchilov & Hutter (2019), the gradient 1 of the regularizer does not interact properly with the Adam update rule. To address this, they provide a method called AdamW that decouples the gradient of the ℓ2 regularization from the update of Adam. The two algorithms are shown in Algorithm 1. Although AdamW is very popular (Kuen et al., 2019; Lifchitz et al., 2019; Carion et al., 2020) and it frequently outperforms Adam-ℓ2, it is currently unclear why it works so well. Recently, however, Bjorck et al. (2021) applied AdamW in Natural Language Processing and Reinforcement Learning problems and found no improvement of performance over sufficiently tuned Adam-ℓ2. In this paper, we focus on understanding how the AdamW update differs from Adam-ℓ2 from an optimization point of view. First, we unveil the surprising connection between AdamW and *proximal updates* (Parikh & Boyd, 2014). In particular, we show that AdamW is an approximation of the latter and confirm such similarity with an empirical study. Moreover, noticing that AdamW and the proximal update are both *scale-free* while Adam-ℓ2 is not, we also derive a theorem showing that scale-free optimizers enjoy an automatic acceleration w.r.t. the condition number on certain cases. This gives AdamW a concrete theoretical advantage in training over Adam-ℓ2. Next, we empirically identify the scenario of training very deep neural networks with Batch Normalization switched off as a case in which AdamW substantially outperforms Adam-ℓ2 in both testing and training. In such settings, we observe that the magnitudes of the coordinates of the updates during training are much more concentrated about a fixed value for AdamW than for Adam-ℓ2, which is an expected property of scale-free algorithms. Further, as depth increases, we expect a greater diversity of gradient scalings, a scenario that should favor scale-free updates. Our experiments support this hypothesis: deeper networks have more dramatic differences between the distributions of update scales between Adam-ℓ2 and AdamW and exhibit larger accuracy advantages for AdamW. To summarize, the contributions of this paper are: 1. We show that AdamW can be seen as an approximation of a proximal update, which utilizes the entire regularizer rather than only its gradient. 2. We point out the scale-freeness property enjoyed by AdamW and show the advantage of such a property on a class of functions. 3. We find a scenario where AdamW is significantly better than Adam-ℓ2 in both training and testing performance and report an empirical observation of the correlation between such advantage and the scale-freeness property of AdamW. The rest of this paper is organized as follows: In Section 2 we discuss the relevant literature. The connection between AdamW and the proximal updates as well as its scale-freeness are explained in Section 3. We then report the empirical observations in Section 4. Finally, we conclude with a discussion of the results, some limitations of this work, and future directions. ## 2 Related Work Weight decay By biasing the optimization towards solutions with small norms, weight decay has long been a standard technique to improve the generalization ability in machine learning (Krogh & Hertz, 1992; Bos & Chug, 1996) and is still widely employed in training modern deep neural networks (Devlin et al., 2019; Tan & Le, 2019). Note that here we do not attempt to explain the generalization ability of weight decay or AdamW. Rather, we assume that the regularization and the topology of the network guarantee good generalization performance and study training algorithms from an optimization point of view. In this view, we are not aware of other work on the influence of regularization on the optimization process. Proximal updates The use of proximal updates in the batch optimization literature dates back at least to 1965 (Moreau, 1965; Martinet, 1970; Rockafellar, 1976; Parikh & Boyd, 2014) and they were used in the online setting (Kivinen & Warmuth, 1997; Campolongo & Orabona, 2020), and also in the stochastic one (Toulis & Airoldi, 2017; Asi & Duchi, 2019). We are not aware of any previous paper pointing out the connection between AdamW and proximal updates. Scale-free algorithms The scale-free property was first proposed in the online learning field (Cesa-Bianchi et al., 2007; Orabona & Pál, 2015; Orabona & Pál, 2018). There, they do not need to know a priori the Lipschitz constant of the functions, while obtaining optimal convergence rates. To the best of our knowledge, the connection between Algorithm 1 Adam with ℓ2 regularization (Adam-ℓ2) and AdamW Loshchilov & Hutter (2017). All operations on vectors are element-wise. 1: **Given** α, β1, β2, ϵ, λ ∈ R, lr schedule {ηt}t≥0. 2: **Initialize:** x0 ∈ R d, m0 ← 0, v0 ← 0 3: for t = 1, 2*, . . . , T* do 4: Compute a stochastic evaluation of the true gradient ∇f(xt−1) denoted as ∇ft(xt−1) 5: gt ← ∇ft(xt−1) +λxt−1 6: mt ← β1mt−1 + (1 − β1)gt , vt ← β2vt−1 + (1 − β2)g 2 t 7: mˆ t ← mt/(1 − β t 1 ), vˆt ← vt/(1 − β t 2 ) 8: xt ← xt−1 −ηtλxt−1 −ηtαmˆ t/( √vˆt + ϵ) 9: **end for** scale-freeness and the condition number we explain in Section 3 is novel, as is the empirical correlation between scale-freeness and good performance. Removing Batch Normalization (BN) The setting of removing BN is not our invention: indeed, there is already active research in this (De & Smith, 2020; Zhang et al., 2019). The reason is that BN has many disadvantages (Brock et al., 2021) including added memory overhead (Bulò et al., 2018) and training time (Gitman & Ginsburg, 2017), and a discrepancy between training and inferencing (Singh & Shrivastava, 2019). BN has also been found to be unsuitable for many cases including sequential modeling tasks (Ba et al., 2016) and contrastive learning algorithms (Chen et al., 2020). Also, there are SOTA architectures that do not use BN including the Vision transformer (Dosovitskiy et al., 2021) and the BERT model (Devlin et al., 2019). ## 3 Theoretical Insights On Merits Of Adamw AdamW and Proximal Updates Here, we show that AdamW approximates a proximal algorithm (Moreau, 1965; Parikh & Boyd, 2014). A proximal algorithm is an algorithm for solving a convex optimization problem that uses the proximal operators of the objective function. The *proximal operator* proxh : R d → R d of a convex function h is defined for any y ∈ R das proxh (y) = arg minx∈Rd (h(x) + 1/2∥x − y∥ 2 2 ). Consider that we want to minimize the objective function F(x) = λ 2 ∥x∥ 2 2 + f(x), (1) where λ > 0 and f(x) : R d → R is a function bounded from below. We could use a stochastic optimization algorithm that updates in the following fashion xt = xt−1 − ηtpt , (2) where ηt is a learning rate schedule, e.g., the constant one or the cosine annealing (Loshchilov & Hutter, 2017) and pt denotes any update direction. This update covers many cases, where α denotes the initial step size: 1. pt = αgt gives us the vanilla SGD; 2. pt = αgt/( qPt i=1 g 2 i + ϵ) gives the AdaGrad algorithm (Duchi et al., 2010a); 3. pt = αmˆ t/( √vˆt + ϵ) recovers Adam (Kingma & Ba, 2015), where mˆ t denotes the bias corrected first moment of past gradients and vˆt denotes the bias corrected second moment of past gradients as updated in Line 6-7 in Algorithm 1. Note that in the above we use gt to denote the stochastic gradient of the entire objective function: gt = ∇ft(xt−1) + λxt−1 (λ = 0 if the regularizer is not present), where ∇ft(xt−1) is a stochastic evaluation of the true gradient ∇f(xt−1). This update rule (2) is given by the following online mirror descent update (Nemirovsky & Yudin, 1983; Warmuth & Jagota, 1997; Beck & Teboulle, 2003): xt = argmin x∈Rd λ 2 ∥xt−1∥ $$\left.-_{1}\|_{2}^{2}+f(\mathbf{x}_{t-1})+\mathbf{p}_{t}^{\top}(\mathbf{x}-\mathbf{x}_{t-1})+{\frac{1}{2\eta_{t}}}\|\mathbf{x}-\mathbf{x}_{t-1}\|_{2}^{2}\right..$$ .(3) $\eqref{eq:walpha}$. | | Update (2) | Update (5) | | |--------------|-----------------------------|--------------|----| | gt | ∇ft(xt−1) + λxt−1 | ∇ft(xt−1) | | | pt = αgt | xt = xt−1 − αηtgt | xt = | 1 | | | 1+ληt (xt−1 − αηtgt )  | | | | pt = αq | gt | | | | Pt | xt = xt−1 − αηt q | gt | | | | Pt | xt = | 1 | | 2 | 2 | 1+ληt | | | g i +ϵ | g i +ϵ | | | | i=1 | i=1 |  | | | | xt−1 − αηt q | gt | | | | Pt | 2 g i +ϵ | | | | i=1 |  | | | pt = α √mˆ t | xt−1 − αηt √mˆ t | | | | vˆt+ϵ | xt = xt−1 − αηt √mˆ t vˆt+ϵ | xt = | 1 | | | 1+ληt | vˆt+ϵ | | Table 1: Comparison of Update (2) and (5) for different pt , where mˆ t and vˆt are defined in Line 7 in Algorithm 1. This approximates minimizing a first-order Taylor approximation of F centered in xt−1 plus a term that measures the distance between the xt and xt−1 according to the ℓ2 norm. The approximation becomes exact when pt = ∇f(xt−1) + λxt−1. Yet, this is not the only way to construct first-order updates for the objective (1). An alternative route is to linearize only f and to keep the squared ℓ2 norm in its functional form: $\mathbf{x}_{t}=\underset{\mathbf{x}\in\mathbb{R}^{d}}{\operatorname{argmin}}\ \frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}+f(\mathbf{x}_{t-1})+\mathbf{p}_{t}^{\top}(\mathbf{x}-\mathbf{x}_{t-1})+\frac{1}{2\eta_{t}}\|\mathbf{x}-\mathbf{x}_{t-1}\|_{2}^{2}=\max_{\mathbf{x}_{t\geq t}\ \|\ \|_{2}^{2}}(\mathbf{x}_{t-1}-\eta_{t}\mathbf{p}_{t})$, ),(4) which uses the proximal operator of the convex function ληt 2 ∥ · ∥22 . It is intuitive why this would be a better update: We directly minimize the squared ℓ2 *norm instead of approximating it.* We also would like to note that, similar to (3), the proximal updates of (4) can be shown to minimize the objective F under appropriate conditions. However, we do not include the convergence analysis of (4) as this is already well-studied in the literature. For example, when pt = ∇f(xt−1) in (4) and f is convex and smooth, the update becomes a version of the (non-accelerated) iterative shrinkage-thresholding algorithm. This algorithm guarantees F(xt) − F ∗ ≤ O(1/t), which is in the same order as obtained by gradient descent on minimizing f alone (Beck & Teboulle, 2009). From the first-order optimality condition, the update is $$\cdot)$$ $$\mathbf{x}_{t}=(1+\lambda\eta_{t})^{-1}(\mathbf{x}_{t-1}-\eta_{t}\mathbf{p}_{t})\ .$$ $$({\boldsymbol{5}})$$ $$(6)$$ ) . (5) When λ = 0, the update in (2) and this one coincide. Yet, when λ ̸= 0, they are no longer the same. For easier comparison between (2) and (5), we listed in Table 1 the detailed update formulas of them. We now show how the update in (5) generalizes the one in AdamW. The update of AdamW is $$\mathbf{x}_{t}=(1-\lambda\eta_{t})\mathbf{x}_{t-1}-\eta_{t}\alpha\hat{\mathbf{m}}_{t}/(\sqrt{\hat{\mathbf{v}}_{t}}+\epsilon)\;.$$ xt = (1 − ληt)xt−1 − ηtαmˆ t/(pvˆt + ϵ) . (6) On the other hand, using $\mathbf{p}_t=\alpha\hat{\mathbf{m}}_t$. $${\hat{\mathbf{n}}}_{t}/({\sqrt{{\hat{\mathbf{v}}}_{t}}}+\epsilon){\mathrm{~in~}}({\mathsf{5}}){\mathrm{~gives:}}$$ $$\mathbf{x}_{t}=(1+\lambda\eta_{t})^{-1}(\mathbf{x}_{t-1}-\eta_{t}\alpha\hat{\mathbf{m}}_{t}/(\sqrt{\hat{\mathbf{v}}_{t}}+\epsilon)),$$ −1(xt−1 − ηtαmˆ t/(pvˆt + ϵ)), (7) Its first-order Taylor approximation around ηt = 0 is $\mathrm{U}$ Is? $$\mathbf{x}_{t}\approx(1-\lambda\eta_{t})\mathbf{x}_{t-1}-\eta_{t}\alpha\hat{\mathbf{m}}_{t}/(\sqrt{\hat{\mathbf{v}}_{t}}+\epsilon),$$ exactly the AdamW update (6). Hence, AdamW is a first-order approximation of a proximal update. The careful reader might notice that the approximation from AdamW to the update in (7) becomes less accurate when ηt becomes too large, and so be concerned whether this approximation is practical at all. Fortunately, in practice, ηt is never large enough for this to be an issue. The remainder term of this approximation is O(λη2 t ) which we should always expect to be small as both λ and ηt are small. So, we can expect AdamW and the update in (7) to perform similarly for learning rate schedules ηt commonly employed in practice, and we will indeed confirm this empirically in Section 4.3. Let's now derive the consequences of this connection with proximal updates. First of all, at least in the convex case, the convergence rate of the proximal updates will depend on ∥∇f(xt)∥ 2 2 rather than on ∥∇f(xt) + λxt∥ 2 2 (Duchi et al., $$\left(7\right)$$ 2010b). This could be a significant improvement: the regularized loss function is never Lipschitz, so the regularized gradients ∇f(xt) + λxt could be much larger than ∇f(xt) when f itself is Lipschitz. More importantly, proximal updates are fundamentally better at keeping the weights small. Let us consider a couple of simple examples to see how this could be. First, suppose the weights are *already zero*. Then, when taking an update according to (2), we increase the weights to −ηtpt. In contrast, update (5) clearly leads to a smaller value. This is because it computes an update using the regularizer rather than its gradient. As an even more disturbing, yet actually more realistic example, consider the case that xt−1 is non-zero, but gt = 0. In this case, taking an update using (2) may actually *increase* the weights by causing xt to *overshoot* the origin. In contrast, the proximal update will never demonstrate such pathological behavior. Notice that this pathological behavior of (2) can be mitigated by properly tuning the learning rate. However, one of the main attractions of adaptive optimizers is that we should not need to tune the learning rate as much. Thus, *the proximal update can be viewed as augmenting the adaptive methods with an even* greater degree of learning-rate robustness. AdamW is Scale-Free We have discussed what advantages the proximal step hidden in AdamW can give but have not yet taken into consideration the specific shape of the update. Here instead we will look closely at the pt used in AdamW to show its *scale-freeness*. Our main claim is: the lack of scale-freeness seems to harm Adam-ℓ2*'s performance* in certain scenarios in deep learning, while AdamW preserves the scale-freeness even with an ℓ2 *regularizer*. We will motivate this claim theoretically in this section and empirically in Section 4. An optimization algorithm is said to be *scale-free* if its iterates do not change when one multiplies any coordinate of all the gradients of the losses ft by a positive constant (Orabona & Pál, 2018). It turns out that the update (6) of AdamW and the update (7) are both scale-free when ϵ = 0. This is evident for AdamW as the scaling factor for any coordinate of the gradient is kept in both mˆt and √vˆt and will be canceled out when dividing them. *(In practical applications,* though, ϵ is very small but not zero, so we empirically verify in Section 4.2 that it is small enough to still approximately ensure the scale-free property.) In contrast, for Adam-ℓ2, the addition of the weight decay vector to the gradient (Line 5 of Algorithm 1) destroys this property. We want to emphasize the comparison between Adam-ℓ2 and AdamW: once Adam-ℓ2 adopts a non-zero λ, it loses the scale-freeness property; in contrast, AdamW enjoys this property for arbitrary λ. The same applies to any AdaGrad-type and Adam-type algorithm that incorporates the squared ℓ2 regularizer by simply adding the gradient of the ℓ2 regularizer directly to the gradient of f, as in Adam-ℓ2 (as implemented in Tensorflow and Pytorch). Such algorithms are scale-free only when they do not use weight decay. Also, as we wrote above, AdamW can be seen as the first-order Taylor approximation on ηt = 0 of (7); in turn, the scale-freeness of (7) directly comes from the proximal updates. Of course, there may be other ways to design scale-free updates solving (1); yet, for AdamW, its scale-free property derives directly from the proximal update. We stress that the scale-freeness is an important but largely overlooked property of an optimization algorithm. It has already been utilized to explain the success of AdaGrad (Orabona & Pál, 2018). Recently, Agarwal et al. (2020) also provides theoretical and empirical support for setting the ϵ in the denominator of AdaGrad to be 0, thus making the update scale-free. Below, we show how scale-freeness can reduce the condition number of a certain class of functions. Scale-Freeness Provides Preconditioning For a twice continuously differentiable function f, its Hessian matrix is symmetric and its *condition number* κ is defined as the ratio of its largest absolute value eigenvalue to its smallest one. It is well-known that the best convergence rate when minimizing such f using a first-order optimization algorithm (e.g., gradient descent) must depend on the condition number (Theorem 2.1.13, Nesterov, 2004). In particular, a problem with a small κ can be solved more efficiently than one with a big κ. One way to reduce the effect of the condition number is to use a *preconditioner* (Nocedal & Wright, 2006). While originally designed for solving systems of linear equations, preconditioning can be extended to the optimization of non-linear functions and it should depend on the Hessian of the function (Boyd & Vandenberghe, 2004; Li, 2018). However, it is unclear how to set the preconditioner given that the Hessian might not be constant (Section 9.4.4 Boyd & Vandenberghe, 2004) and in stochastic optimization the Hessian cannot be easily estimated (Li, 2018). In the following theorem, we show that scale-freeness gives similar advantages to the use of an optimal diagonal preconditioner, *for free* (proof in the Appendix). Specifically, a scale-free algorithm can automatically transform solving ![5_image_0.png](5_image_0.png) Figure 1: Non-scale-free GD v.s. scale-free AdamW on quadratic functions with different condition numbers. the original problem into solving a problem with a potentially much smaller κ and thus could provide substantial improvements over non-scale-free ones, as shown in Figure 1 in the paper. Theorem 3.1. Let f *be a twice continuously differentiable function and* x ∗*such that* ∇f(x ∗) = 0*. Then, let* ˜fΛ be the family of functions such that ∇ ˜fΛ(x ∗) = 0*, and* ∇2 ˜fΛ(x) = Λ∇2f(x)*, where* Λ = diag(λ1, . . . , λd) ⪰ 0. Then, running any scale-free optimization algorithm on f and ˜fΛ will result exactly in the same iterates, assuming the same noise on the gradients. Moreover, any dependency on the condition number of the scale-free algorithm will be reduced to the smallest condition number among all the functions ˜fΛ. To give an example of when this is advantageous, consider when ∇2f(x) is a diagonal matrix: $$\nabla^{2}f$$ ∇2f(x) = diag(g1(x), g2(x)*, . . . , g*d(x)) . Assume 0 < µ ≤ µi ≤ gi(x) ≤ Mi ≤ M for i ∈ {1*, . . . , d*}. Denote j = arg maxi Mi/µi. Choose λi s.t. µj ≤ λiµi ≤ λigi(x) ≤ λiMi ≤ Mj then Λ∇2f(x) has a condition number κ ′ = Mj/µj . This gives scale-free algorithms a big advantage when maxi Mi/µi ≪ M/µ. Another example is one of the quadratic functions. Corollary 3.2. *For quadratic problems* f(x) = 12 x ⊤Hx + b ⊤x + c, with H *diagonal and positive definite, any* scale-free algorithm will not differentiate between minimizing f and ˜f(x) = 12 x ⊤x+ (H−1b) ⊤x+c. As the condition number of ˜f is 1, the operation, and most importantly, the convergence, of a scale-free algorithm will not be affected by the condition number of f *at all.* Figure 1 illustrates Corollary 3.2: we compare GD (non-scale-free) with AdamW (scale-free) on optimizing two quadratic functions with the same minimizer, but one's Hessian matrix being a rescaled version of the other's, resulting in different condition numbers. The figure clearly shows that, even after tuning the learning rates, the updates of AdamW (starting from the same point) and thus its convergence to the minimizer, is completely unaffected by the condition number, while GD's updates change drastically and its performance deteriorates significantly when the condition number is large. It is not hard to imagine that such poor training performance would likely also lead to poor testing performance. This can also explain AdaGrad's improvements over SGD in certain scenarios. As an additional example, in Appendix B we analyze a variant of AdaGrad with restarts and show an improved convergence on strongly convex functions due to scale-freeness. Note that the folklore justification for such improvements is that the learning rate of AdaGrad approximates the inverse of the Hessian matrix, but this is incorrect: AdaGrad does not compute Hessians and there is no reason to believe it approximates them in general. More importantly, another scenario demonstrating the advantage of scale-freeness is training deep neural networks. Neural networks are known to suffer from the notorious problem of vanishing/exploding gradients (Bengio et al., 1994; Glorot & Bengio, 2010; Pascanu et al., 2013). This problem leads to the gradient scales being very different across layers, especially between the first and the last layers. The problem is particularly severe when the model is not equipped with normalization mechanisms like Batch Normalization (Ioffe & Szegedy, 2015). In such cases, when using a non-scale-free optimization algorithm (e.g., SGD), the first layers and the last layers will proceed at very different speeds, whereas a scale-free algorithm ensures that each layer is updated at a similar pace. We will investigate these effects empirically in the next section. ## 4 Deep Learning Empirical Evaluation In this section, we empirically compare Adam-ℓ2 with AdamW. First (Section 4.1), we report experiments for deep neural networks on image classification tasks (CIFAR10/100). Here, AdamW enjoys a significant advantage over Adam-ℓ2 when BN is switched off on deeper neural networks. We also report the correlation between this advantage and the scale-freeness property of AdamW. Next (Section 4.2), we show that AdamW is still almost scale-free even when the ϵ used in practice is not 0, and how, contrary to AdamW, Adam-ℓ2 is not scale-free. Finally (Section 4.3), we show that AdamW performs similarly to the update in (7), which we will denote by AdamProx below, thus supporting the observations in Section 3. Data Normalization and Augmentation: We consider the image classification task on CIFAR-10/100 datasets. Images are normalized per channel using the means and standard deviations computed from all training images. We adopt the data augmentation technique following Lee et al. (2015) (for training only): 4 pixels are padded on each side of an image and a 32 × 32 crop is randomly sampled from the padded image or its horizontal flip. Models: For the CIFAR-10 dataset, we employ the Residual Network1 model (He et al., 2016) of 20/44/56/110/218 layers; and for CIFAR-100, we additionally utilize the DenseNet-BC2 model (Huang et al., 2017) with 100 layers and a growth-rate of 12. The loss is the cross-entropy loss. Hyperparameter tuning: For both Adam-ℓ2 and AdamW, we set β1 = 0.9, β2 = 0.999, ϵ = 10−8as suggested in the original Adam paper Kingma & Ba (2015). To set the initial step size α and weight decay parameter λ, we grid search over {0.00005, 0.0001, 0.0005, 0.001, 0.005} for α and {0, 0.00001, 0.00005, 0.0001, 0.0005, 0.001} for λ. Whenever the best performing hyperparameters lie in the boundary of the searching grid, we always extend the grid to ensure that the final best-performing hyperparameters fall into the interior of the grid. Training: For each experiment configuration (e.g., 110-layer Resnet without BN), we randomly select an initialization of the model to use as a fixed starting point for all optimizers and hyperparameter settings. We use a mini-batch of 128, and train 300 epochs unless otherwise specified. ## 4.1 Adamw Vs. Adam-ℓ2**: Influence Of Batch Normalization And Correlation With Scale-Freeness** With BN, Adam-ℓ2 **is on par with AdamW** Recently, Bjorck et al. (2021) found that AdamW has no improvement in absolute performance over sufficiently tuned Adam-ℓ2 in some reinforcement learning experiments. We also discover the same phenomenon in several image classification tasks, see Figure 2. Indeed, the best weight decay parameter is 0 for all cases and AdamW coincides with Adam-ℓ2 in these cases. Nevertheless, AdamW does decouple the optimal choice of the weight decay parameter from the initial step size much better than Adam-ℓ2 in all cases. Removing BN Notice that the models used in Figure 2 all employ BN. BN works by normalizing the input to each layer across the mini-batch to make each coordinate have zero-mean and unit-variance. Without BN, deep neural networks are known to suffer from gradient explosion and vanishing (Schoenholz et al., 2017). This means each coordinate of the gradient will have very different scales, especially between the first and last layers. For non-scale-free algorithms, the update to the network weights will also be affected and each coordinate will proceed at a different pace. In contrast, scale-free optimizers are robust to such issues as the scaling of any single coordinate will not affect the update. Thus, we consider the case where BN is removed as that is where AdamW and Adam-ℓ2 will show very different patterns due to scale-freeness. Without BN, AdamW Outperforms Adam-ℓ2 In fact, without BN, AdamW outperforms Adam-ℓ2 even when both are finely tuned, especially on relatively deep neural networks (see Figure 3 and 4). AdamW not only obtains a much better test accuracy but also trains much faster. 1https://github.com/akamaster/pytorch_resnet_cifar10 2https://github.com/bearpaw/pytorch-classification ![7_image_0.png](7_image_0.png) Figure 2: The final Top-1 test error on using AdamW vs. Adam-d on training a Resnet/DenseNet with Batch Normalization on CIFAR10/100 (the black circle denotes the best setting). Note how close are the best performing hyperparameter combinations and the lowest testing error each optimizer obtains between Adam-12 and AdamW for each setting suggesting they perform similarly when BN is turned on. AdamW's Advantage and Scale-freeness We also observe that the advantage of AdamW becomes more evident as the network becomes deeper. Recall that as the depth grows, without BN, the gradient explosion and vanishing problem becomes more severe. This means that for the non-scale-free Adam-12, the updates of each coordinate will be dispersed on a wider range of scales even when the same weight decay parameter is employed. In contrast, the scales of the updates of AdamW will be much more concentrated in a smaller range. This is exactly verified empirically as illustrated in the 5th & 6th columns of figures in Figure 3 and 4. There, we report the histograms of the absolute value of updates of Adam-62 vs. AdamW of all coordinates near the end of training (for their comparison over the whole training process please refer to the Appendix C). This correlation between the advantage of AdamW over Adam-12 and the different spread of update scales which is induced by the scale-freeness property of AdamW provides empirical evidence on when AdamW excels over Adam-12. SGD and Scale-freeness The reader might wonder why SGD is known to provide state-of-the-art performance on many deep learning architectures (e.g., He et al., 2016; Huang et al., 2017) without being scale-free. At first blush, this seems to contradict our claims that scale-freeness correlates with good performance. In reality, the good performance of SGD in very deep models is linked to the use of BN that normalizes the gradients. Indeed, we verified empirically that SGD fails spectacularly when BN is not used. For example, on training the 110 layer Resnet without BN using SGD with momentum and weight decay of 0.0001, even a learning rate of 1e - 10 will lead to divergence. ## Verifying Scale-Freeness 4.2 In the previous section, we elaborated on the scale-freeness property of AdamW and its correlation with AdamW's advantage over Adam-42. However, one may notice that in practice, the e factor in the AdamW update is typically small but not 0, in our case 1e-8, thus preventing it from completely scale-free. In this section, we verify that the effect of such an e on the scale-freeness is negligible. ![8_image_0.png](8_image_0.png) Figure 3: On using AdamW vs. Adam-ℓ2 on training a Resnet without Batch Normalization on CIFAR10. (Left two) The final Top-1 test error (*the black circle denotes the best setting*). (Middle two) The training loss and test accuracy curve when employing the initial step size and the weight decay parameter that gives the smallest test error. (Right two) The histogram of the magnitude of corresponding updates of all coordinates of the network near the end of the training when employing the initial step size and the weight decay parameter that gives the smallest test error. Note that as the depth of the neural network increases, Adam-ℓ2's updates scatter more evenly over the entire spectrum while AdamW's updates are still concentrated in a small range, and AdamW's advantage in both training and testing over Adam-ℓ2 becomes more significant. As a simple empirical verification of the scale-freeness, we consider the scenario where we multiply the loss function by a positive number. Note that any other method to test scale-freeness would be equally good. For a feed-forward neural network without BN, this means the gradient would also be scaled up by that factor. In this case, the updates of a scale-free optimization algorithm would remain exactly the same, whereas they would change for an optimization algorithm that is not scale-free. Figure 5 shows results of the loss function being multiplied by 10 and 100 respectively on optimizing a 110-layer Resnet with BN *removed*. For results of the original loss see Figure 3d. We can see that AdamW has almost the same performance across the range of initial learning rates and weight decay parameters, and most importantly, the best values of these two hyperparameters remain the same. This verifies that, even when employing a (small) non-zero ϵ, AdamW is still approximately scale-free. In contrast, Adam-ℓ2 is not scale-free and we can see that its behavior varies drastically with the best initial learning rates and weight decay parameters in each setting totally different. ![9_image_0.png](9_image_0.png) Figure 4: On using AdamW vs. Adam-ℓ2 on training a Resnet/DenseNet without Batch Normalization on CIFAR100. (Left two) The final Top-1 test error (*the black circle denotes the best setting*). (Middle two) The training loss and test accuracy curve when employing the initial step size and the weight decay parameter that gives the smallest test error. (Right two) The histogram of the magnitude of corresponding updates of all coordinates of the network near the end of the training when employing the initial step size and the weight decay parameter that gives the smallest test error. Note that as the depth of the neural network increases, Adam-ℓ2's updates scatter more evenly over the entire spectrum while AdamW's updates are still concentrated in a small range, and AdamW's advantage in both training and testing over Adam-ℓ2 becomes more significant. ## 4.3 Adamw And Adamprox Behave Very Similarly In Section 3, we showed theoretically that AdamW is the first order Taylor approximation of AdamProx (update rule (7)). Beyond this theoretical argument, here we verify empirically that the approximation is good. In Figure 6, we consider the case when ηt = 1 for all t - a relatively large constant learning rate schedule. In such cases, AdamW and AdamProx still behave very similarly. This suggests that for most learning rate schedules, e.g., cosine, exponential, polynomial, and step decay, which all monotonously decrease from η0 = 1, AdamProx will remain a very good approximation to AdamW. Thus, it is reasonable to use the more classically-linked AdamProx to try to understand AdamW. ![10_image_0.png](10_image_0.png) Figure 5: The final top-1 test error of AdamW vs. Adam-ℓ2 on optimizing a 110-layer Resnet with BN *removed* on CIFAR-10 with the loss function multiplied by 10 (left two figures) and 100 (right two figures). Note how the best performing hyperparameter combinations of AdamW remain the same for different loss multiplication factors as well as the shape of the heatmap being very similar. In contrast, Adam-ℓ2's performance as well as the best performing hyperparameter combinations vary dramatically for different loss multiplication factors. ![10_image_1.png](10_image_1.png) Figure 6: The final Top-1 test error of using AdamW vs. AdamProx on training (*the black circle denotes the best* setting). (Top row) a 110-layer ResNet with BN *removed* on CIFAR-10 (trained for 300 epochs). (Bottom row) a 100-layer DenseNet-BC with BN *removed* on CIFAR-100 (trained for 100 epochs). Note how similar are the shapes of the heatmaps, the best performing hyperparameter combinations, and the test errors between AdamW and AdamProx. ## 5 Conclusion And Future Work In this paper, we provide insights for understanding the merits of AdamW from two points of view. We first show that AdamW is an approximation of the proximal updates both theoretically and empirically. We then identify the setting of training very deep neural networks without batch normalization in which AdamW substantially outperforms Adam-ℓ2 in both training and testing and show its correlation with the scale-freeness property of AdamW. Nevertheless, we are aware of some limitations of this work as well as many directions worth exploring. Limitations First, we only focus on investigating the effects of scale-freeness on Adam-ℓ2 and AdamW, but it would be interesting to study scalefreeness more generally. Also, what we showed is just a correlation instead of causality thus we did not rule out other possible causes beyond scalefreeness for the success of AdamW. Indeed, rigorously proving causality for any such claim is extremely difficult - even in the hard sciences. Note that there are papers claiming that adaptive updates have worse generalization (Wilson et al., 2017); however, such claims have been recently partly confuted (see, e.g., Agarwal et al., 2020). On this note, despite its empirical success, we stress that Adam will not even converge on some convex functions (Reddi et al., 2018), thus making it hard to prove formal theoretical convergence and/or generalization guarantees. Figure 7: Final Top-1 test error of using AdamW vs. AdamProxL2 ![10_image_2.png](10_image_2.png) to train a 110-layer ResNet *without* BN on CIFAR10 (the black circle denotes the best setting). Update for no-square ℓ2 **regularization.** Instead of using the squared ℓ2 regularization, we might think to use the ℓ2 regularization, that is without the square. This is known to have better statistical properties than the squared ℓ2 (see, e.g., Orabona, 2014), but it is not smooth so it is harder to be optimized. However, with proximal updates, we don't have to worry about its non-smoothness. Hence, we can consider the objective function F(x) = λ∥x∥2 + f(x). The corresponding prox-SGD update was derived in Duchi & Singer (2009) for scalar learning rates and it is easy to generalize to our setting as xt+1 = max 1 −ληt ∥xt−ηtpt∥ , 0 (xt − ηtpt ) . Its performance, named AdamProxL2, as shown in Figure 7, can be on a par with AdamW. Distributed Training Batch normalization is not user-friendly in distributed training as it requires each machine to collect a batch of statistics to update the model which may be inaccurate when machines do not communicate frequently with each other Goyal et al. (2017). Since AdamW outperforms Adam-ℓ2 significantly in settings without BN, at least in feed-forward neural networks, we can apply AdamW in distributed training to see if it still enjoys the same merits. ## Broader Impact Statement The main contribution of this paper is the study of a known optimization algorithm AdamW from the theoretical angles of proximal updates and scale-freeness, while the experiments are done to empirically validate and support the theoretical findings. It is a general algorithm and we do not specify in which applications should it be employed, thus we do not foresee any direct negative societal impact our work might cause. ## Acknowledgments Francesco Orabona is supported by the National Science Foundation under the grants no. 1908111 "AF: Small: Collaborative Research: New Representations for Learning Algorithms and Secure Computation", no. 2022446 "Foundations of Data Science Institute", and no. 2046096 "CAREER: Parameter-free Optimization Algorithms for Machine Learning". Mingrui Liu is supported in part by a grant from George Mason University. ## References N. Agarwal, R. Anil, E. Hazan, T. Koren, and C. Zhang. Disentangling adaptive gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020. H. Asi and J. C. Duchi. Stochastic (approximate) proximal point methods: Convergence, optimality, and adaptivity. SIAM Journal on Optimization, 29(3):2257–2290, 2019. J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016. A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003. A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. *SIAM journal* on imaging sciences, 2(1):183–202, 2009. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. *IEEE* transactions on neural networks, 5(2):157–166, 1994. J. Bjorck, K. Q. Weinberger, and C. Gomes. Understanding decoupled and early weight decay. *Proceedings of the* AAAI Conference on Artificial Intelligence, 35(8):6777–6785, 2021. S. Bos and E. Chug. Using weight decay to optimize the generalization ability of a perceptron. *Proceedings of* International Conference on Neural Networks (ICNN'96), 1:241–246 vol.1, 1996. S. Boyd and L. Vandenberghe. *Convex optimization*. Cambridge University Press, New York, NY, USA, 2004. ISBN 0521833787. A. Brock, S. De, S. L. Smith, and K. Simonyan. High-performance large-scale image recognition without normalization. In *Proc. of the International Conference on Machine Learning (ICML)*, pp. 1059–1071, 2021. S. R. Bulò, L. Porzi, and P. Kontschieder. In-place activated BatchNorm for memory-optimized training of DNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5639–5647, 2018. N. Campolongo and F. Orabona. Temporal variability in implicit online learning. In *Advances in Neural Information* Processing Systems, volume 33. Curran Associates, Inc., 2020. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end object detection with Transformers. In *European Conference on Computer Vision*, pp. 213–229. Springer, 2020. N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2):321–352, 2007. T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In Hal Daumé, III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1597–1607. PMLR, 13–18 Jul 2020. X. Chen, S. Liu, R. Sun, and M. Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. In *7th International Conference on Learning Representations, ICLR 2019*, 2019. S. De and S. Smith. Batch normalization biases residual blocks towards the identity function in deep networks. *Advances* in Neural Information Processing Systems, 33, 2020. S. De, A. Mukherjee, and E. Ullah. Convergence guarantees for RMSProp and Adam in non-convex optimization and an empirical comparison to nesterov acceleration. *arXiv preprint arXiv:1807.06766*, 2018. J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, 2019. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *9th International Conference on Learning Representations, ICLR*, 2021. J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. *The Journal of Machine* Learning Research, 10:2899–2934, 2009. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. In COLT, 2010a. J. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In *COLT*, pp. 14–26, 2010b. I. Gitman and B. Ginsburg. Comparison of batch normalization and weight normalization algorithms for the large-scale image classification. *arXiv preprint arXiv:1709.08145*, 2017. X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proc. of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. *Advances in neural information processing systems*, 27, 2014. P. Goyal, P. Dollár, R. B. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. *ArXiv*, abs/1706.02677, 2017. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In *Proc. of the IEEE conference on* computer vision and pattern recognition, pp. 770–778, 2016. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017. S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. PMLR, 2015. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, January 1997. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. A. Krogh and J. Hertz. A simple weight decay can improve generalization. In *Advances in neural information processing* systems, pp. 950–957, 1992. J. Kuen, F. Perazzi, Z. Lin, J. Zhang, and Y.-P. Tan. Scaling object detection by transferring classification weights. In Proc. of the IEEE/CVF International Conference on Computer Vision, pp. 6044–6053, 2019. Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. *Nature*, 521(7553):436–444, 2015. C. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Proc. of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38, pp. 562–570. PMLR, 2015. X. Li and F. Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In *Proc. of the 22nd* International Conference on Artificial Intelligence and Statistics, AISTATS, 2019. X. Li and F. Orabona. A high probability analysis of adaptive SGD with momentum. In *ICML 2020 Workshop on* Beyond First Order Methods in ML Systems, 2020. X. Li, Z. Zhuang, and F. Orabona. A second look at exponential and cosine step sizes: Simplicity, adaptivity, and performance. In *International Conference on Machine Learning*, pp. 6553–6564. PMLR, 2021. X.-L. Li. Preconditioned stochastic gradient descent. *IEEE Transactions on Neural Networks and Learning Systems*, 29 (5):1454–1466, 2018. Y. Lifchitz, Y. Avrithis, S. Picard, and A. Bursuc. Dense classification and implanting for few-shot learning. In *Proc. of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9258–9267, 2019. I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In *International Conference on Learning* Representations, 2019. L. Luo, Y. Xiong, Y. Liu, and X. Sun. Adaptive gradient methods with dynamic bound of learning rate. In International Conference on Learning Representations, 2018. B. Martinet. Régularisation d'inéquations variationnelles par approximations successives. rev. française informat. Recherche Opérationnelle, 4:154–158, 1970. H. B. McMahan and M. J. Streeter. Adaptive bound optimization for online convex optimization. In *COLT*, 2010. J.-J. Moreau. Proximité et dualité dans un espace hilbertien. *Bulletin de la Société mathématique de France*, 93: 273–299, 1965. A. S. Nemirovsky and D. Yudin. *Problem complexity and method efficiency in optimization*. Wiley, New York, NY, USA, 1983. Y. Nesterov. *Introductory lectures on convex optimization: A basic course*, volume 87. Springer, 2004. J. Nocedal and S. J. Wright. *Numerical Optimization*. Springer Series in Operations Research and Financial Engineering. Springer, New York, 2006. F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In *Advances in* Neural Information Processing Systems 27, 2014. F. Orabona and D. Pál. Scale-free algorithms for online linear optimization. In International Conference on Algorithmic Learning Theory, pp. 287–301. Springer, 2015. F. Orabona and D. Pál. Scale-free online learning. *Theoretical Computer Science*, 716:50–69, 2018. Special Issue on ALT 2015. N. Parikh and S. Boyd. Proximal algorithms. *Foundations and Trends in optimization*, 1(3):127–239, 2014. R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310–1318. PMLR, 2013. S. J. Reddi, S. Kale, and S. Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018. H. Robbins and S. Monro. A stochastic approximation method. *Annals of Mathematical Statistics*, 22:400–407, 1951. R. T. Rockafellar. Monotone operators and the proximal point algorithm. *SIAM journal on control and optimization*, 14 (5):877–898, 1976. S. Schoenholz, J. Gilmer, S. Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In *International* Conference on Learning Representations (ICLR), 2017. S. Singh and A. Shrivastava. EvalNorm: Estimating batch normalization statistics for evaluation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3633–3641, 2019. M. Tan and Q. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In *Proceedings of the* 36th International Conference on Machine Learning, volume 97, pp. 6105–6114. PMLR, 2019. T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012. P. Toulis and E. M. Airoldi. Asymptotic and finite-sample properties of estimators based on stochastic gradients. The Annals of Statistics, 45(4):1694–1727, 2017. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. G. Wang, S. Lu, Q. Cheng, W. Tu, and Zhang L. SAdam: A variant of Adam for strongly convex functions. In International Conference on Learning Representations, 2020. M. K. Warmuth and A. K. Jagota. Continuous and discrete-time nonlinear gradient descent: Relative loss bounds and convergence. In *Electronic proceedings of the 5th International Symposium on Artificial Intelligence and* Mathematics, 1997. A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The marginal value of adaptive gradient methods in machine learning. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. N. Wu, J. Phang, J. Park, Y. Shen, Z. Huang, M. Zorin, S. Jastrz˛ebski, T. Févry, J. Katsnelson, E. Kim, S. Wolfson, U. Parikh, S. Gaddam, L. L. Y. Lin, K. Ho, J. D. Weinstein, B. Reig, Y. Gao, H. Toth, K. Pysarenko, A. Lewin, J. Lee, K. Airola, E. Mema, S. Chung, E. Hwang, N. Samreen, S. G. Kim, L. Heacock, L. Moy, K. Cho, and K. J. Geras. Deep neural networks improve radiologists' performance in breast cancer screening. *IEEE Transactions on Medical* Imaging, 39(4):1184–1194, 2020. M. D. Zeiler. ADADELTA: an adaptive learning rate method. *arXiv preprint arXiv:1212.5701*, 2012. H. Zhang, Y. N. Dauphin, and T. Ma. Fixup initialization: Residual learning without normalization. In International Conference on Learning Representations, 2019. L. Zhang, S. Lu, and Z. Zhou. Adaptive online learning in dynamic environments. Advances in neural information processing systems, 31, 2018. D. Zhou, J. Chen, Y. Cao, Y. Tang, Z. Yang, and Q. Gu. On the convergence of adaptive gradient methods for nonconvex optimization. *arXiv preprint arXiv:1808.05671*, 2018. 1 $\nabla\tau$2. $\mathbf{a}$ $-\,x^*))($ # Appendices ## A Proof Of Theorem 3.1 Proof. From the Fundamental Theorem of Calculus we have: ∇f(x) = ∇f(x ∗) + Z 1 0 ∇2f(x ∗ + t(x − x ∗))(x − x ∗)dt = Z 1 0 ∇2f(x ∗ + t(x − x ∗))(x − x ∗)*dt .* Thus, for any function ˜fΛ(x) whose Hessian is Λ∇2f(x) and ∇ ˜fΛ(x ∗) = 0, we have ∇ ˜fΛ(x) = Λ∇f(x). Now, from the definition of a scale-free algorithm, the iterates of such an algorithm do not change when one multiplies each coordinate of all the gradients by a positive constant. Thus, a scale-free algorithm optimizing f behaves the same as if it is optimizing ˜fΛ. ## B A Scale-Free Algorithm With Dependency On The Condition Number Algorithm 2 AdaGrad (Duchi et al., 2010a; McMahan & Streeter, 2010) *(All operations on vectors are element-wise.)* Input: \#Iterations T, a set K, x1 ∈ K, stepsize η for t = 1 *. . . T* do Receive: ∇f(xt) Set: ηt = qη Pt i=1 (∇f(xi))2 Update: xt+1 = ΠK (xt − ηt∇f(xt)) where ΠK is the projection onto K. end for Output: x¯ = 1 T PT t=1 xt. Algorithm 3 AdaGrad with Restart Input: \#Rounds N, x0 ∈ R d, upper bound on ∥x0 − x ∗∥∞ as D∞, strong convexity µ, smoothness M ``` Set: x¯0 = x0 for i = 1 . . . N do Run Algorithm 2 to get x¯i with T = 32d M µ , x1 = x¯i−1, K = {x : ∥x − x¯i−1∥ 2∞ ≤ D2∞ 4i−1 }, η = D∞/ √ 2 2i−1 end for Output: x¯N . ``` Theorem B.1. Let K be a hypercube with ∥x − y∥∞ ≤ D∞ for any x, y ∈ K. For a convex function f*, set* η = D√∞ 2 , then Algorithm 2 guarantees for any x ∈ K: $$\sum_{t=1}^{T}f(\mathbf{x}_{t})-f(\mathbf{x})\leq{\sqrt{2d D_{\infty}^{2}\sum_{t=1}^{T}\|\nabla f(\mathbf{x}_{t})\|^{2}}}\;.$$ $$(8)$$ 2 . (8) Theorem B.2. For a µ strongly convex and M smooth function f*, denote its unique minimizer as* x ∗ ∈ R d*. Given* x0 ∈ R d*, assume that* ∥x0 − x ∗∥∞ ≤ D∞*, then Algorithm 3 guarantees:* $$\|{\bar{\mathbf{x}}}_{N}-{\mathbf{x}}^{*}\|_{\infty}^{2}\leq{\frac{D_{\infty}^{2}}{4^{N}}}$$ Thus, to get a x *such that* ∥x − x ∗∥ 2∞ ≤ ϵ*, we need at most* 32d M µ log4 D2∞/ϵ*gradient calls.* Proof of Theorem B.2. Consider round i and assume K passed to Algorithm 2 is bounded w.r.t. ℓ∞ norm by D∞i . When f is µ-strongly convex and M smooth, let x = x ∗, Equation (8) becomes: $$\sum_{t=1}^{T}f(\mathbf{x}_{t})-f(\mathbf{x}^{*})\leq{\sqrt{2d D_{\infty,i}^{2}\sum_{t=1}^{T}\|\nabla f(\mathbf{x}_{t})\|^{2}}}\leq{\sqrt{4M d D_{\infty,i}^{2}\sum_{t=1}^{T}(f(\mathbf{x}_{t})-f(\mathbf{x}^{*}))}}\;,$$ where the second inequality is by the M smoothness of f. This gives: $$\sum_{t=1}^{T}f(x_{t})-f(x^{*})\leq4M d D_{\infty_{i}}^{2}\ .$$ Let x¯i = 1 T PT t=1 xt we have by the µ-strong-convexity that: $$\left\|\bar{\mathbf{x}}_{i}-\mathbf{x}^{*}\right\|_{\infty}^{2}\leq\left\|\bar{\mathbf{x}}_{i}-\mathbf{x}^{*}\right\|^{2}\leq\frac{2}{\mu}(f(\bar{\mathbf{x}})-f(\mathbf{x}^{*}))\leq\frac{2}{\mu}\frac{1}{T}\sum_{t=1}^{T}(f(\mathbf{x}_{t})-f(\mathbf{x}^{*}))\leq\frac{8MMD_{\text{\tiny{\sc ex}}_{i}}^{2}}{\mu T}.\tag{9}$$ Put T = 32d M µ in Equation (9) we have that ∥x¯i − x ∗∥ 2∞ ≤ D2∞i 4. Thus, after each round, the ℓ∞ distance between the update x¯i and x ∗is shrinked by half, which in turn ensures that x ∗is still inside the K passed to Algorithm 2 in the next round with D∞i+1 = D∞i 2. This concludes the proof. Proof of Theorem B.1. X T t=1 f(xt) − f(x) ≤ X T t=1 ⟨∇f(xt), xt − x⟩ = X T t=1 X d ∂f ∂xt,j (xt) ∗ (xt,j − xj ) j=1 j=1 (xt,j − xj ) 2 − xt,j − ηt,j ∂f ∂xt,j (xt) − xj 2 = X T t=1 X d 2ηt,j+ X T t=1 X d j=1 ηt,j 2 ∂f ∂xt,j (xt) 2 ≤ X T t=1 X d j=1 (xt,j − xj ) 2 − (xt+1,j − xj ) 2 2ηt,j+ X T t=1 X d j=1 ηt,j 2 ∂f ∂xt,j (xt) 2 ≤ X d j=1 X T t=1 (xt,j − xj ) 2 2 1 ηt,j −1 ηt−1,j + X d j=1 X T t=1 ηt,j 2 ∂f ∂xt,j (xt) 2 vuutXt i=1 ∂f ∂xi,j (xi) 2− vuutXt−1 i=1 ∂f ∂xi,j (xi) 2 + X d t=1 ≤ D2∞ 2η X d j=1 X T j=1 X T 2 rPt i=1 ∂f ∂xi,j (xi) 2 ∂f ∂xt,j (xt) 2 η t=1 D2∞ 2η vuutX T t=1 ∂f ∂xt,j (xt) 2+ η vuutX T j=1 t=1 ∂f ∂xt,j (xt) 2 ≤ X d j=1 vuut2D2∞ X T = X d t=1 ∂f ∂xt,j (xt) 2 ≤ vuut2dD2∞ X T t=1 X d j=1 ∂f ∂xt,j (xt) 2 = vuut2dD2∞ X T t=1 ∥∇f(xt))∥ 2 . where the first inequality is by convexity, the second one by the projection lemma as the projection onto a hypercube equals performing the projection independently for each coordinate, the fifth one by Lemma 5 in (McMahan & Streeter, 2010), and the last one by the concavity of √·. ## C The Histograms Of The Magnitude Of Each Update Coordinate During The Entire Training Phase In this section, we report the histograms of the absolute value of updates of Adam-ℓ2 vs. AdamW of all coordinates divided by α during the whole training process. From the figures shown below, we can clearly see that AdamW's updates remain in a much more concentrated scale range than Adam-ℓ2 during the entire training. Moreover, as the depth of the network grows, Adam-ℓ2's updates become more and more dispersed, while AdamW's updates are still concentrated. *(Note that the leftmost bin contains all values equal to or less than* 2 −27 ≈ 10−8.1 and the rightmost bin contains all values equal to or larger than 1.) ![18_image_0.png](18_image_0.png) Figure 8: The histograms of the magnitudes of all updates (without α) of a 20-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR10. ![19_image_0.png](19_image_0.png) Figure 9: The histograms of the magnitudes of all updates (without α) of a 44-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR10. ![20_image_0.png](20_image_0.png) Figure 10: The histograms of the magnitudes of all updates (without α) of a 56-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR10. ![21_image_0.png](21_image_0.png) Figure 11: The histograms of the magnitudes of all updates (without cc) of a 110-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR10. ![22_image_0.png](22_image_0.png) Figure 12: The histograms of the magnitudes of all updates (without cc) of a 218-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR10. ![23_image_0.png](23_image_0.png) by AdamW or Adam-l2 on CIFAR100. ![24_image_0.png](24_image_0.png) Figure 14: The histograms of the magnitudes of all updates (without a) of a 44-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR100. ![25_image_0.png](25_image_0.png) by AdamW or Adam-l2 on CIFAR100. ![26_image_0.png](26_image_0.png) Figure 16: The histograms of the magnitudes of all updates (without cc) of a 110-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR100. ![27_image_0.png](27_image_0.png) Figure 17: The histograms of the magnitudes of all updates (without cc) of a 218-layer Resnet with BN removed trained by AdamW or Adam-l2 on CIFAR100. ![28_image_0.png](28_image_0.png) trained by AdamW or Adam-62 on CIFAR100.
Review 1: Summary: This paper studies the advantage of AdamW over Adam, which is a well-known phenomenon especially in computer vision tasks, where people typically argue that AdamW is better because its implementation of weight decay is correct while Adam's is not. This paper provides a different perspective of viewing the two methods in terms of whether scale-freeness is satisfied, and show both theoretically and empirically that this property helps AdamW to outperform Adam. Extensive experimental results are provided by the authors. The proposed ideas are, as far as I know, novel and interesting. However, I kind of have the feeling that scale-freeness is another way of saying "weight decay" is better than "l_2 regularization" in adaptive methods. Strengths and Weaknesses: Strengths: 1. The proposed idea is novel and interesting. 2. The authors have conducted numerous experiments to verify their claims. 3. The paper has nice comparisons between Adam, AdamW, and AdamProx(as proposed in the paper) Weaknesses: 1. One part of the paper is very unclear (see below) 2. The "theory" part of this paper is rather intuitive than explanatory, only the strongly convex setting is considered 3. I do not see the impact of this work, or how it can help us design the optimizers. Since scale-freeness is a desired property, how about we design a scale-free version of SGD, e.g., scale the gradients by their norms. How is that algorithm going to perform on neural networks without BN? Since the authors only compare Adam with AdamW, and I guess SGD will likely fail in those networks, I wonder how the scaled-free version of SGD is going to perform. Requested Changes: Overall, this paper is a sound empirical work and I would recommend for publication if the following problems are fixed or addressed properly. Page 4, the paragraph before "Adam is Scale Free" is very unclear to me. The authors refer to different cases of $x_t$, different cases of $g_t$ and many update rules including (2) and (5). Moreover the term $p_t$, as in the paper, can have different forms. Therefore, I am really confused when the author says "when $g_t$ is zero". Even if $g_t$ is zero, what is $p_t$ in these cases? I would strongly suggest that the author make a table for the two cases they mention, and what the consequences will be in different cases. With Eqn. (2) and (5) in different pages, I have to go back and forth when reading this paragraph and I am still confused by what the authors really mean. Some notations are never defined. For example, what is $m_t$ and what is $f_t$ in page 3/4? As someone who studies optimization, I guess the author means the "first order momentum" and the "stochastic approximation" to the loss function $f$. However, please define them clearly if theoretical analysis is provided. " with $m_t$ and $v_t$ both updated using $g_t$" is very vague. Please write out the update rules of Adam directly. For the experiments, I am surprised that when BN is turned on, AdamW and Adam behaves the best when the regularization is zero since it will lead to overfitting when I run them. However, I am even more surprised by Figure 2(b). When both Adam and AdamW have zero weight decay, shouldn't them be the same? Why is the best learning rate 1 for Adam and 5 for AdamW? Could the authors explain it to me? As I mentioned above, it is better if scale-free SGD is added to the comparison, but this is less important. Minor errors: some citations are used incorrectly, e.g., Duchi et al. 2010b in page 4, Nesterov 2004 in page 5. I believe the authors want to use \citep instead of \citet there. Broader Impact Concerns: This paper studies optimization algorithms and I don't see any concerns with the ethical implications. ================================================== Review 2: Summary: The paper provides some theoretical and empirical insights on AdamW, especially on why it can outperform Adam l2. The key messages can be summarized in two points. 1) AdamW can be viewed as a first-order approximation of a special proximal method. 2) The advantage of AdamW over Adam l2 might be brought by its scale-freeness property. Overall I enjoyed reading the paper and the idea is interesting. But I do have a few questions for the authors. 1. What is the main further insight brought by interpreting AdamW as an approximation of proximal methods? 2. Another intuitive understanding of the advantages of AdamW over Adam l2 is treating AdamW as assigning different l2 regularization parameters to different coordinates, according to the gradient magnitudes of different coordinates. The advantage then would be AdamW is regularizing the l2 norm of different weights according to their importance in changing the objective function. I believe this is a more common and easily understood perspective than scale-freeness. I wonder how does the authors view this perspective and are there connections between scale-freeness and this perspective? Strengths and Weaknesses: Strength is the perspective is new to the best of my knowledge. Weakness is the limited theoretical depth. Requested Changes: It seems the paper is mainly on scale-freeness and the connection to proximal methods does not provide much insight. Also, it feels to me that scale-freeness and the connection to proximal methods are separated results without much connection to each other. Maybe it is better to restructure the paper to highlight scale-freeness. Or add some transition between the connection to proximal methods and the scale-freeness part. Broader Impact Concerns: I do not have concerns. ================================================== Review 3: Summary: The manuscript investigates the benefits of AdamW by deriving connections to proximal methods and showing it exhibits a scale-freeness property. The paper also provides empirical validation of some approximations made in the derivation and demonstrates that the effects hypothesised by the theoretical analysis are observed in practice. Strengths and Weaknesses: For the most part, I found the paper quite easy to follow and enjoyable to read. The theoretical contributions are quite interesting, and have the potential (as evidenced by the planned future work) to lead to new training algorithms with similarly beneficial properties. The experimental work did a good job of demonstrating the practical implications of scale-freeness in the deep learning setting. The main weakness is that the term "proximal update" is not always used in a precise manner, so it is at times unclear which of the existing proximal methods the manuscript is referring to. For example, the manuscript states that Eq 4 "uses the proximal operator", but it seems to me that there are additional terms compared to a conventional proximal operation definition, and a different Bregman divergence is used. Making this clear is important, as the manuscript does make some claims about how AdamW inherits properties that pertain to specific proximal methods. It is not clear to me how the momentum terms from Adam(W) interact with this proximal update interpretation. E.g., the convergence guarantees of Duchi et al. (2010b) are invoked, but it is not obvious to me that they apply in the case where momentum is present. Requested Changes: Some of the figures in Section 4 have plots that have inconsistent sizes. It would be nice (but not required for acceptance) if these could be fixed, and perhaps slightly more guidance could be given to the reader about what features of the plots can be used to draw the conclusions the authors have drawn. The formalisation at the beginning of Section 3 should be improved before acceptance. A touch more scene-setting would be useful; e.g., introduce the definition of a proximal operation, and use it to set up some notation that allows for compact and easily understandable derivations. Doing this may also alleviate what I thought was the main weakness of the paper, by making the relationship to existing proximal methods more clear. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Accept as is Comment: In this paper, the authors provide new theoretical understandings of AdamW from the optimization viewpoint. First, they demonstrate that AdamW can be seen as an approximation of a proximal update. Second, they prove that AdamW enjoys the scale-freeness property, which leads to an automatic acceleration on certain cases. This paper made significant progress towards understanding the advantage of AdamW. It is well-written, and the theoretical findings are supported by empirical studies. ==================================================
# Optimum-Statistical Collaboration Towards General And Efficient Black-Box Optimization Wenjie Li li3549@purdue.edu Department of Statistics, Purdue University Chi-Hua Wang chihuawang@ucla.edu Department of Statistics, University of California, Los Angles Guang Cheng guangcheng@ucla.edu Department of Statistics, University of California, Los Angles Qifan Song *qfsong@purdue.edu* Department of Statistics, Purdue University Reviewed on OpenReview: *https: // openreview. net/ forum? id= ClIcmwdlxn* ## Abstract In this paper, we make the key delineation on the roles of resolution and statistical uncertainty in hierarchical bandits-based black-box optimization algorithms, guiding a more general analysis and a more efficient algorithm design. We introduce the *optimum-statistical* collaboration, an algorithm framework of managing the interaction between optimization error flux and statistical error flux evolving in the optimization process. We provide a general analysis of this framework without specifying the forms of statistical error and uncertainty quantifier. Our framework and its analysis, due to their generality, can be applied to a large family of functions and partitions that satisfy different local smoothness assumptions and have different numbers of local optimums, which is much richer than the class of functions studied in prior works. Our framework also inspires us to propose a better measure of the statistical uncertainty and consequently a variance-adaptive algorithm VHCT. In theory, we prove the algorithm enjoys rate-optimal regret bounds under different local smoothness assumptions; in experiments, we show the algorithm outperforms prior efforts in different settings. ## 1 Introduction Black-box optimization has gained more and more attention nowadays because of its applications in a large number of research topics such as tuning the hyper-parameters of optimization algorithms, designing the hidden structure of a deep neural network, and resource investments (Li et al., 2018; Komljenovic et al., 2019). Yet, the task of optimizing a black-box system often has a limited budget for evaluations due to its expensiveness, especially when the objective function is nonconvex and can only be evaluated by an estimate with uncertainty (Bubeck et al., 2011b; Grill et al., 2015). Such limitation haunts practitioners' deployment of machine learning systems and invites scientists' investigation for the authentic roles of resolution (optimization error) and uncertainty (statistical error) in black-box optimization. Indeed, it raises a question of optimum-statistical trade-off: how can we better balance *resolution* and *uncertainty* along the search path to create efficient black-box optimization algorithms? Among different categories of black-box optimization methods such as Bayesian algorithms (Shahriari et al., 2016; Kandasamy et al., 2018) and convex black-box algorithms (Duchi et al., 2015; Shamir, 2015), this paper focuses on the class of hierarchical bandits-based optimization algorithms introduced by (Auer et al., ![1_image_0.png](1_image_0.png) (a) Difficult function (b) Function 1 + 1/(ln x) ![1_image_1.png](1_image_1.png) Figure 1: 1(a): A difficult function proposed by Grill et al. (2015) which has exponentially increasing (2ν1ρ h)-near-optimal regions in the standard partition. 1(b): An example function that violates the ν1ρ h local smoothness assumption by Grill et al. (2015) in the standard partition and thus cannot be analyzed by prior works, but has no local optimum. 2007; Bubeck et al., 2011b). These algorithms search for the optimum by traversing a hierarchical partition of the parameter space and look for the best node inside the partition. Existing results, such as Bubeck et al. (2011b); Grill et al. (2015); Shang et al. (2019); Bartlett et al. (2019); Li et al. (2022), heavily rely on some specific assumptions of the smoothness of the blackbox objective and the hierarchical partition. However, their assumptions are only satisfied by a small class of functions and partitions, which limits the scope of their analysis. To be more specific, existing studies all focus on optimizing "exponentially-local-smooth" functions (see; Eqn. (3)), which can have an exponentially increasing number of sub-optimums as the parameter space partition proceeds deeper (Grill et al., 2015; Shang et al., 2019; Bartlett et al., 2019). For instance, Grill et al. (2015) designed a difficult function (shown in Figure 1(a)) that can be optimized by many existing algorithms because it satisfies the exponential local-smooth assumption. However, functions and partitions that do not satisfy exponential local-smoothness, but with a bounded or polynomially increasing number of near-optimal regions have been overlooked in the current literature of black-box optimization. A simple example is Figure 1(b), which is not exponentially smooth but has a trivial unique optimum. Such a simple example depresses all previous analyses in existing studies due to their dependency on the exponential localsmoothness assumption. What is worse, different designs of the uncertainty quantifier can generate different algorithms and thus may require different analyses. Consequently, a more unified theoretical framework to manage the interaction process between the optimization error flux and the statistical error flux is desirable and beneficial towards general and efficient black-box optimization. In this work, we deliver a generic perspective on the optimum-statistical collaboration inside the exploration mechanism of black-box optimization. Such a generic perspective holds regardless of the local smoothness condition of the function or the design of uncertainty quantification, generalizing its applicability to a larger class of functions (e.g., Figure 1(b)) and algorithms with different uncertainty quantification methods. Our analysis for the proposed general algorithmic framework only relies on mild assumptions. It allows us to analyze functions with different levels of smoothness and also inspired us to propose a variance-adaptive black-box algorithm VHCT. In summary, our contributions are: - We identify two decisive components of exploration in black-box optimization: the resolution descriptor (Definition 1) and the uncertainty quantifier (Definition 2). Based on the two components, we introduce the optimum-statistical collaboration (Algorithm 1), a generic framework for collaborated optimism in hierarchical bandits-based black-box optimization. - We provide a unified analysis of the proposed framework (Theorem 3.1) that is independent of the specific forms of the resolution descriptor and the uncertainty quantifier. Due to the flexibility of the resolution descriptor, this analysis includes all black-box functions who satisfy the general local smoothness assumption (Condition GLS) and have a finite near-optimality dimension (Definition 1), which are excluded from prior works. - Furthermore, the framework inspires us to propose a better uncertainty quantifier, namely the varianceadaptive quantifier (VHCT). It leads to effective exploration and advantages our bandit policy by utilizing the variance information learnt from past samplers. Theoretically, we show that the proposed framework secures different regret guarantees in the face of different smoothness assumptions, and VHCT leads to a better convergence when the reward noise is small. Our experiments validate that the proposed variance adaptive quantifier is more efficient than the existing anytime algorithms on various objectives. Related Works. Pioneer bandit-based black-box optimization algorithms such as HOO (Bubeck et al., 2011b) and HCT (Azar et al., 2014) require complicated assumptions of both the the black-box objective and the parameter partition, including weak lipschitzness assumption. Recently, Grill et al. (2015) proposed the exponential local smoothness assumption (Eqn. (3)) to simplify the set of assumptions used in prior works and proposed POO to meta-tune the smoothness parameters. Some follow-up algorithms such as GPO (Shang et al., 2019) and StroquOOL (Bartlett et al., 2019) are also proposed. However, both GPO and StroquOOL require the budget number beforehand, and thus they are not anytime algorithms (Shang et al., 2019; Bartlett et al., 2019). Also, the analyses of these algorithms all rely on the exponential local smoothness assumption (Grill et al., 2015). ## 2 Preliminaries Problem Formulation. We formulate the problem as optimizing an implicit objective function f : *X 7→* R, where X is the parameter space. The sampling budget (number of evaluations) is denoted by an *unknown* constant n, which is often limited when the cost of evaluating f(x) is expensive. At each round (evaluation) t, the algorithm selects a xt ∈ X and receives an stochastic feedback rt ∈ [0, 1] , modeled by rt ≡ f(xt) + t, where the noise t is only assumed to be mean zero, bounded by [− b 2 , b 2 ] for some constant b > 0, and independent from the historical observed algorithm performance and the path of selected xt's. Note that the distributions of t are not necessarily identical. We assume that there exists at least one point x ∗ ∈ X such that it attains the global maximum f ∗, i.e., f ∗ ≡ f(x ∗) ≡ supx∈X f(x). The goal of a black-box optimization algorithm is to gradually find xn such that f(xn) is close to the global maximum f ∗ within the limited budget. Regret Analysis Framework. We measure the performance of different algorithms using the *cumulative* regret. With respect to the optimal value f ∗, the *cumulative regret* is defined as $$R_{n}\equiv n f^{*}-\sum_{t=1}^{n}r_{t}.$$ It is worth noting that an alternative measure of performance widely used in the literature (e.g., Shang et al., 2019; Bartlett et al., 2019) is the *simple regret* St ≡ f ∗ − rt. Both simple and cumulative regrets measure the performance of the algorithm but are from different aspects. The former one focuses on the convergence of the algorithm's final round output, and the latter cares about the overall loss during the whole algorithm training. The cumulative regret is useful in scenarios such as medical trials where ill patients are included in the each run and the cost of picking non-optimal treatments for all subjects shall be measured. This paper chooses to study the cumulative regret, while in the literature, there were discussions on the relationship between these two (Bubeck et al., 2011a). Hierarchical partitioning. We use the hierarchical partitioning P = {Ph,i} to discretize the parameter space X into nodes, as introduced by Munos (2011); Bubeck et al. (2011b); Valko et al. (2013). For any non-negative integer h, the set {Ph,i} partitions the whole space X . At depth h = 0, a single node P0,1 covers the entire space. Every time we increase the level of depth, each node at the current depth level will be separated into two children; that is, Ph,i = Ph+1,2i−1 ∪ Ph+1,2i. Such a hierarchical partition naturally inspires algorithms which explore the space by traversing the partitions and selecting the nodes with higher rewards to form a tree structure, with P0,1 being the root. We remark that the binary split for each node we consider in this paper is the same as in the previous works such as Bubeck et al. (2011b); Azar et al. (2014), and it would be easy to extend our results to the K-nary case (Shang et al., 2019). Similar to Grill et al. (2015), we refer to the partition where each node is split into regular, same-sized children as the standard partitioning. Given the objective function f and hierarchical partition P, we introduce a generalized definition of nearoptimality dimension, which is a natural extension of the notion defined by Grill et al. (2015). Near-optimality dimension. For any positive constants α and C, and any function ξ(h) that satisfies ∀h ≥ 1, ξ(h) ∈ (0, 1], we define the near-optimality dimension d = d(*α, C, ξ*(h)) of f with respect to the partition P and function ξ(h) as d ≡ inf{d 0 > 0 : ∀h ≥ 0, Nh(αξ(h)) ≤ Cξ(h) $$\xi(h))\leq C\xi(h)^{-d^{\prime}}\}$$ $$(1)$$ 0} (1) if exists, where Nh() is the number of nodes Ph,i on level h such that supx∈Ph,i f(x) ≥ f ∗ − . In other words, for each h > 0, Nh(αξ(h)) is the number of near-optimal regions on level h that are (αξ(h))- close to the global maximum so that any algorithm should explore these regions. d = d(*α, C, ξ*(h)) controls the polynomial growth of this quantity with respect to the function ξ(h) −1. It can be observed that this general definition of d covers the near optimality dimension defined in Grill et al. (2015) by simply setting ξ(h) = ρ h and the coefficient α = 2ν for some constants ν > 0 and ρ ∈ (0, 1). The rationale of introducing the generalized notion ξ(h) is that, although the number of nodes in the partition grows exponentially when h increases, the number of near-optimal regions Nh() of the objective function f may not increase as fast, even if the near-optimal gap converges to 0 slowly. The particular choice of ξ(h) = ρ hin Grill et al. (2015) indicates that Nh(αρh) ≤ Cρ−dh, which may be over-pessimistic and makes it a non-ideal setting for analyzing functions that change rapidly and don't have exponentially many near-optimal regions. Such a generalized definition becomes extremely useful when dealing with functions that have different local smoothness properties, and therefore our framework can successfully analyze a much larger class of functions. We establish our general regret bound based on this notion of near-optimality dimension in Theorem 3.1. It is worth mentioning that taking a slowly decreasing ξ(h), although reduces the number of near-optimal regions, does not necessarily imply that the function is easier to optimize. As will be shown in Section 3 and 4, ξ(h) is often taken to be the local smoothness function of the objective. A slowly decreasing ξ(h) makes the function much more unsmooth than a function with exponential local smoothness, and hence is still hard to optimize. Additional Notations. At round t, we use H(t) to represent the maximum depth level explored in the partition by an algorithm. For each node Ph,i, we use Th,i(t) to denote the number of times it has been pulled and r k(xh,i) to denote the k-th reward observed for the node, evaluated at a **pre-specified** xh,i within Ph,i, which is the same as in Azar et al. (2014); Shang et al. (2019). Note that in the literature, it is also considered that xh,i follows some distribution supported on Ph,i, e.g., Bubeck et al. (2011b). ## 3 Optimum-Statistical Collaboration This section defines two decisive quantities (Resolution Descriptor and Uncertainty Quantifier) that play important roles in the proposed optimum-statistical collaboration framework. We then introduce the general optimum-statistical collaboration algorithm and provide its theoretical analysis. Definition 1. **(Resolution Descriptor** OE). Define OEh to be the *resolution* for each level h, which is a function that bounds the change of f around its global optimum and measures the current optimization error, i.e., for any global optimum x ∗, $$\forall h\geq0,\forall x\in{\mathcal{P}}_{h,i_{h}^{*}},f(x)\geq f(x^{*})-0\mathbb{E}_{h},$$ ∗) − OEh, (OE) where Ph,i∗h is the node on level h in the partition that contains the global optimum x ∗. ## Algorithm 1 Optimum-Statistical Collaboration (Osc) 1: **Input:** partition P, resolution descriptor OEh, uncertainty quantifier SEh,i(*T, t*), selection policy π(S) 2: **Initialize** T = {P0,1,P1,1,P1,2} 3: for t = 1 to n do 4: S = {P0,1},Pht,it = P0,1 5: **while** OEht ≥ SEht,it (*T, t*) do 6: S = *S \ {P*ht,it }S{Pht+1,2it−1,Pht+1,2it } 7: π(S) selects a new node Pht,it from S 8: **end while** 9: Pull Pht,it and update SEht,it (*T, t*) 10: if OEht ≥ SEht,it (*T, t*) and Pht+1,2it ∈ T / **then** 11: T = TS{Pht+1,2it−1,Pht+1,2it } 12: **end if** 13: end for Definition 2. **(Uncertainty Quantifier** SE). Let SEh,i (*T, t*) be the *uncertainty estimate* for the node Ph,i at time t, which aims to bound the statistical estimation error of f(xh,i), given T pulled values from this node. Recall that Th,i(t) is the number of pulls for node Ph,i until time t, and let µbh,i(t) be the online estimator of f(xh,i), we expect that SE ensures P∞ t=1 P(Ac t ) < C for some constant C, where $${\mathcal{A}}_{t}=\Big\{\forall h,i,|{\widehat{\mu}}_{h,i}(t)-f(x_{h,i})|\leq\mathrm{{SE}}_{h,i}\left(T_{h,i}(t),t\right)\Big\}.$$ o. (SE) With a slight abuse of notation, we rewrite SEh,i(Th,i(t), t) as SEh,i(*T, t*) when no confusion is caused. When Th,i(t) = 0, SEh,i(*T, t*) is naturally taken to be +∞ since the node is never pulled. To ensure the above probability requirement holds, it is reasonable to make SEh,i(*T, t* + 1) ≥ SEh,i(*T, t*) because when the number of pulls T is fixed, the statistical error should not decrease. Given the above definitions of the resolution descriptor and the uncertainty quantifier at each node, we introduce the optimum-statistical collaboration algorithm in Algorithm 1 that guides the tree-based optimum search path, under different settings of OE and SE. The basic logic behind **Algorithm 1** is that at each time t, the selection policy π(S) will continuously search nodes from the root to leaves, until finding one node satisfying OEht < SEht,it (*T, t*) and then pull this node. The end-goal of the optimum-statistical collaboration is that, after pulling enough number of times, the following relationship holds along the shortest path from the root to the deepest node that contains the global maximum (If there are multiple global maximizers, the process only needs to find one of them) : $$(\mathrm{{SE}})$$ ![4_image_0.png](4_image_0.png) OE1 ≥ SE1 > OE2 ≥ SE2 ≥ · · · ≥ OEh ≥ SEh *≥ · · ·* (2) with slightly abused notation of SEh to represent the uncertainty quantifier of the h-th node in the traverse path (refer to Figure 2). In other words, the two terms collaborate on the optimization process so that SE is controlled by OE for each node of the traverse path, and they both become smaller when the exploration algorithm goes deeper. Figure 2 illustrate the above dynamic process more clearly with an Figure 2: Illustration of the optimum-statistical collaboration framework. The node on the fifth level in the path will be pulled because its OE ≤ SE example tree on the standard partition. We remark that Eqn. (2) only needs to be guaranteed on the traverse path at each time instead of the whole exploration tree to avoid any waste of the budget. For the same purpose, all the proposed algorithms only require OEh to be slightly larger than or equal to SEh on each level. We state the following theorem, which is a general regret upper bound with respect to any choice of SEh,i(*T, t*) and OEh, and any design of the selection policy that follows the optimum statistical collaboration framework, with only a mild condition on the result of the policy in each round. Theorem 3.1. **(General Regret Bound)** Suppose that under a sequence of probability events {Et}t=1,2,···, the policy π(S) ensures that at each time t, the node Pht,it *pulled in line 9 in Algorithm 1 satisfies* f ∗ − f(xht,it ) ≤ a · max{SEht,it (*T, t*), OEht }, where a > 0 *is an absolute constant. Then for any integer* H ∈ [1, H(n)) and any 0 < δ < 1*, we have the following bound on the cumulative regret with probability at least* 1 − δ/(4n 2), $$R_{n}\leq\sum_{t=1}^{n}\mathbb{I}(\mathcal{E}_{t}^{c})+\sqrt{2n\log\left(\frac{4n^{2}}{\delta}\right)}+2aC\sum_{h=1}^{\overline{H}}\left(0\mathsf{E}_{h-1}\right)^{-d}\sum_{i:T_{h,i}(t)\neq0}^{n}\mathsf{SE}_{h,i}(T,t)$$ $$\quad+a\sum_{\overline{H}+1}^{H(n)}\sum_{i:T_{h,i}(t)\neq0}\sum_{t=1}^{n}\mathsf{SE}_{h,i}(T,t)$$ where ¯d > d(a, C, OEh−1), d(a, C, OEh−1) is the near-optimality dimension w.r.t. a, C, and OEh−1. Notice that in Theorem 3.1, we do not specify the form of OEh, SEh,i(*T, t*), or the specific selection policy of the algorithm. Therefore, our result is general and it can be applied to any function and partition that has a well defined d(*a, C,* OEh−1) with resolution OEh, and any algorithm that satisfies the algorithmic framework. The requirement f ∗−f(xht,it ) ≤ a·max{SEht,it (*T, t*), OEht } is mild and natural in the sense that it indicates the π(S) selects a "good" node to pull at each time t that is at least close to the optimum relatively with respect to OE or SE, with probability P(Et). Note that with a good choice of π, E c t can reduce to a subset of Ac t , hence Pn t=1 I(E c t ) is bounded in L1. The terms that involve SE and OE are random due to H(n), but can still be explicitly bounded when the they are well designed. Specific regret bounds for different choices of OE and a new SE are provided in the next section. ## 4 Implementation Of Optimum-Statistical Collaboration Provided the optimum-statistical collaboration framework and its analysis, we discuss the some specific forms of the resolution descriptor and the uncertainty quantifier and elaborate the roles these definitions played in the optimization process. We then introduce a novel VHCT algorithm based on one variance-adaptive choice of SE, which is a better quantifier of the statistical uncertainty. ## 4.1 The Resolution Descriptor (Definition 1) The resolution descriptor OE is often measured by the global or local smoothness of the objective function (Azar et al., 2014; Grill et al., 2015). We first discuss the local smoothness assumption used by prior works and show its limitations, and then introduce a generalized local smoothness condition. Local Smoothness. Grill et al. (2015) assumed that there exist two constants ν1 > 0, ρ ∈ (0, 1) s.t. $$\forall h\geq0,\forall x\in{\mathcal{P}}_{h,i_{h}^{*}},f(x)\geq f^{*}-\nu_{1}\rho^{h}.$$ h. (3) The above equation states that the function f is ν1ρ h-smooth around the maximum at each level h. It has been considered in many prior works such as Shang et al. (2019); Bartlett et al. (2019). The resolution descriptor is naturally taken to be OEh = ν1ρ h. However, such a choice of local smoothness is too restrictive as it requires that the function f and the partition P are both "well-behaved" so that the function value becomes exponentially closer to the optimum $$(3)$$ when as h increases. A simple counter-example is the function g(x) = 1 + 1/(ln x) defined on the domain [0, 1/e] with g(0) defined to be 0 (as shown in Figure 1(b)). Under the standard binary partition, it is easily to prove that it doesn't satisfy Eqn. (3) for any given constants ν0 > 0, ρ0 ∈ (0, 1). It might be possible to design a particular partition for g(x) such that Eqn. (3) holds. However, such a partition is defined in hindsight since one have no knowledge of the objective function before the optimization. Beyond the above example, it is also easy to design other non-monotone difficult-to-optimize functions that cannot be analyzed by prior works. It thus inspires us to introduce a more general φ(h)-local smoothness condition for the objective to analyze functions and partitions that have different levels of local smoothness. General Local Smoothness. Assume that there exists a function φ(h) : N → (0, 1] s.t. $${\mathbf{S}}.{\mathbf{t}}.$$ $$\forall h\geq0,\forall x\in{\mathcal{P}}_{h,i_{h}^{*}},f(x)\geq f(x^{*})-\phi(h)$$ ∗) − φ(h) (GLS) In the same example g(x) = 1+1/(ln x), it can be shown that g(x) satisfies Condition (GLS) with φ(h) = 2/h. Therefore, it fits in our framework by setting OEh = 2/h and a valid regret bound can be obtained for g(x) given a properly chosen SEh,i, since d(2*, C,* 1/h) < ∞ in this case (refer to details in Subsection 4.4). In general, we can simply set OEh = φ(h) within the optimum-statistical collaboration framework, and Theorem 3.1 can be utilized to analyze functions and partitions that satisfy Condition (GLS) with any φ(h) such as φ(h) = 1/hp, for some p > 0 or even φ(h) = 1/(log h + 1), as long as the corresponding near-optimality dimension d(*a, C, φ*(h)) is finite for some *a, C >* 0. Determining the class of smoothness functions φ(h) that can generate nontrivial regret bounds would be an interesting future direction. Given such a generalized definition and the general bound in Theorem 3.1, we can provide the convergence analysis for a much larger class of black-box objectives and partitions, beyond those that satisfy Eqn. (3). ## 4.2 The Uncertainty Quantifier (Definition 2) Tracking Statistics. To facilitate the design of SE, we first define the following tracking statistics. Trivially, the mean estimate µbh,i(t) and the variance estimate Vbh,i(t) of the rewards at round t are computed as $${\widehat{\mu}}_{h,i}(t)\equiv{\frac{1}{T_{h,i}(t)}}\sum_{k=1}^{T_{h,i}(t)}r^{k}(x_{h,i}),\quad{\widehat{\nabla}}_{h,i}(t)\equiv{\frac{1}{T_{h,i}(t)}}\sum_{k=1}^{T_{h,i}(t)}\left(r^{k}(x_{h,i})-{\widehat{\mu}}_{h,i}(t)\right)^{2},$$ $$(\mathrm{GLS})$$ The variance estimate is defined to be negative infinity when Th,i(t) = 0 since variance is undefined in such cases. We now discuss two specific choices of SE. Nonadaptive Quantifier (in HCT). Azar et al. (2014) proposed the uncertainty quantifier with the following form in their High Confidence Tree (HCT) algorithm: $${\bf S E}_{h,i}(T,t)\equiv b c{\sqrt{\frac{\log(\Delta(t))}{T_{h,i}(t)}}}$$ where b/2 is the bound of the error noise t, ∆(t) = max{1, 2 blog tc+1/(c1δ)} is an increasing function of t, δ is the confidence level, and *c, c*1 are two tuning constants. By Hoeffding's inequality, the above SE is a high-probability upper bound for the statistical uncertainty. Note that HCT is also a special case of our OSC framework and its analysis can be done by following Theorem 3.1. In what follows, we propose an better algorithm with an improved uncertainty quantifier. Variance Adaptive Quantifier (in VHCT). Based on our framework of the statistical collaboration, a tighter measure of the statistical uncertainty can boost the performance of the optimization algorithm, as the goal in Eqn. (2) can be reached faster. Motivated by prior works that use variance to improve the performance of multi-armed bandit algorithms Audibert et al. (2006; 2009), we propose the following variance adaptive uncertainty quantifier, and naturally the VHCT algorithm in the next subsection, which is an adaptive variant of the SE in HCT. $$\mathbf{SE}_{h,i}(T,t)\equiv c{\sqrt{\frac{2{\hat{\nabla}}_{h,i}(t)\log(\Delta(t))}{T_{h,i}(t)}}}+{\frac{3b c^{2}\log(\Delta(t))}{T_{h,i}(t)}}$$ Th,i(t)(4) $$\left(4\right)$$ Algorithm 2 VHCT Algorithm (Short Version) 1: **Input**: known smoothness function φ(h), partition P. 2: Run Algorithm 1 with partition P and other required inputs as: OEh := φ(h), SEh,i(*T, t*) := Eqn. (4), π(S) := argmaxPh,i∈SBh,i(t) The notations *b, c,* and ∆(t) are the same as those in HCT. The uniqueness of the above SEh,i(*T, t*) is that it utilizes the node-specified variance estimations, instead of a conservative trivial bound b. Therefore, the algorithm is able to adapt to different noises across nodes, and SEh,i(*T, t*) ≤ OEh is achieved faster at the small-noise nodes. This unique property grants VHCT an advantage over all existing non-adaptive algorithms. ## 4.3 Algorithm Example - Vhct Based on the proposed optimum-statistical collaboration framework and the novel adaptive SEh,i(*T, t*), we propose a new algorithm VHCT as a special case of Algorithm 1 and elaborate its capability to adapt to different noises. Algorithm 2 provides the short version of the pseudo-code and the complete algorithm is provided in Appendix B. The proposed VHCT, similar to HCT, also maintains an upper-bound Uh,i(t) for each node to decide collaborative optimism. In particular, for any node Ph,i, the upper-bound Uh,i(t) is computed directly from the average observed reward for pulling xh,i as $${\dot{\mathbf{\theta}}}_{i,i}(t)\,.$$ Uh,i(t) ≡ µbh,i(t) + OEh + SEh,i(*T, t*) with SEh,i(*T, t*) defined as in Eqn. (4) and OEh tuned by the input. Note that Uh,i(t) = ∞ for unvisited nodes. To better utilize the tree structure in the algorithm, we also define the tighter upper bounds Bh,i(t). Since the maximum upper bound of one node cannot be greater than the maximum of its children, Bh,i(t) is defined to be $$B_{h,i}(t)=\operatorname*{min}\left\{U_{h,i}(t),\operatorname*{max}_{j=0,1}\{B_{h+1,2i-j}(t)\}\right\}$$ . The quantities Uh,i(t) and Bh,i(t) serve a similar role of the upper confidence bound in UCB bandit algorithm (Bubeck et al., 2011b), and the selection policy π(S) of VHCT is simply selecting the node with the highest Bh,i(t) in the given set S, which is shown in Algorithm 2. We prove that selection policy guarantees that f ∗ − f(xht,it ) ≤ 3 max{SEht,it (*T, t*), OEht } with high probability in Appendix B, as we required in Theorem 3.1. Follow the notation of Azar et al. (2014), we define a threshold value τh,i(t) for each node Ph,i to represent the minimal number of times it has been pulled, such that the algorithm can explore its children nodes, i.e., $$..,i(T,t)$$ $$\tau_{h,i}(t)=\operatorname*{inf}_{T\in\mathbb{N}}\Big\{{\mathsf{S E}}_{h,i}(T,t)\leq0{\mathsf{E}}_{h}\Big\}.$$ Only when Tht,it (t) ≥ τht,it (t), we expand the search into Pht,it 's children. This notation helps to compare the exploration power of VHCT with HCT. Note that when the variances of the nodes are small, SEh,i(*T, t*) of VHCT would be inversely proportional to Th,i(t) and thus smaller than that of HCT. As a consequence, the thresholds τh,i(t) is smaller in VHCT than in HCT, and thus VHCT explores more efficiently in low noise regimes. ## 4.4 Regret Bound Examples We now provide upper bounds on the expected cumulative regret of VHCT, which serve as instances of our general Theorem 3.1 when OE and SE are specified. Note that some technical adaptions are made to obtain a L1 bound for the regret. The regret bounds depend on the upper bound of variance in history across all the nodes that have been pulled, meaning max{h,i,t|Th,i(t)≥1} Vbh,i(t) ≤ Vmax for a constant Vmax > 0. Since the noise t is bounded, such a notation is always well defined and bounded above. The Vmax represents our knowledge of the noise variance after searching and exploring the objective function, which can be more accurate than the trivial choice b 2/4, e.g., when the true noise is actually bounded by b 0/2 for some unknown constant b 0 < b. We focus on two choices of the local smoothness function in Condition (GLS) and their corresponding near-optimal dimensions, i.e., φ(h) = ν1ρ hthat matches previous analyses such as Grill et al. (2015); Shang et al. (2019), and φ(h) = 2/h, which is the local smoothness of the counter example in Figure 1(b). For other choices of φ(h), we believe similar regret upper bounds may be derived using Theorem 3.1. Theorem 4.1. Assume that the objective function f *satisfies Condition* (GLS) *with* φ(h) = ν1ρ h*for two* constants ν1 > 0, ρ ∈ (0, 1)*. The expected cumulative regret of Algorithm 3 is upper bounded by* $$\mathbb{E}[R_{n}^{\mathrm{{HET}}}]\leq2{\sqrt{2n\log(4n^{3})}}+C_{1}V_{\mathrm{max}}^{{\frac{1}{41+2}}}\,n^{{\frac{24+1}{41+2}}}(\log n)^{{\frac{1}{41+2}}}+C_{2}n^{{\frac{241+1}{241+44}}}\log n$$ where C1 and C2 are two constants and d1 is any constant satisfying d1 > d(3ν1*, C, ρ*h). Theorem 4.2. Assume that the objective function f *satisfies Condition* (GLS) *with* φ(h) = 2/h*. The* expected cumulative regret of Algorithm 3 is upper bounded by $$\mathbb{E}[R_{n}^{\mathrm{\tiny{WIRC}}}]\leq2\sqrt{2n\log(4n^{3})}+\bar{C}_{1}V_{\mathrm{max}}^{\frac{1}{2a^{2}+3}}\,n^{\frac{2a_{2}+2}{2a_{2}+3}}(\log n)^{\frac{1}{2a_{2}+3}}+\bar{C}_{2}n^{\frac{2a_{2}+1}{2a_{2}+3}}\log n\,.$$ where C¯1 and C¯2 are two constants and d2 is any constant satisfying d2 > d(2*, C,* 1/h). The proof of these theorems are provided in Appendix C and Appendix D respectively. We first remark that the above regret bounds are actually loose because we do not a delicate individual control over the variances in different nodes. Instead, a much conservative analysis is conducted. In the literature, Grill et al. (2015); Shang et al. (2019) have proved that the cumulative regret bounds of HOO, HCT are both O(n (d1+1)/(d1+2)(log n) 1/(d1+2)) when the objective function f satisfies Condition (GLS) with φ(h) = ν1ρ h, while our regret bound in Theorem 4.1 is of order O(V 1/(d1+2) max n (d1+1)/(d1+2)(log n) 1/(d1+2)) . Although the two rates are the same with respect to the increasing of n, our result explicitly connects the variance and the regret, implying a positive relationship between these two. Therefore, we expect the variance adaptive algorithm VHCT to converge faster than the non-adaptive algorithms such as HOO and HCT, when there is only low or moderate noise. The theoretical results of prior works rely on the smoothness assumption φ(h) = ν1ρ h, thus are not able to deliver a regret analysis for functions and partitions with other φ(h) (e.g. φ(h) = 2/h in Theorem 4.2). Providing analysis for prior algorithms on functions and partitions with different smoothness assumptions is another interesting future direction to explore. However, we conjecture that VHCT should still outperform the non-adaptive algorithms in these cases since its SE is a tighter measure of the statistical uncertainty. This theoretical observation is also validated in our experiments. We emphasize that the near-optimality dimensions are defined with respect to different local smoothness functions in Theorem 4.1 and Theorem 4.2. Specifically, when the objective is ν1ρ h-smooth, Theorem 4.1 holds even if the number of near-optimal regions increase exponentially when the partition proceeds deeper, i.e., when d(ν1*, C, ρ*h) < ∞. When the function is only 2/h-smooth, Theorem 4.2 holds only when the number of near-optimal regions grows polynomially, i.e., when d(2*, C,* 1/h) < ∞. ## 5 Experiments In this section, we empirically compare the proposed VHCT algorithm with the existing **anytime** blackbox optimization algorithms, including T-HOO (the truncated version of HOO), HCT, POO, and PCT (POO + HCT, (Shang et al., 2019)), and Bayesian Optimization algorithm BO (Frazier, 2018) to validate that the proposed variance-adaptive uncertainty quantifier can make the convergence of VHCT faster than non-adaptive algorithms. We run every algorithm for 20 independent trials in each experiment and plot the average cumulative regret with 1-standard deviation error bounds. The experimental details and additional numerical results on other objectives are provided in Appendix E. We use a noisy Garland function as the synthetic objective, which is a typical blackbox objective used by many works such as Shang et al. (2019) and has multiple local minimums and thus very hard to optimize. ![9_image_0.png](9_image_0.png) Figure 3: Cumulative regret of different algorithms on evaluating the Garland function and tuning hyperparameters of training SVM on Landmine data and neural networks on MNIST data. For the real-life experiments, we use hyperparameter tuning of machine learning algorithms as the blackbox objectives. We tune the RBF kernel and the L2 regularization parameters when training Support Vector Machine (SVM) on the Landmine dataset (Liu et al., 2007), and the batch size, the learning rate, and the weight decay when training neural networks on the MNIST dataset (Deng, 2012). As shown in Figure 3, the new choice of SE makes VHCT the fastest algorithm among the existing ones. All the experimental results validate our theoretical claims in Section 4. ## 6 Conclusions The proposed optimum-statistical collaboration framework reveals and utilizes the fundamental interplay of resolution and uncertainty to design more general and efficient black-box optimization algorithms. Our analysis shows that different regret guarantees can be obtained for functions and partitions with different local smoothness assumptions, and algorithms that have different uncertainty quantifiers. Based on the framework, we show that functions that satisfy the general local smoothness property can be optimized and analyzed, which is a much larger class of functions compared with prior works. Also, we propose a new algorithm VHCT that can adapt to different noises and analyze its performance under different assumptions of the smoothness of the function. There are still some limitations of our work. For example, VHCT still needs the prior knowledge of the smoothness function φ(h) to achieve its best performance. Also, the analyses in Theorem 4.1 and 4.2 are smoothness-specific. Therefore, our framework also introduces many interesting future working directions, for example, (1) whether a unified regret upper bound for different φ(h)-local smooth functions could be derived for one particular algorithm; (2) whether the regret bound obtained in Theorem 4.2 is minimaxoptimal for those φ(h); (3) whether there exists an algorithm that is truly smoothness-agnostic, i.e., it does not need the smoothness property of the objective function. ## References Jean-Yves Audibert, Remi Munos, and Csaba Szepesvári. Use of variance estimation in the multi-armed bandit problem. 01 2006. Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. *Theoretical Computer Science*, 410(19):1876–1902, 2009. Peter Auer, Ronald Ortner, and Csaba Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In Nader H. Bshouty and Claudio Gentile (eds.), *Conference on Learning Theory*, pp. 454–468, 2007. Mohammad Gheshlaghi Azar, Alessandro Lazaric, and Emma Brunskill. Online stochastic optimization under correlated bandit feedback. In *International Conference on Machine Learning*, pp. 1557–1565. PMLR, 2014. Peter L. Bartlett, Victor Gabillon, and Michal Valko. A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption. In 30th International Conference on Algorithmic Learning Theory, 2019. Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. *Theoretical Computer Science*, 412(19):1832–1852, 2011a. Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12(46):1655–1695, 2011b. Zhongxiang Dai, Bryan Kian Hsiang Low, and Patrick Jaillet. Federated bayesian optimization via thompson sampling. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 9687–9699. Curran Associates, Inc., 2020. URL https: //proceedings.neurips.cc/paper/2020/file/6dfe08eda761bd321f8a9b239f6f4ec3-Paper.pdf. Li Deng. The mnist database of handwritten digit images for machine learning research. *IEEE Signal* Processing Magazine, 29(6):141–142, 2012. John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. *IEEE Transactions on Information Theory*, 61(5):2788–2806, 2015. doi: 10.1109/TIT.2015.2409256. Peter I. Frazier. A tutorial on bayesian optimization, 2018. URL https://arxiv.org/abs/1807.02811. Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D Sculley. Google vizier: A service for black-box optimization. In *Proceedings of the 23rd ACM SIGKDD international* conference on knowledge discovery and data mining, pp. 1487–1495, 2017. Jean-Bastien Grill, Michal Valko, Remi Munos, and Remi Munos. Black-box optimization of noisy functions with unknown smoothness. In *Advances in Neural Information Processing Systems*. Curran Associates, Inc., 2015. Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, and Barnabas Poczos. Parallelised bayesian optimisation via thompson sampling. In Amos Storkey and Fernando Perez-Cruz (eds.), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pp. 133–142. PMLR, 09–11 Apr 2018. Kenji Kawaguchi, Yu Maruyama, and Xiaoyu Zheng. Global continuous optimization with error bound and fast convergence. *Journal of Artificial Intelligence Research*, 56:153–195, 2016. Dragan Komljenovic, Darragi Messaoudi, Alain Cote, Mohamed Gaha, Luc Vouligny, Stephane Alarie, and Olivier Blancke. Asset management in electrical utilities in the context of business and operational complexity. In *World Congress on Resilience, Reliability and Asset Management*, 07 2019. Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. *Journal of Machine Learning Research*, 18 (185):1–52, 2018. Wenjie Li, Qifan Song, Jean Honorio, and Guang Lin. Federated x-armed bandit, 2022. URL https: //arxiv.org/abs/2205.15268. Wenjie Li, Haoze Li, Jean Honorio, and Qifan Song. Pyxab - a python library for X -armed bandit and online blackbox optimization algorithms, 2023. URL https://arxiv.org/abs/2303.04030. Qiuhua Liu, Xuejun Liao, and Lawrence Carin. Semi-supervised multitask learning. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), *Advances in Neural Information Processing Systems*, volume 20. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/file/ a34bacf839b923770b2c360eefa26748-Paper.pdf. Odalric-Ambrym Maillard. *Mathematics of statistical sequential decision making*. PhD thesis, Université de Lille, Sciences et Technologies, 2019. Andreas Maurer and Massimiliano Pontil. Empirical bernstein bounds and sample variance penalization. arXiv preprint arXiv:0907.3740, 2009. Rémi Munos. Optimistic optimization of a deterministic function without the knowledge of its smoothness. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. Taking the human out of the loop: A review of bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2016. Ohad Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. Journal of Machine Learning Research, 18, 07 2015. Xuedong Shang, Emilie Kaufmann, and Michal Valko. General parallel optimization a without metric. In Algorithmic Learning Theory, pp. 762–788, 2019. Michal Valko, Alexandra Carpentier, and Rémi Munos. Stochastic simultaneous optimistic optimization. In *Proceedings of the 30th International Conference on Machine Learning*, volume 28 of *Proceedings of* Machine Learning Research, pp. 19–27. PMLR, 17–19 Jun 2013. Chi-Hua Wang, Zhanyu Wang, Will Wei Sun, and Guang Cheng. Online regularization for high-dimensional dynamic pricing algorithms. *arXiv preprint arXiv:2007.02470*, 2020. ChiHua Wang, Wenjie Li, Guang Cheng, and Guang Lin. Federated online sparse decision making. *ArXiv*, abs/2202.13448, 2022. ## A Proof Of The General Regret Bound In Theorem 3.1 Proof. We decompose the cumulative regret into two terms that depend on the high probability events {Et} n t=1. Denote the simple regret at each iteration t to be ∆t = f ∗ − rt, then we can perform the following regret decomposition $$R_{n}=\sum_{t=1}^{n}\Delta_{t}=\left(\sum_{t=1}^{n}\Delta_{t}\mathbb{I}_{\mathcal{E}_{t}}\right)+\left(\sum_{t=1}^{n}\Delta_{t}\mathbb{I}_{\mathcal{E}_{t}^{c}}\right)=R_{n}^{\mathcal{E}}+R_{n}^{\mathcal{E}^{c}}$$ $$\leq R_{n}^{\mathcal{E}}+\sum_{t=1}^{n}\mathbb{I}_{\mathcal{E}_{t}^{c}}$$ where we have denoted the first summation term in the second equality Pn t=1 ∆tIEt to be REn and the second summation term Pn t=1 ∆tIE c t to be RE c n . The last inequality is because we have that both f ∗ and rt are bounded by [0, 1], and thus |∆t| ≤ 1. Now note that the instantaneous regret ∆t can be written as $\Delta_t=f^*-r_t=f^*-f\left(x_{h_t,i_t}\right)+f\left(x_{h_t,i_t}\right)-r_t=\Delta_{h_t,i_t}+\widehat{\Delta}_{t_t}$. where we have denoted ∆ht,it = f ∗ − f (xht,it ) and ∆bt = f (xht,it ) − rt. It means that the regret under the events {Et} n t=1 can be decomposed into two terms ReEn and RbEn . $$R_{n}^{\mathcal{E}}=\sum_{t=1}^{n}\Delta_{h_{t},i_{t}}\mathbb{I}_{\mathcal{E}_{t}}+\sum_{t=1}^{n}\widehat{\Delta}_{t}\mathbb{I}_{\mathcal{E}_{t}}\leq\sum_{t=1}^{n}\Delta_{h_{t},i_{t}}\mathbb{I}_{\mathcal{E}_{t}}+\sum_{t=1}^{n}\widehat{\Delta}_{t}=\widetilde{R}_{n}^{\mathcal{E}}+\widehat{R}_{n}^{\mathcal{E}}$$ Note that by the definition of the sequence {∆bt} n t=1, it is a bounded martingale difference sequence since E[∆bt | Ft−1] = 0 and |∆bt| ≤ 1, where Ft is defined to be the filtration generated up to time t. Therefore by Azuma's inequality on this sequence, we get $$\hat{R}_{n}^{\mathcal{E}}\leq\sqrt{2n\log\left(\frac{4n^{2}}{\delta}\right)}$$ with probability 1 − δ/(4n 2). A even better bound can be obtained using the fact that |∆bt| ≤ b2 if b 2. However, RbEn is not a dominating term and using b/2 only improves it in terms of the multiplicative constant. Now the only term left is ReEn and we bound it as follows. ReEn = Xn t=1 ∆ht,it IEt ! ≤ h=1 X i:Th,i(t)6=0 Xn t=1 ∆h,iI(ht,it)=(h,i)IEt H X (n) ≤ X H h=1 X i:Th,i(t)6=0 Xn t=1 aSEh,i(T, t) + H X (n) i:Th,i(t)6=0 Xn t=1 aSEh,i(T, t) X H+1 ≤ a X H h=1 X i:Th,i(t)6=0 Xn t=1 SEh,i(T, t) + a H X (n) i:Th,i(t)6=0 Xn t=1 SEh,i(T, t) X H+1 | {z } (I) | {z } (II) where H is a constant between 0 and H(n) to be tuned later. The second inequality is because when we select Pht,it , we have SEht,it (*T, t*) ≥ OEht by the Optimum-statistical Collaboration Framework. Also, under the event Et, we have ∆ht,it ≤ a max{OEht , SEht,it (*T, t*)}. The first term (I) can be bounded as $$(1)\leq a\sum_{h=1}^{\overline{H}}\sum_{i:T_{h,i}(t)\neq0}\sum_{t=1}^{n}\max_{i:T_{h,i}(t)\neq0}\mathsf{SE}_{h,i}(T,t)\leq a\sum_{h=1}^{\overline{H}}|T_{h}(n)|\sum_{t=1}^{n}\max_{i:T_{h,i}(t)\neq0}\mathsf{SE}_{h,i}(T,t)$$ $$\leq a\sum_{h=1}^{\overline{H}}2N_{h-1}\left(a\mathsf{OE}_{h-1}\right)\sum_{t=1}^{n}\max_{i:T_{h,i}(t)\neq0}\mathsf{SE}_{h,i}(T,t)$$ $$\leq2aC\sum_{h=1}^{\overline{H}}\left(\mathsf{OE}_{h-1}\right)^{-d}\sum_{t=1}^{n}\max_{i:T_{h,i}(t)\neq0}\mathsf{SE}_{h,i}(T,t)$$ where ¯d > d(*a, C,* OEh−1) and d(*a, C,* OEh−1) is the near-optimality dimension with respect to (*a, C,* OEh−1). The third inequality is because we only expand a node into two children, so |Ih(n)| ≤ 2|I+ h−1 (n)| (Note that we do not have any requirements on the number of children of each node, so the binary tree argument here can be easily replaced by a K-nary tree with K ≥ 2). Also since we only select a node (*h, i*) when its parent is already selected enough number of times such that OE ≥ SE at a particular time t0 ≤ n, we have Php,ip satisfies f ∗ − f(xhp,ip ) ≤ aOEhp . By the definition of Nh() in the near-optimality dimension, we have $$|{\mathcal{I}}_{h}(n)|\leq2|{\mathcal{I}}_{h-1}^{+}(n)|\leq2{\mathcal{N}}_{h-1}\left(a0\mathbf{E}_{h-1}\right)$$ and thus the final upper bound for (I). Therefore for any H ∈ [1, H(n)], with probability at least 1 −δ 4n2 , the cumulative regret is upper bounded by We reject is upper bounded by $$R_n=\sum_{t=1}^n\Delta_t=\widehat R_n^{\mathcal E}+\sum_{t=1}^n\mathbb I(\mathcal E_t^{\mathcal E})+\widehat R_n^{\mathcal E}$$ $$\leq\sqrt{2n\log(4n^2/\delta)}+\sum_{t=1}^n\mathbb I(\mathcal E_t^{\mathcal E})+\widehat R_n^{\mathcal E}$$ $$\leq\sqrt{2n\log(4n^2/\delta)}+\sum_{t=1}^n\mathbb I(\mathcal E_t^{\mathcal E})+2aC\sum_{h=1}^{\overline H}\left(0\mathbf E_{h-1}\right)^{-\bar d}\sum_{t=1}^n\max_{i:T_{h,i}(t)\neq0}\mathbf{SE}_{h,i}(T,t)$$ $$\quad+a\sum_{\overline H+1}^{H(n)}\sum_{i:T_{h,i}(t)\neq0}\sum_{t=1}^n\mathbf{SE}_{h,i}(T,t)$$ ## B Notations And Useful Lemmas B.1 Preliminary Notations The notations here follow those in Shang et al. (2019) and Azar et al. (2014) except for those related to the node variance. These notations are needed for the proof of the main theorem. - At each time t, Pht,it denote the node selected by the algorithm where ht is the level and it is the index. - Pt denotes the optimal-path selected at each iteration t - H(t) denotes the maximum depth of the tree at time t. - ∆(t) = 1/ ˜δ(t +) with t + = 2blog tc+1, ˜δ(t) = min{1, c1δ/t} - For any t > 0 and h ∈ [1, H(t)], Ih(t) denotes the set of all nodes at level h at time t. - For any t > 0 and h ∈ [1, H(t)], I + h (t) denotes the subset of Ih(t) that contains only the internal nodes (no leaves). - Ch,i := {t ∈ [1, n] | Pht,it = Ph,i} is the set of time steps when Ph,i is selected. - C + h,i := Ch+1,2i ∪ Ch+1,2i−1 is the set of time steps when the children of Ph,i are selected. - t¯h,i := maxt∈Ch,i t is the last time Ph,i is selected. - eth,i := maxt∈C+ h,i t is the last time when the children of Ph,i is selected. - th,i := min {t : Th,i(t) ≥ τh,i} is the time when Ph,i is expanded. - Vbh,i(t) := 1 Th,i(t) PTh,i(t) s=1 (r s(xh,i) − µbh,i) is the estimate of the variance of the Ph,i node at time t. - Lt denotes all the nodes in the exploration tree at time t - Vmax is the upper bound on the node variance in the tree. Note that if the variance of a node is zero, we can always pull one more round to make it non-zero. Therefore, here we simply assume that the variance Vh,i(t) is larger than a fixed small constant for the clarity of proof, which will not affect our conclusions. ## Algorithm 3 Vhct Algorithm (Complete) 1: **Input:** Smoothness function φ(h), partition P. 2: **Initialize:** Tt = {P0,1,P1,1,P1,2}, U1,1(t) = U1,2(t) = +∞ 3: for t = 1 to n do 4: if t = t + **then** 5: for all nodes Ph,i ∈ Tt do 6: Uh,i(t) = µh,i(t) + φ(h) + SEh,i(*T, t*) 7: **end for** 8: UpdateBackward(Tt, t) 9: **end if** 10: Pht,it = PullUpdate(Tt, t) 11: if Tht,it (t) ≥ τht,it (t) and Pht,it is a leaf **then** 12: Tt = Tt *∪ {P*ht+1,2it−1,Pht+1,2it } 13: Uh+1,2i(t) = Uh+1,2i−1(t) = +∞ 14: **end if** 15: **end for** ## B.2 Useful Lemmas For The Proof Of Theorem 4.1 And Theorem 4.2 The following lemma improves the results by Azar et al. (2014) and Shang et al. (2019). Lemma B.1. *We introduce the following event* Et $$\mathcal{E}_{t}=\left\{\forall\mathcal{P}_{h,i}\in\mathcal{L}_{t},\forall T_{h,i}(t)=1,\cdots,t:|\widehat{\mu}_{h,i}(t)-f\left(x_{h,i}\right)|\leq c\sqrt{\frac{2\widehat{\mathcal{V}}_{h,i}(t)\log(1/\widehat{\delta}(t))}{T_{h,i}(t)}}+\frac{3be^{2}\log(1/\widehat{\delta}(t))}{T_{h,i}(t)}\right\}.$$ where xh,i ∈ Ph,i is the arm corresponding to node Ph,i. If c = 3 and ˜δ(t) = δ 3t then for any fixed t, the event Et holds with probability at least 1 − δ/t7. Algorithm 4 PullUpdate 1: **Input:** a tree Tt, round t 2: **Initialize:** (ht, it) = (0, 1); St = P0,1; T0,1(t) = τ0(t) = 1 ; 3: **while** Pht,it is not a leaf, Tht,it (t) ≥ τht,it (t) do 4: j = argmaxj=0,1{Bht+1,2it−j (t)} 5: (ht, it) = (ht + 1, 2it − j) 6: St = St *∪ {P*ht,it } 7: **end while** 8: Pull xht,it and get reward rt 9: Tht,it (t) = Tht,it (t) + 1 10: Update µbht,it (t), Vbht,it (t) 11: Uht,it (t) = µbht,it (t) + φ(ht) + SEht,it (*T, t*) 12: UpdateBackward(St, t) 13: **Return** Pht,it ## Algorithm 5 Updatebackward 1: **Input:** a tree T , round t 2: for Ph,i ∈ T backward from each leaf of T do 3: if Ph,i is a leaf of T **then** 4: Bh,i(t) = Uh,i(t) 5: **else** 6: Bh,i(t) = min {Uh,i(t), maxj{Bh+1,2i−j (t)}} 7: **end if** 8: Update the threshold τh,i(t) 9: **end for** Proof. Again, Lt denotes all the nodes in the tree. The probability of E c t can be bounded as $$\mathbb{P}\bigg{[}\mathcal{E}_{t}^{*}\bigg{]}\leq\sum_{p_{h,i}\in\mathcal{L}_{\mathcal{L}_{\mathcal{L}_{\mathcal{H}}}(t)}}\sum_{T_{h,i}(t)=1}^{t}\mathbb{P}\bigg{[}\,|\widehat{\mu}_{h,i}(t)-\mu_{h,i}|\geq c\sqrt{\frac{2\widehat{\psi}_{h,i}(t)\log(1/\tilde{\delta}(t))}{T_{h,i}(t)}}+\frac{3bc^{2}\log(1/\tilde{\delta}(t))}{T_{h,i}(t)}\bigg{]}$$ $$\leq\sum_{p_{h,i}\in\mathcal{L}_{\mathcal{L}_{\mathcal{H}}(t)}}\sum_{T_{h,i}(t)=1}^{t}3\exp(-c^{2}\log(1/\tilde{\delta}(t)))$$ $$=3\exp(-c^{2}\log(1/\tilde{\delta}(t)))\cdot t\cdot|\mathcal{L}_{t}|$$ where the second inequality is by taking x = c 2log(1/ ˜δ(t)) in Lemma B.6, we have $$\mathbb{P}\bigg{(}|\widehat{\mu}_{h,i}(t)-f\left(x_{h,i}\right)|\geq c\sqrt{\frac{2\widehat{\psi}_{h,i}(t)\log(1/\widehat{\delta}(t))}{T_{h,i}(t)}}+\frac{3bc^{2}\log(1/\widehat{\delta}(t))}{T_{h,i}(t)}\bigg{)}\leq3\exp(-c^{2}\log(1/\widehat{\delta}(t)))$$ Now note that the number of nodes in the tree is always (loosely) bounded by t since we need at least one pull to expand a node, we know that $$\mathbb{P}\biggl[{\mathcal{E}}_{t}^{c}\biggr]\leq3t^{2}{\tilde{\delta}}(t)^{c^{2}}\leq{\frac{\delta}{t^{7}}}$$ 7 Lemma B.2. Given the parameters c and ˜δ(t) as in Lemma B.1, the regret when the events {Et} fail to hold is bounded as $$\sum_{t=1}^{n}\mathbb{I}({\mathcal{E}}_{t}^{c})\leq{\sqrt{n}}$$ with probability at least 1 − δ/(6n 3) $\square$ Proof. We first split the time horizon n in two phases: the first phase until √n and the rest. Thus the regret bound becomes $$\sum_{t=1}^{n}\mathbb{I}({\mathcal{E}}_{t}^{c})=\sum_{t=1}^{\sqrt{n}}\mathbb{I}({\mathcal{E}}_{t}^{c})+\sum_{t=\sqrt{n}+1}^{n}\mathbb{I}({\mathcal{E}}_{t}^{c})$$ The first term can be easily bounded by √n. Now we bound the second term by showing that the complement of the high-probability event hardly ever happens after t = √n. By Lemma B.1 P $$\mathbb{P}\left[\bigcup_{t={\sqrt{n}}+1}^{n}{\mathcal{E}}_{t}^{c}\right]\leq\sum_{t={\sqrt{n}}+1}^{n}\mathbb{P}\left[{\mathcal{E}}_{t}^{c}\right]\leq\sum_{{\sqrt{n}}+1}^{n}{\frac{\delta}{t^{7}}}\leq\int_{{\sqrt{n}}}^{+\infty}{\frac{\delta}{t^{7}}}d t\leq{\frac{\delta}{6n^{3}}}$$ Therefore we arrive to the conclusion in the lemma. Lemma B.3. At time t under the event Et, for the selected node Pht,it *and its parent* (h p t , ip t )*, we have the* following set of inequalities for any choice of the local smoothness function φ(h) *in Algorithm 3* $$\left\{\begin{aligned}&f^{*}-f(x_{h_{t},i_{t}})\leq3c\sqrt{\frac{2\widehat{\psi}_{h_{t},i_{t}}(t)\log(2/\widehat{\delta}(t))}{T_{h_{t},i_{t}}(t)}}+\frac{9b c^{2}\log(2/\widehat{\delta}(t))}{T_{h_{t},i_{t}}(t)}\\ &f^{*}-f(x_{h_{t}^{p},i_{t}^{p}})\leq3\phi(h_{t}^{p})\end{aligned}\right.$$ Proof. Recall that Pt is the optimal path traversed. Let (h 0, i0) ∈ Pt and (h 00, i00) be the node which immediately follows (h 0, i0) in Pt (i.e., h 00 = h 0 + 1). By the definition of B values, we have the following inequality Bh0,i0 (t) ≤ max (Bh0+1,2i 0−1(t); Bh0+1,2i 0 (t)) = Bh00,i00 (t) where the last equality is from the fact that the algorithm selects the child with the larger B value. By iterating along the inequality until the selected node (ht, it) and its parent (h p t , ip t ) we obtain $$\begin{array}{c}{{\forall\left(h^{\prime},i^{\prime}\right)\in P_{t},\quad B_{h^{\prime},i^{\prime}}(t)\leq B_{h_{t},i_{t}}(t)\leq U_{h_{t},i_{t}}(t),}}\\ {{\forall\left(h^{\prime},i^{\prime}\right)\in P_{t}-\left(h_{t},i_{t}\right),\quad B_{h^{\prime},i^{\prime}}(t)\leq B_{h_{t}^{p},i_{t}^{p}}(t)\leq U_{h_{t}^{p},i_{t}^{p}}(t),}}\end{array}$$ Thus for any node Ph,i ∈ Pt, we have that Uht,it (t) ≥ Bh,i(t). Furthermore, since the root node (0, 1) is a an optimal node in the path Pt. Therefore, there exists at least one node (h ∗, i∗) ∈ Pt which includes the maximizer x ∗ and has the the depth h ∗ ≤ h p t < ht. Thus Uht,it (t) ≥ Bh∗,i∗ (t), Uh p t ,ip t (t) ≥ Bh∗,i∗ (t) Note that by the definition of Uht,it (t), under event Et Uht,it (t) = µbht,it (t) + φ(ht) + c s2Vbht,it (t) log(1/ ˜δ(t+)) Tht,it (t)+ 3bc2log(1/ ˜δ(t +)) Tht,it (t) ≤ f(xht,it ) + φ(ht) + c s2Vbht,it (t) log(1/ ˜δ(t+)) Tht,it (t)+ 3bc2log(1/ ˜δ(t +)) Tht,it (t) (5) + c s2Vbht,it (t) log(1/ ˜δ(t)) Tht,it (t)+ 3bc2log(1/ ˜δ(t)) Tht,it (t) ≤ f(xht,it ) + φ(ht) + 2c s2Vbht,it (t) log(1/ ˜δ(t+)) Tht,it (t)+ 6bc2log(1/ ˜δ(t +)) Tht,it (t) where the first inequality holds by the definition of U and the second one holds by t + ≥ t. Similarly the $$\mathbf{\partial}\cdot(t),$$ parent node satisfies the above inequality $$U_{h_{r}^{\mu},i_{t}^{\mu}}(t)\leq f(x_{h_{r}^{\mu},i_{t}^{\mu}})+\phi(h_{r}^{\mu})+2c\sqrt{\frac{2\hat{V}_{h_{r}^{\mu},i_{t}^{\mu}}(t)\log(1/\hat{\delta}(t^{+}))}{T_{h_{r}^{\mu},i_{t}^{\mu}}(t)}}+\frac{6b c^{2}\log(1/\hat{\delta}(t^{+}))}{T_{h_{r}^{\mu},i_{t}^{\mu}}(t)}$$ By Lemma B.4, we know Uh∗,i∗ (t) ≥ f ∗. If (h ∗, i∗) is a leaf, then by our definition Bh∗,i∗ (t) = Uh∗,i∗ (t) ≥ f ∗. Otherwise, there exists a leaf (hx, ix) containing the maximum point which has (h ∗, i∗) as its ancestor. Therefore we know that f ∗ ≤ Bhx,ix ≤ Bh∗,i∗ , so Bh∗,i∗ is always an upper bound for f ∗. Now we know that ∆ht,it (t) := f ∗ − f(xht,it ) ≤ φ(ht) + 2c s2Vbht,it (t) log(1/ ˜δ(t+)) Tht,it (t)+ 6bc2log(1/ ˜δ(t +)) Tht,it (t) ∆h p t ,ip t (t) := f ∗ − f(xh p t ,ip t ) ≤ φ(h p t ) + 2c vuut 2Vbh p t ,ip t (t) log(1/ ˜δ(t+)) Th p t ,ip t (t)+ 6bc2log(1/ ˜δ(t +)) Th p t ,ip t (t) Recall that the algorithm selects a node only when Tht,it (t) < τht,it (t) and thus the statistical uncertainty is large, i.e., φ(ht) ≤ SEht,it (*T, t*), and the choice of τht,it (t), we get $$\Delta_{h_{t},i_{t}}(t)\leq3c\sqrt{\frac{2\widehat{\psi}_{h_{t},i_{t}}(t)\log(1/\hat{\beta}(t^{+}))}{T_{h_{t},i_{t}}(t)}}+\frac{9c^{2}\log(1/\hat{\beta}(t^{+}))}{T_{h_{t},i_{t}}(t)}$$ $$\leq3c\sqrt{\frac{2\widehat{\psi}_{h_{t},i_{t}}(t)\log(2/\hat{\beta}(t))}{T_{h_{t},i_{t}}(t)}}+\frac{9c^{2}\log(2/\hat{\beta}(t))}{T_{h_{t},i_{t}}(t)}$$ where we used the fact $t^{+}\leq2t$ for any $t$. For the parent $(h^{\prime}_{t},i^{\prime}_{t})$, since $T_{h^{\prime}_{t},i^{\prime}}(t)\geq r_{h^{\prime}_{t},i^{\prime}}(t)$ and thus φ(h p t ) ≥ SEh p t ,ip t (*T, t*), we know that $$\Delta_{h_{t}^{r},i_{t}^{p}}(t)\leq\phi(h_{t}^{p})+2c\sqrt{\frac{2\hat{V}_{h_{t}^{r},i_{t}^{p}}(t)\log(1/\hat{\delta}(t+))}{\tau_{h_{t}^{r},i_{t}^{p}}(t)}}+\frac{6b c^{2}\log(1/\hat{\delta}(t+))}{\tau_{h_{t}^{r},i_{t}^{p}}(t)}\leq3\phi(h_{t}^{p})$$ The above inequality implies that the selected node Pht,it must have a 3φ(h p t ) optimal parent under Et. Lemma B.4. (U **Upper Bounds** f ∗) Under event Et*, we have that for any optimal node* (h ∗, i?) *and any* choice of the smoothness function φ(h) in Algorithm 3, Uh∗,i∗ (t) *is an upper bound on* f ? Proof. The proof here is similar to that of Lemma 5 in Shang et al. (2019). Since t Uh∗,i∗ (t) = µbh∗,i∗ (t) + φ(h ∗) + c s2Vbh∗,i∗ (t) log(1/ ˜δ(t+)) Th∗,i∗ (t)+ 3bc2log(1/ ˜δ(t +)) Th∗,i∗ (t) ≥ µbh∗,i∗ (t) + φ(h ∗) + c s2Vbh∗,i∗ (t) log(1/ ˜δ(t)) Th∗,i∗ (t)+ 3bc2log(1/ ˜δ(t)) Th∗,i∗ (t) ≥ φ(h ∗) + f(xh∗,i∗ ) + ≥ t, we have where the last inequality is by the event Et, $$\widehat{\mu}_{\lambda^{*},t^{*}}(t)+c\sqrt{\frac{2\widehat{V}_{\lambda^{*},t^{*}}(t)\log(1/\widehat{\lambda}(t))}{T_{\lambda^{*},t^{*}}(t)}}+\frac{3bc^{2}\log(1/\widehat{\lambda}(t))}{T_{\lambda^{*},t^{*}}(t)}\geq f(x_{\lambda^{*},t^{*}})\qed$$ **Lemma B.5**.: **(Details for Solving $\tau$)** _For any choice of $\phi(h)$, the solution $\gamma_{\lambda_{i}}(t)$ to the equation $\phi(h)=\frac{\gamma_{\lambda_{i}}(t)}{\gamma_{\lambda_{i}}(t)}$._ SEh,i(T, t) for the proposed VHCT *algorithm in Section 4 is* _sea voict algorithm in Section 4 is_ $$\tau_{h,i}(t)=\left(1+\sqrt{1+\frac{3b\phi(h)}{\widehat{\mathbb{V}}_{h,i}(t)/2}}\right)^{2}\frac{c^{2}}{2\phi(h)^{2}}\widehat{\mathbb{V}}_{h,i}(t)\log(1/\hat{\delta}(t^{+}))\tag{6}$$ Proof. First, we define the following variables for ease of notatioins $$\left\{\begin{array}{l l}{{A:-\phi(n)}}\\ {{}}\\ {{B:=c\sqrt{2\hat{\nabla}_{h,i}(t)\log(1/\hat{\delta}(t^{+}))}}}\\ {{}}\\ {{C:=3b c^{2}\log(1/\hat{\delta}(t^{+}))}}\end{array}\right.$$ $$\left[\begin{array}{l}{{A:=\phi(h)}}\end{array}\right]$$ Therefore the original equation φ(h) = SEh,i(*T, t*) can be written as, $$A=B\cdot{\frac{1}{\sqrt{\tau_{h,i}(t)}}}+{\frac{C}{\tau_{h,i}(t)}}$$ Note that the above is a quadratic equation of τh,i(t), therefore we arrive at the solution $$\tau_{h,i}(t)=\left(1+\sqrt{1+\frac{3bA}{\widehat{\psi}_{h,i}(t)/2}}\right)^{2}\frac{c^{2}}{2A^{2}}\,\widehat{\psi}_{h,i}(t)\log(1/\tilde{\delta}(t^{+}))$$ ## B.3 Supporting Lemmas Lemma B.6. Let X1, . . . , Xt *be i.i.d. random variables taking their values in* [µ− b 2 , µ+ b 2 ]*, where* µ = E[Xi]. Let X¯t, Vt *be the mean and variance of* {Xi}i=1:t. For any t ∈ N and x > 0, *with probability at least* 1−3e −x, we have $$\left|{\bar{X}}_{t}-\mu\right|\leq{\sqrt{\frac{2V_{t}x}{t}}}+{\frac{3b x}{t}}$$ Proof. This lemma follows the results in Lemma B.7. Note that X1, X2*, . . . , X*t ∈ [µ − b 2 , µ + b 2 ], we can define Yi = Xi − (µ − b 2 ) then Y1, Y2, . . . , Yt ∈ [0, b] and they are *i.i.d* variables. Therefore for any t ∈ N and x > 0, with probability at least 1 − 3e −x, we have $$\left|{\bar{Y}}_{t}-{\frac{b}{2}}\right|\leq{\sqrt{\frac{2V_{t}(Y)x}{t}}}+{\frac{3b x}{t}}$$ Since the variance of Yiis the same as the variance of Xi. Therefore we have $\left|\bar{X}_{t}-\mu\right|\leq\sqrt{\frac{2V_{t}(X)x}{t}}+\frac{3bx}{t}$ Lemma B.7. **(Bernstein Inequality, Theorem 1 in Audibert et al. (2009))** Let X1, . . . , Xt be i.i.d. random variables taking their values in [0, b]. Let µ = E [X1] be their common expected value. Consider the empirical mean X¯t and variance Vt *defined respectively* by $${\bar{X}}_{t}={\frac{\sum_{i=1}^{t}X_{i}}{t}}\ \ \ \ a n d\ \ \ \ V_{t}={\frac{\sum_{i=1}^{t}\left(X_{i}-{\bar{X}}_{t}\right)^{2}}{t}}$$ Then, for any t ∈ N and x > 0, *with probability at least* 1 − 3e −x $$\left|{\bar{X}}_{t}-\mu\right|\leq{\sqrt{\frac{2V_{t}x}{t}}}+{\frac{3b x}{t}}$$ Furthermore, introducing $$\beta(x,t)=3\operatorname*{inf}_{1<\alpha\leq3}\left({\frac{\log t}{\log\alpha}}\wedge t\right)e^{-x/\alpha}$$ where u ∧ v denotes the minimum of u and v, we have for any t ∈ N and x > 0 *with probability at least* 1 − β(*x, t*) $$\left|{\bar{X}}_{s}-\mu\right|\leq{\sqrt{\frac{2V_{s}x}{s}}}+{\frac{3b x}{s}}$$ holds simultaneously for s ∈ {1, 2*, . . . , t*}. ## C Proof Of Theorem 4.1 C.1 The Choice Of Τh,I(T). When φ(h) = ν1ρ h, we have the following choice of τh,i(t) by Lemma B.5. τh,i(t) = 1 + s1 +3bν1ρ h Vbh,i(t)/2 2(c ν1ρ h ) 2(Vbh,i(t)/2) · log(1/ ˜δ(t +)) = 2 + 2s1 +3bν1ρ h Vbh,i(t)/2 +3bν1ρ h Vbh,i(t)/2 !c 2log 1/ ˜δ (t +)(Vbh,i(t)/2) ν 2 1 ρ −2h (7) = Vbh,i(t) + qVbh,i(t) 2 + 6bν1ρ hVbh,i(t) + 3bν1ρ h c 2log 1/ ˜δ (t +) ν 2 1 ρ −2h Since variance is non-negative, we have τh,i(t) ≥ 3bc2 ν1 ρ −h. When the variance term Vbh,i(t) is small, the other two terms are small. We also have the following upper bound for τh,i(t). $$\tau_{h,i}(t)\leq D_{1}^{2}\frac{c^{2}\log\left(1/\delta\left(t^{+}\right)\right)}{\nu_{1}^{2}}\rho^{-2h}+3b\nu_{1}\frac{c^{2}\log\left(1/\delta\left(t^{+}\right)\right)}{\nu_{1}^{2}}\rho^{-h}$$ where we define the constant D2 1 = $$\left(V_{\mathrm{max}}+2{\sqrt{V_{\mathrm{max}}^{2}+6b V_{\mathrm{max}}\nu_{1}}}\right)={\mathcal{O}}(V_{\mathrm{max}}).$$ ## C.2 Main Proof This part of the proof follows Theorem 3.1. Let H be an integer that satisfies 1 ≤ *H < H*(n) to be decided later. $$\begin{array}{l}{{\widetilde{R}_{n}^{\mathcal{E}}=\sum_{t=1}^{n}\Delta_{h_{t},i_{t}}\mathbb{I}_{\mathcal{E}_{t}}\leq\sum_{h=0}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\sum_{t=1}^{n}\Delta_{h,i}\mathbb{I}_{(h_{t},i_{t})=(h,i)}\mathbb{I}_{\mathcal{E}_{t}}}}\\ {{\leq2a C\sum_{h=1}^{\overline{{H}}}\underbrace{(0\mathbb{E}_{h-1})^{-i}\sum_{t=1}^{n}\operatorname*{max}_{(a)}\ \operatorname*{SE}_{h,i}(T,t)}_{(a)}+a\underbrace{\sum_{\overline{{H}}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\sum_{t=1}^{n}\operatorname*{SE}_{h,i}(T,t)}_{(b)}}}\end{array}$$ By Lemma B.3, we have a = 3 and thus the following inequality $$\widetilde{R}_{n}^{\varepsilon}\leq\underbrace{\sum_{k=0}^{\overline{H}}2C_{P}{}^{-d(h-1)}\left\{6c\sqrt{2\big{(}\max_{\{\varepsilon\in\mathcal{Z}_{h}(n)\}}\{r_{h,\varepsilon}(n)\}\big{)}V_{\max}\log(2/\delta(n))+9bc^{2}\log(2/\delta(n))\log\big{(}\max_{\{\varepsilon\in\mathcal{Z}_{h}(n)}\{r_{h,\varepsilon}(n)\}\big{)}\big{)}}\right\}}_{(n)}$$ $$+\underbrace{\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{Z}_{h}(n)}\left\{6c\sqrt{2T_{h,\varepsilon}(n)V_{\max}\log(2/\delta(\tilde{t}_{h,i}))+9bc^{2}\log(2/\delta(\tilde{t}_{h,i}))\log T_{h,\varepsilon}(n)}\right\}}_{(n)}$$ Now we bound the two terms (a) and (b) of ReEn separately. (a) ≤ X H h=0 2Cρ−d(h−1) (6c r2( max i∈Ih(n) {τh,i(n)})Vmax log(2/ ˜δ(n)) + 9bc2log(2/ ˜δ(n)) log( max i∈Ih(n) {τh,i(n)}) ) h=0 2CD1cρdlog(2/ ˜δ(n)) ν16cp2Vmaxρ −h(d+1) + 2C √3bν1cρdlog(2/ ˜δ(n)) ν16cp2Vmaxρ −h(d+ 12 ) ≤ X H + 18Cρ−d(h−1)bc2log(2/ ˜δ(n)) log log(2/ ˜δ(n))+ 2 log(D1c ν1 ) − 2h log ρ ≤ 12√2VmaxCD1c 2ρ dlog(2/ ˜δ(n)) ν1(1 − ρ)ρ −H(d+1) + 12√6bν1VmaxCc2ρ dlog(2/ ˜δ(n)) ν1(1 − ρ)ρ −H(d+ 12 ) + 18Cbc2ρ 2dlog(2/ ˜δ(n)) log log(2/ ˜δ(n))+ 2 log( D1c ν1 ) 1 − ρρ −Hd + 36Cbc2log(2/ ˜δ(n)) log( 1ρ )1 (1 − ρ d) 2 (ρ dH − ρ 2dH − ρ 2d)ρ −dH + ρ 2d where in the second inequality we used the upper bound of τh(t) in Section B. The last inequality is by the formula for the sum of a geometric sequence and the following result. $$\begin{split}\sum_{h=0}^{\overline{{{H}}}}h\rho^{-d(h-1)}&=\frac{1}{1-\rho^{d}}\left(\overline{{{H}}}\rho^{-d(\overline{{{H}}}-1)}-\sum_{h=-1}^{\overline{{{H}}}-2}\rho^{-d h}\right)\\ &=\frac{1}{(1-\rho^{d})^{2}}\left((\rho^{d}\overline{{{H}}}-\rho^{2d}\overline{{{H}}}-\rho^{2d})\rho^{-d\overline{{{H}}}}+\rho^{2d}\right)\\ &\leq\frac{1}{(1-\rho)^{2}}\left((\rho^{d}\overline{{{H}}}-\rho^{2d}\overline{{{H}}}-\rho^{2d})\rho^{-d\overline{{{H}}}}+\rho^{2d}\right)\end{split}$$ Next we bound the second term (b) in the summation. By the Cauchy-Schwarz Inequality, $$(b)\leq\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{L}_{h}(n)}\left\{6c\sqrt{2T_{h,i}(n)V_{\max}\log(2/\tilde{\delta}(\bar{t}_{h,i}))}+9bc^{2}\log(2/\tilde{\delta}(\bar{t}_{h,i}))\log T_{h,i}(n)\right\}$$ $$\leq\sqrt{n\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{L}_{h}(n)}\log(2/\tilde{\delta}(\bar{t}_{h,i}))+\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{L}_{h}(n)}9bc^{2}\log(2/\tilde{\delta}(\bar{t}_{h,i}))\log T_{h,i}(n)}$$ Recall that our algorithm only selects a node when Th,i(t) ≥ τh,i(t) for its parent, i.e. when the number of pulls is larger than the threshold and the algorithm finds the node by passing its parent. Therefore we have $$T_{h,i}(\widetilde{t}_{h,i})\geq\tau_{h,i}(\widetilde{t}_{h,i}),\forall h\in[0,H(n)-1],i\in\mathcal{I}_{h}(n)^{+}$$ So we have the following set of inequalities. n = H X (n) h=0 X i∈Ih(n) Th,i(n) ≥ H X (n)−1 h=0 X i∈I+ h (n) Th,i(n) ≥ H X (n)−1 h=0 X i∈I+ h (n) Th,i(eth,i) ≥ H X (n)−1 h=0 X i∈I+ h (n) τh,i(eth,i) i∈I+ h (n) log 1/ ˜δ t˜+ h,i ν 2 1 ≥ H X (n)−1 i∈I+ h (n) c 2log 1/ ˜δ (t +) ν 2 1 ρ −2h ≥ c 2ρ −2H H X (n)−1 X X h=H h=H = c 2ρ −2H H X (n)−1 i∈I+ h (n) log 1/ ˜δmax[t¯h+1,2i−1,t¯h+1,2i] + ν 2 1 X h=H i∈I+ h (n) max[log 1/ ˜δ t¯+ h+1,2i−1 , log 1/ ˜δ t¯+ h+1,2i ] = c 2ρ −2H H X (n)−1 X ν 2 1 h=H i∈I+ h (n) log 1/ ˜δ t¯+ h+1,2i−1 + log 1/ ˜δ t¯+ h+1,2i ≥ c 2ρ −2H H X (n)−1 X 2ν 2 1 h=H i∈I+ h−1 (n) log 1/ ˜δ t¯+ h,2i−1 + log 1/ ˜δ t¯+ h,2i = c 2ρ −2H H X (n) X 2ν 2 1 h=H+1 = c 2ρ −2H 2ν 2 1 H X (n) i∈Ih(n) log 1/ ˜δ t¯+ h,i X h=H+1 Note that in the second equality, we have used the definition of t˜h,i, t˜h,i = max(t¯h,i,t¯h,i). Moreover, the third equality relies on the following fact $\log\left(1/\delta\left(\max\left\{\bar{t}_{h+1,2i-1},\bar{t}_{h+1,2i}\right\}^{+}\right)\right)=\max\left\{\log\left(1/\delta\left(\bar{t}_{h+1,2i-1}^{+}\right)\right),\log\left(1/\delta\left(\bar{t}_{h+1,2i}^{+}\right)\right)\right\}$. The next equality is just by change of variables h = h + 1. In the last inequality, we used the fact that for any h > 0, I + h (n) covers all the internal nodes at level h, so the set of the children of I + h (n) covers Ih+1(n). In other words, we have proved that $$\sum_{h=\overline{{{H}}}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\log\left(1/\tilde{\delta}\left(\bar{t}_{h,i}^{+}\right)\right)\leq\frac{2n\nu_{1}^{2}\rho^{2\overline{{{H}}}}}{c^{2}\epsilon}$$ Therefore we have $$(b)\leq2n{\frac{\nu_{1}\rho^{\overline{{H}}}}{c{\sqrt{\epsilon}}}}+{\frac{18b\nu_{1}^{2}\rho^{2\overline{{H}}}n\log n}{\epsilon}}$$ If we let the dominating terms in (a) and (b) be equal, then $$\rho^{\overline{{{H}}}}=\left(\frac{12\sqrt{2V_{\mathrm{max}}}C D_{1}c^{3}\sqrt{\epsilon}\rho^{d}\log(2/\tilde{\delta}(n))}{2\nu_{1}^{2}n(1-\rho)}\right)^{\frac{1}{d+2}}$$ Substitute the above choice of ρ H into the original inequality, then the dominating terms in (a) and (b) reduce to Oe(C1V 1 d+2 max n d+1 d+2 ) because D1 = Θ(√Vmax), where C1 is a constant that does not depend on the variance. The non-dominating terms are all Oe(n 2d+1 2d+4 ), we get $$\widetilde{R}_{n}^{\xi}\leq(a)+(b)\leq C_{1}V_{\mathrm{max}}^{\frac{1}{d+2}}n^{\frac{d+1}{d+2}}(\log\frac{n}{\delta})^{\frac{1}{d+2}}+C_{2}n^{\frac{2d+1}{2d+4}}\log\frac{n}{\delta}$$ (8) $$\begin{array}{l}\small\left(\begin{array}{l}\small8\\ \small\end{array}\right)\end{array}$$ . where C2 is another constant. Finally, combining all the results in Theorem 3.1, Lemma B.2, Eqn. (8), we can obtain the upper bound $$\begin{array}{c}{{\widetilde{R}_{n}^{\mathrm{\tiny{WRT}}}\leq\sqrt{n}+\sqrt{2n\log(\frac{4n^{2}}{\delta})}+C_{1}V_{\mathrm{max}}^{\frac{4}{2+2}}n^{\frac{4+1}{4+2}}(\log\frac{n}{\delta})^{\frac{1}{4+2}}+C_{2}n^{\frac{2d+1}{2d+4}}\log\frac{n}{\delta}}}\\ {{\leq2\sqrt{2n\log(\frac{4n^{2}}{\delta})}+C_{1}V_{\mathrm{max}}^{\frac{4}{2+2}}n^{\frac{d+1}{4+2}}(\log\frac{n}{\delta})^{\frac{1}{4+2}}+C_{2}n^{\frac{2d+1}{2d+4}}\log\frac{n}{\delta}}}\end{array}$$ $\square$ The expectation in the theorem can be shown by directly taking δ = 1/n as in Theorem 3.1. ## D Proof Of Theorem 4.2 D.1 Choice Of The Threshold By Lemma B.5, we get that when φ(h) = 1/h, we can solve for τh as follows. $$\tau_{h,i}(t)=\left(1+\sqrt{1+\frac{3b/h}{\widehat{\mathbb{V}}_{h,i}(t)(t)/2}}\right)^{2}c^{2}h^{2}(\widehat{\mathbb{V}}_{h,i}(t)(t)/2)\cdot\log(1/\tilde{\delta}(t^{+}))$$ $$=\left(\widehat{\mathbb{V}}_{h,i}(t)+\sqrt{\widehat{\mathbb{V}}_{h,i}(t)^{2}+6b/h\widehat{\mathbb{V}}_{h,i}(t)}+3b/h\right)c^{2}\log\left(1/\tilde{\delta}\left(t^{+}\right)\right)h^{2}$$ $$\leq D_{1}^{2}c^{2}\log\left(1/\tilde{\delta}\left(t^{+}\right)\right)h^{2}+3bc^{2}\log\left(1/\tilde{\delta}\left(t^{+}\right)\right)h$$ where we again define a new constant D2 1 = $$2\sqrt{V_{\mathrm{max}}^{2}+6b V_{\mathrm{max}}}\Big)=\Theta(V_{\mathrm{max}}).$$ ## D.2 Main Proof The failing confidence interval part can be easily done as in Section C since Et is also a high-probability event at each time t. We start from the bound on ReE. By Theorem 3.1 and similar to what we have done in Theorem 4.1, we decompose ReE over different depths. Let 1 ≤ *H < H*(n) be an integer to be decided later, then we have $$\begin{array}{l}{{\widetilde{R}_{n}^{\mathcal{E}}=\sum_{t=1}^{n}\Delta_{h_{t},i_{t}}\mathbb{I}_{\mathcal{E}_{t}}\leq\sum_{h=0}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\sum_{t=1}^{n}\Delta_{h,i}\mathbb{I}_{(h_{t},i_{t})=(h,i)}\mathbb{I}_{\mathcal{E}_{t}}}}\\ {{\leq2a C\sum_{h=1}^{\mathcal{H}}\underbrace{(0\mathbb{E}_{h-1})^{-d}\sum_{t=1}^{n}\operatorname*{max}_{i\in\mathcal{I}_{h}(n)}\operatorname*{\mathsf{SE}}_{h,i}(T,t)}_{(a)}+a\underbrace{\sum_{H+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\sum_{t=1}^{n}\operatorname*{\mathsf{SE}}_{h,i}(T,t)}_{(b)}}}\end{array}$$ Now we bound the two terms (a) and (b) of ReEn separately. By Lemma B.3, we have a = 3. (a) ≤ X H h=0 2Chd¯6c r2 max i{τh,i(n)}Vmax log(2/ ˜δ(n)) + 9bc2log(2/ ˜δ(n)) log( max i∈Ih(n) {τh,i(n)}) ≤ X H h=0 12CD1c 2h d¯+1q2log 1/ ˜δ (n)Vmax log(2/ ˜δ(n)) + X H h=0 36bCc2h d¯+ 12 q2log 1/ ˜δ (n)Vmax log(2/ ˜δ(n)) + 18Cbc2h d¯log(2/ ˜δ(n)) log D2 1 c 2log 2/ ˜δ (n)h 2 ≤ X H h=0 12aCD1c 2h d¯+1 log(2/ ˜δ(n))p2Vmax + X H h=0 36bCc2h d¯+ 12 log(2/ ˜δ(n))p2Vmax + X H h=0 18Cbc2h d¯log(2/ ˜δ(n)) log D2 1 c 2log 2/ ˜δ (n)n 2 ≤ 12p2VmaxaCD1c 2log(2/ ˜δ(n))X H h=0 h d¯+1 + 36bCc2log(2/ ˜δ(n))p2VmaxX H h=0 h d¯+ 12 + 18Cbc2log(2/ ˜δ(n)) log D2 1 c 2log 2/ ˜δ (n)n 2X H h=0 h d¯ d¯+1 d¯+ 12 ≤ 12p2VmaxaCD1c 2log(2/ ˜δ(n)) h=0 h + 36bCc2log(2/ ˜δ(n))p2Vmax h=0 h X H X H d¯ + 18Cbc2log(2/ ˜δ(n)) log D2 1 c 2log 2/ ˜δ (n)n 2 h=0 h X H ≤ 12p2VmaxaCD1c 2log(2/ ˜δ(n)) H(H + 1) 2 d¯+1+ 36bCc2log(2/ ˜δ(n))p2Vmax H(H + 1) 2 d¯+ 12 + 18Cbc2log(2/ ˜δ(n)) log D2 1 c 2log 2/ ˜δ (n)n 2H(H + 1) 2 d¯ Next we bound the second term (b) in the summation. By the Cauchy-Schwarz Inequality, $$(b)\leq\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\left\{6\mathrm{v}\sqrt{2T_{h,i}(n)V_{\max}\log(2/\tilde{\delta}(\tilde{t}_{h,i}))}+9bc^{2}\log(2/\tilde{\delta}(\tilde{t}_{h,i}))\log T_{h,i}(n)\right\}$$ $$\leq\sqrt{n}\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\log(2/\tilde{\delta}(\tilde{t}_{h,i}))+\sum_{h=\overline{H}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}9bc^{2}\log(2/\tilde{\delta}(\tilde{t}_{h,i}))\log T_{h,i}(n)$$ Recall that our algorithm only selects a node when Th,i(t) ≥ τh,i(t) for its parent, i.e. when the number of pulls is larger than the threshold and the algorithm finds the node by passing its parent. Therefore we have $$T_{h,i}(\widetilde{t}_{h,i})\geq\tau_{h,i}(\widetilde{t}_{h,i}),\forall h\in[0,H(n)-1],i\in\mathcal{I}_{h}(n)^{+}$$ So we have the following set of inequalities. n = H X (n) h=0 X i∈Ih(n) Th,i(n) ≥ H X (n)−1 h=0 X i∈I+ h (n) Th,i(n) ≥ H X (n)−1 h=0 X i∈I+ h (n) Th,i(eth,i) ≥ H X (n)−1 h=0 X i∈I+ h (n) τh,i(eth,i) ≥ H X (n)−1 i∈I+ h (n) c 2log 1/ ˜δt + h2 ≥ c 2H 2 H X (n)−1 i∈I+ h (n) log 1/ ˜δ t˜+ h,i X X h=H h=H = c 2H 2 H X (n)−1 i∈I+ h (n) log 1/ ˜δmax[t¯h+1,2i−1,t¯h+1,2i] + X h=H = c 2H 2 H X (n)−1 i∈I+ h (n) max[log 1/ ˜δ t¯+ h+1,2i−1 , log 1/ ˜δ t¯+ h+1,2i ] X h=H H X (n)−1 ≥ c 2H 2 2 i∈I+ h (n) log 1/ ˜δ t¯+ h+1,2i−1 + log 1/ ˜δ t¯+ h+1,2i X h=H = c 2H 2 2 H X (n) i∈I+ h−1 (n) log 1/ ˜δ t¯+ h,2i−1 + log 1/ ˜δ t¯+ h,2i X h=H+1 = c 2H 2 2 H X (n) i∈Ih(n) log 1/ ˜δ t¯+ h,i X h=H+1 Note that in the second equality, we have used the definition of t˜h,i = max(t¯h,i,t¯h,i). Moreover, the third equality relies on the following fact $$\log\left(1/\tilde{\delta}\left(\max\left\{\tilde{t}_{h+1,2i-1},\tilde{t}_{h+1,2i}\right\}^{+}\right)\right)=\max\left\{\log\left(1/\tilde{\delta}\left(\tilde{t}_{h+1,2i-1}^{+}\right)\right),\log\left(1/\tilde{\delta}\left(\tilde{t}_{h+1,2i}^{+}\right)\right)\right\}.$$ The next equality is just by change of variables h = h + 1. In the last inequality, we used the fact that for any h > 0, I + h (n) covers all the internal nodes at level h, so the set of the children of I + h (n) covers Ih+1(n). In other words, we have proved that $$\sum_{h=\overline{{{H}}}+1}^{H(n)}\sum_{i\in\mathcal{I}_{h}(n)}\log\left(1/\tilde{\delta}\left(\overline{{{t}}}_{h,i}^{+}\right)\right)\leq\frac{2n}{c^{2}\epsilon}\overline{{{H}}}^{-2}$$ Therefore we have the following inequality $$(b)\leq\frac{2n}{c\sqrt{\epsilon}}\overline{{{H}}}^{-1}+\frac{18b n\log n}{\epsilon}\overline{{{H}}}^{-2}$$ If we let the dominating terms in (a) and (b) be equal, then $$\overline{{{H}}}^{-1}=\left(\frac{12\sqrt{2V_{\mathrm{max}}}C D_{1}c^{3}\sqrt{\epsilon}\log(2/\tilde{\delta}(n))}{2n}\right)^{\frac{1}{2d+3}}$$ Substitute the above choice of H into the original inequality, then the dominating terms in (a) and (b) reduce to Oe(C1V 1 2d¯+3 max n 2d¯+2 2d¯+3 ) because D1 = Θ(√Vmax), where C1 is a constant that does not depend on the variance. The non-dominating terms are all Oe(n 2d¯+1 2d¯+3 ), we get $$\widetilde{R}_{n}^{\xi}\leq(a)+(b)\leq C_{1}V_{\mathrm{max}}^{\frac{1}{2d+3}}n^{\frac{2d+2}{2d+3}}(\log{\frac{n}{\delta}})^{\frac{1}{2d+3}}+C_{2}n^{\frac{2d+1}{2d+3}}\log{\frac{n}{\delta}}$$ $$({\mathfrak{g}})$$ where C2 is another constant. Finally, combining all the results in Theorem 3.1, Lemma B.2, Eqn. (9), we can obtain the upper bound $$\begin{array}{l}{{\widetilde{R}_{n}^{\mathrm{inner}}\leq\sqrt{n}+\sqrt{2n\log(\frac{4n^{2}}{\delta})}+C_{1}V_{\mathrm{max}}^{\frac{1}{2d+4}}n^{\frac{2d+2}{2d+3}}(\log\frac{n}{\delta})^{\frac{1}{2d+3}}+C_{2}n^{\frac{2d+1}{2d+3}}\log\frac{n}{\delta}}}\\ {{\leq2\sqrt{2n\log(\frac{4n^{2}}{\delta})}+C_{1}V_{\mathrm{max}}^{\frac{1}{2d+4}}n^{\frac{2d+2}{2d+3}}(\log\frac{n}{\delta})^{\frac{1}{2d+3}}+C_{2}n^{\frac{2d+1}{2d+3}}\log\frac{n}{\delta}}}\end{array}$$ $\square$ The expectation in the theorem can be shown by directly taking δ = 1/n as in Theorem 3.1. ## E Experiment Details In this appendix, we provide more experiment details and additional experiments as a supplement to Section 5. For the implementation of all the algorithms, we utilize the publicly available code of POO and HOO at the link https://rdrr.io/cran/OOR/man/POO.html and the PyXAB library (Li et al., 2023). For all the experiments in Section 5 and Appendix E.2, we have used a low-noise setting where t ∼ Uniform(−0.05, 0.05) to verify the advantage of VHCT. ## E.1 Experimental Settings Remarks on Bayesian Optimization Algorithm. For the implementation of the Bayesian Optimization algorithm BO, we have used the publicly available code at https://github.com/SheffieldML/GPyOpt, which is also recommended by Frazier (2018). For the acquisition function and the prior of BO, we have used the default choices in the aforementioned package. We emphasize that BO is much more computationally expensive compared with the other algorithms due to the high computational complexity of Gaussian Process. The other algorithms in this paper (HOO, HCT, VHCT, etc.) take at most minutes to reach the endpoint of every experiment, where as BO typically needs a few days to finish. Moreover, the performance (cumulative regret) of BO is not comparable with our algorithm. Synthetic Experiments. In Figure 4, we provide the performances of the different algorithms (VHCT, HCT, T-HOO) that need the smoothness parameters under different parameter settings ρ ∈ {0.25, 0.5, 0.75}. Here, we choose to plot an equivalent notion, the average regret Rt/t instead of the cumulative regret Rt because some curves have very large cumulative regrets, so it would be hard to compare them with the other curves. In general, ρ = 0.75 or ρ = 0.5 are good choices for VHCT and HCT, and ρ = 0.25 is a good choice for T-HOO. Therefore, we use these parameter settings in the real-life experiments and the additional experiments in the next subsection. For POO and PCT, we follow Grill et al. (2015) and use ρmax = 0.9. The unknown bound b is set to be b = 1 for all the algorithms used in the experiments. Landmine Dataset. The landmine dataset contains 29 landmine fields, with each field consisting of different number of positions. Each position has some features extracted from radar images, and machine learning models (like SVM) are used to learn the features and detect whether a certain position has landmine or not. The dataset is available at http://www.ee.duke.edu/~lcarin/LandmineData.zip. We have followed the open-source implementation at https://github.com/daizhongxiang/Federated_Bayesian_ Optimization to process the data and train the SVM model. We tune two hyper-parameters when training SVM, the RBF kernel parameter from [0.01, 10], and the L2 regularization from [1e-4, 10]. The model is trained on the training set with the selected hyper-parameter and then evaluated on the testing set. The testing AUC-ROC score is the blackbox objective to be optimized. MNIST Dataset and Neural Network. The MNIST dataset can be downloaded from http://yann. lecun.com/exdb/mnist/ and is one of the most famous image classification datasets. We have used stochastic gradient descent (SGD) to train a two-layer feed-forward neural network on the training images, and the ![26_image_0.png](26_image_0.png) Figure 4: Best parameters on the Garland, DoubleSine and Rastrigin function objective is the validation accuracy on the testing images. We have used ReLU activation and the hidden layer has 64 units. We tune three different hyper-parameters of SGD to find the best hyper-parameter, specifically, the mini batch-size from [1, 100], the learning rate from [1e-6, 1], and the weight decay from [1e-6, 5e-1]. ## E.2 Additional Experiments In this subsection, we provide additional experiments on some other nonconvex optimization evaluation benchmarks. They are used in many optimization researches to evaluate the performance of different optimization algorithms, including the convergence rate, the precision and the robustness such as Azar et al. (2014); Shang et al. (2019); Bartlett et al. (2019). Detailed discussions of these functions can be found at https://en.wikipedia.org/wiki/Test_functions_for_optimization Although some of these function values (e.g., the Himmelblau function) are not bounded in [0, 1] on the domain we select, the convergence rate of different algorithms will not change as long as the function is uniformly bounded over its domain. To commit a fair comparison (i.e., sharing similar signal/noise ratio), **we have re-scaled all the objectives** listed below to be bounded by [−1, 1]. We list the functions used and their mathematical expressions as follows. - DoubleSine (Figure 5(a)) is (originally) a one dimensional function proposed by Grill et al. (2015) with multiple sharp local minimums and one global minimum. The results are shown in Figure 1(a). $$f(x,y)=20\exp\left[-0.2{\sqrt{0.5\left(x^{2}+y\right)}}\right]$$ 2 + y 2) i+ exp[0.5(cos 2πx + cos 2πy)] − e − 20. - The counter example f(x) = 1 + 1/ ln x in 4 decreases too fast around zero and thus its smoothness cannot be measured by ν1ρ hfor any constants ν1*, ρ >* 0. However, because it is continuously differentiable and even monotone, the function is very easy to optimize. The results are shown in Figure 6(b). - Himmelbalu (Figure 5(c)) is (originally) a two dimensional function with four flat global minimums. We use the negative of the original function for maximization, and we restrain x to be in [−5, 5]2to include all four global maximums. The results are shown in Figure 6(c). $$\mathbf{s}\,2\pi y)]-e-20.$$ $$f(x,y)=-\left(x^{2}+y-11\right)^{2}-\left(x+y^{2}-7\right)^{2}.$$ - Rastrigin (Figure 5(d)) is a multi-dimensional function with a vast number of sharp local minimums and one global minimum. We use the negative of the original function for maximization. We run ![27_image_0.png](27_image_0.png) Figure 5: Plots of all the synthetic objectives used in the experiments. We have used a 10-dimensional Rastrigin function and the figure in (d) is a two-dimensional one. ![27_image_1.png](27_image_1.png) Figure 6: Cumulative regret of different algorithms on the synthetic functions. all the algorithms on the 10-dimensional space [−1, 1]10. The results are shown in Figure 6(d). $$f(\mathbf{x})=-A n+\sum_{i=1}^{n}\left[A\cos\left(2\pi x_{i}\right)-x_{i}^{2}\right]{\mathrm{~with~}}A=10.$$ As can be observed in all the figures, VHCT is one of the fastest algorithms, which validates our claims in the theoretical analysis. We remark that Himmelblau is very smooth after the normalization by its maximum absolute value on [−5, 5]2(890) and thus a relatively easier task compared with functions such as Rastrigin. DoubleSine contain many local optimums that are very close to the global optimum. Therefore, the regret differences between VHCT and HCT are expected to be small in these two cases. ## E.3 Performance Of Vhct **In The High-Noise Setting** Apart from the low-noise setting, we have also examined the performance of VHCT in the high-noise setting. In the following experiments, we have set the noise to be t ∼ Uniform(−0.5, 0.5). Note that the function values are in [−1, 1], therefore such a noise is very high. As discussed in Section 4.4, it should be expected that the performance of VHCT is similar to or only marginally better than HCT in this case. As shown in Figure 7, the performance of VHCT and HCT are similar, which matches our expectation. ![28_image_0.png](28_image_0.png) ![28_image_1.png](28_image_1.png) Figure 7: Performance of different algorithms on the synthetic functions with high noise
Review 1: Summary: The paper proposes a novel approach for black-box optimization that combines statistical modeling and optimization, called optimum-statistical collaboration. This algorithmic framework offers a more comprehensive analysis of the interaction between optimization and statistical errors during the optimization process, and is applicable to a broad range of functions and partitions that meet various local smoothness assumptions and different numbers of local optima. In addition, the paper introduces a variance-adaptive algorithm, VHCT, that outperforms prior methods in various settings. The proposed approach is designed to overcome the limitations of existing techniques for black-box optimization, which often rely on a specific model or optimization algorithm. The authors provide experimental results to demonstrate the effectiveness of their approach, showing that it outperforms existing techniques on a range of benchmark problems. Strengths and Weaknesses: In reviewing this paper, I believe its contribution to the field of black-box optimization is significant, as it provides a generic perspective of the balance between optimization error and statistical error and its algorithmic implications for a broader range of functions and partitions. The proposed algorithmic framework, optimum-statistical collaboration, does not rely on specific assumptions of optimization error and uncertainty estimates, and is believed to be applicable to a wider range of scenarios than previous works. The paper introduces a more relaxed assumption, known as General Local Smoothness, which can be applied to a broader range of objective functions than the assumption of Local Smoothness used in previous works. Theoretical analysis shows that VHCT has rate-optimal regret bounds under different local smoothness assumptions, while empirical results demonstrate its superior performance compared to prior methods. Overall, the article's novelty lies in the combination of multiple models and optimization algorithms within a statistical framework, leading to improved performance, as shown in the experimental results. The paper's weaknesses include some sections lacking clarity, particularly in the introduction section, which could have been more concise and better structured. The technical terminology may also present challenges for readers unfamiliar with black-box optimization and related fields. Additionally, a potential limitation of the article is its assumption of reader familiarity with black-box optimization, which may pose difficulties for some nonexperts. Furthermore, while the experimental results are compelling, they are limited to a specific set of benchmark problems, and it would be helpful to see how the proposed approach performs on a wider range of problems. Requested Changes: One request for change is to offer more details on the specific models and optimization algorithms used in the experimental evaluation. This would enable readers to better understand the strengths and limitations of the individual components of the proposed approach. Additionally, including more discussion on the limitations of the proposed approach and potential future research directions would be valuable (for example, the introduction could be better structured to provide a clearer motivation for the research question and the proposed solution). Lastly, technical terms could be more explicitly defined, and more accessible language could be used where possible to improve the readability of the article. There are also other typographical errors that I would omit here for simplicity (but encourage you to do a thorough check). Broader Impact Concerns: N/A ================================================== Review 2: Summary: This work studies the problem of black-box optimization, specially using hierarchical bandits-based algorithms. The authors firstly define two general components, i.e., resolution descriptor (Definition 1) and uncertainty descriptor (Definition 2), based on which a general algorithm Optimum-Statistical Collaboration (OSC) is proposed, as shown in Algorithm 1. The authors then prove a general regret upper bounds for OSC algorithm in Theorem 3.1. The results are general and they apply to examples in Figure 1, which satisfy general local smoothness assumption and finite near-optimality dimension but make some existing methods not work. In Section 4, the authors propose to use Variance Adaptive Quantifier as the uncertainty quantifier in VHCT algorithm (Algorithm 2), which improves the existing HCT method of Azar et al. Theorems 4.1 and 4.2 show examples of regret upper bounds for the VHCT algorithm under different general local smoothness conditions. Section 5 shows experiments to compare the proposed VHCT algorithm with existing methods, including T-HOO, HCT, POO, PCT and BO, using synthetic objective functions such as Garland function, and hyperparameter tuning of machine learning methods (parameters of SVM and training neural networks on MNIST dataset). The proposed VHCT algorithm outperforms existing methods by achieving smaller cumulative regret. Strengths and Weaknesses: **Strengths** 1. The results of Theorem 3.1 apply to examples for which existing methods do not work. Definitions 1 and 2 are more general than existing conditions. 2. The idea of using Variance Adaptive Quantifier seems reasonable, and the VHCT algorithm achieves promising results. 3. The paper is written in a clear and easy to follow way. Motivations are well presented using examples at the beginning part. **Weaknesses** 1. The technical novelty seems limited. The major novelty of this work is using Eq. (4) as the uncertainty quantifier and the resulting VHCT algorithm. However, using variance in confidence interval seems an old existing idea in bandit literature, e.g., Eq. (4) is highly similar to existing UCB variants which use variances, [1] Exploration-exploitation trade-off using variance estimates in multi-armed bandits, Audibert et al., 2009. [2] Use of variance estimation in the multi-armed bandit problem, Audibert et al., 2006. Given the studied problem of this work is highly related to bandit, I think the technical difficulty and novelty are limited. Requested Changes: Since my major concern is of the technical novelty of the proposed Variance Adaptive Quantifier and the resulting VHCT algorith,, I suggest discussing existing bandit literature of using variance in bonuses (such as UCB variants). The authors could comment on the difficulty and difference comparing with existing results, and this would help the audience better understand the contributions of this work. Broader Impact Concerns: This work is mostly theoretical, and the experiments are conducted using public methods and datasets. There are no ethical implications of the work as I can see. ================================================== Review 3: Summary: The paper considers bandit optimization of a black box function using a hierarchical partitioning of the search domain. In this method, the search domain is partitioned into $2^h$ subdomains at each level $h$. Specifically, each subdomain in level $h$ is divided into two subdomains that form a binary tree with nodes representing the subdomains. The root represents the entire search domain. An algorithm then can be designed that traverses tree using statistical confidence intervals towards the optimum point of the objective functions. The idea dates back to (Auer et al., 2007; Bubeck et al., 2011b) and has been studied in several other works. This papers contributes to the literature by relaxing the assumptions on the objective functions compared to the ones required in the existing work. Strengths and Weaknesses: This paper provides a general framework for bandit optimization of black box functions using a hierarchical search that includes general assumptions on the objective function (see Definition 1) and also on the statistical confidence intervals (see Definition 2). A relaxed assumption (General Local Smoothness) is then introduced that applied to a larger class of objective functions compared to the one in the existing work (Local Smoothness). Although the presentation is very clear and the general framework is interesting, I find the main contribution of the paper rather limited. In particular General Local Smoothness can be handled similar to Local Smoothness in the analysis. Requested Changes: The term "collaborative" is used for the interplay between optimization error and statistical error. This may be a bit confusing as it may imply collaboration between multiple agents as common in the bandit literature. I would suggest changing this terminology. The paper is overall very well written, there are however a few typos: "Eqn.equation 2" on page 5. "To ensure the above probability requirement holds, it is reasonable to make $SE_{h,i}(T, t + 1) \ge SE_{h,i}(T, t)$" on page 5. Should the inequality be the other way around. I would expect the statistical error to decrease when more samples are collected. $g(0)=0$ on page 6 does not seem correct. Broader Impact Concerns: As a mainly theoretical paper, this section seems not to apply. ================================================== Metareview: Recommendation: Accept as is Comment: This manuscript concerns the fundamental problem of black-box optimization. The authors consider a general class of optimization methods based on hierarchical partitioning (optimum-statistical collaboration) and analyze regret based on characterizations of the inherent resolution and uncertainty quantification mechanism. The reviewers generally agree that the results in this paper are correct and of interest to a subset of the TMLR audience. There was some concern regarding the extent of novelty in the work; however, I judge the contributions to be of sufficient scope to warrant publication, and none of the reviewers stands in opposition to this recommendation. Throughout the review and discussion period, the reviewers provided feedback and suggestions that the authors have faithfully incorporated into the manuscript. I believe these changes have strengthened the work. ==================================================
# Lazy Vs Hasty: Linearization In Deep Networks Impacts Learning Schedule Based On Example Difficulty Thomas George georgeth@mila.quebec Mila, Université de Montréal Guillaume Lajoie *g.lajoie@umontreal.ca* Mila, Université de Montréal Canada CIFAR AI Chair Aristide Baratin a.baratin@samsung.com Samsung, SAIT AI Lab, Montréal Reviewed on OpenReview: *https: // openreview. net/ forum? id= lukVf4VrfP* ## Abstract Among attempts at giving a theoretical account of the success of deep neural networks, a recent line of work has identified a so-called 'lazy' training regime in which the network can be well approximated by its linearization around initialization. Here we investigate the comparative effect of the lazy (linear) and feature learning (non-linear) regimes on subgroups of examples based on their difficulty. Specifically, we show that easier examples are given more weight in feature learning mode, resulting in faster training compared to more difficult ones. In other words, the non-linear dynamics tends to sequentialize the learning of examples of increasing difficulty. We illustrate this phenomenon across different ways to quantify example difficulty, including c-score, label noise, and in the presence of easy-to-learn spurious correlations. Our results reveal a new understanding of how deep networks prioritize resources across example difficulty. ## 1 Introduction Understanding the performance of deep learning algorithms has been the subject of intense research efforts in the past few years, driven in part by observed phenomena that seem to defy conventional statistical wisdom (Neyshabur et al., 2015; Zhang et al., 2017; Belkin et al., 2019). Notably, many such phenomena have been analyzed rigorously in simpler contexts of high dimensional linear or random feature models (e.g., Hastie et al., 2022; Bartlett et al., 2021), which shed a new light on the crucial role of overparametrization in the performance of such systems. These results also apply to the so-called *lazy training* regime (Chizat et al., 2019), in which a deep network can be well approximated by its linearization at initialization, characterized by the neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2019b; Allen-Zhu et al., 2019). In this regime, deep networks inherit the inductive bias and generalization properties of kernelized linear models. It is also clear, however, that deep models cannot be understood solely through their kernel approximation. Trained outside the lazy regime (e.g., Woodworth et al., 2020), they are able to learn adaptive representations, with a time-varying tangent kernel (Fort et al., 2020) that specializes to the task during training (Kopitkov & Indelman, 2020; Baratin et al., 2021; Paccolat et al., 2021; Ortiz-Jiménez et al., 2021). Several results showed examples where their inductive bias cannot be characterized in terms of a kernel norm (Savarese et al., 2019; Williams et al., 2019), or where they provably outperform any linear method (Malach et al., 2021). Yet, the specific mechanisms by which the two regimes differ, which could explain the performance gaps often observed in practice (Chizat et al., 2019; Geiger et al., 2020), are only partially understood. Our work contributes new qualitative insights into this problem, by **investigating the comparative** effect of the lazy and feature learning regimes on the training dynamics of various groups of examples of increasing difficulty. We do so by means of a control parameter that modulates linearity of the parametrization and smoothly interpolates the two training regimes (Chizat et al., 2019; Woodworth et al., 2020). We provide empirical evidence and theoretical insights suggesting that **the feature learning** regime puts higher weight on easy examples at the beginning of training, which results in an increased learning speed compared to more difficult examples. This can be understood as an instance of the **simplicity** bias brought forward in recent work (Arpit et al., 2017; Rahaman et al., 2019; Kalimeris et al., 2019), which we show here to be much more pronounced in the feature learning regime. It also resonates with the old idea that generalization can benefit from some curriculum learning strategy (Elman, 1993; Bengio et al., 2009). ## Contributions - We introduce and test the hypothesis of qualitatively different example importance between linear and non-linear regimes; - Using adequately normalized plots, we present a unified picture using 4 different ways to quantify example difficulty, where easy examples are prioritized in the non-linear regime. We illustrate empirically this phenomenon across different ways to quantify example difficulty, including c-score (Jiang et al., 2021), label noise, and spurious correlation to some easy-to-learn set of features. - We illustrate some of our insights in a simple quadratic model amenable to analytical treatment. The general setup of our experiments is introduced in Section 2. Section 3 is our empirical study, which begins with an illustrative example on a toy dataset (Section 3.1), followed by experiments on CIFAR 10 in two setups where example difficulty is quantified using respectively c-scores and label noise (Section 3.2). We also examine standard setups where easy examples are those with strong correlations between their labels and some spurious features (Section 3.3). Section 4 illustrates our findings with a theoretical analysis of a specific class of quadratic models, whose training dynamics is solvable in both regimes. We conclude in Section 5. Related work The neural tangent kernel was initially introduced in the context of infinitely wide networks, for a specific parametrization that provably leads to the lazy regime (Jacot et al., 2018). Such a regime allows to cast deep learning as a linear model using a fixed kernel, enabling import of well-known results from linear models, such as guarantees of convergence to a global optimum (Du et al., 2019b;a; Allen-Zhu et al., 2019). On the other hand, it is also clear that the kernel regime does not fully capture the behavior of deep models - including, in fact, infinitely wide networks (Yang & Hu, 2021). For example, in the so-called mean field limit, training two-layer networks by gradient descent learns adaptive representations (Chizat & Bach, 2018; Mei et al., 2018) and it can be shown that the inductive bias cannot be characterized in terms of a RKHS norm (Savarese et al., 2019; Williams et al., 2019). Performance gaps between the two regimes are also often observed in practice (Chizat et al., 2019; Arora et al., 2019; Geiger et al., 2020). Subsequent work showed and analyzed how, for a fixed (finite-width) network, the scaling of the model at initialization also controls the transition between the lazy regime, governed by the empirical neural tangent kernel, and the standard feature learning regime (Chizat et al., 2019; Woodworth et al., 2020; Agarwala et al., 2020). In line with this prior work, in our experiments below we use a scaling parameter α > 0 that modulates linearity of the parametrization and allows us to smoothly interpolates between the vanilla training (α = 1) where features are learned and the lazy (α → ∞) regime where they are not. We compare training runs with various values of α and empirically assess linearity with several metrics described in Section 2 below. Our results are in line with a group of work showing how deep networks learn patterns and functions of increasing complexity during training (Arpit et al., 2017; Kalimeris et al., 2019). An instance of this is the so-called spectral bias empirically observed in Rahaman et al. (2019) where a sum of sinusoidal signals is incrementally learned from low frequencies to high frequencies, or in Zhang et al. (2021) that use Fourier decomposition to analyze the function learned by a deep convolutional network on a vision task. The spectral bias is well understood in linear regression, where the gradient dynamics favours the large singular directions of the feature matrix. For neural networks in the lazy regime, spectral analysis of the neural tangent kernel have been investigated for architectures and data (e.g, uniform data on the sphere) allowing for explicit computations (e.g., decomposition of the NTK in terms of spherical harmonics) (Bietti & Mairal, 2019; Basri et al., 2019; Yang & Salman, 2019). This is a setup where the spectral bias can be rigorously analyzed, i.e., we can get explicit information about the type of functions that are learned quickly and generalize well. In this context, our work specifically focuses on comparing the lazy and the standard feature learning regimes. Our theoretical model in Section 4 reproduces some of the key technical ingredients of known analytical results (Saxe et al., 2014; Gidel et al., 2019) on deep linear networks. These results show how, in the context of multiclass classification or matrix factorization, the principal components of the input-output correlation matrix are learned sequentially from the highest to the lowest mode. Note however that in the framework of these prior works, the number of modes is bounded by the output dimension - which thus reduces to one in the context of regression or binary classification. By contrast, our theoretical analysis applies to the components of the vector of labels Y ∈ R n in the eigenbasis of the kernel XX⊤ defined by the input matrix X ∈ R n×d. Thus, despite the technical similarities, our framework approaches the old problem of the relative learning speed of different modes in factorized models from a novel angle. In particular, it allows us to frame the notion of example difficulty in this context. Example difficulty is a loosely defined concept that has been the subject of intense research recently. Ways to quantify example difficulty for a model/algorithm to learn individual examples include e.g. self-influence (Koh & Liang, 2017), example forgetting (Toneva et al., 2019), TracIn (Pruthi et al., 2020), C-scores (Jiang et al., 2021), or prediction depth (Baldock et al., 2021). We believe that a comprehensive theory of generalization in deep learning will require to understand how neural networks articulate learning and memorization to both fit the head (easy examples) and the tail (difficult examples) of the data distribution (Hooker et al., 2020; Feldman & Zhang, 2020; Sagawa et al., 2020a;b). ## 2 Setup We consider neural networks fθ parametrized by θ ∈ R p(i.e. weights and biases for all layers), and trained by minimizing some task-dependent loss function ℓ(θ) := Pn i=1 ℓi(fθ(xi)) computed on a training dataset {x1, *· · ·* , xn}, using variants of gradient descent, $$\theta^{(t+1)}=\theta^{(t)}-\eta\nabla_{\!\theta}\ell(\theta^{(t)}),$$ $$\quad(1)$$ (t)), (1) with some random initialization θ (0) and a chosen learning rate η > 0. Linearization A Taylor expansion and the chain rule give the corresponding updates f (t):= fθ(t) for any network output, at first order in the learning rate, $$f^{(t+1)}({\bf x})\simeq f^{(t)}({\bf x})-\eta\sum_{i=1}^{n}K^{(t)}({\bf x},{\bf x}_{i})\nabla\ell_{i},\tag{2}$$ which depend on the time-varying **tangent kernel** K(t)(x, x ′) := ∇θf (t)(x) ⊤∇θf (t)(x ′). The *lazy regime* is one where this kernel remains nearly constant throughout training. Training the network in this regime thus corresponds to training the linear predictor defined by $$\bar{f}_{\mathbf{\theta}}(\mathbf{x}):=f_{\mathbf{\theta}^{(0)}}(\mathbf{x})+(\mathbf{\theta}-\mathbf{\theta}^{(0)})^{\top}\nabla_{\mathbf{\theta}}f_{\mathbf{\theta}^{(0)}}(\mathbf{x}).\tag{1}$$ Modulating linearity In our experiments, following Chizat et al. (2019), we modulate the level of "nonlinearity" during training of a deep network with a scalar parameter α ≥ 1, by replacing our prediction fθ by $f_{\theta}^{\alpha}\left(\mathbf{x}\right):=f_{\theta^{\left(0\right)}}\left(\mathbf{x}\right)+\alpha\left(f_{\theta}\left(\mathbf{x}\right)-f_{\theta^{\left(0\right)}}\left(\mathbf{x}\right)\right)$ $\alpha$ as $\alpha=\alpha/\alpha^2$, In this setup, gradient doesn't exist. (x) := fθ(0) (x) + α (fθ (x) − fθ(0) (x)) (4) and by rescaling the learning rate as ηα = η/α2. In this setup, gradient descent steps in parameter space are rescaled by 1/α while steps in function space (up to first order) are in O(1) in α. 1 α can also be viewed as 1Under some assumptions such as strong convexity of the loss, it was shown (Chizat et al., 2019, Thm 2.4) that as α → ∞, the gradient descent trajectory of f α θ (t) gets uniformly close to that of the linearization f¯θ (t) . $$\left({\boldsymbol{3}}\right)$$ $$\left(4\right)$$ ![3_image_0.png](3_image_0.png) $$\left(5\right)$$ Figure 1: 100 randomly initialized runs of a 4 layers MLP trained on the yin-yang dataset (a) using gradient descent in both the non-linear (α = 1) and linearized (α = 100) setting. The training losses (b) show a speedup in the non-linear regime: in order to compare both regimes at equal progress, we normalize by comparing models extracted at equal training loss thresholds (c), (d) and (e). We visualize the differences ∆loss (xtest) = lossfnon-linear (xtest)−lossflinear (xtest) for test points paving the 2d square [−1, 1]2using a color scale. We observe that these differences are not uniformly spread across examples: instead they suggest a comparative bias of the non-linear regime towards correctly classifying easy examples (large areas of the same class), whereas difficult examples (e.g. the small disks) are boosted in the linear regime. controlling the *level of feature adaptivity*, where large values of α result in linear training where features are not learned. We will also experiment with α < 1 below, which enhances adaptivity compared the standard regime (α = 1). The goal of this procedure is two-fold: (i) to be able to smoothly interpolate between the standard regime at α = 1 and the linearized one, (ii) to work with models that are nearly linearized yet practically trainable with gradient descent. Linearity measures In order to assess linearity of training runs for empirically chosen values of the re-scaling factor α, we track three different metrics during training (fig 2 right): - **Sign similarity** counts the proportion of ReLUs in all layers that have kept the same activation status (0 or > 0) from initialization. - **Tangent kernel alignment** measures the similarity between the Gram matrix K (t) ij = K(t)(xi, xj ) of the tangent kernel with its initial value K(0), using kernel alignment (Cristianini et al., 2001) $$\operatorname{KA}(\mathbf{K}^{(t)},\mathbf{K}^{(0)})={\frac{\operatorname{Tr}[\mathbf{K}^{(t)}\mathbf{K}^{(0)}]}{\|\mathbf{K}^{(t)}\|_{F}\|\mathbf{K}^{(0)}\|_{F}}}$$ - $\|_F$ is the Froebenius norm). $\quad(\parallel\cdot\parallel_{F})$ $$\mathbb{H}^{-1}$$ ∥K(t)∥F ∥K(0)∥F(*∥ · ∥*F is the Froebenius norm) (5) - **Representation alignment** measures the similarity of the last non-softmax layer representation ϕ (t) R (x) with its initial value ϕ (0) R , in terms of the kernel alignment (eq. 5) of the corresponding Gram matrices (K (t) R )ij = ϕ (t) R (xi) ⊤ϕ (t) R (xj ). ## 3 Empirical Study 3.1 A Motivating Example On A Toy Dataset We first explore the effect of modulating the training regime for a binary classification task on a toy dataset with 2d inputs, for which we can get a visual intuition. We use a fully-connected network with 4 layers and ReLU activations. For 100 independent initial parameter values, we generate 100 training examples uniformly on the square [−1, 1]2from the yin-yang dataset (fig. 1.a). We perform 2 training runs for α = 1 and α = 100 using (full-batch) gradient descent with learning rate 0.01. Global training speed-up and normalization After the very first few iterations, we observe a speed-up in training progress of the non-linear regime (fig. 1.b). This is consistent with previously reported numerical experiments on the lazy training regime (Chizat et al., 2019, section 3). This raises the question of whether this acceleration comes from a global scaling in all directions, or if it prioritizes certain particular groups of examples. We address this question by comparing the training dynamics at equal progress: we counteract the difference in training speed by normalizing by the mean training loss, and we compare the linear and non-linear regimes at common thresholds ((c), (d) and (e) horizontal lines in fig. 1.b). Comparing linear and non-linear regimes At every threshold value, we compute the predictions on test examples uniformly paving the 2d square. We compare both regimes on individual test examples by plotting (fig. 1.c, 1.d, 1.e) the differences in loss values, $$\Delta\mathrm{loss}\left(x_{\mathrm{test}}\right)=\mathrm{loss}f_{\mathrm{non\mbox{-}linear}}\left(x_{\mathrm{test}}\right)-\mathrm{loss}f_{\mathrm{linear}}\left(x_{\mathrm{test}}\right)$$ $$(6)$$ ∆loss(xtest) = lossfnon-linear (xtest) − lossflinear (xtest) (6) Red (resp. blue) areas indicate a lower test loss for the linearized (resp. non-linear) model. Remarkably, the resulting picture is not uniform: these plots suggest that compared to the linear regime, the non-linear training dynamics speeds up for specific groups of examples (the large top-right and bottom-left areas) at the expense of examples in more intricate areas (both the disks and the areas between the disks and the horizontal boundary). ## 3.2 Hastening Easy Examples We now experiment with deeper convolutional networks on CIFAR10, in two setups where the training examples are split into groups of varying difficulty. Additional experiments with various other choices of hyperparameters and initialization seed are reported in Appendix F. ## 3.2.1 Example Difficulty Using C-Scores In this section we quantify example difficulty using consistency scores (C-scores) (Jiang et al., 2021). Informally, C-scores measure how likely an example is to be well-classified by models trained on subsets of the dataset that do not contain it. Intuitively, examples with a high C-score share strong regularities with a large group of examples in the dataset. Formally, given a choice of model f, for each (input, label) pair (*x, y*) in a dataset D, Jiang et al. (2021) defines its empirical consistency profile as: $$\hat{C}_{\mathcal{D},n}\left(x,y\right)=\hat{\mathbb{E}}_{D\sim\mathcal{D}\setminus\left\{\left(x,y\right)\right\}}^{r}\left[\mathbb{P}\left(f\left(x;D\right)=y\right)\right],$$ $$\left(7\right)$$ [P (f (x; D) = y)] , (7) where Eˆris the empirical average over r subsets D of size n uniformly sampled from D excluding (*x, y*), and f(·, D) is the model trained on D. A scalar C-score is obtained by averaging the consistency profile over various values of n = 1, · · · , *|D| −* 1. For CIFAR10 we use pre-computed scores available as https: //github.com/pluskid/structural-regularity. While training, we compute the loss separately on 10 subsets of the training set ranked by increasing C-scores deciles (e.g. examples in the last subset are top-10% C-scores), for both the training set and test set. We also train a linearized copy of this network (so as to share the same initial conditions) with α = 100. The α = 1 run is trained for 200 epochs (64 000 SGD iterations) whereas in order to converge the α = 100 run is trained for 1 000 epochs (320 000 SGD iterations). Similarly to fig. 1, we normalize training progress using the mean training loss in order to compare regimes at equal progress. We check (fig. 2 top right) that the model with α = 100 indeed stays in the linear regime during the whole training run, since all 3 linearity metrics that we report essentially remain equal to 1. By contrast, in the non-linear regime (α = 1), a steady decrease of linearity metrics as training progresses indicates a rotation of the NTK and the representation kernel, as well as lower sign similarity of the ReLUs. The results are shown in fig. 2 (top left). As one might expect, examples with high C-scores are learned faster during training than examples with low C-scores in both regimes. Remarkably, this effect is amplified in the non-linear regime compared to the linear one, as we can observe by comparing e.g. the top (resp. bottom) decile in light green (resp dark blue). This illustrates a relative acceleration of the non-linear regime in the direction of easy examples. ![5_image_0.png](5_image_0.png) Figure 2: Starting from the same initial parameters, we train 2 ResNet18 models with α = 1 (standard training) and α = 100 (linearized training) on CIFAR10 using SGD with momentum. **(Top left)** We compute the training loss separately on 10 subgroups of examples ranked by their C-scores. Training progress is normalized by the mean training loss on the x-axis. Unsurprisingly, in both regimes examples with high C-scores are learned faster. Remarkably, this ranking is more pronounced in the non-linear regime as can be observed by comparing dashed and solid lines of the same color. **(Bottom left)** We randomly flip the class of 15% of the training examples. At equal progress (measured by equal clean examples loss), the non-linear regime prioritizes learning clean examples and nearly ignores noisy examples compared to the linear regime since the solid curve remains higher for the non-linear regime. Concomitantly, the non-linear test loss reaches a lower value. **(Right)** On the same training run, as a sanity check we observe that the α = 100 training run remains in the linear regime throughout since all metrics stay close to 1, whereas in the α = 1 run, the NTK and representation kernel rotate, and a large part of ReLU signs are flipped. These experiments are completed in Appendix F with accuracy plots for the same experiments, and with other experiments with varying initial model parameters and mini-batch order. ## 3.2.2 Example Difficulty Using Label Noise In this section we use label noise to define difficult examples. We train a ResNet18 on CIFAR10 where 15% of the training examples are assigned a wrong (random) label. We compute the loss and accuracy independently on the regular examples with their true label, and the noisy examples whose label is flipped. In parallel, we train a copy of the initial model in the linearized regime with α = 100. Fig. 2 bottom right shows the 3 linearity metrics during training in both regimes. The results are shown in fig. 2 (bottom left). In both regimes, the training process begins with a phase where only examples with true labels are learned, causing the loss on examples with wrong labels to increase. A second phase starts (in this run) at 1.25 clean examples loss for the non-linear regime and 1.5 clean loss for the linear regime, where random labels are getting memorized. We see that the first phase takes a larger part of the training run in the non-linear regime than in the linear regime. We interpret this as the fact that the non-linear regime prioritizes learning easy examples. As a consequence, the majority of the training process is dedicated to learning the clean examples in the non-linear regime, whereas in the linear regime both the clean and the noisy labels are learned simultaneously passed the 1.5 clean loss mark. Concomitantly, ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) Figure 3: We visualize the trajectories of training runs on 2 spurious correlations setups, by computing the accuracy on 2 separate subsets: one with examples that contain the spurious feature (with spurious), the other one without spurious correlations (w/o spurious). On Celeb A **(top row)**, the attribute 'blond' is spuriously correlated with the gender 'woman'. In the first phase of training we observe that **(left)** the test accuracy is essentially higher for the linear run, which can be further explained by observing that **(middle)** the training accuracy for w/o spurious examples increases faster in the linear regime than in non-linear regimes at equal with spurious training accuracy. (right) A similar trend holds for test examples. In this first part the linear regime is less sensitive to the spurious correlation (easy examples) thus gets better robustness. **(bottom row)** On Waterbirds, the background (e.g. a lake) is spuriously correlated with the label (e.g. a water bird). **(left)** We observe the same hierarchy between the linear run and other runs. In the first training phase, the linear regime is less prone to learning the spurious correlation: the w/o spurious accuracy stays higher while the with spurious examples are learned (**(middle)** and **(right)**). These experiments are completed in fig. 12 in Appendix F with varying initial model parameters and mini-batch order. comparing the sweet spot (best test loss) of the two regimes indicates a higher robustness to label noise thus a clear advantage for generalization in the non-linear regime. ## 3.3 Spurious Correlations We now examine setups where easy examples are those with strong correlations between their labels and some spurious feature (Sagawa et al., 2020b). We experiment with CelebA (Liu et al., 2015) and Waterbirds (Wah et al., 2011) datasets. CelebA (Liu et al., 2015) is a collection of photographs of celebrities' faces, each annotated with its attributes, such as hair color or gender. Similarly to Sagawa et al. (2020b), our task is to classify pictures based on whether the person is blond or not. In this dataset, the attribute "the person is a woman" is spuriously correlated with the attribute "the person is blond", since the attribute "blond" is over-represented among women (24% are blond) compared to men (2% are blond). We use 20 000 examples of CelebA to train a ResNet18 classifier on the task of predicting whether a person is blond or not, using SGD with learning rate 0.01, momentum 0.9 and batch size 100. We also extract a balanced dataset with 180 (i.e. the total number of blond men in the test set) examples in each of 4 categories: blond man, blond woman, non-blond man and non-blond woman. While training progresses, we measure the loss and accuracy on the subgroups man and woman. Starting from the same initial conditions, we train 3 different classifiers with α ∈ {.5, 1, 100}. Waterbirds (Wah et al., 2011) is a smaller dataset of pictures of birds. We reproduce the experiments of Sagawa et al. (2020a) where the task is to distinguish land birds and water birds. In this dataset, the background is spuriously correlated with the type of bird: water birds are typically photographed on a background such as a lake, and similarly there is generally no water in the background of land birds. There are exceptions: a small part of the dataset consists of land birds on a water background and vice versa (e.g. a duck walking in the grass). We use 4 795 training examples of the Waterbirds dataset. Since the dataset is smaller, we start from a pre-trained ResNet18 classifier from default PyTorch models (pre-trained on ImageNet). We replace the last layer with a freshly initialized binary classification layer, and we set batch norm layers to evaluation mode2 We train using SGD with learning rate 0.001, momentum 0.9 and minibatch size 100. From the training set, we extract a balanced dataset with 180 examples in each of 4 groups: land birds on land background, land birds on water background, water bird on land background, and water bird on land background, which we group in two sets: with spurious when the type of bird and the background agree, and w/o spurious otherwise. While training, we measure the accuracy separately on these 2 sets. We train 3 different classifiers with α ∈ {.5, 1, 100}. Looking at fig. 3 left, for both the Waterbirds and the CelebA experiments we identify two phases: in the first phase the test accuracy is higher for the linear α = 100 run than for other runs. In the second phase all 3 runs seem to converge, with a slight advantage for non-linear runs in Waterbirds. Taking a closer look at the first phase in fig. 3 middle and right, we understand this difference in test accuracy in light of spurious and non-spurious features: in the non-linear regime, the training dynamics learns the majority examples faster, at the cost of being more prone to spurious correlations. This can be seen both on the balanced training set and the balanced test set. ## 4 Theoretical Insights The goal here is to illustrate some of our insights in a simple setup amenable to analytical treatment. ## 4.1 A Simple Quadratic Model We consider a standard linear regression analysis: given n input vectors xi ∈ R d with their corresponding labels yi ∈ R, 1 ≤ i ≤ n, the goal is to fit a linear function fθ(x) to the data by minimizing the least-squares loss ℓ(θ) := 12 Pi (fθ(xi) − yi) 2. We will focus on a specific quadratic parametrization, which can be viewed as a subclass of two-layer networks with linear activations. Notation We denote by X ∈ R n×dthe matrix of inputs and by y ∈ R n the vector of labels. We consider the singular value decomposition (SVD), $$\mathbf{X}=\mathbf{U}\mathbf{M}\mathbf{V}^{\top}:=\sum_{\lambda=1}^{r_{X}}{\sqrt{\mu_{\lambda}}}\mathbf{u}_{\lambda}\mathbf{v}_{\lambda}^{\top}$$ λ(8) where U ∈ R n×n,V ∈ R d×d are orthogonal and M is rectangular diagonal. rX denotes the rank of X, µ1 *≥ · · · ≥* µrX > 0 are the non zero (squared) singular values; we also set µλ = 0 for rX < λ ≤ max(*n, d*). 2In order not to interfere with α scaling (see section 2), and since we consider that batch norm is a whole different theoretical challenge by itself, we chose to turn it off, and instead keep the mean and variance buffers to their pre-trained value. See appendix D for further discussion of this point. $$({\boldsymbol{\delta}})$$ Left and right singular vectors extend to orthonormal bases (u1, *· · ·* ,un) and (v1, *· · ·* , vd) of R n and R d, respectively. In what follows we assume, without loss of generality3, that the vector of labels has positive components in the basis uλ, i.e., yλ := u ⊤ λ y ≥ 0 for λ = 1, *· · ·* n. Parametrization We consider the following class of functions $$f_{\mathbf{\theta}}({\bf x})=\mathbf{\theta}^{\top}{\bf x},\quad\mathbf{\theta}:=\frac{1}{2}\sum_{\lambda=1}^{d}w_{\lambda}^{2}\,\mathbf{v}_{\lambda}\tag{9}$$ $$(10)$$ $$(11)$$ In this setting, the least-squares loss is minimized by gradient descent over the vector parameter w = [w1, *· · ·* wd] ⊤. Given an initialization w0, we want to compare the solution of the vanilla gradient (non linear) dynamics with the solution of the lazy regime, which corresponds to training the linearized function (3), given in our setting by ¯fθ¯(x) = θ¯⊤x where θ¯ =Pdλ=1 wλw 0 λ vλ. ## 4.2 Gradient Dynamics For simplicity, we analyze the continuous-time counterpart of gradient descent, $$\mathbf{w}(0)=\mathbf{w}^{0},\quad{\dot{\mathbf{w}}}(t)=-\nabla_{\mathbf{w}}\ell(\boldsymbol{\theta}(t))$$ w(0) = w0, w˙ (t) = −∇wℓ(θ(t)) (10) where the dot denotes the time-derivative. Making the gradients explicit and differentiating (9) yield $${\hat{\mathbf{\theta}}}(t)=-\mathbf{\Sigma}(t)(\mathbf{\theta}(t)-\mathbf{\theta}^{*}),\quad\mathbf{\Sigma}(t)=V\mathrm{Diag}(\mu_{1}w_{1}^{2}(t),\cdots,\mu_{d}w_{d}^{2}(t))V^{\top}$$ ⊤ (11) $$t))V^{\top}$$ where θ ∗is the solution of the linear dynamics.4 Note how Σ(t) is obtained from the input correlation matrix X⊤X by rescaling each eigenvalue µλ by the time-varying factor w 2 λ . By contrast, the lazy regime is described by the equation obtained from (11) by replacing Σ(t) the constant matrix Σ(0). In the proposition below (proved in Appendix A), we consider the system (10, 11) initialized as θ 0:= 1 2 Pdλ=1(w 0 λ ) 2 vλ where we assume that w 0 λ = 0 ̸ for all λ. We denote by y˜λ the components of the input-label correlation vector X⊤y ∈ R din the basis vλ: we have y˜λ = √µλyλ for 1 ≤ λ ≤ rX and 0 when rX < λ ≤ d. Proposition 1 The solution of (10, *11) is given by,* $$\theta(t)=\sum_{\lambda=1}^{d}\theta_{\lambda}(t)v_{\lambda},\qquad\theta_{\lambda}(t)=\left\{\begin{array}{l l}{{\frac{\theta_{\lambda}^{0}\theta_{\lambda}^{*}}{\theta_{\lambda}^{0}-e^{-2\mu_{\lambda}t}(\theta_{\lambda}^{0}-\theta_{\lambda}^{*})}}}&{{\quad\mathrm{if~}\theta_{\lambda}^{*}\neq0}}\\ {{\frac{\theta_{\lambda}^{0}}{1+2\mu_{\lambda}\theta_{\lambda}^{0}t}}}&{{\quad\mathrm{if~}\theta_{\lambda}^{*}=0}}\end{array}\right.$$ $$(13)$$ $$\quad(12)$$ By contrast, the solution in the linearized regime where Σ(t) ≈ Σ(0) is, $$\theta_{\lambda}(t)=\theta_{\lambda}^{*}+e^{-\mu\theta_{\lambda}^{0}t}(\theta_{\lambda}^{0}-\theta_{\lambda}^{*})$$ ) (13) ## 4.3 Discussion While the gradient dynamics (12,13) converge to the same solution θ ∗, we see that the convergence rates of the various modes differ in the two regimes. To quantify this, for each dynamical mode 1 ≤ λ ≤ rX such that |θ ∗ λ | ̸= 0, let tλ(ϵ), tlin λ(ϵ) be the times required for θλ to be ϵ-close to convergence, i.e |θλ − θ ∗ λ | = ϵ, in the two regimes. Substituting into (12) and (13) we find that, close to convergence ϵ ≪ θ ∗ λ and for a small initialization, θ 0 λ ≪ θ ∗ λ , $$t_{\lambda}(\epsilon)=\frac{1}{i\hbar}\log\frac{\theta_{\lambda}^{\mathrm{x}}}{\epsilon\theta_{\lambda}^{0}},\qquad\qquad t_{\lambda}^{\mathrm{in}}(\epsilon)=\frac{1}{\mu_{\lambda}\theta_{\lambda}^{0}}\log\frac{\theta_{\lambda}^{\mathrm{x}}}{\epsilon}$$ up to terms in O(ϵ/θ∗ λ , θ0 λ /θ∗ λ ). Two remarks are in order: 3One can use the reflection invariance √µλuλv⊤ λ = √µλ(−uλ)(−vλ)⊤ of the SVD to flip the sign of yλ. 4Explicitly, θ ∗ = Σ+X⊤y + P⊥(θ 0) where Σ+ is the pseudoinverse of the input correlation matrix Σ = X⊤X and P⊥ projects onto the null space of X. It decomposes as θ ∗ =Pdλ=1 θ ∗ λ vλ in the basis vλ, where θ ∗ λ = yλ/ √µλ for 1 ≤ λ ≤ rX and θ ∗ λ = θ 0 λ for rX < λ ≤ d. $$(14)$$ ![9_image_0.png](9_image_0.png) Figure 4: **(left)** Different input/label correlation (example 1): examples are learned in a flipped order in the two regimes. **(middle)** Label noise (example 2): the non-linear dynamics prioritizes learning the clean labels **(right)** Spurious correlations (example 3): the non-linear dynamics prioritizes learning the spuriously correlated feature. These analytical curves are completed with numerical experiments on standard (dense) 2-layer MLP in figure 14 in appendix G, which shows a similar qualitative behaviour. Linearization impacts learning schedule. Specifically, while the learning speed of each mode depends on the components of the label vector in the non linear regime, through y˜λ = √µλyλ, it does not in the linearized one. Thus, the ratio of ϵ-convergence times for two given modes *λ, λ*′is tλ/tλ′ = y˜λ/y˜λ′ in the non-linear case and t lin λ/tlin λ = µλ′θ 0 λ′/µλθ 0 λ in the linearized case, up to logarithmic factors. Sequentialization of learning. Non-linearity, together with a vanishingly small initialization, induces a sequentialization of learning for the various modes (see also Gidel et al., 2019, Thm 2). To see this, pick a mode λ and consider, for any other mode λ ′, the value θλ′ (tλ) at the time tλ := tλ(ϵ) where λ reaches ϵ-convergence. Let us also write θ 0 λ = σ˜θ 0 λ where σ is small and ˜θ 0 λ = O(1) as σ → 0. Then elementary manipulations show the following: $$\theta_{\lambda^{\prime}}(t_{\lambda})=\frac{\theta_{\lambda^{\prime}}^{*}\tilde{\theta}_{\lambda^{\prime}}^{0}}{\tilde{\theta}_{\lambda^{\prime}}^{0}+\left[e\tilde{\theta}_{\lambda^{\prime}}^{0}/\theta_{\lambda^{\prime}}^{*}\right]^{\frac{2\tilde{\theta}_{\lambda^{\prime}}}{2\tilde{\theta}_{\lambda}}}\sigma^{\frac{2\tilde{\theta}_{\lambda^{\prime}}}{2\tilde{\theta}_{\lambda}}-1}=_{\sigma\to0}\begin{cases}\theta_{\lambda^{\prime}}^{*}&\tilde{y}_{\lambda^{\prime}}>\tilde{y}_{\lambda}\\ 0&\tilde{y}_{\lambda^{\prime}}<\tilde{y}_{\lambda}\end{cases}\tag{15}$$ In words, for fixed ϵ > 0 and in the limit of small initialization, the mode λ gets ϵ-close to convergence before any of the subdominant mode deviates from their (vanishing) initial value: the modes are learned sequentially. ## 4.4 Mode Vs Example Difficulty We close this section by illustrating the link between *mode* and *example difficulty* on three concrete examples of structure for the training data {xi, yi} n i=1. In what follows, we consider the overparametrized setting where d ≥ n. We denote by e1, *· · ·* ed the canonical basis of R d. For each of the examples below, we consider an initialization w0 = √σ[1 *· · ·* 1]⊤ with small σ > 0 and the corresponding solutions in Prop. 1 for both regimes. Example 1 We begin with a rather trivial setup where each mode corresponds to a training example. It will illustrate how non-linearity can *reverse* the learning order of the examples. Given a sequence of strictly positive numbers µ1 *≥ · · ·* µn > 0, we consider the training data, $${\mathbf{x}}_{i}={\sqrt{\mu_{i}}}{\boldsymbol{e}}_{i},\quad y_{i}=\mu_{n-i+1}/{\sqrt{\mu_{i}}},\quad\quad1\leq i\leq n$$ √µi, 1 ≤ i ≤ n (16) In the linearized regime, fθ(t)(xi) converges to yi at the linear rate σµi; the model learns faster the examples with higher µi, hence with *lower* index i. In the non linear regime, the examples are learned sequentially according to the value y˜i = √µiyi = µn−i+1, hence from *high to low* index i. Thus in this setting, linearization flips the learning order of the training examples (see fig. 4 left). In examples 2 and 3 below we aim at modelling situations where the labels depend on low-dimensional (in this case, one dimensional) projections on the inputs (e.g. an image classification problem where the labels $$(16)$$ mainly depend on the low frequencies of the image). These two examples mirror the two sets of experiments in Sections 3.2.2 and 3.3, respectively. Example 2 We consider a simple classification setup on linearly separable data with label noise. Here we assume *d > n*. Conditioned on a set of binary labels yi = ±1, the inputs are given by $$\mathbf{x}_{i}=\kappa_{i}y_{i}\mathbf{e}_{1}+\eta\mathbf{e}_{i+1}\quad1\leq i\leq n$$ $$(17)$$ $$(18)$$ $\mathbf{a}\cdot\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$. xi = κiyie1 + ηei+1 1 ≤ i ≤ n (17) where κi = ±1 is some 'label flip' variable and η > 0. We assume we have q 'noisy' examples with flipped labels, i.e. κi = −1, where 1 ≤ q < ⌈n/2⌉. The SVD of the feature matrix X ∈ R n×d defined by (17) can be made explicit. In particular, the top left singular vector is u1 = y¯/ √n, where y¯ ∈ R n denotes the vector of noisy labels y¯i:= κiyi. This singles out a dominant mode y1 := (u ⊤ 1 y)u1 of the label vector y that is *learned first* by the non-linear dynamics. Explicitly, $$\mathbf{y}_{1}=(1-{\frac{2q}{n}}){\bar{\mathbf{y}}}$$ )y¯ (18) For a small noise ratio q/n ≪ 1, this yields y1i ≈ yi for clean examples and −yi for noisy ones: fitting the dominant mode amounts to learning the clean examples - while assigning the wrong label to the noisy ones. This is illustrated in fig 4 (middle plot). Example 3 We consider a simple spurious correlation setup (Sagawa et al., 2020b), obtained by adding to (17) a 'core' feature that separates all training points. Here we assume *d > n* + 1. Conditioned on a set of binary labels yi = ±1, the inputs are given by $$+\,\eta\mathbf{e}_{i+2},\quad1\leq i\leq n$$ $$(19)$$ $$\mathbf{x}_{i}=\kappa_{i}y_{i}\mathbf{e}_{1}+\lambda y_{i}\mathbf{e}_{2}+\eta\mathbf{e}_{i+2},$$ xi = κiyie1 + λyie2 + ηei+2, 1 ≤ i ≤ n (19) for some binary variable κi = ±1, scaling factor λ ∈ (0, 1), and η > 0. Given 1 ≤ q < ⌈n/2⌉, we assume we have a majority group of n − q training examples with κi = 1, whose label is spuriously correlated with the spurious feature e2, and a minority group of q examples with κi = −1. The analysis is similar as in Example 2. For small noise ratio and scaling factor λ, the non-linear dynamics enhances the bias towards fitting first the majority group of ('easy') examples with spuriously correlated labels. This illustrates an increased sensitivity of the non linear regime to the spurious feature - at least in the first part of training. This is shown in fig 4 (right plot). ## 5 Conclusion The recent emphasis on the lazy training regime, where deep networks behave as linear models amenable to analytical treatment, begs the question of the specific mechanisms and implicit biases which differentiates it from the full-fledged feature learning regime of the algorithms used in practice. In this paper, we investigated the comparative effect of the two regimes on subgroups of examples based on their difficulty. We provided experiments in various setups suggesting that easy examples are given more weight in non-linear training mode (deep learning) than in linear training, resulting in a comparatively higher learning speed for these examples. We illustrated this phenomenon across various ways to quantify examples difficulty, through c-scores, label noise, and correlations to some easy-to-learn spurious features. We complemented these empirical observations with a theoretical analysis of a quadratic model whose training dynamics is tractable in both regimes. We believe that our findings makes a step towards a better understanding of the underlying mechanisms that drive the good generalization properties observed in practice. ## Broader Impact Statement This work proposes to improve our understanding of the mechanisms behind deep learning models training. There is no clear direct application of these theoretical observations to create harmful tools, however as for any technology, machine learning models can be used for malicious intentions. We also acknowledge that deep learning uses a significant amount of compute capacity when scaled to global tech companies, which in some places is powered by carbon emitting power plants, thus contributes to global warming. ## References Atish Agarwala, Jeffrey Pennington, Yann Dauphin, and Sam Schoenholz. Temperature check: theory and practice for training models with softmax-cross-entropy losses, 2020. URL https://arxiv.org/abs/2010. 07344. Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. volume 97 of *Proceedings of Machine Learning Research*, pp. 242–252, Long Beach, California, USA, 09–15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/allen-zhu19a.html. Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ dbc4d84bfcfe2284ba11beffb853a8c4-Paper.pdf. Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. A closer look at memorization in deep networks. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th* International Conference on Machine Learning, volume 70 of *Proceedings of Machine Learning Research*, pp. 233–242. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/arpit17a.html. Robert John Nicholas Baldock, Hartmut Maennel, and Behnam Neyshabur. Deep learning through the lens of example difficulty. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in* Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=WWRBHhH158K. Aristide Baratin, Thomas George, César Laurent, R Devon Hjelm, Guillaume Lajoie, Pascal Vincent, and Simon Lacoste-Julien. Implicit regularization via neural feature alignment. In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, volume 130 of *Proceedings of Machine Learning Research*, pp. 2269–2277. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr.press/v130/baratin21a.html. Peter L Bartlett, Andrea Montanari, and Alexander Rakhlin. Deep learning: a statistical viewpoint. Acta numerica, 30:87–201, 2021. Ronen Basri, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. In *Advances in Neural Information Processing Systems 32*, pp. 4761–4771. 2019. URL http://papers.nips.cc/paper/ 8723-the-convergence-rate-of-neural-networks-for-learned-functions-of-different-frequencies. pdf. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32): 15849–15854, 2019. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *Proceedings* of the 26th annual international conference on machine learning, pp. 41–48, 2009. Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. In *Advances in Neural Information Processing Systems 32*, pp. 12893–12904. 2019. URL http://papers.nips.cc/paper/ 9449-on-the-inductive-bias-of-neural-tangent-kernels.pdf. Lénaïc Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31, pp. 3036–3046. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ a1afc58c6ca9540d057299ec3016d726-Paper.pdf. Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. *Advances* in Neural Information Processing Systems, 32, 2019. Nello Cristianini, John Shawe-Taylor, André Elisseeff, and Jaz Kandola. On kernel-target alignment. In T. Dietterich, S. Becker, and Z. Ghahramani (eds.), *Advances in Neural Information Processing* Systems, volume 14. MIT Press, 2001. URL https://proceedings.neurips.cc/paper/2001/file/ 1f71e393b3809197ed66df836fe833e5-Paper.pdf. Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 1675–1685. PMLR, 2019a. URL http://proceedings.mlr.press/v97/du19c.html. Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes overparameterized neural networks. In *International Conference on Learning Representations*, 2019b. URL https://openreview.net/forum?id=S1eK3i09YQ. Jeffrey L. Elman. Learning and development in neural networks: The importance of starting small. *Cognition*, 48:71–99, 1993. Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. *Advances in Neural Information Processing Systems*, 33:2881–2891, 2020. Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M Roy, and Surya Ganguli. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 5850–5861. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 405075699f065e43581f27d67bb68478-Paper.pdf. Mario Geiger, Stefano Spigler, Arthur Jacot, and Matthieu Wyart. Disentangling feature and lazy training in deep neural networks. *Journal of Statistical Mechanics: Theory and Experiment*, 2020(11):113301, nov 2020. doi: 10.1088/1742-5468/abc4de. URL https://doi.org/10.1088%2F1742-5468%2Fabc4de. Thomas George. NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent Kernels in PyTorch, February 2021. URL https://doi.org/10.5281/zenodo.4532597. Gauthier Gidel, Francis Bach, and Simon Lacoste-Julien. Implicit regularization of discrete gradient dynamics in linear neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'AlchéBuc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ f39ae9ff3a81f499230c4126e01f421b-Paper.pdf. Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J. Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. *The Annals of Statistics*, 50(2):949 - 986, 2022. doi: 10.1214/ 21-AOS2133. URL https://doi.org/10.1214/21-AOS2133. Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. What do compressed deep neural networks forget? *arXiv preprint arXiv:1911.05248 [cs.LG]*, 2020. Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In *NIPS*, pp. 8571–8580. 2018. Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C. Mozer. Characterizing structural regularities of labeled data in overparameterized models. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pp. 5034–5044. PMLR, 2021. URL http: //proceedings.mlr.press/v139/jiang21k.html. Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin Edelman, Tristan Yang, Boaz Barak, and Haofeng Zhang. Sgd on neural networks learns functions of increasing complexity. *Advances in neural* information processing systems, 32, 2019. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In *International* conference on machine learning, pp. 1885–1894. PMLR, 2017. Dmitry Kopitkov and Vadim Indelman. Neural spectrum alignment: Empirical study. In International Conference on Artificial Neural Networks, pp. 168–179. Springer, 2020. Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. *Advances in Neural Information Processing Systems*, 32, 2019. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. Eran Malach, Pritish Kamath, Emmanuel Abbe, and Nathan Srebro. Quantifying the benefit of using differentiable learning over tangent kernels. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 7379–7389. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/malach21a.html. Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. *Proceedings of the National Academy of Sciences*, 115(33):E7665–E7671, 2018. ISSN 0027-8424. doi: 10.1073/pnas.1806579115. URL https://www.pnas.org/content/115/33/E7665. Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. *ICLR workshop track*, 2015. Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. What can linearized neural networks actually say about generalization? *Advances in Neural Information Processing Systems*, 34, 2021. Jonas Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, and Matthieu Wyart. Geometric compression of invariant manifolds in neural networks. *Journal of Statistical Mechanics: Theory and Experiment*, 2021 (4):044001, 2021. Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. *Advances in Neural Information Processing Systems*, 33:19920–19930, 2020. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International Conference on Machine Learning, pp. 5301–5310. PMLR, 2019. Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks. In *International Conference on Learning Representations*, 2020a. URL https://openreview. net/forum?id=ryxGuJrFvS. Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. In *International Conference on Machine Learning*, pp. 8346–8356. PMLR, 2020b. Pedro Savarese, Itay Evron, Daniel Soudry, and Nathan Srebro. How do infinite width bounded norm networks look in function space? In *Proceedings of the 32nd Annual Conference on Learning Theory* (COLT), volume PMLR 99, pp. 2667–2690, 2019. Andrew M. Saxe, James L. Mcclelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural network. In *In International Conference on Learning Representations*, 2014. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJlxm30cKm. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. Francis Williams, Matthew Trager, Daniele Panozzo, Claudio Silva, Denis Zorin, and Joan Bruna. Gradient dynamics of shallow univariate relu networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 1f6419b1cbe79c71410cb320fc094775-Paper.pdf. Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In Jacob Abernethy and Shivani Agarwal (eds.), *Proceedings of Thirty Third Conference on Learning Theory*, volume 125 of *Proceedings of Machine Learning Research*, pp. 3635–3673. PMLR, 09–12 Jul 2020. URL https: //proceedings.mlr.press/v125/woodworth20a.html. Greg Yang and Edward J. Hu. Tensor programs IV: feature learning in infinite-width neural networks. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine* Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning* Research, pp. 11727–11737. PMLR, 2021. URL http://proceedings.mlr.press/v139/yang21c.html. Greg Yang and Hadi Salman. A fine grained spectral perspective on neural networks. arxiv preprint arXiv:1907.10599[cs.LG], 2019. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. *ICLR*, 2017. Xiao Zhang, Haoyi Xiong, and Dongrui Wu. Rethink the connections among generalization, memorization, and the spectral bias of dnns. In Zhi-Hua Zhou (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pp. 3392–3398. International Joint Conferences on Artificial Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/467. URL https://doi.org/10.24963/ijcai.2021/467. Main Track.
Review 1: Summary: This submission studies the "simplicity bias" of neural network training dynamics in different regimes defined by the model parameterization. Using various measures of example difficulty, the authors empirically show that the nonlinear feature learning regime has a stronger simplicity bias, meaning that easier examples are learned before difficult ones. The empirical findings are also supported by the theoretical analysis of a quadratic linear model, where the learning of different directions occurs at different time-scale (under sufficiently small initialization). Strengths and Weaknesses: ## Strength The studied problem is interesting and relevant; the theory of two-layer neural networks is usually divided into two different settings: the lazy (kernel) regime and the feature learning (mean-field) regime. Most previous works focused on proving optimization and generalization guarantees for neural networks in the two regimes, but to my knowledge, how this different parameterization affects the implicit bias of gradient descent (measured by example difficulty) has not been thoroughly studied. The experimental results demonstrate an interesting separation in the learning behavior, which may inspire future research towards understanding the limitation of the NTK description of neural networks. ## Weakness I have the following concerns: 1. The studied two-layer linear model is very idealized, and due to the quadratic parameterization, the sequentialization of learning is not surprising, as seen in many prior works on deep linear networks. 2. The notion of learning order has already appeared in many prior papers that are not thoroughly discussed in this submission: - In idealized situations, gradient descent for kernel ridge regression already yields a "simplicity bias" where the complexity is defined by the polynomial degree, see [1][2]. - Similarly, the spectral bias of neural networks in the lazy (kernel) regime have been rigorously characterized in [3][4][5], from which we know that gradient descent learning favors large eigen-directions of the NTK, which often corresponds to low-complexity functions. - For neural network in the feature learning regime, [6] identified a class of staircase functions where the learning of low- and high-degree components is "coupled". This allows the feature learning model to learn the difficult high-degree parts faster than the linearized kernel model. 3. [minor] References to related works are not complete. - In the case of two-layer neural network, the feature learning regime is highly related to the mean-field regime. Please find the appropriate references for this. - The learning order in linear networks have been extensively studied but citations are missing. - Which theorem in [Frei et al. 2022] (cited on page 1) shows that neural network can "provably outperform any linear method"? I have the following questions: - In the experiments, how does the learning rate scaling affects the observed phenomena? It is known that large learning rate can alter the implicit bias and the feature learning dynamics [7] [8]. - For the theoretical analysis, would similar results holds for a vanilla two-layer linear network, where the different regimes are induced by changing the scale of initialization? - Neural networks in the lazy regime should enjoy optimization benefits due to the exponential convergence in the training loss. In Figure 1b, why does the linearized model converge much slower (and apparently not reaching small error)? [1] Ghosh et al. 2021. The three stages of learning dynamics in high-dimensional kernel methods. [2] Xiao 2022. Eigenspace restructuring: a principle of space and frequency in neural networks. [3] Bietti and Mairal 2019. On the inductive bias of neural tangent kernels. [4] Cao et al. 2020. Towards understanding the spectral bias of deep learning. [5] Nitanda and Suzuki 2020. Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime. [6] Abbe et al. 2022. The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. [7] Wu et al. 2021. Direction matters: on the implicit bias of stochastic gradient descent with moderate learning rate. [8] Ba et al. 2022. High-dimensional asymptotics of feature learning: how one gradient step improves the representation. Requested Changes: Please address the points in the Weaknesses section. Broader Impact Concerns: N/A ================================================== Review 2: Summary: In this work, the authors examine the differences between linearized and non-linear models, with respect to which examples are learned first. In a variety of settings, they present evidence that easier examples are learned first in non-linear models - or perhaps more accurately, easier examples contribute to a larger fraction of the loss. They show this for various definitions of "easiness", including an intuitive picture in the yin-yang model, the The authors also present a toy model which provably shows similar behavior. Strengths and Weaknesses: The basic concept of the paper, and the subsequent experiments are very sound. In particular, the authors did a good job of considering multiple ways to define difficulty of examples, and attempted to properly normalize comparisons by, for example, grouping points according to their training loss rather than the number of GD iterations. One question that arises is: do the accuracies show similar trends to the loss? Because loss and accuracy are only correlated, it's possible that even though non-linear models change loss more on easier examples, but the accuracy per-example is similar. This is somewhat answered in Figure 3 which does accuracy comparisons. However, in this scenario both the low accuracy (early time) and high accuracy (late time) are very similar across the models - making it harder to interpret the intermediate accuracy behavior. The analysis of the theoretical model was very clean. However, it is unclear if the mechanism at play in the theoretical model is relevant for the phenomenology in real networks; the theoretical model only really allows for adjustment of the magnitude of features, not their directions. Requested Changes: For Figures 1 and 2, it would also be informative to see plots in terms of classification accuracy. One of the interesting features about classification problems, especially those trained with cross-entropy loss is that the loss is only _correlated_ with the actual objective, the accuracy. It would be good to understand whether or not the effect is driven by non-linear models having more certainty in the easier datapoints, versus the classification being more correct on the easier datapoints. In particular it would be good to look at the accuracies as a function of c-score for the same number of GD steps, for optimal learning rates for the different models. I believe that this is a critical change. Figure 2, top left, is currently hard to parse. I don't feel confident looking at that plot and coming to the conclusions of the authors. Perhaps it would be useful to also have a plot of the differences between the models? Also, there should be a colorbar indicating the C-score. For Figure 2, bottom, I didn't quite understand why the gap in the solid curves implied that easier examples were being learned first. Some more clarification on why "increasing loss more on noisy data" means "easier examples are learned first" would be quite helpful. Figure 3 shows some curves with $\alpha <1$ - a setup which was described and analyzed in https://arxiv.org/abs/2010.07344, which should be added to the references. It would be interesting to included curves with other alpha values in Figures 1 and 2 (more specifically 2, I think figure 1 in its current form is simple and effective). This is especially interesting because, by the above reference, for $\alpha <1$ the optimal learning rate goes as $1/\alpha$ ($1/\beta$ in the paper), which means that the change in loss slows down (in terms of GD steps). In this work my understanding is that the $\alpha = 1$ setting always trains faster than the $\alpha = 100$ setting. It would also be useful to have an experiment on a dataset-model pair where the non-linear model performs significantly better than the linearized model after training - or at least shows accuracy differences according to example difficulty. The results of Figure 3 feel a bit weak because the curves diverge and then converge again, as it appears that the linearized model and the non-linear model learn to similar accuracies. Perhaps creating similar accuracy curves in the CIFAR problem, stratified by c-scores, would be sufficient. I think this is a critical change. Broader Impact Concerns: None ================================================== Review 3: Summary: The authors study the difference between the lazy and feature learning regimes from the perspective of image difficulty. The authors empirically show that the network learns easy samples faster both in the linear and feature regimes; however, the difference between the learning speeds of the easy vs. hard groups is more pronounced in the feature learning regime. The contributions are: * The authors use three metrics to check whether the network stays in the linear regime despite being finite-width, * Learning curves are presented in both regimes for CIFAR-10, a small subset of CelebA and Waterbirds, * A quadratic model is introduced and solved analytically which is then used to study numerically learning speeds in groups for three toy example datasets. Strengths and Weaknesses: *Strengths:* * Using diverse datasets and metrics reduces the bias due to the metric or architecture used to calculate the image difficulty score. * Solid check with three metrics to make sure that the network stays in the linear regime *Weaknesses:* * The major weakness is that the difference between the linear and feature learning regimes is tiny. Even in the linear regime, I understand that sequentialization happens by looking at the learning curves. However from reading the text, it may sound like this only happens in the feature learning regime. * The title rhymes very well, but --hasty-- sounds like a negative property of feature learning, but I'm not sure if the authors made any such claim. * The datasets used in spurious correlation experiments are small: overall $180 \times 4 = 720 $ samples. It is not hard to believe that the results would generalize to larger sample sizes given the CIFAR10 results, but the small dataset size weakens the results for spurious correlation experiments. Requested Changes: * Discussion related to Figure 3 needs to be improved. Why are the learning curves in the middle for the Waterbirds dataset non-monotonic compared to CelebA? Also, the training accuracy starts at 0.7 which is very high for the first epoch due to the small dataset size. * How do the results compare with the gradient starvation paper from NeurIPS-2021 [1]? *Minor changes:* * Contributions point 2, the first sentence needs improvement. We present a unified picture, adequately normalized visualizations, etc. are too bombastic. If the toy dataset example in Fig. 1 is referred to here, please explicitly write it so. * Figure 2 right panel figures are hard to read. Using smaller markers should help. * Figure 4 middle figure can be plotted for different levels of label corruption $q$. * In Section 4.4., why only use small $\sigma$ for initialization? A larger $\sigma$ should remove the loss plateaus at the beginning of training in the feature regime, thus the reverse sequentialization phase should be more visible. [1] Pezeshki, Mohammad, et al. "Gradient starvation: A learning proclivity in neural networks." Advances in Neural Information Processing Systems 34 (2021): 1256-1272. Broader Impact Concerns: NA. ================================================== Metareview: Recommendation: Accept as is Comment: The paper studies differences between lazy and feature learning regimes from the perspective of example difficulty. Authors empirically show that in the feature learning regime (as opposed to the lazy learning regime) networks learn the easier examples before difficult ones. The Authors also analyze toy models to theoretically demonstrate similar behavior occurs. Overall reviewers were convinced of the correctness of the claims. While the paper focuses on simple model / datasets for experiments, reviewers pointed out that these are done well and results are interesting and proposes phenomena meritting further study in the community thus of interest to the TMLR readership. After author response and discussions among reviewers, all reviewers recommended either accept or leaning accept. One reviewer raised that the submission is borderline. The AE agrees with reviewers recommendation and recommends accept as is. AE still strongly recommends authors to incorporate all the major/minor suggestions agreed upon with the reviewers. ==================================================
# Smoothed Differential Privacy Ao Liu aoliu.cs@gmail.com Core Machine Learning, Google Yu-Xiang Wang yuxiangw@cs.ucsb.edu Department of Computer Science, UC Santa Barbara Lirong Xia *xialirong@gmail.com* Computer Science Department, Rensselaer Polytechnic Institute Reviewed on OpenReview: *https://openreview.net/forum?id=CviCLt44Em* ## Abstract Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worstcase analysis. Often, DP classifies most mechanisms without additive noise as non-private (Dwork et al., 2014). Thus, additive noises are added to improve privacy (to achieve DP). However, in many real-world applications, adding additive noise is undesirable (Bagdasaryan et al., 2019) and sometimes prohibited (Liu et al., 2020). In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis (Spielman & Teng, 2004). Our notion, *smoothed DP*, can effectively measure the privacy leakage of mechanisms without additive noises under realistic settings. We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP. In addition, we prove several desirable properties of smoothed DP, including composition, robustness to post-processing, and distribution reduction. Based on those properties, we propose an efficient algorithm to calculate the privacy parameters for smoothed DP. Experimentally, we verify that, according to smoothed DP, the discrete sampling mechanisms are private in real-world elections, and some discrete neural networks can be private without adding any additive noise. We believe that these results contribute to the theoretical foundation of realistic privacy measures beyond worst-case analysis. ## 1 Introduction Differential privacy (DP), a *de facto* measure of privacy in academia and industry, is often achieved by adding additive noises (*e.g.,* Gaussian noise, Laplacian noise, and the discrete noise in exponential mechanism) to published information (Dwork et al., 2014). However, additive noises are procedurally or practically unacceptable in many real-world applications. For example, presidential elections often require a deterministic rule to be used (Liu et al., 2020). In such cases, though, *sampling noise* often exists, as shown in the following example. Example 1 (**Election with sampling noise).** *Due to COVID-19, many voters in the 2020 US presidential election* chose to submit their votes by mail. Unfortunately, it was estimated that the US postal service might have lost up to 300,000 mail-in ballots (0.2% of all votes) (Bogage & Ingraham, 2020). For the purpose of illustration, suppose these votes are distributed uniformly at random, and the histogram of votes is announced after the election day. A critical public concern about elections is: should publishing the histogram of votes be viewed as a significant threat to privacy? Notice that with sampling noise such as in Example 1, the (sampling) histogram mechanism can be viewed as a randomized mechanism, formally called *sampling-histogram* in this paper. The same question can be asked about publishing the winner under a deterministic voting rule with sampling noise. 1 The standard notion of DP (Definition 1 in Dwork et al., 2006a) measures the worst-case privacy leakage (see Section 2 for its formal definition and detailed discussions). At a high level, DP considers the worst-case input and worst-case output of the (random) mechanisms. ϵ and δ are DP's privacy parameters to measure the privacy leakage in the worstcase described above. At a high level again, ϵ is a mechanism-designer-decided threshold for the "acceptable amount of privacy leakage", while δ measures the probability that the threshold got exceeded. Thus, smaller *ϵ, δ* represents stronger privacy guarantees. The usual requirements upon a private mechanism are ϵ = O(1) and δ = o(1/n). The requirement on δ is more strict than ϵ because δ measures the "failure probability". If we apply DP to answer this question, we would then conclude that publishing the histogram can poses a significant threat to privacy, as the privacy parameter δ ≈ 1 (See Section 2 for the formal definition) in the following worst-case scenario: all except one vote are for the Republican candidate, and there is one vote for the Democratic candidate. Notice that δ ≈ 1 is much worse than the threshold for private mechanisms, δ = o(1/n), where n is the number of agents (voters). Moreover, using the adversary's utility as the measure of privacy loss (see Section 2 for the formal definition), in this (worst) case, the privacy loss is large (≈ 1, see Section 2 for the formal definition of utility), which means the adversary can make accurate predictions about every agent's preferences. However, DP does not tell us whether publishing the histogram poses a significant threat to privacy *in general*. In particular, the worst-case scenario described in the previous paragraph never happened even approximately in the modern history of US presidential elections. In fact, no candidates get more than 70% of the votes since 1920 (Leip, 2023), when the progressive party dissolved. It turns out that the privacy loss may not be as high as measured by DP, where the privacy loss is measured by the adversary's utility (*i.e.*, how accurately the adversary can infer/predict the votes, see Justification 3 in Section 2 for its formal definition). To see this, we assume 0.2% of the votes were randomly lost in the presidential elections of each year since 1920 (in light of Example 1). Figure 1 presents the adversary's utility under this assumption. It can be seen that the adversary's utility is very limited (at the order of 10−32 to 10−8, always smaller than the threshold of private mechanisms, 1/n). In other words, the adversary cannot get much information from the published histogram of votes. Figure 1 also plots the database-dependent privacy parameter δ(x) (Definition 2, a DP-like δ parameter for a specific database). One can see that δ(x) is closely related to the adversary's utility and is also at the order of 10−32 to 10−8. Besides, we observe an interesting decreasing trend in the adversary's utility, which implies that the elections become more private in more recent years. This is primarily due to the growth of the voting population, which is exponentially related to the adversary's utility (Theorem 3). In Appendix A.2, we further show that the elections are still private when only 0.01% of votes got lost. ![1_image_0.png](1_image_0.png) Figure 1: The privacy loss and adversaries' utility in US presidential elections. The smaller δ(x) is, the more private the election is. The smaller adversaries' utility is, the more private the election is. As another example, for *neural networks* (NNs), even adding slight noise can lead to dramatic decreases in the prediction accuracy, especially when predicting underrepresented classes (Bagdasaryan et al., 2019). Sampling noises also widely exist in machine learning, for example, in the standard practice of cross-validation as well as in training (e.g., batch-sampling when training NNs). Note that in all the above examples, the sampling noise is an "intrinsic" part of the mechanism. In comparison, the only purpose of adding additive noises is to improve privacy (with the cost of reducing accuracy) in most scenarios. As shown in these examples, the worst-case privacy according to DP might be too loose to serve as a practical measure for evaluating and comparing mechanisms with sampling noise (while without additive noise) in real-world applications. This motivates us to ask the following question. ## How Can We Measure Privacy For Mechanisms With Sampling Noise Under Realistic Models? The choice of model is critical and highly challenging. A model based on worst-case analysis (such as in DP) provides upper bounds on privacy loss, but as we have seen in Figure 1, in some situations, such upper bounds are too loose to be informative in practice. This is similar to the runtime analysis of an algorithm—an algorithm with exponential worst-case runtime, such as the simplex algorithm, can be faster than some algorithms with polynomial runtime in practice. Average-case analysis is a natural choice of the model, but since "*all models are wrong*" (Box, 1979), any privacy measure designed for a certain distribution over data may not work well for other distributions. Moreover, ideally, the new measure should satisfy the desirable properties that played a central role behind the success of DP, including *composition* and *robustness to post-processing*. These properties make it easier for the mechanism designers to figure out the privacy level of mechanisms. Unfortunately, we are not aware of a measure based on average-case analysis that has these properties. We believe that the *smoothed analysis* (Spielman, 2005) provides a promising framework for addressing this question. Smoothed analysis is an extension and combination of worst-case and average-case analyses that inherit the advantages of both. It measures the expected performance of algorithms under slight random perturbations of worst-case inputs. Compared with the average-case analysis, the assumptions of the smoothed analysis are much more natural. Compared with the worst-case analysis, the smoothed analysis can better describe the real-world performance of algorithms. For example, it successfully explained why the simplex algorithm is faster than some polynomial algorithms in practice (Spielman & Teng, 2004). Our Contributions. The main merit of this paper is a new notion of privacy for mechanisms with sampling noise (and without additive noises), called smoothed differential privacy (*smoothed DP* for short), which applies smoothed analysis to the privacy parameter δ(x) (Definition 2) as a function of the database x. In our model, the "ground truth" distribution of agents is from a set of distributions Π over data points, on top of which the nature adds random noises. Formally, the "smoothed" δ(x) is defined as $$\delta_{\mathrm{SDP}}\triangleq\operatorname*{max}_{\vec{\pi}}{\big(}\,\mathbb{E}_{x\sim\vec{\pi}}\left[\delta(x)\right]{\big)},$$ where x ∼ ⃗π = (π1, · · · , πn) ∈ Πn means that for every 1 ≤ i ≤ n, the i-th entry in the database independently follows the distribution πi. We note that Π is a parameter for smoothed analysis, not for the mechanisms M. Table 1 compares the ϵ and δ parameters of smoothed DP and DP. The sampling histogram algorithm refers to Algorithm 2 (in Section 5.1), which samples T = ⌈η · n⌉ data without replacement and outputs the histogram of the sampled data. Appendix A.3 shows the detailed settings of Table 1. We present a high-level comparison between smoothed DP and other DP(-like) notions in Table 2. A more detailed comparison between DP, smoothed DP, and distributional DP can be found in Table 3 of Section 3.2. | Notions | measures | without additive noise | with additive noise (of level 1/ϵ) | |----------------------|--------------|-----------------------------------------|--------------------------------------| | Smoothed DP | ϵ, δSDP | ϵ, exp − Θ(n) | - | | (this paper) | main message | private (under smoothed analysis) | private | | DP (Definition 1, | (ϵ, δ) | 0, η | η · ϵ, 0 | | Dwork et al., 2006a) | main message | non-private (under worst-case analysis) | private (if ϵ is small) | | Notions | database x | output S | adjacent database x ′ | |------------------------------------------------------------------------|-------------------------------|---------------------|-------------------------| | Smoothed DP | smoothed | | | | (Definition 3, this paper) | analysis | | | | DP (Definition 1, Dwork et al., 2006a) | worst-case analysis | worst-case analysis | | | | worst-case analysis | | | | Gaussian DP or f-DP (Dong et al., 2019) | | | | | Rényi DP Mironov (2017) | average-case analysis | | | | Concentrated DP | (weighted by Rényi divergence | | | | (Bun & Steinke, 2016; Dwork & Rothblum, 2016) | or sub-Gaussian divergence) | | | | Bayesian DP (Triastcyn & Faltings, 2020) | | less informative | | | Random DP (Hall et al., 2011) | less informative | adversary | | | | worst-case analysis | | | | Distributional DP (Bassily et al., 2013) | adversary | worst-case analysis | | | Table 2: The comparison between smoothed DP and other privacy notions. | | | | Smoothed DP *ϵ, δ*SDP ϵ, exp − Θ(n)– (this paper) main message **private** (under smoothed analysis) private DP (Definition 1, (ϵ, δ)0, η η · ϵ, 0 Dwork et al., 2006a) main message **non-private** (under worst-case analysis) private (if ϵ is small) Table 1: Compare DP and smoothed DP for our motivation example (US presidential election) with a constant sampling rate η (*e.g.*, η = 1 − 0.2% in the example). The ϵ part in δSDP is omitted for simplicity (see Theorem 3 for detailed discussions). The (*ϵ, δ*SDP) for smoothed DP with additive noise is not shown in this table because it is already private without additive noise. Theoretically, we prove that smoothed DP satisfies many desirable properties, including two properties also satisfied by the standard DP: *robustness to post-processing* (Proposition 4) and *composition* (Proposition 5). Besides, we prove two additional properties for smoothed DP, called *pre-processing* (Proposition 6) and *distribution reduction* (Proposition 7). Based on pre-processing and distribution reduction, we propose an efficient algorithm (Algorithm 1) to calculate the privacy parameters for smoothed DP. We further show that, under smoothed DP, many discrete mechanisms with small sampling noise (and without any other noise) are significantly more private than those guaranteed by DP. For example, the sampling-histogram mechanism in Example 1 has an exponentially small δSDP (Theorem 3), which implies that the mechanism protects voters' privacy in elections—and this is in accordance with the observation on US election data in Figure 1. We also note that the sampling-histogram mechanism is widely used in machine learning (e.g., the SGD in quantized NNs). In comparison, smoothed DP implies a similar privacy level as the standard DP in many continuous mechanisms. We proved that smoothed DP and the standard DP have the same privacy level for the widely-used sampling-average mechanism when the inputs are continuous (Theorem 4). Experimentally, we numerically evaluate the privacy level of the sampling-histogram mechanism using US presidential election data. Simulation results show an exponentially small δSDP, which is in accordance with our Theorem 3. Our second experiment shows that a one-step *stochastic gradient descent* (SGD) in quantized NNs (Banner et al., 2018; Hubara et al., 2017) also has an exponentially small δSDP. This result implies that SGD with gradient quantization can already be private in practice without adding any extra (additive) noise. In comparison, the standard DP notion always requires extra (additive) noise to make the network private at the cost of a significant reduction in accuracy. Related Work and Discussions. There is a large body of literature on the theory and practice of DP and its extensions. We believe that the smoothed DP introduced in this paper is novel. To the best of our knowledge, none of the literature has proposed any DP-like notions for the mechanism without additive noises. A notable exception is distributional DP (Bassily et al., 2013), which considers a less informative adversary to provide a privacy measure for deterministic mechanisms. However, since distribution DP does not require any randomness in the mechanism, its privacy guarantee is much weaker than other DP-like notions. Rényi DP (Mironov, 2017), Gaussian DP (Dong et al., 2019) and Concentrated DP (Bun & Steinke, 2016; Dwork & Rothblum, 2016) target to provide tighter privacy bounds for the adaptive mechanisms. Those three notions generalized the (*ϵ, δ*) measure of distance between distributions to other divergence measures. Bayesian DP (Triastcyn & Faltings, 2020) tries to provide an "affordable" measure of privacy that requires less additive noise than DP. With similar objectives, Bun and Steinke (Bun & Steinke, 2019) add noises according to the average sensitivity instead of the worst-case sensitivity required by DP. However, additive noises are required in (Bun & Steinke, 2019) and (Triastcyn & Faltings, 2020). Random DP (Hall et al., 2011) combines the high-level idea of distributional DP and Bayesian DP, which considers randomness in both the database x and its neighboring database x ′. Appendix A.4 discusses the related works in the field of smoothed analysis, quantized neural networks, etc. ## 2 Differential Privacy And Its Interpretations In this paper, we use n to denote the number of records (entries) in a database x ∈ X n, where X denotes all possible values for a single entry. n also represents the number of agents when one agent (one individual) can only contribute one record. We say that two databases *x, x*′are *neighboring* (denoted as x ≃ x ′) if one database can be gotten by replacing no more than one record from the other database. In the motivating example (Example 1), X = {0, 1}, where 0 represents a vote to one candidate while 1 represents the vote to the other candidate. Database x represents all voters' votes (a n-dimensional binary vector). A record represents a single dimension (*e.g.*, the i-th dimension) of x. Two databases are considered to be neighboring if no more than one voter's vote is different. Definition 1 (**Differential privacy**). Let M denote a randomized algorithm and S *be a subset of the image space of* M. Throughout this paper, image space" represents the space for the image of the (randomized) mechanism. M *is said* to be (ϵ, δ)-differentially private for some ϵ > 0, δ > 0, if for any S and any pair of neighboring database *x, x*′, Pr[M(x) ∈ S] ≤ e ϵ Pr[M(x ′) ∈ S] + δ, (1) Notice that the randomness comes from the mechanism M. DP guarantees immunity to many kinds of attacks (*e.g., linkage attacks* (Nguyen et al., Sep. 2013) and *reconstruction* attacks (Dwork et al., 2014)). Take *reconstruction attacks* for example, the adversary has access to a subset of the database (such information may come from public databases, social media, etc.). In an extreme situation, an adversary knows all but one agent's records. To protect the data of every single agent, DP uses δ = o(1/n) as a common requirement of private mechanisms (Dwork et al., 2014, p. 18). To meet this requirement, a private mechanism (under DP notion) usually1 need to include additive noises (*e.g.,* Gaussian noise, Laplacian noise, and the noise, which usually is discrete, in exponential mechanisms). Next, we recall two common interpretations/justifications on how DP helps protect privacy even in the extreme situation of reconstruction attacks. After that, we formally introduce the adversaries' utility in Figure 1 and use it to justify DP. Justification 1: DP prevents membership inference (Wasserman & Zhou, Mar. 2010; Kairouz et al., 2015). Assume that the adversary knows all entries except the i-th. Let x−i denote the database x with its i-th entry removed. With the information provided by the output M(x), the adversary can infer the missing entry by testing the following two hypotheses: H0: The missing entry is X (or equivalently, the database is x = x−i ∪ {X}). H1: The missing entry is X′(or equivalently, the database is x ′ = x−i ∪ {X′}). Suppose that after observing the output of M, the adversary uses a rejection region rule for hypothesis testing2, where H0 is rejected if and only if the output is in the rejection region S. For any fixed S, the decision rule can be wrong in two possible ways, false positive (Type I error) and false negative (Type II error). Mathematically, the Type I error rate EI(x) = Pr[M(x) ∈ S] while the Type II error rate EII(x ′) = Pr[M(x ′) ∈ S / ] = 1 − Pr[M(x ′) ∈ S]. According to the definition of DP, for any pair of neighboring databases *x, x*′, the adversary always has $$\geq1-2\delta$$ e ϵ· EI(x) + EII(x ′) ≥ 1 − δ and e $$\begin{array}{r l}{\operatorname{d}}&{{}e^{\epsilon}\cdot{\mathcal{E}}_{\mathrm{II}}(x^{\prime})+{\mathcal{E}}_{\mathrm{I}}(x)\geq1-\delta,}\end{array}$$ which implies that EI(x) and EII(x ′) cannot be small at the same time. When ϵ and δ are both small, both EI and EII becomes close to 0.5 (the error rates of random guess), which means that the adversary cannot get much information from the output of M. Justification 2: With high probability, M **is insensitive to the change of one record (Guingona et al., 2023).** In more detail, (*ϵ, δ*)-DP guarantees the distribution of M's output will not change significantly when replacing one record. According to Property (A) of Theorem 3.2 in Guingona et al. (2023), (*ϵ, δ*)-DP implies3 **A decoupling to Property (A) of Theorem 3.2** $\Pr_{a\sim\mathcal{M}(x)}\left[\frac{1}{2e^{\epsilon}}\leq\frac{\Pr[\mathcal{M}(x)=a]}{\Pr[\mathcal{M}(x^{\prime})=a]}\leq2e^{\epsilon}\right]$ ′. (2) for any pair of neighboring databases $x$ and $x^{\prime}$. The above inequality shows that the change of one record cannot make an output significantly more likely or significantly less likely (with at least 1 − 2δ probability). Since x and x ′ only differ in one record, the above formula also guarantees that the adversary cannot learn too much information about any single record of the database through observing the output of M (Dwork et al., 2014, p. 25). Justification 3: DP guarantees limited Bayesian adversary's utilities. We consider the same adversary as in Justification 1. Since the adversary has no information about the missing entry, he/she may assume a uniform prior distribution about the missing entry. To simplify notations, let Xi ∈ X denote the missing entry and let x−i denote the database x with Xi removed. For any Xi ∈ X , the adversary's posterior distribution (after observing output a from mechanism M) is $$=1\;{\mathrm{v}}\;{\mathrm{m}}{\mathrm{i}}{\mathrm{g}}\;\;{\mathrm{o}}$$ $$\Pr\left[X_{i}|a,x_{-i}\right]=\frac{\Pr\left[a|X_{i},x_{-i}\right]\cdot\Pr\left[X_{i}|x_{-i}\right]}{\Pr\left[a|x_{-i}\right]}=\frac{\Pr\left[\mathcal{M}(x_{-i}\cup\{X_{i}\})=a\right]\cdot\Pr\left[X_{i}\right]}{\sum_{X^{\prime}}\left(\Pr\left[\mathcal{M}(x_{-i}\cup\{X^{\prime}\})=a\right]\cdot\Pr\left[X^{\prime}\right]\right)}$$ $$=\frac{\Pr\left[\mathcal{M}(x_{-i}\cup\{X_{i}\})=a\right]}{\sum_{X^{\prime}}\Pr\left[\mathcal{M}(x_{-i}\cup\{X^{\prime}\})=a\right]}.$$ We can easily check that $\Pr\left[\mathcal{M}(x_{-i}\cup\{X^{\prime}\})=a\right]$ is a A Bayesian predictor predicts the missing entry Xithrough maximizing the posterior probability. For the adversary with uniform prior, when the output is a, the 0/1 loss of the Bayesian predictor is ℓ0-1(*a, x*−i) = 02· max i(Pr [Xi|*a, x*−i]) + 12· 1 − max i(Pr [Xi|*a, x*−i])= 1 − max i(Pr [Xi|*a, x*−i]). 1Some differentially private mechanisms such as "stability-based query release" appear to not require additive noise with high probability if the local sensitivity is 0 for the input database and for all other databases that differ by at most ln(1/δ)/ϵ data points (Thakurta & Smith, 2013). Note that in our case (such as the motivating example), the local sensitivity is not 0. Also, the "stability-based" methods still need randomization (though not adding noise). 2The adversary can use any decision rule, and the rejection region rule is adopted just for example. 3Theorem 3.2 in Guingona et al. (2023) shows (*ϵ, δ*)-DP implies (ln 2)ϵ, 2δ-probabilistic DP, which is equivalent to (2). The formal definition of probabilistic DP can be found in Definition 6 in Machanavajjhala et al. (2008). $i$])) = 1 − max (P. It's not hard to check that a always-correct prediction has zero loss and any always-incorrect prediction has loss one. Then, we define the adjusted utility of adversary (in Bayesian prediction), which is the expectation of a normalized version of ℓ0-1. Mathematically, for a database x, we define the adjusted utility with threshold t as follows, $$u(t,x_{-t})=\frac{1}{1-t}\cdot\max_{X_{t}}\left(\mathbb{E}_{a\sim\mathcal{M}(x_{-t}\cup X_{t})}\Big{[}\max\left\{0,1-t-\ell_{0,1}(a)\right\}\Big{]}\right).\tag{3}$$ In short, u(*t, x*−i) is the worst-case expectation of 1 − ℓ0-1 while the contribution from predictors with loss larger than 1 − t is omitted. Especially, when the threshold t ≥ 1/*|X |*, an always correct predictor (ℓ0-1 = 0) has utility 1 and a random guess predictor (ℓ0-1 = 1 − 1/*|X |*) has utility 0. For example, we let X = {0, 1} and consider the coin-flipping mechanism MCF with support X , which output Xi with probability p and output 1 − Xi with probability 1 − p. When p = 1, the entry Xiis non-private because the adversary can directly learn it from the output of MCF. Correspondingly, the adjusted utility of adversary is 1 for any threshold t ∈ (0, 1). When p = 0.5, the mechanism gives an output uniformly at random from X . In this case, the output of M cannot provide any information to the adversary. Correspondingly, the adjusted utility of adversary is 0 for any threshold t ∈ (0.5, 1). The following lemma, which is a direct corollary of Theorem 1 and Corollary 3, shows that the adjusted utility is upper bounded by the δ parameter of DP. Note that Lemma 1 also matches the main message of Figure 1. Lemma 1. Let mechanism M be (ϵ, δ)*-differentially private. Then, for the above-defined adjusted utility,* $$u\Big({\frac{e^{\epsilon}}{e^{\epsilon}+1}},\,x_{-i}\Big)\leq2\delta.$$ In other words, when both ϵ and δ are small, the adversary cannot accurately predict the missing entry in database x. ## 3 Smoothed Differential Privacy Recall that DP is based on the worst-case analysis over all possible databases. However, as shown in Example 1 and Figure 1, the worst-case nature of DP sometimes leads to an overly pessimistic measure of privacy loss, which may bring unnecessary additive noise in the hope of improved privacy but at the cost of accuracy. For example, we assume a database with n binary records. DP considers the worst-case of x. Here, we focus on one of the worst-cases, x = (0, *· · ·* , 0, 1), whose worst-case neighboring database x ′ = (0, *· · ·* , 0, 0). From the adversaries' point of view, he/she knows all records in the database except the last one (the first n − 1 records). Under the above worst-case, the sampling-histogram mechanism is non-private because the adversary directly knows the missing entry if the last entry, "1", got sampled. However, the above worst-case might be very rare in some real-world applications (*e.g.*, Example 1) and the mechanism is actually private when the database is not extremely close to the worst case (Figure 1). Motivated by this, we propose smoothed DP, which has similar privacy guarantees as DP, but is able to measure the privacy level of sampling-based mechanisms (instead of classifying all of them as non-private). This section formally introduces smoothed DP, which applies the smoothed analysis to the database-dependent privacy profile δ(x) and proves its desirable properties. Due to the space constraint, all proofs of this section can be found in Appendix C. ## 3.1 The Database-Dependent Privacy Profile We first introduce the *database-dependent privacy profile* δϵ,M(x), which measures the privacy leakage of mechanism M when its input is x. Here, we fix ϵ and let δ be database-dependent, which is opposite to data-dependent privacy loss (Papernot et al., 2018; Wang, 2019) where δ is fixed and ϵ is data-dependent. Here, we call δ *privacy profile* since it is a function of ϵ (Balle et al., 2020). Definition 2 (**Database-dependent privacy profile** δϵ,M(x) (Dwork et al., 2006b)). Let M : X n → A *denote a* randomized mechanism. Given any database x ∈ X n and any ϵ > 0*, define the database-dependent privacy profile as:* $$\delta_{\epsilon,\mathcal{M}}(x)\triangleq\operatorname*{max}\Big(0,\;\operatorname*{max}_{x^{\prime}:x^{\prime}\cong x}\big(d_{\epsilon,\mathcal{M}}(x,x^{\prime})\big),\;\operatorname*{max}_{x^{\prime}:x^{\prime}\cong x}\big(d_{\epsilon,\mathcal{M}}(x^{\prime},x)\big)\Big),$$ _where $d_{e,\mathcal{M}}(x,x^{\prime})=\max_{\mathcal{S}}\left(\,\Pr\left[\mathcal{M}(x)\in\mathcal{S}\right]-e^{\varepsilon}\cdot\Pr\left[\mathcal{M}(x^{\prime})\in\mathcal{S}\right]\,\right)$ and "$\simeq$" means neighboring._ In words, δϵ,M(x) is the minimum δ values, such that the (*ϵ, δ*)-DP requirement on M (Inequality (1)) holds for any neighboring pairs (*x, x*′) and (x ′, x). The definition of δϵ,M(x) guarantees the adversaries to have the same prior knowledge as DP. δϵ,M(x) considers the worst-case neighboring dataset x ′(technically, maxx≃x′ ), which indicates that the adversaries know the whole database except one entry. The next lemma reveals the connection between the adversary's utility u (defined in Justification 3 of Section 2) and dϵ,M(*x, x*′) + dϵ,M(x ′, x). Lemma 2. *Given mechanism* M : X n → A *and any pair of neighboring databases* x ≃ x ′, $$u{\Bigl(}{\frac{e^{\epsilon}}{e^{\epsilon}+1}},x\cap x^{\prime}{\Bigr)}<d_{\epsilon,{\mathcal{M}}}(x,x^{\prime})+d_{\epsilon,{\mathcal{M}}}(x^{\prime},x).$$ Lemma 2 shows that the adjusted utility is upper bounded by dϵ,M. Especially, when *|X |* = 2, we provide both upper and lower bounds to the adjusted utility in Lemma 10 of Appendix C.1, which means that dϵ,M(*x, x*′) is a good measure for the privacy level of M when *|X |* = 2. The following corollary shows that δϵ,M(x) upper bounds the adjusted utility of adversary. In other words, a small δϵ,M(x) guarantees that adversary cannot accurately predict the missing record in the database. Corollary 3. *Given mechanism* M : X n → A *and any pair of neighboring databases* x ≃ x ′, $$u{\bigg(}{\frac{e^{\epsilon}}{e^{\epsilon}+1}},x\cap x^{\prime}{\bigg)}<2\cdot\delta_{\epsilon,\mathcal{M}}(x).$$ DP as the worst-case analysis of δϵ,M(x). In the next theorem, we show that the privacy measure based on the worst-case analysis of δϵ,M is equivalent to the standard DP. Theorem 1 (**DP in** δϵ,M(x)). *Mechanism* M : X n → A is (ϵ, δ)*-differentially private if and only if,* $$\operatorname*{max}_{x\in{\mathcal{X}}^{n}}\left(\delta_{\epsilon,\,{\mathcal{M}}}(x)\right)\leq\delta.$$ $${\mathcal{A}}(x)]\;)\leq\delta,$$ $${\mathit{f o r\,e v e r y\,i\in\{1,\cdots,n\}}}.$$ ## 3.2 Formal Definition Of Smoothed Dp Armed with the database-dependent privacy profile δϵ,M(x), we now formally define smoothed DP, where the worstcase "ground truth" distribution of every agent is allowed to be any distribution from a set of distributions Π, on top of which Nature adds random noises to generate the database. We would like to note again that Π is a parameter for smoothed analysis instead of the mechanisms M. The smoothed analysis only controls how the database x is generated. The adversaries of smoothed DP will have the same prior knowledge as the adversaries of DP. Definition 3 (**Smoothed DP**). Let Π be a set of distributions over X *. We say* M : X n → A is (ϵ, δ, Π)*-smoothed* differentially private if, max⃗π∈ΠnEx∼⃗π [δϵ,M(x)] ≤ δ, where x ∼ ⃗π = (π1, · · · , πn) means that the i-th entry in the database follows πifor every i ∈ {1, · · · , n}. The threat models of DP, smoothed DP, and distributional DP are shown in Table 3. Note that the adversary "is able to choose" means that he/she can select the worst-case, and does not necessarily mean the adversary knows the information. In short, the adversary in smoothed DP is as knowledgeable as that in DP, but with less ability to choose the database x. Compared with distributional DP, the adversary in smoothed DP has more information (prior knowledge) about the database. Appendix B rigidly shows that smoothed DP is a stronger notion than distributional DP. Adversary's Prior knowledge Ability to choose database x Ability to choose neighboring database x ′ Ability to choose output S DP **Whole database except** one entry (x ∩ x ′) Able to choose the worst-case database x **Able to choose** worst-case diff(*x, x*′) Able to choose Smoothed DP Able to choose **worst-case output** S Distributional DP The distribution of x ∩ x ′ **data distributions from** Π Table 3: The threat model of DP, smoothed DP, and distribution DP, where diff(*x, x*′) represents the difference between x and x ′. Under the setting and notation of Justification 3 of DP, diff(*x, x*′) is Xi while x ∩ x ′is x−i. Like DP, smoothed DP bounds privacy leakage (in an arguably more realistic setting), via the following three justifications that are similar to the two common justifications of DP in Section 2. Justification 1: Smoothed DP prevents membership inference. Mathematically, a (*ϵ, δ,* Π)-smoothed DP mechanism M guarantees e ϵ· max ⃗π∈Πn E x∼⃗π [EI(x)] + max ⃗π∈Πn E x∼⃗π [EII(x ′)| x ′ ≃ x]≥ 1 − δ e ϵ· max ⃗π∈Πn E x∼⃗π [EII(x ′)| x ′ ≃ x]+ max ⃗π∈Πn E x∼⃗π [EI(x)] ≥ 1 − δ The proof follows after bounding Type I and Type II errors when the input is x by δϵ,M. That is, for a fixed database x, it is not hard to verify that $$\begin{array}{l}{{e^{\epsilon}\cdot{\mathcal{E}}_{\mathrm{I}}(x)+{\mathcal{E}}_{\mathrm{II}}(x^{\prime})\geq1-\operatorname*{max}_{x^{\prime}:x^{\prime}\simeq x}\left(d_{\epsilon,{\mathcal{M}}}(x,x^{\prime})\right)\quad{\mathrm{and}}}}\\ {{e^{\epsilon}\cdot{\mathcal{E}}_{\mathrm{II}}(x^{\prime})+{\mathcal{E}}_{\mathrm{I}}(x)\geq1-\operatorname*{max}_{x^{\prime}:x^{\prime}\simeq x}\left(d_{\epsilon,{\mathcal{M}}}(x^{\prime},x)\right)}}\end{array}$$ Then, by the definition δϵ,M(x), we have, $$e^{\epsilon}\cdot{\mathcal{E}}_{\mathrm{I}}(x)+{\mathcal{E}}_{\mathrm{II}}(x^{\prime})\geq1-\delta_{\epsilon,{\mathcal{M}}}(x)\quad{\mathrm{~and~}}\quad e^{\epsilon}\cdot{\mathcal{E}}_{\mathrm{II}}(x^{\prime})+{\mathcal{E}}_{\mathrm{I}}(x)\geq1-\delta_{\epsilon,{\mathcal{M}}}(x),$$ which means that EI and EII cannot be small at the same time when the database is x. Then, justification 1 can be gotten by applying smoothed analysis to both sides. It follows that the smoothed DP, which is a smoothed analysis of δϵ,M, can bound the smoothed Type I and Type II errors. Justification 2: Smoothed DP mechanisms are insensitive to the change of one record with high probability. Mathematically, a (*ϵ, δ,* Π)-smoothed DP mechanism M guarantees $$\operatorname*{max}_{\pi\in\Pi^{n}}\left(\operatorname*{\mathbb{E}}_{x\sim\pi}\left[\Pr_{a\sim\mathcal{M}(x)}\left[{\frac{1}{2e^{\epsilon}}}\leq{\frac{\Pr[\mathcal{M}(x)=a]}{\Pr[\mathcal{M}(x^{\prime})=a]}}\leq2e^{\epsilon}\right]\right]\right)\geq1-2\delta$$ The proof is, again, done through analyzing δϵ,M(x). More precisely, given any mechanism M, any ϵ ∈ R+ and any pair of neighboring databases *x, x*′, we have $$\mathrm{Pr}_{a\sim{\mathcal{M}}(x)}\left[\frac{1}{2e^{\epsilon}}\leq\frac{\mathrm{Pr}[{\mathcal{M}}(x)=a]}{\mathrm{Pr}[{\mathcal{M}}(x^{\prime})=a]}\leq2e^{\epsilon}\right]\geq1-2\delta_{\epsilon,{\mathcal{M}}}(x).$$ Then, justification 2 follows by applying smoothed analysis to both sides of the above inequality. As smoothed DP replaces the worst-case analysis with smoothed analysis, we also view δ = o(1/n) as a requirement for private mechanisms for smoothed DP. In addition to the two justifications above, Justification 3: Smoothed DP guarantees limited adversaries' utility in (Bayesian) predictions. Following similar reasoning as Justification 1, we know that the utility under realistic settings (or the smoothed utility) of the adversary cannot be larger than 2δ. Mathematically, an (*ϵ, δ,* Π)-smoothed DP mechanism M guarantees $$\operatorname*{max}_{\pi\in\Pi^{n}}\left(\mathbb{E}_{x\sim\pi}\left[u{\Big(}{\frac{e^{\epsilon}}{e^{\epsilon}+1}},x\cap x^{\prime}{\Big)}\right]\right)<2\delta.$$ At a high level, a small δ parameter of smoothed DP means the adversary cannot accurately predict the missing entry of database under realistic settings. ## 4 Properties Of Smoothed Dp First, we present four properties of smoothed DP and discuss how they can help mechanism designers figure out the smoothed DP parameters of mechanisms. We first present the robustness to *post-processing* property, which says no function can make a mechanism less private without adding extra knowledge about the database. The post-processing property of smoothed DP can be used to upper bound the privacy level of many mechanisms. With it, we know private data preprocessing can guarantee the privacy of the whole mechanism. Then, the rest part of the mechanism does not need to consider privacy issues. The proof of all four properties of the smoothed DP can be found in Appendix D. Proposition 4 (**Post-processing**). Let M : X n → A be a (ϵ, δ, Π)-smoothed DP mechanism. For any f : *A → A*′ (which can also be randomized), f ◦ M : X n → A′is also (ϵ, δ, Π)*-smoothed DP.* Then, we introduce the composition theorem for the smoothed DP, which bounds the smoothed DP property of databases when two or more mechanisms publish their outputs about the same database. Proposition 5 (**Composition**). Let Mi: X n → Ai be an (ϵi, δi, Π)-smoothed DP mechanism for any i ∈ [k]*. Define* M[k]: X n →Qk i=1 Ai as M[k](x) = M1(x), · · · ,Mk(x)*. Then,* M[k]is Pk i=1 ϵi,Pk i=1 δi, Π *-smoothed DP.* In practice, Π might be hard to accurately characterize. The next proposition introduces the pre-processing property of smoothed DP, which says the distribution of data (Π) can be replaced by the distribution of features (f(Π), defined as follows) if the features are extracted using any deterministic function(s). Note that this only replaces the smoothed analysis-related Π and the mechanism remains unchanged. For example, in deep learning, the distribution of data can be replaced by the distribution of gradients4, which is usually much easier to estimate in real-world training processes. More technically, the pre-processing property guarantees that any deterministic way of data preprocessing is not harmful to privacy. To simplify notation, we let f(π) be the distribution of f(X) where X ∼ π. For any set of distributions Π = {π1, · · · , πm}, we let f(Π) = {f(π1), · · · , f(πm)}. Proposition 6 (**Pre-processing for deterministic functions**). Let f : X n → X˜n *be a deterministic function and* M : X˜n → A be a randomized mechanism. Then, M ◦ f : X n → A is (ϵ, δ, Π)-smoothed DP if M is *ϵ, δ, f*(Π)- smoothed DP. The following proposition shows that any two sets of distributions with the same convex hull have the same privacy level under smoothed DP. With this, the mechanism designers can ignore all inner points and only consider the convex hull's vertices when calculating the mechanisms' privacy level. Let CH(Π) denote the convex hull of Π. Proposition 7 (**Distribution reduction**). Given any ϵ, δ ∈ R+ and any Π1 and Π2 *such that* CH(Π1) = CH(Π2), a mechanism M is (ϵ, δ, Π1)-smoothed DP if and only if M is (ϵ, δ, Π2)*-smoothed DP.* We provide an example of how Proposition 7 helps calculate the privacy level. Assume the database has two possible types of data (i.e., m ≜ *|X |* = 2). We use π = (1, 1 − p) to represent the distribution over *|X |* such that the first type occurs with probability p and the second type occurs with probability 1−p. Assuming the mechanism designer considers an infinite set of distribution Π = (p, 1 − p) : p ∈ (0.2, 0.8) , its easy to check that Π's convex hull CH(Π) = (p, 1−p) : p ∈ [0.2, 0.8] and CH(Π)'s set of vertices Π∗ =(p, 1−p) : p ∈ {0.2, 0.8} =(0.2, 0.8),(0.8, 0.2) . Since Π∗and Π have the same convex hull, according to Proposition 7, the mechanism designer only needs to consider Π∗(two distributions), instead of the infinite set Π. Proposition 7 also provides an efficient way to calculate the (exact) δ parameter of smoothed DP for anonymous mechanisms. Here, "anonymous" means the mechanism treat each data in the database "equally". Formally, we say one mechanism is anonymous if its output distribution will not change under arbitrarily permuted order of data in the database. In other words, an anonymous mechanism's output distribution only depends on the histogram of data. One can see that most commonly-used algorithms in machine learning (*e.g.,* SGD and AdaGrad) and voting (*e.g.,* plurality, Borda, and Copeland) are anonymous. Also note that the commonly used noisy mechanisms (*e.g.,* Laplacian, Gaussian, and Exponential) will keep the anonymity property of mechanisms. Algorithm 1 efficiently calculate the privacy profile δ of smoothed DP for anonymous mechanisms. Again, we let Π∗ denote the set of CH(Π)'s vertices and let ℓ ∗ ≜ |Π∗| denote the cardinality of Π∗. Q denote the set of all histograms on Π∗ of n records. Here, we slightly abuse notation and use "for ⃗π ∈ Q" to represent that each histogram is visited exactly once in the corresponding for loop. | Algorithm 1: Calculate the (exact) privacy profile δ for smoothed DP | | |-------------------------------------------------------------------------------------------------------|----| | 1 Inputs: An anonymous mechanism M, parameter ϵ ∈ R+, size o | f | | database n, and a set of distribution Π | | | 2 Initialization: Calculate Π∗ according to Π and Q according to Π | ∗ | | 3 (Same as DP) Calculate δϵ,M for all histograms 4 for ⃗π ∈ Q do 5 Calculate δj ≜ Ex∼⃗π [δϵ,M(x)] 6 end | | 4Gradient is what the mechanism designers target to privatize for deep neural networks. This is because one of the most serious privacy leakage in deep learning is the gradient residue on cloud GPUs (its VRAM can be visible to other cloud users, see Abadi et al., 2016). We call gradients as features here because it is calculated according to the data in the database. We say Step 3 in Algorithm 1 is the "same as" DP because calculating the privacy profile δ of DP for general mechanisms requires calculating δϵ,M. We note calculating the exact privacy profile δ of DP may not require calculating δϵ,M for some mechanisms (*e.g.*, additive noise mechanism). We will present a similar result for smoothed DP in Theorem 3 of Section 5.1. Theorem 2 presents the runtime of Algorithm 1 (calculate the privacy profile δ of smoothed DP for any anonymous mechanisms), which only requires a polynomial extra time than DP in most cases. Theorem 2 (Runtime of Algorithm 1). *Using the notations above, the complexity of calculating the privacy profile* δ of smoothed DP for any anonymous mechanisms is On 2m+ℓ ∗−3 + ℓ log ℓ ∗+ CplDP, where ℓ denotes the number of vertices in Π and CplDP *represents the runtime of Step 3 in Algorithm 1.* As to be shown in Section 5, the application scenarios of Smoothed DP are those when m is not large. ℓ could be treated as a constant if Π's geometric shape is not extremely complicated. Thus, we believe the runtime of calculating the exact privacy profile δ only requires a polynomial extra time than DP in most cases. ## 5 Smoothed Dp As A Measure Of Privacy This section uses smoothed DP to measure the privacy of some commonly-used mechanisms, where the sampling noise is intrinsic and unavoidable (as opposed to additive noises such as Gaussian or Laplacian noises). Our analysis focuses on two widely-used algorithms where the intrinsic randomness comes from sampling without replacement. In addition, we compare the privacy levels of smoothed DP with DP. All missing proofs of this section can be found in Appendix E. ## 5.1 Discrete Mechanisms Are More Private Than What Dp Predicts In this subsection, we study the smoothed DP property of (discrete) sampling-histogram mechanism (SHM), which is widely used as a pre-possessing step in many real-world applications like the training of NNs. As smoothed DP satisfies *post-processing* (Proposition 4) and *pre-processing* (Proposition 5), the smoothed DP property of SHM can upper bound the privacy level of many mechanisms (that uses SHM) in practice. Let X denote a finite set and m ≜ *|X |* denotes the cardinality of X . The histogram of a database x ∈ X n (denoted as **hist**(x)) is an m-dimensional vector. The j-th component of **hist**(x) is the number of j-th type of data in x. It's easy to check that **hist**(x) ∈ {0, · · · , n} m and ||**hist**(x)||1 = n. SHM first sample T = ⌈η · n⌉ data from the database and then output the histogram of the T samples. Formally, we define the samplinghistogram mechanism in Algorithm 2. Note that we require all data in the database to be chosen from a finite set X . | Algorithm 2: Sampling-histogram mechanism MH | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 Inputs: A finite set X , sampling rate η ∈ (0, 1), and a database x = {X1, · · · , Xn} where Xi ∈ X for all i ∈ {1, · · · , n} 2 Randomly sample T = ⌈η · n⌉ data from x without replacement. The sampled data are denoted by Xj1 , · · · , XjT . 3 Output: The histogram hist(Xj1 , · · · , XjT ) | Smoothed DP of mechanisms based on SHM. The smoothed DP of SHM can be used to upper bound the smoothed DP of the following three groups of mechanisms/algorithms. The first group consists of deterministic voting rules, as presented in the motivating example in Introduction. The sampling procedure in SHM mimics the votes that got lost. The second group consists of machine learning algorithms based on randomly-sampled training data, such as crossvalidation. The (random) selection of the training data corresponds to SHM. Notice that many training algorithms are essentially based on the histogram of the training data (instead of the ordering of data points). Therefore, overall the training procedure can be viewed as SHM plus a post-processing function (the learning algorithm). Consequently, the smoothed DP of SHM can be used to upper bound the smoothed DP of such procedures. The third group consists of SGD of NNs with gradient quantization (Zhu et al., 2020; Banner et al., 2018), where the gradients are rounded to 8-bit in order to accelerate the training and/or the inference of NNs. The smoothed DP of SHM can be used to bound the privacy leakage in each SGD step of the NN, where a batch (a subset of the training set) is firstly sampled and the gradient is the average of the gradients of the sampled data. DP vs. Smoothed DP for SHM. We are ready to present the main theorem of this paper, which indicates that SHM is private under some mild assumptions. We say distribution π is f-*strictly positive* if there exists a positive function f(*n, m*) such that π(X) ≥ f(*n, m*) for any X in the support of π. A set of distributions Π is f-*strictly positive* if there exists a positive function f(*n, m*) such that every π ∈ Π is f-strictly positive. The f-strictly positive assumption is often considered mild in *elections* (Xia, 2020) and *discrete machine learning* (Laird & Saul, Apr. 1994) when f(*m, n*) = O(1). All following discussions in this section assume m to be a constant. Note that constants are omitted when analyzing asymptotic manner (*e.g.*, as a multiple within O(·), Θ(·), or ω(·)). Theorem 3 (**DP vs. Smoothed DP for Sampling Histogram Mechanism** MH). For any MH, given an f-strictly positive set of distributions Π, a finite set X , and n, T ∈ Z+*, we have:* (i) **(Smoothed DP)** *Given any* ϵ ≥ ln 1 1−η + c where c is a constant, MH is *ϵ, m* · exp -−Θf(*n, m*) · n , Π - smoothed DP. (ii) **(Tightness of smoothed DP bound)** For any ϵ > 0*, there does not exist* δ = exp − ω[ln f(m, n)] · n*such* that MH is (ϵ, δ, Π)*-smoothed DP.* (iii) **(DP)** For any ϵ > 0, there does not exist δ < η such that MH is (ϵ, δ)*-DP.* This theorem says the privacy leakage is exponentially small under real-world application scenarios. In comparison, DP cares too much about the extremely rare cases and predicts a constant privacy leakage. If f is a constant function and m = Θ(1), Property (ii) indicates that the bound in (i) is tight. Also, note that our theorem allows T to be in the same order as n. For example, when setting T = 95% × n and Π be f-strictly positive for a constant f, SHM is (3, exp(−Θ(n)), Π)-smoothed DP, which is an acceptable privacy threshold in many real-world applications (Liu et al., 2019). For example, iOS 10.12.3 requires ϵ ≤ 6 and iOS 10.1.1 requires ϵ ≤ 14 Tang et al. (2017). Appendix F proves similar bounds for the SHM with replacement. The following remarks shows a non-asymptotic version of Theorem 3(i), which relates ϵ and δ and provides a non-asymptotic privacy bound for SHM. To simplify notations, we let g(*η, ϵ*) ≜η −1(1 − e −ϵ) − 12. Remark 8 (Privacy bound for MH). Given an f-strictly positive set of distributions Π, a finite set X , and *n, T* ∈ Z+, MH is ϵ, exp − 1 6 · g(η, ϵ) · f(*m, n*) · n+ m · exp − 1 8 · f(*m, n*) · n, Π -smoothed DP for any ϵ > ln 1 1−η . ## 5.2 Continuous Mechanisms Are Similar To What Dp Predicts In this section, we show that the sampling mechanisms with continuous support are still not privacy-preserving under smoothed DP. The "gap" between discrete and continuous SHM comes from the fact that SHM becomes less private when *|X |* increases. | Algorithm 3: Continuous sampling-average mechanism MA | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 Inputs: The number of samples T and a database x = {X1, · · · , Xn} where Xi ∈ [0, 1] for all i ∈ {1, · · · , n} 2 Randomly sample T = ⌈η · n⌉ data from x without replacement. The sampled data are denoted as Xj1 , · · · , XjT . 1 3 Output: The average x¯ = P j∈[T] Xij T | Our result indicates that the neural networks without quantized parameters are not private without additive noise (*e.g.,* Gaussian or Laplacian noise). We use the sampling-average (Algorithm 3) algorithm as the standard algorithm for continuous mechanisms. Because sampling-average can be treated as SHM plus an average step, sampling-average is non-private also means SHM with continuous support is also non-private according to the post-processing property of smoothed DP. Theorem 4 (**Smoothed DP for continuous sampling-average**). *For any continuous sampling-average mechanism* MA, given any set of strictly positive5 distribution Π over [0, 1], any T, n ∈ Z+ and any ϵ ≥ 0*, there does not exist* δ < η such that MA is (ϵ, δ, Π)*-smoothed DP.* Theorem 4 does not contradict Property (i) of Theorem 3 because the continuous functions has m → ∞, which makes the upper bound of δ parameter for smoothed DP m · exp -−Θf(n, m) · n → ∞. 5Distribution π is strictly positive by c if pπ(x) ≥ c for any x in the support of π, where pπ is the PDF of π. ## 6 Experiments Smoothed DP in elections. We use a similar setting as the motivating example, where 0.2% of the votes are randomly lost. We numerically calculate the (exact) privacy profile δ of smoothed DP according to Algorithm 1. Here, the set of distributions Π includes the distribution of all 57 congressional districts of the 2020 presidential election. Using the *distribution reduction* property of smoothed DP (Proposition 7), we can remove all distributions in Π except DC and NE-26, which are the vertices for the convex hull of Π. Figure 2 (left) shows that the smoothed δ parameter is exponentially small in n when ϵ = 7, which matches our Theorem 3. We find that δ is also exponentially small when ϵ = 0.5, 1 or 2, which indicates that the sampling-histogram mechanism is also more private than DP's predictions for small ϵ's. Appendix G.2 shows the experiments with different settings on Π. All experiments of this paper are implemented in MATLAB 2021a and tested on a Windows 10 Desktop with an Intel Core i7-8700 CPU and 32GB RAM. ![11_image_0.png](11_image_0.png) Figure 2: Left: DP and smoothed DP of 2020 US presidential election when 0.2% of votes got lost. Right: DP and smoothed DP of 1-step SGD with 8-bit gradient quantization when the set of distributions is Π. In both plots, the vertical axes are in log-scale and the pink dashed line presents the δ parameter of DP with whatever (finite) ϵ. The dot lines are the exponential fittings of smoothed δ parameters. The left plot is an accurate calculation of δ. The shaded area shows the 99% confidence interval of the right plot. SGD with 8-bit gradient quantization. According to the pre-processing property of smoothed DP, the smoothed DP of (discrete) sampling-average mechanism upper bounds the smoothed DP of SGD (for one step). In 8-bit neural networks for computer vision tasks, the gradient usually follows Gaussian distributions (Banner et al., 2018). We thus let the set of distributions Π = {N8-bit(0, 0.122), N8-bit(0.2, 0.122)}, where N8-bit(*µ, σ*2) denotes the 8-bit quantized Gaussian distribution (See Appendix G for its formal definition). The standard variation, 0.12, is the same as the standard variation of gradients in a ResNet-18 network trained on CIFAR-10 database (Banner et al., 2018). We use the standard setting of batch size T = √n. Figure 2 (right) shows that the smoothed δ parameter is exponentially small in n for the SGD with 8-bit gradient quantization. The probabilities are estimated through 106independent trails. This result implies that the neural networks trained through quantized gradients can be private without adding additive noises. Also, see Appendix G.2 for the experiments under another three different settings on Π. ## 7 Conclusions And Future Work We propose a novel notion to measure the privacy leakage of mechanisms without additive noises under realistic settings. One promising next step is to apply our smoothed DP notion to the entire training process of quantized NNs. Is the quantized NN private without additive noise? If not, what level of additive noises needs to be added, and how should we add noises in an optimal way? More generally, we believe that our work has the potential of making many algorithms private without requiring too much additive noise. 6DC refers to Washington, D.C., and NE-2 refers to Nebraska's 2nd congressional district. ## Acknowledgments We thank the anonymous reviewers for their helpful comments. Lirong Xia acknowledges NSF \#1453542, \#2007994, \#2106983, and a gift fund from Google. Yu-Xiang Wang's work on this project was partially supported by NSF Award \#2048091. ## References Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pp. 308–318, 2016. Dan Alistarh, Demjan Grubic, Jerry Z Li, Ryota Tomioka, and Milan Vojnovic. QSGD: communication-efficient sgd via gradient quantization and encoding. In *Proc. Int. Conf. on Neural Inf. Process. Syst.*, pp. 1707–1718, 2017. Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In *Proc. Int. Conf. on Acoust., Speech & Signal Process.*, pp. 1131–1135, 2015. Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp. 15479–15488, 2019. Borja Balle, Gilles Barthe, and Marco Gaboardi. Privacy profiles and amplification by subsampling. *J. of Privacy and Confidentiality*, 10(1), 2020. Wolfgang Balzer, Masanobu Takahashi, Jun Ohta, and Kazuo Kyuma. Weight quantization in boltzmann machines. *Neural Netw.*, 4 (3):405–409, Jan. 1991. Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In *Proc. Int. Conf.* on Neural Inf. Process. Syst., pp. 5151–5159, 2018. Raef Bassily, Adam Groce, Jonathan Katz, and Adam Smith. Coupled-worlds privacy: Exploiting adversarial uncertainty in statistical data privacy. In *Proc. Annu. Symp. on Found. of Comput. Sci.*, pp. 439–448, 2013. Dorothea Baumeister, Tobias Hogrebe, and Jörg Rothe. Towards reality: Smoothed analysis in comput. social choice. In *Proc. Int.* Conf. Auton. Agents & Multiagent Syst., pp. 1691–1695, 2020. Arnaud Berlioz, Arik Friedman, Mohamed Ali Kaafar, Roksana Boreli, and Shlomo Berkovsky. Applying differential privacy to matrix factorization. In *Proceedings of the 9th ACM Conference on Recommender Systems*, pp. 107–114, 2015. Aditya Bhaskara, Moses Charikar, Ankur Moitra, and Aravindan Vijayaraghavan. Smoothed analysis of tensor decompositions. In Proc. Annu. ACM Symp. on Theory of Comput., pp. 594–603, 2014. Avrim Blum and John Dunagan. *Smoothed Analysis of the Perceptron Algorithm for Linear Programming*. Pittsburgh, PA, USA: Carnegie Mellon University, 2002. Jacob Bogage and Christopher Ingraham. USPS ballot problems unlikely to change outcomes in competitive states. *The Washington* Post, 2020. URL https://www.washingtonpost.com/business/2020/11/04/ballot-election-problems-usps/. Accessed: Mar. 28, 2023. George EP Box. *Robustness in the Strategy of Scientific Model Building*. Amsterdam, Netherlands: Elsevier, 1979. Tobias Brunsch, Kamiel Cornelissen, Bodo Manthey, and Heiko Röglin. Smoothed analysis of belief propagation for minimum-cost flow and matching. In *Proc. Int. Workshop on Algorithms and Comput.*, pp. 182–193, 2013. Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In *Proc. Theory of* Cryptogr. Conf., pp. 635–658, 2016. Mark Bun and Thomas Steinke. Average-case averages: private algorithms for smooth sensitivity and mean estimation. In *Proc. Int.* Conf. on Neural Inf. Process. Syst., pp. 181–191, 2019. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: training deep neural networks with binary weights during propagations. In *Proc. Int. Conf. on Neural Inf. Process. Syst.*, pp. 3123–3131, 2015. Jinshuo Dong, Aaron Roth, and Weijie J Su. Gaussian differential privacy. *arXiv:1905.02383*, 2019. Yuqing Du, Sheng Yang, and Kaibin Huang. High-dimensional stochastic gradient quantization for communication-efficient edge learning. *IEEE Trans. on Signal Process.*, 68:2128–2142, Mar. 2020. Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. *arXiv:1603.01887*, 2016. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In *Proc. Annu. Int. Conf. on the Theory and Appl. of Cryptographic Techn.*, pp. 486–503, 2006a. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, pp. 265–284. Springer, 2006b. Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. *Found. and Trends in Theor. Comput. Sci.*, 9 (3-4):211–407, 2014. Emile Fiesler, Amar Choudry, and H John Caulfield. Weight discretization paradigm for optical neural networks. In *Proc. Opt.* Interconnections & Netw., volume 1281, pp. 164–173, 1990. Vincent Guingona, Alexei Kolesnikov, Julianne Nierwinski, and Avery Schweitzer. Comparing approximate and probabilistic differential privacy parameters. *Inf. Proc. Lett.*, 182:106380, 2023. URL https://www.sciencedirect.com/science/article/pii/ S0020019023000236. Yunhui Guo. A survey on methods and theories of quantized neural networks. *arXiv:1808.04752*, 2018. Nika Haghtalab, Tim Roughgarden, and Abhishek Shetty. Smoothed analysis of online and differentially private learning. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp. 9203–9215, 2020. Rob Hall, Alessandro Rinaldo, and Larry Wasserman. Random differential privacy. *arXiv:1112.2680*, 2011. Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, and Uri Stemmer. Adversarially robust streaming algorithms via differential privacy. *Journal of the ACM*, 69(6):1–14, 2022. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In *Proc. Int. Conf.* on Neural Inf. Process. Syst., pp. 4114–4122, 2016. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. *The J. of Mach. Learn. Res.*, 18(1):6869–6898, 2017. Zhanglong Ji, Zachary C Lipton, and Charles Elkan. Differential privacy and machine learning: a survey and review. arXiv preprint arXiv:1412.7584, 2014. Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential privacy. In Proc. Int. Conf. on Mach. Learn., pp. 1376–1385, 2015. Adam Tauman Kalai, Alex Samorodnitsky, and Shang-Hua Teng. Learning and smoothed analysis. In 2009 50th Annu. IEEE Symp. on Found. of Comput. Sci., pp. 395–404, 2009. Minje Kim and Paris Smaragdis. Bitwise neural networks. *arXiv:1601.06071*, 2016. David G Kirkpatrick and Raimund Seidel. The ultimate planar convex hull algorithm? *SIAM J. on Comput.*, 15(1):287–299, 1986. Philip Laird and Ronald Saul. Discrete sequence prediction and its applications. *Mach. Learn.*, 15(1):43–68, Apr. 1994. Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In *Proc. Symp. on Secur. and Privacy (SP)*, pp. 656–672, 2019. Dave Leip. Dave leip's atlas of the us presidential elections, 2023. URL https://uselectionatlas.org/. Accessed: Mar. 28, 2023. Zhechen Li, Ao Liu, Lirong Xia, Yongzhi Cao, and Hanpin Wang. Differentially private condorcet voting. In *Proceedings of the* AAAI Conference on Artificial Intelligence, volume 37, pp. 5755–5763, 2023. Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In Proc. Int. Conf. on Mach. Learn., pp. 2849–2858, 2016. Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In *Proc. Int. Conf. on Neural Inf.* Process. Syst., pp. 344–352, 2017. Ao Liu and Lirong Xia. The semi-random likelihood of doctrinal paradoxes. In *Proc. AAAI Conf. Artif. Intell.*, pp. 5124–5132, 2022. Ao Liu, Lirong Xia, Andrew Duchowski, Reynold Bailey, Kenneth Holmqvist, and Eakta Jain. Differential privacy for eye-tracking data. In *Proc. of the 11th ACM Symp. on Eye Tracking Res. & Appl.*, pp. 1–10, 2019. Ao Liu, Yun Lu, Lirong Xia, and Vassilis Zikas. How private are commonly-used voting rules? In Proc. Conf. on Uncertainty in Artif. Intell., pp. 629–638, 2020. Ao Liu, Sijia Liu, Abhishek Bhandwaldar, Chuang Gan, Lirong Xia, and Qi Cheng Li. Interpretation maps with guaranteed robustness, May 24 2022a. US Patent 11,341,598. Ao Liu, Sijia Liu, Bo Wu, Lirong Xia, Qi Cheng Li, and Chuang Gan. Certifiably robust interpretation, March 3 2022b. US Patent App. 17/005,144. Ashwin Machanavajjhala, Daniel Kifer, John Abowd, Johannes Gehrke, and Lars Vilhuber. Privacy: Theory meets practice on the map. In *IEEE 24th Int. Conf. on Data Eng.*, pp. 277–286. IEEE, 2008. Bodo Manthey and Heiko Röglin. Worst-case and smoothed analysis of k-means clustering with bregman divergences. In Proc. Int. Symp. on Algorithms and Comput., pp. 1024–1033, 2009. Michele Marchesi, Gianni Orlandi, Francesco Piazza, and Aurelio Uncini. Fast neural networks without multipliers. IEEE Trans. on Neural Netw., 4(1):53–62, Jan. 1993. Ilya Mironov. Rényi differential privacy. In *Proc. Comput. Secur. Found. Symp.*, pp. 263–275, 2017. Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. WRPN: Wide reduced-precision networks. In *Proc. Int. Conf. on* Learn. Representations, pp. 1–11, 2018. Hiep H Nguyen, Jong Kim, and Yoonho Kim. Differential privacy in practice. *J. of Comput. Sci. & Engineering*, 7(3):177–186, Sep. 2013. Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In Proc. Annu. ACM Symp. on Theory of Comput., pp. 75–84, 2007. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ulfar Erlingsson. Scalable private learning with pate. In *Proc. Int. Conf. on Learn. Representations*, pp. 1–11, 2018. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In *Proc. Eur. Conf. on Comput. Vision*, pp. 525–542, 2016. Anand D Sarwate and Kamalika Chaudhuri. Signal processing and machine learning with differential privacy: Algorithms and challenges for continuous data. *IEEE signal processing magazine*, 30(5):86–94, 2013. Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech dnns. In *Proc. Annu. Conf. of the Int. Speech Commun. Assoc.*, pp. 1058–1062, 2014. Hyejin Shin, Sungwook Kim, Junbum Shin, and Xiaokui Xiao. Privacy enhanced matrix factorization for recommendation with local differential privacy. *IEEE Transactions on Knowledge and Data Engineering*, 30(9):1770–1782, 2018. Steven W Smith. The scientist and engineer's guide to digital signal processing, 1997. Daniel A Spielman. The smoothed analysis of algorithms. In *Proc. Int. Symp. on Fundam. of Comput. Theory*, pp. 17–18, 2005. Daniel A Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. *J. of the ACM*, 51(3):385–463, 2004. Chuan Zhang Tang and Hon Keung Kwan. Multilayer feedforward neural networks with single powers-of-two weights. *IEEE Trans.* on Signal Process., 41(8):2724–2727, Aug. 1993. Jun Tang, Aleksandra Korolova, Xiaolong Bai, Xueqiang Wang, and Xiaofeng Wang. Privacy loss in apple's implementation of differential privacy on macos 10.12. *arXiv preprint arXiv:1709.02753*, 2017. Abhradeep Guha Thakurta and Adam Smith. Differentially private feature selection via stability arguments, and the robustness of the lasso. In *Proc. Conf. on Learn. Theory*, pp. 819–850, 2013. Aleksei Triastcyn and Boi Faltings. Bayesian differential privacy for machine learning. In *Proc. Int. Conf. on Mach. Learn.*, pp. 9583–9592, 2020. Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus. In Proc. Deep Learn. & Unsupervised Feature Learn. NIPS Workshop, pp. 4, 2011. Wenjie Wang, Pengfei Tang, Jian Lou, and Li Xiong. Certified robustness to word substitution attack with differential privacy. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1102–1112, 2021. Yu-Xiang Wang. Per-instance differential privacy. *J. of Privacy and Confidentiality*, 9(1), 2019. Larry Wasserman and Shuheng Zhou. A statistical framework for differential privacy. *J. of the Amer. Statistical Assoc.*, 105(489): 375–389, Mar. 2010. Lirong Xia. The smoothed possibility of social choice. In *Proc. Int. Conf. on Neural Inf. Process. Syst.*, pp. 11044–11055, 2020. Lirong Xia. How likely are large elections tied? In *Proc. ACM Conf. on Econ. & Comput.*, pp. 884–885, 2021. Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. *arXiv:1702.03044*, 2017. Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, and Junjie Yan. Towards unified int8 training for convolutional neural network. In *Proc. the IEEE/CVF Conf. on Comput. Vision & Pattern Recognit.*, pp. 1969–1979, 2020.
Review 1: Summary: This paper presents a novel concept in differential privacy, termed smooth differential privacy. Unlike differential privacy, smooth differential privacy stipulates that the expected privacy profile must remain bounded across all given data distributions. The authors illuminate a reduction property of smooth differential privacy, indicating that the vertices of the convex hull of data distributions determine its satisfaction. To illustrate the practical application of smooth differential privacy, a sampling-histogram mechanism is presented. This mechanism generates a histogram from a uniformly random subsample of a defined size. It is demonstrated that this mechanism adheres to smooth differential privacy when it meets the f-strictly positive condition. The authors also observe that the privacy profile derived from this is most effective under specific conditions. The study also reveals that subsampling-based mechanisms fall short of achieving standard differential privacy with discrete-valued data and smooth differential privacy with continuous-valued data. Experimental results emphasize the superior privacy profile of the proposed mechanism, particularly when tested with elections and 1-step quantized SGD. Strengths and Weaknesses: ### Strengths The examination of privacy guarantees against a realistic adversary is well-motivated and is believed to hold interest for readers of TMLR. Differentially private mechanisms are designed to protect the privacy of a dataset against adversaries who, hypothetically, have knowledge of the entire dataset except for a single bit of information. While these defenses are robust against side-channel attacks, it is conservative to assume that such an all-knowing adversary exists in real-world situations. Investigating a realistic adversary and devising defenses to counter such adversaries is a promising avenue of exploration. Exploring the privacy implications of subsampling is captivating. Subsampling can potentially conceal specific types of information from the original dataset. Understanding what subsampling can and cannot conceal may aid in advancing the development of privacy mechanisms. In this context, the authors effectively present some extent of privacy guarantee through the sampling-histogram mechanism. Beyond ensuring smooth differential privacy, the authors have validated the tightness of the privacy parameter ensured by the sampling-histogram mechanism. Even though this tightness is demonstrated specifically when both $m$ and $f(m,n)$ are of a constant order, such parameter settings are plausible in realistic scenarios. Through this tightness analysis, the authors have successfully delineated the privacy guarantees offered by the sampling-histogram mechanism. ### Weaknesses One of the primary shortcomings of the manuscript is its unsubstantiated claim regarding realistic models. Although the authors contend that they have elucidated a method for measuring privacy within "realistic models," they provide no definitive evidence to support the assertion that the proposed smooth differential privacy is secure against realistic adversaries. Notably, smooth differential privacy provides security only against adversaries of lesser potency than does the original differential privacy. This is because smooth differential privacy does not intrinsically guarantee differential privacy. While differential privacy assures security against the strongest adversaries, smooth differential privacy lacks this assurance. This discrepancy leads to concerns that some realistic adversaries might circumvent the protections offered by smooth differential privacy. Hence, providing the threat model, including adversaries' abilities, under which smooth differential privacy can guarantee privacy is crucial. Such clarity is vital for verifying the claims of privacy protection in realistic models. To illustrate their point, the authors present in Figure 1 a comparison between the privacy profiles of original differential privacy and smooth differential privacy in the context of US presidential elections. They suggest that the proposed privacy measure correlates exponentially with the adversary's utility. However, the adversary's utility appears to have been selectively chosen by the authors to align with the privacy profile of smooth differential privacy. The correlation shown in Figure 1 does not inherently validate the practicality of smooth differential privacy. The sampling histogram mechanism might satisfy traditional differential privacy, provided that all datasets conform to $f$-strict positivity. To encapsulate the privacy assurances offered by subsampling, the original differential privacy appears adequate. Therefore, the introduction of a novel—yet unsubstantiated—privacy concept, namely smooth differential privacy, is questionable. Smooth differential privacy agrees with standard differential privacy when $\Pi$ is the set of all distributions, as this set includes one supported on a specific dataset. It is self-evident that the privacy profile of smooth differential privacy is equivalent to or lesser than that of differential privacy. Therefore, the rationale behind comparing smooth differential privacy with standard differential privacy, as presented in Section 6, is unclear to me. Moreover, the manuscript is permeated with linguistic inaccuracies, significantly detracting from its overall quality. Requested Changes: (Major) Please elucidate the threat model, inclusive of adversaries' capabilities, under which smooth differential privacy can assure privacy. (Major) Should the authors show the adversary's utility depicted in Figure 1 as reasonable, it is requisite to elucidate its reasonableness within the main text. (Major) It is essential to articulate the necessity of employing smooth differential privacy. To appraise the privacy protection that subsampling furnishes, the original differential privacy seems sufficient. (Major) It is imperative to significantly enhance the manuscript by rectifying linguistic inaccuracies. (Major) Although the authors assert that pre-processing is a distinctive characteristic of smooth differential privacy, a similar characteristic may hold in original differential privacy as well. (Minor) Table 2 presents an unfair comparison. The term "smoothed analysis" should be replaced with "less informative adversary." (Minor) It is necessary to elucidate why the selections of π illustrated in the experiments render reasonable privacy protection. (Minor) I find Justifications 1 and 2 to be untenable as $\delta(x)$ can take a large value under smooth differential privacy. (Minor) The manuscript could be further enhanced by introducing a discussion on integrating the subsampling-based mechanism with anonymization algorithms, including those aimed at k-anonymity. An anonymized dataset may invariably fulfill f-strict positivity. Broader Impact Concerns: I have not found any concerns regarding border impact. ================================================== Review 2: Summary: The paper proposes a novel extension notion of DP, named smoothed differential privacy, to measure the privacy leakage of mechanisms without adding additional noises that do not satisfy DP. By noting that the worst-case privacy according to DP might be too loose to serve as a practical measure for evaluating and comparing mechanisms with sampling noise (while without additive noise) in real-world applications (e.g. election with sampling noise), the smoothed DP with basic properties (e.g., post-processing and composition properties) is introduced. Further, an algorithm used to calculate the privacy parameters for smoothed DP is proposed. Strengths and Weaknesses: Strengths 1. The paper introduces the notion of smoothed DP following the worst average-case idea behind the smoothed analysis. The proposed notion can be used to measure the privacy leakage of some types of non-noising mechanisms (e.g., mechanisms based on SHM including ML algorithms based on randomly sampling training data and deterministic voting rules), which is very interesting. 2. Some basic properties of smoothed DP (especially pre-processing property) are introduced. Based on these, the authors used smoothed DP to measure the privacy of some commonly used mechanisms, where sampling noise is unavoidable. 3. Experiments support the findings and the results of the paper. Weakness 1. Smoothed DP can only address some specific problems or scenarios, such as mechanisms with sampling noise. While DP, RDP or GDP are more general. 2. The running time of calculating the exact privacy profile $\delta$ for smoothed DP might be very large (Theorem 2 shows that the running time is $O(n^{2m+\ell*-3}+\ell \log(\ell*)) + Cpl_{DP}$). Overall, the paper is well-written and the idea is novel and interesting. Requested Changes: 1. Could the author give some explanation or insights when introducing the definition of smoothed DP in Section 3.2? For example, I was confused as to why the set of distribution $\Pi$ was introduced. Does this come from smoothed analysis? The original DP holds for any $x$ and $x’$, while the smoothed DP seems to restrict the distribution of $x$ to a small set? 2. Theorem 3 (i) shows that when $\epsilon \ge \ln(1/1-\eta) +c$, $M_H$ satisfies a certain level smoothed DP. Does this imply that $M_H$ is nearly non-private even under the notion of smoothed DP if the sampling rate $\eta$ tends to 1 or the constant $c$ is very large? In addition, could the authors give some explanation or a discussion about the constant $c$? Also, in line 299, the authors state that SHM is $(3,\exp(-\Theta(n)),\Pi)$-smoothed DP, it seems that $c$ and $m$ are ignored? Minor comments 1. The notion $\Theta(\cdot)$ and $\omega(\cdot)$ are not introduced. 2. In line 251, “The application” should be “the application”. In line 80, “pre-prpcessing” should be “pre-processing”. Broader Impact Concerns: no ================================================== Review 3: Summary: The paper is proposing a variant of differential privacy (DP), where instead of using worst-case estimates, one can use a "smoothed" version of differential privacy. This allows the authors to argue about privacy leakage in a more realistic sense. Along these lines the authors prove that any discrete mechanism with sampling procedures is more private that what standard differential privacy predicts. Furthermore, the authors prove that many continuous mechanisms with sampling procedures are still non-private under smoothed differential privacy. In addition, the authors prove some desirable properties of smoothed differential privacy, such as composition, robustness to post-processing, and distribution reduction. Based on these properties the authors are able to propose an algorithm for efficient computation of the privacy parameters of smoothed differential privacy. Ultimately the authors evaluate experimentally the proposed framework of smoothed differential privacy and conclude that discrete sampling mechanisms are private in real-world elections as well as some artificial neural networks can be private without adding additive noise. Strengths and Weaknesses: **Strengths** The paper proposes a framework that can be seen as more realistic, as it does not use the pessimistic estimates of standard differential privacy. The paper has both theoretical results as well as an experimental study, thus elucidating the proposed framework in both interesting ways. **Weaknesses** In my opinion this paper assumes basic knowledge of differential privacy from the reader and is not an entirely stand-alone paper. I, for one, am not really familiar with differential privacy and I am not confident about my understanding in various cases. However, let me come to some concrete issues that the paper has, which I believe are issues even if the reader knows differential privacy. - Differential privacy is defined in page 4 but some intuitive notion of what it actually is would help the reader a lot in the beginning to understand the existing framework (differential privacy), the proposed framework (smoothed differential privacy), and appreciate better the work of the authors. - Instead the authors discuss in the first sentence methods of how DP can be achieved, as well as have some forward referencing the makes it hard to follow what kind of message the authors are trying to convey. For example, in line 33 the privacy parameter $\delta$ is mentioned, but no intuition given as to what $\delta$ really measures. We only understand as readers that smaller values of $\delta$ are more desirable, but essentially this is it. - The paragraph to the left of Figure 1 needs rewriting to convey better meaning. - The caption of Figure 1 is referencing some equation from the appendix as an explanation. This needs to change somehow and convey meaning in a caption of a figure without asking from a reader that has just started reading a paper to leave everything on the side and move to the appendix in order to understand something from what should be an "explanatory" figure. - Also, since we are in figure 1, discussing a bit about adversaries utility in the main text would be nice. In my opinion this is another example that a few meaningful paragraphs of what differential privacy really is, are missing from the introduction of the text. - While the authors give an example in the introduction, I think the example is only half-successful. It would be nice to have an example with actual histograms and discussion on the manipulation that can be done by an adversary and how information can leak from the database. The way the paper is written it is impossible to understand how something like that can be done. - Also, I appreciate a lot Tables 1 and 2 that the authors have tried to compile and make it clearer for their work to be understandable. However, again, I think because some high-level discussion is missing from the introduction about DP, it is not really possible to appreciate fully what the authors present. In addition, as a person reading about a topic (DP) that I knew almost nothing beforehand, I find the use of lowercase $x$ to represent some database to be odd. - In lines 117-118 you try to describe databases and records. I think spending two-three lines and giving an actual example of what you mean it would be nice. Is it is the case that the record is a single number (or category)? Is it a vector? If it can be a vector (which would make more sense for a record), then you can argue a bit more about the nature of $\mathcal{X}$. Then, you could also give an example of neighboring databases $x$ and $x'$ and explain how much of a difference is allowed between two different records. - Line 172: Theorem statement precedes the definition of the framework of the authors. - I have a hard time following the argument the authors are trying to make in lines 215-216. Is there any real-world case where one does not use the actual data of a dataset, but rather the gradients for, I suppose, some particular training iteration for some custom artificial neural network architecture? - The discussion in the experiments needs some more text. Do the authors discuss the right-hand side of Figure 2? Along these lines, in both subfigures, the graph corresponding to $\epsilon = 0.5$ is above the yellow line, which, if I understand correctly (from what is mentioned in the introduction), we would like these graphs to be below the yellow line. Nevertheless, the authors claim that $\delta$ is exponentially small even if it is above that yellow line. Some clarification would be in order (I think). - Overall the authors are allowed to use 12 pages for the main text of the paper and in this case they use 10 pages plus a few more lines. The paper can improve a lot by adding discussion as mentioned above, or by adding a few sentences in other parts where the authors ask from the readers to read the appendix (e.g., caption of Figure 1, or in line 115 where they refer the readers for reading part of the related work in the appendix). **References** I would prefer to see a reference being cited for the claim in line 65 (comparison of simplex with other polynomial methods). **Some Typos** Without referring to a specific line, I think that in most of the cases where the authors use "noises" I would actually use the term "noise". Line 20: mechansim -> mechanism Line 49 (second paragraph, line 4): sampling -> Sampling (capitalize) Line 121: What do you mean by "image space"? Shouldn't this be defined somewhere? Line 162: opposites to -> contrasts Requested Changes: Please see the weaknesses mentioned above. I think this paper needs a major revision before it can be accepted. Broader Impact Concerns: I am really not familiar with the field, but the authors could potentially devote a small paragraph and discuss broader impacts at the end of the paper. ================================================== Metareview: Recommendation: Accept as is Comment: The paper introduces the concept of smoothed DP, a method designed to measure privacy leakage in mechanisms without additive noises under practical conditions effectively. The authors rigorously establish and demonstrate various properties of smoothed DP. Additionally, they present an efficient algorithm for computing the privacy parameters associated with smoothed DP and validate it through experiments. The authors have diligently incorporated the reviewers' feedback, resulting in a revised manuscript. Two reviewers endorse its acceptance. Although one reviewer expresses lingering concerns about the paper's claims, they acknowledge the paper's motivation in addressing privacy guarantees against realistic adversaries, which will engage readers of TMLR. The theoretical analysis presented is also appreciated. Considering the comprehensive responses from the authors and the reviewers' recommendations, I recommend accepting the paper. ==================================================
# Why Emergent Communication Is Repulsive Anonymous authors Paper under double-blind review ## Abstract With the success of deep reinforcement learning, there has been a resurgence of interest in situated emergent communication research. Properties of successful emergent communication have been identified which typically involve auxiliary losses that ensure a trade-off between ensuring diversity of message-action pairs, conditioned on observations, and consistency, when the reward acquired is significant. In this work, we draw theoretically connections between these auxiliary losses and the probabilistic framework of repulsive point processes. We show how in fact these auxiliary losses are promoting repulsive point processes, as well as outline ways in which the practitioner could utilise these repulsive point processes directly. We hope this newfound connection between language and repulsive point processes offers new avenues of research for the situated language researcher or probabilistic modeller. ## 1 Introduction Deep Reinforcement Learning (DRL) has seen successes in many problems such as video and board games (Mnih et al., 2013; Silver et al., 2016; 2017; Mnih et al., 2015), and control of simulated robots (Ammar et al., 2014; Schulman et al., 2015; 2017). Though successful, these applications assume idealised simulators and require tens of millions of agent-environment interactions, typically performed by randomly exploring policies. However, on the time scales of physical (i.e., real-world) systems, sample-efficiency naturally becomes a more pressing concern due to time and cost burdens. Sampling in supervised learning is typically used on a fixed dataset D, where mini-batches are sampled from D to perform parameter updates. The supervised learning literature features a range of *biased* (nonuniform) sampling approaches. Csiba & Richtárik (2018) and Katharopoulos & Fleuret (2018) develop importance sampling schemes that reduce the training time of deep neural networks by orders of magnitude. Zhao & Zhang (2014) motivates the need for *diverse* (in terms of classes) mini-batches and shows that sampling from Repulsive Point Processes (RPP) yields reduced training time and more accurate models. RPPs are probabilistic models on finite sets that can naturally trade-off between quality and diversity when sampling subsets of items. Sampling subsets of items arises in many domains within reinforcement learning, from sampling experience in procedurally generated games to sampling actions and messages in emergent communication. With the advent of DRL, there has been a resurgence of interest in situated emergent communication (EC) research Das et al. (2017); Lazaridou et al. (2016); Kottur et al. (2017); Jaques et al. (2019); Havrylov & Titov (2017). In this setup, one typically has at least two agents which are de-centralised, yet have a communication channel between them that might be detrimental to the overall performance of both agents, i.e. one agent might be able to see but not move, and needs to guide another agent towards a goal. However, there remain many open design decisions, each of which may significantly bias the nature of the constructed language, and any agent policy which makes use of it. The properties of successful emergent communication were identified in Jaques et al. (2019); Eccles et al. (2019b); Lowe et al. (2019); Cowen-Rivers & Naradowsky (2020), and these typically involve a trade-off between ensuring diversity of message-action pairs, conditioned on observations, and consistency, when the reward acquired, is considerable. In this work, we discuss the connections between RPPs and emergent communication. We examine properties of successful emergent communication and explain how they in-fact encourage a repulsive point processes over the actions/ messages. We then show how one could create specifically repulsive emergent communication for either a speaker or listener agent, and detail how this formulation theoretically bias's an agent to speak or listen in a situated language game. ## 2 Why Emergent Communication Is Repulsive First, we introduce the relevant background required. In 2.1 we introduce the background for single agent reinforcement learning, in 2.2 we extend the formulation of reinforcement learning (in 2.1) to emergent communication. We then detail the key elements of diversity and quality. Lastly, in 2.3 we formally introduce Determinantal Point Processes, a computationally efficient repulsive point process, later used to re-define an optimal listener and optimal speaker. ## 2.1 Reinforcement Learning We consider Markov decision processes (MDPs) with continuous states and action spaces; MDP = ⟨O, A,P*, c, γ*⟩, where O ⊆ R dstate denotes the state space, A ⊆ R dact the action space, P : *O × A → O* is a transition density function, c : *O × A →* R is the reward function and γ ∈ [0, 1] is a discount factor. At each time step t = 0*, . . . , T*, the agent is in state ot ∈ O and chooses an action t ∈ A transitioning it to a successor state ot+1 ∼ P (ot+1|ot, t), and yielding a reward rt = c(ot, t). Given a state ot, an action t is sampled from a policy π : *O → A*, where we write π(t|ot) to represent the conditional density of an action. Upon subsequent interactions, the agent collects a trajectory τ = [o0:T , a0:T ], and aims to determine an optimal policy π ⋆ by maximising total expected reward: Eτ∼pπ(τ)[C(τ )] := Eτ∼pπ(τ)[PT t=0 γ trt], where pπ(τ ) denotes the trajectory density defined as: pπ(τ ) = µ0(o0)QT −1 t=0 P(ot+1|ot, t)π(t|ot), with µ0(·) being an initial state distribution. ## 2.2 Emergent Communication A common approach to emergent communication is to concatenate the incoming observational message (o m) and state observation (o) together Lowe et al. (2019); Foerster et al. (2016) to create an augmented observational space oˆ = [o, o m]. Given a state oˆt, a discrete message mt ∈ M is sampled from a policy π : *O → M*, where we write π(mt|oˆt) to represent the conditional probability distribution of a message given an observation and incoming message. An agent will also have an additional communication (message) policy π(m | o). The replay buffer (B) in emergent communication can be described as a collection of tuples (*X ∈ B*) such that B = {X0 := (o0, o ′ 0 , o m 0 , o ′ 0 m, a0,m0, r0)*, . . . ,* Xn := (on, o ′ n , o m n , o ′ n m, an,mn, rn)}. One of the difficulties in emergent communication is efficiently exploring complex observation, action, communication spaces whilst trading off consistency of actions and messages. There exist auxiliary losses that pressure the speaking/ listening agent to alter its short-term behaviour in response to messages consistently (e.g., causal influence of communication loss Lowe et al. (2019); Jaques et al. (2019); Eccles et al. (2019b)), but bear with them the additional difficulties of tuning extremely sensitive auxiliary loss parameters, as is the case with most auxiliary loss functions. Thus, there is a need for more simplified emergent communication algorithms that achieve success in challenging language games with less sensitivity to hyperparameters. We will now discuss identified characteristics of **Positive Listening** Lowe et al. (2019) and Positive Signalling Lowe et al. (2019) that successfully aid communication and how auxiliary loss functions that encourage these losses are in-fact biasing the agents' action distribution towards a repulsive point processes. Positive Listening Its important for an agent to adapt to changes in incoming communication signals/ messages. An agent exhibits positive listening if the probability distribution over its actions is influenced by the messages it receives. The causal influence of communication (CIC) Lowe et al. (2019); Jaques et al. (2019); Eccles et al. (2019b) loss can be defined below $${\mathcal{C I C}}={\mathcal{D}}_{K L}(\pi(\mathbf{a}\mid\mathbf{o})\mid\mid\pi(\mathbf{a}\mid\mathbf{o},\mathbf{o}^{m}))$$ m)) (1) Where one can marginalise over messages in order to approximate π(a | o) = Rm π(a | o, o m). It is easy to observe, that when a CIC loss is minimised, an agent should have large probability distribution changes when an incoming message is received vs when one is not. One can readily see the repulsive nature of this loss, as the loss biases the agent to taking diverse actions when diverse incoming messages are present in the observation space. Note, this loss consistently encourages diversity, and has no quality term which enables the agent to trade-off between diversity and quality (reward) when the reward received increases, unlike the auxiliary loss for positive listening shown in Eq 2. Positive Signalling An agent must be consistent with outgoing communication signals. Positive signalling is defined as a positive correlation between the speaker's observation and the corresponding message it sends, i.e., the speaker should produce similar messages when in similar situations. Various methods exist to measure positive signalling, such as speaker consistency, context independence, and instantaneous coordination Lowe et al. (2019); Eccles et al. (2019a). An example of a method for biasing agents towards positive signalling is via the mutual-information (I) loss Eccles et al. (2019a), as shown below. This loss biases the speaker to produce high entropy distribution of overall messages, but when conditioned on the speaker's observation has a low entropy, allowing for exploration and consistency between communication signals. $$\left(2\right)$$ $$\mathbf{n}|\mathbf{i}$$ $\downarrow$ 7. ## I(M, O) = H(M) − H(M|O) (2) H(m) can be approximated by taking the entropy of the average message distribution and H(m|o) can be computed easily. The repulsive property in this loss is less obvious: we have a diversity promoting term on the left, which promotes a more uniform spread of messages, with a quality term on the right that allows for certain messages, when conditioned on an observation, to maintain a level of consistency (rather than diversity). This loss function therefore trades off between diversity of message-observation pairs, as well as allowing consistency for ones receiving high reward, which a detrimental point process is similarly able to achieve naturally. ## 2.3 Determinantal Point Process A repulsive point process is a type of probability distribution whereby meaning sampled points are encouraged to repel from previously sampled points with respect to some distance metric. Determinantal Point Process's (DPP's) are a type of repulsive point process. DPP's provide exact methods to calculate the probability of a subset of items, that can be sampled, from a core set. DPP's are gaining increasing interest as they have proven effective in machine learning Kulesza et al. (2012) and multi-agent reinforcement learning Yang et al. (2020), having originated from modelling repulsive particles in quantum mechanicsMacchi (1977). It has been shown that extending the use of a determinantal point process (DPP) in multi-agent reinforcement Yang et al. (2020) learning can significantly improve joint exploration across agents. Definition 1 (DPP) For a ground set of items Y = {1, 2, . . . , M}, a DPP, denoted by P, is a probability measure on the set of all subsets of Y*, i.e.,* 2 Y . Given an M × M *positive semi-definite (PSD) kernel* K that measures similarity for any pairs of items in Y, let Y be a random subset drawn according to P*, then* we have, ∀Y ⊆ Y, $$\mathbb{P}_{\mathbf{\mathcal{K}}}\big{(}\mathbf{Y}=Y\big{)}\propto\det\big{(}\mathbf{\mathcal{K}}_{Y}\big{)}=\text{Vol}^{2}\left(\{\mathbf{w}_{i}\}_{i\in Y}\right),\tag{1}$$ $${\mathrm{i}}{\mathrm{i}}{\mathrm{f}}$$ where KY := [Ki,j ]i,j∈Y denotes the submatrix of K *whose entries are indexed by the items included in* Y . Where the diagonal values Ki,i captures the quality of item i, and the off-diagonal values Ki,j measures the diversity between items i and j with respect to some diversity function. The normaliser can be computed as: PY ⊆Y det(KY ) = det(K + I), where I is an M ×M identity matrix. The key to the success of DPP's is the ability to naturally trade-off between diversity and quality, which in reinforcement learning terms would enable it to trade-off between exploration-exploitation naturally. $$\left({\boldsymbol{3}}\right)$$ ## 3 Emergent Communication Is Repulsive In this section, we will detail how one could apply a RPP to Emergent Communication, explaining theoretically how the direction application of RPPs promotes efficient emergent communication. ## 3.0.1 Biased Agents For Speaking Listening First, we ask the question: why have a separate policy for the message and action policy, when in reality, actions and communication can be output jointly? e.g. a human typically takes actions that align with their verbal statements, and vice versa. For this reason, we create a new joint action-message space U = *A × M* for agents which speak and act. where × is the Cartesian product. Of course, this new action space will have poor scalability; however, it will enable us to reason over actions and messages jointly. Since the diagonal and off-diagonal entries of K represent *quality* and *diversity* respectively, we allow a decomposition of the similarity matrix similar to Yang et al. (2020) K := DFDT with D ∈ R N×N and F ∈ R N×N with each row Ki = d 2 i f T iis a product of a **quality** term d 2 i ∈ R + and a **diversity** feature vector fi ∈ R N×1. Where each (i-th) row of F is ensured to have unit norm by dividing through by ∥fi∥. We can compute the **diversity** matrix F through any similarity function, such as euclidean distance Fi,j = ∥[oˆi,ui] − [oˆj ,uj ]∥ 2 between concatenated observation-message-action points [oi, ai] and [oj , aj ]. If we denote the quality term for a given observation-action pair as di:= exp 12Qoˆi, ui and setting Y =(ˆo 1 1 , u11 )*, . . . ,*(ˆo |O|×|M| N , u |A|×|M| N ) , C(o) := Y ⊆ Y : |Y ∩ Yi(oi)| = 1, ∀i ∈ {1*, . . . , N*} , with Yi(oi) of size *|A| × |M|* and |C(o)| = (*|A| × |M|*) N we can then formulate our DPP distribution over a mini-batch of data by; $${\check{\mathbb{P}}}_{\mathcal{K}}(Y=Y|Y\in{\mathcal{C}}(\mathbf{o}))\propto\log\operatorname*{det}\left({\mathcal{K}}\right)$$ $$\propto\log\operatorname*{det}\left({\mathcal{D}}_{Y}{\mathcal{F}}_{Y}{\mathcal{D}}_{Y}^{T}\right)$$ $$\quad(4)$$ $$\begin{array}{l}{{\propto\log\left(\operatorname*{det}\left({\mathcal{D}}_{Y}^{T}\right)\operatorname*{det}\left({\mathcal{F}}_{Y}\right)\operatorname*{det}\left({\mathcal{D}}_{Y}\right)\right)}}\\ {{\quad}}\\ {{\propto\sum_{i=1}^{B}Q{\left({\hat{o}}_{i},u_{i}\right)}+\log\operatorname*{det}\left({\mathcal{F}}_{Y}\right).}}\end{array}$$ Using det(AB) = det(A) det(B), for square matrices A&B. ## 3.0.2 Speaker If we have an agent who is just listens or speaks , we can similarly derive the DPP action-observation/ message-observation distribution. Setting Y =(o 1 1 , m11 )*, . . . ,*(o |O| N , m |M| N ) , C(o) := Y ⊆ Y : |Y ∩ Yi(oi)| = 1, ∀i ∈ {1*, . . . , N*} , with Yi(oi) of size |M| and |C(o)| = |M|N , we can now derive the probability density function for the speakers message-observation distribution; $$\begin{array}{l}{{\tilde{\mathbb{P}}_{\mathbf{X}}^{\mathbf{Speaker}}\big(Y=Y|Y\in{\mathcal{C}}(\mathbf{o})\big)}}\\ {{\ }}\\ {{\propto\sum_{i=1}^{B}Q\big(o_{i},m_{i}\big)+\log\operatorname*{det}\big({\mathcal{F}}_{Y}\big).}}\end{array}$$ $$\left(5\right)$$ ## 3.0.3 Listener Similarly by setting oˆ = [o, o m] and Y =(ˆo 1 1 , a11 )*, . . . ,*(ˆo |O| N , a |A| N ) , C(oˆ) := Y ⊆ Y : |Y ∩ Yi(ˆoi)| = 1, ∀i ∈ {1*, . . . , N*} , with Yi(ˆoi) of size |A| and |C(oˆ)| = |A|N , we can now derive the probability density function for the listeners action-observation distribution $$\check{\mathbb{P}}_{\mathcal{K}}^{\mathrm{{Listener}}}\left(Y=Y|Y\in\mathcal{C}(\mathbf{o})\right)$$ $$\propto\sum_{i=1}^{B}Q\big({\hat{o_{i}}},a_{i}\big)+\log\operatorname*{det}\big({\mathcal{F}}_{Y}\big).$$ $\left(6\right)$. One can see, that when a speaker is sampling message-observation pairs from this DPP, they will be biased towards **positive speaking** as the DPP will be biased to sampling mixed messages concerning different states, as this diversity of state-message pairs yields a larger determinant, thus a larger probability of being sampled as seen in Equation ?. As the message quality value increases, we expect to see the agent more like to sample this message rather than other more diverse messages. When a listener is sampling actions from this DPP, the DPP will promote **positive listening** as it will grant higher probabilities to sets of actions which are diverse concerning differing incoming messages. Similarly, as the action value increases, the agent will be less drawn to diversifying its actions and taking the one with higher value. The last remaining challenge is sampling from a defined partitioned DPP's, as we are constrained by the fact that each observation requires an action, rather than the typical DPP which is allowed to free sample items from the core set. However, there is a wealth of solutions to this such as sampling-by-projection as seen in Celis et al. (2018) and Chen et al. (2018). Additionally, due to similar partitioning of the DPP one can expect sampling to simplify down to a scheme similar to Yang et al. (2020), where each row is sampled sequentially. We leave this up to future work. ## 4 Conclusion & Future Work We examined properties of successful emergent communication and explained how they in-fact encourage a repulsive point processes over the actions/ messages. We hope that these theoretical connections between emergent communication and RPPs provides justification and inspiration for future researchs. ## References Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew Taylor. Online multi-task learning for policy gradient methods. In *International Conference on Machine Learning*, pp. 1206–1214, 2014. L Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, and Nisheeth K Vishnoi. Fair and diverse dpp-based data summarization. *arXiv preprint arXiv:1802.04023*, 2018. Laming Chen, Guoxin Zhang, and Eric Zhou. Fast greedy map inference for determinantal point process to improve recommendation diversity. In *Advances in Neural Information Processing Systems*, pp. 5622–5633, 2018. Alexander Imani Cowen-Rivers and Jason Naradowsky. Emergent communication with world models. *CoRR*, abs/2002.09604, 2020. URL https://arxiv.org/abs/2002.09604. Dominik Csiba and Peter Richtárik. Importance Sampling for Minibatches. *Journal of Machine Learning* Research, 19(27):1–21, 2018. Abhishek Das, Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. Learning cooperative visual dialog agents with deep reinforcement learning. In *Proceedings of the IEEE International Conference on* Computer Vision, pp. 2951–2960, 2017. Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, and Thore Graepel. Biases for emergent communication in multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pp. 13111–13121, 2019a. Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, and Thore Graepel. Biases for emergent communication in multi-agent reinforcement learning. In Advances in Neural Information Processing Systems 32. 2019b. Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. *CoRR*, abs/1605.06676, 2016. URL http://arxiv.org/ abs/1605.06676. Serhii Havrylov and Ivan Titov. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In *Advances in neural information processing systems*, pp. 2149–2159, 2017. Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, Dj Strouse, Joel Z Leibo, and Nando De Freitas. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In *International Conference on Machine Learning*, pp. 3040–3049, 2019. Angelos Katharopoulos and Francois Fleuret. Not All Samples Are Created Equal - Deep Learning with Importance Sampling. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 2525–2534, 2018. Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge'naturally'in multi-agent dialog. *arXiv preprint arXiv:1706.08502*, 2017. Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. *Foundations and* Trends® *in Machine Learning*, 5(2–3):123–286, 2012. Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language. *arXiv preprint arXiv:1612.07182*, 2016. Ryan Lowe, Jakob Foerster, Y-Lan Boureau, Joelle Pineau, and Yann Dauphin. On the pitfalls of measuring emergent communication. *arXiv preprint arXiv:1903.05168*, 2019. Odile Macchi. The fermion process—a model of stochastic point process with repulsive points. In *Transactions of the Seventh Prague Conference on Information Theory, Statistical Decision Functions, Random* Processes and of the 1974 European Meeting of Statisticians, pp. 391–398. Springer, 1977. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International conference on machine learning*, pp. 1889–1897, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. *Nature*, 550(7676):354–359, 2017. Yaodong Yang, Ying Wen, Lihuan Chen, Jun Wang, Kun Shao, David Mguni, and Weinan Zhang. Multiagent determinantal q-learning. *arXiv preprint arXiv:2006.01482*, 2020. Peilin Zhao and Tong Zhang. Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling. arXiv preprint arXiv:1405.3080, 2014.
Review 1: Summary: The paper proposes using repulsive point processes, which are probabilistic models that allow sampling diverse and qualitative samples within a group of finite sets, to provide a theoretically motivated objective function to train emergent languages. Strengths and Weaknesses: The paper makes a connection between the proposed repulsive point processes and the two key aspects of emergent communication training i.e. Positive Listening and Positive Signalling. The idea is to promote diversity among the messages while also creating consistency between the message-observation pair. Although the motivation behind developing probability density functions using DPP is interesting, it is still not clear how it changes the objective function for both speaker and listener such that it would lead to efficient emergent communication. In both pdfs, the resulting distribution seems to be a linear sum of a diversity term and a quality term. Given that the authors already cite the relevant works (Lowe et al. 2019, Eccles et al. 2019a) that use this approach, it is not clear how having access to these density functions would lead to a different and more stable training manifold. Requested Changes: As highlighted in the abstract, the main advantage of using DPP over other frameworks is to yield efficient emergent communication but there is no evidence of how it performs in practice. Almost every work in EC is evaluated on various benchmarks, it is not possible to evaluate this work with any prior research without any empirical evidence. I believe this manuscript is an interesting work in progress that could lead to novel insights in emergent communication training with an experimental section. Broader Impact Concerns: No broader impact concerns. ================================================== Review 2: Summary: The authors discuss previous work on emergent communication and bring up that positive listening and positive signaling has been useful properties to induce the emergence of such. Then they show that Determinant Point Processes (DPPs) have these properties. Strengths and Weaknesses: The result is only an indication that Determinant Point Processes could be of interest for emergent communication, while an experimental section investigating how this works out is missing. They point out that DPPs are repulsive, and that this is causing different actions for different incoming observations to be different. This would argue that repulsivity might help with having emergent communication, but perhaps does not really say that emergent communication can only happen during such constraints (as the title might be read as implying) nor that will if they are satisfied. In the final paragraph the authors discuss how one could actually practically use DPPs for experiments on emergent communication, but say they leave that to future work. However, I am of the view that carrying out that part could help answer how useful DPPs are for emergent communication and that the article cannot conclude that much without such. Requested Changes: Please perform an experimental evaluation, at least of an illustrative "proof of concept" form. Broader Impact Concerns: No concerns. ================================================== Review 3: Summary: The paper makes a connection between learning emergent communication protocols and determinantal point processes (DPPs), a specific type of repulsive point process (RPP). Specifically, the paper frames some of the challenges of learning a communication protocol as "trading off between diversity and quality" -- meaning, trading off between messages (and actions) that are diverse (and thus presumably help learning), and those that give high reward. The paper then brings in DPPs, which essentially define a probability distribution for sampling from a set of items, and have a built in mechanism for trading off diversity and quality. The paper then shows a way of sampling actions in emergent communication according to a DPP. Strengths and Weaknesses: Strengths - As far as I know, this is the first paper to make a connection between emergent communication and RPPs, thus the paper is novel. - The general problem of 'figure out what's going on in emergent communication' is valuable and worth studying. Weaknesses - My biggest concern is with the clarity of the paper. It took me a while to understand the point of the paper -- in particular, I don't think the framing of emergent communication as a tradeoff between quality and diversity was well explained, and this is a core motivation of the paper. I put several more concrete questions in the 'Questions' section below. - There is very little discussion or insight as to how this would practically be implemented, and there are no empirical results. This isn't a dealbreaker per se -- it's okay to not have empirical results if you make interesting theoretical connections -- but one of the motivations for the paper is: "There exist auxiliary losses that pressure the speaking/ listening agent to alter its short-term behaviour in response to messages consistently [...] but bear with them the additional difficulties of tuning extremely sensitive auxiliary loss parameters, as is the case with most auxiliary loss functions. Thus, there is a need for more simplified emergent communication algorithms that achieve success in challenging language games with less sensitivity to hyperparameters" and there is no discussion of sensitivity to hyperparameters in the rest of the paper. - I feel like the connection between DPPs and emergent communication is made at a fairly high level, and think it would benefit from a more thorough exploration of what happens when emergent communication protocols are written as DPPs (eg answering some of the questions below). Questions " In this setup, one typically has at least two agents which are de-centralised, yet have a communication channel between them that might be detrimental to the overall performance of both agents, i.e. one agent might be able to see but not move, and needs to guide another agent towards a goal." Why is the communication channel detrimental to performance? I'm not sure what is meant here. Agents can just learn not to use the channel, and then their performance is the same as without the channel. Based on the second part of this sentence, maybe what is meant is that the communication channel can help agents coordinate to solve tasks when they have different abilities? "The properties of successful emergent communication [...] typically involve a trade-off between ensuring diversity of message-action pairs, conditioned on observations, and consistency, when the reward acquired, is considerable." - I don't understand the second part of this sentence: "consistency, when the reward acquired, is considerable". Is the point that there is a trade-off between diversity and consistency? This is a central part of the motivation of the paper, and I'd like it to be explained more clearly. - Why is the quality term for a given observation-action pair d_i defined in this way? What is Q? I assume this is the Q function, but it is not defined elsewhere in the paper. Does this mean that algorithms using DPPs need to learn a Q function? - Why is the diversity matrix F calculated between concatenated observation-message-action pairs? This could use a bit more elaboration. "For this reason, we create a new joint action-message space U = A × M for agents which speak and act. where × is the Cartesian product. Of course, this new action space will have poor scalability; however, it will enable us to reason over actions and messages jointly." - Does this mean the DPP approach is only practical for small-scale emergent communication experiments? - Given that in emergent communication, actions and messages are usually represented by one-hot vectors, it seems like the diversity matrix F is completely uninteresting (and so you are just sampling the highest value action / message that hasn't yet been sampled). Is this accurate? Or are you proposing learning some kind of message / action embeddings? - It's not immediately clear to me how the last lines in (4), (5), and (6) would be used in a practical emergent communication algorithm -- a bit more elaboration here would be useful. - What's the difference between "trading off between quality and diversity" and simply the exploration-exploitation tradeoff that is ubiquitous in RL (and emergent communication)? Requested Changes: Biggest changes: improving the clarity of the paper, and answering the questions posed above. Adding empirical evidence for why DPPs might be useful in emergent communication would be very nice, though isn't strictly necessary. Small edits -- - p1: theoretically connections -> theoretical connections - many references are given in the wrong format; eg Jaques et al (2019) instead of (Jaques et al., 2019) -- eg bottom of page 2 - p2: theoretically bias’s -> biases - the section titles for section 2 and 3 are almost the same, I'd like them to be more descriptive - p4: "we have an agent who is just listens" -> who just listens - sec 3: change subsubsections to subsections - p5: Equation ? - p5: "on for future researchs" -> research Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: All reviewers agreed that the theoretical connections proposed are interesting, but in their current state, not clear enough to warrant acceptance without supporting experiments. I encourage the authors to resubmit with reviewer questions addressed and, ideally, supporting experiments, even if illustrative. ==================================================
# Mace: A Flexible Framework For Membership Privacy Estimation In Generative Models Yixi Xu∗yixx@microsoft.com Microsoft Sumit Mukherjee∗sumitmukherjee2@gmail.com Insitro Xiyang Liu xiyangl@cs.washington.edu University of Washington Rahul Dodhia rahul.dodhia@microsoft.com Microsoft Juan Lavista Ferres *jlavista@microsoft.com* Microsoft Shruti Tople *shruti.tople@microsoft.com* Microsoft Research Reviewed on OpenReview: *https: // openreview. net/ forum? id= Zxm0kNe3u7* ## Abstract Generative machine learning models are being increasingly viewed as a way to share sensitive data between institutions. While there has been work on developing differentially private generative modeling approaches, these approaches generally lead to sub-par sample quality, limiting their use in real world applications. Another line of work has focused on developing generative models which lead to higher quality samples but currently lack any formal privacy guarantees. In this work, we propose the first formal framework for membership privacy estimation in generative models. We formulate the membership privacy risk as a statistical divergence between training samples and hold-out samples, and propose samplebased methods to estimate this divergence. Compared to previous works, our framework makes more realistic and flexible assumptions. First, we offer a generalizable metric as an alternative to the accuracy metric (Yeom et al., 2018; Hayes et al., 2019) especially for imbalanced datasets. Second, we loosen the assumption of having full access to the underlying distribution from previous studies (Yeom et al., 2018; Jayaraman et al., 2020), and propose sample-based estimations with theoretical guarantees. Third, along with the population-level membership privacy risk estimation via the optimal membership advantage, we offer the individual-level estimation via the individual privacy risk. Fourth, our framework allows adversaries to access the trained model via a customized query, while prior works require specific attributes (Hayes et al., 2019; Chen et al., 2019; Hilprecht et al., 2019). ## 1 Introduction The past decade has seen much progress in machine learning, largely due to the rapid growth in the number of large-scale datasets. However, concerns about the privacy of individuals being represented in datasets have led to a variety of regulations that made it increasingly difficult to share sensitive data across institutions ∗equal contribution especially in healthcare (Voigt & Von dem Bussche, 2017). Recent progress in the area of generative machine learning has made it possible to share synthetic data, which reflects the statistical properties of the original datasets (Georges-Filteau & Cirillo, 2020). Such synthetic data has been shown to allow for the development of downstream machine learning models with limited loss of performance compared to the original data (Rajotte et al., 2021). This has led to synthetic data sharing being increasingly viewed as a potentially privacy preserving alternative to sharing the raw data (Tom et al., 2020) between institutions. Despite the appeal of using synthetic data generated by generative models as an alternative to traditional data sharing, recent work has shown common generative modeling approaches are often vulnerable to a variety of privacy attacks (Hilprecht et al., 2019; Hayes et al., 2019; Chen et al., 2019). This led to the development of differentially private generative modeling approaches which allow for the generation of synthetic data while providing strong formal privacy guarantees (Xie et al., 2018; Jordon et al., 2018). However, in the case of high dimensional datasets (such as images), such approaches have been shown to produce synthetic samples of very poor quality for any reasonable level of guaranteed privacy (Xie et al., 2018; Mukherjee et al., 2019). For example, Mukherjee et al. (2019) generated 50,000 synthetic CIFAR10 samples using the differentially private GAN (Xie et al., 2018) with a large privacy budget ε = 100. Then, a classifier was trained on the synthetic data, however the accuracy was below 20% when validated on the test set. It was initially assumed that the poor performance was the result of loose privacy accounting (leading to an overestimation of ε), but this assumption is recently challenged by a work indicating that moments accountant-based approaches lead to tight estimates of ε (Nasr et al., 2021). Thus greatly reducing the possibility of developing a differentially private generative modeling approach that could generate samples good enough to train a strong ML model in practical high-dimensional data settings. More recent work has focused on developing novel generative learning methods that have been empirically shown to be protected against certain types of privacy attacks, such as membership inference attacks (Mukherjee et al., 2019; Chen et al., 2021). Despite these advancements in empirically improving the privacy of generative models in a few settings, there is currently no approach to provide formal privacy certificates for such models. This in turn has limited the usability of such models in real-world applications where the complete lack of formal certificates would pose a problem with regulators. A promising line of related work has been in using membership inference attacks to audit the privacy of trained machine learning models (primarily discriminative models) with theoretical justifications (Yeom et al., 2018; Jayaraman et al., 2020). More specifically, these works estimate the membership privacy risk of a model against a specific adversary. As a result, it would be computationally expensive to estimate the maximum risk when there is a large group of adversaries, or even impossible given an infinite set of adversaries. As a comparison, MACE is able to estimate the maximum privacy risk via the Bayes optimal classifier. Another limitation is that these frameworks assume a full access to the underlying data distribution. For example, Yeom et al. (2018) used a dataset including 4819 patients who were prescribed warfarin, collected by the International Warfarin Pharmacogenetics Consortium to demonstrate their methodology. However, the method could hardly generalize if the population of interest includes all the patients who were prescribed warfarin instead of the specific 4819 patients. While the above case is more common in practice, it is usually not feasible to get full access to this kind of sensitive data. Thus, existing methods (Yeom et al., 2018; Jayaraman et al., 2020) are not applicable, and this leaves a gap between theory and practice. To overcome this restriction, MACE allows for not only a full access but also a limited access to the data via a simple random sample. Furthermore, MACE is able to provide consistent estimators of membership privacy risks at both individual and population level. This allows us to estimate the membership privacy risks of different subgroups, which is usually different as shown by Feldman (2020) on long-tailed distributions. Last but not least, much of this line of work has focused on auditing differentially private discriminative models. As a comparison, we focus on trained generative models, which may or may not be differentially private. To motivate our paper, let us look at a real-world application scenario. A clinical research institution wants to publicly release a medical imaging dataset to enable machine learning model development using the data. However, due to concerns about the personal health information (PHI) in the dataset, they look into synthetic data generation. Having identified a viable generative modeling approach, the institution is faced with a few questions prior to data/model release: i) should they release the synthetic dataset or the trained generative model?, ii) if they just release synthetic data, does it matter how many synthetic samples they release?, iii) how vulnerable is the synthetic data or the trained model to membership inference attacks. Currently, there is no answer to these questions, unless making a strong assumption of the adversaries i.e. only considering a few specific heuristic membership inference attacks (Hayes et al., 2019; Hilprecht et al., 2019), which is not realistic in practice. In this paper, we begin to answer these questions through the development of a flexible statistical framework to measure the membership privacy risk in generative models (MACE: Membership privACy Estimation). Our framework is built on the formulation of the membership privacy risk (given a query access) as a statistical divergence between the distribution of training-set and non-training-set samples. We show the utility of our framework using many SOTA queries from the literature and some new ones against common computer vision as well as medical imaging datasets. Our primary contributions are as follows: - We develop a framework to estimate the maximum membership privacy risk against adversaries that have query access to the model. Our framework can not only estimate membership privacy risks that are defined as the accuracy of the membership inference attack as in (Yeom et al., 2018), but also those from a more general risk class (Koyejo et al., 2014). This gives the users flexibility to measure the ability of a membership inference attack to distinguish members from non-members from different angles, especially when the training set is a small part of the total available dataset. In addition, MACE is capable of estimating the membership privacy risk given any scalar or vector valued attributes from a learned model, while prior works (Hayes et al., 2019; Chen et al., 2019; Hilprecht et al., 2019) restrict to a set of specific attributes. - Our framework is able to measure the individual-level membership privacy risk. This measures the risk of each individual sample against specific modes of membership inference attacks, allowing users to identify those high risk individuals and decide whether to exclude high risk samples and re-train their model. - We loosen the assumption of having full access to the training and non-training set from previous studies (Yeom et al., 2018; Jayaraman et al., 2020), and extend to the case where only a simple random sample is feasible. Furthermore, we derive consistent estimators for both the maximum membership privacy risk and the individual-level membership privacy risk with theoretical justifications. - We demonstrate the usability of MACE by experiments that analyze the membership privacy risks with regards to various query types and generative model architectures on three real-world datasets via the membership advantage and the individual privacy risk under both the accuracy-based and generalized metrics. ## 2 Background In this section, we present a brief background on query functions, membership inference attacks, attack experiments, and the Bayes optimal classifier. Then we briefly discuss the limitations of current membership inference approaches. We assume readers already possess a general understanding of generative models and Differential Privacy, but provide a short background section in the Appendix for the sake of completeness. ## 2.1 Notation We introduce notations that will be used in the rest of the paper. - Let z = (x, y) ∈ Z = *X × Y* be a data point from a data distribution D. Note that y is some extra information for generative models such as conditional GANs. In other words, z = x ∈ X for normal generative models. - We assume an adversary A would have query access to a model via a query function QS(·) : *X × Y →* R q, where q is an integer. Examples of adversaries and query functions are given in Sections 2.3 and 2.2, respectively. - Let S ∼ Dn be an ordered list of n points. It is referred to as *training set*, sampled from D. We will assume the training set S to be fixed in this paper. - z ∼ S denotes uniformly sampling from a training set S. Also, z *∼ D\*S denotes uniformly sampling from the data distribution not including the training set S, which is referred to as sampling from a hold-out set. - For a set of samples {z1, z2, · · · , zN }, we define their associated membership labels as {m1, m2, · · · , mN }, where mi = 1 if ziis in the training set and mi = −1 otherwise for i = 1, · · · , N. - For a given condition C, let I(C) = 1 if the condition C holds, otherwise 0. ## 2.2 Query Functions In this subsection, we introduce some representative query functions QS(z) considered in or motivated by prior work. Following Chen et al. (2019); Hilprecht et al. (2019); Hayes et al. (2019), we divide our attack settings based on the accessibility of model components: (1) access only to generated synthetic data and (2) access to models. ## 2.2.1 Accessible Synthetic Datasets In the common practice of synthetic data releasing, researchers or data providers may consider releasing only generated datasets or just the generator. However, prior works (Chen et al., 2019; Hilprecht et al., 2019) have shown releasing generator/synthetic datasets can cause privacy leakage. Specifically, for the case where a generative model is released, Chen et al. (2019) consider the following query function QS(z) = minw L2(*z, G*(w)), where G is the generator released to the public. Alternatively, Hilprecht et al. (2019) first generate a large synthetic dataset g1, g2, · · · , gn using the generator and then use the following query function: $$Q_{S}(z)=-\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(g_{i}\in U_{\varepsilon}(z))\log d\left(g_{i},z\right),$$ where d is some distance metric and Uε is ε-ball defined on distance metric d. Similar to these approaches, we assume that the generator memorizes the training data thus it generates synthetic dataset close to the training data. Under this assumption, if a sample x is closer to the synthetic dataset, it is more likely that x belongs to the training data. Hence for a sample z, we consider using the nearest neighbor distance to synthetic datasets as the query function: $$(1)$$ $$Q_{S}(z)=\operatorname*{min}_{j\in[n]}d(z,g_{j})\ ,$$ $$\left(2\right)$$ d(*z, g*j ) , (2) where d is a distance metric. ## 2.2.2 Accessible Models In this setting, we assume the adversary has query access to the model (the discriminator and the generator in the case of GANs). Such a situation commonly arises when researchers open source model parameters, or share model parameters insecurely. For generative models, especially GANs, the most successful attack known (Hayes et al., 2019) assumes adversaries to access the model via the following query: $$Q_{S}(z)=D(z)\;,$$ QS(z) = D(z) , (3) where D(z) is the output of the discriminator corresponding to input sample z. Intuitively, if a sample is in the training set, the discriminator would be more likely to output high values. While the adversary could $$\left({\boldsymbol{3}}\right)$$ solely access the discriminator via the above query, we introduce a query below allowing accessing both the generator and the discriminator. $$Q_{S}(z)=(D(z),\operatorname*{min}_{j\in[n]}d(z,g_{j}))\;.$$ $$\left({\boldsymbol{4}}\right)$$ d(*z, g*j )) . (4) This attack is a combination of attacks described in Equations 3 and 2. While the discriminator score has been shown to be a very effective query for generative models with one discriminator, many recent privacy preserving generative modeling approaches often have multiple discriminators (Jordon et al., 2018; Mukherjee et al., 2019), with each discriminator being exposed to a part of the training dataset. Our previous query will not be useful in such situations. Here, we consider the recent work privGAN((Mukherjee et al., 2019) and present two queries (one single dimensional and one multi-dimensional). The single dimensional query used in (Mukherjee et al., 2019) is as follows: $$Q_{S}(z)=\operatorname*{max}_{i}D_{i}(z)\ ,$$ $\downarrow$ . $$\mathbf{\Sigma}$$ iDi(z) , (5) where D(z) is the output of the discriminator i corresponding to input sample z. We propose a new multi-dimensional query which is stated as follows: $$Q_{S}(z)=(D_{1}(z),\cdots,D_{r}(z))\;,$$ QS(z) = (D1(z), · · · , Dr(z)) , (6) For the purposes of demonstration, in this paper we use r = 2. ## 2.3 Membership Inference Attack Adversaries The goal of a membership inference attack (MIA) (Li et al., 2013; Shokri et al., 2017; Truex et al., 2018; Long et al., 2017), is to infer whether a sample z is a part of the training set S. In our paper, we assume an MIA access the model via a query function QS. Thus, an MIA adversary is equivalent as a classifier given QS(z) as the input to predict whether z belongs to S. In this paper, we focus on generative machine learning models (such as GANs). The study of MIAs against generative models is a relatively new research area. Hayes et al. (2019) first demonstrated MIAs against GANs. They propose: i) a black-box adversary that trains a shadow GAN model using the released synthetic data, ii) a white-box adversary that uses a threshold on the discriminator score of a released GAN model. Hilprecht et al. (2019) demonstrates a black-box MIA adversary that uses only the generator of the GANs (or synthetic samples) and operates by thresholding the L2 distance between the query sample and the closest synthetic sample. These existing works focus on the construction of a strong binary classifier as the MIA adversary, given different query functions. The details of these query functions have been discussed in Section 2.2. ## 2.4 The Attack Experiment The membership privacy risk that arises from a query to the model is usually evaluated through a membership privacy experiment (Yeom et al., 2018; Jayaraman et al., 2020). The experiment assumes we have sampled a training set S with size |S| = n from the data distribution D. Then a learning algorithm is trained on S, and an adversary would have access to the trained model through a query function QS : *Z → Q*. To be specific, an adversary is provided with the query output of a randomly sampled point z from either S (with probability p) or D\S (with probability 1 − p). The adversary would then get a utility 1 if it guesses the membership correctly or an utility −1 otherwise. ## 2.5 The Bayes Optimal Classifier Since membership identification is essentially a binary classification task, a membership adversary can then be seen simply as a binary classification model. Indeed, many existing papers on membership inference explicitly train binary classifiers for the purpose of membership inference (Shokri et al., 2017). The performance of such classifiers is often also used to empirically measure the membership privacy risk of different models (Mukherjee et al., 2019). As the binary classifiers used in such papers are heuristically chosen, there is no guarantee that a better classification model does not exist for the task. The classifier that minimizes the expected error rate (maximizes accuracy) is called the Bayes optimal classifier (Devroye et al., 2013). Given, samples (x, y) ∈ Rd *× {−*1, 1}, the Bayes optimal classifier is: $$C^{B a y e s}(x)=\operatorname*{arg\,max}_{r\in\{-1,1\}}\mathbb{P}(Y=r|X=x).$$ $$\left(7\right)$$ P(Y = r|X = x). (7) Note: the Bayes optimal classifier can only be approximated in practical scenarios, since we rely on estimates of P(Y = r|X = x). While the Bayes optimal classifier in Equation 7 was originally designed to maximize the classification accuracy, Koyejo et al. (2014) has extended it to a family of the generalized metrics. We first define TP, FP, FN and TN as P(A(X) = 1, Y = 1), P(A(X) = 1, Y = −1), P(A(X) = −1, Y = 1) and P(A(X) = −1, Y = −1) respectively. Next, we show the definition of the generalized metrics as follows: $$\ell({\cal A},{\cal P})=\frac{a_{0}+a_{11}{\rm TP}+a_{10}{\rm FP}+a_{01}{\rm FN}+a_{00}{\rm TN}}{b_{0}+b_{11}{\rm TP}+b_{10}{\rm FP}+b_{01}{\rm FN}+b_{00}{\rm TN}},\tag{8}$$ where A is the adversary, P is the distribution, a0, b0, aij and bij are pre-defined scalars for i = 0, 1 and j = 0, 1, (X, Y ) ∼ P. The generalized metric can be used to represent several commonly used metrics such as accuracy, PPV, TPR, TNR, WA etc. (see Appendix). It is then demonstrated that the Bayes optimal classifier for this family of generalized metrics takes the forms: $$\operatorname{sgn}(\eta(x)-t_{\ell}){\mathrm{~or~sgn}}(t_{\ell}-\eta(x)),$$ $$({\mathfrak{g}})$$ sgn(η(x) − tℓ) or sgn(tℓ − η(x)), (9) where η(x) := P(Y = 1|A(X) = A(x)) and tℓ ∈ (0, 1) is a constant depending on the metric ℓ, when the marginal distribution of X is absolutely continuous with respect to the dominating measure on X (Koyejo et al., 2014). It is worth noting that the Bayes optimal classifier for the generalized metric can only be approximated in practical scenarios, due to the lack of closed-form expressions of η(x) and tℓ. ## 2.6 Limitations Of Current Membership Inference Approaches There are several limitations in the existing literature on membership inference. First, most papers (Hayes et al., 2019; Hilprecht et al., 2019) focus on developing novel heuristic membership inference attacks, which are often limited in scope and can hardly be extended to another query. This is particularly problematic as much of these heuristic approaches cannot readily generalize to the generative modeling setting. Second, the current formal membership privacy estimation frameworks Yeom et al. (2018); Jayaraman et al. (2020) require a full access to the underlying data distribution, while this is not always possible in practice. Third, no paper has yet provided a rigorous approach to estimate the membership privacy risk at the individual level. Fourth, for most of the current membership inference methods (Hayes et al., 2019; Hilprecht et al., 2019) probability p of Experiment 1 is usually set as 0.5 to form a balanced binary classification problem. However, in practice, p is usually much smaller than 0.5, as pointed out in prior work (Jayaraman et al., 2020; Rezaei & Liu, 2020). In this work we seek to address all these issues. ## 3 The Membership Privacy Risk Quantification In this section, we first introduce the membership advantage of a given adversary. Then, we define the optimal membership advantage as the maximum membership advantage. It is the maximum expected membership privacy risk of any adversary for the whole population. Furthermore, we present the optimal membership inference adversary. Finally, the individual privacy risk would be proposed to estimate the membership privacy risk at the individual level. The first subsection focuses on the accuracy-based metric, and the second subsection extends to the generalized metrics. We first propose an experiment formalizing membership inference attacks, adapted from Yeom et al. (2018). Experiment 1. Let D be the data distribution on Z. We first have a fixed training set S ∼ Dn *with size* |S| = n and have a trained model. An adversary A : Q → {−1, 1} *would access the trained model via the* query function QS : Z → Q*. The membership experiment proceeds as follows:* 1. Randomly sample m ∈ {−1, 1} *such that* m = 1 *with probability* p. 2. If m = 1, then uniformly sample z ∼ S; otherwise sample z ∼ D\S *uniformly.* Let E(D, S, QS) denote the distribution of (z, m) *in Experiment 1.* ## 3.1 For The Accuracy-Based Metric To define the optimal membership advantage, we first introduce the membership advantage of an adversary A given the query QS, as given in Definition 4 of Yeom et al. (2018). We define the membership advantage of the query QS by an adversary A as the rescaled expected accuracy of the membership inference attack adversary A. When p = 0.5, the membership advantage is equal to the difference between the adversary's true and false positive rates. Definition 1. The membership advantage of QS by an adversary A *is defined as* $\cos\,\cos\,\theta$ 2. $$(10)$$ $$\mathrm{Adv}_{p}(Q_{S},{\mathcal{A}})=2\mathbb{P}\left({\mathcal{A}}(Q_{S}(z))=m\right)-1,$$ Advp(QS, A) = 2P (A(QS(z)) = m) − 1, (10) where the probability is taken over the random sample (z, m) drawn from E(D*, S, Q*S). If the adversary A is random guessing, Advp(QS, A) = 0. If the adversary always gets the membership right, then Advp(QS, A) = 1. Further, note that P (A(QS(z)) = m) is the expected accuracy of the adversary's predictions. Hence this definition of membership advantage is directly related to accuracy. After introducing the membership advantage, we define the optimal membership advantage as the maximum membership advantage of all possible adversaries. Definition 2. *The optimal membership advantage is defined as* $$\operatorname{Adv}_{p}(Q_{S})=\operatorname*{max}_{\mathcal{A}}\operatorname{Adv}_{p}(Q_{S},{\mathcal{A}}).$$ $$(11)$$ AAdvp(QS, A). (11) The following lemma obtains the optimal adversary for the accuracy-based metric. Lemma 1. Given the query QS, the data distribution D, the training set S and the prior probability p*, the* Bayes optimal classifier A∗ *maximizing membership advantage is given by* $${\mathcal A}^{*}(Q_{S}(z))=\mathrm{sgn}\left(\mathbb{P}\left(m=1|Q_{S}(z)\right)-\frac{1}{2}\right)\;,$$ where the probability is taken over the random sample (z, m) drawn from E(D, S, QS). Furthermore, the optimal membership advantage can be re-written as $$(12)$$ $$(13)$$ $$\mathrm{Adv}_{p}(Q_{S})=\mathbb{E}_{z}\left[|f_{p}(z)|\right]\ ,$$ $$(14)$$ Advp(QS) = Ez [|fp(z)|] , (13) where $$f_{p}(z)={\frac{\mathbb{P}(Q_{S}(z)|m=1)p-\mathbb{P}(Q_{S}(z)|m=-1)(1-p)}{\mathbb{P}(Q_{S}(z)|m=1)p+\mathbb{P}(Q_{S}(z)|m=-1)(1-p)}}.$$ Proof. See the complete proof in Appendix D.1 Now we introduce the individual privacy risk at a sample z0. It is proportional to the accuracy of the Bayesian optimal classifier A∗conditioning on QS(z) = QS(z0). Definition 3. The individual privacy risk of a sample z0 for a query QS *under the accuracy-based metric is* defined as: AdvIp(QS, z0) = 2P[A ∗(QS(z)) = m|QS(z) = QS(z0)] − 1, where the probabilities are taken over the random sample (z, m) drawn from E(D, S, QS). $$\square$$ $${}^{*}(Q s(z))=m$$ We then provide several convenient properties of the individual privacy risk AdvIp(QS, z0) under the accuracybased metric at a sample z0. First, we show that it could be rewritten as |fp(z0)|, where fp is defined in Equation 14. This can be used in the following sections to derive a consistent estimator and confidence interval. Secondly, we connect the individual privacy risk to the optimal membership advantage. Thirdly, we show that AdvIp(QS, z0) is proportional to the highest accuracy conditioning on QS(z0). Fourthly, we establish the connection between the optimal membership advantage, the individual privacy risk and Differential Privacy. Theorem 1. Let QS be the query function and z0 a fixed sample ∈ Z. 1. The individual privacy risk at z0 *can be re-written as* $$\mathrm{Adv}\mathrm{I}_{p}(Q s,z_{0})=|f_{p}(z_{0})|$$ $$(15)$$ $$\square$$ 2. *The optimal membership advantage is the expectation of the individual privacy risk given the sample* z *as a random variable.* Advp(QS) = Ez[AdvIp(QS, z)] (15) 3. The Bayesian optimal classifier A∗*is optimal at the individual-level, as* AdvIp(QS, z0) = 2 max A P[A(QS(z)) = m|QS(z) = QS(z0)] − 1 4. If a training algorithm is ε-differentially private, then for any choice of z0, we have both the optimal membership advantage and the individual privacy risk bounded by a constant determined by ε: $$\mathrm{Adv}_{p}(Q_{S}),\mathrm{AdvI}_{p}(Q_{S},z_{0})\leq\operatorname*{max}\left\{\left|\operatorname{tanh}\left({\frac{\varepsilon+\lambda}{2}}\right)\right|,\left|\operatorname{tanh}\left({\frac{-\varepsilon+\lambda}{2}}\right)\right|\right\},$$ $where\ \lambda:=\log\left(\frac{p}{1-p}\right)$. When $p=0.5$ and $\varepsilon=1$, we have $z_{0}\in\mathcal{Z}$. *. When* p = 0.5 and ε = 1*, we have* Advp(QS), , AdvIp(QS, z0) ≤ 0.462 *for any* Proof. See the complete proof in Appendix D.3 ## 3.2 For The Generalized Metrics As previously mentioned, membership privacy leakage is a highly imbalanced problem (Jayaraman et al., 2020; Rezaei & Liu, 2020). For example, the training set in a medical dataset may consist of data from the patients admitted to a clinical study with a particular health condition and the distribution D may represent data from all patients (in the world). Notably, previous works (Hayes et al., 2019; Hilprecht et al., 2019; Mukherjee et al., 2019) used metrics such as accuracy, precision and recall to measure the privacy risks even for the highly imbalanced setting where p = 0.1 (Hayes et al., 2019). Although these attacks result in high accuracy, precision or recall during the privacy evaluation stage, it has been shown to suffer from a high false positive rate(Rezaei & Liu, 2020), thus is less useful in practice. To overcome these issues, prior work (Jayaraman & Evans, 2019; Jayaraman et al., 2020) proposes to use the positive predictive value (PPV), which is defined as the ratio of true members predicted among all the positive membership predictions made by an adversary A. Here, we allow users even more flexibility by adopting the generalized metric defined in Equation 8. Through Experiment 1, we define the following metric to measure the membership privacy risk under the generalized metric. Definition 4. The membership advantage of an adversary A *that has access to a trained model via a query* QS under the generalized metric ℓ *is defined as* $$\operatorname{Adv}_{\ell,p}(Q_{S},{\mathcal{A}})=\ell({\mathcal{A}}(Q_{S}(\cdot)),{\mathcal{E}}({\mathcal{D}},S,Q_{S})),$$ where E(D, S, QS) *is the distribution generated by Experiment 1.* After introducing the membership advantage under the generalized metric l, we define the optimal membership advantage as the maximum membership advantage. Definition 5. *The optimal membership advantage is defined as* $$\operatorname{Adv}_{\ell,p}(Q_{S})=\operatorname*{max}_{\mathcal{A}}\operatorname{Adv}_{\ell,p}(Q_{S},\mathcal{A}).$$ $$(16)$$ Similar to the accuracy-based metric, the optimal adversary for the generalized metric is the Bayes optimal classifier. While we mention that the exact function depends on the dataset and the metric, in some cases there exists a closed form solution for it. Lemma 2. Given the query QS, the data distribution D, the training set S and the prior probability p*, if* b11 = b01 and b10 = b00, then the Bayes optimal classifier A∗ *maximizing membership advantage under the* generalized metric l *asymptotically is given by* $${\mathcal A}^{*}(Q_{S}(z))=\operatorname{sgn}\left(\mathbb{P}\left(m=1|Q_{S}(z)\right)-t_{\ell}\right)\ ,$$ $\square$ where the probability is taken over the sample pair (z, m) drawn from E(D, S, QS) and tℓ =a00−a10 a11−a10−a01+a00 . Proof. See the complete proof in Appendix D.2 As a sanity check, for accuracy, we have tACC = 1 2 . A wide range of performance measure under class imbalance settings can be seen as instances of the family of metrics under Lemma 2. For example, AM measure (Menon et al., 2013) defined as AM := 1/2 (TPR + TNR) has optimal threshold tAM = p. As another example, TPR or recall has the optimal threshold tTPR = 0, which means in practice, we can always predict as positive to get the highest recall. Similar to the case of the accuracy based metric, we define the individual privacy risk for the generalized metric as follows: Definition 6. The individual privacy risk of a sample z0 for a query QS under generalized metric ℓ *is defined* as AdvIl,p(QS, z0) = ℓ(A ∗(QS(·)), E(D*, S, Q*S|QS(z0)), where A∗is the Bayes Optimal Adversary under the generalized metris and E(D*, S, Q*S|QS(z0) is the distribution generated by Experiment 1 given QS(z) = QS(z0). Remark 1. While we focus on a single query function in this section, the result can be easily extended to a set of queries {Q1S , Q2S , ·, QqS } *by computing the maximum risk of all the queries* QiS , i = 1, · · · , q. ## 4 Estimation Of The Individual Privacy Risk In the ideal world, we have a full access to the data distribution D. As a result, we could get the exact numbers to describe the individual privacy risk and the optimal membership advantage by Definition 2- 6. However, this could hardly be true in practice. For example, it is almost impossible to get access to the health records of all the patients around the world, while a simple random sample could be feasible. Previous studies (Yeom et al., 2018; Jayaraman et al., 2020) dealt with this by assuming the existence of a subset to fully represent a distribution. For instance, Jayaraman et al. (2020) used a Texas hospital data set consisting of 67,000 patient records as the data distribution to validate their framework. However, their framework could hardly be adapted to the case when the 67,000 patient records are only part of the population of interest. To conquer this limitation, we extend to the situation when one can only access the training set S and the data distribution D via a simple random sample. Moreover, we will provide a practical and principled approach to estimate the individual privacy risk for both the accuracy-based and the generalized metrics, In order to achieve this, we first propose an experiment. Note that this experiment would also be used to estimate the optimal membership advantage in the following section. Experiment 2. Let D be the data distribution on Z. We first have a training set S ∼ Dn *with size* |S| = n and have a trained model. An adversary would access the trained model via the query function QS : *Z → Q*. The ratio p is the prior probability. The sampling process proceeds as follows: 1. Uniformly sample N1 = N p points x1, · · · , xN1 from S. Assign zi = xi and mi = 1*, for* i = 1, · · · , N1. 2. *Uniformly sample* N2 = (1 − p)N points y1, · · · , yN2 from D\S. Assign zi = yi−N1 and mi = −1, for i = N1 + 1, · · · , N2. 3. Create a consolidated set of samples {(zi, mi)} N i=1, where mi = −1 if i ≤ N1 and mi = 0 if i ≥ N1. We assume that N p and N(1 − p) are always integers for simplicity in this paper. But keep in mind that all the algorithms, lemmas and theorems could be easily extended to the case when this assumption does not hold. ## 4.1 For The Accuracy-Based Metric By Theorem 1, we have AdvIp(QS, z) = |fp(z)|. Thus, it is sufficient to estimate fp(z) to estimate AdvIp(QS, z). In the following sub-sections, we describe the construction of consistent estimators for fp(z). ## 4.1.1 Discrete Queries When QS(z) ∈ Q is discrete, for a particular output j ∈ Q, let rj = P(QS(z) = j|m = 1) and qj = P(QS(z) = j|m = −1). Frequency-based plug-in methods have been used empirically in similar settings(Mukherjee et al., 2019; Yaghini et al., 2019). We simply collect samples from Experiment 2 and plug-in the fraction to estimate rj and qj . Then we account for the estimation error of this process by using a Clopper-Pearson confidence interval (Clopper & Pearson, 1934). We find the δ2 -Clopper Pearson lower bound for rj denoted as rˆj,lower and the δ2 -Clopper Pearson upper bound for rj denoted as rˆj,upper. Finally we have the following theorem. Theorem 2. Let {X1, · · · , XN1 , Y1, · · · , YN2 } *randomly sampled by Experiment 2. If* QS(z) = j*, then* 1. A consistent estimator of fp(z) *is given as follows:* $$\frac{\hat{r}_{j}p-\hat{q}_{j}(1-p)}{\hat{r}_{j}p+\hat{q}_{j}(1-p)},$$ $\square$ _where $\hat{r}_{j}=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\mathbb{I}(Q_{S}(X_{i})=j)$, and $\hat{q}_{j}=\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\mathbb{I}(Q_{S}(Y_{i})=j)$. The $(1-\delta)$-confidence interval $C_{1-\delta}(z)$ of $f_{p}(z)$ is_ 2. _The $(1-\delta)$-confidence interval $C_{1-\delta}(z)$ of $f_{p}(z)$ is $$\left[\frac{\hat{r}_{j,\mathrm{lower}}p-\hat{q}_{j,\mathrm{upper}}(1-p)}{\hat{r}_{j,\mathrm{lower}}p+\hat{q}_{j,\mathrm{upper}}(1-p)},\frac{\hat{r}_{j,\mathrm{upper}}p-\hat{q}_{j,\mathrm{lower}}(1-p)}{\hat{r}_{j,\mathrm{upper}}p+\hat{q}_{j,\mathrm{lower}}(1-p)}\right]\;.$$ $$\mathrm{D.4}$$ Proof. See the complete proof in Appendix D.4 ## 4.1.2 Continuous Queries Consider when QS is a continuous query. Let r(x) = P(QS(z) = x|m = 1) and q(x) = P(QS(z) = x|m = −1). We first use Kernel Density Estimators (KDE) for both r(x) and q(x). Recall that for samples v1, v2, · · · , vN ∈ R dfrom an unknown distribution R defined by a probability density function r, a KDE with bandwidth h and kernel function K is given by $$\frac{1}{Nh^{d}}\sum_{i=1}^{N}K\left(\frac{v-v_{i}}{h}\right).\tag{17}$$ Additionally, we have a plug-in confidence interval for KDE (Chen, 2017) (see Lemma 4). Using this, we have a consistent estimator for fp(z) with a confidence interval to estimate the uncertainty. Theorem 3. Let QS be a continuous query and z ∈ Z. Let {X1, · · · , XN1 , Y1, · · · , YN2 } *randomly sampled* from Experiment 2. 1. A consistent estimator of fp(z) *is given as follows:* $$\frac{\hat{r}_{N}(Q_{S}(z))p-\hat{q}_{N}(Q_{S}(z))(1-p)}{\hat{r}_{N}(Q_{S}(z))p+\hat{q}_{N}(Q_{S}(z))(1-p)},$$ $\square$ _where $\widehat{r}_{N}(Q_{S}(z))=\frac{1}{N_{1}z^{n}}\sum_{i=1}^{N_{1}}K\left(\frac{Q_{S}(z)-Q_{S}(X_{i})}{\hbar}\right)$, and $\widehat{q}_{N}(Q_{S}(z))=\frac{1}{N_{2}\hbar^{2}}\sum_{i=1}^{N_{2}}K\left(\frac{Q_{S}(z)-Q_{S}(X_{i})}{\hbar}\right)$._ 2. The (1 − δ)*-confidence interval* C1−δ(z) of fp(z) is $$\left[\frac{\widehat{r}_{\mathrm{lower}}(Q_{S}(z))p-\widehat{q}_{\mathrm{upper}}(Q_{S}(z))(1-p)}{\widehat{r}_{\mathrm{lower}}(Q_{S}(z))p+\widehat{q}_{\mathrm{upper}}(Q_{S}(z))(1-p)},\frac{\widehat{r}_{\mathrm{upper}}(Q_{S}(z))p-\widehat{q}_{\mathrm{lower}}(Q_{S}(z))(1-p)}{\widehat{r}_{\mathrm{upper}}(Q_{S}(z))p+\widehat{q}_{\mathrm{lower}}(Q_{S}(z))(1-p)}\right],$$ where [rblower, rbupper], [qblower, qbupper] are (1 − δ/2) confidence intervals of r(QS(z)) and q(QS(z)) *respectively. Both confidence intervals are derived by Lemma 4.* Proof. See the complete proof in Appendix D.5 ## 4.2 For The Generalized Metrics Similar to the case of the accuracy-based metric, we propose a consistent estimator to estimate the individual privacy risk for the generalized metric. Though there is not always a closed-form expression for the individual privacy risk for the generalized metric, we are able to provide an explicit formula given a few conditions. The following theorem provides a way to calculate the confidence interval for the generalized metric given a known tℓ. Theorem 4. Let c1 = (a0 + a01), c2 = (a0 + a00), c3 = (a11 − a01), c4 = (a10 − a00), d1 = (b0 + b01), d2 = (b0 + b00), d3 = (b11 − b01), d4 = (b10 − b00), and z0 be a sample ∈ Z*. Under the following conditions:* 1. The Bayes optimal classifier A∗for the generalized metric l *can be written in the form of* sgn(η(x)−tℓ), and tℓ *is a known constant;* 2. P(m = 1|QS(z) = QS(z0)) ̸= tℓ; 3. pd1P(QS(z0)|m = 1) + d2(1 − p)P(QS(z0)|m = −1) + pd3P(QS(z0)|m = 1)I(P(m = 1|Qs(z0)) > tℓ) + d4(1 − p)P(QS(z0|m = −1)I(P(m = 1|Qs(z0)) > tℓ) ̸= 0; we have 1. *A consistent estimator of* AdvIl,p(QS, z0) *can be given as follows:* $$d_{3}\mathbb{P}(Q_{S}(z_{0})|m\,=\,1)\mathbb{I}(\mathbb{P}(z_{0})|m\,=\,0;$$ c1rpˆ + c2qˆ(1 − p) + c3rpˆ I((1 − tℓ)*pr > t* ˆ ℓ(1 − p)ˆq) + c4qˆ(1 − p)I((1 − tℓ)pr > t ˆ ℓ(1 − p)ˆq) d1rpˆ + d2qˆ(1 − p) + d3rpˆ I((1 − tℓ)*pr > t* ˆ ℓ(1 − p)ˆq) + d4qˆ(1 − p)I((1 − tℓ)pr > t ˆ ℓ(1 − p)ˆq) $$\frac{((1-t_{\ell})p\hat{r}>t_{\ell}(1-p)\hat{q})}{\mathbb{I}((1-t_{\ell})p\hat{r}>t_{\ell}(1-p)\hat{q})}.\quad(18)$$ If Q *is discrete and* QS(z0) = j, then rˆ = rˆj and qˆ = qˆj , as defined in Theorem 2. If QS is a continuous query, then rˆ = ˆrN (QS(z0)) and qˆ = ˆqN (QS(z0)), as defined in Theorem 3 2. The (1−δ)*-confidence interval of* AdvIl,p(QS, z0) follows by plugging the (1−δ/2) *confidence intervals* of p and q - [rˆlower, rˆupper]and [qˆlower, [qˆupper] *into Equation 18. In the case of discrete queries,* [rˆlower, rˆupper] = [rˆj,lower, rˆj,upper]*, where* QS(z0) = j, , as defined in Theorem 2. In the case of continuous queries, [rˆlower, rˆupper] = [rblower(QS(z0)), rbupper(QS(z0))], as defined in Theorem 3. qˆ*lower* and qˆupper are defined in a similar manner. Proof. See the complete proof in Appendix D.6. A wide range of performance measures satisfy Condition 1. For example, AM measure (Menon et al., 2013) has the optimal threshold tAM = p, and *T P R* has the optimal threshold tTPR = 0. Condition 2 can be waived if c3 = c4 = d3 = d4 = 0. Metrics such as accuracy, recall and specificity meet the above condition. Condition 3 assumes a nonzero expected value of the denominator in Equation 18.As for the confidence interval in 2, there could exist a closed-form expression if for example, AdvI(QS, z0) is a monotonic function in terms of P(QS(z0))|m = 1)/P(QS(z0))|m = −1). Otherwise, an optimization problem would need to be solved, as outlined in 2. ## 5 Estimation Of The Optimal Membership Advantage In the above section, we proposed to estimate individual privacy risk for both the accuracy-based and generalized metrics with theoretical justifications. In this section, we further develop methods to estimate the optimal membership advantage Advp(QS). ## 5.1 For The Accuracy-Based Metric We first propose a consistent estimator for the optimal membership advantage given a discrete query. Theorem 5. Let X1, · · · , XN1 be i.i.d. random variables drawn from S and Y1, · · · , YN2 be i.i.d. random variables drawn from D\S in Experiment 2, where N1 = pN and N2 = (1 − p)N. Assume that Q is a finite set. Define $$W_{N}=W(X_{1},\cdots,X_{N_{1}},Y_{1},\cdots,Y_{N_{2}})$$ $$=\sum_{j\in\mathcal{Q}}|\frac{p}{N_{1}}\sum_{i=1}^{N_{1}}\mathbb{I}(Q_{S}(X_{i})=j)-\frac{1-p}{N_{2}}\sum_{i=1}^{N_{2}}\mathbb{I}(Q_{S}(Y_{i})=j)|\.$$ $$\square$$ Then (1) q WN *is a consistent estimator of the optimal membership advantage* Advp(QS); (2) P(|WN −EWN | ≥ 2 N log( 2 δ )) ≤ δ. Proof. See the complete proof in Appendix D.7. We then propose a consistent estimator for the optimal membership advantage given a continuous query. Theorem 6. Let X1, · · · , XN1 be i.i.d. random variables drawn from S, and Y1, · · · , YN2 be i.i.d. random variables drawn from D\S in Experiment 2, where N1 = N p and N2 = N(1 − p)*. Assume that* Q = R d and h → 0, as N → ∞*. Define* $$\begin{array}{r c l}{{U_{N}}}&{{=}}&{{U(X_{1},\cdots,X_{N_{1}},Y_{1},\cdots,Y_{N_{2}})}}\\ {{}}&{{}}&{{}}\\ {{}}&{{}}&{{=}}&{{\int|\frac{p}{N_{1}h^{d}}\sum_{i=1}^{N_{1}}K(\frac{x-Q_{S}(X_{i})}{h})-\frac{1-p}{N_{2}h^{d}}\sum_{i=1}^{N_{2}}K(\frac{x-Q_{S}(Y_{i})}{h})|d x.}}\end{array}$$ $\square$ Then (1) q UN *is a consistent estimator of the optimal membership advantage* Advp(QS); (2) P(|UN − EUN | ≥ 2 N log( 2 δ )) ≤ δ. Proof. See the complete proof in Appendix D.8. The practical estimation of the optimal membership advantage for the accuracy-based metric is described in Algorithm 1 using Monte Carlo integration. Algorithm 1 Practical estimation of the optimal membership advantage Adv dp(QS) Input: number of samples N, prior of membership p, training set S, query function QS, confidence level δ Output: the optimal membership advantage estimate Adv dp(QS) 1: Perform Experiment 2 and draw one set of samples {zi} N i=1 with membership {mi} N i=1 respectively. 2: if QS is discrete **then** 3: Use the tuples {(zi, mi)} N i=1 to calculate WN 4: Adv dp(QS) ← WN 5: **else** 6: Use the tuples {(zi, mi)} N i=1 to calculate UN 7: Adv dp(QS) ← UN 8: **end if** ## 5.2 For The Generalized Metrics In this subsection, we propose consistent estimators for the optimal membership advantage when there exists a closed form solution for the optimal adversary. Under some specific condition, Lemma 2 shows that the Bayes optimal classifier A∗ maximizing membership advantage under the generalized metric l is given by $${\mathcal{A}}^{*}(Q_{S}(z))=\operatorname{sgn}\left(\eta(Q_{S}(z))-i\right)$$ ∗(QS(z)) = sgn (η(QS(z)) − tℓ), η(QS(z)) = P(m = 1|QS(z)) , (19) where tℓ = (a00 − a10)/(a11 − a10 − a01 + a00). Hence, we propose Algorithm 2 when the Bayes optimal classifier can be written as Equation 19 and tℓ is known. As shown in Algorithm 2, we first split the N samples into two partitions, and use the first partition to obtain the estimator ηˆ(QS(z)) for η(QS(z)). Following this, we estimate the Bayes optimal classifier A∗ by $$(19)$$ $$(z))=\mathbb{P}(m=1|Q_{S}(z))\ ,$$ $$\hat{A}(Q_{S}(z))=\operatorname{sgn}\left(\hat{\eta}(Q_{S}(z))-t_{\ell}\right).$$ Next, we use the second partition to obtain an empirical measure of Advℓ,p(QS, Aˆ). More specifically, we Algorithm 2 Practical estimation of the optimal membership advantage Advℓ,p(QS) under a generalized metric ℓ Input: number of samples N, prior of membership p, training set S, query function QS, generalized metric ℓ Output: the empirical membership privacy Advℓ,p(QS) estimate 1: Perform Experiment 2 and split the N samples into two sets S1 and S2. 2: Estimate η(QS(z)) by ηˆ(QS(z)) using S1. 3: Let Aˆ(QS(z)) = sgn (ˆη(QS(z)) − tℓ), where tℓ = (a00 − a10)/(a11 − a10 − a01 + a00). 4: Calculate Adv dℓ,p(QS, Aˆ) using S2. adopt the empirical measure defined by Koyejo et al. (2014). For any adversary A, assuming that we sample {zi, mi} n i=1 by Experiment 2, we first define $$\mathrm{TP}_{n}({\cal A})=\sum_{i=1}^{n}\frac{1}{n}\left(\frac{1}{2}{\cal A}(Q_{S}(z_{i})+\frac{1}{2})\left(\frac{1}{2}m_{i}+\frac{1}{2}\right),\ \gamma_{n}({\cal A})=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{2}{\cal A}(Q_{S}(z_{i})+\frac{1}{2}\right),\ \gamma_{n}(z_{i})\right)\right)$$ as the empirical estimate of TP= P(M = 1, A(QS(Z)) = 1) and P(A(QS(Z)) = 1). After this, we define the empirical measure of Advℓ,p(QS, A) as follows: $$\widehat{\mathrm{Adv}}_{\ell,p}^{n}(Q_{S},{\mathcal{A}})=\frac{e_{0}+e_{1}\mathrm{TP}_{n}({\mathcal{A}})+e_{2}\gamma_{n}({\mathcal{A}})}{h_{0}+h_{1}\mathrm{TP}_{n}({\mathcal{A}})+h_{2}\gamma_{n}({\mathcal{A}})}$$ h0 + h1TPn(A) + h2γn(A)(20) with the constants e0 = a01p + a00 − a00p + a0, e1 = a11 − a10 − a01 + a00, e2 = a10 − a00, $$(20)$$ h0 = b01p + b00 − b00p + b0, h1 = b11 − b10 − b01 + b00, h2 = b10 − b00. In Algorithm 2, the empirical measure Adv dℓ,p(QS, Aˆ) = Adv d|S2| ℓ,p (QS, Aˆ). Next, we introduce the following lemma giving the form of the Bayes optimal classifier when the query QS is continuous. Lemma 3 (Koyejo et al. (2014)). Assume that the marginal distribution of QS(Z) in Experiment 1 is absolutely continuous with respect to the dominating measure on QS(Z)*. Given the constants* e0, e1, e2, h0, h1, h2 defined in Equation 20, define $$t_{\ell}^{*}={\frac{h_{2}\mathrm{Adv}_{\ell,p}(Q_{S})-e_{2}}{e_{1}-h_{1}\mathrm{Adv}_{\ell,p}(Q_{S})}}.$$ . 1. When e1 > h1Advℓ,p(QS), the Bayes optimal classifier A∗ *maximizing membership advantage under* the generalized metric ℓ *is given by* sgn(η(QS(z)) − t ∗ ℓ ); 2. When e1 < h1Advℓ,p(QS), the Bayes optimal classifier A∗ maximizing membership advantage under the generalized metric ℓ *is given by* sgn(t ∗ ℓ − η(QS(z))). By Lemma 3, the specific form of the Bayes optimal classifier relies on the unknown optimal membership advantage Advℓ,p(QS). Koyejo et al. (2014) suggested to estimate loose upper and lower bounds of Advℓ,p(QS) to determine the classifier. In the rest of this subsection, we assume e1 > h1Advℓ,p(QS) so A∗(QS(z)) = sgn(η(QS(z)) − t ∗ ℓ ) . The case where e1 < h1Advℓ,p(QS) can be solved similarly. Based on Lemma 3, we propose Algorithm 3 when the query QS is continuous. In Algorithm 3, we first split the N samples into three partitions, and use the first partition to obtain the estimator ηˆ(QS(z)) for η(QS(z)). Next, we use the second partition to estimate tℓ. Combing these two steps, we obtain the empirical Bayes optimal classifier Aˆ(QS(z)) = sgn(ηb(QS(z)) − tˆℓ). We use the third partition to calculate the empirical measure of Advℓ,p(QS, Aˆ) by Equation 20. Especially in Algorithm 3, Adv dℓ,p(QS, Aˆ) = Adv d|S3| ℓ,p (QS, Aˆ). Algorithm 3 Practical estimation of the optimal membership advantage Advℓ,p(QS) under a generalized metric ℓ when tℓ is unknown Input: number of samples N, prior of membership p, training set S, query function QS, generalized metric ℓ Output: the empirical membership privacy Advℓ,p(QS) estimate 1: Perform Experiment 2 and draw three sets of samples S1, S2 and S3. 2: Estimate η(QS(z)) by ηˆ(QS(z)) using S1. 3: Compute tˆℓ = arg maxx∈(0,1) Adv dℓ,p(QS,sgn(ηb(QS(·)) − x)) on S2. 4: Let Aˆ(QS(z)) = sgn(ηb(QS(z)) − tˆℓ). 5: Calculate Adv dℓ,p(QS, Aˆ) using S3. We now introduce the following nice properties of the two algorithms. First, Theorem 7 shows that Advℓ,p(QS, Aˆ) in Algorithm 2 is a consistent estimate of the optimal membership advantage Advℓ,p(QS) if E|ηˆ − η| 2 → 0. Using a suitable strongly proper loss function, we can obtain an estimator ηˆ satisfying E|ηˆ − η| 2 → 0 by the proof of Theorem 5 (Menon et al., 2013). Theorem 7. Assume that b11 = b01, b10 = b00, a11 > a01 and a00 > a10. Let Aˆ *outputted by Algorithm 2. If* EQS (z)(|ηˆ(QS(z)) − η(QS(z))| σ) p−→ 0 *for some* σ ≥ 1, Advℓ,p(QS, Aˆ) p−→ Advℓ,p(QS). Proof. See the complete proof in Appendix D.9 Second, we show the consistency of the empirical measure of the membership advantage of any adversary by the proof of Lemma 8 (Koyejo et al., 2014). Theorem 8 (Koyejo et al. (2014)). *For each adversary* A, Adv dn ℓ,p(QS, A) p−→ Advℓ,p(QS, A). $\square$ Finally, we show the following nice property of tˆℓ - the estimate of tℓ in Algorithm 3. Theorem 9 (Koyejo et al. (2014)). Assume that the marginal distribution of QS(Z) *in Experiment 1 is* absolutely continuous with respect to the dominating measure on QS(Z). Let tˆℓ *be outputted by Algorithm 3.* If ηˆ p−→ η, $$\mathrm{Adv}_{\ell,p}(Q_{S},\operatorname{sgn}(\eta(Q_{S}(z))-{\hat{t}}_{\ell}))\stackrel{p}{\to}\mathrm{Adv}_{\ell,p}(Q_{S}).$$ ## 6 Experiments In this section, we demonstrate how to use MACE by performing practical membership privacy estimation on trained generative models. ## 6.1 Setup The different GAN architectures used were WGAN-GP (Gulrajani et al., 2017) (on CIFAR-10 & MNIST), JS-GAN (Goodfellow et al., 2014) (on skin-cancer MNIST) and privGAN (Mukherjee et al., 2019) (on MNIST). A JS-GAN is the original GAN formulation which uses a Jensen-Shannon divergence based loss. A WGAN-GP is an improved GAN formulation with a Wasserstein distance based loss with a gradient penalty. A privGAN is a GAN formulation that utilizes multiple generator-discriminator pairs and has been empirically shown to provide membership privacy. Our experiments would be based on the following three real-world datasets: MNIST, CIFAR-10 and skin cancer MNIST. MNIST contains gray scale images of handwritten digits with 70000 digits from 0 to 9. CIFAR-10 comprises of 10 classes of 32 x32 RGB colored images with 60000 images in the whole dataset. Both of them are commonly used in the GAN literature. Additionally, to demonstrate a real world use case in healthcare, we use the skin cancer MNIST dataset (Tschandl et al., 2018) which comprises of 10,000 64 × 64 RGB images of skin lesions (both cancerous and benign). Following the common practice of membership inference attacks on generative models (Hayes et al., 2019; Mukherjee et al., 2019; Chen et al., 2019), we choose a random 10% subset of the entire dataset as a training set to show overfitting. In sub-section 6.3, 10% of the training images were corrupted by placing a white box at the center of images (with no changes to the non-training set images). For all the experiments, we set the confidence level δ = 0.05. To create discrete queries, we bin the continuous interval into 100d bins (where d is the dimension of the query). ![14_image_0.png](14_image_0.png) Figure 1: a) Comparison of discrete vs continuous queries against the WGAN-GP on the CIFAR-10 dataset. b) Comparison of queries against the generator and the discriminator on the skin-cancer MNIST dataset (for JS-GAN). In all cases δ is set to 0.05. c) Comparison of the optimal membership advantage and heuristic attacks' membership advantage for both queries against the generator and the discriminator on the skin-cancer MNIST dataset (for JS-GAN). ## 6.2 Estimation Of The Optimal Membership Advantage 6.2.1 For The Accuracy-Based Metric We first demonstrate the utility of our estimators of the optimal membership advantage under the accuracybased metric with regard to different query types. In Figure 1a we show the applicability of our discrete and continuous estimators using the discriminator score as the query. Additionally, we compare the estimated performance of the optimal adversaries against a heuristic adversary that uses the same query (Hayes et al., 2019). We find that for both discrete and continuous queries, the estimated membership advantage is higher for the optimal adversary compared to the heuristic adversary. We note that the continuous query is seen to yield somewhat poorer performance than the discrete query, most likely due to sub-optimal selection of the KDE bandwidth. Optimal choice of the binning/KDE hyperparameters are beyond the scope of this paper but we direct readers to existing papers on this topic (Chen, 2015; Knuth, 2019). In Figure 1b we show the utility of our estimators on queries against accessible models and accessible datasets using a query of each type. We use the discriminator score as an example of queries on an accessible model and the query described in Equation 2 as an example of queries on an accessible synthetic dataset. Additionally, we compare our optimal membership advantage with the membership advantage of SOTA heuristic adversaries that use the similar queries as in (Hayes et al., 2019; Hilprecht et al., 2019). As widely reported (Hayes et al., 2019; Mukherjee et al., 2019), we find among our experiments that the optimal membership advantage is a lot smaller when adversaries gain access to the datasets compared to when they get direct access to the model. Furthermore, our optimal membership advantage estimates are higher than the SOTA heuristic adversaries in both settings. This validates Theorem 5 and 6, and demonstrates that our estimators are good estimators for the optimal membership advantage that would bound all the membership advantages including those due to the SOTA heuristic adversaries. Figure 2: Comparison of single dimensional vs multi-dimensional queries against (a) privGAN on the MNIST ![15_image_0.png](15_image_0.png) dataset and (b) WGAN-GP on the skin-cancer MNIST dataset. Next, we demonstrate the applicability of our estimators to multi-dimensional queries. In Figure 2a we compare the estimated membership advantage for a single and a multi-dimensional queries against privGAN (Mukherjee et al., 2019). We see that the estimated membership advantage with the multi-dimensional query is much higher than that of the 1-d query used in the privGAN paper. This indicates that while the privGAN is less likely to suffer from overfitting than the JS-GAN, releasing multiple discriminators could potentially increase it's privacy risk. In Figure 2b, we compare the estimated membership advantage for two single and one multi-dimensional queries against the JS-GAN. The multi-dimensional query is indeed a hybrid query that is the combination of the two 1-d queries (Equation 4). Intuitively, a hybrid query should impose a higher privacy risk than an individual query. Our experiment has validated this assumption and shown that the hybrid query has a higher estimated membership advantage than each individual 1-d query. ![16_image_0.png](16_image_0.png) Figure 3: The membership advantages of the optimal adversary vs. the SOTA adversary against WGAN-GP under different metrics on the MNIST and CIFAR-10 datasets. For the sake of consistency, the membership advantage estimation for both metrics is done using the method used for the generalized metric. ## 6.2.2 Generalized Metric To demonstrate how to apply MACE under the generalized metrics, we qualitatively compare the membership advantages under AM and the accuracy-based metric. We set p = 0.1 here to construct an imbalanced dataset. In Figure 3, under both the accuracy-based metric and the AM metric, we compare the optimal membership advantage (the membership advantage of the optimal adversary) with the membership advantage of the heuristic adversary defined in (Hayes et al., 2019) using the discriminator score as the query. We find the heuristic adversary has comparable performance to the optimal adversary under both metrics. It is important to note here that while the optimal adversary is asymptotically optimal, on any set of samples there may be a stronger adversary possible. ## 6.3 Estimation Of Individual Privacy Risks Having demonstrated the utility of our estimators for estimating the optimal membership advantage, here we demonstrate the utility of our estimators for the individual privacy risk. It is well known that samples from minority sub-groups can often be more vulnerable to membership inference attacks (Yaghini et al., 2019). Thus it is necessary to reveal the membership privacy at the individual level to reflect the true risks faced by the minority group. For the purposes of a simple demonstration, we constructed a toy dataset where 10% of the images were corrupted (described in section 6.1). After that, we trained a JS-GAN on this dataset and estimated the individual privacy risk of samples against the discriminator score query using the individual privacy risk estimator described in section 4.1.1. In Figure 4, we show (both qualitatively and quantitatively) that the corrupted images on average have a higher individual privacy risk than the uncorrupted images. This shows that outlier samples or certain sub-groups can be more vulnerable than the rest of the population, and the optimal membership advantage alone may not capture this information. It is worth noting here that there are particularly severe ramifications of higher privacy leakage risks of minorities in healthcare settings, ![17_image_0.png](17_image_0.png) Figure 4: Application of individual privacy risk estimation. a) Example corrupted and uncorrupted images from skin-cancer MNIST, along with their estimated individual privacy risk. b) Comparison of average individual privacy risk between corrupted and uncorrupted images. specially if such information points to the disease status. Data and model owners in such cases can consider retraining their generative model by omitting such sub-groups of samples. ## 7 Conclusion And Remarks We developed the first formal framework that provides a certificate for the membership privacy risk of a trained generative model posed by adversaries having query access to the model at both the individual and population level. While our theory works regardless of the query dimension, we do not have a practical way to generate a meaningful certificate for the high-dimensional case. This would be a focus of our future work. Our framework works for a large family of metrics, allowing users flexible ways to measure the membership risk. Through experiments on multiple datasets, queries, model types and metrics, we show the practical applicability of our framework in measuring the optimal membership advantage as well as the individual privacy risk. Finally, to wrap up the paper, we re-visit our fictional example from the introduction to explain how MACE can help such data/model owners: - By comparing the optimal membership advantage against a given trained model of queries that access the model via different practical queries, the model owners can determine the relative risk of releasing: i) the complete trained model, ii) parts of the trained model, iii) the synthetic dataset. - For model owners who are only interested in releasing synthetic data, MACE can help identify how much data can be released for a desired level of membership advantage (see Appendix for example). - For datasets containing sensitive groups of samples, MACE allows to estimate the group-level membership privacy risk through the estimation of the individual privacy risk and identify the most vulnerable minorities. While we focus on generative models in this paper, our framework can be applied to discriminative models as well. An example application of our framework to a discrminative model is shown in the Appendix. While we focus on the theoretical aspects of membership privacy estimation in this paper, future work could look at using MACE to design new queries which can lead to stronger MIAs and a better understanding of membership privacy risks of different generative models. Another important direction is to derive a consistent estimator of the optimal membership advantage for the generalized metric by adapting Koyejo et al. (2014). It would also be interesting to extend our work to high-dimensional queries such as certain layers of the generator and discriminator. ## References Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. *arXiv preprint arXiv:1701.07875*, 2017. Tharindu R Bandaragoda, Kai Ming Ting, David Albrecht, Fei Tony Liu, and Jonathan R Wells. Efficient anomaly detection by isolation using nearest neighbour ensemble. In *2014 IEEE International Conference* on Data Mining Workshop, pp. 698–705. IEEE, 2014. David Berthelot, Thomas Schumm, and Luke Metz. Began: Boundary equilibrium generative adversarial networks. *arXiv preprint arXiv:1703.10717*, 2017. Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz. Gan-leaks: A taxonomy of membership inference attacks against gans. *arXiv preprint arXiv:1909.03935*, 2019. Junjie Chen, Wendy Hui Wang, Hongchang Gao, and Xinghua Shi. Par-gan: Improving the generalization of generative adversarial networks against membership inference attacks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 127–137, 2021. Su Chen. Optimal bandwidth selection for kernel density functionals estimation. Journal of Probability and Statistics, 2015, 2015. Yen-Chi Chen. A tutorial on kernel density estimation and recent advances. *Biostatistics & Epidemiology*, 1 (1):161–187, 2017. Charles J Clopper and Egon S Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. *Biometrika*, 26(4):404–413, 1934. L. Devroye and L. Gyorfi. *Nonparametric Density Estimation: The L1 View*. Wiley Interscience Series in Discrete Mathematics. Wiley, 1985. ISBN 9780471816461. URL https://books.google.com/books?id= ZVALbrjGpCoC. Luc Devroye, László Györfi, and Gábor Lugosi. *A probabilistic theory of pattern recognition*, volume 31. Springer Science & Business Media, 2013. Charles Elkan. The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, volume 17, pp. 973–978. Lawrence Erlbaum Associates Ltd, 2001. Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In *Proceedings of the* 52nd Annual ACM SIGACT Symposium on Theory of Computing, pp. 954–959, 2020. Jeremy Georges-Filteau and Elisa Cirillo. Synthetic observational health data with gans: from slow adoption to a boom in medical research and ultimately digital twins? *arXiv preprint arXiv:2005.13510*, 2020. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In *Advances in neural information processing systems*, pp. 5767–5777, 2017. Jamie Hayes, Luca Melis, George Danezis, and Emiliano De Cristofaro. Logan: Membership inference attacks against generative models. *Proceedings on Privacy Enhancing Technologies*, 2019(1):133–152, 2019. Benjamin Hilprecht, Martin Härterich, and Daniel Bernau. Monte carlo and reconstruction membership inference attacks against generative models. *Proceedings on Privacy Enhancing Technologies*, 2019(4): 232–249, 2019. Bargav Jayaraman and David Evans. Evaluating differentially private machine learning in practice. In *28th* {USENIX} Security Symposium ({USENIX} *Security 19)*, pp. 1895–1912, 2019. Bargav Jayaraman, Lingxiao Wang, David Evans, and Quanquan Gu. Revisiting membership inference under realistic assumptions. *arXiv preprint arXiv:2005.10881*, 2020. James Jordon, Jinsung Yoon, and Mihaela van der Schaar. Pate-gan: Generating synthetic data with differential privacy guarantees. In *International Conference on Learning Representations*, 2018. Kevin H Knuth. Optimal data-based binning for histograms and histogram-based probability density models. Digital Signal Processing, 95:102581, 2019. Oluwasanmi O Koyejo, Nagarajan Natarajan, Pradeep K Ravikumar, and Inderjit S Dhillon. Consistent binary classification with generalized performance metrics. In *Advances in Neural Information Processing* Systems, pp. 2744–2752, 2014. Ninghui Li, Wahbeh Qardaji, Dong Su, Yi Wu, and Weining Yang. Membership privacy: a unifying framework for privacy definitions. In *Proceedings of the 2013 ACM SIGSAC conference on Computer & communications* security, pp. 889–900, 2013. Yunhui Long, Vincent Bindschaedler, and Carl A Gunter. Towards measuring membership privacy. arXiv preprint arXiv:1712.09136, 2017. Aditya Menon, Harikrishna Narasimhan, Shivani Agarwal, and Sanjay Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In *International Conference on Machine* Learning, pp. 603–611. PMLR, 2013. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. *arXiv preprint arXiv:1411.1784*, 2014. Sumit Mukherjee, Yixi Xu, Anusua Trivedi, and Juan Lavista Ferres. Protecting gans against privacy attacks by preventing overfitting. *arXiv preprint arXiv:2001.00071*, 2019. Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, and Nicholas Carlini. Adversary instantiation: Lower bounds for differentially private machine learning. *arXiv preprint arXiv:2101.04535*, 2021. Jean-Francois Rajotte, Sumit Mukherjee, Caleb Robinson, Anthony Ortiz, Christopher West, Juan M. Lavista Ferres, and Raymond T. Ng. Reducing bias and increasing utility by federated generative modeling of medical images using a centralized adversary. GoodIT '21, pp. 79–84, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384780. Shahbaz Rezaei and Xin Liu. Towards the infeasibility of membership inference on deep models. *arXiv* preprint arXiv:2005.13702, 2020. Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Yann Ollivier, and Hervé Jégou. White-box vs black-box: Bayes optimal strategies for membership inference. In International Conference on Machine Learning, pp. 5558–5567, 2019. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In *2017 IEEE Symposium on Security and Privacy (SP)*, pp. 3–18. IEEE, 2017. Elysse Tom, Pearse A Keane, Marian Blazes, Louis R Pasquale, Michael F Chiang, Aaron Y Lee, Cecilia S Lee, and AAO Artificial Intelligence Task Force. Protecting data privacy in the age of ai-enabled ophthalmology. Translational Vision Science & Technology, 9(2):36–36, 2020. Reihaneh Torkzadehmahani, Peter Kairouz, and Benedict Paten. Dp-cgan: Differentially private synthetic data and label generation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern* Recognition Workshops, pp. 0–0, 2019. Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, and Wenqi Wei. Towards demystifying membership inference attacks. *arXiv preprint arXiv:1807.09173*, 2018. P Tschandl, C Rosendahl, and H Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. scientific data 5, 180161 (aug 2018), 2018. Paul Voigt and Axel Von dem Bussche. The eu general data protection regulation (gdpr). A Practical Guide, 1st Ed., Cham: Springer International Publishing, 10:3152676, 2017. Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. *arXiv preprint arXiv:1802.06739*, 2018. Mohammad Yaghini, Bogdan Kulynych, and Carmela Troncoso. Disparate vulnerability: On the unfairness of privacy attacks against machine learning. *arXiv preprint arXiv:1906.00389*, 2019. Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pp. 268–282. IEEE, 2018. ## A Background On Generative Adversarial Networks Generative Adversarial Networks are the most common class of generative models. The original GAN algorithm (Goodfellow et al., 2014) learns a distribution of a dataset by adversarially training two modules, namely, a generator and a discriminator. The goal of the generator G(w) is to learn a transformation that would convert a random vector w to a realistic data sample. The goal of the discriminator module D is to reliably distinguish synthetic samples (generated by the generator) from real samples. The mathematical formulation of this problem is as follows: $$\begin{array}{c}{{\operatorname*{min}_{G}\operatorname*{max}_{D}\mathbb{E}_{x\sim p_{r}(x)}[\log D(x)]+}}\\ {{\qquad\qquad\mathbb{E}_{x\sim p_{G}(x)}[\log(1-D(x)]\ .}}\end{array}$$ Here, Pr is the real data distribution, and PG is the distribution of G(w). There have been many GAN variants proposed since (Arjovsky et al., 2017; Mirza & Osindero, 2014; Berthelot et al., 2017). In this work, we examine our framework on the original GAN and some of its variations. B Examples of common metrics that can be derived from the generalized metric $\mathrm{ACC}=\dfrac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{TN}+\mathrm{FN}}\\[0.5em]\text{PPV or Precision}=\dfrac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}\\[0.5em]\text{TPR or Recall}=\dfrac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}\\[0.5em]\text{TNR}=\dfrac{\mathrm{TN}}{\mathrm{FP}+\mathrm{TN}}\\[0.5em]\text{WA}=\dfrac{w_{1}\mathrm{TP}+w_{2}\mathrm{TN}}{w_{1}\mathrm{TP}+w_{2}\mathrm{TN}+w_{3}\mathrm{FP}+w_{4}\mathrm{FN}}$ . ## C Background On Differential Privacy The definition of ε-differential privacy is given as follows: Definition 7 (ε-Differential Privacy). We say a randomized algorithm M is ε-differentially private if for any pair of neighbouring databases S and S ′that differ by one record and any output event E*, we have* $$\mathbb{P}(M(S)\in E)\ \leq\ e^{\varepsilon}\mathbb{P}(M(S^{\prime})\in E)\ .$$ ′) ∈ E) . (21) ## D Detailed Proofs D.1 Proof For Lemma 1 Proof. As we can see from Definition 1, the membership advantage is defined as 2Accuracy(A) − 1. This means the Bayes classifier is given by sgn P (m = 1|QS(z)) − 1 2 (Sablayrolles et al., 2019). We get the optimal membership advantage by plugging in this Bayes classifier. ## D.2 Proof For Lemma 2 Proof. When b11 = b01 and b10 = b00, our loss ℓ =a0+a11TP+a10FP+a01FN+a00TN b0+b11(TP+FN)+b00(TN+FP) = a0+a11TP+a10FP+a01FN+a00TN b0+b11p+b00(1−p). It becomes a linear combination of TP, TN, FP and FN, which is also called cost sensitive classification as defined in (Elkan, 2001). As outlined in Elkan (2001), the optimal classifier would predict 1 when $$\eta(Q_{S}(z))a_{01}+$$ η(QS(z))a01 + (1 − η(QS(z)))a00 ≥ η(QS(z))a11 + (1 − η(QS(z)))a10 . Rearranging the terms completes the proof. ## D.3 Proof For Theorem 1 Proof. By Equation 16 and Definition 3, we have AdvIp(QS, z0) = |P(m = 1|QS(z0)) − P(m = −1|QS(z0))| = |fp(z0)|, and 1 follows. 2 follows from Equation 15 and Equation 13. Then, we prove 3. If A equals 1 at QS(z0), 2P[A(QS(z)) = m|QS(z) = QS(z0)]−1 = 2P[m = 1|QS(z0)]−1 = P[m = 1|QS(z0)] − P[m = −1|QS(z0)]. If A equals -1 at QS(z0), 2P[A(QS(z)) = m|QS(z) = QS(z0)] − 1 = 2P[m = −1|QS(z0)] − 1 = P[m = −1|QS(z0)] − P[m = 1|QS(z0)]. Thus, the maximum equals to |P(m = 1|QS(z0)) − P(m = −1|QS(z0))| = AdvIp(QS, z0). Finally we show 4. By the post-processing property, the ε-differential privacy indicates that the query output satisfies for any record z, we have $\bigg|\log\frac{\mathbb{P}(Q_S(z)|m=1)}{\mathbb{P}(Q_S(z)|m=-1)}\bigg|\leq\varepsilon$ that is, the fact that $\varepsilon$ is a constant. Then 4 directly follows from 1, 2 and the fact that $\square$ $$(22)$$ $$\operatorname{fact}\,\operatorname{that}$$ $${\frac{x-1}{x+1}}=\operatorname{tanh}({\frac{1}{2}}\ln(x)).$$ $$(23)$$ $\square$ ln(x)). (23) Proof. By the law of large numbers, we have rˆj p−→ rj and qˆj p−→ qj . Then 1 follows from Slutsky's theorem (Corollary 1) and prj + (1 − p)qj > 0. To prove 2, we first derive the (1 − δ/2) confidence intervals of rj and qj by Clopper & Pearson (1934). Then we could divide the nominator and the denominator by qj , and 2 follows from the fact that x−1 x+1 is a monotonically increasing function for x > 0. Proof. By Chen (2017), we have rˆN (QS(z)) p−→ r(QS(z)) and qˆN (QS(z)) p−→ q(QS(z)). Then 1 follows from Slutsky's theorem (Corollary 1) and pr(QS(z)) + (1 − p)q(QS(z)) > 0. To prove 2, we first derive the confidence intervals of r(QS(z)) and q(QS(z)) by Chen (2017). Then we could divide the nominator and the denominator by q(QS(z)), and 2 follows from the fact that x−1 x+1 is a monotonically increasing function for x > 0. Proof. Define $r(Q_{S}(z_{0}))=\mathbb{P}(Q_{S}(z_{0})|m=1),\ q(Q_{S}(z_{0}))=\mathbb{P}(Q_{S}(z_{0})|m=-1),$ $$I(Q_{S}(z_{0}))=\mathbb{I}\left(\mathbb{P}(m=1|Q_{S}(z_{0}))>t_{\ell}\right).$$ $$(25)$$ We will first show that AdvIl,p(QS, z0) = pc1r(QS(z0)) + c2(1 − p)q(QS(z0)) + pc3r(QS(z0))I(Qs(z0)) + c4(1 − p)q(QS(z0)I(Qs(z0)) pd1r(QS(z0)) + d2(1 − p)q(QS(z0)) + pd3r(QS(z0))I(Qs(z0)) + d4(1 − p)q(QS(z0)I(Qs(z0)). (24) Note that the individual privacy risk at a sample z0 is written as AdvIl,p(QS, z0) = a0 + a11TP(QS(z0)) + a10FP(QS(z0)) + a01FN(QS(z0)) + a00TN(QS(z0)) b0 + b11TP + b10FP(QS(z0)) + b01FN(QS(z0)) + b00TN(QS(z0)) , (25) where a0, b0, aij and bij are pre-defined scalars for i = 0, 1 and j = 0, 1 as in Equation 8. TP(QS(z0)), FP(QS(z0)), FN(QS(z0)) and TN(QS(z0)) are the conditional versions of TP, FP, FN and TN on QS(z) = QS(z0). The four terms can be written as $$\mathrm{TP}(Q_{S}(z_{0}))=\mathbb{P}(\mathcal{A}^{*}(Q_{S}(z))=1,m=1|Q_{S}(z_{0})),$$ $$\begin{array}{l}{{\mathrm{FP}(Q_{S}(z_{0}))=\mathbb{P}(\mathcal{A}^{*}(Q_{S}(z))=1,m=-1|Q_{S}(z_{0})),}}\\ {{\mathrm{FN}(Q_{S}(z_{0}))=\mathbb{P}(\mathcal{A}^{*}(Q_{S}(z))=-1,m=1|Q_{S}(z_{0})),}}\\ {{\mathrm{TN}(Q_{S}(z_{0}))=\mathbb{P}(\mathcal{A}^{*}(Q_{S}(z))=-1,m=-1|Q_{S}(z_{0})).}}\end{array}$$ $\left[\left(G\right)\right]$ By FP(QS(z0)) = P(m = −1|QS(z0)) − TN(QS(z0)) & FN(QS(z0)) = P(m = 1|QS(z0)) − TP(QS(z0)), Equation 25 can be re-written as a0 + a10P(m = −1|QS(z0)) + a01P(m = 1|QS(z0)) + (a11 − a01)TP(QS(z0)) + (a00 − a10)TN(QS(z0)) b0 + b10P(m = −1|QS(z0)) + b01P(m = 1|QS(z0)) + (b11 − b01)TP(QS(z0)) + (b00 − b10)TN(QS(z0)) . (26) We first re-write $$\mathrm{TP}(Q_{S}(z_{0}))=\mathbb{P}(m=1|Q_{S}(z_{0}))\mathbb{I}(\mathbb{P}(m=1|Q_{S}(z_{0}))>t_{\ell})$$ and TN(QS(z0)) = P(m = −1|QS(z0))I(P(m = 1|QS(z0)) ≤ tℓ) by plug-in Equation 16. Then, by $1\left|Q_{\mathrm{m}}A\right|$ $$\mathbb{P}(m=1|Q_{S}(z_{0}))={\frac{\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=1)p}{\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=1)p+\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=-1)(1-p)}}$$ and $$\mathbb{P}(m=-1|Q_{S}(z_{0}))=\frac{\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=-1)(1-p)}{\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=1)p+\mathbb{P}(Q_{S}(z)=Q_{S}(z_{0})|m=-1)(1-p)},$$ we have Equation 26 equal to pc1r(QS(z0)) + c2(1 − p)q(QS(z0)) + pc3r(QS(z0))I(Qs(z0)) + c4(1 − p)q(QS(z0)I(Qs(z0)) pd1r(QS(z0)) + d2(1 − p)q(QS(z0)) + pd3r(QS(z0))I(Qs(z0)) + d4(1 − p)q(QS(z0)I(Qs(z0)). Hence, we have proved Equation 24. $$(27)^{\frac{1}{2}}$$ In the next step, we will show the consistency of the proposed estimator. Note that, we have rˆ(QS(z0)) p−→ r(QS(z0)), qˆ(QS(z0)) p−→ q(QS(z0)) by the proof of Theorems 2 and 3. By Equation 27, $$I(Q_{S}(z_{0}))=\mathbb{I}\left(\frac{p r(Q_{S}(z_{0}))}{p r(Q_{S}(z_{0}))+(1-p)q(Q_{S}(z_{0}))}>t_{\ell}\right)=\mathbb{I}((1-t_{\ell})p r(Q_{S}(z_{0}))>t_{\ell}(1-p)q(Q_{S}(z_{0}))).$$ It is a function of r and q, and it is continuous except when P(m = 1|QS(z0)) = tℓ. By the second condition, we have P(m = 1|QS(z0)) ̸= tℓ. Thus, I((1 − tℓ)*pr > t* ˆ l(1 − p)qˆ) p−→ I(QS(z0)) follows from the continuous mapping theorem (Theorem 11). By applying Slutsky's theorem (Corollary 1), we show that the nominator of Equation 18 converges in probability to the nominator in Equation 24. Similarly, we can prove that the denominator in Equation 18 converges in probability to the denominator in Equation 24. Because of the third condition, the denominator in Equation 24 is nonzero. Thus, 1 follows by Slutsky's theorem (Corollary 1). Finally, we observe that given (1 − δ/2)-confidence intervals for P(QS(z)|m = 1), P(QS(z)|m = −1), and the independence of P(QS(z)|m = 1) and P(QS(z)|m = −1), the joint (1−δ)-confidence interval of P(QS(z)|m = 1) and P(QS(z)|m = −1) is simply the union of the two (using union bound), and 2 follows. Proof. First note that $$\mathcal{A}^{*}(Q_{S})=\mathbb{E}_{Z}|f_{p}(Z)|$$ $$=\mathbb{E}_{Z}\left|\frac{\mathbb{P}(Q_{S}(Z)|M=1)p-\mathbb{P}(Q_{S}(Z)|M=-1)(1-p)}{\mathbb{P}(Q_{S}(Z))}\right|$$ $$=\sum_{j\in\mathcal{Q}}|\mathbb{P}(Q_{S}(Z)=j|M=1)p-\mathbb{P}(Q_{S}(Z)=j|M=-1)(1-p)|$$ It is sufficient to show that ∀j ∈ Q, It is sufficient to show that $j\in\mathbb{R}$, $$\left|\frac{p}{N_{1}}\sum_{i=1}^{N_{1}}\mathbb{I}(Q_{S}(X_{i})=j)-\frac{1-p}{N_{2}}\sum_{i=1}^{N_{1}}\mathbb{I}(Q_{S}(Y_{i})=j)\right|\stackrel{{\mathcal{P}_{s}}}{{=}}|\mathbb{P}(Q_{S}(Z)=j|M=1)p-\mathbb{P}(Q_{S}(Z)=j|M=-1)(1-p)|\,,\tag{28}$$ and then (1) follows by Slutsky's Theorem (Corollary 1). To prove Equation (28), we first show that $$\begin{array}{l}{{{\frac{1}{N_{1}}}\sum_{i=1}^{N_{1}}\mathbb{I}(Q_{S}(X_{i})=j)\stackrel{p}{\to}\mathbb{P}(Q_{S}(Z)=j|M=1),}}\\ {{{\frac{1}{N_{2}}}\sum_{i=1}^{N_{2}}\mathbb{I}(Q_{S}(Y_{i})=j)\stackrel{p}{\to}\mathbb{P}(Q_{S}(Z)=j|M=-1)}}\end{array}$$ by the law of large numbers. Then Equation (28) follows by applying the continuous mapping theorem (Theorem 11). Next, we prove (2). It is easy to verify that ∀i ∈ {1, · · · , N1}, ∀x1, · · · , xi, · · · , xN1 , y1, · · · , yN2 , x ′ i ∈ Z, we always have $$|W(x_{1},\cdots,x_{i},\cdots,x_{N_{1}},y_{1},\cdots,y_{N_{2}})-W(x_{1},\cdots,x_{i}^{\prime},\cdots,x_{N_{1}},y_{1},\cdots,y_{N_{2}})|\leq\frac{2p}{N_{1}}=\frac{2}{N}.$$ Similarly, for ∀i ∈ {1, · · · , N2} and ∀x1, · · · , xN1 , y1, · · · , yi, · · · , yN2 , y ′ i ∈ Z, we always have $$W(x_{1},\cdots,x_{N_{1}},y_{1},\cdots,y_{i},\cdots,y_{N_{2}})-W(x_{1},\cdots,x_{N_{1}},y_{1},\cdots,y_{i},\cdots,y_{N_{2}})|\leq\frac{2(1-p)}{N_{2}}=\frac{2}{N}.$$ Then by McDiarmid's inequality (Theorem 13), P(|WN − EWN | ≥ t) ≤ 2 exp − N 2 t 2. Let δ = 2 exp − N 2 t 2, and we have (2). Proof. (1) We first prove the consistency. Note that $$\begin{array}{ll}{\mathcal{A}^{*}(Q_{S})}&{=\mathbb{E}_{Z}|f_{p}(Z)|}\\ {}&{=\mathbb{E}_{Z}\left|\frac{\mathbb{P}(Q_{S}(Z)|M=1)p-\mathbb{P}(Q_{S}(Z)|M=-1)(1-p)}{\mathbb{P}(Q_{S}(Z))}\right|}\\ {}&{=\int|r(x)p-q(x)(1-p)|\,dx,}\end{array}$$ where r(x) = P(QS(z) = x|m = 1) and q(x) = P(QS(z) = x|m = −1). Rewrite UN as UN = Z|p N1h d X N1 i=1 K( x − QS(Xi) h) − 1 − p N2h d X N2 i=1 K( x − QS(Yi) h)|dx = Z p 1 N1h d X N1 i=1 K( x − QS(Xi) h) − r(x) ! + (pr(x) − (1 − p)q(x)) + (1 − p) q(x) −1 N2h d X N2 i=1 K( x − QS(Yi) h ! dx. Then Then $$|U_{N}-\mathcal{A}^{*}(Q_{S})|\leq\int\left|p\left(\frac{1}{N_{1}\hbar^{2}}\sum_{i=1}^{N_{1}}K(\frac{x-Q_{S}(X_{i})}{h})-r(x)\right)+(1-p)\left(q(x)-\frac{1}{N_{2}\hbar^{2}}\sum_{i=1}^{N_{2}}K(\frac{x-Q_{S}(Y_{i})}{h})\right)\right|dx$$ $$\leq p\int\left|\left(\frac{1}{N_{1}\hbar^{2}}\sum_{i=1}^{N_{1}}K(\frac{x-Q_{S}(X_{i})}{h})-r(x)\right)\right|dx+(1-p)\int\left|\left(q(x)-\frac{1}{N_{2}\hbar^{2}}\sum_{i=1}^{N_{2}}K(\frac{x-Q_{S}(Y_{i})}{h})\right)\right|dx$$ $$\geq0,$$ where the last step follows from Theorem 12 and Slutsky's theorem (Corollary 1). (2) It is easy to verify that ∀i ∈ {1, · · · , N1}, ∀x1, · · · , xi, · · · , xN1 , y1, · · · , yN2 , x ′ i ∈ Z, we always have $$|U(x_{1},\cdots,x_{i},\cdots,x_{N_{1}},y_{1},\cdots,y_{N_{2}})-U(x_{1},\cdots,x_{i}^{\prime},\cdots,x_{N_{1}},y_{1},\cdots,y_{N_{2}})|\leq\frac{2p}{N_{1}}=\frac{2}{N}.$$ Similarly, for ∀i ∈ {1, · · · , N2} and ∀x1, · · · , xN1 , y1, · · · , yi, · · · , yN2 , y ′ i ∈ Z, we always have $$|U(x_{1},\cdots,x_{N_{1}},y_{1},\cdots,y_{i},\cdots,y_{N_{2}})-U(x_{1},\cdots,x_{N_{1}},y_{1},\cdots,y_{i},\cdots,y_{N_{2}})|\leq{\frac{2(1-p)}{N_{2}}}={\frac{2}{N}}.$$ Then by McDiarmid's inequality (Theorem 13), P(|UN − EUN | ≥ t) ≤ 2 exp − N 2 t 2. Let δ = 2 exp − N 2 t 2, and we have (2). Proof. First, we would bound Advℓ,p(QS) − Advℓ,p(QS, Aˆ). By the definition of E(D*, S, Q*S), we have $\mathbb{P}(m=1)$ $$)=-1)=p$$ P(m = 1, A(QS(z)) = 1) + P(m = 1, A(QS(z)) = −1) = p and P(m = −1, A(QS(z)) = 1) + P(m = −1, A(QS(z)) = −1) = 1 − p. Thus, we could rewrite the membership advantage under the generalized metric Advℓ,p(QS, A) as follows: $${\mathfrak{o}}+a_{11}p+a_{00}(1-p)-(a_{11}-a_{01})\mathbb{P}$$ $a_{0}+a_{11}p+a_{00}(1-p)-(a_{11}-a_{01})\mathbb{P}(m=1,\mathcal{A}(Q_{S}(z))=-1)-(a_{00}-a_{10})\mathbb{P}(m=-1,\mathcal{A}(Q_{S}(z))=1)$. Define c = (a00 − a10)/(a00 − a10 + a11 − a01), β0 = (a0 + a11p + a00(1 − p)) / (b0 + b11p + b00(1 − p)), and β1 = (a00 − a10 + a11 − a01) / (b0 + b11p + b00(1 − p)). We have $${\rm Adv}_{\varepsilon,{\mathbb{P}}}(Q_{S},{\cal A})=\beta_{0}-\beta_{1}\left[c{\mathbb{P}}(m=-1,{\cal A}(Q_{S}(z))=1)+(1-c){\mathbb{P}}(m=1,{\cal A}(Q_{S}(z))=-1)\right].\tag{29}$$ Because a00 > a10 and a11 > a01, c ∈ (0, 1) and β1 > 0. Define Θ = {A : A(QS(z)) = sgn (ϕ(QS(z)) − tℓ), ϕ : R → [0, 1]}. Next, we would bound Advℓ,p(QS) − Advℓ,p(QS, Aˆ). $$Q_{S})-\mathrm{Adv}_{\ell,p}(Q_{S},{\hat{\mathcal{A}}}$$ $$\operatorname{Adv}_{\ell,p}(Q_{S})-\operatorname{Adv}_{\ell,p}(Q_{S},{\hat{\mathcal{A}}})=\operatorname*{max}_{\mathcal{A}}\operatorname{Adv}_{\ell,p}(Q_{S},{\mathcal{A}})-\operatorname{Adv}_{\ell,p}(Q_{S},{\hat{\mathcal{A}}})$$ $$=\max_{\mathcal{A}\in\Theta}\mathrm{Adv}_{\ell,p}(Q_{S},\mathcal{A})-\mathrm{Adv}_{\ell,p}(Q_{S},\tilde{\mathcal{A}})\tag{30a}$$ $$=\beta_{1}\left[c\mathbb{P}(m=-1,\tilde{\mathcal{A}}(Q_{S}(z))=1)+(1-c)\mathbb{P}(m=1,\tilde{\mathcal{A}}(Q_{S}(z))=-1)\right]-$$ $$\beta_{1}\inf_{\mathcal{A}}\left[c\mathbb{P}(m=-1,\mathcal{A}(Q_{S}(z))=1)+(1-c)\mathbb{P}(m=1,\mathcal{A}(Q_{S}(z))=-1)\right]$$ (30b) $$\leq\beta_{1}\mathbb{E}_{Q_{S}(z)}(|\tilde{\eta}(Q_{S}(z))-\eta(Q_{S}(z))|^{\sigma})\stackrel{{p}}{{\to}}0.\tag{30c}$$ $\square$ The step in Equation 30a follows from Lemma 2. The step in Equation 30b follows from Equation 29 and β1 > 0. The step in Equation 30c follows from β1 > 0, c ∈ (0, 1) and Lemma 4 (Menon et al., 2013). Note that Advℓ,p(QS, Aˆ) ≤ Advℓ,p(QS). Hence we have Advℓ,p(QS, Aˆ) p−→ Advℓ,p(QS). ## E Model Architectures And Hyper–Parameters Here we outline the different layers used in the model architectures for different datasets. The last layers of discriminators for WGAN experiments do not have sigmoid activation functions. The hyper-parameters are chosen the same same as (Goodfellow et al., 2014; Gulrajani et al., 2017). ## E.1 Mnist E.1.1 Generator Layers - Dense(units= 512, input size= 100) - LeakyReLU(α = 0.2) - Dense(units= 512) - LeakyReLU(α = 0.2) - Dense(units= 1024) - LeakyReLU(α = 0.2) - Dense(units= 784, activation = 'tanh') ## E.1.2 Discriminator Layers - Dense(units= 2048) - LeakyReLU(α = 0.2) - Dense(units= 512) - LeakyReLU(α = 0.2) - Dense(units= 256) - LeakyReLU(α = 0.2) - Dense(units= 1, activation = 'sigmoid') ## E.2 Cifar–10 E.2.1 Generator Layers - Dense(units=2 × 2 × 512) - Reshape(target shape= (2, 2, 512)) - Conv2DTranspose(filters= 128, kernel size= 4, strides= 1) - ReLU() - Conv2DTranspose(filters= 64, kernel size= 4, strides= 2, padding= 1) - ReLU() - Conv2DTranspose(filters= 32, kernel size= 4, strides= 2, padding= 1) - ReLU() - Conv2DTranspose(filters= 3, kernel size= 4, strides= 2,padding= 1, activation = 'tanh') ## E.2.2 Discriminator Layers - Conv2D(filters= 64, kernel size= 5, strides= 2) - Conv2D(filters= 128, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Conv2D(filters= 128, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Conv2D(filters= 256, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Dense(units= 1, activation = 'sigmoid') ## E.3 Skin-Cancer Mnist E.3.1 Generator Layers - Dense(units=4 × 4 × 512) - Reshape(target shape= (4, 4, 512)) - Conv2DTranspose(filters= 256, kernel size= 5, strides= 2) - ReLU() - Conv2DTranspose(filters= 128, kernel size= 5, strides= 2) - ReLU() - Conv2DTranspose(filters= 64, kernel size= 5, strides= 2) - ReLU() - Conv2DTranspose(filters= 3, kernel size= 5, strides= 2,activation = 'tanh') ## E.3.2 Discriminator Layers - Conv2D(filters= 64, kernel size= 5, strides= 2) - Conv2D(filters= 128, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Conv2D(filters= 128, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Conv2D(filters= 256, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Dense(units= 1, activation = 'sigmoid') E.3.3 Toy binary classifier layers - Conv2D(filters= 512, kernel size= 5, strides= 2) - LeakyReLU(α = 0.2) - Dense(units= 2, activation = 'relu') - Dense(units= 1, activation = 'sigmoid') ## F Auxiliary Lemmas And Theorems Theorem 10 (Slutsky's theorem). Let Xn d−→ X and Yn d−→ c, where c *is a constant. Then* 1. Xn + Yn d−→ X + c; 2. XnYn d−→ cX; 3. Xn/Yn d−→ X/c if c ̸= 0. Corollary 1 (Slutsky's theorem). Let Xn p−→ c0 and Yn p−→ c1, where c0 and c1 *are constants. Then* 1. Xn + Yn p−→ c0 + c1; 2. XnYn p−→ c0c1; 3. Xn/Yn p−→ c0/c1 if c1 ̸= 0. Proof. When c is a constant, Zn p−→ c is equivalent to Zn d−→ c. Thus we have Xn d−→ c0 and Yn d−→ c1. By applying Theorem 10 (1), we have Xn + Yn d−→ c0 + c1. Since c0 + c1 is a constant, it follows Xn + Yn p−→ c0 + c1. Similarly, we can prove 2 and 3. Theorem 11 (Continuous mapping theorem). Let f : R m → R q*be a measuarable function. Define* Cf = {x : *f is continuous at x*}. If Xn p−→ X and P(X ∈ Cf ) = 1*, then* $$f(X_{n})\ {\xrightarrow{p}}\ f(X).$$ Theorem 12 (Devroye & Gyorfi (1985)). Let pN be an automatic kernel estimate with arbitary density K, as defined in Equation (17). If h + (nhd) −1 → 0 completely (almost surely, in probability), then R|pN − p| → 0 (almost surely, in probability), for all density p on R d. Theorem 13 (McDiarmid's inequality). Let f : Z m → R *be a function satisfying* $f(z_{1},\ldots,z_{i},\ldots,z_{m})-f(z_{1},\ldots,z_{i}^{\prime},\ldots,z_{m})|\leq c_{i}$ $$(\forall i,\forall z_{1}\ldots z_{m},z_{i}^{\prime}\in\mathcal{Z}).$$ Denote $$v={\frac{1}{4}}\sum_{i=1}^{m}c_{i}^{2}.$$ $$(31)$$ Let Z1, · · · , Zm be independent variables with support on Z*. Then* $$\mathbb{P}(f(Z_{1},\cdots,Z_{m})-\mathbb{E}[f(Z_{1},\cdots,Z_{m})]\geq t)\leq\exp\left(-t^{2}/(2v)\right)$$ $$a n d$$ _una_ $$\mathbb{P}(f(Z_{1},\cdots,Z_{m})-\mathbb{E}[f(Z_{1},\cdots,Z_{m})]\leq-t)\leq\exp\left(-t^{2}/(2v)\right).$$ **Lemma 4** (Chen (2017)).: _With probability $(1-\delta/2)$, we have_ r(x) ∈ [rblower(x), rbupper(x)] , (31) where $$r(x)\in[{\widehat{r}}_{\mathrm{lower}}(x),\;\;{\widehat{r}}_{\mathrm{upper}}(x)]\;,$$ $$\widehat{r}_{\mathrm{lower}}(x)=\widehat{r}_{N}(x)-z_{1-\delta/4}\sqrt{\frac{\mu_{K}\cdot\widehat{r}_{N}(x)}{N h^{d}}}\;,$$ $$\widehat{r}_{\mathrm{upper}}(x)=\widehat{r}_{N}(x)+z_{1-\delta/4}\sqrt{\frac{\mu_{K}\cdot\widehat{r}_{N}(x)}{N h^{d}}}\;.$$ and µK := RK2(x)dx, z1−δ/4 is the (1 − δ/4) *quantile of a standard normal distribution.* ## G Connection To Differential Privacy As shown in Theorem 1, both the optimal membership advantage and the individual privacy risk are bounded by differential privacy guarantees. We construct a toy dataset from MNIST by forming a new imbalanced dataset with 6900 digit zeros and 700 digit sixes. This dataset is also used for anomaly detection (Bandaragoda et al., 2014). We set p = 0.5 for simplicity. Figure 5 shows the optimal membership advantage of the DP-cGAN (Torkzadehmahani et al., 2019) with different choices of the privacy budget ε. As we can see here, the theoretic upper bound given by tanh(ε/2) is much larger than the estimated optimal membership advantage. Figure 6 shows the individual privacy risks for both ε = 2 and ε = 10. As expected, even the highest individual privacy risk is strictly bounded by the upper bound derived from the privacy budget ε. The upper bound (*tanh*( 2 2 ) = 0.762 for ϵ = 2, and *tanh*( 10 2 ) = 0.999 for ε = 10). These demonstrations seem to indicate that if membership privacy is desired, using differentially private methods can lead to far too conservative models (which may lead to poorer model utility). However, it may also be that the privacy accounting in DP-cGAN is loose, thereby leading to an overestimation of ε. ## H The Optimal Membership Advantage As A Function Of Synthetic Dataset Size To explore the effect the size of synthetic datasets on the membership advantage, we first trained a JS-GAN on the skin-cancer MNIST dataset. The JS-GAN was trained for 2000 epochs and several synthetic datasets of sizes varying from 10 to 105samples was generated. The adversary used was the one described in Equation 2. It can be seen in Figure 7 that as the synthetic dataset size increases, so does the optimal membership advantage. ![29_image_0.png](29_image_0.png) Figure 5: The optimal membership advantage and its upper bound estimated versus the privacy budget ε for DP-cGANs. ![29_image_1.png](29_image_1.png) Figure 6: The individual privacy risks for DP-cGAN with privacy budget (ε, 10−5) (a) individual privacy risks for ε = 2 (b)individual privacy risks for ε = 10. ## I Estimation Of Optimal Membership Advantage For Discriminative Models To demonstrate that our estimators of optimal membership advantage are also applicable to discriminative ![29_image_2.png](29_image_2.png) models, we trained a simple binary classifier (architecture described in section E.3.3) on the skin-cancer MNIST dataset (same experimental setting as the generative model). We chose two queries: i) a black-box query - the final output of the classifier, ii) a white-box query - the output of the penultimate layer of the classifier. We discretized the outputs and used our discrete estimator to estimate the optimal membership advantage in each case, as seen in Figure 8. As expected, the white-box query has a higher optimal membership advantage than the black-box query. Figure 7: The optimal membership advantage vs. the synthetic dataset size. ![30_image_0.png](30_image_0.png) Figure 8: Estimation of the optimal membership advantage with white-box and black-box queries against a binary classification model on skin-cancer MNIST.
Review 1: Summary: This paper studies the problem of membership privacy estimation in generative models. More specifically, the authors propose a framework to estimate the maximum membership privacy risk for generative models. The proposed framework is more general than the previous heuristic methods and is able to characterize the individual-level membership privacy risk. Experiments validate the advantage of the proposed method. Strengths and Weaknesses: Strengths: 1. The proposed framework is very general and can be used to estimate privacy risk under different assumptions, i.e., releasing a model or synthetic dataset. 2. The proposed method takes into account the unbalanced membership in the data. 3. It can be used to estimate individual-level risk. 4. Experiments show the effectiveness of the proposed framework. Weakness: 1. The motivation of the proposed query function is not clear. 2. The accuracy based metric need to be further explained. Requested Changes: My main concerns of the current paper are as follows: 1. What is the motivation of your proposed query function in equation (2)? What is the advantage of the proposed query function compared with the previous one? 2. Is there a scaling issue in equation (4)? What is the output range of $D$ and $d$? 3. In section 3.1, the Adv definition in equation (1) is based on $p=0.5$ in Experiment 1. Why can it be used to derive the proposed optimal advantage since you are considering a general $p$? 4. In Experiment 2, your method also needs access to the distribution D, and why is this better than the previous work, e.g., Yeom et al. and Jayaraman et al.? Broader Impact Concerns: No ================================================== Review 2: Summary: The main contribution of the paper is to develop a framework to estimate the risk of membership inference attacks when the adversary is given oracle access to the model. The framework in the paper is general in the sense that it extends to general risk class considered in previous works and can measure individual level membership privacy risks, Strengths and Weaknesses: The entire use of the framework allows the authors to look at different studies of membership inference attacks using the same lens. Furthermore, it allows them to go beyond the previous studies by considering individual level privacy risks as well as when one does not have access to the entire training data; the latter makes it a more practical attack because it is not often the case that we have access to te entire training data. I have a few questions. Reading through your definition of the adversarial model, it seems like it is not restricted to GAN. It resembles a lot of what we usually see in cryptography. In cryptography, one would usually write such definitions as an adversarial game where the adversary is given auxiliary inputs. From the definition of the paper, I do not see that to be the case. Is it by choice? If not, then clarify. If there is auxiliary input available to the adversary, then mention it. Also, is the adversary passive or active, adaptive, or non-adaptive, honest-but-curious or malicious? None of that is clear in the paper. For the weakness, I found the writing (especially, the proofs) very sloppy. This makes it hard to verify the correctness of the paper. A journal paper should ideally have a much higher editor quality than a conference paper and I feel the paper falls short of that. I will use this opportunity to give some suggestions so that the authors can submit a version that I can verify before pointing to specific instances: 1. Please be consistent in the use of different citation techniques available in natbib. If you are saying, Following Chen et al. 2018, maybe, use the one that does not put parenthesis to be consistent with the later usage. 2. If you are using two or more items, then separate them with a conjunction. Like end of page 3, (1) <text>; and (2). 3. Use space between phrases and citations. Like, Hayes et al., 2019 assumes. assumes -> assume. 4. Change the phrase We then to We next or something like that. It is a very repetitive choice of phrase. 5. Please use active voice instead of passive voice. One example is line 3 and 6 in the description of Algorithm 1. Some specific instances: 1. After Definition 3. we -> We 2. Do not start a new sentence with a conjunction. A specific example is right after equation (17). 3. What is N in theorem statements? Define its value. I am guessing it is N=N_1+N_2 from Experiment 2. A theorem statement should be self-contained. 4. Which theorem belongs to Algorithm 1? 5. Algorithm 2: the line numbers are missing. 6. Appendix D.4: What is p? Is p(\cdot) a function? It seems like the authors have overloaded the symbol p and it makes the reviewer's job very difficult. The same issue is in Appendix D.4. 7. AdvI should be in math font to be consistent. 8. FP, etc are said to be that they are conditional forms, and the expression is given later. I would rather have the mathematical expression first before the description of what it means. 9. Proof of Theorem 4: I would elaborate on all the steps in the proof. There is no page limit and every mathematical expression should be spelled out. It might be helpful to readers if the authors do not write the mathematical expression as a paragraph but in the math environment. 10. \hat p tends in probability to the expression is only used in Theorem 2's proof unless the authors have overloaded p_N to denote p. 11. Please use a preliminaries section to state the version of Slutsky's theorem and the continuous mapping theorem that you use. It is very important given the repeated use of the aforementioned theorems. 12. Elaborate on how 1 follows by Slutsky theorem and the third condition. 13. Why is 2p/N_1 = 2/N? Again, what is N? 14. I cannot find Theorem 7. Requested Changes: Please take a look at the weakness section above. Broader Impact Concerns: N/! ================================================== Review 3: Summary: This paper introduces a framework for estimating membership privacy in generative models (e.g. GANs). The paper shared some of the setups with prior works such as Yeom et al 2018, and extended with more flexible assumptions to be applied in more general settings. Specifically this paper addresses the imbalance issue in prior membership privacy frameworks by utilizing a general metric class studied in Koyejo et al 2014. Then the authors made a novel connection between individual-level and population-level membership privacy. Finally the authors proposed another setting with looser assumptions than Yeom et al where only a random sample is accessible instead of the data distribution and provided practical estimators for membership risks in this setting. The framework was evaluated on three benchmark image datasets (MNIST, CIFAR10 and skin-cancer MNIST) and on different GAN architectures. Strengths and Weaknesses: Strengths 1. The paper is well-written and organized, and it clearly stated its contribution and differences to prior works. 2. The paper is technically sound where the proposed theoretical framework for membership privacy provides a convincing lower bound of risks. 3. The framework is very flexible and can be used for multiple settings and purposes. The individual privacy risk estimation is particularly important for us to understand the privacy of the vulnerable samples. 4. The empirical evaluation demonstrated its usability and how the framework provides a tighter privacy estimation than heuristic adversaries. Weaknesses 1. The datasets and models in the experiments are all small scaled. This is acceptable given the main contributions in the paper are in the theoretical aspects. 2. The individual risk estimation is very interesting and potentially important, however I did not find the discussion or experiments on this to be illustrative enough. Perhaps a distribution of the individual risks or more examples of the vulnerable samples would be helpful to highlight their characteristics. Also the observation is closely related to “memorization” in neural networks discussed in Feldman 2020, which is not cited. 3. Although the authors stated the connection to differential privacy in Theorem 1, there are no empirical experiments showing this connection. It would be very helpful to show a plot similar to Figure 1 in Erlingsson et al 2020. References: 1. Ulfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song. That which we call private. 2020. 2. Vitaly Feldman. Does Learning Require Memorization? A Short Tale about a Long Tail. 2020. Requested Changes: 1. In Algorithm 2, the authors proposed to estimate $\eta_(Q_S(z))$ by the first set of samples, while no further explanation is provided. Would appreciate the author to elaborate more on this estimation or provide a reference. 2. In Section 6.2.1, the authors stated “our optimal membership advantage estimates are higher than the SOTA heuristic adversaries in both settings.” However, I could only find comparison with SOTA heuristics in Figure 1a for 1 setting instead of both settings. Would appreciate the authors to complete the plots for the statement. 3. In Section 6.2.2, the authors stated “In this case, the performance of the optimal adversary is consistently better than the heuristic adversary.” This is the case for the MNIST AM metric plot but not very obvious in the CIFAR10 plots where the two curves overlap from epoch 5,000 or so. Would appreciate the authors to reword or explain the CIFAR10 plot in more detail. 4. In Figure 2a, why does the 2d query curve (red) fluctuate regularly? E.g. membership advantage could drop from 0.8 to 0.3. Would appreciate authors to explain this behavior. 5. Seems like for one comparison setting, one model and one dataset are used. The paper could be strengthened by showing the comparison on all models and all datasets. Minor and questions: 1. In Section 2.4, An adversary -> an adversary 2. All the plots in Section 6 are small. Bigger plots might be easier to read. 3. In algorithm 2, do the three sampled sets need to be disjoint? How would this impact the estimation of optimal membership advantage? 4. As the authors acknowledged in the conclusion that the framework could be extended to discriminative models. I also did not find that the framework specific to generative models, yet the title and motivation all focused on generative models. Why not emphasize that the framework is suitable for both discriminative and generative models, which covers a wider range of ML and could be more impactful. Broader Impact Concerns: No concern on the ethical implications ================================================== Metareview: Recommendation: Accept as is Comment: This paper makes a contribution in the area of estimating privacy of generative models. The reviewers initially had some feedback and comments on aspects of the work, which the authors addressed in an updated revision. Beyond this, there was little additional discussion among the reviewers, who felt their questions and concerns were addressed by the authors, and thus deemed this paper ready for publication. ==================================================
# Hq-Vae: Hierarchical Discrete Representation Learning With Variational Bayes Anonymous authors Paper under double-blind review ## Abstract Vector quantization (VQ) is a technique to deterministically learn features with discrete codebook representations. It is commonly achieved with a variational autoencoding model, VQ-VAE, which is further extended to hierarchical structures for high-fidelity reconstruction. However, hierarchical extensions of VQ-VAE often suffer from codebook/layer collapse issue, where the codebook is not efficiently used to express data well, hence deteriorates reconstruction accuracy. To mitigate this problem of the extensions, we propose a novel unified framework to stochastically learn hierarchical discrete representation on the basis of the variational Bayes framework, called hierarchically quantized variational autoencoder (HQ-VAE). HQ-VAE naturally generalizes the hierarchical variants of VQ-VAE such as VQVAE-2 and residual-quantized VAE (RQ-VAE) and provides them with a Bayesian training scheme. Our comprehensive experiments on image datasets show that HQ-VAE enhances codebook usage and improves reconstruction performance. We also validate HQ-VAE in terms of its applicability even to a different modality with an audio dataset. ## 1 Introduction Learning representations with discrete features is one of the core technologies in the field of deep learning. Vector quantization (VQ) for approximating continuous features with a set of finite trainable code vectors is a common technique to achieve such representation (Toderici et al., 2016; Theis et al., 2017; Agustsson et al., 2017). It has been widely adopted in several active applications including neural codecs, e.g., image compression (Williams et al., 2020; Wang et al., 2022) and audio codec (Zeghidour et al., 2021; Défossez et al., 2022). VQ-based representation methods have been improved with successful deep generative modeling, especially denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020; Dhariwal & Nichol, 2021; Hoogeboom et al., 2021; Austin et al., 2021) and autoregressive models (van den Oord et al., 2016; Chen et al., 2018; Child et al., 2019). Learning discrete features of target data among finitely many representations can ignore redundant information, and such a lossy compression can assist with training deep generative models on large-scale data. After compression, one can train another deep generative model, which is called a prior model, on the compressed representation instead of the raw data. This approach has achieved attractive results in various tasks, e.g., unconditional generation tasks (Razavi et al., 2019; Dhariwal et al., 2020; Esser et al., 2021b;a; Rombach et al., 2022), text-to-image generation (Ramesh et al., 2021; Gu et al., 2022; Lee et al., 2022a) and textually guided audio generation (Yang et al., 2022; Kreuk et al., 2022). Note that the compression performance of VQ limits the overall generation performance regardless of the performance of the prior model. VQ is usually achieved with the model vector quantized variational autoencoder (VQ-VAE) (van den Oord et al., 2017). In VQ-VAE, an input is first encoded and quantized with the code vectors, which extracts the discrete representation of the encoded feature. The discrete representation is then decoded to the data space to recover the original input. Subsequently, advanced studies incorporated the hierarchical structure into the discrete latent space to effectively achieve high-fidelity reconstruction. Razavi et al. (2019) initially extended VQ-VAE to a hierarchical model, which is called VQ-VAE-2. In this model, multi-resolution discrete latent representations are introduced to extract local and global information of the target data. As another type of hierarchical discrete representation, residual quantization (RQ), was proposed to reduce the gap between the feature maps before and after the quantization process (Zeghidour et al., 2021; Lee et al., 2022a). Despite its successes in many tasks, training variants of VQ-VAE is still challenging. It is known that VQVAE suffers from codebook collapse, where only few code vectors are used for the representation (Kaiser et al., 2018; Roy et al., 2018; Takida et al., 2022b). This inefficiency may deteriorate reconstruction accuracy, hence, limit its applications to downstream tasks. The extension with hierarchical latent representations suffers from the same issue. For example, Dhariwal et al. (2020) reported that it is generally difficult to push information to higher levels in VQ-VAE-2, i.e., codebook collapse often occurs there. Therefore, certain heuristic techniques such as the exponential moving average (EMA) update (Polyak & Juditsky, 1992) and codebook reset (Dhariwal et al., 2020) are usually implemented to mitigate these problems. Takida et al. (2022b) claimed that the issue is triggered because the training scheme of VQ-VAE does not follow the variational Bayes framework but relies on carefully designed heuristics. They proposed stochastically quantized VAE (SQ-VAE), with which the components of VQ-VAE, i.e., the encoder, decoder and code vectors, are trained in the variational Bayes framework with an SQ operator. The model was shown to improve reconstruction performance by preventing the collapse issue thanks to the *self-annealing* effect (Takida et al., 2022b), where the SQ process gradually tends to the deterministic one during training. We expect this has the potential to stabilize the training by mitigating this problem even in the hierarchical model, which may lead to improving reconstruction performance with more efficient codebook usage. We propose Hierarchically Quantized VAE (*HQ-VAE*), a general variational Bayesian model for learning hierarchical discrete latent representations. Figure 1 illustrates the overall architecture of HQ-VAE. The hierarchical structure in HQ-VAE consists of *bottom-up* and *top-down* path pair, which assists with capturing local and global information of data. We instantiate the generic HQ-VAE by introducing two types of *topdown* layers. These two layers formulate the hierarchical structures of VQ-VAE-2 and residual-quantized VAE (RQ-VAE) within the variational scheme, which we call *SQ-VAE-2* and *RSQ-VAE*, respectively. In other words, our framework can deal with the independently proposed extensions of VQ-VAE in a unified and solid way. HQ-VAE can be viewed as an extension of SQ-VAE with hierarchy, hence, shares similar favorable properties of SQ-VAE (e.g., the *self-annealing* effect). In this sense, HQ-VAE unifies the current well-known VQ models in the variational Bayes framework, which provides a novel training mechanism. We empirically show HQ-VAE improves upon conventional methods in the vision and audio domains. Furthermore, through the demonstration of the hybrid model of SQ-VAE-2 and RSQ-VAE (in Appendices B and C.3), we provide a tutorial for modeling hierarchical discrete representation from the design of hierarchical latent structures to the derivation of objective functions to learn the model. The demonstration shows the flexibility of our framework, which allows to model desired hierarchical discrete latent via the stack of *top-down* layers. This paper is the first attempt at variational Bayes on hierarchical discrete representation, and training hierarchical VAEs is generally known to be challenging even for continuous latent cases (Vahdat & Kautz, 2020; Child, 2021). Throughout this paper, the uppercase letters (P, Q) and the lowercase letters (p, q) denote the probability mass functions and probability density functions, respectively; calligraphy letters (P, Q) denote the joint probabilistic distributions of both continuous and discrete random variables; bold lowercase and uppercase letters (e.g., x and Y ) respectively denote vectors and matrices, and the ith column vector in Y is written as yi; [N] denotes a set of positive integers no more than N ∈ N. Finally, we use J and L for the objective functions of HQ-VAE and conventional ones, respectively. ## 2 Background We first revisit VQ-VAE and its extensions to hierarchical latent models. We then review SQ-VAE which serves as the foundation framework of HQ-VAE. ## 2.1 Vq-Vae To discretely represent observations x ∈ R D, a codebook B is introduced, which consists of finite trainable code vectors {bk} K k=1 (bk ∈ R db ). A discrete latent variable Z is constructed to be in the dz-tuple of B, i.e., Z ∈ Bdz, which is later decoded to generate data samples. To connect the observation and latent representaion, a deterministic encoder and decoder pair is introduced, where the encoder maps x to Z and the decoder recovers x from Z by a decoding function fθ : R db×dz → R D. For the encoder, an encoding function, denoted as Gϕ : R D → R db×dz, and a deterministic quantization operator are introduced. The encoding function first maps x to Zˆ ∈ R db×dz, then the quantization operator finds the nearest neighbor of zˆi for i ∈ [dz], i.e., zi = arg minbk ∥zˆi − bk∥ 2 2 . The trainable components (the encoder, decoder, and codebook) are learned by minimizing the objective $${\mathcal{L}}_{\mathrm{VQ-VAE}}=\lVert\mathbf{x}-\mathbf{f}_{\theta}(\mathbf{Z})\rVert_{2}^{2}+\beta\lVert{\hat{\mathbf{Z}}}-\mathrm{sg}[\mathbf{Z}]\rVert_{F}^{2},$$ $$(1)$$ F , (1) where sg[·] is the stop-gradient operator and β is a hyperparameter balancing the two terms. The codebook is updated by applying the EMA update to ∥sg[Zˆ] − Z∥ 2 F . VQ-VAE-2. To model both local and global information separately, VQ-VAE-2 adopts a hierarchical structure for vector quantization (Razavi et al., 2019). The model consists of multiple levels of latents so that top levels have global information while bottom levels are focused on local information, conditioned on the top levels. The training of the model follows the same scheme as the original VQ-VAE (e.g., stop-gradient, the EMA update, and deterministic quantization). RQ-VAE. RQ provides a finer approximation of Z by taking into account the information of quantization gaps (residuals) (Zeghidour et al., 2021; Lee et al., 2022a). With RQ, L code vectors are assigned to each vector zi (i ∈ [dz]), instead of increasing the codebook size K. To achieve multiple assignments, RQ repeatedly quantizes the target feature and computes quantization residuals, denoted as Rl. Namely, the following procedure is repeated L times starting with R0 = Zˆ: zl,i = arg minbk ∥rl−1,i − bk∥ 2 2 and Rl = Rl−1 −Zl. By repeating RQ, the discrete representation is expected to be refined in a coarse-to-fine manner. Finally, RQ discretely approximates the encoded variable as Zˆ ≈PL l=1 Zl, where the conventional VQ is regarded as a special case of RQ with L = 1. ## 2.2 Sq-Vae SQ-VAE (Takida et al., 2022b) also has deterministic encoding/decoding functions and a trainable codebook. However, unlike the deterministic quantization scheme of VQ and RQ, SQ-VAE designs an SQ procedure for the encoded features following the variational Bayes framework. More precisely, it defines a stochastic dequantization process ps 2 (z˜i|Z) = N (z˜i; zi, s2I), which converts a discrete variable ziinto a continuous one z˜i by adding Gaussian noise with a learnable variance s 2. By Bayes' rule, it associates with a reverse operation, i.e., SQ, which is given by Pˆs 2 (zi = bk|Z˜) ∝ exp − ∥z˜i−bk∥ 2 2 2s 2. Thanks to this variational framework, the degree of the stochasticity in the quantization scheme becomes adaptive. This allows SQVAE to benefit from the effect of *self-annealing*, where the SQ process gradually approaches the deterministic one as s 2 decreases. This generally improves the efficiency of codebook usage. SQ-VAE vs. dVAE. Sønderby et al. (2017) proposed discrete latent VAE with a codebook lookup. Ramesh et al. (2021) followed the same approach to train VAE equipped with a codebook for text-to-image generation and call the model discrete VAE (dVAE). In dVAE, stochastic posterior categorical distribution for which code vectors are assigned is directly modeled by the encoder as Q(zi = bk|x) ∝ exp(gϕ,k(x)), where gϕ : R D → R K. This modeling enables to encode an original sample x into a set of codes Z in a probabilistic way. However, such index-domain modeling cannot incorporate the codebook geometry explicitly into the posterior modeling. In contrast, VQ/SQ-based methods allow us to model the posterior distribution with vector operations such as VQ and RQ. Furthermore, as another benefit of VQ/SQ-VAE, it enables to evaluate the reconstruction errors coming from the discretization with the quantization errors (Dhariwal et al., 2020). In this paper, we restrict our scope to latent generative model involving vector quantization operators. ![3_image_0.png](3_image_0.png) Figure 1: (a) HQ-VAE consists of *bottom-up* and *top-down* paths. Red arrows are for approximated posterior. Kullback–Leibler divergence of posterior and prior (in the blue box) is evaluated for objective function. (b) First layer for *top-down* path. (c)-(d) We introduce two types of layers: *injected top-down* and residual top-down. HQ-VAE that consists only of the injected (residual) *top-down* layer is analogous to VQ-VAE-2 (RQ-VAE). ## 3 Hierarchically Quantized Vae In this section, we formulate the generic HQ-VAE model, which learns hierarchical discrete latent representation in the variational Bayes framework. It serves as a backbone of the instantiations of HQ-VAE presented in Section 4. To achieve hierarchical discrete representation of depth L, we first introduce L groups of discrete latent variables, which are denoted as Z1:L := {Zl} L l=1. For each l ∈ [L] we introduce a trainable codebook Bl:= {b l k } Kl k=1, consisting of Kl db-dimensional code vectors, i.e., b l k ∈ R dbfor k ∈ [Kl]. The variable Zlis represented as a dl-tuple of the code vectors in Bl; namely, Zl ∈ B dl l . Similarly to conventional VAEs, the latent variable of each group is assumed to follow a pre-defined prior mass function. We set the prior as an i.i.d. uniform distribution, defined as P(zl,i = bk) = 1/Kl for i ∈ [dl]. The probabilistic decoder is set as a normal distribution with a trainable isotropic covariance matrix as pθ(x|Z1:L) = N (x; fθ(Z1:L), σ2I) with a decoding function fθ : R db×d1 *⊕ · · · ⊕* R db×dL → R D. It decodes latent variables sampled from the prior to generate instances. Here, the exact evaluation of Pθ(Z1:L|x) is required to train the generative model with the maximum likelihood. However, it is intractable in practice. Thus, we introduce an approximated posterior on Z1:L given x and derive the evidence lower bound (ELBO) for maximization instead. Inspired by hierarchical Gaussian VAEs (Sønderby et al., 2016; Vahdat & Kautz, 2020; Child, 2021), HQVAE consists of *bottom-up* and *top-down* paths, as shown in Figure 1a. The approximated posterior has the top-down structure (Z1 → Z2 *→ · · · →* ZL). For this process, the *bottom-up* path first generates features from x as Hrϕ (x) at different resolutions (r ∈ [R]). In the *top-down* path, the latent variable in each group is processed in the order from Z1 to ZL by taking Hrϕ (x) into account. To achieve this, two features including that extracted by the *bottom-up* path (Hrϕ (x)) and that processed at higher layers in the *top-down* path (Z1:l−1) can be fed to each layer and processed to estimate Zl corresponding to x, which we denote it as Zˆl = Glϕ (Hrϕ (x), Z1:l−1). The lth group Zl has a unique resolution index r, and we denote it as r(l). For simplicity, we ignore Hrϕ in Zˆl and write Zˆl = Glϕ (x, Z1:l−1). The design of the encoding function Glϕ brings us to different modeling of the approximated posterior. We leave the detailed discussion in the next section. It should be noted that the outputs of Glϕ lie in R dz×db, whereas the support of Zlis restricted to B dz l. To connect these continuous and discrete spaces, we introduce a pair of stochastic dequantization and quantization processes, as in Takida et al. (2022b). We first define the stochastic dequantization process for each group as $$p_{s_{l}^{2}}(\tilde{z}_{l,i}|Z_{l})={\mathcal{N}}(\tilde{z}_{l,i};z_{l,i},s_{l}^{2}I),$$ l I), (2) which is equivalent to adding Gaussian noise to the discrete variable the covariance of which, s 2 l I, depends on the index of the group l. We hereafter denote the set of Z˜l as Z˜1:L, i.e., Z˜1:L := {Z˜l} L l=1. Next, we can derive a stochastic quantization process as the inverse operator of the above stochastic dequantization: $$\hat{P}_{s_{l}^{2}}(\mathbf{z}_{l,i}=\mathbf{b}_{k}|\hat{\mathbf{Z}}_{l})\propto\exp\left(-\frac{\|\tilde{\mathbf{z}}_{l,i}-\mathbf{b}_{k}\|_{2}^{2}}{2s_{l}^{2}}\right).\tag{3}$$ $$\left(2\right)$$ By using these stochastic operators, we can connect Zˆ1:L and Z1:L via Z˜1:L in a stochastic manner, which leads to the entire encoding process: $$\mathcal{Q}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L}|\mathbf{x})=\prod_{l=1}^{L}\prod_{i=1}^{d_{l}}p_{s_{l}^{2}}(\hat{\mathbf{z}}_{l,i}|\mathbf{G}_{\mathbf{\phi}}^{l}(\mathbf{x},\mathbf{Z}_{1:l-1}))\hat{P}_{s_{l}^{2}}(\mathbf{z}_{l,i}|\hat{\mathbf{Z}}_{l}).\tag{4}$$ The prior distribution on Z1:L and Z˜1:L is defined using the stochastic dequantization process as $$\mathcal{P}(\mathbf{Z}_{1:L},\tilde{\mathbf{Z}}_{1:L})=\prod_{l=1}^{L}\prod_{i=1}^{d_{i}}P(\mathbf{z}_{l,i})p_{s_{l}^{2}}(\tilde{\mathbf{z}}_{i}|\mathbf{Z}_{l}),\tag{5}$$ where the latent representations are generated in the order from l = 1 to L. The generative process from the prior does not use Z˜ but Z as x = fθ(Z). ## 4 Instantiations Of Hq-Vae Now that we have established the overall framework of HQ-VAE, we consider two special cases of HQ-VAE by designing two types of *top-down* layers: *injected top-down* and *residual top-down*. We derive two instances of HQ-VAE that consists only of the *injected top-down* layer or the *residual top-down* layer, which we call SQ-VAE-2 and RSQ-VAE, respectively due to their analogue to VQ-VAE-2 and RQ-VAE. These two layers can be combinatorially used to define a hybrid model of SQ-VAE-2 and RSQ-VAE, which is explained in Appendix B. Note that the prior distribution (Equation (5)) is identical across all instantiations. ## 4.1 First Top-Down Layer We introduce the first *top-down* layer, which is put at the top of layers in HQ-VAE. As illustrated in Figure 1b, this layer takes H1ϕ (x) as an input and processes it with SQ. HQ-VAE constructed only with this layer reduces to SQ-VAE. ## 4.2 Injected Top-Down Layer We design an *injected top-down* layer for the approximated posterior as in Figure 1c. This layer infuses the variable processed in the *top-down* path with the higher resolution information from the *bottom-up* layer. The lth layer takes the feature from the *bottom-up* path (H r(l) ϕ(x)) and the variable from the higher groups in the *top-down* path as inputs. In the layer, the variable from higher layers is first upsampled to be aligned with H r(l) ϕ(x). These two variables are then concatenated and processed with an encoding block. The above overall process corresponds to Zˆl = Glϕ (x, Z1:l−1) in Section 3. The encoded variable Zˆlis then quantized into Zl with the codebook Bl through the process described in Equation (3)1. Finally, the sum of the variable from the top layers and quantized variable Zlis passed through to the next layer. ## 4.2.1 Sq-Vae-2 We especially instantiate the HQ-VAE only with the *injected top-down* layers in addition to the first layer, which reduces to SQ-VAE-2. Note that since the index of resolutions and layers have a one-toone correspondence in this structure, r(l) = l and L = R. As in usual VAEs, we evaluate the ELBO as log pθ(x) *≥ −J*SQ-VAE-2(x; θ, ϕ, s 2, B), where s 2:= {s 2 l } L l=1, B := (B1, *· · ·* , BL) and $$\mathcal{J}_{\text{SQ-VAE},2}(\mathbf{x};\mathbf{\theta},\mathbf{\phi},\mathbf{s}^{2},\mathcal{B})=\mathbb{E}_{\mathcal{Q}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L}|\mathbf{x})}\left[-\log p_{\mathbf{\theta}}(\mathbf{x}|\mathbf{Z}_{1:L})+\log\frac{\mathcal{Q}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L}|\mathbf{x})}{\mathcal{P}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L})}\right]\tag{6}$$ Hereafter, we omit the arguments of objective functions for simplicity. By decomposing Q and P and substituting parameterizations for the probabilistic parts, we have $$\mathcal{J}_{\text{Sq-VAE-2}}=\frac{D}{2}\log\sigma^{2}+\mathbb{E}_{\text{Q}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L})}\left[\frac{\|\mathbf{x}-f_{\theta}(\mathbf{Z}_{1:L})\|_{2}^{2}}{2\sigma^{2}}+\sum_{l=1}^{L}\left(\frac{\|\hat{\mathbf{Z}}_{l}-\mathbf{Z}_{l}\|_{F}^{2}}{2s_{l}^{2}}-H(\hat{P}_{q_{l}^{l}}(\mathbf{Z}_{l}|\hat{\mathbf{Z}}_{1}))\right)\right],\tag{7}$$ where H(·) indicates the entropy of a probability mass function and constant terms are omitted. The derivation of Equation (7) is given in Appendix A. The objective function (7) consists of the reconstruction term and the regularization terms for Z1:L and Z˜1:L. The expectation w.r.t. the probability mass function Pˆs 2 l (zl,i = bk|Z˜l) can be approximated with the corresponding Gumbel-softmax distribution (Maddison et al., 2017; Jang et al., 2017) in a reparameterizable manner. 1We empirically found setting Z˜l to Zˆlinstead of sampling Z˜lfrom ps2 (z˜l,i|Zˆl) leads to better performance (as reported in Takida et al. (2022b); therefore, we follow the procedure in practice. ## 4.2.2 Sq-Vae-2 Vs. Vq-Vae-2 The architecture of VQ-VAE-2 is composed in a similar fashion to that of SQ-VAE-2 but is trained by the following objective function: $$\mathcal{L}_{\text{VQ-VAE},2}=\|\mathbf{x}-\mathbf{f}_{\mathbf{\theta}}(\mathbf{Z}_{1:L})\|_{2}^{2}+\beta\sum_{l=1}^{L}\|\mathbf{G}_{\mathbf{\phi}}^{l}(\mathbf{x},\mathbf{Z}_{1:l-1})-\text{sg}[\mathbf{Z}_{l}]\|_{F}^{2},\tag{8}$$ where the codebooks are updated with the EMA update in the same manner as the original VQ-VAE. The objective function (8), except for the stop gradient operator and EMA update, can be obtained by setting both s 2 l and σ 2to infinity while keeping the ratio of the variances as s 2 l = β −1σ 2for l ∈ [L] in Equation (7). In contrast, since all the parameters but D and L in Equation (7) are optimized, the weight of each term is automatically adjusted during training. Furthermore, SQ-VAE-2 is expected to benefit from the *self-annealing* effect as in the original SQ-VAE (see Section 5.3). ## 4.3 Residual Top-Down Layer In this subsection, we set R = 1 for the simplicity of the demonstration purpose (general case of R is in Appendix B). This means the *bottom-up* and *top-down* paths are connected only at the top layer. We design a *residual top-down* layer for the approximated posterior as in Figure 1d. This layer is to better approximate the target feature with additional assignments of code vectors. By stacking this procedure L times, the feature is approximated as $$H_{\phi}(\mathbf{x})\approx\sum_{l=1}^{L}\mathbf{Z}_{l}.\tag{1}$$ Therefore, in this layer, only the information from the higher layers but from the *bottom-up* path is fed to the layer. It is desired that Pl+1 l ′=1 Zl ′ approximate the feature better than Pll ′=1 Zl ′ . On this basis, we let the following residual pass through to the next layer: $$\mathbf{G}_{\phi}^{l}(\mathbf{x},\mathbf{Z}_{1:l-1})=\mathbf{H}_{\phi}(\mathbf{x})-\sum_{l=1}^{l-1}\mathbf{Z}_{l^{\prime}}.$$ $$({\mathfrak{g}})$$ $$(10)$$ ## 4.3.1 Rsq-Vae We especially instantiate the HQ-VAE only with the *residual top-down* layers in addition to the first layer, which reduces to RSQ-VAE. At this point, by following Equation (6) and omitting constant terms, we can derive the same form of the ELBO objective as Equation (7): $$\mathcal{J}_{\text{Rig}_{\text{QVAE}}}^{\text{Rig}_{\text{VAE}}}=\frac{D}{2}\log\sigma^{2}+\mathbb{E}_{\text{Q}(\mathbf{Z}_{1:L},\mathbf{Z}_{1:L}|\mathbf{x})}\left[\frac{\|\mathbf{x}-\mathbf{f}_{\theta}(\mathbf{Z}_{1:L})\|_{2}^{2}}{2\sigma^{2}}+\sum_{l=1}^{L}\left(\frac{\|\hat{\mathbf{Z}}_{l}-\mathbf{Z}_{l}\|_{F}^{2}}{2s_{l}^{2}}-H(\hat{P}_{s_{l}^{2}}(\mathbf{Z}_{l}|\hat{\mathbf{Z}}_{l}))\right)\right],\tag{11}$$ where the numerator of the third term corresponds to the evaluation of the residuals Hϕ(x) −Pll ′=1 Zl ′ for all l ∈ [L] with the dequantization process. However, we empirically found training the model with the ELBO objective was often unstable. We suspect this is because the objective regularizes Z1:L to make Pll ′=1 Zl ′ close to the feature for all l ∈ [L]. We hypothesize that this regularization is too strong to regularize the latent representation. To address the issue, we consider conditional distributions not on (Z1:L, Z˜1:L) but on (Z1:L, Z˜), where Z˜ =PL l=1 Z˜l. From the reproductive property of Gaussian distribution, the continuous latent variable converted from Z via the stochastic dequantization processes, Z˜ =PL l=1 Z˜l, follows the following Gaussian distribution: $$p_{s^{2}}(\tilde{\mathbf{z}}_{i}|\mathbf{Z})={\mathcal{N}}\left(\tilde{\mathbf{z}}_{i};\sum_{l=1}^{L}\mathbf{z}_{l,i},\left(\sum_{l=1}^{L}s_{l}^{2}\right)\mathbf{I}\right).\qed$$ $$\left(12\right)$$ 7 Table 1: Evaluation on ImageNet (256×256) and FFHQ (1024×1024). RMSE (×102), LPIPS, and SSIM are evaluated using test set. Following Razavi et al. (2019), codebook capacity for discrete latent space is set to (dl, Kl) = (322, 512),(642, 512) and (dl, Kl) = (322, 512),(642, 512),(1282, 512) for ImageNet and FFHQ, respectively. We also show codebook perplexity at each layer. | Dataset | Model | Reconstruction | Codebook perplexity | | | | | |-----------|---------------|------------------|-----------------------|---------------|---------------|---------------|--------------| | | RMSE ↓ | LPIPS ↓ | SSIM ↑ | exp(H(Q(Z1))) | exp(H(Q(Z2))) | exp(H(Q(Z3))) | | | ImageNet | VQ-VAE-2 | 6.071 ± 0.006 | 0.265 ± 0.012 | 0.751 ± 0.000 | 106.8 ± 0.8 | 288.8 ± 1.4 | | | SQ-VAE-2 | 4.603 ± 0.006 | 0.096 ± 0.000 | 0.855 ± 0.006 | 406.2 ± 0.9 | 355.5 ± 1.7 | | | | FFHQ | VQ-VAE-2 | 4.866 ± 0.291 | 0.323 ± 0.012 | 0.814 ± 0.003 | 24.6 ± 10.7 | 41.3 ± 14.0 | 310.1 ± 29.6 | | SQ-VAE-2 | 2.118 ± 0.013 | 0.166 ± 0.002 | 0.909 ± 0.001 | 125.8 ± 9.0 | 398.7 ± 14.1 | 441.3 ± 7.9 | | We instead use the following prior distribution to derive the ELBO objective: $${\mathcal{P}}(\mathbf{Z}_{1:L},{\tilde{Z}})=\prod_{i=1}^{d_{z}}\left(\prod_{l=1}^{L}P(\mathbf{z}_{l,i})\right)p_{s^{2}}({\tilde{\mathbf{z}}}_{i}|\mathbf{Z}).$$ ps2 (z˜i|Z). (13) We now derive the ELBO using the newly established prior and posterior starting from $$\log p_{\theta}(\mathbf{x})\geq-{\mathcal{J}}_{\mathrm{RSQ-VAE}}$$ $$\begin{array}{l}{{0)\geq-{\mathcal{J}}_{\mathrm{RSQ-VAE}}}}\\ {{\quad=-\mathbb{E}_{Q(\mathbf{Z}_{1:L},{\hat{\mathbf{Z}}}|\mathbf{x})}\left[-\log p_{\theta}(\mathbf{x}|\mathbf{Z}_{1:L})+\log\frac{Q(\mathbf{Z}_{1:L},{\hat{\mathbf{Z}}}|\mathbf{x})}{{\mathcal{P}}(\mathbf{Z}_{1:L},{\hat{\mathbf{Z}}})}\right].}}\end{array}$$ The above objective is further simplified as $$\mathcal{J}_{\text{RSQ-VAE}}=\frac{D}{2}\log\sigma^{2}+\mathbb{E}_{\text{Q}(\mathbf{Z}_{1:L},\hat{\mathbf{Z}}_{1:L})\mathbf{x}}\left[\frac{\|\mathbf{x}-\mathbf{f}_{\theta}(\mathbf{Z}_{1:L})\|_{2}^{2}}{2\sigma^{2}}+\frac{\|\hat{\mathbf{Z}}-\mathbf{Z}\|_{F}^{2}}{2\sum_{l=1}^{L}z_{l}^{2}}-\sum_{l=1}^{L}H(\hat{P}_{\sigma^{l}}(\mathbf{Z}_{l}|\hat{\mathbf{Z}}_{l}))\right]\tag{15}$$ $$(13)$$ $$(14)$$ where the third term is different from that in Equation (11) and its numerator evaluates only the overall quantization error Hϕ(x) −PL l=1 Zl with the dequantization process. ## 4.3.2 Rsq-Vae Vs. Rq-Vae RQ-VAE and RSQ-VAE both learn discrete representation in a coarse-to-fine manner, but RQ-VAE adopts a deterministic RQ scheme to achieve Equation (9), where RQ-VAE is trained with the following objective function: $$\mathcal{L}_{\text{RQ-VAE}}=\|\mathbf{x}-\mathbf{f}_{\mathbf{\theta}}(\mathbf{Z}_{1:L})\|_{2}^{2}+\beta\sum_{l=1}^{L}\left\|\mathbf{H}_{\mathbf{\phi}}(\mathbf{x})-\text{sg}\left[\sum_{l^{\prime}=1}^{l}\mathbf{Z}_{l^{\prime}}\right]\right\|_{F}^{2},\tag{16}$$ where the codebooks are updated with the EMA update in the same manner as VQ-VAE. The second term of Equation (16) resembles the third term of Equation (11), which strongly enforces certain degree of reconstruction even only with partial information from the higher layers. RQ-VAE is beneficial from such a regularization term, which leads to stable training. However, in RSQ-VAE, this regularization deteriorates the reconstruction performance. Instead, we use Equation (15) as the objective, which regularizes the latent representation by taking into account only accumulated information from all layers. Remark. HQ-VAE has favorable properties similar to SQ-VAE. The training scheme does not require hyperparameters except for a temperature parameter of Gumbel-softmax approximation (see Equations (7) and (15)). Furthermore, the derived models can benefit from the *self-annealing effect* as in SQ-VAE, which is empirically shown in Section 5.3. ![8_image_0.png](8_image_0.png) Figure 2: Impact of codebook capacity on reconstruction is investigated on (a) CIFAR10 and (b) CelebAHQ. Two and three layers are tested on CIFAR10 and CelebA-HQ, respectively. ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) RMSELPIPSSSIM RMSELPIPSSSIM Figure 3: Impact of codebook capacity on reconstruction is investigated on (a) CIFAR10 and (b) CelebAHQ. (c) Codebook perplexity at each layer is plotted, wheremodels with 32 layers are trained on CelebA-HQ and all layers share same codebook. ## 5 Experiments We comprehensively examine SQ-VAE-2 and RSQ-VAE and visualize the effects of their individual *top-down* paths. In Secs. 5.1 and 5.2, we comprehensively compare SQ-VAE-2 and RSQ-VAE with VQ-VAE-2 and RQVAE, respectively to show our framework improves reconstruction performance against the baselines. We basically conduct the comparison with various latent capacities to evaluate our methods in a rate–distortion (RD) sense (Alemi et al., 2017). In addition, we show that RSQ-VAE trained with a perceptual loss (Johnson et al., 2016) is competitive with the state-of-the-art model based on RQ-VAE. Furthermore, we test HQ-VAE on an audio dataset to show that it is applicable to a different modality. In Section 5.3, we investigate the characteristics of the *injected top-down* and *residual top-down* layers with visualization (Section 5.3). Unless otherwise noted, we use the same network architecture in all models and set the codebook dimension to db = 64. The experimental details are given in Appendix C. ## 5.1 Sq-Vae-2 Vs. Vq-Vae-2 We compare our SQ-VAE-2 with VQ-VAE-2 from the aspects of reconstruction accuracy and codebook utilization. We first investigate their performance on CIFAR10 (Krizhevsky et al., 2009) and CelebA-HQ (256×256) in various codebook settings: the configurations for the hierarchical structure and numbers of code vectors (Kl). We evaluate the reconstruction accuracy in terms of a Euclidean metric and two perceptual metrics: the root mean squared error (RMSE), structure similarity index (SSIM) (Wang et al., 2004), and learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018). As shown in Figure 2, SQ-VAE-2 achieves better reconstruction accuracy in all cases. The difference of the performance between the two models is noticeable when the codebook size is small. Comparison on large-scale datasets. Next, we demonstrate that SQ-VAE-2 outperforms VQ-VAE-2 on ImageNet (256×256) (Deng et al., 2009) and FFHQ (1024×1024) (Karras et al., 2019) in the same latent settings as in Razavi et al. (2019). As shown in Table 1, SQ-VAE-2 achieves better reconstruction performance in terms of RMSE, LPIPS, and SSIM than VQ-VAE-2, which is a similar tendency as in the comparison on CIFAR10 and CelebA-HQ. Furthermore, we measure codebook utilization per layer by using the perplexity of latent variables. The codebook perplexity is defined as exp(H(Q(Zl))), where Q(Zl) is a marginalized distribution of Equation (4) with x ∼ pd(x). The perplexity ranges from 1 to the number of code vectors (Kl) by its definition. SQ-VAE-2 achieves higher codebook perplexities than VQ-VAE-2 at all the layers, whereas the higher layers are not effectively used in VQ-VAE-2. In particular, the perplexity values at the top layer in the case of VQ-VAE-2 is extremely low, which is the sign of layer collapse. ## 5.2 Rsq-Vae Vs. Rq-Vae We compare our RSQ-VAE with RQ-VAE using the same metrics as in Section 5.1. As codebook reset is used in the original study of RQ-VAE (Zeghidour et al., 2021; Lee et al., 2022a) to prevent codebook collapse, we add RQ-VAE with this technique to the baselines. We do not apply it to RSQ-VAE because it is not explainable in the variational Bayes framework. In addition, Lee et al. (2022a) proposed to share codebook for all the layers, i.e., Bl = B for l ∈ [L] to enhance the utility of the codes. We test both RSQ-VAE and RQ-VAE with and without the codebook share. We first investigate their performances on CIFAR10 and CelebA-HQ (256×256) in various settings: the number of times of quantization step (l) and number of code vectors (Kl). As shown in Figures 3a and 3b, RSQ-VAE achieves better reconstruction accuracy in terms of RMSE, SSIM, and LPIPS than the baselines although codebook reset overall enhances the performance of RQ-VAE. When the codebook is shared for all layers, the performance difference is remarkable, with a noticeable difference in how codes are used. Interestingly, more codes are assigned in the bottom layers in RSQ-VAE, unlike RQ-VAEs, as shown in Figure 3c. RSQ-VAE captures the coarse information with a relatively small number of codes, and refines the reconstruction with larger bits at the bottom layers. Improvement in perceptual quality. Next, we use the same network architecture as that used in Lee et al. (2022a) and set the codebook dimension to db = 256 for fair comparison with their RQ-VAE. We train RSQ-VAE on FFHQ with an LPIPS loss (Zhang et al., 2018) (see Appendix C.4) and compare it with their Table 2: Evaluation on UrbanSound8K. RMSE is evaluated using test set. Network architecture follows Liu et al. (2021). Codebook size is set to Kl = 8. ![10_image_0.png](10_image_0.png) | Model | Number of Layers | RMSE ↓ | |---------|--------------------|---------------| | RQ-VAE | 4 | 0.506 ± 0.018 | | | 8 | 0.497 ± 0.057 | | RSQ-VAE | 4 | 0.427 ± 0.014 | | | 8 | 0.314 ± 0.013 | Figure 4: Violin plots of MUSHRA listening test results on UrbanSound8K test set in the cases of (a) 4 layers and (b) 8 layers. The white dots indicate the median scores, and the tops and bottoms of thick vertical lines indicate the first and third quartiles, respectively. RQ-VAE in terms of Fréchet Inception Distance (FID) (Heusel et al., 2017). The reconstructed FID (rFID) of RSQ-VAE is 8.47, whereas that of their RQ-VAE is 7.29. Note that we do not use an adversarial loss for training RSQ-VAE but their RQ-VAE was trained with the combination of an LPIPS loss and adversarial loss. This means that our RSQ-VAE achieves competitive performance without an adversarial loss. We leave combining an adversarial loss with HQ-VAE for future work. Validation on an audio dataset. To validate the effectiveness of RSQ-VAE in the audio domain, we compare it with RQ-VAE by the reconstruction of the normalized log-Mel spectrogram using an environmental sound dataset: UrbanSound8K (Salamon et al., 2014). We follow the same network architecture used in an audio generation paper (Liu et al., 2021), which deploys multi-scale convolutional layers with varied kernel sizes to capture the local and global features of audio signals in the time-frequency domain (Xian et al., 2021). Codebook size is set to Kl = 8. Number of layers is set to 4 and 8, and all the layers share the same codebook. We run each trial with five different random seeds and obtain the average and standard deviation of RMSEs. As shown in Table 2, RSQ-VAE achieves better average RMSEs than RQ-VAE across different numbers of layers on the audio dataset. To evaluate the perceptual quality of our results, we also perform a subjective listening test using the multiple stimulus hidden reference anchor (MUSHRA) protocol (Series, 2014) on an audio web evaluation tool (Schoeffler et al., 2018). We randomly select an audio signal from the UrbanSound8K test set for the practicing part. In the test part, we extract ten samples by randomly selecting an audio signal per class from the test set. We prepare four samples for each signal: reconstructed samples from RQ-VAE and RSQ-VAE, a white noise signal as a hidden anchor, and an original sample as a hidden reference. Because RQ-VAE and RSQ-VAE are applied for the normalized log-Mel spectrograms, we use a vocoder in the audio generation paper (Liu et al., 2021; Kong et al., 2020) to convert the reconstructed spectrograms to the waveform samples. As the upper bound quality of the reconstructed waveform is limited to the vocoder result of the original spectrogram, we use the vocoder result for the reference. After listening to the reference, assessors are asked to rate 0 to 100 the four different samples according to similarity relative to the reference. After a post-screening of assessors (Series, 2014), there are a total of 10 assessors in the test. We show the violin plot results of the listening test in Figure 4. The figure shows that RSQ-VAE achieves better median listening scores than RQ-VAE. Especially when the number of layers is 8, the plots of RSQ-VAE show better scores with a large margin from the those of RQ-VAE. ![11_image_0.png](11_image_0.png) Figure 5: Reconstructed samples with partial layers in (a) SQ-VAE-2 and (c) RSQ-VAE. Top row shows reconstructed images while bottom one shows added components at each layer. For l = 1, 2, 3 latent capacity is set to (dl, Kl) = (162, 256),(322, 16),(642, 4) and (dl, Kl) = (322, 4),(322, 16),(322, 256), respectively. Notice that the numbers of bits of these models are equal at each layer. For reasonable visualization, we apply *progressive coding* to SQ-VAE-2, which induces progressive compression (see Appendix C.5). (b) and (d) We plot variance parameter s 2 l normalized by initial value s 2 l,0 and average entropy of quantization process (H(Pˆs 2 l (zl,i|Z˜l))) at each layer. ![11_image_1.png](11_image_1.png) Figure 6: Comparison of SQ-VAE-2 and RSQ-VAE under three latent capacity cases. As for x-axes, we put (K1, K2, K3) for SQ-VAE-2 and RSQ-VAE in blue and red, respectively. Note that (d1, d2, d3) = (16, 32, 64) and (d1, d2, d3) = (32, 32, 32) for SQ-VAE-2 and RSQ-VAE as in Figure 5, and each x-axis value indicate the same latent capacity. SQ-VAE-2 outperforms RSQ-VAE in relatively large latent capacity settings. In contrast, RSQ-VAE achieves better reconstruction performance for the case of higher compression rate. ## 5.3 Empirical Study Of Top-Down Layers In this section, we focus on visualizing the obtained discrete representations instead of comparing their reconstruction performance. This will provide insights into the characteristics of the *top-down* layers. We train both, SQ-VAE-2 and RSQ-VAE, with three layers on CelebA-HQ (Karras et al., 2018), respectively. Figure 5 shows the progressively reconstructed images. For demonstration purpose, we incorporate *progressive coding* (Shu & Ermon, 2022) to SQ-VAE-2 to make the reconstructed images only with the top layers interpretable. We note that *progressive coding* is not applied other than the illustration of Figure 5. Both, SQ-VAE-2 and RSQ-VAE, share the similarity that the higher layers generate the coarse part of the image while the lower layers complement them with details. However, comparing the two, we observe that in SQ-VAE-2, the additionally generated components (bottom row in Figure 5a) in each layer have different resolutions. We conjecture that the different layer-dependent resolutions H r(l) ϕ(x), which are injected into the *top-down* layers, contain different information. This implies that we may obtain more interpretable discrete representations if we can explicitly manipulate the extracted features in the *bottom-up* path to provide H r(l) ϕ(x) giving them more semantic meaning (e.g., texture or color). In contrast, RSQ-VAE seems to obtain a different discrete representation which resembles more a decomposition. This might be due to its approximated expansion in Equation (9). Moreover, we can observe from Figures 5b and 5d that the top-down layers also benefit from the *self-annealing* effect. In Appendix C.3, we explore combining the two layers to form a hybrid model. We observe that individual layers in a hybrid model produce similar effects as if they would be used alone. That is, outputs from *injected* top-down layers have better resolution and *residual top-down* layers refine upon certain decomposition. Since these two layers enjoy distinct refining mechanisms, a hybrid model may bring a more flexible approximation to the posterior distribution. Lastly, we compare reconstruction performances of SQ-VAE-2 and RSQ-VAE under the same architectures for the *bottom-up* and *top-down* paths. We follow the same experimental condition as that for visualization of Figure 5. We compare them in three different compression rates by changing the number of code vectors Kl. Interestingly, we can see in Figure 6 that SQ-VAE-2 achieve better reconstruction performance in the case of the lower compression rate, whereas RSQ-VAE reconstructs the original images better than SQ-VAE-2 in the case of higher compression rate. ## 6 **Discussion** 6.1 **Conclusion** We propose HQ-VAE, a general VAE approach that learns hierarchical discrete representations. HQ-VAE is formulated within the variational Bayes framework as a stochastic quantization technique, which (1) greatly reduces the number of hyperparameters to be tuned (only the one from the Gumbel-softmax trick), and (2) enhances codebook usage without any heuristics thanks to the *self-annealing* effect. We instantiate the general HQ-VAE with two types of posterior approximators for the discrete latent representations, which lead to SQ-VAE-2 and RSQ-VAE. These two novel variants share a similar design of information passing as VQ-VAE-2 and RQ-VAE, respectively, but their latent representations are quantized stochastically. Our experiments show that SQ-VAE-2 and RSQ-VAE outperform their individual baselines with better reconstruction and more efficient codebook usages in the image as well as audio domain. ## 6.2 **Concluding Remarks** First, VQ-VAE-2 and RQ-VAE can be basically replaced with SQ-VAE-2 and RSQ-VAE for improvement in many previous work in terms of reconstruction accuracy and efficient codebook usage. SQ-VAE-2 and RSQVAE achieve better RD curve than the baselines, which means, HQ-VAEs achieve (i) better reconstruction performance under the same latent capacities, and (ii) comparable reconstruction performance with the higher compression rate, compared with the baselines. Furthermore, our approach eliminates the need for the repetition of tuning many hyper-parameters and the introduction of ad-hoc techniques. Both SQ-VAE-2 and RSQ-VAE are applicable to generative modeling with the additional training of prior models as in a bunch of previous work. Generally, compression models with better RD curves are more feasible for the prior models (Rombach et al., 2022), hence the replacement of VQ-VAE-2 and RQ-VAE with SQ-VAE-2 and RSQ-VAE is beneficial even for generative modeling. Recently, RQ-VAE is more often used for the generation tasks than VQ-VAE-2 due to the severe instability issue of VQ-VAE-2, i.e., layer collapse (Dhariwal et al., 2020). However, we believe that the proposal of SQ-VAE-2 has a potential to advocate the use of such a hierarchical model for generation tasks since it mitigates the issue greatly. SQ-VAE-2 and RSQ-VAE have their own unique advantages as follows. SQ-VAE-2 was shown to be able to learn multi-resolution discrete representation thanks to the design of *bottom-up* path with the pooling operators (see Figure 5a). The results implied that further semantic disentanglement of the discrete representation might be possible by adopting specific inductive architectural components in the *bottom-up* path, which is an interesting future direction. As a second note, in the case of lower compression rates, SQ-VAE-2 outperforms RSQ-VAE in reconstruction performance (see Section 5.3). It hints at that SQ-VAE-2 might be appropriate especially for high-fidelity generation tasks when the prior model could be larger as in Razavi et al. (2019). In contrast, one of the strengths of RSQ-VAE is that one can easily accommodate different compression rates only by changing the number of layers during the inference (without changing or retraining the model). Furthermore, in the case of higher compression rates, RSQ-VAE outperforms SQ-VAE-2 in reconstruction performance (see Section 5.3). The properties are suitable for the application to neural codec as in Zeghidour et al. (2021); Défossez et al. (2022). ## 6.3 **Future Work** As future work, we will incorporate adversarial training into HQ-VAE which is expected to further enhance the perceptual quality of the reconstructed data. As discussed in Section 5.3, we will also explore the feasibility of explicitly manipulating the injected information into the *top-down* layers to obtain discrete representations with semantic meaning. At last, we explore one of the applications to image generation by training a prior on extracted discrete representations with HQ-VAE in Section D. Nevertheless, this work is focusing on providing a unified variational Bayesian framework of hierarchical quantization. Its downstream applications such as content generation or neural codec will also be considered as future work. ## References Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc V Gool. Soft-to-hard vector quantization for end-to-end learning compressible representations. In Proc. Advances in Neural Information Processing Systems (NeurIPS), 2017. Alexander A Alemi, Ben Poole, Ian Fischer, Joshua V Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken ELBO. *arXiv preprint arXiv:1711.00464*, 2017. Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. In *Proc. Advances in Neural Information Processing* Systems (NeurIPS), volume 34, pp. 17981–17993, 2021. Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. Pixelsnail: An improved autoregressive generative model. In *Proc. International Conference on Machine Learning (ICML)*, pp. 864–872. PMLR, 2018. Rewon Child. Very deep VAEs generalize autoregressive models and can outperform them on images. In Proc. International Conference on Learning Representation (ICLR), 2021. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. *arXiv preprint arXiv:1904.10509*, 2019. Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. arXiv preprint arXiv:2210.13438, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255. Ieee, 2009. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Proc. Advances in Neural Information Processing Systems (NeurIPS), volume 34, pp. 8780–8794, 2021. Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A generative model for music. *arXiv preprint arXiv:2005.00341*, 2020. Patrick Esser, Robin Rombach, Andreas Blattmann, and Bjorn Ommer. Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. In *Proc. Advances in Neural Information* Processing Systems (NeurIPS), volume 34, pp. 3518–3532, 2021a. Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In *Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 12873–12883, 2021b. Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10696–10706, 2022. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. Advances in Neural Information Processing Systems (NeurIPS), pp. 6626–6637, 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proc. Advances in Neural Information Processing Systems (NeurIPS), pp. 6840–6851, 2020. Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. In Proc. Advances in Neural Information Processing Systems (NeurIPS), volume 34, pp. 12454–12465, 2021. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In *Proc.* International Conference on Learning Representation (ICLR), 2017. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and superresolution. In *Proc. European Conference on Computer Vision (ECCV)*, pp. 694–711, 2016. Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. Fast decoding in sequence models using discrete latent variables. In Proc. International Conference on Machine Learning (ICML), pp. 2390–2399, 2018. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In *Proc. International Conference on Learning Representation (ICLR)*, 2018. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 4401–4410, 2019. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. *Advances in Neural Information Processing Systems*, 33:17022–17033, 2020. Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11523–11532, 2022a. Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Draft-and-revise: Effective image generation with contextual rq-transformer. In *Proc. Advances in Neural Information Processing Systems* (NeurIPS), 2022b. Xubo Liu, Turab Iqbal, Jinzheng Zhao, Qiushi Huang, Mark D Plumbley, and Wenwu Wang. Conditional sound generation using neural discrete time-frequency representation learning. In IEEE Int. Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6, 2021. Chris J Maddison, Andriy Mnih, and Yee Why Teh. The concrete distribution: A continuous relaxation of discrete random variables. In *Proc. International Conference on Learning Representation (ICLR)*, 2017. Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *Proc. International Conference on Machine Learning* (ICML), pp. 8821–8831, 2021. Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, pp. 14866–14876, 2019. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684–10695, 2022. Aurko Roy, Ashish Vaswani, Arvind Neelakantan, and Niki Parmar. Theory and experiments on vector quantized autoencoders. *arXiv preprint arXiv:1805.11063*, 2018. Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. A dataset and taxonomy for urban sound research. In *ACM Int. Conf. on Multimedia (ACM MM)*, pp. 1041–1044, 2014. Michael Schoeffler, Sarah Bartoschek, Fabian-Robert Stöter, Marlene Roess, Susanne Westphal, Bernd Edler, and Jürgen Herre. webmushra—a comprehensive framework for web-based listening tests. *Journal of Open* Research Software, 6(1), 2018. B Series. Method for the subjective assessment of intermediate quality level of audio systems. International Telecommunication Union Radiocommunication Assembly, 2014. Rui Shu and Stefano Ermon. Bit prioritization in variational autoencoders via progressive coding. In International Conference on Machine Learning (ICML), pp. 20141–20155, 2022. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *Proc. International Conference on Machine Learning (ICML)*, pp. 2256–2265. PMLR, 2015. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, pp. 3738– 3746, 2016. Casper Kaae Sønderby, Ben Poole, and Andriy Mnih. Continuous relaxation training of discrete latent variable image models. In *Beysian DeepLearning workshop, NIPS*, 2017. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *Proc. International* Conference on Learning Representation (ICLR), 2020. Yuhta Takida, Wei-Hsiang Liao, Chieh-Hsin Lai, Toshimitsu Uesaka, Shusuke Takahashi, and Yuki Mitsufuji. Preventing oversmoothing in VAE via generalized variance parameterization. *Neurocomputing*, 509:137– 156, 2022a. Yuhta Takida, Takashi Shibuya, WeiHsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Takahashi Shusuke, Toshiyuki Kumakura, and Yuki Mitsufuji. SQ-VAE: Variational bayes on discrete representation with self-annealed stochastic quantization. In Proc. International Conference on Machine Learning (ICML), 2022b. Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. In *Proc. International Conference on Learning Representation (ICLR)*, 2017. George Toderici, Sean M O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. In *Proc. International Conference on Learning Representation (ICLR)*, 2016. Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. In *Proc. Advances in* Neural Information Processing Systems (NeurIPS), volume 33, pp. 19667–19679, 2020. Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In *Proc.* International Conference on Machine Learning (ICML), pp. 1747–1756, 2016. Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proc. Advances in Neural Information Processing Systems (NeurIPS), pp. 6306–6315, 2017. Dezhao Wang, Wenhan Yang, Yueyu Hu, and Jiaying Liu. Neural data-dependent transform for learned image compression. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 17379–17388, 2022. Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. *IEEE transactions on image processing*, 13(4):600–612, 2004. Will Williams, Sam Ringer, Tom Ash, John Hughes, David MacLeod, and Jamie Dougherty. Hierarchical quantized autoencoders. *arXiv preprint arXiv:2002.08111*, 2020. Yang Xian, Yang Sun, Wenwu Wang, and Syed Mohsen Naqvi. Multi-scale residual convolutional encoder decoder with bidirectional long short-term memory for single channel speech enhancement. In Proc. European Signal Process. Conf. (EUSIPCO), pp. 431–435, 2021. Dongchao Yang, Jianwei Yu, Helin Wang, Wen Wang, Chao Weng, Yuexian Zou, and Dong Yu. Diffsound: Discrete diffusion model for text-to-sound generation. *arXiv preprint arXiv:2207.09983*, 2022. Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. SoundStream: An end-to-end neural audio codec. *IEEE Trans. Audio, Speech, Lang. Process.*, 30:495–507, 2021. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Proc. IEEE Conference on Computer Vision and Pattern* Recognition (CVPR), pp. 586–595, 2018. ## A Derivations A.1 Sq-Vae-2 The ELBO of SQ-VAE-2 is formulated by using Bayes' theorem as log pθ(x) ≥ log pθ(x) − DKL(Q(Z1:L, Z˜1:L|x) ∥ P(Z1:L, Z˜1:L|x)) = EQ(Z1:L,Z˜1:L|x) log pθ(x)P(Z1:L, Z˜1:L|x) Q(Z1:L, Z˜1:L|x) = EQ(Z1:L,Z˜1:L|x) log pθ(x|Z1:L) − log Q(Z1:L, Z˜1:L|x) P(Z1:L, Z˜1:L) i=1 log ps 2 l (z˜l,i|Zˆl) ps 2 l (z˜l,i|Zl) + log Pˆs 2 l (zl,i|Z˜l) P(zl,i) = EQ(Z1:L,Z˜1:L|x) " log pθ(x|Z1:L) − X L l=1 X dl $$(18)$$ $$\quad(19)$$ !# = EQ(Z1:L,Z˜1:L|x) " log pθ(x|Z1:L) +X L l=1 X dl i=1 log ps 2 l (z˜l,i|Zl) ps 2 l (z˜l,i|Zˆl) + H(Pˆs 2 l (zl,i|Z˜l)) − log Kl !# . (17) Since the probabilistic parts are modeled as Gaussian distributions, the first and second terms can be calculated as log pθ(x|Z1:L) = log N (x; fθ(Z1:L), σ2I) = − D 2 log(2πσ2) −1 2σ 2 ∥x − fθ(x)∥ 2 2 and (18) EQ(Z1:L,Z˜1:L|x) "ps 2 l (z˜l,i|Zl) ps 2 l (z˜l,i|Zˆl) # = EQ(Z1:L,Z˜1:L|x) − 1 2s 2 l ∥z˜l,i − zl,i∥ 2 2 + 1 2s 2 l ∥z˜l,i − zˆl,i∥ 2 2 = −EQ(Z1:L,Z˜1:L|x) 1 2s 2 l ∥z˜l,i − zl,i∥ 2 2 + db 2 . (19) By substituting Equations (18) and (19) into Equation (17), we have Equation (7), where we use Z˜l = Zˆl instead of sampling it in practical implementation. ## A.2 Rsq-Vae The ELBO of RSQ-VAE is formulated by using Bayes' theorem as log pθ(x) ≥ log pθ(x) − DKL(Q(Z1:L, Z˜|x) ∥ P(Z1:L, Z˜|x)) = EQ(Z1:L,Z˜1:L|x) log pθ(x)P(Z1:L, Z˜|x) Q(Z1:L, Z˜|x) = EQ(Z1:L,Z˜|x) log pθ(x|Z1:L) − log Q(Z1:L, Z˜|x) P(Z1:L, Z˜) i=1 log Pˆs 2 l (zl,i|Z˜l) P(zl,i) = EQ(Z1:L,Z˜|x) " log pθ(x|Z1:L) − X dl i=1 log ps2 (z˜i|Zˆ) ps2 (z˜i|Z) − X L l=1 X dl $$(20)$$ $$\quad(21)$$ # = EQ(Z1:L,Z˜|x) " log pθ(x|Z1:L) +X dl i=1 log ps2 (z˜i|Z) ps2 (z˜i|Zˆ) + X L l=1 X dl i=1 H(Pˆs 2 l (zl,i|Z˜l)) − log Kl # . (20) Since the probabilistic parts are modeled as Gaussian distributions, the second term can be calculated as EQ(Z1:L,Z˜|x) "ps2 (z˜i|Z) ps2 (z˜i|Zˆ) # = EQ(Z1:L,Z˜|x) " −1 2PL l=1 s 2 l ∥z˜i − zi∥ 2 2 + 1 2PL l=1 s 2 l ∥z˜i − zˆi∥ 2 2 # = −EQ(Z1:L,Z˜|x) "1 2PL l=1 s 2 l ∥z˜i − zi∥ 2 2 # + db 2 . (21) ![18_image_0.png](18_image_0.png) Figure 7: *Top-down* layers corresponding to the rth resolution in hybrid model in Appendix B. By substituting Equations (18) and (21) into (20), we have Equation (15), where we use Z˜ = Hϕ(x) instead of sampling it in practical implementation. We have derived the ELBO objective in the case of R = 1. We extend the model to the general case of R, equivalent to the hybrid model, in Appendix B. ## B Hybrid Model We provide the ELBO of a hybrid model, where the two types of *top-down* layers are combinatorially used to build a *top-down* path as in Figure 7. We introduce some extra notations: Lr indicates the number of layers corresponding to the resolutions from the first to rth order; ℓr := {Lr−1 + 1, · · · , Lr} is a set of all the layers corresponding to the resolution r; and the output of the encoding block in the (Lr−1 + 1)th layer is denoted as G˜ rϕ (Hrϕ (x), Z1:Lr−1 ). In Figure 7, the quantized variables Zℓr aim at approximating the variable encoded at l = Lr−1 + 1 as $$\tilde{G}^{r}_{\phi}(\mathbf{H}^{r}_{\phi}(\mathbf{x}),\mathbf{Z}_{1:L_{r-1}})\approx\sum_{l\in\ell_{r}}\mathbf{Z}_{l}=:\mathbf{Y}_{r}.\tag{1}$$ On this basis, the lth *top-down* layer quantizes the following information: $$\hat{\mathbf{Z}}_{l}=\mathbf{G}_{\phi}^{l}(\mathbf{x},\mathbf{Z}_{1:l-1})=\begin{cases}\mathbf{H}_{\phi}^{1}(\mathbf{x})&(l=1)\\ \hat{\mathbf{G}}_{\phi}^{r(l)}(\mathbf{H}_{\phi}^{r(l)}(\mathbf{x}),\mathbf{Z}_{1:L_{r(l)-1}})-\sum_{l^{\prime}=L_{r(l)-1}+1}^{l}\mathbf{Z}_{l^{\prime}}&(l>1).\end{cases}$$ To derive the ELBO objective, we consider conditional distributions on (Z1:L,Y˜1:R), where Y˜r := Pl∈ℓr Z˜l. From the reproductive property of Gaussian distribution, the continuous latent variable converted from Yr via the stochastic dequantization processes, Z˜ℓr , follows the following Gaussian distribution: $$p_{\mathbf{s}_{r}^{2}}({\hat{\mathbf{y}}}_{r,i}|\mathbf{Z}_{\ell_{r}})={\mathcal{N}}\left({\hat{\mathbf{y}}}_{r,i};\sum_{l\in\ell_{r}}\mathbf{z}_{l,i},\left(\sum_{l\in\ell_{r}}s_{l}^{2}\right)\mathbf{I}\right),$$ $$(22)$$ $$(23)$$ $$(24)$$ $$(25)$$ where s 2 r := {s 2 l }l∈ℓr . We use the following prior distribution to derive the ELBO objective: $${\mathcal{P}}(\mathbf{Z}_{1:L},{\tilde{\mathbf{Y}}}_{1:R})=\prod_{r=1}^{R}\prod_{i=1}^{d_{r}}\left(\prod_{l\in\ell_{r}}P(\mathbf{z}_{l,i})\right)p_{\mathbf{s}_{r}^{2}}({\tilde{\mathbf{y}}}_{r,i}|\mathbf{Z}_{\ell_{r}}),$$ ), (25) where dr := dl for l ∈ ℓr. With the prior and posterior distributions, the ELBO of the hybrid model is formulated by using Bayes' theorem as log pθ(x) ≥ log pθ(x) − DKL(Q(Z1:L,Y˜1:R|x) ∥ P(Z1:L,Y˜1:R|x)) = EQ(Z1:L,Y˜1:R|x) log pθ(x)P(Z1:L,Y˜1:R|x) Q(Z1:L,Y˜1:R|x) = EQ(Z1:L,Y˜1:R|x) log pθ(x|Z1:L) − log Q(Z1:L,Y˜1:R|x) P(Z1:L,Y˜1:R) i=1 log Pˆs 2 l (zl,i|Z˜l) P(zl,i) = EQ(Z1:L,Y˜1:R|x) " log pθ(x|Z1:L) − X R r=1 X dr i=1 log ps2 r (y˜r,i|Zˆℓr ) ps2 r (y˜r,i|Zℓr ) − X L l=1 X dl # = EQ(Z1:L,Y˜1:R|x) " log pθ(x|Z1:L) +X R r=1 X dr i=1 log ps2 r (y˜r,i|Zℓr ) ps2 r (y˜r,i|Zˆℓr ) + X L l=1 X dl i=1 H(Pˆs 2 l (zl,i|Z˜l)) − log Kl # , (26) where Yˆr = G˜ rϕ (Hrϕ (x), Z1:Lr−1 ). Since we model the dequantization process and the probabilistic decoder as Gaussians, by substituting their closed forms into the above equation, we have as Gaussian, by substituting their closed results into the above equation, we have $$\mathcal{J}_{\text{HQ-VAE}}=\frac{D}{2}\log\sigma^{2}$$ $$\quad+\mathbb{E}_{\mathbb{Q}(\mathbf{z}_{i,k},\mathbf{\hat{Z}}_{i,k})\neq0}\left[\frac{\|\mathbf{x}-\mathbf{f}_{\theta}(\mathbf{Z}_{i},\mathbf{L})\|_{2}^{2}}{2\sigma^{2}}+\sum_{r=1}^{R}\frac{\left\|\widehat{G}_{\Phi}^{r}(\mathbf{H}_{\Phi}^{r}(\mathbf{x}),\mathbf{Z}_{i,k,r-1})-\sum_{l\in\ell_{r}}\mathbf{Z}_{l}\right\|_{F}^{2}}{2\sum_{l\in\ell_{r}}\sigma_{l}^{2}}-\sum_{l=1}^{L}H(\hat{P}_{\tau_{l}^{2}}(\mathbf{Z}_{l}|\mathbf{\hat{Z}}_{l}))\right],\tag{27}$$ where we used EQ(Z1:L,Y˜1:R|x) "ps2 r (y˜r,i|Zℓr ) ps2 r (y˜r,i|Zˆℓr ) # = EQ(Z1:L,Y˜1:R|x) " −1 2Pl∈ℓr s 2 l ∥y˜r,i − yr,i∥ 2 2 + 1 2Pl∈ℓr s 2 l ∥y˜r,i − yˆr,i∥ 2 2 # = −EQ(Z1:L,Y˜1:R|x) "1 2Pl∈ℓr s 2 l ∥y˜r,i − yr,i∥ 2 2 # + dr 2 . (28) Here, we use Y˜r = Yˆr instead of sampling it in practical implementation. ## C Experimental Details We explain the details of the experiments2in Section 5. For all the experiments except for RSQ-VAE and RQ-VAE on FFHQ and UrbanSound8K in Section 5.2, we construct architectures for both the *bottom-up* and *top-down* paths as described in Figures 1 and 8. To build these paths, we introduce two common blocks, the Resblock and Convblock by following Child (2021) as in Figure 8a, which are used in Figures 8b and 8c. Here, we denote the width and height of Hrϕ (x) as wr and hr, respectively, i.e., Hrϕ (x) ∈ R db×wr×hr. We set cmid = 0.5 in Figure 8. For all the experiments, we use the Adam optimizer with β1 = 0.9 and β2 = 0.9. Unless otherwise noted, we reduce the learning rate in half if the validation loss is not improved in the last three epochs. In HQ-VAE, we deal with the decoder variance σ 2 using the update scheme with the maximum likelihood estimation (Takida et al., 2022a). We gradually reduce the temperature parameter of Gumbel–softmax trick with a standard scheduler τ = exp(10−5· t) (Jang et al., 2017), where t is the iteration step. We set hyperparameters of VQ-VAE to standard parameter values: the balancing parameter β in Equations (8) and (16) to 0.25, and the weight decay in EMA for the codebook update to 0.99, respectively. 2The source code is attached in the supplementary material. | Notation | Description | |--------------|-------------------------------------------------------------------------------------| | Conv(1×1) d | 2D Convolutional layer (channel= n, kernel= 1 × 1, stride= 1, padding= 0) | | Conv(3×3) d | 2D Convolutional layer (channel= n, kernel= 3 × 3, stride= 1, padding= 1) | | Conv(4×4) d | 2D Convolutional layer (channel= n, kernel= 4 × 4, stride= 2, padding= 1) | | ConvT(3×3) d | 2D Transpose convolutional layer (channel= n, kernel= 3 × 3, stride= 1, padding= 1) | | ConvT(4×4) d | 2D Transpose convolutional layer (channel= n, kernel= 4 × 4, stride= 2, padding= 1) | Table 3: Notations of convolutional layers used in Figure 8. We here review the datasets used in Section 5 below. CIFAR10. CIFAR10 (Krizhevsky et al., 2009) contains 10 classes of 32×32 color images, which are separated into 50,000 and 10,000 samples for train and test sets, respectively. We use the default split and further randomly select 10,000 samples from the train set to prepare the validation set. CelebA-HQ. CelebA-HQ (Karras et al., 2018) contains 30,000 high-resolution face images that are selected from the CelebA dataset by following Karras et al. (2018). We use the default train/validation/test split. We preprocess the images by cropping and resizing them to the size of 256×256. FFHQ. FFHQ (Karras et al., 2019) contains 70,000 high-resolution face images. In Section 5.1, we split the images into three sets: train (60,000 samples), validation (5,000 samples), and test (5,000 samples) sets. We crop and resize them to 1024×1024. In Section 5.2, we follow the same preprocessing as in Lee et al. (2022a), respectively, where it splits the images into two sets, train (60,000 samples), validation (10,000 samples) sets and crop and resize them to 256×256. ImageNet. ImageNet (Deng et al., 2009) contains 1000 classes of natural images in RGB scales. We use the default train/val/test split. We crop and resize the images to 256×256. UrbanSound8K. UrbanSound8K (Salamon et al., 2014) contains 8,732 labeled audio clips of urban sound from 10 classes. UrbanSound8K has a wide range of sound classes, such as dog barking and drilling. UrbanSound8K is divided into 10 folds, and we use the fold 1-8/9/10 as the train/validation/test split. The duration of each audio clip is less than 4 seconds. In our experiments, to align the length of input audio, we pad the all audio clips to 4 seconds. We also convert the all audio clips to 16 bit and down-sampled them to 22,050 kHz. A 4-second waveform audio clip is converted to a Mel spectrogram with shape 80 × 344. We preprocess an audio clip following the paper (Liu et al., 2021): 1. We extract an 80-dimensional Mel spectrogram using the short-time Fourier transform (STFT) with a frame size of 1024, a hop size of 256, and a Hann window. 2. We apply dynamic range compression to the Mel spectrogram by first clipping it to a minimum value of 1 × 10−5and then applying a logarithmic transformation. ## C.1 Sq-Vae-2 Vs Vq-Vae-2 C.1.1 Comparison On Cifar10 And Celeba-Hq We construct the architecture as depicted in Figures 1 and 8. To build the *top-down* paths, we use two injected top-down layers (i.e., R = 2) with w1 = h1 = 8 and w2 = h2 = 16 for CIFAR10, and three layers (i.e., R = 3) with w1 = h1 = 8, w2 = h2 = 16 and w3 = h3 = 32 for CelebA-HQ, respectively. For the *bottom-up* paths, we repeatedly stack two Resblocks and an average pooling layer once and four times, respectively, for CIFAR10 and CelebA-HQ. We set the learning rate to 0.001 and train all the models for a maximum of 100 epochs with a mini-batch size of 32. ![21_image_0.png](21_image_0.png) Figure 8: Architecture details in Figure 1. Notations of convolutional layers, Conv(k×k) dand ConvT(k×k) d, are summarized in Table 3. ## C.1.2 Comparison On Large-Scale Datasets We construct the architecture as depicted in Figures 1 and 8. To build the *top-down* paths, we use two injected top-down layers (i.e., R = 2), with w1 = h1 = 32 and w2 = h2 = 64 for ImageNet, and three layers (i.e., R = 3) with w1 = h1 = 32, w2 = h2 = 64 and w3 = h3 = 128 for FFHQ, respectively. For the bottom-up paths, we repeatedly stack two Resblocks and an average pooling layer three times and five times respectively for ImageNet and FFHQ. We set the learning rate to 0.0005. We train ImageNet and FFHQ for a maximum of 50 and 200 epochs with a mini-batch size of 512 and 128, respectively. Figure 9 and Figure 10 show reconstructed samples of SQ-VAE-2 on ImageNet and FFHQ, respectively. ## C.2 Rsq-Vae Vs Rq-Vae C.2.1 Comparison On Cifar10 And Celeba-Hq We construct the architecture as depicted in Figures 1 and 8 without *injected top-down* layers, i.e., R = 1. We set the resolution of Hϕ(x) to w = h = 8. For the *bottom-up* paths, we repeatedly stack two Resblocks and an average pooling layer once and four times, respectively, for CIFAR10 and CelebA-HQ. We set the learning rate to 0.001 and train all the models for a maximum of 100 epochs with a mini-batch size of 32. ## C.2.2 Improvement In Perceptual Quality In this experiment, we use the same network architecture as that used in Lee et al. (2022a). We set the learning rate to 0.001 and train an RSQ-VAE model for a maximum of 300 epochs with a mini-batch size of 128 (4 GPUs, 32 samples for each GPU) on FFHQ. We use our modified LPIPS loss (see Appendix C.4) in training. For evaluation, we compute rFID scores with the code provided in their repository3 on the validation set (10,000 samples). And, we use the pre-trained RQ-VAE model offered in the same repository for evaluating RQ-VAE. We show examples of reconstructed images in Appendix C.4 after we explain our modified LPIPS loss. 3https://github.com/kakaobrain/rq-vae-transformer ![22_image_0.png](22_image_0.png) Figure 9: Reconstructed samples of SQ-VAE-2 trained on ImageNet ![22_image_1.png](22_image_1.png) Figure 10: Reconstructed samples of SQ-VAE-2 trained on FFHQ ![23_image_0.png](23_image_0.png) Figure 11: Architecture of the hybrid model in Appendix C.3. ## C.2.3 Validation On An Audio Dataset We construct the architecture by following the previous audio generation work (Liu et al., 2021). For the top-down paths, the architecture consists of several strided convolutional layers in parallel (Xian et al., 2021). We use four strided convolutional layers consisting of two sub-layers with stride 2, followed by two ResBlocks with ReLU activations. The kernel sizes of these four strided convolutional layers are 2 × 2, 4 × 4, 6 × 6 and 8×8 respectively. We add the outputs of the four strided convolutional layers, and pass it to a convolutional layer with kernel size 3 × 3. Then we get the resolution of Hϕ(x) to w = 20, h = 86. For the *bottom-up* paths, we stack a convolutional layer with kernel size 3 × 3, two Resblocks with ReLU activations, and two transposed convolutional layers with stride 2 and kernel size 4×4. We set the learning rate to 0.001 and train all the models for a maximum of 100 epochs with a mini-batch size of 32. To convert the normalized log-Mel spectrogram to the waveform, we use the same HiFi-GAN vocoder (Kong et al., 2020) as used in the audio generation work (Liu et al., 2021). The HiFi-GAN vocoder is trained on the train set of UrbanSound8K from scratch (Liu et al., 2021). The post-processing of assessors follows the paper (Series, 2014): assessors are excluded from the aggregated scores if they rate the hidden reference for more than 15% of the test signals lower than a score of 90. As an example of demonstration, we randomly select audio clips from our test split of UrbanSound8K and show their reconstructed Mel spectrogram samples from RQ-VAE and RSQ-VAE in Figure 12. While the samples from RQ-VAE have difficulty to reconstruct the sources with shared codebooks, the samples from RSQ-VAE reconstruct detailed features of the sources. ## C.3 Empirical Study Of Top-Down Layers For a demonstration, we build two HQ-VAEs by combinatorially using both the *injected top-down* and the residual top-down layers with three resolutions, w1 = h1 = 16, w2 = h2 = 32 and w3 = h3 = 64. We construct the architectures as described in Figure 11 and train them on CelebA-HQ. Figure 14 shows the progressively reconstructed images for each case. We can observe the same tendencies as in Figures 5a and 5c. ## C.4 Perceptual Loss For Images We found that LPIPS loss (Zhang et al., 2018), which is a perceptual loss for images (Johnson et al., 2016), works well with our HQ-VAE. However, we also noticed that just replacing ∥x − fθ(Z1:L)∥ 2 2 in the objective function of HQ-VAE (Equations (7) and (15)) with an LPIPS loss LLPIPS(x, fθ(Z1:L)) leads to artifacts in generated images. We hypothesize that those artifacts are caused by the max-pooling layers in VGGNet used in LPIPS. Signals from VGGNet might not reach all pixels in backpropagation due to the max-pooling layers. To mitigate this issue, we applied a padding-and-trimming operation to both a generated image fθ(Z1:L) and the corresponding reference image x before the LPIPS loss function. That is LLPIPS(pt [x] , pt [fθ(Z1:L)]), where pt [ ] denotes our padding-and-trimming operator. The PyTorch implementation of such an operation is described below. import random import torch import torch.nn.functional as F def padding_and_trimming( x_rec, \# decoder output x \# reference image ): _, _, H, W = x.size() x_rec = F.pad(x_rec, (15, 15, 15, 15), mode='replicate') x = F.pad(x, (15, 15, 15, 15), mode='replicate') _, _, H_pad, W_pad = x.size() top = random.randrange(0, 16) bottom = H_pad - random.randrange(0, 16) left = random.randrange(0, 16) right = W_pad - random.randrange(0, 16) x_rec = F.interpolate(x_rec[:, :, top:bottom, left:right], size=(H, W), mode='bicubic', align_corners=False) x = F.interpolate(x[:, :, top:bottom, left:right], size=(H, W), mode='bicubic', align_corners=False) return x_rec, x Note that our padding-and-trimming operation includes downsampling with a random ratio. We assume that this random downsampling provides a generative model with diversified signals in backpropagation across training iterations, which makes the model more generalizable. Figure 13 shows images reconstructed by an RSQ-VAE model trained with a normal LPIPS loss, LLPIPS(x, fθ(Z1:L)), and ones reconstructed by an RSQ-VAE model trained with our modified LPIPS loss, LLPIPS(pt [x] , pt [fθ(Z1:L)]). As shown, our padding-and-trimming technique alleviates the artifacts issue. For example, vertical line noise can be seen in hairs in the images generated by the former model, but those lines are removed or softened in the images generated by the latter model. Indeed, our technique improves rFID from 10.07 to 8.47. ## C.5 Progressive Coding For demonstration purpose of Figure 5a, we incorporate the concept of *progressive coding* (Ho et al., 2020; Shu & Ermon, 2022) to our framework, which helps hierarchical models to be more sophisticated in progressive lossy compressing and may generate high-fidelity samples. One can train SQ-VAE-2 to achieve progressive lossy compression (as in Figure 5a) by introducing additional generative processes x˜l ∼ N (x˜l; fθ(Z1:l), σ2 l I) for l ∈ [L]. We here derive the corresponding ELBO objective with this concept. Its benefit is to produce more reasonable reconstructed images only with higher layers (i.e., using only low-resolution information Hrϕ (x)). First, we consider corrupted data x˜l for l ∈ [L], which is obtained by adding noises, for example, i.e., x˜l = x + ϵl. We here adopt the Gaussian distribution ϵl.d ∼ N (0, vl) for the noises. Note that {σ 2 l } L l=1 is set to be a non-increasing sequence. We model the generative process using only the top l groups as | Model | FID ↓ | |--------------------------------------------|---------| | Very deep VAE (Child, 2021) | 28.5† | | VQ-GAN + Transformer (Esser et al., 2021b) | 11.4† | | RQ-VAE + RQ-Transformer Lee et al. (2022a) | 10.38† | | RSQ-VAE + RQ-Transformer | 9.74 | | RSQ-VAE + contextual RQ-Transformer | 8.46 | Table 4: FID scores on FFHQ (256×256). The values with † represents the scores reported in Lee et al. (2022a) p l θ (x˜l) = N (x˜l; fθ(Z1:l), σ2 l I). Now the ELBO is obtained as $$\mathcal{J}_{\text{EQ},\text{VAE},2}^{\text{new}}=\sum_{l=1}^{L}\frac{D}{2}\log\sigma_{l}^{2}+\mathbb{E}_{\text{Q}(\mathbf{Z}_{1:L},\mathbf{Z}_{1:L}|\mathbf{x})}\left[\frac{\|\mathbf{x}-f_{\theta}(\mathbf{Z}_{1:L})+Dv_{l}\|_{2}^{2}}{2\sigma_{l}^{2}}+\frac{\|\hat{\mathbf{Z}}_{l}-\mathbf{Z}_{l}\|_{2}^{2}}{2\sigma_{l}^{2}}-H(\hat{P}_{\sigma_{l}^{2}}(\mathbf{Z}_{l}|\hat{\mathbf{Z}}_{l}))\right]\tag{29}$$ In Section 5.3, we simply set vl = 0 in the above objective when this technique is activated. This concept can be also applied to the hybrid model derived in Appendix B by considering additional generative processes p r θ (x˜r) = N (x˜r; fθ(Z1:Lr ), σ2 r I). The ELBO objective is as follows: J prog HQ-VAE = X R r=1 D 2 log σ 2 r r=1 Yˆr −Pl∈ℓr Zl 2 F 2Pl∈ℓr s 2 l − X L + EQ(Z1:L,Y˜1:R|x) X R l=1 H(Pˆs 2 l (Zl|Z˜l)) . (30) r=1 ∥x − fθ(Z1:Lr ) + Dvr∥ 2 2 2σ 2 r + X R In Section C.3, we simply set vl = 0 in the above objective when this technique is activated. ## D Application Of Hq-Vae To Image Generation To demonstrate the applicalability of HQ-VAE to generation tasks, we train two prior models, an RQTransformer (Lee et al., 2022a) and a contextual RQ-Transformer (Lee et al., 2022b), on the FFHQ latent features extracted by RSQ-VAE. We numerically compare our RSQ-VAE-based generative models with the current VAE models, very deep VAE (Child, 2021), VQ-GAN (Esser et al., 2021b) and RQ-VAE (Lee et al., 2022a). We note that VQ-GAN and RQ-VAE are based on the same architecture and their training schemes use adversarial training with a discriminator. We calculate FID score for our models to evaluate the generation performance. As shown in Table 4, RSQ-VAE with RQ-Transformer achieves better generation performance than VQ-GAN with Transformer, and RQ-VAE with RQ-Transformer despite that RSQ-VAE is trained without adversarial training. Figure 15 shows the generated samples from RSQ-VAE with the contextual Transformer. According to the results, the latent representations learned by HQ-VAEs are shown to be tractable for prior models. ![26_image_0.png](26_image_0.png) Figure 12: Mel spectrogram of (a) sources and (b)-(e) reconstructed samples of UrbanSound8K dataset. The left panel and the right panel are audio clips of dog barking and drilling, respectively. We observe that RQ-VAEs struggle to reconstruct the sources with shared codebooks. In contrast, the reconstruction of RSQ-VAE can reflect the details of the source samples. ![27_image_0.png](27_image_0.png) (a) Source (b) RSQ-VAE trained with a normal LPIPS loss (rFID= 10.07) ![27_image_1.png](27_image_1.png) ![27_image_2.png](27_image_2.png) (c) RSQ-VAE trained with our improved LPIPS loss (rFID= 8.47) Figure 13: Reconstructed samples of FFHQ. ![28_image_0.png](28_image_0.png) ![28_image_1.png](28_image_1.png) Figure 14: Reconstructed images and magnified differences of HQ-VAE on CelebA-HQ ![28_image_2.png](28_image_2.png) Figure 15: Samples of FFHQ from RSQ-VAE with contextual RQ-Transformer (Lee et al., 2022b).
Review 1: Summary: The paper propose a hierarchical VAE with discrete latent variables. It adopts similar design of top-down and bottom-up paths of hierarchical VAEs with continuous latent variables as in previous work. The objective is ELBO, where the variational part comes from the stochastic quantization/de-quantization process. The paper compares HQ-VAE with previous discrete VAEs, and show stronger performance in reconstruction loss and perceptual quality of reconstructed image and higher perplexity for better use the codebook. Strengths and Weaknesses: Strengths: The paper is excellently written, and it presents a clear explanation of the model design using detailed formulas and figures. The figures for explaining the model structure is informative. Moreover, it offers a comprehensive overview of the existing literature on discrete VAEs and conducts a thorough comparison with other models. The proposed design is novel. As hierarchical VAEs with continuous latent variables have already demonstrated significant success, the development of a discrete counterpart trained with ELBO is imperative. Finally, the experimental results showcase some improvements over prior models, further validating the efficacy of the proposed approach. Weakness: The application of this model seems to be limited to reconstruction. Although it shows better reconstruction compared to baselines on a variety of dataset, reconstruction itself is not a very meaningful application. The application is particularly important for hierarchical VAE because the latent space itself is of large size so we can't really claim it does well in data compression, which is one of the important application of auto-encoder. The authors needs to justify what is the real benefit of designing a hierarchical auto-encoder that can reconstruct data well. In the appendix, the author show that it can be used to train generative model on latent variables. However, only some qualitative results are shown, without comparing with the same model trained on the latent space of other VAEs. Requested Changes: As discussed in the weakness section, more in-depth study on the application is needed. I do not quite see the particular value of having a discrete VAE that does better in reconstruction. If the claim is that the latent space is more suitable for training a generative model, then an comprehensive quantitative comparison is needed, against generative model trained on VQ-VAE2 and single layer VQGAN. The latter is particularly interesting, as models relied on single layer VQ-GAN such as latent diffusion works particularly well. I am suspicious that hierarchical latent structure, while improving reconstruction, makes training generative models harder. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper proposes an extension of the the recently proposed SQ-VAE (stochastically-quantized VAE) (Takida 2022). This results in HQ-VAE, a hierarchical VAE using bidrectional inference similar to ladder VAE (Sonderby 2016) and ResNet-VAE (Kingma 2016). The performance of HQ-VAE is empirically validated and shown to achieve lower reconstruction error and have more entropic latents than hierarchical VQ-VAEs. ----- Yuhta Takida, Takashi Shibuya, WeiHsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Takahashi Shusuke, Toshiyuki Kumakura, and Yuki Mitsufuji. SQ-VAE: Variational bayes on discrete representation with self-annealed stochastic quantization. In Proc. International Conference on Machine Learning (ICML), 2022. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Proc. Advances in Neural Information Processing Systems (NeurIPS), pp. 3738– 3746, 2016. Kingma, Durk P., et al. "Improved variational inference with inverse autoregressive flow." Advances in neural information processing systems 29 (2016). Strengths and Weaknesses: Strength: - The exposition is mostly clear and straightforward to follow. Weakness: - The current work is mostly about an extended architecture of previous work (SQ-VAE), and the hierarchical architecture itself is directly adapted from literature (ResNet-VAE, VQ-VAE2), thus there is very little methodological contribution from a machine learning perspective. - The proposed advantages of HQ-VAE are mostly a rehash of those of SQ-VAE: ("(1) greatly reduces the number of hyperparameters to be tuned (only the one from the Gumbel-softmax trick), and (2) enhances codebook usage without any heuristics thanks to the self-annealing effect"). Given that the present work builds on SQ-VAE, it is insufficient to just demonstrate the benefit of SQ-VAE (and compare with the deterministically quantized VQ-VAE and variants), but it should rather focus on what advantages a hierarchical extension brings in addition to SQ-VAE. - If the focus was rather on the benefit of the probabilistic quantization aspect of HQ-VAE, then there is insufficient justification given there are existing probabilistic VQ-VAE formulations which are closely related to (or even subsume) HQ-VAE/SQ-VAE, such as Sonderby 2017 and discrete VAE (dVAE) (Singh 2022). A more detailed discussion and comparison would be needed. - The methodology for comparing the models seems problematic. The comparison based on reconstruction error / latent perplexity can be misleading and inclusive; it's more principled to compare rate-distortion curves like in Williams 2020. Having lower reconstruction error AND higher latent perplexity does not necessarily mean a better model; a VAE can in theory be tuned to operate anywhere on the R-D curve such that it operates at lower distortion and higher rate (Alemi 2018). Thus the conclusion that the HQ-VAE attains lower distortion / higher latent perplexity does not necessarily mean it's "better" than its deterministically quantized counterpart. ---- Sønderby, Casper Kaae, Ben Poole, and Andriy Mnih. "Continuous relaxation training of discrete latent variable image models." Beysian DeepLearning workshop, NIPS. Vol. 201. 2017. Singh, Gautam, Fei Deng, and Sungjin Ahn. "Illiterate dall-e learns to compose." arXiv preprint arXiv:2110.11405 (2021). Williams, Will, Sam Ringer, Tom Ash, David MacLeod, Jamie Dougherty, and John Hughes. "Hierarchical quantized autoencoders." Advances in Neural Information Processing Systems 33 (2020): 4524-4535. Alemi, Alexander, Ben Poole, Ian Fischer, Joshua Dillon, Rif A. Saurous, and Kevin Murphy. "Fixing a broken ELBO." In International conference on machine learning, pp. 159-168. PMLR, 2018. Requested Changes: The paper needs to be more clear about its scope of contribution, given it's a straightforward extension of SQ-VAE and does not make significant methodological contribution. The claim to novelty -- "we propose a novel framework to stochastically learn hierarchical discrete representation on the basis of the variational Bayes framework" -- is thus problematic, as everything except the "hierarchical" part is existing work (SQ-VAE), and even the "hierarchical" part is fairly standard from literature. Additionally, a more careful comparison of SQ-VAE with the existing probabilistic formulations of Sonderby 2017 and Singh 2022 would be insightful and add to the contributions, given the original SQ-VAE paper did not address these two overlapping prior works. The empirical comparison should also be based on R-D curves (training models with different weightings between rate and distortion losses) to be correct. Also see my comments about weaknesses above. ---- Sønderby, Casper Kaae, Ben Poole, and Andriy Mnih. "Continuous relaxation training of discrete latent variable image models." Beysian DeepLearning workshop, NIPS. Vol. 201. 2017. Singh, Gautam, Fei Deng, and Sungjin Ahn. "Illiterate dall-e learns to compose." arXiv preprint arXiv:2110.11405 (2021). Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper proposes a new model for hierarchical extensions of VQ-VAEs, which they claim to alleviate some of the instable training and poor reconstruction accuracy issues that previously existed. Their proposed approach stochastically learns hierarchical discrete representations which results in a natural hierarchical model with improved performance and more stable training. The authors experiment with their model for image and audio generation and show some promising empirical results. Strengths and Weaknesses: Strengths: 1. The paper is well motivated. 2. The method seems to be well formulated via extensions of the variational bayes framework for hierarchical latent variables, and the derivations of objective functions seem to be right. 3. Paper is generally well written Weaknesses: 1. The biggest weakness is that there is no solid justification of the claimed advantages of the method. The authors claim 'more stable training' in the abstract, but there are no comments on how you would quantitatively evaluate this property and that the proposed method indeed does better according to this metric. 2. Also the authors claim improved reconstruction performance - this is better evaluated but it is unclear if measuring reconstruction via MSE-type metrics is most appropriate - there should be at least human evaluation to check the actual quality of generated content. 3. In some cases it is also unclear how much the MSE improvement is between proposed method and previous baselines, take Figure 2 for example. Error bars for Figure 2 would also help. 4. In Table 1 the columns under 'Codebook perplexity' and Z1, Z2, Z3 are not clear what they mean. There's also a missing entry for Imagenet. 5. Figure 3 comparisons should include at least some existing methods - as far as I can tell they are all ablations and variations of their proposed method. 6. The final takeaway messages can be sharpened. The authors propose quite a few variations of their method but a final guide for which method to use in which situation can help, based on their experiments. Requested Changes: 1. Clarify the claims around 'stable training' and 'improved reconstruction', perhaps by adding more experiments. 2. The changes requested to Table 1, Figure 2, Figure 3, etc. 3. Better summary of takeaway messages. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: The reviewers praised the direction the paper is taking in the context of deep generative models. However, they also raised several concerns before and after the discussion phase. First, there is no clear-cut evaluation of HQ-VAEs as generative models for sampling and for representation learning. While the paper claims are all about reconstruction errors, the lack of extensive experiments for these other two perspectives leaves some big open questions about the necessity of the framework. The variants of the RD-curves provided do not help understand where these models stand in these scenarios. The authors provided some preliminary qualitative experiments in the appendix and added more experiments on FFHQ for RSQ-VAE in the rebuttal. I agree with the reviewers that authors should perform a complete analysis on the generative and representation learning aspects, including all architectures and datasets. This is currently the major missing piece towards acceptance. Second, while extensive (at least from the point of view of reconstruction), the experiments do not provide error-bars nor statistical tests, and therefore it is less clear whether the reported gains are substantial. In the added experiments on the MUSHRA listening test authors are showing distributions over the obtained scores. The authors promised to add the error bars to Figs 2-3, which are very hard to see. It is not clear how were they computed and on how many trials. Third, the comparison and discussion w.r.t. previous architectures and AE design such as SQ-VAE and dVAE is preliminary. One reviewer claims that there is not enough novelity in extending the stochasticity to hierarchical AEs. I disagree, as this is not one aspect papers should be evaluated at TMLR. However, I recognize that the paragraph added by the authors in Sec 2.2 can be more precise and further expanded to discuss the similarities between SQ-VAEs and HQ-VAEs via residual-enanched AEs as the reviewer points out. This will greatly benefit readers properly understanding and navigating the hierarchical AE landscape. Additionally, one reviewer questions the deeper motivation of using discrete representations in the first place. I disagree in that discrete codes have a clear advantage in terms of compression and interpretability, and while it would be interested to include also only-continuous latent variable models, this would go outside the scope of the paper. The authors engaged in the discussion but only partially addressed the above concerns. Two reviewers voted for rejection. I agree that the manuscript in its current status (despite having improved a lot through the discussion) is not ready for publication. The current status would be that of a major revision, which is unfortunately not possible on TMLR. Authors are encouraged to address the above points, especially completing the experiments on generative modeling and representation learning, and resubmit. ==================================================
# Incorporating Unlabelled Data Into Bayesian Neural Networks Mrinank Sharma mrinank@robots.ox.ac.uk University of Oxford, UK Tom Rainforth *rainforth@stats.ox.ac.uk* University of Oxford, UK Yee Whye Teh y.w.teh@stats.ox.ac.uk University of Oxford, UK Vincent Fortuin *vincent.fortuin@tum.de* Helmholtz AI, Munich, Germany Technical University of Munich, Germany Reviewed on OpenReview: *https://openreview.net/forum?id=q2AbLOwmHm* ## Abstract Conventional Bayesian Neural Networks (BNNs) are unable to leverage unlabelled data to improve their predictions. To overcome this limitation, we introduce *Self-Supervised* Bayesian Neural Networks, which use unlabelled data to learn models with suitable prior predictive distributions. This is achieved by leveraging contrastive pretraining techniques and optimising a variational lower bound. We then show that the prior predictive distributions of self-supervised BNNs capture problem semantics better than conventional BNN priors. In turn, our approach offers improved predictive performance over conventional BNNs, especially in low-budget regimes. ## 1 Introduction Bayesian Neural Networks (BNNs) are powerful probabilistic models that combine the flexibility of deep neural networks with the theoretical underpinning of Bayesian methods (Mackay, 1992; Neal, 1995). Indeed, as they place priors over their parameters and perform posterior inference, BNN advocates consider them a principled approach for uncertainty estimation (Wilson & Izmailov, 2020; Abdar et al., 2021), which can be helpful for label-efficient learning (Gal et al., 2017). It has even recently been argued that improving them will be crucial for large language models (Papamarkou et al., 2024) and generative AI as a whole (Manduchi et al., 2024). Conventionally, BNN researchers have focused on improving predictive performance using human-crafted priors over network parameters or predictive functions (e.g., Louizos et al., 2017; Tran et al., 2020; Matsubara et al., 2021; Fortuin et al., 2021a). However, several concerns have been raised with BNN priors (Wenzel et al., 2020; Noci et al., 2021). It also stands to reason that the vast store of semantic information contained in unlabelled data should be incorporated into BNN priors, and that the potential benefit of doing so likely exceeds the benefit of designing better, but ultimately human-specified, priors over parameters or functions. Unfortunately, as standard BNNs are explicitly only models for supervised prediction, they cannot leverage such semantic information from unlabelled data by conditioning on it. To overcome this shortcoming, we introduce *Self-Supervised Bayesian Neural Networks* (§3), which use unlabelled data to learn improved priors over functions. In other words, our approach improves the BNN prior predictive distribution (which we will just call *prior predictive* in the remainder of the paper) by incorporating unlabelled data into it. This contrasts with designing different but ultimately *human-specified* priors, which is the prevalent approach. ![1_image_0.png](1_image_0.png) Figure 1: **Self-Supervised Bayesian Neural Networks**. (a) Pre-training in self-supervised BNNs corresponds to unsupervised prior learning. We learn a model with a prior distribution such that augmented images likely have the same label and distinct images likely have different labels under the prior predictive. (b) Self-supervised BNN priors assign higher probabilities to semantically consistent image pairs having the same label compared to semantically inconsistent image pairs. Here, semantically consistent image pairs have the same ground-truth label, and semantically inconsistent image pairs have different ground-truth labels. The plot shows a kernel density estimate of the log-probability that same-class and different-class image pairs are assigned the same label under the prior. (c) Unlike self-supervised prior predictives, conventional BNN prior predictives assign similar probabilities to semantically consistent and semantically inconsistent image pairs having the same label. In practice, self-supervised BNNs generate pseudo-labelled data using unlabelled data and data augmentation, similar to contrastive learning (Oord et al., 2019; Chen et al., 2020a;b; Grill et al., 2020; Hénaff et al., 2020). We use this generated data to learn models with powerful prior predictive distributions. To do this, we perform unsupervised model learning by optimising a lower bound of a log-marginal likelihood dependent on the pseudo-labelled data. This biases the prior towards functions that assign augmented image pairs a larger likelihood of having the same label than distinct images (Fig. 1a). Following pretraining, we perform inference in the learnt model to make predictions. We then further demonstrate that self-supervised BNN prior predictives reflect input-pair semantic similarity better than normal BNN priors (§4). To do so, we develop a methodology to better understand the prior predictive distributions of BNNs. Our approach is to measure the probability of *pairs* of data points having the same label under the prior. Intuitively, pairs of points that are more semantically similar should be more likely to have the same label under the prior predictive. Applying this methodology, we see that the functional priors learned by self-supervised BNNs distinguish same-class input pairs and different-class input pairs much better than conventional BNNs (Fig. 1b). Finally, we empirically demonstrate that the improved prior predictives of self-supervised BNNs translate to improved predictive performance, especially in problem settings with few labelled examples (§5). ## 2 Background: Bayesian Neural Networks Let fθ(x) be a neural network with parameters θ and D = {(xi, yi)} N i=1 be an dataset. We want to predict y from x. A BNN specifies a prior over parameters, p(θ), and a likelihood, p(y|fθ(x)), which in turn define the posterior p(θ|D) ∝ p(θ)Qi p(yi|fθ(xi)). To make predictions, we approximate the posterior predictive p(y?|x?, D) = Ep(θ|D)[p(y?|fθ(x?))]. Improving BNN priors has been a long-standing goal for the community, primarily through improved humandesigned priors. One approach is to improve the prior over the network's parameters (Louizos et al., 2017; Nalisnick, 2018). Others place priors directly over predictive functions (Flam-Shepherd et al., 2017; Sun et al., 2019; Matsubara et al., 2021; Nalisnick et al., 2021; Raj et al., 2023). Both approaches, however, present challenges—the mapping between the network's parameters and predictive functions is complex, while directly specifying our beliefs over predictive functions is itself a highly challenging task. For these reasons, as well as convenience, isotropic Gaussian priors over network parameters remain the most common choice (Fortuin, 2022), despite concerns (Wenzel et al., 2020). In contrast to these works, we propose to *learn* better functional priors from unlabelled data via contrastive learning. ## 3 Self-Supervised Bnns Conventional BNNs are unable to use unlabelled data to improve their predictions. To overcome this limitation, we introduce *Self-Supervised BNNs*. At a high level, self-supervised BNNs allow unlabelled data to be incorporated by using it to learn a powerful prior that captures known similarities between inputs. In practice, we can utilise ideas from contrastive learning to learn models with prior predictive distributions that reflect the semantics of different input pairs. The high-level idea is thus to use prior knowledge in the form of data augmentations, for which we believe that the semantic content of the data should be invariant to them. We can then use a variational method to learn a function-space prior that assigns more weight to functions, whose outputs on unlabelled data are also invariant to these augmentations. Problem Specification. Suppose Du = {x u i } N i=1 is an unlabelled dataset of examples x u i ∈ R n. Let Dt = {(x t i , yt i )} T i=1 be a labelled dataset corresponding to a supervised "downstream" task, where y t i is the target associated with x t i . We want to use both Du and Dtto train a deep learning model for predicting y given x with probabilistic parameters θ, where all information about the data is incorporated through the distribution on θ. That is, we predict using p(y|*x, θ*) for a given θ. ## 3.1 Incorporating Unlabelled Data Into Bnns The simplest way one might proceed, is to place a prior on θ and then condition on both Du and Dt, leading to a posterior p(θ|Du, Dt) ∝ p(θ|Du) p(Dt|Du, θ). However, if we are working with conventional BNNs, which are explicitly models for supervised prediction, then p(θ|Du) = p(θ). Further, as the predictions depend only on the parameters, p(Dt|Du, θ) = p(Dt|θ), which then means that p(θ|Du, Dt) = p(θ|Dt). Thus, we cannot incorporate Du by naïvely conditioning on it. To get around this problem, we propose to instead use Duto generate hypothetical labelled data and then condition our predictive model on it. In other words, we will use Duto guide a *self-supervised* training of the model, thereby incorporating the desired information from our unlabelled data. To do this, we will draw on data augmentation (Yaeger et al., 1996; Krizhevsky et al., 2012; Shorten & Khoshgoftaar, 2019) and contrastive learning (Oord et al., 2019; Chen et al., 2020b;a; Grill et al., 2020; Hénaff et al., 2020; Chen & He, 2020; Foster et al., 2020; Miao et al., 2023). Indeed, such approaches that use data augmentation provide effective means for making use of prior information. Although it is difficult to encode our prior beliefs with hand-crafted priors over neural network parameters, we can construct augmentation schemes that we expect to preserve the semantic properties of different inputs. We thus also expect these augmentation schemes to preserve the unknown downstream labels of different inputs. The challenge is now to transfer the beliefs—implicitly defined through our data augmentation scheme—into our model. ![3_image_0.png](3_image_0.png) Figure 2: **BNN Probabilistic Models.** (a) Probabilistic model for conventional BNNs. (b) Probabilistic model for self-supervised BNNs. We share parameters between different tasks, which allows us to condition on generated self-supervised data. j indexes self-supervised tasks, i indexes datapoints. One simple way to do this would be to just augment the data in Dt when training a standard BNN with standard data augmentation. However, this would ignore the rich information available in Du. Instead, we will use a construct from contrastive learning to generate pseudo-labelled data from Du, and then condition θ on both this pseudo data and Dt. Concretely, suppose we have a set of data augmentations A = {a : R n → R n} that preserve semantic content. We use A and Duto generate a *contrastive dataset* Dcthat reflects our subjective beliefs by: 1. Drawing M examples from Du at random, {xˆi}M i=1, where i indexes the subset, not Du; 2. For each xˆi, sampling a A, aB ∼ A and augmenting, giving x˜ A i = a A(ˆxi) and x˜ B i = a B(ˆxi); 3. Forming Dc by assigning x˜ A iand x˜ B ithe same class label, which is the subset index i. We thus have Dc = {(x c i , yc i )} 2M i=1 = {(x˜ A i , i)}M i=1 ∪ {(x˜ B i , i)}M i=1, where the labels are between 1 and M. The task associated with our generated data is thus to predict the subset index corresponding to each augmented example. We can repeat this process L times and create a set of contrastive task datasets, {Dc j } L j=1. Here, we consider the number of generated datasets L to be a fixed, finite hyper-parameter, but we discuss the implications of setting L = ∞ in Appendix A. Note that rather than using a hand-crafted prior to capture our semantic beliefs, we have instead used data augmentation in combination with unlabelled data. Next, to link each Dc j with the downstream predictions, we use parameter sharing (see Fig. 2). Specifically, we introduce parameters θ c j for each Dc j , parameters θ tfor Dt, and shared-parameters θ sthat are used for both the downstream and the contrastive tasks. Duthus informs downstream predictions through θ s, via {Dc j } L j=1. For example, θ t and θ c j could be the parameters of the last layer of a neural network, while θ scould be the shared parameters of earlier layers. Learning in this Framework. We now discuss different options for learning in this framework. Using the Bayesian approach, one would place priors over θ s, θ t, and each θ c j . This then defines a posterior distribution given the observed data {Dc j } L j=1 and Dt. To make predictions on the downstream task, which depend on θ s and θ t only, we would then use the posterior predictive: p(y t ? |x?, {Dc j} L $$_{=1},{\mathcal{D}}^{t})=\mathbb{E}_{p(\theta^{s}|\{{\mathcal{D}}_{j}^{c}\}_{j=1}^{L},{\mathcal{D}}^{t})}$$ [Ep(θ s,Dt)[p(y t ? |x?, θs, θt)]], (1) $$\exists_{p(\theta^{t}|\theta^{s},{\mathcal{D}}^{t})}|p(y)$$ $$\star,\theta^{s},\theta^{t})]],$$ where we have (i) noted that the downstream task parameters θ t are independent of {Dc j } L j=1 given the shared parameters θ s and (ii) integrated over each θ c j and θ tin the definition of p(θ s|{Dc j } L j=1, Dt). Alternatively, one can learn a point estimate for θ s, e.g., with MAP estimation, and perform full posterior inference for θ t and θ c j only. This would be a *partially stochastic* network, which Sharma et al. (2023) showed often outperforms fully stochastic networks while being more practical. In the case where all parameters are shared up to the last (linear) layer, this is also known as the *neural linear model* (Lázaro-Gredilla & Figueiras-Vidal, 2010), which has been shown to have many desirable properties (Ober & Rasmussen, 2019; Harrison et al., 2023). Learning in this way is also known as *model learning*, as used in deep kernels and variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014; Wilson et al., 2015; 2016), where in the case of Gaussian processes (GPs), a point estimate of kernel parameters is learned to also define a learned prior over functions. Note that our approach can also be considered kernel learning, where the representations of input data after the shared layers θ s define the reproducing kernel Hilbert space (RKHS) and the kernel is given by their inner products. In learning a point estimate for θ s, one is learning a suitable model to perform inference in. This model has the following prior predictive: $$p(y_{\star}^{t}|x_{\star},\{{\mathcal D}_{j}^{c}\}_{j=1}^{L})=\mathbb{E}_{p(\theta^{t})}[p(y_{\star}^{t}|x_{\star},\theta_{\star}^{s},\theta^{t})],$$ $$\left(2\right)$$ , θt)], (2) where θ s ? is the learnt value of θ s. 1 We would then update our beliefs over θ tin light of observed data. Through θ s ? , we are using Duto effectively learn a prior over the functions that can be represented by the network in combination with θ t. Learning a point estimate for θ sis thus our main approach. ## 3.2 Self-Supervised Bnns In Practice We now use our framework to propose a practical two-step algorithm for self-supervised BNNs. Preliminaries. We focus on image-classification problems. We use an encoder fθ s (·) that maps images to representations and is shared across the contrastive tasks and the downstream task. The shared parameters θ sthus are the base encoder's parameters. We also normalise the representations produced by this encoder. For the downstream dataset, we use a linear readout layer from the encoder representations, i.e., we have θ t = {Wt, bt} and y t i ∼ softmax(Wtfθ s (xi) + b t). The rows of Wt j are thus class template vectors, that is, a data point will achieve the highest possible softmax probability for a certain class if its representation is equal to (a scaled version of) the corresponding row of the weight matrix. For the contrastive tasks, we use a linear layer without biases, i.e., θ c j = Wc j , and j indexes contrastive tasks. We place Gaussian priors over θ s, θ t, and each θ c j . Pre-training θ s**(Step I).** Here, we learn a point estimate for the base encoder parameters θ s, which induces a functional prior over the downstream task labels (see Eq. 2). To learn θ s, we want to optimise the (potentially penalised) log-likelihood log p({Dc j } L j=1, Dt|θ s), but this would require integrating over θ t and each θ c. Instead, we use the evidence lower bound (ELBO): $$\hat{\cal L}_{j}^{c}(\theta^{s})=\mathbb{E}_{q(\theta_{j}^{c})}[\log p(\mathcal{D}_{j}^{c}|\theta^{s},\theta_{j}^{c})]-D_{\mathrm{KL}}(q(\theta_{j}^{c})||p(\theta_{j}^{c}))\leq\log p(\mathcal{D}_{j}^{c}|\theta^{s}),$$ s), (3) where q(θ c j ) is a variational distribution over the contrastive task parameters. Rather than learning a different variational distribution for each contrastive task j, we amortise the inference and exploit the structure of the contrastive task. The contrastive task is to predict the corresponding source image index for each augmented image. That is, for the first pair of augmented images in a given contrastive task dataset, we want to predict class "1", for the second pair, we want to predict class "2", and so forth. The label is the index within the contrastive dataset, not the full dataset. To predict these labels, we use a linear layer applied to an encoder that produces normalised representations. We want a suitable variational distribution for this linear layer. To make progress, we define z˜ A i = fθ s (x˜ A i ) and z˜ B i = fθ s (x˜ B i ), which are the *representations* of given images. To solve the contrastive task well, we want to map z A 1 and z B 1 to class "1", z A 2 and z B 2 to class "2", and so forth. 1Alternatively, this approach can be understood as learning the prior p(θ s, θt) = p(θ t)δθs=θ s? , where δ is the Dirac delta function. $$(3)$$ Algorithm 1 Self-Supervised BNNs Input: augmentations A, unlabelled data Du, task data Dt, contrastive prior p(Wc) for j = 1*, . . . , L* do . Unsupervised prior learning Draw subset {xˆi}M i=1, set Dc j = {} for i = 1*, . . . , M* do . Create contrastive task Sample a A, aB ∼ A x˜ A i = a A(ˆxi), x˜ B i = a B(ˆxi) z˜ A i = fθ s (x˜ A i ), z˜ B i = fθ s (x˜ B i ) ωi = 0.5(˜z A i + ˜z B i ) Add (˜x A i , i) and (˜x B i , i) to Dc j . end for Wc j =-ω T 1*. . . ω*TM /τ + , with ∼ N (0, σ2I) L˜(τ, σ2, θs) = log p(θ s) + 1 2M Eq(Wc j )[p(Dc j |θ s, Wc j )] − D¯KL[q(Wc j )||p(Wc))] Update θ s*, τ, σ*2to maximise L˜(τ, σ2, θs) end for Approximate p(θ t|Dt, θs) ' q(θ t) . Evaluation Predict using Eq(θ t)[p(y t ? |x?, θs, θt)] We define ωi = 0.5(z˜ A i + z˜ B i ), i.e., ωiis the mean representation for each augmented pair of images. Because the rows of the linear layer weight matrix Wc j are effectively class templates, we use q(Wc j ; *τ, σ*2) = N (µ c j , σ2I) with µ c j =-ω T 1*. . . ω*TM /τ . In words, the mean representation of each augmented image pair is the class template for each source image, which should solve the contrastive task well. Note that since this makes the last-layer weights data-dependent, it also renders them softmax outputs invariant to the arbitrary ordering of the data points in the batch, since if one permutes the xi, and thus zi, this also automatically permutes the ωi and thus rows of Wc j in the same way. Also, recall that the Wc j are auxiliary parameters that are just needed during the contrastive learning to learn a θ sthat induces a good functional prior, but are discarded afterwards and not used for the actual supervised task of interest. The variational parameters τ and σ 2 determine the magnitude of the linear layer and the per-parameter variance, vary throughout training, are shared across contrastive tasks j, and are learnt by maximising Eq. (3) with reparameterisation gradients (Price, 1958; Kingma & Welling, 2013). Both the contrastive tasks and downstream task provide information about the base encoder parameters θ s. One option would be to learn the base encoder parameters θ s using only data derived from Du(Eq. 3), which would correspond to a standard self-supervised learning setup. In this case, the learnt prior would be task-agnostic. An alternative approach is to use both Dt and Duto learn θ s, which corresponds to a semi-supervised setup. To do so, we can use the ELBO for the downstream data: L˜t(θ s) = Eq(θ t)[log p(D t|θ t, θs)] − DKL[q(θ t)||p(θ t)] ≤ log p(D t|θ s), (4) where q(θ t) = N (θ t; µ t, Σ t) is a variational distribution over the downstream task parameters; Σ tis diagonal. We can then maximise Pj L˜c j (θ s) + α · L˜t(θ s), where α is a hyper-parameter that controls the weighting between the downstream task and contrastive task datasets. We consider both variants of our approach, using Self-Supervised BNNs to refer to the variant that pre-trains only with {Dc j } L j=1 and Self-Supervised BNNs* to refer to the variant that uses both {Dc j } L j=1 and Dt. Downstream Inference (Step II). Having learnt a point estimate for θ s, we can use any approximate inference algorithm to infer θ t. Here, we use a post-hoc Laplace approximation (Daxberger et al., 2021). Algorithm 1 summarises Self-Supervised BNNs, which learn θ s with Du only. We found tempering with the mean-per-parameter KL divergence, D¯KL, improved performance, in line with other work (e.g., Krishnan et al., 2022). Moreover, we generate a new Dc j per gradient step so L corresponds to the number of gradient steps. As shown on Algorithm 1, our full loss is L˜(τ, σ2, θs) = log p(θ s)+ 1 2M Eq(Wc j )[p(Dc j |θ s, Wc j )]−D¯KL[q(Wc j )||p(Wc))]. The first term of this loss is a prior over the shared parameters, in our case a Gaussian prior, which is equivalent to weight decay. The second term is where the actual contrastive learning happens, namely it is an expected Categorical log-likelihood (i.e., cross-entropy) over the softmax logits under Wc j . Recall that the rows of this weight matrix are the mean embeddings vectors ωi, so this likelihood encourages the inner products ω > i z˜ · j to be large for i = j, that is, drawing two augmentations of the same image towards their mean and thus each other, and to be small for i 6= j, that is, pushing augmentations of different images away from each other. Finally, the third KL term places a Gaussian prior on Wc, which in our case means that the ωi that make up this matrix (and thus the embeddings z˜ · j ) cannot grow without bounds to maximize the likelihood score, but have to stay reasonably close together. Moreover, following best-practice for contrastive learning (Chen et al., 2020a), we use a non-linear projection head gψ(·) *only* for the contrastive tasks. For further details, see Appendix A. Pre-training as Prior Learning. In this work, our central aim is to incorporate unlabelled data into BNNs. To achieve this, in practice, we perform model learning using contrastive datasets generated from the unlabelled data and data augmentation. This corresponds to an unsupervised prior learning step. Since our objective function during this is a principled lower bound on the log-marginal likelihood, it is similar to type-II maximum likelihood (ML), which is often used to learn parameters for deep kernels (Wilson et al., 2015) of Gaussian processes (Williams & Rasmussen, 2006), and recently also for BNNs (Immer et al., 2021a). As such, similar to type-II ML, our approach can be understood as a form of prior learning. Although we learn only a point-estimate for θ s, this fixed value induces a prior distribution over predictive functions through the task-specific prior p(θ t). However, while normal type-II ML learns this prior using the observed data itself, our approach maximises a marginal likelihood derived from unsupervised data. ## 4 How Good Are Self-Supervised Bnn Prior Predictives? We showed our approach incorporates unlabelled data into the downstream task prior predictive distribution (Eq. 2). We also argued that, as the generated contrastive data encodes our beliefs about the semantic similarity of different image pairs, incorporating the unlabelled data should improve the functional prior. We now examine whether this is indeed the case. Unfortunately, prior predictive checks are hard to apply to BNNs because of the high dimensionality of the input space. We will therefore introduce our own novel metric to assess the suitability of the prior predictive. The basis for our approach is to note that, intuitively, a suitable prior should reflect a belief that the higher the semantic similarity between pairs of inputs, the more likely these inputs are to have the same label. Therefore, rather than inspecting the prior predictive at single points in input space, we examine the *joint* prior predictive of *pairs* of inputs with known semantic relationships. Indeed, it is far easier to reason about the relationship between examples than to reason about distributions over high-dimensional functions. Note that this is of course only a reasonable assumption in cases where we believe to have sufficiently good knowledge of semantic similarity in our data domain. That is, we need to have a set of data augmentations for the contrastive tasks, for which we can be reasonably certain that the true labels in our downstream task will be invariant to them. Recent results in contrastive learning suggest that this is indeed the case for natural images paired with the augmentations used in SimCLR (Chen et al., 2020a), which is why we use these in our experiments. To compute our proposed metric, we consider different groups of input pairs. Each group is comprised of input pairs with known semantic similarity. For example, for image data, we could use images of the same class as a group with high semantic similarity, and image pairs from different classes as a group with lower semantic similarity. To investigate the properties of the prior, we can evaluate the probability that input pairs from different groups are assigned the same label under the prior predictive. We can qualitatively investigate the behaviour of this probability across and within different groups. For a prior to be more adapted to the task than an uninformative one, input pairs from groups with higher semantic similarity should be more likely to have the same label under the prior predictive. Conventional BNN: =0.27 ![7_image_0.png](7_image_0.png) Self-Supervised BNN: =0.78 Figure 3: **BNN Prior Predictives.** We investigate prior predictives by computing the probability ρ that particular image pairs have the same label under the prior, and examining the distribution of ρ across different sets of image pairs. We consider three sets of differing semantic similarity: (i) augmented images; (ii) images of the same class; and (iii) images of different classes. Left: Conventional BNN prior. Right: Self-supervised BNN learnt prior predictive. The self-supervised learnt prior reflects the semantic similarity of the different image pairs better than the BNN prior, which is reflected in the spread between the different distributions. Table 1: **Prior Evaluation Scores**. Mean and standard deviation across three seeds shown. Self-supervised priors are better than standard BNN priors. | Prior Predictive | Prior Evaluation Score α | |---------------------|----------------------------| | BNN - Gaussian | 0.261 ± 0.024 | | BNN - Laplace | 0.269 ± 0.007 | | Self-Supervised BNN | 0.680 ± 0.063 | Moreover, we can extend this methodology to quantitatively evaluate the prior. Suppose we have G groups of input pairs, Gg = {(x g i , xˆ g i ) |Gg| i=1 } with g = 1*, . . . , G*, and suppose G1 is the group with the highest semantic similarity, G2 is the group with the second highest semantic similarity, and so forth. We define ρ(x, xˆ) as the probability that inputs x, xˆ have the same label under the prior predictive, i.e., ρ(x, xˆ) = Eθ[p(y(x) = y(xˆ)|θ)] where y(x) is the label corresponding to input x. We then define *the prior evaluation score*, α, as: $$\alpha=\mathbb{E}[\mathbb{I}(\rho(x^{1},{\hat{x}}^{1})>\ldots>\rho(x^{G},{\hat{x}}^{G}))],$$ $$\left(5\right)$$ G))], (5) where we compute the expectation sampling (x 1, xˆ 1) ∼ G1 and so forth. This is the probability that the prior ranks randomly sampled input pairs correctly, in terms of semantically similar groups being assigned higher probabilities of their input pairs having the same label. We now use this methodology to compare conventional BNNs and self-supervised BNNs. Experiment Details. We investigate the different priors on CIFAR10. For the BNN, we follow Izmailov et al. (2021b) and use a ResNet-20-FRN with a N (0, 1/5) prior over the parameters. For the self-supervised BNN, we learn a base encoder of the same architecture with Du only and sample from the prior predictive using Eq. (2). θ t are the parameters of the linear readout layer. For the image pair groups, we use: (i) an image from the validation set (the "base image") and an augmented version of the same image; (ii) a base image and another image of the same class; and (iii) a base image and an image of a different class. As these image pair groups have decreasing semantic similarity, we want the first group to be the most likely to have the same label, and the last group to be the least likely. See Appendix B.3 for more details. Graphical Evaluation. First, we visualise the BNN and self-supervised BNN prior predictive (Fig. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-supervised prior reflects a belief that image pairs with higher semantic similarity are more likely to have the same label. In particular, the self-supervised prior is able to distinguish between image pairs of the same class and of different classes, *even without access to any ground-truth labels*. Quantitative Evaluation. We now quantify how well different prior predictives reflect data semantics. In Table 1, we see that conventional BNN priors reflect semantic similarity much less than self-supervised BNN priors, matching our qualitative evaluation. Note that this measure has of course been designed by us to capture the kind of property in the prior that our contrastive training is meant to induce, and should therefore just be seen as a confirmation that our proposed approach works as expected. There are, naturally, many other properties that one could desire in a prior, which are not captured by this metric. ## 5 Self-Supervised Bnns Excel In Low-Label Regimes In the previous section, we showed that self-supervised BNN prior predictives reflect semantic similarity of input pairs better than conventional BNNs (§4). One hopes that this translates to improved predictive performance, particularly when conditioning on small numbers of labels, which is where the prior has the largest effect (Gelman et al., 1995; Murphy, 2012). We now show that this is indeed the case. Self-supervised BNNs offer improved predictive performance over standard BNNs, with especially large gains when making predictions given small numbers of observed labels. ## 5.1 Semi-Supervised Learning Training Datasets. We evaluate the performance of different BNNs on the CIFAR10 and CIFAR100 datasets, which are standard benchmarks within the BNN community. We evaluate the performance of different baselines when conditioning on 50, 500, 5000, and 50000 labels from the training set. Algorithms. As baselines, we consider the following BNNs: MAP, SWAG (Maddox et al., 2019), a deep ensemble with 5 ensemble members (Lakshminarayanan et al., 2017), and last-layer Laplace (Daxberger et al., 2021). The conventional baselines use standard data augmentation and were chosen because they support batch normalisation (Ioffe & Szegedy, 2015). We consider two variants of self-supervised BNNs: Self-Supervised BNNs pretrain using Du only, while Self-Supervised BNNs* also use Dt. Both variants use a non-linear projection head when pretraining and the data augmentations suggested by Chen et al. (2020a). We use a post-hoc Laplace approximation for the task-specific parameters (the last-layer parameters). We further consider ensembling self-supervised BNNs. Evaluation. To evaluate the predictive performance of these BNNs, we report the negative log-likelihood (NLL). This is a proper scoring rule that simultaneously measures the calibration and accuracy of the different networks, and is thus an appropriate measure for *overall* predictive performance (Gneiting & Raftery, 2007). We also report the accuracy and expected calibration error (ECE) in Appendix Table D.1. We further assess out-of-distribution (OOD) generalisation from CIFAR10 to CIFAR10-C (Hendrycks & Dietterich, 2019). Moreover, we evaluate whether these BNNs can detect out-of-distribution inputs from SVHN (Netzer et al., 2011) when trained on CIFAR10. We report the area under the receiver operator curve (AUROC) metric using the predictive entropy. We want the OOD inputs from SVHN to have higher predictive entropies than the in-distribution inputs from CIFAR10. Results. In Table 2, we report the NLL for each BNN when making predictions with different numbers of labelled examples. We see that self-supervised BNNs offer improved predictive performance over the baselines. In fact, on the full CIFAR-10 test set, a single self-supervised BNN* outperforms a deep ensemble, whilst being 5x cheaper when making predictions. We also show that self-supervised BNNs can also be ensembled to further improve their predictive performance. Incorporating the labelled training data during pretraining (SS BNN*) usually improves predictive performance. Self-supervised BNNs also offer strong performance out-of-distribution and consistently are able to perform out-of-distribution detection. Indeed, they are the only method with an AUROC exceeding 90% at all dataset sizes. In a further analysis in Appendix Table D.1, we also see that the improved NLL of self-supervised BNNs is in large part due to improvements in predictive accuracy, and that incorporating labelled data during pretraining also boosts accuracy. Overall, these results accord with our earlier findings about the improved prior predictives of self-supervised BNNs compared to standard BNNs, and highlight the substantial benefits of incorporating unlabelled data into the BNN pipeline. Table 2: **BNN Predictive Performance**. We measure the performance of different BNNs for different numbers of labels. We consider in-distribution prediction, out-of-distribution (OOD) generalisation, and OOD detection. Shown is the mean and standard error across 3-5 seeds. The out-of-distribution generalisation results average over all corruptions with intensity level five from CIFAR10-C (Hendrycks & Dietterich, 2019). Recall that the SS BNN is performing the contrastive learning separately from and the SS BNN* jointly with the downstream task. We see that self-supervised BNNs offer improved predictive performance over conventional BNNs, especially in the low-data regime. | | | ↓ Negative Log Likelihood | | | | | | | | |----------------------|--------------|-----------------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------| | Dataset | # labelled | MAP | LL Laplace | SWAG | SS BNN | SS BNN* | Deep | SS BNN | SS BNN* | | points | | | | Ensemble | Ensemble | Ensemble | | | | | CIFAR10 | 50 | 7.594 ±1.092 | 2.259 ±0.012 | 2.332 ±0.005 | 1.047 ±0.022 | 0.996 ±0.013 | 3.689 ±0.174 | 0.980 ±0.006 | 0.953 ±0.010 | | 500 | 2.504 ±0.182 | 1.895 ±0.020 | 2.072 ±0.091 | 0.454 ±0.004 | 0.441 ±0.004 | 1.805 ±0.016 | 0.399 ±0.001 | 0.384 ±0.002 | | | 5000 | 1.570 ±0.021 | 1.327 ±0.042 | 1.028 ±0.023 | 0.361 ±0.003 | 0.369 ±0.013 | 0.846 ±0.012 | 0.309 ±0.001 | 0.292 ±0.002 | | | 50000 | 0.613 ±0.044 | 0.424 ±0.013 | 0.312 ±0.008 | 0.325 ±0.005 | 0.256 ±0.002 | 0.272 ±0.002 | 0.270 ±0.001 | 0.204 ±0.001 | | | CIFAR100 | 50 | 11.86 ±0.34 | 4.585 ±0.006 | 6.840 ±0.539 | 4.505 ±0.002 | 4.496 ±0.002 | 10.38 ±0.110 | 4.450 ±4e-4 | 4.492 ±4e-4 | | 500 | 5.536 ±0.060 | 4.359 ±0.019 | 5.282 ±0.285 | 2.640 ±0.006 | 2.614 ±0.010 | 4.867 ±0.007 | 2.533 ±0.002 | 2.510 ±0.002 | | | 5000 | 4.319 ±0.163 | 3.362 ±0.032 | 3.518 ±0.169 | 1.689 ±0.003 | 1.910 ±0.006 | 3.052 ±0.009 | 1.524 ±0.001 | 1.644 ±0.001 | | | 50000 | 1.834 ±0.064 | 1.469 ±0.30 | 1.250 ±0.010 | 1.435 ±0.004 | 1.139 ±0.004 | 1.088 ±0.012 | 1.212 ±0.002 | 0.929 ±0.001 | | | CIFAR10 to CIFAR10-C | 50 | 7.140 ±0.859 | 2.275 ±0.006 | 2.353 ±0.007 | 1.723 ±0.004 | 1.697 ±0.010 | 3.970 ±0.188 | 1.638 ±0.006 | 1.603 ±0.004 | | (OOD Generalisation) | 500 | 2.838 ±0.175 | 2.045 ±0.011 | 2.355 ±0.083 | 1.272 ±0.014 | 1.260 ±0.010 | 2.101 ±0.021 | 1.164 ±0.004 | 1.113 ±0.005 | | 5000 | 2.423 ±0.267 | 1.644 ±0.046 | 1.705 ±0.084 | 1.235 ±0.007 | 1.237 ±0.044 | 1.382 ±0.023 | 1.103 ±0.006 | 1.096 ±0.001 | | | 50000 | 1.944 ±0.223 | 1.244 ±0.067 | 1.215 ±0.051 | 1.287 ±0.013 | 1.225 ±0.014 | 0.984 ±0.026 | 1.126 ±0.007 | 1.048 ±0.004 | | | | | ↑ AUROC (%) | | | | | | | | | CIFAR10 vs SVHN | 50 | 54.4 ±4.53 | 48.4 ±3.04 | 52.3 ±1.37 | 87.1 ±1.26 | 92.4 ±1.01 | 53.6 ±2.42 | 91.3 ±0.25 | 90.9 ±0.16 | | (OOD Detection) | 500 | 61.2 ±0.94 | 61.1 ±1.19 | 51.1 ±1.89 | 94.9 ±0.12 | 94.2 ±0.45 | 62.1 ±2.35 | 96.2 ±0.05 | 95.9 ±0.07 | | 5000 | 83.3 ±2.87 | 84.6 ±0.63 | 59.6 ±0.99 | 96.1 ±0.07 | 94.6 ±1.00 | 92.9 ±0.19 | 97.0 ±0.01 | 96.9 ±0.12 | | | 50000 | 93.8 ±1.13 | 92.6 ±2.01 | 76.4 ±0.55 | 95.6 ±0.15 | 95.5 ±0.16 | 96.8 ±0.38 | 97.0 ±0.05 | 97.7 ±0.06 | | Moreover, we perform an ablation of our variational distribution q(Wc j ) in Appendix Table C.2, where we see that our data-dependent mean is indeed needed for good performance. Note that our goal in this experiment is mainly to compare our self-supervised BNNs against other BNN methods on equal grounds, not necessarily to reach state-of-the-art performance on the used benchmark datasets. Indeed, reaching higher performances usually requires computationally expensive hyperparameter tuning (which we have not systematically performed) as well as using many engineering tricks, such as data augmentation and batch normalization. These tricks generally affect the likelihood in complicated ways and are thus often omitted from Bayesian neural networks (see, e.g., the discussions in Nabarro et al. (2021) and Krishnan et al. (2022)). This is why our results are empirically on par with many recent papers in Bayesian deep learning (e.g., Immer et al., 2021b; Ober & Aitchison, 2021; Izmailov et al., 2021b). However, it should be noted that some recent attempts have been made to reconcile BNNs with common practical deep learning tricks to reach high performance (c.f., Rudner et al., 2023). Adding these orthogonal ideas to our proposed framework would be a promising avenue for improving its performance to reach state-of-the-art levels. ## 5.2 Active Learning We now highlight the benefit of incorporating unlabelled data in an active learning problem. We consider low-budget active learning, which simulates a scenario where labelling examples is extremely expensive. We use the CIFAR10 training set as the unlabelled pool set from which to label points. We assume an initial train set of 50 labelled points, randomly selected, and a validation set of the same size. We acquire 10 labels per acquisition round up to 500 labels and evaluate using the full test set. We compare self-supervised BNNs to a deep ensemble, the strongest BNN baseline. We use BALD (Houlsby et al., 2011) as the acquisition function for the deep ensemble and self-supervised BNN, which provide epistemic uncertainty estimates. We further compare to SimCLR using predictive entropy for acquisition because SimCLR does not model epistemic uncertainty. In Fig. 4, we see that the methods that leverage unlabelled data perform the best. In particular, the self-supervised BNN with BALD acquisition achieves the highest accuracy across most numbers of labels, and substantially outperforms the deep ensemble. This confirms the benefit of incorporating unlabelled data in ![10_image_0.png](10_image_0.png) Figure 4: **Low-Budget Active Learning** on CIFAR10. We compare (i) a self-supervised BNN, (ii) SimCLR, and (iii) a deep ensemble. For the self-supervised BNN and the ensemble, we acquire points with BALD. We use predictive entropy for SimCLR, which does not provide epistemic uncertainty estimates. Mean and std. shown (3 seeds). The methods that incorporate unlabelled data perform best by far, with our method slightly outperforming SimCLR. active learning settings, which by definition are semi-supervised and include unlabelled data. Moreover, our approach slightly outperforms SimCLR, suggesting that our Bayesian treatment of contrastive learning yields better uncertainties than conventional non-Bayesian contrastive learning. This is also confirmed in Appendix Fig. C.1, where we see that our approaches yield consistently lower calibration errors than SimCLR. ## 6 Related Work Improving BNN Priors. We demonstrated that BNNs have poor prior predictive distributions (§4), a concern shared by others (e.g., Wenzel et al., 2020; Noci et al., 2021; Izmailov et al., 2021a). The most common approaches to remedy this are through designing better priors, typically over network parameters (Louizos et al., 2017; Nalisnick, 2018; Atanov et al., 2019; Fortuin et al., 2021b) or predictive functions directly (Sun et al., 2019; Tran et al., 2020; Matsubara et al., 2021; D'Angelo & Fortuin, 2021, see Fortuin (2022) for an overview). In contrast, our approach incorporates vast stores of unlabelled data into the prior distribution through variational model learning. Similarly, other work also *learns* priors, but typically using labelled data e.g., by using meta-learning (Garnelo et al., 2018; Rothfuss et al., 2021) or type-II maximum likelihood (Wilson et al., 2015; Immer et al., 2021a; Dhahri et al., 2024), or by using transfer learning in an ad hoc way (Shwartz-Ziv et al., 2022). Notably, function-space variational inference methods (Sun et al., 2019; Rudner et al., 2023) often also use unlabelled data, which has been shown to potentially improve the out-of-distribution performance of these models (Lin et al., 2023). However, in this case, the unlabelled data is only used for evaluating the KL divergence in function space, a practice which has theoretically been shown to be insufficient Burt et al. (2020). Conversely, our work uses semantic information from the unlabelled data to actually *inform* the function-space prior. Another related line of work is concerned with learning invariances from data in Bayesian models using the marginal likelihood (van der Wilk et al., 2018; Immer et al., 2022). This case is essentially the opposite of our setting, as there, the labels are known but the augmentations are learned, while in our case, the augmentations constitute our prior knowledge, but we do not know the data labels. A Perspective on Contrastive Learning. We offer a Bayesian interpretation and understanding of contrastive learning (§3). Under our framework, pretraining is understood as model learning—a technique for finding probabilistic models with prior predictive distributions that capture our semantic beliefs. There has been much other work on understanding contrastive learning (e.g., Wang & Isola, 2020; Wang & Liu, 2021). Some appeal to the InfoMax principle (Becker & Hinton, 1992). Zimmermann et al. (2022) argue that contrastive learning inverts the data-generating process, while Aitchison (2021) cast InfoNCE as the objective of a self-supervised variational auto-encoder. Ganev & Aitchison (2021) formulate several semi-supervised learning objectives as lower bounds of log-likelihoods in a probabilistic model of data curation. Semi-Supervised Deep Generative Models. Deep generative models (DGMs) are a fundamentally different approach for label-efficient learning (Kingma & Welling, 2013; Kingma et al., 2014; Joy et al., 2020). A semi-supervised DGM models the full distribution p(*x, y*) with generative modelling, and so incorporates unlabelled data by learning to generate it. Unlike BNNs, we can condition the parameters of a DGM on unlabelled data. In contrast, our approach does not model the data distribution—the unlabelled data is used to construct pseudo-labelled tasks that encode our prior beliefs. Self-supervised BNNs are discriminative models, which tend to be more scalable and perform better for discriminative tasks compared to full generative modelling (Ng & Jordan, 2001; Bouchard & Triggs, 2004). Finally, Sansone & Manhaeve (2022) try to unify self-supervised learning and generative modelling under one framework. ## 7 Conclusion We introduced *Self-Supervised Bayesian Neural Networks*, which allow semantic information from unlabelled data to be incorporated into BNN priors. Using a novel evaluation scheme, we showed that self-supervised BNNs learn functional priors that better reflect the semantics of the data than conventional BNNs. In turn, they offer improved predictive performance over conventional BNNs, especially in low-data regimes. Going forward, we believe that effectively leveraging unlabelled data will be critical to the success of BNNs in many, if not most, potential applications. We hope our work encourages further development in this crucial area. ## Acknowledgments MS was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1), and thanks Rob Burbea for inspiration and support. VF was supported by a Postdoc Mobility Fellowship from the Swiss National Science Foundation, a Research Fellowship from St John's College Cambridge, and a Branco Weiss Fellowship. ## References Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. *Information Fusion*, 76:243–297, 2021. Laurence Aitchison. InfoNCE is a variational autoencoder, July 2021. URL **http://arxiv.org/abs/** 2107.02495. arXiv:2107.02495 [cs, stat]. Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, and Max Welling. The Deep Weight Prior. *arXiv:1810.06943 [cs, stat]*, February 2019. URL **http://arxiv.org/abs/1810.06943**. arXiv: 1810.06943. Suzanna Becker and Geoffrey E. Hinton. Self-organizing neural network that discovers surfaces in randomdot stereograms. *Nature*, 355(6356):161–163, January 1992. ISSN 1476-4687. doi: 10.1038/355161a0. URL **https://www.nature.com/articles/355161a0**. Number: 6356 Publisher: Nature Publishing Group. Guillaume Bouchard and Bill Triggs. The tradeoff between generative and discriminative classifiers. In *16th* IASC International Symposium on Computational Statistics (COMPSTAT'04), pp. 721–728, 2004. David R Burt, Sebastian W Ober, Adrià Garriga-Alonso, and Mark van der Wilk. Understanding variational inference in function-space. In *Third Symposium on Advances in Approximate Bayesian Inference*, 2020. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A Simple Framework for Contrastive Learning of Visual Representations. *arXiv:2002.05709 [cs, stat]*, June 2020a. URL **http://arxiv.org/** abs/2002.05709. arXiv: 2002.05709. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big Self-Supervised Models are Strong Semi-Supervised Learners. *arXiv:2006.10029 [cs, stat]*, October 2020b. URL **http:** //arxiv.org/abs/2006.10029. arXiv: 2006.10029. Xinlei Chen and Kaiming He. Exploring Simple Siamese Representation Learning, November 2020. URL http://arxiv.org/abs/2011.10566. arXiv:2011.10566 [cs]. Francesco D'Angelo and Vincent Fortuin. Repulsive deep ensembles are bayesian. *Advances in Neural* Information Processing Systems, 34:3451–3465, 2021. Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. Advances in Neural Information Processing Systems, 34:20089–20103, 2021. Rayen Dhahri, Alexander Immer, Betrand Charpentier, Stephan Günnemann, and Vincent Fortuin. Shaving weights with occam's razor: Bayesian sparsification for neural networks using the marginal likelihood. arXiv preprint arXiv:2402.15978, 2024. Daniel Flam-Shepherd, James Requeima, and David Duvenaud. Mapping gaussian process priors to bayesian neural networks. In *NIPS Bayesian deep learning workshop*, volume 3, 2017. Vincent Fortuin. Priors in bayesian deep learning: A review. *International Statistical Review*, 2022. Vincent Fortuin, Adrià Garriga-Alonso, Mark van der Wilk, and Laurence Aitchison. BNNpriors: A library for Bayesian neural network inference with different prior distributions. *Software Impacts*, 9:100079, 2021a. Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard Turner, Mark van der Wilk, and Laurence Aitchison. Bayesian Neural Network Priors Revisited. *arXiv:2102.06571 [cs, stat]*, February 2021b. URL **http://arxiv.org/abs/2102.06571**. arXiv: 2102.06571. Adam Foster, Rattana Pukdee, and Tom Rainforth. Improving transformation invariance in contrastive representation learning. In *International Conference on Learning Representations*, 2020. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pp. 1183–1192. PMLR, 2017. Stoil Ganev and Laurence Aitchison. Semi-supervised learning objectives as log-likelihoods in a generative model of data curation, October 2021. URL **http://arxiv.org/abs/2008.05913**. arXiv:2008.05913 [cs, stat]. Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural Processes, July 2018. URL **http://arxiv.org/abs/1807.01622**. arXiv:1807.01622 [cs, stat]. Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. *Bayesian data analysis*. Chapman and Hall/CRC, 1995. Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378, 2007. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to selfsupervised Learning, September 2020. URL **http://arxiv.org/abs/2006.07733**. arXiv:2006.07733 [cs, stat]. James Harrison, John Willes, and Jasper Snoek. Variational bayesian last layers. In *The Twelfth International* Conference on Learning Representations, 2023. Dan Hendrycks and Thomas Dietterich. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In *International Conference on Learning Representations*, 2018. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *Proceedings of the International Conference on Learning Representations*, 2019. Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. *arXiv preprint arXiv:1112.5745*, 2011. Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. Data-Efficient Image Recognition with Contrastive Predictive Coding, July 2020. URL http://arxiv.org/abs/1905.09272. arXiv:1905.09272 [cs]. Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, and Khan Mohammad Emtiyaz. Scalable marginal likelihood estimation for model selection in deep learning. In *International Conference on Machine* Learning, pp. 4563–4573. PMLR, 2021a. Alexander Immer, Maciej Korzepa, and Matthias Bauer. Improving predictions of bayesian neural nets via local linearization. In *International conference on artificial intelligence and statistics*, pp. 703–711. PMLR, 2021b. Alexander Immer, Tycho van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, and Mark van der Wilk. Invariance learning in deep neural networks with differentiable laplace approximations. Advances in Neural Information Processing Systems, 35:12449–12463, 2022. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. PMLR, 2015. Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, and Andrew Gordon Wilson. Dangers of Bayesian Model Averaging under Covariate Shift. *arXiv:2106.11905 [cs, stat]*, June 2021a. URL **http://arxiv.org/** abs/2106.11905. arXiv: 2106.11905. Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, and Andrew Gordon Wilson. What Are Bayesian Neural Network Posteriors Really Like? *arXiv:2104.14421 [cs, stat]*, April 2021b. URL **http://arxiv.** org/abs/2104.14421. arXiv: 2104.14421. Tom Joy, Sebastian M Schmon, Philip HS Torr, N Siddharth, and Tom Rainforth. Capturing label characteristics in vaes. *arXiv preprint arXiv:2006.10102*, 2020. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, and Max Welling. Semi-Supervised Learning with Deep Generative Models, October 2014. URL **http://arxiv.org/abs/1406.5298**. arXiv:1406.5298 [cs, stat]. Ranganath Krishnan, Pi Esposito, and Mahesh Subedar. Bayesian-Torch: Bayesian neural network layers for uncertainty estimation, January 2022. URL **https://github.com/IntelLabs/bayesian-torch**. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25, 2012. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017. Miguel Lázaro-Gredilla and Aníbal R Figueiras-Vidal. Marginalized neural network mixtures for large-scale regression. *IEEE transactions on neural networks*, 21(8):1345–1351, 2010. Jihao Andreas Lin, Joe Watson, Pascal Klink, and Jan Peters. Function-space regularization for deep bayesian classification. In *Fifth Symposium on Advances in Approximate Bayesian Inference*, 2023. Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. Advances in neural information processing systems, 30, 2017. David J C Mackay. Bayesian Methods for Adaptive Models. Technical report, 1992. Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. *Advances in Neural Information Processing Systems*, 32, 2019. Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, and Vincent Fortuin. On the challenges and opportunities in generative ai. *arXiv preprint arXiv:2403.00025*, 2024. Takuo Matsubara, Chris J Oates, and François-Xavier Briol. The ridgelet prior: A covariance function approach to prior specification for bayesian neural networks. *The Journal of Machine Learning Research*, 22(1):7045–7101, 2021. Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, and Hyunjik Kim. Learning instance-specific augmentations by capturing local invariances. *International Conference on* Machine Learning, 2023. Kevin P Murphy. *Machine learning: a probabilistic perspective*. MIT press, 2012. Seth Nabarro, Stoil Ganev, Adrià Garriga-Alonso, Vincent Fortuin, Mark van der Wilk, and Laurence Aitchison. Data augmentation in Bayesian neural networks and the cold posterior effect. arXiv:2106.05586 [cs, stat], June 2021. URL **http://arxiv.org/abs/2106.05586**. arXiv: 2106.05586. Eric Nalisnick, Jonathan Gordon, and José Miguel Hernández-Lobato. Predictive complexity priors. In International Conference on Artificial Intelligence and Statistics, pp. 694–702. PMLR, 2021. Eric Thomas Nalisnick. *On priors for Bayesian neural networks*. University of California, Irvine, 2018. Radford M Neal. BAYESIAN LEARNING FOR NEURAL NETWORKS. Technical report, 1995. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. Andrew Ng and Michael Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. *Advances in neural information processing systems*, 14, 2001. Lorenzo Noci, Kevin Roth, Gregor Bachmann, Sebastian Nowozin, and Thomas Hofmann. Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect. *arXiv:2106.06596 [cs]*, June 2021. URL **http://arxiv.org/abs/2106.06596**. arXiv: 2106.06596. Sebastian W Ober and Laurence Aitchison. Global inducing point variational posteriors for bayesian neural networks and deep gaussian processes. In *International Conference on Machine Learning*, pp. 8248–8259. PMLR, 2021. Sebastian W Ober and Carl E Rasmussen. Benchmarking the neural linear model for regression. In *Second* Symposium on Advances in Approximate Bayesian Inference, 2019. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding, January 2019. URL **http://arxiv.org/abs/1807.03748**. arXiv:1807.03748 [cs, stat]. Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A Osborne, Tim GJ Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, and Ruqi Zhang. Position paper: Bayesian deep learning in the age of large-scale ai. *arXiv preprint arXiv:2402.00809*, 2024. Robert Price. A useful theorem for nonlinear devices having gaussian inputs. *IRE Transactions on Information* Theory, 4(2):69–72, 1958. Vishnu Raj, Tianyu Cui, Markus Heinonen, and Pekka Marttinen. Incorporating functional summary information in bayesian neural networks using a dirichlet process likelihood approach. In *International* Conference on Artificial Intelligence and Statistics, pp. 6741–6763. PMLR, 2023. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International conference on machine learning*, pp. 1278–1286. PMLR, 2014. Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, and Andreas Krause. Pacoh: Bayes-optimal meta-learning with pac-guarantees. In *International Conference on Machine Learning*, pp. 9116–9126. PMLR, 2021. Tim GJ Rudner, Sanyam Kapoor, Shikai Qiu, and Andrew Gordon Wilson. Function-space regularization in neural networks: A probabilistic perspective. In *International Conference on Machine Learning*, pp. 29275–29290. PMLR, 2023. Emanuele Sansone and Robin Manhaeve. Gedi: Generative and discriminative training for self-supervised learning. *arXiv preprint arXiv:2212.13425*, 2022. Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, and Tom Rainforth. Do bayesian neural networks need to be fully stochastic? In *International Conference on Artificial Intelligence and Statistics*, pp. 7694–7722. PMLR, 2023. Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. *Journal* of big data, 6(1):1–48, 2019. Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, and Andrew Gordon Wilson. Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors, May 2022. URL **http://arxiv.org/abs/2205.10279**. arXiv:2205.10279 [cs]. Shengyang Sun, Guodong Zhang, Jiaxin Shi, and Roger Grosse. Functional variational bayesian neural networks. *arXiv preprint arXiv:1903.05779*, 2019. Ba-Hien Tran, Simone Rossi, Dimitrios Milios, and Maurizio Filippone. All you need is a good functional prior for bayesian deep learning. *arXiv preprint arXiv:2011.12829*, 2020. Mark van der Wilk, Matthias Bauer, ST John, and James Hensman. Learning invariances using the marginal likelihood. *Advances in Neural Information Processing Systems*, 31, 2018. Feng Wang and Huaping Liu. Understanding the Behaviour of Contrastive Loss. In *2021 IEEE/CVF* Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2495–2504, Nashville, TN, USA, June 2021. IEEE. ISBN 978-1-66544-509-2. doi: 10.1109/CVPR46437.2021.00252. URL **https://** ieeexplore.ieee.org/document/9577669/. Tongzhou Wang and Phillip Isola. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 9929–9939. PMLR, November 2020. URL **https://proceedings.mlr.press/v119/wang20k.html**. ISSN: 2640-3498. Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient pseudo-independent weight perturbations on mini-batches. *arXiv preprint arXiv:1803.04386*, 2018. Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Swiatkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How Good is the Bayes Posterior in Deep Neural Networks Really? *arXiv*, February 2020. URL **http://arxiv.org/abs/2002.02405**. Publisher: arXiv. Christopher KI Williams and Carl Edward Rasmussen. *Gaussian processes for machine learning*, volume 2. MIT press Cambridge, MA, 2006. Andrew G Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. Advances in neural information processing systems, 33:4697–4708, 2020. Andrew G Wilson, Zhiting Hu, Russ R Salakhutdinov, and Eric P Xing. Stochastic variational deep kernel learning. *Advances in neural information processing systems*, 29, 2016. Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep Kernel Learning, November 2015. URL **http://arxiv.org/abs/1511.02222**. arXiv:1511.02222 [cs, stat]. Larry Yaeger, Richard Lyon, and Brandyn Webb. Effective training of a neural network character classifier for word recognition. *Advances in neural information processing systems*, 9, 1996. Yang You, Igor Gitman, and Boris Ginsburg. Scaling sgd batch size to 32k for imagenet training. *arXiv* preprint arXiv:1708.03888, 6(12):6, 2017. Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. Contrastive Learning Inverts the Data Generating Process, April 2022. URL **http://arxiv.org/abs/2102.08850**. arXiv:2102.08850 [cs]. ## A Self-Supervised Bnns: Further Considerations We introduced *Self-Supervised BNNs* (§3), which benefit from unlabelled data for improved predictive performance within the probabilistic modelling framework. To summarise, our conceptual framework uses data augmentation to create a set of contrastive datasets {Dc j } L j=1. In our probabilistic model, conditioning on this data is equivalent to incorporating unlabelled data into the task prior predictive. We now discuss further considerations and provide further details. Theoretical Considerations. In §3.1, we treated the number of contrastive task datasets, L, as a fixed hyper-parameter. However, one could generate an potentially infinite number of datasets, in which case, the posterior p(θ s|{Dc j } L j=1, Dt) ∝ p(θ s) · p(Dt|θ s) ·QL j=1 p(Dc j |θ s) will collapse to a delta function, and will be dominated by the contrastive tasks. This justified learning a point estimate for θ s, but if one wanted to avoid this behaviour, one could re-define the posterior: $$\tilde{p}(\theta^{s}|\{{\mathcal{D}}_{j}^{c}\}_{j=1}^{L},{\mathcal{D}}^{t})\propto p(\theta^{s})\cdot p({\mathcal{D}}^{t}|\theta^{s})\cdot\prod_{j=1}^{L}p({\mathcal{D}}_{j}^{c}|\theta^{s})^{\gamma/L}.$$ The log-posterior would equal, up to a constant: $$\log\hat{p}(\theta^{s}|\{{\mathcal D}_{j}^{c}\}_{j=1}^{L},{\mathcal D}^{t})=\log p(\theta^{s})+\log p({\mathcal D}^{t}|\theta^{s})+\frac{\gamma}{L}\sum_{j=1}^{L}\log p({\mathcal D}_{j}^{c}|\theta^{s}).$$ $$(\mathrm{A.1})$$ $$(\mathrm{A.2})$$ $$(\mathrm{A.3})$$ $$(\mathrm{A.4})$$ Here, the total evidence contributed by the contrastive datasets is independent of L, and instead controlled by the hyper-parameter γ. This is equivalent to posterior tempering. The final term of the above equation could also be re-defined as an *average* log-likelihood when sampling different Dc, i.e., we could use: $$\log\tilde{p}(\theta^{s}|{\mathcal{D}}^{u},{\mathcal{D}}^{t})=\log p(\theta^{s})+\log p({\mathcal{D}}^{t}|\theta^{s})+\gamma{\mathbb{E}}_{{\mathcal{D}}^{c}}[p({\mathcal{D}}^{c}|\theta^{s})].$$ s)]. (A.3) In this case, we look for a distribution over θ s where Dc has a high likelihood on average. Our practical algorithm samples a different Dc per gradient step, and instead weights the Dtterm with α, which is similar to the above approach if we set α = 1/γ and modify the prior term as needed. Re-defining the framework in this way allows it to support potentially infinite numbers of generated contrastive datasets. We further note that this framework could naturally be extended to multi-task scenarios. Practical Considerations. In practice, we share parameters θ s across all tasks (i.e., across the generated contrastive tasks and the actual downstream task), and learn a θ tseparately for each task. We focus on image classification problems and let θ s be the parameters of a *base encoder*, which produces a representation z. θ t are the parameters of a linear readout layer that makes predictions from z. We discussed learning a point-estimate for θ s by optimising an ELBO derived from the unlabelled data only: $$\tilde{\cal L}_{\tilde{j}}^{c}(\theta^{s})=\mathbb{E}_{q(\theta_{j}^{c})}[\log p(\mathcal{D}_{\tilde{j}}^{c}|\theta^{s},\theta_{\tilde{j}}^{c})]-D_{\mathrm{KL}}(q(\theta_{j}^{c})||p(\theta_{j}^{c}))\leq\log p(\mathcal{D}_{\tilde{j}}^{c}|\theta^{s}).$$ s). (A.4) We (optionally) further include an ELBO derived using task-specific data: $$\tilde{\mathcal{L}}^{t}(\theta^{s})=\mathbb{E}_{q(\theta^{t})}[\log p(\mathcal{D}^{t}|\theta^{t},\theta^{s})]-D_{\mathrm{KL}}[q(\theta^{t})||p(\theta^{t})]\leq\log p(\mathcal{D}^{t}|\theta^{s}).$$ s). (A.5) Our final objective is: $${\mathcal{L}}(\theta^{s})=\log p(\theta^{s})+\alpha{\tilde{\mathcal{L}}}^{t}(\theta^{s})+\mathbb{E}_{{\mathcal{D}}^{c}}[{\tilde{\mathcal{L}}}_{j}^{c}],$$ ], (A.6) where α = 0 would learn the base-encoder parameters only using the unlabelled data. α controls the weighting between the generated contrastive task data and the downstream data. We see the above objective is closely related to a (lower bound of) Eq. (A.3). We further modify this objective function, following best practice, either in the contrastive learning or Bayesian deep learning communities, and improve performance by: $$(\mathrm{A.5})$$ $$(\mathrm{A.6})$$ 1. We add a non-linear projection head to the base encoder architecture *only for the contrastive task* datasets. As such, we use z = gφ(fθ s (x))/||gφ(fθ s (x))|| for Dc. gφ(·) is the projection head, and the representation is normalised. For the downstream tasks, we use z = fθ s (x), i.e., we "throw-away" the projection head. This is best practice within the contrastive learning community (Chen et al., 2020a;b). 2. We *temper* the KL divergence term, using the mean-per-parameter KL divergence (denoted as D¯KL(*·||·*)). Tempering is necessary for several Bayesian deep learning algorithms to perform well (Wenzel et al., 2020; Krishnan et al., 2022). 3. We generate a new contrastive task dataset per gradient step, update on that dataset, and then discard it. This follows standard contrastive learning algorithms (Chen et al., 2020a). 4. We rescale the likelihood terms in the ELBOs L˜t(θ s) and L˜c j (θ s) to be average per-datapoint log-likelihoods, e.g., we use 1 |Dc j | log p(Dc j |θ t j , θs). 5. Instead of having an explicit prior distribution over θ s, we use standard weight-decay for training, i.e., we specify a penalty on the norm of the weights of the encoder *per gradient step*. Together, these changes yield the objective function used in Algorithm 1. Finally, we note that different practical algorithms ensue depending on the choice of θ t, the choice of θ s, and the techniques used to perform approximate inference. We employ variational inference and learn a point estimate for θ s, but there are other choices possible. ## B Experiment Details We now provide further experiment details and additional results. The vast majority of experiments were run on an internal compute cluster using Nvidia Tesla V100 or A100 GPUs. The maximum runtime for an experiment was less than 16 hours. ## B.1 Semi-Supervised Learning (§5) B.1.1 Datasets We consider the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). The entire training set is the unsupervised set, and we suppose that we have access to different numbers of labels. For the evaluation protocols, we reserve a validation set of 1000 data points from the test set and evaluate using the remaining 9000 labels. To assess out-of-distribution generalisation, we further evaluation on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2018). We compute the average performance across all corruptions with intensity level five. ## B.1.2 Self-Supervised Bnns Base Architecture. We use a ResNet-18 architecture, modified for the size of CIFAR10 images, following Chen et al. (2020a). The representations produced by this architecture have dimensionality 512. Further, for the non-linear projection head, we use a 2 layer multi-layer perceptron (MLP) with output dimensionality 128. Contrastive Augmentations. We follow Chen et al. (2020a) and compose a random resized crop, a random horizontal flip, random colour jitter, and random grayscale for the augmentation. These augmentations make up the contrastive augmentation set A. We finally normalise the images to have mean 0 and standard deviation 1 per channel, as is standard. Hyperparameters. We use a N (0, 1 τ 2 p ) prior over the linear parameters θ t, and tune τp for each dataset. As such, τ can be understood as the prior temperature. We use τp = 0.65 for CIFAR10 and τp = 0.6 for CIFAR100. We use weight decay 1e − 6 for the base encoder and projection head parameters. Variational Distribution. We parameterize the temperature and noise scale using the log of their values. That is, we have σ = exp ˜σ. Optimisation Details. We use the LARS optimiser (You et al., 2017), with batch size 1000 and momentum 0.9. We train for 1000 epochs, using a linear warmup cosine annealing learning rate schedule. The warmup starting learning rate for the base encoder parameters is 1e − 3 with a maximum learning rate of 0.6. For the variational parameters, the maximum learning rate is 1e − 3, which we found to be important for the stability of the algorithm. Laplace Evaluation Protocol. We find a point estimate for θ tfound using the standard linear evaluation protocol (i.e., SGD training). We then apply a post-hoc Laplace approximation using the generalised GaussNewton approximation to the Hessian using the **laplace** library (Daxberger et al., 2021). For CIFAR10, we use a full covariance approximation and for CIFAR100 we use a Kroenker-factorised approximation for the Hessian of the last layer weights and biases. We tune the prior precision by maximising the likelihood of a validation set. For predictions, we use the (extended) probit approximation. These choices follow recommendations from Daxberger et al. (2021). ## B.1.3 Self-Supervised Bnns* Self-Supervised BNNs* additionally leverage labelled data when training the base encoder by including an additional ELBO L˜tthat depends on Dt. We use mean-field variational inference over θ t with a Gaussian approximate posterior and Flipout (Wen et al., 2018). We use the implementation from Krishnan et al. (2022), and temper by setting β = 1/|θ t|, meaning we use the average per-parameter KL divergence. α is a hyperparameter that controls the relative weighting between the generated contrastive task datasets and the observed label data, and is tuned. For CIFAR10, We use α = 5 · 10−5 when we have fewer than 100 labels, and α = 5 · 10−3 otherwise. For CIFAR100, We use α = 5 · 10−5 when we have fewer than 1000 labels, and α = 5 · 10−3 otherwise. For p(θ t), we use a N (0, 1) prior. For downstream evaluation, we use the Laplace evaluation protocol. All other details follow Self-Supervised BNNs. ## B.1.4 Bnn Baselines All baselines use the same ResNet-18 architecture, which was modified for the image size used in the CIFAR image datasets. The baselines we considered were chosen because they are all compatible with batch normalisation, which is included in the base architecture. We provide further details about the baselines below. MAP. For the maximum-a-posterior network, we use the Adam optimiser with learning rate 10−3, default weight decay, and batch size 1000. We train for a minimum of 25 epochs and a maximum of 300 epochs, terminating training early if the validation loss increases for 3 epochs in a row. Last-Layer Laplace. For the Last-Layer Laplace baseline, we perform a post-hoc Laplace approximation to a MAP network trained using the protocol above. We use the same settings as for the self-supervised BNN's Laplace evaluation. Deep Ensemble. For the deep ensemble baseline, we train 5 MAP networks starting from different initialisations using the above protocol, and aggregate their predictions. SWAG. For the SWAG baseline, we first a MAP network using the above protocol. We then run SGD from this solution for 10 epochs, taking 4 snapshots per epoch, and using K = 20 as the rank of the covariance matrix. We choose the SWAG learning rate per run using the validation set, and consider 10−2, 10−3, and 10−4. ## B.2 Active Learning (§5) We simulate a low-budget active learning setting. For each method, we use their default implementation details as outlined in this Appendix. With regards to the active learning setup, we assume that we have access to a small validation set of 50 labelled examples and are provided 50 labelled training examples. We acquire 10 examples per acquisition round up to a maximum of 500 labelled examples, which corresponds to 1% of the labels in the training set. We evaluate using the full test set. The deep ensemble and self-supervised BNNs provide epistemic uncertainty estimates, so we perform active learning by selecting the points with the highest BALD metric (Houlsby et al., 2011). For SimCLR, we acquire points using the highest predictive entropy, a commonly used baseline (Gal et al., 2017). For this experiment, we use only the CIFAR10 dataset. SimCLR and the self-supervised BNNs here are pretrained on 500 epochs, not 1000 epochs as default. ## B.3 Prior Predictive Checks (§4) BNN Prior Predictive. We use the ResNet-20-FRN architecture, which is the architecture used by Izmailov et al. (2021b). Note that this architecture does not include batch normalisation, which means the prior over parameters straightforwardly corresponds to a prior predictive distribution. We use a N (0, 1 5 ) prior over all network weights, again following Izmailov et al. (2021b), and sample from the prior predictive using 8192 Monte Carlo samples. Self-Supervised Prior Predictives. We primarily follow the details outlined earlier, except we use the same ResNet-20-FRN architecture as used for the BNNs and batch size of 500 rather than 1000. To sample from the prior predictive, we use Eq. (2) and have y ∼ softmax(W fθ s (x)), where we normalise the representations produced by the base encoder to have zero mean and unit variance, and we have W ∼ N (0, 20), with the prior precision chosen by hand. We neglect the biases because they introduce additional variance. The prior evaluation scores are not sensitive to the prior variance choice, and are evaluated by sampling images from the validation set, which was not seen during training. We used 4096 Monte Carlo samples from the prior. ## C Ablation Studies Effect of Batch Size. We study the effect of the pre-training batch size on the performance of our self-supervised BNNs. We run one seed for 100 epochs across three different batch sizes on CIFAR10. We see in Table C.1, the performance on CIFAR10 is robust to reducing the batch size. We hypothesise this is due to the noise injected during pre-training. | Batch Size | CIFAR10 Accuracy (%) | |--------------|------------------------| | 100 | 0.81 | | 500 | 0.81 | | 1000 | 0.80 | Table C.1: Effect of pretraining batch size on self-supervised BNN. Effect of Variational Distribution. We run an ablation study changing the variational distribution mean on CIFAR10. We evaluated using one seed, training for 100 epochs only. We consider setting the mean of the variational distribution for image i, ωi, to be: 0.5(z˜ A i + z˜ B i ), z˜ A i , and 0. We see that a suitable mean is required for good performance. | Variational Dist. Mean | CIFAR10 Accuracy (%) | | |--------------------------|------------------------|------| | 0 | 0.19 | | | z˜ A i | 0.79 | | | A | B | | | 0.5(˜z i | + ˜z i ) | 0.80 | Table C.2: Effect of the pretraining variational distribution on self-supervised BNN performance on CIFAR10. We see that some variant of our data-dependent mean is needed for good performance. Effect of Pretraining and Inference. To better understand the effect of the pretraining objective and the approximate inference scheme used, we performed an ablation study on CIFAR10. We considered both variants of our variational pretraining, and additionally deterministic pretraining that uses the NT-XENT loss. We also consider either using Laplace approximate inference or MAP estimation for the task parameters. Using the NT-XENT loss and MAP inference corresponds to SimCLR (Chen et al., 2020a), as widely used in the self-supervised learning community. For NT-XENT, we use τ = 0.45 for CIFAR10 and τ = 0.3 for CIFAR100. For these experiments, we only train for 500 epochs. In Fig. C.1, we see that incorporating the labelled data during pretraining boosts accuracy, but surprisingly decreases calibration. Relative to SimCLR, all of the self-supervised BNNs offer improved calibration at all dataset sizes. All approaches have high accuracy at low data regimes, highlighting the benefit of leveraging unlabelled data. Both deterministic pretraining and variational pretraining behave similarly, but performing approximate inference over task parameters substantially improves calibration. ![22_image_0.png](22_image_0.png) Figure C.1: **Effect of Pretraining and Inference** on CIFAR10. On the left plot, red, green, and blue lines overlap. On the right plot, blue and red lines overlap. Recall that the SS BNN is performing the contrastive learning separately from and the SS BNN* jointly with the downstream task. We see that our SS BNN* slightly outperforms the other approaches in terms of accuracy and that all of our approaches yield better-calibrated uncertainties than SimCLR. ## D Additional Results We additionally report the in-distribution accuracy and expected calibration error (ECE) of different BNNs when observing different numbers of labels. These metrics, unlike the log-likelihood, are interpretable. But note that for a useful classifier, we need to have *both* accurate and well-calibrated predictions.2 In Table D.1, we see that self-supervised BNNs substantially outperform conventional BNNs in terms of in-distribution accuracy. The gains are particularly large at smaller dataset sizes, precisely where improved priors are expected to make the biggest difference. Moreover, in terms of calibration, they consistently offer well-calibrated uncertainty estimates. Even though LL Laplace offers well-calibrated uncertainty estimates at a low numbers of labels, the predictions are much less accurate than self-supervised BNNs. We also see that incorporating labelled data during pretraining or ensembling self-supervised BNNs boosts accuracy, but surprisingly can harm calibration. Curiously, we find that the calibration of ensemble methods also sometimes worsens as we condition on more data. 2There are perfectly calibrated but useless classifiers, e.g., if 70% of examples are class A and 30% are class B, predicting p(class A) = 0.7 on every input achieves perfect ECE but does not discriminate between examples at all. Table D.1: **Bayesian Neural Network Predictive Performance**. Here, relative to the main results table, we report the accuracy and expected calibration error of different methods separately. Recall that the SS BNN is performing the contrastive learning separately from and the SS BNN* jointly with the downstream task. | | ↑ Accuracy (%) | | | | | | | | | |---------------------------------------|------------------|-----------|------------|-----------|------------|-----------|-----------|------------|-----------| | Dataset | # labelled | MAP | LL Laplace | SWAG | SS BNN | SS BNN* | Deep | SS BNN | SS BNN* | | points | Ensemble | Ensemble | Ensemble | | | | | | | | CIFAR10 | 50 | 14.6 ±0.7 | 14.9 ±0.8 | 14.8 ±0.2 | 66.3 ±0.8 | 68.3 ±0.2 | 19.0 ±0.3 | 68.9 ±0.1 | 69.5 ±0.2 | | 500 | 31.2 ±0.4 | 32.4 ±0.9 | 31.5 ±7.4 | 84.8 ±0.2 | 86.2 ±0.1 | 39.2 ±0.5 | 87.0 ±0.1 | 87.6 ±0.1 | | | 5000 | 56.5 ±3.1 | 53.4 ±2.4 | 66.4 ±1.2 | 87.7 ±0.1 | 88.6 ±0.2 | 72.1 ±0.3 | 89.6 ±0.2 | 90.9 ±0.03 | | | 50000 | 83.4 ±1.9 | 85.7 ±0.3 | 90.5 ±0.6 | 88.6 ±0.1 | 91.7 ±0.1 | 91.8 ±0.3 | 90.7 ±0.2 | 93.2 ±0.07 | | | CIFAR100 | 50 | 3.6 ±0.2 | 3.6 ±0.3 | 3.1 ±0.6 | 14.5 ±0.1 | 14.7 ±0.3 | 3.8 ±0.06 | 16.2 ±0.02 | 16.5 ±0.1 | | 500 | 7.0 ±0.2 | 6.1 ±0.2 | 7.7 ±0.3 | 38.5 ±0.2 | 39.0 ±0.3 | 9.2 ±0.1 | 40.8 ±0.1 | 41.3 ±0.01 | | | 5000 | 22.7 ±0.7 | 21.2 ±0.6 | 25.6 ±0.9 | 54.5 ±0.1 | 56.0 ±0.3 | 28.2 ±0.1 | 58.4 ±0.1 | 62.5 ±0.1 | | | 50000 | 60.5 ±2.3 | 59.6 ±1.1 | 67.6 ±0.5 | 59.9 ±0.1 | 69.2 ±0.1 | 70.7 ±0.5 | 66.4 ±0.1 | 74.9 ±0.1 | | | CIFAR10 to CIFAR10-C | 50 | 13.6 ±0.7 | 13.8 ±0.7 | 14.0 ±1.6 | 45.5 ±0.03 | 45.5 ±0.5 | 16.9 ±0.4 | 47.7 ±0.1 | 48.0 ±0.1 | | (OOD Generalisation) | 500 | 26.7 ±0.4 | 27.2 ±0.8 | 26.2 ±5.2 | 57.8 ±0.4 | 59.0 ±0.2 | 32.0 ±0.5 | 60.3 ±0.1 | 61.8 ±0.1 | | 5000 | 42.1 ±0.9 | 43.2 ±1.1 | 50.5 ±1.6 | 59.6 ±0.1 | 60.3 ±1.1 | 55.1 ±0.3 | 62.9 ±0.2 | 64.0 ±0.3 | | | 50000 | 59.2 ±3.7 | 62.1 ±1.6 | 69.0 ±0.4 | 59.8 ±0.3 | 63.1 ±0.5 | 70.1 ±0.4 | 63.6 ±0.2 | 66.8 ±0.2 | | | ↓ Expected Calibration Error (ECE; %) | | | | | | | | | | | CIFAR10 | 50 | 65.9 ±3.2 | 2.7 ±0.7 | 6.2 ±2.0 | 1.7 ±0.02 | 1.9 ±0.3 | 35.3 ±2.8 | 1.9 ±0.1 | 2.0 ±0.2 | | 500 | 30.0 ±2.1 | 3.2 ±0.6 | 13.4 ±0.6 | 1.5 ±0.2 | 2.6 ±0.1 | 10.6 ±0.6 | 2.3 ±0.1 | 2.0 ±0.1 | | | 5000 | 21.0 ±0.5 | 4.5 ±0.4 | 10.0 ±0.3 | 1.1 ±0.08 | 2.1 ±0.1 | 3.8 ±0.6 | 2.3 ±0.1 | 2.6 ±0.2 | | | 50000 | 8.4 ±0.5 | 1.5 ±0.4 | 2.8 ±0.4 | 0.8 ±0.02 | 1.2 ±0.1 | 3.7 ±0.3 | 2.8 ±0.1 | 2.1 ±0.1 | | | CIFAR100 | 50 | 61.6 ±1.1 | - | 17.5 ±6.0 | 11.6±0.1 | 11.7 ±0.3 | 38.5 ±0.7 | 13.4 ±0.02 | 13.6 ±0.1 | | 500 | 25.3 ±0.7 | 1.8 ±1.1 | 22.0 ±4.5 | 2.1 ±0.1 | 2.7 ±0.3 | 14.9 ±0.2 | 3.3 ±0.04 | 2.9 ±0.1 | | | 5000 | 32.3 ±2.1 | 1.4 ±0.06 | 16.4 ±6.4 | 1.5 ±0.2 | 7.8 ±0.3 | 4.4 ±0.5 | 3.2 ±0.1 | 12.3 ±0.2 | | | 50000 | 17.8 ±2.2 | 1.6 ±0.2 | 8.9 ±0.9 | 1.7 ±0.1 | 2.5 ±0.1 | 4.7 ±0.08 | 6.5 ±0.1 | 6.6 ±0.1 | | | CIFAR10 to CIFAR10-C | 50 | 67.6 ±3.0 | 4.0 ±0.7 | 6.9 ±0.8 | 8.4 ±0.6 | 9.2 ±0.7 | 37.3 ±2.9 | 6.1 ±0.1 | 5.9 ±0.1 | | (OOD Generalisation) | 500 | 33.7 ±1.9 | 7.4 ±0.8 | 18.0 ±8.0 | 8.1 ±0.7 | 8.9 ±0.6 | 15.5 ±0.3 | 7.2 ±0.1 | 7.6 ±0.4 | | 5000 | 30.4 ±2.6 | 9.2 ±1.3 | 19.1 ±0.8 | 7.4 ±0.4 | 7.8 ±0.8 | 6.9 ±0.3 | 5.0 ±0.1 | 6.5 ±0.2 | | | 50000 | 24.1 ±3.0 | 11.1 ±0.6 | 13.6 ±1.3 | 9.1 ±0.2 | 12.2 ±0.5 | 6.0 ±0.6 | 6.1 ±0.1 | 6.8 ±0.01 | |
Review 1: Summary: The work presents advances in Bayesian neural networks (BNNs) leveraged by unlabelled data. For this task, a self-supervised framework is designed around the prior predictive distribution for improving on the NN predictive tasks. Results show that the priors learned by leveraging unlabelled data work better than conventional BNN priors. Strengths and Weaknesses: #### **Strenghts.** - the idea and exploitation of unlabelled data for obtaining better priors in BNNs is definitely powerful and interesting. - authors did a good review of the state-of-the-art, and I find the related work and references particularly well written. - even if i have concerns with certain points, I did not find critical mistakes in the equations and derivations, and notation fits with the ideas and methods proposed. - experiments are sufficient, with a decent comparison wrt other SOTA methods from BNN literature. - the paper has still some issues with clarity, in my opinion, but it is general readable and easy to follow. #### **Weaknesses.** I would like to focus my concerns on the concept that the paper brings a lot: "BNN prior predictive / prior predictive distribution". From a Bayesian perspective, this prior predictive is kind of odd to me, because it basically ignores data (there is no inference there), and one samples weights/parameters from a "tuned" prior for later averaging standard likelihood predictions over the test data $y_\star$ (I'm looking to Eq. (2) for instance). This point confuses me a lot because some predictive properties/performance are claimed all along the paper bc this prior predictive distribution is well obtained. Having some experience in BNNs, I've not seen this prior predictive thing a lot, even if in many cases one wants to do model selection (i.e. fitting hyperparameters of the prior wrt to marginal likelihood) or just posterior approximations for later performing well with the (posterior) predictive estimates. Before Eq. 2, it is claimed that learning in this way is known as model learning, and used in Wilson et al. 2015, 2016. However, I've not found in these references something similar to the prior predictive framework proposed by the authors, so I have important doubts around the validity of such claims. I feel that the introduction of self-supervised learning/ contrastive methods with unlabelled data for improving BNNs is definitely a big topic to come, but from section 3.1 and partially section 3.2, I feel that some Bayesian principles are not entirely respected or well adjusted to what authors would like to propose in the paper. Requested Changes: I want to remark that I have a positive impression of the self-supervised and contrastive learning approach proposed. However, I still feel that it does not fit very well in the current focus and framework of BNNs, to the point, that it makes me doubt if the method is actually a Bayesian technique. So, having some answers and improvements on what I wrote before and sections 3.1 and 3.2 would be a plus. Including some extra clarity in the presentation of the methodology. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper proposes a way to perform semi-supervised learning of Bayesian Neural Networks (BNNs). The proposed approach mostly follows the contrastive learning technique from [1], i.e. the authors train an encoder on the labeled data via the constrastive loss, which then can be used for supervised learning on downstream tasks with labeled data. The main focus of the paper is on bringing the Bayesian perspective to this framework. Namely, the authors propose to use Bayesian linear layers as "heads" for both the contrastive learning and downstream tasks. The encoder model precisely follows [1], though. For the empirical evaluation, the authors evaluate their approach on CIFAR-10 and CIFAR-100, from which they partially remove the labels. For the baselines, the authors choose several Bayesian methods that do not incorporate unlabelled data into their training procedure. [1] Chen, Ting, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. "A simple framework for contrastive learning of visual representations." In International conference on machine learning, pp. 1597-1607. PMLR, 2020. Strengths and Weaknesses: I have several major concerns regarding the contrastive learning procedure considered in the paper. 1. The contrastive loss is not specified in the paper. The only description of the loss function is as follows. ``` The contrastive task is to predict the corresponding source image index for each augmented image. That is, for the first pair of augmented images in a given contrastive task dataset, we want to predict class “1”, for the second pair, we want to predict class “2”, and so forth. ... To predict these labels, we use a linear layer applied to an encoder that produces normalized representations. ``` It is unclear how exactly the authors use the linear layer to classify pairs of images. 2. Furthermore, it is not clear why predicting the index of the pair within the given batch results in contrastive loss. Indeed, the model will be penalized for predicting different labels (for the original and augmented images), however, it will also be penalized for predicting the same label for both images but different from their index in the batch. This design choice does not have any motivation in the paper. 3. The variational distribution for the contrastive head layer does not correspond to the probabilistic model used in eq. (3). Indeed, the parameters of the variational family described in Section 3.2 depend on the input data, while, in eq. (3), the authors use it as a conventional variational family that does not depend on inputs. I also have some minor concerns regarding the presentation: - throughout the paper, the authors use the word "predictive" as a noun, which I can interpret as "predictive distribution", but it adds confusion; - it is not clear what "class template vector" means; - the paper has grammar mistakes, for instance, in Section 3.1, see the second sentence of the first paragraph; - the paper has spelling mistakes, for instance, in Section 3.1, see the last sentence of the first paragraph. The empirical evaluation of the proposed approach can be significantly improved. 1. The paper does not contain an ablation study for the introduced design choices. For instance, the choice of the variational distribution and the contrastive loss (which is not properly discussed) are not motivated and are not studied properly. 2. The proposed method is compared only against the methods that do not use the unlabelled data. The comparison against other semi-supervised techniques (not necessarily Bayesian) is required. 3. In Table 5, reported accuracies are much lower than other conventional and Bayesian models. For instance, see Table 9 in [2]. This raises a major concern regarding the quality of the baselines. [2] Maddox, Wesley J., Pavel Izmailov, Timur Garipov, Dmitry P. Vetrov, and Andrew Gordon Wilson. "A simple baseline for bayesian uncertainty in deep learning." Advances in neural information processing systems 32 (2019). Requested Changes: The paper requires a major revision. See the list of major concerns for the requested changes. Broader Impact Concerns: The paper does not require including a broader impact statement. ================================================== Review 3: Summary: The paper proposes using unlabelled data to pretrain a BNN prior for greater sample efficiency in downstream tasks. They propose a pre-training scheme that uses a contrastive-style ELBO objective combined with data augmentation. They look at image classification, out-of-distribution detection, and active learning. Strengths and Weaknesses: **Strengths** One of the motivations of Bayesian methods is that the priors should be a good initialization for a given ML task. It makes sense to learn priors that allow fast adaption to tasks in the low-data regime. The proposed approach does appear to show strong sample-efficiency in the experiments, in large-ish scale experiment settings such as CIFAR 10. It is also good at out-of-distribution and active learning. **Weaknesses** **Function-space VI.** Throughout the paper, the authors claim that BNN methods cannot use unlabeled data. In functional / function-space VI (Sun et al. 2019, which the authors cite), the function-space KL is enforced by a 'measurement set'. This measurement set is essentially samples from the data domain, so it could easily include unlabelled data. In fact, there was an AABI paper last year that used FVI to improve OOD performance of BNN using unlabelled OOD data [A]. The authors need to revise their claims and potentially compare them to FVI methods if they are relevant. **Actual relevance to BNNs.** By the end of the paper, I was questioning how relevant this paper was to actual *Bayesian* NNs and not just NNs more generally. Since it seems the shared 'torso' network is trained to obtain a point estimate of the shared parameters, it's not really an interesting prior distribution over the weights, and the initial motivation of the work feels a bit misleading. You could focus the paper on the value of this pre-trained torso network and just use BNNs for the evaluation. At the very least, I would motivate the work through features /kernels rather than weight priors as done in the introduction. **Prior design and inventing a metric that the method was designed to solve.** It's not a good idea to invent a new metric to evaluate a model since there is a strong motivation to bias it towards the proposed model's strengths. I don't agree that 'similar inputs should predict the same class' is a sound prior we want models to have, and Section 4 seems to suggest it's an inherently good idea when it is, in fact, a subjective design decision made by the authors. I think a good BNN prior for classification should pick no specific class, and each input should have a close-to-uniform distribution over labels. This is the motivation for several works using stationary Gaussian processes-like architectures for BNNs [B, C]. The authors could consider combining their method with the architecture of [B, C] to get the best of both worlds. In summary, the authors have conflated a model having invariances with a model's prior, which are not exactly the same. A model can have invariances without it necessarily being evident in it's prior. For example, you could use sine and cosine features for angular data in a stationary Gaussian process to encode rotational invariance, but this doesn't change the stationary nature of the prior and the predictive distribution is unchanged. I think a more principled metric would be the 'effective dimension' (c.f. [D]) of the kernel induced by the pre-trained feature space. For $n$ data points, the effective dimension of the kernel (basically the rank of the data matrix, with the kernel being the outer product of the features) is $n$ if the features are unique and $1$ if the feature /data point is just repeated $n$ times, therefore minimizing the effective dimension for augmented inputs would be a more principled way of encoding this prior knowledge into the model and focuses on the fact that the authors are optimizing a deterministic feature space rather than a weight distribution. **The neural linear model.** The authors don't seem to know about the neural linear model / Bayesian last layer, which is a Bayesian linear regression on a neural network feature space. It's one of the oldest approaches to BNNs [E] and has lots of attractive properties, such as tractable marginal likelihoods in the regression settings [F] and also tractable ELBOs [G]. Throughout the paper, the authors refer to 'last-layer Laplace' and 'partial stochasticity' when the neural linear model is an older and more explicit way of describing this approach. **Invariance learning with the marginal likelihood.** I'm not very familiar with the body of work, but I know that the marginal likelihood has been used to train invariances with Bayesian neural networks [H, I]. This body of work seems relevant enough to discuss as related work, and even relevant enough to evaluate against it since the pre-trained feature space is also essentially learning invariances. **The experiments mix up approximate inference with architectures.** The experimental evaluation, while thorough, was a bit confused. The central question that was tackled by the experiments was 'Does pre-training the feature space improve BNN uncertainty quantification?'. This is invariant to the choice of approximate inference for the BNN itself. Therefore, the experiments could rather use the pre-trained network as the torso for the MAP, SWAG and ensemble and see if the torso brings a net benefit across approximate inference methods. Comparing two very different architectures and approximate inference methods is not that meaningful. **Clarity** A few clarity issues: * The ''# Labels' column in Table 2 is a bit confusing since I believe it's referring to the number of training data points, but it reads like the dimensionality of the classification problem is increasing * The definition of SS BNN* should be repeated a few times (e.g. the caption of Table 2) because the name is vague and it's easy to miss/forget while reading the main text. [A] Function-Space Regularization for Deep Bayesian Classification, Lin et al. [B] Periodic Activation Functions Induce Stationarity, Meronen et al [C] Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness, Liu et al [D] Bandit optimisation of functions in the Matern kernel RKHS, Janz et al [E] Marginalized neural network mixtures for large-scale regression, Lázaro-Gredilla et al [F] Benchmarking the Neural Linear Model for Regression, Ober et al [G] Variational Bayesian Last Layers, Harrison et al [H] Learning Invariances using the Marginal Likelihood, van der Wilk et al [I] Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations, Immer et al Requested Changes: Based on the feedback above, my major concerns can be summarized as follows: 1) Retract claims on BNNs and unlabelled data w.r.t. FVI and discuss FVI with greater accuracy 2) Disentangle architecture and approximate inference in the experimental section 3) Rewrite paper to motivate instead by learning invariant feature spaces rather than learning weight prior distributions, or do experiments that learn weight priors for the pretrained network torso 4) Remove Section 4 and use Figure 3 and Table 1 as evidence for the behavior of Algorithm 1. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept with minor revision Comment: As discussed above, the paper is currently primarily lacking in its empirical evaluation, which needs to be rectified. The aim of this revision should not be to chase state-of-the-art results in the fully supervised case, i.e., 50000 labels, but to ensure that the reported results for each setup mirror their prior reported performances for the reader to accurately interpret the results. I recommend this as a minor rather than as a major revision, as I expect the relative performance improvements to persist in the more relevant semi-supervised setups. While not mandatory for acceptance this would also allow the authors to provide the reader with error bars on all reported results, which would further improve the interpretability of the results. I therefore recommend _Acceptance with minor revision_. ==================================================
# Monotone Deep Boltzmann Machines Zhili Feng *zhilif@andrew.cmu.edu* Machine Learning Department Carnegie Mellon University Ezra Winston *ewinston@cs.cmu.edu* Machine Learning Department Carnegie Mellon University J. Zico Kolter *zkolter@cs.cmu.edu* Computer Science Department Carnegie Mellon University Bosch Center for AI Reviewed on OpenReview: *https: // openreview. net/ forum? id= SgTKk6ryPr* ## Abstract Deep Boltzmann machines (DBMs), one of the first "deep" learning methods ever studied, are multi-layered probabilistic models governed by a pairwise energy function that describes the likelihood of all variables/nodes in the network. In practice, DBMs are often constrained, i.e., via the *restricted* Boltzmann machine (RBM) architecture (which does not permit intralayer connections), in order to allow for more efficient inference. In this work, we revisit the generic DBM approach, and ask the question: are there other possible restrictions to their design that would enable efficient (approximate) inference? In particular, we develop a new class of restricted model, the monotone DBM, which allows for arbitrary self-connection in each layer, but restricts the *weights* in a manner that guarantees the existence and global uniqueness of a mean-field fixed point. To do this, we leverage tools from the recentlyproposed monotone Deep Equilibrium model and show that a particular choice of activation results in a fixed-point iteration that gives a variational mean-field solution. While this approach is still largely conceptual, it is the first architecture that allows for efficient approximate inference in fully-general weight structures for DBMs. We apply this approach to simple deep convolutional Boltzmann architectures and demonstrate that it allows for tasks such as the joint completion and classification of images, within a single deep probabilistic setting, while avoiding the pitfalls of mean-field inference in traditional RBMs. ## 1 Introduction This paper considers (deep) Boltzmann machines (DBMs), which are pairwise energy-based probabilistic models given by a joint distribution over variables x with density $$p(\mathbf{x})\propto\exp\left(\sum_{(i,j)\in E}x_{i}^{\top}\Phi_{i j}x_{j}+\sum_{i=1}^{n}b_{i}^{\top}x_{i}\right),$$ $$\left(1\right)$$ , (1) where each x1:n denotes a discrete random variable over ki possible values, represented as a one-hot encoding xi ∈ {0, 1} ki; E denotes the set of edges in the model; Φij ∈ R ki×kj represents pairwise potentials; and bi ∈ R ki represents unary potentials. Depending on the context, these models are typically referred to as pairwise Markov random fields (MRFs) (Koller & Friedman, 2009), or (potentially deep) Boltzmann machines (Goodfellow et al., 2016; Salakhutdinov & Hinton, 2009; Hinton, 2002). In the above setting, each xi may ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) ![1_image_2.png](1_image_2.png) Figure 1: Neural network topology of different Boltzmann machines. The general case is a complete graph (red dashed lines are a subset of edges that are in BM but not RBM). Our proposed parameterization is a form of general Boltzmann machine. represent an observed or unobserved value, and there can be substantial structure within the variables; for instance, the collection of variables x may (and indeed will, in the main settings we consider in this paper) consist of several different "layers" in a joint convolutional structure, leading to the deep convolutional Boltzmann machine (Norouzi et al., 2009). Boltzmann machines were some of the first "deep" networks ever studied Ackley et al. (1985). However, in modern deep-learning practice, general-form DBMs have largely gone unused, in favor of *restricted* Boltzmann machines (RBMs). These are DBMs that avoid any connections within a single layer of the model and thus lead themselves to more efficient block-based approximate inference methods. In this paper, we revisit the general framework of a *generic* DBM, and ask the question: are there any other restrictions (besides avoiding intra-layer connections), that would *also* allow for efficient approximate inference methods? To answer this question, we propose a new class of general DBMs, the monotone deep Boltzmann machine (mDBM); unlike RBMs, these networks can have dense intra-layer connections but are parameterized in a manner that constrains the weights so as to still guarantee an efficient inference procedure. Specifically, in these networks, we show that there is a unique and globally optimal fixed point of variational mean-field inference; this contrasts with traditional probabilistic models where mean-field inference may lead to multiple different local optima. To accomplish this goal, we leverage recent work on monotone Deep Equilibirum (monDEQ) models (Winston & Kolter, 2020), and show that a particular choice of activation function leads to a fixed point iteration equivalent to (damped) parallel mean-field updates. Such fixed point iterations require the development of a new proximal operator method, for which we derive a highly efficient GPU-based implementation. Our method also relates closely with previous works on convergent mean-field inference in Markov random fields (MRFs) (Krähenbühl & Koltun, 2013; Baqué et al., 2016; Lê-Huu & Alahari, 2021); but these approaches either require stronger conditions on the network or fail to converge to the true mean-field fixed point, and generally have only been considered on standard "single-layer" MRFs. Our approach can be viewed as a combined model parameterization and (properly damped) mean-field inference procedure, such that the resulting iteration is guaranteed to converge to a unique optimal mean-field fixed point when run in parallel over all variables. Although the approach is still largely conceptual, we show for the first time that one can learn and perform inference in structured multi-layer Boltzmann machines which contain intra-layer connections. For example, we perform both learning and inference for a deep convolutional, multi-resolution Boltzmann machine, and apply the network to model MNIST and CIFAR-10 pixels and their classes conditioned on partially observed images. Such joint probabilistic modeling allows us to simultaneously impute missing pixels and predict the class. While these are naturally small-scale tasks, we emphasize that performing joint probabilistic inference over a complete model of this type is a relatively high-dimensional task as far as traditional mean-field inference is concerned. We compare our approach to (block structured) mean-field inference in classical RBMs, showing substantial improvement in these estimates, and also compare to alternative mean-field inference approaches. Although in initial phases, the work hints at potential new directions for Boltzmann machines involving very different types of restrictions than what has typically been considered in deep learning. ## 2 Background And Related Work This paper builds upon three main avenues of work: 1) deep equilibrium models, especially one of their convergent version, the monotone DEQ; 2) the broad topic of energy-based deep model and Boltzmann machines in particular; and 3) work on concave potentials and parallel methods for mean-field inference. We discuss each of these below. Equilibrium models and their provable convergence The DEQ model was first proposed by Bai et al. (2019). Based on the observation that a neural network q t+1 = σ(W qt+Ux+b) with input injection x usually converges to a fixed point, they modeled an effectively infinite-depth network with input injection directly via its fixed point: q ∗ = σ(W q∗+Ux+b). Its backpropagation is done through the implicit function theorem and only requires constant memory. Bai et al. (2020) also showed that the multiscale DEQ models achieve near state-of-the-art performances on many large-scale tasks. Winston & Kolter (2020) later presented a parametrization of the DEQ (denoted as monDEQ) that guarantees provable convergence to a unique fixed point, using monotone operator theory. Specifically, they parameterize W in a way that I −W mI (called m-strongly monotone) is always satisfied during training for some m > 0; they convert nonlinearities into proximal operators (which include ReLU, tanh, etc.), and show that using existing splitting methods like forward-backward and *Peaceman-Rachford* can provably find the unique fixed point. Other notable related implicit model works include Revay et al. (2020), which enforces Lipschitz constraints on DEQs; El Ghaoui et al. (2021) provides a thorough introduction to implicit models and their well-posedness. Tsuchida & Ong (2022) also solves graphical model problems using DEQs, focusing on principal component analysis. Markov random field (MRF) and its variants MRF is a form of energy-based models, which model joint probabilities of the form pθ(x) = exp (−Eθ(x)) /Zθ for an energy function Eθ. A common type of MRF is the Boltzmann machine, the most successful variant of which is the restricted Boltzmann machines (RBM) (Hinton, 2002) and its deep (multi-layer) variant (Salakhutdinov & Hinton, 2009). Particularly, RBMs define Eθ(*v, h*) = −a >v − b >h − v >W h, where θ = {*W, a, b*}, v is the set of visible variables, and h is the set of latent variables. It is usually trained using the contrastive-divergence algorithm, and its inference can be done efficiently by a block mean-field approximation. However, a particular restriction of RBMs is that there can be no intra-layer connections, that is, each variable in v (resp. h) is independent conditioned on h (resp. v). A deep RBM allows different layers of hidden nodes, but there cannot be intra-layer connections. By contrast, our formulation allows intra-layer connections and is therefore more expressive in this respect. See Figure 1 for the network topology of RBM, deep RBM, and general BM (we also use the term general deep BM interchangeably to emphasize the existence of deep structure). Wu et al. (2016) proposed a deep parameterization of MRF, but their setting only considers a grid of hidden variables h, and the connections among hidden units are restricted to the neighboring nodes. Therefore, it is a special case of our parameterization (although their learning algorithm is orthogonal to ours). Numerous works also try to combine deep neural networks with conditional random fields (CRF) (Krähenbühl & Koltun, 2013; Zheng et al., 2015; Schwartz et al., 2017) These models either train a pre-determined kernel as an RNN or use neural networks for producing either inputs or parameters of their CRFs. Parallel and convergent mean-field It is well-known that mean-field updates converge locally using a coordinate ascent algorithm (Blei et al., 2017). However, local convergence is only guaranteed if the update is applied sequentially. Nonetheless, several works have proposed techniques to parallelize updates. Krähenbühl & Koltun (2013) proposed a concave-convex procedure (CCCP) to minimize the KL divergence between the true distribution and the mean-field variational family. To achieve efficient inference, they use a concave approximation to the pairwise kernel, and their fast update rule only converges if the kernel function is concave. Later, Baqué et al. (2016) derived a similar parallel damped forward iteration to ours that provably converges without the concave potential constraint. However, unlike our approach, they do not use a parameterization that ensures a global mean-field optimum, and their algorithm therefore may not converge to the actual fixed point of the mean-field updates. This is because Baqué et al. (2016) used the prox1 f proximal operator (described below), whereas we derive the proxα f operator to guarantee global convergence when doing mean-field updates in parallel. What's more, Baqué et al. (2016) focused only on inference over prescribed potentials, and not on training the (fully parameterized) potentials as we do here. Lê-Huu & Alahari (2021) brought up a generalized Frank-Wolfe based framework for mean-field updates which include the methods proposed by Baqué et al. (2016); Krähenbühl & Koltun (2013). Their results only guarantee global convergence to a local optimal. ## 3 Monotone Deep Boltzmann Machines And Approximate Inference In this section, we present the main technical contributions of this work. We begin by presenting a parameterization of the pairwise potential in a Boltzmann machine that guarantees the monotonicity condition. We then illustrate the connection between a (joint) mean-field inference fixed point and the fixed point of our monotone Boltzmann machine (mDBM) and discuss how deep structured networks can be implemented in this form practically; this establishes that, under the monotonicity conditions on Φ, there exists a unique globally-optimal mean-field fixed point. Finally, we present an efficient parallel method for computing this mean-field fixed point, again motivated by the machinery of monotone DEQs and operator splitting methods. ## 3.1 A Monotone Parameterization Of General Boltzmann Machines In this section, we show how to parameterize our probabilistic model in a way that the pairwise potentials satisfy I − Φ mI, which will be used later to show the existence of a unique mean-field fixed point. Recall that Φ defines the interaction between random variables in the graph. In particular, for random variables xi ∈ R ki, xj ∈ R kj, we have Φij ∈ R ki×kj. Additionally, since Φ defines a graphical model that has no self-loop, we further require Φ to be a *block hollow* matrix (that is, the ki ×ki diagonal blocks corresponding to each variable must be zero). While both these conditions on Φ are convex constraints, in practice it would be extremely difficult to project a generic set of weights onto this constraint set under an ordinary parameterization of the network. Thus, we instead advocate for a *non-convex* parameterization of the network weights, but one which guarantees that the monotonicity condition is always satisfied, without any constraint on the weights in the parameterization. Specifically, define the block matrix $$\mathbf{A}={\left[\begin{array}{l l l l}{A_{1}}&{A_{2}}&{\cdots}&{A_{n}}\end{array}\right]}$$ $$\left(2\right)$$ with Ai ∈ R d×ki matrices for each variables, and where d can be some arbitrarily chosen dimension. Then let Aˆi be a spectrally-normalized version of Ai $${\hat{A}}_{i}=A_{i}\cdot\operatorname*{min}\{{\sqrt{1-m}}/\|A_{i}\|_{2},1\}$$ √1 − m/kAik2, 1} (2) i.e., a version of Ai normalized such that its largest singular value is at most √1 − m (note that we can compute the spectral norm of Ai as kAik2 = kAT i Aik 1/2 2, which involves computing the singular values of only a ki × ki matrix, and thus is very fast in practice). We define the Aˆ matrix analogously as the block version of these normalized matrices. Then we propose to parameterize Φ as $$\Phi=\mathrm{blkdiag}(\hat{A}^{T}\hat{A})-\hat{A}^{T}\hat{A}$$ Φ = blkdiag(AˆT Aˆ) − AˆT Aˆ (3) where blkdiag denotes the block-diagonal portion of the matrix along the ki × ki block. Put another way, this parameterizes Φ as $$\Phi_{i j}=\begin{cases}-\hat{A}_{i}^{T}\hat{A}_{j}&\text{if}i\neq j,\\ 0&\text{if}i=j.\end{cases}\tag{1}$$ $$\left(4\right)$$ As the following simple theorem shows, this parameterization guarantees both hollowness of the Φ matrix and monotonicity of I − Φ, for any value of the A matrix. $$(3)$$ Theorem 3.1. For any choice of parameters A*, under the parametrization equation 3 above, we have that* 1) Φii = 0 *for all* i = 1, . . . , n*, and 2)* I − Φ mI. Proof. Block hollowness of the matrix follows immediately from construction. To establish monotonicity, note that $$I-\Phi\geq mI\Longleftrightarrow I+\tilde{\mathbf{A}}^{T}\tilde{\mathbf{A}}-\text{blkdiag}(\tilde{\mathbf{A}}^{T}\tilde{\mathbf{A}})\succeq mI$$ $$\Longleftrightarrow I-\text{blkdiag}(\tilde{\mathbf{A}}^{T}\tilde{\mathbf{A}})\succeq mI\Longleftrightarrow I-\tilde{A}_{i}^{T}\tilde{A}_{i}\succeq mI,\ \ \forall i$$ $$\Longleftrightarrow\|\tilde{A}_{i}\|_{2}\leq\sqrt{1-m},\ \ \forall i.$$ (5) This last property always holds by construction of Aˆi. ## 3.2 Mean-Field Inference As A Monotone Deq In this section, we formally present how to formulate the mean-field inference as a DEQ update. Recall from before that we are modeling a distribution of the form Equation (1). We are interested in approximating the conditional distribution p(xh|xo), where o and h denote the observed and hidden variables respectively, with a factored distribution q(xh) = Qi∈h qi(xi). Here, the standard mean-field updates (which minimize the KL divergence between q(xh) and p(xh|xo) over the single distribution qi(xi)) are given by the following equation, $$q_{i}(x_{i}):=\operatorname{softmax}\left(\sum_{j:(i,j)\in E}\Phi_{i j}q_{j}(x_{j})+b_{i}\right)$$ $$\mathbf{\Sigma}$$ $$\square$$ $$(6)$$ where overloading notation slightly, we let qj (xj ) denote a one-hot encoding of the observed value for any j ∈ o (see e.g., Koller & Friedman (2009) for a full derivation). The essence of the above updates is a characterization of the joint fixed point to mean-field inference. For simplicity of notation, defining $$\mathbf{\mathit{q}}=\begin{bmatrix}q_{1}(x_{1})&q_{2}(x_{2})&\dots\end{bmatrix}^{T}.$$ We see that qh is a joint fixed point of all the mean-field updates if and only if $$\mathbf{q}_{h}=\operatorname{softmax}\left(\mathbf{\Phi}_{h h}\mathbf{q}_{h}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h}\right)$$ qh = softmax (Φhhqh + Φhoxo + bh) (6) where xo analogously denotes the stacked one-hot encoding of the observed variables. We briefly recall the monotone DEQ framework of Winston & Kolter (2020). Given input vector x, a monotone DEQ computes the fixed point z ?(x) that satisfies the equilibrium equation z ?(x) = σ(Wz ?(x) + Ux + b). Then if: 1) σ is given by a proximal operator1 σ(x) = prox1 f (x) for some convex closed proper (CCP) f, and 2) if we have the monotonicity condition I − W mI (in the positive semidefinite sense) for some m > 0, then for any x there exists a unique fixed point z ?(x), which can be computed through standard operator splitting methods, such as forward-backward splitting. We now state our main claim of this subsection, that under certain conditions the mean-field fixed point can be viewed as the fixed point of an analogous DEQ. This is formalized in the following proposition. Proposition 3.1. Suppose that the pairwise kernel Φ satisfies I −Φ mI 2for m > 0. Then the mean-field fixed point $$\mathbf{q}_{h}=\operatorname{softmax}\left(\mathbf{\Phi}_{h h}\mathbf{q}_{h}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h}\right)\tag{1}$$ corresponds to the fixed point of a monotone DEQ model. Specifically, this implies that for any xo, there exists a unique, globally-optimal fixed point of the mean-field distribution qh. 1A proximal operator is defined by proxα f (x) = arg minz 1 2 kx − zk 2 + αf(z). 2Technically speaking, we only need I −Φhh mI, but since we want this to hold for any choice of h, we need the condition to apply to the entire Φ matrix. $$\left(7\right)$$ ![5_image_0.png](5_image_0.png) Figure 2: Illustration of a possible deep convolutional Boltzmann machine, where the monotonicity structure can still be enforced. Proof. As the monotonicity condition of the monotone DEQ is assumed in the proposition, the proof of the proposition rests entirely in showing that the softmax operator is given by prox1 f for some CCP f. Specifically, as shown in Krähenbühl & Koltun (2013), this is the case for $$f(z)=\sum_{i}z_{i}\log z_{i}-\frac{1}{2}\|z\|_{2}^{2}+\mathbb{I}\left\{\sum_{i}z_{i}=1,\ z_{i}\geq0\right\}\tag{8}$$ i.e., the restriction of the entropy minus squared norm to the simplex (note that even though we are *subtracting* a squared norm term it is straightforward to show that this function is convex since the second derivatives are given by 1/zi − 1, which is always non-negative over its domain). ## 3.3 Practical Considerations When Modeling Mdbms The construction in Section 3.1 guarantees monotonicity of the resulting pairwise probabilistic model. However, instantiating the model in practice, where the variables represent hidden units of a deep architecture (i.e., representing multi-channel image tensors with pairwise potentials defined by convolutional operators), requires substantial subtlety and care in implementation. In this setting, we do not want to actually represent A explicitly, but rather determine a method for *multiplying* Av and AT v for some vector v (as we see in Section 3.2, this is all that is required for the parallel mean-field inference method we propose). This means that certain blocks of A are typically parameterized as convolutional layers, with convolution and transposed convolution operators as the main units of computation. More specifically, we typically want to *partition* the full set of hidden units into some K distinct sets $$\mathbf{q}=\begin{bmatrix}\mathbf{q}_{1}&\mathbf{q}_{2}&\dots&\mathbf{q}_{K}\end{bmatrix}^{T}$$ $$({\mathfrak{g}})$$ T(9) where e.g., qi would be best represented as a height×width×groups×cardinality tensor (i.e., a collection of multiple hidden units corresponding to different locations in a typical deep network hidden layer). Note that here qiis not the same as qi(xi), but rather the collection of *many* different individual variables. These qi terms can be related to each other via different operators, and a natural manner of parameterizing A, in this case, is as an interconnected set of convolutional or dense operators. To represent the pairwise interactions, we can create a similarly-factored matrix A, e.g., one of the form $$\mathbf{A}=\left[\begin{array}{cccc}\mathbf{A}_{11}&0&\cdots&0\\ \mathbf{A}_{21}&\mathbf{A}_{22}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{A}_{K1}&\mathbf{A}_{K2}&\cdots&\mathbf{A}_{KK}\end{array}\right]\tag{10}$$ where e.g., Aij is a (possibly strided) convolution mapping between the tensors representing qj and qi. In this case, we emphasize that Aij is not the kernel matrix that one "slides" along the variables. Instead, Aij is the linear mapping as if we write the convolution as a matrix-matrix multiplication. For example, a 2D convolution with stride 1 can be expressed as a doubly block circulant matrix (the case is more complicated when different striding is allowed). This parametrization is effectively a *general* Boltzmann machine, since each random variable in Equation (9) can interact with any other variables except for itself. Varying Aij , the formulation in Equation (10) is rich enough for any type of architecture including convolutions, fullyconnected layers, and skip-connections, etc. An illustration of one possible network structure is shown in Figure 2. To give a preliminary introduction to our implementation, let us denote the convolution filter as F, and its corresponding matrix form as A. While it is possibly simpler to directly compute A>Aq, A usually has very high dimensions even if F is small. Instead, our implementation computes ConvTranspose(F, Conv(F, q)), modulo using the correct striding and padding. The block diagonal element of A>A has smaller dimension and can be computed directly as a 1 × 1 convolution. The precise details of how one computes the block diagonal elements of AT A, and how one normalizes the proper diagonal blocks (which, we emphasize, still just requires computing the singular values of matrices whose size is the cardinality of a single qi(xi)) are somewhat involved, so we defer a complete description to the Appendix (and accompanying code). The larger takeaway message, though, is that it is possible to parameterize complex convolutional multi-scale Boltzmann machines, all while ensuring monotonicity. ## 3.4 Efficient Parallel Solving For The Mean-Field Fixed Point Although the monotonicity of Φ guarantees the existence of a unique solution, it does not necessarily guarantee that the simple iteration $$\mathbf{q}_{h}^{(t)}=\mathrm{softmax}(\mathbf{\Phi}_{h h}\mathbf{q}_{h}^{(t-1)}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h})$$ $$(11)$$ $$(12)$$ h + Φhoxo + bh) (11) will converge to this solution. Instead, to guarantee convergence, one needs to apply the *damped* iteration (see, e.g. (Winston & Kolter, 2020)) $$\mathbf{q}_{h}^{(t)}=\operatorname*{prox}_{f}^{\alpha}\left((1-\alpha)\mathbf{q}_{h}^{(t-1)}+\alpha(\mathbf{\Phi}_{h h}\mathbf{q}_{h}^{(t-1)}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h})\right).$$ The damped forward-backward iteration converges linearly to the unique fixed point if α ≤ 2m/L2, assuming I − Φ is m-strongly monotone and L-Lipschitz (Ryu & Boyd, 2016). Crucially, this update can be formed in parallel over all the variables in the network: we do not require a coordinate descent approach as is typically needed by mean-field inference. The key issue, though is that while prox1 f (x) = softmax(x) for f defined as in Equation (8), in general, this does not hold for α 6= 1. Indeed, for α 6= 1, there is no closed-form solution to the proximal operation, and computing the solution is substantially more involved. Specifically, computing this proximal operator involves solving the optimization problem $$\mbox{prox}_{f}^{\alpha}(x)=\arg\min_{z}\ \ \frac{1}{2}\|x-z\|_{2}^{2}+\alpha\sum_{i}z_{i}\log z_{i}-\frac{\alpha}{2}\|z\|_{2}^{2}\ \ \ \mbox{s.t}\ \ \sum_{i}z_{i}=1,\ \ z\geq0.\tag{13}$$ The following theorem, proved in the Appendix, characterizes the solution to this problem for α ∈ (0, 1) (although it is also possible to compute solutions for α > 1, this is not needed in practice, as it corresponds to a "negatively damped" update, and it is typically better to simply use the softmax update in such cases). Theorem 3.2. Given f *as defined in Equation* (8), α ∈ (0, 1)*, and* x ∈ R k*, the proximal operator* proxα **Theorem 6.1**.: _Let $f$ be a positive integer. Then $f(x)$ is a positive integer._ $$\operatorname{prox}_{0}^{2}(x)_{i}=\frac{\alpha}{1-\alpha}W\left(\frac{1-\alpha}{\alpha}\exp\left(\frac{x_{i}-\alpha+\lambda}{\alpha}\right)\right),$$ _where $\lambda\in\mathbb{R}$ is the unique solution chosen to ensure that the resulting $\sum_{i}\operatorname{prox}_{j}^{2}(x_{i})=1$, and where $W(\cdot)$ is the $\alpha$-function._ (x) the principal branch of the Lambert W *function.* In practice, however, this is not the most numerically stable method for computing the proximal operator, especially for small α, owing to the large term inside the exponential. Computing the proximal operation efficiently is somewhat involved, though briefly, we define the alternative function $$g(y)=\log\frac{\alpha}{1-\alpha}W\left(\frac{1-\alpha}{\alpha}\exp\left(\frac{y}{\alpha}-1\right)\right)$$ (14) $$(14)$$ 7 ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) (a) Test data with 60% pixels randomly masked(b) Original image (c) Imputation with 60% ![7_image_4.png](7_image_4.png) ![7_image_3.png](7_image_3.png) (d) Imputation with 60% pixels randomly masked using RBM Figure 3: MNIST pixel imputation using mDBM (bottom left) and deep RBM (bottom right), where the RBM test results are generated using mean-field inference instead of Gibbs sampling. and show how to directly compute g(y) using Halley's method (note that Halley's method is also the preferred manner to computing the Lambert W function itself numerically (Corless et al., 1996)). It updates xn+1 = xn −2f(xn)f 0(xn) 2(f 0(xn))2−f(xn)f 00(xn) , and enjoys cubic convergence when the initial guess is close enough to the root. Finding the prox operator then requires that we find λ such that Pk i=1 exp(g(xi + λ)) = 1. This can be done via (one-dimensional) root finding with Newton's method, which is guaranteed to always find a solution here, owing to the fact that this function is convex monotonic for λ ∈ (−∞, 1). We can further compute the gradients of the g function and of the proximal operator itself via implicit differentiation (i.e., we can do it analytically without requiring unrolling the Newton or Halley iteration). We describe the details in the appendix and include an efficient PyTorch function implementation in the supplementary material. Comparison to Winston & Kolter (2020) Although this work uses the same monotonicity constraint as in Winston & Kolter (2020), our result further requires the linear module Φ to be hollow, and extend their work to the softmax nonlinear operator as well. These extensions introduce significant complications, but also enable us to interpret our network as a probabilistic model, while the network in Winston & Kolter (2020) cannot. ## 3.5 Training Considerations Finally, we discuss approaches for training mDBMs, exploiting their efficient approach to mean-field inference. Probabilistic models are typically trained via approximate likelihood maximization, and since the meanfield approximation is based upon a particular likelihood approximation, it may seem most natural to use this same approximation to train parameters. In practice, however, this is often a suboptimal approach. Specifically, because our forward inference procedure ultimately uses mean-field inference, it is better to train the model directly to output the correct marginals, *when running this mean-field procedure*. This is known as a marginal-based loss (Domke, 2013). In the context of mDBMs, this procedure has a particularly convenient form, as it corresponds roughly to the "typical" training of DEQ. In more detail, suppose we are given a sample x ∈ X (i.e., at training time the entire sample is given), along with a specification of the "observed" and "hidden" sets, o and h respectively. Note that the choice of observed and hidden sets is potentially up to the algorithm designer, and can effectively allow one to train our model in a "self-supervised" fashion, where the goal is to predict some unobserved components from others. In practice, however, one typically wants to design hidden and observed portions congruent with the eventual use of the model: e.g., if one is using the model for classification, then at training time it makes sense for the label to be "hidden" and the input to be "observed." Given this sample, we first solve the mean-field inference problem to find q ? h (xh) such that $$\mathbf{q}_{h}^{\star}=\mathrm{softmax}\left(\mathbf{\Phi}_{h h}\mathbf{q}_{h}^{\star}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h}\right).$$ h + Φhoxo + bh). (15) For this sample, we know that the true value of the hidden states is given by xh. Thus, we can apply some loss function `(q ? h , xh) between the prediction and true value, and update parameters of the model $$(15)$$ θ = {A, b} using their gradients $$\frac{\partial\ell(\mathbf{q}_{h}^{*},\mathbf{x}_{h})}{\partial\theta}=\frac{\partial\ell(\mathbf{q}_{h}^{*},\mathbf{x}_{h})}{\partial\mathbf{q}_{h}^{*}}\frac{\partial\mathbf{q}_{h}^{*}}{\partial\theta}=\frac{\partial\ell(\mathbf{q}_{h}^{*},\mathbf{x}_{h})}{\partial\mathbf{q}_{h}^{*}}\left(I-\frac{\partial g(\mathbf{q}_{h}^{*})}{\partial\mathbf{q}_{h}^{*}}\right)^{-1}\frac{\partial g(\mathbf{q}_{h}^{*})}{\partial\theta}$$ $$(16)$$ with $$g(\mathbf{q}_{h}^{*})\equiv\operatorname*{prox}_{f}^{\alpha}\left((1-\alpha)\mathbf{q}_{h}^{*}+\alpha(\mathbf{\Phi}_{h h}\mathbf{q}_{h}^{*}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h})\right)$$ and where the last equality comes from the standard application of the implicit function theorem as typical in DEQs or monotone DEQs. The key to gradient computation is noticing that in Equation (16), we can rearrange: $$\mathbf{u}\triangleq{\frac{\partial\ell(\mathbf{q}_{h}^{*},\mathbf{x}_{h})}{\partial\mathbf{q}_{h}^{*}}}\left(I-{\frac{\partial g(\mathbf{q}_{h}^{*})}{\partial\mathbf{q}_{h}^{*}}}\right)^{-1}\Longrightarrow\mathbf{u}=\mathbf{u}{\frac{\partial g(\mathbf{q}_{h}^{*})}{\partial\mathbf{q}_{h}^{*}}}+{\frac{\partial\ell(\mathbf{q}_{h}^{*},\mathbf{x}_{h})}{\partial\mathbf{q}_{h}^{*}}},$$ which is just another fixed-point problem. Here all the partial derivatives can be handled by autodifferentiation (with the correct backward hook for proxα f ), and the details exactly mirror that of Winston & Kolter (2020), also see Algorithm 2. As a final note, we also mention that owing to the restricted range of weights allowed by the monotonicty constraint, the actual output marginals qi(xi) are often more uniform in distribution than desired. Thus, we typically apply the loss to a scaled marginal $$(17)$$ $${\tilde{q}}_{i}(x_{i})\propto q_{i}(x_{i})^{\tau_{i}}$$ τi(17) where τi ∈ R+ is a variable-dependent learnable temperature parameter. Importantly, we emphasize that this is *only* done after convergence to the mean-field solution, and thus only applies to the marginals to which we apply a loss: the actual internal iterations of mean-field cannot have such a scaling, as it would violate the monotonicity condition. Algorithm 1 ForwardIteration Require: Observed RV xo, parameters Φ, b, damp parameter α ∈ (0, 1). Find the fixed point q ∗ h to $$\mathbf{q}_{h}^{(t)}=\operatorname*{prox}_{f}^{\alpha}\left((1-\alpha)\mathbf{q}_{h}^{(t-1)}+\alpha(\Phi_{h h}\mathbf{q}_{h}^{(t-1)}+\Phi_{h o}\mathbf{x}_{o}+\mathbf{b}_{h})\right)$$ ## Algorithm 2 Backwarditeration Require: Loss function `, fixed point q ∗ h , true value xh parameters θ. Find the fixed point u ∗to ${\mathbf{u}=\mathbf{u}\frac{\partial g(\mathbf{q}_h^*)}{\partial\mathbf{q}_h^*}+\frac{\partial\ell(\mathbf{q}_h^*,\mathbf{x}_h)}{\partial\mathbf{q}_h^*}}$ - - Compute the final Jacobian-vector product as $$\frac{\partial\ell(\mathbf{q}_{h}^{\star},\mathbf{x}_{h})}{\partial\theta}=\mathbf{u}^{\star}\frac{\partial g(\mathbf{q}_{h}^{\star})}{\partial\theta}$$ A simple overview of the algorithm is demonstrated in Algorithm 3. The task is to jointly predict the image label and fill the top-half of the image given the bottom-half. For inference, the process is almost the same as training, except we don't update the parameters. ## Algorithm 3 Training Require: Damp parameter α ∈ (0, 1), neural network parameters Φ, b, weight on classification loss w. for each epoch do for (p, y) in data do Let xh = {ph, y} where ph is the top-half of the image p and y is the label. Let xo be the bottom-half of the image po. q ∗ h = ForwardIteration(xo, Φ, b, α), notice that xh is not revealed to the network here. Get the estimated top half of the image and label {pˆh, yˆ} = q ∗ h . Calculate imputation loss and classification loss (1 − w)`r(pˆh, ph) + w`c(ˆ*y, y*). Update Φ, b using Algorithm 2. end for end for (a) Test data has 50% pixels ![9_image_0.png](9_image_0.png) randomly masked (b) Imputed pixel inference (without injection labels) (c) Original image (d) Imputation with deep RBM. Figure 4: CIFAR-10 pixel imputation using mDBM and deep RBM ![9_image_1.png](9_image_1.png) ![9_image_2.png](9_image_2.png) ![9_image_3.png](9_image_3.png) ![9_image_4.png](9_image_4.png) ## 4 Experimental Evaluation As a proof of concept, we evaluate our proposed mDBM on the MNIST and CIFAR-10 datasets. We demonstrate how to jointly model missing pixels and class labels conditioned on only a subset of observed pixels. On MNIST, we compare mDBM to mean-field inference in a traditional deep RBM. Despite being small-scale tasks, the goal here is to demonstrate joint inference and learning over what is still a reasonably-sized joint model, considering the number of hidden units. Nonetheless, the current experiments are admittedly largely a *demonstration* of the proposed method rather than a full accounting of its performance. We also show how our mean-field inference method compares to those proposed in prior works. On the joint imputation and classification task, we train models using our updates and the updates proposed by Krähenbühl & Koltun (2013) and Baqué et al. (2016), and perform mean-field inference in each model using all three update methods, with and without the monotonicity constraint. mDBM and deep RBM on MNIST For the joint imputation and classification task, we randomly mask each pixel independently with probability 60%, such that in expectation only 40% of the pixels are observed. The original MNIST dataset has one channel representing the gray-scale intensity, ranging between 0 and 1. We adopt the strategy of Van Oord et al. (2016) to convert this continuous distribution to a discrete one. We bin the intensity evenly into 4 categories {0*, . . . ,* 3}, and for each channel use a one-hot encoding of the category so that the input data has shape 4 × 28 × 28. We remark that the number of categories is chosen arbitrarily and can be any integer. Additional details are given in the appendix. The mDBM and deep RBM trained on the joint imputation and classification task obtain test classification accuracy of 92.95% and 64.23%, respectively. Pixel imputation is shown in Figure 3. We see that the deep RBM is not able to impute the missing pixels well, while the mDBM can. Importantly, we note however that for an apples-to-apples comparison, the test results in the RBM are generated using mean-field inference. The image imputation of RBM runs block mean-field updates of 1000 steps and the classification runs 2 steps, and increasing number of iterations does not improve test performance significantly. The RBM also admits efficient Gibbs sampling, which performs much better and is detailed in the appendix. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) Figure 5: MNIST and CIFAR10 pixel imputation using mDBM, when only the top half is shown to the model. **Left**: observed images; **middle**: imputation results; **right**: original images. Table 1: Squared `2 error (standard deviation) for MNIST image imputation. Images are observed 20%, 40%, 60%, 80% respectively. RBM outputs are bucketized into 10 bins. The errors are averaged over the whole dataset and the number of bins. The experiments are executed 5 times with independent random masks and the same models. Standard deviations are calculated across 5 runs. | Observation | 0.2 | 0.4 | 0.6 | 0.8 | | |---------------|-----------------|-----------------|-----------------|-----------------|----------------| | Method | mDBM | 53.310 (0.0776) | 23.330 (0.0204) | 13.140 (0.0234) | 5.936 (0.0102) | | RBM | 53.086 (0.0556) | 36.596 (0.0417) | 22.746 (0.0234) | 10.564 (0.0186) | | We report the image imputation `2 loss on MNIST in Table 1. We randomly mask p = {0.2, 0.4, 0.6, 0.8} portion of the inputs. For each p, the experiments are conducted 5 times where each run independently chooses the random mask. The model is trained to impute images given 40% pixels and is fixed throughout the experiments. Since the RBMs model Bernoulli random variables whereas mDBMs model distributions over a set of one-hot variables, we "bucketize"3the outputs of RBMs into bins of size 10. The `2 norm of the difference between bucketized imputations and the original images is computed. The norms are then divided by the number of bins and averaged over the whole MNIST dataset. In this way, we get {µ1*, . . . , µ*5} where each µiis the average `2 reconstruction error over the whole dataset, and the standard deviations are calculated over {µ1*, . . . , µ*5}. Our proposed method has a clear advantage over RBMs. We additionally evaluate mDBM on a task in which random 14×14 patches are masked. To obtain good performance on this task requires lifting the monotonicity constraint; we find that mDBM converges regardless (see appendix). mDBM can also extrapolate reasonably well, see Figure 5. mDBM and deep RBM on CIFAR-10 We evaluate mDBM on an analogous task of image pixel imputation and label prediction on CIFAR-10. Model architecture and training details are given in the appendix. With 50% of the pixels observed, the model obtains 58% test accuracy and can impute the missing pixels effectively (see Figure 4). The baseline deep RBM is trained to impute the missing pixels using CD-1 with number of neurons 3072-500-100. The imputation error is reported in Table 2 and the experiments are conducted in the same fashion as those on MNIST. Contrary to grayscale MNIST, RBM outputs are bucketized into 10 bins for each of the RGB channels on CIFAR-10. mDBMs also take bucketized images as inputs where each of the RGB channels is bucketized into 10 bins. 3That is, if the number of bins is 2 and the RBM outputs a probability p, the bucketized output is 0 if p < 0.5 and 1 otherwise. Table 2: Squared `2 error (standard deviation) for CIFAR10 image imputation. Images are observed 20%, 40%, 60%, 80% respectively. RBM outputs are bucketized into 10 bins for each of the RGB channels. mDBMs also take bucketized images as inputs where each of the RGB channels is bucketized into 10 bins. The errors are averaged over the whole dataset and the number of bins. The experiments are executed 5 times with independent random masks and the same models. Standard deviations are calculated across 5 runs. | Observation | 0.2 | 0.4 | 0.6 | 0.8 | | |---------------|------------------|------------------|-----------------|-----------------|-----------------| | Method | mDBM | 77.439 (0.0281) | 48.496 (0.0166) | 29.749 (0.0241) | 14.077 (0.0143) | | RBM | 139.375 (0.0231) | 103.615 (0.0315) | 68.845 (0.0219) | 34.339 (0.0291) | | | Inference | Krähenbühl | Baqué | mDBM | |------------------|--------------|---------|--------| | Train Krähenbühl | 0.0004 | 0.0061 | 0.0024 | | Baqué | 1.250 | 0.0059 | 0.0024 | | mDBM | 1.144 | 0.0057 | 0.0017 | Table 3: Relative update residual when monotonicity is enforced. ![11_image_0.png](11_image_0.png) Figure 6: Convergence speed of inference methods on a model trained with Krähenbühl's updates. Comparison of inference methods We conduct several experiments comparing our mean-field inference method to those proposed by Krähenbühl & Koltun (2013) and Baqué et al. (2016), denoted as Krähenbühl's and Baqué's respectively. While a full description of these methods and the experiments is left to the appendix, we highlight some of the key findings here. We train models using the three different update methods: ours, Krähenbühl's and Baqué's; we then perform inference using all three methods as well. A comparison to the regularized Frank-Wolfe method raised by Lê-Huu & Alahari (2021) can be found in the appendix. Table 3 shows the relative update residuals kq (t+1) h − q (t) h k/kq (t) h k after 100 steps of each inference method on each model. We observe that Krähenbühl's method diverges when the model was not trained using the corresponding updates, whereas Baqué's and ours converge on all three models, with our method converging more quickly. The improved convergence speed can also be seen in Figure 6). However, note that Baqué's method is not guaranteed to converge to the true mean-field fixed-point. As we show in the appendix (Figure 11c), on an untrained model our method converges to the true fixed-point while Baqué's does not. Future directions It is extremely useful to consider fundamentally different restrictions as have been applied to meanfield inference and graphical models in the past, and our work can lead to a number of interesting directions: (1) Theorem 3.1 is only a sufficient but not necessary condition for monotonicity. Improving this could potentially make our current monotone model much more expressive. (2) In Theorem 3.1, the parameter m > 0 describes how monotone the model is. Is it possible to use a m < 0 to ensure that the model is "boundedly non-monotone", but still enjoys favorable convergence property? (3) Our model currently only learns conditional probability. Is it possible to make it model joint probability efficiently? One way is to mimic PixelCNN: let P(x n 1 ) = Qn i=1 P(xn|x n−1 1). This is inefficient for us in both inference and training, is there a way to improve? (4) Although we have a fairly efficient implementation of proxα f , it is still slower than normal nonlinearities like ReLU or softmax. Is there a way to efficiently scale mDBMs? (5) Tsuchida & Ong (2022) explores the connection between PCA and DEQs, what are other probabilistic models that can also be expressed within the DEQ framework? (6) Bechmark mDBM image imputations together with Yoon et al. (2018); Li et al. (2019); Mattei & Frellsen (2019); Richardson et al. (2020). ## 5 Conclusion In this work, we give a monotone parameterization for general Boltzmann machines and connect its meanfield fixed point to a monotone DEQ model. We provide a mean-field update method that is proven to be globally convergent. Our parameterization allows for full parallelization of mean-field updates without restricting the potential function to be concave, thus addressing issues with prior approaches. Moreover, we allow complicated and hierarchical structures among the variables and show how to efficiently implement them. For parameter learning, we directly optimize the marginal-based loss over the mean-field variational family, circumventing the intractability of computing the partition function. Our model is evaluated on the MNIST and CIFAR-10 datasets for simultaneously predicting with missing data and imputing the missing data itself. As a demonstration of concept, we also deliver several illustrations of interesting future directions. ## References David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147–169, 1985. Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. *arXiv preprint arXiv:1909.01377*, 2019. Shaojie Bai, Vladlen Koltun, and J Zico Kolter. Multiscale deep equilibrium models. *arXiv preprint* arXiv:2006.08656, 2020. Pierre Baqué, Timur Bagautdinov, François Fleuret, and Pascal Fua. Principled parallel mean-field inference for discrete random fields. In *Proceedings of the IEEE Conference on Computer Vision and Pattern* Recognition, pp. 5848–5857, 2016. David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American statistical Association, 112(518):859–877, 2017. Robert M Corless, Gaston H Gonnet, David EG Hare, David J Jeffrey, and Donald E Knuth. On the lambertw function. *Advances in Computational mathematics*, 5(1):329–359, 1996. Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9268–9277, 2019. Justin Domke. Learning graphical model parameters with approximate marginal inference. *IEEE transactions* on pattern analysis and machine intelligence, 35(10):2454–2467, 2013. Laurent El Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, and Alicia Tsai. Implicit deep learning. SIAM Journal on Mathematics of Data Science, 3(3):930–958, 2021. Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. *Deep learning*, volume 1. MIT press Cambridge, 2016. Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. *Neural computation*, 14(8):1771–1800, 2002. Daphne Koller and Nir Friedman. *Probabilistic graphical models: principles and techniques*. MIT press, 2009. Philipp Krähenbühl and Vladlen Koltun. Parameter learning and convergent inference for dense random fields. In *International Conference on Machine Learning*, pp. 513–521. PMLR, 2013. Miguel Lázaro-Gredilla, Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou, Antoine Dedieu, and Dileep George. Query training: Learning a worse model to infer better marginals in undirected graphical models with hidden variables. *arXiv preprint arXiv:2006.06803*, 2020. Khuê Lê-Huu and Karteek Alahari. Regularized frank-wolfe for dense crfs: Generalizing mean field and beyond. *Advances in Neural Information Processing Systems*, 34, 2021. Steven Cheng-Xian Li, Bo Jiang, and Benjamin Marlin. Misgan: Learning from incomplete data with generative adversarial networks. *arXiv preprint arXiv:1902.09599*, 2019. Pierre-Alexandre Mattei and Jes Frellsen. Miwae: Deep generative modelling and imputation of incomplete data sets. In *International conference on machine learning*, pp. 4413–4423. PMLR, 2019. Mohammad Norouzi, Mani Ranjbar, and Greg Mori. Stacks of convolutional restricted boltzmann machines for shift-invariant feature learning. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2735–2742. IEEE, 2009. Max Revay, Ruigang Wang, and Ian R Manchester. Lipschitz bounded equilibrium networks. *arXiv preprint* arXiv:2010.01732, 2020. Trevor W Richardson, Wencheng Wu, Lei Lin, Beilei Xu, and Edgar A Bernal. Mcflow: Monte carlo flow models for data imputation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14205–14214, 2020. Ernest K Ryu and Stephen Boyd. Primer on monotone operator methods. *Appl. Comput. Math*, 15(1):3–43, 2016. Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In *Artificial intelligence and statistics*, pp. 448–455. PMLR, 2009. Idan Schwartz, Alexander G Schwing, and Tamir Hazan. High-order attention models for visual question answering. *arXiv preprint arXiv:1711.04323*, 2017. Russell Tsuchida and Cheng Soon Ong. Deep equilibrium models as estimators for continuous latent variables. arXiv preprint arXiv:2211.05943, 2022. Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In *International Conference on Machine Learning*, pp. 1747–1756. PMLR, 2016. Homer F Walker and Peng Ni. Anderson acceleration for fixed-point iterations. *SIAM Journal on Numerical* Analysis, 49(4):1715–1735, 2011. Ezra Winston and J Zico Kolter. Monotone operator equilibrium networks. *arXiv preprint arXiv:2006.08591*, 2020. Zhirong Wu, Dahua Lin, and Xiaoou Tang. Deep markov random field for image modeling. In *European* Conference on Computer Vision, pp. 295–312. Springer, 2016. Jinsung Yoon, James Jordon, and Mihaela Schaar. Gain: Missing data imputation using generative adversarial nets. In *International conference on machine learning*, pp. 5689–5698. PMLR, 2018. Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In *Proceedings* of the IEEE international conference on computer vision, pp. 1529–1537, 2015. ## A Appendix A.1 Deferred Proofs Theorem 3.2. Given f *as defined in Equation* (8), α ∈ (0, 1)*, and* x ∈ R k*, the proximal operator* proxα f (x) is given by $$\operatorname*{prox}_{f}^{\alpha}(x)_{i}={\frac{\alpha}{1-\alpha}}W\left({\frac{1-\alpha}{\alpha}}\exp\left({\frac{x_{i}-\alpha+\lambda}{\alpha}}\right)\right),$$ _solution chosen to ensure that the resulting $\sum_{i}\operatorname*{prox}_{f}^{\alpha}$ is a $\alpha$-function._ where λ ∈ R *is the unique solution chosen to ensure that the resulting* Pi proxα f (xi) = 1*, and where* W(·) is the principal branch of the Lambert W function. Proof. By definition, the proximal operator induced by f (the same f in Equation (8)) and α solves the following optimization problem: $\min\limits_{z}\quad\frac{1}{2}\|x-z\|^{2}+\alpha\sum\limits_{i}z_{i}\log z_{i}-\frac{\alpha}{2}\|z\|^{2}$ s.t. $z_{i}\geq0,\ i=1,\ldots,d,$ $\sum\limits_{i}z_{i}=1$ of which the KKT condition is − xi + zi + α + α log zi − αzi + λ − µi = 0, for i ∈ [d] µi ≥ 0, zi ≥ 0,X i∈[d] µizi = 0,X d i=1 zi = 1. We have that µi = 0 is feasible and the first equation of the above KKT condition can be massaged as − xi + zi + α + α log zi − αzi + λ − µi = 0 ⇐⇒ (1 − α)zi + α log zi = xi − α − λ ⇐⇒ (1 − α)zi + α log zi α= xi − α − λ α ⇐⇒ exp (1 − α)zi + α log zi α = exp xi − α − λ α ⇐⇒ zi exp 1 − α αzi = exp xi − α − λ α ⇐⇒ 1 − α αzi exp 1 − α αzi = 1 − α αexp xi − α − λ α ⇐⇒ 1 − α αzi = W 1 − α αexp xi − α − λ α where W is the lambert W function. Notice here zi > 0. Our primal problem is convex and Slater's condition holds. Hence, we conclude that $$z_{i}=\frac{\alpha}{1-\alpha}W\left(\frac{1-\alpha}{\alpha}\exp\left(\frac{x_{i}-\alpha-\lambda}{\alpha}\right)\right).$$ ## A.2 Convolution Network It is clear that the monotone parameterization in Section 3 directly applies to fully-connected networks, and all the related quantities can be calculated easily. Nonetheless, the real power of the DEQ model comes in $\square$ when we use more sophisticated linear operators like convolutions. In the context of Boltzmann machines, the convolution operator gives edge potentials beneficial structures. For example, when modeling the joint probability of pixels in an image, it is intuitive that only the nearby pixels depend closely on each other. Let A ∈ R k×k×r×r denote a convolutional tensor with kernel size r and channel size k, let x denote some input. For a convolution with stride 1, the block diagonal elements of AT A simply form a 1 × 1 convolution. In particular, we apply the convolutions $$-\,A^{T}(A(x))+\tilde{A}(x)$$ $$(18)$$ $$(19)$$ T(A(x)) + A˜(x) (18) where $\bar{A}$ is a $1\times1$ convolution given by: $$\tilde{A}[:,:]=\sum_{i,j}A[:,:,i,j]^{T}A[:,:,i,j].$$ T A[:, :*, i, j*]. (19) We can normalize by the spectral norm of A˜ term to ensure strong monotonicity. Since A˜ can be rewritten as a k × k matrix and k is usually small, its spectral norm can be easily calculated. It takes more effort to work out convolutions with stride other than 1. Specifically, the block diagonal terms do not form a 1 × 1 convolution anymore, instead, the computation varies depending on the location. It is easier to see the computation directly in the accompanying code. Grouped channels It is crucial to introduce the concept of grouped channels, which allows us to represent multiple categorical variables in a single location, such as the three categorical variables representing the three (binned) color channels of an RGB pixel. In this case, each of the three RGB channels will be represented by a different group of k channels representing the k bins. The grouping is achieved by having the nonlinearity function (softmax) applied to each group separately. We remark that the convolutions themselves are not grouped, otherwise none of the red pixels would interact with green or blue pixels, etc. Instead, we want all RGB channels to interact with each other (except that channel i at position (*j, k*) does not interact with itself). That means in Equation (3), the blkdiag(AˆT Aˆ) is grouped in the following way. Recall that this block diagonal term has element of size ki × ki for i ∈ [n]. This parameterization has only 1 group. With g groups, the element of the block diagonal matrix then has size ki1 × ki1 , . . . , kig × kig P for i ∈ [n], where j∈[g] kij = ki. We also observe empirically that grouping the latent variables improves the performance. ## A.3 Efficient Computation Of Proxα F The solution to the proximal operator in damped forward iteration given in Theorem 3.2 involves the Lambert W function, which does not attain an analytical solution. In this section, we show how to efficiently calculate the nonlinearity σ(xi), as well as its Jacobian matrix for backward iteration. Let f(y) = α ) $=\frac{\alpha}{1-\alpha}W\left(\frac{1-\alpha}{\alpha}\right)$ . αexp yα − 1, and we have $$\begin{array}{r c l c r c l c r c l c r c l}{{x}}&{{=}}&{{\log f(y)}}&{{=}}&{{\log{\frac{\alpha}{1-\alpha}}}}&{{+}}&{{\log\mathrm{e}^{y/\alpha-1}}}&{{+}}&{{\log{\frac{1-\alpha}{\alpha}}}}&{{-}}&{{W\left({\frac{1-\alpha}{\alpha}}\exp\left({\frac{y}{\alpha}}-1\right)\right),}}&{{}}&{{}}&{{}}&{{}}&{{}}&{{}}&{{}}\end{array}$$ where the last equality uses the identity log(W(x)) = log x−W(x). Rewrite W1−α αexp yα − 1 = f(y) $$f(y){\frac{1-\alpha}{\alpha}}$$ and massage the terms, we have that solving log f(y) is equivalent to finding the root of $\quad1$)). $$h(x)=y-\alpha-e^{x}(1-\alpha)-\alpha x.$$ Direct calculation shows that h 0(x) = −α − (1 − α)e x and h 00(x) = −(1 − α)e x. Note here y is the input and it is known to us, and x is a scalar. Hence we can efficiently solve the root finding problem using Halley's method. For backpropagation, we need dx dy , which can be computed by implicit differentiation: $$h(x)=y-\alpha-e^{x}(1-\alpha)-\alpha x=0$$ $$\Longrightarrow\frac{dx}{dy}=\frac{1}{\alpha+(1-\alpha)e^{x}}=\frac{1}{y-\alpha x}.$$ Now we can find λ s.t Pi zi = 1 using Newton's method on g(λ) = Pi e log(f(xi+λ)) − 1 = 0. Note this is still a one-dimensional optimization problem. A direct calculation shows that dg dλ =Pi e log(f(xi+λ)) d log(f(xi+λ) dλ , and above we have already calculated that $${\frac{d\log(f(x_{i}+\lambda)}{d\lambda}}={\frac{d x^{*}}{d y}}={\frac{1}{y+\lambda-\alpha x}}.$$ For backward computation, by the chain rule, we have: $$\frac{d e^{\log f(x_{i}+\lambda)}}{d x_{i}}=e^{\log f(x_{i}+\lambda)}\frac{d\log(f(x_{i}+\lambda))}{d x_{i}}$$ $$=e^{\log f(x_{i}+\lambda)}\frac{1+d\lambda/d x_{i}}{x_{i}+\lambda-\alpha\log(f(x_{i}+\lambda))},$$ where the last step is derived by implicit differentiation. Now to get *dλ/dx*i, notice that by applying the implicit function theorem on p(*x, λ*(x)) = Pi e log(f(xi+λ)) − 1 = 0, we get $${\frac{d\lambda}{d x_{i}}}=-\left({\frac{d p}{d\lambda}}\right)^{-1}{\frac{d p}{d x_{i}}}.$$ Thus we have all the terms computed, which finishes the derivation. ## B Additional Experiments And Details Here we provide the model architectures and experiment details omitted in the main text. ## B.1 Details Model architectures For MNIST experiments (except for the extrapolation), using the notation in Equation (10), the mDBM consists of a 4-layer deep monotone DEQ with the following structure: $$\begin{array}{l l l l}{{\left[{\begin{array}{l l l l}{A_{11}}&{0}&{0}&{0}\\ {A_{21}}&{A_{22}}&{0}&{0}\\ {A_{31}}&{A_{32}}&{A_{33}}&{0}\\ {0}&{0}&{A_{43}}&{A_{44}}\end{array}}\right],}}\end{array}$$ where A11 is a 20 × 20 × 3 × 3 convolution, A22 is a 40 × 40 × 3 × 3 convolution, A21 is a 40 × 20 × 3 × 3 convolution with stride 2, A33 is a 80 × 80 × 3 × 3 convolution, A31 is a 80 × 20 × 3 × 3 convolution with stride 4, A32 is a 80 × 40 × 3 × 3 convolution with stride 2, A43 is a (80 · 7 · 7) × 10 dense linear layer, and A44 is a 10 × 10 dense linear layer. The corresponding variable q as in Equation (9) then has 4 elements of shape (20 × 28 × 28),(40 × 14 × 14),(80 × 7 × 7),(10 × 1). When applying the proximal operator to q, we use 1, 10, 20, 1 as their number of groups, respectively. The deep RBM consists of 3-layers where the first hidden layer has 300 neurons, and the last hidden layer (representing the digits) has 10 neurons, amounting to in total 239,294 parameters. For comparison, the mDBM has 192,650 parameters. The mDBM used on CIFAR-10 is the same as for the MNIST experiments with the following exceptions: A11 is a 20×20×3×3 convolution, A22 is a 24×24×3×3 convolution, A21 is a 24×20×3×3 convolution with stride 2, A33 is a 48 × 48 × 3 × 3 convolution, A31 is a 48 × 20 × 3 × 3 convolution with stride 4, A32 is a 48 × 24 × 3 × 3 convolution with stride 2, A43 is a (48 · 8 · 8) × 10 dense linear layer, and A44 is a 10 × 10 dense linear layer. The corresponding variable q as in Equation (9) then has 4 elements of shape (60 × 32 × 32),(24 × 16 × 16),(48 × 8 × 8),(10 × 1). When applying the proximal operator to q, we use 1, 6, 12, 1 as their number of groups, respectively. For the MNIST extrapolation experiments, we use mDBM of the following structure: $$\begin{bmatrix}\mathbf{A}_{11}&0&0&0\\ \mathbf{A}_{21}&\mathbf{A}_{22}&0&0\\ \mathbf{A}_{31}&\mathbf{A}_{32}&\mathbf{A}_{33}&0\\ \mathbf{A}_{41}&\mathbf{A}_{42}&\mathbf{A}_{43}&\mathbf{A}_{44}\end{bmatrix}$$ , where A11 is a 4 × 4 × 3 × 3 convolution, A22 is a 40 × 40 × 3 × 3 convolution, A21 is a 40 × 4 × 3 × 3 convolution with stride 2, A33 is a 80 × 80 × 3 × 3 convolution, A31 is a 80 × 4 × 3 × 3 convolution with stride 4, A32 is a 80 × 40 × 3 × 3 convolution with stride 2, A41 is a (4 · 28 · 28) × 100 dense linear layer, A42 is a (40 · 14 · 14) × 100 dense linear layer A43 is a (80 · 7 · 7) × 100 dense linear layer, and A44 is a 100 × 100 dense linear layer. The corresponding variable q as in Equation (9) then has 4 elements of shape (4 × 28 × 28),(40 × 14 × 14),(80 × 7 × 7),(100 × 1). When applying the proximal operator to q, we use 1, 10, 20, 10 as their number of groups, respectively. The model used for CIFAR-10 extrapolation has the same structure, where A11 is a 30×30×3×3 convolution, A22 is a 60×60×3×3 convolution, A21 is a 60×30×3×3 convolution with stride 2, A33 is a 80×80×3×3 convolution, A31 is a 80 × 30 × 3 × 3 convolution with stride 4, A32 is a 80 × 60 × 3 × 3 convolution with stride 2, A41 is a (20 · 32 · 32) × 100 dense linear layer, A42 is a (60 · 16 · 16) × 100 dense linear layer A43 is a (80 · 8 · 8) × 100 dense linear layer, and A44 is a 100 × 100 dense linear layer. The corresponding variable q as in Equation (9) then has 4 elements of shape (30 × 32 × 32),(60 × 16 × 16),(80 × 8 × 8),(100 × 10). When applying the proximal operator to q, we use 1, 6, 8, 10 as their number of groups, respectively. mDBM Training details and hyperparameters Treating the image reconstruction as a dense classification task, we use cross-entropy loss and class weights 1−β 1−β ni with β = 0.9999 (Cui et al., 2019), where niis the number of times pixels with intensity i appear in the hidden pixels. For classification, we use standard cross-entropy loss. To enable joint training, we put equal weight of 0.5 on both task losses and backpropagate through their sum. For both tasks, we put τiΦq ∗ i into the cross-entropy loss as logits, as described in Equation (17). Since mean-field approximation is (conditionally) unimodal, the scaling grants us the ability to model more extreme distributions. To achieve faster damped forward-backward iteration, we implement Anderson acceleration (Walker & Ni, 2011), and stop the fixed point update as soon as the relative difference between two iterations (that is, kqt+1 − qtk/kqtk) is less than 0.01, unless we hit a maximum number of 50 allowed iterations. For proxα f and the damped iteration, we set α = 0.125 (Although one can tune down α whenever the iterations do not converge, empirically this never happens on our task). We use the Adam optimizer with learning rate 0.001. For MNIST, we train for 40 epochs. For CIFAR-10, we train for 100 epochs using standard data augmentation; during the first 10 epochs, the weight on the reconstruction loss is ramped up from 0.0 to 0.5 and the weight on the classification loss ramped down from 1.0 to 0.5; also during the first 20 epochs, the percentage of observation pixels is ramped down from 100% to 50%. Deep RBM Training details and hyperparameters The deep RBM is trained using CD-1 algorithm for 100 epochs with a batch size of 128 and learning rate of 0.01. Convergence of inference during training We note that, comparing to the differently-parameterized monDEQ in Winston & Kolter (2020), whose linear module suffers from drastically increasing condition number (hence in later epochs taking around 20 steps to converge, even with tuned α), our parameterization produces a much nicer convergence pattern: the average number of forward iterations over the 40 training epochs is less than 6 steps, see Figure 7. mDBM patch imputation experiments We train mDBM on the task of MNIST patch imputation. We randomly mask a 14 × 14 patch, chosen differently for every image, similar to the query training in Lázaro-Gredilla et al. (2020). To make the model class richer, we lift the monotonicity constraint, and find that the model converges regardless. Our model reconstructs readable digits despite potentially large chunk of missing pixels (Figure 8b). If the model is given the image labels as input injections, our model ![18_image_0.png](18_image_0.png) Figure 7: Convergence of forward-backward splitting. ![18_image_2.png](18_image_2.png) (a) Test data with a 14 × 14 patch ![18_image_1.png](18_image_1.png) masked (b) Imputation with a 14×14 patch masked, inference without injection labels Figure 8: MNIST pixel patch imputation using mDBM (c) Imputation with a 14×14 patch ![18_image_3.png](18_image_3.png) masked, inference with injection labels performs conditionaly generation fairly well (Figure 8c). These results demonstrate the flexibility of our parameterization for modelling different conditional distributions. Deep RBM results using Gibbs sampling The deep RBM is trained as before. For joint imputation and classification, the DBM uses Gibbs sampling of 10000 and 100 steps respectively, although the quality of the imputed image and test accuracy are insensitive to the number of steps. We randomly mask off 60% pixels, or a randomly selected 14 × 14 patch; the results are shown in Figure 9, and are better than when mean-field inference is used, (shown in Figure 3). In the experiment with 60% of pixels randomly masked, we also test the model on predicting the actual digit simultaneously. The test accuracy is 93.58%, comparable to the mDBM accuracy of 92.95%. Comparison of inference methods We conduct numerical experiments to compare our inference updating method to the ones proposed by Krähenbühl & Koltun (2013); Baqué et al. (2016), denoted as Krähenbühl's and Baqué's respectively. Krähenbühl's fast concave-convex procedure (CCCP) essentially decomposes to Equation (11), the un-damped mean-field update with softmax. This update only converges provably when Φ is concave. Baqué's inference method can be written as $$\mathbf{q}_{h}^{(t)}=\text{softmax}\left((1-\alpha)\log\mathbf{q}_{h}^{(t-1)}+\alpha(\mathbf{\Phi}_{hh}\mathbf{q}_{h}^{(t-1)}+\mathbf{\Phi}_{ho}\mathbf{x}_{o}+\mathbf{b}_{h})\right).\tag{20}$$ This algorithm provably converges despite the property of the pairwise kernel function. However, this procedure converges in the sense that the variational free energy keeps decreasing. Therefore their fixed point may not be the true mean-field distribution Equation (7). In this experiment, we train the models using three different updating methods, and perform inference using three methods as well, with and without the monotonicity condition. ![19_image_0.png](19_image_0.png) ![19_image_1.png](19_image_1.png) ![19_image_2.png](19_image_2.png) (a) 60% pixels are randomly masked. From left to right: imputed image, true image, masked image. ![19_image_3.png](19_image_3.png) (b) 14 × 14 patches are randomly masked. From left to right: imputed image, true image, masked image. Figure 9: RBM for image imputation using Gibbs sampling We also compare the convergence of our method to the regularized Frank-Wolfe method in Lê-Huu & Alahari (2021). Their update step can be written as $$\mathbf{q}_{h}^{(t+1)}=(1-\alpha)\mathbf{q}_{h}^{(t)}+\alpha\,\mathrm{softmax}\left(\frac{1}{\lambda}(\mathbf{\Phi}_{h h}\mathbf{q}_{h}^{(t)}+\mathbf{\Phi}_{h o}\mathbf{x}_{o}+\mathbf{b}_{h})\right).$$ ![19_image_4.png](19_image_4.png) We use λ = 0.7 as in their paper. Our method converges faster than the FW method. See the result in Figure 10. Figure 10: Convergence of our method vs. the regularized FW. This experiment is done using the same setup as in Figure 11. Krähenbühl's and Baqué's methods often do not converge in the backward pass (there's no theoretical guarantees neither). To rule out the impact of the backward iteration, during training we directly update use the gradient of the forward pass, instead of using a backward gradient hook to compute Equation (16). Figure 12 and Figure 13 demonstrate how the three update methods impute missing pixels when trained with different update rules, with and without the monotonicity condition, respectively. Krähenbühl's usually does | Inference | Krähenbühl | Baqué | Our | |------------------|--------------|---------|--------| | Train Krähenbühl | 0.0005 | 0.0065 | 0.0024 | | Baqué | 1.0924 | 0.0119 | 0.0042 | | mDBM | 1.1286 | 0.0065 | 0.0022 | Table 4: Relative update residual when monotonicity is not enforced ![20_image_0.png](20_image_0.png) Figure 11: TV distance and convergence speed not converge when the model is trained with our method or Baqué's, whereas the other two methods impute the missing pixels well. The classification results are presented in Table 5 and Table 6. Notice that when trained with our method or Baqué's, the convergence issue of Krähenbühl's leads to horrible classification accuracy. Our method is superior to other inference methods when the model is trained in a different update fashion. For example, if the model is trained by using Krähenbühl's, it makes sense that the model performs the best if the inference is also Krähenbühl's since the parameters are biased toward that particular inference method. However, our method in this case outperforms Baqué's. After these methods halt and return q T h , we run one more iteration of $${\mathbf{q}}_{h}^{T+1}=\mathrm{softmax}\left({\mathbf{\Phi}}_{h h}{\mathbf{q}}_{h}^{T}+{\mathbf{\Phi}}_{h o}{\mathbf{x}}_{o}+{\mathbf{b}}_{h}\right),$$ , (21) and record the relative update residual kq T +1 h − q T h k/kq T h k for randomly selected 4000 MNIST images. The results are listed in Table 3 and Table 4. To alleviate the effect of numerical issues, we strength the convergence condition to either the relative residual is less than 10−3 or the number of iterations exceeds 100 steps. $$(21)$$ ![21_image_0.png](21_image_0.png) Figure 12: Training and inference using all three update rules with 40% observed pixels *with* the monotonicity condition. The labels on each row represent the training update rule, and the labels on the columns represent the inference update rule. | Inference | Krähenbühl | Baqué | Our | |------------------|----------------|-----------------|-----------------| | Train Krähenbühl | 0.042 (0.0013) | 0.114 (0.0019) | 0.0498 (0.0014) | | Baqué | 0.958 (0.0013) | 0.038 (0.0010) | 0.034 (0.0012) | | mDBM | 0.946 (0.0024) | 0.0425 (0.0016) | 0.0412 (0.0017) | Table 5: Classification error (standard deviation) when monotonicity is enforced It appears in Table 3 and Table 4 that although our method has a much lower residual compare to Baqué's, both of them seem small and convergent. This is because the "optimal" fixed point in this setting on MNIST might be unique and both methods happen to converge to the same point. However, this is in general not true. We compare our method vs Baqué's on 400 randomly selected MNIST test images with 40% pixels observed, and perform mean-field update until the relative residual of [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001] is reached (without step constraint), respectively. Then we measure the TV distance between the distributions computed by these two methods on the remaining 60% pixels, as well as the convergence speed. The results are demonstrated in Figure 11. One can see that when the model is trained (using Krähenbühl's, Figure 11a), the TV distance converges to 0 as the tolerance decreases. However, when the model is just initialized (but still constrained to be monotone), the TV distance remains large (Figure 11c). Even though in this case the optimal fixed point may be unique, our method is still superior to Baqué's: it takes us less iterations till convergence, despite whether the model is trained or not. ![22_image_0.png](22_image_0.png) Figure 13: Training and inference using all three update rules with 40% observed pixels *without* the monotonicity condition. The labels on each row represent the training update rule, and the labels on the columns represent the inference update rule. Table 6: Classification error (standard deviation) when monotonicity is not enforced | Inference | Krähenbühl | Baqué | Our | |------------------|----------------|----------------|----------------| | Train Krähenbühl | 0.035 (0.0017) | 0.189 (0.0023) | 0.051 (0.0015) | | Baqué | 0.762 (0.0013) | 0.041 (0.0013) | 0.055 (0.0012) | | mDBM | 0.90 (0.0002) | 0.063 (0.0021) | 0.036 (0.0017) |
Review 1: Summary: Sorry for the late review here. This paper proposes monotone deep boltzmann machines (mDBMs), which are a restriction of deep Boltzmann machines to the class of graphs that have no self-loops within a layer, but can connect both forwards and backwards in the layers. mDBMs have the favorable property that they can be parameterized as a deep equilibrium model (DEQ), which gives reasonably efficient training procedures and global solutions. Experiments are performed in MNIST and CIFAR-10 in an unsupervised fashion, demonstrating that this inference procedure is quite flexible and reasonably accurate. Strengths and Weaknesses: Strengths: - The proposed method seems to be technically sound and is well explained mathematically. It also draws from a much newer, powerful class of DEQ models that seem to work well in practice. - I like the idea of constructing a highly structured parameterization to ensure that the necessary conditions are satisfied immediately. Weaknesses: - The experiments seem to be quite limited, especially compared to the current powerful methods of diffusion models in generative modelling and even DEQ models, which can be used for sequence modelling as well. I guess I'm asking what can a (m)DBM do that these approaches cannot? - There's no clear experiments on convolutional nets (I could be missing them?) although that is listed as a selling point of the approach. Requested Changes: Requested experiments - My understanding is that DEQ based models (e.g. https://arxiv.org/pdf/2006.08591.pdf) and some of the DBM results (e.g. https://www.scinapse.io/papers/2100495367) tend to have experiments in classification or regression using the DBM as a feature constructor. If this is possible with the mDBM, could this be done here? That is, classifying cifar-10 classes based on the states from the mDBM, not just in-filling. - Figure 4: what is the extrapolation performance of the mDBM on MNIST and CIfAR-10? For example, mask the left 50% of pixels and then attempt to generate the right half. These are mostly presentation based comments: - pg 4: redefine phi, I believe it's the connection network of the graph? - pg 5: "We see that q_h is the ...": explain this further as it seems both important and not obvious at first glance - pg 5: where is the proof of proposition 3.1? mark this in the main text - eq 8: use \mathbb{I} instead of I - pg 6: "height x ... x " use $x$ for the x-es here - pg 7: how are m and L determined prior to solving the system? I think m might be mentioned above, but L is not - pg 7: give a sentence describing Halley's method - pg 7: "extend their work to the softmax nonlinear operator": I thought that your work used the proximal operator instead of the softmax one? - pg 8: "This backward pass can also be computed via an iterative approach..." explain what their approach is - tables 1,2: Why do the errors go down as the masking percentage goes up? - tables 1,2: please replace "our" with mDBM - table 3: What does relative residual update mean in this context? I'm also more generally confused by the "comparison of inference methods" section. Is the conclusion that your method and Baque's methods are much more globally accurate? - pg 11: "although we have a fairly efficient implementation ..." why not one step newton approach or construct a polynomial/ taylor series approximation? It seems like there are well-known approximation methods for the Lambert W function, like [this one](https://nvlpubs.nist.gov/nistpubs/jres/65B/jresv65Bn4p245_A1b.pdf). Broader Impact Concerns: n/a ================================================== Review 2: Summary: The Authors present a new approach for deep Boltzmann machines, which leverages recent advances on equilibrium models to propose a novel inference scheme for DBMs. This is achieved by applying the results of Baie et al. (2019) and Winston & Kolter (2020) regarding fixed-point convergence of infinitely deep neural networks with input injection (of the class $\sigma(Wz+Ux+b)$) to DBMs, allowing to derive a scalable variational inference scheme. This involves constraining the linear model to be compatible with the DBM architecture (i.e. graphical model with no-loops) and to ensure the requirement for Winston & Kolter (2020) to hold (i.e. monotonicity). After an in-depth explanation of the proposed method, the Authors conclude the paper with a small section with experiments on MNIST and CIFAR datasets, mostly on image imputation and marginally on image classification. Strengths and Weaknesses: **Strengths**: - The paper is overall well written paper, relatively easy to follow. - Elegant formulation and nice application of recent advances on deep learning methods to more classic models, like DBMs - Some implementation details needed to apply the results in Winston & Kolter (2020) are carefully explained, providing from a methodological point of view a nice contribution **Weaknesses** - Limited experimental evaluation (see below) Requested Changes: - Evidence of convergence to fixed point: beyond the theoretical guarantees, would it be possible to show the fixed point convergence empirically? - Effect of temperature: you quickly mentioned some temperature scaling to the output marginals but you didn't, as far as the paper goes, tested different configurations. Didn't you find some pathological situations where this was necessary? What would happen with different setups? - Summary of the method and algorithm: the developed method involves several steps and different parameterizations and approximations; it would be nice if you could provide a paragraph at the end of section 3 to summarize the proposed method, maybe with the aid of a pseudo-algorithms. - Additional experimental evaluation: the paper for the moment has a limited experimental campaign, based only on 2 datasets and 1 comparison (the naïve RBM). One possibility is to benchmark mDBM with other generative models for e.g. image imputation (see 1-4 for some ideas). To be clear, for me the paper has value beyond the ranking on this benchmark, but I would encourage the Authors to explore with different models. This would be also a good opportunity to briefly discuss and compare computational aspects of your proposal. (1) GAIN: Missing data imputation using generative adversarial nets. ICML 2018 (2) MisGAN: learning from incomplete data with generative adversarial networks. ICLR 2019 (3) MIWAE: Deep generative modelling and imputation of incomplete data sets. ICML 2019 (4) MCFlow: Monte Carlo Flow Models for Data Imputation. CVPR 2020 Broader Impact Concerns: None ================================================== Review 3: Summary: Boltzmann machines are a classical fully-connected model in which computing (properties of, or sampling from) stationary distributions is intractable. Restricted Boltzmann machines and their deep counterparts wherein connections are organised in a layered structure allow for tractable inference via a form of Gibb's sampling called contrastive divergence. The authors consider another type of connectivity structure that allows for tractable inference. They call the structure the monotone DBM, where arbitrary self-connections are allowed in each layer but the weights are restricted in such a way as to allow a unique fixed point. This is achieved through a connection to recently proposed implicit layers called deep equilibrium models, and in particular, monotone deep equilibrium models with special activation functions. Monotone DBMs are applied to joint image completion and classification tasks. Strengths and Weaknesses: **Strengths:** - The question of whether more general Boltzmann Machines than RBMs admit efficient inference procedures is an interesting one. This paper answers that question in the positive, although perhaps in a somewhat restricted setting. It leverages other parts of the literature, namely Deep Equilibrium Models, in order to do so, which I think is nice. - The paper is mostly well written. Excluding isolated instances, it is easy to follow the train of thought and notations of the authors. **Weaknesses:** - Section 3.3 does not clearly describe the conversion of ``neural network convolution operators'' to parameterisation in terms of $A$. I do appreciate that this is a difficult thing to convey, due to the various subscripts involved. Also the mention of the spectral normalisation of the neural net style convolution operator. I encourage the authors to spend some time thinking about the best possible way to communicate this, because if this is done well, it will be very useful for other researchers. Deferring the explanation to the Appendix is suboptimal. - As the authors admit in the abstract, the work is largely at a conceptual stage. The datasets and problems considered are largely toy from a ``deep learning perspective'', but not from a `graphical model' perspective. I think it is good that the authors acknowledged this weakness in the abstract, experimental evaluation, and somewhere else towards the beginning, and I do not see this as a substantial enough weakness to prevent publication at TMLR. **Questions** (for clarification --- not necessarily strengths or weaknesses): - First paragraph. Are DBMs necessarily discrete? I seem to recall some versions allow for continuous-valued nodes. - Figure 1, right. Why did you choose to draw a general BM in this way? It appears as though what you have drawn is not a complete graph, unless I misunderstand the notation or terminology. For example, the top right node is not connected to the top middle node. I am more familiar with the layout where the nodes are arranged on a circle, and every node is connected to every other node. - Equation (4). What is $\Phi_{ij}$? The matrix $A_i$ is a $d \times k_i$ matrix, so I guess $\Phi_{ij}$ should be a $k_i \times k_j$ matrix? So then the ``0'' listed in the second line of equation (4) is actually $0_{k_i \times k_j}$? By the way, I think there is a minor typo in the text two lines after equation (1). You write $\Phi_{i,j}$ but I think it should be $\Phi_{ij}$. - The monotone DEQ construction is guaranteed to admit a unique fixed point under a certain condition like $I - \Phi \succ m I$, which translates to a certain condition on $A$ or $A$ hat. I believe these conditions are easy to impose by always enforcing the spectral constraint and then running any gradient based optimiser. Is that true? Or does the gradient based optimiser have to explicitly enforce a condition on the parameters it is learning? - Perhaps some of the works below and the others that you can find might be useful for your aspiration (4) in Future directions. Namely, other activation functions can be obtained using other frameworks that still allow for the existence of a unique fixed point. Some of these also admit probabilistic interpretations. All in all, I am leaning to accept this paper. I think some of the audience of TMLR will find it very interesting, and I find the claims to be without any serious issues. Requested Changes: **Minor recommendations:** - Start of section 2. "Deep equilibrium models, especially their convergent version, the monotone". The phrase "their convergent version" seems to imply there is a singular convergent version, but there exist multiple. For example, [1-4] and others that are recently published. - Theorem 3. The first part of the theorem 1) $\Phi_{ii} = 0$ (which I believe should be a matrix of zeros) is trivial, by (4). Why include it in the theorem statement? - First equation in section 3.2. Is $E$ the set of all edges? Is $o$ the set of observed nodes? - I find the notation where the fixed point of a DEQ $z(x)$ is an explicit function of $x$, but then later in for example equation (15) it is not. I suggest picking one notation and keeping it consistent. - Text above equation (17). ``owning'' should be ``owing''. - Figure 3. Why are (a) and (d) different figures? It seems like a different seed of a random mask was applied to each. And I believe (c) and (f) are the same. Can you make this figure where there is a single figure instead of (a) and (d), and a single figure instead of (c) and (f)? - Why did you compare with mean-field updates on the RBM, if Gibbs sampling/contrastive divergence is the de facto standard? Perhaps move the result from the appendix to the main text? [1] Ezra Winston and J Zico Kolter. Monotone operator equilibrium networks. Advances in Neural Information Processing Systems, 33:10718–10728, 2020. [2] Max Revay, Ruigang Wang, and Ian R Manchester. Lipschitz bounded equilibrium networks. arXiv preprint arXiv:2010.01732, 2020. [3] Laurent El Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, and Alicia Tsai. Implicit deep learning. SIAM Journal on Mathematics of Data Science, 3(3):930–958, 2021. [4] Tsuchida, R., & Ong, C. S. (2022). Deep equilibrium models as estimators for continuous latent variables. arXiv preprint arXiv:2211.05943. Broader Impact Concerns: None ================================================== Review 4: Summary: The authors take inspiration of recent work on DEQs allowing for provable convergence and extend these ideas to deep Boltzmann machines. They provide a restricted set of deep Boltzmann machines that crucially allows for recurrent connections. They then provide theory allowing for provable convergence of their mean-field updates and practical insights into how to enable fast inference with their model. The theory is accompanied with classic experiments on vision datasets (MNIST, CIFAR10). Strengths and Weaknesses: The paper is well written and presents the ideas and novelties (and shortcomings, connections to related work) in a transparent manner. Although I did not check the theoretical claims thoroughly, in that respect the paper is quite dense and requires quite a lot of background knowledge, the presentation of the results is concise and seems correct. Arguable the theoretical novelty is limited but I still like the new connection to DBMs, and the novel theoretical results that allow for this bridge. I think that the paper is well executed and will be well received and appreciated. Requested Changes: I have no requested changes although a couple of suggestions. It would be nice to get a bit better feeling of the algorithm and how it compares to related work: How fast/slow is the algorithm (and the compared methods) in wall-clock time? It would be nice how long the different components scale with network / dataset size. It would be useful to provide pseudocode for training and inference to allow for an overview of the different algorithmic components. In general, the paper is very much concerned about provable convergence. As mentioned by the authors (and in related work), quite surprisingly at least for me, the dynamical systems described by the DEQ / your updates seem often to converge without additional regularisation or more elaborate intervention. Could you maybe elaborate on that? I think this part falls a bit short in the paper. Broader Impact Concerns: - ================================================== Metareview: Recommendation: Accept as is Comment: This is generally a strong paper with some interesting ideas that I believe is highly suitable for publication at TMLR. Though reviewers originally raised some concerns about experimental comparisons, these were all things that were either adequately addressed in the rebuttal or that the reviewers did not feel should prohibit acceptance. As the authors have already made updates to the paper based on the reviewers' comments and there are no significant issues still outstanding, I recommend accepting the paper "as is". ==================================================
# Evaluating Medirl: A Replication And Ablation Study Of Maximum Entropy Deep Inverse Reinforcement Learning For Human Social Navigation Anonymous authors Paper under double-blind review ## Abstract In this study, we enhance the Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) framework, targeting its application in human-robot interaction (HRI) for modeling pedestrian behavior in crowded environments. Our work is grounded in the pioneering research by Fahad, Chen, and Guo, and aims to elevate MEDIRL's efficacy in real-world HRI settings. We replicated the original MEDIRL model and conducted detailed ablation studies, focusing on key model components like learning rates, state dimensions, and network layers. Our findings reveal the effectiveness of a two-dimensional state representation over a three-dimensional approach, significantly improving model accuracy for pedestrian behavior prediction in HRI scenarios. These results not only demonstrate MEDIRL's enhanced performance but also offer valuable insights for future HRI system development, emphasizing the importance of model customization to specific environmental contexts. Our research contributes to advancing the field of socially intelligent navigation systems, promoting more intuitive and safer human-robot interactions. Our paper focuses on the reproducibility of studies within the domain of human-robot interaction (HRI) by revisiting and expanding upon the groundbreaking work of Muhammad Fahad, Zhuo Chen, and Yi Guo in their study on maximum entropy deep inverse reinforcement learning (MEDIRL) (Fahad et al. (2018)) for understanding human navigation behaviors in crowded environments. Our objective is to rigorously retest and augment their findings, emphasizing the need for robust and socially intelligent navigation systems in HRI scenarios. Our re-experimentation process involves: 1. **Comprehensive Replication and Validation**: We aim to replicate the original methodology while conducting a thorough validation process, ensuring the reliability and applicability of the MEDIRL model in real-world HRI scenarios. 2. **In-Depth Component Analysis**: Our focus is on dissecting and analyzing the individual components of the MEDIRL model through ablation studies. These studies involve the selective removal or alteration of critical elements, such as learning rate, state dimensions, network layers, and the loss function, to understand their impact on the model's performance. 3. **Refinement and Enhancement**: We seek to refine the MEDIRL model by optimizing critical parameters, learning strategies, and eliminating biases. Our goal is to improve the model's robustness and adaptability, ensuring its deployment in diverse HRI scenarios while adhering to social norms and safety protocols. 4. **Deeper Insights**: The results of our ablation studies will provide deeper insights into the model's performance dynamics, shedding light on the intricate mechanisms at play within the MEDIRL framework. Ultimately, our experimentation serves as a testament to the pursuit of knowledge, with the ambition to redefine and fortify the pathways to socially intelligent navigation. ## 0.0.1 Scope Of Reproducibility Recreating the original MEDIRL framework, as outlined in the research paper, proved challenging due to the lack of comprehensive documentation. Additionally, the absence of a publicly available GitHub repository with the necessary data required us to independently develop the algorithm, using the limited pseudocode provided in the paper as guidance. The lack of substantial information about the Social Affinity Map (SAM) feature map added to the complexity of our replication efforts. Unfortunately, the paper did not provide a reference or access to the dataset used, which further complicated our task. ## 0.0.2 Methodology To reproduce the research paper's results, we employed a stepwise approach. Initially, we independently generated the MEDIRL model, relying on our interpretation of its implementation. Following this, we conducted ablation studies to break down its individual components and functions. We additionally optimized our efforts by subsetting the provided data and reducing the number of epochs, enabling us to execute the code on standard computing resources (we used a MacBook Pro 2018 i7 chip). To ensure future reproducibility and enhanced accessibility, we seamlessly integrated our code into Dags Hub (https://dagshub.com/MLPurdue/hackathonf23-Stacks), along with data versioning via DVC and metrics tracked via MLFlow. It is important to note that we chose to omit the presented SAM Feature Map to focus solely on the capabilities of the Maximum Entropy Deep Inverse Reinforcement Learning Model. As such the comparisons we provide will be between the metrics that we gather, as to account for the differing manner of data processing. ## 0.0.3 Results We prioritized the consideration of the average displacement from the model's predicted trajectory to the trajectory that the human in the testing data takes. The ranking from lowest displacement to highest displacement is as follows: Removed State Dimension, Original, Removed Discount Factor, Removed Hidden Layer, Removed Max Entropy and replaced with Mean Squared Error, Leaky ReLu instead of ReLU for activation. ## 1 Introduction In the realm of human-robot interaction (HRI), the confluence of humans and autonomous entities within shared spaces marks a paradigm shift in technological advancements (Kosuge and Hirata (2004)). This coexistence necessitates the development of robust and socially intelligent navigation systems, ensuring not just efficient movement but also safety, user acceptance, and the seamless integration of robots into human spaces. Within this dynamic landscape, the study Learning How Pedestrians Navigate: A Deep Inverse Reinforcement Learning Approach, by Fahad, Chen, and Guo (Fahad et al. (2018)) presents a pioneering methodology that harnesses maximum entropy deep inverse reinforcement learning (MEDIRL) to understand and replicate socially acceptable human navigation behaviors. This groundbreaking research underscores the essential need for robots to navigate human-centric environments while adhering to social norms and conventions, thus fostering a natural and intuitive human-robot interaction (Yao et al. (2021)). The Fahad, Chen, and Guo study, which initially introduced the MEDIRL framework, serves as a cornerstone in this transformative domain. Their work, focusing on capturing and modeling human navigation behaviors in crowded settings, laid the foundation for leveraging intricate datasets of human pedestrian trajectories, a nonlinear reward function facilitated by deep neural networks, and the integration of social affinity maps (SAM) for nuanced navigation decision-making. Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) holds a central position as a crucial machine learning and reinforcement learning framework in the field of human-robot interaction (HRI). It specifically focuses on the advancement of socially intelligent navigation. Within this multifaceted framework, the primary objective revolves around endowing robots with the ability to extract valuable insights from human behavior. This involves discerning the latent reward functions that underlie these behaviors and subsequently enabling the robots to make navigation decisions that go beyond mere efficiency, as described in reference (Wulfmeier et al. (2015)). Building on this pivotal research, the objective of our re-experimentation is to delve deeper into Fahad, Chen, and Guo's work, rigorously retesting, and expanding their findings. We aim not only to replicate their methodology but to significantly augment their research through nuanced re-analysis, additional experimentation, and a comprehensive validation process. By scrutinizing and extending the boundaries of their groundbreaking model, our goal is to further reinforce the reliability and applicability of MEDIRL within real-world human-robot interaction scenarios. A critical aspect of our re-experimentation involves not just replicating the findings of the initial study but expanding its horizons. Through comprehensive evaluation against real-world pedestrian trajectories and rigorous comparisons against established methodologies, we aim to showcase a deeper understanding and validation of the MEDIRL model. Our mission is to advance this model to generate pedestrian trajectories that mirror human-like behaviors more accurately, encompassing vital aspects such as collision avoidance strategies, leader-follower dynamics, and intricate split-and-rejoin patterns.(Helbing and Molnar (1995)) Additionally, the emphasis in our re-experimentation will be on reinforcing the reliability of the MEDIRL model. By employing strategic refinements, such as fine-tuning critical parameters, optimizing learning strategies, and meticulously eliminating biases, we aim to ensure the robust deployment of this technology in varied real-world HRI scenarios. This rigorous refinement process is pivotal in not only upholding social norms but also adhering to stringent safety protocols. Crucially, our re-experimentation will systematically deconstruct and analyze the individual components constituting the MEDIRL model introduced by the Original Study. By employing meticulous ablation studies, we aim to dissect and comprehend the impact of each component on the overall performance of the model. Ablation studies play a pivotal role in dissecting and comprehending the individual contributions of distinct components within the MEDIRL framework (Meyes et al. (2019)).These studies involve selective removal or alteration of critical elements to gauge their influence on the overall performance of the model. ## 1. Removal Of Hidden Layer: (a) The hidden layer in the MEDIRL model serves as an essential component in deep learning architectures. It plays a critical role in capturing and representing complex relationships within the data (Haarnoja (2018)) (b) Ablating the hidden layer involves eliminating one or more hidden neural network layers from the MEDIRL model. This modification seeks to understand how the depth of the network impacts the model's capacity to learn intricate features and non-linear relationships. (Bengio et al. (2017)) (c) The ablation aims to evaluate whether a shallower network can still adequately capture the nuances of human navigation behaviors, or if a deeper network is essential for modeling the complexity of real-world scenarios. ## 2. Removal Of State Dimension: (a) The state dimension in the MEDIRL model typically represents the environmental states and conditions that the robot and pedestrians navigate in. It encapsulates critical information about the surroundings. For our study, we removed the height component by modifying the model to account for x and y directions solely. (b) By removing a state dimension, we aim to assess the model's adaptability to changes in the state space. This ablation examines whether the model can generalize well and make robust navigation decisions when a part of the state information is missing. (c) Understanding the impact of this ablation is vital for assessing the model's capacity to adapt to variations in the environment. ## 3. Removal Of Discount Factor: (a) The discount factor in reinforcement learning models influences the importance of future rewards in the decision-making process. It determines the model's preference for immediate rewards over long-term goals. (b) Removing the discount factor helps evaluate the model's ability to make decisions solely based on immediate consequences. This ablation assesses whether the model can adapt to scenarios where long-term planning and future rewards are not considered. (c) The results of this ablation will shed light on the role of discount factors in modeling navigation decisions and their impact on the balance between short-term and long-term considerations. ## 4. Removal Of Relu Activation (Replaced With Leaky Relu): (a) Leaky Rectified Linear Unit (ReLu) is an activation function in neural networks. It allows a small gradient for negative input values, making it suitable for capturing non-linear relationships in the data (Xu et al. (2015)). (b) Using Leaky ReLu as an activation function replaces the standard ReLu activation in the model. This change explores how the choice of activation function affects the model's ability to capture non-linear patterns in human behavior (Almeida and Azkune (2018)). (c) This ablation aims to assess whether the Leaky ReLu activation function enhances the model's capability to represent complex and non-linear features in the data, potentially improving its performance in modeling human navigation behaviors. ## 5. Removal Of Max Entropy (Replaced With Mean Squared Error): (a) Maximum entropy reinforcement learning encourages exploration by maximizing the entropy of the policy. It promotes diversity in the model's actions and adaptability to different scenarios. (Zhou et al. (2020)) (b) The removal of the max entropy component assesses the impact on the model's explorationexploitation trade-off. Without it, the model may become less exploratory and may exhibit more deterministic behavior. (c) This ablation will provide insights into the role of entropy in shaping the model's navigation decisions and whether reducing exploration influences its performance in diverse situations. Each of these ablation studies plays a critical role in understanding the individual contributions and significance of specific components within the MEDIRL framework. The results from these detailed investigations will not only provide valuable insights into the model's performance but also guide further refinements and enhancements to create a more robust and adaptable model for socially intelligent navigation in human-robot interaction scenarios. The analysis stemming from these ablation studies will not only provide deeper insights into the model's performance dynamics but also enable a refined understanding of the intricate mechanisms at play within the MEDIRL framework. (Sheikholeslami et al. (2021)) Ultimately, this meticulous approach to dissection and analysis will pave the way for an enhanced and fortified MEDIRL model, offering unparalleled advancements in socially intelligent navigation within the domain of human-robot interaction (Muffoletto et al. (2021)). Our goal is to delineate the critical components significantly contributing to the model's effectiveness in replicating human navigation behaviors and fostering a deeper understanding of the intricate mechanisms at play. Through the meticulously conducted re-experimentation, our ambition is to unveil deeper insights and refined conclusions about the reliability and efficacy of MEDIRL within the realm of social affinity and its implications on navigation within the ambit of HRI (Gockley et al. (2005)). This re-experimentation stands as a testament to the relentless pursuit of knowledge, aiming not just to replicate but to redefine and fortify the pathways to socially intelligent navigation within human-robot interaction. ## 2 Scope Of Reproducibility The MEDIRL paper provides a series of information regarding the algorithm they developed, a Maximum Inverse Reinforcement Learning Model integrated with a Deep Learning Neural Network with differing levels of detail. The original paper begins by outlining the Markov Decision Making Process elements. **Given MDP Elements:** - **States** S: The original paper uses states to represent all possible positions or situations the mobile robot could find itself it. It was denoted as the following set S = {s1*...s*n}, where 'n' is the total number of such possible states. - **Actions** A: The original paper uses actions to represent all the possible decisions the mobile robot could make. This was denoted by the following set A = {a1*...a*p}, where 'p' denotes the total number of possible actions. - **Discount Factor,** γ: This was denoted by the original paper as a number between 0 and 1 that outlined the impact a reward would have on the mobile robot based on its distance from the mobile robot. - **Reward Function** R(si): This was outlined as the function that the mobile robot would come up with on how it should operate within a state action space. In regards to the Deep Learning Neural Network Backbone, we are told that it consists of one input layer, two hidden layers, and one output layer. The two hidden layers respectively have 4096 and 2048 nodes. Equation 1 displays the reward function formula the original paper gives us and Equation 2 represents the Bayesian inference that the original paper uses. $$R^{*}=g(\phi,\theta_{1},\theta_{2},\theta_{3},\ldots,\theta_{j}),\,=g_{1}(g_{2}(\ldots(g_{j}(\phi,\theta_{j}),\ldots),\theta_{2}),\theta_{1}).$$ $$L(\theta)=\log P(D,\theta|R^{*})=\log P(D|R^{*})+\log P(\theta)$$ ∗) + log P(θ) (2) The original paper also outlines equation 3 to represent the gradient descent taking place for the neural network optimization with respect to the network parameters θ and equation 4 outlines the gradient descent with respect to the reward function. ## 1 Introduction The _quantum_ quantum mechanics is a quantum field theory of quantum mechanics. It is a quantum field theory of quantum mechanics. It is a quantum field theory of quantum mechanics. $$\left(1\right)$$ $$\left(2\right)$$ $$\left({\mathrm{3}}\right)$$ $$\left(4\right)$$ No further information is provided about the Maximum Entropy Inverse Reinforcement Learning Model embedded into the Deep Learning Network beyond its formulas shown in Equations 5 and 6, where µD−E[µm] is the state visitation matching feature. $$L_{m}D=\log(\pi_{m})\cdot\mu_{a}$$ LmD = log(πm) · µa (5) $$\frac{\partial L D}{\partial R_{m}^{*}}=\mu_{D}-E[\mu_{m}]$$ = µD − E[µm] (6) The MEDIRL paper captures the pedestrian behavior and evaluates it as with an accuracy of 96.6%, an average displacement Error of 0.40 meter, a final Displacement Error of 0.81 meters, and an average NonLinear Displacement Error of 0.41 meters $$\left({\bar{5}}\right)$$ $$(6)$$ It also compares it to another state-of-the-art algorithm to indicate that its model should be the new state of the art. Given the missing information, we made the following assumptions about the model; We used a discoount factor of 0.01, an epoch number of 3, a standard number of nodes for the input and output layers given out data set, and a standard maximum entropy inverse reinforcement optimization method for the deep learning network. It is also important to note that from the data set provided by the original paper, we subsetted 100 lines for training and 40 lines for testing. We did this to adjust the dataset to be suited for the lack of computational power we had available to us for this study. As we had 6 ablation studies with no access GPU allocation, we subsetted the data. The original paper claims that given a 1080ti with dual Xeon processors, it would take 20 hours to run the code. From the provided metrics of the original paper, we intend to focus on the Average displacement error of the model, as it is the most consistent metric considering the difference in training data size (due to computational restrictions). ## 3 Methodology We began this study by re-creating the algorithm shown in the original paper as displayed in the Algorithm 1. Algorithm 1 Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) Require: num_*trajectories*: Number of human-like trajectories trajectory_*length*: Length of each trajectory state_dim: State-space dimension lr: Learning rate epochs: Training epochs Ensure: irlModel: Trained IRL model 1: **function** MaxEntIRL(state_dim) 2: irl ← Initialize MaxEntIRL model 3: **return** irl 4: **end function** 5: **function** TrainMaxEntIRL(num_trajectories, trajectory_length, f ile_*path, lr, epochs*) 6: irl ← MaxEntIRL(state_dim) 7: *data* ← LoadDataset(f ile_*path*) 8: *irl.train*_irl(data, use_dataset = *T rue, lr, epochs*) 9: model_*path* ← '/path/to/save/model.pkl' 10: irl.save_model(model_*path*) 11: **return** model_*path* 12: **end function** 13: **function** TrainIRLWithDataset(*data, lr, epochs*) 14: *optimizer* ← Initialize Adam optimizer with lr 15: for epoch ← 1 to *epochs* do 16: *totalLoss* ← 0 17: state_*frequencies* ← Calculate state frequencies from *data* 18: for idx ← 1 to len(*data*) do 19: state, velocity ← data[idx] 20: **Using** GradientTape: 21: preferences ← irl.model(*state*) 22: prob_*human* ← Softmax(*preferences*) 23: maxent_irl_*objective* ← Calculate MaxEnt IRL objective 24: *grads* ← Compute gradients 25: Apply gradients using optimizer 26: totalLoss ← totalLoss +P(*maxent*_irl_*objective*) 27: **end for** 28: avg_loss ← totalLoss/len(*data*) 29: Log loss metric in MLflow 30: **Print** "Epoch epoch/*epochs*, MaxEnt IRL Loss: avg_*loss*" 31: **end for** 32: **end function** We then proceeded with creating the code for our ablation studies. We: - removed a Hidden Layer consisting of 2048 nodes, keeping the bigger one of 4028 nodes as the sole hidden layer. We hypothesize this will lead to a far more inaccurate model due to the reduction of neurons. - removed a State Dimension making the state space 2 instead of 3. We hypothesize this will lead to an increase in model accuracy, as removing the height dimension for traversing space will reduce the dimensionality of the issue making it easier for the model to understand. - removed the Discount Factor entirely such that the distance of the reward would have no effect on the model. We hypothesize this will lead to the model prioritizing farther but larger rewards than closer and easier to achieve ones leading to it operating worse than before. - removed the RelU activation and replaced it with the Leaky ReLU shown in equation 7. We hypothesize this will lead to the activation function being more robust changing the overall decisions of the model and its consideration of negative weights. $$f(x)={\begin{cases}x,&{\mathrm{if~}}x>0,\\ \alpha x,&{\mathrm{if~}}x\leq0.\end{cases}}$$ αx, if x ≤ 0.(7) - removed the maximum entropy loss calculation and replaced it with mean squared shown in equation 8. We hypothesize this will lead to less exploration within the model and make it more imitative of the behaviors of the demonstrators which would change the mobile robots decisions. $$\operatorname{MSE}(\theta)={\frac{1}{N}}\sum_{i=1}^{N}(y_{i}-f(x_{i},\theta))^{2}$$ $$\left(7\right)$$ $$({\boldsymbol{\delta}})$$ (yi − f(xi, θ))2(8) We then save these models as a pickle file locally so that we can run it against test data. The run time of all these models is O(N) and the space complexity is also O(N).The key metric we aim to take note of is the difference between the trajectory the model would take and the trajectory the human actually takes. In conducting these ablation studies we aim to identify which features of the MEDIRL model are necessary and the impact it has on the overall performance of the model. We do this by crossreferencing the data against the "standard" that we establish with the MEDIRL model's performance. ## 4 Results After conducting our reproducibility study as per the method outlined above we noted the following results. The model's Epoch Training Loss and Average Displacement were as follows: - Original, Epoch Training Loss shown in figure 1. Average Displacement: 1.12 m shown in Figure 2. Figure 1: *Figure displays the epochs of the original model.* ![7_image_0.png](7_image_0.png) ![8_image_0.png](8_image_0.png) Figure 2: *Figure displays the displacement of the predictions made of the original model from the actual* decisions made by the pedestrians. - Removed a Hidden Layer, Epoch Training Loss shown in Figure 3. Average Displacement: 1.14 m ![8_image_1.png](8_image_1.png) as shown in Figure 4. Figure 3: *Figure displays the epochs of the original model without a hidden layer.* ![8_image_2.png](8_image_2.png) Figure 4: *Figure displays the displacement of the predictions made of the model without a hidden layer from* the actual decisions made by the pedestrians. - Removed the vertical State Dimension, Epoch Training Loss shown in Figure 5. Average Displacement: 0.91 m Figure 6. ![9_image_0.png](9_image_0.png) Figure 5: *Figure displays the epochs of the original model without a vertical state dimension.* ![9_image_1.png](9_image_1.png) Figure 6: *Figure displays the displacement of the predictions made of the model without a vertical state* dimension from the actual decisions made by the pedestrians. - Removed the Discount Factor, Epoch Training Loss shown in Figure 7, Average Displacement: 1.13 ![9_image_2.png](9_image_2.png) m as shown in Figure 8. Figure 7: *Figure displays the epochs of the original model without a discount factor.* ![9_image_3.png](9_image_3.png) Figure 8: *Figure displays the displacement of the predictions made of the model without a discount factor* from the actual decisions made by the pedestrians. - Removed the ReLU activation in favor of Leaky ReLU, Epoch Training Loss shown in Figure 9, ![10_image_0.png](10_image_0.png) Average Displacement: 1.15 m as shown in Figure 10. Figure 9: *Figure displays the epochs of the original model without a discount factor.* ![10_image_1.png](10_image_1.png) Figure 10: *Figure displays the displacement of the predictions made of the model with a leaky ReLU activation* instead of a ReLU activation from the actual decisions made by the pedestrians. - Removed the maximum entropy loss in favor of Mean Squared Loss Calculation, Epoch Training Loss shown in Figure 11, Average Displacement: 1.15 m as shown in Figure 12. Figure 11: *Figure displays the epochs of the original model without a discount factor.* ![10_image_2.png](10_image_2.png) ![11_image_0.png](11_image_0.png) Figure 12: Figure displays the displacement of the predictions made of the model with mean squared instead of maximum entropy from the actual decisions made by the pedestrians. A full comparison of the Epochs between the models can be seen in Figure 13. ![11_image_1.png](11_image_1.png) Figure 13: *Figure displays the displacement of the predictions made of the model with mean squared instead* of maximum entropy from the actual decisions made by the pedestrians. The ranking from lowest displacement to highest displacement is as follows: Removed State Dimension, Original, Removed Discount Factor, Removed Hidden Layer, Removed Max Entropy, Leaky ReLU. ## 5 Results Reproducing Original Paper Our replication of the original model netted us an average displacement of 1.12 m in comparison to the .5 m that the original paper's model was able to get. This difference is likely because we trained our data set with a significantly smaller subset of the data given our lack of computational power. It is also important to note that while a majority of the ablation studies did worse than the original study replication, removing the vertical state dimension seems have increased its accuracy. This is likely because the humans within the environment are not vertically moving and this additional dimension just leads to excess unnecessary error. ## 6 Discussion Based on our reproducibility attempt alongside our ablation studies we can clearly see how each component of the machine learning model had an effect on its capabilities to replicate human behavior in social navigation settings. The ablation study indicates that future research within the human social navigation context should establish their Markov Decision Making Framework within the two-dimensional space if no vertical movement is present, to mitigate any error that could occur based on the height of the individual in question. By doing this within our ablation study we were able to reduce the average displacement of the model. Another thing to note from our ablation study is the importance of the Maximum Entropy Component that was presented in the Original Paper. Once that component was removed from the model the average displacement increased significantly making the model substantially worse when using Mean Squared Error loss calculation instead. It is also important to note that swapping the ReLU activation with Leakly ReLU is the ablation study that did the worst and likely not something that should be done for future research in human social navigation settings. Something else to consider is that given the discount factor that we used was so small 0.01 is it likely that removing it all together in the ablation study had minimal effects hence its results being similar to the original study. And as one would expect removing a hidden layer made it model worse and increased its displacement. The key takeaways from our ablation study are as follows for future human social navigation research: The importance of utilizing a two-dimensional Markov decision-making framework when no vertical movement is involved, using a ReLU activation function over a Leaky ReLU activation function and proper documentation through the presenting of a model as to make it for future researchers to replicate. ## References A. Almeida and G. Azkune. Predicting human behaviour with recurrent neural networks. *Applied Sciences*, 8(2):305, 2018. doi: 10.3390/app8020305. Y. Bengio, I. Goodfellow, and A. Courville. *Deep Learning*. MIT Press, 2017. Muhammad Fahad, Zhuo Chen, and Yi Guo. Learning how pedestrians navigate: A deep inverse reinforcement learning approach. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. doi: 10.1109/IROS.2018.8593438. R. Gockley, A. Bruce, J. Forlizzi, M. Michalowski, A. Mundell, S. Rosenthal, and J. Wang. Designing robots for long-term social interaction. In *2005 IEEE/RSJ International Conference on Intelligent Robots and* Systems, pages 1338–1343, 2005. T. Haarnoja. Acquiring diverse robot skills via maximum entropy deep reinforcement learning, 2018. D. Helbing and P. Molnar. Social force model for pedestrian dynamics. *Physical Review E*, 51(5):4282, 1995. doi: 10.1103/PhysRevE.51.4282. K. Kosuge and Y. Hirata. Human-robot interaction. *2004 IEEE International Conference on Robotics and* Biomimetics, pages 8–11, April 2004. doi: 10.1109/ROBIO.2004.1521743. R. Meyes, M. Lu, C. W. de Puiseau, and T. Meisen. Ablation studies in artificial neural networks. *arXiv* preprint arXiv:1901.08644, 2019. M. Muffoletto, A. Qureshi, A. Zeidan, L. Muizniece, X. Fu, J. Zhao, and O. Aslanidi. Toward patient-specific prediction of ablation strategies for atrial fibrillation using deep learning. *Frontiers in Physiology*, 12, 2021. S. Sheikholeslami, M. Meister, T. Wang, A. H. Payberah, V. Vlassov, and J. Dowling. Autoablation: Automated parallel ablation studies for deep learning. In *Proceedings of the 1st Workshop on Machine* Learning and Systems, pages 55–61, 2021. M. Wulfmeier, P. Ondruska, and I. Posner. Maximum entropy deep inverse reinforcement learning. *arXiv* preprint arXiv:1507.04888, 2015. B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional networks. arXiv preprint arXiv:1505.00853, 2015. S. Yao, G. Chen, Q. Qiu, J. Ma, X. Chen, and J. Ji. Crowd-aware robot navigation for pedestrians with multiple collision avoidance strategies via map-based deep reinforcement learning. *2021 IEEE/RSJ* International Conference on Intelligent Robots and Systems (IROS), pages 8144–8150, 2021. doi: 10.1109/IROS51168.2021.9636579. Y. Zhou, R. Fu, and C. Wang. Learning the car-following behavior of drivers using maximum entropy deep inverse reinforcement learning. *Journal of Advanced Transportation*, 2020:1–13, 2020. ## 7 Appendix 7.1 Additional Experimental Details 7.1.1 Implementation The MEDIRL model was implemented using the PyTorch framework. All experiments were run on a MacBook Pro 2018 with an i7 processor. The model's architecture consisted of two hidden layers, with 4096 and 2048 neurons each, and ReLU activation functions. Optimization was performed using the Adam optimizer with a learning rate of 0.001. ## 7.1.2 Dataset The dataset used in our experiments consisted of pedestrian trajectory data collected in various urban environments. Each data point included the x, y, z coordinates of a pedestrian at a given timestamp, alongside contextual environment data such as nearby pedestrian locations and static obstacles. ## 7.1.3 Training Procedure The model was trained on a subset of the available data, using 70% for training and 30% for validation. The training process was conducted over 50 epochs, with early stopping implemented to prevent overfitting. The batch size was set to 32 examples per batch. ## 7.1.4 Metrics Performance metrics included the average displacement error (ADE) and the final displacement error (FDE) of the predicted trajectories compared to the ground truth. These metrics were calculated for each epoch to monitor training progress and model performance. ## 7.2 Ablation Study Details 7.2.1 Modifications Each ablation study involved the removal or modification of a specific model component: - Removal of a hidden layer: The model was tested without the second hidden layer to assess the impact on learning capacity and performance. - Change in state dimension: The input dimension was reduced from three-dimensional (x, y, z) to two-dimensional (x, y) space to evaluate effect on model accuracy. - Variation of activation functions: ReLU was replaced with Leaky ReLU to investigate changes in learning dynamics. - Alternative loss functions: The maximum entropy loss was replaced with mean squared error to study effects on the exploration-exploitation trade-off. ## 7.2.2 Results Results from the ablation studies are presented in detailed tables and figures showing the effects of each modification on the ADE and FDE metrics. Statistical analyses were performed to determine the significance of observed differences. ## 7.3 Supplementary Results Additional figures and tables providing further analysis of the results discussed in the main body of the paper are included here. These supplementary results help illustrate the robustness of our findings across different model configurations and environmental settings.
Review 1: Summary: The paper aims to reproduce the results of " Learning How Pedestrians Navigate: A Deep Inverse Reinforcement Learning Approach", a 2018 paper by Fahad, Chen, and Guo. This paper uses a maximum entropy inverse RL approach to model pedestrian navigation, using a social affinity map (SAM) as an input feature to model motion affinity of nearby individuals. This reproduction omits the SAM feature, runs for fewer epochs, and uses a subset of the initial pedestrian data to reduce computational needs. The method is then further ablated in the following ways: * The hidden layer of the neural net is removed / kept. * The state position of objects is changed to just (x,y) rather than (x,y,height) * The discount factor in RL is removed / kept (changed to a value of 1 or not) * The ReLU activation is changed to Leaky ReLU or not * The max-ent loss is removed / kept (when removed, a squared error loss is used instead) Strengths and Weaknesses: The paper is clear on the ablations run for this paper, but has a significant number of weaknesses. First, by making many changes for the sake of efficiency (reducing dataset size), the experiment setup can no longer be a 1:1 reproduction of the original paper. As noted, they find a displacement error of 1.12m compared to 0.5m from the original paper, but there's no way to really compare these numbers. A number of citations are also just strange, for example a citation on the importance of the hidden layer in deep learning is cited to the Haarnoja 2018 paper about SAC. This isn't a great citation because the SAC paper is not about hidden layers, it's about soft actor critic. It would be better to either leave this uncited or use a more generic source discussing neural net architectures. There is a lot of padding, with sentences like "Ultimately, our experimentation serves as a testament to the pursuit of knowledge, with the ambition to redefine and fortify the pathways to socially intelligent navigation." The authors link their code implementation in Dags Hub, which would normally be good, but breaks double blind reviewing because the linked repo does not redact the author names. (I stopped reading this link once I realized what was happening.) The figures in the paper are hard to read, and could have larger text. And overall, the ablations just aren't that interesting. They are very specific to this problem setting, focused almost exclusively on model architecture fitting rather than social navigation, and therefore not giving many useful takeaways. To quote from the TMLR acceptance criteria: "Here's an example on how to use the criteria above. A machine learning class report that re-runs the experiments of a published paper has educational value to the students involved. But if it doesn't surface generalizable insights, it is unlikely to be of interest to (even a subset of) the TMLR audience, and so could be rejected based on this criterion. On the other hand, a proper reproducibility report that systematically studies the robustness or generalizability of a published method and lays out actionable lessons for its audience could satisfy this criterion." I do not think this paper surfaces generalizable insights, nor does it study the robustness of the original paper, due to making many changes to the original paper's setup that prevents direct comparison. The claimed general lessons like "leaky ReLU did worse" do not feel strong enough to me. The choice of changes do not feel particularly motivated - they feel more like ablations that were run for the sake of running them. Requested Changes: There are not single changes that would affect my opinion of the work. The issues within the paper are fairly extensive and systematic. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper replicates the Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) model originally introduced by Fahad, Chen, and Guo (2018). It aims to find key model components that affect model performance and validate the original findings and ensure the robustness of the MEDIRL framework in modeling human behavior. For the ablation studies, the authors explored performing ablation study on individual components of the MEDIRL framework, including removing hidden layers, removing one state dimension (the height component), removing the discount factor, and replacing ReLU with Leaky ReLU, and removing the max entropy term from the loss function and replacing with mean squared error. It finds that the reproduced result is worse than the original study by Fahad, Chen, and Guo (2018), it also finds that removing the height component from its state information seems to have increased its accuracy. It reveals the importance of the max entropy component is vital and removing it causes the average displacement error to increase significantly. LeakyReLU also makes the model worse than using ReLU. Removing a hidden layer also made the model worse. Strengths and Weaknesses: Strength: the paper did ablation study to show that the height component is not vital for modeling human navigation behavior. It also shows through ablation study that it is important to keep hidden layers, keep ReLU, and keep the max entropy term. Weakness: there are aspects where the paper lacks: 1. The conclusions are not surprising. Intuitively, it is easy to understand that the height component is not vital for human social navigation, and other conclusions such as remove hidden layer, and remove max entropy term makes the model worse, are also well studied and did not bring new insights to the community. 2. The writing and experiment sections are poorly written. There are several typos or inappropriate expressions in both the text section and equations. For example, the equation (3) (4) and (6) uses the term "LD" "L\theta" to represent gradient, which is not standard way to write these in the literature. On page 6, paragraph 2, line 1, the last word "discoount" should be "discount". The experiments are not described in clear details, such as what dataset is used here, and how large is the dataset, and if the data is selected from the original reference paper, how that selection is being done. The result figures are made with small fonts and confusing labels and legends, which makes it hard to understand the conclusion. 3. The results are not convincing. Even though the authors presented with several findings, since the authors claim their results are worse than in the original paper by Fahad, Chen, and Guo (2018), it is hard to draw convincing conclusions from the study. As the authors claim their training dataset is much smaller dataset, but a smaller dataset could lead to overfitting and thus it is not clear why the performance is even worse instead of better? Thus, the findings obtained with a very small dataset may not be generalizable to other human social navigation scenarios. Requested Changes: 1. Clearly states the experimental details, including the dataset size, how it is collected, and maybe some example data samples. 2. Improve paper writing. Use appropriate and formal mathematical notations. When saying things like remove discount factor, clearly state what it means (does it mean set gamma=0 or gamma=1?). Replace algorithm 1 the language expressions with math notations for easier reading and understanding. 3. improve the figures and results part. The figures are hard to read, and the captions need to be improved to reflect what has been shown and concluded from the figures. Broader Impact Concerns: This paper can have impact in the social navigation community with IRL. There is no ethical concern. ================================================== Review 3: Summary: The paper aims to reproduce and conduct experiments following the prior work "Learning how pedestrians navigate: A deep inverse reinforcement learning approach" [Fahad et al. IROS 2018]. The prior work addressed the problem of trajectory prediction of pedestrians (i.e. what actions a pedestrian in a multi-agent environment with obstacles will take to navigate from an initial position to a target position) and introduced maximum entropy deep inverse reinforcement learning (MEDIRL) for learning a reward function that can be then used to learn a pedestrian movement policy using approximate value iteration. In the original MEDIRL paper, the ATC dataset (https://dil.atr.jp/crest2010_HRI/ATC_dataset/) was used, with 10M episodes used for training and 1500 randomly sampled pedestrian trajectories used for evaluation. The submitted manuscript takes the MEDIRL model and investigates how changes to the model affects its performance. Several ablations / model modifications are applied, including removal of hidden layers, removal of height component in state dimensions, removal of the discount factor, replacement of RELU with leaky ReLU, replacement max entropy loss with MSE loss. Due to resource constraints, the model variants are trained and evaluated on a subset of the original data (100 for train, 40 for test). For evaluation, the average displacement error (error between the predicted and actual measured pedestrian trajectories from the dataset) is reported. Strengths and Weaknesses: Strengths - There is an attempt to reproduce MEDIRL and conduct ablation studies Weaknesses - There is limited contribution as the work fails to properly reproduce MEDIRL experiments - The dataset used for training and evaluation is much smaller than the dataset used in the original paper - The ablations that are conducted does not provide any insight into possible improvements in model design - Not all metrics used in MEDIRL are reported - Limited discussion of recent prior work on pedestrian trajectory prediction. It is unclear whether a re-investigation of the MEDIRL model is useful to the community or whether newer work has would be more relevant to investigate. - The manuscript also did not cite or describe the original dataset used for MEDIRL - The writing and presentation is poor - It was difficult to understand the problem setting and the original MEDIRL model from the manuscript - The experimental results are not clearly summarized in a table allowing for easy comparison - The writing was also very verbose, with too much space/words dedicated to the description of the ablation conditions Requested Changes: Significant re-work and additional experiments are required before this work is of potential interest to the community - Experiments should be done on a larger scale with GPUs. There should also be consideration of whether there are more interesting model variations to explore, for instance different architectures for predicting the reward. - Experiments should include the different metrics from prior work ("Average Non-Linear Displacement Error" and "Final Displacement Error" should be included in addition to "Average Displacement Error") - Related work need to be properly discussed and compared against. Experiments should include comparison against recent work. - There should be concise tables that compare different model variants. - Paper will need to be rewritten to provide a improved overview of task and MEDIRL and reduce description of basic concepts like ReLU - More information should be provided on the task that is being addressed. What is the goal of the task? - A better overview of MEDIRL should be provided. What are the key elements of MEDIRL? Why is MEDIRL a work that is worth reproducing and interesting to the community? Why is it "groundbreaking"? - The first part (section 0) should be removed with relevant information moved into appropriate sections - Summary tables should be provided to concisely compare the different models - There is no need to show the loss curve or to show separate figures of displacement. These figures are hard to interpret. - The dataset used should be described and cited. The original MEDIRL used: - Person tracking in large public spaces using 3-D range sensors [Brščić et al. 2013] (https://dil.atr.jp/crest2010_HRI/ATC_dataset/) - Claims need to be toned down Broader Impact Concerns: No broader impact concerns is included. A statement of how the model can be used and potential pitfalls can be useful. ==================================================
# Boosting Visual-Language Models By Exploiting Hard Pairs Anonymous authors Paper under double-blind review ## Abstract Contrastive Language-Image Pre-training (CLIP) has become the standard for learning cross-modal representations between images and text. Efforts to improve its capabilities typically demand the collection of additional data and retraining with new loss functions. While effective, the added requirements limit their practical use due to the increased resource and time investments needed. In this work, we present Helip, a cost-effective strategy tailored to enhance the performance of existing CLIP models without the need for training a model from scratch or collecting additional data. Our method allows for effortless integration with existing models' training pipelines, providing an instant boost by training them with selected challenging text-image pairs from their original training datasets. Helip treats each text-image pair as a single point in the joint vision-language space, identifying those in close proximity as hard pairs. By incorporating the challenging data, pre-trained CLIP models are refined using both the traditional contrastive loss and the newly introduced hard negative margin loss, ensuring the challenging data is fully utilized. On comprehensive benchmarks, Helip consistently boosts existing models to achieve leading performance. In particular, it improves the zero-shot classification accuracy on ImageNet for SLIP models pre-trained on CC3M, CC12M and YFCC15M datasets. The improvements are 3.05%, 4.47%, and 10.1% respectively, achieved within two epochs of training. In addition, across fine-grained classification datasets, Helip improves the zero-shot performance of pre-trained CLIP and SLIP by an average of 8.4% and 18.6%, and their linear probe performance by an average of 9.5% and 3.0%. The code is publicly available at https://anonymous.4open.science/ r/HELIP-7F8E/. ## 1 Introduction Contrastive Language-Image Pretraining (CLIP) (Radford et al., 2021) is quickly becoming the standard for foundation models (Awais et al., 2023) due to its effectiveness for a variety of vision-language tasks without task-specific finetuning (Li et al., 2021; Baldrati et al., 2022). However, web-crawled image-text pairs used for the CLIP model pretraining are often loosely connected, resulting in multiple plausible matches beyond the assigned ones (Wu et al., 2022). Several methods have been presented to improve CLIP models by investigating appropriate matches and utilizing widespread supervision among image-text pairs for model training (Li et al., 2022a; 2021; Mu et al., 2022; Radenovic et al., 2023). Efforts to improve contrastive language-image pretraining models have primarily taken two directions: (1) the addition of objectives to improve the efficacy of supervision (Li et al., 2022a; Mu et al., 2022); and (2) the employment of intra- and inter-modality similarity to select and retrain models using data deemed challenging at the sample level (Li et al., 2021; Radenovic et al., 2023). However, those approaches inevitably require retraining, and those identified as challenging data struggle to bring benefits to model performance. This challenge is partly due to their reliance on finding challenging data within a single batch during training, where truly beneficial challenging data is rare. Additionally, CLIP models' contrastive loss is not optimally configured to exploit the nuances of difficult data. These limitations significantly restrict the practical application of these enhancements, especially considering the substantial investments already made in pretraining numerous CLIP models (Li et al., 2022a; Mu et al., 2022). This aspect underscores the need for efficient enhancement strategies that do not rely on additional data collection to improve existing pretrained models. To improve the existing CLIP models, we introduce the Helip framework, which involves further training the models with challenging data selected from their original training dataset. Helip defines and identifies the challenging data at the pair level, distinguishing it from traditional methods that compare sample-level similarities between images and texts. Specifically, Helip treats each text-image pair as a distinct entity within the joint vision-language space, and defines pairs in close proximity as hard pairs. Furthermore, Helip introduces the **Hard Pair Mining (HPM)** strategy, a novel approach that moves beyond the traditional use of representation spaces learned by CLIP models. Note, the CLIP space is primarily designed for evaluating sample-level similarities—for instance, comparing an image and text (individually, not as a pair)—lacking in evaluating characteristics at the pair level. HPM transforms the task of discovering pairs in close proximity into a solvable proxy task, with the goal of selecting a pair set that optimally supports the target pair's text-image agreement. Helip enhances CLIP models not just with the original text-image contrastive loss (Radford et al., 2021), which uniformly pushes all negative samples away from their positive counterpart but also incorporates the **Hard Negative Margin Loss (HNML)** into the loss function. As depicted in Figure 2, HNML imposes an additional geometric structure on the representation space, reflecting the pair-level similarity. Through this approach, Helip effectively leverages the information within challenging data to boost model performance. Empirical evidence shows that Helip improves the performance of existing CLIP models, including pretrained CLIP, SLIP, and DECLIP, across a variety of benchmarks, such as zero-shot classification, text-image retrieval, and fine-grained linear probing. For zero-shot classification on ImageNet, CIFAR-10, and CIFAR100, Helip consistently boosts the performance of all six pre-trained models. Particularly, using Helip to boost SLIP models pre-trained on CC3M, CC12M, and YFCC15M results in ImageNet zero-shot accuracy gains of 3.05%, 4.47%, and 10.14%, respectively. Further, on seven fine-grained image classification datasets, those pre-trained models achieve better zero-shot and linear probe performance with Helip. Specifically, the average zero-shot accuracy of CC3M pre-trained CLIP and SLIP are improved by 8.4% and 18.6%. The average linear probe accuracy of CC3M pre-trained CLIP and SLIP are improved by 9.5% and 3.0% respectively. Additionally, the performance gain is also valid in terms of zero-shot retrieval, with 1.1 of R@1 on Flickr30K, and 2.2 of R@1 on COCO for SLIP with Helip. Our contributions could be summarized as: - To our best knowledge, our method, Helip stands out as the first method aimed at improving existing CLIP models in a cost-effective and easily integrable way. - We introduce the hard pair mining strategy to select challenging data, accompanied by the development of hard negative margin loss. This combination ensures the effective identification and utilization of challenging data, improving the CLIP models. - Empirical evaluations across zero-shot classification, image-text retrieval, and linear probe benchmarks, consistently show Helip's ability to substantially boost the performance of existing CLIP models, underlining its effectiveness and practicality in real-world applications. ## 2 Related Work Vision-Language pre-training. Vision Language Pretraining (VLP) is a technique that leverages large-scale image-text datasets to learn a strong joint representation between the two modalities that can be transferred to various downstream vision-language tasks. VLP models can be generally divided into single-stream models and dual-stream models. In the single-stream architecture, text and visual features are concatenated at the input level, creating a unified representation that is subsequently processed by a single transformer block (Li et al., 2019; Chen et al., 2022; Zhang et al., 2020). Conversely, dual-stream models (Jia et al., 2021; Li et al., 2022b; Mu et al., 2022; Radford et al., 2021; Yao et al., 2022) typically consist of two separate encoders for image and text respectively and perform cross-modality interactions on the top, are becoming more and more popular because of its flexibility of transferring pre-trained knowledge to downstream tasks. CLIP (Radford et al., 2021), uses a simple contrastive objective to learn visual features from natural language supervision and achieves remarkable zero-shot recognition performance using 400M web-crawled image-text pairs. Recent works boot the performance of CLIP by applying self-supervision within visual modal (Mu et al., 2022), additional nearest neighbor supervision (Li et al., 2022b). These methods are actually doing data augmentations to increase data efficiency and thus bring additional computational costs. Contrastive learning with hard negative samples. Contrastive learning learns a representation of input data that maps semantically comparable examples close together and semantically dissimilar examples far apart (Chen et al., 2020a;b; Wang & Isola, 2020). Recent works include hard negative samples into the loss function and achieve better empirical performance (Cai et al., 2020; Huynh et al., 2022; Kalantidis et al., 2020; Li et al., 2021; Radenovic et al., 2023; Robinson et al., 2021; Shah et al., 2022). For Languageimage contrastive learning, current approaches (Li et al., 2021; Radenovic et al., 2023) mine multimodal hard negative examples using intra/inter-modality similarity. Li et al. (2021) choose in-batch hard negative samples with image-text contrastive loss. Hard negative noise contrastive multimodal alignment loss by Radenovic et al. (2023) up-weights the loss term for in-batch hard samples. For previous intra/inter-modality hard sample mining methods, two text-image pairs are considered as hard samples, if the cosine similarity between visual/textual features is high (Li et al., 2021; Radenovic et al., 2023). However, due to the nature of loose assignment for web-crawled image-caption data, a high similarity indicated by intra/inter-modality doesn't indicate that the two pairs are difficult to tell apart. Contrary to prior works, we design a hard sample mining method to discover similar pairs defined in joint vision-language space and efficiently select samples challenging enough to improve learning. ## 3 Hard Pairs For Visual-Language Models In this section, we first define the notations and revisit CLIP for zero-shot recognition in the preliminary section. Next, we introduce the Hard Pairs Mining strategy (HPM), and the associated Hard Negative Margin Loss (HNML), specifically designed to leverage hard pairs identified by HPM. ## 3.1 Preliminaries We consider the task of contrastive image-text pretraining. Given an image-caption dataset D = {zi} N i=1 = {(x I i , xT i )} N i=1, (x I i , xT i ) *∈ I × T* , the x I i , x T i denote the image and its corresponding caption, I and T indicates visual and textual space respectively, and *I × T* indicates the joint Vision-Language space. The goal is to learn a dual encoder model ϕ = {ϕimage, ϕ*text*}, where ϕ*image* represents the image encoder and ϕ*text* denotes the text encoder. We use the shorthand Ii = ϕ*image*(x I i ) and Ti = ϕ*text*(x T i ) to denote the encoded representation of an image and its caption, respectively. The contrastive objective of CLIP is formulated as, $$\ell_{\text{CLIP}}=-\frac{1}{|B|}\sum_{i\in B}\log\frac{\exp\left(sim(I_{i},T_{i})/\sigma\right)}{\sum_{j\in B}\exp\left(sim(I_{i},T_{j})/\sigma\right)},\tag{1}$$ where sim(·, ·) is the cosine similarity function, B is a batch of samples and σ is a trainable parameter controlling the temperature. Intuitively, the above formulation explicitly aligns the representations of image and text from one pair. ## 3.2 Hpm: Hard Pair Mining In this study, we define *hard pairs* as the pairs that are nearby to a specified target pair within the joint vision-language space, *I × T* , which serves as the domain for pair data. Equation 2 depicts the problem of hard pair mining. Here, zi represents the target pair, Hi denotes a set of pairs chosen from the dataset Di = D \ zi, and the metric S(,) quantifies the similarity between the target pair and a set of pairs, $${\mathcal{H}}_{i}^{\star}=\arg\operatorname*{max}_{{\mathcal{H}}_{i}}{\mathbf{S}}(z_{i},{\mathcal{H}}_{i}).$$ S(zi, Hi).(2) However, a key challenge arises in defining the similarity metric for pairs, S. Existing CLIP methods (Radford et al., 2021; Li et al., 2022b;a) preliminary focus on aligning an image with its caption (Radford et al., 2021; Li et al., 2022a) from a image-text pair. They rarely emphasize on bringing similar pairs closer while distancing the dissimilar ones, which makes current methods fall short in gauging similarity between two pairs. For $$\left(2\right)$$ ![3_image_0.png](3_image_0.png) Figure 1: **Hard Pair Mining (HPM)**. Choose hard pairs by optimizing the support set to maximize the agreement prediction of the target pair. instance, the cosine similarity between two pairs is ill-defined, within the context of current methods. Selecting hard pairs by maximizing pair agreement. To identify nearby pairs, we introduce the idea of text-image pair agreement maximization. This can be viewed as a proxy task for selecting hard pairs. To illustrate the rationale for using text-image pair agreement as a proxy for selecting hard pairs, we return to the principle obtained from traditional machine learning methods: the prediction of a model on a test sample is substantially influenced by samples in the training dataset that are similar to the test one. For example, the K-Nearest Neighbors (KNN) algorithm classifies a new instance using the K-closest training examples. The linear regression model predicts the output of a test sample using the weighted sum of the training samples, with higher weights given to samples that are more similar to the test sample. Recent empirical and theoretical studies on model memorization and generalization (Chen et al., 2009; Zhang et al., 2021; Stephenson et al., 2021; Brown et al., 2021) also provide support for this. Intuitively, if a pair agreement prediction model trained on a set of pairs predicts a specific target pair as having a high probability of being a matching pair, the target pair is likely to be similar to the matching pairs on which the model was trained. The challenge of selecting hard pairs is transformed into an optimization task centered on the text-image pair agreement, which is formally represented as: $$\operatorname*{arg\,max}_{\mathcal{H}_{i}}\mathbf{S}(z_{i},\mathcal{H}_{i})=\operatorname*{arg\,max}_{\mathcal{H}_{i}}P_{\mathcal{M}}(z_{i}|\mathcal{H}_{i}),$$ ditions of complex numbers $i$ and $j$ for the $i$- and $j$- are $$\left({\mathrm{3}}\right)$$ where PM(zi|Hi) denotes the prediction of a pair agreement model, M, for the pair zi based on a pair set Hi. This set is a subset of Di. In this framework, the goal of selecting a hard pair is transformed into identifying a training set Hi such that the model M predicts the target pair as a matching pair. Designing a suitable pair agreement prediction model for this proxy task is a nontrivial endeavor because the model needs to not only predict the pair matching probability but also allow the optimization of the training set, as indicated in Equation 3. Consequently, a conventional deep neural network design becomes unviable due to the impracticality of retraining across all possible sets Hi from Di. Taking inspiration from recent work (Norelli et al., 2022), we propose a data-centric design for the agreement prediction model M. As illustrated in Figure 1, the model leverages two pretrained single-modal encoders, i.e., fimage and ftext, to align representations of images and texts in a unified Vision-Language space. Specifically, the model encodes the target pair ziinto (Ii, Ti) using these single-modal encoders. For the visual modality, we determine a similarity vector between the target pair zi and the dataset Di. The similarity vector is defined as S⃗I(x I i , Di) = [. . . , sim(Ii, Ij )*, . . .* ] ⊤ ∈ R N−1. Here Ij = fimage(x I j ) with (x I j , xT j ) being an element of Di, and function sim(·, ·) denotes the cosine similarity. To counteract noise, values in the vector S⃗I(x I i , Di) are set to zero if sim(Ii, Ij ) < τ . This cleaned-up vector is represented as SeI. The procedure for the textual modality is analogous, producing a vector denoted as SeT. Note, the representations in this shared space are intuitively interpretable: each dimension corresponding to the visual/textual similarity of the input to a unique pair in the multimodal dataset. This interpretable characteristic enables us to directly optimize the supporting set to maximize the pair matching probability: $$\mathcal{H}_{i}^{\star}=\operatorname*{arg\,max}_{|\mathcal{H}_{i}|=k}\tilde{S}^{I}(x_{i}^{I},\mathcal{H}_{i})^{\top}\tilde{S}^{T}(x_{i}^{T},\mathcal{H}_{i}),\tag{1}$$ $$\left(4\right)$$ where the H⋆ i is the hard pair set and k ∈ R + is the number of selected pairs which is much less than |D|. The previous problem can be efficiently solved by greedily choosing dimensions that maximize the inner product. Due to the interpretable property, the selected dimensions correspond to the desired pairs. Mitigation of noisy data impact. The prior method assumes the target pair zi to be a suitable matching pair. However, in inherently noisy datasets, such as web-crawled ones like LAION (Schuhmann et al., 2022), mismatched pairs might be present. The potential negative effects of hard pairs generated by these mismatched pairs necessitate the development of a strategy for identifying and eliminating them. We create a pair removal strategy based on the availability of hard pairs: A target pair ziis deemed as unsuitable and thus removed, if there is a non-empty subset of the mined hard pair set, Hsub i ⊆ H⋆ i with |Hsub i| > 0, such that SeI(x I i , Hsub i) ⊤SeT(x T i , Hsub i) = 0. Intuitively, this equation suggests that the number of entries positively supporting the target pair zi as a matching pair is fewer than k. To illustrate how this concept can aid in cleaning noisy data, consider the following example: Suppose the target pair consists of a "cat" image but a "dog" caption (clearly it is a mismatch). For it to be considered a correct match, numerous pairings with same erroneous pattern (i.e., "cat" images paired with "dog" captions) would need to exist in the dataset. By assuming a certain error types are fewer than k throughout the dataset, if no subset of size k within the dataset D \ zi supports zi as a matching pair, this signals that the target pair is an outlier, likely due to a labeling error or mismatch. Such outliers can degrade dataset quality, so they are removed to ensure the reliability of hard data. Fast hard pair mining (FastHPM). It is intuitive to infer that for a dataset collected from a single source, the number of intrinsic hard pairs, which are robust enough to enhance the learned representation, will proportionally increase with the size of the dataset originating from that source. To identify k (much less than |D|) qualified hard pairs, a portion of the dataset D is sufficient. As a result, we present the Fast Hard Pair Mining (FastHPM) approach, which was designed to avoid the time complexity associated with hard pair mining over the entire dataset. FastHPM's objective can be formalized as follows: $$\mathcal{H}_{i}^{\star}\approx\operatorname*{arg\,max}_{|\mathcal{H}|=k}\widetilde{S}^{I}(x_{i}^{I},\mathcal{H}_{i})^{\top}\widetilde{S}^{T}(x_{i}^{T},\mathcal{H}_{i}),\tag{1}$$ $$\left(5\right)$$ where Hi ⊆ Di and |Di| = C is sampled uniformly from set Di. In this equation, it's noteworthy that the selection of value C is solely based on the number of hard pairs k, instead of the size of Di. Consequently, this optimization reduces the time complexity of FastHPM to O(N). The detailed procedure of the hard pair mining algorithm is presented in Appendix A. ## 3.3 Hnml: Hard Negative Margin Loss The image-text contrastive loss ℓ*CLIP* , as illustrated in the preliminary section, aligns the true image-text pairs. But it poses no constraints on the overall geometry among data pairs (Goel et al., 2022). After involving hard data into the finetuning stage, equally maximizing the distance for normal negative pairs and hard negative pairs is an undesired way to utilize the information provided by hard negative pairs. The intuition follows directly from Figure 2. In a desired representation space, the similarity between the positive and the hard negative, S1, should be greater than the similarity between the positive and those normal negatives, S2, S3. Therefore, to impose the additional geometric structure, we introduce the Hard Negative Margin Loss (HNML): $$\ell_{\rm margin}=\frac{1}{|B|}\sum_{j\in B}\max\big{(}0,sim(I_{i},T_{j})-\min_{j^{\prime}\in\mathcal{H}_{i}^{k}}\{sim(I_{i},T_{j^{\prime}})\}\big{)},\tag{6}$$ where H p i ⊆ H⋆ i is the hard negative pairs for the target ziinvolved in one training batch. Note, the HNML is computationally efficient. No extra inner product computation is required. The geometric regularization ![5_image_0.png](5_image_0.png) $\left(7\right)$. Figure 2: Hard Negative Margin Loss (HNML). Hard negative pairs are closer to the positive than the normal negative pairs. $$\operatorname{margin}\cdot$$ Figure 3: **Further training CLIP with Hard** Pairs. For text-image pairs within a batch, we sample corresponding hard data from the preprocess hard pair set. is applied over the inner product matrix computed in the original CLIP loss, Equation equation 1. Then, the well-trained model is finetuned with the following loss, where γ is the hyperparameter balancing the two losses, ℓfinetune = ℓCLIP + γℓmargin. (7) To boost the performance of well-trained CLIP models without introducing extra data and extra parameters, we introduce the further training strategy which involves the preprocessed hard pairs into the batch composition during training. As shown in Figure 3, for text-image pairs within the batch B, we randomly sample a subset B′ as seeds. Then, for zi ∈ B′, we randomly select |Hp i | = p pairs from H⋆ i . The actual training batch is B = B |B′ S | i=0 H p i . We summarize the training pipeline in appendix A. ## 4 Experiments In the experiments, we conduct a comprehensive empirical investigation to evaluate the efficacy of Helip in improving zero-shot classification, image-text retrieval, and linear probing performances for existing visionlanguage models, in Section 4.2. In Sections 4.3 and 4.4, we investigate Helip's performance with scaled training data as well as its robustness over noisy datasets. We provide detailed empirical studies on Hard Positive Mining (HPM) and Hard Negative Mining with Margin Loss (HNML) in Sections 4.6 and 4.7. ## 4.1 Experimental Setup Training datasets. We used open-source datasets, including Conceptual Captions 3M (CC3M) (Sharma et al., 2018) and Conceptual Captions 12M (CC12M) (Changpinyo et al., 2021), and two 15M subsets of the YFCC100M dataset: v1, collected by Radford et al. (2021), and v2, collected by Li et al. (2022b). The combined datasets of CC3M, CC12M, and YFCC15M v1, which we denote it as Open29M following the term used in prior work (Li et al., 2022b), were not completely obtained due to expired urls. In addition, we independently sampled 7.5M and 8M subsets from the noisier data source, LAION-5B (Schuhmann et al., 2022), labeled as LAION7.5M and LAION8M. These datasets, while smaller than the 400 million pair dataset used in CLIP's original study (Radford et al., 2021), are well-suited for the data and computational resources we have. Furthermore, they have been widely used in benchmark evaluations for various studies on language-image pretraining, as noted in works by Goel et al. (2022); Li et al. (2022b) and Mu et al. (2022). Downstream datasets. We primarily evaluate the effectiveness of Helip using zero-shot image classification, linear probing, and zero-shot image-text retrieval. In addition to commonly used ImageNet (Deng et al., 2009), CIFAR10, and CIFAR100 (Krizhevsky et al., 2009), we also verify the performance on 7 finegrained classification datasets including Caltech101 (Fei-Fei et al., 2004), Food101 (Bossard et al., 2014), Sun397 (Xiao et al., 2010), Flowers102 (Nilsback & Zisserman, 2008), CUB (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC Aircraft Maji et al. (2013). The zero-shot image-text retrieval task uses MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015). Implementation details. Our experiments are conducted across three distinct architectures: ResNet-50, ViT-B/16, and ViT-B/32, tailored to various datasets and pretrained models. Specifically, for loading the pretrained CLIP model on CC3M and CC12M, the ResNet-50 is used as the image encoder. Besides, to align with existing checkpoints established by Mu et al. (2022), we use ViT-B/16 for SLIP model experiments on CC3M and CC12M, respectively. And, we use ViT-B/32 for pretraining on YFCC15M v1, v2, and Open29M datasets to ensure fair comparison with the results reported in Li et al. (2022b). Furthermore, for the SLIP and DECLIP models, we adapt the pretrained parameters from the publicly available resources∗ The input resolution of the image encoder is 224 × 224 and the maximum context length of the text encoder is 77. All of our experiments are conducted on 8 V100 GPUs with a batch size of 128 for ViT-B/16 models, and a batch size of 512 for ResNet-50 models and ViT-B/32 models. The dimension of the image and text embeddings is 1024 for ResNet-50 models and 512 for ViT-B/16 and ViT-B/32 models. We set τ = 0.5, γ = 1, k = 50 and p = 1 for all the experiments by default. Automatic mixed-precision is used to save GPU memory. To keep the model from overfitting, we use early stopping if there is no performance gain on ImageNet zero-shot accuracy in 5 epochs. It is worth noting that using zero-shot classification performance on ImageNet as a criterion for early stopping is a commonly used practice for the training of CLIP (Radford et al., 2021; Mu et al., 2022). To reflect that our method is designed to work with few assumptions on encoder, we used encoders pretrained over a single-modal source rather than multimodally pretrained ones when preparing hard negative pairs. Specifically, we used an unsupervised pre-trained vision transformer, DINO VITs8 (Caron et al., 2021), and a Sentence Transformer (SentenceT) (Reimers & Gurevych, 2019) to encode text. For DINO VITs8, the embedding size is 384, while for SentenceT, it is 768. ## 4.2 Main Results And Discussion Zero-shot classification. We compare the zero-shot performances of the CLIP, SLIP, DECLIP, and those models finetuned by Helip on CC3M, CC12M, YFCC15M and Open29M. We denote the models finetuned by Helip as CLIP-Helip , SLIP-Helip , and DECLIP-Helip respectively. Table 1 demonstrates that models fine-tuned by Helip consistently outperform their counterparts. Specifically, for models pretrained on the CC3M dataset, Helip boosts the ImageNet zero-shot classification accuracy of the CLIP model from 19.04% to 19.86%. Additionally, on the SLIP model, a performance improvement of over 13% is observed, achieving an accuracy of 26.05%. We additionally include two baseline methods: CYCLIP (Goel et al., 2022) and CLOOB (Fürst et al., 2021) for reference. For CC12M pretraining, we used the SLIP checkpoints released by Mu et al. (2022). On ImageNet, SLIP-Helip has a 4.47% higher zero-shot accuracy than its counterpart. Due to the lack of openly accessible parameters for DECLIP on the CC3M and CC12M datasets, our analysis focused on comparing DECLIP with DECLIP-Helip over the YFCC15M v2 dataset. In this context, we present the performance of the SLIP and DECLIP models, as pretrained and released by Li et al. (2022b). The result was obtained by using their evaluation pipeline, denoted with ∗. Note, the templates are important for zero-shot tasks. Consequently, for a fair comparison, our analysis and conclusions primarily rely on results obtained from our evaluation pipeline, which is same as the approach used by OpenCLIP. Due to space constraints, we provide more information about the baselines in Appendix B. Notably, both SLIP and DECLIP showed improvements with Helip, averaging increases of 15.49% and 6.74%, respectively. Further, to demonstrate Helip's sustained efficacy across larger datasets, we assessed CLIP and CLIP-Helip on Open29M. The original CLIP model, upon training with the Open29M dataset, achieves its best performance at the 18th epoch, achieving a zero-shot accuracy of 42.32% on ImageNet. The Helip method instantly boosts the performance of the existing CLIP (checkpoint saved at the 18th epoch) from 42.32% to 46.33% with just one additional training epoch. However, extending the training with the original CLIP loss resulted in a marginal decline in accuracy to 42.25%. ∗https://github.com/facebookresearch/SLIP, https://github.com/Sense-GVT/DeCLIP. | Method | ImageNet | CIFAR10 | CIFAR100 | | | |------------------------------|-----------------|---------------------------|-----------------|-------|-------| | CYCLIP (Goel et al., 2022) | 22.08 | 51.45 | 23.15 | | | | CLOOB (Fürst et al., 2021) | 23.97 | - | - | | | | CLIP† (Radford et al., 2021) | 19.04 | 33.06 | 13.77 | | | | CC3M | | CLIP† -Helip | 19.86 | 34.05 | 14.13 | | SLIP (Mu et al., 2022) | 23.00 | 65.61 | 34.69 | | | | SLIP-Helip | 26.05 | 68.18 | 37.77 | | | | CLIP† (Radford et al., 2021) | 30.27 | 51.07 | 21.94 | | | | CLIP† -Helip | 32.05 | 52.27 | 24.51 | | | | CC12M | | SLIP (Mu et al., 2022) | 41.17 | 81.30 | 53.68 | | SLIP-Helip | 45.64 | 82.31 | 53.79 | | | | SLIP (Mu et al., 2022) | 25.29 (34.30∗ ) | 60.19 | 26.80 | | | | SLIP-Helip | 35.43 | 75.49 | 47.84 | | | | YFCC15M | | DECLIP (Li et al., 2022b) | 36.05 (43.20∗ ) | 78.12 | 50.60 | | DECLIP-Helip | 43.80 | 84.88 | 56.31 | | | | CLIP† (Radford et al., 2021) | 42.32 | 71.98 | 42.73 | | | | 29M | | CLIP† Cont. Train | 42.25 | 71.72 | 42.66 | | CLIP† -Helip | 46.33 | 77.97 | 48.33 | | | Table 1: **Zero-shot classification performance on ImageNet, CIFAR10 and CIFAR100.** The † indicates baselines pre-trained by us. For all other baselines, publicly available pre-trained parameters were used. Specifically for SLIP and DECLIP on YFCC15M, we report results from two sources: our evaluation using OpenCLIP's framework with pre-trained parameters released by Li et al. (2022b), and the performance originally reported in Li et al. (2022b), marked with ∗. | Dataset | Method | Caltech101 | Food101 | Sun397 | Flowers102 | CUB | Stanford Cars | FGVC Aircraft | Average | |------------|----------|--------------|-----------|----------|--------------|-------|-----------------|-----------------|-----------| | CLIP | 42.14 | 13.02 | 27.08 | 13.37 | 3.45 | 1.08 | 1.02 | 14.45 | | | CLIP-Helip | 48.08 | 13.11 | 28.94 | 13.61 | 3.70 | 1.17 | 1.11 | 15.67 | | | CC3M | SLIP | 54.01 | 16.03 | 29.19 | 12.06 | 4.70 | 1.21 | 1.50 | 16.96 | | SLIP-Helip | 66.89 | 17.05 | 33.69 | 15.16 | 4.85 | 1.19 | 1.29 | 20.12 | | | CLIP | 63.78 | 31.53 | 37.86 | 19.56 | 7.32 | 14.22 | 2.49 | 25.25 | | | CLIP-Helip | 64.85 | 36.49 | 38.22 | 24.73 | 8.58 | 15.59 | 2.97 | 27.35 | | | CC12M | SLIP | 76.33 | 52.33 | 44.96 | 31.81 | 10.50 | 22.53 | 3.06 | 34.50 | | SLIP-Helip | 80.28 | 54.86 | 47.53 | 31.39 | 10.56 | 25.67 | 4.08 | 36.34 | | Zero-shot fine-grained classification. By leveraging hard image-text pairs in contrastive learning, Helip amplifies the discriminative capability of the CLIP model's visual embedding. This improvement proves valuable in classification, particularly for fine-grained datasets. Our evaluation on 7 fine-grained classification datasets (Table 2) reveals that SLIP-Helip boosts the zero-shot accuracy of CC3M and CC12M pretrained SLIP on Caltech101 by 12.88% and 3.95% respectively. Both CLIP and SLIP models witness consistent improvements with their Helip counterparts. Table 2: **Zero-shot performance on fine-grained image classification.** On a variety of fine-grained classification benchmarks, Helip consistent boosts the model performance compared to the original versions. Linear probing. The linear probing task trains a randomly initialized linear classifier on the feature extracted from the frozen image encoder on the downstream dataset. To accomplish this, we train the logistic regression classifier using scikit-learn's L-BFGS implementation (Pedregosa et al., 2011), with maximum 1,000 iterations on those 7 datasets. For each dataset, we search for the best regularization strength factor on the validation set over 45 logarithmically spaced steps within the range 1e-6 to 1e+5. Experimental results in Table 3 demonstrate that both CLIP-Helip and SLIP-Helip have consistent improvements over their counterparts on almost all 7 datasets. Note that on CC12M, SLIP-Helip performs marginally better on 5 out of 7 datasets. It's probably because the self-supervision of SLIP (Mu et al., 2022) within the visual modal can be beneficial for learning fine-grained visual embedding, while SLIP-Helip doesn't include image | Dataset | Method | Caltech101 | Food101 | Sun397 | Flowers102 | CUB | Stanford Cars | FGVC Aircraft | Avg. | |------------|----------|--------------|-----------|----------|--------------|-------|-----------------|-----------------|--------| | CC3M | CYCLIP | 80.88 | 54.95 | - | 83.74 | - | 22.72 | 28.02 | - | | CLIP | 80.11 | 53.82 | 56.40 | 84.07 | 40.30 | 22.70 | 35.61 | 53.29 | | | CLIP-Helip | 82.49 | 59.79 | 59.56 | 87.84 | 46.19 | 30.01 | 42.48 | 58.34 | | | SLIP | 87.96 | 72.50 | 66.96 | 91.91 | 49.77 | 39.25 | 45.87 | 64.89 | | | SLIP-Helip | 89.64 | 73.09 | 67.67 | 93.02 | 53.16 | 42.44 | 48.66 | 66.81 | | | CLIP | 85.35 | 68.00 | 64.45 | 87.88 | 48.75 | 57.80 | 40.32 | 64.65 | | | CLIP-Helip | 85.87 | 68.89 | 64.95 | 88.36 | 49.41 | 58.55 | 40.17 | 65.17 | | | CC12M | SLIP | 92.89 | 83.63 | 74.34 | 94.87 | 60.99 | 73.43 | 52.23 | 76.05 | | SLIP-Helip | 92.85 | 84.25 | 74.74 | 95.09 | 60.53 | 74.23 | 52.36 | 76.29 | | self-supervision during the training. In addition, we did not match the training batch size as SLIP (Mu et al., 2022) because of resource limitations. A combination of Helip and image self-supervision and larger training batch size may be a potential direction for achieving better linear probe performance. Table 3: **Linear probe performance on Fine-grained Image Classification.** On average, the linear probe performance of CLIP and SLIP pretrained on CC3M and CC12M are improved. | Pretraining Dataset | Method | | COCO | Flickr30K | | |-----------------------|------------|-------|--------|-------------|-------| | | | R@1 ↑ | R@5 ↑ | R@1 ↑ | R@5 ↑ | | | CLIP | 14.4 | 34.1 | 31.7 | 56.0 | | | CLIP-Helip | 17.8 | 39.8 | 35.4 | 61.0 | | CC3M | SLIP | 22.3 | 45.6 | 39.6 | 68.6 | | | SLIP-Helip | 23.4 | 48.3 | 41.8 | 69.6 | | | CLIP | 26.9 | 52.6 | 47.2 | 74.3 | | | CLIP-Helip | 27.8 | 54.3 | 48.2 | 75.4 | | CC12M | SLIP | 39.0 | 66.0 | 65.4 | 90.1 | | | SLIP-Helip | 39.4 | 67.2 | 66.2 | 89.7 | Zero-shot retrieval. We evaluate Helip on zero-shot image-to-text retrieval tasks on MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015). As shown Table 4, both CLIP and SLIP, pre-trained on CC3M and CC12M , consistently improved by Helip. Table 4: **Zero-shot image-text retrieval results on MSCOCO and Flickr.** ↑ indicates higher is better. Combining with Helip, CLIP and SLIP show better performance. ## 4.3 Performance Of Helip With Scaled Training Data To investigate the impact of expanded training dataset sizes on the effectiveness of Helip, we trained the CLIP model on the YFCC15M dataset. This training yielded a zero-shot classification accuracy of 25.46% on ImageNet. After applying Helipand one epoch of training, its performance improved to 26.45%. To summarize the zero-shot performance on ImageNet of both the standard CLIP and its enhanced version, CLIP-Helip, across different data scales, we have illustrated these results in Figure 4. The results show that Helipconsistently enhances CLIP's performance. Most notably, the largest dataset, Open29M, witnessed a remarkable performance increase of 3.06% with Helip. This result indicates that Helipcan provide immediate performance enhancements for well-trained CLIP models on larger datasets, such as the private 400M dataset mentioned in Radford et al. (2021). ## 4.4 Performance Of Helip On Noisy Dataset We expanded our investigation to assess the effectiveness of Helipon subsets of LAION7.5M and 8M, which are randomly sampled from LAION (Schuhmann et al., 2022). The results are detailed in Table 5. The CLIP model, enhanced with Helip consistently outperformed its original counterpart on both subsets across a majority of the evaluated datasets, including ImageNet, CIFAR10, CIFAR100, Caltech, and Food. On the ![9_image_0.png](9_image_0.png) Figure 4: **Zero-shot performance on ImageNet for models pre-trained on different dataset sizes**. 7.5M subset, Helip enhances performance across all datasets by an average of 3.6%. Although CLIP scores slightly higher on the Sun dataset, Helipboosts its overall performance with an average improvement of 2.5% on the 8M subset. These results highlight the enhanced performance achieved through Helip, demonstrating its robustness and effectiveness in improving existing models that have been pretrained on noisy data. | ImageNet | CIFAR10 | CIFAR100 | Caltech | Food | Sun | Avg. | | |-----------------|-----------|------------|-----------|--------|-------|--------|------| | CLIP-7.5M | 23.5 | 34.6 | 14.5 | 58.9 | 28.6 | 25.3 | 30.8 | | CLIP-Helip-7.5M | 25.8 | 39.9 | 16.7 | 61.9 | 34.1 | 28.2 | 34.4 | | CLIP-8M | 25.1 | 31.1 | 12.9 | 60.9 | 29.5 | 27.5 | 31.2 | | CLIP-Helip-8M | 26.5 | 38.8 | 14.6 | 62.3 | 33.1 | 26.6 | 33.7 | Table 5: **Zero-shot performance of CLIP on two LAION subsets.** ## 4.5 Comparison With Other Hard Data Selection Method We evaluate the efficacy of the proposed method in enhancing the discriminative capacity of learned representations by comparing its zero-shot classification performance with that of other hard data mining strategies. As described in the Section 2, a common way to define hard data is through intra-modality similarity. Hence, we introduce the hard data mining methods depending on (sample level) image similarity mining and text similarity mining and denote them as IM and TM respectively. For a given target pair, we compute the cosine similarity between its image/text representation and that of the remaining dataset. The image and text representations are encoded using a pretrained Resnet50 and BERT, respectively. As the preprocessing step, IM and TM methods mine hard negatives before continuous pretraining. Subsequently, we integrate the mined hard negative pairs into the training pipeline of CLIP and denote them as CLIP+IM and CLIP+TM and optimize the original contrastive loss to fine-tune the model. Additionally, we also include the hard negative contrastive loss, HN-NCE, proposed by Radenovic et al. (2023), as a baseline. HN-NCE upsamples the weight of hard-negatives identified by the current model. As shown in Table 6, when the CC3M pretrained CLIP model is combined with Helip, the performance of our pair-level hard data mining method significantly outperforms other sample-level techniques. Besides,we observe that compared to the baseline CLIP performance, the introduction of TM and IM methods results in a decline in performance. To better understand the reasons behind this drop, we analyzed the outputs of the TM and IM methods in detail. In Figure5, we illustrate the data obtained through three distinct preprocessing methods: Hard Pair Mining (HPM), Image Similarity Mining (IM), and Text Similarity Mining (TM). The first row depicts the image-text pairs identified by HPM, while the second and third rows showcase the pairs mined by IM and TM, respectively. For TM (IM displays similar issues), the selected pairs often feature captions that are highly similar or identical, which is typical in data collected from the web. Even though identical pairs may not always be present, repetitions of the same images or text are common. According to | Imagenet | CIFAR10 | CIFAR100 | | |---------------|-----------|------------|-------| | CLIP | 19.04 | 33.06 | 13.77 | | CLIP + TM | 16.70 | 28.71 | 9.67 | | CLIP + IM | 16.93 | 29.22 | 10.42 | | CLIP + HN-NCE | 19.47 | 29.88 | 11.83 | | CLIP + Helip | 19.86 | 34.05 | 14.13 | Table 6: Zero-shot performance of CLIP pre-trained on CC3M boosted by hard data mined by different methods. Helip shows superior performance, consistently outperforming local/global hard sample mining techniques by a substantial margin. the CLIP contrastive loss (Equation 1), the model is forced to push nearly identical caption representations toward and away from two distinct image representations at the same time.This inherent contradiction in objectives contributes to a degradation in performance. To illustrate, consider a target pair (Ttarget, Itarget) and a mined pair (Tmined, Imined) using TM, where Ttarget ≈ Tmined but Itarget ̸≈ Imined. In the contrastive loss framework, the model aims to minimize the distance between (Itarget, Ttarget) and maximize the distance between (Itarget, Tmined). However, the near-identity of Ttarget and Tmined leads to conflicting optimization targets and a potential decline in performance. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) Figure 5: **Hard negative data selected by different methods.** Compared to data mined using the sample-level (image/text modal) similarity, hard pairs mined by HPM are more similar to the target. Figure 6: **HPM and fastHPM.** We show the hard pairs mined by HPM and fastHPM. The quality of hard pairs mined by fastHPM is competitive with the pairs mined by HPM. ## 4.6 Impact Of Hard Negative Margin Loss We investigate the impact of using hard negative margin loss (HNML) on the performance of the SLIP model. In particular, our attention is directed towards an analysis of the SLIP model's performance, which has been previously pre-trained on the CC3M dataset, when it is both further trained with HPM+HNML and left without HNML. Our approach involves a comparative analysis of the model's zero-shot classification performance across multiple datasets including ImageNet, CIFAR 100, CIFAR 10, Caltech 101, Food 101, and Sun397. The results of our evaluation are comprehensively detailed in Table 7. These demonstrate that the SLIP model supplemented with HPM and HNML exhibits superior performance, with a performance boost of 4.51 and 3.27 compared to the SLIP and SLIP + HPM models respectively. Interestingly, the model achieved superior performance on the CIFAR 10 dataset without HNML. We postulate that this may be attributed to HNML's ability to enhance the discriminative power of the learnt representations by employing the class distance as a cost metric. In light of this, our findings suggest that for classification datasets consisting of a larger number of subclasses, employing HNML during the training phase can lead to an increase in classification performance. | ImageNet | CIFAR10 | CIFAR100 | Caltech101 | Food101 | Sun397 | Avg. | | |------------|-----------|------------|--------------|-----------|----------|--------|-------| | SLIP | 23.00 | 65.61 | 34.69 | 54.01 | 16.03 | 29.20 | 37.09 | | wo HNML | 24.94 | 69.44 | 36.35 | 64.07 | 16.51 | 30.91 | 40.37 | | w HNML | 26.05 | 68.18 | 37.77 | 66.89 | 17.05 | 33.68 | 41.60 | | ImageNet | CIFAR10 | CIFAR100 | Avg. | | |--------------------|-----------|------------|--------|-------| | CLIP Encoders | 19.57 | 33.28 | 13.53 | 22.12 | | VITs8 + SentenceT | 19.86 | 34.05 | 14.13 | 22.68 | | VITb16 + SentenceT | 19.62 | 35.53 | 14.67 | 23.27 | | VITs8 + T5 | 19.61 | 33.99 | 13.82 | 22.47 | Table 7: **SLIP finetuned with and without hard negative margin loss.** When finetuned with hard pairs, the zero-shot performance of CC3M pretrained SLIP can be further enhanced usingx HMNL. Table 8: **The zero-shot performances of HELIP with different encoders in HPM.** HPM's performance is insensitive to the selection of encoders. ## 4.7 Delving Into Hard Pair Mining | Imagenet | CIFAR10 | CIFAR100 | | |-------------|-----------|------------|-------| | SLIP | 41.17 | 81.30 | 53.68 | | Helip- 3M | 45.07 | 82.42 | 55.22 | | Helip- 6M | 44.98 | 81.64 | 56.62 | | Helip- Full | 45.64 | 82.31 | 53.79 | Impact of different encoders in HPM. We explored the effect of different pretrained encoders on HPM's performance by alternating image and text encoders. Initially, the unsupervised pretrained DINO VITs8 (Caron et al., 2021) was paired with the SentenceT (Reimers & Gurevych, 2019) transformer, trained on over a billion internet-based sentences. This combination was later swapped for the SWAG VITb16 (Singh et al., 2022) and the T5 (Raffel et al., 2020). Additionally, experiments using OpenAI's CLIP model (Radford et al., 2021) multimodal encoders were conducted. Interestingly, as Table 8 suggests, the encoder choice seemingly has negligible impact on HPM's performance, likely due to the proficiency of current pretrained models in modeling intra-modal similarities. Moreover, the ability to use single-modal pretrained models and still achieve competitive or superior performance implies that there's no assumption of having access to a high-quality CLIP model, such as OpenAI's CLIP-400M. Performance Comparison between HPM and FastHPM. A comparison was made between the zeroshot performances of SLIP models, further trained with hard pairs obtained from both HPM and fastHPM. This comparison, conducted under three different settings, was summarized in Table 9. Additionally, we established subsets Dei of sizes 3M and 6M, and accordingly denoted Helip with these subset sizes as Helip3M and Helip-6M. Table 9 shows that the zero-shot performances of Helip-3M and Helip-6M remain competitive with the global HPM hard pair mining approach. These findings suggest that fastHPM offers an efficient strategy for hard pair mining, without compromising performance. Additionally, they hint at fastHPM's potential to scale up hard pair mining in larger pre-training datasets, a promising direction for future exploration. Table 9: **Zero-shot performance for SLIP + HELIP on CC12M with hard samples mined with** HPM and fastHPM. Compared with hard samples mined with HPM, the fast versions are competitive with the full version. Visual insights into HPM and FastHPM. We took the initiative to visualize the hard pairs as identified by the aforementioned three methods. Within Figure 6, the leftmost image-text pairing is earmarked as the target. The pairs in the primary row are those selected via HPM. The subsequent rows, specifically the second and third, present image-text pairings identified by the 6M fastHPM and the 3M fastHPM methods, respectively. Through a comparative visualization, it's evident that the hard pairs pinpointed by fastHPM bear a significant resemblance to the target pair. For readers keen on delving deeper, we've provided an extended set of visualization outcomes in Appendix F. Computational time analysis. Table 10 provides a comparison of the computational time required by HPM and fastHPM. The hard negative pairs preparation times listed were measured on 8 V100 GPUs, with the exception of the ∗ symbol, which was measured on a single V100 GPU. Given its efficiency and the performance similarities observed in Table 9, fastHPM emerges as a compelling alternative to the full HPM method. | | CC3M | CC12M | YFCC15M | |-------------|---------|---------|-----------| | Helip- 3M | - | 2h18min | 3h27min | | Helip- 6M | - | 5h3min | 6h19min | | Helip- Full | 1h9min∗ | 9h11min | 17h41min | Table 10: **Preparation time for hard pairs.** FastHPM speeds up the hard negative pairs mining process. ## 5 Conclusion In this study, we delve into boosting pre-trained CLIP models' performance by more adeptly utilizing their original training dataset. This initiative arose from the recognition of the loosely connected nature of webcrawled image-text pairs, which resulted in suboptimal data utilization due to conventional CLIP loss. Our framework, Helip, introduces a cost-effective and easily integrable solution for improving existing model performance without extensive retraining or additional datasets. It selects hard pair data from their original training datasets and refines the existing models in a few epochs to immediately boost their performance. Specifically, Helip treats each text-image pair as a single point in the joint vision-language space, defining those that are close together as hard pairs. The Hard Pair Mining (HPM) strategy effectively identifies challenging hard pairs. The Hard Negative Margin Loss (HNML) was developed to improve existing models by utilizing that hard data. Empirical evaluations across various benchmarks, such as zero-shot classification, image-text retrieval, and linear probing, demonstrate the effectiveness and efficiency of Helip. We leave the discussion of future work in the appendix. ## References Muhammad Awais, Muzammal Naseer, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Foundational models defining a new era in vision: A survey and outlook. *arXiv preprint arXiv:2307.13721*, 2023. Alberto Baldrati, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. Conditioned and composed image retrieval combining and partially fine-tuning clip-based features. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022, pp. 4955–4964. IEEE, 2022. doi: 10.1109/CVPRW56347.2022.00543. URL https: //doi.org/10.1109/CVPRW56347.2022.00543. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In *European conference on computer vision*, pp. 446–461. Springer, 2014. Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar. When is memorization of irrelevant training data necessary for high-accuracy learning? In Proceedings of the 53rd annual ACM SIGACT symposium on theory of computing, pp. 123–132, 2021. Tiffany Tianhui Cai, Jonathan Frankle, David J. Schwab, and Ari S. Morcos. Are all negatives created equal in contrastive instance discrimination? *ArXiv preprint*, 2020. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *2021 IEEE/CVF International* Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pp. 9630– 9640. IEEE, 2021. doi: 10.1109/ICCV48922.2021.00951. URL https://doi.org/10.1109/ICCV48922. 2021.00951. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 3558– 3568. Computer Vision Foundation / IEEE, 2021. doi: 10.1109/CVPR46437.2021.00356. URL https://openaccess.thecvf.com/content/CVPR2021/html/Changpinyo_Conceptual_12M_Pushing_ Web-Scale_Image-Text_Pre-Training_To_Recognize_Long-Tail_Visual_CVPR_2021_paper.html. Feilong Chen, Xiuyi Chen, Shuang Xu, and Bo Xu. Improving cross-modal understanding in visual dialog via contrastive learning. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7937–7941. IEEE, 2022. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *Proc. of ICML*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1597–1607. PMLR, 2020a. URL http://proceedings.mlr.press/v119/chen20j.html. Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *ArXiv preprint*, 2020b. Yihua Chen, Eric K Garcia, Maya R Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based classification: Concepts and algorithms. *Journal of Machine Learning Research*, 10(3), 2009. Yufeng Cui, Lichen Zhao, Feng Liang, Yangguang Li, and Jing Shao. Democratizing contrastive languageimage pre-training: A clip benchmark of data, model, and supervision, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition* (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248–255. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206848. URL https://doi.org/10.1109/CVPR.2009.5206848. Li Fei-Fei, Fergus Rob, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In Computer Vision and Pattern Recognition Workshop, 2004. CVPRW'04. Conference on. IEEE, 2004. Andreas Fürst, Elisabeth Rumetshofer, Viet Tran, Hubert Ramsauer, Fei Tang, Johannes Lehner, David P. Kreil, Michael Kopp, Günter Klambauer, Angela Bitto-Nemling, and Sepp Hochreiter. CLOOB: modern hopfield networks with infoloob outperform CLIP. *ArXiv preprint*, 2021. Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. *ArXiv preprint*, 2023. Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A Rossi, Vishwa Vinay, and Aditya Grover. Cyclip: Cyclic contrastive language-image pretraining. *ArXiv preprint*, 2022. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. In *Proc. of ICLR*. OpenReview.net, 2022. URL https: //openreview.net/forum?id=0RDcd5Axok. Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, and Maryam Khademi. Boosting contrastive self-supervised learning with false negative cancellation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, 2022. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In Marina Meila and Tong Zhang (eds.), *Proc. of ICML*, volume 139 of *Proceedings of Machine* Learning Research, pp. 4904–4916. PMLR, 2021. URL http://proceedings.mlr.press/v139/jia21b. html. Yannis Kalantidis, Mert Bülent Sariyildiz, Noé Pion, Philippe Weinzaepfel, and Diane Larlus. Hard negative mixing for contrastive learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing* Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ f7cade80b7cc92b991cf4d2806d6bd78-Abstract.html. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *Proceedings of the IEEE International Conference on Computer Vision Workshops*, pp. 554–561, 2013. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven Chu-Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14,* 2021, virtual, pp. 9694–9705, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ 505259756244493872b7709a8a01b536-Abstract.html. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), *International Conference on* Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings* of Machine Learning Research, pp. 12888–12900. PMLR, 2022a. URL https://proceedings.mlr.press/ v162/li22n.html. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. *arXiv preprint arXiv:1908.03557*, 2019. Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In *Proc. of ICLR*. OpenReview.net, 2022b. URL https://openreview.net/forum?id=zq1iJkNk3uN. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In *Proc. of ECCV*, 2014. S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013. Norman Mu, Alexander Kirillov, David A. Wagner, and Saining Xie. SLIP: self-supervision meets languageimage pre-training. In *Proc. of ECCV*, 2022. M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In *Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing*, Dec 2008. Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, and Francesco Locatello. Asif: Coupled data turns unimodal models to multimodal without training. *ArXiv preprint*, 2022. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. *the Journal of machine Learning research*, 2011. Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In *2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015*, pp. 2641–2649. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.303. URL https://doi.org/10.1109/ICCV.2015.303. Filip Radenovic, Abhimanyu Dubey, Abhishek Kadian, Todor Mihaylov, Simon Vandenhende, Yash Patel, Yi Wen, Vignesh Ramanathan, and Dhruv Mahajan. Filtering, distillation, and hard negatives for visionlanguage pre-training. *CoRR*, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), Proc. of ICML, volume 139 of *Proceedings of Machine Learning Research*, pp. 8748–8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/radford21a.html. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. URL http://jmlr.org/papers/v21/20-074.html. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In *Proc. of EMNLP*, pp. 3982–3992, Hong Kong, China, 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1410. URL https://aclanthology.org/D19-1410. Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. In *Proc. of ICLR*. OpenReview.net, 2021. URL https://openreview.net/forum? id=CR1XOQ0UTh-. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open largescale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. Anshul Shah, Suvrit Sra, Rama Chellappa, and Anoop Cherian. Max-margin contrastive learning. In ThirtySixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pp. 8220–8230. AAAI Press, 2022. URL https://ojs.aaai.org/index.php/AAAI/article/view/20796. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proc. of ACL*, pp. 2556–2565, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1238. URL https://aclanthology.org/P18-1238. Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 804–814, 2022. Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, and SueYeon Chung. On the geometry of generalization and memorization in deep neural networks. In *Proc. of ICLR*. OpenReview.net, 2021. URL https://openreview.net/forum?id=V8jrrnwGbuc. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. Technical report, California Institute of Technology, 2011. Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *Proc. of ICML*, volume 119 of *Proceedings of Machine Learning* Research, pp. 9929–9939. PMLR, 2020. URL http://proceedings.mlr.press/v119/wang20k.html. Bichen Wu, Ruizhe Cheng, Peizhao Zhang, Tianren Gao, Joseph E. Gonzalez, and Peter Vajda. Data efficient language-supervised zero-shot recognition with optimal transport distillation. In *Proc. of ICLR*. OpenReview.net, 2022. URL https://openreview.net/forum?id=G89-1yZLFHk. Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Largescale scene recognition from abbey to zoo. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, pp. 3485–3492. IEEE Computer Society, 2010. doi: 10.1109/CVPR.2010.5539970. URL https://doi.org/10.1109/CVPR. 2010.5539970. Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. FILIP: fine-grained interactive language-image pre-training. In *Proc. of ICLR*. OpenReview.net, 2022. URL https://openreview.net/forum?id=cpDhcsEDC2. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021. Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. Devlbert: Learning deconfounded visio-linguistic representations. In Proceedings of the 28th ACM International Conference on Multimedia, pp. 4373–4382, 2020. ## A Appendix: Algorithm We summarize the Hard Pair Mining (HPM), the fast Hard Pair Mining (fastHPM) and the training pipeline of Helip in Algorithm 1, 2 and 3 respectively. Algorithm 1: Hard Pair Mining (HPM) Input: Hard pairs number per sample k Pretrained unimodal vision model: ftext Pretrained unimodal vision model: fimage Dataset D = {(x I1 , xT 1 ),(x I2 , xT 2 ), *· · ·* ,(x I N , xTN )} Threshold for visual and textual modality τI and τT Output: Hard samples H = [H1, H2, *· · ·* , HN ] for i ∈ [1, N] do s ← [0, 0, *· · ·* , 0]⊤ ∈ R N Ii ← fimage(x I i ) Ti ← ftext(x T i ) for j ∈ [1, N] do Ij ← fimage(x I j ) Tj ← ftext(x T j ) S⃗I j ←Ii·Ij ∥Ii∥2 ·∥Ij ∥2 if Ii·Ij ∥Ii∥2 ·∥Ij ∥2 > τI **else** 0 S⃗T j ←Ti·Tj ∥Ii∥2 ·∥Tj ∥2 if Ti·Tj ∥Ti∥2 ·∥Tj ∥2 > τT **else** 0 sj ← S⃗I j · S⃗T j end Hi ← arg max(s, k) if ∃j ∈ Hi, sj = 0 **then** Hi = ∅ \# Indicate noise sample end Note, in the inner for loop, shown in Algorithm 1, the image and caption representations will be repeatedly computed. To accelerate the hard pair mining and avoid unnecessary computational overhead, we compute and save the encoded image features and text features. Besides, the outer loop is parallelized in the implementation. ## B Appendix: Discussion About Baselines In our experiments, we utilized CLIP, SLIP, and DECLIP as baseline models on CC3M, CC12M, YFCC15M, and Open29M datasets. To ensure our results are both compelling and reproducible, we primarily employed publicly available checkpoints as our baseline and rigorously tested the effectiveness of Helip against these checkpoints. On CC3M, the checkpoint of SLIP model is released†. We enhanced its performance by applying Helip which notably improved the zero-shot performance on ImageNet from 23.00 to 26.05. However, we noticed that the CLIP with ResNet50 on CC3M is missing. To address this, we undertook the pretraining ourselves. Our results were encouraging: the performance of our pretrained CLIP with ResNet50 achieved a score of 19.86, surpassing the 17.10 achieved by SLIP's CLIP with ViT-B/32 as reported in Mu et al. (2022). This outcome suggests the robustness of our implementation. Besides, consistent with several prior studies, we found that on smaller pretraining datasets, CLIP with ResNet50 outperforms CLIP with ViT-B. On the CC12M dataset, a similar situation arose: while the SLIP checkpoint was available, the CLIP model was absent, leading us to undertake its pretraining. On the YFCC15M (v1) collected by Radford et al. (2021), we trained the CLIP model. This resulted in a 25.46 score in the ImageNet zero-shot classification, closely aligning with the 26.10 outcome reported by Cui et al. (2022). Additionally, for the YFCC15M (v2) dataset †https://github.com/facebookresearch/SLIP#results-and-pre-trained-models Algorithm 2: fast Hard Pair Mining (fastHPM) Input: Hard pairs number per sample k Pretrained unimodal vision model: ftext Pretrained unimodal vision model: fimage Dataset D = {(x I1 , xT 1 ),(x I2 , xT 2 ), *· · ·* ,(x I N , xTN )} Threshold for visual and textual modality τI and τT Candidate pool size C Output: Hard samples H = [H1, H2, *· · ·* , HN ] for i ∈ [1, N] do Uniformly C samples from Dataset D, Di = {(x I1 , xT 1 ),(x I2 , xT 2 ), *· · ·* ,(x I C , xTC )} s ← [0, 0, *· · ·* , 0]⊤ ∈ R N Ii ← fimage(x I i ) Ti ← ftext(x T i ) for j ∈ [1, C] do Ij ← fimage(x I j ) Tj ← ftext(x T j ) S⃗I j ←Ii·Ij ∥Ii∥2 ·∥Ij ∥2 if Ii·Ij ∥Ii∥2 ·∥Ij ∥2 > τI **else** 0 S⃗T j ←Ti·Tj ∥Ii∥2 ·∥Tj ∥2 if Ti·Tj ∥Ti∥2 ·∥Tj ∥2 > τT **else** 0 sj ← S⃗I j · S⃗T j end Hi ← arg max(s, k) if ∃j ∈ Hi, sj = 0 **then** Hi = ∅ \# Indicate noise sample end referenced in Li et al. (2022b), both SLIP and DECLIP pretrained parameters were made available by Li et al. (2022b), which we utilized directly as our baselines. On the larger dataset, Open29M, there was a lack of open-source pretrained checkpoints, prompting us to conduct the pretraining ourselves. Notably, the performance of our reimplementation (42.32) closely aligns with the results reported by Li et al. (2022b), indicating the effectiveness of our approach. ## C **Appendix: Analysis Of The Impact Of Subset Size On Hard Pair Selection In** Fasthpm In the comparison of HPM and FastHPM detailed in Section 4.7, we explore the efficacy of using 3M and 6M subset sizes of the CC12M dataset in FastHPM for mining hard pairs. The result, Table 9, shows that with a reduced subset size as small as 3 million entries, mining hard pairs and further training with these pairs can boost CLIP to achieve competitive performance with full set for mining. In this section, we delve deeper into the analysis of hard pairs mined by FastHPM across varying subset sizes. Based on the selection criteria defined by FastHPM (Equation 5), we denote the selection criteria value as SeI(x I i , H⋆ i (j))⊤SeT(x T i , H⋆ i (j)). Here, H⋆ i (·) represents a pair within the set of hard pairs H⋆ i , mined by FastHPM for a specified target pair i under a given subset size. Additionally, the j in H⋆ i (j) indicates the j-th hard pair within the set H⋆ i . Note, a higher selection criteria value signifies a harder mined pair. We present the average selection criteria values for top-k hard pairs in Figure 7. As depicted by the grey horizontal line, the average selection criteria values for the top-20 hard pairs selected by FastHPM-1.5M, the top-40 by FastHPM-3M, and the top-80 by FastHPM-6M all approximate 0.477. This figure indicates that a further reduction in the subset size might necessitate adjustments to the number of hard pairs sampled to preserve quality. For instance, in our experiments detailed in Table 9, we uniformly sampled hard pairs for training from the top 50 for Helip-3M. As Figure 7 suggests, a sampling range of 10 for Helip-1M might Algorithm 3: Hard samplE for boosting contrastive Language-Image Pretrained models (Helip) Input: D = {(x I1 , xT 1 ),(x I1 , xT 1 ), *· · ·* ,(x I N , xTN )} Hard Pair Mining algorithm, HPM() \# or the fastHPM() Pretrained unimodal vision model: ftext Pretrained unimodal vision model: fimage Pretrained contrastive language-image model {ϕimage, ϕtext} hyperparameters: Hard pairs number k Hard negative margin strength γ Sampled hard negatives number p Learning ratio η Batch size b Training iteration number E Visual and textual modality threshold τI and τT Output: CLIP model {ϕimage, ϕtext} H ← HPM(D, ftext, fimage, k, τI , τT ) for *iter* ∈ [1, E] do B ← {z1*, . . . , z*b} i.i.d. ∼ *Uniform*(D) for zi ∈ B do H p i ← {zi*, . . . , z*p} i.i.d. ∼ *Uniform*(Hi) B ← B ∪ Hp i end Compute loss ℓfinetune, Equation (6), with samples B ϕimage ← ϕimage + η · ∂ϕimage ℓfinetune ϕtext ← ϕtext + η · ∂ϕtext ℓfinetune end be effective. Particularly, considering that Helipsignificantly boosted the pre-trained models with just an additional training epoch, as discussed in Section 4.2, selecting one hard pair for each target pair from a pool of 10 will be feasible. ## D Appendix: Analysis Of The Impact Of Τ **On Hard Pair Selection** To examine the impact of the threshold parameter τ on the selection of hard pairs, we analyze the similarities in the rankings of hard pairs (using Kendall Rank Similarity) mined by HPM under various τ values. The hard pairs are ranked by using the selection criteria value mentioned in Appendix C. The results on the CC12M dataset are displayed in Figure 8. We observe that the selection of hard pairs is robust to changes in the τ value. This resilience is partly because we only mine the top 50 hard pairs, a subset unlikely to be significantly affected when τ ≤ 0.5. ## E **Appendix: Analysis Of The Impact Of Mitigating Noisy Data** As presented in Section 3.2, to enhance the overall quality and reliability of the training dataset, data pairs lacking substantial support from the entirety of the training data are considered unsuitable and removed. This section further empirically analyzes the impact of our noise mitigation strategy by detailing the quantity and nature of pairs removed across various datasets. Specifically, our approach removes 4.67% of the pairs from CC3M, 3.64% from CC12M, and 7.41% from YFCC15M, before continuing with pretraining. Figure 9 visualizes the pairs filtered from CC12M. Notably, our strategy effectively removed pairs such as unavailable images (e.g., two blank or white images in the second row) and mismatched pairs. These results suggest that ![20_image_0.png](20_image_0.png) Figure 7: The average selection criteria values for hard pairs mined by FastHPM with different subset sizes. our noise mitigation strategy can effectively clean the data using two single-modality models before training a CLIP model from scratch. ## F Appendix: More Visualization Results We offer further visualization results pertaining to the hard samples mined by various methods. As depicted in Figure 10, the hard samples sourced by HPM closely resemble the target sample (seen at the top left). Interestingly, for samples with fewer objectives, the image and text mining method can identify a reasonably challenging counterpart, as seen in the case of "the harbor in a small village". However, for intricate scenes, only the HPM is capable of yielding sufficiently challenging samples, like the scenario "people touring and enjoying the public park during summer". The dataset acquired from the web encompasses a myriad of such intricate cases. We posit that this is why training with hard samples unearthed by HPM yields more proficient outcomes. Moreover, we present additional visualization results for hard samples mined via different techniques. Hard samples extracted by HPM exhibit a stronger resemblance to the target sample, as highlighted in Figure 10 (top left). We observed that the image and text mining methods can provide a relatively fitting hard counterpart for simpler samples, like "the harbor in a quaint settlement". However, for more intricate scenes, only the HPM method produces samples of adequate difficulty, such as "people touring and relishing the public park throughout summer". The web-based dataset includes a significant proportion of these complex cases. Consequently, we infer that training with hard samples mined by HPM results in enhanced performance. ![21_image_0.png](21_image_0.png) Figure 8: The Impact of τ on Hard Pair Selection. ## G Appendix: Future Work Moving forward, several possibilities for future research emerge. First, we aim to explore composition-aware fine-tuning for VLMs, which could potentially enable more effective utilization of multimodal information. Moreover, we are intrigued by the prospect of combining parameter-efficient tuning (He et al., 2022) with Helip potentially further enhancing performance. Another area of interest is scaling up the dataset size and examining the applicability of the scaling law to our method. We also intend to investigate how the integration of our boosting algorithm might alter the multimodal dataset curation algorithm (Gadre et al., 2023). Ultimately, we hope our work will serve as a catalyst for additional research in the fine-tuning of pre-trained, large-scale multimodal models. ![22_image_0.png](22_image_0.png) ![22_image_1.png](22_image_1.png) ![22_image_4.png](22_image_4.png) unfortunately , there are no appropriate pictures of us on the plane . industry of turquoise , nice way to ![22_image_2.png](22_image_2.png) ![22_image_5.png](22_image_5.png) handle the pass through my nephew met my cat for the first time . a male hiker looks over a view of ![22_image_6.png](22_image_6.png) ![22_image_7.png](22_image_7.png) ![22_image_3.png](22_image_3.png) person - the lady by the lake a street sign with the name of the place cloud , sun rays burst through the clouds during sunrise . what should i do if i have bad ![22_image_8.png](22_image_8.png) cuts on the sides of my mouth ? Figure 9: Visualization of the image-caption pairs filtered out from CC12M. ![23_image_0.png](23_image_0.png) Figure 10: **Hard pairs selected by different methods.**
Review 1: Summary: This paper mainly aims to boost the vision-language model, especially the CLIP. Technically, the main idea lies in finetuning the pre-trained CLIP with hard text-image pairs to enhance the model's ability. Based on CLIP, a new model termed HELIP is proposed. HELIP contains a hard pair mining (HPM) strategy to find the difficult pairs and introduces the hard negative margin loss (HNML). Results are reported on zero-shot classification and also fine-grained classification datasets. Strengths and Weaknesses: 1. The paper is well-written and easy to follow. 2. The idea of re-using the hard pairs of training data for further boosting the performance of CLIP is simple, direct, and makes sense to me. 3. Extensive experiments are conducted and show the effectiveness of the proposed method. Requested Changes: 1. discussion on related works could be more detailed. Take "vision-language pertaining" as an example, it mentions the single-stream and dual-stream models. However, no single paper nor discussion is given for the single-stream methods. 2. Fig.4 is in low-resolution. Broader Impact Concerns: / ================================================== Review 2: Summary: The authors propose methods to finetune contrastively trained text-image embedding models using hard negative text-image pairs. Their contribution is two-fold: * They introduce a method, HPM, to select hard negative pairs from the joint text-image datasets by relying on the representation of off-the-shelf image and text embedding networks. * They introduce a hard negative margin loss (HNML) to leverage these hard negative pairs in enforcing that similar pairs should be closer in the embedding space than dissimilar pairs. For a given text-image pair, HPM works by selecting for the hard negatives a subset of the text-image dataset for which a pair agreement model would predict maximum agreement for that pair. The pair agreement model is a dictionary method leveraging the encoding from the two frozen image and text embedders, and the selection of the maximizing subset is done greedily up to a fixed constant $k$. The authors additionally propose to do HPM on a randomly sampled subset of the dataset, rather than the full dataset. HNML is a loss that ensures that the similarity of a pair is closer to the similarity of a hard negative than to the similarity of a random negative. The authors show substantial improvement of their method in zero-shot and linear probe evaluation settings on popular datasets after fine-tuning of CLIP And SLIP with HPM or HPM + HNML, as well as a few ablation studies validating their method. Strengths and Weaknesses: Overall this looks like a solid work, very worthy of publication in TMLR. + The results are very convincing, with substantial improvements of the different metrics + The hard negative mining step can be done in a few hours on large subsets which seems very usable + The paper is well-written and the method is clear + The illustrations and ablations offer good insights in the effectiveness of the method Some weaknesses / remarks 1. I am surprised by the results in Table 6: how can the performance of finetuning with CLIP + TM or CLIP + IM be so much lower than the baseline performance of CLIP alone (19.04)? 2. As a sanity check, I would like to see the performance of SLIP and CLIP finetuned with $\ell_{\text{CLIP}}$ under the same schedules without HNML or HPM, to validate that the performance is not due to extra training. Similarly, using a random subset of negatives rather than a mined subset could be a relevant baseline. Requested Changes: 1. Please address question 1. above. 2. Some experimental justiciation to address remark 2. above would strenghten the work. 3. Table 9 suggests that the dataset for sampling hard negatives can be significantly reduced without consistent degradation; it would be interesting to understand when this breaks, i.e. when is the dataset too small? (in the limit, this can become a random subset of negatives, similar to question 2). 4. I am not sure if the term "margin" is appropriate for the loss outlined in equation (6); indeed, for a "margin" to be present, I would expect a constant in the loss, e.g. $\mathrm{max}(0, 1 + \text{difference of similarities})$... 5. I would be interested to see ablations for the usefullness of $\tau$ (0 threshold - is this needed?) 6. I would also be interested to happen what the result look like without the "Mitigation of noisy data impact." startegy of removing "unsuitable" pairs. Miscelaneous. 7. p.4: "such that the model M predicting the target pair as a matching pair" -> the end of the sentence is missing 8. Add CLIP performance in table 6. 9. p. 7: "we use early stopping if there is no performance gain in 5 epochs" -> how do the authors define this gain? Looking at the loss alone? 10. Remove repeated last sentence: :We leave the discussion of future work in the appendix. For the discussion of future work, we leave it in the appendix.: 11. Suggestion to add \text{} for text in equations, e.g. replace $\ell_{CLIP}$ by $\ell_{\text{CLIP}}$, $\ell_{margin}$ by $\ell_{\text{margin}}$, $f_{image}$ by $f_{\text{image}}$, etc... Similarly the notation argmax is inconsistent ($\mathrm{arg\ max}$ looks better than $Argmax$) Broader Impact Concerns: N/A ================================================== Review 3: Summary: This work introduces Helip to enhance the performance of pre-trained CLIP models without necessitating additional data collection or model retraining from scratch. Helip identifies and utilizes challenging text-image pairs from the original training datasets, refining models with a combination of traditional contrastive loss and a newly proposed hard negative margin loss. The efficacy of Helip is demonstrated through improvements in zero-shot classification accuracy on ImageNet for SLIP models pre-trained on various datasets and notable gains in both zero-shot performance and linear probe performance across fine-grained classification datasets. These achievements highlight Helip's potential as a straightforward yet powerful tool for improving cross-modal representation learning with little additional resource. Strengths and Weaknesses: Pros: 1. The proposed idea is intuitive and straightforward. 2. The paper is overall easy to follow and well presented. 3. Some of the results look good. Cons: 1. The idea of hard negative mining is not really new to the community, as well as the point of mitigating need for finding global hard negative samples. Prior visual recognition/metric learning work has densely studied this problem [a,b,c,d]. It is important to comprehensively review the existing work and thus correctly position the work. 2. The other major concern is on the scalability of the method. Currently the method is only evaluated on rather small-scale models and it is more important that a model pretrained on very large scale dataset with a large size of parameters can still benefit from further tuning the model using the proposed method on a smaller scale data and computation. 3. The involvement of other pretrained models make it unclear whether there could be actually better way to distill their knowledge into the target model rather than simply using them to curate hard negative samples. It is also not clear whether it is necessary to use hard negatives during the training process in the proposed way, considering all the existing methods available, even with the ones that does not require hard negative mining [d]. [a] Facenet: A unified embedding for face recognition and clustering [b] Training region-based object detectors with online hard example mining [c] Smart mining for deep metric learning [d] Deep Adversarial Metric Learning Requested Changes: Please check weakness for details. Broader Impact Concerns: NA ==================================================
# Ensemble Policy Optimization With Diversity Regularization Anonymous authors Paper under double-blind review ## Abstract In machine learning tasks, ensemble methods have been widely adopted to boost the performance by aggregating multiple learning models. However, ensemble methods are much less explored in the task of reinforcement learning, where most of previous works only combine multiple value estimators or dynamics models and use a mixed policy to explore the environment. In this work, we propose a simple yet effective ensemble policy optimization method to improve the joint performance of the policy ensemble. This method utilizes a policy ensemble where heterogeneous policies explore the environment collectively and their diversity is maintained by the proposed diversity regularization mechanism. We evaluate the proposed method on continuous control tasks and find that by aggregating the learned policies into an ensemble policy in test time, the performance is greatly improved. DEPO has performance improvement and faster convergence over the base on-policy single-agent method it built upon. Code will be made publicly available. ## 1 Introduction Ensemble methods, which train multiple learners to solve the same problem (Zhou, 2012), have been widely applied in machine learning to improve the model performance (Hansen & Salamon, 1990; Freund & Schapire, 1997; Dietterich, 2000; Singh et al., 2016; Huang et al., 2017). However, in the field of Reinforcement Learning (RL), ensemble methods are much less explored. So far previous works have studied the ensemble of value functions (Osband et al., 2016; Chen et al., 2017) or the ensemble of dynamics models (Rajeswaran et al., 2016; Chua et al., 2018; Buckman et al., 2018). Both works focus on fitting multiple estimators that are used to train the policy while the training data is still collected by a single policy that is derived from the ensemble of estimators. As a result, the collected data is bounded in a limited state-action subspace which is highly correlated to previous experience. We consider adopting ensemble method in RL in its most straightforward form: improving the performance by averaging the action distributions of learned policies. We first examine an individual PPO policy ensemble, where we train 5 PPO policies independently. As shown in the Fig. 1, surprisingly, directly aggregating the independent policies turns out to be not effective. We find that those learned policies converge to different behavioral modes due to the initialization and their historical experience, thus simply averaging the outputs jeopardizes the performance. To address above issue, we propose *Diversity-regularized Ensemble Policy Optimization (DEPO)*, a simple yet effective framework that augments the single-policy optimization with the power of ensemble method in policies. 1 Concretely, DEPO (1) trains multiple policies simultaneously by sharing data collected from each heterogeneous policy to maximize a novel *peer pressure objective*, (2) maintains the ensemble diversity via a non-parametric diversity regularizer, and (3) aggregates the policy ensemble to a mixture policy in test time, further boosting the performance over the well-trained individual policies. We benchmark our framework on the continuous control tasks. The experiments verify the effectiveness of the proposed ensemble policy optimization method in that DEPO outperforms the single agent counterpart in final performance. We also conduct detailed ablation studies to justify the techniques used in DEPO. 1Note that the term "ensemble policy optimization" is also used in (Rajeswaran et al., 2016), referred to the optimization of model ensemble, which is different to the policy ensemble described in this work. 1 ![1_image_0.png](1_image_0.png) Figure 1: In this figure, IPPO refers to the experiment training 5 PPO policies independently. DEPO is our proposed learning framework where 5 PPO policies explore independently but are trained from shared data. The Ensemble results indicate the performance when the action is sampled from a uniform mixture of constituent policies in the ensemble a ∼PK i=1 1 K πi(·|s). DEPO can substantially improve the final performance. Noticeably the independently trained PPO policies yield inferior performance when using the mixed policy than the individual policy. ## 2 Related Work Ensemble method. Apart from the ensemble methods commonly used in the classic machine learning tasks (Singh et al., 2016; Huang et al., 2017; Wen et al., 2020), an expanding body of works explores the ensemble method in Reinforcement Learning (RL). The ensemble methods in RL fall into roughly two categories: the ensemble of value functions and the ensemble of dynamics models in model-based RL. The ensemble of value functions manages to reduce the variance of state value estimation (Hans & Udluft, 2010; Fujimoto et al., 2018), thus it can be used to encourage efficient exploration through selecting action by using upper-confidence bound (UCB) (Osband et al., 2016; Chen et al., 2017) or voting methods (Wiering & Van Hasselt, 2008; Faußer & Schwenker, 2011; Peng et al., 2016). In these methods, the behavior policy is retrieved from a mixture of Q functions. In offline RL, EDAC (An et al., 2021) uses an ensemble of Q networks to penalize OOD data with high uncertainties. However, the diversity encouragement in EDAC aims at learning better uncertainty estimation with less Q networks while, in DEPO, the diversity is encouraged for better exploration and final performance. In EDAC, the cosine similarity between the gradients of different Q networks are used as the diversity. In contrast, the proposed DEPO uses the difference action distribution of two policies on the same input data as the diversity reward. On the other hand, the ensemble of dynamics models can mitigate the model approximation errors (Rajeswaran et al., 2016; Chua et al., 2018; Buckman et al., 2018) and accelerate policy learning by generating synthetic experiences from the ensemble (Kurutach et al., 2018). However, in most of the cases, there is only one training policy that exhibits in the system. There are some works using multiple policies, but all in a centralized manner: there is only one "mixed" policy interacts with the environment. Both of Zheng et al. (2018) and Zhang & Yao (2019) maintain multiple critics in the system and use a voting method a = arg maxai Q(*s, a*i) to sample actions. The former work trains a set of independent critics separately and the latter maintains a centralized critic. Lee et al. (2020) proposes SUNRISE, which chooses action based on UCB (Chen et al., 2017) at = arg maxa Qmean(st, a) + λQstd(st, a) and trains each critic individually. Agarwal et al. (2020) and Misra et al. (2020) use historical policies to optimize current policy for global optimal policy via larger state-action space coverage. Instead, we use a policy ensemble where each constituent policy explores the environment independently at the same time. ![2_image_0.png](2_image_0.png) Single Policy Optimization Baseline Ensemble Policy Optimization Framework Figure 2: Compared to single policy optimization workflow, the proposed ensemble policy optimization framework incorporates multiple heterogeneous policies to execute in independent environments and share the data among the policy ensemble during training. The diversity regularization mechanism further preserves the ensemble diversity. Exploration with shared experience. A similar domain to the policy ensemble method is the distributed RL, where large quantities of parallel actors collect data simultaneously. Mnih et al. (2016) propose A3C and A2C that use multiple independent actors to explore the environments. Both methods maintain a global agent who receives the gradients from parallel actors and episodically broadcasts the latest parameters of the global policy to actors. IMPALA (Espeholt et al., 2018) is similar to A3C but is different in mixing samples instead of gradients from actors and updating the global policy based on the shared samples. Compared to A3C, A2C, and IMPALA, DEPO maintains a group of heterogeneous policies and eliminates the concept of a centralized policy during training. Besides, our work is different from the works on Multi-agent RL where multiple agents interact with each others in the same environment (Lowe et al., 2017; Gupta et al., 2017; Rashid et al., 2018). DEPO focuses on improving the exploration with policy ensemble in single agent tasks. The different agents explore in independent environments and have no influence on each other during the sampling period. Learning with diversity. In deep RL community, many works encourage diversity explicitly through adding extra loss to make an agent behave differently (Hong et al., 2018; Masood & Doshi-Velez, 2019); creating conjugate policies which improve the main policy by adding noise in parameters (Cohen et al., 2019); or adding diversity as explicit reward (Eysenbach et al., 2018). Zhang et al. (2019) propose a method called task novelty bisector (TNB) which boosts the diversity with gradient fusion. The diversity regularizer in DEPO is different in the following aspects: (1) It trains multiple policies simultaneously, instead of generating a population of policies sequentially; (2) DEPO uses a simple and computationally efficient form of diversity, which does not require extra auto-encoder (Burda et al., 2018), hand-craft behavioral representation (Mouret & Clune, 2015), or complex heuristics to balance the task and diversity objectives (Hong et al., 2018). ## 3 Method 3.1 Preliminary Problem Formulation. In this work, we focus on proposing a novel policy optimization method to tackle decision-making problem. The decision-making problem is modeled as an infinite-horizon Markov decision process (MDP), defined by the tuple M = ⟨S, A*, P, R, γ, d*0⟩ consisting of a finite state space S, a finite action space A, the state transition probability distribution P : *S × A × S →* [0, 1], the reward function R : *S × A →* [Rmin, Rmax], the discount factor γ ∈ (0, 1) and the initial state distribution d0 : S → [0, 1]. The goal of reinforcement learning is to find a policy πθ : *S × A →* [0, 1], which is parameterized by θ, that can maximize the expected episodic return: $$J(\pi_{\theta})=v_{\pi_{\theta}}=\mathbb{E}\sum_{t=0}\gamma^{t}r(s_{t},a_{t}).$$ tr(st, at). (1) Here we denote the state value as vπ(st) = EπPt ′=t γ (t ′−t)r(st ′ , at ′ ). $$(1)$$ In this work, we propose a novel policy ensemble method to improve the training efficiency by leveraging multiple heterogeneous policies to collect data simultaneously. Concretely, after each training iteration, our method will propose a mixed policy from trained policy ensemble. We will show that such a mixed policy can lead to better return compared to the trained policy in single-agent RL method. We run on virtual simulation environment and focus on model-free RL that does not access the internal state of the simulator. This work focuses on on-policy RL setting, where the policy used to collect data in virtual environment is the policy that will be optimized after data collection. The data will be discarded after training instead of stored to replay buffer as in off-policy methods Fujimoto et al. (2018); Haarnoja et al. (2018). As shown in previous works Espeholt et al. (2018); Vinyals et al. (2019), on-policy methods can greatly improve data throughput and leverage massive computing resources. This work provides insight to better utilize the feature of scalability of on-policy methods. On-policy RL Algorithms. The policy gradient methods are typical on-policy RL algorithms. These methods derive the policy gradient (Sutton & Barto, 2018) ∇θJ(πθ) following: $$\begin{array}{r l}{A_{\pi_{\theta}}(s_{t},a_{t})=}&{{}r(s_{t},a_{t})+\gamma v_{\pi_{\theta}}(s_{t+1})-v_{\pi_{\theta}}(s_{t}),}\\ {\nabla_{\theta}J(\pi_{\theta})=}&{{}\mathbb{E}_{(s,a)\sim P\pi_{\theta}}\left[\nabla_{\theta}\log\pi_{\theta}(a|s){\hat{A}}_{\pi_{\theta}}(s,a)\right],}\end{array}$$ where (*s, a*) is collected from the exploration of policy πθ and Pπθ denotes the data distribution. The advantage Aπθ (st, at) describes the relative improvement of action at over the *baseline*, namely the state value vπθ (st), at step t. Eq. 3 can be obtained by differentiating the surrogate objective: $$\begin{array}{l}{(2)}\\ {(3)}\end{array}$$ $$J(\pi_{\theta})=\operatorname*{\mathbb{E}}_{(s,a)\sim P_{\pi_{\theta}}}[\log\pi(a|s)\hat{A}_{\pi}(s,a)],$$ $$\left(4\right)$$ $$\left(5\right)$$ wherein Pπθ is the state-action visitation distribution deduced by policy πθ. Applying the policy gradient to policy parameters with stochastic gradient ascent can improve the expected return. In popular RL method PPO (Schulman et al., 2017), a clipped surrogate objective is used to modulate the distributional shift between behavior policy πold and target policy πnew, so that it can update the target policy multiple times with same data collected by πold: $$J_{\mathrm{PPO}}(\pi_{\mathrm{new}})=\operatorname*{\mathbb{E}}_{(s,a)\sim P_{\pi_{\mathrm{old}}}}[\operatorname*{min}(\rho{\hat{A}}_{\pi_{\mathrm{old}}},\operatorname*{clip}(\rho,1-\epsilon,1+\epsilon){\hat{A}}_{\pi_{\mathrm{old}}})],$$ ρ denotes the probability ratio ρ = πnew(*s, a*)/πold(s,a), and ϵ > 0 is the clipping parameter. The ratio clipping can reduce the variance of policy gradient estimation and constrains πnew remaining close to πold. As illustrated in left panel of Fig. 2, we can summarize the learning pipeline of single policy optimization methods as following four steps: (1) Sampling: a behavior policy interacts with environments and collects a sampled batch, (2) Augmentation: the samples are augmented with value targets or advantages and formed into a batch, (3) Updating: the policy is updated based on the augmented training batch. (4) Synchronization: update the behavior policy according to the latest target policy. ## 3.2 Ensemble Policy Optimization Framework Extending the single-policy optimization, we consider a policy optimization framework which supports multiple heterogeneous policies running in parallel exploring the same task. The ensemble policy optimization (EPO) consists three components: (1) a *policy ensemble* that contains K policies, each policy interacts with the environment independently to roll-out a batch of samples; (2) a *training algorithm* T that updates the ensemble and should be capable to integrate data collected from each policy; and (3) an *aggregation function* G that proposes a mixed policy at test time at ∼ G({πi} K i=1) that further boosts the performance. We instantiate the EPO framework with *Diversity-regularized Ensemble Policy Optimization (DEPO)*. DEPO maintains a policy ensemble that contains K policy networks without weight-sharing among them. As illustrated in the right panel of Fig. 2, during sampling period, DEPO requests each policy to roll-out a sampled batch containing n = N/K transitions from the environment, wherein N denotes the size of sampled batch in single policy optimization. DEPO then forms a *shared training batch* with totally N samples by concatenating transitions {(st, at, rt, st+1) ∼ Pπi } K i=1 from all K policies. The shared batch is dispatched to each policy's optimizer, augmented by the target policy and value estimator, and then is used to optimize neural networks of the target policy. In test time, DEPO aggregates the policy ensemble with a uniform mixture of all policies: $$a\sim{\cal G}(\cdot|s,\pi_{1},...,\pi_{K})=\frac{1}{K}\sum_{i=1}^{K}a_{i},a_{i}\sim\pi_{i}(\cdot|s).\tag{6}$$ Note that to maximize the diversity of the sampled data, we do not aggregate policies during training. There are multiple behavior policies exploring each environment independently. This is the key difference to previous works. In the next section, we will discuss how DEPO trains each policy in the ensemble. ## 3.3 Peer Pressure Objective We define the *peer pressure objective* for arbitrary policy πk (a shorthand of πθk ) by slightly changing the Eq. 1 as: $$J_{\rm PP}(\pi_{k})=v_{\pi_{k}}-\frac{1}{K}\sum_{i=1}^{K}v_{\pi_{i}}.\tag{7}$$ This objective incentivizes the agent to maximize its expected return while outperforming the averagely performing policies in the ensemble. In following, we will derive a practical optimization objective such that we can effectively utilize the data sampled by heterogeneous policies. We first revisit the following lemma: Lemma 1 (The performance difference lemma) For arbitrary policies πk and πi*, their expected performance difference is:* $$v_{\pi_{k}}-v_{\pi_{i}}=\frac{1}{1-\gamma}\ \mathop{\mathbb{E}}_{s\sim P_{\pi_{k}}}\ \mathop{\mathbb{E}}_{a\sim\pi_{k}}\ [A_{\pi_{i}}(s,a)].\tag{1}$$ This lemma is proved by Kakade & Langford (2002). We use this lemma to decompose the peer pressure objective. After applying this lemma, Eq. 7 can be written as: $$J_{\text{PP}}(\pi_{k})=\frac{1}{K(1-\gamma)}\sum_{i=1}^{K}\ \mathbb{E}_{\pi_{k}\sim\mathcal{P}_{\pi_{k}}}\ \mathbb{E}_{a\sim\pi_{k}}\ [A_{\pi_{i}}(s,a)]\tag{9}$$ $$=\frac{1}{K(1-\gamma)}\sum_{i=1}^{K}\ \mathbb{E}_{\pi\sim\mathcal{P}_{\pi_{k}}}\ \mathbb{E}_{a\sim\pi_{i}}\ [\rho(\pi_{k},\pi_{i})A_{\pi_{i}}(s,a)],$$ $$(8)$$ $$(10)$$ $$(11)$$ wherein ρ(πk, πi) = πk(a|s)/πi(a|s) is the importance sampling coefficient. The peer pressure objective JPP(πk) queries the action distributions as well as the advantages of policy πi under the state distribution deduced by policy πk. However, suppose we replace the state distribution s ∼ Pπk by Pπi , then we can directly utilize the transitions sampled by the policy πi without replaying πi on the dataset collected by πk. We will justify such approximation in the following Theorem 1. We first write the *DEPO objective* for training policy πk as: $$J_{\mathrm{DEPO}}(\pi_{k})=\frac{1}{K}\sum_{i=1}^{K}\mathbb{E}_{(s,a)\sim P_{\pi_{i}}}\rho(\pi_{k},\pi_{i})\hat{A}_{\pi_{i}}(s,a).\tag{1}$$ The DEPO objective is a practical form of JPP *that can be easily computed using the shared training batch.* In the following theorem, we state that DEPO objective approximates the peer pressure objective. Theorem 1 *The peer pressure objective in Eq. 7 is bounded by DEPO objective:* $$J_{D E P O}(\pi_{k})-D\leq J_{P P}(\pi_{k})\leq J_{D E P O}(\pi_{k})+D,$$ JDEPO(πk) − D ≤ JPP(πk) ≤ J*DEPO*(πk) + D, (11) wherein D describes the divergence between the target policy πk *and others as* $$D=\frac{4\gamma}{K(1-\gamma)^{2}}\sum_{i=1}^{K}\epsilon_{i}\operatorname*{max}_{s}D_{K L}(\pi_{i}(\cdot|s)||\pi_{k}(\cdot|s)),$$ $$(12)$$ $$(13)$$ $$(14)$$ where ϵi = max(s,a)|Aπi (*s, a*)|. To prove this theorem, we first introduce a lemma according to the Theorem 1 in (Schulman et al., 2015). Lemma 2 *Define a function:* $$L_{\pi_{i}}(\pi_{k})=v_{\pi_{i}}+\underset{s\sim P_{\pi_{i}}a\sim\pi_{k}(\cdot|s)}{\mathbb{E}}\hat{A}_{\pi_{i}}(s,a).$$ Then the following inequality holds: $$|v_{\pi_{k}}-L_{\pi_{i}}(\pi_{k})|\leq\frac{4\gamma\epsilon_{i}}{(1-\gamma)^{2}}D_{K L}^{\mathrm{max}}(\pi_{i},\pi_{k}),$$ where ϵi = max(s,a)|Aπi (s, a)|*, and* Dmax KL (πi, πk) = maxs DKL(πi(·|s)||πk(·|s)). Note that the authors of Schulman et al. (2015) only give the lower bound of vπk −Lπi (πk) but they actually proved the upper bound at the same time. For simplicity, define Di =4γϵi (1−γ) 2 Dmax KL (πi, πk). Therefore D = 1 K PK i=1 Di. Following Eq. 14, we have: $$\operatorname*{\mathbb{E}}_{s\sim P_{\pi_{i}}}{\hat{A}}_{\pi_{i}}(s,a)-D_{i}\leq v_{\pi_{k}}-v_{\pi_{i}}\leq\operatorname*{\mathbb{E}}_{s\sim P_{\pi_{i}}}{\hat{A}}_{\pi_{i}}(s,a)+D_{i}$$ Now we build the connection between JDEPO and JPP. Since Ea∼πk Aˆπi (*s, a*) = Ea∼πi ρ(πk, πi)Aˆπi (*s, a*), we have: $$J_{\mathrm{DEPO}}(\pi_{k})={\frac{1}{K}}\sum_{i=1}^{K}\operatorname*{\mathbb{E}}_{s\sim T_{\pi_{i}}}{\hat{A}}_{\pi_{i}}(s,a)$$ Compute the average of Eq. 15 over all is, we have: $$J_{\text{DEPO}}(\pi_{k})-D=\frac{1}{K}\sum_{i=1}^{K}[\underset{a\sim\pi_{k}(:|s)}{\mathbb{E}}\hat{A}_{\pi_{i}}(s,a)-D_{i}]$$ $$\leq\frac{1}{K}\sum_{i=1}^{K}[v_{\pi_{k}}-v_{\pi_{i}}]=J_{\text{FP}}(\pi_{k})\tag{17}$$ $$\leq\frac{1}{K}\sum_{i=1}^{K}[\underset{a\sim\pi_{k}(:|s)}{\mathbb{E}}\hat{A}_{\pi_{i}}(s,a)+D_{i}]=J_{\text{DEPO}}(\pi_{k})+D$$ $$(15)$$ $$(16)$$ The main theorem is proved. Note that D measures the divergence between πk and others policies. Since we update policy πk with the shared data batch, D will naturally reduce after each training iteration because all policy are updated to maximize the log action probability of the same set of high-return actions. Therefore JDEPO approximates JPP. ![6_image_0.png](6_image_0.png) Figure 3: Illustration of the PPO loss, Two-side Clip loss, and the gradient fusion method in diversity regularization. ## 3.4 Learning Objectives To bound the policy update and reduce variance, we further apply the clipped surrogate objective as in Eq. 5 to the DEPO objective in Eq. 10. However, we find the clipped objective leads to numerical instability and will catastrophically collapse the learning since the objective is not bounded when the advantage is negative, as the PPO Loss shown in Fig. 3. This phenomenon is also noticed in (Ye et al., 2019). To tackle this issue, we use a Two-side Clip (TSC) surrogate objective to mitigate the variance of advantage as shown by the TSC Loss in Fig. 3. Concretely, the TSC loss equipped by DEPO is computed as follows: $$J_{\rm TSC}(\pi_{k})=\frac{1}{K}\sum_{i=1}^{K}\mathbb{E}_{(s,a)\sim P_{\pi_{i}}}[\text{clip}(\rho(\pi_{k},\pi_{i}),0,1+\epsilon)\hat{A}_{\pi_{i}}(s,a)].\tag{18}$$ The ablation studies in Sec. 4.4 shows that data sharing across ensemble can already boost the performance. In next section, we will discuss an important problem brought by the data sharing. ## 3.5 Diversity Regularization Due to the data sharing among policies, it is inevitable that all the constituent policies gradually become identical during the course of learning. To further improve the effectiveness of DEPO, we propose the Diversity Regularization (DR) mechanism, which regularizes the exploration and preserves the diversity of policies. Considering the continuous control tasks we focus on, we use the Mean Square Error (MSE) between the means of action distributions produced by two agents as the diversity reward Hong et al. (2018). MSE is bounded since each action dimension is limited to [−1, 1], avoiding unbounded value in other metrics like KL divergence. Denoting µk(s) as the mean of a stochastic action distribution πk(·|s), MSE can be written in the closed form ||µk − µi||22 . We therefore use the following diversity reward: $$r_{d}^{(k)}(s)=\frac{1}{K-1}\sum_{i=1,i\neq k}^{K}||\mu_{k}(s)-\mu_{i}(s)||_{2}^{2}.$$ $$\left(\begin{array}{l}19\end{array}\right)$$ . We do not treat the diversity reward as intrinsic reward, instead, we use the diversity reward to compute the diversity objective to update the policy following Eq. 5 and then form *diversity gradient* based on such objective. The diversity gradient is fused with the primal task gradient later. The ablation study on using the intrinsic reward method is presented in Sec. 4.4. DR retrieves the final gradient from two streams of gradients using the Feasible Direction Method (FDM). As illustrated in Fig. 3, we flatten the gradients w.r.t. all parameters into a vector for both objectives and get the task gradient and diversity gradient gt, gd ∈ R |θ|respectively. Then we compute the angular bisector of two flattened gradients in the parameter space: d = Z(Z(gt) + Z(gb)), wherein Z(x) = x/||x||2 normalizes input to a unit vector. d therefore is a unit vector representing direction of the final gradient. We project two gradient vectors into d and use the average as the magnitude of final gradient. The final gradient after fusion is computed as follows: $$\mathbf{g}_{\mathrm{final}}={\frac{\mathbf{g}_{t}\cdot\mathbf{d}+\operatorname*{min}(\mathbf{g}_{d}\cdot\mathbf{d},\mathbf{g}_{t}\cdot\mathbf{d})}{2}}\cdot\mathbf{d}.$$ $\left(20\right)^3$ 2· d. (20) min operation ensures the diversity reward does not prevail against the task reward. The final gradient direction d is the angular bisector ensuring there always exist non-negative components of both gradient after the projection gt · d and gd · d. The bisector therefore can improve both objectives effectively (Zhang et al., 2019). An alternative is to simply add two gradient vectors (Hong et al., 2018). However, such approach introduces a trade-off factor that is hard to tune. Evaluation in Sec. 4.4 shows that the FDM method performs better than using auxiliary loss. To reduce the variance when estimating diversity across ensemble and possible trajectories, we introduce a diversity value network (DVN) to estimate the diversity value and compute the diversity gradient. DVN updates to minimize the Bellman error between the predicted values and r (k) d(s) + γDV N(st+1, at+1). We compute the diversity in online manner, so the diversity reward is non-stationary during the training. To relieve such issue, we invite a common trick called *Delayed Update Target* (Lillicrap et al., 2015). We maintain a set of target policies by Polyak averaging the parameters of the policy ensemble over the course of training: $$\theta_{\mathrm{target}}^{(k)}\leftarrow(1-\tau)\theta_{\mathrm{target}}^{(k)}+\tau\theta_{\mathrm{latest}}^{(k)},\;\;\;\forall k=1,...,K,$$ $$(21)$$ latest, ∀k = 1*, ..., K,* (21) wherein 0 < τ ≤ 1 is a hyper-parameter. We then compute the diversity of a given policy πk against such target policies, including the delayed update target of k-th policy itself. ## 4 Experiments 4.1 Setup We implement DEPO using RLLib (Liang et al., 2018). Generally, we host 8 concurrent trials in an Nvidia GeForce RTX 2080 Ti GPU. Each trial consumes 3 CPUs with 10 parallel rollout workers. To ensure efficiency, we hosts the sampling and training pipelines of each policy in separate workers running in parallel. DEPO only synchronizes when sharing the data and computing diversity. *The wall-time overhead therefore* is trivial compared to single-policy methods. To ensure fair comparison, we ensure (1) the total number of interactions with environments and (2) the total number of sampled transitions in each training iteration for the whole DEPO system to be identical to the single-policy methods. Concretely, we set the total number of sampled steps to 5 × 107for on-policy baselines and DEPO. In each training iteration, the whole system of DEPO collects 10, 000 transitions. For DEPO ensemble with 5 policies, this means each policy has the quota of 2, 000 steps to interact with its environment. All 5 policies will form a shared training batch with totally 10, 000 transitions, equal to the size of the training batch in on-policy single-policy baselines. For all experiments, we use fully-connected neural networks with two layers, 256 hidden units per layer, for both policy networks and value networks. We use ReLU as the activation function. Other hyper-parameters in both on-policy and off-policy setting are listed in Appendix. The implementation of OAC follows the code provided by the original paper (Ciosek et al., 2019). Note that the total training timesteps of this work is different from OAC paper. We use the official implementation of Actor-critic Ensemble (Zhang & Yao, 2019), SUNRISE (Lee et al., 2020), and TD3 (Fujimoto et al., 2018). The PPO, A2C, APPO and SAC implementations follow RLLib (Liang et al., 2018). We evaluate methods in five continuous control locomotion tasks: HalfCheetah-v3, Ant-v3, Walker2d-v3, Hopper-v3, and Humanoid-v3 in MuJoCo simulator (Todorov et al., 2012). Experiments are repeated 10 times with different random seeds and the standard deviation of the values are presented in tables as well as the shadow of curves. The ensemble size is K = 5 if not explicitly stated. ![8_image_0.png](8_image_0.png) Figure 4: The learning curves of the PPO and DEPO in five environments. The performance of policy in DEPO ensemble (DEPO) already outperforms baselines in all tasks. Using the mixture policy of the ensemble in test time further boosts the performance (DEPO avg.). Table 1: The episodic reward of baselines and proposed framework. Elite refers to the best policy in the ensemble and Average is the performance of the mixture policy of DEPO ensemble. DEPO outperforms major single-agent baselines and achieves competitive performance compared to powerful exploration and ensemble method baselines. Category Method Ant-v3 HalfCheetah-v3 Hopper-v3 Humanoid-v3 Walker2d-v3 Proposed Framework DEPO Average 6107.2 ±258.9 8022.8 ±293.1 3608.0 ±135.8 5276.0 ±61.0 6379.4 ±241.1 DEPO Elite 5487.6 ±239.8 7529.5 ±210.8 2922.8 ±288.5 4114.0 ±136.9 6179.1 ±271.6 | ensemble method baselines. Category Method | | Ant-v3 | HalfCheetah-v3 | Hopper-v3 | Humanoid-v3 | Walker2d-v3 | |----------------------------------------------|--------------|----------------|------------------|---------------|---------------|----------------| | Proposed | DEPO Average | 6107.2 ±258.9 | 8022.8 ±293.1 | 3608.0 ±135.8 | 5276.0 ±61.0 | 6379.4 ±241.1 | | Framework | DEPO Elite | 5487.6 ±239.8 | 7529.5 ±210.8 | 2922.8 ±288.5 | 4114.0 ±136.9 | 6179.1 ±271.6 | | | PPO | 4155.4 ±596.9 | 3559.4 ±1041.1 | 2860.1 ±166.2 | 663.4 ±55.8 | 3209.8 ±280.1 | | Single-agent Baseline | APPO | 1460.9 ±427.7 | 2814.6 ±75.8 | 2038.1 ±421.6 | 1351.9 ±32.9 | 3159.9 ±187.5 | | | A2C | 1881.8 ±185.4 | 2882.7 ±1293.2 | 2132.9 ±97.1 | 519.9 ±90.5 | 1891.3 ±664.6 | | | SAC | 4654.2 ±272.3 | 7763.5 ±1777.4 | 3372.7 ±95.5 | 5021.3 ±165.9 | 4213.3 ±169.9 | | Exploration | TNB | 2211.0 ±250.3 | 1623.6 ±94.6 | 2916.8 ±383.2 | 493.2 ± 40.8 | 2906.3 ± 152.3 | | | OAC | 4891.7 ±184.4 | 7576.6 ±499.0 | 3418.0 ±235.8 | 5128.8 ±78.0 | 3867.6 ±1126.8 | | Ensemble | ACE | 1280.7 ±165.0 | 4131.2 ±330.8 | 1724.3 ±776.9 | 1529.3 ±493.2 | 3383.0 ±421.9 | | Method | SUNRISE | 3902.0 ±1019.4 | 6518.3 ±1717.4 | 3639.1 ±90.2 | 5534.2 ±97.5 | 4981.1 ±982.0 | ## 4.2 Main Results To validate that the ensemble policy optimization can improve the performance over single policy schemes, we compare DEPO with following baselines: - Single-agent RL: We compare with A2C (Mnih et al., 2016), PPO (Schulman et al., 2017), APPO, a variant to IMPALA (Espeholt et al., 2018) and Soft Actor-Critic (SAC) (Haarnoja et al., 2018). In our preliminary experiments, we find A3C and IMPALA are unstable and sometimes fail the training, so we instead use A2C, the synchronized version of A3C, and APPO, which replaces the V-trace loss in IMPALA with the surrogate loss in PPO but still using the asynchronized infrastructure proposed in RLLib. - Exploration: TNB (Zhang et al., 2019) aims at seeking diverse policies and trains a population of polices sequentially so the time consumption is much larger than ours. OAC (Ciosek et al., 2019) further integrates SAC with the Upper Confidence Bound heuristics to conduct more informative exploration. - Ensemble Method: We compare two ensemble methods with multiple policies: ACE (Zhang & Yao, 2019) and SUNRISE (Lee et al., 2020). Both works utilize a mixed policy to explore and train in the off-policy setting. As shown in Fig. 4, in all five tasks, our method (DEPO) outperforms the single-policy baseline with a large margin. Using the aggregated mixture policy in test time yields even more powerful policies that further ![9_image_0.png](9_image_0.png) Figure 5: The learning dynamics of DEPO. Figure 6: Ablation studies on the major mechanisms. ![9_image_1.png](9_image_1.png) Diversity Reward Figure 7: The tendency of diversity reward and episode reward. The diversity reward always peaks at the learning progressing drastically. boosts the performance (DEPO Avg.) compared to individual policies. In Hopper-v3, PPO collapses after a long time of training, while DEPO maintains its performance, which shows that DEPO is stable during training. In Table 1, we can see that DEPO outperforms majority of baselines. ## 4.3 Learning Dynamics We investigate the dynamics of ensemble diversity in the course of training. As shown in Fig. 5, we plot two curves, the *average diversity* and the *objective similarity*, alongside with the reward curve of DEPO in Walker2d-v3 environment. The *objective similarity* is the cosine similarity between the task gradient and the diversity gradient cos⟨gt, gd⟩, showing the alignment between two objectives. The average diversity is the mean of the diversity reward of all policies EPK i=1 r (i) d /K. An interesting observation is that the diversity as well as the objective similarity peaks when the performance is improved with the highest speed. When the sampled step is in range of 3M to 10M, the objective similarity achieves high value, indicating the task gradient and diversity gradient are aligned to head to the similar direction. This phenomenon suggests in the early stage of training, finding diverse policies is finding better policies. As shown in Fig. 7, the same phenomenon happens for all tested tasks. In the later training, the average diversity drops to low value and the objective similarity goes to zero. The policy ensemble becomes stable thus the diversity has a marginal impact to the policy improvement in the later stage of training. In short, we demonstrate empirically that the diversity regularization helps improving the exploration, especially in the early stage of learning. ## 4.4 Ablation Studies To further understand which designed elements of DEPO are crucial, we conduct ablation studies thoroughly in Walker2d-v3 benchmark. Key components. We first examine the key components of the proposed DEPO in Fig. 6. Compared to the baseline PPO, purely sharing data among the ensemble without diversity regularization (w/o Diversity) already boosts the performance. This result confirms the previous discovery that diversity introduced by different random initializations can promote performance (Osband et al., 2016). | | | Table 4: Design Choice Ablation Performance | | |------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------|----------------| | Table 2: Ensemble Size Size Performance K=1 2564.7 ±667.3 K=3 5766.0 ±363.7 K=5 6179.1 ±271.6 K=10 6086.7 ±602.7 | Table 3: Ensemble Method Method Performance Median 5660.3 ±590.1 Elite 6179.1 ±271.6 Voting 5710.1 ±344.1 Average 6379.4 ±241.1 | w/o Aˆ Norm. | 1105.1 ±609.3 | | | | w/o DVN | 2701.8 ±235.7 | | | | w/ Ada. Div. | 2806.1 ±521.8 | | | | w/ Mask | 4558.5 ±1129.6 | | | | w/o TSC | 4624.4 ±796.2 | | | | w/o DU | 4672.8 ±836.5 | | Method | Ant-v3 | HalfCheetah-v3 | Hopper-v3 | Humanoid-v3 | Walker2d-v3 | |-----------|----------------|------------------|----------------|---------------|----------------| | DEPO | 6107.2 ±258.9 | 8022.8 ±293.1 | 3608.0 ±135.8 | 5276.0 ±61.0 | 6379.4 ±241.1 | | PPO | 4155.4 ±596.9 | 3559.4 ±1041.1 | 2860.1 ±166.2 | 663.4 ±55.8 | 3209.8 ±280.1 | | PPO Large | 1892.6 ± 201.5 | 2018.6 ± 458.6 | 1862.7 ± 159.2 | 875.3 ± 72.5 | 2660.2 ± 240.3 | Table 5: Comparison with single-agent method with large model. On the contrary, disabling data sharing (w/o Sharing) decreases the performance significantly even if the diversity regularization is active. This is because data sharing broadcasts the experience of all policies so that all policies can optimize toward the high reward region collectively. When disabling data sharing, maximizing the diversity reward becomes the most feasible local minima for each policy, especially when the reward supervision from primal task is not significant in the early stage of training. We also test the idea proposed in (Hong et al., 2018) (Extra Loss) to justify our usage of FDM as the gradient fusion method. We use an adaptive multiplier to balance the weighted sum of task and diversity objective: gfinal = gt +βgd, wherein β increases by 0.05 if current policy's average diversity is lower than the running average for past 100 iterations and vice versa. However, due to the difficulty in tuning the trade-off between task and diversity, such method performs poorly compared to the DR. Notice that in Eq. 10, we use the estimator Aˆπi to estimate the advantage of policy πk in the state-action space sampled by the behavior policy πiinstead of using the estimator Aˆπk of policy πk. Though we discuss that DEPO objective approximate the peer pressure objective through Theorem 1, we also verify this statement in the Value Replay experiment: we replay each value network vˆπk on the shared training batch and compute the estimated advantage Aˆπk in Eq. 2 based on the replayed values for each policy. Intuitively, replaying the values will provide more accurate estimation to the policy gradient. However, the empirical result suggests such method performs inferior to the DEPO objective. DEPO converges faster than Value Replay and achieves better final performance. Impact of the ensemble size. We reveal the impact of the ensemble size to the final performance. Note that when K = 1, the policy computes diversity against the delayed update target of itself. As shown in Table 2, the performance is improved when the ensemble size increase. However, when the ensemble size exceeds some threshold, preserving diversity will jeopardize the learning because one or more policies would learn diverse but weak behaviors, dragging down the whole training process. Finding an appropriate ensemble size can maximize the effectiveness of DEPO. Impact of the aggregation methods. In Table 3, we examine different aggregation methods on policy ensemble. Median and Elite refer to the method using a single policy in the ensemble to sample action. The policy is selected based on its performance in evaluation and we select the median or the best policies. On the other hand, Voting refers to taking the action from policies that yields highest value: at ∼ πi, i = arg maxi vi. The results shows that Average the output distributions yields the greatest performance gain and can even outperforms the best policy in the ensemble. More sophisticated aggregation methods, such as treating policy selection as a discrete RL problem, are left to future works. Other design choices. In Table 4, we evaluate the significance of other design choices: - w/o Aˆ Norm.: The advantage normalization demonstrates significant impact. Without such normalization, the diversity advantage and the task advantage might have disparate magnitudes, which imposes chaotic supervision to the learning and thus damages the performance. - w/o DVN: The Diversity Value Network (DVN) also has huge impact. In this experiment, we disable the DVN and replace the diversity advantage with the discounted diversity return. However, the result suggests ablating DVN damages the training because the estimation of diversity return creates huge variance. - w/ Ada. Div.: We use a simple heuristic to adapt the final gradient direction to justify the usage of angular bisector between two gradients. We adapt the β when computing the direction of the final gradient (1−β)Z(gt) +βZ(gd) in the same way as Extra Loss experiment. However, such method does not surpass the simple angular bisector. This setting does not limit the diversity gradients. On the contrary, the bisector bounds the diversity gradient projection so that its impact to the final gradient will not exceed the task gradient. - w/ Mask: An technique to increase training diversity is also tested: during sampling, we generate a binary mask on each sample for each policy, and then filter the training batch for each policy according to the mask (Osband et al., 2016). By doing this, each policy will train on different data. However, similar to the finding in (Lee et al., 2020), this method reduces the training performance since the total data used to train each policy is reduced. - w/o TSC: We find that using the Two-side Clip Loss proposed in Eq. 18 can further improve the performance. - w/o DU: The Delayed Update target of policies can stabilize the computing of diversity reward and therefore improve the result. Though the outcome of DEPO is a mixed policy with identical scale of neural network (NN) as single-agent method, DEPO uses multiple networks and a larger number of parameters during training. To verify the fairness, we conduct an experiment with larger neural architecture for the baseline PPO. We adjust the number of units in each hidden layer to 600 instead of original 256, so that the total number of parameters is approximately equivalent to the 5 policies DEPO ensemble. As shown in Table 5, increasing the number of trainable parameters is not always better. Compared to PPO with small NN, large NN only works better in the environment Humanoid-v3. In other environments, larger NN even impedes the performance since it is difficult for the network to converge due to underfitting. This result suggests that the internal mechanism of DEPO indeed helps to find better policy, instead of exploiting the benefit of more trainable parameters. ## 5 Conclusion In this work we explore the implementation of ensemble method into the policy optimization in RL and develop an ensemble policy optimization method called DEPO. The proposed method requests a set of policies to explore the environment simultaneously, trains each policy with shared training batch, and maintains the diversity of the ensemble with diversity regularization. The performance in test time is greatly improved by aggregating the learned policies into an ensemble policy. Experimental results show that the proposed ensemble policy optimization method can substantially improve sample efficiency in continuous locomotion tasks compared to the single-policy optimization counterparts. Detailed ablation studies reveal that the data sharing among the ensemble and the diversity regularization significantly improve the performance. ## References Alekh Agarwal, Mikael Henaff, Sham Kakade, and Wen Sun. Pc-pg: Policy cover directed exploration for provable policy gradient learning, 2020. Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. *Advances in Neural Information Processing Systems*, 34, 2021. Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In *Advances in Neural Information Processing Systems*, pp. 8224–8234, 2018. Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018. Richard Y Chen, Szymon Sidor, Pieter Abbeel, and John Schulman. Ucb exploration via q-ensembles. *arXiv* preprint arXiv:1706.01502, 2017. Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In *Advances in Neural Information Processing* Systems, pp. 4754–4765, 2018. Kamil Ciosek, Quan Vuong, Robert Loftin, and Katja Hofmann. Better exploration with optimistic actor critic. In *Advances in Neural Information Processing Systems*, pp. 1787–1798, 2019. Andrew Cohen, Xingye Qiao, Lei Yu, Elliot Way, and Xiangrong Tong. Diverse exploration via conjugate policies for policy gradient methods. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 3404–3411, 2019. Thomas G Dietterich. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pp. 1–15. Springer, 2000. Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In *International Conference on Machine Learning*, pp. 1407–1416, 2018. Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. *arXiv preprint arXiv:1802.06070*, 2018. Stefan Faußer and Friedhelm Schwenker. Ensemble methods for reinforcement learning with function approximation. In *International Workshop on Multiple Classifier Systems*, pp. 56–65. Springer, 2011. Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of computer and system sciences*, 55(1):119–139, 1997. Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *International Conference on Machine Learning*, pp. 1587–1596, 2018. Jayesh K Gupta, Maxim Egorov, and Mykel Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. In *International Conference on Autonomous Agents and Multiagent Systems*, pp. 66–83. Springer, 2017. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International Conference on Machine* Learning, pp. 1861–1870, 2018. Alexander Hans and Steffen Udluft. Ensembles of neural networks for robust reinforcement learning. In *2010* Ninth International Conference on Machine Learning and Applications, pp. 401–406. IEEE, 2010. Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 12(10):993–1001, 1990. Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Tsu-Jui Fu, and Chun-Yi Lee. Diversitydriven exploration strategy for deep reinforcement learning. In *Advances in Neural Information Processing* Systems, pp. 10489–10500, 2018. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. *arXiv preprint arXiv:1704.00109*, 2017. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In *In Proc.* 19th International Conference on Machine Learning. Citeseer, 2002. Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. *arXiv preprint arXiv:1802.10592*, 2018. Kimin Lee, Michael Laskin, Aravind Srinivas, and Pieter Abbeel. Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. *arXiv preprint arXiv:2007.04938*, 2020. Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. Rllib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning, pp. 3053–3062, 2018. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in neural information processing systems, pp. 6379–6390, 2017. Muhammad Masood and Finale Doshi-Velez. Diversity-inducing policy gradient: using maximum mean discrepancy to find a set of diverse policies. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 5923–5929. AAAI Press, 2019. Dipendra Misra, Mikael Henaff, Akshay Krishnamurthy, and John Langford. Kinematic state abstraction and provably efficient rich-observation reinforcement learning. In International conference on machine learning, pp. 6961–6971. PMLR, 2020. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International conference on machine learning*, pp. 1928–1937, 2016. Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909, 2015. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. In *Advances in neural information processing systems*, pp. 4026–4034, 2016. Xue Bin Peng, Glen Berseth, and Michiel Van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. *ACM Transactions on Graphics (TOG)*, 35(4):1–12, 2016. Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. Epopt: Learning robust neural network policies using model ensembles. *arXiv preprint arXiv:1610.01283*, 2016. Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International Conference on Machine Learning, pp. 4295–4304, 2018. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International conference on machine learning*, pp. 1889–1897. PMLR, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Saurabh Singh, Derek Hoiem, and David Forsyth. Swapout: Learning an ensemble of deep architectures. arXiv preprint arXiv:1605.06465, 2016. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Yeming Wen, Dustin Tran, and Jimmy Ba. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. *arXiv preprint arXiv:2002.06715*, 2020. Marco A Wiering and Hado Van Hasselt. Ensemble algorithms in reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4):930–936, 2008. Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. Mastering complex control in moba games with deep reinforcement learning. arXiv preprint arXiv:1912.09729, 2019. Shangtong Zhang and Hengshuai Yao. Ace: An actor ensemble algorithm for continuous control with tree search. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 5789–5796, 2019. Yunbo Zhang, Wenhao Yu, and Greg Turk. Learning novel policies for tasks. In *International Conference* on Machine Learning, pp. 7483–7492, 2019. Zhuobin Zheng, Chun Yuan, Zhihui Lin, Yangyang Cheng, and Hanghao Wu. Self-adaptive double bootstrapped ddpg. In *IJCAI*, 2018. Zhi-Hua Zhou. *Ensemble methods: foundations and algorithms*. CRC press, 2012. ## A Hyper-Parameters In this section, we present the hyper-parameters used in training. To illustrate that our method can be an add-on to existing single-agent baseline, we first search the hyper-parameters for PPO and find those that can maximize the performance in single-agent training. We then use the same set of hyper-parameters to train DEPO. Table 6: Environment-related hyper-parameters of PPO and on-policy DEPO. The hyperparameters are selected based on PPO's performance. Parameter H.C. Ant Walker Hopper Human. LR 0.0003 0.0001 0.0002 0.0001 0.0001 λ 0.95 0.95 1.0 1.0 0.95 SGD Epochs 30 20 20 20 20 | Hyper-parameter | Value | |---------------------------------|---------| | Number of Agents | 5 | | KL Coefficient | 1.0 | | Discount Factor (γ) | 0.99 | | Delayed Update Coefficient (τ ) | 0.005 | | Max Norm of Gradient | 10.0 | | Use Diversity Value Network | True | | Use Normalized Advantage | False | | Use Delayed Update | True | | Use Two-side Clip Loss | True | | Maximum Sampled Steps | 5 × 107 | | SGD Minibatch Size | 1024 | | Training Batch Size | 10,000 | | Number of Parallel Workers | 10 | | Sampled Batch Size per Worker | 200 | Table 7: Environment-agnostic hyper-parameters of DEPO.
Review 1: Summary: This paper introduces a new algorithm, DEPO, which trains diverse policies for exploration and then combines them at inference time to produce a master policy. There are some interesting ideas in terms of how the diversity is computes and how the policies are subsequently combined, but overall these insights are not presented in a way that is digestible for the community. Instead they are combined to produce a single algorithm, with experiments on five toy domains. Strengths and Weaknesses: The main strength is there are some interesting ideas here, as listed in the requested changes section, they are just lost because combined in a convoluted way. The weakness is clear, the paper is not intuitive as to what is causing the gains and why. It is presented as a method that wins on a toy benchmark, and not as a series of design choices that could be incorporated more widely in the community. Finally, if we want to evaluate this as a "novel method" which seems to be what the authors are striving for, then this paper is very similar to [1]. Notably: * "In contrast, the proposed DEPO uses the difference. in action distribution of two policies on the same input data as the diversity reward" -> [1] uses action based behavior embeddings to compute kernel matrices * "There are multiple behavior policies exploring each environment independently. This is the key difference to previous works" -> [1] DvD-TD3 has a population of diverse policies with a shared replay buffer, the same idea. The main novel parts seem to be the gradient fusion and the ensembling. I would re-write the paper to analyze these components and provide recommendations on how they can be useful. [1] Parker-Holder et al. Effective Diversity in Population Based Reinforcement Learning. NeurIPS 2020 Requested Changes: I realize the paper has already been re-written once, but that shouldn't mean that we give it a pass. To me, the most important thing for papers is to contribute to science/the community, and this paper does not do that enough. DEPO contains many moving parts, which it seems are all important for gains, but we do not understand how or why they work. The algorithm is presented as a compilation of all of these parts, and then we are supposed to trust this is a concrete step forward based solely on gains on a benchmark that has been known to be toy for several years. To be a bigger contribution either: 1) The experiments need to be on significantly more challenging benchmarks to make it the case that the community would be fine to use a confusing method because it is simply so strong. 2) My preferred option by far, instead of presenting this as "DEPO, the all conquering algorithm" you could re-write this paper to be about understanding different components of the method, e.g. the following changes: i) the TSC loss ii) the gradient fusion iii) the ensembling of policies iv) the specific diversity objective. These things are all interesting and potentially useful, but they will be lost and never used by the community because they're packaged together in an algorithm with the sole purpose of showing gains on toy MuJoCo tasks. Broader Impact Concerns: The broader impact is limited in my opinion. ================================================== Review 2: Summary: The paper introduces a new algorithm called Diversity-regularized Ensemble Policy Optimization (DEPO). The algorithm trains multiple policies simultaneously in an ensemble. Each ensemble member is executed some of the time but data is shared. The ensemble members are encouraged to be different from one another via an additional diversity objective. At test time, a mixtures of all individual policies is executed. DEPO is evaluated on 5 continuous control tasks from the OpenAI gym. The paper also reports a number of ablations experiments and further analysis. Strengths and Weaknesses: Strengths: - promising experimental results - mostly clearly written and easy to follow - carefully ablations of design choices Weaknesses: - the algorithm could be better motivated. What is the deficiency that non-ensemble on-policy method have that can be address by DEPO. It is nice to see that DEPO empirically improves on baselines but currently ensembles are not well motivated. - A wider set of tasks could improve the paper. - I'm somewhat confused about one argument in the paper. Requested Changes: essential: - please motivate the use of ensembles more clearly from the outset. - please cite the source of the tasks, not just mujoco. - page 6, last para: It's unclear to me why D will naturally reduce with each iteration. what if there is a task that admits two optimal policies with identical value. Could there not be a stable setting with distinct ensemble members? recommended: - in the abstract, the acronym DEPO is used but not introduced. - consider additional tasks - page 1, para 1, "both works" -> not clear to me what is being referred to. - page 1, para 1, last sentence: I'm not sure what this means. Please clarify. - page 1, para 2 refers to both averaging action distributions and averaging outputs. Please be precise. - maybe empathise that the tasks in this work are all based on features not pixels. - page 2, para 2 last sentence: what's difference action distribution? - page 2, para 3 last sentence: this is unclear. Do you mean that only one policy that interacts with the environments? - page 4, $\hat{A}$ appears in eq. 3 but is not immediately introduced. Please clarify. - as formulated DEPO will only work in continuous action spaces because of the averaging of actions. I also suspect there may be counter examples where DEPO can fail catastrophically due to the averaging if there are distinct solutions to a task but the "average actions" between solutions are terrible. It might be interesting to explore this and comment on it in the paper. - I am concerned with the ppo baseline in figure 4 on hopper-v3. Broader Impact Concerns: n/a ================================================== Review 3: Summary: I quote from my previous reviews > The authors propose a framework for applying ensemble methods in RL. Key techniques in the framework are shared data, a novel peer pressure objective, a novel diversity regularization, and the use of feasible direction method for combining gradients from two different objectives. The authors experimentally study this framework with PPO and provide ablation study for the main techniques used in the framework. Strengths and Weaknesses: Strengths: The paper is well written and easy to follow Weakness: * The abstract claims that DEPO has faster convergence than PPO but I don't think it's backed, it is even not clear to me what "faster convergence" actually means. I quote from my previous reviews > It is actually not clear to me what "faster convergence" really means. Even worse, according to the author's reply to other reviewers. "We grid-search many hyper-parameters such as LR, SGD epochs and Lambda and choose the hyper-parameters that make the trial achieve highest performance. Here we use the maximum performance during the whole training process as the performance of one trial." It looks the hyper parameter is tuned to maximize the performance, not the speed of convergence (whatever the authors mean by the speed of convergence). So I do not think we can rigorously draw any conclusion regarding the speed of convergence, before it is properly defined and hyper parameters are properly tuned. There are many caveats. For example, if I set the learning rate to 0, the algorithm immediately converges so it converges the fastest. But it clearly makes no sense. One common metric I use to balance the performance and the speed is the area under the curve. This is just an example to try to define everything clearly and explicitly so we have some concrete thing to optimize. * Just above the Table 1, the authors claim that "DEPO outperforms major single-agent baselines and achieves competitive performance compared to powerful exploration and ensemble method baselines". This is not true. I believe the off-policy methods in Table 1 still have much much fewer steps than on-policy methods, because the performance of SAC in this version in Ant-v3 is identical to the previous version. The comparison is still unfair. I suggest to remove all empirical results other than DEPO and PPO. After all I believe the contribution of this work is only that DEPO is better than PPO. * I don't think the peer-pressure objective is properly motivated. It can be easily seen that $J_{pp}(\pi_k) = \frac{K-1}{K} v_{\pi_k} - \frac{1}{K} \sum_{i\neq k}^K v_{\pi_i}$, so $\arg\max_{\pi_k} J_{pp}(\pi_k) = \arg\max_{\pi_k} v_{\pi_k}$, assuming for $i\neq k$, $\pi_i$ is independent of $\pi_k$. So the optimal policy of the peer-pressure objective is simply identical to the original optimal policy. If the authors instead consider that $\pi_i$ is not independent of $\pi_k$ then the problem becomes much harder and I don't think the proposed method can solve it either. The authors also use J_DEPO as an approximate for J_PP. But according to Theorem 1, this approximation is good only when \pi_i is similar to \pi_k. But the authors also added a diversity objective to encourage the policies to be different from each other. Isn't this self-contradictory? Requested Changes: see weakness above Broader Impact Concerns: n/a ================================================== Metareview: Recommendation: Reject Comment: The reviewers have found several issues with the paper, in terms of motivation for the proposed objective and claims not matching empirical evidence. The authors did not respond to the reviews, and so these issues remain unaddressed. We hope that the authors can benefit from these useful suggestions from the reviewers, to improve their work. ==================================================
# Probabilistic Guarantees For Abductive Inference Anonymous authors Paper under double-blind review ## Abstract Abductive reasoning is ubiquitous in artificial intelligence and everyday thinking. However, formal theories that provide probabilistic guarantees for abductive inference are lacking. We present a quantitative formalization of abductive logic that combines Bayesian probability with the interpretation of abduction as a search process within the Algorithmic Search Framework (ASF). By incorporating uncertainty in background knowledge, we establish two novel sets of probabilistic bounds on the success of abduction when (1) selecting the *single* most likely cause while assuming noiseless observations, and (2) selecting any cause above some probability threshold while accounting for noisy observations. To our knowledge, no existing abductive or general inference bounds account for noisy observations. Furthermore, while most existing abductive frameworks assume exact underlying prior and likelihood distributions, we assume only percentile-based confidence intervals for such values. These milder assumptions result in greater flexibility and applicability of our framework. We also explore additional information-theoretic results from the ASF and provide mathematical justifications for everyday abductive intuitions. ## 1 Introduction Imagine a patient visits a doctor because of a persistent cough, fever, and shortness of breath. As the doctor considers these symptoms and the prevalence of certain illnesses in the area, the doctor may hypothesize that the patient has pneumonia. This is an example of abductive reasoning, or *abduction*. Abduction is the process of finding the best causal explanation given some observed effects. Abductive reasoning can be categorized into strategies that can generate new hypotheses, known as *creative abduction*, and those that select the best candidate given a set of possible explanations, known as *selective abduction* (Schurz, 2007). We focus on selective abduction, which can be formalized with Bayesian Decision Theory (Romeijn, 2013). Given observation(s) O, we select a hypothesis Ci from a finite set of hypotheses C. Per Bayesian probability, we denote Pr(Ci|O) as the *posterior*, where the most probable cause is that with the highest posterior. By Bayes' theorem, $$\Pr(C_{i}|O)={\frac{\Pr(O|C_{i})\Pr(C_{i})}{\Pr(O)}}.$$ However, during the hypothesis selection process, the relevant observations Pr(O) remain constant. Thus, the relevant form of Bayes' theorem becomes $$\operatorname*{Pr}(C_{i}|O)\propto\operatorname*{Pr}(O|C_{i})\operatorname*{Pr}(C_{i}).$$ To perform selective abduction, one simply chooses the hypothesis whose likelihood and prior have the greatest product.1 Abduction accompanies induction and deduction as one of three forms of logical reasoning (Rodrigues, 2011; Peirce et al., 2017). In supervised machine learning, inductive and abductive processes serve as the 1Note that a general cause (i.e., one that is likely and can produce many effects) is not guaranteed to have a high posterior since its smaller likelihood counters the effect of its high prior. 1 underlying logic behind model training and application (see Figure 1) (Mooney, 2000). While both inductive and abductive reasoning are applied ubiquitously in the field, inductive reasoning is currently the more well-understood process; we have already gained a theoretical understanding of inductive accuracy (Dietterich, 1989; Kietz, 1993; Cummings et al., 2016; Garg et al., 2021; Cosentino et al., 2022). However, to our knowledge, there currently exist no formal quantitative frameworks with accuracy bounds for abductive reasoning. In a broader context, artificial intelligence researchers such as Erik Larson argue that obtaining a quantitative theory of abduction is a necessary step towards bridging machine and human intelligence. Abduction, more specifically creative abduction, encapsulates human intuition or "guessing" capability lacking in current models. Larson describes machine understanding of abductive reasoning as the central "blind spot" of artificial intelligence: "Abductive inference is required for general intelligence, purely inductively inspired techniques like machine learning remain inadequate...The field requires a fundamental theory of abduction." (Larson, 2021) Our work primarily aims to (1) provide currently lacking accuracy bounds for abductive reasoning and (2) serve as a preliminary version of this "fundamental theory of abduction" needed for abductive machine understanding. We propose a general probabilistic framework for *selective* abduction built from Bayesian Decision Theory (Berger, 2013) (detailed in Section 3), serving as a jumping off point for future work on creative abduction. Through this Bayesian framework, we first derive upper and lower probabilistic bounds of abductive accuracy when assuming underlying q-percentile uncertainty bounds of prior and likelihood probabilities for each cause (Section 4). This first set of accuracy bounds treats successful abduction as choosing the *single* true hypothesis assuming the selection of the single highest posterior. We then extend this by reframing abduction as a search process within the Algorithmic Search Framework (ASF) (Montañez, 2017), which lets us describe and bound the probability of selecting any hypothesis with a posterior probability above a certain threshold while accounting for noisy observations (Section 5.1). Lastly, in addition to deriving bounds on abductive accuracy, we apply the framework to quantitatively justify common-sense heuristic abduction (Section 5.2, 5.3). ## 2 Related Work We review applications of abductive logic in machine learning and artificial intelligence, and survey existing abductive frameworks and current literature on Bayesian inference. ## 2.1 Logic In Machine Learning Peirce introduced abduction alongside induction and deduction as the three pillars of logical inference (Shanahan, 1986). Induction, inferring causal relationships from data, is central to machine learning (Mooney, 2000). Inductive logic is core to the training process, where labeled examples are used to develop generalized relationships within a model. Deductive and abductive logic are employed within machine learning's underlying inductive framework by applying the relationships derived through inductive training (Bergadano et al., 2000). Deduction facilitates data generation by selecting a class (cause) to produce feature data (observations). Conversely, abduction involves assigning class labels (causes) to unlabeled data (observations) using a trained model that embeds established causal relationships (see Figure 1) (Bergadano et al., 2000). Induction corresponds to the training phase, where input-output relationships are learned, while abduction relates to classification, using known relationships to infer likely causes. Table 1 outlines the connections between logical inference (Bergadano et al., 2000) and machine learning. From this perspective, machine learning applies abductive logic in model inference. For example, machine learning emulates the abductive reasoning used in spam detection and medical diagnosis by applying trained algorithms to unlabeled data (i.e., text from emails or radiology scans). However, model inference is just one of many applications of abduction in machine learning. Our work addresses the theoretical limits of the success of abductive reasoning generalizable to applications such as these. ![2_image_0.png](2_image_0.png) Figure 1: Three methods of inference. The dotted lines show which part of each process is being inferred. Table 1: Schematic outline of the processes of inference in supervised machine learning (Bergadano et al., 2000). | Logical Inference | Machine Learning | |------------------------------------|--------------------------------------------------| | Induction: P(a) ∴ ∀wP(w) | Training: (x 1 , y1), ..., (x n, yn) ∴ f : X → Y | | Abduction: Q(a) P(w) → Q(w) ∴ P(a) | Classification: xm ym = f(xm) ∴ ym | ## 2.2 Applying Abduction In Machine Learning In addition to its synonymy with the higher-level logic of model inference, abductive logic is central to several common machine-learning processes. Abduction is the underlying logic of Bayesian networks, which are used for tasks such as clustering, supervised classification, anomaly detection, and temporal modeling (Mihaljević et al., 2021). Bayesian networks are particularly useful for decision-making under uncertainty (Mihaljević et al., 2021) and are widely used in criminology and prognosis, diagnosis, and prescription in healthcare (Song et al., 2021). Additionally, *maximum a posteriori* (MAP) applies abductive reasoning through Bayes' theorem to optimize model parameters.2 Analogizing training data D as observations and a possible model parameterization w to a possible cause, MAP optimizes parameters by maximizing the posterior, Pr(w|D) (Bishop, 2006). Abductive reasoning is also prevalent in relational learning and computer vision. In relational learning, where data is represented through relationships with other data, abduction guides search and generates missing input data (Bergadano et al., 2000). In computer vision, integrating abductive reasoning with convolutional neural networks (CNNs) enhances spatial-temporal reasoning and image segmentation tasks, which also contributes to explainable AI by incorporating understandable reasoning into black-box models (Zhang et al., 2021; Rafanelli et al., 2023). ## 2.3 Formalizations Of Abduction Various formalizations of abduction have been explored in symbolic AI literature (Paul, 2000). Set-cover-based approaches involve selecting a subset of hypotheses from a larger set, requiring complete causal relationships (Allemang et al., 1987). Knowledge-level approaches propose explanations based on beliefs (Levesque, 1989). Abductive Logic Programming (ALP) represents inferences as entailments from a prior knowledge base to the veracity of specific causes (Kakas et al., 1992; Alberti et al., 2008; Raghavan & Mooney, 2010). 2Note that training remains an inductive process on a larger scale; MAP applies abductive logic within training steps since it is a Bayesian method. Probabilistic Horn Abduction extends Prolog by combining exact probabilities of hypotheses with Bayes' theorem to generate posterior probabilities built from multiple observations (Poole, 1991; Ng & Mooney, 1991). Unlike our proposed framework, it assumes exact prior and likelihood probabilities and does not incorporate confidence ranges for these distributions (Poole, 1991). Developments in probabilistic and probabilistic abductive logic programming (Turliuc et al., 2013; Azzolini et al., 2021) depart from our work in similar ways, as exact probabilities are assumed and general bounds for abductive success are not provided. A recently developed framework applying stochastic mathematical systems (SMSs) models abduction by representing reasoning as stochastic systems, with the human reasoner SMS generating hypotheses and an oracle SMS evaluating their validity based on explanatory power and evidence (Wolpert & Kinney, 2024). Like Probabilistic Horn Abduction, it also does not account for the uncertainties in underlying distributions. These methods lack probabilistic guarantees for the correctness of the abductive inferences and do not quantify associated uncertainties. Our approach addresses this gap by integrating formal machine learning frameworks, which allows for more precise quantification of the uncertainties involved in abductive inferences. ## 2.4 Bayesian Inference Bayesian inference forms the basis of our framework, deriving accuracy bounds using qp and ql confidence intervals for prior and likelihood distributions, respectively. These intervals represent confidence in causal relationships (ql) and general world knowledge (qp), providing flexibility in representation. Bayesian inference estimations and bounds are well-explored in the literature, with numerous known methods of deriving accuracy bounds for inference of specific algorithms or tasks (Yekutieli, 2012; Pati et al., 2018; Chérief-Abdellatif et al., 2019; Alroobaea et al., 2020; Audibert, 2009; Zhang et al., 2021; Alquier & Ridgway, 2020; Ferguson et al., 1992; Cox, 1993; Alvarez et al., 2014). However, general methods for deriving bounds using techniques like multi-valued mapping (Dempster, 1968) or prior measure intervals (Dempster, 1967) are less common. To our knowledge, no existing method derives Bayesian inference bounds based on specific prior and likelihood confidence intervals with probabilities ql, as our framework does. Our work is the first to leverage the ASF (Montañez, 2017) to construct a formalization of abduction or abduction by Bayesian inference. Unlike other established frameworks (Poole, 1991; Ng & Mooney, 1991; Poole, 1993), the ASF accounts for noisy observations - observations that may not fully reflect "true" events. The framework makes very few assumptions of given information resources, F, which (in the case of abduction) embeds observation data. Such data is abstracted as binary strings, with no conditions placed on what form the binary strings take, only that we have functions available to extract feedback from the strings for individual search queries. Thus, with no restrictions placed on the information resources, the ASF accommodates both noisy and noiseless observations. To our knowledge, there are no abductive or general inference bounds with this specific property. Existing work has only analyzed the correlation of real dataset noise with the accuracy of Bayesian inference for specific algorithms, assuming specific data qualities (An et al., 2012). ## 3 Preliminaries We formalize the fundamental building blocks of abduction, causes and observations, as vectors. The vectorization of such outcomes allows the formalization of posterior, likelihood, and prior probabilities as distributions over a vector space. We then formalize the likelihood and posterior uncertainty intervals on which the abductive search process relies. ## 3.1 Vectorizing Observations We formalize observations as binary vectors, where each scalar component corresponds to the existence of a specific *observation feature* or certain observed outcome. For example, suppose you swallow an unknown pill and then your headache disappears. A representative observation vector might be ⟨1, 1⟩ with each feature representing (1) "Did you swallow a pill?" and (2) "Did the headache go away?" (respectively). If the headache disappeared without taking a pill, the observation vector would be ⟨0, 1⟩. Definition 3.1. (O) Let O denote the vector space of discrete topology containing all binary-featured observation vectors. Since any outcome must strictly occur or not occur, the set of possibilities within O is mutually exclusive and collectively exhaustive. In the case where an observation is a continuous variable, such as temperature, we would convert the variable by adding additional features representing levels of the value, such as ["cold", "lukewarm","hot"].3 Proposition 3.1. The set of possible outcomes represented as vectors within O *is mutually exclusive and* collectively exhaustive. ## 3.2 Vectorizing Causes And Likelihood Probability Mass Functions A cause Ci has some probability of instigating any possible observation vector x ∈ O, inducing a conditional probability mass distribution (i.e., likelihood function) Pr(x|Ci) over all observations x ∈ O. Note that every observation x ∈ O is disjoint (Proposition 3.1), and we assume exactly one observation vector is produced and observed. Following the earlier example, the likelihood distribution over the observation space for the cause "aspirin" expresses the probability that, *assuming* aspirin was taken, phenomena x ∈ O would follow. Knowing that aspirin typically relieves headaches and is ingested in pill form, the likelihood distribution over O with dimensions {"Pill taken?", "Headache relieved?"} may be similar to Table 2. Such a likelihood distribution depends only on the cause Ci, and will act over O. The notation do(aspirin = True)) denotes that we taking some action to force the condition "aspirin" to be True, per do-calculus (Pearl et al., 2000). Table 2: Example likelihood distribution for effects of aspirin. | Pill taken? | Headache relieved? | x | Pr(x|do(aspirin = True)) | |---------------|----------------------|--------|----------------------------| | no | no | ⟨0, 0⟩ | 0.05 | | no | yes | ⟨0, 1⟩ | 0.10 | | yes | no | ⟨1, 0⟩ | 0.15 | | yes | yes | ⟨1, 1⟩ | 0.70 | Assuming that exactly one of the observation vectors must occur, we know that the probabilities for each collectively must sum to one. Considering all the possible ways there are to assign probabilities to a collectively exhaustive and mutually exclusive set of options forms a mathematical simplex, S. For k observation features (where dim (O) = k), simplex S forms a continuous 2 k−1 dimensional hyperplane containing all possible "cause vectors", each corresponding with some likelihood probability mass function over the 2 k observation vectors in O. Each scalar component of a 2 k dimensional "cause" vector c ∈ S denotes how much probability mass is placed on a corresponding observation vector in O. Since we define a "cause" as the event representation of a likelihood distribution over O, a single cause vector in O can actually represent multiple concurrent events or causes. Ensuring that every cause c ∈ S corresponds to a valid probability mass function on O requires the following two properties: (1) the simplex is bounded within [0, 1] on every dimension such that no c ∈ S holds a component that indicates an invalid probability, and (2) the sum of all components of a cause vector equals 1. ## 3.3 Defining Posterior Confidence Bounds During the decision-making process, we compare different posterior probabilities for the same observation x. Since the evidence, Pr(x), is constant, we will only compare the product of the likelihood and prior across causes, namely, Pr(x|c) Pr(c), which we will be referring to as the "posterior" for simplicity. In life, we often lack these exact likelihood and prior distributions. Instead, we may estimate such probabilities through numerical techniques, including asymptotic estimations, Monte Carlo methods, numerical integration, and various sampling methods (Tierney, 1994; Chib, 1996; Levine & Casella, 2001). Other distribution 3Note that any probability mass function over O would, by default, place zero mass on contradictory observation vectors, such as one that is both "hot" and "cold." estimation methods include smoothing and reduction methods, and Markov chain algorithms can be further used to combine estimation methods (Tierney, 1994). Thus, to account for uncertainty, we estimate likelihood, prior, and posterior probabilities through confidence intervals. We define two functions denoting the upper bound likelihood probability, lU (c, x) , and lower bound likelihood, lL(c, x), of the ql-percentile likelihood uncertainty interval, where lU (c, x) ≥ lL(c, x). The prior qr-percentile uncertainty interval is similarly represented through an upper and lower bound rU (c) and rL(c) (respectively). The upper and lower confidence bounds of the posterior, pU (c, x) and pL(c, x), can then be found by simply multiplying the upper or lower bounds of the likelihood and prior probabilities together: pU (c, x) = lU (c, x)rU (c) and pL(c, x) = lL(c, x)rL(c). This bound assumes there is a ql probability that the likelihood lies in its ql-percentile interval [lL(c, x), lU (c, x)] and, likewise, that there is a qr probability that the prior lies in its qr-percentile interval [rL(c, x), rU (c, x)]. Thus, the interval [pU (c, x), pL(c, x)] defines the q-percentile confidence interval for posterior Pr(c|x) where q = qlqr. ## 3.4 Narrowing The Space Of Possible Causes We have established S as the *infinite* space containing all possible likelihood distributions over O and, thus, the space of all possible causes. However, this space includes likelihood distributions generated by causes that are implausible. In the real world, we often choose the most likely cause from a smaller set of plausible causes; for example, one would not consider an atomic bomb to be a plausible cause for your headache disappearing. Rather than considering the entirety of S as the pool of possible causes, we assume that some finite subset C ⊂ S with cardinality k = |C| has been pre-selected as the finite set of *plausible* causes assumed to contain the true cause. We further assume C includes a "cause" Cother, whose posterior encapsulates the (likely low) combined probability of all other causes in S occurring. With this, we assume that all causes in C are disjoint and that C contains the one true explanation for observation x (namely, what actually caused it). Definition 3.2. (C) Let *C ⊂ S* denote the relevant finite subset of possible cause vectors in S. For notational simplicity, we additionally denote each cause as Ci ∈ C and its corresponding "true" posterior probability as Miin posterior set M. We likewise simplify the notation of the upper and lower bounds of q-percentile uncertainty interval posterior Mi as follows: from pU (c, x) and pL(c, x) to ui and li, respectively. For future reference, we define the following: Definition 3.3. (Mi) Let Mi ∈ M denote the "true" posterior probability of cause Ci ∈ C, where Mi = Pr(Ci|x) Pr(Ci). Then Mi falls into the following uncertainty interval with probability q: Mi ∈ [li, ui]. Since we assume each Ci ∈ C is disjoint, and that C surely contains the true explanation for observation x, each posterior probability Pr(Ci|x) sums to 1. Thus, $$\sum_{M_{i}\in{\mathcal{M}}}M_{i}=\operatorname*{Pr}(\mathbf{x}).$$ Definition 3.4. (U) Let U denote the set containing the q-percentile uncertainty interval bounds [li, ui] for each posterior Mi ∈ M. Note that we also assume |M| = |C| = *|U| ≥* 2, as determining the most likely cause from a set of only one is trivial. ## 4 Abduction By Bayesian Inference 4.1 Cause Selection With Uncertainty Intervals Given the set of q-percentile confidence posterior probability uncertainty bounds [li, ui] ∈ U for each cause Ci ∈ C, one selects the cause whose *point estimate posterior probability* is highest. Since the true posterior probabilities of each cause are unknown, this process may incorrectly select a cause whose posterior is not the true maximum. We quantify this rate of incorrect selection in the case where every posterior Mi ∈ M is contained in respective confidence bound [li, ui]. Let predicate IsMax(Mi) denote whether posterior Miis truly the highest posterior. We first define the probability range where the maximum posterior must lie, [*l, u*]. Definition 4.1. Let each posterior Mi ∈ M occur within q-percentile confidence interval [li, ui] ∈ U. Then, we set $$l=\operatorname*{max}(\{l_{i}|i\in\mathbb{Z}_{+},i\leq|M|\})$$ $$\operatorname{and}$$ $$u=\operatorname*{max}(\{u_{i}|i\in\mathbb{Z}_{+},i\leq|M|\}).$$ Proposition 4.1. Assuming that every Mi ∈ M lies in respective q-percentile confidence interval [li, ui] ∈ U, the max posterior is bounded by u and l. Thus, in the case that *every* confidence bound fully contains its respective posterior almost surely (instead of just with probability q), any posterior Mi whose uncertainty bounds [li, ui] overlap with [*l, u*] is potentially the maximum posterior with some probability Pr(IsMax(Mi)). Theorem 4.2. Let M′ ⊆ M denote the set of posteriors whose confidence intervals intersect with [l, u]. The probability that Mi ∈ M′*is the maximum posterior is as follows:* $$\operatorname*{Pr}(I s M a x(M_{i}))=\int_{l}^{u}P\left(M_{i}=x,\bigcap_{\begin{array}{c}{{M_{j}\in{\mathcal{M}}^{\prime},}}\\ {{C_{j}\neq C_{i}}}\end{array}}(M_{j}<x)\right)d x.$$ This accounts for any estimated posterior probability distribution within [li, ui], but assumes Miis contained by [li, ui] with probability 1.4 ## 4.2 Bayes Error Rate However, even assuming the cause with the true highest posterior is successfully identified, there is the unavoidable error from non-zero posteriors of the "losing" categories. The true cause of a feature may simply not have the highest posterior. This minimum achievable error is expressed by Bayes Error Rate (BER): Definition 4.3. (ϵ, (Sekeh et al., 2020)) Let ϵ denote Bayes multiclass error rate (BER) for every Ci ∈ C. For |C| = k possible causes: $$\epsilon=1-\int\operatorname*{Pr}(\mathbf{x})\operatorname*{max}_{i}\operatorname*{Pr}(C_{i}|\mathbf{x})d\mathbf{x}.$$ However, the formula above is often impractical to compute for k > 2 causes. Instead, one can derive bounds for the multi-cause BER with techniques such as the Bhattacharyya bound, estimations using Friedman-Rafsky test statistics, and non-parametric bounds using Henze-Penrose divergence (Sekeh et al., 2018). We adopt a recent method5 of upper bounding BER through global minimal spanning trees (Sekeh et al., 2020) and adopt a pairwise computational lower bounding method for BER (Lin, 1991). Definition 4.4. (ϵupper, (Sekeh et al., 2020)) Let ϵupper denote the upper bound of BER such that ϵ ≤ ϵupper. Then, for |C| = k, $$\epsilon_{\mathrm{upper}}=2\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}\delta_{i j}$$ where $$\mathrm{\Large{\begin{array}{l}{{\Pr(C_{i})\Pr(C_{j})\Pr(\mathbf{x}|C_{i})\Pr(\mathbf{x}|C_{j})}}\\ {{\Pr(C_{i})\Pr(\mathbf{x}|C_{i})+\Pr(C_{j})\Pr(\mathbf{x}|C_{j})}}\end{array}}d\mathbf{x}.$$ $$\delta_{i j}:=$$ 4See the appendix for directly computable forms of Theorem 4.2. 5This method provides a tighter bound than aforementioned techniques (Sekeh et al., 2020). Definition 4.5. (ϵlower, (Wisler et al., 2016), (Lin, 1991)) Let ϵlower denote the lower bound of BER such that ϵ ≥ ϵlower. BER may be lower bounded by applying pairwise computations of Bayes error ϵij for i and j between every unique cause pair (Ci, Cj ) where Ci ∈ C, Ci ∈ C, i ̸= j: $$\epsilon_{l o w e r}=\frac{2}{k}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}(\mathrm{Pr}(C_{i})+\mathrm{Pr}(C_{j}))\epsilon_{i j}.$$ ## 4.3 Abductive Error Guarantees Assume an algorithm selects from the set of possible causes C the cause with the highest estimated posterior. The preceding subsections detail the two possible sources of error: 1. Incomplete or imprecise background information (e.g., not knowing all the potential causes and causal relationships). This uncertainty is represented through q-percentile posterior confidence intervals in U. 2. The true cause is not the cause with the highest true posterior. If the exact likelihood and prior is given, this minimum achievable error is simply expressed through the Bayes Error Rate (Definition 4.3). We derive bounds of the error rate by combining these two possible sources of error. Let W denote the event of incorrect abduction (not selecting the true cause). Then, the probability of correctly selecting the maximum posterior Mi and incorrect abduction is $$\mathrm{Pr}(\mathrm{W},\mathrm{IsMax}(M_{i}))$$ $${\dot{\mathbf{i}}}\int J$$ Pr(W,IsMax(Mi)) = Pr(W|IsMax(Mi)) Pr(IsMax(Mi)) = ϵPr(IsMax(Mi)). The probability of both incorrectly selecting the maximum posterior and incorrect abduction is Pr(W, ¬IsMax(Mi)) = Pr(W|¬IsMax(Mi)) Pr(¬IsMax(Mi)) = (1 − Pr(Mi|x))(1 − Pr(IsMax(Mi))). Such definitions let us derive upper and lower bounds for the error rate assuming that all posteriors Mi ∈ M lie in q-percentile confidence intervals [li, ui] ∈ U with probability 1. Let γi denote the error rate given this assumption. Theorem 4.6. Let γi denote the error rate of selected cause Ci when assuming posterior Mi*lies in confidence* interval [li, ui] almost surely. Then, γi *is bounded above by* γi ≤ ϵ*upper* Pr(*IsMax*(Mi)) + (1 − li)(1 − Pr(*IsMax*(Mi))) where ϵupper *may be derived by Definition 4.4* Theorem 4.7. Let γi denote the error rate of selected cause Ci when assuming posterior Milies in confidence interval [li, ui] almost surely. Then, γi *is bounded below by* γi ≥ ϵ*lower* Pr(*IsMax*(Mi)) + (1 − ui)(1 − Pr(*IsMax*(Mi))) where ϵlower *may be derived by Definition 4.5* We extend this result to the general case where all posteriors Mi ∈ M are assumed to jointly lie in their respective confidence intervals [li, ui] ∈ U with probability q. Theorem 4.8. Let q kbe the probability that all Mi ∈ M lie in their respective confidence bounds [li, ui] ∈ U. Let γi, upper be the upper bound of γi *defined in Theorem 4.6. Then, the upper bound of the general error rate* is given by $$\operatorname*{Pr}(W)\leq1-q^{k}(1-\gamma_{i,\;u p p e r}).$$ Theorem 4.9. Let q kbe the probability that all Mi ∈ M lie in their respective confidence bounds [li, ui] ∈ U. Let γi, lower be the lower bound of γi defined in Theorem 4.7. Then, the lower bound of the general error rate is given by $$\operatorname*{Pr}(W)\geq\gamma_{i,\;l o w e r}q^{k}.$$ We note that the upper bound 1 − q k(1 − γ*i,upper*) < 1 and the lower bound γ*i,lower*q k > 0, so our bounds for Pr(W) are nontrivial, being strictly tighter than the general bounds on probabilities (e.g., [0, 1]). We should note that the bounds presented in this section assume noiseless observations. That is, we assume observation x is a wholly accurate description of the "true" outcomes of a cause. A noisy observation vector may have entries that deviate from the "true" outcome of a cause, akin to the possibility of a faulty observer or inaccurate data pipeline with which observations is processed (i.e., faulty equipment, random errors in sampling, etc.). Accounting for noisy observations for selecting the highest posterior cause is a subject of future work, and may involve the averaging of posteriors among a probability distribution of observation vectors. The next section explores a different set of bounds describing the selection of any cause whose probability is above some threshold. With this broader definition of "success," we can account for noisy observations through applying the Algorithmic Search Framework (Montañez, 2017). ## 5 Search And Heuristic Applications The Algorithmic Search Framework (ASF) characterizes learning problems as search, allowing one to equate the chance of success of any learning algorithm to that of a search process described by the three-tuple (Ω*, T, F*) - the search space, *target set*, and *external information resource*, respectively (Montañez, 2017). This framework formalizes the seminal work of Mitchell (1982) and extends results beyond binary classification problems (Montanez, 2017). Recent developments have also extended the ASF for continuous or fuzzy measures of success (Knell et al., 2024), allowing even greater flexibility. Most relevant to our use-case, the ASF provides formal bounds accounting for noise, and formalizes insights into the frequency of favorable search strategies and problems (Montanez, 2017). To our knowledge, there are no extensions of Mitchell's work or other formal frameworks that provide such use cases (Mitchell et al., 1986; Dupont et al., 1994; Duarte et al., 2023). We have previously discussed abductive success in terms of finding the one "true" cause for some observation vector (which may or may not have the highest posterior) *assuming* the selection of the single highest posterior. Furthermore, we assumed noiseless observations. By reframing the ASF for abduction, we describe an algorithm's ability to identify the cause(s) with posteriors above some threshold in terms of information-theoretic properties within (Ω*, T, F*) and generalize to noisy observation vectors. As explained in section 2, there exist no formal bounds on abductive success to our knowledge, and the ASF has not yet been applied to abduction. Existing work involving abduction and search such as abduction as inference to the best explanation (IBS) (Schurz, 2007), have not yielded formal explanations of abductive certainty or heuristics like Theorems 5.2 and 5.1. The ASF has not yet been applied to abduction, we believe doing so provides new formal and rigorous insights. ## 5.1 Asf: Success Of Abduction Through Search We define each term of (Ω*, T, F*) as follows. Search Space (Ω) constitutes the finite set of pre-selected, plausible causes for the given observation vector x; it is synonymous with C defined in 3.2. Pi over search space Ω denotes the probability distribution over the space at step i, and Pi(T) is the probability of success - namely, the amount of probability mass placed on the target set T at time i (Montañez, 2017). In our adaptation, Pi denotes the posterior distribution of Pr(Ci|x) over all possible causes Ciin Ω. Pi may be derived from aforementioned bounds [li, ui] ∈ U of posterioradjacent value Pr(Ci|x)Pr(x) (Definition 4.1) with two modifications: (1) Pi denotes the point estimate probability of the posterior within these confidence bounds, and (2) this point estimate of Pr(Ci|x)Pr(x) is inversely scaled by Pr(x) such that Piis a valid probability mass function that sums to one. Target Set (T), a subset of the search space Ω, contains the set of the "more plausible" causes with posterior probability Pi above or at *minimum performance value* in (0, 1]. Search aims to identify causes in Ω that lie in T, a task whose difficulty increases as the threshold for T rises. Note that T is a random variable as it describes a set of likely causes for random observations. External Information Resource (F) is a finite-length binary string drawn from a distribution with an "API"-like interface, meaning one can extract information from F (Montañez, 2017). In our case, it embeds (1) the observation vector x whose cause we determine, and (2) the upper and lower bounds of the q-confidence intervals for likelihood and prior probabilities across Ω for every cause Ci ∈ Ω. More specifically, F contains the likelihood bounds lU (c, x) and lL(c, x) and prior bounds rU (c, x) and rL(c, x), which inform the construction posterior probability distribution Pi over Ω for the search process as defined previously. Since F is a function of random data, it is itself a random variable. Note that, as explained in Section 4, the ASF places few restrictions on information resources F, and thus allows for both noisy or noiseless observations. Framing abduction through the ASF, we apply established derivations of the maximal success probability of success defined in terms of information-theoretic properties of (Ω*, T, F*) and the complexity of the search problem (Montañez, 2017). Theorem 5.1. (Montañez, 2017) The probability of a successful abduction, q*, is bounded above by* $$q\leq\frac{I(T;F)+D(P_{T}||{\mathcal{U}}_{T})+1}{I_{\Omega}},$$ where IΩ = − log |T| |Ω| , D(PT ||UT ) *is the Kullback-Leibler divergence between the marginal distribution on* target sets and the uniform distribution on possible target sets, and I(T; F) is the mutual information between the target and observation. We interpret I(T; F) as the dependence between the target set and the observation, D(PT ||UT ) as the non-uniformness of the target, and IΩ as the sparseness of the targets inside the search space. When the true cause is highly correlated with the observations (i.e., less random), the achievable success rate is high. When the search space consists of a large number of causes, the achievable success rate is lower. This gives us an additional information-theoretic upper bound on the probability of successful abduction. ## 5.2 Asf: High-Likelihood Causes Are Rare Any high-posterior cause must also confer high-likelihood to observed effects, due to the multiplicative nature of posterior computation. Yet a cause can only make an observation vector more probable at the cost of making others less probable. Such high-likelihood causes must necessarily be rare to the degree they confer high joint-probability on the observations, as shown by the following theorem (Montañez, 2017). Theorem 5.2. *(Famine of Favorable Strategies Theorem, (Montañez, 2017)) For any fixed search problem* (Ω, T, F)*, set of probability mass functions* P = {P : P ∈ [0, 1]|Ω|,Pj Pj = 1}*, and a fixed threshold* qmin ∈ [0, 1], $$\frac{\mu({\mathcal G}_{t,q_{\operatorname*{min}}})}{\mu({\mathcal G}_{\mathcal P})}\leq\frac{p}{q_{\operatorname*{min}}},$$ where p = $p=\frac{|T|}{|\Omega|}$,$\mathcal{G}_t$ $q_{\min}=\{P:P\in\mathcal{P},t^\top P\geq t\}$ ,Gt,qmin = {P : P ∈ P, t⊤P ≥ qmin}, and µ *is Lebesgue measure.* In contrast to Section 5.1, we consider a different search problem in applying Theorem 5.2. The search space Ω no longer consists of posteriors, but is now the space of all possible observation vectors, some of which are "close enough" to the true vector to comprise a noisy target set, T. Causes sample observation vectors by producing effects: a blind, weighted search. F becomes irrelevant. Theorem 5.2 then tells us that the proportion of causes which confer at least qmin probability to the observation set is necessarily small whenever qmin is high, if we are only willing to tolerate so much noise in our observations (leading to small |T|). One might argue that although not many causes can confer high *joint* likelihood to the observations, several independent causes might together constitute an abductive explanation for the observed phenomena, if each sufficiently raises the likelihood of a *single* observed feature. Simple arithmetic renders this possibility unpersuasive. Assuming independent causes for each observed feature, the probability of jointly occurring outcomes in an observation vector x scales exponentially with |x| or the number of features. For instance, if two features have a 50/50 chance of occurring coincidentally, then the chance of them occurring together is 1/2 · 1/2 = 1/4. For four such features, the probability drops to 6.25%. Thus, the coincidental co-occurrence of independent causes that together explain an observation vector is unlikely as the number of observations increases. ## 5.3 Increasing Certainty In Abductive Inference Inductive inference error guarantees derive their strength from data abundance: increasing the number of observed examples typically increases the tightness of such bounds. In contrast, abductive inference proceeds from a single observation. How do we increase confidence in our abductive judgment? In the real world, our confidence in abductive reasoning typically depends on the amount of evidence supporting or contradicting a potential hypothesis. Though consisting of a single example, there are often many features of that observation, which may or may not be well-explained by a proposed cause. This suggests a "horizontal" mode of confirmation built on many conditionally independent features, rather than the "vertical" mode of confirmation based on many observed examples typical of inductive inference. We note the importance of conditional independence among features, since features that necessarily imply each other even given the cause do not give us additional confidence in our abductive judgment. Recall that observation vector x ∈ O consists of binary features representing the existence or non-existence of some conditionally independent observed outcome. Letting x1*, . . . , x*n represent each feature of x ∈ O where |x| = dim(O) = n, we quantitatively demonstrate this phenomenon with the following theorem. Theorem 5.3. For each conditionally independent feature x1, . . . , xn, define βi > 0 *such that for all* i = 1*...n*, $i|C\rangle=\,$. $$\operatorname{Pr}(x_{i}|{\overline{{C}}}).$$ Pr(xi|C) = βi Pr(xi|C). $$L e t\;\beta\;\tau$$ Let β =pn Qn i βi, the geometric mean of the βi. If β > 1*, then* $$\operatorname*{lim}_{n\to\infty}{\frac{\operatorname*{Pr}(x_{1},\ldots,x_{n}|C)}{\operatorname*{Pr}(x_{1},\ldots,x_{n}|{\overline{{C}}})}}=\operatorname*{lim}_{n\to\infty}\beta^{n}=\infty.$$ Each conditionally independent observation feature can either support (βi > 1) or contradict (βi < 1) the proposed cause. If features support the current cause C on average (i.e., β > 1), then the confidence of abduction (ratio between likelihood under C over C) approaches infinity as the number of (on average) supporting features increases. ## 6 Discussion We formalize abduction as selecting the cause with the highest estimated posterior from some finite pool of causes. Our focus on single-cause abduction problems is justified by their foundational role in simplifying complex decision-making processes, allowing for more precise modeling and analysis that lays the groundwork for tackling multi-cause scenarios with greater accuracy in future research. For k possible causes whose posteriors are estimated within a confidence interval set with joint probability q k, the probability of incorrect abduction Pr(W) is bounded below by Pr(W) ≥ γ*i,lower*q k(Theorem 4.9) and bounded above by Pr(W) ≤ 1−q k(1−γi, upper) (Theorem 4.8). As q approaches 1, the bounds on the error rate depend more heavily on γi (Theorems 4.6, 4.7), which scales with the Bayes Error Rate and the amount of overlap between uncertainty intervals. One should obtain comprehensive and representative training data (i.e., maximizing q) to achieve better estimates of posteriors and thus minimize error. Extending this formalization to the ASF, we re-frame abductive success in information-theoretic terms and account for noisy observations. In this case, the maximum success rate of abduction is governed by the complexity of the search problem and other information-theoretic properties (Theorem 5.1). The maximum success rate increases as the plausible causes (i.e., causes whose posteriors are above the *minimum performance* value) become more explainable and less random; inherent unpredictability is brought from the randomness of the "true" cause. However, one may constrain this randomness by decreasing the sparseness of the search space and/or excluding less probable causes. Regarding the practicality of our results, it has been shown that bounds on the Bayes Error Rate can be empirically estimated by learning from training data instead of density estimation (Sekeh et al., 2020). Unlike traditional methods for estimating BER, such as those based on pairwise HP divergence or generalized Jensen-Shannon (JS) divergence, which becomes computationally infeasible as the number of classes or dimensions increases. The GHP-based method is shown to be computationally more efficient, making it more suitable for large-scale applications like neural networks (Sekeh et al., 2020). Then, in practice, it is possible to model a selective abduction problem using a Bayesian Neural Network and obtain approximate posterior distributions (Myshkov & Julier, 2016; Charnock et al., 2022), which can be directly used in our bounds for abductive inference. The mathematical formalization and bounds established in our paper have implications for human-like reasoning abilities which are crucial for understanding the limits of decision-making processes in artificial intelligence. Theorem 5.2 demonstrates how high-likelihood causes are rare; one is less likely to stumble across them accidentally. In addition, more supporting observations increase our confidence in a unified causal explanation, instead of the coincidental co-occurrence of observed effects. Furthermore, Theorem 5.3 aims to capture the degree of certainty of our everyday abductive inferences. Consider a scenario where we are trying to convict a suspect of a crime. If pieces of evidence collectively support that the victim is guilty, our confidence to convict grows as the amount of such evidence grows. However, if pieces of evidence were heavily contradictory and/or refuted a suspect's involvement, then we become less confident of a conviction. Our confidence would approach 0 as the amount of (on average) contradictory observations tends toward infinity. ## 7 Conclusion Abductive reasoning is a key component of human rationality and discovery. State-of-the-art artificial intelligence is currently incapable of performing abductive reasoning at a human level. To achieve true human-like reasoning, it is important to consider the process of abduction and incorporate such ability in future developments. Our work formalizes selective abduction, deriving formal error guarantees for abductive reasoning within a finite space of causes. Also, by viewing selective abduction through the lens of the Algorithmic Search Framework, we better understood how the inherent complexities of abductive inference problems affect the achievable success rates. Future work might explore creative abduction using our framework as a starting point. Creative abduction can be represented through a search space that is potentially infinite. Rather than filtering S to a finite pool C, we represent hypothesis generation as optimization within an *infinite* subset of S. Proving bounds within this infinite set requires more complex mathematics, but extends the same underlying logic. Statistical bounds within such a framework would hold implications for general scientific reasoning and human creativity. ## References Marco Alberti, Federico Chesani, Marco Gavanelli, Evelina Lamma, Paola Mello, and Paolo Torroni. Identification of Correlated Damage Parameters Under Noise and Bias Using Bayesian Inference. *ACM Transactions* on Computational Logic (TOCL), 9(4):1–43, 2008. Dean Allemang, Michael C Tanner, Tom Bylander, and John R Josephson. Computational Complexity of Hypothesis Assembly. In *IJCAI, International Joint Conference on Artificial Intelligence*, volume 87, pp. 1112–1117, 1987. Pierre Alquier and James Ridgway. Concentration of tempered posteriors and of their variational approximations. *The Annals of Statistics*, 48:1–2, 2020. Roobaea Alroobaea, Saeed Rubaiee, Sami Bourouis, Nizar Bouguila, and Abdulmajeed Alsufyani. Bayesian Inference Framework for Bounded Generalized Gaussian-Based Mixture Model and Its Application to Biomedical Images Classification. *International Journal of Imaging Systems and Technology*, 30(1):18–30, 2020. Ignacio Alvarez, Jarad Niemi, and Matt Simpson. Bayesian Inference for a Covariance Matrix. arXiv preprint arXiv:1408.4050, pp. 1–3, 2014. Dawn An, Joo-Ho Choi, and Nam H. Kim. Identification of Correlated Damage Parameters Under Noise and Bias Using Bayesian Inference. *Structural Health Monitoring*, 11(3):293–303, 2012. doi: 10.1177/ 1475921711424520. URL https://doi.org/10.1177/1475921711424520. Jean-Yves Audibert. Fast Learning Rates in Statistical Inference through Aggregation. *The Annals of Statistics*, 37(4):1591 - 1646, 2009. doi: 10.1214/08-AOS623. URL https://doi.org/10.1214/08-AOS623. Damiano Azzolini, Elena Bellodi, Stefano Ferilli, Fabrizio Riguzzi, and Riccardo Zese. Abduction with probabilistic logic programming under the distribution semantics. *International Journal of Approximate* Reasoning, 142, 11 2021. doi: 10.1016/j.ijar.2021.11.003. F. Bergadano, V. Cutello, and D. Gunetti. *Abduction in Machine Learning*, pp. 197–229. Springer Netherlands, Dordrecht, 2000. ISBN 978-94-017-1733-5. doi: 10.1007/978-94-017-1733-5_5. URL https://doi.org/ 10.1007/978-94-017-1733-5_5. James O Berger. Identification of Correlated Damage Parameters Under Noise and Bias Using Bayesian Inference, pp. 1–45, 74–117. Springer Science & Business Media, 2013. Christopher M. Bishop. *Pattern Recognition and Machine Learning (Information Science and Statistics)*, pp. 1–32. Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738. Tom Charnock, Laurence Perreault-Levasseur, and François Lanusse. Bayesian neural networks. In Artificial Intelligence for High Energy Physics, pp. 663–713. World Scientific, 2022. Badr-Eddine Chérief-Abdellatif, Pierre Alquier, and Mohammad Emtiyaz Khan. A Generalization Bound for Online Variational Inference. In *Asian conference on machine learning*, pp. 662–677. PMLR, 2019. Siddhartha Chib. Calculating Posterior Distributions and Modal Estimates in Markov Mixture Models. Journal of Econometrics, 75(1):79–97, 1996. ISSN 0304-4076. doi: https://doi.org/10.1016/0304-4076(95)01770-4. URL https://www.sciencedirect.com/science/article/pii/0304407695017704. Romain Cosentino, Randall Balestriero, Richard Baranuik, and Behnaam Aazhang. Deep Autoencoders: From Understanding to Generalization Guarantees. In *Mathematical and Scientific Machine Learning*, pp. 197–222. PMLR, 2022. Dennis D Cox. An Analysis of Bayesian Inference for Nonparametric Regression. *The Annals of Statistics*, pp. 903–923, 1993. Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, and Zhiwei Steven Wu. Adaptive Learning with Robust Generalization Guarantees. In Vitaly Feldman, Alexander Rakhlin, and Ohad Shamir (eds.), 29th Annual Conference on Learning Theory, volume 49 of *Proceedings of Machine Learning Research*, pp. 772–814. PMLR, 23–26 Jun 2016. URL https://proceedings.mlr.press/v49/cummings16.html. A.P. Dempster. Upper and Lower Probabilities Induced by a Multivalued Mapping. The Annals of Mathematical Statistics, 38(2):325–339, 1967. ISSN 00034851. URL http://www.jstor.org/stable/ 2239146. A.P. Dempster. A Generalization of Bayesian Inference. Journal of the Royal Statistical Society. Series B (Methodological), 30(2):205–247, 1968. ISSN 00359246. URL http://www.jstor.org/stable/2984504. Thomas G Dietterich. Limitations on Inductive Learningg. In Proceedings of the Sixth International Workshop on Machine Learning, pp. 124–128. Elsevier, 1989. Guilherme Duarte, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. An automated approach to causal inference in discrete settings. *Journal of the American Statistical Association*, pp. 1–16, 2023. Pierre Dupont, Laurent Miclet, and Enrique Vidal. What is the search space of the regular inference? In Grammatical Inference and Applications: Second International Colloquium, ICGI-94 Alicante, Spain, September 21–23, 1994 Proceedings 2, pp. 25–37. Springer, 1994. Thomas S Ferguson, Eswar G Phadia, and Ram C Tiwari. Bayesian Nonparametric Inference. *Lecture* Notes-Monograph Series, 17:127–150, 1992. Saurabh Garg, Sivaraman Balakrishnan, Zico Kolter, and Zachary Lipton. Ratt: Leveraging Unlabeled Data to Guarantee Generalization. In *International Conference on Machine Learning*, pp. 3598–3609. PMLR, 2021. A. C. Kakas, R. A. Kowalski, and F. Toni. Abductive Logic Programming. *Journal of Logic and Computation*, 2(6):719–770, 12 1992. ISSN 0955-792X. doi: 10.1093/logcom/2.6.719. URL https://doi.org/10.1093/ logcom/2.6.719. Jörg-Uwe Kietz. Some Lower Bounds for the Computational Complexity of Inductive Logic Programming. In Machine Learning: ECML-93, Lecture notes in computer science, pp. 115–123. Springer Berlin Heidelberg, Berlin, Heidelberg, 1993. Milo Knell, Sahil Rane, Forrest Bicker, Tiger Che, Alan Wu, and George D Montanez. From targets to rewards: Continuous target sets in the algorithmic search framework. In *ICAART (3)*, pp. 558–567, 2024. Erik J Larson. *The Myth of Artificial Intelligence*, pp. 89–234. Harvard University Press, London, England, April 2021. Hector J. Levesque. A Knowledge-Level Account of Abduction. In IJCAI, International Joint Conference on Artificial Intelligence, volume 11, pp. 1061–1067, 1989. Richard A. Levine and George Casella. Implementations of the Monte Carlo EM Algorithm. *Journal* of Computational and Graphical Statistics, 10(3):422–39, 2001. URL http://www.jstor.org/stable/ 1391097. J. Lin. Divergence measures based on the Shannon entropy. *IEEE Transactions on Information Theory*, 37 (1):145–151, 1991. doi: 10.1109/18.61115. Bojan Mihaljević, Concha Bielza, and Pedro Larrañaga. Bayesian Networks for Interpretable Machine Learning and Optimization. *Neurocomputing*, 456:648–665, 2021. Tom M Mitchell. Generalization as search. *Artificial intelligence*, 18(2):203–226, 1982. Tom M Mitchell, Richard M Keller, and Smadar T Kedar-Cabelli. Explanation-based generalization: A unifying view. *Machine learning*, 1:47–80, 1986. George D Montañez. The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm. In *Systems,* Man, and Cybernetics (SMC), 2017 IEEE International Conference on, pp. 477–482. IEEE, 2017. George D Montanez. Why machine learning works. *URL https://www. cs. cmu. edu/˜ gmontane/montanez_dissertation. pdf*, 2017. Raymond J Mooney. Integrating Abduction and Induction in Machine Learning. Abduction and Induction: essays on their relation and integration, pp. 181–191, 2000. Pavel Myshkov and Simon Julier. Posterior distribution analysis for bayesian inference in neural networks. In Workshop on Bayesian deep learning, NIPS, 2016. Hwee Tou Ng and Raymond J Mooney. An Efficient First-Order Horn-Clause Abduction System Based on the ATMS. In *Proceedings of the Ninth National Conference on Artificial Intelligence. Anaheim, CA*, pp. 494–499, 1991. Debdeep Pati, Anirban Bhattacharya, and Yun Yang. On statistical optimality of variational Bayes. In International Conference on Artificial Intelligence and Statistics, pp. 1579–1588. PMLR, 2018. Gabriele Paul. *AI Approaches to Abduction*, pp. 35–98. Springer Netherlands, Dordrecht, 2000. ISBN 978-94017-1733-5. doi: 10.1007/978-94-017-1733-5_2. URL https://doi.org/10.1007/978-94-017-1733-5_2. Judea Pearl et al. Models, reasoning and inference. *Cambridge, UK: CambridgeUniversityPress*, 19(2):3, 2000. Charles S Peirce, Morris R Cohen, and John Dewey. Deduction, Induction, and Hypothesis. In Chance, love, and logic, pp. 131–153. Routledge, 2017. David Poole. Representing Diagnostic Knowledge for Probabilistic Horn Abduction. In *IJCAI*, pp. 1129–1137, 1991. David Poole. Probabilistic Horn abduction and Bayesian networks. *Artificial Intelligence*, 64(1):81–129, 1993. ISSN 0004-3702. doi: https://doi.org/10.1016/0004-3702(93)90061-F. URL https://www.sciencedirect. com/science/article/pii/000437029390061F. Andrea Rafanelli, Stefania Costantini, and Andrea Omicini. Position Paper: On the Role of Abductive Reasoning in Semantic Image Segmentation. *CEUR WORKSHOP PROCEEDINGS*, pp. 75–84, 2023. URL https://hdl.handle.net/11585/933658. Sindhu Raghavan and Raymond J Mooney. Bayesian Abductive Logic Programs. In *Statistical relational* artificial intelligence, pp. 82–87. Citeseer, 2010. Cassiano Terra Rodrigues. The Method of Scientific Discovery in Peirce's Philosophy: Deduction, Induction, and Abduction. *Logica Universalis*, 5:127–164, 2011. Jan-Willem Romeijn. Abducted by Bayesians? *Journal of Applied Logic*, 11(4):430–439, 2013. G. Schurz. Patterns of Abduction. *Synthese*, 164(2):201–234, August 2007. doi: 10.1007/s11229-007-9223-4. URL https://doi.org/10.1007/s11229-007-9223-4. Salimeh Yasaei Sekeh, Brandon Oselio, and Alfred O Hero. Multi-Class Bayes Error Estimation With a Global Minimal Spanning Tree. In 2018 56th annual allerton conference on communication, control, and computing (allerton), pp. 676–681. IEEE, 2018. Salimeh Yasaei Sekeh, Brandon Oselio, and Alfred O. Hero. Learning to Bound the Multi-Class Bayes Error. IEEE Transactions on Signal Processing, 68:3793–3807, 2020. doi: 10.1109/TSP.2020.2994807. Timothy Shanahan. The First Moment of Scientific Inquiry: CS Peirce on the Logic of Abduction. *Transactions* of the Charles S. Peirce Society, 22(4):449–466, 1986. Bofan Song, Sumsum Sunny, Shaobai Li, Keerthi Gurushanth, Pramila Mendonca, Nirza Mukhia, Sanjana Patrick, Shubha Gurudath, Subhashini Raghavan, Imchen Tsusennaro, Shirley T. Leivon, Trupti Kolur, Vivek Shetty, Vidya R. Bushan, Rohan Ramesh, Tyler Peterson, Vijay Pillai, Petra Wilder-Smith, Alben Sigamani, Amritha Suresh, moni Abraham Kuriakose, Praveen Birur, and Rongguang Liang. Bayesian Deep Learning for Reliable Oral Cancer Image Classification. *Biomed. Opt. Express*, 12(10):6422–6430, Oct 2021. doi: 10.1364/BOE.432365. URL https://opg.optica.org/boe/abstract.cfm?URI=boe-12-10-6422. Luke Tierney. Markov Chains for Exploring Posterior Distributions. *The Annals of Statistics*, 22(4):1701–1728, 1994. ISSN 00905364. URL http://www.jstor.org/stable/2242477. Calin-Rares Turliuc, Nataly Maimari, Alessandra Russo, and K. Broda. On minimality and integrity constraints in probabilistic abduction. 12 2013. ISBN 978-3-642-45220-8. doi: 10.1007/978-3-642-45221-5_51. Alan A Wisler, Visar Berisha, Dennis Wei, Karthikeyan Natesan Ramamurthy, and Andreas Spanias. Empirically-Estimable Multi-Class Classification Bounds. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2594–2598, 2016. URL https://api.semanticscholar.org/ CorpusID:10821499. David H. Wolpert and David B. Kinney. A Stochastic Model of Mathematics and Science. *Foundations of* Physics, 54(2), April 2024. ISSN 1572-9516. doi: 10.1007/s10701-024-00755-9. URL http://dx.doi.org/ 10.1007/s10701-024-00755-9. Daniel Yekutieli. Identification of Correlated Damage Parameters Under Noise and Bias Using Bayesian Inference. *Journal of the Royal Statistical Society. Series B (Statistical Methodology)*, 74(3):515–541, 2012. ISSN 13697412, 14679868. URL http://www.jstor.org/stable/41674641. Chi Zhang, Baoxiong Jia, Song-Chun Zhu, and Yixin Zhu. Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution. *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition* (CVPR), pp. 9731–9741, 2021. URL https://api.semanticscholar.org/CorpusID:232380362. ## Appendix: Proofs Proposition 4.1. Assuming that every Mi ∈ M lies in respective q-percentile confidence interval [li, ui] ∈ U, the max posterior is bounded by u and l. Proof. We prove that the maximum posterior probability Mmax ∈ M is in the interval [*l, u*] (4.1) by way of contradiction. Assume Mmax ∈/ [*l, u*]. Then, knowing that *|M| ≥* 2, one of the following is true: Case 1: Mmax > u. Since u is defined as the maximum of all posterior upper bounds, we reach a contradiction if Mmax > u as Mmax would not be in M. Case 2: Mmax < l. If Mmax < l, then since l is defined as the highest lower bound, there must exist an Mi ∈ M such that Mi ̸= Mmax whose lower bound li ≥ l. If this is the case, Mi > Mmax since Mi ≥ li, li ≥ l, and l > Mmax. Mmax would not be the maximum posterior, resulting in a contradiction. Both possibilities result in contradiction, so Mmax ∈/ [*l, u*] is not true. Thus, the maximum posterior is bounded by u and l, or Mmax ∈ [*l, u*]. Theorem 4.2. Let M′ ⊆ M denote the set of posteriors whose confidence intervals intersect with [l, u]. The probability that Mi ∈ M′*is the maximum posterior is as follows:* $$\operatorname*{Pr}(I s M a x(M_{i}))=\int_{l}^{u}P\left(M_{i}=x,\bigcap_{\begin{array}{c}{{M_{j}\in{\mathcal{M}}^{\prime},}}\\ {{C_{j}\neq C_{i}}}\end{array}}(M_{j}<x)\right)d x.$$ Proof. Via Proposition 4.2.1, the max posterior is bounded by u and l. So, if x is the value of Mi and Mi is the maximum posterior, l ≤ x ≤ u. For Mi to be the maximum posterior value with a value of x, both Mi = x and Mj < x for all j ̸= i. So, we express Pr(IsMax(Mi)) as an integral of joint probabilities: $$\operatorname*{Pr}(\operatorname{IsMax}(M_{i}))=\int_{l}^{u}\operatorname*{Pr}\left(M_{i}=x,\bigcap_{j=1,j\neq i}^{k}(M_{j}<x)\right)d x.$$ $\square$ Theorem 4.6. Let γi denote the error rate of selected cause Ci when assuming posterior Mi*lies in confidence* interval [li, ui] almost surely. Then, γi *is bounded above by* $$\gamma_{i}\leq\epsilon_{u p p e r}\operatorname*{Pr}(I s M a x(M_{i}))+(1-l_{i})(1-\operatorname*{Pr}(I s M a x(M_{i})))$$ where ϵupper may be derived by Definition 4.4 Proof. Let Pr(Wi) be the probability we produce a wrong abduction given we selected cause Ci (this is synonymous with the "error rate" of selecting cause Ci). By the law of total probability, $$\Pr(W_{i})=\Pr(W_{i},i)$$ Pr(Wi) = Pr(Wi,IsMax(Mi)) + Pr(Wi, ¬IsMax(Mi)). We first discuss the case where the maximum posterior is selected (i.e., IsMax(Mi) holds). Here, Pr(Wi| IsMax(Mi)) is given by Bayes error rate, which is upper bounded by ϵ*upper* (Definition 4.4). Thus, $\Pr(W_{i},\text{IsMax}(M_{i}))=\Pr(W_{i}|\text{IsMax}(M_{i}))\Pr(\text{IsMax}(M_{i}))$ $\leq\epsilon_{upper}\Pr(\text{IsMax}(M_{i}))$. In the case that the highest posterior is not selected, the probability that we result in a false inference is given by 1 − Pr(Ci|x), where Pr(Ci|x) is the theoretical true posterior. Let li denote the lower bound of Pr(Ci|x), which we assume to bound the true posterior almost surely. Thus, $$\Pr(W_{i},\neg\text{IsMax}(M_{i}))=\Pr(W_{i}\mid\neg\text{IsMax}(M_{i}))\Pr(\neg\text{IsMax}(M_{i}))$$ $$=(1-\Pr(C_{i}\mid\mathbf{x}))(1-\Pr(\text{IsMax}(M_{i})))$$ $$\leq(1-l_{i})(1-\Pr(\text{IsMax}(M_{i}))).$$ We combine our bounds to obtain Pr(Wi) ≤ ϵ*upper* Pr(IsMax(Mi)) + (1 − li)(1 − Pr(IsMax(Mi))). We relabel γi as the error rate of selected cause Ci: $$\gamma_{i}\leq\epsilon_{u p}$$ $\text{i))}+(1$ ... γi ≤ ϵ*upper* Pr(IsMax(Mi)) + (1 − li)(1 − Pr(IsMax(Mi))). Theorem 4.7. Let γi denote the error rate of selected cause Ci when assuming posterior Milies in confidence interval [li, ui] almost surely. Then, γi *is bounded below by* γi ≥ ϵ*lower* Pr(*IsMax*(Mi)) + (1 − ui)(1 − Pr(*IsMax*(Mi))) where ϵlower *may be derived by Definition 4.5* Proof. Let Pr(Wi) denote the probability we produce a wrong abduction given we selected cause Ci (this is synonymous with the "error rate" of selecting cause Ci). By the law of total probability, Pr(Wi) = Pr(Wi,IsMax(Mi)) + Pr(Wi, ¬IsMax(Mi)). We first explore the case where the highest posterior is selected (i.e., IsMax(Mi) holds). Here, Pr(Wi| IsMax(Mi)) is given by the Bayes error, which is lower-bounded ϵ*lower* from Definition 4.5. $\Pr(W_{i},\text{IsMax}(M_{i}))=\Pr(W_{i}\mid\text{IsMax}(M_{i}))\Pr(\text{IsMax}(M_{i}))$ $\geq\epsilon_{lower}\Pr(\text{IsMax}(M_{i}))$ If the cause we have selected does not have the maximum posterior (i.e., IsMax(Mi) does not hold), the probability that we result in a false inference is given by 1 − Pr(Ci|x), where Pr(Ci|x) is the theoretical true posterior. Let ui denote the upper bound of Pr(Ci|x) so that 1 − uilower bounds 1 − Pr(Ci|x). Thus, we can derive the lower bound $$\Pr(W_{i},\neg\text{IsMax}(M_{i}))=(1-\Pr(C_{i}\mid\mathbf{x}))(1-\Pr(\text{IsMax}(M_{i})))$$ $$\geq(1-u_{i})(1-\Pr(\text{IsMax}(M_{i}))).$$ We combine the lower bounds of both components to obtain Pr(Wi) = Pr(Wi,IsMax(Mi)) + Pr(Wi, ¬IsMax(Mi)) ≥ ϵ*lower* Pr(IsMax(Mi)) (1 − ui)(1 − Pr(IsMax(Mi))). We then relabel γi for error rate Pr(Wi): γi ≥ ϵ*lower* Pr(IsMax(Mi)) + (1 − ui)(1 − Pr(IsMax(Mi))). Theorem 4.8. Let q kbe the probability that all Mi ∈ M lie in their respective confidence bounds [li, ui] ∈ U. Let γi, upper be the upper bound of γi defined in Theorem 4.6. Then, the upper bound of the general error rate is given by $$\operatorname*{Pr}(W)\leq1-q^{k}(1-\gamma_{i,\;u p p e r}).$$ $$\mathbf{x}(M_{i}))).$$ $${\mathrm{i}}_{i})[\,1\,-\,1\,]$$ $$|J|$$ Proof. Let CIi be shorthand for the posterior confidence interval [li, ui] ∈ U containing posterior Mi ∈ M. By the law of total probability, $$\Pr(W)=\Pr(W,\forall(M_{i}\in\mathcal{M}),M_{i}\in CI_{i})\ +\Pr(W,\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i})$$ $$=\Pr(W\mid\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})\ \cdot\Pr(\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})\ +$$ $$\Pr(W\mid\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i})\ \cdot\Pr(\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i}).$$ Note that the true posterior values are fixed and not random, but their estimates and confidence intervals (based on sampled data) are random. Given the true posterior values, data is generated from which confidence intervals are constructed and point estimates taken. The true posterior values thus act as parameters in a parameter estimation task. Given the value of such a parameter, the probability that a generated dataset and subsequent confidence interval captures the true parameter value is q, which (by d-separation) is conditionally independent of anything else that happens in the world. Specifically, any other parameter's (i.e., posterior's) value does not affect the probability that data generated using *this* parameter's value produces a confidence interval that captures it. All that matters is the specific parameter under which the data is generated. In other words, the probability that a second dataset generated from a different posterior produces a confidence interval that captures this second parameter's true value is **independent** of the outcome of the first data generation event, once we condition on the parameter. This second parameter is indeed given, as we need it to generate the data. Therefore, assuming k confidence intervals are constructed from data conditioned on their true parameter values, the joint probability of all k posterior probabilities being captured simultaneously by their respective q-percent confidence bounds is q k. Thus, ${\rm Pr}(W)={\rm Pr}(W\mid\forall(M_{i}\in{\cal M})M_{i}\in CI_{i})q^{k}\ +{\rm Pr}(W\mid\exists(M_{i}\in{\cal M})M_{i}\not\in CI_{i})(1-q^{k})$. We apply γ*i,upper* from Theorem 4.6, which is the probability of incorrect abduction assuming that all posterior probabilities fall into their respective confidence intervals. Thus, Pr(W | ∀(Mi ∈ M), Mi ∈ CIi) is bounded above by γi,upper. Additionally, we simply upper bound Pr(W | ∀(Mi ∈ M)Mi ̸∈ CIi) by one. Thus, we conclude $$\operatorname*{Pr}(W)\leq\gamma_{i,\mathrm{upper}}q^{k}+(1)(1-q^{k}),$$ or equivalently $$\operatorname*{Pr}(W)\leq1-q^{k}(1-\gamma_{i,\mathrm{upper}}).$$ Theorem 4.9. Let q kbe the probability that all Mi ∈ M lie in their respective confidence bounds [li, ui] ∈ U. Let γi, lower be the lower bound of γi defined in Theorem 4.7. Then, the lower bound of the general error rate is given by $\square$ $$\operatorname*{Pr}(W)\geq\gamma_{i,\;l o w e r}q^{k}.$$ Proof. Let CIi be shorthand for the posterior confidence interval [li, ui] ∈ U containing posterior Mi ∈ M. By the law of total probability, $$\Pr(W)=\Pr(W,\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})\ +\Pr(W,\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i})$$ $$=\Pr(W\mid\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})\ \cdot\Pr(\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})+$$ $$\Pr(W\mid\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i})\ \cdot\Pr(\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i}).$$ Recall that the number of considered causes is |C| = |M| = k. Assuming all k confidence intervals simultaneously capture their respective posterior values with joint probability q k(see discussion in the proof for Theorem 4.8), we obtain $\Pr(W)=\Pr(W\mid\forall(M_{i}\in\mathcal{M})M_{i}\in CI_{i})q^{k}+\Pr(W\mid\exists(M_{i}\in\mathcal{M})M_{i}\not\in CI_{i})(1-q^{k})$. We apply γi,lower from Theorem 4.7, which is the probability of incorrect abduction, assuming all posterior probabilities fall into their respective confidence intervals. Thus, Pr(W | ∀(Mi ∈ M)Mi ∈ CIi) is bounded below by γi,lower. Additionally, we simply lower bound Pr(W | ∃(Mi ∈ M)Mi ̸∈ CIi) by zero. Thus, we conclude $$\mathrm{Pr}(W)\geq\gamma_{\mathrm{i,lower}}q^{k}+(0)(1-q^{k}),$$ or equivalently $$\operatorname*{Pr}(W)\geq\gamma_{i,{\mathrm{lower}}}q^{k}.$$ Theorem 5.3. For each conditionally independent feature x1, . . . , xn, define βi > 0 *such that for all* i = 1*...n*, $$\operatorname*{Pr}(x_{i}|C)=\beta_{i}\operatorname*{Pr}(x_{i}|{\overline{{C}}}).$$ Let β =pn Qn i βi, the geometric mean of the βi. If β > 1*, then* $$\operatorname*{lim}_{n\to\infty}{\frac{\operatorname*{Pr}(x_{1},\ldots,x_{n}|C)}{\operatorname*{Pr}(x_{1},\ldots,x_{n}|{\overline{{C}}})}}=\operatorname*{lim}_{n\to\infty}\beta^{n}=\infty.$$ Proof. If $$\operatorname*{Pr}(x_{i}|C)=\beta_{i}\operatorname*{Pr}(x_{i}|{\overline{{C}}}),$$ then $$\beta_{i}={\frac{\operatorname*{Pr}(x_{i}|C)}{\operatorname*{Pr}(x_{i}|{\overline{{C}}})}}.$$ So, we can write $$\prod_{i}^{n}\beta_{i}=\prod_{i}^{n}{\frac{\operatorname*{Pr}(x_{i}|C)}{\operatorname*{Pr}(x_{i}|{\overline{{C}}})}}={\frac{\prod_{i}^{n}\operatorname*{Pr}(x_{i}|C)}{\prod_{i}^{n}\operatorname*{Pr}(x_{i}|{\overline{{C}}})}}.$$ Since the features are conditionally independent, $$\prod_{i}^{n}\operatorname*{Pr}(x_{i}|C)=\operatorname*{Pr}(x_{1},...,x_{n}|C)$$ and $$\prod_{i}^{n}\operatorname*{Pr}(x_{i}|{\bar{C}})=\operatorname*{Pr}(x_{1},...,x_{n}|{\bar{C}}).$$ Thus, $$\beta^{n}=\prod_{i}^{n}\beta_{i}={\frac{\operatorname*{Pr}(x_{1},...,x_{n}|C)}{\operatorname*{Pr}(x_{1},...,x_{n}|C)}}.$$ Therefore, when β > 1, $$\operatorname*{lim}_{n\to\infty}{\frac{\operatorname*{Pr}(x_{1},...,x_{n}|C)}{\operatorname*{Pr}(x_{1},...,x_{n}|{\bar{C}})}}=\operatorname*{lim}_{n\to\infty}\beta^{n}=\infty.$$ # Appendix: Toy Example And Additional Derivations Of Theorem 4.2 ## Setup We present a toy example computing the bounds in section 4 (namely, Theorems 4.8 and 4.9). In this example, dim(O) = 2 and has features ["pill taken?", "headache relief?"]. We consider three selected possible causes: ["aspirin", "caffeine", "placebo"] and derive abductive error bounds for abducing the observation [1,0]. For simplicity, this example uses normalized posterior probabilities (i.e., Mi = Pr(x|Ci) Pr(Ci) Pr(x)rather than Pr(x|Ci)Pr(Ci)). This produces the same results as using non-normalized posteriors since the ranking of posterior ranges is the same. The setup of the example is as follows. Table 9 displays the evidence distribution over O (Definition 3.1)–the only given distribution assuming exact probabilities. | Pill taken? | Headache relieved? | x | Pr(x) | |---------------|----------------------|--------|---------| | no | no | ⟨0, 0⟩ | 0.3 | | no | yes | ⟨0, 1⟩ | 0.05 | | yes | no | ⟨1, 0⟩ | 0.15 | | yes | yes | ⟨1, 1⟩ | 0.5 | Table 3: Example evidence distribution. Tables 4 and 5 display the q = 0.95 percentile prior and likelihood ranges for each possible cause: "aspirin", "placebo", and "caffeine". | Cause | Pr(Ci) Lower Bound | Pr(Ci) Upper Bound | |----------|----------------------|----------------------| | aspirin | 0.4 | 0.517 | | caffeine | 0.32 | 0.37 | | placebo | 0.1 | 0.113 | | x | CI for Pr(x|do(aspirin = True)) | CI for Pr(x|do(placebo = True)) | CI for Pr(x|do(caffeine = True)) | |--------|-----------------------------------|-----------------------------------|------------------------------------| | ⟨0, 0⟩ | [0.02, 0.058] | [0.0067, 0.067] | [0.64, 0.83] | | ⟨0, 1⟩ | [0.006, 0.014] | [0.004, 0.06] | [0.58, 0.8] | | ⟨1, 0⟩ | [0.13, 0.15] | [0.33,0.4] | [0.011,0.067] | | ⟨1, 1⟩ | [0.4, 0.77] | [0.005, 0.06] | [0.064, 0.14] | Table 4: 95% confidence intervals for the prior of each cause. Table 5: 95% confidence intervals for the likelihood of each cause. ## Calculating Pr(**Ismax**(Mi)) From here, we will abduce the cause of observation x = [1, 0] ("pill taken", "headache not relieved"). The posterior confidence bounds for this observation are displayed in ??. Note that for this toy example, we assume uniform distributions within posterior confidence intervals for easy visualization, but our methods and code support any bounded distribution. Recall that Pr(IsMax(Mi)) is the probability that Miis the maximum posterior *assuming* that all other posteriors Mi ∈ M lie in their q-percentile confidence intervals [li, ui]. Theorem 4.2 presents a general method of describing this value, making no assumptions of the positions of the upper/lower posterior bounds. We derive two directly computable versions of Theorem 4.2 with varying constraints. ![21_image_0.png](21_image_0.png) Figure 2: Posterior probability ranges for observation [1,0] (pill taken, headache not relieved). If the sum of the upper bounds for each posterior does not exceed 1 (or, for non-normalized posteriors, Pr(x)), the posteriors of selected causes are independent6, leaving any remaining probability to the (dependent) last cause, C*other* with posterior M*other*. This is the combined (likely small) posterior probability that any cause not in the selected set C is the true cause of x. Representing each Mi ∈ M as a random variable (rv), then M*other* = 1 −PMi̸=M*other* Mi where all Mi ≠ M*other* are independent of each other. From here, we will denote any M1 *. . . M*N as the posteriors not describing C*other* for notational simplicity. Let f denote the joint probability density function of all M1 *. . . M*N , and let fj (mj ) denote the (give) marginal distribution of rv Mj . One brute-force approach of computing theorem 4.2 is to treat Pr(IsMax(Mi)) as the expected value of the following indicator function. Let mj denote the specific value taken by any posterior Mj when computing the integral. Then, let the the following indicator function denote whether miis the maximum posterior among all other posteriors' set values and the probability of m*other* = 1 = Pj mj . $\mathbf{1}_{m_{i}}(m_{1},\ldots,m_{N})=\begin{cases}1&\text{if}m_{i}=\max\{m_{1},\ldots m_{N},1-\sum_{j}m_{j}\}\\ 0&\text{otherwise}\end{cases}$ Intuitively, the expected value of this indicator function is similar to tallying up the total number of configurations of m1, . . . mN , where miis the maximum, and then dividing it by the total number of configurations of m1, . . . , mN . This is equivalent to the probability that random variable Miis the maximum. 6If the sum of normalized posterior upper bounds exceeds 1, then posteriors cannot be treated as independent random variables because the configurations where their sum exceeds 1 are invalid with probability 0. A way to compute Pr(IsMax(Mi)) when posterior upper bounds exceed 1 is through random sampling over the given distributions of Mi ∈ M, ignoring instances where the sum of posteriors exceeds 1. $$\Pr(\text{IsMax}(M_{i}))=\int_{l_{1}}^{u_{1}}\cdots\int_{l_{N}}^{u_{N}}\mathbf{1}_{m_{1}}(m_{1},\ldots,m_{N})f(m_{1},\ldots,m_{N})\,dm_{N}\cdots\,dm_{1}$$ $$=\int_{l_{1}}^{u_{1}}\cdots\int_{l_{N}}^{u_{N}}\mathbf{1}_{m_{i}}(m_{1},\ldots,m_{N})\prod_{k=1}^{N}f(m_{k})\,dm_{N}\cdots dm_{1}$$ Furthermore, if we can guarantee that M*other* cannot be the highest posterior when Mi ∈ M lie their q-percent bounds (i.e., there exists at least one posterior lower bound li such that li > 1 −Pj lj ), then we can apply a more efficient computable derivation of Theorem 4.2. Note that M*other* would likely have this property for any problem with a non-trivial number of causes. $$\Pr(\text{IsMax}(M_{i}))=\int_{1}^{n}P\left(M_{i}=x,\prod_{j\neq i}^{N}(M_{j}<x)\right)dx\quad\text{(Theoremand)}$$ $$=\int_{1}^{n}P(M_{i}=x)P\left(\bigcap_{j=1}^{N}M_{j}\leq M_{i}\Big{|}M_{i}=x\right)dx\quad\text{(``and''rule)}$$ $$=\int_{1}^{n}P(M_{i}=x)P\left(\bigcap_{j=1}^{N}M_{j}\leq x\Big{|}M_{i}=x\right)dx$$ $$=\int_{1}^{n}P(M_{i}=x)\prod_{j\neq i}^{N}P\left(M_{j}\leq x\Big{|}M_{i}=x\right)dx\quad\text{(Independence of posteriors}M_{1}\ldots M_{N})$$ If we have $l_{i}>x$ for some $i\neq i$, then and $M_{i}$ is guaranteed to be greater than $x$, and so $M_{i}$ cannot be the If we have lj > x for some j ̸= i, then and Mj is guaranteed to be greater than x, and so Mi cannot be the maximum posterior at Mi = x. Otherwise, we can find the probability that Mj < x by integrating over the probability density function of Mj , fj . This leaves the following computable form. $$\operatorname*{Pr}(\operatorname{IsMax}(M_{i}))=\int_{l}^{u}P(M_{i}=x)\prod_{j\neq i}^{N}\left[\mathbf{1}_{l_{j}\leq x}\int_{l_{i}}^{x}f_{j}(y)d y\right]d x$$ We used the derivation above to compute the following Pr(IsMax(Mi)) values for each selected cause. | Cause | Pr(IsMax(Mi)) | Error | |----------|-----------------|---------| | aspirin | 0.89772 | 1.0e-05 | | caffeine | 0.10227 | 7.8e-10 | | placebo | 0.0 | 0.0 | Table 6: Pr(IsMax(Mi)) for observation [1,0]. Referring to the posterior ranges in Figure ??, notice that "aspirin", as the rightmost posterior range, is the most likely to be the maximum posterior when all posteriors are within their confidence intervals. This is consistent with the calculation above, as aspirin has the highest probability of being the maximum posterior. Also, notice from Figure ?? that "caffeine" can never the greatest posterior as its upper bound is much lower than the lower bound of any other cause. As expected, this Pr(IsMax(caffeine)) is calculated to be zero with no error. ## Bayes Error Rate, Γi**, And Final Bounds** Next, we find Bayes Error Rate (BER). This example is small enough to directly calculate lower and upper bounds of BER with the summation form of Definition 4.3 (Sekeh et. al.) without the use of estimation techniques. (For more complex problems with many causes |O| = 2|C| may be very large, so it is likely more practical apply the estimation techniques while treating the observations as continuous). We found Bayes' rate to be bounded by ϵ*lower* = 0.23 and ϵ*upper* = 0.5667 for the observation [1,0]. Now that we have ϵlower, ϵ*upper*, and the Pr(IsMax(Mi)) values for all possible causes, we can follow section 4.3 to find the final general error rate bounds. We can now calculate γ*i,upper* and γ*i,lower* per Theorems 4.7 and 4.6 for each cause - the error rate when posterior Miis chosen, given the assumption that posteriors lie in their 95% confidence intervals. $$\gamma_{i,u p p e r}=\epsilon_{\mathrm{upper}}\,\mathrm{Pr}(\mathrm{IsMax}(M_{i}))+(1-l_{i})(1-\mathrm{Pr}(\mathrm{IsMax}(M_{i}))),$$ $$\gamma_{i,l o w e r}=\epsilon_{\mathrm{l o w e r}}\,\mathrm{Pr}(\mathrm{IsMax}(M_{i}))+(1-u_{i})(1-\mathrm{Pr}(\mathrm{IsMax}(M_{i})))$$ Next, we calculate upper and lower bounds for this error rate for all three possible causes: | Posterior | γi,lower | γi,lower | |-------------|------------|------------| | placebo | 0.562 | 0.656 | | aspirin | 0.254 | 0.575 | | caffeine | 0.933 | 0.989 | Table 7: γi bounds for observation [1,0]. As expected, aspirin (as the highest posterior range in Figure ??) has the lowest error rate assuming posteriors lie in confidence intervals. Caffeine (as the lowest posterior range) has the highest error rate - i.e., choosing caffeine as the cause for observation [1,0] ("pill taken" and "no headache relief") is most likely to be wrong. We now compute the final bounds per Theorems 4.9 and 4.8. Where W is the event of "wrong abduction", Pr(W) ≤ 1 − q N (1 − γi,upper) and Pr(W) ≥ γi,lowerq N . | Cause | Pr(W) lower bound | Pr(W) upper bound | |----------|---------------------|---------------------| | placebo | 0.562 | 0.656 | | aspirin | 0.254 | 0.575 | | caffeine | 0.933 | 0.989 | Table 8: General abductive error bounds for observation [1,0]. As expected, "aspirin" has the lowest error rate rate, followed by "placebo" and then "caffeine". ## Results Summary | x | γi (aspirin) | γi (placebo) | γi (caffeine) | Pr(W) (aspirin) | Pr(W) (placebo) | Pr(W) (caffeine) | |--------|----------------|----------------|-----------------|-------------------|-------------------|--------------------| | ⟨0, 0⟩ | [0.9, 0.973] | [0.93,0.99] | [0.23,0.57] | [0.772,0.977] | [0.8,0.994] | [0.197, 0.628] | | ⟨0, 1⟩ | [0.86,0.952] | [0.94,0.996] | [0.23, 0.567] | [0.737,0.959] | [0.806,0.997] | [0.197, 0.628] | | ⟨1, 0⟩ | [0.254,0.575] | [0.562,0.656] | [0.933,0.989] | [0.218, 0.8] | [0.482, 0.705] | [0.80,0.99] | | ⟨1, 1⟩ | [0.223, 0.567] | [0.94,0.995] | [0.86,0.936] | [0.197,0.629] | [0.806,0.996] | [0.737,0.945] | Table 9 displays the computed bounds for γi and Pr(W) for every cause and observation. Figure 3 displays the corresponding posterior confidence interval ranges for each observation. Table 9: Summary of bounds for every observation ![24_image_1.png](24_image_1.png) ![24_image_0.png](24_image_0.png) Figure 3: Posterior confidence ranges for each observation.
Review 1: Summary: The paper combined Bayesian decision theory and algorithmic search framework for a quantitative formalization of abductive reasoning. Two settings are considered - selecting the single mostly likely abductive cause while assuming no noise in the observations, and selecting causes above a threshold of probability when the observations are noisy. The proposed approach only assumes percentle-based CIs for the underlying prior and likelihood distributions. Strengths and Weaknesses: Strengths: - Taking into account the noise in observations makes the abductive inference formulation more practical. - To account for uncertainty, the paper estimates likelihood, prior, and posterior probabilities through confidence intervals. - The formulation using Bayes multiclass error rate (BER), and the adoption of upper bounding BER through global minimal spanning trees is interesting. - Theorem 5.1 on probability of successful abduction is very insightful. It formalizes the intuition that when the true cause is highly correlated with the observations (i.e., less random), the achievable success rate is high. Weaknesses: - The approach is restricted to selective abductive inference where the cause is to be selected from a fixed set. At this point, the problem is akin to classification in the standard ML setting where one has to map from observations to one of the k classes/causes. Is there anyway to extend this approach to "creative" abductive inference? - Definition 3.1 says that "Let O denote the vector space with discrete topology containing all binary-featured observation vectors whose components indicate the existence or non-existence of some observed outcome. O contains 2^dim(O) possible observation vectors." What if the observation is actually a continuous valued variable such as temperature? - "we assume that some finite subset C ⊂ S with cardinality k = |C| has been pre-selected as the finite set of plausible causes assumed to contain the true cause. We further assume C includes a “cause” C_other, whose posterior encapsulates the (likely low) combined probability of all other causes in S occurring. With this, we assume that all causes in C are disjoint and that C contains the one true explanation for observation x." . It is okay to assume that the set of causes are finite and one could also somehow ensure that the causes are disjoint but then one would have to allow presence of multiple concurrent causes - the assumption that there is "one true explanation" for observation from the set of finite disjoint causes appears very restrictive. - The paper lacks any experimental evaluation to understand the utility and scalability of the proposed approach, and also critically examine the assumptions in the problem formulation. Requested Changes: * Much of the introductory material in the first 5 pages can be shortened. * Could you please add discussion to address the weaknesses mentioned above? Broader Impact Concerns: There are no broader impact concerns. ================================================== Review 2: Summary: The goal of this paper is to "establish two novel sets of probabilistic bounds on the success of abduction when (1) selecting the single most likely cause while assuming noiseless observations, and (2) selecting any cause above some probability threshold while accounting for noisy observations." Noisy observations are dealt with using percentile-based confidence intervals. Strengths and Weaknesses: The writing is generally good and clear. I'm OK with a paper devoted to abduction appearing in a machine learning journal. However, the technical and concceptual contribution is far from being sufficiently significant for publication in TMLR. In addition there are a number of specific problems with the paper. The paper could be condensed. There is a lot of text used to define / analyse quite elementary things. Section 3.1 could be replaced by a sentence which states that outcomes are binary vectors. Much of Section 3.3 is elementary probability theory. SPECIFIC PROBLEMS It is stated that maximum likelihood estimation (MLE) "optimize[s] model parameters by maximizing the posterior probability". But MLE very much does not maximise posterior probability - it is a non-Bayesian method with non-Bayesian justification. Also to say that both MLE and MAP estimation "apply abductive reasoning" is to generalise the meaning of "abductive reasoning" too much. In any discussion of (probabilistic) causes one should at least mention the distinction between P(x|aspirin=True) and P(x|do(aspirin=True)) and related work on the do-calculus but this is not done here. Doing so would have improved Section 3.2. The ASF "characterizes learning problems as search" which is fine. But there is a lot of work analysing this characterisation dating back to "Mitchell, T. M. (1982). Generalization as search. Artificial Intelligence, 18(2), 203–226." Perhaps (Montañez, 2017) relates ASF to this body of work, but it would also be good here to state that characterising learning as search is nothing new and what is particularly useful about the ASF approach. Section 5.1 : We are promised a definition of F but we don't get one, merely that it "embeds" x and confidence intervals. Since we have I(T;F) in Theorem 5.1 we might assume that T and F are both random variables even though F somehow contains confidence intervals and T is explicitly stated to be a subset (of the search space). No proof of Theorem 5.1 is supplied, so we the reader cannot investigate F further (presumably the proof is in Montañez, 2017). SMALL POINTS In Table 1, I think y^m and x^m need swapping since x^m is a (possible) "cause" of observing y^m. Requested Changes: The issues mentioned under "SPECIFIC PROBLEMS" need addressing. However, even if they were all fixed I think it improbable that a revised version (as opposed to something amounting to a new paper) would be acceptable. Broader Impact Concerns: None ================================================== Review 3: Summary: The article considers a sort of probabilistic abduction. It proposes an approach combining Bayesian probability and the Algorithmic Search Framework and defining different bounds for the probability of correct abductions. Strengths and Weaknesses: Strengths 1. The presented approach, if better described, could be interesting and give a good contribution to the abduction field. 2. Presence of proofs in the appendix. Weaknesses 1. The novelty of the paper is not always well framed and described. 2. The Related Works Section is a bit incomplete. 3. The article does not always explain concepts clearly. 4. The paper should be carefully checked to correct missing references. Requested Changes: About weaknesses (all important regarding the acceptance of the article): 1. Novelty. Most of the results described in Section 5 are based on what is presented by (Montanez, 2017), but in my opinion the paper does not clearly describe which results it adds to these. 2. Related work section. In Section 2.3, the paper mentions Abductive logic Programming and Probabilistic Horn Abduction but I think some work on Probabilistic Abductive logic Programming is missing. For example, I can list: T. Calin-Rares, M. Nataly, R. Alessandra, B. Krysia, On minimality and integrity constraints in probabilistic abduction, in: Logic for Programming, Artificial Intelligence, and Reasoning, Springer, 2013, pp. 759–775. or the more recent work Damiano Azzolini, Elena Bellodi, Stefano Ferilli, Fabrizio Riguzzi, and Riccardo Zese. Abduction with probabilistic logic programming under the distribution semantics. International Journal of Approximate Reasoning, 142:41--63, 2022. 3. Clarity of presentation. There are some parts that in my opinion should be better explained. For example, Section 3.2 does not clearly define the size of $\mathbf{x}$ and whether the size depends on $\mathcal{O}$. Furthermore, it is not clear how the probability mass is distributed and thus how the probability distribution is defined. For example, the paper should better clarify whether the probability distribution presented in Table 2 depends only on $\mathbf{x}$ and $C_i$ or whether it is fixed in the set of observations. Furthermore, I would suggest introducing a running example (it could be the one in Section 3.2) on which to calculate the different bounds. This would improve the clarity of the presentation of the concepts. 4. Missing references. There are several references to concepts or definitions/theorems not present in the paper, which are probably a remaining from a previous, possibly extended, version of the submitted paper. For example, on page 9 the paper mentions Definitions 4.2.2 and 4.2.3 which are actually 4.4 and 4.5. The same problem occurs on page 10 for Definitions 3.4.1 and 4.1.1, which do not exist in the paper. Or in Section 6, where reference is made to small neural networks that are not mentioned elsewhere in the paper. For this last point in particular, it would be useful to discuss the meaning of this reference further. Broader Impact Concerns: I do not see any ethical implication. ================================================== Metareview: Recommendation: Reject Comment: The work put into the revision performed during the review process was appreciated by all reviewers and seen as a step in the right direction. However, in its current form, all three recommend a rejection, especially since its empirical evaluation is still lacking. To summarize their reasoning the two main critics are: 1. Insufficient experimental evaluation to back the claims of the paper, 2. Some remaining doubts about the clarity and contribution of the work. As our main focus at TMLR is on interest over pure novelty, I do not consider the second sufficient for rejection, but the first remains valid. To be relevant to the community the claims and proposals should be accompanied by strong empirical evidence. The consensus is that this would require a major revision. As such I recommend rejection with the strong encouragement of a resubmission of a major revision, after adding a new set of experiments that better highlight the contribution and provide sufficient evidence for it to be of interest to the community. ==================================================
# The Impact Of Reinitialization On Generalization In Convolutional Neural Networks Anonymous authors Paper under double-blind review ## Abstract We study the impact of different reinitialization methods in several convolutional architectures for small-size image classification datasets. We analyze the potential gains of reinitialization and highlight limitations. We also study a new layerwise reinitialization algorithm that outperforms previous methods and suggest explanations of the observed improved generalization. First, we show that layerwise reinitialization increases the margin on the training examples without increasing the norm of the weights, hence leading to an improvement in margin-based generalization bounds for neural networks. Second, we demonstrate that it settles in flatter local minima of the loss surface. Third, it encourages learning general rules and discourages memorization by placing emphasis on the lower layers of the neural network. ## 1 Introduction Deep neural networks demonstrate state-of-the-art performance on many classification tasks. While often highly overparameterized, modern deep architectures exhibit a remarkable ability to generalize beyond the training sample even when trained without an explicit form of regularization (Zhang et al., 2017). A large body of work has been devoted to offering insights into this "benign" overfitting phenomenon, including explanations based on the margin (Neyshabur et al., 2015; Bartlett et al., 2017; Neyshabur et al., 2017; Arora et al., 2018; Soudry et al., 2018), the curvature of the local minima (Keskar et al., 2017; Chaudhari et al., 2019; Neyshabur et al., 2020), and the speed of convergence (Hardt et al., 2016), among others. Recently, however, a number of works suggest that generalization in convolutional neural networks (CNNs) could be improved further using reinitialization. Precisely, let w ∈ R d be a vector that contains all of the parameters in a neural network (e.g. filters in convolutional layers and weight matrices in fully-connected layers). Let s ∈ {0, 1} d be a binary mask that is generated at random according to some probability mass function. Then, "reinitialization" is selecting a subset of parameters and reinitializing them during training: $$\mathbf{w}\ \leftarrow\ (1-\mathbf{s})\odot\mathbf{w}\ +\ \mathbf{s}\odot\eta,$$ w ← (1 − s) w + s η, (1) where is an element-wise multiplication and η is a random initialization of the model parameters. For example, η may correspond to the weights of He or Xavier initializations (He et al., 2015; Glorot & Bengio, 2010). In the following, we refer to the update in (1) as a "reinitialization round." Reinitialization methods differ in how the binary mask s is selected. Four prototypical approaches are: - **Random subset**: A random subset of the parameters of a fixed size is chosen uniformly at random in each round. This includes, for example, the random weight level splitting (welsr) method studied in (Taha et al., 2021), in which about 20% of the parameters are selected for reinitialization. - **Weight magnitudes**: The smallest parameters in terms of their absolute magnitudes are reinitialized at each round. This can be interpreted as a generalization to the dense-sparse-dense (dsd) workflow of Han et al. (2017) in which reinitialization occurs only once. - **Fixed subset**: A subset is chosen at random prior to training and is fixed afterwards. It corresponds to the weight level splitting (wels) method of Taha et al. (2021). ![1_image_0.png](1_image_0.png) Figure 1: Given a deep neural network starting with K convolutional blocks followed by other layers, lw proceeds sequentially from bottom to top (see Algorithm 1). When in block k (e.g. k = 2 in the figure above), the weights of all early blocks {1*, . . . , k*} are rescaled while subsequent layers are reinitialized. In addition, a normalization layer is inserted following block K shown in red. Importantly, the network may contain other normalization layers within each block, such as batch or layer normalization (Ioffe & Szegedy, 2015; Ba et al., 2016). Red layers correspond to the normalization layers inserted by lw, which are fixed (non-trainable) at each round. - **Fully-connected layers**: Only the last fully-connected layers are reinitialized. This includes, for example, the method proposed in (Li et al., 2020). In (Zhao et al., 2018), only the classifier head is reinitialized. We denote these four methods as welsr, dsd, wels, and fc, respectively. Moreover, we denote the baseline method of training once until convergence as bl. In this paper, we also study a new reinitialization algorithm, which we denote as lw for its LayerWise approach. The new algorithm is motivated by the common observation that lower layers in the neural network tend to learn general rules while upper layers specialize (Yosinski et al., 2014; Arpit et al., 2017; Raghu et al., 2019; Maennel et al., 2020; Baldock et al., 2021). While all reinitialization methods improve generalization in CNNs, we demonstrate in Section 3 that lw often outperforms the other methods. It encourages learning general rules by placing more emphasis on training the early layers of the neural network. A more formal statement is presented in Section 3. - **Layerwise**: A convolutional neural network is partitioned into K blocks (see Figure 1 and Algorithm 1). At round k, the parameters at the lowest k blocks are rescaled back to their original norm during initialization (see Algorithm 1) while the rest of the network is reinitialized. In addition, a new normalization layer is inserted/updated following block k. This is repeated for a total of N ≥ 1 iterations for each block. It is worth noting that fc is a special case of lw, in which K = 1 and N > 1. In addition, the concurrent work of Zhou et al. (2021) also corresponds to K = 1 where the upper L layers are reinitialized at each round for some fixed L > 1. Besides the prominent role of reinitialization, lw includes normalization and rescaling, which we show in an ablation study in Appendix E to be important. In Appendix A, we discuss why lw can be interpreted as a stochastic gradient descent (SGD) procedure to a well-defined stochastic loss. Next, we illustrate the basic principles of these reinitialization methods on a minimal example with synthetic data. ## 1.1 Synthetic Data Example Setup. Let x ∈ R 128 be the instance and y be its label, which is sampled uniformly at random from the set {0, 1*, ...,* 7}. For the instances, on the other hand, each of the first 3 coordinates of x is chosen from {−1, 1} to encode the label y in binary form. For example, instances in class 0 would have their first three coordinates as (−1, −1, −1), whereas instances in class 5 would have (1, −1, 1). Consequently, the first three ## Algorithm 1 Pseudocode Of Lw Input: (1) Neural network with identified sequence of K ≥ 1 conv blocks; (2) Training dataset; (3) N ≥ 1. Output: Trained model parameters. Training: 1: Initialize the neural network architecture; 2: For each layer l, compute sl = ||Wl||2, where Wl are the weights of layer l. 3: for k ∈ (1, 2*, . . . , K*) do 4: for n ∈ (1, 2*, . . . , N*) do 5: for j ∈ (1, 2, . . . , k) do *# rescaling* 6: for layer ∈ Block j do 7: Wlayer ← (slayer/||Wlayer||2) · Wlayer 8: **end for** 9: **end for** 10: Pick a batch X of training set uniformly at random; 11: Compute Z: the output of Block k of X; 12: Compute *µ, σ* ∈ R: mean and standard deviation of Z; 13: if n = 1 **then** 14: Insert lambda layer λx : (x − µ)/σ after block k; 15: **else** 16: Update lambda layer with new values of µ and σ; 17: **end if** 18: Reinitialize all layers above block k; 19: Fine-tune the entire model until convergence; 20: **end for** 21: **end for** Table 1: Test accuracy [%] for the synthetic data experiment of Section 1.1 with different signal strengths α and different reinitialization methods. We observe that all initialization methods (with the exception of dsd) improve generalization in this example setting with lw performing best. In addition, reinitialization methods also tend to reduce the variance of the test accuracy. α bl welsr dsd wels fc lw 0.5 20.3 ± 0.6 24.6 ± 1.0 22.9 ± 0.6 23.1 ± 1.4 23.6 ± 3.0 25.2 ± 0.8 1.0 50.7 ± 5.4 72.9 ± 0.9 53.4 ± 0.7 66.1 ± 2.1 68.6 ± 2.1 72.3 ± 3.6 2.0 94.6 ± 2.0 98.2 ± 0.4 90.3 ± 1.4 96.8 ± 0.1 99.0 ± 0.2 99.8 ± 0.2 coordinates of an instance correspond to its "signal." The remaining 125 entries of x are randomly sampled i.i.d. from N (0, 1). Although we focus in this work on convolutional neural networks, we use a multilayer perceptron (MLP) in this synthetic data experiment because the inputs are not images but generic feature vectors. The MLP contains two hidden layers of 32 neurons with ReLU activations (Nair & Hinton, 2010) followed by a classifier head with softmax activations. It optimizes the cross–entropy loss. We train on 256 examples using gradient descent with learning rate 0.05. Methods. Treating every layer as a block, we have K = 3. If 200 training steps are used per round of reinitialization and N = 3, lw trains the model once for 200 steps after which the 2nd and 3rd layers are reinitialized (in addition to rescaling and normalization). This is carried out N = 3 times in the first layer before k is incremented. The same process is repeated on each layer making a total of 200 × N × K = 1, 800 training steps overall. In wels, welsr, dsd, and fc, the model is trained for 200 steps before reinitialization is applied, and this is repeated K × N times for the same total of 1, 800 steps. The baseline method corresponds to training the model once without reinitialization for a total of 1, 800 training steps. Results. When trained for 1,800 steps, the baseline (bl) achieves 100% training accuracy, but only around 51% test accuracy. The large gap between training and test accuracy for such a simple task is reminiscent of the classical phenomenon of overfitting. Note that the number of training examples is 256, which is generally small for 128 features of equal variance. On the other hand, reinitialization improves accuracy as shown in Table 1 even though these reinitialization methods do not have access to any additional data and use the same optimizer and hyper-parameters as baseline training. The training accuracy is 100% in all cases. We also observe that reinitialization tends to reduce the variance of the accuracy (with respect to the seed). In the above experiment, both the signal part (first three coordinates) and the noise part (remaining coordinates) have the same scale (standard deviation 1). We can make the classification problem easier or harder by multiplying the signal part by a signal strength α > 1 or α < 1, respectively. We present the average test accuracy in Table 1 for a selection of values of α with N = 3. Appendix B contains additional results when weight decay is added. ## 1.2 Contribution In this work, we study a new layerwise reinitialization algorithm lw, which often outperforms other methods. We provide two explanations, supported by experiments, for why it improves generalization in convolutional neural networks. First, we show that lw improves the margin on the training examples without increasing the norm of the weights, hence leading to an improvement in known margin-based generalization bounds in neural networks. Second, we show that lw settles in flatter local minima of the loss surface. Furthermore, we provide a comprehensive study comparing previous reinitialization methods: First, we evaluate different methods within the same context. For example, the comparison in (Taha et al., 2021) uses only a single reinitialization round of the dense-sparse-dense approach (dsd) when dsd can be extended to multiple rounds. Also, (Zhao et al., 2018) uses an ensemble of classifiers when reinitializing the fullyconnected layers, which could (at least partially) explain the improvement in performance. By contrast, we follow a coherent training protocol for all methods. Second, we use our empirical evaluation to analyze the effect of the experiment's design, such as augmentation, dropout and momentum. The goal is to determine if the effect of reinitialization could be achieved by tuning such settings. Third, we employ decision tree classifiers to identify when each reinitialization method is likely to outperform others. In summary, we: 1. Study a new reinitialization method, denoted lw, which is motivated by common observations of generalization and memorization effects across the neural network's layers. We show that it outperforms other methods with a statistically significant evidence at the 95% level. 2. Suggest two explanations, supported by experiments, for why lw is more successful at improving generalization in CNNs compared to other methods. 3. Present a comprehensive evaluation study of reinitialization methods covering more than 1,000 experiments for four convolutional architectures: (1) simplified CNN, (2) VGG16 (Simonyan & Zisserman, 2015), (3) MobileNet (Howard et al., 2017) and (4) ResNet50 (He et al., 2016a). We conduct the evaluation over 6 benchmark image classification datasets of small size (up to 12,000 training examples per dataset). We do not observe consistent gains of reinitialization with large datasets, so we omit those from the comparison and focus on the small-data regime. ## 2 Related Work Reinitialization. As stated earlier, a number of works suggest that reinitializing a subset of the neural network parameters during training can improve generalization. This includes, the dense-sparse-dense (dsd) training workflow proposed by Han et al. (2017), in which reinitialization occurs only once during training. However, as the authors argue, the improvement in accuracy in dsd could be attributed to the effect of introducing sparsity, not reinitialization. Another example is "Knowledge Evolution", including weight level splitting (wels) and its randomized version (welsr) (Taha et al., 2021). It was noted that wels outperformed welsr, which agrees with our observations. Finally, some recent works propose to reinitialize the fully-connected layers only (Li et al., 2020; Zhao et al., 2018). In particular, reinitializing the last layer several times and combining the models into an ensemble can improve performance (Zhao et al., 2018). However, the improvement in accuracy could (at least partially) be attributed to the ensemble of predictors, not to reinitialization *per se*. For fair comparison, we extend dsd to multiple rounds of reinitialization and do not use an ensemble of predictors. Generalization Bounds. Several generalization bounds for neural networks have been proposed in the literature. Of those, a prototypical approach is to bound the generalization gap by a particular measure of the *size of weights* normalized by the *margin* on the training set. Examples of measures of the size of weights include the product of the `1 norms (Bartlett, 1998) and the product of the Frobenius norms of layers (Neyshabur et al., 2015), among others (Bartlett et al., 2017; Neyshabur et al., 2017; Arora et al., 2018). While such generalization bounds are often loose, they were found to be useful for ranking models (Neyshabur et al., 2017). The fact that rich hypothesis spaces could still generalize if they yield a large margin over the training set was used previously to explain the performance of boosting (Schapire et al., 1997). In Section 4, we show that lw boosts the margin on the training examples without increasing the size of the weights. Flatness of the Local Minimum. Another important line of work examines the connection between generalization and the curvature of the loss at the local minimum (Keskar et al., 2017; Neyshabur et al., 2017; Foret et al., 2021). Deep neural networks are known to converge to local minima with sparse eigenvalues (>94% zeros) in their Hessian (Chaudhari et al., 2019). Informally, a flat local minimum is robust to data perturbation, and this robustness can, in turn, be connected to regularization (Bishop, 1995). In fact, some of the benefits of transfer learning were attributed to the flatness of the local minima (Neyshabur et al., 2020). For a precise treatment, one may use the PAC-Bayes framework to derive a generalization bound that comprises of two terms: (1) sharpness of the local minimum, and (2) the weight norm over noise ratio (Neyshabur et al., 2017). Similar terms also surface in the notion of "local entropy" (Chaudhari et al., 2019). We show in Section 4 that lw improves both terms. Generalization vs. Memorization. Several works point out that early layers in a neural network tend to learn general-purpose representations whereas later layers specialize, e.g. (Raghu et al., 2019; Arpit et al., 2017; Yosinski et al., 2014; Maennel et al., 2020). This can be observed, for instance, using probes, in which classifiers are trained on the layer embeddings. As demonstrated in (Cohen et al., 2018) and (Baldock et al., 2021), deep neural networks learn to separate classes at the early layers with real labels (generalization) but they only separate classes at later layers when the labels are random (memorization). One explanation for why lw improves generalization is that it encourages learning general rules at early layers and discourages memorization at later layers. ## 3 Analysis 3.1 Empirical Study We begin by evaluating the performance of the five reinitialization methods discussed in Section 1 for four convolutional architectures on 6 small-size benchmark image classification datasets (see Table 4 for details). Appendix F summarizes related experiments on CIFAR10 and CIFAR100. All images are resized to 224×224. The architectures are (1) simplified CNN, (2) VGG16 (Simonyan & Zisserman, 2015), (3) MobileNet (Howard et al., 2017) and (4) ResNet50 (He et al., 2016a). We denote these by scnn, vgg16, mobilenet, and resnet50, respectively. We use He-initialization (He et al., 2015) unless stated otherwise. Details about each architecture and the hyper-parameters are provided in Appendix C. To recall, every reinitialization method trains the same model on the same dataset for several rounds. After each round, a binary mask of the model parameters is selected according to the reinitialization criteria and the update in Eq. (1) is applied for some random initialization η. After that, the model is fine-tuned on the same data. Blocks in lw correspond to the standard blocks of the architecture (e.g. a block in Figure 1 would correspond to either an identity or a convolutional block in ResNet50). Also, 10% of the training split is reserved for validation, which is used for early stopping in all methods. To evaluate the relative performance of the reinitialization methods, we perform a set of experiments in which we fix the hyperparameters for all architectures and datasets to the same values. The hyperparameters were Table 2: Test accuracy results [%] for the five reinitialization methods on the six benchmark datasets: Oxford-IIIT (Parkhi et al., 2012a), Stanford Dogs (Deng et al., 2009), Cars (Krause et al., 2013b), Caltech101 (Fei-Fei et al., 2004b), Cassava (Mwebaze et al., 2019a), and Caltech-UCSD Birds 200 (Welinder et al., 2010a). Values in **bold**/underlined are the **best**/second-best results. The symbols b,r,d,w,f,l are for baseline, welsr, dsd, wels, fc, and lw, respectively. In lw, N = 1. Every reinitialization uses the same number of rounds K (cf. Appendix C) and they are trained for the same number of epochs (including the baseline). In wels, welsr, and dsd, we reinitialize 20% of the parameters, following Taha et al. (2021). | B | R | D | W | F | L | B | R | D | W | F | L | | | | |--------------------------------------------------------------|--------------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|------------------------------------------------------------|----------------------------------------------------------|--------------------------------------------------------------|----------------------------------------------|---------------------------|-----------------------------------------------------------|---------|---------|---------|----------------------|-----| | oxford-iiit | scnn | 13.7 ±.9 12.0 ±.6 13.4 ±.2 14.0 ±.6 14.0 ±.4 15.0 ±.4 | vgg16 | 16.2 ±.1 16.7 ±.9 16.9 ±.9 16.4 ±.5 29.7 ±.8 25.6 ±.9 | | | | | | | | | | | | ±.2 13.8 ±.6 17.2 ±.1 15.9 ±2. 14.8 ±3. 30.1 ±.8 | resnet 24.0 ±.1 24.2 ±.6 22.9 ±.1 24.9 ±.2 23.7 ±1. 28.5 ±.1 | | | | | | | | | | | | | | | mobile 13.4 | | | | | | | | | | | | | | | | dogs | scnn | 5.8 ±.3 | 5.2 ±.5 | 5.2 ±.1 | 5.1 ±.5 | 5.1 ±.1 | 6.1 ±.2 | vgg16 | 11.4 ±.2 11.8 ±.1 10.7 ±.4 10.3 ±.1 24.2 ±.6 19.1 ±.9 ±.7 | | | | | | | mobile 9.7 ±.3 | 8.6 ±1. | 9.0 ±1. | 11.6 ±.4 9.9 ±.5 | 19.2 ±1. | resnet 14.0 ±.1 17.5 ±1. 19.2 ±2. 16.0 ±1. 13.9 ±0.622.0 | | | | | | | | | | | cars196 | scnn | 3.4 ±.1 | 3.4 ±.0 | 3.4 ±.4 | 3.5 ±.3 | 3.2 ±.1 | 3.8 ±.1 | vgg16 | 6.5 ±1. | 5.5 ±.1 | 7.7 ±.4 | 7.0 ±.1 | 20.9 ±.1 9.9 ±.5 ±.3 | | | mobile 6.1 ±2. | 8.9 ±1. | 5.3 ±2. | 7.4 ±.5 | 6.8 ±1. | 22.2 ±.1 | resnet 10.1 ±2. 12.8 ±.6 13.4 ±6. 13.7 ±1. 10.4 ±.1 12.6 | | | | | | | | | | caltech101 | scnn | 51.1 ±.4 49.9 ±.1 49.8 ±.8 48.4 ±.9 50.8 ±.7 50.7 ±.1 | vgg16 | 54.1 ±.3 55.4 ±.7 54.2 ±.5 55.9 ±.3 69.1 ±.1 57.3 ±1 ±3. | | | | | | | | | | | | mobile 36.7 ±.9 43.0 ±3. 44.9 ±.4 40.9 ±3. 42.3 ±1. 47.0 ±.7 | resnet 50.4 ±.8 54.6 ±.8 55.0 ±.4 52.9 ±2. 50.7 ±1. 53.3 | | | | | | | | | | | | | | | cassava | scnn | 59.4 ±.4 58.6 ±.1 58.6 ±.7 59.8 ±1. 61.6 ±.1 63.9 ±.3 | vgg16 | 58.4 ±.2 59.5 ±1. 58.4 ±2. 62.1 ±2. 58.5 ±.0 59.3 ±1. ±2. | | | | | | | | | | | | mobile 52.6 ±.7 57.9 ±2. 63.6 ±.3 62.2 ±.4 55.4 ±1. 70.0 ±2. | resnet 61.9 ±4. 57.3 ±1. 62.8 ±2. 58.1 ±.4 56.7 ±2. 63.9 ±2. | | | | | | | | | | | | | | | birds2010 | scnn | 2.0 ±.1 | 2.6 ±.3 | 2.3 ±.1 | 2.2 ±.6 | 2.4 ±.3 | 2.4 ±.1 | vgg16 | 3.8 ±.4 | 4.3 ±.6 | 4.1 ±.4 | 3.4 ±.2 | 8.5 ±.6 | 5.1 | | mobile 4.5 ±1. | 3.9 ±.3 | 5.2 ±.4 | 5.9 ±.6 | 5.3 ±1. | 8.1 ±.5 | resnet 6.9 ±.4 | 8.7 ±.3 | 10.0 ±.1 10.0 ±.5 6.5 ±.1 | 10.0 ±.7 | | | | | | | | + Augmentation | | | | | | | | | | | | | | | oxford-iiit | scnn | 15.1 ±.5 14.6 ±.6 14.3 ±.5 14.7 ±.4 16.0 ±3. 16.6 ±3. | vgg16 | 22.0 ±3. 20.0 ±2. 20.9 ±3. 20.7 ±3. 39.1 ±3. 29.8 ±4. | | | | | | | | | | | | ±1. 24.1 ±1. 23.2 ±2. 24.3 ±1. 23.5 ±1. 41.7 | ±.3 33.3 ±.3 39.1 ±.2 31.9 ±.4 36.6 ±.3 34.4 ±.3 | | | | | | | | | | | | | | | mobile 27.7 | ±1. | resnet 29.7 | | | | | | | | | | | | | | dogs | scnn | 7.4 ±.3 | 8.2 ±.2 | 7.7 ±.3 | 8.0 ±.1 | 8.1 ±.2 | 8.6 ±.3 | vgg16 | 16.0 ±4. 16.9 ±3. 15.5 ±2. 16.7 ±4. 37.3 ±4. 31.2 ±9. ±1. | | | | | | | mobile 19.0 ±.8 19.5 ±.5 27.1 ±.8 18.8 ±.7 20.7 ±.3 35.8 ±.8 | resnet 26.4 ±1. 30.7 ±1. 28.4 ±1. 35.2 ±2. 33.3 ±1. 36.9 | | | | | | | | | | | | | | | cars | scnn | 5.6 ±.2 | 6.0 ±.5 | 5.3 ±.1 | 5.5 ±.2 | 7.3 ±.3 | 5.8 ±.2 | vgg16 | 11.7 ±.4 10.7 ±.4 11.2 ±.4 11.4 ±.1 43.6 ±.2 22.4 ±.4 ±1. | | | | | | | mobile 6.9 ±2. | 16.2 ±1. 9.1 ±1. | 11.9 ±1. 13.8 ±1. 44.0 ±1. | resnet 21.8 ±1. 42.5 ±2. 33.2 ±1. 43.1 ±2. 36.7 ±2. 43.6 | | | | | | | | | | | | | caltech101 | scnn | 50.1 ±.5 52.2 ±.5 52.2 ±.5 51.5 ±.5 54.4 ±.5 52.8 ±.4 | vgg16 | 55.9 ±1. 55.8 ±3. 57.1 ±3. 56.3 ±3. 67.1 ±2. 59.1 ±3. ±.9 | | | | | | | | | | | | mobile 41.0 ±2. 41.5 ±2. 47.4 ±3. 41.8 ±3. 48.0 ±1. 46.3 ±1. | resnet 50.2 ±2. 51.9 ±1. 53.1 ±.7 57.5 ±1. 52.1 ±2. 50.5 | | | | | | | | | | | | | | | ±.5 60.6 ±.1 59.4 ±.2 62.2 ±.5 67.0 ±.4 69.3 | ±2. 70.0 ±1. 70.0 ±1. 71.2 ±1. 71.5 ±1. 68.3 ±4. | | | | | | | | | | | | | | | cassava | scnn | 58.9 | ±.7 | vgg16 | 70.3 | ±2. | | | | | | | | | | mobile 62.3 ±1. 72.8 ±.7 80.1 ±.8 76.1 ±.7 77.3 ±.5 81.1 ±.4 | resnet 46.5 ±2. 79.2 ±2. 82.9 ±2. 82.6 ±1. 77.6 ±2. 73.9 ±2. | | | | | | | | | | | | | | | birds2010 | scnn | 3.8 ±.5 | 3.7 ±.2 | 4.0 ±.3 | 3.2 ±.2 | 4.2 ±.1 | 3.8 ±.1 | vgg16 | 5.5 ±1. | 6.5 ±1. | 5.6 ±.7 | 5.8 ±.9 | 13.2 ±1. 9.8 | | | mobile 6.6 ±.5 | 9.0 ±.7 | 8.1 ±.7 | 6.2 ±.4 | 7.4 ±.7 | 9.8 ±.7 | resnet 8.5 ±.4 | 12.5 ±.4 11.2 ±.4 13.0 ±.6 11.9 ±.4 10.6 ±.5 | | | | | | | | | | + Augmentation + Dropout | | | | | | | | | | | | | | | oxford-iiit | scnn | 15.8 ±.5 13.6 ±.5 14.3 ±.3 15.1 ±.5 19.3 ±.5 17.1 ±.5 | vgg16 | 25.3 ±3. 26.7 ±1. 26.6 ±2. 26.4 ±3. 43.4 ±3. 34.4 ±5. | | | | | | | | | | | | ±1. 29.1 ±1. 27.8 ±1. 27.9 ±1. 22.0 ±1. 41.0 | ±.3 35.2 ±.2 41.2 ±.3 37.4 ±.3 32.6 ±.3 35.6 ±.3 | | | | | | | | | | | | | | | mobile 28.4 | ±1. | resnet 33.1 | | | | | | | | | | | | | | dogs | scnn | 8.4 ±.3 | 8.9 ±.3 | 7.6 ±.3 | 8.0 ±.3 | 9.6 ±.3 | 9.1 ±.3 | vgg16 | 17.7 ±4. 19.7 ±4. 19.5 ±3. 18.9 ±3. 34.9 ±7. 35.9 ±9. ±.9 | | | | | | | mobile 17.8 ±.8 23.5 ±.5 27.3 ±.9 22.4 ±.8 20.5 ±.5 35.0 ±.4 | resnet 30.7 ±1. 33.3 ±1. 33.1 ±.9 33.8 ±.9 34.5 ±.9 40.1 | | | | | | | | | | | | | | | cars | scnn | 6.3 ±.1 | 6.2 ±.2 | 5.9 ±.1 | 6.8 ±.2 | 7.4 ±.1 | 6.4 ±.2 | vgg16 | 14.3 ±.7 18.9 ±.3 12.6 ±.3 16.6 ±.4 45.2 ±.1 34.2 ±.5 ±2. | | | | | | | mobile 9.5 ±1. | 16.1 ±1. 30.8 ±1. 24.1 ±2. 16.4 ±1. 44.5 ±1. | resnet 19.8 ±2. 45.7 ±1. 43.0 ±1. 48.1 ±3. 45.0 ±3. 47.5 | | | | | | | | | | | | | | caltech101 | scnn | 51.4 ±.4 53.5 ±.3 51.1 ±.3 52.6 ±.1 52.4 ±.4 53.6 ±.3 | vgg16 | 59.6 ±2. 61.7 ±3. 60.8 ±1. 61.0 ±3. 68.1 ±4. 62.7 ±.7. ±1. | | | | | | | | | | | | mobile 47.1 ±2. 43.0 ±1. 47.5 ±1. 45.6 ±1. 51.3 ±1. 49.4 ±2. | resnet 50.6 ±2. 54.1 ±3. 54.6 ±1. 55.7 ±1. 51.4 ±2. 50.3 | | | | | | | | | | | | | | | ±.5 62.9 ±.3 60.7 ±.2 61.9 ±.1 68.4 ±.1 68.6 | ±2. 71.4 ±2. 73.7 ±3. 71.0 ±2. 71.1 ±2. 71.9 ±5. | | | | | | | | | | | | | | | cassava | scnn | 61.8 | ±.1 | vgg16 | 70.1 | ±1. | | | | | | | | | | mobile 68.5 ±1. 70.0 ±1. 78.6 ±1. 74.3 ±1. 73.7 ±1. 80.5 ±3. | resnet 74.2 ±2. 77.6 ±2. 81.5 ±1. 80.9 ±1. 73.5 ±2. 78.6 ±1. | | | | | | | | | | | | | | | birds2010 | scnn | 4.0 ±.2 | 4.0 ±.1 | 3.5 ±.2 | 3.7 ±.2 | 3.3 ±.1 | 4.4 ±.2 | vgg16 | 6.5 ±3 | 8.1 ±1. | 8.5 ±2. | 8.1 ±1. | 18.6 ±1. 12.6 | | | mobile 2.5 ±.7 | 9.6 ±.7 | 5.1 ±.6 | 6.3 ±.7 | 8.4 ±.2 | 9.0 ±.9 | resnet 11.5 ±.4 14.8 ±.6 13.3 ±.7 15.2 ±1. 13.5 ±.7 13.3 ±.3 | | | | | | | | | Table 3: Statistical significance: a star (?) implies that the column method outperforms the row with statistically significant evidence at the 95% level, computed using the exact binomial test. A circle (◦) implies that statistical significance holds even after applying Holm's step-down correction for multiple hypothesis tests (Demšar, 2006). Only lw performs better than the baseline across all architectures. For resnet50, reinitialization methods except fc perform better than the baseline with no clear winner among them. scnn vgg16 mobilenet resnet50 b r d w f l b r d w f l b r d w f l b r d w f l b ◦ ? ◦ ◦ ◦ ? ◦ ◦ ? r ? ? ? ? ? d ? ? ? ? w ? ? ◦ ? ◦ f ? l ? | Table 4: Overview of the 6 benchmark datasets. | | | | |--------------------------------------------------|------------|--------|-----------| | Name | |Training| | |Test| | # Classes | | oxford-iiit (Parkhi et al., 2012b) | 3,680 | 3,669 | 37 | | dogs (Khosla et al., 2011) | 12,000 | 8,580 | 120 | | cars196 (Krause et al., 2013a) | 8,144 | 8,041 | 196 | | caltech101 (Fei-Fei et al., 2004a) | 3,060 | 6,084 | 101 | | cassava (Mwebaze et al., 2019b) | 5,656 | 1,885 | 4 | | birds2010 (Welinder et al., 2010b) | 3,000 | 3,033 | 200 | chosen to work reasonably well across all combinations; in particular they enable reaching 100% training accuracy in all cases. We use SGD with an initial learning rate of 0.003 and momentum 0.9. The learning rate is decreased by a factor of 2 whenever the validation error does not improve for 20 epochs. The batch size is 256 and a maximum of 100k minibatch steps are used. We run all experiments, as explicitly stated, without data augmentation or with mild augmentation consisting of horizontal flipping and random cropping (in which the size is increased to 248 × 248 before a crop of size 224 × 224 is selected). Such fixed hyperparameters are suboptimal for some combinations of architectures and datasets and therefore the resulting numbers can be worse than state-of-the-art results. However, they enable reaching 100% training accuracy in all combinations of models and datasets. For example, increasing the learning rate to 0.01 would prevent ResNet50 from progressing its training error beyond that of random guessing on cassava. Table 2 provides the detailed results of the five reinitialization methods across the benchmark datasets in three settings: (1) no augmentation or dropout is used, (2) with augmentation, and (3) with both augmentation and dropout rate 0.25. We perform an exact binomial test to evaluate which method performs statistically significantly better across the settings. In Table 3, we summarize these results. We observe that only lw outperform the baseline across all architectures with statistically significant evidence, and outperforms the other reinitialization methods in all architectures except resnet50. In resnet50, reinitialization methods except fc perform better than the baseline but with no clear winner among them. Moreover, fc performs generally better than wels, welsr, and dsd. It is worth reiterating, that fc is a special case of lw that corresponds to K = 1 and N > 1. ## 3.2 Effect Of Experiment Design To determine when a particular reinitialization method outperforms others, we train a decision tree classifier on the outcomes of several experiments that vary in design by, for example, number of classes, size of the training dataset, augmentation, and dropout. Every setting contains experiment runs of each of the 5 reinitialization methods in addition to the baseline for the four architectures and 6 benchmark datasets. We use the decision tree classifier, implemented using the Scikit-Learn package (Pedregosa et al., 2011), for interpretability. We use a minimum leaf size of 7 in the decision tree and a maximum depth of 4. Figure 2 ![7_image_0.png](7_image_0.png) Figure 2: A decision tree classifier trained to predict the best reinitialization method based on the experiment design. The features are the training set size, number of classes, neural network architecture, dropout rate and augmentation. In general, lw performs best overall except in vgg16, where fc performs better. The decision tree classifier has a maximum depth of 4 and a minimum number sample leaf of 7. displays the resulting decision tree. In general, lw performs best overall except in vgg16, where fc performs better. ## 3.3 Compute In Table 2, every reinitialization round is trained until convergence. However, improvement in generalization can also be obtained at lower computational overhead by stopping early in each round. This is demonstrated in Figure 3 for all six benchmark datasets. As shown in the figure, early stopping allows to realize the gain of reinitialization without incurring significant additional overhead. In addition, we show in Appendix E that training is faster in subsequent rounds of reinitialization. ## 4 Relations To The Generalization Risk Boosting the Margin. As discussed earlier in Section 2, a typical approach for bounding the generalization gap in deep neural networks is to use a particular measure of the size of the weights normalized by the *margin* on the training sample. Let D be the number of layers in a neural network, whose output is a composition of functions: f(x) = f1 ◦ f2 *◦ · · ·* fD(x), where each fi(x) is of the form fi(x) = σ(Wix) for some matrix Wi and ReLU activation σ. Then, one measure of the size of the weight that relates to generalization is the product of the Frobenius norms of layers Qd i=1 ||W||2F (Neyshabur et al., 2015; 2017). This is normalized by the margin γ > 0 on the training examples, which is the smallest difference between the score assigned to the true label and the next largest score. For a better visualization, we use the margin of the softmax output in the interval [0, 1]. Figure 4 displays the smallest 400 margins on the training sample for each of the benchmark datasets. As shown in the figure, lw boosts the margin on the training sample considerably when compared to previous reinitialization methods. Most importantly, lw achieves this *without* increasing the size of the weights. To take the contribution of the normalization layers into account when calculating the product Qd i=1 ||W||2F , we compare the product of the norms of the input to the classifier head (activations) and the norm of the weights of the classifier head in each method. We observe that lw tends to maintain the same size of the weights as the baseline. Appendix D provides further details. ![8_image_0.png](8_image_0.png) Figure 3: The test accuracy of reinitialization methods with different compute budgets (no augmentation or dropout) is plotted for each dataset. The x-axis is the number of training steps per reinitialization round. For the baseline, the test accuracy is plotted over the same total number of steps as reinitialization methods. Most reinitialization methods quickly surpass the accuracy of the baseline for the same amount of compute and can reap benefit of reinitialization without having to train until convergence in each round. We provide an informal argument for why this happens in LW. First, the product of the norms of the weights in the identified K blocks in IW (cf. Figure 1 and Algorithm 1) tend to remain unchanged due to the normalization layers inserted after each round. What changes is the norm of the final layers (following block K), but their norm tends to shrink because they train from scratch faster with each round. As for the margin, because the network classifies all examples correctly in a few epochs in the final round of LW, any additional epochs have the effect of increasing the margin to reduce the cross entropy loss. Sharpness of the Local Minima. Finally, we observe that the final solution provided by LW seems to reside in a "flatter" local minima of the loss surface than in the baseline. One method for quantifying flatness is to compare the impact on the training loss when the model parameters are perturbed by Gaussian noise, which has been linked to generalization (Neyshabur et al., 2017). To recall, both LW and BL share the same size of the weights (cf. Appendix D). Figure 5 shows that the solution reached by LW is more robust to model perturbation than in standard training. More precisely, for every amount of noise added into the model parameters w, the change in the training loss in LW is smaller than in standard training suggesting that the local minimum is flatter in LW. ## 5 Discussion And Limitations In this paper, we study a new reinitialization method for deep neural networks. Empirical results show that it improves generalization better than previous methods across a wide range of architectures and hyperparameters. It relates to prior works that distinguish learning general rules in earlier layers from exceptions ![9_image_0.png](9_image_0.png) reinitialization methods. LW (orange) boosts the margin considerably compared to the other reinitialization methods. The bottom figure of each dataset provides the same comparison between LW and BL. The curves are displayed separately for a better visualization, as they almost coincide in the wide ranged log-scale in the top figure. ![10_image_0.png](10_image_0.png) Figure 5: Bi-criteria plots for the change in training accuracy (y-axis) when the model parameters are perturbed by standard Gaussian noise N(0, o21) for each dataset. Lower curves suggest flatter local minima and better generalization. to the rules in later layers, because LW places more emphasis on the early layers of the neural network. We also argue that the improved generalization can be connected to the sharpness of the local minima and the margins on the training data. To assess the limitations of the proposed method, we conducted ablation studies, statistical tests as well as failure analysis using decision trees. Those revealed that layerwise reinitialization yields a significant improvement in cases where the generalization gap is large, such as when using poor hyper-parameters or small datasets. The improvement is small, however, when the generalization gap is small, such as when the training data is large. Our takeaway message is that the accuracy of convolutional networks can be improved for small datasets using bottom-up layerwise reinitialization, where the number of reinitialized layers may vary depending on the available compute budget. At one extreme, one would benefit from reinitializing the classifier's head alone, but reinitializing all layers in sequence with rescaling and normalization yields better results. We hope that the description of the observed positive effects will inspire others to study them more and to develop more efficient alternatives. ## References Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv:1603.04467, 2015. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In ICML, 2018. Devansh Arpit, Stanisław Jastrzkebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. A closer look at memorization in deep networks. In *ICML*, 2017. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. In *NeurIPS Deep Learning* Symposium, 2016. Robert JN Baldock, Hartmut Maennel, and Behnam Neyshabur. Deep learning through the lens of example difficulty. In *NeurIPS*, 2021. Peter L Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. *IEEE Transactions on Information Theory*, 44 (2):525–536, 1998. Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In *NeurIPS*, 2017. Christopher M Bishop. Training with noise is equivalent to Tikhonov regularization. *Neural computation*, 7 (1):108–116, 1995. Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. *Journal of Statistical Mechanics: Theory and Experiment*, 2019(12):124018, 2019. Gilad Cohen, Guillermo Sapiro, and Raja Giryes. DNN or k-NN: That is the generalize vs. memorize question. *arXiv:1805.06822*, 2018. Janez Demšar. Statistical comparisons of classifiers over multiple data sets. *Journal of Machine Learning* Research, 7:1–30, 2006. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In *CVPR09*, 2009. Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In *CVPR Workshops*, 2004a. Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Pattern Recognition Workshop, 2004b. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *ICLR*, 2021. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *AISTATS*, 2010. Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, et al. DSD: Dense-sparse-dense training for deep neural networks. In *ICLR*, 2017. Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In *ICML*, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In *ICCV*, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016b. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv:1704.04861*, 2017. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *ICML*, 2015. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In *ICLR*, 2017. Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In *CVPR Workshop on Fine-Grained Visual Categorization*, 2011. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In *Int. IEEE Workshop on 3D Representation and Recognition*, 2013a. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)*, Sydney, Australia, 2013b. Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, and Dejing Dou. RIFLE: Backpropagation in depth for deep transfer learning through re-initializing the fully-connected layer. In *ICML*, 2020. Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert JN Baldock, Olivier Bousquet, Sylvain Gelly, and Daniel Keysers. What do neural networks learn when trained with random labels? In *NeurIPS*, 2020. Ernest Mwebaze, Timnit Gebru, Andrea Frome, Solomon Nsumba, and Jeremy Tusubira. icassava 2019finegrained visual categorization challenge, 2019a. Ernest Mwebaze, Timnit Gebru, Andrea Frome, Solomon Nsumba, and Jeremy Tusubira. iCassava 2019 fine-grained visual categorization challenge. *arXiv:1908.02900*, 2019b. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In *ICML*, 2010. Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In *COLT*, 2015. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In *NeurIPS*, 2017. Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is being transferred in transfer learning? In NeurIPS, 2020. Neal Parikh and Stephen Boyd. Proximal algorithms. *Foundations and Trends in optimization*, 1(3):127–239, 2014. O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar. Cats and dogs. In *IEEE Conference on* Computer Vision and Pattern Recognition, 2012a. Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In *CVPR*, 2012b. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, et al. Scikitlearn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning for medical imaging. In *NeurIPS*, 2019. Robert E. Schapire, Yoav Freund, Peter L Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In *ICML*, 1997. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In *ICLR*, 2015. Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. *Journal of Machine Learning Research*, 19(1):2822–2878, 2018. Ahmed Taha, Abhinav Shrivastava, and Larry Davis. Knowledge evolution in neural networks. arXiv:2103.05152, 2021. P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010a. Peter Welinder, Steve Branson, Takeshi Mita, Catherine Wah, Florian Schroff, Serge Belongie, and Pietro Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010b. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In *NeurIPS*, 2014. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In *ICLR*, 2017. Kaikai Zhao, Tetsu Matsukawa, and Einoshin Suzuki. Retraining: A simple way to improve the ensemble accuracy of deep neural networks for image classification. In *ICPR*, 2018. Hattie Zhou, Ankit Vani, Hugo Larochelle, and Aaron Courville. Fortuitous forgetting in connectionist networks. In *ICLR*, 2021. ## A Layerwise Reinitialization As A Stochastic Gradient Descent Before presenting our empirical study of the different reinitialization methods, we discuss briefly why reinitialization can be interpreted as a stochastic gradient descent (SGD) procedure to a well-defined stochastic loss. Later in Section 4, we present arguments for the improved generalization in lw that are linked to the margin on the training examples as well as the flatness of the local minimum. Consider the following simplified training protocol. In a multi-layer neural network, let w0 ∈ R d0 be the set of weights at the first layer and write w¯0 ∈ R d−d0for the set of weights at all later layers. Let w t 0 and w¯ t 0 be the set of weights after round t. We will simplify discussion by denoting w t = (w t 0 , w¯ t 0 ) ∈ R d and focusing on the first layer alone. Given a loss function L : R d → R, training via stochastic gradient descent (SGD) leads to a stationary point of the loss surface (i.e. a solution w such that ∇L(w) = 0). Let S be the set of stationary points of L. To mimic the behavior of lw, suppose that training proceeds at round t by reinitializing w¯0 and applying the proximal operator to the full set of weights w: $$w^{t}=\arg\min_{w\in S}||w-w^{t-1}||_{2}^{2}.\tag{2}$$ Informally, (2) selects a stationary point of L that is nearest to the current solution in the reinitialization round. If we write ΨS for the indicator function of the set S: $$\Psi_{\mathcal{S}}(w)=\begin{cases}0,&w\in\mathcal{S}\\ \infty,&w\notin\mathcal{S},\end{cases}\tag{3}$$ then a single training round in (2) corresponds to a gradient descent step to the *Moreau envelope* of ΨS (Parikh & Boyd, 2014), denoted MΨS , which in this case is the distance to S. That is, a training round in (2) is equivalent to the gradient update step: $$w^{t+1}=w^{t}-\nabla M_{\Psi_{S}}(w^{t}).$$ $$\left(4\right)$$ t). (4) However, for the set of weights at the first layer w0, reinitialization transforms the update rule in (4) into a stochastic gradient step w t+1 0 = w t 0 − ∇ft(w t 0 ), in which ft is a stochastic loss function whose randomness is derived from the randomness of w¯0 and satisfies: $$f_{t}(w_{0})=M_{\Psi_{\bar{S}}}((w_{0},\bar{w}_{0}^{t}))=\min_{(x,\bar{x})\in S}\,\big{\{}||x-w_{0}||_{2}^{2}+||\bar{x}-\bar{w}_{0}||_{2}^{2}\big{\}}.\tag{5}$$ Hence, a single reinitialization round, where the weights of the first layer are retained while the rest is reinitialized, can be interpreted as a stochastic gradient descent step to the loss in (5), which penalizes weights at the first layer w0 that change significantly when the rest of the network is reinitialized and retrained. Repeating this several times for the first layer is analogous to choosing a large value of N 1 in lw. Once the first layer is trained, its output can be normalized before proceeding to the next layer, which is what lw achieves. ## B Synthetic Data Experiment We use the same type of data as described in Section 1, but look in more detail at the more difficult case α = 0.5, this means the first three entries of the data encode the 8 possible labels as the 8 corners of the cube [−0.5, 0.5]3, whereas the remaining entries are still sampled from the standard normal distribution. In addition, one may add a weight decay penalty to the task and examine the impact of rescaling alone. Specifically, we consider two cases: - *Rescaling*: Instead of training once for T epochs, we train 5 times for T /5 epochs, and in between we scale back all weights such that the norm of each layer matches the norm after initialization. - *Reinitialization* ("lw"): In addition to rescaling, we re-initialize the layers above the first one in the first two rounds, above the second layer in the next two rounds, and only the top layer in the last round. The results are shown in Table 5. We use the same type of data as described above, but focus now at the more difficult case of α = 0.5. We observe that one can get significantly better results with weight decay. Nevertheless, lw gives an additional benefit on top of the L2 regularization: The best baseline result is 0.77, but the best results with Rescaling or lw are at 0.89 / 0.86. Note that the best L2 penalty needs to be estimated (e.g. by cross validation) for each data set and training procedure. In this case, less L2 penalty is needed if we apply Rescaling / lw . In this particular experiment, rescaling seems to already give the full effect but this is not generally the case in natural image datasets, in which the gain seems to be modest without reinitialization. | Table 5: Test accuracies (average of 100 runs) | | | | |--------------------------------------------------|----------|-----------|------| | L2 penalty | Baseline | Rescaling | lw | | 0.0 | 0.19 | 0.21 | 0.25 | | 0.005 | 0.51 | 0.67 | 0.82 | | 0.01 | 0.54 | 0.89 | 0.85 | | 0.02 | 0.58 | 0.87 | 0.86 | | 0.05 | 0.77 | 0.78 | 0.79 | | 0.1 | 0.59 | 0.63 | 0.64 | ## C Experiment Setup Throughout the main text, we use four different architectures: one simple convolutional neural network, and three standard deep convolutional models. In all architectures, we use weight decay with penalty 10−5. We also use layer normalization (Ba et al., 2016), implemented in TensorFlow (Abadi et al., 2015) using GroupNormalization layers with groups=1. Similar results are obtained when using Batch Normalization (Ioffe & Szegedy, 2015). In all experiments, we use SGD as an optimizer with a learning rate of 0.003 and momentum 0.9. Also, we use a batch size of 256. All experiments are executed on Tensor Processing Units (TPU) for a maximum of 100,000 minibatch steps per reinitialization round. We resize images to 224 × 224 in all experiments. Simple CNN (**scnn**). This architecture contains four convolutional blocks followed by one dense layer before the classifier head. The number of convolutional blocks K used in this architecture is 4. Every convolutional block is a 2D convolutional layer, followed by layer normalization and ReLU activation. Precisely: conv2d 32 filters layer_norm; activation_relu conv2d 32 filters layer_norm; activation_relu max_pooling2d conv2d 64 filters layer_norm; activation_relu conv2d 64 filters layer_norm; activation_relu max_pooling2d flatten ![16_image_0.png](16_image_0.png) Figure 6: For each reinitialization method, the Gaussian approximation of the density of the *ratio* of the size of the weights over the size of the weights in the baseline method is shown. The density of the ratio in lw is concentrated around 1, which implies that lw tends to not increase the size of the weights. See Appendix D for details. dense 512 units layer_norm; activation_relu dropout classifier_head MobileNetV1 (**mobilenet**). This is the standard shallow MobileNet architecture (Howard et al., 2017). The standard blocks in this architecture are either convolutional blocks with layer normalization and ReLU or depthwise separable convolutions with depthwise and pointwise layers followed by layer normalization and ReLU (see Figure 3 in (Howard et al., 2017)). In the shallow architecture, the number of convolutional blocks K is 7. VGG16 (**vgg16**). This is the standard VGG16 architecture (Simonyan & Zisserman, 2015). The standard blocks in this architecture are convolutional layers with layer normalization and ReLU (Table 1 in (Simonyan & Zisserman, 2015)). The number of convolutional blocks K is 13. ResNet50 (**resnet50**). This is the standard ResNet50 architecture (He et al., 2016b). The standard blocks in this architecture either identity blocks or convolutional blocks (see Table 1 in (He et al., 2016b)). The number of convolutional blocks K used in this architecture is 16. ## D Size Of The Weights To calculate the norm of the weights while taking the contribution of the normalization layers into account, we compute the norm of the input to the classifier head (activations) for a random training sample of size 256. Then, we compute the Frobenius norm of the weights at the classifier head. Finally, we compute their product, which reflects the product of the Frobenius norm of layers stated in the generalization bound. Figure 6 shows a a Gaussian approximation to the ratio of the size of the weights of each reinitialization method over the size of the weights in the baseline. As shown in the figure, lw tends to maintain the size of the weights, while also boosting the margin on the training examples as discussed in the main paper. ## E Ablation lw includes rescaling, normalization, and reinitialization. In some cases, these may not all be required and reinitialization alone suffices, but this is not always the case. We observe a consistent improvement in lw when rescaling and normalization are included, in addition to fine-tuning the whole model at each round. In general: - The improvement in generalization in lw cannot be attributed to rescaling or normalization alone. Reinitialization has the main effect. - There exist experiment designs in which reinitialization fails without fine-tuning the model. - We observe cases in which rescaling alone helps but adding reinitialization improves performance further. - The gain from lw cannot be obtained by just training the baseline longer (i.e. using the same computational budget). In this section, we show that the primary effect in lw comes from reinitialization, and that the improvement in generalization cannot be attributed to rescaling or normalization alone. We also show that fine-tuning the whole model performs better than freezing the early layers. Finally, we illustrate a case where lw without normalization fails. Rescaling. Generally, rescaling yields a small improvement on top of reinitialization and most of the gain of lw can often be achieved without it. Nevertheless, rescaling offers benefits. For example, in the six datasets in Table 4 plus CIFAR10 and CIFAR100, if we apply lw with rescaling vs. lw without rescaling, we observe that rescaling tends to offer better performance. The following table provides the probability (over the choice of the dataset) that rescaling yields better results compared to without it: With no augmentation and no dropout 87.5% With augmentation and no dropout 62.5% With augmentation and dropout 37.5% In addition, one can construct settings in which reinitialization without rescaling fails. For example, when training vgg16 on CIFAR100 without normalization layers using the following parameters: Learning Rate: 0.003, Momentum: 0, Batch size: 256 Dropout Rate: 0, Initializer: He Normal, Weight Decay: 0, reinitialization fails to progress beyond random guessing without rescaling. Reinitialization. We use the vgg16 architecture with the same hyperparameters as listed in Appendix C. We repeat the same experiments across all datasets including rescaling and normalization but without reinitialization and compare the resulting accuracy when reinitialization is added. We also include experiments with and without augmentation as well as with and without dropout. When we compare the difference in outcomes using the exact binomial test, the improvement of reinitialization compared to rescaling and normalization alone is statistically significant with a p-value of less than 10−9. Fine-tuning vs. Freezing. lw fine-tunes the entire model in each round. One alternative approach is to *freeze* the early blocks. However, because of the co-adaptability between neurons that arises during training (Yosinski et al., 2014), freezing some layers and fine-tuning the rest is difficult to optimize and can harm its performance (Yosinski et al., 2014). This is also true for reinitialization methods in general. Hence, the entire model including the kept layers is fine-tuned at each round. If we consider vgg16 and the six datasets in Table 4 plus CIFAR10 and CIFAR100, for example, and apply lw with freezing vs. lw with fine-tuning, we observe that fine-tuning improves performance in general. The following table provides the probability (over the choice of the dataset) that fine-tuning the model yields better results compared to freezing: With no augmentation and no dropout 87.5% With augmentation and no dropout 62.5% With augmentation and dropout 75.0% Training Longer. The improvement in lw cannot be obtained by simply training longer even with learning rate scheduling. Throughout our experiments (e.g. Tables 2), we also train the baseline longer to have the same number of training steps in total as reinitialization methods. Despite that, reinitialization methods improve performance considerably. ![18_image_1.png](18_image_1.png) ![18_image_0.png](18_image_0.png) 0.7 0.8 0.9 Figure 7: The test accuracy is displayed for the baseline bl (x-axis) vs. lw (y-axis) with N = 1 on CIFAR10 (top) and CIFAR100 (bottom). Most of the experiments fall above the diagonal, which indicates that lw succeeds in improving generalization. ## F Experiments On Cifar10 And Cifar100 Datasets To examine the impact of lw on larger datasets, we run several experiments with different hyperparamters on CIFAR10 and CIFAR100. The hyperparameters we vary are the learning rate (either 0.003 or 0.01), dropout rate (either 0 or 0.25), augmentation (with or without), and weight decay (either 1e-5 or 5e-4). The results are summarized in Figure 7. Experiments in which training fails to progress (e.g. because the learning rate is too large), were dropped. We also include experiments with Xavier initialization (Glorot & Bengio, 2010) in the figure. Since the focus in this work is on the improvement in generalization on small datasets, we only validate that lw can improve generalization compared to the baseline method bl. As shown in Figure 7, lw improves generalization in most settings, particularly when generalization is a major concern (e.g. the improvement is bigger when the baseline bl performs poorly).
Review 1: Summary: This paper investigates reinitialization methods for convolutional neural network. The paper main contributions are: - a new reinitialization method (LW) that proposes to reinitialize the later layers in a network. - an empirical study on 6 small scales benchmark that demonstrates the advantage of LW for better generalization. - an empirical analysis of LW that investigates its effect on the network margin and flatness of the final solution. Strengths and Weaknesses: Strengths: - The paper performs an extensive set of experiments, consider various neural network backbone as well as re-initialization methods. Weaknesses: - Authors claim that LW improve the generalization of a convolutional neural network. However, authors focus their experimental study on small scale datasets having less than 12K training points. It is unclear if the generalization effect would be observed on larger scale dataset. - Authors argue that LW allows to reach solution with lower-parameters norm. Weight-decay is another common method to constrain the parameters norm. While the experiments use some weight-decay, no hyperparameter search is done on its value. - The hyperparameters are kept the same across the approaches. They might be sub-optimal for some methods. - Authors measure the flatness at a minimum in an indirect way, by perturbing the model parameters by Gaussian noise. It would be informative to investigate other measure of the flatness a the minimum. Requested Changes: - Run LW and the baseline on other datasets with more datapoint (CIFAR-10, CIFAR-100, Tiny-ImageNet and possibly ImageNet) - Compare re-initialization methods with a baseline having a tuned weight-decay. - Optimize the hyperparameters independently for a subset of the approaches (LW and the baseline). - Add other metrics to investigate the flatness of the solution such as the spectral norm of the Hessian or Fisher Information matrix. - While the authors discuss the computation budget, it would be nice to report the wall-clock time necessary to train the different approaches. Presentation could also be improved. For instance, it is not clear how section 3.1 connects with the rest of the paper. Additionally, the current version of the paper describes LW, including a pseudocode, directly in the introduction. Describing the approach in its own section could ease the reading. Broader Impact Concerns: No specific concern. ================================================== Review 2: Summary: This paper studies how the initialization of neural networks, especially CNN, will matter for generalization performances, over the empirical scope of small datasets. The analysis also leads to a new initialization strategy, namely layer-wise initialization, that can potentially leads to better results. The newly proposed method is also validated by the small scale datasets. Strengths and Weaknesses: - Strengths - The coverage of the empirical study in terms of experiments and runs are fairly impressive. The results have broadly cover a wide range of studies and datasets at the small scale. - The layer-wise initialization is also fairly creative. Similar techniques are rarely studied to my knowledge and can potentially make a difference. - Weakness - The main weakness is, probably as expected, the limited scope in terms of the data sizes, while the manuscript is studying convolutional neural network, it is quite uncommon to use datasets that are even smaller than MNIST and CIFAR10. Thus, while the study is interesting, the results can be easily challenged (or potentially altered) with bigger datasets. Requested Changes: - The most straightforward remedy is obviously to use larger datasets, but I imagine the authors are facing some limitations with computing resources, so I would suggest authors to find a way to show their results will be meaningful for larger datasets. - Please consider to at least test the new layer-wise idea in larger data sets to show that the idea indeed can be meaningful for datasets with practical interest. - Maybe derive some analytical bounds with dataset as a factor, and show that current results following the derived bounds, so that we can have a knowledge of the behavior of models when larger datasets are used. Broader Impact Concerns: None noted. ================================================== Review 3: Summary: 1. The authors propose a new method to reinitialize layers of a convolutional neural networks to achieve better performance and generalization properties. 2. The proposed method (LW) involves rescaling, reinitialization and normalization, which is very simple to implement. The authors also conduct ablation studies to verify that all three components are necessary. 3. The authors study the changes of losses and margins of the trained model to verify the improvements of the generalization properties of the proposed method. Strengths and Weaknesses: Strengths: 1. The proposed method is simple, easy-to-implement, and empirically effective on a range of datasets and neural network architectures. 2. The authors conduct many studies, including the synthetic data experiments, the ablation studies, the experiment design studies, and the changes in the margins/losses to further demonstrate the effectiveness of the proposed method. Weakness: 1. The tested datasets all have relatively small sizes. I see that the proposed method probably works the best when generalization is a major concern, but I still think it's valuable to see results on some larger datasets. For example, even the overused CIFAR datasets are not included, and CIFAR datasets also exhibit some generalization issues that might get improved using the proposed method. Otherwise, the proposed method is only shown to work on small datasets, which is a major weakness. 2. I think the rescaling component is not well-justified. First of all, I don't see a clear intuition about using rescaling. Plus there are many other simple methods that can accomplish rescaling in some other way, for example, the weight decay, which is also tested in the paper. Also, the ablation studies only show the effectiveness of rescaling from one side, i.e., there are no results showing that the proposed method would work less well if rescaling is excluded. 3. The ablation studies do not have complete results. I only find qualitative statements for most of the ablation studies without quantitative numbers. 4. The theoretical analysis is relatively weak. The authors only show an interpretation of the proposed method as an SGD step, but its connections to improvements in generalization are unclear. Requested Changes: 1. [critical] Experimental results involving larger datasets. 2. [critical] Better justifications for the rescaling component. 3. [critical] Complete ablation study results. 4. [strengthen] More theoretical analysis of the proposed method. I don't expect the paper to have a complete theoretical analysis, but it would be a strong plus. 5. [strengthen] Please provide standard deviations of the experimental results in addition to the statistical tests, as they would provide more insights into the comparisons and statistical significance. 6. [strengthen] Appendix E is not included, which should show some computation time results as intended. 7. [strengthen] Studies involve other reinitialization methods. The reinitialization component of the proposed method may use other initialization algorithms (other than He), or simply other previous reinitialization baselines such as DSD. In such a case the proposed method restricts the layers to do reinitialization. [strengthen] Minor issues: 1. description on Layerwise (page 2): should be "At round k, the parameters at the lowest *k*" but not *K*; similarly, it should be *k* instead of *K* for the "inserted/updated following block *k*". 2. M in eq(4) is not defined. 3. This ordering of Figure 4 and Figure 5, compared to their reverse orderings in the text descriptions is a bit confusing. 4. In Appendix A, Table 5, having two red numbers for rescaling and LW while only one for baseline is a bit confusing. Broader Impact Concerns: I don't think there is significant ethical conerns with the proposed work. ================================================== Metareview: Recommendation: Reject Comment: The manuscript was evaluated by three reviewers. They found that the manuscript has some strengths, particularly the creativity of the proposed approach and the scope of experiments on small datasets, but that these strengths were surpassed by several weaknesses, particularly the small scale of the considered datasets, insufficient justification of the proposed approach, insufficient analysis and discussion of parameter norms, flatness of minima, hyper parameters, and the limited theoretical results. The authors' responses addressed some of the initial concerns, particularly by adding new experiments and reorganising parts of the manuscript to improve readability. However, the reviewers found that important recommendations remained unaddressed and important issues remained unresolved, particularly about the justification of the rescaling approach and the limited theoretical results. At the end of the discussion period there was a consensus that the article requires more results to justify the approach, with final recommendations ranging from leaning reject to reject. Hence I must reject the article in its current form. However, I would be willing to consider a significantly revised version addressing the feedback from the reviewers. ==================================================
# U-Statistics For Importance-Weighted Variational Inference Javier Burroni jburroni@cs.umass.edu University of Massachusetts Amherst Kenta Takatsu‡*ktakatsu@andrew.cmu.edu* Carnegie Mellon University Daniel Sheldon *sheldon@cs.umass.edu* University of Massachusetts Amherst Justin Domke domke@cs.umass.edu University of Massachusetts Amherst Reviewed on OpenReview: *https: // openreview. net/ forum? id= oXmwAPlbVw* ## Abstract We propose the use of U-statistics to reduce variance for gradient estimation in importanceweighted variational inference. The key observation is that, given a base gradient estimator that requires m > 1 samples and a total of *n > m* samples to be used for estimation, lower variance is achieved by averaging the base estimator on overlapping batches of size m than disjoint batches, as currently done. We use classical U-statistic theory to analyze the variance reduction, and propose novel approximations with theoretical guarantees to ensure computational efficiency. We find empirically that U-statistic variance reduction can lead to modest to significant improvements in inference performance on a range of models, with little computational cost. ## 1 Introduction An important recent development in variational inference (VI) is the use of ideas from Monte Carlo sampling to obtain tighter variational bounds (Burda et al., 2016; Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018; Domke & Sheldon, 2019). Burda et al. (2016) first introduced the *importance-weighted autoencoder* (IWAE), a deep generative model that uses the *importance-weighted evidence lower bound* (IW-ELBO) as its variational objective. The IW-ELBO uses m samples from a proposal distribution to bound the log-likelihood more tightly than the conventional evidence lower bound (ELBO), which uses only 1 sample. Later, the IW-ELBO was also connected to obtaining better approximate posterior distributions for pure inference applications of VI (Cremer et al., 2017; Domke & Sheldon, 2018), or "IWVI". Similar connections were made for other variational bounds (Naesseth et al., 2018; Domke & Sheldon, 2019). The IW-ELBO is attractive because, under certain assumptions [see Burda et al. (2016); Domke & Sheldon (2018)], it gives a tunable knob to make VI more accurate with more computation. The most obvious downside is the increased computational cost (up to a factor of m) to form a single estimate of the bound and its gradients. A more subtle tradeoff is that the signal-to-noise ratio of some gradient estimators degrades with m (Rainforth et al., 2018), which makes stochastic optimization of the bound harder and might hurt overall inference performance. To take advantage of the tighter bound while controlling variance, one can average over r independent replicates of a base gradient estimator (Rainforth et al., 2018). This idea is often used in practice and requires a total of n = rm samples from the proposal distribution. ‡Work done while at UMass. Our main contribution is the observation that, whenever using r > 1 replicates, it is possible to reduce variance with little computational overhead using ideas from the theory of U-statistics. Specifically, instead of running the base estimator on r independent batches of m samples from the proposal distribution and averaging the result, using the same n = rm samples we can run the estimator on k > r *overlapping* batches of m samples and average the result. In practice, the extra computation from using more batches is a small fraction of the time for model computations that are already required to be done for each of the n samples. Specifically: - We describe how to take an m-sample base estimator for the IW-ELBO or its gradient and reduce variance compared to averaging over r replicates by forming a *complete U-statistic*, which averages the base estimator applied to every distinct batch of size m. This estimator has the lowest variance possible among estimators that average the base estimator over different batches, but it is usually not tractable in practice due to the very large number of distinct batches. - We then show how to achieve most of the variance reduction with much less computation by using incomplete U-statistics, which average over a smaller number of overlapping batches. We introduce a novel way of selecting batches and prove that it attains a (1 − 1/`) fraction of the possible variance reduction with k = `r batches. - As an alternative to incomplete U-statistics, we introduce novel and fast approximations for IW-ELBO complete U-statistics. The extra computational step compared to the standard estimator is a single sort of the n input samples, which is very fast. We prove accuracy bounds and show the approximations perform very well, especially in earlier iterations of stochastic optimization. - We demonstrate on a diverse set of inference problems that U-statistic-based variance reduction for the IW-ELBO either does not change, or leads to modest to significant gains in black-box VI performance, with no substantive downsides. We recommend always applying these techniques for black-box IWVI with r > 1. - We empirically show that U-statistic-based estimators also reduce variance during IWAE training and lead to models with higher training objective values when used with either the standard gradient estimator or the doubly-reparameterized gradient (DReG) estimator (Tucker et al., 2018). ## 2 Importance-Weighted Variational Inference Assume a target distribution p(*z, x*) where x ∈ R dX is observed and z ∈ R dZ is latent. VI uses the following evidence lower bound (ELBO), given approximating distribution qφ with parameters φ ∈ R dφ , to approximate ln p(x) (Saul et al., 1996; Blei et al., 2017): $${\mathcal{L}}=\mathbb{E}\left[\ln{\frac{p(Z,x)}{q_{\phi}(Z)}}\right]\leq\ln p(x),\qquad Z\sim q_{\phi}.$$ The inequality follows from Jensen's inequality and the fact that E hp(Z,x) qφ(Z) i= p(x), that is, the importance weight p(Z,x) qφ(Z) is an unbiased estimate of p(x). Burda et al. (2016) first showed that a tighter bound can be obtained by using the average of m importance weights within the logarithm. The importance-weighted ELBO (*IW-ELBO*) is $${\mathcal{L}}_{m}=\mathbb{E}\Big[\ln{\frac{1}{m}}\sum_{i=1}^{m}{\frac{p(Z_{i},x)}{q_{\phi}(Z_{i})}}\Big]\leq\ln p(x),\qquad Z_{i}\stackrel{\mathrm{iid}}{\sim}q_{\phi}.$$ iid∼ qφ. (1) This bound again follows from Jensen's inequality and the fact that 1m Pm i=1 p(Zi,x) qφ(Zi) , which is the sample average of m unbiased estimates, remains unbiased for p(x). Moreover, we expect Jensen's inequality to provide a tighter bound because the distribution of this sample average is more concentrated around p(x) than the distribution of one estimate. Indeed, Lm ≥ Lm0 for *m > m*0 and Lm → ln p(x) as m → ∞ (Burda et al., 2016). $$(1)$$ In importance-weighted VI (IWVI), the IW-ELBO Lm is maximized with respect to the variational parameters φ to obtain the tightest possible lower bound to ln p(x), which simultaneously finds an approximating distribution that is close in KL divergence to p(z | x) (Domke & Sheldon, 2018). In practice, the IW-ELBO and its gradients are estimated by sampling within a stochastic optimization routine. It is convenient to define the *log-weight* random variables Vi = ln p(Zi, x) − ln qφ(Zi) for Zi ∼ qφ and rewrite the IW-ELBO as $${\mathcal{L}}_{m}=\mathbb{E}[h(V_{1:m})],\qquad h(v_{1:m})=\ln\frac{1}{m}\sum_{i=1}^{m}e^{v_{i}}.$$ Then, an unbiased IW-ELBO estimate with r replicates, using n = rm i.i.d. log-weights (Vj,i) r,m j=1,i=1 is $$\left(2\right)$$ $$\hat{\mathcal{L}}_{n,m}=\frac{1}{r}\sum_{j=1}^{r}h(V_{j,1},\ldots,V_{j,m}).\tag{1}$$ $$\left({\mathrm{3}}\right)$$ $$\left(4\right)$$ In Lˆn,m, we use the subscript n to denote the total number of input samples used for estimation and m for the number of arguments of h, which determines the IW-ELBO objective to be optimized. For gradient estimation, an unbiased estimate for the IW-ELBO gradient ∇φLm is: $$\hat{G}_{n,m}=\frac{1}{r}\sum_{j=1}^{r}g(Z_{j,1},\ldots,Z_{j,m}),\qquad Z_{j,i}\stackrel{\mathrm{iid}}{\sim}q_{\phi},$$ where g(z1:m) is any one of several unbiased "base" gradient estimators that operates on a batch of m samples from qφ, including the reparameterization gradient estimator (Kingma & Welling, 2013; Rezende et al., 2014), the doubly-reparameterized gradient (DReG) estimator (Tucker et al., 2018), or the score function estimator (Fu, 2006; Kleijnen & Rubinstein, 1996). ## 2.1 Iwvi Tradeoffs: Bias, Variance, And Computation Past research has shown that by using a tighter variational bound, IWVI can improve both learning and inference performance, but also introduce tradeoffs such as those pointed out by Rainforth et al. (2018). In fact, there are several knobs to consider when using IWVI that control its bias, variance, and amount of computation. These tradeoffs can be complex so it is helpful to review the key elements as they relate to our setting, with the goal of understanding when and how IWVI can be helpful and providing self-contained evidence that the setting where U-statistics are beneficial can and does arise in practice. Consider the task of maximizing an IW-ELBO objective Lm to obtain the tightest final bound on the loglikelihood. This requires estimating Lm and its gradient with respect to the variational parameters in each iteration of a stochastic optimization procedure. Assume there is a fixed budget of n independent samples per iteration, where, for convenience, n = rm for an integer r ≥ 1, as above. The parameters m and r can be adjusted to control the estimation bias and variance at the cost of increased computation. Specifically: - For a fixed m, by setting r 0 > r, we can reduce the variance of the estimator in Equation (4) by increasing the computational cost to r 0*m > rm* samples per iteration. - For a fixed r, by setting m0 > m we can reduce the bias of the objective - that is, the gap in the bound Lm0 ≤ ln p(x) - by increasing the computational cost to rm0 *> rm* samples per iteration. However, Rainforth et al. (2018) observed that increasing m may also have the *negative* effect of worsening the signal-to-noise (SNR) ratio of gradient estimation (but also that this could be counterbalanced by increasing r). Later, Tucker et al. (2018) showed that, for the DReG gradient estimator, increasing m can *increase* SNR; see also the paper by (Finke & Thiery, 2019) for a detailed discussion of these issues. Overall, while the effect of increasing the number of replicates r to reduce variance is quite clear, the effects of increasing m are sufficiently complex that it is difficult to predict in advance when it will be beneficial. However, an important premise of our work is that the optimal setting of m is often strictly between 1 and n, since this is the setting where U-statistics can be used to reduce variance. To understand this, we can first reason from the perspective of a user that is willing to spend more computation to get a better model. Assuming the variational bound is not already tight, this user can increase m as much as desired to tighten the bound, and then, increase r as needed to control the gradient variance. This argument predicts that, for a sufficiently large computational budget and complex enough model (so that the bound is not already tight with m = 1), a value m > 1 will often be optimal. ![3_image_0.png](3_image_0.png) Figure 1: Distribution of the distance (error) between the distribution's covariance and that of the approximating distribution as a function of m for different numbers of sampled points n, after training an approximating distribution using the standard IW-ELBO estimator. As we increase n, the optimal m also increases, but at a slow rate. [See Section 6 for details.] From the perspective of a user with fixed computational budget, in which the number of optimization iterations is also being fixed, we could instead ask: "for a fixed n, what are the optimal choices of m and r = n/m"? This question can be addressed empirically. Rainforth et al. (2018) reported in their Figure 6 that the extreme values, i.e., m = 1 or m = n, were never the best values. We found empirically that for some models, this result also holds for black box VI, i.e., the optimal choice of m is strictly greater than 1 and less than n, as shown in Figure 1. See also Figure 5, which shows that similar observations apply when using the DReG estimator. In our analysis of 17 real Stan and UCI models, with n = 16, around half of them achieved the best performance for an intermediate value of m, depending on the approximating distribution and base gradient estimator [see Table 8 and 9 in Appendix G]. And we further conjecture that the fraction of real-world models with this property will increase as n increases. For the rest of this work we focus on methods that can reduce variance for the case when 1 *< m < n*. ## 3 U-Statistic Estimators We now introduce estimators for the IW-ELBO and its gradients based on U-statistics, and apply the theory of U-statistics to relate their variances. The theory of U-statistics was developed in a seminal work by Hoeffding (1948) and extends the theory of unbiased estimation introduced by Halmos (1946). For detailed background, see the original works or the books by Lee (1990) and van der Vaart (2000). The standard estimators in Eqs. (3) and (4) average the base estimators h(v1:m) and g(z1:m) on disjoint batches of the input samples. The key insight of U-statistics is that variance can be reduced by averaging the base estimators on a larger number of overlapping sets of samples. We will consider general IW-ELBO estimators of the form $${\hat{\mathcal{L}}}_{\mathcal{S}}(v_{1:n})={\frac{1}{|{\mathcal{S}}|}}\sum_{s\in{\mathcal{S}}}h(v_{s_{1}},\ldots,v_{s_{m}}),$$ , . . . , vsm), (5) where S is any non-empty collection of size-m subsets of the indices JnK := {1*, . . . , n*}, and siis the ith smallest index in the set s ∈ S. Since E h(V1:m) = Lm, it is clear (by symmetry and linearity) that ELˆS (V1:n) = Lm, that is, the estimator is unbiased. For now, we will call this a "U-statistic with kernel h", as it is clear the same construction can be generalized by replacing h by any other symmetric function of m variables1, or "kernel", while preserving the expected value. Later, we will distinguish between different types of U-statistics based on the collection S. 1Recall that a symmetric function is a function invariant under all permutations of its arguments. $$\left(5\right)$$ We can form U-statistics for gradient estimators by using base gradient estimators as kernels. Let g(z1:m) be any symmetric base estimator such that E g(Z1:m) = ∇φLm. The corresponding U-statistic is $$\hat{\mathcal{G}}_{\mathcal{S}}(Z_{1:n})=\frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}g(Z_{s_{1}},\ldots,Z_{s_{m}})$$ $$(6)$$ and satisfies E GˆS(Z1:n) = ∇φLm. ## 3.1 Variance Comparison How much variance reduction is possible for IWVI by using U-statistics? In this section, we first define the standard IW-ELBO estimator and *complete U-statistic IW-ELBO estimator*, and then relate their variances. For concreteness, we restrict our attention to IW-ELBO objective estimators, but analagous results hold for gradients by using a base gradient estimator as the kernel of the U-statistic. We first express the standard IW-ELBO estimator Lˆn,m in the terminology of Eq. (5): Estimator 1. The standard IW-ELBO estimator Lˆn,m of Eq. (3) is the U-statistic LˆS formed by taking S to be a partition of JnK into disjoint sets, i.e., S ={1, . . . , m}, {m+ 1, . . . , 2m}*, . . . ,* {(r−1)m+ 1*, . . . , rm*} . Estimator 2. The complete U-statistic IW-ELBO estimator LˆU n,m is the U-statistic LˆS with S =JnK m , the set of all distinct subsets of JnK with exactly m elements. We will show that the variance of the LˆU n,m is never more than that of Lˆn,m, and is strictly less under certain conditions (that occur in practice), using classical bounds on U-statistic variance due to Hoeffding (1948). Since LˆU n,m is an average of terms, one for each s ∈JnK m , its variance depends on the covariances between pairs of terms for index sets s and s 0, which in turn depend on how many indices are shared by s and s 0. This motivates the following definition: Definition 3.1. Let V1*, . . . , V*2m be i.i.d. log-weights. For 0 ≤ c ≤ m*, take* s, s 0 ∈J2mK m *with* |s ∩ s 0| = c. Using h *from* Eq. (2)*, define* $\zeta_{c}=\text{Cov}\left[h(V_{s_{1}},\ldots,V_{s_{m}}),\,h(V_{s_{1}^{\prime}},\ldots,V_{s_{m}^{\prime}})\right],$ which depends only on c and not the particular s and s 0. In words, this is the covariance between two IW-ELBO estimates, each using one batch of m i.i.d. log-weights, and where the two batches share c log-weights in common. For example, when m = 2 we have $\zeta_{0}=0,\quad\zeta_{1}=\mbox{Cov}[\ln(\frac{1}{2}e^{V_{1}}+\frac{1}{2}e^{V_{2}}),\ln(\frac{1}{2}e^{V_{1}}+\frac{1}{2}e^{V_{2}})],\quad\mbox{and,}\quad\zeta_{2}=\mbox{Var}[\ln(\frac{1}{2}e^{V_{1}}+\frac{1}{2}e^{V_{2}})].$ Then, due to Hoeffding's classical result, Proposition 3.2. With ζ1, and ζm defined as above, the standard IW-ELBO estimator Lˆn,m (Estimator 1) and complete U-statistic estimator (Estimator 2) with n = rm and r ∈ N *satisfy* $$\frac{m^{2}}{n}\zeta_{1}\leq\mathrm{Var}[\hat{\mathcal{L}}_{n,m}^{U}]\leq\frac{m}{n}\zeta_{m}=\mathrm{Var}[\hat{\mathcal{L}}_{n,m}].$$ Moreover, for a fixed m*, the quantity* n Var[LˆU n,m] tends to its lower bound m2ζ1 as n increases. Proof. The inequalities and asymptotic statement follow directly from Theorem 5.2 of Hoeffding (1948). The equality follows from the definition of ζm. Hoeffding proved that mζ1 ≤ ζm. We observe in practice that there is a gap between the two variances that leads to practical gains for the complete U-statistic estimator in real VI problems. A classical result of Halmos (1946) also shows that complete U-statistics are optimal in a certain sense: we describe how this result applies to estimator LˆU n,m in Appendix B. Finally, we conclude this discussion by stating the main analogue of Proposition 3.2 for gradient estimation. The result, also following from Theorem 5.2 of Hoeffding (1948), states that the complete U-statistic gradient estimator has total variance and expected squared norm no larger than that of the standard estimator: Proposition 3.3. Let Gˆn,m and GˆU n,m be the standard and complete-U-statistic gradient estimators formed using a symmetric base gradient estimator g(z1:m) that is unbiased for ∇φLm *and the same index sets as* Lˆn,m and LˆU n,m*, respectively. Then* tr(Var[GˆU n,m]) ≤ tr(Var[Gˆn,m]) and E kGˆU n,mk 2 2 ≤ E kGˆn,mk 2 2 . We provide a proof in Appendix B.1. ## 3.2 Computational Complexity There are two main factors to consider for the computational complexity of an IW-ELBO estimator: 1) The cost to compute n log-weights Vi = ln p(Zi, x) − ln q(Zi) for i ∈ JnK, and 2) the cost to compute the estimator given the log-weights. A problem with the complete U-statistics LˆU n,m and GˆU n,m is that they use JnK m = nm distinct subsets of indices in Step 2), which is expensive. It should be noted that these log-weight manipulations are very simple, while, for many probabilistic models, computing each log-weight is expensive, so, for modest m and n, the computation may still be dominated by Step 1). However, for large enough m and n, Step 2) is impractical. ## 4 Incomplete U-Statistic Estimators In practice, we can achieve most of the variance reduction of the complete U-statistic with only modest computational cost by averaging over only k nm subsets of indices selected in some way. Such an estimator is called an *incomplete U-statistic*. Incomplete U-statistics were introduced and studied by Blom (1976). A general incomplete U-statistic for the IW-ELBO has the form in Eq. (5) where S (JnK m is a collection of size-m subsets of JnK that does not include every possible subset. We will also allow S to be a multi-set, so that the same subset may appear more than once. Note that the standard IW-ELBO estimator Lˆn,m is itself an incomplete U-statistic, where the k = r = n m index sets are disjoint. We can improve on this by selecting *k > r* sets. Estimator 3 (Random subsets). The *random-subset incomplete-U-statistic estimator for the IW-ELBO* is the estimator LˆSk where Sk is a set of k subsets (si) k i=1 drawn uniformly at random (with replacement) from JnK m . We next introduce a novel incomplete U-statistic, which is both very simple and enjoys strong theoretical properties. Estimator 4 (Permuted block). The permuted block estimator is computed by repeating the standard IW-ELBO estimator ` times with randomly permuted log-weights and averaging the results. Formally, the permuted-block incomplete-U-statistic estimator for the IW-ELBO is the estimator LˆS`Π with the collection S ` Π defined as follows. Let π denote a permutation of JnK. Define Sπ as the collection obtained by permuting indices according to π and then dividing them into r disjoint sets of size m. That is, $=\;\bigg\{\int\pi(1),\pi(2),\,.\,$ Sπ = nπ(1), π(2)*, . . . , π*(m) ,π(m + 1), . . . , π(2m) *, . . . ,* π(r − 1)m + 1*, . . . , π*(rm) o. Now, let S ` Π =Uπ∈Π Sπ where Π is a collection of ` random permutations and Udenotes union as a multiset. The total number of sets in S ` Π is k = r`. Both incomplete-U-statistic estimators can achieve variance reduction in practice for a large enough number of sets k, but the permuted block estimator has an advantage: its variance with k subsets is never more than that of the random subset estimator with k subsets, and never more than the variance of the standard $\downarrow$ . IW-ELBO estimator (and usually smaller). On the other hand, the variance of the random subset estimator is more than that of the standard estimator unless k ≥ k0 for some threshold k0 > r. Proposition 4.1. Given m and n = rm, the variances of these estimators satisfy the following partial ordering: $$\underbrace{\text{Var}[\hat{\mathcal{L}}_{n,m}^{U}]}_{\text{complete}}\stackrel{{(a)}}{{\leq}}\underbrace{\text{Var}[\hat{\mathcal{L}}_{S_{\text{f}}^{U}}]}_{\text{permuted block}}\stackrel{{(b)}}{{\leq}}\underbrace{\text{Var}[\hat{\mathcal{L}}_{n,m}]}_{\text{random subset}}.\tag{7}$$ Moreover, if the number of permutations ` > 1 and Var[LˆU n,m] < Var[Lˆn,m]*, then* (b) *is strict; if* r = n m > 1, then (c) *is strict.* (Note that the permuted and random subset estimators both use k = r` subsets.) Proof. By Def. 3.1, if s and s 0 are uniformly drawn from JnK m and κ = JnK m , we have s,s 0∈(JnK m ) ζ|s∩s 0| κ 2=X s,s 0∈(JnK m ) E[h(Vs1 , . . . , Vsm)h(Vs 0 1 , . . . , Vs 0m )] κ 2− E[LˆU n,m] 2 = Var[LˆU n,m]. (8) E[ζ|s∩s 0|] = X Let π1*, . . . , π*` be the random permutations. Observe that for s, s 0 ∈ Sπi distinct, i.e., two distinct sets within the ith block, s and s 0 are disjoint and then h(Vs1 , . . . , Vsm) is independent of h(Vs 0 1 , . . . , Vs 0m ). Hence, all dependencies between different sets are due to relations between permutations, i.e., each of the `r terms will have a dependency with the (` − 1)r terms not in the same permutation. Therefore, it follows from (8) that the total variance of LˆS`Π is $$\mathrm{Var}[\hat{\cal L}_{S_{n}^{\ell}}]=\frac{1}{\ell r}\zeta_{m}+(1-\frac{1}{\ell})\,\mathrm{Var}[\hat{\cal L}_{n,m}^{U}],$$ n,m], (9) i.e., a convex combination of 1 r ζm = Var[Lˆn,m] and Var[LˆU d $\operatorname{Var}[\hat{\mathcal{L}}_{n,m}^U]$. Hence, using Proposition 3.2, (a) an holds. By a similar argument, the total variance of LˆSr` is $$\mathrm{Var}[\hat{\mathcal{L}}_{S_{r t}}]=\frac{1}{\ell r}\zeta_{m}+(1-\frac{1}{r\ell})\,\mathrm{Var}[\hat{\mathcal{L}}_{n,m}^{U}].$$ Then, (c) holds because $$\mathrm{Var}[\hat{\mathcal{L}}_{\mathcal{S}_{\Pi}^{\ell}}]-\mathrm{Var}[\hat{\mathcal{L}}_{\mathcal{S}_{r\ell}}]=\frac{1}{\ell}(\frac{1}{r}-1)\,\mathrm{Var}[\hat{\mathcal{L}}_{n,m}^{U}]\leq0.$$ A remarkable property of the permuted-block estimator is that we can choose the number of permutations ` to guarantee what fraction of the variance reduction of the complete estimator we want to achieve. Say we would like to achieve 90% of the variance reduction; then it suffices to set ` = 10. The following Proposition formalizes this result. Proposition 4.2. Given m and n = rm, for ` ∈ N the permuted-block estimator achieves a (1−1/`) *fraction* of the variance reduction provided by the complete U-statistic IW-ELBO estimator, i.e., $$\underbrace{\mathrm{Var}[\hat{\mathcal{L}}_{n,m}]}_{\mathrm{standard}}-\underbrace{\mathrm{Var}[\hat{\mathcal{L}}_{\mathrm{n}}^{c}]}_{\mathrm{permuted~block}}=(1-\frac{1}{\ell})(\underbrace{\mathrm{Var}[\hat{\mathcal{L}}_{n,m}]}_{\mathrm{standard}}-\underbrace{\mathrm{Var}[\hat{\mathcal{L}}_{n,m}^{U}]}_{\mathrm{complete}}).$$ $$({\mathfrak{g}})$$ $\square$ $\square$ Proof. This follows directly from Eq. (9). The conclusions of Propositions 4.1 and 4.2 do not depend on the kernel. This means they provide strong guarantees for our novel and simple permuted-block incomplete U-statistic with any kernel, which may be of general interest, and also imply the following result for gradients: Proposition 4.3. *The conclusion of* Proposition 4.2 *holds with* Var[Lˆ] replaced by either E[kGkˆ 22 ] or tr([Var[Gˆ]), for each pair (Lˆ, Gˆ) *of objective estimator and gradient estimator that use the same collection* S of index sets, and for any base gradient estimator g(v1:m). ## 5 Efficient Lower Bounds In the last section, we approximated the complete U-statistic by averaging over k nm subsets. For example, by Proposition 4.2, we could achieve 90% of the variance reduction with 10× more batches than the standard estimator, and the extra running-time cost is often very small in practice. An even faster alternative is to approximate the kernel in such a way that we can compute the complete U-statistic without iterating over subsets. In this section, we introduce such an approximation for the IW-ELBO objective, where the extra running-time cost is a single sort of the n log-weights, which is extremely fast. Furthermore, Proposition 5.2 below will show that it is always a lower bound to LˆU n,m and has bounded approximation error, so its expectation lower bounds Lm and ln p(x); thus, it can be used as a surrogate objective within VI that behaves well under maximization. We then introduce a "second-order" lower bound, which has provably lower error. Unlike the last two sections, these approximations do not have analogues for arbitrary gradient estimators such as DReG or score function estimators. For optimization, we use reparameterization gradients of the *surrogate* objective. Estimator 5. The *approximate complete U-statistic IW-ELBO estimator* is $$\hat{\mathcal{L}}_{n,m}^{\mathcal{A}}(V_{1:n})=\binom{n}{m}^{-1}\sum_{\mathbf{s}\in\left(\genfrac{}{}{0.0pt}{}{\ln1}{m}\right)}\max(V_{s_{1}},\ldots,V_{s_{m}})-\ln m.$$ This estimator uses the approximation lnPm i=1 e vi ≈ max{v1*, . . . , v*m} for log-sum-exp. The following Proposition shows that we can compute LˆA n,m *exactly* without going over the nm subsets but instead taking only O(n ln n) time. The intuition is that each of the n log-weights will be a maximum element of some number of size-m subsets, and each such term in the summation for LˆA n,m will be the same. Moreover, we can reason in advance how many times each log-weight will be a maximum. Proposition 5.1. *For any* v1:n ∈ R n*, it holds that* $$\hat{\mathcal{L}}_{n,m}^{\mathcal{A}}(v_{1:n})\equiv\binom{n}{m}^{-1}\sum_{i=1}^{n}b_{i}v_{[i]}-\ln m,$$ where bi = n−i m−1 , if i ∈ Jn − (m − 1)K (and 0 otherwise), and [ · ]: JnK → JnK is a permutation s.t., the sequence of log-weights v[1], . . . , v[n]is non-increasing. $\sigma_{\rm s}(\mathbb{H})$ Proof. For s ∈JnK m ), let $\mathbf{v_s}=(v_{s_1},\ldots,v_{s_m})$. , . . . , vsm). We can see that max vs = v[i] where i is the smallest index in s. Thus, $$\sum_{\mathbf{s}\in{\binom{[n]}{m}}}\operatorname*{max}\mathbf{v_{s}}=\sum_{i=1}^{n}b_{i}v_{[i]},$$ where biis the number of sets s ∈JnK m with minimum index equal to i. The conclusion follows because there are n − i indices larger than i, but we can take m − 1 of them only when i ∈ Jn − (m − 1)K. To further understand both the computational simplification and the quality of this approximation, consider this real example of computing the (non-approximate) complete U-statistic IW-ELBO estimator LˆU 4,2 . Suppose that the sampled log-weights are $$\mathbf{v}=(-6034.091,-4351.335,-4157.236,-5419.201).$$ Given the 42 sets, we can evaluate the kernel h(vi, vj ) = ln(e vi + e vj ) − ln 2 on each of them to generate the following table: | (vi , vj ) | h(vi , vj ) | |------------------------|---------------| | (−6034.091, −4351.335) | −4352.028 | | (−6034.091, −4157.236) | −4157.930 | | (−6034.091, −5419.201) | −5419.895 | | (−4351.335, −4157.236) | −4157.930 | | (−4351.335, −5419.201) | −4352.028 | | (−4157.236, −5419.201) | −4157.930 | | Mean | −4432.956 | At three decimal points of precision, we see that h(vi, vj ) = max(vi, vj ) + ln 2 and therefore −4157.930, −4352.028, and −5419.895 each appear 31 times, 21 times, and once, respectively. ![8_image_0.png](8_image_0.png) $$(10)$$ Figure 2: Median envelope of the objective using the permuted-block and standard IW-ELBO estimators for the mushrooms (left), mesquite (center) and electric-one-pred (right) models. In all cases we used n = 16 and m = 8. For reference there is a line segment of length similar to the average objective difference, respectively, 202.08, 0.97, and −3.91. ## 5.1 Accuracy And Properties Of The Approximation It is straightforward to derive both upper and lower bounds of the complete U-statistic IW-ELBO estimator LˆU n,m from this approximation. Proposition 5.2. *For any set of log-weights* v1:n ∈ R n*, it holds that* $$\hat{\mathcal{L}}_{n,m}^{\mathcal{A}}(v_{1:n})\leq\hat{\mathcal{L}}_{n,m}^{U}(v_{1:n})\leq\hat{\mathcal{L}}_{n,m}^{\mathcal{A}}(v_{1:n})+\ln m.$$ n,m(v1:n) + ln m. (10) Moreover, the first inequality is strict unless m = 1. On the other hand, the second inequality is an equality when all log-weights are equal. Proof. This is a direct application of well-known inequalities for log-sum-exp. Let h(v1*, . . . , v*m) = ln PM i=1 e vi and f(v1*, . . . , v*m) = max{v1*, . . . , v*m}. Then, for all v1:m ∈ R m, $$f(v_{1:m})\leq h(v_{1:m})\leq f(v_{1:m})+\ln m.$$ f(v1:m) ≤ h(v1:m) ≤ f(v1:m) + ln m. (11) To see this, write vˆ = max{v1*, . . . , v*m}. Then, $$(11)$$ $${\frac{1}{m}}e^{\dot{v}}\leq{\frac{1}{m}}\sum_{j=1}^{m}e^{v_{j}}\leq e^{\dot{v}}.$$ vˆ. (12) Eq. (11) follows from applying ln to (12). Eq. (10) then follows from (11) and the definitions of LˆA n,m and LˆU n,m. One comment about the approximation quality is in order: in the limit as the variance of the log-weights decreases, the second inequality in the bounds above becomes tight, and the approximation error of LˆA n,m(v1:n) approaches its maximum ln m. This can be seen during optimization when maximizing the IW-ELBO, which tends to reduce log-weight variance [cf. Figure 4]. $$(12)$$ ## 5.2 Second-Order Approximation Based on our understanding of the approximation properties of LˆA n,m, we can add a correction term to obtain a second-order approximation. Estimator 6. For 2 ≤ m ≤ n, the *second-order approximate complete-U-statistic IW-ELBO estimator* is $$\tilde{\cal L}^{A,2}_{n,m}(V_{1:n})=\tilde{\cal L}^{A}_{n,m}(V_{1:n})+{n\choose m}^{n-1}\sum_{i=1}^{n-(m-1)}\tilde{b}_{i}\ln(1+e^{\Delta V_{[i]}}),\tag{13}$$ $\mathbf{r}$ $\epsilon$ 2. $|\mathbb{I}|=\frac{\pi}{2}$. $$=V_{[i+1]}-V$$ where ∆V[i] = V[i+1] − V[i] and ˜bi =n−1−i \] and $\tilde{b}_i={n-1-i\choose m-2}$. This can still be computed in O(n ln n) time and gives a tighter approximation than LˆA n,m. Proposition 5.3. *For all* v1:n ∈ R n, $$\hat{\cal L}_{n,m}^{\cal A}(v_{1:n})<\hat{\cal L}_{n,m}^{\cal A,2}(v_{1:n})\leq\hat{\cal L}_{n,m}^{U}(v_{1:n}).$$ Moreover, the second inequality is an equality exactly when m = n = 2. Proof. The first inequality follows directly because the terms in the summation of (13) are positive reals. For the second inequality, take s ∈JnK m and let i be the smallest index in s. If s is one of the n−1−i m−2 sets on which i is the smallest index and i + 1 ∈ s, then $${\frac{1}{m}}e^{v_{[i]}}\big(1+e^{v_{[i+1]}-v_{[i]}}\big)={\frac{e^{v_{[i]}}+e^{v_{[i+1]}}}{m}}\leq{\frac{1}{m}}\sum_{s\in{\mathfrak{s}}}e^{v_{s}}.$$ If i + 1 6∈ s, we know that 1m e v[i] ≤ 1 m Pm j=1 e vsj . We finish by applying logarithm to both inequalities and the definition of LˆA n,m and LˆU n,m. In contrast to LˆA n,m, the second-order approximation is not a U-statistic. However, it is a tighter lower-bound of LˆU n,m. Note 5.4. To use the approximations as an objective, we need them to be differentiable. If the distribution of W is absolutely continuous, then the approximations are almost surely differentiable because sort is almost surely differentiable, with Jacobian given by the permutation matrix it represents [cf. Blondel et al. (2020)]. ## 6 Experiments In this section, we empirically analyze the methods proposed in this paper. We do so in three parts: we first study the gradient variance, VI performance, and running time for IWVI in the "black-box" setting2; we then focus on a case where the posterior has a closed-form solution, using random Dirichlet distributions; and finally, we study the performance of the estimators for Importance-Weighted Autoencoders. For black-box IWVI, we experiment with two kinds of models: Bayesian logistic regression with 5 different UCI datasets (Dua & Graff, 2017) using both diagonal and full covariance Gaussian variational distributions,3 and a suite of 12 statistical models from the Stan example models (Stan Development Team, 2021; Carpenter et al., 2017), with both diagonal (all models) and full covariance Gaussian (10 models4) approximating 2That is, VI that uses only black-box access to ln p(*z, x*) and its gradients. 3That is, p(y | θ) = QN i=1 Bernoulliyi; logistic(θ T xi)for fixed xi ∈ Rd and p(θ) = N (θ; 0, σ2Id), and V = ln p(*θ, y*)−ln q(θ) for θ ∼ q(θ) with either q(θ) = N (θ; µ, diag(w)) or q(θ) = N (θ; *µ, LL*T ); we optimize over (µ, w) or (*µ, L*), with w constrained to be positive (via exponential transformation) and L constrained to be lower triangular with positive diagonal (via softplus transformation). Parameters were randomly initialized prior to transformations from iid standard Gaussians. 4The irt-multilevel model diverged for all configurations using a full covariance Gaussian. distributions. We provide additional information regarding the models in Appendix C. For each model, the variational parameters were optimized using stochastic gradient descent with fixed learning rate for 15 different logarithmically spaced learning rates. We used n = 16 samples per iteration except for the running time analysis, and experimented with m ∈ {2, 4, 8}. Since this is a stochastic optimization problem, we ran every combination of model, learning rate, n, and m, using 50 different random seeds to assess typical performance. We used the reparameterization gradient estimator as the base gradient estimator, and also provide in Appendix D and G (very similar) results for the doubly-reparameterized (DReG) gradient estimator. ![10_image_0.png](10_image_0.png) (a) Ratio of gradient's total variance. (b) Ratio of the objective's variances. Figure 3: Ratios of the trace of the variance (i.e., the total variance) of different proposed gradient estimators to that of the standard gradient estimator, and objective's variances (b) for the mushrooms dataset (d = 96). All ratios are below 1, which indicates variance reduction. The estimators can be ordered by variance: the complete-U-statistic estimator and second-order approximation are lowest, followed by the permutedblock and first-order approximation, and finally the random subsets estimator. Since ` = 20, we expect the permuted-block estimator to achieve 1 − 1 20 = 0.95 of the variance reduction; the estimated variance reduction is 91.24% and 95.72% for the objective. Gradient Variance We first confirm empirically that U-statistics reduce the variance of gradients within IWVI. For each random seed, we performed IWVI using the complete U-statistic LˆU n,m for 10,000 iterations. Every 200 iterations, we computed the gradients, given the values of the parameters at that time, for each of the alternative gradient estimators: the standard estimator, the complete U-statistic estimator, its approximations, the permuted-block estimator with ` = 20, and the random subsets estimator with k = 20 nm (a number of sets equal to the permuted version). In all cases we used n = 16 and m = 8. For each gradient estimator Gˆ, we estimate the total variance tr(Var[Gˆ]) using 200 independent gradient samples. Figure 3–(a) shows the total variance of each estimator as a fraction of that of the standard estimator (that is, the ratio tr(Var[Gˆ])/ tr(Var[Gˆn,m])) for Bayesian logistic regression with the mushrooms dataset. The ratios are between 60% and 70% for all methods, with the random subsets estimator showing the highest variance and the complete U-statistic the lowest. This confirms it is possible to reduce gradient variance with U-statistics. Moreover, the estimators can be ordered by their gradients' total variance. The complete U-statistic estimator and the 2nd order approximation have the smallest variance, the permutedblock estimator has slightly higher variance, and the random subsets estimator has the highest variance (but still less than that of the standard estimator). Recall that, according to Prop. 4.2, ` = 20 implies that the permuted estimator achieves 95% of the variance reduction provided by the complete-U-statistic IW-ELBO. In this case, we estimated the variance reduction of the permuted-block estimator to be 91.24% of that of the complete-U-statistic estimator. We also show the ratio of the objective's variances in Figure 3–(b). Most estimators have a ratio of around 80%, but the permuted-block estimator achieves a 95.72% variance reduction provided by the complete U-statistic estimator. | | permuted − standard IW-ELBO | | |------------|-------------------------------|----------| | Dataset | m = 8 | | | | Full Covariance | Diagonal | | a1a | 112.42 | 4.48 | | australian | 3.36 | 1.38 | | ionosphere | 16.58 | 0.06 | | mushrooms | 202.56 | 8.69 | | sonar | 50.62 | 0.19 | Table 1: For Bayesian logistic regression (a) and Stan models (b), difference in nats of the average objective (higher is better) when trained using the permuted estimator vs. the standard IW-ELBO estimator. The variational distribution is a Gaussian distribution using either a full rank covariance matrix (first column) or a diagonal one (second column). The entry is "—" when the model diverged for all configurations, and NaN when it diverged for the specific configuration (other configurations are found in the Appendix). (a) Bayesian logistic regression models. (b) Stan models. VI Performance Ultimately, our goal is to provide a more efficient optimization method. To measure typical stochastic optimization performance, we first took the maximum objective value across learning rates in each iteration to construct the optimization *envelope* for each method and random seed [cf. Geffner & Domke (2018)]. The purpose of the envelope is to eliminate the learning rate as a nuisance parameter since stochastic optimization methods are very sensitive to learning rate, and one common benefit of variance reduction is to allow a larger learning rate. Then, for each method we used the median envelope across the 50 random seeds as a measure of its typical optimization behavior over iterations. Examples can be seen in Figure 2. As a final metric for each method we computed the *average objective* value (of the median envelope) across iterations up to 10,000 iterations,5excluding the first 50 iterations, which were highly noisy and sensitive to initialization. This is a useful summary metric to measure the tendency of one method to "stay ahead" of another (see the examples in Figure 2). Agrawal et al. (2020) found a similar metric effective for learning rate selection. | | (b) Stan models. permuted − standard IW-ELBO | | | |-------------------|------------------------------------------------|----------|-------| | Dataset | | m = 8 | | | | Full Covariance | Diagonal | | | congress | | 19.80 | 7.33 | | election88 | | 1133.70 | 6.94 | | election88Exp | | NaN | 32.76 | | electric | | 80.46 | 4.32 | | electric-one-pred | | -3.45 | -3.91 | | hepatitis | | NaN | 0.65 | | hiv-chr | | 283.19 | 15.84 | | irt | | 16077.03 | 1.00 | | irt-multilevel | | - | 62.32 | | mesquite | | 1.41 | 2.00 | | radon | | 268.98 | 14.83 | | wells | | -0.03 | -0.11 | Table 1 shows the average objective difference between the permuted-block and standard IW-ELBO estimators for m = 8, with positive numbers indicating better performance for permuted-block. We focus on permutedblock here because it consistently achieves an excellent tradeoff between variance reduction and running time. In Appendix D we present similar results for two additional methods—the 2nd order approximation and the permuted-block estimator with DReG as the base gradient estimator—and for different values of m; in Appendix G we show the median envelopes themselves for many combinations of models, methods, and m. The examples in Figure 2 were selected to show cases where the difference is big (left), small (center), and negative (right); to contextualize our summary metric, we also added a reference vertical bar showing an iteration where the difference between the two envelopes is approximately Method Time (s) Mean Std Lˆ24,12 standard IW-ELBO 5.47 0.04 LˆU 24,12 complete U 1573.27 2.12 LˆS20 24 12 random subsets 6.49 0.09 Lˆ20 SΠ permuted block 6.45 0.09 LˆA 24,12 approx. 5.25 0.02 LˆA,2 24,12 approx. 2nd order 5.54 0.04 Table 2: Times for 1000 iterations of optimization with different estimators on the mushrooms dataset with n = 24, m = 12, averaged over 100 trials. 5For some datasets, such as sonar, we observed early convergence by visual inspection and computed the metric only up to that point. See Figures in Appendix G. equal to the average objective difference. These results make it clear that the permuted-block estimator improves the convergence of stochastic optimization for VI across a range of models and settings. In electric-one-pred, permuted-block was consistently worse, but we verified that it still had lower-variance gradients; we speculate this is an unstable model where higher variance gradients help escape local optima. Running Time Table 2 shows the times required to complete 1000 iterations of optimization with different estimators for Bayesian logistic regression with the mushrooms data set, averaged over 100 trials. Here we used n = 24 and m = 12, which makes it a challenging setting for the complete U-statistic estimator, because there are 24 12= 2,704,156 sets. As expected, the complete U-statistic is orders of magnitude slower. The approximations are faster than the standard estimator because the smallest m − 1 log-weights do not contribute to the objective, and thus their gradients are not needed. The permuted-block estimator incurs an extra cost of less than 1 ms per iteration compared to the standard IW-ELBO estimator for this model (a 18% increase). However, the increased time only depends on m, n, and `, and not on the model. Even for a very complex model, we would expect the extra time for these settings to be on the order of order of 1 ms per iteration, and be negligible compared to other costs. For example, for the irt, the standard estimator took 16.62s (0.11), while the permuted-block estimator took 17.54s (0.12), i.e., a 5% increase. Incomplete U-Statistics and Approximations Previously, we analyzed the methods by comparing them to the standard IW-ELBO estimator. In this part we will use the complete Ustatistics as a baseline: given a realization of log-weights v1*, . . . , v*n, we measure the difference between the objective value assigned by the complete U-statistic and the alternatives. For this experiment, we will use n = 16, m = 8 and the Bayesian logistic regression dataset mushrooms. In Figure 4 we plot the difference measured in nats as a function of the iteration step. From that plot (especially the inset), it is clear that the approximations are underestimators. It is also interesting to see the approximations and the incomplete U-statistic being complementary: as the optimization progresses, the error of the approximations increases, but the error made by the incomplete U-statistics decreases. We expected this result because the variance of the log-weights decreases with the optimization. (The upper-bound of Eq. (10) is achieved when all vi are equal; but this is exactly the case when all the incomplete U-statistics coincide.) ![12_image_0.png](12_image_0.png) Dirichlet Experiments We conducted experiments with random Dirichlet distributions as described in (Domke & Sheldon, 2018). The goal was twofold. First, this is a setting where exact inference is possible, so we can evaluate IWVI with different estimators on the accuracy of posterior inference directly, instead of using the IW-ELBO as a proxy. Secondly, this is a simple setting to demonstrate that the optimal value of m is often strictly between 1 and n, which is the regime in which our variance reduction methods are useful (all but the approximations coincide when m ∈ {1, n}). We again used SGD with 15 different learning rates and selected, for each configuration, the learning rate that achieved the best mean objective after 10k iterations. We optimized each configuration using, for this experiment, 100 different random seeds. We estimated the accuracy of the approximation by computing the distance (error) between the distribution's covariance and the estimated covariance of the learned approximation. Figure 1 shows the error as a function of m for different values of n when using the standard IW-ELBO estimator for a random Dirichlet with 50 parameters. The figure shows that the optimal m increases with n, but slowly. Figure 5 shows similar results for other esti- Figure 4: Difference between estimated value using any of the methods and that of the complete Ustatistic, in nats, for the mushrooms dataset. (25th and 75th percentiles shown with dashed lines.) As optimization progresses, the error of the incomplete Ustatistics decreases, but the error of the approximation increases. The inset shows the permuted and both approximations in a region that is 0.5 nats of the target value. 13 ![13_image_0.png](13_image_0.png) mators: permuted, DReG, and permuted-DReG. In all cases, we confirm that, for this model, the optimal m lies strictly between 1 and n. We provide in Appendix E additional details. Figure 5: Distance between the covariance of a random Dirichlet distribution with 50 parameters and the covariance of its approximation as a function of m for different values of n after training using the permuted (left), DReG (center) or permuted-DReG (right) estimators. ## 6.1 Importance-Weighted Autoencoders To evaluate the performance of the proposed methods on IWAEs, we trained IWAEs on 4 different datasets: MNIST, KMNIST, FMNIST, and Omniglot. We compare the standard IW-ELBO estimator and DReG estimators to their permuted versions, i.e., the permuted and permuted-DReG estimators. We also evaluate the secondorder approximation to the complete-U-statistic estimator. We trained each combination of dataset, method, and value of m using five different random seeds, and the optimization was run for 100 epochs using Adam (Kingma & Ba, 2015). In Figure 6, we present the final testing objective for different values of m (using n = 50 in all cases) for the KMNIST dataset, and we show results for the rest of the datasets in Figure 8 in the Appendix F along with further details on the experiments. The figure shows that the permuted versions consistently improved over the base versions, i.e., the permuted estimator improves over the standard-IW estimator in the same way as the permuted-DReG estimator improves over the DReG estimator. Additionally, we can see that the second-order approximation outperforms the permuted estimator for small values of m. However, as m increases, the permuted estimator takes the lead, which is expected since the approximation error grows with m. ![13_image_1.png](13_image_1.png) Figure 6: Objective's distribution for KMNIST with n = 50 and different combinations of methods and m. We also compared the total wall-clock time required to complete the optimization with different estimators in Figure 9 in the Appendix. It can be seen that there is not a significant time increase for using our proposed methods. ## 7 Related And Future Work Gradient variance reduction is an active topic in VI because of its impact on stochastic optimization. Our complete- and incomplete-U-statistic methods are complementary to other variance reduction techniques: they are compatible with different base estimators, including the Doubly Reparameterized Gradient Estimator (DReG) of Tucker et al. (2018) and the generalization of Bauer & Mnih (2021). Another broad approach to variance reduction is the use of control variates (Miller et al., 2017; Mnih & Gregor, 2014; Ranganath et al., 2014; Geffner & Domke, 2018; 2020). In the case of IWVI, the control variates of Mnih & Rezende (2016) and Liévin et al. (2020), which are designed for the score function estimator, could work as a base estimator from which a U-statistic can be built. We leave its empirical evaluation for future work. Importance-weighted estimators are also being used for the Reweighted Wake-Sleep (RWS) procedure (Bornschein & Bengio, 2015; Le et al., 2020) and its variations (Dieng & Paisley, 2019; Kim et al., 2020). Given the connection between the gradient estimators of RWS and that of the IW-ELBO [see Kim et al. (2020)], these estimators could be potentially improved by using the ideas of the complete- and incomplete-U-statistic methods. The numerical approximations of Section 5 follow a different principle of approximating the objective; it is an open question if such an approximation can be used in conjunction with other variance reduction methods. Interestingly, the first-order approximation expresses the objective as a convex combination of the ordered log-weights (minus a constant), which has a form similar to the objective presented in Wang et al. (2018), albeit with different coefficients. It would be an interesting future line of work to extend the order of Proposition 4.1 to a partial order of random variables in the sense of Mattei & Frellsen (2022). Nowozin (2018) introduced Jackknife-VI (JVI), which uses complete-U statistics to reduce bias instead of variance. In Appendix A we briefly discuss possible applications of our methods to JVI. ## 8 Conclusion We introduced novel methods based on U-statistics to reduce gradient and objective variance for importanceweighted variational inference, and found empirically that the methods improve black-box VI performance and IWAEs training. We recommend using the permuted-block estimator in any situation with r > 1 replicates: it never increases variance, and can be tuned based on computational budget to achieve any desired fraction of the possible variance reduction. In practice, a 95% fraction of possible variance reduction can be achieved at a very low cost. The approximations of Section 5 are extremely fast and provide substantial variance reduction, but are not universally better than the standard estimator because they introduce some bias that can hurt performance, especially in easier models near the end of optimization. ## Acknowledgments This material is based upon work supported by the National Science Foundation under Grant Nos. 1749854 and 1908577. JB would like to thank Tomás Geffner and Miguel Fuentes for their helpful discussions. ## References Abhinav Agrawal, Justin Domke, and Daniel Sheldon. Advances in black-box VI: Normalizing flows, importance weighting, and optimization. In *Advances in Neural Information Processing Systems (NeurIPS)*, pp. 1–8, 2020. Matthias Bauer and Andriy Mnih. Generalized doubly reparameterized gradient estimators. In *International* Conference on Machine Learning, pp. 738–747. PMLR, 2021. Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deep Universal Probabilistic Programming. *Journal of Machine Learning Research*, 2018. David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American statistical Association, 112(518):859–877, 2017. Gunnar Blom. Some properties of incomplete U-statistics. *Biometrika*, 63(3):573–580, 1976. Mathieu Blondel, Olivier Teboul, Quentin Berthet, and Josip Djolonga. Fast differentiable sorting and ranking. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 950–959. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/blondel20a.html. Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In *ICLR*, 2015. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. In *ICLR*, 2016. Bob Carpenter, Andrew Gelman, Matthew D Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of statistical software, 76(1), 2017. Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. *ACM transactions* on intelligent systems and technology (TIST), 2(3):1–27, 2011. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical japanese literature, 2018. Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting importance-weighted autoencoders. arXiv preprint arXiv:1704.02916, 2017. Adji B Dieng and John Paisley. Reweighted expectation maximization. *arXiv preprint arXiv:1906.05850*, 2019. Justin Domke and Daniel Sheldon. Importance weighting and variational inference. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 4475–4484, 2018. Justin Domke and Daniel R. Sheldon. Divide and couple: Using Monte Carlo variational objectives for posterior approximation. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 338–347, 2019. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ ml. Axel Finke and Alexandre H Thiery. On importance-weighted autoencoders. *arXiv preprint* arXiv:1907.10477, 2019. Michael C Fu. Gradient estimation. *Handbooks in operations research and management science*, 13:575–616, 2006. Tomas Geffner and Justin Domke. Using large ensembles of control variates for variational inference. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 9982– 9992, 2018. Tomas Geffner and Justin Domke. Approximation based variance reduction for reparameterization gradients. Advances in Neural Information Processing Systems, 33, 2020. Andrew Gelman and Jennifer Hill. *Data analysis using regression and multilevel/hierarchical models*. Cambridge university press, 2006. Paul R Halmos. The theory of unbiased estimation. *The Annals of Mathematical Statistics*, 17(1):34–43, 1946. Wassily Hoeffding. A class of statistics with asymptotically normal distribution. *Annals of Mathematical* Statistics, 19:273–325, 1948. Dongha Kim, Jaesung Hwang, and Yongdai Kim. On casting importance weighted autoencoder to an em algorithm to learn deep generative models. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings* of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 2153–2163. PMLR, 26–28 Aug 2020. URL https:// proceedings.mlr.press/v108/kim20b.html. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Jack PC Kleijnen and Reuven Y Rubinstein. Optimization and sensitivity analysis of computer simulation models by the score function method. *European Journal of Operational Research*, 88(3):413–427, 1996. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. *Science*, 350(6266):1332–1338, 2015. Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential Monte Carlo. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada,* April 30 - May 3, 2018, Conference Track Proceedings, 2018. Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, and Frank Wood. Revisiting reweighted wakesleep for models with stochastic control flow. In Ryan P. Adams and Vibhav Gogate (eds.), Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of *Proceedings of Machine Learning* Research, pp. 1039–1049. PMLR, 22–25 Jul 2020. URL https://proceedings.mlr.press/v115/le20a. html. Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010. Laurence Lock Lee. *U-statistics. Theory and Practice*. CRC Press, 1990. ISBN 9781351405867. Valentin Liévin, Andrea Dittadi, Anders Christensen, and Ole Winther. Optimal variance control of the scorefunction gradient estimator for importance-weighted bounds. *Advances in Neural Information Processing* Systems, 33:16591–16602, 2020. David J Lunn, Andrew Thomas, Nicky Best, and David Spiegelhalter. Winbugs - a bayesian modelling framework: concepts, structure, and extensibility. *Statistics and computing*, 10(4):325–337, 2000. Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6573–6583, 2017. Nathan Mantel. 232. note: Assumption-free estimators using U statistics and a relationship to the Jackknife method. *Biometrics*, pp. 567–571, 1967. Pierre-Alexandre Mattei and Jes Frellsen. Uphill roads to variational tightness: Monotonicity and monte carlo objectives. *arXiv preprint arXiv:2201.10989*, 2022. AC Miller, NJ Foti, A D Amour, and Ryan P Adams. Reducing reparameterization gradient variance. Advances in Neural Information Processing Systems, 2017. Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In International Conference on Machine Learning, pp. 1791–1799. PMLR, 2014. Andriy Mnih and Danilo Rezende. Variational inference for monte carlo objectives. In *International Conference on Machine Learning*, pp. 2188–2196. PMLR, 2016. Christian A. Naesseth, Scott W. Linderman, Rajesh Ranganath, and David M. Blei. Variational sequential Monte Carlo. In *International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 9-* 11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, volume 84 of *Proceedings of Machine* Learning Research, pp. 968–977. PMLR, 2018. Sebastian Nowozin. Debiasing evidence approximations: On importance-weighted autoencoders and Jackknife variational inference. In *International Conference on Learning Representations*, 2018. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips. cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. John C Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. In Advances in Kernel Methods-Support Vector Learning, 1999. Tom Rainforth, Adam Kosiorek, Tuan Anh Le, Chris Maddison, Maximilian Igl, Frank Wood, and Yee Whye Teh. Tighter variational bounds are not necessarily better. In International Conference on Machine Learning, pp. 4277–4285. PMLR, 2018. Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In Artificial intelligence and statistics, pp. 814–822. PMLR, 2014. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International conference on machine learning*, pp. 1278–1286. PMLR, 2014. Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76, 1996. Stan Development Team. Stan Example models, 2021. URL https://github.com/stan-dev/ example-models. George Tucker, Dieterich Lawson, Shixiang Gu, and Chris J Maddison. Doubly reparameterized gradient estimators for Monte Carlo objectives. In *International Conference on Learning Representations*, 2018. Aad W van der Vaart. *Asymptotic statistics*, volume 3. Cambridge University Press, 2000. Dilin Wang, Hao Liu, and Qiang Liu. Variational inference with tail-adaptive f-divergence. *Advances in* Neural Information Processing Systems, 31, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. ## A Experiments With Jackknife The relation between the jackknife estimator and complete U-statistics was made explicit early on by Mantel (1967). Recently, Nowozin (2018) used the jackknife estimator as a way to diminish the bias in IW-VI, proposing jackknife VI (JVI). Using the notation of Section 3, the jackknife estimator is $$\hat{\cal L}_{n}^{J,r}(V_{1:n})=\sum_{j=0}^{r}c(n,r,j)\hat{\cal L}_{n,n-j}^{U}(V_{1:n}),\tag{1}$$ where LˆU n,n−j is the complete U-statistic IW-ELBO estimator, and the c(*n, r, j*) are the Sharot coefficients [cf. Nowozin (2018)]. In the original version (14), it evaluates a collection of r complete U-statistics with m ranging from n to n − r. However, there is no need to constrain m in that way, i.e., we can instead compute the following estimator $$\hat{\cal L}_{n,m}^{J,r}(V_{1:n})=\sum_{j=0}^{r}c(m,r,j)\hat{\cal L}_{n,m-j}^{U}(V_{1:n}),\qquad\mathrm{for~}r<m\leq n,$$ because the bias is a function of m. This means that once m is fixed, we can pick the number of independent samples n ≥ m to reduce the variance of the estimation. ![18_image_0.png](18_image_0.png) Figure 7: Distributions of the objective using the Jackknife estimator LˆJ,1 24,8 on an approximation of the posterior of the mushrooms dataset, using different estimators. $$(14)$$ For our experiment, we optimized a variational approximation to the posterior of the mushrooms dataset as in Section 6. We used the complete-Ustatistic IW-ELBO estimator for optimization (n 0 = 16 and m = 8), and we choose the configuration with the highest final bound. We evaluated the trained model using the Jackknife estimator with n = 24, r = 1 and m = 8. For the inner estimator we used the complete-U-statistic IW-ELBO estimator, a variation of the permutedblock IW-ELBO estimator6 with ` = 20 and with ` = 100, and the second order approximation. Figure 7 shows that, when using the permuted estimator with ` = 20, the increased variance gets translated into an increased variance in the final estimation. However, it can be reduced by increasing the number of permutations to ` = 100. In the following table, we show the time taken to compute the Jackknife estimator without accounting for the time of building the index set.7 | Method | Mean time (ms) | Std | |--------------------|------------------|-------| | complete-U | 23.98 | 2.28 | | approx. 2nd | 0.71 | 0.05 | | permuted (` = 20) | 0.69 | 0.02 | | permuted (` = 100) | 0.86 | 0.03 | In this case, we observed that the alternatives are approximately 30 times faster than using the completeU-statistic. 6Since n is not an integer multiple of m − 1, we reduced the total number of sets to `mb n m c. 7We pre-computed the index set for the complete-U-statistic, which in this case requires 735471 + 346104 = 1081575 sets, taking 1.34 seconds. ## B Additional Theoretical Results In this section, we apply a result of Halmos (1946) to the estimation of the IW-ELBO. Subject to certain conditions, the estimator LˆU n,m has the smallest variance of any unbiased estimator of the IW-ELBO. The technical conditions are needed to define the class of "unbiased estimators" as ones that are unbiased for all log-weight distributions in a non-trivial class. Proposition B.1. Let EF [ · ] and VarF [ · ] *denote expectation and variance with respect to log-weights* V1, . . . , Vn drawn independently from distribution F*, and let* Lm(F) = EF -ln 1m Pm i=1 e Vi be the IWELBO with log-weight distribution F. Let F˜ *denote the set of distributions supported on a finite subset of* R. Suppose Φ *is any estimator such that* EF˜[Φ(V1:n)] = Lm(F˜) for all F˜ ∈ F˜*. Then,* $$\mathrm{Var}_{F}[\hat{\mathcal{L}}_{n,m}^{U}(V_{1:n})]\leq\mathrm{Var}_{F}[\Phi(V_{1:n})]$$ $$\square$$ whenever the latter quantity is defined, for any distribution F on the real numbers (up to conditions of measurability and integrability). Proof. The result is a direct application of Theorem 5 of Halmos (1946). For IW-ELBO estimation, the conditions are rather mild: we expect an IW-ELBO estimator to work for generic log-weight distributions. For gradient estimation, we take the conclusion lightly, because gradient estimators often use specific properties of the underlying distributions, such as having a reparameterization. ## B.1 Additional Proofs In this section, we provide a proof of Proposition 3.3. We first need to define a quantity similar to Definition 3.1. Recall, from the statement of the Proposition, that g : R dZ → R dφ , and let gi denote its ith component. Definition B.2. Let Z1*, . . . , Z*2m be i.i.d. drawn from qφ, and 1 ≤ i ≤ dφ. For 0 ≤ c ≤ m*, take* s, s 0 ∈J2mK m with |s ∩ s 0| = c. Using g *from* Proposition 3.3*, define* $\zeta_{c}^{(i)}=\text{Cov}\Big{[}g_{i}(Z_{s_{1}},\ldots,Z_{s_{m}}),\,g_{i}(Z_{s_{1}^{\prime}},\ldots,Z_{s_{m}^{\prime}})\Big{]},$ which depends only on c and not the particular s and s 0. We can now proceed with the proof. Proof of Proposition 3.3. For 1 ≤ i ≤ dφ, using Eq. (4) and (6), it follows from Theorem 5.2 of Hoeffding (1948) that $\text{Var}[(\hat{\mathcal{G}}^U_{n,m})_i]\leq$ m n ς (i) m = Var[(Gˆn,m)i]. From the definition of the covariance matrix, we get $$\operatorname{tr}(\operatorname{Var}[{\hat{\mathcal{G}}}_{n,m}^{U}])=\sum_{i=1}^{d_{\theta}}\operatorname{Var}[({\hat{\mathcal{G}}}_{n,m}^{U})_{i}]\leq\sum_{i=1}^{d_{\theta}}\operatorname{Var}[({\hat{\mathcal{G}}}_{n,m})_{i}]=\operatorname{tr}(\operatorname{Var}[{\hat{\mathcal{G}}}_{n,m}]).$$ $\overset{(1)}{\underset{m}{\Sigma m}}=$ Vs. $\mathbf{a}$ Using again that GˆU n,m and Gˆn,m are unbiased, that is, E[GˆU n,m] = E[Gˆn,m], then $$\mathbb{E}\left\|\hat{\mathcal{G}}_{n,m}^{U}\right\|_{2}^{2}=\sum_{i=1}^{d_{n}}(\text{Var}[(\hat{\mathcal{G}}_{n,m}^{U})_{i}]+\mathbb{E}[(\hat{\mathcal{G}}_{n,m}^{U})_{i}]^{2})\leq\sum_{i=1}^{d_{n}}(\text{Var}[(\hat{\mathcal{G}}_{n,m})_{i}]+\mathbb{E}[(\hat{\mathcal{G}}_{n,m})_{i}]^{2})=\mathbb{E}\left\|\hat{\mathcal{G}}_{n,m}\right\|_{2}^{2}.$$ ## C Dataset Description We provide a brief description of the datasets and models used for the experiments. The models used for Bayesian logistic regressions were taken from the UCI Machine Learning Repository Dua & Graff (2017). The rest of the models are part of the Stan Example models Stan Development Team (2021); Carpenter et al. (2017). For the dataset used for Bayesian logistic regression, whenever there was a categorical variable with k categories, we *dummified* it by creating k−1 dummies variables. Additionally, for the a1a dataset, continuous variables were discretized into quintiles following the work of Platt (1999). However, since we were unable to find the file describing the actual process used for the discretization, some discrepancies remained. | Name | Num. of | Num. of records | Comments | |-------------------|-----------|---------------------------------------------|-----------------------------------------------------------------------------| | variables | | First 1605 instances of the Adult Data Set, | | | a1a | 105 | 1605 | following LIBSVM Chang & Lin (2011), + discretized continous and dummified. | | australian | 35 | 690 | From UCI + dummified. | | ionosphere | 35 | 351 | From UCI | | mushrooms | 96 | 8124 | From UCI + dummified. | | sonar | 61 | 208 | From UCI | | congress | 4 | 343 | Gelman & Hill (2006) Ch. 7 | | election88 | 95 | 2015 | Gelman & Hill (2006) Ch. 19 | | election88Exp | 96 | 2015 | Gelman & Hill (2006) Ch. 19 | | electric | 100 | 192 | Gelman & Hill (2006) Ch. 23 | | electric-one-pred | 3 | 192 | Gelman & Hill (2006) Ch. 23 | | hepatitis | 218 | 288 | WinBUGS Lunn et al. (2000) examples | | hiv-chr | 173 | 369 | Gelman & Hill (2006) Ch. 7 | | irt | 501 | 30105 | Gelman & Hill (2006) Ch. 14 | | irt-multilevel | 604 | 30015 | Gelman & Hill (2006) Ch. 14 | | mesquite | 3 | 46 | Gelman & Hill (2006) Ch. 4 | | radon | 88 | 919 | radon-chr from Gelman & Hill (2006) Ch. 19 | | wells | 2 | 3020 | Gelman & Hill (2006) Ch. 7 | | MNIST | 784 | 60000 + 10000 | LeCun et al. (2010) | | FMNIST | 784 | 60000 + 10000 | Fashion-MNIST, Xiao et al. (2017) | | KMNIST | 784 | 60000 + 10000 | Kuzushiji-MNIST Clanuwat et al. (2018) | | Omniglot | 784 | 24345 + 8070 | Lake et al. (2015) from Burda et al. (2016) | Table 3: Description of datasets/models. ## D Pairwise Comparison In this section we present the mean difference of the medians of the envelopes as described in Section 6. We compare the methods that used the reparameterized gradients as based gradient estimator, i.e., the permutedblock estimator and the 2nd order approximation, to the standard IW-ELBO estimator. Additionally, we compare the standard IW using DReG as a based gradient estimator with a version of the permuted-block that uses the DReG as a base gradient estimator, namely, the permuted DReG. Interestingly, in the settings presented in Table 7, only the proposed methods, i.e., the complete-U statistic with its two approximations, the permuted-block, and the random subsets, converged at some point. All the other methods diverged, which explains why we cannot compute the difference. | permuted - standard IW | approx. 2nd - standard IW | | | permuted DReG - DReG | | | | | | |--------------------------|-----------------------------|--------|--------|------------------------|--------|--------|-------|--------|--------| | Dataset | | m | | | | | | | | | 2 | 4 | 8 | 2 | 4 | 8 | 2 | 4 | 8 | | | a1a | 45.36 | 100.56 | 112.42 | 47.53 | 105.30 | 122.34 | 27.30 | 111.74 | 119.51 | | australian | 1.31 | 2.61 | 3.36 | 1.07 | 2.37 | 3.22 | 1.45 | 1.87 | 3.94 | | ionosphere | 3.89 | 13.17 | 16.58 | 4.11 | 13.55 | 17.55 | 4.34 | 15.74 | 17.91 | | mushrooms | 64.46 | 145.85 | 202.56 | 67.28 | 153.58 | 214.31 | 93.45 | 186.01 | 179.02 | | sonar | 30.15 | 61.09 | 50.62 | 32.94 | 63.34 | 54.14 | 27.99 | 69.54 | 90.86 | Table 4: Bayesian logistic regression models using a Gaussian approximation with a covariance matrix of full rank. Difference in nats of the average objective (higher values are better). | permuted - standard IW | approx. 2nd - standard IW | permuted DReG - DReG | | | | | | | | |--------------------------|-----------------------------|------------------------|------|-------|-------|------|-------|-------|-------| | Dataset | m | | | | | | | | | | 2 | 4 | 8 | 2 | 4 | 8 | 2 | 4 | 8 | | | a1a | 1.54 | 4.01 | 4.48 | 1.49 | 4.08 | 4.45 | 1.40 | 12.67 | 12.86 | | australian | 0.02 | 1.00 | 1.38 | 0.05 | 0.96 | 1.28 | -0.07 | 0.06 | 1.43 | | ionosphere | -0.10 | -0.10 | 0.06 | -0.12 | -0.21 | 0.00 | -0.12 | -0.08 | 0.31 | | mushrooms | 1.88 | 2.76 | 8.69 | 1.74 | 3.30 | 9.16 | 1.94 | 4.69 | 8.50 | | sonar | 0.03 | -0.15 | 0.19 | -0.02 | -0.18 | 0.15 | 0.03 | -0.28 | 0.21 | | permuted - standard IW | | approx. 2nd - standard IW | | permuted DReG - DReG | | | | | | |--------------------------|-------|-----------------------------|-------|------------------------|--------|-------|-------|--------|-------| | Dataset | | | m | | | | | | | | 2 | 4 | 8 | 2 | 4 | 8 | 2 | 4 | 8 | | | congress | 2.50 | 4.61 | 7.33 | 2.89 | 4.76 | 7.63 | 2.37 | 4.68 | 7.02 | | election88 | 0.12 | 2.66 | 6.94 | 0.12 | 2.84 | 7.06 | 0.10 | 2.67 | 6.83 | | election88Exp | 0.82 | 98.52 | 32.76 | 4.73 | 117.78 | 55.27 | -1.89 | 100.09 | 32.53 | | electric | 0.26 | 1.53 | 4.32 | 0.16 | 1.54 | 4.52 | 0.24 | 1.56 | 4.63 | | electric-one-pred | 0.66 | -0.77 | -3.91 | 0.74 | -0.76 | -4.38 | 0.69 | -0.77 | -3.93 | | hepatitis | 0.90 | -0.06 | 0.65 | 2.06 | 156.53 | 1.86 | -0.30 | 0.92 | 0.69 | | hiv-chr | 0.16 | 2.03 | 15.84 | 0.34 | 2.12 | 21.74 | -0.08 | 1.45 | 12.91 | | irt | 0.19 | 0.80 | 1.00 | 0.15 | 0.72 | 0.93 | 0.11 | 0.61 | 1.40 | | irt-multilevel | 35.69 | 43.79 | 62.32 | 29.74 | 48.20 | 53.64 | 34.66 | 50.26 | 55.22 | | mesquite | 0.20 | 0.58 | 2.00 | -0.06 | 0.28 | 1.74 | -0.29 | 0.39 | 1.99 | | radon | 7.88 | 5.79 | 14.83 | 7.85 | 8.91 | 65.49 | 8.16 | 7.56 | 60.92 | | wells | -0.02 | 0.01 | -0.11 | -0.20 | -0.30 | -0.35 | -0.02 | -0.04 | -0.14 | Table 5: Bayesian logistic regression models using a diagonal Gaussian approximation. Difference in nats of the average objective (higher values are better). Table 6: Stan models using a diagonal Gaussian approximation. Difference in nats of the average objective (higher values are better). | permuted - standard IW | approx. 2nd - standard IW | | permuted DReG - DReG | | | | | | | |--------------------------|-----------------------------|--------|------------------------|-------|--------|--------|-------|--------|--------| | Dataset | | | m | | | | | | | | 2 | 4 | 8 | 2 | 4 | 8 | 2 | 4 | 8 | | | congress | 11.62 | 12.02 | 19.80 | 12.33 | 12.57 | 20.46 | 13.55 | 13.11 | 20.96 | | election88 | NaN | 1785 | 1133 | NaN | 2494 | 2170 | NaN | 1776 | 1116 | | election88Exp | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | | electric | NaN | -38.02 | 80.46 | NaN | -77.53 | 89.06 | NaN | -43.16 | 34.91 | | electric-one-pred | -1.81 | -4.73 | -3.45 | -3.18 | -4.77 | -4.37 | -1.79 | -4.72 | -3.46 | | hepatitis | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | | hiv-chr | NaN | NaN | 283.19 | NaN | NaN | 325.79 | NaN | NaN | NaN | | irt | 17793 | 20064 | 16077 | 19399 | 22000 | 17686 | NaN | NaN | NaN | | mesquite | 2.57 | 1.20 | 1.41 | 2.43 | 0.95 | 1.19 | 2.67 | 0.53 | 0.74 | | radon | NaN | 1150 | 268.98 | NaN | 1316 | 303.83 | NaN | 11675 | 269.26 | | wells | 0.02 | 0.07 | -0.03 | -0.29 | -0.31 | -0.29 | 0.04 | 0.02 | -0.02 | Table 7: Stan models using a full covariance Gaussian approximation. Difference in nats of the average objective (higher values are better). ## E Random Dirichlet Experiment We follow Domke & Sheldon (2018) for the Random Dirichlet experiment. For a randomly-sampled Dirichlet Distribution with 50 parameters, we approximate it using a (50 − 1)-dimensional Gaussian distribution parameterized with a full rank covariance matrix, with its domain constrained to the simplex using PyTorch's distributions (Paszke et al., 2019). We optimize each configuration using 100 different random seeds. We select the learning rate that achieved the highest objective among all learning rates that converged for all seeds. For each seed, we compute the Frobenius norm between the empirical covariance of the approximating distribution and that of the theoretical distribution (the error). The distribution of this error is shown in Figure 1 and 5. We had to exclude eight outliers with errors greater than 10−4 and up to 0.5. Interestingly, those outliers used either the DReG or permuted-DReG estimators. ## F Vae Details. For the variational autoencoders, we used, for all datasets, the architecture used by Burda et al. (2016). We trained each configuration for a fixed number of epochs (100) using Adam (Kingma & Ba, 2015) with a learning rate of 10−4. In all cases, we used a batch size of 500, and a latent variable of dimension 50, while taking n = 50 samples. Datasets were taken from PyTorch, except for the Omniglot, for which we used the construction provided by Burda et al. (2016). We evaluated using the standard IW-ELBO estimator, regardless of the estimator used for the optimization. To get consistent wall-clock time measurements, we trained only using CPU on dedicated servers, with disabled hyper-threading and a single task per core. Additionally, we used set_flush_denormal to avoid creating denormal numbers because some estimators create many of such numbers (especially DReG-like estimators), having a substantial negative impact on performance. Our implementation of DReG is based on Pyro's (Bingham et al., 2018) not-yet-integrated implementation. We are not aware of a PyTorch implementation without the extra time penalty. In the following plots, we provide the objective distribution for all dataset/method/m configurations and the distribution of the wall-clock time. ![23_image_0.png](23_image_0.png) Figure 8: Distribution of the objective for different combinations of datasets, methods and m, using n = 50 samples. ![24_image_0.png](24_image_0.png) Figure 9: Distribution of the time taken to run 100 epochs for different combinations of datasets, methods and m, using n = 50 samples. ## G Figures Of Median Envelope For some of the methods presented in the paper, we compute the median envelope during training as described in Section 6. ![25_image_0.png](25_image_0.png) the estimators complete-U, permuted and the standard IW. ![26_image_0.png](26_image_0.png) Figure 11: Median envelope for models when using a diagonal Gaussian as approximating distribution using the complete-U DReG, permuted DReG and the standard DReG gradient estimators. ![27_image_0.png](27_image_0.png) Figure 12: Median envelope for models when using a Gaussian distribution with full-rank covariance as approximating distribution for the estimators complete-U, permuted and the standard IW. ![28_image_0.png](28_image_0.png) Figure 13: Median envelope for models when using a Gaussian distribution with full-rank covariance as approximating distribution using the complete-U DReG, permuted DReG and the standard DReG gradient estimators. Table 8: Median objective averaged over the last 200 iterations when using the estimators complete-U, permuted and the standard IW. It can be seen that for at least 10 models out of 17, using either the diagonal Gaussian or the full rank covariance Gaussian approximation, the best objective is achieved with an intermediate value of m, and it is at least 1 nat larger than the objective with m = 16. These models are: congress, election88, election88Exp, electric, electric-one-pred, hepatitis, hiv-chr, irt-multilevel, mushrooms and radon. Optimizations using m = 1 are not shown. | irt-multilevel, mushrooms and radon. Optimizations using m = 1 are not shown. Diagonal Gaussian Full Rank Covariance Gaussian model method m 2 4 8 16 2 4 8 | 16 | | | | | | | | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|----------|----------|----------|----------|---------|---------|---------|---------| | a1a | standard IW | -652.8 | -649.7 | -648.0 | -647.6 | -639.0 | -660.7 | -738.1 | -772.1 | | complete-U | -652.6 | -648.9 | -646.7 | -647.6 | -637.8 | -639.0 | -663.9 | -772.1 | | | permuted | -652.6 | -649.0 | -647.0 | -647.7 | -637.8 | -639.2 | -664.1 | -771.2 | | | australian | standard IW | -264.4 | -261.2 | -259.1 | -258.0 | -256.8 | -256.8 | -256.8 | -257.2 | | complete-U | -264.8 | -261.5 | -259.1 | -258.0 | -256.8 | -256.8 | -256.8 | -257.2 | | | permuted | -264.7 | -261.4 | -259.0 | -257.9 | -256.8 | -256.8 | -256.8 | -257.1 | | | congress | standard IW | 417.1 | 419.5 | 419.5 | 417.6 | 416.9 | 417.2 | 412.4 | 403.9 | | complete-U | 418.8 | 420.3 | 420.3 | 417.6 | 419.4 | 420.3 | 419.2 | 403.9 | | | permuted | 418.5 | 420.2 | 420.4 | 417.1 | 419.4 | 420.2 | 418.9 | 402.9 | | | election88 | standard IW | -1529.5 | -1523.3 | -1525.0 | -1535.6 | NaN | -5964.2 | -4383.2 | NaN | | complete-U | -1529.7 | -1521.5 | -1519.8 | -1535.6 | NaN | -4943.1 | -2046.2 | NaN | | | permuted | -1529.4 | -1520.6 | -1520.5 | -1535.5 | NaN | -5443.7 | -3000.9 | NaN | | | election88Exp | standard IW | -1755.7 | -1570.8 | -1502.7 | -1461.2 | NaN | NaN | NaN | NaN | | complete-U | -1760.0 | -1496.5 | -1467.0 | -1461.2 | NaN | NaN | -3748.2 | NaN | | | permuted | -1766.6 | -1512.3 | -1482.7 | -1461.9 | NaN | NaN | NaN | NaN | | | electric | standard IW | -830.5 | -827.7 | -826.1 | -830.9 | NaN | -1421.1 | -1166.2 | -1207.2 | | complete-U | -830.6 | -827.0 | -823.0 | -830.9 | NaN | -1459.2 | -1090.3 | -1207.2 | | | permuted | -830.6 | -827.1 | -823.5 | -830.9 | NaN | -1413.6 | -1098.1 | -1203.4 | | | electric-one-pred | standard IW | -1148.6 | -1147.5 | -1146.4 | -1144.6 | -1153.0 | -1145.8 | -1141.5 | -1141.2 | | complete-U | -1148.3 | -1145.2 | -1146.8 | -1144.6 | -1150.7 | -1144.1 | -1140.3 | -1141.2 | | | permuted | -1148.3 | -1146.6 | -1146.5 | -1144.6 | -1151.0 | -1144.5 | -1140.0 | -1141.2 | | | hepatitis | standard IW | -561.3 | -774.9 | -775.4 | -774.4 | NaN | NaN | NaN | -1693.7 | | complete-U | -561.2 | -564.3 | -773.1 | -774.4 | NaN | -1664.4 | -1592.8 | -1693.7 | | | permuted | -561.2 | -773.8 | -775.2 | -774.4 | NaN | -1779.6 | -1715.7 | -1682.1 | | | hiv-chr | standard IW | -606.4 | -604.4 | -607.5 | -603.8 | NaN | NaN | -1879.9 | -1945.0 | | complete-U | -606.2 | -604.0 | -602.6 | -603.8 | NaN | -3395.8 | -1450.6 | -1945.0 | | | permuted | -606.2 | -604.0 | -603.1 | -603.7 | NaN | NaN | -1486.9 | -1960.9 | | | ionosphere | standard IW | -133.1 | -129.2 | -127.2 | -125.6 | -125.3 | -126.6 | -132.5 | -142.3 | | complete-U | -133.2 | -129.6 | -127.3 | -125.6 | -124.9 | -125.2 | -127.2 | -142.3 | | | permuted | -133.3 | -129.6 | -127.3 | -125.7 | -124.9 | -125.2 | -127.3 | -142.2 | | | irt | standard IW | -15887.5 | -15887.1 | -15886.8 | -15886.7 | -36563 | -64934 | -68447 | NaN | | complete-U | -15887.4 | -15887.0 | -15886.8 | -15886.7 | -33230 | -37383 | -50316 | NaN | | | permuted | -15887.4 | -15887.0 | -15886.8 | -15886.6 | -35620 | -37547 | -54763 | NaN | | | irt-multilevel | standard IW | -15204.7 | -15194.1 | -15191.3 | -15196.8 | NaN | NaN | NaN | NaN | | complete-U | -15198.7 | -15164.0 | -15185.8 | -15196.8 | NaN | NaN | NaN | NaN | | | permuted | -15200.3 | -15173.0 | -15186.2 | -15197.0 | NaN | NaN | NaN | NaN | | | mesquite | standard IW | -29.9 | -29.7 | -29.6 | -29.3 | -29.8 | -29.7 | -29.6 | -29.2 | | complete-U | -29.9 | -29.8 | -29.6 | -29.3 | -29.8 | -29.7 | -29.6 | -29.2 | | | permuted | -29.9 | -29.7 | -29.6 | -29.2 | -29.8 | -29.7 | -29.6 | -29.2 | | | mushrooms | standard IW | -211.6 | -206.5 | -204.3 | -215.5 | -180.8 | -194.2 | -215.8 | -339.0 | | complete-U | -210.6 | -204.3 | -200.8 | -215.5 | -180.2 | -180.7 | -185.6 | -339.0 | | | permuted | -210.8 | -204.5 | -201.4 | -215.4 | -180.2 | -180.7 | -187.6 | -337.3 | | | radon | standard IW | -1210.5 | -1210.4 | -1213.3 | -1210.4 | NaN | -2422.9 | -1595.8 | -1636.5 | | complete-U | -1210.5 | -1210.2 | -1210.2 | -1210.4 | NaN | -1548.0 | -1445.9 | -1636.5 | | | permuted | -1210.5 | -1210.2 | -1211.8 | -1210.4 | NaN | -1600.6 | -1454.7 | -1645.2 | | | sonar | standard IW | -136.2 | -126.3 | -121.3 | -117.9 | -138.0 | -154.9 | -200.9 | -226.7 | | complete-U | -136.5 | -127.4 | -121.5 | -117.9 | -116.6 | -120.9 | -156.3 | -226.7 | | | permuted | -136.5 | -127.3 | -121.5 | -117.9 | -116.7 | -121.5 | -158.2 | -228.5 | | | wells | standard IW | -2042.1 | -2041.9 | -2041.7 | -2041.2 | -2041.9 | -2041.8 | -2041.7 | -2041.1 | | complete-U | -2042.2 | -2042.0 | -2041.7 | -2041.2 | -2041.9 | -2041.9 | -2041.8 | -2041.1 | | | permuted | -2042.2 | -2041.9 | -2041.7 | -2041.2 | -2041.9 | -2041.8 | -2041.8 | -2041.1 | | Table 9: Median objective averaged over the last 200 iterations when using the complete-U DReG, permuted DReG and the standard DReG gradient estimators. It can be seen that for at least 8 models out of 17, using either the diagonal Gaussian or the full rank covariance Gaussian approximation, the best objective is achieved with an intermediate value of m, and it is at least 1 nat larger than the objective with m = 16. These models are: congress, election88, election88Exp, electric, electric-one-pred, irt-multilevel, mushrooms and radon. Optimizations using m = 1 are not shown. | mushrooms and radon. Optimizations using m = 1 are not shown. Diagonal Gaussian | Full Rank Covariance Gaussian | | | | | | | | | |-----------------------------------------------------------------------------------|---------------------------------|----------|----------|----------|----------|---------|---------|---------|---------| | model | method | m | | | | | | | | | 2 | 4 | 8 | 16 | 2 | 4 | 8 | 16 | | | | a1a | DReG | -652.7 | -649.9 | -648.0 | -647.0 | -659.7 | -770.3 | -936.6 | -1205.4 | | comp.-DReG | -652.5 | -648.6 | -646.5 | -647.0 | -655.7 | -667.4 | -894.2 | -1205.4 | | | perm.-DReG | -652.5 | -648.7 | -646.4 | -647.1 | -655.3 | -725.2 | -874.7 | -1209.8 | | | australian | DReG | -264.4 | -261.2 | -259.0 | -257.8 | -256.7 | -256.7 | -256.7 | -256.9 | | comp.-DReG | -264.7 | -261.6 | -259.0 | -257.8 | -256.7 | -256.7 | -256.6 | -256.9 | | | perm.-DReG | -264.7 | -261.5 | -258.9 | -257.8 | -256.7 | -256.7 | -256.6 | -256.9 | | | congress | DReG | 417.9 | 419.7 | 419.9 | 418.2 | 418.5 | 418.9 | 413.2 | 404.5 | | comp.-DReG | 419.6 | 420.5 | 420.7 | 418.2 | 420.5 | 420.8 | 419.8 | 404.5 | | | perm.-DReG | 419.4 | 420.5 | 420.7 | 417.9 | 420.4 | 420.7 | 419.8 | 404.6 | | | election88 | DReG | -1529.2 | -1522.3 | -1524.2 | -1534.9 | NaN | -5964.4 | -4349.2 | NaN | | comp.-DReG | -1529.1 | -1520.7 | -1518.4 | -1534.9 | NaN | -4950.7 | -2079.0 | NaN | | | perm.-DReG | -1529.1 | -1520.7 | -1518.4 | -1534.3 | NaN | -5439.8 | -3041.2 | NaN | | | election88Exp | DReG | -1755.8 | -1571.9 | -1502.0 | -1461.9 | NaN | NaN | NaN | NaN | | comp.-DReG | -1733.2 | -1495.9 | -1468.8 | -1461.9 | NaN | NaN | -3664.0 | NaN | | | perm.-DReG | -1766.8 | -1512.3 | -1483.3 | -1460.1 | NaN | NaN | -3947.5 | NaN | | | electric | DReG | -830.0 | -827.2 | -824.8 | -826.2 | NaN | -1417.4 | -1291.3 | -1314.0 | | comp.-DReG | -830.2 | -826.1 | -822.0 | -826.2 | NaN | -1459.2 | -1219.0 | -1314.0 | | | perm.-DReG | -829.9 | -826.3 | -822.6 | -826.3 | NaN | -1427.2 | -1239.1 | -1326.1 | | | electric-one-pred | DReG | -1148.8 | -1147.5 | -1146.4 | -1144.6 | -1153.0 | -1145.8 | -1141.4 | -1141.2 | | comp.-DReG | -1148.5 | -1146.0 | -1146.8 | -1144.6 | -1150.7 | -1144.1 | -1140.3 | -1141.2 | | | perm.-DReG | -1148.3 | -1146.7 | -1146.5 | -1144.6 | -1151.0 | -1144.5 | -1140.0 | -1141.2 | | | hepatitis | DReG | -561.3 | -776.3 | -775.1 | -774.0 | NaN | NaN | NaN | NaN | | comp.-DReG | -561.1 | -772.0 | -772.6 | -774.0 | NaN | NaN | NaN | NaN | | | perm.-DReG | -561.3 | -773.5 | -774.8 | -774.0 | NaN | NaN | NaN | NaN | | | hiv-chr | DReG | -606.2 | -604.1 | -605.4 | -602.7 | NaN | NaN | NaN | NaN | | comp.-DReG | -606.1 | -603.7 | -602.2 | -602.7 | NaN | NaN | NaN | NaN | | | perm.-DReG | -606.1 | -603.6 | -602.9 | -602.7 | NaN | NaN | NaN | NaN | | | ionosphere | DReG | -133.1 | -129.3 | -127.1 | -125.6 | -124.3 | -125.8 | -130.7 | -142.1 | | comp.-DReG | -133.2 | -129.6 | -127.2 | -125.6 | -124.2 | -124.2 | -124.7 | -142.1 | | | perm.-DReG | -133.3 | -129.6 | -127.2 | -125.6 | -124.2 | -124.3 | -125.7 | -142.0 | | | irt | DReG | -15887.3 | -15886.9 | -15886.6 | -15886.3 | NaN | NaN | NaN | NaN | | comp.-DReG | -15887.3 | -15886.9 | -15886.5 | -15886.3 | NaN | NaN | NaN | NaN | | | perm.-DReG | -15887.3 | -15886.9 | -15886.5 | -15886.3 | NaN | NaN | NaN | NaN | | | irt-multilevel | DReG | -15226.1 | -15199.4 | -15195.8 | -15224.4 | NaN | NaN | NaN | NaN | | comp.-DReG | -15206.9 | -15188.6 | -15188.0 | -15224.4 | NaN | NaN | NaN | NaN | | | perm.-DReG | -15214.0 | -15191.6 | -15188.4 | -15222.2 | NaN | NaN | NaN | NaN | | | mesquite | DReG | -29.9 | -29.8 | -29.6 | -29.4 | -29.8 | -29.7 | -29.7 | -29.3 | | comp.-DReG | -29.9 | -29.8 | -29.6 | -29.4 | -29.8 | -29.7 | -29.7 | -29.3 | | | perm.-DReG | -29.9 | -29.8 | -29.6 | -29.4 | -29.8 | -29.7 | -29.7 | -29.3 | | | mushrooms | DReG | -211.6 | -206.6 | -204.7 | -215.9 | -192.2 | -251.2 | -305.6 | -405.3 | | comp.-DReG | -210.6 | -204.3 | -201.1 | -215.9 | -180.3 | -193.6 | -253.5 | -405.3 | | | perm.-DReG | -210.7 | -204.4 | -201.5 | -215.4 | -180.4 | -194.3 | -250.9 | -400.8 | | | radon | DReG | -1210.5 | -1210.3 | -1219.9 | -1210.3 | NaN | -2410.8 | -1624.9 | -1650.9 | | comp.-DReG | -1210.4 | -1210.1 | -1210.2 | -1210.3 | NaN | -1538.0 | -1445.7 | -1650.9 | | | perm.-DReG | -1210.4 | -1210.2 | -1212.5 | -1210.2 | NaN | -1593.3 | -1466.2 | -1642.4 | | | sonar | DReG | -136.2 | -126.2 | -121.1 | -117.6 | -135.4 | -152.5 | -226.3 | -259.6 | | comp.-DReG | -136.5 | -127.2 | -121.5 | -117.6 | -115.2 | -118.4 | -155.6 | -259.6 | | | perm.-DReG | -136.5 | -127.3 | -121.3 | -117.6 | -115.3 | -118.9 | -156.4 | -260.5 | | | wells | DReG | -2042.2 | -2041.9 | -2041.8 | -2041.2 | -2041.9 | -2041.9 | -2041.8 | -2041.2 | | comp.-DReG | -2042.2 | -2042.0 | -2041.9 | -2041.2 | -2041.9 | -2041.9 | -2041.9 | -2041.2 | | | perm.-DReG | -2042.2 | -2042.0 | -2041.8 | -2041.2 | -2041.9 | -2041.9 | -2041.9 | -2041.2 | |
Review 1: Summary: This paper applies the theory of U-statistics to multisample bound and gradient estimators for variational inference. In many variational inference settings the user allots a budget of samples that can be drawn from the model at each gradient step. Those samples are then combined to estimate the bound and its gradient. The most popular existing method for combining the samples is, roughly speaking, to average each sample's log importance weight (IWAE). This paper suggests using the theory of U-statistics to more intelligently combine the samples and achieve a variance reduction. This can take the form of averaging multiple possibly-overlapping subsets of the weights. In addition to these conceptual innovations, the authors propose several estimators based on U-statistics and analyze their performance theoretically and empirically, finding that they match or exceed existing estimators while incurring a small performance penalty. Strengths and Weaknesses: Strengths: * The paper is clear and well written. * The contributions are novel, interesting, and useful. * I checked the proofs in the main text and they are correct to my knowledge. * The experimental evaluation is thorough and provides clear evidence to support the proposed methods. * Overall it is a great paper, congratulations! The main weakness I see is a lack of discussion of 'outer averaging' vs batch averaging for variance reduction. To my understanding, the standard IWAE setup does not do any outer averaging as defined in this paper. Instead, all averaging occurs at the 'batch level', i.e. IWAE bounds for different x's are averaged, not multiple IWAE bounds for the same x. I understand that your section 2.1 is largely dedicated to arguing that practitioners should consider averaging multiple bounds for the same x, but you ignore the option of instead averaging IWAE bounds for different $x$'s. This is an important and practical baseline to consider. Requested Changes: I recommend acceptance of this paper without needing any changes. However, there are a few things that would be nice to see. Mainly, it would be nice to add a discussion of batching vs averaging multiple IWAE bounds for the same x. In addition here are a few small optional edits: * On the top of page 3 you say “we use the subscript n to denote the total number of input samples used for estimation and m for the optimized IW-ELBO objective”. Although I was able to figure it out, it's not really clear to me from this sentence what m is being used for. * On the bottom of page 4 you say "can be generalized by replacing h by any other symmetric function of m variables”. While 'symmetric function of m variables' does have the technical definition 'invariant to permutations of the variables', I had to look it up just to make sure. It could be better to be more precise here, but it is probably fine as-is. * In Definition 3.1 you end saying “which depends only on c and not the particular s and s'” It kind of sounds like you are making a claim, not just a definition when you add the above statement. Maybe move it out of the definition? * It could be good to show table 2 for different values of n and m to support your claims in the text. The additional info in the appendix for the KMNIST experiments kind of gets at this this, but it could be good to support your claims more directly with experiments on the mushroom dataset. Broader Impact Concerns: No broader impact concerns. ================================================== Review 2: Summary: The authors propose a new general recipe to derive variational bounds, in the spirit of importance weighted variational inference (IWVI), introduced by Burda et al. (2016) in the Importance-Weighted Autoencoders (IWAEs) paper. The idea is quite simple yet novel in this context: using U-statistics instead of standard averages. Concretely, this means that the variational bound will be evaluated many times on different overlapping batches of importance weights. The novelty being the fact that the batches can be overlapping, and are much more numerous than for standard IWVI bounds. The authors use the standard theory of U-statistics to show that these new bounds have a lower variance than standard IWVI bounds. The downside of their new bounds is that computing all possible combinations of batches can be computationally intensive. The authors propose several solutions to this problem: sampling fewer batches than the complete U-statistic would, or relying on the fact that the logsumexp function can be approximated by the max function. In several practical situations (Bayesian models and IWAEs), these new objectives are slightly more accurate than standard IWVI. Strengths and Weaknesses: Strengths The idea of using U-statistics in this context is quite natural and excellent. Overall, the new objectives are practical, and often more accurate than standard IWAE-style bounds. We can hope that this work can benefit to several researchers using such bounds. The paper is generally well-narrated, and cites most of the relevant literature. The authors are refreshingly honest about the fact that the improvements are often slim. Weaknesses As mentioned above, the empirical gains are often slim. Although the paper reads well, a few things are a bit unclear: 1) The maximum function is nonsmooth, which may be problematic for gradient descent? Discussing this would be interesting. 2) Some of the maths deserve a bit more details, for instance: - In the beginning of Section 2, $x$ and $z$ are not properly defined, and we have no idea where they live - In Def 3.1, it would be helpful to briefly say why it only depends on $c$ - It would be nice to add a proof of Prop 3.3 (perhaps in the appendix) for completeness. While Prop 3.2 is exactly Hoeffding's result, Prop 3.3 involves some straightforward steps that would still benefit from being spelled out - It is perhaps well-known, but I was not familiar with the notation $\binom{[n] }{m}$, maybe specifying what it means would be helful - The computation needed to obtain Eqn (8) could be detailed - Right after Eqn (8), you say "for a given permutation $\pi$, all sets in $S_\pi$ will be independent". I don't understand, given $\pi$, aren't all sets actually deterministic? - In Appendix B, the paragraph before the theorem is nice but explaining the rationale of the conditions a bit more would be helful. For instance, why is $\tilde{\mathcal{F}}$ the class of finitely supported distributions? I am not particularly familiar with Halmos's famous paper, but the link between your result and the way he phrases Theorem 5 is not that obvious. 3) The use of the "envelope" of the curve in Figure 2 and later is quite peculiar, but not uninteresting. Is this something common? If yes, maybe you could motivate it a bit more (it is quickly explained in page 11, while Fig 2 is page 8, which is not great for the flow), and add citations of other papers that use it? Some statements are incomplete or not fully correct: a) Page 1: "The most obvious downside is the increased computational cost (by a factor of $m$)". Depending on the context, this can be smaller than this. For IWAEs, computing the standard bound requires a single encoding, and $m$ decodings, whereas increasing the complexity by a factor m would mean $m$ encodings, and $m$ decodings. b) Page 2: when you mention that the IWAE bound is consistant, you should mention that this is under some assumptions on the distribution of the importance weights (Burda et al. assume boundedness, Domke and Sheldon have weaker conditions) c) Page 2: "which simultaneously finds an approximating distribution that is close in KL divergence to p(z | x) (Domke & Sheldon, 2018)" I believe Domke & Sheldon actually show that, when $K \longrightarrow \infty$, the distributions are close in $\chi$ divergence instead of KL. I am not aware of precise results on what's happening for $K \in \]1,\infty[$. Requested Changes: - Clarifying/correcting the few points mentioned in the "weaknesses" section - There are a few points/references that would be interesting to discuss: - You mention "we are not aware of control variates specifically for IWVI". Control variates for IWVI were developed by Mnih and Rezende (ICML 2016, https://arxiv.org/abs/1602.06725) and revisited by Liévin et al. (NeurIPS 2020, https://arxiv.org/abs/2008.01998). - DReG was extended by Bauer and Mnih (ICML 2020, https://arxiv.org/abs/2101.11046) - It would be nice to discuss whether or not your ideas could be applied also to the reweighted wake sleep algorithm of Bornschein and Bengio (ICLR 2015, https://arxiv.org/abs/1406.2751) and its extensions (eg Dieng and Paisley, https://arxiv.org/abs/1906.05850, Le et al., https://proceedings.mlr.press/v115/le20a.html, Kim et al., https://proceedings.mlr.press/v108/kim20b.html) Broader Impact Concerns: I have no particular concern about this work. ================================================== Review 3: Summary: The paper improves the bias-variance trade-off of IWVI by using U-statistics. The main idea is to carefully construct overlapped batches to compute individual estimate for average. Several non-trivial estimates based on this idea are introduced, including 1. The complete U-statistics IW-ELBO estimator, which is computationally intense but achieves optimal variance reduction; 2. The permutation-based incomplete U-statistics IW-ELBO estimator, which allows controlling the computation by a hyper-parameter $l$ and achieves variance reduction in between the optimal and an incomplete estimator based on random selection; 3. Two approximate complete U-statistics IW-ELBO estimator, which uses the log-sum-exp approximation to allow fast approximation of the complete U-statistics. Notably, theoretical results for each of these estimators are obtained in terms of variance. Most of them have already shown the advantage (better bounds) by using the newly proposed estimators and some of them also gives practical guidance on selection of hyper-parameters such as $l$ based on Proposition 4.2. A large suite of simulation for IWVI is performed to validate the practical improvement by using the proposed estimators. Benchmark on the actual computation is also provided to show the disadvantage of the complete U-statistics IW-ELBO estimator, and the importance of reducing computation achieved in the other estimators. Finally the proposed estimators are tested in IWAEs, resulting in improved lower bound of the marginal likelihood. Strengths and Weaknesses: ## Strengths - The paper is very well-written and easy to follow. Related works are well-cited and enough technical background is provided. - The proposed method is simple but effective. The use of U-statistics is very elegant and could motivate similar application in other related estimators for variance reduction. - The paper consists a nice mix of theoretical results and empirical evaluation to study the proposed estimators. - Important theoretical results are established for guaranteed improvement from the proposed estimators as well as some guides of hyper-parameter selection. - Empirical results are aligned with the theory and show practical improvement by using the proposed estimators. ## Weaknesses - Only a few typos or presentation issues - Missing brackets ($[$, $]$) in a few expectations ($\mathbb{E}$) within the paragraphs between (5) and (6) - The statement in Proposition 3.2 is incomplete. The first sentence should end up with something like "... for some $\zeta_1, \zeta_m > 0$" (otherwise they are not defined). ## Questions - I wonder how the complete U-statistics estimator is implemented in practice and if any vectorization is used to accelerate it instead of a loop-based implementation. - For example, if we denote the weights as $\mathbf{e}=[e_1, e_2, e_3, e_4]$ and let $m=2$, one could write the complete U-statistics estimator as $h(\mathbf{e}^\top \mathbf{M}) \mathbf{1} / r$ where $\mathbf{M}$ is a $4 \times 6$ binary matrix whose columns are indicators for each subset (e.g. the first column is $[1; 1; 0; 0]$, the second column is $[1; 0; 1; 0]$, etc.) and $h(x) = \ln {1 \over m} x$ is applied column-wise. This might be effectively parallelized if the vector-matrix multiplication is done on GPUs. - Just throwing a random idea: Maybe this vector-matrix multiplication could be somehow approximated efficiently (say by some low-rank approximation or even random projection). Requested Changes: I think the paper could be accepted as it. Broader Impact Concerns: No concern as this work is mostly theoretical ================================================== Metareview: Recommendation: Accept as is Comment: All reviewers were very supportive of the paper and recommended its acceptance without revision. I concur with this assessment and believe it to be a clearly presented paper with interesting and notable novel contributions to the literature. ==================================================
# Learning Hierarchical Relational Representations Through Relational Convolutions Anonymous authors Paper under double-blind review ## Abstract A maturing area of research in deep learning is the study of architectures and inductive biases for learning representations of relational features. In this paper, we focus on the problem of learning representations of *hierarchical* relations, proposing an architectural framework we call "relational convolutional networks". The key to the framework is a novel operation that captures the relational patterns in groups of objects by convolving graphlet filters—learnable templates of relational patterns—against subsets of the input. Composing relational convolutions gives rise to a deep architecture that learns representations of higherorder, hierarchical relations. We present the motivation and details of the architecture, together with a set of experiments to demonstrate how relational convolutional networks can provide an effective framework for modeling relational tasks that have hierarchical structure. ## 1 Introduction Objects in the real world rarely exist in isolation; modeling the relationships between them is essential to accurately capturing complex systems. As increasingly powerful machine learning models progress towards building internal "world models," it is important to explore natural inductive biases to enable efficient learning of relational representations. The computational challenge lies in developing the components necessary for constructing robust, flexible, and progressively complex relational representations. Compositionality—used here to mean an ability to compose modules together to build iteratively more complex feature representations—is essential to the success of deep representation learning. For example, CNNs extract higher-level features (e.g., textures and object-specific features) by composing simpler feature maps (Zeiler and Fergus, 2014), resulting in a flexible architecture for computing "features of features". So far, work on relational representation learning has been limited to "flat" first-order architectures. In this work, we propose *relational convolutional networks* as a compositional framework for learning hierarchical relational representations. The key to the framework proposed in this paper involves formalizing the concept of convolving learnable templates of a relational pattern against a larger relation tensor. This operation produces a sequence of vectors representing the relational pattern within each group of objects. Crucially, composing relational convolutions captures higher-order relational features—i.e., relations between relations. Specifically, our proposed architecture introduces the following novel concepts and computational mechanisms. - *Graphlet filters.* A "graphlet filter" is a template for the pattern of relations between a (small) collection of objects. Since pairwise relations can be viewed as edges on a graph, the term "graphlet" is used to refer to a subgraph, and the term "filter" is used to refer to a learnable template or pattern. - *Relational convolutions.* We formalize a notion of *relational* convolution, analogous to spatial convolutions in CNNs, where a graphlet filter is matched against the relations within *groups* of objects to obtain a representation of the relational pattern in different groupings of the input. - *Grouping mechanisms.* For large problem instances, it would be computationally and statistically intractable to consider relational convolutions across all combinations of objects. To achieve scalability, we introduce a learnable grouping mechanism based on attention which identifies the relevant groups that should be considered for the downstream task. - *Compositional relational modules.* The proposed architecture supports composable modules, where each module has learnable graphlet filters and groups. This enables learning higher-order relationships between objects—relations between relations. The architecture is presented in detail in Sections 2 and 3, and a schematic of the proposed architecture is shown in Figure 1. In a series of experiments, we show how relational convolutional networks provide a powerful framework for relational learning. We first carry out experiments on the "relational games" benchmark for relational reasoning proposed by Shanahan et al. (2020), which consists of a suite of binary classification tasks for identifying abstract relational rules between a set of geometric objects represented as images. We next carry out experiments on a version of the Set game, which requires processing of higher-order relations across multiple attributes. We find that relational convolutional networks outperform Transformers, graph neural networks, as well as existing relational architectures. These results demonstrate that both compositionality and relational inductive biases are needed to efficiently learn representations of complex higherorder relations. ![1_image_0.png](1_image_0.png) ## 1.1 Related Work To place our framework in the context of previous work, we briefly discuss related forms of relational learning below, pointing first to the review of relational learning inductive biases by Battaglia et al. (2018). Figure 1: Proposed architecture for relational convolutional networks. Hierarchical relations are modeled by iteratively computing pairwise relations between objects and convolving the resultant relation tensor with graphlet filters representing templates of relations between groups of objects. Graph neural networks (GNNs) are a class of neural network architectures which operate on graphs and process "relational" data (e.g., Niepert et al., 2016; Kipf and Welling, 2017; Schlichtkrull et al., 2018; Veličković et al., 2018; Kipf et al., 2018; Xu et al., 2019). A defining feature of GNN models is their use of a form of neural message-passing, wherein the hidden representation of a node is updated as a function of the hidden representations of its neighbors on a graph (Gilmer et al., 2017). Typical examples of tasks that GNNs are applied to include node classification, graph classification, and link prediction (Hamilton, 2020). In GNNs, the 'relations' are given to the model via edges in a graph. In contrast, our architecture, as well as the explicitly relational architectures described below, operate on collections of objects without any relations given as input. Instead, such relational architectures must infer the relevant relations from the objects themselves. Still, graph neural networks can be applied to these relational tasks by passing in the collection of objects along with a complete graph. Several works have proposed architectures with the ability to model relations by incorporating an attention mechanism (e.g., Vaswani et al., 2017; Veličković et al., 2018; Santoro et al., 2018; Zambaldi et al., 2018; Locatello et al., 2020). Attention mechanisms, such as self-attention in Transformers (Vaswani et al., 2017), model relations between objects implicitly as an intermediate step in an information-retrieval operation to update the representation of each object as a function of its context. There also exists a growing literature on neural architectures that aim to explicitly model relational information between objects. An early example is the relation network proposed by Santoro et al. (2017), which produces an embedding representation for a set of objects based on aggregated pairwise relations. Shanahan et al. (2020) proposes the PrediNet architecture, which aims to learn relational representations that are compatible with predicate logic. Kerg et al. (2022) proposes CoRelNet, a simple architecture based on 'similarity scores' that aims to distill the relational inductive biases discovered in previous work into a minimal architecture. Altabaa et al. (2024) and Altabaa and Lafferty (2024b) explore relational inductive biases in the context of Transformers, and propose a view of relational inductive biases as a type of selective "information bottleneck" which disentangles relational information from object-level features. Webb et al. (2024) provides a cognitive science perspective on this idea, arguing that a relational information bottleneck may be a mechanism for abstraction in the human mind. ## 2 Multi-Dimensional Inner Product Relation Module A relation function is a function that maps a pair of objects *x, y* P X to a vector representing the relations between the two objects. For example, a relation may represent the information "x has the same color as y, x is larger than y, and x is to the left of y". In principle, this can be modeled by an arbitrary learnable function on the concatenation of the two objects' feature representations. For example, Santoro et al. (2017) models relations by MLPs applied to the concatenation of pairs of objects. However, this approach is missing some crucial inductive biases. While it is capable of modeling relations, there is no constraint that the learned pairwise function is a relation in any meaningful sense. In particular, it entangles the two objects' representations and does not explicitly compute a comparison between the two objects' features. Following previous work (e.g., Webb et al., 2021; Kerg et al., 2022; Altabaa et al., 2024), we propose modeling pairwise relations between objects via *inner products* of feature maps. This introduces added structure to the pairwise function that explicitly incorporates a comparison operation (the inner product). The advantage of this approach is that it provides added pressure to learn explicitly relational representations, disentangling relational information from attributes of individual objects, and inducing a geometry on the object space X . For example, in the symmetric case, the inner product relation rp*x, y*q xϕpxq, ϕpyqy satisfies symmetry, positive definiteness, and induces a pseudometric on X . The triangle inequality of the pseudometric expresses a transitivity property—if x is related to y and y is related to z, then x must be related to z. More generally, we can allow for multi-dimensional relations by having multiple encoding functions, each extracting a feature to compute a relation on. Furthermore, we can allow for asymmetric relations by having different encoding functions for each object. Hence, we model relations by $$r(x,y)=\left(\langle\phi_{1}(x),\psi_{1}(y)\rangle\,,\,\ldots,\,\langle\phi_{d_{r}}(x),\psi_{d_{r}}(y)\rangle\right),$$ $$(1)$$ rp*x, y*q pxϕ1pxq, ψ1pyqy , . . . , xϕdrpxq, ψdrpyqyq, (1) where ϕ1, ψ1*, . . . , ϕ*dr , ψdr are learnable functions. The intuition is that, for each dimension, the encoders extract, or 'filter' out, particular attributes of the objects and the inner products compute similarity across each attribute. A relation, in this sense, is similarity across a particular attribute. In the asymmetric case, the attributes extracted from the two objects are different, resulting in an asymmetric relation where a particular attribute of the first object is compared with a different attribute of the second object. For example, this can model relations of the form "x is brighter than y" (an antisymmetric relation). Altabaa and Lafferty (2024a) analyzes the function approximation properties of neural relation functions of the form of Equation (1). In particular, the function class of inner products of neural networks is characterized in both the symmetric case and the asymmetric case. In the symmetric case (i.e., ϕ ψ), it is shown that inner products of MLPs are universal approximators for symmetric positive definite kernels. In the asymmetric case, inner products of MLPs are universal approximators for continuous bivariate functions. The efficiency of approximation is characterized in terms of a bound on the number of neurons needed to achieve a particular approximation error. To promote weight sharing, we can have one common non-linear map ϕ shared across all dimensions together with different linear projections for each dimension of the relation. That is, r : X - X Ñ R dris given by $$r(x,y)=\left(\langle W_{1}^{k}\phi(x),W_{2}^{k}\phi(y)\rangle\right)_{k\in[d_{r}]},$$ , (2) where the learnable parameters are ϕ and Wk 1 , Wk 2 , k P rdrs. The non-linear map ϕ : X Ñ R dϕ may be an MLP, for example, and Wk 1 , Wk 2 are dproj - dϕ matrices. The class of functions realizable by Equation (2) is the same as Equation (1) but enables greater weight sharing. $$(2)^{\frac{1}{2}}$$ The "Multi-dimensional Inner Product Relation" (MD-IPR) module receives a sequence of objects px1*, . . . , x*nq as input and models the pairwise relations between them by Equation (2), returning an n - n - dr relation tensor, Rri, js rpxi, xj q, describing the relations between each pair of objects. ## 3 Relational Convolutions With Graphlet Filters 3.1 Relational Convolutions With Discrete Groups In this section, we formalize a *relational convolution* operation which processes pairwise relations between objects to produce representations of the relational patterns within groups of objects. Suppose that we have a sequence of objects px1*, . . . , x*nq and a relation tensor R describing the pairwise relations between them, obtained by an MD-IPR layer via Rri, js rpxi, xj q. The key idea is to learn a *template* of relations between a small set of objects, and to "convolve" the template with the relation tensor, matching it against the relational patterns in different groups of objects. This transforms the relation tensor into a sequence of vectors, each summarizing the relational pattern in some group of objects. Crucially, this can now be composed with another relational layer to compute *higher-order* relations—i.e., relations on relations. Fix some filter size s n, where s is a hyperparameter of the relational convolution layer. One 'filter' of size s is given by the *graphlet filter* f1 P R s-s-dr. This is a template for the pairwise relations between a group of s objects. Since pairwise relations can be viewed as edges on a graph, we use the term "graphlet filter" to refer to a template of pairwise relations between a small set of objects. Let g rns be a group of s objects among px1*, . . . , x*nq. Then, denote the relation sub-tensor associated with this group by Rrgs : rRr*i, j*ssi,jPg. We define the 'relational inner product' between this relation subtensor and the filter f1 by $$\langle R[g],f_{1}\rangle_{\rm rel}:=\sum_{i,j\in g}\sum_{k\in[d_{r}]}R[i,j,k]f_{1}[i,j,k].\tag{3}$$ This is simply the standard inner product in the corresponding Euclidean space R s 2dr. This quantity represents how much the relational pattern in g matches the template f1. In a relational convolution layer, we learn nf different filters. Denote the collection of filters by f f1*, . . . , f*nf P R s-s-dr-nf, which we call a *graphlet filter*. We define the relational inner product of a relation subtensor Rrgs with the graphlet filters f as the nf -dimensional vector consisting of the relational inner products with each individual filter, $$\langle R[g],f\rangle_{\mathrm{rel}}\simeq\begin{pmatrix}\langle R[g],f_{1}\rangle_{\mathrm{rel}}\\ \vdots\\ \langle R[g],f_{n_{f}}\rangle_{\mathrm{rel}}\end{pmatrix}\in\mathbb{R}^{n_{f}}.\tag{1}$$ $$\mathbf{l})$$ $\left(5\right)^2$ This vector summarizes various aspects of the relational pattern within a group, captured by several different filters1. Each filter corresponds to one dimension. This is reminiscent of convolutional neural networks, where each filter gives us one channel in the output tensor. For a given group g rns, the relational inner product with a graphlet filter, xRrgs, fyrel, gives us a vector summarizing the relational patterns inside that group. Let G be a set of groupings of the n objects, each of size s. The relational convolution between a relation tensor R and a relational graphlet filter f is defined as the sequence of relational inner products with each group in G $$R*f:=(\langle R[g],f\rangle_{\mathrm{rel}})_{g\in{\mathcal{G}}}\equiv\left(z_{1},\ldots,z_{|{\mathcal{G}}|}\right)\in\mathbb{R}^{|{\mathcal{G}}|\times n_{f}}$$ |G|-nf(5) In this section, we assumed that G was given. If some prior information is known about what groupings are relevant, this can be encoded in G. Otherwise, if n is small, G can be all possible combinations of size s. However, when n is large, considering all combinations will be intractable. In the next subsection, we consider the problem of *differentiably learning* the relevant groups. 1We have overloaded the notation x, yrel, but will use the convention that a collection of filters is denoted by a bold symbol (e.g., f vs fi) to distinguish between the two forms of the relational inner product. ![4_image_0.png](4_image_0.png) Figure 2: A depiction of the relational convolution operation. A graphlet filter f is compared to the relation subtensor in each group of objects, producing a sequence of vectors summarizing the relational pattern within each group. The groups can be differentiably learned through an attention mechanism. ## 3.2 Relational Convolutions With Group Attention In the above formulation, the groups are 'discrete'. Having discrete groups can be desirable for interpretability if the relevant groupings are known a priori or if considering every possible grouping is computationally and statistically feasible. However, if the relevant groupings are not known, then considering all possible combinations results in a rapid growth of the number of objects at each layer. In order to address these issues, we can explicitly model and *learn* the relevant groups. This allows us to control the number of objects in the output sequence of a relational convolution such that only relevant groups are considered. We propose modeling groups via an *attention* operation. Consider the input px1, . . . , xnq, xi P R d. Let ng be the number of groups to be learned and s be the size of the graphlet filter (and hence the size of each group). These are hyperparameters of the model that we control. For each group g P rngs, we learn s different *queries*, tq g iugPrngs,iPrss, that will be used to retrieve a group of size s via attention. The i-th object in the k-th group is retrieved as follows, $$\begin{array}{l l}{{\bar{x}_{i}^{g}=\sum_{j=1}^{n}\alpha_{i j}^{g}x_{j},}}&{{g\in[n_{g}],i\in[s],}}\\ {{\alpha_{i j}^{g}=\frac{\exp\left(\beta\,\langle\,q_{i}^{g},\mathbf{key}(x_{j})\,\rangle\,\right)}{\sum_{k=1}^{n}\exp\left(\beta\,\langle\,q_{i}^{g},\mathbf{key}(x_{k})\,\rangle\right)},}}&{{g\in[n_{g}],i\in[s],j\in[n]}}\end{array}$$ $$({\mathfrak{h}})$$ where x¯ g i is the i-th object retrieved in the g-th group, q g i is the query for retrieving the i-th object in the g-th group, keypxj q is the key associated with the object xj , and β is a temperature scaling parameter. The key for each object is computed as a function of its position, features, and/or context. For example, to group objects based on their position, the key can be a positional embedding, keypxiq P Ei. To group based on features, the key can be a linear projection of the object's feature vector, keypxiq Wkxi. To group based on both position and features, the key can be a sum or concatenation of the above. Finally, computing keys after a self-attention operation allows objects to be grouped based on the context in which they occur. The relation subtensor R¯rgs P R s-s-drfor each group g P rngs is computed using a shared MD-IPR layer rp, q, $$\bar{R}[g]=\left[r(\bar{x}_{i}^{g},\bar{x}_{j}^{g})\right]_{i,j\in[s]}.$$ . (7) The relational convolution is computed as before via, $$\bar{R}*\mathbf{f}\equiv\left(\langle\bar{R}[g],\mathbf{f}\rangle_{\mathrm{rel}}\right)_{g\in[n_{g}]}.$$ $\left(8\right)$. . (8) Overall, relational convolution with group attention can be summarized as follows: 1) learn ng groupings of objects, retrieving s objects per group; 2) compute the relation tensor of each group using an MD-IPR module; 3) compute a relational convolution with a learned set of graphlet filters f, producing a sequence of ng vectors each describing the relational pattern within a (learned) group of objects. Computing input-dependent queries. In the simplest case, the query vectors are simply learned parameters of the model, representing a fixed criterion for selecting the ng groups. The queries can also be produced in an input-dependent manner. There are many ways to do this. For example, the input px1*, . . . , x*nq can be processed with some sequence or set embedder (e.g., through a self-attention operation) producing a vector embedding that can be mapped to different queries tq g iui,g using learned linear maps. Entropy regularization. Intuitively, we would ideally like the group attention scores in Equation (6) to be close to discrete assignments. To encourage the model to learn more structured group assignments, we add an entropy regularization to the loss function, Lentr png sq 1 °g,i Hpα g i,q, where Hpα g i,q °j α g ij logpα g ij q is the Shannon entropy. As a heuristic, this regularization can be scaled by a factor proportional to logpn_classesq{ logpnq so that it doesn't dominate the underlying task's loss. Sparsity regularization in neural attention has been explored in several previous works, including through entropy regularization (e.g., Niculae and Blondel, 2017; Martins et al., 2020; Attanasio et al., 2022). Symmetric relational inner products. So far, we considered *ordered* groups. That is, the relational pattern computed by the relational inner product xRrgs, fyrel for the group p1, 2, 3q is different from the group p2, 3, 1q. In some scenarios, symmetry in the representation of the relational pattern is a useful inductive bias. To capture this, we define a symmetric variant of the relational inner product that is invariant to the ordering of the elements in the group. This can be done by pooling over all permutations in the group. In particular, we suggest max-pooling or average-pooling, although any set-aggregator would be valid. We define the permutation-invariant relational inner product as $$\langle R[g],f\rangle_{\mathrm{rel,sym}}:=\mathrm{Pool}\left(\left\{\langle R[g^{\prime}],f\rangle_{\mathrm{rel}}:g^{\prime}\in g!\right\}\right),$$ $$({\mathfrak{g}})$$ 1P g!( , (9) where g! denotes the set of permutations of the group g, and pooling is done independently across dimensions. Computational efficiency. Equation (6) can be computed in parallel with Opn ng s dq operations. When the hyperparameters of the model are fixed, this is linear in the sequence length n. Equation (7) can be computed in parallel via efficient matrix multiplication with Opng s 2dr dprojq operations. Finally, Equation (8) can be computed in parallel with Opng s 2 dr nf q operations. The latter two computations do not scale with the number of objects in the input, and are only a function of the hyperparameters of the model. ## 3.3 Deep Relational Architectures By Composing Relational Convolutions A relational convolution block (including a MD-IPR module) is a simple neural module that can be composed to build a deep architecture for learning iteratively more complex relational feature representations. Following the notation in Figure 1, let nℓ denote the number of objects and dℓ the object dimension at layer ℓ. A relational convolution block receives as input a sequence of objects of shape nℓ - dℓ and returns a sequence of objects of shape nℓ1 - dℓ1 representing the relational patterns among groupings of objects. The output dimension dℓ1 corresponds to the number of graphlet filters nf , and is a hyperparameter. The sequence length nℓ1 corresponds to the number of groups, and is |G| in the case of given discrete groups (Section 3.1) or a hyperparameter ng in the case of learned groups via group attention (Section 3.2). Each composition of a relational convolution block computes relational features of one degree higher (i.e., relations between relations). A common recipe for building modern deep learning architectures is by using residual connections (He et al., 2016) and normalization (Ba et al., 2016). This can be achieved for relational convolutional networks by fixing the number of groups ng and number of filters nf hyperparameters to be the same across all layers, such that the input shape and output shape remain the same. Then, letting Hℓ denote the hidden representation at layer ℓ, the overall architecture becomes Hℓ1 NormpHℓ Wℓ1RelConvBlockpHℓqq, where Wℓ1 is a linear transformation that controls where information is written to in the residual stream. This ResNet-style architecture allows for the hidden representation to encode relational information at multiple layers of hierarchy, retaining the information at shallower layers. In this paper, we limit our exploration to relatively shallow networks of the form Hℓ1 RelConvBlockpHℓq. ## 4 Experiments In this section, we empirically evaluate the proposed *relational convolutional network* architecture (abbreviated RelConvNet) to assess its effectiveness at learning relational tasks. We compare this architecture to several existing relational architectures as well as general-purpose sequence models. The common input to all models is a sequence of objects X px1*, . . . , x*nq P R n-d. We evaluate against the following baselines. - *Transformer* (Vaswani et al., 2017). The Transformer is a powerful general-purpose sequence model. It consists of alternating self-attention and multi-layer perceptron blocks. Self-attention performs an information retrieval operation, which updates the internal representation of each object as a function of its context. Dot product attention is computed via X1 Ð SoftmaxppXWqqpXWkq ⊺ qWvX, and the MLP is applied independently on each object's internal representation. The attention scores computed as an intermediate step in dot-product attention can perhaps be thought of as relations that determine what information to retrieve. - *PrediNet* (Shanahan et al., 2020). The PrediNet architecture is an explicitly relational architecture inspired by predicate logic. At a high-level, the PrediNet architecture computes j relations between k pairs of objects. The k pairs of objects are selected via a learned attention operation. The "j relations" refer to a difference between j-dimensional embeddings of the selected objects. More precisely, for each head h P rks, a pair of objects Eh 1 , Eh 2 P R dis retrieved via an attention operation, and the final output of PrediNet is a set of difference relations given by Dh Eh 1 Ws Eh 2 Ws. - *CoRelNet* (Kerg et al., 2022). The CoRelNet architecture is proposed as a minimal relational architecture distilling the core inductive biases that the authors argue are important for relational tasks. The CoRelNet module simply computes inner products between object representations and applies Softmax normalization, returning an n - n "similarity matrix". That is, the objects X px1*, . . . , x*nq are processed independently to produce embeddings Z pz1*, . . . , z*nq, and the similarity matrix is computed as R SoftmaxpZZ⊺q. The similarity matrix R is then flattened and passed through an MLP to produce the final output. - *Graph Neural Networks*. Graph neural networks are a class of neural network architectures which operate on graphs-structured data. A graph neural network typically receives two inputs: a graph described by a set of edges, and feature vectors for each node in the graph. GNNs can be described through the unifying framework of neural message-passing. Under this framework, graph-structured data is processed through an iterative message-passing operation given by h pl1q i Ð Updateph plq i , th plq jujPNpiqq, where h p0q i Ð xi. That is, each node's internal representation is iteratively updated as a function of its neighborhood. Here, Update is parameterized by a neural network, and the variation between different GNN architectures lies in the architectural design of this update process. We use Graph Convolution Networks (Kipf and Welling, 2017), Graph Attention Networks (Veličković et al., 2018), and Graph Isomorphism Networks (Xu et al., 2019) as representative GNN baselines. - CNN. As a non-relational baseline, we test a regular convolutional neural network which processes the raw image input. The central modules in the baselines above receive an object-centric representation as input. That is, a sequence of vector embeddings produced by a small CNN each corresponding to one of the n objects in the input. Here, instead, a deeper CNN model processes the raw image input representing the entire "scene" in an end-to-end manner. ## 4.1 Relational Games The *relational games* dataset was contributed as a benchmark for relational reasoning by Shanahan et al. (2020). It consists of a family of binary classification tasks for identifying abstract relational rules between ![7_image_1.png](7_image_1.png) ![7_image_0.png](7_image_0.png) Figure 3: Relational games dataset. **Left** Examples of objects from each split. **Right** Examples of problem instances for each task. The first row is an example where the relation holds and the second row is an example where the relation does not hold. a set of objects represented as images. The object images depict simple geometric shapes and consist of three different splits with different visual styles for evaluating out-of-distribution generalization, referred to as "pentominoes", "hexominoes", and "stripes". The input is a sequence of objects arranged in a 3 - 3 grid. Each task corresponds to some relationship between the objects, and the target is to classify whether the relationship holds among the objects in the input or not (see Figure 3). In our experiments, we evaluate out-of-distribution generalization by training on the pentominoes objects and evaluating on the hexominoes and stripes objects. The input to the models is presented as a sequence of 9 objects, with each object represented as a 12 - 12 - 3 RGB image. All models share the common architecture px1*, . . . , x*nq Ñ CNN Ñ tu Ñ MLP Ñ yˆ, where tu indicates the central module being tested. In all models, the objects are first processed independently by a CNN with a shared architecture. The processed objects are then passed to the central module of the model. The final prediction is produced by an MLP with a shared architecture. In this section, we focus our comparison on four models: RelConvNet (ours), CoRelNet (Kerg et al., 2022), PrediNet (Shanahan et al., 2020), and a Transformer (Vaswani et al., 2017) 2. The pentominoes split is used for training, and the hexominoes and stripes splits are used to test out-of-distribution generalization after training. We train for 50 epochs using the categorical cross-entropy loss and the Adam optimizer with learning rate 0.001, β1 0.9, β2 0.999, ϵ 107, and batch size 512. For each model and task, we run 5 trials with different random seeds. Appendix A describes further experimental details about the architectures and training setup. Out-of-distribution generalization. Figure 4 reports model performance on the two hold-out object sets after training. On the hexominoes objects, which are similar-looking to the pentominoes objects used for training, RelConvNet and CoRelNet do nearly perfectly. PrediNet and the Transformer do well on the simpler tasks, but struggle with the more difficult 'match pattern' task. The 'stripes' objects are visually more distinct from the objects in the training split, making generalization more difficult. We observe an overall drop in performance for all models. The drop is particularly dramatic for CoRelNet3. The separation between RelConvNet and the other models is largest on the 'match pattern' task of the stripes split (the most difficult task and the most difficult generalization split). Here, RelConvNet maintains a mean accuracy of 87% while the other models drop below 65%. We attribute this to RelConvNet's ability to naturally represent higher-order relations and model groupings of objects. The CNN baseline learns the easier 'same', 'between', and 'occurs' tasks nearly perfectly, but completely fails to learn the more difficult 'xoccurs' and 'match pattern' tasks. This hard boundary suggests that explicit relational architectural inductive biases are necessary for learning more difficult relational tasks. Data efficiency. We observe that the relational inductive biases of RelConvNet, and relational models more generally, grant a significant advantage in sample-efficiency. Figure 5 shows the training accuracy 2The GNN baselines failed to learn the relational games tasks in a way that generalizes, often severely overfitting. For clarity of presentation, we defer results on the GNN baselines to Appendix A. 3The experiments in Kerg et al. (2022) on the relational games benchmark use a technique called "context normalization" (Webb et al., 2020) as a preprocessing step. We choose not to use this technique since it is an added confounder. We discuss this choice in Appendix C. ![8_image_0.png](8_image_0.png) Figure 4: Out-of-distribution generalization on hold-out object sets. Bar heights indicate the mean over 5 trials and the error bars indicate a bootstrap 95% confidence interval. ![8_image_3.png](8_image_3.png) ![8_image_1.png](8_image_1.png) 0 1000 2000 0 1000 2000 ![8_image_2.png](8_image_2.png) Figure 5: Training curves, up to 2,000 batch steps, for each relational games task. Solid lines indicate the mean over 5 trials and the shaded regions indicate a bootstrap 95% confidence interval. Note that this is in-distribution (i.e., on the "pentominoes" split). over the first 2,000 batches for each model. RelConvNet, CoRelNet, and PrediNet are explicitly relational architectures, whereas the Transformer is not. The Transformer is able to process relational information through its attention mechanism, but this information is entangled with the features of individual objects (which, for these relational tasks, is extraneous information). The Transformer consistently requires the largest amount of data to learn the relational games tasks. PrediNet tends to be more sample-efficient. RelConvNet and CoRelNet are the most sample-efficient, with RelConvNet only slightly more sample-efficient on most tasks. On the 'match pattern' task, which is the most difficult, RelConvNet is significantly more sample-efficient. We attribute this to the fact that RelConvNet is able to model higher-order relations through its relational convolution module. The 'match pattern' task can be thought of as a second-order relational task—it involves computing the relational pattern in each of two groups, and comparing the two relational patterns. The relational convolution module naturally models this kind of situation since it learns representations of the relational patterns within groups of objects. Learning groups via group attention. Next, we analyze RelConvNet's ability to learn useful groupings through group attention in an end-to-end manner. We train a 2-layer relational convolutional network with 8 learned groups and a graphlet size of 3. We group based on position by using positional embeddings for keypxiq. ![9_image_0.png](9_image_0.png) In Figure 6, we visualize the group attention scores α g ij (see Equation (6)) learned from one of the training runs. For each group g P rngs, the figure depicts a 3-3 grid representing the objects attended to in that group. Since each group contains 3 objects, we represent the value pα g ij qiPr3sin the 3-channel HSV color representation. We observe that 1) group attention learns to ignore the middle row, which contains no relevant information; and 2) the selection of objects in the top row and the bottom row is structured. In particular, group 2 considers the relational pattern within the bottom row and group 8 considers the relational pattern in the top row, which is exactly how a human would tackle this problem. We refer to Figure 10 for an exploration of the effect of entropy regularization on group attention. We find that entropy regularization is necessary for the model to learn and causes the group attention scores to converge to interpretable discrete assignments. Figure 6: Learned groups in the 'match pattern' tasks by a 2-layer RelConvNet with group attention. ## 4.2 Set**: Grouping And Compositionality In Relational Reasoning** Set is a card game that forms a simple-to-describe but challenging relational task. The 'objects' are a set of cards with four attributes, each of which can take one of three possible values. 'Color' can be red, green, or purple; 'number' can be one, two, or three; 'shape' can be diamond, squiggle, or oval; and 'fill' can be solid, striped, or empty. A 'set' is a triplet of cards such that each attribute is either a) the same on all three cards, or b) different on all three cards. In Set, the task is: given a hand of n ¡ 3 cards, find a 'set' among them. Figure 7a depicts a positive and negative example for n 5, with indicating the 'set' in the positive example. This task is deceptively challenging, and is representative of the type of relational reasoning that humans excel at but machine learning systems still struggle with. To solve the task, one must process the sensory information of individual cards to identify the values of each attribute, and then reason about the relational pattern in each triplet of cards. The construct of relational convolutions proposed in this paper is a step towards developing machine learning systems that can perform this kind of relational reasoning. In this section, we evaluate RelConvNet on a task based on Set and compare it to several baselines. The task is: given a collection of n 5 images of Set cards, determine whether or not they contain a 'set'. All models share the common architecture px1*, . . . , x*nq Ñ CNN Ñ tu Ñ MLP Ñ yˆ, where tu indicates the central module being tested. The CNN embedder is pre-trained on the task of classifying the four attributes of the cards and an intermediate layer is used to generate embeddings. The output MLP architecture is shared across all models. Further architectural details can be found in Appendix A. In Set, there exists 81 3 85 320 triplets of cards, of which 1 080 are a 'set'. We partition the 'sets' into training (70%), validation (15%), and test (15%) sets. The training, validation, and test datasets are generated by sampling n-tuples of cards such that with probability 1{2 the n-tuple does not contain a set, and with probability 1{2 it contains a set among the corresponding partition of sets. Partitioning the data in this way allows us to measure the models' ability to "learn the rule" and identify new unseen 'sets'. We train for 100 epochs with the same loss, optimizer, and batch size as the experiments in the previous section. For each model, we run 10 trials with different random seeds. When using the default optimizer hyperparameters as in the previous experiment without hyperparameter tuning, we find that RelConvNet is the only model able to meaningfully learn the task in a manner that generalizes to unseen 'sets'. In particular, we observe that many baselines severely overfit to the training data, failing to learn the rule and generalize (see Appendix B.1). Although RelConvNet did not require hyperparameter tuning, we carry out an extensive hyperparameter sweep for all other baselines individually ![10_image_1.png](10_image_1.png) ![10_image_0.png](10_image_0.png) ![10_image_2.png](10_image_2.png) (c) Training accuracy and validation accuracy over the course of training. Figure 7: Results of "contains set" experiments. Bar height/solid lines indicate the mean over 10 trials and error bars/shaded regions indicate 95% bootstrap confidence intervals. in order to validate our conclusions against the best-achievable performance for each baseline. We ran a total of 1600 experimental runs searching over combinations of architectural hyperparameters (number of layers) and optimization hyperparameters (weight decay, learning rate schedule) individually for each baseline, with the goal of finding a hyperparameter configuration that is representative of the best achievable performance for each model class on this task. The results of the hyperparameter sweep are summarized in Appendix B. Figure 7b shows the hold-out test accuracy for each model. Figure 7c shows the training and validation accuracy over the course of training. Here, RelConvNet uses the Adam optimizer with the default Tensorflow hyperparameters (constant learning rate of 0.001, β1 0.9, β2 0.999) while each baseline has its own individually-optimized hyperparameters, described in Appendix B. We observe a sharp separation between RelConvNet and all other baselines. While RelConvNet is able to learn the task and generalize to new 'sets' with near-perfect accuracy (avg: 97.9%), no other model is able to reach a comparable generalization accuracy even after hyperparameter tuning. The next best is the GAT model (avg: 67.5%). Several models are able to fit the training data, reaching near-perfect training accuracy, but they are unable to "learn the rule" in a way that generalizes to the validation or test sets. This suggests that while these models are powerful function approximators, they lack the *inductive biases* to learn hierarchical relations. In Figure 8 we analyze the geometry of the representations learned by the relational convolution layer. We consider all triplets of cards, compute the relation subtensor using the learned MD-IPR layer, and plot the relational inner product with the learned graphlet filter f P R s-s-dr-nf. The result is a nf -dimensional vector for each triplet of cards. We perform PCA to plot this in two dimensions, and color-code each triplet of cards according to whether or not it forms a 'set'. We find that the relational convolution layer learns a representation of the relational pattern in groups of objects that separates 'sets' and 'non-sets'. In particular, the two classes form clusters that are linearly separable even when projected down to two dimensions by PCA. This explains why RelConvNet is able to learn the task in a way that generalizes while the other models are not. In Appendix E we expand on this discussion, and further analyze the representations learned by the MD-IPR layer, showing that the learned relations map to the color, number, shape, and fill attributes. It is perhaps surprising that models like GNNs and Transformers perform poorly on these relational tasks, given their apparent ability to process relations through neural message-passing and attention, respectively. We remark that GNNs operate in a different domain compared to relational models like RelConvNe, PrediNet, and CoRelNet. In GNNs, the relations are an input to the model, received in the form of a graph, and are used to dictate the flow of information in a neural message-passing operation. By contrast, in relational convolutional networks, the input is simply a set of objects without relations—the relations need to be *inferred* as part of the feature representation process. Thus, GNNs operate in domains where relational information is already present (e.g., analysis of social networks, biological networks, etc.), whereas our framework aims to solve tasks that rely on relations but those relations need to be inferred end-to-end. This offers a partial explanation for the inability of GNNs to learn this task—GNNs are good at processing network-style relations when they are given as input, but may not be able to infer and hierarchically process relations when they are not given. In the case of Transformers, relations are modeled implicitly to direct information retrieval in attention, but are not encoded explicitly in the final representations. By contrast, RelConvNet operates on collections of objects and possesses inductive biases for learning iteratively more complex relational representations, guided only by the supervisory signal of the downstream task. not set set 40 R[g], f rel [PC2] 20 20 0 40 20 0 20 40 60 80 R[g], f rel [PC1] Figure 8: The relational convolution layer produces representations that separates 'sets' from 'non-sets'. Models like CoRelNet and PrediNet have relational inductive biases, but lack compositionality. On the other hand, deep models like Transformers and GNNs are compositional, but lack relational inductive biases. This experiment suggests that compositionality and relational inductive biases are both necessary ingredients to efficiently learn representations of higher-order relations. RelConvNet is a compositional architecture imbued with relational inductive biases and a demonstrated ability to tackle hierarchical relational tasks. ## 5 Discussion Summary In this paper, we proposed a compositional architecture and framework for learning hierarchical relational representations via a novel relational convolution operation. The relational convolution operation we propose here is a 'convolution' in the sense that it considers a patch of the relation tensor, given by a subset of objects, and compares the relations within it to a template graphlet filter via an appropriately-defined inner product. This is analogous to convolutional neural networks, where an image filter is compared against different patches of the input image. Moreover, we propose an attention-based mechanism for modeling useful groupings of objects in order to maintain scalability. By alternating inner product relation layers and relational convolution layers, we obtain an architecture that naturally models hierarchical relations. ## Discussion On Relational Inductive Biases In our experiments, we observed that general-purpose sequence models like the Transformer struggle to learn tasks that involve relational reasoning in a data-efficient way. The relational inductive biases of RelConvNet, CoRelNet, and PrediNet result in significantly improved performance on the relational games tasks. These models each implement different kinds of relational inductive biases, and are each designed with different motivations in mind. For example, PrediNet's architecture is loosely inspired by the structure of predicate logic, but can be understood as ultimately producing representations of pairwise difference-relations, with pairs of objects selected by an attention operation. CoRelNet is a minimal relational architecture that consists of computing an n - n inner product similarity matrix followed by a softmax normalization. RelConvNet, our proposed architecture, provides further flexibility across several dimensions. Like CoRelNet, it models relations as inner products of feature maps, but it achieves greater representational capacity by learning multi-dimensional relations through multiple learned feature maps or filters. More importantly, the relational convolutions operation enables learning higher-order relations between groups of objects. This is in contrast to both PrediNet and CoRelNet, which are limited to pairwise relations. Our experiments show that the inductive biases of RelConvNet result in improved performance in relational reasoning tasks. In particular, the Set task, where RelConvNet was the only model able to generalize non-trivially, demonstrates the necessity for explicit inductive biases that support learning hierarchical relations. ## Limitations And Future Work The tasks considered here are solvable by modeling only second-order relations. In the case of the relational games benchmark of Shanahan et al. (2020), we observe that the tasks are saturated by the relational convolutional networks architecture. While the "contains set" task demonstrates a sharp separation between relational convolutional networks and existing baselines, this task too only involves second-order relations. A more thorough evaluation of this architecture, and future architectures for modeling hierarchical relations, would require the development of new benchmark tasks and datasets that involve a larger number of objects and higher-order relations. This is a subtle and non-trivial task that we leave for future work. The modules proposed in this paper assume object-centric representations as input. In particular, the tasks considered in our experiments have an explicit delineation between different objects. In more general settings, object information may need to be extracted from raw stimulus explicitly by the system (e.g., a natural image containing multiple objects in apriori unknown positions). Learning object-centric representations is an active area of research (Sabour et al., 2017; Greff et al., 2019; Locatello et al., 2020; Kipf et al., 2022), and is related but separate from learning relational representations. These methods produce a set of embedding vectors, each describing a different object in the scene, which can then be passed to the central processing module (e.g., a relational processing module such as RelConvNet). In future work, it will be important to explore how well RelConvNet integrates with methods for learning object-centric representations in an end-to-end system. The experiments considered here are synthetic relational tasks designed for a controlled evaluation. In more realistic settings, we envision relational convolutional networks as modules embedded in a broader architecture. For example, a relational convolutional network can be embedded into an RL agent to enable performing tasks involving relational reasoning. Similarly, relational convolutions can perhaps be integrated into general-purpose sequence models, such as Transformers, to enable improved relational reasoning while retaining the generality of the architecture. ## Code And Reproducibility Our public github repository includes an implementation of all modules described in this paper together with simple instructions for reproducing our experimental results. We will also make detailed experimental logs available through an online portal that include: metrics tracked over the course of training, the git commit ID corresponding to the version of the code used for each run, the hardware used for each run, the command and script to replicate the experimental run, etc. The links will be added in the de-anonymized version. ## References Altabaa, Awni and John Lafferty (2024a). "Approximation of relation functions and attention mechanisms". arXiv: 2402.08856 [cs.LG] (cited on page 3). Altabaa, Awni and John Lafferty (2024b). "Disentangling and Integrating Relational and Sensory Information in Transformer Architectures". arXiv: 2405.16727 [cs.LG] (cited on page 3). Altabaa, Awni, Taylor Whittington Webb, Jonathan D. Cohen, and John Lafferty (2024). "Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers". In: The Twelfth International Conference on Learning Representations (cited on page 3). Attanasio, Giuseppe, Debora Nozza, Dirk Hovy, and Elena Baralis (2022). "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists". In: *Findings of the Association for Computational* Linguistics: ACL 2022 (cited on page 6). Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton (2016). "Layer Normalization". arXiv: 1607.06450 [stat.ML] (cited on page 6). Battaglia, Peter W. et al. (2018). "Relational Inductive Biases, Deep Learning, and Graph Networks". arXiv: 1806.01261 [cs, stat] (cited on page 2). Frank, Stefan L, Rens Bod, and Morten H Christiansen (2012). "How hierarchical is language use?" In: Proceedings of the Royal Society B: Biological Sciences (cited on page 27). Gilmer, Justin, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl (2017). "Neural Message Passing for Quantum Chemistry". In: *International Conference on Machine Learning*. PMLR (cited on page 2). Greff, Klaus, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Christopher Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner (2019). "Multi-Object Representation Learning with Iterative Variational Inference". In: *Proceedings of the 36th International Conference on Machine Learning*. Ed. by Kamalika Chaudhuri and Ruslan Salakhutdinov. Proceedings of Machine Learning Research. PMLR (cited on page 13). Hamilton, William L (2020). "Graph Representation Learning". Synthesis Lectures on Artificial Intelligence and Machine Learning. San Rafael, CA: Morgan & Claypool (cited on page 2). He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2016). "Deep residual learning for image recognition". In: *Proceedings of the IEEE conference on computer vision and pattern recognition* (cited on page 6). Johnson, Justin, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick (2017). "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning". In: *Proceedings of the IEEE conference on computer vision and pattern recognition* (cited on page 27). Kerg, Giancarlo, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, and Guillaume Lajoie (2022). "On Neural Architecture Inductive Biases for Relational Tasks". arXiv: 2206.05056 [cs] (cited on pages 2, 3, 7, 8, 25). Kipf, Thomas, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, and Klaus Greff (2022). "Conditional Object-Centric Learning from Video". In: *International Conference on Learning Representations* (cited on page 13). Kipf, Thomas, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel (2018). "Neural relational inference for interacting systems". In: *International conference on machine learning* (cited on page 2). Kipf, Thomas N. and Max Welling (2017). "Semi-Supervised Classification with Graph Convolutional Networks". arXiv: 1609.02907 [cs, stat] (cited on pages 2, 7). Locatello, Francesco, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf (2020). "Object-centric learning with slot attention". In: Advances in Neural Information Processing Systems (cited on pages 2, 13). Martins, André, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro Aguiar, and Mario Figueiredo (2020). "Sparse and continuous attention mechanisms". In: *Advances in Neural Information Processing Systems* (cited on page 6). Niculae, Vlad and Mathieu Blondel (2017). "A regularized framework for sparse and structured neural attention". In: *Advances in neural information processing systems* (cited on page 6). Niepert, Mathias, Mohamed Ahmed, and Konstantin Kutzkov (2016). "Learning convolutional neural networks for graphs". In: *International conference on machine learning*. PMLR (cited on page 2). Rosario, Barbara, Marti A Hearst, and Charles J Fillmore (2002). "The descent of hierarchy, and selection in relational semantics". In: *Proceedings of the 40th Annual Meeting of the Association for Computational* Linguistics (cited on page 27). Sabour, Sara, Nicholas Frosst, and Geoffrey E Hinton (2017). "Dynamic routing between capsules". In: Advances in neural information processing systems (cited on page 13). Santoro, Adam, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap (2018). "Relational Recurrent Neural Networks". In: *Advances in Neural Information Processing Systems*. Curran Associates, Inc. (cited on page 2). Santoro, Adam, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap (2017). "A simple neural network module for relational reasoning". In: *Advances in* neural information processing systems (cited on pages 2, 3, 27). Schlichtkrull, Michael, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling (2018). "Modeling relational data with graph convolutional networks". In: The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings 15. Springer (cited on page 2). Shanahan, Murray, Kyriacos Nikiforou, Antonia Creswell, Christos Kaplanis, David Barrett, and Marta Garnelo (2020). "An Explicitly Relational Neural Network Architecture". In: Proceedings of the 37th International Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Proceedings of Machine Learning Research. PMLR (cited on pages 2, 7, 8, 13). Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin (2017). "Attention is all you need". In: Advances in neural information processing systems (cited on pages 2, 7, 8). Veličković, Petar, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio (2018). "Graph Attention Networks". In: *International Conference on Learning Representations* (cited on pages 2, 7). Webb, Taylor, Zachary Dulberg, Steven Frankland, Alexander Petrov, Randall O'Reilly, and Jonathan Cohen (2020). "Learning Representations that Support Extrapolation". In: *Proceedings of the 37th International* Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Proceedings of Machine Learning Research. PMLR (cited on pages 8, 25). Webb, Taylor W., Steven M. Frankland, Awni Altabaa, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Randall O'Reilly, John Lafferty, and Jonathan D. Cohen (2024). "The Relational Bottleneck as an Inductive Bias for Efficient Abstraction". arXiv: 2309.06629 [cs] (cited on page 3). Webb, Taylor Whittington, Ishan Sinha, and Jonathan Cohen (2021). "Emergent Symbols through Binding in External Memory". In: *International Conference on Learning Representations* (cited on pages 3, 25). Xu, Keyulu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka (2019). "How Powerful are Graph Neural Networks?" In: *International Conference on Learning Representations* (cited on pages 2, 7). Zaheer, Manzil, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola (2017). "Deep sets". In: *Advances in neural information processing systems* (cited on page 27). Zambaldi, Vinicius, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, et al. (2018). "Deep reinforcement learning with relational inductive biases". In: *International conference on learning representations* (cited on page 2). Zeiler, Matthew D and Rob Fergus (2014). "Visualizing and understanding convolutional networks". In: Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13. Springer (cited on page 1). ## A Experiments Supplement A.1 Relational Games (Section **4.1)** The pentominoes split is used for training, and the hexominoes and stripes splits are used to test out-ofdistribution generalization after training. We hold out 1000 samples for validation (during training) and 5000 samples for testing (after training), and use the rest as the training set. We train for 50 epochs using the categorical cross-entropy loss and the Adam optimizer with learning rate 0.001, β1 0.9, β2 0.999, ϵ 107. We use a batch size of 512. For each model and task, we run 5 trials with different random seeds.Table 1 contains text descriptions of each task in the relational games dataset in the experiments of Section 4.1. Table 2 contains a description of the architectures of each model (or shared component) in the experiments. Table 3 reports the accuracy on the hold-out object sets (i.e., the numbers depicted in Figure 4 of the main text). Figures 9 and 10 explore the effect of entropy regularization in group attention on learning using the "match pattern" task as an example. | Task | Description | |-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | same | Two random cells out of nine are occupied by an object. They are the "same" if they have the same color, shape, and orientation (i.e., identical image) | | occurs | The top row contains one object and the bottom row contains three objects. The "occurs" relationship holds if at least one of the objects in the bottom row is the same as the object in the top row. | | xoccurs | Same as occurs, but the relationship holds if exactly one of the objects in the bottom row is the same as the object in the top row. | | between | The grid is occupied by three objects in a line (horizontal or vertical). The "between" relationship holds if the outer objects are the same. | | row match pattern | The first and third rows of the grid are occupied by three objects each. The "match pattern" relationship holds if the relation pattern in each row is the same (e.g., AAA, AAB, ABC, etc.) | Table 1: Relational games tasks. | Model / Component | Architecture | |---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Common CNN | Conv2D Ñ MaxPool2D Ñ Conv2D Ñ MaxPool2D Ñ Flatten. | | Embedder | Conv2D: num filters = 16, filter size = 3 3, activation = relu. MaxPool2D: stride = 2. | | Common output MLP | Dense(64, 'relu') Ñ Dense(2). | | RelConvNet | CNN Embedder Ñ MD-IPR Ñ RelConv Ñ Flatten Ñ MLP. MD-IPR: relation dim = 16, projection dim = 4, symmetric. RelConv: num filters = 16, filter size = 3, discrete groups = combinations. | | CoRelNet | CNN Embedder Ñ CoRelNet Ñ Flatten Ñ MLP. Standard CoRelNet has no hyperparameters. | | PrediNet | CNN Embedder Ñ PrediNet Ñ Flatten Ñ MLP. PrediNet: key dim = 4, number of heads = 4, num relations = 16. | | Transformer | CNN Embedder Ñ TransformerEncoder Ñ AveragePooling Ñ MLP. TransformerEncoder: num layers = 1, num heads = 8, feedforward intermediate size = 32, activation = relu. | | GCN | CNN Embedder Ñ AddPosEmb Ñ (GCNConv Ñ Dense) 2 Ñ AveragePooling Ñ MLP. GCConv: channels = 32, Dense: num neurons = 32, activation = relu | | GAT | CNN Embedder Ñ AddPosEmb Ñ (GATConv Ñ Dense) 2 Ñ AveragePooling Ñ MLP. GATonv: channels = 32, Dense: num neurons = 32, activation = relu | | GCN | CNN Embedder Ñ AddPosEmb Ñ (GINConv Ñ Dense) 2 Ñ AveragePooling Ñ MLP. GINConv: channels = 32, Dense: num neurons = 32, activation = relu | | CNN | (Conv2D Ñ MaxPool2D) 8 Ñ Flatten Ñ Dense(128, 'relu') Ñ Dense(2) Conv2D: num filters = [16, 16, 32, 32, 64, 64, 128, 128], filter size = 3 MaxPool2D: stride = 2, apply every other layer. | | | Table 2: Model architectures for relational games experiments. | | | Hexos Accuracy | Stripes Accuracy | | |---------------|------------------|--------------------|----| | Task | Model | | | | same | RelConvNet | 0.989 0.002 | 0.974 0.003 | | CoRelNet | 0.988 0.006 | 0.724 0.112 | | | PrediNet | 0.990 0.004 | 0.983 0.007 | | | Transformer | 0.997 0.001 | 0.993 0.004 | | | CNN | 0.990 0.001 | 0.976 0.002 | | | between | RelConvNet | 0.991 0.001 | 0.988 0.002 | | CoRelNet | 0.995 0.001 | 0.582 0.063 | | | PrediNet | 0.978 0.006 | 0.950 0.019 | | | Transformer | 0.986 0.003 | 0.961 0.010 | | | CNN | 0.990 0.001 | 0.971 0.003 | | | occurs | RelConvNet | 0.980 0.001 | 0.880 0.015 | | CoRelNet | 0.992 0.004 | 0.518 0.012 | | | PrediNet | 0.907 0.020 | 0.775 0.046 | | | Transformer | 0.881 0.015 | 0.724 0.021 | | | CNN | 0.984 0.008 | 0.953 0.012 | | | xoccurs | RelConvNet | 0.967 0.001 | 0.946 0.006 | | CoRelNet | 0.980 0.007 | 0.606 0.035 | | | PrediNet | 0.872 0.036 | 0.810 0.028 | | | Transformer | 0.867 0.017 | 0.753 0.031 | | | CNN | 0.597 0.097 | 0.590 0.084 | | | match pattern | RelConvNet | 0.961 0.015 | 0.870 0.041 | | CoRelNet | 0.942 0.011 | 0.581 0.026 | | | PrediNet | 0.710 0.040 | 0.658 0.053 | | | Transformer | 0.627 0.005 | 0.591 0.006 | | | CNN | 0.499 0.000 | 0.489 0.000 | | Table 3: Out-of-distribution generalization results on relational games. We report means standard error of mean over 5 trials. These are the numbers associated with Figure 4. ## A.2 Set (Section **4.2)** We train for 100 epochs using the cross-entropy loss. RelConvNet uses the Adam optimizer with learning rate 0.001, β1 0.9, β2 0.999, ϵ 107. The baselines each use their own individually-tuned optimization hyperparameters, described in Appendix B. We use a batch size of 512. For each model and task, we run 5 trials with different random seeds.Table 4 contains a description of the architecture of each model in the "contains set" experiments of Section 4.2. Table 5 reports the generalization accuracies on the hold-out 'sets' (i.e., the numbers depicted in Figure 7b of the main text). Figure 11 explores the effect of different RelConvNet hyperparameters on the model's ability to learn the the Set task. | | Accuracy | |------------------|------------| | Model RelConvNet | 0.979 0.006 | | CoRelNet | 0.563 0.001 | | PrediNet | 0.513 0.003 | | Transformer | 0.563 0.002 | | GCN | 0.635 0.003 | | GIN | 0.593 0.001 | | GAT | 0.675 0.002 | | LSTM | 0.609 0.002 | | CNN | 0.506 0.006 | | Model / Component | Architecture | | | | | | | | | | | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----|-----------|----|--------|----|-----------|----|---------|----|-----------| | Common CNN | Conv2D | Ñ | MaxPool2D | Ñ | Conv2D | Ñ | MaxPool2D | Ñ | Flatten | Ñ | Dense(64, | | Embedder | 'relu') Ñ Dense(64, 'tanh'). Conv2D: num filters = 32, filter size = 5 5, activation = relu. MaxPool2D: stride = 4. | | | | | | | | | | | | Common output MLP | Dense(64, 'relu') Ñ Dense(32, 'relu') Ñ Dense(2). | | | | | | | | | | | | RelConvNet | CNN Embedder Ñ MD-IPR Ñ RelConv Ñ Flatten Ñ MLP. MD-IPR: relation dim = 16, projection dim = 16, symmetric. RelConv: num filters = 16, filter size = 3, discrete groups = combinations, symmetric relational inner product with 'max' aggregator. | | | | | | | | | | | | CoRelNet | CNN Embedder Ñ CoRelNet Ñ Flatten Ñ MLP. Standard CoRelNet has no hyperparameters. | | | | | | | | | | | | PrediNet | CNN Embedder Ñ PrediNet Ñ Flatten Ñ MLP. PrediNet: key dim = 4, number of heads = 4, num relations = 16. | | | | | | | | | | | | Transformer | CNN Embedder Ñ TransformerEncoder Ñ AveragePooling Ñ MLP. TransformerEncoder: num layers = 2, num heads = 8, feedforward intermediate size = 128, activation = relu. | | | | | | | | | | | | GCN | CNN Embedder Ñ (GCNConv Ñ Dense) 2 Ñ AveragePooling Ñ MLP. GCConv: channels = 128, Dense: num neurons = 128, activation = relu | | | | | | | | | | | | GAT | CNN Embedder Ñ (GATConv Ñ Dense) 1 Ñ AveragePooling Ñ MLP. GATonv: channels = 128, Dense: num neurons = 128, activation = relu | | | | | | | | | | | | GCN | CNN Embedder Ñ (GINConv Ñ Dense) 2 Ñ AveragePooling Ñ MLP. GINConv: channels = 128, Dense: num neurons = 128, activation = relu | | | | | | | | | | | | CNN | (Conv2D Ñ MaxPool2D) 10 Ñ Flatten Ñ Dense(128, 'relu') Ñ Dense(2) Conv2D: num filters = [16, 16, 32, 32, 64, 64, 128, 128, 128, 128], filter size = 3 MaxPool2D: stride = [(2,2), NA, (2,2), NA, (2,2), NA, (2,2), (1,2), (1,2), (2, 2)] | | | | | | | | | | | | | Table 4: Model architectures for "contains set" experiments. | | | | | | | | | | | Table 5: Hold-out test accuracy on "contains set" task. We report means standard error of mean over 10 trials. These are the numbers associated with Figure 7b. ![19_image_0.png](19_image_0.png) Figure 9: Trade-off between task loss and group attention entropy. RelConvNet models are trained on the "match pattern" task in the Relational Games benchmark varying the entropy regularization level. The overall model loss is Lloss λLentr, where Lloss CrossEntropypy, yˆq is the task loss (blue line), Lentr is the entropy regularization term for the group attention scores (orange line) as defined in Section 3.2, and λ is a scaling factor. Different lines correspond to different values of λ. When λ 0 (no entropy regularization) the model fails to learn the task. A small amount of regularization is enough to guide the model to a good solution. Increasing λ causes smaller group attention entropy at convergence, approaching discrete assignments. ![19_image_1.png](19_image_1.png) Figure 10: Effect of group attention entropy regularization. Group attention entropy (left) and baseline cross-entropy loss (right) of a relational convolutional network model trained on the "match pattern" task with different levels of entropy regularization. The overall model loss is Lloss λLentr, where Lloss is the task loss, Lentr is the entropy regularization term, and λ is a scaling factor. Different lines correspond to different values of λ. Without entropy regularization, the model fails to learn the task. With sufficient entropy regularization, the model is able to learn the task and group attention converges towards discrete assignments. The group attention entropy starts at log 9 2.2 (the entropy of a uniform distribution) and decreases over the course of training. Expectedly, larger λ values cause the entropy to decrease faster, converging towards a smaller value. When λ is too large, the entropy regularization overwhelms the base cross-entropy loss and results in converging to a worse cross-entropy loss. Intuitively, one needs to strike a balance such that both the entropy regularization and the cross-entropy loss guide the evolution of the group attention map. ![20_image_0.png](20_image_0.png) Figure 11: Exploring the effect of multi-dimensional relations and symmetric relations in RelConvNet. RelConvNet models matching the architecture described in Table 4 are trained on the Set task. We test two variants: 1) set the relation dimension to be dr 1 (instead of dr 16), and 2) remove the symmetry inductive bias (i.e., W1 W2 in Equation (2)). We find that with dr 1 (which is analogous to CoRelNet's single-dimensional similarity matrix), the model struggles to find good solutions. In 10 different runs with random seeds, one run was able to find a good solution reaching an accuracy of 98.5%, whereas the other runs were stuck below 65%. This suggests that having multi-dimensional relations yields a more robust model with multiple different avenues for finding good solutions during the optimization process. In the case of the model with the asymmetric relations (i.e., lacking a symmetry inductive bias), the model is able to fit the training data, but fails to generalize. This suggests the symmetry is an important inductive bias for certain tasks. ## B Hyperparameter Sweep For Baseline Models In order to ensure that we compare RelConvNet against the best-achievable performance by each baseline architecture, we carry out an extensive hyperparameter sweep over combinations of architectural hyperparameters and optimization hyperparameters. In particular, as seen in Appendix B.1, the baseline models severely overfit on the Set task, fitting the training data but failing to generalize to unseen 'sets'. Hence, we explore whether it is possible to avoid or alleviate overfitting through an appropriate choice of hyperparameters. In Figure 12, we vary the number of layers in the baseline models to select an optimal configuration of each architecture. We find that increased depth beyond 2 layers is generally detrimental on this task. Based on these results, we choose the optimal number of layers as 2 for the Transformer, GCN, GIN baselines and 1 for the GAT baseline. In Figure 13, we vary the level of weight decay. Expectedly, larger weight decay results in decreased training accuracy. Generally, weight decay has a small effect on validation performance (e.g., no discernable effect in CoRelNet or CNN). For some models, some choices of weight decay result in improved validation performance. Based on these results, we use a weight decay of 0 for CoRelNet/CNN, 0.032 for Transformer/GAT/GIN, and 1.024 for PrediNet/GCN/LSTM. In Figure 14, we explore the effect of the learning rate schedule, comparing a cosine decay schedule against our default constant learning rate. For most models, there is no significant difference, with a constant learning rate sometimes slightly better. On the GAT model, however, the cosine learning rate schedule results in significantly improved performance. Based on these results, we use a cosine learning rate schedule for GAT and a constant learning rate for all other models. ![21_image_0.png](21_image_0.png) Figure 12: Hyperparameter sweep over number of layers in baseline architectures. Transformers and GNNs (e.g., GCN, GAT, GIN) are "compositional" deep learning architectures. Here, we explore the effect of depth (i.e., number of layers) on task performance for these baselines. The plots show the maximum training and validation accuracy reached throughout training for each depth (5 trials with different random seeds). Generally, we find that generalization performance drops with increasing depth. The optimal depth for the Transformer, GCN, and GIN models is 2 layers, and the optimal depth for the GAT model is 1-layer. The performance drop with depth can perhaps be attributed to increased difficulty of training and overfitting due to limited data and a lack of relational inductive biases. The AdamW optimizer is used with a constant learning rate of 103. ![21_image_1.png](21_image_1.png) Figure 13: Hyperparameter sweep of weight decay in baseline architectures. To alleviate possible overfitting, we optimally tune a weight decay parameter for each model independently. We test weight decay values including 0 and 0.0042 i, i P t0*, ...,* 10u. We use the AdamW optimizer. The default weight decay in Tensorflow is 0.004. ![22_image_0.png](22_image_0.png) Figure 14: Hyperparameter sweep over learning rate schedule (constant vs cosine decay). We explore the effect of the learning rate schedule on model performance, comparing a constant learning rate against a cosine decay schedule. For most models, there is no significant difference, with a constant learning rate sometimes slightly better. On the GAT model, however, the cosine learning rate schedule results in significantly improved performance. ## B.1 Results Without Hyperparameter Tuning Figure 15 and Table 6 show the results of the Set experiment with a common default optimizer, without individual hyperparameter tuning. | | Accuracy | |------------------|------------| | Model RelConvNet | 0.979 0.006 | | CoRelNet | 0.563 0.001 | | PrediNet | 0.508 0.002 | | Transformer | 0.584 0.004 | | GCN | 0.595 0.003 | | GAT | 0.517 0.015 | | GIN | 0.590 0.003 | | LSTM | 0.602 0.003 | | GRU | 0.593 0.004 | | CNN | 0.506 0.006 | Table 6: Hold-out test accuracy on "contains set" task, with default optimizer hyperparameters. We report means standard error of mean over 10 trials. These are the numbers associated with Figure 15a. ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) (b) Training accuracy and validation accuracy over the course of training, without hyperparameter tuning (i.e., Adam optimizer, no weight decay, constant learning rate of 0.001. Figure 15: Results of "contains set" experiments with default optimization hyperparameters. Bar height/solid lines indicate the mean over 10 trials and error bars/shaded regions indicate 95% bootstrap confidence intervals. ## C Discussion On Use Of Tcn In Evaluating Relational Architectures In Section 4.1 the CoRelNet model of Kerg et al. (2022) was among the baselines we compared to. In that work, the authors also evaluate their model on the relational games benchmark. A difference between their experimental set up and ours is that they use a method called "context normalization" as a preprocessing step on the sequence of objects. "Context normalization" was proposed by Webb et al. (2020). The proposal is simple: Given a sequence of objects, px1*, . . . , x*mq, and a set of context windows W1, . . . , WW t1*, . . . , m*u which partition the objects, each object is normalized along each dimension with respect to the other objects in its context. That is, pz1, . . . , zmq CNpx1*, . . . , x*mq is computed as, $$\begin{array}{l l}{{\mu_{j}^{(k)}=\frac{1}{|\mathcal{W}_{k}|}\sum_{t\in\mathcal{W}_{k}}(x_{t})_{j}}}\\ {{\sigma_{j}^{(k)}=\sqrt{\frac{1}{|\mathcal{W}_{k}|}\sum_{t\in\mathcal{W}_{k}}\left((x_{t})_{j}-\mu_{j}^{(k)}\right)^{2}+\varepsilon}}}\\ {{(z_{t})_{j}=\gamma_{j}\left(\frac{(x_{t})_{j}-\mu_{j}^{(k)}}{\sigma_{j}^{(k)}}\right)+\beta_{j},\qquad\mathrm{for}\ t\in\mathcal{W}_{k}}}\end{array}$$ where γ pγ1, . . . , γdq, β pβ1*, . . . , β*dq are learnable gain and shift parameters for each dimension (initialized at 1 and 0, respectively, as with batch normalization). The context windows represent logical groupings of objects that are assumed to be known. For instance, (Webb et al., 2021; Kerg et al., 2022) consider a "relational match-to-sample" task where 3 pairs of objects are presented in sequence, and the task is to identify whether the relation in the first pair is the same as the relation in the second pair or the third pair. Here, the context windows would be the pairs of objects. In the relational games "match rows pattern" task, the context windows would be each row. It is reported in (Webb et al., 2021; Kerg et al., 2022) that context normalization significantly accelerates learning and improves out-of-distribution generalization. Since (Webb et al., 2021; Kerg et al., 2022) use context normalization in their experiments, in this section we aim to explain our choice to exclude it. We argue that context normalization is a confounder and that an evaluation of relational architectures without such preprocessing is more informative. To understand how context normalization works, consider first a context window of size 2, and let β 0, γ 1. Then, along each dimension, we have $$\begin{array}{l}{{\mathrm{CN}(x,x)=(0,0)\,,}}\\ {{\mathrm{CN}(x,y)=\left(\mathrm{sign}(x-y),\mathrm{sign}(y-x)\right).}}\end{array}$$ In particular, what context normalization does when there are two objects is, along each dimension, output 0 if the value is the same, and 1 if it is different (encoding whether it is larger or smaller). Hence, it makes the context-normalized output independent of the original feature representation. For tasks like relational games, where the key relation to model is same/different, this preprocessing is directly encoding this information in a "symbolic" way. In particular, for two objects x1, x2, context normalized to produce z1, z2, we have that x1 x2 if and only if xz1, z2y 0. This makes out-of-distribution generalization trivial, and does not properly test a relational architecture's ability to model the same/different relation. Similarly, consider a context window of size 3. Then, along each dimension, we have, $$\mathrm{CN}(x,x,x)=(0,0,0)\,,$$ $$\mathrm{CN}(x,x,y)=\left(\frac{1}{\sqrt{2}}\mathrm{sign}(x-y),\frac{1}{\sqrt{2}}\mathrm{sign}(x-y),\frac{1}{\sqrt{2}}\mathrm{sign}(y-x)\right)\,.$$ Again, context normalization symbolically encodes the relational pattern. For any triplet of objects, regardless of the values they take, context normalization produces identical output in the cases above. With context windows larger than 3, the behavior becomes more complex. These properties of context normalization make it a confounder in the evaluation of relational architectures. In particular, for small context windows especially, context normalization symbolically encodes the relevant information. Experiments on relational architectures should evaluate the architectures' ability to *learn* those relations from data. Hence, we do not use context normalization in our experiments. ## D Higher-Order Relational Tasks As noted in the discussion, the tasks considered in this paper are solvable by modeling second-order relations at most. One of the main innovations of the relational convolutions architecture over existing relational architectures is its compositionality and ability to model higher-order relations. An important direction of future research is to test the architecture's ability to model hierarchical relations of increasingly higher order. Constructing such benchmarks is a non-trivial task which requires careful thought and consideration. This was outside the scope of this paper, but we provide an initial discussion here which may be useful for constructing such benchmarks in future work. Propositional logic. Consider evaluating boolean logic formula such as, x1 ^ ppx2 _ x3q ^ pp x3 ^ x4q _ px5 ^ x6 ^ x7qqq. Evaluating this logical expression (in this form) requires iteratively grouping objects and computing the relations between them. For instance, we begin by computing the relation within g1 px3, x4q and the relation within g2 px5, x6, x7q, then we compute the relation between the groups g1 and g2, etc. For a task which involves logical reasoning of this hierarchical form, one might imagine the group attention in RelConvNet learning the relevant groups and the relational convolution operation computing the relations within each group. Taking inspiration from logical reasoning with such hierarchical structure may lead to interesting benchmarks of higher-order relational representation. Sequence modeling. In sequence modeling (e.g., language modeling), modeling the relations between objects is usually essential. For example, syntactic and semantic relations between words are crucial to parsing language. Higher-order relations are also important, capturing syntactic and semantic relational features across different locations in the text and across multiple length-scales and layers of hierarchy (see for example some relevant work in linguistics Frank et al., 2012; Rosario et al., 2002). The attention matrix in Transformers can be thought of as implicitly representing relations between tokens. It is possible that composing Transformer layers also learns hierarchical relations. However, as shown in this work and previous work on relational representation, Transformers have limited efficiency in representing relations. Thus, incorporating relational convolutions into Transformer-based sequence models may yield meaningful improvements in the relational aspects of sequence modeling. One way to do this is by cross-attending to a the sequence of relational objects produced by relational convolutions, each of which summarizes the relations within a group of objects at some level of hierarchy. Set embedding. The objective of set embedding is to map a collection of objects to a euclidean vector which represents the important features of the objects in the set (Zaheer et al., 2017). Depending on what the set embedding will be used for, it may need to represent a combination of object-level features and relational information, including perhaps relations of higher order. A set embedder which incorporates relational convolutions may be able to generate representations which summarize relations between objects at multiple layers of hierarchy. Visual scene understanding. In a visual scene, there are typically several objects with spatial, visual, and semantic relations between them which are crucial for parsing the scene. The CLEVR benchmark on visual scene understanding (Johnson et al., 2017) was used in early work on relational representation (Santoro et al., 2017). In more complex situations, the objects in the scene may fall into natural groupings, and the spatial, visual, and semantic relations between those *groups* may be important for parsing a scene (e.g., objects forming larger components with functional dependence determined by the relations between them). Integrating relational convolutions into a visual scene understanding system may enable reasoning about such higher-order relations. ## E Geometry Of Representations Learned By Md-Ipr And Relational Convolutions In this section, we explore and visualize the representations learned by MD-IPR and RelConv layers. In particular, we will visualize the representations produced by the RelConvNet model trained on the Set task described in Section 4.2. Recall that the MD-IPR layer learns encoders ϕ1, ψ1*, . . . , ϕ*dr , ψdr . In this model dr 16, ϕi ψi (so that learned relations are symmetric), and each ϕiis a linear transformation to dproj 4-dimensional space. The representations learned by a selection of 6 encoders is visualized in Figure 16. For each of the 81 possible Set cards, we apply each encoder in the MD-IPR layer, reduce to 2-dimensions via PCA, and visualize how each encoder separates the 4 attributes: number, color, fill, and shape. Observe, for example, that "Encoder 0" disentangles color and shape, "Encoder 2" disentangles fill, and "Encoder 3" disentangles number. Next, we visualize, we explore the geometry of learned representations of relation vectors. That is, the inner products producing the 16-dimensional relation vector for each pair of objects. For each 81 2 pairs of Set cards, we compute the 16-dimensional relation vector learned by the MD-IPR layer, reduce to 2 dimensions via PCA, and visualize how the learned relation disentangles the latent same/different relations among the four attributes. This is shown in Figure 17. We see some separation of the underlying same/different relations among the four attributes, even with only two dimensions out of 16. Finally, we visualize the representations learned by the relational convolution layer. Recall that this layer learns a set of graphlet filters f P R s-s-dr-nf which form templates of relational patterns against which groups of objects are compared. In our experiments, the filter size is s 3 and the number of filters is nf 16. Hence, for each group g of 3 Set cards, the relational convolution layer produces a 16-dimensional vector, xRrgs, fyrel P R nf, summarizing the relational structure of the group. Of the 81 3 possible triplets of Set cards, we create a balanced sample of "sets" and "non-sets". We then compute xRrgs, fyrel and reduce to 2 dimensions via PCA. Figure 8 strikingly shows that the representations learned by the relational convolution layer very clearly separate triplets of cards which form a set from those that don't form a set. ![28_image_0.png](28_image_0.png) different encoders seemingly specializing to encode one or two attributes. ![29_image_0.png](29_image_0.png) Figure 17: The relations learned by the MD-IPR layer encodes the latent relations underlying the Set task. ![29_image_1.png](29_image_1.png) Figure 18: The relational convolution layer produces representations which separates 'sets' from 'non-sets'.
Review 1: Summary: This work investigates the impact of inductive biases on solving tasks relational tasks - whose problem instances are inputs containing multiple objects which may or may not be related in some way, and the task is to predict this relation (or its absence). The work develops an architecture called Relational Convolutional Networks, which is designed to compositional in inferred relations, i.e. it has a compositionally and relational inductive bias. This proposed architecture is contrasted with architectures that are stated as only compositional (Transformers, GNNS) or only relational (PrediNet, CoRelNet), but not both. The authors conduct experiments which support that both inductive biases are required for solving relational tasks. Strengths and Weaknesses: # Strengths The work is well-written, easy to read, highly pedagogical. The figures and graphs are well-annotated and clear. The work is also highly reproducible. with the majority of important hyperparameters exposed to the reader. Experiments are accompanied by bootstrap CI estimates and multiple runs per point, lending trustworthiness to specific numbers quoted in the manuscript. # Weaknesses The main weakness of the paper lies in empirical evaluation. The high-level problem is that broad generalizations about entire classes of architectures based on one hyperparameter set for each architecture, which are given for “Relational Games” and “Set” in Tables 2 and 4 respectively. It is not stated how these hyperparameters were chosen, and critically (as is the primary interest for TMLR readers) whether these represent close to best-in-class choice for each problem setting. A more specific problem for is that the RelConvNet architecture is explicitly designed as a two layer architecture in all experimental settings, first with the ML-IPR layer, followed by the RelConv layer. In contrast to the Transformer model being compared to, which has a single layer in each setting, as to the GNN variants. This lends a systematic advantage to the RelConvNet architecture being proposed, particularly when considering that the class of functions transformers represent on inputs changes significantly as the number of layers increases https://arxiv.org/abs/2209.11895. Additionally it is known that for 12 layer transformer models, relational NLP tasks like Named Entity Recognition are solved very well (https://arxiv.org/abs/1810.04805). This can be done in 6 layers with distillation procedures (https://arxiv.org/abs/1910.01108). It is also unclear if standard optimization procedure is being followed, which would include a learning warmup period, and some form of decay schedule (cosine or liner) to a constant. No weight decay is used in the experimental settings, yet there is strong overfitting phenomena being observed in Figure 7. Similarly in Figure 7, we see GCN significantly outperforms GAT in terms of training accuracy - this should only happen if the optimization of the GAT model is being done poorly. The remaining weakness (though less major) lies in Section 3, which builds up the proposed architecture. This section has no citations in it, which can give the reader an impression that this is all completely new, and can make it challenging to situate the proposed architecture amongst existing architectures and methods. For example, when talking about building pairwise relations, it could be useful to link to the transformer paper https://arxiv.org/abs/1706.03762, and when introducing Equation 6, it would be useful to relate this to a PerceiverEncoder https://arxiv.org/abs/2107.14795 where the learnable group query vector is the Perceiver Latent variable. Requested Changes: # Required (critical) * Perform a reasonable hyperparameter sweep for the experimental section, critically including multilayer variants of all architectures, include the results of the hyperparameter sweep as an Appendix section. Optimization hyperparameters should be allowed to differ across methods. Optimization should be done using optimizers that use weight decay sensibly (e.g. AdamW), and a learning rate schedule should be used. The optimal weight decay may be zero, and other values should be investigated. * Update main text results with best-in-class values for each method. An optional search constraint could be considering total-compute tied variants in order to be able to compare algorithms like-for-like. * The hyperparameter search does not need to be exhaustive, but the results do need to support the primary claim of the paper (compositional and relational biases are both required for solving relational tasks) beyond reasonable doubt. # Recommended for strengthening but not critical * Remove instances of the word “novel” throughout. * Consider renaming “Relational Convolutional Networks” as the naming is close to Relational Graph Convolutional networks which may cause confusion. Alternatively, highlight these are not the same early on. * Clarify what is meant by hierarchical relation in the abstract - is it a hierarchy of element-wise relations or something else? * Introduction paragraph 1: “...essential to accurately capturing complex systems.” Add citation. * Introduction paragraph 1: “...building internal world models.” Add citation. * Introduction paragraph 2: “...essential to success of deep representation learning.” Add citation. * Introduction paragraph 2: “..flat first order architectures.” Add citation. * Introduction paragraph 2: Add explanation here why Transformer is not a compositional framework for learning hierarchical relational representations. Add relevant citations. * Bullet point end of page 1 “Grouping mechanisms”: Define “statistically intractable”. * Page 2": "SET game". Add citation. * Page 2: "Transformers". Add citation. * Page 2: "Graph Neural Networks". Add citation. * Section 2: Where discussing the proposal to model pairwise relations between objects, explain why Transformer and GNNs are insufficient, as both of these models naively sound like they would do this. Alternatively if the thinking is consistent with prior methods this should be stated here, and then specifically what is new in this work should be called out. * Above Equation 2: Add elaboration why we want to promote weight sharing. * Equation 3: the i, j indices at this part of the manuscript feel unusual since there should be a symmetry over i, j as the filter acts on elements of a set. This may cause the reader to expect the i, j insides on the relational filters to be redundant. The issue is re solved by the attention mechanism in Equation 6 which restores the permutation equivariance. It would be good to add here some commentary around the symmetry at this point, and that the x_i x_j being discussed at this point are abstract, and may be choosable in a way that restores the symmetry. * Relate Equation 6 to the PerceiverIO Encoder with citations. * Explain to what extent Named Entity Recognition is or is not a relational task, and how BERT’s solving Named Entity Recognition is consistent with your results. * If after the hyperparameter sweep, GAT is still worse than GCN on training set, this should be explained. * Add the number of layers used for every architecture to the hyperparameter tables 2 and 4. Broader Impact Concerns: No broader impact concerns. ================================================== Review 2: Summary: The paper develops a method, termed relational convolutional networks, to learn representations of hierarchical relations. The authors propose an operation that convolve graphlet filters to incorporate the relational patterns in groups of objects. The final method enables compositional relational modules. Experiment results on various tasks or datasets are provided. Strengths and Weaknesses: Strength: - The paper is well written and clear. - The authors provide a detailed explanation on the motivation of the model architecture. - The proposed learning method is reasonable. - The experiment results appear to support the effectiveness of the method. Weakness: - Only results on synthetic tasks are provided, though this may not be a huge issue given the lack of suitable benchmark. - Ablation studies on each component are not provided (apologies if I missed it). - The originality over existing works are not immediately clear from reading the introduction. I am not sufficiently familiar with the literature and existing works to list other major weaknesses, and am open to hearing thoughts from other reviewers. Requested Changes: - Since the method involves several modules, ablation studies on the contribution of each module would be beneficial. - In the introduction, it is helpful to give a clear explanation of the originality of the work over existing ones (e.g., with respect to the problem setting considered or learning method). Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper introduces a novel type of DNN layer which performs _relational_ computations through inner products of embeddings. This is done by groups, and in a "convolutional" way, i.e. all groups are applied the same filters to. Fixed-sized groups are constructed via an attention mechanism which is regularized to have low entropy (i.e. picking objects rather than aggregating them). The authors demonstrate good performance or relational tasks, as well as qualitatively different (and better) generalization properties on a set pattern matching task. Strengths and Weaknesses: Generally the paper is well written, easy to understand, proposes an interesting and novel method, and provides strong basic evidence that the method performs well and does as intended. Beyond the usual "there could be larger scale results", I think there could be more results demonstrating how the method works. The current investigative results very much pass the bar of TMLR to me, but I'd encourage the authors to dig even deeper (e.g. can the authors quantify and generalize what's in Figure 9? Compare it against other methods, including methods that do disentanglement?). Requested Changes: Things I'd like the authors to address: $\left<x_1W_1,x_2W_2\right>=x_2W_2W_1^\top x_1^\top = x_2 W x_1$ has interesting properties when $W$ is positive definite, and also it's interesting that if $d_{proj} < d_\phi$ then $W_2W_1^\top$ is a low-rank approximation of $W$. I wonder if the authors have thought about this? The text mentions that pairwise computations are a limitation, but really the number of possible groups is $2^n$ (or $\binom{n}{s}$ if we limit ourselves to size $s$ groups); isn't that the "scary number"? Transformers work "just fine" in $O(n^2)$ (and reasonable $O(n\log n)$ versions exist). About the attention grouping, the text notes: "learn $n_g$ groupings of objects, retrieving $s$ objects per group." I'm a bit ambivalent w.r.t. describing this as "retrieving objects". In the limit, the attention operation in (6) can just average all the objects' embeddings, did it really "retrieve" something or did it do some intermediate aggregation operation? I don't think the way the authors wrote it is necessarily wrong but it feels misleading. In the proposed architecture, the relational convolutions are applied on aggregates, not directly on (groupings of) objects. The authors propose adding entropy regularization on the attention, but this entropy or the attention values are not reported (Figure 6 has no colorbar; no way to know exactly what those colors mean). How does entropy change during training? Are different factors more or less effective? Regularizing attention entropy for sparsity is something that's been done before, please reference prior work. One elephant in the room is what happens when objects are not neatly separated in the input space? or when the notion of what's an object or a collection of objects is blurry? Presumably this shouldn't be a problem, given the attention mechanism which is known to work even in e.g. vision tasks, but it would have been nice for the authors to train on even just a toy vision task (CIFAR100 SET?). As the authors point out in Limitations, this work is meant to lay the foundations, but this or experiments on >2 order interactions would add a lot to the work. I wonder if a regular deep ConvNet baseline would make sense? One thing that the proposed architecture does is apply local filters to inputs. Sure the inputs of those tasks to a ConvNet would be biased by their location within the input, but that should be something that the model should be able to easily overcome with sufficient depth. The "Common CNN Embedder" described in Table 2 feels incredibly shallow. The authors do a classic mistake, introduce a regularization term and not report results without the term applied/different weights on the term. More generally, many design choices appear to not be justified by empirical results (and I suspect they are, but the results just aren't reported in the paper. "Trust me it works" is not sufficient). Final point, I know it may be unfair to compare to highly optimized transformer CUDA kernels, but I couldn't find it mentioned how the method fares computationally in practice. Does it seem to scale well? Broader Impact Concerns: Not addressed. ==================================================
# Unifying Language Learning Paradigms Anonymous authors Paper under double-blind review ## Abstract Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pretraining setup should be. This paper presents a unified framework for pretraining models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives - two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pretraining objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pretraining objectives and find that our method pushes the pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. We release flax-based T5X model checkpoints for the 20B model. ## 1 Introduction There is a wide spectrum of pre-trained model options for NLP researchers and practitioners these days (Devlin et al., 2018; Brown et al., 2020; Raffel et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Thoppilan et al., 2022; Fedus et al., 2021; Du et al., 2021; Chowdhery et al., 2022). When faced with the question of what model should one use, the answer is often *it depends*, followed by *on what task?* Answering this can be overwhelming, comprising of a number of fine-grained follow-up questions like, 'encoderonly or encoder-decoder?', *'span corruption or language model?'*. Pressing further, the answer always seems to depend on the target downstream task. This paper questions and rethinks this thought process, specifically answering the questions of *why should the choice of the pre-trained LM depend on the downstream task?* and how can we pre-train models that work universally well across many tasks?. This paper proposes a step towards making a universally applicable language model possible. We present a framework for *Unifying Language Learning Paradigms* or UL2 in short, that is consistently effective across a very diverse set of tasks and setups. Figure 1 shows an example of how UL2 can perform universally well, unlike other models that often have to make a trade-off. ![1_image_0.png](1_image_0.png) Figure 1: In both decoder-only and encoder-decoder setups, UL2 strikes a significantly improved balance in performance between fine-tuned discriminative tasks and prompt-based 1-shot open-ended text generation than previous methods. Note: Dec and EncDec are compute matched but EncDec models have double the parameters. The appeal of a universal model is clear, i.e., as this not only allows concentrated effort in improving and scaling a single model, instead of diversifying resources across N models. Moreover, under resource constrained settings where only a few models can be served (e.g., on device), it would be preferable to have a single pretrained model that can perform well on many types of tasks. At the core of UL2 is a the newly proposed Mixture-of-Denoisers (MoD), a pre-training objective that enables strong performance across tasks. MoD is a mixture of several well-established denoising objectives along with new ones; namely X-denoising (extreme denoising) which considers extreme span lengths and corruption rates, S-denoising (sequential denoising) that strictly follows sequence order, and R-denoising (regular denoising) that is a standard span corruption objective introduced in (Raffel et al., 2019). We show that MoD is conceptually simple but highly effective for a diverse set of tasks. Our approach exploits the realization that most (if not all) well-studied pre-training objectives differ in the type of context a model is conditioned on. For example, the span corruption objective is akin to invoking multiple regions of prefix language modeling (PLM) (Liu et al., 2018; Raffel et al., 2019) whereby prefixes are contiguous segments of non-corrupted tokens and targets have full access to prefixes of all PLM segments. The setting where the span approaches the full sequence length is approximately a language modeling objective conditioned on long-range context. Thus, we are able to design a pre-training objective that smoothly interpolates these different paradigms (span corruption vs language modeling vs prefix language modeling). It is also easy to see that each denoiser is difficult in different ways. They also differ in the nature of extrapolation (or interpolation). For example, bounding a model by bidirectional context (or the future) (ie.., span corruption) makes the task easier and becomes more akin to fact completion. Meanwhile, PrefixLM/LM objectives are generally more *'open ended'*. These behaviours can be easily observed by monitoring the cross entropy losses of these different denoising objectives. Given the MoD formulation, we conjecture that it is beneficial for our model to not only distinguish between different denoisers during pre-training but also to adaptively switch modes when learning downstream tasks. We introduce mode switching, a new concept that associates pre-training tasks with dedicated sentinel tokens and allows dynamic mode switching via discrete prompting. Our model is able to switch modes between R,S and X denoisers on-demand after being pre-trained. We then disentangle the architecture from the self-supervision scheme. While it might be a common misconception, as previously noted in Raffel et al. (2019), that a pre-trained model is strongly characterized by its backbone architecture (e.g., decoder-only vs. encoder-decoder), we find that the choice of the denoiser has significantly more impact. MoD supports either backbone, similar to how T5's span corruption may be trained with a decoder-only model. As such, UL2 is agnostic to architecture. We argue that the choice of backbone architecture is mainly a trade-off across different efficiency metrics. We conduct systematic and ablative experiments on a suite of 9 diverse tasks aimed to capture different problem formulations (supervised and prompt-based in-context few-shot learning). We experiment with the SuperGLUE suite (Wang et al., 2019), and three tasks from the GEM benchmark (Gehrmann et al., 2021). In addition, we evaluate on open text generation, as well as prompt-based one-shot settings on all tasks. In this ablative setup, our experimental results show that UL2 outperforms T5 and GPT-like baselines on all 9 setups. On average, UL2 outperforms a T5 baseline by +43.6% and a language model by +76.1%. Among all the other competitive baselines considered, UL2 is the only method that outperforms T5 and GPT-like models on all tasks. We scale UL2 up to a moderate scale setting of approximately 20B (19.5 to be exact) parameters and run experiments across a very diverse suite of 50+ NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our results show that UL2 achieves SOTA on a vast majority of tasks and setups. Finally, we conduct zero/few-shot experiments with UL2 and show that UL2 outperforms GPT-3 175B on zero shot SuperGLUE. When compared with newer state-of-the-art models like GLaM (Du et al., 2021), PaLM (Chowdhery et al., 2022) and ST-MoE (Zoph et al., 2022), UL2 remains competitive at a compute-matched setup despite only training on C4 corpus which is known to be less effective than specially curated datasets used in (Du et al., 2021; Chowdhery et al., 2022). We delve into understanding trade-offs between zero-shot and finetuning performance and show that UL2 is pareto-efficient with respect to both learning paradigms. On one-shot summarization, UL2 triples the performance of an LM adapted T5 XXL model and is competitive with (or outperforms) PaLM and LaMDA at the same compute cost. We release T5X-based flax checkpoints of the trained UL2 model. ## 2 Background: Pre-Trained Language Models In this section, we discuss background surrounding pretrained language models, pretraining objectives and other unified pretraining proposals. ## 2.1 Pre-Trained Language Models Learning pre-trained representations for language is a far-reaching pillar of modern NLP research, dating back to (Mikolov et al., 2013; Pennington et al., 2014; Neumann et al., 2018; Dai & Le, 2015; Howard & Ruder, 2018). The first pre-trained Transformer, GPT, was proposed by (Radford et al., 2019) and was trained as a causal language model. Subsequently, BERT (Devlin et al., 2018) demonstrated the importance of bidirectional modeling for many downstream tasks. BERT introduced masked language modeling (MLM), a denoising objective that reconstructs the input in-place using bidirectional receptive fields. XLNet Yang et al. (2019) introduced the Permutation Language Modeling to account for dependencies between masked tokens during training. A number of additional papers (e.g., RoBERTA (Liu et al., 2019), SpanBERT (Joshi et al., 2020)) suggested further improvements to the pre-training process. At the same time, two-stack encoder-decoder architectures such as T5 (Raffel et al., 2019) gained popularity due to their improved performance on classification and sequence-to-sequence ("seq2seq") tasks. However, so far, these models have shown limited performance on open-text generation and prompt-based inference (i.e., in-context learning), which motivates the use of decoder-only models that are trained with different objectives (e.g., GPT-3 (Brown et al., 2020), GLaM (Du et al., 2021), LaMDa (Thoppilan et al., 2022) and PaLM (Chowdhery et al., 2022)). In this work, we aim to bridge the performance gap between the two by means of a general training paradigm that suits both architectures. Decoder-only vs Encoder-only The key similarities of decoder-only and encoder-only architectures is that decoder-only architectures operate with an *input-to-target* paradigm or *targets-only* paradigm if CausalLM is used over PrefixLM used. For both architectures, the objective is always to predict the next token (LM) and are therefore autoregressive models. Notably this is different from position-wise masked LM denoising (sometimes known as *autoencoding*), which have been popularized by encoder-only BERT-style models. These class of models are very restricted in their generative capabilities. On top of that, task specific classification heads are also typically employed for downstream tasks. Because of the cumbersomeness of task specific classification heads, we strongly do not recommend using this class of autoencoding models moving forward and consider them somewhat deprecated. Caveats do apply. For instance, regression is the probably only reason why one would add a task specific head (Lees et al., 2022), or to squeeze out some efficiency gains from eliminating a full vocabulary. Either way, one can always start from a encoder-decoder and chop off the decoder later so there is no good reason to use an encoder-only model. Hence the only real objective consideration here is between decoder-only and encoder-decoder architectures. Decoder-only vs Encoder-Decoder The line between decoder-only and encoder-decoder models is less clear. PrefixLM models are *almost* encoder-decoder models with shared parameters (but not quite). From an inductive bias point of view, there are multiple differences. Encoder-Decoder models process input and targets independently with a different set of parameters. This is a form of sparsity where different set of parameters are used for different tokens. Encoder-Decoder models also have a cross attention component that connects input tokens to target tokens. Meanwhile, decoder-only models process inputs and targets by concatenating them. Hence, the representations of inputs and targets are concurrently build layer by layer as the input/targets propagate up the network. Conversely, the decoder in Encoder-decoder models generally only looks at the fully processed encoder input. Overall, the inductive bias of PrefixLM decoder-only models and Encoder-Decoder models could be pretty similar modulo the subtle differences stated above. The distinct property is that Encoder-Decoder models are generally approximately 2x parameters of a decoder-only model when compute-matched. Sparse Models On a side note, there have also been also an emerging trend of sparse pretrained models that achieve state-of-the-art performance. Sparse mixture-of-expert models such as the Switch Transformer (Fedus et al., 2021), GLaM (Du et al., 2021) and/or GShard (Lepikhin et al., 2020) have also demonstrated a lot of promise. While orthogonal to the topic of pretraining objectives, sparse models achieve a very different flop-per-parameter ratio compared to dense models - a core recurring motif in the debate surrounding encoder-decoder models vs decoder-only models. ## 2.2 Pre-Training Objectives For Large Language Models While recent research demonstrates the potential of large *supervised* multi-task pre-training (Aribandi et al., 2021; Sanh et al., 2021; Wang et al., 2022), most pre-training objectives rely on the vast availability of unsupervised data and use self-training techniques. As mentioned above, different architectures typically leverage different objectives. Decoder-only models are typically trained with causal language model objectives to mimic auto-regressive generation (Radford et al., 2019). Raffel et al. (2019) explored many objectives for encoder-decoder models and found span corruption to be effective. (Wang et al., 2022) conducts a systematic study of different architectures combined with three different pretraining objectives (causal LM, prefixLM and span corruption) and analyzed their impact on zero-shot generalization. Related to our proposed X-denoisers, (Wettig et al., 2022) studies the effect of corruption rate in BERT-style masked language modeling and hypothesizes that this improves sample efficiency along with benefitting larger models. Notably, the benefits of heightened corruption rates as a *standalone* denoiser is still unclear, as noted by (Raffel et al., 2019) and also apparent in our own ablations. Pre-training (or denoising) is generally applied on the subword level (Raffel et al., 2019; Devlin et al., 2018) but it is worth to note that it has also been applied on the character or byte-level (Xue et al., 2021; Tay et al., 2021c). In these setups, the corrupted spans are generally much larger than subword-based denoising. ## 2.3 Unified Pre-Training Proposals UniLM (Dong et al., 2019) proposed to train on multiple language modeling objectives using a single Transformer model. Specifically, UniLM trains on unidirectional LM, bidirectional LM and seq2seq LM. This is quite similar to combining auto-regressive LMs with BERT and prefix-LM models. Notably, UniLM trains using a cloze-type formulation which adds explicit mask tokens to the inputs. Losses are then computed by the difference of the predicted token and target token in a position-wise fashion. Aside from pretraining unification, there have been a recent trend of thematic unification, i.e., unifying common tasks into one model. Examples of these include UNICORN (Lourie et al., 2021) for commonsense reasoning, UnifiedQA (Khashabi et al., 2020; 2022) for question answering, Programming Puzzles (Schuster et al., 2021b) for problem solving, and UnifiedSKG (Xie et al., 2022) for Structured Knowledge Grounding. ## 3 Unifying Language Learning Paradigms (Ul2) This section describes the UL2 framework and the proposed pre-training objectives which we study for the remainder of the paper. ## 3.1 Pre-Training This section discusses the proposed pre-training objective. ## 3.1.1 Unified Perspective For Pre-Training Tasks Many pre-training tasks can be simply formulated as an 'input-to-target' task, wherein the input refers to any form of memory or context that the model conditions on, and the target is the model's expected output. Language models use all previous time-steps as inputs to the model to predict the next token, which is the target. In span corruption, the model leverages all uncorrupted tokens from the past and future as inputs for predicting the corrupted span (targets). Prefix-LMs are LMs that use past tokens as inputs, but consume the inputs bidirectionally: this offer more modelling power than unidirectional encoding of inputs in vanilla LM. Given this perspective, we can approximately reduce one pre-training objective to another. For instance, in the span corruption objective, when the corrupted span, i.e., target, is equal to the entire sequence, the ![5_image_0.png](5_image_0.png) Figure 2: An overview of UL2 pretraining paradigm. UL2 proposes a new pretraining objective that works well on a diverse suite of downstream tasks. ![5_image_1.png](5_image_1.png) Figure 3: Mixture of denoisers for training UL2. Greyed out rectangles are masked tokens that are shifted to 'targets' for prediction. problem becomes effectively1 a language modeling problem. With this in mind, using span corruption, by setting the span length to be large, we can effectively mimic the LM objective in local regions. We define a notation that covers all of the different denoising tasks that we use in this paper. The inputs and targets of the denoising tasks are generated by a SpanCorrupt function that is parameterized by three values (*µ, r, n*), where µ is the mean span length, r is the corruption rate, and n which is number of corrupted spans. Note that n may be a function of the input length, L, and the span length µ, e.g. L/µ, but in some cases, we use a fixed value of n. Given an input text, SpanCorrupt introduces corruptions to the spans of lengths that are drawn from a (normal or uniform) distribution with mean of µ. After corruption, the input text is then fed to the denoising task and the corrupted spans are used as targets to be recovered. As an example, to construct an objective analogous to causal language modeling using this formulation, one would simply set (µ = L, r = 1.0, n = 1), i.e. a single span with its span length equal to the length of the sequence. To express one similar to Prefix LM objective, one would set (µ = L − P, r = 1.0 − P/L, n = 1) where P is the length of the prefix, with the additional constraint that the single corrupted span always reaches the end of the sequence. 1This is roughly approximate since the model still conditions on a sentinel token. We note that this inputs-to-targets formulation can be applied to both encoder-decoder models and singlestack transformer models (e.g., decoder models). We opt to select models that predict the next target token instead of those that do so in-place (e.g., predict the current masked token in BERT) because the next-target formulation is more general and can subsume more tasks instead of using a special "CLS" tokens and task-specific projection heads. ## 3.1.2 Mixture Of Denoisers We conjecture that a strong universal model has to be exposed to solving diverse set of problems during pre-training. Given that pre-training is done using self-supervision, we argue that such diversity should be injected to the objective of the model, otherwise the model might suffer from lack a certain ability, like long-coherent text generation. Motivated by this, as well as current class of objective functions, we define three main paradigms that are used during pre-training: - **R-Denoiser** - The regular denoising is the standard span corruption introduced in Raffel et al. (2019) that uses a range of 2 to 5 tokens as the span length, which masks about 15% of input tokens. These spans are short and potentially useful to acquire knowledge instead of learning to generate fluent text. - **S-Denoiser** - A specific case of denoising where we observe a strict sequential order when framing the inputs-to-targets task, i.e., prefix language modeling. To do so, we simply partition the input sequence into two sub-sequences of tokens as context and target such that the targets do not rely on future information. This is unlike standard span corruption where there could be a target token with earlier position than a context token. Note that similar to the Prefix-LM setup, the context (prefix) retains a bidirectional receptive field. We note that S-Denoising with very short memory or no memory is in similar spirit to standard causal language modeling. - **X-Denoiser** - An extreme version of denoising where the model must recover a large part of the input, given a small to moderate part of it. This simulates a situation where a model needs to generate long target from a memory with relatively limited information. To do so, we opt to include examples with aggressive denoising where approximately 50% of the input sequence is masked. This is by increasing the span length and/or corruption rate. We consider a pre-training task to be extreme if it has a long span (e.g., ≥ 12 tokens) or have a large corruption rate (e.g., ≥ 30%). X-denoising is motivated by being an interpolation between regular span corruption and language model like objectives. This set of denoisers has strong connections with previously used objective functions: R-Denoising is the T5 span corruption objective, S-Denoising is connected to causal language models that are GPT-like, and X-Denoising can expose the model to a combination of objectives from T5 and Causal LMs. Notably, X-denoisers are also connected to improve sample efficiency since more tokens are learned to be predicted in each sample, in similar spirit to LMs. We propose blending all these tasks in a uniform fashion and have a hybrid self-supervised objective. The final objective is a mixture of 7 denoisers that are configured as follows: | Denoiser | Setting | |------------|----------------------------------------------------------------------------------------| | R | (µ = 3, r = 0.15, n) ∪ (µ = 8, r = 0.15, n) | | S | (µ = L/4, r = 0.25, 1) | | X | (µ = 3, r = 0.5, n)∪ (µ = 8, r = 0.5, n)∪ (µ = 64, r = 0.15, n) ∪ (µ = 64, r = 0.5, n) | Table 1: Configuration of UL2's mixture-of-denoisers used in the paper. For X- and R-Denoisers, the span length is sampled from a normal distribution with mean of µ. For S-Denoisers, we use a uniform distribution, fix the number of corrupted spans to 1, and have an additional | | Supervised Finetuning | | In-context One-shot | | | | | | | | | |-------|-------------------------|--------|-----------------------|-------|-------|-------|-------|-------|------|------|-------| | Obj | Arch | Params | SG | XS | SGD | TOT | SG | XS | SGD | TOT | LM | | CLM | Dec | 167M | 62.24 | 28.18 | 55.44 | 59.40 | 39.22 | 1.16 | 1.40 | 0.20 | -2.35 | | PLM | Dec | 167M | 62.44 | 28.21 | 55.55 | 59.52 | 42.54 | 1.08 | 3.70 | 6.40 | -2.54 | | SC | Dec | 167M | 67.67 | 29.14 | 55.48 | 60.47 | 38.53 | 1.16 | 2.20 | 1.60 | -3.62 | | SCLM | Dec | 167M | 63.36 | 29.02 | 55.71 | 60.00 | 40.78 | 3.03 | 1.27 | 0.10 | -2.38 | | UL2 | Dec | 167M | 65.50 | 28.90 | 55.80 | 60.39 | 42.30 | 8.01 | 6.30 | 5.80 | -2.34 | | PLM | ED | 335M | 69.30 | 31.95 | 55.70 | 60.91 | 38.18 | 6.50 | 7.11 | 3.90 | -2.42 | | SC | ED | 335M | 72.00 | 31.05 | 55.80 | 61.25 | 38.51 | 7.49 | 1.43 | 2.10 | -7.23 | | SCLM | ED | 335M | 72.50 | 31.69 | 55.70 | 60.94 | 39.74 | 5.13 | 8.70 | 7.30 | -2.40 | | UniLM | ED | 335M | 71.10 | 31.00 | 55.83 | 61.03 | 39.86 | 6.70 | 6.50 | 4.10 | -2.65 | | UL2 | ED | 335M | 73.10 | 31.86 | 56.10 | 61.50 | 41.30 | 11.51 | 6.63 | 6.50 | -2.55 | Table 2: Experimental results on a suite of language understanding and generation tasks on both supervised and one-shot setup. Models are pretrained on 32B tokens. constraint that the corrupted span should end at the end of the original input text, i.e. no un-cropped token should appear after the corrupted part. This is roughly equivalent to seq2seq denoising or the Prefix LM pre-training objective. Since LM is a special case of Prefix-LM, we did not find it necessary to include a casual LM task into the mixture. All tasks have an approximate equal participation in the mixture. We also explore an alternative where we increase number of S-denoisers up to 50% of denoisers in the Mixture and all other denoisers take up the remainder. We present detailed ablation studies of various design choices in the later sections. Finally, the mixing in Mixture-of-Denoisers is what makes it universally powerful. Alone, some of the denoiser types do not perform well. For instance, the original T5 paper explored an option with 50% corruption rate (X-denoising) and found that to not work well. The implementation of UL2's mixture of denoiser is simple and easy to implement using a library like seqio2 (Roberts et al., 2022). See appendix for more details on implementation. ## 3.1.3 Mode Switching We introduce the notion of paradigm-shifting via mode switching. During pre-training, we feed the model an extra *paradigm* token, i.e., {[R], [S], [X]} that helps the model switch gears and operate on a mode that is more suitable for the given task. For fine-tuning and downstream few-shot learning, to trigger the model to learn better solutions, we also add a paradigm token with respect to the setups and requirements of the downstream task. Mode switching in fact binds downstream behavior to one of the modes we used during upstream training. ## 3.2 Model Architecture UL2 adopts an architecture-agnostic philosophy. We argue that the choice between both architectures (encoder-decoder vs decoder-only) is a more of an efficiency trade-off and that architecture choice should not be conflated with the pretraining objective. Hence, we have both a UL2 decoder and UL2 encoder-decoder in similar spirit to how there are multiple sizes per model. We discuss this efficiency trade-off in detail in our experiment section. UL2 adopts a pretty standard vanilla T5 Transformer that have been enhanced with modifications that have withstood the test of time, i.e., GLU layers (Shazeer, 2020) and T5-style relative attention. To not further conflate architectural modifications with pretraining contributions, the backbone of the model remains similar to a T5-like model. This is also in light of results such as (Narang et al., 2021). 2https://github.com/google/seqio ## 4 Ablative Experiments This section describes our ablative experimental setup (e.g., baselines, datasets, implementation details) and results. Our overall findings show that UL2 outperforms T5-like and GPT-like models on 9 out of 9 tasks. ## 4.1 Baselines For pre-training objectives, we compare with the following pre-training baselines: - **Causal Language Model (CLM)** - This is the standard left-to-right auto-regressive language model pre-training, used in many standard pre-trained models, like GPT (Radford et al., 2019; Brown et al., 2020). We refer to this model as GPT-like in our experiments. - **Prefix LM (PLM)** - This is a slight variation of causal LM where M has bidirectional receptive fields, introduced in (Liu et al., 2018; Raffel et al., 2019). We uniformly sample the length of M and only compute the loss at the auto-regressive targets. - **Span Corruption (SC)** - This is the standard denoising objective proposed in T5 (Raffel et al., 2019). The idea is to blank out certain text portions and replace them with sentinel tokens. The text replaced with sentinel tokens are then copied to the targets and auto-regressively generated by the model. We use a mean span of 3 and denoising rate of 15% following the default T5 setup. - **Span Corruption + LM (SCLM)** - We train on a mixture of CLM and Span Corruption with an equal mix ratio. We use the same hyper-parameters for SC for the SC component of this objective. - **UniLM (ULM)** - This is the objective proposed in Dong et al. (2019). Similar to the original UniLM, we mix causal language modeling, Prefix LM (sequence-to-sequence LM) and bidirectional i.i.d denoising. Instead of training UniLM in cloze-style or BERT-style, we opt to generate the masked tokens. This allows this objective to be applicable to both decoder-only and encoder-decoder architectures and remove the need for task-specific linear heads for fine-tuning. For all objectives, we explore both single-stack and encoder-decoder architectures. All architectures are inputs-to-targets either implemented in encoder-decoder or decoder-only model structures since we consider BERT-style masked language modeling pretraining to have already been effectively subsumed by this style of pretraining, as empirically made evident in (Raffel et al., 2019). Task-specific classification heads are also not recommended, since they clearly go against the principle of having a universal model (and are also very cumbersome). ## 4.2 Experimental Setup We conduct our experiments on a diverse set of supervised and prompt-based few-shot learning tasks. ## 4.2.1 Datasets And Tasks The datasets we use are SuperGLUE (Wang et al., 2019), comprising of 8 NLU sub-tasks. We also conduct experiments on 3 datasets from the GEM benchmark (Gehrmann et al., 2021) that focuses on language generation problems. We arbitrarily select XSUM (summarization), ToTTo (table-to-text generation) (Parikh et al., 2020) and Schema Guided Dialog (SGD) (Rastogi et al., 2019) from the GEM benchmark. For all these tasks, we evaluate on both supervised fine-tuning and prompt-based one-shot learning. Finally we also compare our models on their general ability for text generation using perplexity scores on the C4 validation set. We believe our suite of tasks gives us good coverage across many setups in the literature including supervised and conditional few-shot learning. ## 4.2.2 Metrics And Holistic Evaluation For SuperGLUE, we report well-established metrics such as accuracy, F1 or Exact Match, whenever appropriate. For GEM benchmark, we use the Rouge-L metric. For language modeling we report negative log perplexity. Table 3: Relative performance compared to standard encoder-decoder span corruption model (T5). Results in this table are expressed in terms of relative percentage improvements over a baseline. Model with ? denotes the main compared baseline. Overall score column is normalized to be weighted equally across tasks. | Supervised | | | One-shot | | | | | | | | | | |--------------|------|-------|------------|------|------|-------|-------|-------|-------|------|-------|-----| | Obj | Arch | SG | XS | SGD | TOT | SGL | XS | SGD | TOT | LM | All | Win | | CLM | Dec | -13.6 | -9.2 | -0.7 | -3.0 | +1.8 | -91.7 | -2.2 | -90.5 | +208 | -31.7 | 2/9 | | PLM | Dec | -13.3 | -9.2 | -0.5 | -2.8 | +10.5 | -85.6 | +158 | +205 | +185 | -11.0 | 4/9 | | SC | Dec | -5.6 | -6.2 | -0.6 | -1.3 | +0.05 | -84.5 | +54 | -23.8 | +99 | -20.6 | 3/9 | | SCLM | Dec | -6.0 | -6.5 | -0.2 | -2.0 | +5.9 | -59.6 | -11.3 | -95 | +204 | -16.1 | 2/9 | | UniLM | Dec | -10.1 | -8.2 | -0.2 | -2.3 | -5.3 | -69.1 | +382 | +110 | +200 | -16.1 | 3/9 | | UL2 | Dec | -9.0 | -6.9 | 0.0 | -1.4 | +9.8 | +6.9 | +340 | +176 | +209 | +14.1 | 5/9 | | PLM | ED | -3.7 | +2.9 | -0.2 | -0.6 | -0.86 | -13.3 | +397 | +86 | +199 | +16.7 | 5/9 | | SC? | ED | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | - | | SCLM | ED | +0.7 | +2.1 | -0.2 | -0.5 | +3.2 | -31.6 | +508 | +248 | +201 | +28.3 | 7/9 | | UniLM | ED | -1.2 | -0.2 | +0.1 | -0.4 | +3.5 | -11.0 | +355 | +95 | +173 | +19.8 | 5/9 | | UL2 | ED | +1.5 | +2.6 | +0.5 | +0.4 | +7.2 | +53.6 | +363 | +210 | +184 | +43.6 | 9/9 | The *universality* of the models, i.e., their collective performance across all range of tasks, is a main evaluation criteria here. To enable the comparison between models from this perspective, we need an aggregate performance score. However, metrics on different tasks we include are widely different in nature - take, for example, F1 and perplexity. To address this, we opt to report and use the *normalized relative gain with* respect to baselines as an overall metric. For this purpose, we use the standard language model (decoder-only) (GPT-like) and standard span denoising encoder-decoder (T5) as prime baselines and report all methods against their relative performance against these well-established candidates. We believe this is the most suitable method for comparing these models since it is easy to reason about how much a new model is generally better than a popular setting (e.g., GPT or T5-like). We also highlight the fact that the overall gain is **normalized**, so this becomes harder to exploit or be susceptible to benchmark lottery effects (Dehghani et al., 2021b). ## 4.2.3 Implementation Details Our experiments are all conducted in JAX/Flax (Bradbury et al., 2018) using the open source T5X3framework (Roberts et al., 2022) and Flaxformer4. We pre-train all models for 500K steps with a batch size of 128 and a sequence length of 512 inputs and 512 targets using the C4 corpus. The total approximate tokens seen during pre-training is approximately 32 billion tokens. Each pre-training run is typically trained using 64 to 128 TPUv4 chips (Jouppi et al., 2020). We optimize our model with the Adafactor (Shazeer & Stern, 2018) optimizer with an inverse square root learning rate. To understand the trade-off of different backbone architectures, we run all baseline pre-training objectives with both the decoder-only architecture and encoder-decoder architecture. We report key experiment results using a base architecture of approximately 167M parameters for the decoder model and 335M parameters for the encoder-decoder model. All models use a standard Transformer that uses SwiGLU layers as described in (Shazeer, 2020). We utilize the default T5 English 32K sentencepiece for all models. Within the context of decoder-only models, except for the case of the decoder model trained on causal LM, our experiments always use a bidirectional receptive field **only** in it's input segment and autoregressive decoding at the *targets* segment. This is essentially the a PrefixLM-type architecture5(Raffel et al., 2019) which we find to be consistently better than a full causal decoder model. 3https://github.com/google-research/t5x. 4https://github.com/google/flaxformer 5Not to be confused with the PrefixLM pretraining objective. Table 4: Relative performance compared to standard decoder causal language model (GPT-like). Results in this table are expressed in terms of relative percentage improvements over a baseline. Model with ? denotes the main compared baseline. Overall score column is normalized to be weighted equally across tasks. | | Supervised | | One-shot | | | | | | | | | | |-------|--------------|-------|------------|------|------|------|-------|-------|-------|-------|-------|-----| | Obj | Arch | SG | XS | SGD | TOT | SG | XS | SGD | TOT | LM | All | Win | | CLM? | Dec | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | - | | PLM | Dec | +0.3 | +0.1 | +0.2 | +0.2 | +8.5 | +74.3 | +164 | +3100 | -8.0 | +21.4 | 8/9 | | UniLM | Dec | +4.0 | +1.1 | +0.5 | +0.7 | -7.0 | +274 | +393 | +2100 | -2.5 | +21.0 | 7/9 | | SC | Dec | +8.7 | +3.4 | +0.1 | +1.8 | -1.8 | +87.0 | +57.1 | +700 | -54.2 | +13.9 | 7/9 | | SCLM | Dec | +1.8 | +3.0 | +0.5 | +1.0 | +4.0 | +387 | -9.3 | -50 | -1.3 | +15.8 | 6/9 | | UL2 | Dec | +5.2 | +2.6 | +0.6 | +1.7 | +7.9 | +1190 | +350 | +2800 | +0.3 | +45.7 | 9/9 | | PLM | ED | +11.3 | +13.4 | +0.5 | +2.5 | -2.6 | +946 | +408 | +1850 | -2.9 | +48.6 | 7/9 | | SC | ED | +16.5 | +10.2 | +0.6 | +3.1 | -1.8 | +1107 | +2.3 | +950 | -208 | +31.7 | 7/9 | | SCLM | ED | +15.7 | +12.5 | +0.5 | +2.6 | +1.3 | +726 | +522 | +3550 | -2.2 | +60.3 | 8/9 | | UniLM | ED | +14.2 | +10.0 | +0.7 | +2.7 | +1.6 | +974 | +365 | +1950 | -12.9 | +52.6 | 8/9 | | UL2 | ED | +17.4 | +13.1 | +1.2 | +3.5 | +5.3 | +1754 | +373 | +3150 | -8.3 | +76.1 | 8/9 | ## 4.3 Overview Of Ablative Experimental Results Table 2 reports the raw results on all the benchmark tasks and datasets. To facilitate easier comparison across setups, we also report relative comparisons against well-established baselines such as T5 and GPT models. This is reported in Tables 3 and 4 respectively. ## 4.3.1 Decoder Vs Encoder-Decoder Before we dive into the results of this segment, we would like to remind readers that there is no easy way to compare decoder-only models with encoder-decoder models. In short, we can either compare them in a compute-matched setup or a parameter-matched way. Therefore, the encoder-decoder models in these set of results have approximately twice the number of parameters as the decoder models but have similar speeds. We note that this may slightly favor encoder-decoders since this can be interpreted form of model sparsity. Moving back to the results, when using T5 as the reference baseline, we note that, with the exception of UL2 Decoder, none of the pre-trained decoders models outperform T5. Additionally, there is a 10% to 30% degradation in overall relative performance. The best decoder baseline model here is the Prefix-LM decoder model, which is about 10% worse than the T5 baseline. It is clear from these results that encoder-decoder models should be preferred over decoder-only models if and only if there is no concern about storage, i.e., parameter counts are generally less important than actual throughput (see (Dehghani et al., 2021a) for a detailed discussion). When there is a parameter constraint, the Prefix-LM decoder makes for a suitable alternative. Finally, an interesting data point is how we were able to push the UL2 decoder to outperform the T5 encoder-decoder setup by +14.6%. That said, this UL2 decoder does not outperform our UL2 encoder-decoder. However, this reinforces our point that the self-supervision objective may be intrinsically more important than the backbone architecture and negotiating architectural choices is mainly about efficiency trade-offs that can be studied independently. ## 4.3.2 Is Gpt And/Or T5 The Optimal Setup? Based on the relative comparisons against a GPT-like (causal LM + decoder) and T5-like (span corruption + encoder decoder) setup, we are able to easily identify if the well-established setups are indeed optimal or already close to optimal. Firstly, the causal LM (GPT-like) setup appears to be the worse configuration as it is outperformed by all our baselines. We thus make the straightforward recommendation of always at least training with Prefix-LM or UniLM whenever possible. The best decoder-only model (with the exception Table 5: Effect of different paradigm prompts on 1-shot evaluation, using a Encoder-Decoder architecture pre-trained using UL2 on 7B tokens. | Model/Prompt | 1Shot XSum | 1Shot SuperGLUE | |----------------|---------------|-------------------| | Baseline T5 | 6.9/0.6/6.1 | 33.9 | | UL2 / None | 13.2/1.4/10.8 | 38.3 | | UL2 / [R] | 13.5/1.5/11.1 | 38.5 | | UL2 / [S] | 11.6/1.2/10.0 | 38.5 | | UL2 / [X] | 8.9/0.9/7.6 | 38.7 | of UL2) is the Prefix-LM pre-training that keeps a memory prefix for a language model to condition on. Regarding Prefix-LM pre-training, it is interesting that Prefix-LM actually outperforms the T5 span corrupt setup by +16.7%. The Prefix-LM encoder-decoder model is indeed less effective than the default T5 model on SuperGLUE but is on a whole, stronger especially when it comes to one-shot or open text-generation. Overall, between the Prefix-LM and the span corruption encoder-decoder model (T5), it is unclear to which is the universally superior model as there are gives and takes across the different sub-tasks although it is worthy noting the Prefix-LM EncDec model only sacrifices a minor degradation in certain tasks for a huge multifold increase in other tasks. ## 4.3.3 On The Performance Of Unilm And Sclm On the encoder-decoder setup, both the UniLM and SCLM objective performs better than the standard span corruption objective in terms of aggregated and normalized overall gain. This shows that, in general, mixing pre-training objectives is helpful. On the decoder setup, there is an overall gain of +9.4% gain for UniLM and +16.1% for SCLM compared to the baseline causal LM. In terms of individual tasks, UniLM and SCLM both outperforms T5 on 6 out of 9 tasks. It is also noteworthy that SCLM performs the best out of all models on 1shot generation (SGD and TOTTO). ## 4.3.4 On The Performance Of The Proposed Ul2 Finally, we note that UL2 performs the best when compared against both the GPT-like model and the T5-like model. Overall, UL2 outperforms by T5 +43.4% and +76.2% when compared to the GPT-like CLM decoder model. This is the highest relative (overall) gain compared to all other alternatives. We also note that on all individual tasks, UL2 outperforms T5 on **all 9 out of 9** considered tasks. Hence, UL2 is a universally better option compared to the span corruption T5 model. While UL2 doesn't always outperform all baselines on all individual tasks, UL2 is very consistent. Even when it loses to another method on a task, the loss is relatively marginal (e.g., 6.5 vs 7.3 on one-shot TOTTO). Conversely, when UL2 outperforms a baseline like T5, the gain can be as large as +363%. UL2 remains the most consistently strong method. The consistent improvement also suggests that it can be used as a more consistent replacement to T5 and GPT-like models. ## 4.4 Mode Switching Ablations In order to ascertain that our mode switching capabilities have an effective on performance, we conduct ablation experiments. We conduct experiments on one-shot XSum and one-shot SuperGLUE. Table 5 reports the result of varying the paradigm prompt to the model. Firstly, we observe that the prompt has quite substantial effect on model performance - i.e., using the right or wrong prompt can lead to a 48% gap in performance (on XSum, Rouge-1). SuperGLUE, on the other hand, is less sensitive to prompting. On SuperGLUE, using prompts are almost always better than not using prompts during one-shot eval. However, for XSum, getting the prompt right seems to be crucial for good performance. Table 6: Ablation study for Mixture-of-Denoisers. Span, Rate and SD are in percentages (%). We report SuperGLUE score (SG) and XSUM Rouge-L (XS). | | Ablation Method | Supervised | One-shot | | | | | |------|-------------------|--------------|------------|------|------|------|------| | Name | Span (µ) | Rate (r) | SD% | SG | XS | SG | XS | | A | - | - | 100 | 69.3 | 31.1 | 38.2 | 6.5 | | B | 3 | 50 | 0 | 72.0 | 32.0 | 38.5 | 7.5 | | C | 3,8,12 | 15,50 | 14 | 71.9 | 32.1 | 38.6 | 4.1 | | D | 3,8,12,32 | 15,50 | 11 | 71.0 | 32.2 | 42.7 | 10.6 | | E | 3,8,32,64 | 15,50 | 11 | 73.1 | 32.2 | 40.7 | 10.4 | | F | 3,8,64 | 15,50 | 17 | 70.6 | 31.6 | 41.3 | 11.5 | | G | 3,8,32,64 | 15 | 25 | 69.2 | 31.6 | 42.4 | 10.1 | | H | 8, 64 | 15 | 25 | 72.5 | 31.2 | 39.2 | 10.9 | | I | 3,8,12, 32 | 15,50 | 50 | 71.2 | 32.0 | 38.1 | 11.7 | | J | 3,8,64 | 15,50 | 50 | 71.3 | 31.6 | 38.1 | 11.8 | | K | 3,8,12 | 15,50 | 0 | 73.7 | 32.0 | 39.3 | 2.6 | | L | 3,8,64 | 15,50 | 0 | 70.1 | 32.1 | 38.0 | 7.3 | ## 4.5 Mixture-Of-Denoisers Ablations We conduct extensive experiments to verify the effectiveness of individual objectives within the MoD objective. Table 6 reports results for these ablations. We report results for varying the mean span, and corruption rate, along with the percentage of S-denoising used (denoted by % SD)). Note that the total number of denoisers in a mixture is kSpank × kCorrupt_*Rate*k + 1. We label these configurations from Var-A through Var-J to refer to them easily. X-Denoising is Complementarily Effective but Does Not Suffice as a Standalone We observe that mixing Extreme Denoising is effective. Most of the best results across the board come from mixtures with long spans (e.g., 32 or 64). When compared with variants without long spans (Var-D vs. Var-C), we see that Var-D is strictly better. We also draw the readers attention to Var-H, which is a variant that only employs long spans. In general, Var-H performs poorly, suggesting that extreme denoising complements regular denoising but does not suffice in isolation. This also corroborates the result from Raffel et al. (2019) that shows that a 50% corruption rate does not perform well. This slightly conflicts with the finding of (Wettig et al., 2022) although our architectures use a inputs-to-targets form of pretraining instead of BERT-style masked language modeling. Small Amounts of S-Denoisers is Preferred We explore a setting where we scale S-denoisers to 50% of the entire MoD mixture. We find that this generally hurts performance. Hence, we make a conclusion that S-denoisers are necessary but only small amounts of S-denoisers (≈ 20%) are preferred. Var-K and Var-L also explore the case where there is no S-denoising at all. While performance on one task substantially improves (SuperGLUE), another substantially degrades (one-shot XSUM). Meanwhile for Var-L which is identical to var-F (but without S-denoising), performs on a whole, substantially worse. Hence, we showed that S-denoising is crucial. ## 4.6 Modestly Scaling Model Size And Pretraining Data We conduct additional experiments by scaling up both 1) the model size and 2) pre-training dataset size. Concretely, we scale the UL2 Encoder-Decoder model up to approximately 1B parameters and increase the number of pre-training tokens to 0.5 trillion tokens. Our motivation is to conduct a sanity check that the proposed formulation also works at different model scales and to observe if there are differences and implications at operating at a larger scale. Moreover, it has also become a staple for language model research to derive scaling laws (Kaplan et al., 2020; Tay et al., 2021b). Table 7 reports results in this scaled setting. At large scale, we find that the proposed of the UL2 encoder-decoder model is still competitive. A key difference now is that UL2 drops the SuperGLUE suite against T5 (1B). However, this is compensated by not only out-performing on 7 out of 8 tasks but also improving performance by 2-4 times on one-shot evaluation. The gains on supervised fine-tuning is smaller, but still noticeable across the board on XSUM, SGD and TOT. Table 7: Experiments with moderately scaled up models in terms of model compute (e.g., 1B for EncDec and 0.5B for decoder-only) and dataset size (0.5T tokens). | Finetuning | In-context Learning | | | | | | | | |--------------|-----------------------|----------------|------|------|------|---------------|-----|-----| | Model | SG | XS | SGD | TOT | SG | XS | SGD | TOT | | GPT-like | 62.3 | 37.1/15.7/30.2 | 56.0 | 60.3 | 36.4 | 1.2/0.1/1.1 | 3.5 | 0.0 | | T5 | 84.7 | 43.0/20.8/35.6 | 56.0 | 62.1 | 29.4 | 8.9/0.8/7.8 | 2.1 | 1.4 | | UL2 | 83.3 | 43.3/21.0/35.9 | 56.5 | 62.6 | 45.4 | 15.4/2.5/11.1 | 9.6 | 7.8 | ## 5 Scaling To 20B Parameters We are also interested to evaluate UL2 in a scaled up setting. Following the insights we obtained from ablative experiments, we use an encoder-decoder architecture for this run. While UL2 is architecture agnostic, our soft advice here is to probably default to an encoder-decoder architecture due to intrinsic sparsity. We train UL2 at a scale of approximately 20B total parameters. Compared to truly large language models (Du et al., 2021; Chowdhery et al., 2022), 20B represents a medium scale model that we train as a proof-of-concept resembling a hint of what UL2 can do at a relatively larger scale than our ablation experiments. Admittedly, not much thought was put into the exact parameter count of this model, i.e., we were training a 20B model already for some time and decided to see it to convergence. Additionally, we note that spiking and instabilities are common when scaling up models due to a potential barrage of reasons (data corruption, intermittent hardware issues like pre-emption). In this run we did not specifically control or put in place any mitigation strategies such as occasional restarts as we were not attentively monitoring the job. Hence, we find occasional loss spikes in the training of this 20B model. However, since many finetuning experiments using these checkpoints still often result in *sota* performance, we let it be for now and leave a properly monitored run for future work. Despite obtaining *sota* performance on 50+ NLP benchmarks, we expect the current presented results to be still an underestimate of the true potential of the model. We leave properly scaling UL2 to truly large scale to future work. ## 5.1 Pretraining And Model Configuration We follow the same training protocol in earlier experiments by pretraining on the C4 corpus but by also scaling the number of tokens the model sees during pretraining. We use a batch size of 1024 and 512 TPUv4 chips for pretraining this model. The model is trained on a total of 1 trillion tokens on C4 (2 million steps). The sequence length is set to 512/512 for inputs and targets. Dropout is set to 0 during pretraining. Pre-training took approximately slight more than one month for about 1 trillion tokens. We use the same mixture of denoisers as earlier sections. The model has 32 encoder layers and 32 decoder layers, d*model* of 4096 and df f of 16384. The dimension of each head is 256 for a total of 16 heads. Our model uses a model parallelism of 8. We retain the same sentencepiece tokenizer as T5 of 32k vocab size. Hence, UL20B can be interpreted as a model that is quite similar to T5 but trained with a different objective and slightly different scaling knobs. Similar to earlier experiments, UL20B is trained with Jax and T5X infrastructure. We release and open source T5X-based model checkpoints of this 20B model. ## 5.2 Experiments At 20B Scale This section describes our experimental setup for UL20B experiments. ## 5.2.1 Setup And Implementation Details We conduct experiments on both finetuning and in-context learning. For supervised finetuning, our models are continuously finetuned after N pretraining steps where N is typically from 50k to 100k. In other words, after each Nk steps of pretraining, we finetune on each downstream task and record its results. This is generally done in a manual fashion. While some tasks were finetuned on earlier pretrained checkpoints as the model was still pretraining, many were finetuned on checkpoints nearer to convergence that we release. As we continiously finetune, we stop finetuning on a task once it has reached *sota* to save compute. Finetuning is generally done in a per-task basis and not co-trained. Details of tasks where co-training is performed is found in the appendix. We leave the combination of massive multi-task training (Aribandi et al., 2021) and UL2 to future work. For supervised finetuning, we generally adopt a learning rate in the range of {5 × 10−5, 1 × 10−5 1 × 10−4} using the Adafactor optimizer . The general recipe is that we reset Adafactor optimizer states and/or adopt a loss normalization based on the number of real target tokens. This is reminiscent of the PaLM finetuning setup (Chowdhery et al., 2022). Batch size is generally a range of 32 to 128 although we did not find much impact of batch size on finetuning performance. Many of the evaluated tasks were not tuned much and we only ran once or twice before performing leaderboard submissions. ## 5.2.2 Datasets For Supervised Finetuning To demonstrate the universality of the approach, we consider a total of nearly 50+ NLP tasks. We list our categorization of tasks below. Note that the categorization of tasks are generally soft in nature and some tasks may cross into different categorization boundaries. - **Language Generation** - We consider summarization and data-to-text generation tasks. We use CNN/Dailymail (Hermann et al., 2015), XSUM (Narayan et al., 2018), MultiNews (Fabbri et al., 2019), SAMSum (Gliwa et al., 2019), WebNLG (Castro Ferreira et al., 2020) (English), E2E (Dušek et al., 2019) and CommonGen (Lin et al., 2020) to evaluate our models. For WebNLG, E2E and CommonGen, we use the versions from the GEM benchmark (Gehrmann et al., 2021). - **Language Generation with Human Evaluation** - We evaluate on a variety of text generation tasks using human evaluation, via the GENIE leaderboard (Khashabi et al., 2021). These tasks include aNLG (Bhagavatula et al., 2019), ARC-DA (Clark et al., 2018), WMT19 (Foundation), and XSUM (Narayan et al., 2018). - **Language Understanding, Classification and Question Answering** - We use Reading Comprehension, Question Answering, Text Classification and natural language inference datasets. Concretely, we use RACE (Reading comprehension) (Lai et al., 2017), QASC (Khot et al., 2020), OpenBookQA (Mihaylov et al., 2018), TweetQA (Xiong et al., 2019), QuAIL (Rogers et al., 2020), IMDB (Maas et al., 2011), Agnews (Zhang et al., 2015), DocNLI (Yin et al., 2021), Adversarial NLI (Nie et al., 2019), VitaminC (Schuster et al., 2021a), Civil Comments and Wikipedia Toxicity detection datasets (Borkan et al., 2019). We also use standard SuperGLUE (Wang et al., 2019) and GLUE (Wang et al., 2018) datasets. - **Commonsense Reasoning** - We use HellaSwag (Zellers et al., 2019), SocialIQA/SIQA (Sap et al., 2019), PhysicalIQA/PIQA (Bisk et al., 2020), CosmosQA (Huang et al., 2019), AbductiveNLI (Bhagavatula et al., 2019), CommonsenseQA (Talmor et al., 2018), CommonsenseQA2 (Talmor et al., 2021). - **Long Range Reasoning** - We use the Scrolls benchmark (Shaham et al., 2022) which comprises of seven component tasks including GovReport (Huang et al., 2021), SumScr (Chen et al., 2021), QMSUm (Zhong et al., 2021), QASPER (Dasigi et al., 2021), NarrativeQA (Kočiský et al., 2018), QuaLITY (Pang et al., 2021), and ContractNLI (Koreeda & Manning, 2021). - **Structured Knowledge Grounding** - We use several component tasks from UnifiedSKG (Xie et al., 2022), namely WikiTQ (Pasupat & Liang, 2015), CompWQ (Talmor & Berant, 2018), FetaQA (Nan et al., 2021), HybridQA (Chen et al., 2020), WikiSQL (Zhong et al., 2017), TabFat (Chen et al., 2019), Feverous (Aly et al., 2021), SQA (Iyyer et al., 2017), MTOP (Li et al., 2020) and DART (Nan et al., 2020). We select datasets that are relatively convenient to perform evaluation and uses mainstream metrics such as accuracy or exact match instead of obscure ones or those that require significant domain specific post-processing. - **Information Retrieval** - IR is the task of retrieving relevant documents given queries. We use the setup of the latest next generation IR paradigm, i.e., differentiable search index (Tay et al., 2022) for our experiments. We use the same NQ (Kwiatkowski et al., 2019) splits in the DSI paper. For each dataset, we report the best previous sota result. For generation tasks, we generally report ROUGE-2 following the advice of (Gehrmann et al., 2022). For the rest of the datasets, we report the dominant metric that is reported in prior work. For BLEU scores, we use sacrebleu. For commonsense reasoning tasks, we do not compare against approaches that use external knowledge bases as they are orthogonal and out of scope for this paper. For most part, GLUE is generally considered to be saturated and there are many unpublished results on the GLUE leaderboard. For this reason, we make a very reasonable decision of considering (Raffel et al., 2019) to be the state-of-the-art since we believe that there has not been any real advance on the GLUE benchmark since the T5 model (Raffel et al., 2019). GLUE results, given how saturated it already is, are provided as a reference and should be taken with a pinch of salt. Generally, we make a best effort to submit scores to any leaderboard (unpublished test set) but refrain from doing so in the cases where the labor costs to make such a submission is prohibitive - especially when the existing state-of-the-art approach has made their dev scores available or when reporting on this particular dataset is only for completeness (e.g., GLUE). We would advise readers to not over think the differences in dev/test since (1) in most academic leaderboards, dev/test aligns not only from our own experience but also can be empirically observed and because (2) the real test is production anyways. Whenever reporting on leaderboard, we consider the top performing published work as SOTA and indicate in our results using the \# symbol that there might be some anonymous submission that has scored higher. For this purpose we consider arxiv preprints of above reasonable quality to count as published work. These results and comparisons are accurate as of 15th April 2022 where we stopped experiments to focus on polishing this paper. We later realized, while preparing to put this paper up on arxiv that there have been new results on Scrolls benchmark using a model (Guo et al., 2021) using 16k sequence lengths as opposed to ours (2k) where we kept it at 2k once we had obtained sota. It is expected that increasing the length to UL2 would significantly improve our scores likely above current sota, but in the interest of logistics and timeline we leave that to future work. ## 5.2.3 Summary Of Supervised Finetuning Results This section describes the overview results of our experiments. Table 8: Summary of UL20B results compared to state-of-the-art. (l) denotes leaderboard submission. ( ]) denotes the best published we could find on the leaderboard. (e) denotes SOTA used an ensembled approach. Because we evaluate finetuning and in-context trade-offs for SuperGLUE, SuperGLUE scores have their own dedicated section below. | Dataset | Metric | Eval | Sota Reference | SOTA | Ours | |----------------------|------------------------|--------|-------------------|--------|---------| | CNN/DM | Rouge-2 | Test | Zoph et al. | 21.7 | 21.9 | | XSUM | Rouge-2 | Test | Zoph et al. | 27.1 | 26.6 | | MultiNews | Rouge-2 | Test | Xiao et al. | 21.1 | 21.7 | | SAMSum | Rouge-2 | Test | Narayan et al. | 28.3 | 29.6 | | Gigaword | Rouge 2 | Test | Aghajanyan et al. | 20.7 | 20.7 | | WebNLG (en) | Rouge-2 | Test | Bakshi et al. | 53.5 | 55.4 | | E2E-NLG | Rouge-2 | Test | Xue et al. | 45.8 | 46.5 | | CommonGen | Rouge-2 | Dev | Gehrmann et al. | 32.5 | 37.4 | | Schema-Guided Dialog | Rouge-2 | Test | Gehrmann et al. | 36.8 | 44.1 | | GENIE - aNLG | Human (H) | Test | Khashabi et al. | 76.0 | 77.0(l) | | | Continued on next page | | | | | Continued on next page Table 8 - continued from previous page | Table 8 - continued from previous page | | | | | | |------------------------------------------|--------------------|------|-----------------|---------|----------| | GENIE - ARC-DA (w/o IR) | Human | Test | Khashabi et al. | 72.0 | 72.0(l) | | GENIE - WMT19 | Human | Test | Khashabi et al. | 71.0 | 67.0(l)6 | | GENIE - XSUM | H-Overall | Test | Clive et al. | 51.0 | 50.0(l) | | GENIE - XSUM | H-Concise | Test | Clive et al. | 53.0 | 53.0(l) | | GENIE - XSUM | H-Fluency | Test | Clive et al. | 51.0 | 52.0(l) | | GENIE - XSUM | H-No-Hallucination | Test | Clive et al. | 53.0 | 54.0(l) | | GENIE - XSUM | H-Informativeness | Test | Clive et al. | 49.0 | 49.0(l) | | SIQA | Accuracy | Test | Lourie et al. | 83.2 | 83.3(l) | | PIQA | Accuracy | Test | Lourie et al. | 90.1 | 90.7(l) | | CSQA | Accuracy | Dev | Lourie et al. | 79.1 | 84.9 | | CSQA2 | Accuracy | Test | Lourie et al. | 69.6(]) | 70.1(l) | | QASC (w/o IR) | Accuracy | Dev | Khashabi et al. | 81.8 | 83.8 | | QASC (w IR) | Accuracy | Test | Khashabi et al. | 89.6 | 90.7(l) | | TweetQA | BLEU-1 | Dev | Khashabi et al. | 77.5 | 78.4 | | QuAIL | Accuracy | Test | Khashabi et al. | 74.2 | 87.2 | | AdversarialQA (Bert) | F1 | Dev | Khashabi et al. | 53.6 | 70.1 | | AdversarialQA (Roberta) | F1 | Dev | Khashabi et al. | 45.5 | 57.5 | | AdversarialQA (Bidaf) | F1 | Dev | Khashabi et al. | 71.5 | 77.5 | | MCScript | Accuracy | Test | Khashabi et al. | 95.1 | 97.3 | | MCScript 2.0 | Accuracy | Test | Khashabi et al. | 94.6 | 97.9 | | RACE | Accuracy | Test | Shoeybi et al. | 90.9(e) | 90.9 | | DREAM | Accuracy | Test | Wan | 91.8 | 91.8 | | OBQA | Accuracy | Test | Khashabi et al. | 87.2 | 87.2(l) | | CosmosQA | Accuracy | Test | Lourie et al. | 91.8 | 91.6(l) | | Winogrande XL | Accuracy | Test | Lourie et al. | 91.3 | 90.1(l) | | DocNLI | Accuracy | Test | Qin et al. | 76.9 | 88.2 | | AdversarialNLI (r3) | Accuracy | Test | Wang et al. | 47.7 | 53.5 | | VitaminC | Accuracy | Test | Schuster et al. | 90.8 | 91.1 | | Hellaswag | Accuracy | Test | Lourie et al. | 93.9 | 94.1(l) | | QQP | F1 | Dev | Raffel et al. | 90.1 | 90.6 | | QNLI | Accuracy | Dev | Raffel et al. | 96.1 | 96.5 | | CoLA | Matthews | Dev | Raffel et al. | 68.6 | 71.5 | | STSB | Spearman | Dev | Raffel et al. | 92.1 | 92.3 | | AbductiveNLI | Accuracy | Test | He et al. | 89.8(]) | 87.5(l) | | MultiNLI | Accuracy | Dev | Raffel et al. | 92.1 | 91.9 | | IMDB | Accuracy | Test | Yang et al. | 96.2 | 97.3 | | AgNews | Error | Test | Yang et al. | 4.45 | 4.42 | | Civil Comments | F1 | Dev | Tay et al. | 87.8 | 87.9 | | Wikipedia Toxicity | F1 | Dev | Tay et al. | 96.5 | 97.0 | | SST-2 | Acc | Dev | Raffel et al. | 97.3 | 97.0 | | Scrolls Challenge | Aggregate | Test | Shaham et al. | 29.2 | 37.9(l) | | SumScr | Rouge (Avg) | Test | Shaham et al. | 16.3 | 20.0(l) | | QMSum | Rouge (Avg) | Test | Shaham et al. | 19.9 | 20.0(l) | | QASPER | F1 | Test | Shaham et al. | 26.6 | 37.6(l) | | NarrativeQA | F1 | Test | Shaham et al. | 18.5 | 24.2(l) | | QUALITY | EM | Test | Shaham et al. | 26.0 | 45.8(l) | | ContractNLI | EM | Test | Shaham et al. | 77.4 | 88.7(l) | | GovRep | Rouge (Avg) | Test | Shaham et al. | 37.2 | 36.2(l) | Scrolls Challenge Aggregate Test Shaham et al. 29.2 **37.9**(l) SumScr Rouge (Avg) Test Shaham et al. 16.3 **20.0**(l) QMSum Rouge (Avg) Test Shaham et al. 19.9 **20.0**(l) QASPER F1 Test Shaham et al. 26.6 **37.6**(l) NarrativeQA F1 Test Shaham et al. 18.5 **24.2**(l) QUALITY EM Test Shaham et al. 26.0 **45.8**(l) ContractNLI EM Test Shaham et al. 77.4 **88.7**(l) GovRep Rouge (Avg) Test Shaham et al. **37.2** 36.2(l) Continued on next page 6This task is German-to-English translation. Our submission is pretrained on only English C4 then finetuned on only the provided WMT19 data (no parallel data or backtranslation.) | Table 8 - continued from previous page | | | | | | |------------------------------------------|----------|------|--------------------|------|------| | WikiTQ | Accuracy | Test | Xie et al. | 49.3 | 54.6 | | CompWebQ | Accuracy | Test | Xie et al. | 73.3 | 75.9 | | FetaQA | BLEU-4 | Test | Xie et al. | 33.4 | 35.8 | | HybridQA | Accuracy | Dev | Eisenschlos et al. | 60.8 | 61.0 | | WikiSQL | Accuracy | Test | Xie et al. | 86.0 | 87.3 | | TabFat | Accuracy | Test | Xie et al. | 83.4 | 87.1 | | Feverous | Accuracy | Dev | Xie et al. | 82.4 | 85.6 | | SQA | Sent.Acc | Test | Xie et al. | 62.4 | 70.5 | | MTOP | Match | Test | Xie et al. | 86.8 | 87.5 | | DART | BLEU-4 | Test | Aghajanyan et al. | 47.2 | 50.4 | | DSI-NQ | HITS@10 | Dev | Tay et al. | 70.3 | 73.8 | ## 5.2.4 Results On Supervised Finetuning Our experimental results show that UL2 achieves state-of-the-art performance on around 50+ NLP tasks and setups. For many, the margins are quite wide and for those that UL2 doesn't achieve SOTA, the performance of UL2 is generally quite competitive. It is worth to note that the extent of difficulty of obtaining sota on each benchmark has vastly different difficulties. For some, the sota model is a 32B dense equivalent (Zoph et al., 2022). For some others, it's a base model. It is worth also noting that many benchmarks have a strong relatively large model, e.g., 3B or 11B T5, UnifiedQA (Khashabi et al., 2020) or Unicorn (Lourie et al., 2021) as the existing SOTA model so outperforming these models is also not exactly the easiest thing to do. Overall, we urge the readers to judge the value of these sota results for themselves. Finally, we note that UL2 20B does pretty well on human evaluation on GENIE tasks, outperforming sota on several metrics. This ascertains that the generation quality of UL2 is reasonably solid. ## 5.2.5 Tradeoffs Between Finetuning And Prompt-Based Zero-Shot Learning (Superglue) In this section, we explore finetuning and in-context learning trade-offs on the SuperGLUE benchmark. We conduct experiments on SuperGLUE with UL20B. While UL20B does not achieve SOTA on this benchmark, we note that UL20B at least remains competitive and outperforms T5-11B. This section reassures that UL2 indeed scales and matches/slightly outperforms T5-11B on SuperGLUE (while strongly outperforming T5-XXL on many other in-context tasks). UL20B still lacks behind the SOTA model ST-MoE-32B given two main reasons. Firstly, ST-MoE-32B has 200B+ parameters and is costs equivalent to a 32B dense model. Secondly, ST-MoE-32B is trained solely on span corruption using an encoder-decoder architecture which is known to be very advantageous on NLU finetuning. | Model | BoolQ | CB | CoPA | MultiRC | Record | RTE | WiC | WSC | Avg | |----------------|---------|-----------|--------|-----------|-----------|-------|-------|-------|-------| | PaLM 62B | 90.6 | 96.4/95.7 | 98.0 | 87.7/61.9 | 93.0/92.4 | 89.5 | 75.9 | 96.2 | 89.2 | | PaLM 540B | 92.2 | 100/100 | 100 | 90.1/69.2 | 94.0/94.6 | 95.7 | 78.8 | 100 | 92.6 | | ST-MoE 32B269B | 93.1 | 100/100 | 100 | 90.4/69.9 | 95.0/95.6 | 95.7 | 81.0 | 100 | 93.2 | | PaLM 8B | 87.6 | 96.4/92.1 | 86.0 | 81.6/64.0 | 89.7/89.3 | 84.5 | 73.4 | 88.5 | 83.4 | | T5 11B | 90.8 | 94.9/96.4 | 98.0 | 87.4/66.1 | 93.8/93.2 | 93.9 | 77.3 | 96.2 | 89.9 | | UL2 20B | 90.8 | 98.7/98.2 | 99.0 | 88.4/64.8 | 93.7/93.2 | 92.1 | 77.3 | 98.1 | 90.7 | Table 9: Results on SuperGLUE dev set. We compare with T5-11B (Raffel et al., 2019), ST-MoE-32B (Zoph et al., 2022) and PaLM-8B, PaLM-62B and PaLM-540B (Chowdhery et al., 2022). Scores reported are the peak validation scores per task. Table 10: Results on zero-shot learning on SuperGLUE dataset. We compare with GPT-3, GLaM and PaLM (Chowdhery et al., 2022). We also include models that are relatively compute-matched with UL20B such as T5-XXL with LM adaptation (Lester et al., 2021), GPT-3 13B and GLaM-8B dense. Notably, UL20B outperforms GPT-3 175B and all other models in a similar compute class on average score. | Model | BoolQ | CB | RTE | ReCORD | WSC | WiC | COPA | MultiRC | Avg | |-----------------------|---------|------|-------|----------|-------|-------|--------|-----------|-------| | ST-MoE-32B (269B) | 40.8 | 41.1 | 52.7 | 50.0 | 57.5 | 50.0 | 56.0 | 30.3 | 47.6 | | GPT-3 175B | 60.5 | 46.4 | 63.5 | 90.2 | 65.4 | 0.0 | 91.0 | 72.9 | 61.2 | | GLaM-MoE 1.2T | 83.0 | 33.9 | 68.8 | 90.3 | 84.9 | 50.5 | 90.0 | 45.1 | 68.3 | | PaLM 540B | 88.0 | 51.8 | 72.9 | 92.9 | 89.1 | 59.1 | 93.0 | 83.5 | 78.8 | | T5-XXL | 44.3 | 37.5 | 48.8 | 85.8 | 59.3 | 50.9 | 70.0 | 23.0 | 52.5 | | GPT-3 13B | 66.2 | 19.6 | 62.8 | 89.0 | 64.4 | 0.0 | 84.0 | 71.4 | 57.2 | | GLaM-Dense 8B | 73.6 | 33.9 | 44.0 | 89.2 | 80.7 | 44.0 | 86.0 | 39.0 | 61.3 | | GLaM-MoE 64E | 72.2 | 40.7 | 60.3 | 88.9 | 81.8 | 49.5 | 86.0 | 52.4 | 66.5 | | PaLM-Dense 8B | 68.3 | 41.1 | 54.2 | 87.8 | 78.9 | 47.0 | 86.0 | 47.5 | 63.9 | | UL2 20B (single ckpt) | 63.1 | 41.1 | 60.7 | 88.1 | 79.9 | 49.8 | 85.0 | 36.2 | 63.0 | | UL2 20B (best) | 63.1 | 50.0 | 60.7 | 88.1 | 80.6 | 55.2 | 88.0 | 36.2 | 65.2 | ## 5.2.6 Generative Few-Shot: Xsum Summarization Finally, we conduct additional few-shot in-context one-shot learning using the XSum dataset We compare our model with the baseline T5-XXL, T5-XXL with LM Adaptation (Lester et al., 2021), LaMDA 137B (Thoppilan et al., 2022), and PaLM (8B, 62B, 540B) (Chowdhery et al., 2022). We run T5-XXL ourselves in the same experimental setup but report results from (Chowdhery et al., 2022) for the other models. | Model | Rouge-1 | Rouge-2 | Rouge-L | |-----------------|-----------|-----------|-----------| | LaMDA 137B | - | 5.4 | - | | PaLM 62B | - | 11.2 | - | | PaLM 540B | - | 12.2 | - | | PaLM 8B | - | 4.5 | - | | T5 XXL 11B | 0.6 | 0.1 | 0.6 | | T5 XXL 11B + LM | 13.3 | 2.3 | 10.7 | | UL2 20B | 25.5 | 8.6 | 19.8 | Table 11: Results on One-Shot Summarization on XSUM. Table 11 reports results on 1-shot summarization. We note that T5-XXL performs poorly on this task. Even with LM adaptation, the Rouge-2 score is only 2.3, which substantially lacks behind decoder-only causal language models (e.g., PaLM 8B models). Notably, the off-the-shelf T5-XXL is not able to generate meaningful summaries even with prompting just because it is only trained with span corruption so it is intuitive that some form of adaptation is required for generative few-shot settings. Here is is worth that the performance of UL2 20B is about 3x the performance of LM adapted T5 XXL model. Moreover, UL2 20B outperform LaMDA 137B and has close to double the performance of PaLM 8B. The best result, however, is still the larger 540B and 62B PaLM models. ## 6 Conclusion We proposed a new paradigm for training universally effective models. UL2 is characterized by two key ideas. Firstly, we propose a new Mixture of Denoisers (MoD) pretraining that frames multiple pretraining tasks as span corruption, diversifies and then mixes them. Secondly, we introduce mode switching, a way of associating downstream task behaviour to upstream pretraining. Extensive ablative experiments show that UL2 consistently outperforms GPT-like and T5 models on a wide range of supervised and few-shot tasks, outperforming T5 on 9 out of 9 tasks and by a normalized overall gain of +76.1%. Finally, we scale UL2 up to 20B parameters and conduct experiments on a diverse suite of 50 to 60 NLP tasks and setups. UL2 achieves sota performance on 50 of them. Pretrained checkpoints will be released at withheldduetodoubleblindreview.. ## References Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse. *arXiv preprint arXiv:2008.03156*, 2020. Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, and Luke Zettlemoyer. Htlm: Hyper-text pre-training and prompting of language models. *arXiv preprint arXiv:2107.06955*, 2021. Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. The fact extraction and VERification over unstructured and structured information (FEVEROUS) shared task. In *Proceedings of the Fourth Workshop on Fact Extraction* and VERification (FEVER), pp. 1–13, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.fever-1.1. URL https://aclanthology.org/2021.fever-1.1. Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran, Dara Bahri, Jianmo Ni, et al. Ext5: Towards extreme multi-task scaling for transfer learning. *arXiv preprint arXiv:2111.10952*, 2021. Shreyan Bakshi, Soumya Batra, Peyman Heidari, Ankit Arun, Shashank Jain, and Michael White. Structureto-text generation with self-training, acceptability classifiers and context-conditioning for the gem shared task. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pp. 136–147, 2021. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739, 2019. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 7432–7439, 2020. Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. *CoRR*, abs/1903.04561, 2019. URL http://arxiv.org/abs/1903.04561. James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussallem, and Anastasia Shimorina. The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020). In *Proceedings of the 3rd WebNLG Workshop on Natural Language Generation* from the Semantic Web (WebNLG+ 2020), pp. 55–76, Dublin, Ireland (Virtual), 2020. Association for Computational Linguistics. Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. Summscreen: A dataset for abstractive screenplay summarization. *arXiv preprint arXiv:2104.07091*, 2021. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. Tabfact: A large-scale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164, 2019. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Wang. Hybridqa: A dataset of multi-hop question answering over tabular and textual data. *arXiv preprint arXiv:2004.07347*, 2020. Aakanksha Chowdhery, Sharan Narang, and Jacob Devlin. Palm: Scaling language modeling with pathways. arXiv preprint, 2022. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the AI2 reasoning challenge. *CoRR*, abs/1803.05457, 2018. URL http://arxiv.org/abs/1803.05457. Jordan Clive, Kris Cao, and Marek Rei. Control prefixes for text generation. *CoRR*, abs/2110.08329, 2021. URL https://arxiv.org/abs/2110.08329. Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. Advances in neural information processing systems, 28:3079–3087, 2015. Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. A dataset of information-seeking questions and answers anchored in research papers. *arXiv preprint arXiv:2105.03011*, 2021. Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, and Yi Tay. The efficiency misnomer. *arXiv* preprint arXiv:2110.12894, 2021a. Mostafa Dehghani, Yi Tay, Alexey A Gritsenko, Zhe Zhao, Neil Houlsby, Fernando Diaz, Donald Metzler, and Oriol Vinyals. The benchmark lottery. 2021b. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197, 2019. Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. Glam: Efficient scaling of language models with mixture-of-experts. *arXiv preprint arXiv:2112.06905*, 2021. Ondřej Dušek, David M Howcroft, and Verena Rieser. Semantic Noise Matters for Neural Natural Language Generation. In Proceedings of the 12th International Conference on Natural Language Generation (INLG 2019), pp. 421–426, Tokyo, Japan, 2019. URL https://www.aclweb.org/anthology/W19-8652/. Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, and William W Cohen. Mate: Multi-view attention for table transformer efficiency. *arXiv preprint arXiv:2109.04312*, 2021. Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. Multi-news: A large-scale multidocument summarization dataset and abstractive hierarchical model. *arXiv preprint arXiv:1906.01749*, 2019. William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *arXiv preprint arXiv:2101.03961*, 2021. Wikimedia Foundation. Acl 2019 fourth conference on machine translation (wmt19), shared task: Machine translation of news. URL http://www.statmt.org/wmt19/translation-task.html. Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D Dhole, et al. The gem benchmark: Natural language generation, its evaluation and metrics. *arXiv preprint* arXiv:2102.01672, 2021. Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. *arXiv preprint arXiv:2202.06935*, 2022. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. *arXiv preprint arXiv:1911.12237*, 2019. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. Longt5: Efficient text-to-text transformer for long sequences. *arXiv preprint arXiv:2112.07916*, 2021. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint arXiv:2006.03654*, 2020. Yun He, Huaixiu Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, et al. Hyperprompt: Prompt-based task-conditioning of transformers. arXiv preprint arXiv:2203.00759, 2022. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In *Advances in neural information* processing systems, pp. 1693–1701, 2015. Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. *arXiv preprint arXiv:1909.00277*, 2019. Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. Efficient attentions for long document summarization. *arXiv preprint arXiv:2104.02112*, 2021. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1821–1831, 2017. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020. Norman P Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, and David Patterson. A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7):67–78, 2020. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint* arXiv:2001.08361, 2020. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system. *arXiv preprint arXiv:2005.00700*, 2020. Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel S. Weld. GENIE: A leaderboard for human-in-the-loop evaluation of text generation. CoRR, abs/2101.06561, 2021. URL https://arxiv.org/abs/2101.06561. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. Unifiedqa-v2: Stronger generalization via broader cross-format training. *arXiv preprint arXiv:2202.12359*, 2022. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. Qasc: A dataset for question answering via sentence composition. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 8082–8090, 2020. Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The NarrativeQA Reading Comprehension Challenge. *Transactions of the Association* for Computational Linguistics, 2018. Yuta Koreeda and Christopher D Manning. Contractnli: A dataset for document-level natural language inference for contracts. *arXiv preprint arXiv:2110.01799*, 2021. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin Kenton Lee, Kristina Toutanova, Llion Jones Matthew Kelcey, Ming-Wei Chang, Andrew M Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: a Benchmark for Question Answering Research. In *Transactions of the ACL*, 2019. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 785–794, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1082. URL https://aclanthology.org/D17-1082. Alyssa Lees, Vinh Q Tran, Yi Tay, Jeffrey Sorensen, Jai Gupta, Donald Metzler, and Lucy Vasserman. A new generation of perspective api: Efficient multilingual character-level transformers. arXiv preprint arXiv:2202.11176, 2022. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. *arXiv preprint arXiv:2006.16668*, 2020. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. Mtop: A comprehensive multilingual task-oriented semantic parsing benchmark. *arXiv preprint arXiv:2008.09335*, 2020. Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 1823–1840, Online, November 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/ 2020.findings-emnlp.165. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. *arXiv preprint arXiv:1801.10198*, 2018. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. *arXiv* preprint arXiv:1907.11692, 2019. Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark. *arXiv preprint arXiv:2103.13009*, 2021. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. *arXiv preprint arXiv:1809.02789*, 2018. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In *Advances in neural information processing systems*, pp. 3111–3119, 2013. Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. Dart: Open-domain structured data record to text generation. *arXiv preprint arXiv:2007.02871*, 2020. Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Nick Schoelkopf, Riley Kong, Xiangru Tang, et al. Fetaqa: Free-form table question answering. arXiv preprint arXiv:2104.00369, 2021. Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, et al. Do transformer modifications transfer across implementations and applications? *arXiv preprint arXiv:2102.11972*, 2021. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. *ArXiv*, abs/1808.08745, 2018. Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, and Ryan McDonald. Planning with learned entity prompts for abstractive summarization. *Transactions of the Association for Computational Linguistics*, 9:1475–1492, 2021. doi: 10.1162/tacl_a_00438. URL https: //aclanthology.org/2021.tacl-1.88. ME Peters M Neumann, M Iyyer, M Gardner, C Clark, K Lee, and L Zettlemoyer. Deep contextualized word representations. *arXiv preprint arXiv:1802.05365*, 2018. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial nli: A new benchmark for natural language understanding. *arXiv preprint arXiv:1910.14599*, 2019. Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, et al. Quality: Question answering with long input texts, yes! *arXiv preprint arXiv:2112.08608*, 2021. Ankur P Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. Totto: A controlled table-to-text generation dataset. *arXiv preprint arXiv:2004.14373*, 2020. Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. arXiv preprint arXiv:1508.00305, 2015. Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)*, pp. 1532–1543, 2014. Guanghui Qin, Yukun Feng, and Benjamin Van Durme. The nlp task effectiveness of long-range transformers. arXiv preprint arXiv:2202.07856, 2022. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. 2019. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. *arXiv preprint arXiv:1909.05855*, 2019. Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. Scaling up models and data with t5x and seqio, 2022. URL https://arxiv.org/abs/2203.17189. Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. Getting closer to ai complete question answering: A set of prerequisite real tasks. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 8722–8731, 2020. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. *arXiv preprint arXiv:2110.08207*, 2021. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. *arXiv preprint arXiv:1904.09728*, 2019. Tal Schuster, Adam Fisch, and Regina Barzilay. Get your vitamin c! robust fact verification with contrastive evidence. *arXiv preprint arXiv:2103.08541*, 2021a. Tal Schuster, Ashwin Kalyan, Alex Polozov, and Adam Tauman Kalai. Programming puzzles. In *Thirty-fifth* Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021b. URL https://openreview.net/forum?id=fe_hCc4RBrg. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, and Omer Levy. Scrolls: Standardized comparison over long language sequences, 2022. Noam Shazeer. Glu variants improve transformer. *arXiv preprint arXiv:2002.05202*, 2020. Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pp. 4596–4604. PMLR, 2018. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019. Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. *arXiv* preprint arXiv:1803.06643, 2018. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*, 2018. Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. CommonsenseQA 2.0: Exposing the limits of AI through gamification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/forum?id=qF7FlUT5dxa. Yi Tay, Zhe Zhao, Dara Bahri, Donald Metzler, and Da-Cheng Juan. Hypergrid: Efficient multi-task transformers with grid-wise decomposable hyper projections. *arXiv preprint arXiv:2007.05891*, 2020. Yi Tay, Mostafa Dehghani, Jai Gupta, Dara Bahri, Vamsi Aribandi, Zhen Qin, and Donald Metzler. Are pre-trained convolutions better than pre-trained transformers? *arXiv preprint arXiv:2105.03322*, 2021a. Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. *arXiv preprint arXiv:2109.10686*, 2021b. Yi Tay, Vinh Q Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. Charformer: Fast character transformers via gradient-based subword tokenization. *arXiv preprint arXiv:2106.12672*, 2021c. Yi Tay, Vinh Q Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. Transformer memory as a differentiable search index. *arXiv preprint arXiv:2202.06991*, 2022. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialog applications, 2022. Hui Wan. Multi-task learning with multi-head attention for multi-choice reading comprehension. *arXiv* preprint arXiv:2003.04992, 2020. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multitask benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. *arXiv preprint arXiv:1905.00537*, 2019. Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. Infobert: Improving robustness of language models from an information theoretic perspective. *arXiv preprint arXiv:2010.02329*, 2020. Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. What language model architecture and pretraining objective work best for zero-shot generalization? *arXiv preprint arXiv:2204.05832*, 2022. Alexander Wettig, Tianyu Gao, Zexuan Zhong, and Danqi Chen. Should you mask 15% in masked language modeling? *arXiv preprint arXiv:2202.08005*, 2022. Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. Primer: Pyramid-based masked sentence pre-training for multi-document summarization. *arXiv preprint arXiv:2110.08499*, 2021. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *arXiv preprint arXiv:2201.05966*, 2022. Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. Tweetqa: A social media focused question answering dataset. arXiv preprint arXiv:1907.06292, 2019. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020. Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. Byt5: Towards a token-free future with pre-trained byte-to-byte models. arXiv preprint arXiv:2105.13626, 2021. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019. Wenpeng Yin, Dragomir Radev, and Caiming Xiong. Docnli: A large-scale dataset for document-level natural language inference. *arXiv preprint arXiv:2106.09449*, 2021. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? *arXiv preprint arXiv:1905.07830*, 2019. Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28, 2015. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. Qmsum: A new benchmark for query-based multi-domain meeting summarization. *arXiv preprint arXiv:2104.05938*, 2021. Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. *CoRR*, abs/1709.00103, 2017. Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. Designing effective sparse expert models, 2022. ## A Appendix A.1 Model Release As part of this work, we release the weights of the 20B checkpoint. The weights of the model can be found in in this cloud bucket (link withheld due to double blind review). These checkpoints were trained with T5X (Roberts et al., 2022) found at link withheld due to double blind review and are implemented in JAX/FLAX. Because the fine-tuning results are generally not from a single checkpoint due to our continuous finetuning setup, we release three different checkpoints (1.87M, 2.05M, 2.65M) which we found to be consistently pretty good. For this particular checkpoint, note that the mode tags we used are [NLG] (X-denoiser), [NLU] (R-denoiser) and [S2S] (S-denoiser). So add that at the start of the inputs of your examples. ## A.2 Implementation Details And Ul2 Code This section aims to give more insight to how UL2 pretraining is implemented. Our implementation is actually pretty simple. It is simply a mixture of different pretraining objectives that is implemented in seqio7. Most of our experiments were run with simply mixing different seqio tasks with seqio's Mixture Registry. However, one could also implement a generalized UL2 objective with the following function which could be cleaner. def ul2_objective(dataset: tf.data.Dataset, sequence_length: seqio.preprocessors.SequenceLengthType, output_features: seqio.preprocessors.OutputFeaturesType, use_prefix_lm_task: bool = False, rates: Optional[Sequence[float]] = None, mean_noise_span_lengths: Sequence[float] = (3.0,), noise_densities: Sequence[float] = (0.15,), shard_ds: bool = True, optional_task_prefixes: Optional[Sequence[str]] = None, input_feature_key: str = "inputs", merge_examples_to_reduce_padding: bool = True, reserved_for_packing: bool = None, seed: int = 7) -> tf.data.Dataset: """UL2-like pre-training objectives. This preprocessor amounts to calling the 'span_corruption' function several times with different values of 'noise_density' and 'mean_noise_span_length'. We either shard or copy the dataset, then apply each function to each shard. 7https://github.com/google/seqio Add S-denoising (prefixLM) using use_prefix_lm_task. Args: dataset: A tf.data.Dataset with dictionaries containing the key 'input_feature_key'. sequence_length: dict mapping of feature key to int length for that feature. output_features: mapping of keys to features. use_prefix_lm_task: <bool> If True, include PrefixLM in the task mix. rates: <Optional<List<float>> List of rates per task. If None, tasks are sampled uniformly. mean_noise_span_lengths: List of mean number of tokens per masked span per example. noise_densities: List of what fraction of the tokens to mask. shard_ds: <bool> If True, shard dataset per objective. optional_task_prefixes: <Optional<list<str>> Strings to prepend for each corruption scheme. NOTE: If including prefixLM task, it must be the last prefix. input_feature_key: which feature to use from the dataset as the input text tokens. merge_examples_to_reduce_padding: if True, combines multiple input examples to reduce padding. reserved_for_packing: if specified, reduces the desired inputs length by the specified amount to enable multiple examples to be packed together downstream. seed: tf.int64 for controlling the random choice of spans. Returns: a dataset """ if optional_task_prefixes: \# Ensure each task has a prefix. num_tasks = len(noise_densities) + int(use_prefix_lm_task) valid_number_of_prefixes = num_tasks == len(optional_task_prefixes) if not valid_number_of_prefixes: raise ValueError("Number of task prefixes must match number of tasks.") inputs_length = sequence_length[input_feature_key] input_lengths, targets_lengths = [], [] sequence_lengths = {x: y for x, y in sequence_length.items()} if reserved_for_packing: inputs_length -= reserved_for_packing for x, y in sequence_length.items(): sequence_lengths[x] = y - reserved_for_packing hyperparams = list(zip(mean_noise_span_lengths, noise_densities)) for mean_noise_span_length, noise_density in hyperparams: input_length, targets_length = t5.data.preprocessors.random_spans_helper( extra_tokens_per_span_inputs=1, extra_tokens_per_span_targets=1, inputs_length=inputs_length, mean_noise_span_length=mean_noise_span_length, noise_density=noise_density) input_lengths.append(input_length) targets_lengths.append(targets_length) if sequence_length["targets"] < targets_length: upper_bound = max(targets_lengths) raise ValueError( f'Expected max targets length for span corruption ({upper_bound}) is ' f'greater than configured targets length ' f"({sequence_length['targets']})") ds = dataset ds = t5.data.preprocessors.select_random_chunk( ds, output_features=output_features, feature_key="targets", max_length=65536) if merge_examples_to_reduce_padding: ds = t5.data.preprocessors.reduce_concat_tokens( ds, feature_key="targets", batch_size=128) num_shards = len(input_lengths) + int(use_prefix_lm_task) if shard_ds: ds_shards = [ds.shard(num_shards, i) for i in range(num_shards)] else: ds_shards = [ds for _ in range(num_shards)] processed_ds = [] hyperparams = zip(input_lengths, hyperparams, range(num_shards)) for input_length, (noise_span_length, noise_density), i in hyperparams: ds = ds_shards[i] ds = t5.data.preprocessors.split_tokens( ds, feature_key="targets", min_tokens_per_segment=None, max_tokens_per_segment=input_length) ds = t5.data.preprocessors.denoise( ds, output_features, inputs_fn=t5.data.preprocessors.noise_span_to_unique_sentinel, targets_fn=t5.data.preprocessors.nonnoise_span_to_unique_sentinel, noise_density=noise_density, noise_mask_fn=functools.partial( t5.data.preprocessors.random_spans_noise_mask, mean_noise_span_length=noise_span_length), input_feature_key=input_feature_key) if optional_task_prefixes: ds = prepend_prompt( ds, output_features, prompt_mode=optional_task_prefixes[i], mode=optional_task_prefixes[i]) processed_ds.append(ds) if use_prefix_lm_task: ds = ds_shards[-1] ds = t5.data.preprocessors.prefix_lm(ds, sequence_lengths, output_features) if optional_task_prefixes: ds = prepend_prompt( ds, output_features, prompt_mode=optional_task_prefixes[-1], mode=optional_task_prefixes[-1]) processed_ds.append(ds) ds = tf.data.experimental.sample_from_datasets(processed_ds, rates, seed) return ds ## A.3 Details Of Supervised Finetuning Sota Runs Most of our supervised finetuning runs were finetuned as single tasks. The only exception was that: - We finetuned GLUE as a single mixture with proportionate sampling. This has become standard and defacto setup (Raffel et al., 2019; He et al., 2022; Tay et al., 2020; 2021b). - We finetuned SuperGLUE as a single mixture which is also a standard setup these days (Fedus et al., 2021; Raffel et al., 2019; Chowdhery et al., 2022). - SIQA, PIQA, AbductiveNLI, Winogrande XL and CosmosQA were co-trained in a proportionate mixture similar to (Lourie et al., 2021) under the Rainbow benchmark. - For CSQA, and CSQA2 and OBQA, we co-trained with the rainbow mixture to obtain results on these three datasets. - All other tasks were single-task finetuned.
Review 1: Summary: This paper proposes a unified view of several existing pre-training objectives, and proposes a method of combining them to improve the performance of pre-trained models. Through a plethora of experiments, they argue that this method outperforms both existing pre-training objectives and prior attempts at combining pre-training objectives. In particular, the model is better on both generation and understanding tasks than prior models, a balance that was difficult to strike in prior work (because the architecture and/or objective biased the resultant model toward being better at one or another). Strengths and Weaknesses: Strengths: - This paper certainly runs many experiments and studies a variety of different ablative settings. Empirical results seem good. - I like the concise framing of existing pre-training objectives that shows their differences and similarities - Simple idea, multi-task learning works, and the examination seems somewhat thorough. Weaknesses: - It's still a bit unclear to me how the magic mixture of UL2 came to be. The ablative experiments helped convey that each of the components was necessary, but perhaps not necessarily where they came from, which I think it important for future researchers building upon this work. - For some reason, this paper didn't really excite me, especially the pages upon pages of better benchmark results than existing paper. I guess I'm not really sure that I expected anything else, or that I learned a bunch from reading this paper. Requested Changes: [major-ish things] - I'm not sure if I missed this, but I don't think it was mentioned anywhere how the "mode switching" was used. For a given fine-tuning task , how do you decide which token / prompt to use? - "We arbitrarily select XSUM (summarization), ToTTo (table-to-text generation) (Parikh et al., 2020) and Schema Guided Dialog (SGD) (Rastogi et al., 2019) from the GEM benchmark" kind of begs a few questions---why not evaluate on all the GEM tasks? Is it possible that UL2 happens to just perform better on these specific benchmarks? - In tables where things are bolded, what does the bolding indicate and how it is decided? For example, in Table 2, UL2's 42.30 performance on SG is bolded, while PLM achieves 42.54 (seems higher). - Similarly, for table 3, I'm not really sure what the "win" category means. For eaxmple, UL2 ED is claimed to win 9/9 times, but there are cases where models outperform it (e.g., PLM +2.9 on XS vs. UL2 +2.6 on XS). Is "Win" just whether or not it improves above the chosen baseline model? Regardless, I also feel like "win" is sort of a weird word to use---it puts the "leaderboarding" aspect of this work pretty front-and-center, when I feel like it might also offer things beyond just improvements on somewhat arbitrary benchmarks. [minor things] - I thought the overall idea of "We begin by disentangling architectural archetypes with pre-training objectives – two concepts that are commonly conflated." was a bit of a straw-man---it's not clear to me that these are actually commonly conflated. - I feel like the paper can make some pretty broad generalizations. For example, "We also note that on all individual tasks, UL2 outperforms T5 on all 9 out of 9 considered tasks. Hence, UL2 is a universally better option compared to the span corruption T5 model." Pretty strong statement to say that UL2 is universally better than T5 based off of just a subset of experiments, there's no free lunch after all. Moreover, the title "Unifying Language Learning Paradigms" originally made me think that I'd be reviewing a psycholinguistics language acquisition paper, not a pre-training paper---I wouldn't necessarily call a particular pre-training objective a "Language Learning Paradigm", but maybe that's just me. Broader Impact Concerns: There's the sort of standard stuff about harms of large language models in general that isn't really discussed. ================================================== Review 2: Summary: This paper proposes a pretraining objective (UL2) that mixes prior denoising objectives (span-based denoising, causal LM etc.) and empirically studies the effectiveness of different objectives and architectures (encoder-decoder vs decoder-only), with the goal of developing a unifying pretraining pipeline. Extensive experiments are conducted on both classification and generation tasks in NLP, showing that the mixture objective outperforms existing objectives (e.g. used by T5 and GPT) given the same architecture, model size, and training steps. Given the critical role of pretrained language models today, such a study is timely and useful to the community. The ablations studies and analysis would be of particular interest to practitioners. Strengths and Weaknesses: **Strengths** - This paper studies a problem of significant practical interest. While existing pretrained models largely follows either the T5-style (encoder-decoder + span denoising) or the GPT-style (decoder-only + causal LM), so far there is no systematic study of the empirical advantage of the two setups and other candidates. The results in this paper fill in this gap. - The proposed mixture objective outperforms existing objectives and shows strong performance across tasks in the fine-tuning setting. **Weaknesses** - The current framing of the paper might over-claims the results. I would be fine if the paper describes the results as an empirical comparison of pre-training objectives and encdec vs dec-only models, and this is useful work. However, calling a simple interpolation of existing objectives a "unified paradigm" seems misleading, let alone that similar mixed objectives may already exist (e.g., GPT-3 API now supports infilling and editing which could be enabled by span denoising objectives). - While UL2 unifies many objectives, it introduces more hyperparameters at the same time. The practitioners still need to make a bunch of choices, although not on the pretrained models. For example, which "paradigm prompt" to choose (also see question 2); how to balance different objectives and choose the hyperparameters studied in 4.5. - The paper makes strong claims about the superiority of encoder-decoder models over decoder-only models. However, this conclusion is obtained based on results from models of different sizes: ``` Before we dive into the results of this segment, we would like to remind readers that there is no easy way to compare decoder-only models with encoder-decoder models. In short, we can either compare them in a compute-matched setup or a parameter-matched way. Therefore, the encoder-decoder models in these set of results have approximately twice the number of parameters as the decoder models but have similar speeds. ``` I agree that it's non-trivial to compare the two. But it's unclear what is controlled here. For the "compute-matched" setup, is compute measured in FLOPs? What does "similar speed" mean? For the "parameter-matched" setup, it should be straightforward to train dec-only models with a similar size of the encdec models, which should clearup the picture. With the current results, it is not conclusive whether it is the encdec architecture or the larger model size that results in the improvement. - Ablations of the denoisers (4.5) are inconclusive. It doesn't seem any of the tested setup has a significant influence on the downstream result. For example, the paper says "Var-H performs poorly", but it's actually decent from the table; similarly, it says that 50% of S-denoiser hurts performance generally, but it actually performs the best on XSum. Overall, the differences are quite small and multiple variables are changing between settings, both making it hard to draw any conclusion. - The writing is rather rough and there are many typos (see requested changes). **Questions** 1. The one-shot and few-shot results are known to suffer from large variance [1]. How are the examples chosen here and what is the standard deviation of the results? 2. Since there are different "paradigm prompts" and they may influence the downstream performance, how is it chosen during fine-tuning? [1] Calibrate Before Use: Improving Few-shot Performance of Language Models. Tony Z. Zhao*, Eric Wallace*, Shi Feng, Dan Klein, and Sameer Singh. ICML 2021. Requested Changes: **Proposed adjustments for acceptance** - Reframe the paper as an empirical study of pretraining objectives and models. - Disentangle the effect of size in the comparison of encdec and dec-only models. - Polish the writing. - Some terms are used in a colloquial way that may cause confusion. - ``` It is also easy to see that each denoiser is difficult in different ways. They also differ in the nature of extrapolation (or interpolation). ``` What does extrapolation/interpolation mean in this context? ``` For example, bounding a model by bidirectional context (or the future) (ie.., span corruption) makes the task easier and becomes more akin to fact completion. ``` I don't think this claim is sound. ``` Meanwhile, PrefixLM/LM objectives are generally more ‘open ended‘. These behaviours can be easily observed by monitoring the cross entropy losses of these different denoising objectives. ``` How is the loss relevant here? - ``` Encoder-Decoder models process input and targets independently with a different set of parameters. This is a form of sparsity where different set of parameters are used for different tokens ``` This is an uncommon definition of sparsity. Is there a citation to be provided? - ``` X-denoisers are also connected to improve sample efficiency since more tokens are learned to be predicted in each sample, in similar spirit to LMs. ``` I'd like to see a citation or results to support this sample efficiency claim. - Typos / grammatical errors - "...if CausalLM is used over PrefixLM used..." - "have an effective on performance" --> have an effect - "UL2 outperforms by T5 +43.4% and +" --> outperforms T5 by - "we find that the proposed of the UL2" --> the proposed UL2 - "might suffer from lack a certain ability" --> the lack of a - "this can be interpreted form of model sparsity" - "it is unclear to which is the universally superior" - "a inputs-to-targets" --> an inputs-to-targets - "Here is is worth that" **Improvements** - Section 5: the reason that a 20B model is trained is largely due to circumstances and it's better to remove the explanation to a brief footnote. They aren't useful to understand the results or the narrative. Broader Impact Concerns: None noted. ================================================== Review 3: Summary: This work presents a new framework called Unifying Language Learning Paradigms (UL2). This paper proposes a pre-training method called Mixture-of-Denoiser (MoD), which combines different pre-training objectives with the generalization of span corruption and introduces a paradigm token for model switching to match better between different pre-training and downstream tasks. UL2 is applicable to decoder-only and encoder-decoder architectures. Ablation studies on various pre-training objectives show that UL2 outperforms the GPT-like model and the T5 model in supervised fine-tuning and in-context one-shot learning. UL2 encoder-decoder model with 20B parameters achieves the state-of-the-art in 50 NLP tasks. Strengths and Weaknesses: UL2 provides a unified framework applicable to a wide range of tasks and achieves high accuracy. The authors will provide 20B UL2 checkpoints which can be very useful for the NLP community. Though MoD interprets different pretraining objectives as a unified span corruption, it is essentially multi-task learning. Therefore, the basic idea is to use multiple pre-training objectives together. This idea is not new, so a rigorous comparison with prior works from a theoretical perspective is necessary. Accuracy gains from UL2 come from the good choice of mixtures, while the previous works are a particular instantiation of the mixture. The final setting of MoD is a mixture of 7 denoisers. However, how this setting is found is not described well. The authors should provide details of the hyperparameter search, including how exhaustive it should be and how the performance is sensitive to those values. For ablation studies, the tasks and the datasets are arbitrarily picked without explaining why performing well on those datasets is a valid measure of a better pre-training objective. The mode switching idea is also not new. How to assign a paradigm token for each downstream task is not explained. Also, not using the paradigm token is a pretty strong baseline with only a slight degradation compared to using an appropriate one. Requested Changes: Please address the weakness mentioned above. Please provide the rationale behind the choice of tasks, datasets, and denoiser parameters. In Section 2.1, CausaulLM and PrefixLM are terms often used in the literature, but it is better to define them before. Encoder-only models have limitations for a generation, so I agree that using them as a universal architecture may not be suitable. However, those models are successful on natural language understanding benchmarks. Having task-specific classification heads mentioned in Section 2.1 is not a huge problem considering the huge size of model parameters. The provided reason for ruling out encoder-only models looks somewhat weak. Could you illustrate the difference between this work and Wang et al. (2022)? How UniLM (mentioned in Section 2.3) and UL2 differs is not clearly described. Can you add the comparison between them? Could you also compare UL2 with BART, a famous denoising seq2seq pre-trained model? In Section 3.1.2, I don’t understand why the explanation of memory is needed. Can you elaborate more? (mu, r, n) is not enough to precisely define the denoising process. What is the sampling ratio of denoisers? A span length is sampled from a normal or uniform distribution. Which distribution is used? What is the variance or the interval? Could you formally formulate the denoising function using those variables? In Figure 3, Can we explain when to use <S> or <E> for the separator? I think Table 2, Table 3, and Table 4 can be unified into a single table. Also, their current placements are not close to the main body where they are mentioned. In Table3, SGL is a typo. Broader Impact Concerns: Pre-training on a large scale is too expensive to reproduce. ================================================== Review 4: Summary: This paper presents a new unified pre-training method, UL2, that combines several pre-training schemes from prior work. R-denoising looks like T5, S-denoising looks like causal language modeling (given a prefix), and X-denoising looks like an "extremified" version of T5 with longer and/or more numerous masks to fill. Extensive experiments compare variants of these pre-training objectives across a number of tasks, including SuperGLUE and several generation tasks in both standard supervised and one-shot settings. The experiments establish the superiority of UL2 compared to past methods and dig into what hyperparameter settings make it most effective. Finally, the paper scales up the LM to 20B parameters and compares against SOTA models across a number of tasks. While the model of course underperforms the largest PaLM and ST-MoE methods, it achieves SOTA performance on a number of tasks and beats task-specific pre-trained models for those tasks in many cases (e.g., the UNICORN model for commonsense reasoning). Strengths and Weaknesses: This paper is basically an effort to hill-climb on language model pre-training objectives by combining and extending ideas from prior work. That's the main drawback; within the confines of what it sets out to do there, it's about as well-executed as I can imagine, modulo a few minor gripes. STRENGTHS The ideas in this paper are simple and effective. Given that TMLR does not involve evaluating the scope or novelty of work, I think this is strictly a positive: the strong empirical results here are a tribute to how much well-engineered improvements on familiar methods can contribute to the SOTA. The final evaluation then reports very strong results. I am sure that this will become a frequently-used pre-trained model given what's reported here. The paper is very clearly written; the core idea is explained well and the presentation and flow of the results is easy to follow. The ablations (section 4) and small-scale experiments effectively establish the superiority of the paper's techniques in apples-to-apples comparisons with other methods, including in a "modestly scaled up" version. I like the comparisons on both fine-tuning and one-shot learning. The evaluation in section 5 is incredibly thorough. The number of tasks is on par with other premiere language modeling papers and the comparisons against other SOTA models are helpful for contextualizing the results on the 20B model. WEAKNESSES Although the results in general are quite convincing, I'm not so sure that ROUGE results on few-shot summarization are so meaningful without additional context or at least some qualitative human evaluation. While we have a good sense of what fine-tuned models do when they achieve certain ROUGE scores, it's much less clear for few-shot models. These low ROUGE scores could represent one of a number of strategies: the model is doing something like summarization but selecting the wrong content, or generating on-topic content that doesn't constitute a summary, or exhibiting some other behavior. Perhaps for Table 11, before simply declaring victory, it would be good to back this up with a human comparison or at the very least show some examples of generated summaries from different models. The same could be done for SGD and TOT elsewhere in the paper. On a writing note, the paper is written in a fairly informal, conversational style in parts. I assume this was a deliberate choice by the authors -- it didn't bother me too much, but addressing readers directly feels defensive at times. Statements like "outperforming these models is not exactly the easiest thing to do" aren't very precise and don't help contextualize the results that much. A more precise statement would be something like: existing work has constructed ad hoc pre-trained models optimized towards subsets of benchmarks, so the fact that a single unified model can outperform those is evidence of the strength of the pre-training scheme here. Requested Changes: As described above, showing some other results for few-shot summarization (and ideally few-shot generalization more broadly) would be helpful. See below for some Broader Impact concerns. Broader Impact Concerns: Unlike many of the other prominent pre-trained LM papers such as PaLM and GPT-3, this paper does not include any discussion of bias or harms. I think this is okay. Its focus is more chiefly on advancing the state-of-the-art on current tasks rather than introducing fundamentally new capabilities, so I would assume the authors view the risks of this model as circumscribed by the discussions in other work. (The focus on few-shot generation is something that separates this paper a bit from prior efforts, though.) Anyway, I do not believe this paper introduces dramatically new harms that need to be analyzed or disclaimed. However, a brief statement to this effect (do the authors believe this? do they see any harms from capabilities of this new model specifically?) would be helpful. ================================================== Metareview: Recommendation: Reject Comment: This paper proposes a mixture-of-denoisers method to unify different language pre-training objectives. This paper was assigned to four different reviewers (a normal TMLR case is three) and received fixed reviews. All reviewers acknowledge this is an interesting paper. The multi-task idea is simple and has significant practical interests. And there are a lot of empirical results to back up the authors' claims. At the same time, reviewers also raised many concerns/questions that hope authors could address/clarify, which include the algorithmic motivation, method justification, experiment settings, results analysis, paper presentations, etc. However, it seems the rebuttal timeline does not align with the authors well. We did not receive the authors' rebuttal in the required time frame. Given the current reviews and unaddressed concerns, it will be hard to accept this paper in its current form. Here, I would like to explain the timeline for the TMLR review for the authors' future reference. The official rebuttal and discussion period should be the 2 weeks after all reviews were out (for this paper, reviews are out on Jul 1st, and the rebuttal period is Jul 2 - Jul 16). After that, all reviewers need to submit their final recommendation in 2 weeks (before Jul 31 for this particular case). We note that the authors asked for two weeks extension on Jul 27. However, after consulting with the editors, we could not grant such a long extension. I also want to explain the resubmission policy for TMLR. Although the paper is rejected now, the authors can revise and then resubmit, so it is not like a conference rejection. I high recommend authors revise the paper based on the reviews and resubmit it later when authors are less busy. ==================================================
# Learning Sparse Graphs For Functional Regression Using Graph-Induced Operator-Valued Kernels Akash Saha *akashsaha@iitb.ac.in* IEOR, IIT Bombay Mumbai, India P. Balamurugan *balamurugan.palaniappan@iitb.ac.in* IEOR, IIT Bombay Mumbai, India Reviewed on OpenReview: *https: // openreview. net/ forum? id= f9l4eiPKpV* ## Abstract A functional regression problem aims to learn a map F : *Z 7→ Y*, where Z is an appropriate input space and Y is a space of output functions. When Z is also a space of functions, the learning problem is known as function-to-function regression. In this work, we consider the problem of learning a map of the form F : Z p*7→ Y*, a many-to-one function-to-function regression problem, where the aim is to learn a suitable F which maps p input functions to an output function. In order to solve this regression problem with p input functions and a corresponding output function, we propose a graph-induced operator-valued kernel (OVK) obtained by imposing a graphical structure describing the inter-relationships among the p input functions. When the underlying graphical structure is unknown, we propose to learn an appropriate Laplacian matrix characterizing the graphical structure, which would also aid in learning the map F. We formulate a learning problem using the proposed graph-induced OVK, and devise an alternating minimization framework to solve the learning problem. To learn F along with meaningful and important interactions in the graphical structure, a minimax concave penalty (MCP) is used as a sparsity-inducing regularization on the Laplacian matrix. We further extend the alternating minimization framework to learn F, where each of the p constituent input functions as well as the output function are multi-dimensional. To scale the proposed algorithm to large datasets, we design an efficient sample-based approximation algorithm. Further, we provide bounds on generalization error for the map obtained by solving the proposed learning problem. An extensive empirical evaluation on both synthetic and real data demonstrates the utility of the proposed learning framework. Our experiments show that simultaneous learning of F along with sparse graphical structure helps in discovering significant relationships among the input functions, and motivates interpretability of such relationships driving the regression problem. ## 1 Introduction Learning to predict functional output from a suitable input is characterized as a functional regression problem, which aims at learning a function-valued function F : *Z → Y*, where Z is an appropriate input space and Y is an output space of functions. In many scenarios, multiple inputs decide the value of an output, which gives rise to functional regression problems of the form F : Z p → Y, where p is the number of inputs considered. Even more interesting is the case where interactions among the p inputs can be used in a precise manner to predict y ∈ Y. In particular, we consider Z to be a space of functions, hence learning a map F : Z p → Y is called a many-to-one function-to-function regression problem. Without loss of generality, we refer to this many-to-one function-to-function regression problem as the functional regression problem considered throughout this paper. Applications of this type of problems can be found in weather forecast- ![1_image_0.png](1_image_0.png) Figure 1: Illustrative example of a functional regression problem, where z1, z2, z3 represent atmospheric pressure at 3 stations in a region, graph G depicts the inter-relationships among z1, z2, z3 and y represents the average temperature of the region. F maps z1, z2, z3 to y incorporating G. ing where different weather parameters in stations measured at multiple timepoints across a month can be characterized as functional inputs used to determine the average rainfall as a time-varying function in that month. Similarly, emissions from a factory in a day can be predicted as a function of time, based on the functional data obtained from readings of different components involved in the manufacturing process at different timepoints in that day. In sports analytics, the movement data of different players throughout the game can let us know the influence of a particular strategy in ball possession/movement as a functional output over the duration of the game. Thus in all these applications, we notice situations where a set of input functions interact to produce an output function. Even though digital data is discrete, systems where the inherent data produced is smooth and continuous by nature, can be modeled as functions over a suitable domain (Ramsay, 1982) to leverage the variations based on that domain. Consider a simple functional regression problem illustrated in Figure 1, where input functions z1, z2, z3 ∈ Z denote the atmospheric pressure measurements of 3 nearby weather stations and the output function y ∈ Y denotes the average temperature of the region, throughout a particular day. For predicting y, considering the input functions z1, z2 and z3 without any relation among them may be restrictive as inherent relations between the input functions may dictate the generation of y. In order to capture interactions among z1, z2, z3, we introduce a graph structure G between z1, z2 and z3 in Figure 1, where the nodes of G represent zi's and the edges depict potential relations among them. The graph structure G will be useful in representing the influences and inter-relations among zi's, which can be useful in the prediction of y ∈ Y using F. We propose a framework for combining the impact of z1, z2*, . . . , z*p with the additional information of G to predict y. In determining the output function y, the graphical structure G on the input functions may be known from domain knowledge and can possibly be directly incorporated to learn F. A more interesting case is when G is unknown and needs to be learned along with F. Learning the graph structure G would help to discover interactions among zi's which facilitate predicting y. When the number of input functions z1, z2*, . . . , z*p grow larger, the associated graph G might also become dense with many edges and incorporating such dense G might lead to computational difficulties and would also lead to spurious connections/edges which lack interpretability. Thus, learning a sparse graphical structure G on input functions becomes instrumental in understanding the significant relationships that drive the functional regression problem to predict the output function. In this work, we consider kernel methods to learn the map F : Z p → Y, either using a priori knowledge of G or by learning G along with F. Kernel methods have been a popular class of methods that use kernel functions to associate inputs to a higher dimensional feature space and find applications in classification, clustering and regression (Shawe-Taylor et al., 2004). For a simple scalar-valued regression problem, the aim is to learn f : X → R, where X is an appropriate input space of vectors. A scalar-valued kernel k : X × X → R associates two inputs in X to a real number which provides a measure of similarity between those inputs. Scalar-valued kernels which are positive semi-definite are associated to a (unique) space H of candidate functions f mapping input space X to R, H being referred to as a reproducing kernel Hilbert space (RKHS). The function f to be learned resides in the aforementioned space H of functions which enables us to formulate a regularized loss minimization problem over H (Shawe-Taylor et al., 2004), whose solution can be used to predict the desired output for input samples. On the other hand, for a functional regression problem, an extension of scalar-valued kernel to operator-valued kernel (OVK) of the form K : *Z ×Z → L*(Y) associates two input functions to a bounded linear operator on the output space of functions instead of a real number (Kadri et al., 2016; Saha & Palaniappan, 2020). An operator-valued kernel K which is positive semi-definite is associated to a (unique) space HK of candidate functions F mapping input space Z to Y, HK being called a function-valued reproducing kernel Hilbert space (Kadri et al., 2016). Similar to scalar-valued kernel setting, in the operator-valued kernel setting too, a regularized loss minimization learning problem can be formulated in the function-valued RKHS HK. The reproducing property of OVKs (Definition A.1) helps in reformulating the learning problem in the output space Y, enabling algorithms to be developed for solving the resultant learning problem. In this paper, we consider another natural extension of this kernel-based framework which leads to a case where p input functions and their interactions among themselves (captured by a graph G) can be used to predict the output function. To learn F : Z p → Y, we incorporate p input functions along with the underlying graphical structure to create a graph-induced operator-valued kernel that induces a corresponding functionvalued RKHS to facilitate learning of F. In order to use graph-induced OVKs in the task of functional regression based on p input functions with unknown graphical structure, we propose to learn F simultaneously along with the graphical structure. Predicting output function based on a graph-induced OVK constructed using graphical structure G over p input functions, to our knowledge, is a novel problem and has not been explored much in literature. In this context, we outline our major contributions below. Contributions: We aim to address the following objectives in this work. - We propose a graph-induced OVK for solving a functional regression problem with p input functions z1, z2*, . . . , z*p and their interactions represented using a graphical structure (known/unknown) to predict a corresponding output function y. This is enabled by Proposition A.1 considering the Laplacian matrix (L) of the underlying graph G. - For practical scenarios where the underlying graphical structure is unknown, we provide a construction of graph-induced OVK and propose to jointly learn the functional regression map F, along with the graphical structure G represented by L and D, where D is a diagonal matrix with non-negative entries signifying the impact of individual input functions on y. A regularized loss minimization problem is formulated using *L, D* and F in the function-valued RKHS associated with the graphinduced OVK. - We propose an alternating minimization framework for solving the designed regularized loss minimization problem to learn *L, D* and the map F. The functional regression map F is learned for a fixed L and D by adapting Operator based Minimum Residual (OpMINRES) algorithm (Saha & Palaniappan, 2020) to solve a linear operator system associated with the graph-induced OVK. For a fixed F, matrices L and D are learned using projected gradient descent. To learn sparse L, we introduce minimax concave penalty (MCP) (Ying et al., 2020) as a sparsity-inducing regularization on the Laplacian matrix L. - We further extend the proposed alternating minimization framework to solve a multi-dimensional functional regression problem, where each input function zi ∈ Z, i ∈ {1, 2*, . . . , p*} and y ∈ Y are multi-dimensional. - In order to scale the proposed alternating minimization framework to handle large datasets, we design an efficient sample-based approximation algorithm which enables to solve the learning problem over only a carefully chosen subset of training samples. - We establish bounds on generalization error for the map F obtained by solving the proposed learning problem. Our generalization analysis also incorporates the learning of graph-induced OVK. - An extensive empirical evaluation on both synthetic and real data has been carried out and the comparison results demonstrate the efficacy of the proposed learning framework. Further our experiments show that simultaneous learning of sparse graphical structure along with the function-valued regression map F establishes interpretable relationships driving the functional regression problem. ## 2 Paper Organization In Section 3, related works from the areas of functional data analysis, functional regression and graph learning are discussed. The notations used in this paper have been summarized in Section 4. The proposed framework for solving a functional regression problem using a graphical structure is covered in Section 5. The graphinduced OVK is introduced and a representer theorem for the learning problem with an unknown graph structure is presented in Section 5.1. An alternating minimization framework is proposed in Section 5.1 for jointly learning an unknown graphical structure on the functional inputs and the map F. Inducing sparsity using a MCP regularization when learning the unknown graphical structure is also discussed in Section 5.1. An extension of the proposed framework for solving multi-dimensional functional regression problems and a sample-based approximation algorithm for scaling up to large training data are also presented in Section 5.1. In Section 6, the bounds on generalization error of the learned map F are established by incorporating learning of graph-induced OVK. Experiments using the proposed alternating minimization framework and comparative results have been illustrated in Section 7. Section 8 provides the conclusion of the paper. ## 3 Related Work As our work lies in the confluence of various areas of research, we divide the related work based on the different areas below. Functional Data Analysis: Continuous functions over a time interval have been explored as the central part of functional data analysis (FDA) in (Ramsay, 1982) and (Ramsay & Dalzell, 2018). FDA techniques have evolved significantly with non-parametric approaches (Ferraty & Vieu, 2006) and functional principal component analysis (FPCA) (Happ & Greven, 2018) becoming prevalent tools. These approaches have found applications in sparse longitudinal data (Yao et al., 2005), classification involving functional data (Rossi & Villa, 2006) and clustering for multivariate functional data (Jacques & Preda, 2014). Functional Regression: In the context of functional regression, Oliva et al. (2015) uses projections on orthonormal basis systems for input and output spaces to estimate regression maps based on random basis functions from random Fourier features (Rahimi & Recht, 2007). Operator-valued kernel methods have found applications for vector-valued data (Micchelli & Pontil, 2005) and function-valued data (Kadri et al., 2016) for solving corresponding vector-valued and functional regression problems. The construction of a positive semidefinite operator-valued kernel used to learn a function-valued mapping in a corresponding reproducing kernel Hilbert space (RKHS) is considered in (Kadri et al., 2016), while Saha & Palaniappan (2020) uses indefinite operator-valued kernels to learn a function-valued function in a corresponding reproducing kernel Krein space (RKKS). Hullait et al. (2021) uses a robust functional linear regression model based on robust FPCA (Bali et al., 2011) to predict a response function using a predictor function without considering multiple input functional data and their graphical structure. Another approach in (Bouche et al., 2021) uses kernel-based projection learning with a finite (not necessarily orthogonal) basis for the output space. High dimensional functional data has been used in (Gahrooei et al., 2020) to perform function-to-function regression for a functional response output using a linear combination of functional inputs considered as covariates. However, associations among the functional inputs have not been considered in (Gahrooei et al., 2020). Functional deep learning methods have been developed to solve regression problems using functional direct neural network and functional basis neural network (Rao & Reimherr, 2023). In functional direct neural network, continuous neurons interact with learned weight functions, whereas in functional basis neural network, basis functions are used to encode the continuous neurons as well as weight functions. Both functional direct neural network and functional basis neural network in (Rao & Reimherr, 2023) require large amount of data for training. On the contrary, our work is related to a setting with limited number of training samples which is useful in various practical scenarios where data availability is restricted. Graph Learning: Learning graph structure has been an active area of research in machine learning. In (Dong et al., 2016), a Laplacian matrix corresponding to the graph structure on the observed input signals is learned by using a vectorized optimization problem with a smoothness assumption over the signals. The graph Laplacian learning algorithm is based on an alternating minimization scheme for learning the Laplacian matrix as well as the missing/noisy signals. Pu et al. (2021b) extends this idea by using a kernelbased learning problem for determining the Laplacian matrix of a graph structure for smooth input signals. Kernels are used to learn relationships between the input signals as well as the inter-relationships between covariates such as timestamp of recording an observation. Another popular approach for graph structure learning in (Qiao et al., 2019) is based on estimation of precision matrix (inverse of covariance matrix) for functional data corresponding to the p nodes in the graph, assuming that the data arises from a p-dimensional multivariate Gaussian process. A differential functional graphical model has been considered in (Zhao et al., 2019) which learns a differential graph to characterize the difference between conditional dependencies of two different populations which is determined using their respective samples. Extending the idea of determining precision matrix for capturing conditional dependence between the nodes, Qiao et al. (2020) proposes doubly functional graphical models to capture the evolving conditional dependence relationship among the sampled functions corresponding to the nodes. Instead of learning a single graph based on the data, Pu et al. (2021a) learns a graph topology with topological difference variational autoencoder for graph learning. A motivating work (Gómez et al., 2021) addresses function-to-function linear regression problem with both known/unknown directed graph structure where the main focus is on root cause analysis and the node representing output function is also a part of the graphical structure containing nodes corresponding to the input functions. The graph structure considered is learned based on a neighborhood selection method (Meinshausen & Bühlmann, 2006) to determine the set of candidate parents for each node in order to solve the linear regression problem. Multivariate Gaussian processes have been used with basis functions in (Gómez et al., 2021) to model a directed acyclic graph which enables solving a function-to-function linear regression problem. However, in our approach, the goal of learning an undirected graph structure only on the p input functions is different from the parent-based directed acyclic graph structure assumption in (Gómez et al., 2021). Moreover, our OVK based approach helps to learn non-linear relations in comparison to the linear model considered in (Gómez et al., 2021). ## 4 Notations We consider a functional regression problem with the input space as X = (L 2([0, 1]))p and the output space as Y = L 2([0, 1]), where p ∈ N and L 2([*a, b*]) denotes the space of equivalence classes of square integrable functions on [a, b], *a, b* ∈ R and *a < b*. The notation [n] denotes the the set {1, 2*, . . . , n*}, for n ∈ N. In order to denote elements of the input space X , we use the notation x = (x1, x2, . . . , xp) ∈ X . We denote the graphical structure over the input functions x1, x2*, . . . , x*p as G = (*V, E*), where V is the set of p vertices corresponding to the functional input variables x1, x2*, . . . , x*p. The degree of a vertex v ∈ V is denoted by deg(v). K refers to an OVK mapping from *X ×X* to L(Y), where L(Y) is the set of bounded linear operators over the output space Y. For a matrix M ∈ R k×k, where k ∈ N, Mi,j denotes the (*i, j*)-th element of M and Mi ∈ R k denotes the i-th column of M. Hence, we refer to matrix M as [Mi,j ] k i,j=1 or [M1, M2*, . . . , M*k]. The transpose of M is denoted by M>. The notation diag(d1, d2*, . . . , d*p) denotes a p × p diagonal matrix with the diagonal entries as d1, d2*, . . . , d*p, where di ∈ R, for i ∈ [p]. For a matrix D ∈ R p×p, *Diag*(D) denotes the p × 1 vector containing the diagonal elements of D. Based on the context, other relevant notations will be introduced in the paper as required and suitable descriptions will be provided for them. ## 5 Functional Regression Based On A Graphical Structure In this section, we introduce the functional regression problem with the aim of incorporating the graph structure to aid the regression task. To model the graphical structure, we assume that G = (*V, E*) represents an undirected graph where V denotes the node set with |V |(= p) nodes and E denotes the edge set of G. Let x(t) = (x1(t), x2(t)*, . . . , x*p(t)) denote the functional variables for a given domain t ∈ T where each xiis represented by node vi ∈ V , for i ∈ [p]. Recall that L 2([*a, b*]) denotes the space of equivalence classes of square integrable functions on [a, b]*, a, b* ∈ R and *a < b*. For simplicity, we assume xi ∈ L 2([0, 1]), hence R 1 0 x 2 i (t)dt < ∞, i ∈ [p]. We discuss the realistic case of functional regression with an unknown graph structure here. For a related discussion on the case of a known graph structure, we refer the reader to Appendix A.1. ## 5.1 Learning With Unknown Graph Structure Consider a system where a set of input functions determines the output (or response) function. Let the system be modeled based on p input functional variables x1(t), x2(t)*, . . . , x*p(t), where xi ∈ L 2([0, 1]), i ∈ [p]. A functional response variable y(t) is used to model output of the system where y ∈ L 2([0, 1]). (Note that [0, 1] can be replaced with any closed time interval based on the application.) The undirected graph structure of the functional input variables is represented by a suitable graph G = (*V, E*), where V = {v1, v2*, . . . , v*p} and E = {{vi, vj}|viis connected to vj , 1 ≤ *i, j* ≤ p} is the edge set which characterizes the underlying relationship between the variables. Note that the notation for an edge uses an unordered pair {vi, vj} which characterizes the undirected nature of the graph G. In order to model the relation between functional input variables x1, x2*, . . . , x*p and functional response variable y, we use the following map F: $$y=F(x_{1},x_{2},\ldots,x_{p},G).$$ $$\left(1\right)$$ y = F(x1, x2, . . . , xp, G). (1) Note that F now depends explicitly on the graph G in addition to the input functions x1, x2*, . . . , x*p. For most problems in real life, the underlying graphical structure encoding the inter-relationships among the input functions x1, x2*, . . . , x*p is not known. In such situations, the underlying graph on the input functions has to be learned with simultaneous prediction of the functional response variable y ∈ Y using x = (x1, x2, . . . , xp) ∈ X . Considering an undirected simple graph structure on the functional variables may result in encountering an integer programming based optimization problem which will lead to a computationally harder problem in addition to the functional regression task. We consider a relaxation in this aspect by allowing weighted undirected simple graphs in our approach. With a slight abuse of notation, let the graph structure be given by G = (*V, W*), where |V | = p and W ∈ R p×pis symmetric with wi,j ≥ 0, where wi,j = 0 whenever vertices vi and vj are not connected and wi,j > 0 denotes the weight assigned to the edge between vi and vj . Hence the graph Laplacian matrix can be represented as L = D − W, where D = diag(D1,1, D2,2, . . . , Dp,p) with Di,i =Pp j=1 wi,j . It can be shown that L is positive semi-definite by virtue of being diagonally dominant with non-negative diagonal entries (Golub & Van Loan, 1996). We consider the notations x = (x1, x2, . . . , xp) ∈ X (= (L 2([0, 1]))p) and y ∈ Y (= L 2([0, 1])) to represent an arbitrary sample (*x, y*). To learn the mapping F, consider the training data of n samples given as (x (i), y(i)) n i=1, where x (i) = (x (i) 1 , x (i) 2 , . . . , x (i) p ) ∈ X and y (i) ∈ Y. In order to learn F, we develop an operator-valued kernel which can leverage the structural information of G. Definition 5.1 (**Graph-induced Operator-valued Kernel**). A graph-induced operator-valued kernel is defined as $$(K^{G}(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime};G)\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ k2(s, t)y(s)ds, (2) where k2 is a scalar-valued kernel on R 2, G is a graph associating the p input functions in (x1, . . . , xp) ∈ X and k1 is defined as $$k_{1}(x,x^{\prime};G)=e^{-\gamma(x-x^{\prime})^{\top}(L+D)(x-x^{\prime})},\ \gamma>0,$$ $$\left(2\right)$$ where L is the Laplacian matrix of graph G and D is a diagonal matrix consisting of non-negative entries. KG associates a pair x, x0 ∈ X with output function y ∈ Y where G is the graph which incorporates the interactions of p constituent input functions of x and x 0. k2 inside the Hilbert-Schmidt Integral (HSI) operator R 1 0 k2(*s, t*)y(s)ds is a scalar-valued kernel on R×R. The radial basis function (RBF) type kernel construction of k1 involves L + D, where the addition of the Laplacian L with a diagonal matrix D with non-negative entries results in a diagonally perturbed Laplacian matrix (Bapat et al., 2001) (see (Kurras et al., 2014; Aliakbarisani et al., 2022) for applications of perturbed Laplacians). If k1 is positive semi-definite and if k2 is positive semi-definite (implying that the HSI operator is positive semi-definite), then the construction in (2) is known to be positive semi-definite (Kadri et al., 2016). The addition of diagonal matrix D (with nonnegative entries) to L in (2) preserves the positive semi-definiteness of the kernel k1. Note that the notation of the form x >Lx0 used in k1 denotes an inner product structure given by x >Lx0 =Pp i,j=1 R 1 0 xi(t)Lijx 0 j (t)dt (see Appendix A.1 for details regarding this inner product structure). Without loss of generality, henceforth we refer to the graph-induced operator-valued kernel KG as K for simplicity. The matrix L = [Li,j ] p i,j=1 satisfies the conditions: L1 = 0 and Li,j = Lj,i ≤ 0, ∀i 6= j. Note that the graph-induced operator-valued kernel (K(*x, x*0)y)(t) = k1(*x, x*0; G)R 1 0 k2(*s, t*)y(s)ds as defined in (2) with k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0), γ > 0, x, x0 ∈ X , y ∈ Y is positive semi-definite (see proof of Proposition A.1 in Appendix A.1), which ensures that there exists a unique function-valued RKHS HK induced by K (Theorem 1 (Kadri et al., 2016)). Now to simultaneously learn F including the graph structure represented by L and D, the following optimization problem is formulated: $$\widetilde{F},\widetilde{L},\widetilde{D}=\operatorname*{arg\,min}_{F\in\mathcal{H}_{K},L\in\mathcal{L},D\in\mathcal{P}}\sum_{i=1}^{n}\|y^{(i)}-F(x^{(i)})\|_{\mathcal{H}}^{2}+\lambda\|F\|_{\mathcal{H}_{K}}^{2}+\rho_{L}\sum_{i=1}^{n}x^{(i)\top}Lx^{(i)}+\rho_{D}\|D\|_{F}^{2},\tag{3}$$ where L = {L ∈ R p×p|L1 = 0, Li,j = Lj,i ≤ 0, ∀i 6= j} denotes the set of all matrices satisfying the constraints associated with Laplacian matrices of the graph G = (*V, W*) with W = [wi,j ] p i,j=1. Note that D = {D ∈ R p×p|Di,i ≥ 0, Di,j = 0, ∀i 6= j} denotes the set of all diagonal matrices with non-negative diagonal entries, k.kF is the Frobenius norm and λ, ρL, ρD > 0. The regularization of D in (3) using Frobenius norm provides control on the values in D. Note the absence of a Frobenius norm based regularizer for L in (3). A different smoothness term x >Lx = 1 2 Pp i,j=1 wi,jkxi − xjk 2 Y is considered (a similar term is considered in (Humbert et al., 2021)), which provides an improved interaction-based data-oriented regularization instead of using a simple matrix norm based regularization of L. Using the representer theorem A.2 and the reproducing property of OVK K, the minimization problem (3) is equivalently reduced to be in terms of u ∈ Yn instead of F ∈ HK as follows (see Appendix A.2): min F,L,D J(F, L, D) =Xn i=1 ky (i) − F(x (i))k 2 Y + λkFk 2 HK + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F (4) =⇒ min u,L,D J(u, L, D) =Xn i=1 y (i) − Xn j=1 K(x (i), x(j))uj 2 Y + λ Xn i,j=1 hK(x (i), x(j))ui, uj iY (5) + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F . To solve (3) (or (5) equivalently) we now propose an alternating minimization framework where J(u*, L, D*) is optimized alternatively with respect to u ∈ Yn, L ∈ L and D ∈ D. We now discuss the steps involved in alternating minimization of J(u*, L, D*). ## 5.1.1 Minimization With Respect To U **For Fixed** L, D Assuming fixed *L, D*, and from the reproducibility property of K and representer theorem A.2, J(u*, L, D*) from (5) simplifies to the following system of linear operator equations in u (see Appendix A.8): $$(\mathbf{K}+\lambda I)\mathbf{u}=\mathbf{y},$$ (K + λI)u = y, (6) where Ki,ju = K(x (i), x(j))u = k1(x (i), x(j); G) ¯k2(u), ∀u ∈ Y with k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0), ¯k2(u)(t) is defined using an exponential kernel in (2) as ¯k2(u)(t) = R 1 0 e −γop|s−t|u(s)ds, γop > 0*, s, t* ∈ R, u = [u1, u2*, . . . , u*n] > ∈ Yn and y = [y (1), y(2)*, . . . , y*(n)] >. In our framework, we consider a particular choice of kernel k2 on R 2 as k2(*s, t*) = e −γop|s−t|, γop > 0. The OpMINRES algorithm (Saha & Palaniappan, 2020) solves the system (6) by using an iterative Krylov subspace minimal residual method. Consider P := K +λI, OpMINRES minimizes ky − PukYn , where the norm is defined as kξkYn = qPn i=1 R 1 0 ξ 2 i (t)dt, for ξ = (ξ1, ξ2, . . . , ξn) ∈ Yn. The steps involved in k-th iteration of OpMINRES algorithm are given below: 1. Transforming the linear operator system (K + λI)u = y into a linear system in R k using a Lanczosbased method (Lanczos, 1950), called Operator-valued Lanczos (or OpLanczos) scheme. 2. Solving the linear system of the previous step using QR decomposition. 3. Transforming the result obtained in step 2 appropriately to retrieve a solution in Y n. Using the Krylov subspace Kk(P, y) = span{y, Py, P2y*, . . . ,* Pk−1y} obtained at the k-th iteration, OpMINRES obtains an approximation u kto the original solution using the following: $$\mathbf{u}^{k}=\operatorname*{arg\,min}_{\boldsymbol{\theta}\in\mathcal{K}_{k}(\mathbf{P},\mathbf{y})}\|\mathbf{y}-\mathbf{P}\boldsymbol{\theta}\|_{\mathcal{Y}^{n}}.\tag{1}$$ $$\left(7\right)$$ The problem in (7) is transformed into a problem in R k by using OpLanczos method. The OpLanczos method at the k-th iteration, tridiagonalizes P to get PQk = QkTk, where Tk has a tridiagonal structure given by $$T_{k}=\begin{bmatrix}\alpha_{1}&\beta_{2}&&&0\\ \beta_{2}&\alpha_{2}&\beta_{3}&&&\\ &&\beta_{3}&\alpha_{3}&\ddots&\\ &&\ddots&\ddots&\beta_{k-2}\\ &&&\beta_{k-1}&\alpha_{k-1}&\beta_{k}\\ 0&&&\beta_{k}&\alpha_{k}\end{bmatrix},$$ $$\quad(8)$$. $$(9)$$ , (8) and Qk = [q1, q2*, . . . , q*k], where the qi's belonging to Y n are orthonormal and q1 is generally assumed to be y/kykYn . Further, the relation PQk = Qk+1T k is also satisfied for a suitably defined T k. Using Qk, θ ∈ Yn can be written as θ = Qkϑ for an appropriate ϑ ∈ R k. Hence we have: Equation (10) reduces to (11) owing to the orthonormality of {q1, q2*, . . . , q*k+1} (columns of Qk+1). Solving for ϑk = arg minϑ∈Rk kβ1e1 − T kϑk2 is done using QR decomposition. Now, the transformation from R k back to Y n to obtain u kis achieved using the following: u k = Qkϑk = Qk arg minϑ∈Rk kβ1e1 − T kϑk2 . In summary, we note that the minimization of J with respect to u is obtained by using OpMINRES for fixed L and D. Now, we proceed to the next step in the alternating minimization of J(u*, L, D*). ## 5.1.2 Minimization With Respect To L **For Fixed** U, D The minimization of J(u*, L, D*) with respect to L for fixed u, D is simplified by considering the symmetry of L ∈ L. To simplify the computations, we introduce vectorization of matrices and half-vectorization min θ∈Kk(P,y) ky − PθkYn = min ϑ∈Rk ky − PQkϑkYn = min ϑ∈Rk ky − Qk+1T kϑkYn (9) = min ϑ∈Rk kQk+1(β1e1 − T kϑ)kYn , (10) (where β1 = kykYn , e1 = [1, 0, . . . , 0]> and q1 = y/kykYn ) = min ϑ∈Rk kβ1e1 − T kϑk2. (where k.k2 is the standard Euclidean norm.) (11) $$(10)$$ $$(11)$$ of symmetric matrices (Henderson & Searle, 1979). For a matrix Z = [Zi,j ] q i,j=1 ∈ R q×qfor q ∈ N, the vectorization of Z is defined as $$\operatorname{vec}(Z)=[Z_{1,1},\ldots,Z_{q,1},Z_{1,2},\ldots,Z_{q,2},\ldots,Z_{1,q},\ldots,Z_{q,q}]^{\top}.$$ The half-vectorization of a symmetric matrix Z = [Zi,j ] q i,j=1 ∈ R q×qis the vectorization of the lower triangular part of Z given by $$\operatorname{vech}(Z)=[Z_{1,1},\ldots,Z_{q,1},Z_{2,2},\ldots,Z_{q,2},\ldots,Z_{q-1,q-1},Z_{q,q-1},Z_{q,q}]^{\top}.$$ By introducing vectorization and half-vectorization of L, we reduce the minimization of J(u*, L, D*) with respect to matrix L into minimization with respect to the vector vech(L). To tackle the constraint set L, we can reduce it to a simpler form by using half-vectorization and vectorization of L given by vech(L) ∈ R p(p+1) 2 and vec(L) ∈ R p 2, respectively. The following relations are used to relate vech(L) and vec(L) using an appropriate transformation matrix called duplication matrix M: $$\mathcal{M}\mathrm{vech}(L)=\mathrm{vec}(L),\mathrm{~where~}\mathcal{M}\in\mathbb{R}^{p^{2}\times{\frac{p(p+1)}{2}}}.$$ The constraint set L can then be rewritten as A vech(L) = 0, B vech(L) ≤ 0, where A and B are matrices which handle L1 = 0 and Li,j ≤ 0, i 6= j, respectively (Dong et al., 2016; Pu et al., 2021b). The construction and properties of *A, B* and M can be found in Appendix A.5. For notational simplicity, we consider a slight abuse of notations when referring to function J(u*, L, D*) to be equivalent to J(u, vech(L), vech(D)) as vech(L) and vech(D) can be used to represent L and D, respectively. Similarly, when considering fixed u, vech(L) or vech(D), we denote the function J as a function of non-fixed entities, without referring to the fixed variables. For example, in the current step, since vech(L) is the non-fixed entity and u, vech(D) are fixed, we denote J(u, vech(L), vech(D)) simply as Ju,D(vech(L)). We employ a projected gradient descent procedure to solve min Ju,D(vech(L)). The (k + 1)-th iterate vech(L) k+1 is obtained from the k-th iterate vech(L) k by the following projected gradient descent step : $${\rm vech}(L)^{k+1}=\Pi_{\cal L}({\rm vech}(L)^{k}-\eta_{L}\nabla_{\rm vech}(L)J_{{\bf u},D}({\rm vech}(L)^{k})),\tag{12}$$ where ηL > 0 is the learning rate for the descent step, ΠL denotes the projection operator onto the set L. The expression for gradient term ∇vech(L)J has been derived in Appendix A.6. For a fixed zˆ ∈ R p(p+1) 2 , the projection operator ΠL is defined as follows: $$\Pi_{\mathcal{L}}(\hat{z})=\operatorname*{arg\,min}_{z\in\mathbb{R}^{\frac{p(p+1)}{2}}}\|z-\hat{z}\|^{2}\;\;\text{such that}Az=0,Bz\leq0.$$ $$(13)$$ The projection operator defined in (13) ensures that vech(L) obtained satisfies the constraints of a Laplacian matrix of a graph. Projection operator ΠL is evaluated by solving the quadratic program (13) using wellknown interior point methods (Dikin, 1967; Andersen et al., 2012). For a large matrix L, capturing meaningful relationships between input functions becomes important; otherwise, elements of L may contain many non-zero values which are close to each other in magnitude and may lead to lack of interpretability. Integrating sparsity-inducing regularizers on L would ensure that the most important interactions get captured and can improve the predictions for output function. The learned sparse graphs would then become useful for interpretation. In Section 5.1.4, we discuss about incorporating sparsity-inducing regularizers in the L-based minimization problem min Ju,D(vech(L)). Now that projected gradient descent is proposed for minimization of J with respect to L for a fixed u and D, we proceed with a similar approach for the next step of the alternating minimization framework. ## 5.1.3 Minimization With Respect To D **For Fixed** U, L Similar to the minimization of J(u*, L, D*) with respect to L for fixed u, D, as discussed in the previous section, we proceed with the simplification of D ∈ D using half-vectorization given as vech(D) ∈ R p(p+1) 2 to solve min Ju,L(vech(D)). We further use notation *Diag*(D) to denote the vector containing diagonal elements of D. In order to deal with the constraint Di,i ≥ 0, ∀i ∈ [p] of D, we construct a matrix C ∈ R p× p(p+1) 2 which consists of 0's and 1's satisfying Cvech(D) = *Diag*(D). Construction of a suitable C has been discussed in Appendix A.5. For solving min Ju,L(vech(D)), we use projected gradient descent steps given by: $$\mathrm{{vech}}(D)^{k+1}=\Pi_{\mathcal{D}}(\mathrm{{vech}}(D)^{k}-\eta_{D}\nabla_{\mathrm{{vech}}(D)}J(\mathrm{{vech}}(D)^{k})),$$ where ηD > 0 is learning rate for the descent step, ΠD denotes the projection operator onto the set D, and vech(D) k denotes the k-th iterate for vech(D). The required expression for gradient ∇vech(D)J has been derived in Appendix A.6. The projection operator ΠD is defined for ˆd ∈ R p(p+1) 2 as $\Pi_{\mathcal{D}}(\hat{d})=\underset{d\in\mathbb{R}^{\frac{p(p+1)}{2}}}{\arg\min}\ \|d-\hat{d}\|^{2},\ \text{such that}\ Cd\geq0.$ $$(14)$$ $$(15)$$ $$(16)$$ The explicit solution of (15) can be easily obtained for i ∈ [p(p + 1)/2], as the following: $$d_{i}=\begin{cases}0,&\text{if}i\in[p(p+1)/2]\setminus\mathcal{J}\\ \max(\hat{d}_{i},0),&\text{otherwise,}\end{cases}$$ $\text{}$C))$_i=1$ (. where the set J consists of indices j such that (vech(C))j = 1 (see Appendix A.5). Thus, the proposed alternating minimization framework discussed in Sections 5.1.1-5.1.3 can be summarized in the following steps: 1. **Minimization with respect to** F ∈ HK (or u ∈ Yn): Solving for u in (K + λI)u = y. 2. **Minimization with respect to** vech(L): Projected gradient descent of J with respect to vech(L) such that L ∈ L. 3. **Minimization with respect to** vech(D): Projected gradient descent of J with respect to vech(D) such that D ∈ D. ## 5.1.4 Sparsity Inducing Regularization The functional regression problem aims at predicting the output function based on the interactions and influences of the input functions. In a large-scale setting with numerous input functions influencing the output function, providing each interaction of a pair of input functions with similar weights may hamper the prediction capability as well as interpretability of the proposed graph-induced operator-valued kernel method. Sparse graphs ensure a focus on picking more pivotal associations between the pairs of input functions which drive the functional regression problem, as noisy interactions are given negligible weights and significant interactions contribute more to the prediction. A popular method to obtain sparsity on the graph structure is to consider the trace of L as a regularizer (Qiao et al., 2018). But as L is obtained based on solving a constrained minimization problem (13), we introduce a constraint that can regularize the values in L (Pu et al., 2021b). To facilitate this, we consider a vector c ∈ R p(p+1) 2 consisting of 0's and 1's, such that c >vech(L) = trace(L) (Appendix A.5). Let mtrace(> 0) be a hyperparameter controlling the trace of L which is equivalent to controlling the off-diagonal entries of weight matrix W corresponding to the graph learned. As our formulation requires solving a quadratic program in the projected gradient descent for L (Section 5.1.2), a direct way to control the trace of L involves modifying the quadratic program (13) to the following: $\Pi_{\mathcal{L}}(\hat{z})=\underset{z\in\mathbb{R}}{\arg\min}\ \|z-\hat{z}\|^{2},\ \text{such that}\ Az=0,c^{\top}z=m_{\text{trace}},Bz\leq0,$ where zˆ ∈ R p(p+1) 2 and mtrace > 0. The constraint c >z = mtrace is appended with Az = 0 to obtain the transformed quadratic program for projection operator ΠL: $\Pi_{\mathcal{L}}(\hat{z})=\operatorname*{arg\,min}_{z\in\mathbb{R}}\|z-\hat{z}\|^{2},\text{such that}\hat{A}z=\hat{0},Bz\leq0,$ 2, such that Az˜ = ˜0*, Bz* ≤ 0, (17) $$(17)$$ ![10_image_0.png](10_image_0.png) Figure 2: Comparison of h(w) := MCP(w) values for different λmcp and γmcp values for varying w. [Best viewed in color] where A˜ = A c > and ˜0 = 0 mtrace. For our experiments the value of mtrace is considered as p/2 (discussed in Section 7) to promote sparsity in the learned graph structure. Assigning mtrace the value p/2, allows a cumulative weight of p/2 to be distributed among the edges in the learned graph, promoting sparsity. Though we have used m*trace* = p/2 for all our experiments, we showcase the impact of different values of m*trace* in Appendix A.10.5 which illustrate that m*trace* can be considered as a hyperparameter. In addition to restricting the trace of L with mtrace, we also utilize a sparsity-inducing regularizer in minu,D J(vech(L)). Nonconvex regularizers have gained popularity in recent literature (Zhang et al., 2020; Ying et al., 2020; Vargas Vieyra, 2022) for promoting sparsity in learned graphs. These nonconvex regularizers acting on weights on individual graph edges are more effective than traditional `1-norm based graphical lasso regularization which is a classical approach in graph learning (Yuan & Lin, 2007; Banerjee et al., 2008; d'Aspremont et al., 2008). A nonconvex regularizer, minimax concave penalty (MCP) (Zhang, 2010) denoted by h is characterized by the definition of its derivative h 0 given by: $$h^{\prime}(w)={\begin{cases}\lambda_{m c p}-{\frac{w}{\gamma_{m c p}}},&w\in[0,\gamma_{m c p}\lambda_{m c p}],\\ 0,&w\in[\gamma_{m c p}\lambda_{m c p},\infty),\end{cases}}$$ $$\left(18\right)$$. for λmcp, γmcp > 0. The above definition of h 0 with h(0) = 0 ensures that h is monotonically increasing in the interval [0, γmcpλmcp]. The impact of λmcp and γmcp on value of the regularizer h has been compared in Figure 2. The illustration in Figure 2 shows that MCP function h magnifies the small values till γmcpλmcp where λmcp and γmcp control the slope and curvature of the truncated concave quadratic function h. This ensures that most smaller off-diagonal values are penalized by choosing proper λmcp and the differences between large and small values are exaggerated which promotes sparsity. Though h 0in (18) (or equivalently h) is defined over R+, we overload the notation to define h(z) = [h(z)i] d i=1 where z ∈ R d +. Ying et al. (2020) proposes MCP to obtain a nonconvex penalized maximum likelihood estimation method for learning sparse Laplacian matrices for graphs. However, here we introduce MCP regularizer in the projection step considered for vech(L). Therefore, we reformulate the minimization problem with respect to L for fixed u and D as follows: $$\operatorname*{min}_{\begin{array}{c}{{\mathrm{~vech}(L)\in{\mathcal{E}}}}\\ {{\mathrm{~vech}(L)\rhd M C P}}\end{array}}J_{{\mathbf{u}},D}({\mathrm{vech}(L)}),$$ $$\left(19\right)$$ $$(20)$$ Ju,D(vech(L)), (19) where the constraint set is L˜ = nz ∈ R p(p+1) 2 Az˜ = ˜0*, Bz* ≤ 0 oand the property vech(L) . MCP signifies that vech(L) is sparse in the sense of MCP regularization. The MCP regularizer operates on the off-diagonal entries of L to induce sparsity. To solve (19), we consider a two-level update procedure for vech(L). In the first level, we use Ju,D(vech(L)), to perform an iteration of gradient descent to obtain (q + 1)-th iterate from q-th iterate, as follows: $$\mathrm{vech}(L)^{q+1}=\mathrm{vech}(L)^{q}-\eta_{L}\nabla_{\mathrm{vech}(L)}J_{\mathbf{u},D}(\mathrm{vech}(L)^{q}),$$ where ∇vech(L)J denotes the gradient of J with respect to vech(L) and ηL > 0 is the learning rate for the descent step. In the second level, we ensure that vech(L) q+1 obtained in (20) is projected onto set L˜ and is also sparse in the sense of MCP regularization captured using the property vech(L) . MCP. To apply the MCP regularizer on off-diagonal entries of L for vech(L), we construct a matrix H ∈ R p(p+1) 2 × p(p+1) 2 comprising 0's and 1's such that Hvech(L) produces a vector having the same structure as vech(L) with 0's corresponding to diagonal entries of L and the same off-diagonal entries as in L (see Appendix A.5 for the construction). Then we aim to solve the following minimization problem: $$\min_{z\in\mathbb{R}^{\frac{n(p+1)}{2}}}\|z-\text{vech}(L)^{q+1}\|^{2}+\sum_{i=1}^{\frac{n(p+1)}{2}}\left(h(H(-z))\right)_{i},\text{such that}\tilde{A}z=\tilde{0},Bz\leq0,\tag{21}$$ where the negative sign before z in MCP regularization is used since h operates on non-negative entries and L contains negative weights corresponding to off-diagonal entries. Further the product H(−z) eliminates the positive diagonal entries of L. Note that the first term in the objective function of (21) is convex in z and the second term is concave in z. Hence we adopt a majorization-minimization iterative approach, similar to (Ying et al., 2020) to solve (21). In each iteration l of this approach, we obtain a majorization of the objective function in (21) using the linearization of concave function h at z l as: $$\|z-\mathrm{vech}(L)^{q+1}\|^{2}+\sum_{i=1}^{\frac{p(p+1)}{2}}(h^{\prime}(H(-z^{l})))_{i}(H(-z))_{i}.$$ $$(22)$$ Then we solve a minimization step, which when combined with the linearization in (22) leads to the following quadratic program: $$z^{l+1}=\operatorname*{arg\,min}_{z\in\mathbb{R}^{\frac{l+1}{2}}}\|z-\operatorname{wech}(L)^{q+1}\|^{2}+\sum_{i=1}^{2^{l+1}}\left(h^{\prime}(H(-z^{l}))\right)_{i}(H(-z))_{i},\text{such that}\tilde{A}z=\tilde{0},Bz\leq0.\tag{23}$$ We solve the problem (23) iteratively till convergence to obtain a sequence of z l's. The procedure for MCP based sparsity-inducing regularization has been summarized in Algorithm 1. The solution obtained from Algorithm 1 is thus a valid Laplacian matrix of a graph, and satisfies trace constraint as well as MCP based sparsity-inducing regularization property captured by vech(L) . MCP. Note that other nonconvex regularizers such as smoothly clipped absolute deviation (SCAD) (Fan & Li, 2001; Ying et al., 2020) can also be used instead of MCP in our approach which can provide comparable sparse graph structures. Using the learned u*, L, D*, the graph-induced operator-valued kernel is used to predict the output function corresponding to the p input functions xˆ = (ˆx1, xˆ2*, . . . ,* xˆp) based on (114) given by $$F(\hat{x})=\sum_{i=1}^{n}K(x^{(i)},\hat{x})u_{i},{\mathrm{~where~}}u_{i}\in\mathcal{Y}.$$ $$(24)$$ Algorithm 1 MCP Regularization of L Input: vech(L) Output: vech(Lmcp) Initialize z 0 = vech(L), H l = 0 while stopping criteria for z l not satisfied do Find h 0(H(−z l)) Find z l+1 as the solution of (23) l ← l + 1 end while vech(Lmcp) = z l Algorithm 2 summarizes the entire alternating minimization procedure with sparsity-inducing regularization. For our experiments, vech(L) is initialized using a Laplacian matrix L with Li,j = −mtrace/(p(p − 1)), for i 6= j ∈ [p] and Li,i = mtrace/p, for i ∈ [p], which satisfies the mtrace condition in (17). The initial vech(D) is considered corresponding to D = Ip, identity matrix of order p. Note that the objective function J in problem (19) is not jointly convex with respect to u, vech(L) and vech(D). To prove the convergence of alternating minimization of a non-convex function J with respect to a heterogeneous collection of three variables where ui's ∈ Y and vech(L), vech(D) ∈ R p(p+1) 2 requires development of fundamental results which is out of scope of our current work, and we aim to take this up in future. However, we observed empirical convergence of the proposed alternating minimization framework in our experiments. ## 5.1.5 Extension Of Learning Graph Structure For Multi-Dimensional Outputs In many scenarios, a functional regression problem involves multi-dimensional input and output functions which requires learning multiple functions corresponding to each dimension in the output space. Consider X = (L 2([0, 1]))p, Y = L 2([0, 1]) with {(x (i), y (i))} n i=1 as the training data, where x (i) ∈ X r, y (i) ∈ Ys, for i ∈ [n]. Notice that the input space is now X r and output space is Y s, which are both multi-dimensional. In this case, we have x (i) = (x (i) 1 , x (i) 2 , . . . , x (i) r ) where x (i) j = (x (i) j1 , x (i) j2 , . . . , x (i) jp ) ∈ X , ∀j ∈ [r] and y (i) = (y (i) 1 , y (i) 2 , . . . , y (i) s ) where y (i) j ∈ Y, ∀j ∈ [s]. Consider a setting where the input space X ris mapped to output space Y s by learning distinct maps of the from F i: X r → Y, for i ∈ [s]. Note that to find these maps F i, an extension to the framework developed in Sections A.1 and 5.1 can be used, as long as the proposed graph-induced OVK framework is adapted to handle the multi-dimensional inputs x (i). For a motivating example, consider the data of movement of players for a fixed time interval in a basketball game comprising the x and y coordinates of the playing area belonging to the input space X 2. Similarly, movement of the ball for the fixed time interval is characterized by x and y coordinates of the court which belong to the output space Y 2. The corresponding regression problem involves learning F = [F 1, F2] >, where F 1: X 2*7→ Y* and F 2: X 2*7→ Y*. For the multi-dimensional setting, we propose the following extension of the scalar-valued kernel based on L and D as $$k_{1}\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)};G\right)=e^{-\sum_{k=1}^{r}\left[\gamma_{k}\left(x_{k}^{(i)}-x_{k}^{(j)}\right)^{\top}(L+D)\left(x_{k}^{(i)}-x_{k}^{(j)}\right)\right]},$$ i, (25) where γk > 0, ∀k ∈ [r] and k1 maps pair of elements from the input space X rto a real number. Now, k1 x (i), x (j); Gcan be appropriately used to create a graph-based operator-valued kernel in higher dimensions using the construction of a suitable OVK as follows: $$\begin{bmatrix}K^{1}\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)}\right)y_{1}(t)\\ \vdots\\ K^{s}\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)}\right)y_{s}(t)\end{bmatrix}=k_{1}\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)};G\right)\begin{bmatrix}\int_{a}^{b}k_{2}^{1}(t^{\prime},t)y_{1}(t^{\prime})d t^{\prime}\\ \vdots\\ \int_{a}^{b}k_{2}^{s}(t^{\prime},t)y_{s}(t^{\prime})d t^{\prime}\end{bmatrix},$$ , (26) $$(25)$$ $$(26)$$ Algorithm 2 Alternating Minimization of J Input: {(x (i), y(i))} n i=1, x(i) = (x (i) 1 , x (i) 2 , . . . , x (i) p ) ∈ X , y(i) ∈ Y Output: u, vech(L), vech(D) Initialize vech(L) 0 ∈ L, vech(D) 0 ∈ D y = [y (1), y(2)*, . . . , y*(n)] > k = 0 while True do Compute k1(x (i), x(j); G) = e −γRij (vech(L) k+vech(D) k), ∀*i, j* ∈ [n] using (137) in Appendix A.6 Solve for u in (K + λI)u = y to obtain u k using OpMINRES if stopping criterion for u kis satisfied **then** break from the outermost while loop //exit based on convergence of u iterates end q = 0 while stopping criterion for vech(L) q not satisfied do //stopping criterion is based on convergence of vech(L) iterates vech(L) q+1 = vech(L) q − ηL∇vech(L)Ju,D(vech(L) q) vech(L)mcp obtained based on MCP regularization using vech(L) q+1 as input to Algorithm 1 vech(L) q+1 ← vech(L)mcp q ← q + 1 end while vech(L) k ← vech(L) q m = 0 while stopping criterion for vech(D) m not satisfied do //stopping criterion is based on convergence of vech(D) iterates vech(D) m = vech(D) k vech(D) m+1 = ΠD(vech(D) m − ηD∇vech(D)Ju,L(vech(D) m)) using (14) m ← m + 1 end while vech(D) k ← vech(D) m k ← k + 1 end while u = u k, vech(L) = vech(L) k, vech(D) = vech(D) k where k i 2 : R × R → R, for i ∈ [s] are scalar-valued kernels on R 2. In order to use the output functions y (i) = (y (i) 1 , y (i) 2 , . . . , y (i) s ) for learning s maps from input space X rto the output space Y, the problem in (3) is extended as follows: $$\widetilde{F},\widetilde{L},\widetilde{D}=\operatorname*{arg\,min}_{F=[F^{1},F^{2},\ldots,F^{s}]^{\top}\in\mathcal{H}_{K},L\in\mathcal{L},D\in\mathcal{D}}\sum_{i=1}^{s}\left[\sum_{i=1}^{n}\|y_{i}^{(i)}-F^{i}(\mathbf{x}^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F^{l}\|_{\mathcal{H}_{K}}^{2}\right]$$ $$+\sum_{k=1}^{r}\rho_{k}x_{k}^{(i)\top}Lx_{k}^{(i)}+\rho_{D}\|D\|_{F}^{2},$$ $$(27)$$ where λ, ρk, ρD are positive reals, HK = H1K × H2K × · · · × HsK, Fl: X r → Y for l ∈ [s]. Consider the objective function of (27) as J(*F, L, D*). Similar to Sections 5.1.1-5.1.4, on applying an alternating minimization framework, the steps involved in *L, D* minimization remain the same. In order to solve the minimization problem in (27) (for fixed *L, D*) an extension of the representer theorem A.2 is required which follows based on the construction of graph-induced operator-valued kernel in (26). Theorem 5.1 (**Extended representer theorem**). Let K be an operator-valued kernel as defined in (26) and HK = H1K ×H2K *×· · ·×H*sK be its corresponding function-valued reproducing kernel Hilbert space based on kernels k1, k1 2 , . . . , ks 2 . The solution Feλ ∈ HK of the regularized optimization problem: $${\widetilde{F}}_{\lambda}=\operatorname*{arg\,min}_{F=[F^{1},F^{2},...,F^{s}]^{\top}\in{\mathcal{H}}_{K}}\sum_{l=1}^{s}\left(\sum_{i=1}^{n}\|y_{l}^{(i)}-F^{l}(\mathbf{x}^{(i)})\|_{{\mathcal{Y}}}^{2}+\lambda\|F^{l}\|_{{\mathcal{H}}_{K}^{l}}^{2}\right),$$ where $\lambda>0,F=[F^{1},F^{2},\ldots,F^{s}]^{\top}\in\mathcal{H}_{K}=\mathcal{H}_{K}^{1}\times\mathcal{H}_{K}^{2}$ > ∈ HK = H1K × H2K *× · · · × H*sK, has the following form $$\widetilde{F}_{\lambda}(.)=\begin{bmatrix}\widetilde{F}_{\lambda}^{1}(.)\\ \vdots\\ \widetilde{F}_{\lambda}^{n}(.)\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^{n}K^{1}(\mathbf{x}^{(i)},.)u_{i}^{1}\\ \vdots\\ \sum_{i=1}^{n}K^{*}(\mathbf{x}^{(i)},.)u_{i}^{*}\end{bmatrix},\text{where}u_{i}^{1},u_{i}^{2},\ldots,u_{i}^{*}\in\mathcal{Y}.\tag{28}$$ Proof. The proof follows as a consequence of the representer theorem proof in Appendix A.4. In order to solve the minimization problem (27), we use the representer theorem and reproducibility property of the OVKs Kl, for l ∈ [s]. The optimization problem in (27) is solved by using the alternating minimization of the objective function with respect to F = [F 1, F2*, . . . , F*s] > ∈ HK, L ∈ L and D ∈ D. For a constant L and D, we use the representer theorem (Theorem 5.1) to transform the objective function in terms of F 1 ∈ H1K, . . . , Fs ∈ HsK to functions u 1 i , u2 i , . . . , us i ∈ Y, i ∈ [n], respectively. The objective function J is defined as the following (see Appendix A.2): J(F 1, F2, . . . , Fs, L, D) = Xs l=1 Xn i=1 ky (i) l − F l(x (i))k 2 Y + λkF lk 2 HlK ! + Xr k=1 ρk Xn i=1 x (i) k >Lx(i) k ! + ρDkDk 2 F =⇒ J(u 1, u 2, . . . , u s, L, D) = Xs l=1 Xn i=1 y (i) l − Xn j=1 Kl(x (i), x (j))u l j 2 Y + λ Xn i,j=1 hKl(x (i), x (j))u l i , ulj iY (29) + Xr k=1 ρk Xn i=1 x (i) k >Lx(i) k ! + ρDkDk 2 F . $$(30)$$ For solving the multi-dimensional functional regression problem, the alternating minimization framework discussed in Section 5.1 is extended with the major difference in the step concerning minimization with respect to F = [F 1, F2*, . . . , F*s] > (or u 1, u 2*, . . . ,* u s) for fixed L and D. The multi-dimensionality leads to solving the following system of linear operator equations: $$\left[({\bf K}^{1}+\lambda I){\bf u}^{1}\quad\ldots\quad({\bf K}^{s}+\lambda I){\bf u}^{s}\right]=\left[\Theta_{1}\quad\ldots\quad\Theta_{s}\right],\tag{1}$$ where Kli,ju = Kl(x (i), x (j))u = k1(x (i), x (j); G) ¯k k 2 (u), ∀u ∈ Y, Θl = [y (1) l, y (2) l*, . . . , y* (n) l] > with ¯k l 2 = R 1 0 e −γ l op|t 0−t|u(t 0)dt0, γlop > 0, ∀l ∈ [s] and t 0, t ∈ [0, 1]. For all Kl, l ∈ [s], the scalar-valued kernel k1 (given in (25)) used remains the same and is built on common L and D. Therefore, s possibly different graph-induced OVKs are obtained by using exponential kernels on R 2 with γ l op, for l ∈ [s]. In order to solve for u 1, u 2*, . . . ,* u sin (30), we use the OpMINRES algorithm discussed in Section 5.1.1 to solve the systems (Kl + λI)u = yl, for l ∈ [s]. OpMINRES algorithm solves the s systems in parallel with a stopping criteria which combines s relative residuals. Thus, for alternating minimization, the steps discussed can be summarized as: 1. **Minimization with respect to $F=[F^{1},F^{2},\ldots,F^{s}]^{\top}\in\mathcal{H}_{K}$** (or $\mathbf{u}^{1},\mathbf{u}^{2},\ldots,\mathbf{u}^{s}\in\mathcal{Y}^{n}$): Solving for $\mathbf{u}^{1},\mathbf{u}^{2},\ldots,\mathbf{u}^{s}$ in $$\left[(\mathbf{K}^{1}+\lambda I)\mathbf{u}^{1}\quad\ldots\quad(\mathbf{K}^{s}+\lambda I)\mathbf{u}^{s}\right]=\left[\Theta_{1}\quad\ldots\quad\Theta_{s}\right].$$ 2. **Minimization with respect to** vech(L): Projected gradient descent of J with respect to vech(L) in L with sparsity inducing regularization. 3. **Minimization with respect to** vech(D): Projected gradient descent of J with respect to vech(D) in D. Using the learned graph-induced operator-valued kernel the output function is used to predict for input functions ˆx = (ˆx1*, . . . ,* xˆr), where xˆj ∈ X , for j ∈ [r] as $$F({\hat{\mathbf{x}}})={\begin{bmatrix}\sum_{i=1}^{n}K^{1}(\mathbf{x}^{(i)},{\hat{\mathbf{x}}})u_{i}^{1}\\ \vdots\\ \sum_{i=1}^{n}K^{s}(\mathbf{x}^{(i)},{\hat{\mathbf{x}}})u_{i}^{s}\end{bmatrix}},{\mathrm{~where~}}u_{i}^{1},\ldots,u_{i}^{s}\in{\mathcal{Y}},{\mathrm{~for~}}i\in[s].$$ ## 5.1.6 Sample-Based Approximation For Functional Regression Problem The kernel-based alternating minimization framework proposed earlier in this section helps to learn appropriate u, L, and D for the prediction of functional output using (24). For a setting where the number of training samples is large, the training can become computationally expensive as OpMINRES iteration scales in O(n 3), where n is the number of training samples. This issue of scalability is a well-known problem in kernel methods, which becomes more pronounced in an OVK-based framework. There are many popular methods for handling scalability issues in kernel methods (Williams & Seeger, 2000; Meanti et al., 2020; Bach & Jordan, 2005). In most cases, the approaches handling scalability issues for kernel methods are incorporated into the learning problem for approximating the kernel Gram matrices arising in large datasets, by using low-rank Cholesky decomposition (Bach & Jordan, 2005), Nyström approximation (Williams & Seeger, 2000) and GPU-based acceleration and parallelization (Meanti et al., 2020). For vector-valued regression problems, random Fourier features have been used for building OVKs (Brault et al., 2016; Brault, 2017) which cannot be directly extended to functional regression problems due to the following reasons. Extension of the random Fourier features to a functional setting requires developing a new theoretical framework for a functional version of operator-valued Bochner's theorem. Moreover, spectral decomposition of OVKs for functional data is beyond the scope of our current work, hence we leave it for future work. In our approach, we aim for a sample-based approximation heuristic algorithm which enables us to perform a greedy sample selection procedure followed by the training with only those selected samples. The motivation of the sample-based approximation lies in characterizing the action of the considered OVK K on i-th sample (x (i), y(i)) *∈ X × Y*. Recall the learning problem discussed in Section 5.1 given by $$\widetilde{F},\widetilde{L},\widetilde{D}=\operatorname*{arg\,min}_{F\in\mathcal{H}_{K},L\in\mathcal{E},D\in\mathcal{D}}\sum_{i=1}^{n}\|y^{(i)}-F(x^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F\|_{\mathcal{H}_{K}}^{2}+\rho_{L}\sum_{i=1}^{n}x^{(i)}{}^{\top}L x^{(i)}+\rho_{D}\|D\|_{F}^{2},$$ which requires solving for u in (K + λI)u = y in the first step of alternating minimization with respect to F (or u) for fixed L and D. We consider the notations yi and Ki respectively as equivalent to y (i) and K(x (i), ·) in this section for simplicity. Though approximating ui ∈ Y in F(.) = Pn i=1 Kiui corresponding to sample (x (i), y(i)) may provide a better option for performing a sample-based approximation, we do not have the luxury to perform the inversion required in (K + λI)u = y. One way to assess the importance of a training sample (x (i), y(i)) is to investigate the action of operator K with x (i) on the output function y (i). Towards that we build K¯i P : *X → L*(Y) by choosing samples which minimize the squared norm of the difference n i=1 kKiyi − K¯iyik 2 HK , defined over the RKHS HK. A working set of samples is constructed iteratively from the training data to formulate K¯iyi as a linear combination of Kij yij 's, where ij 's correspond to the indices of a working set of samples in training data. Inspired by (Smola & Schölkopf, 2000), we propose the following approach to construct K¯i's iteratively. Consider indices in I = {i1, i2, . . . , i|I|} ⊂ [n] as the set of indices for the working set IW = {(x (i), y(i)) : i ∈ I} of samples from the training data. The aim is to approximate the action of Ki on yi using samples in IW as K¯iyi = X |I| j=1 Ti,jKij yij , for i ∈ [n], (31) =⇒ K¯1y1 K¯2y2 ... K¯nyn = T1,1 T1,2 . . . T1,|I| T2,1 T2,2 . . . T2,|I| ............ Tn,1 Tn,2 . . . Tn,|I| Ki1 yi1 Ki2 yi2 ... Ki|I| yi|I| =: T Ki1 yi1 Ki2 yi2 ... Ki|I| yi|I| , (32) where T ∈ R n×|I|. The valuesKiyi − K¯iyi HK are treated as residuals of the approximation for the i-th training sample. Approximations K¯i corresponding to each sample (x (i), y(i)), for i ∈ [n], created by the working set IW of samples in (31) are bounded linear operators on the output space Y. For every sample (x (i), y(i)), where i ∈ [n], the action of operator Ki on yiis approximated by using scalars Ti,1, Ti,2*, . . . , T*i,|I| with Kij yij ∈ HK, for j ∈ [|I|]. Minimization of Pn i=1 kKiyi − K¯iyik 2 HK ensures that the working set of samples can characterize closely the impact of Kiyi. The approximation of Ki, ∀i ∈ [n] using the working set of samples is described next. Approximation of Operators using Samples: Initially, suppose I = ∅ and let i1 ∈ [n] be the best candidate index. Then the index set I is updated as I = {i1}, and the working set IW contains only a single sample corresponding to the index i1 ∈ I. Now the optimization problem is to determine T1,1, T2,1*, . . . , T*n,1 in K¯iyi = Ti,1Ki1 yi1 , ∀i ∈ [n] and is given by $$\operatorname*{arg\,min}_{T_{1,1},T_{2,1},\dots,T_{n,4}}\sum_{i=1}^{n}\left\|K_{i}y_{i}-{\bar{K}}_{i}y_{i}\right\|_{\mathcal{H}_{K}}^{2}=\sum_{i=1}^{n}\left\|K_{i}y_{i}-T_{i,1}K_{i_{1}}y_{i_{1}}\right\|_{\mathcal{H}_{K}}^{2}.$$ The solution of (33) is obtained as $$T_{i,1}=\frac{\langle K_{i}y_{i},K_{i_{1}}y_{i_{1}}\rangle_{\mathcal{H}_{K}}}{\langle K_{i_{1}}y_{i_{1}},K_{i_{1}}y_{i_{1}}\rangle_{\mathcal{H}_{K}}}$$ $$=\frac{\langle K_{i_{1}i_{1}}y_{i},y_{i_{1}}\rangle_{\mathcal{Y}}}{\langle K_{i_{1}i_{1}}y_{i_{1}},y_{i_{1}}\rangle_{\mathcal{Y}}},\text{for}i\in[n],$$ $$(33)$$ (34) $\binom{35}{4}$ (35) . $$(36)$$ where Kii1 = K(x (i), x(i1)) and (34) is obtained by differentiating the objective function in (33) with respect to Ti,1 and equating it to 0, to obtain the minima for i ∈ [n]. The reproducing property of OVK is used to obtain (35). This construction of Ti,1 yields $$\langle K_{i}y_{i}-\bar{K}_{i}y_{i},K_{i_{1}}y_{i_{1}}\rangle_{\mathcal{H}_{K}}=0,\,\forall i\in[n].\tag{1}$$ The equality in (36) denotes that the space span{(Kiyi − K¯iyi), ∀i ∈ [n]} is orthogonal to Ki1 yi1 (denoted by span{(Kiyi − K¯iyi)), ∀i ∈ [n]} ⊥ Ki1 yi1 ). Note that this orthogonality property holds for index i1. We shall show later that a similar property indeed holds for all samples which will be added to the working set. In general, the number of samples for the functional regression problem can be large and searching for the best candidates in the complete training set may become costly and defeat the cause for developing a sample based approximation. Hence we consider a random subset of R samples for an efficient approximation, where R < n and |I| < R. For the iterative process to build the working set of indices I and samples IW , let us assume that I old be the index set with |I old| = k (say) and T old ∈ R n×k be the matrix formed based on (31) for obtaining K¯ old iyi =Pk j=1 Ti,jKij yij , ∀i ∈ [n]. Suppose ik+1 be the index of next best sample to be added to get I new = I old ∪ {ik+1} which provides K¯ new i = K¯ old iyi + Ti,k+1Kik+1 yik+1 . The minimization problem as in (33) is written as $$\arg\min_{T_{1,k+1},T_{2,k+1},\ldots,T_{n,k+1}}\sum_{i=1}^{n}\left\|K_{i}y_{i}-\bar{K}_{i}^{new}y_{i}\right\|_{H_{K}}^{2}=\sum_{i=1}^{n}\left\|K_{i}y_{i}-\bar{K}_{i}^{old}y_{i}-T_{i,k+1}K_{i_{k+1}}y_{i_{k+1}}\right\|_{H_{K}}^{2}\,\tag{37}$$ where the solution of (37) results in $$T_{i,k+1}=\frac{\langle K_{i}y_{i}-\bar{K}_{i}^{old}y_{i},K_{i_{k+1}}y_{i_{k+1}}\rangle\,\mathcal{H}_{K}}{\langle K_{i_{k+1}}y_{i_{k+1}},K_{i_{k+1}}y_{i_{k+1}}\rangle\,\mathcal{H}_{K}}$$ $$=\frac{\langle K_{i_{k+1}}y_{i},y_{i_{k+1}}\rangle y-\sum_{j=1}^{k}T_{i,j}\langle K_{i_{j}i_{k+1}}y_{i_{j}},y_{i_{k+1}}\rangle y}{\langle K_{i_{k+1}i_{k+1}}y_{i_{k+1}},y_{i_{k+1}}\rangle y},\text{for}i\in[n].$$ (38) $\binom{39}{2}$ (39) . Ti,k+1 in (38) is obtained similar to the procedure for (34) by differentiating the objective function in (37) with respect to Ti,k+1's and equating it to 0, obtaining the minima for i ∈ [n]. Equation (39) follows from properties of inner-product and reproducing property of OVK. The iterative construction ensures that the following property holds: $$\langle K_{i}y_{i}-\bar{K}_{i}^{new}y_{i},K_{i},y_{ij}\rangle_{\mathcal{H}_{K}}=0,\ \forall i\in[n],\forall j\in[k+1]$$ $$\Longrightarrow\text{span}\{K_{i}y_{i}-\bar{K}_{i}^{new}y_{i}|i\in[n]\}\perp\text{span}\{K_{ij}y_{ij}|j\in[k+1]\}.$$ Similar to (36), the iterative procedure ensures that the orthogonality property is extended to (41) in HK, which will be used in the iterative selection process discussed below. For each iteration, it remains to find the best sample from the R randomly selected candidate set of samples, which is to be included in IW . We discuss this next. Selecting the Best Samples Iteratively: In order to find the training sample which will minimize the residuals most effectively, let C be the candidate set of indices for training samples given by C ⊆ [n] \ I. Suppose for a particular iteration, let I = {i1, i2*, . . . , i*k} and let the randomly selected candidate set of indices be C = {c1, c2*, . . . , c*M}. For each cr ∈ C, we calculate the improvement in the sum of residuals which can result in including cr in I. Assume K¯ old iyi be given by $$\bar{K}_{i}^{old}y_{i}=\sum_{j=1}^{k}T_{i,j}K_{i j}y_{i j},\ \ \mathrm{for}\ i\in[n],\tag{1}$$ $$\left(42\right)$$ $$(43)$$ from which we obtain K¯ new iyi as follows: $$\bar{K}_{i}^{new}y_{i}=\bar{K}_{i}^{old}y_{i}+T_{i,r}K_{c_{r}}y_{c_{r}},\,\text{for}c_{r}\in C.\tag{1}$$ In order to select the best sample index from the candidate set C, we need to find the index cr in C which best approximates Pn i=1 kKiyi − K¯ new iyik, when I new = I old ∪ {cr} is considered as the index set for the new working set of samples. Let the improvement in the sum of residuals by adding cr in I old be denoted by Improvement(cr), given by $$\text{Improvement}(c_{r})=\sum_{i=1}^{n}\|K_{i}y_{i}-\bar{K}_{i}^{old}y_{i}\|_{H_{K}}^{2}-\sum_{i=1}^{n}\|K_{i}y_{i}-\bar{K}_{i}^{new}y_{i}\|_{H_{K}}^{2}$$ $$=\frac{\sum_{i=1}^{n}\left[\langle K_{i}y_{i}-\bar{K}_{i}^{old}y_{i},K_{c}y_{c}\rangle\eta_{K}\right]^{2}}{\langle K_{c}y_{c},K_{c}y_{c}\rangle\eta_{K}}$$ $$=\frac{\sum_{i=1}^{n}\left[\langle K_{ic}y_{i},y_{c}\rangle y-\sum_{j=1}^{n}T_{i,j}\langle K_{i_{ic}}y_{j},y_{c}\rangle y\right]^{2}}{\langle K_{cccc}y_{c},y_{c},y_{c}\rangle y}.$$ (44) $$\begin{array}{l}\left(45\right)\end{array}$$ = $$\begin{array}{l}\left(46\right)\end{array}$$ = $$\begin{array}{l}\left(46\right)\end{array}$$ . Equation (44) quantifies the reduction in the residual value by the addition of cr to working set I of indices. Equation (45) is obtained from (44) by using the properties of inner product and K¯ old i, for i ∈ [n] and the orthogonality property (41). Equation (46) follows from the reproducing property of operator-valued kernel K. The sample which achieves the maximum improvement is considered to be the best sample to be added to the working set. In terms of indices, this selection becomes Best $\text{Index}_k=\underset{c_r\in C}{\arg\max}\ \text{Improvement}(c_r)$. Algorithm 3 Sample-based Approximation Input: {(x (i), y(i))} n i=1, x(i) = (x (i) 1 , x (i) 2 , . . . , x (i) p ) ∈ X , y(i) ∈ Y Output: I, the index set of working set of samples. Initialize vech(L) 0, vech(D) 0 yi ← y (i), Ki ← K(x (i), .), Kij ← K(x (i), x(j)), ∀*i, j* ∈ [n] Initialize k = 0, I = ∅, T = 0 while stopping criterion based on residual (49) is not satisfied do Construct C by drawing random subset of M elements from [n] \ I, C = {c1, c2, . . . , cM} Compute Ti,k+1 = hKiik+1 yi,yik+1 iY −Pk j=1 Ti,j hKij ik+1 yij ,yik+1 iY hKik+1ik+1 yik+1 ,yik+1 iY, for i ∈ [n] Improvement(cm) = Pn i=1-hKicmyi,ycmiY −Pk j=1 Ti,j hKijcmyij ,ycmiY2 hKcmcmycm,ycmiY, for m ∈ [M] Best Indexk = arg maxcm∈C Improvement(cm) I = I ∪ {Best Indexk} k ← k + 1 end while Stopping Criterion: As is the case for any iterative algorithm, an appropriate stopping criterion is required for ending the sample selection process which may be based on the number of iterations or accuracy. For using accuracy-based stopping criterion, residual is calculated for the k-th iteration as Residualk = Xn i=1 (Ki − K¯i)yi 2 HK(47) = Xn i=1 l=1 TijTilhKij yij , Kil yil iHK hKiyi, KiyiiHK − 2 X k j=1 Tij hKiyi, Kij yij iHK + X k j=1 X k = Xn i=1 hKiiyi, yiiY − 2 Xn i=1 X k j=1 Ti,j hKiij yi, yij iY + Xn i=1 X k j=1 X k l=1 Ti,jTi,lhKij il yij , yil iY . (49) $$(47)$$ $$\begin{array}{c}{{(48)}}\\ {{}}\end{array}$$ $$\begin{array}{c}{{(49)}}\\ {{}}\end{array}$$ (48) Equation (48) follows from the properties of inner product of RKHS and (49) is obtained using the reproducibility property of K. As the first part of the summation in (49) remains constant for each iteration, a threshold for residual value can be used to determine convergence of the last two terms. For a very large set of training samples, a budget on the number of samples to consider can also be an effective tool for approximation. As the aim is to learn a kernel encapsulating the graphical structure between the input variables, the sample approximation can still be costly. An effective strategy is to start with an initial L representing a fully connected graph and an initial D which is the identity matrix. After the sample selection process, the final working set IW of samples indexed by I are used throughout in the alternating minimization framework. In our implementations, we used the residual calculation using a validation set instead of the training set which provided a faster convergence and better generalization. Algorithm 3 illustrates the sample-based approximation for functional regression problem. ## 6 Generalization Analysis Let X = (L 2([0, 1]))p be the input space and Y = L 2([0, 1]) be the output space. Consider the training samples given as z = {(x (i), y(i)) : i ∈ [m]*} ⊆ X × Y* =: Z where zi:= (x (i), y(i)), ∀i ∈ [m] are drawn i.i.d. from a probability distribution µ. The empirical error of a learned function-valued function F on the data z is given as the following: $$\ell_{z}(F)=\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(y^{(i)},F(x^{(i)})),$$ (i))), (50) $\left(50\right)^3$ where L : *Y ×Y →* R+ is a loss function defined on the output space Y. A typical learning problem involves estimating a function-valued F which is the solution of the following problem: $$\operatorname*{min}_{F\in{\mathcal{H}}_{K}}\,\mathcal{E}_{\lambda}(F,K),$$ $$(51)$$ $$\left(52\right)$$ Eλ(*F, K*), (51) where Eλ(*F, K*) := Ez(F) + λkFk 2 HK . In our problem K is parameterized by *L, D, γ, γ*op and belongs to a class of OVKs K and hence in this work, for the given data, we aim to learn the following: $$(K_{z},F_{z}):=\arg\operatorname*{min}\{{\mathcal{E}}_{\lambda}(F,K):K\in{\mathcal{K}},F\in{\mathcal{H}}_{K}\}.$$ (Kz, Fz) := arg min{Eλ(*F, K*) : K ∈ K, F ∈ HK}. (52) The problem (52) can be reformulated as a regularized empirical error minimization problem. Our focus is on the problem of bounding the generalization error of Fz, namely E (Fz) − E (F ∗), where E (F) is the expected error of F given by E (F) := E[L (*y, F*(x))], the expectation E is taken over the probability measure µ, and F ∗is the target function defined as $$F^{*}=\arg\operatorname*{min}{\mathcal{E}}(F),$$ $$(53)$$ $$\chi\to\mathcal{Y}.$$ $$(54)$$ ∗ = arg min E (F), (53) where the minimum is taken over all measurable functions F : *X → Y*. ## 6.1 Error Bounds In this section, we introduce quantities which will be useful for our generalization bound analysis. We use the approach in (Micchelli et al., 2016; Ying & Zhou, 2007; Wu & Zhou, 2006) and introduce sample error as the following: $$S_{z}(m,\lambda,F)=[\mathscr{E}(F_{z})-\mathscr{E}_{z}(F_{z})]+[\mathscr{E}_{z}(F)-\mathscr{E}(F)].\tag{1}$$ The sample error Sz(*m, λ, F*) in (54) consists of two terms [E (Fz) − Ez(Fz)] and [Ez(F) − E (F)]. The first term [E (Fz) − Ez(Fz)] is the difference between the expected value of L (*y, F*z(x)) with respect to µ and its empirical mean over a fixed random data set z ⊆ Z. To bound this term we use the notion of Rademacher averages which enables us to control and analyze the random variables zi associated with the data z ⊆ Z. Similarly, the second term [Ez(F) − E (F)] denotes the difference between the empirical mean of L (*y, F*(x)) for a fixed z ⊆ Z and its expectation with respect to µ. We follow the approach in (Micchelli et al., 2016) to bound both the terms. In addition to the sample error, we introduce another quantity known as the regularization error R(F) for a function F ∈ HK defined as $${\mathcal{R}}(F)={\mathcal{E}}(F)-{\mathcal{E}}(F^{*})+\lambda\|F\|_{{\mathcal{H}}_{K}}^{2},$$ $$(55)$$ $\text{ersion of problem(53)is given by}$ . $$(56)$$ , (55) where F ∗is the target function. A regularized version of problem (53) is given by $(K^{*}_{\lambda},F^{*}_{\lambda}):=\operatorname*{arg\,min}_{K\in\mathcal{K},F\in\mathcal{H}_{K}}\{\mathscr{E}(F)+\lambda\|F\|_{\mathcal{H}_{K}}^{2}:K\in\mathcal{K},F\in\mathcal{H}_{K}\}.$ : K ∈ K, F ∈ HK}. (56) The regularization error of F ∗ $\lambda$ is denoted by $\mathcal{R}^{*}(\lambda)$ as follows: $\mathcal{R}^{*}(\lambda)=\min\limits_{K\in\mathcal{K}}\min\limits_{F\in\mathcal{H}_{K}}\left[\mathcal{E}(F)-\mathcal{E}(F^{*})+\lambda\|F\|_{\mathcal{H}_{K}}^{2}\right].$ is denoted by R∗(λ) as follows: In order to determine a generalization bound, we use the following result which enables us to relate generalization error using sample error and regularization error. Proposition 6.1. For every K ∈ K, F ∈ HK, the following inequality holds $$\mathcal{E}(F_{z})-\mathcal{E}(F^{*})\leq S_{z}(m,\lambda,F)+\mathcal{R}(F).$$ ∗) ≤ Sz(*m, λ, F*) + R(F). (58) Proof. In order to prove the inequality, we start with the generalization error, $$\mathcal{E}(F_{z})-\mathcal{E}(F^{*})=S_{z}(m,\lambda,F)+\mathcal{E}_{z}(F_{z})-\mathcal{E}_{z}(F)+\mathcal{E}(F)-\mathcal{E}(F^{*})+\lambda\|F\|_{\mathcal{H}_{K}}^{2}-\lambda\|F\|_{\mathcal{H}_{K}}^{2}$$ $$=S_{z}(m,\lambda,F)+\mathcal{R}(F)+\mathcal{E}_{z}(F_{z})-\mathcal{E}_{z}(F)-\lambda\|F\|_{\mathcal{H}_{K}}^{2}$$ $$\leq S_{z}(m,\lambda,F)+\mathcal{R}(F).$$ HK(59) $$\left(57\right)$$ $$(58)$$ (59) $\begin{array}{l}\left(60\right)\hfill\\ \left(61\right)\hfill\end{array}$ Equation (59) is obtained by adding and subtracting Sz(*m, λ, F*) and λkFk 2 HK . Equation (60) follows from the definition of R(F) in (55). The inequality in (61) is obtained using the facts λkFk 2 HK ≥ 0 and Ez(Fz) − Ez(F) ≤ 0, as Fz = arg minF ∈HK Ez(F). Now, we intend to bound the term E (Fz) − Ez(Fz) in (54). Towards this we define the following notion of Rademacher average of a suitable class of functions. Definition 6.1 (**Rademacher Average**). Let F denote a class of functions from X to Y. Let µX denote the marginal distribution over the input space X . Consider a m-tuple of samples from input space as (x (1), x(2), . . . , x(m)) ∈ X m, where x (i) ∼ µX , i ∈ [m]. Then the Rademacher average of class F is defined as $$\mathscr{R}_{m;\mathcal{Y}}(\mathcal{F})=\mathbb{E}\left[\sup_{F\in\mathcal{F}}\frac{1}{m}\left\|\sum_{i=1}^{m}\varepsilon_{i}F(x^{(i)})\right\|_{\mathcal{Y}}\right],\tag{1}$$ $$(62)$$ where εi's are Rademacher random variables uniformly distributed over {+1, −1} and E represents the expectation over both i.i.d. Rademacher variables εi's and i.i.d. variables x (i)'s based on µX . Recall that for a class of functions F from an input space X to R, and a sample (x1, x2*, . . . , x*m) ∈ Xm, the Radamacher average of F is defined as $$\mathscr{R}_{m;\mathbb{R}}(\mathsf{F})=\mathbb{E}\left[\sup_{F\in\mathsf{F}}\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}F(x_{i})\right].\tag{1}$$ $$(63)$$ Comparing this expression with that in Definition 6.1, we note that a suitable norm is used in Definition 6.1. Thus our definition of a suitable Radamacher average accommodates the nature of the function class F which contains function-valued functions, unlike F which is composed of simple real-valued functions. In order to proceed with the upcoming proofs, we require some assumptions which we state next. Assume ∃β > 0 such that kykY ≤ β, ∀y ∈ Y, which provides a uniform upper bound on the norm of the outputs. In addition, assume that the class K is uniformly bounded, that is, $\kappa=\sup\limits_{K\in\mathcal{K}}\sup\limits_{x\in\mathcal{X}}\sup\limits_{y\in\mathcal{Y}}\|K(x,.)y\|_{\mathcal{H}_{K}}=\sup\limits_{K\in\mathcal{K}}\sup\limits_{x\in\mathcal{X}}\sup\limits_{y\in\mathcal{Y}}\sqrt{\langle K(x,x)y,y\rangle_{\mathcal{Y}}}<\infty$. This assumption holds for OVKs which satisfy the trace class assumption (Kadri et al., 2016). For any K ∈ K and F ∈ HK, we define the following norm $$\|F\|_{\infty}=\operatorname*{max}_{x\in X}\operatorname*{max}_{y\in Y}|\langle F(x),y\rangle_{\mathcal{Y}}|=\operatorname*{max}_{x\in X}\operatorname*{max}_{y\in Y}|\langle F,K(x,.)y\rangle_{\mathcal{H}_{K}}|\leq\kappa\|F\|_{\mathcal{H}_{K}},$$ where we use the reproducing property, hF, K(x, .)yiHK = hF(x), yiY and the inequality in (65) follows from Cauchy-Schwarz inequality. We define for t ≥ 0, $$(64)$$ $$(65)$$ $$\Xi(t):=\sup_{y\in\mathcal{Y}}\sup_{\|s\|_{\mathcal{Y}}\leq t}\mathscr{L}(y,s).\tag{1}$$ $$(66)$$ The function Ξ(t) provides a bound on the loss function when the second argument s in the loss function has restricted norm. Ξ(t) enables bounding the norm of the function F via the loss function L by considering t = 0. Let L : Y → R be defined as the following: $$L(t)=\sup_{\begin{subarray}{c}y\in\mathcal{Y}\,\|s_{1}\|_{\mathcal{Y}}\leq t,\\ \|s_{2}\|_{\mathcal{Y}}\leq t\end{subarray}}\frac{|\mathscr{L}(y,s_{1})-\mathscr{L}(y,s_{2})|}{\|s_{1}-s_{2}\|_{\mathcal{Y}}}.$$ L(t) provides a Lipschitz constant for the loss function L with respect to the second argument when the norm of the argument is bounded by t. Lemma 6.2. Let F be a class of functions from X to Y. Consider a m-tuple of samples from input space as (x (1), x(2), . . . , x(m)) ∈ X m. Then the following hold: $$(67)$$ 1. $\mathbb{E}\left[\sup_{F\in\mathcal{F}}\|\frac{1}{m}\sum_{i=1}^{m}F(x^{(i)})-\mathbb{E}F\|_{\mathcal{Y}}\right]\leq2\mathscr{R}_{m;\mathcal{Y}}(\mathcal{F})$. 2. For every $c\in\mathbb{R}$, $\mathscr{R}_{m;\mathcal{Y}}(c\mathcal{F})=|c|\mathscr{R}_{m;\mathcal{Y}}(\mathcal{F})$. 3. For φ : Y → R, if φ is a Lipschitz function with Lipschitz constant L, then Rm;R(φ◦F) ≤ LRm;Y (F). Proof. The lemma has been proved as Lemma A.6 in Appendix A.9. We further define the following constants: $$\rho=\sqrt{\Xi(0)/\lambda},\,\tau=\kappa\rho,$$ $$(68)$$ $$(70)$$ ρ =pΞ(0)/λ, τ = κρ, (68) which will be useful in the upcoming results. The forthcoming results will use the class of kernels given by: K0 = {K(x, .)y : K ∈ K, x ∈ X , y *∈ Y}*. (69) The following result provides a bound on the sample error which involves the Rademacher average of the class of kernels K0. Theorem 6.3. If F ∈ HK, then with confidence 1 − δ, where δ ∈ (0, 1), there holds $$S_{z}(m,\lambda,F)\leq2\rho L(\tau)\beta^{1/4}(\mathcal{R}_{m;\mathcal{Y}}(\mathcal{K}_{0}))^{1/4}+(\Xi(\tau)+\Xi(\|F\|_{\infty}))\sqrt{\frac{\log\frac{1}{6}}{2m}}.$$ . (70) The proof will be provided later as a consequence of the results covered in the upcoming section. Theorem 6.3 provides a probabilistic upper bound for the sample error in terms of Rademacher average for the class of graph-induced operator-valued kernels. Later in Section 6.3, we will consider K to be the class of graphinduced OVKs and derive a bound on Rm;Y (K0). ## 6.2 Estimating Sample Error In this section, we derive results which aid in establishing the result in Theorem 6.3. We use Hoeffding inequality for bounding the term Ez(F) − E (F) using random data set z ⊆ Z := *X × Y*. Lemma 6.4. Let F be a bounded function. For every δ ∈ (0, 1), with confidence 1 − δ there holds $$\mathscr{E}_{z}(F)-\mathscr{E}(F)\leq\Xi(\|F\|_{\infty})\sqrt{\frac{\log\frac{1}{\phi}}{2m}}.\tag{1}$$ $$(71)$$ Proof. Consider the random variable ζ = L (*y, F*(x)). Note that Ez = 1 m Pm i=1 ζ(zi), where zi = (x (i), y(i)) and E = E(ζ). By our assumption, 0 < ζ ≤ Ξ(kFk∞) we have |ζ − E[ζ]| ≤ Ξ(kFk∞). Using one-sided Hoeffding inequality, we obtain $$P\left({\frac{1}{m}}\sum_{i=1}^{m}(\zeta_{i}-\mathbb{E}[\zeta])\geq t\right)\leq\exp\left(-{\frac{2m t^{2}}{\Xi^{2}(\|F\|_{\infty})}}\right).$$ Consider, $$\delta=\exp\left(-\frac{2m t^{2}}{\Xi^{2}(\|F\|_{\infty})}\right)\implies\log\frac{1}{\delta}=\left(\frac{2m t^{2}}{\Xi^{2}(\|F\|_{\infty})}\right)\implies t=\Xi(\|F\|_{\infty})\sqrt{\frac{\log\frac{1}{\delta}}{2m}}.$$ In fact, the $\delta$-function of the $\delta$-function is defined by . (73) Therefore, for every δ ∈ (0, 1), with confidence 1 − δ the following holds: $$\mathscr{E}_{z}(F)-\mathscr{E}(F)\leq\Xi(\|F\|_{\infty})\sqrt{\frac{\log\frac{1}{\delta}}{2m}}.\tag{1}$$ $$\left(72\right)$$ $$(73)$$ $$(74)$$ Now that the second term in the sample error (54) has been bounded, we focus on the first term by considering a union of unit balls in the space HK where the notion of Rademacher average can be defined for obtaining bounds. We define a function Θ such that $$\mathscr{E}(F_{z})-\mathscr{E}_{z}(F_{z})\leq\Theta(z):=\sup_{F\in\rho\mathscr{B}_{\mathcal{K}}}\left(\mathscr{E}(F)-\mathscr{E}_{z}(F)\right),\tag{1}$$ where BK is the union of unit balls in HK over K ∈ K given by $$(75)$$ in $\mathcal{H}_{K}$ over $\mathcal{H}\in\mathcal{H}_{K}$ given by $$\mathcal{B}_{K}=\bigcup_{K\in\mathcal{K}}\Big{\{}F\in\mathcal{H}_{K}:\|F\|_{\mathcal{H}_{K}}\leq1\Big{\}}.\tag{76}$$ The supremum is defined over ρBK in (75) by the following reasoning: λkFzk 2 HKz ≤ E (Fz) − E (F ∗) + λkFzk 2 HKz ≤ Ez(0) − E (F ∗) ≤ supy∈Y L (y, 0) = supy∈Y sups:kskY ≤0 L (*y, s*) = Ξ(0). This gives a bound on kFzkHKz , note that Fz is the minimizer in problem (52). In fact, using a similar approach, for any general F ∈ HK, we have kFkHK ≤pΞ(0)/λ =: ρ which leads to the set ρBK in (75). Thus to bound E (Fz) − Ez(Fz), we find a suitable upper bound on Θ(z) in the following lemma. Lemma 6.5. Consider Θ(z) = supF ∈ρBK (E (F) − Ez(F)), then for every δ ∈ (0, 1), with confidence 1 − δ the following holds $$\Theta(z)\leq\mathbb{E}[\Theta(z)]+\Xi(\tau){\sqrt{\frac{\log{\frac{1}{\delta}}}{2m}}}.$$ $$(77)$$ Proof. Let z 0 i be the data set which is obtained by replacing i-th pair zi = (x (i), y(i)) of z with (x 0 i , y0i ). Then, $$\Theta(z)-\Theta(z^{\prime}_{i})=\sup_{F\in\rho\mathcal{B}_{\mathbb{K}}}\left(\mathscr{E}(F)-\mathscr{E}_{z}(F)\right)-\sup_{F\in\rho\mathcal{B}_{\mathbb{K}}}\left(\mathscr{E}(F)-\mathscr{E}_{z^{\prime}_{i}}(F)\right)$$ $$\leq\sup_{F\in\rho\mathcal{B}_{\mathbb{K}}}\left(\mathscr{E}_{z^{\prime}}(F)-\mathscr{E}_{z}(F)\right)$$ $$=\frac{1}{m}\sup_{F\in\rho\mathcal{B}_{\mathbb{K}}}\left(\mathscr{L}(y^{\prime}_{i},F(x^{\prime}_{i}))-\mathscr{L}(y^{(i)},F(x^{(i)}))\right)$$ $$\leq\frac{1}{m}\Xi(\tau),$$ where (79) is obtained by using properties of supremum and (81) follows from the definition of κ, τ and Ξ. By interchanging z and z 0 i , we obtain $$(\delta1)$$ $$|\Theta(z)-\Theta(z_{i}^{\prime})|\leq\frac{1}{m}\Xi(\tau).$$ $$(82)$$ Using McDiarmid's inequality, we obtain $$P\left(\Theta(z)-\mathbb{E}[\Theta(z)]\geq\epsilon\right)\leq\exp\left(-\frac{2m\epsilon^{2}}{\Xi^{2}(\tau)}\right).$$ Using a similar argument as in the proof of Lemma 6.4, the proof follows that for every δ ∈ (0, 1), with confidence 1 − δ the following holds: $$\Theta(z)-\mathbb{E}[\Theta(z)]\leq\Xi(\tau){\sqrt{\frac{\log{\frac{1}{\delta}}}{2m}}}.$$ The next lemma helps to bound E[Θ(z)] using Rademacher average of the class K0. $$(83)$$ $$(84)$$ $\square$ Lemma 6.6. E[Θ(z)] is bounded above as follows: $$\mathbb{E}[\Theta(z)]\leq2\rho L(\tau)\beta^{1/4}(\mathcal{R}_{m;\mathcal{V}}(\mathcal{K}_{0}))^{1/4}.$$ $$\square$$ Proof. The lemma has been proved as Lemma A.7 in Appendix A.9. Using Lemmas 6.4, 6.5 and 6.6, the result in Theorem 6.3 is proved. ## 6.3 Learning With Graph-Induced Operator-Valued Kernels In this section, we consider the following class of functions $${\mathcal{K}}_{0}=\{K(x,.)y:K\in{\mathcal{K}},x\in{\mathcal{X}},y\in{\mathcal{Y}}\},$$ K0 = {K(x, .)y : K ∈ K, x ∈ X , y *∈ Y}*, (85) where K is defined as the graph-induced OVK given by $$K(x,x^{\prime})y=e^{-\gamma(x-x^{\prime})^{\top}(L+D)(x-x^{\prime})}\int_{0}^{1}e^{-\gamma_{\alpha\gamma}|s-t|}y(s)ds,$$ $$=\mathfrak{g}(x,x^{\prime})Ty,$$ $$(85)$$ $$(86)$$ $$(87)$$ with γ, γop > 0, L ∈ L and D ∈ D. Note that g(*x, x*0) = e $-\gamma(x-x')^\top(L+D)(x-x')$ and . 0) and T y =R 1 0 e −γop|s−t|y(s)ds. Now, in order to bound the Rademacher average Rm;Y (K0), we follow an approach inspired by Maurer (2016) and split the OVK K using properties on g and T. Consider the class of functions defined as G = {g(*x, .*) = e −γ(x−.) >(L+D)(x−.) ∈ HG : kg(x, .)kHG ≤ RG, L ∈ L, D ∈ D*, γ >* 0} where HG is the RKHS corresponding to the scalar-valued kernel g on *X × X* . $$\mathscr{R}_{m;\mathcal{Y}}(\mathcal{K}_{0})=\mathbb{E}\left[\sup_{k\in\mathcal{G}}\sup_{y\in\mathcal{Y}}\sup_{t\in\mathcal{X}}\left\|\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}k(x^{(i)},t)Ty\right\|_{\mathcal{Y}}\right]$$ $$\leq\mathbb{E}\left[\sup_{k\in\mathcal{G}}\sup_{y\in\mathcal{Y}}\sup_{t\in\mathcal{X}}\left|\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}k(x^{(i)},t)\right|\|Ty\|_{\mathcal{Y}}\right]$$ (88) $\binom{89}{2}$ . In Equation (91), R + m;R (G) denotes a Rademacher average involving absolute values of real-valued functions in G. For bounding Rademacher average R + m;R (G), we use the notion of covering numbers. Next, we provide the definition of covering numbers. Definition 6.2 (**Covering Numbers**). Let (F, d) be a pseudo-metric space and S be a subset of F. For every > 0, the covering number of S by balls of radius with respect to d, denoted by N (*S, , d*) is defined as the minimal number of balls of radius whose union covers S, namely, $${\mathcal{N}}(S,\epsilon,d)=\operatorname*{min}\left\{n\in\mathbb{N}:\exists\{s_{j}\}_{j=1}^{n}\subset\mathbb{F}{\mathrm{~such~that~}}S\subseteq\bigcup_{j=1}^{n}B(s_{j},\epsilon)\right\},$$ $$(91)$$ where B(sj , ) = {s ∈ F : d(*s, s*j ) ≤ }. Let Q be a class of bounded real-valued functions defined on X , x = (x (i): i ∈ [m]) ∈ X m and Q|x = {(Q(x (i)) : i ∈ [m]) : Q *∈ Q} ⊆* R m. For a norm induced by d on X , we define the d-norm empirical covering number of Q associated with x as Nd(Q*, , m*) = supx∈X m N (Q|x*, , d*). $$=\left(\sup_{y\in\mathcal{Y}}\|Ty\|_{\mathcal{Y}}\right)\mathbb{E}\left[\sup_{k\in\mathcal{G}}\sup_{t\in\mathcal{X}}\left|\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}k(x^{(i)},t)\right|\right]$$ $$=\left(\sup_{y\in\mathcal{Y}}\|Ty\|_{\mathcal{Y}}\right)\mathcal{R}_{m;\mathbb{R}}^{+}(\mathcal{G}).$$ $$(90)$$ Let U = supg∈G E[g 2]. Using a construction similar to (124) and (125) in Appendix A.3, we obtain $${\mathfrak{g}}(x,.)=e^{-\gamma(x-.)^{\top}(L+D)(x-.)}=e^{-\gamma\|A(x-.)\|}$$ 2 p , where A = √ΛV , for a diagonal matrix Λ with non-negative eigenvalues of L + D and V is a orthonormal matrix. Consider an appropriate bound as E[g 2] ≤ a, ∀g ∈ G which is reasonable owing to the RBF-based construction of scalar-valued kernels in G. Based on Corollary 2.2.8 in (Van Der Vaart & Wellner, 1996), we can bound the Rademacher average using covering number as $$\mathcal{R}_{m;\mathbb{R}}^{+}(\mathcal{G})\leq\frac{1}{\sqrt{m}}\int_{0}^{U}\sqrt{\log\mathcal{N}_{d}(\mathcal{G},\epsilon,m)}d\epsilon,$$ $$\leq\frac{1}{\sqrt{m}}\int_{0}^{a}\sqrt{\log\mathcal{N}_{d}(\mathcal{G},\epsilon,m)}d\epsilon.$$ (92) $\binom{93}{2}$ (93) . $$(94)$$ To bound Nd(G*, , m*), we use Remark 11 in (Cucker & Smale, 2002), to state that there exists C > 0 and q > 0 such that $$\log{\cal N}_{d}({\cal G},\epsilon,m)\leq\left(\frac{R_{\cal G}C}{\epsilon}\right)^{\frac{1}{q}}.\tag{1}$$ Based on our assumptions, there exists K > 0 such that supy∈Y kT ykY ≤ K. Using (93) and (94) in (91), we obtain $$\mathscr{R}_{m;\mathcal{Y}}(\mathcal{K}_{0})\leq\frac{2aq\mathbb{K}(R_{G}C/a)^{1/2q}}{(2q-1)\sqrt{m}}.$$ Using the result in (95) with (70), we can establish the generalization bounds for the class of kernels constructed with graph-induced operator-valued kernels. For the problem considered in this work, the loss is defined as L (*y, y*0) = R 1 0 (y(t) − y 0(t))2dt with Ξ(t) ≤ (β + t) 2 and L(t) ≤ 2(β + t). We obtain the following for λ < 1, δ ∈ (0, 1), with confidence 1 − δ as Sz(m, λ, F∗ λ ) ≤ 4ρ(β + τ )β 1/4 2aqK(RGC/a) 1/2q (2q − 1)√m 1/4+ ((β + τ ) 2 + (β + kF ∗ λ k∞) 2) slog 1 δ ≤ 4β √λ β + κβ √λ β 1/4 2aqK(RGC/a) 1/2q (2q − 1)√m 1/4+ 2 β + κβ √λ 2slog 1 δ 2m(97) = 4β 9/4 λ(κ + √ λ) 2aqK(RGC/a) 1/2q (2q − 1)√m 1/4+ 2β 2 λ (κ + √ λ) 2 slog 1 δ 2m(98) < β 1/4 2aqK(RGC/a) 1/2q (2q − 1) 1/4+ slog 1 δ 2 β 2 λm1/8 max{4(κ + 1), 2(κ + 1)2}. (99) ∗ 2m(96) $$(95)$$ (96) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (97) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (98) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (99) . The inequality (97) is obtained using ρ =pΞ(0)/λ ≤ β/√λ, τ = κρ and kF λ k∞ ≤ κρ. Now, a common assumption for smooth kernels is of logarithmic decay of regularization error, i.e., R∗(λ) ≤ c 0λ η, where η ∈ (0, 1] and c 0 > 0 (Micchelli et al., 2016). Then the generalization error is bounded by $$\mathscr{E}(F_{z})-\mathscr{E}(F^{*})\leq\frac{c}{\lambda}+c^{\prime}\lambda^{\eta}.\tag{10.1}$$ Consider the function H (λ) = cλ + c 0λ η, then the minimizer is obtained for λ ∗ = (*c/ηc*0) 1/(1+η) with H (λ ∗) = (ηc0) 1/(1+η)[1 + 1/η] c η/(1+η). Therefore, for δ ∈ (0, 1), with confidence 1 − δ we obtain $$(100)$$ $$\mathscr{E}(F_{z})-\mathscr{E}(F^{*})\leq(\eta c)^{1/(1+\eta)}\left(1+1/\eta\right)\left[\left(\beta^{1/4}\left(\frac{2aq\mathbb{K}(R_{B}C/a)^{1/2q}}{(2q-1)}\right)^{1/4}+\sqrt{\frac{\log\frac{1}{2}}{2}}\right)\frac{\beta^{2}}{m^{1/8}}\mathbf{q}\right]^{\eta/(1+\eta)}\tag{1}$$ ·$\eta$) , , (101) . where A = max{4(κ + 1), 2(κ + 1)2}. (101) ensures an upper bound for the generalization error with the help of a bound on the Rademacher average for the problem (52) of learning the OVK Kz and the functional map Fz in the induced RKHS corresponding to Kz by using a ball BK in corresponding RKHS with a fixed radius (considered as 1). The task of establishing a bound on the regularization error R∗(λ) in (Micchelli et al., 2016) considers an example prescribing value for η based on the hyperparameter in the RBF kernel. A similar pursuit in our setting is not straightforward because of the functional nature of the input space. Hence, we leave it for future work. ## 7 Experiments In order to illustrate the effectiveness of the developed framework, we have used functional regression problem with an unknown graph structure in the input data for both synthetic and real datasets. The task of predicting output functions with the help of a Laplacian matrix denoting the relationship between the set of p input functions has been illustrated in the experiments. As practical data is always available as discrete observations corresponding to functions, standard FDA techniques can be used for the conversion of functional data into vector representation using basis functions, e.g. Fourier basis, B-spline basis, etc. Let X = (L 2([*a, b*]))p and Y = L 2([*c, d*]) be the input and output spaces, respectively. For our experiments, the error metric used is residual sum of squares error (RSSE) (Kadri et al., 2016) defined as *RSSE* = Pi R d c {y (i)(t)−yˆ (i)(t)} 2dt, where y (i)is the actual output function and yˆ (i)is the predicted output function. RSSE is better suited to compare functional outputs. The integrals involved have been approximated by using numerical integration in our implementation. The quadratic programs involved in (23) and (15) are solved by using CVXOPT (Andersen et al., 2023). Experimental Setting: All methods were coded in Python 3.7 and the codes are made public.1 All experiments were run on a Linux box with 182 Gigabytes main memory and 28 CPU cores. As methods to solve the problem of functional regression problem simultaneously with learning L and/or D are not available, we use popular algorithms to first determine L. Then for the learned L, we use our alternating minimization framework to learn D using projected gradient descent and u using OpMINRES. For the MCPbased L learning and D learning in the proposed alternating minimization framework, we use a decaying step-size in the projected gradient descent. The decaying step-size regime involves starting with an initial step-size (e.g. 10−4) and reducing it by a fixed factor (e.g. 2) after a set of iterations (e.g. 5) continuously till a final step-size (e.g. 10−9). In order to illustrate the effectiveness, we consider the following methods for comparison. fglasso-OpMINRES-D: L is determined using fglasso (Qiao et al., 2019), based on a Gaussian functional model which provides a precision matrix (inverse of covariance matrix) corresponding to the nodes with corresponding functional input data. The approach develops an extension of the glasso criterion (Yuan & Lin, 2007) to fglasso for functional data. The learned L is then used with our alternating minimization regime for optimizing u and D. OpMINRES is used with k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0) and k2(*s, t*) = e −γop|s−t|, where γ ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γop ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. KGL-OpMINRES-D: L is obtained by using Kernel Graph Learning (KGL) (Pu et al., 2021b) problem with respect to two kernel Gram matrices obtained for input signals and their timestamps which have been used to establish the relationship between input functions. RBF kernels have been considered for the input functions. The hyperparameters for KGL are tuned using cross-validation in our implementation. The learned L is then used with our alternating minimization regime for optimizing u and D. OpMINRES is used with k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0) and k2(*s, t*) = e −γop|s−t|, where γ ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γop ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Sparse OpMINRES-L-D: This denotes our proposed method where we used Algorithm 2 to learn u, L and D. Projected gradient descent is used in minimization with respect to L and D based on a decaying step-size. The sparsity is aided by the MCP regularization considered in learning of L. The graph-induced operator- 1Codes used for the experiments can be found at https://github.com/akashsaha06/graph-inducedOVK. valued kernels are obtained using k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0) and k2(*s, t*) = e −γop|s−t|, where γ ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γop ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Sparse Non-Pos-OpMINRES-L-D: As our framework is developed for OVKs, the proposed alternating minimization is well adaptive to consider generalized non-positive semi-definite OVKs (Saha & Palaniappan, 2020) as graph-induced OVKs. We call this extension Sparse Non-Pos-OpMINRES-L-D. Here too, projected gradient descent is used in minimization with respect to L and D using the decaying step-size similar to Sparse OpMINRES-L-D. The sparsity is aided by the MCP regularization considered in learning of L. The graph-induced operator-valued kernels are obtained using k1(*x, x*0; G) = e −γ(x−x 0) >(L+D)(x−x 0) and k2(*s, t*) = e −γop1|s−t| − e −γop2|s−t|, where γ ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γop1, γop2 ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Note that k2 is not necessarily a positive semi-definite kernel. Stopping Criteria: We elaborate on the stopping criteria for the different algorithms used in the alternating minimization framework in Algorithm 2. - **OpMINRES**: The stopping criterion for OpMINRES is based on the following condition: the loop exits if the value of relative residual norms between the current residual norm and the initial residual norm is less than a threshold (e.g. 10−3 was used in our implementation). - **Projected Gradient Descent**: The projected gradient descent steps for both vech(L) and vech(D) in Algorithm 2 use similar stopping criterion where the norm of difference between two consecutive iterates is compared to be less than a threshold (e.g. 10−3 was used in our implementation). - **MCP Regularization**: The sparsity-inducing MCP regularization of vech(L) in Algorithm 1 compares the norm of difference between two consecutive iterates against a threshold (considered as 10−3in our implementation) as the stopping criterion. ## 7.1 Experiments With Synthetic Data Data Generation: For synthetic experiments, three sets of experiments have been considered with input functions for graph structures having 3-nodes, 12-nodes and 25-nodes, respectively. The input functions are generated based on weighted cosine functions and constant functions with random noise. The corresponding output function is based on weighted sine functions sharing the weights between input functions and output function (details are given in Appendix A.10). For all the methods, a truncated trigonometric basis of L 2([0, 2π]) with 30 basis functions has been considered for encoding the functional data. The experiments were run for three settings where the data has been divided randomly into a training set, a validation set and a test set. The following data splits have been considered: (80/20/20), (160/40/40) and (320/80/80), representing the number of training samples/validation samples/test samples. The results for synthetic data with 12 nodes are summarized in Tables 1-2. From Table 2, we observe that Sparse OpMINRES-L-D obtains comparable performance based on mean RSSE on the test data in all three settings where 80 samples, 160 samples and 320 samples have been used for training. Both fglassoOpMINRES-D and KGL-OpMINRES-D essentially predict a graph structure first and then use it for the functional regression problem. Sparse OpMINRES-L-D and Sparse Non-Pos-OpMINRES-L-D present a unified approach which incorporates sparse graph learning with the functional regression task. In Table 1, the learned graphs are illustrated where darker colors of edges indicate larger edge weights. Table 1 illustrates that fglasso-OpMINRES-D fails to differentiate between the interactions of input functions and results in a fully connected graph consistently. Though, KGL-OpMINRES-L-D in comparison to fglassoOpMINRES-L-D produces a sparser graph structure, Sparse-OpMINRES-L-D learns sparse graph structures exhibiting relations that can incorporate the associations enforced in the generation process. The learned associations provide required correlations which can benefit the functional regression task. Further details of the synthetic data including experiments for 3 nodes and 25 nodes are given in Appendix A.10. Sparse Non-Pos-OpMINRES-L-D also learns sparse graph structures which are informative of synthetic data used for the functional regression task. | Table 1: Graphs corresponding to learned L for 12-node synthetic data. [Best viewed in color] Train/ Val/ Test samples Sparse fglassoOpMINRES-D KGLOpMINRES-D Non-PosOpMINRES-L-D OpMINRES-L-D 80/20/20 160/40/40 320/80/80 | |---| Additional Experiments: Appendix A.10.5 contains the results of experiments conducted as an ablation study in the 12-node setting. ## 7.2 Experiments On Weather Data Weather data is dynamic and inter-relationships between different parameters can be hard to predict. As our problem solves a functional regression problem based on a relationship between a set of input functions, we intend to showcase the effectiveness of the proposed algorithm by predicting average dew-point temperature | Train/Val/Test samples | Methods | Mean RSSE | | | |-----------------------------|--------------------|-------------|----------|----------| | | | Train | Val | Test | | Sparse OpMINRES-L-D | | 1.140691 | 1.780445 | 1.583640 | | 80/20/20 | fglasso-OpMINRES-D | 1.243734 | 1.821687 | 1.700265 | | KGL-OpMINRES-D | | 1.061473 | 1.775388 | 1.554853 | | Sparse Non-Pos-OpMINRES-L-D | | 1.167264 | 1.806093 | 1.618175 | | Sparse OpMINRES-L-D | | 0.888574 | 1.229568 | 1.385952 | | 160/40/40 | fglasso-OpMINRES-D | 0.956907 | 1.285154 | 1.305025 | | KGL-OpMINRES-D | | 0.983432 | 1.260719 | 1.286481 | | Sparse Non-Pos-OpMINRES-L-D | | 1.154356 | 1.362239 | 1.417921 | | Sparse OpMINRES-L-D | | 1.062102 | 1.294110 | 1.239181 | | 320/80/80 | fglasso-OpMINRES-D | 0.947426 | 1.336192 | 1.271646 | | KGL-OpMINRES-D | | 0.980995 | 1.299266 | 1.252706 | | Sparse Non-Pos-OpMINRES-L-D | | 1.073292 | 1.295140 | 1.243346 | Table 2: Mean RSSE results for 12-node synthetic data. (F) across 12 weather stations based on their respective air temperatures (F). We consider 1 minute data of Wyoming ASOS data collected from IEM ASOS One Minute Data (Iowa Environmental Mesonet, 2022). The data has been collected for an interval of 2 hours for both input functions and output function from January, 2022 to August, 2022. Data collected at one minute interval for different 12 weather stations in Wyoming was pre-processed to create 2 hour interval data by disregarding intervals where data was missing in any of the 12 stations. A total of 718 samples have been collected after removing missing data. For all the methods, a truncated trigonometric basis of L 2([0, 1]) with 80 basis functions has been considered for encoding the functional data. We segregate the weather data experiments into small weather data experiments (Appendix A.10) by considering 120 samples and full weather data experiments. The following random data splits have been considered: (80/20/20) and (472/123/123), representing the number of training samples/validation samples/test samples in small weather data and full weather data settings, respectively. Tables 3-4 showcase the performance of the algorithms for full weather data considering all 718 samples. Sparse OpMINRES-L-D performs the best in terms of mean RSSE on the test data compared to fglassoOpMINRES-L-D and KGL-OpMINRES-L-D (Table 4). The maps in Table 3 describe the geographic positioning of the weather stations in Wyoming and the edges between them indicate potential inter-relations between the stations. In Table 3, fglasso-OpMINRES-L-D and KGL-OpMINRES-L-D learn dense fully connected graphs which do not provide much information regarding the impact of different weather stations on the relationship of respective air temperature to the average dew point temperature. Sparse OpMINRES-L-D learns a sparse L where stations BPI(1) and CPR(2), P60(7) and SHR(10) along with RIW(8) and WRL(12) are connected. BPI(1) (42.58507, −110.11115) and CPR(2) (42.908, −106.46442) are 300.7 km apart with an elevation of 2124 m and 1612 m, respectively. P60(7) (44.54444, −110.42111) and SHR(10) (44.77, −106.97) are 274.85 km apart with an elevation of 2368 m and 1209 m, respectively. RIW(8) (43.06423, −108.45984) and WRL(12) (43.96571, −107.95083) are 108.28 km apart with an elevation of 1688 m and 1294 m. It can be observed that the connections in the learned graph structure have been established between stations with varying elevations lying in close proximity latitude-wise. To illustrate the utility of our proposed sample-based approximation algorithm, we use the full weather data and evaluate it to produce the results in Table 5 for all algorithms. The results in Table 5 show that the sample-based approximation algorithm provides comparable results using only a few samples. In 5 runs, out of 472 training samples, the number of samples in the working set of the sample-based approximation algorithm varies between 123 to 200. Sparse OpMINRES-L-D performs the best in terms of the mean RSSE on test data. ![29_image_0.png](29_image_0.png) Table 3: Graphs corresponding to learned L for full weather (472/123/123) data. [Best viewed in color] Table 4: Mean RSSE results for full weather data. | Table 4: Mean RSSE results for full weather data. | | | | | |-----------------------------------------------------|--------------------|-----------|----------|----------| | Train/Val/Test samples | Methods | Mean RSSE | | | | Train | Val | Test | | | | Sparse OpMINRES-L-D | 0.002938 | 0.009891 | 0.010743 | | | 472/123/123 | fglasso-OpMINRES-D | 0.021094 | 0.013476 | 0.044216 | | KGL-OpMINRES-D | 0.003474 | 0.010877 | 0.012797 | | | Methods | Mean RSSE | | | | |---------------------|---------------------|---------------------|---------------------|---------------------| | Full Train | Train subset | Val | Test | | | Sparse OpMINRES-L-D | 0.083688 ± 0.011229 | 0.010584 ± 0.006664 | 0.013568 ± 0.000914 | 0.097118 ± 0.01392 | | fglasso-OpMINRES-D | 0.112057 ± 0.004981 | 0.011457 ± 0.006908 | 0.014289 ± 0.000710 | 0.130591 ± 0.006171 | | KGL-OpMINRES-D | 0.131076 ± 0.016523 | 0.010166 ± 0.005781 | 0.013702 ± 0.000561 | 0.100619 ± 0.017939 | Table 5: RSSE (mean ± standard deviation) results over 5 runs for sample-based approximation algorithm using full weather data. | Table 6: Mean RSSE results for NBA data. | | | | | |--------------------------------------------|---------------------|-----------|----------|----------| | Train/Val/Test samples | Methods | Mean RSSE | | | | Train | Val | Test | | | | 233/59/59 | Sparse OpMINRES-L-D | 0.025200 | 0.087748 | 0.106344 | | OpMINRES-D | 0.023261 | 0.191459 | 0.265513 | | Table 6: Mean RSSE results for NBA data. | Methods | Mean RSSE | | | | |---------------------|---------------------|---------------------|---------------------|---------------------| | Full Train | Train subset | Val | Test | | | Sparse OpMINRES-L-D | 0.215943 ± 0.002837 | 0.018371 ± 0.001624 | 0.066631 ± 0.003058 | 0.147214 ± 0.004665 | | OpMINRES-D | 9.070595 ± 0.138830 | 0.005005 ± 0.002031 | 0.180238 ± 0.013534 | 0.452081 ± 0.020617 | Table 7: RSSE (mean ± standard deviation) results for sample-based approximation algorithm for NBA data. ## 7.3 Experiments On Nba Data The movement of basketball and 21 players involved on the court (x-y coordinates) in the Atlanta Hawks (ATL) vs Utah Jazz (UTA) match on November 15, 2015 has been considered in this experiment. This data is available in the Github repo NBA Movement Data (Seward, 2018). The data has been collected for different plays for both input functions of 21 players and output function denoting the position of the ball, which includes missing data corresponding to some players in different plays. As plays in a basketball game are of different time duration, we use a truncated trigonometric basis of L 2([0, 1]) with 80 basis functions to sample the functions at fixed 100 points on [0, 1]. A total of 351 samples have been collected after removing missing data. A random data split of (233/59/59) representing the number of training samples/validation samples/test samples has been considered. The problem requires solving a multi-dimensional functional regression problem which is incompatible with fglasso and KGL algorithms, as both fglasso & KGL are based on single dimensional input functions. Hence, we compare our method with the algorithm OpMINRES-D where a fixed L is incorporated in our alternating minimization framework. OpMINRES-D: A fixed L is considered corresponding to a fully connected network of 21 nodes. This decision was made as fglasso mostly learns a fully connected graph in earlier experiments. Thus, a fixed L (with no sparsity-inducing MCP) is used in the proposed alternating minimization regime for optimizing u and D. OpMINRES is used with k1(*x, x*0; G) = e −γx(x−x 0) >(L+D)(x−x 0)−γy(x−x 0) >(L+D)(x−x 0) and k 1 2 (*s, t*) = e −γ 1 op|s−t|, k1 2 (*s, t*) = e −γ 2 op|s−t|, where γx, γy ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γ 1 op, γ2 op ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Sparse OpMINRES-L-D: We consider the graph-induced operator-valued kernels using k1(*x, x*0; G) = e −γx(x−x 0) >(L+D)(x−x 0)−γy(x−x 0) >(L+D)(x−x 0) and k 1 2 (*s, t*) = e −γ 1 op|s−t|, k2 2 (*s, t*) = e −γ 2 op|s−t|, where γx, γy ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γ 1 op, γ2 op ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Projected gradient descent is used in minimization with respect to L and D based on a decaying step-size. The sparsity is aided by the MCP regularization considered in learning of L. The results are illustrated in Tables 6 and 8 where comparison method OpMINRES-D uses a fully connected graph, however Sparse OpMINRES-L-D performs better with a sparse learned graph in terms of mean RSSE on the test data. Observations for the match have been published in the match reports ESPN match recap and ESPN match scoreboard (ESPN, 2015a;b). In Table 8, the depiction of a basketball court is provided where the players have been arranged on the court with ATL players on the left and UTA players on the right. The graphical structure corresponding to the learned L in Table 8 illustrates strategic relationships between players of both ATL and UTA. The connection between Derrick Favors—Trevor Booker (6—8) had been pivotal for Utah Jazz. The performance of Al Horford in (4—7) and Kent Bazemore in (2—11) for Atlanta Hawks has been captured. Though the partnership of Alec Burks—Trey Burke (9—18) for Utah Jazz is not evident in the match reports, their ball carrying interactions may be the reason for being learned in L. ![31_image_0.png](31_image_0.png) Table 8: Graphs corresponding to learned L for NBA data (233/59/59) of ATL (left) vs UTA (right) match using Sparse OpMINRES-L-D. [Best viewed in color] The performance of sample-based approximation algorithm has been showcased on NBA data in Table 7. Using the sample-based approximation algorithm, out of 233 training samples, the number of samples chosen in the working set of samples in 5 runs varies between 57 to 62. Sparse OpMINRES-I-D performs the best in terms of the mean RSSE on test data. The remaining results and details of experiments are in Appendix A.10. The best hyperparameters for the experiments conducted have been listed in Appendix A.10.4. ## 8 Conclusion In this work, we incorporate learning of a suitable graphical structure which drives a functional regression problem where the output function depends on the input functions and also upon their inter-relationships with each other. An alternating minimization based algorithm has been proposed to learn the Laplacian matrix L, a non-negative diagonal matrix D characterizing the graphical structure, along with the map from input space to the output space. For a fixed L and D, the functional regression learning problem is formulated as an operator system of equations which is solved by using OpMINRES algorithm. Projected gradient descent is used to learn the Laplacian matrix and the non-negative diagonal matrix in the alternating minimization framework. A sparsity-inducing regularizer (e.g. MCP) in L has been incorporated during the alternating minimization, which helps in learning a graphical structure and allows for improved interpretability and can highlight interactions which are most relevant among input functions useful for the prediction. To make the proposed algorithm scalable, a sample-based approximation algorithm has been proposed which helps reduce the computations required for solving linear system of operator equations using OpMINRES algorithm. An extension of the alternating minimization framework has also been proposed to solve the multi-dimensional functional regression problem assuming a single graphical structure on the input variables. The generalization analysis provides a bound on generalization error for learning a graph-induced OVK. Experiments establish the utility of proposed graph-induced operator-valued kernels in functional regression problems from diverse applications. ## Broader Impact Statement The framework and algorithms introduced in the paper with graph-induced operator-valued kernels aid in learning a sparse graphical structure which drives a functional regression problem where the output function depends on the input functions and their inter-relationships with each other. This will promote research in investigating more sophisticated techniques for handling functional data with an inherent graphical structure ingrained among them. To the best of our knowledge, our work does not have any negative impact. ## Acknowledgments We thank our anonymous reviewers for their insightful comments and suggestions. We declare no competing interests. ## References Roya Aliakbarisani, Abdorasoul Ghasemi, and M. Ángeles Serrano. Perturbation of the normalized laplacian matrix for the prediction of missing links in real networks. IEEE Transactions on Network Science and Engineering, 9(2):863–874, 2022. doi: 10.1109/TNSE.2021.3137862. M. S. Andersen, J. Dahl, Z. Liu, and L. Vandenberghe. Interior-point methods for large-scale cone programming. In S. Sra, S. Nowozin, and S. J. Wright (eds.), *Optimization for Machine Learning*, volume 55-83. MIT Press, 2012. Martin S Andersen, Joachim Dahl, Lieven Vandenberghe, et al. Cvxopt: A python package for convex optimization. *version 1.3*, 2023. URL https://cvxopt.org. Francis R. Bach and Michael I. Jordan. Predictive low-rank decomposition for kernel methods. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pp. 33–40, New York, NY, USA, 2005. Association for Computing Machinery. ISBN 1595931805. doi: 10.1145/1102351.1102356. URL https://doi.org/10.1145/1102351.1102356. Juan Lucas Bali, Graciela Boente, David E. Tyler, and Jane-Ling Wang. Robust functional principal components: A projection-pursuit approach. *The Annals of Statistics*, 39(6):2852 - 2882, 2011. doi: 10.1214/11-AOS923. URL https://doi.org/10.1214/11-AOS923. Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. *Journal of Machine Learning Research*, 9(15):485–516, 2008. URL http://jmlr.org/papers/v9/banerjee08a.html. R. B. Bapat, S. J. Kirkland, and S. Pati. The perturbed laplacian matrix of a graph. *Linear and Multilinear Algebra*, 49(3):219–242, 2001. doi: 10.1080/03081080108818697. URL https://doi.org/10.1080/ 03081080108818697. Dimitri Bouche, Marianne Clausel, François Roueff, and Florence d'Alché Buc. Nonlinear functional output regression: A dictionary approach. In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of* The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pp. 235–243. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr. press/v130/bouche21a.html. Romain Brault. *Large-scale operator-valued kernel regression*. Theses, Université Paris Saclay, July 2017. URL https://hal.science/tel-01761768. Romain Brault, Markus Heinonen, and Florence Buc. Random fourier features for operator-valued kernels. In Robert J. Durrant and Kee-Eung Kim (eds.), Proceedings of The 8th Asian Conference on Machine Learning, volume 63 of *Proceedings of Machine Learning Research*, pp. 110–125, The University of Waikato, Hamilton, New Zealand, 16–18 Nov 2016. PMLR. URL https://proceedings.mlr.press/ v63/Brault39.html. Sou-Cheng Choi. *Iterative methods for singular linear equations and least-squares problems*. PhD thesis, Stanford University, 2006. Felipe Cucker and Steve Smale. On the mathematical foundations of learning. *Bulletin of the American* mathematical society, 39(1):1–49, 2002. Alexandre d'Aspremont, Onureena Banerjee, and Laurent El Ghaoui. First-order methods for sparse covariance selection. *SIAM Journal on Matrix Analysis and Applications*, 30(1):56–66, 2008. doi: 10.1137/060670985. URL https://doi.org/10.1137/060670985. I.I. Dikin. Iterative solution of problems of linear quadratic programming. *Doklady Akademiia Nauk SSSR*, 174:674–675, 1967. Xiaowen Dong, Dorina Thanou, Pascal Frossard, and Pierre Vandergheynst. Learning laplacian matrix in smooth graph signal representations. *IEEE Transactions on Signal Processing*, 64(23):6160–6173, 2016. doi: 10.1109/TSP.2016.2602809. ESPN. Jazz beat hawks 97-96 to end 3-game skid. https://www.espn.com/nba/recap/_/gameId/ 400828035, 2015a. Accessed: 15/10/2023. ESPN. Atlanta hawks vs utah jazz box score. https://www.espn.com/nba/boxscore/_/gameId/ 400828035, 2015b. Accessed: 15/10/2023. Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. *Journal of the American Statistical Association*, 96(456):1348–1360, 2001. doi: 10.1198/ 016214501753382273. URL https://doi.org/10.1198/016214501753382273. F. Ferraty and P Vieu. *Nonparametric Functional Data Analysis, Theory and Practice.* Springer Series in Statistics, New York, 2006. Mostafa Reisi Gahrooei, Kamran Paynabar, Massimo Pacella, and Jianjun Shi. Process modeling and prediction with large number of high-dimensional variables using functional regression. IEEE Transactions on Automation Science and Engineering, 17(2):684–696, 2020. doi: 10.1109/TASE.2019.2941167. Gene H. Golub and Charles F. Van Loan. *Matrix Computations*. Johns Hopkins University Press, 3rd ed edition, 1996. Ana María Estrada Gómez, Kamran Paynabar, and Massimo Pacella. Functional directed graphical models and applications in root-cause analysis and diagnosis. *Journal of Quality Technology*, 53(4):421–437, 2021. doi: 10.1080/00224065.2020.1805380. URL https://doi.org/10.1080/00224065.2020.1805380. Clara Happ and Sonja Greven. Multivariate functional principal component analysis for data observed on different (dimensional) domains. *Journal of the American Statistical Association*, 113(522):649–659, 2018. doi: 10.1080/01621459.2016.1273115. URL https://doi.org/10.1080/01621459.2016.1273115. Harold V Henderson and SR Searle. Vec and vech operators for matrices, with some uses in jacobians and multivariate statistics. *Canadian Journal of Statistics*, 7(1):65–81, 1979. Harjit Hullait, David S. Leslie, Nicos G. Pavlidis, and Steve King. Robust function-on-function regression. Technometrics, 63(3):396–409, 2021. doi: 10.1080/00401706.2020.1802350. URL https://doi.org/10. 1080/00401706.2020.1802350. Pierre Humbert, Batiste Le Bars, Laurent Oudre, Argyris Kalogeratos, and Nicolas Vayatis. Learning laplacian matrix from graph signals with sparse spectral representation. Journal of Machine Learning Research, 22(195):1–47, 2021. URL http://jmlr.org/papers/v22/19-944.html. Iowa State University Iowa Environmental Mesonet. Iem :: Asos one minute data. https://mesonet. agron.iastate.edu/request/asos/1min.phtml, 2022. Accessed: 15/10/2023. Julien Jacques and Cristian Preda. Model-based clustering for multivariate functional data. *Computational* Statistics & Data Analysis, 71:92–106, 2014. ISSN 0167-9473. doi: https://doi.org/10.1016/j.csda.2012. 12.004. URL https://www.sciencedirect.com/science/article/pii/S0167947312004380. Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, and Julien Audiffren. Operator-valued kernels for learning from functional response data. Journal of Machine Learning Research, 17(20):1–54, 2016. URL http://jmlr.org/papers/v17/11-315.html. Sven Kurras, Ulrike Luxburg, and Gilles Blanchard. The f-adjusted graph laplacian: a diagonal modification with a geometric interpretation. In Eric P. Xing and Tony Jebara (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of *Proceedings of Machine Learning Research*, pp. 1530–1538, Bejing, China, 22–24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/ kurras14.html. Cornelius Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. United States Governm. Press Office Los Angeles, CA, 1950. Heng Lian. Nonlinear functional models for functional responses in reproducing kernel hilbert spaces. Canadian Journal of Statistics, 35(4):597–606, 2007. doi: https://doi.org/10.1002/cjs.5550350410. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cjs.5550350410. Renjie Liao. Notes on rademacher complexity. https://www.cs.toronto.edu/~rjliao/notes/Notes_on_ Rademacher_Complexity.pdf, 2020. Accessed: 28/10/2023. Andreas Maurer. A vector-contraction inequality for rademacher complexities. In Ronald Ortner, Hans Ulrich Simon, and Sandra Zilles (eds.), *Algorithmic Learning Theory*, pp. 3–17. Springer International Publishing, 2016. ISBN 978-3-319-46379-7. Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, and Alessandro Rudi. Kernel methods through the roof: Handling billions of points efficiently. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 14410–14422. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ a59afb1b7d82ec353921a55c579ee26d-Paper.pdf. Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 34(3):1436 - 1462, 2006. doi: 10.1214/009053606000000281. URL https: //doi.org/10.1214/009053606000000281. Charles A. Micchelli and Massimiliano Pontil. On learning vector-valued functions. *Neural Computation*, 17 (1):177–204, 2005. doi: 10.1162/0899766052530802. Charles A. Micchelli, Massimiliano Pontil, Qiang Wu, and Ding-Xuan Zhou. Error bounds for learning the kernel. *Analysis and Applications*, 14(06):849–868, 2016. doi: 10.1142/S0219530516400054. URL https://doi.org/10.1142/S0219530516400054. Junier Oliva, William Neiswanger, Barnabas Poczos, Eric Xing, Hy Trac, Shirley Ho, and Jeff Schneider. Fast Function to Function Regression. In Guy Lebanon and S. V. N. Vishwanathan (eds.), *Proceedings of* the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of *Proceedings* of Machine Learning Research, pp. 717–725, San Diego, California, USA, 09–12 May 2015. PMLR. URL https://proceedings.mlr.press/v38/oliva15.html. Xingyue Pu, Tianyue Cao, Xiaoyun Zhang, Xiaowen Dong, and Siheng Chen. Learning to learn graph topologies. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 4249–4262. Curran Associates, Inc., 2021a. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/ 21e4ef94f2a6b23597efabaec584b504-Paper.pdf. Xingyue Pu, Siu Lun Chau, Xiaowen Dong, and Dino Sejdinovic. Kernel-based graph learning from smooth signals: A functional viewpoint. *IEEE Transactions on Signal and Information Processing over Networks*, 7:192–207, 2021b. doi: 10.1109/TSIPN.2021.3059995. Lishan Qiao, Limei Zhang, Songcan Chen, and Dinggang Shen. Data-driven graph construction and graph learning: A review. *Neurocomputing*, 312:336–351, 2018. ISSN 0925-2312. doi: https:// doi.org/10.1016/j.neucom.2018.05.084. URL https://www.sciencedirect.com/science/article/pii/ S0925231218306696. Xinghao Qiao, Shaojun Guo, and Gareth M. James. Functional graphical models. *Journal of the American* Statistical Association, 114(525):211–222, 2019. doi: 10.1080/01621459.2017.1390466. URL https://doi. org/10.1080/01621459.2017.1390466. Xinghao Qiao, Cheng Qian, Gareth M James, and Shaojun Guo. Doubly functional graphical models in high dimensions. *Biometrika*, 107(2):415–431, 02 2020. ISSN 0006-3444. doi: 10.1093/biomet/asz072. URL https://doi.org/10.1093/biomet/asz072. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), *Advances in Neural Information Processing Systems*, volume 20. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/file/ 013a006f03dbc5392effeb8f18fda755-Paper.pdf. J. O. Ramsay and C. J. Dalzell. Some tools for functional data analysis. *Journal of the Royal Statistical* Society: Series B (Methodological), 53(3):539–561, 12 2018. ISSN 0035-9246. doi: 10.1111/j.2517-6161. 1991.tb01844.x. URL https://doi.org/10.1111/j.2517-6161.1991.tb01844.x. James O. Ramsay. When the data are functions. *Psychometrika*, 47(4):379–396, 1982. Aniruddha Rajendra Rao and Matthew Reimherr. Modern non-linear function-on-function regression. *Statistics and Computing*, 33(6):1–12, 2023. Fabrice Rossi and Nathalie Villa. Support vector machine for functional data classification. *Neurocomputing*, 69(7):730–742, 2006. ISSN 0925-2312. doi: https://doi.org/10.1016/j.neucom.2005.12.010. URL https:// www.sciencedirect.com/science/article/pii/S0925231205003309. New Issues in Neurocomputing: 13th European Symposium on Artificial Neural Networks. Akash Saha and Balamurugan Palaniappan. Learning with operator-valued kernels in reproducing kernel krein spaces. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 13856–13866. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ 9f319422ca17b1082ea49820353f14ab-Paper.pdf. Neil Seward. Nba movement data github repo. https://github.com/sealneaward/nba-movement-data, 2018. Accessed: 15/10/2023. John Shawe-Taylor, Nello Cristianini, et al. *Kernel methods for pattern analysis*. Cambridge university press, 2004. Alexander J. Smola and Bernhard Schölkopf. Sparse greedy matrix approximation for machine learning. In Pat Langley (ed.), *Proceedings of the Seventeenth International Conference on Machine Learning (ICML* 2000), Stanford University, Stanford, CA, USA, June 29 - July 2, 2000, pp. 911–918. Morgan Kaufmann, 2000. Aad W Van Der Vaart and Jon A Wellner. *Weak convergence and empirical processes: with applications to* statistics, volume 3. Springer, 1996. Mariana Vargas Vieyra. Robust estimation of laplacian constrained gaussian graphical models with trimmed non-convex regularization. In Antonio Salmerón and Rafael Rumí (eds.), *Proceedings of The* 11th International Conference on Probabilistic Graphical Models, volume 186 of *Proceedings of Machine* Learning Research, pp. 85–96. PMLR, 05–07 Oct 2022. URL https://proceedings.mlr.press/v186/ vargas-vieyra22a.html. Christopher Williams and Matthias Seeger. Using the nyström method to speed up kernel machines. In T. Leen, T. Dietterich, and V. Tresp (eds.), *Advances in Neural Information Processing Systems*, volume 13. MIT Press, 2000. URL https://proceedings.neurips.cc/paper_files/paper/2000/file/ 19de10adbaa1b2ee13f77f679fa1483a-Paper.pdf. Qiang Wu and Ding-Xuan Zhou. Analysis of support vector machine classification. Journal of Computational Analysis & Applications, 8(2), 2006. Fang Yao, Hans-Georg Müller, and Jane-Ling Wang. Functional data analysis for sparse longitudinal data. *Journal of the American Statistical Association*, 100(470):577–590, 2005. doi: 10.1198/ 016214504000001745. URL https://doi.org/10.1198/016214504000001745. Jiaxi Ying, José Vinícius de Miranda Cardoso, and Daniel Palomar. Nonconvex sparse graph learning under laplacian constrained graphical model. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 7101–7113. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ 4ef42b32bccc9485b10b8183507e5d82-Paper.pdf. Yiming Ying and Ding-Xuan Zhou. Learnability of gaussians with flexible variances. Journal of Machine Learning Research, 8(9):249–276, 2007. URL http://jmlr.org/papers/v8/ying07a.html. Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. *Biometrika*, 94 (1):19–35, 03 2007. ISSN 0006-3444. doi: 10.1093/biomet/asm018. URL https://doi.org/10.1093/ biomet/asm018. Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. *The Annals of Statistics*, 38(2):894 - 942, 2010. doi: 10.1214/09-AOS729. URL https://doi.org/10.1214/09-AOS729. Yangjing Zhang, Kim-Chuan Toh, and Defeng Sun. Learning graph laplacian with mcp. *arXiv preprint* arXiv:2010.11559, 2020. Boxin Zhao, Y. Samuel Wang, and Mladen Kolar. Direct estimation of differential functional graphical models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/ 7d6044e95a16761171b130dcb476a43e-Paper.pdf. ## A Appendix A.1 Functonal Regression With Known Graph Structure In this section, we motivate functional regression problem with known graph structure. Consider a system where a set of input functions determines the output (or response) function. Let the system be modeled based on p input functional variables x1(t), x2(t)*, . . . , x*p(t), where xi ∈ L 2([0, 1]), i ∈ [p]. A functional response variable y(t) is used to model output of the system where y ∈ L 2([0, 1]). (Note that [0, 1] can be replaced with any closed time interval based on the application.) The undirected graph structure of the functional input variables is represented by G = (*V, E*), where V = {v1, v2*, . . . , v*p} and E = {{vi, vj}|viis connected to vj , 1 ≤ *i, j* ≤ p} is the edge set which characterizes the underlying relationship between the variables. Note that the notation for an edge uses an unordered pair {vi, vj} which characterizes the undirected nature of the graph G. In order to model the relation between functional input variables x1, x2*, . . . , x*p and functional response variable y, we use the following map F: $$y=F(x_{1},x_{2},\ldots,x_{p},G).$$ $$(102)$$ y = F(x1, x2, . . . , xp, G). (102) Note that F now depends explicitly on the graph G in addition to the input functions x1, x2*, . . . , x*p. Here, we consider a scenario where G is known. Recall the example of a manufacturing factory, where the output of emissions depends on the metrics of different components involved in the manufacturing process. The graph G is determined in this case by understanding the components which are connected during the manufacturing process. We consider the notations x = (x1, x2, . . . , xp) ∈ X (= (L 2([0, 1]))p) and y ∈ Y (= L 2([0, 1])) to represent an arbitrary sample (*x, y*). To learn the mapping F, consider the training data of n samples given as (x (i), y(i)) n i=1, where x (i) = (x (i) 1 , x (i) 2 , . . . , x (i) p ) ∈ X and y (i) ∈ Y. In order to learn F, we develop an operator-valued kernel which can leverage the structural information of G. Towards this, we first introduce an operator-valued kernel which maps the elements of *X × X* to a set of bounded linear operators over the output space Y, denoted by L(Y). We formally define OVK as follows. Definition A.1 (**Operator-valued Kernel**). (Kadri et al., 2016) An L(Y)-valued kernel K on X 2is a function K(*., .*) : *X × X → L*(Y), satisfying the following properties: 1. K is Hermitian, that is ∀w, z ∈ X , K(*w, z*) = K(*z, w*) ∗( ∗ denotes the adjoint operator), 2. K is positive semi-definite on X 2, that is K is Hermitian and for every natural number r and all {(w (i), u(i))i∈[r]*} ∈ X × Y*, the matrix with (*i, j*)-th entry given by hK(w (i), w(j))u (i), u(j)iY is positive semi-definite. Constructing an operator-valued kernel based on Definition A.1 is a challenge as verifying both properties of being Hermitian and positive semi-definiteness becomes non-trivial. A construction of OVK that satisfies both the properties in Definition A.1 has been proposed in (Lian, 2007; Kadri et al., 2016). The OVK construction in (Lian, 2007; Kadri et al., 2016) uses a scalar-valued kernel k1 on *X × X* and a HilbertSchmidt integral (HSI) operator defined on the output space Y, and is given as follows: $$(K(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime})\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ where k2 inside the HSI operator R 1 0 k2(*s, t*)y(s)ds is a scalar-valued kernel on R × R. If k1 is positive semi-definite and if k2 is positive semi-definite (implying that the HSI operator is positive semi-definite), then the construction in (103) is known to be positive semi-definite (Kadri et al., 2016). We will now adapt the OVK in (103) to include the graph structure information present in G. An obvious choice for using the influence of graphical structure G in the functional regression task is to use the adjacency matrix of G, but the adjacency matrix of G not being necessarily positive semi-definite makes its utility restrictive. The Laplacian matrix of a graph G, on the other hand is useful in this respect as it has the desired property of $$(103)$$ being positive semi-definite which is useful in a kernel-based learning framework. The Laplacian matrix of an undirected graph G = (*V, E*), with V as the node set and E as edge set is defined as L = D − A, where D = diag(deg(v1), deg(v2)*, . . . ,* deg(vp)) is the degree matrix and A is the adjacency matrix of the graph G. The elements of L are given by $$L_{i,j}=\begin{cases}\deg(v_{i}),&\text{if}i=j,\\ -1,&\text{if}i\neq j\text{and}\{v_{i},v_{j}\}\in E,\\ 0,&\text{otherwise.}\end{cases}\tag{1}$$ $$(104)$$ We propose to incorporate the graphical structure in Equation (103) within the scalar-valued kernel k1 itself, as follows: $$k_{1}(x,x^{\prime};G)=\gamma x^{\top}Lx^{\prime},\ \gamma>0,$$ $$=\gamma\sum_{i,j=1}^{p}\int_{0}^{1}x_{i}(t)L_{ij}x^{\prime}_{j}(t)dt.$$ $$(105)$$ $$(106)$$ The construction of k1 in (105) involves capturing the graphical structure of G using L in x >Lx0. We use an equivalent expression h*x, Lx*0ip := x >Lx0, where the inner product h·, ·ip : *X × X →* R is defined as h*x, x*0ip =Pp i=1 R 1 0 xi(t)x 0 i (t)dt. The inner product h*x, x*0ip measures the similarity between x and x 0in X . Let L = [L1, L2*, . . . , L*p], where Li represents the i-th column of L, then Lx0is computed based on standard matrix-vector multiplication where elements of L are multiplied with x 0 by using scalar multiplication and addition of functions as Lx0 =Pp i=1 Lix 0 i . Equation (105) provides a tool to measure similarity between x, x0 ∈ X where the interactions are encoded in the underlying graph G using the Laplacian matrix L. If k1 defined in (105) can be proved to be a valid positive semi-definite scalar-valued kernel on *X × X* , then an OVK can be defined similar to the construction in Equation (103). Next, we prove that such a construction is indeed possible. Proposition A.1. For an underlying graph G = (*V, E*) with |V | = p, functional variables x = (x1, x2, . . . , xp) ∈ X (= (L 2([0, 1]))p) and a functional response variable y ∈ Y (= L 2([0, 1])), consider an operator-valued kernel K : *X × X → L*(Y) defined as $$(K(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime};G)\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ the proof.) $\blacksquare$ where k1(*x, x*0; G) = γx>Lx0*, γ >* 0 and k2 is a positive semi-definite scalar-valued kernel on R × R. Then K is positive semi-definite. Proof. Please see Appendix A.3 for the proof. Recall the construction of scalar-valued radial basis function (RBF) kernel defined over R d × R d(d ∈ Z+) as e −γkx−x 0k 2, for γ > 0, x, x 0 ∈ R d based on the kernel x >x. Similar to that construction, we now describe an extension for kernel k1 defined in (105). The kernel notation k1(*x, x*0; G) = γx>Lx0is overloaded to represent the following RBF-type kernel: $$\square$$ $$k_{1}(x,x^{\prime};G)=e^{-\gamma(x-x^{\prime})^{\top}L(x-x^{\prime})},\ \gamma>0.\tag{107}$$ capturing the interaction of *x, x*0 using L (see Appendix A.3). The RBF-type kernel in (107) is an improved version of the kernel in (105), as it can approximate higher dimensional relationships better owing to the exponential nature and shift invariant property given by k1(x+*h, x*0+h; G) = k1(x, x0; G), ∀x, x0, h ∈ X . Note that when computing (x − x 0) >L(x − x 0), where x = (x1, . . . , xp), x0 = (x 0 1 , . . . , x0p ) ∈ X , the interactions between the unlike pair (xi, x0j ), i 6= j would negate the influence of the like pair (xi, x0i ), because of the structure of L (defined in (104)). Therefore, we propose to use a diagonally perturbed Laplacian (Bapat et al., 2001) to aid the functional regression task performance. Perturbed Laplacians have found applications in spectral clustering, analysis of graphs (Kurras et al., 2014) and missing link prediction in networks (Aliakbarisani et al., 2022). A natural perturbation of Laplacian for the scalar-valued kernel k1 in (107) involves the degree matrix D leading to the following definition for k1: $$k_{1}(x,x^{\prime};G)=e^{-\gamma(x-x^{\prime})^{\top}(L+\mathbb{D})(x-x^{\prime})},\ \gamma>0,\ \mathrm{for}\ x,x^{\prime}\in\mathcal{X}.$$ 0)*, γ >* 0, for x, x0 ∈ X . (108) Incorporating D with L in k1 improves the representation of individual components of *x, x*0in prediction of y as the like pair (xi, x0i ) gets weighed by Di,i + Li,i in the kernel expression which compensates for the negation effect described above. This enables us to define a family of operator-valued kernels discussed next, which is induced by the graphical structure information. Definition A.2 (**Graph-induced Operator-valued Kernel for known** G). A graph-induced operatorvalued kernel is defined as $$(108)$$ $$(K^{G}(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime};G)\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ where k2 is a scalar-valued kernel on R 2, G is a graph associating the p input functions in (x1, . . . , xp) ∈ X and k1 is defined as $$(109)$$ $$k_{1}(x,x^{\prime};G)=e^{-\gamma(x-x^{\prime})^{\top}(L+\mathbb{D})(x-x^{\prime})},$$ for γ > 0 where L is the Laplacian matrix and D is the degree matrix of the graph G. KG associates a pair x, x0 ∈ X with output function y ∈ Y where G is the graph which incorporates the interaction of p constituent input functions of x and x 0. The addition of D to L in (108) preserves the positive semi-definiteness of the kernel k1 as D is a diagonal matrix with positive entries. Using graph-induced OVK for functional regression problem requires associating KG with a function-valued reproducing kernel Hilbert space where the map F from (102) resides. The existence of a bijection between the set of positive semidefinite (Mercer) operator-valued kernels and function-valued reproducing kernel Hilbert spaces has been established in (Kadri et al., 2016). A function-valued reproducing kernel Hilbert space (RKHS) is defined as follows. Definition A.3 (**Function-valued RKHS**). (Kadri et al., 2016) A Hilbert space H of functions from X to Y is called a reproducing kernel Hilbert space if there is a positive semi-definite L(Y)-valued kernel K on X 2such that: $$1.{\mathrm{~the~function}}z\mapsto K(w,z)g{\mathrm{~belongs~to~}}{\mathcal{H}},\,\forall w,z\in{\mathcal{X}}{\mathrm{~and~}}\forall g\in{\mathcal{Y}},$$ 2. for every $F\in{\cal H},w\in{\cal X}$ and $g\in{\cal Y},\ \langle F,K(w,.)g\rangle_{\cal H}=\langle F(w),g\rangle_{\cal Y}$. (**reproducing property**) Property 1 in Definition A.3 provides an association of OVK K with the space H which contains maps from X to Y. The reproducing property in Definition A.3 helps to relate the inner product in H to the inner product in Y. Using Proposition A.1 and Definition A.3, the positive semi-definiteness of KG constructed using (109) ensures that there exists a unique RKHS HKG corresponding to KG (Theorem 1 (Kadri et al., 2016)). This enables us to formulate a learning problem in HKG as follows: $$\widetilde{F}_{\lambda}=\operatorname*{arg\,min}_{F\in\mathcal{H}_{K}G}\sum_{i=1}^{n}\|y^{(i)}-F(x^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F\|_{\mathcal{H}_{K}G}^{2},\tag{1}$$ where HKG is the function-valued RKHS induced by the graph-induced operator-valued kernel KG and k · kHKG denotes the norm in HKG . In order to solve the optimization problem, we utilize the reproducing property of operator-valued kernel (Definition A.3) given by $$\langle F,K^{G}(x,.)h\rangle_{{\mathcal{H}}_{K G}}=\langle F(x),h\rangle_{{\mathcal{Y}}},\ \forall x\in{\mathcal{X}},h\in{\mathcal{Y}}.$$ The minimization problem in (110) is not tractable using a search based procedure over HKG , hence the reproducing property of operator-valued kernel KG in (111) can be leveraged to simplify the problem and characterize the solution of (110) using elements of the output space Y. We now provide a representer theorem for the minimization problem (110) in the function-valued RKHS HKG corresponding to the operator-valued kernel KG. $$(110)$$ $$(111)$$ Theorem A.2 (**Representer theorem**). Let KG be an operator-valued kernel and HKG be its corresponding function-valued reproducing kernel Hilbert space. The solution Feλ ∈ HKG of the regularized optimization problem: Feλ = arg minF ∈HKG Pn i=1 ky (i) − F(x (i))k 2 Y + λkFk 2 HKG , where λ > 0, F ∈ HKG , has the following form $$\widetilde{F}_{\lambda}(.)=\sum_{i=1}^{n}K^{G}(x^{(i)},.)u_{i},\ \text{where}\ u_{i}\in\mathcal{Y}.\tag{1}$$ Proof. Please find the proof in Appendix A.4. Using the representer theorem with reproducing property of operator-valued kernel, we provide below (Appendix A.8) the linear system of operators to determine ui, for i ∈ [n]: $$(112)$$ $$\square$$ $$(\mathbf{K}+\lambda I)\mathbf{u}=\mathbf{y},$$ $$(113)$$ (K + λI)u = y, (113) where K is a block operator matrix given by Ki,j = KG(x (i), x(j)), u = [u1, u2*, . . . , u*n] > and y = [y (1), y(2)*, . . . , y*(n)] >. A simple inversion of K + λI may not be straightforward to obtain u in (113), for an arbitrary choice of OVK KG. Saha & Palaniappan (2020) proposed an iterative operator minimum residual (OpMINRES) algorithm which adapts a Krylov subspace minimal residual (MINRES) algorithm to solve operator-based linear system of the form in (113). We delve deeper into the details of OpMINRES in Section 5.1.1. With the learned u obtained by solving (113), for any sample xˆ ∈ X , the prediction is given by $$\tilde{F}_{\lambda}(\hat{x})=\sum_{i=1}^{n}K^{G}(x^{(i)},\hat{x})u_{i},\text{where}u_{i}\in\mathcal{Y}.\tag{114}$$ In (114), functions ui ∈ Y for i ∈ [n], can be considered as basis functions for the space Y and operators KG(x (i), xˆ) for i ∈ [n], correspond to operator-valued coefficients of the basis functions. The term KG(x (i), xˆ)ui amounts to the total contribution of sample x (i)in determining the prediction for xˆ. ## A.2 Mathematical Derivations We cover derivations which can transform the objective function where the optimization takes place over the output space Y instead of the RKHS HK induced by the OVK K. The following derivation has been referred in Section 5.1 to obtain (5) from (4), where we simplify the expression J(*F, L, D*) to obtain J(u*, L, D*). min F,L,D J(F, L, D) =Xn i=1 ky (i) − F(x (i))k 2 Y + λkFk 2 HK + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F (115) =⇒ min u,L,D J(u, L, D) =Xn i=1 y (i) − Xn j=1 K(x (i), x(j))uj 2 Y + λ *Xn i=1 K(x (i), .)ui, Xn j=1 K(x (j), .)uj + HK (116) + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F = Xn i=1 y (i) − Xn j=1 K(x (i), x(j))uj 2 Y + λ Xn i,j=1 hK(x (i), x(j))ui, uj iY (117) + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F . The expression (116) is obtained from (115) by using the representer theorem and the reproducibility property of K is utilized to obtain (117). Similarly, we obtain the objective function from (29) in Section 5.1.5 using representer theorem and the reproducibility property of K given as follows: $$J(F^{1},F^{2},\ldots,F^{s},L,D)=\sum_{l=1}^{s}\left(\sum_{i=1}^{n}\|y_{l}^{(i)}-F^{l}(\mathbf{x}^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F^{l}\|_{\mathcal{H}_{K}^{l}}^{2}\right)$$ $$\quad+\sum_{l=1}^{r}\rho_{l}\left(\sum_{i=1}^{n}x_{l}^{(i)}{}^{\top}Lx_{l}^{(i)}\right)+\rho_{D}\|D\|_{F}^{2}$$ $$\Longrightarrow J(\mathbf{u}^{1},\mathbf{u}^{2},\ldots,\mathbf{u}^{s},L,D)=$$ =⇒ J(u 1, u 2, . . . , u s, L, D) = l=1 Xn i=1 y (i) l − Xn j=1 Kl(x (i), x (j))u l j 2 Y + λ Xn i,j=1 hKl(x (i), x (j))u l i , Kl(x (j), .)u l j iY (118) Xs + Xr k=1 ρk Xn i=1 x (i) k >Lx(i) k ! + ρDkDk 2 F l=1 Xn i=1 y (i) l − Xn j=1 Kl(x (i), x (j))u l j 2 Y + λ Xn i,j=1 hKl(x (i), x (j))u l i , ulj iY (119) = Xs + Xr k=1 ρk Xn i=1 x (i) k >Lx(i) k ! + ρDkDk 2 F . ## A.3 Positive Semi-Definiteness Of Ovk In this section, we cover the proof of Proposition A.1 in Section A.1 which helps in building a graph-induced OVK. The major contribution is based on showing that the proposed scalar-valued kernel in graph-induced OVK is a valid positive semi-definite kernel on *X × X* . We recall Proposition A.1 below. Proposition A.3. For an underlying graph G = (*V, E*) with |V | = p, functional variables x = (x1, x2, . . . , xp) ∈ X (= (L([0, 1]))p) and a functional response variable y ∈ Y(= L 2([0, 1])), consider an operator-valued kernel K : *X × X → L*(Y) defined as $$(K(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime};G)\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ where k1(*x, x*0; G) = γx>Lx0*, γ >* 0 and k2 is a positive semi-definite scalar-valued kernel on R × R. Then K is positive semi-definite. Proof. In order to prove the positive semi-definiteness of K, it is sufficient to prove the positive semidefiniteness of k1 based on the construction followed in (Kadri et al., 2016). We focus on the term x >Lx0 in k1. Let {x 1, x2, . . . , xl*} ⊂ X* be a finite set of points. Consider any vector α ∈ R l and K be a l × l kernel matrix given by K = [k1(x i, xj; G)]i,j . Now, we recall that the space L 2([0, 1]) has the following inner product and norm: $$\langle f,g\rangle=\int_{0}^{1}f(t)g(t)dt,\ f,g\in L^{2}([0,1]),$$ $$\|f\|=\left(\int_{0}^{1}f^{2}(t)dt\right)^{1/2}.$$ We define h., .ip : *X × X →* R as $$\langle x,w\rangle_{p}=\langle x_{1},w_{1}\rangle+\langle x_{2},w_{2}\rangle+\cdots+\langle x_{p},w_{p}\rangle$$ $$=\sum_{i=1}^{p}\int_{0}^{1}x_{i}(t)w_{i}(t)dt,$$ $$(120)$$ $$(121)$$ where x = (x1, x2, . . . , xp) ∈ X , w = (w1, w2, . . . , wp) ∈ X . We can show that h*., .*ip is an inner product on X over the field R. - The symmetry of h*., .*ip follows from the definition in equation (121) as: $$\langle x,w\rangle_{p}=\langle w,x\rangle_{p}.$$ - Linearity: $$\langle ax+bw,z\rangle_{p}=\langle ax_{1}+bw_{1},z_{1}\rangle+\langle ax_{2}+bw_{2},z_{2}\rangle+\cdots+\langle ax_{p}+bw_{p},z_{p}\rangle$$ $$=\sum_{i=1}^{p}\int_{0}^{1}(ax_{i}(t)+bw_{i}(t))z(t)dt$$ $$=a\sum_{i=1}^{p}\int_{0}^{1}x_{i}(t)z_{i}(t)dt+b\sum_{i=1}^{p}\int_{0}^{1}w_{i}(t)z_{i}(t)dt$$ $$=a\langle x,z\rangle_{p}+b\langle w,z\rangle_{p}.$$ - Positive semi-definiteness: For any non-zero x ∈ X , $$\langle x,x\rangle_{p}=\langle x_{1},x_{1}\rangle+\langle x_{2},x_{2}\rangle+\cdots+\langle x_{p},x_{p}\rangle$$ $$=\sum_{i=1}^{p}\int_{0}^{1}x_{i}^{2}(t)dt$$ $$=\sum_{i=1}^{p}\|x_{i}\|^{2}\geq0.$$ The norm induced by h*., .*ip on X is given by $$\|x\|_{P}={\sqrt{\langle x,x\rangle_{P}}}$$ $$=\left(\sum_{i=1}^{P}\int_{0}^{1}x_{i}^{2}(t)d t\right)^{1/2}.$$ On considering x i, xj ∈ X , L ∈ R p×p, the quantity x i>Lxjcan be defined as x i>Lxj =x i 1 x i 2. . . xip L1,1 L1,2 . . . L1,p L2,1 L2,2 . . . L2,p ............ Lp,1 Lp,2 . . . Lp,p x j 1 x j 2 ... x j p =x i 1 x i 2. . . xip L1,1x j 1 + L1,2x j 2 + · · · + L1,px j p L2,1x j 1 + L2,2x j 2 + · · · + L2,px j p ... Lp,1x j 1 + Lp,2x j 2 + · · · + Lp,1x j p = hx i 1 , L1,1x j 1 + L1,2x j 2 + · · · + L1,px j p i + hx i 2 , L2,1x j 1 + L2,2x j 2 + · · · + L2,px j p i + · · · + hx i p , Lp,1x j 1 + Lp,2x j 2 + · · · + Lp,px j p i (122) = hx i, Lxjip. (123) Note that from equations (122 and 123), Lxj ∈ X . Now based on the positive semi-definiteness of L, we can decompose L = V >ΛV , where V is an orthogonal matrix and Λ is the diagonal matrix containing the non-negative eigenvalues. $$x^{i^{\top}}Lx^{j}=\langle x^{i},Lx^{j}\rangle_{p}\tag{124}$$ $$=\langle x^{i},V^{\top}\Lambda Vx^{j}\rangle_{p}$$ $$=x^{i^{\top}}V^{\top}\Lambda Vx^{j}.$$ Consider A = √ΛV , $$x^{i^{\top}}L x^{j}=x^{i^{\top}}V^{\top}\Lambda V x^{j}$$ $$=x^{i^{\top}}A^{\top}A x^{j}$$ $$=\langle A x^{i},A x^{j}\rangle_{p}.$$ $$(125)$$ = hAxi*, Ax*jip. (125) Note that x i>Lxj = hAxi*, Ax*jip = hφ(x i), φ(x j)ip (say) which is a characterization of kernels. For a finite m ∈ N, let G ∈ R m×m be the Gram (kernel) matrix induced by using h*Ax, Aw*i as a kernel. Let β ∈ R m, then we have: $$\beta'\mathcal{G}\beta=\sum_{i,j=1}^{m}\beta_i\mathcal{G}_{i,j}\beta_j$$ $$=\sum_{i,j=1}^{m}\beta_i\beta_j\langle\phi(x^i),\phi(x^j)\rangle_p$$ $$=\left\langle\sum_{i=1}^{m}\beta_i\phi(x^i),\sum_{j=1}^{m}\beta_j\phi(x^j)\right\rangle_p$$ $$=\left\|\sum_{i=1}^{m}\beta_i\phi(x^i)\right\|_p^2.$$ This illustrates that x i>Lxjis positive semi-definite and defines a scalar-valued kernel. Now, using the properties of a scalar-valued kernel, γx>Lx0, with γ > 0 is a valid positive semi-definite kernel. Thus the kernel given by $$k_{1}(x,x^{\prime};G)=\gamma x^{\top}L x^{\prime},\ \forall x,x^{\prime}\in{\mathcal{X}},\gamma>0,$$ is a valid positive semi-definite scalar kernel on *X × X* . Therefore, by the construction of OVK used in (Kadri et al., 2016), $$(K(x,x^{\prime})y)(t)=k_{1}(x,x^{\prime};G)\int_{0}^{1}k_{2}(s,t)y(s)d s,$$ defines an operator-valued kernel on *X × X* . $$(126)$$ Now, as exponential of a scalar-valued kernel provides us another valid scalar-valued kernel (Shawe-Taylor et al., 2004), we consider the following normalized version: exp(γx>Lx0) qexp(γx>Lx) exp(γx0>Lx0) = exp γx>Lx0 − γ 2 x >Lx − γ 2 x 0>Lx0 = exp h− γ 2 x >Lx + x 0>Lx0 − 2x >Lx0i = exp h− γ 2 kAxk 2 p + kAx0k 2 p − 2hAx, Ax0ip i = exp h− γ 2 kAx − Ax0k 2 p i = exp h− γ 2 (hA(x − x 0), A(x − x 0)ip) i = exp h− γ 2 (hA(x − x 0), A(x − x 0)ip) i = exp h− γ 2 (x − x 0) >L(x − x 0) i. (127) As exponential of a scalar-valued kernel and normalization preserves the validity of a scalar-valued kernel, we claim that the kernel in (127) defines a valid scalar-valued kernel. This illustrates that e −γ(x−x 0) >L(x−x 0) is a valid positive semi-definite scalar-valued kernel on *X × X* for γ > 0 and an OVK with k1(*x, x*0; G) = e −γ(x−x 0) >L(x−x 0), ∀x, x0 ∈ X *, γ >* 0 in (126) provides a valid graph-induced OVK. ## A.4 Proof Of Representer Theorem We provide a proof for the Representer theorem (Theorem A.2) stated in Section A.1. In the proof we use the Gateaux derivative in an associated function-valued reproducing kernel Hilbert space for an operator-valued kernel. We recall the theorem statement first and then provide a proof. Theorem A.4 (**Representer theorem**). Let KG be an operator-valued kernel and HKG be its corresponding function-valued reproducing kernel Hilbert space. The solution Feλ ∈ HKG of the regularized optimization problem: $$\tilde{F}_{\lambda}=\operatorname*{arg\,min}_{F\in\mathcal{H}_{K^{G}}}\sum_{i=1}^{n}\|y^{(i)}-F(x^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F\|_{\mathcal{H}_{K^{G}}}^{2},$$ where λ > 0, F ∈ HKG , has the following form $$(128)$$ $$\widetilde{F}_{\lambda}(.)=\sum_{i=1}^{n}K^{G}(x^{(i)},.)u_{i},\ \text{where}\ u_{i}\in\mathcal{Y}.\tag{1}$$ $$(129)$$ Proof. We use the Gateaux derivative to obtain the condition for stationary point of the functional Jλ(F), given by $$J_{\lambda}(F)=\sum_{i=1}^{n}\|y^{(i)}-F(x^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F\|_{\mathcal{H}_{K}},\ \ \forall F\in\mathcal{H}_{K^{G}}.$$ In order to find the critical points in HKG , we use Gateaux derivative DG of Jλ with respect to F in the direction H, which is defined by $$D_{\mathcal{G}}J_{\lambda}(F,H)=\operatorname*{lim}_{\tau\rightarrow0}\frac{J_{\lambda}(F+\tau H)-J_{\lambda}(F)}{\tau}.$$ Let Fe be the operator in HKG such that $$\widetilde{F}=\operatorname*{arg\,min}_{F\in\mathcal{H}_{K G}}J_{\lambda}(F)\implies D_{G}J_{\lambda}(F,H)=0,\;\;\forall H\in\mathcal{H}_{K^{G}}.$$ Jλ can be written as $$J_{\lambda}(F)=\sum_{i=1}^{n}G_{i}(F)+\lambda L(F),$$ and as DGJλ(*F, H*) = hDGJλ(F), HiHKG , ∀F, H ∈ HKG , we obtain the following. noitemsep L(F) = kFk 2 HKG = h*F, F*iHKG . Therefore we have limτ→0 hF + τH, F + τHiHKG − hF, FiHKG τ= 2hF, HiHKG =⇒ DGL(F) = 2F. noiitemsep Gi(F) = ky (i) − F(x (i))k 2 $\|y^{(i)}-\|y^{(i)}\|_{\mathcal{Y}}$. Then we have $$\lim_{\tau\to0}\frac{\|y^{(i)}-F(x^{(i)})-\tau H(x^{(i)})\|_{\mathcal{Y}}^{2}-\|y^{(i)}-F(x^{(i)})\|_{\mathcal{Y}}^{2}}{\tau}=-2\langle y^{(i)}-F(x^{(i)}),H(x^{(i)})\rangle_{\mathcal{Y}}\tag{130}$$ $$=-2\langle K^{G}(x^{(i)},.)(y^{(i)}-F(x^{(i)})),H\rangle_{\mathcal{H}_{K_{G}}}.\tag{131}$$ . Then we have $$(132)$$ , (132) $$=-2\langle K^{G}(x^{(i)},.)u_{i},H\rangle_{{\mathcal H}_{K_{G}}},$$ $$\implies D_{G}G_{i}(F)=-2K^{G}(x^{(i)},.)u_{i}.$$ $$\begin{array}{l}{(130)}\\ {\ }\\ {\ }\end{array}$$ We obtain equation (131) from equation (130) using the reproducibility property. In (131), we use ui = y (i) − F(x (i)) to get (132). Using 1, 2, and DGJλ1,λ2 (Fe) = 0, we obtain, Fe(.) = 1λ Pn i=1 KG(x (i), .)ui. The constant 1λ can be absorbed in functions ui's, such that Fe(.) = Pn i=1 KG(x (i), .)ui. We provide a proof for the extended represeneter theorem (Theorem 5.1) based on the arguments used in the earlier proof. We recall the theorem and then provide a proof. Theorem A.5 (**Extended representer theorem**). Let K be an operator-valued kernel as defined in (26) and HK = H1K ×H2K *×· · ·×H*sK be its corresponding function-valued reproducing kernel Hilbert space based on kernels k1, k1 2 , . . . , ks 2 . The solution F˜λ ∈ HK of the regularized optimization problem. $$\widetilde{F}_{\lambda}=\operatorname*{arg\,min}_{F=[F^{1},F^{2},\ldots,F^{s}]^{\top}\in\mathcal{H}_{K}}\sum_{l=1}^{s}\left(\sum_{i=1}^{n}\|y_{l}^{(i)}-F^{l}(\mathbf{x}^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F^{l}\|_{\mathcal{H}_{K}^{l}}^{2}\right),$$ where λ > 0, F = [F $,F^{2},\ldots,F^{s}]^{\top}\in\mathcal{H}_{K}=\mathcal{H}_{K}^{1}\times\mathcal{H}_{K}^{2}\times\cdots\times\mathcal{H}_{K}^{s}$, has the following form: $$\widetilde{F}_{\lambda}(.)=\begin{bmatrix}\widetilde{F}_{\lambda}^{1}(.)\\ \vdots\\ \widetilde{F}_{\lambda}^{n}(.)\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^{n}K^{1}(\mathbf{x}^{(i)},.)u_{i}^{1}\\ \vdots\\ \sum_{i=1}^{n}K^{n}(\mathbf{x}^{(i)},.)u_{i}^{n}\end{bmatrix},\text{where}u_{i}^{1},u_{i}^{2},\ldots,u_{i}^{n}\in\mathcal{Y}.\tag{133}$$ Proof. We use a similar argument as in case of the representer theorem proof. The Gateaux derivative is used to obtain the condition for stationary points which minimize Jλ(F) written as a sum of J l λ F l, for l ∈ [s], given by $$J_{\lambda}(F)=\sum_{l=1}^{s}J_{\lambda}^{l}\left(F^{l}\right)$$ $$=\sum_{l=1}^{s}\left(\sum_{i=1}^{n}\|y_{l}^{(i)}-F^{l}(\mathbf{x}^{(i)})\|_{\mathcal{Y}}^{2}+\lambda\|F^{k}\|_{\mathcal{H}_{K}^{l}}\right),\ \ \forall F\in\mathcal{H}_{K}.$$ In order to find the critical points in HK = H1K × H2K *× · · · × H*sK, we use Gateaux derivative DG of J l λ with respect to F lin the direction H ∈ HlK, which is defined by $$D_{\mathcal{G}}J_{\lambda}^{l}(F^{l},H)=\operatorname*{lim}_{\tau\rightarrow0}\frac{J_{\lambda}^{l}(F^{l}+\tau H)-J_{\lambda}^{l}(F^{l})}{\tau}.$$ Let Fel be the operator in HlK such that $$\widetilde{F}^{l}=\operatorname*{arg\,min}_{F^{l}\in\mathcal{H}_{K}^{l}}J_{\lambda}^{l}(F^{l})\implies D_{\mathcal{G}}J_{\lambda}^{l}(F^{l},H)=0,\;\;\forall H\in\mathcal{H}_{K}^{l},l\in[s].$$ Now, J l λ can be written as $$J_{\lambda}^{l}(F^{l})=\sum_{i=1}^{n}G_{i}^{l}(F^{l})+\lambda L^{l}(F^{l}),$$ and as DGJ l λ (F l, H) = hDGJ l λ (F l), HiHlK , ∀F, H ∈ HlK, l ∈ [s], we obtain the following. noitemsep L l(F l) = kF lk 2 HlK = hF l, FliHlK . Therefore we have limτ→0 hF l + τH, Fl + τHiHlK − hF l, FliHlK τ= 2hF l, HiHlK =⇒ DGL l(F l) = 2F l. noiitemsep Gli (F l) = ky (i) l − F l(x (i))k 2 Y . Then we have limτ→0 ky (i) l − F(x (i)) − τH(x (i))k 2 Y − ky (i) l − F(x (i))k 2 Y τ= −2hy (i) l − F(x (i)), H(x (i))iY (134) = −2hKl(x (i), .)(y (i) l − F(x (i))), HiHlK (135) $$=-2\langle K^{l}({\bf x}^{(i)},.)u^{l}_{i},H\rangle_{{\cal H}^{l}_{K}},\tag{136}$$ $\Longrightarrow\ D_{\cal G}G^{l}_{i}(F^{l})=-2K^{l}({\bf x}^{(i)},.)u^{l}_{i}.$ We obtain equation (135) from equation (134) using the reproducibility property. In (135), we use u l i = y (i) l − F l(x (i)) to get (136). Using 1, 2, and DGJ l λ (Fel) = 0, we obtain, Fel(.) = 1λ Pn i=1 Kl(x (i), .)u l i . The constant 1λ can be absorbed in functions u l i 's, such that Fel(.) = Pn i=1 Kl(x (i), .)u l i , l ∈ [s]. Therefore, we obtain the following: $$\widetilde{F}_{\lambda}(.)=\begin{bmatrix}\widetilde{F}_{\lambda}^{1}(.)\\ \vdots\\ \widetilde{F}_{\lambda}^{s}(.)\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^{n}K^{1}(\mathbf{x}^{(i)},.)u_{i}^{1}\\ \vdots\\ \sum_{i=1}^{n}K^{s}(\mathbf{x}^{(i)},.)u_{i}^{s}\end{bmatrix},\ \text{where}\ u_{i}^{1},u_{i}^{2},\ldots,u_{i}^{s}\in\mathcal{Y}.$$ ## A.5 Properties Of A, B, C, H, C And M **Matrices** This section deals with the construction and properties of matrices *A, B, C, H, c* and M which have been used in the framework. Determining A: A is a matrix which is constructed to represent the constraint L1 = 0 as A vech(L) = 0. For a given vech(L), we can obtain the condition L1 = 0 with Avech(L) = 0 based on the following construction: $$A={\begin{bmatrix}e_{p}&0_{p-1}&0_{p-2}&\dots&0_{2}&0_{1}\\ e_{p}^{2}&e_{p-1}&0_{p-2}&\dots&0_{2}&0_{1}\\ e_{p}^{3}&e_{p-1}^{2}&e_{p-2}&\dots&0_{2}&0_{1}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ e_{p}^{p}&e_{p-1}^{p-1}&e_{p-2}^{p-2}&\dots&e_{2}^{2}&e_{1}\end{bmatrix}}$$ , where 0k = [0, 0*, . . . ,* 0] ∈ R 1×k denotes a row vector containing k zeros, ek = [1, 1*, . . . ,* 1] ∈ R 1×k denotes a row vector containing k ones, e i k = [0, . . . , 1*, . . . ,* 0] ∈ R 1×k denotes a row vector with 1 in the i-th position and zeros elsewhere, resulting in A ∈ R p× p(p+1) 2 . ## Determining B: B is a matrix which is constructed to represent the constraint Li,j ≤ 0, i 6= j as B vech(L) ≤ 0. For a given vech(L), we reformulate the condition Li,j ≤ 0, ∀i 6= j as Bvech(L) ≤ 0. This can be achieved with the following construction: 2 01 $\begin{bmatrix}e^2_p&0_{p-1}&0_{p-2}&\dots&0_3\\ e^3_p&0_{p-1}&0_{p-2}&\dots&0_3\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ e^p_p&0_{p-1}&0_{p-2}&\dots&0_3\\ 0_p&e^2_{p-1}&0_{p-2}&\dots&0_3\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0_p&e^{p-1}_{p-1}&0_{p-2}&\dots&0_3\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0_p&0_{p-1}&0_{p-2}&\dots&e^2_3\\ 0_p&0_{p-1}&0_{p-2}&\dots&e^3_3\\ 0_p&0_{p-1}&0_{p-2}&\dots&0_3\\ \end{bmatrix}$ $$\mathbf{\Sigma}_{1}^{1}$$ $\begin{array}{c}\downarrow\\ \downarrow\\ 1\\ 1\end{array}$ . $${B}=$$ $$\begin{array}{c}1\\ 1\\ 1\\ 1\\ 1\end{array}$$ . $$\begin{array}{l}{0_{2}}\\ {0_{2}}\\ {\vdots}\\ {0_{2}}\\ {0_{2}}\\ {\vdots}\\ {0_{2}}\\ {\vdots}\\ {0_{2}}\\ {0_{2}}\\ {e_{2}^{2}}\end{array}$$ Note that B ∈ R p(p−1) 2 × p(p+1) 2 . ## Determining C: In section 5.1.3, C is a matrix which is used to deal with the constraint Dii ≥ 0, ∀i ∈ [p]. We construct a matrix C ∈ R p× p(p+1) 2 which consists of 0's and 1's satisfying Cvech(D) = *Diag*(D). For a given vech(L), we formulate the matrix C ∈ R p× p(p+1) 2 using 0's and 1's as follows: $$C=\begin{bmatrix}\leftarrow&C_{1}\colon&\rightarrow\\ \leftarrow&C_{2}\colon&\rightarrow\\ \leftarrow&C_{3}\colon&\rightarrow\\ \vdots&\vdots&\vdots\\ \leftarrow&C_{p}\colon&\rightarrow\end{bmatrix},$$ where the row Ci: ∈ R 1× p(p+1) 2 , and Ci: contains a 1 in the i(i+1) 2 -th position and zeros elsewhere. ## Determining H: In section 5.1.4, we require a matrix H ∈ R p(p+1) 2 × p(p+1) 2 comprising 0's and 1's such that Hvech(L) produces a vector having the same structure as vech(L) with 0's corresponding to diagonal entries of L and the same off-diagonal entries as in L. For a given vech(L), we formulate the matrix H ∈ R p(p+1) 2 × p(p+1) 2 using 0's and 1's as follows: $H=\begin{bmatrix}\leftarrow&H_1\!:&\to\\ \leftarrow&H_2\!:&\to\\ \leftarrow&H_3\!:&\to\\ \vdots&\vdots&\vdots\\ \leftarrow&H_{\frac{p(p+1)}{2}}\!:&\to\end{bmatrix},$ 4. , where the rows Hi: ∈ R 1× p(p+1) 2 , and Hi: contains a 0 in the i(i+1) 2 -th position and 1's elsewhere. Determining c: In section 5.1.4, we consider a vector c ∈ R p(p+1) 2 consisting of 0's and 1's, such that c >vech(L) = trace(L). The vector c ∈ R p(p+1) 2 is defined for obtaining the diagonal entries of D from vech(D) using 0's and 1's such that $$c^{\top}\mathrm{{vech}}(D)=d_{1,1}+d_{2,2}+\,\cdots+d_{p,p},$$ where c is given as follows: $$c=\left[c_{1},c_{2},\ldots,c_{p-1},c_{p}\right]^{\top},$$ $\overleftarrow{2}$ b. where c ∈ R p(p+1) 2 such that ci ∈ R 1× i(i+1) 2 has all 0's with 1 in i(i+1) 2 -th element. Determining M: M is a matrix which is constructed to transform vech(L) to vec(L) using Mvech(L) = vec(L). Let the Laplacian of the graph of p nodes be denoted by L which is symmetric, $L=\begin{bmatrix}l_{1,1}&l_{2,1}&l_{3,1}&\cdot&\cdot&\cdot&l_{p,1}\\ l_{2,1}&l_{2,2}&l_{3,2}&\cdot&\cdot&\cdot&l_{p,2}\\ l_{3,1}&l_{3,2}&l_{3,3}&\cdot&\cdot&l_{p,3}\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ l_{p,1}&l_{p,2}&l_{p,3}&\cdot&\cdot&\cdot&l_{p,p}\\ \end{bmatrix}$ . Now, vec(L) = [l1,1, . . . , lp,1, l2,1, . . . , lp,2, . . . , lp,1, . . . , lp,p] > is obtained by stacking the columns and vech(L) = [l1,1, . . . , lp,1, l2,2, . . . , lp,2, l3,3, . . . , lp,3, . . . , lp,p] >, which is obtained by eliminating the super-diagonal elements and then stacking them up. As illustrated, vec(L) ∈ R p 2and vech(L) ∈ R p(p+1)/2. We can find a matrix M which satisfies $=\;\;1-\frac{1}{2}\frac{1}{2}$ . vec(L) = Mvech(L). Hence observe that M ∈ R p 2× p(p+1) 2 . Therefore, $${\mathcal{M}}^{\top}=\sum_{i\geq j}v_{i,j}(\operatorname*{vec}(T_{i,j}))^{\top},$$ where vi,j is a vector of order 12 p(p + 1) having the value 1 in the (j − 1)n + i − 1 2 j(j − 1)-th position and 0 elsewhere and Ti,j is a p × p matrix with 1 in positions (*i, j*) and (*j, i*), and 0 elsewhere. The following relations are used in order to write the expression of objective function in terms of vec(L). $$(x^{(i)}-x^{(j)})^{\top}(L+D)(x^{(i)}-x^{(j)})=\text{vec}((x^{(i)}-x^{(j)})(x^{(i)}-x^{(j)})^{\top})^{\top}\text{vec}(L+D),$$ $$=\text{vec}((x^{(i)}-x^{(j)})(x^{(i)}-x^{(j)})^{\top})^{\top}(\text{vec}(L)+\text{vec}(D)),$$ $$x^{(i)\top}Lx^{(i)}=\text{vec}(x^{(i)}x^{(i)\top})^{\top}\text{vec}(L),$$ $$\|D\|_{F}=\text{vec}(D)^{\top}\text{vec}(D).$$ ## A.6 Derivation Of Gradients In this section, we cover the derivation of ∇vech(L)J and ∇vech(D)J which are required in Algorithm 2. We use (103) and (108) in the following expression to find the gradients. Recall ¯k2(u)(t) = R 1 0 e −γop|s−t|u(s)ds, γop > 0*, s, t* ∈ R in the following derivations. J(u*, L, D*) =Xn i=1 y (i) − Xn j=1 K(x (i), x(j))uj 2 Y + λ Xn i,j=1 hK(x (i), x(j))ui, uj iY + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F = Xn i=1 y (i) − Xn j=1 k1(x (i), x(j); G) ¯k2(uj ) 2 Y + λXn i=1,j=1 hk1(x (i), x(j); G) ¯k2(ui), uj iY + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F = Xn i=1 y (i) − Xn j=1 e −γ(x (i)−x (j)) >(L+D)(x (i)−x (j))¯k2(uj ) 2 Y + λ Xn i,j=1 he −γ(x (i)−x (j)) >(L+D)(x (i)−x (j))¯k2(ui), uj iY + ρL Xn i=1 x (i)>Lx(i) + ρDkDk 2 F . We consider the following variables for simplifying the computations: odes for simplifying the computations. $$R_{ij}=\text{vec}((x^{(i)}-x^{(j)})(x^{(i)}-x^{(j)})^{\top})^{\top}\mathcal{M},\tag{137}$$ $$\bar{R}_{i}=\text{vec}(x^{(i)}x^{(i)})^{\top}\mathcal{M}.$$ Therefore, J = Xn i=1 y (i) − Xn j=1 e −γRij (vech(L)+vech(D))¯k2(uj ) 2 Y + λ Xn i,j=1 he −γRij (vech(L)+vech(D))¯k2(ui), uj iY + ρL Xn i=1 R¯ivech(L) + ρDvech(D) >M>Mvech(D) = Xn i=1 *y (i) − Xn j=1 e −γRij (vech(L)+vech(D))¯k2(uj ), y(i) − Xn j=1 e −γRij (vech(L)+vech(D))¯k2(uj ) + Y + λ Xn i,j=1 e −γRij (vech(L)+vech(D))h ¯k2(ui), uj iY + ρL Xn i=1 R¯ivech(L) + ρDvech(D) >M>Mvech(D) = Xn i=1 Dy (i), y(i)E Y − 2 Xn j=1 e −γRij (vech(L)+vech(D)) Dy (i), ¯k2(uj ) E Y j,k=1 e −γ(Rij+Rik)(vech(L)+vech(D)) ¯k2(uj ), ¯k2(uk)Y + Xn + λ Xn i,j=1 e −γRij (vech(L)+vech(D))h ¯k2(ui), uj iY + ρL Xn i=1 R¯ivech(L) + ρDvech(D) >M>Mvech(D). The expression above for J can be used to determine the gradient with respect to vech(L) and vech(D). The gradient of J with respect to vech(L) for a fixed u and D is given as follows: ∇vech(L)J = 2γ Xn i,j=1 R > ij e −γRij (vech(L)+vech(D)) Dy (i), ¯k2(uj ) E Y − γXn i,j,k=1 (Rij + Rik) >e −γ(Rij+Rik)(vech(L)+vech(D)) ¯k2(uj ), ¯k2(uk)Y − λγ Xn i,j=1 R > ij e −γRij (vech(L)+vech(D))h ¯k2(ui), uj iY + ρL Xn i=1 R¯> i . Similarly, for a fixed u and vech(L) the gradient of J with respect to vech(D) can be written as ∇vech(D)J = 2γ Xn i,j=1 R > ij e −γRij (vech(L)+vech(D)) Dy (i), ¯k2(uj ) E Y − γXn i,j,k=1 (Rij + Rik) >e −γ(Rij+Rik)(vech(L)+vech(D)) ¯k2(uj ), ¯k2(uk)Y − λγ Xn i,j=1 R > ij e −γRij (vech(L)+vech(D))h ¯k2(ui), uj iY + 2ρDM>Mvech(D). For the experiments on NBA data, we use the approach in Section 5.1.5 with r = 2 and s = 2. The inputs x (i) = (x (i) 1 , x (i) 2 ) ∈ X 2 and outputs y (i) = (y (i) 1 , y (i) 2 ) ∈ Y2 with (25) and (26), for i ∈ [n]. We consider the following variables for simplifying the computations: $$\begin{array}{l}{{R_{i j}^{1}=\operatorname{vec}((x_{1}^{(i)}-x_{1}^{(j)})(x_{1}^{(i)}-x_{1}^{(j)})^{\top})^{\top}{\mathcal M},}}\\ {{R_{i j}^{2}=\operatorname{vec}((x_{2}^{(i)}-x_{2}^{(j)})(x_{2}^{(i)}-x_{2}^{(j)})^{\top})^{\top}{\mathcal M},}}\\ {{\bar{R}_{i}^{1}=\operatorname{vec}(x_{1}^{(i)}x_{1}^{(i)})^{\top}{\mathcal M},}}\\ {{\bar{R}_{i}^{2}=\operatorname{vec}(x_{2}^{(i)}x_{2}^{(i)})^{\top}{\mathcal M}.}}\end{array}$$ J(u 1, u 2*, L, D*) =Xn i=1 y (i) 1 − Xn j=1 K(x (i), x (j))u 1 j 2 Y + Xn i=1 y (i) 2 − Xn j=1 K(x (i), x (j))u 2 j 2 Y + λ Xn i,j=1 hK(x (i), x (j))u 1 i , u1 j iY + λ Xn i,j=1 hK(x (i), x (j))u 2 i , u2 j iY + ρ1 Xn i=1 x (i) 1 >Lx(i) 1 + ρ2 Xn i=1 x (i) 2 >Lx(i) 2 + ρDkDk 2 F = Xn i=1 y (i) 1 − Xn j=1 k1(x (i), x (j); G) ¯k 1 2 (u 1 j ) 2 Y + Xn i=1 y (i) 2 − Xn j=1 k1(x (i), x (j); G) ¯k 2 2 (u 2 j ) 2 Y + λXn i=1,j=1 hk1(x (i), x (j); G) ¯k 1 2 (u 1 i ), u1 j iY + λXn i=1,j=1 hk1(x (i), x (j); G) ¯k 2 2 (u 2 i ), u2 j iY + ρ1 Xn i=1 x (i) 1 >Lx(i) 1 + ρ2 Xn i=1 x (i) 2 >Lx(i) 2 + ρDkDk 2 F = Xn i=1 y (i) 1 − Xn j=1 e −γ1x (i) 1 −x (j) 1 >(L+D)x (i) 1 −x (j) 1 −γ2x (i) 2 −x (j) 2 >(L+D)x (i) 2 −x (j) 2 ¯k 1 2 (u 1 j ) 2 Y + Xn i=1 y (i) 2 − Xn j=1 e −γ1x (i) 1 −x (j) 1 >(L+D)x (i) 1 −x (j) 2 −γ2x (i) 2 −x (j) 2 >(L+D)x (i) 2 −x (j) 2 ¯k 2 2 (u 2 j ) 2 Y + λ Xn i,j=1 he −γ1x (i) 1 −x (j) 1 >(L+D)x (i) 1 −x (j) 1 −γ2x (i) 2 −x (j) 2 >(L+D)x (i) 2 −x (j) 2 ¯k 1 2 (u 1 j ), u1 j iY + λ Xn i,j=1 he −γ1x (i) 1 −x (j) 1 >(L+D)x (i) 1 −x (j) 1 −γ2x (i) 2 −x (j) 2 >(L+D)x (i) 2 −x (j) 2 ¯k 2 2 (u 2 j ), u2 j iY + ρ1 Xn i=1 x (i) 1 >Lx(i) 1 + ρ2 Xn i=1 x (i) 2 >Lx(i) 2 + ρDkDk 2 F . The gradient of J with respect to vech(L) for fixed u 1, u 2 and D is given by ∇vech(L)J = 2 Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D)) Dy (i) 1 , ¯k 1 2 (u 1 j ) E Y + Dy (i) 2 , ¯k 2 2 (u 2 j ) E Y −Xn i,j,k=1 (γ1R 1 ij + γ2R 2 ij + γ1R 1 ik + γ2R 2 ik) >e −(γ1R 1 ij+γ2R 2 ij+γ1R 1 ik+γ2R 2 ik)(vech(L)+vech(D)) ¯k 1 2 (u 1 j ), ¯k 1 2 (u 1 k )Y −Xn i,j,k=1 (γ1R 1 ij + γ2R 2 ij + γ1R 1 ik + γ2R 2 ik) >e −(γ1R 1 ij+γ2R 2 ij+γ1R 1 ik+γ2R 2 ik)(vech(L)+vech(D)) ¯k 2 2 (u 2 j ), ¯k 2 2 (u 2 k )Y − λ Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D))h ¯k 1 2 (u 1 i ), u1 j iY − λ Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D))h ¯k 2 2 (u 2 i ), u2 j iY + ρ1 Xn i=1 R¯1 > i + ρ2 Xn i=1 R¯2 > i. ## 52 Similarly, for fixed u 1, u 2 and vech(L) the gradient of J with respect to vech(D) can be written as ∇vech(D)J = 2 Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D)) Dy (i) 1 , ¯k 1 2 (u 1 j ) E Y + Dy (i) 2 , ¯k 2 2 (u 2 j ) E Y −Xn i,j,k=1 (γ1R 1 ij + γ2R 2 ij + γ1R 1 ik + γ2R 2 ik) >e −(γ1R 1 ij+γ2R 2 ij+γ1R 1 ik+γ2R 2 ik)(vech(L)+vech(D)) ¯k 1 2 (u 1 j ), ¯k 1 2 (u 1 k )Y −Xn i,j,k=1 (γ1R 1 ij + γ2R 2 ij + γ1R 1 ik + γ2R 2 ik) >e −(γ1R 1 ij+γ2R 2 ij+γ1R 1 ik+γ2R 2 ik)(vech(L)+vech(D)) ¯k 2 2 (u 2 j ), ¯k 2 2 (u 2 k )Y − λ Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D))h ¯k 1 2 (u 1 i ), u1 j iY − λ Xn i,j=1 (γ1R 1 ij + γ2R 2 ij ) >e −(γ1R 1 ij+γ2R 2 ij )(vech(L)+vech(D))h ¯k 2 2 (u 2 i ), u2 j iY + 2ρDM>Mvech(D). ## A.7 Opminres Algorithm A.7.1 Oplanczos Step OpLanczos in OpMINRES is used to trigiagonalize the operator matrix P. The vectors obtained from OpLanczos form an orthonormal set. Using the OpLanczosStep Algorithm 5, we can obtain, $$\mathbf{P}Q_{k}=Q_{k}T_{k},\quad{\mathrm{where~}}T_{k}={\begin{bmatrix}\alpha_{1}&\beta_{2}&&&0\\ \beta_{2}&\alpha_{2}&\beta_{3}&&&\\ &&\beta_{3}&\alpha_{3}&\ddots&\\ &&\ddots&\ddots&\beta_{k-2}\\ &&&\beta_{k-1}&\alpha_{k-1}&\beta_{k}\\ 0&&&\beta_{k}&\alpha_{k}\end{bmatrix}},$$ and Qk = [q1, q2*, . . . , q*k], where qi's are obtained using OpLanczosStep Algorithm. The columns of Qk belonging to Y n are orthonormal and the following equation is satisfied: $$\mathbf{P}Q_{k}=Q_{k+1}\overline{T}_{k},\quad\text{where}\overline{T}_{k}=\begin{bmatrix}\alpha_{1}&\beta_{2}&&&0\\ \beta_{2}&\alpha_{2}&\beta_{3}&&\\ &\beta_{3}&\alpha_{3}&\ddots&&\\ &&\ddots&\ddots&\beta_{k-2}\\ &&&\beta_{k-1}&\alpha_{k-1}&\beta_{k}\\ &&&\beta_{k}&\alpha_{k}\\ 0&&&&\beta_{k+1}\end{bmatrix}.$$ We intend to solve Au = y by obtaining a solution in the Krylov space Kk(P, y) = span{y, Py, P2y*, . . . ,* Pk−1y}. For each iteration k, we obtain the following equations using the transformation θ = Qkϑ, where θ ∈ Yn, ϑ ∈ R k. min θ∈Kk(P,y) ky − PϑkYn = min ϑ∈Rk ky − PQkϑkYn = min ϑ∈Rk ky − Qk+1T kϑkYn = min ϑ∈Rk kQk+1(β1e1 − T kϑ)kYn , (138) (where β1 = kykYn , e1 = [1 0 . . . 0]> and q1 = y) = min ϑ∈Rk kβ1e1 − T kϑk2. (139) Algorithm 4 OpMINRES(*P, b, maxiter*) Input: *P, b, maxiter* Output: *ϑ, φ, ψ, χ* β1 = kbkYn q0 = 0 q1 = 1 β1 b φ0 = τ0 = β1 χ0 = 0 δ (1) 1 = 0 c0 = −1 s0 = 0 d0 = d−1 = ϑ0 = 0 k = 1 while stopping criteria not satisfied do OpLanczosStep(P, qk, qk−1, βk) → αk, βk+1, qk+1 //last left orthogonalization on middle two entries in last column of Tk+1,k δ (2) k = ck−1δ (1) k + sk−1αk γ (1) k = sk−1δ (1) k − ck−1αk //last left orthogonalization to produce first two entries of Tk+2,k+1ek+1 (1) k+1 = sk−1βk+1 δ (1) k+1 = −ck−1βk+1 //current left orthogonalization to zero out βk+1 SymOrtho(γ (1) k, βk+1) → ck, sk, γ (2) k //right-hand side, residual norms τk = ckφk−1 φk = skφk−1 ψk−1 = φk−1 q(γ (1) k) 2 + (δ (1) k+1) 2 //update solution dk =1 γ (2) k vk − δ (2) kdk−1 − (1) k dk−2 ϑk = ϑk−1 + τkdk χk = kϑkk k ← k + 1 end while ϑ = ϑk, φ = φk, ψ = φk q(γ (1) k+1) 2 + (δ (1) k+2) 2, χ = χk The change in norms k.kYn in (138) to k.k2 is obtained based on the following arguments. Let z = [z1, z2*, . . . , z*k+1] > ∈ R k+1 and Qk+1 = [q1, q2*, . . . , q*k+1], where qi ∈ Yn, for i = 1, 2*, . . . , k* + 1, then we have $$z_{k+1}\parallel$$ kQk+1zk+1kYn = kz1q1 + z2q2 + *· · ·* + zk+1qk+1kYn $$=\sqrt{z_{1}^{2}\int_{\Omega_{y}}q_{1}^{2}(t)dt+z_{2}^{2}\int_{\Omega_{y}}q_{2}^{2}(t)dt+\cdots+z_{k+1}^{2}\int_{\Omega_{y}}q_{k+1}^{2}(t)dt}$$ $$=\sqrt{z_{1}^{2}+z_{2}^{2}+\cdots+z_{k+1}^{2}}$$ k+1(t)dt (140) $=\;\|z\|_2$. Equation (140) reduces to (141) as the qi's are orthonormal in Y n. Solving for ϑk = arg minϑ∈Rk kβ1e1−T¯kϑk2 can be done using QR decomposition (Choi, 2006) which has been discussed in the next section. Now, the $$(140)^{\frac{1}{2}}$$ $$(141)$$ Algorithm 5 OpLanczosStep(P, qk, qk−1, βk) Input: A, qk, qk−1, βk Output: αk, βk+1, qk+1 q¯k+1 = Aqk − βkqk−1 αk = hq¯k+1, qkiYn q¯k+1 ← q¯k+1 − αkqk βk+1 = kq¯k+1kYn qk+1 =1 βk+1 q¯k+1 Algorithm 6 SymOrtho(*a, b*) Input: *a, b* Output: *c, s, r* if b == 0 **then** s = 0 r = |a| if a == 0 **then** c = 1 else c = sign(a) end else if a == 0 **then** c = 0 s = sign(b) r = |b| else if |b| > |a| **then** τ = a/b s = sign(b)/ √1 + τ 2 c = sτ r = b/s else if |a| > |b| **then** τ = b/a c = sign(a)/ √1 + τ 2 s = cτ r = a/c end transformation from R k back to Y n to obtain u kis achieved using by the following: $$\mathbf{u}^{k}=Q_{k}\vartheta_{k}=Q_{k}\left({\underset{\vartheta\in\mathbb{R}^{k}}{\operatorname{arg\,min}}}\ \|\beta_{1}e_{1}-{\overline{{T}}}_{k}\vartheta\|_{2}\right)$$ ## A.7.2 Qr Decomposition In order to apply QR decomposition on symmetric T k, we use Givens rotation Sk to obtain a upper-triangular system. $$S_{k}\overline{T}_{k}=\begin{bmatrix}R_{k}\\ 0\end{bmatrix}=\begin{bmatrix}\gamma_{1}^{(1)}&\delta_{2}^{(1)}&\epsilon_{2}^{(1)}&0\\ &\gamma_{2}^{(2)}&\delta_{3}^{(2)}&\epsilon_{4}^{(1)}&\\ &&\ddots&\ddots&\ddots\\ &&&\gamma_{k-2}^{(2)}&\delta_{2}^{(2)}&\epsilon_{k}^{(1)}\\ &&&\gamma_{k-1}^{(2)}&\delta_{k}^{(2)}\\ 0&&&&\gamma_{k}^{(2)}\end{bmatrix},\qquad\qquad S_{k}(\beta_{1}e_{1})=\begin{bmatrix}t_{k}\\ \phi_{k}\end{bmatrix},$$ where Sk = Sk,k+1 *. . . S*2,3S1,2 and Si,i+1 are Givens rotations created to annihilate the βi's in sub-diagonal of T k. The Si,i+1's involved in the product to obtain Sk are given by, Si,i+1 = Ii−1 ci si si −ci Ik−i . The matrices Si,i+1 are obtained using the SymOrtho Algorithm 6. The sub-problem can be rewritten with ϑk = arg minϑ∈Rk kβ1e1 − T kϑk2 as $\vartheta_{k}=\arg\min_{\vartheta\in\mathbb{R}^{k}}\left\|\begin{bmatrix}t_{k}\\ \phi_{k}\end{bmatrix}-\begin{bmatrix}R_{k}\\ 0\end{bmatrix}\vartheta\right\|_{2},$ where $t_{k}=\left[\tau_{1},\tau_{2},\ldots,\tau_{k}\right]^{\top}$ and $\vartheta$. $$\begin{bmatrix}t_{k}\\ \phi_{k}\end{bmatrix}=\beta_{1}S_{k,k+1}\dots S_{2,3}\begin{bmatrix}c_{1}\\ s_{1}\\ s_{1}\\ 0_{k-1}\end{bmatrix}=\beta_{1}S_{k,k+1}\dots S_{3,4}\begin{bmatrix}c_{1}\\ s_{1}c_{2}\\ s_{1}s_{2}\\ 0_{k-2}\end{bmatrix}=\beta_{1}\begin{bmatrix}c_{1}\\ s_{1}c_{2}\\ \vdots\\ s_{1}\dots s_{k-1}c_{k}\\ s_{1}\dots s_{k-1}s_{k}\end{bmatrix}.$$ A shorthand way to represent the action of Sk,k+1 can be described as $$\begin{bmatrix}c_{k}&s_{k}\\ s_{k}&-c_{k}\end{bmatrix}\left[\begin{array}{c c c}{{\gamma_{k}^{(1)}}}&{{\delta_{k+1}^{(1)}}}&{{0}}\\ {{\beta_{k+1}}}&{{\alpha_{k+1}}}&{{\beta_{k+2}}}\end{array}\right]\phi_{k-1}\ \right]=\left[\begin{array}{c c c}{{\gamma_{k}^{(2)}}}&{{\delta_{k+1}^{(2)}}}&{{\epsilon_{k+2}^{(1)}}}\\ {{0}}&{{\gamma_{k+1}^{(1)}}}&{{\delta_{k+2}^{(1)}}}\end{array}\right]\tau_{k}\ \right].$$ OpMINRES computes u kin Kk(P, y) as an approximate solution to the problem Pu = y: $$\mathbf{u}^{k}=Q_{k}\vartheta_{k}=Q_{k}R_{k}^{-1}t_{k}=D_{k}\begin{bmatrix}t_{k-1}\\ \tau_{k}\end{bmatrix}=\begin{bmatrix}D_{k-1}&d_{k}\end{bmatrix}\begin{bmatrix}t_{k-1}\\ \tau_{k}\end{bmatrix}$$ $$=\mathbf{u}^{k-1}+\tau_{k}d_{k}.$$ The relation satisfied by dk is given by, $$d_{k}=\frac{1}{\gamma_{k}^{(2)}}\left(v_{k}-\delta_{k}^{(2)}d_{k-1}-\epsilon_{k}^{(1)}d_{k-2}\right).$$ These details have been incorporated in OpMINRES Algorithm A.7. The OpMINRES Algorithm A.7 is based on approximating an infinite-dimensional problem in (150) by a finite-dimensional problem in (139). As OpMINRES is based on MINRES algorithm (Choi, 2006), the convergence of OpMINRES follows from the convergence of MINRES. The case of singular systems with OpMINRES needs more investigation. In our experiments, the value of relative residual norms φk/φ0 has been used as the stopping criteria for OpMINRES. ## A.8 Derivation Of Linear System Of Operators Derivation of (K + λI)u = y: We obtain a sufficient condition for stationary points for the optimization problem in Theorem A.2. Using the representer theorem, the minimization problem can be equivalently formulated as the following problem: $$\hat{\mathbf{u}}_{\lambda}=\arg\min_{u\in\mathcal{Y}}\sum_{i=1}^{n}\left\|u_{j}-\sum_{j=1}^{n}K(x^{(i)},x^{(j)})u_{j}\right\|_{\mathcal{Y}}^{2}+\lambda\bigg{\langle}\sum_{i=1}^{n}K(x^{(i)},\cdot)u_{i},\sum_{j=1}^{n}K(x^{(j)},\cdot)u_{j}\bigg{\rangle}_{\mathbf{u}_{K^{G}}}.\tag{142}$$ We have the following simplification of the term $\bigg{\langle}\sum_{i=1}^{n}K(x^{(i)},\cdot)u_{i},\sum_{j=1}^{n}K(x^{(j)},\cdot)u_{j}\bigg{\rangle}_{\mathbf{u}_{K^{G}}}$ in problem (143). We have (142). We have $$\left\langle\sum_{i=1}^{n}K(x^{(i)},.)u_{i},\sum_{j=1}^{n}K(x^{(j)},.)u_{j}\right\rangle_{\mathsf{H}_{K}G}=\sum_{i=1}^{n}\left\langle K(x^{(i)},.)u_{i},\sum_{j=1}^{n}K(x^{(j)},.)u_{j}\right\rangle_{\mathsf{H}_{K}G}\tag{143}$$ $$=\sum_{i=1}^{n}\sum_{j=1}^{n}\left\langle K(x^{(i)},.)u_{i},K(x^{(j)},.)u_{j}\right\rangle_{\mathsf{H}_{K}G}$$ (144) $$=\sum_{i=1}^{n}\sum_{j=1}^{n}\left\langle K(x^{(i)},x^{(j)})u_{i},u_{j}\right\rangle_{\mathsf{Y}},\tag{145}$$ Note that Eq. (143) and Eq. (144) follow from the property of bilinear forms and Eq. (145) follows from the reproducing property of K. Thus we have the following simplified formulation: $${\widehat{\mathbf{u}}}_{\lambda}=\arg\operatorname*{min}_{\mathbf{u}\in\mathcal{Y}^{n}}\sum_{i=1}^{n}\left\|y_{i}-\sum_{j=1}^{n}K(x^{(i)},x^{(j)})u_{j}\right\|_{\mathcal{Y}}^{2}+\lambda\sum_{i=1,j=1}^{n}\langle K(x^{(i)},x^{(j)})u_{i},u_{j}\rangle_{\mathcal{Y}}.$$ To solve this problem, we first construct the objective function Jλ(u) given by $$J_{\lambda}({\bf u})=\sum_{i=1}^{n}\left\|y_{i}-\sum_{j=1}^{n}K(x^{(i)},x^{(j)})u_{j}\right\|_{\cal Y}^{2}+\lambda\sum_{i=1,j=1}^{n}\langle K(x^{(i)},x^{(j)})u_{i},u_{j}\rangle_{\cal Y},\quad{\bf u}\in{\cal Y}^{n}.$$ Letting Jλ(u) = Pn i=1 Gi(u) + λL(u), we can find the directional derivative of Jλ(u) with respect to the direction v as DvJλ(u). $$D_{\mathbf{v}}G_{i}(\mathbf{u})=\lim_{\tau\to0}\frac{G_{i}(u+\tau v)-G_{i}(u)}{\tau}$$ $$=-2\bigg{\langle}y_{i}-\sum_{j=1}^{n}K(x_{i},x_{j})u_{j},\sum_{j=1}^{n}K(x^{(i)},x^{(j)})v_{j}\bigg{\rangle}.$$ $$D_{\mathbf{v}}L(\mathbf{u})=\lim_{\tau\to0}\frac{L(u+\tau v)-L(u)}{\tau}$$ $$=\lambda\sum_{i,j}^{n}\langle K(x^{(i)},x^{(j)})u_{i},v_{j}\rangle+\lambda\sum_{i,j}^{n}\langle K(x^{(i)},x^{(j)})v_{i},u_{j}\rangle.$$ As K is Hermitian from the definition of operator-valued kernel, we obtain $$\langle K(x^{(i)},x^{(j)})u_{i},v_{j}\rangle=\langle u_{i},K(x^{(i)},x^{(j)})v_{j}\rangle,\,\,\,\forall i,j\in[n].$$ (i), x(j))vj i, ∀*i, j* ∈ [n]. (146) $$(146)^{\frac{1}{2}}$$ Therefore, DvL(u) = λ Xn i,j hK(x (i), x(j))ui, vj i + λ Xn i,j hK(x (i), x(j))vi, uj i = λ Xn i,j hui, K(x (i), x(j))vj i + λ Xn i,j hK(x (i), x(j))vi, uj i (147) = λ Xn i,j hui, K(x (i), x(j))vj i + λ Xn i,j huj , K(x (j), x(i))vii (148) = 2λ Xn i,j hui, K(x (i), x(j))vj i. (149) Eq. (147) follows from Eq. (146) and in Eq. (147), we use symmetry of h·, ·i to obtain Eq. (149). In order to minimize Jλ(u), its directional derivative DvJλ(u) = 0, ∀v ∈ Yn. DvJλ(u) = 0 =⇒ Xn i=1 DvGi(u) + λDvL(u) = 0 =⇒ − 2 Xn i=1 yi − Xn j=1 K(x (i), x(j))uj , Xn j=1 K(x (i), x(j))vj + 2λ Xn i,j hui, K(x (i), x(j))vj i = 0 =⇒ Xn i=1 Xn j=1 K(x (i), x(j))uj − yi, Xn j=1 K(x (i), x(j))vj + Xn i,j hλui, K(x (i), x(j))vj i = 0 =⇒ Xn i=1 Xn j=1 K(x (i), x(j))uj − yi, Xn j=1 K(x (i), x(j))vj + Xn i=1 λui, Xn j=1 K(x (i), x(j))vj = 0 =⇒ Xn i=1 Xn j=1 K(x (i), x(j))uj − yi + λui, Xn j=1 K(x (i), x(j))vj = 0, ∀v ∈ Yn. The above condition can be reduced to $$(150)$$ (K + λI)u = y, (150) where K is a matrix of operators formed by using K. ## A.9 Results For Generalization Bounds We recall the lemma 6.2 and provide the proof next. Lemma A.6. Let F be a class of functions from X to Y. Consider a m-tuple of samples from input space as (x (1), x(2), . . . , x(m)) ∈ X m. Then the following hold: 1. E-supF ∈F k $$\begin{array}{l}{{\in{\mathcal{F}}\left\|{\frac{1}{m}}\sum_{i=1}^{m}F(x^{(i)})-\mathbb{E}F\|_{\mathcal{Y}}\right\|\leq2{\mathcal{R}}_{m;\mathcal{Y}}({\mathcal{F}}).}}\end{array}$$ 2. For every c ∈ R, Rm;Y (cF) = |c|Rm;Y (F). 3. For φ : Y → R, if φ is a Lipschitz function with Lipschitz constant L, then Rm;R(φ◦F) ≤ LRm;Y (F). Proof. For part 1, we start with denoting S = (x (1), x(2), . . . , x(m)) ∈ X m and another independent sample as S¯ = (¯x (1), x¯ (2)*, . . . ,* x¯ (m)) ∈ X m, we have $$\mathbb{E}_{S\sim\mu_{X}^{m}}[F(x)]=\mathbb{E}_{\bar{S}\sim\mu_{X}^{m}}\left[\frac{1}{m}\sum_{i=1}^{m}F(\bar{x}^{(i)})\right].$$ $$(151)$$ We note that the Rademacher random variables (ε1, ε2*, . . . , ε*m) are uniformly distributed over {+1, −1} and every possible value they take has an equal probability of 1/2 m. Without loss of generality, we can always permute (ε1, ε2*, . . . , ε*m) to obtain εP1 = 1*, . . . , ε*Pk = 1, εPk+1 = −1*, . . . , ε*Pm = −1, where 0 ≤ k ≤ m and {P1*, . . . , P*m} is a permutation of [m]. Therefore, ES∼µm X "ES¯∼µm X " sup F ∈F 1 m Xm i=1 εi F(x (i)) − F(¯x (i)) Y ## =ES∼µm X ES¯∼µm X i=1 F(x (i)) − F(¯x (i)) +Xm i=k+1 F(x (i)) − F(¯x (i)) Y sup F ∈F 1 m X k =ES∼µm X "ES¯∼µm X " sup F ∈F 1 m Xm i=1 F(x (i)) − F(¯x (i)) Y ## . (152) The expressions above hold as x (i) and x¯ (i) are independent and symmetric. We obtain the following based on the arguments made above. ES∼µm X " sup F ∈F 1 m Xm i=1 F(x (i)) − Ex(i)∼µX F(x (i)) Y # =ES∼µm X " sup F ∈F 1 m Xm i=1 F(x (i)) − ES¯∼µm X 1 m F(¯x (i)) Y # (153) =ES∼µm X " sup F ∈F ES¯∼µm X 1 m Xm i=1 F(x (i)) − F(¯x (i)) !Y # ≤ES∼µm X "ES¯∼µm X " sup F ∈F 1 m Xm i=1 F(x (i)) − F(¯x (i)) !Y ## (154) =ES∼µm X "ES¯∼µm X "E " sup F ∈F 1 m Xm i=1 εi F(x (i)) − F(¯x (i)) Y ### (155) ≤ES∼µm X "E " sup F ∈F 1 m Xm i=1 εiF(x (i)) Y ## + ES¯∼µm X "E " sup F ∈F 1 m Xm i=1 εiF(¯x (i)) Y ## (156) =2Rm;Y (F). Equation (153) follows from (151). Jensen's inequality is used to obtain (154) and (155) follows from (152) by the fact that using Rademacher variables does not change the value of the expression in (223) (see proof of Theorem 4.1 in (Liao, 2020)). The inequality (156) uses the fact that εi and −εi follow the same Rademacher distribution and triangle inequality for norm *k · k*Y with supremum. For part 2, $$\mathscr{R}_{m;\mathcal{Y}}(c\mathcal{F})=\mathbb{E}\left[\sup_{F\in\mathcal{F}}\frac{1}{m}\left\|\sum_{i=1}^{m}\varepsilon_{i}cF(x^{(i)})\right\|_{\mathcal{Y}}\right]$$ $$=|c|\mathbb{E}\left[\sup_{F\in\mathcal{F}}\frac{1}{m}\left\|\sum_{i=1}^{m}\varepsilon_{i}F(x^{(i)})\right\|_{\mathcal{Y}}\right]$$ $$=|c|\mathscr{R}_{m;\mathcal{Y}}(\mathcal{F}).$$ For part 3, consider a m-tuple of samples from input space X as (x (1), x(2), . . . , x(m)) ∈ X m and Rademacher random variables εi for i ∈ [m]. We assume φ : Y → R and φ is a Lipschitz function with Lipschitz constant L. Then $$\mathscr{R}_{m;\mathbb{R}}(\phi\circ\mathcal{F})=\mathbb{E}\left[\sup_{F\in\mathcal{F}}\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}(\phi\circ F)(x^{(i)})\right]\tag{157}$$ $$=\mathbb{E}_{x^{(i)}\sim\mu_{\overline{X}}^{m}}\frac{1}{m}\left[\mathbb{E}_{v}\backslash_{\varepsilon_{m}}\left[\mathbb{E}_{\varepsilon_{m}}\left[\sup_{F\in\mathcal{F}}\mathbf{u}_{m}(F)+\varepsilon_{m}(\phi\circ F)(x^{(m)})\right]\right]\right],$$ (158) $\binom{159}{159}$ (159) . where um(F) = Pm−1 i=1 εi(φ ◦ F)(x (i)). From the definition of the supremum, for any m > 0 (note that m is not an exponent in this notation), there exists F m 1 , F m 2 ∈ F such that $$\begin{array}{l}{{u_{m}(F_{1}^{m})+\varepsilon_{m}(\phi\circ F_{1}^{m})(x^{(m)})\geq(1-\epsilon^{m})\left[\operatorname*{sup}_{F\in\mathcal{F}}u_{m}(F)+\varepsilon_{m}(\phi\circ F)(x^{(m)})\right]}}\\ {{u_{m}(F_{2}^{m})-\varepsilon_{m}(\phi\circ F_{2}^{m})(x^{(m)})\geq(1-\epsilon^{m})\left[\operatorname*{sup}_{F\in\mathcal{F}}u_{m}(F)-\varepsilon_{m}(\phi\circ F)(x^{(m)})\right],}}\end{array}$$ (158) , (159) otherwise it leads to a contradiction to the supremum assumption. Therefore, (1 − m)Eεm sup F ∈F um(F) + εm(φ ◦ F)(x (m)) = (1 − m) 1 2 sup F ∈F um(F) + (φ ◦ F)(x (m)) + 12 sup F ∈F um(F) − (φ ◦ F)(x (m)) ≤ 1 2 hum(F m 1 ) + εm(φ ◦ F m 1 )(x (m)) + um(F m 2 ) − εm(φ ◦ F m 2 )(x (m)) i(161) = 1 2 hum(F m 1 ) + um(F m 2 ) + εm (φ ◦ F m 1 )(x (m)) − (φ ◦ F m 2 )(x (m)) i . (162) ≤ sup F ∈F um(F) + 12 hεm (φ ◦ F m 1 )(x (m)) − (φ ◦ F m 2 )(x (m)) i . (163) (160) Equation (160) is obtained by using the definition of Rademacher random variable εm. The inequality (161) is obtained using inequalities for F m 1 and F m 2in (158) and (159), respectively. Inequality (163) is obtained by introducing supremum in the terms um(F m 1 ) + um(F m 2 ). As the inequality (163) holds for any m > 0, we claim $$\mathbb{E}_{\varepsilon_{m}}\left[\sup_{F\in\mathcal{F}}u_{m}(F)+\varepsilon_{m}(\phi\circ F)(x^{(m)})\right]\leq\sup_{F\in\mathcal{F}}u_{m}(F)+\frac{1}{2}\left[\varepsilon_{m}\left((\phi\circ F_{1}^{m})(x^{(m)})-(\phi\circ F_{2}^{m})(x^{(m)})\right)\right].\tag{164}$$ Using (164) in (157) we obtain Rm;R(φ ◦ F) = Ex(i)∼µm X 1 m Eε\εm Eεm sup F ∈F um(F) + εm(φ ◦ F)(x (m)) ≤ Ex(i)∼µm X 1 m Eε\εm sup F ∈F um(F) + 12 hεm (φ ◦ F m 1 )(x (m)) − (φ ◦ F m 2 )(x (m)) i (165) ≤ Ex(i)∼µm X 1 m Eε\{εm−1,εm} Eεm−1 sup F ∈F um−1 + εm−1(φ ◦ F)(x (m−1)) + 1 2 hεm (φ ◦ F m 1 )(x (m)) − (φ ◦ F m 2 )(x (m)) i , (166) where um−1 = supF ∈F Pm−2 i=1 εi(φ◦F)(x (i)). Now, for any m−1 > 0 a similar approach is followed to obtain F m−1 1, F m−1 2 ∈ F such that um−1(F m−1 1) + εm−1(φ ◦ F m−1 1)(x (m−1)) ≥ (1 − m−1) sup F ∈F um−1(F) + εm−1(φ ◦ F)(x (m−1)) um−1(F m−1 2) − εm−1(φ ◦ F m−1 2)(x (m−1)) ≥ (1 − m−1) sup F ∈F um−1(F) − εm−1(φ ◦ F)(x (m−1)) Using (167) and (168), similar arguments as made earlier help us to claim (167) . (168) $$\mathbb{E}_{\varepsilon_{m-1}}\left[\sup_{F\in\mathcal{F}}\mathfrak{u}_{m-1}(F)+\varepsilon_{m-1}(\phi\circ F)(x^{(m-1)})\right]$$ $$\leq\sup_{F\in\mathcal{F}}\mathfrak{u}_{m-1}(F)+\frac{1}{2}\left[\varepsilon_{m-1}\left((\phi\circ F_{1}^{m-1})(x^{(m-1)})-(\phi\circ F_{2}^{m-1})(x^{(m-1)})\right)\right].$$ Inequality (169) with (166) provides the following $$(167)$$ $$(168)$$ i . (169) $$\mathcal{B}_{m;\mathbb{R}}(\phi\circ\mathcal{F})\leq\mathbb{E}_{x(i)\sim\mu;\overline{{{m}}}}\frac{1}{m}\left[\mathbb{E}_{\gamma_{i}\{\varepsilon_{m-2,\varepsilon_{m-1,\varepsilon_{m}}}\}}\left[\mathbb{E}_{\varepsilon_{m-2}}\left[\sup_{\mu\in\mathcal{F}}\mathbf{u}_{m-2}+\varepsilon_{m-2}(\phi\circ F)(x^{(m-2)})\right]\right.\right.$$ $$\left.\left.+\frac{1}{2}\sum_{i=m-1}^{m}\left[\varepsilon_{i}\left((\phi\circ F_{1}^{i})(x^{(i)})-(\phi\circ F_{2}^{i})(x^{(i)})\right)\right]\right]\right],$$ i=m−1 where um−2 = supF ∈F Pm−3 i=1 εi(φ ◦ F)(x (i)). We iterate till the last step where Rm;R(φ ◦ F) ≤ Ex(i)∼µm X 1 m Eε1 sup F ∈F ε1(φ ◦ F)(x (1)) + 1 2 Xm i=2 hεi (φ ◦ F i 1 )(x (i)) − (φ ◦ F i 2 )(x (i)) i#. (171) For any 1 > 0, there exists F 1 1 , F1 2 ∈ F such that $$(169)$$ $$(170)$$ $$(171)$$ $$(172)$$ $$(173)$$ $$(174)$$ $\left[\begin{matrix}\phi\\ \phi\end{matrix}\right]=\left[\begin{matrix}\phi\\ \phi\end{matrix}\right]\left(\phi\circ F_1^1\right)\left(x^{(1)}\right)\geq\left(1-\epsilon^1\right)\left[\begin{matrix}\sup\limits_{F\in\mathcal{F}}\varepsilon_1(\phi\circ F)(x^{(1)})\\ \end{matrix}\right]$ $-\varepsilon_1(\phi\circ F_2^1)(x^{(1)})\geq\left(1-\epsilon^1\right)\left[\begin{matrix}\sup\limits_{F\in\mathcal{F}}-\varepsilon_1(\phi\circ F)(x^{(1)})\\ \end{matrix}\right].$ obtain. Using (172) and (173), we obtain 179), we obtain $$\mathbb{E}_{\varepsilon_{1}}\left[\sup_{F\in\mathcal{F}}\varepsilon_{1}(\phi\circ F)(x^{(1)})\right]\leq\frac{1}{2}\left[\varepsilon_{1}\left((\phi\circ F_{1}^{1})(x^{(1)})-(\phi\circ F_{2}^{1})(x^{(1)})\right)\right].\tag{171}$$ (171) using (174), Next, we simplify (171) using (174), 1 2 Xm i=1 hεi (φ ◦ F i 1 )(x (i)) − (φ ◦ F i 2 )(x (i)) i ≤ 1 2 "Xm i=1 εi (φ ◦ F i 1 )(x (i)) − (φ ◦ F i 2 )(x (i)) #(175) = 1 2 " (φ ◦ Xm i=1 εiF i 1 )(x (i)) − (φ ◦ Xm i=1 εiF i 2 )(x (i)) !# (176) ≤ L Xm i=1 εi 2 F i 1 (x (i)) − εi 2 F i 2 (x (i)) Y(177) ≤ LE sup F ∈F Xm i=1 εiF(x (i)) Y . (178) (175) $\\$ (176) $\\$ (177) $\\$ (178) . Inequality (175) is obtained by using F i 1 , Fi 2 , ∀i ∈ [m] which are obtained based on the procedure for F m 1 and F m 2 . The definition of φ is utilized to establish (176). Inequality (177) is obtained by using Lipschitz continuity of Φ and (178) follows from the definition of supremum and expectation with respect to Rademacher random variables. Hence using (178) with (174) and (171), we obtain $${\mathcal{R}}_{m;\mathbb{R}}(\phi\circ{\mathcal{F}})\leq L{\mathcal{R}}_{m;y}({\mathcal{F}}).$$ Next, we recall Lemma 6.6 and provide its proof. Lemma A.7. E[Θ(z)] is bounded above as follows: $$\mathbb{E}[\Theta(z)]\leq2\rho L(\tau)\beta^{1/4}(\mathcal{R}_{m;\mathcal{V}}(\mathcal{K}_{0}))^{1/4}.$$ Proof. $$\mathbb{E}\left[\Theta(z)\right]=\mathbb{E}\sup_{F\in\rho\mathcal{B}_{\mathcal{K}}}\left[\mathscr{E}(F)-\mathscr{E}_{z}(F)\right]$$ $$\leq2\mathbb{E}\sup_{F\in\rho\mathcal{B}_{\mathcal{K}}}\left[\frac{1}{m}\sum_{i=1}^{m}\varepsilon_{i}\mathscr{L}(y^{(i)},F(x^{(i)}))\right]$$ $$\leq2L(\tau)\mathscr{B}_{m,\mathcal{Y}}(\rho\mathcal{B}_{\mathcal{K}})$$ $$=2\rho L(\tau)\mathscr{B}_{m,\mathcal{Y}}(\mathcal{B}_{\mathcal{K}}).$$ Equation (179) involves a supremum of F ∈ ρBK as kFk∞ is bounded by ρ and inequality (180) is obtained by using Lemma A.9 in Appendix A.9. Part 3 of Lemma 6.2 is used with φi(.) = L (y (i), .) which has a Lipschitz constant L(τ ) in order to obtain (181). (182) follows from Part 2 of Lemma 6.2. Now, Rm;Y (BK) = E " sup F ∈BK 1 m Xm i=1 εiF(x (i)) Y # = E sup F ∈BK 1 m 1/2 (183) Xm i,j=1 εiεj F(x (i)), F(x (j)) Y = E sup F ∈BK 1 m 1/2 (184) Xm i,j=1 εiεj F, K(x (i), .)F(x (j)) HK ≤ E sup K∈K sup F ∈BK sup y∈Y 1 m 1/2 (185) F, Xm i,j=1 εiεjK(x (i), .)y HK ≤ E sup K∈K sup F ∈BK: kF kHK ≤1 sup y∈Y 1 m 1/2 (186) kFkHK Xm i,j=1 εiεjK(x (i), .)y HK ≤ E sup K∈K sup y∈Y 1 m 1/2 (187) Xm i,j=1 εiεjK(x (i), .)y HK ≤ E sup K∈K sup y∈Y 1 √m 1/2 (188) Xm i=1 εiK(x (i), .)y HK $$(179)$$ (180) $\begin{array}{l}\left(181\right)\\ \text{(182)}\end{array}$ . $$\text{(183)}$$ $$\text{(184)}$$ $$\text{(185)}$$ $$\text{(186)}$$ $$\text{(187)}$$ $$\text{(188)}$$ . = E sup K∈K sup y∈Y 1 √m 1/4 (189) Xm i=1 εiK(x (i), .)y,Xm j=1 εjK(x (j), .)y HK ≤ β 1/4E sup K∈K sup y∈Y sup x∈X 1 √m 1/4 (190) Xm j=1 Xm i=1 εiK(x (i), x)y Y = β 1/4E sup K∈K sup y∈Y sup x∈X 1 m Xm i=1 εiK(x (i), x)y Y !1/4 (191) = β 1/4(Rm;Y (K0))1/4. (192) $$(192)$$ The steps in deriving Rm;Y (BK) ≤ β 1/4(Rm;Y (K0))1/4 use properties of norm, inner product and reproducing property of OVKs which have been discussed in Lemma A.8 (Appendix A.9). Therefore, $$\mathbb{E}[\Theta(z)]\leq2\rho L(\tau)\beta^{1/4}(\mathcal{R}_{m;\mathcal{Y}}(\mathcal{K}_{0}))^{1/4}.$$ We discuss the steps involved in deriving Rm;Y (BK) ≤ β 1/4(Rm;Y (K0))1/4 next. Lemma A.8. $$\mathcal{R}_{m;\mathcal{Y}}(\mathcal{B}_{\mathcal{K}})\leq\beta^{1/4}\left(\mathcal{R}_{m;\mathcal{Y}}(\mathcal{K}_{0})\right)^{1/4}.$$ 1/4(Rm;Y (K0))1/4. (193) Proof. Rm;Y (BK) = E " sup F ∈BK 1 m Xm i=1 εiF(x (i)) Y # = E sup F ∈BK 1 m 1/2 (194) Xm i=1 εiF(x (i)), Xm j=1 εmF(x (j)) Y = E sup F ∈BK 1 m 1/2 (195) Xm i,j=1 εiεm F(x (i)), F(x (j)) Y = E sup F ∈BK 1 m 1/2 (196) Xm i,j=1 εiεm F, K(x (i), .)F(x (j)) HK = E sup F ∈BK 1 m 1/2 (197) F, Xm i,j=1 εiεmK(x (i), .)F(x (j)) HK ≤ E sup K∈K sup F ∈BK 1 m 1/2 (198) F, Xm i,j=1 εiεmK(x (i), .)F(x (j)) HK ≤ E sup K∈K sup F ∈BK sup y∈Y 1 m 1/2 (199) F, Xm i,j=1 εiεmK(x (i), .)y HK $$(193)$$ $$\text{(194)}$$ $$\text{(195)}$$ $$\text{(196)}$$ $$\text{(197)}$$ $$\text{(198)}$$ $$\text{(199)}$$ . ≤ E sup K∈K sup F ∈BK sup y∈Y 1 m 1/2 (200) kFkHK Xm i,j=1 εiεmK(x (i), .)y HK = E sup K∈K sup F ∈BK: kF kHK ≤1 sup y∈Y 1 m 1/2 (201) kFkHK Xm i,j=1 εiεmK(x (i), .)y HK ≤ E sup K∈K sup y∈Y 1 m 1/2 (202) Xm i,j=1 εiεmK(x (i), .)y HK = E sup K∈K sup y∈Y 1 m 1/2 (203) Xm j=1 εm Xm i=1 εiK(x (i), .)y HK ≤ E sup K∈K sup y∈Y 1 m 1/2 (204) Xm j=1 |εm| Xm i=1 εiK(x (i), .)y HK = E sup K∈K sup y∈Y 1 √m 1/2 (205) Xm i=1 εiK(x (i), .)y HK = E sup K∈K sup y∈Y 1 √m 1/4 (206) Xm i=1 εiK(x (i), .)y,Xm j=1 εmK(x (j), .)y HK = E sup K∈K sup y∈Y 1 √m 1/4 (207) Xm i,j=1 εiεmhK(x (i), .)*y, K*(x (j), .)yiHK = E sup K∈K sup y∈Y 1 √m 1/4 (208) Xm i,j=1 εiεmhK(x (i), x(j))*y, y*iY = E sup K∈K sup y∈Y 1 √m 1/4 (209) Xm i,j=1 εiεmK(x (i), x(j))*y, y* Y ≤ E sup K∈K sup y∈Y 1 √m 1/4 (210) Xm i,j=1 εiεmK(x (i), x(j))y Y kykY ≤ β 1/4E sup K∈K sup y∈Y 1 √m 1/4 (211) Xm i,j=1 εiεmK(x (i), x(j))y Y = β 1/4E sup K∈K sup y∈Y 1 √m 1/4 (212) Xm j=1 εm Xm i=1 εiK(x (i), x(j))y Y ≤ β 1/4E sup K∈K sup y∈Y 1 √m 1/4 (213) Xm j=1 |εm| Xm i=1 εiK(x (i), x(j))y Y = β 1/4E sup K∈K sup y∈Y 1 √m 1/4 (214) Xm j=1 Xm i=1 εiK(x (i), x(j))y Y ≤ β 1/4E sup K∈K sup y∈Y sup x∈X 1 √m 1/4 (215) Xm j=1 Xm i=1 εiK(x (i), x)y Y = β 1/4E sup K∈K sup y∈Y sup x∈X 1 m Xm i=1 εiK(x (i), x)y Y !1/4 (216) = β 1/4(Rm;Y (K0))1/4. (217) $$(217)$$ (219) $\binom{220}{220}$ . Reproducing property of OVK K is used to obtain (196) and (208). Inequalities (200) and 210 are obtained by using the Cauchy-Schwarz inequality. (202) is obtained by using the definition of BK with kFkHK ≤ 1. The rest of the steps follow from the properties of inner-product and norm in HK and Y. Lemma A.9. $$\mathbb{E}\operatorname*{sup}_{F\in\rho\mathcal{B}_{\mathbb{K}}}[\mathcal{E}(F)-\mathcal{E}_{z}(F)]\leq2\mathbb{E}\left[\operatorname*{sup}_{F\in\rho\mathcal{B}_{\mathbb{K}}}{\frac{1}{m}}\sum_{i=1}^{m}\varepsilon_{i}\mathcal{L}(y^{(i)},F(x^{(i)}))\right].$$ Proof. $$\mathbb{E}\operatorname*{sup}_{F\in\rho B c}[\mathcal{E}(F)-\mathcal{E}_{z}(F)]=\mathbb{E}_{z\sim3^{m}}\operatorname*{sup}_{F\in\rho B c}\left[\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(x^{\prime},y^{\prime})\sim3}\mathcal{L}(y_{i}^{\prime},F(x_{i}^{\prime}))-\mathcal{E}_{z}(F)\right]$$ $$(218)$$ $$=\mathbb{E}_{z\sim3}{}^{m}\sup_{F\in\partial F_{\mathbb{C}}}\left[\mathbb{E}_{z^{\prime}\sim3}\!=\!\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(y^{\prime}_{i},F(x^{\prime}_{i}))-\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(y^{(i)},F(x^{(i)}))\right]$$ $$=\mathbb{E}_{z\sim3}{}^{m}\left[\sup_{F\in\partial F_{\mathbb{C}}}\mathbb{E}_{z^{\prime}\sim3}\left[\frac{1}{m}\sum_{i=1}^{m}\left(\mathcal{L}(y^{\prime}_{i},F(x^{\prime}_{i}))-\mathcal{L}(y^{(i)},F(x^{(i)}))\right)\right]\right].$$ Consider a function f dependent on two random variables *X, Y* in a class of functions F. Then $\implies f(X,Y)\leq\sup_{f\in\mathcal{F}}f(X,Y)$ $\implies\mathbb{E}_{Y}[f(X,Y)]\leq\mathbb{E}_{Y}[\sup_{f\in\mathcal{F}}f(X,Y)]$ $\implies\sup_{f\in\mathcal{F}}\mathbb{E}_{Y}[f(X,Y)]\leq\mathbb{E}_{Y}[\sup_{f\in\mathcal{F}}f(X,Y)]$ $\implies\mathbb{E}_{X}\sup_{f\in\mathcal{F}}\mathbb{E}_{Y}[f(X,Y)]\leq\mathbb{E}_{X}\mathbb{E}_{Y}[\sup_{f\in\mathcal{F}}f(X,Y)]$. Therefore, using the property established in (221), we obtain E sup F ∈ρBK [E (F) − Ez(F)] =Ez∼Zm " sup F ∈ρBK Ez 0∼Zm "1 m Xm i=1 L (y 0 i , F(x 0 i )) − L (y (i), F(x (i)))## (222) ≤Ez∼ZmEz 0∼Zm " sup F ∈ρBK 1 m Xm i=1 L (y 0 i , F(x 0 i )) − L (y (i), F(x (i)))#(223) =Ez,z0∼Zm,ε "sup F ∈ρBK 1 m Xm i=1 εi L (y 0 i , F(x 0 i )) − L (y (i), F(x (i)))#(224) $$(221)$$ =Ez,z0∼Zm,ε "sup F ∈ρBK 1 m Xm i=1 εiL (y 0 i , F(x 0 i ))! + sup F ∈ρBK 1 m Xm i=1 (−εi)L (y (i), F(x (i)))!# (225) =Ez 0∼Zm,ε "sup F ∈ρBK 1 m Xm i=1 εiL (y 0 i , F(x 0 i ))!# + Ez∼Zm,ε "sup F ∈ρBK 1 m Xm i=1 εiL (y (i), F(x (i)))!# (226) =2E " sup F ∈ρBK 1 m Xm i=1 εiL (y (i), F(x (i)))#. (227) (224) follows from (223) by the fact that using Rademacher variables does not change the value of the expression in (223) (see proof of Theorem 4.1 in (Liao, 2020)). (225) follows from (224) as −εi has the same distribution as εi. (227) is obtained from (226) as z and z 0follow identical distribution. ## A.10 Details Of Experiments In order to illustrate the effectiveness of the developed framework, we have used functional regression problem with an unknown graph structure in the input data for both synthetic and real datasets. The task of predicting output functions with the help of a Laplacian matrix denoting the relationship between the set of p input functions has been illustrated in the experiments. As practical data is always available as discrete observations corresponding to functions, standard FDA techniques can be used for the conversion of functional data into vector representation using basis functions, e.g. Fourier basis, B-spline basis, etc. Let X = (L 2([*a, b*]))p and Y = L 2([*c, d*]) be the input and output spaces, respectively. For our experiments, the error metric used is residual sum of squares error (RSSE) (Kadri et al., 2016) defined as *RSSE* = R d c Pi {y (i)(t)−yˆ (i)(t)} 2dt, where y (i)is the actual output function and yˆ (i)is the predicted output function. RSSE is better suited to compare functional outputs. The integrals involved have been approximated by using numerical integration in our implementation. The quadratic programs involved in (23) and (15) are solved by using CVXOPT (Andersen et al., 2023). Experimental Setting: All methods were coded in Python 3.7. All experiments were run on a Linux box with 182 Gigabytes main memory and 28 CPU cores. As methods to solve the problem of functional regression problem simultaneously with learning L and/or D are not available, we use popular algorithms to first determine L. Then for the learned L, we use our alternating minimization framework to learn D using projected gradient descent and u using OpMINRES. For the MCP-based L learning and D learning in the proposed alternating minimization framework, we use a decaying step-size in the projected gradient descent. The decaying step-size regime involves starting with an initial step-size (e.g. 10−4) and reducing it by a fixed factor (e.g. 2) after a set of iterations (e.g. 5) continuously till a final step-size (e.g. 10−9). Section 7 includes the details of the methods fglasso-OpMINRES-D, KGL-OpMINRES-D, Sparse Non-PosOpMINRES-L-D and Sparse OpMINRES-L-D which we use in this section. ## A.10.1 Experiments With Synthetic Data Data Generation: For synthetic experiments, three sets of experiments have been considered with input functions for graph structures having 3-nodes, 12-nodes and 25-nodes, respectively. Here, we discuss the data generation for all three settings and results for 3-nodes and 25-nodes setting. For all the methods, a truncated trigonometric basis of L 2([0, 2π]) with 30 basis functions has been considered for encoding the functional data. The experiments were run for three settings where the data has been divided randomly into a training set, a validation set and a test set. The following data splits have been considered: (80/20/20), (160/40/40) and (320/80/80), representing the number of training samples/validation samples/test samples. The data generation is discussed below. ![66_image_0.png](66_image_0.png) Figure 3: Samples from the 3-node based synthetic data. For 3-node setting, $$x_{1}(t)=\sum_{i=1}^{P}w_{i}\cos(\alpha_{i}t)$$ wi cos(αit) x2(t) = X $$x_{2}(t)=\sum_{i=1}^{P}w_{i}\cos(\alpha_{i}t)+\epsilon_{i}\qquad\qquad x_{3}(t)=b,$$ $$y(t)=\sum_{i=1}^{P}w_{i}\sin(\alpha_{i}t),$$ $$x_{7}(t)=b_{1}+n o i s e_{1}$$ $$x_{8}(t)=b_{2}+n o i s e_{2}$$ where t ∈ [0, 2π], wi, b ∈ U([−1, 1]), αi ∈ U([0, 1]), i ∈ N(0, σ2), for i ∈ [P], σ ∈ U([0, 0.25]). The functions are sampled at 100 points and normalization has been done after introducing Gaussian noise with 0.02 standard deviation for both input and output functions. Figure 3 includes some samples generated from the dataset. For 12-node setting, $$x_1(t)=\sum_{i=1}^P w_i^1\cos(\alpha_i^1t)$$ $$x_2(t)=\sum_{i=1}^P w_i^2\cos(\alpha_i^1t)$$ $$x_3(t)=\sum_{i=1}^P w_i^3\cos(\alpha_i^1t)$$ $$x_4(t)=\sum_{i=1}^P w_i^1\cos(\alpha_i^2t)$$ $$x_5(t)=\sum_{i=1}^P w_i^2\cos(\alpha_i^2t)$$ $$x_6(t)=\sum_{i=1}^P w_i^3\cos(\alpha_i^2t)$$ $$x_{9}(t)=b_{3}+n o i s e_{3}$$ t) x9(t) = b3 + *noise*3 t) x10(t) = b4 t) x11(t) = b5 t) x12(t) = b6 t) x7(t) = b1 + noise1 t) x8(t) = b2 + *noise*2 $$x_{10}(t)=b_{4}$$ $$x_{11}(t)=b_{5}$$ $$x_{12}(t)=b_{6}$$ $$y(t)=\sum_{i=1}^{P}(w_{i}^{1}+w_{i}^{2}+w_{i}^{3})\sin((\alpha_{i}^{1}+\alpha_{i}^{2})t),$$ where t ∈ [0, 2π], w ![67_image_0.png](67_image_0.png) j i , b1, b2, b3 ∈ U([−1, 1]), b4, b5, b6, αk i ∈ U([0, 1]), i ∈ N(0, σ2), for i ∈ [P], j = 1, 2, 3, k = 1, 2, σ ∈ U([0, 0.25]) and noisel ∈ N(0, 0.252), l = 1, 2, 3 for l-th partition of [0, 2π]. The functions are sampled at 100 points and normalized after Gaussian noise with 0.02 standard deviation being introduced for both. Figure 4 includes some samples generated from the dataset. Note that results for 12 node case have been discussed in the main paper. Figure 4: Samples from the 12-node based synthetic data. For 25-node setting, $$ x_1(t)=$$ $$ x_3(t)=$$ $$ x_5(t)=$$ $$ x_7(t)=$$ $$ x_9(t)=$$ ... x1(t) = X $$=\sum_{i=1}^{P}w_{i}^{1}\cos(\alpha_{i}^{1}t)$$ $$=\sum_{i=1}^{P}w_{i}^{3}\cos(\alpha_{i}^{1}t)$$ $$=\sum_{i=1}^{P}w_{i}^{5}\cos(\alpha_{i}^{1}t)$$ $$=\sum_{i=1}^{P}w_{i}^{2}\cos(\alpha_{i}^{2}t)$$ $$=\sum_{i=1}^{P}w_{i}^{4}\cos(\alpha_{i}^{2}t)$$ x3(t) = X x5(t) = X x7(t) = X x9(t) = X t) x2(t) = X $$\begin{split}x_{2}(t)&=\sum_{i=1}^{P}w_{i}^{2}\cos(\alpha_{i}^{1}t)\\ x_{4}(t)&=\sum_{i=1}^{P}w_{i}^{4}\cos(\alpha_{i}^{1}t)\\ x_{6}(t)&=\sum_{i=1}^{P}w_{i}^{1}\cos(\alpha_{i}^{2}t)\\ x_{8}(t)&=\sum_{i=1}^{P}w_{i}^{3}\cos(\alpha_{i}^{2}t)\\ x_{10}(t)&=\sum_{i=1}^{P}w_{i}^{5}\cos(\alpha_{i}^{2}t)\end{split}$$ t) x4(t) = X t) x6(t) = X t) x8(t) = X t) x10(t) = X $$\begin{array}{l}{{x_{11}(t)=b_{1}+n o i s e_{1}}}\\ {{x_{13}(t)=b_{3}+n o i s e_{3}}}\\ {{x_{15}(t)=b_{5}+n o i s e_{5}}}\\ {{x_{17}(t)=c_{2}}}\\ {{x_{19}(t)=c_{4}}}\end{array}$$ x11(t) = b1 + *noise*1 x12(t) = b2 + *noise*2 x13(t) = b3 + *noise*3 x14(t) = b4 + noise4 x15(t) = b5 + *noise*5 x16(t) = c1 x17(t) = c2 x18(t) = c3 x19(t) = c4 x20(t) = c5 $$x_{21}(t)=d_{1}\qquad\qquad x_{22}(t)=d_{2}$$ x21(t) = d1 x22(t) = d2 x23(t) = d3 x24(t) = d4 x25(t) = d5 $$y(t)=\sum_{i=1}^{P}(w_{i}^{1}+w_{i}^{2}+w_{i}^{3}+w_{i}^{4})\sin((\alpha_{i}^{1}+\alpha_{i}^{2})t),$$ $$\begin{array}{c}{{x_{12}(t)=b_{2}+n o i s e_{2}}}\\ {{x_{14}(t)=b_{4}+n o i s e_{4}}}\\ {{x_{16}(t)=c_{1}}}\\ {{x_{18}(t)=c_{3}}}\\ {{x_{20}(t)=c_{5}}}\end{array}$$ $$x_{23}(t)=d_{3}$$ $$x_{24}(t)=d_{4}\qquad\qquad x_{25}(t)=d_{5}$$ where t ∈ [0, 2π], w ![68_image_0.png](68_image_0.png) j i , bj ∈ U([−1, 1]), αk i , cj ∈ U([0, 1]), dj ∈ U([0, −1])i ∈ N(0, σ2), for i ∈ [P], j ∈ [5], k = 1, 2, σ ∈ U([0, 0.25]) and noisel ∈ N(0, 0.252), l ∈ [5] for l-th partition of [0, 2π].The functions are sampled at 100 points and normalization has been done after introducing Gaussian noise with 0.02 standard deviation for both input and output functions. Figure 5 includes some samples generated from the dataset. Figure 5: Samples from the 25-node based synthetic data. The results for synthetic data is summarized in Tables 9-13 where Sparse OpMINRES-L-D attains comparable performance with learned sparse graphs illustrating important relationships driving the functional regression. Table 9 shows that Sparse OpMINRES-L-D provides comparable results to other methods with respect to the mean RSSE on test data for 3-nodes setting. Table 10 contains the D values learned for experiments with 3 nodes which improves the performance in functional regression task in a regularized manner. For 3-nodes setting, the data generation process involves similar information corresponding to node 1 and 2, whereas node 3 involves random constants. Sparse OpMINRES-L-D captures relationship which includes sparse relation between nodes 1, 2 and 3. fglasso-OpMINRES-L-D and KGL-OpMINRES-L-D learn fully connected graphs in Table 12. Table 11 showcases the mean RSSE results for the functional regression problem for 25-nodes experiment where Sparse OpMINRES-L-D produces comparable results on the test data. In 25-nodes setting, the data generation process involves varied information in nodes 1-10, whereas nodes 11-25 contain information which does not impact the generation of the output function y. The graphs obtained for Sparse OpMINRES-L-D in | Train/Val/Test samples | Methods | Mean RSSE | | | |-----------------------------|--------------------|-------------|----------|----------| | | | Train | Val | Test | | Sparse OpMINRES-L-D | | 0.188184 | 0.117119 | 0.104051 | | 80/20/20 | fglasso-OpMINRES-D | 0.109485 | 0.124759 | 0.124163 | | KGL-OpMINRES-D | | 0.086952 | 0.124781 | 0.183851 | | Sparse Non-Pos-OpMINRES-L-D | | 0.064445 | 0.158646 | 0.149415 | | Sparse OpMINRES-L-D | | 0.040655 | 0.211139 | 0.198398 | | 160/40/40 | fglasso-OpMINRES-D | 0.062094 | 0.183526 | 0.193644 | | KGL-OpMINRES-D | | 0.07809 | 0.181575 | 0.201469 | | Sparse Non-Pos-OpMINRES-L-D | | 0.089997 | 0.19503 | 0.218885 | | Sparse OpMINRES-L-D | | 0.046926 | 0.153285 | 0.274924 | | 320/80/80 | fglasso-OpMINRES-D | 0.095652 | 0.145256 | 0.281665 | | KGL-OpMINRES-D | | 0.057804 | 0.145889 | 0.271459 | | Sparse Non-Pos-OpMINRES-L-D | | 0.087422 | 0.150039 | 0.274598 | Table 9: Mean RSSE results for 3-node synthetic data. | Train/Val/Test samples | Methods | Mean RSSE | | | |-----------------------------|--------------------|-------------|----------|----------| | | Train | Val | Test | | | Sparse OpMINRES-L-D | 0.754677 | 1.567458 | 1.822983 | | | 80/20/20 | fglasso-OpMINRES-D | 0.905085 | 1.527465 | 1.605906 | | KGL-OpMINRES-D | 0.934007 | 1.478522 | 1.594960 | | | Sparse Non-Pos-OpMINRES-L-D | 0.922789 | 1.53816 | 1.564855 | | | Sparse OpMINRES-L-D | 0.662029 | 1.549598 | 1.215493 | | | 160/40/40 | fglasso-OpMINRES-D | 0.678837 | 1.602842 | 1.231550 | | KGL-OpMINRES-D | 0.742796 | 1.571745 | 1.212629 | | | Sparse Non-Pos-OpMINRES-L-D | 0.600849 | 1.573893 | 1.215729 | | | Sparse OpMINRES-L-D | 0.767516 | 1.436166 | 1.436166 | | | 320/80/80 | fglasso-OpMINRES-D | 1.063051 | 1.385366 | 1.429937 | | KGL-OpMINRES-D | 1.069962 | 1.356429 | 1.425034 | | | Sparse Non-Pos-OpMINRES-L-D | 0.668318 | 1.418141 | 1.437544 | | | Train/ Val/ | Sparse OpMINRES-L-D | fglasso-OpMINRES-D | KGL-OpMINRES-D | | | | | | | |------------------|-----------------------|----------------------|------------------|----------|----------|----------|----------|----------|----------| | Test | | | | | | | | | | | samples 80/20/20 | 0.011319 | 0.036068 | 0.119428 | 1.001583 | 1.002949 | 1.000191 | 1.000672 | 1.002477 | 0.999940 | | 160/40/40 | 1.104571 | 0.747521 | 1.229991 | 1.008645 | 1.007241 | 1.001036 | 1.010710 | 1.009007 | 1.000914 | | 320/80/80 | 1.227095 | 0.761956 | 0.956403 | 1.019795 | 1.012624 | 1.001624 | 1.026910 | 1.019523 | 1.004683 | Table 10: D for 3-node synthetic data. Table 11: Mean RSSE results for 25-node synthetic data. Table 13 show connections majorly between nodes 1-15. Though the input functions for nodes 11-15 contain noisy random constant values, this information seems to be associated with input functions for nodes 5-10. Sparse Non-Pos-OpMINRES-L-D also discovers relations in the clusters of nodes 1-10 and 11-25 majorly. ![70_image_0.png](70_image_0.png) ## A.10.2 Experiments On Weather Data Weather data is dynamic and inter-relationships between different parameters can be hard to predict. As our problem solves a functional regression problem based on a relationship between a set of input functions, we intend to showcase the effectiveness of the proposed algorithm by predicting average dew-point temperature (F) across 12 weather stations based on their respective air temperatures (F). We consider 1 minute data of Wyoming ASOS data collected from IEM ASOS One Minute Data (Iowa Environmental Mesonet, 2022). The data has been collected for an interval of 2 hours for both input functions and output function from | Table 13: Graphs corresponding to learned L for 25-node synthetic data. [Best viewed in color] Train/ Val/ Test samples Sparse fglassoOpMINRES-D KGLOpMINRES-D Sparse Non-PosOpMINRES-L-D OpMINRES-L-D 80/20/20 160/40/40 320/80/80 | |---| January, 2022 to August, 2022. Data collected at one minute interval for different 12 weather stations in Wyoming was pre-processed to create 2 hour interval data by disregarding intervals where data was missing in any of the 12 stations. A total of 718 samples have been collected after removing missing data. The following 12 weather stations in Wyoming have been considered: Big Piney (1), Casper/Natrona Intl (2), Cheyenne/Warren AFB (3), Gillette (4), Laramie/Gen. Brees (5), Lander/Hunt Field (6), Yellowstone (7), Riverton (8), Rawlins Municipal (9), Sheridan Co. Airport (10), Torrington Municipal Airport (11), Worland Municipal (12). | Table 14: Mean RSSE results for small weather data. | | | | | |-------------------------------------------------------|--------------------|-----------|----------|----------| | Train/Val/Test samples | Methods | Mean RSSE | | | | Train | Val | Test | | | | Sparse OpMINRES-L-D | 0.004302 | 0.041949 | 0.092553 | | | 80/20/20 | fglasso-OpMINRES-D | 0.001716 | 0.059419 | 0.082662 | | KGL-OpMINRES-D | 0.002357 | 0.049899 | 0.097951 | | For all the methods, a truncated trigonometric basis of L 2([0, 1]) with 80 basis functions has been considered for encoding the functional data. We segregate the weather data experiments into small weather data experiments by considering 120 samples and full weather data experiments. The following random data splits have been considered: (80/20/20) and (472/123/123), representing the number of training samples/validation samples/test samples in small weather data and full weather data settings, respectively. We discuss the results for the small weather data here. Note that results for full weather data related experimements are already presented in the main paper. Initially, we use a small dataset with 120 samples drawn at random from the considered 8 months. Tables 1415 showcase the performance of the algorithms for small weather data. Sparse OpMINRES-L-D performs the best in terms of mean RSSE on the test data compared to fglasso-OpMINRES-L-D and KGL-OpMINRES-LD (Table 14). In Table 15, fglasso-OpMINRES-L-D and KGL-OpMINRES-L-D learn dense fully connected graphs which do not provide much information regarding the impact of different weather stations on the relationship of respective air temperature to the average dew point temperature. The plots illustrate location based relation between the 12 weather stations considered in Wyoming. Sparse OpMINRES-L-D learns a sparse L where stations CYS(3) and TOR(11), GCC(4) and WRL(12), LND(6) and RIW(8) along with P60(7) and RWL(9) are connected. CYS(3) (41.15564, −104.81047) and TOR(11) (42.06472, −104.15278) are 114.89 km apart with an elevation of 1871 m and 1282 m, respectively. GCC(4) (44.34892, −105.53936) and WRL(12) (43.96571, −107.95083) are 197.54 km apart with an elevation of 1230 m and 1294 m, respectively. LND(6) (42.81524, −108.72984) and RIW(8) (43.06423, −108.45984) are 35.37 km apart with an elevation of 1694 m and 1688 m. P60(7) (44.54444, −110.42111) and RWL(9) (41.8056, −107.19994) are 401.40 km apart with an elevation of 2368 m and 2077 m. It can be observed that the connections in the learned graph structure have been established between stations with varying distances lying in close proximity elevationwise (in 3 out of 4 cases) and latitude-wise. ## A.10.3 Experiments On Nba Data The movement of basketball and 21 players involved on the court (x-y coordinates) in the Atlanta Hawks (ATL) vs Utah Jazz (UTA) match on November 15, 2015 has been considered in this experiment. This data is available in the Github repo NBA Movement Data (Seward, 2018). The data has been collected for different plays for both input functions of 21 players and output function denoting the position of the ball, which includes missing data corresponding to some players in different plays. The data corresponding to the following players were used: Kyle Korver [ATL, G] (1), Thabo Sefolosha [ATL, G-F] (2), Paul Millsap [ATL, F] (3), Al Horford [ATL, C-F] (4), Tiago Splitter [ATL, F-C] (5), Derrick Favors [UTA, F-C] (6), Gordon Hayward [UTA, F] (7), Trevor Booker [UTA, F] (8), Alec Burks [UTA, G] (9), Shelvin Mack [ATL, G] (10), Kent Bazemore [ATL, F-G] (11), Chris Johnson [UTA, F] (12), Justin Holiday [ATL, G] (13), Dennis Schroder [ATL, G] (14), Jeff Withey [UTA, C] (15), Mike Muscala [ATL, F-C] (16), Rudy Gobert [UTA, C] (17), Trey Burke [UTA, G] (18), Raul Neto [UTA, G] (19), Rodney Hood [UTA, G] (20), Joe Ingles [UTA, F] (21), where the team, position and number assigned for the experiments has been provided. As plays in a basketball game are of different time duration, we use a truncated trigonometric basis of L 2([0, 1]) with 80 basis functions to sample the functions at fixed 100 points on [0, 1]. A total of 351 samples have been collected based on removing missing data. A random data split of (233/59/59) representing the number of training samples/validation samples/test samples has been considered. The problem requires solving a multi-dimensional functional regression problem which is incompatible with fglasso and KGL algorithms, as both fglasso & KGL are based on single dimensional input functions. Hence, we compare our method with the algorithm OpMINRES-D where a fixed L is incorporated in our alternating minimization framework. ![73_image_0.png](73_image_0.png) OpMINRES-D: A fixed L is considered corresponding to a fully connected network of 21 nodes. This decision was made as fglasso mostly learns a fully connected graph in earlier experiments. Thus, a fixed L (with no sparsity-inducing MCP) is used in the proposed alternating minimization regime for optimizing u and D. OpMINRES is used with k1(*x, x*0; G) = e −γx(x−x 0) >(L+D)(x−x 0)−γy(x−x 0) >(L+D)(x−x 0) and k 1 2 (*s, t*) = e −γ 1 op|s−t|, k1 2 (*s, t*) = e −γ 2 op|s−t|, where γx, γy ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γ 1 op, γ2 op ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Sparse OpMINRES-L-D: We consider the graph-induced operator-valued kernels using k1(*x, x*0; G) = e −γx(x−x 0) >(L+D)(x−x 0)−γy(x−x 0) >(L+D)(x−x 0) and k 1 2 (*s, t*) = e −γ 1 op|s−t|, k2 2 (*s, t*) = e −γ 2 op|s−t|, where γx, γy ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100} and γ 1 op, γ2 op ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Projected gradient descent is used in minimization with respect to L and D based on a decaying step-size. The sparsity is aided by the MCP regularization considered in learning of L. | Table 17: Mean RSSE results for NBA data. | | | | | |---------------------------------------------|---------------------|-----------|----------|----------| | Train/Val/Test samples | Methods | Mean RSSE | | | | | Train | Val | Test | | | 233/59/59 | Sparse OpMINRES-L-D | 0.025200 | 0.087748 | 0.106344 | | OpMINRES-D | 0.023261 | 0.191459 | 0.265513 | | The results are illustrated in Tables 16-17. where comparison method OpMINRES-D uses a fully connected graph, however Sparse OpMINRES-L-D performs better with a sparse learned graph in terms of mean RSSE on the test data. The following major relations are obtained for the game based on graph structure corresponding to the learned L in Table 16: - Derrick Favors [UTA, F-C]—Trevor Booker [UTA, F] (6—8) - Al Horford [ATL, C-F]—Gordon Hayward [UTA, F] (4—7) - Alec Burks [UTA, G]—Trey Burke [UTA, G] (9—18) ![74_image_0.png](74_image_0.png) - Thabo Sefolosha [ATL, G-F]—Kent Bazemore [ATL, F-G] (2—11) - Dennis Schroder [ATL, G]—Rodney Hood [UTA, G] (14—20) - Tiago Splitter [ATL, F-C]—Jeff Withey [UTA, F-C] (5—15). From the match report published in ESPN match recap and ESPN match scoreboard (ESPN, 2015a;b), it is clear that the relation Derrick Favors—Trevor Booker (6—8) had been pivotal in the win of Utah Jazz. The performance of Al Horford and Kent Bazemore for Atlanta Hawks was mentioned and captured by the relations (4—7) and (2—11). Though the partnership of Alec Burks—Trey Burke (9—18) for Utah Jazz is not evident in the match reports, their ball carrying interactions may be the reason for being learned in L. ## A.10.4 Hyperparameters For Experiments In this section, we list the hyperparameters used for Sparse OpMINRES-L-D for different experiments illustrated in this work. Common hyperparameters: λreg = 0.5, γreg = 1 (MCP), maxiter = 1000, tol = 10−3(OpMINRES). The decaying step-size regime for projected gradient descent in L and D-based minimization involves starting with an initial step-size 10−4 and reducing it by a fixed factor 2 after a set of 5 iterations continuously till a final step-size (or learning rate) 10−9. As our proposed approach Sparse OpMINRES-L-D aggregates many components, we provide some ablation studies to illustrate effectiveness in the next section. | Table 18: Hyperparameters used for experiments with Sparse OpMINRES-L-D on different data set | | s | | | | | | |-------------------------------------------------------------------------------------------------|-------------------------|------------|------|----------|------|-----|--------| | Experiment | No. of Training Samples | γop | λ | γ | ρL | ρD | mtrace | | 80 | 10 | 10−5 | 10−4 | 10−3 | 100 | 2 | | | 3-node | 160 | 10 | 10−4 | 10−2 | 10−3 | 10 | 2 | | 320 | 10 | 10−4 | 10−2 | 10−3 | 10 | 2 | | | 80 | 1 | 10−5 | 10−6 | 10−2 | 1000 | 6 | | | 12-node | 160 | 10 | 10−6 | 10−6 | 10−2 | 10 | 6 | | 320 | 10 | 10−4 | 10−5 | 10−1 | 10−5 | 6 | | | 80 | 10−1 | 10−3 | 10−4 | 10−1 | 10 | 13 | | | 25-node | 160 | 10 | 10−3 | 10−3 | 10−2 | 100 | 13 | | 320 | 100 | 10−2 | 10−2 | 10−1 | 100 | 13 | | | Weather Data | 80 | 10−3 | 10−2 | 10−1 | 10−4 | 1 | 6 | | 472 | 1 | 10−2 | 10−1 | 10−4 | 10−1 | 6 | | | op = 100 1 | | | | | | | | | NBA Data | 233 | γ op = 100 | 10−2 | γx = 0.5 | | | | | 2 | γy = 0.5 | 102 | 102 | 11 | | | | | γ | | | | | | | | ## A.10.5 Experiments For Ablation Studies In order to understand the impact of different components of Sparse OpMINRES-L-D, we have run experiments for 12-node synthetic data by varying different hyperparameters and switching off different components in our approach. In order to enforce sparsity of learned graphs, we introduce m*trace* based constraint in Section 5.1.4. In Table 19, we tabulate the graphs corresponding to learned L with Sparse OpMINRES-L-D for 12-node experiments using mtrace ∈ {0.1p, 0.25p, 0.5p, 0.75p, 0.9p} with p = 12. From Table 19, we observe that most of the edges are being retained as m*trace* value is increased. The connections learned are illustrative of the generation process of synthetic data in Section A.10.1 as m*trace* is increased. Choice of m*trace* can be based on the error corresponding to the validation set as well as the desired number of connections to be learned since the number of edges increases with increase in m*trace*. Table 20 illustrates that the performance with different m*trace* is comparable and a trade-off is expected when varying m*trace*. To illustrate the impact of choosing different kernels in our framework, we utilize different kernels as k2 in (2) by utilizing the following: - **ABS:** k2(*s, t*) = e −γop|s−t| - **DIFFABS:** k2(*s, t*) = e −γop1 |s−t| − e −γop2 |s−t| ``` - RBF: k2(s, t) = e −γop1 |s−t| 2 ``` - **EPAN:** k2(*s, t*) = max(0, 1 − γop|s − t| 2), with γop, γop1 , γop2 ∈ {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 1, 10, 100}. Note that we use k1 as defined in (108) which incorporates L and D of the learned graph. Table 21 showcases comparable performance of different kernels as k2, but Table 22 illustrates the graphs learned different connections for ABS, DIFFABS. We see that RBF and EPAN kernel choices learn different graphs which contain some useful connections which can be related to the generation of the synthetic data. It can be interpreted that most of the connections learned by ABS, DIFFABS and RBF kernels better represent the generation of 12-node synthetic data. Next, we study the impact of the m*trace* constraint in the sparsity regularization and the complete sparsity regularization framework as proposed in Section 5.1.4. Table 23 illustrates comparable performance based on mean RSSE error where m*trace* constraint is removed and L and D-based regularization is removed by setting ρL and ρD as 0. Similarly, Table 24 illustrates comparable performance when MCP regularization is | OpMINRES-L-D for 12-node experiments (p = 12). [Best viewed in color] | | | | | | | |-------------------------------------------------------------------------|---------------|----------------|---------------|----------------|---------------|----| | Train/ | | | | | | | | Val/ | Mtrace = 0.1p | Mtrace = 0.25p | Mtrace = 0.5p | Mtrace = 0.75p | Mtrace = 0.9p | | | Test | | | | | | | | samples | 0 | 0 | | | | | | 0 | 0 | | | | | | | 0 | 0 | 9 | | | | | | 0 | 0 | 0 | 0 | 0 | | | | 80/20/20 | க | 0 | 0 | 0 | 8 | 0 | | 0 | 9 | | | | | | | | 9 | | | | | | | -2 | -1 | | | | | | | Log scale | 0 | 0 | | | | | | 0 | 0 | 0 | | | | | | 0 | 0 | 0 | 0 | 0 | 0 | | | 160/40/40 | 0 | e | 0 | 0 | 0 | | | | 0 | 0 | | | | | | 0 | | | | | | | | -1 | | | | | | | | 4 | | | | | | | | 0 | 0 | | | | | | | 0 | | | | | | | | 320/80/80 | 0 | 0 | | | | | | 0 | 0 | 0 | | | | | | -3 | -2 | -1 | | | | | | Log scale | | | | | | | Table 19: Graph corresponding to learned L by considering mtrace in {0.1p,0.25p,0.5p,0.75p,0.9p} in Sparse completely removed and L and D-based regularization is removed by setting PL and PD as 0. Although the performance is comparable in terms of mean RSSE error, however we note that Tables 25 and 26 illustrate the failure to learn meaningful graphs since most of the graphs have connections with equal weights providing no relevant information. Without the mtrace constraint, Table 25 showcases negligible weights being assigned to each connection, while without the complete MCP regularization framework Table 26 showcases uniformly distributed weights across fully connected graphs. | 12-node synthetic data (p = 12). Train/Val/Test samples mtrace in Sparse OpMINRES-L-D | Mean RSSE | | | | |-----------------------------------------------------------------------------------------|---------------|----------|----------|----------| | Train | | Val | Test | | | mtrace = 0.1p | 1.250593 | 1.727192 | 1.532670 | | | mtrace = 0.25p | 1.234657 | 1.735397 | 1.587149 | | | 80/20/20 | mtrace = 0.5p | 1.140691 | 1.735397 | 1.583640 | | mtrace = 0.75p | 1.092453 | 1.904231 | 1.636385 | | | mtrace = 0.9p | 1.075372 | 1.916426 | 1.640379 | | | mtrace = 0.1p | 0.873584 | 1.291735 | 1.380035 | | | mtrace = 0.25p | 0.886945 | 1.263236 | 1.368678 | | | 160/40/40 | mtrace = 0.5p | 0.888574 | 1.229568 | 1.385952 | | mtrace = 0.75p | 0.849892 | 1.265577 | 1.412620 | | | mtrace = 0.9p | 0.831497 | 1.284316 | 1.425376 | | | mtrace = 0.1p | 1.073370 | 1.291374 | 1.237471 | | | mtrace = 0.25p | 1.071354 | 1.295419 | 1.238216 | | | 320/80/80 | mtrace = 0.5p | 1.062102 | 1.294110 | 1.239181 | | mtrace = 0.75p | 1.056678 | 1.295300 | 1.242216 | | | mtrace = 0.9p | 1.053001 | 1.296809 | 1.241735 | | | Table 21: Mean RSSE results for different kernels as k2 with 12-node synthetic data. | | | | | |----------------------------------------------------------------------------------------|---------------------------|-----------|----------|----------| | Train/Val/Test samples | k2 in Sparse OpMINRES-L-D | Mean RSSE | | | | | | Train | Val | Test | | | ABS | 1.140691 | 1.780445 | 1.583640 | | 80/20/20 | DIFFABS | 1.167264 | 1.806093 | 1.618175 | | | RBF | 1.111369 | 2.092855 | 1.754655 | | | EPAN | 0.759214 | 2.035527 | 1.838925 | | | ABS | 0.888574 | 1.229568 | 1.385952 | | 160/40/40 | DIFFABS | 1.154356 | 1.362239 | 1.417921 | | | RBF | 0.846649 | 1.236121 | 1.428687 | | | EPAN | 0.906656 | 1.260602 | 3.626625 | | | ABS | 1.062102 | 1.294110 | 1.239181 | | 320/80/80 | DIFFABS | 1.073292 | 1.295140 | 1.243346 | | | RBF | 0.931760 | 1.330628 | 1.283005 | | | EPAN | 1.026490 | 1.304147 | 1.247868 | Table 20: Mean RSSE results for m*trace* in {0.1p, 0.25p, 0.5p, 0.75p, 0.9p} using Sparse OpMINRES-L-D in 12-node synthetic data (p = 12). Further, we perform experiments where we utilize a lasso based regularization to induce sparsity instead of depending upon the proposed sparsity inducing framework. We introduce ρDPp i=1 |Dii|, ρD > 0 in (3) instead of ρLPn i=1 x (i)>Lx(i) + ρDkDk 2 F and ignore the proposed MCP regularization framework. Table 28 illustrates comparable performance in terms of mean RSSE error but Table 27 showcases the failure to distinguish meaningful interactions in the learned graphs. Table 22: Graph corresponding to learned L by considering different kernels for k2 in Sparse OpMINRES-IL-D ![78_image_0.png](78_image_0.png) | Train/Val/ Test samples | Methods | Mean RSSE | | | |--------------------------------------------|---------------------------------|-------------|----------|----------| | | Train | Val | Test | | | Sparse OpMINRES-L-D | 1.140691 | 1.780445 | 1.583640 | | | 80/20/20 | Sparse OpMINRES-L-D with no MCP | 0.911711 | 1.893550 | 1.481332 | | Sparse OpMINRES-L-D with no MCP and ρL = 0 | 0.911711 | 1.893550 | 1.481332 | | | Sparse OpMINRES-L-D with no MCP and ρD = 0 | 0.755296 | 1.892366 | 1.399945 | | | Sparse OpMINRES-L-D | 0.888574 | 1.229568 | 1.385952 | | | 160/40/40 | Sparse OpMINRES-L-D with no MCP | 0.838759 | 1.282838 | 1.397084 | | Sparse OpMINRES-L-D with no MCP and ρL = 0 | 0.838759 | 1.282838 | 1.397084 | | | Sparse OpMINRES-L-D with no MCP and ρD = 0 | 0.793097 | 1.289822 | 1.429700 | | | Sparse OpMINRES-L-D | 1.062102 | 1.294110 | 1.239181 | | | 320/80/80 | Sparse OpMINRES-L-D with no MCP | 1.062727 | 1.298913 | 1.240176 | | Sparse OpMINRES-L-D with no MCP and ρL = 0 | 1.062727 | 1.298913 | 1.240176 | | | Sparse OpMINRES-L-D with no MCP and ρD = 0 | 1.062727 | 1.298926 | 1.240208 | | | Train/Val/ Test samples | Methods | Mean RSSE | | | |-----------------------------------------------|------------------------------------|-------------|----------|----------| | | Train | Val | Test | | | Sparse OpMINRES-L-D | 1.140691 | 1.780445 | 1.583640 | | | 80/20/20 | Sparse OpMINRES-L-D with no mtrace | 1.274110 | 1.729524 | 1.530376 | | Sparse OpMINRES-L-D with no mtrace and ρL = 0 | 1.273360 | 1.729269 | 1.530041 | | | Sparse OpMINRES-L-D with no mtrace and ρD = 0 | 0.790852 | 1.891594 | 1.407203 | | | Sparse OpMINRES-L-D | 0.888574 | 1.229568 | 1.385952 | | | 160/40/40 | Sparse OpMINRES-L-D with no mtrace | 0.896321 | 1.229568 | 1.371430 | | Sparse OpMINRES-L-D with no mtrace and ρL = 0 | 0.896311 | 1.280351 | 1.371432 | | | Sparse OpMINRES-L-D with no mtrace and ρD = 0 | 0.833068 | 1.291457 | 1.410686 | | | Sparse OpMINRES-L-D | 1.062102 | 1.294110 | 1.239181 | | | 320/80/80 | Sparse OpMINRES-L-D with no mtrace | 1.078058 | 1.291549 | 1.237178 | | Sparse OpMINRES-L-D with no mtrace and ρL = 0 | 1.078026 | 1.291574 | 1.237129 | | | Sparse OpMINRES-L-D with no mtrace and ρD = 0 | 1.078058 | 1.291550 | 1.237179 | | Table 25: Graph corresponding to learned L by no mtrace constraint and controlling PL in L-based regular- ![80_image_0.png](80_image_0.png) Table 26: Graph corresponding to learned L by switching off the MCP-based regularization and controlling ρL in L-based regularization and ρD in D-based regularization for 12-node experiments. [Best viewed in ![81_image_0.png](81_image_0.png) ![82_image_0.png](82_image_0.png) Table 27: Graph corresponding to learned L by using only D-lasso based regularization without MCP-based sparsity inducing regularization instead of Sparse OpMINRES-L-D for 12-node experiments. [Best viewed in color] | Train/Val/ Test samples | Methods | Mean RSSE | | | |---------------------------------------------|---------------------|-------------|----------|----------| | | Train | Val | Test | | | 80/20/20 | Sparse OpMINRES-L-D | 1.140691 | 1.780445 | 1.583640 | | Sparse OpMINRES-L-D with no MCP and D lasso | 0.745433 | 1.891283 | 1.402357 | | | 160/40/40 | Sparse OpMINRES-L-D | 0.888574 | 1.229568 | 1.385952 | | Sparse OpMINRES-L-D with no MCP and D lasso | 0.821067 | 1.284889 | 1.409910 | | | 320/80/80 | Sparse OpMINRES-L-D | 1.062102 | 1.294110 | 1.239181 | | Sparse OpMINRES-L-D with no MCP and D lasso | 1.073490 | 1.297428 | 1.241109 | |
Review 1: Summary: In this paper, the authors propose a approach to learn relational structures given multiple input variables. These variables may be features, or functions of input-output maps, etc. There is a label associated with reach realization of the inputs. The goal is to learn the mapping from the input variables to the output label. The approach here is based on kernel methods. This kernel operates on the multiple input variables, with possible extensions such as adding regularization to the weights of the kernel map. One of the main idea of this paper is to add a Laplacian matrix on top of this kernel. This Laplacian matrix takes an input graph, and then uses the Laplacian matrix product to associate the input variables. - In the setting where the graph is given, learning the kernel is straightforward, similar to standard kernel methods. - When the graph is not known, the paper designs an alternating minimization method, which learns the kernel and the Laplacian in an alternative fashion. Some theoretical analysis and experimental results are used to illustrate the above. - In particular, the theoretical analysis concerns the generalization error of learning the kernel. - The experiments apply the alternating minimization method on a weather forecasting dataset and an NBA player movement dataset. The results illustrate the presence of associations within the input features. Strengths and Weaknesses: S1) A useful framework for learning structural associations between input features. I could imagine use cases of this framework for learning relations among the input features. S2) Overall, paper is clearly written and easy to follow. S3) Experiments are helpful for understanding the results. ========================= W1) The paper is quite long and I find that lots of the content are unrelated to the main claims of the paper. - In Sec 5.1, this section focuses learning with known graph structure; is this setting used elsewhere in the paper? If I understand correctly, both settings in the experiments are in the case where the graph structure is not known. - In Sec 6, things like Definition 6.1 and Lemma 6.2 should probably be in the appendix (which look like textbook materials). - The proof of Lemma 6.6 should also be in the appendix (steps look standard). W2) Experiment codes are not available. W3) There are no ablation studies detailing each component of the algorithm. There is no ablation on the effect of the regularization penalty as well. Requested Changes: I think this paper introduces a useful framework for learning relational maps given multiple variables. The algorithm design is sound. The experiment findings would be interesting to the community. At the same time, I think the paper requires some revision, for instance reducing supporting information from the main text that are irrelevant from the main claims. Besides, experiment codes should be released. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This research focuses on the task of function-to-function regression, which involves learning a mapping from a set of input functions to an output function. The authors propose an innovative approach using a graph-induced operator-valued kernel to capture the relationships among the input functions. Even in cases where the underlying graphical structure is unknown, they present a method to learn an appropriate Laplacian matrix, enabling a comprehensive understanding of the connections. To tackle this learning problem, the authors develop an alternating minimization framework. This framework incorporates a min-max concave penalty that encourages meaningful interactions within the graphical structure, leading to more reliable results. The framework is also designed to handle complex scenarios involving multi-dimensional input and output functions. In addition, they introduce an efficient sample-based approximation algorithm, allowing for efficient processing of large datasets. The research also provides valuable insights into the generalization performance of the learned mapping. The authors establish bounds on the generalization error of the proposed method. Furthermore, through extensive empirical evaluations on both synthetic and real datasets, they demonstrate the effectiveness of their learning framework. Strengths and Weaknesses: Strength: 1. A strength of this research lies in its ability to incorporate the learning of an appropriate graphical structure within the context of functional regression. 2. The incorporation of a sparsity-inducing regularizer in the Laplacian matrix enhances interpretability by highlighting the most relevant interactions among input functions. Additionally, the proposed sample-based approximation algorithm improves scalability, enabling efficient processing of the data. 3. The research establishes a generalization error bound for the proposed approach. Weakness: 1. The research does not extensively compare the proposed method with existing approaches in functional regression, e.g. functional PCA based approach or neural networks based approach. 2. The theoretical results presented in the paper focus on the simplest case, without considering the incorporation of a sparsity-inducing regularizer or the use of a sample-based approximation algorithm. Requested Changes: 1.The numerical experiments involve only several hundred training examples, which leaves uncertainty regarding the assessment of scalability. 2.The numerical experiments do not provide clear evidence on whether the proposed nonlinear model outperforms the linear model presented in (Gómez et al., 2021). 3. It is a little bit strange that the convergence rate $m^{-\eta/8(1+\eta)}$ in (125) is independent of the covering number index $q$ in (118). Moreover, it is strong to assume that $F\in\mathcal{H}_{\mathcal{K}}$ in Theorem 6.3. 4. Any special reason to use MCP instead of LASSO? Broader Impact Concerns: NA ================================================== Review 3: Summary: The paper outlines a novel approach to functional regression problems, where the objective is to map multiple input functions to a single output function. The key contributions and methodologies presented in the paper are as follows: The authors introduced a graph-induced Operator-Valued Kernel (OVK) to solve functional regression problems, leveraging a graphical structure to represent interactions among input functions, especially a sparsity-inducing regularization on the Laplacian matrix. This framework assists in learning the regression map along with the graphical structure itself. The authors raised an alternating minimization framework for learning the map and the matrices representing the graphical structure through the Operator-based Minimum Residual (OpMINRES) algorithm and projected gradient descent and they extended the framework to address multi-dimensional functional regression problems. They also studied the generalization error bounds for the learned functional regression map, which also takes into account the learning of the graph-induced OVK. Strengths and Weaknesses: ## Upsides: 1. In general, this paper is well-written. Background information and relevant literature are adequately covered. Technical developments of the work are rigorous, intuitive, and well-founded from existing works. The mathematical quality of the work is good. 2. The authors provided extensive examples and experiments for functional regression models with graph structures. This empirically shows the improvement of the authors' methods in some spatial-temporal functional regression problems. ## Downsides: 1. There is a lack of proof for the convergence of alternating minimization of a non-convex function $J$ proposed in this paper. The choice of hyperparameters, e.g. learning rates, in OpMINRES is not explicitly discussed in the paper to ensure convergence. 2. Although the authors provided a generalization analysis, there is a lack of discussion on how the graph-induced kernel regression and sparsity of the graph change the generalization error bound for the regression problem. The interpretation of Theorem 6.3 and (125) should be expanded in the paper. 3. In the kernel regression part, the authors chose specific kernels $k_1$ and $k_2$ in (2) without enough explanations on the choice of the kernels. There should be either empirical or theoretical comparisons with different kernels on this problem. Requested Changes: 1. The notation is (4) and below are confusing. Does this inner product stand for $x^\top Lx'=\sum_{i,j} \int_0^1x_i(t) L_{ij} x_j'(t)dt$? 2. Proposition 5.1: typo $L([0,1])$. 3. Typo in (9). 4. If we consider a graph $G$ without self-loops, then $L= D-A$ will determine $D$. In this case, we do not need regularization on $D$ in (13), right? This should simplify the algorithm. In the results of the experiments, I did not see self-loops for all vertices. So why do we need regularization on $D$? 5. At the top of page 10, the authors mentioned a different regularization from Humbert et al., 2021. Is this smoothness term better than (13)? There should be more explanations here. 6. In (22), using upper indices to indicate the number of iterations is confusing with the powers of the quantities. 7. In Section 5.2.4, you used the trace of Laplacian for sparsity-induced regularization, but why not use $\ell_1$ minimization of $D$ for regularization? Besides, any insight into why you chose $m_{trace}=p/2$. The choice of $m_{trace}$ indicates the graph is extremely sparse but this hyperparameter should depend on the problem and datasets, which can be estimated in other ways. 8. Please specify the hyperparameters in Algorithms 1 and 2. 9. Typo above (73): $X^m$ should be $\mathcal{X}^m$. 10. In Section 7, you used KPL-OpMINRES-D in many places which should be KGL-OpMINRES-D. 11. How do you explain the results in Section 7.2 on weather data? Sparse OpMINRES-L-D provided a sparse graph but should it be better if the graph has more edges for places close to each other? 12. In A.9, you repeated the details of the experiments stated in Section 7, which is a redundancy. Maybe you should provide more details such as the learning rates, regularization parameters, and $m_{trace}$. Broader Impact Concerns: Not very applicable here. ================================================== Metareview: Recommendation: Accept as is Comment: The paper formulates a new problem (many-to-one functional regression) and properly motivates it. The paper provides a method to address this challenge, and demonstrates empirically that it is general and effective. In some cases, convergence bounds are provided and proven. The scope is clearly within machine learning. The reviewers felt the authors were appropriately responsive to their comments. Ultimately, the improvements made were minor, and the paper is ready for publication. ==================================================
# Remix: Regret Minimization For Monotonic Value Function Factorization In Multiagent Reinforcement Learning Anonymous authors Paper under double-blind review ## Abstract Value function factorization methods have become a dominant approach for cooperative multiagent reinforcement learning under a centralized training and decentralized execution paradigm. By factorizing the optimal joint action-value function using a monotonic mixing function of agents' utilities, these algorithms ensure the consistency between joint and local action selections for decentralized decision-making. Nevertheless, the use of monotonic mixing functions also induces representational limitations. Finding the optimal projection of an unrestricted mixing function onto monotonic function classes is still an open problem. To this end, we propose ReMIX, formulating this optimal projection problem for value function factorization as a regret minimization over the projection weights of different state-action values. Such an optimization problem can be relaxed and solved using the Lagrangian multiplier method to obtain the close-form optimal projection weights. By minimizing the resulting policy regret, we can narrow the gap between the optimal and the restricted monotonic mixing functions, thus obtaining an improved monotonic value function factorization. Our experimental results on Predator-Prey and StarCraft Multiagent Challenge environments demonstrate the effectiveness of our method, indicating the better capabilities of handling environments with non-monotonic value functions. ## 1 Introduction Reinforcement learning has demonstrated great potential in solving challenging real-world problems, from autonomous driving (Cao et al., 2012; Hu et al., 2019) to robotics and planning (Matignon et al., 2012; Levine et al., 2016; Hüttenrauch et al., 2017). In many scenarios, these tasks involve multiple agents within the same environment and thus require multiagent reinforcement learning (MARL) (Vinyals et al., 2019; Jaques et al., 2019; Baker et al., 2019; Wang et al., 2020b) to coordinate agents and learn desired behaviors from their experiences. Due to practical communication constraints and the need to cope with vast joint action space, MARL algorithms often leverage fully decentralized policies but learn them in a centralized fashion with access to additional information during training. Value function factorization methods, e.g., QMIX (Rashid et al., 2018), QPLEX (Wang et al., 2020a), Qatten (Yang et al., 2020), FOP (Zhang et al., 2021), and DOP (Wang et al., 2020c), have been a dominant approach for such centralized training and decentralized execution (CTDE) MARL (Kraemer & Banerjee, 2016). By factorizing the optimal joint action value function using a monotonic mixing function of per-agent utilities, these algorithms ensure the consistency between joint and local action selections for decentralized decision-making. Superior performance has been reported in many MARL tasks, such as the StarCraft Multiagent Challenge (SMAC) (Samvelyan et al., 2019). It is known that value function factorization can be viewed as an operator (Dugas et al., 2009), which first computes the optimal joint action value functions as targets and then projects them onto the space representable by monotonic function classes. The projected monotonic mixing functions enable efficient maximization yet allow decentralized decision-making. However, it also poses representational limitations. For instance, QMIX leverages a universal approximator for non-linear monotonic mixing functions. It prevents QMIX from efficiently representing joint action value functions where agents' orderings of their action choices depend on each other (Mahajan et al., 2019). Later, the authors in the paper (Rashid et al., 2020) proposed an improved projection using Weighted QMIX (WQMIX). It assigns higher weights to the values of optimal joint actions than the suboptimal ones, resulting in a better projection that more accurately represents these optimal values. However, WQMIX relies purely on a heuristic design - such as CentrallyWeighted (CW) and Optimistically-Weighted (OW) - where such weight term is a constant. Finding an optimal projection onto the monotonic function class is still an open problem. To this end, we propose ReMIX, formulating the optimal projection problem for value function factorization as a regret minimization over the projection weights of different state-action values. Specifically, we construct an optimal policy following the optimal joint action-value function and a restricted policy using its projection onto monotonic mixing functions. A policy regret is then defined as the difference between the expected discounted reward of the optimal policy and that of the restricted policy. By minimizing such policy regret through an upper bound, we can narrow the gap between the optimal and restricted policies and thus force the projected monotonic value function to approach the optimal one during learning, leading to an optimal monotonic factorization with minimum regret. We note that while policy regret minimization has been employed to formulate various optimizations in reinforcement learning, such as optimal prioritized experience replay (Liu et al., 2021) and loss function design (Jin et al., 2018), to the best of our knowledge, this is the first proposal for optimizing value function factorization in MARL through policy regret minimization. We show that the proposed regret minimization can be solved via the Lagrangian method (Bertsekas, 2014) considering an upper bound. By examining a weighted Bellman equation involving monotonic mixing functions and per-agent critics, we leverage the implicit function theorem (Krantz & Parks, 2002) and derive Karush–Kuhn–Tucker (KKT) (Ghojogh et al., 2021) conditions to find the optimal projection weights in closed form. Our results highlight the key principles contributing to optimal monotonic value function factorization. The optimal projection weights can be interpreted to consist of four components: Bellman error, value underestimates, the gradient of the monotonic mixing function, and the on-policiness of available transitions. We note that the first two terms relating to Bellman error and value underestimates are consistent with the weighting heuristics proposed in WQMIX, thus providing a quantitative justification and recovering WQMIX as a special case. More importantly, our analysis reveals that an optimal value function factorization should also depend on the gradient of the monotonic mixing function and the positive impact of more current transitions. Following the theoretical results, we provide a tractable approximation of the optimal projection weights and propose a MARL algorithm of ReMIX with regret-minimizing monotonic value function factorization. We validate the effectiveness of ReMIX in Predator-Prey (Böhmer et al., 2020) and SMAC. Compared with state-of-the-art factorization-based MARL algorithms (e.g., WQMIX, QPlex, FOP, DOP), ReMIX is shown to better cope with environments with non-monotonic value functions, resulting in improved convergence and superior empirical performance. The main contributions of our work are as follows: - We propose a novel method, ReMIX, formulating the optimal value function factorization as a policy regret minimization and solving the weights of the optimal projection in closed form. - The theoretical results and tractable weight approximations of ReMIX enable cooperative MARL algorithms with improved value function factorization. - Experiment results of ReMIX in Predator-Prey and SMAC environments demonstrate superior convergence and empirical performance over state-of-the-art factorization-based methods. We further perform ablation studies to demonstrate the contribution of each component in our design. ## 2 Background 2.1 Partially Observable Markov Decision Process We describe a fully cooperative multiagent sequential decision-making task as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016) consisting of a tuple G = h*S, U, P, R, Z, O, n, γ*i, where s ∈ S describes the global state of the environment. At each time step, each agent a ∈ A ≡ {1*, . . . , n*} selects an action u a ∈ U, and all selected actions are combined to form a joint action u ∈ U ≡ U n. This process leads to a transition in the environment based on the state transition function P(s 0|s, u) : S × U × S → [0, 1]. All agents share the same reward function r(s, u) : S × U → R with a discount factor γ ∈ [0, 1). In the partially observable environment, the agents' individual observations z ∈ Z are generated by the observation function O(*s, u*) : S × A → Z. Each agent has an action-observation history τa ∈ T ≡ (Z × U) ∗. Conditioning on the history, the policy becomes π a(ua|τa) : T × U → [0, 1]. The joint policy π has a joint action-value function: Qπ(st, ut) = Est+1:∞,ut+1:∞[Rt|st, ut], where t is the timestep and Rt =P∞ i=0 γ irt+iis the discounted return. In this paper we adopt the centralized training and decentralized execution paradigm: the learning algorithm has access to all local action-observation histories τ and global state s during training while each agent can only access its own action-observation history in execution. ## 2.2 Policy Regret The object of MARL is to find a joint policy π that can maximize the expected return: η(π) = Eπ[P∞ i=0 γ irt+i]. For a fixed policy, the Markov decision process becomes a Markov reward process, where the discounted stationary state distribution is defined as d π(s). Considering the partially observable scenario of MARL, we replace the state in discounted state distribution with agents' action observation histories1, i.e., d π(τ ). Similarly, the discounted history action distribution is defined as d π(τ , u) = d π(τ )π(u|τ ). Then, we will have the expected return rewritten as η(π) = 1 1−γ Edπ(τ,u)[r(s, u)]. We assume there exists an optimal joint policy π ∗such that π ∗ = arg maxπ η(π). The regret of the joint policy π is defined as regret(π) = η(π ∗) − η(π). The policy regret measures the expected loss when following the current policy π instead of optimal policy π ∗. Since η(π ∗) is a constant, minimizing the regret is consistent with maximizing of expected return η(π). In this paper, we use regret as an alternative optimization objective for finding the optimal projection in MARL, along with multiple constraints, e.g., the Bellman equation and the sum of projection weights. By minimizing the regret, the current policy πk following a monotonic value factorization will approach the optimum π ∗following an unrestricted value function. ## 3 Related Work 3.1 Value Decomposition Approaches Value decomposition approaches (Guestrin et al., 2002; Castellini et al., 2019) are widely used in value-based MARL. Such methods integrate each agent's local action-value functions through a learnable mixing function to generate global action values. For instance, VDN (Sunehag et al., 2017) and QMIX estimate the optimal joint action-value function Q∗ as Qtot with different formations. VDN aims to learn a joint action-value function Qtot of the sum of individual utilities for each agent. QMIX calculates Qtot by combining mentioned utilities via a continuous state-dependent monotonic function, generated by a feed-forward mixing network with non-negative weights. QTRAN (Son et al., 2019) and QPLEX further extend the class of value functions that can be represented. Besides value-based factorization algorithms, some works extend the value decomposition method to policy-based actor-critic algorithms. In VDAC (Su et al., 2021), a factorized actor-critic framework compatible with A2C can obtain a reasonable trade-off between training efficiency and algorithm performance. Recently proposed FOP (Zhang et al., 2021) provides a new way to factorize the optimal joint policy induced by maximum-entropy MARL into individual policies. DOP (Wang et al., 2020c) addresses the issue of centralized-decentralized mismatch and credit assignment in both discrete and continuous action spaces in the multiagent actor-critic framework. In this paper, we recast the problem of projecting an unrestricted value function onto monotonic function classes as a policy regret minimization, whose solution allows us to find the optimal projection weights to obtain an improved value function factorization. 1Decentralized MARL problems inherently follow POMDPs, where history-based functions and distributions will reflect the impact of partial observability. ## 3.2 Weighting Scheme In Wqmix QMIX restricts the joint action-value function to be a monotonic mixing of agents' utilities, such that Qtot(τ , u) = fs(Q1(τ1, u1), . . . , Qn(τn, un)) where ∂fs ∂Qa ≥ 0, ∀a ∈ A ≡ 1*, . . . , n*, preventing it from projecting non-monotonic joint action representation. WQMIX solved the limitation by introducing the weights into the projection to retrieve the optimal policy. The WQMIX algorithms - OW and CW QMIXs - can place more importance on the better Qtot in minimizing the loss: Pb i=1 w(τ , u)(Qtot(τ , u; θ) − y¯i) 2, where y¯i = r + γQˆ∗(τ 0, arg maxu0 Qtot(τ 0, u 0; θ −)) is the fixed target, Qˆ∗is the unrestricted joint action-value function, and w is the weighting function2. For example, in OW, the w is given by: $$w(\pmb{\tau},\mathbf{u})=\begin{cases}1&Q_{tot}(\pmb{\tau},\mathbf{u})<\bar{y}_{i}\\ \alpha&\text{otherwise.}\end{cases}\tag{1}$$ $$(1)$$ When a transition is overestimated in the OW paradigm, it will be assigned with a constant weight α ∈ (0, 1]. Compared to OW, CW has a similar mechanism but assigns weights to a transition whose joint action u is not the best. We note that while insightful, these methods are based on heuristic designs of projection weights. Finding optimal projection weights for monotonic value function factorization is still an open problem. In this paper, we reformulate the problem as a policy regret minimization and solve the optimal projection weights in closed form by relaxing the objective and the Lagrangian method. ## 4 Optimal Projection Onto Monotonic Value Functions 4.1 Problem Formulation As Regret Minimization Let Q∗ be the unrestricted joint action value function and Qtot = fs(Q1(τ1, u1), . . . , Qn(τn, un)) be its estimation obtained through a monotonic mixing function fs(·) of per-agent utilities Qa(τi, ui) for a = 1*, . . . , n*. For simplicity of notations, we use Qk to denote Qtot at step k. Adopting B ∗Q∗k−1 as the target with a Bellman operator B ∗, we update Qk in tandem using a weighted Bellman equation: Qk = arg minQ∈Q Eµ[wk(τ , u)(Q − B∗Q∗k−1 ) 2(τ , u)], where wk(τ , u) are non-negative projection weights for different transitions that need to be optimized. This projects the unrestricted value function onto a monotonic function class Q ∈ Q. To formulate the policy regret with respect to this projection, we consider a Boltzmann policy πk following the agent's individual utilities Qa k obtained from such monotonic value factorization, i.e., πk = [π 1 k , ..., πn k ] T and π a k = e Qa k (τa,ua)/[Pτa,u0a e Qa k (τa,u0a )], as well as a similar policy π ∗following the unrestricted value function Q∗that is defined over joint actions in the Boltzmann manner. Our objective is to minimize the policy regret η(π ∗) − η(π) over non-negative projection weights under relevant constraints, i.e., $$\begin{array}{ll}\min_{w_{k}}&\eta(\pi^{*})-\eta(\pi_{k})\\ \mbox{s.t.}&Q_{k}=\arg\min_{Q\in\mathcal{Q}}\mathbb{E}_{\mu}[w_{k}(\mathbf{\tau},\mathbf{u})(Q-\mathcal{B}^{*}Q^{*}_{k-1})^{2}(\mathbf{\tau},\mathbf{u})],\\ &\mathbb{E}_{\mu}[w_{k}(\mathbf{\tau},\mathbf{u})]=1,\quad w_{k}(\mathbf{\tau},\mathbf{u})\geq0,\\ &Q_{k}(\mathbf{\tau},\mathbf{u})=f_{s}(Q^{1}(\tau_{1},u_{1}),\ldots,Q^{n}(\tau_{n},u_{n})),\end{array}\tag{2}$$ where π ∗ and πk are policies in the Boltzmann fashion following the unrestricted and monotonic value functions, respectively. The projection weights must sum up to 1, and µ is the data distribution that we sample data from the replay buffer. An additional table to summarize and explain the all given notations is provided in Appendix A. ## 4.2 Solving Optimal Projection Weights The solution to this optimization problem relies on the monotonic function fs(·) represented by a mixing network, which takes the state and agent networks' output Qa k as inputs and generates an estimate of 2WQMIX defines the weight as w(s, u). Considering Dec-POMDP with the CTDE paradigm, w(s, u) is equivalent to w(τ , u). joint value function Qtot. Solving the regret minimization problem through the Lagrangian method requires analyzing the KKT conditions. Thus, we first find the first-order derivative of the monotonic mixing network, which will also be leveraged to find an optimal solution. The mixing network is a universal approximator consisting of a two-layer network of non-negative weight (Dugas et al., 2009). We compute its first-order derivative in the following lemma. Lemma 1. Considering a two-layer mixing network of the weight matrix W1, W2, bias b1, b2 and activation function h(·), the derivative of Qtot over one of the local utilities Qais: $$f_{s,Q^{a}}^{\prime}=\frac{\partial Q_{t o t}}{\partial Q^{a}}=h_{Q^{a}}^{\prime}(\vec{Q}^{\mathrm{T}}W_{1}+b_{1})\sum_{j=1}^{m}w_{a j}^{1}w_{j}^{2},$$ where Q~ = [Q1*, . . . , Q*n] T. W1, W2 are the n × m and 1 × m *matrix correspondingly, with the respective* elements w 1 ij and w 2 j in each matrix. n is the agent number, and m is the width of the mixing network. Proof. See Appendix B. Given that the monotonic mixing function is smooth and differentiable, we consider an upper bound of the regret objective (obtained using a relaxation and Jensen's inequality) and formulate its Lagrangian by introducing Lagrangian multipliers with respect to the constraints. It allows us to solve the proposed regretminimization problem and obtain optimal projection weights in closed form (albeit with a normalization factor Z ∗). Theorem 1 (Optimal weighting scheme). Under mild conditions, the optimal weight wk(s, u) *to a relaxation* of the regret minimization problem in equation 2 with discrete action space is given by: $$w_{k}(\mathbf{\tau},\mathbf{u})={\frac{1}{Z^{*}}}(E_{k}(\mathbf{\tau},\mathbf{u})+\epsilon_{k}(\mathbf{\tau},\mathbf{u})),$$ (Ek(τ , u) + k(τ , u)), (3) where when Qk ≤ B∗Q∗k−1 , we have $$E_{k}(\mathbf{\tau},\mathbf{u})={\frac{d^{\pi_{k}}(\mathbf{\tau},\mathbf{u})}{\mu(\mathbf{\tau},\mathbf{u})}}(B^{*}Q_{k-1}^{*}-Q_{k})\exp(Q_{k-1}^{*}-Q_{k})\left(\sum_{j=1}^{n}{\frac{1-\pi^{j}}{f_{s,Q^{j}}^{\prime}}}-1\right),$$ $$\square$$ $$(3)$$ and otherwise (i.e., when Qk > B ∗Q∗k−1 ), we have $\overline{c}_k(\tau_i)$ $$=[1,2]$$ $\downarrow$ 2. $\pm\epsilon_0$ Ek(τ , u) = 0, where Z ∗is the normalization factor, and k(τ , u)) is a negligible term when the probability of reversing back to the visited state is small, or the number of steps agents take to revisit a previous state is large. Proof. We give a sketch of the proof below and provide the complete proof in Appendix C. The derivation of optimal weights consists of the following major steps: (i) Use a relaxation and Jensen's inequality to obtain a more tractable upper bound of the regret objective for minimization. (ii) Formulate the Lagrangian for the new optimization problem and analyze its KKT conditions. (iii) Compute various terms in the KKT condition and, in particular, analyze the gradient of Qk with respect to weights pk (defined through the weighted Bellman equation) by leveraging the implicit function theorem (IFT). (iv) Derive the optimal projection weights in closed form by setting the Lagrangian gradient to zero and applying KKT and its slackness conditions. Step 1: Relaxing the objective and adopting Jensen's inequality. To begin with, we replace the original optimization objective function, the policy regret, with a relaxed upper bound. This replacement can be achieved through the following inequality since both sides of the equation have the same minimum: $$\eta(\pi^{*})-\eta(\pi_{k})\leq\mathbb{E}_{d^{\pi_{k}}(\tau)}[(Q_{k-1}^{*}-Q_{k})(\tau,\mathbf{u}^{*})]+\mathbb{E}_{d^{\pi_{k}}(\tau,\mathbf{u})}[(Q_{k}-Q_{k-1}^{*})(\tau,\mathbf{u})].$$ )(τ , u)]. (4) $$\left(4\right)$$ The proof of this result is given in Appendix. The key idea is to rewrite the regret using the expectation of the action-value functions with respect to discounted distribution d πk . After that, we adopt Jensen's inequality (McShane, 1937) to continue relaxing the intermediate objective function based on a convex function g(x) = exp(−x). Thus, a new optimization objective generated from equation 4 becomes: $$\begin{array}{r l}{\operatorname*{min}_{w_{k}}}&{{}-\log\mathbb{E}_{d^{n_{k}}(\tau)}[\exp(Q_{k}-Q_{k-1}^{*})(\tau,\mathbf{u}^{*})]-\log\mathbb{E}_{d^{n_{k}}(\tau,\mathbf{u})}[\exp(Q_{k-1}^{*}-Q_{k})(\tau,\mathbf{u})],}\end{array}$$ where the constraints still hold for the new optimization objective. Step 2: Computing the Lagrangian. In this step, we leverage the Lagrangian multiplier method to solve the new optimization problem in equation 5. For simplicity, we use pk that absorbs the data distribution µ into wk. The constructed Lagrangian is: (5) $\frac{1}{2}$ $\mathrm{\texttt{true}}$. $$\begin{array}{c}{{{\mathcal L}(p_{k};\lambda,\nu)=-\log\mathbb{E}_{d^{\pi_{k}}(\tau)}[\exp(Q_{k}-Q_{k-1}^{*})(\tau,{\bf u}^{*})]}}\\ {{\qquad\qquad-\log\mathbb{E}_{d^{\pi_{k}}(\tau,{\bf u})}[\exp(Q_{k-1}^{*}-Q_{k})(\tau,{\bf u})]}}\\ {{\qquad\qquad+\lambda(\sum_{\tau,{\bf u}}p_{k}-1)-\nu^{\tau}p_{k},}}\end{array}$$ where pk is the weight wk multiplied by the data distribution µ, and *λ, ν* are the Lagrange multipliers. Step 3: Computing the Gradients Required in the Lagrangian. According to the first constraint in equation 2, the gradient ∂Qk ∂pk can be computed via IFT given by: $$\frac{\partial Q_{k}}{\partial p_{k}}=-[\mathrm{diag}(p_{k})]^{-1}[\mathrm{diag}(Q_{k}-\mathcal{B}^{*}Q_{k-1}^{*})].$$ $\square$ We also derive the gradient ∂dπk (τ,u) ∂pkfor solving the Lagrangian. The derivation details are given in the Appendix. Step 4: Deriving the Optimal Weight. After having the equation for two gradients and an expression of the Lagrangian, we can compute the optimal pk via an application of the KKT conditions, which needs to set the partial derivative of the Lagrangian equaling to zero, as ∂L(pk;λ,ν) ∂pk= 0, where the optimal weight wk can be acquired from the pk. The theoretical results shed light on the key factors determining an optimal projection onto monotonic mixing functions. Specifically, the optimal projection weights consist of four components relating to Bellman error, value underestimation, the gradient of the monotonic mixing function, and the on-policiness of available transitions. We will interpret these four components next and develop a deep MARL algorithm through approximations of the optimal projection weights. Bellman error B ∗Q∗k−1 − Qk: Qk is the estimation of the action-value function after the Bellman update. This term measures the distance between the estimation and the Bellman target. A large difference in this term means higher hindsight Bellman error. Due to the KKT slackness condition, our analysis indicates that the optimal projection weight is zero when Qk > B ∗Q∗k−1 is an overestimate of the target value, and otherwise, a higher weight should be assigned when Qk is more underestimated. Value underestimation exp(Q∗k−1 − Qk): If Qtot after the Bellman update at current step k is smaller than optimal Q∗k−1 , it results in an underestimate. In this case, we will assign a higher weight (always larger than 1) to this transition, which is proportional to the exponential of this underestimation gap. In contrast, when overestimating (with a negative gap), the assigned weight becomes lower and always smaller than 1. This is important because an underestimate of function approximation may lead to a sub-optimal Qk estimation and thus non-optimal action selections. ``` Gradient of the mixing network Pn j=1 1−π j f 0 s,Qj −1: It turns out that the optimal projection weights also depend ``` on the inverse of the gradient of the monotonic mixing function fs(·), which is a new result. Intuitively, the optimal projection weights would become higher when the monotonic mixing function is insensitive to underlying per-agent utility values (i.e., having a small, positive gradient). We view this result as a form of normalization with respect to different shapes of monotonic mixing function fs(·). In practical algorithms, we often use the two-layer mixing network with non-negative weights to approximate the monotonic function fs(·) to produce Qk. The parameters of the mixing network are updated every step, and the gradient value can be readily computed from these parameters. We have provided an instance regarding calculating the gradient of a two-layer mixing network in Lemma 1. It is worth noting that similar gradients can also be obtained for other value function factorization methods. Measurement of on-policy transitions d πk (τ,u) µ(τ,u) : The efficient update of the joint action value function can be achieved by focusing on transitions that are more possibly to be visited by the current policy, i.e., with a higher d πk (τ , u). Adding this term can speed up the search for the optimal Qk close to Q∗k−1 . ## 4.3 Proposed Algorithm Algorithm 1 Remix 1: Initialize step, the parameters of mixing network, agent networks, and hyper-network. 2: Set the learning rate α and replay buffer D 3: let θ − = θ 4: for step = 1 : stepmax do 5: k = 0, s0 = initial state 6: **while** sk 6= terminal and k < episode limit do 7: for each agent a do 8: τ a k = τ a k−1 ∪ (ok, uk−1) 9: u a k = (arg maxu a k Q(τ a k , ua k ) with probability 1 − randint(1, |U|) with probability 10: **end for** 11: Obtain the reward rk and next state sk+1 12: Store the current trajectory into replay buffer D = D ∪ (sk, uk, rk, sk+1) 13: k = k + 1,step = step + 1 14: **end while** 15: Collect b samples from the replay buffer D following uniform distribution µ. 16: for each timestep k in each episode in batch b do 17: Evaluate Qk, Q∗ and target values 18: Obtain the utilities Qa from agents' local networks, and compute the individual policy π a k 19: Compute the weight: wk ∝ (B ∗Q∗k−1 − Qk) exp(Q∗k−1 − Qk) Pn j=1 1−π j f 0 s,Qj − 1 when Qk ≤ B∗Q∗k−1 when Qk > B ∗Q∗k−1 20: **end for** 21: Minimize the Bellman error for Qk weighted by wk, update the network parameter θ: θ = θ − α(∇θ 1 b Pb i wk(Qk − yi) 2). 22: if update-interval steps have passed **then** 23: θ − = θ 24: **end if** 25: **end for** Our analytical results in Theorem 1 identify four key factors determining the optimal projection weights. Interestingly, the first two terms, relating to Bellman error and value underestimation, recover the heuristic designs in WQMIX. Specifically, when the Bellman error of a particular transition is high, which indicates a wide gap between Qk and Q∗k−1 , we may consider assigning a larger weight to this transition. Similarly, value underestimation works as a correction term for incoming transitions: based on the difference of current Qk and ideal Q∗k−1 , it will compensate the underestimated Qk with larger importance while penalizing overestimated Qk with a smaller weighting modifier, consistent with OW scheme in equation 1. Additionally, our analysis identifies two new terms: the gradient of the monotonic mixing function and measurement of on-policy transitions, which are crucial in obtaining an optimal projection onto monotonic value function factorization. As discussed, we interpret the gradient term in optimal weights as a form of normalization - by increasing the weights for transitions, where the monotonic mixing function is less sensitive to the underlying per-agent utility, and decreasing the weights otherwise. The measurement of on-policy transitions in the weighting expression emphasizes the useful information carried by more current, on-policy transitions. Following these theoretical results, we provide a tractable approximation of the optimal projection weights and propose a MARL algorithm, ReMIX, with regret-minimizing projections onto monotonic value function factorizations. The procedure of ReMIX can be found in Algorithm 1. We consider a new loss function with respect to the optimal projection weights wk applied to the Bellman equation of Qk (considering Qtot at step k), i.e., $$L_{\mathrm{ReMIX}}=\sum_{i=1}^{b}\left[w_{i}(\mathbf{\tau},\mathbf{u})(Q_{k}-y_{i})^{2}(\mathbf{\tau},\mathbf{u})\right],$$ $$(6)$$ 2(τ , u), (6) where b is the batch size, and yi = B ∗Q∗k−1 is a fixed target using an unrestricted joint action-value function that can be approximated using a separate network similar to WQMIX. To compute the projection weights for Bellman error and value underestimation terms, we again leverage the unrestricted joint action-value function Q∗to compute them quantitatively. We note that the Bellman error term also works as the condition in Theorem 1 for deciding whether the weight should be zero. The gradient of the monotonic mixing network can be directly computed using Lemma 1. Ideally, we would also want to include measurement of on-policy transitions term in the calculation, but it is not readily available since distribution d πk (τ , u) in the numerator is difficult to acquire. Thus, we take an approach similar to existing work (Kumar et al., 2020) and show that the other terms in the derived optimal weights are enough to provide a good estimate and lead to performance improvements. To account for the unknown normalization factor Z ∗ and improve the stability of the training process, we map the projection weights to a given range, which is modeled as a hyperparameter of our algorithm. We provide numerical results adjusting it in the experiment section. ## 5 Experiment In this section, we present our experimental results on Predator-Prey and SMAC and demonstrate the effectiveness of ReMIX by comparing the results with several state-of-the-art MARL baselines. Besides, we visualize the optimal weight pattern in heat maps to show the step-wise weight assignment for each transition. Additionally, we conduct the ablation experiments by disabling each term in Theorem 1, and deliver the sensitivity experiments regarding the normalization factor. More details about the environment and hyper-parameter setting are provided in Appendix D. The code of this work is available on GitHub (see supplementary files during the review period). ## 5.1 Predator-Prey To start with, we consider a complex partially-observable multi-agent cooperative environment, PredatorPrey, that involves 8 agents in cooperation as predators to catch 8 prey on a 10×10 grid. In this task, a successful capture with the positive reward of 1 must include two or more predator agents surrounding and catching the same prey simultaneously, requiring a high level of cooperation. A failed coordination between agents to capture the prey, which happens when only one predator catches the prey, will receive a negative punishment reward. The greater punishment determines the degree of monotonicity. Algorithms that suffer from relative overgeneralization issues or make poor trade-offs in joint action-value function projection will fail to solve this task. ![8_image_0.png](8_image_0.png) Figure 1: Average reward per episode on the Predator-Prey tasks for ReMIX and other baseline algorithms of 4 settings. We select multiple state-of-the-art MARL approaches as baseline algorithms for comparison, which include value-based factorization algorithm (i.e., QMIX, WQMIX, and QPLEX), decomposed policy gradient method (i.e., VDAC), and decomposed actor-critic approaches (i.e., FOP and DOP). All mentioned baseline algorithms have shown strength in handling MARL tasks in existing works. Figure 1 shows the performance of seven algorithms with different punishments, where all results demonstrate the superiority of ReMIX over others. Besides, regarding efficiency, we can spot that ReMIX has the fastest convergence speed in seeking the best policy. In Figure 1c and 1d, ReMIX significantly outperforms other state-of-the-art algorithms in a hard setting requiring a higher level of coordination among agents as learning the best policy with improved joint action representation is required in this setting. Most algorithms, such as QMIX, FOP, and DOP, end up learning a sub-optimal policy where agents learn to work together with limited coordination. Although ReMIX and WQMIX acquired good results eventually, compared to the latter, ReMIX achieves better performance and converges to the optimal policy profoundly faster than WQMIX, demonstrating that our optimal weighting approach can generate a better joint action-value projection. ## 5.2 Smac Next, we evaluate ReMIX on the SMAC benchmark. We report the experiments on six maps consisting of one easy map, two hard maps, and three super-hard maps. The selected state-of-the-art baseline algorithms for this experiment are consistent with those in the Predator-Prey environment. The empirical results are provided in Figure 2, demonstrating that ReMIX can effectively generate optimal weight projection for joint actions on SMAC for achieving a higher win rate, especially when the environment becomes substantially complicated and harder, such as *MMM2*. We can see that several state-of-the-art policy-based factorization algorithms are brittle when significant exploration is undergone since joint action representations generated by them are sub-optimal. Specifically, ReMIX performs well on an easy map *1c3s5z* in Figure 2a, albeit holding the comparable performance among algorithms. On hard maps, such as *3s_vs_5z*, the best policy found by our optimal weighting approach significantly outperforms the remaining baseline algorithms regarding winning rate. For superhard map 6h_vs_8z, *MMM2*, and *corridor*, ReMIX, along with QMIX, WQMIX, and QPLEX, can learn a better policy than VDAC, DOP, and FOP. We achieve the highest winning rate by adopting our algorithm ![9_image_0.png](9_image_0.png) Figure 2: Results of 6 maps (from easy to super hard) on the SMAC benchmark. on *6h_vs_8z* and *MMM2*. Compared to our method, QMIX and WQMIX suffer from this map as their joint action representations are oblivious to some latent factors, such as the shape of the monotonic mixing network, and therefore fail to generate an accurate joint action representation. On *corridor*, ReMIX manages to learn the model with better performance than WQMIX, QPLEX, and other policy-based algorithms, though standard QMIX has the fastest convergence rate among all baseline algorithms. 5.3 Optimal Weight Pattern ![9_image_1.png](9_image_1.png) Figure 3: Heatmap pattern of generated optimal weights (left) and WQMIX weights (right) used in the PredatorPrey environment. The training episodes range from 0 to 1M. In this part, we draw heat maps of the projecting weight probability distributions of ReMIX and WQMIX as the training proceeds to better visualize and compare the weight evolution pattern of transitions sampled as in a minibatch, shown in Figure 3. Adopted weights are generated from the Predator-Prey task with a punishment of -2. We re-scale the absolute value of the transition number to logarithmic probability for scale normalization. As shown in the figure, the probability value of a certain weight is represented by colors, decreasing from 0 in light yellow to -10 in black. The vertical axis represents the training steps, and the horizontal axis represents the normalized weight value, where ours ranges from 0.1 to 1 and WQMIX is either 0.1 or 1. The heat map effectively shows the general trend of the weight evolution pattern at different steps. For WQMIX on the Figure 3 right, with the training of the algorithm, the transitions with the smaller weight (0.1) will become more, and those with the larger weight (1) will become fewer. Evolution like this happens since the transitions will approach optimal as the training goes on, while the algorithm will still take all transitions as potential overestimations and assign smaller weights to them as adjustments. A similar evolution pattern can be found in our weight pattern. On the left of Figure 3, during the training, the transitions with higher weights become less, and most transitions will migrate to the bottom right with lower weights, which empirically recovers the heuristic in WQMIX. Moreover, as an optimal weight projection is used in ReMIX, we will assign different weights to transitions based on evaluating every one of them. We notice that some transitions are assigned with medium weight during the training, given by the light yellow spots on the left of Figure 3. Such a phenomenon demonstrates that the binary-weighted projections in WQMIX are not always accurate. Hence, ReMIX considers all transitions by applying optimal weights to their projections, leading to better results, which also illustrates the performance gap with other algorithms like WQMIX in previous experiments. ![10_image_0.png](10_image_0.png) ## 5.4 Sensitivity Experiment Regarding Normalization Figure 4: Sensitivity of normalizing the minimum weight to 0.1, 0.5, and 0.8. We run the experiment in the Predator-Prey environment with a punishment of -1.5 to report the sensitivity with respect to the different normalization of weight ranges. We keep the maximum normalized weight as 1 but test the effects of using different minimums, which are 0.1, 0.5, and 0.8. As shown in Figure 4, the experiment results are sensitive to the range of the normalized weight. When we map the weight to a minimum of 0.5, the agents in this task can only find a sub-optimal solution. It may be because there exist many overestimations in this task. The joint action representation generated at the is not accurate. Higher minimum weight normalization damages the capability of ReMIX to adjust the projection to retrieve a precise representation rapidly. Therefore, ReMIX performs well under 0.1 to 1 normalization of the weight in this scenario. Note in WQMIX weight is used as α = 0.1 for Predator-Prey and α = 0.5 for SMAC according to their experiment settings. ## 5.5 Ablation Experiment For ablations, we conduct experiments by disabling one single term (mentioned in Theorem 1) each at a time to investigate their contribution to finding optimal projection weights, respectively. The ablation results are given in Figure 5. The terms considered in these experiments are Bellman error, value underestimation, and gradient of the mixing network. Figure 4 shows the results on MMM2. Compared to the original result, missing any of the terms will be detrimental to the performance, and the tests without Bellman error have the lowest final winning rate, which is less than 10%. Furthermore, when we turn off the gradient of the ![11_image_0.png](11_image_0.png) Figure 5: Ablation by disabling one term each for ReMIX on MMM2 (super hard) mixing network term, the result is only around 60%. Such a phenomenon demonstrates that providing a quantitative weight factorization for the value projection is the critical factor in value-factorization-based MARL tasks. The designing of an optimal weighting scheme without taking the influence of the mixing network into account will be less capable of achieving the ideal final results. ## 6 Conclusion In this paper, we formulate the optimal value function factorization as a policy regret minimization and solve the optimal projection weights for the cooperative multiagent reinforcement learning problems in closed form. The theoretical results shed light on key factors for an optimal projection. Therefore, we propose ReMIX as a tractable weight approximation approach to enable MARL algorithms with improved value function factorization. Our experiment results in multiple MARL environments show the effectiveness of ReMIX by demonstrating superior convergence and empirical performance over state-of-the-art factorization-based methods. ## References Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. *arXiv preprint arXiv:1909.07528*, 2019. Dimitri P Bertsekas. *Constrained optimization and Lagrange multiplier methods*. Academic press, 2014. Wendelin Böhmer, Vitaly Kurin, and Shimon Whiteson. Deep coordination graphs. In *International Conference on Machine Learning*, pp. 980–991. PMLR, 2020. Yongcan Cao, Wenwu Yu, Wei Ren, and Guanrong Chen. An overview of recent progress in the study of distributed multi-agent coordination. *IEEE Transactions on Industrial informatics*, 9(1):427–438, 2012. Jacopo Castellini, Frans A Oliehoek, Rahul Savani, and Shimon Whiteson. The representational capacity of action-value networks for multi-agent reinforcement learning. *arXiv preprint arXiv:1902.07497*, 2019. Charles Dugas, Yoshua Bengio, François Bélisle, Claude Nadeau, and René Garcia. Incorporating functional knowledge in neural networks. *Journal of Machine Learning Research*, 10(6), 2009. Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, and Mark Crowley. Kkt conditions, first-order and secondorder optimization, and distributed optimization: Tutorial and survey. *arXiv preprint arXiv:2110.01858*, 2021. Carlos Guestrin, Michail Lagoudakis, and Ronald Parr. Coordinated reinforcement learning. In *ICML*, volume 2, pp. 227–234. Citeseer, 2002. Jian Hu, Siyang Jiang, Seth Austin Harding, Haibin Wu, and Shih-wei Liao. Riit: Rethinking the importance of implementation tricks in multi-agent reinforcement learning. *arXiv preprint arXiv:2102.03479*, 2021. Yeping Hu, Alireza Nakhaei, Masayoshi Tomizuka, and Kikuo Fujimura. Interaction-aware decision making with adaptive strategies under merging scenarios. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 151–158. IEEE, 2019. Maximilian Hüttenrauch, Adrian Šošić, and Gerhard Neumann. Guided deep reinforcement learning for swarm systems. *arXiv preprint arXiv:1709.06011*, 2017. Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, DJ Strouse, Joel Z Leibo, and Nando De Freitas. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In *International conference on machine learning*, pp. 3040–3049. PMLR, 2019. Peter Jin, Kurt Keutzer, and Sergey Levine. Regret minimization for partially observable deep reinforcement learning. In *International conference on machine learning*, pp. 2342–2351. PMLR, 2018. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In In Proc. 19th International Conference on Machine Learning. Citeseer, 2002. Landon Kraemer and Bikramjit Banerjee. Multi-agent reinforcement learning as a rehearsal for decentralized planning. *Neurocomputing*, 190:82–94, 2016. Steven George Krantz and Harold R Parks. *The implicit function theorem: history, theory, and applications*. Springer Science & Business Media, 2002. Aviral Kumar, Abhishek Gupta, and Sergey Levine. Discor: Corrective feedback in reinforcement learning via distribution correction. *Advances in Neural Information Processing Systems*, 33:18560–18572, 2020. Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. *The Journal of Machine Learning Research*, 17(1):1334–1373, 2016. Xu-Hui Liu, Zhenghai Xue, Jingcheng Pang, Shengyi Jiang, Feng Xu, and Yang Yu. Regret minimization experience replay in off-policy reinforcement learning. *Advances in Neural Information Processing Systems*, 34:17604–17615, 2021. Anuj Mahajan, Tabish Rashid, Mikayel Samvelyan, and Shimon Whiteson. Maven: Multi-agent variational exploration. *arXiv preprint arXiv:1910.07483*, 2019. Laëtitia Matignon, Laurent Jeanpierre, and Abdel-Illah Mouaddib. Coordinated multi-robot exploration under communication constraints using decentralized markov decision processes. In *Twenty-sixth AAAI* conference on artificial intelligence, 2012. Edward James McShane. Jensen's inequality. *Bulletin of the American Mathematical Society*, 43(8):521–527, 1937. Frans A Oliehoek and Christopher Amato. *A concise introduction to decentralized POMDPs*. Springer, 2016. Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International Conference on Machine Learning, pp. 4295–4304. PMLR, 2018. Tabish Rashid, Gregory Farquhar, Bei Peng, and Shimon Whiteson. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning, 2020. Mikayel Samvelyan, Tabish Rashid, Christian Schroeder De Witt, Gregory Farquhar, Nantas Nardelli, Tim GJ Rudner, Chia-Man Hung, Philip HS Torr, Jakob Foerster, and Shimon Whiteson. The starcraft multi-agent challenge. *arXiv preprint arXiv:1902.04043*, 2019. Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, and Yung Yi. Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning. In International Conference on Machine Learning, pp. 5887–5896. PMLR, 2019. Jianyu Su, Stephen Adams, and Peter Beling. Value-decomposition multi-agent actor-critics. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11352–11360, 2021. Kefan Su and Zongqing Lu. Divergence-regularized multi-agent actor-critic. In *International Conference on* Machine Learning, pp. 20580–20603. PMLR, 2022. Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. *arXiv preprint arXiv:1706.05296*, 2017. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Jianhao Wang, Zhizhou Ren, Terry Liu, Yang Yu, and Chongjie Zhang. Qplex: Duplex dueling multi-agent q-learning. *arXiv preprint arXiv:2008.01062*, 2020a. Tonghan Wang, Heng Dong, Victor Lesser, and Chongjie Zhang. Roma: Multi-agent reinforcement learning with emergent roles. *arXiv preprint arXiv:2003.08039*, 2020b. Yihan Wang, Beining Han, Tonghan Wang, Heng Dong, and Chongjie Zhang. Dop: Off-policy multi-agent decomposed policy gradients. In *International Conference on Learning Representations*, 2020c. Yaodong Yang, Jianye Hao, Ben Liao, Kun Shao, Guangyong Chen, Wulong Liu, and Hongyao Tang. Qatten: A general framework for cooperative multiagent reinforcement learning. *arXiv preprint arXiv:2002.03939*, 2020. Tianhao Zhang, Yueheng Li, Chen Wang, Guangming Xie, and Zongqing Lu. Fop: Factorizing optimal joint policy of maximum-entropy multi-agent reinforcement learning. In *International Conference on Machine* Learning, pp. 12491–12500. PMLR, 2021. ## A Nomenclature We use Table 1 to summarize the often-used notations in this paper. More detailed introduction of these notations can be seen in Sections 2, 3, and 4. | | Table 1: Definitions of notations. | |----------|----------------------------------------------------------------------------------| | Notation | Definition | | s | State of the environment | | a | Agent | | u | Agents' joint action | | r | Reward | | γ | Discount factor | | τ | Joint action-observation history | | π | Joint policy | | π ∗ | Expected optimal joint policy | | Q(·) | Action value function | | Qtot(·) | Monotonic mixing of per-agent action value function | | Q∗ (·) | Unrestricted joint action value function | | V (·) | Value function | | A(·) | Advantage function | | fs(·) | Monotonic function with input state s | | η(π) | Expected return under the joint policy π | | B ∗ | Bellman operator, where B ∗Q(τ , u) def = r(s, u) + γ arg maxu0 Eτ0Q(τ 0 , u 0 ) | | w | Projection weights of transitions | ## B Proof Of Lemma 1 Considering a two-layer mixing network of the non-negative weight matrix W1, W2, bias b1, b2 and activation function h(·). The input Q~ is the vector of all the agents' utilities. Assume there are n agents, Q~ is: $${\vec{Q}}=[Q^{1},\ldots,Q^{n}]^{\mathrm{T}}$$ We assume the mixing network has the width of m, based on the input/output dimension, W1 should be a n × m matrix as: $$W_{1}={\left[\begin{array}{l l l}{w_{11}^{1}}&{\ldots}&{w_{1m}^{1}}\\ {\vdots}&{\ddots}&{\vdots}\\ {w_{n1}^{1}}&{\ldots}&{w_{n m}^{1}}\end{array}\right]}$$ , and W2 is a m-dimension vector given by: $$W_{2}=[w_{1}^{2},\ldots,w_{m}^{2}]^{\mathrm{T}}.$$ Therefore, Qtot calculated from the utility vector Q~ becomes: $$f_{s}(\vec{Q})=h(\vec{Q}^{\mathrm{T}}W_{1}+b_{1})W_{2}^{\mathrm{T}}+b_{2}.$$ 2 + b2. (7) Considering one of the utilities Qa, as long as the derivative of activation h(·) exists (h(·) is smooth and differentiable), based on equation 7, the result is: $$f_{s,Q^{a}}^{\prime}=\frac{\partial Q_{t o t}}{\partial Q^{a}}=h_{Q^{a}}^{\prime}(\vec{Q}^{\mathrm{T}}W_{1}+b_{1})\sum_{j=1}^{m}w_{a j}^{1}w_{j}^{2}.$$ . (8) This concludes the proof. $\left(7\right)$. $$({\boldsymbol{\delta}})$$ ## C Proof Of Theorem 1 We have provided the outline of the proof including four key steps. In this section, we present the detailed proof of the theorem. The optimization problem needed solving is: $$\operatorname*{min}_{w_{k}}$$ wkη(π $\eta(\pi^{*})-\eta(\pi_{k})$ $Q_{k}=\underset{Q\in\mathcal{Q}}{\arg\min}\,\mathbb{E}_{\mu}[w_{k}(\boldsymbol{\tau},\mathbf{u})(Q-\mathcal{B}^{*}Q_{k-1}^{*})^{2}(\boldsymbol{\tau},\mathbf{u})],$ $\mathbb{E}_{\mu}[w_{k}(\boldsymbol{\tau},\mathbf{u})]=1,\quad w_{k}(\boldsymbol{\tau},\mathbf{u})\geq0,$ $Q_{k}(\boldsymbol{\tau},\mathbf{u})=f_{s}(Q^{1}(\tau_{1},u_{1}),\ldots,Q^{n}(\tau_{n},u_{n})),$ $$\mathbf{s.t.}$$ This problem is equivalent to: $$\begin{array}{ll}\min_{p_{k}}&\eta(\mathbf{\tau}^{*})-\eta(\pi_{k})\\ \mbox{s.t.}&Q_{k}=\arg\min_{Q\in\mathcal{Q}}\mathbb{E}_{p_{k}}[(Q-\mathcal{B}^{*}Q_{k-1}^{*})^{2}(\mathbf{\tau},\mathbf{u})],\\ &\sum_{\mathbf{\tau},\mathbf{u}}p_{k}(\mathbf{\tau},\mathbf{u})=1,\quad p_{k}(\mathbf{\tau},\mathbf{u})\geq0,\\ &Q_{k}(\mathbf{\tau},\mathbf{u})=f_{s}(Q^{1}(\tau_{1},u_{1}),\ldots,Q^{n}(\tau_{n},u_{n})),\end{array}\tag{9}$$ is the solution to problem equation 9. where pk = wk(τ , u)µ(τ , u) is the solution to problem equation 9. To solve the optimization problem in equation 9, we needed to provide some definitions, which are *total* variation distance, Wasserstein metric, *the diameter of a set*, and *universal approximator*. Definition 1 (Total variation distance). The total variation distance of the distribution P and Q is defined as D(*P, Q*) = 12 kP − Qk. Definition 2 (Wasserstein metric). For F,G two cumulative distribution functions over the reals, the Wasserstein metric is defined as dp(*F, G*) def = infU,V kU − V kp, where the infimum is taken over all pairs of random variables (U,V) with cumulative distributions F and G, respectively. Definition 3 (Diameter of a set). *The diameter of a set A is defined as* diam(A) = supx,y∈A m(x, y), where m is the metric on A. Definition 4 (Universal approximator). A class of function Fˆ *from* R n to R *is a universal approximator* for a class of functions F *from* R n to R if for any f ∈ F*, any compact domain* D ⊂ R n, and any positive , one can find a ˆf ∈ Fˆ *with* supx∈D |f(x) − ˆf(x)| ≤ . Though we will leverage trajectories τ in further derivation, we propose several assumptions using state s for simplicity and consistency with a general definition like the existing practice in (Su & Lu, 2022). The mild assumptions are given as follows: Assumption 1. The state space S, action space U, and observation space Z *are compact metric spaces.* Assumption 2. The action-value and observation functions are continuous on S × U and Z*, respectively.* Assumption 3. The transition function T is continuous with respect to S × U *in the sense of Wasserstein* metric, which is lim(s,u)→(s0,u0) dp(T(·|s, u), T(·|s0, u0)). Assumption 4. The joint policy π *is the product of each agent's individual policy* π a(ua|τa). Assumption 5. The monotonic mixing function fs(·) regarding per-agent action-value function Qafor ∀a ∈ A *is smooth and differentiable.* These assumptions are not strict and can be satisfied in most MARL environments. Let d π a(s) denote the discounted state distribution of agent a, and d π a i(s) denote the distribution where the state is visited by the agent for the i-th time. Thus, we have: $$d^{\pi^{a}}(s)=\sum_{i=1}^{\infty}d_{i}^{\pi^{a}}(s),$$ i(s), (10) $$(10)$$ where each d π a i(s) is given by: $$d_{1}^{\pi^{*}}(s)=(1-\gamma)\sum_{t_{i}=0}^{\infty}\gamma^{t_{i}}\Pr(s_{t_{i}}=s,s_{t_{k}}=s,\forall k=1,...,i-1),\tag{11}$$ where the Pr(sti = *s, s*tk = s, ∀k = 1*, ..., i* − 1) in this equation contains the probability of visiting state s for the i-th time at ti and a sequence of times tk, for k = 1*, ..., i*, such that state s is visited at each tk. Thus, state s will be visited for i times at time tiin total. The following lemmas are proposed by Liu et al. (2021), where Lemma 2 support the derivation of the Lemma 3, and the latter demonstrates that ∂dπa(s) ∂πa(s) is a small quantity. Lemma 2. Let f *be an Lebesgue integrable function. P and Q are two probability distributions,* f ≤ C, then: $$\mathbb{E}_{P(x)}f(x)-\mathbb{E}_{Q(x)}f(x)|\leq C\cdot D(P,Q).\tag{1}$$ Lemma 3. Let ρ be the probability of the agent a starting from (s, ua) and coming back to s *at time step* t *under policy* π a*, i.e.* Pr(s0 = *s, u*a 0 = u a, st = *s, s*1:t−1 6= s; π a)*, and* = sups,uaP∞ t=1 γ tρ π a(s, ua, t)*. We* have: $$\left|\frac{\partial d\tau^{a}\left(s\right)}{\partial\pi^{a}(s)}\right|\leq\epsilon d_{1}^{\pi^{a}}(s),\tag{1}$$ $s)$ _and_ $\epsilon\leq1$_._ _where $d_{1}^{\pi^{2}}(s)=(1-\gamma)\sum_{t_{1}=0}^{\infty}\gamma^{t_{1}}\Pr(s_{t_{1}}=s)$ and $\epsilon\leq1$._ In the multiagent scenario, each agent only has access to its own trajectory, i.e., the environment is partially observable. Therefore, we replace the state s with agents' observation histories τ and use the joint action u with joint policy π. The conclusions will hold in the mentioned lemmas. Besides, we have the following additional lemma: Lemma 4. Given two policy π and π¯*, where* π = P exp(Q(τ,u)) u0 exp(Q(τ,u0)) *is defined by Boltzmann policy, we have:* $$(12)$$ $$\mathbb{E}_{\mathbf{u}\sim{\bar{\pi}}}[Q(\boldsymbol{\tau},\mathbf{u})]-\mathbb{E}_{\mathbf{u}\sim\pi}[Q(\boldsymbol{\tau},\mathbf{u})]\leq1.$$ Eu∼π¯[Q(τ , u)] − Eu∼π[Q(τ , u)] ≤ 1. (14) Proof. Suppose there are two joint actions u and u¯. Let Q(τ , u) = s, Q(τ , u¯) = t and let s ≤ t. $$\mathbb{E}_{\mathbf{u}\sim\bar{\pi}}[Q(\boldsymbol{\tau},\mathbf{u})]-\mathbb{E}_{\mathbf{u}\sim\pi}[Q(\boldsymbol{\tau},\mathbf{u})]\leq t-\frac{se^{s}+te^{t}}{e^{s}+e^{t}}$$ $$=t-\frac{s+te^{t-s}}{1+e^{t-s}}$$ $$=t-s-\frac{(t-s)e^{t-s}}{1+e^{t-s}}.$$ $$(13)$$ $$(14)$$ Let f(z) = z − zez 1+e z , the maximum point z0 satisfies f 0(z) = 0, from which we further have 1 + e z0 = z0e z0 where z0 ∈ (1, 2). Therefore, we have $$\mathbb{E}_{\mathbf{u}\sim{\bar{\pi}}}[Q(\boldsymbol{\tau},\mathbf{u})]-\mathbb{E}_{\mathbf{u}\sim\pi}[Q(\boldsymbol{\tau},\mathbf{u})]\leq f(t-s)\leq z_{0}-1\leq1.$$ It is worth noting that the derived inequality can also be applied to the situation where we have joint action more than two or we consider the situation regarding per-agent action. The following lemma is introduced by Kakade & Langford (2002). It was originally proposed for the finite MDP, while it will also hold for the continuous scenario that is given by Assumption 1 and 2. Lemma 5. For any policy π and π˜*, we have* $$\eta(\tilde{\pi})-\eta(\pi)=\frac{1}{1-\gamma}\mathbb{E}_{d^{\pm}(\tau,\mathbf{u})}[A^{\pi}(\tau,\mathbf{u})],\tag{15}$$ where Aπ(τ , u) *is the advantage function given by* Aπ(τ , u) = Qπ(τ , u) − V π(τ ). Lemma 6. Let πk = supτ,u P∞ t=1 γ tρ π(τ , u, t), the optimal solution pk to a relaxation of optimization problem in equation 9 satisfies relationship as follows: $$p_{k}(\mathbf{\tau},\mathbf{u})=\frac{1}{Z^{*}}(D_{k}(\mathbf{\tau},\mathbf{u})+\epsilon_{k}(\mathbf{\tau},\mathbf{u})),\tag{16}$$ where when Qk ≤ B∗Q∗k−1 , we have Dk(τ , u) = d πk (τ , u)(B ∗Q∗k−1 − Qk) exp(Q∗k−1 − Qk)(Pn j=1 1−π j f 0 s,Qj − 1), and when Qk > B ∗Q∗k−1 , we have Dk(τ , u) = 0. Z ∗is the normalization constant. Proof. Suppose u ∗ ∼ π ∗. Let π = π ∗ and π˜ = πk in Lemma 5, we have η(π ∗) − η(πk) = −1 1 − γ Ed πk (τ,u)[A π ∗(τ , u)] =1 1 − γ Ed πk (τ,u)[V ∗(τ ) − Q ∗(τ , u)] (17) =1 1 − γ Ed πk (τ,u)[V ∗(τ ) − Qk(τ , u ∗) + Qk(τ , u ∗) − Qk(τ , u) + Qk(τ , u) − Q ∗(τ , u)] (a) ≤1 1 − γ -Ed πk (τ)(Q ∗(τ , u ∗) − Qk(τ , u ∗)) + Ed πk (τ,u)(Qk(τ , u) − Q ∗(τ , u)) + 1, where (a) uses Lemma 4. $$\square$$ Since the original optimization is non-tractable, we consider this upper bound to obtain a closed-form solution. Therefore, we replace the objective in equation 9 with the upper bound in equation 17 and solve the relaxed optimization problem, given by $$\min_{p_{k}}\mathbb{E}_{d^{\tau_{k}}(\tau)}[(Q_{k-1}^{\tau_{k}}-Q_{k})(\mathbf{\tau},\mathbf{u}^{*})]+\mathbb{E}_{d^{\tau_{k}}(\tau,\mathbf{u})}[(Q_{k}-Q_{k-1}^{\tau_{k}})(\mathbf{\tau},\mathbf{u})]$$ s.t. $$Q_{k}=\arg\min_{Q\in\mathcal{Q}}\mathbb{E}_{p_{k}}[(Q-\mathcal{B}^{*}Q_{k-1}^{*})^{2}(\mathbf{\tau},\mathbf{u})],\tag{18}$$ $$\sum_{\mathbf{\tau},\mathbf{u}}p_{k}(\mathbf{\tau},\mathbf{u})=1,\quad p_{k}(\mathbf{\tau},\mathbf{u})\geq0,$$ $$Q_{k}(\mathbf{\tau},\mathbf{u})=f_{s}(Q^{1}(\tau_{1},u_{1}),\ldots,Q^{n}(\tau_{n},u_{n})),$$ $$(19)$$ The derived objective in equation 18 can be further relaxed with Jensen's inequality, given by: $$g(\mathbb{E}[X]),$$ E[g(X)] ≥ g(E[X]), (19) when g(x) is a convex function on real space R. According to equation 19, we select the convex function g(x) = exp(−x), and the objective can be further relaxed as: $$\begin{array}{ll}\min_{p_{k}}&-\log\mathbb{E}_{d^{n_{k}}(\tau)}[\exp(Q_{k}-Q_{k-1}^{*})(\tau,\mathbf{u}^{*})]-\log\mathbb{E}_{d^{n_{k}}(\tau,\mathbf{u})}[\exp(Q_{k-1}^{*}-Q_{k})(\tau,\mathbf{u})]\\ \mbox{s.t.}&Q_{k}=\operatorname*{arg\,min}_{Q\in\mathcal{Q}}\mathbb{E}_{p_{k}}[(Q-B^{*}Q_{k-1}^{*})^{2}(\tau,\mathbf{u})],\\ &\sum_{\tau,\mathbf{u}}p_{k}(\tau,\mathbf{u})=1,\quad p_{k}(\tau,\mathbf{u})\geq0,\\ &Q_{k}(\tau,\mathbf{u})=f_{s}(Q^{1}(\tau_{1},u_{1}),\ldots,Q^{n}(\tau_{n},u_{n})),\end{array}$$ $$(20)$$ In order to handle the optimization problem in equation 20, we follow the standard procedures of Lagrangian multiplier method, which is: $$\mathcal{L}(p_{k};\lambda,\nu)=-\log\mathbb{E}_{q^{\mu_{k}}(\tau)}[\exp(Q_{k}-Q_{k-1}^{\tau})(\mathbf{\tau},\mathbf{u}^{*})]-\log\mathbb{E}_{q^{\mu_{k}}(\tau,\mathbf{u})}[\exp(Q_{k-1}^{\tau}-Q_{k})(\mathbf{\tau},\mathbf{u})]+\lambda\big{(}\sum_{\mathbf{\tau},\mathbf{u}}p_{k}-1\big{)}-\nu^{T}p_{k},\tag{21}$$ After constructing the Lagrangian, we further compute some gradients that will be used in calculating the optimal solution. We first calculate the ∂Qk ∂pk according to the implicit function theorem (IFT). Based on the first constraint in equation 20, we aim to find the minimum Qk to satisfy the arg min(·), and therefore we need to ensure the derivative of the term inside arg min(·) (we use f(pk, Qk) to denote this term) to be zero, which is: $$f^{\prime}_{Q_{k}}=2\sum_{\mathbf{\tau},\mathbf{u}}p_{k}(Q_{k}-\mathcal{B}^{*}Q_{k-1})=0\tag{22}$$ We can notice that F(pk, Qk) : f 0 Qk = 0 is an implicit function regarding Qk and pk. Hence, we apply the IFT on the F(pk, Qk) considering the Hessian matrices of pk and Qk in f(pk, Qk) as follows: $$\frac{\partial Q_{k}}{\partial p_{k}}=-\frac{F^{\prime}_{pk}}{F^{\prime}_{Q_{k}}}=-\left[\mbox{diag}(p_{k})\right]^{-1}\left[\mbox{diag}(Q_{k}-B^{*}Q^{*}_{k-1})\right].\tag{23}$$ Next, we derive the expression for ∂dπk (τ,u) ∂pkin the following equation: ∂dπk (τ , u) ∂pk= ∂dπk (τ , u) ∂πk ∂πk ∂Qa ∂Qa ∂Qk ∂Qk ∂pk = diag(d πk(τ ) + 0(τ )) ∂πk ∂Qa ∂Qa ∂Qk ∂Qk ∂pk (b) = diag(d πk(τ ) + 0(τ ))diag(πk(1 − πk))∂Qa ∂Qk ∂Qk ∂pk (c) = d πk(τ , u)(1 − πk)1 f 0 s,Qk ∂Qk ∂pk + 0(τ )πk(1 − πk)1 f 0 s,Qk ∂Qk ∂pk , (24) where 0(τ ) = ∂dπk (τ,u) ∂πk(τ)is a small quantity provided by Lemma 3. Besides, (b) is based on the the definition of the Boltzmann policy and Assumption 4, and (c) is based on Assumption 5 the gradient of the monotonic mixing function in Lemma 1. Since we have all the preparations ready, we now compute the Lagrangian by applying the Karush–Kuhn–Tucker (KKT) condition. We let the Lagrangian gradient to be zero, i.e., $$\frac{\partial{\cal L}(p_{k};\lambda,\nu)}{\partial p_{k}}=0\tag{1}$$ Besides, the partial derivative of the Lagrangian can be computed as: ∂L(pk; λ, ν) ∂pk= − ∂ log Ed πk (τ)[exp(Qk − Q∗k−1 )(τ , u ∗)] ∂pk− ∂ log Ed πk (τ,u)[exp(Q∗k−1 − Qk)(τ , u)] ∂pk+ λ − ντ,u = − 1 Z exp(Q ∗ k−1 − Qk) ∂dπk (τ , u) ∂pk− d πk(τ , u) ∂Qk ∂pk + λ − ντ,u, (26) where Z = Eτ0,u0∼d πk (τ,u) exp(Q∗ − Qk)(τ 0, u 0). $$(25)$$ Based on equation 25 and equation 26, and substituting the expression of ∂Qk ∂pk and ∂dπk (τ,a) ∂pkwith the derived results in equation 23 and equation 24, we obtain: equation 23 and equation 24, we obtain: $$\begin{split}p_{k}(\mathbf{\tau},\mathbf{u})&=\frac{1}{Z(\nu_{\tau,\mathbf{u}}^{*}-\lambda^{*})}\left[d^{*n}(\mathbf{\tau},\mathbf{u})(Q_{k}-\mathbf{B}^{*}Q_{k-1}^{*})\exp(Q_{k-1}^{*}-Q_{k})\left(\sum_{j=1}^{n}\frac{1-\pi^{j}}{f_{\nu_{\tau,Qj}}^{\prime}}-1\right)\right.\\ &\left.+\epsilon_{0}\pi_{k}(Q_{k}-\mathbf{B}^{*}Q_{k-1}^{*})\exp(Q_{k-1}^{*}-Q_{k})\sum_{j=1}^{n}\frac{1-\pi^{j}}{f_{\nu_{\tau,Qj}}^{\prime}}\right],\end{split}\tag{27}$$ According to Lemma 3, the value of 0 is smaller than d πk (τ ) so the second term will not influence the sign of the equation. Equation 27 will always be larger or equal to zero. By KKT condition, when the Qk − B∗Q∗k−1 < 0, we have ν ∗ τ,u = 0. When equation 27 equal to zero, we let ν ∗ τ,u = 0 because the value of ν ∗ τ,u will not affect pk. In the contrast, when the Qk − B∗Q∗k−1 > 0, the pk should equal to zero. Therefore, by introducing a normalization factor Z ∗, equation 27 can be simplify as follows: $$p_{k}(\mathbf{\tau},\mathbf{u})=\frac{1}{Z^{*}}(D_{k}(\mathbf{\tau},\mathbf{u})+\epsilon_{k}(\mathbf{\tau},\mathbf{u})),\tag{10}$$ $\mathbf{\tau}$ where when Qk ≤ B∗Q∗k−1 , we have $D_{k}(\boldsymbol{\tau},\mathbf{u})=d^{\pi_{k}}(\boldsymbol{\tau},\mathbf{u})(\mathcal{B}^{*}Q_{k-1}^{*}-Q_{k})\exp(Q_{k-1}^{*}-Q_{k})\left(\sum_{j=1}^{n}\frac{1-\pi^{j}}{f_{s,Q^{j}}^{\prime}}-1\right)$ $\epsilon_{k}=\epsilon_{0}\pi_{k}(Q_{k}-\mathcal{B}^{*}Q_{k-1}^{*})\exp(Q_{k-1}^{*}-Q_{k})\sum_{j=1}^{n}\frac{1-\pi^{j}}{f_{s,Q^{j}}^{\prime}}$ $\mathcal{B}^{*}Q_{k-1}^{*}$, we have $$(28)$$ $$(29)$$ and when $Q_k>\mathcal{B}^*Q_{k-1}^*$, we have . k = 0 $$(30)$$ $D_{k}(\mathbf{\tau},\mathbf{u})=0$ $\epsilon_{k}=0$ This concludes the proof. ## D Environment Details We use more recent baselines (i.e., FOP and DOP) that are known to outperform QTRAN (Son et al., 2019) and QPLEX (Wang et al., 2020a) in the evaluation. In general, we tend to choose baselines that are more closely related to our work and most recent. This motivated the choice of QMIX (baseline for value-based factorization methods), WQMIX (close to our work that uses weighted projections so better joint actions can be emphasized), VDAC (Su et al., 2021), FOP (Zhang et al., 2021), DOP (Wang et al., 2020c) (SOTA actor-critic based methods). We acquired the results of QMIX, WQMIX based on their hyper-parameter tuned versions from pymarl2(Hu et al., 2021) and implemented our algorithm based on it. ## D.1 Predator-Prey A partially observable environment on a grid-world predator-prey task is used to model relative overgeneralization problem (Böhmer et al., 2020) where 8 agents have to catch 8 prey in a 10 × 10 grid. Each agent can either move in one of the 4 compass directions, remain still, or try to catch any adjacent prey. Impossible actions, i.e., moving into an occupied target position or catching when there is no adjacent prey, are treated as unavailable. If two adjacent agents execute the catch action, a prey is caught and both the prey and the catching agents are removed from the grid. An agent's observation is a 5 × 5 sub-grid centered around it, with one channel showing agents and another indicating prey. An episode ends if all agents have been removed or after 200 steps. Capturing a prey is rewarded with r = 10, but unsuccessful attempts by single agents are punished by a negative reward p. In this paper, we consider two sets of experiments with p = (0, -0.5, -1.5, -2). The task is similar to the matrix game proposed by Son et al. (2019) but significantly more complex, both in terms of the optimal policy and in the number of agents. | Hyperparameter | Value | |--------------------------------|--------------------| | Batch size | 128 | | Replay buffer size | 10000 | | Target network update interval | Every 200 episodes | | Learning rate | 0.001 | | TD-lambda | 0.6 | Table 2: Hyperparameter value settings. ## D.2 Smac For the experiments on StarCraft II micromanagement, we follow the setup of SMAC (Samvelyan et al., 2019) with open-source implementation including QMIX (Rashid et al., 2018), WQMIX (Rashid et al., 2020), QPLEX (Wang et al., 2020a), FOP (Zhang et al., 2021), DOP (Wang et al., 2020c) and VDAC (Su et al., 2021). We consider combat scenarios where the enemy units are controlled by the StarCraft II built-in AI and the friendly units are controlled by the algorithm-trained agent. The possible options for built-in AI difficulties are Very Easy, Easy, Medium, Hard, Very Hard, and Insane, ranging from 0 to 7. We carry out the experiments with ally units controlled by a learning agent while built-in AI controls the enemy units with difficulty = 7 (Insane). Depending on the specific scenarios(maps), the units of the enemy and friendly can be symmetric or asymmetric. At each time step each agent chooses one action from discrete action space, including noop, move[direction], attack[enemy_id], and stop. Dead units can only choose noop action. Killing an enemy unit will result in a reward of 10 while winning by eliminating all enemy units will result in a reward of 200. The global state information is only available in the centralized critic. Each baseline algorithm is trained with 4 random seeds and evaluated every 10k training steps with 32 testing episodes for main results, and with 3 random seeds for ablation results and additional results. ## D.3 Implementation Details And Hyperparameters In this section, we introduce the implementation details and hyperparameters we used in the experiment. We carried out the experiments on NVIDIA 2080Ti with fixed hyperparameter settings. Recently Hu et al. (2021) demonstrated that MARL algorithms are significantly influenced by code-level optimization and other tricks, e.g. using TD-lambda, Adam optimizer, and grid-searched hyperparameters (where many state-ofthe-art are already adopted), and proposed fine-tuned QMIX and WQMIX, which is demonstrated with significant improvements from their original implementation. We implemented our algorithm based on its open-sourced codebase and acquired the results of QMIX and WQMIX from it. We use one set of hyperparameters for each environment, i.e., no tuned hyperparameters for individual maps. We use epsilon greedy for action selection with annealing from = 0.995 decreasing to = 0.05 in 100000 training steps in a linear way. The performance for each algorithm is evaluated for 32 episodes every 1000 training steps. More hyperparameter values are given in Table 2.
Review 1: Summary: This article extends the field of cooperative multi-agent reinforcement learning by means of an analytical treatment of the value mixing that is typical in the field, and empirical confirmation that the new insights are practically applicable. The analytical treatment relies on framing the problem as a regret minimisation problem, and provides a decomposition of the contributing factors to the optimal projection of the joint value function into the family of monotonic per-agent value functions. In doing this, the authors recover two components already used in Weighted QMIX, and discover two more components that have not been previously used in the literature. The empirical results show some modest improvements on StarCraft Multi-Agent Challenge and Predator-Prey environments. Strengths and Weaknesses: # Strengths The article is well written and easy to follow. For the most part, the arguments flow well, and there are few structural surprises in the text. The identification of two new terms for the optimal projection problem into monotonic value functions is novel and useful. The schematics of the proofs for the main theoretical result are useful and well motivated. The comparison theoretically and empirically versus Weighted QMIX is appreciated and well explored. # Weaknesses The paper would benefit from stating more prominently that this is for fully-cooperative settings. The use of "MIX" in the name is a strong hint, but only for people already familiar with the field. For instance, the title in itself makes it sound like this work could apply to more general cases in MARL, which I don't think it would. The description in the abstract and the main text of the closed-form optimal projection weights should be clarified that this is an analytical result stemming from framing the problem as a regret minimisation. I was wondering while reading if this was the case, but it was only until Section 4 that it became clear that it was, indeed, a theoretical result. Requested Changes: The citations should be done parenthetically throughout (i.e. using `\citep` rather than `\cite` or `\citet`). Please explain why the ablations only look at 3 factors. I assume the on-policiness was not used because the algorithm ran was on policy? Should be clarified. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper presents a theoretical analysis to justify a closed-form solution for a weighted projection problem that appears in some multi-agent deep reinforcement learning (MARL) methods. This derivation is then used to motivate a MARL algorithm. Strengths and Weaknesses: Strengths - The presented approach could provide some justification for some previous algorithms. - The new algorithm seems to work reasonably well in the experiments. Weaknesses - Although the paper is generally clear, I find the exposition and writing to be a bit too sloppy, which is problematic for a paper trying to make a theoretical contribution. - Although the overall theoretical approach seems to be reasonable, it's not completely clear to me that the results hold in the claimed generality. - The paper contains many typos. Requested Changes: The rigor of the presentation should be improved. Here are some examples of issues: - All notations should be explained the first time they are used. - Some notations should be fixed, e.g., \mu denotes both a distribution and a Lagrange multiplier - The authors switch back and forth between state-based and history-based distributions, value functions, or policies. I believe the two formulations are not completely equivalent even in the CTDE paradigm. In particular, a discounted stationary state distribution is different from d^\pi(\tau). - How does the paper from Kumar et al. (2020) show that the measurement of on-policy transitions terms is negligible? - Should the state/action spaces be compact in Assumption 1? What does smooth mean in Assumption 4? Please specify clearly when those assumptions are used in the proofs. - Are there any typos in the last equation of page 15? What is d_0^\pi? What is \rho^\pi(s, u, t)? - The original lemma from Kakade and Langford (2002) applies to finite MDPs. Does it extend to the setting with your current assumptions? - In the proof of Lemma 6, why is there a white box after (17)? What is d^{\pi_k, \pi^*}, which doesn't seem to appear above? - In the first sentence below this white box, I don't think that both sides have the same minimum. I guess the authors mean the same argmin. However, even in that case, it's not clear to me that this is true. - What is \mathbb R_X? Other: - References should be between parentheses when they are not part of sentences. - The paper should be checked for typos. Broader Impact Concerns: None ================================================== Review 3: Summary: This paper tackles the problem of improving value factorization methods in multi-agent reinforcement learning. Given the optimal joint action-value function for multiple agents, previous work has factorized this using a monotonic mixing function of agents' utilities, to ensure consistency between joint and local action selection (we are in the centralized training and decentralized execution). However, this has limitations; this paper endeavours to find the optimal projection of an unrestricted mixing function onto the monotonic function classes. This is done by formulating the problem as regret minimization: the paper compares the optimal policy (following the optimal joint action-value function) to a restricted policy (using its projection onto monotonic mixing functions), with the regret being the difference in discounted reward between the two policies. This is called ReMIX. The paper provides theoretical results, and a tractable approximation that they test in the Predator-Prey and SMAC environments, showing improved performance. Strengths and Weaknesses: Strengths: + The paper is pretty clearly written, and the contributions are clear. + I have not worked in multi-agent RL for many years. However as far as I can tell, the core idea of the paper, to optimize value function factorization through regret minimization, is novel and interesting. I did not verify the proofs of the theoretical results. + The experiments are quite thorough: the paper compares against many different suitable baselines. There are only two environments considered (Predator-Prey and SMAC) but I think these are more than sufficient; SMAC in particular is quite difficult. The paper also includes an ablation experiment disabling different terms of ReMIX in Section 5.5. + The results are solid though not outstanding; ReMIX consistently performs among the best algorithms in all environments considered, but is often tied with QMIX or WQMIX (it significantly outperforms in 3/6 SMAC environments, and is mostly similar to WQMIX on the Predator-Prey task) + Section 5.3 includes a visualization of the generated optimal weights for ReMIX and WQMIX to better understand why the algorithm works. + The authors have committed to making their code open-source + The paper includes a discussion of implementation details and hyperparameters in Appendix A.3.3; taken together with the above point, it seems likely that the replicability of the paper is good. Weaknesses: - ReMIX is sensitive to hyperparameter choices such as the minimum weight, as seen in Figure 4 (though this isn't a major weakness, and it is positive that this result is reported) Requested Changes: I think this paper is pretty strong, and would recommend acceptance with very little modification. Small changes - Many citations are done in the form: X et al. (2023), instead of (X et al., 2023). - Note: there are some awkward turns of phrase, e.g.: "Such evolution happens because the transitions will approach optimal with the training going on." I don't think this significantly impairs the readability of the paper though, so I think it's fine to leave as-is. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: The main criticism of the paper is that of its technical clarity. Many terms were used without proper definition. Assumptions were made without being stated. Terms were overloaded, incorrectly defined, or ambiguous, and there were typos. Several problems remained even following an initial exchange of discussion with the reviewer. These are serious problems for a paper whose contribution requires a subtle understanding of the interactions between policy regret, partial observability, history vs. state based distributions, single-agent and vs. multiagent. This paper will be especially difficult to understand for this TMLR's readership which does not specialize in MARL. Hence, the long-term impact of this contribution would benefit from carefully refining the technical clarity and presentation. However, due to the value of the contribution, I strongly encourage the authors to resubmit the paper once the issues with presentation have been fixed. ==================================================
# Interpretable (Un)Controllable Features In Mdp'S Anonymous authors Paper under double-blind review ## Abstract In the context of MDPs with high-dimensional states, downstream tasks are predominantly applied on a compressed, low-dimensional representation of the original input space. A variety of learning objectives have therefore been used to attain useful representations. However, these representations usually lack interpretability of the different features. We present a novel approach that is able to disentangle latent features into a controllable and an uncontrollable partition. We illustrate that the resulting partitioned representations are easily interpretable on three types of environments and show that, in a distribution of procedurally generated maze environments, it is feasible to interpretably employ a planning algorithm in the isolated controllable latent partition. ## 1 Introduction Learning from high-dimensional data remains a challenging task. Particularly for reinforcement learning (RL), the complexity and high dimensionality of the Markov Decision Process (MDP) (Bellman, 1957) states often leads to complex or intractable solutions. In order to facilitate learning from high-dimensional input data, an encoder architecture can be used to compress the inputs into a lower-dimensional latent representation. To this extent, a plethora of work has successfully focused on discovering a compressed encoded representation that accommodates the underlying features for the task at hand (Jonschkowski & Brock, 2015; Jaderberg et al., 2017; Laskin et al., 2020; Lee et al., 2020; Yarats et al., 2021; Schwarzer et al., 2021; Kostrikov et al., 2021). The resulting low-dimensional representations however seldom contain specific disentangled features, which leads to disorganized latent information. This means that the individual latent states can represent the information from the state in any arbitrary way. The result is a representation with poor interpretability, as the latent states cannot be connected to certain attributes of the original observation space (e.g, the x-y coordinates of the agent). Prior work in structuring a latent representation has shown notions and use of interpretability in MDP representations (Francois-Lavet et al., 2019). When expanding this notion of interpretability to be compatible with RL, it has been argued that the controllable features should be an important element of a latent representation, since it generally represents what is directly influenced by the policy. In this light, Thomas et al. (2017) have introduced the concept of isolating and disentangling controllable features in a low-dimensional maze environment, by means of a selectivity loss. Furthermore, Kipf et al. (2020) took an object-centric approach to isolate distinct objects in MDPs and Ahuja et al. (2022) showed theoretical foundations for this isolation in a weakly-supervised controllable setting. Controllable features however only represent a fragment of an environment, where in many cases the uncontrollable features are of equal importance. For example, in the context of a distribution of mazes, for the prediction of the next controllable (agent) state following an action, the information about the wall structure is crucial (see Fig. 1). We therefore hypothesize that a thorough representation should incorporate controllable and uncontrollable features, ideally in a disentangled, interpretable arrangement; Intepretability is crucial for future real-world deployment (Glanois et al., 2021), while an additional benefit would be that the separation of the controllable and uncontrollable features can be exploited in downstream algorithms such as planning. Our contribution consists of an algorithm that, showcased in three different MDP settings, explicitly disentangles the latent representation into a controllable and an uncontrollable latent partition. This is highlighted on three types of environments, each with a varying class of controllable and uncontrollable elements. This 1 ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) (a) Random observations in each of the 4 mazes (b) Encoded representations ∀s ∈ S Figure 1: Visualization in a maze environment of (a) four random states ∈ R 48×48 and (b) the disentanglement of the controllable latent z c ∈ R 2on the horizontal axes, and the uncontrollable latent z u ∈ R 1on the vertical axis, given for all states in the four maze environments shown in four different colors. The representation is trained on high-dimensional tuples (st, at, rt, st+1), sampled from a replay buffer B, gathered from random trajectories in the four maze environments shown in (a). All possible states are encoded with zt = f(st; θenc) and plotted in (b) with the transition prediction for each possible action, revealing a clear disentanglement between the controllable agent's position and the uncontrollable wall architecture. Note that all samples are taken from the same buffer, filled with samples from all four mazes. allows for a precise and visible separation of the latent features, improving interpretability, representation quality and possibly moving towards a basis for building causal relationships between an agent and its environment. The unsupervised learning algorithm consists of both an action-conditioned and a state-only forward predictor, along with a contrastive and an adversarial loss, which isolate and disentangle the controllable versus the non-controllable features. Furthermore, we show an application of learning and planning on the human-interpretable disentangled latent representation, where the properties of disentanglement allow the planning algorithm to operate solely in the controllable partition of the latent representation. ## 2 Related Work General Representation Learning. Many works have focused on converting high-dimensional inputs to a compact, abstract latent representation. Learning this representation can make use of auxiliary, unsupervised tasks in addition to the pure RL objectives (Jaderberg et al., 2017). One way to ensure a meaningful latent space is to implement architectures that require a pixel reconstruction loss such as a variational (Kingma & Welling, 2014; Higgins et al., 2017) or a deterministic (Yarats et al., 2021) autoencoder. Other approaches combined reward reconstruction with latent prediction (Gelada et al., 2019), pixel reconstruction with planning (Hafner et al., 2019; 2021) or used latent predictive losses without pixel reconstruction (Lee et al., 2020; Schwarzer et al., 2021). Representing controllable features. In representation learning for RL, a focus on controllable features can be beneficial as these features are strongly influenced by the policy (Thomas et al., 2017). This can be done using generative methods (Laversanne-Finot et al., 2018), but is most commonly pursued using an auxiliary inverse-prediction loss; predicting the action that was taken in the MDP (Jonschkowski & Brock, 2015). The work in Pathak et al. (2017); Badia et al. (2020) builds a latent representation with an emphasis on the controllable features of an environment with inverse-prediction losses, and uses these features to guide exploratory behavior. Furthermore, Efroni et al. (2021) and concurrent work by Lamb et al. (2022) employ multi-step inverse prediction to successfully encompass controllable features in their representation. However, these works have not expressed a focus on also retaining the uncontrollable features in their representation, which is a key aspect in our work. Partitioning a latent representation. Sharing similarity in terms of the separation of the latent representation, Bertoin & Rachelson (2022) disentangle the latent representation in the domain adaptation setting into a task-relevant and a context partition, by means of adversarial predictions with gradient reversals and cyclic reconstruction. Fu et al. (2021) use a reconstruction-based adversarial architecture that divides their latent representation into reward-relevant and irrelevant features. Related work by Wang et al. (2022) further divides the latent representation of Dreamer (Hafner et al., 2019), using action-conditioned and state-only forward predictors, into controllable, uncontrollable and their respective reward relevant and irrelevant features. As compared to Wang et al. (2022), who focus on distraction-efficient RL, we purely focus on the representational learning aspect of these predictors, and show notions of separation in lowdimensional, structured representations of MDPs, leaning towards enhanced interpretability. Furthermore, we use an adversarial loss to enforce disentanglement between z c and z u, and apply a contrastive loss instead of pixel reconstruction to avoid representation collapse due to latent forward prediction. Interpretable representations in MDPs. More closely related to our research is the work by Thomas et al. (2017), which connects individual latent dimensions to independently controllable states in a maze using a reconstruction loss and a selectivity loss. The work by Francois-Lavet et al. (2019) visualizes the representation of an agent and its transitions in a maze environment, but does not disentangle the agent state in its controllable and uncontrollable parts, which limits the interpretability analysis and does not allow simplifications during planning. The work by Kipf et al. (2020) uses an object-oriented approach to isolate different (controllable) features, using graph neural networks (GNN's) and a contrastive forward prediction loss, but does not discriminate between controllable and uncontrollable features. Further work in this direction by Ahuja et al. (2022) focuses on theoretical foundations for an encoder to structurally represent a distinct controllable object. We aim to progress the aforementioned lines of research by using a representation learning architecture that disentangles an MDP's latent representation into interpretable, disentangled controllable and uncontrollable features. Finally, we show that having separate partitions of controllable and uncontrollable features can be exploited in a planning algorithm. Exploitations like these are done in combination with prior knowledge of a certain MDP, as in van der Pol et al. (2020). ## 3 Preliminaries We consider an agent acting within an environment, where the environment is modeled as a discrete Markov Decision Process (MDP) defined as a tuple (S, A*, T, R, γ*). Here, S is the state space, A is the action space, T : *S ×A → S* is the environment's transition function, R : *S ×A → R* is the environment's reward mapping and γ is the discount factor. We consider the setting where we have access to a replay buffer (B) of visited states st ∈ S that were followed by actions at ∈ A and resulted in the rewards rt ∈ R and the next states st+1. One entry in B contains a tuple of past experience (st, at, rt, st+1). The agent's goal is to learn a policy π : *S → A* that maximizes the expectation of the discounted return V π(s) = Eτ [P∞ t=0 γ tR(st, at) | st = s], where τ is a trajectory following the policy π. Furthermore, we examine the setting where a high-dimensional state (st ∈ R v) is compressed into a lowerdimensional latent state zt ∈ Z = R w where Z represents the latent space with w ≤ v. This is done by means of a neural network encoding f : *S → Z* where f represents the encoder. ## 4 Algorithm We aim for an interpretable and disentangled representation of the controllable and uncontrollable latent features. We define controllable features as the characteristics of the MDP that are predominantly affected by any action a ∈ A, such as the position of the agent in the context of a maze environment. The uncontrollable features are those attributes that are not or only marginally affected by the actions. We show that the proposed disentanglement is possible by designing losses and gradient propagation through two separate parts of the latent representation. Specifically, to assign controllable information to the controllable latent partition, the gradient from an action-conditioned forward predictor is propagated through it. To assign uncontrollable information to the uncontrollable latent partition, the gradient from a state-only forward predictor is propagated through it. The remaining details will be provided in the rest of this Section. ![3_image_0.png](3_image_0.png) Figure 2: Overview of the disentangling architecture, with dashed lines representing gradient propagation and green rectangles representing parameterized prediction functions. An observation st is encoded into a latent representation consisting of two parts; z c t and z u t, which represent controllable and uncontrollable features respectively. These separated representations are then independently used to make action-conditioned, state-only and adversarial predictions in order to provide gradients to the encoder that disentangle the latent representation zt into controllable (z c t ) and uncontrollable (z u t ) partitions. We consider environments with high-dimensional states, represented as pixel inputs. These pixel inputs are subsequently encoded into a latent representation zt = (z c, zu) *∈ Z ∈* R nc + R nu , with the superscripts c and u representing the controllable and uncontrollable features, and the superscripts nc and nu representing their respective dimensions. The compression into a latent representation *S → Z* is done by means of a convolutional encoder, parameterized by a set of learnable parameters θenc according to: $$z_{t}=(z_{t}^{c},z_{t}^{u})=f(s_{t};\theta_{e n c}).$$ $$(1)$$ ) = f(st; θenc). (1) An overview of the proposed algorithm is illustrated in Fig. 2 and the details are provided hereafter. In this section, all losses and transitions are given under the assumption of a continuous abstract representation and a deterministic transition function. The algorithm could be adapted by replacing the losses related to the internal transitions with generative approaches (in the context of continuous and stochastic transitions) or a log-likelihood loss (in the context of stochastic but discrete representations). ## 4.1 Controllable Features To isolate controllable features in the latent representation, z c t is used to make an action-conditioned forward prediction in latent space. In the context of a continuous latent space and deterministic transitions, z cis updated using a mean squared error (MSE) forward prediction loss Lc = zˆ c t+1 − z c t+1 2, where zˆ c t+1 is the action-conditioned residual forward prediction of the parameterized function Tc(*z, a*; θc) : *Z × A → Z*: $$\hat{z}_{t+1}^{c}=T_{c}(z_{t},a_{t};\theta_{c})+z_{t}^{c}$$ t(2) and the prediction target z c t+1 is part of the encoder output f(st+1; θenc). Note that the full latent state zt is necessary in order to predict zˆ c t+1 (e.g. the uncontrollable features could represent a wall or other static structure that is necessary for the prediction of the controllable features). Furthermore, the uncontrollable latent partition input z u t is accompanied by a stop gradient to discourage the presence of controllable features in z u. When minimizing Lc, both the encoder (θenc) as well as the predictor (θc) are updated, which allows shaping the representation z c as well as learning the internal dynamics. $$\left({2}\right)$$ ## 4.2 Uncontrollable Features To express uncontrollable features in the latent space, z u t is used to make a state-only (not conditioned on the action at) forward prediction in latent space. This enforces uncontrollable features within the uncontrollable latent partition z u, since features that are action-dependent cannot be accurately predicted with the preceding state only. Following a residual prediction, z uis then updated using a MSE forward prediction loss Lu = zˆ u t+1 − z u t+1 2, with zˆ u t+1 defined as: $$\hat{z}_{t+1}^{u}=T_{u}(z_{t}^{u};\theta_{u})+z_{t}^{u}$$ $$\left({\mathfrak{3}}\right)$$ t(3) and Tu(z u; θu) : *Z → Z* representing the parameterized prediction function. The target z u t+1 is part of the output of the encoder f(st+1; θenc). When minimizing Lu, both θenc and θu are updated. In this way the loss Lu drives the latent representation z u, which is conditioned on θenc according to (z c t , zu t ) = f(st; θenc), to only represent the features of st that are not conditioned on the action at. ## 4.3 Avoiding Predictive Representation Collapse Minimizing a forward prediction loss in latent space Z is prone to collapse (Francois-Lavet et al., 2019; Gelada et al., 2019), due to the convergence of Lc and Lu when f(st; θenc) is a constant ∀ st ∈ S. To avoid representation collapse when using forward predictors, a contrastive loss is used to enforce sufficient diversity in the latent representation: $${\mathcal{L}}_{H_{1}}=e x p{\big(}-C_{d}{\big\|}z_{t}-{\bar{z}}_{t}{\big\|}_{2}{\big)}$$ $$|\rangle$$ $\overset{\ast}{\underset{\text{l}}{\text{}}}$). (4) where Cd represents a constant hyperparameter and z¯t is a 'negative' batch of latent states zt, which is obtained by shifting each position of latent states in the batch by a random number between 0 and the batch size. In the random maze environment, an additional contrastive loss is added to further diversify the controllable representation: $${\mathcal{L}}_{H_{2}}=e x p{\big(}-C_{d}\big\|z_{t}^{c}-\bar{z}_{t}^{c}\big\|_{2}{\big)}$$ ## (5) Where Z C T Is Obtained From Randomly Sampled Trajectories. This Additional Regularizer Proved Neccessary To Avoid Collapse Of Z C When Moving To A Near Infinite Number Of Possible Mazes. More Information On This Subject Can Be Found In Appendix A.4. The Resulting Contrastive Loss Lh For The Random Maze Environment Then Consists Of 0.5Lh1 + 0.5Lh2 . The Total Loss Used To Update The Encoder'S Parameters Now Consists Of Lenc = Lc + Lu + Lh. 4.4 Guiding Feature Disentanglement With Adversarial Loss When using a controllable latent space z c ∈ R x, x ∈ N, where *x > g*, with g representing the number of dimensions needed to portray the controllable features, some information about the uncontrollable features in the controllable latent representation might be present (see Appendix D.2). This is due to the non-enforcing nature of Lc, as the uncontrollable features are equally predictable with or without the action. To ensure that no information about the uncontrollable features is kept in the controllable latent representation, an adversarial component is added to the architecture in Fig. 2. This is done by updating the encoder with an adversarial loss Ladv and reversing the gradient (Ganin et al., 2016). The adversarial loss is defined as $${\mathcal{L}}_{a d v}=\left|\hat{z}_{t}^{u}-z_{t}^{u}\right|^{2},$$ $$\left({\mathrm{6}}\right)$$ 2, (6) with zˆ u t = Tadv(z c t ; θadv), where zˆ u tis the uncontrollable prediction of the parameterized function Tadv(z c; θadv) : *Z → Z* and z u tis the target. Intuitively, since the parameters of Tadv(z c; θadv) are being updated with Ladv and the parameters of f(s; θenc) are being updated with −Ladv, the prediction function can be seen as the discriminator and the encoder can be seen as the generator (Goodfellow et al., 2014). The discriminator tries to give an accurate prediction of the uncontrollable latent z u given the controllable latent z c, while the generator tries to counteract the discriminator by removing any uncontrollable features from the controllable representation. In our case, the predictor is a multi-layer perceptron (MLP), which means that minimizing Ladv enforces that no nonlinear relation between z c and z ucan be learned. We hypothesize that this is a deterministic approximation of minimizing the Mutual Information (MI) between z u and z c. When using the adversarial loss, the combined loss propagating through the encoder consists of Lenc = Lc + Lu + LH − Ladv. Here the minus term in −Ladv represents a gradient reversal to the encoder. Note that the losses are not scaled, as this did not prove to be necessary for the experiments conducted. Algorithm 1 Interpretable (Un)Controllable Features in MDP's 1: Initialize θenc, θc, θu, θadv 2: for *iteration* = 1, 2*, . . . , N* do 3: Sample batch of tuples {st, at, st+1} 4: Encode observations: f(s; θenc) = {z c, zu} 5: Predict zˆ c t+1 = Tc(z c t , zu t , a; θc) + z c t // detach z u t 6: Predict zˆ u t+1 = Tu(z u t ; θu) + z u t 7: Predict zˆ u t = Tadv(z c t ; θadv) 8: Compute losses Lc,Lu, −Ladv,LH 9: Update parameters θenc, θc, θu, θadv 10: **end for** ## 4.5 Downstream Tasks By disentangling a latent representation in a controllable and an uncontrollable part, one can more readily obtain human-interpretable features. While interpretability is generally an important aspect, it is also important to test how a notion of human interpretability affects downstream performance, as it is generally desired to strike a good balance between interpretability and performance. This is examined by training an RL agent on the learned and subsequently frozen latent representation. The action at is chosen following an ϵ-greedy policy, where a random action is taken with a probability ϵ, and with (1 − ϵ) probability the policy π(z) = arg max a∈A Q(*z, a*; θ) is evaluated, where Q(*z, a*; θ) is the Q-network trained by Deep Double Q-Learning (DDQN) (Mnih et al., 2015; van Hasselt et al., 2016). The Q-network is trained with respect to a target Yt: $$\left(7\right)$$ $\mathcal{L}\cup\mathcal{L}$ Yt = rt + γQ(zt+1, arg max a∈A Q(zt+1, a; θ); θ −). (7) With γ representing the environment's discount factor and θ − the target Q-network's parameters. The target Q-network's parameters are updated as an exponential moving average of the original parameters θ according to: θ − k+1 = (1 − τ )θ − k + τθk, where subscript k represents a training iteration and τ represents a hyperparameter controlling the speed of the parameter update. The resulting DDQN loss is defined as LQ = Yt − Q(zt, a; θ) 2. The full computation of all losses is shown in pseudocode in Algorithm 1. ## 5 Experiments In this section, we showcase the disentanglement of controllable and uncontrollable features on three different environments, the complexity of which is in line with prior work on structured representations (Thomas et al., 2017; Higgins et al., 2018; Francois-Lavet et al., 2019; Kipf et al., 2020; Ahuja et al., 2022): (i) a quadruple maze environment, (ii) the catcher environment and (iii) a randomly generated maze environment. The first environment yields a state space of 119 different observations, and is used to showcase the algorithm's ability to disentangle a low-dimensional latent representation. The catcher environment examines a setting where the uncontrollable features are not static, and the random maze environment is used to showcase disentanglement in a more complex distribution of over 25 million possible environments, followed by the application of downstream tasks by applying reinforcement learning (DDQN) and a latent planning algorithm running in the controllable latent partition . The base of the encoder is derived from Tassa et al. (2018) and consists of two convolutional layers, followed by a fully connected layer for low-dimensional latent representations or an additional CNN for a higher-dimensional latent representation such as a feature map. For the full network architectures, we refer the reader to Appendix C. In all environments, the encoder f(s; θenc) is trained from a buffer B filled with transition tuples (st, at, rt, st+1) from random trajectories. Note that, in interpretability, there is generally not a specific metric to optimize for. In order to produce ![6_image_0.png](6_image_0.png) Figure 3: Visualization of the latent feature disentanglement in the catcher environment after 200k training iterations, with zt = f(st; θenc) ∈ R 2 + R 6×6. In (a) and (b), the left column shows z c t, the middle column is a feature map representing z u t and the right column is the pixel state st. The dashed lines separate observations where the ball position or the paddle position is kept fixed for illustration purposes. z ctracks the agent position while z utracks the falling ball. In b), note that even when having a two-dimensional controllable state (only 1 is needed, see Appendix D.2), the adversarial loss in b) makes sure that distinct ball positions have a negligible effect on z c(left column), even when the high-level features of the agent and the ball might be hard to distinguish. interpretable representations, finding the right hyperparameters required manual (human) inspection of the plotted latent representations. An ablation of the hyperparameters used can be found in Appendices A1-A3 ## 5.1 Quadruple Maze Environment The maze environment consists of an agent and a selection of four distinct, handpicked wall architectures. The environment's state is provided as pixel observations st ∈ R 48×48, where an action moves the agent by 6 pixels in each direction (up, down, left, right) except if this direction is obstructed by a wall. We consider the context where there is no reward (rt = 0 ∀ (st, at) ∈ (S, A)) and there is no terminal state. We select a two-dimensional controllable representation (z c ∈ R 2) and a one-dimensional uncontrollable representation (z u ∈ R 1). The remaining hyperparameters and details can be found in Appendix B. The experiments are conducted using a buffer B filled with random trajectories from the four different basic maze architectures. The encoder's parameters are updated using Lenc in Section 4.3 with LH = LH1 . After 50k training iterations, a clear disentanglement between the controllable (z c) and uncontrollable (z u) latent representation can be seen in Fig. 1. One can observe that the encoder is updated so that the one-dimensional latent representation z ulearns different values that define the type of wall architecture. A progression to this representation is provided in Appendix D.1. ## 5.2 Catcher Environment As opposed to the maze environment, the catcher environment encompasses uncontrollable features that are non-stationary. The ball is dropped randomly at the top of the environment and is falling irrespective of the actions, while the paddle position is directly modified by the actions. The environment's states are defined as pixel observations st of size R 51×51. At each time step, the paddle moves left or right by 3 ![7_image_0.png](7_image_0.png) Figure 4: A plot of the latent representation for all observations in a single randomly sampled maze when training with the aforementioned losses (a), substituting the action-conditioned forward-prediction loss Lc for an inverseprediction loss Linv (b) and when end-to-end updating the encoder with only the Q-loss LQ from DDQN for 500k iterations (c). The left column shows the controllable latent z c t ∈ R 2 with the current state in blue, the remaining states in red, and the predicted movement due to actions as different colored bars for each individual action. The middle column shows the uncontrollable latent z u t ∈ R 6×6and the right column shows the original state st ∈ R 48×48. Evidently, the controllable representations in (b) and (c) lack disentanglement and interpretability. Furthermore, the representation in (c) seems to have very little structure at all, showing that a representation that is optimized without prior structural incentives will often represent a black box. pixels. Since we are only doing unsupervised learning, we consider the context where there is no reward (rt = 0 ∀ (st, at) ∈ (S, A)) and an episode ends whenever the ball reaches the paddle or the bottom. We take z c ∈ R 2 and z u ∈ R 6×6. To test disentanglement, z cis of a higher dimension than needed since the paddle (agent) only moves on the x-axis and would therefore require only one feature (see Appendix D.2 for the simpler setting with z c ∈ R 1). To show disentanglement, the redundant dimension of z cshould not or negligibly have information about z u. The encoder's parameters are updated using Lenc in Section 4.4 with LH = LH1 . After training the encoder for 200k iterations, a selection of state observations st and their encoding into the latent representation z = (z c, zu) can be seen in Fig. 3. A clear distinction between the ball and paddle representations can be observed, with the former residing in z u and the latter in z c. ## 5.3 Random Maze Environment The random maze environment is similar to the maze environment from Section 5.1, but consists of a large distribution of randomly generated mazes with complex wall structures. The environment's state is provided as pixel observations st ∈ R 48×48, where an action moves the agent by 6 pixels in each direction. We consider z c ∈ R 2 and z u ∈ R 6×6. This environment tests the generalization properties of a disentangled latent representation, as there are over 25 million possible maze architectures, corresponding to a probability of less than 4·10−8to sample the same maze twice. Note that because z cis 2-dimensional, results with and without adversarial loss are in practice extremely close. After 50k training iterations, the latent representation z = (z c, zu) shows an interpretable disentanglement between the controllable and the uncontrollable features (see Fig. 4a). A clear distinction between the agent and the wall structure can be found inside z c and z u. Note that Instead of using a single dimension to 'describe' the uncontrollable features z u(see Fig. 1), using a feature map for z u allows training an encoding that provides a more interpretable representation of the actual wall architecture. Using an Inverse Predictor. An alternative to the state-action forward prediction method used throughout the paper is the inverse (action) prediction loss. An inverse prediction loss is often referred to in previous work that focuses on controllable features (Jonschkowski & Brock, 2015; Pathak et al., 2017; Badia et al., 2020). A single-step inverse prediction loss is defined as: $$\hat{a}_{t}=I(z_{t}^{c},z_{t+1}^{c},z_{t}^{u};\theta_{i n v}).$$ ; θinv). (8) $\left(8\right)$. ![8_image_0.png](8_image_0.png) Figure 5: Performance of different (pre)trained representations on the random maze environment, measured as a mean (full line) and standard error (shaded area) over 5 seeds. The 'Interpretable' setting uses an encoder pre-trained with 50k iterations to acquire a representation as in Fig. 4a, after which the encoder is frozen and a Q-network is trained on top with DDQN for 500k iterations. The 'Interpretable + Planning' curve is similar to the 'Interpretable' setting but uses DDQN with a planning algorithm in the controllable partition of the latent space with a depth of 3. The 'DDQN' setting uses an encoder trained end-to-end with only DDQN for 500k iterations and the 'Inverse Prediction' setting is equal to the 'Interpretable' setting but has an encoder pre-trained with Linv instead of Lc. On the right, a random subset of the vast amount of procedurally generated mazes used in the reward evaluation is shown. Here, aˆt is the predicted action and I(z c t , zc t+1, zu t ; θinv) : *Z → A* is the inverse prediction network. To see whether an inverse predictor can generate structured, controllable representations in the random maze environment, we replace the action-conditioned forward predictor with an inverse predictor, so that z cis no longer updated with Lc but with Linv (see Appendix A.6 for details on Linv). The resulting representation can be seen in Fig. 4b. It seems that using Linv, causes an absence of interpretable structure in the controllable latent representation z c t . Furthermore, there is a less precise disentanglement between the controllable and uncontrollable features, as differences can be observed in z c t when encoding equal agent positions as pixel states st. In addition, an inverse predictor does not allow forward prediction in latent space, which can be used for planning as shown hereafter. It thus seems that in some environments, an inverse prediction loss might be insufficient to isolate the controllable features. Take for example the maze agent in the top-right maze of Fig. 4, where the agent can only move in the left direction. Even when using the wall information (z u t ), an inverse predictor will not be able to predict the action taken when the agent does not go left. However, an action-conditioned forward predictor is able to predict the next state correctly regardless of which action was taken. Reinforcement Learning. In order to verify whether a human-interpretable disentangled latent encoding is informative enough for downstream tasks, we formalize the random maze environment into an MDP with rewards. The agent acquires a reward rt of -0.1 at every time step, except when it finds the key in the top right part in which case it acquires a positive reward of 1. The episode ends whenever a positive reward is obtained or a total of 50 environment steps have been taken. For each new episode, a random wall structure is generated, and the agent starts over in the bottom left section of the maze (see Fig. 5). To see whether an interpretable disentangled latent representation is useful for RL, we compare different scenarios of (pre)training; (i) An encoder pretrained for 50k iterations to attain the representation in Fig. 4a and subsequently trained with DDQN for 500k iterations (ii) an encoder identical to the aforementioned but trained with DDQN and a planning algorithm (iii) an encoder pretrained for 50k iterations with Linv instead of Lc and subsequently trained with DDQN for 500k iterations (iv) an encoder purely trained with DDQN gradients for 500k iterations. The resulting performances are compared in Fig. 5. We find that a disentangled structured representation is suitable for downstream tasks, as it achieves comparable performance to training an encoder end-to-end with DDQN for 500k iterations. Although performance is similar, Fig. 4c shows that an encoder updated solely with the DDQN gradient can lose any form of interpretability. Moreover, we show in Fig. 5 that a representation trained with an inverse prediction loss instead of a state-action forward prediction loss leads to poor downstream performance in the random maze environment. Planning. As seen in Fig. 4a, after pre-training with the unsupervised losses, an interpretable disentangled representation with the corresponding agent transitions is obtained. Due to this disentanglement of the controllable and uncontrollable features, we can for instance employ prior knowledge that the uncontrollable features in the maze environment are static, and employ latent planning in the controllable latent space only (see Fig. 6). The planning algorithm used is derived from Oh et al. (2017), and is used to successfully plan only in the controllable partition of the latent representation z c, while freezing the input for z uregardless of planning depth. More details on the planning algorithm can be found in Appendix A.5. It can be observed that even when planning with a relatively small depth of 3, we achieve better performance than the pre-trained representation with an ϵ-greedy policy and than the purely DDQN-updated encoder. ## 6 Limitations While the work presented here provides a step towards a better understanding of disentangling controllable and uncontrollable features within an encoder architecture, there remain some limitations that we must acknowledge, and which can provide a basis for future research. First, our method's effectiveness was predominantly demonstrated on environments with relatively simple underlying dynamics. In these environments, the disentanglement process was easier to achieve due to the limited complexity of internal dynamics present. As we begin to transfer our approach to more complex environments characterized by more extensive internal dynamics, there can arise two problems; The first being that the separation of controllable from uncontrollable features may not be as clear-cut in more complex MDPs, but can be more on a spectrum, complicating the fundamental differences between a state-only and a state-action forward predictor. The second being that interpretability will be harder to enforce when there are a large number of underlying factors of variation. As distinct seeds can give different orderings and signs of the neurons in the final layer of the encoder, identifying a factor of variation can become exponentially harder for more complex environments. Lastly, while our work showed that an action-conditioned forward predictor could be preferred over an inverse predictor in some environments for isolating controllable features, it may not hold for all scenarios. The inherent properties of different environments might show a necessity of using different predictors. Consequently, there could very well be MDPs where our current approach might not provide the same level of disentanglement showed in the MDPs used in this paper. Despite these limitations, we believe our work provides a strong foundation upon which future research can build and further extend the possibilities of achieving a highly interpretable latent representation through disentanglement of controllable and uncontrollable features. ## 7 Conclusion And Future Work We have shown the possibility of disentangling controllable and uncontrollable features in an encoder architecture, strongly increasing the interpretability of the latent representation while also showing the potential use of this for downstream learning and planning, even in a single latent partition. This disentanglement of controllable and uncontrollable features in the latent representation of high-dimensional MDPs was achieved by propagating an action-conditioned forward prediction loss and a state-only forward prediction loss through distinct sections of the latent representation. Additionally, a contrastive loss and an adversarial loss were used to respectively avoid collapse and further disentangle the latent representation. Furthermore, we showed that an action-conditioned forward predictor can, in some environments, be preferred as compared to an inverse predictor in terms of isolating controllable features in the representation. Finally, by employing forward prediction in latent space, we were able to successfully run a planning algorithm while leveraging the properties of the environment. In particular, the disentanglement of controllable and uncontrollable features allowed us to keep z ufrozen regardless of planning depth in the context of a distribution of randomly generated mazes, i.e. we only do forward prediction in z c. ![10_image_1.png](10_image_1.png) (a) Planning depth 3 (b) Planning depth 9 ![10_image_0.png](10_image_0.png) Figure 6: Visualization of the latent representation through an actual planning iteration utilizing a planning depth of 3 (a) and a planning depth of 9 (b), with the controllable representation z c ∈ R 2(left), the uncontrollable representation z u ∈ R 6×6(middle) that is kept static throughout planning depth and the original pixel input st ∈ R 48×48 (right). The translucent red dots represent every possible encoded state in the random maze environment, the full blue dot represents the current encoded state, the red dots represent intermediate encoded states estimated by planning and the green dot represents the final predicted state as chosen by the planning algorithm, consistent with its depth. Future work could focus on gradually transferring our notion of disentanglement and interpretability to environments with more extensive underlying internal dynamics. Further work could also look at the ordering of the latent dimensions, as a latent representation is often arbitrarily ordered. This means that distinct seeds will lead to a different ordering and sign of the neurons in the final layer of the encoder. For example, if seed one would give agent position +x and +y for neurons 1 and 2 respectively, then seed two could give agent position -y and +x to the same neurons. As we are additionally using a contrastive loss while learning our representation, these results are compliant with the theory that a contrastive loss can recover the original latent information up to an orthogonal linear transformation (Zimmermann et al., 2021). Certain benefits can be obtained as well with a particular design of the encoder architecture, as we have done in this paper using estimates of the necessary dimensions of z c and z ufor the different MDP environments. This can be seen as an inductive bias to aid disentanglement, as mentioned by Locatello et al. (2019). Succeeding work could also focus on finding more algorithmic benefits of this disentanglement of controllable/uncontrollable features in more complex environments. For example, in the context of safety, a disentangled interpretable representation could allow incorporating latent state constraints in a planning algorithm. Lastly, as discussed by Glanois et al. (2021); Locatello et al. (2019), an interesting venue could be to further investigate the trade-off between interpretability and downstream performance. This is due to the fact that black-box representations such as Figure 4c still seem to have excellent downstream performance with DDQN, where for the task of maze navigation, a human would perform substantially better using the representation portrayed in Figure 4a as compared to using the representation in Figure 4c. ## References Kartik Ahuja, Jason Hartford, and Yoshua Bengio. Weakly supervised representation learning with sparse perturbations. In *Advances in Neural Information Processing Systems, NIPS*, 2022. Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Bilal Piot, Steven Kapturowski, Olivier Tieleman, Martín Arjovsky, Alexander Pritzel, Andew Bolt, and Charles Blundell. Never Give Up: Learning Directed Exploration Strategies. In *International Conference on Learning Representations,* ICLR, 2020. Richard Bellman. A markovian decision process. In *Journal of Mathematics and Mechanics*, volume 6, 1957. David Bertoin and Emmanuel Rachelson. Disentanglement by cyclic reconstruction. In *IEEE Transactions* on Neural Networks and Learning Systems, 2022. Yonathan Efroni, Dipendra Misra, Akshay Krishnamurthy, Alekh Agarwal, and John Langford. Provable RL with exogenous distractors via multistep inverse dynamics. In *International Conference on Machine* Learning, ICML, 2021. Vincent Francois-Lavet, Yoshua Bengio, Doina Precup, and Joelle Pineau. Combined Reinforcement Learning via Abstract Representations. In *Proceedings of the AAAI Conference on Artificial Intelligence, AAAI*, 2019. Xiang Fu, Ge Yang, Pulkit Agrawal, and Tommi Jaakkola. Learning task informed abstractions. In *International Conference on Machine Learning, ICML*, 2021. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky, Urun Dogan, Marius Kloft, Francesco Orabona, Tatiana Tommasi, and al Ganin. *Domain-Adversarial Training of Neural Networks*. In *Journal of Machine Learning Research*, volume 17, 2016. Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G. Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In *International Conference on Machine* Learning, ICML, 2019. Claire Glanois, Paul Weng, Matthieu Zimmer, Dong Li, Tianpei Yang, Jianye Hao, and Wulong Liu. A survey on interpretable reinforcement learning. *arXiv preprint arXiv:2112.13112*, 2021. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. In Advances in Neural Information Processing Systems, NIPS, 2014. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International Conference on Machine Learning,* ICML, 2019. Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering Atari with Discrete World Models. In *International Conference on Learning Representations, ICLR*, 2021. Irina Higgins, Arka Pal, Andrei A. Rusu, Loic Matthey, Christopher P Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. In *International Conference on Machine Learning, ICML*, 2017. Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loïc Matthey, Danilo J. Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. *arXiv preprint arXiv:1812.02230*, 2018. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement Learning with Unsupervised Auxiliary Tasks. In International Conference on Learning Representations, ICLR, 2017. Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. *Autonomous Robots*, 39(3), 2015. Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In *International* Conference on Learning Representations, ICLR, 2015. Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. In International Conference on Learning Representations, ICLR, 2014. Thomas Kipf, Elise van der Pol, and Max Welling. Contrastive Learning of Structured World Models. In International Conference on Learning Representations, ICLR, 2020. Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels. In *International Conference on Learning Representations, ICLR*, 2021. Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, and John Langford. Guaranteed discovery of controllable latent states with multi-step inverse models. *arXiv preprint arXiv:2207.08229*, 2022. Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In *37th International Conference on Machine Learning, ICML*, 2020. Adrien Laversanne-Finot, Alexandre Pere, and Pierre-Yves Oudeyer. Curiosity driven exploration of learned disentangled goal spaces. In *Proceedings of The 2nd Conference on Robot Learning*. PMLR, 2018. Kuang Huei Lee, Ian Fischer, Anthony Z Liu, Yijie Guo, Honglak Lee, John Canny, and Sergio Guadarrama. Predictive information accelerates learning in RL. In Advances in Neural Information Processing Systems, NIPS, 2020. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In *Proceedings of the 36th International Conference on Machine Learning, PMLR*, 2019. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518, 2015. Junhyuk Oh, Satinder Singh, and Honglak Lee. Value Prediction Network. In Advances in Neural Information Processing Systems, NIPS, 2017. Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven Exploration by Selfsupervised Prediction. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, 2017. Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. DataEfficient Reinforcement Learning with Self-Predictive Representations. In International Conference on Learning Representations, ICLR, 2021. Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin Riedmiller. DeepMind Control Suite. *arXiv preprint arXiv:1801.00690*, 2018. Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, and Yoshua Bengio. Independently Controllable Factors. arXiv preprint arXiv:1708.01289, 2017. Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. Mdp homomorphic networks: Group symmetries in reinforcement learning. In *Advances in Neural Information Processing* Systems, NIPS, 2020. Hado van Hasselt, Arthur Guez, and David Silver. Deep Reinforcement Learning with Double Q-learning. In *Proceedings of the AAAI Conference on Artificial Intelligence, AAAI*, 2016. Tongzhou Wang, Simon Du, Antonio Torralba, Phillip Isola, Amy Zhang, and Yuandong Tian. Denoised MDPs: Learning world models better than the world itself. In Proceedings of the 39th International Conference on Machine Learning, PMLR, 2022. Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, 2021. Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. Contrastive learning inverts the data generating process. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning* Research, pp. 12979–12990. PMLR, 18–24 Jul 2021. ## A Additional Material A.1 Ablation Of The Contrastive Scalar Without using a pixel reconstruction loss, the contrastive loss LH is crucial in avoiding the trivial solution for any latent forward predictor (Francois-Lavet et al., 2019; Gelada et al., 2019). The contrastive scalar that regulates the LH however remains the most influential hyperparameter. When Cd is chosen too low, the representation remains in a compact cluster. On the other hand, when Cd is chosen too high, unnecessary inter-sample distances are formed to enforce large individual latent distances. Two ablations of the contrastive scalar Cd are shown in Fig. 7. ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) Figure 7: Ablation of the hyperparameter Cd, where a higher value of Cd enforces less entropy in the representation, while a lower value of Cd especially pushes the controllable features z ctowards shapes that ensure large distances between samples. In both figures, the left column is z c ∈ R 2, the middle column is z u ∈ R 6×6and the right column is the input state ∈ R 48×48. ## A.2 Ablation Of Learning Rates We show experiments in Fig. 8 and Fig. 9 where we employ different learning rates for the encoder and the action-conditioned forward predictor, respectively. ![13_image_2.png](13_image_2.png) (a) Encoder learning rate of 2e-4 ![13_image_3.png](13_image_3.png) Figure 8: Ablation of the learning rates for the encoder, where a too low learning rate causes collapse of z cand a too high learning rate causes distortions in the uncontrollable features z u. In both figures, the left column is z c ∈ R 2, the middle column is z u ∈ R 6×6and the right column is the input state ∈ R 48×48. ![14_image_0.png](14_image_0.png) (a) Tc learning rate of 4e-4 ![14_image_1.png](14_image_1.png) (b) Tc learning rate of 4e-6 Figure 9: Ablation of the learning rates for the action-conditioned forward predictor. A too high learning rate will cause the controllable representation to lose structure, while a low learning rate retains structure but does not learn strong transition dynamics. In both figures, the left column is z c ∈ R 2, the middle column is z u ∈ R 6×6and the right column is the input state ∈ R 48×48. ## A.3 Ablation Of The Detachment Of Z U **And Ablation Of The Residual Prediction** As seen in the main paper in Figure 2, we detach the uncontrollable representation z cfrom Lc as we do not want controllable features to be present in z u. We can see in Figure 10 that updating z u with Lc leads to slightly better transition predictions in z c, but also results in a less interpretable encoding of z u. Furthermore, we can also see in Figure 10 that, when using normal forward predictions instead of residual forward predictions, we lose almost all of our interpretable structure in z u. ![14_image_2.png](14_image_2.png) (a) No detachment of z u ![14_image_3.png](14_image_3.png) (b) No residual predictions of z c t+1 and z u t+1 Figure 10: In both figures, the left column is z c ∈ R 2, the middle column is z u ∈ R 6×6and the right column is the input state ∈ R 48×48. ## A.4 Ablation Of The Entropy Loss Lh2 As the amount of possible encoded maze architectures goes to infinity due to the procedural generation, a collapse in the controllable features z ccan be noticed when using only LH1 as the contrastive loss (see Fig. 11). On the other hand, when using only LH2 as the contrastive loss, there is no more clear distinction in the uncontrollable representation z u. The best results were obtained using a combination of the aforementioned losses. ![15_image_1.png](15_image_1.png) ![15_image_0.png](15_image_0.png) $$(10)$$ (a) LH = LH1 (b) LH = LH2 Figure 11: In both figures, the left column is z c ∈ R 2, the middle column is z u ∈ R 6×6and the right column is the input state ∈ R 48×48. ## A.5 Planning We use a planning algorithm derived from (Oh et al., 2017; Francois-Lavet et al., 2019), where we employ d-step planning as: descriptive planning as $$\hat{Q}^{d}((\hat{x}_{t}^{\varepsilon},z^{u}),a)=\left\{\begin{array}{ll}P((\hat{x}_{t}^{\varepsilon},z^{u}),a;\theta_{\varepsilon})+\Gamma((\hat{x}_{t}^{\varepsilon},z^{u}),a;\theta_{\tau})&\max_{u\in A}\ \hat{Q}^{d-1}((\hat{x}_{t+1}^{\varepsilon},z^{u}),a^{\prime}),&\mbox{if}\ d>0\\ Q((\hat{x}_{t}^{\varepsilon},z^{u}),a;\theta),&\mbox{if}\ d=0\end{array}\right.\tag{9}$$ $$Q_{\mbox{\scriptsize{plan}}}^{D}((\hat{x}_{t}^{\varepsilon},z^{u}),a)=\sum_{d=0}^{D}\hat{Q}^{d}((\hat{x}_{t}^{\varepsilon},z^{u}),a)$$ Where P(st, a; θr) : *Z × A → R* represents the reward predictor and Γ(*s, a*; θγ) : *Z × A →* γ represents the discount value predictor. The action is chosen by taking the argmax of QD plan((ˆz c t , zu), a). Note in the results from Section 5.3, we are only forward predicting in the controllable latent space z c, and that z uremains a fixed value regardless of planning depth. This is possible by making use of the prior knowledge of the maze environments together with a disentangled controllable and uncontrollable latent representation. ## A.6 Inverse Prediction A common single-step inverse prediction is defined as: $$(11)$$ $${\hat{a}}_{t}=f(s_{t},s_{t+1})$$ aˆt = f(st, st+1) (11) where aˆt is the predicted action and f(st, st+1) represents an arbitrarily structured function. In the random maze environment, we use a parameterized inverse predictor which predicts in latent space: $$\hat{a}_{t}=I(z_{t}^{c},z_{t+1}^{c},z_{t}^{u},z_{t+1}^{u};\theta_{i n v})$$ t+1; θinv) (12) Where I(·; θinv) ∈ I : *Z → A* is a parameterized inverse prediction function. Since we have 4 actions, we use the 4-dimensional logit output aˆt to calculate the inverse prediction loss Linv as: $$S({\hat{a}}_{i})={\frac{\exp({\hat{a}}_{i})}{\sum_{j=1}^{n_{a}}\exp({\hat{a}}_{j})}},\quad{\mathcal{L}}_{i n v}=-\sum_{i=1}^{n_{a}}a_{i}\log(S({\hat{a}}_{i}))$$ $$(12)$$ $$(13)$$ 16 Here, na is the number of actions, S(ˆai) represents the softmax operator and aiis the actual action, given as a 0 or 1 truth label. This is more commonly known as the Cross-Entropy loss computation. ## A.7 Reconstruction We run an additional ablation on the four mazes environment, where the contrastive loss LH is replaced with a pixel reconstruction loss. The resulting representation comparison can be seen in Fig. 12. ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) (a) Contrastive loss (b) Reconstruction loss Figure 12: Visualization in a maze environment of the disentanglement of the controllable latent z c ∈ R 2on the horizontal axes, and the uncontrollable latent z u ∈ R 1on the vertical axis, given for all states in the four maze environments shown in four different colors. The representation is trained on high-dimensional tuples (st, at, rt, st+1), sampled from a replay buffer B, gathered from random trajectories in the four maze environments. All possible states are encoded with zt = f(st; θenc) and plotted in (a) and (b) together with the transition prediction for each possible action. In (a), a clear disentanglement between the controllable agent's position and the uncontrollable wall architecture is portrayed. In (b), it seems that a reconstruction loss groups observations with similar pixel inputs together, and thus allows the forward predictors to 'collapse' to unit matrices, decreasing representation quality. ## A.8 T-Sne We conduct an additional experiment in the random maze environment where we use a latent dimension of 32, partition it in half to form z c ∈ R 16 and z u ∈ R 16 and show the a T-SNE visualization of 6 different trajectories in random mazes in Fig. 13. Note that, because the trajectories are random, only a subpart of the possible agent positions in every random maze is present. ![17_image_0.png](17_image_0.png) (a) T-SNE of z u and z c(b) 6 random mazes Figure 13: Ablation of a dimensionality increase in our random maze environment. Here, the total latent space is a 32-dimensional MLP output, where z cand z uare both 16-dimensional. in (a), 6 random trajectories are plotted using T-SNE (perplexity=20) in different colors for both z cand z u, where z uremains similar across a trajectory (same wall architecture), and z c differs across the trajectory (different agent positions). In (b), a collection of the random mazes are shown from which the random trajectories have been taken. ## B Experiment Details The Pytorch framework was used for all experiments, as well as the Adam optimizer (Kingma & Ba, 2015). We employ a batch size of 32 tuples (st, at, rt, st+1) for every update. In all experiments, we detach z c t in the calculation of Lc, as it allowed us to use a larger learning rate for Tc without causing instabilities. Simple Maze The replay buffer B is filled with 5k transitions from each of the four wall architectures. The transitions are collected by the agent following a random policy. The learning rate for the encoder is 5 · 10−5, for the action-conditioned forward predictor 1 · 10−3 and for the uncontrollable forward predictor 5 · 10−5. The contrastive scalar Cd is set to 15. Catcher The replay buffer B is filled with 25k transitions. The transitions are collected by the agent following a random policy. A new random maze is created after 50 time steps or when the reward is acquired. The learning rate for the encoder is 2 · 10−5, for the action-conditioned forward predictor 4 · 10−5 and for the uncontrollable forward predictor 1 · 10−5. When using the adversarial loss, we use a learning rate of 1 · 10−3for the adversarial predictor. The contrastive scalar Cd is set to 5. Random Maze The replay buffer B is filled with 50k transitions, representing around 1000 maze architectures. The transitions are collected by the agent following a random policy. The learning rates used are equal to those of the catcher environment; for the encoder 2 · 10−5, for the action-conditioned forward predictor 4 · 10−5 and for the uncontrollable forward predictor 1 · 10−5. After freezing the encoder, we train the action-conditioned forward predictor for an additional 250k iterations on the same 50k transitions in the buffer B. For updating the Q-network with DDQN, we use a learning rate of 1 · 10−4, and a τ of 0.02. The contrastive scalar Cd is set to 13. When using planning, we employ a learning rate of 5 · 10−5for the reward and discount prediction networks. Contrastive Loss For the catcher and random maze environment, given that z cis 1 or 2-dimensional, and z uis a 36-dimensional feature map, we alleviate dimensional mismatch when calculating the contrastive loss in Equation 4 in the main paper. This is done by taking a random subset of 15 out of 36 feature values in z ufor every batch. ## C Network Architecture We use the same base encoder for all experiments, made up of 2 convolutional layers of 32 channels each, with a kernel size of 3 and stride 2, except for the final layer which has stride 1. Both convolutional layers have a Rectified Linear Unit (ReLU) nonlinear activation. In the quadruple maze environment, the output of the base convolutional encoder is flattened and used as an input to a single linear layer with 3 outputs (z c +z u) and a hyperbolic tangent (tanh) activation function. In the catcher and random maze environments, we use the following encoder head to extract the uncontrollable features; the base convolutional layers are followed by a single convolutional layer with 32 channels, a kernel size of 4 and a stride of 1. This layer is followed by a ReLU activation function and an AveragePool layer with an output size of 6. For the controllable features, we flatten the output of the base convolutional encoder and use this as an input to a linear layer with 200 neurons and a tanh activation function. This layer is followed by another linear layer with nc neurons and a tanh activation function. The transition and prediction models all have the same structure, with linear layers of 32-128-128-32-x neurons where x is the output dimension in line with the predicted feature's dimension. The linear layers all have tanh activation functions except for the final output. Only the action-conditioned transition predictor of the random maze environment has larger layer sizes, with linear layers of 128-512-512-128-2, to account for slightly more complicated transitions. The DQN network used is of size 128-512-512-128-4, with an output value corresponding to each possible action. ## D Additional Figures D.1 Quadruple Maze Progression ![18_image_0.png](18_image_0.png) (a) 1k iterations (b) 2k iterations (c) 5k iterations Figure 14: Progression of the separation of the controllable z c(x and y-axis) and uncontrollable z u(z-axis) features in the maze environment. ## D.2 Catcher ![19_image_0.png](19_image_0.png) Figure 15: Comparison of training the representation for the catcher environment with either 1 or 2-dimensions for the controllable representation z°. When using more dimensions for z° than needed, it can be observed that some information of the ball position can be present in z^.
Review 1: Summary: This paper proposes a new method for learning interpretable state representations from high-dimensional states (based on pixels). Specifically, this paper first proposes the concepts of controllable and uncontrollable latent representations, then introduces the method for learning these two representations. Finally, this paper validates the effectiveness of these representations in reinforcement learning and planning algorithms. Overall, the main contribution of this paper is to provide a method for learning low-dimensional state representations that enhances their interpretability and quality. Strengths and Weaknesses: Strengths: How to learn interpretable state representation is an interesting and worthwhile research problem. In this work, the authors propose a concrete method and demonstrate the interpretability of learned representations in Maze and Catcher environments. Weaknesses: Compared to previous work [1], the proposed method only takes into account an explicit loss enforcing disentanglement, which is not sufficient in terms of innovative contributions. In addition, I am concerned about the generality and effectiveness of the proposed approach on different downstream tasks. Requested Changes: 1. Compared to concurrent works, this paper further investigates the decomposition of low-dimensional structured representations. I recommend the authors introduce the importance of this aspect of the work. 2. In Section 4.3, the authors introduce the contrastive loss in a random maze environment, which is given by Equation 5. Why is only ${z_t}^c$ considered here? Additionally, how are positive and negative samples specifically defined? 3. In Section 4.4, the authors mention "minimizing $\mathcal{L}_{adv}$ enforces that...". What does this sentence mean? Besides verifying the effectiveness of adversarial loss through experiments, can the authors provide more explanation about it? 4. In Figure 3, the left column shows $z_t^c$. What are the specific differences between (a) and (b)? 5. The description in Figure 4 is unclear. What do the different colored dots represent in (a), (b), and (c)? 6. In the experimental section, I have three questions: (1) Besides the random maze environment, did the authors conduct experiments on other more common downstream tasks? What were the experimental results? (2) In Figure 5, the proposed method performs worse than DDQN in the initial training stages. What is the reason for this? Additionally, why are standard errors reported instead of standard deviations in Figure 5? How robust is the method? (3) From the experimental results, the advantage of the proposed method is not obvious in RL, and the compared methods are also quite limited. 7. Essentially, this work proposes a method for learning state representation. Therefore, if some classic reinforcement learning algorithms[4][6][7] and more advanced RL algorithms[5][8] can be combined for experiment verification, or compared to advanced state representation methods[2][3], this work may be more convincing. References: [1] Tongzhou Wang, Simon Du, Antonio Torralba, Phillip Isola, Amy Zhang, and Yuandong Tian. Denoised MDPs: Learning world models better than the world itself. In Proceedings of the 39th International Conference on Machine Learning, PMLR, 2022. [2] Shang J, Kahatapitiya K, Li X, et al. StARformer: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning[C]//Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX. Cham: Springer Nature Switzerland, 2022: 462-479. [3] Rakelly K, Gupta A, Florensa C, et al. Which Mutual-Information Representation Learning Objectives are Sufficient for Control?[J]. Advances in Neural Information Processing Systems, 2021, 34: 26345-26357. [4] Schulman J, Wolski F, Dhariwal P, et al. Proximal policy optimization algorithms[J]. arXiv preprint arXiv:1707.06347, 2017. [5] Tang H, Meng Z, Hao J, et al. What about inputting policy in value function: Policy representation and policy-extended value function approximator[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(8): 8441-8449. [6] Fujimoto S, Hoof H, Meger D. Addressing function approximation error in actor-critic methods[C]//International conference on machine learning. PMLR, 2018: 1587-1596. [7] Haarnoja T, Zhou A, Abbeel P, et al. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor[C]//International conference on machine learning. PMLR, 2018: 1861-1870. [8] Cetin E, Ball P J, Roberts S, et al. Stabilizing Off-Policy Deep Reinforcement Learning from Pixels[J]. arXiv preprint arXiv:2207.00986, 2022. Broader Impact Concerns: Not applicable ================================================== Review 2: Summary: This paper presents a method to learn a state representation in MDPs that separately encodes controllable and uncontrollable features. This is achieved via forward prediction losses coupled with an adversarial loss to minimise information about uncontrollable features in the controllable features and a contrastive loss that encourages diversity and prevents features collapse. The method is evaluated on three different environments (quadruple maze, random mazes and catcher). The resulting features are interpretable and still allow an RL agent or a planning algorithm (using the forward models) to solve downstream tasks. The paper also ablates some of the design choices and compares to an alternative approach that uses inverse models. Strengths and Weaknesses: Strengths: * The proposed method does appear to work quite well in the environments considered, resulting in controllable features. * Nice analysis on different states showing the interpretability and disentanglement. * To the best of my knowledge, technically correct. * The paper is quite clear and easy to read. Weaknesses: * The overall loss seems quite complicated and likely tricky to scale to less toy-ish environments. If the paper scaled to more complex environments or showed how and why the method fails the paper would be stronger. * The chosen evaluations are very amenable to disentangling into controllable and uncontrollable features. What happens in more complex environments where the separation is less clear? * why is there an additional contrastive loss for one of the environments? This muddies the waters a bit. Requested Changes: * Consider more complex environments. * Consider sharpening the focus of the paper. Is the focus the disentangling into controllable and uncontrollable features? Or the emergent interpretability in small-ish environments? Broader Impact Concerns: none. ================================================== Review 3: Summary: This paper proposes an algorithm for disentangling representations of MDP states into a controllable and uncontrollable portion. This is important for interpretability of the learned representations before they are applied to down-stream tasks, where such a separation can be exploited. The algorithm itself is straight-forward. For a given state, an encoder computes a representation that is decomposed into two parts: controllable and uncontrollable. To ensure that each part only contains respective information about the environment, they are trained with different objectives: * The controllable part is trained using an action- and state- conditioned forward predictor in latent space, where gradients flow backward only through the controllable part. * Similarly, the uncontrollable part is trained using only a state-conditioned forward predictor in latent space. To regularize these objectives, which use encoded states at future timesteps as targets, a “contrastive” loss is added, which encourages different states to be encoded differently. Further, to ensure the controllable latent space does not contain lingering information about uncontrollable features, an adversarial loss is added. The proposed method is evaluated on three environments of low visual complexity, where it is shown how the proposed algorithm yields interpretable representations that disentangled controllable and uncontrollable information, as well the effect of different loss terms. Further, it is shown how the learned representations are better suitable for down-stream performance in these MDP environments and support basic planning capabilities. Strengths and Weaknesses: On the upside, this paper is well written and easy to follow. The method is explained in sufficient detail and it should not be difficult to replicate the results presented here. Where the paper falls short is in terms of experimental evaluation, comparison to prior work, significance, though this can still be improved: * The method design is intuitive and well thought out to me. The adversarial and the contrastive regularizer (though I wouldn’t call this contrastive) make sense as well. However, I have one issue with the additional contrastive loss added for the Maze environment specifically. Currently the environments under consideration are rather simplistic (especially in terms of visual complexity), even though they are high-dimensional, and it is discouraging to see that an environment-specific regularizer is already needed at this stage. As far as I can tell there is no corresponding ablation or further analysis into this, which I would like to see added. Though ideally, this additional loss can be omitted (perhaps through better hyper-parameter search) with a little more effort, which would be a better solution. * Similarly, it is insufficiently explored how well the proposed algorithm works when suboptimal latent dimensions are selected. This is important, since for many real-world tasks, it is unclear what the ideal representation would look like, especially when it concerns multiple controllable actors, or non-physical interactions (with uncontrollable parts) of the state. Currently, in the environments considered, the dimensions of the controllable features are either 1 or 2, and only in the catcher environment a “suboptimal” value of 2 is used when it could be 1. The same holds true for the uncontrollable feature dimensionality. Thus, I would like to see added some experiments with a fixed dimensionality, perhaps 16 or 32, across all environments. Visualization should still be possible using PCA or T-SNE. * Relatedly, for the environments considered, the representation learned for the uncontrollable features is highly similar to the actual environment itself. In fact, the representation learned for the random maze environment (and to an extent for the Quadruple maze environment) could have been easily obtained by just down-sampling the input observation. Because the encoder uses CNN, this is a solution that should be very easy to learn, and I am wondering how well the method performs at learning interpretable representations when this similarity does not hold true. For example, can it learn to map a 2.5d observation of a chess-board facing one of its corners to a 2d grid? Alternatively, what do the representations look like if we use a fully-connected MLP encoder that does not incorporate a grid-like bias? * For comparison to alternative methods, two comparisons are provided on the random maze environment: (1) inverse prediction loss and (2) Q-learning. Figure 5 shows that the inverse prediction loss clearly performs worse, though the DDQN trained representations perform as good as using the interpretable approach. This is a bit concerning, as the hyper-parameter choices for the dimensionalities used here are likely in favor of the interpretable approach. Hence, it is important to (1) extend the down-stream evaluation to the other environments explored in this work, and (2) conduct an evaluation using less restricted dimensionalities (eg. hidden size 64 or 128). Further, this begs the question how significant the interpretability of the learned representations actually is, which isn’t well motivated (though the planning results in Figure 6 are a step in that direction). Thus I also ask the authors to comment on and elaborate on why we should care about interpretable representations for training RL agents based on the existing literature, and why sacrificing down-stream performance is OK. * Finally, Wang et al. (2022) is listed as “concurrent work” without diving deeper into how these two approaches compare. While an empirical comparison is not necessary in my view, one should also not ignore that the Wang paper has been around for nearly 10 months. In particular, the Wang paper considers environments of much higher visual complexity and (as far as I can tell) goes beyond the disentangled representations learned here. Thus, while this paper investigates “[...] the separation in low-dimensional, structured representations and have an explicit loss enforcing disentanglement”, what are the pro’s and con’s between these methods. Could the procedure here be applied to the environments considered in Wang? Could the approach in Wang be applied here? Are there any limitations of the Wang method addressed here or vice versa? Could they be combined? In my view it is very important that the paper addresses this, since future readers may already be familiar with Wang et al. (2022). Some smaller comments: * It took me a while to understand the uncontrollable feature learned in Figure 1(b). Is it simply assigning different layouts of the maze to a scalar value for separation purposes? This seems quite different from the random mazes, where the uncontrollable features take on a grid-structured representation of the layout of the maze itself. * A relevant work that would be useful to discuss is van der Pol et al., (2020) “MDP homomorphic networks: Group symmetries in reinforcement learning”, which offers a different approach to learning interpretable representations of high-dimensional MDPs. * For the contrastive loss, it is written that negative states are obtained by shifting each position of the latent state by a random number between 0 and the batch size. I suppose that should be between 1 and the batch size? Else the negative examples could include some positives. Requested Changes: I have explicitly asked for a number of changes in the "strengths and weaknesses" above, which I consider critical, expect for the "smaller comments". To strengthen the work further, I would like to see a limitations section included. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: In addition to the insufficient experiments mentioned above, the paper also suffers from a lack of novelty. As pointed out by the reviewers Wang et al. already proposed to map states to controllable/uncontrollable features as well as indicating whether the features are reward relevant/irrelevant. In that sense it seems that Wang et al.'s work is more advanced than the proposed approach. The paper needs to justify why the proposed approach is needed. Why can't we use Wang et al.'s approach to disentangle states into controllable/uncontrollable features? If there is something novel, this should be demonstrated with an empirical comparison to Wang et al. ==================================================
# On The Data Heterogeneity In Adaptive Federated Learning Yujia Wang yjw5427@psu.edu College of Information Sciences and Technology Pennsylvania State University Jinghui Chen *jzc5917@psu.edu* College of Information Sciences and Technology Pennsylvania State University Reviewed on OpenReview: *https: // openreview. net/ forum? id= hv7iXsiBZE* ## Abstract Adaptive federated learning, which benefits from the characteristic of both adaptive optimizer and federated training paradigm, has recently gained lots of attention. Despite achieving outstanding performances on tasks with heavy-tail stochastic gradient noise distributions, adaptive federated learning also suffers from the same data heterogeneity issue as standard federated learning: heterogeneous data distribution across the clients can largely deteriorate the convergence of adaptive federated learning. In this paper, we propose a novel adaptive federated learning framework with local gossip averaging to address this issue. Particularly, we introduce a client re-sampling mechanism and peer-to-peer gossip communications between local clients to mitigate the data heterogeneity without requiring additional gradient computation costs. We theoretically prove the fast convergence for our proposed method under non-convex stochastic settings and empirically demonstrate its superior performances over vanilla adaptive federated learning with client sampling. Moreover, we extend our framework to a communication-efficient variant, in which clients are divided into disjoint clusters determined by their connectivity or communication capabilities. We exclusively perform local gossip averaging within these clusters, leading to an enhancement in network communication efficiency for our proposed method. ## 1 Introduction Federated learning (FL) (McMahan et al., 2017) has gained tons of attraction recently with the development of edge computing and edge devices such as IoT devices and smartphones. It enables clients to collaboratively learn a machine learning model by iteratively synchronizing with the central server without sharing their local private data. Standard SGD-based federated learning methods such as FedAvg (McMahan et al., 2017) work by aggregating the local-updated models via stochastic gradient descent. Recently, as the demand for training large-scale models such as BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020), and ViT (Dosovitskiy et al., 2021), adaptive optimizations such as Adam (Kingma & Ba, 2014) and AMSGrad (Reddi et al., 2018) show their efficiency compared to stochastic gradient descent (SGD). This led to the development of adaptive federated learning methods such as FedAdam (Reddi et al., 2020) and FedAMS (Wang et al., 2022b) which take the advantage of efficient iterative synchronization and stable adaptive optimization methods. Despite achieving superior model training performances on tasks with heavy-tail stochastic gradient noise distributions, adaptive federated learning still suffers from the same data heterogeneity issue as standard SGD-based federated learning. Specifically, the *statistical heterogeneity* of data distribution across clients can lead to overfitting issues of local models and degradation of global model convergence for adaptive federated learning. This data heterogeneity issue becomes noticeable, especially in practical cases where clients are not able to participate in each local training iteration due to system heterogeneity such as computational capabilities. Despite that various attempts have been made in solving the data heterogeneity issue for standard federated learning (Karimireddy et al., 2020b;a; Khanduri et al., 2021), few studies are tackling this issue in adaptive federated learning, especially for partial participation case where only selected clients have participated in each round of the training process. In this work, we aim to develop a novel federated learning framework, Adaptive Federated learning with local Gossip Averaging (AFGA), that addresses the challenges of statistical heterogeneity in the adaptive federated learning setting. AFGA introduces a client re-sampling strategy and peer-to-peer gossip communication between clients during local training steps to reduce the dissimilarity between local models, thus tackling the data heterogeneity issue. Note that AFGA does not incur extra communication between the central server and the sampled clients, and it also does not result in extra local gradient computations. Our contributions can be summarized as follows: - We propose a novel adaptive federated optimization method, which benefits from both the client resampling strategy and decentralized gossip averaging, to mitigate the impact of data heterogeneity in adaptive federated optimization methods. - We theoretically provide convergence guarantees of our proposed method in the stochastic nonconvex settings with data heterogeneity under partial client participation cases. Specifically, we prove that our proposed method can achieve a faster convergence rate than FedAMSGrad (Wang et al., 2022b) in partial participation settings. - Moreover, we also extend our framework to a communication-efficient variant, CAFGA, where clients are divided into disjoint clusters and the local gossip communications are only performed within the clusters, thus leading to an overall efficient communication network. We demonstrate that CAFGA can achieve comparable performance and final accuracy to AFGA while simultaneously enhancing overall communication efficiency. - Extensive experiments on several benchmark datasets demonstrate the proposed AFGA and CAFGA achieving outstanding performance with heterogeneous data in low client participation ratios. ## 2 Related Work Federated learning. Federated learning (Konečn`y et al., 2016) play a critical role in collaboratively training models at edge devices with potential privacy protections. Basic optimization methods for federated learning include SGD-based global optimizer, e.g., FedAvg (McMahan et al., 2017) (a.k.a. Local SGD (Stich, 2018) and its variants (Li et al., 2019a; Yang et al., 2021), adaptive gradient optimization based global optimizer such as FedAdam, FedAdagrad, FedYogi (Reddi et al., 2020), FedAGM (Tong et al., 2020) and FedAMS (Wang et al., 2022b). While these optimization methods for federated learning show their ability on achieving stable results when data are heterogeneously distributed, they rarely study data heterogeneity itself. Recently, several works address the data heterogeneity issue through several aspects. For example, FedProx (Li et al., 2020a) adds a proximate term to align the local model with the global one, and FedDyn (Acar et al., 2021) involves dynamic regularization term for local and global model consistency. FedNova (Wang et al., 2020b) proposes a normalized averaging mechanism that reduces objective inconsistency with heterogeneous data. Moreover, several works study to eliminate the client drift caused by data heterogeneity from the aspect of variance reduction such as Karimireddy et al. (2020b;a); Khanduri et al. (2021); Cutkosky & Orabona (2019). They introduce additional control variables to track and correct the local model shift during local training, but they require extra communication costs for synchronizing these control variables. Besides, FedDC (Gao et al., 2022) involves both dynamic regularization terms and local drift variables for model correction. Recent studies extend the decentralized training paradigm to federated learning with various adaption. For example, Guo et al. (2021) considered heterogeneous communications for modern communication networks that improve communication efficiency, and hierarchical federated learning algorithms (Liu et al., 2020; Abad et al., 2020; Castiglia et al., 2020) develop frameworks by aggregating client models to edge servers first before synchronizing them to the central server. 1 1Due to space limitations, we leave the detailed related work for decentralized learning in Appendix. ## 3 Proposed Method 3.1 Preliminaries In federated learning, we aim to minimize the following objective through N local clients: $$\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x}):=\frac{1}{N}\sum_{i=1}^{N}f_{i}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{\mathcal{D}_{i}}[f_{i}(\mathbf{x};\xi_{i})],\tag{1}$$ where x denotes the model parameters, d denotes the dimension of model parameters x, fi(x) = Eξi∼Di fi(x, ξi) is the local loss function corresponding to client i and let Di denotes the local data distribution associated with client i. In this work, we focus on the non-convex optimization problem with heterogeneous data distributions, i.e., fi are non-convex and the local data distribution Di are non-i.i.d. distributed among clients. FedAvg (McMahan et al., 2017) is a basic optimization algorithm to solve equation 1, with the sequential implementation of local SGD updates and global averaging. Adaptive federated optimizations. Adaptive federated learning is proposed to incorporate adaptive optimization methods (such as Adam (Kingma & Ba, 2014) and AMSGrad (Reddi et al., 2018)) to global optimizer by replacing the global averaging step in FedAvg. Reddi et al. (2020) summarizes several adaptive federated learning algorithms, and Jin et al. (2022) proposed FedDA, a momentum decoupling adaptive optimization method from the perspective of the dynamics of ODEs. FAFED (Wu et al., 2022) also studied adaptive federated learning but in the context of full participation. FedAMSGrad (Tong et al., 2020; Wang et al., 2022b) considers local SGD updates and global AMSGrad (Reddi et al., 2018) update on the central server. Specifically, at global round r ∈ [R], the server broadcasts the model xr to selected clients in the set Sr. The selected client i conducts I steps of local SGD updates with local learning rate ηl and obtains the local model x i r,I . Then for the selected client i, it obtains a model difference ∆ir = x i r,I − xr and sends to the server. The server aggregates ∆ir then updates the global model xr+1 by taking ∆r as a pseudo gradient for calculating momentum mr and variance vr for AMSGrad optimizer, and performs one step AMSGrad update with global learning rate η, i.e., $$\mathbf{m}_{r}=\beta_{1}\mathbf{m}_{r-1}+(1-\beta_{1})\Delta_{r},\mathbf{v}_{r}=\beta_{2}\mathbf{v}_{r-1}+(1-\beta_{2})\Delta_{r}^{2},$$ $$\widehat{\mathbf{v}}_{r}=\max\{\widehat{\mathbf{v}}_{r-1},\mathbf{v}_{r}\},\mathbf{x}_{r+1}=\mathbf{x}_{r}+\eta\frac{\mathbf{m}_{r}}{\sqrt{\widehat{\mathbf{v}}_{r}+\epsilon}},\tag{2}$$ the server finally obtains model xr+1 by global round r. It's worth mentioning that if the set of selected clients Sr contains all clients, i.e., |Sr| = N, it is known as full participation or without client sampling, and if |Sr| = *M < N*, we denote it as partial participation or with client sampling. Heterogeneous and inconsistency. Previous works show that FedAvg suffers from convergence degradation when data is non-i.i.d. distributed on local clients (Karimireddy et al., 2020b;a; Wang et al., 2020b). Several works on adaptive federated learning (Reddi et al., 2020; Wang et al., 2022b) empirically demonstrate that the unbalanced data distribution across clients may lead to worse performance, which implies these adaptive federated methods, unfortunately, do not escape from the convergence degradation as well. Theoretically, it has been shown that when the local data are heterogeneously distributed among clients, FedAMS/FedAMSGrad (Wang et al., 2022b) under partial participation setting is proved with convergence rate O( √I/ √RM) w.r.t. global communication rounds R, local update iterations I and the number of participated clients M which has a certain gap between the desired rate O(1/ √RIM). Although several works apply variance reduction techniques to show their ability to reduce the effect of data heterogeneity in federated learning (Karimireddy et al., 2020a;b), they are less compatible with the global adaptive optimizer. This motivates us to develop a new framework for mitigating the model inconsistency caused by data heterogeneity in adaptive federated learning. ## 3.2 Afga: Adaptive Federated Learning With Local Gossip Averaging To reduce the inconsistency between local models and achieve a better convergence rate under heterogeneous data in partial participation settings, we proposed a novel Adaptive Federated Learning with Local Gossip Averaging (AFGA) method with peer-to-peer gossip communication and client re-sampling framework. The peer-to-peer gossip communication implies that clients are able to communicate their local model without help from the server. Suppose there are in total N clients, we study the same objective function as other adaptive federated learning methods with a similar global framework but a different local updating process. Algorithm 1 AFGA: Adaptive Federated Learning with Local Gossip Averaging Input: initial point x1, local step size ηl and global step size η, optimizer hyperparameter β1, β2, ϵ, doubly stochastic mixing matrix W 1: m0 ← 0, v0 ← 0 2: for r = 1 to R do 3: Randomly sample a subset of clients Sr for collecting local updates in round r 4: Clients Init: clients in Sr receive xr from the server and broadcast to all clients with local communications 5: for t = 0, ..., I − 1 do 6: Randomly **re-sample** a subset of clients Sr,t for gradient computation 7: for each client i ∈ [N] in parallel do 8: if i ∈ Sr,t **then** 9: Compute g i r,t = ∇Fi(x i r,t; ξ i r,t) 10: x i r,t′ = x i r,t − ηlg i r,t 11: **else** 12: x i r,t′ = x i r,t 13: **end if** 14: **Gossip:** x i r,t+1 =Pj∈Ni (W)i,jx j r,t′ 15: **end for** 16: **end for** 17: Clients i ∈ Sr send ∆ir = x i r,I − xr to the server 18: Aggregate model updates: ∆r =1 |Sr| Pi∈Sr ∆ir 19: mr = β1mr−1 + (1 − β1)∆r 20: vr = β2vr−1 + (1 − β2)∆2 r 21: vbr = max(vbr−1, vr) and Vbr = diag(vbr + ϵ) 22: Server update xr+1 = xr + η √mr bvr+ϵ 23: **end for** If we take a deeper look at the local update steps (Lines 5-16), the major difference between AFGA and FedAMS/FedAMSGrad is the *re-sample* step (Line 5) and the *gossip communication* step (Line 14), which we will discuss in detail in the following. Client re-sampling. Note that for each global training round, we have already sampled a subset of participating clients Sr. Normally (e.g., in FedAMS/FedAMSGrad), this will be the fixed chosen subset of clients who actually performs local gradient computations throughout this global training round. Yet for AFGA, we perform client re-sampling at each local iteration to obtain a new subset Sr,t and only the selected clients in the subset Sr,t are active for gradient computation in that local iteration, while the other clients will stay idle. Note that such a design does not incur extra local gradient computations as the size of Sr,t is the same as Sr. Gossip communications. After finishing the local gradient computations and model update with respect to the selected subset of clients, AFGA introduces a gossip communication step that allow each client to communicate their model weights with their connected neighbors (the selected subset of clients are connected in a graph G with a corresponding mixing matrix W). This gossip communication step is conducted by all local clients despite being selected in Sr,t or not. While in practice, clients are not required to know the whole mixing matrix W. Instead, client i only needs to maintain the weights corresponding to those it receives from other clients. It's important to note that AFGA does not necessitate the sampling of every client in specific rounds or iterations. In practical cases, if a client is unavailable, then it would not be sampled during local steps. If we remove the re-sampling step by setting Sr,t = St and remove the gossip communication step by setting x i r,t+1 = x j r,t′ , AFGA will reduce to standard FedAMSGrad algorithm. Remark 3.1. Our AFGA algorithm benefits from both gossip communications and client re-sampling while preserving the stable behavior of adaptive gradient methods. The steps of re-sampling in each local iteration help reduce the impact of data heterogeneity. It allows more clients to be included and participate in local training, which results in training a global model with a more balanced data distribution than without resampling. The peer-to-peer gossip communication is inspired by the recent advancement in decentralized optimization (Lian et al., 2017; Koloskova et al., 2020; Chen et al., 2021b), which has shown the ability to reduce the data heterogeneity issues between clients. By involving gossip communications in adaptive federated learning, it enables local models to average with their neighbors, thus preventing over-fitting on local data. The frequent re-sampling and gossip communications make AFGA effectively reduce the inconsistency between local clients, thus accelerating the overall convergence especially when the number of local steps increases. AFGA is also crucial to practical scenarios as it is compatible with low client participation ratios and limited local gradient computation capability while addressing the challenge of statistical heterogeneity in adaptive federated methods. ## 4 Convergence Analysis In this section, we provide the theoretical convergence analysis of the proposed AFGA method. Before starting with the main theoretical results, let us first state the following assumptions based on stochastic optimization and adaptive gradient methods. For vector x and matrix A, we let ∥x∥ = ∥x∥2 and ∥A∥ = ∥A∥F . and ∥A∥2 represents the spectral norm of A. We denote 1 as vector with all elements equal to 1, and I as the identity matrix, with appropriate dimension. Assumption 4.1. Each local objective function is L-smooth, i.e., ∀x, y ∈ R d, fi(x) − fi(y) *− ⟨∇*fi(y), x − y⟩ ≤ L 2 ∥x − y∥ 2, ∀i ∈ [N]. This also implies the L-gradient Lipschitz condition, i.e., ∥∇fi(x) − ∇fi(y)∥ ≤ L∥x − y∥. Assumption 4.2. The stochastic gradient on each client has a bounded local variance, i.e., ∀x ∈ R d, i ∈ [N], there is E-∥∇fi(x, ξ) − ∇fi(x)∥ 2≤ σ 2. Assumption 4.1 to 4.2 are standard assumptions in centralized and federated non-convex stochastic optimization problems (Kingma & Ba, 2014; Li et al., 2019a; Yang et al., 2021; Reddi et al., 2020). Assumption 4.3. Each local objective function fi(x) has G-bounded stochastic gradient on ℓ2, i.e., for all ξ, we have ∥∇fi(x, ξ)∥ ≤ G, ∀i ∈ [N]. Note that Assumption 4.3 is a standard assumption in non-convex *adaptive* optimization problems for under centralized and federated learning settings (Kingma & Ba, 2014; Reddi et al., 2018; Chen et al., 2020; Reddi et al., 2020; Wang et al., 2022a;b; 2024b) Assumption 4.4. The dissimilarity between client's objective function and the global objective function is bounded, i.e., ∀x ∈ R d, there is 1N PN i=1 ∥∇fi(x) − ∇f(x)∥ 2 ≤ σ 2 g . Assumption 4.4 captures the objective dissimilarity between the local and global objectives. Similar data heterogeneity assumption, which considers the variance between local clients, is common in federated learning (Reddi et al., 2020; Yang et al., 2021; Wang et al., 2022b; 2024a) and decentralized learning (Lian et al., 2017; Li et al., 2019b; Koloskova et al., 2020). Assumption 4.5 (Spectral gap). For the gossip communications, clients are connected in the graph G, and the corresponding weighting matrix W is a *doubly stochastic matrix*: W ∈ [0, 1]n×n, W1 = 1, 1 ⊤W = 1 ⊤ and null(I − W) = span(1). We assume the spectral gap ρ of matrix W satisfies: there exists ρ ∈ [0, 1) such that ∥W − 1 n 11⊤∥2 ≤ ρ. Assumption 4.5 is highly related to the gossip communication update process and is usually assumed for decentralized learning framework (Koloskova et al., 2020; Chen et al., 2021b; Guo et al., 2021). For a doubly stochastic matrix W, ∥W − 1 n 11⊤∥2 ≤ 1 naturally established 2, and the spectral gap ρ ∈ [0, 1) describes the connectivity of the clients: a smaller spectral gap ρ indicates a denser connectivity between clients. Specifically, ρ = 0 indicates that all elements in the matrix W are 1n , and this means that each client would be connected and communicated with other clients in the network with a mixing weight of 1n . ρ → 1 means the matrix W tends to be elements with either 0 or 1, corresponding to a graph that is nearly disconnected. We assume that there exists ρ ∈ [0, 1) to satisfy ∥W − 1 n 11⊤∥2 ≤ ρ since our proposed method is contributed by gossip communications between clients. Several works (Lian et al., 2017; Li et al., 2019b) alternatively assume the spectral gap ρ of a weighting matrix W as the second largest eigenvalue of a doubly stochastic matrix W, i.e., ρ = |λ2(W)|, and this spectral gap holds the same role for revealing the connectivity of the graph. Theorem 4.6 (Convergence analysis for Algorithm 1). *Under Assumptions 4.1-4.5, if the local learning rate* η = Θ(NpI/M) and ηl = Θ(1/ √RI 2)*, and the network spectral gap satisfies* ρ ≤M M+N , then the iterates of Algorithm 1 satisfy $$\frac{1}{R}\sum_{r=1}^{R}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]$$ $$=\mathcal{O}\bigg{(}\sqrt{\frac{1}{R^{M}M}}\bigg{[}\Delta f+L(\sigma^{2}+\sigma_{\theta}^{2})\bigg{]}\bigg{)}+\mathcal{O}\bigg{(}\frac{1}{R}\bigg{[}G^{2}+L^{2}(1+\rho^{2})\sigma_{\theta}^{2}+\frac{\rho^{2}\sigma^{2}}{T}\bigg{]}\bigg{)}+\widetilde{\mathcal{O}}\bigg{(}\frac{1}{R^{N/2}}\bigg{)},\tag{3}$$ where $\Delta f=f_{0}-f_{*}$, $\widetilde{\mathcal{O}}(\cdot)$_hides all the absolute constants and problem dependent constants including $\rho,\sigma,\sigma_{\theta}^{2},\mathcal{I},M,N,G$._ Corollary 4.7. Theorem 4.6 suggests that with sufficient amounts of global communication rounds R*, i.e.,* R ≥ IM*, our proposed method achieves a convergence rate of* O√ 1 RIM *. This improves the rate* O √ √ I RM of adaptive federated optimization methods under partial client participation (Wang et al., 2022b), which suggests that AFGA can indeed bring accelerated convergence through local gossip communications. Remark 4.8. The sufficient amounts of global communication rounds that R ≥ IM is a commonly-used condition to obtain the convergence rate in FL (Wang et al., 2022b; Yang et al., 2021). This condition is also practical as the algorithm usually converges after sufficient global rounds. Remark 4.9. The impact of data heterogeneity is reflected in the variance σg within the convergence rate. From equation 3, the variance σg appears in O√ 1 RIM and smaller order terms w.r.t. R, I, and M for AFGA. The partially participated FedAvg in Yang et al. (2021) and adaptive FL models like FedAMS (Wang et al., 2022b) obtain the rate of O √ √ I RM , and the dominant O √ √ I RM term directly relates to the variance σg. This demonstrates improvements over partially participated adaptive FL models like FedAMS (Wang et al., 2022b). Remark 4.10. The second term of the convergence rate in equation 3 contains terms with spectral gap ρ of the gossip communication network. A larger value of ρ corresponds to a sparser network, which results in a larger variance term in the convergence rate, indicating that the dissimilarity variance has not been completely eliminated. ## 5 Communication-Efficiency: Clustered-Clients Afga And Further Adaptation CAFGA. While the frequent gossip communications enhance the overall performance of heterogeneous federated learning, it indeed involves extra peer-to-peer communication overhead, which makes our proposed AFGA less efficient in communication especially when clients are densely connected. Recent studies (Guo et al., 2021; Yuan et al., 2020) show that clients can be gathered into neighboring clusters based on locations or network capabilities, in which gossip communications are less expensive than communicating with the whole network. A similar idea of dividing clients into clusters has recently been studied in federated learning (Guo et al., 2021; Malinovsky et al., 2022; Long et al., 2022) and receives a lot of attention. Note that under a cluster-clients design, part of the network clients are grouped in a cluster, and clients within the 2Theoretical analysis is provided in the Appendix C. Algorithm 2 CAFGA: Clustered-Client Adaptive Federated Learning with Local Gossip Averaging Input: initial point x1, local step size ηl and global step size η, , optimizer hyperparameter β1, β2, ϵ, doubly stochastic mixing matrix W 1: m0 ← 0, v0 ← 0 2: for r = 1 to R do 3: for each cluster k ∈ [K] in parallel do 4: Randomly sample a subset S k r for collecting local updates in round r 5: Init: clients in S k r receive xr from the server and broadcast xr to all local neighbors 6: for t = 0, ..., I − 1 do 7: Randomly **re-sample** a subset of clients S k r,t for gradient computation 8: for each client i ∈ Vk in parallel do 9: if i ∈ Sk r,t **then** 10: Compute g i r,t = ∇Fi(x i r,t; ξ i r,t) 11: x i r,t′ = x i r,t − ηlg i r,t 12: **else** 13: x i r,t′ = x i r,t 14: **end if** 15: **Gossip** Comm: x i r,t+1 =Pj∈Nik (W)i,jx j r,t′ 16: **end for** 17: **end for** 18: Clients i ∈ Sk r send ∆ir = x i r,I − xr to the server 19: **end for** 20: Aggregate: ∆r = 1 K Pk∈[K]1 |Sk r | Pi∈Sk r ∆ir 21: Server update follows Lines 19-22, Algorithm 1 22: **end for** cluster can be connected through high-bandwidth peer-to-peer communications, leading to an efficient gossip communication network and a relatively smaller spectral gap. This leverages the communication efficiency for applying gossip communications while maintaining comparable performance for our proposed AFGA. Suppose there are still in total N clients, we study the same objective as Eq. equation 1 but we partition the clients into K disjoint clusters where each of them has n clients (N = Kn) 3. We denote Vk as the set of local clients in cluster *k, k* ∈ [K] and denote the neighbors for client i ∈ Vk as N i k . Similar to Algorithm 1, we denote the weighted matrix of gossip averaging as Wk, and the corresponding spectral gap ρk. We then refer ρmax as the maximum spectral gap among all clusters to represent the overall density in the network. Algorithm 2 summarizes the proposed Clustered-clients paradigm **AFGA** (CAFGA). At the beginning of global round r, the server sample total M clients (for convenience, uniformly sampled m clients in each cluster) for global synchronization. The update rule *inside each cluster* follows the similar local update rule as Algorithm 1 with clients *re-sampling* and *gossip communications* in each local iteration, all clusters perform the training process parallelly. To be specific, at the t-th local iteration in cluster k, clients in the re-sampled subset S k r,t are active for gradient computation, while the unselcted clients stay idle. All clients in cluster k then perform a gossip communication step with mixing matrix Wk. The leftover global update process of the clustered-client framework is the same as Algorithm 1 and FedAMS. We also provide a complete theoretical convergence analysis for the Clustered-clients paradigm of AFGA (CAFGA); due to space limits, we referred interested readers to Appendix D for more details. In a nutshell, our theoretical analysis suggests that the convergence of CAFGA is related to ρmax which aligns with the convergence rate of Algorithm 1, and implies that more densely connected gossip communications can help reduce the impact of data heterogeneity. Empirically we observe that under the same gossip communication structure (e.g., ring topology), the clustered-clients paradigm obtains performance improvement since 3We omit the clustering process in the algorithm for simplicity. The algorithm is compatible with various clustering methods, including clustering based on locations and network conditions, clustering based on client similarities, and random clustering. grouping the whole network into disjoint clusters making the local clients more densely connected thus have smaller ρ values. The clustered-clients paradigm enables efficient and dense connections for adequate model averaging, which keeps the benefits of further mitigating the effects of data heterogeneity. Communication-adapted: reduce the communication frequency. Despite AFGA achieves a faster convergence rate, one noticeable drawback is that it requires all clients to stay online to conduct frequent gossip averaging, even if some of the clients do not participate in local training (gradient computation). We want to emphasize that this design is mainly for the ease of theoretical analysis. In practice, we can avoid this issue by enforcing gossip communications only on selected clients in each round4. As shown in the next Section, such a communication-adapted AFGA actually enjoys similar model training performances compared to the original AFGA algorithm without requiring dense gossip communications for all clients. Also, note that this communication-adapted can also be applied here to CAFGA for further improving communication efficiency while achieving similar model performances. ## 6 Experiments Datasets and models. We conduct experiments on CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009) and Shakespeare (Caldas et al., 2018) dataset with various data sampling levels and client participation settings. We evaluate experiments on non-i.i.d. data distributions by a Dirichlet distribution partitioned strategy with parameter α = 0.6 similar to Wang et al. (2020a;b). For image classification tasks on CIFAR-10 and CIFAR-100 datasets, we adopt a ConvMixer-256-8 network (Trockman & Kolter, 2022), which shares similar ideas to vision transformer (Dosovitskiy et al., 2021) to use patch embeddings to preserve locality and similarly, and is trained via adaptive gradient methods by default. For the next-character prediction task on Shakespeare, we adopt a 2-layer LSTM network, with 80-dimensional word embedding and 256 hidden units per layer, and follow a dropout layer with dropout rate 0.05. Baselines and methods. We compare our method with several federated learning and adaptive federated learning baselines including FedAvg (McMahan et al., 2017), FedAdam (Reddi et al., 2020), FedAMSGrad(Wang et al., 2022b)5, SCAFFOLD (Karimireddy et al., 2020b) FedProx (Li et al., 2020b) and FedDyn (Acar et al., 2021). Implementation overview. The number of local training iterations I on each client is set to 24 for experiments on CIFAR-10 and CIFAR-100 datasets, and I = 100 for experiments on the Shakespeare dataset, and the batch size is set to 50 for all experiments by default. We report 500 rounds (denoted as \#R or \# Rounds in tables and figures) for the CIFAR 10 and the Shakespeare datasets and 600 rounds for the CIFAR-100 dataset. For each dataset and setting, the number of local training iterations I and the total rounds of training \#R are fixed across all baseline methods to ensure a fair comparison. For local update, we use the SGD optimizer with a learning rate from {0.1, 1} for SGD-based global optimization methods (FedAvg, SCAFFOLD, FedProx, and FedDyn), and use SGD optimizer with a learning rate from {1,2,10} for adaptive global optimization methods. For a fair comparison, the local SGD updates apply no momentum and no gradient clipping steps for all methods. We set the global learning rate as 1 for SGD-based global update, and set the global learning rate as 0.01 for global adaptive optimization, FedAdam, FedAMSGrad, and our proposed AFGA. See Appendix B for more details about the experimental setup including datasets, models, and hyperparameter details. ## 6.1 Main Results We summarize the performance of our proposed methods and other federated learning baselines in Table 1, Table 2 and 3. Due to space limits, we leave the learning curves and most ablation studies in Appendix B. Our experiments based on two settings, Setting 1: 100 clients with 5% participation ratio and Setting 2: 50 clients with 10% participation ratio. For the Clustered-clients AGFA (CAFGA), we evenly partition clients into 5 clusters for both settings, i.e., for Setting 1: there are 20 clients in each cluster with participate 4For example, suppose we train AFGA with a ring topology and select M out of N clients to participate in each round. We can form a new ring topology over the M selected clients and only ask them to communicate over the new ring topology. In this way, the gossip communications only include these active clients while other unselected clients do not need to stay online and participate in the training process. 5FedAMSGrad is one of the variants of the FedAMS algorithm introduced in Wang et al. (2022b). Table 1: The test accuracy of different methods on CIFAR-10 datasets. Setting 1: 100 clients, 5% participation. Setting 2: 50 clients, 10% participation. We report the average and the standard derivation over 3 runs with different random seeds. | Method | Setting 1 | Setting 2 | | | |------------|--------------|-------------|--------------|----------| | Acc. & std | | R# (78%) | Acc.& std | R# (78%) | | FedAvg | 75.57 ± 1.10 | 313 | 76.85 ± 1.69 | 180 | | FedAdam | 77.07 ± 0.05 | 425 | 78.46 ± 0.19 | 157 | | FedAMSGrad | 77.53 ± 0.60 | 388 | 79.59 ± 0.76 | 154 | | SCAFFOLD | 76.94 ± 1.17 | 273 | 76.46 ± 3.95 | 146 | | FedProx | 75.63 ± 1.24 | 500+ | 76.91 ± 1.39 | 180 | | FedDyn | 77.68 ± 0.06 | 297 | 78.55 ± 0.36 | 160 | | AFGA | 78.45 ± 0.58 | 302 | 80.02 ± 2.00 | 152 | | CAFGA | 79.18 ± 1.02 | 233 | 82.10 ± 0.67 | 112 | | over 3 runs with different random seeds. Method Setting 1 | Setting 2 | | | | |-------------------------------------------------------------|--------------|-----------|--------------|------| | Acc. & std | R# (52%) | Acc.& std | R# (52%) | | | FedAvg | 49.88 ± 0.33 | 600+ | 51.21 ± 0.23 | 600+ | | FedAdam | 50.40 ± 0.53 | 600+ | 51.99 ± 0.81 | 375 | | FedAMSGrad | 50.58 ± 0.99 | 600+ | 52.15 ± 0.58 | 278 | | SCAFFOLD | 51.71 ± 0.21 | 396 | 55.12 ± 1.01 | 235 | | FedProx | 49.75 ± 0.13 | 600+ | 51.28 ± 0.10 | 600+ | | FedDyn | 49.93 ± 0.22 | 396 | 51.93 ± 0.52 | 600+ | | AFGA | 52.08 ± 0.17 | 436 | 53.87 ± 0.68 | 369 | | CAFGA | 53.08 ± 0.72 | 380 | 54.41 ± 0.24 | 230 | Table 2: The test accuracy of different methods on CIFAR-100 datasets. Setting 1: 100 clients, 5% participation. Setting 2: 50 clients, 10% participation. We report the average and the standard derivation over 3 runs with different random seeds. ratio 5%, and for Setting 2: there are 10 clients in each cluster with participate ratio 10%. We set ring topology as the default gossip communication topology. Results on CIFAR-10 and CIFAR-100. Table 1 and Table 2 show the overall performance on training CIFAR-10 and CIFAR-100 datasets with ConvMixer-256-8 model. We observe that AFGA shows improvement upon other baselines, and the proposed CAFGA achieves better performance than AFGA. For Setting 1 on CIFAR-10, based on the results on three random seeds, AFGA shows an average 1.5% improvement in accuracy compared to SCAFFOLD, nearly 0.8% improvement compared to FedDyn, and nearly 1% improvement compared to FedAMSGrad, The proposed CAFGA, our extension on clusteredclients setting, further improves 0.7% accuracy based over AFGA. For Setting 2 on CIFAR-10, CAFGA demonstrated around 3.5% increase in accuracy over FedDyn and 2% increase over FedAMSGrad. Note that in both settings, AFGA and CAFGA show their superior performance in achieving desired test accuracy. This demonstrates our proposed AFGA and CAFGA achieve overall better performance than adaptive federated learning methods and other federated learning baselines in both settings. Table 2 shows that in experiments on CIFAR-100, for Setting 1, AFGA and CAFGA obtain higher test accuracy among other baselines including SCAFFOLD and FedDyn. Specifically, CAFGA significantly outperforms all baselines with more than 1.3% increase over SCAFFOLD and more than 2% increase over FedAdam and FedAMSGrad. For Setting 2, our proposed AFGA and CAFGA still outperform other federated learning baselines expect for SCAFFOLD. 9 Table 3: The test accuracy of different methods on Shakespeare datasets. Setting 1: 100 clients, 5% participation. Setting 2: 50 clients, 10% participation. We report the average and the standard derivation over 3 runs with different random seeds. | Method | Setting 1 | Setting 2 | | | |------------|--------------|-------------|--------------|----------| | | Acc. | R#(52%) | Acc. | R# (52%) | | FedAvg | 48.66 ± 0.01 | 500+ | 49.59 ± 0.56 | 500+ | | FedAdam | 52.35 ± 0.10 | 391 | 52.23 ± 0.08 | 189 | | FedAMSGrad | 52.09 ± 0.23 | 239 | 51.96 ± 0.06 | 220 | | SCAFFOLD | 51.39 ± 0.02 | 500+ | 52.76 ± 0.06 | 166 | | FedProx | 48.55 ± 0.01 | 252 | 49.33 ± 0.48 | 229 | | FedDyn | 51.87 ± 0.03 | 500+ | 50.81 ± 0.08 | 500+ | | AFGA | 53.08 ± 0.08 | 121 | 53.13 ± 0.04 | 108 | | CAFGA | 53.20 ± 0.05 | 152 | 53.57 ± 0.02 | 91 | Results on Shakespeare. Table 3 shows the overall performance of training the Shakespeare dataset with a 2-layer LSTM network. For Setting 1 on Shakespeare, AFGA shows approximately 2.5% improvement in accuracy compared to SCAFFOLD, and 0.5% improvement compared to FedAMSGrad. The proposed CAFGA achieves even higher final accuracy than AFGA and significantly outperforms other baselines. For Setting 2 on Shakespeare, AFGA addresses increasing in accuracy over FedAMSGrad, while CAFGA outperforms AFGA in final results. In both settings, AFGA or CAFGA show their superior performance in achieving desired test accuracy. ## 6.2 Ablation Studies Sensitivity of gossip averaging and client re-sampling. We conduct experiments studying how the individual components, gossip averaging and re-sampling, and the clustered-clients framework contribute to the proposed AFGA and CAFGA. Table 4 presents the ablation study of the contribution of individual components, which indicates that the gossip averaging and client re-sampling simultaneously contribute to the accuracy improvements of AFGA. Furthermore, by the results from Table 4 (also with the observation of the learning curves in Appendix B), it shows that the clustered-clients paradigm further improves overall accuracy. These results show our intuition of utilizing gossip averaging and client re-sampling can effectively mitigate data heterogeneity, and also address the benefit of the clustered-client framework that consistently helps improve the performance. In addition to the aforementioned ablation studies, we have also conducted further ablation studies to investigate the effect of data heterogeneity, examine different gossip averaging topologies to understand the impact of the spectral gap on model performance, and explore the effects of varying the number of local iterations. Due to constraints on space, we provide detailed ablation studies and results in Appendix. Table 4: Ablation of components at the last 5 rounds (in total 500 rounds) in training CIFAR-10 on ConvMixer-256-8 model. Methods (FedAMSGrad) Acc. FedAMSGrad Only 79.59 ± 0.76 +Gossip 77.39 ± 0.01 +Gossip + Re-sampling (**AFGA**) 80.02 ± 2.00 +Gossip + Re-sampling + Clustered (CAFGA) **82.10** ± 0.67 Ablation of gossip averaging topology. We also conduct ablation studies on how the gossip averaging topology affects the overall performance in (C)AFGA. Figure 1 shows the ablation study on (a) spectral gap in AFGA and (b) clusters' maximum spectral gap ρ for 5 clusters in CAFGA. ![10_image_0.png](10_image_0.png) Figure 1: Ablation study with different heterogeneity degree of (C)AFGA in training CIFAR-10 on ConvMixer-256-8 model. We follow the Setting 2 in Table 1, i.e., 50 clients with a participation ratio of 0.1. For AFGA, we compare various of ρ from ρ = {0, 0.455, 0.995} calculated by balanced fully-connected, random, and ring typologies correspondingly. We observe that the balanced fully-connected topology with ρ = 0 contributes to faster convergence, which aligns with the theoretical result that smaller ρ can help reduce the impact of data heterogeneity. Similar to CAFGA, we compare various of ρmax from ρmax = {0, 0.335, 0.766, 0.873} calculated by balanced fully-connected, unbalanced fully-connected 6, random, and ring typologies correspondingly. It shows that the fully-connected topology (relatively small ρmax value) results in faster convergence as well. Ablation on data heterogeneity. We further conduct experiments to investigate the impact of data ![10_image_1.png](10_image_1.png) Figure 2: Ablation study with different heterogeneity degree of AFGA and CAFGA in training CIFAR-10 on ConvMixer-256-8 model. heterogeneity as theoretically the proposed (C)AFGA show that the convergence rate is highly related to the model dissimilarity. We use Dirichlet(α) distribution for data partitioned in experiments, where α represents the degree of heterogeneity (smaller α implies more heterogeneous data distribution), and we choose α from {0.3, 0.6} together with the i.i.d. data partitioned setting for ablation study. The rest of the experimental setup is the same as Setting 2 in Table 1. Figure 2 shows the learning curves for different non-i.i.d. degrees. We observe that the data heterogeneity across clients still significantly affects the convergence and generalization performance for our proposed (C)AFGA, as a more balanced data distribution attains faster convergence and higher accuracy. ## 6.3 Communication-Adapted Afga And Cafga Table 5 shows the test accuracy and the total global round to reach target test accuracy of communicationadapted AFGA and communication-adapted CAFGA which we have discussed in the previous Section. We can observe that both communication-adapted methods achieve similar test accuracy compared with their original version. This suggests that in practice we can still solve the data heterogeneity issue without requiring all clients to participate in gossip communications. Due to space limitations, we left additional results of the CIFAR-100 dataset in Appendix B. 6This means that clients in the cluster are connected with all their neighbors but with random weighted averaging elements. Table 5: The test accuracy (Acc.) and the total global rounds (R\#) to reach 78% test accuracy of different methods when training ConvMixer-256-8 model on CIFAR-10 dataset, where (a) denotes the communicationadaptive version. Setting 1: 100 clients, 5% participation. Setting 2: 50 clients, 10% participation. Setting 3: 100 clients, 10% participation. Setting 4: 50 clients, 20% participation. To mitigate the effect of randomness and fluctuation of the accuracy, we take the average of the last 5 global rounds to represent final accuracy. | Method | Setting 1 | Setting 2 | | | |-----------|-------------|-------------|-------|-----| | | Acc. | R# | Acc. | R# | | AFGA | 78.80 | 302 | 82.61 | 152 | | AFGA (a) | 76.59 | 419 | 81.51 | 259 | | Method | Setting 3 | Setting 4 | | | | | Acc. | R# | Acc. | R# | | CAFGA | 81.56 | 142 | 83.15 | 69 | | CAFGA (a) | 81.15 | 194 | 82.99 | 136 | ## 7 Conclusions And Future Works In this paper, we propose a novel adaptive federated optimization algorithm, AFGA, that addresses data heterogeneity across clients and mitigates local model inconsistency by introducing gossip communications and client re-sampling during local training steps. We present a completed theoretical convergence analysis for the proposed AFGA. We prove that AFGA achieves a faster convergence rate than the previous adaptive federated optimization method for partial participation scenarios with heterogeneous data under non-convex stochastic settings. We extend AFGA to a more communication-efficient clustered-clients paradigm, where clients are divided into disjoint clusters and we only perform local gossip averaging within the clusters. The extended CAFGA algorithm is aimed at reducing the communication overhead introduced by gossip communications while maintaining the benefits of client re-sampling and gossip communications under heterogeneous data. Experiments on several benchmarks and ablation studies backup our theory. Despite successfully tackling the data heterogeneity issue among clients by introducing gossip communications and client re-sampling, our current proposed methods also have certain limitations. First, extending the theoretical analysis to the communication-adapted versions is challenging and highly non-trivial. Moreover, gossip communications also incur extra challenges if attempting to further apply secure aggregation schemes to our method. Also if not all clients are trusted and there exist malicious clients, the frequent gossip communications between clients may increase the risks of model poisoning or privacy attacks. We leave those new challenges as future works. ## Acknowledgments We thank the anonymous reviewers for their helpful comments. This work is partially supported by the National Science Foundation under Grant No. 2348541. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies. ## References Mehdi Salehi Heydar Abad, Emre Ozfatura, Deniz Gunduz, and Ozgur Ercetin. Hierarchical federated learning across heterogeneous cellular networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8866–8870. IEEE, 2020. Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In *International Conference on Learning* Representations, 2021. URL https://openreview.net/forum?id=B7v4QMR6Z9w. Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. IEEE transactions on information theory, 52(6):2508–2530, 2006. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečn`y, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. *arXiv preprint* arXiv:1812.01097, 2018. Timothy Castiglia, Anirban Das, and Stacy Patterson. Multi-level local sgd: Distributed sgd for heterogeneous hierarchical networks. In *International Conference on Learning Representations*, 2020. Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu. Closing the generalization gap of adaptive gradient methods in training deep neural networks. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2020. Xiangyi Chen, Belhal Karimi, Weijie Zhao, and Ping Li. On the convergence of decentralized adaptive gradient methods. *arXiv preprint arXiv:2109.03194*, 2021a. Yiming Chen, Kun Yuan, Yingya Zhang, Pan Pan, Yinghui Xu, and Wotao Yin. Accelerating gossip sgd with periodic global averaging. In *International Conference on Machine Learning*, pp. 1791–1802. PMLR, 2021b. Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex sgd. *Advances* in neural information processing systems, 32, 2019. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 10112–10121, 2022. Yuanxiong Guo, Ying Sun, Rui Hu, and Yanmin Gong. Hybrid local sgd for federated learning with heterogeneous communications. In *International Conference on Learning Representations*, 2021. Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, and Dejing Dou. Accelerated federated learning with decoupled adaptive optimization. In *International Conference on Machine Learning*, pp. 10298–10322. PMLR, 2022. Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in federated learning. arXiv preprint arXiv:2008.03606, 2020a. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132–5143. PMLR, 2020b. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, and Pramod Varshney. Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning. *Advances in Neural Information Processing Systems*, 34: 6050–6061, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized sgd with changing topology and local updates. In *International Conference on Machine* Learning, pp. 5381–5393. PMLR, 2020. Jakub Konečn`y, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020a. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Proceedings of Machine Learning and Systems*, 2:429–450, 2020b. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. *arXiv preprint arXiv:1907.02189*, 2019a. Xiang Li, Wenhao Yang, Shusen Wang, and Zhihua Zhang. Communication-efficient local decentralized sgd methods. *arXiv preprint arXiv:1910.09126*, 2019b. Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. Advances in Neural Information Processing Systems, 30, 2017. Lumin Liu, Jun Zhang, SH Song, and Khaled B Letaief. Client-edge-cloud hierarchical federated learning. In *ICC 2020-2020 IEEE International Conference on Communications (ICC)*, pp. 1–6. IEEE, 2020. Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, and Jing Jiang. Multi-center federated learning: clients clustering for better personalization. *World Wide Web*, pp. 1–20, 2022. Yucheng Lu and Christopher De Sa. Optimal complexity in decentralized training. In *International Conference on Machine Learning*, pp. 7111–7123. PMLR, 2021. Grigory Malinovsky, Kai Yi, and Peter Richtárik. Variance reduced proxskip: Algorithm, theory and application to federated learning. *arXiv preprint arXiv:2207.04338*, 2022. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273–1282. PMLR, 2017. Parvin Nazari, Davoud Ataee Tarzanagh, and George Michailidis. Dadam: A consensus-based distributed adaptive gradient method for online optimization. *arXiv preprint arXiv:1901.09109*, 2019. Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečn`y, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. *arXiv preprint arXiv:2003.00295*, 2020. Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In *International* Conference on Learning Representations, 2018. Sebastian U Stich. Local sgd converges fast and communicates little. *arXiv preprint arXiv:1805.09767*, 2018. Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, and Ji Liu. d 2: Decentralized training over decentralized data. In *International Conference on Machine Learning*, pp. 4848–4856. PMLR, 2018. Yunfei Teng, Wenbo Gao, Francois Chalus, Anna E Choromanska, Donald Goldfarb, and Adrian Weller. Leader stochastic gradient descent for distributed training of deep learning models. *Advances in Neural* Information Processing Systems, 32, 2019. Qianqian Tong, Guannan Liang, and Jinbo Bi. Effective federated adaptive gradient methods with non-iid decentralized data. *arXiv preprint arXiv:2009.06557*, 2020. Asher Trockman and J Zico Kolter. Patches are all you need? *arXiv preprint arXiv:2201.09792*, 2022. John Nikolas Tsitsiklis. Problems in decentralized decision making and computation. Technical report, Massachusetts Inst of Tech Cambridge Lab for Information and Decision Systems, 1984. Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. Federated learning with matched averaging. In *International Conference on Learning Representations*, 2020a. URL https://openreview.net/forum?id=BkluqlSFDS. Jianyu Wang and Gauri Joshi. Cooperative sgd: A unified framework for the design and analysis of localupdate sgd algorithms. *Journal of Machine Learning Research*, 22(213):1–50, 2021. URL http://jmlr. org/papers/v22/20-147.html. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. *arXiv preprint arXiv:2007.07481*, 2020b. Yujia Wang, Lu Lin, and Jinghui Chen. Communication-compressed adaptive gradient method for distributed nonconvex optimization. In *International Conference on Artificial Intelligence and Statistics*, pp. 6292– 6320. PMLR, 2022a. Yujia Wang, Lu Lin, and Jinghui Chen. Communication-efficient adaptive federated learning. In *Proceedings* of the 39th International Conference on Machine Learning, pp. 22802–22838. PMLR, 2022b. Yujia Wang, Yuanpu Cao, Jingcheng Wu, Ruoyu Chen, and Jinghui Chen. Tackling the data heterogeneity in asynchronous federated learning with cached update calibration. In The Twelfth International Conference on Learning Representations, 2024a. URL https://openreview.net/forum?id=4aywmeb97I. Yujia Wang, Shiqiang Wang, Songtao Lu, and Jinghui Chen. Fadas: Towards federated adaptive asynchronous optimization. *arXiv preprint arXiv:2407.18365*, 2024b. Xidong Wu, Feihu Huang, Zhengmian Hu, and Heng Huang. Faster adaptive federated learning. *arXiv* preprint arXiv:2212.00974, 2022. Haibo Yang, Minghong Fang, and Jia Liu. Achieving linear speedup with partial worker participation in non-iid federated learning. *arXiv preprint arXiv:2101.11203*, 2021. Jinliang Yuan, Mengwei Xu, Xiao Ma, Ao Zhou, Xuanzhe Liu, and Shangguang Wang. Hierarchical federated learning through lan-wan orchestration. *arXiv preprint arXiv:2010.11612*, 2020. Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. *arXiv preprint arXiv:1808.05671*, 2018. ## A Related Work Federated learning. Federated learning (Konečn`y et al., 2016) play a critical role in collaboratively training models at edge devices with potential privacy protections. Basic optimization methods for federated learning include SGD-based global optimizer, e.g., FedAvg (McMahan et al., 2017) (a.k.a. Local SGD (Stich, 2018) and its variants (Li et al., 2019a; Yang et al., 2021), adaptive gradient optimization based global optimizer such as FedAdam, FedAdagrad, FedYogi (Reddi et al., 2020), FedAGM (Tong et al., 2020) and FedAMSGrad (Wang et al., 2022b). While these optimization methods for federated learning show their ability on achieving stable results when data are heterogeneously distributed, they rarely study data heterogeneity itself. Recently, several works address the data heterogeneity issue through several aspects. For example, FedProx (Li et al., 2020a) adds a proximate term to align the local model with the global one, and FedDyn (Acar et al., 2021) involves dynamic regularization term for local and global model consistency. FedNova (Wang et al., 2020b) proposes a normalized averaging mechanism that reduces objective inconsistency with heterogeneous data. Moreover, several works study to eliminate the client drift caused by data heterogeneity from the aspect of variance reduction such as (Karimireddy et al., 2020b;a; Khanduri et al., 2021; Cutkosky & Orabona, 2019). They introduce additional control variables to track and correct the local model shift during local training, but they require extra communication costs for synchronizing these control variables. Besides, FedDC (Gao et al., 2022) involves both dynamic regularization terms and local drift variables for model correction. Decentralized learning and beyond. Decentralized learning studies a distributed machine learning paradigm without a central server. It can been tracked back from gossip averaging techniques (Tsitsiklis, 1984; Boyd et al., 2006). Decentralized (gossip) SGD algorithms (Lian et al., 2017; Li et al., 2019b; Boyd et al., 2006; Tang et al., 2018) are then proposed that consider client-to-client communications after each step of SGD update on the client for decentralized learning. Lu & De Sa (2021) proves a tight lower bound for decentralized training under the non-convex setting. Teng et al. (2019) proposes a leader-distributed SGD algorithm that pulls workers to the currently best-performing model among all models. There are recent studies generalized various distributed SGD algorithms under unified frameworks, where Wang & Joshi (2021) included reducing communication costs and decentralized training in i.i.d. settings, and Koloskova et al. (2020) studied a general network topology-changing gossip SGD methods that summarize several algorithms in distributed and federated learning. Recent studies extend the decentralized training paradigm to federated learning with various adaption. For example, Guo et al. (2021) considered heterogeneous communications for modern communication networks that improve communication efficiency, and hierarchical federated learning algorithms (Liu et al., 2020; Abad et al., 2020; Castiglia et al., 2020) develop frameworks by aggregating client models to edge servers first before synchronizing them to the central server. ## B Additional Experiments In this section, we present additional empirical results for our proposed algorithm AFGA and CAFGA in training ConvMixer-256-8 model (Trockman & Kolter, 2022) on CIFAR-10/100 (Krizhevsky et al., 2009) datasets, and in training LSTM model on Shakespeare (Caldas et al., 2018) dataset. All experiments in this paper are conducted on 4 NVIDIA RTX A6000 GPUs. ## B.1 Additional Experimental Results Additional Experimental Results on CIFAR-10. Figure 3 shows the overall test accuracy curves of experiments on CIFAR-10. It demonstrates that our proposed AFGA and CAFGA achieve overall better performance than adaptive federated learning methods and other federated learning baselines in both settings. Additional Experimental Results on training ResNet-18 on CIFAR-10. Table 6 presents the empirical result for our proposed AFGA and CAFGA together with several federated learning baselines on training CIFAR-10 with ResNet-18 model. It shows that the proposed CAFGA outperforms other federated learning baselines, achieving a 0.4% improvement over FedAMSGrad and an enhancement of more than 1% compared to other baselines. Additional Experimental Results on CIFAR-100. Figure 4 shows the empirical result for our proposed AFGA and CAFGA together with several federated learning baselines on training CIFAR-100 ConvMixer256-8 model. They demonstrate that our proposed AFGA and CAFGA achieve overall better performance than adaptive federated learning methods and other federated learning baselines in both settings. ![16_image_0.png](16_image_0.png) Figure 3: The test accuracy for AFGA and CAFGA with several federated learning baselines in training CIFAR-10 data on ConvMixer-256-8 model. Table 6: The test accuracy of training ResNet-18 model on CIFAR-10 datasets considering 50 clients and 10% participation ratio. Method Acc. & std. FedAvg 70.32 ± 0.44 FedAdam 73.80 ± 0.58 FedAMSGrad 75.59 ± 0.73 SCAFFOLD 74.60 ± 0.67 FedProx 70.26 ± 0.45 FedDyn 74.15 ± 2.23 AFGA 74.72 ± 0.32 CAFGA **75.96**± 0.67 ## B.1.1 Ablation Studies And Other Comparisons Ablation of local iteration. We further study how the local iteration affects the convergence of our proposed CAFGA algorithm. Figure 6 shows the ablation study about local iterations, we compare the number of local iterations I from I = {12, 24, 48}. We observe that larger I indeed helps accelerate convergence on training loss and helps to obtain a higher test accuracy. This result backs up our theory that the increasing number of local steps would help the overall performance. Communication run-time simulations. Table 7 presents a simulation study as a substitution of the realworld measurement similar to Guo et al. (2021). Consider a limited bandwidth setting where the average time of client-to-client communication cost is 1.8 seconds, and the average time of client-to-server communication cost is 18 seconds. Table 7 suggests that even when considering client-to-client communication costs, our proposed AFGA and CAFGA can still efficiently achieve high accuracy with less overall communication costs. This implies that though our proposed methods incur extra local gossip communications, it helps mitigate the impact of data heterogeneity thus improve the overall performance. Table 7: The communication time under for CIFAR-10. Setting 1: 100 clients, 5% participation ratio. | Test Accuracy | 70% | 75% | 78% | 80% | |---------------------|-------|--------|--------|--------| | FedAMSGrad time (h) | 76.0 | 128.0 | 194.0 | 281.0 | | AFGA time (h) | 91.76 | 130.98 | 223.48 | 250.86 | | CAFGA time (h) | 76.22 | 102.86 | 172.42 | 179.82 | FedAMSGrad time (h) **76.0** 128.0 194.0 281.0 AFGA time (h) 91.76 130.98 223.48 250.86 CAFGA time (h) 76.22 **102.86 172.42 179.82** Comparisons to Decentralized Methods. We have briefly discussed decentralized learning in the related work in the main paper, here we provide more discussion about our proposed methods and the decentralized algorithms. Decentralized learning can certainly prevent single point failure without a central ![17_image_0.png](17_image_0.png) Figure 4: The test accuracy for AFGA and CAFGA with several federated learning baselines in training CIFAR-100 data on ConvMixer-256-8 model. ![17_image_1.png](17_image_1.png) Figure 5: The test accuracy for AFGA and CAFGA with several federated learning baselines in training Shakespeare data on LSTM model. server, its performance is not on par with the conventional server-client FL setup, especially when data are heterogeneous distributed. In sharp contrast, the periodic synchronization between server and clients in our proposed method can help align local models for better convergence and ease the data heterogeneity issue in adaptive federated learning, which is mainly focus of this paper. Moreover, the central server setting allow us to easily apply adaptive optimizer for stable performances, while decentralized learning methods are mostly limited to SGD-based update as the adaptive optimizer needs the alignment of gradient update, otherwise suffers from some divergence issue (Chen et al., 2021a). We provide some experimental results comparing our proposed methods with decentralized algorithms including DSGD (Lian et al., 2017), DAdam (Nazari et al., 2019; Chen et al., 2021a) and PGA (Chen et al., 2021b) under the same training settings. The following table shows the comparison result for several decentralized methods and our proposed AFGA and CAFGA. It shows that our proposed AFGA and CAFGA indeed attains better test accuracy results comparing to other decentralized baselines. | Table 8: Comparison to decentralized algorithms | . | | |---------------------------------------------------|------------------|--------------| | Method | Test Accuracy(%) | Rounds (78%) | | DSGD | 80.23 | 50 | | DAdam | 70.37 | 227 | | PGA | 80.93 | 210 | | AFGA | 82.16 | 152 | | CAFGA | 83.03 | 112 | ![18_image_0.png](18_image_0.png) Figure 6: Ablation study with different heterogeneity degree of CAFGA in training CIFAR-10 on ConvMixer256-8 model. ## B.2 Hyper-Parameters Details Hyper-parameter Settings. We conduct detailed hyper-parameter searches to find the best hyperparameter for each baseline. We grid over the local learning rate ηl ∈ {0.001, 0.01, 0.1, 1.0}, and the global learning rate η ∈ {0.001, 0.01, 0.1, 1.0, 2.0, 5.0, 10.0} for each method. For the global AMSGrad optimizer, we set β1 = 0.9, β1 = 0.99, and we search the best ϵ from {10−10, 10−8, 10−6, 10−4}. Table 9 summarizes the hyper-parameter details in our experiments. Experiments are set up with Setting 1: 100 total clients, and Setting 2: 50 total clients in the network. For CAFGA, clients are equally divided into 5 clusters. The partial participation ratio is set to p = 0.05 for Setting 1 and p = 0.1 for Setting 2, and the gossip communication topology is ring topology by default. For each method, we conduct I = 24 iterations of local training with a batch size of 50 by default. | | Table 9: Hyper-parameters details. | | | | | | | | | | | | | | | | |-------------|------------------------------------------|------------|----------|---------|--------|------|-------|-----|-----|-----|-----|-----|------|------|------|------| | | Setting 1 (100 clients 5% participation) | | | | | | | | | | | | | | | | | FedAvg | FedAdam | FedAMSGrad | SCAFFOLD | FedProx | FedDyn | AFGA | CAFGA | | | | | | | | | | | Data | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | | CIFAR-10 | 0.1 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 2.0 | 0.01 | 2.0 | | CIFAR-100 | 0.1 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 1.0 | 0.01 | 1.0 | | Shakespeare | 1.0 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 10.0 | 0.01 | 10.0 | | | Setting 2 (50 clients 10% participation) | | | | | | | | | | | | | | | | | FedAvg | FedAdam | FedAMSGrad | SCAFFOLD | FedProx | FedDyn | AFGA | CAFGA | | | | | | | | | | | Data&Model | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | ηl | η | | CIFAR-10 | 0.1 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 2.0 | 0.01 | 2.0 | | CIFAR-100 | 0.1 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 1.0 | 0.01 | 1.0 | | Shakespeare | 1.0 | 1.0 | 1.0 | 0.01 | 1.0 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.1 | 1.0 | 0.01 | 10.0 | 0.01 | 10.0 | ## C Preliminaries About weighted matrix: $$\operatorname{null}(I_{n}-W_{k})=\operatorname{span}(\mathbf{x}|\mathbf{x}\in\mathbb{R}^{n}:(I_{n}-W_{k})\mathbf{x}=\mathbf{0})$$ n: (In − Wk)x = 0) (4) If we have null(In − Wk) = span(1), that means the following equation holds $${\begin{pmatrix}1-w_{11}&-w_{12}&\dots&-w_{1n}\\ -w_{21}&1-w_{22}&\dots&-w_{2n}\\ \vdots&&&\\ -w_{n1}&\dots&\dots&1-w_{n n}\end{pmatrix}}\cdot{\begin{pmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{pmatrix}}=\mathbf{0}$$ = 0 (5) $$(4)$$ $$\left({\bar{5}}\right)$$ if and only if x1 = x2 = ... = xn. Since we assume wij ∈ [0, 1], there is a counter-example if Wk = (W2, 0; 0, In−2), then we have $${\begin{pmatrix}1-w_{11}&-w_{12}&\dots&0\\ -w_{21}&1-w_{22}&\dots&0\\ \vdots&&&\\ 0&&&I_{n-2}\end{pmatrix}}\cdot{\begin{pmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{pmatrix}}=\mathbf{0},$$ $$\left(7\right)$$ $$1,...,\,(1,1,1,...,1).$$ (6) $\frac{1}{2}$ then (x1, x2*, ..., x*n) = (c1, c2, 0*, ...,* 0)(c1, c2 ̸= 0) can be a solution to the equation. For the eigenvalues and eigenvectors of matrix Wk − (1/n)11T, we have $W_{k}-(1/n){\bf11}^{T}=W_{k}-J,$ $(W_{k}-J)(W_{k}-J)=W_{k}-J,$ $\lambda\mathbf{b}=(W_{k}-J)\mathbf{b}=(W_{k}-J)(W_{k}-J)\mathbf{b}=(W_{k}-J)\lambda\mathbf{b}=\lambda^{2}\mathbf{b},$ $(\lambda^{2}-\lambda)\mathbf{b}=0,\quad\lambda=1$ or $\lambda=0.$ The eigenvectors of Wk − J are (1, −1, 0*, ...,* 0), (1, 0, −1*, ...,* 0),..., (1, 1, 1*, ...,* 1). The maximum of ∥Wk − (1/n)11T ∥2 is obtained when Wk is equivalent to In, and we have max ∥Wk − (1/n)11T ∥2 = 1, which implies ∥Wk − (1/n)11T ∥2 ≤ 1. ## D Convergence Analysis For Clustered-Clients Framework We re-state two cluster related assumptions, the assumption of inter-client dissimilarity and the assumption of gossip mixing spectral gap, and the theorem in the following. First, we denote the local objective function of cluster k, i.e., ¯fk(x) = 1n Pi∈Vk fi(x), then we state the following assumptions. Assumption D.1. The dissimilarity between client's objective function and corresponding cluster's objective function is bounded, i.e., for all x, k ∈ [K], there is 1n Pi∈Vk ∥∇fi(x) − ∇ ¯fk(x)∥ 2 ≤ σ 2 k . Similarly, clusters' objective function the global objective has a bounded dissimilarity variance: for α ≥ 1 and σc ≥ 0, there is 1K Pk∈[K] ∥∇ ¯fk(x)∥ 2 ≤ α 2∥∇f(x)∥ 2 + σ 2 c . Assumption D.2 (Intra-cluster spectral gap). Local clients in cluster k ∈ [K] are connected in the graph Gk with weighting matrix Wk satisfies same characteristic as Assumption 4.5. We assume the spectral gap ρk satisfies: there exists ρk ∈ [0, 1) such that ∥Wk − 1 n 11⊤∥2 ≤ ρk. We further denote σ¯ 2 l = 1 K PK k=1 σ 2 k as the average dissimilarity between local clients in the same cluster, and denote ρmax = maxk∈[K] ρk as the maximum spectral gap among all K clusters. Note that Assumption D.1 and Assumption D.2 are general assumptions for the clustered-client AFGA (CAFGA) framework. Specifically, if K = 1, i.e., there is only one cluster containing all clients [N] in the network: - Assumption D.1 reduces to Assumption 4.4: the cluster's objective function in Assumption D.1 is exactly the global objective, thus σc = 0, and there is a bounded dissimilarity between client's objective and global objective: 1n Pi∈[N] ∥∇fi(x) − ∇f(x)∥ 2 ≤ σ 2 g , which is consistent with Assumption 4.4. - Assumption D.2 reduces to Assumption 4.5: the only cluster contains all i ∈ [N], clients i ∈ [N] are connected in the graph G with weighting matrix W and there exists ρ ∈ [0, 1) such that ∥W − 1 n 11⊤∥2 ≤ ρ, which is consistent with Assumption 4.5. In the following, we state the convergence rate for Algorithm 2, (CAFGA). Theorem D.3. *Under Assumptions 4.1-4.3, D.1 and D.2, if the local learning rate satisfies specific constraints, and the maximum spectral gap satisfies* ρmax ≤ m m+n , then the iterates of Algorithm 2 in partial participation scenarios satisfy $$\frac{1}{R}\sum_{r=1}^{R}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]=\mathcal{O}\bigg{(}\frac{1}{\sqrt{RTm}}\bigg{[}[f_{0}-f_{*}]+\bigg{(}\widehat{\sigma}^{2}+\frac{\sigma^{2}}{K}\bigg{)}L\bigg{]}\bigg{)}$$ $$+\mathcal{O}\bigg{(}\frac{1}{R}\bigg{[}G^{2}+L^{2}[\sigma_{c}^{2}+(1+\rho_{\max}^{2})\widehat{\sigma}_{1}^{2}]+\frac{\rho_{\max}^{2}\sigma^{2}}{\mathcal{I}}\bigg{]}\bigg{)}+\widetilde{\mathcal{O}}\bigg{(}\frac{1}{R^{3/2}}\bigg{)},\tag{8}$$ _the $\sigma$-function is given by $\sigma$-function._ where Oe(·) hides all the absolute constants and problem dependent constants including *ρ, σ, σ*2 c , σ¯ 2 l , I*, M, N*, and additionally we denote σe 2 = σ 2 + ¯σ 2 l + σ 2 c *as the variance summation.* ## E Proof Of Theorem D.3 And Theorem 4.6 Preliminaries for the proof. We define the following auxiliary sequences, w.r.t. xr,t, x k r,t. Firstly, we denote the average model on cluster k as $$\bar{\mathbf{x}}_{r,t+1}^{k}=\bar{\mathbf{x}}_{r,t}^{k}-\eta_{l}\bar{\mathbf{g}}_{r,t}^{k},$$ r,t, (9) where g¯ k r,t = 1 n Pi∈Vk g¯ i r,t, where g¯ i r,t = 0 as a **virtual gradient** if client i in Vk has not been selected in the set S k t,r, and g¯ i r,t is equal to the **real gradient** g i r,t. if i ∈ Sk t,r We also define the global average model $\bar{\mathbf{x}}_{r,t+1}=\bar{\mathbf{x}}_{r,t}-\eta_{l}\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{g}}_{r,t}^{i}.$ odel differences, we denote the average model difference on clv. We next define sequences related to model differences, we denote the average model difference on cluster k as ∆¯ k r ∆¯ k r = 1 n X i∈Vk ∆ir = 1 n X i∈Vk (x i r,I − xr) = x¯ k r,I − xr = x¯ k r,0 − ηl I− X 1 t=0 g¯ k r,t − xr = −ηl I− X 1 t=0 g¯ k r,t, ∆¯r = 1 K X k∈[K] 1 n X i∈Vk (x i r,I − xr) = 1KX k∈[K] ∆¯ k r = −ηl 1 K 1 n I− X 1 t=0 X k∈[K] X i∈Vk g¯ i r,t. (11) Since we have two sampling process: sampling clients for global communication per global round r, and , and the average global model difference ∆¯r without sampling consideration. (9) $\frac{1}{2}$ . $$(10)$$ sampling selected clients for local gradient update per local iteration t. Thus we state the following auxiliary equations ESk t,r [g¯ k t,r] = ESk t,r 1n X i∈Vk g¯ i t,r= ESk t,r 1nX i∈Sk t,r g i t,r, ESk t,r [∆¯ k r ] = ESk t,r − ηl I− X 1 t=0 g¯ k t,r= ESk t,r − ηl n I− X 1 t=0 X i∈Sk t,r g i t,r, ESr [∆¯r] = ESr 1 K X K k=1 ∆¯ k r = 1 K X K k=1 ESk r ESk t,r − ηl n I− X 1 t=0 X i∈Sk t,r g i t,r = 1 K X K k=1 ESk r − 1 m X i∈Sk r ηl n I− X 1 t=0 ESk t,r X i∈Sk t,r g i t,r = ESr [∆r]. (12) $$(13)$$ Thus ∆¯r is the unbiased estimate of ∆r. Proof of Theorem D.3. Similar to previous works (Zhou et al., 2018; Chen et al., 2020), we introduce a Lyapunov sequence zr: assume x0 = x1, for each r ≥ 1, we have $$\mathbf{z}_{r}=\mathbf{x}_{r}+\frac{\beta_{1}}{1-\beta_{1}}(\mathbf{x}_{r}-\mathbf{x}_{r-1})=\frac{1}{1-\beta_{1}}\mathbf{x}_{r}-\frac{\beta_{1}}{1-\beta_{1}}\mathbf{x}_{r-1}.\tag{14.1}$$ For the difference of two adjacent element in sequence zr, we have zr+1 − zr = 1 β1 (xr+1 − xr) −β1 1 − β1 (xr − xr−1) =1 1 − β1 (ηVb −1/2 r mr) − β1 1 − β1 ηVb −1/2 r−1 mr−1 =1 1 − β1 ηVb −1/2 r β1mr−1 + (1 − β1)∆r −β1 1 − β1 ηVb −1/2 r−1 mr−1 = ηVb −1/2 r ∆r − η β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 r mr−1. By Assumption 4.1, with the property of L-smoothness, for r ∈ [R], taking conditional expectation at global round r, we have E[f(zr+1)] − f(zr) ≤ E[⟨∇f(zr), zr+1 − zr⟩] + L 2 E[∥zr+1 − zr∥ 2] ≤ ηE D∇f(zr),Vb −1/2 r ∆r E− ηE ∇f(zr),β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 + η 2L 2 E Vb −1/2 r ∆r − β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 2 = ηE D∇f(xr),Vb −1/2 r ∆r E −ηE ∇f(zr),β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 | {z } I | {z } II + η 2L 2 E Vb −1/2 r ∆r − β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 2 | {z } III + ηE D∇f(zr) − ∇f(xr),Vb −1/2 r ∆r E , (14) | {z } IV ## E.1 Bounding I We have I = ηE ∇f(xr),∆r √vbr + ϵ = ηE ∇f(xr) √vbr + ϵ , ∆¯r = −ηηlE ∇f(xr) √vbr + ϵ ,1 Kn I− X 1 t=0 X k∈[K] X i∈Vk g¯ i r,t = −ηηlE ∇f(xr) √vbr + ϵ ,1 Kn I− X 1 t=0 X k∈[K] X i∈Sk r,t g i r,t I− X 1 t=0 E ∇f(xr) √vbr + ϵ , 1 N X N = − ηηlm n i=1 g i r,t I− X 1 t=0 E ∇f(xr) √vbr + ϵ , 1 N X N = − ηηlm n i=1 ∇fi(x i r,t) , (15) where the first equation in equation 15 holds by ∆r = m n ∆¯r = − m n · ηl N PN i=1 PI−1 t=0 g i r,t. The last equation above holds by the unbiasedness of stochastic gradient, then we have − E ∇f(xr) √vbr + ϵ , 1 N X N i=1 ∇fi(x i r,t) = − 1 2 E ∇f(xr) √vbr + ϵ , 1 N X N i=1 ∇fi(x i r,t) − 1 2 E ∇f(xr) √vbr + ϵ , 1 N X N i=1 ∇fi(x i r,t) ± 1 K X K k=1 ∇ ¯fk(x¯ k r,t) = − 1 2 E ∇f(xr) √4vbr + ϵ ,1 √4vbr + ϵ 1 N X N i=1 ∇fi(x i r,t) − 1 2 E ∇f(xr) √4vbr + ϵ ,1 √4vbr + ϵ 1 N X N i=1 ∇fi(x i r,t) ± 1 K X K k=1 ∇ ¯fk(x¯ k r,t) . (16) Since we have inequalities, ⟨a, b⟩ = ∥a∥ 2 + ∥b∥ 2 − ∥a − b∥ 2 and ⟨a, b⟩ ≤ 12 ∥a∥ 2 + 1 2 ∥b∥ 2, then we have − E ∇f(xr) √vbr + ϵ , 1 N X N i=1 ∇fi(x i r,t) ≤ − 1 4 E ∇f(xr) √4vbr + ϵ 2 + 1 √4vbr + ϵ 1 N X N i=1 ∇fi(x i r,t) 2 − 1 √4vbr + ϵ ∇f(xr) − 1 N X N i=1 ∇fi(x i r,t) 2− 1 4 E ∇f(xr) √4vbr + ϵ 2 + 1 √4vbr + ϵ 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 − 1 √4vbr + ϵ ∇f(xr) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + 1 4 E ∇f(xr) √4vbr + ϵ 2+ 1 4 E 1 √4vbr + ϵ 1 N X N i=1 ∇fi(x i r,t) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 ≤ −1 4C0 E[∥∇f(xr)∥ 2] −1 4C0 E 1 N X N i=1 ∇fi(x i r,t) 2−1 4C0 E 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 +1 4 √ϵ E ∇f(xr) − 1 N X N i=1 ∇fi(x i r,t) 2+1 4 √ϵ E ∇f(xr) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 +1 4 √ϵ E 1 N X N i=1 ∇fi(x i r,t) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2, (17) where the second inequality holds by the property of variance vbr: ∥x∥ 2C −1 0 ≤ ∥x∥ 2(vbr + ϵ) −1/2 ≤ ∥x∥ 2ϵ 1/2 with C0 =pη 2 l I 2G2 + ϵ. After applying the property of L-smoothness, the last three terms above are highly related to bound the inter-cluster consensus error ∥xr − x¯ k r,t∥ and intra-cluster consensus error ∥x¯ k r,t − x i r,t∥. Thus by Assumption 4.1, we have the following result for bounding I I = ηE ∇f(xr),∆r √vbr + ϵ ≤ ηηl 4C0 m n I− X 1 t=0 − E[∥∇f(xr)∥ 2] − E 1 N X N i=1 ∇fi(x i r,t) 2 − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n I− X 1 t=0 E ∇f(xr) − 1 N X N i=1 ∇fi(x i r,t) 2+ E ∇f(xr) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + E 1 N X N i=1 ∇fi(x i r,t) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 ≤ ηηl 4C0 m n I− X 1 t=0 − E[∥∇f(xr)∥ 2] − E 1 N X N i=1 ∇fi(x i r,t) 2 − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n I− X 1 t=0 E ∇f(xr) ± 1 K X K k=1 ∇ ¯fk(x¯ k r,t) − 1 N X N i=1 ∇fi(x i r,t) 2 + E ∇f(xr) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2+ E 1 N X N i=1 ∇fi(x i r,t) − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 ≤ ηηl 4C0 m n I− X 1 t=0 − E[∥∇f(xr)∥ 2] − E 1 N X N i=1 ∇fi(x i r,t) 2 − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n I− X 1 t=0 2L 2 K X K k=1 E[∥xr − x¯ k r,t∥ 2] + 2L 2 N X K k=1 X i∈Vk E[∥x¯ k r,t − x i r,t∥ 2] + L 2 K X K k=1 E[∥xr − x¯ k r,t∥ 2] + L 2 N X K k=1 X i∈Vk E[∥x¯ k r,t − x i r,t∥ 2] ≤ ηηl 4C0 m n I− X 1 t=0 − E[∥∇f(xr)∥ 2] − E 1 N X N i=1 ∇fi(x i r,t) 2 − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n I− X 1 t=0 3L 2 K X K k=1 E[∥xr − x¯ k r,t∥ 2] + 3L 2 N X K k=1 X i∈Vk E[∥x¯ k r,t − x i r,t∥ 2] ≤ ηηl 4C0 m n I− X 1 t=0 − E[∥∇f(xr)∥ 2] − E 1 N X N i=1 ∇fi(x i r,t) 2 − 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n 3L 2I 2C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2 + σ 2 c ) + 3L 2I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + 3L 2I 2C1η 2 l σ 2ρ 2 max + 3L 2C1 I 2 + H2 I,ρ · ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l , (18) where the last inequality holds by Lemma F.1 and F.2. ## E.2 Bounding Ii Bounding II mainly follows by the update rule and definition of virtual sequence zr, II = −ηE ∇f(zr),β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 = −ηE ∇f(zr) − ∇f(xr) + ∇f(xr),β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 ≤ ηE ∥∇f(xr)∥ β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 + ηLE ∥zr − xr∥ β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 = ηE ∥∇f(xr)∥ β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 + η 2LE 1 pvbr−1 + ϵ β1 1 − β1 mr−1 β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 ≤β1 1 − β1 m n ηηlIG 2E -Vb −1/2 r−1 − Vb −1/2 r1 +β 2 1 (1 − β1) 2 m2 n2 Lη2η 2 l I 2G 2ϵ −1/2E -Vb −1/2 r−1 − Vb −1/2 r1 , (19) where the first iequality holds by Assumption 4.1, and the last one holds by Assumption 4.3 and Lemma F.8 about bounding ∇f(xr) and mr. ## E.3 Bounding Iii For bounding III, use the similar way for bounding II, III = η 2L 2 E Vb −1/2 r ∆r + β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 2 ≤ η 2LE-Vb −1/2 r ∆r 2+ η 2LE β1 1 − β1 Vb −1/2 r−1 − Vb −1/2 rmr−1 2 ≤ η 2L ϵ E[∥∆r∥ 2] + β 2 1 (1 − β1) 2 m2 n2 η 2η 2 l LI 2G 2E -Vb −1/2 r−1 − Vb −1/2 r 2, (20) where the first inequality holds by Cauchy-Schwarz inequality, and the second one follows by Assumption 4.3 and Lemma F.8 about bounding ∇f(xr) and mr. ## E.4 Bounding Iv Similarly, we bound the last term in equation 14, IV = E hD∇f(zr) − ∇f(xr), ηVb −1/2 r ∆r Ei ≤ E h∥∇f(zr) − ∇f(xr)∥ ηVb −1/2 r ∆r i ≤ LE h∥zr − xr∥ ηVb −1/2 r ∆r i ≤ η 2L 2 E β1 1 − β1 Vb −1/2 r mr 2+ η 2L 2 E hVb −1/2 r ∆r 2i ≤ η 2L 2ϵ β 2 1 (1 − β1) 2 E[∥mr∥ 2] + η 2L 2ϵ E[∥∆r∥ 2], (21) where the first inequality holds due to Young's inequality, and the second one follows from Assumption 4.1 and the definition of virtual sequence zr. By Lemma F.7, we have $$\sum_{r=1}^{R}\mathbb{E}[\|\mathbf{m}_{r}\|^{2}]\leq\sum_{r=1}^{R}\mathbb{E}[\|\Delta_{r}\|^{2}].\tag{22}$$ Therefore, the summation of IV term is bounded by $$\sum_{r=1}^{R}IV\leq\left(\frac{\eta^{2}L}{2\epsilon}\frac{\beta_{1}^{2}}{(1-\beta_{1})^{2}}+\frac{\eta^{2}L}{2\epsilon}\right)\sum_{r=1}^{R}\mathbb{E}[\|\Delta_{r}\|^{2}].\tag{23}$$ Merging I to IV together, we obtain the following result for bounding equation 14, E[f(zr+1)] − f(z1) ≤ ηηl 4C0 m n *− IE*[∥∇f(xr)∥ 2] − I− X 1 t=0 E 1 N X N i=1 ∇fi(x i r,t) 2− I− X 1 t=0 E 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ m n 3L 2I 2C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2 + σ 2 c ) + 3L 2I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + 3L 2I 2C1η 2 l σ 2ρ 2 max + 3L 2C1 I 2 + H2 I,ρ · ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + ηβ1 1 − β1 m n ηlIG 2 + η 2 β 2 1 (1 − β1) 2 m2 n2 Lη2 l I 2G 2ϵ −1/2 E -Vb −1/2 r−1 − Vb −1/2 r1 + η 2Lβ 2 1 (1 − β1) 2 m2 n2 η 2 l I 2G 2E -Vb −1/2 r−1 − Vb −1/2 r 2 + η 2L 2ϵ β 2 1 (1 − β1) 2 + η 2L 2ϵ+ η 2L ϵ E[∥∆r∥ 2], then substituting the bound of ∥∆r∥ 2in Lemma F.5, and by applying Lemma F.4, then we have n m X R r=1 -E[f(zr+1)] − f(z1) ≤ − ηηlI 4C0 X R r=1 E[∥∇f(xr)∥ 2] − ηηl 4C0 X R r=1 I− X 1 t=0 E 1 N X N i=1 ∇fi(x i r,t) 2 − ηηl 4C0 X R r=1 I− X 1 t=0 E 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 + ηηl 4 √ϵ X R r=1 3L 2I 2C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 3L 2I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + 3L 2I 2C1η 2 l σ 2ρ 2 max + 3L 2C1 I 2 + H2 I,ρ · ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + ηβ1 1 − β1 ηlIG 2 + η 2 β 2 1 (1 − β1) 2 m n Lη2 l I 2G 2ϵ −1/2 X R r=1 E -Vb −1/2 r−1 − Vb −1/2 r1 + η 2Lβ 2 1 (1 − β1) 2 m n η 2 l I 2G 2X R r=1 E -Vb −1/2 r−1 − Vb −1/2 r 2 + η 2L 2ϵ β 2 1 (1 − β1) 2 + η 2L 2ϵ+ η 2L ϵ X R r=1 2η 2 l I Nσ 2 + 64η 2 l I(n − m) n(n − 1) [¯σ 2 l + σ 2 c ] + 2n(n − m) m2(n − 1) + 64η 2 l IL 2(n − m) n(n − 1) + 4η 2 l (I + 1)2L 2 IC1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64η 2 l L 2(n − m) n(n − 1) I 3C1η 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + I 2C1η 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l I− X 1 t=0 E 1 N X N I− X 1 t=0 E 1 K X K i=1 ∇fi(x i r,t) 2+ 8η 2 l Im n $$(24)$$ $$(25)$$ k=1 ∇ ¯fk(x¯ k r,t) 2. (24) + 16η 2 l (m − 1) (n − 1) we need the certain constraint on local learning rate Cβ,ηη 16η 2 l N2 (m − 1) (n − 1) + 2(I + 2)η 2 l m N2n ≤ ηηl 4C0 ⇒ ηl ≤1 4C0Cβ,η 4 N2 (m − 1) (n − 1) + 2(I + 2)m N2n −1, (25) where Cβ,η = ηL ϵ β 2 1 (1−β1) 2 + 3ηL ϵ = O(max{η, 1}), we further need the requirement of ηl, which is same as the requirement in full participation settings ηηl 4 √ϵ 3L 2I 2C1η 2 l (I + ρ 2 maxHI,ρ)α 2 + 2n(n − m) m2(n − 1) + 32η 2 l L 2(n − m) n(n − 1) Cβ,ηηI 2C1η 2 l HI,ρρ 2 maxα 2 + 32η 2 l L 2(n − m) n(n − 1) Cβ,ηηI 3C1η 2 l α 2 ≤ ηηlI 8C0 , ⇒1 4 √ϵ 3L 2IC1η 2 l (I + ρ 2 maxHI,ρ)α 2 + 2n(n − m) m2(n − 1) + 32η 2 l L 2(n − m) n(n − 1) Cβ,ηIC1ηlHI,ρρ 2 maxα 2 + 32η 2 l L 2(n − m) n(n − 1) Cβ,ηI 2C1ηlα 2 ≤1 8C0 , (26) ⇒ ηl ≤ √4ϵ p18L2C0C1I(I + ρ 2maxHI,ρ)α2 , (27) thus we have $${\frac{\eta\eta\mathcal{I}}{8C_{0}R}}\sum_{r=1}^{R}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]$$ ≤n Rm -E[f(zr+1)] − f(z1)+ηηl 4R √ϵ X R r=1 3L 2I 2C1η 2 l (I + ρ 2 maxHI,ρ)σ 2 c + 3L 2I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + 3L 2I 2C1η 2 l σ 2ρ 2 max + 3L 2C1 I 2 + H2 I,ρ · ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l +β1 1 − β1 ηηlIG 2d R √ϵ + 2β 2 1 (1 − β1) 2 m n η 2η 2 l LI 2G 2 d ϵ + η 2L 2ϵ β 2 1 (1 − β1) 2 + η 2L 2ϵ+ η 2L ϵ 2η 2 l I Nσ 2 + 2n(n − m) m2(n − 1) + 64η 2 l IL 2(n − m) n(n − 1) + 4η 2 l (I + 1)2L 2 IC1η 2 l HI,ρρ 2 maxσ 2 c + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64η 2 l L 2 (n − m) n(n − 1)I 3C1η 2 l σ 2 c + I 2C1η 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64η 2 l I(n − m) n(n − 1) [¯σ 2 l + σ 2 c ] (28) since there is HI,ρ = min{1 1−ρmax , *I} ≤ I* and ρmax ≤ 1, 1 8C0R X R r=1 E[∥∇f(xr)∥ 2] ≤n ηηlRIm -E[f(zr+1)] − f(z1)+1 4 √ϵ C · L 2η 2 l I(I + HI,ρ)σ 2 c + C · L 2η 2 l I ρ 2 maxHI,ρσ¯ 2 l + m(n − m) n(n − 1) Iσ¯ 2 l + σ 2ρ 2 max1 + 1 n + CβG 2d R √ϵ + 2C 2 β m n LηηlIG 2 d ϵ + ηL 2ϵ β 2 1 (1 − β1) 2 + ηL 2ϵ + ηL ϵ 2ηl N σ 2 + 2n(n − m) m2(n − 1) + 64η 2 l IL 2(n − m) n(n − 1) + 4η 2 l (I + 1)2L 2 · C1ηlHI,ρρ 2 maxσ 2 c + C1ηlHI,ρρ 2 maxσ¯ 2 l + C1ηlρ 2 maxσ 2 + C1ηl H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64(n − m) n(n − 1) C1L 2η 3 l I 2σ 2 c + C1L 2η 3 l I σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + ηlσ¯ 2 l + ηlσ 2 c , (29) where C is a constant irrelevant to parameters and ρmax = maxk∈[K] ρk, HI,ρ = min 1 1−ρmax , I , Cβ =β1 1−β1 and σ¯ 2 L = 1 K PK k=1 σ 2 k . This concludes the proof. If we further apply the constraint of $$\frac{n(n-m)}{m^{2}(n-1)}H_{\mathcal{I},\rho}\rho_{\mathrm{max}}^{2}\leq\frac{1}{n},$$ $$(30)$$ , (30) where the condition Eq. 30 implies that the spectral gap ρmax satisfies $${\frac{\rho_{\mathrm{max}}^{2}}{1-\rho_{\mathrm{max}}}}\leq{\frac{m^{2}}{n^{2}}},$$ $$(31)$$ , (31) with the condition of Eq. 31, then we assume there is ρmax ≤m m+n , which satisfies $$\frac{n(n-m)}{m^{2}(n-1)}H_{x,\rho}\rho_{\rm max}^{2}=\frac{n(n-m)}{m^{2}(n-1)}\frac{\rho_{\rm max}^{2}}{1-\rho_{\rm max}}\leq\frac{n(n-m)}{m^{2}(n-1)}\frac{m^{2}}{(m+n)n}\leq\frac{1}{n}.\tag{32}$$ Also by choosing a constant Ce, we have 1 R X R r=1 E[∥∇f(xr)∥ 2] ≤ 8C0 n[E[f(zr+1)] − f(z0)] ηηlRIm+ 1 R CβG2d √ϵ+ 2C 2 β ηηlILG2dm ϵn | {z } T1 + CL2η 2 l 4 √ϵ I(I + HI,ρ)σ 2 c + Iρ 2 maxHI,ρσ¯ 2 l + n + 1 nIσ 2ρ 2 max + m(n − m) n(n − 1) I 2σ¯ 2 l | {z } T2 + ηl N ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ σ 2 + C3ηl n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ [¯σ 2 l + σ 2 c ] | {z } T3 | {z } T4 + C2ηl n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l | {z } T5 + C3L 2η 3 l I n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ | {z } T6 · (I + HI,ρρ 2 max)σ 2 c + HI,ρρ 2 maxσ¯ 2 l + n + 1 nρ 2 maxσ 2 + σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l | {z } T7 + C4(I + 1)2η 3 l m n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l (33) | {z } T8 By adopting learning rates η = Θ(n qI m ) and ηl = Θ√ 1 RI , then we have T1 = n[E[f(zr+1)] − f(z0)] ηηlRIm+ 1 R CβG2d √ϵ+ 2C 2 β ηηlILG2dm ϵn = E[f(zr+1)] − f(z0) √RIm+ 1 R CβG2d √ϵ+ 2C 2 βLG2d √Im ϵ √R = O E[f(zr+1)] − f(z0) √RIm+ G2 R + Oe 1 R3/2 . T2 = CL2η 2 l 4 √ϵ I(I + HI,ρ)σ 2 c + Iρ 2 maxHI,ρσ¯ 2 l + n + 1 nIσ 2ρ 2 max + m(n − m) n(n − 1) I 2σ¯ 2 l =CL2 4RI 2 √ϵ I(I + HI,ρ)σ 2 c + Iρ 2 maxHI,ρσ¯ 2 l + n + 1 nIσ 2ρ 2 max + m(n − m) n(n − 1) I 2σ¯ 2 l =CL2 4RI √ϵ (I + HI,ρ)σ 2 c + ρ 2 maxHI,ρσ¯ 2 l + n + 1 nσ 2ρ 2 max + m(n − m) n(n − 1) Iσ¯ 2 l ≤CL2 4RI √ϵ 2Iσ 2 c + ρ 2 maxIσ¯ 2 l + n + 1 nσ 2ρ 2 max + m n Iσ¯ 2 l = O 1 R L 2[σ 2 c + (1 + ρ 2 max)¯σ 2 l ] + ρ 2 maxσ 2 I . where the inequality holds by the definition of HI,ρ = min 1 1−ρmax , I , and the big-O holds because m ≤ n. $$T_{3}=\frac{\eta}{N}\bigg{(}\frac{\eta L}{\epsilon}\frac{\beta_{1}^{2}}{(1-\beta_{1})^{2}}+\frac{3\eta L}{\epsilon}\bigg{)}\sigma^{2}=\frac{1}{N}\bigg{(}\frac{n}{\sqrt{R\!L}m}\frac{L}{\epsilon}\frac{\beta_{1}^{2}}{(1-\beta_{1})^{2}}+\frac{n}{\sqrt{R\!L}m}\frac{3L}{\epsilon}\bigg{)}\sigma^{2}\tag{34}$$ $$=\mathcal{O}\bigg{(}\frac{nL}{N\sqrt{R\!L}m}\sigma^{2}\bigg{)}=\mathcal{O}\bigg{(}\frac{L}{K\sqrt{R\!L}m}\sigma^{2}\bigg{)}.$$ The simplification of T4 is very similar to that of T3, $$T_{4}=\frac{C_{3}\eta}{n}\bigg(\frac{\eta L}{\epsilon}\frac{\beta_{1}^{2}}{(1-\beta_{1})^{2}}+\frac{3\eta L}{\epsilon}\bigg)[\bar{\sigma}_{l}^{2}+\sigma_{c}^{2}]=\mathcal{O}\bigg(\frac{L}{\sqrt{R\mathcal{L}m}}[\bar{\sigma}_{l}^{2}+\sigma_{c}^{2}]\bigg).$$ The simplification of the first half of T5 is very similar to that of T3 and T4, T5 = C2ηl n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l = C2 n n √RIm L ϵ β 2 1 (1 − β1) 2 +n √RIm 3L ϵ · σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l ≤ C2 1 √RIm L ϵ β 2 1 (1 − β1) 2 +1 √RIm 3L ϵ · σ 2 c + ¯σ 2 l + σ 2 + σ 2 In + ¯σ 2 l = O L √RIm [σ 2 c + ¯σ 2 l + σ 2] , The simplification of T6 is very similar to that of T3 and T4 as well, so that $$\begin{array}{c}{{T_{6}=\frac{C_{3}L^{2}\eta_{l}^{3}\mathcal{I}}{n}\bigg(\frac{\eta L}{\epsilon}\frac{\beta_{1}^{2}}{(1-\beta_{1})^{2}}+\frac{3\eta L}{\epsilon}\bigg)}}\\ {{=\mathcal{O}\bigg(\frac{1}{R\mathcal{I}}\cdot\frac{L}{\sqrt{R\mathcal{I}m}}[\bar{\sigma}_{l}^{2}+\sigma_{c}^{2}]L^{2}\bigg)=\tilde{\mathcal{O}}\bigg(\frac{1}{R^{3/2}}\bigg).}}\end{array}$$ We bound T6 · T7 together. since we previously have the bound for T6, then $$T_{6}\cdot T_{7}=T_{6}\cdot\left[(\mathcal{I}+H_{\mathcal{I},\rho}\rho_{\max}^{2})\sigma_{e}^{2}+H_{\mathcal{I},\rho}\rho_{\max}^{2}\bar{\sigma}_{l}^{2}+\frac{n+1}{n}\rho_{\max}^{2}\sigma^{2}+\frac{\sigma^{2}}{n}+\frac{m(n-m)}{n(n-1)}\mathcal{I}\bar{\sigma}_{l}^{2}\right]$$ $$\leq T_{6}\cdot\left[2\mathcal{I}\sigma_{e}^{2}+\mathcal{I}\bar{\sigma}_{l}^{2}+\frac{n+1}{n}\sigma^{2}+\frac{\sigma^{2}}{n}+\mathcal{I}\bar{\sigma}_{l}^{2}\right]$$ $$=\tilde{\mathcal{O}}\bigg{(}\frac{1}{R^{3/2}}\bigg{)}.$$ The simplification of the first half of T8 is very similar to that of T3 and T4, T8 = C4(I + 1)2η 3 l m n ηL ϵ β 2 1 (1 − β1) 2 + 3ηL ϵ σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l = C4(I + 1)2η 2 l m n ηηlL ϵ β 2 1 (1 − β1) 2 + 3ηηlL ϵ σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l = C4(I + 1)2m RI 2n n √RIm L ϵ β 2 1 (1 − β1) 2 +n √RIm 3L ϵ · σ 2 c + ¯σ 2 l + σ 2 + HI,ρ σ 2 I 2n + m(n − m) n(n − 1) HI,ρ Iσ¯ 2 l ≤ C4(I + 1)2m RI 2n n √RIm L ϵ β 2 1 (1 − β1) 2 +n √RIm 3L ϵ · σ 2 c + ¯σ 2 l + σ 2 + σ 2 In + ¯σ 2 l = Oe 1 R3/2 . Therefore, merging terms from T1 to T8, equation 33 should be 1 R X R r=1 E[∥∇f(xr)∥ 2] = T1 + T2 + T3 + T4 + T5 + T6 · T7 + T8 = O E[f(zr+1)] − f(z0) √RIm+ G2 R + O 1 R L 2[σ 2 c + (1 + ρ 2 max)¯σ 2 l ] + ρ 2 maxσ 2 I | {z } T1 | {z } T2 + O L K √RIm σ 2 + O L √RIm [¯σ 2 l + σ 2 c ] + O L √RIm [σ 2 c + ¯σ 2 l + σ 2] + Oe 1 R3/2 | {z } T1,T6·T7,T8 , (35) | {z } T3 | {z } T4 | {z } T5 note that N = Kn, re-organizing them, we have $$\frac{1}{R}\sum_{r=1}^{R}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]=\mathcal{O}\bigg{(}\frac{1}{\sqrt{R\mathcal{I}m}}\bigg{[}[f_{0}-f_{*}]+\bigg{(}[\sigma^{2}+\bar{\sigma}_{l}^{2}+\sigma_{e}^{2}]+\frac{\sigma^{2}}{K}\bigg{)}L\bigg{]}\bigg{)}\\ +\mathcal{O}\bigg{(}\frac{1}{R}\bigg{[}G^{2}+L^{2}[\sigma_{e}^{2}+(1+\rho_{\max}^{2})\bar{\sigma}_{l}^{2}]+\frac{\rho_{\max}^{2}\sigma^{2}}{L}\bigg{]}\bigg{)}+\widetilde{\mathcal{O}}\bigg{(}\frac{1}{R^{3/2}}\bigg{)}.\tag{36}$$ Based on the theoretical analysis above, we obtain the general convergence bound for (C)AFGA. Here we conclude the proof for Theorem D.3. Proof of Theorem 4.6. Note that the proof for AFGA is a special case of the proof for CAFGA. When there is only one cluster containing all clients [N] in the CAFGA framework, it reduces to AFGA. Therefore, for the convergence analysis of AFGA, we replace the Assumption D.1 with Assumption 4.4, and replace Assumption D.2 with Assumption 4.5. This results in m = M, n = *N, K* = 1, ρmax = *ρ, σ*c = 0 and σg = σk = ¯σl, and then we have $$\begin{array}{l}{{\frac{1}{R}\sum_{r=1}^{R}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]}}\\ {{\ \ \ \ =\mathcal{O}\bigg(\frac{1}{\sqrt{R L M}}\bigg[[f_{0}-f_{*}]+L[\sigma^{2}+\sigma_{g}^{2}]\bigg]\bigg)+\mathcal{O}\bigg(\frac{1}{R}\bigg[G^{2}+L^{2}(1+\rho^{2})\sigma_{g}^{2}+\frac{\rho^{2}\sigma^{2}}{\mathcal{I}}\bigg]\bigg)+\tilde{\mathcal{O}}\bigg(\frac{1}{R^{3/2}}\bigg).}}\end{array}$$ . (37) hence we conclude the proof for Theorem 4.6. ## F Supporting Lemmas Lemma F.1 (Inter-cluster consensus error). *For local learning rate which satisfying the condition* ηl ≤1 8IL , denote CI = 1 + 32 ·1 4I−1 , recall the definition for x¯ in Eq. 9, the inter-cluster model difference after s *local* steps satisfies $$\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[\|\mathbf{x}_{r,t+1}^{k}-\mathbf{x}_{r}\|^{2}\leq C_{\mathcal{Z}}\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[\|\mathbf{x}_{r,t}^{k}-\mathbf{x}_{r}\|^{2}+8\mathcal{I}\eta_{t}^{2}(\alpha^{2}\mathbb{E}[\|\nabla f(\mathbf{x}_{r})\|^{2}]+\sigma_{\varepsilon}^{2})+\eta_{t}^{2}\frac{\sigma^{2}}{n}.\tag{38}$$ $$(37)$$ $$\square$$ Proof. Note that the following proof is similar to Lemma 3 in (Reddi et al., 2020). By definition and auxiliary sequences, we have E[∥x¯ k r,t+1 − xr∥ 2] = E[∥x¯ k r,t − xr − ηlg¯ k r,t∥ 2] = E x¯ k r,t − xr − ηl g¯ k r,t ∓ 1 n X i∈Sk r,t ∇fi(x i r,t) ∓ m n ∇ ¯fk(x¯ k r,t) ∓ m n ∇ ¯fk(xr) 2 ≤ (1 + γ)E[∥x¯ k r,t − xr∥ 2] + η 2 l E g¯ k r,t − 1 n X i∈Sk r,t ∇fi(x i r,t) 2+ 4(1 + γ −1)η 2 l E 1 n X i∈Sk r,t [∇fi(x i r,t) − ∇fi(x¯ k r,t)] 2 + 4(1 + γ −1)η 2 l E 1 n X i∈Sk r,t ∇fi(x¯ k r,t) − m n ∇ ¯fk(x¯ k r,t) 2+ 4(1 + γ −1)η 2 l E[∥∇ ¯fk(x¯ k r,t) − ∇ ¯fk(xr)∥ 2] + 4(1 + γ −1)η 2 l E[∥∇ ¯fk(xr)∥ 2] ≤ (1 + γ)E[∥x¯ k r,t − xr∥ 2] + η 2 l σ 2 n + 4(1 + γ −1)η 2 l L 2E[∥x¯ k r,t − xr∥ 2] + 4(1 + γ −1)η 2 l L 2 1 n Xn i=1 E[∥x i r,t − x¯ k r,t∥ 2] + 4(1 + γ −1)η 2 l E[∥∇ ¯fk(xr)∥ 2] + m(n − m) n(n − 1) 4(1 + γ −1)η 2 l σ 2 k ≤ [(1 + γ) + 4(1 + γ −1)η 2 l L 2] · E[∥x¯ k r,t − xr∥ 2] + 4(1 + γ −1)η 2 l L 2 1 n Xn i=1 E[∥x i r,t − x¯ k r,t∥ 2] + η 2 l σ 2 n + 4(1 + γ −1)η 2 l E[∥∇ ¯fk(xr)∥ 2] + m(n − m) n(n − 1) 4(1 + γ −1)η 2 l σ 2 k , (39) where the first equality holds by Eq. 10. The first inequality holds due to g i r,t is an unbiased estimator of ∇fi(x i r,t) and Young's inequality. The second inequality holds by Assumption 4.1 and 4.2, also the property of sampling in the cluster (see details in Lemma F.6), i.e., E 1 n X i∈Sk r,t ∇fi(x¯ k r,t) − m n ∇ ¯fk(x¯ k r,t) 2= E 1 n X i∈Vk I{i ∈ Sk r,t}[∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t)] 2 = m(n − m) n(n − 1) 1 n2 X i∈Vk E[∥∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t)∥ 2] + m(m − 1) n(n − 1) E 1 n X i∈Vk ∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t) 2 ≤ m(n − m) n(n − 1) σ 2 k . Averaging Eq. 39 over k = 1*, ..., K* clusters, we have 1 K X K k=1 E[∥x¯ k r,t+1 − xr∥ 2] ≤ [(1 + γ) + 4(1 + γ −1)η 2 l L 2] 1 K X K k=1 E[∥x¯ k r,t − xr∥ 2] + 4(1 + γ −1)η 2 l L 2 1 N X K k=1 Xn i=1 E[∥x i r,t − x¯ k r,t∥ 2] + 4(1 + γ −1)η 2 l 1 K X K k=1 E[∥∇ ¯fk(xr)∥ 2] + η 2 l σ 2 n + m(n − m) n(n − 1) 4(1 + γ −1)η 2 l σ¯ 2 l ≤ [(1 + γ) + 4(1 + γ −1)η 2 l L 2] 1 K X K k=1 E[∥x¯ k r,t − xr∥ 2] + 4(1 + γ −1)η 2 l L 2 1 N X K k=1 Xn i=1 E[∥x i r,t − x¯ k r,t∥ 2] + 4(1 + γ −1)η 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l σ 2 n + m(n − m) n(n − 1) 4(1 + γ −1)η 2 l σ¯ 2 l , (40) where the second inequality holds by Assumption 4.4, and incorporates the previously defined term σ¯ 2 l = 1 K PK k=1 σ 2 k . Choosing γ =1 4I−1 with the condition of ηl ≤1 8IL , we have 1 K X K k=1 E[∥x¯ k r,t+1 − xr∥ 2] ≤ 1 +1 4I − 1 +1 2(4I − 1)1K X K k=1 E[∥x¯ k r,t − xr∥ 2] + 16Iη 2 l L 2 1 N X K k=1 X i∈Vk E[∥x i r,t − x¯ k r,t∥ 2] + 16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l σ 2 n + m(n − m) n(n − 1) 16Iη 2 l σ¯ 2 l = CI 1 K X K k=1 E[∥x¯ k r,t − xr∥ 2] + 16Iη 2 l L 2 1 N X K k=1 X i∈Vk E[∥x i r,t − x¯ k r,t∥ 2] + 16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l σ 2 n + m(n − m) n(n − 1) 16Iη 2 l σ¯ 2 l , (41) where CI = 1 + 32 ·1 4I−1 . This concludes the proof. ## F.1 Lemma For Intra-Cluster Consensus Error Lemma F.2. *The intra-cluster consensus error* Pn i=1 ∥x¯ k r,t − x i r,t∥ 2*, also known as* ∥X k,⊥ r,t ∥ 2 F , has the following upper bound, 1 N X K k=1 E[∥X k,⊥ r,t+1∥ 2 F ] ≤ max k∈[K] ρ 2 k (1 + ζ −1 k) + η 2 l · 4L 2 max k∈[K] {ρ 2 k (1 + ζk)} 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] + η 2 l max k∈[K] {ρ 2 k (1 + ζk)} · 4L 2E[∥x¯ k r,t − xr∥ 2] + η 2 l max k∈[K] {ρ 2 k (1 + ζk)} · 4(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l 1 K X K k=1 ρ 2 k (1 + ζk)4σ 2 k + η 2 l σ 2ρ 2 max, (42) $$(42)$$ where ζk *is some constant related to the Young's inequality, and it could be uniformly chosen for all* k = 1*, ..., K*. Proof. By definition we have Xk r,t = (x 1 r,t*, ...,* x n r,t) and $X_{n,t}^{k,\perp}=X_{n,t}^{k}(I_{n}-J)$, where $J=\frac{1}{n}\mathbf{1}_{n}\cdot\mathbf{1}_{n}^{\prime}$. Thus we r,t(In − J), where J = n n have $$\sum_{i=1}^{n}\|\mathbf{x}_{r,t}^{k}-\mathbf{x}_{r,t}^{i}\|^{2}=\|(\mathbf{x}_{r,t}^{1},...,\mathbf{x}_{r,t}^{n})(I_{n}-J)\cdot I_{n}\cdot(I_{n}-J)(\mathbf{x}_{r,t}^{1},...,\mathbf{x}_{r,t}^{n})^{\prime}\|_{F}\tag{43}$$ $$=\|\mathbf{X}_{r,t}^{k}(I_{n}-J)\cdot(I_{n}-J)\mathbf{X}_{r,t}^{k}\|_{F}$$ $$=\|\mathbf{X}_{r,t}^{k,1}\cdot\mathbf{X}_{r,t}^{k,1}\|_{F}$$ $$=\|\mathbf{X}_{r,t}^{k,1}\|_{F}^{2},$$ Recall the update rule of AFGA and CAFGA, there is X k,⊥ r,t+1 = (Wk − J)(X k,⊥ r,t − ηlGk r,t), then we have E[∥X k,⊥ r,t+1∥ 2 F ] = E(E(∥(Wk − J)(X k,⊥ r,t − ηlG k r,t)∥ 2|Fr,t−1)) = E(E(∥(Wk − J)(X k,⊥ r,t − ηl∇F(Xk r,t) + ηl∇F(Xk r,t) − ηlG k r,t)∥ 2|Fr,t−1)) = E(E(∥(Wk − J)(X k,⊥ r,t − ηl∇F(Xk r,t))∥ 2|Fr,t−1)) + η 2 l E(E(∥(Wk − J)(∇F(Xk r,t) − G k r,t)∥ 2|Fr,t−1)) ≤ E[∥(Wk − J)(X k,⊥ r,t − ηl∇F(Xk r,t))∥ 2] + η 2 l ρ 2 knσ2 ≤ ρ 2 k (1 + ζ −1 k) · E[∥X k,⊥ r,t ∥ 2] + ρ 2 k (1 + ζk)η 2 l E[∥∇F(Xk r,t))∥ 2] + η 2 l ρ 2 knσ2, (44) where the ∇Fk(Xk) ∈ R n×dis associated to cluster k by stacking ∇fi(x i) for i ∈ Vk row-wise. The third equality is due to the unbiasedness of stochastic gradient. The first inequality holds by Assumption 4.2 and ∥∇F(Xk r,t) − Gk r,t∥F =Pn i=1 ∥∇fi(x i r,t) − g i r,t∥ 2. For the Frobenius norm, there is ∥AB∥F ≤ ∥A∥2∥B∥F . The second inequality holds by Young's inequality with some parameter ζk > 0 and ∥AB∥F ≤ ∥A∥2∥B∥F as well. For ∇Fk(Xk r,t), by definition, we have ∥∇Fk(Xk r,t)∥ 2 F = X i∈Vk ∥∇fi(x i r,t)∥ 2 = X i∈Vk ∥∇fi(x i r,t) − ∇fi(x¯ k r,t) + ∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t) + ∇ ¯fk(x¯ k r,t) − ∇ ¯fk(xr) + ∇ ¯fk(xr)∥ 2 i∈Vk 4∥∇fi(x i r,t) − ∇fi(x¯ k r,t)∥ 2 + 4∥∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t)∥ 2 + 4∥∇ ¯fk(x¯ k r,t) − ∇ ¯fk(xr)∥ 2 ≤ X + 4∥∇ ¯fk(xr)∥ 2 i∈Vk 4∥∇fi(x¯ k r,t) − ∇ ¯fk(x¯ k r,t)∥ 2 + 4L 2∥x i r,t − x¯ k r,t∥ 2 + 4L 2∥x¯ k r,t − xr∥ 2 + 4∥∇ ¯fk(xr)∥ 2 ≤ X ≤ 4L 2∥X k,⊥ r,t ∥ 2 + 4L 2n∥x¯ k r,t − xr∥ 2 + 4n∥∇ ¯fk(xr)∥ 2 + 4nσ2 k , (45) where the first inequality holds by Cauchy inequality, the second inequality holds by Assumption 4.1, and the last inequality holds by Assumption 4.4. Averaging Eq. 45 over k = 1*, ..., K*, we have the following iteration 1 N X K k=1 E[∥X k,⊥ r,t+1∥ 2] ≤ 1 N X K k=1 ρ 2 k (1 + ζ −1 k) · E[∥X k,⊥ r,t ∥ 2] + 1N X K k=1 ρ 2 k (1 + ζk)η 2 l E[∥∇F(Xk r,t))∥ 2] + η 2 l σ 2 1 K X K k=1 ρ 2 k ≤ 1 N X K k=1 ρ 2 k (1 + ζ −1 k) · E[∥X k,⊥ r,t ∥ 2] + η 2 l 1 N X K k=1 ρ 2 k (1 + ζk) · 4L 2E[∥X k,⊥ r,t ∥ 2] + η 2 l 1 K X K k=1 ρ 2 k (1 + ζk) · 4L 2E[∥x¯ k r,t − xr∥ 2] + η 2 l 1 K X K k=1 ρ 2 k (1 + ζk) · 4E[∥∇ ¯fk(xr)∥ 2] + η 2 l 1 K X K k=1 ρ 2 k (1 + ζk) · 4σ 2 k + η 2 l σ 2ρ 2 max ≤ max k∈[K] ρ 2 k (1 + ζ −1 k) + η 2 l · 4L 2 max k∈[K] {ρ 2 k (1 + ζk)} 1 N X K $$(46)$$ $\square$ k=1 E[∥X k,⊥ r,t ∥ 2] + η 2 l max k∈[K] {ρ 2 k (1 + ζk)} · 4L 2E[∥x¯ k r,t − xr∥ 2] + η 2 l max k∈[K] {ρ 2 k (1 + ζk)} · 4(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l 1 K X K k=1 ρ 2 k (1 + ζk)4σ 2 k + η 2 l σ 2ρ 2 max. (46) This concludes the proof. F.2 Lemma for summation of intra-cluster and inter-cluster consensus errors Lemma F.3. *If the local learning rate satisfies the condition:* ηl ≤1 8IL , the for all local round s = 0*, ...,* I−1, there is 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] + 1K X K k=1 E[∥x¯ k r,t − xr∥ 2] ≤ (t + 1)C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + (t + 1)C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + (t + 1)C1η 2 l σ 2ρ 2 max + (t + 1)C1 1 + H2 I,ρ I 2· ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l , (47) where C1 is a constant independent to parameters. Proof. Denote an auxiliary vector $$M_{r,t}=\left(\frac{1}{N}\sum_{k=1}^{K}\mathbb{E}[\|X_{r,t}^{k,\bot}\|^{2}],\quad\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[\|\vec{x}_{r,t}^{k}-\mathbf{x}_{r}\|^{2}]\right)^{T}.$$ $$(48)$$ (50) $\binom{51}{51}$ . From Lemma F.1 and F.2, we have the following inequality which is defined element-wise for s = 0, ..., I − 1 Mr,t+1 ≤ G · Mr,t + Br,t, (49) where G = maxk∈[K] ρ 2 k (1 + ζ −1 k) + η 2 l ρL · 4L 2 η 2 l ρL · 4L 2 16Iη 2 l L 2 CI (50) Br,t = 4ρLη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 4ρLη 2 l σ¯ 2 l + η 2 l σ 2ρ 2 max 16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l σ 2 n + 16m(n−m) n(n−1) Iη 2 l σ 2 k ! = b (1) b (2). (51) Consider the eigen-decomposition of matrix G, $$G=\frac{1}{16\bar{t}\eta_{l}^{2}L^{2}(\lambda_{1}-\lambda_{2})}\begin{pmatrix}\lambda_{1}-C_{\mathcal{I}}&\lambda_{2}-C_{\mathcal{I}}\\ 16\bar{t}\eta_{l}^{2}L^{2}&16\bar{t}\eta_{l}^{2}L^{2}\end{pmatrix}\cdot\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}\cdot\begin{pmatrix}16\bar{t}\eta_{l}^{2}L^{2}&-\lambda_{2}+C_{\mathcal{I}}\\ -16\bar{t}\eta_{l}^{2}L^{2}&\lambda_{1}-C_{\mathcal{I}}\end{pmatrix},$$ , (52) where we assume λ1 ≤ λ2, thus we have G jB =1 16Iη 2 l L2(λ1 − λ2) λ1 − CI λ2 − CI 16Iη 2 l L 2 16Iη 2 l L 2 λ1 0 0 λ2 16Iη 2 l L 2 −λ2 + CI −16Iη 2 l L 2 λ1 − CI b (1) b (2) =1 16Iη 2 l L2(λ1 − λ2) (λ1 − CI)(λ j 1 16Iη 2 l L 2b (1) + λ j 1 (−λ2 + CI)b (2)) + (λ2 − CI)(−λ j 2 16Iη 2 l L 2b (1) + λ j 2 (λ1 − CI)b (2)) 16Iη 2 l L 2(λ j 1 16Iη 2 l L 2b (1) + λ j 1 (−λ2 + CI)b (2)) + 16Iη 2 l L 2(−λ j 2 16Iη 2 l L 2b (1) + λ j 2 (λ1 − CI)b (2)) . (53) $$\quad(52)$$ $$(55)$$ Therefore the sum of two elements has the following result $$(1,1)G^{j}B_{r,t-j}=\lambda_{1}^{j}b^{(1)}+\lambda_{2}^{j}b^{(2)}+\frac{\lambda_{2}^{j}-\lambda_{1}^{j}}{\lambda_{2}-\lambda_{1}}\bigg{(}16T\eta_{t}^{2}L^{2}b^{(1)}+4\eta_{t}^{2}\rho_{L}L^{2}b^{(2)}\bigg{)}$$ $$\leq\lambda_{2}^{j}(b^{(1)}+b^{(2)})+\frac{\lambda_{2}^{j}-\lambda_{1}^{j}}{\lambda_{2}-\lambda_{1}}\bigg{(}16T\eta_{t}^{2}L^{2}b^{(1)}+4\eta_{t}^{2}\rho_{L}L^{2}b^{(2)}\bigg{)}.\tag{54}$$ Therefore, we have the following result $$\sum_{j=0}^{t}(1,1)G^{j}B_{r,t-j}\leq\sum_{j=0}^{t}\left(\lambda_{2}^{j}(b_{r,t-j}^{(1)}+b_{r,t-j}^{(2)})+\frac{\lambda_{2}^{j}-\lambda_{1}^{j}}{\lambda_{2}-\lambda_{1}}(16\mathbb{I}\eta_{l}^{2}L^{2}b^{(1)}+4\eta_{l}^{2}\rho_{L}L^{2}b^{(2)})\right).$$ (2). (55) Since λ2 ≥ CI > 1, we have $$\frac{\lambda_{2}^{j}-\lambda_{1}^{j}}{\lambda_{2}-\lambda_{1}}=\lambda_{2}^{l-1}\sum_{l=0}^{l-1}\left(\frac{\lambda_{1}}{\lambda_{2}}\right)^{s}\leq\lambda_{2}^{j-1}\operatorname*{min}\left\{\frac{\lambda_{2}}{\lambda_{2}-\lambda_{1}},l\right\}\leq\lambda_{2}^{j}\operatorname*{min}\left\{\frac{1}{\lambda_{2}-\lambda_{1}},l\right\},$$ , l, (56) thus we have $$\sum_{j=0}^{t}(1,1)G^{j}B_{r,t-j}\leq\sum_{j=0}^{t}\lambda_{2}^{j}(b_{r,t-j}^{(1)}+b_{r,t-j}^{(2)})+\sum_{l=0}^{s}\left(\lambda_{2}^{j}\min\left\{\frac{1}{\lambda_{2}-\lambda_{1}},l\right\}(16T_{ll}T_{ll}^{2}L^{2}b^{(1)}+4\eta_{l}^{s}\rho_{L}L^{2}b^{(2)})\right).$$ (2). $$\left(56\right)$$ ). $\binom{57}{57}$ . $$\quad(58)$$ $$(59)$$ By the definition of ρL = maxk∈[k] ρ 2 k (1 + ζk) and by the Gershgorin's theorem, since ηl > 0, we have the upper bound for λ2, $$\lambda_{2}\leq\max\Bigg{\{}\max_{k\in[K]}\rho_{k}^{2}(1+\zeta_{k}^{-1})+\eta_{l}^{2}\rho_{L}\cdot8L^{2},C_{\mathcal{I}}+16\mathcal{I}\eta_{l}^{2}L^{2}\Bigg{\}}$$ $$<\max\Bigg{\{}\max_{k\in[K]}\rho_{k}^{2}(1+\zeta_{k}^{-1})+\frac{\rho_{L}}{(4\mathcal{I}-1)2\mathcal{I}},1+\frac{2}{4\mathcal{I}-1}\Bigg{\}},$$ where the last inequality holds by the bound of η 2 l ≤1 64I2L2 <1 16I(4I−1)L2 . Define a distance constant HI,ρ = min I,1 1−ρmax . Next we consider two cases: *small or dense* communication network with ρmax ≤ 1 − 1 I and *large and sparse* communication network with ρmax > 1 − 1 I . Case 1: For ρmax ≤ 1 − 1 I , i.e., 1 1−ρmax ≤ I, thus we have HI,ρ =1 1−ρmax . Let ζk =ρk 1−ρk , then we have $$\operatorname*{max}_{k\in[k]}\rho_{k}^{2}(1+\zeta_{k}^{-1})=\rho_{\mathrm{max}},\quad\rho_{L}=\operatorname*{max}_{k\in[k]}\left\{\frac{\rho_{k}^{2}}{1-\rho_{k}}\right\}=\frac{\rho_{\mathrm{max}}^{2}}{1-\rho_{\mathrm{max}}}=\rho_{\mathrm{max}}^{2}H_{\mathcal{I},\rho},$$ where the middle part of the second equality holds by the monotonically increasing of x 2 1−x . Then the bound for λ2 is formalized as $$\lambda_{2}\leq\max\left\{\rho_{\max}+\frac{\rho_{\max}^{2}}{(1-\rho_{\max})2\overline{I}(4\overline{I}-1)},1+\frac{3}{2(4\overline{I}-1)}\right\}$$ $$\leq\max\left\{1-\frac{1}{\overline{I}}+\frac{(1-\frac{1}{\overline{I}})^{2}}{2(\overline{I}-1)},1+\frac{3}{2(4\overline{I}-1)}\right\}$$ $$<1+\frac{3}{4\overline{I}-1},\tag{6}$$ $$(60)$$ where the second inequality holds by ρmax ≤ 1 − 1 I . Then by s ≤ I and λ2 ≥ 1 (just by the definition of matrix G can get this result), we can obtain the following bound $$\sum_{j=0}^{t}\lambda_{j}^{j}b_{t-j}^{1}\leq\left(\left(1+\frac{2}{4\mathcal{I}-1}\right)^{\mathcal{I}}\right)\cdot\sum_{j=0}^{t}b_{j}^{(1)}\leq3\cdot\sum_{j=0}^{t}b_{j}^{(1)}.$$ We also have $$\rho_{\rm max}+\eta^{2}\rho_{L}4L^{2}\leq\rho_{\rm max}+\frac{\rho_{\rm max}}{(1-\rho_{\rm max})(4\mathcal{I}-1)4\mathcal{I}}\leq1-\frac{1}{\mathcal{I}}+\frac{(1-\frac{1}{\mathcal{I}})^{2}}{4(4\mathcal{I}-1)}\leq C_{\mathcal{I}},\tag{62}$$ where the second inequality holds by the upper bound for ρmax. By the definition of matrix G, we bound the difference of λ2 − λ1, $$\lambda_{2}-\lambda_{1}$$ $\rho_{1}=C_{\mathcal{I}}-\rho_{\max}-\eta_{l}^{2}\rho_{L}4L^{2}$ $\geq C_{\mathcal{I}}-\left(\rho_{\max}+\frac{\rho_{\max}}{(1-\rho_{\max})(4\mathcal{I}-1)4\mathcal{I}}\right)$ $\geq C_{\mathcal{I}}-\left(\rho_{\max}+\rho_{\max}\cdot\frac{1-\frac{1}{\mathcal{I}}}{4(4\mathcal{I}-1)}\right)$ $\geq1+\frac{1}{4\mathcal{I}-1}-\left(\rho_{\max}+\rho_{\max}\cdot\frac{1}{4\mathcal{I}-1}\right)$ $=(1-\rho_{\max})\bigg{(}1+\frac{1}{4\mathcal{I}-1}\bigg{)}$ $\geq1-\rho_{\max}$. $$(61)$$ $$(63)$$ where the first and second inequality hold by the defined notations. Then we have Xt j=0 (1, 1)G jBr,t−j ≤ Xt j=0 λ j 2 (b (1) r,t−j + b (2) r,t−j ) +Xt j=0 λ j 2 min 1 λ2 − λ1 , j16Iη 2 l L 2b (1) r,t−j + 4η 2 l ρLL 2b (2) r,t−j ≤ Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 3η 2 l 16IL 2b (1) r,j + 4ρLL 2b (2) r,j min 1 λ2 − λ1 , I ≤ Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 3η 2 l 16IL 2b (1) r,j + 4H2 I,ρρ 2 maxL 2b (2) r,j ≤ Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 HI,ρ 16I(4I − 1) ·48Ib (1) r,j + 12HI,ρρ 2 maxb (2) r,j ≤ Xt j=0 4(b (1) r,j + b (2) r,j ) +Xt j=0 H2 I,ρ I 2· ρ 2 maxb (2) r,j . (64) Then by the definition of b (1) and b (2), we have Xt j=0 (1, 1)G jBr,t−j ≤ Xt j=0 4 (4ρLη 2 l + 16Iη 2 l )(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 4ρLη 2 l σ¯ 2 l + η 2 l σ 2ρ 2 max + η 2 l σ 2 n + 16 m(n − m) n(n − 1) Iη 2 l σ¯ 2 l + Xt j=0 H2 I,ρ I 2· ρ 2 max16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + η 2 l σ 2 n + 16 m(n − m) n(n − 1) Iη 2 l σ¯ 2 l = (t + 1)4(4ρLη 2 l + 16Iη 2 l ) + H2 I,ρ I 2· ρ 2 max16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 16ρLη 2 l σ¯ 2 l + 16η 2 l σ 2ρ 2 max + 4η 2 l + H2 I,ρ I 2· ρ 2 maxη 2 l σ 2 n + 16 m(n − m) n(n − 1) Iσ¯ 2 l ≤ (t + 1)16ρ 2 maxHI,ρη 2 l + 64Iη 2 l + 16η 2 l HI,ρ · ρ 2 max(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 16ρ 2 maxHI,ρη 2 l σ¯ 2 l + 16η 2 l σ 2ρ 2 max + 4η 2 l + H2 I,ρ I 2· ρ 2 maxη 2 l σ 2 n + 16 m(n − m) n(n − 1) Iσ¯ 2 l ≤ (t + 1)C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + (t + 1)C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + (t + 1)C1η 2 l σ 2ρ 2 max + (t + 1)C1 1 + H2 I,ρ I 2· ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l , (65) where C1 is some universal constant. The inequality holds by ρL = ρ 2 maxHI,ρ and HI,ρ ≤ I. Case 2: In this case we have ρmax > 1 − We we have $\rho_{\max}>1-\frac{1}{2}$, which means $\tilde{H}_{\mathcal{I},\rho}=\tilde{\mathcal{I}}$. Let $\zeta_{k}^{\ \ \ }=(4\mathcal{I}-1)$, thus we have $$\max_{k\in[K]}\rho_{k}^{2}(1+\zeta_{k}^{-1})=\rho_{\max}^{2}(1+(4\mathcal{I}-1)^{-1}),\quad\rho_{L}=4\mathcal{I}\rho_{\max}^{2}H_{\mathcal{I},\rho}.$$ The upper bound for λ2 has the form of $$\lambda_{2}\leq\max\left\{\max_{k\in[K]}\rho_{k}^{2}(1+\zeta_{k}^{-1})+\eta_{l}^{2}\rho_{L}\cdot8L^{2},C_{\mathcal{I}}\right\}$$ $$\leq\max\left\{\rho_{\max}^{2}(1+(4\mathcal{I}-1)^{-1})+\frac{2\rho_{\max}^{2}}{4\mathcal{I}-1},1+\frac{3}{2(4\mathcal{I}-1)}\right\}$$ $$\leq1+\frac{3}{4\mathcal{I}-1}.$$ In fact, $$(66)$$ $$(67)$$ By the fact of min 1 λ2−λ1 , l ≤ I = HI,ρ, we have Xt j=0 (1, 1)G jBr,t−j ≤ Xt j=0 λ j 2 (b (1) r,t−j + b (2) r,t−j ) +Xt j=0 λ j 2 min 1 λ2 − λ1 , lη 2 l · 4ρLL 2b 2 r,t−j ≤ Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 η 2 l · 16ρmaxHI,ρL 2b (2) l· 3HI,ρ ≤ Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 16ρmaxHI,ρb (2) l· 3 16 HI,ρ I 2 = Xt j=0 3(b (1) r,j + b (2) r,j ) +Xt j=0 3ρmaxb (2) l· H2 I,ρ I 2 , (68) where the above inequalities hold by the fact that ρL = 4Iρ 2 max = 4HI,ρρ 2 max and the constraint on step size ηl. Thus we can get a similar upper bound as Eq. 65 in Case 1. This concludes the proof. Lemma F.4. *With the similar condition in Lemma F.3, we have the corresponding bound for the intracluster consensus error* ∥X k,⊥ r,t ∥ 2 F , 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] ≤ (t + 1)C1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + (t + 1)C1η 2 l HI,ρρ 2 maxσ¯ 2 l + (t + 1)C1η 2 l ρ 2 maxσ 2 + (t + 1)C1 H2 I,ρ I 2 η 2 l ρ 2 max σ 2 n + (t + 1)C1 m(n − m) n(n − 1) H2 I,ρ Iη 2 l ρ 2 maxσ¯ 2 l . (69) Proof. With the same definition of the auxiliary vector Mr,t and the matrix G and Br,t in the proof of Lemma F.3, there is $$M_{r,t}=\left(\frac{1}{N}\sum_{k=1}^{K}\mathbb{E}[\|X_{r,t}^{k,\perp}\|^{2}],\quad\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[\|\tilde{\mathbf{x}}_{r,t}^{k}-\mathbf{x}_{r}\|^{2}]\right)^{\top},$$ $$M_{r,t}=G^{t+1}M_{r,0}+\sum_{j=0}^{t}G^{j}B_{r,t-j}=\sum_{j=0}^{t}G^{j}B_{r,t-j},\tag{70}$$ hence we have 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] = (1, 0) · Mr,t = (1, 0) · Xt j=0 G jBr,t−j = Xt j=0 λ j 1 b (1) r,j + λ j 2 − λ j 1 λ2 − λ1 4η 2 l ρLL 2b (2) r,j ≤ Xt j=0 λ j 2 b (1) r,j + λ j 2 − λ j 1 λ2 − λ1 4η 2 l ρLL 2b (2) r,j ≤ Xt j=0 λ j 2 b (1) r,j + λ j 2 − λ j 1 λ2 − λ1 4η 2 l ρLL 2b (2) r,j , (71) with the similar proof techniques as in Lemma F.3, there is 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] ≤ Xt j=0 3b (1) r,j + 1 I 2 H2 I,ρρ 2 maxb (2) r,j ≤ Xt j=0 12ρLη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 12ρLη 2 l σ¯ 2 l + 3η 2 l σ 2ρ 2 max + H2 I,ρ I 2 ρ 2 max16Iη 2 l (α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + 16m(n − m) n(n − 1) Iη 2 l σ¯ 2 l + η 2 l σ 2 n ≤ (t + 1)C1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)∥ 2] + σ 2 c ) + (t + 1)C1η 2 l HI,ρρ 2 maxσ¯ 2 l + (t + 1)C1η 2 l ρ 2 maxσ 2 + (t + 1)C1η 2 l H2 I,ρ I 2 ρ 2 max σ 2 n + (t + 1)C1 m(n − m) n(n − 1) η 2 l H2 I,ρ Iρ 2 maxσ¯ 2 l . (72) This concludes the proof. ## F.3 Lemmas For Model Difference ∆R There is a corresponding Lemma about model difference ∆r for the partial participation settings. Lemma F.5. The global model difference ∆r =PK k=1 Pi∈cSr ∆ir in partial participation settings satisfies E[∥∆r∥ 2] ≤ 2η 2 l I Nσ 2 + 2η 2 l (I − 1) I− X 1 t=0 E 1 N X N i=1 ∇fi(x i r,t) 2+ 4η 2 l E 1 K X K k=1 ∇ ¯fk(x¯ k r,I−1 ) 2 + 8n − m m(n − 1) + η 2 l L 2 1 N X K k=1 E[∥X k,⊥ r,I−1 ∥ 2] + 2η 2 l σ 2 N n − m m· ρ 2 max. (73) Proof. Recall the definition of x¯r,t, there is x¯r,t =1N PN i=1 x i r,t (here we don't need to consider of client sampling) and the intra-cluster average x¯ k r,t+1 = x¯ k r,t − ηlx¯ k r,t+1, where x¯ k r,t+1 = 1 n Pi∈Vk g i r,t. For the model difference ∆r, we have E[∥∆r∥ 2] = E 1 K X K k=1 1 m X i∈Sk r x i r,I − xr 2 = E 1 K X K k=1 1 m X i∈Sk r x i r,I ∓ x¯r,I ∓ x¯r,I−1 ∓ · · · ∓ x¯r,1 − xr 2 ≤ 2E 1 N p X K k=1 X i∈Sk r x i r,I − x¯r,I 2+ 2E x¯r,I ∓ x¯r,I−1 ∓ · · · ∓ x¯r,1 − xr 2, (74) where the inequality holds by Cauchy-Schwarz inequality. For the first term in Eq. 74, by the probability of the sampling strategy (see details in Lemma F.6), we have E 1 N p X K k=1 X i∈Sk r x i r,I − x¯r,I 2=1 (N p) 2 E X K k=1 X i∈Sk r x i r,I − x¯r,I 2 ≤1 (N p) 2 E m(m − 1) n(n − 1) X K k=1 X i∈Vk x i r,I − x¯r,I 2 + Km(n − m) n(n − 1) X K k=1 X i∈Vk ∥x i r,I − x¯r,I∥ 2 =K (N p) 2 m(n − m) n(n − 1) X K k=1 X i∈Vk E[∥x i r,I − x¯r,I∥ 2]. (75) For the second part in Eq. 74, we have E x¯r,I ± · · · ± x¯r,1 − xr 2 = E ηl I− X 1 t=0 g¯r,t 2 = E ηl N I− X 1 t=0 X K k=1 X i∈Sk r g i r,t 2 = E ηl N I− X 1 t=0 X K k=1 X i∈Sk r (g i r,t ± ∇fi(x i r,t)) 2 = E ηl N I− X 1 t=0 X K k=1 X i∈Sk r (g i r,t − ∇fi(x i r,t)) 2+ E ηl N I− X 1 t=0 X K k=1 X i∈Sk r ∇fi(x i r,t) 2 ≤ η 2 l (I − 1)M N2σ 2 + E ηl N I− X 1 t=0 X K k=1 X i∈Sk r ∇fi(x i r,t) 2, (76) where E ηl N I− X 1 t=0 X K k=1 X i∈Sk r ∇fi(x i r,t) 2 = E I− X 1 t=0 ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) ∓ ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) ∓ ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 ≤ 2E I− X 1 t=0 ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) − ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 + 4E I− X 1 t=0 ηlm N X K k=1 ∇ ¯fk(x¯ k r,t) 2+ 4E I− X 1 t=0 ηl N X K k=1 X i∈Sk r [∇fi(x i r,t) − ∇fi(x¯ k r,t)] 2 ≤ 2 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) − ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 + 4(I − 1) I− X 1 t=0 E ηlm N X K k=1 ∇ ¯fk(x¯ k r,t) 2+ 4(I − 1) I− X 1 t=0 L 2 η 2 l M N2 X K k=1 X i∈Sk r E[∥x i r,t − x¯ k r,t∥ 2], (77) therefore, E ηl N I− X 1 t=0 X K k=1 X i∈Sk r ∇fi(x i r,t) 2 ≤ 4 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) 2+ 4 I− X 1 t=0 E ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 + 4(I − 1) I− X 1 t=0 E ηlm N X K k=1 ∇ ¯fk(x¯ k r,t) 2+ 4(I − 1)L 2 η 2 l M N2 I− X 1 t=0 X K k=1 X i∈Sk r E[∥x i r,t − x¯ k r,t∥ 2] = 4 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) 2+ 4I I− X 1 t=0 E ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 I− X 1 t=0 X K + 4(I − 1)L 2 η 2 l M N2 k=1 X i∈Sk r E[∥x i r,t − x¯ k r,t∥ 2] ≤ 8 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) 2+ 4I I− X 1 t=0 E ηlM N2 X N i=1 ∇fi(x¯ k r,t) 2 I− X 1 t=0 X K + 4(I + 1)L 2 η 2 l M N2 k=1 X i∈Sk r E[∥x i r,t − x¯ k r,t∥ 2], (78) where the last inequality follows 4 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x¯ k r,t) 2 = 4 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r [∇fi(x¯ k r,t) ∓ ∇fi(x i r,t)] 2 ≤ 8 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) 2+ 8 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r [∇fi(x¯ k r,t) − ∇fi(x i r,t)] 2 ≤ 8 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) 2+ 8 η 2 l M N2 L 2 I− X 1 t=0 X K k=1 X i∈Sk r E[∥x i r,t − x¯ k r,t∥ 2], (79) and we use the characteristic of conditional expectation, where we use the characteristic of conditional expectation, i.e., E[ESs [ ηl N PK k=1 Pi∈Sk r ∇fi(x¯ k r,t)]] = E[ ηlM N2PK k=1 Pi∈Vk ∇fi(x¯ k r,t)], Update the expectation term, that is E[ESs [ ηl N PK k=1 Pi∈Sk r ∇fi(x¯ k r,t) − ηlM N2PN i=1 ∇fi(x¯ k r,t)]] = 0 and ∀r ̸= s, i ∈ Sk r is independent with i ∈ Sk r . Then we have 8 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) 2 = 8η 2 l N2 I− X 1 t=0 E X K k=1 X i∈Vk P{i ∈ Sk r }∇fi(x i r,t) 2 ≤ 8η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X K k=1 X i∈Vk E[∥∇fi(x i r,t)∥ 2] + 8η 2 l N2 m(m − 1) n(n − 1) I− X 1 t=0 E X K k=1 X i∈Vk ∇fi(x i r,t) 2 = 8η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X N i=1 E[∥∇fi(x i r,t)∥ 2] + 8η 2 l N2 m(m − 1) n(n − 1) I− X 1 t=0 E X N i=1 ∇fi(x i r,t) 2, (80) where the inequality holds by Lemma F.6. I− X 1 t=0 X N i=1 E[∥∇fi(x i r,t)∥ 2] = I− X 1 t=0 X N i=1 E[∥∇fi(x i r,t) ∓ ∇fi(x¯ k r,t) ∓ ¯fk(x¯ k r,t) ∓ ¯fk(xr)∥ 2] ≤ I− X 1 t=0 X K k=1 E[4L 2∥X k,⊥ r,t ∥ 2 + 4L 2n∥x¯ k r,t − xr∥ 2 + 4n∥∇ ¯fk(xr)∥ 2 + 4nσ2 k ] ≤ I− X 1 t=0 X K k=1 -4L 2E[∥X k,⊥ r,t ∥ 2] + 4L 2nE[∥x¯ k r,t − xr∥ 2] + 4nα2E[∥∇f(xr)∥ 2] + 4nσ2 k + 4nσ2 c . (81) Then combining the previous terms, we have E[∥∆r∥ 2] ≤ 2E 1 N p X K k=1 X i∈Sk r x i r,I − x¯r,I 2+ 2E x¯r,I ∓ x¯r,I−1 *∓ · · · ∓* x¯r,1 − xr 2 ≤2K (N p) 2 m(n − m) n(n − 1) X K k=1 X i∈Vk E[∥x i r,I − x¯ k r,I∥ 2] + 2η 2 l IM N2σ 2 + 2E ηl N I− X 1 t=0 X K k=1 X i∈Sk r ∇fi(x i r,t) 2 ≤2K (N p) 2 m(n − m) n(n − 1) X K k=1 X i∈Vk E[∥x i r,I − x¯ k r,I∥ 2] + 2η 2 l IM N2σ 2 + 16 I− X 1 t=0 E ηl N X K k=1 X i∈Sk r ∇fi(x i r,t) 2+ 8I I− X 1 t=0 E ηlm N X K k=1 ∇ ¯fk(x¯ k r,t) 2 I− X 1 t=0 X K + 8(I + 1)η 2 l ML2 N2 k=1 X i∈Sr E[∥x i r,t − x¯ k r,t∥ 2] ≤2K (N p) 2 m(n − m) n(n − 1) X K k=1 X i∈Vk E[∥x i r,I − x¯ k r,I∥ 2] + 2η 2 l IM N2σ 2 + 16η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X N i=1 E[∥∇fi(x i r,t)∥ 2] + 16η 2 l N2 m(m − 1) n(n − 1) I− X 1 t=0 E X N i=1 ∇fi(x i r,t) 2 + 8I I− X 1 t=0 E ηlm N X K I− X 1 t=0 X K k=1 ∇ ¯fk(x¯ k r,t) 2+ 8(I + 1)η 2 l ML2 N2 k=1 X i∈Sr E[∥x i r,t − x¯ k r,t∥ 2] ≤2K (N p) 2 m(n − m) n(n − 1) X K k=1 E[∥X k,⊥ r,I ∥ 2] + 2η 2 l IM N2σ 2 + 16η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X K k=1 4L 2E[∥X k,⊥ r,t ∥ 2] + 4L 2nE[∥x¯ k r,t − xr∥ 2] + 4nα2E[∥∇f(xr)∥ 2] + 4nσ2 k + 4nσ2 c + 16η 2 l N2 m(m − 1) n(n − 1) I− X 1 t=0 E X N i=1 ∇fi(x i r,t) 2 + 8I I− X 1 t=0 E ηlm N X K I− X 1 t=0 X K k=1 ∇ ¯fk(x¯ k r,t) 2+ 8(I + 1)η 2 l ML2 N2 k=1 X i∈Sr E[∥x i r,t − x¯ k r,t∥ 2], (82) where we have 16η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X K k=1 4L 2E[∥X k,⊥ r,t ∥ 2] + 4L 2nE[∥x¯ k r,t − xr∥ 2] = 16η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X K k=1 4L 2E[∥X k,⊥ r,t ∥ 2] + 4L 2nE[∥x¯ k r,t − xr∥ 2] = 64η 2 l L 2 N m(n − m) n2(n − 1) I− X 1 t=0 X K k=1 E[∥X k,⊥ r,t ∥ 2] + 64η 2 l L 2 K m(n − m) n2(n − 1) I− X 1 t=0 X K $$(84)$$ k=1 E[∥x¯ k r,t − xr∥ 2] = 64η 2 l L 2 m(n − m) n2(n − 1) I− X 1 t=0 1 N X K k=1 E[∥X k,⊥ r,t ∥ 2] + 1K X K k=1 E[∥x¯ k r,t − xr∥ 2] ≤ 64η 2 l L 2 m(n − m) n2(n − 1) I 2C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2 + σ 2 c ) + I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + I 2C1η 2 l σ 2ρ 2 max + I 2C1 1 + H2 I,ρ I 2· ρ 2 maxη 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l , (83) we have 2K (N p) 2 m(n − m) n(n − 1) X K k=1 E[∥X k,⊥ r,I ∥ 2] =2 Kn (n − m) m(n − 1) X K k=1 E[∥X k,⊥ r,I ∥ 2] ≤ 2(n − m) m(n − 1)IC1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)]∥ 2 + σ 2 c ) + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l , (84) and I− X 1 t=0 X K 8(I + 1)η 2 l ML2 N2 k=1 X i∈Sr E[∥x i r,t − x¯ k r,t∥ 2] I− X 1 t=0 X K ≤ 8(I + 1)η 2 l ML2 N2 k=1 X i∈Vk E[∥x i r,t − x¯ k r,t∥ 2] I− X 1 t=0 1 N X K = 8(I + 1)η 2 l mL2 n k=1 E[∥X k,⊥ r,t ∥ 2] I− X 1 = 8(I + 1)η 2 l mL2 n t=0 (t + 1)C1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)]∥ 2 + σ 2 c ) + (t + 1)C1η 2 l HI,ρρ 2 maxσ¯ 2 l + (t + 1)C1η 2 l ρ 2 maxσ 2 + (t + 1)C1η 2 l H2 I,ρ I 2· ρ 2 max σ 2 n = 4(I + 1)2 η 2 l mL2 n IC1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)]∥ 2 + σ 2 c ) + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l . (85) $$(85)$$ Then by merging pieces together, we have E[∥∆r∥ 2] ≤ 2η 2 l IM N2σ 2 + 2(n − m) m(n − 1) + 4(I + 1)2η 2 l mL2 n IC1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)]∥ 2 + σ 2 c ) + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2 ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64η 2 l L 2 m(n − m) n2(n − 1) I 2C1η 2 l (I + ρ 2 maxHI,ρ)(α 2E[∥∇f(xr)∥ 2 + σ 2 c ) + I 2C1ρ 2 maxHI,ρη 2 l σ¯ 2 l + I 2C1η 2 l σ 2ρ 2 max + I 2C1η 2 l 1 + H2 I,ρ I 2· ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 16η 2 l K N2 m(n − m) n(n − 1) I− X 1 t=0 X K k=1 [4nσ2 k + 4nσ2 c ] + 16η 2 l N2 m(m − 1) n(n − 1) I− X 1 t=0 E X N i=1 ∇fi(x i r,t) 2+ 8η 2 l Im2 n2 I− X 1 t=0 E 1 K X K k=1 ∇ ¯fk(x¯ k r,t) 2 ≤ 2η 2 l IM N2σ 2 + 64η 2 l Im(n − m) n2(n − 1) [¯σ 2 l + σ 2 c ] + 2(n − m) m(n − 1) + 64η 2 l L 2 m(n − m) n2(n − 1) I + 4(I + 1)2η 2 l mL2 n IC1η 2 l HI,ρρ 2 max(α 2E[∥∇f(xr)]∥ 2 + σ 2 c ) + IC1η 2 l HI,ρρ 2 maxσ¯ 2 l + IC1η 2 l ρ 2 maxσ 2 + IC1η 2 l H2 I,ρ I 2 ρ 2 maxσ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l + 64η 2 l L 2 m(n − m) n2(n − 1) I 3C1η 2 l (α 2E[∥∇f(xr)∥ 2 + σ 2 c ) + I 2C1η 2 l σ 2 n + m(n − m) n(n − 1) Iσ¯ 2 l I− X 1 t=0 E 1 N X N I− X 1 t=0 E 1 K X K i=1 ∇fi(x i r,t) 2+ 8η 2 l Im2 n2 k=1 ∇ ¯fk(x¯ k r,t) 2. (86) + 16η 2 l m(m − 1) n(n − 1) Thus it concludes the proof. ## F.4 Additional Supporting Lemmas Lemma F.6 (Cluster sampling). *For model weights* y k,i r, ∀k ∈ [K], i ∈ Vk, r ∈ [R], t ∈ [I]*, there is* $$\mathbb{E}\bigg{[}\bigg{\|}\sum_{k\in[K]}\sum_{t=1}^{n}\mathbb{I}\{i\in\mathcal{S}_{r}^{k}\}\boldsymbol{y}_{r}^{k,i}\bigg{\|}^{2}\bigg{]}\leq\mathbb{E}\bigg{[}\frac{m(m-1)}{n(n-1)}\bigg{\|}\sum_{k=1}^{K}\sum_{i\in\mathcal{V}_{k}}\boldsymbol{y}_{r}^{k,i}\bigg{\|}^{2}+\frac{Km(n-m)}{n(n-1)}\sum_{k=1}^{K}\sum_{i\in\mathcal{V}_{k}}\|\boldsymbol{y}_{r}^{k,i}\|^{2}\bigg{]}.\tag{87}$$ Proof. E X K k=1 X i∈Vk I{i ∈ Sk r }y k,i r 2 = P{i ∈ Sk r }E X K k=1 X i∈Vk y k,i r 2+ P{i ̸= j ∈ Sk r }E X K k=1 X i̸=j∈Vk y k,i r′y k,j r + P{i ∈ Sk r , j ∈ St l |k ̸= l ∈ [K]}E X k̸=l X i∈Vk X j∈Vl y i,k r′y j,l r = E m n X K k=1 X i∈Vk y k,i r 2+ m(m − 1) n(n − 1) X K k=1 X i̸=j∈Vk y k,i r′y k,j r + m2 n2 X k̸=l X i∈Vk X j∈Vl y i,k r′y j,l r = E m(m − 1) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n2(n − 1) X k̸=l X i∈Vk X j∈Vl y i,k r′y j,l r ≤ E m(m − 1) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n2(n − 1) X k̸=l X i∈Vk X j∈Vl 1 2 y i,k r 2+ 1 2 y j,l r 2, (88) where the third equation holds by the probability of random sampling with replacement, i.e., P{i ∈ Sk r } = m n , P{i ̸= j ∈ Sk r } = m(m−1) n(n−1) , P{i ∈ Sk r , j ∈ St l |k ̸= l ∈ [K]} = m2 n2 . The forth equation holds by ⟨a, b⟩ = 1 2 [∥a∥ 2 + ∥b∥ 2 − ∥a − b∥ 2], 1 2 Pi̸=j ∥ai − aj∥ 2 =Pn i=1 n∥ai∥ 2 − ∥Pn i=1 ai∥ 2, and ∥PK k=1 Pi∈Vk y k,i r ∥ 2 = PK k=1 ∥Pi∈Vk y k,i r ∥ 2 +Pk̸=l Pi∈Vk Pj∈Vl ⟨y k,i r y l,j r ⟩. The last inequality holds by a ′b ≤ 1 2 ∥a∥ 2 + 1 2 ∥b∥ 2. Re-organize the last item, $$\frac{m(n-m)}{n^{2}(n-1)}\sum_{k\neq l}\sum_{i\in\mathcal{V}_{k}}\sum_{j\in\mathcal{V}_{l}}\left(\frac{1}{2}\|y_{r}^{i,k}\|^{2}+\frac{1}{2}\|y_{r}^{j,l}\|^{2}\right)=\frac{m(n-m)}{n^{2}(n-1)}(K-1)n\sum_{k=1}^{K}\sum_{i\in\mathcal{V}_{k}}\|\mathbf{y}_{r}^{k,i}\|^{2},\tag{89}$$ $$(88)$$ then we have E X k∈[K] Xn i=1 I{i ∈ Sk r }y k,i r 2 ≤ E m(m − 1) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + m(n − m) n2(n − 1) (K − 1)n X K k=1 X i∈Vk y i r 2 = E m(m − 1) n(n − 1) X K k=1 X i∈Vk y k,i r 2 + Km(n − m) n(n − 1) X K k=1 X i∈Vk y k,i r 2. (90) This concludes the proof. Lemma F.7 (Lemma for momentum term in the update rule). The first order momentum terms mr in Algorithm 1 hold the following relationship w.r.t. model difference ∆r: $$\sum_{r=1}^{R}\mathbb{E}[\|\mathbf{m}_{r}\|^{2}]\leq\sum_{r=1}^{R}\mathbb{E}[\|\Delta_{r}\|^{2}].\tag{1}$$ $$(91)$$ Proof. By the updating rule, we have E[∥mr∥ 2] = E (1 − β1) Xr u=1 β r−u 1 ∆u 2 ≤ (1 − β1) 2X d i=1 E Xr u=1 β r−u 1 ∆u,i2 ≤ (1 − β1) 2X d i=1 E Xr u=1 β r−u 1 Xr u=1 β r−u 1(∆u,i) 2 ≤ (1 − β1) Xr u=1 β r−u 1 E[∥∆u∥ 2], (92) summing over t = 1*, ..., T* yields $$\sum_{r=1}^{R}\mathbb{E}[\|\mathbf{m}_{r}\|^{2}]=(1-\beta_{1})\sum_{r=1}^{R}\sum_{u=1}^{r}\beta_{1}^{r-u}\mathbb{E}[\|\Delta_{u}\|^{2}]$$ $$=(1-\beta_{1})\sum_{u=1}^{R}\sum_{r=u}^{R}\beta_{1}^{r-u}\mathbb{E}[\|\Delta_{u}\|^{2}]$$ $$\leq(1-\beta_{1})\sum_{u=1}^{R}\frac{1}{1-\beta_{1}}\mathbb{E}[\|\Delta_{u}\|^{2}]$$ $$=\sum_{u=1}^{R}\mathbb{E}[\|\Delta_{u}\|^{2}].$$ $$(92)$$ $$(93)$$ This concludes the proof. Lemma F.8. Under Assumptions 4.3, for AFGA and CAFGA, we have ∥∇f(x)∥ ≤ G, ∥∆r∥ ≤ mηlIG n, ∥mr∥ ≤ mηlIG n, ∥vr∥ ≤ m2η 2 l I 2G2 n2 and ∥vbr∥ ≤ m2η 2 l I 2G2 n2 . Proof. Since f has G-bounded stochastic gradients, for any x and ξ, we have ∥∇f(x, ξ)∥ ≤ G, we have ∥∇f(x)∥ = ∥Eξ∇f(x, ξ)∥ ≤ Eξ∥∇f(x, ξ)∥ ≤ G. For AFGA and CAFGA, the model difference ∆¯ k r on cluster k satisfies, $$\bar{\Delta}_{r}^{k}=\bar{\mathbf{x}}_{r,\mathcal{I}}^{k}-\mathbf{x}_{r}=-\eta_{l}\sum_{t=0}^{\mathcal{I}-1}\bar{\mathbf{g}}_{r,t}^{k}=-\eta_{l}\sum_{t=0}^{\mathcal{I}-1}\frac{1}{n}\sum_{i\in\mathcal{V}_{k}}\bar{\mathbf{g}}_{r,t}^{i}=-\eta_{l}\sum_{t=0}^{\mathcal{I}-1}\frac{1}{n}\sum_{i\in\mathcal{S}_{r,t}^{k}}\mathbf{g}_{r,t}^{i},$$ therefore, $$\left\|\bar{\Delta}_{r}^{k}\right\|=\left\|\eta_{l}\sum_{t=0}^{\mathcal{I}-1}\bar{g}_{r,t}^{k}\right\|=\left\|\eta_{l}\sum_{t=0}^{\mathcal{I}-1}\frac{1}{n}\sum_{i\in\mathcal{S}_{r,t}^{k}}g_{r,t}^{i}\right\|\leq\frac{m\eta_{l}\mathcal{I}G}{n},$$ for the global model difference ∆r, $$\|\Delta_{r}\|=\left\|{\frac{1}{K}}\sum_{k\in[K]}\bar{\Delta}_{r}^{k}\right\|\leq{\frac{m\eta_{l}{\mathcal{I}}G}{n}}.$$ Thus we can obtain the bound for momentum mr and variance vr, $$\|\mathbf{m}_{r}\|=\left\|(1-\beta_{1})\sum_{s=1}^{r}\beta_{1}^{r-s}\Delta_{s}\right\|\leq{\frac{m\eta_{1}\mathcal{I}G}{n}},\quad\|\mathbf{v}_{r}\|=\left\|(1-\beta_{2})\sum_{s=1}^{r}\beta_{2}^{r-s}\Delta_{s}^{2}\right\|\leq{\frac{m^{2}\eta_{1}^{2}\mathcal{I}^{2}G^{2}}{n^{2}}}.$$ By the updating rule of vbr, there exists a j ∈ [r] such that vbr = vj . Then This concludes the proof. $$\|\widehat{\mathbf{v}}_{r}\|\leq\frac{m^{2}\eta_{l}^{2}\mathcal{I}^{2}G^{2}}{n^{2}}.\tag{94}$$ Lemma F.9. *For the variance difference sequence* Vb −1/2 r−1 − Vb −1/2 r *, we have* $$\sum_{r=1}^{R}\left\|\widehat{V}_{r-1}^{-1/2}-\widehat{V}_{r}^{-1/2}\right\|_{1}\leq\frac{d}{\sqrt{\epsilon}},\quad\sum_{r=1}^{R}\left\|\widehat{V}_{r-1}^{-1/2}-\widehat{V}_{r}^{-1/2}\right\|^{2}\leq\frac{d}{\epsilon}\tag{95}$$ Proof. The proof of Lemma F.9 is exactly the same as the proof of Lemma C.2 in (Wang et al., 2022b).
Review 1: Summary: This paper introduces a novel adaptive federated learning framework, Adaptive Federated learning with Local Gossip Averaging (AFGA), aimed at addressing the data heterogeneity problem prevalent in adaptive federated learning. The proposed framework incorporates a client re-sampling strategy and decentralized peer-to-peer gossip communications to mitigate the dissimilarity in data distribution across clients, thus enhancing model convergence without requiring additional communication or gradient computation costs. The paper also includes a theoretical discussion and emprical evidence. Strengths and Weaknesses: Strengths: + the paper has a fairly comprehensive discussion existing methods + the paper has most of the basic components including the theoretical discussion and ablation studies. Weakness: - The theoretical discussion introduces several new assumptions, some assumptions might not be so realistic. - It's unclear why the authors choose ConvMixer as the main experimental architecture, instead of more population choices (typically used in Federated Learning) Requested Changes: 1. The paper reports "the average of the last 5 global rounds to represent final accuracy", please be more explicit about the rationales or the customs behind it. 2. Given the fact that the experimental setting might be too specific (due to the nature of Federated learning), it is a good idea to include error bar (std) in the reported tables. 3. The theoretical results relies on several assumption, in particular, please offer more dicussions on how realistic the Assumption 4.5 is, in theory and experimental (through simulations). Otherwise, it will be hard to evaluate the theoretical contributions. In addition, it seems graph G is never mentioned ahead of Assumption 4.5. 4. minor: Table 5 formatting issues. Broader Impact Concerns: none noted. ================================================== Review 2: Summary: In this work, the authors propose a new method for adaptive optimization in federated learning. The method is based on peer-to-peer gossip communications between local clients. With this additional step of local averaging, the authors show that the proposed method outperforms prior baselines for adaptive optimization in FL. Strengths and Weaknesses: Strengths: - The work comes with provable convergence guarantees. - The paper is generally well-written and easy to understand. - Empirical results seems solid. Weaknesses: - Having an entire gossip matrix that is shared across clients is a very strong assumption and does not seem practical. - From the appendix it seems that setting $\rho=0$ defaults the best performance? Or am I missing anything? Relation between $\rho$ and utility is missing. - The convergence bound seems worse than the author claimed. Equation 3 is only dominated by $O\left(\frac{1}{\sqrt{R\mathcal{I}M}}\right)$ if $R>\sqrt{R\mathcal{I}M}$. Otherwise it is at least $O\left(\frac{1}{\sqrt{R\mathcal{I}M}}\right)+O\left(\frac{1}{R}\right)$. And this is given $\rho=0$. Also, I don't think omitting parameters such as $M,N,L$ is good in order to understand the bound. I suggest the authors to clean up the bound so that it only contains $R, L, N$ and subsampling rate $q=M/N$ or $M$ itself. - Explanation of 'R#' is missing. Is that the number of rounds? How is this selected as this number is different for different methods. - Table 5 adjust the spacing. Requested Changes: - Explanation of 'R#' is missing. - How is hyperparameter, for example $W$ is tuned? - Clean up convergence bound, as I mentioned in the weakness part above. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper introduces AFGA, an enhanced version of the FedAMSGrad algorithm, which incorporates adaptive federated learning techniques. AFGA integrates two novel components: client re-sampling and gossip communications. Client re-sampling involves selecting a new subset of clients for gradient computation during each local iteration, whereas gossip communications enable clients to exchange weights with their connected peers. To mitigate the additional communication overhead introduced by gossip communications, the authors propose a variant, clustered-clients AFGA, where clients are grouped into clusters to restrict re-sampling and communications within each cluster. Strengths and Weaknesses: The strengths of this paper include a clear presentation of the proposed algorithms and comparative analysis with existing federated learning (FL) algorithms, highlighting improvements in accuracy and convergence speed. However, the paper falls short in its discussion of data heterogeneity impacts. It lacks a thorough analysis of how the proposed algorithm performs under varying degrees of data heterogeneity, which is crucial for validating the effectiveness of any FL algorithm. Requested Changes: 1. The rationale behind client re-sampling is unclear, given the inherent nature of partial participation in FL due to varying client availability. The method assumes control over client selection, which may not reflect practical scenarios where available clients can fluctuate unpredictably. 2. Could you clarify the relationship between $S_{r,t}$ and $S_r$? It's important to understand how these selections differ across iterations and rounds. 3. What are the variance or deviation measures for the mean values presented in Table 1? This information would help assess the statistical significance of the reported results. 4. The term R# (representing the total number of rounds to achieve a specified accuracy level) used in Tables 1 and 2 is not defined until the caption of Table 5. This should be clarified earlier in the document for better reader comprehension. 5. Regarding the clustered-clients AFGA (CAFGA), is the clustering of clients into equal-sized groups based on randomness, or is there a specific metric assessing client similarity that influences cluster composition? Understanding this clustering basis is essential for evaluating the algorithm's practical applicability and performance efficiency. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: This paper studies the convergence behavior of adaptive methods in federated optimization. It is stated that the convergence of earlier proposed methods, such as FedAMSGrad, can be slowed down in the presence of data heterogeneity. To alleviate this, two changes are proposed to FedAMSGrad: a subsampling strategy among the active clients and global gossip averaging to improve mixing. Experiments have shown that the proposed method can enhance convergence. The reviewers found that the contribution is well within the scope of TMLR. Additionally, the authors have thoroughly addressed all comments and suggestions provided during the review process in their revisions. Although the convergence theorems have a weaker dependence on data heterogeneity measures than similar statements for the referenced baselines, the paper does not discuss the tightness of the theoretical results. This is a minor limitation of the work. ==================================================
# Dual Cognitive Architecture: Incorporating Biases And Multi-Memory Systems For Lifelong Learning Shruthi Gowda1,2**, Bahram Zonooz**†2,3**, Elahe Arani**†2 s.gowda@tue.nl, b.zonooz@tue.nl, e.arani@gmail.com 1NavInfo Europe 2Eindhoven University of Technology 3*TomTom* †*Contributed equally.* Reviewed on OpenReview: *https: // openreview. net/ forum? id= PEyVq0hlO3* ## Abstract Artificial neural networks (ANNs) exhibit a narrow scope of expertise on stationary independent data. However, the data in the real world is continuous and dynamic, and ANNs must adapt to novel scenarios while also retaining the learned knowledge to become lifelong learners. The ability of humans to excel at these tasks can be attributed to multiple factors ranging from cognitive computational structures, cognitive biases, and the multi-memory systems in the brain. We incorporate key concepts from each of these to design a novel framework, *Dual Cognitive Architecture (DUCA)*, which includes multiple sub-systems, implicit and explicit knowledge representation dichotomy, inductive bias, and a multi-memory system. The inductive bias learner within DUCA is instrumental in encoding shape information, effectively countering the tendency of ANNs to learn local textures. Simultaneously, the inclusion of a semantic memory submodule facilitates the gradual consolidation of knowledge, replicating the dynamics observed in fast and slow learning systems, reminiscent of the principles underpinning the complementary learning system in human cognition. DUCA shows improvement across different settings and datasets, and it also exhibits reduced task recency bias, without the need for extra information. To further test the versatility of lifelong learning methods on a challenging distribution shift, we introduce a novel domainincremental dataset *DN4IL*. In addition to improving performance on existing benchmarks, DUCA also demonstrates superior performance on this complex dataset. 1 2 ## 1 Introduction Deep learning has seen rapid progress in recent years, and supervised learning agents have achieved superior performance on perception tasks. However, unlike a supervised setting, where data is static and independent and identically distributed, real-world data is changing dynamically. Continual learning (CL) aims to learn multiple tasks when data is streamed sequentially (Parisi et al., 2019). This is crucial in real-world deployment settings, as the model needs to adapt quickly to novel data (plasticity), while also retaining previously learned knowledge (stability). Artificial neural networks (ANN), however, are still not effective lifelong learners, as they often fail to generalize to small changes in distribution and also suffer from forgetting old information when presented with new data (catastrophic forgetting) (McCloskey & Cohen, 1989). Humans, on the other hand, show a better ability to acquire new skills while also retaining previously learned skills to a greater extent. This intelligence can be attributed to different factors in human cognition. Multiple theories have been proposed to formulate an overall cognitive architecture, which is a broad domaingeneric cognitive computation model that captures the essential structure and process of the mind. Some of these theories hypothesize that, instead of a single standalone module, multiple modules in the brain share information to excel at a particular task. CLARION (Connectionist learning with rule induction online) 1Code is public at https://github.com/NeurAI-Lab/DUCA. 2*DN4IL* dataset is public at https://github.com/NeurAI-Lab/DN4IL-dataset. 1 ![1_image_0.png](1_image_0.png) Figure 1: Schematic of *Dual Cognitive Architecture (DUCA)*. The working model in the explicit module learns direct sensory data. Within the implicit module, the inductive bias learner encodes the prior shape knowledge and the semantic memory consolidates information from the explicit module. Only one network (semantic memory) is used during inference as it includes consolidated knowledge across all tasks. (Sun & Franklin, 2007) is one such theory that postulates an integrative cognitive architecture, consisting of a number of distinct subsystems. It predicates a dual representational structure (Chaiken & Trope, 1999), where the top level encodes conscious explicit knowledge, while the other encodes indirect implicit information. The two systems interact, share knowledge, and cooperate to solve tasks. Delving into these underlying architectures and formulating a new design can help in the quest to build intelligent agents. Multiple modules can be instituted instead of a single feedforward network: an explicit module responsible for learning from the standard visual input and an implicit module that specializes in acquiring and sharing contextual knowledge indirectly. The implicit module can be further divided into more submodules, each providing different information. Inductive biases and semantic memories can act as different kinds of implicit knowledge. Inductive biases are pre-stored templates or knowledge that provide some meaningful disposition toward adapting to the continuously evolving world (Chollet, 2019). Theories postulate that after rapidly learning information, a gradual consolidation of knowledge transpires in the brain for slow learning of structured information (Kumaran et al., 2016). Thus, the new design incorporates multiple concepts of cognition architectures, the dichotomy of implicit and explicit representations, inductive biases, and multi-memory systems theory. To this end, we propose *Dual Cognitive Architecture* (DUCA), a multi-module architecture for CL. The explicit working module processes the standard input data. Two different submodules are introduced for the implicit module. The inductive bias learner embeds relevant prior information, and as networks are shown to be biased toward textural information (unlike humans that are more biased toward global semantics) (Geirhos et al., 2019), we propose to utilize global shape information as the prior. Both texture and shape are present in the original image, but ANNs tend to rely more on texture and ignore semantic information. Hence, we utilize the implicit shape information and share it with the explicit module to learn more generic and high-level representations. Further, to emulate the consolidation of information in the slow-fast multimemory system, a gradual accumulation of knowledge from the explicit working module is embedded in the second semantic memory submodule. We show that interacting and leveraging information between these modules can help alleviate catastrophic forgetting, while also increasing the robustness to the distribution shift. DUCA achieves superior performance across all CL settings on various datasets. DUCA outperforms SOTA CL methods on Seq-CIFAR10 and Seq-CIFAR100 in class-incremental settings. Furthermore, in more realistic general class-incremental settings where the task boundary is blurry and the classes are not disjoint, DUCA shows significant gains. The addition of inductive bias and semantic memory helps to achieve a better balance between the plasticity-stability trade-off. The prior in the form of shape helps produce generic representations, and this results in DUCA exhibiting a reduced task-recency bias. Furthermore, DUCA also shows greater robustness against natural corruption. Finally, to test the capability of CL methods against the distribution shift, we introduce a domain-incremental learning dataset, *DN4IL*, which is a carefully designed subset of the DomainNet dataset (Peng et al., 2019). DUCA shows considerable robustness across all domains on these challenging data, thus establishing the efficacy of our cognitive-inspired CL architecture. Our contributions are as follows: - *Dual Cognitive Architecture (DUCA)*, a novel method that incorporates aspects of cognitive architectures, multi-memory systems, and inductive bias into the CL framework. - Introducing *DN4IL*, a challenging domain-incremental learning dataset. - Benchmark across different CL settings: class-, task-, generalized class-, and domain-incremental learning. - Analyses on the plasticity-stability trade-off, task recency bias, and robustness to natural corruptions. ## 2 Methodology 2.1 Cognitive Architectures Cognitive architectures refer to computational models that encapsulate the overall structure of the cognitive process in the brain. The underlying infrastructure of such a model can be leveraged to develop better intelligent systems. Global workspace theory (GWT) (Juliani et al., 2022) postulates that human cognition is composed of a multitude of special-purpose processors and is not a single standalone module. Different sub-modules might encode different contextual information which, when activated, can transfer knowledge to the conscious central workspace to influence and help make better decisions. Furthermore, CLARION (Sun & Franklin, 2007) posits a dual-system cognitive architecture with two levels of knowledge representation. The explicit module encodes direct knowledge that is externally accessible. The implicit module encodes indirect knowledge that is not directly accessible, but can be obtained through some intermediate interpretive or transformational steps. These two modules interact with each other by transferring knowledge between each other. Inspired by these theories, we formulate a method that incorporates some of the key aspects of cognitive architecture into the CL method. A working module, which encodes the direct sensory data, forms the explicit module. A second module that encodes indirect and interpretive information forms the implicit module. The implicit module further includes multiple sub-modules to encode different types of knowledge. ## 2.2 Inductive Bias The sub-modules in the implicit module need to encapsulate implicit information that can provide more contextual and high-level supervision. One of such knowledge can be prior knowledge or inductive bias. Inductive biases are pre-stored templates that exist implicitly even in earlier stages of the human brain (Pearl & Mackenzie, 2018). For instance, cognitive inductive bias may be one of the reasons why humans can focus on the global semantics of objects to make predictions. ANNs, on the other hand, are more prone to rely on local cues and textures (Geirhos et al., 2019). Global semantics or shape information already exists in the visual data, but in an indirect way. The incorporation of shape-awareness to the networks has proven to be a more effective approach in acquiring generic representations (Gowda et al., 2022). Hence, we utilize shape as indirect information in the implicit module. The sub-module uses a transformation step to extract the shape and share this inductive bias with the working module. As the standard (RGB) image and its shape counterpart can be viewed as different perspectives/modalities of the same data, ensuring that the representation of one modality is consistent with the other increases robustness to spurious correlations that might exist in only one of them. ## 2.3 Multi Memory System Moreover, many theories have postulated that an intelligent agent must possess differentially specialized learning memory systems (Kumaran et al., 2016). While one system rapidly learns the individual experience, the other gradually assimilates the knowledge. To emulate this behavior, we establish a second sub-module that slowly consolidates the knowledge from the working module. ## 2.4 Formulation To this end, we propose a novel method *Dual Cognitive Architecture (DUCA)*, which incorporates all these concepts into the CL paradigm. DUCA consists of two modules, the explicit module, and the implicit module. The explicit module has a single working model and processes the incoming direct visual data. The implicit module further consists of two submodules, namely the inductive bias learner and the semantic memory. They share relevant contextual information and assimilated knowledge with the explicit module, respectively. Figure 1 shows the overall architecture. In the implicit module, semantic memory NSM, consolidates knowledge at stochastic intervals from the working model NWM, in the explicit module. The other submodule, the inductive bias learner NIBL, processes the data and extracts shape information (Section G). NWM processes the RGB data, NSM consolidates the information from the working module at an update frequency in a stochastic manner, and NIBL learns from the shape data. The encoder or the feature extractor network takes an image as the input and produces latent representations, which are then passed to the linear classifier to do object recognition. f represents the combination of the encoder and the classifier, and θWM, θSM, and θIBL are the parameters of the three networks. A CL classification consists of a sequence of T tasks and, during each task, t ∈ 1, 2*...T*, samples xc and their corresponding labels yc are drawn from the current task data Dt. Furthermore, for each subsequent task, a random batch of exemplars is sampled from episodic memory B as xb. An inductive bias (shape) filter is applied to generate shape samples, xcs = IB(xc) and xbs = IB(xb). Reservoir sampling (Vitter, 1985) is incorporated to replay previous samples. Each of the networks NWM and NIBL learns in its own modality with a supervised cross-entropy loss on both the current samples and the buffer samples: $$\begin{array}{l}{{{\mathcal{L}}_{S u p_{W M}}={\mathcal{L}}_{C E}(f(x_{c};\theta_{W M}),y_{c})+{\mathcal{L}}_{C E}(f(x_{b};\theta_{W M}),y_{b})}}\\ {{{\mathcal{L}}_{S u p_{I B L}}={\mathcal{L}}_{C E}(f(x_{c_{s}};\theta_{I B L}),y_{c})+{\mathcal{L}}_{C E}(f(x_{b_{s}};\theta_{I B L}),y_{b})}}\end{array}$$ $$\left(1\right)$$ The Knowledge Sharing (KS) objectives are designed to transfer and share information between all modules. KS occurs for current samples and buffered samples. We employ the mean squared error as the objective function for all KS losses. To provide shape supervision to the working model and vice versa, a bidirectional decision space similarity constraint (L*biKS*) is enforced to align the features of the two modules. $${\mathcal{L}}_{b i K S}=\mathop{\mathbb{E}}_{x\sim{\bar{D}}_{t}\cup B}\|f(x_{s};\theta_{I B L})-f(x;\theta_{W M})\|_{2}^{2}\tag{1}$$ The consolidated structural information in semantic memory is transferred to both the working model and the inductive bias learner by aligning the output space on the buffer samples, which further helps in information retention. The loss functions LKSWM and LKSIBL are as follows; $$\begin{array}{l}{{{\mathcal L}_{K S W M}=\operatorname*{\mathbb{E}}_{x_{b}\sim B}\|f(x_{b};\theta_{S M})-f(x_{b};\theta_{W M})\|_{2}^{2}}}\\ {{{\mathcal L}_{K S_{I B L}}=\operatorname*{\mathbb{E}}_{x_{b}\sim B}\|f(x_{b};\theta_{S M})-f(x_{b_{x}};\theta_{I B L})\|_{2}^{2}}}\end{array}$$ $\left(3\right)$. $$\left(2\right)$$ | |B| | Method | Seq-CIFAR10 | Seq-CIFAR100 | GCIL-CIFAR100 | | | | |----------|------------|---------------|----------------|-----------------|------------|------------|------------| | Class-IL | Task-IL | Class-IL | Task-IL | Uniform | Longtail | | | | JOINT | 92.20±0.15 | 98.31±0.12 | 70.62±0.64 | 86.19±0.43 | 60.45±1.65 | 60.10±0.42 | | | - | SGD | 19.62±0.05 | 61.02±3.33 | 17.58±0.04 | 40.46±0.99 | 10.36±0.13 | 9.62±0.21 | | 200 | ER | 44.79±1.86 | 91.19±0.94 | 21.40±0.22 | 61.36±0.39 | 16.52±0.10 | 16.20±0.30 | | DER++ | 64.88±1.17 | 91.92±0.60 | 29.60±1.14 | 62.49±0.78 | 27.73±0.93 | 26.48±2.04 | | | Co2L | 65.57±1.37 | 93.43±0.78 | 31.90±0.38 | 55.02±0.36 | - | - | | | ER-ACE | 62.08±1.44 | 92.20±0.57 | 32.49±0.95 | 59.77±0.31 | 27.64±0.76 | 25.10±2.64 | | | CLS-ER | 66.19±0.75 | 93.90±0.60 | 43.80±1.89 | 73.49±1.04 | 35.88±0.41 | 35.67±0.72 | | | DUCA | 70.04±1.07 | 94.49±0.38 | 45.38±1.28 | 76.62±0.16 | 38.61±0.83 | 37.11±0.16 | | | 500 | ER | 57.74±0.27 | 93.61±0.27 | 28.02±0.31 | 68.23±0.16 | 23.62±0.66 | 22.36±1.27 | | DER++ | 72.70±1.36 | 93.88±0.50 | 41.40±0.96 | 70.61±0.11 | 35.80±0.62 | 34.23±1.19 | | | Co2L | 74.26±0.77 | 95.90±0.26 | 39.21±0.39 | 62.98±0.58 | - | - | | | ER-ACE | 68.45±1.78 | 93.47±1.00 | 40.67±0.06 | 66.45±0.71 | 30.14±1.11 | 31.88±0.73 | | | CLS-ER | 75.22±0.71 | 94.94±0.53 | 51.40±1.00 | 78.12±0.24 | 38.94±0.38 | 38.79±0.67 | | | DUCA | 76.20±0.70 | 95.95±0.14 | 54.27±1.09 | 79.80±0.32 | 43.34±0.32 | 41.44±0.22 | | Table 1: Comparison of different methods on standard CL benchmarks (Class-IL, Task-IL and GCIL settings). DUCA shows a consistent improvement over all methods for both buffer sizes. Thus, the overall loss functions for the working model and the inductive bias learner are as follows; $$\begin{array}{l}{{{\mathcal{L}}_{W M}={\mathcal{L}}_{S u p w_{M}}+\lambda({\mathcal{L}}_{b i K S}+{\mathcal{L}}_{K S w_{M}})}}\\ {{{\mathcal{L}}_{I B L}={\mathcal{L}}_{S u p I B L}+\gamma({\mathcal{L}}_{b i K S}+{\mathcal{L}}_{K S i B L})}}\end{array}$$ $$\left(5\right)$$ $$\left({4}\right)$$ The semantic memory of the implicit module is updated with a stochastic momentum update (SMU) of the weights of the working model at rate r with a decay factor of d, $$\theta_{S M}=d\cdot\theta_{S M}+(1-d)\cdot\theta_{W M}{\mathrm{~if~}}s\sim U(0,1)<r$$ θSM = d · θSM + (1 − d) · θWM if s ∼ U(0, 1) < r (5) More details are provided in Algorithm 1. We discuss the computational aspect in Section F. Note that we use semantic memory (θSM) for inference, as it contains consolidated knowledge across all tasks. ## 3 Experimental Settings ResNet-18 (He et al., 2016) architecture is used for all experiments. All networks are trained using the SGD optimizer with standard augmentations of random crop and random horizontal flip. The different hyperparameters, tuned per dataset, are provided in E. The different CL settings are explained in detail in Section D. We consider CLass-IL, Domain-IL, and also report the Task-IL settings. Seq-CIFAR10 and Seq-CIFAR100 (Krizhevsky et al., 2009) for the class-incremental learning (Class-IL) settings, which are divided into 5 tasks each. In addition to Class-IL, we also consider and evaluate general Class-IL (GCIL) (Mi et al., 2020) on the CIFAR100 dataset. For domain-incremental learning (Domain-IL), we propose a novel dataset, *DN4IL*. ## 4 Results We provide a comparison of our method with standard baselines and multiple other SOTA CL methods. The lower and upper bounds are reported as SGD (standard training) and JOINT (training all tasks together), respectively. We compare with other rehearsal-based methods in the literature, namely ER, DER++ (Buzzega et al., 2020), Co2L (Cha et al., 2021), ER-ACE (Caccia et al., 2021), and CLS-ER (Arani et al., 2022). Table 1 shows the average performance in different settings over three seeds. Co2L utilizes task boundary information, and therefore the GCIL setting is not applicable. The results are taken from the original works and, if not available, using the original codes, we conducted a hyperparameter search for the new settings (see Section E for details). DUCA achieves the best performance across all datasets in all settings. In the challenging Class-IL setting, we observe a gain of ∼50% over DER++, thus showing the efficacy of adding multiple modules for CL. Furthermore, we report improvements of ∼6% on both the Seq-CIFAR10 and Seq-CIFAR100 datasets, over CLS-ER, which utilizes two semantic memories in its design. DUCA has a single semantic memory, and the additional boost is obtained by prior knowledge from the inductive bias learner. Improvement is prominent even when the memory budget is low (200 buffer size). GCIL represents a more realistic setting, as the task boundaries are blurry, and classes can reappear and overlap in any task. GCIL-Longtail version also introduces an imbalance in the sample distribution. DUCA shows a significant improvement on both versions of GCIL-CIFAR100. Additional results are provided in Table 4. Shape information from the inductive bias learner offers the global high-level context, which helps in producing generic representations that are not biased towards learning only the current task at hand. Furthermore, sharing of the knowledge that has been assimilated through the appearance of overlapping classes through the training scheme, further facilities learning in this general setting. The overall results indicate that the dual knowledge sharing between the explicit working module and the implicit inductive bias and semantic memory modules enables both better adaptation to new tasks and information retention. ## 5 Domain-Incremental Learning Intelligent agents deployed in real-world applications need to maintain consistent performance through changes in the data and environment. Domain-IL aims to assess the robustness of the CL methods to the distribution shift. In Domain-IL, the classes in each task remain the same, but the input distribution changes, and this makes for a more plausible use case for evaluation. However, the datasets used in the literature do not fully reflect this setting. For instance, the most common datasets used in the literature are different variations (Rotated and Permuted) of the MNIST dataset (LeCun et al., 1998). MNIST is a simple dataset, usually evaluated on MLP networks, and its variations do not reflect the real-world distribution shift challenges that a CL method faces (the results for R-MNIST are presented in the Table C). Farquhar & Gal (2018) propose fundamental desiderata for CL evaluations and datasets based on real-world use cases. One of the criteria is to possess cross-task resemblances, which Permuted-MNIST clearly violates. Thus, a different dataset is needed to test the overall capability of a CL method to handle the distributional shift. ## 5.1 Dn4Il **Dataset** To this end, we propose *DN4IL* (DomainNet for Domain-IL), which is a well-crafted subset of the standard DomainNet dataset (Peng et al., 2019), used in domain adaptation. DomainNet consists of common objects in six different domains - real, clipart, infograph, painting, quickdraw, and sketch. The original DomainNet consists of 59k samples with 345 classes in each domain. The classes have redundancy, and moreover, evaluating the whole dataset can be computationally expensive in a CL setting. *DN4IL* version considers different criteria such as relevance of classes, uniform sample distribution, computational complexity, and ease of benchmarking for CL. All classes were grouped into semantically similar supercategories. Of these, a subset of classes was selected that had relevance to domain shift, while also having maximum overlap with other standard datasets such as CIFAR, to facilitate out-of-distribution analyses. 20 supercategories were chosen with 5 classes each (resulting in a total of 100 classes). In addition, to provide a balanced dataset, we performed a class-wise sampling. First, we sample images per class in each supercategory and maintain class balance. Second, we choose samples per domain, so that it results in a dataset that has a near-uniform distribution across all classes and domains. The final dataset *DN4IL* is succinct, more balanced, and more computationally efficient for benchmarking, thus facilitating research in CL. Furthermore, the new dataset is deemed more plausible for real-world settings and also adheres to all evaluation desiderata by (Farquhar & Gal, 2018). The challenging distribution shift between domains provides an apt dataset to test the capability of CL methods ![6_image_0.png](6_image_0.png) Figure 2: Accuracy (left) and plasticity-stability analysis (right) on *DN4IL* dataset. DUCA substantially outperforms other methods and with better plasticity-stability trade-off. in the Domain-IL setting. More details, statistics, and visual examples of this crafted dataset are provided in Section H. ## 5.2 Dn4Il Performance Figure 2 (left) reports the results on *DN4IL* for two different buffer sizes (values are provided in Table 10). DUCA shows a considerable performance gain in the average accuracy across all domains, and this can be primarily attributed to the supervision from the shape data. Standard networks tend to exhibit texture bias and learn background or spurious cues (Geirhos et al., 2019) that result in performance degradation when the distribution changes. Learning global shape information of objects, on the other hand, helps in learning generic features that can translate well to other distributions. Semantic memory further helps to consolidate information across domains. Maintaining consistent performance to such difficult distribution shifts prove beneficial in real-world applications, and the proficiency of DUCA in this setting can thus open up new avenues for research in cognition-inspired multi-module architectures. ## 6 Model Analyses 6.1 Plasticity-Stability Trade-Off Plasticity refers to the capability of a model to learn new tasks, while stability shows how well it can retain old information. The plasticity-stability dilemma is a long-standing problem in CL, which requires an optimal balance between the two. Following Sarfraz et al. (2022), we measure each of these to assess the competence of the CL methods. Plasticity is computed as the average performance of each task when first learned (e.g., the accuracy of the network trained on task T2, evaluated on the test set of T2). Stability is computed as the average performance of all tasks 1 : T-1, after learning the final task T. Figure 2 (right) reports these numbers for the *DN4IL* dataset. As seen, the ER and DER methods exhibit forgetting and lower stability, and focus only on the newer tasks. CLS-ER shows greater stability, but at the cost of reduced plasticity. However, DUCA shows the highest stability while maintaining comparable plasticity. The shape knowledge helps in learning generic solutions that can translate to new tasks, while the semantic consolidation update at stochastic rates acts as a regularization to maintain stable parameter updates. Thus, DUCA strikes a better balance between plasticity and stability. ## 6.2 Recency-Bias Analysis Recency bias is a behavior in which the model predictions tend to be biased toward the current or the most recent task (Wu et al., 2019). This is undesirable in a CL model, as it results in a biased solution that forgets the old tasks. To this end, after the end of the training, we evaluate the models on the test set (of ![7_image_0.png](7_image_0.png) Figure 3: DUCA shows reduced task recency bias (left), as well as higher robustness against natural corruption (right) on Seq-CIFAR10 (|B|=200) dataset. all tasks) and calculate the probability of predicting each task. The output distribution for each test sample is calculated for all classes and the probabilities are averaged per task. Figure 3 (left) shows the probabilities for each task on the Seq-CIFAR10 dataset. As shown, the ER and DER++ methods tend to incline most of their predictions toward the classes seen in the last task, thus creating a misguided bias. DUCA shows a lower bias compared to both of these baselines. CLS-ER exhibits reduced bias due to the presence of multiple memories, but the distribution is still relatively skewed (with respect to a probability of 0.2). DUCA shows a more uniform distribution across all tasks. The dual information from the shape data and the consolidated knowledge across tasks helps in breaking away from Occam's razor pattern of neural networks to default to the easiest solution. ## 6.3 Robustness Lifelong agents, when deployed in real-world settings, must be resistant to various factors, such as lighting conditions, weather changes, and other effects of digital imaging. Inconsistency in predictions under different conditions might result in undesirable outcomes, especially in safety-critical applications such as autonomous driving. To measure the robustness of the CL method against such natural corruptions, we created a dataset by applying fifteen different corruptions, at varying levels of severity (1- least severe to 5- most severe corruption). The performances on the fifteen corruptions are averaged at each severity level and are shown in Figure 3 (right). DUCA outperforms all other techniques at all severity levels. ER, DER++, and CLS-ER show a fast decline in accuracy as severity increases, while DUCA maintains stable performance throughout. Implicit shape information provides a different perspective of the same data to the model, which helps to generate high-level robust representations. DUCA, along with improved continual learning performance, also exhibits improved robustness to corruption, thus proving to be a better candidate for deployment in real-world applications. ## 6.4 Task-Wise Performance The average accuracy across all tasks does not provide a complete measure of the ability of a network to retain old information while learning new tasks. To better represent the plasticity-stability measure, we report the task-wise performance at the end of each task. After training each task, we measure the accuracy on the test set of each of the previous tasks. Figure 4 reports this for all tasks of *DN4IL*. The last row represents the performance of each task after the training is completed. ER and DER++ show performance degradation on earlier tasks, as the model continues to train on newer tasks. Both perform better than DUCA on the last task, 71.1 and 68.9 respectively, while DUCA has a performance of 61.1. However, in continual learning settings, the data arrives continuously and the focus is on both retention of old task performance while performing well on the current task. As seen, the accuracy on the first task (real) ![8_image_0.png](8_image_0.png) Figure 4: Task-wise performance on DN4IL (|B|=500), where each task represents a domain. DUCA shows more retention of old information without compromising much on current accuracy. Table 2: Analyzing the impact of inductive bias and knowledge sharing on baselines and DUCA. '*' indicates the use of shape as an augmentation. '−X' indicates the removal of component X. '+' refers to concatenation in the channel dimension. Method Specifics Seq-CIFAR100 DN4IL 200 500 200 500 ER Original 21.40 28.02 **26.59 31.01** RGB & Shape* 19.47 23.96 27.45 33.44 DER++ Original 29.60 41.40 **34.75** 41.87 RGB & Shape* 24.40 34.30 36.55 40.99 DUCA original 45.38 54.27 **44.23 49.32** −SM (RGB & Shape*) 24.34 32.64 36.80 43.88 −SM −IBL (Shape only) 18.33 21.98 27.89 31.57 −SM −IBL (RGB+Shape) 20.57 25.20 31.52 35.68 −SM −IBL (RGB & Shape*) 19.47 23.96 27.45 33.44 −IBL (RGB & Shape*) 42.01 49.55 40.75 43.99 reduces to 27.6 on ER and 43.5 on DER++ after training the six tasks (domains), while the DUCA maintains the accuracy of 54.9. Similarly, after the second task, the performance on the first task decreases (44.5 : ER, 54.2 : DER++, 57.2 : CLS-ER and 62.9 : DUCA), but with DUCA the forgetting is lesser. DUCA reports the highest information retention on older tasks, while also maintaining plasticity. For example, CLS-ER shows better retention of old information, but at the cost of plasticity. The last task in CLS-ER shows a lower performance compared to DUCA (52.1 vs. 61.0). The performance of the current task in DUCA is relatively lesser and can be attributed to the stochastic update rate. Therefore, it's essential to recognize that while the shape inductive bias is beneficial to a classification task, the observed high performances are a consequence of how the inductive bias is thoughtfully integrated into the proposed architecture within the framework of continual learning, where both stability and plasticity hold equal importance. To shed more light on the performance of each of the modules in DUCA, we also provide the performance of the working model and the inductive bias learner, in Appendix Figure 5. The working model shows better plasticity, while DUCA (semantic memory) displays better stability. Overall, all modules in the proposed approach present unique attributes that improve the learning process and improve performance. ## 7 Effect Of Inductive Bias And Knowledge Sharing To assess the impact of the specific inductive bias and various ways of integrating it into the training framework, we conducted supplementary experiments. In these experiments, we introduced two additional baselines, ER and DER++, with the utilization of shape as an augmentation technique. Specifically, we Table 3: Ablation to analyze the effect of each component of DUCA on Seq-CIFAR10 and *DN4IL*. SM IBL KS (WM↔IBL) Seq-CIFAR10 DN4IL ✓ ✓ ✓ **70.04**±1.07 **44.23**±0.05 ✓ ✓ ✗ 69.28±1.34 40.35±0.34 ✓ ✗ - 69.21±1.46 39.76±0.56 ✗ ✓ ✓ 64.61±1.22 37.33±0.01 ✗ ✗ ✗ 44.79±1.86 26.59±0.31 included the Sobel filter in the augmentation list, alongside RandomCrop and RandomHorizontalFlip, and proceeded with continual training. The results presented in Table 2 demonstrate that this approach yields inferior performance compared to the baseline models trained solely on RGB images. On DN4IL dataset, the performance is slightly better than baseline, as shape is a more discriminative and important feature in this dataset. Thus the incorporation of shape as an augmentation strategy appears to yield suboptimal feature representations, and is also dependent on the dataset. We also conduct various ablations on the DUCA framework. Specifically, we perform isolations and exclusions of different elements within DUCA, including IBL and SM. Instead, we subject the base network (DUCA -SM -IBL) to three distinct training conditions: (1) exclusive training on shape images (Shape only), (2) concurrent training on both RGB and shape images (RGB+Shape), and (3) training on RGB images with the incorporation of a shape filter as an augmentation (RGB & Shape*). It is worth noting that shape information contributes valuable global semantic insights that complement the visually rich RGB data, thus emphasizing the necessity of both modalities for achieving enhanced generalization. However, training a single network on both distributions simultaneously may not always yield optimal utilization of this information. Finally, we train the base model within the framework by excluding one component at a time, namely SM and IBL. We train the working model with but excluding the IBL, while also introducing shape as an augmentation. The results (-IBL) show improvement due to the presence of the semantic memory module. Nevertheless, the best performance is achieved when incorporating shape information through an alternative supervisory network, namely the IBL. These experiments underscore the critical importance of the specific method used to induce this knowledge, highlighting its pivotal role in enhancing the overall training process. In our pursuit to bridge the distribution gap between RGB and shape images, we have leveraged knowledge distillation as a means of self-teaching, where distinct networks collaboratively engage in mutual learning and knowledge sharing. This approach not only sheds light on the significance of effective knowledge sharing but also offers a promising avenue for improving model performance and generalization in complex (visual) tasks. ## 8 Ablation Study DUCA architecture comprises multiple components, each contributing to the efficacy of the method. The explicit module has the working model, and the implicit module has semantic memory (SM) and inductive bias learner (IBL). Disentangling different components in the DUCA can provide more insight into the contribution of each of them to the overall performance. Table 3 reports the ablation study with respect to each of these components on both the Seq-CIFAR10 and *DN4IL* datasets. Considering the more complex *DN4IL* dataset, the ER accuracy without any of our components is 26.59. Adding cognitive bias (IBL) improves performance by 40%. Shape information plays a prominent role, as networks need to learn the global semantics of the objects, rather than background or spurious textural information, to translate performance across domains. Adding the dual-memory component (SM) shows an increase of approximately 49% over the vanilla baseline. Furthermore, the KS between explicit and implicit modules on current experiences also plays a key role in performance gain. Combining both of these cognitive components and, in general, following the multi-module design shows a gain of 66%. A similar trend is seen on Seq-CIFAR10. ## 9 Related Works Rehearsal-based approaches, which revisit examples from the past to alleviate catastrophic forgetting, have been effective in challenging CL scenarios (Farquhar & Gal, 2018). Experience Replay (ER) (Riemer et al., 2018) methods use episodic memory to retain previously seen samples for replay purposes. DER++ (Buzzega et al., 2020) adds a consistency loss on logits, in addition to the ER strategy. In situations where memory limitations impose constraints on buffer size, such as in edge devices, it has been observed that rehearsalbased methods are susceptible to overfitting on the data stored in the buffer (Bhat et al., 2022). To address this, CO2L (Cha et al., 2021) uses contrastive learning from the self-supervised learning domain to generate transferable representations. ER-ACE (Caccia et al., 2021) targets the representation drift problem in online CL and develops a technique to use separate losses for current and buffer samples. All of these methods limit the architecture to a single stand-alone network, contrary to the biological workings of the brain. CLS-ER (Arani et al., 2022) proposed a multi-network approach that emulates fast and slow learning systems by using two semantic memories, each aggregating weights at different times. Though CLS-ER utilizes the multi-memory design, sharing of different kinds of knowledge is not leveraged, and hence presents a method with limited scope. DUCA digresses from the standard architectures and proposes a multi-module design that is inspired by cognitive computational architectures. It incorporates multiple submodules, each sharing different knowledge to develop an effective continual learner that has better generalization and robustness. ## 10 Conclusion We introduced a novel framework for continual learning that incorporates concepts inspired by cognitive architectures, high-level cognitive biases, and the multi-memory system. *Dual Cognitive Architecture (DUCA)*, includes multiple subsystems with dual knowledge representation. DUCA designed a dichotomy of explicit and implicit modules in which information is selected, maintained, and shared with each other to enable better generalization and robustness. DUCA outperformed on Seq-CIFAR10 and Seq-CIFAR100 on the Class-IL setting. In addition, it also showed a significant gain in the more realistic and challenging GCIL setting. Through different analyses, we showed a better plasticity-stability balance. Furthermore, shape prior and knowledge consolidation helps to learn more generic solutions, indicated by the reduced problem of task recency bias and greater robustness against natural corruptions. Furthermore, we introduced a challenging domain-IL dataset, *DN4IL*, with six disparate domains. The significant improvement of DUCA on this complex distribution shift demonstrates the benefits of shape context, which helps the network to converge on a generic solution, rather than a simple texture-biased one. The objective of this work was to develop a framework that incorporates elements of cognitive architecture to mitigate the forgetting problem and enhance generalization and robustness. ## 11 Future Work Here, we delve into the potential for extending our current research, acknowledging its applicability to a diverse range of modalities. The adaptability of DUCA serves as a robust foundation for further exploration. As we broaden the scope of DUCA beyond the image domain, an essential consideration is the identification of pertinent inductive biases tailored to the specific modality in question. For example, when venturing into the audio domain, a promising avenue involves the utilization of spectrogram representations. These representations effectively convert audio waveforms into visual data, encompassing both frequency and timedomain information. The integration of phonemes, the fundamental units of spoken language, holds the potential to enhance DUCA's effectiveness in tasks such as speech understanding, speaker identification, and language processing. The collaboration between the core DUCA architecture and modality-specific inductive biases creates a synergy that drives knowledge sharing and learning capabilities. This collaborative architecture yields more generic and robust representations, substantially enhancing overall performance. Furthermore, the gradual accumulation of semantic memory emerges as a valuable asset, particularly in scenarios involving the continuous influx of data from various modalities. It mitigates the risk of forgetting and empowers the framework to maintain its adaptability over time. These modality-specific adaptations, guided by the intrinsic principles and mechanisms of DUCA, open the door to exciting future directions. They offer the potential to advance lifelong learning and adaptability in a multitude of domains. We anticipate that our preliminary work will serve as a cornerstone for future research endeavors, including investigations into various cognitive biases and more efficient design methodologies. Ultimately, we hope to pave the way for the advancement of lifelong learning methods for ANNs. ## Acknowledgments The research was conducted when all the authors were affiliated with Advanced Research Lab, NavInfo Europe, The Netherlands. ## References Elahe Arani, Fahad Sarfraz, and Bahram Zonooz. Learning fast, learning slow: A general continual learning method based on complementary learning system. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=uxxFrDwrE7Y. Prashant Shivaram Bhat, Bahram Zonooz, and Elahe Arani. Consistency is the key to further mitigating catastrophic forgetting in continual learning. In *Conference on Lifelong Learning Agents*, pp. 1195–1212. PMLR, 2022. Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. *Advances in neural information processing systems*, 33:15920–15930, 2020. Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, and Eugene Belilovsky. New insights on reducing abrupt representation change in online continual learning. In International Conference on Learning Representations, 2021. Hyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2l: Contrastive continual learning. In *Proceedings of the* IEEE/CVF International Conference on Computer Vision, pp. 9516–9525, 2021. Shelly Chaiken and Yaacov Trope. *Dual-process theories in social psychology*. Guilford Press, 1999. François Chollet. On the measure of intelligence. *arXiv preprint arXiv:1911.01547*, 2019. Lijun Ding and Ardeshir Goshtasby. On the canny edge detector. *Pattern Recognition*, 34(3):721–725, 2001. Sebastian Farquhar and Yarin Gal. Towards robust evaluations of continual learning. *arXiv preprint* arXiv:1805.09733, 2018. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In *International Conference on Learning Representations*, 2019. URL https://openreview. net/forum?id=Bygh9j09KX. Shruthi Gowda, Bahram Zonooz, and Elahe Arani. Inbiased: Inductive bias distillation to improve generalization and robustness through shape-awareness. In *Conference on Lifelong Learning Agents*, pp. 1026–1042. PMLR, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, and Ryota Kanai. On the link between conscious function and general intelligence in humans and machines. *arXiv preprint arXiv:2204.05133*, 2022. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Dharshan Kumaran, Demis Hassabis, and James L McClelland. What learning systems do intelligent agents need? complementary learning systems theory updated. *Trends in cognitive sciences*, 20(7):512–534, 2016. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109–165. Elsevier, 1989. Fei Mi, Lingjing Kong, Tao Lin, Kaicheng Yu, and Boi Faltings. Generalized class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 240–241, 2020. German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. *Neural Networks*, 113:54–71, 2019. Judea Pearl and Dana Mackenzie. *The book of why: the new science of cause and effect*. Basic books, 2018. Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In *Proceedings of the IEEE/CVF international conference on computer* vision, pp. 1406–1415, 2019. Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In International Conference on Learning Representations, 2018. Fahad Sarfraz, Elahe Arani, and Bahram Zonooz. Synergy between synaptic consolidation and experience replay for general continual learning. In *Conference on Lifelong Learning Agents*, pp. 920–936. PMLR, 2022. Irwin Sobel and Gary Feldman. A 3x3 isotropic gradient operator for image processing. a talk at the Stanford Artificial Project in, pp. 271–272, 1968. Ron Sun and Stan Franklin. Computational models of consciousness: A taxonomy and some examples., 2007. Gido M Van de Ven and Andreas S Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019. Jeffrey S Vitter. Random sampling with a reservoir. *ACM Transactions on Mathematical Software (TOMS)*, 11(1):37–57, 1985. Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 374–382, 2019. ## A Duca Algorithm Algorithm 1 Dual Cognitive Architecture (DUCA) 1: **Input:** Dataset Dt, Buffer B 2: **Initialize:** Three networks: Encoder and classifier f parameterized by θWM, θSM, and θIBL 3: for all tasks t ∈ 1, 2*...T* do 4: Sample mini-batch: (xc, yc) ∼ Dt 5: Extract shape images: xcs = IB(xc) where IB is a Sobel filter 6: LSupWM = LCE(f(xc; θWM), yc) 7: LSupIBL = LCE(f(xcs ; θIBL), yc) 8: if B ̸= ∅ **then** 9: Sample mini-batch: (xb, yb) ∼ B 10: Extract shape images: xbs = IB(xb) 11: Calculate the supervised loss: 12: LSupWM += LCE(f(xb; θWM), yb) 13: LSupIBL += LCE(f(xbs ; θIBL), yb) 14: Knowledge sharing from semantic memory to working model and inductive bias learner: 15: LKSWM = E∥f(xb; θSM) − f(xb; θWM)∥ 2 2 16: LKSIBL = E∥f(xb; θSM) − f(xbs ; θIBL)∥ 2 2 17: Bidirectional knowledge sharing between working model and inductive bias learner: 18: L*biKS* = E x∼Dt∪B ∥f(x; θWM) − f(xs; θIBL)∥ 2 2 19: Calculate total loss: 20: LWM = LSupWM + λ(L*biKS* + LKSWM ) 21: LIBL = LSupIBL + γ(LbiKS + LKSIBL ) 22: Update both working model and inductive bias learner: θWM, θIBL 23: Stochastically update semantic memory: 24: Sample s ∼ U(0, 1); 25: if *s < r* **then** 26: θSM = d · θSM + (1 − d) · θWM 27: Update memory buffer B 28: **Return:** model θSM ## B Effect Of Task Sequence Figure 5 presents the task-wise performance of all the three networks in the DUCA architecture, on *DN4IL* dataset. Semantic memory helps to retain information by maintaining high accuracy on older tasks and is more stable. The performance of the current task is relatively lower than that of the working model and could be due to the stochastic update rate of this model. The working model has better performance on new tasks and is more plastic. Inductive bias leaner is evaluated on the transformed data (shape) and also achieves a balance between plasticity and stability. In general, all modules in our proposed method present unique attributes that improve the learning process by improving performance and reducing catastrophic forgetting. Table 4 reports the results of CIFAR100 on different numbers of tasks. Even when the number of tasks increases, our method consistently improves over the baselines. Table 4: Comparison on Seq-CIFAR100 dataset for different number of tasks on 500 buffer size. Method 5-Tasks 10-Tasks 20-Tasks ER 28.02±0.31 21.49±0.47 16.52±0.86 DER++ 41.40±0.96 36.20±0.52 22.25±5.87 DUCA **53.23**±1.62 **41.09**±0.72 **33.60**±0.25 ![14_image_0.png](14_image_0.png) Figure 5: Task probability analysis of all DUCA components on *DN4IL* dataset with 500 buffer size. Semantic memory displays better stability while the working model displays better plasticity. Table 5: Comparison on R-MNIST dataset with different buffer sizes. Method 200 500 ER 85.01±1.90 88.91±1.44 DER++ 90.43±1.87 92.77±1.05 CLS-ER 92.26±0.18 94.06±0.07 DUCA **93.53**±0.21 **94.98**±0.04 ## C **Evaluation On R-Mnist** The Rotated MNIST (R-MNIST) dataset is a variant of the MNIST dataset, which is widely used for domainincremental learning in continual learning. In the R-MNIST dataset, the digits are rotated by fixed degrees. This rotation introduces variations in the orientation of the digits, making it a more challenging task for machine learning algorithms to accurately classify them. It tests the robustness and adaptability of models, as they need to recognize and classify the digits regardless of their orientation. The results, as presented in Table 5, indicate that DUCA performs better than other methods in this domain-incremental setting. ## D Setting And Datasets We evaluate all methods in different CL settings. Van de Ven & Tolias (2019) describes three different settings based on increasing difficulty: task-incremental learning (Task-IL), domain-incremental learning (Domain-IL), and class-incremental learning (Class-IL). In Class-IL, each new task consists of novel classes, and the network must learn both new classes while retaining information about the old ones. Task-IL is similar to Class-IL but assumes that task labels are accessible in both training and inference. In Domain-IL, the classes remain the same for each task, but the distribution varies for each task. We report the results for all three settings on the relevant datasets. CLass-IL is the most complex setting of the three and is widely studied. However, contemporary research tends to adopt simplifications when exploring CLass-IL setting, such as the assumption of the same number of classes across different tasks, the absence of reappearance of classes, and the sample distribution per class being well balanced. In Generalized Class-IL (GCIL) Mi et al. (2020), the number of classes in each task is not fixed, and the classes can reappear with varying sample sizes. GCIL samples the number of classes and samples from a probabilistic distribution. The two variations are Uniform (fixed uniform sample distribution over all classes) and Longtail (with class imbalance). We report results on all three settings: Task-IL, Domain-IL, and CLass-IL. Furthermore, we also consider the GCIL setting for one of the datasets as an additional evaluation setting. All reported results are averaged over three random seeds. | Table 6: Search ranges for tuning hyperparameters. | | | | | | |------------------------------------------------------|-------------------|------------------------|-------------------|-------------------|--------------| | Method | Hyperparam | Search Range | Method | Hyperparam | Search Range | | ER | lr | [0.01, 0.03, 0.1, 0.5] | lr | [0.01, 0.03, 0.1] | | | lr | [0.01, 0.03, 0.1] | λ | [0.1, 0.2, 0.3] | | | | DER++ | α | [0.1, 0.2, 0.5] | rp | [0:1:0.1] | | | β | [0.5, 1.0] | rs | [0:1:0.1] | | | | | | CLS-ER | | | | | lr | [0.01, 0.03, 0.1] | αp | [0.99,0.999] | | | | τ | [0.01, 0.1, 0.5] | αs | [0.99, 0.999] | | | | k | [0.2, 0.5] | | | | | | CO2L | | lr | [0.01, 0.03, 0.1] | | | | | | DUCA | | | | | k ∗ | [0.01, 0.05] | r | [0:1:0.1] | | | | e | [100, 150] | λ | [0.01, 0.1] | | | | ER-ACE | lr | [0.01, 0.03, 0.1, 0.5] | γ | [0.01, 0.1] | | ## E Hyperparameters For the settings and datasets for which the results are not available in the original papers, using their original codes, we conducted a hyperparameter search for each of the new settings. To this end, we utilize a small validation set from the training set to tune the hyperparameters. We adopted this tuning scheme because it aligns with the approach employed in the baseline methods with which we conducted comparisons (Buzzega et al., 2020; Cha et al., 2021; Caccia et al., 2021; Arani et al., 2022). For Seq-CIFAR10, the results are taken from the original articles (Buzzega et al., 2020; Cha et al., 2021; Caccia et al., 2021; Arani et al., 2022). For the other datasets (and for each buffer size), we ran a grid search over the hyperparameters reported in the paper. For Seq-CIFAR100 and GCIL-CIFAR100, we based the search range using Seq-CIFAR10 hyperparameters as a reference point. The search ranges are reported in Table 6. Once we find the right hyperparameter, we train the whole training set and report the test set accuracy. DN4IL dataset is more complex compared to the CIFAR versions and includes images of larger sizes. Hence, we consider the Seq-TinyImagenet hyperparameters in the respective paper as a reference point for further tuning. The learning rate lr, the number of epochs, and the batch size are similar across the datasets. The ema update rate r is lower for more complex datasets, as shown in CLS-ER. r is chosen in the range of [0.01, 0.1] with a step size of 0.02 for CLS-ER and DUCA. The different hyperparameters chosen for the baselines, after tuning, are reported in Table 7. The different hyperparameters chosen for DUCA are shown in Table 8. Hyperparameters: lr, batch size, number of epochs and decay factor are uniform across all datasets. The stochastic update rate is similar to those of CLS-ER. The hyperparameters are stable across settings and datasets and also complement each other. The loss balancing weights (λ and γ respectively) are also similar across all datasets. Therefore, DUCA does not require extensive fine-tuning across different datasets and settings. ## F Model Complexity We discuss the computational complexity aspect of our proposed method. DUCA involves three networks during training; however, in inference, only one network is used (SM module). Therefore, for inference purposes, the MAC count, the number of parameters, and the computational capacity remain the same as in the other single-network methods. The training cost requires three forward passes, as it consists of three different modules. ER, DER++, CO2L and ER-ACE have a single network. CLS-ER also has three networks and therefore requires 3 forward passes. DUCA has training complexity similar to that of CLS-ER; however, it outperforms CLS-ER in all provided metrics. On the memory front, similar to all methods, we save memory samples based on the memory budget allotted (200 and 500 in the experiments). There are no additional memory requirements, as we do not save | size to 32, and epochs to 50 respectively for all the datasets. The decay factor d is always set | | to 0.999. | | | | | | | | |----------------------------------------------------------------------------------------------------|------|-------------|------|------|---------------|-----|------|-----|------| | Dataset | |B| | r | λ | γ | Dataset | |B| | r | λ | γ | | Seq-CIFAR10 | 200 | 0.2 | 0.1 | 0.1 | GCIL-CIFAR100 | 200 | 0.09 | 0.1 | 0.01 | | 500 | 0.2 | 0.1 | 0.1 | 500 | 0.09 | 0.1 | 0.01 | | | | Seq-CIFAR100 | 200 | 0.1 | 0.1 | 0.01 | DN4IL | 200 | 0.06 | 0.1 | 0.01 | | 500 | 0.06 | 0.1 | 0.01 | 500 | 0.08 | 0.1 | 0.01 | | | | Table 7: Selected hyperparameters for all baselines. | | | | |--------------------------------------------------------|-----------------------------------------------------|--------|-----------------| | Dataset | |B| | Method | Hyperparameters | | 200 | ER | lr=0.1 | | | DER++ | lr=0.03, α=0.1, β=0.5 | | | | CO2L | lr:0.5, τ :0.5, κ:0.2, κ ∗ :0.01, e:100 | | | | ER-ACE | lr=0.01 | | | | CLS-ER | lr=0.1 λ=0.15, rp=0.1, rs=0.05, αp=0.999, αs=0.999 | | | | Seq-CIFAR100 | 500 | ER | lr=0.1 | | DER++ | lr=0.03, α=0.1, β=0.5 | | | | CO2L | lr:0.5, τ :0.5, κ:0.2, κ ∗ :0.01, e:100 | | | | ER-ACE | lr=0.01 | | | | CLS-ER | lr=0.1 λ=0.15, rp=0.1, rs=0.05, αp=0.999, αs=0.999 | | | | 200 | ER | lr=0.1 | | | DER++ | lr=0.03, α=0.5, β=0.1 | | | | ER-ACE | lr=0.1 | | | | CLS-ER | lr=0.1 λ=0.1, rp=0.7, rs=0.6, αp=0.999, αs=0.999 | | | | GCIL-CIFAR100 | 500 | ER | lr=0.1 | | DER++ | lr=0.03, α=0.2, β=0.1 | | | | ER-ACE | lr=0.1 | | | | CLS-ER | lr=0.1 λ=0.1, rp=0.7, rs=0.6, αp=0.999, αs=0.999 | | | | 200 | ER | lr=0.1 | | | DER++ | lr=0.03, α=0.1, β=1.0 | | | | CLS-ER | lr=0.05 λ=0.1, rp=0.08, rs=0.04, αp=0.999, αs=0.999 | | | | DN4IL | 500 | ER | lr=0.1 | | DER++ | lr=0.03, α=0.5, β=0.1 | | | | CLS-ER | lr=0.05 λ=0.1, rp=0.08, rs=0.05, αp=0.999, αs=0.999 | | | any extra information (such as logits in DER++) to be used later in our objectives. Additionally, there are no additional data requirements. The number of parameters is reported in Table 9. We considered the human brain, or cognition, as the most intelligent agent and wanted to incorporate some of the underlying workings into the neural network architecture. Therefore, our goal was to design a framework inspired by elements of cognitive architecture that reduce forgetting while exhibiting better generalization and robustness. Our work is a preliminary attempt to incorporate elements based on cognitive architecture into the CL algorithm to gauge the gain in reducing forgetting. We hope the promising results result in future work that explores different kinds of cognitive biases and interactions and moves towards more efficient designs. Table 9: Comparison of the number of parameters for different methods using ResNet18 architecture and trained on SeqCIFAR100 dataset. Method Training Inference ER 11 11 DER++ 11 11 CO2L 23 11 ER-ACE 11 11 CLS-ER 33 11 DUCA 33 11 Algorithm 2 Sobel Algorithm - Shape Extraction 1: Up-sample the images to twice the original size: xrgb = us(xrgb) 2: Reduce noisy edges: xg = Gaussian_Blur(xrgb, kernel_*size* = 3) 3: Get Sobel kernels: Sx = −1 0 +1 −2 0 +2 −1 0 +1 and Sy = −1 −2 −1 0 0 0 +1 +2 +1 4: Apply Sobel kernels: xdx = xg ∗ Sx and xdy = xg ∗ Sy (∗ is 2-D convolution operation) 5: The edge magnitude: x*shape* = qx 2 dx + x 2 dy 6: Down-sample to original image size: x*shape* = ds(x*shape*) ## G Inductive Bias The shape extraction is performed by applying a filter on the input image. Multiple filters (such as Canny (Ding & Goshtasby, 2001), Prewitt) were considered, but the Sobel filter (Sobel & Feldman, 1968) was chosen because it produces a more realistic output by being precise and also smoothing the edges (Gowda et al., 2022); see Algorithm 2. Figure 6 shows a few examples of applying the Sobel operator on the original RGB images. The Sobel output is fed to the IBL model. The DUCA framework can be extended to other domains by selecting appropriate inductive biases relevant to those domains. For example, in the audio domain, one viable option involves leveraging spectrogram representations, which transform audio waveforms into visual data capturing both frequency and timedomain information. Alternatively, the incorporation of phonemes, representing the fundamental sound units in spoken language and carrying essential linguistic information, can enhance the framework's capabilities for speech understanding, speaker identification, and language processing. Additionally, the extraction of pitch and timbre features proves valuable for tasks like melody extraction, intonation analysis, and various music-related applications, providing insights into the acoustic characteristics of audio signals. ## H **Dn4Il Dataset** We introduce a new dataset for the Domain-IL setting. It is a subset of the standard DomainNet dataset (Peng et al., 2019) used in domain adaptation. It consists of six different domains: real, clipart, infograph, painting, quickdraw, and sketch. The shift in distribution between domains is challenging. A few examples can be seen in Figure 7. Each domain includes 345 classes, and the overall dataset consists of ∼59000 samples. The classes have redundancy, and also evaluating on the whole dataset can be computationally expensive for CL settings. Therefore, we create a subset by grouping semantically similar classes into 20 supercategories (considering the class overlap between other standard datasets can also facilitate OOD analysis). Each super category has five classes each, which results in a total of 100 classes. The specifications of the classes are given in Table 11. The dataset consists of 67080 training images and 19464 test images. The image size for all experiments is chosen as 64×64 (the normalize transform is not applied in augmentations). ![18_image_0.png](18_image_0.png) Figure 6: Visual examples of the shape images using Sobel operator Table 10: Accuracy on the proposed *DN4IL* dataset for the Domain-IL setting. DUCA shows a significant improvement in all disparate and challenging domains. |B| Method real clipart infograph painting sketch quickdraw Acc JOINT 59.93±1.07 - SGD 9.98±0.54 19.97±0.31 2.32±0.20 6.58±0.34 14.91±0.04 71.23±0.17 20.83±0.24 | improvement in all disparate and challenging domains. |B| Method real clipart infograph | painting | sketch | quickdraw | Acc | | | | | |-------------------------------------------------------------------------------------------|------------|------------|-------------|------------|------------|------------|------------|------------| | JOINT | | 59.93±1.07 | | | | | | | | - | SGD | 9.98±0.54 | 19.97±0.31 | 2.32±0.20 | 6.58±0.34 | 14.91±0.04 | 71.23±0.17 | 20.83±0.24 | | ER | 20.08±0.45 | 26.37±0.35 | 5.56±0.39 | 13.92±0.91 | 23.69±1.54 | 69.95±0.56 | 26.59±0.31 | | | DER++ | 33.66±1.65 | 37.24±0.64 | 9.80±0.63 | 24.16±1.17 | 34.37±2.00 | 69.26±0.79 | 34.75±0.87 | | | CLS-ER | 45.53±0.88 | 49.17±1.12 | 15.79±0.48 | 35.80±0.64 | 48.03±0.85 | 54.40±1.25 | 40.83±1.07 | | | DUCA | 47.52±0.25 | 54.69±0.10 | 15.70±0.33 | 37.54±0.30 | 51.98±0.96 | 58.80±0.18 | 44.23±0.05 | | | 200 | ER | 27.54±0.05 | 31.89±0.93 | 7.89±0.45 | 19.39±1.02 | 28.36±1.35 | 70.96±0.10 | 31.01±0.62 | | DER++ | 44.49±1.39 | 46.17±0.35 | 14.01±0.23 | 33.44±0.90 | 43.59±1.11 | 69.53±0.29 | 41.87±0.63 | | | CLS-ER | 49.85±0.88 | 51.41±0.34 | 18.17±0.08 | 37.94±0.94 | 49.02±1.57 | 55.63±0.71 | 43.41±0.80 | | | DUCA | 54.77±0.15 | 60.37±0.75 | 19.35±0.39 | 44.50±0.43 | 56.34±0.53 | 60.61±1.73 | 49.32±0.23 | | | 500 | | | | | | | | | ![18_image_1.png](18_image_1.png) Figure 7: Visual examples of *DN4IL* dataset ![19_image_0.png](19_image_0.png) Figure 8: Number of samples per domain and per supercategory in *DN4IL* dataset. | | Table 11: Details on supercategory and classes in DN4IL dataset. | | | | | | |---------------|--------------------------------------------------------------------|---------------|---------------|------------------|-------------|-----------------| | supercategory | class | | | | | | | 1 | small animals | mouse | squirrel | rabbit | dog | raccoon | | 2 | medium animals | tiger | bear | lion | panda | zebra | | 3 | large animals | camel | horse | kangaroo | elephant | cow | | 4 | aquatic mammals | whale | shark | fish | dolphin | octopus | | 5 | non-insect invertebrates | snail | scorpion | spider | lobster | crab | | 6 | insects | bee | butterfly | mosquito | bird | bat | | 7 | vehicle | bus | bicycle | motorbike | train | pickup_truck | | 8 | sky-vehicle | airplane | flying_saucer | aircraft_carrier | helicopter | hot_air_balloon | | 9 | fruits | strawberry | banana | pear | apple | watermelon | | 10 | vegetables | carrot | asparagus | mushroom | onion | broccoli | | 11 | music | trombone | violin | cello | guitar | clarinet | | 12 | furniture | chair | dresser | table | couch | bed | | 13 | household electrical devices | clock | floor_lamp | telephone | television | keyboard | | 14 | tools | saw | axe | hammer | screwdriver | scissors | | 15 | clothes & accessories | bowtie | pants | jacket | sock | shorts | | 16 | man-made outdoor | skyscraper | windmill | house | castle | bridge | | 17 | nature | cloud | bush | ocean | river | mountain | | 18 | food | birthday_cake | hamburger | ice_cream | sandwich | pizza | | 19 | stationary | calendar | marker | map | eraser | pencil | | 20 | household items | wine_bottle | cup | teapot | frying_pan | wine_glass | ![19_image_1.png](19_image_1.png) Figure 9: Number of overall samples per class in *DN4IL* dataset. ![20_image_0.png](20_image_0.png) Figure 10: Number of samples per supercategory for each domain in DN4IL dataset.
Review 1: Summary: This paper considers the continual learning problem in ANNs from the perspective of cognitive sciences. According to cognitive sciences, humans are able to perform a wide variety of difficult tasks due to the structure of the cognitive system, in particular multi-memory systems in the brain and incorporation of cognitive biases. The idea of the paper is to use these concepts in artificial neural networks to build a system that performs well in continual learning. The authors propose a Dual Cognitive Architecture, DUCA, which consists of three neural networks: the working model, the semantic memory, and the implicit bias learner that focuses on the shape of the image rather than the full RGB picture. The authors show that the proposed model works well on the split CIFAR dataset and the split MNIST dataset as compared to strong baselines from continual learning literature. The authors propose their own dataset for domain incremental learning and they also show that the method performs quite well there. Finally, the authors perform a wide variety of ablation studies showing a more thorough understanding of when and why the model works in practice. Strengths and Weaknesses: In general, I think the paper is quite interesting and should be valuable for the TMLR community. However, before recommending acceptance, I would like to see some more baselines added just to understand the impact of the shape information on the final results. If that issue is addressed, I would be happy to recommend acceptance. Strengths - The paper considers an important problem, from a perspective that I think has been fairly overlooked, i.e. cognitive sciences. - The results shown in the paper are quite strong. The proposed method outperforms the baselines by a significant margin, both on the previously established benchmark as well as on the new benchmark. - Additionally, the paper introduced the domain incremental learning dataset. I think this is a valuable contribution that will be useful in further continual learning research. - The paper is pretty clearly written, easy to understand, and well explained. Weaknesses - I think the paper doesn't show why exactly the proposed method works well. In particular, it might be that the module that uses the shapes as the input makes the classification problem easier in general. As such, I would like to ask the authors to introduce two new baselines. The first one is a network that simply learns only from the shape data. So basically the bias model trained in isolation on the continual learning task. Another baseline would be a network that uses both the RGB data and the shape data as input, without the multi-memory structure introduced in the paper. - Adding to the previous point, I think the author should try to understand how the shape information impacts the performance on any single task versus on the whole sequence. It might be that simply with the shape information we are able to better solve each task in separation and that's why we see the gains on the continual learning problem. - The novelty of this work is not very high. In particular, it seems like an incremental improvement over the CLS-ER method proposed previously by Arani et al. who use a similar multi-network approach. In this work, the authors also use the inductive bias network, which seems somewhat incremental. On the other hand, for this venue, high novelty is not an important criterion for accepting the paper, so I don't think this is a good enough reason to reject this paper. However, I think this point should be noted. Requested Changes: Please add the shape-based baselines outlined in the first two points of the "Weaknesses" section. Broader Impact Concerns: N/A ================================================== Review 2: Summary: Taking inspiration from cognitive science, this paper proposes a new multi-module architecture for continual learning. The proposed “Dual Cognitive Architecture” (or DULA) is made up of a working model (which is similar to a “standard” deep learning architecture), an inductive bias learner (which is trained on pre-processed shape information) and a semantic memory (which essentially is a delayed version of the working model). Several loss terms are introduced to encourage the outputs of these different modules to be comparable to each other. Using variants of the CIFAR10 and CIFAR100 datasets, the authors test the performance of the new architecture on task- and class-incremental learning settings; and for the domain-incremental learning setting a new dataset DN4IL is proposed, which is a curated subset of DomainNet. In all reported experiments, DUCA performs better than several established replay-based continual learning methods (such as DER++ and Co2L). Strengths and Weaknesses: I think the main strength (or perhaps “selling point” is a better word) of this paper is the strong performance of the proposed method DUCA relative to established replay-based CL methods such as DER++ and Co2L. The suggestion that using shape information can have a specific benefit for continual learning is intriguing as well. Finally, I think that the proposed DN4IL dataset can be a useful contribution to the continual learning literature, albeit somewhat incremental. However, that the strong performance of DUCA is the main selling point, is in my opinion also an important weakness of this paper. The goal of continual learning research is not to achieve “state-of-the-art” performance on variants of Split CIFAR; rather the goal is to gain insights into how to build successful CL algorithms (and to eventually apply those to practical problems, although that goal is still distant). Controlled benchmarking experiments on variants of Split CIFAR could help towards this goal, but by themselves such experiments are of limited value, especially if there are concerns regarding how controlled they are. Perhaps the authors would argue that an insight from their paper is that inductive bias (or more specifically, a bias towards using shape information of images) can be beneficial for continual learning, but I’m afraid I do not think the current version of the paper satisfactorily demonstrates this. The authors show that using shape information can boost CL performance. However, as it is already known that using shape information can boost image classification (e.g., Gheiros et al, 2019 ICLR), it is not clear whether the use of shape information has benefits for CL on top of the general benefits it can have for image classification. I think the usage of shape information, and in particular the way this paper proposes to use shape information, *could* carry unique benefits for CL, but I’m afraid that the current version of this paper does not convincingly demonstrate this. (A related issue is that the use of shape information makes the benchmarking experiments less controlled. The reason that methods that are compared against do not use shape information might well not be because “they did not think to do so”, but rather because they wanted to test specific approaches in isolation.) Although the selected values for all hyperparameters are reported in Appendix D, it is not well described how these hyperparameters were selected for the different methods. It is stated that grid searches were performed, but it is not reported which hyperparameter values were considered for each method. Another drawback is that DUCA has relatively a lot of hyperparameters (which are set in a way that violates the CL principle of only seeing data in a certain sequence), more than most of the other methods that are compared with. Both of these are reasons contributing to why results of the benchmarking experiments by themselves are of limited value. The paper presents DUCA as a “general” method for continual learning (among others by calling it a “framework for continual learning”). However, DUCA relies on the Sobel shape extraction algorithm, and as such its applicability seems to be limited to the image domain. Perhaps it could be argued that shape is merely used as an example of an inductive bias, and that for other modalities other inductive biases could be used. However, I do not think this is self-evident. I think either the authors should ensure it is clear that their proposed method is specific to the domain of images, or they should demonstrate that the underlying principle can also be successfully applied in other domains. On page 6 the authors state: “As is evident from the different CL methods in the literature, the improvement in performance has been saturated on all variants of MNIST.” This is a very strong statement, but no references are provided for it. It also seems to me this statement is clearly wrong as I can think of many variations with MNIST for which performance has not saturated (e.g., when restricting memory, computation, amount of training samples). In fact, to strengthen this paper, I would encourage the authors to additionally include experiments on relatively simple datasets (for example MNIST), to provide results that are more easily reproducible and interpretable. The way this paper describes class-, domain- and task-incremental learning seems rather simplistic and not in line with how these types of continual learning have been defined in Van de Ven et al. (2022, Nat Mach Intell). For example, on page 13 limitations of class-incremental learning are listed as “the assumption of the same number of classes across different tasks, the absence of reappearance of classes, and the sample distribution per class being well balanced”, but I don’t think these stated assumptions are part of the definition of class-incremental learning. Perhaps it could instead be argued that recent papers sometimes tend to make these assumptions when studying class-incremental learning? I think the authors should also be careful with calling these assumptions “limitations”. For example, for DN4IL, the authors make simplifications relative to DomainNet; it could be argued these simplifications make DN4IL “less realistic” than DomainNet, but the authors will probably agree that these simplifications are not “limitations”. Minor issues: - “class incremental learning” should be “class-incremental learning” (same with task- and domain-incremental learning, see for example https://www.grammarly.com/blog/hyphen/) - The first sentence of section 2.4 introduces “Cognitive Continual Learner (CCL)” but probably meant to say DUCA? - The cited Gheiros et al paper was published at ICLR 2019, not ICLR 2018 Requested Changes: I believe the main change that should be made is providing evidence that, and ideally also insights why, the various proposed aspects of DUCA have benefits that are specific for continual learning. Otherwise, please refer to the various raised issues and suggestions under “Strengths and Weaknesses”, also for more details on the requested change above. Broader Impact Concerns: No concerns in this regard. ================================================== Review 3: Summary: The author propose a novel method for continual learning (CL), which is both rehearsal- and architecture-based. The method consists in augmenting the main network with a"semantic memory" (a running average of the main network updated at random times) and an "inductive bias" system (a network trained on edge-detected versions of the input images, presumably emphasizing shape information). The subsystems are trained on both current examples and previous examples, stored in a replay buffer. IIIUC the outputs of the various networks are constrained to be similar by MSE losses (Eqs. 3,4). The method is tested on various continual learning problems, representing Task-, Domain- and Class-incremental learning, and shows significant benefits over existing approaches. In addition, a new dataset for domain-incremental learning (essentially curated form a larger existing dataset) is introduced. Strengths and Weaknesses: Strengths: - The method seems novel. - It seems to work well. - The new dataset is potentially interesting. Weaknesses: There doesn't seem to be any deadly flaw in the proposed method, but some question marks. - From the results, it seems that the addition of the shape-based model (trained on edge-detection images) was the main factor in the approach's superior performance (see especially section 6.5 on ablations). Thus, the main result of the paper may simply be that image-based tasks benefit from a bias towards shapes. This may be interesting, but it may also seem somewhat orthogonal to the actual problem of lifelong learning / CL. - While the authors address the complexity of the various methods, they do not address the number of trainable parameters (unless I missed it). Are the comparison between models with equal numbers of trainable parameters and wallclock time? - The system includes a dedicated network trained on edge-detected images. This plays a large part in the model's performance. But is this improvement caused by the architectural innovation, or simply the inclusion of edge-detected /shape-emphasized images in the data? An alternative would be to augment main training set with edge-detected images, using only the main network and the "semantic" (slow-update) network in the model, without including a dedicated "inductive bias" network. How would this simpler method compare with the proposed approach, all other things (training time, number of parameters, etc.) being equal? Requested Changes: - Please add number of trainable parameters and if possible wallclock training time for the various approaches. - At the very least, explain the benefit of adding a dedicated shape-based network to the system, rather than merely augmenting the main dataset with edge-detected images. Actual experiments would be best, if at all possible. - Section 2.4, second paragraph: what are the "encoder" and the "classifier"? This does not seem to have been discussed beforehand. - Last paragraph of p. 1: fix the second sentence. - Last paragraph of p.3: "deferentially" -> "differentially". Broader Impact Concerns: I do not see any broader impact concerns. ================================================== Metareview: Recommendation: Accept with minor revision Comment: On balance, I believe that the authors have provided enough evidence to support their core claims and addressed most of the reviewer comments. However, I believe that there are still two important minor modifications that need to be made: 1) The abstract is not clear enough on what the model actually includes. Specifically, nowhere in the abstract is it noted that what the authors mean by "multiple memory systems" is a slow and fast learner for rehearsal (in-line with complementary learning systems theory), and that what they mean by "inductive biases" is specifically a shape bias. As such, a reader will not really understand the core model design from the abstract, even at a high level, so it is too vague. The authors should fix this before publication. 2) Though the authors inserted some language about moving this beyond the image domain, it is, in my judgement, insufficient. Specifically, the authors say, "...the underlying principles and mechanisms of DUCA can be adapted to different modalities. The DUCA framework can be extended to other domains by selecting appropriate inductive biases relevant to those domains." But, what would "appropriate inductive biases" be in other domains? Also, why would the other components of DUCA help in other domains? A few other sentences to suggest potential future directions to extend this work beyond the image domain would greatly help the conclusion of the paper. ==================================================
# Federated Learning With Nonvacuous Generalisation Bounds Anonymous authors Paper under double-blind review ## Abstract We introduce a novel strategy to train randomised predictors in federated learning, where each node of the network aims at preserving its privacy by releasing a local predictor but keeping secret its training dataset with respect to the other nodes. We then build a global randomised predictor which inherits the properties of the local private predictors in the sense of a PAC-Bayesian generalisation bound. We consider the synchronous case where all nodes share the same training objective (derived from a generalisation bound), and the heterogenous and homogenous cases where each node may have its own personalised training objective. We show through a series of numerical experiments that our approach achieves a comparable predictive performance to that of the batch approach where all datasets are shared across nodes. Moreover the predictors are supported by numerically nonvacuous generalisation bounds while preserving privacy for each node. We explicitly compute the increment on predictive performance and generalisation bounds for our two federated settings, highlighting the price to pay to preserve privacy. ## 1 Introduction In the federated learning (FL) paradigm, a group of *users* (or nodes) is learning in parallel, and typically aims at preserving their personal datasets while sharing a common predictor. While maintaining the privacy of their own data, users mutually share information through a central *server*. There has been a significant surge of interest in federated learning in the past decade (Konen˝ et al., 2016b), with clear applications in healthcare, transportation and retail, where it is typically of the utmost interest to avoid the leak of private information to other organisations or devices, for ethical or business motivations. The existing literature essentially categorises *horizontal* and *vertical* FL, depending on whether users' datasets share many features or individuals. These two streams have generated various contributions such as the design of ecient communication strategies (Konen˝ et al., 2016a;b; Suresh et al., 2017), the preservation of privacy through dierentially-private distributed optimisation methods (Agarwal et al., 2018), the enforcement of fairness (as in many cases, post-training learning models may be biased or unfair and may discriminate against some protected groups - Hardt et al., 2016; Mohri et al., 2019). We refer to Zhang et al. (2021); Mammen (2021); Kairouz et al. (2021) for recent surveys on FL. Consider a simple federated learning framework (Bonawitz et al., 2017; McMahan et al., 2017b). In each round, the server first provides the initial model to each user, then each user updates the initial model with its personal data. Finally, the server aggregates the collected local models into a single global model, which is used as next round's initialisation if needed. Hereafter we will refer to this learning problem as FL-SOB (Federated Learning with Synchronous OBjectives). This is especially relevant when all users share a common learning goal (*e.g.*, hospitals learning from dierent datasets to identify or predict a specific single pathology). Deep neural networks have been used to develop powerful federated algorithms (McMahan et al., 2017a). A more complex scenario consists in personalised FL (PFL, Tan et al., 2022) where users may have their own distinct learning goal but still want to share joint information as these goals share some level of similarity. This corresponds, for instance, to *transfer learning* (see *e.g.*, Zhuang et al., 2021) situations where one wants to extract some information of a learning problem (*e.g.*, detecting tigers in images) to perform better on another one, sharing some similarities (*e.g.*, detecting cats). Towards a unified framework. The recent PAC-FL framework of Zhang et al. (2023b) proposes a unified framework to formalise FL, intricating the notion of generalisation ability (designed as utility) alongside privacy, and quantifying how much data are protected (*i.e.* impossible to retrieve) while transmitting partial information to the server. This framework builds from the work of Zhang et al. (2019) investigating the tradeos between privacy, utility and eciency. The question of an optimal trade-o is crucial to deploy the FL framework in practice (Tsipras et al., 2019). On the place of generalisation in FL. Using their PAC-FL framework, Zhang et al. (2023b) proposed generalisation bounds involving the dimension of the predictor space. The question of generalisation in FL is central: Mohri et al. (2019); Zhang et al. (2023a) established Rademacher-based generalisation bounds, Yagli et al. (2020) provided bounds based on mutual information to explain both generalisation ability and privacy leakage per user. Bayesian methods have also been considered in FL-SOB (Yurochkin et al., 2019; Chen & Chao, 2021; Zhang et al., 2022) as well as in PFL (Kotelevskii et al., 2022). PAC-Bayes learning in FL. Beyond Bayesian methods, PAC-Bayes learning (see the seminal works of Shawe-Taylor & Williamson, 1997; McAllester, 1998; 2003; Maurer, 2004 - we refer to the surveys of Guedj, 2019; Alquier, 2021, and to the recent monograph of Hellström et al., 2023) has recently re-emerged as a powerful framework in batch learning to explain the generalisation ability of neural nets by providing non-vacuous generalisation bounds (Dziugaite & Roy, 2017; Letarte et al., 2019; Pérez-Ortiz et al., 2021; Biggs & Guedj, 2021; 2022). PAC-Bayes combines information-theoretic tools with the Bayesian paradigm of generating a data-dependent *posterior* distribution over a predictor space from a *prior* distribution (or reference measure), usually data-independent. The flexibility of the PAC-Bayes framework makes it useful to explain generalisation in many learning settings. In particular, theoretical results and practical algorithms have been derived for various learning problems such as reinforcement learning (Fard & Pineau, 2010), online learning (Li et al., 2018; Haddouche & Guedj, 2022), constrative learning (Nozawa et al., 2020), generative models (Chérief-Abdellatif et al., 2022), multi-armed bandits (Seldin et al., 2011; 2012; Sakhi et al., 2022), meta-learning (Amit & Meir, 2018; Farid & Majumdar, 2021; Rothfuss et al., 2021; 2022; Ding et al., 2021), majority votes (Zantedeschi et al., 2021; Biggs et al., 2022) to name but a few. Recently, some works have used the PAC-Bayes framework in FL: Reisizadeh et al. (2020) and Achituve et al. (2021) have evaluated the post-training predictor shared by all users through a PAC-Bayes bound. Rather than exploiting existing bounds, new PAC-Bayes results, tailored for personalised FL, recently emerged with the aim to explain the eciency of learning procedures (Scott et al., 2023; Sefidgaran et al., 2023), although the PAC-Bayes bound is not minimised by the algorithm. Finally, recent works showed that the Bayesian procedure ELBO, adapted to FL, is exactly the minimisation of a PAC-Bayes upper bound (Kim & Hospedales, 2023; Vedadi et al., 2023). Thus, they show that those methods are well incorporated in a theoretical framework explaining their good generalisation ability. Our contributions. Beyond being a safety check for generalisation, PAC-Bayes theory provides state-of-the art learning algorithms with tight generalisation guarantees in the batch setting. We adapt those algorithms to the FL-SOB and PFL settings. We propose GenFL (standing for Generalisation-driven Federated Learning), an algorithm in which users optimise local PAC-Bayes objectives (bounds from Dziugaite & Roy, 2017; Pérez-Ortiz et al., 2021). We show a global generalisation bound for all users in FL-SOB, and local ones in PFL. Finally, we show in numerical experiments that our procedure is competitive with the state-of-the-art and we bring nonvacuous generalisation guarantees to practitioners of federated learning. Outline. We describe our notation in Section 2 and introduce in Section 3 a novel algorithm called GenFL, alongside two instantiations to FL-SOB and PFL. We present numerical experiments to support our methods. in Section 4. Our algorithms and the code used to generate figures in this paper is available at https://anonymous.4open.science/r/GenFL-0147/README.md. The paper closes with a discussion in Section 5. In Appendix A, we comment on strategies to compute PAC-Bayesian bounds, Appendix B contains a comprehensive description of our procedure in the PFL setting, and Appendix C provides additional experiments. ## 2 Background Federated learning. We consider a predictor set H, a data space Z and denote the space of distributions over H, M(H). We let ¸ : H ◊ Z æ [0, 1] denote a loss function. In FL, we consider an ensemble of K œ Nú users, and for each user 1 Æ i Æ K, we denote by Si = (zi,j )j=1···mi its associated dataset of size mi. We define S, of size m = qKi=1 mi, to be the union of all Si. We assume that each Si is *i.i.d.* with associated distribution Di. Each user 1 Æ i Æ K aims to jointly learn a predictor h œ H while keeping private their training dataset Si. Learning theory. In PAC-Bayes learning, instead of directly crafting a predictor h œ H, we design a datadriven posterior distribution Q œ M(H) with respect to a prior distribution P. To assess the generalisation ability of a predictor h œ H, we define for each user k the *risk* to be RDi := Ez≥µ[¸(h, z)] and its empirical counterpart Rˆ Si := 1m qmi j=1 ¸(h, zi,j ). As PAC-Bayes focuses on elements of M(H), we also define the expected risk and empirical risks for Q œ M(H) as RDi (Q) := Eh≥Q[RDi (h)] and Rˆ Si (Q) := Eh≥Q[Rˆ Si (h)]. Background on PAC-Bayes learning. In a batch setting, we only consider the dataset S (this can be seen as the case where there is only one user) and we assume that all data are *i.i.d.* with distribution D. For two probability measures P, Q we define the *Kullback-Leibler divergence* to be KL(Q,P) = Eh≥P ËdQ dP (h) È where dQ dP is the Radon-Nikodym derivative. We also denote by kl the KL divergence between two Bernoulli distributions. Generalisation bounds. We recall the following bound, due to McAllester (2003); Maurer (2004), which holds for bounded losses. Theorem 1 (McAllester's bound). For any data-free prior distribution P œ M(H), any " œ [0, 1], with probability at least 1 ≠ "*, for any posterior distribution* Q œ M(H), $$\operatorname{kl}\left(R_{\mathcal{D}}(\mathrm{Q}),R_{\mathcal{S}}(\mathrm{Q})\right)\leq{\frac{\operatorname{KL}(\mathrm{Q}||\mathrm{P})+\ln{\frac{2{\sqrt{m}}}{\delta}}}{m}},$$ which leads to the following upper bound on the risk $$R_{\mathcal{D}}(\mathrm{Q})\leq\mathrm{kl}^{-1}\left(\hat{R}_{\mathcal{S}}(\mathrm{Q})\left\|\frac{\mathrm{KL}(\mathrm{Q}\|\mathrm{P})+\ln\frac{2\sqrt{m}}{\delta}}{m}\right.\right)\,,$$ $$(1)$$ $$\left(2\right)$$ $\quad\quad\quad\quad(3)$ . where kl≠1(*x, b*) = sup {y œ [x, 1] | kl(*x, y*) Æ b}. Note that by the definition of kl≠1, (2) is the tightest upper bound on RD(Q) that we can obtain starting from (1). While kl≠1 has no closed form, it is possible to approximate it eciently via root-finding techniques (see, *e.g.*, Dziugaite & Roy, 2017, Appendix A). However, this function is hard to evaluate, and even harder to optimise. We need to rely on looser relaxations of (1) to design tractable optimisation procedures. Relaxations of McAllester's bound. The most classical relaxation of (1) relies on Pinsker's inequality kl(qÎp) Ø (p≠q)2 2 and leads to the following high-probability bound, valid under the same assumptions as Theorem 1: $$\mathrm{R}_{\mathcal{D}}(\mathrm{Q})\leq\hat{\mathrm{R}}_{\mathcal{S}}(\mathrm{Q})+\sqrt{\frac{\mathrm{KL}(\mathrm{Q}\|\mathrm{P})+\ln\frac{2\sqrt{m}}{\delta}}{2m}}.$$ While (3) is well known and already appears in McAllester (2003), novel relaxations, exploiting refined Pinsker's inequality (see, *e.g.*, Boucheron et al., 2013, Lemma 8.4) have been exploited to obtain PAC-Bayes Bernstein bounds (Tolstikhin & Seldin, 2013; Mhammedi et al., 2019). Building on this inequality, Rivasplata et al. (2019); Pérez-Ortiz et al. (2021) proposed a *PAC-Bayes quadratic bound* recalled below, valid under the assumptions of Theorem 1 $$\mathrm{R}_{\mathcal{P}}(\mathrm{Q})\leq\left(\sqrt{\mathrm{\hat{R}}_{\mathcal{S}}(\mathrm{Q})+\frac{\mathrm{KL}\left(\mathrm{Q}\|\mathrm{P}\right)+\log\left(\frac{2\sqrt{m}}{\delta}\right)}{2m}}+\sqrt{\frac{\mathrm{KL}\left(\mathrm{Q}\|\mathrm{P}\right)+\log\left(\frac{2\sqrt{m}}{\delta}\right)}{2m}}\right)$$ ![3_image_0.png](3_image_0.png) $$\overline{{{\frac{\pi}{2}}}}\ .$$ (4) $$\binom{4}{2}$$. $\left(5\right)^{\frac{1}{2}}$ . (4) Note that (3) and (4) are easier to optimise than (2), making them more relevant for practical learning algorithms. Generalisation-driven learning algorithms. Most of the PAC-Bayesian bounds in the literature are fully empirical. This paves the way to use the bound as a training objectives and leads to generalisationdriven learning algorithms. A classical PAC-Bayesian algorithm is derived from Catoni's bound (see, *e.g.*, Catoni, 2007, Alquier et al., 2016, Theorem 4.1): $$\operatorname*{argmin}_{Q\in\mathcal{M}(\mathcal{H})}\mathrm{R}_{\mathcal{S}}(Q)+\frac{\mathrm{KL}(Q,\mathrm{P})}{\lambda}.$$ In (5) an *inverse temperature* ⁄ > 0 appears and acts as a learning rate in gradient descent. Similarly, it is possible to derive batch learning algorithms from (3) (4) (Dziugaite & Roy, 2017; Pérez-Ortiz et al., 2021). Then, we have access to a theoretical upper bound which requires to approximate expectations over Q. We next discuss how to mitigate this. Computing generalisation guarantees. In practice, the tightest McAllester's bound is computed,*i.e.*, (1). First, note that the KL divergence is easy to compute in the Gaussian case as it has a closed form (see, *e.g.*, Duchi, 2007, Section 9). Then, it remains to estimate the expected empirical risk over Q, which is costly in practice as it involves Monte Carlo approximations. To alleviate this issue, we leverage the trick from Dziugaite & Roy (2017, Section 3.3), which exploit a high probability upper bound of Rˆ S (Q) with confidence level "Õ $$\hat{\mathrm{R}}_{S}(Q)\leq\mathrm{kl}^{-1}\left(\hat{\mathrm{R}}_{S}(\hat{Q}_{n})\left\|{\frac{1}{n}}\ln\left({\frac{2}{\delta^{\prime}}}\right)\right),$$ where Rˆ S (Qˆn) = 1n qni=1 ¸(Wi, zi), '*i, W*i ≥ Q. Then incorporating this upper bound in (1) gives the final bound we use, with probability at least 1 ≠ " ≠ "Õ: $${\rm R}_{\cal D}({\rm Q})\leq{\rm k}{\rm l}^{-1}\left(\hat{\rm R}_{S}^{n}({\rm Q})\right)\left\|\frac{{\rm KL}({\rm Q}\|{\rm P})+\ln\frac{2\sqrt{m}}{\hat{a}}}{m}\right)\,,\tag{6}$$ $$\mathrm{where}\ \hat{\mathrm{R}}_{\mathcal{S}}^{n}(\mathrm{Q})=\mathrm{kl}^{-1}\left(\hat{\mathrm{R}}_{\mathcal{S}}(\hat{Q}_{n})\left\|\frac{1}{n}\ln\left(\frac{2}{\hat{\delta}^{\prime}}\right)\right).$$ To compute kl≠1(*p, c*) for any *p, c*, we leverage Dziugaite & Roy (2017, Appendix A) described in appendix A. ## 3 Generalisation-Driven Federated Learning In this section we introduce our algorithm GenFL. From batch to federated PAC-Bayes algorithms. When training stochastic neural nets (SNNs) with PAC-Bayes objectives, it is common to assume that each weight follows a Gaussian distribution. For conciseness, we identify a SNN to the Gaussian distribution of all its weights N (*µ, Diag*(‡i)). The works of Dziugaite & Roy (2017); Rivasplata et al. (2019); Biggs & Guedj (2021); Pérez-Ortiz et al. (2021); PerezOrtiz et al. (2021); Biggs & Guedj (2022) proposed successful PAC-Bayesian training algorithms for SNNs which ensure generalisation guarantees. All these methods operate in a batch setting, *i.e.*, the optimiser has access to all data simultaneously. Thus, building on the work of Rivasplata et al. (2019); Pérez-Ortiz et al. (2021), we propose Alg. 1, a new learning algorithm called GenFL (Generalisation-driven Federated Learning), casting PAC-Bayes into FL. We stress that GenFL benefits from nonvacuous generalisation guarantees (Section 4). Algorithm 1 GenFL. users are indexed by k; B is the local minibatch size, E is the number of local epochs, ÷ is the learning rate, f the PAC-Bayes objective. The prior is N (µprior‡prior) with parameter ". 1: **Server executes:** 2: m Ω qKk=1 mk Û Total dataset size 3: w1 Ω (µprior, ‡prior) 4: for each round t do 5: St Ω random set of max(C · K, 1) users 6: for each user k œ St **in parallel do** 7: wkt+1 Ω userUpdate(k, wt, m) 8: **end for** 9: wt+1 Ω qKk=1 mk m wkt+1 10: **end for** 11: 12: **userUpdate(k, w, m):** 13: B Ω (split Sk into batches of size B) 14: for each local epoch e = 1, 2, ··· , E do 15: for each local minibatch b œ B do 16: wks Ω µk + ‡k § N (0, 1) Û Reparam. trick 17: wk Ω wk ≠ ÷Òwf*m,",µ*prior,‡prior (wks ; b) 18: **end for** 19: **end for** 20: **return** wk Ensure: Global model distribution wT mapped to N (µT , ‡T ) A generalisation-driven FL algorithm. GenFL combines a federated learning protocol (*i.e.*, FedSGD, FedAvg, McMahan et al. (2017a)) with a PAC-Bayes objective f. Its starts from the vector w1 = (µprior, ‡*prior*) corresponding to the initial distribution P = N (µprior, ‡*prior*) and outputs after T rounds wT = (µT , ‡T ) corresponding to the posterior Q = N (µT , ‡T ). Hence the learning procedure is divided in rounds, where subsets of users are sampled. Sampled users are requested to perform local updates following a PAC-Bayes training procedure designed to ensure a good generalisation of the posterior distribution. Because users train SNNs, they sample weights from the posterior. In order to learn the variance parameter of the posterior (Gaussian), we use the well-known reparameterisation trick (Kingma & Welling, 2014): instead of directly sampling from the distribution, we sample from a standard Gaussian distribution and then apply a transformation to obtain the sampled weights: W = µ + ‡V where V ≥ N (0,Id), this allows to compute with respect to ‡. When the round ends, the global model is computed from the local updates, thanks to the aggregation function from the FL protocol (weighted mean, median). Next we show that with a PAC-Bayes objective f, we adapt GenFL to FL-SOB, where S is fully i.i.d., *i.e.*, all user datasets have the same distribution, and to PFL where each dataset Si is *i.i.d.* but any two dataset can have distinct distributions. ## 3.1 Genfl For Fl-Sob PAC-Bayesian objectives. We assume that for any i, Di = D. Thus, S is a *i.i.d.* dataset of m points. We then consider the true and empirical risks on all S RD, RS . In a batch setting, it would be natural to optimise the bounds (3), (4). However, the user i only has access to its personal dataset Si (of size mi) to optimise its model *while knowing other datasets are involved*. We then derive accordingly from (3), (4) two PAC-Bayesian learning algorithms, valid for any user, namely f1, f2. Note that f1 is adapted from the f*classic* objective and f2 is adapted from f*quad* of Pérez-Ortiz et al. (2021): $f_{1}(\mathcal{S}_{i})=\hat{\mathrm{R}}_{\mathcal{S}_{i}}(Q)+\sqrt{\frac{\mathrm{KL}(Q||P)+\ln\frac{2\sqrt{m}}{\delta}}{2m}}$. 2m . (7) $$f_{2}({\mathcal{S}}_{i})=\left({\sqrt{\hat{\mathrm{R}}_{{\mathcal{S}}_{i}}(Q)+{\frac{\mathrm{KL}\left(Q\|P\right)+\log\left({\frac{2{\sqrt{m}}}{\delta}}\right)}{2m}}}}+{\sqrt{\frac{\mathrm{KL}\left(Q\|P\right)}{\delta}}}+\cdots{\sqrt{\frac{\mathrm{KL}\left(Q\|P\right)}{\delta}}}\right)$$ $$\left(7\right)$$ ddb 2 ![5_image_0.png](5_image_0.png) R $$({\boldsymbol{\delta}})$$ . (8) Note that (7), (8) can be seen as proxys of (3), (4). Indeed, every user has access to the total number of data points m (as long as it is transmitted to the server), so the regularisation term (containing the KL divergence) is fully available, contrary to the empirical risk Rˆ S which is then replaced by Rˆ Si . Note that in this case, the KL divergence is divided by m instead of mi (which would be natural if we were optimising (3) (4) for Si instead of S). This suggests that each user has to give more weight, during the optimisation phase, to its data than to the regularisation. The reason behind this is that the server, by aggregating predictors, performs a regularisation step on a global level, hence the need to prioritise data on a local one. A global generalisation guarantee. A major interest of the *i.i.d.* assumption on S is that, as long as users all exploit the same posterior distribution Q, and they transmit their empirical scores Rˆ Si (Q), every user is able to compute the global generalisation guarantee of (2). This allows to maintain the bound of the batch setting (involving the total number of data points m), despite being in FL. This is empirically shown in Section 4. We present in Algorithm 2 Fedbound, the algorithm we use to compute the global bound (6), valid for all users simultaneously. Algorithm 2 FedBound. The K users are indexed by k; f PAC-Bayes objective, prior N (µprior, ‡prior); posterior N (µT , ‡T ), ", " Õparameters, n number of Monte Carlo sampling 1: **Server executes:** 2: m Ω qKk=1 mk Û Total dataset size 3: P = N (µprior, ‡prior) Û Prior (learned or random) 4: Q = N (µT , ‡T ) Û Posterior (learned) 5: for each user k œ K **in parallel do** 6: *error*k Ω userMCSampling(k, wt, m) 7: **end for** 8: *error* Ω qKk=1 mk m *error*k 9: KL_inv Ω kl≠11*error* | 1n ln( 2"Õ ) 2 10: Up-bound Ω kl≠1 3KL_inv | KL(QÎP )+ln 2 Ôm " m 4 11: 12: **userMCSampling(k, w, m):** 13: for each MC sampling i = 1, 2, ··· , n do 14: Wki ≥ Q Û Sample weights from the posterior 15: *error*ki Ω Rˆ Sk (Wki ) Û local empirical risk 16: **end for** 17: *error*k Ω= 1n qni=1 *error*ki 18: **return** *error*k Ensure: Up-bound holding with probability 1 ≠ " ≠ " Õ ## 3.2 Genfl For Personalised Federated Learning A general training for the prior distribution. In PFL, the learning objective of each user may dier, while sharing some similarities that can be learned and transferred from one user to another. This framework requires adjustments of our learning objectives. Indeed, contrary to Section 3.1, there is no clear global generalisation guarantee, so each user has then to optimise its own personal learning objective from a commonly shared prior. Using either a random prior or a learnt one on a fraction of users data, we run GenFL similarly to Section 3.1 with our PAC-Bayesian objective of interest f. The output distribution of GenFL is then considered as a common prior for all users which then needs to be personalised. A personalisation step. Once a common prior distribution has been obtained from the federated training, it is necessary for each user to personalise it to its own problem. To do so, we apply PAC-Bayesian objectives similar to those in Section 3.1, namely f1 (7) and f2 (8), where the batch size is modified from m to mi for the i-th user. This reflect that each user now optimises its local goal instead of the global one. Each user ends up with its own personalised posterior distribution. The way personalised bounds are implemented is similar to Algorithm 2, but without the aggregation step. We refer to Appendix B for additional details. ## 4 Experiments In this section we provide practical instantiations of GenFL. We first consider in Section 4.1 the case of classification on MNIST in a federated setting using basic neural architectures. We then extend our experimental framework in Section 4.2 to the more challenging case of classification on CIFAR-10 with more sophisticated neural networks. ## 4.1 Classification On Mnist 4.1.1 Experimental Framework Our experimental framework is inspired from Pérez-Ortiz et al. (2021) combined with the FedAvg protocol McMahan et al. (2017a). We use the following libraries: Pytorch (Paszke et al., 2019) for deep learning, Flower (Beutel et al., 2020) for federated learning, Slurm (Yoo et al., 2003) for cluster experiments, and Hydra (Yadan, 2019) for overall experiment management. The cluster nodes we use have 48 SKYLAKE 3GHz CPUS. We do not use GPUs. Prior distribution over weights. We propose two types of priors: data-free (random) prior chosen randomly around N (0,Id) (as in Dziugaite & Roy, 2017) and a data-dependent (learnt) prior. The latter is powerful to attenuate the KL divergence term, leading to sharper generalisation bounds and better accuracy. Such a data-driven prior implies to use a fraction of the dataset from the training data to optimise the prior. The bound computation is then realised with a reduced dataset size (divided by 2 in practice). However, the prior has gained eciency (lower empirical risk) and the PAC-Bayes optimisation starts from a relevant point. We use Gaussian distribution for both prior and posterior over the weights of a neural network. When datafree, the prior is P = olœlayers N (truncated(µlrand), Diag(‡prior)) with µlrand ≥ N (0, Ô 1 nlin ), and ‡prior œ R+ú. The truncature is done at ±Ô 2 nlin where nlin is the dimension of the inputs of the layer l. In the case of data-dependent prior, we have P = olœlayers N (µllearnt, Diag(‡prior)), where µlearnt. It is obtained via ERM on the prior set on half of the training set, the other half being used for bound computation and posterior optimisation. Dataset partition. To build a *i.i.d.* FL setup, we consider the case where each user has exactly the same number of samples per class. We partition MNIST as follows: we fix the number of users to 100. Then, each user receives a dataset size of 540, each class having 54 images. In the case of the learnt prior, we split the training set of each user in half, the first one being used to train the prior, the second exploited by our learning algorithms. In the case of non-*i.i.d.* FL setup, we follow McMahan et al. (2017a). First we sort MNIST by label, then we partition the dataset into chunks of 300 contiguous samples each (thus containing at most 2 labels, because it is sorted). Again we split each user dataset in several parts. When the prior is random, we save 10% of the dataset to create a validation set. Remaining data is exploited for optimisation. When considering learnt priors, we save again 10% of the dataset as a validation set, we exploit 40% of the dataset to train the prior (respecting the proportion of each class), the remaining 50% being used by our learning algorithms. Bound parameters. We used " = 0.05, " Õ= 0.01, n = 150000 Monte Carlo samples. Optimisation hyperparameters. The prior distribution scale ‡prior is set to 2, 5◊10≠2, the learning rate is 5 ◊ 10≠3 for 100 users. In order to compare with the batch learning setting, we compute our algorithms with 1 user. In this case, we use a learning rate of 5 ◊ 10≠4 to reach better performances. The momentum is 0.95 for posterior optimisation and 0.99 in prior optimisation. During prior optimisation, we used a dropout rate of 0.2 to avoid overfitting. As theoretical results of Section 2 require a loss function in [0, 1], we use the bounded cross-entropy as in Pérez-Ortiz et al. (2021), i.e., ¸(*x, y*) = 1 ln(pmin) · ln(˜‡(x)y) with ‡˜(x)y = max(sigmoid(x)y, pmin). We took pmin = 10≠4. KL penalty. For stability reasons, we penalise the KL term during posterior optimisation (similarly to Pérez-Ortiz et al., 2021), thus we give more impact to the empirical risk during optimisation. Such a penalty helps performance and stability during training when random priors are involved. In this case, we use a penalty of 0.1. Federated Learning hyperparameters. Starting from a random prior, we perform our algorithm during 200 rounds to make the SNNs converge. When learnt priors are considered, they are trained with a run of 100 rounds with 5 local epochs (convergence around 50 epochs). We then perform 10 additional rounds with 5 local epochs to train the posterior. In both cases, we select 10% of the users each round to participate in the training. As the dataset size of each user is small, we use a batch size of 25 (compared to 250 in the work of Pérez-Ortiz et al. (2021)). Neural Architecture. We consider a stochastic 2 hidden layer MLP with 600 units each, resulting in 1,198,210 number of parameters for the prior (with fixed covariance matrix) and doubled for the SNN (as we consider diagonal covariance matrices). Positive variance prior. To have a constrained positive standard deviation ‡ when sampling weights, we use the following transformation: ‡ = ln(1 + exp(fl)). It makes ‡ always positive, and fl can be any real number that is optimised during training procedure. ## 4.1.2 Results Note that in a classification problem, the generalisation error translates a positive influence of the learning phase as long as it is smaller than 1 (which is what we refer to with the term nonvacuous). Indeed, a bound below this threshold shows that the posterior will not fail at each try. However, we focus on posteriors with generalisation bounds or test error smaller than 50% The reason is that, for a classification task, this threshold is the generalisation error of a randomised predictor with associated distribution Bernoulli(0.5). Thus, having results below this threshold provably show we generalise better than a naive strategy. ## Fl-Sob Setting. Table 1 gathers our results for GenFL applied with f1, f2 alongside FedBound. We compared our results with 100 clients with the output of our algorithms for 1 client, corresponding to the batch learning case. Our FL algorithms benefit from nonvacuous theoretical guarantees and test errors. In the case of data-dependent priors, test errors of GenFL nearly reach for both f1 and f2, the precision of their batch counterpart (3% in FL and 2% in batch). In the case of random prior, the KL penalty has a strong positive influence on the test errors. The generalisation bounds of our algorithms are uniformly deteriorated compared to the batch setting, *while being of the same magnitude*. This is important to notice as this is the price to pay to adapt batch bounds to a federated setting. Indeed, as each user only optimised a proxy of the common generalisation bound, it is legitimate to retrieve in our results a short discrepancy comparing to the batch case. While f2 is consistently achieving better generalisation upper bounds in Pérez-Ortiz et al. (2021), it is outperformed by f1 in the FL setting. However, notice that f2 provides uniformly better test errors than f1, similarly to Pérez-Ortiz et al. (2021). Note that better results are achieved if one considers the KL penalty trick with data-free prior and no KL penalty trick with data-dependent prior. We interpret this fact as follows: given that the data-dependent prior is already performing well on training data, allowing the posterior optimisation to be unconstrained is not an issue as we found an area close from a local minimiser. Table 1: Results for the FL-SOB scenario. ¸0≠1 corresponds to the 0-1 loss. The test error column is made on the test set of MNIST. The Bound column corresponds to the generalisation bounds, computed with Alg. 2. The *KL/m* column corresponds to the KL divergence term in the bound divided by m = 60000 in data-free prior or m = 30000 data-dependent prior. Setup Bound Test Err. KL div Prior Obj. ¸0≠1 ¸0≠1 *KL/m* Pérez-Ortiz et al. (2021): Random f1 0.330 0,141 0,081 (1 client) f2 0.316 0,092 0,138 Pérez-Ortiz et al. (2021): Learnt f1 0.028 0,023 <0,001 (1 client) f2 0.028 0,020 0,001 100 users - GenFL - KL Penalty=0.1 Random f1 0.333 0,123 0,107 (us) f2 0.342 0,090 0,163 Learnt f1 0,061 0,030 <0,001 (us) f2 0,088 0,029 0,002 100 users - GenFL - No KL Penalty Random f1 0,415 0,256 0,039 (us) f2 0,408 0,251 0,041 Learnt f1 0.039 0,030 <0,001 (us) f2 0.040 0,030 <0,001 Table 2: Results for the PFL scenario. ¸0≠1 corresponds to the 0-1 loss. The test error column is made on the test set of each user (10% of local dataset).The Gen. Bound column gathers generalisation bounds. Each user bound is computed locally with mi = 300 for learnt prior, while mi = 540 for random prior. Setup Gen. Bound ¸0≠1 Prior Obj. min mean max Random f1 0,063 0,680 0,847 (us) f2 0,075 0,713 0,893 Learnt f1 0,054 0,112 0,222 (us) f2 0,052 0,111 0,220 Test Error ¸0≠1 Random f1 0 0,552 0,767 (us) f2 0 0,588 0,833 Learnt f1 0 0,050 0,183 (us) f2 0 0,044 0,150 However, as the random prior is not necessarily ecient, we need to move far from it to reach good empirical performances. However, moving freely from the prior distribution leads to a large KL divergence, hence the need to constrain the posterior optimisation to obtain both better bounds and test errors. A take-home message is that adapting PAC-Bayes algorithms to FL is eective: it gives nonvacuous results close to the batch setting. ## Pfl Setting. Table 2 provides an overview of our results in the non-*i.i.d.* case. It gathers, for both generalisation bounds and test error, the minimum, mean and maximum performance of all 100 users. The averaged performances are deteriorated compared to Table 1 for all settings as we consider a harder problem. It is worth noticing that our bounds are nonvacuous and that our algorithms with learnt priors benefit from bounds and test errors lower than 50%. Indeed, if we do not learn the prior, we see from the distribution of errors in Figure 1 that most users have a deteriorated bound and test errors. For learnt priors, the error distribution shows that all users enjoy a meaningful bound as well as sound performance. An interesting point is that the ![9_image_0.png](9_image_0.png) Figure 1: Histograms gathering test errors (red) and bound (blue) of all 100 users of the PFL setting. In order from top to botton: Random prior≠f1, Random prior≠f2, Learnt prior≠f1, Learnt prior≠f2. common prior distribution does not support all users uniformly as we can see in Figure 1. Indeed, our algorithms with random priors suer from deteriorated bounds and test errors on average and the worst case, but approximately 15% enjoy good test errors and 9% benefit from theoretical guarantees lower than 40%. Also, our algorithms with learnt priors enjoy test errors and generalisation guarantees lower than 20% for all users. Furthermore, approximately half of the users benefit from test errors lower than 5%. This highlights the importance of the prior in this non-*i.i.d.* setting. As the test set of each user is small (60 images), some users achieve a 0% test error. ## 4.2 Classification On Cifar-10 To evaluate the eectiveness of our algorithm in more challenging scenarios, we conduct experiments on the CIFAR-10 dataset. We followed a setup akin to Section 4.1.1, with minor adjustments. Specifically, we exploit a learned prior trained with 70% of the dataset, the remaining being used for posterior estimation. Given the heightened complexity of the CIFAR-10 dataset, we involve Convolutional Neural Networks (CNNs) with 4 and 9 layers, denoted as CNNet4l and CNNet9l, respectively (these architectures also appear in Pérez-Ortiz et al., 2021, exploited as a baseline). To ensure enhanced performances, we also perform fine-tuning of hyperparameters related to FL such as local batch size and the number of local epochs, while keeping those associated with PAC-Bayes fixed to ensure relevant comparisons with the baseline. We conducted a comprehensive grid search across a spectrum of hyperparameters to approximate the optimal model configurations. Specifically, we explored various combinations of local batch sizes (1, 5, 10, 50) and local epoch counts (1, 5, 10). Recognising that 100 rounds of training were inadequate for convergence, we extended the training duration to 300 rounds. Additionally, we adjusted the learning rate by a factor of 10 at round 200 to ease optimisation. From the resulting best configurations, we selected the most promising priors for further investigation. Subsequently, we fine-tuned the learning rate by exploring values between 5◊10≠4 and 1◊10≠3. Remarkably, we discovered that the optimal learning rate, as reported in Pérez-Ortiz et al. (2021), remains consistent between the centralised and federated settings. Furthermore, we conducted experiments to tune the dropout rate, revealing that a dropout rate of 0.2 yielded the best performance on the CIFAR-10 dataset, consistent with findings in Pérez-Ortiz et al. (2021). This rearms the applicability of centralised PAC-Bayes hyperparameters to the decentralised PAC-Bayes paradigm. Detailed results of these experiments, showing accuracies for each experiments on the batch size, epoch counts, learning rate and dropout rate are provided in Appendix D. Our analysis revealed that the most eective hyperparameter settings were a local batch size of 5 for CNNet9l and 10 for CNNet4l, with a corresponding local epoch count of 1 for both architectures. Notably, increasing the number of epochs led to quicker convergence but marginally reduced accuracy. A learning rate of 5◊10≠3 was optimal for both architectures, while a dropout rate of 0.2 was found to be the most eective. We selected the priors with the highest accuracy from our grid search results (refer to the appendix for accuracy details). Subsequently, the posteriors were trained using identical hyperparameters as their corresponding priors. Additionally, we experimented with the KL penalty technique. Our findings are summarized in Table 3 for comparison with the baseline results presented in the first row. Notably, both the prior and posterior performances are slightly inferior to the baseline. This discrepancy can be attributed to the inherent diculty of federated learning compared to the batch setting. However, it is crucial to note that the generalisation bounds remain non-vacuous. Among the configurations tested, the most promising outcome was observed with CNNet9l using a KL penalty of 1.0 in conjunction with f2. This configuration achieved a generalisation bound and a test error of 34.5% and 30.5%, respectively, which is 10% higher than the baseline for both metrics. This a similar conclusion than in Section 4.1, highlighting the price to pay to switch from batch to FL. Surprisingly, the KL penalty trick did not yield an improvement in the generalisation bound, contrary to Section 4.1. This is possibly linked to the intrinsic complexity of CIFAR-10. In particular, the inadequacy of the prior may necessitate further optimisation during posterior training, potentially causing the posterior to diverge significantly from the prior distribution. Consequently, the KL term increases post-training, warranting a more substantial penalty. ## 5 Discussion In this work, we propose a novel algorithm for FL in two dierent settings: FL-SOB, which allows to exploit a global generalisation guarantee while keeping data separated; as well as PFL, which only involves an *i.i.d.* assumption for each user's dataset. Our work raises two questions: (a) is it possible to remove the *i.i.d.* assumption?(b) is it possible to maintain a global generalisation guarantee, even in the personalised setting? To answer (a), a line of work first initiated by Kuzborskij & Szepesvári (2019) (for *i.i.d.* data) and continued by Haddouche & Guedj (2023); Chugg et al. (2023); Jang et al. (2023) (for non-*i.i.d.* ones) focuses on PACBayes bounds valid for data distribution with bounded variances. In the PFL setting, this could provide novel generalisation bounds without assuming each user possesses an *i.i.d.* dataset. About (b), the recent work of Sefidgaran et al. (2023) provides elements of answer: they derive a general PAC-Bayesian bound holding for the classical FL setting *for all users simultaneously* involving explicitly the number of users and rounds. This allows fruitful theoretical interpretations (especially on the number of rounds involved during the FL training), but leads to vacuous generalisation guarantees for classification task with a federated SVM. Following another route, based on PAC-Bayes methods for meta-learning, Boroujeni et al. (2024) provides a novel FL algorithm for PFL derived from an original theoretical result with strong performances. However, | Model | Obj. | - | ' | rounds | Bound | Test Error | KL Div | Prior Test Error | |------------------------------------|---------------------------|-----|-----|----------|---------|--------------|----------|--------------------| | | Pérez-Ortiz et al. (2021) | | | | | | | | | CNNet9l | f1 | 250 | 100 | "1" | 0.237 | 0.216 | <0.0001 | 0.217 | | (baseline) | f2 | 250 | 100 | "1" | 0.250 | 0.214 | 0.003 | 0.217 | | 100 users - GenFL - KL Penalty=0.1 | | | | | | | | | | CNNet4l | f1 | 10 | 1 | 300 | 0.471 | 0.388 | 0.012 | 0.329 | | (us) | f2 | 10 | 1 | 300 | 0.469 | 0.391 | 0.011 | 0.329 | | CNNet9l | f1 | 5 | 1 | 300 | 0.393 | 0.303 | 0.019 | 0.274 | | (us) | f2 | 5 | 1 | 300 | 0.396 | 0.304 | 0.018 | 0.274 | | 100 users - GenFL - KL Penalty=1.0 | | | | | | | | | | CNNet4l | f1 | 10 | 1 | 300 | 0.466 | 0.390 | 0.011 | 0.329 | | (us) | f2 | 10 | 1 | 300 | 0.458 | 0.391 | 0.009 | 0.329 | | CNNet9l | f1 | 5 | 1 | 300 | 0.386 | 0.302 | 0.014 | 0.274 | | (us) | f2 | 5 | 1 | 300 | 0.345 | 0.305 | 0.003 | 0.274 | Table 3: Table displaying the results for the CIFAR-10 dataset alongside the baseline (batch setting) presented in the first row. 'Prior Test Error' represents the 0-1 error of the prior on the test set. The symbol — denotes the batch size of clients, while ' indicates the number of local epochs on each round. their approach involves distributions on distributions spaces, giving their method a potentially high time complexity, they are also unable to compute nonvacuous generalisation guarantees. This constrasts with our results, even for the personalised setting, at the cost of considering generic PAC-Bayesian bounds, not explicitly tailored for federated learning. Establishing a PAC-Bayes bound designed for FL and leading to a non-vacuous generalisation guarantees remains an open challenge that we aim to address in a future work. ## References Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, and Ethan Fetaya. Personalized Federated Learning With Gaussian Processes. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December* 6-14, 2021, virtual, pp. 8392–8406, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ 46d0671dd4117ea366031f87f3aa0093-Abstract.html. Naman Agarwal, Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, and Brendan McMahan. cpSGD: Communication-ecient and dierentially-private distributed SGD. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7575–7586, 2018. URL https:// proceedings.neurips.cc/paper/2018/hash/21ce689121e39821d07d04faab328370-Abstract.html. Pierre Alquier. User-friendly introduction to PAC-Bayes bounds. *arXiv*, abs/2110.11216, 2021. URL https: //arxiv.org/abs/2110.11216. Pierre Alquier, James Ridgway, and Nicolas Chopin. On the properties of variational approximations of Gibbs posteriors. *Journal of Machine Learning Research*, 2016. Ron Amit and Ron Meir. Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory. In International Conference on Machine Learning (ICML), 2018. Daniel J Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Javier Fernandez-Marques, Yan Gao, Lorenzo Sani, Hei Li Kwing, Titouan Parcollet, Pedro PB de Gusmão, and Nicholas D Lane. Flower: A Friendly Federated Learning Research Framework. *arXiv preprint arXiv:2007.14390*, 2020. Felix Biggs and Benjamin Guedj. Dierentiable PAC-Bayes objectives with partially aggregated neural networks. *Entropy*, 23(10), 2021. ISSN 1099-4300. URL https://www.mdpi.com/1099-4300/23/10/1280. Felix Biggs and Benjamin Guedj. Non-vacuous generalisation bounds for shallow neural networks. In Proceedings of the 39th International Conference on Machine Learning [ICML], volume 162 of Proceedings of Machine Learning Research, pp. 1963–1981. PMLR, July 2022. URL https://proceedings.mlr. press/v162/biggs22a.html. Felix Biggs, Valentina Zantedeschi, and Benjamin Guedj. On margins and generalisation for voting classifiers. In *NeurIPS*, 2022. URL https://arxiv.org/abs/2206.04607. Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191, 2017. Mahrokh Ghoddousi Boroujeni, Andreas Krause, and Giancarlo Ferrari Trecate. Personalized federated learning of probabilistic models: A pac-bayesian approach. *arXiv preprint arXiv:2401.08351*, 2024. Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities - A Nonasymptotic Theory of Independence. Oxford University Press, 2013. ISBN 978-0-19-953525-5. doi: 10.1093/acprof: oso/9780199535255.001.0001. URL https://doi.org/10.1093/acprof:oso/9780199535255.001.0001. Olivier Catoni. *PAC-Bayesian supervised classification: the thermodynamics of statistical learning*. Institute of Mathematical Statistics, 2007. Hong-You Chen and Wei-Lun Chao. FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net, 2021. URL https://openreview.net/forum?id=dgtpE6gKjHn. Badr-Eddine Chérief-Abdellatif, Yuyang Shi, Arnaud Doucet, and Benjamin Guedj. On PAC-Bayesian reconstruction guarantees for VAEs. In *Proceedings of The 25th International Conference on Artificial* Intelligence and Statistics [AISTATS], volume 151 of *Proceedings of Machine Learning Research*, pp. 3066– 3079. PMLR, 28–30 Mar 2022. URL https://proceedings.mlr.press/v151/cherief-abdellatif22a. html. Ben Chugg, Hongjian Wang, and Aaditya Ramdas. A unified recipe for deriving (time-uniform) PAC-Bayes bounds. *arXiv*, abs/2302.03421, 2023. Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, and Radu Soricut. Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning. In Conference on Neural Information Processing Systems (NeurIPS), 2021. John Duchi. Derivations for linear algebra and optimization. *Berkeley, California*, 3(1):2325–5870, 2007. Gintare Karolina Dziugaite and Daniel Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In *Conference on Uncertainty in Artificial Intelligence (UAI)*, 2017. Mahdi Milani Fard and Joelle Pineau. PAC-Bayesian model selection for reinforcement learning. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2010. Alec Farid and Anirudha Majumdar. Generalization Bounds for Meta-Learning via PAC-Bayes and Uniform Stability. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021. Benjamin Guedj. A Primer on PAC-Bayesian Learning. In *Proceedings of the second congress of the French* Mathematical Society, volume 33, 2019. URL https://arxiv.org/abs/1901.05353. Maxime Haddouche and Benjamin Guedj. Online PAC-Bayes Learning. In Conference on Neural Information Processing Systems (NeurIPS), 2022. Maxime Haddouche and Benjamin Guedj. PAC-Bayes Generalisation Bounds for Heavy-Tailed Losses through Supermartingales. *Transactions on Machine Learning Research*, 2023. Moritz Hardt, Eric Price, and Nati Srebro. Equality of Opportunity in Supervised Learning. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), *Advances in Neural* Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 3315–3323, 2016. URL https://proceedings.neurips.cc/ paper/2016/hash/9d2682367c3935defcb1f9e247a97c0d-Abstract.html. Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, and Maxim Raginsky. Generalization Bounds: Perspectives from Information Theory and PAC-Bayes, 9 2023. *arXiv*. Kyoungseok Jang, Kwang-Sung Jun, Ilja Kuzborskij, and Francesco Orabona. Tighter PAC-Bayes Bounds Through Coin-Betting. *arXiv*, abs/2302.05829, 2023. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecn˝, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and Open Problems in Federated Learning. Foundations and Trends® *in Machine Learning*, 14(1–2):1–210, 2021. ISSN 1935-8237. doi: 10.1561/2200000083. URL http://dx.doi.org/10.1561/2200000083. Minyoung Kim and Timothy M. Hospedales. FedHB: Hierarchical Bayesian Federated Learning. *arXiv*, abs/2305.04979, 2023. doi: 10.48550/arXiv.2305.04979. URL https://doi.org/10.48550/arXiv.2305. 04979. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Ban, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6114. Jakub Konen˝, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. *arXiv*, abs/1610.02527, 2016a. URL http: //arxiv.org/abs/1610.02527. Jakub Konen˝, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated Learning: Strategies for Improving Communication Eciency. *arXiv*, abs/1610.05492, 2016b. URL http://arxiv.org/abs/1610.05492. Nikita Kotelevskii, Maxime Vono, Alain Durmus, and Eric Moulines. FedPop: A Bayesian Approach for Personalised Federated Learning. In *NeurIPS*, 2022. URL http://papers.nips.cc/paper_files/paper/ 2022/hash/395409679270591fd2a70abc694cf5a1-Abstract-Conference.html. Ilja Kuzborskij and Csaba Szepesvári. Efron-Stein PAC-Bayesian Inequalities. arXiv:1909.01931, 2019. URL https://arxiv.org/abs/1909.01931. Gaël Letarte, Pascal Germain, Benjamin Guedj, and François Laviolette. Dichotomize and generalize: PAC-Bayesian binary activated deep neural networks. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems [NeurIPS] 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 6869–6879, 2019. URL https://papers.nips.cc/paper/ 8911-dichotomize-and-generalize-pac-bayesian-binary-activated-deep-neural-networks. Le Li, Benjamin Guedj, and Sébastien Loustau. A quasi-Bayesian perspective to online clustering. Electron. J. Statist., 12(2):3071–3113, 2018. doi: 10.1214/18-EJS1479. URL https://doi.org/10.1214/18-EJS1479. Priyanka Mary Mammen. Federated Learning: Opportunities and Challenges, 2021. Andreas Maurer. A note on the PAC-Bayesian theorem. *arXiv*, cs/0411099, 2004. David A McAllester. Some PAC-Bayesian theorems. In Proceedings of the eleventh annual conference on Computational Learning Theory, pp. 230–234. ACM, 1998. David A McAllester. PAC-Bayesian stochastic model selection. *Machine Learning*, 51(1):5–21, 2003. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Ecient Learning of Deep Networks from Decentralized Data. In Aarti Singh and Jerry Zhu (eds.), *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics*, volume 54 of *Proceedings of Machine Learning Research*, pp. 1273–1282. PMLR, 20–22 Apr 2017a. URL https://proceedings.mlr.press/v54/mcmahan17a.html. H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-Ecient Learning of Deep Networks from Decentralized Data, 2017b. Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information* Processing Systems [NeurIPS] 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 12180–12191, 2019. URL http://papers.nips.cc/paper/9387-pac-bayes-un-expected-bernstein-inequality. Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic Federated Learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 4615–4625. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/mohri19a.html. Kento Nozawa, Pascal Germain, and Benjamin Guedj. PAC-Bayesian contrastive unsupervised representation learning. In *Conference on Uncertainty in Artificial Intelligence [UAI]*, 2020. URL https: //proceedings.mlr.press/v124/nozawa20a.html. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, HighPerformance Deep Learning Library. In *Advances in Neural Information Processing Systems* 32, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernandez, Benjamin Guedj, and John Shawe-Taylor. Progress in Self-Certified Neural Networks. In *NeurIPS 2021 Workshop on Bayesian Deep Learning*, 2021. María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, and Csaba Szepesvári. Tighter Risk Certificates for Neural Networks. *Journal of Machine Learning Research*, 22(227):1–40, 2021. URL http://jmlr. org/papers/v22/20-879.html. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, and Ali Jadbabaie. Robust Federated Learning: The Case of Ane Distribution Shifts. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS* 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ f5e536083a438cec5b64a4954abc17f1-Abstract.html. Omar Rivasplata, Vikram M. Tankasali, and Csaba Szepesvári. PAC-Bayes with Backprop. *arXiv*, abs/1908.07380, 2019. URL http://arxiv.org/abs/1908.07380. Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, and Andreas Krause. PACOH: Bayes-optimal metalearning with PAC-guarantees. In *International Conference on Machine Learning (ICML)*, 2021. Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, and Andreas Krause. PAC-Bayesian Meta-Learning: From Theory to Practice. *arXiv*, abs/2211.07206, 2022. Otmane Sakhi, Nicolas Chopin, and Pierre Alquier. PAC-Bayesian Oine Contextual Bandits With Guarantees. *arXiv*, abs/2210.13132, 2022. To appear in ICML 2023. Jonathan Scott, Hossein Zakerinia, and Christoph H. Lampert. PeFLL: A Lifelong Learning Approach to Personalized Federated Learning. *arXiv*, abs/2306.05515, 2023. doi: 10.48550/arXiv.2306.05515. URL https://doi.org/10.48550/arXiv.2306.05515. Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, and Yijun Wan. Federated Learning You May Communicate Less Often! *arXiv*, abs/2306.05862, 2023. doi: 10.48550/arXiv.2306.05862. URL https: //doi.org/10.48550/arXiv.2306.05862. Yevgeny Seldin, François Laviolette, John Shawe-Taylor, Jan Peters, and Peter Auer. PAC-Bayesian Analysis of Martingales and Multiarmed Bandits. *arXiv*, abs/1105.2416, 2011. Yevgeny Seldin, François Laviolette, Nicolo Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian Inequalities for Martingales. *IEEE Transactions on Information Theory*, 2012. J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayes estimator. In *Proceedings of the 10th* annual conference on Computational Learning Theory, pp. 2–9. ACM, 1997. Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, and H. Brendan McMahan. Distributed Mean Estimation with Limited Communication. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of *Proceedings of Machine Learning Research*, pp. 3329–3337. PMLR, 2017. URL http://proceedings.mlr.press/v70/suresh17a.html. Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. Towards Personalized Federated Learning. *IEEE* Transactions on Neural Networks and Learning Systems, pp. 1–17, 2022. doi: 10.1109/TNNLS.2022. 3160699. Ilya O. Tolstikhin and Yevgeny Seldin. PAC-Bayes-Empirical-Bernstein Inequality. In Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger (eds.), *Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe,* Nevada, United States, pp. 109–117, 2013. URL https://proceedings.neurips.cc/paper/2013/hash/ a97da629b098b75c294dffdc3e463904-Abstract.html. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness May Be at Odds with Accuracy, 2019. Elahe Vedadi, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, and Warren Richard Morningstar. Federated Variational Inference: Towards Improved Personalization and Generalization. *arXiv*, abs/2305.13672, 2023. doi: 10.48550/arXiv.2305.13672. URL https://doi.org/10. 48550/arXiv.2305.13672. Omry Yadan. Hydra - A framework for elegantly configuring complex applications. Github, 2019. URL https://github.com/facebookresearch/hydra. Semih Yagli, Alex Dytso, and H. Vincent Poor. Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning. In *2020 IEEE 21st International Workshop on Signal* Processing Advances in Wireless Communications (SPAWC), pp. 1–5, 2020. doi: 10.1109/SPAWC48557. 2020.9154277. Andy B. Yoo, Morris A. Jette, and Mark Grondona. Slurm: Simple linux utility for resource management. In Dror Feitelson, Larry Rudolph, and Uwe Schwiegelshohn (eds.), *Job Scheduling Strategies for Parallel* Processing, pp. 44–60, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-39727-4. Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Nghia Hoang, and Yasaman Khazaeni. Bayesian Nonparametric Federated Learning of Neural Networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 7252–7261. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/yurochkin19a.html. Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, and Benjamin Guedj. Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021. Chen Zhang, Yu Xie, Hang Bai, Bin Yu, Weihong Li, and Yuan Gao. A survey on federated learning. Knowledge-Based Systems, 216:106775, 2021. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically Principled Trade-o between Robustness and Accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 7472–7482. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/zhang19p.html. Liling Zhang, Xinyu Lei, Yichun Shi, Hongyu Huang, and Chao Chen. Federated Learning for IoT Devices With Domain Generalization. *IEEE Internet Things J.*, 10(11):9622–9633, 2023a. doi: 10.1109/JIOT. 2023.3234977. URL https://doi.org/10.1109/JIOT.2023.3234977. Xiaojin Zhang, Anbu Huang, Lixin Fan, Kai Chen, and Qiang Yang. Probably Approximately Correct Federated Learning, 2023b. Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, and Yunfeng Shao. Personalized Federated Learning via Variational Bayesian Inference. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 26293–26310. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/zhang22o.html. Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A Comprehensive Survey on Transfer Learning. *Proc. IEEE*, 109(1):43–76, 2021. doi: 10.1109/JPROC. 2020.3004555. URL https://doi.org/10.1109/JPROC.2020.3004555.
Review 1: Summary: This paper introduces a novel strategy to train randomised predictors in federated learning where local predictors can preserve privacy while the global predictor inherits the properties of the local private predictors in the sense of a PAC-Bayesian generalisation bound. Both the synchronous case and the heterogenous and homogenous case are studied. Numerical experiments show that GenFL achieves a comparable predictive performance to that of batch approach. Strengths and Weaknesses: Strengths: 1. This paper studies training randomised predictors in federated learning, which is an important problem in federated learning research area. 2. The GenFL approach is well motivated by generalisation bound which has strong theoretical foundation. 3. Section 2 Background is well written and covers a lot of important work in this area. 4. Code is shared anonymously for reproduction of results. Weaknesses: 1. Contributions of this paper are not clear to me. Clarification is needed, shown below. Requested Changes: 1. Thanks for summarizing a lot of related work on page 2 but the contributions of this paper is not clear. What are the improvements made in this paper compared with existing work? What I can find are just a new algorithm and its comparable performance. 2. On page 2, by saying "we show a global generalisation bound for all users in FL-SOB, and local ones in PFL", where are these bounds? There is no bound in Section 3. If they are the same as ones in Section 2, they cannot be claimed as contributions. 3. On page 11, by saying "our work raises two questions", how these two questions raised in this work had been studied by exiting work? Also, contributions of this work should be highlighted again in Section 5 before discussing potential future directions. Broader Impact Concerns: No concerns on the ethical implications of the work. ================================================== Review 2: Summary: The paper introduces a novel strategy for training randomized predictors in federated learning (FL) that aims to preserve privacy while maintaining predictive performance. The authors propose a global randomized predictor based on PAC-Bayesian generalization bounds that inherits the properties of local private predictors. They address both synchronous and heterogeneous cases, demonstrating through numerical experiments that their approach can achieve performance comparable to batch learning. The proposed method, GenFL, is applicable in both FL-SOB and PFL settings, offering nonvacuous generalization guarantees and highlighting the trade-offs between privacy and performance. The proposed method iaddresses federated learning by integrating PAC-Bayesian theory to achieve a balance between privacy preservation and predictive performance. The GenFL algorithm is meticulously designed to work in both synchronous and personalized federated learning settings, offering robust theoretical foundations through PAC-Bayesian generalization bounds to ensure nonvacuous guarantees. Despite the innovative contributions and comprehensive experimental validation, there are areas for improvement, such as addressing the limitations of the IID assumption, optimizing communication efficiency, and enhancing experimental validation on larger and more diverse datasets. By addressing these aspects, the method could significantly advance the field of privacy-preserving federated learning, making it more applicable and effective in real-world scenarios. Strengths and Weaknesses: Strength 1. The method effectively balances privacy and predictive performance in federated learning using PAC-Bayesian theory. 2. The method provides a theoretical foundation by leveraging PAC-Bayesian generalization bounds to ensure nonvacuous guarantees. Weakness 1. The assumption that datasets are independent and identically distributed (IID) may not hold in many real-world federated learning scenarios, potentially limiting the generalizability of the results. 2. The reliance on PAC-Bayesian generalization bounds may not fully capture the complexities and variabilities present in non-i.i.d. data distributions, potentially affecting the robustness of the method. 3. The experiments are conducted using only CPUs, which can be inefficient given the heavy mathematical computations involved. Additionally, the datasets used are relatively small, which may not fully demonstrate the improvements in model generalization. Requested Changes: Address IID Assumption: Provide a discussion on how the method can be adapted or extended to handle non-i.i.d. data distributions common in real-world federated learning scenarios. Expand Generalization Analysis: Include a detailed analysis of how PAC-Bayesian generalization bounds perform under non-i.i.d. conditions and discuss potential limitations. Enhance Experimental Validation: Conduct experiments using GPUs to handle the heavy mathematical computations more efficiently. Additionally, use larger and more diverse datasets to better demonstrate the method's ability to improve model generalization. Broader Impact Concerns: N/A. ================================================== Review 3: Summary: This paper proposes GenFL (Generalisation-driven Federated Learning), an algorithm in which users try to optimize local PAC-Bayes objectives towards their goal of doing Federated Learning. The authors make their proposal adapted to both the settings of FL-SOB and P-FL and show simple experiments demonstrating the non-vacuity of the output of GenFL. Strengths and Weaknesses: I am quite new to Federated Learning but largely familiar with the PAC-Bayes literature. From that limited point of view, the proposal here does look interesting - that one does FL in a way that the measured PAC-Bayes bound at the end of the experiment is non-vacuous. So, limited to that, there is a novel potential in this work. It needs to be noted that the paper makes no attempt to introduce FL to readers like me who have very little familiarity with it. It would be great if a revised paper gives a rigorous mathematical introduction to the idea of FL and how risks are defined there. However, the key issue with this paper is that its main claim looks entirely misplaced. At many places, like on the top of page 5 we find sentences like ``We stress that GenFL benefits from nonvacuous generalisation guarantees". But this is not at all what is happening in this paper. This algorithm GenFL can be said to be provable only if the risk in the distribution parameterized by $w_T$ has an analytic bound in terms of all the hyperparameters like $T, E, B, m_is,\mu_{\rm prior},\sigma_{\rm prior}$ etc. But such a theorem has never been proven here! So one cannot claim that this algorithm has any provably good property or a "generalization guarantee" - let alone any claim about non-vacuity of the bound. So this is at best an empirical study of the measured generalization error of this $w_T$ output being non-vacuous in certain experiments - and not a proposal with guarantees. Requested Changes: (1) The paper needs to be entirely rewritten/recalibrated. Either there needs to be rigorous proof that this algorithm has an analytic generalization bound in terms of the algorithm's parameters - which is then shown to be non-vacuous in some experiments. *Even if one ignores the above -for something to be called a generalization guarantee there at least has to be an analytic expression provable for an upper-bound on the risk of the trained predictor via this method - an upper-bound that makes explicit the dependence of the risk on the distribution output by GenFL and hence the architectural parameters of the net trained." Or, the paper needs to be formatted as an empirical study about a proposed algorithm - without any claim about a guaranteed non-vacuous generalization bound. But at best a claim that the estimated generalization error of the algorithm's output is non-vacuous - which is what seems achievable by a combination of GenFL and FedBound. (2) Equation 6 is quite critical to the whole setup and its self-complete proof needs to be provided here. This can't just be outsourced to previous work. (3) In line 17 of the algorithm how is this function $f_{m,\delta,\mu_{\rm prior},\sigma_{\rm prior}}(w_s^k;b)$ defined? This function has never been defined anywhere else. (I am guessing that it is either of $f_1$ or $f_2$ defined later - but that needs to be much more clearly stated - the shift in notations is very confusing!) (4) The dimensions of $\mu_{\rm prior}$ and $\sigma_{\rm prior}$ needs to be mentioned. (5) There needs to be some explanation about whether the $m_i$s can be arbitrary or not. (6) In the experiments, the way the hyper-parameter search has been done is not that important to know. But it would be much more clear if tables were given of the final chosen values at which experiments are being reported. (7) Equation 6 has nothing to do with the setup of FL! Then why is the algorithm FedBound claimed to compute that equation 6? And is this provable or is that also an empirical claim? (8) There is a particular way the algorithm FedBound is computing the "final" generalization bound ``Up-bound". This computation needs at least a heuristic proof of correctness. Referring it back to equation 6 is not working for the reasons explained above. The setup of the paper must define something like "posterior risk of PAC-Bayesian FL" and argue that the output of FedBound is trying to approximate it. Currently, none of this is clear. (9) I guess that the FedBound needs to get the output of GenFL sent to it as an input and also the training data used by each user. But there is no call made to GenFL from inside FedBound and that makes it very confusing! (10) It's entirely unclear as to why Section 3.2 is so rushed and Algorithms 3 and 4 are in the Appendix. If they constitute the P-FL setup then that needs to be in the main paper and explained as such. (11) The first line of Appendix C.2.2 seems to have a missing reference. (12) Lastly, it needs to be noted that the submission has a strange format whereby the supplementary file is a copy of the full paper instead of just being the appendices. This definitely needs to be changed in any further revisions. Broader Impact Concerns: N/A ==================================================
# Understanding Smoothness Of Vector Gaussian Processes On Product Spaces Emilio Porcu emilio.porcu@ku.ac.ae Department of Mathematics, College of Computing and Mathematical Sciences Khalifa University, Abu Dhabi & *ADIA Lab, Abu Dhabi* Ana Paula Peron *apperon@icmc.usp.br* Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos SP, Brazil. Eugenio Massa eug.massa@gmail.com Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos SP, Brazil. Xavier Emery *xemery@ing.uchile.cl* Department of Mining Engineering, Universidad de Chile & *Advanced Mining Technology Center, Universidad de Chile* Reviewed on OpenReview: *https: // openreview. net/ forum? id= XXXX* ## Abstract Vector Gaussian processes are becoming increasingly important in machine learning and statistics, with applications to many branches of applied sciences. Recent efforts have allowed to understand smoothness in scalar Gaussian processes defined over manifolds as well as over product spaces involving manifolds. Under assumptions of Gaussianity and mean-square continuity, the smoothness of a zeromean scalar process is in one-to-one correspondence with the smoothness of the covariance kernel. Unfortunately, such a result is not available for vector-valued random fields, as the way each component in the covariance kernel contributes to the smoothness of the vector field is unclear. This paper challenges the problem of quantifying smoothness of matrix-valued continuous kernels that are associated with mean-square continuous vector Gaussian processes defined over non-Euclidean product manifolds. After noting that a constructive RKHS approach is unsuitable for this specific task, we proceed through the analysis of spectral properties. Specifically, we find a spectral representation to quantify smoothness through Sobolev spaces that are adapted to certain measure spaces of product measures obtained through the tensor product of Haar measures with multivariate Gaussian measures. Our results allow to measure smoothness in a simple way, and open for the study of foundational properties of certain machine learning techniques over product spaces. ## 1 Introduction 1.1 Context The paper deals with the smoothness of continuous matrix-valued kernels associated with mean-square continuous vector-valued Gaussian processes defined on the product of two spaces, with one of them being non-Euclidean, namely a hypersphere of d dimensions embedded in a (d + 1)-dimensional Euclidean space. Gaussian processes (Seeger, 2004) are ubiquitous in machine learning, statistics and numerical analysis. Vector (i.e., multivariate) Gaussian processes have recently received attention after the constructive approaches proposed by Hutchinson et al. (2021). The impact of such processes on the machine learning community ranges from regression (Chen et al., 2023), Bayesian optimization and active learning (see the discussion in Hutchinson et al., 2021, and references therein), to relevance vector machines (Quinonero-Candela, 2004), sensor networks (Osborne et al., 2008), text categorization (Kazawa et al., 2005), informance vector machines (Lawrence et al., 2002), gradual learning (Yuan et al., 2022), and multitask learning (Bonilla et al., 2007; Xing et al., 2021). Vector Gaussian processes arise within the framework of multiple output learning of a vector-valued function f = (f1*, . . . , f*p) ⊤ that is observed over a finite set Y = f(X) := {f(x1 )*, . . . ,* f(xN )}, from training data x collected over the training set X = {x1 , . . . , xN }. Specifically, we suppose that x is defined over a product space Υ(d,k) = S d × R k, with S d being the unit sphere of dimension d and R k being the k-dimensional Euclidean space. The output space Y, where f is defined, has dimension p. The problem can be tackled either assuming that f belongs to a reproducing kernel Hilbert space (RKHS) of vector-valued functions or assuming that f is drawn from a vector Gaussian process. We start by illustrating the RKHS perspective for vector-valued functions that are reproduced through matrix-valued kernels, denoted Kf throughout, being matrix-valued functions from Υ(d,k) ×Υ(d,k)into R p×p. A vector RKHS is a Hilbert space, HKe , composed of vector-valued functions f such that, for all c ∈ R p and all x ∈ Υ(d,k), the linear combination f(x) ⊤c is obtained through $$f(\underline{{{x}}})^{\top}\mathbf{c}=\langle\mathbf{f}(\cdot),\widetilde{\mathbf{K}}(\cdot,\underline{{{x}}})\mathbf{c}\rangle_{\mathcal{H}_{K}^{-}},$$ , (1) where ⟨·, ·,⟩HKe is the inner product on HKe . The kernel Kf is positive semidefinite: for any arbitrary system {ck} N k=1 of p-dimensional real vectors and any finite collection of points {xk } N k=1 in the input space, we have $$\sum_{h=1}^{N}\sum_{k=1}^{N}{\mathbf{c}}_{h}^{\top}{\widetilde{\mathbf{K}}}({\underline{{{\mathbf{x}}}}}_{h},{\underline{{{\mathbf{x}}}}}_{k}){\mathbf{c}}_{k}\geq0.$$ $$(1)$$ For a thorough review about RKHS for both scalar and vector-valued settings, as well for material about regularization in RKHS, the reader is referred to Hofmann et al. (2008) and Alvarez et al. (2012). Although RKHS are a powerful instrument to quantify smoothness of scalar Gaussian processes, the same does not hold for the case of vector Gaussian processes, where the role of the matrix-valued kernel remains unclear. Hence, we opt here for the alternative of matrix-valued kernels through vector-valued Gaussian processes. ## 1.2 Why Study Smoothness? Why Product Spaces? Smoothness plays a fundamental role in numerous applications to machine learning and statistics. We mention here the most recent development in Gaussian process regression. The recent work by Rosa et al. (2023) deals with Bayesian contraction rates under the framework of Gaussian process regression with random design. Posterior construction rates provide a nice way to illustrate how a given class of posteriors concentrates around the true data generating process. Rosa et al. (2023) prove that the contraction rates depend on the smoothness of the underlying Gaussian process, the prior of which is defined through a Matérn kernel (Porcu et al., 2023). Well-known results from probability theory connect the smoothness properties of a scalar Gaussian process with those of the associated kernel (Yadrenko, 1983; Adler and Taylor, 2007). Unfortunately, these results are not available for the vector-valued case, and the same definition of RKHS as in Equation (1) clearly shows that the contribution of each component in Kf to the smoothness of Z is unclear. This explains the lack of literature in this direction and the fact that the available contributions are centered to the smoothness of the kernel Kf rather than the smoothness of Z. An intuitive way to look at the geometric smoothness properties of the kernel is by working under the framework of Sobolev spaces. Another relevant motivation for studying smoothness of Gaussian processes on manifolds is related to the use of computational tools of *kernel cubature* and *kernel discrepancy* beyond the usual Euclidean manifold. Barp et al. (2022) illustrate the importance of Sobolev spaces when quantifying kernel cubatures. These topics have been popular in statistics, machine learning, and numerical analysis. Kernel cubature has been applied in several contexts, and the reader is referred to Hubbert et al. (2023), with the references therein. Studying smoothness on non-Euclidean manifolds has been important to several disciplines. For the special case of the manifold being a d-dimensional sphere, applications include kernel cubature (Marques et al., 2013; 2022), Stein's method to numerically calculate posterior expectations in directional statistics (Barp et al., 2022), and approximation of solutions of some classes of PDEs (see e.g. Fasshauer, 2007). Not to mention that certain classes of kernels on spheres ensure that the solution of the PDE belongs to the RKHS and, through the use of an appropriate kernel method, can be consistently approximated (see Fuselier and Wright, 2009; 2012; Hubbert et al., 2015). The product space Υ(d,k) has received increasing attention in the statistics and machine learning communities (Atluri et al., 2018; Wang et al., 2022; Porcu et al., 2016; 2021). Applications from several branches of science justify this context, such as - atmospheric, environmental and oceanographic sciences: variables such as air pressure, air humidity, wind speed, surface temperature, solar radiation, aerosol optical depth, daily ozone concentration, ground level concentration of particulate matter, ocean current velocity, sea surface height anomaly, sea surface salinity, sea surface chlorophyll-a concentration, or sea water density are indexed by the position (longitude and latitude) on planet Earth (S 2) and by time (R 1) (Castruccio and Stein, 2013; Oleson et al., 2013; Faghmous and Kumar, 2014; Jeong et al., 2017; Wu et al., 2022); - geophysics: seismic and volcanic events can be represented by point processes indexed by a position on planet Earth (S 2) and by time (R 1) (Illian et al., 2008; Connor et al., 2009); - structural geology and geotechnics: variables such as the linear discontinuity frequency, the rock quality designation, or the uniaxial compressive strength, used to measure the geotechnical quality of a rock mass, are indexed by the position (easting, northing, elevation, i.e., a point of R 3) of the rock core sample in the rock mass and by the orientation (azimuth and dip, i.e., a point of S 2) of this sample (Sánchez et al., 2019; 2021); - social science: crime and security data can be indexed by their position in the geographic space (R 2) and by time at the scale of a day (S 1) (Tompson et al., 2015; Shirota and Gelfand, 2017); - neuroscience: functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and magnetoencephalography (MEG) signals are indexed by the position on the human head (S 2) and by time (R 1) (Wingeier et al., 2001; Atluri et al., 2016). ## 1.3 Challenges And Contribution The smoothness of scalar Gaussian processes was studied in Lang and Schwab (2013) for the sphere, and in Clarke De la Cerda et al. (2018) for the product of a sphere with R. While scalar Gaussian processes are well understood, the literature on smoothness of matrix-valued kernels in machine learning is scarce, with the notable exception of Cleanthous (2023), who provides an ingenious construction for a Gaussian process defined over a ball embedded in R k. Our paper contributes in this direction. Specifically, a) Section 2 considers mean-square continuous zero-mean vector Gaussian processes defined over the space Υ(d,k) as being previously defined; b) we take a spectral path to smoothness of their covariance kernels in Section 3, through a proper spectral representation for a vector Gaussian process on Υ(d,k), and consequently for the related matrix-valued kernel; c) we construct, in Section 3.3, a suitable Sobolev space for such a matrix-valued kernel; d) Section 3.4 provides a spectral characterization of smoothness of the kernel. ## 1.4 Outline And Notation Section 2 provides a succinct mathematical background. Section 3 illustrates the way to construct proper Sobolev spaces through spectral representations over the space Υ(d,k). Proofs are technical and deferred to Appendix A. Section 5 concludes the paper with a discussion. Hereinafter, Z is the set of integer numbers, Z+ = {κ ∈ Z : κ ≥ 0}, i stands for the complex imaginary unit, p, d and k for positive integers, and *∥ · ∥* for the Euclidean norm on R k. Bold letters denote vectors or matrices of size p × p. A refers to the conjugate of a complex matrix A, and A⊤ to its transpose. In order to work in multidimensional spaces, we consider the multi-index notation: for α = (α1*, . . . , α*k) ∈ Z k + and h = (h1*, . . . , h*k) ∈ R k, we set $$|\alpha|=\sum_{i=1}^{k}\alpha_{i}\,,\qquad\alpha!=\prod_{i=1}^{k}\alpha_{i}!\,,\qquad\quad\partial^{\alpha}f(\mathbf{h})=\partial_{h_{1}}^{\alpha_{1}}\partial_{h_{2}}^{\alpha_{2}}\ldots\partial_{h_{k}}^{\alpha_{k}}f(\mathbf{h}),$$ and for α, β ∈ Z k + $$\alpha^{\beta}=\prod_{i=1}^{k}\alpha_{i}^{\beta_{i}},$$ $$\mathbf{\alpha}\geq\mathbf{\beta}\Longleftrightarrow\alpha_{i}\geq\beta_{i},\;\;\forall\,i,$$ with the usual understanding that 0 0 = 1. ## 2 Vector Gaussian Processes Let Υ (d,k):= S d × R k = {x = (x, t) ∈ R d+1 × R k: ∥x∥ = 1, t ∈ R k}. A p-variate (vector) Gaussian process, {Z(x) : x ∈ Υ(d,k)} is an uncountable collection of random vectors such that, for any finite collection of points x1 , . . . , xN ∈ Υ(d,k), the vector (Z(x1 ) ⊤*, . . . ,* Z(xN ) ⊤) ⊤, having dimension (p × N) × 1, is a Gaussian random vector. In what follows, without loss of generality, we suppose that such a Gaussian process has a zero mean at any point x in Υ(d,k). A p*-variate covariance kernel* on Υ(d,k)is a matrix-valued function $${\widetilde{\mathbf{K}}}:\Upsilon^{(d,k)}\times\Upsilon^{(d,k)}\to\mathbb{R}^{p\times p}$$ defined as $$\widetilde{K}\,(\underline{{{x}}},\underline{{{x}}}^{\prime})=[\widetilde{K}_{i j}\,(\underline{{{x}}},\underline{{{x}}}^{\prime})]_{i,j=1}^{p},\qquad\underline{{{x}}},\underline{{{x}}}^{\prime}\in\Upsilon^{(d,k)},$$ where Keij (x, x ′) = Keji (x ′, x) for all i, j ∈ {1*, . . . , p*} and x, x ′ ∈ Υ(d,k), and where Kf is positive semidefinite, that is, the pN × pN block matrix [Kf(xm, xn )]Nm,n=1 is symmetric and nonnegative definite for any set of points x1 , . . . , xN ∈ Υ(d,k). Hereinafter, we focus on the case where the mapping Kf is continuous, isotropic on S d and stationary in R k, meaning that $$\widetilde{K}(\underline{{{x}}},\underline{{{x}}}^{\prime})=K\bigl(\langle\underline{{{x}}},\underline{{{x}}}^{\prime}\rangle,t-t^{\prime}\bigr),\quad\underline{{{x}}}=(\underline{{{x}}},t),\underline{{{x}}}^{\prime}=(\underline{{{x}}}^{\prime},t^{\prime}),$$ ′), (2) with ⟨·, ·⟩ denoting the dot product in R d+1, and K : [−1, 1] × R k → R p×p being a continuous mapping. Throughout, K will be called a kernel for simplicity, albeit this should be called the isotropic profile of the kernel Kf. This will not give rise to confusion as, from now, only the mapping K will be used. $$\left(2\right)$$ On the one hand, Alegría et al. (2019, Theorem 6.2) proved that a continuous kernel K of the type (2) can be uniquely decomposed as follows: $$\mathbf{K}(s,\mathbf{h})=\sum_{n=0}^{\infty}\dim(\mathcal{I}_{n}^{d})\,\mathbf{C}_{n}^{d}(\mathbf{h})\,\mathcal{G}_{n}^{(d-1)/2}(s),\qquad s\in[-1,1],\quad\mathbf{h}\in\mathbb{R}^{k},\tag{3}$$ for a sequence {Cdn (·)}∞ n=0 of matrix-valued stationary covariance kernels such that {dim(Hdn )Cdn (0)}∞ n=0 is summable. For d > 1, G (d−1)/2 n is defined in terms of the Gegenbauer (ultraspherical) polynomial G (d−1)/2 n , normalizing as G (d−1)/2 n = G (d−1)/2 n /G(d−1)/2 n (1), while for d = 1, G 0 n = Tn is the nth Chebyshev polynomial of the first kind. On the other hand, the generalized addition theorem for spherical harmonics (Erdélyi, 1953, Equation 11.4.2) states that $$\sum_{\mathbf{q}\in A_{n,d}}\mathcal{Y}_{n,\mathbf{x},d}(\mathbf{x})\mathcal{Y}_{n^{\prime},\mathbf{x}^{\prime},d}(\mathbf{x}^{\prime})=\dim(\mathcal{H}_{n}^{d})\,\mathcal{G}_{n}^{(d-1)/2}(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{S}^{d},\quad n\in\mathbb{Z}_{+},\tag{4}$$ where An,d is a set of finite cardinality dim(Hdn ) associated with the spherical harmonics Y*n,q,d*, which form an orthonormal basis for all Lebesgue-square-integrable measurable functions on the d-dimensional sphere S d. Owing to Equations (3) and (4), the Karhunen theorem on the generalized spectral representation of random processes (Yaglom, 1987b, Equation (2.31')) allows decomposing a mean-square continuous p-variate Gaussian process Z having a zero mean and K as its covariance kernel in the following fashion: $$\mathbf{Z}(\underline{x})=\sum_{n=0}^{\infty}\sum_{q\in A_{n,d}}\mathbf{A}_{n,q}^{d}(t)\mathbb{I}_{n,q,d}(\mathbf{x}),\qquad\underline{x}=(\mathbf{x},\mathbf{t})\in\Upsilon^{(d,k)},\tag{5}$$ where {Adn,q(·)} is a sequence of mean-square continuous zero-mean vector Gaussian processes in R ksuch that (cov refers to the covariance operator) $$\mathrm{cov}\left(\mathbf{A}_{n,q}^{d}(\mathbf{t}),\mathbf{A}_{n^{\prime},q^{\prime}}^{d}(\mathbf{t}^{\prime})\right)=\delta_{n,n^{\prime}}\,\delta_{q,q^{\prime}}\,\mathbf{C}_{n}^{d}(\mathbf{t}-\mathbf{t}^{\prime}),\quad\mathbf{t},\mathbf{t}^{\prime}\in\mathbb{R}^{k},\quad n,n^{\prime}\in\mathbb{Z}_{+},\quad q\in\mathcal{A}_{n,d},\ q^{\prime}\in\mathcal{A}_{n^{\prime},d}.\tag{6}$$ The reciprocal, that a vector Gaussian process of the form (5) has a covariance kernel of the form (3), has been established in the scalar case (p = 1) by Clarke De la Cerda et al. (2018, Proposition 3.1). For the reader's convenience, we sketch the proof for the vector case. For any x = (x, t) and x ′ = (x ′, t ′) in Υ(d,k), the bilinearity of the covariance implies Kf(x, x ′) = cov (Z(x), Z(x ′)) = cov X∞ n=0 X q∈An,d Adn,q(t)Yn,q,d(x), X∞ n′=0 X q ′∈An′,d Adn′,q′ (t ′)Yn′,q′,d(x ′) =X∞ n=0 X q∈An,d X∞ n′=0 X q ′∈An′,d Yn,q,d(x)Yn′,q′,d(x ′) cov Adn,q(t), Adn′,q′ (t ′) =X∞ n=0 X q∈An,d Yn,q,d(x)Yn′,q′,d(x ′) C d n (t − t ′) =X∞ n=0 dim(Hdn ) C d n (t − t ′) G (d−1)/2 n(⟨x, x ′⟩), with the last two equalities derived from Equations (6) and (4). Equation (5) can be coupled with the spectral representation of the mean-square continuous stationary random process Adn,q(·) (see Yaglom (1987a, Equation (4.70)) or Chilès and Delfiner (2012, Equation (2.16))) to attain $$\mathbf{Z}(\underline{\mathbf{x}})=\sum_{n=0}^{\infty}\sum_{q\in A_{n,d}}\int_{\mathbb{R}^{d}}e^{i(\mathbf{t},\mathbf{\omega})}\mathbf{\xi}_{n,q}(\mathrm{d}\mathbf{\omega})\mathcal{Y}_{n,q,d}(\mathbf{x}),\qquad\underline{\mathbf{x}}=(\mathbf{x},\mathbf{t})\in\Upsilon^{(d,k)},\tag{7}$$ where {ξn,q(d·)}∞ n,q is a sequence of vector-valued measures with orthogonal increments, that is, E ξn,q(A)ξn′,q(B) = δn=n′Fn(ATB), for all q, n, n ′ and all Borel sets A and B in R k, where Fn is a matrix-valued measure of bounded variation such that Fn(dω) is a positive semidefinite matrix for all ω ∈ R k. Under the additional condition Pn dim(Hdn )RRk ξn,ζ (dω) < ∞, Equation (3) becomes $$\mathbf{K}(s,\mathbf{h})=\sum_{n=0}^{\infty}\dim(\mathcal{H}_{n}^{d})\Big{(}\int_{\mathbb{R}^{k}}\mathrm{e}^{j(\mathbf{h},\mathbf{\omega})}\,\mathbf{F}_{n}(\mathrm{d}\mathbf{\omega})\Big{)}\mathcal{G}_{n}^{(d-1)/2}(s),\qquad s\in[-1,1],\quad\mathbf{h}\in\mathbb{R}^{k}.$$ Clearly, Cdn is real matrix-valued if, and only if, Fn(A) = Fn(−A), for all Borel sets A in R k. A stronger condition for this to happen is that ξn(−A) = ξn(A) ⊤. Throughout, we shall always work under the assumption of real matrix-valued covariance kernels. ## 3 Understanding Regularities Of Matrix-Valued Kernels To study the properties of the matrix-valued kernel K, an intuitive approach is to provide a Karhunen-Loève expansion of a vector Gaussian process in R p, with input space Υ(d,k). Since the product space Υ(d,k)is not compact, an extension of the arguments provided for the scalar case by Clarke De la Cerda et al. (2018) suggests that a sensible strategy is needed to provide a legitimate Karhunen-Loève expansion of vector-valued functions. There are indeed two possibilities: a) to *compactify* the space Υ(d,k) by considering the space Υ (d,k) T:= S d × [0, T] k, with T a positive constant. Under such a construction, a Karhunen-Loève expansion can be namely obtained. Yet, this approach has a cost in that it does not allow for traditional spectral expansions as much as in Equations (7) and (8), respectively; b) to consider the measure space $$\mathbf{\Sigma}$$ $$\left(\Upsilon^{(d,k)},\mathbb{B},\mu_{\Upsilon^{(d,k)}}\right),\tag{1}$$ $$({\mathfrak{g}})$$ where B is the Borel sigma-algebra over Υ(d,k), and where µΥ(d,k) is a product measure defined through $$\mu_{\Upsilon^{(d,k)}}(\mathrm{d}\underline{{{\mathbf{x}}}})=\sigma_{d}(\mathrm{d}\underline{{{\mathbf{x}}}})\times\nu(\mathrm{d}t),\qquad\underline{{{\mathbf{x}}}}\in\Upsilon^{(d,k)},$$ where σd is the Haar measure, i.e., the Lebesgue measure for the sphere, and ν is the Gaussian measure in R k with zero-vector mean and identity covariance matrix, i.e., $$\nu({\rm d}t)=(2\pi)^{-k/2}e^{-\|t\|^{2}/2}{\rm d}t\,.\tag{10.1}$$ Under this choice, the Karhunen-Loève expansion for the vector Gaussian process Z can be attained at the expense of defining a suitable orthonormal basis that is legitimate for this measure space. Our paper takes this path. Hence, we start by defining a proper orthonormal basis for the case considered here. We illustrate our routine through the following scheme. $$(10)$$ ## The Route To Smoothness | 3. Provide a suitable Karhunen-Loève expansio | n. | |-------------------------------------------------|------| | 2. Provide an orthonormal basis. | |------------------------------------| | 1. Consider the measure space in Equation (9) | . | |-------------------------------------------------|-----| ![6_image_0.png](6_image_0.png) 4. Define a proper Sobolev space. 5. Quantify smoothness of the kernel K. The following sections detail each of the steps in this routine. ## 3.1 A Constructive Approach To Orthonormal Bases Consider the normalized Hermite polynomials Hκ on the real line defined by (Olver et al., 2010, Table 18.3.1) $$\mathrm{kernel}\,\,{\cal K}.$$ $$H_{\kappa}(\xi)=\frac{(-1)^{\kappa}}{(\kappa!)^{1/2}}e^{\frac{\xi^{2}}{2}}\,\frac{\mathrm{d}^{\kappa}}{\mathrm{d}\xi^{\kappa}}e^{\frac{-\xi^{2}}{2}},\quad\xi\in\mathbb{R},\quad\kappa=0,1,2,\ldots\,.$$ The family {Hκ}κ∈Z+ forms a complete orthonormal system for L 2(R, ν), with the standard Gaussian measure dν = (2π) −1/2e −ξ 2/2dξ, i.e., $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}H_{\kappa}(\xi)H_{\kappa^{\prime}}(\xi)e^{\frac{-\xi^{2}}{2}}\mathrm{d}\xi=\delta_{\kappa,\kappa^{\prime}}.$$ −∞ Moreover, the l-th derivative of the Hermite polynomials satisfies $$\frac{\mathrm{d}^{l}}{\mathrm{d}\xi^{l}}H_{\kappa}(\xi)=\sqrt{\frac{\kappa!}{(\kappa-l)!}}\ H_{\kappa-l}(\xi).$$ (κ − l)! Hκ−l(ξ). (12) On R k, k ≥ 2, we define the normalized multiple Hermite functions Φα, with α ∈ Z k + through the identity $$\Phi_{\mathbf{\alpha}}(\mathbf{h})=\prod_{i=1}^{k}H_{\alpha_{i}}\left(h_{i}\right),\quad\mathbf{h}\in\mathbb{R}^{k}.\tag{1}$$ $$(11)$$ $$(12)$$ $$(13)$$ It can be verified via Fubini's theorem, by Equation (11) and the definition of ν in Equation (10), that these functions form an orthonormal basis of L 2R k, ν. Hence, we have completed Step 2 in our *Route to Smoothness*. ## 3.2 Expansion For The Matrix-Valued Kernel The sequence {Cn(·)}∞ n=0 in the series expansion (3) is summable at zero. Further, from well-known properties of matrix-valued positive semidefinite functions (Chilès and Delfiner, 2012), we have that, for every n = 0, 1*, . . .*, the matrix-valued function Cn having elements Cij,n, *i, j* = 1*, . . . , p*, satisfies C*ij,n*(h) 2 ≤ Cii,n(0)C*jj,n*(0), for *i, j* = 1*, . . . , p*. This in turn implies that, for every finite measure λ, the mapping Cn is in L 2(R k,λ) for all n. This is obviously true when λ is the Gaussian measure ν. From Equation (3) in concert with the fact that $$\left|G_{n}^{(d-1)/2}(s)\right|=\left|\frac{G_{n}^{(d-1)/2}(s)}{G_{n}^{(d-1)/2}(1)}\right|\leq1,\quad s\in[-1,1],$$ we conclude that the convergence of the series (3) is uniform. At this point, since the multivariate Hermite polynomials that have been defined in Equation (13) form a complete orthonormal basis in L 2(R k, ν), we have that, for every n = 0, 1*, . . .*, the positive semidefinite functions Cn : R k → R p×pcan be uniquely expanded in terms of Hermite polynomials, that is, $$\mathbf{C}_{n}^{d}(\mathbf{h})=\sum_{\mathbf{\alpha}\in\mathbb{Z}_{+}^{k}}\gamma_{n,\mathbf{\alpha}}^{d}\Phi_{\mathbf{\alpha}}(\mathbf{h}),\quad\mathbf{h}\in\mathbb{R}^{k},$$ where the series converges in L 2(R k, ν), and where γ d $\left.\begin{array}{c}d\\ 1,\alpha\end{array}\right\}_{\mathbf{\alpha}\in\mathbb{Z}_{+}^{k}}\subset\mathbb{C}^{p}$ p×pis a summable sequence of matrices $$\gamma_{n,\alpha}^{d}=\int_{\mathbb{R}^{k}}C_{n}^{d}(\mathbf{h})\Phi_{\alpha}(\mathbf{h})\,\nu(\mathrm{d}\mathbf{h}),\quad n\in\mathbb{Z}_{+},\quad\alpha\in\mathbb{Z}_{+}^{k}.$$ such that Consequently, the kernel K in Equation (3) can be uniquely expanded as $$\mathbf{K}(s,\mathbf{h})=\sum_{n=0}^{\infty}\dim(\mathcal{H}_{n}^{d})\sum_{\mathbf{\alpha}\in\mathbb{Z}_{+}^{d}}\mathcal{J}_{n,\mathbf{\alpha}}^{d}\Phi_{\mathbf{\alpha}}(\mathbf{h})\mathcal{G}_{n}^{(d-1)/2}(s),\quad(s,\mathbf{h})\in[-1,1]\times\mathbb{R}^{k}.$$ k. (15) We call the indexed set γ d n,α (n,α)∈Ωk ⊂ C p×pthe *Gegenbauer-Hermite spectrum* of the p-variate kernel K, where the indices take value in the set $\Omega_{k}:=\{(n,\mathbf{\alpha}):\ n\in\mathbb{Z}_{+},\mathbf{\alpha}\in\mathbb{Z}_{+}^{k}\}$. (10.11) ## 3.3 Defining The Sobolev Spaces For given *ζ, m* ∈ Z+, let C ζ,m((−1, 1), R k; R p×p) be the space of functions R defined in (−1, 1) × R k, with values in R p×p, such that d j ds j ∂ βR exist and are continuous for j = 0, 1*, . . . , ζ* and 0 ≤ |β| ≤ m, where d ds represents the differentiation in (−1, 1) and ∂ β the partial derivative in R k of multi-index β. Define $$\left\|\mathbf{R}\right\|_{W^{2}_{\rm a,\,\,n}}^{2}:=\sum_{j=0}^{\ell}\sum_{|\mathbf{\beta}|\leq m}\int_{\mathbb{R}^{3}}\int_{-1}^{1}\left\|\frac{{\rm d}^{j}}{{\rm d}s^{j}}\partial^{\mathbf{\beta}}\mathbf{R}(s,\mathbf{h})\right\|_{F}^{2}\left(1-s^{2}\right)^{d/2-1+j}\ {\rm d}s\ \mathbf{\nu}({\rm d}\mathbf{h}),\tag{17}$$ $$(14)$$ $$\left(15\right)$$ $$(16)$$ where ∥·∥F is the Fröebenius norm in R p×p, induced by the Fröebenius inner product ⟨*A, B*⟩F := tr(AB⊤), so that ∥A∥ 2 F = tr(AA⊤), with tr denoting the trace operator. Finally, define the Sobolev space W ζ,m d,k ((−1, 1), R k; R p×p) as the completion of the space C ζ,m((−1, 1), R k; R p×p) with respect to the norm (17) (with the usual identification of a.e. equal functions). Remark 3.1. The choice of this particular norm is due to the actual meaning of the variables in our setting: in fact the differentiation with respect to s is connected to a differentiation on the sphere with respect to the geodesic distance, defined as arccos(⟨·, ·⟩) *between any pair of points on the spherical shell. The measure* 1 − s 2d/2−1ds *corresponds to the surface (Haar) measure* σd on S d. ## 3.4 Quantifying Smoothness In the following we intend to obtain estimates from below and from above for the Sobolev norm (17). We will use the equivalence relation f ∼ g to relate functions *f, g*, meaning that cg ≤ f ≤ Cg with constants c, C > 0 that can only depend on (*d, k, ζ, m*). Note that this is the case if the constants depend also on j or on β, since it will always be intended that j ≤ ζ and |β| ≤ m, so they only take values in a finite set depending on *ζ, m, k*. Our search from smoothness starts by defining a proper spectral inversion of K under the Fröebenius norm ∥ · ∥2F . To do so, for β ∈ Z k + and j ∈ Z+ such that |β| ≤ m and j ≤ ζ, we define $$I_{j,\boldsymbol{\beta}}:=\int_{\mathbb{R}^{k}}\int_{-1}^{1}\left\|{\frac{\mathrm{d}^{j}}{\mathrm{d}s^{j}}}\partial^{\boldsymbol{\beta}}\boldsymbol{K}(s,\boldsymbol{h})\right\|_{F}^{2}\left(1-s^{2}\right)^{d/2-1+j}\mathrm{d}s\ \nu(\mathrm{d}\boldsymbol{h}).$$ $$(18)$$ 8 We now define a sequence {sj,β}j,β with generic element sj,β being identically equal to $$s_{j,\beta}:=\sum_{n=j}^{\infty}\sum_{\mathbf{\alpha}\geq\beta}\left\|\mathbf{\gamma}_{n,\mathbf{\alpha}}^{d}\right\|_{F}^{2}\ (n+1)^{d-1+2j}\mathbf{\alpha}^{\beta}.\tag{19}$$ We are going to prove that the quantities (18) and (19) are actually related, and that they are both crucial to quantify smoothness. We start with a technical result that clearly illustrates the relation between these two quantities. Proposition 3.1. Let ζ, m ∈ Z+*. Given the continuous kernel* K : (−1, 1) × R k → R p×pthat is isotropic on S d *and stationary on* R k *as in Equation (15), we have that* $$\|{\cal K}\|_{W_{d,k}^{\zeta,m}}^{2}=\sum_{j=0}^{\zeta}\sum_{|\beta|\leq m}I_{j,\beta}\sim\sum_{j=0}^{\zeta}\sum_{|\beta|\leq m}s_{j,\beta}.$$ Hence, ∥K∥ 2 Wζ,m d,k < ∞ if and only if sj,β < ∞, for all j ≤ ζ and β ∈ Z k + such that |β| ≤ m. Proposition 3.1 derives from Lemma A.2 given in Appendix A. Clearly, it does not provide a friendly way to check when a given function K belongs to the Sobolev space W ζ,m d,k for given quadruple (*d, k, ζ, m*) of suitable integers. The next result (see the proof in Appendix A.2) provides an estimate that helps shedding some light in this direction. Proposition 3.2. In the conditions of Proposition *3.1,* $$\|{\cal K}\|_{W_{d,k}^{\zeta,m}}^{2}\sim s_{0,{\bf0}}+s_{\zeta,{\bf0}}+\sum_{|\beta^{\prime}|=m}s_{0,\beta^{\prime}}+\sum_{|\beta^{\prime}|=m}s_{\zeta,\beta^{\prime}}.$$ A further step ahead can be done by introducing the space of square summable multi-sequences, with respect to a measure µ in the set Ωk defined in Equation (16): $$\ell^{2}(\mu):=\left\{\{\gamma_{n,\mathbf{\alpha}}\}_{(n,\mathbf{\alpha})\in\Omega_{k}}\subset\mathbb{C}^{p\times p}\ :\sum_{n=0}^{\infty}\sum_{\mathbf{\alpha}\geq\mathbf{0}}\left\|\gamma_{n,\mathbf{\alpha}}\right\|_{F}^{2}\ \mu_{n,\mathbf{\alpha}}<\infty\right\}.$$ We are ready to state the main result (see the proof in Appendix A.2), which completes our quest for smoothness over product spaces, giving a condition on the Gegenbauer-Hermite spectrum γ d n,α (n,α)∈Ωk of the p-variate kernel K, which holds true if and only if K ∈ W ζ,m d,k ((−1, 1), R k; R p×p), thus quantifying its smoothness in terms of the two parameters *ζ, m* ∈ Z+. Theorem 3.3. Let ζ, m ∈ Z+ *and the measure* µe ζ,m *be defined as* $$\widehat{\mu}_{n,\mathbf{\alpha}}^{<,m}=(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\chi_{n\geq\zeta}\right]\left[1+\sum_{|\mathbf{\beta}^{\prime}|=m}\mathbf{\alpha}^{\mathbf{\beta}^{\prime}}\chi_{\mathbf{\alpha}\geq\mathbf{\beta}^{\prime}}\right],\quad(n,\mathbf{\alpha})\in\Omega_{k},\tag{20}$$ with χn≥ζ and χα≥β′ being equal to 1 if n ≥ ζ and α ≥ β ′, respectively, and to 0 otherwise. Then, for a given continuous kernel K : (−1, 1)×R k → R p×p*that is isotropic on* S d *and stationary on* R k as in Equation (15), we have that K *belongs to the space* W ζ,m d,k *if and only if* γ d n,α ∈ ℓ 2(µe ζ,m). Hence, we have proved that, under the spectral construction proposed in this paper for a Gaussian measure space, quantifying smoothness is equivalent to prove summability conditions for the matrices γ d n,α. One can certainly argue that these conditions are analytically tricky to check. Yet, Theorem 3.3 provides the building block to deduce the simpler condition below (see the proof in Appendix A.2). Corollary 3.4. *Consider* µ ζ,m *one of the following measures in* Ωk µ ζ,m n,α = (n + 1)d−1-1 + (n + 1)2ζ 1 + X |β′|=m α β ′ , (n, α) ∈ Ωk, (21) $$(22)$$ or $$\overline{{{\mu}}}_{n,\mathbf{\alpha}}^{\zeta,m}=(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\right]\left[1+|\mathbf{\alpha}|^{m}\right]\quad(n,\mathbf{\alpha})\in\Omega_{k}.$$ m] (n, α) ∈ Ωk. (22) If γ d n,α ∈ ℓ 2(µ ζ,m), then K *belongs to the space* W ζ,m d,k . Remark 3.2. Our work is general, and the following special cases are covered: 1. *The case* k = 0 and p = 1 has been considered by Lang and Schwab (2013). 2. *The case* k = 1 and p = 1 has been considered by Clarke De la Cerda et al. *(2018).* 3. *The case* d = 0*, general case, but the domain restricted to* Bk ⊂ R k, with Bk denoting the kdimensional ball, has been considered by Cleanthous *(2023).* The work of Cleanthous (2023) is the first, to our knowledge, where multivariate smoothness is considered in the literature. ## 4 Example For d > 1, a ≥ 0, b > 0 and η ∈ (0, 1), consider the following univariate nonseparable kernel (Emery et al., 2021): $$K(s,\mathbf{h};a,b,\eta)={\frac{(1-\eta)^{d-1}\,\exp(-b\|\mathbf{h}\|^{2})}{(1-2\eta s\exp(-a\|\mathbf{h}\|^{2})+\eta^{2}\exp(-2a\|\mathbf{h}\|^{2})^{(d-1)/2}}},\quad s\in[-1,1],\quad\mathbf{h}\in\mathbb{R}^{k}.$$ To calculate its Gegenbauer-Hermite spectrum, we start with the Gegenbauer expansion (Emery et al., 2021) $$\mathbf{K}(s,\mathbf{h};a,b,\eta)=(1-\eta)^{d-1}\sum_{n=0}^{\infty}\eta^{n}\,\exp(-(an+b)\|\mathbf{h}\|^{2})G_{n}^{(d-1)/2}(s)$$ $$=\sum_{n=0}^{\infty}\dim(\mathcal{H}_{n}^{d})C_{n}^{d}(\mathbf{h};a,b,\eta)\mathcal{G}_{n}^{(d-1)/2}(s),\quad s\in[-1,1],\quad\mathbf{h}\in\mathbb{R}^{k},$$ with $$C_{n}^{d}(\mathbf{h};a,b,\eta)={\frac{(d-1)(1-\eta)^{d-1}\eta^{n}}{2n+d-1}}\,\exp(-(a n+b)\|\mathbf{h}\|^{2}).$$ The Gegenbauer-Hermite spectrum is given by Equation (14). Accounting for the properties of Hermite polynomials (Magnus et al., 1966, Section 5.6.2), one finds $$\gamma^{d}_{n,\mathbf{\alpha}}=\int_{\mathbb{R}^{k}}C^{d}_{n}(\mathbf{h};a,b,\eta)\Phi_{\mathbf{\alpha}}(\mathbf{h})\mathrm{d}\mathbf{h}$$ $$=\begin{cases}0\text{if one or more components of}\mathbf{\alpha}\text{is odd}\\ \frac{(d-1)(1-\eta)^{d-1}\eta^{n}}{2n+d-1}\frac{2^{-|n|/2}}{(\alpha/2)!}\sqrt{\frac{\pi^{k}\alpha!}{(1/2+an+b)!}}\left(\frac{1}{1+2(an+b)}-1\right)^{|\mathbf{\alpha}|/2}\text{otherwise.}\end{cases}$$ In addition, for each $\mathbf{h}$ the $n$-th row of $\mathbf{h}$, we get $\mathbf{h}$ 2010 for each $\mathbf{h}$ 5.5.5 Using the duplication formula for the gamma function (Olver et al., 2010, formula 5.5.5), it is seen that 2 −|α|/2 (α/2)! q πkα! (1/2+an+b) k belongs to (0,(2π) k/2]. Since, furthermore, (1 1+2(an+b) − 1) and η belong to (−1, 1) and (0, 1), respectively, it follows that γ d n,α ∈ ℓ 2(µe ζ,m) for any *ζ, m* ∈ Z+. Accordingly, owing to Theorem 3.3, K belongs to W ζ,m d,k for any *ζ, m* ∈ Z+. ## 5 Conclusions Our work provides the foundations to smoothness quantification of Gaussian processes defined over some specific product space involving a d-dimensional sphere. Some comments are in order. The results presented in Section 3 can be extended to product spaces involving other manifolds. For instance, classic harmonic analysis arguments prove that the d-dimensional sphere might be replaced by a compact two-point homogeneous space at the expense of replacing the normalized Gegenbauer polynomials in Equation (3) with their counterpart over such spaces, known as Jacobi polynomials (Cleanthous et al., 2020). We are not aware of whether our results would hold for other general networks such as graphs with Euclidean edges (Porcu et al., 2023). For such cases, even spectral representations become questionable, so that more mathematical effort is needed in this direction. Future works may involve the verification of the results presented in this paper for specific classes of scalar and matrix-valued kernels, such as the ones proposed by Porcu et al. (2016; 2018), Alegría et al. (2019) and Emery et al. (2021). Also, extensions to our work to kernels that are not isotropic on the sphere could be based on spectral characterizations such as the one proposed by Jones (1963) for axially symmetric processes on S 2, i.e., processes that are stationary over longitudes, but not over latitudes, of the 2-sphere. Having some insight in this direction would help to overcome the restrictive assumption of isotropy and allow for wider classes of kernels in vector Gaussian process regression. ## Acknowledgments This work was partially funded by the São Paulo Research Foundation (FAPESP, Brazil), through grant 2021/04269-0 (APP) and grant 2022/16407-1 (EM), and by the National Agency for Research and Development of Chile, through grants ANID PIA AFB230001 and ANID Fondecyt 1210050 (XE). The authors are very grateful the Action Editors and the Reviewers for their thorough reviews that have allowed for a considerably improved version of the manuscript. ## A Appendix A.1 Technical Lemmas Lemma A.1. Let α, β, ε ∈ Z k +. If α+ε ≥ β*, then one has:* $$(\alpha+\varepsilon)!\over(\alpha+\varepsilon-\beta)!}\sim\alpha^{\beta}.\tag{23}$$ $$(24)$$ $$(25)$$ Proof. The claim holds because c1(βi, εi)α βi i ≤(αi+εi)! (αi+εi−βi)! ≤ c2(βi, εi)α βi i , i = 1, 2*, . . . , k* (Olver et al., 2010, formula 5.11.12). In particular, the constants depend on β and ε. For the next lemma we will need to state some formulas. From Olver et al. (2010, formulas 18.9.19, 18.9.21 and 18.7.4), we have $$\frac{\mathrm{d}^j}{\mathrm{d}s^j}G_n^\lambda(s)=2^j\left(\lambda\right)_jG_{n-j}^{\lambda+j}(s)\sim G_{n-j}^{\lambda+j}(s),\quad\forall\,n\geq j,\quad\forall\,\lambda>0,$$ $$\frac{\mathrm{d}}{\mathrm{d}s}\mathcal{G}_n^0(s)=\frac{\mathrm{d}}{\mathrm{d}s}T_n(s)=nG_{n-1}^1(s),\quad\forall\,n\geq1,$$ the Bob has a general (Chron et al., 2018 for a lot 5.25). We have to do where (λ)j = Γ(λ+j) Γ(λ)is the Pochhammer symbol (Olver et al., 2010, formula 5.2.5). Using Olver et al. (2010, Table 18.3.1 and formula 18.14.4), we get $$\int_{-1}^{1}G_{n}^{\lambda}(s)G_{n^{\prime}}^{\lambda}(s)\left(1-s^{2}\right)^{\lambda-1/2}\ \mathrm{d}s=\frac{\pi^{21-2\lambda}\Gamma(n+2\lambda)}{n!(n+\lambda)\Gamma(\lambda)^{2}}\delta_{n,n^{\prime}},\quad\forall\,n,n^{\prime}\geq0,\quad\forall\,\lambda>0,$$ $$\int_{-1}^{1}\mathcal{S}_{n}^{0}(s)\mathcal{S}_{n^{\prime}}^{0}(s)\left(1-s^{2}\right)^{-1/2}\ \mathrm{d}s\sim\delta_{n,n^{\prime}},\quad\forall\,n,n^{\prime}\geq0.$$ Finally, from Muller (1966, equation 11), $$\frac{\dim(\mathcal{H}_{n}^{d})}{G_{n}^{(d-1)/2}(1)}=\frac{2n+d-1}{d-1},\quad\forall\,n\geq1,\quad\forall\,d>1,$$ $$(26)$$ $$(27)$$ $$\dim({\mathcal{H}}_{n}^{1})=2,\quad\forall\,n\geq1.$$ Lemma A.2. Let ζ, m ∈ Z+*. For* β ∈ Z k + and j ∈ Z+ such that |β| ≤ m and j ≤ ζ, define Ij,β and sj,β as per Equations (18) and (19). Then the following estimates hold: $$I_{j,\beta}\sim\sum_{n=j}^{\infty}\sum_{\alpha\geq\beta}\left\|\gamma_{n,\alpha}^{d}\right\|_{F}^{2}(n+1)^{d-1+2j}\frac{\alpha!}{(\alpha-\beta)!}\tag{1}$$ $$I_{j,\beta}\sim s_{j,\beta}.$$ and Ij,β ∼ sj,β. (30) Proof. By Equation (15), $$I_{j,\beta}=\int_{\mathbb{R}^{d}}\int_{-1}^{1}\left\|\sum_{n=0}^{\infty}\dim(\mathcal{H}_{n}^{d})\sum_{|\alpha|\geq0}\gamma_{n,\alpha}^{d}\ \partial^{\beta}\Phi_{\alpha}(\mathbf{h})\frac{\mathrm{d}^{j}}{\mathrm{d}s^{j}}\mathbb{G}_{n}^{(d-1)/2}(s)\right\|_{F}^{2}\left(1-s^{2}\right)^{d/2-1+j}\ \mathrm{d}s\ \mathbf{\nu}(\mathrm{d}\mathbf{h})$$ $$=\sum_{n,n^{\prime}=0}^{\infty}\sum_{|\alpha|,|\alpha|\geq0}\langle\gamma_{n,\alpha}^{d}\ \gamma_{n^{\prime},\alpha^{\prime}}^{d}\rangle_{F}\ \overline{J}_{n,n^{\prime}}J_{\alpha,\alpha^{\prime}},$$ $$(28)$$ $$(29)$$ $$(30)$$ where $$\widetilde{J}_{n,n^{\prime}}:=\dim({\mathcal{H}}_{n}^{d})\dim\left({\mathcal{H}}_{n^{\prime}}^{d}\right)\int_{-1}^{1}\frac{\mathrm{d}^{j}}{\mathrm{d}s^{j}}{\mathcal{G}}_{n}^{(d-1)/2}(s)\frac{\mathrm{d}^{j}}{\mathrm{d}s^{j}}{\mathcal{G}}_{n^{\prime}}^{(d-1)/2}(s)\left(1-s^{2}\right)^{d/2-1+j}\,\mathrm{d}s$$ $$J_{\boldsymbol{\alpha},\boldsymbol{\alpha^{\prime}}}:=\int_{\mathbb{R}^{k}}\partial^{\boldsymbol{\beta}}\Phi_{\boldsymbol{\alpha}}(\boldsymbol{h})\partial^{\boldsymbol{\beta}}\Phi_{\boldsymbol{\alpha^{\prime}}}(\boldsymbol{h})\,\,\nu(\mathrm{d}\boldsymbol{h}).$$ and Note that, Jen,n′ = 0 for *n < j*. For n ≥ j ≥ 0, we distinguish two cases, depending on whether d is greater than 1 or not. First, let us examine the case when d > 1 and n ≥ j ≥ 0. In this case, we have, by Equation (24), Jen,n′ = dim(Hdn ) G (d−1)/2 n (1) dim Hdn′ G (d−1)/2 n′ (1) Z 1 −1 d j ds j G (d−1)/2 n(s) d j ds j G (d−1)/2 n′ (s)1 − s 2d/2−1+jds ∼dim(Hdn ) G (d−1)/2 n (1) dim Hdn′ G (d−1)/2 n′ (1) Z 1 −1 G (d−1)/2+j n−j(s)G (d−1)/2+j n′−j(s)1 − s 2d/2−1+jds . By Equations (26) and (28), we obtain $$\widetilde{J}_{n,n^{\prime}}\sim\left(\frac{2n+d-1}{d-1}\right)\left(\frac{2n^{\prime}+d-1}{d-1}\right)\int_{-1}^{1}G_{n-j}^{(d-1)/2+j}(s)G_{n^{\prime}-j}^{(d-1)/2+j}(s)\left(1-s^{2}\right)^{d/2-1+j}\,\mathrm{d}s$$ $$=\left(\frac{2n+d-1}{d-1}\right)^{2}\frac{\pi^{2d-d-2j}\Gamma(n+j+d-1)}{(n-j)!(n+\frac{d-1}{2})\Gamma(\frac{d-1}{2}+j)^{2}}\delta_{n,n^{\prime}}\,.$$ Since2n + d − 1 $${\frac{2n+d-1}{d-1}}\sim n+1$$ and (Olver et al., 2010, formula 5.11.12) $$\frac{\pi2^{2-d-2j}\Gamma(n+j+d-1)}{(n-j)!(n+\frac{d-1}{2})\Gamma(\frac{d-1}{2}+j)^{2}}\sim(n+1)^{d-3+2j},$$ the previous result simplifies into $$\begin{array}{r c l}{{\tilde{J}_{n,n^{\prime}}}}&{{\sim}}&{{(n+1)^{d-1+2j}\delta_{n,n^{\prime}}.}}\end{array}$$ $$(31)$$ Jen,n′ ∼ (n + 1)d−1+2jδn,n′ . (31) Let us now address the case when d = 1. For n ≥ j = 0, we have Jen,n′ = dim(H1n )dim H1n′Z 1 −1 G 0 n (s)G 0 n′ (s)1 − s 2−1/2ds ∼ δn,n′ , based on Equation (27). For n ≥ j > 0, we have, by Equations (23), (24), (25) and (26): Jen,n′ = dim(H1n )dim H1n′Z 1 −1 d j ds j G 0 n (s) d j ds j G 0 n′ (s)1 − s 2j−1/2ds = dim(H1n )dim H1n′nn′4 j−1[(j − 1)!]2 Z 1 −1 G j n−j (s)G j n′−j (s)1 − s 2j−1/2ds = dim(H1n )dim H1n′n π(n + j − 1)! 2(n − j)! δn,n′ ∼ (n + 1)2jδn,n′ . Hence, Equation (31) remains valid when d = 1. On the other hand, Jα,α′ =Z Rk ∂ β Y k j=1 Hαj (hj ) ∂ β Y k j=1 Hα′j (hj ) ν(dh) =Z Rk Y k j=1 ∂ βjHαj (hj ) Y k j=1 ∂ βjHα′j (hj ) ν(dh) j=1 sαj ! (αj − βj )! Hαj−βj (hj ) Y k j=1 s α ′ j ! (α ′ j − βj )! Hα′j−βj (hj ) ν(dh) =Z Rk Y k = sα! (α − β)!sα′! (α′ − β)! ZRk Y k j=1 Hαj−βj (hj ) Y k j=1 Hα′j−βj (hj ) ν(dh) = sα! (α − β)!sα′! (α′ − β)! ZRk Φα−β (h) Φα′−β (h) ν(dh) and then, since the multivariate Hermite polynomials are an orthonormal basis of L 2(R k, ν), $$J_{\alpha,\alpha^{\prime}}=\frac{\alpha!}{(\alpha-\beta)!}\delta_{\alpha,\alpha^{\prime}},\quad\alpha\geq\beta.\tag{32}$$ Thus, from Equations (31) and (32) we obtain Equation (29) and then Equation (30) using Equation (23). Lemma A.3. Let α, β, β ′ ∈ Z k +*. If* α ≥ β ′ ≥ β ≥ 0*, then* αβ ≤ αβ ′. Proof. In the scalar case, *a, b, b*′ ∈ Z+ with a ≥ b ′ ≥ b implies a b ≤ a b ′. By applying this to each component we obtain the claim. Fix β ∈ Z+ with |β| ≤ m, let $$I_{\beta}:=\{\beta^{\prime}\in\mathbb{Z}_{+}:\ \beta^{\prime}\geq\beta,\ |\beta^{\prime}|=m\}$$ and $$A_{\beta}:=\{\mathbf{\alpha}\in\mathbb{Z}_{+}:\ \mathbf{\alpha}\geq\beta\}$$ Lemma A.4. The set Aβ *can be written as* $$A_{\beta}=\tilde{A}_{\beta}\cup\bigcup_{\beta^{\prime}\in I_{\beta}}A_{\beta^{\prime}}$$ $$\tag{33}$$. where Aeβ = {α ∈ Z+ : |α| *< m,* α ≥ β} . Proof. If α ∈ Aβ, then either |α| < m or there exists β ′ ∈ Iβ such that β ≤ β ′ ≤ α. One can construct such β ′ by increasing those components βi of β that satisfy βi < αi until reaching |β ′| = m. ## A.2 Proofs Of The Main Statements Proof of Proposition *3.2.* Given j ≤ ζ and |β| ≤ m, in the definition (19) of sj,β, the sum runs over every n ≥ j and α ∈ Aβ. We have that - if $n\geq\zeta$ then $(n+1)^{2j}\leq(n+1)^{2\zeta}$, ? - if $n<\zeta$ then $(n+1)^{2j}\leq(\zeta+1)^{2\zeta}$. Moreover, by Equation (33), if α ∈ Aβ then either α ∈ Aeβ or α ∈ Aβ′ for some β ′ ∈ Iβ: - if α ∈ Aeβ then αβ ≤ mm, - if α ∈ Aβ′ with β ′ ∈ Iβ, then by Lemma A.3, it holds αβ ≤ αβ ′. Then we can estimate from above all the termsγ d n,α 2 F (n + 1)d−1+2jαβ in the definition (19) of sj,β with the corresponding term - in $s_{\zeta,\beta'}$ with $\boldsymbol{\alpha}\geq\beta'\geq\beta$ if $|\boldsymbol{\alpha}|\geq m$, $n\geq\zeta$, ... $$+\,1)^{2\zeta}s_{0,\beta^{\prime}}{\mathrm{~with~}}\alpha\geq\beta^{\prime}\geq\beta{\mathrm{~if~}}|\alpha|\geq m,\;n<\zeta,$$ - in mmsζ,0 if |α| < m, n ≥ ζ, - in (ζ + 1)2ζmms0,0 if |α| < m, *n < ζ*. As a consequence $$s_{j,\beta}\leq(\zeta+1)^{2\zeta}m^{m}s_{0,{\bf0}}+m^{m}s_{\zeta,{\bf0}}+(\zeta+1)^{2\zeta}\sum_{|\beta^{\prime}|=m}s_{0,\beta^{\prime}}+\sum_{|\beta^{\prime}|=m}s_{\zeta,\beta^{\prime}},$$ and then summing up all the terms (the number of such terms only depends on *ζ, k, m*), we get $$\|K\|_{W_{s,k}^{\zeta,m}}^{2}\sim\sum_{j=0}^{\zeta}\sum_{|\boldsymbol{\beta}|\leq m}s_{j,\boldsymbol{\beta}}\sim s_{0,\mathbf{0}}+s_{\zeta,\mathbf{0}}+\sum_{|\boldsymbol{\beta^{\prime}}|=m}s_{0,\boldsymbol{\beta^{\prime}}}+\sum_{|\boldsymbol{\beta^{\prime}}|=m}s_{\zeta,\boldsymbol{\beta^{\prime}}},$$ from below is trivial. since the estimate from below is trivial. Proof of Theorem *3.3.* By Proposition 3.2, all we have to do is to prove that $$s_{0,\mathbf{0}}+s_{\zeta,\mathbf{0}}+\sum_{|\boldsymbol{\beta^{\prime}}|=m}s_{0,\boldsymbol{\beta^{\prime}}}+\sum_{|\boldsymbol{\beta^{\prime}}|=m}s_{\zeta,\boldsymbol{\beta^{\prime}}}=\sum_{n=0}^{\infty}\sum_{\boldsymbol{\alpha}\geq\mathbf{0}}\left\|\boldsymbol{\gamma}_{n,\boldsymbol{\alpha}}^{d}\right\|_{F}^{2}\ \widehat{\mu}_{n,\boldsymbol{\alpha}}^{\zeta,m},\tag{34}$$ Equation (30): I should form $F_{\mathbf{\alpha}}$ with (10) $\alpha$ where µe ζ,m n,α is given in Equation (20). Indeed, from Equation (19), one has: s0,0 = X∞ n=0 X α≥0 γ d n,α 2 F (n + 1)d−1α 0, sζ,β′ = X∞ n=ζ X α≥β′ γ d n,α 2 F (n + 1)d−1+2ζα β ′= X∞ n=0 X α≥0 γ d n,α 2 F (n + 1)d−1+2ζχn≥ζ α β ′χα≥β′ , sζ,0 = X∞ n=ζ X α≥0 γ d n,α 2 F (n + 1)d−1+2ζα 0 = X∞ n=0 X α≥0 γ d n,α 2 F (n + 1)d−1+2ζχn≥ζ α 0, s0,β′ = X∞ n=0 X α≥β′ γ d n,α 2 F (n + 1)d−1α β ′= X∞ n=0 X α≥0 γ d n,α 2 F (n + 1)d−1α β ′χα≥β′ , where α0 = 1. This all adds up into $$\widetilde{\mu}_{n,\mathbf{\alpha}}^{<m}=(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\chi_{n\geq\zeta}\sum_{|\mathbf{\beta}^{\prime}|=m}\mathbf{\alpha}^{\mathbf{\beta}^{\prime}}\chi_{\mathbf{\alpha}\geq\mathbf{\beta}^{\prime}}+(n+1)^{2\zeta}\chi_{n\geq\zeta}+\sum_{|\mathbf{\beta}^{\prime}|=m}\mathbf{\alpha}^{\mathbf{\beta}^{\prime}}\chi_{\mathbf{\alpha}\geq\mathbf{\beta}^{\prime}}\right]$$ $$=(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\chi_{n\geq\zeta}\right]\ \left[1+\sum_{|\mathbf{\beta}^{\prime}|=m}\mathbf{\alpha}^{\mathbf{\beta}^{\prime}}\chi_{\mathbf{\alpha}\geq\mathbf{\beta}^{\prime}}\right].$$ $\square$ . $$\square$$ Proof of Corollary *3.4.* Obviously, $$\widetilde{\mu}_{n,\boldsymbol{\alpha}}^{\zeta,m}\leq(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\right]\left[1+\sum_{|\boldsymbol{\beta}^{\prime}|=m}\boldsymbol{\alpha}^{\boldsymbol{\beta}^{\prime}}\right].$$ Moreover, αβ ′≤ |α| m, so that P|β′|=m αβ ′≤ D(*m, k*)|α| m, where D(*m, k*) > 1 is the number of multiindices in Z k + of module m (an integer depending only m and k). Accordingly, $$\widehat{\mu}_{n,\boldsymbol{\alpha}}^{\zeta,m}\leq(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\right]\left[1+D(m,k)|\boldsymbol{\alpha}|^{m}\right]$$ $$\leq D(m,k)(n+1)^{d-1}\left[1+(n+1)^{2\zeta}\right]\left[1+|\boldsymbol{\alpha}|^{m}\right].$$ Thus, considering the measure in Equation (22) or in Equation (21), if γ d n,α ∈ ℓ 2(µ ζ,m), then γ d n,α ∈ ℓ 2(µe ζ,m) and the result follows by Theorem 3.3. ## References Adler, R. J. and J. E. Taylor (2007). *Random Fields and Geometry*. New York: Springer. Alegría, A., E. Porcu, R. Furrer, and J. Mateu (2019). Covariance functions for multivariate Gaussian fields evolving temporally over planet earth. *Stochastic Environmental Research and Risk Assessment 33* (8-9), 1593–1608. Alvarez, M. A., L. Rosasco, and N. D. Lawrence (2012). Kernels for vector-valued functions: A review. Foundations and Trends® *in Machine Learning 4* (3), 195–266. Atluri, G., A. Karpatne, and V. Kumar (2018). Spatio-temporal data mining: A survey of problems and methods. *ACM Computing Surveys 51* (4), 1–41. Atluri, G., A. MacDonald III, K. Lim, and V. Kumar (2016). The brain-network paradigm: Using functional imaging data to study how the brain works. *Computer 49* (10), 65–71. Barp, A., C. J. Oates, E. Porcu, and M. Girolami (2022). A Riemann–Stein kernel method. *Bernoulli 28* (4), 2181–2208. Bonilla, E. V., K. Chai, and C. Williams (2007). Multi-task Gaussian process prediction. *Advances in Neural* Information Processing Systems 20. Castruccio, S. and M. L. Stein (2013). Global space–time models for climate ensembles. The Annals of Applied Statistics 7 (3), 1593–1611. Chen, Z., J. Fan, and K. Wang (2023). Multivariate Gaussian processes: definitions, examples and applications. *METRON 81* (2), 181–191. Chilès, J.-P. and P. Delfiner (2012). *Geostatistics: Modeling Spatial Uncertainty, Second Edition*. New York: John Wiley & Sons. Clarke De la Cerda, J., A. Alegría, and E. Porcu (2018). Regularity properties and simulations of Gaussian random fields on the sphere cross time. *Electronic Journal of Statistics 12* (1), 399 - 426. Cleanthous, G. (2023). On the properties of multivariate isotropic random fields on the ball. Technical Report. DOI: 10.21203/*rs.3.rs-2700238*/v1 . Cleanthous, G., A. G. Georgiadis, A. Lang, and E. Porcu (2020). Regularity, continuity and approximation of isotropic Gaussian random fields on compact two-point homogeneous spaces. Stochastic Processes and their Applications 130 (8), 4873–4891. Connor, C., N. Chapman, and L. Connor (2009). *Volcanic and Tectonic Hazard Assessment for Nuclear* Facilities. Cambridge University Press. Emery, X., A. Alegría, and D. Arroyo (2021). Covariance models and simulation algorithm for stationary vector random fields on spheres crossed with Euclidean spaces. *SIAM Journal on Scientific Computing 43* (5), A3114–A3134. Erdélyi, A. (1953). *Higher Transcendental Functions, Volume II*. New York: McGraw-Hill. Faghmous, J. H. and V. Kumar (2014). Spatio-temporal data mining for climate data: Advances, challenges, and opportunities. In W. Chu (Ed.), *Data Mining and Knowledge Discovery for Big Data*, pp. 83–116. Springer. Fasshauer, G. E. (2007). *Meshfree Approximation Methods with Matlab*, Volume 6. World Scientific. Fuselier, E. and G. B. Wright (2012). Scattered data interpolation on embedded submanifolds with restricted positive definite kernels: Sobolev error estimates. *SIAM Journal on Numerical Analysis 50* (3), 1753–1776. Fuselier, E. J. and G. B. Wright (2009). Stability and error estimates for vector field interpolation and decomposition on the sphere with RBFs. *SIAM Journal on Numerical Analysis 47* (5), 3213–3239. Hofmann, T., B. Schölkopf, and A. J. Smola (2008). Kernel methods in machine learning. *The Annals of* Statistics 36 (3), 1171–1220. Hubbert, S., Q. T. Lê Gia, and T. M. Morton (2015). Spherical Radial Basis Functions, Theory and Applications. Springer. Hubbert, S., E. Porcu, C. J. Oates, and M. Girolami (2023). Sobolev spaces, kernels and discrepancies over hyperspheres. *Transactions in Machine Learning Research. Accepted*. Hutchinson, M., A. Terenin, V. Borovitskiy, S. Takao, Y. Teh, and M. Deisenroth (2021). Vector-valued Gaussian processes on Riemannian manifolds via gauge independent projected kernels. Advances in Neural Information Processing Systems 34, 17160–17169. Illian, J., A. Penttinen, H. Stoyan, and D. Stoyan (2008). Statistical Analysis and Modelling of Spatial Point Patterns. John Wiley & Sons. Jeong, J., M. Jun, and M. G. Genton (2017). Spherical process models for global spatial statistics. Statistical Science 32 (4), 501–513. Jones, R. H. (1963). Stochastic processes on a sphere. *The Annals of Mathematical Statistics 34* (1), 213–218. Kazawa, H., T. Izumitani, H. Taira, and E. Maeda (2005). Maximal margin labeling for multi-topic text categorization. In *Advances in Neural Information Processing Systems 17*. MIT Press. Lang, A. and C. Schwab (2013). Isotropic Random Fields on the Sphere: Regularity, Fast Simulation and Stochastic Partial Differential Equations. *The Annals of Applied Probabilty 25*, 3047–3094. Lawrence, N., M. Seeger, and R. Herbrich (2002). Fast sparse Gaussian process methods: The informative vector machine. *Advances in Neural Information Processing Systems 15*. Magnus, W., F. Oberhettinger, and R. Soni (1966). Formulas and Theorems for the Special Functions of Mathematical Physics. Berlin: Germany: Springer-Verlag. Marques, R., C. Bouville, and K. Bouatouch (2022). Gaussian process for radiance functions on the sphere. In *Computer Graphics Forum*, Volume 41, pp. 67–81. Wiley Online Library. Marques, R., C. Bouville, M. Ribardière, L. P. Santos, and K. Bouatouch (2013). A spherical Gaussian framework for Bayesian Monte Carlo rendering of glossy surfaces. IEEE Transactions on Visualization and Computer Graphics 19 (10), 1619–1632. Muller, C. (1966). *Spherical Harmonics. Lecture Notes in Mathematics*. Springer Verlag. Oleson, J. J., N. Kumar, and B. J. Smith (2013). Spatiotemporal modeling of irregularly spaced aerosol optical depth data. *Environmental and Ecological Statistics 20* (2), 297–314. Olver, F. W., D. M. Lozier, R. F. Boisvert, and C. W. Clark (2010). NIST Handbook of Mathematical Functions. Cambridge: Cambridge University Press, Cambridge. Osborne, M. A., S. J. Roberts, A. Rogers, S. D. Ramchurn, and N. R. Jennings (2008). Towards realtime information processing of sensor network data using computationally efficient multi-output Gaussian processes. In *2008 International Conference on Information Processing in Sensor Networks (IPSN 2008)*, pp. 109–120. Institute of Electrical and Electronics Engineers. Porcu, E., A. Alegria, and R. Furrer (2018). Modeling temporally evolving and spatially globally dependent data. *International Statistical Review 86* (2), 344–377. Porcu, E., M. Bevilacqua, and M. G. Genton (2016). Spatio-temporal covariance and cross-covariance functions of the great circle distance on a sphere. *Journal of the American Statistical Association 111* (514), 888–898. Porcu, E., M. Bevilacqua, R. Schaback, and C. J. & Oates (2023). The Matérn model: A journey through statistics, numerical analysis and machine learning. *arXiv preprint arXiv:2303.02759* . Porcu, E., R. Furrer, and D. Nychka (2021). 30 years of space–time covariance functions. *WIREs Computational Statistics 13* (2), e1512. Porcu, E., P. A. White, and M. G. Genton (2023). Stationary nonseparable space-time covariance functions on networks. *Journal of the Royal Statistical Society, B To Appear*. Quinonero-Candela, J. (2004). *Learning with uncertainty: Gaussian processes and relevance vector machines*. Ph. D. thesis, Technical University of Denmark Lyngby, Denmark. Rosa, P., V. Borovitskiy, A. Terenin, and J. Rousseau (2023). Posterior contraction rates for Matérn Gaussian processes on Riemannian manifolds. *arXiv preprint arXiv:2309.10918* . Sánchez, L. K., X. Emery, and S. A. Séguret (2019). 5D geostatistics for directional variables: Application in geotechnics to the simulation of the linear discontinuity frequency. *Computers & Geosciences 133*, 104325. Sánchez, L. K., X. Emery, and S. A. Séguret (2021). Geostatistical modeling of rock quality designation (RQD) and geotechnical zoning accounting for directional dependence and scale effect. *Engineering Geology 293*, 106338. Seeger, M. (2004). Gaussian processes for machine learning. *International Journal of Neural Systems 14* (02), 69–106. Shirota, S. and A. E. Gelfand (2017). Space and circular time log Gaussian Cox processes with application to crime event data. *The Annals of Applied Statistics 11* (2), 481–503. Tompson, L., S. Johnson, M. Ashby, C. Perkins, and P. Edwards (2015). Uk open source crime data: Accuracy and possibilities for research. *Cartography and geographic information science 42* (2), 97–111. Wang, S., J. Cao, and P. Yu (2022). Deep learning for spatiotemporal data mining: A survey. IEEE Transactions on Knowledge and Data Engineering 34 (8), 3681–3700. Wingeier, B. M., P. L. Nunez, and R. B. Silberstein (2001). Spherical harmonic decomposition applied to spatial-temporal analysis of human high-density electroencephalogram. *Physical Review E 64* (5), 051916. Wu, S., X. Li, W. Dong, S. Wang, X. Zhang, and Z. Xu (2022). Multi-source and heterogeneous marine hydrometeorology spatio-temporal data analysis with machine learning: a survey. *World Wide Web 26* (3), 1115–1156. Xing, W., F. Yu, P. Leung, X. Li, P. Wang, and A. Shah (2021). A new multi-task learning framework for fuel cell model outputs in high-dimensional spaces. *Journal of Power Sources 482*, 228930. Yadrenko, M. I. (1983). *Spectral Theory of Random Fields*. New York: Springer-Verlag. Yaglom, A. M. (1987a). *Correlation Theory of Stationary and Related Random Functions. Volume I: Basic* Results. New York: Springer-Verlag. Yaglom, A. M. (1987b). *Correlation Theory of Stationary and Related Random Functions. Volume II:* Supplementary Notes and References. New York: Springer-Verlag. Yuan, Y., Q. Wang, and X. T. Yang (2022). Traffic flow modeling with gradual physics regularized learning. IEEE Transactions on Intelligent Transportation Systems 23 (9), 14649–14660.
Review 1: Summary: This paper characterizes the smoothness properties of vector Gaussian processes. The authors first use Hermite polynomials to construct a spectral decomposition of a matrix-valued kernel, then define a corresponding Sobolev space, then quantify the smoothness of a given kernel by investigating the $s_{j,\beta}$ functions in (14). The main result is in proposition 3.3, stating that smoothness is equivalent to summability conditions of the matrix coefficients of the spectral decomposition. Strengths and Weaknesses: Strength: Novel construction to define smoothness of vector-valued Gaussian process. Weaknesses: The construction depends on some recent references, e.g. Porcu et al. 2021. And therefore it is not introduced in a self-contained way. Requested Changes: Somewhere around sec 2, the authors should explain how smoothness is quantified for scalar Gaussian processes, so the readers can see the developments are not trivial. Overall, this paper introduces a construction but does not compare it with alternative constructions. How significant is the studied product space? Can either the sphere or the Euclidean space be zero dimension (d=0 or k=0), so that the product space includes the sphere/Euclidean space as special cases? 1.4 introduce \mathbb{Z} P4 "cov" is not introduced 3.1 "It can be namely verified that these functions ..." Add more explanations and intuition. 3.3. Explain why $(-1,1) \times R^k$ is investigated. Why the interval (-1,1) is not closed? sec 3. In the end, how smoothness is "quantified"? Is it a binary property (whether or not K belongs to the space $W_{d,k}^{\zeta,m}$)? Broader Impact Concerns: No ethical implications were identified. ================================================== Review 2: Summary: This paper addresses the issue of quantifying the smoothness of vector Gaussian processes defined over the product space $\mathbb S^d \times \mathbb R^k$. The paper considers a class of kernels over this space which are isotropic and stationary, i.e., given inputs $(x, t)$, $(x’, t’)$, the output of the kernel only depends on $\langle x,x’\rangle$ and $t-t’$. For such kernels, by leveraging the expansion in the orthonormal basis of corresponding $L^2$ space (induced by the product measure of the spherical Haar measure and the Gaussian measure), the paper relates the decay of the Fourier coefficients in this basis with appropriate Sobolev-type norms. The flavor of the result is expected and the main contribution seems to be to write everything down explicitly. Strengths and Weaknesses: Strengths: The spectral analysis is well-founded and the paper is carefully written. Weaknesses: The choice of notation, such as using $\lVert\cdot\rVert_k$ for the Euclidean norm and $\lVert\cdot\rVert_*$ for the Frobenius norm, might confuse readers familiar with the conventional usage of these notations in other contexts. Also, the reliance on external references, such as Erdélyi (1953) and Cramér's theorem, without fully integrating their details into the discussion, detracts from the paper's self-contained nature and accessibility. Requested Changes: The notation $\lVert\cdot\rVert_k$ is usually used for the $\ell_k$ norm, and $\lVert\cdot\rVert_*$ is usually used for the nuclear norm; please change these notations (it would be fine to use $\lVert\cdot\rVert$ for both, or $\lVert\cdot\rVert_{\rm F}$ or $\lVert\cdot\rVert_{\rm HS}$ for the Frobenius norm). Please make Section 2 more self-contained by formally stating and explaining the application of formula 11.4.2 from Erdélyi (1953) and Cramér's theorem within the context of your analysis. Typos: - Section 1.2: change the title from "Why Studying" to "Why Study". - In eq. (5), the last comma should be a period. - After eq. (10): "indexes" -> "indices". Broader Impact Concerns: None. ================================================== Review 3: Summary: In this paper, the authors provide characterization of the smooth properties of vector Gaussian processes defined on the product space of as unit sphere manifold and standard Euclidean space. Through a spectral representation approach (e.g., via the normalized Hermite polynomials), the authors propose to assess the smoothness by studying the corresponding matrix-valued kernel. From my personal viewpoint, the major contribution of this paper is: 1. propose to study the smoothness properties of vector Gaussian processes on product space via a spectral approach; and 2. derive precise smoothness characterizations in Section 3.4. Strengths and Weaknesses: Strengths: the paper is primarily theoretical and the theoretical results are, to the best of my knowledge, novel and interesting. Weaknesses: 1. The motivation on the study of smoothness properties, and the specific choice of product is a bit vague and unclear; e.g., more connection to applications may be helpful. 2. The paper is rather technical and the presentation can be improved. See my detailed comments below. Requested Changes: 1. why the setting of product space between unit sphere and standard Euclidean space? The authors mentioned on possible applications at the end of Section 1.2, but the connections are a bit vague and unclear. In fact, it could be to good idea to showcase the application of the proposed characterization on a concrete ML example. 2. it would be helpful to point to, when talking about the contribution of this work at the top of P3, the corresponding precise results in the paper. For example, the route is given in P5, the spectral characterization of smoothness that relies on the properties of the matrix-valued kernel is given in xxx, etc. 3. Please refer to the precise section of proof in the Appendix. A few minor issues: 1. Corollary 3.4: "one of the following measureS"? 2. what does $\ell^2$ mean in Proposition 3.3? Broader Impact Concerns: This paper is primarily theoretical and I do not see any ethical concerns. ================================================== Review 4: Summary: The contribution of this paper is to quantify precisely, the smoothness of vector Matern GPs defined over a product space, with one part consisting a $d$-sphere and the other consisting a $k$-dimensional Euclidean space. The proof follows by constructing a Karhunen-Loeve expansion with respect to a basis on a particular measure space, one part consisting the Haar measure of the sphere and the other part consisting a Gaussian measure on $\mathbb{R}^k$, then bounding the coefficients in such a way that the kernel belongs to a particular Sobolev-space of matrix-valued functions defined over the product space. The authors provide an explicit bound on said coefficients. Strengths and Weaknesses: Strenghts: - The presentation is relatively clear, with the "route to smoothness" helping the readers understand the proof process, despite the technicalities. - While I haven't been able to check every minute detail of the proof, the logic is presented clearly and the arguments are mathematically sound. Weakness: - The space that is considered in this paper is oddly specific. The authors simply list a bunch of works where the particular product space is used but making this more explicit (e.g. giving an example in one of these works cited) would help us understand better the value of the paper. - The work feels incomplete. If I understand correctly, the authors are interested in characterising the smoothness of the Gaussian sample paths but the proof stops at proving the smoothness of the kernel. Some comments or extra arguments on how to go from the kernel regularity to the sample path regularity is necessary in my opinion. Requested Changes: I think the work doesn't fully resolve what it sets out to do, which is to characterise the smoothness of Gaussian processes (in my understanding, this means sample paths). Some extra arguments to go from the kernel smoothness to sample path smoothness is necessary. Broader Impact Concerns: This paper is mathematical in nature and there are no concerns regarding ethical implications. ================================================== Metareview: Recommendation: Accept as is Comment: The paper is primarily theoretical and the theoretical results seem to be novel and interesting. Strengths: - Novel construction to define smoothness of vector-valued Gaussian process. - The spectral analysis is well-founded and the paper is carefully written. - Derivation of precise smoothness characterizations in Section 3.4. - The presentation is relatively clear, with the "route to smoothness" helping the readers understand the proof process, despite the technicalities. - The logic is presented clearly, and the arguments are mathematically sound. Virtually all weaknesses have been addressed after a revision. ==================================================
# The Sparse Matrix-Based Random Projection: An Analysis Of Matrix Sparsity For Classification Anonymous authors Paper under double-blind review ## Abstract In the paper, we study the sparse {0, ±1}-matrix based random projection, which has been widely applied in classification to reduce the data dimension. For the problem, it is interesting to estimate the optimal sparsity of sparse matrices for classification, namely the minimum number of nonzero entries ±1 that supports achieving the best classification performance. To achieve this, we analyze the impact of matrix sparsity on the `1 distance between projected data points. By principle component analysis, it is known that the larger distance between projected data points should better capture the variation among original data, and then yield better classification performance. Theoretically, the `1 distance between projected data points is not only related to the sparsity of sparse matrices, but also to the distribution of original data. Without loss of generality, we evaluate two typical data distributions, the Gaussian mixture distribution and the two-point distribution, which have been widely used to model the distributions of real data. Given the two data distributions, it is proved that the maximum `1 distance between projected data points could be achieved, as the sparse matrix contains only one or at most about twenty nonzero entries per row, under the size m ≥ O( √n). Accordingly, the best classification performance should also be achieved under such conditions. This is confirmed with extensive experiments on different types of data, including the image, text, gene and binary quantization data. ## 1 Introduction Random projection is an important unsupervised dimensional reduction technique that simply projects highdimensional data to low-dimensional subspaces by multiplying the data with random matrices (Johnson & Lindenstrauss, 1984). The projection can approximately preserve the pairwise `2 distance between original data points, or say preserving the structure of original data, thus applicable to classification (Bingham & Mannila, 2001; Fradkin & Madigan, 2003; Wright et al., 2009). To achieve the `2 distance preservation property, random projection matrices need to follow certain distributions, such as Gaussian matrices (Dasgupta & Gupta, 1999) and sparse {0, ±1}-ternary matrices (shortly called sparse matrices hereafter) (Achlioptas, 2003). In practical applications, sparse matrices are preferred for its much lower complexity both in storage and computation. Considering random projection is often applied to computationally-intensive large-scale classification tasks, it is highly desirable to minimize its complexity. For this purpose, we propose to estimate the optimal sparsity of sparse matrices for classification, more precisely, estimating the minimum number of nonzero entries ±1 that allows the projected data to provide the best classification performance. To the best of our knowledge, no previous study has investigated the problem. Existing research on random projection is mainly devoted to exploring the distribution of random matrices that well holds the distance preservation property, more precisely, keeping the *expectation* of the pairwise distance between original data points unchanged after random projection and rendering the *variance* relatively small (Dasgupta & Gupta, 1999; Achlioptas, 2003). For the sparse matrix with entries properly scaled, it has been proved that the distance preservation property holds in `2 norm (Achlioptas, 2003; Li et al., 2006), but not in `1 norm (Brinkman & Charikar, 2003; Li, 2007). Here it is noteworthy that although the `2 distance preservation property enables random projection to be applied in classification, it can hardly be used to analyze the impact of the sparsity of sparse matrices on the follow-on classification, since the classification accuracy depends on the discrimination between projected data points, rather than the invariance of data structure. For instance, it has been proved that the `2 distance preservation property tends to become worse as the matrix becomes sparser (Li et al., 2006), namely containing fewer nonzero entries ±1. However, empirically, it is observed that the sparser matrix structure does not mean a worse classification performance; and in contrast the best classification performance can be achieved with very sparse matrices, such as the ones with only one nonzero entry per row. In the paper, we show that the intriguing performance can be explained by analyzing the *variation* of the `1 distance between projected data points. By the early research of principle component analysis (PCA) (Jolliffe, 2002), it is known that the projection over a *larger* principle component will yield *larger* pairwise distances (equivalently, larger variances) for projected data points, while the larger distance tends to *better* capture the variation (i.e. statistical information) of original data (Jolliffe & Cadima, 2016), and then provide *better* classification performance (Turk & Pentland, 1991). In the sparse matrix based random projection, the `1 distance between projected data points is related not only to the sparsity of random matrices, but also to the distribution of original high-dimensional data. To estimate the optimal matrix sparsity that leads to the maximum `1 distance between projected data points, we need to first model the distribution of original data. Without loss of generality, we consider two typical data distributions, the Gaussian mixture distribution and the two-point distribution. The former has been widely used to model the distribution of natural data (Torralba & Oliva, 2003; Weiss & Freeman, 2007) or their sparse transforms (Wainwright & Simoncelli, 1999; Lam & Goodman, 2000), while the latter can be used to model the distribution of binary data, such data often occurring in various quantization tasks (Gionis et al., 1999; Hubara et al., 2016; Yang et al., 2019). With the two general distributions, as shown later, our theoretical analysis and estimation offer highly consistent results with the experiments on real data. Given the two data distributions, by varying the sparsity of sparse matrices, we estimate the *expected* `1 distance between projected data points and observe the following two results: 1) The maximum distance tends to be achieved by the sparse matrices with only one nonzero entry per row, as the difference vector between two original data contains a sufficient number of nonzero entries, namely having a sufficiently dense distribution; 2) otherwise, the maximum distance tends to be reached by the matrices with at most about twenty nonzero entries per row. To summarize, the two results imply that the expected `1 distance between projected data points tends to reach its maximum value, as sparse matrices contain only one or at most about twenty nonzero entries per row. Moreover, we prove that the *expected* `1 distance can be approximated with high probability by a single matrix sample, if the m × n-sized matrix has m ≥ O( √n). This suggests that in practical terms, a random matrix sample with the sparsity and size described above probably yields the maximum `1 distance between projected data points, and then results in the best classification performance. The performance is fully verified by conducting extensive experiments on a variety of real data, including the image, text, gene and binary quantization data. The major contributions of the work can be summarized as follows: - For the sparse {0, ±1}-matrices based random projection, we for the first time estimate the optimal sparsity of sparse matrices for classification, by analyzing random projection from the viewpoint of *distance variation* rather than the conventional *distance preservation*. The proposed analysis is inspired by the early research of PCA (Jolliffe & Cadima, 2016; Turk & Pentland, 1991), that is the larger distance between projected data points should better account for the variation among original data and then yield better classification performance. - Theoretically, we demonstrate that the optimal classification performance should be achieved by the sparse matrix with only one or at most about twenty nonzero entries per row, under the size m ≥ O( √n), if the original data has the Gaussian mixture distribution or two-point distribution. The estimated optimal matrices exhibit very sparse structures, implying significant *savings* both in computation and storage. - Empirically, we find that the theoretical estimations about the optimal matrix size and sparsity are highly consistent with the experimental results on real data. The *consistency* between theory and practice benefits from the aforementioned two general distributions we have assumed for the original data, which can well approximate the distributions of real data of different types (Torralba & Oliva, 2003; Weiss & Freeman, 2007; Wainwright & Simoncelli, 1999; Lam & Goodman, 2000). ## 2 Problem Formulation Consider the random projection of two data points h, h 0 ∈ R n over a sparse random matrix R ∈ {0, ±1} m×n. For the matrix R, we aim to estimate its optimal sparsity for classification, namely, the minimal number of nonzero entries ±1 that allows to maximize the expected `1 distance EkRxk1 between the projections of h and h 0, where x = h − h 0. As discussed before, the maximum of the expected `1 distance EkRxk1 is expected to provide the best classification performance. To determine the optimal sparsity, we need to estimate the changing trend of the expected `1 distance EkRxk1 against the varying sparsity of R. It is seen that the estimation depends on the distributions of the matrix R and the data h. In the following, we specify their distributions prior to introducing the estimation model. Notation. Throughout the work, we typically denote a matrix by a bold upper-case letter R ∈ R m×n, a vector by a bold lower-case letter r = (r1, r2*, ..., r*n) > ∈ R n, and a scalar by a lower-case letter ri or r. Sometimes, we use the bold letter ri ∈ R n to denote the i-th row of R ∈ R m×n. For ease of presentation, we defer all proofs to Appendix A. ## 2.1 The Distribution Of Sparse Matrices The sparse random matrix R we aim to study is specified in Definition 1, which has the parameter k counting the number of nonzero entries per row, and is simply called k-sparse to distinguish between the matrices of different sparsity. Instead of the form R ∈ {0, ±1} m×n p , in the definition we introduce a scaling parameter n mk to make the matrix entries have zero mean and unit variance. With this distribution, the matrix will hold the `2 distance preservation property, that is, keeping the expected `2 distance between original data points unchanged after random projection (Achlioptas, 2003). Note that the scaling parameter can be omitted in practical applications for easier computation; and the omitting will not change the relative distances between projected data points, thus not affecting the follow-up classification performance. Definition 1 (k-sparse random matrix). A k-sparse random matrix R ∈ {0, ±p n mk } m×n is defined to be of the following structure properties: - its each row vector r ∈ {0, ±p n mk } n contains exactly k nonzero entries, 1 ≤ k ≤ n; - the positions of k nonzero entries are arranged uniformly at random; - each nonzero entry takes the bipolar values ±p n mk with equal probability. ## 2.2 The Distribution Of Original Data For the original high-dimensional data h ∈ R n, as discussed before, we investigate two typical distributions, the two-point distribution and the Gaussian mixture distribution. Considering the expected `1 distance EkRxk1 is directly related to the pairwise difference x between two original data h and h 0, namely x = h − h 0 = (x1, x2*, . . . , x*n) >, we describe the distribution of x for the original data h with the following two given distributions. ## 2.2.1 Two-Point Distribution Suppose that the two high-dimensional data h, h 0 ∈ {µ1, µ2} n have their each entry independently following a two-point distribution, where µ1 and µ2 are two arbitrary constants. Then the difference x between h and h 0 has its each entry xiindependently following a ternary discrete distribution $$x_{i}\sim{\mathcal{T}}(\mu,p,q)$$ xi ∼ T (*µ, p, q*) (1) with the probability mass function t ∈ {−µ, 0, µ} under the probabilities {*q, p, q*}, where µ = µ1 − µ2 and p + 2q = 1. $$(1)$$ ## 2.2.2 Gaussian Mixture Distribution When the two data h, h 0 ∈ R n have their each entry independently following a Gaussian mixture distribution, the difference x = h − h 0remains a Gaussian mixture (Andrews & Mallows, 1974), which allows each entry xi to be modeled as xi ∼ M(µ, σ2*, p, q*) (2) with the probability density function $$x_{i}\sim{\mathcal{M}}(\mu,\sigma^{2},p,q)$$ $$\left(2\right)$$ $$f(t)=p f_{\mathcal{N}}(t;0,\sigma^{2})+q f_{\mathcal{N}}(t;\mu,\sigma^{2})+q f_{\mathcal{N}}(t;-\mu,\sigma^{2})$$ $$\left({\mathbf{3}}\right)$$ f(t) = pfN (t; 0, σ2) + qfN (t; *µ, σ*2) + qfN (t; −*µ, σ*2) (3) where fN (t; *µ, σ*2) denotes the density function of t ∼ N (*µ, σ*2), and the parameters are subject to p, q ≥ 0 and p + 2q = 1. ## 2.3 The `1 **Distance Estimation Model** With the distributions defined for the original data points h, h 0 ∈ R n and the k-sparse random matrix R ∈ {0, ±p n mk } m×n, we can analyze the changing trend of the expected `1 distance EkRxk1 (with x = h − h 0) against the varying matrix sparsity k, and identify the sparsity k that maximizes the value of EkRxk1. Note that we have EkRxk1 = mE|r >x|, since each row r ∈ R n of R follows an independent and identical distribution by Definition 1. This equivalence suggests that E|r >x| will share the same changing trend with EkRxk1, when varying k. Then for ease of analysis, instead of EkRxk1, in the following we choose to investigate the changing of E|r >x| against k, and determine the k that maximizes E|r >x|. ## 3 The `1 **Distance Estimation With Two-Point Distributed Data** In this section, we consider the original data h with two-point distributions, for which the data difference x = h − h 0 has i.i.d. entries xi ∼ T (*µ, p, q*) as specified in (1). Given such data, we estimate the value of E|r >x| against the varying matrix sparsity k, and identify the value of k that leads to the maximum E|r >x|. ## 3.1 Theoretical Analysis Theorem 1. Let r be a row vector of a k-sparse random matrix R ∈ {0, ±p n mk } m×n, and x ∈ R n with i.i.d. entries xi ∼ T (*µ, p, q*). It can be derived that $$\mathbb{E}[\mathbf{r}^{\mathsf{T}}\mathbf{x}]=2\mu{\sqrt{\frac{n}{m k}}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\left[{\frac{k-i}{2}}\right]C_{k-i}^{[{\frac{k-i}{2}}]}$$ and $$\quad(4)$$ $$\mbox{Var}(|\mathbf{r}^{\top}\mathbf{x}|)=\frac{2q\mu^{2}n}{m}-\frac{4\mu^{2}n}{mk}\left(\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\left[\frac{k-i}{2}\right]}\right)^{2}$$ ial coefficient $\binom{k}{\alpha}$ and $\lceil\alpha\rceil=\min\left\{\beta:\beta\geq\alpha,\beta\in\mathbb{Z}\right\}$. By (4), $\mathbf{E}\lceil\mathbf{r}^{\top}\mathbf{x}\rceil$ satisfies $$\quad(5)$$ where C i k is a binomial coefficient k i and dαe = min{β : β ≥ *α, β* ∈ Z}. By (4), E|r >x| satisfies the following two properties: (P1) When $p\leq0.188$, $\mathbb{E}|\pmb{r}^{\top}\pmb{x}|$ has its maximum at $k=1$. $$\mathrm{(P2)}\ \operatorname*{lim}_{k\rightarrow\infty}{\frac{\sqrt{m}}{\mu{\sqrt{n}}}}\mathrm{E}|\mathbf{r}^{\top}\mathbf{x}|=2{\sqrt{q/\pi}}.$$ In P1 and P2, we derive two results about the changing trend of E|r >x| against the varying matrix sparsity k. By P1, E|r >x| should achieve its maximum value at k = 1, as the probability p of xi = 0 is sufficiently small (≤ 0.188), namely, the difference x between data points exhibits sufficiently dense distributions; and by P2, E|r >x| will converge to a stable value as k tends to infinity. To more closely examine the changing trend, in the following numerical analysis, we directly calculate the value of E|r >x| by (4), and observe that: 1) the upper bound of p (shown in P1) that guarantees the ![4_image_0.png](4_image_0.png) Figure 1: The value of E|r >x|/(µpn/m) calculated by (4) with p = 1/3 (a) and p = 2/3 (b), and estimated by statistical simulation with p = 1/3 (c) and p = 2/3 (d), provided xi ∼ T (*µ, p, q*), µ = 1. maximum of E|r >x| to be achieved at k = 1 can be further relaxed; 2) the maximum E|r >x| can be achieved at about k = 20, if the upper bound of p is violated; and 3) the speed of the convergence (shown in P2) is fast. ## 3.2 Numerical Analysis By computing (4), we derive the values of E|r >x|/(µpn/m) against varying k in Figs. 1(a) and (b). Here we choose to study E|r >x|/(µpn/m) instead of E|r >x|, mainly because both of them present the same changing trend against varying k, but the former has fewer parameters, i.e. only involving k and p. Recall that p is the probability of xi = 0 in (1). By varying the continuous value of p ∈ (0, 1) with fine steps and increasing the integer value of k from 1 to 500, it is observed that: (P3) When p ≤ 1/3, such as the case of p = 1/3 shown in Fig. 1(a), E|r >x|/(µpn/m) tends to achieve its largest value at k = 1. (P4) When p > 1/3, such as the case of p = 2/3 illustrated in Fig. 1(b), E|r >x|/(µpn/m) inclines to first increase with k, and then quickly reach a stable level after about k = 20. Note that with the increasing of p, the varying of E|r >x|/(µpn/m) at each k is continuous; and here we only illustrate the results of p = 1/3 and 2/3 for brevity. Compared to the theoretical analysis results P1 and P2, the numerical analysis results P3 and P4 are more positive. Specifically, P3 relaxes the upper bound of p from ≤ 0.188 to ≤ 1/3. This suggests that E|r >x| will achieve its maximum value at k = 1, when each entry xi of x takes nonzero values with a probability greater than 2/3, namely, the difference vector x between data points has sufficiently dense distributions. Otherwise, by P4, E|r >x| will reach its upper bound at about k = 20. *Therefore, we can reach the conclusion that* E|r >x| *tends to reach its maximum value at* k = 1 *or at most about* k = 20, for the original data with two-point distributions. Besides the expectation E|r >x| of pairwise distances as discussed above, the variance Var(|r >x|) of pairwise distances derived in (5) is also a factor that may affect the classification performance. By computing (5), interestingly, we find that with the increasing of k, Var(|r >x|) exhibits a changing trend opposite to that of E|r >x|. In other words, the larger expectation corresponds to the smaller variance. Considering both larger expectations and smaller variances are favorable to classification, we can say that the two factors are consistent in estimating the classification performance change. ## 3.3 Statistical Simulation To validate the numerical analysis results P3 and P4, we here investigate the changing trend of E|r >x|/(µpn/m) against varying k by statistical simulation. In the simulation, we randomly generate 106 pairs of r and x from their respective distributions, i.e. r ∈ {0, ±p n mk } n with k nonzero entries randomly distributed, and x with i.i.d. xi ∼ T (*µ, p, q*). Then, the average value of |r >x|/(µpn/m) is derived as the final estimate of E|r >x|/(µpn/m). The parameters for the distributions of r and x are set as follows: m = 1, n = 104, µ = 1, and p = 1/3 or 2/3. The data dimension n = 104 allows us to increase k from 1 to 104. The average value of |r >x|/(µpn/m) at each k is provided in Figs. 1(c) and (d), respectively for the cases of p = 1/3 and 2/3. Note that the choices of m, n and µ will not affect the changing trend of E|r >x|/(µpn/m) regarding k. Comparing the numerical results and simulation results provided in Fig. 1, namely (a) vs. (c) and (b) vs. (d), it is seen that the two kinds of results exhibit similar changing trends for E|r >x|/(µpn/m). The similarity between them validates the numerical analysis results P3 and P4. Moreover, it is noteworthy that what we estimate is an *expected* distance EkRxk1 (equivalently, mE|r >x|), rather than the actual distance kRxk1 we will derive with a given matrix sample. To approximate the expected distance, by Lemma 1 the actual matrices should have the size of m ≥ O( √n). Lemma 1. Let ri be the i-th row of a k-sparse random matrix R ∈ {0, ±p n mk } m×n, and x ∈ R n with i.i.d. entries xi ∼ T (*µ, p, q*). Suppose z = 1 m kRxk1 = 1 m Pm i=1 |r > i x|. For arbitrary small *ε, δ >* 0, we have the probability Pr{|z − Ez| ≤ ε} ≥ 1 − δ, if m2 m+1 ≥ qµ2n ε 2δ ; and the condition can be relaxed to m2 ≥ 2qµ2n ε 2δ , for a given x. ## 4 The `1 **Distance Estimation With Gaussian Mixture Data** Similarly as in the previous section, we here estimate the value of E|r >x| against the varying matrix sparsity k, and identify the value of k that leads to the maximum E|r >x| for the original data drawn from Gaussian mixture distributions. For such data, the data difference x = h − h 0 has i.i.d. entries xi ∼ M(µ, σ2*, p, q*), as detailed in (2). ## 4.1 Theoretical Analysis Theorem 2. Let r be a row vector of a k-sparse random matrix R ∈ {0, ±p n mk } m×n, and x ∈ R n with i.i.d. entries xi ∼ M(µ, σ2*, p, q*). It can be derived that $$\mathbb{E}|\mathbf{r}^{\top}\mathbf{x}|=2\mu\sqrt{\frac{n}{mk}}T_{1}+\sigma\sqrt{\frac{2n}{\pi m}}T_{2}-2\mu\sqrt{\frac{n}{mk}}T_{3}\tag{6}$$ $$T_{1}=\sum_{i=0}^{k}C_{ki}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\left[\frac{k-i}{2}\right]}$$ $$T_{2}=\sum_{i=0}^{k}C_{ki}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{kj-i}^{i}e^{-\frac{(k-i-2j)^{2}\mu^{2}}{2k\sigma^{2}}}$$ $$T_{3}=\sum_{i=0}^{k}C_{ki}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{k-i}^{i}\Phi\left(-\frac{|k-i-2j|\mu}{\sqrt{k}\sigma}\right)$$ $$\left(7\right)$$ and $$\operatorname{Var}(|\mathbf{r}^{\mathsf{T}}\mathbf{x}|)={\frac{n}{m}}(\sigma^{2}+2q\mu^{2})-\left(\mathbf{E}|\mathbf{r}^{\mathsf{T}}\mathbf{x}|_{1}\right)^{2}$$ 2(7) where Φ(·) is the distribution function of N (0, 1). Further, we have $$\mathbb{E}|\mathbf{r}^{\top}\mathbf{x}|\leq\mu{\sqrt{\frac{n}{m}}}+\sigma{\sqrt{\frac{2n}{\pi m}}}$$ and $$\operatorname*{lim}_{k\rightarrow\infty}{\frac{\sqrt{m}}{\mu\sqrt{n}}}\mathbf{E}|\mathbf{r}^{\top}\mathbf{x}|={\sqrt{\frac{2}{\pi}}}(\sigma^{2}+2q\mu^{2}).$$ $$({\boldsymbol{8}})$$ $$({\mathfrak{g}})$$ ![6_image_0.png](6_image_0.png) Figure 2: The value of E|r >x|/pn/m calculated by (6) with p = 1/2 (a) and p = 2/3 (b), and estimated by statistical simulation with p = 1/2 (c) and p = 2/3 (d), provided xi ∼ M(*p, q, µ, σ*2), µ = 1 and σ = 1/3. The theorem provides the expression of E|r >x| in (6) and proves its convergence in the limit of k in (9). By the following numerical analysis and simulation on (6), we will see that E|r >x| is able to reach its maximum value at k = 1 or at most about k = 20. ## 4.2 Numerical Analysis In Figs. 2(a) and (b), by computing (6) we derive the value of E|r >x|/pn/m across different k. It is seen that E|r >x|/pn/m involves four parameters, k, p, µ, and σ. During computation, we fix µ = 1 and vary other parameters in the ranges of σ/µ ∈ (0, 1/3), p ∈ (0, 1) and k ∈ [1, 500]. For easy simulation, we upper bound σ/µ by 1/3 in view of the fact that σ/µ is usually not large for real data, while larger bounds empirically also work. Empirically, the changing trend of E|r >x|/pn/m is not sensitive to σ/µ, but sensitive to p, the probability of each entry xi of the data difference x taking zero value. More precisely, it is observed that (P5) When p ≤ 1/2, such as the case of p = 1/2 and σ/µ = 1/3 shown in Fig. 2(a), E|r >x|/pn/m tends to first *decrease* with increasing k and then quickly reach a stable state after about k = 20. (P6) When p > 1/2, such as the case of p = 2/3 and σ/µ = 1/3 shown in Fig. 2(b), E|r >x|/pn/m shows the trend of first *increasing* with k, and then reaching stable after about k = 20. By P5, the maximum E|r >x| can be achieved at k = 1, when each entry xi of x takes nonzero values with a probability greater than 1/2, or say, the data difference vector x has sufficiently dense distributions. Otherwise, by P6, the maximum E|r >x| can be obtained at about k ≥ 20. *To sum up, the two cases imply* that E|r >x| *will reach its maximum value at* k = 1 *or at most about* k = 20*, for Gaussian mixture data with* pairwise difference xi ∼ M(µ, σ2*, p, q*). It is noteworthy that the property is similar to the one we have derived in P3 and P4 for the two-point distributed data with xi ∼ T (*µ, p, q*). The similarity is not surprising since xi ∼ T (*µ, p, q*) can be viewed as an extreme case of xi ∼ M(µ, σ2*, p, q*) with σ → 0. Thanks to the good generalization capability of Gaussian mixture model, as will be seen in our experiments, the properties analyzed above hold for a variety of real data. Again note that we should have the matrix row size m ≥ O( √n), such that the actual distance kR>xk1 computed with a single random matrix sample can approximate the expected distance EkR>xk1 (equivalently mE|r >x|) derived with (6). The analysis is similar to Lemma 1, thus omitted here. ## 4.3 Statistical Simulation Now we are in the position to verify P5 and P6 by statistical simulation. Specifically, we estimate the value of E|r >x|/pn/m by drawing 106 pairs of x and r from their respective distributions and then computing the average of kr >xk1/pn/m as the estimate. The parameters of the distributions of x and r are set as follows: m = 1, n = 10000, µ = 1, σ = 1/3 and p = 1/2 or 2/3. The data dimension n = 10000 allows us to vary k from 1 to 10000. The average value of |r >x|/pn/m at each k is presented in Figs. 2(c) and (d), which have p = 1/2 and 2/3, respectively. Comparing the numerical analysis and simulation results shown in Fig. 2, namely (a) vs. (c) and (b) vs. (d), it can be seen that both the two results are consistent with each other. The consistency validates the numerical analysis results P5 and P6. ## 5 Experiments For the high-dimensional data with Gaussian-mixture distributions and two-point distributions, we have proved that the maximum `1 distance between their projections tends to be achieved by the sparse matrices with size m ≥ O( √n) and with exactly one or at most about twenty nonzero entries per row. Also, the best classification performance should be achieved by such matrices, in terms of the discrimination of large pairwise distances. Considering many real data and their binary quantization can be approximately modeled respectively by Gaussian-mixture distributions and two-point distributions, in this section we aim to prove that the sparse matrix with the size and sparsity described above indeed performs best in practical classification problems. ## 5.1 Data Without loss of generality, we evaluate four different types of data, including the image dataset YaleB (Georghiades et al., 2001; Lee et al., 2005), the text dataset Newsgroups (Joachims, 1997), the gene dataset AMLALL (Golub et al., 1999) and binary image dataset MNIST (Deng, 2012). The former three kinds of data can be modeled by Gaussian mixtures, while the last one belongs to the two-point distribution. The data settings are introduced as follows. YaleB contains 40 × 30-sized face images of 38 persons, with about 64 faces per person. Newsgroups consists of 20 categories of 3000-dimensional text data, with 500 samples per category. AMLALL contains 25 samples taken from patients suffering from acute myeloid leukemia (AML) and 47 samples from patients suffering from acute lymphoblastic leukemia (ALL), with each sample expressed with a 7129-dimension gene vector. MNIST involves 10 classes of 28 × 28-sized handwritten digit images in MNIST, with 6000 samples per class and with each image pixel 0-1 binarized. Note here we reduce the dimension of the data in YaleB and Newsgroups for easy simulation, and this will not influence our comparative study. ## 5.2 Implementation The random projection based classification is implemented by first multiplying original data with k-sparse random matrices and then classifying the resulting projections with a classifier. To faithfully reflect the impact of the varying data distance on classification, we adopt the simple nearest neighbor classifier (NNC) (Cover & Hart, 1967) for classification, which has performance absolutely dependent on the pairwise distance between data points, without involving extra operations to improve data discrimination. In fact, our optimal estimation could also be verified with other more sophisticated classifiers, like SVMs (Cortes & Vapnik, 1995), see Appendix B. For each dataset, we will enumerate all possible class pairs in it to perform binary classification. In each class, we have one half of samples randomly selected for training and the rest for testing. To suppress the instability of random matrices and obtain relatively stable classification performance, as in (Bingham & Mannila, 2001), we repeat the random projection-based classification 5 times for each sample and make the final classification decision by vote. For comparison, the performance of the Gaussian matrix based random projection is provided. Although our optimal matrix is estimated with `1 distance, we also test and verify its performance advantage on the popular `2 distance. ![8_image_0.png](8_image_0.png) Figure 3: Classification accuracy of the sparse matrix-based and Gaussian matrices-based random projections for image data (YaleB, DCT features), with varying matrix sparsity k E [1,30], three different projection ratios m/n = 1%, 10% and 50%, and two distance metrics li and l2 ![8_image_1.png](8_image_1.png) Figure 4: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for text data (Newsgroups), with varying matrix sparsity k E [1,30], three different projection ratios m/n = 1%, 10% and 50%, and two distance metrics £1 and l2 ## 5.3 Results The classification results of four kinds of data are provided in Figs. 3-6, respectively. For each kind of data, as can be seen, we evaluate the classification performance of sparse matrices with varying sparsity k € [1, 30], 9 ![9_image_0.png](9_image_0.png) Figure 5: Classification accuracy of sparse matrix-based and Gaussian matrix-based random projections for gene data (AM- LALL), with varying matrix sparsity k E [1,30], three different projection ratios m/n = 1%, 10% and 50%, and two distance metrics &1 and l2. ![9_image_1.png](9_image_1.png) Figure 6: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for binary image data (MNIST, binarized pixels), with varying matrix sparsity k E [1,30], three different projection ratios m/n = 1%, 10% and 50%, and two distance metrics l1 and l2. three different projection ratios m/n = 1%, 10% and 50%, and two distance metrics l1 and l2. Note that the data dimensions n we test here are on the order of thousands. With such scale of n, it is easy to deduce that the condition of m ≥ O( (/n) will be satisfied as m/n = 10% and 50%, but be violated as m/n = 1%. Let us first examine the case of satisfying m ≥ O( √n), namely the cases of m/n = 10% and 50% as shown in Figs. 3–6(b)(c). It is seen that the four kinds of data all achieve their best performance with relatively small matrix sparsity k (< 30), such as with k = 1 in Fig. 3(c) and k = 15 in Fig. 4(c). But in the case of m/n = 1% which violates the condition of m ≥ O( √n), as shown in Figs. 3–6(a), the four kinds of data with an exception of AMLALL all fail to reach their top performance within k < 30. For AMLALL with m/n = 1%, as illustrated in Fig. 5(a), it fails to get the desired decreasing performance trend and performs poorly at k = 1, in contrast to the cases of m/n = 10% and m/n = 50% shown in Figs. 5(b)(c). Overall, the experimental results on four different kinds of data all verify our theoretical estimation, that is the best classification performance can be achieved by the sparse matrices with only one or at most about twenty nonzero entries per row, under the size of m ≥ O( √n). Besides the estimation of optimal matrix sparsity, the classification performance trend across varying matrix sparsity also consists with our estimation. More precisely, it can be seen from Figs. 3–6(b)(c) that the classifications of four datasets quickly converge to stable performance with the increasing matrix sparsity k. The difference between them mainly lies in the initial stage of the convergence. Specifically, as illustrated in Figs. 3(b)(c) and 5(b)(c), the convergence curves on the datasets YaleB and AMLALL both exhibit the declining trend at the initial increasing region of k, consistent with the analysis result P5 depicted in Fig. 2(a). As for the curves on the other two datasets Newsgroups and MNIST, as shown in Figs. 4(b)(c) and Figs. 6(b)(c), they both exhibit the trend of initially increasing with k, as predicted by the numerical analysis results P6 (illustrated in Fig. 2(b)) and P4 (illustrated in Fig. 1(b)). Although our theoretical analysis is based on `1 distance, we can see that the classification with `2 distance exhibits similar performance trends, through comparing the results shown in the upper row versus the bottom row of Figs. 3–6. The similar performance of the two metrics in practical classification problems has been early observed and analyzed in (Gionis et al., 1999; Figiel et al., 1977). This suggests that the optimal sparse matrices we estimate with `1 distance also perform well in the classification with `2 distance, thus serving a wide range of applications. Moreover, the experiments show that sparse matrices often perform better than Gaussian matrices. This motivates us to employ sparse matrices instead of gaussian matrices, for its advantages both in complexity and accuracy. ## 6 Conclusion For the sparse {0, ±1}-ternary matrix based random projection, we have demonstrated both in theory and practice that the best classification performance tends to be achieved by the sparse matrix with only one or at most about twenty nonzero entries per row, under the size of m ≥ O( √n). This implies that random projection can be implemented with extremely sparse matrices, which have significant advantages in storage and computation compared to the popularly used Gaussian matrices. Impressively, our theoretical estimation exhibits high consistency with the experimental evaluation on real data of different types, such as the image, text, gene and binary quantization data. The high consistency between theory and practice can be attributed to the good generalization of the two data distributions we have assumed for statistical analysis, i.e. the Gaussian mixture distribution and two-point distribution. Besides the major contribution described above, there are three other important results worth mentioning. First, the optimal sparse matrix we estimate with `1 distance also performs well for the `2 distance-based classification. The generalization can be attributed to the closeness of the two metrics, which has been found and analyzed in early research (Gionis et al., 1999; Figiel et al., 1977). Overall, this is good news in terms of the wide application of the two metrics (Philbin et al., 2008). Second, experiments show that our sparse matrices tend to provide higher classification accuracy than the popularly used Gaussian matrices. This encourages us to employ sparse matrices instead of gaussian matrices, for improvements both in complexity and accuracy. Third, our optimal matrix sparsity estimation is helpful to understand the competitive performance of deep ternary networks, which are generated by ternarizing the parameters and/or activations of full-precision networks and enjoy very sparse structures (Li et al., 2016; Zhu et al., 2017; Wan et al., 2018; Marban et al., 2020; Rokh et al., 2023). Despite suffering from significant quantization errors, interestingly, deep ternary networks usually have acceptable performance loss and sometimes can even provide performance gains. The reason for this intriguing phenomenon remains unclear. Considering deep networks can be modeled as a cascade of random projections (Giryes et al., 2016), our optimal matrix sparsity estimation for random projection-based classification can be viewed as a layerwise analysis of deep ternary networks. The very sparse ternary matrices we derive for optimal classification partly explains the performance advantage of sparse ternary networks. ## References D. Achlioptas. Database-friendly random projections: Johnson–Lindenstrauss with binary coins. J. Comput. Syst. Sci., 66(4):671–687, 2003. David F Andrews and Colin L Mallows. Scale mixtures of normal distributions. *Journal of the Royal* Statistical Society: Series B (Methodological), 36(1):99–102, 1974. Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In *Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery* and data mining, pp. 245–250, 2001. Bo Brinkman and Moses Charikar. On the impossibility of dimension reduction in `1. *Journal of the ACM*, pp. 766–788, 2003. Corinna Cortes and Vladimir Vapnik. Support-vector networks. *Machine learning*, 20(3):273–297, 1995. T. Cover and P. Hart. Nearest neighbor pattern classification. *IEEE Transactions on Information Theory*, 13(1):21–27, 1967. S. Dasgupta and A. Gupta. An elementary proof of the Johnson–Lindenstrauss lemma. *Technical Report,* UC Berkeley, (99–006), 1999. Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012. T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. Acta Mathematica, 139:53–94, 1977. Dmitriy Fradkin and David Madigan. Experiments with random projections for machine learning. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 517–522, 2003. Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE transactions on pattern analysis and machine intelligence, 23(6):643–660, 2001. Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In Proceedings of the 25th International Conference on Very Large Data Bases, 1999. Raja Giryes, Guillermo Sapiro, and Alex M Bronstein. Deep neural networks with random gaussian weights: A universal classification strategy? *IEEE Transactions on Signal Processing*, 64(13):3444–3457, 2016. T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. *Science*, 286(5439):531–537, 1999. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, 2016. Thorsten Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In *Proceedings of the Fourteenth International Conference on Machine Learning*, pp. 143–151, 1997. W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. *Contemp.* Math., 26:189–206, 1984. Ian T Jolliffe. *Principal component analysis*. Springer, 2002. Ian T Jolliffe and Jorge Cadima. Principal component analysis: a review and recent developments. *Philosophical transactions of the royal society A: Mathematical, Physical and Engineering Sciences*, 374(2065): 20150202, 2016. Edmund Y Lam and Joseph W Goodman. A mathematical analysis of the DCT coefficient distributions for images. *IEEE transactions on image processing*, 9(10):1661–1666, 2000. Kuang-Chih Lee, Jeffrey Ho, and David J Kriegman. Acquiring linear subspaces for face recognition under variable lighting. *IEEE Transactions on pattern analysis and machine intelligence*, 27(5):684–698, 2005. Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, and Junchi Yan. Ternary weight networks. *arXiv preprint* arXiv:1605.04711, 2016. P. Li, T. J. Hastie, and K. W. Church. Very sparse random projections. *in Proceedings of the 12th ACM* SIGKDD international conference on Knowledge discovery and data mining, 2006. Ping Li. Very sparse stable random projections for dimension reduction in `α (0 < α ≤ 2) norm. In *Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 2007. Arturo Marban, Daniel Becking, Simon Wiedemann, and Wojciech Samek. Learning sparse & ternary neural networks with entropy-constrained trained ternarization (EC2T). In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition Workshops, pp. 722–723, 2020. James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In *2008 IEEE conference on computer* vision and pattern recognition, pp. 1–8. IEEE, 2008. Babak Rokh, Ali Azarpeyvand, and Alireza Khanteymoori. A comprehensive survey on model quantization for deep neural networks in image classification. *ACM Transactions on Intelligent Systems and Technology*, 14(6):1–50, 2023. Pante Stˇanicˇa. Good lower and upper bounds on binomial coefficients. *JIPAM. Journal of Inequalities in* Pure & Applied Mathematics [electronic only], 2, 2001. Antonio Torralba and Aude Oliva. Statistics of natural image categories. *Network: computation in neural* systems, 14(3):391–412, 2003. M.A. Turk and A.P. Pentland. Face recognition using eigenfaces. In *Proceedings. 1991 IEEE Computer* Society Conference on Computer Vision and Pattern Recognition, pp. 586–591, 1991. Aad W Van der Vaart. *Asymptotic statistics*. Cambridge university press, 2000. Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of gaussians and the statistics of natural images. In *Proceedings of the 12th International Conference on Neural Information Processing Systems*, 1999. Diwen Wan, Fumin Shen, Li Liu, Fan Zhu, Jie Qin, Ling Shao, and Heng Tao Shen. TBN: Convolutional neural network with ternary inputs and binary weights. In *Proceedings of the European Conference on* Computer Vision (ECCV), pp. 315–332, 2018. Yair Weiss and William T Freeman. What makes a good model of natural images? In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE, 2007. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31:210–227, 2009. J. Yang, X. Shen, J. Xing, X. Tian, H. Li, B. Deng, J. Huang, and X. Hua. Quantization networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019. Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained ternary quantization. In *International* Conference on Learning Representations, 2017. ## A Appendix A.1 Proof Of Theorem 1 Proof. In the following, we sequentially prove (4), (5), P1 and P2. Proofs of (4) and (5): With the distributions of r and x, we can write kr >xk1 =p n mk µ Pk i=1 zi , where zi *∈ {−*1, 0, 1} with probabilities {*q, p, q*}. Then, it can be derived that $$\mathbb{E}[\mathbf{r}^{\mathsf{T}}\mathbf{x}]=\mu\sqrt{\frac{n}{m k}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{k-i}^{j}|k-i-2j|,$$ $$(10)$$ $$(11)$$ among which Pk−i j=0 C j k−i |k − i − 2j| can be expressed as $$\sum_{j=0}^{k-i}C_{k-i}^{i}|k-i-2j|)=2\left[\frac{k-i}{2}\right]C_{k-i}^{i}\frac{k-i}{2},$$ where $[\alpha]=\min\{\beta:\beta\geq\alpha,\beta\in\mathbb{Z}\}$. Combine (10) and (11), we can obtain (4). Next, we can derive the variance of |r >x| $$\begin{split}\text{Var}(|\boldsymbol{r}^{\top}\boldsymbol{x}|)&=\text{Var}(\boldsymbol{r}^{\top}\boldsymbol{x})-\left(\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\right)^{2}\\ &=\frac{2q\mu^{2}n}{m}-\frac{4\mu^{2}n}{mk}\left(\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\left[\frac{k-i}{2}\right]}\right)^{2}.\end{split}\tag{12}$$ $$(13)$$ Proof of P1: This part aims to prove E|r >x|k=1 > E|r >x|k>1, where the subscript k = 1 denotes the case of E|r >x| with k = 1, and the subscript k > 1 means the case of k taking any integer value greater than 1. In the following, we will calculate and compare E|r >x| in terms of the two cases. For the case of k = 1, by (4), it is easy to derive that E|r >x|k=1 = 2qµrnm . (13) Then, let us see the case of computing E|r >x|k>1. By (4), E|r >x|k>1 is the sum of √ 2 k C i k p iq k−ik−i 2 C d k−i 2 e k−i multiplied by µp nm . To compute √ 2 k C i k p iq k−ik−i 2 C d k−i 2 e k−i, we consider separately two cases: k − i is even or odd, as detailed below. Case 1: Suppose k − i is even. We have We have $ \begin{array}{l}\ \ \ \ \frac{2}{\sqrt{k}}C_k^i p^i q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{[\frac{k-i}{2}]}\\\\\ \ \ \ \ \leq\frac{1}{\sqrt{k}}C_k^i p^i q^{k-i}(k-i)2^{k-i}\sqrt{\frac{2}{(k-i)\pi}}\\\\\ \ \ \ \ \ \ \leq\sqrt{\frac{2}{\pi}}C_k^i p^i(2q)^{k-i},\end{array}$ k−i, (14) $$(14)$$ since C γ 2γ ≤ 2 2γ √γπ , where γ is a positive integer (Stˇanicˇa, 2001). Case 2: Suppose k − i is odd. We have $$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\lceil\frac{k-i}{2}\rceil}$$ $$\leq\frac{1}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}(k-i)2^{k-i}\sqrt{\frac{2}{(k-i-1)\pi}}$$ $$=\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i}\frac{k-i}{\sqrt{k(k-i-1)}}\tag{15}$$ Given k ≥ 5, we further have $${\frac{k-i}{\sqrt{k(k-i-1)}}}<1\ \ \mathrm{for}\ \ 2\leq i\leq k-2,$$ and for i = k − 1 or k, $$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\lceil\frac{k-i}{2}\rceil}<\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i}.$$ To sum up, when k − i is odd, $$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left[\frac{k-i}{2}\right]C_{k-i}^{\left[\frac{k-i}{2}\right]}$$ $$\leq\begin{cases}\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i},&k\geq5,i\geq2,\\ \frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}(k-i)C_{k-i-1}^{\frac{k-i-1}{2}},&\text{otherwise}.\end{cases}\tag{1}$$ $$(16)$$ According to the results (14) and (16) derived in the above two cases, we know that E|r >x|k>1 can be computed in terms of two cases, 2 ≤ k ≤ 4 and k ≥ 5. For the case of 2 ≤ k ≤ 4, by (4), we have $$\mathbb{E}[\boldsymbol{r}^{\top}\boldsymbol{x}]={\begin{cases}{\frac{\mu{\sqrt{n}}}{\sqrt{2m}}}(4q^{2}+4pq),&k=2,\\ {\frac{\mu{\sqrt{n}}}{\sqrt{3m}}}(12q^{3}+12pq^{2}+6p^{2}q),&k=3,\\ {\frac{\mu{\sqrt{n}}}{\sqrt{m}}}(12q^{4}+24pq^{3}+12p^{2}q^{2}+4p^{3}q),&k=4,\end{cases}}$$ $$(17)$$ $$(18)$$ and for the case of k ≥ 5, with (14) and (16), we have $$\mathbf{E}|\mathbf{r}^{\textsf{T}}\mathbf{x}|\leq\mu{\sqrt{\frac{2n}{\pi m}}}+\mu{\sqrt{\frac{n}{m}}}(2q)^{5}\left({\frac{3{\sqrt{5}}}{8}}-{\sqrt{\frac{2}{\pi}}}\right).$$ By (13), (17) and (18), we can derive that $$\operatorname{E}|\boldsymbol{r}^{\mathsf{T}}\boldsymbol{x}|_{k=1}>\operatorname{E}|\boldsymbol{r}^{\mathsf{T}}\boldsymbol{x}|_{k>1}$$ holds under the condition of p ≤ 0.188. Then P1 is proved. In what follows, we elaborate the proof of (18) by considering two cases of k, being even or odd. Case 1: Suppose k ≥ 5 and k is even. Combining (14) and (16), we have $$\mathbb{E}|\mathbf{r}^{\top}\mathbf{x}|\leq\!\mu\sqrt{\frac{n}{m}}C_{k}^{1}p(2q)^{k-1}\left(\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k}{2}-1}-\sqrt{\frac{2}{\pi}}\right)$$ $$+\mu\sqrt{\frac{2n}{\pi m}}\sum_{i=0}^{k}C_{k}^{i}p^{i}(2q)^{k-i}.\tag{1}$$ $$(19)$$ $$(20)$$ Denote h1(k) = $${\frac{h_{1}(k+2)}{h_{1}(k)}}={\frac{k+1}{\sqrt{k(k+2)}}}>1$$ $$\ni h_{1}(k)={\frac{\sqrt{k}}{2^{k-1}}}C_{k-1}^{\frac{k}{2}-1}.\,\,\,\mathrm{For}$$ we have $\quad h_1(k)=\dfrac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k}{2}-1}\leq\lim\limits_{k\to\infty}h_1(k)=\sqrt{\dfrac{2}{\pi}}.$ 20) that ... Then, it follows from (19) and (20) that $$\mathbb{E}|\mathbf{r}^{\top}\mathbf{x}|\leq\mu\sqrt{\frac{2n}{\pi m}}.\tag{1}$$ Case 2: Suppose k ≥ 5 and k is odd. Combining (14) and (16), we have $$\begin{split}\mathbb{E}|\mathbf{r}^{\top}\mathbf{x}|\leq&\mu\sqrt{\frac{n}{m}}C_{k}^{0}(2q)^{k}\left(\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}-\sqrt{\frac{2}{\pi}}\right)\\ &+\mu\sqrt{\frac{2n}{\pi m}}\sum_{i=0}^{k}C_{k}^{i}p^{i}(2q)^{k-i}.\end{split}$$ $$(21)$$ $$\quad(22)$$ Denote h2(k) = $=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}.$ For : . we have $$h_{2}(k)\qquad\qquad k+1$$ $$h_{2}(k)=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}\leq h_{2}(5)=\frac{\sqrt{5}}{2^{4}}C_{4}^{2}.$$ 3) that $${\frac{h_{2}(k+2)}{h_{2}(k)}}={\frac{\sqrt{k(k+2)}}{k+1}}<1$$ Then, it follows from (22) and (23) that $$\mathbf{E}|\mathbf{r}^{\top}\mathbf{x}|\leq\mu{\sqrt{\frac{2n}{\pi m}}}+\mu{\sqrt{\frac{n}{m}}}(2q)^{5}\left({\frac{3{\sqrt{5}}}{8}}-{\sqrt{\frac{2}{\pi}}}\right).$$ Proof of P2: For ease of analysis, we first define the function $$g(\mathbf{r}^{\mathsf{T}}\mathbf{x};k,p)={\frac{\mathbb{E}|\mathbf{r}^{\mathsf{T}}\mathbf{x}|_{k}}{\mu{\sqrt{n/m}}}}=\mathbb{E}\left|{\frac{1}{\sqrt{k}}}\sum_{i=1}^{k}z_{i}\right|,$$ $$(23)$$ $$(24)$$ where {zi} is independently and identically distributed and zi *∈ {−*1, 0, 1} with probabilities {*q, p, q*}. By the Lindeberg-Lévy Central Limit Theorem, we have $$\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\leadsto Z,\tag{1}$$ where Z ∼ N(0, 2q). Then based on (18), we have for k ≥ 5, $$\mathbb{E}\left|{\frac{1}{\sqrt{k}}}\sum_{i=1}^{k}z_{i}\right|\leq{\sqrt{\frac{2}{\pi}}}+(2q)^{5}\left({\frac{3{\sqrt{5}}}{8}}-{\sqrt{\frac{2}{\pi}}}\right).$$ It means that $$\operatorname*{lim}_{M\to+\infty}\operatorname*{lim}_{k\to+\infty}\operatorname{E}\left[\left|{\frac{1}{\sqrt{k}}}\sum_{i=1}^{k}z_{i}\right|1\left\{\left|{\frac{1}{\sqrt{k}}}\sum_{i=1}^{k}z_{i}\right|>M\right\}\right]=0.$$ Hence, √ 1 k Pk i=1 zi is an asymptotically uniformly integrable sequence. According to Theorem 2.20 in (Van der Vaart, 2000), we obtain $$\lim_{k\to+\infty}\frac{\sqrt{m}}{\mu\sqrt{n}}\mathrm{E}|\mathbf{r}^{\top}\mathbf{x}|=\lim_{k\to+\infty}\mathrm{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|$$ $$=\mathrm{E}\left|Z\right|$$ $$=2\sqrt{\frac{q}{\pi}}.$$ ## A.2 Proof Of Lemma 1 Proof. This problem can be addressed using the Chebyshev's Inequality, which requires us to first derive Ez and Var(z). Note that Ez = E( 1 m Pm i=1 |r > i x|) = E(|r > i x|) has been derived in (4). In the sequel, we need to first solve Var(z) = Ez 2 − (Ez) 2, which has $$\begin{split}\mathbb{E}z^{2}&=\mathbb{E}(\frac{1}{m}\sum_{i=1}^{m}|\mathbf{r}_{i}^{\top}\mathbf{x}|)^{2}\\ &=\frac{1}{m^{2}}\mathbb{E}(\sum_{i=1}^{m}|\mathbf{r}_{i}^{\top}\mathbf{x}|^{2})+\frac{1}{m^{2}}\mathbb{E}(\sum_{i\neq j}|\mathbf{r}_{i}^{\top}\mathbf{x}|\cdot|\mathbf{r}_{j}^{\top}\mathbf{x}|)\\ &=\frac{2q\mu^{2}n}{m^{2}}+\frac{m-1}{2m}\mathbb{E}(|\mathbf{r}_{i}^{\top}\mathbf{x}|\cdot|\mathbf{r}_{j}^{\top}\mathbf{x}|).\end{split}\tag{25}$$ For the second term in the above result, it holds E(|r > i x*| · |*r > j x|) ≤ Var(|r > i x|) + (E|r > i x|) 2 = Var(|r > i x|) + (Ez) $$\mathbf{\theta})^{2},$$ by the covariance property $$\mathbf{Cov}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|,|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|)=\mathbf{E}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|)-\mathbf{E}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot\mathbf{E}|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|$$ $$=\rho\sqrt{\text{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|)}\cdot\sqrt{\text{Var}(|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|)}$$ $$=\rho\text{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|),$$ $$(26)$$ $$(27)$$ where ρ ∈ (−1, 1) is the correlation coefficient. Substitute (25) into Var(z) = Ez 2 − (Ez) 2, by the inequality (26) and (12), we can derive $$\mathrm{Var}(z)\leq\frac{2\mu\mu^{2}n}{m^{2}}+\frac{m-1}{2m}[\mathrm{Var}(|\mathbf{r}_{i}^{\top}\mathbf{x}|)+(\mathbf{E}z)^{2}]-(\mathbf{E}z)^{2}\tag{1}$$ $$=\frac{2\mu\mu^{2}n}{m^{2}}+\frac{m-1}{2m}\cdot\frac{2\mu^{2}n}{m^{2}}-(\mathbf{E}z)^{2}$$ $$=\frac{(m+1)\mu^{2}n}{m^{2}}-(\mathbf{E}z)^{2}.$$ $$(28)$$ With the result about Var(z), we can further explore the condition that holds the desired probability $$\mathbf{Pr}\{|z-\mathbf{E}z|\leq\varepsilon\}\geq1-\delta.$$ $\left(29\right)$. Pr{|z − Ez| ≤ ε} ≥ 1 − δ. (29) By the Chebyshev's Inequality, (29) will be achieved, if Var(z)/ε2 ≤ δ; and according to (28), this condition can be satisfied when m2 m+1 ≥ qµ2n ε 2δ . In the above analysis, we consider a random x. For a given x, the condition of holding (29) can be further relaxed to m2 ≥ 2qµ2n ε 2δ , since in this case |r > i x| is independent between different i ∈ [m], such that Var(z) changes to be (12) divided by m. ## A.3 Proof Of Theorem 2 Proof. First, we derive the absolute moment of z ∼ N (*µ, σ*2) as $$\mathbb{E}|z|=\sqrt{\frac{2}{\pi}}\sigma e^{-\frac{\mu^{2}}{2\sigma^{2}}}+\mu\left(1-2\Phi\left(-\frac{\mu}{\sigma}\right)\right)\tag{30}$$ which will be used in the sequel. With the distributions of r and x, we have |r >x| =p n mk Pk i=1 xi . For easier expression, assume y =Pk i=1 xi, then the distribution of y can be expressed as $$f(y)=\sum_{i=0}^{k}\sum_{j=0}^{k-i}C_{k}^{i}C_{k-i}^{j}p^{i}q^{k-i}\frac{1}{\sqrt{2\pi k}\sigma}e^{-\frac{(y-(2j+i-s)\mu)^{2}}{2k\sigma^{2}}}.$$ Then, by (30) we can derive that E|r >x| = rn mk X k i=0 X k−i j=0 hC i kC j k−i p iq k−i × Z +∞ −∞ |y| √2πkσ e − (y−(2j+i−s)µ) 2 2kσ2 dy = 2µ rn mk X k i=0 C i kp iq k−i k − i 2 C d k−i 2 e k−i − 2µ rn mk X k i=0 C i kp iq k−iX k−i j=0 C j k−iΦ − |k − i − 2j|µ √kσ + σ r2n πm X k i=0 C i kp iq k−iX k−i j=0 C j k−i e − (k−i−2j) 2µ2 2kσ2 where Φ(·) is the distribution function of N (0, 1). The above equation and (13), (17), (18) together lead to $$\operatorname{E}|\mathbf{r}^{\mathsf{T}}\mathbf{x}|\leq\mu{\sqrt{\frac{n}{m}}}+\sigma{\sqrt{\frac{2n}{\pi m}}}.$$ Next, we can derive the variance of |r >x| as $$\begin{split}\operatorname{Var}(|\boldsymbol{r}^{\top}\boldsymbol{x}|)&=V a r(\boldsymbol{r}^{\top}\boldsymbol{x})-\left(\operatorname{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\right)^{2}\\ &={\frac{n}{m}}(\sigma^{2}+2q\mu^{2})-\left(\operatorname{E}\|\boldsymbol{r}^{\top}\boldsymbol{x}\|_{1}\right)^{2}.\end{split}$$ Finally, the convergence of √m µ √n E|r >x| shown in (9) can be derived by the same method of proving P2. ## B Appendix In Figs. 7–10, we test the SVM (with linear kernel) classification accuracy for the sparse ternary matrix with varying matrix sparsity k (and compression ratio m/n) on four different types of data. It can be seen that the performance changing trends of SVM against the varying matrix sparsity k are similar to the KNN performance as illustrated in the body of the paper, thus consistent with our theoretical analysis. ![18_image_0.png](18_image_0.png) Figure 7: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for image data (YaleB, DCT features), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![18_image_1.png](18_image_1.png) Figure 8: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for text data (Newsgroups), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![18_image_2.png](18_image_2.png) Figure 9: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for gene data (AMLALL), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![19_image_0.png](19_image_0.png) Figure 10: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for binary image data (MNIST, binarized pixels), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%.
Review 1: Summary: The paper addresses the use of sparse matrix methodologies to optimize computations in data-heavy environments. It particularly focuses on the advantages of sparse matrices for storing and manipulating large datasets with a high proportion of zero-value elements, which is common in fields like machine learning and network analysis. Strengths and Weaknesses: 1. It provides a detailed exploration of the impact that matrix sparsity has on the l1 distance between projected data points. 2. Under very simplified situation, the optimal k was derived in the theorem. Requested Changes: Theoretical Deepening: Considering the noted discrepancy between theoretical results and numerical analysis, it may be beneficial for the authors to deepen the theoretical grounding of their claims. Specifically, exploring the Rademacher complexity could provide a more robust theoretical framework that aligns better with the experimental observations. The Rademacher complexity measures the capacity of a class of functions and could potentially offer insights into how sparsity affects classification performance. Clarify the Role of Matrix Dimensions: It would be beneficial to conduct additional experiments or theoretical analysis to clarify if the performance scales with different dimensions. Variance should also be investigated in the experiments. Broader Impact Concerns: N.A. ================================================== Review 2: Summary: Theoretical Results: The paper provides a theoretical exploration of how the sparsity of matrices influences the $\ell_1$ distance in projected data, offering insights into the optimal sparsity conditions for two types of data distributions: Gaussian mixture and two-point distributions. Empirical Validation: The theoretical results have been validated by experimental results. Strengths and Weaknesses: ## Strengths 1. The theoretical results are interesting. The paper offers a theoretical exploration of how matrix sparsity influences the $\ell_1$ distance between projected data points. 2. The theoretical findings are well-supported by experimental results. 3. The authors tested multiple types of data, enhancing the generalizability of the results. ## Weaknesses The paper claims that "the optimal classification performance should be achieved by the sparse matrix with only one or at most about twenty nonzero entries per row," yet this specific claim seems to be supported more by numerical analysis than by the formal theoretical results presented in Theorems 1 and 2. This discrepancy needs to be addressed to ensure the claims are substantiated by the theoretical analysis. Requested Changes: 1. The authors need to address the apparent discrepancy between the theoretical claims and the numerical results regarding the optimal number of nonzero entries per row. If the claim primarily stems from numerical analysis, this should be clearly stated, and any theoretical claims should be directly tied to results from Theorems. 2. The paper claims that "the optimal classification performance should be achieved by the sparse matrix with only one or at most about twenty nonzero entries per row". The authors should discuss whether this result is dependent on the matrix dimensions. 3. Please provide a detailed description of how the experimental results in Figure 1 were produced. This should include the experimental setup, data preprocessing steps, and parameter settings. Broader Impact Concerns: No ================================================== Review 3: Summary: This paper studies the impact of sparse random projections on the downstream classification performance and its tradeoff. Unlike the previous analyses based on the _invariance_ of pairwise distances among data points, the authors analyze the _maximal_ pairwise distances with respect to the projection sparsity $k$ because the classification performance of the projected points may be highly affected by their discriminability. To this end, they analyze the closed formula of the projected distances for the two types of data distributions, the two-point distributed data and Gaussian mixture data. For both cases, it has been revealed that $k=1$ can achieve the maximal pairwise distance when the underlying data distribution is not very sparse, whereas moderately large $k$ maximizes the pairwise distance for sufficiently sparse data distributions. These findings are confirmed by the experiments with nearest-neighbor classifiers built on top of sparse projections. Strengths and Weaknesses: ==== Strengths ==== + **New perspective to study random projections.** As noted by the authors, it has been popular to study how well the projection preserves the pairwise distances of the original data points when we study random projections (represented by the Johnson-Lindenstrauss lemma), while this approach may not be appropriate if we are interested in the classification performance. This viewpoint has never been appreciated and is one of the interesting points of this paper. + **Transparent analyses with closed formula.** The authors’ findings are clearly stated by the closed formula of the pairwise distances in Theorems 1 and 2. Despite the simplicity of the assumed data distributions, we can benefit greatly from these closed formulas. For example, we can observe theoretically optimal sparsity $k$, as seen in Figures 1 and 2. ==== Weaknesses ==== + **Unclear analysis with respect to $k$.** The authors show the theoretical plots $\\mathbb{E}[r^\\top x]$ vs. $k$ only for two values of $p$ in Figures 1 and 2 and conclude that the pairwise distance is maximized at either $k=1$ or moderately large $k$. This analysis can be improved from two perspectives. First, it is important to describe why such a highly “discontinuous” behavior arises for $p=1/3$ in Figure 1a, unlike a “continuous” behavior in Figure 1b. Second, readers may be curious about the behavior of $\\mathbb{E}[r^\\top x]$ vs. $k$ in between $p=1/3$ and $p=2/3$ (in the case of Figure 1). Is there a phase transition? Or does it change gradually? Addressing them would improve the quality of the analysis. + **Do we really want to maximize the _expected_ pairwise distance?** For example, predictions of nearest neighbor classifiers remain the same if relative orders of pairwise distances do not change. That being said, the _variance_ of pairwise distances could be more important than their expectation. With this in mind, Theorem 1 tells us that $\\mathrm{Var}(|r^\\top x|) / (\\mu n / (mk))$ consists of two terms, and the first term increases with $k$ and the second term asymptotically approaches a constant with $k$. By combining them, the variance might have a “V”-shape with respect to $k$. Is it relevant to the final classification performance? Requested Changes: I expect the authors to address the comments in the weaknesses. Besides, there are minor comments as follows: + In p.2 (third paragraph, 8th line): “... the expected l1 distance can be [approximated] with high probability …”? + In eq.(5): it looks better to write “Var” with \mathrm. + In p.4 (8th line from the bottom): “... as the probability $p$ of $x\_i$[=0]”, “=0” should be put in the math mode. + In Section 5.3 (8th line): Is “$k=6$ in Fig.4c” a mistake for “$k=6$ in Fig.4[f]”? + In Section 5.3 (11th line): “it fails to get the decreasing performance trend and performs poorly at $k=1$, as illustrated in Fig.5” Does it describe Fig.4, instead of Fig.5? Broader Impact Concerns: This work is mainly devoted to theoretical understandings of projection matrices and does not have societal concerns. ================================================== Review 4: Summary: In this paper, the authors utilize the $l_1$ norm (also known as the $l_1$ distance) and subsequently investigate sparse matrix-based random projection, with an analysis conducted under two data distribution assumptions: the Gaussian mixture distribution and the two-point distribution. However, the introduction lacks clarity in defining the mathematical problem being addressed. Furthermore, the conclusion lacks rigorous proof to support the authors' claims. Additionally, the paper fails to delve into the theoretical comparison between the $l_1$ distance and the $l_2$ distance. Overall, this paper is deficient in crucial theoretical insights. Strengths and Weaknesses: Strengths: The paper presents the performance benefits of utilizing sparse matrix-based random projection with the $l_1$ distance metric. Weaknesses: The introduction does not clearly define the mathematical problem under consideration. The primary theoretical conclusion lacks a rigorous proof, and the paper fails to provide a comparison between the $l_1$ distance and the $l_2$ distance. Requested Changes: (1) The mathematical problem being addressed has not been clearly presented in Section 2. While I am personally acquainted with this problem, many readers in machine learning, optimization, and signal processing may not grasp it immediately. Therefore, a specific problem statement should be provided to enhance comprehension. (2) The authors assume that each row of ${\bf R}$ follows an independent and identically distributed (i.i.d.) condition. However, this assumption may not hold true for the estimated sparse matrix generated by a algorithm. (3) Although the authors have proven Theorem 1 and Theorem 2, we cannot directly conclude that $\mathbb{E}|{\bf r}^\top {\bf x}|$ tends to reach its maximum value at approximately $k = 20$. While simulation results may suggest such a trend, this is limited to specific scenarios with certain parameters. In my opinion, it would be more prudent to determine the maximum $k$ that ensures $\mathbb{E}|{\bf r}^\top {\bf x}|$ achieves $95\%$ or $99\%$ of its maximum. (4) The authors could explore the utilization of other concentration inequalities to attain a more precise bound for $||{\bf R} {\bf x} ||_1$, instead of relying solely on the law of large numbers which only considers the limiting case. (5) Given that the $l_1$ norm is known to better capture sparsity compared to the $l_2$ distance, it is imperative for the authors to offer a thorough comparison between the $l_1$ distance and the $l_2$ distance, akin to the conclusions presented in Theorem 1 and Theorem 2. Without such comparative analysis, readers may struggle to gain a deeper understanding of this issue. Broader Impact Concerns: No ================================================== Metareview: Recommendation: Reject Comment: All the reviewers have given yes to both the claims and evidence questions. Similarly, all the reviewers believe that the work itself is interesting. At the same time, the most important criticism of the work comes down to the theoretical validation of certain statements that the paper is paper is making which can be thought of implications of Theorems 1 and 2. The paper shows the proofs of the theorems in the appendix. However, the paper fails to justify properly the other interesting observations which are the main selling points of the paper. Major concerns: 1. P3, P4 and P5, P6 are the interesting properties that the paper tries to champion but are solely based on simulations and observations. Although interesting, I agree with the reviewers' criticisms that they need to be verified properly or at least put in the perspective (e.g., should be clearly mentioned that proper proofs are a future work or discuss this as limitation). The paper in the current form is certainly over selling these. 2. I do not see where the proof is for the sentence from abstract, *"Given the two data distributions, it is **proved** that the maximum \ell1 distance between projected data points could be achieved, as the sparse matrix contains only one or at most about twenty nonzero entries per row, under the size m ≥ O(sqrt(n))."* The closest I find is the sentence *"Therefore, we can reach the conclusion that E|r >x| tends to reach its maximum value at k = 1 or at most about k = 20, for the original data with two-point distributions"*. Now, how is this a conclusion? These are observations and should be called as such, and therefore, not a conclusive proof. It should be noted that the quoted sentence (from abstract) is being highlighted as a contribution (claim) in the abstract and in the introduction, but never properly justified. Based on my reading of the reviews, the rebuttal, and the paper itself, a major revision would be more appropriate for this paper. In major revision, please tone down such sentences or have more rigorous arguments to support those claims. Since there is no "major revision" recommendation, my decision is to "reject" the current submission with encouragement for submitting "a major revision at a later time". ==================================================
# I-Aside: Towards The Global Interpretability Of Image Model Robustness Through The Lens Of Axiomatic Spectral Importance Decomposition Anonymous authors Paper under double-blind review ## Abstract Robust decisions leverage a high proportion of robust features. Natural images have spectral non-uniformity and the majority of spectral energy concentrates on low-frequency components. A change with an infinitesimal amount of energy on the high-frequency components can rewrite the features dominated by high-frequency components. Image models are parameterized general non-linear signal filters. The spectral structures of the model responses to inputs determines the fragility of the learned feature representations. The spectral importance decomposition of models can thus reflect model robustness in response to feature perturbations. To this end, we formulate the spectral importance decomposition problem, and, present Image Axiomatic Spectral Importance Decomposition Explanation (**I-ASIDE**) - a model-agnostic global interpretability method - to quantify model global robustness and understand how models respond to perturbations. We theoretically show that **I-ASIDE** decomposes the mutual information between feature representations and labels onto spectrum. Our approach provides a unique insight into interpreting model global robustness from the perspective of information theory and enables a considerable number of applications in research, from understanding model robustness, to studying learning dynamics, to assessing label noise, to investigating adversarial vulnerability, to studying out-of-distribution robustness, etc. We showcase multiple applications to endorse these claims. ## 1 Introduction Global interpretability summarizes the decision dynamics of neural networks *en masse* in contrast to instancewise local interpretability (Lipton, 2018; Zhang et al., 2021). Local interpretability for image models has achieved great success (Sundararajan et al., 2017; Smilkov et al., 2017; Linardatos et al., 2020; Selvaraju et al., 2017; Arrieta et al., 2020; Zhou et al., 2016; Ribeiro et al., 2016; Lundberg & Lee, 2017; Lakkaraju et al., 2019; Guidotti et al., 2018; Bach et al., 2015; Montavon et al., 2019; Shrikumar et al., 2017), yet quantifiable global interpretability remains virtually unexplored. A brief literature review regarding global interpretability for image models is provided in Section 3. The global robustness (Hendrycks & Dietterich, 2019; Bai et al., 2021; Goodfellow et al., 2014; Silva & Najafirad, 2020) of models reflects an intrinsic property of models and delineates a crucial aspect towards interpretability and trustworthiness. To this end, we present **I-ASIDE**1, a model-agnostic method, to quantify the global robustness of image models and understand how models respond to perturbations. Unlike prior works, **I-ASIDE** directly quantifies model global robustness and enables a considerable number of applications in deep learning research across multiple domains (See Section 5). We also theoreticaly show that **I-ASIDE** *de facto* decomposes the mutual information between feature representations and labels. This can provide profound insights into understanding how features are represented inside black box models. We will provide five application showcases to demonstrate the potential applications of **I-ASIDE** framework. 1Anonymized reproducibility: https://anonymous.4open.science/r/IASIDE_reproducibility-F8BC/. 1 ![1_image_0.png](1_image_0.png) The 2D spectrum is mapped into 1D radial spectrum and further divided into M bands which are denoted from I0 to IM´1. Figure 1: This diagram illustrates the correlation between the power-law like radial spectral energy density of natural images and feature robustness. According to the Parseval identity theorem, small spatial perturbations targeting on high-frequency components can rewrite the features dominated by high-frequency components. The term use of 'feature' is ambiguous and nuanced in deep learning community. In this research, we clarify and differ the term into: (1) Input features (pixel features) and the representations of input features (feature representations). The pixels of images are referred as 'input features' or 'pixel features'. The 'input features' are fed into image models and the models represent the features inside. The way that models represent the input features is referred as 'feature representations'. Empirically, we can 'observe' how models represent the input features by looking into the outputted probability distributions. Feature representations are the representations of input features that models learn to represent relevant information while neglect irrelevant information (Bengio et al., 2013; Goodfellow et al., 2016). The feature representation robustness largely delimits decision robustness afterwards. Robust decisions leverage more robust feature representations while non-robust decisions leverage less robust representations. The correlation between the spectral energy density distributions of input features and the spectral structures of the responses of models to inputs plays a crucial aspect in feature representation robustness. Consequently, the signal spectrum can index the robustness of features: Low-frequency indicates robust features while high-frequency indicates non-robust features. Natural images have spectral non-uniformity and the majority of spectral energy concentrates on low-frequency components (See Figure 1 and Figure 2(a)). According to the Parseval identity theorem - the energy of the perturbations in the spatial domain and the spectral domain is conservative and equivalent, the small spatial perturbations targeting on high-frequency components can rewrite those feature representations dominated by high-frequency components. Figure 2 shows four example spectrum from adversarial perturbations to out-of-distribution (OOD) perturbations. The small spatial perturbations in both adversarial noises and OOD noises have considerable energy concentrating on high-frequency components and can have significant impacts on the features dominated by high-frequent components. Ilyas et al. use the terms 'robust feature (RF)' and 'non-robust feature (NRF)' to theoretically analyze the adversarial robustness problem from the perspective of feature representations and argue that the presence of non-robust features can incur model robustness issues (Ilyas et al., 2019; Tsipras et al., 2018). We adopt the use of their terms and step forward to rigorously discuss the robust decision problem in a broader scope beyond adversarial robustness. Measuring the ratio of the robust features in decisions can help to interpret model inference dynamics globally from the perspective of feature representations. Unfortunately, discriminating features between being robust and being non-robust is difficult. We have noticed the correlation between the power-law like spectral structures of natural images and feature robustness. Feature robustness can thus be indexed by ![2_image_0.png](2_image_0.png) Figure 2: This shows the spectrum examples from four perturbation sources: (b) Adversarial FGSM perturbation with eps " 0.1, (c) adversarial PGD perturbation with eps " 0.1, (d) OOD white noise with σ " 0.1 and (e) OOD blurring with Gaussian kernel 3 ˆ 3. The (a) is the reference spectrum of the reference image (See Figure 1). The adversarial samples are obtained by using a *resnet50* pre-trained on *ImageNet*. The colorbars in the figures indicate frequency-domain magnitudes which are normalized into r0, 1s. The perturbation noises are computed by ∆x :" x ˚ ´x where x is some clean sample and x ˚ is the corresponding perturbed sample. We apply Fourier analysis on the min-max normalized ∆x and show the results. The normalization does not affect the results from Fourier analysis since Fourier operator is a linear operator. The examples show that several usual adversarial perturbations and OOD perturbations have considerable spectral energy concentrating on high-frequency components. The perturbations targeting on high-frequency components can rewrite the features dominated high-frequency components. radial spectrum (See Figure 1). We refer the radial spectrum as the 'robustness spectrum' hereafter. The decompositions of the mutual information of the feature representations of inputs and labels with respect to the robustness spectrum can summarize model decision robustness, and quantitatively reflect how neural networks respond to signals in the frequency-domain. ## 2 Contributions We aim to propose a method of measuring the global spectral importance of image models which is the major focus. The spectral decompositions are a unique solution satisfying a set of desirable axioms. We formulate this problem by using Shapley value theory (Roth, 1988; Aumann & Shapley, 2015). Our approach provides a unique insight into interpreting the global decision robustness of image models and enables multiple applications in the deep learning research community. We claim our major contributions as: - We introduce a method of measuring the spectral importance of image models; - We theoretically justify the proposed method from the perspective of information theory and coalition game theory; - We demonstrate the potential of our **I-ASIDE** by showcasing multiple applications in a variety of research fields. The application showcases are: - Quantifying model global robustness by examining spectral importance distributions (See the experiments regarding spectral importance distributions in Figure 4, summarized numerical comparison and t-SNE projection in Figure 8); - Understanding the learning behaviours of image models on the datasets with supervision noise (See the experiments in Figure 9); - Investigating the learning dynamics of models from the perspective of the evolution of feature representation robustness in optimization (See the experiments in Figure 10); ![3_image_0.png](3_image_0.png) Figure 3: This diagram illustrates the overview of **I-ASIDE** with some baseline dataset pX , Yq (where X denotes images and Y denotes labels) and some classifier Qpy|xq : x **ÞÑ r**0, 1s which outputs a normalized probability distribution for y P Y. The images are sampled from the baseline dataset pX , Yq and fed into a spectral coalition filter to form 2M spectral coalitions. The spectral coalitions are then used to compute the spectral importance distribution Ψ˚pvq. The normalized spectral importance distribution Ψ˚pvq can be summarized into a robustness score Spvq. - Providing an insight into understanding the adversarial vulnerability of models by examining spectral importance distributions (See the experiments in Figure 11); - Studying model out-of-distribution (OOD) robustness when input domains shift (See the experiments in Figure 12). **I-ASIDE** can help to answer model robustness concerns from the perspective of feature representations in frequency-domain. It is also notable that the potential applications are not restricted to the above fields. Other research fields such as data augmentation and self-supervised learning can also use **I-ASIDE** as a device to understand the dynamics of the learned feature representations of models. For example, in data augmentation research, I-ASIDE can interpret the training dynamics with ablation experiments by understanding how hyper parameters and augmentation tricks can affect the robustness of the learned features. ## 3 Related Work We summarize related works from four categories to provide research context in visual models: (1) Global interpretability, (2) model robustness, (3) frequency-domain research and (4) information theory in deep learning. Global interpretability: Global interpretability summarizes the decision behaviours of models and provides a holistic view. In contrast, local interpretability merely provides explanations on the basis of instances (Sundararajan et al., 2017; Smilkov et al., 2017; Linardatos et al., 2020; Selvaraju et al., 2017; Arrieta et al., 2020; Zhou et al., 2016; Ribeiro et al., 2016; Lundberg & Lee, 2017; Lakkaraju et al., 2019; Guidotti et al., 2018; Bach et al., 2015; Montavon et al., 2019; Shrikumar et al., 2017). Due to the research scope, we do not unfold the literature review regarding local interpretability. Feature visualization with neuron maximization or class activation maximization can show the ideal inputs for specific neurons or classes by optimizing inputs, and help to understand the learned feature representations of models (Olah et al., 2017; Nguyen et al., 2019; Zeiler et al., 2010; Simonyan et al., 2013; Nguyen et al., 2016a;b). Yet, optimizing inputs by maximizing class scores or neuron activations can yield 'surrealistic' results which are not interpretable itself. Network dissection attempts to establish the connection between the functions of the units (e.g. a set of channels or a set of layers) in convolutional neural networks and some concepts - e.g. eyes or ears (Bau ![4_image_0.png](4_image_0.png) Figure 4: We showcase the spectral importance distributions from multiple models pre-trained on *ImageNet-1k*. We also include the models with random weights as a control marked by blue box. We have noticed that: (1) The spectral importance of the robustness of feature representations of trained models is non-uniform, and, (2) the spectral importance of the robustness of the feature representations of un-trained models is uniform. et al., 2017). Concept-based approach measures the activations of networks with respect to concepts, and interpret networks at concept level (Kim et al., 2018; Ghorbani et al., 2019; Koh et al., 2020; Chen et al., 2020). Global input feature importance analysis summarizes the decisions of models by measuring how important the input features contribute to predictions (Altmann et al., 2010; Greenwell et al., 2018; Lundberg & Lee, 2017; Ribeiro et al., 2016; Simonyan et al., 2013; Sundararajan et al., 2017; Covert et al., 2020). For example, Covert et al. present SAGE by applying Shapley value theory (Shapley, 1997) to assign input features with importance for interpreting feature contributions. Yet, the implementations of these works are largely restricted to examining feature importance in the spatial domain. Interpreting the global feature importance in the spatial domain for images has inherited limits. For example, the showcase experiment of literature (Covert et al., 2020) provides the global pixel-wise explanation of a multi-layer perceptron (MLP) with the MNIST on spatial domain. The method assigns each pixel with an importance score and adds the scores over MNIST to form the global interpretation. However, the positions and angles of the objects in images can vary from image to image. Adding the pixel importance scores as global interpretation only offers very limited information. We only read that models pay attentions in the centers of the images in MNIST (Covert et al., 2020). Thus the interpretations in spatial domain often suffer from these limits and provide very limited insights regarding robustness. Our work differs from these prior works in that we interpret global model robustness in frequency domain. Frequency-domain transforms are not sensitive to the positions and angles of the objects of images. For example, the spectrum of rotated or flipped images remains identical. The frequency-invariant property in tandem with the correlation to perturbation robustness allows **I-ASIDE** to interpret model decision behaviors globally. Model robustness: Szegedy et al. notice the robustness problem of deep models arising from adversarial perturbations. Thereafter, empirical observations demonstrate that boosting model robustness with adversarial training is at the cost of the performance degradation on model standard accuracy for visual tasks (Goodfellow et al., 2014). Later theoretical analyses show standard accuracy is at odds with model robustness (Zhang et al., 2019; Tsipras et al., 2018). Ilyas et al. argue features can be distinguished by their brittleness into robust features and non-robust features, and show that adversarial robustness relates to non-robust features. Our research is based on the above insights. Frequency-domain research: Neural networks are non-linear parameterized signal processing filters. Investigating how neural networks respond to inputs in the frequency-domain can provide a unique insight into understanding its functions. Rahaman et al. approximate ReLU networks with piece-wise continuous linear functions in order to perform Fourier analysis. Their results suggest neural networks have a 'spectral bias' on smooth hypotheses (Raghu et al., 2017; Montufar et al., 2014; Rahaman et al., 2019). Xu et al. investigate the learning dynamics of neural networks in frequency-domain in their work 'F-Principle' (Xu et al., 2019a;b). Their work suggests that the learning behaviors of neural networks are spectral non-uniformity: Neural networks fit low-frequency components first, then high-frequency components later. Tsuzuku & Sato ![5_image_0.png](5_image_0.png) Spectral players with ℓ8 partition. The pixel density of IM´1 with ℓ2 ball. The pixel density of IM´1 with ℓ8 ball. Figure 5: This shows the motivation we choose ℓ8 ball over ℓ2 ball in partitioning the frequency domain into the M bands (i.e., M 'spectral players') over 2D Fourier spectrum. The frequency data density of the spectral players with ℓ8 remains a constant. However, the frequency data density of the spectral players with ℓ2 is not a constant since some frequency components do not present. This motives us to empirically choose ℓ8 metric to form spectral players in implementation. affirm convolutional neural networks have spectral non-uniformity regarding Fourier bases (Tsuzuku & Sato, 2019). Wang et al. conducts an empirical study of the connection between supervision signals and feature representations from the perspective of frequency (Wang et al., 2020). Kolek et al. propose 'CartoonX' and attempt to interpret the instance-wise decisions of image classifiers in wavelet domain (Kolek et al., 2022). Both **I-ASIDE** and CartoonX are the attempts to interpret the decision behaviors of models in frequency-domain. Our work differs from CartoonX in that: (1) We use Fourier transform instead of wavelet transform because the feature robustness to perturbations correlates with the Fourier spectrum of inputs, and (2) we assess the global decision behaviors over a population other than instance-wise. Information theory in deep learning: In the Section 4, we use some theoretical results in information theory to justify and characterize the choice of the characteristic function in **I-ASIDE**. The optimization objectives of a variety of works from supervised learning and un-supervised learning can be unified under the view of maximizing variational mutual information (Poole et al., 2019; Nowozin et al., 2016; Kingma & Welling, 2013). Deep learning has achieved great success in enormous applications. Yet, the question regarding how models represent input features inside the black boxes remains mysterious. Tishby et al. answer this question by introducing information bottleneck theory for interpreting the learning dynamics of neural networks (Tishby et al., 2000; Saxe et al., 2019). The information bottleneck theory views models as an information encoding and decoding process. Models learn to encode (extract) the information from inputs relevant to labels while neglect irrelevant information (Saxe et al., 2019). Their work provides a motivation for **I-ASIDE**. The mutual information between the feature representations of models and labels reflect the abilities that models capture relevant information from input features. The decomposition of such mutual information with respect to frequency can reflect the spectral contributions to the learned information representations in black boxes. However, estimating the quantity of the mutual information is intractable since we do not have the knowledge regarding the joint distributions of feature representations and labels. Fortunately, Barber & Agakov show a variational lower bound between inputs and labels (Barber & Agakov, 2004). Qin et al. use the result and show that the training objective of the classifiers trained with usual cross-entropy is equivalent to estimating a variational bound of the mutual information between inputs and labels (Qin et al., 2021). We adopt their results in **I-ASIDE** to characterize the characteristic function design in Section 4.5. ## 4 Axiomatic Spectral Importance Decomposition We rigorously formulate the spectral importance decomposition problem as a fairness utility distribution problem in the context of coalition game. The fairness distribution (decomposition or division) is a problem ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) (a) *resnet18 @CIFAR10* (b) *resnet18 @CIFAR100* Figure 6: This demonstrates how the spectral importance distributions of feature representations change with respect to the training set sizes. We choose resnet18 on *CIFAR10* and *CIFAR100*. The preliminary results show that there does not exist a unique pattern regarding how spectral importance distributions with respect to training set sizes. in coalition game theory and refers to divide the payoffs among players in a coalitional game in which the division must satisfy a set of axioms (Aumann & Maschler, 1985; Yaari & Bar-Hillel, 1984; Aumann & Dombb, 2015; Hart, 1989; Roth, 1988). We directly deploy the Shapley value theory (Roth, 1988; Hart, 1989) to derive the solution to the decomposition problem in **I-ASIDE**. We also theoretically justify and characterize the choice of the characteristic function in **I-ASIDE** by showing that **I-ASIDE** decomposes the mutual information between feature representations and labels for given classifiers with respect to spectrum. This theoretical characterization suggests **I-ASIDE** itself is interpretable at design level. Figure 3 shows the overview of **I-ASIDE**. Content organization: We organize the content in the following order. In Section 4.1, we clarify notations and prepare preliminaries for theoretical analyses. In Section 4.2, we define a spectral coalition game to formulate the setting. In Section 4.3, we state the desirable axioms for fairness decomposition and state the result of Shapley value theory. In Section 4.4, we dive into the implementation of forming spectral coalitions in frequency-domain which is essential for deploying Shapley value theory in the spectral coalition game defined in Section 4.2. In Section 4.5, we define and theoretically characterize the design of the characteristic function of the defined spectral coalition game. The characterization suggests our method itself is interpretable at design level. In Section 4.6, we introduce an approach to summarize the measured spectral importance distributions into scalar scores. In Section 4.7, we theoretically analyze the error bound of our method and show the experiments to prove the analysis. ## 4.1 Notations And Preliminaries We adopt the notation conventions from topology and functional analysis (Yosida, 2012; O'Searcoid, 2006; Simmons, 1963). We also follow the conventions in related literature of deep learning community such as variational methods in deep learning. We clarify the notations below. Let pX , Yq be some image dataset where X denotes images and Y denotes discrete labels. Let Sy1px; θq : x ÞÑ R be the logit output of some parameterized classifier S with some input x with respect to class y 1 P Y where θ is parameters. We use Q and P to differ 'prediction' and 'ground-truth'. Let Qpy|x; θq : p*x, y***q ÞÑ r**0, 1s be the conditional probability expression of the classifier S: $$Q(y|x;\theta)\ {\stackrel{d e f}{=}}\ {\frac{e^{S_{y}(x;\theta)}}{\sum_{y^{\prime}\in{\mathcal{Y}}}e^{S_{y^{\prime}}(x;\theta)}}}$$ Sy1 px;θq(1) $\left(1\right)$. ![7_image_0.png](7_image_0.png) Figure 7: This shows the filtering examples with three masking (absence assignment) strategies: (1) Assigning the spectral absences with constant zeros (Zeroing), (2) assigning the spevtral absences with Gaussian noise (Complex Gaussian) and (3) randomly sampling spectral components from the same image datasets (Replacement). The standard complex Gaussian distribution is given by: N p0, 1 2q ` iN p0, 1 2q. The figures (a), (b) and (c) show the coalition filtering results with the spectral coalition: tI0u. The figures (d), (e) and (f) show the coalition filtering results with the spectral coalition: tI1, I2, I3, I4, I5, I6, I7u. The figures (g) to (l) show the examples of the measured spectral importance distributions of a *resnet18* and a efficientnet_v2_s (both are pre-trained on *ImageNet*) with the three assignment strategies. where θ is the parameters. Clearly, Qpy|x; θ**q P r**0, 1s. Let QpY|x; θq : x **ÞÑ r**0, 1s |Y| be the predicted distribution for some given image x P X over Y: $$Q(\mathcal{Y}|x;\theta)\stackrel{{def}}{{=}}\left(Q(y|x;\theta)\right)_{y\in\mathcal{Y}}\in[0,1]^{|\mathcal{Y}|}\tag{1}$$ $$\left(2\right)$$ $$(3)$$ where |Y| is the cardinality (the number of elements) of set Y. We use Ppy 1|xq to denote the prior probability of image x for given class y 1. In terms of one-hot labels, suppose the ground-truth label of image x is y, the distribution Ppy 1|xq is given by: $$P(\mathbf{y}^{\prime}|\mathbf{x})=\begin{cases}1,&\text{if}\mathbf{y}^{\prime}=\mathbf{y},\\ 0,&\text{if}\mathbf{y}^{\prime}\neq\mathbf{y}\end{cases}.\tag{1}$$ Accordingly, PpY|xq " p0, ¨ ¨ ¨ , 0, 1, 0, **¨ ¨ ¨** , 0q. We use the convention from topology (analysis) to denote the collection of the distributions QpY|x; θq over a set X as: $$Q({\mathcal{X}};\theta)\stackrel{d e f}{=}\{Q({\mathcal{Y}}|x;\theta)|x\in{\mathcal{X}}\}$$ def " tQpY|x; θq|x P X u (4) where θ denotes the parameter of classifier Q. Since the distributions of Q over the dataset X represent the representations of the data in the dataset, we also refer QpX ; θq as the representations of set X for given $$(4)$$ ![8_image_0.png](8_image_0.png) Figure 8: This is an application showcase (A1) in research with two experiments. The left bidirectional chart showcases to model spectral importance scores with respect to trainable parameters. The right figure showcases to visualize model robustness by projecting the spectral importance distributions with t-SNE. The shape sizes correspond to robustness (the larger is the better). The experiments are performed on dataset ImageNet. The results also correlate with the experiments in Figure 11 regarding adversarial perturbations. In our experiments, the numbers of the trainable parameters of models do not play a crucial role on model robustness. classifier Q. We ignore the parameter θ in all notations to maintain depiction succinct. For example, we denote Qpy|x; θq as Qpy|xq, QpY|x; θq as QpY|xq and QpX ; θq as QpX q. ## 4.2 Spectral Coalition Game The probability predictions of an image classifier can be viewed as the worth (utility) scores in a coalition game with spectral players. The spectral players are the radial spectral regions in such a spectral coalition game (See Figure 1). The coalition game is defined by some image classifier. Spectral players *cooperate*, interact, and *contribute* to the decisions of the classifiers with various bargaining powers. The worth division should satisfy a set of axioms: efficiency, symmetry, *linearity* and *dummy player* (Roth, 1988; Hart, 1989; Winter, 2002). Shapley value theory is the unique solution to such a division. We now introduce 'spectral coalition game'. Let Qpy|xq : x **ÞÑ r**0, 1s be some image classifier which predicts the discrete category probability distribution of some x for y P Y. Let r0, 1s be the normalized 1-dimensional radial spectrum (See Figure 1 and Figure 5). The radial spectrum is partitioned into M equispaced regions (See Figure 5). Each region is a ℓ8 ball with various radius. Each partition then is a '*spectral player*'. The i-th spectral player is denoted as Ii (where i P rMs def " t0, 1, **¨ ¨ ¨** , M ´ 1u). The M spectral players constitute a player set I :" tIiu M´1 i"0 . The consideration of using ℓ8 ball is because: (1) It is easier to handle the distant pixels at implementation and (2) the pixel density in the region remains constant compared with ℓp (1 ă p ă 8). Please refer to Figure 5. Let v : 2IÞÑ R be some characteristic function which sends 2 Ito the field R such that v**pHq "** 0. The map v is given as the form of the statistical expectation of some Qpy|xq over some baseline dataset pX , Yq. We will show that this choice can be justified in that the characteristic function v measures a variational lower bound (Poole et al., 2019; Barber & Agakov, 2004) of the mutual information between images and labels in Section 4.5. We define a coalition game pI, vq. Let ψipI, vq be the spectral importance component for Ii. Let Ψpvq :" pψiqiPrMs be the collection of ψipI, vq. ![9_image_0.png](9_image_0.png) Figure 9: This is an application showcase (A2) demonstrating how models respond to the label noise of a human-annotated dataset in training. ## 4.3 Axioms And Decomposition Dividing the 'payoffs' amid players in coalition games can exist multiple non-unique schemes. A fair division scheme is said that the scheme satisfies a set of desirable fairness axioms (Moulin, 1992; Roth, 1988; Hart, 1989; Winter, 2002; Shapley & Shubik, 1969; van den Brink, 2002). Shapley value theory is a solution concept which uniquely satisfies a set of desirable fairness axioms. We formulate the spectral importance decomposition as a fairness division problem in 'spectral coalition game'. We state the essential axioms below. Symmetry **axiom**: Let Ir P 2 I be some spectral player coalition. For @ Ii, Ij P I ^ Ii, Ij R Ir, the statement vpIr Y tIiuq " vpIr Y tIj uq implies ψipI, vq " ψj pI, vq. This axiom relabels the statement '*equal treatment of* equals' principle mathematically. This axiom states that the 'names' of players should have no effect on the 'treatments' by the characteristic function in coalition games (Roth, 1988). Linearity **axiom**: Let u and v be two characteristic functions. Let pI, uq and pI, vq be two coalition games. Let pu ` vqpIrq :" upIrq ` vpIrq where Ir P 2 I. The divisions of the new coalition game pI, u ` vq should satisfy: ψipI, u ` vq " ψipI, uq ` ψipI, vq. This axiom is also known as '*additivity* axiom' and guarantees the uniqueness of the solution of dividing payoffs amid players (Roth, 1988). Efficiency **axiom**: This axiom states that the sum of the divisions of all players must be summed to the worth of player set (the grand coalition): Mř´1 i"0 ψipI, vq " vpIq. Dummy player **axiom**: A dummy player (null player) I˚ is the player who has no contribution such that: ψ˚pI, vq " 0 and vpIr Y tI˚**uq "** vpIrq for @ I˚ R Ir ^ I˚ Ď I. In the Shapley value theory literature (Roth, 1988), the *efficiency* axiom and the *dummy player* axiom are also relabeled as *carrier* axiom. We directly deploy Shapley value theory to decompose the predictions of models amongst spectral players. The decompositions are uniquely given by: Bayes. The decompositions are uniquely given by: $$\psi_{i}(\mathcal{I},v)=\sum_{\mathcal{I}\subseteq\mathcal{I}\setminus\mathcal{I}_{i}}\frac{1}{M}\binom{M-1}{|\hat{\mathcal{I}}|}^{-1}\left\{v(\hat{\mathcal{I}}\cup\{\mathcal{I}_{i}\})-v(\hat{\mathcal{I}})\right\}\tag{1}$$ where 1M `M´1 |I˜| ˘´1gives the probability of sampling a spectral player Ii and a spectral player coalition I˜. ## 4.4 Spectral Coalition Filtering To implement the spectral coalitions in the spectral coalition game defined in Section 4.2, we must implement spectral coalition filtering. We implement the spectral coalition with ideal radial multi-band-pass digital signal filtering (Oppenheim, 1978; Roberts & Mullis, 1987; Pei & Tseng, 1998). The multi-band-pass digital signal filtering refers that the signal components in pass-bands are preserved while the signal components in stop-bands are suppressed. We thus can use the 'pass-bands' and 'stop-bands' to represent the 'presences' and 'absences' of the players in coalitions (See Figure 5). We state the implementation below. $\left(5\right)^2$ ![10_image_0.png](10_image_0.png) Figure 10: This is an application showcase (A3) in research demonstrating how the robustness of feature representations of models evolves with respect to training epochs. We also provide the loss curves with respect to epochs in Appendix A.5. Let F be the Discrete Fourier transform (DFT) operator and F´1 be inverse DFT (IDFT) operator (Tan & Jiang, 2018). The DFT operator F and IDFT operator F´1 are defined in Appendix A.2. We take some spectral coalition Ir P 2 I and crop the frequency components not present in Ir by using a channel-wise plane multi-band-pass filter on frequency-domain (Oppenheim, 1978; Roberts & Mullis, 1987; Castleman, 1996; Jähne, 2005). The multi-band-pass mask matrices of allowing pass-bands and suppressing stop-bands are referred as 'transfer functions' or 'filters' in the literature of digital signal processing. Let Ir P 2 I be some spectral player coalition in some spectral coalition game pI, vq. Let TpIrq " `TpIrqp*m, n*q ˘ pm,nqPrMsˆrNsP RMˆN be the filter transfer function for the spectral coalition Ir such that: $$\mathbb{T}(\widetilde{\mathcal{I}})(m,n)=\begin{cases}1,&\text{if the frequency point(m,n)in}\widetilde{\mathcal{I}},\\ 0,&\text{otherwise}\end{cases}\tag{6}$$ where M ˆ N is the dimension of the image. We define a binary operator '‹' over the field R to represent the coalitional filtering by: $$x\star\widetilde{\mathcal{I}}\ {\stackrel{d e f}{=}}\ \mathcal{F}^{-1}\left[\underbrace{\mathcal{F}(x)\odot\mathbb{T}\left(\widetilde{\mathcal{I}}\right)}_{\mathrm{Spectral~presence}}+\underbrace{\boldsymbol{b}\odot\left(\mathbb{1}-\mathbb{T}\left(\widetilde{\mathcal{I}}\right)\right)}_{\mathrm{Spectral~absence}}\right]\right]$$ (7) $$\begin{array}{l}\small\text{(7)}\end{array}$$ . where 'd' denotes Hadamard product (Horn, 1990; Horadam, 2012), 1 P RMˆN denotes an all-ones matrix and b P CMˆN represents the assignments of the absences of spectral players over complex field. At our implementation, we empirically set b " 0. Absence baseline: The term 'baseline' in attribution analysis context refers to the absence assignments of players (Sundararajan et al., 2017; Shrikumar et al., 2017; Binder et al., 2016). For example, if we use 'zeros' to represent the absence of players, the 'zeros' are dubbed as 'baseline'. Intuitively, 'baseline' is the 'pretext' for being 'absent' in the coalition game (Sundararajan et al., 2017). Spectral player absence assignment: In the implementation of the coalition filtering, the masking (assignment) strategies of the absences of the spectral players in coalition filtering can change the distributions of filtered images. There exist multiple choices for the assignments of the absences of spectral layers in coalition filtering design: (1) Assigning to constant zeros (Zeroing), (2) assigning to complex Gaussian noise (Complex Gaussian) and (3) assigning to the corresponding frequency components randomly sampled from other images at the same dataset (Replacement). When we adopt 'Complex Gaussian' strategy, the b equation (7) is sampled from some *i.i.d.* complex Gaussian distribution: N pµ, σ 2 2q ` iN pµ, σ 2 2q. When we adopt the 'Replacement' strategy, the equation (7) changes to: b " Fpx ˚q (where x ˚ ∼ X is a randomly sampled image from some set X ). In our implementation, we simply choose 'zeroing' as our filtering strategy: b " 0. In Figure 7, we show the filtered image examples by using the above three strategies and also show the examples ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) Figure 11: This is an application showcase (A4) in research demonstrating that there is a correlation between adversarial perturbations and the spectral importance scores using **I-ASIDE**. The adversarial perturbations are measured by the statistical expectations of prediction probability variations (See Section 5.4). We choose multiple pretrained models (on *ImageNet*) and perform un-targeted FGSM/PGD attacking with eps " 0.1. The adversarial samples carry a high proportion of non-robust features (Ilyas et al., 2019) (See Figure 2). For most models, adversarial perturbations are negatively proportional to spectral importance scores. The circle sizes are proportional to spectral importance scores. It is also notable there are some outliers (e.g. shufflenet), and this implies the perturbation-based robustness research can merely capture an aspect of the holistic robustness of models. The full results are provided in Appendix A.6. of measured spectral importance distributions. In our empirical observations, the three strategies have rather similar performance. In this research, we do not unfold the discussions regarding the masking strategy choices. We will investigate the masking strategy choices in our future work. ## 4.5 Characteristic Function Design And Characterization Let Qpy|xq : x **ÞÑ r**0, 1s be some image classifier. For a sample x, Qpy|xq sends the x to a discrete category probability distribution for y P Y. Let QpX q denote the representations of image set X with given classifier Q. We design the characteristic function on the classifier Q over dataset D :" pX , Yq as: $$v(\tilde{\mathcal{I}};Q,\mathcal{D}):=\operatorname*{\mathbb{E}}_{x,y\sim(X,Y)}\left\{\log Q(y|x*\tilde{\mathcal{I}})-\log Q(y|x*\mathcal{D})\right\}$$ where @Ir P 2 I. The characteristic function v is a function of spectral coalitions (Ir P 2 I) over given dataset pX , Yq and classifier Q. The map v also satisfies v**pHq "** 0 if the absences of spectral players are assigned to zeros in coalition filtering (See Section 4.4). We denote vpIr; Q, Dq as vpIrq to maintain depiction succinct. We justify the choice of v by showing two claims: Claim 1 An image classifier Q trained with usual cross-entropy loss estimates a variational lower bound of the mutual information between images X and labels Y (Qin et al., 2021; Barber & Agakov, 2004); Claim 2 The map v defined on the top of the Q and pX , Yq measures the mutual information between the representations QpX q and labels Y which also is a variational bound of the mutual information between images X and labels Y. $$({\boldsymbol{8}})$$ ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) Figure 12: This is an application showcase (A5) in research demonstrating that there is a correlation between out-of-distribution (OOD) perturbations and the spectral importance scores using **I-ASIDE**. The OOD perturbations are measured by the statistical expectations of prediction probability variations (See Section 5.5). We choose multiple pretrained models (on *ImageNet*) and evaluate the perturbations with two OOD transforms: Gaussian (σ " 0.1) and Gaussian-blur (*kernel* " 3 ˆ 3). The results show that the OOD perturbations are negatively proportional to spectral importance scores. The circle sizes are proportional to spectral importance scores. It is also notable there are some outliers (e.g. *maxvit_t*), and this implies the perturbation-based robustness research can merely capture an aspect of the holistic robustness of models. If the above two claims hold, the map v over a classifier Q can assess the feature representation quality of the classifier. We first show that the 'Claim 1' holds. Let pX , Yq be some dataset. Let Q˚py 1|xq : p*x, y*1**q ÞÑ r**0, 1s be an image classifier which sends an image x P X to a probability for given class y 1 P Y. Let Ppy 1|xq be the prior probability of image x P X for given class y 1 P Y. In terms of one-hot labels, suppose the ground-truth label of image x is y, the distribution Ppy 1|xq is given by: $$P(y^{\prime}|x)=\begin{cases}1,&\text{if}y^{\prime}=y,\\ 0,&\text{if}y^{\prime}\neq y\end{cases}.\tag{9}$$ Let LCEpQ˚*, x, y*q be the cross-entropy loss over the dataset: $$\mathcal{L}_{CE}(Q^{*},x,y)\stackrel{{\mathrm{def}}}{{=}}\mathbb{E}_{x,y\sim(X,y)}\bigg{[}\sum_{y^{\prime}\in y}-P(y^{\prime}|x)\log Q^{*}(y^{\prime}|x)\bigg{]}.\tag{1}$$ The optimization objective of some image classifier with usual cross-entropy loss LCEpQ˚*, x, y*q in learning is: curve of some image classifier with scalar cross entropy $\max_{Q^{\bullet}}\mathbb{E}_{\{Q^{\bullet},\tau,y\}}$ in norm $$\inf_{Q^{\bullet}}\mathcal{L}_{CE}(Q^{\bullet},x,y)=\inf_{Y\in\Phi\ \ast\ y\sim(X,y)}\mathbb{E}\left[\sum_{y\in y}-P(y^{\prime}|x)\log Q^{\bullet}(y^{\prime}|x)\right]$$ $$=\inf_{Y\in\Phi\ \ast\ y\sim(X,y)}-\log Q^{\bullet}(y|x)$$ $$=\sup_{Y\in\Phi\ \ast\ y\sim(X,y)}\mathbb{E}_{\{X,y\}}-\log Q^{\bullet}(y|x)$$ $$(10)$$ ![13_image_0.png](13_image_0.png) Figure 13: This shows how the relative estimation errors converge with respect to the numbers of samples K. The errors are measured by: 1M ||Ψpi`1qpvq ´ Ψpiqpvq||1 where Ψpiqpvq denotes the i-th measured spectral importance distribution with respect to characteristic function v. The experiments are conducted on CIFAR10, CIFAR100 and *ImageNet* with *resnet18*. Lemma 4.1. **B.A. Feature Representation Variational Mutual Information Bound.** Let pX , Yq be some dataset. Let Q˚py|xq be some classifier. Let Q be the optimal model optimized on the dataset with cross-entropy loss. Let QpX q be the representations of the set X . Let HpYq be the differential entropy of Y. Let IpX ;Yq be the mutual information between images and labels. The below variational bound holds and we refer the LHS as 'B.A. feature representation variational mutual information bound' or 'B.A. mutual information bound'. The bound is: $$\begin{CD}\mathbb{I}_{BA}(\mathcal{Q}(\mathcal{X});\mathcal{Y})\stackrel{{\text{def}}}{{=}}\sup_{\mathcal{Q}^{\bullet}\ x,y\sim(\mathcal{X},\mathcal{Y})}\mathbb{E}_{\mathcal{Q}^{\bullet}(y|x)+H(\mathcal{Y})\leqslant\mathbb{I}(\mathcal{X};\mathcal{Y})}\\ \hline\end{CD}\tag{1}$$ $$(14)$$ $$\left(15\right)$$ where the notation IBApQpX q;Yq *is read as: The B.A. variational mutual information bound of dataset* pX , Yq on classifier Q*. The proof is provided in Appendix A.1.* Clearly, the Lemma 4.1 shows 'Claim 1' holds: $$\inf_{Q^{*}}{\cal L}_{CE}(Q^{*},x,y)=\mathbb{I}_{BA}(Q(X);{\cal Y})-H({\cal Y}).\tag{1}$$ We now can show 'Claim 2' holds. Set X ‹ Ir :" tx ‹ Ir|x P X u. Applying the Lemma 4.1 into the equation (8), we can rewrite the characteristic function v as: $$\mathbf{v}(\widehat{\mathbf{I}}):=\underbrace{\mathbb{I}_{BA}(\mathbf{Q}(\mathbf{\mathcal{X}}\bullet\widehat{\mathbf{I}}),\mathbf{\mathcal{Y}})}_{\text{B.A.bound}}-\underbrace{\mathbb{I}_{BA}(\mathbf{Q}(\mathbf{\mathcal{X}}\bullet\mathbf{\mathcal{Z}}),\mathbf{\mathcal{Y}})}_{\text{constant}}\tag{1}$$ $$(\mathbf{16})$$ which clearly measures the mutual information between between representations and labels, and also the B.A. variational bound of the mutual information between images and labels. The two claims give a theoretical interpretation and justify the design of the characteristic function in **I-ASIDE**. This implies **I-ASIDE** itself is interpretable at design level. ## 4.6 Summarizing Spectral Importance Distribution We next discuss how to summarize the distributions into real scalars. Let Spvq : Ψpv**q ÞÑ r**0, 1s be a functional on field R which summarizes the derived spectral importance distributions into real scalars. There are several choices to summarize according to our needs. We are interested in feature representation robustness. We summarize the scores by summing with a feature representation robustness prior series as weights: The learned feature representations dominated by low-frequency components are more robust than those feature representations dominated by high-frequency components. Intuitively, we can rank the importance of the M spectral players as: βr :" p0, ´1, ´2, **¨ ¨ ¨** , ´pM ´ 1qq. We use an exponential transform with base β to convert such a ranking vector βr into a non-negative series: β " pβ 0, β1, **¨ ¨ ¨** , βM´1q where 0 ă β ă 1. The weighting vector β preserves the ranking information but the weights vary according to the choice of β. Set Ψ˚pvq " Ψpvq´min Ψpvq ||Ψpvq´min Ψpvq||1 for normalizing Ψpvq. The normalized Ψ˚pvq can be summarized by the inner product with the prior weighting vector β: β T Ψ˚pvq. We further take into account the random decision as a baseline by the same model with randomly initialized weights: $$\left|\beta^{T}\Psi^{*}(v)-\underbrace{\mathbb{E}[\beta^{T}\Psi^{*}({\tilde{v}})]}_{\mathrm{baseline}}\right|$$ (17) $$(17)$$ where vr denotes the characteristic function of some model with randomly initialized weights and E vr rβ T Ψ˚pvrqs denotes the statistical expectation of the measured distributions when the model over all random weights. We set: $$\mathbb{E}[\beta^{T}\Psi^{*}(\tilde{v})]\approx\beta^{T}\frac{1}{M}.\tag{1}$$ The approximation equation (18) holds true because the motivation experiments in Figure 4 show that the measured spectral importance distributions of models with randomized weights exhibit uniformity. The result can further be normalized into r0, 1s by: $$S(v):={\frac{\left|\beta^{T}\Psi^{*}(v)-\beta^{T}{\frac{1}{M}}\right|}{\operatorname*{sup}\left|\beta^{T}\Psi^{*}(v)-\beta^{T}{\frac{1}{M}}\right|}}=\left|{\frac{\beta^{*}{}^{T}\Psi^{*}(v)-\eta}{1-\eta}}\right|$$ $$(18)$$ $$(19)$$ where β P p0, 1q, β ˚ "β ||β||2 and η " 1 M ||β||1 ||β||2 . The deduction of this formula is provided in Appendix A.3. ## 4.7 Estimation Of Error Upper Bound Suffering from the limit of computational resources, the statistical expectation is evaluated by using Monte Carlo sampling. We analyze the error bound by taking K samples. Set ∆vpI˜, Iiq :" vpI˜ Y tIi**uq ´** vpI˜q. Let ψ ˚ i and ∆v ˚pI˜, Iiq be estimations with K samples using Monte Carlo sampling. The error bound with ℓ1 norm is given by: $$\epsilon_{K}\stackrel{{\rm def}}{{=}}\sup_{i}\underset{(X,Y)}{\mathbb{E}}||\psi_{i}^{\star}(\mathcal{I},v)-\psi_{i}(\mathcal{I},v)||_{1}\leqslant2^{M-1}\cdot\left\{\frac{Var(\Delta v^{\star})}{K}\right\}^{\frac{1}{2}}\tag{20}$$ where *V ar*p∆v ˚q gives the upper bound of the variance of ∆v ˚pI˜, Iiq. Clearly, lim ϵK ÞÑ 0 as K **ÞÑ 8**. The proof is provided in Appendix A.4. Figure 13 shows the experiments of the relative estimation errors of a resnet18 model on CIFAR10, *CIFAR100* and *ImageNet* datasets. In our experiments, we empirically choose K " 200 due to this error upper bound. ## 5 Applications We showcase multiple applications to demonstrate the potential uses of **I-ASIDE**2: (A1) Quantifying model global robustness by examining spectral importance distributions, (A2) understanding the learning behaviours of image models on the datasets with supervision noise, (A3) investigating the learning dynamics of models from the perspective of the evolution of feature representation robustness in optimization, (A4) understanding the adversarial vulnerability of models by examining spectral importance distributions and (A5) studying image model out-of-distribution (OOD) robustness. 2The core code implementation is provided in supplementary material. ## 5.1 Showcase A1: Quantifying Model Global Robustness We measure the robustness by examining the spectral importance distributions or the spectral importance scores. Spectral importance distribution: The experiments in Figure 4 showcase the measured spectral distributions of multiple models with multiple datasets. The results show that the spectral importance of the learned feature representations of trained models is non-uniform. Furthermore, models trained on larger training datasets exhibit higher robustness. Numerical comparison: The experiment (a) in Figure 8 showcases the application of numerically comparing model robustness. The results correlate with the adversarial perturbation experiments in Figure 11 in which we measure the prediction probability variations between clean samples and adversarial samples for given labels with un-targeted FGSM/PGD attacking (Szegedy et al., 2013; Moosavi-Dezfooli et al., 2016; Goodfellow et al., 2014; Madry et al., 2017). The result implies that the numbers of trainable parameters seem not to play a crucial aspect on model robustness. Visualizing by projection: The experiment (b) in Figure 8 showcases the projection of spectral importance distributions. The projection is performed by using t-SNE (Hinton & Roweis, 2002). ## 5.2 Showcase A2: Understanding How Models Respond To Supervision Noise The experiments in Figure 9 showcase the investigation of how models respond to various label noise levels in the training process. We create the noisy-label datasets from clean human-annotated *Caltech101* by randomly re-assigning a proportion of labels. We vary the proportions (label noise levels) from 0 to 1 with stride 0.1. We train googlenet, *resnet18* and vit on the derived noisy-label datasets with 70 epochs, learning rate 0.0025 and SGD optimizer. The image sizes are set to 64 ˆ 64, 64 ˆ 64 and 224 ˆ 224 respectively. The results are visualized in 2D heat maps. We have observed the existence of a 'spectral drift' effect correlated with the levels of supervision signal noise in our preliminary showcase experiments. The models trained with lower levels of supervision signal noise tend to rely on the features dominated by low-frequency components, while the models trained with higher levels of supervision noise do not exhibit such a spectral preference. It is noteworthy that the 'spectral drift' effect is not a simple monotonic correlation with the levels of supervision noise. This phenomenon requires further investigation with additional experiments. Models exhibit disparate learning dynamics on human-annotated datasets and random-annotated datasets. As the theoretical analysis shown in Section 4.5, **I-ASIDE** is a quantification method for measuring the spectral importance of the variational mutual information between feature representations and labels. Intuitively, we surmise this '*spectral drift*' effect originates from that the supervision signals come from humans and the relevant features perceived by humans largely concentrate on low-frequency components due to the biological limit of our vision system. The training objectives for usual supervised classifiers are equivalent to estimating a variational lower bound of the mutual information between images and labels (See Section 4.5). Yet, it is not in the scope of this research. ## 5.3 Showcase A3: Investigating How Feature Representations Are Learned In Training The experiments in Figure 10 showcase how the feature representations learned by models evolve with respect to epochs in training process. The learning dynamics of models exhibit two stages. In the first stage, models readily learn more robust feature representations. In the second stage, models learn more non-robust feature representations to achieve higher accuracy. We train '*googlenet*', '*efficientnet_v2_s*' and '*resnet18* ' with 120 epochs, SGD optimizer and learning rate 0.0025. The training dataset is *Caltech*101. The spectral importance distributions are measured for every 5 epochs using **I-ASIDE**. We present the results using 2D heat maps with the corresponding training and validation loss curves. The first row is the training dynamics of feature robustness. The second row is the corresponding training and validation loss curve. The results suggest that models have two major learning stages: (1) Feature exploration stage and (2) feature exploitation stage. At the feature exploration stage (1), both training loss and validation loss drop synchronously. For example, the epochs 1 to 35 for *googlenet*, the epochs 1 to 40 for *efficientnet_v2_s* and the epochs 1 to 70 for *resnet18*. This stage explores the features which are more general and the peaks of spectral distributions move towards low-frequency components. At the exploitation (2), models continue to refine the learned feature representations for further lower training loss. However, overfitting emerges at this stage. This stage uses more features dominated by low-frequency components. The results echo with the claim in a prior literature (Wang et al., 2020): High-frequency components are essential for the generalization capability of image models (Wang et al., 2020). ## 5.4 Showcase A4: Understanding Adversarial Vulnerability Adversarial samples carry a high proportion of non-robust features (Su et al., 2018; Ilyas et al., 2019; Bai et al., 2021; Tsipras et al., 2018). Intuitively, the models with high spectral importance scores are less likely susceptible to adversarial perturbations. In Figure 11, we show how FGSM/PGD (Yuan et al., 2019; Madry et al., 2017; Akhtar & Mian, 2018; Chakraborty et al., 2018) adversarial perturbations correlate with the spectral importance scores by **I-ASIDE**. The full results are provided in supplementary material. We measure the prediction probability variations between the prediction probabilities of clean samples and adversarial samples. We use the 'smooth' version of Softmax function with temperature T to convert logits into probabilities (Hinton et al., 2015; Goodfellow et al., 2016; Jang et al., 2016). Experimental method: Let I be some clean image and I ˚ be the corresponding adversarial sample. Let Ppy|xq : x **ÞÑ r**0, 1s be some image classifier which outputs the probability with respect to some category y. We define the adversarial perturbation ∆Pr as E x,y∼pX ,Yq |Ppy|xq ´ Ppy|x ˚q| over some dataset pX , Yq where p*x, y*q denotes an image and the corresponding label and x ˚ denotes adversarial sample for x. We plot the prediction perturbations against spectral importance scores. The results in Figure 11 shows that the adversarial perturbations are negatively correlated with spectral importance scores. The outliers remind us: Spectral importance distributions can merely reflect one aspect of the robustness of models. ## 5.5 Showcase A5: Studying Model Out-Of-Distribution (Ood) Robustness We also showcase how the spectral importance scores by using **I-ASIDE** correlates with the OOD perturbations to further justify our methodology. The measurement method is the same as in Section 5.4. We choose two OOD perturbations: Gaussian (σ " 0.1) and blurring with Gaussian kernel (*kernel* " 3ˆ3). The preliminary results show that there exists a correlation between OOD perturbations and spectral importance scores. ## 6 Limitations I-ASIDE provides a unique insight into the interpretability of image models from the feature robustness on spectral perspective. Yet, our methodology has several limitations: (1) Spectral perspective can merely reflect one aspect of the holistic view of model robustness, (2) the masking strategies in coalition filtering have yet to be investigated in future, (3) the choice of β in summarizing model robustness scores and (4) the resolution of the measured spectral distributions for image models. Regarding the limitation (1), for example, carefully crafted malicious adversarial perturbations on lowfrequency components can fool neural networks (Luo et al., 2022; Liu et al., 2023; Maiya et al., 2021). This further implies the complexity of this research topic. Regarding the limitation (2), we have preliminarily discussed several masking strategies (See Figure 7 and Section 4.4). We need further investigation regarding how to choose the optimal masking strategy. Regarding the limitation (3), providing a better summarizing function (S : RM **ÞÑ r**0, 1s) needs further investigation in future work. Regarding the limitation (4), as the spectral energy follows power law alike distribution, measuring the expectations of spectral contributions on decisions on high-frequency components poses a challenge. As such, **I-ASIDE** does not use sampling based approach on spectral coalitions to avoid inaccurate results and thus suffers from the computation cost by Op2Mq. Fortunately, we do not need high spectral resolution to analyze model robustness problem. ## 7 Conclusions This research is motivated by the link between feature robustness and the non-uniformity of the spectral structures of natural images, in tandem with the insight that neural networks are parameterized non-linear signal filters. **I-ASIDE** investigates how the 'neural signal filter' responds to signals on spectrum. We formulate the global feature importance as a decomposition problem on spectrum using coalition game theory. We theoretically show that our approach decomposes the mutual information between feature representations and labels onto spectrum. Our work provides a unique insight into deep learning research and enables a considerable number of applications as we have demonstrated. We have conducted a large number of experiments to support our claims. ## References Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. Ieee Access, 6:14410–14430, 2018. André Altmann, Laura Toloşi, Oliver Sander, and Thomas Lengauer. Permutation importance: a corrected feature importance measure. *Bioinformatics*, 26(10):1340–1347, 2010. Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information fusion, 58:82–115, 2020. Robert J Aumann and Michael Maschler. Game theoretic analysis of a bankruptcy problem from the talmud. Journal of economic theory, 36(2):195–213, 1985. Robert J Aumann and Lloyd S Shapley. *Values of non-atomic games*. Princeton University Press, 2015. Yonatan Aumann and Yair Dombb. The efficiency of fair division with connected pieces. ACM Transactions on Economics and Computation (TEAC), 3(4):1–16, 2015. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7):e0130140, 2015. Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. Recent advances in adversarial training for adversarial robustness. *arXiv preprint arXiv:2102.01356*, 2021. David Barber and Felix Agakov. The im algorithm: a variational approach to information maximization. Advances in neural information processing systems, 16(320):201, 2004. David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6541–6549, 2017. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013. Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. In Artificial Neural Networks and Machine Learning–ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II 25, pp. 63–71. Springer, 2016. Kenneth R Castleman. *Digital image processing*. Prentice Hall Press, 1996. Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey. *arXiv preprint arXiv:1810.00069*, 2018. Zhi Chen, Yijie Bei, and Cynthia Rudin. Concept whitening for interpretable image recognition. Nature Machine Intelligence, 2(12):772–782, 2020. Ian Covert, Scott M Lundberg, and Su-In Lee. Understanding global feature contributions with additive importance measures. *Advances in Neural Information Processing Systems*, 33:17212–17223, 2020. Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. *Advances in Neural Information Processing Systems*, 32, 2019. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Brandon M Greenwell, Bradley C Boehmke, and Andrew J McCarthy. A simple and effective model-based variable importance measure. *arXiv preprint arXiv:1805.04755*, 2018. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. Local rule-based explanations of black box decision systems. *arXiv preprint arXiv:1805.10820*, 2018. Sergiu Hart. Shapley value. In *Game theory*, pp. 210–216. Springer, 1989. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015. Geoffrey E Hinton and Sam Roweis. Stochastic neighbor embedding. Advances in neural information processing systems, 15, 2002. Kathy J Horadam. *Hadamard matrices and their applications*. Princeton university press, 2012. Roger A Horn. The hadamard product. In *Proc. Symp. Appl. Math*, volume 40, pp. 87–169, 1990. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *Advances in neural information processing systems*, 32, 2019. Bernd Jähne. *Digital image processing*. Springer Science & Business Media, 2005. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International conference on machine learning, pp. 2668–2677. PMLR, 2018. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In *International Conference on Machine Learning*, pp. 5338–5348. PMLR, 2020. Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, and Gitta Kutyniok. Cartoon explanations of image classifiers. In *European Conference on Computer Vision*, pp. 443–458. Springer, 2022. Solomon Kullback. *Information theory and statistics*. Courier Corporation, 1997. Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. Faithful and customizable explanations of black box models. In *Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 131–138, 2019. Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable ai: A review of machine learning interpretability methods. *Entropy*, 23(1):18, 2020. Zachary C Lipton. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. *Queue*, 16(3):31–57, 2018. Jiyuan Liu, Bingyi Lu, Mingkang Xiong, Tao Zhang, and Huilin Xiong. Low frequency sparse adversarial attack. *Computers & Security*, 132:103379, 2023. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. *Advances in neural* information processing systems, 30, 2017. Cheng Luo, Qinliang Lin, Weicheng Xie, Bizhu Wu, Jinheng Xie, and Linlin Shen. Frequency-driven imperceptible adversarial attack on semantic similarity. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 15315–15324, 2022. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Shishira R Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, and Abhinav Shrivastava. A frequency perspective of adversarial robustness. *arXiv preprint arXiv:2111.00861*, 2021. Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. Layer-wise relevance propagation: an overview. *Explainable AI: interpreting, explaining and visualizing* deep learning, pp. 193–209, 2019. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. *Advances in neural information processing systems*, 27, 2014. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 2574–2582, 2016. Hervé Moulin. An application of the shapley value to fair division with money. *Econometrica: Journal of the* Econometric Society, pp. 1331–1349, 1992. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. *Advances in neural information* processing systems, 29, 2016a. Anh Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. *arXiv preprint arXiv:1602.03616*, 2016b. Anh Nguyen, Jason Yosinski, and Jeff Clune. Understanding neural networks via feature visualization: A survey. *Explainable AI: interpreting, explaining and visualizing deep learning*, pp. 55–76, 2019. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. *Advances in neural information processing systems*, 29, 2016. Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. *Distill*, 2(11):e7, 2017. Alan V Oppenheim. Applications of digital signal processing. *Englewood Cliffs*, 1978. Mícheál O'Searcoid. *Metric spaces*. Springer Science & Business Media, 2006. Soo-Chang Pei and Chien-Cheng Tseng. A comb filter design using fractional-sample delay. *IEEE Transactions* on Circuits and Systems II: Analog and Digital Signal Processing, 45(5):649–653, 1998. Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In *International Conference on Machine Learning*, pp. 5171–5180. PMLR, 2019. Zhenyue Qin, Dongwoo Kim, and Tom Gedeon. Neural network classifier as mutual information evaluator. arXiv preprint arXiv:2106.10471, 2021. Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. In *international conference on machine learning*, pp. 2847–2854. PMLR, 2017. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International Conference on Machine Learning, pp. 5301–5310. PMLR, 2019. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. " why should i trust you?" explaining the predictions of any classifier. In *Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery* and data mining, pp. 1135–1144, 2016. Richard A Roberts and Clifford T Mullis. *Digital signal processing*. Addison-Wesley Longman Publishing Co., Inc., 1987. Alvin E Roth. *The Shapley value: essays in honor of Lloyd S. Shapley*. Cambridge University Press, 1988. Andrew M Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D Tracey, and David D Cox. On the information bottleneck theory of deep learning. *Journal of Statistical Mechanics:* Theory and Experiment, 2019(12):124020, 2019. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In *Proceedings* of the IEEE international conference on computer vision, pp. 618–626, 2017. Lloyd S Shapley. A value for n-person games. *Classics in game theory*, 69, 1997. Lloyd S Shapley and Martin Shubik. Pure competition, coalitional power, and fair division. *International* Economic Review, 10(3):337–362, 1969. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In *International conference on machine learning*, pp. 3145–3153. PMLR, 2017. Samuel Henrique Silva and Peyman Najafirad. Opportunities and challenges in deep learning adversarial robustness: A survey. *arXiv preprint arXiv:2007.00753*, 2020. George F Simmons. *Introduction to topology and modern analysis*, volume 44. Tokyo, 1963. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. *arXiv preprint arXiv:1312.6034*, 2013. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *arXiv preprint arXiv:1706.03825*, 2017. Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pp. 3319–3328. PMLR, 2017. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Lizhe Tan and Jean Jiang. *Digital signal processing: fundamentals and applications*. Academic Press, 2018. Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. *arXiv preprint arXiv:1805.12152*, 2018. Yusuke Tsuzuku and Issei Sato. On the structural sensitivity of deep convolutional networks to the directions of fourier basis functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 51–60, 2019. René van den Brink. An axiomatization of the shapley value using a fairness property. International Journal of Game Theory, 30:309–319, 2002. Haohan Wang, Xindi Wu, Zeyi Huang, and Eric P Xing. High-frequency component helps explain the generalization of convolutional neural networks. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 8684–8694, 2020. Eyal Winter. The shapley value. *Handbook of game theory with economic applications*, 3:2025–2054, 2002. Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. *arXiv preprint arXiv:1901.06523*, 2019a. Zhi-Qin John Xu, Yaoyu Zhang, and Yanyang Xiao. Training behavior of deep neural network in frequency domain. In *International Conference on Neural Information Processing*, pp. 264–274. Springer, 2019b. Menahem E Yaari and Maya Bar-Hillel. On dividing justly. *Social choice and welfare*, 1:1–24, 1984. Kösaku Yosida. *Functional analysis*. Springer Science & Business Media, 2012. Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. *IEEE transactions on neural networks and learning systems*, 30(9):2805–2824, 2019. Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In *2010* IEEE Computer Society Conference on computer vision and pattern recognition, pp. 2528–2535. IEEE, 2010. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In *International conference on machine learning*, pp. 7472–7482. PMLR, 2019. Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. A survey on neural network interpretability. *IEEE* Transactions on Emerging Topics in Computational Intelligence, 2021. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929, 2016. ## A Appendix A.1 A Brief Proof On B.A. Variational Bound Proof. Barber & Agakov shows a tight variational lower bound of mutual information (Barber & Agakov, 2004). Qin et al. also demonstrates that classifiers trained with cross-entropy loss estimate the mutual information between inputs and labels (Qin et al., 2021). The proof technique can be found in multiple literature (Poole et al., 2019; Barber & Agakov, 2004; Qin et al., 2021). Let pX , Yq be some dataset. Assume X ˆ Y is compact in R d and all probability measures on which are absolutely continuous in terms of Lebesgue measure. Let Ppy|xq and Ppyq be the prior conditional distribution and the prior label distribution on the dataset. The mutual information between X and Y is bounded above the mutual information between the representations of X (denoted as QpX q) and the labels for the image classifier Qpy|xq. Let Q˚py|xq P tT|Tpy|xq : x **ÞÑ p**0, 1su be an arbitrary probability measure map. Starting from the definition of mutual information: IpX ; Yq " E x,y∼pX ,Yq log Ppy|xq Ppyqě sup Q˚ "E x,y∼pX ,Yq log Ppy|xq Ppyq Q˚py|xq Q˚py|xq *(21) " sup Q˚ "E x,y∼pX ,Yq log Q ˚py|xq ´ E x,y∼pX ,Yq log Ppyq ` E x,y∼pX ,Yq log Ppy|xq Q˚py|xq ě sup Q˚ E x,y∼pX ,Yq log Q ˚py|xq ` HpYq (23) " sup Q˚ E x,y∼pX ,Yq log Q ˚py|xq ` HpYq def " IBApQpX q; Yq (24) *(22) $$(24)$$ where E x,y∼pX ,Yq log P py|xq Q˚py|xq " E $\overline{)}\;=\;\underset{x\sim\mathcal{X}}{\mathbb{E}}\left\{KL\left[P(y|x)||Q^x\right]\right\}$ . ## Tklrppy|Xq||Q˚Py|X**Qsu Ě** 0 Since Pp*X, Y*Q " Ppy|Xqppxq And The Nonnegativity Property Of Kullback–Leibler Divergence (Gibbs' Inequality) (Poole Et Al., 2019; Barber & Agakov, 2004; Qin Et Al., 2021; Kullback, 1997). The Bound Is Tight *If And Only If* : Q˚ " P. Readers Can Refer The Similar Proof Technique And The Definition To The Equation (2) In Literature (Poole Et Al., 2019) And The Equation (1) And The Equation (3) In Literature (Barber & Agakov, 2004). A.2 Discrete Fourier Transform (Dft) The notion 'frequency' measures how 'fast' the outputs can change if inputs vary. High frequency suggests small variations in inputs can cause large changes in outputs. In terms of images, the 'inputs' are the pixel spatial locations while the 'outputs' are the pixel values. Let x : p*i, j***q ÞÑ** R be some 2D image with dimension M ˆ N which sends every location p*i, j*q to some real pixel value where p*i, j***q P r**0, M ´ 1**s ˆ r**0, N ´ 1s. Let xˆ : p*m, n***q ÞÑ** C be the Discrete Fourier transform (DFT) of x in which p*m, n***q P r**0, M ´ 1**s ˆ r**0, N ´ 1s denotes some 2D spatial frequency points. Let F : x ÞÑ xˆ be some DFT functional operator. The DFT of x is given by: $$\hat{x}(m,n):=\mathscr{F}(x)(m,n)=\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}x(i,j)e^{-i\,2\pi(\,\frac{\pi}{M}i+\frac{\pi}{N}j)}.\tag{25}$$ Let F´1: ˆx ÞÑ x be the inverse DFT (IDFT) functional operator. The IDFT is given by: $$x(i,j):=\mathscr{F}^{-1}(\hat{x})(i,j)=\frac{1}{M\cdot N}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}\hat{x}(m,n)e^{i2\pi(\frac{i}{M}m+\frac{j}{N}n)}.\tag{26}$$ ## A.3 Simplifying Score Formula Let pI, vq be a spectral coalition game with the characteristic function v with spectral player set I. Set β :" pβ kq M´1 k"0 where β P p0, 1q. Let Ψpvq be some spectral importance distribution. Set: $$\Psi^{*}(v)=\frac{\Psi(v)-\min\Psi(v)}{||\Psi(v)-\min\Psi(v)||_{1}}.\tag{1}$$ $$(27)$$ We consider the inner product of β and and Ψ˚pvq against the inner product of β and an equiprobability random decision to assess the decision: $$\left|\beta^{T}\Psi^{*}(v)-\beta^{T}\frac{1}{M}\right|=\left|\beta^{T}\Psi^{*}(v)-\frac{||\beta||_{1}}{M}\right|.\tag{1}$$ We normalize the above result and set: where β ˚ "β ||β||2 and sup β T Ψ˚pvq ´ ||β||1 M is derived by: $$\sup\left|\beta^{T}\Psi^{\star}(v)-\frac{||\beta||_{1}}{M}\right|=\left|\sup\beta^{T}\Psi^{\star}(v)-\frac{||\beta||_{1}}{M}\right|$$ $$=\left|\sup||\beta||_{2}\cdot||\Psi^{\star}(v)||_{2}-\frac{||\beta||_{1}}{M}\right|\quad\text{s.t.}||\Psi^{\star}(v)||_{1}=1$$ $$=\left|||\beta||_{2}-\frac{||\beta||_{1}}{M}\right|\quad\text{since}||\Psi^{\star}(v)||_{2}^{2}\leqslant||\Psi^{\star}(v)||_{1}^{2}.$$ Set η " 1 M ||β||1 ||β||2 : $$S(v)=\left|\frac{\beta^{*}{}^{T}\Psi^{*}(v)-\eta}{1-\eta}\right|.$$ (33) $\text{(34)}$ (35) (36) $$(36)$$ $$Q.E.D.$$ nd 2000: $\begin{split}S(v):&=\frac{\left|\beta^T\Psi^*(v)-\frac{||\beta||_1}{M}\right|}{\sup\left|\beta^T\Psi^*(v)-\frac{||\beta||_1}{M}\right|}\\ &=\frac{\left|\beta^T\Psi^*(v)-\frac{||\beta||_1}{M}\right|}{\sup||\beta||_2\cdot||\Psi^*(v)||_2-\frac{||\beta||_1}{M}\right|}\\ &=\frac{\left|\beta^T\Psi^*(v)-\frac{||\beta||_1}{M}\right|}{\left|||\beta||_2-\frac{||\beta||_1}{M}\right|}\\ &=\frac{\left|\beta^{*T}\Psi^*(v)-\frac{1}{M}\frac{||\beta||_1}{||\beta||_2}\right|}{1-\frac{1}{M}\frac{||\beta||_1}{||\beta||_2}}.\end{split}$ - $$(29)$$ $$(30)$$ $$(31)$$ $$(28)$$ $$(32)$$ ## A.4 Proof Of Error Bound Let K be the number of the samples of some baseline dataset. Let: $$\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i}):=v(\tilde{\mathcal{I}}\cup\{\mathcal{I}_{i}\})-v(\tilde{\mathcal{I}})$$ ∆vpI˜, Iiq :" vpI˜ Y tIi**uq ´** vpI˜q (37) and $$\Delta v({\mathcal{I}}_{i}):=\left(\Delta v({\tilde{\mathcal{I}}},{\mathcal{I}}_{i})\right)_{{\mathcal{I}}\subseteq{\mathcal{I}}}$$ ˘I˜ĎI(38) and $$W:=\left(\frac{1}{M}\binom{M-1}{|\tilde{\mathcal{I}}|}^{-1}\right)_{\tilde{\mathcal{I}}\subseteq\mathcal{I}}.$$ $$(37)$$ $$(38)^{\frac{1}{2}}$$ $$(39)^{\frac{1}{2}}$$ $$(40)$$ $$\begin{array}{l}{(41)}\\ {\qquad(42)}\end{array}$$ $$(43)$$ $$(44)$$ $$(45)$$ $$(46)$$ Hence: $$\psi_{i}({\mathcal{I}},v)=W^{T}\Delta v({\mathcal{I}}_{i})$$ ψipI, vq " WT ∆vpIiq (40) where ||W||1 " 1 since W is a probability distribution. Let ψ ˚ i , ∆v ˚pIiq and ∆v ˚pI˜, Iiq be estimations with K samples using Monte Carlo sampling. The error bound with ℓ1 norm is given by: where *V ar*p∆v ˚q gives the upper bound of the variance of ∆v ˚pI˜, Iiq. ϵ def " sup i E pX ,Yq ||ψ ˚ ipI, vq ´ ψipI, vq||1 " sup i E pX ,Yq ||WT ∆v ˚pIiq ´ WT ∆vpIiq||1 (41) ď sup i E pX ,Yq ||W||1 ¨ ||∆v ˚pIiq ´ ∆vpIiq||8`Hölder1s inequality˘(42) " sup i E pX ,Yq || ÿ I˜ĎIzIi `∆v ˚pI˜, Iiq ´ ∆vpI˜, Iiq ˘||8 (43) ď sup i E pX ,Yq 2 M´1sup I˜||∆v ˚pI˜, Iiq ´ ∆vpI˜, Iiq||8 (44) " sup i E pX ,Yq 2 M´1sup I˜||∆v ˚pI˜, Iiq ´ ∆vpI˜, Iiq||1 (45) ď 2 M´1¨ "V arp∆v ˚q K *12(46) ![26_image_0.png](26_image_0.png) ## A.5 Full Experimental Data In Training Dynamics Figure 14: This is an application showcase (A3) in research demonstrating how the robustness of feature representations of models evolves with respect to training epochs. We train '*googlenet*', '*efficientnet_v2_s*' and '*resnet18* ' with 120 epochs, SGD optimizer and learning rate 0.0025. The training dataset is *Caltech*101. The spectral importance distributions are measured for every 5 epochs using **I-ASIDE**. We present the results using 2D heat maps with the corresponding training and validation loss curves. The first row is the training dynamics of feature robustness. The second row is the corresponding training and validation loss curve. The results suggest that models have two major learning stages: (1) Feature exploration stage and (2) feature exploitation stage. At the feature exploration stage (1), both training loss and validation loss drop synchronously. For example, the epochs 1 to 35 for *googlenet*, the epochs 1 to 40 for *efficientnet_v2_s* and the epochs 1 to 70 for *resnet18*. This stage explores the features which are more general and the peaks of spectral distributions move towards low-frequency components. At the exploitation (2), models continue to refine the learned feature representations for further lower training loss. However, overfitting emerges at this stage. This stage uses more features dominated by low-frequency components. The results echo with the claim in a prior literature (Wang et al., 2020): High-frequency components are essential for the generalization capability of image models (Wang et al., 2020). A.6 Full experimental data in adversarial perturbations ![27_image_0.png](27_image_0.png) Figure 15: This is an application showcase (A4) in research demonstrating that there is a correlation between adversarial perturbations and the spectral importance scores using **I-ASIDE**. The adversarial perturbations are measured by the statistical expectations of prediction probability variations (See Section 5.4). We choose multiple pretrained models (on *ImageNet*) and perform un-targeted FGSM/PGD attacking with eps " 0.1 and eps " 0.2 respectively. The adversarial samples carry a high proportion of non-robust features (Ilyas et al., 2019) (See Figure 2). For most models, adversarial perturbations are negatively proportional to spectral importance scores. The circle sizes are proportional to spectral importance scores. It is also notable there are some outliers (e.g. *shufflenet*), and this implies the perturbation-based robustness research can merely capture an aspect of the holistic robustness of models.
Review 1: Summary: This article presents a method to estimate a score for global interpretability of image model robustness, through image modifications in Fourier domain.The problem is formalized as a coalition game where the Shapely value theory is applied to derive the score. Numerical results show the correlation of this score with respect to properties of various neural networks, such as label noise / adversarial / out-of-distribution robustness. Strengths and Weaknesses: The idea of using Shapely value theory seems to be novel in the context of global interpretability. It was proposed in Lundberg and Lee 2017 to interpret feature importance. This article applies this theory by using considering spectral domain information (Fourier spectrum) of images. Although the idea is interesting, the article should be improved regarding its contribution, as well its clarity. Requested Changes: Clarify several definitions * How do you define the value of v(tilde I union { I_i }) in eq. 1? In eq. 3, you used the convolution of x with tilde I, then how is this computed for tilde I union { I_i } ? * The definition of Q in eq. 5 shows that it is an optimal classifier for a given model and dataset. When you use a perturbed dataset Q*tilde I in eq. 8, are you considering another optimal classifier for the perturbed dataset? The notation Q(X) is rather confusing as it is called the representations, then it becomes the optimal classifier. * It is not clear how can Q be related to define a classifier with random weights in eq. 13. What is the definition of tilde v ?Why it is 1/M in eq. 14? Clarify several contributions * Introduction: when you talk about how features are represented inside black box models. What do the features mean? This point is not so clear from the rest of the article. * The authors mentioned in the introduction that interpreting the global feature important in spatial domain has inherited limits. Is it possible to illustrate some of these limitations in your applications, to compare with your proposed method? * The sensitivity analysis regarding the choice of hyper-parameters in the proposed approach can also added. In particular, would this explain why there is an outlier in the score correlation for the shufflenet in Fig 10? What happens if the eps is not set to 0.1. * How the errors in Figure 12 are related to the eps in eq. 20? Broader Impact Concerns: The idea of computing a global score to measure image model robustness is quite ambitious. Given the existing literature, it is still not very clear what is the main advantage of using this method, besides the claim that the proposed method seems to be a quite general tool. The connection between the proposed score and the intrinsic property of the studied models would require further study. ================================================== Review 2: Summary: The submission explores the connection between global (i.e., over a population) spectral feature importance and robustness of deep-learning-based image classifiers. The authors propose a principled way to evaluate global importance of spectral features inspired by the axioms of the Shapley value. With this, they introduce a *spectral importance score* that quantifies how well the distribution of the importance of spectral features from low- to high-frequency aligns with a *robust prior* that favors low-frequency spectral features. The intuition is based on prior works which show convolutional neural networks are anisotropic in the Fourier basis, and that high-frequency spectral features are less robust than low-frequency ones. Different case-studies showcase how the idea of spectral importance can be deployed in practice to study the behavior of deep-learning-based image classifiers across a variety of scenarios: from model training with label noise to robust evaluation. Empirical results show there exists a correlation between spectral importance score and adversarial robustness. Strengths and Weaknesses: Strenghts: * The idea of deploying ideas of global feature importance to analyze the robustness of image classifiers is novel and intriguing. * The experimental section does a good job at showcasing the practical applications of the proposed method. Weaknesses: * Notation---coming from the explainability field---is somewhat unusual and hard to follow * Seminal works on global feature importance and spectral feature importance are missing I am looking forward to engage with the authors to discuss a few comments and questions that came up while reading the submission. Requested Changes: **Prior works on global feature importance** The idea of *explaining* the risk of a classifier with global feature attributions that satisfy the axioms of the Shapley value has been explored by Covert et al. in *``Understanding Global Feature Contributions With Additive Importance Measures''* (2020). In particular, Eq. (4) in the submission is a variation of Eq. (2) in the reference. The submission distinguishes itself by evaluating the importance of spectral features rather than pixels for image classifiers. It might be helpful to clarify how the two are related. --- **Prior works on spectral feature importance** From an information-theoretical perspective, Kolek et al. study *local* spectral feature importance in *``Cartoon Explanations of Image Classifiers''*. The submission is different in that feature importance is evaluated over a population rather than over an individual sample, and in the Fourier domain instead of the wavelet domain. Readers might appreciate this be clarified in the submission. --- **Notation clarifications** It is my understanding the gain measure $\mu: \tilde{\mathcal{I}} \to \mathbb{R}$ is what the game-theoretic literature commonly refers to as *characteristic function*. If that is the case, the gain measure should be the same for all subsets of $\mathcal{I}$, i.e. $\mu: \mathcal{P}(\mathcal{I}) \to \mathbb{R}$, where $\mathcal{P}(\mathcal{I})$ is the power set of $\mathcal{I}$. I found the (recursive?) definition of $V_{\tilde{\mathcal{I}}}(\mu)$ on page 5 somewhat confusing because $\mu$ is never applied to any subset. This is what I implicitly thought of it: $V_{\tilde{\mathcal{I}}}(\mu) \coloneqq \left(\mu(\tilde{\mathcal{I}}_i)\right) _{i=0}^{...}$. Footnote 4 introduces boldface notation for vectors but that is never used. In the decision risk function design, I am not sure I understand the motivation of using the *negative* prediction probability. This seems to trickle down to Eq. (5) where the sign is flipped again. Maybe this could be avoided? Eq. (3) implicitly states all images have 3 channels, and there is no need for this assumption for the results to hold. Similarly to above, the definition of $V_{\mathcal{X}}(R)$ does not seem to define what $V_{\mathcal{X},\mathcal{I}_i}(R)$ is. In the line below $V _\mathcal{X,i}(R)$ is used which I assume to be a typo. Eq. (5) might be missing a transpose? $C(\mathcal{I})$ and $V_{\mathcal{X}}(R)$ are vectors are the RHS is a scalar. This is okay in Eq. (6) because $C$ is a matrix. A few times it is mentioned *for some neural network $R: (I, y) \to \mathbb{R}$* which could lead to confusion between the risk function and the underlying neural network itself. A neural network does not have the true label as input. This may be a trivial distinction to expert readers but make the manuscript less accessible to a broader audience. --- **Questions about axiomatic characterization** *Strong* efficiency axiom: could the authors comment on the choice of the strong version of the efficiency axiom? I understand this axiom lets Eq. (5) go through smoothly, but if this is the only reason, one could also write the usual Shapley values in matrix form. Would anything else break with the usual efficiency characterization of the Shapley value? Or am I missing the point here. Symmetry axiom: given two players $j,k \in [n] \coloneqq \\{1, \dots, n\\}$ and characteristic function $v$, the axiomatic characterization of the Shapley value requires that $\forall C \subseteq [n] \setminus \\{j, k\\}$, $v(C \cup \\{j\\}) = v(C \cup \\{k\\})$. The axiom is presented in the submission only for one spectral player coalition $\tilde{\mathcal{I}}$. Is this because of the strong efficiency axiom? Nullity axiom: similarly, given $\mathcal{I}^*$, the axiom would usually be $\forall \tilde{\mathcal{I}} \subseteq [n] \setminus \\{\mathcal{I}^*\\}$, and not the other way around. Besides these points, If the axiomatic characterization described above is equivalent to the Shapley value's, then $V_{\mathcal{X}}(R)$ is the vector containing the Shapley values of each spectral feature with respect to the game identified by the decision risk function. If this is true, then the presentation of the decomposition should reflect it. I am also confused because if this is indeed the case, then the pseudoinverse matrix presented in Appendix E does not reflect the weighting scheme of the Shapley value. For example, the rows of the matrix should sum up to 0, but this is not the case. On the other hand, if the proposed decomposition is different from the Shapley value, then the axiomatic characterization cannot be the same as the Shapley value's because of its uniqueness. Differences should then be discussed and highlighted, and uniqueness of the solution concept that satisfies the (now different) proposed axiomatic characterization should be proved, or the text modified accordingly. I am looking forward to discussing with the authors to clarify these doubts. --- **Question about masking strategy** Whenever applying a data-dependent function to some input with restricted features, it is important to devise some strategy to maintain the masked inputs close to the training distribution. In practical terms, I assume the neural networks used in the experiments were trained on images without any spectral filtering. This would render the images showed in Fig. 2 (center column) out-of-distribution for the neural network. Eq. (4) seems to adopt the strategy to zero features that are not in the spectral coalition of interest. Other strategies have been introduced in the explainability literature (e.g., unconditional expectation as presented in the original SHAP paper by Lundberg and Lee from 2017, we refer to the recent review by Chen et al. *"Algorithms to estimate Shapley value feature attributions"*). Zero-ing features out might be a reasonable strategy in this setting, but the choice should be discussed in the manuscript. --- **Questions about the spectral importance score** I found the formulation of this score interesting! I wish the authors could spend some more time on Sec. 3.7 to introduce the score and its intuitive motivation, which I found quite nice. Regarding the definition of $V_{\mathcal{X}}^*$, this seems to be a map between a vector in some domain (by the way, I am curious about the domain of $V_{\mathcal{X}}$, I think it should be bounded because the risk is bounded?) onto the simplex in $M$ dimensions. However, $V_{\mathcal{X}}^*$ is not the projection operator onto the simplex. I am wondering whether the authors could expand on how this particular choice of map was informed, and whether they expect empirical results to change depending on which map is used. Similarly, I am curious about the choice of $\beta = 0.75$, I am not familiar with robustness literature and unaware whether this is a common choice. --- **Estimation of error upper bound** I appreciated this section: it is useful to see how the proposed sampling strategy behaves as the number of samples is increased. --- **Minor assorted comments** I wonder whether the authors expect results to be affected by the choice of spectral features. In the submission, $\ell_{\infty}$ balls are used, what about other norms? Strong efficiency axiom definition: typo in *on coaltion* -> *in coalition* ? Could the correlation lines be included in Figure 7? This might better showcase the trend at a first glance. I think it is interesting vision transformers seem to be outliers in Figure 7, given the motivating work was for convolutional neural networks. Do the authors think something fundamentally different might be going on with those models? Section 4.2: typo in *as\`label noise -> as \`label noise* Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors introduce I-ASIDE, a new method that computes spectral importance contributions of neural network-based image model decisions using coalitional game theory. This method first partitions the radial spectrum of an image into M equispaced regions, where each partition acts as a ‘spectral player’. The decomposition is then defined as the unique decomposition that satisfies a set of desirable axioms from coalitional game theory. The resulting spectral importance distribution is summarized into a spectral importance score using a predefined robustness prior, which quantifies the global model robustness as a scalar. It is well-known that small spatial perturbations can have an outsized effect on network features dominated by high-frequency components. Hence, computing the spectral importance of high-frequency components with I-ASIDE can provide insights into the robustness of neural network models. The authors apply the I-ASIDE method to investigate and interpret the robustness of image classifiers from a spectral importance perspective. Specifically, they find that: 1. Trained models exhibit highly non-uniform spectral importance distributions. 2. Models trained on larger datasets demonstrate less importance in high spectral regions. 3. The number of trainable parameters does not significantly affect the spectral importance score. 4. Higher label noise tends to result in increased importance of high spectral regions. 5. The learning dynamics of spectral importance distribution undergo two stages: robust features are learned before non-robust features. 5. Adversarial vulnerability correlates with spectral importance scores. Strengths and Weaknesses: ## Strengths 1. Good Motivation: Computing global spectral importance scores is well motivated in light of the fact that small spatial perturbations can have an outsized effect on network features dominated by high-frequency components. 2. Well-founded Approach: Computing the spectral importance distribution via coalitional game theory gives the framework a solid basis. 3. Interesting Experimental Results: The authors compute the spectral importance distributions for different neural networks architectures, training settings, and training phases. The claimed results are interesting. ## Weaknesses 1. Soundness of Experimental Support: I am concerned about the experimental support for some claims of the paper. * The authors say "The results [in Figure 5] show that the models trained with higher label noise levels tend to use a higher proportion of non-robust features." Visually it is difficult for me to confirm this from Figure 5 (a), (b), and (c). * The authors deduce from Figure 3 that "models trained on larger training datasets exhibit higher robustness." This is a strong interesting claim that I believe requires more experimental support. This should include averaging importance scores over model runs and more variation in dataset size. For example, one could plot importance distributions or scores for various fractions of training set size (e.g. 25%, 50%, 75%, 100% of imagenet, STL10, or CIFAR10/100). 2. Clarity: Generally, I found some parts of the paper require more clarity and elaboration. I will specify the parts in the requested changes section. Requested Changes: All of the requested changes are critical: 1. In the authors' statement "The results [in Figure 5] show that the models trained with higher label noise levels tend to use a higher proportion of non-robust features," I find it challenging to visually validate this claim from Figure 5 (a), (b), and (c). To strengthen this assertion, consider quantifying or enhancing the visualization of the result. After making this change, could the authors also indicate whether the claim is consistently true across varying label noise levels, i.e. does the importance of non-robust features gradually increase with increasing label noise strength? 2. The deduction from Figure 3 that "models trained on larger training datasets exhibit higher robustness" merits additional experimental substantiation. This should include averaging importance scores over different model runs and incorporating a broader range of dataset sizes. Consider plotting spectral importance distributions or scores across different fractions of the training dataset, like 25%, 50%, 75%, and 100% of datasets such as ImageNet, STL10, or CIFAR10/100. 3. The use of the robustness prior for the importance score was chosen to favor the importance of high spectral components. After line (8), the choice of $\beta=0.75$ is introduced. However, this scalar value for $\beta$ doesn't align with its earlier vectorial definition. Once this discrepancy is resolved, it would be helpful to offer a heuristic justification for the choice of the prior, aiding in reader comprehension. 4. On the topic of anisotropy: Anisotropy is the structural property of non-uniformity in different directions, as opposed to isotropy. The authors frequently refer to the "spectral anisotropy of the spectral importance distribution" in the manuscript. My interpretation suggests that a better term is the "non-uniformity of the spectral importance distribution". The caption for Figure 1 further adds to this confusion. A clearer caption might be "This diagram illustrates the correlation between the power-law-like spectral density of natural images and the robustness of feature representations." If my interpretation of "spectral anisotropy" is incorrect, I would appreciate a detailed explanation. 5. Could the authors explain why the four Shapley axioms were adapted to a stronger version. 6. Could the authors correct the grammatical error in the following sentence: "This research is inspired by the power-law like spectral structures of natural images and feature robustness can be indexed by radial spectrum (See Figure 1)". 7. In Section 4.3, the authors say "This result gives an insight into designing better optimization goals by introducing robustness penalty towards more robust models". This was not corroborated by any experiments. Hence, I suggest either provide experiments for this claim or reduce the claim, e.g. reduce the claim to plans for future work. 8. The following sentence is very convoluted and would benefit from rewriting as well as a citation: "The fragility of learned features correlates with the energy distributions of spectral structures from the perspective of the susceptibility of feature representations to perturbations." I hope the authors can address all my concerns. I think the submission would be of value to the community. If the authors address the above listed concerns I would to recommend acceptance. Until then, I don't see sufficient support for some important claims of the paper. Broader Impact Concerns: I have no concerns on the ethical implications of the work that would require adding a Broader Impact Statement. ================================================== Metareview: Recommendation: Reject Comment: All reviewers agree that the ideas proposed in this manuscript are novel, and that they could be of interest to a large audience in machine learning, signal processing and computer vision. In particular, the reviewers value the study of global spectral feature importance and their connections to robustness. At the same time, two reviewers remain concerned on the clarity of the presented ideas, even after revisions. These reviewers argue that the manuscript is at times difficult to comprehend, including several hand-wavy comments and with notation usage that is still unclear. I have personally revised the manuscript and I agree with these comments: while the concepts are interesting and would be worthy of publication, the text is hard to follow, there are several grammatical mistakes, and some of the reviewers' comments on notation are still unresolved. With this context, I am recommending this paper be rejected. However, I encourage the authors to address all of the comments provided by the reviewers in a major revision, putting emphasis on the clarity of presentation and statements, after which they could re-submit this work to TMLR for consideration. A.E. ==================================================
# Correcting Model Misspecification Via Generative Adversarial Networks Anonymous authors Paper under double-blind review ## Abstract Machine learning models are often misspecified in the likelihood, which leads to a lack of robustness in the predictions. In this paper, we introduce a framework for correcting likelihood misspecifications in several paradigm agnostic noisy prior models and test the model's ability to remove the misspecification. The "ABC-GAN" framework introduced is a novel generative modeling paradigm, which combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC). This new paradigm assists the existing GANs by incorporating any subjective knowledge available about the modeling process via ABC, as a regularizer, resulting in a partially interpretable model that operates well under low data regimes. At the same time, unlike any Bayesian analysis, the explicit knowledge need not be perfect, since the generator in the GAN can be made arbitrarily complex. ABCGAN eliminates the need for summary statistics and distance metrics as the discriminator implicitly learns them, and enables simultaneous specification of multiple generative models. The model misspecification is simulated in our experiments by introducing noise of various biases and variances. The correction term is learnt via the ABC-GAN, with skip connections, referred to as skipGAN. The strength of the skip connection indicates the amount of correction needed or how misspecified the prior model is. Based on a simple experimental setup, we show that the ABC-GAN models not only correct the misspecification of the prior, but also perform as well as or better than the respective priors under noisier conditions. In this proposal, we show that ABC-GANs get the best of both worlds. Keywords: Likelihood-free inference, Deep Neural Regression, Approximate Bayesian Computation, GAN ## 1 Introduction A model is a probing device used to explain a phenomenon through data. In most cases, a true model for this phenomenon exists but cannot be specified at all [Le & Clarke (2017)]. This setting indicates that all plausible models, though useful, can be deemed as misspecified [Box (1976)]. Can we use a plausible explainable model, while correcting for its misspecification implicitly? Unlike the prescriptive generative modeling dogma, predominant in the statistical community, the implicit generative modeling view taken by the machine learning community lays emphasis on predictive ability rather than on explainability [Brieman (2001)]. Implicit Deep Generative Models have witnessed tremendous success in domains such as Computer Vision. However, their opaqueness and lack of explainability has made the injection of subjective knowledge into them a highly specialized and experimental task. In this work, our proposal is to reconcile implicit and explicit generative models into a single framework in the misspecified setting. We do that by taking GANs and ABC as representative of the two fields respectively. The introduction of GANs in 2014 by Goodfellow et al. [Goodfellow et al. (2014)] marked a very decisive point in the field of generative deep learning. Since then, deep learning based generative models like GANs and Variational Autoencoders have been extensively worked on, with the main intention of addressing issues with likelihood estimation based methods and related strategies. The crux of these issues lies in complex or intractable computations that arise during maximum likelihood estimation or evaluation of the likelihood function. A GAN uses two adversarial modules - the Generator and the Discriminator, essentially playing a zero sum min-max game with each other, with the competition between them driving both modules to improve and reach a stage where the Generator is able to produce counterfeit data which is indistinguishable from the real data. Although GANs have been shown to address the issues mentioned above well by leveraging the benefits of using piece-wise linear units, there are some inherent issues with the GAN paradigm. These include the inherent difficulty in achieving convergence, stability issues during training and the necessity of large amounts of data. An active area of research in this direction is to apply GANs in different settings [Karras et al. (2018); Radford et al. (2016)] and also to improve stability [Gulrajani et al. (2017)]. Another older, but equally interesting generative paradigm is Approximate Bayesian Computation (ABC) [Beaumont et al. (2002)] [Beaumont (2010)] [Grelaud et al. (2009)] [Csilléry et al. (2010)] [Marin et al. (2012)]. ABC finds its roots in Bayesian inference, and aims to bypass likelihood evaluation by approximating the posterior distribution of model parameters. This method is extremely useful in cases when the likelihood estimation is computationally intensive, or even intractable. The likelihood-free aspect of this paradigm allows the data generative model to be as complex as it can get. However, there are some drawbacks, such as low acceptance rates at higher dimensions, the difference between the prior distribution from the posterior distribution, identification of lower dimensional statistics to summarize and compare the datasets and the model selection problem. ABC and GAN complementarity: Looking at these two paradigms, it becomes clear that both ABC and GANs try to solve the same problem - learning the data generation mechanism by capturing the distribution of the data, but they approach the problem in different ways. By studying these two paradigms, their similarities and differences become apparent. With respect to the data generation model, ABC uses a userspecified model, whereas the Generator in a GAN is non-parametric. Looking at the discriminative model for both, ABC uses an explicit, user-specified discriminator which often uses Euclidean distance or some other distance measure on a set of summary statistics to measure the difference between real and simulated datasets. For GANs, the discriminative model is specified through a function like KL divergence or JS divergence as the Discriminator's objective function. Another key difference here is that the feedback from the Discriminator in a GAN flows back to the Generator, thereby making them connected, while in ABC, these two modules are disconnected. Further, in ABC, model selection is followed by model inference, but in GANs, since the Generator and Discriminator are connected, this occurs implicitly during the learning process. We now see that ABC and GAN appear to be at two ends of the data generation spectrum, with each having its own advantages and disadvantages. ## 2 Motivation And Contributions As it is clear from the previous discussion, both GANs and ABCs are likelihood-free methods. But there are certain limitations to both of them. ABC is a Bayesian paradigm. Like in any Bayesian modeling approach, subjective knowledge about the data generating model is expressed both in terms of the likelihood (explicit or implicit) and the prior. One would want to exercise more freedom in the choice of priors, however. Majority of the model selection criteria focus on the priors, keeping the likelihood fixed. However, misspecification in the likehood can lead to spurious errors and make the inference invalid. Some model selection criteria like Deviance Criteria (DIC) won't work well in such cases. It is generally a hard problem to tackle computationally, if one were to obtain marginal evidences. So how we do address this problem? In the context of ABC, the choice of summary statistic and the distance metric to compare the simulated datasets with the real data set determine the efficiency of the approximation. While it seems advantageous to rely on sampling, it leaves many issues suggested above to experimentation and to the modeler. Model selection and sensitivity analysis have to be performed, regardless. Can we get rid of making choices about the summary statistics, distance metrics, model selection in the context of ABC? Further, can we deal with model misapplication either in likelihood or prior or both in the Bayesian context? GANs, in particular, the adversarial mini-max formulation can address these questions. However, GANs require relatively large amounts of data, owing to their non-parametric nature, to train the Generator and Discriminator networks. It is also known that training GANs can be unstable [Kodali et al. (2017)]. A consequence of deep networks, of which GANs are a special case, is that, they are opaque from the standpoint of interpret ability [Li et al. (2022)]. Further, incorporation of any available prior knowledge into GANs is limited to modifying the architectures or loss functions or a combination of them. In part, it may be due to the long held misconception that deep learning eliminates the need for good feature engineering. However, good feature engineering gives a way to architecture design. Can we incorporate prior knowledge into GANS? Can the GANs work on low-data regimes, where prior knowledge could be both available and valuable? We argue that, ABC can augment the Generator network of a GAN. The amount of correction needed can be learnt via the data itself, without making hard choices *a priori*. We show the effectiveness of our work through several ABC-GAN models. We consider cGAN [Mirza & Osindero (2014a)] and TabNet [Arik & Pfister (2019)] as baseline GANs with some architectural modifications. 1. mGAN: GAN Generator takes as inputs the features, and the simulated data from ABC. 2. skipGAN: GAN Generator takes as inputs the features, and the simulated data from ABC, and the Discriminator, also takes a weighed combination of ABC Generator and GAN Generator. 3. Tab-mGAN: mGAN with TabNet as the Generator of the GAN. 4. Tab-skipGAN: skipGAN with TabNet as the Generator of the GAN. They are described in detail later. We consider several standard, interpretable models such as Linear Models, Gradient Boosted Trees (GBT) and a combination of Deep Learning and Gradient boosted trees (TabNet) as ABC models under various misspecification settings. Extensive experimentation (check sections (4), (5) helps us answer and tackle the questions posted above, and shows the novelty of our work. ## 3 Our Approach Some notations and settings are introduced to make the exposition clear. Let Dτ = {y τ i , xτ i } n i=1, be the observed dataset, a set of n i.i.d tuples (y τ i , xτ i ), where x τ i ∈ ℜpis a p-dimensional column feature vector, and y τ i ∈ ℜ is the response variable. Assume that, Gτ is the true generative model, typically unknown, that produces y τ i ∼ Gτ (x τ i ). Define the datasets Dπ ≡ {y π i , xπ i } n i=1 and Dγ ≡ {y γ i , x γ i } n i=1, that can be sampled by ABC and GAN, respectively. Here, by convention, y π i ∼ Gπ(x τ i ), xπ i = x τ i for ABC and similarly y γ i ∼ Gγ(x γ i ). Further, assume that d(*., .*) is some distance or loss such as Mean Absolute Error (MAE) that measures discrepancy between two datasets. Note that Gπ is typically a sampler and Gγ is a deterministic transformation. ## 3.1 Abc-Gan Framework Suppose that we know the generative model Gπ, but it is misspecified. In order to rectify this misspecification, we append it to a standard GAN generator Gγ network, i.e., x γ i = [y π i , xτ i ]. Gγ now transforms Gπ samples so as to make resulting dataset look more realistic. Now, by design, Gγ can be quite shallow. The hope, rather, intent is that, the "sampler" is already pretty good, and lot of domain knowledge is encoded in it. Therefore, not much needs to be done by the Gγ, except doing a few corrections. The exact corrections that are to be done are taught by the Discriminator of the GAN Dγ. Under ideal conditions, when perfect knowledge about the Sampler Gπ (the pre-generator, or the generative model in the context of ABC) is known, Gγ does an identity transformation. Under these conditions, the GAN learning should not be a concern (stability-wise), as the problem is already regularized. From an architecture perspective, Gγ can have large capacity but is regularized to produce an identity transformation. Hence, the primary objectives are to investigate two key hypotheses 1. when Gπ is perfect, i.e, Gπ = Gτ , we expect Gγ = I(.), an identity map and d(Dτ , Dπ) = 0. 2. when Gπ is imperfect, Gγ ̸= I(.) and d(Dτ , Dπ) > 0. More than that, we expect d(Dτ , Dπ) > d(Dτ , Dγ). We consider two broad families of architectures to test the hypothesis. mGAN: We depict the functional architecture of baseline overall model *mGAN*, shown in Fig. 1. The GAN is a vanilla cGAN except that one of the inputs is the ABC Generator's output, i.e., x γ i = [y π i , xτ i ] and Dγ ≡ {y ![3_image_0.png](3_image_0.png) γ i , x γ i } n i=1 will be passed to the Discriminator Dγ. Figure 1: A baseline mGAN model . skipGAN: Another variant that we experimented with is the *skipGAN*. We conjecture that vanilla mGAN ![3_image_1.png](3_image_1.png) might have information bottleneck. When the prior model is very good, both Gγ and Dγ can be very shallow. If not explicitly regularized, training mGAN could be hard. We can mitigate this problem by supplying both y π i and y γ i to Dγ. Specifically, we choose weighed average so that the weights can be seen as model averaging, and can also be interpreted as amount of expressiveness borrowed by mGAN. That is, Dγ gets wyπ i + (1 − w)y γ i . The idea of using skip connection is to try to achieve performance improvement over mGAN. At the least, it should be able to ensure that the mGAN does at least as well as the baseline Gπ. Figure 2: Proposed skipGAN model ## 3.2 Objective Function Consider the following hybrid generative model: pi = Dγ(Gγ([y π i , xτ i ]), with y π i ∼ Gπ(x τ i ) Then the likelihood can be written as: L(y) = Qn i=1 pi. In fact, it is striking to see the likelihood as empirical likelihood [Owen (2001)] without the normalization constraint Σpi = 1. But it is not obvious how to estimate Dγ and Gγ, if not for the adversarial min-max optimization used in GANs. In that sense, we see that our contribution is more in using the adversarial optimization to maximize the empirical likelihood, that has absorbed a non-parametric correction term by Deep Neural Networks and some prior models. ## 4 Experimental Setup Several experiments were conducted to test the impact of the ABC-GAN models in correcting misspecified prior models Gπ. The purpose of these experiments is two-folds: 1) to assess the benefits of including a prior for a GAN and 2) to verify that ABC-GAN models successfully correct misspecified models. We consider three datasets (one simulated and two real), three prior generative models, two basic ABC-GAN architectures, and two GAN Generator architctures - leading to a total 36 experiments. The misspecification at the prior generative model has bias and variance terms with three levels each. Each of the 36 experiment has 9 runs each simulating different amounts of model misspecification, taking the total number of experiments to 324. The details are provided below. ## 4.1 Gπ**: Prior Generative Models** In particular, we consider, Linear Models, Gradient Boosting Trees, and Transformers for Gπ and Feedforward Networks and Transformers for Gγ. We also consider different Ground Truth generative models (Gτ ). Under perfect information Gπ = Gτ . For simulated datasets, Gτ is known. Imperfect information can creep from mis-specified sampling distribution or prior or both. We simulate misspecifcation by adding Gaussian noise to an assumed Gτ . To keep the design space smaller and simpler, we consider mis-specified priors, keeping the likelihood of the prior generative model fixed. We consider three families of models - Linear Models, Gradient Boosted Trees (GBTs), and Transformers - as explicit generative models. 1. Linear Models: Standard Linear Regression models are implemented in statsmodel [Seabold & Perktold (2010)], a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration. We use the linear ordinary least squares model as our baseline. 2. GBTs: CatBoost [Dorogush et al. (2018)] is an algorithm for gradient boosting on decision trees. It is developed by Yandex researchers and engineers, and is used for search, recommendation systems, personal assistant, self-driving cars, weather prediction and many other tasks. It is an industry standard and an ambitious benchmark to beat. We use *catboost* implementation. 3. Transformers: TabNet, a Transformers-based models for Tabular data, was introduced in [Arik & Pfister (2019)]. This model inputs raw tabular data without any preprocessing and is trained using gradient descentbased optimisation. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability. Feature selection is instance-wise, e.g. it can be different for each row of the training dataset. TabNet employs a single deep learning architecture for feature selection and reasoning, this is known as soft feature selection. These make the model enable two kinds of interpretability: local interpretability that visualizes the importance of features and how they are combined for a single row, and global interpretability which quantifies the contribution of each feature to the trained model across the dataset. We use TabNet as baseline by calling the TabNetRegressor class under pytorch-tabnet module. Henceforth, all references to Stats Models, CatBoost, and TabNet, correspond to Linear Models, GBTs and Transformers, respectively, where applicable. In this other extreme case, we pass covariates (x τ i ) plus random noise (ei) to GAN, i.e., x γ i = [x τ i , ei] in which case, the ABC-GAN acts more like a conditional-GAN [Mirza & Osindero (2014b)]. ## 4.2 Gγ**: Gan Generators** We consider two architectures: 1. *Feed Forward Networks (FFN):* The FFN Generator consists of 5 hidden layers of 50 nodes each and ReLU activation. The Discriminator consists of 2 hidden layers of 25 and 50 nodes respectively followed by ReLU activation after every layer. 2. *Transformers:* We consider the same TabNet Regressor used in Gπ discussed earlier- the Transformerbased Generator. ## 4.3 Model Misspecification The following noise model is considered for real datasets: y π i ∼ N(y τ i + *µ, σ*2) For the Linear Model, we consider a full Bayesian model, of the following specification: y π i ∼ N(< xτ i , β >, 1) where β ∼ N(β τ + *µ, σ*2) with µ ∈ {0, 0.01, 0.1}, σ 2 ∈ {0.01, 0.1, 1} and β τis a true, pre-specified part of the generative model, y τ i is the output of the prior model Gπ and *<, >* is the standard dot product. ## 4.4 Datasets We evaluate our models on the following Synthetic and real datasets: 1. Friedman3 [Friedman (1991)] consists of 4 independent features z = [z1, z2, z3, z4] as input, uniformly distributed on the intervals: 0 ≤ z1 ≤ 100, 40π ≤ z2 ≤ 560π, 0 ≤ z3 ≤ 1, 1 ≤ z4 ≤ 11. The generative model for y is is nonlinear model y = arctan((z1z2 − 1/(z1z3))/z1). A standard normal noise is added for every sample. The dataset has 100 samples. 2. Boston: The Boston Housing Dataset [Harrison & Rubinfeld (1978)] is derived from information collected by the U.S. Census Service concerning housing in the area of Boston MA. The dataset has 503 samples and 13 columns/features. 3. Energy efficiency [Tsanas & Xifara (2012)] contains eight attributes and two responses (or outcomes. The dataset has 768 samples. The aim is to use the eight features to predict each of the two responses. For our experiments, we have restricted only to the first response with all 8 features. ## 4.5 Training The cGAN, mGAN, skipGAN and their TabNet versions are trained for 1000 epochs with BCE loss function and a batch size of 32. The dataset is split into training and validation sets (80-20) and the same validation set is used to validate the performance of all models. The learning rate used for Friedman 3 dataset is 0.001, and is 0.01 for all other datasets. All datasets are run using 1.6 GHz Dual-Core Intel Core i5 CPUs. ## 4.6 Metrics We use MAE to evaluate the performance of the models. The experiments were run 10 times and the average of the MAE over the 10 runs is presented. ## 5 Results In order to test the hypothesis that, ABC-GAN models perform no worse than the prior models, we take Boston dataset, and synthetically inject model misspecification, as described earlier, and report MAE of Gπ (sampler) and Gγ (a deterministic transformation). In Fig. 3, we show the boxplots of the MAE for each of the models, for each of the prior models. As can be seen, the proposed ABC-GAN models outperform the prior models in almost all cases - different priors, different ABC-GAN models, and different levels of model misspectications. Even a simpler mGAN successfully corrects the misspecified baselines (Linear Models, Boosted trees and TabNet) and results in lower MAEs than the prior model. Next, we investigate, how these models perform at specific levels of model misspecification by prior, model architecture, and dataset. In Tables 1-9, each row corresponds to a level of model mispecification as indicated by (Variance, Bias) columns, and rows corresponding to columns - Prior Model, mGAN, Tab-mGAN, skipGAN, Tab-skipGAN - indicate the MAE of the models indicated by the column header. In the case of skip variants, the skip | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-------------|-------------------|---------------|-----------| | | | | | | Tab-skipGAN | | | | | 1 | 1 | 1.3310 | 0.4918 | 0.3310 | 0.4594 | 0.9932 | 0.3005 | 1.0000 | | 1 | 0.1 | 1.0820 | 0.5060 | 0.6127 | 0.4245 | 0.6657 | 0.4582 | 0.9911 | | 1 | 0.01 | 1.0167 | 0.5120 | 0.5296 | 0.4670 | 0.3434 | 0.3546 | 0.9943 | | 1 | 0 | 1.0697 | 0.4583 | 0.5269 | 0.37041 | 0.9976 | 0.4241 | 0.9877 | | 0.1 | 1 | 0.9984 | 0.4546 | 0.4630 | 0.4382 | 0.9985 | 0.4801 | 0.6293 | | 0.1 | 0.1 | 0.6707 | 0.5593 | 0.4868 | 0.4948 | 0.4887 | 0.5106 | 0.6486 | | 0.1 | 0.01 | 0.5000 | 0.4893 | 0.4987 | 0.4520 | 0.2123 | 0.4445 | 0.7546 | | 0.1 | 0 | 0.6251 | 0.5839 | 0.3859 | 0.57406 | 0.3009 | 0.4343 | 0.6805 | | 0.01 | 1 | 1.3170 | 0.5309 | 0.5845 | 0.5454 | 0.9975 | 0.4127 | 0.4304 | | 0.01 | 0.1 | 1.0503 | 0.5076 | 0.4541 | 0.4493 | 0.2746 | 0.4703 | 0.3426 | | 0.01 | 0.01 | 0.6370 | 0.4638 | 0.4794 | 0.4858 | 0.1871 | 0.5633 | 0.2787 | | 0.01 | 0 | 0.5665 | 0.5247 | 0.4977 | 0.4882 | 0.1947 | 0.4566 | 0.2326 | Table 1: Results for Friedman3 dataset with linear model prior. The MAEs of cGAN, cGAN with TabNet generator and baseline linear model (Stats model) are 0.4477, 0.49724 and 0.5529 respectively. weights are also reported. Tables 1, 4, 7 correspond to Friedman3 dataset, 2, 5, 8 to Boston dataset and 3, 6, 8 to Energy dataset. For tables 1, 2, 3, we use Linear Models as the prior, GBT in Tables 4, 5, 6 and TabNet in Tables 7, 8, 9. By looking at all the Tables I-IX collectively, it is clear that ABC-GAN models are able to detect the extent of misspecification, as the reduction in the MAE, relative to the prior model, is more pronounced for larger misspecifications. Hence we see that as the misspecification of the pre-generator increases, the model relies more and more on the GAN generator to do the correction. Overall, we notice that our model majorly outperforms SOTA models such as C-GAN, C-GAN with TabNet generator, TabNet regressor and CatBoost. A skip connection has been added in some models, as explained earlier, to take a weighted average of the prior model and the GAN model. The weight given to the GAN in the skip connection tends to increase with increase in variance and bias, and is ideally supposed to be close to 1 for the highest variance and bias values and close to 0 for lowest variance and bias values. In most cases variance seems to be playing a greater role in the skip connection weight than the bias. This indicates that as the model misspecification increases, more weightage is given to the GAN skip node to help cofrrect this misspecification. Hence, as the complexity of the prior increases (such as when we use Transformers as priors), mGAN is sufficient to correct the misspecification of the models. However, for models with lower complexity (such as linear models), skipGAN performs better in correcting the model misspecification. From tables 4 and 6, it is evident that as the misspecification reduces, the skipGAN weight reduces and drops to almost 0 (it becomes 0 for Tab-skipGAN for variance 0.001 and bias 0). This effectively proves our original claim that when Gπ is almost perfect, Gγ is almost an identity transformation and d(Dτ , Dπ) ≈ 0. As the noise increases, the dependence on the GAN generator increases, resulting in higher weights in the skipGAN. Using TabNet network for the generator of the GAN helps in stabilising the model. mGAN, Tab-mGAN and Tab-skipGAN perform consistently well with no high MAE outlier. While Tab-mGAN and Tab-skipGAN may not consistently outperform their vanilla counterparts (mGAN and skipGAN), adding the TabNet Network ensures consistent results across multiple iterations. We also wanted to explore the effect of different sizes. We consider the Boston dataset again, and took a subset of the data to see if, as the dataset size increases, ABC-GANs continue to do well. As expected, the performance of the all the models improves with increase in sample size (as visible from Fig. 4 to Fig. 8). However skipGAN destabilizes for larger datasets (see tables 5, 6 and 8), thus resulting in large MAE values for a few experimental set-ups. ## 6 Discussion And Conclusion Out of the numerous types of GANs available, how is ABC-GAN different? No other GAN works on the idea of correcting model misspecification. In this proposal, we specify that our emphasis was not on obtaining better performance, but it is to show that the model is capable of doing a likelihood-free inference, and | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|-------------|-----------|-------------------|---------------|-----------| | | | | | Tab-skipGAN | | | | | | 1 | 1 | 1.2763 | 0.3017 | 0.3050 | 0.2876 | 0.9904 | 0.2818 | 0.9935 | | 1 | 0.1 | 0.8442 | 0.2653 | 0.2876 | 0.2355 | 0.9934 | 0.2632 | 1.0000 | | 1 | 0.01 | 0.9313 | 0.2975 | 0.2880 | 0.2684 | 0.9876 | 0.2978 | 0.9855 | | 1 | 0 | 0.9459 | 0.2717 | 0.2831 | 0.2570 | 0.9963 | 0.2844 | 0.9930 | | 0.1 | 1 | 1.0254 | 0.2991 | 0.3114 | 0.2986 | 0.7022 | 0.3188 | 0.8000 | | 0.1 | 0.1 | 0.4154 | 0.3076 | 0.2587 | 0.2508 | 0.6869 | 0.3011 | 0.7775 | | 0.1 | 0.01 | 0.3820 | 0.2968 | 0.2820 | 0.2790 | 0.7133 | 0.3057 | 0.7673 | | 0.1 | 0 | 0.3779 | 0.2796 | 0.2761 | 0.2738 | 0.6000 | 0.2767 | 0.7441 | | 0.01 | 1 | 1.0492 | 0.2559 | 0.2700 | 0.2530 | 0.3580 | 0.2568 | 0.3814 | | 0.01 | 0.1 | 0.4038 | 0.2627 | 0.2573 | 0.2488 | 0.1698 | 0.2552 | 0.3393 | | 0.01 | 0.01 | 0.3733 | 0.2847 | 0.2684 | 0.2785 | 0.1631 | 0.2888 | 0.4281 | | 0.01 | 0 | 0.3712 | 0.3073 | 0.2899 | 0.3450 | 0.1592 | 0.2625 | 0.2849 | | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-----------|-------------------|---------------|-----------| | | | Tab-skipGAN | | | | | | | | 1 | 1 | 1.0849 | 0.1090 | 0.1098 | 0.1014 | 0.9973 | 0.1486 | 0.9991 | | 1 | 0.1 | 0.8286 | 0.0752 | 0.1221 | 0.1058 | 0.9958 | 0.2130 | 0.9932 | | 1 | 0.01 | 0.8344 | 0.1462 | 0.1557 | 0.1391 | 1.0000 | 0.0654 | 1.0000 | | 1 | 0 | 0.8764 | 0.1461 | 0.1347 | 0.0943 | 0.9833 | 0.1209 | 0.9936 | | 0.1 | 1 | 1.0036 | 0.1544 | 0.1089 | 0.0977 | 0.5256 | 0.1076 | 0.9568 | | 0.1 | 0.1 | 0.2493 | 0.0955 | 0.0946 | 0.0916 | 0.3530 | 0.0630 | 0.9523 | | 0.1 | 0.01 | 0.2184 | 0.1608 | 0.0538 | 0.0566 | 0.3431 | 0.0945 | 0.9520 | | 0.1 | 0 | 0.2239 | 0.1352 | 0.0930 | 0.1315 | 0.3745 | 0.1455 | 0.9794 | | 0.01 | 1 | 0.9740 | 0.0859 | 0.0794 | 0.1076 | 0.3152 | 0.0829 | 0.2947 | | 0.01 | 0.1 | 0.2302 | 0.1677 | 0.0812 | 0.0762 | 0.2097 | 0.1354 | 0.2539 | | 0.01 | 0.01 | 0.2246 | 0.2143 | 0.2038 | 0.1883 | 0.0646 | 0.2278 | 0.1681 | | 0.01 | 0 | 0.2035 | 0.1274 | 0.0962 | 0.1169 | 0.1025 | 0.1576 | 0.2352 | | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-------------|-------------------|---------------|-----------| | | | | | | Tab-skipGAN | | | | | 1 | 1 | 1.0837 | 0.3539 | 0.5377 | 0.3579 | 0.5313 | 0.4726 | 0.5923 | | 1 | 0.1 | 0.9548 | 0.3912 | 0.4745 | 0.4254 | 0.4767 | 0.4238 | 0.8411 | | 1 | 0.01 | 0.9633 | 0.4096 | 0.4841 | 0.4250 | 0.4429 | 0.5243 | 0.7531 | | 1 | 0 | 0.9922 | 0.4766 | 0.5013 | 0.5103 | 0.3814 | 0.5285 | 0.7223 | | 0.1 | 1 | 1.1230 | 0.4252 | 0.4411 | 0.4232 | 0.5064 | 0.4284 | 0.2557 | | 0.1 | 0.1 | 0.5224 | 0.5252 | 0.6032 | 0.5225 | 0.3797 | 0.5197 | 0.0671 | | 0.1 | 0.01 | 0.3800 | 0.3560 | 0.4523 | 0.3902 | 0.1867 | 0.3805 | 0.0342 | | 0.1 | 0 | 0.4564 | 0.4668 | 0.4601 | 0.4567 | 0.2389 | 0.4403 | 0.0534 | | 0.01 | 1 | 1.0652 | 0.2989 | 0.5498 | 0.2512 | 0.0698 | 0.2735 | 0.1933 | | 0.01 | 0.1 | 0.4599 | 0.4432 | 0.3931 | 0.4498 | 0.2812 | 0.4450 | 0.0464 | | 0.01 | 0.01 | 0.4025 | 0.4027 | 0.4781 | 0.3989 | 0.0131 | 0.3982 | 0.0425 | | 0.01 | 0 | 0.3792 | 0.3908 | 0.4349 | 0.3865 | 0.4759 | 0.3772 | 0.0000 | Table 2: Results for Boston dataset with linear model prior. The MAEs of cGAN, cGAN with TabNet generator and baseline linear Model are 0.2838, 0.2729 and 0.3508 respectively. Table 3: Results for Energy efficiency dataset for 1st target with Linear model prior. The MAEs of cGAN, cGAN with TabNet generator and baseline Linear Model (stats model) are 0.0849, 0.1564 and 0.2008 respectively. Table 4: Results for Friedman3 dataset with Gradient Boosted Trees (GBT) prior. The MAEs of cGAN, cGAN with TabNet generator and baseline GBT (Catboost) Model are 0.4477, 0.49724 and 0.4215 respectively. | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-------------|-------------------|---------------|-----------| | | | | | | Tab-skipGAN | | | | | 1 | 1 | 1.1928 | 0.3492 | 0.2812 | 0.2888 | 0.9645 | 0.2834 | 0.8963 | | 1 | 0.1 | 0.8688 | 0.2892 | 0.2765 | 0.2949 | 0.9308 | 0.2731 | 0.8692 | | 1 | 0.01 | 0.8406 | 0.3559 | 0.2787 | 0.2493 | 0.9953 | 0.2732 | 0.8812 | | 1 | 0 | 0.8120 | 0.2821 | 0.2447 | 0.2704 | 0.9742 | 0.2696 | 0.93485 | | 0.1 | 1 | 1.0098 | 0.2704 | 0.2068 | 0.2662 | 0.1860 | 0.2420 | 0.2964 | | 0.1 | 0.1 | 0.2419 | 0.3634 | 0.2189 | 0.2596 | 0.0382 | 0.2640 | 0.0917 | | 0.1 | 0.01 | 0.2256 | 0.3437 | 0.2206 | 0.2306 | 0.0313 | 0.2271 | 0.0021 | | 0.1 | 0 | 0.2700 | 0.4463 | 0.2646 | 0.3035 | 0.0297 | 0.2666 | 0.0209 | | 0.01 | 1 | 1.0205 | 0.3271 | 0.2327 | 0.3547 | 0.1945 | 0.3541 | 0.2191 | | 0.01 | 0.1 | 0.2562 | 0.3181 | 0.2551 | 0.2422 | 0.0317 | 0.2493 | 0.0599 | | 0.01 | 0.01 | 0.2424 | 0.3075 | 0.2640 | 21.9047 | 0.1488 | 0.2166 | 0.0550 | | 0.01 | 0 | 0.2198 | 0.2683 | 0.2287 | 0.2291 | 0.0058 | 0.2179 | 0.0361 | | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|-------------|-----------|-------------------|---------------|-----------| | | | | | Tab-skipGAN | | | | | | 1 | 1 | 1.1591 | 0.1462 | 0.0902 | 0.1149 | 0.9594 | 0.0983 | 0.9882 | | 1 | 0.1 | 0.7870 | 0.0915 | 0.0981 | 0.1154 | 0.9798 | 0.0733 | 0.9815 | | 1 | 0.01 | 0.7771 | 0.0848 | 0.1636 | 0.1339 | 0.9432 | 0.1112 | 0.9927 | | 1 | 0 | 0.8482 | 0.0924 | 0.0577 | 0.1334 | 0.9834 | 0.1549 | 0.9950 | | 0.1 | 1 | 1.0073 | 0.0745 | 0.0762 | 0.0937 | 0.2482 | 0.0776 | 0.3692 | | 0.1 | 0.1 | 0.1178 | 0.0783 | 0.1382 | 0.1077 | 0.0671 | 0.0906 | 0.1422 | | 0.1 | 0.01 | 0.0787 | 0.0656 | 0.0750 | 0.1138 | 0.0979 | 0.0964 | 0.2580 | | 0.1 | 0 | 0.0801 | 0.0637 | 0.0650 | 0.0830 | 0.0028 | 0.0823 | 0.0232 | | 0.01 | 1 | 0.9994 | 0.0662 | 0.0762 | 0.1157 | 0.2280 | 0.0522 | 0.2164 | | 0.01 | 0.1 | 0.1004 | 0.1231 | 0.0482 | 0.0489 | 0.0751 | 0.0698 | 0.0782 | | 0.01 | 0.01 | 0.0248 | 0.0265 | 0.0436 | 1048.8400 | 0.0184 | 0.0233 | 0.0000 | | 0.01 | 0 | 0.0249 | 0.1845 | 0.0650 | 0.0287 | 0.0333 | 0.0284 | 0.0034 | | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-------------|-------------------|---------------|-----------| | | | | | | Tab-skipGAN | | | | | 1 | 1 | 1.1267 | 0.4191 | 0.5205 | 0.4517 | 0.1495 | 0.5559 | 0.5722 | | 1 | 0.1 | 1.0068 | 0.3514 | 0.4891 | 0.4270 | 0.1977 | 0.5288 | 0.7900 | | 1 | 0.01 | 0.8485 | 0.3857 | 0.5234 | 0.4168 | 0.1942 | 0.5164 | 0.6472 | | 1 | 0 | 0.9358 | 0.4052 | 0.4305 | 0.4102 | 0.2730 | 0.4468 | 0.7080 | | 0.1 | 1 | 1.0669 | 0.4593 | 0.6085 | 0.4591 | 0.1444 | 0.5137 | 0.1770 | | 0.1 | 0.1 | 0.3809 | 0.4044 | 0.4932 | 0.4477 | 0.1356 | 0.3329 | 0.0520 | | 0.1 | 0.01 | 0.5561 | 0.4130 | 0.5409 | 0.4294 | 0.2035 | 0.5938 | 0.0309 | | 0.1 | 0 | 0.4094 | 0.3674 | 0.4049 | 0.3981 | 0.0000 | 0.3828 | 0.0808 | | 0.01 | 1 | 1.0446 | 0.3951 | 0.4414 | 0.3740 | 0.3830 | 0.4562 | 0.1855 | | 0.01 | 0.1 | 0.4847 | 0.4940 | 0.5045 | 0.4416 | 0.1651 | 0.5273 | 0.1612 | | 0.01 | 0.01 | 0.4274 | 0.4153 | 0.5523 | 0.5022 | 0.2027 | 0.5107 | 0.1883 | | 0.01 | 0 | 0.4536 | 0.4328 | 0.4709 | 0.4409 | 0.0000 | 0.4727 | 0.1730 | Table 5: Results for Boston dataset with Gradient Boosted Trees (GBT) prior. The MAEs of cGAN, cGAN with TabNet generator and baseline GBT (Catboost) are 0.2838, 0.2729 and 0.2049 respectively. Table 6: Results for Energy Efficiency dataset for 1st target with Gradient Boosted Trees (GBT) prior. The MAEs of cGAN, cGAN with TabNet generator and baseline GBT (Catboost) are 0.0849, 0.1564 and 0.0201 respectively. Table 7: Results for Friedman3 dataset with Trasformer network prior. The MAEs of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.4477, 0.49724 and 0.5529 respectively. | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|--------|------------|-------------|-------------------|---------------|-----------| | | | | | | Tab-skipGAN | | | | | 1 | 1 | 1.2568 | 0.3098 | 0.2527 | 0.2791 | 0.9812 | 0.2560 | 0.9347 | | 1 | 0.1 | 0.7969 | 0.2417 | 0.2544 | 0.2767 | 0.9747 | 0.2336 | 0.9272 | | 1 | 0.01 | 0.7705 | 0.2943 | 0.2891 | 0.2719 | 0.9670 | 0.2385 | 0.9724 | | 1 | 0 | 1.0423 | 0.2798 | 0.2754 | 0.3089 | 0.9857 | 0.2915 | 0.8586 | | 0.1 | 1 | 1.0238 | 0.3667 | 0.1848 | 0.4119 | 0.2430 | 0.2313 | 0.3206 | | 0.1 | 0.1 | 0.2796 | 0.2951 | 0.2685 | 0.2400 | 0.1145 | 0.2506 | 0.1688 | | 0.1 | 0.01 | 0.2864 | 0.3200 | 0.2885 | 0.3356 | 0.1382 | 0.2665 | 0.2107 | | 0.1 | 0 | 0.2318 | 0.3291 | 0.2455 | 288.3585 | 0.0923 | 0.2643 | 0.2122 | | 0.01 | 1 | 1.0746 | 0.4723 | 0.3016 | 0.2897 | 0.2173 | 0.2697 | 0.2799 | | 0.01 | 0.1 | 0.2694 | 0.2977 | 0.2459 | 527.8482 | 0.0105 | 0.3008 | 0.1342 | | 0.01 | 0.01 | 0.2240 | 0.2725 | 0.2142 | 0.2820 | 0.0735 | 0.2957 | 0.2292 | | 0.01 | 0 | 0.2089 | 0.2849 | 0.1980 | 2303.4068 | 0.1503 | 0.2628 | 0.3504 | | Variance | Bias | Prior model | mGAN | Tab-mGAN | skipGAN | Weights skipGAN | Tab-skipGAN | Weights | |------------|--------|---------------|-------------|------------|-----------|-------------------|---------------|-----------| | | | | Tab-skipGAN | | | | | | | 1 | 1 | 1.2390 | 0.0618 | 0.0903 | 0.0499 | 0.9200 | 0.1203 | 0.9702 | | 1 | 0.1 | 0.7765 | 0.1274 | 0.1342 | 0.0597 | 0.9715 | 0.0693 | 0.9976 | | 1 | 0.01 | 0.7958 | 0.1380 | 0.0778 | 0.1183 | 0.9353 | 0.0708 | 1.0000 | | 1 | 0 | 0.7588 | 0.0548 | 0.1465 | 0.0604 | 0.9753 | 0.0970 | 0.9916 | | 0.1 | 1 | 1.0056 | 0.3290 | 0.1076 | 0.0551 | 0.3679 | 0.0640 | 0.4676 | | 0.1 | 0.1 | 0.1517 | 0.0489 | 0.0511 | 0.1028 | 0.0894 | 0.0945 | 0.4901 | | 0.1 | 0.01 | 0.1052 | 0.0974 | 0.0894 | 0.0689 | 0.1474 | 0.0998 | 0.0420 | | 0.1 | 0 | 0.0907 | 0.0708 | 0.0651 | 0.1349 | 0.0665 | 0.0652 | 0.1287 | | 0.01 | 1 | 0.9865 | 0.0987 | 0.0524 | 0.3475 | 0.3747 | 0.0897 | 0.2481 | | 0.01 | 0.1 | 0.0942 | 0.1016 | 0.0872 | 0.0789 | 0.1056 | 0.0941 | 0.0000 | | 0.01 | 0.01 | 0.0514 | 0.1431 | 0.0698 | 0.1271 | 0.0193 | 0.0477 | 0.0379 | | 0.01 | 0 | 0.0652 | 0.0429 | 0.0840 | 0.1034 | 0.0591 | 0.1034 | 0.0451 | Table 8: Results for Boston dataset with Transformer network prior. The MAEs of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.2838, 0.2729 and 0.2515 respectively. Table 9: Results for Energy Efficiency dataset for 1st target with Transformer network prior. The MAEs ![9_image_0.png](9_image_0.png) of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.0849, 0.1564 and 0.0543 respectively. Figure 3: Box plots for comparison of models in the Boston dataset. All ABC-GAN models outperform the Linear, GBT and transformer prior models. Large outliers (MAE ≥ 20) for skipGAN were removed. . ![10_image_1.png](10_image_1.png) ![10_image_0.png](10_image_0.png) ![10_image_2.png](10_image_2.png) Figure 4: cGAN and cGAN with TabNet generator models on 100 samples and 503 samples (entire dataset) of Boston dataset. ![10_image_3.png](10_image_3.png) Figure 5: mGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. ![10_image_4.png](10_image_4.png) Figure 6: Tab-mGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. ![11_image_0.png](11_image_0.png) Figure 7: skipGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. ![11_image_1.png](11_image_1.png) Figure 8: Tab-skipGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. is more explicit in it's way of working than most pre-existing black-box models. Our ABC-GAN models outperform prior models with the same amount of misspecification, and perform equivalent or better than these priors even in the ideal situation of perfectly specified models. How is our experimentation on regression any different than the other existing work, among the wide variety of literature that exists on regression, including non-parametric approaches such as Gaussian process regression [Wang]? While being useful in the ML community, these methods don't solve the problems of (1) correcting likelihood misspecification in the models or data and (2) performing equivalent or better than to the prior models under perfect condition (no noise condition). Our model caters mainly to correcting misspecification in the prior models, and performs equivalently or better than the prior models in the ideal case in several regression tasks. In this paper, the objective that we want to achieve is to regularise the GAN generator by prepending the complex sampler(s), which ideally would have all the domain knowledge (which would be otherwise captured by the prior on the parameters in case of the ABC, thereby biasing the training of the GAN). In this case, although there is just one complex sampler initially, we have multiple samplers - one for each candidate model, with each sampler trying to learn a different transformation. The distance between the simulated and actual data is measured using a divergence metric and ultimately only those samplers or models are chosen which lie within a certain threshold. We argued that, the proposed method can do no worse than the baselines, but also significantly outperforms the baseline priors, and can successfully correct the likelihood misspecification in them 1. Hence, in the ABC-GAN framework, the Generator is correcting for the misspecification, while the Discriminator is learning summary statistics (data representations) along with the rejection region. Our simple and elegant formulation can absorb a variety of paradigms. It will be interesting to investigate a full Bayesian setup, and draw posterior samples for the baseline. Likewise, on the adversarial optimization side, owing to incorporation of prior knowledge, stability dynamics could be studied. Our extensive experimentation involving wide variety of datasets, baseline models and tasks reaffirms our belief that, the proposed regime can be used for continuous model improvement, in an inter-operable way. Our work opens up many future directions. In our current work, we have not yet exploited obtaining posterior inference. Can we compute the posterior quantities, like in ABC? A reasonable hunch is to calculate approximate posterior quantities, under change of measure. Here, we view T ≡ GGAN as fixed, deterministic, but differential transformation. Recent advances in gradient-based normalizing flows inspire us in this direction [Song & Ermon (2019)]. It is relatively straightforward to obtaining posterior predictive distribution - sample from the ABC-Pre Generator, and pass it through GAN Generator, and treat the samples as approximate draws with which any statistics can be computed. Another interesting question to ask is - does the discriminating function learnt by DGAN approximate the Bayes Factor and/or the likelihood ratio? Previous work in this direction provide hints [Shirazi et al. (2017)]. Likewise, it will be of interest to know whether the representations learnt by DGAN showcase sufficient statistics? Earlier work on learning summary statistics via deep neural networks for ABC provide clues [Wong et al. (2018)]. Under linear models or generalized linear models, we find an affirmative answer. From stability standpoint, can the specific type of regularization of GGAN be tuned such that an optimum between explicit and implicit generative models is found? Pursuing the above questions will help us understand ABC-GANs better. ## References Sercan O. Arik and Tomas Pfister. Tabnet: Attentive interpretable tabular learning, 2019. URL https: //arxiv.org/abs/1908.07442. Mark A. Beaumont. Approximate bayesian computation in evolution and ecology. *Annual Review of Ecology, Evolution, and Systematics*, 41(1):379–406, 2010. doi: 10.1146/ annurev-ecolsys-102209-144621. URL https://doi.org/10.1146/annurev-ecolsys-102209-144621. _eprint: https://doi.org/10.1146/annurev-ecolsys-102209-144621. 1Code and data are available at the following anonymized link Mark A Beaumont, Wenyang Zhang, and David J Balding. Approximate bayesian computation in population genetics. *Genetics*, 162(4):2025–2035, 2002. ISSN 1943-2631. doi: 10.1093/genetics/162.4.2025. URL https://academic.oup.com/genetics/article/162/4/2025/6050069. George EP Box. Science and statistics. *Journal of the American Statistical Association*, 71(356):791–799, 1976. Leo Brieman. Statistical modeling: The two cultures. *Stat. Sci.*, 16(3]):199–231, 2001. Katalin Csilléry, Michael G. B. Blum, Oscar E. Gaggiotti, and Olivier François. Approximate bayesian computation (ABC) in practice. *Trends in Ecology & Evolution*, 25(7), 2010. ISSN 0169-5347. doi: 10.1016/j.tree.2010.04.001. Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. Catboost: gradient boosting with categorical features support, 2018. URL https://arxiv.org/abs/1810.11363. Jerome H. Friedman. Multivariate adaptive regression splines. *The Annals of Statistics*, 19(1):1–67, 1991. ISSN 00905364. URL http://www.jstor.org/stable/2241837. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In *Generative Adversarial Networks*. arXiv, 2014. doi: 10.48550/ARXIV.1406.2661. URL https://arxiv.org/abs/1406.2661. Aude Grelaud, Christian Robert, Jean-Michel Marin, Francois Rodolphe, and Jean-Francois Taly. ABC likelihood-freee methods for model choice in gibbs random fields. *arXiv*, 2009. URL http://arxiv.org/ abs/0807.2767. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. In *Proceedings of the 31st International Conference on Neural Information* Processing Systems, NIPS'17, pp. 5769–5779, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. David Harrison and Daniel Rubinfeld. Hedonic housing prices and the demand for clean air. *Journal of* Environmental Economics and Management, 5:81–102, 03 1978. doi: 10.1016/0095-0696(78)90006-2. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. *arXiv*, 2018. doi: 10.48550/ARXIV.1812.04948. URL https://arxiv.org/abs/1812.04948. Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans, 2017. Tri Le and Bertrand Clarke. A Bayes Interpretation of Stacking for M-Complete and M-Open Settings. Bayesian Analysis, 12(3):807 - 829, 2017. doi: 10.1214/16-BA1023. URL https://doi.org/10.1214/ 16-BA1023. Chao Li, Kelu Yao, Jin Wang, Boyu Diao, Yongjun Xu, and Quanshi Zhang. Interpretable generative adversarial networks. 36:1280–1288, Jun. 2022. doi: 10.1609/aaai.v36i2.20015. URL https://ojs.aaai. org/index.php/AAAI/article/view/20015. Jean-Michel Marin, Pierre Pudlo, Christian P. Robert, and Robin J. Ryder. Approximate bayesian computational methods. *Statistics and Computing*, 22(6):1167–1180, 2012. ISSN 0960-3174, 1573-1375. doi: 10.1007/s11222-011-9288-2. URL http://link.springer.com/10.1007/s11222-011-9288-2. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets, 2014a. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. *arXiv:1411.1784 [cs, stat]*, 2014b. URL http://arxiv.org/abs/1411.1784. Art Owen. *Empirical Likelihood*. CRC Press, Boca Raton, 2001. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. *arXiv*, 2016. URL http://arxiv.org/abs/1511.06434. Skipper Seabold and Josef Perktold. statsmodels: Econometric and statistical modeling with python. In 9th Python in Science Conference, 2010. Mohammadali Shirazi, Soma Sekhar Dhavala, Dominique Lord, and Srinivas Reddy Geedipally. A methodology to design heuristics for model selection based on the characteristics of data: Application to investigate when the negative binomial lindley (NB-l) is preferred over the negative binomial (NB). *Accident Analysis & Prevention*, 107, 2017. ISSN 0001-4575. doi: 10.1016/j.aap.2017.07.002. URL https://www.sciencedirect.com/science/article/pii/S0001457517302373. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract. html. Athanasios Tsanas and Angeliki Xifara. Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools. *Energy and Buildings*, 49:560–567, 2012. ISSN 0378-7788. doi: https://doi.org/10.1016/j.enbuild.2012.03.003. URL https://www.sciencedirect.com/science/ article/pii/S037877881200151X. Jie Wang. An intuitive tutorial to gaussian processes regression. URL https://arxiv.org/abs/2009.10862. Wing Wong, Bai Jiang, Tung-yu Wu, and Charles Zheng. Learning summary statistic for approximate bayesian computation via deep neural network. *Statistica Sinica*, 2018. ISSN 10170405. doi: 10.5705/ss. 202015.0340. URL http://www3.stat.sinica.edu.tw/statistica/J27N4/J27N411/J27N411.html.
Review 1: Summary: This work proposes to use a DNN model to transform the outputs of a simpler statistical model, and use a GAN-like objective to minimize an approximate divergence between the model and data distribution. On several tabular regression datasets the method is demonstrated to achieve better performance compared to baseline methods. Strengths and Weaknesses: **Strengths.** - Transforming the outputs of a simpler statistical model using a deep model is a sensible idea, although I am not sure about its originality. **Weaknesses.** - The manuscript does not appear to be using the terminology of ABC in a standard way. ABC is generally an *inference* approach used to obtain a posterior distribution over the parameters of interest. Fitting tree models and transformers on data (presumably doing point estimation) are not usually referred to as ABC. - The manuscript claims to propose a new generative modeling paradigm, yet it actually describes a conditional modeling method which is only evaluated on univariate regression data. Furthermore, experiments do not evaluate the quality of the conditional distribution estimate, only reporting the MAE metric for regression. - If only (conditional) medians are of interest, the MAE can be directly minimized; there is no need to introduce GAN discriminators. - If full distributions are needed, it seems clear that invertible models such as normalizing flows should be preferred over GANs, as they make likelihood training possible and are no less flexible, especially on the low-dimensional tasks considered in the experiments. At the very least there should be experimental comparisons. Requested Changes: All issues mentioned above should be rectified. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper studies the problem of generative modeling. The paper proposes to prepend the GAN model with the generating model of approximate Baysian computation so that GAN and ABC can benefit from each other. On the one hand, GAN can incorporate subject knowledge into the generation procedure and possibly use a smaller architecture. On the other hand, ABC does not need to require accurate information as GAN can correct the misspecification. The paper shows that the proposed ABC-GAN method can out-perform the baselines on some tasks. Strengths and Weaknesses: Strengths: 1. The paper proposes a novel idea that combines ABC and GAN so that the two methods can benefit from each other. Weaknesses: 1. The experimental settings all use datasets of relatively small scales. Whether such an approach can be applied to larger datasets, more complicated data distributions, and whether more complex GAN models can handle the misspecification remain to be answered. 2. The skip-based models do not always give good performance and sometimes drastically bad results. Ideally, the skip models should be as good as the non-skip models. It is not well understood why this is the case. 3. The results are presented using many large tables, which could be hard to parse. I would suggest the authors find better visualization methods to make the results more accessible. 4. As mentioned in the paper, it would be a better idea to accompany ABC with normalizing flows rather than GANs, as theoretical analysis would be more practical for using normalizing flows. Also, I don't see a clear advantage in why ABC has to be paired with GANs explicitly. Minor: 1. The discriminator notation and the dataset notation are too similar. 2. "At the least, it should be able to ensure that the mGAN does at least as well as the baseline" redundant usage of "at least". 3. "Gradient boosting trees" and "gradient boosted trees" are both present in the paper. It's better to make the naming more consistent. 4. "The generative model for y is is nonlinear" 5. "and 3, 6, 8 to Energy dataset." Requested Changes: 1. Apply the proposed method to larger generation tasks, e.g., some image generations. Broader Impact Concerns: I happen to know that there are some ethical concerns related to using the Boston Housing dataset. You can check the sklearn page of the dataset for more information. Other than the usage of that dataset, I have no other ethical concerns. ================================================== Review 3: Summary: Given data $y_i^\tau$ and samples from a misspecified generative model $y_i^\pi$, the authors propose to use a GAN to make $y_i^\tau$ and $y_i^\pi$ indistinguishable. The resulting GAN has learned to correct for the model misspecification. The authors demonstrate the efficacy of their approach when the model misspecification is additive Gaussian noise. Strengths and Weaknesses: The authors tackle an interesting question, which is to correct for the mismatch between the data generating process and model. This paper was difficult to understand because of the lack of relevant background information. Approximate Bayesian Computation (ABC) is a major part of this paper, but there is almost no explanation on what it is, and how such models are trained. The paper would also benefit from a brief, yet rigorous explanation on how GANs are trained. The authors' design choice of simulating model misspecification as additive Gaussian noise also seems too simplistic to be relevant. The only baseline method included is the prior model, and the fact that the GAN-based approach is performing better seems to just show that GANs can denoise additive Gaussian noise. It would be more interesting to demonstrate how model misspecification can arise naturally, and that the proposed approach can mitigate it. Requested Changes: * Add a rigorous explanation on how ABC models are trained. * Add a rigorous explanation on how GANs are trained. * Define model misspecification precisely, and explain how it can occur in practice, and what negative effects it has. * Include an experiment where the model misspecification occurs naturally. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: The aforementioned issues regarding soundness, experimental setting, and evidence were not cleared during the discussion phase. As such, I have to recommend rejection at this stage. Thank you for your submission, and I hope the remarks will help improve the paper. ==================================================
# Hidden Heterogeneity: When To Choose Similarity-Based Calibration Kiri L. Wagstaff *kiri.wagstaff@oregonstate.edu* School of Electrical Engineering and Computer Science Oregon State University Thomas G. Dietterich tgd@cs.orst.edu School of Electrical Engineering and Computer Science Oregon State University Reviewed on OpenReview: *https: // openreview. net/ forum? id= RA0TDqt3hC* ## Abstract Trustworthy classifiers are essential to the adoption of machine learning predictions in many real-world settings. The predicted probability of possible outcomes can inform high-stakes decision making, particularly when assessing the expected value of alternative decisions or the risk of bad outcomes. These decisions require well-calibrated probabilities, not just the correct prediction of the most likely class. Black-box classifier calibration methods can *improve the reliability* of a classifier's output without requiring retraining. However, these methods are unable to detect subpopulations where calibration could also *improve* prediction accuracy. Such subpopulations are said to exhibit "hidden heterogeneity" (HH), because the original classifier did not detect them. This paper proposes a quantitative measure for HH. It also introduces two similarity-weighted calibration methods that can address HH by adapting locally to each test item: SWC weights the calibration set by similarity to the test item, and SWC-HH explicitly incorporates hidden heterogeneity to filter the calibration set. Experiments show that the improvements in calibration achieved by similarity-based calibration methods correlate with the amount of HH present and, given sufficient calibration data, generally exceed calibration achieved by global methods. HH can therefore serve as a useful diagnostic tool for identifying when local calibration methods would be beneficial. ## 1 Introduction How do we know when to trust a prediction? Let f(X) be a classifier that outputs a discrete probability distribution Pˆ(Y |X) over the K possible class labels {1*, . . . , K*}. Ideally, each prediction made by the classifier will be *point-wise calibrated*, that is, the true class distribution for each X matches Pˆ: P(Y |X) = Pˆ(Y |X). Many investigators have studied a weaker requirement that each distinct output value g is calibrated, such that P(Y |g) = E[Y ] over the set of predictions for which Pˆ(Y |X) = g (Vaicenavicius et al., 2019; Widmann et al., 2019). This ensures aggregate calibration for the set, but it allows individual predictions to have P(Y |X) ̸= Pˆ(Y |X). Others use the same approach but require calibration only for the most-likely class's predicted probability (e.g., Guo et al., 2017; Patel et al., 2021; Luo et al., 2022) or marginal class probabilities (e.g., Zadrozny & Elkan, 2002; Kumar et al., 2019), both of which also do not enforce point-wise calibration. Point-wise calibration of each individual prediction, for its full class distribution, is important for safety-critical applications. Well-calibrated predictions improve the trustworthiness of systems and support downstream cost-sensitive decisions (e.g., medical diagnosis, autonomous driving, financial decisions). Likewise, calibration is necessary when combining or comparing predictions from different sources (Bella et al., 2013) or in classifier cascades that use a low-cost but less accurate classifier's output to decide whether to apply a higher-cost but more accurate secondary classifier (Enomoto & Eda, 2021). Good calibration is beneficial in any decision making setting in which uncertainty matters (e.g., active learning or classification with a rejection or abstention option). We focus on an increasingly common use case in which we would like to apply a pre-trained, possibly proprietary, model M to our own data set D with corresponding distribution PD(Y |X). In this scenario, the original training data set is unavailable and P(Y |X), the distribution for which M was trained (and possibly calibrated), is unknown. Any domain shift between P and PD could prevent M from generating reliable predictions on D. Moreover, even in the absence of domain shift, M may perform poorly on D due to "hidden heterogeneity", which occurs when M assigns the same posterior probability to items with different true probabilities. Concerns about poorly calibrated classifiers are not new (e.g., Zadrozny & Elkan, 2001; Niculescu-Mizil & Caruana, 2005), and several post-training calibration correction methods have been developed (e.g., Guo et al., 2017; Kumar et al., 2019; Kull et al., 2019; Alexandari et al., 2020). In general, these methods devise a *calibration map* Φ that transforms the original predicted probabilities into values that are better calibrated. We denote the output of a classifier f(xi) applied to item xi as the probability vector pˆi of length K (number of classes) that sums to 1 (i.e., resides in the simplex ∆K−1). The calibration map Φ : ∆K−1 7→ ∆K−1 is derived from an independent calibration set C to transform pˆi to a more reliable qˆi = Φ(ˆpi). A key limitation of these calibration maps is the implicit assumption that all items with the same predicted probability vector pˆ should be given the same correction. Such maps cannot accommodate hidden heterogeneity, which manifests as subpopulations with distinct P(Y |X) values to which the classifier has erroneously assigned the same pˆ value. For example, in predicting cancer risk, there could be many different reasons (age, lifestyle, family medical history, etc.) that a given individual is predicted to have pˆ = 0.9. Even if the predictions satisfy aggregate calibration, this probability could be an over-estimate for some individuals, while for others it could be an under-estimate. No global calibration map can address this heterogeneity to achieve point-wise calibration, because they map all items with the same pˆ to the same qˆ. We propose a method to quantify hidden heterogeneity (HH) as a signal for when global calibration may be inadequate. Once HH is detected, we face a choice of either (a) training a new classifier on the available calibration data or (b) improving the existing classifier using the calibration data. Because HH is a local phenomenon, a natural way to improve the classifier is to apply a local, similarity-based calibration technique. We introduce two local calibration methods that leverage the location of xiin feature space to yield qˆi = Φ(ˆpi|xi). These methods determine the calibrated probability qˆi by taking a weighted vote of data points in the calibration set C. The first method, Similarity-Weighted Calibration (SWC), assigns weights to every point in the calibration set based on similarity to xi. The second method, SWC-HH, uses only items within a local neighborhood defined by the estimated HH. We refer to the weighted number of calibration data points as the "calibration support" for xi, which indicates how much calibration data is available for estimating qˆi. This measure of the calibration quality of qˆi for each xiis a unique advantage of local calibration. We note that any post-hoc calibration method can be viewed as a form of model stacking (Wolpert, 1992), in which the output of the original classifier is transformed via Φ, a model itself. Our SWC and SWC-HH methods are stacking methods that focus on improving local calibration. As a consequence, they also reduce or eliminate HH and can thereby improve classifier accuracy. The major contributions of this paper are 1. The identification of hidden heterogeneity as a property of classifier predictions that thwarts global calibration methods (Section 3), 2. A method for quantifying hidden heterogeneity to indicate when local calibration is needed (Section 3), 3. Two local calibration methods based on Similarity-Weighted Calibration (Section 4), and 4. Results of experiments that assess the relationship between hidden heterogeneity and calibration, yielding useful guidance for practitioners (Section 5). We provide context from previous work in Section 2. Key conclusions and limitations of local, similaritybased calibration are discussed in Section 6. ## 2 Related Work There are several methods for improving the reliability (calibration) of classifier predictions. Many recent advances were inspired by the recognition that deep neural networks in particular may sacrifice calibration to achieve higher generalization accuracy (Guo et al., 2017). Strategies include using calibration-sensitive training methods, if the original training set is available (e.g., via modifications to the loss function, as proposed by Kumar et al., 2018; Mukhoti et al., 2020; Enomoto & Eda, 2021; Tomani & Buettner, 2021), using domain-specific representations that lead to improved calibration (Kalmady et al., 2021), or adopting network architectures that do not use convolutions (Minderer et al., 2021). In contrast, post-hoc calibration correction methods that directly modify the classifier's predictions on new observations, without re-training, can be employed even when the training data (or model) are proprietary or when the data distribution has changed and we wish to recalibrate an existing classifier to extend its applicability. Global, parametric calibration methods re-map the predicted probabilities, pˆ, output by the classifier by fitting a chosen functional form (e.g., logistic curve) from the probabilities to the labels to compute qˆ = Φ(ˆp). For binary classifiers, Platt scaling (Platt, 1999) transforms pˆiinto a value between 0 and 1 using a sigmoid function with two parameters, A and B: qˆi =1 1+eApˆi+B . The parameters A and B are chosen to optimize the negative log-likelihood of predictions made on the calibration set. Platt scaling was generalized to multi-class problems for neural networks (Guo et al., 2017) via a method called temperature scaling, which operates on the logits zi (not the probabilities) by optimizing a temperature parameter T in ui[k] = e zi[k]/T , where zi[k] is the logit for item i and class k, and ui[k] is the corresponding unnormalized probability. These values are normalized as qˆi[k] = Pui[k] j uj [k] . The same T value is used for all classes. Bias-Corrected Temperature Scaling (Alexandari et al., 2020) adds a bias term for each class. There are also several approaches that construct probability bins and assign the average (or other aggregate) accuracy within bin Bb as its calibrated probability, qˆi:= Accb, ∀i ∈ Bb. Histogram binning (Zadrozny & Elkan, 2001) assigns items to bins based on their uncalibrated predictions pˆi, often using equally-spaced bin boundaries or divided so that each bin has the same number of items ("equal frequency") from the calibration set. Kumar et al. (2019) found that the latter strategy, as well as using a larger number of bins, yields better results. Isotonic regression (Zadrozny & Elkan, 2002) adds further flexibility by optimizing the bin boundaries to minimize the squared loss between qˆi and yi. Recently, Patel et al. (2021) proposed selecting the bin boundaries to maximize the mutual information between bin predictions qˆi and yi. To date, very few calibration methods have leveraged the location of items in feature space, X . Zhao et al. (2020) introduced "individual" (per-item) calibration for regression problems and confidence intervals. Partial specialization for classification problems can be achieved by estimating a different T per subpopulation (unlabeled cluster (Gong et al., 2021) or labeled "domain" (Yu et al., 2022)), then employing linear regression to estimate a new T ′for each test item. Our approach operates at a finer (per-item) granularity and is not restricted to probability rescaling. Like our method, the LoRe calibration method (Luo et al., 2022) considers the similarity of the calibration set items to the test item x. However, LoRe restricts the similarity calculation to calibration items that fall into a probability bin based on the probability maxk pˆi[k] of the highestprobability class. This can produce high variance estimates when the bin contains few calibration items. Our method avoids this problem by considering the full predicted distribution pˆi when computing similarity. LoRe also only calibrates the highest-probability prediction; it does not produce a calibrated probability distribution over all K classes. Consequently, it does not support downstream tasks such as computing the expected costs of misclassification (in cost-sensitive problems) or re-estimating class probabilities (Alexandari et al., 2020). One calibration approach that employs similarity to compute the complete qˆi[k] vector is Similarity-Binning Averaging or SBA-10 (Bella et al., 2009), which creates bins (neighborhoods) that contain an item's 10 nearest neighbors (in Euclidean distance) in an "augmented" feature space X + = X × ∆K−1 defined by the item's feature vector xi of dimension d concatenated with its probability vector pˆi ∈ ∆K−1. SBA-10 computes the calibrated probability qˆi[k] as the probability of class k (in the calibration set) within item i's assigned bin, with each item contributing equally (Bella et al., 2009). In contrast, our approach uses a similarity-weighted contribution from every item in the calibration set, not just the 10 nearest neighbors. Algorithm 1 Hidden Heterogeneity (HH) Input: Test item xt, calibration data C, predicted probabilities pˆ, and radius r Output: Hidden heterogeneity in neighborhood around xt 1: Construct probability neighborhood around xt: Ut = {xi ∈ C|DH(ˆpt, pˆi) < r} (using Eqn. 2). 2: Train gt using labeled data in Ut. 3: Collect model predictions for the neighborhood: f(Ut) = {pˆi|xi ∈ Ut}. 4: Collect gt predictions for the neighborhood: gt(Ut) = {gt(xi)|xi ∈ Ut}. 5: Collect labels for the neighborhood: YUt = {yi|xi ∈ Ut}. 6: Calculate HHUt using f(Ut), gt(Ut), and YUt (Eqn. 3). ## 3 Hidden Heterogeneity Global post-hoc calibration methods, such as Platt scaling and temperature scaling, perform very well for some data sets and algorithms and less well for others. Similarly, local methods like SBA-10 do not always improve upon these global methods. What causes the failure of global methods, and under what conditions can local methods do better? Our hypothesis is that global post-hoc calibration, which focuses on aggregate calibration, fails when the data exhibits *hidden heterogeneity* (HH) with respect to the predicted probabilities pˆ. HH characterizes situations where there are subpopulations in the feature space X to which the classifier assigns the same pˆ but that require different calibration corrections. ## 3.1 Hidden Heterogeneity Definition 1 A classifier f exhibits *hidden heterogeneity* with respect to a feature space X if there exists a subregion *U ⊆ X* such that f(x) ≈ pˆ for all x ∈ U and yet U can be partitioned into M disjoint subregions U = U1`*· · ·* `UM such that the true class probabilities P(y|x ∈ Um) ̸= P(y|x ∈ Um′ ) for all distinct pairs m, m′ ∈ {1, . . . , M}, m ̸= m′. An extreme example of HH occurs for a classifier that ignores all features and predicts the majority class for all items. Imagine a data set composed of 60% cats and 40% birds, for which a classifier predicts P(y = "cat") = ˆp = 0.6 for all items (i.e., U = X ). This classifier is perfectly calibrated in terms of aggregate calibration, but it is uninformative about any individual animal. If cats and birds are not separable in the feature space, this may be the best one can do. However, if the items have a feature such as "number of legs", then two subregions—U1 for animals with two legs and U2 for animals with four legs—can be defined with true conditional probabilities of 1 (for "cats") and 0 (for "birds"). This heterogeneity is hidden in the classifier's predictions. This extreme situation (complete HH) could happen for a number of reasons (majority-class classifier, classifier only trained on cats, etc.). More commonly, any classifier may have one or more subregions U in its predicted probabilities that likewise obscure informative heterogeneity, due to model misspecification, an overly constrained hypothesis space, over-regularization, or data shift. Detecting HH can alert the practitioner to limitations of the classifier. While global methods that map pˆ to qˆ cannot address HH, local calibration could model the subregions separately and assign qˆi differently for each Ui. ## 3.2 Detecting Hidden Heterogeneity Algorithm 1 provides a method to compute the *detectable* hidden heterogeneity for a region *U ⊆ C* given a labeled calibration data set C sampled from the same distribution as the test set. HH is calculated as the potential improvement (compared to the original classifier) achieved by training a specialized classifier on only the items in U. In step 1, we define item xt's probability neighborhood Ut to contain calibration items that are close to xt in the probability simplex ∆K−1. More precisely, Ut contains the items within radius r of item xt in ∆K−1. There is no *a priori* best choice for r, but to obtain reliable HH estimates, one should choose r such that no set Ut is excessively small. We employ the standard choice of Hellinger distance DH (Equation 1) to calculate the distance between probability vectors pˆi and pˆj . Hellinger distance is the probabilistic equivalent of Euclidean distance, and it is more suitable here than KL divergence, which is not symmetric. $$D_{H}({\hat{p}}_{i},{\hat{p}}_{j})={\frac{1}{\sqrt{2}}}{\sqrt{\sum_{k=1}^{K}\left({\sqrt{{\hat{p}}_{i}[k]}}-{\sqrt{{\hat{p}}_{j}[k]}}\right)^{2}}}.$$ $$\left(1\right)$$ $$\left(2\right)$$ Conveniently, the Hellinger distance can be expressed as the Euclidean norm of the difference of the elementwise square root of each probability vector (Krstovski et al., 2013): $$D_{H}(\hat{p}_{i},\hat{p}_{j})=\frac{1}{\sqrt{2}}\left\|\sqrt{\hat{p}_{i}}-\sqrt{\hat{p}_{j}}\right\|_{2}.\tag{10}$$ This in turn allows the use of efficient methods (e.g., k-d tree) for populating neighborhood Ut. For each test item xt, a new (local) classifier gt is trained using only the nearby calibration items in Ut (step 2). This classifier gt can be any classifier type. We employed an ensemble method that can perform internal generalization estimates without an additional validation set. We trained a bagged ensemble of 50 decision trees with no depth limit and no limit on the number of features searched for each split. We used out-of-bag error to determine how much pruning to employ to achieve good generalization and avoid overfitting to the calibration set. We searched over 7 values of the α pruning complexity parameter, evenly spaced between 0.0 (no pruning) and 0.03, as input to the minimal cost-complexity pruning method (Breiman et al., 1984). Finally (step 6), we calculate HH for Ut by comparing the Brier score (Brier, 1950) of the original predictions by model f on Ut (step 3) with those generated by the local model gt (step 4) using true labels YUt (step 5): $$H H_{{\mathcal{U}}_{t}}=\mathrm{Brier}(f({\mathcal{U}}_{t}),Y_{{\mathcal{U}}_{t}})-\mathrm{Brier}(g_{t}({\mathcal{U}}_{t}),Y_{{\mathcal{U}}_{t}}),$$ ), (3) where the Brier score is the mean squared error between predictions pˆi[k] ∈ [0, 1] and labels yi, for N items and K possible classes: $${\rm Brier}(\hat{p},Y)=\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\left(\hat{p}_{i}[k]-{\bf1}(y_{i}=k)\right)^{2}.\tag{4}$$ $$\left({\boldsymbol{3}}\right)$$ We enforce the condition that gt is no worse than f by clipping HHUt to 0. Regions with large HH values provide both a warning that global calibration methods may not perform well and an opportunity for local specialization by using item similarity during calibration. ## 4 Similarity-Weighted Calibration We propose to improve point-wise calibration by leveraging information in feature space as well as the uncalibrated probabilities pˆi. Given test item xt, the goal is to estimate well-calibrated qˆt[k] = P(y = k|xt) for each class k ∈ {1 *. . . K*}. Similarity-Weighted Calibration (SWC) is described in Algorithm 2. Let $$s(t,i)=\operatorname*{sim}([x_{t},{\hat{p}}_{t}],[x_{i},{\hat{p}}_{i}])\in[0,1]$$ be the similarity between item xt and item xi measured in the augmented space X +, where [*a, b*] is the concatenation of vectors a and b. A similarity of 1 is perfect identity. The investigator chooses how best to measure similarity in this space. Use of X + enables calibration to benefit from information encoded by the classifier (pˆ) as well as item position in feature space (x). Further, a supervised similarity measure can learn the relative importance of each component for the problem at hand. We employ such a measure: the random forest *proximity function* (RFprox). RFprox trains a random forest on a labeled data set and defines the similarity between items xi and xj as the fraction of times they are assigned to the same leaf in each tree of the ensemble (Breiman, 2001; Cutler et al., 2012). Effectively, the random forest encodes a "kernel" defined by those weights (leaf co-occurrences) (Hastie et al., 2009). We employ the calibration data to learn the relevant RFprox measure using a random forest with 100 trees, no depth limit, and considering a random set of √d features for each split, given d total features. Note that defining sim() using standard Algorithm 2 Similarity-Weighted Calibration (SWC) Input: Test item xt, calibration data xi ∈ C and labels yi Output: Calibrated probabilities qˆt[k], ∀k 1: Collect model predictions for item xt: pˆt[k] for k ∈ {1 *. . . K*}. 2: Collect model predictions for the calibration set: pˆi[k] for xi ∈ C, k ∈ {1*, . . . , K*}. 3: Compute pairwise similarity as s(*t, i*) for xi ∈ C. 4: Compute qˆt[k] = P 1 i s(t,i) Pi s(*t, i*)1(yi = k) for k ∈ {1*, . . . , K*} (Eqn. 5). kernels, such as the Gaussian kernel, over X + would impose a fixed weighting on the x and pˆ components. A potential direction for future research would be to apply multiple kernel learning (Gönen & Alpaydın, 2011) to optimally combine separate kernels for x and pˆ. SWC computes the similarity of xt to every item in the calibration set (step 3) and uses this information to replace pˆ with a similarity-weighted combination of labels from the calibration set (step 4). $${\hat{q}}_{t}[k]={\frac{1}{\sum_{i}s(t,i)}}\sum_{i}s(t,i)1(y_{i}=k).$$ $$\left({\mathfrak{H}}\right)$$ s(*t, i*)1(yi = k). (5) The similarity-based approach to calibration enables local specialization within the data set, but it does not directly make use of the calculated HH. We also developed the SWC-HH algorithm, which filters the calibration set to restrict which items are used to generate qˆ. HH, which is computed separately for each test item xt (Algorithm 1), is employed as an additional filter for calibration. In step 4, SWC-HH only includes items with a similarity to item xt of at least 12HHUt . Note that the maximum value for HH is 2.0 since it is the difference in Brier scores, clipped to 0.0, and each Brier score ranges between 0.0 and 2.0. Dividing by 2.0 normalizes the threshold to the range 0.0–1.0, making it suitable as a similarity threshold. ## 5 Experimental Results We conducted experiments with a variety of classifiers and data sets to compare local and global calibration methods and to determine the role that hidden heterogeneity plays. Our hypotheses were that (1) local, similarity-based calibration is more effective at reducing Brier score than global calibration methods, (2) the amount of improvement correlates with the hidden heterogeneity score, and (3) calibration support can serve as an indicator of per-item calibration quality. Our implementations of SWC, SWC-HH, and other calibration methods, along with scripts to replicate the experiments, are available at https://github.com/ wkiri/simcalib. ## 5.1 Methodology We assessed calibration methods for six tabular data classifiers as implemented in the scikit-learn Python library (Pedregosa et al., 2011), including a decision tree with min_samples_leaf = 10 (DT), a random forest with 200 trees (RF), an ensemble of 200 gradient-boosted trees (GBT), a linear support vector machine (SVM), a Gaussian kernel (γ =1 d var(X) , C = 1.0) support vector machine (RBFSVM), and a Naive Bayes classifier (NB). Any parameters not explicitly mentioned were set to their default values. We also conducted experiments with pre-trained deep neural networks for classifying images (details in Section 5.4). Data sets. We analyzed four tabular and two image data sets: - moons: a synthetic 2D data set with two partially overlapping classes, chosen to enable visualization of classifier outputs and hidden heterogeneity in feature space. Observations were generated using the scikit-learn make_moons() function with noise set to 0.3 and random_state set to 0. - letter (letter recognition): a 26-class data set of capital letters from the English alphabet represented by 16 statistical and geometrical features that describe the image of the letter (Frey & Slate, 1991), available at https://archive.ics.uci.edu/ml/datasets/letter+recognition. - mnist: the MNIST handwritten digit data set (LeCun et al., 1998) composed of 28x28 pixel (d = 784) images containing a digit from 0 to 9. We used the data set as provided by OpenML; the original source is http://yann.lecun.com/exdb/mnist/. In addition to the 10-class data set (mnist10), we created several binary subsets consisting only of two digits, such as "4" and "9" (mnist-4v9). This data set is "tabular" (1-d feature vector); no 2D image structure is preserved. - fashion-mnist: grayscale images of clothing and accessories (10 classes) that was designed to be more challenging than the MNIST data set yet have the same dimensionality (28x28, d = 784) (Xiao et al., 2017), available at https://github.com/zalandoresearch/fashion-mnist. - CIFAR-10: 60,000 images (64 × 64 pixels) labeled into 10 distinct classes; the test set contains 10,000 images (Krizhevsky, 2009) - CIFAR-100: a disjoint set of 60,000 images labeled into 100 different classes (50k train, 10k test) For tabular data sets, we randomly sampled 10,000 items and for each trial randomly split them into 500 train, 500 test, and 9000 for a calibration pool. For the "mnist10" and "letter" data sets, we used 1000 items each for training and test, due to their large number of classes (10 and 26, respectively). We created a series of nested calibration sets of size { 50, 100, 200, 500, 1000, 1500, 2000, 2500, 3000 } to assess the data efficiency of each calibration method. For image data sets, in each trial we generated a class-stratified random split of the standard test set into 5000 test items and reserved the remainder as the calibration set. We report average performance across 10 trials along with the standard error for the observed values. Calibration methods. We compared three similarity-based calibration methods (SBA-10, SWC, and SWC-HH) to standard calibration methods including Platt scaling (Platt, 1999), histogram binning (Zadrozny & Elkan, 2001), and isotonic regression (Zadrozny & Elkan, 2002). SBA-10 employed Euclidean distance to identify the nearest neighbors, while SWC and SWC-HH used the RFprox similarity measure. When computing hidden heterogeneity, we used a probability radius of r = 0.1 in the probability simplex. Platt scaling optimizes the negative log-likelihood of predictions against the target probabilities rather than discrete {0, 1} labels (Platt, 1999; Niculescu-Mizil & Caruana, 2005). With n+ as the number of calibration items in the positive class and n− as the number of negative items, the target probabilities are y+ = n++1 n++2 and y− =1 n−+2 . For multi-class problems, we applied temperature scaling (Guo et al., 2017). For classifiers that output probabilities instead of logits, we first transformed pˆiinto logits as zi[k] = ln pˆi[k] 1−pˆi[k] . To avoid dividing by zero (or taking its logarithm), we clipped pˆi[k] to the range [ϵ, 1 − ϵ], where ϵ = 1 × 10−12. We applied the histogram binning method implemented by Kumar et al. (2019)1 and followed their recommendation to use equal-mass bins (100 bins). Metrics. We employ Brier score (Equation 4) to measure point-wise prediction quality, following earlier researchers (e.g., Zadrozny & Elkan, 2001; 2002; Niculescu-Mizil & Caruana, 2005). It has several advantages over a commonly used calibration metric called the Expected Calibration Error (ECE) (Naeini et al., 2015), which assigns predictions to bins to assess aggregate calibration. ECE can be trivially minimized to 0 by always predicting the empirical average probability of a given class, yielding perfectly calibrated but uninformative predictions (Kull et al., 2017; Widmann et al., 2019; Ovadia et al., 2019). The Brier score incorporates not just calibration error (or "reliability") but also "resolution" or sharpness, which rewards predictions that do more than predict the average probability (Ferro & Fricker, 2012). In addition, ECE is sensitive to the number and choice of bins (Vaicenavicius et al., 2019; Kumar et al., 2019; Patel et al., 2021), it exhibits undesirable edge effects (discontinuities at bin boundaries), and it only assesses calibration of the predicted class. Brier score characterizes prediction quality across all classes, and it avoids artificial discretization and edge effects, since it does not employ binning. ## 5.2 Similarity-Based Calibration For Tabular Data Sets To test our first hypothesis, we compared similarity-based calibration to global methods. Figure 1 shows Brier score as a function of available calibration data for the binary classification task of distinguishing two 1Available at https://github.com/p-lambda/verified_calibration. ![7_image_0.png](7_image_0.png) Figure 1: Calibration performance for three classifiers on the MNIST 4-vs-9 data set. Each plot shows Brier score (lower is better) with increasing calibration set size. Error bars show one standard error over 10 trials. handwritten MNIST digits ("4" vs. "9"). The uncalibrated predictions yielded different starting Brier scores for each classifier (red dashed lines). Platt scaling and isotonic regression improved the Brier score for the Naive Bayes (NB) and random forest (RF) classifiers but only marginally for the decision tree (DT). No further improvements were achieved beyond 500 calibration items. Similarity-based calibration (SBA-10, SWC, and SWC-HH) generally did not perform well with small calibration sets but achieved much larger benefits for NB and DT when at least 500 items were used for calibration, and Brier score continued to improve as more calibration data was provided. The random forest, which had the best initial Brier score and therefore less room for improvement, showed an advantage for similarity-based calibration after at least 1500 items were used. SWC-HH yielded a clear advantage at all calibration set sizes for NB and DT, and it also provided a small advantage over SWC for RF with calibration set sizes of at least 1500 items. SWC and SWC-HH consistently out-performed SBA-10. Since RFprox uses labels to learn the similarity measure, it can elevate the importance of individual features in X + (like pˆ[k]) by placing them higher in individual decision trees within its ensemble. SWC and SWC-HH also include evidence from the entire calibration set rather than only the nearest neighbors. Importantly, SBA-10 showed almost no difference in Brier score for different classifiers, effectively ignoring their individual differences in pˆ. MNIST is represented by 784 features, so the addition of two dimensions in X + has little effect. In contrast, the fact that SWC obtained different absolute results for each classifier indicates that RFprox effectively leveraged the pˆ features. Experimental results for all classifiers and all tabular data sets, reporting Brier score and accuracy results, are provided in Appendix A (Figures 7 and 8). ## 5.3 Similarity-Based Calibration Exploits Hidden Heterogeneity Our second hypothesis was that HH helps explain why and when similarity-based calibration is effective. We examined Brier score improvement across a large combination of data sets, classifiers, and random trials. We found that large HH can be present even in the absence of domain shift, which creates a large opportunity for local calibration. Figure 2 shows the improvement (reduction) in Brier score after calibration as a function of the average HH across the test set. The four binary data sets were MNIST "1" vs. "7" (relatively easy), "4" vs. "9" and "3" vs. "8" (intermediate), and "3" vs. "5" (difficult). The three multi-class data sets were "mnist10", "fashion-mnist", and "letter". We compared Platt (for binary) or temperature scaling (multiclass) calibration to SWC and SWC-HH for six classifiers, using 10 trials per combination of data set and classifier. SWC and SWC-HH achieved Brier score improvements that correlate strongly with the amount of measured HH. The relationship was far weaker for the global Platt/temperature scaling methods, which cannot detect or exploit HH. These results show that HH, which can be computed prior to calibration, is a useful diagnostic indicator that can guide the choice of calibration strategy. When HH is high, it is advisable to employ a similarity-based calibration method like SWC. When HH is low or there is not much calibration data, global methods such as Platt scaling are recommended. To better understand how SWC and SWC-HH improve Brier score, we visualize the calibration process in Figure 3 for the two-dimensional "moons" data set. Each row corresponds to results for a different classifier ![8_image_1.png](8_image_1.png) ![8_image_0.png](8_image_0.png) Figure 2: Brier score improvement versus average hidden heterogeneity for four binary (top row) and three multi-class (bottom row) tabular data sets, for six classifier types and 10 random trials. ![8_image_2.png](8_image_2.png) ![8_image_3.png](8_image_3.png) Figure 3: Visualization of uncalibrated predictions (first column) for the "moons" data set, Platt scaling (second column), and SWC (fourth column). The third column shows hidden heterogeneity values that highlight areas of potential improvement, which align with SWC improvements. ![9_image_0.png](9_image_0.png) Figure 4: Calibration performance on CIFAR-10 and CIFAR-100 using three pre-trained neural networks over 10 trials (error bars show standard error). HH is shown in parentheses. (linear SVM, decision tree, and random forest). The third column shows the computed HH values. The classifiers were trained on 500 points, calibrated using 1000 points, and Brier score evaluated on 500 points. The linear SVM cannot model the nonlinear decision boundary very well. The diagonal region where the classes are mixed yet separable in the feature space has a high value for HH. When SWC is applied, it is the pˆ values in this band that receive the biggest modifications. These changes reduce (improve) the Brier score from 0.218 to 0.133. Platt scaling increases (worsens) the Brier score slightly. The decision tree (second row of Figure 3) exhibits less hidden heterogeneity on the same data set, because it is able to model the nonlinear decision boundary more effectively. SWC improves the Brier score from 0.156 to 0.129. Finally, the random forest (bottom row of Figure 3) exhibits more areas with hidden heterogeneity, due to overly conservative predictions in the upper right and lower left areas. SWC creates smoother regions as the calibration data informs updates to the posterior probabilities and improves the Brier score from 0.134 to 0.122. Importantly, the difference in results for the rightmost column in Figure 3 demonstrates that SWC adapts (calibrates) most where the classifier exhibits hidden heterogeneity, yielding a result that is customized to the original classifier and more flexible than global calibration. ## 5.4 Similarity-Based Calibration For Image Classifiers We also conducted calibration experiments with the CIFAR-10 and CIFAR-100 data sets (Krizhevsky, 2009) using three pre-trained neural networks of increasing complexity2. ResNet20 (He et al., 2016) has 20 layers and 0.27M parameters, ResNet56 (He et al., 2016) has 56 layers and 0.85M parameters, and RepVGG_A2 (Ding et al., 2021) has 22 layers and 25.49M parameters. For these data sets, the similarity measure sim used by SWC and SWC-HH operates in the latent space learned by each network. Specifically, we used the output activations of the avgpool (for ResNet models) and gap (for RepVGG_A2) layers as a feature vector (dimensionality 64, 64, and 1408 respectively). We again used a learned RF proximity function to compute similarity in this space. We found that a probability radius of 0.05 yielded reasonably sized neighborhoods for computing HH. Improvements for CIFAR-10 and CIFAR-100 are evident as more complex neural networks are trained; the Brier score consistently decreases from ResNet20 to ResNet56 to RepVGG_A2 (see Figure 4). However, HH values were very low for all three networks (0.01 − 0.03), leaving little room for improvement with local calibration. Indeed, we found that global calibration (temperature scaling or isotonic regression) yielded the best results for these data sets. Consistent with the results on tabular data shown in section 5.2, calculating HH in advance provides guidance as to which method will be most useful. There is room for additional improvement. Recent studies of deep network latent spaces suggest that distances computed in neural network latent spaces often do not work well. For example, nearest neighbor classifiers using latent space distances perform substantially worse than the standard multinomial logistic 2Pre-trained models were obtained from https://github.com/chenyaofo/pytorch-cifar-models . ![10_image_0.png](10_image_0.png) Figure 5: Test items from "mnist-1v7" with the lowest calibration support for a linear SVM classifier. For all five items, SWC *reduced* the confidence of the correct class, instead of increasing it. regression (softmax) classifiers (Garrepalli, 2022). These latent spaces are also not able to represent dimensions of variation that were poorly sampled in the training data (Dietterich & Guyer, 2022). Likewise, decision tree classifiers do not perform well on these learned representations (Garrepalli, 2022). Since HH is influenced by the data representation, hidden heterogeneity could be more detectable for these data sets using a different representation. Likewise, even if HH is large, similarity-based local calibration may not always be able to improve the Brier score due to limitations of the representation. Employing t-SNE or PCA to reduce dimensionality for similarity calculations (as done by Luo et al. (2022)) could also be beneficial. Exploring the connection between choice of representation and calibration efficacy is an important future direction. ## 5.5 Calibration Support Highlights Calibration Data Gaps And Domain Shift Our third hypothesis is that measuring calibration support, which is a unique capability of similarity-based calibration methods, can provide useful information about the relevance of the calibration set to each item being calibrated. We define the *calibration support* S for item t that informs Φ(ˆp|t) as the sum of similarity weights for items drawn from calibration set C: $$S_{t,\mathcal{C}}=\sum_{i}s(t,i),x_{i}\in\mathcal{C}.$$ $$(6)$$ s(t, i), xi ∈ C. (6) Identifying items with low values for St,C can draw attention to observations that are not well represented by the calibration set. These could be individual outliers or, if there is a large number of such items, they could indicate distribution shift between the calibration and test sets. Low St,C values signal the need for more data (or more representative data) to be added to C. While previous studies focus solely on calibration performance as a function of the total calibration set *size*, similarity-based calibration can characterize the relevance of the calibration set to individual test items. Consider a linear SVM trained on 500 MNIST "1" and "7" digits and calibrated using SWC with 3000 digits. For most items in the test set, calibration improves. However, examining items with low calibration support helps us understand failures. Figure 5 shows the five test items (out of 500 total) with the lowest calibration support. Calibration with SWC was detrimental (qˆ[y] decreased) for all five. These items are not necessarily ambiguous in an objective sense, but the fact that they have low representation in the calibration set signals (correctly) that the calibrated output qˆ may be less reliable. Likewise, low calibration support can provide a warning when domain shift is present between the calibration and test sets (or any new prediction). We simulated domain shift (covariate shift) by rotating a subset of the test items 90 degrees counter-clockwise. Figure 6(a) shows the distribution of calibration support values without any rotation; the peak value is around 900, and items with low support are rare. With 10% of the test set rotated (Figure 6(b)), the overall histogram changes a little, and the rotated items (orange bars) tend to have low calibration support, signaling a need for inspection. With 50% (partial domain shift) rotated (Figure 6(c)), distinct populations for the rotated and unrotated items are clear, and with 100% rotated (complete domain shift), the whole histogram shifts to lower values (peak around 300). ![11_image_0.png](11_image_0.png) Figure 6: Distribution of calibration support values for 500 test items ("mnist-1v7") classified by a linear SVM with progressively more of the test set items rotated. Bar plots are stacked. This result suggests that in a deployment setting, it is useful to monitor the calibration support values that are reported by SWC. Knowledge about typical support values (or better, their distribution) could enable early detection of domain shift when new items originate from a changed distribution. That signal indicates that the calibration set, and likely the trained model as well, require revision. Platt, temperature scaling, histogram binning, and other methods provide a fixed mapping Φ(ˆp) without regard to the item being calibrated; there is no signal to indicate whether Φ is still relevant. SWC provides an intrinsic measure of calibration set relevance through the calibration support values obtained by each new item. ## 6 Conclusions, Limitations, And Future Work In this work, we explored the benefits of local, individual calibration for each test item, in contrast to widely used global classifier calibration methods. We identified hidden heterogeneity (HH) as a strong indicator of the need for local calibration, when there are subpopulations within a data set that have the same uncalibrated predicted probability pˆ yet require different corrections to achieve a well-calibrated probability qˆ. We provided a method for calculating HH before calibration to inform selection of the calibration method. Experiments with tabular data sets and diverse machine learning classifiers indicate that local calibration improves Brier score in proportion to the average hidden heterogeneity (HH) value in the data set. We highlight this finding as an important step towards not only correcting miscalibration but also explaining and understanding it. When HH is very low (as we found with several deep neural networks), or little calibration data is available, global methods such as temperature scaling are sufficient, but otherwise, local calibration is preferred. On the other hand, because local calibration has far more degrees of freedom than parametric, global methods, it tends to require more calibration data. If calibration data is scarce, global methods may be preferred. We proposed a similarity-based approach to local calibration (SWC) that weights evidence from the calibration set according to its similarity to the test item in an "augmented" feature space that includes both the input features and the predicted class probabilities. This concept goes beyond prior work such as SimilarityBinning Averaging (SBA-10), which calibrates (without weighting) using the 10 nearest neighbors based on Euclidean distance in the augmented feature space (Bella et al., 2009). In most cases, we found that SWC out-performs SBA-10. Additional benefits can be obtained by incorporating HH directly into the SWC algorithm (SWC-HH). A final and unique benefit of similarity-based calibration is that the explicit measurement of calibration support can serve to warn when a given test item lacks good representation in the calibration set. This can also be an indicator when distribution shift or domain shift is present. Limitations: Runtime. The computational cost of local calibration methods tends to be higher than that of global methods, since each item is independently modeled rather than constructing a single model to apply to all items. However, this also means that calibration can be conducted lazily, as needed, given a similarity measure. The computation of hidden heterogeneity requires (1) the identification of an item's nearest neighbors in the probability simplex ∆K−1, which can be costly with a large number of classes, and (2) training a specialized classifier to estimate the potential Brier score improvement (Equation 3). Limitations: Preservation of accuracy. SWC and SWC-HH are not rank-preserving calibration methods. This means that in addition to modifying the calibration properties of the predictions, they can also change the predicted class and therefore the accuracy of predictions. Improvements in accuracy are reflected in improved Brier scores. Temperature scaling, in contrast, does preserve rank and accuracy because, without a bias term, it cannot move items to the other side of the decision threshold. Some researchers favor rank-preserving methods (Zhang et al., 2020; Patel et al., 2021), since they seek to improve calibration without sacrificing accuracy. However, this constraint also prevents them from *increasing* accuracy, which is an outcome available to Platt scaling (via its bias term), histogram binning, SWC, etc. On the whole, we agree with Bella et al. (2013) that there is no need to preserve item rankings given the opportunity to improve both accuracy and calibration. However, we acknowledge that in some applications, there could be a need to choose a rank-preserving calibration method to ensure accuracy is unchanged (up or down) for user acceptance (Srivastava et al., 2020). Future work. There are several important directions for future work. It is possible that within the same data set, some items are best calibrated with global methods while others (where HH is present) benefit from local calibration. A hybrid method that selectively applies global/local calibration, or some combination of the two, for each test item could potentially out-perform either one. For a given problem, alternative choices for data representation and similarity measures could yield additional improvements. In addition, SWC is well designed to naturally accommodate domain shift, if the calibration data set is drawn from the shifted distribution. Sampling bias in the training set, whether intentional or not, induces a particular kind of domain shift that is especially important to address to meet fairness goals when predictions are made in a deployment setting. ## Author Contributions KW: Problem refinement, algorithm development, implementation, experimentation, data analysis, and writing. TD: Initial problem formulation, critical feedback, assisting with writing. ## Acknowledgments This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001119C0112. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the DARPA. The authors thank the reviewers and the editor for valuable discussions that significantly improved the paper. ## References Amr M. Alexandari, Anshul Kundaje, and Avanti Shrikumar. Maximum likelihood with bias-corrected calibration is hard-to-beat at label shift adaptation. In *Proceedings of the 37th International Conference* on Machine Learning, pp. 222–232, 2020. Antonio Bella, Cèsar Ferri, José Hernández-Orallo, and María José Ramírez-Quintana. Similarity-binning averaging: A generalisation of binning calibration. In *Intelligent Data Engineering and Automated Learning* - IDEAL 2009, Lecture Notes in Computer Science, volume 5788, pp. 341–349, 2009. Antonio Bella, Cèsar Ferri, José Hernández-Orallo, and María José Ramírez-Quintana. On the effect of calibration in classifier combination. *Applied Intelligence*, 38:566–585, 2013. Leo Breiman. Random forests. *Machine Learning*, 45:5–32, 2001. Leo Breiman, Jerome H. Friedman, R. A. Olshen, and Charles J. Stone. *Classification and regression trees*. Chapman and Hall, New York, NY, 1984. Glenn W. Brier. Verification of forecasts expressed in terms of probability. *Monthly Weather Review*, 78(1): 1–3, 1950. doi: 10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2. Adele Cutler, D. Richard Cutler, and John R. Stevens. *Ensemble machine learning: Methods and applications*, chapter Random forests. Springer, 2012. Thomas G. Dietterich and Alex Guyer. The familiarity hypothesis: Explaining the behavior of deep open set methods. *Pattern Recognition*, 132:108931, 2022. doi: 10.1016/j.patcog.2022.108931. Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13733–13742, Jun 2021. Shohei Enomoto and Takeharu Eda. Learning to cascade: Confidence calibration for improving the accuracy and computational cost of cascade inference systems. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 35(8), pp. 7331–7339, 2021. C. A. T. Ferro and T. E. Fricker. A bias-corrected decomposition of the Brier score. Quarterly Journal of the Royal Meteorological Society, 138(668):1954–1960, 2012. Peter W. Frey and David J. Slate. Letter recognition using Holland-style adaptive classifiers. Machine Learning, 6:161–182, 1991. doi: 10.1007/BF00114162. Risheek Garrepalli. Oracle analysis of representations for deep open set detection. arXiv, version as of November 4, 2022. URL https://arxiv.org/abs/2209.11350. Mehmet Gönen and Ethem Alpaydın. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268, 2011. ISSN 15324435. Yunye Gong, Xiao Lin, Yi Yao, Thomas G. Dietterich, Ajay Divakaran, and Melinda Gervasio. Confidence calibration for domain generalization under covariate shift. In *Proceedings of the IEEE/CVF International* Conference on Computer Vision (ICCV), pp. 8958–8967, 2021. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 1321–1330, 2017. doi: 10.5555/3305381.3305518. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning: Data Mining,* Inference, and Prediction. Springer, 2nd edition, 2009. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. Sunil Kalmady, Weijie Sun, Justin Ezekowitz, Nowell Fine, Jonathan Howlett, Anamaria Savu, Russ Greiner, and Padma Kaul. Improving the calibration of long term predictions of heart failure rehospitalizations using medical concept embedding. In Proceedings of AAAI Spring Symposium on Survival Prediction - Algorithms, Challenges, and Applications, volume 146 of *PMLR*, pp. 70–82, 2021. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Kriste Krstovski, David A. Smith, Hanna M. Wallach, and Andrew McGregor. Efficient nearest-neighbor search in the probability simplex. In Proceedings of the 2013 Conference on the Theory of Information Retrieval, pp. 101–108, 2013. doi: 10.1145/2499178.2499189. Meelis Kull, Telmo M. Silva Filho, and Peter Flach. Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration. *Electronic Journal of Statistics*, 11:5052–5080, 2017. Meelis Kull, Miquel Perello-Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Obtaining well-calibrated multiclass probabilities with Dirichlet calibration. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019. Ananya Kumar, Percy Liang, and Tengyu Ma. Verified uncertainty calibration. In *Proceedings of the 33rd* Conference on Neural Information Processing Systems, 2019. Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In *Proceedings of the 35th International Conference on Machine Learning*, 2018. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. doi: 10.1109/5.726791. Rachel Luo, Aadyot Bhatnagar, Yu Bai, Shengjia Zhao, Huan Wang, Caiming Xiong, Silvio Savarese, Stefano Ermon, Edward Schmerling, and Marco Pavone. Localized calibration: Metrics and recalibration. arXiv, version as of August 18, 2022. URL https://arxiv.org/abs/2102.10809. Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In *Proceedings of the 35th* Conference on Neural Information Processing Systems (NeurIPS), 2021. Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H.S. Torr, and Puneet K. Dokania. Calibrating deep neural networks using focal loss. In *Proceedings of the 34th Conference on* Neural Information Processing Systems (NeurIPS), 2020. Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using Bayesian binning. In *Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence*, pp. 2901–2907, 2015. Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pp. 625–632, 2005. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019. Kanil Patel, William Beluch, Bin Yang, Michael Pfeiffer, and Dan Zhang. Multi-class uncertainty calibration via mutual information maximization-based binning. In *Proceedings of the International Conference on* Learning Representations (ICLR), 2021. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. John Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in Large Margin Classifiers*, 10(3):61–74, 1999. Megha Srivastava, Besmira Nushi, Ece Kamar, Shital Shah, and Eric Horvitz. An empirical analysis of backward compatibility in machine learning systems. In *Proceedings of the 26th ACM SIGKDD International* Conference on Knowledge Discovery & Data Mining, pp. 3272–3280, 2020. doi: 10.1145/3394486.3403379. Christian Tomani and Florian Buettner. Towards trustworthy predictions from deep neural networks with fast adversarial calibration. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35(11), pp. 9886–9896, 2021. Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B. Schön. Evaluating model calibration in classification. In *Proceedings of the 22nd International Conference* on Artificial Intelligence and Statistics (AISTATS), 2019. David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests in multi-class classification: A unifying framework. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019. David H. Wolpert. Stacked generalization. *Neural networks*, 5(2):241–259, 1992. doi: 10.1016/S0893-6080(05) 80023-1. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv, version as of August 28 2017. URL https://arxiv.org/abs/2102. 10809. Yaodong Yu, Stephen Bates, Yi Ma, and Michael I. Jordan. Robust calibration with multi-domain temperature scaling. In *Proceedings of the ICML 2022 Workshop on Spurious Correlations, Invariance, and* Stability, 2022. Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In *Proceedings of the International Conference on Machine Learning*, pp. 609–616, 2001. Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In *Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and* Data Mining, pp. 694–699, 2002. Jize Zhang, Bhavya Kailkhura, and T. Yong-Jin Han. Mix-n-match : Ensemble and compositional methods for uncertainty calibration in deep learning. In *Proceedings of the 37th International Conference on* Machine Learning, pp. 11117–11128, 2020. Shengjia Zhao, Tengyu Ma, and Stefano Ermon. Individual calibration with randomized forecasting. In Proceedings of the 37th International Conference on Machine Learning, pp. 11387–11397, 2020. ## A Appendix This appendix provides experimental results for seven data sets and six classifiers, comparing similarity-based calibration to other methods. In these experiments, we randomly sampled 10,000 items from each data set and randomly split them into 500 train (for binary problems) or 1000 train (for multi-class problems), 500 test, and 9000 for a calibration pool. ## A.1 Binary Tabular Data Figure 7 presents results across all classifiers and calibration methods for the four binary MNIST data sets after 3000 calibration items were employed. Complete numeric results are shown in Tables 1 to 4. Classifiers appear in order of improving (decreasing) Brier score for the uncalibrated predictions. Platt scaling improved performance for the Naive Bayes and random forest classifiers, but it yielded little benefit for the others. Isotonic regression sometimes provided additional improvements, primarily for Naive Bayes, and usually out-performed histogram binning. In contrast, similarity-based methods were highly effective for all classifiers in reducing Brier score. SWC-HH consistently provided the best results. The SBA-10 approach described by Bella et al. (2009) weights all ten neighbors equally. We learned (Ferri, personal communication, Oct. 29, 2022) that the SBA authors have subsequently employed weighting in the averaging process so that each neighbor contributes inversely to its distance from the item to be calibrated. We included the weighted variation in our experiments, denoted as "SBAW-10". Weighting provides slight advantages over SBA-10 in some cases, but SWC/SWC-HH yielded the best results. We can view SBAW as an intermediate choice between SBA and SWC, as it adopts the distance weighting of SWC but not the other aspects (supervised distance metric, HH filtering) that make SWC-HH the strongest method overall. As noted earlier, and most evident in Tables 1 to 4, both SBA and SBAW generate nearly identical results for all classifiers, because they are dominated by feature space distance, and the classifier's initial predictions have little influence. In contrast, SWC and SWC-HH adapt to each classifier's individual limitations (hidden heterogeneity). SWC/SWC-HH improvements correlate with the average HH value, shown in parenthesis under the x axis labels. In addition, similarity-based calibration also increased test accuracy (see right column of Figure 7 and subtables in Tables 1 to 4). SWC-HH consistently achieved the highest accuracy. Table 1: Results for mnist-1v7 (ncal = 3000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. | | Brier score | | | | | | | | |--------|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | DT | 0.06780.006 | 0.06710.005 | 0.06500.006 | 0.09130.009 | 0.02030.003 | 0.01980.003 | 0.02260.002 | 0.02070.002 | | NB | 0.04520.004 | 0.04420.004 | 0.04230.003 | 0.06740.022 | 0.02000.003 | 0.01950.003 | 0.01650.002 | 0.01680.002 | | RBFSVM | 0.04110.003 | 0.04550.004 | 0.03650.002 | 0.03580.002 | 0.02010.003 | 0.01970.003 | 0.01890.002 | 0.01720.003 | | GBT | 0.03800.004 | 0.03900.004 | 0.02820.003 | 0.02840.002 | 0.02030.003 | 0.01980.003 | 0.02050.002 | 0.01910.002 | | RF | 0.02460.002 | 0.01700.002 | 0.01730.002 | 0.01820.002 | 0.02020.003 | 0.01970.003 | 0.01820.002 | 0.01710.002 | | SVM | 0.01550.002 | 0.01570.002 | 0.01480.001 | 0.01550.001 | 0.02000.003 | 0.01960.003 | 0.01490.002 | 0.01540.002 | | | Accuracy | | | | | | | | | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | DT | 0.95860.004 | 0.95920.004 | 0.96140.004 | 0.94840.006 | 0.98660.002 | 0.98800.001 | 0.98520.001 | 0.98740.001 | | RBFSVM | 0.97400.003 | 0.97380.002 | 0.97180.002 | 0.97500.002 | 0.98680.001 | 0.98800.001 | 0.98760.002 | 0.99060.001 | | NB | 0.97740.002 | 0.97740.002 | 0.97740.002 | 0.97660.002 | 0.98680.001 | 0.98800.001 | 0.98940.001 | 0.98940.001 | | GBT | 0.97880.002 | 0.97900.002 | 0.98080.002 | 0.98080.002 | 0.98660.002 | 0.98800.001 | 0.98640.001 | 0.98780.001 | | RF | 0.98900.001 | 0.98980.001 | 0.98880.001 | 0.98920.001 | 0.98660.001 | 0.98780.001 | 0.98940.001 | 0.99120.001 | | SVM | 0.98980.001 | 0.99040.001 | 0.99100.001 | 0.99000.001 | 0.98680.001 | 0.98800.001 | 0.99020.001 | 0.99140.001 | ![17_image_0.png](17_image_0.png) Figure 7: Calibration performance (left) and accuracy (right) for the binary MNIST data sets (10 trials; error bars indicate one standard error). Classifiers are sorted in order of improvement based on the uncalibrated classifier's score, and average HH values are below each classifier. | | Brier score | | | | | | | | |--------|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.49320.017 | 0.34720.007 | 0.22160.007 | 0.27240.006 | 0.06710.002 | 0.06530.002 | 0.05650.003 | 0.03350.003 | | DT | 0.16570.007 | 0.16790.007 | 0.15580.006 | 0.16510.006 | 0.06550.002 | 0.06380.002 | 0.05850.003 | 0.04360.003 | | RF | 0.10050.002 | 0.06090.003 | 0.06190.003 | 0.06270.004 | 0.06590.002 | 0.06420.002 | 0.05280.003 | 0.04100.003 | | SVM | 0.09750.004 | 0.09880.005 | 0.09840.005 | 0.09930.005 | 0.06550.002 | 0.06380.002 | 0.05470.004 | 0.04620.004 | | RBFSVM | 0.08630.004 | 0.09120.004 | 0.08270.004 | 0.08510.004 | 0.06520.002 | 0.06360.002 | 0.05180.003 | 0.03930.002 | | GBT | 0.07410.007 | 0.07730.007 | 0.07020.006 | 0.07170.006 | 0.06470.002 | 0.06300.002 | 0.05220.003 | 0.04130.003 | | | Accuracy | | | | | | | | | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.75340.008 | 0.75340.008 | 0.87200.007 | 0.87100.008 | 0.96340.002 | 0.96240.003 | 0.96020.002 | 0.98160.002 | | DT | 0.88460.005 | 0.88340.005 | 0.89280.005 | 0.88560.005 | 0.96560.002 | 0.96420.003 | 0.96000.003 | 0.97520.001 | | SVM | 0.93820.003 | 0.93680.003 | 0.93600.004 | 0.93580.004 | 0.96480.002 | 0.96380.003 | 0.96460.002 | 0.97260.002 | | RBFSVM | 0.94200.003 | 0.94240.003 | 0.94060.003 | 0.94000.003 | 0.96520.002 | 0.96420.003 | 0.96560.002 | 0.97860.001 | | GBT | 0.95340.005 | 0.95320.005 | 0.95380.004 | 0.95140.005 | 0.96640.002 | 0.96460.003 | 0.96560.002 | 0.97720.002 | | RF | 0.95860.003 | 0.95940.003 | 0.95840.003 | 0.95940.003 | 0.96480.002 | 0.96320.003 | 0.96440.003 | 0.97740.002 | Table 2: Results for mnist-4v9 (ncal = 3000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. | | Brier score | | | | | | | | |--------|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.42120.054 | 0.30290.028 | 0.17490.004 | 0.21360.013 | 0.04510.002 | 0.04460.002 | 0.06130.004 | 0.04290.005 | | DT | 0.14770.008 | 0.14650.008 | 0.13590.008 | 0.14940.007 | 0.04480.002 | 0.04430.002 | 0.05940.004 | 0.04860.004 | | RF | 0.09020.003 | 0.05900.004 | 0.05950.004 | 0.06140.004 | 0.04520.002 | 0.04460.002 | 0.05590.004 | 0.04410.006 | | SVM | 0.08530.005 | 0.08480.005 | 0.08470.005 | 0.08710.005 | 0.04500.002 | 0.04440.002 | 0.05390.005 | 0.04540.004 | | RBFSVM | 0.07330.004 | 0.07640.005 | 0.06820.004 | 0.07070.004 | 0.04480.002 | 0.04430.002 | 0.04940.005 | 0.04180.005 | | GBT | 0.06460.006 | 0.06570.005 | 0.05810.004 | 0.05930.004 | 0.04460.002 | 0.04410.002 | 0.05080.004 | 0.04450.005 | | | Accuracy | | | | | | | | | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.78780.027 | 0.78880.027 | 0.91320.005 | 0.91040.004 | 0.97440.002 | 0.97500.002 | 0.96200.003 | 0.97600.003 | | DT | 0.90300.008 | 0.90240.008 | 0.91160.006 | 0.90540.006 | 0.97440.002 | 0.97540.001 | 0.96120.002 | 0.97260.002 | | RBFSVM | 0.94720.003 | 0.94800.004 | 0.95220.003 | 0.94880.003 | 0.97440.002 | 0.97540.002 | 0.96840.003 | 0.97780.003 | | SVM | 0.94880.003 | 0.94860.003 | 0.94800.004 | 0.94900.003 | 0.97440.002 | 0.97540.002 | 0.96300.004 | 0.97320.003 | | RF | 0.96080.003 | 0.96240.002 | 0.96160.003 | 0.96080.003 | 0.97440.002 | 0.97560.001 | 0.96440.003 | 0.97660.003 | | GBT | 0.96160.004 | 0.96200.003 | 0.96300.003 | 0.96280.003 | 0.97440.002 | 0.97580.002 | 0.96860.003 | 0.97580.002 | Table 3: Results for mnist-3v8 (ncal = 3000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. ## A.2 Multi-Class Tabular Data Figure 8 and Tables 5 to 7 present results for the multi-class data sets "mnist10", "fashion-mnist", and "letter", after 5000 calibration items were employed. Temperature scaling in general only improved Brier score for the tree-based methods (RF, GBT), and not consistently. Histogram binning was again beneficial for the Naive Bayes models, but it often made calibration worse for decision trees. Isotonic regression yielded small additional improvements. For "mnist10" and "fashion-mnist", similarity-based calibration provided the best results. SWC and SWCHH out-performed SBA-10. SWC-HH usually improved over SWC, except for the more challenging "fashionmnist" data set. In this data set, the filtering employed by SWC-HH (to ignore calibration items with insufficient similarity) often resulted in no calibration items remaining. We handle this case by using the single nearest neighbor, even if its similarity is below the threshold. This leads to values for qˆ that are based only on one calibration item. In many cases, the single nearest neighbor belongs to the correct class, yielding good accuracy, but when it is from an incorrect class, the Brier score penalty is large. However, the | | | Brier score | | | | | | | |--------|-------------|---------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.36700.023 | 0.29050.014 | 0.20840.009 | 0.23500.012 | 0.05980.003 | 0.05920.003 | 0.05220.003 | 0.03150.003 | | DT | 0.22650.008 | 0.22000.008 | 0.20810.006 | 0.21410.007 | 0.05950.003 | 0.05890.003 | 0.05460.004 | 0.03540.003 | | SVM | 0.10340.004 | 0.10400.005 | 0.10380.005 | 0.10700.005 | 0.05920.003 | 0.05860.003 | 0.05490.003 | 0.04280.002 | | RF | 0.10210.002 | 0.05240.004 | 0.05370.003 | 0.05520.003 | 0.05970.003 | 0.05910.003 | 0.04550.003 | 0.03440.003 | | RBFSVM | 0.07560.003 | 0.07420.003 | 0.06890.003 | 0.07060.003 | 0.05870.003 | 0.05820.003 | 0.04420.004 | 0.03800.004 | | GBT | 0.06150.005 | 0.06490.005 | 0.05930.005 | 0.05900.004 | 0.05870.003 | 0.05810.003 | 0.04640.004 | 0.03930.004 | | | | Accuracy | | | | | | | | Model | Uncal. | Platt | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.81580.011 | 0.81600.011 | 0.86920.007 | 0.86540.007 | 0.96320.003 | 0.96620.002 | 0.96620.003 | 0.98160.001 | | DT | 0.85000.009 | 0.85400.008 | 0.85680.006 | 0.85660.005 | 0.96340.003 | 0.96560.002 | 0.96420.002 | 0.97840.002 | | SVM | 0.93280.004 | 0.93400.004 | 0.93300.003 | 0.93280.003 | 0.96380.003 | 0.96600.003 | 0.96280.003 | 0.97440.002 | | RBFSVM | 0.94540.003 | 0.94540.003 | 0.95440.003 | 0.95240.002 | 0.96400.003 | 0.96620.002 | 0.96940.002 | 0.97880.002 | | GBT | 0.95940.002 | 0.96000.003 | 0.95900.003 | 0.96000.003 | 0.96360.003 | 0.96600.003 | 0.96820.002 | 0.97800.002 | | RF | 0.96340.002 | 0.96480.002 | 0.96320.002 | 0.96340.002 | 0.96320.003 | 0.96580.003 | 0.97000.002 | 0.98120.002 | Table 4: Results for mnist-3v5 (ncal = 3000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. SWC-HH results were still comparable or better than SBA-10 and the global calibration methods on this data set. In addition, SWC-HH yielded the best accuracy. | | Brier score | | | | | | | | |--------|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.81930.015 | 0.80310.015 | 0.48020.005 | 0.59100.013 | 0.17290.005 | 0.17020.005 | 0.16370.004 | 0.11640.004 | | DT | 0.51190.010 | 0.51280.008 | 0.48320.008 | 0.54420.011 | 0.17220.005 | 0.16950.005 | 0.16780.004 | 0.12360.004 | | RF | 0.29770.003 | 0.14950.006 | 0.15050.004 | 0.16200.005 | 0.17260.005 | 0.16990.005 | 0.13230.005 | 0.13110.005 | | GBT | 0.21990.005 | 0.21020.004 | 0.20610.004 | 0.20900.004 | 0.17130.005 | 0.16860.005 | 0.14690.003 | 0.14110.003 | | SVM | 0.19260.003 | 0.18050.003 | 0.17860.003 | 0.18300.003 | 0.17190.005 | 0.16920.005 | 0.13470.004 | 0.13380.004 | | RBFSVM | 0.19190.003 | 0.18540.003 | 0.17660.003 | 0.18130.004 | 0.17170.005 | 0.16900.005 | 0.13120.004 | 0.12230.004 | | | Accuracy | | | | | | | | | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.58950.007 | 0.58950.007 | 0.68060.004 | 0.57790.023 | 0.88410.003 | 0.89090.003 | 0.89910.003 | 0.94150.002 | | DT | 0.64950.008 | 0.64950.008 | 0.65510.007 | 0.64270.007 | 0.88470.003 | 0.89100.003 | 0.89470.004 | 0.93680.002 | | GBT | 0.85890.004 | 0.85890.004 | 0.86180.003 | 0.86310.004 | 0.88560.003 | 0.89180.003 | 0.90180.003 | 0.90910.002 | | RBFSVM | 0.86470.003 | 0.86470.003 | 0.87770.003 | 0.87650.002 | 0.88520.003 | 0.89140.003 | 0.91590.003 | 0.92300.003 | | SVM | 0.87850.002 | 0.87850.002 | 0.87720.002 | 0.87800.003 | 0.88530.003 | 0.89120.003 | 0.91120.003 | 0.91180.003 | | RF | 0.90060.005 | 0.90060.005 | 0.90430.004 | 0.90230.004 | 0.88490.003 | 0.89080.003 | 0.91230.003 | 0.91340.003 | The "letter" data set is unusual in that SBAW-10 achieved the best results (Figure 8(e,f) and Table 7, except for the random forest classifier, which is best calibrated using SWC or SWC-HH. SBAW-10 on this data set also outperforms the unweighted SBA-10 approach described by Bella et al. (2009). We interpret this to mean that "letter" exhibits even stronger subpopulation locality than the others we have studied. These results reinforce the importance of employing some form of weighting when calibrating using similarity. However, the choice of 10 neighbors to use does not always work best, and the ideal constant would be difficult to estimate in advance. Therefore, we recommend the use of the entire data set (via SWC or SWC-HH) as a more robust solution. Table 5: Results for mnist10 (ncal = 5000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. ![20_image_0.png](20_image_0.png) Figure 8: Calibration performance (left) and accuracy (right) for the multi-class "mnist10", "fashion-mnist", and "letter" data sets (10 trials; error bars indicate one standard error). Classifiers are sorted in order of improvement based on the uncalibrated classifier's score, and average HH values are below each classifier. Table 6: Results for fashion-mnist (ncal = 5000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold. | | Brier score | | | | | | | | |--------|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.95860.032 | 0.94150.032 | 0.59260.008 | 0.62470.008 | 0.28570.005 | 0.28330.005 | 0.25310.006 | 0.29910.008 | | DT | 0.56130.011 | 0.55790.007 | 0.50320.009 | 0.62160.009 | 0.28550.005 | 0.28320.005 | 0.25360.007 | 0.29960.007 | | GBT | 0.39760.012 | 0.34930.008 | 0.33700.008 | 0.34270.009 | 0.28450.005 | 0.28210.005 | 0.25040.007 | 0.28590.006 | | RF | 0.34000.004 | 0.29970.007 | 0.29130.006 | 0.30000.006 | 0.28550.005 | 0.28320.005 | 0.24970.007 | 0.26740.009 | | SVM | 0.33330.007 | 0.32260.007 | 0.31010.006 | 0.31760.007 | 0.28560.005 | 0.28320.005 | 0.24280.007 | 0.26820.008 | | RBFSVM | 0.33270.006 | 0.32900.007 | 0.31200.006 | 0.31820.006 | 0.28550.005 | 0.28320.005 | 0.24860.007 | 0.28070.007 | | | Accuracy | | | | | | | | | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.52020.016 | 0.52020.016 | 0.59500.009 | 0.57160.010 | 0.79860.006 | 0.80220.006 | 0.82060.005 | 0.84980.004 | | DT | 0.64020.008 | 0.64020.008 | 0.65100.009 | 0.57600.016 | 0.79880.006 | 0.80140.006 | 0.82000.005 | 0.84760.004 | | GBT | 0.75840.008 | 0.75840.008 | 0.77020.007 | 0.76760.008 | 0.79960.006 | 0.80280.006 | 0.82460.005 | 0.83520.005 | | RBFSVM | 0.76340.005 | 0.76340.005 | 0.77100.007 | 0.77080.008 | 0.79980.006 | 0.80220.006 | 0.82440.004 | 0.84260.004 | | SVM | 0.77380.006 | 0.77380.006 | 0.78040.004 | 0.77820.004 | 0.79960.006 | 0.80220.006 | 0.83120.005 | 0.84060.004 | | RF | 0.78720.005 | 0.78720.005 | 0.79120.006 | 0.79020.006 | 0.79960.006 | 0.80240.006 | 0.82620.006 | 0.83360.006 | | | | Brier score | | | | | | | |--------|-------------|---------------|-------------|-------------|-------------|-------------|-------------|-------------| | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.53420.007 | 0.50730.005 | 0.47530.005 | 0.48990.005 | 0.22840.004 | 0.18630.003 | 0.26030.005 | 0.25270.006 | | DT | 0.49750.004 | 0.52370.003 | 0.48390.004 | 0.72960.006 | 0.22430.004 | 0.18120.003 | 0.24920.006 | 0.25760.006 | | SVM | 0.33110.002 | 0.26290.003 | 0.26320.003 | 0.28060.003 | 0.21410.003 | 0.17370.003 | 0.21070.004 | 0.21020.004 | | RF | 0.26330.003 | 0.17690.003 | 0.18300.003 | 0.20020.003 | 0.21210.004 | 0.16790.003 | 0.16300.003 | 0.16300.003 | | RBFSVM | 0.24950.004 | 0.21860.005 | 0.21890.005 | 0.23080.005 | 0.20560.004 | 0.16730.003 | 0.19630.005 | 0.19570.005 | | GBT | 0.22370.006 | 0.21470.005 | 0.21290.004 | 0.22190.004 | 0.18350.004 | 0.14910.003 | 0.21140.005 | 0.21150.005 | | | | Accuracy | | | | | | | | Model | Uncal. | TS | Iso Reg | Hist bin | SBA-10 | SBAW-10 | SWC | SWC-HH | | NB | 0.62910.005 | 0.62910.005 | 0.65110.003 | 0.64920.004 | 0.84380.006 | 0.88810.003 | 0.81600.005 | 0.82510.004 | | DT | 0.63210.003 | 0.63210.003 | 0.63750.003 | 0.51190.006 | 0.84650.006 | 0.89040.003 | 0.82230.005 | 0.86540.003 | | SVM | 0.82020.002 | 0.82020.002 | 0.82170.003 | 0.81680.003 | 0.85840.005 | 0.89820.004 | 0.85810.003 | 0.85890.003 | | RBFSVM | 0.84610.004 | 0.84610.004 | 0.84590.004 | 0.84340.003 | 0.86150.005 | 0.89980.003 | 0.87010.003 | 0.87110.003 | | GBT | 0.84790.004 | 0.84790.004 | 0.84920.003 | 0.84740.003 | 0.87740.004 | 0.91030.003 | 0.86290.003 | 0.86330.003 | | RF | 0.87220.002 | 0.87220.002 | 0.87460.002 | 0.87100.002 | 0.85650.006 | 0.90130.003 | 0.89040.002 | 0.89040.002 | Table 7: Results for letter (ncal = 5000, 10 trials). The best result(s) for each model (within 1 standard error, shown as a subscript) are in bold.
Review 1: Summary: The paper defines the new notion of hidden heterogeneity (HH) about situations where a model is predicting the same class probability vector to instances for which the true posterior probabilities are not the same. It then defines local calibration methods SWC and SWC-HH to reduce HH and improve classifier performance as measured by Brier score, and by accuracy as well. Experiments are performed on multiple tabular and image datasets. Results show that the proposed methods outperform Platt scaling, histogram binning, and temperature scaling in many situations. Strengths and Weaknesses: Strengths: * The text is clear and understandable. * There are experiments about tabular data as well as images. Weaknesses: * Local calibration has been studied before by Luo et al 2021 whom the current paper cites briefly but not in sufficient detail, given the relevance to this work. Therefore, the concept of hidden heterogeneity is not fully new, although it is new as a term. In some sense, HH means just 'not locally calibrated'. * Following this, it would be essential that the paper would compare against the method proposed by Luo et al 2021. Furthermore, there are many more methods that should be considered to include in the comparisons. Currently, the paper only compares against the most basic calibration methods. For example, the paper states about Platt scaling that 'No further improvements were achieved beyond 500 calibration items'. Of course, this is very much expected because Platt scaling only needs to fit 2 parameters which is easy to do on a small dataset and more instances do not help to improve much anymore. Therefore, it would be important to include some other existing calibration methods that either have more parameters or that are non-parametric. For example, including isotonic calibration in the experiments would be absolutely essential. * There is enough HH that can be exploited and improved over using local modelling only when the original classifier was not doing it already. As the authors discovered, neural networks on images are already exploiting local information and hence, HH values were too low to be exploited further. This highlights the general limitation of the proposed methods - they are only useful if the original classifier was not very good. * The definition of calibration at the beginning of the introduction is wrong. It has been defined through P(Y|X), whereas actually the conditioning should not be on X but on the output of the classifier. Otherwise, only the Bayes-optimal model would be well-calibrated. However, according to the standard definition of calibration, for example even a constant classifier that predicts the true class distribution P(Y) is also calibrated. * 'Our implementation of Platt scaling ...' sounds like the authors would have done something non-standard. However, this is exactly as how J. Platt originally proposed it and even the Python scikit-learn implementation of Platt scaling is doing the same, i.e. the targets are not {0,1} but instead smoothed probabilities. Requested Changes: The experiments should include some state-of-the-art calibration methods and also the classical isotonic calibration. The paper by Luo et al 2021 should be discussed in more detail and the relationship of HH to Luo's work should be discussed. It would be discussed more when the proposed methods are useful and when they would not help because of low amount of HH. The definition of calibration should be fixed. Broader Impact Concerns: No further concerns beyond what was mentioned above. ================================================== Review 2: Summary: The paper challenges the convention that samples with the same predicted probability should be given the same calibration correction, and proposes to detect subpopulations where better calibration could improve accuracy, which the paper calls "hidden heterogeneity". The paper introduces two "local" calibration methods, Similarity-Weighted Calibration (SWC) and SWC-HH that leverage similarity between samples in terms of features. SWC and SWC-HH differ slightly, in that SWC weighs all points in a calibration dataset based on similarity to the given point x that is being studied, while SWC-HH uses only a subset of these points. The paper tests the method on tabular and image datasets. Strengths and Weaknesses: Strengths: - A fresh take on a classic problem – calibration – and challenging the convention that samples with the same predicted probability should be given the same calibration correction. - Experiments on different types of datasets, not just data from one type of modality. Weaknesses: - Hidden heterogeneity, the problem that the paper defined and is trying to solve for, is defined by comparing predictions from the original model and a newly trained model (a random forest, according to page 3). Hidden heterogeneity is then used as a gold standard to compare calibration methods. Yet, the calibration methods introduced in this paper leverage random forest proximity. This feels a bit like “training for the test”. This would be more convincing if there were some synthetic data experiments where hidden heterogeneity is introduced using different types of data mechanisms and showing that the calibration methods can still perform well. Or using different types of distances in the calibration methods – essentially a sensitivity analysis of distance type and how that impacts results. - Could go deeper to try to explain some of the observed phenomenon. For example, in the results in Section 5.3, the random forest classifier “somewhat to our surprise, exhibits more areas with hidden heterogeneity”. Do the authors have any ideas why? In Section 5.4, when the authors said “using random forests to measure hidden heterogeneity in neural network latent spaces may fail to detect HH even though a better representation (or a better classifier) could detect that it is present”, again, a few sentences to discuss why would improve the presentation of these ideas and help the reader gain intuition into failure modes of the method. - Even if hidden heterogeneity (subpopulations where better calibration could improve accuracy) does exist, it is not clear that improving the calibration for this subpopulation is the preferred method to correct for this, if a better classifier could be trained. A very personalized calibration function could be harder to deploy in practice than a global calibration function. If a better model could be trained such that these subpopulations have better accuracy, this local calibration idea might not be needed. I think adding some of these baselines could be interesting: 1/ retraining the model, upsampling subpopulations found to have hidden heterogeneity; 2/ using more powerful model classes. Requested Changes: - [Critical in my opinion] Experiments where model is retrained, or more powerful model classes used, and show that they don’t outperform local calibration methods. - [Critical in my opinion] Experiments where hidden heterogeneity is introduced using different types of data mechanisms and showing how the calibration methods perform in this case. - [Critical in my opinion] Since SBA-10 is a key baseline compared against in this paper, it would be good to confirm if SBA-10 uses Euclidean distance , cf. "Our implementation of SBA-10 employs Euclidean distance to identify the nearest neighbors, which we believe to be the method employed by Bella et al.". - [Strengthen the work] I found the results in Section 5.4 a bit confusing. It sounds like the similarity measure used in SWC and SWC-HH is a feature space learned by the neural net. But that is followed by this line “We again used a learned RF proximity function to compute similarity.”. Clarification would be great. - [Strengthen the work] If the local calibration methods are not tied to a specific similarity measure, like random forest proximity, this could be made clearer in Section 4 when introducing the methods. - [Strengthen the work] For SWC-HH, from this line in Section 4: “HH, which is computed separately for each test item t, is provided as an additional input for calibration”, on which dataset split is this HH computed on? Broader Impact Concerns: none ================================================== Review 3: Summary: The authors propose a notion called 'hidden heterogeneity', which describes the situation when subpopulations having the same predicted probability exhibit different true conditional probabilities. They propose to remedy the situation with a local regression correction that can improve both the probability prediction and classification accuracies. They perform fairly extensive empirical evaluations on several datasets, and their method has the best improvements when the model underfits the data. Strengths and Weaknesses: Strengths: - The proposed algorithm is fairly simple to understand. Instead of calibrating the probabilities globally, the main idea is to calibrate the probabilities locally using the features x too when there are model mis-specification. The design and steps in the algorithm are very clearly written. - The authors have done a fairly extensive evaluations on a few standard ML datasets such as MNIST and letters, and also on larger image datasets such as CIFAR10 and CIFAR100. They show improved probability calibrations compared to global calibration methods and competing local calibration method such as SBA-10, especially for weaker classifiers. - In the paper there are fairly interesting empirical analysis on how the local calibration improves classification accuracy (section 5.3), and how the calibration support set helps discover anomalies in the prediction (section 5.5). Weaknesses: - The current paper does not discuss the relation between the proposed method to model mis-specification. When different subpopulations U_i, U_j having the predicted probabilities f(x) but different true conditional probabilities, isn't that a sign of model mis-specification? (consider the cats vs birds example in section 3). When such a situation arise, instead of calibrating the probabilities with local regression (a post-hoc fix), we could revise the model using local regression or mixture of experts type of model. The authors do not seem to consider such a possibility but go straight to correcting the model using local probability calibrations. - Certain choices in the scoring of hidden heterogeneity and the algorithm seem arbitrary and not well-motivated enough. How different would the results be if we use a KL-divergence or L1-distance measure on the probability vectors instead of Hellinger distance? The algorithm also makes use of random forests in its computation of similarities between items, a decision that the authors did not discuss their reasoning. What would happen if we replace the RFprox measure with some other measures? - In Algorithm 1, the local probability predictor g_t works with a potentially much smaller sample. Wouldn't this cause instability problems in the calculation of the HH measure? For example, in Figure 1c, when the base classifier is powerful enough (random forest), local calibration methods such as SBA-10 or the proposed method actually hurts probability calibration for small sample size compared to global method such as Platt scaling, when the number of calibration items are small. Also, how is the radius r for neighborhood chosen? Requested Changes: - Discuss the relation of the proposed method to model mis-specification. - Discuss the rationale behind using Hellinger distance and RFprox as similarity measures in greater details. - Some of the texts in Figures 1, 2, 6 are too small to be readable. Please fix. Broader Impact Concerns: - There are no strong ethical implication from this piece of work on probability calibrations. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper studies the "hidden heterogeneity" (HH) problem, wherein global calibration of classifier outputs may fail due to the presence of latent sub-populations with similar predictions, but different characteristics. The paper proposes a metric to quantify HH, and proposes local calibration methods to combat it. ### Reviewer recommendations Classifier calibration is a fundamental problem, and progress on this topic is of broad interest. Reviewers found the paper's treatment of calibration to be interesting, with results presented on both tabular and image data. From the reviewers' final recommendations, two were (leaning) positive, while another leaned negative. The main critiques raised by the reviewers were as follows. (1) _Applicability to powerful models_. One concern was whether the approach is limited to cases where the base model performs poorly. The authors responded that they focus on settings where the base model is given, and there is no access to the training data. They also noted that strong models could suffer from HH under distribution shift. (2) _Impact of model misspecification_. One question was whether HH was simply a symptom of misspecification of the base model, which might be fixed by constructing an MoE-style model. The response was similar to the above. (3) _Impact of different mechanisms of HH_. One question was whether different causes for HH could result in varied performance. The author's response was that they evaluate the impact of the model and data distribution, with the impact of data representation left for future work. (4) _Limitation to post-hoc calibration_. One critique was that the paper focusses on post-hoc calibration, which may be a niche setting. (5) _Comparison to existing work_. The preprint of Luo et al., which also studies local calibration, was not discussed in detail or compared to in the original version. The author response included a more detailed discussion, including limitations of this work. (6) _Risk of fitting on small sample_. The algorithm relies on local probability estimates fit on a potentially small sample, which could bias the HH score. The author responded that this might indicate scenarios where good similarity-based calibration is not possible. (7) _Justification for design choices_. The paper makes some design decisions (e.g., the use of Hellinger distance, RFProx as a similarity measure) that are not ablated. The response noted that the Hellinger distance is equivalent to the L2 distance between probabilities, and that RFProx is a standard similarity measure. Of these, I find the author response (or paper's treatment) to be adequate for (1)-(5), and the reviewers mostly concur. I find (6) and (7) to potentially require more discussion, even after the revision. The discussion of the choice of RFProx in particular could be treated in more detail, and contrast against more common measures (e.g., Gaussian kernel similarity). That said, the paper does demonstrate that the use of RFProx leads to reasonable results in many practical settings. ### AE recommendation The TMLR guidelines for acceptance are based on the following two questions: (a) _Are the claims made in the submission supported by accurate, convincing and clear evidence?_ (b) _Would some individuals in TMLR's audience be interested in the findings of this paper?_ For (a), most of the critiques above were adequately addressed in the response. There are some remaining points that would improve the clarity of the work. Nonetheless, in my estimation, the paper's claims are generally well-supported. For (b), the general reviewer consensus is the paper provides an interesting perspective on calibration. This is supported by my reading. I thus believe some readers would find the results of interest. Given these, we recommend acceptance with revisions as suggested below. ### Suggested revisions - Amend the definition of calibration. It would be easiest to provide a precise mathematical definition. If a textual description is to be retained, then it should be reworded to be clearer. - Section 3 could be split into two sub-sections, one for "What is HH?", and another for "How do we detect HH?" - (optional) For Algorithm 1 and 2, perhaps x_t rather than t is clearer to refer to the test item. - Equation 3, 4, 5: use \text{.} for "Brier", "sim" - In Section 4, what does the notation \langle x_t, \hat{p}_t \rangle mean? Vector concatenation? If so, perhaps [ x_t, \hat{p}_t ] or similar would be clearer (as the former is sometimes used to denote inner product). - Regarding the use of RFProx in Section 4: - It is stated that "If a domain-specific similarity measure is not available, one can use a supervised similarity measure that can learn to place high importance on the predicted probabilities. We employ such a measure: the random forest proximity function (RFprox)." It is not clear from the description why the RFProx necessarily learns to place high importance on the predicted probabilities. Is the claim that it _can_ learn that these probabilities are _typically_ very important? Clarification is suggested. - Further to the above, it would be good to make explicit what features are fed into the RFProx method. Is it the concatenation of both x_t and \hat{p}_t? If the stated goal is to place high importance on the latter, why not just use the latter? Further discussion is suggested. - It is mentioned that RFProx can be viewed as a kernel. The reader may naturally wonder whether one can use more standard kernels, such as the linear or Gaussian kernel, on top of the concatenation of x_t and \hat{p}_t. This would appear place some importance on the predicted probabilities to measure calibration. Are such choices admissible? What is the reason for choosing RFProx over these? My understanding from the response is that these kernels would be considered "unsupervised", and thus potentially not adequately balance similarities in feature and predicted probability space; this point is however not too clear from the current Section 4. Further to the above, it is not clear whether just constructing a kernel over the predicted probabilities would not then be admissible. More discussion is suggested. - If indeed other standard kernels are admissible, it would be ideal if some additional experiments could be presented (potentially on a few setups) with one of these choices. This could demonstrate how important it is to pick an supervised similarity measure. - The author response to a few critiques relied on the focus being on post-hoc settings. This is reasonable. It might however be interesting to discuss what strategies might be appropriate in settings where one does have the ability to modify the original model. Is it the case here that simply increasing model capacity fixes HH? Would there still be value in similarity-based local calibration? - The authors stated that "Relatedly, if the calibration set as a whole is too small then a similarity-based method may not have enough information to perform good calibration. We previously made this point in the Conclusions and have now also added a statement about it to section 5.2". From my reading, these points were not too clear. They could be made more explicit. ==================================================
# Learning Trees Of ℓ0**-Minimization Problems** Anonymous authors Paper under double-blind review ## Abstract The problem of computing minimally sparse solutions of under-determined linear systems Ax = b is NP hard in general. Subsets with extra properties, may allow efficient algorithms, most notably problems with the restricted isometry property (RIP) can be solved by convex ℓ1-minimization. While these classes have been very successful, they leave out many practical applications. Alternative sub-classes, can be based on the prior information that x = Xz is in the (sparse) span of some suitable matrix X. The prior knowledge allows us to reduce assumptions on A from RIP to stable rank and by means of choosing X make the classes flexible. However, in order to utilize these classes in a solver, we need explicit knowledge of X, which, in this paper, we learn form related samples, A and bl, l = 1*, . . .* . During training, we do not know X yet and need other mechanisms to circumvent the hardness of the problem. We do so by organizing the samples in a hierarchical curriculum tree with a progression from easy to harder problems. ## 1 Introduction We consider efficiently solvable subclasses of NP hard problems, signed extensions of 1-in-3-SAT at the end of the paper and sparse solutions of linear systems in its main part: For matrix A ∈ R m×n and right hand side b ∈ R m, we wish to find the sparsest solution of $${\mathrm{subject~to~}}\quad A x=b,$$ $$\operatorname*{min}_{x\in\mathbb{R}^{n}}\|x\|_{0}$$ $$(1)$$ $\square$ $$(2)$$ ∥x∥0 subject to Ax = b, (1) where ∥x∥0 denotes the number of non-zero entries of x. In full generality, this problem is NP-hard Natarajan (1995); Ge et al. (2011) but as many hard problems it contains tractable subclasses. Some of these are uninteresting, at least from the perspective of sparsity, e.g. problems with zero kernel ker(A) = 0 and unique solution, which renders the ℓ0-minimization trivial. Other tractable subclasses have been extensively studied in the literature, most notably problems that satisfy the (s, ϵ)*-Restricted Isometry property (RIP)* $$(1-\epsilon)\|x\|$$ (1 − ϵ)∥x∥ ≤ ∥Ax∥ ≤ (1 + ϵ)∥x∥ for all s-sparse x ∈ R n, (2) with strict requirements ϵ < 4/ √41 ≈ 0.6246 on the RIP constants and more generally the *null space property* (NSP) of order s ∥vS∥1 < ∥vS¯∥1 for all 0 ̸= v ∈ ker A and |S| ≤ s, where vS is the restriction of v to an index set S and S¯ its complement. In both cases, the sparsest solution of (1) is found by the relaxation of the sparsity *∥ · ∥*0 to the convex *∥ · ∥*1-norm min x∈Rn ∥x∥1 subject to Ax = b, see Candes et al. (2006); Donoho (2006); Candès et al. (2006); Foucart & Rauhut (2013) for details. All of these tractable subclasses are completely rigid: A problem is either contained in the class or we are out of luck. Alternatively, there are subclasses based on prior knowledge. Trivially, if we know that the solution x = Xz is in the column span of a matrix X ∈ R n×p, we can simplify the search space to $$\operatorname*{min}_{z\in\mathbb{R}^{p}}\|X z\|_{0}\quad{\mathrm{subject~to}}\quad A X z=b,$$ which, however, is no longer a standard compressed sensing problem in the variable z. In order to utilize compressed sensing results, we confine X to sparse matrices and consider the simpler problem to find sparse z min z∈Rp ∥z∥0 subject to AXz = b. (3) $\subset\mathbb{R}\mathbb{E}^{\mathbb{N}}$ so that also the product x = Xz is necessarily sparse. In general this modified problem does not provide the globally sparsest solution x, but does so in many scenarios: E.g. if ∥Xz∥0 sparse solutions are unique (which is much weaker than the RIP Foucart & Rauhut (2013)), if X has sufficiently sparse columns so that cancellation of non-zero entries in their span are unlikely or the compressed sensing problem admits some extra structure as in the SAT experiments at the end of the paper. Besides the global optima question, the variant (3) has the advantage that it can be analyzed by available compressed sensing theory. Indeed, we can uniquely recover z if AX is RIP, which is the case for (partially) random X and only mild rank conditions on A, see Kasiviswanathan & Rudelson (2019) and Welper (2020; 2021) in our context. In summary, we can define tractable and adaptable subclasses by properly chosen X, but the algorithms require explicit knowledge of it. Since it is implausible that we just happen to know a good X, we learn it. While one may try to automatically uncover interesting or useful classes X, in this paper, we analyze the simpler option of a teacher-student setup: The teacher knows X and can generate samples from the class, i.e. compressed sensing problems consisting of the measurement matrix A and a right hand side b = Ax. The student observes only the compressed sensing problems (*A, b*), without having the answers x and reconstructs the class. On first sight, this is a cyclic problem, where the student has to solve intractable problems to uncover a class that helps her to solve otherwise intractable problems. This conundrum is solved by differentiating the problems into easy and hard ones: The former are used during training, can be solved without prior knowledge of X and have sparser z than the hard problems that the student can solve after training. For details, see Welper (2021) or its summary in Section 2, included to keep this paper self contained. New Contributions In Welper (2021), it is difficult to find easy problems that do not require prior knowledge. This paper addresses this issue by organizing problems classes into a tree, similar to a university curriculum. Each node is a problem class, arranged so that the hard problems on the children match the easy problems on the parent. This setup allows the student to use the child prior to learn a tree node and thus inductively iterate through the tree. We prove two main theorems stated informally as follows: 1. *Theorem 3.5, Corollary 3.6:* If a tree is learnable (see Definition 3.4), the student can learn to solve all hard problems in the tree. As for compressed sensing, this is a recovery result; the student learns the knowledge of the teacher. If this constitutes ℓ0 minimizers is verified separately. 2. *Theorem 4.2, Corollary 4.4:* Given several assumptions, learnable trees exist, consisting of one deterministic solution and further random solutions. These two results mimic the theory of classical compressed sensing: 1. RIP conditions ensure sparse recovery (via ℓ1 minimization) and 2. RIP matrices exist (e.g. i.i.d. random matrices). Further results include the following: 3. In Welper (2021) the problem class, defined by X is completely random, because it is the most favorable setup for compressed sensing. As a first step towards low randomness classes, our construction allows to embed one fully deterministic problem into the root of the curriculum tree. While a single problem is not yet practical, it shows that some determinism is permissible. 4. In Section 5, we apply the learning method to a signed generalization of NP complete 1-in-3-SAT problems. In summary, unlike traditional tractable subclasses of ℓ0 minimization, we aim for subclasses that are adaptable and learnable by some matrix X. Overlaps in the classes organized into a tree together with training samples form each class allow a student to follow a trail from easier to harder problems and avoid the NP-hardness of the problems alone. See Figure 1. ![2_image_0.png](2_image_0.png) Figure 1: Tractable subclasses inside the NP-hard problem of sparse solutions of linear systems. Human Learning The prior knowledge informed subclasses, together with an iterative learning curriculum, are intended as a hypothetical model for human problem solving, or more concretely theorem proving. If P ̸= NP, and human brains have no fundamental superiority to computers, humans cannot effectively solve arbitrary instances of computationally hard problems. Yet, we routinely prove theorems and have built up a rich trove of results. But we only do so in our respective areas of expertise. Hence, one may argue that within these areas, and equipped with prior knowledge and experience, theorem proving is tractable. If so, can we program corresponding solvers into a computer? The history of artificial intelligence provides some caution. Hand coded rules in expert systems and natural language processing have proven difficult due to their immense complexity, while learned approaches are currently superior. Likewise, instead of hand crafting tractable subclasses, it seems more promising to learn them. As a mathematical model for tractable subclasses, we consider sparse solutions of linear systems. These are NP-hard and in (3), we have already identified some adaptable and tractable subclasses. The solution vector x is a model for a proof, as both are hard to compute. The linear combination x = Xz, together with the non-linear minimal sparsity, composes a candidate solution x from elementary pieces in the columns of X, similar to assembling a proof from known tricks, techniques, lemmas and theorems. Of course, this solution strategy is of no use if we do not know X. Likewise, humans need to acquire their expertise, either through training or research. An important component of both, is the solution of many related and often simplified problems. For a student, these are split into episodes, ordered by prerequisites into a curriculum tree. Likewise, for our mathematical model, we learn a tree of subclasses Xi from simple samples, i.e. pairs (Ak, bk) generated form solutions in the respective classes. As we will see (Remark 3.3), the combined knowledge of all leaf nodes [X1, X2*, . . .* ] in the curriculum tree is not sufficient to solve all problems in the root node X0 because in an expansion x = X0z0 =Pi Xizi, the zi combined generally have less sparsity than z0 and are thus more difficult to find. Therefore, at each tree node we compress our knowledge into matrices with fewer columns and more sparse z. This step is similar to summarizing reoccurring proof steps into a lemma and the using it as a black box in subsequent classes. Unlike human curricula, the model curricula in this this paper are substantially random. This is reminiscent of compressed sensing and phase retrieval, whose theory is also more random than many practical applications. In the latter areas much effort has been invested in low randomness results, yielding partially structured measurement matrices like e.g. random samples from bounded orthonormal systems. Likewise, in this paper, in an effort towards lower randomness, we allow one deterministic solution embedded into otherwise random classes. Further effort towards more realistic low randomness models is left for future work. Greedy Search and Heuristics Similar to ℓ1 minimization, greedy algorithms like *orthogonal matching* pursuit $$\begin{array}{r l}{{}}&{{j^{n+1}=\operatorname{argmax}_{j}\left|A_{j}^{T}(A x^{n}-b)\right|}}\\ {{}}&{{S^{n+1}=S^{n}\cup\{j^{n+1}\}}}\\ {{}}&{{x^{n+1}=\operatorname*{argmin}_{\operatorname*{sup}(x)\subset S^{n+1}}\|A x-b\|_{2}^{2},}}\end{array}$$ also find global ℓ0-minimizers under RIP assumptions Foucart & Rauhut (2013). Instead of systematically searching through an exponentially large set of candidate supports S, the first line provides a criterion to greedily select the next support index, based on the correlation of a column A·j with the residual Axn − b. Applied to the modified problem (3) with prior knowledge X, the method changes to $$\begin{array}{r l}{j^{n+1}={\underset{j}{\operatorname{argmax}}}\left|X_{j}^{T}A^{T}(A X z^{n}-b)\right|}\\ {S^{n+1}=S^{n}\cup\{j^{n+1}\}}\\ {z^{n+1}={\underset{\operatorname{supp}(z)\subset S^{n+1}}{\operatorname{argmin}}}\|A X z-b\|_{2}^{2}.}\end{array}$$ In the first row, the learned knowledge X modifies the index selection and thus provides a learned greedy criterion or heuristic. The learning of X, however, implicitly depends on a meta-heuristic as explained in Remark 3.3 below. From this perspective, the proposed methods are related to greedy and heuristic search methods in AI Russell et al. (2010); Sutton & Barto (2018); Holden (2021). ## 1.1 **Related Work** ℓ0**-Minimization without RIP** This paper is mainly concerned with minimally sparse solutions of systems with non-NSP or non-RIP matrices A. A common approach in the literature for these systems is ℓp-minimization with p < 1, which resembles the ℓ0-norm more closely than the convex ℓ1 norm. While sparse recovery can be guaranteed for weaker variants of the RIP Candès et al. (2008); Chartrand & Staneva (2008); Foucart & Lai (2009); Sun (2012); Shen & Li (2012), these problems are again NP hard Ge et al. (2011). Nonetheless, iterative solvers for ℓp-minimization or non-RIP A often show good results Candès et al. (2008); Chartrand & Wotao Yin (2008); Foucart & Lai (2009); Daubechies et al. (2010); Lai et al. (2013); Woodworth & Chartrand (2016). ℓ0**-Minimization with Learning** Similar to our approach, many papers study prior information for under-determined linear systems Ax = b. Similar to this paper, ℓ1 synthesis März et al. (2022) considers solutions of the form x = Xz, in case x is not sparse in the standard basis and for random A. The papers Bora et al. (2017); Hand & Voroninski (2018); Huang et al. (2018); Dhar et al. (2018); Wu et al. (2019b) assume that the solution x is in the range of a neural network x = G(z; w), with weights pre-trained on relevant data, and then minimize minz ∥AG(z; w) − b∥2. Alternatively, the deep image prior Ulyanov et al. (2020) and compressed sensing applications Veen et al. (2020); Jagatap & Hegde (2019); Heckel & Soltanolkotabi (2020) use the architecture of an untrained network as prior and minimize the weights minw ∥AG(z; w)−b∥2 for some latent input z. These papers assume i.i.d. Gaussian A or the Restricted Eigenvalue Condition (REC) and use the prior to select a suitable candidate among all non-unique solutions. In contrast, in the present paper, we aim for the sparsest solution and use the prior to address the hardness of the problem for difficult A. The paper Wu et al. (2019a) considers an auto-encoder mechanism to find measurement matrices A, not only X, as in our case. Several other papers that combine compressed sensing with machine learning approximate the right hand side to solution map b → x by neural networks Mardani et al. (2018); Shi et al. (2017). Teaching Dimension A teacher/student setup is also considered in the *teaching dimension*. It measures how many samples a teacher needs to provide for a learner to distinguish all concepts in a concept class C ⊂ {0, 1} X for some finite domain X, see Goldman & Kearns (1995). The recursive teaching dimension refines the idea to teaching plans, i.e. sequences of concepts and corresponding samples Zilles et al. (2011); Doliwa et al. (2014); Kirkpatrick et al. (2019). The teaching dimension is closely related to the VC-dimension Chen et al. (2016); Hu et al. (2017). While we also learn problems in a curriculum imposing a sequential order, the goal is different: In the terminology of supervised learning, the student learns the problem to solution map (*A, b*) → x for (*A, b*) is some problem class. Unlike supervised learning, this map is known to the student from the outset. The problem is rather that initially the student does not have an efficient algorithm to compute it and the learning shall help her to reduce the "problem to solution map" to a convex optimization problem. Knowledge Distillation Another area that relies on a teacher/student setup is knowledge distillation Hinton et al. (2015), where a large teacher neural network is used to train a smaller student network. See Gou et al. (2021) for an overview. Transfer Learning The progression through a tree splits the learning problem into separate episodes on different but related data sets. This is reminiscent of empirical studies on transfer- Donahue et al. (2014); Yosinski et al. (2014) and meta-learning Hospedales et al. (2020) in neural networks. ## 1.2 Notations We use c and C for generic constants, independent of dimension, variance or ψ2 norms that can change in each formula. We write a ≲ b, a ≳ b and a ∼ b for a ≤ cb, a ≥ cb and ca ≤ b ≤ Ca, respectively. We denote index sets by [n] = {1*, . . . , n*} and restrictions of vectors, matrix rows and matrix columns to J ⊂ [n] by vJ , MJ· and M·J , respectively. ## 2 Easy And Hard Problems In this section, we summarize an easy to hard progression from Welper (2021) that allows us to progress from one node to the next, in the curriculum tree below. ## 2.1 ℓ0**-Minimization With Prior Knowledge** For given matrix A ∈ R m×n and vector b ∈ R m, we consider the ℓ0-minimization problem min x∈Rn ∥x∥0, s.t. Ax = b from the introduction. We have seen that this problem is NP-hard in general, but tractable for suitable subclasses. While the RIP and NSP conditions are rigid classes, fully determined by the matrix A, we now consider some more flexible ones, based on the prior knowledge that the solution is in some subset C<t := {x ∈ R n: x = *Xz, z* is t-sparse}, parametrized by some matrix X ∈ R n×p and with only mild assumptions on A, to be determined below. Remark 2.1. This definition does not enforce that the x ∈ C<t are ℓ0-minimizers and likewise, the main results in this paper show recovery of x ∈ C<t, not ℓ0 minimization. In order to obtain ℓ0 *minimizers, we* need extra assumptions: 1. If the columns of X are s-sparse, all solutions in class C<t are st-sparse and global ℓ0 *minimization* can be guaranteed by uniqueness of st*-sparse solutions. In classical compressed sensing this is implied* by the RIP condition, but can also be enforced by much weaker conditions, see Foucart & Rauhut (2013). 2. For specific applications one may find alternative arguments. E.g. For the SAT type problems in Section 5, Lemmas 5.4 and 5.5 show global ℓ0 *minimization of class members.* We may regard X's columns as solution components and hence assume that they are s-sparse, as well, for some s > 0, so that the solutions x = Xz in class are st sparse. Although the condition seems linear on first sight, the sparsity requirement of z can lead to non-linear behavior as explored in detail in Welper (2021). As usual, we relax the ℓ0 to ℓ1 norm and solve the convex optimization problem $$\operatorname*{min}_{x\in\mathbb{R}^{n}}\|z\|_{1},\quad\mathrm{s.t.}\quad A X z=b.$$ $$\left({4}\right)$$ ∥z∥1, s.t. AXz = b. (4) Of course any solver requires explicit knowledge of X, which we discuss in detail in Section 2.2. For now, let us assume X is known. Two extreme cases are noteworthy. First, without prior knowledge X = I, we retain standard ℓ1-minimization min x∈Rn ∥x∥1, s.t. Ax = b, which provides correct solutions for the ℓ0-minimization problem if A satisfies the null-space property (NSP) or the restricted isometry property (RIP), typically for sufficiently random A. Second, if instead of the matrix A, the prior knowledge X is sufficiently random, we can reduce the nullspace property of A to a much weaker stable rank condition on A. In that case, the product AX satisfies a RIP with high probability (Kasiviswanathan & Rudelson (2019) and Theorem 2.5 below) and hence we can recover a unique sparse z. Since X is also sparse, this leads to a sparse solution x = Xz of the linear system Ax = b. In order to show that x is the sparest possible solution, we need some extra structure, as discussed in Remark 2.1. ## 2.2 Learning Prior Knowledge We have seen that subclasses C<t of ℓ0-minimization problems may be tractable, given suitable prior knowledge encoded in the matrix X. Hence, we need a plausible model to acquire this knowledge. To this end, we consider a teacher - student scenario, with a teacher that provides sample problems and a student that infers knowledge X from the samples. The training samples must be chosen with care. Indeed, to be plausible for a variety of machine learning scenarios, we assume that the student receives samples (*A, b*i), but not the corresponding solutions xi. On first sight, this poses a cyclic problem: We need X to efficiently solve for xi, but we need xi to find X. To resolve this issue, we train only on a subset of easy problems Ceasy ⊂ C<t. These must be sufficient to fully recover X and at the same time solvable by the student, without prior knowledge of X, by some method Solve(*A, b*): Given an easy problem (*A, Ax*), with x ∈ Ceasy, return x. Throughout this paper, easy problems are given by b = AXz for random samples z with expected sparsity t < t ¯ , which is strictly less than the sparsity of class C≤t, see Assumption (A1) for details. These samples are provided by the teacher, who has access to X, in contrast of the student, who has not. If this class is indeed easy, depends on the existence of Solve and requires a delicate balance because we want the easy problems solvable but the hard ones not (otherwise training is not necessary). At this point, we do not consider the implementation of Solve. It will arise naturally out of the tree construction in Section 3, which also resolves the balancing issue. For comparison, the presence of easy problems may also play a role in gradient descent training of neural networks Allen-Zhu & Li (2020). In order to recover the matrix X from the easy samples Ceasy, the student combines the corresponding solutions into a matrix Y (as columns). Since Ceasy is contained in C<t, they must be of the form Y = XZ for some matrix Z with t-sparse columns. Given that Y contains sufficiently many independent samples form the class C<t, sparse factorization algorithms Aharon et al. (2006); Gribonval & Schnass (2010); Spielman et al. (2012); Agarwal et al. (2014); Arora et al. (2014b;a); Neyshabur & Panigrahy (2014); Arora et al. (2015); Barak et al. (2015); Schnass (2015); Sun et al. (2017a;b); Rencker et al. (2019); Zhai et al. (2020) can recover the matrices X and Z up to scaling Γ and permutation P. SparseFactor(Y ): Factorize Y into X¯ = XPΓ and Z¯ = Γ−1P −1Z for some permutation P and $\text{SCALE}$:. diagonal scaling Γ. Scale: Scale the columns of X¯ so that AX¯ satisfies the RIP. The permutation is irrelevant, but we need proper scaling for ℓ1 minimizers to work, computed by Scale, which is a simple normalization in Welper (2021) and an application dependent function in the experiments in Section 5. We combine the discussion into Train defined in Algorithm 1. Algorithm 1 Training of easy problems Ceasy. function Train(A, b1*, . . . , b*q) For all l ∈ [q], compute yl = Solve(*A, b*l). Combine all ylinto the columns of a matrix Y¯ . Compute X, ¯ Z¯ = SparseFactor(Y¯ ) return Scale(X¯). end function Remark 2.2. In general Y¯ and X¯ have the same column span and thus every x ∈ C<t *is given by* $$\left(5\right)$$ x = Xz ¯ = *Y u.* ¯ Why don't we skip the sparse factorization? While z is t*-sparse by construction,* u = Y +x is generally not. Hence, even if Y is sufficiently random for AY *to satisfy an RIP, it is not clear that it allows us to recover* u by the modified ℓ1*-minimization* (4). ## 2.3 Results This section contains rigorous results for the algorithms of the last sections. ## 2.3.1 Learning Prior Knowledge We need a suitable model of random matrices, where as usual the ψ2 norm is defined by ∥X∥ψ2:= supp≥1 p −1/2E [|X| p] 1/p. Definition 2.3. *A matrix* M ∈ R n×pis s/n-Bernoulli-Subgaussian if Mjk = ΩjkRjk, where Ω is an i.i.d. Bernoulli matrix and R *is an i.i.d. Subgaussian matrix with* $$\mathbb{E}\left[\Omega_{j k}\right]=\frac{s}{n},\quad\mathbb{E}\left[R_{j k}\right]=0,\quad\mathbb{E}\left[R_{j k}^{2}\right]=\nu^{2},\quad\|R_{j k}\|_{\psi_{2}}\leq\nu C_{\psi_{2}}$$ 2, ∥Rjk∥ψ2 ≤ νCψ (5) for some variance ν > 0. *We call* M restricted s/n Bernoulli-Subgaussian *if in addition* $$\Pr\left[R_{j k}=0\right]=0,\quad\mathbb{E}\left[\left|R_{j k}\right|\right]\in\left[\frac{1}{10},1\right],\quad\mathbb{E}\left[R_{j k}^{2}\right]\leq1,\quad\Pr\left[\left|R_{j k}\right|>\tau\right]\leq2e^{\frac{-\tau^{2}}{2}}.$$ 2 . (6) Next, we define the easy class Ceasy as a slightly sparser version of C<t and generate the training data by drawing random samples. (A1) The easy class Ceasy consists of solutions for xl = Xzl with columns zl of t/¯ 2p restricted BernoulliSubgaussian matrix Z ∈ R p×q with c log q ≤ t¯≤ *t, q > cp*2log2p,2p $${\frac{2}{p}}\leq{\frac{\bar{t}}{p}}\leq{\frac{c}{\sqrt{p}}}.$$ $$q>c p^{2}\log^{2}p,$$ $$c\log q\leq{\bar{t}}\leq t,$$ . (7) The matrix X is only known to the teacher, while the student receives samples (*A, b*l) with bl = AXzl. The first and last inequalities pose mild conditions on the sparsities t and t¯, while the middle inequality can always be satisfied by providing sufficiently many samples. The vectors zl have expected sparsity t¯ and thus the corresponding solutions Xzl have expected sparsity st¯. In order for them be easier than the full class C<t, we generally choose *t < t* ¯ . Next, we require the student to be accurate on easy problems, with a safety margin √2 on sparsity: $$\left(6\right)$$ $$\left(7\right)$$ (A2) For all √2t¯ sparse columns zl of Z, we have Solve(*A, AXz*l) = Xzl for Solve as defined at the beginning of Section 2.2. This assumption, used in Welper (2021), is delicate and will be lifted in the reminder of the paper, see Section 2.4 for more details. Since the student shall only recover the class X, at this point, it is not strictly necessary that the solutions Xzl are global ℓ0 minimizers, which can, however, be ensured by the teacher in selecting the class X, see Remark 2.1. Finally, we need the following technical assumption. $\left(A\right)$ ## (A3) X Has Full Column Rank. Although this implies that X has more rows than columns, that is generally not true for AX used in the sparse recovery (4). The assumption results from the sparse factorization Spielman et al. (2012), where X represents a basis. Newer results Agarwal et al. (2014); Arora et al. (2014b;a; 2015); Barak et al. (2015) consider over-complete bases with less rows than columns and coherence conditions and may eventually allow a weaker assumption. Anyways, as is, the assumption can be enforced by shrinking the problem class, i.e. removing columns in X, at the expense of being less expressive. Such a procedure is not necessary for the choices of X in this paper, which have strong random components and thus full rank with high probability. With the given setup, we can recover X from easy training samples as claimed in the previous sections. Theorem 2.4 (Welper (2021), Theorem 4.2). *Assume that (A1), (A2) and (A3) hold. Then there are* constants c > 0 and C ≥ 0 independent of the probability model, dimensions and sparsity, and a tractable implementation of SparseFactor (see Section 2.2) *so that with probability at least* $$1-C p^{-c}$$ the output X¯ of Train (Algorithm 1) is a scaled permutation permutation X¯ = XPΓ of the matrix X *that* defines the class C<t. The result follows from Theorem 4.2 in Welper (2021) with some minor modifications described in Appendix A.1. ## 2.3.2 ℓ0**-Minimization With Prior Knowledge** After we have learned X, we need to ensure that we can solve all problems in class C<t by (4), not only the easy ones. We do so here for random X, which is clearly a idealization but common in compressed sensing, phase retrieval and related fields. Section 4 makes some progress towards more realistic classes by allowing some deterministic component. For this review, we assume $$(\mathrm{A4})\,\,\,\mathrm{The\,\,matrix}\,\,X$$ n×pis (s/n√2)-Bernoulli-Subgaussian with $${\frac{\|A\|_{F}^{2}}{\|A\|^{2}}}\geq C C_{\psi}^{4}{\frac{n t}{s\epsilon^{2}}}\log\left({\frac{3p}{\epsilon t}},\right)$$ $$({\mathcal{S}})$$ ψ2-norm bound Cψ in the Bernoulli-Subgaussian model (5) and arbitrary constant 0 *< ϵ <* 4/ √41 ≈ 0.6246 with the same bounds as the RIP constant in (2). The left hand side ∥A∥ 2 F /∥A∥ 2is the stable rank of A. With the scaling $$\mathrm{Scale}({\bar{X}})={\frac{\sqrt{n}}{\|A\|_{F}}},$$ , (9) we obtain the following result, with some minor modifications from the reference described in Appendix A.1. Theorem 2.5 (Welper (2021), Theorem 4.2). *Assume we choose* (9) for Scale *and that (A1) and (A4)* hold. Then there are constants c > 0 and C ≥ 0 independent of the probability model, dimensions and sparsity, and a tractable implementation of SparseFactor (see Section 2.2) *so that with probability at least* $1-C p^{-c}$? $$({\mathfrak{g}})$$ the matrix X has full column rank, s-sparse columns and AX *and satisfies the RIP* $$(10)$$ (1 − ϵ)∥v∥2 ≤ ∥AXv ¯ ∥2 ≤ (1 + ϵ)∥v∥2 (10) for all 2t*-sparse vectors* v ∈ R p. Hence, for ϵ < 4/ √41 ≈ 0.6246*, we can solve all problems in* C<t by ℓ1 minimization (4). As discussed in Remark 2.1, this is a recovery result and ℓ0-optimality of the class C<t must be shown separately. In conclusion, if we train on easy samples in Ceasy, we can recover X and thus with the modified ℓ1-minimization (4) solve all problems in class C<t, even the ones which we could not solve before training. ## 2.4 Implementation Of The Student Solver? While most assumptions are of technical nature the two critical ones are: 1. Implementation of Solve? If we implement Solve by plain ℓ1-minimization, A must satisfy the st¯- NSP. This poses strong assumptions on A and if it satisfies the slightly stronger st-NSP, all problems in C<t can be solved by ℓ1-minimization, rendering the training of X obsolete. We resolve the issue in the next section by a hierarchy of problem classes, which allow us to use prior knowledge from lower level classes to implement Solve. 2. Can we learn classes X that are not fully random? Some partially deterministic cases are considered in Section 4. ## 3 Iterative Learning 3.1 Overview We have seen that we can learn to solve all problems in a class C<t, if we are provided with samples from an easier subclass Ceasy. The easy class must be sufficiently rich and at the same time its sample problems must be solvable without prior training. This results in a delicate set of assumptions, which we have hidden in the existence of Solve, in the last section. The situation becomes much more favorable if we do not try to learn C<t at once, but instead iteratively proceed from easy to harder and harder problems. This way, we can implement Solve by the outcomes of previous learning episodes, instead of uninformed plain ℓ1 minimizers. To this end, we order multiple problem classes into a curriculum, similar to a human student who progresses from easy to hard classes ordered by a set of prerequisites. Likewise, we consider a collection of problem classes Ci, indexed by some index set i ∈ I and organized in a tree, e.g. ![8_image_0.png](8_image_0.png) with root node C0 and where each class Ci has finitely many children Cj , j ∈ child(i). (T1) Each tree node has at most γ children for some γ ≥ 0. The student starts learning the leafs and may proceed to a class Ci only if all prerequisite or child classes have been successfully learned. As before each class is given by a matrix Xi with si sparse columns and sparsity t Ci:= {x ∈ R n: x = Xi*z, z* is t-sparse}. As in Remark 2.1, we do not enforce that class members are ℓ0 minimizers, which has to be ensured separately. The difficulty of each class roughly corresponds to the sparsity, with the easiest at the leafs and then less and less sparsity towards the root of the tree. In order to learn each class Ci, the corresponding easy problems are constructed as in Assumption (A1) in the last section (T2) On each node i, the teacher provides easy problems consisting of solutions for xl = Xizl with columns zl of t/¯ 2p restricted Bernoulli-Subgaussian matrix Zi ∈ R p×q with c log q ≤ t¯≤ *t, q > cp*2log2p,2p $${\frac{\mathbf{2}}{\mathbf{p}}}\leq{\frac{\bar{\mathbf{t}}}{\mathbf{p}}}\leq{\frac{\mathbf{c}}{\sqrt{\mathbf{p}}}}.$$ $$q>\mathbf{c}p^{2}\log^{2}p,$$ $$c\log q\leq{\bar{t}}\leq t,$$ . (11) The matrix Xiis only known to the teacher, while the student receives samples (*A, b*l) with bl = AXizl. Thus, the easy samples are x = Xiz with random z of expected sparsity t¯. For reference, we define $$(11)$$ Ceasy,i := {x ∈ R n: x = Xi*z, z* is a column of Zi}, which differs slightly from Welper (2021) and its summary in Section 2 and is only used for the following motivation. In order to progress through the curriculum, we have to carefully connect each parent to its children. First, we assume: (T3) The combined knowledge of all children contains the knowledge of the parent, i.e. $$X_{i}=\sum_{j\in\operatorname{child}(i)}X_{j}W_{j}=:X_{\operatorname{child}(i)}W_{\operatorname{child}(i)}$$ $$(12)$$ XjWj =: Xchild(i)Wchild(i)(12) for some matrices Wj and combined matrices Xchild(i) = [Xj1 , . . . , Xjr ] and [WT j1 , . . . , WT jr ] T with child(i) = {j1*, . . . , j*r}. Next, we carefully calibrate the sparsity of all matrices to obtain a proper easy/hard split. (T4) Assume that the columns of Xi are si sparse and the columns of Wchild(i) are t/t¯ sparse with $s_{i}\bar{t}\leq s_{j}t$, $j\in$ child$(i)$. (13) $$t/{\bar{t}}\geq1,$$ Then every element in the parent class satisfies x = Xizi = Xchild(i)Wchild(i)zi =: Xchild(i)zchild(i). Hence, if it is easy for the parent ∥zi∥0 ≤ t¯, it is hard for the combined knowledge of the children ∥zchild(i)∥0 ≤ t. But given our prerequisites, we can already solve all hard children problems and implement Solve by the ℓ1 minimization (4) with prior knowledge Xchild(i). Technically, this requires that AXchild(i)is t-NSP, not only all AXj , j ∈ child(i) separately. This is a relatively mild extra assumption because the NSP typically depends only logarithmically on the number of columns in X·. (T5) For all tree nodes i the matrix product A[Scale(Xchild(i))] as well as the root node A[Scale(X0)] satisfy the null space property of order √2t. With the implementation of Solve, we can now learn the full parent class Ci by Algorithm 1 and then proceed through the full tree by induction. The split (12), roughly models a set of university courses, where higher level courses recombine concepts from multiple prerequisite courses. In summary, we have the sparsities (ignoring probabilities in Ceasy,i) $$\begin{array}{l l l}{{x\in\mathrm{Child~problems}\leadsto}}&{{x=X_{\mathrm{child}(i)}z_{\mathrm{child}(i)},}}&{{\|z_{\mathrm{child}(i)}\|_{0}\leq t,}}&{{\|x\|_{0}\leq s_{j}t,}}\\ {{x\in\mathcal{C}_{\mathrm{easy},i}\leadsto}}&{{x=X_{i}z_{i},}}&{{\|z_{i}\|_{0}\leq\bar{t},}}&{{\|x\|_{0}\leq s_{i}\bar{t},}}\\ {{x\in\mathcal{C}_{i}\leadsto}}&{{x=X_{i}z_{i},}}&{{\|z_{i}\|_{0}\leq t,}}&{{\|x\|_{0}\leq s_{i}t.}}\end{array}$$ It remains to learn the leafs, for which we cannot rely on any prior knowledge. To this end, note that by construction (12), we can expect the columns of the parent Xi to be a factor *t/t >*¯ 1 less sparse than the columns of the children Xj , j ∈ child(i). Hence, in a carefully constructed curriculum, the tree nodes' Xi become more sparse towards the bottom of the tree and ideally have unit sparsity O(1) at the leafs. This ensures that the leaf node classes can be solved by brute force in sub-exponential time. (T6) We have a solver SolveL for the leaf nodes, satisfying Assumption (A2). For some applications this may be costly, while for others, like SAT reductions to compressed sensing and related problems discussed in Section 5, this is routinely done for moderately sized problems Holden (2021). We need the following two technical assumptions (T7) For each tree node i, the matrix Xi has full column rank. (T8) On each tree node, we have implementations of Scale. These match (A3) and Scale in Section 2, where they are discussed in more detail. Remark 3.1. All problems x in class Ci are t 2/t¯*-sparse linear combinations of* Xchild(i)*. Hence, if* AXchild(i) satisfies the t 2/t¯ instead of only a t-NSP, the student can solve all problems in Ci, without training Algorithm 1. Practically, she can jump a class, but it is increasingly difficult to jump all classes, which would render the entire learning procedure void. Remark 3.2. The easy/hard split is achieved by some matrix satisfying a t¯ but not a t *RIP. In Section 2* this matrix is A*, so that this setup is very limiting. In this section, this is the matrix* AXchild(i) and therefore at the digression of the teacher and to a large extend independent on the problem matrix A. Remark 3.3. *The sparse factorization in Algorithm 1 condenses the knowledge* Xchild(i)into Xi*, allowing* more sparse zi *than* zchild(i) and as a consequence to tackle more difficult, or less sparse, problems x*. This* condensation is crucial to progress in the curriculum, but is in itself a meta-heuristic to consolidate knowledge. It is comparable to Occam's razor and the human preference for simple solutions. More flexible metaheuristics are left for future research. ## 3.2 Learnable Trees The algorithm of the last section is summarized in Algorithm 2 and all assumptions in the following definition. Definition 3.4. We call a tree of problem classes Ci, i ∈ I learnable *if it satisfies (T1)–(T8).* Deferring existence of learnable trees to Section 4 below, for now we assume that a teacher has already constructed such a tree. Then, as reasoned in the last section, we can recover the knowledge X0 of the root class C0, up to permutation and scaling in polynomial time. For a formal proof, see Appendix A.3. Theorem 3.5. Let Ci, i ∈ I be learnable according to Definition 3.4. Then, there exists an implementation of SparseFactor and constants c > 0 and C ≥ 0 independent of the probability model, dimensions and sparsity, so that with probability at least $$1-C\gamma s_{0}^{\frac{\log\gamma}{\log(c_{s}t/t)}}p^{-c}$$ the output X¯i = TreeTrain(Ci) of Algorithm 2 is a scaled permutation *Scale*(X¯i) = *Scale*(XiP) of Xi for some permutation matrix P. Knowing all Xi up to permutation and scaling allows the student to solve all problems in the tree. The proof for the following corollary is in Appendix A.3. Corollary 3.6. *Assume that the event in Theorem 3.5 is true so that the student has computed* TreeTrain(Ci) = X¯i for all three nodes i ∈ I, which are scaled permutations *Scale*(X¯i) = *Scale*(XiP) of Xi for some permutation matrix P. Then, the student can solve all problems in class Ci, i ∈ I by the convex optimization problem (3). Remark 3.7. As for compressed sensing, the last corollary is a recovery result: After training the student can find the same solutions x in class Ci *as the teacher. The corollary does not state that these are* ℓ0minimizers, which has to be verified separately. In classical compressed sensing this follows from uniqueness of sparse solutions, which is not required for the last corollary, but may be assumed in addition (and is much weaker than the RIP, see e.g. Foucart & Rauhut (2013)). Alternative verification of global ℓ0 *minimization* are also possible as e.g. Lemmas 5.4, 5.5 for reductions of SAT type problems to compressed sensing. The biggest problem with learning hard problems C<t from easy problems Ceasy in Theorem 2.4 is the need for a solver for the easy problems, as discussed in Section 2.4. The hierarchical structure of Theorem 3.5 completely eradicates this assumption, except for the leaf nodes, which ideally have sparsity O(1) so that brute force solvers are a viable option. Algorithm 2 Tree training SolveX: Solve the modified ℓ1-minimization (4) with the given matrix X SolveL: Solver for leaf nodes. Train(A, b1*, . . . , b*q, Solve): Algorithm 1 using the given solver subroutine. function TreeTrain(class Ci) Get matrix A and training samples b1*, . . . , b*q from teacher. if Ci has children **then** Compute Xj = TreeTrain(Cj ) for j ∈ child(i) Concatenate all child matrices X = [Xj ]j∈child(i) return Xi = Train(A, b1*, . . . , b*q, SolveX) else if Ci has no children **then** return Xi = Train(A, b1*, . . . , b*q, SolveL) end if end function ## 3.3 Cost Let us consider the cost of learnable trees from Definition 3.4. The number of nodes grows exponentially in the depth of the tree, but the depth only grows logarithmically with regard to the sparsity s0 of the root node, given that we advance the sparsities si as fast as (13) allows. Lemma 3.8. Let s0 *be the sparsity of the root node of the tree. Assume that each node of the tree has at* most γ children and that sit¯≳ csj t for c ≥ 0 *and all* j ∈ child(i)*. Then the tree has at most* γ N+1 = γs log γ log(*ct/t*¯) 0 nodes. The proof is given in Appendix A.2. Since on each node, the number of training samples and the runtime of the training algorithm are both polynomial, this lemma ensures that the entire curriculum is learned in polynomial time, with an exponent depending on γ, and the ratio t/t¯. ## 4 A Tree Construction In the last section, we have seen that we can learn difficult classes, given a suitable training curriculum. In this section, we argue that such curricula exist. Definition 3.4 and Theorem 3.5 state several conditions on classes Ci and their matrices Xi that allow the student to successfully learn the entire tree. While these are mainly simple dimensional requirements, the most severe is the NSP condition of A[Scale(Xchild(i))]. By Kasiviswanathan & Rudelson (2019) or Theorem 2.5 this is expected for random Xi. For a more realistic model scenario, we add a deterministic component. The deterministic part guarantees that every global ℓ0-minimizer $$\operatorname*{min}_{x\in\mathbb{R}^{n}}\|x\|_{0},\quad\mathrm{s.t.}\quad A x=b$$ $$(14)$$ ∥x∥0, s.t. Ax = b (14) can be embedded into a dedicated curriculum, for arbitrary right hand side b and only minor rank assumptions on A. The random part is a placeholder for further solutions in class, to obtain a more realistic model. Remark 4.1. *The model shall demonstrate that learning of any deterministic problem is possible, but is not* intended as a practical curriculum design. ## 4.1 Tree Result Given A and x, we construct a partially random learnable tree whose root class contains x and each Xi has p columns for some p > 0. To this end, we first partition the support supp(x) into non-overlapping patches {J1*, . . . , J*q} = J and then place the corresponding sub-vectors of x into q columns of the matrix $$S_{j l}:=\left\{\begin{array}{ll}x_{j}&j\in J_{l}\\ 0&\mbox{else.}\end{array}\right.\tag{1}$$ $$(15)$$ The columns are spread into the leaf classes of the following learnable tree, were κ(·) denotes the condition number. Theorem 4.2. Let A ∈ R m×n *and split* x ∈ R n *into* q = 2L, L ≥ 1 components S *given by* (15)*. If* 1. AS *has full column rank.* 2. On each tree node, we have implementations of Scale. 3. SolveL satisfies Assumption (A2) on the leaf nodes. 4. $$t\geq\log p^{2}+\log^{3}p,$$ 2 + log3p, 1 ≲ t ≲ √p (16) 5. $$1\leq t\leq{\sqrt{p}}$$ $$(16)$$ $$\operatorname*{min}_{J\in{\mathcal{J}}}{\frac{\left\|A_{J}\right\|_{F}^{2}}{\left\|A_{J}\right\|^{2}}}\geq t\kappa(A S)L+t\kappa(A S)\log{\frac{c q p}{t}}$$ $$(17)$$ t(17) for some generic constant c*, with probability at least* $$1-2\exp\left(-c\frac{1}{\kappa(A S)}\operatorname*{min}_{J\in\mathcal{J}}\frac{\|A_{.J}\|_{F}^{2}}{\|A_{.J}\|^{2}}\right)$$ there is a learnable binary tree of problem classes Ci, i ∈ I of depth L, given by matrices Xi *and sparsity* t so that 1. The root class C0 *contains* x. 2. *The parents are constructed from the children* Xi = Xchild(i)Wchild(i)*, where* Wchild(i) has t/t¯ = 2 sparse columns. 3. The columns of the leaf nodes' Xi are |J| *sparse.* 4. Each class' matrix Xi contains p columns, consisting of columns of S, i.e. pieces of x*, in the leafs* and sums thereof in the interior nodes. All other entries are random (dependent between classes) or zero. In short, curricula that allow us to learn the root class do exist, even if we add some deterministic structure to ensure that the classes contain some meaningful result. More sophisticated classes are left for future research. Note that x can be recovered even if it is not a global ℓ0 minimizer. This has to be ensured separately by the designer of the curriculum, see Remarks 2.1 and 3.7. The only restriction on x is Assumption 1 that AS has full column rank. In case x is indeed a global ℓ0 minimizer, this assumption is automatically satisfied by the following lemma, with z = [1, 1*, . . .* ] T. The proof is in Appendix A.4. Lemma 4.3. *Assume the columns of* S ∈ R n×q *have non-overlapping support and* z ∈ R q with non-zero entries. If the vector x = Sz is the solution of the ℓ0-minimization problem 14, then the columns of AS are linearly independent. Theorem 4.2 leaves the implementation of Scale open. The function is necessary because the sparse factorization of Y = XZ into X and Z in Algorithm 1 is not unique up to permutation and scaling. Two options are as follows: 1. If AX satisfies the RIP, all columns of AX must have unit size up to the RIP constants. Hence a reasonable scaling of X ensures equality ∥(AX)·i∥ = 1. However, the proof only shows that *T AX* is RIP for some preconditioner T, depending on the condition of the deterministic part AS. This implies the NSP (without preconditioning) since it is invariant under left preconditioning and hence ensures ℓ1 recovery. However, this is not informative for scaling X, unless the teacher provides the preconditioned matrix T A instead of A. 2. The teacher can ensure that the training samples Z are scaled, e.g. by sampling entries from a discrete set {−1, 0, 1}, which allows the student to recover the scaling. Another major assumption in Theorem 4.2 is the existence of a leaf node solver SolveL. We can use a brute force approach if we manage to achieve enough sparsity |J| in the leaf nodes, which we estimate in the following corollary. Corollary 4.4. Let A ∈ R m×n*. Let* x ∈ R n be a sparse, sx := ∥x∥0 and for some q let $${\mathcal{J}}=\{J_{1},\ldots,J_{q}\},$$ $${\frac{s_{x}}{q}}\leq|J_{i}|\leq2{\frac{s_{x}}{q}}$$ be a quasi-uniform partition of its support and S *be the corresponding component split defined in* (15). 1. *Assume that the following sub-matrices have uniform condition number and full stable rank* $$\kappa(A\mathbf{S})\lesssim\mathbf{1},$$ κ(AS) ≲ 1,∥A·J ∥ $$\frac{\left\|{\cal A}_{J}\right\|_{F}^{2}}{\left\|{\cal A}_{J}\right\|^{2}}\geq|J|,\qquad\qquad\qquad J\in{\mathcal J}.$$ with |J| ≳ log sx log p + (log p) 2. (18) 2. On each tree node, we have implementations of *Scale*. 3. SolveL satisfies Assumption (A2) on the leaf nodes. Then for some generic constant c*, with probability at least* $\blacksquare$ $\frac{1}{4}$ 4. $$1-2\operatorname*{min}\left\{s_{x}^{-c},p^{-c}\right\}$$ there is a learnable binary tree of problem classes Ci, i ∈ I, given by matrices Xi *so that* $$(18)$$ 1. The root class C0 *contains* x. 2. The columns of the leaf nodes' Xi are |J| sparse. 3. Each class' matrix Xi contains p *columns* Proof. We apply Theorem 4.2. Using κ(AS) ≲ 1 and ∥A·J ∥ 2 F ∥A·J ∥ 2 ≳ |J| and choosing the most favorable t ∼ log p in (16), assumption (17) reduces to $$|J|\gtrsim Lt+t\log(2^{L}p)\sim Lt+t\log p\sim L\log p+(\log p)^{2},\tag{19}$$ posing a limit on the minimal support size we can achieve at the leafs of the tree. In order to eliminate L, the corollary assumes that all J are of equivalent size. Since the tree has q = 2L leafs, this implies that sx ∼ |J|2 L and thus log sx ∼ log |J| + L ≥ L. Thus, condition (19) reduces to $$(20)$$ $$|J|\geq\log s_{x}\log p+(\log p)^{2}.$$ $\square$ 2. (20) Thus, the corollary follows from Theorem 4.2 with probability at least $$1-2\exp\left(-c{\frac{1}{\kappa(A S)}}\operatorname*{min}_{J\in J}{\frac{\|A_{J}\|_{F}^{2}}{\|A_{J}\|^{2}}}\right)\leq1-2\exp\left(-c[J]\right),$$ for all J ∈ J , which directly shows the result. Hence, on the leaf nodes, a brute force SolveL search of |J| sparse solutions, considers about n |J| ≥ n log s possible supports (ignoring p for the time being, which is at the teachers discretion). While significantly better than n s possible supports for finding x directly, the former number is not of polynomial size. In order to drive down the search size to O(1), we can iterate the tree construction and build new trees designed to enable the student to find every column in the leaf nodes Xi with one full tree per column. At the break between curricula, this requires the teacher to provide the samples (*A, b*k) with bk = A(Xi)·k for every leaf node column (Xi)·k, which is a much stronger requirement than just providing arbitrary samples from the child classes in the interior nodes. Since this is more costly, we calculate in the next section that this still leads to a total tree of polynomial size. ## 4.2 Tree Extension The curriculum in Theorem 4.2 shrinks the support size from s to log s. In order to reduce the size further, we may build a new curriculum for every column in every leaf Xi, if these columns can be split with full rank of AS, yielding p2 L ≤ ps new curricula. The assumption seems plausible for the random parts and is justified for the deterministic part by the following lemma (together with Lemma 4.3), proven in Appendix A.4. Lemma 4.5. *Assume the columns of* S ∈ R n×q *have non-overlapping support and* z ∈ R q with non-zero entries. If the vector x = Sz is the solution of the ℓ0*-minimization problem 14, then the columns* S·k, k ∈ [q] are global ℓ0 *optimizers of* $$S._{k}\in\operatorname*{min}_{x\in\mathbb{R}^{n}}\|x\|_{0}\quad{\mathrm{subject~to}}\quad A x=A S._{k}.$$ Remark 4.6. *Within each curriculum, the teacher provides samples from each class. At the break between* different curricula, the teacher must provide the more restrictive samples b = Ax with columns x *of leaf node* Xi*. Weather this can be avoided in a more careful tree construction is left for future research.* Since we aim for leaf column support size |J| ∼ 1 and its lower bound (18) contains the number p of columns in each Xi, which is at the teachers disposal, we shrink it together with the initial (sub-)curriculum support size s by choosing p ∼ s. Remark 4.7. By choosing a large constant in p ∼ s *or alternatively* p ∼ s α*, for the more difficult curricula,* p can be larger than m*. But by* (19), towards the simpler curricula p *must become small so that eventually* p ≤ m and the matrix AXi has more rows that columns. Depending on the kernel of AXi*, this may void* ℓ0 or ℓ1*-minimization and allow simpler constructions towards the bottom of the curriculum tree.* We iteratively repeat the procedure until the leaf support |J*| ∼ O*(1) is of unit size. The total number \#(s) of required (sub-)curricula for initial support size s satisfies the recursive formula $$\#(s)\sim p s\#\left(\log s\log p+(\log p)^{2}\right)\geq s^{2}\#\left((\log s)^{2}\right)$$ By induction, one easily verifies that \#(s) ≲ s 3, so that we use only a polynomial number of curricula, each of which can be learned in polynomial time. In conclusion, combining all problem classes into one single master tree, this yields a curriculum for a student to learn the root C0 in polynomial time, including a predetermined solution x. The problem classes can be fairly large at the top of the tree and must be small at the leafs. At the breaks between different curricula, the training samples must be of unit size containing only one column of the next tree. ## 4.3 Construction Idea In Theorem 4.2, all class matrices Xi are derived from the single matrix $$X:=S Z^{T}+D R(I-Z Z^{T}).$$ $$(21)$$ X := SZT + DR(I − ZZT). (21) The first summand is the deterministic part, with components S of x defined in (15) and arbitrary matrix Z with sparse orthogonal columns that boosts the number of columns from q to the desired p. The second summand is the random part with sparse random matrix R. The projector (I − ZZT) ensures that it does not interfere with the deterministic part and D is a scaling matrix to balance both parts. We choose Z and the support of R so that, upon permutation of rows and columns X is a block matrix $\begin{bmatrix}B_1&&\\ &\cdot&\\ &&B_q\end{bmatrix}$ . $$X=1$$ with each block containing one piece xJ . The tree is constructed out of these blocks as follows in case q = 4 and analogously for larger cases. ![15_image_0.png](15_image_0.png) See Appendices A.5.1 and A.6 for details. ## 5 Applications 5.1 3Sat And 1-In-3-Sat For an example applications, we consider reductions from the NP-complete 3SAT and 1-in-3-SAT to sparse linear systems (The paper Ayanzadeh et al. (2019) considers the other direction). The problems are defined as follows. - *Literal:* boolean variable or its negation, e.g. : x or ¬x. - *Clause:* disjunction of one or more literals, e.g.: x1 ∨ ¬x2 ∨ x3. - *3SAT:* satisfiability of conjunctions of clauses with three literals. For a positive result, at least one literal in each clause must be true. - *1-in-3-SAT:* As 3SAT, but for a positive result, exactly one literal in each clause must be true. Both problems are NP-complete and can easily be transformed into each other. In this section, we reduce a 1-in-3-SAT problem with clauses ck, k ∈ [m] and boolean variables xi, i ∈ [n] to a sparse linear system, following techniques from Ge et al. (2011). For each boolean variable xi, we introduce two variables yi ∈ R corresponding to xi and zi ∈ R corresponding to ¬xi for i ∈ [n]. For each clause ck, we define a pair of vectors Ck, Dk. The vector Ck has a one in each entry i for which the corresponding literal (not variable) xi is contained in the clause ck and likewise Dk has a one in each entry i for which the literal ¬xiis contained in ck. All other entries of Ck and Dk are zero. It is easy to see that $$y\in\{0,1\}^{n}{\mathrm{~and~}}z_{i}=\neg y_{i}$$ ⇒ Exactly one literal in ck is true if and only if C only if $C_{k}^{T}y+D_{k}^{T}z=1$. (22) We combine the linear conditions into the linear system $$A:=\begin{bmatrix}\cdots&C_{1}^{T}&\cdots&\cdots&D_{1}^{T}&\cdots\\ \vdots&&\vdots&&\\ \cdots&C_{m}^{T}&\cdots&\cdots&D_{m}^{T}&\cdots\\ \ddots&&\ddots&&\\ &I_{nn}&&I_{nn}&&\\ &&\ddots&&\ddots\end{bmatrix},\qquad\qquad b:=\begin{bmatrix}1\\ \vdots\\ 1\\ 1\\ \vdots\\ 1\end{bmatrix}$$ $$(23)$$ $$(24)$$ (23) with some extra identity blocks that together with the ℓ0-minimization $$\min_{y,z\in\mathbb{R}^{n}}\|y\|_{0}+\|z\|_{0}\quad\text{subject to}\quad A\begin{bmatrix}y\\ z\end{bmatrix}=b.$$ ensure that y ∈ {0, 1} n, when possible. Lemma 5.1. The clauses ck corresponding to Ck and Dk, k ∈ [m] *are 1-in-3 satisfiable if and only if* (24) has a n *sparse solution.* Proof. The i-th row of the identity blocks is yi + zi = 1. The solution is either 2-sparse or 1-sparse with yi = 1, zi = 0 or yi = 0, zi = 1. Hence the solution is at most n sparse. The latter two cases are true for all i if and only if y and z combined are n sparse. Then the pair (yi, zi) matches a boolean variable (xi, ¬xi) and the result follows from (22). ## 5.2 Model Class Note that RIP proofs typically rely on random mean zero matrix entries, while reductions of random 1-in-3- SAT subclasses to compressed sensing have matrices and solution vectors with non-negative entries and thus non-zero mean. As a result, our theory is not immediately applicable. We avoid the problem by considering a larger problem class with signed solutions x ∈ R n or x *∈ {−*1, 0, 1} n, which we can sample by mean-zero Bernoulli-Subgaussian distributions required for our theory. While in our 1-in-3-SAT reduction the first rows of A have exactly three non-zero entries, for simplicity, we sample sparse rows $$A={\begin{bmatrix}A_{11}&A_{12}\\ I_{n/2}&I_{n/2}\end{bmatrix}}\in\mathbb{R}^{m\times n},$$ m×n, b = $$b={\binom{b_{1}}{b_{2}}}\in\mathbb{R}^{n}$$ for two sparse matrices A1j ∈ {0, 1} (m−n/2)×(n/2). As in Lemma 5.1, the two identity blocks ensure that any solution x of Ax = b must have support at least ∥x∥0 ≥ ∥b2∥0. In the 1-in-3-SAT case, equality corresponds to satisfiable problems. Likewise, we ensure that all training problems satisfy ∥x∥0 = ∥b2∥0, which automatically implies that they are global ℓ0 optimizers. Remark 5.2. If ∥x∥0 = ∥b2∥0, then x is a global ℓ0 *minimizer.* ## 5.3 Curricula We consider several example curricula. The first is a realization of the construction in Theorem 4.2. The following two add some extra structure to ensure global ℓ0 minimization properties. The Second for all columns of each Xiin the curriculum and the third for all sparse linear combinations of columns of Xi, i.e. all elements in the corresponding problem class Ci. ## 5.3.1 Curriculum I We first consider a realization of the curriculum in Theorem 4.2, as shown in Figure 2. The curriculum is constructed from a single matrix X at the root node, where ∗ entries are mean-zero random ±1 and the x entries are (different per entry) random {0, 1}. The latter have non-zero mean, which is not amenable to RIP conditions and used as a model for the deterministic part of the theory. The children Xi are constructed from the parent by zeroing out selected rows, as shown in the figure. The tree is learnable, proven in Appendix C. Lemma 5.3. Assume that the dimensions m, n, p, sparsities t, deterministic solution x and matrix A satisfy all assumptions of Theorem 4.2. Then the tree in Figure 2 is learnable and all other conclusions in Theorem 4.2 hold. ![17_image_0.png](17_image_0.png) Figure 2: Xi matrices for a Curriculum I. x can be different in each row and ∗ are random entries. ## 5.3.2 Curriculum Ii For none of the solutions in the problem classes in Curriculum I we know if they are global ℓ0 minimizers. While this is not necessarily an issue for the tree construction, as outlined in Remark 3.7, it is not fully satisfactory and global minimizers can be obtained as follows. First, we split the columns according to the identity blocks in A, as shown in Figure 3. Each component in the upper block y or ∗, has exactly one corresponding component in the lower block z or + so that for each pair at most one entry is non-zero. As a result each column has the required sparsity to guarantee that it is a global ℓ0 minimum by Remark 5.2. Lemma 5.4. For all nodes i ∈ I in Curriculum II, all columns of Xi are global ℓ0 *minimizers.* Since the random parts of the curriculum have dependencies, we can no longer apply Theorem 4.2 to show that this tree is learnable, although it is conceivable that similar results still apply. ![18_image_0.png](18_image_0.png) $\mathbf{M}\times\mathbf{M}$ $$c o l u m n\,\,\,o f\,X_{i}\}$$ Figure 3: Xi matrices for Curriculum II with ℓ0 minimal columns. ## 5.3.3 Curriculum Iii In Curriculum II the columns of Xi are global ℓ0 minimizers, but their linear combinations in the classes Ci or the training samples are generally not, which can be fixed by the modification in Figure 4. All blocks individually work as before, but instead of allowing all possible sparse linear combinations of the columns, we only allow one non-zero contribution from each block column. This ensures the sparsity requirements in Remark 5.2 so that all problems in class are global ℓ0 minimizers. Lemma 5.5. For all nodes i ∈ I *in Curriculum III, define the classes by* $${\mathcal{C}}_{i}:=\{x\in\mathbb{R}^{n}:\,x=X_{i}z,\,z\,{\mathrm{~is~}}t{\mathrm{-sparse}}\}$$ z *has at most one non-zero entry per block column of* Xi} Then all elements of Ci are global are global ℓ0 *minimizers.* As for Curriculum II Theorem 4.2 is not applicable to show that this tree is learnable. Since the y and z entries are non-negative, this allows us to build a curriculum to learn one arbitrary 1-in-3-SAT problem in a larger class of random signed problems. If we can build an entire curriculum that is fully contained in 1-in-3-SAT itself remains open. ## 5.4 Numerical Experiments Table 1 contains results for Curricula II and III. All ℓ1-minimizations problems are solved by gradient descent in the kernel of Ax = b and the sparse factorization is implemented by ℓ4-maximization Zhai et al. (2020). Solutions on the leaf nodes are given instead of brute force solved. As in Welper (2021), Algorithm 1 contains an additional grader that sorts out wrong solutions from Solve, which often depend on the gradient descent accuracy. Scale is implemented by snapping the output of SparseFactor to the discrete values {−1, 0, 1}, which allows exact recovery of all nodes Xi, without numerical errors. Further details are given in Appendix C. - *Curriculum II:* We train three tree nodes on two levels. Grader tests to accuracy 10−4. The results are the average of 5 independent runs. ![19_image_0.png](19_image_0.png) Figure 4: Xi matrices for Curriculum III with ℓ0 minimal classes. | | Curr. I | Curr. II | | |-----------------|-----------|------------|--------| | Depth | 0 | 1 | 0 | | m | 96 | 96 | 121 | | n | 128 | 128 | 162 | | p Xchild(i) | 102 | 102 | 459 | | Rank AXchild(i) | 96.00 | 62.80 | 113.00 | | # Samples | 10000 | 10000 | 90000 | | % Validate | 0.55 | 0.91 | 0.98 | | #(Xstudent = X) | 5/5 | 7/10 | 2/2 | Table 1: Results of numerical experiments, Section 5.4, averaged over all runs and all nodes of given depth. The second but last row shows the percentage of successful training solutions, according to the grader. The last row shows the number of successfully recovered Xi for the given level out of the total number of trials. - *Curriculum III:* We train one tree node. The training sample matrices (23) are preconditioned per node, not globally as in Theorem 4.2, below. Grader tests to accuracy 10−3. The results are the average of 2 independent runs. Table 1 contains the results. It includes average ranks to show that the systems AX are non-trivial with nonzero kernel and the row %Validate shows the percentage of correctly recovered training samples according to the grader. A major bottleneck is the number of training samples for each node, which scales quadratically for ℓ4 maximization (but only linear for unique factorization without algorithm Spielman et al. (2012)), up to log factors. The last line shows that in the majority of cases we can recover the tree nodes Xi. The misses depend on solver parameters as e.g. iteration numbers and the size of random matrices. ## 6 Conclusion Although sparse solutions of linear systems are generally hard to compute, many subclasses are tractable. In particular, the prior knowledge x = Xz with sparse z allows us to solve problems with only mild assumptions on A. We learn X from a curriculum of easy samples and condensation of knowledge at every tree node. The problems in each class must be compatible so that AX satisfies the null space property. To demonstrate the feasibility of the approach, we show that the algorithms can learn a class X of non-trivial size that contains an arbitrary solution x. The results provide a rigorous mathematical model for some hypothetical principles in human reasoning, including expert knowledge and its training in a curriculum. To be applicable in practice, further research is required, e.g.: - The mapping of SAT type problems into sparse linear problems lacks several invariances, e.g. a simple reordering of terms may invalidate acquired knowledge. The reduction of SAT or other problems to sparse linear solvers is similar to feature engineering in machine learning. - For sparse factorization, the required number of samples scales quadratically, up to a log factor, which is the biggest computational bottleneck in the numerical experiments. However, the current implementation uses a standard method and does not use that the parent class Xi can be constructed from its children (12). - The curriculum is designed so that knowledge can be condensed by sparse factorization, which in itself is a meta-heuristic. One may need to dynamically adapt the condensation heuristic to real data. Since sparse factorization algorithms themselves often rely on ℓ1 minimization, similar approaches as discussed in the paper are conceivable. - Not all relevant knowledge can be combined into one root class X0 so that AX0 satisfies the null space property. Hence, one may need several roots or rather a knowledge graph, together with a decision criterion which node to use for a given problem. ## References Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli, and Rashish Tandon. Learning sparsely used overcomplete dictionaries. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvári (eds.), *Proceedings of The 27th Conference on Learning Theory*, volume 35 of Proceedings of Machine Learning Research, pp. 123–137, Barcelona, Spain, 13–15 Jun 2014. PMLR. URL http://proceedings. mlr.press/v35/agarwal14a.html. Michal Aharon, Michael Elad, and Alfred M. Bruckstein. On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them. *Linear Algebra and its Applications*, 416(1):48–67, 2006. ISSN 0024-3795. doi: 10.1016/j.laa.2005.06.035. URL http://www.sciencedirect.com/science/article/ pii/S0024379505003459. Special Issue devoted to the Haifa 2005 conference on matrix theory. Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning, 2020. URL https://arxiv.org/abs/2001.04413. https://arxiv.org/abs/2001.04413. Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. More algorithms for provable dictionary learning, 2014a. URL https://arxiv.org/abs/1401.0579. https://arxiv.org/abs/1401.0579. Sanjeev Arora, Rong Ge, and Ankur Moitra. New algorithms for learning incoherent and overcomplete dictionaries. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvári (eds.), Proceedings of The 27th Conference on Learning Theory, volume 35 of *Proceedings of Machine Learning Research*, pp. 779–806, Barcelona, Spain, 13–15 Jun 2014b. PMLR. URL http://proceedings.mlr.press/v35/arora14.html. Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. In Peter Grünwald, Elad Hazan, and Satyen Kale (eds.), *Proceedings of The 28th Conference on* Learning Theory, volume 40 of *Proceedings of Machine Learning Research*, pp. 113–149, Paris, France, 03–06 Jul 2015. PMLR. URL http://proceedings.mlr.press/v40/Arora15.html. Ramin Ayanzadeh, Milton Halem, and Tim Finin. Sat-based compressive sensing, 2019. URL https: //arxiv.org/abs/1903.03650. Boaz Barak, Jonathan A. Kelner, and David Steurer. Dictionary learning and tensor decomposition via the sum-of-squares method. In *Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing*, STOC '15, pp. 143–151, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450335362. doi: 10.1145/2746539.2746605. URL https://doi.org/10.1145/2746539.2746605. Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. *Constructive Approximation*, 28(3):253–263, Dec 2008. ISSN 1432-0940. doi: 10.1007/s00365-007-9003-x. URL https://doi.org/10.1007/s00365-007-9003-x. Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings of Machine Learning Research*, pp. 537–546, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/ v70/bora17a.html. E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. *IEEE Transactions on Information Theory*, 52(2):489–509, Feb 2006. ISSN 1557-9654. doi: 10.1109/TIT.2005.862083. Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. *Communications on Pure and Applied Mathematics*, 59, 08 2006. doi: 10.1002/ cpa.20124. Emmanuel J. Candès, Michael B. Wakin, and Stephen P. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. *Journal of Fourier Analysis and Applications*, 14(5):877–905, Dec 2008. ISSN 1531-5851. doi: 10.1007/s00041-008-9045-x. URL https://doi.org/10.1007/s00041-008-9045-x. R. Chartrand and Wotao Yin. Iteratively reweighted algorithms for compressive sensing. In *2008 IEEE* International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872, March 2008. doi: 10.1109/ICASSP.2008.4518498. Rick Chartrand and Valentina Staneva. Restricted isometry properties and nonconvex compressive sensing. Inverse Problems, 24(3):035020, may 2008. doi: 10.1088/0266-5611/24/3/035020. URL https://doi. org/10.1088%2F0266-5611%2F24%2F3%2F035020. Xi Chen, Xi Chen, Yu Cheng, and Bo Tang. On the recursive teaching dimension of vc classes. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper_ files/paper/2016/file/69a5b5995110b36a9a347898d97a610e-Paper.pdf. Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk. Iteratively reweighted least squares minimization for sparse recovery. *Communications on Pure and Applied Mathematics*, 63(1):1–38, 2010. doi: 10.1002/cpa.20303. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.20303. Manik Dhar, Aditya Grover, and Stefano Ermon. Modeling sparse deviations for compressed sensing using generative models, 2018. https://arxiv.org/abs/1807.01442. Thorsten Doliwa, Gaojian Fan, Hans Ulrich Simon, and Sandra Zilles. Recursive teaching dimension, vcdimension and sample compression. *Journal of Machine Learning Research*, 15(89):3107–3131, 2014. URL http://jmlr.org/papers/v15/doliwa14a.html. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Eric P. Xing and Tony Jebara (eds.), *Proceedings of the 31st International Conference on Machine Learning*, volume 32 of *Proceedings of Machine Learning Research*, pp. 647–655, Bejing, China, 22–24 Jun 2014. PMLR. URL http://proceedings.mlr.press/v32/donahue14.html. D. L. Donoho. Compressed sensing. *IEEE Transactions on Information Theory*, 52(4):1289–1306, April 2006. ISSN 1557-9654. doi: 10.1109/TIT.2006.871582. Simon Foucart and Ming-Jun Lai. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. *Applied and Computational Harmonic Analysis*, 26(3):395–407, 2009. ISSN 10635203. doi: 10.1016/j.acha.2008.09.001. URL http://www.sciencedirect.com/science/article/pii/ S1063520308000882. Simon Foucart and Holger Rauhut. *A Mathematical Introduction to Compressive Sensing*. Birkhäuser, 2013. Dongdong Ge, Xiaoye Jiang, and Yinyu Ye. A note on the complexity of lp minimization. *Mathematical* Programming, 129(2):285–299, Oct 2011. ISSN 1436-4646. doi: 10.1007/s10107-011-0470-2. URL https: //doi.org/10.1007/s10107-011-0470-2. S.A. Goldman and M.J. Kearns. On the complexity of teaching. *Journal of Computer and System Sciences*, 50(1):20–31, 1995. ISSN 0022-0000. doi: 10.1006/jcss.1995.1003. URL https://www.sciencedirect. com/science/article/pii/S0022000085710033. Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. Knowledge Distillation: A Survey. International Journal of Computer Vision, 129(6):1789–1819, 2021. ISSN 0920-5691, 1573-1405. doi: 10.1007/s11263-021-01453-z. URL https://link.springer.com/10.1007/s11263-021-01453-z. R. Gribonval and K. Schnass. Dictionary identification—sparse matrix-factorization via ℓ1 -minimization. IEEE Transactions on Information Theory, 56(7):3523–3539, 2010. doi: 10.1109/TIT.2010.2048466. URL https://arxiv.org/abs/0904.4774. Paul Hand and Vladislav Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. In Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet (eds.), *Proceedings of the 31st Conference* On Learning Theory, volume 75 of *Proceedings of Machine Learning Research*, pp. 970–978. PMLR, 06–09 Jul 2018. URL http://proceedings.mlr.press/v75/hand18a.html. Reinhard Heckel and Mahdi Soltanolkotabi. Compressive sensing with un-trained neural networks: Gradient descent finds a smooth approximation. In Hal Daumé, III and Aarti Singh (eds.), *Proceedings of the* 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 4149–4158, Virtual, 13–18 Jul 2020. PMLR. URL http://proceedings.mlr.press/v119/ heckel20a.html. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015. Sean B. Holden. Machine learning for automated theorem proving: Learning to solve sat and qsat. Foundations and Trends® *in Machine Learning*, 14(6):807–989, 2021. ISSN 1935-8237. doi: 10.1561/2200000081. URL http://dx.doi.org/10.1561/2200000081. Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey, 2020. URL https://arxiv.org/abs/2004.05439. https://arxiv.org/abs/2004.05439. Lunjia Hu, Ruihan Wu, Tianhong Li, and Liwei Wang. Quadratic upper bound for recursive teaching dimension of finite vc classes. In Satyen Kale and Ohad Shamir (eds.), Proceedings of the 2017 Conference on Learning Theory, volume 65 of *Proceedings of Machine Learning Research*, pp. 1147–1156. PMLR, 07–10 Jul 2017. URL https://proceedings.mlr.press/v65/hu17a.html. Wen Huang, Paul Hand, Reinhard Heckel, and Vladislav Voroninski. A provably convergent scheme for compressive sensing under random generative priors, 2018. URL https://arxiv.org/abs/1812.04176. https://arxiv.org/abs/1812.04176. Gauri Jagatap and Chinmay Hegde. Algorithmic guarantees for inverse imaging with untrained network priors. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32, pp. 14832–14842. Curran Associates, Inc., 2019. URL https://papers.nips.cc/paper/2019/hash/ 831b342d8a83408e5960e9b0c5f31f0c-Abstract.html. Shiva Prasad Kasiviswanathan and Mark Rudelson. Restricted isometry property under high correlations, 2019. URL https://arxiv.org/abs/1904.05510. https://arxiv.org/abs/1904.05510. David Kirkpatrick, Hans U. Simon, and Sandra Zilles. Optimal collusion-free teaching. In Aurélien Garivier and Satyen Kale (eds.), *Proceedings of the 30th International Conference on Algorithmic Learning Theory*, volume 98 of *Proceedings of Machine Learning Research*, pp. 506–528. PMLR, 22–24 Mar 2019. URL https://proceedings.mlr.press/v98/kirkpatrick19a.html. Ming-Jun. Lai, Yangyang. Xu, and Wotao. Yin. Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization. *SIAM Journal on Numerical Analysis*, 51(2):927–957, 2013. doi: 10.1137/110840364. URL https://doi.org/10.1137/110840364. Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31, pp. 9573–9583. Curran Associates, Inc., 2018. URL https://proceedings. neurips.cc/paper/2018/hash/61d009da208a34ae155420e55f97abc7-Abstract.html. Maximilian März, Claire Boyer, Jonas Kahn, and Pierre Weiss. Sampling Rates for $$\ell ^1$$-Synthesis. Foundations of Computational Mathematics, August 2022. ISSN 1615-3375, 1615-3383. doi: 10. 1007/s10208-022-09580-w. URL https://link.springer.com/10.1007/s10208-022-09580-w;https: //arxiv.org/abs/2004.07175. B. K. Natarajan. Sparse approximate solutions to linear systems. *SIAM Journal on Computing*, 24(2): 227–234, 1995. doi: 10.1137/S0097539792240406. URL https://doi.org/10.1137/S0097539792240406. Behnam Neyshabur and Rina Panigrahy. Sparse matrix factorization, 2014. URL https://arxiv.org/abs/ 1311.3315. https://arxiv.org/abs/1311.3315. Lucas Rencker, Francis Bach, Wenwu Wang, and Mark D. Plumbley. Sparse recovery and dictionary learning from nonlinear compressive measurements. *IEEE Transactions on Signal Processing*, 67(21):5659–5670, 2019. doi: 10.1109/TSP.2019.2941070. URL https://arxiv.org/abs/1809.09639. Stuart J. Russell, Peter Norvig, and Ernest Davis. *Artificial intelligence: a modern approach*. Prentice Hall series in artificial intelligence. Prentice Hall, Upper Saddle River, 3rd ed edition, 2010. ISBN 9780136042594. Karin Schnass. Local identification of overcomplete dictionaries. *Journal of Machine Learning Research*, 16 (35):1211–1242, 2015. URL http://jmlr.org/papers/v16/schnass15a.html. Yi Shen and Song Li. Restricted p–isometry property and its application for nonconvex compressive sensing. Advances in Computational Mathematics, 37:441–452, 2012. doi: 10.1007/s10444-011-9219-y. W. Shi, F. Jiang, S. Zhang, and D. Zhao. Deep networks for compressed image sensing. In 2017 IEEE International Conference on Multimedia and Expo (ICME), pp. 877–882, 2017. doi: 10.1109/ICME.2017. 8019428. URL https://arxiv.org/abs/1707.07119. Daniel A. Spielman, Huan Wang, and John Wright. Exact recovery of sparsely-used dictionaries. volume 23 of *Proceedings of Machine Learning Research*, pp. 37.1–37.18, Edinburgh, Scotland, 25–27 Jun 2012. JMLR Workshop and Conference Proceedings. URL https://arxiv.org/abs/1206.5882. J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere i: Overview and the geometric picture. *IEEE Transactions on Information Theory*, 63(2):853–884, 2017a. doi: 10.1109/TIT.2016. 2632162. URL https://arxiv.org/abs/1511.03607. J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere ii: Recovery by riemannian trust-region method. *IEEE Transactions on Information Theory*, 63(2):885–914, 2017b. doi: 10.1109/ TIT.2016.2632149. URL https://arxiv.org/abs/1511.04777. Qiyu Sun. Recovery of sparsest signals via ℓq-minimization. Applied and Computational Harmonic Analysis, 32(3):329–341, 2012. ISSN 1063-5203. doi: 10.1016/j.acha.2011.07.001. URL http://www. sciencedirect.com/science/article/pii/S1063520311000790. Richard S. Sutton and Andrew G. Barto. *Reinforcement learning: an introduction*. Adaptive computation and machine learning series. The MIT Press, Cambridge, Massachusetts, second edition edition, 2018. ISBN 9780262039246. Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. *Int J Comput Vis*, 128:1867–1888, 2020. doi: 10.1007/s11263-020-01303-4. URL https://arxiv.org/abs/1711.10925. Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, and Alexandros G. Dimakis. Compressed sensing with deep image prior and learned regularization, 2020. https://arxiv.org/abs/ 1806.06438. Roman Vershynin. *High-dimensional probability: an introduction with applications in data science*. Number 47 in Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, Cambridge ; New York, NY, 2018. ISBN 9781108415194. G. Welper. A relaxation argument for optimization in neural networks and non-convex compressed sensing, 2020. URL https://arxiv.org/abs/2002.00516. https://arxiv.org/abs/2002.00516. G. Welper. Non-convex compressed sensing with training data, 2021. URL https://arxiv.org/abs/2101. 08310. https://arxiv.org/abs/2101.08310. Joseph Woodworth and Rick Chartrand. Compressed sensing recovery via nonconvex shrinkage penalties. Inverse Problems, 32(7):075004, may 2016. doi: 10.1088/0266-5611/32/7/075004. URL https://doi. org/10.1088%2F0266-5611%2F32%2F7%2F075004. Shanshan Wu, Alex Dimakis, Sujay Sanghavi, Felix Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, and Sanjiv Kumar. Learning a compressed sensing measurement matrix via gradient unrolling. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International* Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 6828–6839. PMLR, 09–15 Jun 2019a. URL https://proceedings.mlr.press/v97/wu19b.html;https://arxiv. org/abs/1806.10175. Yan Wu, Mihaela Rosca, and Timothy Lillicrap. Deep compressed sensing. volume 97 of *Proceedings of* Machine Learning Research, pp. 6850–6860, Long Beach, California, USA, 09–15 Jun 2019b. PMLR. URL https://arxiv.org/abs/1905.06723. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pp. 3320–3328, Cambridge, MA, USA, 2014. MIT Press. URL https://papers. nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf. Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, and Yi Ma. Complete dictionary learning via ℓ 4-norm maximization over the orthogonal group. *Journal of Machine Learning Research*, 21(165):1–68, 2020. URL https://arxiv.org/abs/1906.02435. Sandra Zilles, Steffen Lange, Robert Holte, and Martin Zinkevich. Models of cooperative teaching and learning. *Journal of Machine Learning Research*, 12(11):349–384, 2011. URL http://jmlr.org/papers/ v12/zilles11a.html. ## A Details And Proofs A.1 Easy And Hard Problems: Theorems 2.4, 2.5 Theorem 2.4 contains some small changes to the original reference Welper (2021). In the original version (A1) contains two extra inequalities $$n\geq{\bar{c}}_{1}p\log p,$$ n ≥ c¯1p log p,1p $${\frac{1}{p}}\leq{\frac{s}{n}}\leq{\bar{c}}_{2},$$ which are used to ensure that X has full rank Welper (2021), Proof of Theorem 4.2 with (A3), Item 4. We assume this directly in (A3) and leave out the inequalities. For Theorem 2.5, the reference Welper (2021) requires the extra assumption that Ax = b has unique st sparse solutions, which is only used to verify that solutions of Solve are correct . In our case, this is implicitly contained in (A2), instead. ## A.2 Tree Size: Lemma 3.8 Lemma A.1 (Lemma 3.8 restated). Let s0 *be the sparsity of the root node of the tree. Assume that each* node of the tree has at most γ children and that sit¯≳ csj t for c ≥ 0 *and all* j ∈ child(i)*. Then the tree has* at most $$\gamma^{N+1}=\gamma s_{0}^{\frac{\log\gamma}{\log(c t/t)}}$$ log γ nodes. Proof. Let ℓi be the level of a node, i.e. the distance to the root node, and N the maximal level of all nodes. Each level has at most γ N−i nodes and thus the full tree has at most $$\sum_{i=0}^{N}\gamma^{N-i}={\frac{\gamma^{N+1}-1}{\gamma-1}}\leq\gamma\gamma^{N}$$ nodes. It remains to estimate N. By induction on the assumption sit¯≥ csj t we have $$s_{j}\leq\left({\frac{\bar{t}}{c t}}\right)^{\ell_{j}}s_{0}$$ and thus, since necessarily sj ≥ 1, we conclude that $$s_{0}\geq\left({\frac{c t}{\bar{t}}}\right)^{N}.$$ $\square$ Plugging in γ N =ct t¯ N log γ log *ct/t*¯the number of nodes is bounded by $\sigma\propto\sigma$. $$\gamma\gamma^{N}=\gamma\left(\frac{c t}{\bar{t}}\right)^{N\frac{\log\gamma}{\log c t/\bar{t}}}\leq\gamma s_{0}^{\frac{\log\gamma}{\log c t/\bar{t}}}.$$ ## A.3 Learnable Trees: Theorem 3.5 Theorem A.2 (Theorem 3.5 restated). Let Ci, i ∈ I be learnable according to Definition 3.4. Then, there exists an implementation of SparseFactor and constants c > 0 and C ≥ 0 *independent of the probability* model, dimensions and sparsity, so that with probability at least $$1-C\gamma s_{0}^{\frac{\log\gamma}{\log(c_{s}t/t)}}p^{-c}$$ the output X¯i = TreeTrain(Ci) of Algorithm 2 is a scaled permutation *Scale*(X¯i) = *Scale*(XiP) of Xi for some permutation matrix P. Proof. The result follows from inductively applying Theorem 2.4 on each node of the tree, starting at its leafs. The assumptions of Theorem 2.4 are easily matched with the given ones, except for (A2), which we verify separately for leaf and non-leaf nodes. 1. *Leave Nodes:* For the leaf nodes (A2) is assumed. This is required because the globally sparsest solution of Ax = b may not be unique, in which case (A2) ensures that we pick an in class solution. 2. *Non-Leave Nodes:* Let z be a column of the training sample Z and x = Xiz. By (12), we have x = Xiz = Xchild(i)Wchild(i)z =: Xchild(i)w with √2t sparse w because Wchild(i) has t/t¯ sparse columns and z is √2t¯ sparse, with probability at least 1 − 2p −c(see the proof of Theorem 2.4, Item 2, in Welper (2021)). Since AXchild(i) satisfies the √2t-RIP, the correct solution x is recovered by the modified ℓ1-minimization (4) and hence by SolveXi . Finally, we add up the probabilities. By Theorem 2.4, the probability of failure on each node is at most Cp−c. By Lemma 3.8, there are at most γs log γ log(*ct/t*¯) 0 nodes and thus the result follows from a union bound. Proof of Corollary 3.6. By assumption, the student knows the matrices X¯i, which are scaled permutations of Xi, i.e. Scale(X¯i) = XiSP for some scaling matrix S and permutation matrix P. Since sub-matrices of NSP matrices are also NSP, by Assumption (T5) of learnable trees and removing contributions from siblings, for all tree nodes i ∈ I, the scaled product A[Scale(X¯i)] satisfies the null space property of order . Hence, for every problem x = Xiz = (XiSP)(P −1S −1z) =: Scale(X¯i)¯z with t-sparse z in class Ci, the convex ℓ1-minimization problem $$\operatorname*{min}_{\bar{z}\in\mathbb{R}^{p}}\|\bar{z}\|_{1}\quad{\mathrm{subject~to}}\quad A\mathbf{S}\mathbf{c}\mathbf{A}\mathbf{L}(\bar{X}_{i})\bar{z}=b,$$ $$\mathbf{i}\mathbf{c}\mathbf{L}\mathbf{A}\mathbf{E}(X_{i}){\bar{z}}.$$ recovers z¯ and thus the solution x = Sclae(X¯i)¯z. ## A.4 Split Of Global ℓ0 **Minimizers** This section contains two lemmas that state the splits of ℓ0 minimizers are again ℓ0 minimizers and that they are linearly independent. Lemma A.3 (Lemma 4.3 restated). *Assume the columns of* S ∈ R n×q *have non-overlapping support and* z ∈ R q with non-zero entries. If the vector x = Sz is the solution of the ℓ0*-minimization problem 14, then* the columns of AS are linearly independent. Proof. Let xi be the columns of S and assume that the Axi, i ∈ [t] are linearly dependent. Then there exists a non-zero y ∈ R tsuch that Pt i=1 Axiyi = 0. Without loss of generality, let y1 ̸= 0 so that $\square$ $$A x_{1}=-A\sum_{i=2}^{t}x_{i}{\frac{y_{i}}{y_{1}}}.$$ We use this identity to eliminate x1: We use this identity to eliminate $z_{1}$: $$b=Ax=A\sum_{i=1}^{t}x_{i}z_{i}=Ax_{1}z_{1}+A\sum_{i=2}^{t}x_{i}z_{i}=A\sum_{i=2}^{t}x_{i}z_{i}\left(1-\frac{y_{i}}{y_{1}}z_{0}\right)=:A\hat{x}.$$ Since all $x_{i}$ have disjoint support and all $z_{i}$ are non-zero, we have $\|\hat{x}\|_{0}<\|x\|_{0}$, which contradicts the assumption that x is a ℓ0 minimizer and thus all Axi, i ∈ [n] must be linearly independent. Lemma A.4 (Lemma 4.5 restated). *Assume the columns of* S ∈ R n×q *have non-overlapping support and* z ∈ R q with non-zero entries. If the vector x = Sz is the solution of the ℓ0-minimization problem 14, then the columns S·k, k ∈ [q] are global ℓ0 *optimizers of* $$S._{k}\in\operatorname*{min}_{x\in\mathbb{R}^{n}}\|x\|_{0}\quad{\mathrm{subject~to}}\quad A x=A S._{k}.$$ Proof. Assume the statement is wrong. Then for some k ∈ [q] there is a yk with ∥yk∥0 ≤ ∥S·k∥0*, Ay*k = AS·k. Define $${\bar{x}}:=y_{k}z_{k}+\sum_{l\neq k}S._{l}z_{l}.$$ Then, we have Ax¯ = Aykzk + A X l̸=k S·lzl. = A X l S·lzl = ASz = Ax and since all S·l have disjoint support and zl ̸= 0 ∥x¯∥0 = ∥yk∥0 + X l̸=k ∥S·l∥0 < X l ∥S·l∥0 = ∥x∥0. This contradicts the assumption that x is a global ℓ0 minimiser and hence all S·k must be ℓ0 minimizers as well. ## A.5 Tree Nodes For Theorem 4.2 This section contains the construction of the matrices X in the tree nodes used in Theorem 4.2. ## A.5.1 Construction Of X We follow the idea outlined in Section 4.3. For given matrix A and vector x, we construct a decomposition matrix X ∈ R n×p and z so that x = Xz for t-sparse z and AX satisfies the null space property. The first condition ensures that x is contained in the class C<t and the second provides solvers Solve. This construction will be used in subsequent sections to define nodes in the curriculum tree. We start with some simple definitions (M1) By S m×n we denote all matrices in R m×n whose columns have non-overlapping support. $${\textbf{)}}{\textbf{1}}:={\left[1\quad\cdots\quad1\right]}^{T}$$ (M2) 1 := -1 *· · ·* 1Twith dimensions derived from context. We split x into q non-overlapping components, which we combine into the columns of a matrix S ∈ Sn×qso that x = S1. The matrix S has q columns, which is generally less than the p columns we desire for a rich class given by X. A convenient way out is to choose some matrix Z ∈ R p×q with orthonormal columns so that x = SZTZ1 = SZTz with z := Z1. To ensure sparsity of z and for later tree construction, we confine Z to S p×q. (M3) S ∈ Sn×q with non-zero columns. (M4) Z ∈ Sp×q with ℓ2-normalized columns. While the matrix SZT has the same dimensions as X, it is generally low rank and cannot satisfy the NSP. Furthermore, we want a rich class matrix X with further possible random solutions. To this end, we add in a random matrix R, but only on blocks of SZTthat are non-zero to keep sparsity. We define R as follows (M5) Note that upon permutation of rows and columns SZTis a block diagonal matrix with l ∈ [q] blocks with row indices supp(S·l) and column indices supp(Z·l). If the blocks do not contain all rows and columns, we may enlarge them to some disjoint sets Jl and Kl, respectively so that $$\mathbf{supp}(S_{i})\subset J_{i},$$ supp(S·l) ⊂ Jl, supp(Z·l) ⊂ Kl, l ∈ [q]. Then each set Jl × Kl corresponds to one diagonal block that contains one component of x in the columns of S. We also use the index free notation $\mathcal{J}:=\{J_{l}:l\in[q]\},\qquad\qquad\mathcal{K}:=\{K_{l}:l\in[q]\},\qquad\qquad\mathcal{J}\mathcal{K}:=\{J_{l}\times K_{l}:l\in[q]\},$ (M6) R ∈ R n×pis block matrix $$(.i)\subset K_{i},$$ $$t\in[q].$$ $\square$ $$R_{j k}={\left\{\begin{array}{l l}{{\mathrm{i.i.d~random}}&{j,k\in[J,K]\in{\mathcal{I K}}}\\ {0}&{{\mathrm{else,}}}\end{array}\right.}$$ whose random entries satisfy $$\mathbb{E}\left[R_{j k}\right]=0,$$ $$\mathbb{E}\left[R_{j k}^{2}\right]=1,\qquad\qquad\qquad\|R_{j k}\|_{\psi_{2}}\leq C_{\psi_{1}}$$ for some constant Cψ and are absolutely continuous with respect to the Lebesgue measure. Finally, we need a scaling matrix that will be determined below. (M7) D ∈ R n×n is a diagonal scaling matrix to be determined below. Then, we define the following class matrix (M8) X := SZT + DR(I − ZZT), (25) which is random on the kernel of Z T and matches the previously constructed SZT on the orthogonal complement. The following lemma summarises several elementary properties of the matrices and vectors in (M1) - (M8) that are used in the proofs below. In particular, they satisfy x = Xz for z = Z1. Lemma A.5. *For the construction (M1) - (M8) we have:* 1. Z TZ = I. 2. ZZT*is an orthogonal projector.* 3. Let supp(Z·l) ⊂ K ∈ K for some column l*. Then* $$X:=S Z^{T}+D R(I-Z Z^{T}),$$ $\left(25\right)$. (ZZT)KL = ZKlZ T Kl if K = L 0 else. 4. (ZZT)KL = 0 for all K ̸= L ∈ K. 5. (ZZT)KK is an orthogonal projector for all K ∈ K. 6. *For all* u ∈ R p *we have* $$\sum_{K\in\mathcal{K}}\left\|(Z Z^{T})_{K K}u_{K}\right\|^{2}=\left\|Z^{T}u\right\|^{2}.$$ 7. *For all* u ∈ R p *we have* $$\sum_{K\in{\mathcal{K}}}\left\|(I-Z Z^{T})_{K.}u\right\|^{2}\leq\left\|u\right\|^{2}.$$ 8. For z = Z1*, we have* ZZTz = z. 9. For x = S1 and z = Z1*, we have* SZTz = x. 10. For x = S1 and z = Z1*, we have* Xz = x. Proof. 1. Since Z is normalized and Z ∈ Sp×q, all columns are orthonormal. 2. ZZTis symmetric and with Item 1 we have (ZZT)(ZZT) = Z(Z TZ)Z T = ZZT. 3. We have (ZZT)KL =Pq l=1(Z·lZ T ·l )KL =Pq l=1 ZKlZ T Ll, which reduces to the formula in the lemma because K ̸= L are disjoint and suppZ·l ⊂ K. 4. Follows directly from Item 3. 5. Follows directly from Item 3 because the vectors ZKl is normalized. 6. For every K ∈ K, let l ∈ [q] be the corresponding index with supp(Z·l) ⊂ K. Then, we have $$\sum_{K\in{\mathcal{K}}}\left\|(Z Z^{T})_{K K}u_{K}\right\|^{2}=\sum_{K,l=1}^{q}\left\|Z_{K l}Z_{K l}^{T}u_{K}\right\|^{2}$$ $$=\sum_{K,l=1}^{q}(Z_{K l}^{T}u_{K})^{2}=\sum_{l=1}^{q}(Z_{\cdot l}^{T}u)^{2}=\left\|Z^{T}u\right\|^{2},$$ where in the first equality we have used Item 3, in the second that all ZKl are normalized and in the third that supp(ZKl) ⊂ K. 7. From Item 3, we have $$(I-Z Z^{T})_{K.}u=u_{K}-\sum_{L\in\mathcal{K}}(Z Z^{T})_{K L}u_{L}=u_{K}-(Z Z^{T})_{K K}u_{K}.$$ Since by Item 5 the matrix (I − ZZT)KK is a projector, it follows that $$\sum_{K\in{\mathcal{K}}}\left\|(I-Z Z^{T})_{K\cdot}u\right\|^{2}=\sum_{K\in{\mathcal{K}}}\left\|(I-Z Z^{T})_{K K}u_{K}\right\|^{2}$$ $$\leq\sum_{K\in\mathcal{K}}\left\|(I-Z Z^{T})_{K K}\right\|^{2}\left\|u_{K}\right\|^{2}\leq\left\|u\right\|^{2}.$$ 8. With Item 1 we have ZZTz = ZZTZ1 = Z1 = z. 9. With Item 1 we have SZTz = SZTZ1 = S1 = x. 10. Follows directly from the previous items. ## A.5.2 Expectation And Concentration For the proof of RIP and null space properties, we need expectation and concentration results for ∥AXu∥ for an arbitrary u. Lemma A.6. Let u ∈ R p, A ∈ R m×n and X *be the matrix defined in* (25)*. Then* $$\mathbb{E}\left[\left\|AXu\right\|^{2}\right]=\left\|ASZ^{T}u\right\|^{2}+\sum_{[J,K]\in\mathcal{JK}}\left\|AD.J\right\|_{F}^{2}\left[\left\|u_{K}\right\|^{2}-\left\|\left(ZZ^{T}\right)_{KK}u_{K}\right\|^{2}\right].$$ Proof. Since R is zero outside of the blocks RJK for [J, K] *∈ J K*, we have $$X u=[S Z^{T}+D R(I-Z Z^{T})]u=S Z^{T}u+\sum_{[J,K]\in{\mathcal{I K}}}D_{.J}R_{J K}(I-Z Z^{T})_{K.}u$$ and thus $$\mathbb{E}\left[\|AXu\|^{2}\right]=\mathbb{E}\left[\left\|ASZ^{T}u+\sum_{[J,K]\in\mathcal{JK}}AD_{.J}R_{JK}(I-ZZ^{T})_{K\cdot}u\right\|^{2}\right]$$ $$=\left\|ASZ^{T}u\right\|^{2}+\sum_{[J,K]\in\mathcal{JK}}\left\|AD_{.J}R_{JK}(I-ZZ^{T})_{K\cdot}u\right\|^{2}$$ $$=\left\|ASZ^{T}u\right\|^{2}+\sum_{[J,K]\in\mathcal{JK}}\left\|AD_{.J}\right\|_{F}^{2}\left\|(I-ZZ^{T})_{K\cdot}u\right\|^{2},$$ where in the third line we have used Lemma B.1 and in the second that the cross terms $$\left\langle A S Z^{T}u,\sum_{[J,K]\in{\mathcal{H}}}A D_{J}\mathbf{E}\left[R_{J K}\right](I-Z Z^{T})_{K\cdot u}\right\rangle=\mathbf{0}$$ and $\langle\mathbf{AD}_{J}\mathbb{E}\left[R_{J\bar{K}}\right]\left(I-ZZ^{T}\right)_{\bar{K}}u,\,\mathbf{AD}_{J}\mathbb{E}\left[R_{JK}\right]\left(I-ZZ^{T}\right)_{K}u\rangle=0$ vanish because E [RJK] = 0 and in the last equation we have split the expectation because RJK and RJ¯K¯ are independent for all cross terms (J, ¯ K¯ ) ̸= (*J, K*). We simplify the last term $$\left\|(I-ZZ^{T})_{K\cdot u}\right\|^{2}=\left\|u_{K}-\sum_{L\in\mathcal{K}}(ZZ^{T})_{KL}u_{L}\right\|^{2}$$ $$=\left\|u_{K}-(ZZ^{T})_{KK}u_{K}\right\|^{2}$$ $$=\left\|u_{K}\right\|^{2}-\left\|(ZZ^{T})_{KK}u_{K}\right\|^{2},$$ $\left\|u_{K}\right\|^{2}=\left\|u_{K}\right\|^{2}-\left\|(ZZ^{T})_{KK}u_{K}\right\|^{2},$ $\left\|u_{K}\right\|^{2}=\left\|u_{K}\right\|^{2}-\left\|(ZZ^{T})_{KK}u_{K}\right\|^{2},$ $\square$ where the second and third lines follow from Items 4 and 5 in Lemma A.5, respectively. Hence, we obtain $$\mathbb{E}\left[\left\|AXu\right\|^{2}\right]=\left\|ASZ^{T}u\right\|^{2}+\sum_{[K,J]\in\mathcal{J}\mathcal{K}}\left\|AD_{:K}\right\|_{F}^{2}\left[\left\|u_{K}\right\|^{2}-\left\|\left(ZZ^{T}\right)_{KK}u_{K}\right\|^{2}\right].$$ If AS has orthonormal columns, we can simplify the expectation. Since this is generally not true, we rename A → M, which will be a preconditioned variant of A later. Lemma A.7. Let u ∈ R p and M ∈ R m×n. With X, S and D *defined in* (25), assume that MS has orthonormal columns and the diagonal scaling is chosen as Dj = ∥M·J ∥ −1 Ffor all j in block J ∈ J *. Then* $$\mathbb{E}\left[\left\|M X u\right\|^{2}\right]=\left\|u\right\|^{2}.$$ Proof. The result follows from Lemma A.6 after simplifying several terms. First, since MS has orthonormal columns, we have (MS) T(MS) = I and thus $$\left\|M S Z^{T}u\right\|^{2}=u^{T}Z(M S)^{T}(M S)Z^{T}u=u^{T}Z Z^{T}u=\left\|Z^{T}u\right\|^{2}.$$ Second, for arbitrary j ∈ J, by definition of the scaling D, we have $$\|M D_{J}\|_{F}^{2}=\|M_{J}\|_{F}^{2}\,|D_{j}|^{2}=\|M_{J}\|_{F}^{2}\,\|M_{J}\|_{F}^{-2}=1.$$ Finally, from Lemma A.5 Item 6, we have $$\sum_{K\in{\mathcal{K}}}\left\|(Z Z^{T})_{K K}u_{K}\right\|^{2}=\left\|Z^{T}u\right\|^{2}.$$ Plugging into Lemma A.6, we obtain $$\mathbb{E}\left[\left\|MXu\right\|^{2}\right]=\left\|MSZ^{T}u\right\|^{2}+\sum_{[J,K]\in\mathcal{JK}}\left\|MD_{J}\right\|_{F}^{2}\left[\left\|u_{K}\right\|^{2}-\left\|(ZZ^{T})_{KK}u_{K}\right\|^{2}\right].$$ $$=\left\|Z^{T}u\right\|^{2}+\left(\sum_{[J,K]\in\mathcal{JK}}\left\|u_{K}\right\|^{2}\right)-\left\|Z^{T}u\right\|^{2}$$ $$=\left\|u\right\|^{2}.$$ Next, we prove concentration inequalities for the random matrix X. Lemma A.8. Let u ∈ R p and M ∈ R m×n. With X, S and D *defined in* (25), assume that MS has orthonormal columns and the diagonal scaling is chosen as Dj = ∥M·J ∥ −1 Ffor all j in block J ∈ J *. Then* $$\left\|\left\|M X u\right\|^{2}-\left\|u\right\|\right\|_{\psi_{2}}\leq C C_{\psi}^{2}\operatorname*{max}_{J\in{\mathcal{J}}}{\frac{\left\|M_{.J}\right\|}{\left\|M_{.J}\right\|_{F}}}\left\|u\right\|.$$ Proof. The result follows from Lemma B.4 after we have vectorized R. To this end, let vec(·) be the vectorization, which identifies a matrix R a×b with a vector in (R a) ⊗ (R b) ′for any dimensions a, b. Then, since for all matrices ABu = (A ⊗ u T) vec(B), we have $$M D_{,J}R_{J K}(I-(Z Z^{T})_{K}.u=\left[M D_{.J}\otimes u^{T}(I-(Z Z^{T})_{K}^{T}.\right]\mathrm{vec}\left(R_{J K}\right)$$ so that $$MXu=[MSZ^{T}+MDR(I-ZZ^{T})]u$$ $$=MSZ^{T}u+\sum_{[J,K]\in\mathcal{JK}}MD_{.J}R_{JK}(I-ZZ^{T})_{K.}u$$ $$=MSZ^{T}u+\sum_{[J,K]\in\mathcal{JK}}\left[MD_{.J}\otimes u^{T}(I-ZZ^{T})_{K.}^{T}\right]\text{vec}\left(R_{JK}\right)$$ $$=:\mathcal{B}+\mathcal{AR},$$ with the block matrix and vectors $$\begin{array}{l}{{\mathcal{A}:=\big[M D_{.J}\otimes u^{T}(I-Z Z^{T})_{K\cdot}^{T}\big]_{[J,K]\in{\mathcal{I K}}}}}\\ {{\mathcal{R}:=[\operatorname*{vec}\left(R_{J K}\right)]_{[J,K]\in{\mathcal{I K}}}}}\\ {{\mathcal{B}:=M S Z^{T}u.}}\end{array}$$ Using Lemma B.2 in the fist equality and Lemma A.7 in the last, we have $$\left\|{\mathcal{A}}\right\|_{F}^{2}+\left\|{\mathcal{B}}\right\|^{2}=\mathbb{E}\left[\left\|{\mathcal{A}}{\mathcal{R}}+{\mathcal{B}}\right\|^{2}\right]=\mathbb{E}\left[\left\|M X u\right\|^{2}\right]=\left\|u\right\|^{2}.$$ Furthermore, we have $$\|\mathcal{A}\|\leq\left(\sum_{[J,K]\in\mathcal{J}\mathcal{K}}\left\|MD_{.J}\otimes u^{T}(I-ZZ^{T})_{K.}^{T}\right\|^{2}\right)^{1/2}$$ $$=\left(\sum_{[J,K]\in\mathcal{J}\mathcal{K}}\left\|MD_{.J}\right\|^{2}\left\|(I-ZZ^{T})_{K.}u\right\|^{2}\right)^{1/2}$$ $$=\max_{J\in\mathcal{J}}\left\|MD_{.J}\right\|\left(\sum_{K\in\mathcal{K}}\left\|(I-ZZ^{T})_{K.}u\right\|^{2}\right)^{1/2}$$ $$\leq\max_{J\in\mathcal{J}}\left\|MD_{.J}\right\|\left\|u\right\|,$$ where in the last inequality we have used Lemma A.5, Item 7. Thus, with Lemma B.4, we have $$\left\|\|M X u\|-\|u\|\right\|_{\psi_{2}}=\left\|\|\mathcal{A R}+\mathcal{B}\|-\left(\|\mathcal{A}\|_{F}^{2}+\|\mathcal{B}\|^{2}\right)^{1/2}\right\|_{\psi_{2}}$$ $$\leq C C_{\psi}^{2}\left\|{\mathcal{A}}\right\|\leq C C_{\psi}^{2}\operatorname*{max}_{J\in{\mathcal{J}}}\left\|{\cal M D}_{.J}\right\|\left\|u\right\|.$$ We can further estimate the right hand side with the definition of diagonal scaling D $$\|M D_{J}\|=\|M_{J}D_{J J}\|=\frac{\|M_{\cdot J}\|}{\|M_{\cdot J}\|_{F}},$$ which completes the proof. ## A.5.3 Rip Of Mx We do not show the RIP for AX directly, but for a preconditioned variant. Since we determine the preconditioner later, we first state results for a generic matrix MX. With the expectation and concentration inequalities from the previous section, the proof of the RIP is standard, see e.g. Baraniuk et al. (2008); Foucart & Rauhut (2013); Kasiviswanathan & Rudelson (2019). We first show a technical lemma. Lemma A.9. Let A ∈ R m×n *and assume that there is a* ϵ4 cover N ⊂ S n−1 *of the unit sphere* S n−1 *with* $$\|A x_{i}\|-1|\leq{\frac{\epsilon}{2}}\quad{\mathrm{for~all~}}x_{i}\in{\mathcal{N}}.$$ Then $(1-\epsilon)\left\|x\right\|\leq\left\|Ax\right\|\leq(1+\epsilon)\left\|x\right\|\quad\text{for all}x\in\mathbb{R}^{n}$. Proof. Let x ∈ S n−1 be the maximizer of the norm so that ∥Ax∥ = ∥A∥. Then, there is a element xi ∈ N in the cover with ∥x − xi∥ ≤ ϵ4 and we obtain the upper bound $$\|A\|=\|Ax\|\leq\|Ax_{i}\|+\|A(x-x_{i})\|\leq\|Ax_{i}\|+\|A\|\,\frac{\epsilon}{4}$$ $$\Rightarrow\left(1-\frac{\epsilon}{4}\right)\|A\|\leq\|Ax_{i}\|$$ $$\Rightarrow\|A\|\leq\frac{1+\epsilon/2}{1-\epsilon/4}\leq1+\epsilon.$$ With the upper bound and the given assumptions, for arbitrary x ∈ S n−1, we estimate the lower bound by $\|Ax\|\geq\|Ax_{i}\|-\|A(x-x_{i})\|\geq\|Ax_{i}\|-(1+\epsilon)\,\|x-x_{i}\|\,$ $$\begin{array}{l}{{-\,x_{i}\vert}}\\ {{\geq\left(1-\frac{\epsilon}{2}\right)-(1+\epsilon)\frac{\epsilon}{4}=1-\frac{\epsilon}{2}-\frac{\epsilon}{4}-\frac{\epsilon^{2}}{4}\geq1-\epsilon.}}\end{array}$$ $\square$ The bounds extend from the sphere to all x ∈ R n by scaling. For the following RIP result, we add in an isometry W ∈ R p×p ′, with ∥W·∥ = ∥·∥, which allows us to construct tree nodes Xi from its children by (12) below. Lemma A.10. Let W ∈ R p×p ′*be an isometry and for* M ∈ R m×n, with X, S and D *defined in* (25), assume that MS *has orthonormal columns and the diagonal scaling is chosen as* Dj = ∥M·J ∥ −1 Ffor all j *in block* J ∈ J *. If* minJ∈J ∥M·J ∥ 2 F ∥M·J ∥ 2 ≥ 2tC4ψ cϵ2 log 12ep tϵ *, then with probability at least* 1 − 2 exp − c 2 ϵ 2 C4ψ minJ∈J ∥M·J ∥ 2 F ∥M·J ∥ 2 the matrix MXW *satisfies the RIP* $(1-\epsilon)\left\|z\right\|\leq\left\|MXWz\right\|\leq(1+\epsilon)\left\|z\right\|\quad\text{for all}z\text{with}\left\|z\right\|_{0}\leq t.$ Proof. Fix a support T ⊂ [p ′] with |T| = t and let ΣT ⊂ R p ′be the subspace of all vectors supported on T. By standard volumetric estimates Baraniuk et al. (2008); Vershynin (2018) there is a ϵ4 cover N of the unit sphere in ΣT of cardinality $$|{\mathcal{N}}|\leq\left({\frac{12}{\epsilon}}\right)^{t}.$$ Since ∥W zi∥ = ∥zi∥, zi ∈ N , by Lemma A.8 and a union bound, we obtain $$\Pr\left[\exists z_{i}\in\mathcal{N}:\,||MXWz_{i}||-1|\geq\epsilon\right]\leq2\left(\frac{12}{\epsilon}\right)^{t}\exp\left(-c\frac{\epsilon^{2}}{C_{\psi}^{4}}\min_{J\in\mathcal{J}}\frac{\|M_{J}\|_{F}^{2}}{\|M_{J}\|^{2}}\right).$$ $$(1-\epsilon)\left\|z\right\|\leq\left\|M X W z\right\|\leq(1+\epsilon)\left\|z\right\|$$ Let us assume that the event fails and thus |∥MXW zi∥ − 1| ≤ ϵ for all zi ∈ N . Then, by Lemma A.9, we have (1 − ϵ) ∥z∥ ≤ ∥MXW z∥ ≤ (1 + ϵ) ∥z∥ for all z ∈ ΣT . There are p t ≤ep t tsupports T of size t and thus, by a union bound we obtain $(1-\epsilon)\left\|z\right\|\leq\left\|MXWz\right\|\leq(1+\epsilon)\left\|z\right\|\quad\text{for all}z\text{with}\left\|z\right\|_{0}\leq t$. with probability of failure bounded by $$\begin{split}2\left(\frac{ep}{t}\right)^t\left(\frac{12}{\epsilon}\right)^t\exp\left(-e\frac{\epsilon^2}{C_\psi^4}\frac{\min\limits_{\|M,J\|}\|M_J\|_F^2}{\|M_J\|^2}\right)\\ =2\exp\left(-e\frac{\epsilon^2}{C_\psi^4}\frac{\min\limits_{\|M,J\|_F^2}\|M_J\|_F^2}{\epsilon^2\mathcal{G}}+t\log\frac{12ep}{t\epsilon}\right)\\ \leq2\exp\left(-\frac{\epsilon}{2}\frac{\epsilon^2}{C_\psi^4}\frac{\min\limits_{\|M,J\|_F^2}\|M_J\|_F^2}{\epsilon^2\mathcal{G}}\right)\end{split}$$ if if $$t\log\frac{12e p}{t\epsilon}\leq\frac{c}{2}\frac{\epsilon^{2}}{C_{\psi}^{4}}\operatorname*{min}_{J\in\mathcal{J}}\frac{\|M_{.J}\|_{F}^{2}}{\|M_{.J}\|^{2}}\Leftrightarrow\operatorname*{min}_{J\in\mathcal{J}}\frac{\|M_{.J}\|_{F}^{2}}{\|M_{.J}\|^{2}}\geq\frac{2t C_{\psi}^{4}}{c\epsilon^{2}}\log\frac{12e p}{t\epsilon}.$$ ## A.5.4 Null Space Property Of Ax The matrix MS in the RIP results must have orthonormal columns, which is not generally true for M = A. However, this is true with a suitable preconditioner that we construct next. The null space property is invariant under preconditioning, which allows us to eliminate it, later. Lemma A.11. Let M ∈ R m×q with m ≥ q *have full column rank. Then there is a matrix* T ∈ R m×m *with* condition number κ(T) = κ(M) such that TM *has orthonormal columns.* Proof. Let M = UΣV T be the singular value decomposition of M. Define $$T:=D U^{T},$$ $$D^{-1}:=\mathrm{diag}[\sigma_{1},\ldots,\sigma_{q},\sigma,\ldots,\sigma]$$ T := DUT, D−1:= diag[σ1, . . . , σq*, σ, . . . , σ*] for q ≤ m singular values σi and remaining m − q values σ in the interval [σ1*, . . . , σ*q]. Then, we have MT T T TM = (V Σ TU T)(UDT)(DUT)(UΣV T) = V Σ T DT DΣV T = V V T = I, where we have used that Σ T DT DΣ = I. By construction, T has singular values σ1*, . . . , σ*q and one extra value σ bounded by the former so that $$\kappa(T)=\frac{\sigma_{1}}{\sigma_{q}}=\kappa(M).$$ Lemma A.12. Let A ∈ R m×n and T ∈ R m×m *be invertible. Then* $$\square$$ $${\frac{\|A\|_{F}}{\|A\|}}\leq\kappa(T){\frac{\|T A\|_{F}}{\|T A\|}}.$$ Proof. We first show that $$\|T A\|_{F}\geq\left\|T^{-1}\right\|^{-1}\|A\|_{F}\,.$$ Indeed ∥x∥ ≤T −1 ∥T x∥ implies ∥T x∥ ≥T −1 −1∥x∥ and thus applied to the columns aj of A, we have $$\|TA\|_{F}^{2}=\sum_{j=1}^{n}\|Ta_{j}\|^{2}\geq\sum_{j=1}^{n}\left\|T^{-1}\right\|^{-2}\left\|a_{j}\right\|^{2}=\left\|T^{-1}\right\|^{-2}\left\|A\right\|_{F}^{2}.$$ With this estimate, we obtain $$\kappa(T)\frac{\|T A\|_{F}}{\|T A\|}\geq\|T\|\,\|T^{-1}\|\,\frac{\|T^{-1}\|^{-1}\,\|A\|_{F}}{\|T\|\,\|A\|}=\frac{\|A\|_{F}}{\|A\|}.$$ $\square$ Corollary A.13. Let W ∈ R p×p ′be an isometry and for X, S and D *defined in* (25)*, assume that* AS has full column rank and minJ∈J ∥A·J ∥ 2 F ∥A∥ 2 ·J ≥ 2tC4ψ cϵ2 κ(AS) log 12ep tϵ *. Then there is an invertible matrix* T ∈ R m×m so that with the diagonal scaling Dj = ∥T A·J ∥ −1 Ffor all j in block J ∈ J *with probability at least* 1 − 2 exp − c 2 ϵ 2 C4ψ 1 κ(AS) minJ∈J ∥A·J ∥ 2 F ∥A·J ∥ 2 the matrix T AXW *satisfies the RIP* $(1-\epsilon)\left\|z\right\|\leq\left\|TAXWz\right\|\leq(1+\epsilon)\left\|z\right\|\quad\text{for all}z\text{with}\left\|z\right\|_{0}\leq t.$ Proof. Since the matrix AS has full column rank by Lemmas A.11 and A.12, there is an invertible matrix T such that $\kappa(T)=\kappa(AS)$, $TAS$ has orthogonal columns $\frac{\|A_{\cdot J}\|_{F}}{\|A_{\cdot J}\|}\leq\kappa(T)\frac{\|TA_{\cdot J}\|_{F}}{\|TA_{\cdot J}\|}$ for all $J\in\mathcal{J}$. Thus, the corollary follows from Lemma A.10 with M = T A. The last corollary allows us to recover x = S1 by ℓ1-minimization $\square$ $$\min_{x\in\mathbb{R}^{n}}\|x\|_{1}\quad{\mathrm{subject~to}}\quad T A x=b,$$ preconditioned by some matrix T. This problem is not yet solvable by the student, who generally has no access to the matrix T, which is only used by the teacher for the construction of X. However, the matrix T is unnecessary for ℓ1 recovery because the RIP implies the null space property, which is sufficient for recovery and independent of left preconditioning. Corollary A.14. Let W ∈ R p×p ′be an isometry and for X, S and D *defined in* (25)*, assume that* AS has full column rank and minJ∈J ∥A·J ∥ 2 F ∥A·J ∥ 2 ≥ 2tC4ψ cϵ2 κ(AS) log 12ep tϵ *. Then there is an invertible matrix* T ∈ R m×m so that with the diagonal scaling Dj = ∥T A·J ∥ −1 Ffor all j in block J ∈ J *with probability at least* 1 − 2 exp − c 2 ϵ 2 C4ψ 1 κ(AS) minJ∈J ∥A·J ∥ 2 F ∥A·J ∥ 2 the matrix AXW *satisfies the null space property of order* t $$for\ all\ z\in\ker(AXW)\ and\ T\subset[p],\ |T|\leq t.$$. $$\left\|z_{T}\right\|_{1}<\left\|z_{T}\right\|_{1}$$ with complement T¯ of T. Proof. Setting ϵ = 1 3 , changing t → 2t and adjusting the constants accordingly, with the given conditions and probabilities, the matrix *T AX* satisfies the 2t, 13 -RIP. Thus, by Foucart & Rauhut (2013), proof of Theorem 6.9, *T AX* satisfies $\|z_{T}\|_{1}<\frac{1}{2}\left\|z\right\|_{1}$ for all $z\in\ker(TAX)$ and $T\subset[p]$, $|T|\leq t$. This directly implies the null space property of order t ∥zT ∥1 < ∥zT¯∥1for all z ∈ ker(*T AX*) and T ⊂ [p], |T| ≤ t. Since T is invertible, ker(*T AX*) = ker(AX), so that also AX satisfies the null space property. Remark A.15. *For Corollaries A.13 and A.14, we are particularly interested in applications where* x = S1 is the global ℓ0-minimizer of Ax = b in 14. Then the full column rank condition of AS *is automatically* satisfied by Lemma A.3. ## A.6 Model Tree: Theorem 4.2 Recall that κ(·) denotes the condition number. Theorem A.16 (Theorem 4.2 restated). Let A ∈ R m×n *and split* x ∈ R n *into* q = 2L, L ≥ 1 *components* S *given by* (15)*. If* 1. AS has full column rank. 2. On each tree node, we have implementations of *Scale*. 3. SolveL satisfies Assumption (A2) on the leaf nodes. 4. $$t\geq\log p^{2}+\log^{3}p,$$ 2 + log3p, 1 ≲ t ≲ √p (26) 5. $$1\stackrel{<}{\sim}t\stackrel{<}{\sim}\sqrt{p}$$ $$(26)$$ $$\operatorname*{min}_{J\in{\mathcal{J}}}{\frac{\|A_{J}\|_{F}^{2}}{\|A_{J}\|^{2}}}\geq t\kappa(A S)L+t\kappa(A S)\log{\frac{c q p}{t}}$$ $$(27)$$ t(27) for some generic constant c*, with probability at least* $$1-2\exp\left(-c\frac{1}{\kappa(A S)}\operatorname*{min}_{J\in\mathcal{J}}\frac{\|A_{.J}\|_{F}^{2}}{\|A_{.J}\|^{2}}\right)$$ there is a learnable binary tree of problem classes Ci, i ∈ I of depth L, given by matrices Xi *and sparsity* t so that 1. The root class C0 *contains* x. 2. *The parents are constructed from the children* Xi = Xchild(i)Wchild(i)*, where* Wchild(i) has t/t¯ = 2 sparse columns. 3. The columns of the leaf nodes' Xi are |J| *sparse.* 4. Each class' matrix Xi contains p columns, consisting of columns of S, i.e. pieces of x, in the leafs and sums thereof in the interior nodes. All other entries are random (dependent between classes) or zero. Proof. We build a matrix X according to (M1) - (M8) and use the extra matrix W in Corollary A.14 to build a tree out of it. In the following, we denote by p¯ the number of columns in X and by p the number of columns in the class matrices Xi that we are going to construct. By assumption, the support of x is partitioned into patches {J1*, . . . , J*q} = J for which we define a corresponding partition K = {K1*, . . . , K*q} of [¯p] with all Ki of equal size and Z by $$Z_{k l}:={\left\{\begin{array}{l l}{1}&{k=k_{l}}\\ {0}&{{\mathrm{else}}}\end{array}\right.}$$ for some choices kl ∈ Kl. The index sets J and K are naturally combined by their indices to obtain the pairs J K. With these choices, the matrix X is given by (M1) - (M8). X is non-zero only on blocks [J, K] *∈ J K*, which allows us to build a tree, whose nodes we index by i in a suitable index set I with leaf nodes i ∈ [q]. Each node i is associated with a subset Ki ⊂ [q] that is a union of two children Ki =Sj∈child(i) Kj , starting with leaf nodes Ki ∈ K, i ∈ [q], e.g. ![37_image_0.png](37_image_0.png) We now define matrices Xi on each node, starting with the leafs Xi:= X·Ki for leaf i and then inductively by joining the two child matrices $$X_{i}:=\left[X_{j_{1}}\quad X_{j_{2}}\right]\bar{W}_{i},$$ W¯i, W¯i =1 $$\bar{W}_{i}=\frac{1}{\sqrt{2}}\left[I_{K_{j_{1}},K_{j_{1}}}\right]$$ for child(i) = {j1, j2} and identity matrix I·,· on the given index sets. It is easy to join all W¯i matrices leading up to node i into a single isometry Wi so that $$X_{i}=\left[X_{1}\quad\cdots\quad X_{q}\right]W_{i}.$$ which implies $X_{\text{child}(i)}=\begin{bmatrix}X_{j_{1}}&X_{j_{2}}\end{bmatrix}=\begin{bmatrix}X_{1}&\cdots&X_{q}\end{bmatrix}W_{\text{child}(i)},$$W_{\text{child}(i)}=\begin{bmatrix}W_{j_{1}}&W_{j_{2}}\end{bmatrix},$ where again Wchild(i)is an isometry because the columns of Wj1 and Wj2 have non-overlapping support. By Lemma 3.8 the tree has at most 2 L+1 nodes and thus, if $$\min_{J\in{\cal J}}\frac{\|A_{J}\|_{F}^{2}}{\|A_{J}\|^{2}}\geq\frac{2tC_{\psi}^{4}}{ce^{2}}\kappa(AS)\log\frac{12e\bar{p}}{t\epsilon}\tag{28}$$ by Corollary A.14 and union bound over all tree nodes, with probability at least $$1-42^{L}\exp\left(-\frac{c}{2}\frac{\epsilon^{2}}{C_{\psi}^{4}}\frac{1}{\kappa(A S)}\operatorname*{min}_{J\in\mathcal{J}}\frac{\|A_{.J}\|_{F}^{2}}{\|A_{.J}\|^{2}}\right)$$ all nodes Xchild(i) satisfy the t-NSP. For this probability to be close to one, log 2L must be smaller than say half the exponent $$L\gtrsim\log2^{L}\leq-{\frac{c}{4}}{\frac{c^{2}}{C_{v}^{4}}}{\frac{1}{\kappa(A S)}}\min_{J\in\mathcal{J}}{\frac{\|A_{J}\|_{F}^{2}}{\|A_{J}\|^{2}}}\qquad\qquad\Leftrightarrow\qquad\qquad\min_{J\in\mathcal{J}}{\frac{\|A_{J}\|_{F}^{2}}{\|A_{J}\|^{2}}}\geq{\frac{t C_{v}^{4}}{\epsilon^{2}}}\kappa(A S)\log s.$$ Combining this with the NSP condition (28), if $$\operatorname*{min}_{J\in\mathcal{J}}\frac{\|A_{\cdot J}\|_{F}^{2}}{\|A_{\cdot J}\|^{2}}\geq\frac{t C_{\psi}^{4}}{\epsilon^{2}}\kappa(A S)L+\frac{t C_{\psi}^{4}}{\epsilon^{2}}\kappa(A S)\log\frac{12e\bar{p}}{t\epsilon},$$ with probability at least $$1-2\exp\left(-\frac{1}{2}\right)$$ $$\cdot\frac{c}{2}\frac{\epsilon^{2}}{C_{\psi}^{4}}\frac{1}{\kappa(A S)}\operatorname*{min}_{J\in\mathcal{J}}\frac{\left\|A_{.J}\right\|_{F}^{2}}{\left\|A_{.J}\right\|^{2}}\right)$$ all nodes Xchild(i) satisfy the t-NSP. This yields the statements in the proposition if we choose ϵ ∼ 1 and Cψ ∼ 1, without loss of generality. Let us verify the remaining properties of learnable trees. By construction, we have t/t¯ = 2 and γ = 2 and p¯ = qp. Since all random samples in X are absolutely continuous with respect to the Lebesgue measure, the probability of rank deficit Xiis zero. The remaining assumptions are given, with the exception of the first two inequalities in (A1). Renaming the number of training samples q, whose name is already used otherwise here, to r, they state that t ≥ c log r and *r > cp*2log2p and thus imply that t ≥ log p 2 + log3p, which is sufficient since the number of training samples r is at the disposal of the teacher. ## B Technical Supplements Lemma B.1. Let R ∈ R n×pbe a i.i.d. random matrix with mean zero entries of variance one. Then for any A ∈ R m×n and u ∈ R p *we have* $$\mathbb{E}\left[\|A R u\|^{2}\right]=\|A\|_{F}^{2}\|u\|^{2}.$$ Proof. Since E [RikRjl] = δij δkl, we have $$\mathbb{E}\left[\left\|ARu\right\|^{2}\right]=\mathbb{E}\left[\left\langle ARu,ARu\right\rangle\right]$$ $$=\mathbb{E}\left[\sum_{ijkl}u_{k}R_{ik}(A^{T}A)_{ij}R_{jl}u_{l}\right]$$ $$=\sum_{ijkl}(A^{T}A)_{ij}u_{k}u_{l}\mathbb{E}\left[R_{ik}R_{jl}\right]$$ $$=\sum_{ik}(A^{T}A)_{ii}u_{k}u_{k}$$ $$=\|A\|_{F}^{2}\|u\|^{2}.$$ Lemma B.2. Let A ∈ R m×n *be a matrix,* b ∈ R m *be a vector and* x ∈ R n *a i.i.d. random vector with* E [xj ] = 0, E-x 2 j = 1*. Then* $$\mathbb{E}\left[\left\|A x+b\right\|^{2}\right]=\left\|A\right\|_{F}^{2}+\left\|b\right\|^{2}.$$ Proof. Since b is not random, we have $$\mathbb{E}\left[\left\|A x+b\right\|^{2}\right]=\mathbb{E}\left[\left\|A x\right\|^{2}\right]+\left\|b\right\|^{2}=\left\|A\right\|_{F}^{2}+\left\|b\right\|^{2},$$ where in the last equality we have used Lemma B.1 with R n×1 matrix R = x and u = [1] ∈ R 1. The following result is a slight variation of Vershynin (2018), Theorem 6.3.2. Lemma B.3. Let A ∈ R m×n *be a matrix,* b ∈ R m *be a vector and* x ∈ R n *a i.i.d. random vector with* E [xj ] = 0, E-x 2 j = 1 and ∥x∥ψ2 ≤ Cψ*. Then* $$\operatorname*{Pr}\left[\left|\left|A x+b\right|\right|^{2}-\left\|A\right\|_{F}^{2}-\left\|b\right\|^{2}\right]\geq\epsilon\left(\left\|A\right\|_{F}^{2}+\left\|b\right\|^{2}\right)\right]$$ $$\square$$ $$\leq8\exp\left[-c\operatorname*{min}(\epsilon^{2},\epsilon){\frac{\|A\|_{F}^{2}+\|b\|^{2}}{C_{\psi}^{4}\|A\|^{2}}}\right].$$ Proof. We decompose $$\|Ax+b\|^{2}-\|A\|_{F}^{2}-\|b\|^{2}=\|Ax\|^{2}+2\left\langle Ax,b\right\rangle+\|b\|^{2}-\|A\|_{F}^{2}-\|b\|^{2}$$ $$=\left(\|Ax\|^{2}-\|A\|_{F}^{2}\right)+2\left\langle Ax,b\right\rangle$$ so that $$\Pr\left[\pm\left(\|Ax+b\|^{2}-\|A\|_{F}^{2}-\|b\|^{2}\right)\geq\epsilon\left(\|A\|_{F}^{2}+\|b\|^{2}\right)\right]$$ $$\leq\Pr\left[\pm\left(\|Ax\|^{2}-\|A\|_{F}^{2}\right)\pm2\left(Ax,b\right)\geq\epsilon\left(\|A\|_{F}^{2}+\|b\|^{2}\right)\right]$$ $$\leq\Pr\left[\pm\left(\|Ax\|^{2}-\|A\|_{F}^{2}\right)\geq\epsilon\left|A\|_{F}^{2}\right]+\Pr\left[\pm2\left(Ax,b\right)\geq\epsilon\left|b\right|^{2}\right].$$ It remains to estimate the two probabilities on the right hand side. Since E-x 2 j = 1, we have Cψ ≳ 1 and thus from the proof of Theorem 6.3.2 in Vershynin (2018), we have $$\Pr\left[\pm\left(\|A x\|^{2}-\|A\|_{F}^{2}\right)\geq\epsilon\|A\|_{F}^{2}\right]\leq2\exp\left[-c\min(\epsilon^{2},\epsilon)\frac{\|A\|_{F}^{2}}{C_{\psi}^{4}\|A\|^{2}}\right].$$ and from Hoeffding's inequality, we have $$\mathrm{Pr}\left[\pm2\left\langle A x,b\right\rangle\geq\epsilon\|b\|^{2}\right]\leq2\exp\left[-\alpha\epsilon^{2}\frac{\|b\|^{4}}{C_{\psi}^{2}\|A^{T}b\|^{2}}\right]\leq2\exp\left[-\alpha\epsilon^{2}\frac{\|b\|^{2}}{C_{\psi}^{4}\|A^{T}\|^{2}}\right].$$ The following result is a slight variation of Vershynin (2018), Theorem 6.3.2. Lemma B.4. Let A ∈ R m×n *be a matrix,* b ∈ R m *be a vector and* x ∈ R n *a i.i.d. random vector with* E [xj ] = 0, E-x 2 j = 1 and ∥x∥ψ2 ≤ Cψ*. Then* $$\left\|\|A x+b\|-\left(\|A\|_{F}^{2}+\|b\|^{2}\right)^{1/2}\right\|_{\psi_{2}}\leq C C_{\psi}^{2}\left\|A\right\|$$ for some constant C ≥ 0. Proof. We use a standard argument, e.g. from the proof of Theorem 6.3.2 in Vershynin (2018). An elementary computation shows that for δ 2 = min(ϵ 2, ϵ) and any *a, b* ∈ R, we have $$|a-b|\geq\delta b,\quad\Rightarrow\quad|a^{2}-b^{2}|\geq\epsilon b^{2}.$$ With $a=\left\|Ax+b\right\|$ and $b=\left(\left\|A\right\|_F^2+\left\|b\right\|^2\right)^{1/2}$ and Lemma B.3, this implies . $$\mathrm{Pr}\left[\left|\left\|A x+b\right\|-\left(\left\|A\right\|_{F}^{2}-\left\|b\right\|^{2}\right)^{1/2}\right|\geq\delta\left(\left\|A\right\|_{F}^{2}+\left\|b\right\|^{2}\right)^{1/2}\right]$$ $\square$ $$\leq8\exp\left[-c\delta^{2}\frac{\|A\|_{F}^{2}+\|b\|^{2}}{C_{\psi}^{4}\|A\|^{2}}\right].$$ This shows Subgaussian concentration and thus the ψ2-norm of the lemma. ## C Implementation Details Details for Curriculum I in Section 5.3.1 Proof of Lemma 5.3. The curriculum satisfies (M1) - (M8) with the index sets i $\downarrow$ D. $\star\star$ . , . . . , n − |J|*, . . . , n* $$\mathbf{\Phi},{\underbrace{\mathbf{n}-|J|,\ldots,n}_{J_{q}}},$$ i,h1*, . . . ,* |K| | {z } $$\overline{{{k_{1}}}}$$ * [10] M. C. $$,p-|K|,$$ , . . . , p − |K|*, . . . , p* | {z } $$\overline{{k_{e}}}$$ $1\star\star\star$. h1*, . . . ,* |J| | {z } $\pi$ $\mathcal{L}$: . and Z =-e1 e|K|+1 e2|K|+1 *. . .*with unit basis vectors ek for the first index in each block Ki. Hence, it is a special case of the construction in the proof of Theorem 4.2 and all conclusions of the theorem are applicable. ## Details For The Implementation In Section 5.4: 1. The teacher provides a left preconditioned matrix T A in every tree node. This allows RIP instead of weaker NSP conditions, as in Corollary A.13 versus Corollary A.14. For Curriculum II T is uniform for all tree nodes, for Curriculum III, it is computed individually for each node. 2. Unlike (21) in the split X := SZT + DR(I − ZZT) between deterministic and random part, we use no balancing D in the experiments. 3. As a result, all tree node Xi have entries in {−1, 0, 1} so that we implement Scale by snapping to these discrete values. ## D **Glossary** Algorithms Dimensions A ∈ R m×n X ∈ R n×p Z ∈ R p×q Sparsities s Sparsity of the columns of X. t (Expected) Sparsity of the columns of Z for problems class C. t¯ (Expected) Sparsity of the columns of Z for easy problems in class Ceasy ⊂ C. Tree | I | Indices of tree nodes, Section 3.1. | |-----------|---------------------------------------| | child(i) | Children of node i, Section 3.1. | | Wchild(i) | (13). | | Xchild(i) | (13). | Solve(*A, b*) ℓ0 minimizer for easy problems, Section 2.2. SparseFactor(Y ) Sparse matrix factorization Y = XZ, Section 2.2. Scale Rescaling after matrix factorization, Section 2.2, Definition 3.4. Train(A, b1*, . . . , b*q) Find class X from samples, Algorithm 1. SolveL ℓ0 minimizer for easy problems in leaf nodes, Definition 3.4. TreeTrain(Ci) Find Xi for all tree nodes i from samples, Algorithm 2.
Review 1: Summary: In this work, the authors focus on addressing the problem of computing minimally sparse solutions of under-determined linear systems, which is defined as finding the solution $x\in\mathbb{R}^n$ that has the minimum number of non-zero elements, subject to the constraint that the system of equations $Ax = b$ is satisfied. Specifically, the authors consider tractable subclasses of this problem, where the sparsity of the solution is imposed through the prior knowledge that $x = Xz$, where $z$ is sparse and $X$ is known from a curriculum of easy samples and knowledge condensation at each tree node. This allows the authors to address the problem with mild assumptions on $A$, without requiring $A$ to satisfy properties like RIP or NSP. Strengths and Weaknesses: Strengths: The idea of using the prior knowledge $x=Xz$ to find minimally sparse solutions of under-determined linear systems without requiring RIP or NSP on the matrix $A$ is of interest. Weaknesses: I am not familiar with some parts of this work (e.g., about SAT solving) and did not check the technical results carefully. As far as I can tell, the following are the weaknesses of this work: 1. The motivation and main contributions are not clearly presented in the current submission. The practical applications of the considered setup and the proposed algorithms are not apparent from the submission. Additionally, all the presented theorems, specifically Theorems 2.3 and 2.4, are minor modifications of Theorem 4.2 in Welper (2021). The abstract is too vague and does not clearly mention the studied problem and the main contributions of this submission. 2. The writing can be improved by providing references for methods like SOLVE, SPARSEFACTOR, instead of letting the reader to search these terms back and forth. In addition, the authors should explicitly mention that (A1) to (A4) are assumptions and briefly discuss why they are reasonable. In (A4), the Bernoulli-Subgaussian constant should be denoted with brackets for clarity. 3. Typos: p2, $N \ne NP$ should be $P \ne NP$; "form" should be "from"; p9, Remark 3.7, "The results states"; p10, Remark 4.1, "but is is"; p12, Section 4.2, "by the following Lemma" should be "by the following lemma"; Remark 4.5, "form" should be "from". Requested Changes: See the Weaknesses above. Broader Impact Concerns: NA ================================================== Review 2: Summary: - This work considers the problem of learning minimally sparse solutions to underdetermined linear systems Ax=b where x is sparse. Usually, one needs certain assumptions on A like RIP property. This work considers alternative set of tractable problems which can be adapted to new situations based on prior knowledge. - This work also looks at variations of 3-SAT with weaker assumptions. Strengths and Weaknesses: - The paper looks at this interesting idea of learning a series of l0 minimization problems of increasing difficulty. - The main weakness in my opinion is that the paper is written in a way which is very hard to understand currently. The setting and the contributions are not clear. Requested Changes: - The authors should clearly explain what the contributions of this work are. The idea of learning problems of increasing difficulty is already introduced in a previous work as mentioned in the paper. Do the authors extend this setting to the problem of sparse linear system solving? - Can the authors clearly explain the setting in the beginning of the paper? - How do the authors get around the NP hardness of the problem? - On page1, how do the authors reach equation 2? Why does sparsity of x imply sparsity of z? - In section 2.2, it is not clear to the me what the authors mean by easy samples and hard samples? if x is uknown, and X is unknown, how are easy x chosen to generate the easy data? Broader Impact Concerns: No ================================================== Review 3: Summary: The authors study the classical problem of sparse recovery: $\min ||x||_0: Ax=b$. Sparse recovery (SR) is an important NP-hard problem that appears in any settings in practice and has a great deal of research into tractable subclasses such as matrices A satisfying the restricted isometry property and null-space properties. In this work, the authors take a new angle on tractable subclasses for SR by considering constrained solution spaces, in particular under the assumption that the solution x is of the form $x=Xz$, i.e. a linear combination of the columns of some pre-determined "knowledge matrix" X. The main contribution of the authors is the introduction of a new model to learn the knowledge matrix X through a hierarchical "curriculum" aimed to capture how humans learn to solve hard problems through the acquirement of expert knowledge. The basic idea is to divide into a tree of easier (less sparse) sub-problems C_i with constraints X_i such that each X_i is a linear sum over its children, (including the root X). The authors argue that if the X_i are sufficiently random and the learner is given "random" questions from the easier subclasses (akin to say homework problems), it is possible to solve the questions and reverse engineer the constraints X_i, which are then used to construct X itself. Once X is learned, the authors give conditions under which AX satisfies the nullspace condition and one can hence solve $\min ||z||_0: AXz=b$. Finally, the authors prove that such curriculum can indeed be constructed for any fixed solution x under weak constraints on the matrix A. They then look at a set of problems inspired by the classical NP-hard 1-in-3-SAT, giving several example curricula for fixed instances. Finally the authors run some heuristic numerical experiments to validate that their model can be implemented practically at a small scale. Strengths and Weaknesses: The paper’s main strength is in the introduction a new type of mathematical model for sparse recovery that mimics some structures used in human learning and pedagogy. Sparse recovery is an interesting and classical problem, and the new model may be of interest to some portion of TMLR readership. The formal math I was able to check seemed sound (see below for questions on statements I was not able to check). The paper’s main weakness is in its writing and lack of mathematical clarity: the main body in particular fails to formally define many relevant quantities and notions, and at times it is hard to follow what exactly is being claimed or proven at all. In general, the mathematical model and theorems need to be more strictly formalized and clearly stated in every section for the paper to be considered in publishable form for a theoretical work. EDIT: The author has largely addressed the above issues, and I have changed my recommendation accordingly. Requested Changes: *In order to recommend acceptance, the following issues must be addressed:* The authors need to formally develop the mathematical framework that they are working in, and state their results within formal theorem statements in the model and not as discussion and remarks in free-form exposition. It is often unclear what are assumptions and what are part of the general mathematical framework, and what problem is being solved. For instance, unless I am misunderstanding, it does not seem to be the case that the authors prove any new instances are actually tractable within the standard model of computation, whereas one might get this impression from reading the introduction. This is fine, but the entire main body needs to be re-written taking this into account and clarifying what is actually proved. There are many works in learning theory on "machine teaching" and models of computation that use teachers (see e.g. works on teaching dimension, recursive teaching dimension, etc...) . Please include reference to such material and discuss how the model in this paper relates to prior work in the learning literature. On a related note, the first sentence of the paper seems misleading re: 3-SAT. As far as I can tell "efficiently solvable subclasses of 3-SAT" are not considered in this paper. Remark 3.1 is a good example of something that should appear in a more formal theorem statement somewhere. Do not include extra assumptions you need for results in a separate remark after the fact (unless it is clear it is extraneous to the main theorem statement and an afterthought). Please write a formal theorem statement with your assumptions and what you prove in the formal model of learning you introduce. In Section 4, what is the implication that the curriculum I satisfies (M1)-(M8)? Please give formal statements of what the guarantees are. In Section 5.2, why does considering a "larger" class of A help? Clarify what is actually being proved in this section, again there should be a theorem statement not just exposition — what part of the family is random in "mostly random signed problems?" Isn’t it the curricula that are random, not the problems? At the bottom of page 4, it is mentioned that “more structure is required to ensure x is indeed the \ell_0 optimizer.” The authors need to spend substantially more time justifying this point. Whether or not the particular methods laid out are indeed solving for the optimizer should be made clear formally in each statement, and the assumptions needed for this should be laid out clearly in theorem statements and in the proof. As the paper stands, it is never really clear when this holds. Does Remark 5.2 imply that in the 1-in-3 SAT setup Xz is always the true ell_0 minimizer for any sparse solution z? This should be stated within a formal theorem. Same for all parts in the main body. In (M5) should supp(X.,l) be SZ^T? In general I found (M5) confusing, and much of the notation seems to just be undefined (e.g. what is K_I?). Please use { } or ( ), not [ ] for tuples. Other missing definitions: "T-sparse matrix", nu in Def 2.2, epsilon in (A4), Xchild(i) and Wchild(i), all parameters in Def 3.5, K In Lemma A.16 (Is this really a partition? Covers all of \bar{p}?), “pre-conditioner T” in the main body (Should this appear in any of the formal theorem statements?), “ I_{K_{j_i},K_{j_i}}.” In (A1) C_easy is defined by pairs, whereas before it was defined by a constrained solution-space. Please formalize your treatment of these notions and stick to a consistent notation. The probabilistic quantifier on 4.2 seems wrong as stated. Is it really that the curriculum exists whp, or that whp over X you get the desired curriculum? Did A get dropped at the top of page 28? Why do the cross terms in the first equality die? *Below are several less serious issues:* 1. Typos: “notable -> notably”, “heads on -> head on”, “N \neq NP,” “have build up -> have built up”,“form -> from”, “less columns -> fewer columns”, “ca ≤ b ≤ Cb -> ca ≤ b ≤ Ca”, “exits -> exists”, “is is”, “an can”, “leave -> leaf”, “If this can be avoided” -> “Whether this can be avoided”, 2. Top of page 24: should w be 2t sparse?, 3. Should “AXchild(i) satisfies the √ 2t-RIP” be just with high probability? 4. Top of page 31, what is tau? Should be eps? 5. Add a figure diagram for the image on the second page. 6. Page 8: says C_easy is defined as before, but then just uses sparsity, while (A1) relied on being columns of a subgaussian? 7. Is there supposed to be a "related work" indication between pages 2 and 3? 8. The authors spend a great deal of time motivating their model by human learning, but the majority of the paper relies on random constructions within the curriculum which does not particularly match the motivation. Either cut down on the emphasis on the human side and focus on why the model is of mathematical interest, or add more justification for this difference. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: Broadly, it is agreed that improving L0 recovery feasibility is an interesting problem. However, to varying degrees, the reviewers all had significant difficulty in grasping the main claims, understanding the associated assumptions, etc. These concerns have been alleviated to some extent in the revision, but even at this stage, one reviewer finds the paper too unclear to suggest acceptance, one feels that the decision could equally go either way, and even the most positive recommendation still has very non-minor reservations. Based on the reviewer comments, discussion, and my own reading, I will try to summarize why such concerns remain. **Problem setup / motivation:** There remains significant difficulty in seeing where this *specific* setting would be adopted, either in other theoretical studies or in practice. Even the very first step in Eq. (3) could be highly suboptimal (e.g., because Xz can be much less sparse than z – see the *multiplication* of two terms in Remark 2.1). Many assumptions are introduced but some are hard to grasp or understand the implications of (e.g., the scaling laws in (T2), the relations between different sparsity levels in (T4)). When faced with these challenges even the problem basics, it becomes even more difficult to accept certain claims that are made in passing, e.g., how learning must be done with only (A,b) (and not x) “to be plausible”, how restrictive a statement like “ideally have sparsity O(1)” is, how restrictive/reasonable the linearity relation in (T3) is, etc. Some evidence for usefulness is given via possible utility in SAT solving. However, SAT solving is a *very* active research area, and there seems to be a lack of evidence that the tools presented can actually benefit the wide array of existing tools available for SAT solving (e.g., a clear corollary relating to SAT, or convincing numerical evidence). To be clear, the authors don't explicitly claim such benefits, but the readers should still be given a better idea of what this part of the paper is providing to the general literature. Finally, some concerns are raised about the degree of significance over the existing work of Welper (2021), though this has at least been made clearer, and I am not taking it as a reason for rejection in itself. **Paper flow and correctness:** The paper can be challenging to navigate and verify certain claims of, e.g.: - The discussion after (A2) refers to Remark 2.1, which in turn refers to Lemmas 5.4 and 5.5, whose discussion highlights that one of the main theorems (Theorem 4.2) doesn’t actually necessarily hold in these cases. Several other parts then refer back to Remark 2.1 which in itself wasn’t so easy to grasp. These sorts of things can cause major confusion. - Eq. (21) and Appendix C refer back to Theorem 4.2, but the claims being made are not readily visible there. - It is unclear whether the paragraph before Lemma 5.5 (including its cross-reference to Remark 5.2) is meant to establish the lemma’s correctness, as no other proof is given. - Lemma 5.3 is a formal statement that refers to an informal description in Figure 2, with the preceding text not entirely helping (e.g., “selected rows” is vague). - Some terminology/phrasing isn’t clear enough, e.g., what “adaptable” means, how to interpret “columns are spread into the leaf classes”, definition of “block column”, etc. - The purpose of certain parts is less clear, e.g., ‘Human Learning’ on p3 is an abrupt change and may be too high-level/superficial; ‘Greedy Search’ on p4 is never returned to; clarity of Remark 4.1; etc. Please note that this is far from a complete list, but rather just examples of difficulties faced. These sorts of difficulties appear to have persisted throughout the entire paper. ==================================================
# Incremental Spatial And Spectral Learning Of Neural Operators For Solving Large-Scale Pdes Anonymous authors Paper under double-blind review ## Abstract Fourier Neural Operators (FNO) offer a principled approach to solving challenging partial differential equations (PDE) such as turbulent flows. At the core of FNO is a spectral layer that leverages a discretization-convergent representation in the Fourier domain, and learns weights over a fixed set of frequencies. However, training FNO presents two significant challenges, particularly in large-scale, high-resolution applications: (i) Computing Fourier transform on high-resolution inputs is computationally intensive but necessary since finescale details are needed for solving many PDEs, such as fluid flows, (ii) selecting the relevant set of frequencies in the spectral layers is challenging, and too many modes can lead to overfitting, while too few can lead to underfitting. To address these issues, we introduce the *Incremental Fourier Neural Operator* (iFNO), which progressively increases both the number of frequency modes used by the model as well as the resolution of the training data. We empirically show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets. Our method demonstrates a 38% lower testing error, using 20% fewer frequency modes compared to the existing FNO, while also achieving up to 46% faster training and a 2.8x reduction in model size. ## 1 Introduction Recently, deep learning has shown promise in solving partial differential equations (PDEs) significantly faster than traditional numerical methods in many domains. Among them, *Fourier neural operator* (FNO), proposed by Li et al. (a), is a family of neural operators that achieves state-of-the-art performance in solving PDEs Azizzadenesheli et al. (2024), in applications modeling turbulent flows, weather forecasting Pathak et al. (2022), material plasticity Liu et al. (2022), and carbon dioxide storage Wen et al. (2022). FNO is specifically designed to learn mesh-independent solution operators for PDEs. Unlike traditional neural networks that learn point-wise mappings, FNO learns a functional mapping between infinite-dimensional function spaces. It approximates the action of the solution operator on any input function, allowing it to generalize across different discretizations and resolutions. Unlike conventional neural networks, FNO possesses a property termed *discretization-convergence*, meaning it can output predictions at different resolutions, and those predictions converge to a unique solution upon mesh refinement. The discretization-convergence property relies on kernel integration, which, in the case of FNO, is realized using the Fourier transform. A series of such spectral layers, along with normalization layers and non-linear activations, compose the core of the FNO architecture. For many PDE systems, like fluid flows, there is a decay of energy with frequency, i.e., low-frequency components usually have larger magnitudes than high-frequency ones. Therefore, as an inductive bias and a regularization procedure to avoid overfitting, FNO consists of a frequency truncation function TK in each layer that only allows the lowest K Fourier modes to propagate to the next layer, and the other modes are zeroed out, as shown in Figure 1. It is thus crucial to select the correct number of frequency modes K in various spectral layers in FNO to achieve optimal generalization performance. Too few number of modes leads to underfitting, while too many ![1_image_0.png](1_image_0.png) Figure 1: **Top: Fourier convolution operator in FNO.** After the Fourier transform F, the layer first truncates the full set of frequencies to the K lowest ones using a dynamically set truncation before applying a learnable linear transformation (*blue*) and finally mapping the frequencies back to the linear space using the inverse FFT F −1. The previous method (Li et al., a) picks a fixed K throughout the entire training. The FNO architecture does not include the incremental algorithm and resolution. **Bottom: full iFNO architecture.** Our model takes as input functions at different resolutions (discretizations). The operator consists of a lifting layer, followed by a series of iFNO blocks. The loss is used by our method to dynamically update the input resolution and the number of modes K in the Spectral Convolutions. The incremental algorithm is detailed in section 4 and algorithm 1. can result in overfitting. Further, we need to train on high-enough resolution data to capture all the physical dynamics and approximate the ground-truth solution operator faithfully. Our approach: We propose the Incremental Fourier neural operator (iFNO) to address the above challenges. iFNO incrementally and automatically increases the number of frequency modes K and resolution R during training. It leverages the explained ratio as a metric to dynamically increase the number of frequency modes. The explained ratio characterizes the amount of information in the underlying spectrum that the current set of modes can explain. A small explained ratio (i.e., below a fixed threshold) indicates that the current modes are insufficient to represent the target spectrum and that more high-frequency modes should be added, as illustrated in Figure 1. Simultaneously, iFNO also starts by first training on low-resolution data and progressively increases the resolution, akin to the principle of curriculum learning, thereby increasing training efficiency. ## Our Contributions Are Summarized As Follows: 1. We introduce the Incremental Fourier Neural Operator (iFNO), a novel approach that progressively increases the number of spectral coefficients and the data resolution used during training. These can be applied jointly or individually. 2. We perform a thorough empirical validation across a range of partial differential equation (PDE) problems, including Burgers, Darcy Flow, the Navier-Stokes Equation, and the Kolmogorov Flow. 3. Using our proposed iFNO, we demonstrate upto a 38% lower testing error by acting as a dynamical spectral regularization scheme, using 20% fewer frequency modes compared to the existing FNO, while also achieving a 46% faster performance and a 2.8x reduction in model size, enabling larger scale simulations. ## 2 Related Works The applications of neural networks on partial differential equations have a rich history (Lagaris et al., 1998; Dissanayake & Phan-Thien, 1994). These deep learning methods can be classified into three categories: (1) ML-enhanced numerical solvers such as learned finite element, finite difference, and multigrid solvers (Kochkov et al., 2021; Pathak et al., 2021; Greenfeld et al., 2019) ; (2) Network network-based solvers such as Physics-Informed Neural Networks (PINNs), Deep Galerkin Method, and Deep Ritz Method (Raissi et al., 2019; Sirignano & Spiliopoulos, 2018; Weinan & Yu, 2018); and (3) the data-driven surrogate models such as (Guo et al., 2016; Zhu & Zabaras, 2018; Bhatnagar et al., 2019). Among them, the machine learning surrogate models directly parameterized the target mapping based on the dataset. They do not require a priori knowledge of the governing system and usually enjoy a light-fast inference speed. Recently, a novel class of surrogate models called neural operators was developed (Li et al., b; Lu et al., 2021; Kissas et al., 2022). Neural operators parameterize the PDEs' solution operator in the function spaces and leverage its mathematical structures in their architectures. Consequentially, neural operators usually have a better empirical performance (Takamoto et al., 2022) and theoretical guarantees Kovachki et al. (2021) combined with conventional deep learning models. The concept of implicit spectral bias was first proposed by Rahaman et al. as a possible explanation for the generalization capabilities of deep neural networks. There have been various results towards theoretically understanding this phenomenon Cao et al.; Basri et al.. Fridovich-Keil et al. further propose methodologies to measure the implicit spectral bias on practical image classification tasks. Despite a wealth of theoretical results and empirical observations, there has been no prior research connecting the implicit spectral bias with FNO to explain its good generalization across different resolutions. The notion of incremental learning has previously been applied to the training of PINNs Krishnapriyan et al. (2021); Huang & Alkhalifah. However, these focus on a single PDE instance and apply incremental learning to the complexity of the underlying PDE, e.g., starting with low-frequency wavefields first and using high-frequency ones later Huang & Alkhalifah. Our work is orthogonal to this direction, and rather than modifying the underlying PDE, we directly incrementally increase the model's capacity and the data's resolution. ## 3 Fourier Neural Operator Fourier Neural Operators belong to the family of neural operators, which are formulated as a generalization of standard deep neural networks to operator setting (Li et al., b). A neural operator learns a mapping between two infinite dimensional spaces from a finite collection of observed input-output pairs. Let D be a bounded, open set and A and U be separable Banach spaces of functions of inputs and outputs. We want to learn a neural operator Gθ : A × θ → U that maps any initial condition a ∈ A to its solution u ∈ U. The neural operator Gθ composes linear integral operator K with pointwise non-linear activation function σ to approximate highly non-linear operators. Definition 3.1 (Neural operator). The neural operator Gθ is defined as follows: Gθ := Q ◦ (WL + KL) ◦ · · · ◦ σ (W1 + K1) ◦ P, where P, Q are the pointwise neural networks that encode the lower dimension function into higher dimensional space and decode the higher dimension function back to the lower dimensional space. The model stack L layers of σ (Wl + Kl) where Wl are pointwise linear operators (matrices), Kl are integral kernel operators, and σ are fixed activation functions. The parameters θ consist of all the parameters in P, Q, Wl, Kl. Li et al. (a) proposes FNO that adopts a convolution operator for K as shown in Figure 1, which obtains state-of-the-art results for solving PDE problems. Definition 3.2 (Fourier convolution operator). Define the Fourier convolution operator K as follows: (Kvt) (x) = F −1(R · TK (Fvt)) (x) ∀x ∈ D, where F and F −1 are the Fourier transform and its inverse, R is a learnable transformation and TK is a fixed truncation that restricts the input to lowest K Fourier modes. FNO is discretization-convergent, such that the model can produce a consistent solution for any query points, potentially not in the training grid. In other words, FNO can be trained on low resolution and evaluated at high resolution. This property is highly desirable as it allows a transfer of solutions between different grid resolutions and discretizations. ![3_image_0.png](3_image_0.png) Figure 2: FNO with higher frequency modes captures smaller-scale structures in the fluid. Prediction of the Darcy flow by FNO with K = 10 and K = 90. Insufficient modes lead to overly strong dumping and fail to capture the finer details in Darcy flow. Frequency truncation. To ensure discretization-convergence, FNO truncates the Fourier series Fˆv at a maximal number of modes K using TK. In this case, for any discrete frequency mode k ∈ {1*, ..., K*}, we have Fˆv(k) ∈ C C and R(k) ∈ C C×C, where C is the channel dimension of the input v. The size of linear parameterization R depends on K. For example, R has the shape of Kd×C×C for a d-dimensional problem. We also denote the modes 1*, ..., K* as the effective frequency modes. In the standard FNO, Li et al. (a) view K as an additional hyperparameter to be tuned for each problem. Frequency strength. As R(k) reflects how the frequency mode k is transformed, we can measure the strength of transforming the frequency mode k by the square of the Frobenius norm of R(k), such that: $$S_{k}=\sum_{i}^{\mathrm{C}}\sum_{j}^{\mathrm{C}}|R_{k,i,j}|^{2},$$ $$(1)$$ 2, (1) where Sk denotes the strength of the k-th frequency mode. A smaller Sk indicates that the k-th frequency mode is less important for the output. Implicit spectral bias in neural networks It has been well studied that neural networks implicitly learn low-frequency components first, and then learn high-frequency components in a later stage Rahaman et al.; Xu et al. (2019). This phenomenon is known as implicit spectral bias, and it helps explain the excellent generalization ability of overparameterized neural networks. Since FNO performs linear transformation R in the frequency domain, the frequency strength Sk of each frequency mode i is directly related to the spectrum of the resulting model. As FNO is a neural network and trained by first-order learning algorithms, it follows the implicit spectral bias such that the lower frequency modes have larger strength in R. This explains why FNO chooses to preserve a set containing the lowest frequency modes, instead of any arbitrary subset of frequencies. Low-frequency components in PDEs Learning frequency modes is important as large-scale, low-frequency components usually have larger magnitudes than small-scale, highfrequency components in PDEs. For dissipative systems (with diffusion terms) such as the viscous Burgers' equation and incompressible Navier-Stokes equation, the energy cascade involves the transfer of energy from large scales of motion to the small scales, which leads to the Kolmogorov spectrum with the slope of k −5/3in the inverse cascade range (Figure 3), and k −3 in the direct-cascade range (Boffetta et al., 2012). The smallest scales in turbulent flow is called the Kolmogorov microscales. Therefore, one should choose the model frequencies with respect to the underlying equation frequencies when designing ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) Figure 3: The spectrum of Kolmogorov flow decays exponentially with the Kolmogorov scale of 5/3 in the inverse cascade range. 4 machine learning models. It would be a challenge to select the correct model frequencies in advance, without knowing the properties of the underlying PDEs. Algorithm 1 *Incremental Learning of FNO* Input: Initial modes K0, mode buffers b, threshold α, epoch e to increase resolution, downsample factors r, data resolution R*data* Initialize: Randomly initialize R0 ∈ C (K0+b)×C×C , set initial resolution Rcurr = R*data*/r1 For iteration t in 1*, ..., T*: compute st = [S1, S2*, ..., S*Kt−1+b] {compute the frequency strength for each mode} Kt ← Kt−1 While g(Kt, st) < α: {find Kt that explains at least α of st} Kt ← Kt + 1 construct Rt ∈ C (Kt+b)×C×C where Rt[0 : (Kt−1 + b), :, :] = Rt−1 and initialize the rest randomly If (t mod e = 0) Set next resolution Rcurr = R*data*/ri+1 Train FNO model with (Kt, R*curr*) ## 4 Incremental Fourier Neural Operator Although FNO has shown impressive performance in solving PDEs, the training remains challenging. In this section, we discuss the main difficulties in training FNOs and propose the Incremental Fourier Neural Operator (iFNO) to address the challenges. ## 4.1 Incremental Frequency Fno While frequency truncation TK ensures the discretization-invariance property of FNO, it is still challenging to select an appropriate number of effective modes K, as it is task-dependent and requires careful hyperparameter tuning. Inappropriate selection of K can lead to severe performance degradation. Setting K too small will result in too few frequency modes, such that it doesn't approximate the solution operator well, resulting in underfitting. As shown in Figure 2, in the prediction of Darcy flow, FNO requires higher frequency modes to capture small-scale structures in the fluid for better performance. On the other hand, a larger K with too many effective frequency modes may encourage FNO to interpolate the noise in the high-frequency components, leading to overfitting. Previously, people have chosen the number of effective modes in FNO by estimating the underlying data frequencies or using heuristics borrowed from pseudo-spectral solvers such as the 2/3 dealiaising rule (Hou & Li, 2007), i.e., picking the max Fourier modes as 2/3 of the training data resolution. However, it's usually not easy to estimate the underlying frequency, and the resolution of data may vary during training and testing. These problems lead to hand-tuning or grid-searching the optimal effective modes, which can be expensive and wasteful. We propose the *incremental Fourier Neural Operator (iFNO)* that starts with a "small" FNO model with limited low-frequency modes and gradually increases K and resolution R based on the training progress or frequency evolution. One of the key challenges in training iFNO is determining an appropriate time to proceed with the increase of K. We consider two variants of iFNO (method), which means we only increase the frequency modes based on the method. iFNO (Loss): A common practice is to use the training progress as a proxy to determine the time, as seen in previous work such as Liu et al.. Specifically, we let the algorithm increase K adaptively only when there is a decrease in training loss between N consecutive epochs that is lower than a threshold ϵ. We denote it as the iFNO (loss) algorithm. Despite its simplicity, this heuristic has shown to be effective in practice. However, finding an appropriate ϵ and N is challenging as it depends on the specific problem and the loss function. To address this issue, we propose a novel method to determine the expanding time by directly considering the frequency evolution in the parameter space. ![5_image_0.png](5_image_0.png) $$\left(2\right)$$ $$(3)$$ Figure 4: Frequency evolution of first and fourth Fourier convolution operators in FNO and iFNO during the training on Burgers' equation. We visualize FNO on the left figure and iFNO on the right figure for each layer. Each frequency strength Sk is visualized across training. FNO is tuned with the optimal weight decay strength. iFNO (Freq): As shown in Equation 1, i-th frequency strength Si reflects the importance of the i-th frequency mode in the model. The entire spectrum of the transformation R can be represented by the collection of p frequency strengths (when p is sufficiently large): $$s_{p}=[S_{1},S_{2},...,S_{p}],$$ sp = [S1, S2*, ..., S*p], (2) where sp ∈ R p. When applying the frequency truncation TK (*K < p*), we can measure how much the lowest K frequency modes explain the total spectrum by computing the explanation ratio g(*K, s*p). $$g(K,s_{p})={\frac{\sum_{k=1}^{K}S_{k}}{\sum_{k=1}^{p}S_{k}}}.$$ . (3) We define a threshold α, which is used to determine if the current modes can well explain the underlying spectrum. If g(K, sp) > α, it indicates that the current modes are sufficient to capture the important information in the spectrum, and thus no additional modes need to be included in FNO. Otherwise, more frequency modes will be added into the model until g(K, sp) > α is satisfied. Although sp reflects the entire spectrum when p is sufficiently large, maintaining a large p is unnecessary and computationally expensive. Instead, we only maintain a truncated spectrum sK+b ∈ R K+b, which consists of K effective modes and b buffer modes. The buffer modes contain all mode candidacies that potentially would be included as effective modes in the later stage of training. In practice, at iteration t with Kt effective modes, we construct a transformation Rt ∈ C (Kt+b)×C×C for 1D problems. After updating Rt at iteration t + 1, we will find Kt+1 that satisfies g(Kt+1, st) > α. In this way, we can gradually include more high-frequency modes when their evolution becomes more significant, as illustrated in Figure 1. We denote this variant as iFNO (Freq). We also describe how iFNO (Freq) is trained for solving 1D problems in Algorithm 1. iFNO is extremely efficient as the training only requires a part of frequency modes. In practice, it also converges to a model requiring fewer modes than the baseline FNO without losing any performance, making the model efficient for inference. Notably, the cost of computing the frequency strength and explanation ratio is negligible compared to the total training cost. It can be further reduced by performing the determination every T iterations, which does not affect the performance. ## 4.2 Regularization In Fno Training As discussed in the previous section, learning low-frequency modes is essential for the successful training of FNO. However, the standard regularization techniques used in training neural networks are not capable of explicitly promoting the learning of low-frequency modes. As shown in Figure 4, although FNO with well-tuned weight decay strength successfully regularizes the high-frequency modes, its regularization can be too strong for certain layers (such as the fourth operator in the figure). This leads to the instability of the frequency evolution, which damages the generalization performance, as shown in Table 2. On the other hand, insufficient decay strength cannot regularize the high-frequency modes and can even lead to overfitting the associated high-frequency noise. Dynamic Spectral Regularization: However, we find that iFNO can be served as a dynamic spectral regularization process. As shown in Figure 4, iFNO properly regularizes the frequency evolution without causing instability or overfitting high-frequency modes. As we will present in the experiment section, the dynamic spectral regularization gives iFNO a significant generalization improvement. ## 4.3 Incremental Resolution Fno Finally, we introduce the incremental algorithm on the resolution part of our approach. Computational cost in training large-scale FNO Training the FNO on large-scale, high-resolution data to solve PDEs with high-frequency information, a large effective mode number K is necessary to capture the solution operator. However, this requires constructing a large-scale transformation R, which dominates the computational cost of all the operations in FNO. This makes the training and inference more computationally expensive. For example, the Forecastnet (Pathak et al., 2022) for global weather forecast based on FNO is trained on 64 NVIDIA® Tesla® A100 GPUs, only covering a few key variables. To simulate all the weather variables, it potentially needs the parallel FNO, which scales to 768 GPUs (Grady II et al., 2022). Incremental Resolution: To mitigate the computational cost of training FNO on high-resolution data, we incorporate curriculum learning to improve our model (Soviany et al. (2022)). This approach involves training the models in a logical sequence, where easier samples are trained first and gradually progressing to more challenging ones. In our case, we start with lower-resolution images, allowing the model to capture large-scale features first, potentially accelerating learning and reducing computational costs in the early stages of training, and then moving on to higher-resolution images. This approach allows the model to initially focus on learning low-frequency components of the solution using small data, avoiding wasted computation and overfitting. Then, the model progressively adapts to capture higher frequency components as the data resolution increases. This model combines both incremental frequency (specifically the iFNO Freq)) and incremental resolution and is denoted by iFNO. Algorithm 1 showcases the joint implementation. | Method | Train L2 | Test L2 (Super-Res) (1e-1) | Training Time (Mins) | |------------------------|---------------|------------------------------|------------------------| | iFNO | 0.149 ± 0.004 | 0.961 ± 0.022 | 670 ± 40 | | Standard FNO | 0.169 ± 0.004 | 0.988 ± 0.031 | 918 ± 10 | | iFNO (Freq) | 0.163 ± 0.003 | 1.019 ± 0.031 | 700 ± 30 | | iFNO (Loss) | 0.155 ± 0.005 | 0.948 ± 0.040 | 690 ± 30 | | iFNO (Resolution only) | 0.155 ± 0.003 | 0.974 ± 0.010 | 668 ± 40 | Table 1: **Evaluation on Re5000 dataset on full data regime**. The average training L2 loss and testing L2 loss on super-resolution are reported on 3 runs with their standard deviation. ## 5 Evaluations In this section, we detail the experimental setting, datasets used and analyze the empirical results. ## 5.1 Experimental Setup We evaluate iFNO and its variants on five different datasets of increasing difficulty. We retain the structure of the original FNO, which consists of stacking four Fourier convolution operator layers with the ReLU activation and batch normalization. More specifically, for the time-dependent problems, we use an RNN structure in time that directly learns in space-time1. The initial and coefficient conditions are sampled from Gaussian random fields (Nelsen & Stuart, 2021) for all the datasets. 1We share our code via an anonymous link here. Table 2: **Evaluation on various PDEs in the low-data regime**. The average training and testing L2 loss are reported with their standard deviation over 3 repeated runs. All iFNO variants are compared with standard FNO with fixed K = 10, 30, 60, 90 number of modes. Losses have been divided by the scale under the column. | Method | Burgers | Darcy Flow | Navier Stokes (FNO2D) | | | | |----------------|-----------------------|-----------------|-------------------------|----------------|---------------|---------------| | Train (1e − 3) | Test (1e − 2) | Train (1e − 3) | Test (1e-2) | Train (1e − 2) | Test (1) | | | iFNO | 0.418 ± 0.002 | 0.139 ± 0.006 | 2.466 ± 0.855 | 0.129 ± 0.028 | 3.533 ± 0.003 | 0.106 ± 0.011 | | iFNO (Freq) | 0.422 ± 0.001 | 0.141 ± 0.001 | 1.491 ± 0.298 | 0.215 ± 0.010 | 1.548 ± 0.029 | 0.177 ± 0.008 | | iFNO (Loss) | 0.430 ± 0.001 | 0.140 ± 0.007 | 2.262 ± 0.985 | 0.198 ± 0.029 | 2.466 ± 0.092 | 0.202 ± 0.020 | | iFNO (Res) | 0.423 ± 0.001 | 0.146 ± 0.006 | 1.410 ± 0.020 | 0.123 ± 0.001 | 3.576 ± 0.021 | 0.105 ± 0.001 | | FNO (10) | 1.135 ± 0.002 | 0.270 ± 0.006 | 1.471 ± 0.223 | 0.162 ± 0.003 | 5.133 ± 0.182 | 0.166 ± 0.027 | | FNO (30) | 0.455 ± 0.001 | 0.156 ± 0.002 | 1.462 ± 0.102 | 0.140 ± 0.011 | 3.262 ± 0.001 | 0.154 ± 0.001 | | FNO (60) | 0.421 ± 0.001 | 0.153 ± 0.006 | 1.468 ± 0.212 | 0.130 ± 0.007 | - ± - | − ± − | | FNO (90) | 0.423 ± 0.001 | 0.146 ± 0.006 | 1.425 ± 0.212 | 0.127 ± 0.004 | − ± − | − ± − | | Method | Navier Stokes (FNO3D) | Kolmogorov Flow | | | | | | Train (1e − 2) | Test (1e − 1) | Train (1) | Test (1) | | | | | iFNO | 1.694 ± 0.009 | 0.308 ± 0.018 | 0.832 ± 0.279 | 0.358 ± 0.009 | | | | iFNO (Freq) | 1.402 ± 0.001 | 0.559 ± 0.011 | 1.455 ± 0.008 | 0.487 ± 0.004 | | | | iFNO (Loss) | 0.832 ± 0.001 | 0.281 ± 0.011 | 0.740 ± 0.230 | 0.381± 0.010 | | | | iFNO (Res) | 1.704 ± 0.032 | 0.519 ± 0.016 | 0.882 ± 0.067 | 0.342± 0.001 | | | | FNO (10) | 1.892 ± 0.020 | 0.527 ± 0.002 | 2.111 ± 0.009 | 0.543 ± 0.009 | | | | FNO (30) | 1.298 ± 0.001 | 0.544 ± 0.001 | 1.053 ± 0.190 | 0.384 ± 0.010 | | | | FNO (60) | − | − | 0.562 ± 0.001 | 0.348 ± 0.001 | | | | FNO (90) | − | − | 0.475 ± 0.001 | 0.343 ± 0.001 | | | Burgers' Equation We consider the 1D Burgers' equation, which is a non-linear PDE with various applications, including modeling the flow of a viscous fluid. The 1D Burgers' equation takes the form: $$\begin{array}{c c}{{\partial_{t}u(x,t)+\partial_{x}\left(u^{2}(x,t)/2\right)=\nu\partial_{x x}u(x,t),}}&{{t\in(0,1]}}\\ {{}}&{{u(x,0)=u_{0}(x),}}&{{x\in(0,1)}}\end{array}$$ where u0 is the initial condition and ν is the viscosity coefficient. We aim to learn the operator Gθ, mapping the initial condition to the solution. The training is performed under the resolution of 1024, and we consider the learning rate halved every 50 epochs and weight decay to be 0.0001. Darcy Flow Next, we consider the steady-state of the 2D Darcy Flow on the unit box, which is the second-order, linear, elliptic PDE: $$\begin{array}{r l}{-\nabla\cdot(a(x)\nabla u(x))=f(x),}&{{}x\in(0,1)^{2}}\\ {u(x)=0,}&{{}x\in\partial(0,1)^{2}}\end{array}$$ With a Dirichlet boundary where a is the diffusion coefficient, and f = 1 is the forcing function. We are interested in learning the operator mapping the diffusion coefficient to the solution. The resolution is set to be 421 × 421. Navier-Stokes Equation We also consider the 2D+time Navier-Stokes equation for a viscous, incompressible fluid on the unit torus: $\partial_{t}w(x,t)+u(x,t)\cdot\nabla w(x,t)=\nu\Delta w(x,t)+f(x)$, $\nabla\cdot u(x,t)=0,\ \ x\in(0,1)^{2}$ $w(x,0)=w_{0}(x),\ \ t\in(0,T]$ where w = ∇ × u is the vorticity, w0 is the initial vorticity, ν ∈ R+is the viscosity coefficient, and f(x) = 0.1sin(2π(x1 + x2)) + cos(2π(x1 + x2))is the forcing function. We are interested in learning the operator mapping the vorticity up to time 10 to the vorticity up to some later time. We experiment with the hardest viscosity task, i.e., ν = 1e − 5. The equivalent Reynolds number is around 500 normalized by forcing and domain size. We set the final time T = 20 as the dynamic becomes chaotic. The resolution is fixed to be 64 × 64. We consider solving this equation using the 2D FNO with an RNN structure in time or using the 3D FNO to do space-time convolution, similar to the benchmark in Li et al. (a). As 3D FNO requires significantly more parameters, evaluating 3D FNO with more than 30 modes is computationally intractable due to the hardware limitation. Kolmogorov Flow: We evaluate the models on a very challenging dataset of high-frequency Kolmogorov Flow. The Re5000 problem is the most challenging dataset we consider. It is governed by the 2D NavierStokes equation, with forcing f(x) = sin(nx2)ˆx1, where x1 is the unit vector and a larger domain size [0, 2π], as studied in (Li et al., 2021; Kochkov et al., 2021). More specifically, the Kolmogorov flow (a form of the Navier-Stokes equations) for a viscous, incompressible fluid, $${\frac{\partial u}{\partial t}}=-u\cdot\nabla u-\nabla p+{\frac{1}{R e}}\Delta u+\sin(n y){\hat{x}},\quad\nabla\cdot u=0,$$ $$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{on}[0,2\pi]^{2}\times(0,\infty)$$ with initial condition u(·, 0) = u0 where u denotes the velocity, p the pressure, and *Re >* 0 is the Reynolds number. It has an extremely high Reynolds number with Re = 5000. The resolution of this dataset is 128 × 128. We consider learning the evolution operator from the previous time step to the next based on 2D FNO. We also consider a variation of the dataset with a lower Re value, which we call the Kolmogorov Flow problem. This resolution is fixed to be 256 × 256 to show high-frequency details. Methods We present our results for each dataset where we consider four standard FNO baselines with different numbers of frequency modes K = 10, 30, 60, 90. For each dataset, all methods are trained with the same hyperparameters, including the learning rate, weight decay, and the number of epochs. Unless otherwise specified, we use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001, which is halved every 100 epochs, and weight decay is set to be 0.0005. For the re5000 dataset, we train for 50 epochs, and experiment details are explained in the Appendix. All experiments are run using NVIDIA® Tesla® V100/A100 GPUs. We also test our incremental methods on Tensorized Factorized Neural Operators Kossaifi et al. (2023) by sweeping over different factorization schemes and ranks for the weights. Hyperparameter settings We find iFNO is *less sensitive* to the choice of hyperparameters across different datasets. This property allows the same iFNO architecture and hyperparameters to be directly applied to a wide range of problems without expensive parameter validation and without needing to set an appropriate K. Consequently, we use the same hyperparameters for all datasets, including the number of initial modes K0 = 1, the number of buffer modes b = 5, and the threshold α = 0.9999. We also use a uniform threshold ϵ = 0.001 in iFNO (Loss). ## 5.2 Experiment Details For Re5000 Dataset - Ifno Regarding setting the scheduler, the incremental resolution approach significantly alters the loss landscape and distribution of the training data each time the resolution is increased. As noted by Smith (2017), cyclic learning rates oscillating between higher and lower values during training can be highly beneficial when dealing with such changing loss landscapes. Therefore, our incremental resolution training methodology adopts a combination of the cyclic learning rate and step learning rate scheme. We see that this combination provides improved and more stable convergence throughout the curriculum of incremental resolution increases, contributing to strong empirical results for both the incremental resolution model and the iFNO model. A triangular cyclic learning rate allows the model to first explore with larger learning rates to find new optimal parameters after a resolution increase before narrowing down with smaller rates to converge. This prevents the model from getting stuck in sub-optimal solutions. Our experiments utilize 3 different modes, i.e., triangular, triangular with decreased height, and exponential decay. We also sweep over various step sizes and maximum and minimum bounds. The cyclic period is set to e epochs. Although the cyclicLR method is indeed suitable, we find that after we switch to the highest resolution, using stepLR converges to a better minimum compared to cyclicLR. This makes sense, as once we start training on the highest resolution, instead of changing our learning rates in an oscillatory manner, we should stabilize and take smaller steps for convergence, like how the baseline FNO is trained. Our algorithm increases resolution every e epoch from 32 to 64 and then 128. We use powers of 2 for optimal Fast Fourier Transform speed. The hyperparameter e can be tuned, and r is a list determining how low we sub-sample data. In the re5000 dataset case, we have set e = 10 and r = [4, 2, 1], which results in starting from resolution 32 all the way up to the highest resolution 128. ![9_image_0.png](9_image_0.png) (a) Number of frequency modes K in the converged FNO and iFNO models across datasets. We report K in the first Fourier convolution operator. NS denotes Navier-Stokes equations. (b) Testing loss, training loss, and number of modes ![9_image_1.png](9_image_1.png) K during the training of FNO and iFNO on Kolmogorov flow. Figure 5: iFNO comparison, both modes and loss curves. First, we study the performance of our model on the large-scale Re5000 dataset, where we use 36000 training samples and 9000 testing samples. The results in Table 1 show that iFNO achieves significantly better generalization performance than just incrementally increasing either frequency or resolution alone. This highlights the advantages of the iFNO while gaining around a 30% efficiency and much better accuracy than the stand-alone Incremental methods. The training time is reduced due to starting with a very small model and input data. There is one important note to mention that, in fact, for the Re5000 dataset, since the problem is highly challenging, i.e., having a high Reynolds number, we need to reach all 128 modes to capture even the high-frequency components. Next, we evaluate all methods in the low-data regime across different datasets, where the models only have access to a few data samples. This is common in many real-world PDE problems where accurately labeled data is expensive to obtain. In this regime, the ability to generalize well is highly desirable. We consider a small training set (5 to 50 samples) for Burgers, Darcy, and Navier-Stokes datasets. As the data trajectories are already limited in the Kolmogorov dataset, we choose 8x larger time step (sampling rate) for each trajectory, resulting in fewer time frames. As shown in Table 2, iFNO consistently outperforms the FNO baselines across all datasets, regardless of the number of modes K. It also shows that FNO with larger K achieves lower training loss but overfits the training data and performs poorly on the testing set. On the other hand, iFNO (Freq) achieves slightly ## 5.3 Results Table 3: Evaluation on various PDEs in the full data regime. The average training and testing L2 loss are reported with their standard deviation over 3 repeated runs. Frequency-based and loss-based iFNO are compared with standard FNO with fixed K = 10, 30, 60, 90 number of modes. Losses have been divided by the scale under the column. | Method | Burgers | Darcy Flow | Navier Stokes (FNO 2D) | | | | |----------------|------------------------|-----------------|--------------------------|----------------|---------------|---------------| | Train (1e − 3) | Test (1e − 3) | Train (1e − 3) | Test (1e-3) | Train (1e − 1) | Test (1e-1) | | | iFNO | 0.891 ± 0.022 | 0.760 ± 0.039 | 4.580 ± 0.855 | 0.264 ± 0.020 | 1.022 ± 0.043 | 0.824 ± 0.016 | | iFNO (Freq) | 0.898 ± 0.022 | 0.778 ± 0.005 | 2.453 ± 0.260 | 0.387 ± 0.009 | 1.102 ± 0.132 | 1.265 ± 0.044 | | iFNO (Loss) | 0.919 ± 0.033 | 0.779 ± 0.034 | 3.113 ± 0.887 | 0.322 ± 0.068 | 1.135 ± 0.003 | 1.375 ± 0.224 | | iFNO (Res) | 0.979 ± 0.053 | 0.854 ± 0.034 | 4.463 ± 0.920 | 0.271 ± 0.010 | 1.112 ± 0.080 | 0.737 ± 0.227 | | FNO (10) | 2.523 ± 0.028 | 1.063 ± 0.025 | 2.991 ± 0.302 | 0.481 ± 0.047 | 10.66 ± 0.736 | 1.286 ± 0.178 | | FNO (30) | 1.032 ± 0.017 | 0.868 ± 0.033 | 2.981 ± 0.046 | 0.412 ± 0.029 | 1.223 ± 0.045 | 1.184 ± 0.107 | | FNO (60) | 0.966 ± 0.023 | 0.845 ± 0.016 | 2.596 ± 0.433 | 0.466 ± 0.013 | 1.112 ± 0.029 | 1.178 ± 0.047 | | FNO (90) | 0.973 ± 0.042 | 0.851 ± 0.029 | 2.521 ± 0.046 | 0.379 ± 0.043 | 1.155 ± 0.001 | 1.577 ± 0.002 | | Method | Navier Stokes (FNO 3D) | Kolmogorov Flow | | | | | | Train (1e − 2) | Test (1e − 1) | Train (1e − 2) | Test (1e − 1) | | | | | iFNO | 0.820 ± 0.041 | 0.272 ± 0.003 | 2.665 ± 0.034 | 1.507 ± 0.009 | | | | iFNO (Freq) | 0.855 ± 0.007 | 0.262 ± 0.002 | 4.345 ± 0.001 | 1.443 ± 0.001 | | | | iFNO (Loss) | 0.832 ± 0.001 | 0.281 ± 0.011 | 3.393 ± 0.001 | 2.684 ± 0.003 | | | | iFNO (Res) | 7.533 ± 0.006 | 0.281 ± 0.001 | 2.336 ± 0.003 | 0.985 ± 0.001 | | | | FNO (10) | 1.035 ± 0.027 | 0.296 ± 0.016 | 5.462 ± 0.004 | 2.227 ± 0.012 | | | | FNO (30) | 0.977 ± 0.001 | 0.587 ± 0.001 | 3.571 ± 0.002 | 1.312 ± 0.012 | | | | FNO (60) | − | − | 2.751 ±0.001 | 1.015 ±0.001 | | | | FNO (90) | − | − | 2.330 ± 0.002 | 1.042 ± 0.002 | | | higher training loss but generalizes well to the testing set, which demonstrates the effectiveness of the dynamic spectral regularization. We also notice that although iFNO (Loss) has significant fluctuations, especially in the training results, it does achieve better generalization on most datasets against the baselines. ## 5.4 Full Data Results We also evaluate the models with the most data samples available in each dataset. As shown in Table 3, iFNO achieves good performance comparable with standard FNO. 1. Generalization: iFNO achieves up to 38% better generalization performance than the baselines, and all iFNO methods beat the baselines in testing error across tasks like Burgers, Darcy, NS2D+time, and NS3D. 2. Efficiency: iFNO exhibits remarkable efficiency gains, ranging from 21% to 46% reduction in computational time compared to the baselines, depending on the task. 3. Model Size: iFNO achieves substantial model size reductions, with up to 2.8x smaller model size than the baselines, without compromising performance. ## 5.5 Ablation Studies Figure 5b compares FNO (90) and iFNO (Freq) over the training on Kolmogorov flow. It shows that iFNO requires much fewer frequency modes during training. As the linear transformation R dominates the number of parameters and FLOPs in FNO when K is large, iFNO (Freq) can significantly reduce the computational cost and memory usage of training FNO. iFNO achieves better generalization and even requires fewer modes at the end of training. In Figure 5a, we compare iFNO (Freq) with FNO with a particular K that achieves the best performance in each dataset. The results suggest that the trained iFNO (Freq) consistently requires fewer frequency modes than its FNO counterpart, making the iFNO inference more efficient. This also indicates that iFNO can automatically determine the optimal number of frequency modes K during training without predetermining K as an inductive bias that could potentially hurt the performance. To further demonstrate the adaptability and broader applicability of our incremental approach, we conducted a case study applying iFNO to Tensorized Fourier Neural Operators (TFNOs). TFNOs represent a recent advancement in efficient operator learning, and we were interested in exploring how our incremental techniques could enhance their performance. The full details of these experiments are presented in Appendix B.6. In summary, we show that TFNOs with our incremental approach (iTFNO) achieve up to 11% better generalization error and up to 22% performance speed up compared to baselines. ## Б Standard Regularization In Training Fno ![11_image_0.png](11_image_0.png) We also visualize the frequency evolution of FNO and iFNO. As shown in Figure 6, the weight decay used in the optimization has a significant effect on the learned weights. Figure 6: Frequency evolution of the Fourier convolution operators in FNO with and without weight decay. Top: with weight decay; bottom: without weight decay. When training with weight decay, the strength of each mode converges around 100 training iterations. And the weights of lower frequency tend to have high strength. On the other hand, the weight continues growing throughout the training without the weight decay. In this case, higher frequencies can also have high strength. ## 7 Conclusion In this work, we proposed the Incremental Fourier Neural Operator (iFNO) that dynamically selects frequency modes and increases the resolution during training. Our results show that iFNO achieves better generalization while requiring less number of parameters (frequency modes) compared to the standard FNO. We believe iFNO opens many new possibilities, such as solving PDEs with very high resolution and with limited labeled data and training computationally efficient neural operators. In the future, we plan to apply iFN() on more complex PDEs and extend it to the training of physics-informed neural operators. ## 8 Impact Statement This paper presents work whose goal is to advance the field of Machine Learning and PDEs. FNOs have shown promise to solve challenging physics and engineering problems accurately and efficiently. Our research has made FNOs more optimized through incremental learning and, hence, could lead to FNOs being applied to solve larger real-world problems, allowing for advances in fields like engineering, weather forecasting, and climate science while reducing training times with fewer computing resources. There might also be potential negative societal consequences of our work, but none of them are immediate to be specifically highlighted here. ## References Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design, 2024. URL https: //arxiv.org/abs/2309.15325. Ronen Basri, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. URL http://arxiv.org/abs/1906.00425. type: article. Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. *Computational Mechanics*, 64(2):525–545, 2019. Guido Boffetta, Robert E Ecke, et al. Two-dimensional turbulence. *Annual review of fluid mechanics*, 44(1): 427–451, 2012. Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. URL http://arxiv.org/abs/1912.01198. type: article. MWMG Dissanayake and Nhan Phan-Thien. Neural-network-based approximations for solving partial differential equations. *communications in Numerical Methods in Engineering*, 10(3):195–201, 1994. Sara Fridovich-Keil, Raphael Gontijo-Lopes, and Rebecca Roelofs. Spectral bias in practice: The role of function frequency in generalization. URL http://arxiv.org/abs/2110.02424. type: article. Thomas J Grady II, Rishi Khan, Mathias Louboutin, Ziyi Yin, Philipp A Witte, Ranveer Chandra, Russell J Hewett, and Felix J Herrmann. Towards large-scale learned solvers for parametric pdes with model-parallel fourier neural operators. *arXiv preprint arXiv:2204.01205*, 2022. Daniel Greenfeld, Meirav Galun, Ronen Basri, Irad Yavneh, and Ron Kimmel. Learning to optimize multigrid pde solvers. In *International Conference on Machine Learning*, pp. 2415–2423. PMLR, 2019. Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 481–490, 2016. Thomas Y Hou and Ruo Li. Computing nearly singular solutions using pseudo-spectral methods. *Journal* of Computational Physics, 226(1):379–397, 2007. Xinquan Huang and Tariq Alkhalifah. PINNup: Robust neural network wavefield solutions using frequency upscaling and neuron splitting. URL http://arxiv.org/abs/2109.14536. Georgios Kissas, Jacob H Seidman, Leonardo Ferreira Guilhoto, Victor M Preciado, George J Pappas, and Paris Perdikaris. Learning operators with coupled attention. *Journal of Machine Learning Research*, 23 (215):1–63, 2022. Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan Hoyer. Machine learning accelerated computational fluid dynamics. *arXiv preprint arXiv:2102.01010*, 2021. Jean Kossaifi, Nikola Kovachki, Kamyar Azizzadenesheli, and Anima Anandkumar. Multi-grid tensorized fourier neural operator for high-resolution pdes. *arXiv preprint arXiv:2310.00120*, 2023. Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. *arXiv preprint* arXiv:2108.08481, 2021. Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 26548–26560. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/ paper/2021/file/df438e5206f31600e6ae4af72f2725f1-Paper.pdf. Isaac E Lagaris, Aristidis Likas, and Dimitrios I Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. *IEEE transactions on neural networks*, 9(5):987–1000, 1998. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations, a. URL http://arxiv.org/abs/2010.08895. type: article. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations, b. URL http://arxiv.org/abs/2003.03485. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Markov neural operators for learning chaotic systems. *arXiv preprint* arXiv:2106.06898, 2021. Burigede Liu, Nikola Kovachki, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Andrew M Stuart, and Kaushik Bhattacharya. A learning-based multiscale method and its application to inelastic impact problems. *Journal of the Mechanics and Physics of Solids*, 158:104668, 2022. Qiang Liu, Lemeng Wu, and Dilin Wang. Splitting steepest descent for growing neural architectures. URL http://arxiv.org/abs/1910.02366. Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. *Nature Machine* Intelligence, 3(3):218–229, 2021. Nicholas H Nelsen and Andrew M Stuart. The random feature model for input-output maps between banach spaces. *SIAM Journal on Scientific Computing*, 43(5):A3212–A3243, 2021. Jaideep Pathak, Mustafa Mustafa, Karthik Kashinath, Emmanuel Motheau, Thorsten Kurth, and Marcus Day. Ml-pde: A framework for a machine learning enhanced pde solver. *Bulletin of the American Physical* Society, 2021. Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. URL http://arxiv.org/abs/ 1806.08734. type: article. Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019. Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial differential equations. *Journal of computational physics*, 375:1339–1364, 2018. Leslie N Smith. Cyclical learning rates for training neural networks. In *2017 IEEE winter conference on* applications of computer vision (WACV), pp. 464–472. IEEE, 2017. Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. *International Journal of Computer Vision*, 130(6):1526–1565, 2022. Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, and Mathias Niepert. Pdebench: An extensive benchmark for scientific machine learning. arXiv preprint arXiv:2210.07182, 2022. E Weinan and Bing Yu. The deep ritz method: a deep learning-based numerical algorithm for solving variational problems. *Communications in Mathematics and Statistics*, 6(1):1–12, 2018. Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-fno—an enhanced fourier neural operator-based deep-learning model for multiphase flow. *Advances in Water* Resources, 163:104180, 2022. Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. *arXiv preprint arXiv:1901.06523*, 2019. Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. *Journal of Computational Physics*, 366:415–447, 2018. ## A Additional Results A.1 Frequency Mode Evolution During Training As shown in Figure 7, the effective modes K adaptively increase during the training phase on the Burgers, Darcy, Navier-Stokes (2D & 3D formulation), and Kolmogorov Flows. For smooth problems such as viscous Burgers equation and Darcy Flow, the mode converges in the early epoch, while for problem with high frequency structures such as the Kolmogorov Flows, the mode will continuously increase throughout the training phase. ![15_image_0.png](15_image_0.png) Figure 7: Mode evolution during training for each layer across datasets. ## A.2 Training And Testing Loss During Learning Figure 8 and 9 show the training curves and testing curves across the five problem settings, ten rows in total. Overall, iFNO methods have a slightly higher training error but lower validation error, indicating its better generalization over the few data regime. ![16_image_0.png](16_image_0.png) loss. **Each bottom row: validation L2 loss** ![17_image_0.png](17_image_0.png) Figure 9: Loss vs Epochs across datasets (part 2: Navier-Stokes 2D, Kolmogorov Flows). **Each top row: training** L2 loss. **Each bottom row: validation L2 loss** ## B Additional Experiments And Results B.1 Selection Of Hyperparameters In Tables 4 and 5 we present the results of our hyperparameter search for in iFNO (Freq) and ϵ in iFNO (Loss) respectively, across various tasks. These hyperparameters control the threshold for adding new modes in our incremental approaches. For iFNO (Freq), α represents the cumulative energy threshold in the Explanation Ratio criterion. We explored values ranging from 0.6 to 0.9999, covering a wide spectrum from more aggressive (lower α) to more conservative (higher α) mode addition strategies. For iFNO (Loss), ϵ represents the threshold for the relative decrease in loss that triggers the addition of new modes. We investigated values from 0.1 to 0.00001, encompassing both rapid (higher ϵ) and gradual (lower ϵ) mode addition schemes. For most tasks, moderate values of α(around 0.99) and ϵ (around 0.001) tend to perform well, balancing between adding sufficient modes for expressiveness and avoiding overfitting. However, some tasks like Kolmogorov Flow benefit from more aggressive mode addition (lower α or higher ϵ), likely due to the complexity of the underlying dynamics. | Tasks | Selection of α | | | | | | |------------------------|------------------|-------|-------|-------|--------|-------| | 0.6 | 0.8 | 0.9 | 0.99 | 0.999 | 0.9999 | | | Burgers (1e-3) | 1.723 | 0.975 | 0.583 | 0.544 | 0.559 | 0.587 | | Darcy Flow (1e-4) | 0.621 | 0.595 | 0.552 | 0.509 | 0.475 | 0.513 | | Navier Stokes (FNO 2D) | 0.341 | 0.256 | 0.223 | 0.189 | 0.194 | 2.010 | | Navier Stokes (FNO 3D) | 0.524 | 0.499 | 0.461 | 0.464 | 0.468 | 0.471 | | Kolmogorov Flow | 0.133 | 0.122 | 0.102 | 0.043 | 0.049 | 0.051 | Table 4: Evaluation of α in iFNO (Freq) across various tasks. | Tasks | Selection of ϵ | | | | | |------------------------|------------------|--------|--------|---------|-------| | 0.1 | 0.01 | 0.001 | 0.0001 | 0.00001 | | | Burgers (1e-3) | 0.672 | 0.572 | 0.595 | 1.190 | 13.90 | | Darcy Flow (1e-4) | 0.486 | 0.465 | 0.412 | 0.424 | 0.490 | | Navier Stokes (FNO 2D) | 0.192 | 0.194 | 0.191 | 0.284 | 0.322 | | Navier Stokes (FNO 3D) | 0.471 | 0.469 | 0.454 | 0.449 | 0.452 | | Kolmogorov Flow | 0.0614 | 0.0616 | 0.053 | 0.578 | 0.579 | Table 5: Evaluation of ϵ in iFNO (Loss) across various tasks. | Models | Dimension | | | | | |-------------|-------------|--------|-------|-------|-------| | 32 | 64 | 80 | 96 | 128 | | | iFNO (Freq) | 0.785 | 0.5901 | 0.532 | 0.469 | 0.513 | | iFNO (Loss) | 0.746 | 0.708 | 0.605 | 0.578 | 0.714 | | FNO | 0.603 | 0.519 | 0.525 | 0.501 | 0.553 | | Models | Dimension | | | | | |-------------|-------------|--------|--------|--------|--------| | 32 | 64 | 80 | 96 | 112 | | | iFNO (Freq) | 0.462 | 0.462 | 0.4395 | 0.4304 | 0.4479 | | iFNO (Loss) | 0.4549 | 0.4553 | 0.4599 | 0.3859 | 0.4321 | | FNO | 0.4683 | 0.5051 | 0.4116 | 0.4922 | 0.4834 | ## B.2 Fno Over Different Sizes Table 6: Evaluation of changing the width of FNO models on Burger. Validation losses have been divided by 1e-3. Table 7: Evaluation of changing the width of FNO models on Darcy. Validation losses have been divided by 1e-4. Table 8: Evaluation of changing the width of FNO models on Navier Stokes 2D. Validation losses have been divided by 1. | Models | Dimension | | | | | |-------------|-------------|-------|-------|-------|-------| | 16 | 20 | 32 | 64 | 80 | | | iFNO (Freq) | 0.205 | 0.192 | 0.180 | 0.170 | 0.176 | | iFNO (Loss) | 0.199 | 0.191 | 0.181 | 0.177 | 0.177 | | FNO | 0.203 | 0.198 | 0.182 | 0.182 | 0.177 | | Tasks | Dimension | | | | | |-------------|-------------|-------|-------|-------|-------| | 16 | 20 | 32 | 64 | 80 | | | iFNO (Freq) | 1.513 | 1.450 | 1.312 | 0.634 | (NR) | | iFNO (Loss) | 0.879 | 0.821 | 0.654 | 0.534 | (NR) | | FNO | 1.072 | 1.024 | 0.872 | 0.743 | 0.721 | Table 9: Evaluation of changing the width of FNO models on Kolmogorov Flow. Validation losses have been divided by 1e-1. | Tasks | Scale of Initialization | | | | | |-------------|---------------------------|-------|-------|-------|-------| | 1e-2 | 1e-1 | 1 | 1e1 | 1e2 | | | iFNO (Freq) | 0.064 | 0.062 | 0.061 | 0.064 | 0.067 | | iFNO (Loss) | 0.050 | 0.054 | 0.070 | 0.053 | 0.055 | | FNO | 0.073 | 0.073 | 0.072 | 0.072 | 0.073 | ## B.3 Incremental Fno Under Different Initialization Scales Table 10: Evaluation of different initialization scales of FNO/iFNO models on Kolmogorov Flow. We again do a full sweep and report the best values. | Tasks | Frequency Modes | | | | | | | | | |------------------------|-------------------|-------|-------|-------|-------|-------|-------|-------|-------| | 10 | 20 | 30 | 40 | 60 | 80 | 90 | 100 | 120 | | | Kolmogorov Flow (1e-1) | 1.327 | 0.906 | 0.674 | 0.598 | 0.482 | 0.509 | 0.451 | 0.471 | 0.485 | ## B.4 Standard Fno With Different Frequency Modes We conduct more experiments to showcase the classic bias-variance tradeoff in FNOs when increasing the modes. Table 11: Evaluation of different frequency modes on standard FNO on Kolmogorov Flow. In this task, the minimum value is bolded across each row. Validation losses have been divided by the value next to the dataset across each row. ## B.5 Number Of Paramaters We have compiled a summary of our results comparing iFNO to FNO (90 modes) across various tasks: 1. Burgers equation: - Generalization: iFNO improves testing error by 11% over FNO (90 modes). - Efficiency: iFNO is 21% faster (455 seconds vs. 570 seconds for FNO). - Model size: iFNO achieves a 1.3x reduction (7.9 million vs. 12 million parameters). 2. Darcy Flow: - Generalization: iFNO improves testing error by 31% over FNO (90 modes). - Efficiency: iFNO is 44% faster (175 minutes vs. 310 minutes for FNO). - Model size: iFNO achieves a 2.3x reduction (471 million vs. 1.061 billion parameters). 3. Navier-Stokes 2D+time: - Generalization: iFNO (Resolution) improves testing error by 38% over FNO (90 modes). - Efficiency: iFNO is 38% faster (203 minutes vs. 323 minutes for FNO). - Model size: iFNO achieves a 2.3x reduction (471 million vs. 1.061 billion parameters). 4. Navier-Stokes 3D: - Generalization: iFNO (freq) improves testing error by 31% over FNO (30 modes). - Efficiency: iFNO is 31% faster (154 minutes vs. 223 minutes for FNO). - Model size: iFNO achieves a 2.8x reduction (1.25 billion vs. 3.539 billion parameters). 5. Navier-Stokes High Resolution: - Generalization: iFNO (Resolution) improves testing error by 3% over FNO (90 modes). - Efficiency: iFNO is 46% faster (469 minutes vs. 832 minutes for FNO). - Model size: iFNO achieves a 2.3x reduction (471 million vs. 1.061 billion parameters). These results demonstrate that iFNO consistently outperforms FNO (90 modes) in terms of generalization, efficiency, and model size across various tasks. We chose to compare primarily with FNO (90 modes) as it represents the most challenging baseline. Other iFNO variants, while potentially larger than the base iFNO model, still maintain efficiency advantages over FNO. | Method | Navier Stokes (FNO 2D) | | Kolmogorov Flow | | |-------------|---------------------------------|---------------|---------------------------------|---------------| | | Number of Parameters (Millions) | Test (1e − 1) | Number of Parameters (Millions) | Test (1e − 2) | | iFNO (Freq) | 0.92M | 1.872 | 321M | 4.602 | | iFNO (Loss) | 23.8M | 1.911 | 530M | 5.127 | | FNO (10) | 0.64M | 1.941 | 6.57M | 13.27 | | FNO (30) | 5.76M | 1.924 | 59.0M | 6.462 | | FNO (60) | 23.0M | 1.918 | 235M | 4.825 | | FNO (90) | 51.8M | 1.918 | 530M | 4.596 | Table 12: Evaluation and number of parameters (in Millions) for the various models in Table 3. ## B.6 Tensorized Factorized Fno Under Different Tensor Ranks iTFNO incorporates our incremental mode selection and incremental resolution techniques into the TFNO framework. Specifically, we apply the following modifications to TFNOs: 1. Incremental mode selection: Similar to iFNO, iTFNO starts with a small number of modes and gradually increases the number of modes during training. We experiment with both frequencybased (iFNO (Freq)) and loss-based (iFNO (Loss)) criteria for determining when to add modes. This allows iTFNO to adaptively adjust the model capacity and focus on the most relevant frequency components for the given task. 2. Incremental resolution: iTFNO also adopts the incremental resolution technique from iFNO, where the model is trained on progressively higher resolutions of the input data. This enables iTFNO to capture fine-grained details and scale to higher resolutions more efficiently. 3. Factorization schemes and ranks: In addition to the incremental techniques, we explore different factorization schemes and ranks for the tensor decomposition of the weights in iTFNO. This allows us to investigate the impact of the tensor structure on the performance and efficiency of the incremental approach. | Weight Factorization | 5% | 25% | | | |------------------------|----------------|-------------|----------------|-----| | Test (1e-1) | Runtime (Mins) | Test (1e-1) | Runtime (Mins) | | | iTFNO | 0.646 | 122 | 0.793 | 259 | | iTFNO (Freq) | 0.752 | 144 | 0.904 | 333 | | TFNO | 0.719 | 166 | 0.872 | 280 | Table 13: Evaluation of 2 different factorization ranks of iTFNO models on Re5000. We do a full sweep and report the best values. ## B.7 Different Losses We have also conducted our models on the H1 Loss. We acknowledge that the L2 loss, while commonly used, may not fully capture the desired properties and behavior of the learned operators. The L2 loss treats all parts of the signal equally and may not prioritize important structural or geometric features. Moreover, as the reviewer pointed out, our proposed iFNO(freq) and iFNO(loss) methods are closely tied to controlling the L2 loss, either directly or indirectly. To address this concern and provide a more comprehensive evaluation of our methods, we have conducted additional experiments using the Sobolev loss (H1) as suggested by the reviewer. The Sobolev loss considers not only the pointwise differences between the predicted and target signals but also the differences in their derivatives or gradients. This loss is more sensitive to the smoothness and regularity of the learned operators and can provide insights into their ability to capture important spatial or temporal structures. We have evaluated our proposed methods, including iFNO, iFNO(freq), iFNO(loss), and iFNO(res), as well as the baseline FNO and TFNO models, on three datasets: Re5000, Navier-Stokes 2D, and Burgers. The results are presented in the tables above. For the Re5000 dataset, we observe that iTFNO outperforms both iTFNO(freq) and the baseline TFNO in terms of H1 loss, indicating that the incremental approach combined with the tensor factorization can lead to improved accuracy in capturing the spatial regularity of the solutions. The performance gap is consistent across different tensor ranks (5 and 25). | | rank 5 | rank 25 | |--------------|----------|-----------| | iTFNO | 0.2396 | 0.2984 | | iTFNO (Freq) | 0.2889 | 0.3321 | | TFNO | 0.2725 | 0.3496 | Table 14: Comparison of iTFNO and TFNO In the Navier-Stokes 2D dataset, iFNO and its variants (iFNO(freq), iFNO(loss), iFNO(res)) generally achieve lower H1 loss compared to the baseline FNO. This suggests that the incremental mode selection and resolution techniques can help learn operators that better preserve the smoothness and continuity of the fluid dynamics. Similarly, for the Burgers dataset, iFNO and its variants demonstrate competitive or slightly improved performance in terms of H1 loss compared to the baseline FNO. The iFNO(res) variant, which focuses on incremental resolution, achieves the lowest H1 loss, highlighting the importance of adaptively refining the spatial resolution for this specific problem. | Model | Test H1 Loss | |-----------|----------------| | iFNO | 0.1511 | | iFNO Freq | 0.1476 | | iFNO Loss | 0.1606 | | iFNO Res | 0.1507 | | FNO | 0.1619 | Table 15: NS2D Results | Model | Test H1 Loss | |-----------|----------------| | iFNO | 0.02913 | | iFNO Freq | 0.02892 | | iFNO Loss | 0.03005 | | iFNO Res | 0.02675 | | FNO | 0.03160 | Table 16: Burgers Results | Model | Train (L2) | Eval (L2) | Eval (H1) | Train (H1) | Eval (L2) | Eval (H1) | |-----------|--------------|-------------|-------------|--------------|-------------|-------------| | iFNO | 0.0524 | 0.0377 | 0.1284 | 0.0972 | 0.0831 | 0.1543 | | iFNO Freq | 0.0514 | 0.0369 | 0.1259 | 0.0963 | 0.1244 | 0.1476 | | iFNO Loss | 0.0504 | 0.0359 | 0.1242 | 0.1144 | 0.1181 | 0.1606 | | iFNO Res | 0.0546 | 0.0379 | 0.1316 | 0.1152 | 0.0881 | 0.1507 | | FNO | 0.0517 | 0.0360 | 0.1258 | 0.1103 | 0.1584 | 0.1619 | Finally, we include one more comparison on the NS2D dataset where we present results for training on L2 loss and evaluating on both L2 and H1 losses, as well as training on H1 loss and evaluating on both L2 and H1 losses. The incremental techniques in iFNO and its variants can lead to improved performance in L2 and H1 losses, demonstrating their effectiveness in capturing both pointwise accuracy and spatial regularities. The iFNO variants show different behaviors when trained on L2 versus H1 loss, suggesting that the incremental techniques interact differently with these loss functions. Table 17: NS2D+Time Results These results provide reassurance that our proposed incremental techniques not only improve the L2 loss but also lead to better performance in terms of the Sobolev loss, which is more sensitive to the spatial or temporal regularity of the learned operators. However, we acknowledge that the Sobolev loss is just one example of an alternative metric, and some many other relevant losses and distances could be considered, such as L1 loss, Wasserstein distance, or problem-specific metrics. We believe that a comprehensive evaluation across multiple losses and datasets is crucial for understanding the strengths and limitations of operator learning methods. We will investigate this in future work.
Review 1: Summary: Authors propose a special training protocol for Fourier Neural Operator (FNO) with two primal innovations: 1. The number of modes is selected adaptively based on one of two proposed criteria (based on loss and based on "frequency strength"). 2. The spatial resolution of training data is gradually increasing during training. It is shown in a set of experiments that these two novelties allow for a decrease in training time and improve test error. Strengths and Weaknesses: **Strengths** The idea appears to be novel, and most experiments support that proposed techniques lead to decreased training time and better test error. The main body of the paper is well-written, and it is mostly possible to understand what the authors want to convey. **Weaknesses** That said, the significance of accuracy improvement seems to be exaggerated. Some ideas are insufficiently explained. Besides, there are unclear parts (see below). Requested Changes: I mostly find the present research to be of sufficient quality and merely want to clarify several things that confuse me and may lead to the confusion of the reader alike. Below is the list of questions and suggestions: 1. (page 1) "Recently, deep learning has shown promise in solving partial differential equations (PDEs) significantly faster than traditional numerical methods in many domains." It is hard to find support for that claim in literature. Neural operators and other surrogate models are not solvers per se, since they require datasets generated or collected by other means. Physics-informed neural networks are much slower than their classical counterparts (although they can be more accurate for high-dimensional problems, e.g., quantum chemistry). Physics-informed neural operators and physics-informed DeepONet are also extremely slow. All methods known to me that are competitive or even faster are hybrid approaches when a part of the classical solver is replaced with a DL counterpart. These approaches can be 2-3 times faster at best. I suggest authors to back up their claim by pointing to appropriate literature. 2. (page 1) "FNO learns the solution operator of PDE that maps the input (initial and boundary conditions) to the output (solution functions)." Somewhat meaningless description: any neural network maps its input to the output. 3. (page 3) "Li et al. (a) proposes FNO that adopts a convolution operator for K as shown in Figure 1, which obtains state-of-the-art results for solving PDE problems." This claim is not supported by literature. FNO was introduced in 2020 and provided state-of-the-art results (among the architectures tested by authors). There are examples now when classical architectures perform better (https://arxiv.org/abs/2112.15275) and other operators perform better (https://arxiv.org/abs/2111.13802). I suggest authors provide references that support their claim. 4. (page 3) "FNO is discretization-convergent, such that the model can produce a high-quality solution for any query points, potentially not in the training grid. In other words, FNO can be trained on low-resolution but generalizes to high-resolution." This description is misleading. FNO is consistent when evaluated on different grids, but it does not provide a high-quality solution. It is also not convergent in the sense of classical numerical analysis. Besides, it does not generalize to high-resolutions. As a rule, the error produced by FNO remains the same for different resolutions (e.g., figure 3, https://arxiv.org/abs/2010.08895), so there is no generalization, only consistency. 5. (page 3) "To ensure discretization-convergence, FNO truncates the Fourier series Fˆv at a maximal number of modes K using TK." Arguably, "discretization-convergence" is mostly achieved with FFT and inverse FFT that consistently map functions defined on different uniform grids. It is easy to come up with architecture without hard truncation that is still discretization-convergent. 5. (page 4) "Figure 2" As I understand the left picture provides a ground truth solution and the two right pictures are outputs of two trained FNOs with different numbers of modes. Both predictions seem to be completely unrelated to the reference solution. Why in this case a large number of modes is better if the error is that large? 6. (page 4) "As FNO is a neural network and trained by first-order learning algorithms, it follows the implicit spectral bias such that the lower frequency modes have larger strength in R. This explains why FNO chooses to preserve a set containing the lowest frequency modes, instead of any arbitrary subset of frequencies." I am not sure I can follow this logic. First, FNO preserves low-frequency modes not because of the f-principle but by construction. The authors of FNO designed it in such a way. Second, the f-principle is an observation that when a neural network is trained to approximate some function, it first learns low frequencies and struggles with high frequencies. FNO is an operator, so if we apply the f-principle (I am not completely sure we can do that) it would mean that trained FNO should be biased toward "smooth operators". This supposedly includes an identity operator, that should retain all frequencies of the input. 7. (page 5, Algorithm 1) "Randomly initialize $R_0$" This part seems to be important and not well discussed. How exactly $R_0$ is initialized? If we look at the description of iFNO (Loss) on page 5, we can see that one increases the number of active modes. Does it mean that initially $R_0$ is as small as possible and the size of $R$ can only increase? 8. (page 5, Algorithm 1) "$K_t \leftarrow K_t + 1$" How the modes are added in higher dimensions? What is the order? Is it technically possible to add a single mode? 9. (page 5, Algorithm 1) "and initialize the rest randomly" When $R$ is increased, how precisely new elements are initialized for both iFNO (Loss) and iFNO (freq)? In the latter case, this is especially interesting since the importance of modes is judged based on the value of $R$ itself. 10. (page 5, Figure 4) I suggest specifying where FNO is and where iFNO is directly on the picture. 11. (page 7) "Our samples are images; hence, we can easily distinguish between easy and difficult samples based on their resolution. Higher-resolution images are more complex to train on, so it’s best to start with lower-resolution samples and gradually move up to higher-resolution ones." I have several objections. First, the resolution is not directly related to difficulty, it is easy to produce simplistic high-resolution images. Second, the difficulty in operator learning is the difficulty of the mapping, not the images themselves. For example, we can take an extremely detailed turbulent flow and map it into itself. This will be a trivial operator learning problem when FNO needs to learn an identity map. 12. (page 7, table 1), (page 8, table 2) Please report the relative $L_2$ norm in place of the value of the loss. Relative $L_2$ norm is a standard error metric present in most of the articles on neural operators, it is much easier to judge the quality of learned surrogate based on this metric. 13. (page 10) "The results in Table 1 show that iFNO achieves significantly better generalization performance than just incrementally increasing either frequency or resolution alone." The results in Table 1 show that all test losses are very close to each other. For example, for standard FNO we have $0.988 \pm 0.031$, and for best iFNO, we have $0.948 \pm 0.040$. They overlap even when a single std is considered. If we take $2$ std, there is no difference between these results. This is not a "significantly better generalization performance". 14. (page 10, figure 5b) iFNO starts training with a small number of active modes. What happens if one starts with a large number of active modes? Is this the case that iFNO will decrease the number of modes automatically? As a final comment, I want to suggest an analogy with another field of study. One can look at the approach proposed by authors from the perspective of numerical linear algebra. For a fixed grid resolution, the FNO kernel is a low-rank block circulant matrix (quasimatrix, if different resolutions are considered) and the authors propose an algorithm that dynamically selects the rank of this circulant. How to do this was extensively studied in many contexts, so I suggest authors look at this direction for inspiration. For example, in deep learning rank adaptation of this sort is proposed at https://arxiv.org/abs/2205.13571. More specifically, in this paper authors first draw an analogy between the optimization process and the solution of ordinary differential equation (gradient flow). Next, they apply rank adaptive ODE integrator to the resulting system of equations. This provides training coupled with rank adaptation. I hope this perspective can be of use to the authors of the current contribution. Broader Impact Concerns: na ================================================== Review 2: Summary: This paper proposes a modified training for Fourier Neural Operators to reduce required hyperparameter tuning by dynamically adjusting the number of frequency modes used internally by the model during training, and reduces training overheads by adjusting the resolution of training samples. The authors' approach increases the number of modes used based on signals from the training loss or from observed strength of the unused frequencies making adjustments when the overall strength of modes currently in use drops below a threshold. The paper includes tests on a variety of PDE systems. Strengths and Weaknesses: **Strengths** - Generally good self-contained introduction relating to FNO and issues in/expense of training - The new methods are well explained, and approach seems sensible - Good variety of target systems used to test the new approach **Weaknesses** - Experiment results and setup are harder to follow - Experiments in the appendix (and some in the main body) could use additional context and commentary - Tests on TFNO need more discussion and introduction Requested Changes: The largest category of changes I would appreciate would be significant additional clarity in the experiments section. In particular, some more commentary on how readers should interpret some of the results in the tables would, I think, be welcome. I found the details of the experiment setup somewhat scattered and collecting these and streamlining the section would reduce the reader's need to jump around the section for details or comparisons which I think made the section harder to follow and obscured the results. Below are some examples of questions that might benefit from some added discussion in the paper. 1. The need for additional commentary is particularly true for the tables and plots in appendix B. Some explanation would be helpful (for instance in the hyperparameter selection tables regarding what values are included and what factors were weighed in choosing the fixed values from these results). For iFNO (loss), was the parameter eta renamed to epsilon in the body? 2. Regarding the method listed as "iFNO" in each table (for example Table 1, 2, 3), which of the two (loss vs freq) approaches was used combined with the incremental resolution method? I believe in section 4.3 you mention that the "iFNO" entry could be using either. 3. In Tables 2 and 3 the bolding is sometimes split between means and standard deviations. In table 3 for Navier-Stokes FNO 3D training results the largest value is bolded. Are these correct? 4. In particular, the experiments on the TFNO are introduced *very* briefly and somewhat abruptly. I think these could especially use expansion or added explanation if these are important parts of testing the method. 5. For the results listed in section 5.4 some additional discussion references to some backing data would help as well. It looks like some of the relevant values are included in the appendix for model size (Table 12) but is there a reference for the efficiency gains for systems other than re5000 in table 1? And one minor caption issue I noticed on figure 8, it seems it should reference Navier-Stokes 3D (vs. 2D) based on the titles of the plots Broader Impact Concerns: No specific concerns ================================================== Review 3: Summary: The authors extend the standard approach to training an FNO with an incremental version that successively adds spectral bands to the inner Fourier-Transform layer. The extension is justified as being superior to grid search and classic regularization by, e.g. weight decay, in terms of both efficiency and accuracy (i.e. it is possibly more accurate than $\|\ell\|_2^2$ weight decay regularisation and at lower cost) Strengths and Weaknesses: Strengths: 1. The paper proposes, and empirically demonstrates, a solution to a real problem in FNOs, which is selecting them to have sufficiently many Fourier modes to produce accurate predictions, but also sufficiently few Fourier modes to be efficient. 2. the empirical study (section 6) or what happens to the modes under regularization schemes is neat. To my mind this is the biggest news in the paper and actually quite interesting. Weaknesses: 1. justification is largely heuristic ("we add modes as needed based on a reasonable rule of thumb that looks a bit like the Signal-to-Noise Ratio"). If it works, this is OK, I guess, but it would be great to have a justification that was easier to generalize because it was more formal. *Why* might we think that controlling the number of modes by a squared Frobenius norm will control anything about accuracy? 2. Accuracy is evaluated in terms of $L_2$ loss. There are a _lot_ of papers questioning whether this is a great way to measure accuracy of an operator; too many to list here, so I have (lazily) supplied an incomplete bibtex bibliography of some representative papers which interrogate this and consider at other losses (see below). This might feel like splitting hairs, but I argue it is important, since we are leaning on $L_2$ a lot now; both iFNO(freq) and iFNO(loss) do something that is similar to controlling $L_2$ loss, IMO, so we can reasonably ask if this procedure has done anything bad to other losses (Sobolev loses, $L_1$ losses, geometric distance in the natural manifold induced by the operator, _anything_ might be reassuring) 3. comparison should be against SOTA attempts to adaptively construct optimal FNOs. There are two of these of which I am aware 1. Tensorized FNO: http://arxiv.org/abs/2310.00120 — the authors, to their credit, do use this and argue that their method is complementary, becuase they have a combination they call iTFNO in B.6 so this feels *close* to done. The only problem is that I think that the heuristic argument about the Explanation Ratio in iFNO(freq) don't really make sense for a tensorized FNO (the parameters are even further from spectral 'importances' so... some kind of explanation of this seems necessary even if to say "our method seems empirical good for this even if our argument looks tenuous". I think the iFNO(loss) heuristic still works? 2. _Transform once_ https://proceedings.neurips.cc/paper_files/paper/2022/hash/342339109d7d756ef7bb30cf672aec02-Abstract-Conference.html which *also* adaptively selects spectral bands, as well as doing some other fancy stuff to achieve good results. This is a more direct "competitor" in that they also adaptively select spectra bands. They also, crucially, argue that selecting only the lowest modes is **not** optimal (see fig 3.2), which seems like it might contradict the major result of this paper, which is that iFNO is best, and it uses strict low-pass filtering. There might be a counter-argument, which is that the Transform Once method is tedious or something? But I think such an argument needs to be made for the paper's main claim as I understand it to stand. 4. occasionally confusing phrasing (see below) Requested Changes: on the basis of weakness 2 and 3.2 I argue that the submission is not "supported by accurate, convincing and clear evidence" of the main claim, that _We empirically show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets_. There exist other methods that do something similar and undermine or contradict the main claim of the paper as it currently stands, and which are only partially addressed in this text. Changes that would be needed to redress this are mentioned below. **in 4.1 there is some confusing phrasing:** > iFNO (Loss): A common practice is to use the training progress as a proxy to determine the time, as seen > in previous work such as Liu et al.. Specifically, we let the algorithm increase K only decrease in training > loss between N consecutive epochs is lower than a threshold ϵ I think the authors might mean something like > iFNO (Loss): A common practice is to _add modes at fixed epochs_ > in previous work such as Liu et al.. _By contrast we increase K adaptively when the decrement in training loss_ between N consecutive epochs is lower than a threshold ϵ Is that correct? Please clarify, this is a core bit of the paper. **what is iTFNO?** As discussed in the "weaknesses", I think this is an important bit of work that the authors seem to have done, but which I think the paper does not explain. It is mentioned in the experiments in the appendix, but I can't work out what it is. Am I missing something? I found the sentence saying "We also test our incremental methods on Tensorized Factorized Neural Operators Kossaifi et al. (2023) by sweeping over different factorization schemes and ranks for the weights" but as mentioned above I think this is incomplete. **deal with _transform once_** I humbly reuest the authors to either 1. relax the claim to exclude _transform once_ from consideration and justify that decision, or test against their method and demonstrate an improvement against it, as they have presumably done with TFNO (once they explain that) **losses other than $L_2$** As referenced above, a very-incomplete bibliography of papers which in various capacities investigate the deficiencies of $L_2$ loss for FNO evaluation and/or training. I don't necessarily think the authors need to include all of these, or even *any* of them, but I think their claims would be strengthened if they at least mentioned the concept of physics-specific losses, rather than the computationally-convenient $L_2$ loss. If they are wedded to the $L_2$ loss only, there needs to be a justification for this choice. There are at least two open-source tools which extend the losses for PDE solutions in various ways: * [pdebench/PDEBench: PDEBench: An Extensive Benchmark for Scientific Machine Learning](https://github.com/pdebench/PDEBench) * [pdearena/pdearena](https://github.com/pdearena/pdearena) ``` @misc{FaroughiPhysicsGuided2023, title = {Physics-{{Guided}}, {{Physics-Informed}}, and {{Physics-Encoded Neural Networks}} in {{Scientific Computing}}}, author = {Faroughi, Salah A. and Pawar, Nikhil and Fernandes, Celio and Raissi, Maziar and Das, Subasish and Kalantari, Nima K. and Mahjour, Seyed Kourosh}, year = {2023}, month = feb, number = {arXiv:2211.07377}, eprint = {2211.07377}, publisher = {arXiv}, doi = {10.48550/arXiv.2211.07377}, archiveprefix = {arXiv} } @article{LiPhysicsInformed2021, title = {Physics-{{Informed Neural Operator}} for {{Learning Partial Differential Equations}}}, author = {Li, Zongyi and Zheng, Hongkai and Kovachki, Nikola Borislavov and Jin, David and Chen, Haoxuan and Liu, Burigede and Stuart, Andrew and Azizzadenesheli, Kamyar and Anandkumar, Anima}, year = {2021}, month = nov } @misc{PestouriePhysicsenhanced2022, title = {Physics-Enhanced Deep Surrogates for {{PDEs}}}, author = {Pestourie, Rapha{\"e}l and Mroueh, Youssef and Rackauckas, Chris and Das, Payel and Johnson, Steven G.}, year = {2022}, month = nov, number = {arXiv:2111.05841}, eprint = {2111.05841}, publisher = {arXiv}, doi = {10.48550/arXiv.2111.05841}, archiveprefix = {arXiv} } @inproceedings{TakamotoPDEBench2022, title = {{{PDEBench}}: {{An Extensive Benchmark}} for {{Scientific Machine Learning}}}, shorttitle = {{{PDEBench}}}, author = {Takamoto, Makoto and Praditia, Timothy and Leiteritz, Raphael and MacKinlay, Dan and Alesiani, Francesco and Pfl{\"u}ger, Dirk and Niepert, Mathias}, year = {2022}, month = jun } @inproceedings{WangPhysicsinformed2020, title = {Towards {{Physics-informed Deep Learning}} for {{Turbulent Flow Prediction}}}, booktitle = {Proceedings of the 26th {{ACM SIGKDD International Conference}} on {{Knowledge Discovery}} \& {{Data Mining}}}, author = {Wang, Rui and Kashinath, Karthik and Mustafa, Mustafa and Albert, Adrian and Yu, Rose}, year = {2020}, month = aug, series = {{{KDD}} '20}, eprint = {1911.08655}, pages = {1457--1466}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, doi = {10.1145/3394486.3403198}, archiveprefix = {arXiv}, isbn = {978-1-4503-7998-4} } ``` Broader Impact Concerns: n/a ==================================================
# The Journey, Not The Destination: How Data Guides Diffusion Models Anonymous authors Paper under double-blind review ## Abstract Diffusion models trained on large datasets can synthesize photo-realistic images of remarkable quality and diversity. However, *attributing* these images back to the training data—that is, identifying specific training examples which *caused* an image to be generated—remains a challenge. In this paper, we propose a framework that: (i) provides a formal notion of data attribution in the context of diffusion models, and (ii) allows us to *counterfactually* validate such attributions. Then, we provide a method, Journey-trak, for computing these attributions efficiently. Finally, we apply Journey-trak to find (and evaluate) such attributions for denoising diffusion probabilistic models trained on CIFAR-10 and latent diffusion models trained on MS COCO. We provide code at this https URL. ## 1 Introduction Diffusion models can generate novel images that are simultaneously photorealistic and highly controllable via textual prompting (Ramesh et al., 2022; Rombach et al., 2022). A key driver of diffusion models' performance is training them on massive amounts of data (Schuhmann et al., 2022). Yet, this dependence on data has given rise to concerns about how diffusion models use it. For example, Carlini et al. (2021); Somepalli et al. (2022) show that diffusion models often memorize training images and "regurgitate" them during generation. However, beyond such cases of direct memorization, we currently lack a method for *attributing* generated images back to the most influential training examples—that is, identifying examples that *caused* a given image to be generated. Indeed, such a primitive—a data attribution method—would have a number of applications. For example, previous work has shown that attributing model outputs back to data can be important for debugging model behavior (Shah et al., 2022), detecting poisoned or mislabelled data (Lin et al., 2022), and curating higher quality training datasets (Khanna et al., 2019). Within the context of diffusion models, data attribution can also help detect cases of data leakage (i.e., privacy violations), and more broadly, can be a valuable tool in the context of tracing content provenance relevant to questions of copyright (Andersen et al., 2023; Images, 2023). Finally, synthetic images generated by diffusion models are now increasingly used across the entire machine learning pipeline, including training (Azizi et al., 2023) and model evaluation (Kattakinda et al., 2022; Wiles et al., 2022; Vendrow et al., 2023). Thus, it is critical to identify (and mitigate) failure modes of these models that stem from training data, such as bias propagation (Luccioni et al., 2023; Perera & Patel, 2023) and memorization. Motivated by all the above needs, we thus ask: How can we reliably attribute images synthesized by diffusion models back to the training data? Although data attribution has been extensively studied in the context of *supervised* learning (Koh & Liang, 2017; Ghorbani et al., 2019; Jia et al., 2019; Ilyas et al., 2022; Hammoudeh & Lowd, 2022; Park et al., 2023), the generative setting poses new challenges. First, it is unclear *what particular behavior* of these models we hope to attribute. For example, given a generated image, certain training images might be responsible for the look of the background, while others might be responsible for the choice of an object appearing in the foreground. Second, it is not immediately obvious how to *verify* the attributions. In supervised settings, a standard approach is to compare the outputs of the original model on given inputs with those of a new model ![1_image_0.png](1_image_0.png) t = T t = 0 Generative (Reverse Diffusion) Process Figure 1: **Overview of our attribution framework.** For a given synthesized image, we apply Journey-trak, our attribution method, at individual steps along the diffusion trajectory. At each step t, Journey-trak pinpoints the training examples with the highest influence (positive in green, negative in red) on the generative process at that step. In particular, positive influencers guide the trajectory towards the final sample, while negative influencers guide the trajectory away from it. We observe that negative influencers increasingly resemble the final sample (the grey text highlights notable differences with the final sample). For more examples, see Appendix E. trained on a new dataset after removing the attributed examples. However, in the generative setting it is less clear how to make such comparisons. Our contributions. In this work, we present a data attribution framework for diffusion models. This framework reflects, and is motivated by, the fact that diffusion models iteratively denoise an initial random seed to generate the final image. In particular, rather than attributing *only* the final generated image, i.e., the "destination," we attribute each individual step along the (denoising) "journey" taken by diffusion model (see Figure 1). This approach shifts our focus from the specific final image to the *distribution* of possible generated images and, in particular, how this distribution evolves across the diffusion process. As we demonstrate, this framework also enables us to attribute specific *features* of the final generated image. To analyze this framework, we introduce two complementary metrics for evaluating the resulting attributions based on their counterfactual impact on the distribution of generated images (rather than on specific samples). Finally, we provide an efficient method for computing such attributions, building on data attribution approaches developed for the supervised setting (Ilyas et al., 2022; Park et al., 2023). We then apply our method Journey-trak to denoising diffusion probabilistic models (DDPM) (Ho et al., 2020) trained on CIFAR-10 (Krizhevsky, 2009), and latent diffusion models (LDM) (Rombach et al., 2022) trained on MS COCO (Lin et al., 2014). In both of these settings, we obtain attributions that are validated by our metrics and also visually interpretable. ## 2 Preliminaries We first provide background on data attribution. Then, we give a brief overview of diffusion models, highlighting the components that we will need to formalize attribution for these models. ## 2.1 Data Attribution Broadly, the goal of training data attribution (Koh & Liang, 2017; Ilyas et al., 2022; Hammoudeh & Lowd, 2022; Park et al., 2023) is to trace model outputs back to the training data. Intuitively, we want to estimate how the presence of each example in the training set impacts a given model output of interest (e.g., the loss of a classifier) on a specific input. To formalize this, consider a learning algorithm A (e.g., a training recipe for a model), together with an input space Z and a training dataset S = (z1, . . . , zn) ∈ Zn of n datapoints from that input space. Given a datapoint z ∈ Z, we represent the model output via a model output function f(*z, θ*(S)) : Z × R d → R, where θ(S) ∈ R d denotes the model parameters resulting from running algorithm A on the dataset S. For example, f(*z, θ*(S)) is the loss on a test sample z of a classifier trained on S. ( Our notation here reflects the fact that the parameters are a function of the training dataset S.) We now define a *data attribution method* as a function τ : *Z × Z*n → R n that assigns a score τ (*z, S*)i ∈ R to each training example zi ∈ S. 1Intuitively, we want τ (*z, S*)i to capture the change in the model output function f(*z, θ*(S)) induced by adding zi to the training set. More generally, these scores should help us make *counterfactual* predictions about the model behavior resulting from training on an arbitrary subset S ′ ⊆ S of the training datapoints. We can formalize this goal using the datamodeling task (Ilyas et al., 2022): given an arbitrary subset S ′ ⊆ S of the training set, the task is to predict the resulting model output f(*z, θ*(S ′)). A simple method to use the attribution scores for this task, then, is to consider a *linear* predictor: f(*z, θ*(S ′)) ≈Pi:zi∈S′ τ (*z, S*)i. 2 This view of the data attribution as a prediction task motivates a natural metric for evaluating attribution methods: the agreement between the true output f(*z, θ*(S ′)) and the output predicted by the attribution method τ . Park et al. (2023) consider the rank correlation between the true and predicted values of f(*z, θ*(S ′)) over different random samples S ′ ⊆ S and name the corresponding metric the *linear datamodeling score*—we will adapt it to our setting in Section 3. Estimating attribution scores (efficiently). Given the model output function f evaluated at input z, a natural way to assign an attribution score τ (z)i for a training datapoint ziis to consider the *marginal* effect of including that particular example on the model output, i.e., have τ (z)i = f(z, θ(S)) − f(z, θ(S \ {zi})). We can further approximate this difference by decomposing it as: $\tau(z)_{i}=$ (i) change in model parameters. $$\left(1\right)$$ ∇θf(*z, θ*) , (1) $\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{b}$. (ii) change in model output : - $\overbrace{\nabla_{\theta}f\big(z,\theta\big)}^{\text{}}$ $\qquad$, where θ−i denotes θ(S \ {i}) (Wojnowicz et al., 2016; Koh & Liang, 2017). We can compute the second component efficiently, as this only requires taking the gradient of the model output function with respect to the parameters; in contrast, computing the first component is not always straightforward. In simpler settings, such as linear regression, we can compute the first component explicitly, as there exists a closed-form solution for computing the parameters θ(S ′) as a function of the training set S ′. However, in modern, non-convex settings, estimating this component efficiently (i.e., without re-training the model) is challenging. Indeed, prior works such as influence functions (Koh & Liang, 2017) and TracIn (Pruthi et al., 2020) estimate the change in model parameters using different heuristics, but these approaches can be inaccurate in such settings. To address these challenges, trak (Park et al., 2023) observed that for deep neural networks, approximating the original model with a model that is *linear* in its parameters, and averaging the estimates over multiple θ's (to overcome stochasticity in training) yields highly accurate attribution scores. The linearization is motivated by the observation that at small learning rates, the trajectory of gradient descent on the original neural network is well approximated by that of a corresponding linear model (Long, 2021; Wei et al., 2022; Malladi et al., 2022). In this paper, we will leverage the trak framework towards attributing diffusion models. ## 2.2 Diffusion Models Training and sampling from diffusion models. At a high level, diffusion models (and generative models, more broadly) learn a distribution pθ(·) meant to approximate a target distribution q*data*(·) of interest (e.g., natural images). To perform such learning, given a (training) sample x0 ∼ qdata(·), diffusion models first apply a stochastic *diffusion process* that gradually corrupts x0 by adding more noise to it at each step. This results in a sequence of intermediate latents {xt}t∈[T] sampled according to xt ∼ N (αt · xt−1,(1 − αt) · I) where {αt}t are parameters of the diffusion process (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho 1Following the literature, we say that an example zi has a positive (respectively, negative) influence if τ(*z, S*)i > 0 (respectively, τ(*z, S*)i < 0). 2Similarly to the prior work (Park et al., 2023), we only consider linear predictors here. ![3_image_0.png](3_image_0.png) Figure 2: **Samples from a diffusion trajectory.** We show samples from pθ(·|xt), i.e., the distribution of final images x0 conditioned on initializing from the latent xt at step t, and the corresponding approximation xb t 0 (a proxy for the expectation of this distribution, i.e., Ex0∼pθ(·|xt)[x0]) for different values of t, together with the final generated image x0. et al., 2020). Then, based on such sequences of intermediate latents, diffusion models learn a "denoising" neural network εθ that attempts to run the diffusion process "in reverse." Once such a diffusion model is trained, one can sample from it by providing that model with an initial seed xT ∼ N (0, 1) (i.e., just a sample of random noise), and then applying the (trained) denoising network iteratively at each step t (from t = T to t = 0) to sample the corresponding *diffusion trajectory* {xt}t∈[T], ultimately leading to a final sample x0 ∼ pθ(·) ≈ q*data*(·). Conditioning sampling on partially denoised images. Importantly, in this work, it will be also useful to consider the process of sampling a final image x0 when "resuming" the diffusion process after running it up to some step t—this is equivalent to continuing that process at step t from the corresponding intermediate latent xt. We denote the distribution arising from sampling an image x0 when conditioning on the latent xt by pθ(·|xt). Also, it turns out that we can approximate the multi-step denoising process of generating samples from pθ(·|xt) in a single step with the formula xb t 0 := c1(αt) · (xt − c2(αt · εθ(xt, t))), for some constants c1(·), c2(·) that depend on the diffusion parameters {αt}t (Ho et al., 2020). In fact, xb t 0 is a proxy for the conditional expectation Ex0∼pθ(·|xt)[x0], and under certain conditions xb t 0 is precisely equivalent to this expectation (Song et al., 2023; Daras et al., 2023).3 See Figure 2 for an illustration of pθ(·|xt) and xb t 0 for different values of t. Types of diffusion models. Finally, Denoising Diffusion Probabilistic Models (DDPMs) are a popular instantiation of diffusion models (Ho et al., 2020). More recently, Rombach et al. (2022) proposed a new class of diffusion models called latent diffusion models (LDMs), which perform the above stochastic process in the latent space of a pretrained encoder network. Moreover, Song et al. (2021); Ho & Salimans (2022) show that one can also *condition* diffusion models on some additional information, e.g. a text prompt. This way, one can control the semantics of the generated images by specifying such a text prompt. In this work, we will instantiate our data attribution framework on both unconditional DDPMs and conditional LDMs. ## 3 A Data Attribution Framework For Diffusion Models In this section, we introduce our framework for attributing samples generated by diffusion models back to their training data. To this end, we will specify both *what* to attribute as well as how to *verify* the attributions. Specifically, in Section 3.1 we define data attribution for diffusion models as the task of understanding how training data influence the *distribution* over the final images at each step of the diffusion process. Then, in Section 3.2, we describe how to evaluate and verify such attributions. ## 3.1 Attributing The Diffusion Process Step By Step Diffusion models generate images via a *multi-step* process. We thus decompose the task of attributing a final synthesized image into a corresponding series of stages, with each stage providing attributions for a single step of the diffusion process. This stage-wise decomposition allows for: 3This equivalence is referred to as the *consistency* property. ![4_image_0.png](4_image_0.png) Figure 3: **Specific features appearing at specific steps. (Left)** For a given image of a horse (x0) generated by a CIFAR-10 DDPM model, we plot the likelihood that samples from the distribution pθ(·|xt) (see Section 2.2) are classified as a horse according to a CIFAR-10 classifier. This likelihood increases rapidly around steps 650 to 500, suggesting that these steps are most responsible for the formation of this feature. (Top) For three steps t in this range, we visualize samples from pθ(·|xt). (**Bottom**) At each of these steps, we also visualize the training examples with the highest influence (positive in green, negative in red) identified by Journey-trak. Note that once the "horse" feature begins to appear (around t = 575), positive influencers begin to reflect it; and after this feature is "decided" (around t = 500), negative influencers *also* do so. fl fl - **Fine-grained analysis.** Identifying influential training examples at each individual step gives us a fine-grained understanding of how data "guides" the diffusion process. This, in turn, allows us to capture, for example, that in some cases the same training example might be positively influential at early steps but negatively influential at later steps (see Appendix B.2). - **Computational feasibility.** Computing gradients through a single step requires only a single backwards pass. So, it becomes feasible to apply existing efficient data attribution methods (Park et al., 2023; Pruthi et al., 2020) that involve computing gradients. - **Feature-level attribution.** As we demonstrate below, features tend to form only within a small number of steps of the diffusion process. Thus, attributing at an individual step level allows us to isolate influences of training points on formation of specific features within the final generated image. It remains now to define what exactly to attribute to the training data at each step. To this end, we first motivate studying the conditional distribution pθ(·|xt) (see Section 2.2) as a way to quantify the impact of each step t of the diffusion process to the final sample x0. Next, we highlight how analyzing the evolution of this distribution over steps t can connect individual steps to specific features of interest. Finally, building on these observations, we formalize our framework as attributing properties of this distribution pθ(·|xt) at each step t to the training data. Studying the distribution pθ(·|xt). At a given step t of the generative process, the relevant information about the process up to that point is contained in the latent xt. While xt itself might not correspond to a natural image, we can use it to directly sample from pθ(·|xt), i.e., the distribution of possible final images x0 when resuming the diffusion process at step t with latent xt. When t = T, this distribution is precisely the diffusion model's learned distribution pθ(·), and at t = 0 it is simply the final sampled image x0. So, intuitively, the progression of this conditional distribution over steps t informs us how the model gradually "narrows down" the possible distribution of samples to generate the final sample x0 (see Figure 2 for an illustration). A natural way to understand (and attribute) the impact of applying the diffusion process at each step t on the final image x0 is thus to understand how this conditional distribution pθ(·|xt) evolves over steps. Connecting features to specific steps. Given a final generated image, there might be many possible features of interest within this image. For example, for x0 in Figure 2, we might ask: Why is there a grey bird? Why is the background white? How can we quantify the impact of a particular step t on a given feature in the final image? To answer this question, we simply sample from the conditional distribution pθ(·|xt) and measure the fraction of samples that contain the feature of interest. Now, if we treat this (empirical) likelihood as a function of t, the steps at which there is the largest increase in (i.e., the steepest slope of) likelihood are most responsible for the presence of this feature in the final image. In fact, it turns out that such rapid increase in likelihood often happens within only a small interval; we observe this phenomenon for both small-scale unconditional models (DDPM trained on CIFAR-10, Figure 3) and large-scale text-conditional models (Stable Diffusion v2 trained on LAION-5B, Appendix B.3). As a result, we are able to tie the presence of a given feature in the final image back to a small interval of steps t in the sampling process. This "phase transition" phenomenon has been observed and studied in concurrent work (Li & Chen, 2024; Sclocchi et al., 2024). In Figure E.4, we further explore this phenomenon for both different generated images and classifiers. Implementing our approach. To implement our step-by-step attribution approach, we need a model output function (see Section 2.2) that is specific to a step t. As we motivated above, this function should be applied to samples from the conditional distribution pθ(·|xt). To that end, we introduce a step-specific model output function ft(pθ(S)(·|xt), θ(S)). The function ft is intended to measure properties of the distribution pθ(S)(·|xt). For example, in Section 4 we define a concrete instantiation of ft that approximates the likelihood of the model to generate individual samples from pθ(S)(·|xt). Adapting the general definition of data attribution from Section 2.1, we can now define *data attribution for diffusion models* at a step t as a function τt that assigns a score τt(xt, S)i to each training example zi ∈ S. This score indicates the change in ft(pθ(S)(·|xt), θ(S)) induced by adding zi to S. ## 3.2 Validating Data Attribution For Diffusion Models Visual inspection of the attributed training datapoints is a common heuristic for evaluating the quality of data attribution. However, visual similarity is not always reliable (Ilyas et al., 2022; Park et al., 2023). In particular, applications of data attribution such as data curation or model debugging often require that the attributions are *causally predictive*. Motivated by that, we evaluate attribution scores according to how accurately they reflect the corresponding training examples' *counterfactual* impact on the conditional distribution pθ(·|xt) using two different metrics, defined below. Linear datamodeling score. The linear datamodeling score (LDS) is a measure of the effectiveness of a data attribution method that was introduced in Ilyas et al. (2022); Park et al. (2023) (see Section 2.1). This metric quantifies how well the attribution scores can predict the exact *magnitude* of change in model output induced by (random) variations in the training set. In our setting, we apply it to the step-specific model output function ft(pθ(S)(·|xt), θ(S)). Specifically, we use the attribution scores τ to predict the diffusion-specific model output function ft(pθ(S)(·|xt), θ(S)) as $$g_{\tau}(p_{\theta(S)}(\cdot|\mathbf{x}_{t}),S^{\prime};S)\coloneqq\sum_{i\ :\ z_{i}\in S^{\prime}}\tau(\mathbf{x}_{t},S)_{i}.\tag{1}$$ $$s)(\cdot|\mathbf{x}_{t}),\theta(S_{j})):j\in[t]$$ Then, we can measure the degree to which the predictions gτ (pθ(S)(·|xt), S′; S) are correlated with the true outputs ft(pθ(S)(·|xt), θ(S ′)) using the LDS: $\pi(\mathbb{C})$ LDS(τ, xt) := ρ({ft(pθ(S)(·|xt), θ(Sj )) : j ∈ [m]}, {gτ (pθ(S)(·|xt), Sj ; S) : j ∈ [m]}), where {S1*, . . . , S*m : Si ⊂ S} are randomly sampled subsets of the training set S and ρ denotes Spearman's rank correlation (Spearman, 1904). To decrease the cost of computing LDS, we use xb t 0 in lieu of samples from pθ(S)(·|xt) (see Section 2.2), since, as noted in Section 2.2, xb t 0 turns out to be a good proxy for the the latter quantity. In other words, we consider ft and gτ as functions of xb t 0 rather than pθ(S)(·|xt). $$\left(2\right)$$ $\mathbf{v}$ $\mathbf{s})(\cdot|\mathbf{x}_t),S_j$;. $S):j\,$. $$\in[m]\}),$$ Retraining without the most influential images. In practice, we may want to use the data attributions to intentionally steer the diffusion model's output. For example, we may want to remove all training examples that cause the resulting model to generate a particular style of images. To evaluate the usefulness of a given data attribution method in these contexts, we remove from the training set the most influential (i.e., highest scoring) images for a given target xt, retrain a new model θ ′, then measure the change in the conditional distribution pθ(·|xt) (see Section 2.2) when we replace θ with θ ′ only in the neighborhood of step t in the reverse diffusion process. If the data attributions are accurate, we expect the conditional distribution to change significantly (as measured in our case using the FID distance for images (Heusel et al., 2017)). As we consider data attributions that are specific to each step, in principle we should use the denoising model *only* for the corresponding step t. However, the impact of a single step on the final distribution might be small, making it hard to measure. Hence, we assume that attributions change only gradually over steps and replace the denoising model for a *small interval* of steps (i.e., between steps t and t − ∆). The first metric (LDS) is cheaper to evaluate, as we can re-use the same set of models to evaluate attributions for different target images and from different attribution methods. On the other hand, the latter metric more directly measures changes in the conditional distribution pθ(·|xt), so we do not need to rely on a specific choice of a model output function ft. ## 4 Efficiently Computing Attributions For Diffusion Models In this section, we describe how we can efficiently estimate data attributions for diffusion models using trak (Park et al., 2023). As we described in Section 2.1, we can decompose the task of computing data attribution scores into estimating two components: (i) the change in model parameters, and (ii) the induced change in model output. Following trak (Park et al., 2023), computing the first component (change in model parameters) only requires computing per-example gradients of the training loss (and in particular, does not require any re-training per each training datapoint). Similarly, computing the second component (change in model output) only requires computing gradients with respect to the model output function of choice (see Appendix D, as well as Section 3 of Park et al. (2023) for details). We now describe how we arrive at Journey-trak by adapting the estimation of the above two components to the diffusion model setting. Estimating the change in model parameters. For diffusion models, the training process is much more complicated than the standard supervised settings (e.g., image classification) considered in Park et al. (2023). In particular, one challenge is that the diffusion model outputs a high-dimensional vector (an image) as opposed to a single scalar (e.g., a label). Even if we approximate the diffusion model as a *linear* model in parameters, naively applying trak would require keeping track of p gradients for each training example (where p is the number of pixels) and thus be computationally infeasible. Nonetheless, it is still the case that the presence of a single training example influences the optimization trajectory *only* via the gradient of the loss on that example—specifically, the MSE of the denoising objective. Hence, it suffices to keep track of a single gradient for each example. This observation allows us to estimate the change in model parameters using the same approach that trak uses (see Section 2.1). An additional challenge is that the gradient updates in the diffusion process are highly stochastic due to the sampling of random noise. To mitigate this stochasticity, we average the training loss over multiple resampling of the noise at randomly chosen steps and compute gradients over this averaged loss. A model output function for diffusion models. In Section 3, we motivated why we would like to attribute properties of the conditional distribution pθ(S)(·|xt), i.e., the distribution that arises from sampling when conditioning on an intermediate latent xt. Specifically, we would like to understand what training data causes the model to generate samples from this distribution. Then, one natural model output function ft would be to measure the likelihood of the model to generate these samples. Attributing with respect to such a choice of ft allows us to understand what training examples increase or decrease this likelihood. In order to efficiently implement this model output function, we make two simplifications. First, sampling from pθ(S)(·|xt) can be computationally expensive, as this would involve repeatedly resampling parts of the diffusion trajectory. Specifically, sampling once from pθ(S)(·|xt) requires applying the diffusion model t Algorithm 1 Journey-trak, a data attribution method for diffusion models 1: **Input:** Model checkpoints {θ ⋆ 1 , ..., θ⋆M}, training dataset S = {z1*, ...,* zN }, target sequence {x1*, ...,* xT } corresponding to T steps, projection dimension k ∈ N. 2: **Output:** Attribution scores τ (xt, S) ∈ R N for each t 3: ftrain(x, θ) := Eε,t ε − εθ(S) √α¯tx + √1 − α¯tε, t 2 2▷ DDPM training loss 4: ft (·, θ) defined as in Equation (3) ▷ Step-specific model output function ft 5: for m ∈ {1*, . . . , M*} do 6: P ∼ N (0, 1)p×k ▷ Sample random projection matrix 7: for i ∈ {1*, . . . , N*} do 8: ϕi ← P⊤∇θftrain(zi, θ⋆m) ▷ Compute training loss gradient at θ ⋆ m and project 9: **end for** 10: for t ∈ {1*, . . . , T*} do 11: xb (t) 0 ← c1(αt) · (xt − c2(αt · εθ ⋆m (xt, t))) ▷ Compute expectation of conditional distribution 12: gi ← P⊤∇θft(xb (t) 0 , θ⋆m) ▷ Compute model output gradient at θ ⋆ m and project 13: **end for** 14: Φm ← [ϕ1; *· · ·* ; ϕN ] ⊤ 15: Gm ← [g1; *· · ·* ; gT ] ⊤ 16: **end for** 17: [τ (x1, S); · · · ; τ (xT , S)] ← 1m P M m=1 Φm(Φ⊤mΦm) −1Gm ▷ Average scores over checkpoints 18: **return** {τ (xt, S)} times—in practice, t can often be as large as 1000. Fortunately, as we described in Section 2.2, we can use the one-step estimate xb t 0 as a proxy for samples from pθ(S)(·|xt), since it approximates this distribution's expectation Ex0∼pθ(·|xt)[x0]. Second, it is computationally expensive to compute gradients with respect to the exact likelihood of generating an image. So, as a more tractable proxy for this likelihood, we measure the reconstruction loss4(i.e., how well the diffusion model is able to denoise a noisy image) when adding noise to xb t 0 with magnitude matching the sampling process at step t. Specifically, we compute the Monte Carlo estimate $$f_{t}\left(\widehat{\mathbf{x}}_{0}^{t},\theta(S)\right)=\sum_{i=1}^{k}\left\|\mathbf{\varepsilon}_{i}-\mathbf{\varepsilon}_{\theta(S)}\left(\sqrt{\bar{\alpha}_{t}}\widehat{\mathbf{x}}_{0}^{(t)}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\varepsilon}_{i},t\right)\right\|_{2}^{2},$$ $$\left({\mathfrak{3}}\right)$$ , (3) where α¯t is the DDPM5 variance schedule (Ho et al., 2020), εi ∼ N (0, 1) for all i ∈ [k], and k is the number of resampling rounds of the random noise ε. Now that we have chosen our model output function, we can simply compute gradients with respect to this output to obtain the second component in Equation (1). The final algorithm. We summarize our algorithm Journey-trak for computing attribution scores in Algorithm 1. We approximate the training loss (line 3) with different samples of noise ε and step t. Note that to attribute a new target sequence, we only have to recompute lines 10-12. Comparison with trak. Our method Journey-trak is inspired by trak, an attribution method for supervised settings. The main difference comes from the diffusion-specific model output function ft(·, θ). In particular, it is not immediately clear how to design such a function for the multi-step, sampling process in diffusion models in a way that is both computationally efficient and counterfactually predictive. We show that our choice for ft described in Equation (3) gives us an efficient proxy for samples from pθ(S)(·|xt), leading to a fast and effective data attribution method. ![8_image_0.png](8_image_0.png) Figure 4: **Predicting model behavior.** The counterfactual predictiveness of attributions measured using the LDS score along the diffusion trajectory (at every 100 steps) for three different methods: Journey-trak (computed using 10 and 50 model checkpoints), CLIP similarity, and pixel similarity. Smaller steps are closer to the final sample. Shaded areas represent standard error. ![8_image_1.png](8_image_1.png) Figure 5: **Retraining without top influencers.** Change in the distribution of generated images pθ(·|x400) when substituting the original model with a new model only between steps 400 and 300. The new model is trained without the k top influencers of x400 according to attributions from Journey-trak (computed at step 400), CLIP similarity, or pixel similarity. To evaluate the change in distribution, we measure the increase in FID score over a baseline of models trained on the full dataset (see Section 3.2 for details). Bars represent standard error. ## 5 Experiments To evaluate our data attribution method, we apply it to DDPMs trained on CIFAR-10 and LDMs trained on MS COCO. First, in Section 5.2, we visually inspect and interpet our attributions, and then in Section 5.3 we evaluate their counterfactual significance using the metrics we introduced in Section 3.2. In Section 5.4, we further explore how our data attributions can be localized to patches in pixel space. Finally, in Section 5.5, we investigate the value of our step-specific attributions for attributing the full diffusion trajectory. 4The reconstruction loss is a proxy for the likelihood of the generated image, as it is proportional to the evidence lower bound (ELBO) (Sohl-Dickstein et al., 2015; Song et al., 2023). 5We only consider DDPM schedulers in this work. The above derivation can be easily extended to other schedulers. ## 5.1 Experimental Setup We compute our data attribution scores using 100 DDPM checkpoints trained on CIFAR-10 and 50 LDM checkpoints trained MS COCO (see Appendix A for training details.). As baselines, we compare our attributions to two common image similarity metrics—CLIP similarity (i.e., cosine similarity in the CLIP embedding space) and cosine similarity in pixel space. ## 5.2 Qualitative Analysis Of Attributions In Figure 1, we visualize the sampling trajectory for an image generated by an MS COCO model, along with the most positive and negative influencers identified by Journey-trak (see Appendix E for additional visualizations of identified attributions on CIFAR-10 and MS COCO). We find that positive influencers tend to resemble the generated image throughout, while negative influencers tend to differ from the generated image along specific attributes (e.g., class, background, color) depending on the step. Interestingly, the negative influencers increasingly resemble the generated image towards the end of the diffusion trajectory. Intuitively, we might expect that negative influencers would not resemble the final generated image, as they should to steer the trajectory away from that image. So, why do they in fact reflect features of the final generated image? To answer this question, we study the relationship between the top (positive and negative) influencers and the distribution pθ(·|xt) towards which we target our attributions. In Figure 3, for a given image of a horse generated by our CIFAR-10 DDPM, we plot the likelihood that images from pθ(·|xt) containing a horse (according to a classifier trained on CIFAR-10) as a function of the step t (left). We also show the top and bottom influencers at three points along the trajectory (right). We find that the top influencers begin to reflect the feature of interest once the likelihood of this feature begins to grow. Yet, once the likelihood of the feature reaches near certain, the negative influencers *also* begin to reflect this feature. This behavior has the following intuitive explanation: after this point, it would be impossible to "steer" the trajectory away from presenting this feature. So, the negative influencers at later steps might now steer the trajectory away from other features of the final image (e.g., the color of horse) that has not yet been decided at that step. Additionally, images that do not reflect the "decided" features might no longer be relevant to steering the trajectory of the diffusion process. ![9_image_0.png](9_image_0.png) Figure 6: **Patch-based attribution.** We adapt Journey-trak to restrict attribution to user-specified patches of a generated image. We show examples of attributing patches capturing individual concepts in images synthesized by a latent diffusion model trained on MS COCO. Attributions are computed at step t = 400. ## 5.3 Counterfactually Validating The Attributions We now evaluate our attributions using the metrics introduced in Section 3.2 to validate their counterfactual significance. LDS. We sample 100 random 50% subsets of CIFAR-10 and MS COCO, and train five models per mask. Given a set of attribution scores, we then compute the Spearman rank correlation (Spearman, 1904) between the predicted model outputs gτ (·) (see Eq. (2)) on each training data subset according to the attributions and the (averaged) actual model outputs. To evaluate the counterfactual significance of our attributions over the course of the diffusion trajectory, we measure LDS scores at every 100 steps over the 1000 step sampling process. In Figure 4, we plot LDS scores for CIFAR-10 (left) and MS COCO (right) over a range of steps for our attribution scores as well as the two similarity baselines. Unlike in many computer vision settings (Zhang et al., 2018), we find that for CIFAR-10, similarity in pixel space achieves competitive performance, especially towards the start of the diffusion trajectory. However, for both CIFAR-10 and MS COCO, only Journey-trak is counterfactually predictive across the entire trajectory. Retraining without the most influential images. We compute attribution scores on 50 samples from our CIFAR-10 and MS COCO models at step t = 400. Given the attribution scores for each sample, we then retrain the model after removing the corresponding top k influencers for k ∈ {200, 500, 1000}. We sample 5000 images from two distributions: (1) the distribution arising from repeatedly initializing at x400 and sampling the final 400 steps from the original model; and (2) the distribution arising from repeating the above process but using the retrained model only for steps t = 400 to t = 300. We then compute FID distance between these distributions, and repeat this process for each sample at each value of k. In Figure 5, we display the average FID scores (a measure of distance from the original model) after removing the k most influential images for a given sample across possible values of k. We notice that, for all values of k, removing the top influencers identified by our attribution method has a greater impact than removing the most similar images according to CLIP or pixel space similarities. ## 5.4 Localizing Our Attributions To Patches In Pixel Space In Section 3, we discussed how step-by-step attribution allows us to attribute particular features appearing within a particular interval of steps. However, some features may appear together within a small interval, making it hard to isolate them only based on the step. Here we explore one possible approach for better isolating individual features: selecting a region of pixels (i.e., a *patch*) in a generated sample corresponding to a feature of interest, and restricting our model output function to this region. This way, we can restrict attributions only to the selected patch, which can be useful for understanding what caused a specific feature to appear (see Figure 6). To implement this model output function, we simply apply a pixel-wise binary mask to Equation (3) and ignore the output outside of the masked region. To test this approach, we generate images containing multiple features with an MS COCO-trained LDM. We then manually create per-feature masks for which we compute attribution scores with Journey-trak (see Figure 6). The resulting attributions for different masks surface training examples relevant *only* to the corresponding features in that region. ## 5.5 "Forgetting" How To Generate An Image Our attribution scores and evaluation metrics are all step-specific. However, in practice we might care about identifying training images that impact the *full* diffusion pipeline. In particular, we might be interested in whether removing the important training images for a given synthesized image causes the diffusion model to "forget" how to generate this image. Specifically, given a set of attribution scores for a synthesized image, we remove the top k influencers (at step t = 300), retrain the model, and generate new images from scratch using the same random seed. Here, we leverage the fact that two diffusion models trained on the same dataset tend to generate similar images given the same random seed (Song et al., 2021); see Appendix B.1 for more details. We then compare the change in pixel space between the original and newly generated image. This process is distinct from our second evaluation metric, as (1) we directly compare two images rather than measure the distance between distributions, and (2) we re-generate images with our new model from scratch rather than restarting from some intermediate latent xt and substituting the new model for only a small interval of steps (between t and t − ∆). We perform this process for our attribution scores on CIFAR-10 as well as the two similarity baselines (see Figure 7). Our results suggest that Journey-trak is able to identify influential images that have a significant impact on the full diffusion trajectory of the diffusion model. ![11_image_0.png](11_image_0.png) Figure 7: **"Forgetting" an image.** We quantify the impact of removing the highest scoring training examples according to Journey-trak, CLIP similarity, and pixel similarity (and re-training). **(Left)** We compare the original synthesized samples to those generated from the same random seed with the re-trained models. **(Right)** To quantify the impact of removing these images, we measure the ℓ2 distance between 60 synthesized samples and corresponding images generated by the re-trained models. Black bars represent standard error. fl ## 6 Related Work While most prior works on data attribution focus on the supervised setting, some more recent works study attribution in generative settings, e.g. audio models Deng & Ma (2023). For diffusion models, recently Wang et al. (2023) propose a method for *efficiently evaluating* data attribution methods for generative models by creating custom datasets with known ground-truth attributions. Concurrently to this work, Zheng et al. (2023) use Journey-trak to attribute diffusion models *across timesteps*, i.e., they provide global attributions; they rely on heuristic design choices to design ft . Lin et al. (2024) also propose a *global* attribution method; their method is based on Shapley values, a concept from economics; and Wang et al. (2024) propose a method based on approximate unlearning. Dai & Gifford (2023) employ ensembling techniques to attribute diffusion models, again on a global scale. We discuss additional related work in Appendix C. ## 7 Conclusion In this work, we introduce a framework for data attribution for diffusion models and provide an efficient method for computing such attributions. In particular, we formalize data attribution in this setting as task of quantifying how individual training datapoints influences the distribution over final images *at each step* of the diffusion process. We demonstrate the efficacy of our approach on DDPMs trained on CIFAR-10 and LDMs trained on MS COCO. Our framework also constitutes a step towards better understanding of how training data influences diffusion models. There are several directions for potential improvements and future work. First, our particular instantiation of the framework relies on proxies for the distribution pθ(·|xt) of final generated images conditioned on a given step t, as well as for the likelihood of generating a given image. So, identifying more accurate proxies could help improve the quality of the resulting attributions. More broadly, we evaluate our framework on two academic-size datasets, but the most popular diffusion models (such as Stable Diffusion) are larger and trained on significantly larger datasets. Thus, while feasible in principle, scaling our framework to such settings is important. Finally, while we study the task of attributing individual steps, it would be valuable to perform data attribution for the full diffusion process. ## References Alessandro Achille, Aditya Golatkar, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. Lqf: Linear quadratic fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. Sarah Andersen, Kelly McKernan, and Karla Ortiz. Class-action complaint against stability ai, 2023. URL https://stablediffusionlitigation.com/pdf/00201/1-1-stable-diffusion-complaint.pdf. Case 3:23-cv-00201. Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J Fleet. Synthetic data from diffusion models improves imagenet classification. *arXiv preprint arXiv:2304.08466*, 2023. Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger Grosse. If influence functions are the answer, then what is the question? In *ArXiv preprint arXiv:2209.05364*, 2022. Samyadeep Basu, Xuchen You, and Soheil Feizi. Second-order group influence functions for black-box predictions. In *International Conference on Machine Learning (ICML)*, 2019. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In *30th USENIX Security Symposium (USENIX Security 21)*, 2021. Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. *arXiv preprint* arXiv:2301.13188, 2023. Zheng Dai and David K Gifford. Training data attribution for diffusion models. *arXiv preprint* arXiv:2306.02174, 2023. Giannis Daras, Yuval Dagan, Alexandros G Dimakis, and Constantinos Daskalakis. Consistent diffusion models: Mitigating sampling drift by learning to be consistent. *arXiv preprint arXiv:2302.09057*, 2023. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *Computer Vision and Pattern Recognition (CVPR)*, 2009. Junwei Deng and Jiaqi Ma. Computational copyright: Towards a royalty model for ai music generation platforms. *arXiv preprint arXiv:2312.06646*, 2023. Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 33, pp. 2881–2891, 2020. Qianli Feng, Chenqi Guo, Fabian Benitez-Quiroz, and Aleix M Martinez. When do gans replicate? on the choice of dataset size. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 6701–6710, 2021. Amirata Ghorbani and James Zou. Data shapley: Equitable valuation of data for machine learning. In International Conference on Machine Learning (ICML), 2019. Amirata Ghorbani, James Wexler, James Zou, and Been Kim. Towards automatic concept-based explanations. arXiv preprint arXiv:1902.03129, 2019. Zayd Hammoudeh and Daniel Lowd. Training data influence analysis and estimation: A survey. In arXiv preprint arXiv:2212.04612, 2022. Frank R Hampel, Elvezio M Ronchetti, Peter J Rousseeuw, and Werner A Stahel. *Robust statistics: the* approach based on influence functions, volume 196. John Wiley & Sons, 2011. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In *Neural Information Processing* Systems (NeurIPS), 2017. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *Neural Information* Processing Systems (NeurIPS), 2020. Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting predictions from training data. In *International Conference on Machine Learning (ICML)*, 2022. Getty Images. Getty images (us), inc. v. stability ai, inc, 2023. URL https://fingfx.thomsonreuters. com/gfx/legaldocs/byvrlkmwnve/GETTY%20IMAGES%20AI%20LAWSUIT%20complaint.pdf. Case 1:23-cv00135-UNA. Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve Gürel, Bo Li, Ce Zhang, Dawn Song, and Costas J. Spanos. Towards efficient data valuation based on the shapley value. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, 2019. Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, and Dawn Song. Scalability vs. utility: Do we have to sacrifice one for the other in data importance quantification? In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2021. Zahra Kadkhodaie, Florentin Guth, Eero P Simoncelli, and Stéphane Mallat. Generalization in diffusion models arises from geometry-adaptive harmonic representation. *arXiv preprint arXiv:2310.02557*, 2023. Priyatham Kattakinda, Alexander Levine, and Soheil Feizi. Invariant learning via diffusion dreamed distribution shifts. *arXiv preprint arXiv:2211.10370*, 2022. Rajiv Khanna, Been Kim, Joydeep Ghosh, and Sanmi Koyejo. Interpreting black box predictions using fisher kernels. In *The 22nd International Conference on Artificial Intelligence and Statistics*, 2019. Valentin Khrulkov, Gleb Ryzhakov, Andrei Chertkov, and Ivan Oseledets. Understanding ddpm latent codes through optimal transport. *arXiv preprint arXiv:2202.07477*, 2022. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, 2017. Alex Krizhevsky. Learning multiple layers of features from tiny images. In *Technical report*, 2009. Marvin Li and Sitan Chen. Critical windows: non-asymptotic theory for feature emergence in diffusion models. *arXiv preprint arXiv:2403.01633*, 2024. Chris Lin, Mingyu Lu, Chanwoo Kim, and Su-In Lee. Efficient shapley values for attributing global properties of diffusion models to data group. *arXiv preprint arXiv:2407.03153*, 2024. Jinkun Lin, Anqi Zhang, Mathias Lecuyer, Jinyang Li, Aurojit Panda, and Siddhartha Sen. Measuring the effect of training data on deep learning predictions via randomized experiments. *arXiv preprint* arXiv:2206.10013, 2022. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision (ECCV), 2014. Philip M Long. Properties of the after kernel. In *arXiv preprint arXiv:2105.10585*, 2021. Alexandra Sasha Luccioni, Christopher Akiki, Margaret Mitchell, and Yacine Jernite. Stable bias: Analyzing societal representations in diffusion models. In *arXiv preprint arXiv:2303.11408*, 2023. Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, and Sanjeev Arora. A kernel-based view of language model fine-tuning. In *arXiv preprint arXiv:2210.05643*, 2022. Alex Nichol, Aditya Ramesh, Pamela Mishkin, Prafulla Dariwal, Joanne Jang, and Mark Chen. Dalle 2 pre-training mitigations. 2022. Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. Trak: Attributing model behavior at scale. In *Arxiv preprint arXiv:2303.14186*, 2023. Malsha V Perera and Vishal M Patel. Analyzing bias in diffusion-based face generation models. In arXiv preprint arXiv:2305.06402, 2023. Daryl Pregibon. Logistic regression diagnostics. In *The Annals of Statistics*, 1981. Garima Pruthi, Frederick Liu, Mukund Sundararajan, and Satyen Kale. Estimating training data influence by tracing gradient descent. In *Neural Information Processing Systems (NeurIPS)*, 2020. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *arXiv preprint arXiv:2103.00020*, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 2022. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 10684–10695, 2022. Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. Scaling up influence functions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8179–8186, 2022. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In *arXiv preprint arXiv:2210.08402*, 2022. Antonio Sclocchi, Alessandro Favero, and Matthieu Wyart. A phase transition in diffusion models reveals the hierarchical nature of data. *arXiv preprint arXiv:2402.16991*, 2024. Harshay Shah, Sung Min Park, Andrew Ilyas, and Aleksander Madry. Modeldiff: A framework for comparing learning algorithms. In *arXiv preprint arXiv:2211.12491*, 2022. Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, 2015. Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Diffusion art or digital forgery? investigating data replication in diffusion models. *arXiv preprint arXiv:2212.03860*, 2022. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Neural Information Processing Systems (NeurIPS), 2019. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=PxTIG12RRHS. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023. Charles Spearman. The proof and measurement of association between two things. In The American Journal of Psychology, 1904. Gerrit van den Burg and Chris Williams. On memorization in probabilistic deep generative models. Advances in Neural Information Processing Systems, 34:27916–27928, 2021. Joshua Vendrow, Saachi Jain, Logan Engstrom, and Aleksander Madry. Dataset interfaces: Diagnosing model failures using controllable counterfactual generation. *arXiv preprint arXiv:2302.07865*, 2023. Sheng-Yu Wang, Alexei A Efros, Jun-Yan Zhu, and Richard Zhang. Evaluating data attribution for text-toimage models. *arXiv preprint arXiv:2306.09345*, 2023. Sheng-Yu Wang, Aaron Hertzmann, Alexei A Efros, Jun-Yan Zhu, and Richard Zhang. Data attribution for text-to-image models by unlearning synthesized images. *arXiv preprint arXiv:2406.09408*, 2024. Alexander Wei, Wei Hu, and Jacob Steinhardt. More than a toy: Random matrix models predict how real-world neural representations generalize. In *ICML*, 2022. Olivia Wiles, Isabela Albuquerque, and Sven Gowal. Discovering bugs in vision models using off-the-shelf image generation and captioning. *arXiv preprint arXiv:2208.08831*, 2022. Mike Wojnowicz, Ben Cruz, Xuan Zhao, Brian Wallace, Matt Wolff, Jay Luan, and Caleb Crable. Influence sketching: Finding influential samples in large-scale regressions. In 2016 IEEE International Conference on Big Data (Big Data), 2016. Chih-Kuan Yeh, Joon Sik Kim, Ian E. H. Yen, and Pradeep Ravikumar. Representer point selection for explaining deep neural networks. In *Neural Information Processing Systems (NeurIPS)*, 2018. Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, and Qing Qu. The emergence of reproducibility and consistency in diffusion models. In *Forty-first International Conference on Machine* Learning, 2023. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Computer Vision and Pattern Recognition (CVPR)*, 2018. Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, and Min Lin. Intriguing properties of data attribution on diffusion models. *arXiv preprint arXiv:2311.00500*, 2023. ## A Experimental Details Throughout our paper, we train various diffusion models on CIFAR-10 and MS COCO. DDPM training on CIFAR-10. We train 100 DDPMs (Ho et al., 2020) on the CIFAR-10 dataset for 200 epochs using a cosine annealing learning rate schedule that starts at 1e-4. We used the DDPM architecture that match the original implementation (Ho et al., 2020), which can be found here https: //huggingface.co/google/ddpm-cifar10-32. At inference time we sample using a DDPM scheduler with 50 inference steps. LDM training on MS COCO. We train 20 text-conditional latent diffusion models (LDMs) (Rombach et al., 2022) on the MS COCO dataset for 200 epochs using a cosine annealing learning rate schedule that ståarts at 2e-4. We use the exact CLIP and VAE used in Stable Diffusion v2, but use a custom (smaller) UNet, which we describe in our code. These models can be found here https://huggingface.co/stabilityai/ stable-diffusion-2-1. At inference time, we sample using a DDPM scheduler with 1000 inference steps. LDS. We sample 100 random 50% subsets of CIFAR-10 and MS COCO, and train 5 models per mask. Given a set of attribution scores, we then compute the Spearman rank correlation (Spearman, 1904) between the predicted model outputs gτ (·) (see Eq. (2)) on each subset according to the attributions and the (averaged) actual model outputs. Because our model output and attributions are specific to a step, we compute LDS separately for each step. To evaluate the counterfactual significance of our attributions over the course of the diffusion trajectory, we measure LDS scores at each 100 steps over the 1000 step sampling process. Retraining without the most influential images. For our counterfactual evaluation in Section 5.3, we compute attribution scores on 50 samples from our CIFAR-10 and MS COCO models at step t = 400. Given the attribution scores for each sample, we then retrain the model after removing the corresponding top k influencers for k = 200, 500, 1000. We compute FID based on 5000 images from each distribution, and repeat this process for each sample at each value of k. Journey-trak hyperparameters In the random projection step of Journey-trak, we use a projection dimension of d = 4096 for CIFAR-10 and d = 16384 for MS COCO. As in Park et al. (2023), we use multiple model checkpoints in order to compute the attribution scores. For CIFAR-10, we use 100 checkpoints, and for MS COCO, we use 20 checkpoints. In our code repository (github.com/MadryLab/journey-TRAK), we release the pre-computed Journey-trak features for all of our models, allowing for a quick computation of Journey-trak scores on new synthesized images. In Equation (3), we use k = 20 for both CIFAR-10 and MS COCO. ## B Additional Analysis And Results B.1 Diffusion Models Are Consistent Across Seeds Song et al. (2021) show that in the limit of infinite capacity and training data, as well as perfect optimization, the embedding xT obtained by diffusion models is uniquely identifiable. In other words, two independently trained diffusion models trained on the same dataset will embed an image x0 to the same embedding x0. They also provide empirical evidence that this phenomenon holds approximately in non-idealized settings. Zhang et al. (2023) and Kadkhodaie et al. (2023) show empirically that the latent spaces of diffusion models we attribute are indeed highly aligned; i.e., two independently trained diffusion models will generate similar images given a shared embedding x0. We refer to this property as *seed consistency*. Additionally, Khrulkov et al. (2022) provide an optimal transport perspective on this phenomenon. The seed consistency property is critical for our attribution method Journey-trak, so we experimentally verify it. In fact, we find that images generated by many independently trained DDPMs on CIFAR-10 from the same random seed and nearly indistinguishable (see Figure B.1, right). To evaluate seed consistency quantitatively, we measure the ℓ2 distance between images generated by two models when using identical or distinct noise sequences, and find that matching the noise sequences leads to a far smaller ℓ2 distances (see Figure B.1, left). We additionally evaluate seed consistency on multiple checkpoints of Stable Diffusion (we use checkpoints provided at https://huggingface.co/CompVis/stable-diffusion and https://huggingface.co/ runwayml/stable-diffusion-v1-5) and find that images generated across these models with a fixed seed share significantly more visual similarity that those generated from independent random seeds (see Figure B.2.) We take advantage of this property when evaluating the counterfactual impact of removing the training examples relevant to a given generated image (see Section 5.5). Specifically, we now expect that retraining a model on the full training set and then sampling from the same seed should produce a highly similar image to the generated image of interest. Thus, we can evaluate the counterfactual significance of removing the training examples with the top attribution scores for a given generated image by retraining and measuring the distance (in pixel space) of an image synthesized with the same seed to the original generated image. ![17_image_0.png](17_image_0.png) Figure B.1: **Seed consistency of CIFAR-10 DDPMs**. We find that across DDPMs trained independently on CIFAR-10, when using a fixed random seed during sampling, the resulting synthesized images are very similar, and often visually indistinguishable **(Right)**. Quantitatively, we find that the ℓ2 distance between images generated from two different models is significantly smaller when we fix the noise sequence **(Left)**. ![18_image_0.png](18_image_0.png) Figure B.2: **Seed consistency holds for Stable Diffusion models.** We find that seed consistency holds even for large, text conditioned model, specifically for Stable Diffusion models that are trained on LAION-5B. We compare multiple checkpoints of Stable Diffusion provided by Stability AI, and find that fixing the noise sequence during sampling surfaces very similar images (in comparison to using independent noising sequences). fl ## B.2 Attribution Scores Can Drastically Change Over The Course Of The Diffusion Process As additional motivation for performing attribution at individual steps rather than the entire diffusion trajectory, we highlight the following phenomena: the same training image can be both positively influential and negatively influential for a generated sample at different steps. For example, consider an image of a red car on a grey background generated by our DDPM trained on CIFAR-10 (See Figure B.3, top). We find that a specific training example of a red car on grass is the single most positively influential image according to Journey-trak at the early stages of the generative process (as it is forming the shape of the car), but is later the single most negatively influential image (possibly due to the difference in background, which could steer the model in a different direction). If we were to create an aggregate attribution score for the entire diffusion trajectory, it is unclear what the attribution score would signify for this training example. To evaluate this phenomena quantitatively, we measure the percentage of generated images for which, for a given K, there exists a training example that is one of the top K highest scoring images at some step and one of the top K lowest scoring images at another step (according to Journey-trak). In Figure B.4, we show how this percentage varies with K. As a baseline, we also include the probability of such a training example existing given completely random attribution scores. We find that our observed probabilities match those expected with random scores, signifying that an image being highly positively influential at a given step does not decrease the probability that it is highly negatively influential at a different step. To more broadly analyze the relationship between attributions at different steps, we additionally measure the Spearman's rank correlation (Spearman, 1904) between attribution scores for the same generated sample at different steps (see Figure B.5). We find that for steps that are sufficiently far from each other (around 500 steps), the attribution scores are nearly uncorrelated. ![20_image_0.png](20_image_0.png) Figure B.3: **Overlap between positive and negative influencers.** Here, we visualize the generative process for two images generated by a DDPM on CIFAR for which there exists a training image that is both positively and negatively influential at different steps. If we consider an aggregate attribution score across all time-steps of the diffusion trajectory, we might lose the significance of such training examples which alternate between being positively and negatively influential during the sampling process. ![21_image_0.png](21_image_0.png) Figure B.4: **The relationship between positive and negative influencers.** Here, we plot the probability that within the attribution scores for a given generated image, there exists a training example that is one of the K most positive influencers at some step and one of the bottom K most negative influencers at another step. We compute this probability empirically with the attribution scores from Journey-trak and find that it closely aligns with the hypothetical baseline of completely random attribution scores. This signifies that being a top positive influencer at some step does not decrease the likelihood of being a top negative influencer at a different step. ![21_image_1.png](21_image_1.png) Figure B.5: **Correlation between attribution scores over steps.** Here, we plot the Spearman's rank correlation (Spearman, 1904) between the attribution scores for a given image generated by either our CIFAR-10 or MS COCO models at different steps, as a function of the distance between steps (results are averaged over 100 generated samples). As expected, steps that are closer in proximity have more closely aligned attribution scores. Interestingly, when we compute attributions at steps of distance 500 or more apart, the resulting scores are nearly uncorrelated. ## B.3 Feature Analysis For Stable Diffusion We analyze how the likelihood of different features in the final image varies over steps for images generated by ![22_image_0.png](22_image_0.png) ![22_image_1.png](22_image_1.png) a Stable Diffusion model,6similarly as we did for CIFAR-10 in Figure 3. In Figure B.6, we analyze an image generated using the prompt, *"A woman sitting on a unique chair beside a vase."* To measure the relative likelihood between two features (e.g., "white blouse" vs. "dark blouse"), we use a pre-trained CLIP model and measure whether the CLIP embedding of the generated image is closer to the text embedding of the first feature or the second feature. We sample 60 images at each step and report the average likelihood. We use 300 denoising steps to speed up the generation. Figure B.6: **Features appear at specific steps for Stable Diffusion. (Left)** For each pair of features, we plot the evolution in the relative likelihood of the two features (according to CLIP text-image similarity) in the conditional distribution pθ(·|xt). Features differ in when they appear, but usually rapidly appear within a short interval of steps. (**Right**) The generated image x0, sampled using T = 300 denoising steps. 6We use the stabilityai/stable-diffusion-2 pre-trained checkpoint. ## C Additional Related Work Data attribution. A long line of work has studied the problem of training data attribution, or tracing model behavior back to training data; we focus here on works done in the context of modern machine learning algorithms. Prior approaches include those based on the influence function and its variants (Hampel et al., 2011; Wojnowicz et al., 2016; Koh & Liang, 2017; Basu et al., 2019; Khanna et al., 2019; Achille et al., 2021; Schioppa et al., 2022; Bae et al., 2022), sampling-based methods that leverage models trained on different subsets of data (Ghorbani & Zou, 2019; Jia et al., 2019; Feldman & Zhang, 2020; Ilyas et al., 2022; Lin et al., 2022), and various other heuristic approaches (Yeh et al., 2018; Pruthi et al., 2020). These methods generally exhibit a strong tradeoff between predictiveness or effectiveness and computational efficiency Jia et al. (2021). The recent method of Park et al. (2023) significantly improves upon these tradeoffs by leveraging the empirical kernel structure of differentiable models. While most prior work primarily focus on the supervised setting, more recent works study attribution in generative settings, including to language models (Park et al., 2023) and to diffusion models (Wang et al., 2023). Concurrently to this work, Zheng et al. (2023) use Journey-trak to attribute diffusion models, but rely on heuristic design choices to design ft. In addition, Dai & Gifford (2023) employ ensembling to attribute diffusion models. In a recent work, Wang et al. (2023) propose a method for *efficiently evaluating* data attribution methods for generative models by creating custom datasets with known ground-truth attributions. Memorization in generative models. We can view *memorization* as a special case of data attribution where only few, nearly identical images in the training set are responsible for the generation of a corresponding image. Prior to the increasing popularity of diffusion models, a number of previous works studied memorization in other generative models. For example, Feng et al. (2021) study the impact of properties of a dataset (size, complexity) on training data replication in Generative Adversarial Networks (GANs), and van den Burg & Williams (2021) introduce a memorization score for Variational Autoencoders (VAEs) that can be additionally applied to arbitrary generative models. Following the release of large text-to-image diffusion models, the creators of one of these models (DALL·E 2) investigated memorization issues themselves and found that memorization could be significantly decreased through de-duplication of the training data (Nichol et al., 2022). Recently, Somepalli et al. (2022) explore the data replication behavior of diffusion models from the lens of "digital forgery," and identify many cases where, even when Stable Diffusion produces "unique" images, it directly copies style and semantic structure from individual images in the training set. On the other hand, Carlini et al. (2023) investigate memorization from the perspective of privacy, and show that query access to diffusion models can enable an adversary to directly extract the models' training data. ## D Trak Details In this section, given the close connection between our method Journey-trak and trak (Park et al., 2023), we provide a detailed description of the trak method. Using our notation from Section 2.1, we have a learning algorithm A producing model parameters θ(S) when trained on a training set S = (z1*, . . . , z*n). Additionally, we define a model output function f(*z, θ*(S)) of interest. Next, let use define a "feature map" g : Z → R k as $$g(z):=\mathbf{P}^{\top}\nabla_{\theta}f(z;\theta^{\star}),$$ $\left(4\right)^3$ ⋆), (4) i.e., a function taking an example z to its corresponding gradient with respect to θ of the model output function f, projected with a random matrix P ∼ N (0, 1)p×kfor k ≪ p. Finally, for brevity, let use define G = [g(z1)*, . . . , g*(zn)] for all ziin the training set S. Park et al. (2023) consider supervised learning settings (e.g., image classification). In short, they develop a method that attributes *scalar* output functions f. By large, they consider the "classification margin" output function, defined as log p 1−p where p is the classfier's softmax probability of the correct class. It turns out that for the above choice of model output function, one can write the attribution scores for a target sample z as $$\begin{array}{l}{{\tau(z,S):=g(z)^{\top}(G^{\top}G)^{-1}G^{\top}{\bf Q},}}\\ {{=\mathrm{diag}\left(\left\{(1+\exp(y_{i}\cdot f(z_{i};\theta^{\star})))^{-1}\right\}\right).}}\end{array}$$ ⊤Q, (5) for a diagonal matrix Q defined as Q = diag (1 + exp(yi· f(zi; θ While convenient, this notation obfuscates the decomposition of this estimator into "change in parameters" and "change in output" which we outlined in Section 2.1 and Section 4. In particular, another way to write the estimator from Equation (5) is to define $\left(5\right)^3$ $$\phi(z):=\mathbf{P}^{\top}\nabla_{\theta}L(z,\theta^{\star}),$$ $$(6)$$ ⊤∇θL(*z, θ*⋆), (6) where L is the training loss. And again, we can define $$\left(7\right)$$ $$\Phi=[q]$$ $\mathfrak{b}\big(z_n\big)\big]$. $\bullet\;\bullet\;\bullet\;\bullet$ . $\left(\mathfrak{S}\right)_{\mathfrak{k}}$ Φ = [ϕ(z1)*, . . . , ϕ*(zn)]. (7) With this new notation, we can write Equation (5) as τ (*z, S*) := g(z) ⊤(G $$G^{\top}G)^{-1}\Phi^{\top}\,.$$ ⊤. (8) Note that (G⊤G) −1remains intact, which is in divergence with the influence function derivations outlined in Pregibon (1981). This is due to an ablation study performed in Park et al. (2023). ## D.1 Differences Between Trak And Journey-Trak In Journey-trak, we do not use the simplification in Equation (8) and instead implement an estimator which follows Pregibon (1981) more closely: $$\tau(z,S):=g(z)^{\top}(\Phi^{\top}\Phi)^{-1}\Phi^{\top},$$ $\left(\mathfrak{g}\right)$. ⊤, (9) as expressed on line 17 in Algorithm 1. This is critical, since unlike in the classification setting, we can no longer make a simple decomposition similar to Equation (5), and using this simplified estimator would result in an overly biased estimator. A key innovation in Park et al. (2023) is the reduction of the multi-class classification problems to binary classification via the "classification margin" output function. This choice makes the trak estimator much more tractable, especially when the number of classes is large (as in, e.g., ImageNet (Deng et al., 2009)). Similarly, when attributing steps of the diffusion process, "natural" output functions ft like the latent xt itself are also high-dimensional. Thus, one of the key design choices for Journey-trak is the model output function ft, which is diffusion-specific. Another way to observe the key role that ft plays is to analyze the concurrent work of Zheng et al. (2023). In their paper, they adapt Journey-trak to develop a *timestep-global* attribution method by only making a change to ft. ## E Omitted Plots In this section, we present additional visualizations extending upon the figures in the main text. In Figure E.1 and Figure E.2, we visualize the most influential training examples identified by Journey-trak for a sample generated with a DDPM trained on CIFAR-10 and a LDM trained on MS COCO, respectively. In Figure E.3, we more concisely display attributions for additional samples generated by a CIFAR-10 DDPM. Finally, in Figure E.4 we display additional examples of the appearance of features over steps, and confirm that our findings in the main text hold across when different classification models are used for identifying a given feature. ![25_image_0.png](25_image_0.png) Figure E.1: An example of step-dependent attribution scores for a sample generated by a DDPM trained on CIFAR-10. At each step t, Journey-trak pinpoints the training examples with the highest influence (positive in green, negative in red) on the generative process at this step. In particular, positive influencers guide the trajectory towards the final sample, while negative influencers guide the trajectory away from it. ![25_image_1.png](25_image_1.png) Figure E.2: An additional example of step-dependent attribution scores for a sample generated by a LDM trained on MS COCO. At each step t, Journey-trak pinpoints the training examples with the highest influence (positive in green, negative in red) on the generative process at this step. In particular, positive influencers guide the trajectory towards the final sample, while negative influencers guide the trajectory away from it. ![26_image_0.png](26_image_0.png) Figure E.3: Additional examples our attributions identified by Journey-trak. Here, we visualize the diffusion trajectory for generated images along with the most positively (green) and negatively (red) influential images at individual steps throughout the diffusion trajectory. ![26_image_1.png](26_image_1.png) Figure E.4: Additional examples of the appearance of features over steps, similar to the analysis in Figure 3. In each plot, we show the likelihood that a sample generated from the distribution pθ(·|xt) contains a the feature of interest (in this case, the CIFAR-10 class of the final image) according to three different classifiers: a ResNet trained on the CIFAR-10 dataset with either standard or robust training, and zero-shot CLIP-H/14 model (Radford et al., 2021). Note that in each example, the likelihood that the final image contains the given feature increases rapidly in a short interval of steps, and that this phenomena is consistent across different classifiers.
Review 1: Summary: This work provides a data attribution framework for diffusion model, helping us identify those attributes during the diffusing process. Strengths and Weaknesses: Strengths: 1. Presentation is clear and easy to follow. 2. Present a data attribution framework for diffusion models including defining data attributes, model output function and etc. 3. Provide two complementary metrics for evaluating attributions and use them to validate the proposed method on diffusion models. Weaknesses: 1. This work leverages the TRAK framework to get attribute score of the diffusion models and uses the LDS for analysis. It seems to be very straightforward. Requested Changes: It would be more clear if author have a comparison between the defined functions in diffusion model and those functions in the general framework. Broader Impact Concerns: None ================================================== Review 2: Summary: This paper proposed a method to attribute image synthesized by diffusion models. The authors adapted TRAK, a previous data attribution method for language models, for the diffusion settings. It uses a step-specific approach to understand which training samples influence each diffusion step and how the influences evolve along the whole trajectory. Meanwhile, the proposed method allows to isolate attribution for different features in a image by masking unwanted patches. Strengths and Weaknesses: Strength: 1. The proposed method adapts TRAK, an attribution method for language model, for diffusion models. 2. The proposed method is evaluated from multiple perspectives, including qualitative visual inspection, counterfactual validation, patch-based attribution, memorization analysis, and it achieves better performance comparing to CLIP and pixel similarity. Weakness: 1. The baselines used in this paper are relatively simple and naive, namely CLIP similarity and pixel similarity. 2. Some of the descriptions are not clear enough and will be discussed in the “Requested Changes” section. Requested Changes: Requested Changes: 1. According to Related Work, there are other methods for attributing diffusion models. Are they comparable with the proposed method? If so, what are their performances? 2. How to understand Figure 4? In this figure, we can see TRAK has better LSD comparing to CLIP and pixel similarity. But as LSD is essentially Spearsman rank correlation, a value of 0.2~0.3 indicates very weak ranking power. What is an acceptable LSD value for data attribution purpose? How to interpret the pivot point in LSD curve along the trajectory (LSD increases initially and then decreases)? 3. In the Experiments section, TRAK is used to refer the proposed adaption for diffusion models. Simply calling it TRAK can be confusing, since it can also refers to the original TRAK method. It could be better to phrase it differently. Broader Impact Concerns: None ================================================== Review 3: Summary: This work studies the problem of data attribution in diffusion models. The question it seeks to answer is which datapoints in the training set have a causal effect on a generated sample. It is an application of the TRAK data attribution method (Park et al., 2023) to diffusion models. The key observation, supported by some initial analysis, is that data attribution on a per-timestep basis is more useful and interesting than an aggregated metric. To do this, the authors propose a timestep-specific "model output function," the central input of TRAK, and use it to build a data attribution algorithm. By a couple of metrics, this method performs substantially better than naive approaches, and through examples the authors show that it is qualitatively informative. Strengths and Weaknesses: # Strengths The method appears to be effective and versatile. It allows for patch-wise and/or timestep-wise attribution. While data attribution is a difficult task to evaluate, the proposed metrics are sensible and indicate the method is generally performing well (not just on cherry-picked examples). I also enjoyed the analysis, which surfaces a number of interesting tidbits; to name a few: - The probability of certain concepts appearing in the generated image experience unexpectedly sharp increases at specific points in the trajectory. - The training examples with "negative influence" in the generation process become more similar to the generated sample as time nears 0. - Nearest neighbors in pixel- or CLIP-space in the training set are generally NOT very relevant to a generated sample in attribution terms. # Weaknesses Please find the requested changes below. Requested Changes: # Critical - This work needs to better contextualize itself within the broader literature. I have two specific concerns to this end: - This work's method is heavily based on TRAK, a year-old work, but TRAK is barely explained throughout the paper except at a very high level. Given TRAK's complexity and pertinence here, it needs to be explained in much more technical detail. The lack of context in this right means that the contents of the actual method - Algorithm 1 - come out of left field. - This work is about data attribution for diffusion models, but its literature review on this topic amounts to a single sentence about Wang et al. (2023). A quick Google shows that there is recent work in the area [1, 4, 5], and all of this should be at least briefly discussed. Most importantly, [5] also bases itself off of TRAK - how is that work (concurrent though it may be) different from the present one? - Following on from the first point, there are a couple of things I find unclear about the method itself. - No clear justification was given for the timestep-specific model-output function $f_t$. It is (up to a constant) the ELBO of the model at the one-step sample estimate. - I understand that the point here is to get a time-dependent scalar, but why this specific one? Why not something more naive, like $\lVert\varepsilon_\theta(x_t, t)\rVert$? - Furthermore, it is mentioned that $\hat{\mathbf{x}}_0^t$ is being used here as a proxy for samples from $p(\cdot | \mathbf{x}_t)$. What exactly is the original quantity you're trying to compute here involving $p( \cdot | \mathbf{x}_t)$? How does this approximation affect the quality of the method? (In particular, Figure 2 shows this approximation is poor for large timesteps, and operates on the assumption that $p(\cdot | \mathbf{x})$ is "unimodal".) - What parts of Algorithm 1 are pulled directly from TRAK and what parts are new here? In particular, line 17 looks different from line 15 of the TRAK algorithm in Park et al. (2023) in the way it combines the $f_t$ and $f_\text{train}$ gradients. Is this a conscious design choice? - Using notation from the text, Algorithm 1 requires $MNk$ model evaluations to start and then $MTk$ model evaluations to compute attributions for a single datapoint. This is seemingly expensive; can you discuss the time complexity and/or report the amount of time the method takes for an individual datapoint? - In Appendix C.1: "diffusion models are consistent across seeds", _seed consistency_,the propensity of diffusion models trained with different seeds to learn the same maps between latents and data, is framed as a theoretically unexpected property. This is actually not true; it is expected. - Diffusion models seek to learn the score over all timesteps of a fixed diffusion process; this score is unique. This is why, even if you train multiple a diffusion models even with different architectures, you will get very similar images starting from the same latent. In this sense, all diffusion models _do_ share the same latent space. (This is very different from eg. VAEs, GANs, and normalizing flows, whose optimal mappings from latents to data are not at all unique). - [2] has some interesting analysis on this. # Non-Critical - Ilyas et al. (2022) on page 3 should be a citep. - Missing period in the last sentence of section 3.1. - The authors might consider discussing [3], which studies a very similar notion as Figures 3, C.3, and C.10, identifying a "phase transition" in the diffusion process wherein specific details of the image are decided. - What's behind the decision not to use the evaluation methods of Wang et al. (2023)? - Although the writing is clear, it could be better organized and more concise in places. A couple of organization examples: - The "Implementing Our Approach" section (page 6) partially describes $f_t$ without actually defining it. It is then actually defined 2 pages later. Why separate these parts? - The start of section 3.2 contains 7 lines of text giving partial definitions of and comparing the two evaluation metrics before their actual definitions are given below - it would be more economical to save the comparisons until afterwards. # References [1] Dai, Zheng, and David K. Gifford. "Training data attribution for diffusion models." _arXiv preprint arXiv:2306.02174_ (2023). [2] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representations." _The Twelfth International Conference on Learning Representations_. [3] Sclocchi, Antonio, Alessandro Favero, and Matthieu Wyart. "A phase transition in diffusion models reveals the hierarchical nature of data." _arXiv preprint arXiv:2402.16991_ (2024). [4] Xie, Tong, et al. "Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation." _arXiv preprint arXiv:2401.09031_ (2024). [5] Zheng, Xiaosen, et al. "Intriguing Properties of Data Attribution on Diffusion Models." _The Twelfth International Conference on Learning Representations_. Broader Impact Concerns: No broader impact concerns. ================================================== Metareview: Recommendation: Reject Comment: This paper proposed a method based on the TRAK method from language model to attribute image synthesized by diffusion models, which uses a step-specific approach to try to learn the influence and evolution of each training sample. All reviewers agree the method is interesting and on time. There are some concerns about the novelty and relation with TRAK, which were addressed in the rebuttal by adding more detailed discussions. However, there are still some other problems mentioned by the reviewers that were not fully addressed, one of which is the experiments. The current experiments compare the proposed method with two simple baselines, which is not enough to demonstrate the effectiveness and validation of the proposed method. The rebuttal did not fully address this comment, more baseline comparisons are expected to strengthen the paper. To sum up, the problem is important and the proposed method is indeed interesting. However, given the lack of enough empirical evidence to support the effectiveness of the proposed method, I have to recommend rejection this round. If the authors can revise the paper and add more experiment comparisons, I am happy to oversee the paper again in the future submission. ==================================================
# A Note On The Convergence Of Denoising Diffusion Probabilistic Models Sokhna Diarra Mbacke *sokhna-diarra.mbacke.1@ulaval.ca* Université Laval Omar Rivasplata o.rivasplata@ucl.ac.uk University College London Reviewed on OpenReview: *https: // openreview. net/ forum? id= wLe1bG93yc* ## Abstract Diffusion models are one of the most important families of deep generative models. In this note, we derive a quantitative upper bound on the Wasserstein distance between the target distribution and the distribution learned by a diffusion model. Unlike previous works on this topic, our result does not make assumptions on the learned score function. Moreover, our result holds for arbitrary data-generating distributions on bounded instance spaces, even those without a density with respect to Lebesgue measure, and the upper bound does not suffer from exponential dependencies on the ambient space dimension. Our main result builds upon the recent work of Mbacke et al. (2023) and our proofs are elementary. ## 1 Introduction Along with generative adversarial networks (Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020) are one of the most prominent families of deep generative models. They have exhibited impressive empirical performance in image (Dhariwal & Nichol, 2021; Ho et al., 2022) and audio (Chen et al., 2021; Popov et al., 2021) generation, as well as other applications (Zhou et al., 2021; Sasaki et al., 2021; Li et al., 2022; Trabucco et al., 2023). There are two main approaches to diffusion models: denoising diffusion probabilistic models (DDPMs) (SohlDickstein et al., 2015; Ho et al., 2020) and score-based generative models (Song & Ermon, 2019) (SGMs). The former kind, DDPMs, progressively transform samples from the target distribution into noise through a forward process, and train a backward process that reverses the transformation and is used to generate new samples. On the other hand, SGMs use score matching techniques (Hyvärinen & Dayan, 2005; Vincent, 2011) to learn an approximation of the score function of the data-generating distribution, then generate new samples using Langevin dynamics. Since for real-world distributions the score function might not exist, Song & Ermon (2019) propose adding different noise levels to the training samples to cover the whole instance space, and train a neural network to simultaneously learn the score function for all noise levels. Although DDPMs and SGMs might appear to be different approaches at first, Ho et al. (2020) showed that DDPMs implicitly learn an approximation of the score function and the sampling process resembles Langevin dynamics. Furthermore, Song et al. (2021b) derived a unifying view of both techniques using stochastic differential equations (SDEs). The SGM of Song & Ermon (2019) can be seen as a discretization of the Brownian motion process, and the DDPM of Ho et al. (2020) as a discretization of an Ornstein–Uhlenbeck process. Hence, both DDPMs and SGMs are usually referred to as SGMs in the literature. This explains why the previous works studying the theoretical properties of diffusion models utilize the score-based formulation, which requires assumptions on the performance of the learned score function. In this work, we take a different approach and apply techniques developed by Mbacke et al. (2023) for VAEs to DDPMs, which can be seen as hierarchical VAEs with fixed encoders (Luo, 2022). This approach allows us to derive quantitative Wasserstein-based upper bounds, with no assumptions on the data distribution, no assumptions on the learned score function, and elementary proofs that do not require the SDE toolbox. Moreover, our bounds do not suffer from any costly discretization step, such as the one in De Bortoli (2022), since we consider the forward and backward processes as being discrete-time from the outset, instead of seeing them as discretizations of continuous-time processes. ## 1.1 Related Works There has been a growing body of work aiming to establish theoretical results on the convergence of SGMs (Block et al., 2020; De Bortoli et al., 2021; Song et al., 2021a; Lee et al., 2022; De Bortoli, 2022; Kwon et al., 2022; Lee et al., 2023; Chen et al., 2023; Li et al., 2023; Benton et al., 2023), but these works either rely on strong assumptions on the data-generating distribution, derive non quantitative upper bounds, or suffer from exponential dependencies on some of the parameters. We manage to avoid all three of these pitfalls. The bounds of Lee et al. (2022) rely on very strong assumptions on the data-generating distribution, such as log-Sobolev inequalities, which are not realistic for real-world data distributions. Furthermore, Song et al. (2021a); Chen et al. (2023); Lee et al. (2023) establish upper bounds on the Kullback-Leibler (KL) divergence or the total variation (TV) distance between the data-generating distribution and the distribution learned by the diffusion model; however, as noted by Pidstrigach (2022) and Chen et al. (2023), unless one makes strong assumptions on the support of the data-generating distribution, KL and TV reach their maximum values. Such assumptions arguably do not hold for real-world data-generating distributions which are widely believed to satisfy the manifold hypothesis (Narayanan & Mitter, 2010; Fefferman et al., 2016; Pope et al., 2021). The work of Pidstrigach (2022) establishes conditions under which the support of the input distribution is equal to the support of the learned distribution, and generalizes the bound of Song et al. (2021a) to all f-divergences. Assuming L2 accurate score estimation, Chen et al. (2023) and Lee et al. (2023) establish Wasserstein distance upper bounds under weaker assumptions on the data-generating distribution, but their Wasserstein-based bounds are not quantitative. De Bortoli (2022) derives quantitative Wasserstein distance upper bounds under the manifold hypothesis, but their bounds suffer from exponential dependencies on some of the problem parameters. ## 1.2 Our Contributions In this work, we avoid strong assumptions on the data-generating distribution, and establish a quantitative Wasserstein distance upper bound without exponential dependencies on problem parameters including ambient space dimension. Moreover, a common thread in the works cited above is that their bounds depend on the error of the score estimator. According to Chen et al. (2023), "Providing precise guarantees for estimation of the score function is difficult, as it requires an understanding of the non-convex training dynamics of neural network optimization that is currently out of reach." Hence, we derive upper bounds without assumptions on the learned score function. Instead, our bound depends on a *reconstruction loss* computed on a finite i.i.d. sample. Intuitively, we define a loss function ℓ θ(xT , x0) (see equation 6), which measures the average Euclidean distance between a sample x0 from the data-generating distribution, and the reconstruction xˆ0 obtained by sampling noise xT ∼ q(xT |x0) and passing it through the backward process (parameterized by θ). This approach is motivated by the work of Mbacke et al. (2023) on VAEs. There are many advantages to this approach: no restrictive assumptions on the data-generating distribution, no exponential dependencies on the dimension, and a quantitative upper bound based on the Wasserstein distance. Moreover, our approach has the benefit of utilizing very simple and elementary proofs. ## 2 Preliminaries Throughout the paper, we use lower-case letters to denote both probability measures and their densities w.r.t. the Lebesgue measure, and we add variables in parentheses to improve readability (e.g. q(xt|xt−1) to indicate a time-dependent conditional distribution). We consider an instance space X which is a subset of R D with ![2_image_0.png](2_image_0.png) Figure 1: Denoising diffusion model the Euclidean distance as underlying metric, and a target data-generating distribution µ ∈ M1+(X ). Notice that we do not assume µ has a density w.r.t. the Lebesgue measure. Moreover, ∥·∥ denotes the Euclidean (L2) norm and we write Ep(x) as a shorthand for Ex∼p(x). Given probability measures p, q ∈ M1+(X ) and a real number k > 1, the Wasserstein distance of order k is defined as (Villani, 2009): $$W_{k}(p,q)=\left(\operatorname*{inf}_{\pi\in\Gamma(p,q)}\int_{\mathcal{X}}\|\mathbf{x}-\mathbf{y}\|^{k}\ d\pi(\mathbf{x},\mathbf{y})\right)^{1/k}\,,$$ where Γ(*p, q*) denotes the set of couplings of p and q, meaning the set of joint distributions on *X × X* with respective marginals p and q. We refer to the product measure p ⊗ q as the *trivial coupling*, and we refer to the Wasserstein distance of order 1 simply as *the Wasserstein distance*. ## 2.1 Denoising Diffusion Models Instead of using the SDE arsenal, we present diffusion models using the DDPM formulation with discretetime processes. A diffusion model comprises two discrete-time stochastic processes: a forward process and a backward process. Both processes are indexed by time 0 ≤ t ≤ T, where the number of time-steps T is a pre-set choice. See Figure 1 for an illustration, and Luo (2022) for a detailed tutorial. The forward process. The forward process transforms a datapoint x0 ∼ µ into a noise distribution q(xT |x0), via a sequence of conditional distributions q(xt|xt−1) for 1 ≤ t ≤ T. It is assumed that the forward process is defined so that for large enough T, the distribution q(xT |x0) is close to a simple noise distribution p(xT ) which is referred to as the *prior distribution*. For instance, Ho et al. (2020) chose p(xT ) = N (xT ; 0, I), the standard multivariate normal distribution. The backward process. The backward process is a Markov process with parametric transition kernels. The goal of the backward process is to implement the reverse action of the forward process: transforming noise samples into (approximate) samples from the distribution µ. Following Ho et al. (2020), we assume the backward process to be defined by Gaussian distributions pθ(xt−1|xt) defined for 2 ≤ t ≤ T as $$p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})={\mathcal{N}}\left(\mathbf{x}_{t-1};g_{\theta}^{t}(\mathbf{x}_{t}),\sigma_{t}^{2}\mathbf{I}\right),$$ $$(1)$$ I, (1) and $$p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})=g_{\theta}^{1}(\mathbf{x}_{1}),$$ (x1), (2) where the variance parameters σ 2 1 , . . . , σ2T ∈ R≥0 are defined by a fixed schedule, the mean functions g t θ : R D → R D are learned using a neural network (with parameters θ) for 2 ≤ t ≤ T, and g 1 θ : R D → X is a separate function dependent on σ1. In practice, Ho et al. (2020) used the same network for the functions g t θ for 2 ≤ t ≤ T, and a separate discrete decoder for g 1 θ . Generating new samples from a trained diffusion model is done by sampling xt−1 ∼ pθ(xt−1|xt) for 1 ≤ t ≤ T, starting from a noise vector xT ∼ p(xT ) sampled from the prior p(xT ). We make the following assumption on the backward process. Assumption 1. We assume for each 1 ≤ t ≤ T there exists a constant Ktθ > 0 such that for every x1, x2 ∈ X , $$\left\|g_{\theta}^{t}({\bf x}_{1})-g_{\theta}^{t}({\bf x}_{2})\right\|\leq K_{\theta}^{t}\left\|{\bf x}_{1}-{\bf x}_{2}\right\|.$$ In other words, g t θ is Ktθ -Lipschitz continuous. We discuss this assumption in Remark 3.2. $\left(2\right)$. ## 2.2 Additional Definitions We define the distribution πθ(·|x0) as $$\pi_{\theta}(\cdot|\mathbf{x}_{0})=q(\mathbf{x}_{T}|\mathbf{x}$$ πθ(·|x0) = q(xT |x0)pθ(xT −1|xT )pθ(xT −2|xT −1)*. . . p*θ(x1|x2)pθ(·|x1). (3) Intuitively, for each x0 ∈ X , πθ(·|x0) denotes the distribution on X obtained by reconstructing samples from q(xT |x0) through the backward process. Another way of seeing this distribution is that for any function f : X → R, the following equation holds:1 $$\mathbb{E}_{\pi\theta(\hat{\mathbf{x}}_{0}|\mathbf{x}_{0})}\,f(\hat{\mathbf{x}}_{0})=\mathbb{E}_{q(\mathbf{x}_{T}|\mathbf{x}_{0})\,p\theta(\mathbf{x}_{T-1}|\mathbf{x}_{T})}\cdots\mathbb{E}_{p\theta(\mathbf{x}_{1}|\mathbf{x}_{2})\,p\theta(\hat{\mathbf{x}}_{0}|\mathbf{x}_{1})}\,f(\hat{\mathbf{x}}_{0}).$$ Given a finite set S = {x 1 0 , . . . , x n 0 } iid∼ µ, we define the regenerated distribution as the following mixture: $$p_{\theta}(\mathbf{x}_{T-2}|\mathbf{x}_{T-1})\ldots p_{\theta}(\mathbf{x}_{1}|\mathbf{x}_{2})p_{\theta}(\cdot|\mathbf{x}_{1}).$$ $$(3)$$ $$\left(4\right)$$ $$\mu_{\theta}^{n}={\frac{1}{n}}\sum_{i=1}^{n}\pi_{\theta}(\cdot|\mathbf{x}_{0}^{i}).$$ $\left(5\right)^{\frac{1}{2}}$ ). (5) This definition is analogous to the empirical regenerated distribution defined by Mbacke et al. (2023) for VAEs. The distribution on X learned by the diffusion model is denoted πθ(·) and defined as $$\pi_{\theta}(\cdot)=p(\mathbf{x}_{T})p_{\theta}(\mathbf{x}_{T-1}|\mathbf{x}_{T})p_{\theta}(\mathbf{x}_{T-2}|\mathbf{x}_{T-1})\ldots p_{\theta}(\mathbf{x}_{1}|\mathbf{x}_{2})p_{\theta}(\cdot|\mathbf{x}_{1}).$$ In other words, for any function f : X → R, the expectation of f w.r.t. πθ(·) is $\mathbb{E}_{\pi_{\theta}(\hat{\mathbf{x}}_{0})}\,f(\hat{\mathbf{x}}_{0})=\mathbb{E}_{p(\mathbf{x}_{T})\,p_{\theta}(\mathbf{x}_{T-1}|\mathbf{x}_{T})}\cdots\mathbb{E}_{p_{\theta}(\mathbf{x}_{1}|\mathbf{x}_{2})\,p_{\theta}(\hat{\mathbf{x}}_{0}|\mathbf{x}_{1})}\,f(\hat{\mathbf{x}}_{0})$. Hence, both πθ(·) and πθ(·|x0) are defined using the backward process, with the difference that πθ(·) starts with the prior p(xT ) = N (xT ; 0, I) while πθ(·|x0) starts with the noise distribution q(xT |x0). Finally, we define the loss function ℓ θ: *X × X →* R as $$\ell^{\theta}(\mathbf{x}_{T},\mathbf{x}_{0})=\mathop{\mathbb{E}}_{p_{\theta}(\mathbf{x}_{T-1}|\mathbf{x}_{T})}\mathop{\mathbb{E}}_{p_{\theta}(\mathbf{x}_{T-2}|\mathbf{x}_{T-1})}\cdots\mathop{\mathbb{E}}_{p_{\theta}(\mathbf{x}_{1}|\mathbf{x}_{2})}\mathop{\mathbb{E}}_{p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})}\left\|\mathbf{x}_{0}-\hat{\mathbf{x}}_{0}\right\|.\tag{10}$$ Hence, given a noise vector xT and a sample x0, the loss ℓ θ(xT , x0) denotes the average Euclidean distance between x0 and any sample obtained by passing xT through the backward process. ## 2.3 Our Approach The goal is to upper-bound the distance W1(*µ, π*θ(·)). Since the triangle inequality implies $$\mathbf{\Sigma}$$ $$W_{1}(\mu,\pi_{\theta}(\cdot))\leq W_{1}(\mu,\mu_{\theta}^{n})+W_{1}(\mu_{\theta}^{n},\pi_{\theta}(\cdot)),$$ , πθ(·)), (7) we can upper-bound the distance W1(*µ, π*θ(·)) by upper-bounding the two expressions on the right-hand side of equation 7 separately. The upper bound on W1(*µ, µ*n θ ) is obtained using a straightforward adaptation of a proof that was developed by Mbacke et al. (2023). First, W1(µ, µn θ ) is upper-bounded using the expectation of the loss function ℓ θ, then the resulting expression is upper-bounded using a PAC-Bayesian-style expression dependent on the empirical risk and the prior-matching term. The upper bound on the second term W1(µ n θ , πθ(·)) uses the definition of µ n θ . Intuitively, the difference between πθ(·|x i 0 ) and πθ(·) is determined by the corresponding initial distributions: q(xT |x i 0 ) for πθ(·|x i 0 ), and p(xT ) for πθ(·). Hence, if the two initial distributions are close, and if the steps of the backward process are smooth (see Assumption 1), then πθ(·|x i 0 ) and πθ(·) are close to each other. 1More formally, we give a definition of πθ(·|x0) via expectations of test functions by requiring that equation 4 holds for every function f in some appropriate measure-determining function class. $$\left(7\right)$$ ## 3 Main Result 3.1 Theorem Statement We are now ready to state our main result: a quantitative upper bound on the Wasserstein distance between the data-generating distribution µ and the learned distribution πθ(·). Theorem 3.1. Assume the instance space X *has finite diameter* ∆ = supx,x′∈X ∥x − x ′∥ < ∞, and let λ > 0 and δ ∈ (0, 1) be real numbers. Using the definitions and assumptions of the previous section, the following inequality holds with probability at least 1 − δ *over the random draw of* S = {x 1 0 , . . . , x n 0 } iid∼ µ: W1(µ, πθ(·)) ≤ 1 n Xn i=1 E q(xT |x i 0 ) ℓ θ(xT , x i 0 ) + 1 λ "Xn i=1 KL(q(xT |x i 0 )|| p(xT )) + log 1 δ # + λ∆2 8n + t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ ∥ϵ − ϵ ′∥ . (8) Y T t=1 Ktθ !1 n Xn i=1 E q(xT |x i 0 ) E p(yT ) ∥xT − yT ∥ + X T Where ϵ, ϵ ′ ∼ N (0, I) *are standard Gaussian vectors.* Remark 3.1. Before presenting the proof, let us discuss Theorem 3.1. - Because the right-hand side of equation 8 depends on a quantity computed using a finite i.i.d. sample S, the bound holds with high probability w.r.t. the randomness of S. This is the price we pay for having a quantitative upper bound with no exponential dependencies on problem parameters and no assumptions on the data-generating distribution µ. - The first term of the right-hand side of equation 8 is the average reconstruction loss computed over the sample S = {x 1 0 , . . . , x n 0 }. Note that for each 1 ≤ i ≤ n, the expectation of ℓ θ(xT |x i 0 ) is only computed w.r.t. the noise distribution q(xT |x i 0 ) defined by x i 0 itself. Hence, this term measures how well a noise vector xT ∼ q(xT |x i 0 ) recovers the original sample x i 0 using the backward process, and averages over the set S = {x 1 0 , . . . , x n 0 }. - If the Lipschitz constants satisfy Ktθ < 1 for all 1 ≤ t ≤ T, then the larger T is, the smaller the upper bound gets. This is because the product of Ktθ 's then converges to 0. In Remark 3.2 below, we show that the assumption that Ktθ < 1 for all t is a quite reasonable one. - The hyperparameter λ controls the trade-off between the prior-matching (KL) term and the diameter term ∆2 8n . If Ktθ < 1 for all 1 ≤ t ≤ T and T → ∞, then the convergence of the bound largely depends on the choice of λ. In that case, λ ∝ n 1/2leads to a faster convergence, while λ ∝ n leads to a slower convergence to a smaller quantity. This is because the bound of Mbacke et al. (2023) stems from PAC-Bayesian theory, where this trade-off is common, see e.g. Alquier (2021). - The last term of equation 8 does not depend on the sample size n. Hence, the upper bound given by Theorem 3.1 does not converge to 0 as n → ∞. However, if the Lipschitz factors (Ktθ )1≤t≤T are all less than 1, then this term can be very small, specially in low dimensional spaces. ## 3.2 Proof Of The Main Theorem The following result is an adaptation of a result by Mbacke et al. (2023). Lemma 3.2. Let λ > 0 and δ ∈ (0, 1) be real numbers. With probability at least 1 − δ over the randomness of the sample S = {x 1 0 , . . . , x n 0 } iid∼ µ*, the following holds:* $$W_{1}(\mu,\mu_{\theta}^{*})\leq\frac{1}{n}\sum_{i=1}^{n}\left\{\,\underset{q(\mathbf{x}_{T}|\mathbf{x}_{\theta}^{i})}{\mathbb{E}}\,\ell^{\theta}(\mathbf{x}_{T},\mathbf{x}_{\theta}^{i})\right\}+\frac{1}{\lambda}\left[\sum_{i=1}^{n}\text{KL}(q(\mathbf{x}_{T}|\mathbf{x}_{\theta}^{i})\,||\,p(\mathbf{x}_{T}))+\log\frac{1}{\delta}\right]+\frac{\lambda\Delta^{2}}{8n}.\tag{9}$$ The proof of this result is a straightforward adaptation of Mbacke et al. (2023, Lemma D.1). We provide the proof in the supplementary material (Section A.1) for completeness. Now, let us focus our attention on the second term of the right-hand side of equation 7, namely W1(µ n θ , πθ(·)). This part is trickier than for VAEs, for which the generative model's distribution is simply a pushforward measure. Here, we have a non-deterministic sampling process with T steps. Assumption 1 leads to the following lemma on the backward process. Lemma 3.3. For any given x1, y1 ∈ X *we have* E pθ(x0|x1) E pθ(y0|y1) ∥x0 − y0∥ ≤ K1 θ ∥x1 − y1∥ . Moreover, if 2 ≤ t ≤ T, then for any given xt, yt ∈ X *we have* E pθ(xt−1|xt) E pθ(yt−1|yt) ∥xt−1 − yt−1∥ ≤ Ktθ ∥xt − yt∥ + σt E ϵ,ϵ ′ ∥ϵ − ϵ ′∥ , where ϵ, ϵ ′ ∼ N (0, I)*, meaning* Eϵ,ϵ ′ *is a shorthand for* Eϵ,ϵ ′∼N(0,I). Proof. For the first part, let x1, y1 ∈ X . Since according to equation 2 we have pθ(x0|x1) = δg 1 θ (x1)(x0) and pθ(y0|y1) = δg 1 θ (y1)(y0), then $$\mathbb{E}_{p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})\;p_{\theta}(\mathbf{y}_{0}|\mathbf{y}_{1})}\|\mathbf{x}_{0}-\mathbf{y}_{0}\|=\left\|g_{\theta}^{1}(\mathbf{x}_{1})-g_{\theta}^{1}(\mathbf{y}_{1})\right\|\leq K_{\theta}^{1}\left\|\mathbf{x}_{1}-\mathbf{y}_{1}\right\|.$$ For the second part, let 2 ≤ t ≤ T and xt, yt ∈ X . Since pθ(xt−1|xt) = Nxt−1; g t θ (xt), σ2 t Iby equation 1, the reparameterization trick (Kingma & Welling, 2014) implies that sampling $$\mathbf{x}_{t-1}\sim p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})$$ is equivalent to setting $$\mathbf{x}_{t-1}=g_{\theta}^{t}(\mathbf{x}_{t})+\sigma_{t}\epsilon_{t},\quad{\mathrm{~with~}}\quad\epsilon_{t}\sim{\mathcal{N}}\left(\mathbf{0},\mathbf{I}\right).$$ (xt) + σtϵt, with ϵt ∼ N (0, I). (10) Using equation 10, the triangle inequality, and Assumption 1, we obtain E pθ(xt−1|xt) E pθ(yt−1|yt) ∥xt−1 − yt−1∥ = E ϵt E ϵ ′ t g t θ (xt) + σtϵt − g t θ (yt) − σtϵ ′ t ≤ E ϵt E ϵ ′ t g t θ (xt) − g t θ (yt) + σt E ϵt E ϵ ′ t ∥ϵt − ϵ ′ t∥ =g t θ (xt) − g t θ (yt) + σt E ϵt E ϵ ′ t ∥ϵt − ϵ ′ t∥ ≤ Ktθ ∥xt − yt∥ + σt E ϵ E ϵ ′ ∥ϵ − ϵ ′∥ , $$(10)$$ where ϵ, ϵ ′ ∼ N (0, I). Next, we can use the inequalities of Lemma 3.3 to prove the following result. Lemma 3.4. Let T ≥ 1*. The following inequality holds:* $$\begin{split}\mathbb{E}&\mathbb{E}\\ \mathbb{P}_{\theta}(\mathbf{x}_{T-1}|\mathbf{x}_{T})\,p_{\theta}(\mathbf{y}_{T-1}|\mathbf{y}_{T})\,p_{\theta}(\mathbf{x}_{T-2}|\mathbf{x}_{T-1})\,p_{\theta}(\mathbf{y}_{T-2}|\mathbf{y}_{T-1})\cdots\mathbb{E}\\ &\left(\prod_{t=1}^{T}K_{\theta}^{t}\right)\|\mathbf{x}_{T}-\mathbf{y}_{T}\|+\left(\sum_{t=2}^{T}\left(\prod_{i=1}^{t-1}K_{\theta}^{i}\right)\sigma_{t}\right)\mathbb{E}_{\epsilon^{\prime}}\|\epsilon-\epsilon^{\prime}\|\|\,,\end{split}$$ $\square$ where ϵ, ϵ ′ ∼ N (0, I). Proof Idea. Lemma 3.4 is proven by induction using Lemma 3.3 in the induction step. The details are in the supplementary material (Section A.2). Using the two previous lemmas, we obtain the following upper bound on W1(µ n θ , πθ(·)). Lemma 3.5. *The following inequality holds:* $$W_{1}(\mu_{\theta}^{n},\pi_{\theta}(\cdot))\leq\left(\prod_{i=1}^{T}K_{\theta}^{i}\right)\frac{1}{n}\sum_{i=1}^{n}\left\{\operatorname*{\mathbb{E}}_{q(\mathbf{x}_{T}|\mathbf{x}_{0}^{i}),p(\mathbf{y}_{T})}\|\mathbf{x}_{T}-\mathbf{y}_{T}\|\right\}+\left(\sum_{i=2}^{T}\left(\prod_{i=1}^{t-1}K_{\theta}^{i}\right)\sigma_{t}\right)\operatorname*{\mathbb{E}}_{\epsilon,\epsilon^{\prime}}\|\epsilon-\epsilon^{\prime}\|\,,$$ where ϵ, ϵ ′ ∼ N (0, I). Proof. Using the definition of W1, the trivial coupling, the definitions of µ n θ and πθ(·), and Lemma 3.4, we get W1(µ n θ , πθ(·)) = inf π∈Γ(µn θ ,πθ(·)) Zd(x, y) dπ(x, y) ≤ E x∼µn θ E y∼πθ(y) [∥x − y∥] = 1 n Xn i=1 E q(xT |x i 0 ) E p(yT ) E pθ(xT−1|xT ) E pθ(yT−1|yT ) . . . E pθ(x0|x1) E pθ(y0|y1) ∥x0 − y0∥ ≤ 1 n Xn i=1 E q(xT |x i 0 ) E p(yT ) t=1 Ktθ ! ∥xT − yT ∥ + t=2 tY−1 j=1 K j θ σt E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] Y T X T = Y T t=1 Ktθ !1 n Xn i=1 E q(xT |x i 0 ) E p(yT ) ∥xT − yT ∥ + X T t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ ∥ϵ − ϵ ′∥ . $$\square$$ Combining Lemmas 3.2 and 3.5 with equation 7 yields Theorem 3.1 . ## 3.3 Special Case Using The Forward Process Of Ho Et Al. (2020) Theorem 3.1 establishes a general upper bound that holds for any forward process, as long as the backward process satisfies Assumption 1. In this section, we specialize the statement of the theorem to the particular case of the forward process defined by Ho et al. (2020). Let X ⊆ R D. In Ho et al. (2020), the forward process is a Gauss-Markov process with transition densities defined as q(xt|xt−1) = N (xt; √αtxt−1,(1 − αt)I), where α1*, . . . , α*T is a fixed noise schedule such that 0 < αt < 1 for all t. This definition implies that at each time step 1 ≤ t ≤ T, $$q(\mathbf{x}_{t}|\mathbf{x}_{0})={\mathcal{N}}\left(\mathbf{x}_{t};{\sqrt{{\bar{\alpha}}_{t}}}\mathbf{x}_{0},(1-{\bar{\alpha}}_{t})\mathbf{I}\right),\quad{\mathrm{with}}\quad{\bar{\alpha}}_{t}=\prod_{i=1}^{t}\alpha_{i}.$$ The optimization objective to train the backward process ensures that for each time step t the distribution pθ(xt−1|xt) remains close to the ground-truth distribution q(xt−1|xt, x0) given by $$q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})={\mathcal{N}}\left(\mathbf{x}_{t-1};\mu_{q}^{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\sigma_{t}^{2}\mathbf{I}\right),$$ where $$\mu_{q}^{t}({\bf x}_{t},{\bf x}_{0})=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}{\bf x}_{t}+\frac{\sqrt{\alpha_{t-1}}(1-\alpha_{t})}{1-\bar{\alpha}_{t}}{\bf x}_{0}.\tag{11}$$ Now, we discuss Assumption 1 under these definitions. Remark 3.2. We can get a glimpse at the range of Ktθ for a trained DDPM by looking at the distribution q(xt−1|xt, x0), since pθ(xt−1|xt) is optimized to be as close as possible to q(xt−1|xt, x0). For a given x0 ∼ µ, let us take a look at the Lipschitz norm of x 7→ µ t q (x, x0). Using equation 11, we have $$\mu_{q}^{t}({\bf x}_{t},{\bf x}_{0})-\mu_{q}^{t}({\bf y}_{t},{\bf x}_{0})=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}({\bf x}_{t}-{\bf y}_{t}).$$ Hence, x 7→ µ t q (x, x0) is K′t -Lipschitz continuous with $$K_{t}^{\prime}=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}.$$ Now, if αt < 1 for all 1 ≤ t ≤ T, then we have 1 − α¯t > 1 − α¯t−1 which implies K′t < 1 for all 1 ≤ t ≤ T. Remark 3.2 shows that the Lipschitz norm of the mean function µ t q (·, x0) does not depend on x0. Indeed, looking at the previous equation, we can see that for any initial x0, the Lipschitz norm K′t = √αt(1−α¯t−1) 1−α¯t only depends on the noise schedule, not x0 itself. Since g t θ is optimized to match to µ t q (·, x0), for each x0 in the training set, and all the functions µ t q (·, x0) have the same Lipschitz norm K′t , we believe it is reasonable to assume g t θ is Lipschitz continuous as well. This is the intuition behind Assumption 1. The prior-matching term. With the definitions of this section, the prior matching term KL(q(xT |x0)|| p(xT )) has the following closed form: $$\mathrm{KL}(q(\mathbf{x}_{T}|\mathbf{x}_{0})\,||\,p(\mathbf{x}_{T}))={\frac{1}{2}}\left[-D\log(1-{\bar{\alpha}}_{T})-D{\bar{\alpha}}_{T}+{\bar{\alpha}}_{T}\,\|\mathbf{x}_{0}\|^{2}\right].$$ Upper-bounds on the average distance between Gaussian vectors. If ϵ, ϵ ′ are D-dimensional vectors sampled from N (0, I), then $$\mathbb{E}_{\epsilon,\epsilon^{\prime}}\left\|\epsilon-\epsilon^{\prime}\right\|\leq{\sqrt{2D}}.$$ Moreover, since q(xT |x i 0 ) = NxT ; ¯αT x i 0 ,(1 − α¯T )Iand the prior p(yT ) = N (yT ; 0, I), $$\operatorname*{\mathbb{E}}_{q(\mathbf{x}_{T}|\mathbf{x}_{0}^{i})\,p(\mathbf{y}_{T})}\|\mathbf{x}_{T}-\mathbf{y}_{T}\|\leq{\sqrt{\bar{\alpha}_{T}\left\|\mathbf{x}_{0}^{i}\right\|^{2}+(2-\bar{\alpha}_{T})D}}.$$ Special case of the main theorem. With the definitions of this section, the inequality of Theorem 3.1 implies that with probability at least 1 − δ over the randomness of {x 1 0 , . . . , x n 0 } iid∼ µ: W1(µ, πθ(·)) ≤ 1 n Xn i=1 E q(xT |x i 0 ) ℓ θ(xT , x i 0 ) +1λ "1 2 Xn i=1 h−D log(1 − α¯T ) − Dα¯T + ¯αT x i 0 2i+ log 1 δ # + λ∆2 8n+ Y T t=1 Ktθ !1 n Xn i=1 qα¯T x i 0 2+ (2 − α¯T )D + √ 2D X T t=2 tY−1 i=1 Kiθ ! σt ! . ## 4 Conclusion This note presents a novel upper bound on the Wasserstein distance between the data-generating distribution and the distribution learned by a diffusion model. Unlike previous works in the field, our main result simultaneously avoids strong assumptions on the data-generating distribution, assumptions on the learned score function, and exponential dependencies, while still providing a quantitative upper bound. However, our bound holds with high probability on the randomness of a finite i.i.d. sample, on which a loss function is computed. Since the loss is a chain of expectations w.r.t. Gaussian distributions, it can either be estimated with high precision or upper bounded using the properties of Gaussian distributions. ## Acknowledgments This research is supported by the Canada CIFAR AI Chair Program, and the NSERC Discovery grant RGPIN-2020- 07223. The authors sincerely thank Pascal Germain for interesting discussions and suggestions. The first author thanks Mathieu Bazinet and Florence Clerc for proof-reading the manuscript. ## References Pierre Alquier. User-friendly introduction to PAC-Bayes bounds. *arXiv preprint arXiv:2110.11216*, 2021. Joe Benton, Valentin De Bortoli, Arnaud Doucet, and George Deligiannidis. Nearly d-Linear Convergence Bounds for Diffusion Models via Stochastic Localization. *arXiv preprint arXiv:2308.03686*, 2023. Adam Block, Youssef Mroueh, and Alexander Rakhlin. Generative Modeling with Denoising Auto-Encoders and Langevin Sampling. *arXiv preprint arXiv:2002.00107*, 2020. Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating Gradients for Waveform Generation. In *International Conference on Learning Representations*, 2021. Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In The Eleventh International Conference on Learning Representations, 2023. Valentin De Bortoli. Convergence of denoising diffusion models under the manifold hypothesis. *Transactions* on Machine Learning Research, 2022. ISSN 2835-8856. Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling. In *Advances in Neural Information Processing Systems*, volume 34, pp. 17695–17709, 2021. Prafulla Dhariwal and Alexander Nichol. Diffusion Models Beat GANs on Image Synthesis. In *Advances in* Neural Information Processing Systems, volume 34, pp. 8780–8794, 2021. Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. Testing the Manifold Hypothesis. *Journal of* the American Mathematical Society, 29(4):983–1049, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. In *Advances in Neural Information* Processing Systems, volume 27, 2014. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems, volume 33, pp. 6840–6851, 2020. Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded Diffusion Models for High Fidelity Image Generation. *Journal of Machine Learning Research*, 23(1):2249–2281, 2022. Aapo Hyvärinen and Peter Dayan. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research, 6(4), 2005. Diederik P. Kingma and M. Welling. Auto-Encoding Variational Bayes. *CoRR*, abs/1312.6114, 2014. Dohyun Kwon, Ying Fan, and Kangwook Lee. Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance. *Advances in Neural Information Processing Systems*, 35:20205–20217, 2022. Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence for score-based generative modeling with polynomial complexity. In *Advances in Neural Information Processing Systems*, volume 35, pp. 22870–22882, 2022. Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence of score-based generative modeling for general data distributions. In *International Conference on Algorithmic Learning Theory*, pp. 946–985. PMLR, 2023. Gen Li, Yuting Wei, Yuxin Chen, and Yuejie Chi. Towards Faster Non-Asymptotic Convergence for DiffusionBased Generative Models. *arXiv preprint arXiv:2306.09251*, 2023. Haoying Li, Yifan Yang, Meng Chang, Shiqi Chen, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. SRDiff: Single Image Super-Resolution with Diffusion Probabilistic Models. *Neurocomputing*, 479:47–59, 2022. Calvin Luo. Understanding Diffusion Models: A Unified Perspective. *arXiv preprint arXiv:2208.11970*, 2022. Sokhna Diarra Mbacke, Florence Clerc, and Pascal Germain. Statistical Guarantees for Variational Autoencoders using PAC-Bayesian Theory. In *Advances in Neural Information Processing Systems*, 2023. Hariharan Narayanan and Sanjoy Mitter. Sample Complexity of Testing the Manifold Hypothesis. In Advances in Neural Information Processing Systems, volume 23, 2010. Jakiw Pidstrigach. Score-Based Generative Models Detect Manifolds. In *Advances in Neural Information* Processing Systems, volume 35, pp. 35852–35865, 2022. Phil Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein. The Intrinsic Dimension of Images and Its Impact on Learning. In *International Conference on Learning Representations*, 2021. Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech. In *International Conference on Machine Learning*, pp. 8599–8608. PMLR, 2021. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In *International Conference on Machine Learning*, pp. 1278– 1286. PMLR, 2014. Hiroshi Sasaki, Chris G Willcocks, and Toby P Breckon. UNIT-DDPM: UNpaired Image Translation with Denoising Diffusion Probabilistic Models. *arXiv preprint arXiv:2104.05358*, 2021. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015. Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. In Advances in Neural Information Processing Systems, volume 32, 2019. Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum Likelihood Training of Score-Based Diffusion Models. In *Advances in Neural Information Processing Systems*, volume 34, 2021a. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In International Conference on Learning Representations, 2021b. Brandon Trabucco, Kyle Doherty, Max Gurinas, and Ruslan Salakhutdinov. Effective Data Augmentation With Diffusion Models. *arXiv preprint arXiv:2302.07944*, 2023. Cédric Villani. *Optimal Transport: Old and New*, volume 338. Springer, 2009. Pascal Vincent. A Connection Between Score Matching and Denoising Autoencoders. *Neural Computation*, 23(7):1661–1674, 2011. Linqi Zhou, Yilun Du, and Jiajun Wu. 3D Shape Generation and Completion through Point-Voxel Diffusion. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 5826–5835, 2021. ## A Omitted Proofs A.1 Proof Of Lemma 3.2 Recall Lemma 3.2 states that the following inequality holds with probability 1 − δ: $$W_{1}(\mu,\mu_{\theta}^{n})\leq\frac{1}{n}\sum_{i=1}^{n}\left\{\underset{q(\mathbf{x}_{T}^{i}\mathbf{x}_{0}^{i})}{\mathbb{E}}\,\ell^{\theta}(\mathbf{x}_{T},\mathbf{x}_{0}^{i})\right\}+\frac{1}{\lambda}\left[\sum_{i=1}^{n}\mathrm{KL}(q(\mathbf{x}_{T}|\mathbf{x}_{0}^{i})\,||\,p(\mathbf{x}_{T}))+\log\frac{1}{\delta}\right]+\frac{\lambda\Delta^{2}}{8n}.$$ Proof of Lemma 3.2. Using the trivial coupling (product of marginals), the definition of µ n θ (equation 5), and the definition of the loss function ℓ θ, we get W1(µ, µn θ ) = inf π∈Γ(µ,µn θ ) Z X×X ∥x − y∥ dπ(x, y) ≤ Z X Z X ∥x − y∥ dµ(x) dµn θ (y) = E x∼µ E y∼µn θ ∥x − y∥ = E x∼µ "1 n Xn i=1 E q(xT |x i 0 ) E pθ(xT−1|xT ) . . . E pθ(x1|x2) E pθ(y|x1) ∥x − y∥ # = E x∼µ "1 n Xn i=1 E q(xT |x i 0 ) ℓ θ(xT , x) # . Using Mbacke et al. (2023, Lemma B.1), the following inequality holds with probability 1 − δ: W1(µ, µn θ ) ≤ 1 n Xn i=1 E q(xT |x i 0 ) ℓ θ(xT , x i 0 ) + 1 λ "Xn i=1 KL(q(xT |x i 0 )|| p(xT )) + log 1 δ + n log E x0∼µ⊗n E xT ∼p(xT ) e λ(Ey0∼µ[ℓ θ(xT ,y0)]− 1nPn i=1 ℓ θ(xT ,x0)) . (12) Now, it remains to upper-bound the exponential moment of equation 9. If supx,x′∈X ∥x − x ′∥ = ∆ < ∞, and λ > 0 is a real number, then the definition of the loss function ℓ θ and Hoeffding's lemma yield $$n\log{\underset{\mathbf{x}_{0}\sim\mu}{\mathbb{E}}{\underset{\mathbf{x}_{T}\sim\mu(\mathbf{x}_{T})}{\mathbb{E}}{c^{\lambda\left(\mathbb{E}_{\mathbf{x}^{\prime}\sim\mu}\left[\ell^{\ast}(\mathbf{x}_{T},\mathbf{x}^{\prime})\right]-{\frac{1}{n}}\sum_{i=1}^{n}\ell^{\ast}(\mathbf{x}_{T},\mathbf{x}_{0})\right)}}}\leq n\log\exp\left[{\frac{\lambda^{2}\Delta^{2}}{8n^{2}}}\right]={\frac{\lambda^{2}\Delta^{2}}{8n}}.$$ ## A.2 Proof Of Lemma 3.4 Recall Lemma 3.4 states that the following inequality holds: $$\begin{array}{c}{{\mathbb{E}}}\\ {{p_{\theta}(\mathbf{x}_{T-1}|\mathbf{x}_{T})p_{\theta}(\mathbf{y}_{T-1}|\mathbf{y}_{T})p_{\theta}(\mathbf{x}_{T-2}|\mathbf{x}_{T-1})p_{\theta}(\mathbf{y}_{T-2}|\mathbf{y}_{T-1})}}\\ {{\left(\prod_{t=1}^{T}K_{\theta}^{t}\right)\|\mathbf{x}_{T}-\mathbf{y}_{T}\|+\left(\sum_{t=2}^{T}\left(\prod_{i=1}^{t-1}K_{\theta}^{i}\right)\sigma_{t}\right)\frac{\mathbb{E}}{\epsilon_{\epsilon^{\prime}}}\|\epsilon-\epsilon^{\prime}\|\}\,,}}\end{array}$$ $$\square$$ Proof of Lemma 3.4. Let's do a proof by induction on T. - **Base case.** T = 1. The inequality becomes $$\begin{array}{c}\mathbb{E}\\ p\theta({\bf x}_{0}|{\bf x}_{1})\;p\theta({\bf y}_{0}|{\bf y}_{1})\end{array}\|{\bf x}_{0}-{\bf y}_{0}\|\leq K_{\theta}^{1}\;\|{\bf x}_{1}-{\bf y}_{1}\|\tag{1}$$ $$(13)$$ which holds by the first part of Lemma 3.3. - **Induction step.** Assume T > 1. The induction hypothesis is E pθ(xT−2|xT−1) E pθ(yT−2|yT−1) . . . E pθ(x0|x1) E pθ(y0|y1) ∥x0 − y0∥ ≤ TY−1 t=1 Ktθ ! ∥xT −1 − yT −1∥ + TX−1 t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] . (14) Using the induction hypothesis, the linearity of the expectation, Lemma 3.3 with t := T, E pθ(xT−1|xT ) E pθ(yT−1|yT ) . . . E pθ(x0|x1) E pθ(y0|y1) ∥x0 − y0∥ ≤ E pθ(xT−1|xT ) E pθ(yT−1|yT ) " TY−1 t=1 Ktθ ! ∥xT −1 − yT −1∥ + TX−1 t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] # = TY−1 t=1 Ktθ !E pθ(xT−1|xT ) E pθ(yT−1|yT ) [∥xT −1 − yT −1∥] + TX−1 t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] ≤ TY−1 t=1 Ktθ ! Ktθ ∥xT − yT ∥ + σT E ϵ,ϵ ′ ∥ϵ − ϵ ′∥ + TX−1 t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] = Y T t=1 Ktθ ! ∥xT − yT ∥ + " TY−1 t=1 Ktθ ! σT + T X−1 t=2 tY−1 i=1 Kiθ ! σt #E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] = Y T t=1 Ktθ ! ∥xT − yT ∥ + " TY−1 i=1 Kiθ ! σT + T X−1 t=2 tY−1 i=1 Kiθ ! σt #E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] = Y T t=1 Ktθ ! ∥xT − yT ∥ + X T t=2 tY−1 i=1 Kiθ ! σt !E ϵ,ϵ ′ [∥ϵ − ϵ ′∥] . ## B Numerical Experiments The goal of these experiments is to assess the numerical value of the bound of Theorem 3.1 on a synthetic dataset. The data-generating distribution is chosen to be the uniform distribution on the square of side 2, centered at the origin. Figure 2 shows samples from this target distribution. The backward process uses a shared network with fully connected layers, and 128 hidden units each. The model is trained on 50, 000 samples from the original distribution, and the bound is computed with n = 5, 000 independent samples. Samples from the trained model are shown in Figure 3. We computed the bound for different values of λ. Given that the datapoints are confined to a square of side 2√ , using the primal form of the Wasserstein distance yields a straightforward upper bound of W1(*µ, π*θ(·)) ≤ 8 ≈ 2.828. We estimated the Lipschitz norms Ktθ using K′t from Remark 3.2, and the expected norms in the last two terms of Theorem 3.1 are estimated using 106independent samples from each distribution. | λ | n/10 | n/5 | n/2 | n | n/0.5 | n/0.1 | |-------------|--------|-------|--------|-------|---------|---------| | Bound value | 1.124 | 1.231 | 1.5181 | 2.035 | 3.056 | 11.061 | ![12_image_0.png](12_image_0.png) Figure 2: The points represent 2000 samples from the target data-generating distribution. ![12_image_1.png](12_image_1.png) Figure 3: The points represent 2000 samples from the trained diffusion model.
Review 1: Summary: This work proposes a bound on the Wasserstein distance between the parameterized distribution and the true data distribution Strengths and Weaknesses: Strength: 1. the problem is fundamental and interesting to study per se Weakness: 1. The usefulness of the theory is unclear 2. Lack of numerical evidence 3. The theory seems to be standard (?) in the field of the study of VAEs, is this the case? Requested Changes: I am sorry for submitting the review so late. While I think the work can improve on the above three points I mentioned. I have no serious objection to the paper. Perhaps the main thing to clarify and add to the work is to clarify why the theory is novel -- given that it seems standard to apply it to VAEs. Broader Impact Concerns: NA ================================================== Review 2: Summary: This paper provides upper bounds on the Wasserstein distance between the true distribution and the learned distribution of the diffusion denoising framework. These upper bounds do not require any assumptions on the data distribution or the score function. Strengths and Weaknesses: The bounds provided on the Wasserstein distance between the data distribution and the learned distribution are - novel. - do not require any assumptions on the data distribution - do not require any assumptions on the score function of the data distribution Weaknesses: - The claims of the paper are ambiguous and misleading at times: ——For eg., it is not clear “no exponential dependencies” is referring to. Exponential in what parameters? The paper also claims that the results are different from others by providing “quantitative” upper bounds. It is unclear what this is referring to. - The upper bounds will of course be exponential in the dimension. - The upper bounds are provided in a crude form. It is not clear from this expression when these bounds would be non-trivial. This greatly impacts the usefulness of this paper to the community. It’s not evident where these bounds fit in the context of other density estimation bounds in the literature. - In fact, it is not even clear if the bounds imply consistency let alone optimality of the estimator provided by the diffusion model. Minor Issues: - The definition of the distribution \pi_\theta(.) is not illustrative or illuminating. - Is there a reason we bound the upper bound with the distance from the regenerated empirical distribution? - Why is the assumption of the Lipschitz of the functions g_\theta reasonable? Requested Changes: Addressing the above concerns will go a long way in making this paper useful to the community. Addition of some illustrative examples where the proposed bounds imply consistency or optimality of the learned distribution would be very helpful. Moreover, contexts in which the provided bounds are applicable to real world data distribution would be useful. Do the bounds depend on the ambient dimension of the data distribution or the implicit dimension? I would guess the dependence on the dimension would still be exponential but for what other parameters of the problem is it not? Is this the defining difference of the results of this paper as compared to the related work? Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors provide a bound on the approximation quality of the learned distribution in a diffusion model. The proof technique appears to follow closely to a prior work on the approximation quality of variational autoencoders [1]. [1] Statistical Guarantees for Variational Autoencoders using PAC-Bayesian Theory. https://arxiv.org/abs/2310.04935 Strengths and Weaknesses: Strengths: - The theorem relaxes some assumptions present in previous theoretical results, including those on the score function and data-generating distribution. - Diffusion models are a highly relevant topic. Weaknesses: - As I understand it, the tightness of the bound is inversely related to the likelihood of the bound being true (as controlled by the $\delta$ term). This is inherited from the underlying PAC-Bayes bound for bounded loss functions. - The present manuscript does not provide any numerical experimentation demonstrating the tightness of this bound. - To my understanding, there are limited empirical takeaways from this result, despite its nice theoretical properties. If I am missing something, I would appreciate that the authors clarify on this point. Requested Changes: I am interested in a simple synthetic experiment (e.g. on a 1D Gaussian data-generating distribution) that demonstrates the tightness of the proposed bound (perhaps at different $\delta$). Is the bound relatively tight? Or is it a loose bound? If there are empirical observations that can help guide the design and training of diffusion models, can the author elaborate on them? Broader Impact Concerns: No concerns. ================================================== Review 4: Summary: This paper establishes a distribution estimation bound (stated in the Wasserstein distance) of diffusion models. Compared to existing sampling theory of diffusion models, the assumptions in the paper are mild and not imposed directly on the data distribution and the score functions. In this regard, the contribution of the paper is a new error bound of diffusion models, assuming the backward conditional mean is Lipschitz in its argument. It is also to be noted that the error bound in the paper does not cover statistical estimation of the score function and the optimization guarantee of minimizing the score estimation loss. Therefore, the obtained result is not directly comparable to some recent advances in the statistical theory of diffusion models. Strengths and Weaknesses: As far as I know, the obtained error bound of diffusion models is new in the literature. The terms in the error bound is relatively easy to understand, although some detailed discussion should be further provided. The paper is also well organized, despite lacking more details on the preliminary of diffusion models. This might make unfamiliar readers struggle a bit. The technical derivations are mostly correct and sound. However, there are some undefined notations (or typos) and one particular unjustified claim. I will discuss weaknesses and raise questions in the next section. Requested Changes: 1. $K_i$ in Theorem 3.1 Equation (5) seems undefined. 2. A particular choice of $\lambda \propto n^2$ needs further discussion. Substituting $\lambda \propto n^2$ will make Equation (5) grow linearly with $n$. 3. Following the second question, I would suggest providing explicit error rate implied by Theorem 3.1. Although an instantiation is provided just before Section 4, the convergence rate is still largely unclear. As the authors claim that the bound is obtained with weaker assumptions, how does the obtained bound compares to existing ones? I am also curious how the last error term $\sqrt{2D} (\sum (\prod K_i) \sigma_t)$ depends on $n$. 4. There seems to be a mismatch between Assumption 3.1 and Remark 3.2. The justification in Remark 3.2 needs a conditioning on the initial data point $x_0$, while Assumption 3.1 is a marginalized version. How does the justification circumvent this issue? In fact, if we consider the score-based diffusion models, Assumption 3.1 is almost equivalent to requiring the score function being Lipschitz continuous, as $g_\theta^t$ is determined by the score function (see Sampling is as easy as learning the score, by Chen et al., 2023). Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: Reviewer js5n has raised multiple concerns and authors addresses most of those in their rebuttal. However, I ask authors to carefully go over the suggested items by this reviewer and make another attempt to address any remaining concerns; in particular, see the reviewers comment 3 and 4. ==================================================
# The Kernel Perspective On Dynamic Mode Decomposition Anonymous authors Paper under double-blind review ## Abstract This manuscript takes a critical look at the interactions between Koopman theory and reproducing kernel Hilbert spaces with an eye towards giving a tighter theoretical foundation for Koopman based dynamic mode decomposition (DMD), a data driven method for modeling a nonlinear dynamical system from snapshots. In particular, this paper explores the various necessary conditions imposed on the dynamics when a Koopman operator is bounded or compact over a reproducing kernel Hilbert space. Ultimately, it is determined that for many RKHSs, the imposition of compactness or boundedness on a Koopman operator forces the dynamics to be affine. However, a numerical method is still recovered in more general cases through the consideration of the Koopman operator as a closed and densely defined operator, which requires a closer examination of the connection between the Koopman operator and a RKHS. By abandoning the feature representation of RKHSs, the tools of function theory are brought to bear, and a simpler algorithm is obtained for DMD than what was introduced in Williams et al (2016). This algorithm is also generalized to utilize vector valued RKHSs. ## 1 Introduction Dynamic mode decomposition (DMD) has been gaining traction as a model-free method of making short-run predictions for nonlinear dynamical systems using data obtained as snapshots of trajectories. DMD aims to obtain a finite rank representation of the Koopman operator by studying its action on the full state observable (i.e. the identity function) (Rowley et al., 2009). Koopman operators over reproducing kernel Hilbert spaces (RKHSs) were studied to take advantage of infinite dimensional feature spaces to extract more information from the snapshots of a system in (Williams et al., 2015a). This perspective also enacts a dimensionality reduction by formulating the DMD method in a reproducing kernel Hilbert space (RKHS) framework and implicitly using the kernel trick to compute inner products in the high-dimensional space of observables. In (Williams et al., 2015b), it is shown that kernel-based DMD produces a collection of Koopman modes that agrees with other DMD results in the literature. The introduction of kernel based techniques for Koopman analysis and DMD has catalyzed a new direction for Koopmanism. The new perspective added by kernel methods is that of approximations and it is at the center of the work presented in this manuscript. Universal RKHSs, such as those corresponding to Gaussian RBFs and exponential dot product kernel functions, have the ability to approximate any continuous function over a compact subset of R n to any desired accuracy up to computational precision. Moreover, when the kernel function is continuous or bounded, convergence in RKHS norm yields point wise everywhere convergence and uniform convergence over compact subsets of R n (Steinwart & Christmann, 2008). This everywhere convergence stands in contrast to the perspective through the lens of ergodic methods, where convergence results only hold almost everywhere. For additional insight into the point wise everywhere convergence property described above see example 1 in Appendix A.1. The study of dynamical systems through the Koopman formalism over RKHSs manifests as a search for functions that are close to being eigenfunctions of the Koopman operator, rather than the actual eigenfunctions. Since only a finite amount of data can be available for the study of an infinite dimensional operator, actual eigenfunctions typically cannot be computed. In fact, there is no requirement that KF even has any eigenfunctions. The universality property arises in the search for Koopman "eigenfunctions," which, given a particular Koopman operator and kernel space, might not exist, since the existence of an eigenfunction depends on the Hilbert space. However, the formal equation, ϕ(F(x)) − λϕ(x) = 0 may still hold for some continuous function ϕ. If one is working over a RKHS corresponding to a universal kernel, then for any given compact set and ε > 0, there is a function ϕ˜ ∈ H such that |ϕ(x) − ϕ˜(x)| < ε for all x in that compact set, which provides for an "approximate" eigenfunction. Here we define approximate eigenfunctions as follows: Definition 1. For > 0 an approximate eigenfunction ϕˆ is a function in the Hilbert space H together with λˆ ∈ C such that |KF ϕˆ(x) − λˆϕˆ(x)| < for all x ∈ X. This is particularly important for DMD methods, which attempt to construct a finite rank approximation of a Koopman operator from a finite collection of observed snapshots. Note that obtaining approximate eigenfunctions as in Example 1 is not dissimilar to the objective of ergodic methods, where approximation of system invariants and eigenfunctions using time averages is sought. The existence of eigenfunctions depends on the selection of the Hilbert space, as will be shown in Section 4, and eigenfunctions may not be present even in the L 2ergodic setting (Budišić et al., 2012). The objective of this manuscript is to present the kernel perspective of Koopmanism as a distinct study from the ergodic perspective. With that goal in mind, the paper is structured in the following way: - Section 2: Introduces reproducing kernel Hilbert spaces (RKHSs) which will be necessary for the remainder of the paper. - Section 3: Introduces Koopman operators, their relationship to DMD, and the differences between the ergodic and kernel perspectives. Here the pointwise convergence properties in the RKHS are presented. - Section 4: Discusses properties of Koopman operators over RKHSs. It is demonstrated that assumptions of boundedness and compactness hold for a very small collection of Koopman operators. - Section 5: A numerical algorithm for a Koopman based DMD method is presented, which relaxes the assumptions of bounded and compact Koopman operators to densely defined Koopman operators. This yields a new theoretical foundation for the study of Koopman DMD over RKHSs. The new algorithm still relies on some of the same matrices presented in (Williams et al., 2015b). - Section 6: Extends the developed DMD algorithm to vector valued RKHSs. - Section 7: Provides a numerical example that compares the developed DMD method to that presented in (Williams et al., 2015b). - Appendix: Includes examples and proofs referenced in the text as well as a few numerical experiments which show that the developed DMD algorithm has nearly identical results to that developed by (Williams et al., 2015b). ## 2 Reproducing Kernel Hilbert Spaces Definition 2. A Reproducing Kernel Hilbert Space (RKHS) H over a set X is a Hilbert space composed of functions from X to C such that for all x ∈ X the evaluation functional Exf := f(x) is bounded. Therefore, for all x ∈ X there exists a function Kx ∈ H such that f(x) = h*f, K*xi for all f ∈ H. The function Kx is the reproducing kernel centered at x and the function K : X × X → C defined by K(*x, y*) = hKy, Kxi is the unique kernel function corresponding to H (Aronszajn, 1950). Throughout most of this manuscript, the methods used will be restricted to RKHSs of real valued functions. However, for some specific examples in Section 4, it will be more convenient to employ RKHSs of complex valued functions of a real variable. Reproducing kernels can be equivalently expressed as realizations of inner products of feature space mappings in ` 2(N) (Steinwart & Christmann, 2008). Proposition 1. *Given an orthonormal basis for a RKHS,* {em(·)}∞ m=1 ⊂ H*, the kernel function may be* expressed as K(*x, y*) = P∞ m=1 em(x)em(y)*, where* Ψ(x) := (e1(x), e2(x)*, . . .*) ∈ ` 2(N) *is called a feature map.* Equivalently, given a feature mapping Ψ : X → ` 2(N)*, there is a RKHS whose kernel function is given as* K(*x, y*) = hΨ(x), Ψ(y)i` 2 . We will make repeated use of projections onto finite dimensional vector spaces arising from spans of collections of kernels centered at snapshots from a dynamical system. Proposition 2. For a collection of centers {x1, . . . , xm}, the projection of a function g ∈ H *onto* α = span{Kx1 , . . . , Kxm} *is given as* arg minh∈α kh − gkH*. The projection can be expressed in terms of* α as Pαg := Pm i=1 wiKxi where the weights wi *satisfy* $${\begin{pmatrix}K(x_{1},x_{1})&\cdots&K(x_{1},x_{m})\\ \vdots&\ddots&\vdots\\ K(x_{m},x_{1})&\cdots&K(x_{m},x_{m})\end{pmatrix}}{\begin{pmatrix}w_{1}\\ \vdots\\ w_{m}\end{pmatrix}}={\begin{pmatrix}g(x_{1})\\ \vdots\\ g(x_{m})\end{pmatrix}}\,.$$ $\quad(1)$ . . (1) Proof. The projection of a function in a Hilbert space onto a closed subspace is determined by finding the closest member of that subspace to the function being projected. The matrix equation may be established by expressing the projection as Pαg =Pm i=1 wiKxi , expanding kPm i=1 wiKxi − gk 2 H via inner products, and setting the derivative with respect to w := (w1*, . . . , w*m) T ∈ R m to zero, resulting in the system of linear equations in equation 1. If the functions {Kx1 , . . . , Kxm} are linearly independent then the Gram matrix in equation 1 is non-singular. In that case, the weights {wi} m i=1 are unique. Definition 3. A RKHS of real valued functions, H, over Ω ⊂ R n, is said to be universal if for any compact V ⊂ Ω, > 0, and h ∈ C(V ), there is a function h˜ ∈ H such that kh − h˜k∞ < , where C(V ) denotes the set of continuous functions defined on V . Many commonly used kernel functions satisfy this universality property, including the Gaussian RBF kernel functions and the exponential dot product kernel functions (Steinwart & Christmann, 2008). ## 3 Koopman Operators Over Rkhss The theory of Koopman operators has been long intertwined with ergodic theory, where ergodic theoretic methods justify almost everywhere convergence claims of time averaging methods to invariants of the Koopman operator. The Birkhoff and Von Neumann ergodic theorems are posed over L 1(R) and L p(R) for p > 1, respectively (Walters, 2000). However, the invariants for Koopman operators are not always analytic or even smooth. Hence, ergodic theorems do not give guarantees of convergence within most RKHSs, which are frequently composed of real analytic functions. Furthermore, even though ergodic theorems guarantee the existence of invariants over L 2(R), the invariant itself is hidden behind a limiting operation via time averages (Walters, 2000). Hence, there is an expected error in the constructed invariant that stems from finiteness of data. The objective of DMD methods is to find functions within the function space that nearly achieve eigenfunction behavior. Specifically, for a Koopman operator, KF , and > 0, the objective is to find ϕˆ ∈ H and λ ∈ C for which |KF ϕˆ(x) − λϕˆ(x)| < for all x in a given workspace. When such a function is discovered, the eigenfunction witnesses the snapshots, xi+1 = F(xi), as an exponential function as ϕˆ(xi+1) = λ iϕˆ(x1) + 1−λ i+1 1−λ· . If ϕˆ is a proper eigenfunction for KF , then may be taken to be zero, and ϕˆ(xi+1) = λ iϕˆ(x1). Ergodic methods generally yield |KF ϕˆ(x)−λϕˆ(x)| < only for almost all x within the domain of interest. For a RKHS, the condition |KF ϕˆ(x)−λϕˆ(x)| < may be relaxed to kKF ϕˆ−λϕˆkH < , since |KF ϕˆ(x)−λϕˆ(x)| < CkKF ϕˆ−λϕˆkH < C, where C > 0 depends on the kernel function and the point x. If the kernel function is continuous and the domain is compact, a finite C may be selected uniformly for that domain. In the special case of the Gaussian RBF kernel function and the domain being R n, C may be taken to be 1. Proposition 3. If KF is compact and the finite rank approximation of KF , which we will denote as KˆF , is within of KF with respect to the operator norm, and if ϕˆ is a normalized eigenfunction for KˆF with eigenvalue λ, then kKF ϕˆ − λϕˆkH ≤ kKF ϕˆ − KˆF ϕˆkH ≤ kKF − KˆF k ≤ . Hence, if we obtain a finite rank approximation of KF that is within with respect to the operator norm, then an eigenfunction of the finite rank approximation will be an approximate eigenfunction of KF . This approximation is important in DMD, where the eigenfunctions are utilized to generate an approximation of the full state observable, gid(x) := x, one dimension at a time, as outlined in Section 5. If an accurate finite rank approximation of the Koopman operator can be obtained in a RKHS, then the approximation of the overall model is accurate point-wise everywhere via the same proof given in (Rosenfeld et al., 2022), and uniformly over compact sets when the RKHS consists of continuous functions. In contrast, the approximation is only accurate almost everywhere when considering Koopman operators posed over L 2(R). Point-wise everywhere approximation is a distinctive advantage of kernel based methods. This convergence result is less clear from an invocation of the kernel trick in machine learning, and exemplifies the advantage of the operator theoretic considerations introduced in this manuscript. The approximation of the Koopman operator in the operator norm topology naturally leads to the question of when a Koopman operator, or more generally, a composition operator, can be compact. Compactness is a central issue for DMD as every approximation is indeed of finite rank, stemming from the observed data. In 1979, it was established in (Singh & Kumar, 1979) that composition operators over L 2(µ) cannot be compact when µ is non-atomic. Indeed, L 2(R) has no compact composition operators. Koopman operators over most frequently considered RKHSs are compact for a very narrow range of dynamics. In fact, most Koopman operators over these spaces are not even bounded, as will be expanded upon in Section 4.2.3. Unboundedness is an added complication for the valid implementation of DMD, addressed in Section 5, using densely defined and potentially unbounded Koopman operators. Alternatively, other classes of compact operators over RKHSs connected to dynamical systems can be leveraged for DMD procedures to give convergence guarantees (see, e.g., Rosenfeld et al. (2022)). The remainder of this section introduces densely defined Koopman operators over RKHSs, where Lemma 1 enables the DMD algorithm introduced in Section 5. Definition 4. Let H be a RKHS over R n. For a function F : R n → R n we define the Koopman Operator (sometimes called a composition operator), KF : D(KF ) → H, as KF g = g ◦ F where D(KF ) = {g ∈ H : g ◦ F ∈ H}. When D(KF ) is dense in H, KF is said to be densely defined. While not all densely defined Koopman operators over RKHSs are bounded, they are all closed operators (Pedersen, 2012). Lemma 1. Let F : X → X be the symbol for a Koopman operator over a RKHS H *over a set* X. KF : D(KF ) → H *is a closed operator.* The proof of the above lemma is provided in Appendix B.1. Proposition 4. If a Koopman operator is densely defined, its adjoint is densely defined and closed. Proof. Given the kernel function centered at x, hKF g, KxiH = hg ◦ *F, K*xiH = g(F(x)) = h*g, K*F (x)iH. Therefore, the linear functional g 7→ hKF *g, K*xi is bounded over H. Thus, Kx ∈ D(K∗F ) for all x ∈ X, and K∗F Kx = KF (x). Hence, each kernel function is in the domain of the adjoint of a densely defined Koopman operator, and as the span of kernel functions is dense in their RKHS, the adjoint is densely defined. ## 4 A Different Landscape For Koopman Operators This section examines properties of Koopman operators over RKHSs. The selection of space fundamentally changes the behavior of Koopman operators over that space, where properties such as the lattice of eigenfunctions, common eigenfunctions for different discretizations, and boundedness of the operators may not hold. In the succeeding subsections we discuss each of these properties and provide counter examples for each of these properties in Appendix A.3 for Koopman operators corresponding to the continuous time dynamics x˙ =x2 −x1 T. We discuss that many RKHSs only support bounded Koopman operators when the discretized dynamics are affine, and in the case of the Gaussian RBF's native space, we provide a novel proof of this fact in Appendix B.2. Subsequently, the notation F∆t will denote the discretized dynamics corresponding to x˙ =x2 −x1 Twith fixed time step ∆t > 0. Recently, kernel methods have been adapted for the study of DMD and Koopman operators, largely through the guise of extended DMD, where kernels are leveraged to simplify computations via the kernel trick. However, the adjustment from the classical study of Koopman operators through ergodic theory to that of reproducing kernel Hilbert spaces leads to significant differences in the Koopman operators and their properties. In most cases, the ergodic theorem cannot be directly applied to recover invariants of the operation g 7→ g ◦ F, for a given F, since those invariants may be nonsmooth. This section exemplifies some of the distinguishing properties of Koopman operators over RKHSs, and in some cases, illustrates their limitations. Much of the classical properties of Koopman operators is established in a variety of specific contexts, such as L 2spaces of invariant measures and L 1spaces, can be seen in (Budišić et al., 2012; Kawahara, 2016; Kutz et al., 2016b; Brunton & Kutz, 2019; Brunton et al., 2021). Properties of the Koopman operator strongly depend on the selection of underlying vector space, and boundedness, compactness, eigenvalues, etc. change based on this selection. While Koopman operators were introduced by Koopman in 1931 in (Koopman, 1931) and then later picked up by the data science community in the early 2000s (e.g. (Mezić, 2005; Kutz et al., 2016b)), the study of such operators and their properties continued in earnest throughout the 20th century as composition operators (e.g. (Shapiro, 2012)). This is particularly important for RKHSs, where the specification of a bounded or densely defined Koopman operator over a particular space yields strong restrictions on the available dynamics. ## 4.1 Concerning Sampling And Discretizations 4.1.1 Forward Complete Dynamics In applications, Koopman operators enter the theory of continuous time dynamics through a discretization of the continuous time dynamical system (Bittracher et al., 2015; Mauroy & Mezić, 2016). That is, given the dynamical system x˙ = f(x), the system is discretized through the selection of a fixed time-step, ∆t > 0, as xm+1 = xm +R tm+∆t tmf(x(t))dt, where the right hand side plays the role of the discrete dynamics. However, for such a discretization to exist for arbitrarily large values of m, it is necessary that the dynamics be forward complete. The forward completeness assumption restricts the class of continuous dynamics on which Koopman based methods may be applied. For example, the continuous time dynamics, x˙ = 1 + x 2, does not admit a discretization, since it is not forward complete. Example 2 in Appendix A.2 demonstrates where discretization fails for x˙ = 1 + x 2. A DMD method that circumvents this requirement, by utilizing Liouville operators and occupation kernels, may be found in (Rosenfeld et al., 2022). ## 4.1.2 Sampling And Data Science Ergodic based methods as employed in (Budišić et al., 2012; Mezić, 2005; Kutz et al., 2016b;b; Takeishi et al., 2017) provide a methodology for obtaining invariants and eigenfunctions for a Koopman operator almost everywhere. That is, by selecting a continuous representative of an equivalence class in an L 2space for the invariant measure, at almost every point within the domain, time averaging against that representative will converge to an invariant of the operator. However, this is a "probability 1" result, and the number of points where it may fail can potentially be uncountable. Without any external information concerning the convergence, there is no true guarantee that at a particular selected point, the time-averaged approximation will be close to the evaluation of an actual invariant at that point. Such computational issues are precisely where the strength of kernel methods manifest. To illustrate the kernel method consider the following theorem: Theorem 1. Let F : R n → R n *be a discretization of a dynamical system with the corresponding Koopman* operator, KF : H → H, and let H *be a RKHS over* R n *consisting of continuous functions. Suppose further* that > 0 and KˆF : H → H is an approximation of KF *such that the norm difference is bounded as* kKF − KˆF k < . Suppose that ϕˆ ∈ H and is a normalized eigenfunction of KˆF with eigenvalue λ*. Then* ϕˆ is an approximate eigenfunciton of KF . Proof. $$|{\cal K}_{F}\hat{\varphi}(x)-\lambda\hat{\varphi}(x)|=|{\cal K}_{F}\hat{\varphi}(x)-\hat{\cal K}_{F}(x)\hat{\varphi}(x)|$$ $=|\langle({\cal K}_{F}-\hat{\cal K}_{F})\hat{\varphi},K(\cdot,x)\rangle_{H}|\leq\|{\cal K}_{F}-\hat{\cal K}_{F}\|\|K(\cdot,x)\|_{H}\leq\epsilon\cdot C$ and C > 0 is a constant that depends on the kernel function and a prespecified compact domain. The compact domain may be extended to all of R n in some cases, such as when the kernel function is the Gaussian RBF kernel function. Thus it can be seen that kernel spaces and approximations that are close to the Koopman operator, in operator norm, can provide functions that behave similar to eigenfunctions of the Koopman operator. Moreover, the difference in behavior from a proper eigenfunction is governed pointwise by how close the operator approximation is in the first place. ## 4.2 Properties Of The Operators This section will consider the classical Fock space consisting of entire functions as a function space over which the Koopman operator is defined. The Fock space is used extensively in Quantum Mechanics (Hall, 2013) and it is a space where operators have been well studied (Zhu, 2012). Definition 5. The Fock space is a RKHS, with kernel function K(*z, w*) = e wz¯. The kernel function for the Fock space over C n may be obtained through a product of single variable kernels as K(*z, w*) = e w ∗z = e w¯1z1*· · ·* e w¯nzn . The Fock space is given as $$F^{2}(\mathbb{C}):=\left\{f(z)=\sum_{m=0}^{\infty}a_{m}z^{m}:\sum_{m=0}^{\infty}|a_{m}|^{2}m!<\infty\right\}.$$ Closely related to the Fock space is the exponential dot product kernel, e x T y, where for a single variable, the exponential dot product kernel's native space may be obtained by restricting the Fock space to the reals, and then taking the real part of the restricted functions. Through a conjugation of the exponential dot product kernel, the Gaussian RBF may be obtained as $$K_{G}(x,y)=e^{-\|x\|_{2}^{2}/2}e^{x^{T}y}e^{-\|y\|_{2}^{2}/2}=\exp\left(-\frac{\|x-y\|_{2}^{2}}{2}\right),$$ and performing the same operation on the Fock space kernel over C n yields he Fock space $\!$ . $$K_{G}(z,w)=e^{-z^{2}/2}e^{w^{*}z}e^{-\bar{w}^{2}/2}=\exp\left(-\frac{(z-\bar{w})^{2}}{2}\right),$$ which is the kernel corresponding to the complexified native space for the Gaussian radial basis function over C n (cf. (Steinwart & Christmann, 2008)). This space may be expressed as $$H_{G}^{2}(\mathbb{C})=\left\{g(z)e^{-z^{2}/2}:g\in F^{2}(\mathbb{C}^{n})\right\},$$ and the native space corresponding to the Gaussian RBF can be obtained by taking the real parts of functions from H2G and restricting to R n. ## 4.2.1 Lattice Of Eigenfunctions As presented in (Budišić et al., 2012; Klus et al., 2015), the eigenfunctions of Koopman operators over L∞(R) form a lattice. That is if ϕ1 and ϕ2 are two eigenfunctions for the Koopman operator, then so is ϕ1 ·ϕ2. For the lattice to occur more generally, it is necessary for the product of the eigenfunctions to be a member of the underlying vector space. This closure property holds, for example, in the space of continuous functions and other Banach algebras. Hilbert spaces are not generally Banach algebras, and since it is desirable to work over Hilbert spaces for properties such as best approximations, projections, and orthonormal bases (cf. (Folland, 1999)), it is important to point out that the closure property of eigenfunctions of Koopman operators does not hold in general. For example the eigenfunctions of KFπ do not form a lattice. Fundamentally, powers of ϕ(z) = e z 2/4 ∈ F 2(C) are not all contained in the Fock space. This is ultimately a consequence of growth conditions imposed by the RKHS norm. For more details consult Appendix A.3.1. ## 4.2.2 Common Eigenfunctions The intuition behind the use of Koopman operators in the study of continuous time dynamical systems is that eigenfunctions for the Koopman operators should be "close" to that of the Koopman generator for small timesteps. However, semi-groups of Koopman operators do not always share a common collection of eigenfunctions. Since each Koopman operator obtained through a fixed time-step may produce a different collection of eigenfunctions, there is no way to distinguish which, if any, should correspond to eigenfunctions of the Koopman generator. In Appendix A.3.2 we show that an eigenfunction for the Koopman operator corresponding to Fπ/2 and is not an eigenfunction for the Koopman operator corresponding to Fπ/3. ## 4.2.3 Boundedness Of Koopman Operators Throughout the literature, it is frequently assumed that Koopman operators are bounded. This assumption manifests as an unrestricted selection of observables in the study of the Koopman operator. When a Koopman operator is a densely defined operator whose domain is the entire Hilbert space, it is also closed (Pedersen, 2012). Hence, by the closed graph theorem (cf. (Folland, 1999, Theorem 5.12)), such an operator must be bounded. Furthermore, the collection of finite rank operators is dense in the collection of bounded operators over a Hilbert space in the strong operator topology (SOT) (cf. (Pedersen, 2012, Paragraph 4.6.2)). Convergence in SOT was independently studied in the work (Korda & Mezić, 2018), where the DMD routine was demonstrated to converge to a bounded Koopman operator in SOT. As mentioned in (Korda & Mezić, 2018), SOT convergence does not in general lead to convergence of the eigenvalues. To preserve spectral convergence, the finite rank approximations produced by DMD algorithms need to converge to Koopman operators in the operator norm topology. The most direct approach, and one that leads to good pointwise estimates of eigenfunctions, is through the use of compact Koopman operators. However, it isn't immediately clear when one can expect a continuous dynamical system to yield a compact Koopman operator through discretization. For example, the Koopman operator corresponding to discretization of the continuous time system x˙ = 0 is the identity operator, I, for any fixed time step, and I is not compact over any infinite dimensional Hilbert space. In addition, for any given RKHS, the collection of bounded Koopman operators is very small. It was demonstrated in (Carswell et al., 2003) that a Koopman operator over the Fock space is bounded only when the corresponding discrete dynamics are *affine*. It follows that the same result holds over the exponential dot product kernel's native space. It may perhaps be less obvious that this result extends to the Gaussian RBF's native space. A proof of this fact was first published in (Ikeda et al., 2022). We provide a novel proof of this result that leverages the Gaussian RBF kernel's connection to the Fock space in Appendix B.2. The theorem and corollary related to our proof is shown below. These proofs demonstrate that even for popular selections of RKHSs, the collection of bounded Koopman operators is small. Theorem 2. If F : C n → C n is an entire function, and KF is bounded on HG*, then* F(z) = Az + b for a matrix A ∈ C n×n *and vector* b ∈ C n. Here, HG *represents the Gaussian RBF's native space.* Corollary 1. If F is a real entire vector valued function, and KF is bounded on the Gaussian RBF's native space over R n, then F *is affine.* Hence, for the most commonly used kernel function in machine learning, the collection of bounded (and hence compact) Koopman operators over its native space is restricted to only those Koopman operators corresponding to affine dynamics. Each selection of RKHS and kernel function will yield a correspondingly small collection of bounded Koopman operators. It should be noted that Koopman operators were completely classified over the classical sampling space, the Paley-Wiener space (Chacón & Giménez, 2007), as also being those that correspond to affine dynamics, and it is a simple exercise to show that the native space for the polynomial kernel also only admits bounded Koopman operator when the dynamics are affine. In summary, thus far it has been established that Koopman operators are only bounded in the case where the dynamics are affine over the following spaces: - The polynomial kernel space, K(*x, y*) = (1 + x T y) n, - The Fock Space, K(*z, w*) = e αzw¯, - The exponential dot product kernel's native space, K(*x, y*) = e αxT y, - The Gaussian RBF's native space, K(*x, y*) = exp − 1 µ kx − yk 2 2 , - Paley-Wiener spaces, K(*x, y*) = *sinc*(α(x − y)), and - Other spaces discussed in (Ikeda et al., 2022). Consequently, in most practical respects Koopman operators over RKHSs should not be assumed to be bounded, and certainly not compact. ## 5 Dynamic Mode Decomposition With Koopman Operators Over Rkhss As a product of its genesis in the machine learning community, many DMD procedures appeal to feature space, and this holds in implementations of kernel-based extended DMD (Williams et al., 2015b), which casts the snapshots from a finite dimensional nonlinear system into an infinite feature space. The direct involvement of the feature space in the estimation of the Koopman operator leads to rather complicated numerical machinery. To avoid directly computing the infinite dimensional vectors that result, an involved collection of linear algebra techniques are leveraged to extract the Koopman modes. Here it is shown that this process may be simplified and that a procedure that directly involves the kernel functions centered at the snapshots simplifies the design of DMD algorithms. This approach keeps with the spirit of the "kernel trick," where feature vectors are never directly evaluated and only accessed through evaluations of the kernel function itself. The algorithm presented in this section is designed around scalar valued RKHSs, which necessitates the decomposition of the (vector valued) full state observable component-wise. A complete vector valued algorithm is discussed in Section 6, where it is demonstrated that the present algorithm is computationally equivalent to the vector valued algorithm for certain selections of kernel operators. Recall from Proposition 2 that in a real valued Hilbert space, the projection of a function g onto a collection of linearly independent basis functions, u1*, . . . , u*M, is a linear combination of those functions, P g =PM j=1 wjuj where the weights wj may be determined by solving $$\begin{pmatrix}\langle u_{1},u_{1}\rangle_{H}&\cdots&\langle u_{1},u_{M}\rangle_{H}\\ \vdots&\ddots&\vdots\\ \langle u_{M},u_{1}\rangle_{H}&\cdots&\langle u_{M},u_{M}\rangle_{H}\end{pmatrix}\begin{pmatrix}w_{1}\\ \vdots\\ w_{M}\end{pmatrix}=\begin{pmatrix}\langle g,u_{1}\rangle_{H}\\ \vdots\\ \langle g,u_{M}\rangle_{H}\end{pmatrix}$$ . We will use Definition 6 in order to aid us in creating and denoting the finite rank representation of KF . Definition 6. Let E be a linear transformation between two finite dimensional vector spaces V and W. Suppose that α = {α1*, . . . , α*N } is an ordered basis for V , and suppose that β = {β1*, . . . , β*M} is an ordered basis for W. A matrix representation of the linear transformation, E, with respect to the ordered bases α and β is denoted as [E] β α, where the j-th column of this matrix contains the weights of the vector Eαj corresponding to the ordered basis β. Throughout this algorithm, a Koopman operator will be assumed to be densely defined, as Section 4 demonstrated that most Koopman operators cannot be expected to be bounded or compact. An additional assumption will be made that the kernel functions themselves reside in the domain of the Koopman operator. It should be noted that since the kernels are always in the domain of the adjoint of the Koopman operator (see Section 3), a finite rank representation of the adjoint of the Koopman operator may thus be derived without assuming that the kernels are in the domain of the Koopman operator. For the sake of the derivation, it is also assumed that the Koopman operator is diagonalizable, which is not generally expected to be true. However, the finite rank representations leveraged in this manuscript are almost always diagonalizable, since the set of non-diagonalizable matrices are of measure zero in the collection of all matrices. Moreover, for periodic data sets, the adjoint of the Koopman operator will be invariant on the span of the collection of kernel functions centered at the snapshots, and thus, the finite rank representations will be explicitly the adjoint of the Koopman operator on that subspace, which supports the assumption of the availability of eigendecompositions for the Koopman operator in the periodic or quasiperiodic settings. For a given collection of snapshots {x1, x2*, ..., x*m} 1, the goal is to determine a finite rank representation of KF that is derived from the kernel functions centered at the snapshots. To express a finite rank representation, the ordered basis α = {kx1 , ..., kxm−1 } is leveraged. In particular, if Pα is the projection onto span(α), the operator PαKF maps span(α) to itself, which enables the discussion of eigenfunctions and eigenvalues of PαKF using only functions in span(α). Proposition 5. Given a collection of snapshots {x1, x2, ..., xm} ⊂ X *and the corresponding ordered basis* α = {kx1 , ..., kxm−1 } *of kernel functions, if* g =Pm−1 i=1 aikxi and PαKF g =Pm−1 i=1 bikxi then $$\left(2\right)$$ $$G b={\mathcal{I}}a,$$ Gb = Ia, (2) where a := (a1*, ..., a*m−1) T, b := (b1*, ..., b*m−1) T, and the Gram matrix G and the interaction matrix I are given by $$G:=\begin{pmatrix}K(x_{1},x_{1})&\cdots&K(x_{1},x_{m-1})\\ \vdots&\ddots&\vdots\\ K(x_{m-1},x_{1})&\cdots&K(x_{m-1},x_{m-1})\end{pmatrix},\quad\text{and}\quad\mathcal{I}:=\begin{pmatrix}K(x_{2},x_{1})&\cdots&K(x_{2},x_{m-1})\\ \vdots&\ddots&\vdots\\ K(x_{m},x_{1})&\cdots&K(x_{m},x_{m-1})\end{pmatrix}.$$ Proof. Using Proposition 2, the reproducing property, and the fact that KF kxi (x) = kxi (F(x)) = K(F(x), xi), $$Gb=\begin{pmatrix}(K_{F}\sum_{i=1}^{m-1}a_{i}k_{x_{i}},k_{x_{i}})_{H}\\ \vdots\\ \mathcal{K}_{F}\sum_{i=1}^{m-1}a_{i}k_{x_{i}},k_{x_{m-1}})_{H}\end{pmatrix}=\begin{pmatrix}\sum_{i=1}^{m-1}a_{i}(K_{F}k_{x_{i}},k_{x_{i}})_{H}\\ \vdots\\ \sum_{i=1}^{m-1}a_{i}(K_{F}k_{x_{i}},k_{x_{m-1}})_{H}\end{pmatrix}=\begin{pmatrix}\sum_{i=1}^{m-1}a_{i}K(x_{2},x_{i})\\ \vdots\\ \sum_{i=1}^{m-1}a_{i}K(x_{m},x_{i})\end{pmatrix}=\mathcal{I}a.$$ If kx1 , . . . , kxM−1 are linearly independent, then G is invertible and b = G−1Ia, i.e., the operator PαKF , restricted to span(α) is uniquely represented by the matrix [PαKF ] α α = G−1I. If G is singular, then the projected function PαKf g admits multiple sets of coefficients bi that satisfy PαKf g =Pm−1 i=1 bikxi . In this 1While availability of a time series of snapshots {x1, x2*, ..., x*m} such that xi+1 = F(xi) is a more typical use case, the developed method does not require such a time series. It can also be implemented using arbitrary snapshots {x1, x2*, ..., x*m} and {y1, y2*, ..., y*m} provided yi = F(xi). $\square$ case, the finite rank representation is not unique. In this paper, we study two specific approaches to compute a finite rank representation for the case where G is singular or numerically singular. The first approach, summarized in Algorithm 1, is regularized regression, i.e., [PαKF ] α α := (G + Im−1) −1I, where > 0 is a user-selected regularization coefficient and Im−1 is an m − 1 × m − 1 identity matrix. The second approach is a truncated pseudoinverse approach, i.e., [PαKF ] α α := G+ I, where G+ is the truncated pseudoinverse of G, computed by zeroing the singular values of G that are smaller than ≥ 0. It should be noted that G and I are precisely the matrices examined in (Williams et al., 2015b) after the use of a truncated SVD. If the truncated pseudoinverse approach is used to implement the method developed in this paper, then the generated results are identical to those generated by the KDMD method in (Williams et al., 2015b). The results obtained by using the regularized regression approach differ from those obtained using the KDMD method in (Williams et al., 2015b), but not substantially so. In Appendix C we provide empirical evidence for the similarity of the results obtained from our method and the method employed in (Williams et al., 2015b). The objective of DMD is to use the finite rank representation determined above to create a data driven model of the dynamical system. This makes use of a fundamental property of eigenfunctions of the Koopman operator. Lemma 2. Suppose that ϕ is an eigenfunction of KF with eigenvalue λ. Evaluating the eigenfunction at a snapshot reveals ϕ(xi+1) = λ iϕ(x1). Proof. Let ϕ be an eigenfunction of KF with eigenvalue λ, where F represents the discrete time dynamics such that xi+1 = F(xi). Furthermore, recall the definition of the Koopman operator presented in Definition 4. Then $$\lambda\varphi(x_{i})=\mathcal{K}_{F}\varphi(x_{i})=\varphi\left(F(x_{i})\right)=\varphi\left(x_{i+1}\right)$$ Therefore, ϕ(xi+1) = λ iϕ(x1). Proposition 6. *Suppose that* {ϕj}∞ j=1 is a complete set of eigenvectors of the Koopman operator, KF , corresponding to the eigenvalues {λj}∞ j=1*. For a state* x ∈ R n, let (x)d be the d-th component of x for d = 1, . . . , n. If it is assumed that the mapping x 7→ (x)d is in the RKHS (as it is when H is the native space for the exponential dot product space (Steinwart & Christmann, 2008)), then each snapshot may be reconstructed as $$x_{i+1}=\lim_{M\rightarrow\infty}\sum_{j=1}^{M}\xi_{j,M}\lambda_{j}^{i}\varphi_{j}(x_{1}).\tag{3}$$ $\square$ Proof. Since {ϕj}∞ j=1 is an eigenbasis for KF corresponding to the eigenvalues {λj}∞ j=1 and the mapping x 7→ (x)d is in the RKHS, $$(x)_{d}=\operatorname*{lim}_{M\to\infty}\sum_{j=1}^{M}(\xi_{j,M})_{d}\varphi_{j}(x)$$ for some coefficients {(ξj,M)d} ∞ j=1. By stacking each (x)d, the full state observable gid, given by gid(x) = x, is expressed as $$g_{\rm id}(x)=\lim_{M\to\infty}\sum_{j=1}^{M}\xi_{j,M}\varphi_{j}(x).\tag{4}$$ $$\mathbb{D}$$ Therefore, equation 3 can be derived by using equation 4 and Lemma 2. Note that since the Koopman operator is not generally a normal operator, {ϕi}∞ i=1 is not expected to be an orthonormal basis, and hence, there may be nonzero influences between the coefficients obtained by projection and this is expressed by the additional index M in ξj,M. This means that a series representation of the decomposition as expressed in (Kawahara, 2016; Brunton & Kutz, 2019) is not always possible. Hence, Koopman modes are not fixed quantities unless there is an orthonormal basis of eigenfunctions for the Koopman operator. Since the Koopman operator is approximated here by a finite rank representation, perfect reproduction of gid through a series of eigenfunctions is not possible. Instead, eigenfunctions determined through the finite rank representation are used to construct the approximation of gid. In particular, the matrix [PαKF ] α α is the matrix representation of PαKF . Proposition 7. If vj *is an eigenvector for the matrix* [PαKF ] α α with eigenvalue λj *, then* Pm−1 i=1 (vj )iK(*x, x*i) is an eigenfunction of PαKF . Proof. By the definition of eigenvector, [PαKF ] α αvj = λjvj . Therefore, $$P_{\alpha}{\mathcal{K}}_{F}\left(\sum_{i=1}^{m-1}(v_{j})_{i}K(x,x_{i})\right)=\left(\begin{array}{c}K(x,x_{1})\\ \vdots\\ K(x,x_{m-1})\end{array}\right)^{T}[P_{\alpha}{\mathcal{K}}_{F}]_{\alpha}^{\alpha}v_{j}=\lambda_{j}\sum_{i=1}^{m-1}(v_{j})_{i}K(x,x_{i}).$$ $$\square$$ Using Proposition 7 the corresponding normalized eigenfunction is denoted by $$\hat{\varphi}_{j}(x):=\frac{1}{\sqrt{v_{j}^{\dagger}Gv_{j}}}\sum_{i=1}^{m-1}(v_{j})_{i}K(x,x_{i}),\tag{5}$$ where G = (K(xi, x`))m−1 i,`=1 is the Gram matrix associated with the snapshots and the kernel function and (·) † denotes the conjugate transpose. Using a finite rank representation of equation 4, it is easy to see that the d-th row of the matrix ˆξ := ˆξ1 . . . ˆξm−1 of Koopman modes is comprised of the components of (x)d along the (non-orthogonal) directions ϕˆj . That is, $$g_{\rm id}(x_{i})=x_{i}=\sum_{j=1}^{m-1}\xi_{j}\hat{\varphi}_{j}(x_{i}),\tag{6}$$ which yields $$\hat{\xi}V^{T}G=X,$$ where X := x1 *. . . x*m−1 is the data matrix and $$V:=\left({\frac{v_{1}}{\sqrt{v_{1}^{\dagger}G v_{1}}}}\quad\cdots\quad{\frac{v_{m-1}}{\sqrt{v_{m-1}^{\dagger}G v_{m-1}}}}\right)$$ is the matrix of normalized eigenvectors of [PαKF ] α α. Similar to the finite rank representation above, if G is non-singular, then the matrix of Koopman modes is unique, given by ˆξ = X(V T G) −1. If G is singular and if regularized regression is used, then modes are computed using ˆξ = X(V T(G+Im−1))−1. If the truncated pseudoinverse approach is selected, then eigenvalues of the finite rank representation with absolute value smaller than are removed from the computation, along with the corresponding eigenvectors, and the modes are computed using ˆξ = X(V T G) + . Using the approximate eigenfunctions, ϕˆj , equation 6, and Lemma 2 a data driven model for the system is obtained as $$x_{i+1}\approx\sum_{j=1}^{m-1}\hat{\xi}_{j}\lambda_{j}^{i}\hat{\varphi}_{j}(x_{1}).$$ jϕˆj (x1). (7) $$\left(7\right)$$ Furthermore, the discrete-time dynamics F may be approximated (by setting i = 1) as $$F(x)\approx{\hat{F}}(x):=\sum_{j=1}^{m-1}{\hat{\xi}}_{j}\lambda_{j}\hat{\varphi}_{j}(x).$$ $$({\boldsymbol{\delta}})$$ ˆξjλjϕˆj (x). (8) Consider a continuous-time system x˙ = f(x) with f ∈ C1, and let a compact set X be forward invariant under the flow of the system. If the discrete-time system xk+1 = F(xk) is obtained via discretization of the continuous-time system with time step T, and if (eγT , ϕ) is an eigenpair of resulting Koopman operator KT for all T > 0, then along a solution x(·) of x˙ = f(x), we have dϕ(x(t)) dt = γϕ(x(t)) (Mauroy & Mezić, 2016, Section III-A). Motivated by this relationship, the continuous-time dynamics may be approximated as $$f(x)\approx{\hat{f}}(x):=\sum_{j=1}^{m-1}{\frac{\log(\lambda_{j})}{T}}{\hat{\xi}}_{j}\varphi_{j}(x).$$ $$({\mathfrak{g}})$$ ˆξjϕˆj (x). (9) Note that in the above construction, the eigenfunctions ϕj are assumed to be common across all time-steps T. In view of Example 5, not all eigenfunctions of the Koopman operator may be common eigenfunctions. As a result, the continuous-time model in equation 9 is a heuristic, albeit a useful one (see Figure 3 in the appendix). Algorithm 1: Pseudocode for the kernel perspective based DMD algorithm. Upon obtaining the Koopman modes, the approximate eigenfunctions, and the eigenvalues, equation 7 is used to compute xi+1. Input: Snapshots, X = {x1, x2*, ..., x*m} Input: Kernel function K : R n × R n → R of an RKHS Compute the Gram matrix, G = (K(xi, x`))m−1 i,`=1 if G is ill-conditioned then if regularized regression is used **then** Set G = G + Im−1 else if truncated pseudoinverse is used **then** Set G−1 = G+ end if end if Compute the interaction matrix, I = (K(xi, x`))i=m,`=m−1 i=2,`=1 Compute the finite rank representation of the Koopman operator, [PαKF ] α α = G−1I Compute eigenvalues, λj , and eigenvectors, vj , of [PαKF ] α α if truncated pseudoinverse is used **then** For all j, if |λj | < then set λj = 0 and vj = 0 Set (V T G) −1 =V T G+ end if Compute the matrix of Koopman modes, ˆξ = XV T G−1 Compute approximate eigenfunctions, equation 5 Output: Koopman modes, ˆξj for j = 1*, . . . , m* − 1 Output: Approximate eigenfunctions, ϕˆj (x) for j = 1*, . . . , m* − 1 Output: Eigenvalues λj for j = 1*, . . . , m* − 1 ## 6 Vector Valued Considerations The attentive reader will notice that the ultimate objective of DMD is to achieve an approximation or decomposition of the function gid(x) := x, the full state observable. However, the full state observable is R n valued, whereas the RKHSs in question consist of scalar valued functions. Consequently, the coefficients that are determined to approximate the full state observable are vector valued. Up to now, the methods have simply separated the individual components of gid and established an approximation for each of them separately. The weights for these approximations are stacked to make the vector valued coefficients. This section shows how the vector valued coefficients arise naturally from a projection onto vector valued functions in a vector valued RKHS. With the right selection of kernel operator, this projection operation reduces to the setting of Section 5. If the goal is to decompose the full state observable all at once, then one must appeal to vector valued RKHSs, which produce an operator valued kernel function. Proposition 8. Given a vector valued RKHS, H, consisting of functions that map a set X *to a Hilbert* space Y, there is for each x ∈ X and ν ∈ Y a function Kx,ν ∈ H such that hg, Kx,νiH = hg(x), νiY . The mapping ν 7→ Kx,ν is linear, and hence, the kernels are expressed as Kxν where Kx : Y → H is a bounded operator. In this case, for each g ∈ H, K∗ x g = g(x). The operator valued kernel associated with H is given as $$K(x,y):=K_{x}^{*}K_{y}:{\mathcal{Y}}\times{\mathcal{Y}}\to H,$$ and note that when Y = R n, this operator valued kernel is a matrix whose entries are scalar valued functions. When a vector valued RKHS, H, is obtained from a scalar valued RKHS, H˜ with kernel k(*x, y*), as H = H˜ n where the inner product of two elements, g = (g1*, . . . , g*n) T and h = (h1*, . . . , h*n) T, from H is given as h*g, h*iH =Pn j=1hgj , hj iH˜ , then the operator valued kernel is given as k(*x, y*)In, which has many convenient properties for computation. More details concerning vector valued RKHSs may be found in (Carmeli et al., 2010; Agler & McCarthy, 2023). Here the Koopman operator is defined just as it would be for a scalar valued space: KF g = g ◦ F, and the difference here is that g is vector valued. Similar relationships between the Koopman operator and the kernel functions still hold, where $$\langle{\mathcal{K}}_{F}g,K_{x}\nu\rangle_{H}=\langle g(F(x)),\nu\rangle_{\mathcal{Y}}=\langle g,K_{F(x)}\nu\rangle_{H},$$ ## Hence K∗F Kx = Kf (X). Now given a collection of snapshots, {x1*, . . . , x*M}, the associated operator valued kernels are Kx1 , . . . KxM . For decomposition of the full state observable, gid(x) = x, one needs to interface with the Hilbert space and construct a finite rank approximation of KF as was done in the prior section. This means that a subspace within H needs to be constructed, say spanned by a collection of basis functions α, so one can write K˜F = PαKF Pα. The selection of the basis α is less constrained than in the scalar valued case, where only the kernels centered at the snapshots were considered. In this case, not only must the snapshots be considered, but also complete freedom of the choice of ν ∈ R n to select each Kx,ν = Kxν. A natural choice is to select ν from the standard basis of R n. This leads to $\alpha=\{K_{x_{0},e_{1}},\ldots,K_{x_{M-1},e_{1}},K_{x_{0},e_{2}},\ldots,K_{x_{M-1},e_{2}},\ldots,K_{x_{0},e_{n}},\ldots,K_{x_{M-1},e_{n}}\}$, which is a large basis for problems such as the cylinder flow example, where n is of the order 80, 000. Consequently, the gram matrix corresponding to this basis is a square matrix of dimension M · n. So for arbitrary operator valued kernels, the DMD algorithm presented in this paper would have to invert an M · n dimensional matrix, and the computation time would scale with the dimension of the space as O((M · n) 3) using standard inversion algorithms. This scaling would make benchmarks such as the cylinder flow example completely infeasible. Computational feasibility can be achieved through the judicious selection of the kernel operator. Here we leveraged a kernel operator of the form $$K(x,y)=k(x,y)I_{n},$$ with k a scalar valued kernel function. In this case, hKxν, KyωiH = k(x, y)h*ν, ω*iRn and the two functions Kxν and Kyω are orthogonal when ν and ω are orthogonal in R n. This means that the Gram matrix composed of the α given above is going to be a block diagonal matrix with each block being the Gram matrix corresponding to the scalar valued kernel. Thus only one matrix of size M must be inverted, and each dimension treated individually. Significantly, the inversion problem no longer scales with the dimension. Alternatively, if we instead use a different scalar valued space for each dimension, we would get n different Gram matrices to invert. The complexity would scale with the dimension, but linearly rather than cubicly. Proposition 9. Suppose that k(x, y) corresponds to a scalar valued RKHS, H˜ , and let H *be an* R n valued RKHS such that if g ∈ H*, then* g = (g1, . . . , gn) T where gi ∈ H˜ *for each* i = 1, . . . , n equipped with the inner product h*g, h*iH =Pn i=1hgi, hiiH˜ . Then H *is a vector valued RKHS.* Proof. Let v ∈ R n and x ∈ X then $$\langle g(x),v\rangle_{\mathbb{R}^{n}}|\leq\|g(x)\|_{\mathbb{R}^{n}}\|v\|_{\mathbb{R}^{n}}=\sqrt{\sum_{i=1}^{n}|\langle g_{i},k(\cdot,x)\rangle_{R}|^{2}\|v\|_{\mathbb{R}^{n}}}$$ $$\leq\sqrt{k(x,x)}\sqrt{\sum_{i=1}^{n}\|g_{i}\|_{R}^{2}}\|v\|_{\mathbb{R}^{n}}=\sqrt{k(x,x)}\|g\|_{H}\|v\|_{\mathbb{R}^{n}},$$ hence the functional g 7→ hg(x), viRn is bounded. In this setting, K(*x, y*) = diag(k(x, y), . . . , k(*x, y*)), and hKxei, Kyej iRn = 0 if i 6= j. That H is a vector valued RKHS follows from the above. If {x1*, . . . , x*N } is a collection of snapshots such that F(xi) = xi+1, then a finite rank approximation of KF may be constructed by examining the image of the operator on Kxi,ej for i = 1*, . . . , N* and j = 1*, . . . , n*, and then projecting back onto the span of these kernels. The projection operation requires the computation of the Gram matrix for the basis {Kxi,ej }, which is a block diagonal matrix, where each block corresponds to a selection of dimension through ej . Thus, if G˜ = (k(xs, x`))N s,`=1 the gram matrix manifests as G˜ ... $$G=$$ . $$\hat{G}$$ Similarly, the interaction matrix is a block diagonal. Consequently, the vector of weights corresponding to each kernel function is composed of n smaller vectors of length N − 1, each one corresponding to a different dimension. Hence, each dimension may be treated independently. The eigenfunctions for this finite rank representation of the Koopman operator are then composed of n identical collections of N − 1 functions, differing only in the dimension they are supported in. Let ϕ˜1*, . . . ,* ϕ˜N ∈ H˜ be these scalar valued functions, and write ϕs,i := ˜ϕsei be the corresponding vector valued eigenfunction. The full state observable is projected onto this collection of eigenfunctions as $$g_{\rm id}(x)\approx\sum_{i=1}^{n}\sum_{s=1}^{N}w_{s,i}\varphi_{s,i}(x)=\sum_{s=1}^{N}\tilde{\varphi}_{s}(x)\left(\sum_{i=1}^{n}w_{s,i}e_{i}\right).$$ the Koopman mode $\mathfrak{f}$. Here, Pn i=1 ws,iei = ξs, where ξs is the Koopman mode from the previous section. ## 7 Numerical Example In this section we compare the results obtained from the developed DMD method and the method developed in (Williams et al., 2015b) on a benchmark dataset. Two additional numerical examples are included in Appendix C ## 7.1 Periodic Flow Around A Cylinder The website (Kutz et al., 2016a) accompanying the textbook (Kutz et al., 2016b) provides several data sets ![14_image_0.png](14_image_0.png) that serve as benchmarks for spectral decomposition approaches to nonlinear modeling, which have been released to the public through their website. This section utilizes the cylinder flow data set to demonstrate the results of the developed DMD method. The cylinder flow example is numerically generated, and the data provided corresponds to a planar steady state flow of the system. The data set consists of 151 snapshots, containing values of the vorticity of the flow at several mesh points in a plane, recorded every 0.02 seconds. In this demonstration, snapshots 1 through 30 are used as the input data, and snapshots 2 through 31 are used as output data, assuming that the ith and (i + 1)th snapshots satisfy xi+1 = F(xi) for some unknown nonlinear function F. The snapshots are normalized so that the largest 2-norm among all snapshots is 1. (a) This figure shows reconstruction of the vorticity field at the same time points using two different kernels. The left column shows reconstruction of the initial state, x1. The middle column shows reconstructions of the state, x15, and the right column corresponds to reconstruction of the state x30. (b) This figure presents prediction of the vorticity ![14_image_1.png](14_image_1.png) field at the same time points using two different kernels. The left column presents prediction of the state x51. The middle column shows prediction of the state x101, and the right column corresponds to prediction of the state x151. Figure 1: Reconstruction and prediction of the vorticity of a fluid flow past a cylinder using two different kernel functions. The first rows contains the ground truth, the second rows leverages the Gaussian RBF kernel, and the third row uses the exponential dot product kernel. The Koopman modes, eigenvalues, and eigenfunctions are then computed using the developed technique and snapshots 2 through 31 are reconstructed using equation 7. DMD is implemented using the exponential dot product kernel, K(*x, y*) = exp( 1µ x T y) (with µ = 1), and the Gaussian RBF kernel, K(*x, y*) = exp − 1 µ kx − yk 2 2 (with µ = 1). To further demonstrate the accuracy of the obtained finite-dimensional representation of the Koopman operator, snapshots 32 through 151 are *predicted* from snapshot 1 using equation 7. ## 7.2 Discussion Figures 1a and 2 show the ability of the developed DMD technique to reconstruct the training data. The relative reconstruction errors associated with this system are of the order of 1e − 8. Additionally, figure 1a shows that for the periodic flow example, similar reconstruction results can be obtained when using the Gaussian RBF kernel or the exponential dot product kernel. As shown in figures 1b and 2, given the first 31 snapshots, the developed DMD technique is able to predict the remaining 120 snapshots, with a relative prediction error of order 1e − 4, without the knowledge of the underlying physics, F. Furthermore, as expected, the performance of the developed method is nearly identical to the baseline Kernel DMD method developed in (Williams et al., 2015b). ![15_image_0.png](15_image_0.png) (a) Method developed in this paper (b) Kernel DMD method in (Williams et al., 2015b) Figure 2: For the periodic flow example, this figure shows the 2-norm of the error between the true snapshots and the snapshots generated using DMD. The results for the Gaussian RBF kernel and the exponential dot product kernel are identical. ## 8 Conclusion This manuscript revisits theoretical assumptions concerning DMD of Koopman operators, including the existence of lattices of eigenfunctions, common eigenfunctions between Koopman operators, and boundedness and compactness of Koopman operators. Counterexamples that illustrate restrictiveness of the assumptions are provided for each of the assumptions. In particular, a major theoretical result is established to show that the native RKHS of the Gaussian RBF kernel function only supports bounded Koopman operators if the dynamics are affine. Moreover, a kernel-based DMD algorithm that simplifies the algorithm in (Williams et al., 2015a) and presents it in an operator theoretic context is developed and then validated through simulations. The developed DMD algorithm is also extended to vector valued RKHSs. ## References Jim Agler and John E McCarthy. *Pick interpolation and Hilbert function spaces*, volume 44. American Mathematical Society, 2023. Nachman Aronszajn. Theory of reproducing kernels. *Trans. Am. Math. Soc.*, 68(3):337–404, 1950. Andreas Bittracher, Péter Koltai, and Oliver Junge. Pseudogenerators of spatial transfer operators. *SIAM* Journal on Applied Dynamical Systems, 14(3):1478–1517, 2015. Ralph Philip Boas. *Entire functions*. Academic Press, 2011. Steven L Brunton and J Nathan Kutz. *Data-driven science and engineering: Machine learning, dynamical* systems, and control. Cambridge University Press, 2019. Steven L Brunton, Marko Budišić, Eurika Kaiser, and J Nathan Kutz. Modern koopman theory for dynamical systems. *arXiv preprint arXiv:2102.12086*, 2021. Marko Budišić, Ryan Mohr, and Igor Mezić. Applied koopmanism. *Chaos: An Interdisciplinary Journal of* Nonlinear Science, 22(4):047510, 2012. Claudio Carmeli, Ernesto De Vito, Alessandro Toigo, and Veronica Umanitá. Vector valued reproducing kernel hilbert spaces and universality. *Analysis and Applications*, 8(01):19–61, 2010. Brent Carswell, Barbara D MacCluer, and Alex Schuster. Composition operators on the fock space. Acta Sci. Math.(Szeged), 69(3-4):871–887, 2003. Gerardo Chacón and José Giménez. Composition operators on spaces of entire functions. *Proceedings of the* American Mathematical Society, 135(7):2205–2218, 2007. Gerald B Folland. *Real analysis: modern techniques and their applications*, volume 40. John Wiley & Sons, 1999. Brian C Hall. *Quantum theory for mathematicians*, volume 267. Springer, 2013. Masahiro Ikeda, Isao Ishikawa, and Yoshihiro Sawano. Boundedness of composition operators on reproducing kernel hilbert spaces with analytic positive definite functions. *Journal of Mathematical Analysis and* Applications, 511(1):126048, 2022. Jakob Jakob, Markus Gross, and Tobias Günther. A fluid flow data set for machine learning and its application to neural flow map interpolation. *IEEE Transactions on Visualization and Computer Graphics*, 27 (2):1279–1289, 2020. Jakob Jakob, Markus Gross, and Tobias Günther. A fluid flow data set for machine learning. https: //doi.org/10.3929/ethz-b-000515488, 2021. Accessed: 2023-12-29. Yoshinobu Kawahara. Dynamic mode decomposition with reproducing kernels for koopman spectral analysis. In *Advances in neural information processing systems*, pp. 911–919, 2016. Stefan Klus, Péter Koltai, and Christof Schütte. On the numerical approximation of the perron-frobenius and koopman operator. *arXiv preprint arXiv:1512.05997*, 2015. Bernard O Koopman. Hamiltonian systems and transformation in hilbert space. *Proceedings of the National* Academy of Sciences of the United States of America, 17(5):315, 1931. Milan Korda and Igor Mezić. On convergence of extended dynamic mode decomposition to the koopman operator. *Journal of Nonlinear Science*, 28(2):687–710, 2018. J. Nathan Kutz, Steven L. Brunton, Bingni W. Brunton, and Joshua L. Proctor. Dynamic mode decomposition. http://www.dmdbook.com/, 2016a. Accessed: 2023-12-29. J. Nathan Kutz, Steven L. Brunton, Bingni W. Brunton, and Joshua L. Proctor. *Dynamic mode decomposition: data-driven modeling of complex systems*. SIAM, Philadelphia, PA, USA, 2016b. Alexandre Mauroy and Igor Mezić. Global stability analysis using the eigenfunctions of the koopman operator. *IEEE Trans. Autom. Control*, 61(11):3356–3369, 2016. Igor Mezić. Spectral properties of dynamical systems, model reduction and decompositions. *Nonlinear Dyn.*, 41(1):309–325, 2005. doi: 10.1007/s11071-005-2824-x. Gert K Pedersen. *Analysis now*. Springer Science & Business Media, 2012. Joel A Rosenfeld, Rushikesh Kamalapurkar, L Gruss, and Taylor T Johnson. Dynamic mode decomposition for continuous time systems with the liouville operator. *Journal of Nonlinear Science*, 32(1):1–30, 2022. Clarence W Rowley, Igor Mezić, Shervin Bagheri, Philipp Schlatter, and Dan S Henningson. Spectral analysis of nonlinear flows. *Journal of fluid mechanics*, 641:115–127, 2009. Joel H Shapiro. *Composition operators: and classical function theory*. Springer Science & Business Media, 2012. RK Singh and Ashok Kumar. Compact composition operators. Journal of the Australian Mathematical Society, 28(3):309–314, 1979. Ingo Steinwart and Andreas Christmann. *Support vector machines*. Springer Science & Business Media, 2008. Naoya Takeishi, Yoshinobu Kawahara, and Takehisa Yairi. Learning koopman invariant subspaces for dynamic mode decomposition. *arXiv preprint arXiv:1710.04340*, 2017. Peter Walters. *An introduction to ergodic theory*, volume 79. Springer Science & Business Media, 2000. Matthew O Williams, Ioannis G Kevrekidis, and Clarence W Rowley. A data–driven approximation of the koopman operator: Extending dynamic mode decomposition. *J. Nonlinear Sci.*, 25(6):1307–1346, 2015a. Matthew O. Williams, Clarence W. Rowley, and Ioannis G. Kevrekidis. A kernel-based method for datadriven Koopman spectral analysis. *J. Comput. Dyn.*, 2(2):247–265, 2015b. Kehe Zhu. *Analysis on Fock spaces*, volume 263. Springer Science & Business Media, 2012. ## A Examples In this section we provide detailed examples that support some of the claims made throughout this paper. The title of each subsection is used as a way of linking the section in the paper with the examples that are relevant to that section. ## A.1 Introduction Example 1. Given a Koopman operator KF corresponding to the discrete dynamics, F, an > 0, and a compact subset Ω that is invariant for the dynamics, it follows that if φ satisfies $$\|K_{F}\phi-\lambda\phi\|_{H}\leq\epsilon,$$ then $$|\phi(F^{m}(x))-\lambda^{m}\phi(x)|\leq C\epsilon{\frac{1-\lambda^{m+1}}{1-\lambda}}$$ for x ∈ Ω, where the accuracy of the resultant models depend on and λ. Significantly, as tends to zero, so does the difference |φ(F m(x)) − λ mφ(x)| at every point in x ∈ Ω. ## A.2 Forward Complete Dynamics Example 2. Consider the one dimensional dynamics: x˙ = 1 + x 2. For fixed 0 < ∆*t < π/*2, the corresponding discrete time dynamics are given as: xm+1 = tan(arctan(xm) + ∆t). Setting $$x_{m}=\tan(\pi/2-\Delta t),$$ it is clear that xm+1 is undefined. Consequently, the composition symbol, F(x) = tan(arctan(x) + ∆t) for the hypothetical Koopman operator, is not well defined over R n for any selection of ∆t. ## A.3 Properties Of The Operators The following simple example will be leveraged throughout the discussions in the ensuing subsections. Example 3. Consider the dynamical system $${\dot{x}}=\left(x_{2}\quad-x_{1}\right)^{T},$$ which corresponds to circular dynamics in the plane. For any fixed θ := ∆t, the discretization of this system yields the linear discrete dynamics $$x_{m+1}=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}x_{m}.$$ That is, the discretization corresponding to a fixed time-step results in a fixed rotation of R 2. To simplify the presentation, we use C as a model for R 2, where rotation of the complex plane reduces to multiplication by a unimodular constant, $$z_{m+1}=e^{i\theta}z_{m}.$$ $$(10)$$ iθzm. (10) The corresponding discrete time dynamics will be written as $$F_{\theta}(z):=e^{i\theta}z.$$ ## A.3.1 Lattice Of Eigenfunctions Example 4. Consider the dynamical system mentioned in example 3. If θ = π in equation 10, the discrete dynamics corresponding to rotation by π in the complex plane become $$z_{m+1}=e^{i\pi}$$ $$e^{i\pi}z_{m}=-z_{m}.$$ The corresponding Koopman operator, KFπ : F 2(C) → F 2(C), is given as $${\mathcal{K}}_{F_{\pi}}g(z):=g(-z).$$ Hence, every even function is an eigenfunction for this Koopman operator with eigenvalue 1. Any function g ∈ F 2(C) exhibits a strict bound on its growth rate (cf. (Zhu, 2012)). To wit, $|g(z)|=|\langle g,K(\cdot,z)\rangle_{F^{2}(\mathbb{C})}|\leq\|g\|_{F^{2}(\mathbb{C})}\|K(\cdot,z)\|_{F^{2}(\mathbb{C})}=\|g\|_{F^{2}(\mathbb{C})}e^{\frac{|z|^{2}}{2}}$. 2 . That is, if a function is in the Fock space then the function is of order at most 2, and if the function is of order 2 it has type at most 1/2 (cf. (Boas, 2011)). Conversely, if an entire function is of order less than 2, it is in the Fock space, and if it is of order 2 and type less than 1/2, then it is also in the Fock space. While functions of order 2 and type 1/2 can be in the Fock space, it does not hold for every such function. For example, e z 2/2is of order 2 and type 1/2, but is not in the Fock space. Thus, ϕ(z) = e z 2/4is an eigenfunction for KFπ in the Fock space. However, ϕ · ϕ = e z 2/2is not in the Fock space, and cannot be an eigenfunction for KFπ : F 2(C) → F 2(C). Hence, the eigenfunctions for KFπ *do not* form a lattice. ## A.3.2 Common Eigenfunctions Example 5. Consider again the dynamical system in example 3, but instead set θ = π/2, then $$F_{\pi/2}(z)=i z.$$ In this case, the polynomial z 4 + z 8 ∈ F 2(C) is an eigenfunction for the Koopman operator corresponding to θ = π/2 with eigenvalue 1. However, z 4 + z 8is not an eigenfunction for KFπ/3 , as $$(e^{i\pi/3}z)^{4}+(e^{i\pi/3}z)^{8}=e^{i4\pi/3}z^{4}+e^{i2\pi/3}z^{8},$$ and the constants cannot be factored out of the polynomial as an eigenvalue. ## B Proofs B.1 Proof Of Lemma 1 Lemma 1. Let F : X → X be the symbol for a Koopman operator over a RKHS H *over a set* X. KF : D(KF ) → H is a closed operator. Proof. Suppose that {gm}∞ m=1 ⊂ D(KF ) such that gm → g ∈ H and KF gm → h ∈ H. To show that KF is closed, we must show that g ∈ D(KF ) and KF g = h. This amounts to showing that h = g ◦ F, by the definition of D(KF ). Fix x ∈ X, then $$h(x)=\langle h,K_{x}\rangle_{H}=\lim_{m\to\infty}\langle{\cal K}_{F}g_{m},K_{x}\rangle_{H}=\lim_{m\to\infty}g_{m}(F(x))\,$$ $$=\lim_{m\to\infty}\langle g_{m},K_{F(x)}\rangle_{H}=\langle g,K_{F(x)}\rangle_{H}=g(F(x)).$$ $\square$ As x was an arbitrary point in X, h = g(F(x)) and the proof is complete. ## B.2 Proof Regarding Boundedness Of The Koopman Operator As stated in Section 4.2.3, here we provide a novel proof that shows that a Koopman operator over the Gaussian RBF's native space is bounded only when the corresponding discrete dynamics are affine. In order to achieve this we begin by introducing and proving a few lemmas. Lemma 4. If KF is a bounded operator over the Gaussian RBF's native space, then F *is a real analytic* vector valued function over R n. Proof. If KF is bounded, then KF Ky(x) = Ky(F(x)) = exp(−kF(x) − yk 2 2 ) is in the RBF's native space for each y ∈ R n. Since every function in the RBF's native space is real analytic, so is Ky(F(x)), and thus, the logarithm, −kF(x) − yk 2 2 = −kF(x)k 2 2 + 2y T F(x) − kyk 2 2 is real analytic. This holds if y = 0, and hence kF(x)k 2 2 is real analytic. Thus, for every y, the quantity y T F(x) is real analytic. That each component of F(x) is real analytic follows from the selection of y as the cardinal basis elements of R n, and this completes the proof. Lemma 5. If F is a real analytic vector valued function that yields a bounded Koopman operator, KF , over the Gaussian RBF's native space, then its extension to an entire function, F : C n → C n *induces a bounded* operator over HG(C n). Proof. Since an entire function on C n is uniquely determined by its restriction to R n, it follows that the span of the complex valued Gaussian RBFs with centers in R n is dense in HG. Moreover, to demonstrate that KF is bounded, it suffices to show that there is a constant C such that $$C^{2}K_{G}(z,w)-K_{G}(F(z),F(w))$$ $$(11)$$ 2KG(z, w) − KG(F(z), F(w)) (11) is a positive kernel. By the above remark, it suffices to show this for real *x, y* ∈ R n, but then this is equivalent to the statement that KF is bounded over the Gaussian RBF's native space over R n. Theorem 2. If F : C n → C n is an entire function, and KF is bounded on HG*, then* F(z) = Az + b *for a* matrix A ∈ C n×n *and vector* b ∈ C n. Proof. If KF is bounded, then it has a bounded adjoint, K∗F , which acts on the complex Gaussian as K∗F KG(·, z) = KG(·, F(z)). In particular, there is a constant C > 0 such that kKG(·, F(z))k 2 HG ≤ C 2kKG(·, z)k 2 HG . Noting the identity kKG(·, z)k 2 HG = exp 2Pn j=1(=zj ) 2and taking the logarithm, it follows that $$\sum_{j=1}^{n}(3F_{j}(z))^{2}\leq\log(C^{2})+\sum_{j=1}^{n}(3z_{j})^{2}\leq\log(C^{2})+\|z\|_{2}^{2}.$$ $$\left(12\right)$$ From this inequality, it follows that for each coordinate j = 1*, . . . , n*, the harmonic function vj (z) = =Fj (z) has linear growth. That is, there is a constant C˜ so that |vj (z)| ≤ C˜(1 + kzk2) for all z ∈ C n. It follows (e.g. from the standard Cauchy estimates) that vj (z) = vj (x + iy) must be an affine linear function of x and y, and therefore, so must its harmonic conjugate uj (z), and we conclude that F(z) = Az + b. ## C Further Numerical Examples C.1 The Duffing Oscillator In this numerical example, sixteen 10−seconds long trajectories of the Duffing oscillator, given by x˙ 1 = x2 and x˙ 2 = −0.1x2 +x1 −x 3 1 , are generated starting from initial states on a uniform grid on the set [−1, 1]×[−1, 1]. The trajectories are sampled every 0.5 seconds to generate a set of snapshots. The snapshots are used to build a predictive model of the oscillator using the method developed in this paper (implemented using regularized regression with = 10−6 and the exponential dot product kernel with µ = 20) and using the baseline method from (Williams et al., 2015b) (implemented using a threshold of 10−3for zeroing singular values and the exponential dot product kernel with µ = 20). To evaluate the predictive models, ![20_image_0.png](20_image_0.png) ![20_image_2.png](20_image_2.png) (a) Method developed in this paper, reconstruction using ![20_image_3.png](20_image_3.png) (c) Method developed in this paper, reconstruction using the discrete-time model in equation 8. (d) Kernel DMD method in (Williams et al., 2015b), reconstruction using the discrete-time model in equation 8. Figure 3: For the Duffing oscillator example, this figure shows the pointwise mean of the relative error between the true trajectories and the trajectories generated using DMD. The shaded area shows the pointwise maximum and the pointwise minimum over 100 randomly initialized trajectories. 15−seconds long trajectories of the oscillator starting from 100 initial conditions randomly selected from the set [−1.5, 1.5] × [−1.5, 1.5] are generated by integrating the continuous-time model in equation 9 using a variable time-step fourth order Runge-Kutta integrator. These trajectories are compared against the true trajectories starting from the same initial conditions, generated by integrating the true dynamics using the same integrator. The metric used for comparison is the pointwise relative error e(t) = kx(t)−xˆ(t)k maxt{kx(t)k} , where x(·) is the true trajectory and xˆ(·) is the predicted trajectory. Figure 3 shows the pointwise mean of the relative ![20_image_1.png](20_image_1.png) (b) Kernel DMD from (Williams et al., 2015b), reconstruction using the continuous-time model in equation 9. ![20_image_4.png](20_image_4.png) ![21_image_0.png](21_image_0.png) ![21_image_1.png](21_image_1.png) (a) Difference between prediction generated using the method in (Williams et al., 2015b) and the truncated pseudoinverse implementation of the method developed in this paper. (b) Difference between trajectories generated using the method in (Williams et al., 2015b) and the regularized regression implementation of the method developed in this paper. Figure 4: For the Duffing oscillator example, this figure shows the pointwise mean of the error between the trajectories predicted using the continuous-time model in equation 9 generated by the method in (Williams et al., 2015b) and the method developed in this paper. The shaded area shows the pointwise maximum and the pointwise minimum over 100 randomly initialized trajectories. error across the 100 trajectories along with the spread of the relative error from the pointwise minimum to the pointwise maximum. To demonstrate the effectiveness of the heuristic continuous-time model in equation 9, the true trajectories, sampled every 0.5 seconds, are also compared against trajectories generated using the discrete-time model in equation 8. Figure 3 also shows the pointwise mean of the relative error across the 100 trajectories along with error bars that show the pointwise maximum and the pointwise minimum relative error. ## C.2 Turbulent Flow Another Numerical example uses a data set that accompanies (Jakob et al., 2020), which consist of flow simulations that range from laminar flow configurations to turbulent flow configurations. The data can be accessed from their website (Jakob et al., 2021). The flow data used in this paper (id number 7999) corresponds to 2-dimensional turbulent flow with a Reynolds number of 4092.216 and a Kinematic viscosity of 5.45 exp −05. The data consist of 1001 snapshots, each snapshot contains flow velocities in the horizontal and vertical directions measured on a regular grid of size 512×512 in two dimensions with domain [0, 1]×[0, 1]. The snapshots are separated by 0.01 seconds. In this demonstration, snapshots 1 through 300 are used as the input data, and snapshots 2 through 301 are used as output data, assuming that the ith and (i + 1)th snapshots satisfy xi+1 = F(xi) for some unknown nonlinear function F. The snapshots are normalized so that the largest 2-norm among all snapshots is 1. The Koopman modes, eigenvalues, and eigenfunctions are then computed using the developed technique and snapshots 2 through 301 are reconstructed from snapshot 1 using equation 7. DMD is implemented using the Gaussian RBF kernel, K(*x, y*) = exp − 1 µ kx − yk 2 2 (with µ = 0.75 and = 0.0001). The baseline method from (Williams et al., 2015b) is implemented using the same kernel with µ = 0.000518. The predictive model in equation 7 is also used to generate 700 additional snapshots. ## C.3 Additional Discussion The ability of the developed DMD technique to reconstruct the training data from the same initial condition is apparent from figures 3 and 5. In the duffing oscillator example the pointwise mean of the relative error is shown to be nearly zero. The relative reconstruction errors for the turbulent flow are of the order of 1e − 2. ![22_image_0.png](22_image_0.png) (a) Method developed in this paper (b) Kernel DMD method in (Williams et al., 2015b) Figure 5: For the turbulent flow example, this figure shows the 2-norm of the error between the true snapshots and the snapshots generated using DMD. The duffing oscillator example, is also used to highlight the similarity in predictions generated by the DMD method developed in this manuscript and that proposed by (Williams et al., 2015b). Figure 4 shows that the difference between the predictions generated by a truncated pseudoinverse implementation of the method developed in this paper and the predictions generated by the method in (Williams et al., 2015b) are of the order of 1e − 6. As expected, figure 4 further highlights that choosing to use the truncated pseudoinverse implementation as opposed to the regularized regression implementation results in predictions that are more similar to those obtained under the method in (Williams et al., 2015b). Furthermore, figure 3 also shows that for the duffing oscillator example the continuous-time model in equation 9 results in a smaller pointwise mean relative error than the discrete-time model in equation 8. In the case of the turbulent flow, as shown in 5, the relative prediction errors for the method developed in this paper and the baseline kernel DMD method developed in (Williams et al., 2015b) quickly increase to 1. Since the data are normalized so that the largest 2-norm among all snapshots is 1, a relative prediction error of 1 signifies a very poor match between the predicted state and the true state. Such performance degradation is expected for turbulent flows since the underlying assumption that the ith and (i + 1)th snapshots satisfy xi+1 = F(xi) for some unknown nonlinear function F is unlikely to generalize outside the training data when the flow is turbulent. From figure 5, the method developed in this paper appears to perform slightly better than the baseline. This performance improvement can be attributed to the explicit regularization of the Gram matrix, where the regularization parameter can be tuned in addition to the kernel width parameter. in (Williams et al., 2015b), the regularization is done implicitly through the pseudoinverse operation. We postulate that by manipulating the threshold used to decide which singular values are zero in the pseudoinverse computation, the performance of the kernel DMD method can be improved.
Review 1: Summary: The paper proposes a theoretical contribution related to the study of Koopman operators over RKHSs. It proves that if the Koopman operator of a real entire vector-valued function $F$ is bounded over the Gaussian RBF space, then $F$ is affine. Furthermore, the paper discusses the feasibility of the assumptions that are often made about the Koopman operator and that may not be relevant when they are considered on RKHSs, arriving at the conclusion that they cannnot be assumed to be bounded and compact. The paper then proposes to work in the context of densely defined operators and establishes a DMD method in this case that closely mimics previous work about kernel DMD. Numerical experiments highlight the benefit of the approach. Strengths and Weaknesses: Strenghts - The paper is of interest for the TMLR community. The theoretical contribution is sound and readers might be interested in the characterization of bounded Koopman operators over Gaussian RKHSs. - The numerical experiments complement nicely the theoretical derivations. Weaknesses: - The paper questions many assumptions regarding Koopman operators but leaves some out of questionning. While I understand the interest of looking at a Koopman operator as an operator from $H \to H$, it introduces a main question that needs addressing in my opinion: "how big is $\{g \in H | g \circ F \in H \}$" ? Is the densely defined assumption really reasonnable for say the Gaussian RBF kernel ? What type of $F$ are needed to have $\forall x \in X, y \mapsto k(F(y), x) \in H $ ? - The assumption "$x \mapsto x$ is in $H$" is a heavy one and must be better highlighted. What happens with the gaussian kernel which violates this assumption ? - The relation to [Williams 2015a] is not well explicited in my opinion. From what I understand, both methods yield the same matrix whose eigendecomposition is used to approximate the dynamics. Is the difference only regarding the assumptions made on the Koopman operator ? Or is it in the regularization performed when the matrix is not invertible ? Or further down the line when computing the modes ? Additional remarks: - In the text before equation (5), when writing the decomposition of $(x)_d$, there is a typo on the index: $\phi_i$ should be $\phi_j$. Requested Changes: - Improve the presentation of the differences with previous work [critical] - Better discuss the assumptions (densely defined + identity in the RKHs) [critical] Broader Impact Concerns: None ================================================== Review 2: Summary: The work is focused on characterizing the properties of Koopman operators (KO) over reproducing kernel Hilbert spaces (RKHS), as a way to bring forth a somewhat novel Kernel perspective to $Koopmanism$ - contrasting it with the usual ergodic picture. A central result for example showcases how the class of bounded Koopman operators over RKHS is miniscule even when the considered dynamics is being governed by analytic maps - an additional assumption that the map be affine is critical to impart enough structure to support bounded KOs (and even then in a restricted Guassian RBF setting). However, while seemingly a very strong restriction on the viability of $Koopmanism$ (the lack of compactness and/or boundedness in operators often makes functional analytic problems completely intractable), within an assumption of densely defined KOs (amongst other smaller ones), we find that enough structure is still preserved to make it numerically viable, making this an impressive attempt at providing a novel, kernel based theoretical perspective on DMD. The proposed theory leads to some natural algorithms, which are tested for their advantages and then compared with existing methods to demonstrate equivalence. Strengths and Weaknesses: Strengths: 1. To my knowledge, a somewhat novel, kernel based perspective on DMD/Koopmanism has been promulgated in this work, along with a more impressive set of counterexamples to suggest why Koopmanism isn't necessarily well suited for anything beyond affine problems, before further analysis provides a strong counter-counter-argument as to why the situation is not critically deficient after all, if the KOs are considered to be densely defined. If the reader can slog through the text (see weakness 1), the proposed theory is an interesting and fruitful way to consider the existing empirical literature in these directions and a stark departure from the ergodic perspective that heavily dominates how Koopmanism is formalized usually. 2. To further bolster the value of their case, the authors investigate some naturally implied algorithms within the perspective and somewhat validate that their results are equivalent to existing methods (see weakness 2). Weaknesses: 1. I don't mean to be harsh, but the work is badly-written - I would consider myself well versed with most of the involved subject matter and it still took me an unreasonable amount of time to simply figure out when known facts were being repeated and when something novel was being added. It is even more confusing why this should be the case since in my opinion, the novel contributions are truly exciting and remarkable. A lot of the mathematical arguments, results, and proofs have been presented in an almost stream of consciousness style that is not amenable to reading about such dense technical matters. 2. Numerical analysis is not insufficient, but also far from conclusive. It is also presented in an sub-optimal manner. For ex, in both Fig. 7.1 and 7.3, the relevant quantity of interest is not the outputs of the two methods, but the difference between the two method outputs (I am not suggesting the removal of those figures, but explicitly the quantity of interest is the difference between those two sub-figures in each case - that comparison should be present). I would also be more comfortable if at least a couple of more experimental verifications were included, rather than just 2. Requested Changes: On a technical/mathematical level, I have no issues and have found no substantial errors. On a reading front, I think the authors have struggled with the intersectional nature of this work - writing for an ML audience while needing to leverage fairly standard but still bulky and involved mathematical machinery. My personal advice would be to write every novel result/deduction/argument out in a standard mathematical format. Where arguments are being developed, put them into the service of proving or conjecturing a precise statement. Where claims are being made, always state them as precise statements/lemmas/theorems. Everytime a definition is stated, it should be as an indexed object that can be referred to instantly by simply looking at the right definition number, rather than as a sentence in some hard to find paragraph. Etc. Etc. I don't mean to suggest that the above approach is completely absent: I am strongly recommending that this approach be applied to every mathematical portion of this paper. To further simplify reading, the authors can then pick and choose the most important statements and lemmas to build a storyline of precise statements and implications in the "main text" and then put everything else (including all the proofs) in a well-referenced appendix. This way, people who care can easily access full technical details while the rest can follow the important portions of the story without needing to untangle the entire technical machinery. In summary, the work is accurate and convincing but highly unclear: I can't recommend acceptance in good faith until that last point is resolved. However, I would again strongly acknowledge for the editors that I have no technical changes requested or errors to point out here - if the struggles in reading were simply mine, please feel free to completely discount the above. Broader Impact Concerns: Not Applicable ================================================== Review 3: Summary: An algorithm similar (or almost identical) to the kernel DMD is derived in a perspective different from previous studies. Specifically, the authors start at the projection of the Koopman operator onto a set of basis functions comprising kernel functions centered at the snapshots. Although the resulting algorithm seems to be basically identical to what is known as kernel DMD (eg WIlliams+ 2015), the contribution of this work is claimed to be its derivation from the given perspective. Moreover, some analyses of the Koopman operator in RKHS and its eigenfunctions are presented. Strengths and Weaknesses: ### Strengths + The derivation of "kernel DMD" from the perspective of finite rank representation of the Koopman operator sounds indeed interesting. + The writing is not bad, I could follow the main reasoning though I am not well equipped with such mathematical discussions. ### Weaknesses - That being said, I feel that the relationship between the claims scattered in Sections 4-6 could have been clearer. A short paragraph on how they relate to each other might help. - The numerical examples do not seem wrong, but it is unclear how they are valuable. I am not following the discussions that happened in the previous submission, but my first impression is that they were added just to make the paper slightly more ML-ish. A possibility I would raise (which I am not recommending but just showing) is to defer the numerical examples to an appendix to fit the paper within the nominal 12-page length. - How the discussions in Sections 4.1, 4.2.1, and 4.2.2 relate to the discussion afterward in Sections 5 and 6 is unclear. - The novelty of the result in Section 4.2.3 is unclear. For example, could you please clarify the connection to Ikeda et al. (2022), if any? [Ikeda et al., 2022] Ikeda et al., Boundedness of composition operators on reproducing kernel Hilbert spaces with analytic positive definite functions, Journal of Mathematical Analysis and Applications 511:126048, 2022. Requested Changes: (1) As mentioned above, please elaborate, if any, on the connection between the result in Section 4.2.3 and that in Ikeda et al. (2022). (2) The difference between the two algorithms used in Section 7.2 could be clearer. Broader Impact Concerns: N/A ==================================================
# Optimal Input Gain: All You Need To Supercharge A Feedforward Neural Network Anonymous authors Paper under double-blind review ## Abstract Deep learning training training algorithms are a huge success in recent years in many fields including speech, text,image video etc. Deeper and deeper layers are proposed with huge success with resnet structures having around 152 layers. Shallow convolution neural networks(CNN's) are still an active research, where some phenomena are still unexplained. Activation functions used in the network are of utmost importance, as they provide non linearity to the networks. ReLU's are the most commonly used activation function.We show a complex piece-wise linear(PWL) activation in the hidden layer. We show that these PWL activations work much better than ReLU activations in our networks for convolution neural networks and multilayer perceptrons. Result comparison in PyTorch for shallow and deep CNNs are given to further strengthen our case. ## 1 Introduction Multilayer perceptron (MLP) neural networks are used to solve a variety of real-life approximation tasks, including stock market time series forecastingWhite (1988), power load forecastingKe et al. (2019), prognostics Kara (2021), well log processing Wang et al. (2020), currency exchange rate predictionChen et al. (2021a), control applicationsLewis et al. (1997) and stock and weather predictionMeesomsarn et al. (2009)Bochenek & Ustrnul (2022). MLPs are also used in classification problems such as speech recognitionAdolfi et al. (2023), fingerprint recognitionWang et al. (2022), character recognitionChen et al. (2021b), and face detectionKasar et al. (2016). In recent times, they also form the back end of deep learning architectures as Tyagi (2018), Qi et al. (2017). The "no free lunch" theorem (NFL) Duda et al. (2012),Wolpert (1996) implies that no single discriminant training algorithm is universally superior. Despite this, feed-forward neural nets, or MLPs, have gained increasing popularity for two reasons. First, MLPs have the ability to approximate any continuous discriminant function with arbitrary accuracy due to universal approximation capability Girosi & Poggio (1989), Cybenko (1989), White (1990), Hartman et al. (1990) meaning that it can approximate the best classifier. However, a feed-forward network with a single layer may struggle to learn and generalize correctly due to insufficient learning, a lack of deterministic relationship between inputs and outputs, or an insufficient number of hidden units Pao (1989), Werbos (1974), Goodfellow et al. (2016). Second, with proper training, an MLP can approximate the Bayes discriminant Suter (1990) or the minimum mean-square error (MMSE) estimator Geman et al. (1992),Manry et al. (1996),Wu (1996). Training an MLP is an unconstrained optimization problem that usually involves first order gradient methods such as backpropagation (BP), scaled conjugate gradient (SCG) Fitch et al. (1991)Møller (1993), OWO-BP Hecht-Nielsen (1992), Tyagi et al. (2022a) and second order Levenberg-Marquardt (LM) Battiti (1992)Hagan & Menhaj (1994) and Newton's algorithm Tyagi et al. (2021), Tyagi et al. (2021) and OWO-MOLF Tyagi et al. (2020). Within first and second order, training algorithms can be one stage, in which all the weights of the network are updated simultaneously and two stage, in which input and output weights are trained alternately Tyagi (2018). However, each of these approaches has its limitations. Newton's and LM scale worse than OWO-BP. OWO-BP takes O(N2w) operations for sufficiently large Nw, where Nw is the total weights of the network. It is also unduly slow in a flat error surface and could be a more reliable learning paradigm. OWO-BP also lacks affine invariance Tyagi et al. (2022a). SCG scales well but has no internal mechanism to overcome linearly dependent inputs Tyagi (2018). Newton's method is a second-order algorithm that requires computing and storing the Hessian matrix at each iteration, which can be computationally expensive Tan & Lim (2019). The LM algorithm is faster than Newton's method because it approximates the Hessian matrix using the Jacobian matrix Levenberg (1944). However, it becomes less efficient as the number of parameters increases Tan & Lim (2019). MLP training is also sensitive to many parameters of the network and its training data, including the input means LeCun et al. (1998a), the initial network weights Rumelhart et al. (1986), Kolen & Pollack (1990), and sensitive to the collinearity of its inputs Hashem & Schmeiser (1995). Also, scaling the network architecture to learn more complex representations is cumbersome. This limited applications involving MLP to challenging but less complex problems where shallow architectures could be used to learn and model the behavior effectively. The recent developments of transformers Vaswani et al. (2017) and their success in complex applications involving natural speech Dong et al. (2018), Pham et al. (2019) and vision Kolesnikov et al. (2021) have renewed interests in feed-forward network architectures, as they form the building blocks to the more complex transformer architectures int (2023). Feed-forward network are also being used in active production for radar perception stack in autonomous driving. Tyagi et al. (2022b). We present a family of fast learning algorithms targeted towards training a fixed architecture, fully connected multi-layer perceptron with a single hidden layer capable of learning from both approximation and classification datasets. In Malalur & Manry (2009), a method for optimizing input gains called the optimal input gain (OIG) was presented. Preliminary experiments showed that when this method was applied to the firstorder MLP training method like the BP, it significantly improved the overall network's performance with minimal computational overhead. However, the method performs less optimally under the presence of linearly dependent inputs. In general, this is true for other MLP training algorithms as well. We expand the idea of OIG to apply another first-order two-stage training algorithm called hidden weight optimization (HWO) Yu et al. (2004), Tyagi et al. (2022a) to formulate the OIG-HWO algorithm. Following Malalur & Manry (2009), we expand on the details of the motivation behind the OIG algorithm, along with a thorough analysis of its structure, performance, and limitation. In addition, we propose an improvement to overcome the limitation and compare the new algorithm's performance with existing algorithms. Our vision for the OIG-HWO presented in this paper is twofold, firstly, to be a strong candidate for challenging but less complex applications that can rival available shallow learning architectures in speed and performance, and secondly, to serve as a potential building block for more complex deep learning architectures. The rest of the paper is organized as follows. Section II covers the basics of MLP notation and training and an overview of existing algorithms. Section III discusses the linear transformation of inputs. In Section IV, we describe the OIG-HWO algorithm and its training process. Finally, in Section V, we compare the results of the OIG-HWO algorithm to those obtained using existing approaches on approximation data and classification data used for replacement in deep learning classifiers. ## 2 Prior Work 2.1 Structure And Notation A fully connected MLP with one hidden layer is shown in Figure 1. Input weight w(*k, n*) connects the n th input xp(n) to the k th hidden unit. Output weight woh(*i, k*) connects the k th hidden unit's activation op(k) to the i th output yp(i), which has a linear activation. The bypass weight woi(*i, n*) connects xp(n) to yp(i). In the training data {xp, tp} for a fully connected MLP, the p th input vector xp is initially of dimension N and the p th desired output (target) vector tp has dimension M. The pattern number p varies from 1 to Nv. Let the input vectors be augmented by an extra element xp(N + 1) where xp(N + 1) = 1, so xp = [xp(1), xp(2). . . xp(N + 1)]T. Weights leading away from xp(N + 1) are hidden or output layer thresholds. For the p th pattern, the hidden unit's output vector np can be written as ![2_image_0.png](2_image_0.png) Figure 1: Single hidden layer fully connected MLP $$\mathbf{n}_{p}=\mathbf{W}\cdot\mathbf{x}_{p}$$ $$(1)$$ np = W · xp (1) where np is of size Nh by 1, and the input weight matrix W is of size Nh by (N + 1). For the p th pattern, the k th hidden unit's output, op(k), is calculated as op(k) = f(np(k)), where f(.) denotes a nonlinear hidden layer activation function such as the rectified linear unit (*ReLU*) which is given as follows Nair & Hinton (2010). $$f(n_{p}(k))=m a x(0,n_{p}(k))={\begin{cases}n_{p}(k),&i f\quad n_{p}(k)\geq0\\ 0,&i f\quad n_{p}(k)<0\end{cases}}$$ $$\left(2\right)$$ The M dimensional network output vector yp is $$\mathbf{y}_{p}=\mathbf{W}_{o i}\cdot\mathbf{x}_{p}+\mathbf{W}_{o h}\cdot\mathbf{o}_{p}$$ yp = Woi · xp + Woh · op (3) where op is the Nh dimensional hidden unit activation vector. The last columns of W and Woi respectively store the hidden unit and output unit threshold values. During training the unknown weights are solved by minimizing a mean-squared error (MSE) function described as $$E=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}[t_{p}(i)-y_{p}(i)]^{2}\tag{1}$$ $$\left({\mathrm{3}}\right)$$ $$\left(4\right)$$ Training a neural network involves formulating it as an unconstrained optimization problem and then applying a learning procedure. Typically, the learning procedure is a line search Gill et al. (2019), with a layer-by-layer optimization Biegler-König & Bärmann (1993), Zhang et al. (1999), Wang & Chen (1996), involving first and second-order algorithms. ## 2.2 Scaled Conjugate Gradient Algorithm Conjugate gradient (CG) Tyagi et al. (2022a) line-searches in successive conjugate directions and has faster convergence than steepest descent. To train an MLP using the CG algorithm (CG-MLP), we update all the network weights w simultaneously as follows: $\left(5\right)^2$ $$\mathbf{w}\leftarrow\mathbf{w}+z\cdot\mathbf{p}$$ $$\left({\boldsymbol{\theta}}\right)$$ w ← w + z · p (5) where z is the learning rate that can be derived as LeCun et al. (1998a),Tyagi et al. (2022a). $$z=-\frac{\frac{\partial E(\mathbf{w}+z\cdot\mathbf{p})}{\partial z}}{\frac{\partial^{2}E(\mathbf{w}+z\cdot\mathbf{p})}{\partial z^{2}}}\left|z\right=0\tag{1}$$ The direction vector p is obtained from the gradient g as $$\left(7\right)$$ $$\mathbf{p}\leftarrow-\mathbf{g}+B_{1}\cdot\mathbf{p}$$ p ← −g + B1 · p (7) where p = vec (P, Poh, Poi) and P, Poh and Poi are the direction vectors corresponding to weight arrays (W,Woh,Woi). CG uses backpropagation to calculate g. B1 is the ratio of the gradient energy from two consecutive iterations. If the error function were quadratic, CG would converge in Nw iterations Boyd & Vandenberghe (2004), where the number of network weights is Nw = dim(w). CG is scalable and widely used in training large datasets, as the network Hessian is not calculated Le et al. (2011). Therefore, in a CG, the step size is determined using a line search along the direction of the conjugate gradient. SCG Møller (1993) scales the conjugate gradient direction by a scaling factor determined using a quasi-Newton approximation of the Hessian matrix. This scaling factor helps to accelerate the algorithm's convergence, especially for problems where the condition number of the Hessian matrix is large. SCG requires the computation of the Hessian matrix (or an approximation) and its inverse. Other variations of CG exist Tyagi et al. (2014). However, in this study, we choose to use SCG. ## 2.3 Levenberg-Marquardt Algorithm The Levenberg-Marquardt (LM) algorithm Tyagi et al. (2022a) is a hybrid first- and second-order training method that combines the fast convergence of the steepest descent method with the precise optimization of the Newton method Levenberg (1944). However, inverting the Hessian matrix H can be challenging due to its potential singularity or ill-conditioning Bishop (2006). To address this issue, the LM algorithm introduces a damping parameter λ to the diagonal of the Hessian matrix as $$({\boldsymbol{\delta}})$$ $$\mathbf{H}_{L M}=\mathbf{H}+\lambda\cdot\mathbf{I}$$ HLM = H + λ · I (8) where I is an identity matrix with dimensions equal to those of H. The resulting matrix HLM is then nonsingular, and the direction vector dLM can be calculated by solving: $$\mathbf{H}_{L M}\mathbf{d}_{L M}=\mathbf{g}$$ HLMdLM = g (9) The constant λ represents a trade-off value between first and second order for the LM algorithm. When λ is close to zero, LM approximates Newton's method and has minimal impact on the Hessian matrix. When λ is large, LM approaches the steepest descent and the Hessian matrix approximates an identity matrix. However, the disadvantage of the LM algorithm is that it scales poorly and is only suitable for small data sets Tyagi et al. (2022a). $$({\mathfrak{g}})$$ ## 2.4 Output Weight Optimization Output weight optimization (OWO) Barton (1991), Tyagi et al. (2021) is a technique to solve for Woh and Woi . Equation (3) can be re-written as yp = Wo · xap (10) where xap = [x T p : o T p ] Tis the augmented input column vector of size Nu = N + Nh + 1. Wo is formed as [Woi : Woh] of dimensions M by Nu. The output weights can be found by setting *∂E/∂W*o = 0, which leads to the set of linear equations $$\mathbf{C}=\mathbf{R}\cdot\mathbf{W}_{o}^{T}$$ ## O(11) Where C =1 Nv Pnv P=1 Xapt T P And R =1 Nv Pnv P=1 Xapx T Ap. Equation (11) Can Be Solved Using Orthogonal Least Squares (Ols) Methods Tyagi (2018). Owo Provides Fast Training And Avoids Local Minima Manry Et Al. (1994). However, It Only Trains The Output Weights. 2.5 Input Weight Optimization Input weight optimization Tyagi (2018) is a technique for iteratively improving W via steepest descent. The Nh by (N + 1) negative input weight Jacobian matrix for the p th pattern's input weights is $${\bf G}=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\delta_{p}{\bf x}_{p}^{T}\tag{1}$$ $$\mathbf{W}=\mathbf{W}+z\cdot\mathbf{G}$$ $$(11)^{\frac{1}{2}}$$ $$\left(12\right)$$ $\left(13\right)$. where δp = [δp(1), δp(2)*, ...., δ*p(Nh)]Tis the Nh by 1 column vector of hidden unit delta functions Rumelhart et al. (1985). W is updated in a given iteration as W = W + z · G (13) where z is the learning factor. Combined with BP, we formulate OWO-BP Tyagi et al. (2022a), a two-stage training algorithm developed as an alternative to BP. In a given iteration of OWO-BP, we first find the weights, Woh and Woi and then separately train W using BP Tyagi (2018). OWO-BP is attractive for several reasons. First, the training is faster since solving linear equations for output weights in a given iteration is faster than using a gradient method. Second, when OWO optimizes output weights for a given input weight matrix, some local minima are avoided. Third, the method exhibits improved training performance compared to using only BP to update all the weights in the network. It can be shown that OWO-BP converges, and it leads to the convergence of the weights to a critical point in weight space Tyagi et al. (2022a). This can be a global minimum, a local minimum, or a saddle point. ## 2.6 Hidden Weight Optimization HWO Scalero & Tepedelenlioglu (1992), finds an improved gradient matrix Ghwo by solving the following linear equation Ghwo · Ri = G (14) where Riis the input autocorrelation matrix as $$\mathbf{G}_{h w o}\cdot\mathbf{R_{i}}=\mathbf{G}$$ $$\mathbf{R_{i}}={\frac{1}{N_{v}}}\sum_{p=1}^{N_{v}}\mathbf{x}_{p}\mathbf{x}_{p}^{T}$$ $$\left(14\right)$$ $$\left(15\right)$$ $$(16)$$ p(15) and G is the backpropagation negative gradient matrix Rumelhart et al. (1985). Equation (14) can be rewritten as $${\bf G}_{h w o}={\bf G}\cdot{\bf R_{i}}^{-1}\tag{1}$$ −1(16) where Ghwo = G · AT· A. Equation (14) can be solved using OLS or matrix inversion using the singular value decomposition (SVD). A is the whitening transform matrix Raudys (2001), Tyagi et al. (2020). It is shown in Robinson & Manry (2013) that HWO is equivalent to applying a whitening transform to the training data to de-correlate it. W is now updated using Ghwo instead of G as W = W + z · Ghwo (17) $$\mathbf{W}=\mathbf{W}+z\cdot\mathbf{G}_{h w o}$$ $$(17)$$ ## 3 Proposed Work The study of the effects of applying the equivalent networks theory to the augmented input vectors xp is thoroughly discussed in Malalur & Manry (2009). In this work, we build upon the concept presented in Malalur & Manry (2009), Nguyen et al. (2016) and examine the impact of transformed input gains on the training process in conjunction with HWO. ## 3.1 Mathematical Background Consider a nonlinear network designated as *MLP-1* with inputs x ∈ R N+1, where the restriction xN+1 = 1 is imposed, and outputs y ∈ RM. Another network, referred to as *MLP-2*, has inputs x ′ = A · x and outputs y ′ ∈ RM. These two networks are considered strongly equivalent if, for all xp ∈ R N+1, we have y ′ p = yp. The network *MLP-1* is trained on the original input vectors xp, while *MLP-2* is trained using the transformed input vectors x ′p defined as $$\mathbf{x}^{\prime}{}_{p}=\mathbf{A}\cdot\mathbf{x}_{p}$$ $$(18)$$ ′p = A · xp (18) where A is an N ′by (N + 1) rectangular transformation matrix, for some N ′≥ (N + 1). We establish in Malalur & Manry (2009) that input weights for *MLP-1* and *MLP-2* are related as $$\mathbf{W}^{\prime}\cdot\mathbf{A}=\mathbf{W}$$ $$(19)$$ W′· A = W (19) The negative gradient matrix for training the input weights in *MLP-2* is given by G′ = G · AT. Now, suppose that this negative gradient G′for *MLP-2* is mapped back to modify the input weights in *MLP-1*, using equation (19). The resulting mapped negative gradient for *MLP-1* is then $$\mathbf{G}^{\prime\prime}=\mathbf{G}\cdot\mathbf{R}_{i}$$ $$\mathbf{R}_{i}=\mathbf{A}^{T}\cdot\mathbf{A}$$ $$(20)$$ By expressing the SVD of Ri as Ri = UΣUT, we can derive that Ri −1 = UΣ−1U T, where U is an orthogonal matrix, Σ is a diagonal matrix with the singular values of Ri, and Σ−1is the diagonal matrix with the reciprocal of the non-zero singular values of Ri. Using equation (20), it can be deduced that A = Σ 1 2 UT. Comparing equation (20) with equation (16), it is clear that performing OWO-HWO is equivalent to performing OWO-BP on input vectors to which the whitening transformation Robinson & Manry (2013) has been applied. Since BP with optimal learning factor (OLF) converges, it is clear that HWO with an OLF also converges. Lemma-1: If we are at a local minimum in the weight space of the original network, we are also at a local minimum in the weight space of the transformed network. This follows from ((20) if G = 0. Lemma-2: If the input weight matrix W′ of the transformed network is trained using BP, this is not equivalent to applying BP to the original network's weight matrix W unless the matrix A *is orthogonal*. This can be derived from equation (20) because for any orthogonal matrix A, equation (20) becomes G′′ = G. This is also intuitive if we consider that BP updates the weights in the direction of the negative gradient of the loss function with respect to the weights. Lemma-3: For a non-diagonal matrix R, there exist an uncountably infinite number of matrices A that can be constructed. This follows from (20). It is because a non-diagonal matrix R has at least one non-zero element not located on the main diagonal. As there are infinite choices for the values of these non-zero elements, there are an uncountable number of possible matrices A that can be constructed by choosing the values of these non-zero elements in R. If the transformation matrix A is not orthogonal, then the mapped negative gradient for *MLP-1* obtained from *MLP-2* will not be equivalent to the true negative gradient of the loss function with respect to the weights in *MLP-1*. As a result, when optimal learning factors are used with BP to train the input weights, training with the original data is equivalent to training with the transformed data. Therefore, orthogonal transform matrices are useless in this context, as mentioned in Yu et al. (2005). Using the results derived in this section, many ways to improve feed-forward training algorithms suggest themselves. The intuition behind the proposed work is based on the following three ideas : 1. Choose a training algorithm that utilizes the negative gradient matrix G. 2. Substitute the negative gradient matrix G with the modified matrix G′′ from equation (20). 3. Identify appropriate elements in the matrix R. Single-stage optimization algorithms, such as conjugate gradient (CG) Tyagi et al. (2022a), may be suitable for addressing this problem. However, incorporating the elements of R as additional weights in the optimization process may compromise the conjugacy of the direction vectors if R is solved for at each iteration. As an alternative, using two-stage training algorithms that utilize the negative gradient matrix G or direction matrix D, such as OWO-BP and OWO-HWO Chen et al. (1999). In this work, we focus on OIG to develop OIGHWO. Specifically, we will develop a method for solving the matrix R, compute the resulting Gauss-Newton approximate Hessian for R, and apply the resulting OIG-HWO to improve the performance of OWO-BP. ## 3.2 Optimal Input Gain Algorithm There are at least two intuitive approaches for optimizing input gains to improve the performance of a given training algorithm. To minimize the training error E, these approaches involve searching for either the matrix A or the resulting matrix R in each iteration to minimize the training error E. As stated in *Lemma-2*, optimizing R will likely yield fewer solutions. In this section, we describe a method for solving R, find the resulting Gauss-Newton approximate Hessian for R, and use the resulting OIG algorithm to improve OWO-BP. The simplest non-orthogonal, non-singular transform matrix A is diagonal. For this case, let r(k) initially denote the k th diagonal element of R. Also, the elements of x ′p are simply scaled versions of xp. Following (20) we get $$\mathbf{R}={\left[\begin{array}{l l l l l}{r(1)}&{0}&{\cdots}&{0}&{0}\\ {0}&{r(2)}&{\cdots}&{0}&{0}\\ {\vdots}&{\vdots}&{\ddots}&{\vdots}&{\vdots}\\ {0}&{0}&{\cdots}&{r(N)}&{0}\\ {0}&{0}&{\cdots}&{0}&{r(N+1)}\end{array}\right]}$$ $$\left(21\right)$$. Instead of using only the negative gradient elements g(*k, n*) to update the input weights, we use g(*k, n*) · r(n) to replace g(*k, n*), the elements matrix G in equation 17. It is also noteworthy that the optimal learning factor (OLF), z Tyagi et al. (2022a) be absorbed into the gains r(n). Consider a multi-layer perceptron (MLP) being trained using the OWO-BP algorithm. The negative gradient G is a matrix of dimensions Nh by (N + 1), and the error function to be minimized with respect to the gains r(n) is given in (4). This error function is defined as follows: $$\begin{array}{c}{{y_{p}(i)=\sum_{n=1}^{N+1}w_{o i}(i,n)x_{p}(n)+\sum_{k=1}^{N_{h}}w_{o h}(i,k)\cdot}}\\ {{f\left(\sum_{n=1}^{N+1}(w(k,n)+r(n)\cdot g(k,n))x_{p}(n)\right)}}\end{array}$$ $$(22)^{\frac{1}{2}}$$ ! (22) The first partial of E with respect to r(m) is $$d_{r}(m)\equiv\frac{\partial E}{\partial r(m)}=\frac{-2}{N_{v}}\sum_{p=1}^{N_{v}}x_{p}(m)$$ $$\sum_{i=1}^{M}[t_{p}(i)-y_{p}(i)]v(i,m)$$ $$(23)$$ Here, g(*k, m*) is an element of the negative gradient matrix G in equation (12), and o ′ p (k) denotes the derivative of op(k) with respect to its net function. Then, $$v(i,m)$$ v(*i, m*) = X $$=\sum_{k=1}^{N_{h}}w_{o h}(i,k)o_{p}^{'}(k)g(k,m)$$ Using Gauss-Newton updates Bishop (2006), the elements of the Hessian matrix Hig are $$h_{i g}(m,u)\equiv\frac{\partial^{2}E}{\partial r(m)\partial r(u)}=\frac{2}{N_{v}}\sum_{p=1}^{N_{v}}x_{p}(m)x_{p}(u)\cdot$$ $$\sum_{i=1}^{M}v(i,m)v(i,u)$$ $$(24)$$ $$(25)$$ $$(26)^{\frac{1}{2}}$$ Finally, the input gain coefficient vector r is calcualted using OLS by solving $$\mathbf{H}_{\mathrm{ig}}\cdot\mathbf{r}=\mathbf{d}_{r}$$ Hig · r = dr (26) ## 3.2.1 Oig Hessian Matrix We choose to use Hessian matrix to analyze the convergence properties of OIG-HWO. Equation (25) for the OIG-HWO Hessian can be re-written as, $$h_{ig}(m,u)=\sum_{k=1}^{N_b}\sum_{j=1}^{N_b}\left[\frac{2}{N_v}\sum_{p=1}^{N_v}x_p(m)x_p(u)o_p^{\prime}(k)o_p^{\prime}(k)\cdot\sum_{i=1}^{N+1}w_{oh}(i,k)w_{oh}(i,j)\right]g(k,m)\cdot g(j,u)$$ m within the square brackets is nothing but an element from the Hessian of Newton's method. The term within the square brackets is nothing but an element from the Hessian of Newton's method for updating input weights. Hence, $$h_{i g}(m,u)=\sum_{k=1}^{N_{h}}\sum_{j=1}^{N_{h}}\left[{\frac{\partial^{2}E}{\partial w(k,m)\partial w(j,u)}}\right]\cdot g(k,m)\cdot g(j,u)$$ For fixed (*m, u*), the above equation can be expressed as $$h_{ig}(m,u)=\sum_{k=1}^{N_{h}}g_{m}(k)\sum_{j=1}^{N_{h}}h_{N}^{m,u}(k,j)g_{u}(j)$$ $$=\mathbf{g_{m}^{T}H_{N}^{mu}g_{u}}$$ $$(27)$$ $$(28)$$ $$(29)$$ where, gm is the mth column of the negative gradient matrix G and H m,u N is the matrix formed by choosing elements from the Newton's Hessian for weights connecting inputs (*m, u*) to all hidden units. Equation (29) gives the expression for a single element of the OIG-HWO Hessian, which combines information from Nh rows and columns of the Newton Hessian. This can be seen as compressing the original Newton Hessian of dimensions Nh by (N + 1) down to (N + 1). The OIG-HWO Hessian encodes the information from the Newton Hessian in a smaller dimension, making it less sensitive to input conditions and faster to compute. From equation (28), we see that the Hessian from Newton's method uses four indices (*j, m, u, k*) and can be viewed as a 4-dimensional array, represented by H4N ∈ RNhx(N+1)x(N+1)xNh . Using this representation, we can express a 4-dimensional OIG-HWO Hessian as $$\mathbf{H}_{i g}^{4}=\mathbf{G}^{T}\mathbf{H}_{N}^{4}\mathbf{G}$$ ig = GT H4N G (30) where H4 ig are defined as, $$h_{i g}^{4}(m,u,n,l)=\sum_{j=1}^{N_{h}}\sum_{k=1}^{N_{h}}h_{N}(j,m,u,k)g(j,n)g(k,l)$$ $$(30)$$ $$(31)$$ hN (j, m, u, k)g(j, n)g(*k, l*) (31) where hN (j, m, u, k) is an element of H4N . Comparing (28) and (31), we see that hig(*m, u*) = h 4 ig(*m, u, m, u*), i.e., the 4-dimensional H4 ig is transformed into the 2-dimensional Hessian, Hig, by setting n = m and l = u. To make this idea clear, consider a matrix, Q, then p(n) = q(*n, n*) is a vector, p, of all diagonal elements of Q. Similarly, the OIG-HWO Hessian Hig is formed by a weighted combination of elements from H4N . ## 3.2.2 Oig Integrated With Owo-Bp To minimize the error function E, given the vector of input gains r, the gradient dr, and the Hessian Hig, we can utilize Newton's method. Two potential approaches can be taken in each iteration of this method, first is that we transform the gradient matrix using R as shown in equation (20), and second, we decompose R to find A using OLS, and then transform the input data according to equation (18) before applying OWO-BP with the optimal learning factor (OLF). While the second approach may be more accurate, it is also more computationally inefficient and, therefore, not practical, even when A is diagonal. Therefore, it is generally recommended to use the first approach in order to minimize the error function effectively. Hence, the OWO is replaced with OIG in the OWO-BP algorithm to form OIG-BP described in Algorithm 1. Algorithm 1 OIG-BP training algorithm 1: Initialize W,Woi,Woh, Nit , it← 0 2: **while** it < Nit do 3: Solve (11) for all output weights. 4: Calculate negative G using equation (12) 5: **OIG step** Calculate dr and hessian Hig from (23) and (25) respectively. 6: Solve for r using equation (26) 7: Update W ← W + r · G 8: **OWO step** : Solve equation (11) to obtain Wo 9: it ← it + 1 10: **end while** When there are no linearly dependent inputs, the OIG algorithm can find the optimal gain coefficients for each input that minimize the overall mean squared training error. However, this is only sometimes the case when there are linearly dependent inputs. In this scenario, it is straightforward to show that the input autocorrelation matrix Rin and the gradient matrix G have dependent columns. This leads to the OIG Hessian being *ill-conditioned* and to sub-optimal gain coefficients. This could cause OIG-BP to have sub-optimal performance and possibly poor convergence. ## 3.3 Improvement To Oig-Bp In order to overcome sub-optimal performance of OIG in the presence of linearly dependent inputs, we show the immunity of HWO to linearly dependent inputs. We analyze the effect of replacing BP used in OIG-BP with HWO and show that using HWO forces Hig to be singular for linearly dependent inputs, which is highly desirable in order to detect and eliminate the dependent inputs. ## 3.3.1 Effect Of Linearly Dependent Inputs On Hwo If one of the inputs to the network is linearly dependent, it will cause the input auto-correlation matrix, Ri, to be singular. This can affect the convergence of the CG algorithm, leading to poor training performance. In this case, using OLS may be useful for detecting and eliminating the linearly dependent input. To compute the orthonormal weight update matrix Ghwo using OLS, we first compute G ′ hwo as $$\mathbf{G}_{h w o}^{\prime}=\mathbf{G}\cdot\mathbf{C}^{T}$$ $$(32)^{\frac{1}{2}}$$ G′hwo = G · CT(32) where C is a lower triangular matrix of orthonormal coefficients of dimension (N + 1). We can then map the orthonormal weight update to the original weight update as $$\begin{array}{r}{\mathbf{G}_{h w o}=\mathbf{G}_{h w o}^{\prime}\cdot\mathbf{C}}\\ {=\mathbf{G}\cdot\mathbf{C}^{T}\cdot\mathbf{C}}\end{array}$$ $$(33)$$ = G · CT· C(33) Assume xp(N + 2) was linearly dependent. This would cause the (N + 2)th row and column of Ri to be linearly dependent. During OLS, a singular auto-correlation matrix transforms to the (N + 2)th row of C to be zero. We replace BP in OIG-BP with HWO. The resulting OIG-HWO algorithm is described in Algorithm 2. Algorithm 2 OIG-HWO training algorithm 1: Initialize W,Woi,Woh, Nit , it← 0 2: Calculate Ri using (15) 3: **while** it < Nit do 4: Calculate negative G using (12) 5: **HWO step** : Calculate Ghwo using (33) to eliminate any linear dependency in the inputs. 6: **OIG step**: Calculate dr and hessian Hig from (23) and (25) respectively. 7: Solve for r using equation (26) 8: Update W ← W + r · Ghwo 9: **OWO step** : Solve equation (11) to obtain Wo 10: it ← it + 1 11: **end while** Lemma-3: The OIG − HW O *algorithm is immune to linearly dependent inputs and will completely ignore* the dependent inputs during training. Since the (N + 2)th row of C will be zero, it follows that CT C, which will be a square, symmetric matrix with zeros for the (N + 2)th row and column. Further, from (33), Ghwo will have zeros for the (N + 2)th column. The implication is that the weight update vector computed for all input weights connected to the dependent input (N + 2) is zero. These weights are not updated during training, effectively *freezing* them. This is highly desirable, as the dependent input does not contribute any new information. Thus, HWO-type update using OLS is perfectly capable of picking up linearly dependent inputs, leading to a robust training algorithm. This makes OIG-HWO immune to linearly dependent inputs. To illustrate the meaning of *lemma-3*, we took a data set called twod.tra ipn (2022), and generated a second one by adding some dependent inputs. Networks for the two datasets were initialized with the same net function means and standard deviations. Figure 2 clearly shows that the two training error curves overlay each other, validating *lemma-3*. ![10_image_0.png](10_image_0.png) Figure 2: Immunity of OIG-HWO to linearly dependent inputs ![10_image_1.png](10_image_1.png) To further demonstrate *lemma-3* and the effectiveness of OIG-HWO, we compare its performance on the dependent dataset with LM and OIG-BP. Figure 3 shows how dependence can slow down learning in all except the improved OIG-HWO algorithm. The effect is predominant in LM that takes huge computational resources. Figure 3: Performance comparison on dependent data Mathematically, suppose that the input vector xp are biased such that E[xp] = m. A zero-mean version of xp is x ′ p which satisfies xp = x ′ p + m. It is shown in Malalur & Manry (2009) that networks train more effectively with bunbiased inputs. Now, x ′ p can be expressed as A · xp, where $\mathbf{A}=\begin{bmatrix}1\\ 0\\ \vdots\\ 0\\ 0\end{bmatrix}$ $$\begin{array}{r r r}{0}&{\cdots}&{0}&{-m_{1}}\\ {1}&{\cdots}&{0}&{-m_{2}}\\ {\vdots}&{\ddots}&{\vdots}&{\vdots}\\ {0}&{\cdots}&{1}&{-m_{N}}\\ {0}&{\cdots}&{0}&{1}\end{array}$$ $$(34)$$ (34) Figure 3 shows that non-orthogonal transform matrices improve the training since it makes the inputs zero-mean. The HWO component of the OIG-HWO algorithm addresses the issue of sub-optimal performance in the presence of linearly dependent inputs. It has been demonstrated that the HWO is immune to such inputs Malalur & Manry (2009). By replacing the BP component of the OIG-BP with HWO, we can analyze the effect on the singularity of Hig for linearly dependent inputs. This is beneficial because the singularity of Hig allows for the detection and removal of dependent inputs. ## 4 Experimental Methods And Results We evaluate the computational complexity and performance of the OIG-HWO algorithm compared to other methods for approximation and replacement classifier tasks. In a replacement classifier, the OIG-HWO is used as a substitute in a ResNet18 He et al. (2016) style deep learning architecture and compares its testing performance to that of a scaled-CG (SCG) classifier Tyagi et al. (2021). We compare the performance of the proposed OIG-HWO algorithm with four other existing methods, namely, OWO-BP, OIG-BP, LM, and SCG. In SCG and LM, all weights are updated simultaneously in each iteration. However, in OWO-BP, OIG-BP, and OIG-HWO, we solve linear equations for the output weights and then update the input weights. ## 4.1 Experimental Procedure All the experiments are run on a machine equipped with a 3 MHz Intel i-7 CPU and 32 GB of RAM running the Windows 10 OS with PyTorch 1.13. We use the *k-fold* training with cross-validation and testing procedures to obtain the average training, validation, and testing errors. Each data set is split into k non-overlapping parts of equal size (k = 10 in our simulations). Of this, (k-2) parts (roughly 80%) are used for training. Of the remaining two parts, one is used for validation, and the other is used for testing (roughly 10% each). The procedure is repeated till we have exhausted all k combinations. Validation is performed per training iteration (to prevent over-training), and the network with the minimum validation error is saved. After training, the saved weights and the testing data are used to compute a testing error, which measures the network's ability to generalize. At the end of the k-fold procedure, the average training and testing errors and the average number of cumulative multiplies required for training are computed. These quantities form the metrics for comparison and are subsequently used to generate the plots and compare performances. In order to make a fair comparison of the various training methods for a given data set and fold, we use the same initial network for each algorithm, using net control Tyagi et al. (2022a). In net control, random initial input weights and hidden unit thresholds are scaled so each hidden unit's net function has the same mean and standard deviation. Specifically, the net function means are equal to .5, and the corresponding standard deviations are equal to 1. In addition, OWO is used to find the initial output weights. This ensures that all algorithms start at the same point in the weight space, eliminating any performance gains due to weight initialization. This is evident in all the plots, where we can see that all algorithms have the same starting MSE for the first training iteration. The final training error is hence not affected by different weight initializations. For each approximation datasets, we calculate the lowest MSE and probability of error, P e for classification datasets. ## 4.2 Computational Burden One of the metrics chosen for comparison is the cumulative number of multiplies required for training using a particular algorithm. In this section, we identify the computational burden per training iteration for each of the algorithms compared. Let Nu = N + Nh + 1 denote the number of weights connected to each output. The total number of weights in the network is denoted as Nw = M(N + Nh + 1) + Nh(N + 1). The number of multiplies required to solve for output weights using OLS is Mols, which is given by $$M_{ols}=N_{u}(N_{u}+1)\left[M+\frac{1}{6}N_{u}(2N_{u}+1)+\frac{3}{2}\right]\tag{35}$$ The numbers of multiplies required per training iteration using BP, OWO-BP, OIG-HWO, and LM are given by $$M_{b p}=N_{v}\left[M N_{u}+2N_{h}(N+1)+M(N+6N_{h}+4)\right]+N_{w}$$ $$(36)^{\frac{1}{2}}$$ $$M_{\rm new-bp}=N_{\rm v}\left[\,2N_{h}(N+2)+M(N_{\rm u}+1)+M(N+6N_{h}+4)\,+\frac{N_{\rm u}(N_{\rm u}+1)}{2}\right]+M_{ds}+N_{h}(N+1)\tag{37}$$ $$M_{neg}=M_{neg\to p}+N_{e}[(N+1)(3MN_{h}+MN+2(M+N)+3)-M(N+6N_{h}+4)-N_{h}(N+1)]+(N+1)^{3}\tag{38}$$ $$M_{l m}=M_{b p}+N_{v}[M N_{u}(N_{u}+3N_{h}(N+1))+4N_{h}^{2}(N+1)^{2}]+N_{w}^{3}+N_{w}^{2}$$ w (39) $$(39)$$ $$M_{s c g}=4N_{v}[N_{h}(N+1)+M N_{u}]+10[N_{h}(N+1)+M N_{u}]$$ $$(40)$$ Note that Moig consists of Mowo−bp plus the required multiplies for calculating optimal input gains. Similarly, Mlm consists of Mbp plus the required multiplies for calculating and inverting the Hessian matrix. Nδ h is the number of new hidden units added at each growing step of the cascade correlation algorithm. ## 4.3 Approximation Dataset Results We take mean square error (MSE) in the approximation datasets as the metric for various algorithm performances. In all data sets, the inputs have been normalized to be zero-mean and unit variance. This way, it becomes clear that OIG's improved results are not due to a simple, single-data normalization. Table 1 shows the specifications of the datasets used to evaluate the algorithm performances. | Datasets | N | M | Nv | |-----------------|-----|-----|-------| | Prognostics | 17 | 9 | 4745 | | Remote Sensing | 16 | 3 | 5992 | | Federal Reserve | 15 | 1 | 1049 | | Housing | 16 | 1 | 22784 | | Concrete | 8 | 1 | 1030 | | White Wine | 11 | 1 | 4898 | | Parkinson's | 16 | 2 | 5875 | Table 1: Specification of approximation datasets The number of hidden units for each data set is selected by first training a multi-layer perceptron with a large number of hidden units followed by a step-wise pruning with validation Tyagi et al. (2020). The least useful hidden unit is removed at each step until only one hidden unit is left. The number of hidden units corresponding to the smallest validation error is used for training on that data set. A maximum number of iterations is fixed for each algorithm, along with an early stopping criterion. The maximum training iterations for all algorithms are set to 1000. For each dataset, we perform a 10-fold training with cross-validation and testing for the proposed OIG-HWO algorithm and compare with those listed in section 4, using the datasets listed in Table 1. Two plots are generated for each datasets. The average mean square error (MSE) for training from 10-fold training is plotted versus the number of iterations (shown on a log10 scale), and the average training MSE from 10-fold training is plotted versus the cumulative number of multiplies (also shown on a log10 scale) for each algorithm. Our results will present the average MSE achieved through training, along with the corresponding computational requirements. Given the constraints on the length of the paper, we have opted to display two plots for the initial dataset, while for the subsequent datasets, we depict the average MSE versus the cumulative number of multiplications. This approach provides a more compelling representation of both the learning and computational aspects of our study. ## 4.3.1 Prognostics Dataset The Prognostics datasetile, called F-17 ipn (2022) consists of parameters that are available in the Bell ![13_image_0.png](13_image_0.png) Helicopter health usage monitoring system (HUMS), which performs flight load synthesis, which is a form of prognostics Manry et al. (2001). For this data file, 13 hidden units were used for all algorithms. In Figure 4, the average mean square error (MSE) for training from 10-fold validation is plotted versus the number of iterations for each algorithm. In Figure 5, the average training MSE from 10-fold validation is plotted versus the cumulative number of multiplies. From Figure 4, the overall training error for the proposed OIG-HWO overlaps with LM, with LM coming out on top by a narrow margin. However, the performance of LM comes with significantly higher computational demand, as shown in Figure 5. Prognostics data is a highly correlated/non-correlated dataset. The proposed OIG-HWO algorithm can give good performance despite these dependent features. The reason is that the input gains checks on the dependent features and reduces their effect while training. This aspect of input gains is not present in other comparing algorithms, and hence they are not as efficiently performing as the proposed algorithm. Another aspect of the features is that distinct distributions are essential in the algorithm's performance. This is proven by evaluating the algorithms using dependent features with and without. From Table 2, we observe that LM gives the marginally lowest mean squared error followed by OIG-HWO. It is worth emphasizing that the OIG-HWO algorithm achieves significantly lower testing errors than similar algorithms, except LM. Figure 4: Average training MSE vs. Iterations for Prognostics data ## 4.3.2 Remote Sensing Dataset The Remote Sensing data ipn (2022) represents the training set for inversion of surface permittivity, the normalized surface rams roughness, and the surface correlation length found in backscattering models from randomly rough dielectric surfaces Fung et al. (1992). For this dataset, 25 hidden units were used for all algorithms. From Figure 6, the average training MSE from the 10-fold procedure for OIG-HWO is better than all other algorithms being compared. Regarding computational cost, the proposed algorithms consume the least computation than OWO-BP, SCG, LM and OIG-BP. However, except OWO-BP, all algorithms utilize fewer computations about two orders of magnitude than LM. By assigning input gains, the proposed ![14_image_0.png](14_image_0.png) Figure 5: Average training MSE vs. cumulative multiplies for Prognostics data OIG-HWO algorithm computes tailored weights for each input feature, reducing the effect of less important features and extracting useful information from the high dimensional dataset. From Table 2, we see that OIG-HWO has less testing error than all the other algorithms except LM. However, LM requires far more multiplier than OIG-HWO. ## 4.3.3 Federal Reserve Dataset The Federal Reserve Economic Data Set fed (2011) contains economic data for the USA from 01/04/1980 to 02/04/2000 weekly. From the given features, the goal is to predict the 1-Month CD Rate similar to the US Census Bureau datasets us (2022). For this data file, called TR on its webpage, 34 hidden units were used for all algorithms. Figure 7 shows LM had the best overall training performance, with OIG-HWO a close. The proposed improvement to OIG performs better than OIG-BP, SCG, and OWO-BP without significant computational overhead. The proposed OIG-HWO algorithm can handle such data as it assigns high weights to features with acceptable variance than to features with very low variance. Results obtained from OIG-HWO act as a testament to the same. From Table 2, we observe that OIG-HWO has less testing error than the other algorithms. ## 4.3.4 Housing Dataset The Housing dataset del (2011) is designed based on data provided by the US Census Bureau us (2022). These are all concerned with predicting the median price of houses in a region based on demographic composition and the state of the housing market in the region. House-16H data was used in our simulation, with 'H' standing for high difficulty. Tasks with high difficulty have had their attributes chosen to make the modeling more difficult due to higher variance or lower correlation of the inputs to the target. For this dataset, 30 hidden units were used for all algorithms. From Figure 8, the SCG algorithm at the end of training has less training error, followed closely by LM and OIG-HWO, respectively. The OIG-HWO algorithm adjusts the input gains and learns to assign lower weights to low variant features. Thus, the columns with almost ![15_image_0.png](15_image_0.png) Figure 6: Average training MSE vs. cumulative multiplies for Remote Sensing dataset constant values do not contribute toward the end goal. Table 2 shows the superiority of OIG-HWO having less testing error over other algorithms. ## 4.3.5 Concrete Compressive Strength Dataset The Concrete dataset Yeh (1998)con (2013) is the actual concrete compressive strength for a given mixture under a specific age (days) determined by laboratory. The concrete compressive strength is a highly nonlinear function of concrete age and ingredients. For this dataset, we trained all algorithms with 13 hidden units. From Figure 9, LM has the best overall training error, followed by OIG-HWO and closely in third by SCG. Table 2, also supports the advantage from OIG-HWO with less testing error over other algorithms. ## 4.3.6 Wine Data Set The Wine dataset Whi (2013) Cortez et al. (2009) is related to the wine variant of the Portuguese "Vinho Verde" wine. The inputs include objective tests (e.g., PH values), and the output is based on sensory data (median of at least three evaluations made by wine experts). Each expert graded the wine quality between 0 (very bad) and 10 (very excellent). For this dataset, we trained all algorithms with 24 hidden units. Figure 9 shows that LM has the best final training performance, followed very closely by OIG-HWO. OIG-HWO handles dependent features and accounts for the skewness in data, resulting in better results. Table 2 shows that OIG-HWO has substantially better testing performance than the other methods. ## 4.3.7 Parkinson'S Dataset The Parkinson's data set par (2013), Little et al. (2008) comprises a range of biomedical voice measurements from 42 people with early-stage Parkinson's disease recruited to a six-month trial of a telemonitoring device for remote symptom progression monitoring. The main aim of the dataset is to predict the motor and total UPDRS scores (′motor′*UPDRS* and ′total′*UPDRS*) from the 16 voice measures. For this data set, LM's performed better than OIG-HWO in terms of the training error, followed by the rest. However, LM requires ![16_image_0.png](16_image_0.png) Figure 7: Average training MSE vs. cumulative multiplies for Federal Reserve dataset a lot more computations to achieve the slight improvement, as evident in Figure 11. For this dataset, we trained all algorithms with 12 hidden units. From Table 2, we see that OIG-HWO has less testing error than other algorithms. | Dataset | OWO-BP | SCG | LM | OIG-BP | OIG-HWO | |----------------|-----------|-----------|-----------|-----------|-----------| | Prognostics | 3.2752E7 | 2.8224E7 | 1.4093E7 | 2.1490E7 | 1.4680E7 | | Remote Sensing | 1.0655 | 0.8627 | 0.2954 | 0.7637 | 0.2094 | | Treasury | 0.1391 | 0.3245 | 0.1072 | 0.1276 | 0.1036 | | Housing | 17.2398E8 | 38.0949E8 | 11.9216E8 | 13.2538E8 | 11.7886E8 | | Concrete | 30.5863 | 73.6012 | 27.7605 | 29.7421 | 27.1604 | | White Wine | 0.5222 | 0.5812 | 0.4982 | 0.5027 | 0.4075 | | Parkinson's | 131.1679 | 127.3441 | 123.2571 | 124.2698 | 123.57576 | Table 2: 10-fold testing MSE results for approximation dataset, (best testing MSE is in bold) ## 4.4 Discussion From the training plots and Table 2, we deduce the following : 1. OIG-HWO is the top performer in 5 out of the 7 data sets in terms of testing MSE. The following best-performing algorithm is LM by a small margin. However, LM being a second-order method, its performance comes at a significant cost of computation - almost two orders of magnitude greater than the rest. 2. In terms of average training error, OWO-BP consistently appears in the last place on 4 of the 7 data sets, while SCG features in the last place on 3 out of 7 data sets. However, being first-order methods, they set the bar for the lowest computation cost. ![17_image_0.png](17_image_0.png) Figure 8: Average training MSE vs. cumulative multiplies for Housing Dataset 3. When SCG is potentially overtraining, OIG's training MSE is low for many of the initial iterations so early stopping will make training more efficient, even if SCG has less training error at the end. 4. Both OIG-BP and OIG-HWO always perform better than their predecessors, OWO-BP and OWOHWO, respectively, on all data sets, and they are better than SCG. This performance is achieved with minimal computational overhead compared to SCG and OWO-BP, as evident in the training MSE plots vs. number of cumulative multiplications. 5. OIG-BP is never better than LM in training and testing MSE and consistently perfrom as the third best for all the datasets. OIG-HWO is always better than OIG-BP. 6. LM performs marginally better than OIG-HWO on two data set (*Prognostsics* and *Parkinson*′s datasets) and has almost identical or worst ten fold testing MSE for the rest of the datasets. 7. Ignoring LM, which is a second order computationally heavy, overall, OIG-based algorithms (OIG-BP and OIG-HWO) are consistently in the top two performing algorithms. As a general observation, both OIG-BP and OIG-HWO algorithms consistently outperform the OWO-BP algorithm in all three phases of learning, namely training, validation, and testing. The insertion of OIG into OWO-BP has been found to help enhance its performance. Furthermore, both OIG-BP and the improved OIG-HWO algorithms outperform SCG regarding the average minimum testing error. The OIG-HWO algorithm often performs comparably to LM but with minimal computational overhead. It is worth noting that while OIG-BP is an improvement over OWO-BP, it is not as effective as the OIG-HWO algorithm. ## 4.5 Replacement Classifier Datasets We compare the OIG-HWO with CG-MLP and SCG using transfer learning. The classification datasets includes MNIST LeCun et al. (1998b), Scrap Kumar et al. (2022), Fashion-MNIST Kumar et al. (2022), ![18_image_0.png](18_image_0.png) Figure 9: Average training MSE vs. cumulative multiplies for Concrete Dataset CIFAR-10 Krizhevsky & Hinton (2009), SVHN Netzer et al. (2011), Cats-dogs Parkhi et al. (2012), Intel Image Int (2021). All the datasets are normalized to zero minimum and maximum one-pixel values. Table 3 shows the specifications of the datasets used to evaluate the algorithm performances. We studied a transfer learning comparison of the OIG-HWO, SCG, and OIG-BP algorithms using normalized classification datasets with pixel values ranging from zero minimum to one maximum. Table 3 outlines the specifications of the datasets used to evaluate algorithm performance. | Datasets | N | M | Nv | Nvtest | |---------------|-------------|-----|-------|----------| | MNIST | 28 x 28 x 1 | 10 | 54000 | 6000 | | Scrap | 28 x 28 x 1 | 2 | 14382 | 3595 | | Fashion MNIST | 28 x 28 x 1 | 10 | 60000 | 10000 | | CIFAR10 | 32 x 32 x 3 | 10 | 50000 | 10000 | | SVHN | 32 x 32 x 3 | 10 | 73257 | 26032 | | Cats and Dogs | 32 x 32 x 3 | 2 | 20000 | 5000 | | Intel Image | 32 x 32 x 3 | 6 | 14034 | 3000 | Table 3: Replacement classifier datasets To create a replacement classifier in a deep learning architecture, we utilized the ResNet-18 architecture He et al. (2016) in the MATLAB 2021 Neural Network toolbox mat (2021). We trained ResNet-18 for each dataset, selecting the best validation accuracy (Pe) after a certain number of iterations. The training was performed using a learning rate of 1e − 4, 32 batch size, and Adams Kingma & Ba (2014) optimizer. We found the optimal Nh value by a grid search for various Nh values ([5, 10, 15, 20, 30, 100])). The feature vector extracted before this final layer was common to all datasets and contained 512 features. The best network was saved, and its final feature layer was extracted as input for each replacement classifier. ResNet-18 requires input images of size 224 x 224 x 3. We implemented an augmented image datastore pipeline with the option colorprocessing=gray2rgb to accommodate black and white images. We trained the network using a custom ![19_image_0.png](19_image_0.png) Figure 10: Average training MSE vs. cumulative multiplies for White Wine Dataset final classification layer tailored to the specific number of classes for each dataset. After training, we replaced the final fully connected layer of ResNet-18, which has 1000 hidden units, with our replacement classifiers. Table 4 shows the superiority of OIG-HWO based classifier over other algorithms. Table 4: 10 fold cross validation Pe results for replacement classifier dataset, (best testing Pe is in bold, for optimal Nh values) | Dataset | SCG/Nh | OIG-BP/Nh | OIGHWO/Nh | |---------------|------------|-------------|-----------| | MNIST | 0.39/30 | 0.37/20 | 0.368/30 | | Scrap | 0.728/20 | 0.554/10 | 0.509/30 | | Fashion MNIST | 5.705/100 | 5.812/5 | 5.366/30 | | CIFAR10 | 6.76/100 | 6.599/5 | 6.227/100 | | SVHN | 3.86025/30 | 3.632/30 | 3.619/100 | | Cats dogs | 4.516/100 | 4.504/100 | 4.32/10 | | Intel image | 9.483/100 | 9.287/10 | 9.02/100 | ## 5 Conclusion And Future Work In this study, we investigated the impact of linear input transformations on the training of MLPs. To do this, we developed the OIG-HWO algorithm, which optimizes the input gains or coefficients that scale the input data before the MLP processes it. The OIG-HWO algorithm uses Newton's method to minimize the error function with respect to input gains enhancing convergence of OIG-HWO. It has been shown that the learning behavior is different for functionally equivalent networks with different input transforms. It has also been shown that learning in the transformed network is equivalent to multiplying the original network's input ![20_image_0.png](20_image_0.png) Figure 11: Average training MSE vs. cumulative multiplies for Parkinson's Dataset weight gradient matrix by an autocorrelation matrix. It has also been shown that Newton's algorithm can be used to find the optimal diagonal autocorrelation matrix, resulting in the optimal input gain technique. Beyond this, the OIG-HWO algorithm has some interesting characteristics desirable for deep learning architectures. Deep learning algorithms have performed exceptionally well in complex applications involving natural language, speech, images, and visual scenes. An underlying issue among these applications is the redundancy in data. Hence a typical pre-processing step in most deep learning applications is to apply whitening transformation to the raw data. HWO, as mentioned earlier is equivalent to back-propagation on whitened inputs. This means that OIG-HWO could serve as a building block for complex deep-learning architectures that could use the raw data directly without a pre-processing operation. The OIG technique has been used to substantially speed up the convergence of OWO-BP, which is a two-stage first-order training algorithm. This algorithm was called OIG-BP. The OIG Gauss-Newton Hessian is a weighted average of the input weight Gauss-Newton Hessian, where the weights are elements of the negative input weight gradient matrix. OIG-BP was shown to be sub-optimal in the presence of linearly dependent inputs. Subsequently, OIG was applied to OWO-HWO to create an improved algorithm called OIG-HWO. Results from seven data sets showed that the OIG-based algorithms performed much better than two common first order algorithms with comparable complexity, namely SCG and OWO-BP. They come close to LM regarding the training error, but with orders of magnitude less computation. This is evident in all of the plots of training error versus the required number of multiplies and also from the expressions for the numbers of multiplies. Based on the results, we conclude that OIG-HWO is a strong candidate for shallow learning architectures and performs better than the SCG and OIG-BP algorithms as a replacement classifier. For future work, the OIG technique can be extended to additional one and two-stage first-order algorithms, including standard BP, to other network types such as RBF networks and additional network parameters, yielding fast second-order methods rival LM's performance but with significantly reduced complexity. ## References Delve Datasets. http://www.cs.toronto.edu/~delve/data/datasets.html, 2011. The University of Toronto. Function Approximation Repo. http://funapp.cs.bilkent.edu.tr/DataSets/, 2011. Bilkent University. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/wine+quality, 2013. University of California, Irvine, School of Information and Computer Sciences. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/concrete+compressive+ strength, 2013. University of California, Irvine, School of Information and Computer Sciences. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/parkinsons, 2013. University of California, Irvine, School of Information and Computer Sciences. Intel Image Classification Data. https://www.kaggle.com/datasets/puneet6060/ intel-image-classification, 2021. Intel. Matlab Deep Learning toolbox. https://www.mathworks.com/help/deeplearning/ref/resnet18.html, 2021. The MathWorks. Regression data files. https://ipnnl.uta.edu/training-data-files/regression/, 2022. Image Processing and Neural Networks Lab, The University of Texas Arlington. Us Census Bureau. https://www.census.gov/data/datasets.html, 2022. United States Census Bureau. Introducing Chat GPT. https://openai.com/blog/chatgpt, 2023. Open AI. Federico Adolfi, Jeffrey S Bowers, and David Poeppel. Successes and critical failures of neural networks in capturing human-like speech recognition. *Neural Networks*, 162:199–211, 2023. Simon A Barton. A matrix method for optimizing a neural network. *Neural Computation*, 3(3):450–459, 1991. Roberto Battiti. First-and second-order methods for learning: between steepest descent and newton's method. Neural computation, 4(2):141–166, 1992. Friedrich Biegler-König and Frank Bärmann. A learning algorithm for multilayered neural networks based on linear least squares problems. *Neural Networks*, 6(1):127–131, 1993. Christopher M. Bishop. *Pattern Recognition and Machine Learning (Information Science and Statistics)*. Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738. Bogdan Bochenek and Zbigniew Ustrnul. Machine learning in weather prediction and climate analyses—applications and perspectives. *Atmosphere*, 13(2):180, 2022. Stephen Boyd and Lieven Vandenberghe. *Convex Optimization*. Cambridge University Press, USA, 2004. Hung-Han Chen, Michael T Manry, and Hema Chandrasekaran. A neural network training algorithm utilizing multiple sets of linear equations. *Neurocomputing*, 25(1-3):55–72, 1999. Wei Chen, Huilin Xu, Lifen Jia, and Ying Gao. Machine learning model for bitcoin exchange rate prediction using economic and technology determinants. *International Journal of Forecasting*, 37(1):28–43, 2021a. Xiaoxue Chen, Lianwen Jin, Yuanzhi Zhu, Canjie Luo, and Tianwei Wang. Text recognition in the wild: A survey. *ACM Computing Surveys (CSUR)*, 54(2):1–35, 2021b. Paulo Cortez, António Cerdeira, Fernando Almeida, Telmo Matos, and José Reis. Modeling wine preferences by data mining from physicochemical properties. *Decision support systems*, 47(4):547–553, 2009. George Cybenko. Approximation by superpositions of a sigmoidal function. *Mathematics of control, signals* and systems, 2(4):303–314, 1989. Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition. In *2018 IEEE international conference on acoustics, speech and signal processing* (ICASSP), pp. 5884–5888. IEEE, 2018. Richard O Duda, Peter E Hart, and David G Stork. *Pattern classification*. John Wiley & Sons, 2012. J Patrick Fitch, Sean K Lehman, Farid U Dowla, SHIN-Y Lu, Erik M Johansson, and Dennis M Goodman. Ship wake-detection procedure using conjugate gradient trained artificial neural networks. IEEE Transactions on Geoscience and Remote Sensing, 29(5):718–726, 1991. Adrian K Fung, Zongqian Li, and Kun-Shan Chen. Backscattering from a randomly rough dielectric surface. IEEE Transactions on Geoscience and remote sensing, 30(2):356–369, 1992. Stuart Geman, Elie Bienenstock, and René Doursat. Neural networks and the bias/variance dilemma. Neural computation, 4(1):1–58, 1992. Philip E Gill, Walter Murray, and Margaret H Wright. *Practical optimization*. SIAM, 2019. Federico Girosi and Tomaso Poggio. Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation, 1(4):465–469, 1989. Gene H Golub and Charles F Van Loan. *Matrix computations*, volume 3. JHU Press, 2012. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016. Martin T Hagan and Mohammad B Menhaj. Training feedforward networks with the marquardt algorithm. IEEE transactions on Neural Networks, 5(6):989–993, 1994. Eric J Hartman, James D Keeler, and Jacek M Kowalski. Layered neural networks with gaussian hidden units as universal approximations. *Neural computation*, 2(2):210–215, 1990. Sherif Hashem and Bruce Schmeiser. Improving model accuracy using optimal linear combinations of trained neural networks. *IEEE Transactions on neural networks*, 6(3):792–794, 1995. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. *IEEE* Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. Robert Hecht-Nielsen. Theory of the backpropagation neural network. In *Neural networks for perception*, pp. 65–93. Elsevier, 1992. Ahmet Kara. A data-driven approach based on deep neural networks for lithium-ion battery prognostics. Neural Computing and Applications, 33(20):13525–13538, 2021. Manisha M Kasar, Debnath Bhattacharyya, and TH Kim. Face recognition using neural network: a review. International Journal of Security and Its Applications, 10(3):81–100, 2016. Kang Ke, Sun Hongbin, Zhang Chengkang, and Carl Brown. Short-term electrical load forecasting method based on stacked auto-encoding and gru neural network. *Evolutionary Intelligence*, 12:385–394, 2019. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. John Kolen and Jordan Pollack. Back propagation is sensitive to initial conditions. *Advances in neural* information processing systems, 3, 1990. Alexander Kolesnikov, Alexey Dosovitskiy, Dirk Weissenborn, Georg Heigold, Jakob Uszkoreit, Lucas Beyer, Matthias Minderer, Mostafa Dehghani, Neil Houlsby, Sylvain Gelly, Thomas Unterthiner, and Xiaohua Zhai. An image is worth 16x16 words: Transformers for image recognition at scale. 2021. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. Nalin Kumar, Manuel Gerardo Garcia, and Kanishka Tyagi. Material handling using machine learning system, January 27 2022. US Patent App. 17/495,291. Quoc V Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y Ng. On optimization methods for deep learning. In *Proceedings of the 28th International Conference on International Conference* on Machine Learning, pp. 265–272, 2011. Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In G. Orr and Muller K. (eds.), *Neural* Networks: Tricks of the trade, pp. 9–50. Springer, 1998a. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998b. Kenneth Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of applied mathematics, 2(2):164–168, 1944. FL Lewis, S Jagannathan, and A Yeşildirek. Neural network control of robot arms and nonlinear systems. In Neural Systems for control, pp. 161–211. Elsevier, 1997. Max Little, Patrick McSharry, Eric Hunter, Jennifer Spielman, and Lorraine Ramig. Suitability of dysphonia measurements for telemonitoring of parkinson's disease. *Nature Precedings*, pp. 1–1, 2008. Sanjeev S Malalur and Michael Manry. Feed-forward network training using optimal input gains. In 2009 International joint conference on neural networks, pp. 1953–1960. IEEE, 2009. Michael Manry, Apollo. S J, L S Allen, W D Lyle, W Gong, M S Dawson, and A K Fung. Fast training of neural networks for remote sensing. *Remote Sensing Reviews*, 9:77–96, 1994. Michael T Manry, Steven J Apollo, and Qiang Yu. Minimum mean square estimation and neural networks. Neurocomputing, 13(1):59–74, 1996. MT Manry, H Chandrasekaran, CH Hsieh, Yu Hen Hu, and Jenq-Nenq Hwang. Signal processing applications of the multilayer perceptron. In *Handbook on Neural Network Signal Processing*. CRC Press, 2001. Karn Meesomsarn, Roungsan Chaisricharoen, Boonruk Chipipop, and Thongchai Yooyativong. Forecasting the effect of stock repurchase via an artificial neural network. In *2009 ICCAS-SICE*, pp. 2573–2578. IEEE, 2009. Martin Fodslette Møller. A scaled conjugate gradient algorithm for fast supervised learning. *Neural networks*, 6(4):525–533, 1993. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814, 2010. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In *NIPS Workshop on Deep Learning and* Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_ housenumbers.pdf. Son Nguyen, Kanishka Tyagi, Parastoo Kheirkhah, and Michael Manry. Partially affine invariant back propagation. In *2016 International Joint Conference on Neural Networks (IJCNN)*, pp. 811–818. IEEE, 2016. Yohhan Pao. Adaptive pattern recognition and neural networks. 1989. Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3498–3505, 2012. doi: 10.1109/CVPR.2012. 6248092. Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, Sebastian Stüker, and Alexander Waibel. Very deep self-attention networks for end-to-end speech recognition. *arXiv preprint arXiv:1904.13377*, 2019. Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In *Proceedings of the IEEE conference on computer vision and pattern* recognition, pp. 652–660, 2017. Sarunas Raudys. *Statistical and Neural Classifiers: An integrated approach to design*. Springer Science & Business Media, 2001. Melvin Deloyd Robinson and Michael Thomas Manry. Two-stage second order training in feedforward neural networks. In *The Twenty-Sixth International FLAIRS Conference*, 2013. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, La Jolla Inst for Cognitive Science, University of California, San Diego, 1985. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. *nature*, 323(6088):533–536, 1986. Robert S Scalero and Nazif Tepedelenlioglu. A fast new algorithm for training feedforward neural networks. IEEE Transactions on signal processing, 40(1):202–210, 1992. Bruce W Suter. The multilayer perceptron as an approximation to a bayes optimal discriminant function. IEEE transactions on neural networks, 1(4):291, 1990. Hong Hui Tan and King Hann Lim. Review of second-order optimization techniques in artificial neural networks backpropagation. In *IOP conference series: materials science and engineering*, volume 495, pp. 012003. IOP Publishing, 2019. Kanishka Tyagi. *Automated multistep classifier sizing and training for deep learners*. PhD thesis, Department of Electrical Engineering, The University of Texas at Arlington, Arlington, TX, 2018. Kanishka Tyagi and Michael Manry. Multi-step training of a generalized linear classifier. Neural Processing Letters, 50:1341–1360, 2019. Kanishka Tyagi, Nojun Kwak, and Michael T Manry. Optimal conjugate gradient algorithm for generalization of linear discriminant analysis based on l1 norm. In *ICPRAM*, pp. 207–212, 2014. Kanishka Tyagi, Son Nguyen, Rohit Rawat, and Michael Manry. Second order training and sizing for the multilayer perceptron. *Neural Processing Letters*, 51(1):963–991, 2020. Kanishka Tyagi, Chinmay Rane, Bito Irie, and Michael Manry. Multistage newton's approach for training radial basis function neural networks. *SN Computer Science*, 2(5):1–22, 2021. Kanishka Tyagi, Chinmay Rane, and Michael Manry. Supervised learning. In *Artificial Intelligence and* Machine Learning for EDGE Computing, pp. 3–22. Elsevier, 2022a. Kanishka Tyagi, Yihang Zhang, John Kirkwood, Shan Zhang, Sanling Song, and Narbik Manukian. Radar system using a machine-learned model for stationary object detection, 2022b. US Patent App. 17/230,877. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Daomiao Wang, Qihan Hu, and Cuiwei Yang. Biometric recognition based on scalable end-to-end convolutional neural network using photoplethysmography: A comparative study. *Computers in Biology and Medicine*, 147:105654, 2022. Gou-Jen Wang and Chih-Cheng Chen. A fast multilayer neural-network training algorithm based on the layer-by-layer optimizing procedures. *IEEE Transactions on Neural Networks*, 7(3):768–775, 1996. Yuqing Wang, Qiang Ge, Wenkai Lu, and Xinfei Yan. Well-logging constrained seismic inversion based on closed-loop convolutional neural network. *IEEE Transactions on Geoscience and Remote Sensing*, 58(8): 5564–5574, 2020. Paul Werbos. *Beyond regression: New tools for prediction and analysis in the behavioral sciences*. PhD thesis, Committee on Applied Mathematics, Harvard University, Cambridge, MA, 1974. Halbert White. Economic prediction using neural networks: The case of ibm daily stock returns. In *ICNN*, volume 2, pp. 451–458, 1988. Halbert White. Connectionist nonparametric regression: Multilayer feedforward networks can learn arbitrary mappings. *Neural networks*, 3(5):535–549, 1990. David H Wolpert. The lack of a priori distinctions between learning algorithms. *Neural computation*, 8(7): 1341–1390, 1996. Bing-Fei Wu. Minimum mean-squared error estimation of stochastic processes by mutual entropy. International journal of systems science, 27(12):1391–1402, 1996. I-C Yeh. Modeling of strength of high-performance concrete using artificial neural networks. *Cement and* Concrete research, 28(12):1797–1808, 1998. Changhua Yu, Michael T Manry, and Jiang Li. Hidden layer training via hessian matrix information. In FLAIRS Conference, pp. 688–694, 2004. Changhua Yu, Michael T Manry, and Jiang Li. Effects of nonsingular preprocessing on feedforward network training. *International Journal of Pattern Recognition and Artificial Intelligence*, 19(02):217–247, 2005. Zhen Zhang, Weimin Shao, and Hong Zhang. A learning algorithm for multilayer perceptron as classifier. In *IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339)*, volume 3, pp. 1681–1684. IEEE, 1999. ## A Appendix: Training Weights By Orthogonal Least Squares OLS is used to solve for the output weights, pruning of hidden units Tyagi et al. (2020), input units Tyagi & Manry (2019) and deciding on the number of hidden units in a deep learner Tyagi (2018). OLS is a transformation of the set of basis vectors into a set of orthogonal basis vectors thereby measuring the individual contribution to the desired output energy from each basis vector. In an autoencoder, we are mapping from an (N+1) dimensional augmented input vector to it's reconstruction in the output layer. The output weight matrix Woh ∈ ℜN×Nh and yp in elements wise will be given as $$y_{p}(i)=\sum_{n=1}^{N+1}w_{o h}(i,n)\cdot x_{p}(n)$$ To solve for the output weights by regression , we minimize the MSE as in (4). In order to achieve a superior numerical computation, we define the elements of auto correlation R ∈ ℜNh×Nh and cross correlation matrix C ∈ ℜNh×M as follows : $$r(n,l)={\frac{1}{N_{v}}}\sum_{p=1}^{N_{v}}O_{p}(n)\cdot O_{p}(l)\quad\quad c(n,i)={\frac{1}{N_{v}}}\sum_{p=1}^{N_{v}}O_{p}(n)\cdot t_{p}(i)$$ $$(41)$$ $$(42)^{\frac{1}{2}}$$ Op(n) · tp(i) (42) Substituting the value of yp(i) in (4) we get, $$E={\frac{1}{N_{v}}}\sum_{p=1}^{N_{v}}\sum_{m=1}^{M}[t_{p}(m)-\sum_{k=1}^{N_{h}}w_{o}h(i,k)\cdot O_{p}(k)]^{2}$$ $$(43)$$ Differentiating with respect to Woh and using (42) we get $$\frac{\partial E}{w_{o h}(m,l)}=-2[c(l,m)-\sum_{k=1}^{N_{h}+1}w_{o h}(m,k)r(k,l)]$$ $$(44)$$ $$\left(45\right)$$ Equating (44) to zero we obtain a M set of Nh + 1 linear equations in Nh + 1 variables. In a compact form it can be written as $$\mathbf{R}\cdot\mathbf{W}^{T}=\mathbf{C}$$ R · WT = C (45) By using orthogonal least square, the solution for computation of weights in (45) will speed up. For convineance, let Nu = Nh + 1 and the basis functions be the hidden units output O ∈ ℜ(Nh+1)×1 augmented with a bias of 1. For an unordered basis function O of dimension Nu , the mth orthonormal basis function O ′is defines as « add reference » $$O_{m}^{'}=\sum_{k=1}^{m}a_{m k}\cdot O_{k}\tag{1}$$ $$(46)$$ $$(47)$$ Here amk are the elements of triangular matrix A ∈ ℜNu×Nu For m = 1 $$O_{1}^{'}=a_{11}\cdot O_{1}\ \ \ \ a_{11}={\frac{1}{\|O\|}}={\frac{1}{r(1,1)}}$$ for 2 ≤ m ≤ Nu, we first obtain $$c_{i}=\sum_{q=1}^{i}a_{i q}\cdot r(q,m)$$ aiq · r(*q, m*) (48) for 1 ≤ i ≤ m − 1. Second, we set bm = 1 and get $$b_{jk}=-\sum_{i=k}^{m=1}c_{i}\cdot a_{ik}\tag{1}$$ $$(48)^{3}$$ $$(49)$$ $$\left(50\right)$$ for 1 ≤ k ≤ m − 1. Lastly we get the coeffeicent Amk for the triangular matrix A as $$a_{m k}=\frac{b_{k}}{[r(m,m)-\sum_{i=1}^{m-1}c_{i}^{2}]^{2}}\tag{1}$$ Once we have the orthonormal basis functions, the linear mapping weights in the orthonormal system can be found as $$w^{'}(i,m)=\sum_{k=1}^{m}a_{m k}c(i,k)\tag{1}$$ $$(51)$$ The orthonormal system's weights W ′can be mapped back to the original system's weights W as $$w(i,k)=\sum_{m=k}^{N_{u}}a_{mk}\cdot w_{o}^{{}^{\prime}}(i,m)\tag{52}$$ In an orthonormal system, the total training error can be written from (4) as $$E=\sum_{i=1}^{M}\sum_{p=1}^{N_{u}}[\langle t_{p}(i),t_{p}(i)\rangle-\sum_{k=1}^{N_{u}}(w^{{}^{\prime}}(i,k))^{2}]\tag{53}$$ Orthogonal least square is equivalent of using the QR decomposition Golub & Van Loan (2012) and is useful when equation (45) is ill-conditioned meaning that the determinant of R is 0.
Review 1: Summary: This paper develops an optimisation method for a shallow neural network network, with only a single hidden layer. This results in three weight matrices, one connecting from inputs to a hiddden layer, W, another connecting from hidden to outputs, $W_{oh}$, and a linear skip-layer connection from inputs to outputs, $W_{oi}$. This method mainly focuses on optimising $W$, since the other two matrices are trivial to optimise by ordinary least squares, for a fixed $W$. This method first parametrises $W$ by $W‘.diag(d)$, where each row of $W$ is scaled by an independent parameter. This vector effectively scales each dimension of the input. Then, we find the optimal d based on a second-order Newton's method to minimise the error. Since d is only a 1-d vector, the cost is smaller than a full second-order method on $W$. After finding $d$, $W'$ is updated with a slightly modified gradient descent. The authors conducted extensive experiments on multiple low-dimensional datasets, showing good performance and low computational cost of the proposed algorithms. In addition, the last experiment takes frozen ResNets trained on images, and slot in the shallow network in place of the last layer. The proposed algorithm again performs very well. Strengths and Weaknesses: # Upsides: 1. This paper is technically solid, focusing on a very well defined problem. 2. Although the written quality is of great concern (see below), I am able to understand the main point well and the method itself is reasonable and seems correct. 2. The experiments are very comprehensive, with sufficient details needed to replicate the results. 3. The experimental results are informative and shows useful metrics. # Downsides: ## 1. Poor written quality. This paper has all written issues one can enumerate. I do not criticise the technical quality of the paper, but the poor written quality pushes me to think that the authors are not careful enough and maybe even don't care about this paper. 1. All references are not enclosed in parenthesis. The authors could have never read the final submitted PDF. 2. There are gross misrepresentations of common practices in the deep learning community, such as saying line search is a common method for weight optimisation: it is not, the most common optimiser is still gradient descent or variants of it. This could be problem caused by different research fields, but in my opinion, readers of TMLR would not agree with these statements. 3. There are many overloaded uses of symbols, and the explanations do not distinguish two symbols that could mean the same thing but not entire certain. 4. The equations may be wrong to the extend that some details of the method is not comprehensible for a generic reader. 5. There are typos and unfilled placeholders in the Appendix. 6. There are definitely too much redundant details in the main paper, and some introductory contents are not clear but also not tightly related to the main contributions. I personally had great trouble understanding the first few pages, but then this did not seem to make it difficult to see the main points of the paper. ## 2. Very low significance. Even though TMLR does not emphasise on impact, the paper focuses on such a tiny problem and setup that the chance of this getting adopted by any other researcher is almost zero. The authors need to either substantially expand the problem and network complexity, or demonstrate good performance on critical applications. Simply optimising a pre-trained ResNet would be useful, but it is unclear how much improvement there is by adopting this method. 1. The method as presented only applies to one set of weights in the most shallow network possible. It is possible to also parametrise all weights of more modern architectures, by parametrising the weights as done in this paper, and optimise the scaling part by second-order method. That way, the paper would be much more relevant to today's deep learning community. 2. Learning if full-batch. Can a minibatch version of this algorithm work? If not, again, this paper is less attractive to the deep learning community. 3. The ResNet experiment did not quote the original performance. Does adding the three-layer shallow network help? If so, then I would say more readers will start to care about this method. 4. The authors discussed the one benefit of the algorithm as not requiring whitening of the input data. This is such a tiny contribution, if any. Whitening is very simple to implement, if useful can be discovered by gradient methods. ## 3. Detailed comments 1. Can the authors clarify if the symbols A, C and R refer to the same mathematical objects throughout the paper? For example, do the C matrices in (11) and (33) mean the same thing? 2. Equation (12) is very confusing. The preceding text mentions that this is for the $p$'th data patter, but then the equation contains a sum of $p$. Is G simply the mean of the data repeated and tiled on top of each other? And why is this the correct gradient update? The error doesn't even show up in this equation. 3. What's the purpose of Section 3.1? I could not understand the purpose of this section, but later had no trouble understanding the later parts. In particular, why do we need to care about two networks that differ only by a linear transformation? 4. The Lemmas presented are either trivial or unclear for a reader to judge on the significance. Specifically, why is talking Lemma-3 important? This should seem obvious to any undergraduate student in a quantitative field. Also, for Lemma-2, in what sense are the two updates not equivalent? Why should we care when they are or are not equivalent? 5. Most of the detailed gradient expressions are unnecessary. (27)-(31) are simply different ways of writing down the Hessian, and I don't think they add any value for comprehension or implementation. 6. Section 4.2: I would love to believe these calculations are correct, but please give an information breakdown of each term. 7. There are huge numbers in the MSE reported in Table 2: on the order of 10^7? This does not sound right to me. If possible, please consider reporting the error divided by the variance of the ground-truth, so that the numbers are dimensionless. 8. Above eqn(46), please fix the reference placeholder. 9. Also, its "convenience", not "convineance" around the same place. 9. What's C in (32)? Can you give an example or such a matrix? If this is a main contribution of the paper, the reviewer would like to better understand it. To put it simply, I get the main idea, it is reasonable, and the experiments seem okay as well. But the presentation has a lot of room for improvement that I cannot recommend for acceptance. Requested Changes: # Critical: The authors should ideally address/fix all the points in the poor written quality part of this review. To address the significance issue, either of the following would be good to the reviewer: 1. quoting the original performance of the trained ResNet, and show that there is an improvement using the proposed method on the 1-hidden layer network. 2. Parametrise the weights of the ResNet by a scaling component, and optimise this by a second-order method just as the authors have done with d. The authors should also provide satisfactory answers to ALL the points in the detialed comments EXCEPT c, d and e. Given the amount of changes, I would give a strong rejection as it currently stands. I'd encourage the authors to consider re-submit it when the written quality is improved, becaues the technical idea could be of interest. The authors could benefit from taking a look at accpeted papers from TMLR and reformat the paper for this audience. e.g. go for a short submission and move redundant materials to the Appendix. Broader Impact Concerns: Not very applicable here. ================================================== Review 2: Summary: This paper studies the effects of linear input transformations on the training of Multi-Layer Perceptron (MLP) using the newly developed OIG-HWO algorithm. This algorithm focuses on optimizing the input gains or coefficients that pre-scale the input data for MLPs. The OIG-HWO algorithm emerges as a promising method in the realm of shallow learning architectures. It not only offers competitive performance metrics when compared with SCG and OIG-BP but also promises a reduction in computational demands. The authors have considered extensive testing across various domains and real-world scenarios in the experiments to compare the OIG-HWO algorithm with other novel methods. Strengths and Weaknesses: The authors base their methodology on established works (Malalur & Manry, 2009; Yu et al., 2004; Tyagi et al., 2022a), giving the study a solid foundation. The study not only introduces the OIG-HWO algorithm but also provides a thorough analysis of its structure, performance, and limitations. This paper has presented lots of experiments to compare the new algorithm's performance with existing algorithms and offer readers a contextual understanding of its advantages and effectiveness. Full-batch gradient descent, mini-batch GD, and Adam are the most useful and common optimizers in deep learning. I do not know the motivation for introducing the OIG-HWO algorithm and how this new algorithm is related to common optimizers. Further experiments with different architectures, different dimensions, and scaling are needed to support this motivation. Requested Changes: 1. The citation format is not correct, especially in the introduction section. Please double-check and revise it. 2. The abstract needs to be revised. In the abstract, the authors mentioned the PWL activation function in the hidden layer, as their main contribution, which is not extensively explained in the main text. I think the authors need to rewrite this abstract. 3. The notations in section 2 are very ambiguous. For instance, we usually do not use $o_p(k)$ as a function since this is related to the order in probability notation and in theoretical computer science. 4. Typos between Lemma 1 and Lemma 2. 5. Where are the proofs of Lemmas 1-3? Broader Impact Concerns: I don't think the submission requires a broader impact statement. ================================================== Review 3: Summary: The paper combines existing optimization algorithms into one and tests the resulting one on several simple tasks and shallow neural networks. Strengths and Weaknesses: **Potential issues** > We establish in Malalur & Manry (2009) that input weights for MLP-1 and MLP-2 are related as after Eq. 18. might be a breach of anonymity. **Strengths** The proposed algorithm seems to work better than some other existing ones on several tasks. **Weaknesses** 1. Writing My main issue with the paper is poor writing (but because of it I might be missing some other problems). Overall writing is poor. I found many sentences hard to parse, but I cannot pinpoint exact problems. The abstract specifically has very little to do with the actual paper. It talks about CNNs, but all experiments are done for MLPs (even the resnet18 experiment doesn't actually train a full resnet18 with the proposed algorithm, just the output linear layer). I can sort of see how the discussion about activation functions in the abstract follows from the proposed algorithm, but the main text doesn't explain it well (or at all?). 2. Abstract claims > We show that these PWL activations work much better than ReLU activations in our networks for convolution neural networks and multilayer perceptrons. Result comparison in PyTorch for shallow and deep CNNs are given to further strengthen our case. The paper really proposes a different optimization algorithm, not a different activation function (although you can view it as one). A fair comparison would be to take several architectures with ReLU/PWL trained with the same algorithm (even just SGD). 3. No SGD/Adam/AdamW/etc comparison I don't understand why the method wasn't compared to the very standard first order methods. 4. Overall motivation of the work We have very well-performing first-order algorithms for training neural networks. Since the proposed algorithm is not compared to those, I don't see the reason to use it over a computationally cheap SGD/Adam. Requested Changes: 1. The abstract should be fully re-written to match the paper and the claims. 2. Ideally writing should be improved. 3. There should be a comparison with standard first-order methods like SGD/Adam/AdamW/etc. 4. Sec. 4.2 would benefit a lot from the big-O notation for algorithm scaling. 5. The paper uses MNIST/CIFAR10 and some other datasets for a resnet18+adam. The could be used in a smaller network with the discussed algorithms too. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Reject Comment: Main reasoning provided in section 'Claims and Evidence'. Summary: reviewers and AE criticise quality of writing and presentation of the paper (to a degree where the main claims are no longer supported by "accurate, convincing and clear evidence"), and (to a lesser degree) a lack of SGD as THE baseline to compare to and discuss. All reviewers suggest rejection; the authors did not respond to the reviews and did not upload a revised manuscript. Overall a clear reject for me. ==================================================
# Additive Poisson Process: Learning Intensity Of Higherorder Interaction In Poisson Processes Anonymous authors Paper under double-blind review ## Abstract We present the *Additive Poisson Process* (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional space. Our model combines the techniques from information geometry to model higher-order interactions on a statistical manifold and in *generalized* additive models to use lower-dimensional projections to overcome the effects of the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower-dimensional projections to the distribution modeled by an intensity function in the Poisson process. Our empirical results show that our model effectively uses samples observed in lower dimensional space to estimate a higher-order intensity function with sparse observations. ## 1 Introduction The Poisson process is a counting process used in a wide range of disciplines such as spatial-temporal sequential data in transportation (Zhou et al., 2018; 2021), finance (Ilalan, 2016) and ecology (Thompson, 1955) to model the arrival rate by learning an intensity function. For a given time interval, the integral of the intensity function represents the average number of events occurring in that interval. The intensity function can be generalized to multiple dimensions. However, for most practical applications, learning the multi-dimensional intensity function is a challenge due to the sparsity of observations. Despite the recent advances in Poisson processes, current Poisson process models are unable to learn the intensity function of a multi-dimensional Poisson process. Our research question is, "Are there any good ways of approximating the high dimensional intensity function?" Our proposal, the *Additive Poisson Process* (APP), provides a novel solution to this problem. Throughout this paper, we use a running example in a spatial-temporal setting. Say we want to learn the intensity function for a taxi to pick up customers at a given time and location. For this setting, each event is multi-dimensional; that is, (*x, y, W*), where a pair of x and y represents two spatial coordinates and W represents the day of the week. In addition, observation time t is associated with this event. Figure 2b visualizes this problem. In this problem setup, if we would like to learn the intensity function at a given location (*x, y*) and day of the week W, the naïve approach would be to learn the intensity at (*x, y, W*) directly from observations. However, this is difficult because observations are usually sparse; that is, there could be only a few events for a given location and day, or in extreme cases, no direct observation of the event at all, which makes it difficult for any model to learn the low-valued intensity function. The research question that we are trying to solve is, "are there good ways of estimating a high dimensional intensity function with sparse observations?". To address this problem, we exploit information in lower-dimensional space; for example, the marginalized observations at the location (*x, y*) across all days of the week, or on the day W at all locations. This information can be included in the model to improve the estimation of the joint intensity function. Using the information in lower-dimensional space provides a structured approach to include prior information based on the location or day of the week to improve the estimation of the joint intensity function. For example, a given location could be a shopping center or a hotel, where it is common for taxis to pick up passengers, and ![1_image_0.png](1_image_0.png) Figure 1: Partial order structured sample space (Ω, ⪯) with |D| = 3. Each node represents a state and the directed edge represents the direction of the partial ordering. therefore we expect more passengers at this location. There could also be additional patterns that could be uncovered based on the day of the week. We can then use the observations of events to update our knowledge of the intensity function. In this paper, we propose a novel framework to learn the higher-order interaction effects of intensity functions in Poisson processes. Our model combines the techniques introduced by Luo & Sugiyama (2019) to model higher-order interactions between Poisson processes and by Friedman & Stuetzle (1981) in *generalized additive* models to learn the joint intensity function using samples in a lower dimensional space. Our proposed approach decomposes a multi-dimensional Poisson process into lower-dimensional representations, and use data in the lower-dimensional space to improve the estimation of the joint intensity function. This is different from the traditional approaches where only the joint occurrence of events is used to learn the joint intensity. We first show the connection between generalized additive models and Poisson processes, and then provide the connection between generalized additive models and the *log-linear model* (Agresti, 2012), which has a well-established theoretical background in information geometry (Amari, 2016). We draw parallels between the formulation of the generalized additive models and the binary log-linear model on a partially ordered set (poset) (Sugiyama et al., 2017). The learning process in our model is formulated as a convex optimization problem to arrive at a unique optimal solution using natural gradient, which minimizes the Kullback-Leibler (KL) divergence from the sample distribution in a lower-dimensional space to the distribution modeled by the learned intensity function. This connection provides remarkable properties to our model: the ability to learn higher-order intensity functions using lower-dimensional projections, thanks to the *Kolmogorov-Arnold* representation theorem. This property makes it advantageous to use our proposed approach for cases where there are no observations, missing samples, or low event rates. Our model is flexible because it can capture the interaction effects between events in a Poisson process as a partial order structure in the log-linear model and the parameters of the model are fully customizable to meet the requirements of the application. Our empirical results show that our model effectively uses samples projected onto a lower dimensional space to estimate the higher-order intensity function. More importantly, our model is also robust to various sample sizes. ## 2 Related Work We discuss related work of our task of estimating intensity functions in Poisson processes, which can be roughly divided into three approaches: density based, Bayesian inference based, and factorization based estimation. We review existing approaches for each of the three categories and discuss the relationship to our approach in the following. ## 2.1 Density Estimation Density estimation techniques such as Kernel density estimation (KDE) (Rosenblatt, 1956) learn the joint intensity function by using kernels as weights. KDE learns the joint intensity function by using information ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) Figure 2: (a) A visualization of the truncated parameter space to approximate the joint intensity function with |D| = 4 and k = 2. θ and η are model parameters described in Section 3.3 and ηˆ is the input to the model. (b) A visualization of the input datasets, where the blue points represent events with two spatial dimensions and one-time dimension. from lower dimensions. However, KDE is unable to scale to learn dimensions because it suffers from the curse of dimensionality, which means that it requires a large size of samples to build an accurate model. In addition, the complexity of the model expands exponentially with respect to the number of dimensions, which makes it infeasible to compute. Our approach applies similar concepts to density estimation techniques by learning from low dimensional projections to approximate the joint intensity function. ## 2.2 Bayesian Inference For Poisson Process Learning an intensity function from sparse high-dimensional datasets is a challenging problem. It often comes with a trade-off between computational complexity and numerical accuracy. For applications that are sparse with higher dimensionality, numerical accuracy is often sacrificed to approximate the intensity function. Bayesian approaches, such as using a mixture of beta distributions with a Dirichlet prior (Kottas, 2006; Kottas & Sansó, 2007) and Reproducing Kernel Hilbert Space (RKHS) (Flaxman et al., 2017), have been proposed to quantify the uncertainty for the intensity function. However, these approaches are often nonconvex, making it difficult to obtain the globally optimal solution. Besides, if observations are sparse, it is hard for these approaches to learn a reasonable intensity function. In addition, the naïve approach using MCMC has cubic complexity O(n 3) for each dimension and increases exponentially with respect to additional input dimension, that is, O((n 3) d), where n refers to the number of observations and d is the number of dimensions. One example of this approach is using a Dirichlet process mixture of Beta distribution as a prior for the intensity function of the Poisson process (Kottas, 2006; Kottas & Sansó, 2007). The solution from MCMC is often accurate and asymptotically converges to the true posterior intensity function. However, due to its computational complexity, it is infeasible to estimate any high-dimensional intensity function. More recent techniques have attempted to scale up these approaches by using Gaussian processes as a functional prior to the intensity function (Samo & Roberts, 2015). However, the model complexity is O(n 2k) for each dimension and is exponential with respect to the number of input dimensions, which is still infeasible to estimate any high-dimensional intensity function. Variational inference (Lloyd et al., 2015) approaches that can be used to make Bayesian inference of Poisson process much more efficient scaling it up to the linear complexity O(n) for each dimension. One of the most notable works in this area is using Fourier Features in the Cox Process (John & Hensman, 2018). However variational inference is not guaranteed to asymptotically converge to the true posterior distribution. Because variational inference does not have any theoretical guarantees, it is likely to fail in extremely high dimensions and sparse observations. Our approach uses the discretization approach to scale up the model to higher dimensions. We use a graph (partial order) structure to allow the flexibility for domain expertise to specify how each dimension is treated and which interaction effects should be included in the model. Unlike variational inference-based approaches, our estimation procedure of the intensity function has theoretical convergence which is based on the Kolmogorov-Arnold theorem. ## 2.3 Poisson Factorization Our work is closely related to Poisson Factorization (Chi & Kolda, 2012), where random variables in a tensor are represented with a Poisson distributed or Poisson process likelihood. The tensor is usually used to represent some high-dimensional datasets such as contingency tables or other collections of counting datasets, which are often large and sparse. The objective of Poison Factorization is to decompose the high-dimensional sparse matrices into lower-dimensional space, where we can find some meaningful latent structure. The effectiveness of Poisson factorization for high dimensional datasets makes it ideal to analyze spatialtemporal problems consisting of sparse count data. One example of this work is Bayesian Poisson Tucker decomposition (BPTD) (Schein et al., 2016), where a dataset of interaction events is represented as a set of N events, each of which consists of a pair of a token ei that encodes certain features and time, that is, (ei, ti). BPTD uses an MCMC inference algorithm to learn the latent structure, which is based on an extension of stochastic block models (SBM) (Nowicki & Snijders, 2001) with a Poisson likelihood. Our approach provides a generalization of this idea of Poisson Factorization by using Legendre tensor decomposition (Sugiyama et al., 2018) and demonstrating its ability on a spatial-temporal problem. Our optimization is much more efficient as it is guided by gradients to minimize the KL divergence. Our approach also contains a graph structure that allows domain experts to encode certain properties into the model. ## 3 Formulation We start this section by introducing the technical background of the Poisson process and its extension to a multi-dimensional Poisson process. We then introduce the Generalized Additive Model (GAM) and its connection to the Poisson process. This is followed by presenting our novel framework, called Additive Poisson Process (APP), which is our main technical contribution and has a tight link to the Poisson process modeled by GAMs. We show that the learning of APP can be achieved via convex optimization using natural gradient. The Poisson process is characterized by an intensity function λ : R → R. An inhomogeneous Poisson process is an extension of a homogeneous Poisson process, where the arrival rate changes with time. The process with time-changing intensity λ(t) is defined as a counting process N(t), which has an independent increment property. For any t ≥ 0 and infinitesimal interval δ ≥ 0, probability of events count is p(N(t + δ) − N(t) = 0) = 1 − δλ(t) + o(δ), p(N(t + δ) − N(t) = 1) = δλ(t) + o(δ), and p(N(t + δ) − N(t) ≥ 2) = o(δ), where o(·) denotes little-o notation (Daley & Vere-Jones, 2007). To take (inhomogeneous) multi-dimensional attributes into account, we consider multiple intensity functions, each of which is given as λJ : R → R associated with a subset J of the domain of possible states D, which is always assumed to be finite throughout the paper. Each J ⊆ D determines the condition of the occurrence of the event. To flexibly consider any combination of possible states, D is composed of possible states across all dimensions. For example, in the taxi pick up example in Introduction, if Dx, Dy, and DW are (discretized) disjoint domains for x, y, and W, respectively, they are gathered as a single set D = Dx ∪ Dy ∪ DW . For each J ⊆ D, the likelihood of this model is given by $$p\left(\left\{t_{i}\right\}_{i=1}^{N}\mid\lambda_{J}\left(t\right)\right)=\exp\left(-\int\lambda_{J}(t)d t\right)\prod_{i=1}^{N}\lambda_{J}\left(t_{i}\right),$$ $$\left(1\right)$$ $$(2)$$ where t1, t2, *. . .* , tN ∈ R are realization of timestamps. We define the functional prior on λJ (t) as $$\lambda_{J}(t):=g\left(f_{J}(t)\right)=\exp\left(f_{J}(t)\right).$$ λJ (t) := g (fJ (t)) = exp (fJ (t)). (2) The function g(·) is a positive function to guarantee the non-negativity of the intensity which we choose to be the exponential function, and our objective is to learn the function fJ (·). The log-likelihood of the multi-dimensional Poisson process with the functional prior is described as $$\log p\left(\left\{t_{i}\right\}_{i=1}^{N}\left|\lambda_{J}\left(t\right)\right.\right)=\sum_{i=1}^{N}f_{J}(t_{i})-\int\exp\left(f_{J}\left(t\right)\right)dt.\tag{1}$$ $$(3)$$ In the following sections, we introduce the *generalized additive models* and propose to model it by the log-linear model to learn fJ (t) and the normalizing term Rexp (fJ (t)) dt. ## 3.1 Generalized Additive Model We present the connection between Poisson processes and the Generalized Additive Model (GAM) proposed by Friedman & Stuetzle (1981). The GAM projects higher-dimensional features into lower-dimensional space to apply smoothing functions to build a restricted class of non-parametric regression models. GAM is less affected by the curse of dimensionality compared to directly using smoothing in a higher-dimensional space. For a given set of processes J ⊆ D, the traditional GAM using one-dimensional projections is defined as $$\log\lambda_{J}(t)=\sum_{j\in J}f_{j}(t)-\beta_{J},$$ $$\mathbf{\Sigma}_{j}^{\mathrm{T}}$$ with some smoothing function fj . In this paper, we extend it to include higher-order interactions between features in GAM by introducing terms that represent the interaction effects between events. The k*-th order GAM* is defined as $$\log\lambda_{J}(t)=\sum_{j\in J}f_{\{j\}}(t)+\sum_{j_{1},j_{2}\in J}f_{\{j_{1},j_{2}\}}(t)+\cdots+\sum_{j_{1},\ldots,j_{k}\in J}f_{\{j_{1},\ldots,j_{k}\}}(t)-\beta_{J}=\sum_{I\subseteq J,\,|I|\leq k}f_{I}(t)-\beta_{J},$$ The function fI : R → R is a smoothing function to fit the data, and the normalization constant βJ for the intensity function is obtained as βJ =RλJ (t)dt =Rexp(PI⊆J, |I|≤k fI (t) )dt. The definition of the additive model is in the same form as Equation 3. In particular, if we compare Equation 3 and Equation 4, we can see that the smoothing function f in Equation 3 is realized as the summation over lower-dimensional projections in Equation 4. Learning of a continuous function using lower-dimensional projections is well known because of the Kolmogorov-Arnold representation theorem, which states that: Theorem 3.1 (Kolmogorov–Arnold Representation Theorem (Braun & Griebel, 2009; Kolmogorov, 1957)). Any multivariate continuous function can be represented as a superposition of one–dimensional functions; that is, $$f\left(t_{1},\ldots,t_{n}\right)=\sum_{q=1}^{2n+1}f_{q}\left[\sum_{p=1}^{n}g_{q,p}\left(t_{p}\right)\right].$$ Braun (2009) showed that the GAM is an approximation to the general form presented in KolmogorovArnold representation theorem by replacing the range q ∈ {1*, . . . ,* 2n + 1} with I ⊆ J and the inner function gq,p by the identity if q = p and zero otherwise, yielding fJ (t) = PI⊆J fI (t). Interestingly, the canonical form for additive models in Equation 4 can be rearranged to be in the same form as Kolmogorov-Arnold representation theorem. By letting f(t) = PI⊆J fI (t) = g −1(λJ (t)) and g(·) = exp(·), $$\lambda_{J}(t)=\frac{1}{\exp\left(\beta_{J}\right)}\exp\left(\sum\nolimits_{I\subseteq J}f_{I}\left(t\right)\right)\propto\exp\left(\sum\nolimits_{I\subseteq J}f_{I}\left(t\right)\right),\tag{5}$$ where we assume fI (t) = 0 if |I| > k for the k-th order model and 1/ exp(βJ ) is the normalization term for the intensity function. Based on the Kolmogorov-Arnold representation theorem, generalized additive models are able to learn the intensity of the higher-order interaction between Poisson processes by using projections into lower dimensional spaces. The log-likelihood function for a kth-order model is obtained by $$\log p\left(\{t_{i}\}_{i=1}^{N}|\lambda_{J}\left(t\right)\right)=\sum_{i=1}^{N}\ \ \sum_{I\subseteq J,\ \ |I|\leq k}f_{I}\left(t_{i}\right)-\beta^{\prime},\tag{6}$$ $\beta^{\prime}$ - $\int\lambda_{J}\left(t\right)$ - $\sum_{I\subseteq J,\ \ |I|\leq k}\beta$ - $\sum_{I\subseteq J,\ \ |I|\leq k}\beta$ - $\sum_{I\subseteq J,\ \ |I|\leq k}\beta$ - $\sum_{I\subseteq J,\ \ |I|\leq k}\beta$ - $\sum_{I\subseteq J,\ \ |I|\leq k}\beta$ - \(\sum_{I\subseteq J,\ \ $$\left(7\right)$$ where β ′is a constant given by β ′ =RλJ (t)dt +PI⊆J βI . In the following subsection, we introduce a loglinear formulation equipped with partially ordered sample space, which aligns with the GAM formulation in Equation 5. ## 3.2 Additive Poisson Process We introduce our key technical contribution in this section, a log-linear formulation called the *additive* Poisson process to estimate the parameters for the higher-order interactions in Equation 6. We begin by discretizing the time window [0, T] for the input of λ into M bins and treat each bin as a natural number τ ∈ [M] = {1, 2*, . . . , M*} for each process. The discretization avoids the need to compute the intractable integral of β ′in the likelihood function in Equation 6. The discretization approach tends to perform better in high-dimension compared to the alternative approaches such as variational inference. We assume that M is predetermined by the user. First, we introduce a structured space for the Poisson process to incorporate interactions between processes. Let Ω = { (J, τ ) | J ⊆ D, τ ∈ [M] *} ∪ {*(⊥, 0)}, where the symbol ⊥ denotes the least element in our partial order structure, which is required to normalize probabilities in the resulting log-linear model. We define the *partial order* ⪯ (Davey & Priestley, 2002) on Ω as $\omega=(J,\tau)\preceq\omega^{\prime}=(J^{\prime},\tau^{\prime})\iff J\subseteq J^{\prime}$ and $\tau\leq\tau^{\prime},\ \mbox{for each}\ \omega,\ \omega^{\prime}\in\Omega,$ and (⊥, 0) ⪯ ω for all ω ∈ Ω, which is illustrated in Figure 1. The relation J ⊆ J ′is used to model any-order interactions (Amari, 2016, Section 6.8.4) between Poisson processes and each τ in (*J, τ* ) represents the autoregressive component ("time") in our model. Each node ω in the partially ordered set (poset) 1represents the state of the sample space and the arrows in Figure 1 represent the partial order relationship between two nodes2; that is, if ω → ω ′, then ω ⪯ ω ′. Intuitively, the greatest node for each τ ∈ [M], which is ({1, 2, 3}, τ ) in Figure 1, represents the multidimensional Poisson process. Other nodes represent projections onto lower-dimensional space that correspond to the marginalized observations; for example, {{1}, {2}, {3}} and {{1, 2}, {1, 3}, {2, 3}} represent the firstand second-order processes. Using our example in Introduction, where we wanted to estimate the intensity function of a pick-up event of a taxi, {1} and {2} correspond to spatial coordinates x and y, respectively, and {3} to the day of the week W, and τ represents the (discretized) observation time. We can then update our belief to model the second-order intensity function using observations of the second order events. For example, {1, 2}, {1, 3}, {2, 3} represents an event occurring at {x, y}, {*x, W*}, and {*y, W*}. We can then continue this process to an arbitrary order of interactions. Later on, in this section, we introduce the mathematics to estimate the higher-order function using a restricted number of lower-dimensional projections. On any set equipped with a partial order, we can introduce a *log-linear model* (Sugiyama et al., 2017). Let us assume that a parameter domain S ⊆ Ω is given. For a partially ordered set (Ω, ⪯), the log-linear model with parameters (θs)s∈S is introduced as $$\log p(\omega;\theta)=\sum\nolimits_{s\in{\mathcal{S}}}{\bf1}_{[s\preceq\omega]}\theta_{s}-\psi(\theta)$$ 1[s⪯ω]θs − ψ(θ) (8) for each ω ∈ Ω, where 1[·] = 1 if the statement in [·] is true and 0 otherwise, and ψ(θ) ∈ R is the partition function uniquely obtained as $$\psi(\theta)=\log\sum_{\omega\in\Omega}\exp\Big(\sum_{s\in{\mathcal{S}}}{\bf1}_{[s\leq\omega]}\theta_{s}\Big)=-\theta_{(\bot,0)}.$$ 1In information geometry, this corresponds to *hypergraphs* which have *simplicial complex* structure Ay et al. (2018, Section 2.9). 2This graph structure should not be confused with the graph structure studied in graphical models, where the nodes typically represent a random variable and the arrows represent the relationship between the two random variables. $$({\boldsymbol{\delta}})$$ Algorithm 1 Additive Poisson Process (APP) 1: **Function** APP({ti} N i=1, S, M, h): 2: Initialize Ω with the number M of bins 3: Apply Gaussian Kernel with bandwidth h on {ti} N i=1 to compute pˆ 4: Compute ηˆ = (ˆηs)s∈S from pˆ 5: Initialize θ = (θs)s∈S (randomly or θs = 0) 6: **repeat** 7: Compute p using the current θ = (θs)s∈S 8: Compute η = (ηs)s∈S from p 9: ∆η ← η − ηˆ 10: Compute the Fisher information matrix G using Equation 11 11: θ ← θ − G−1∆η 12: **until** convergence of θ = (θs)s∈S 13: **End Function** A special case of this formulation coincides with the density function of the *Boltzmann machines* (Sugiyama et al., 2018; Luo & Sugiyama, 2019). Here there is a clear correspondence between the log-linear formulation and that in the form of KolmogorovArnold representation theorem in Equation 5 if we rewrite Equation 8 as $$p(\omega;\theta)=\frac{1}{\exp\psi(\theta)}\exp\left(\sum_{s\in\mathcal{S}}\mathbf{1}_{\{s\leq\omega\}}\theta_{s}\right)\propto\exp\left(\sum_{s\in\mathcal{S}}\mathbf{1}_{\{s\leq\omega\}}\theta_{s}\right).$$ . (9) We call this model with (Ω, ⪯) defined in Equation 7 the additive Poisson process, which represents the intensity λ as the joint distribution across all possible states. The intensity λ of the multi-dimensional Poisson process given via the GAM in Equation 5 is fully modeled (parameterized) by Equation 8 and each intensity fI (·) is obtained as θ(I,·). To consider the k-th order model, we consistently use the parameter domain S, given as S = { (J, τ ) ∈ Ω | |J| ≤ k }, where k is an input parameter to the model that specifies the upper bound of the order of interactions. This means that θs = 0 for all s /∈ S. Note that our model is well-defined for any subset S ⊆ Ω and the user can use an arbitrary domain in applications. A visualization of the truncated parameter space is shown in Figure 2a. For a given J ⊆ D and each bin τ with ω = (*J, τ* ), the empirical probability pˆ(ω) of input observations is given as $$\tilde{p}(\omega)=\frac{1}{Z}\sum_{I\subseteq J}\sigma_{I}(\tau),\qquad Z=\sum_{\omega\in\Omega}\tilde{p}(\omega),\qquad\sigma_{I}(\tau):=\frac{1}{Nh_{I}}\sum_{i=1}^{N}K\left(\frac{\tau-t_{i}}{h_{I}}\right).\tag{10}$$ $$({\mathfrak{g}})$$ for each discretized state ω = (*J, τ* ). The function σI performs smoothing on time stamps t1*, . . . , t*N , which is the kernel smoother proposed by Buja et al. (1989). The function K is a kernel and hI is the bandwidth for each projection I ⊆ D. We use the Gaussian kernel as K to ensure that probability is always nonzero, meaning that the definition of the kernel smoother coincides with the kernel estimator of the intensity function proposed by Schäbe (1993). The normalization constant for the intensity function can be computed by βJ =PI⊆J K( τ−ti hI ). ## 3.3 Optimization Given an empirical distribution pˆ defined in Equation 10, the task is to learn the parameter (θs)s∈S such that the distribution via the log-linear model in Equation 8 is as close to pˆ as much as possible. Let us define SS = {p | θs = 0 if s *̸∈ S}*, which is the set of distributions that can be represented by the log-linear model using the parameter domain S. Then the objective function is given as $$\operatorname*{min}_{p\in{\mathfrak{S}}_{S}}D_{\mathrm{KL}}({\hat{p}},p),$$ where DKL(ˆ*p, p*) = Pω∈Ω pˆlog(ˆp/p) is the KL divergence from pˆ to p. In this optimization, let p ∗ be the learned distribution from the sample with an infinitely large sample size and let p be the learned distribution for each sample. Then we can lower bound the uncertainty (variance) E[DKL(p ∗, p)] by |S|/2N (Barron & Hengartner, 1998). Thanks to the well-developed theory of *information geometry* (Amari, 2016) for the log-linear model (Amari, 2001), it is known that this problem can be solved by e*-projection*, which coincides with the maximum likelihood estimation and is always *convex optimization* (Amari, 2016, Chapter 2.8.3). The gradient with respect to each parameter θs is obtained by $${\frac{\partial}{\partial\theta_{s}}}D_{\mathrm{KL}}({\hat{p}},p)=\eta_{s}-{\hat{\eta}}_{s},\quad{\mathrm{where~}}\eta_{s}=\sum_{\omega\in\Omega}\mathbf{1}_{[\omega\succeq s]}p(\omega).$$ The value ηs is known as the expectation parameter (Sugiyama et al., 2017) and ηˆs is obtained by replacing p with pˆ in the above equation. If ηˆs = 0 for some s ∈ S, we remove s from S to ensure that the model is well-defined. Let S = {s1, . . . , s|S|} and θ = [θs1 , . . . , θs|S| ] T, η = [ηs1 , . . . , ηs|S| ] T. We can always use the natural gradient (Amari, 1998) as the closed-form solution of the Fisher information matrix is always available (Sugiyama et al., 2017). The update step is $$\theta_{\mathrm{next}}=\theta-\mathbf{G}^{-1}(\eta-{\hat{\eta}}),$$ $$(11)$$ where the Fisher information matrix G is obtained as $$g_{i j}=\frac{\partial}{\partial\theta_{s_{i}}\partial\theta_{s_{j}}}D_{\mathrm{KL}}(\hat{p},p)=\sum_{\omega\in\Omega}\mathbf{1}_{[\omega\geq s_{i}]}\mathbf{1}_{[\omega\geq s_{j}]}p(\omega)-\eta_{s_{i}}\eta_{s_{j}}\,.$$ . (11) Theoretically, the Fisher information matrix is numerically stable to perform a matrix inversion. However, computationally, floating point errors may cause the matrix to become indefinite. To overcome this issue, a small positive value is added along the main diagonal of the matrix. This technique is known as jitter and it is used in areas like Gaussian processes to ensure that the covariance matrix is computationally positive semi-definite (Neal, 1999). The pseudocode for APP is shown in Algorithm 1. The time complexity of computing line 7 is O(|Ω*||S|*). This means when implementing the model using gradient descent, the time complexity of the model is O(|Ω*||S|*2) to update the parameters in S for each iteration. For natural gradient, the cost of inverting the Fisher information matrix G is O(|S|3); therefore, the time complexity to update the parameters in S is O(|S|3 + |Ω*||S|*) for each iteration. The time complexity for natural gradient is significantly higher because of the requirement to invert the fisher information matrix; if the number of parameters is small, it is more efficient to use natural gradient because it requires significantly fewer iterations. However, if the number of parameters is large, it is more efficient to use gradient descent. ## 4 Experiments We perform experiments using two-dimensional synthetic data, higher-dimensional synthetic data, and realworld data to evaluate the performance of our proposed approach. Our code is implemented in Python 3.7.5 with NumPy version 1.8.2 and the experiments are run on Ubuntu 18.04 LTS with an Intel i7-8700 6c/12t with 16GB of memory 3. In experiments with synthetic data, we simulate random events using Equation 1. We generate an intensity function using a mixture of Gaussians, where the mean is drawn from a uniform distribution and the covariance is drawn from an inverted Wishart distribution. The intensity function is then the density function multiplied by the sample size. The synthetic data is generated by directly drawing a sample from the probability density function . An arbitrary number of samples is drawn from the mixture of Gaussians. We then run our models and compare with Kernel Density Estimation (KDE) (Rosenblatt, 1956), an inhomogeneous Poisson process whose intensity is estimated by a reproducing kernel Hilbert space 3The code is available in the supplementary material and will be publicly available online. ![8_image_0.png](8_image_0.png) (b) Sparse observations, N = 1, 000. Figure 3: KL Divergence for second-order Poisson process. The order of the model (color of the line) represents the k-th order model, i.e., k = 1 (blue) and k = 2 (orange). ![8_image_1.png](8_image_1.png) (b) Sparse observations, N = 1, 000, h = 0.2. Figure 4: Intensity function of two dimensional processes. Dots represent observations. Left: Represents marginalized observation of the first dimension. Middle: Represents marginalized observation of the second dimension. Right: The joint observation of dimensions 1 and 2. The order of the model (color of the line) represents the k-th order model, i.e., k = 1 (blue) and k = 2 (orange). formulation (RKHS) (Flaxman et al., 2017), and a Dirichlet process mixture of Beta distributions (DPbeta) (Kottas, 2006; Kottas & Sansó, 2007). The hyper-parameters M and h in our proposed model are selected using grid search and cross-validation. For situations where a validation set is not available, then h could be selected using a rule of thumb approach such as Scott's Rule (Scott, 2015), and M could be selected empirically from the input data by computing the time interval of the joint observation. ## 4.1 Experiments On Two-Dimensional Processes For our experiment, we use 20 Gaussian components and simulate a dense case with 100,000 observations and a sparse case with 1,000 observations within the time frame of 10 seconds. We consider that a joint event occurs if the two events occur 0.1 seconds apart. Figure 3a and Figure 3b compare the KL divergence between the first- and second-order models and plots in Figure 4 are the corresponding intensity functions. In the firstorder processes, both first- and second-order models have the same performance. This is expected, as both of the models can treat first-order interactions and are able to learn the empirical intensity function exactly, ![9_image_0.png](9_image_0.png) Figure 5: KL Divergence for fourth-order Poisson process. We selected four representative examples for our experimental results, full results are available in the supplementary material. The line color signifies the order of the model, i.e., k = 1 (blue), k = 2 (orange), k = 3 (green), and k = 4 (red). ![9_image_1.png](9_image_1.png) (b) Sparse observations, N = 105. Figure 6: Intensity function of higher dimensional processes. Dots represent observations. We have selected four representative examples for our experimental results, full results are available in the supplementary material. The order of the model (color of the line) represents the k-th order model, i.e., k = 1 (blue), k = 2 (orange), k = 3 (green), and k = 4 (red). which is the superposition of the one-dimensional projection of the Gaussian kernels on each observation. For the second-order process, the second-order model performs better than the first-order model because it is able to directly learn the intensity function from the projection onto the two-dimensional space. In contrast, the first-order model must approximate the second-order process using the observations from the first-order processes. In the sparse case, the second-order model performs better when the correct bandwidth is selected. Table 1 compares our approach APP with other state-of-the-art approaches. APP performs best for firstorder processes in both sparse and dense experiments. Experiments for RKHS and DP-beta were unable to complete running within two days for the dense experiment. In the second-order process, our approach was outperformed by KDE, while the second-order APP was able to outperform both RKHS and DP-beta processes for both sparse and dense experiments. Figures 3a and 3b show that KDE is sensitive to changes in bandwidth, which means that, for any practical implementation of the model, second-order APP with a Table 1: The lowest KL divergence from the ground truth distribution to the obtained distribution on two types of single processes ([1] and [2]) and joint process of them ([1,2]). APP-\# represents the order of the Additive Poisson Process. Missing values mean that the computation did not finish within two days. | Process | APP-1 | APP-2 | KDE | RKHS | DP-beta | | |-----------|---------|---------|---------|---------|-----------|---------| | [1] | 4.98e-5 | 4.98e-5 | 2.81e-4 | - | - | | | Dense | [2] | 2.83e-5 | 2.83e-5 | 1.17e-4 | - | - | | [1,2] | 2.98e-2 | 1.27e-3 | 6.33e-4 | 4.09e-2 | 4.54e-2 | | | [1] | 7.26e-4 | 7.26e-4 | 8.83e-4 | 1.96e-2 | 2.62e-3 | | | Sparse | [2] | 2.28e-4 | 2.28e-4 | 2.76e-4 | 2.35e-3 | 2.49e-3 | | [1,2] | 2.88e-2 | 1.77e-2 | 3.67e-3 | 1.84e-2 | 3.68e-2 | | | Process | APP-1 | APP-2 | KDE | RKHS | DP-beta | | |-----------|---------|---------|--------|--------|-----------|--------| | [T] | 714.07 | 714.07 | 713.77 | 728.13 | 731.01 | | | Jan | [W] | 745.60 | 745.60 | 745.23 | 853.42 | 790.04 | | [T,W] | 249.60 | 246.05 | 380.22 | 259.29 | 260.30 | | | [T] | 713.43 | 713.43 | 755.71 | 795.61 | 765.76 | | | Feb | [W] | 738.66 | 738.66 | 773.65 | 811.34 | 792.10 | | [T,W] | 328.84 | 244.21 | 307.86 | 334.31 | 326.52 | | | [T] | 716.72 | 716.72 | 733.74 | 755.48 | 741.28 | | | Mar | [W] | 738.06 | 738.06 | 816.99 | 853.33 | 832.43 | | [T,W] | 291.20 | 246.19 | 289.69 | 328.47 | 300.36 | | Sparse [1] **7.26e-4 7.26e-4** 8.83e-4 1.96e-2 2.62e-3 [2] **2.28e-4 2.28e-4** 2.76e-4 2.35e-3 2.49e-3 [1,2] 2.88e-2 1.77e-2 **3.67e-3** 1.84e-2 3.68e-2 Table 2: Negative test log-likelihood for the New York Taxi data. Single processes ([T] and [W]) and joint process of them ([T,W]). APP-# represents the order of the Additive Poisson Process. less sensitive bandwidth is more likely to learn a more accurate intensity function when the ground truth is unknown. ## 4.2 Experiments On Higher-Dimensional Processes We generate a fourth-order process to simulate the behavior of the model in higher dimensions. The model is generalizable to higher dimensions, but it is difficult to demonstrate results for processes higher than the fourth order. For our experiment, we generate an intensity function using 50 Gaussian components and draw a sample with the size of 107for the dense case and that with the size of 105for the sparse case. We consider the joint event to be the time frame of 0.1 seconds. We were not able to run comparison experiments with other models because they are unable to learn when there are no or few joint observations in third- and fourth-order processes. In addition, the time complexity is too high to learn from joint observations in first- and second-order processes because all the other models have their time complexity proportional to the number of observations. The time complexity for KDE is O(N D) for the dimensionality with D, while DP-beta is O(N2K), where K is the number of clusters, and RKHS is O(N2) for each iteration with respect to the sample size N, where DP-beta and RKHS are applied directly on the joint observation as they cannot use the projections in lower-dimensional space. KDE is able to make an estimation of the intensity function using projections in lower-dimensional space, but it was too computationally expensive to complete running the experiment. By contrast, our model is more efficient because the time complexity is proportional to the number of bins in our model. The time complexity of APP for each iteration is O(|Ω*||S|*), where |Ω| = MD and |S| =Pk c=1 D c . Our model scales combinatorially with respect to the number of dimensions. However, this is unavoidable for any model that directly takes into account the high-order interactions. For practical applications, the number of dimensions D and the order of the model k is often small, making it feasible to compute. In Figure 5a, we observe similar behavior in the model, where the first-order processes fit precisely to the empirical distribution generated by the Gaussian kernels. The third-order model is able to predict better on the fourth-order process. This is because the observation shown in Figure 6a is largely sparse and learning from the observations directly may overfit. A lower-dimensional approximation is able to provide a better result in the third-order model. Similar trends can be seen in the sparse case, as shown in Figure 5b, where a second-order model is able to produce better estimation in third- and fourth-order processes. The observations are extremely sparse, as seen in Figure 6b, where there are only a few observations or no observations at all to learn the intensity function. ## 4.3 Uncovering Common Patterns In The New York Taxi Dataset We demonstrate the capability of our model on the 2016 Green Taxi Trip dataset4, which is an open-source dataset with a CC0: Public Domain licenses. We are interested in finding the common pick-up patterns across Tuesdays and Wednesdays. We define a common pick-up time to be within 1-minute intervals of each other between the two days. We have chosen to learn an intensity function using the Poisson process for Tuesday and Wednesday and a joint process for both of them. The joint process uncovers the common pick-up patterns between the two days. We have selected to use the first two Tuesdays and Wednesdays in January 2016 as our training and validation sets and the Tuesday and Wednesday of the third week of January 2016 as our testing set. We repeat the same experiment for February and March. We show our results in Table 2, where we use the negative test log-likelihood as an evaluation measure. APP-2 has consistently outperformed all the other approaches for the joint process between Tuesday and Wednesday. In addition, for the individual process, APP-1 and -2 also showed the best result for February and March. We also observe similar results as the synthetic experiment where APP-1 and APP-2 have similar values because there is no higher-order information to capture. These results demonstrate the effectiveness of our model in capturing higher-order interactions between processes, which is difficult for the other existing approaches. ## 5 Conclusion We have proposed a novel framework, called *Additive Poisson Process* (APP), to learn the intensity function of the higher-order interaction between Poisson processes using observations projected into lower-dimensional spaces. We formulated our proposed model using the *log-linear model* and optimize it using information geometric structure of the distribution space. We drew parallels between our proposed model and generalized additive model and showed the ability to learn from lower dimensional projections via the Kolmogorov-Arnold representation theorem. Our empirical results show the superiority of our method when learning the higherorder interactions between Poisson processes and when there are no or extremely sparse joint observations. Our model is also robust to varying sample sizes. Our approach provides a novel formulation to learn the joint intensity function which typically has extremely low intensity. There is enormous potential to apply APP to real-world applications, where higher-order interaction effects need to be modeled such as transportation, finance, and ecology. ## References Alan Agresti. *Categorical Data Analysis*. Wiley, 3 edition, 2012. Shun-Ichi Amari. Natural gradient works efficiently in learning. *Neural Computation*, 10(2):251–276, 1998. Shun-Ichi Amari. Information geometry on hierarchy of probability distributions. *IEEE Transactions on* Information Theory, 47(5):1701–1711, 2001. Sun-Ichi Amari. *Information Geometry and Its Applications*. Springer, 2016. Nihat Ay, Paolo Gibilisco, and F Matus. *Information Geometry and Its Applications*. Springer, 2018. 4https://data.cityofnewyork.us/Transportation/2016-Green-Taxi-Trip-Data/hvrh-b6nb A. Barron and N. Hengartner. Information theory and superefficiency. *The Annals of Statistics*, 26(5): 1800–1825, 1998. Jürgen Braun. *An application of Kolmogorov's superposition theorem to function reconstruction in higher* dimensions. PhD thesis, Universitäts-und Landesbibliothek Bonn, 2009. Jürgen Braun and Michael Griebel. On a constructive proof of Kolmogorov's superposition theorem. *Constructive Approximation*, 30(3):653, 2009. Andreas Buja, Trevor Hastie, and Robert Tibshirani. Linear smoothers and additive models. *The Annals of* Statistics, 17(2):453–510, 1989. Eric C Chi and Tamara G Kolda. On tensors, sparsity, and nonnegative factorizations. *SIAM Journal on* Matrix Analysis and Applications, 33(4):1272–1299, 2012. Daryl J Daley and David Vere-Jones. *An Introduction to the Theory of Point Processes: Volume II: General* Theory and Structure. Springer, 2007. Brian A Davey and Hilary A Priestley. *Introduction to Lattices and Order*. Cambridge University Press, 2002. Seth Flaxman, Yee Whye Teh, and Dino Sejdinovic. Poisson intensity estimation with reproducing kernels. Electronic Journal of Statistics, 11(2):5081–5104, 2017. Jerome H Friedman and Werner Stuetzle. Projection pursuit regression. *Journal of the American Statistical* Association, 76(376):817–823, 1981. Deniz Ilalan. A poisson process with random intensity for modeling financial stability. *The Spanish Review* of Financial Economics, 14(2):43–50, 2016. ST John and James Hensman. Large-scale cox process inference using variational fourier features. In International Conference on Machine Learning, pp. 2362–2370. PMLR, 2018. Andrei Nikolaevich Kolmogorov. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. *Doklady Akademii Nauk*, 114(5):953–956, 1957. Athanasios Kottas. Dirichlet process mixtures of beta distributions, with applications to density and intensity estimation. In *Workshop on Learning with Nonparametric Bayesian Methods, 23rd International* Conference on Machine Learning (ICML), volume 47, 2006. Athanasios Kottas and Bruno Sansó. Bayesian mixture modeling for spatial poisson process intensities, with applications to extreme value analysis. *Journal of Statistical Planning and Inference*, 137(10):3151–3163, 2007. Chris Lloyd, Tom Gunter, Michael Osborne, and Stephen Roberts. Variational inference for gaussian process modulated poisson processes. In *International Conference on Machine Learning*, pp. 1814–1822. PMLR, 2015. Simon Luo and Mahito Sugiyama. Bias-variance trade-off in hierarchical probabilistic models using higherorder feature interactions. In *Proceedings of the 33rd AAAI Conference on Artificial Intelligence*, pp. 4488–4495, 2019. Radford M Neal. Regression and classification using gaussian process priors. *Bayesian Statistics*, 6:475–501, 1999. Krzysztof Nowicki and Tom A B Snijders. Estimation and prediction for stochastic blockstructures. *Journal* of the American statistical association, 96(455):1077–1087, 2001. Yosihiko Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23–31, 1981. Murray Rosenblatt. Remarks on some nonparametric estimates of a density function. *The Annals of Mathematical Statistics*, pp. 832–837, 1956. Yves-Laurent Kom Samo and Stephen Roberts. Scalable nonparametric bayesian inference on point processes with gaussian processes. In *International Conference on Machine Learning*, pp. 2227–2236. PMLR, 2015. H Schäbe. Nonparametric estimation of intensities of nonhomogeneous poisson processes. *Statistical Papers*, 34(1):113–131, 1993. Aaron Schein, Mingyuan Zhou, David Blei, and Hanna Wallach. Bayesian poisson tucker decomposition for learning the structure of international relations. In *International Conference on Machine Learning*, pp. 2810–2819. PMLR, 2016. David W Scott. *Multivariate density estimation: theory, practice, and visualization*. John Wiley & Sons, 2015. Mahito Sugiyama, Hiroyuki Nakahara, and Koji Tsuda. Tensor balancing on statistical manifold. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of Proceedings of Machine Learning Research, pp. 3270–3279, 2017. Mahito Sugiyama, Hiroyuki Nakahara, and Koji Tsuda. Legendre decomposition for tensors. In *Advances* in Neural Information Processing Systems 31, pp. 8825–8835, 2018. HR Thompson. Spatial point processes, with applications to ecology. *Biometrika*, 42(1/2):102–115, 1955. Feng Zhou, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, and Fang Chen. A refined MISD algorithm based on Gaussian process regression. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 584–596. Springer, 2018. Feng Zhou, Simon Luo, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, and Fang Chen. Efficient em-variational inference for nonparametric hawkes process. *Statistics and Computing*, 31(4):1–11, 2021. ## A Additional Experiments A.1 Bandwidth Sensitivity Analysis Our first experiment is to demonstrate the ability of our proposed model to learn an intensity function from samples. We generate a Bernoulli process with probably of p = 0.1 to generate samples every 1 second for 100 seconds to create a toy problem for our model. This experiment is to observe the behavior of varying the bandwidth in our model. In Figure 7a, we observe that by applying no kernel, we learn the deltas of each individual observation. When we apply a Gaussian kernel, the output of the model for the intensity function is much more smooth. Increasing the bandwidth of the kernel will provide a wider and much smoother function. Between the 60 seconds and 80 seconds mark, it can be seen when two observations have overlapping kernels, the intensity function becomes larger in magnitude. ## A.2 One Dimensional Poisson Process A one-dimensional experiment is simulated using Ogata's thinning algorithm (Ogata, 1981). We generate two experiments using the standard sinusoidal benchmark intensity function with a frequency of 20π. The dense experiment has troughs with 0 intensity and peaks at 201 and the sparse experiment has troughs with 0 intensity and peaks at 2. Figure 7d shows the experimental results of the dense case, our model has no problem learning the intensity function. We compare our results using KL divergence between the underlying intensity function used to generate the samples to the intensity function generated by the model. Figure 7b shows that the optimal bandwidth is h = 1. Algorithm 2 Thinning Algorithm for non-homogenous Poisson Process 1: **Function** Thinning Algorithm (λ (t), T): 2: n = m = 0, t0 = s0 = 0, λ¯ = sup0≤t≤T λ (t) 3: **repeat** 4: u ∼ uniform (0, 1) 5: w = − 1 λ¯ ln u {w ∼ exponential(λ¯)} 6: sm+1 = sm + w 7: D ∼ uniform (0, 1) 8: if D ≤ λ(sm+1) λ¯**then** 9: tn+1 = sm+1 10: n = n + 1 11: **else** 12: m = m + 1 13: **end if** 14: if tn ≤ T **then** 15: **return** {tk}k=1,2*,...,n* 16: **else** 17: **return** {tk}k=1,2*,...,n*−1 18: **end if** 19: **until** sm ≤ T 20: **End Function** ![15_image_1.png](15_image_1.png) ![15_image_3.png](15_image_3.png) ![15_image_0.png](15_image_0.png) (b) KL divergence of dense experi- (c) KL divergence of sparse experi- ![15_image_2.png](15_image_2.png) (e) Ogata's thinning algorithm with low intensity Figure 7: One dimensional experiments ![16_image_0.png](16_image_0.png) (b) Sparse observations. Figure 8: KL Divergence for four-order Poisson process. ![17_image_0.png](17_image_0.png) ![18_image_0.png](18_image_0.png) Figure 10: Intensity function of higher dimensional processes. Dots represent observations.
Review 1: Summary: This paper presents a new framework to learn higher order interactions in the intensity functions of Poisson processes. For this, the authors first state the well-known result that higher order interactions can be expressed in terms of generalized additive models (GAMs), which itself is an approximation to the Kolmogorov–Arnold representation of a multivariate function. Then, the continuous "time" input to Poisson processes is discretized into bins, allowing for a partial order structure, which in turn allows for a log-linear model. The authors then show that this log-linear formulation in fact corresponds to a Kolmogorov–Arnold representation, hence the "partially ordered process" is equivalent to an additive Poisson process model. Based on the previous work on optimization of log-linear models, the authors introduce a convex optimization objective. The model is evaluated on synthetic data up to order four as well as a real-world taxi pickup dataset, and consistently outperform baselines. Strengths and Weaknesses: ### Strengths - This rather complicated and technical paper is presented well except certain parts (see below). The running "taxi pickup" example is very pedagogical. - I find the connections between existing frameworks/results and their use for modeling higher-order interactions interesting. I also couldn't spot any technical error. - The experimental findings seem convincing although I'm not sure of other standard, relevant datasets. Here I wonder what other reviewers would think. ### Weaknesses I think the main weaknesses are the lack of discussion on the assumptions / modeling choices as well as writing. - Potential implication of the time binning is not discussed. Do we still model a Poisson process despite restricting ourself to finitely many bins? Is this commonly done in the literature? If so, any citations? How does $M$ affect overall performance? Similarly, a reference to the claim "The discretization approach tends to perform better in high-dimension compared to the alternative approaches such as variational inference" is needed. - Similarly, truncated parameter space should also be studied. How crucial is $k$? - Is log-linear model a simplification? For instance, would it be possible to model inhomogenous processes using it? #### Writing and notation (including questions): - It is not clear until the methods section what is meant by "higher order interactions". For instance, how do two Poisson processes interact? - Arrival rate and homogenous Poisson process are not formally defined in Sec 3. - Typo: discretized disjoint domains are denoted by $D_x, D_x, D_x$. - $t_i$'s are never formally introduced. - Presentation of GAM is mixed up with its use for Poisson processes, which I find slightly misleading. - What is meant by "interactions between features in GAM"? - How does $t$ in Theorem 3 differ from $t_1,\ldots,t_N$ on the left hand side? - I'm not entirely clear why the contribution is called "additive Poisson process". The "additions" seem to be because of the log-linear construction instead of a delibrate attempt to construct an additive process. Is that so? - Is $D$ in Fig1 caption defined somewhere? - Fig2b can be improved by better axis labels, increased image resolution, and the use of grid lines to account for the depth. - "A visualization of the truncated parameter space is shown in Figure 1a" or "Figure 2a"? ### Suggestions and questions - I'd probably first give the spatio-temporal example (second paragraph) and the phrase the research question (aka Are there any good ways of approximating the high dimensional intensity function). - In the third paragraph, the authors use the phrase "the information in lower-dimensional space", which I interpret as the "marginal space". If so, it would be nice to explain if the presented formulation is related to marginal observations per se. - Would be nice to explain why we see the same numbers under APP-1 and APP-2 in Table-1. - "We define a common pick-up time to be within 1-minute intervals of each other between the two days". Is this a common assumption? What is the justification? - Not following the literature very closely, I'm wondering how the method would perform on a standard benchmark (assuming the ones in the paper are introduced by the authors, which might very well be incorrect). Requested Changes: I would be happy if the concerns above are addressed either in the new revision or here as a comment. In particular, I would be happy to see a discussion (text) as well as ablation studies showing how sensitive is the framework to $k$ and $M$. Further, rules of thumb for choosing these variables would be nice. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper studies the problem of learning the intensity of high-dimensional Poisson processes. The paper proposes an additive hierarchy of subspace intensities. The idea is sensible and feels powerful, but the open problem is unclear: surely 4 dimensional poisson processes have been studied earlier. The method performs well but not impressively. Strengths and Weaknesses: S: The method seems strong and significant. W: The paper presentation was incredible confusing for me, and I did not understand the method. I have a bunch of comments below, and I’d like to hear the authors clarifications before a decision. W: The results are a bit inconclusive. The proposed method fares ok, but even trivial baselines seem to sometimes be better (fig 3), and sometimes the proposed method has large bias (fig4). The only real-world example in 4.3. studies a vague and frankly pointless research question, and does not actual benchmark if the system can learn intensities. Requested Changes: The paper needs to significantly improve the presentation, and significantly improve the empirical evaluation. Running comments - Fig1b doesn’t exist. Fig2b should be closer to first page. Many figures have horrible resolution, way too much detail, too small fonts, colors that are too close to each other, varying fonts, etc. Most figures are difficult to study. Please revise all figures and simplify them. - I’m not sure I buy the problem. First: the paper claims that current Poisson processes are “unable” to do multi-dimensional intensities, and an 3D example is given. Really? Really?? Are you sure that no one has done 3D Poisson point processes before? Are you sure that the earlier methods are unable to do 3D? Surely they might not be very good for high-dim cases, but “unable”? I don’t believe this. If this really is the case, then please demonstrate it more convincingly (eg by showing the math). Second: the 3D data is claimed to be often sparse. In which settings is this true? For instance, taxi data is plentiful in real world. Why have a running example that goes against your arguments? Third: “no direct observation of event at all”. I don’t understand what this means. So an event happens but we don’t observe it? Ok, but surely if we have no data we can’t do anything. I fail to see the point. Is your method imputing new data? - “KDE learns the joint intensity function by using information from lower dimensions.” Don’t understand what this means. - “However, KDE is unable to scale to learn dimensions because it suffers from the curse of dimensionality” With high-dimensional cases this is true (eg. 10 or 100 or 1000), but with 3D this is not true. Furthermore, KDE can use any kernel out there, and there are plenty of specialised high-dim kernels or factorised kernels. - “Besides, if observations are sparse, it is hard for these approaches to learn a reasonable intensity function.” I don’t buy this. Kernel methods can struggle with sparse data, but I don’t think you can make this kind of blanket statement about “Bayesian inference”. Surely Bayesian methods should be suitable for small amounts of data, especially the GP ones. - “In addition, the naïve approach using MCMC has cubic complexity O(n3)”. I have no idea what this refers to. MCMC does not have cubic complexity. Some kernel methods might, but many don’t. I also don’t get the O( (N^3)^d) complexity. This sounds absurdly bad and feels like a typo. - “However, the model complexity is O(n2k) for each dimension and is exponential with respect to the number of input dimensions”. I don’t understand what this means. How can you get exponential scaling when you have d dimensions? Surely the complexity should be k d n^2. - “it is likely to fail in extremely high dimensions and sparse observations.” What is “extremely”? - Dx, Dx, Dx is a typo - I don’t get eq 1. Why are we looking at probability of a set of times when we condition by spatial intensity? This makes no sense to me. If lambda_J is over spatial coordinate, then what does time do? How can we evaluate a time (eg. 3 hours) for a function whose domain in spatial coordinate? Surely we want to study the distribution of spatiotemporal triplets instead. Frankly I’m already lost at this point of the paper. Maybe the idea is that we marginalise the spatial locations away, and only care of time…? I’m lost. - What is \bold{t}? Makes no sense… Perhaps each spatial axis has different times and this is some kind of axis-time warping? I also don’t understand how the f_{j1,j2} can be a R → R function if it has two-dimensional domain. Or maybe its only one-dimensional? Perhaps time is used as a special variable, and then the different f_* mean that we count how much events happen in different projections. But this makes no sense since we don’t have locations as input. If we look at spatiotemporal taxis, then how can you just state that f_{x,y}(t) is a function of time only and not of space? I guess it the models the average number of taxis, but the {x,y} part was totally irrelevant. I’m lost… - I don’t understand the f_I and f_J connection in the integral after eq 4. - I couldn’t understand eqs 4..6. I don’t understand what I and J are or try to mean, how is space and time coupled in this model, and what are we trying to do, and why are we looking only at probabilities of set of timepoints. I thought that this was a spatiotemporal model, but the equations do not support this. - The paper should show both intuitively and mathematically how the x-y-W example is handled by your model. Currently there is no intuitive explanation of what happens to the x-y, and they are absent from equations as well. The paper should also show intuitively and mathematically how are arbitrary high-dimensional settings handled (eg. one might not even have time in the system, or any spatial coordinates: for instance, the Poisson process might model shopping item combinations.). The paper also needs to give stronger intuition on how we model a high-dimensional process with the subset processes. - I don’t really understand the discretisation and bin stuff. Why is this done? Surely one T-long interval is qualitatively equally difficult to handle than M T/M-long intervals. - I’m confused of the J \subset D notation. There are infinitely many subsets of D, and most of them are not axis-aligned. For instance, \Omega thus contains infinitely many subspaces J if we believe the definition before eq 7. I don’t think this is useful or sensible. The paper likely means that we take all index subsets instead. The notation should be fixed. - The taxi explanation of page 6 is good, but I’m slightly confused by this as well. For instance, “event occurring at {x,y}” implies that there is no time at play. This would then model how many taxis are observed at one location (irrespective of time), and thus effectively be just a count number. An alternative would be that time is still part of the system, and the observation is a sequence of timepoints at one location. How does this go? - I don’t get the “parameter domain” stuff. So suddenly the space of subspace-timeintervals \Omega becomes a parameter space S? Err…. why are parameters and inputs from the same space? I don’t understand what \theta_s means, or what s\inS means. Isn’t S infinite so then (\theta_s) is an infinite sequence of input-time pairs (which also somehow are parameters)? I’m totally lost. Eq 8 is gibberish to me. I would recommend visualising the S/Omega/theta/omega stuff in some way. - In eq 10 there are infinitely many omegas. I can’t follow whats going on. - Tables lack standard deviations - It would be nice to see visualisations of the taxi dataset. - I don’t understand the “common pickup by 1minute” logic. What are we trying to study here? Why do we care if taxis are ordered within a minute across days? Shouldn’t we look at high-intensity peaks of the intensity curve instead? What does joint process across days mean in terms of the model? Is this a model that looks at (x_tue,y_tue,x_wed,y_wed) as 4th order things? I’m pretty lost.. Surely you would study the triplet {x,y,w} instead? I don’t understand what table 2 shows or measures. Is this result good? Is it bad? Did we find 1minute couplings? I have no idea… I would assume that in such a data you can find all kinds of pattersn, and thus I’m not sure if this study demonstrates any transferable insights. Broader Impact Concerns: No issues. ================================================== Review 3: Summary: summary The paper addresses the problem of learning multidimensional non-homogeneous Poisson processes, e.g. a Poisson process with a spatial and time-dependent intensity function. The authors propose to facilitate the task by exploiting aggregate information. The model refines a fully-factorized factorization by considering higher-order interactions between the variables, i.e. contribution from the joint observations of more than one variable. The algorithm includes two key steps, the discretization of the input space and the definition of a partial order in the set of all possible interactions. Strengths and Weaknesses: strengths Establishing a link between GAM, Poisson processes, and the Kolmogorov approximation is interesting from a theoretical perspective. Exploring possible practical exploitation of it may lead to new research streams. The problem is nicely introduced and motivated. The need to go beyond KDE or MCMC-based methods has become increasingly urgent as the size of currently available data sets and the complexity of modern AI tasks grows. The fact that the resulting optimization problem is convex is relevant, even if the authors do not exploit it to provide explicit success guarantees. weaknesses The net contribution of the paper is not clearly stated. The author should have said explicitly whether the novelty is introducing kth-order GAM, rewriting the associated likelihood as the likelihood of a Boltzmann Machine, i.e. rewriting Equation 6 as in Equation 8, or applying tensor factorization ideas to non-homogeneous Poisson processes. The model seems to become expansive in high dimensions. Kernel Density Estimation is criticised for the associated N^D complexity, where D is the dimensionality of the input and N is the size of the data. The proposed model, however, is also exponential in D, even if N is replaced by the number of bins, M, which is also exponential in D. In the experiments, the proposed technique is only compared with KDE and a Bayesian MCMC-based model. Methods based on a discrete version of the input and tensor factorization seem to be excluded, as well as Variational Inference methods. This is surprising, especially after the authors say, on page 3, that Variational Inference has a good computational complexity (O(N) per dimension) and that their work is closely related to Bayesian Poisson Tucker decomposition (BPTD, of Schein et al., 2016). Some claims may be justified better. For example, the authors could explain more explicitly why "GAM is less affected by the curse of dimensionality compared to directly using smoothing in a higher-dimensional space" or why their approach "provides a generalization of this idea of Poisson Factorization by using Legendre tensor decomposition". Requested Changes: questions (in order of relevance) - In the experiments, is any of the competitors based on tensor factorization? And why the proposed method has not been compared with Variational Inference approaches? - Is this the first time GAM are linked to Theorem 3 and kth order-GAM defined? - Why is it essential to define a time-like partial order between subsets? What are the advantages of reformulating the kth order GAM as a Boltzmann Machine? - What are the differences between the proposed approach and Schein et al., 2016? Where is the Legendre tensor decomposition used? - Is 4 the maximum number of dimensions that the model can handle in practice? And what is the number of bins, M, used in the experiments? - In the theoretical presentation, time is treated differently from the dimensions but this is not explicit in the proposed algorithm. Is there anything special about the time dimension? - In the discussion about the experiments, is the complexity of KDE and DP-beta also a per-iteration complexity? - Would it be possible to re-write a kth order GAM as a standard GAM by expanding the set of smoothing functions and relabelling them? - D_x, D_x, and D_x, on page 3 should be D_x, D_y, and D_W Broader Impact Concerns: No concern ================================================== Metareview: Recommendation: Reject Comment: The reviewers identified major issues regarding the correctness of the claims made in this paper as well as the clarity of the presentation and the empirical evaluation. These issues would need to be resolved via major revision before this manuscript could be considered for publication again. ==================================================
# A Probabilistic Taylor Expansion With Gaussian Processes Toni Karvonen toni.karvonen@helsinki.fi Department of Mathematics and Statistics University of Helsinki Jon Cockayne *jon.cockayne@soton.ac.uk* Department of Mathematical Sciences University of Southampton Filip Tronarp filip.tronarp@matstat.lu.se Centre for Mathematical Sciences Lund University Simo Särkkä *simo.sarkka@aalto.fi* Department of Electrical Engineering and Automation Aalto University Reviewed on OpenReview: *https: // openreview. net/ forum? id= 2TneniEIDB* ## Abstract We study a class of Gaussian processes for which the posterior mean, for a particular choice of data, replicates a truncated Taylor expansion of any order. The data consist of derivative evaluations at the expansion point and the prior covariance kernel belongs to the class of Taylor kernels, which can be written in a certain power series form. We discuss and prove some results on maximum likelihood estimation of parameters of Taylor kernels. The proposed framework is a special case of Gaussian process regression based on data that is orthogonal in the reproducing kernel Hilbert space of the covariance kernel. ## 1 Introduction Taylor's theorem is among the most fundamental results in analysis. In one dimension, Taylor's theorem states that any function f : R → R that is sufficiently smooth at a ∈ R can be written as $$f(x)=T_{n,a}(x)+P_{n,a}(x),$$ $$(1.1)$$ f(x) = Tn,a(x) + Pn,a(x), (1.1) where Tn,a(x) = Pn p=0 1 p! f (p)(a)(x − a) pis the nth order Taylor polynomial and Pn,a(x) a remainder term which has the property that Pn,a(x) = O(|x − a| n+1) as |x − a| → 0. Multidimensional generalisations are readily available and will be introduced in Section 2. Approximations derived from (1.1), in particular the first and second order Taylor approximations $f(x)\approx f(a)+f^{\prime}(a)(x-a)\quad\mbox{and}\quad f(x)\approx f(a)+f^{\prime}(a)(x-a)+\frac{1}{2}f^{\prime\prime}(a)(x-a)^{2},$ play an important role in numerical algorithms for a number of reasons. Firstly, Taylor approximations provide a straightforward and principled means of *linearising* a function of interest, which can often dramatically accelerate otherwise costly computations. Secondly, they require only information about a function and its derivatives at a single point; information that particular algorithms may already be collecting. Particular applications of Taylor's theorem in numerical algorithms include optimisation (Moré, 1978; Conn et al., 2000), state estimation (Särkkä, 2013, Ch. 5), ordinary differential equations (Hairer et al., 1993, Ch. II), and approximation of exponential integrals in Bayesian statistics (Raudenbush et al., 2000), to name but a few. A crucial challenge when applying Taylor series in this way, however, is their locality. The approximation is valid only near a, and apart from trivial examples approximation quality decays rapidly away from this point. When a numerical algorithm attempts to use a Taylor approximation to explore function behaviour around a particularly novel point, far from a, the behaviour of the algorithm can be difficult to predict and control. This paper proposes a remedy for this by introducing a Gaussian process (GP) model (Rasmussen and Williams, 2006) whose posterior mean, given the derivative data (f(a), f′(a)*, . . . , f*(n)(a)), is exactly the Taylor polynomial Tn,a, and whose posterior variance plays a role analogous to the remainder term Pn,a. In the spirit of probabilistic numerics (Diaconis, 1988; Cockayne et al., 2019b; Hennig et al., 2022), the posterior variance can then be used for principled probabilistic quantification of epistemic uncertainty in the Taylor approximation f(x) ≈ Tn,a(x) at x ̸= a, which can be exploited and propagated forward. In effect, the variance may be used to encode into algorithms a degree of "scepticism" about the validity of the Taylor approximation away from a. Taylor approximation thus joins the ranks of classical numerical methods, such as algorithms for spline interpolation (Diaconis, 1988; Kimeldorf and Wahba, 1970), numerical quadrature (Diaconis, 1988; Karvonen and Särkkä, 2017; Karvonen et al., 2018), differential equations (Schober et al., 2014; 2019; Teymur et al., 2016), and linear algebra (Cockayne et al., 2019a; Hennig, 2015), that can be cast as statistical inference. Even though the use of derivative information in Gaussian process modelling is rather standard and the prior that we use is relatively well known, we are unaware of any prior attempts at deriving the Taylor approximation in a Gaussian process framework. ## 1.1 Related Literature The Gaussian process priors we use to construct a probabilistic Taylor expansion are determined by positivedefinite and non-stationary *Taylor kernels* which, at an expansion point a, take the form $$K_{a}(x,y)=K(x-a,y-a),\quad\quad\text{where}\quad\quad K(x,y)=\sigma^{2}\sum_{p=0}^{\infty}\frac{c_{p}\lambda^{p}}{(p!)^{2}}(xy)^{p}\tag{1.2}$$ for non-negative constants cp and positive parameters σ and λ. For multidimensional input points, the index p ∈ N0 is replaced with a multi-index α ∈ N d 0 and xy usually with the Euclidean inner product; see Section 2.1 for details. The canonical example is the *exponential kernel* $$K(x,y)=\sigma^{2}\sum_{p=0}^{\infty}\frac{\lambda^{p}}{p!}(x y)^{p}=\sigma^{2}\exp(\lambda x y),$$ $$(1.3)$$ which is obtained by setting cp = p! in (1.2). The exponential kernel is closely connected to the popular Gaussian kernel K(*x, y*) = σ 2exp(−λ 2(x − y) 2/2). Taylor kernels, often under the name *power series kernels*, have been used—and their approximation properties analysed—in the numerical analysis and scattered data approximation literature; see Dick (2006); Zwicknagl (2009); De Marchi and Schaback (2010); Zwicknagl and Schaback (2013) and Fasshauer and McCourt (2015, Sec. 3.3.1). Section 1 in Zwicknagl and Schaback (2013) has been of particular inspiration for the present work. The *Szegő kernel* and the Bergman kernel K(*x, y*) = 1/(1−xy) and K(*x, y*) = 1/(1−xy) 2, respectively, are particularly well studied in the approximation theory literature because their reproducing kernel Hilbert spaces (RKHSs) are important in complex analysis; see, for example, Larkin (1970); Richter-Dyn (1971b;a) and Oettershagen (2017, Sec. 6.2). These kernels are defined on the open interval (−1, 1) and obtained from (1.2) by setting (Szegő) cp = (p!)2 and (Bergman) c2p = (p!)2 and c2p+1 = 0 for p ∈ N0 in (1.2). Taylor kernels occasionally appear in the machine learning and statistics literature but, to the best of our knowledge, have not been used in conjunction with derivative data in the way proposed here. We refer to Minka (2000, Sec. 4); Steinwart and Christmann (2008, Example 4.9) and (Liang and Rakhlin, 2020) for a few of their appearances. Gaussian process regression based on derivative evaluations has been explored (e.g., Solak et al., 2002; Prüher and Särkkä, 2016; Wu et al., 2017; Eriksson et al., 2018), though typically for "standard" kernels such as the Gaussian kernel. The approach in this paper differs from the prior Gaussian process literature in two key ways, which enable a probabilistic replication of the Taylor expansion: First, for kernels used in the literature the posterior mean cannot coincide with the Taylor polynomial. Secondly, in the literature the data typically consist of function and derivative evaluations at a number of *different points*, whereas we are specifically interested in derivatives at a *single point*. ## 1.2 Contributions The main contributions of the paper are contained in Sections 2 and 3. In Section 2, we derive a probabilistic Taylor expansion and a basic error bound; these results are given in Theorems 2.1 and 2.3. We also discuss how inclusion of observation noise affects probabilistic Taylor expansions. In Section 3, we derive expressions for maximum likelihood estimates of the Taylor kernel parameters σ and λ. Perhaps the most interesting result that we obtain is Theorem 3.2, which states that derivative data that could have been generated by a constant function yield the estimate λML = 0. As mentioned above, the exponential kernel is related to the Gaussian kernel. In Section 4, we show how to derive closed form expression for the posterior mean and covariance given derivative data when the covariance kernel is Gaussian. Section 5 outlines generalisations of probabilistic Taylor expansions derived in Section 2 for data that are orthogonal in the reproducing kernel Hilbert space of the covariance kernel. Fourier coefficients constitute an example of orthogonal data when the kernel is periodic. Some simple numerical toy examples are included in Section 6. We emphasise that the purpose of this paper is not to propose new Gaussian process algorithms but rather to provide a Gaussian process interpretation for classical and well-known Taylor approximations. Such an interpretation is intrinsically interesting even if it yields no new competitive algorithms. ## 1.3 Notation We use N0 to denote the set of non-negative integers and N d 0 to denote the collection of non-negative d-dimensional multi-indices α = (α(1)*, . . . ,* α(d)), where α(j) ∈ N0 is the jth index of α. We also use the standard notation |α| = α(1) + *· · ·* + α(d) and α! = α(1)! *× · · · ×* α(d)!. ## 2 A Probabilistic Taylor Expansion In this section, we derive a probabilistic Taylor expansion using Gaussian processes. We discuss a generalisation of this derivation for orthogonal data in Section 5. ## 2.1 Taylor Kernels Let a ∈ R d and r ∈ (0, ∞]. Define Ωa,r = {x ∈ R d: ∥x − a∥2 < r}. A *multidimensional Taylor kernel* on Ωa,r × Ωa,r is defined as : $ ||x-a||_2<r$ $$K_{\mathbf{a}}(\mathbf{x},\mathbf{y})=K(\mathbf{x}-\mathbf{a},\mathbf{y}-\mathbf{a})\quad\text{for}\quad K(\mathbf{x},\mathbf{y})=\sigma^{2}\sum_{\mathbf{\alpha}\in\mathbb{N}_{d}^{d}}\frac{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}}{(\mathbf{\alpha}!)^{2}}\mathbf{x}^{\mathbf{\alpha}}\mathbf{y}^{\mathbf{\alpha}},\tag{2.1}$$ where σ > 0 and λ ∈ R d + are kernel hyperparameters. The coefficients cα are non-negative constants such that cα > 0 for infinitely many α ∈ N d 0 and $$\sum_{\mathbf{\alpha}\in\mathbb{N}_{0}^{d}}\frac{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}}{(\mathbf{\alpha}!)^{2}}\,r^{2|\mathbf{\alpha}|}<\infty\ \ \mbox{if}\ \ r<\infty\ \ \ \ \ \mbox{or}\ \ \ \sum_{\mathbf{\alpha}\in\mathbb{N}_{0}^{d}}\frac{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}}{\mathbf{\alpha}!\sqrt{\mathbf{\alpha}!}}\,\mbox{e}^{|\mathbf{\alpha}|}<\infty\ \ \mbox{if}\ \ r=\infty.\tag{2.2}$$ The conditions (2.2) are sufficient to ensure the series defining Ka via (2.1) converges absolutely for all x, y ∈ Ωa,r, which, together with cα > 0 for infinitely many α, guarantees that Taylor kernels are positivedefinite (Zwicknagl and Schaback, 2013, Thm. 2.2). If only finititely many cα are positive, the kernel is positive-semidefinite. However, cα ≥ 0 is not necessary for positive-definiteness of K in (2.1) (see Zwicknagl, 2009, Sec. 2). To ensure that the diagonal covariance matrix in (2.11) is invertible we always assume that $$c_{\mathbf{\alpha}}>0\quad{\mathrm{~for~every~}}\quad\mathbf{\alpha}\in\mathbb{N}_{0}^{d}.$$ Note that σ and λ could be subsumed into the coefficients cα. However, as we shall see in Section 3, the parametrisation that we use leads to convenient and useful hyperparameter estimation. Specifically, maximum likelihood estimation of the parameters σ and λ is possible and the estimators have some intuitive properties. In contrast, it is either useless or impossible to estimate the coefficients cα, which should therefore be fixed. An important subclass of Taylor kernels are *inner product kernels*, defined by $$K(\mathbf{x},\mathbf{y})=\sigma^{2}\sum_{p=0}^{\infty}\frac{c_{p}}{(p!)^{2}}\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{\lambda}}^{p},\quad\quad\mathrm{where}\quad\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{\lambda}}=\sum_{i=1}^{d}\lambda_{i}x_{i}y_{i}.$$ $$(2.3)$$ It is easy to show that inner product kernels are Taylor kernels: From the multinomial theorem we have $$K(\mathbf{x},\mathbf{y})=\sigma^{2}\sum_{p=0}^{\infty}{\frac{c_{p}}{(p!)^{2}}}(\mathbf{x},\mathbf{y})_{\lambda}^{p}=\sigma^{2}\sum_{p=0}^{\infty}{\frac{c_{p}}{(p!)^{2}}}\bigg(\sum_{i=1}^{d}\lambda_{i i}y_{i}\bigg)^{p}=\sigma^{2}\sum_{p=0}^{\infty}{\frac{c_{p}}{(p!)^{2}}}\sum_{|\alpha|=p}{\frac{p!}{\alpha!}}\lambda^{\alpha}\mathbf{x}^{\alpha}\mathbf{y}^{\alpha}$$ $$=\sigma^{2}\sum_{\mathbf{\alpha}\in\mathbb{N}_{0}^{d}}{\frac{c_{|\alpha|}}{\alpha!\,|\alpha|!}}\lambda^{\alpha}\mathbf{x}^{\alpha}\mathbf{y}^{\alpha},$$ which we recognise as a Taylor kernel in (2.1) with cα = c|α|α!/ |α|!. We will discuss estimation of the parameters σ and λ (as well as the coefficients cp) in Section 3; for now, we assume the parameters are given and proceed to show how Taylor kernels may be used to derive a probabilistic Taylor expansion. The multidimensional version of the exponential kernel in (1.3) is $$K(\mathbf{x},\mathbf{y})=\sigma^{2}\exp(\langle\mathbf{x},\mathbf{y}\rangle_{\lambda})=\sigma^{2}\sum_{p=0}^{\infty}\frac{1}{p!}\langle\mathbf{x},\mathbf{y}\rangle_{\lambda}.\tag{2.4}$$ The exponential kernel is defined on Ωa,r = R d. In Section 4, we discuss a close connection that the exponential kernel has to the commonly used Gaussian kernel. By setting cp = 1 we obtain the *Bessel kernel* K(*x, y*) = σ 2 P∞ p=0⟨x, y⟩ p λ /(p!)2 = I0(2⟨x, y⟩ 1/2 λ), where I0 is the modified Bessel function of the first kind, which is another Taylor kernel defined on the whole of R d. ## 2.2 Gaussian Process Regression With Derivative Data A Gaussian process fGP ∼ GP(*m, R*) characterised by mean function m: Ωa,r → R and covariance kernel R: Ωa,r × Ωa,r → R is a stochastic process such that for any points x1*, . . . ,* xN ∈ Ωa,r the joint distribution of (fGP(x1)*, . . . , f*GP(xN )) is an N-dimensional Gaussian with mean vector (m(x1)*, . . . , m*(xN )) ∈ R N and covariance matrix R = (R(xi, xj ))N i,j=1 ∈ R N×N (Rasmussen and Williams, 2006). In particular, E[fGP(x)] = m(x) and Cov[f(x), f(y)] = R(x, y) for all x, y ∈ Ωa,r. Let f : Ωa,r → R be an n times differentiable function on Ωa,r, meaning that the partial derivatives $$\mathbf{D}^{\alpha}f={\frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha(1)}\cdots\partial x_{d}^{\alpha(d)}}}$$ exist for all α ∈ N d 0 such that |α| ≤ n. Suppose also that the prior mean m is n times differentiable and that R is n times differentiable in both arguments, in the sense that the derivative $$\mathrm{D}_{\,\,\,\,y}^{\beta}\mathrm{D}_{\,\,\,x}^{\alpha}R(x,y)=\left.{\frac{\partial^{|\alpha|+|\beta|}}{\partial v^{\alpha}\partial w^{\beta}}}R(v,w)\right|_{\stackrel{v=x}{w=y}}$$ exists for all x, y ∈ Ωa,r and all multi-indices α and β such that |α| , |β| ≤ n. The *noiseless* derivative data are $$\boldsymbol{f}_{\boldsymbol{a}}=(\mathrm{D}^{\boldsymbol{\alpha}}\boldsymbol{f}(\boldsymbol{a}))_{|\boldsymbol{\alpha}|\leq n}=(\mathrm{D}^{\boldsymbol{\alpha}_{1}}\boldsymbol{f}(\boldsymbol{a}),\ldots,\mathrm{D}^{\boldsymbol{\alpha}_{N^{d}_{0}}}\boldsymbol{f}(\boldsymbol{a})),$$ where we use an arbitrary ordering of the set $\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{N^{d}_{0}}\}=\{\boldsymbol{\alpha}\in\mathbb{N}^{d}_{0}\,:\,|\boldsymbol{\alpha}|\leq n\}$, which contains n $$N_{n}^{d}={\binom{n+d}{n}}={\frac{(n+d)!}{n!\,d!}}$$ $$(2.5)$$ elements. When conditioned on these derivative data, the posterior fGP | fa is a Gaussian process (Särkkä, 2011; Travelletti and Ginsbourger, 2022). That is, fGP | fa ∼ GP(sn,a, Pn,a) with mean and covariance $$s_{n,a}(\mathbf{x})=m(\mathbf{x})+\mathbf{r_{a}}(\mathbf{x})^{\mathsf{T}}\mathbf{R_{a}^{-1}}(\mathbf{f_{a}}-\mathbf{m_{a}})\quad{\mathrm{~and~}}\quad P_{n,a}(\mathbf{x},\mathbf{y})=R(\mathbf{x},\mathbf{y})-\mathbf{r_{a}}(\mathbf{x})^{\mathsf{T}}\mathbf{R_{a}^{-1}}\mathbf{r_{a}}(\mathbf{y}).$$ a ra(y). (2.6) Here Ra ∈ R Nd n×Nd n and ra(x) ∈ R Nd n are each given by $$(2.6)$$ $$\left(R_{\mathbf{a}}\right)_{i j}=\mathrm{D}_{\mathbf{y}}^{\alpha_{j}}\mathrm{D}_{\mathbf{x}}^{\alpha_{i}}R(\mathbf{x},\mathbf{y})\big|_{\begin{subarray}{c}\mathbf{x}=\mathbf{a}\\ \mathbf{y}=\mathbf{a}\end{subarray}}\quad\mathrm{and}\quad\left(\mathbf{r}_{\mathbf{a}}(\mathbf{x})\right)_{i}=\mathrm{D}_{\mathbf{y}}^{\alpha_{i}}R(\mathbf{x},\mathbf{y})\big|_{\mathbf{y}=\mathbf{a}},$$ $$(2.7)$$ , (2.7) where subscripts denote the differentiation variable, and ma = (Dα1m(a)*, . . . ,* D αNdn m(a)). When f has multidimensional range, one may model each of its components independently, though we note that this modelling choice may be readily generalised using vector-valued Gaussian processes (Álvarez et al., 2012). ## 2.3 Replicating The Taylor Expansion Using Taylor Kernels The next theorem combines what was described in Sections 2.1 and 2.2 to give a probabilistic version of the Taylor expansion. Theorem 2.1. Let Ka *be a Taylor kernel defined as in* (2.1)*. Let* fGP ∼ GP(m, Ka) and fa = (Dαf(a))|α|≤n. Then fGP | fa ∼ GP(sn,a, Pn,a)*, where* $$s_{n,\mathbf{a}}(\mathbf{x})=m(\mathbf{x})+\sum_{|\mathbf{a}|\leq n}\frac{\mathrm{D}^{\mathbf{\alpha}}[f(\mathbf{a})-m(\mathbf{a})]}{\mathbf{\alpha}!}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}}\;\;\text{and}\;\;P_{n,\mathbf{a}}(\mathbf{x},\mathbf{y})=\sigma^{2}\sum_{|\mathbf{a}|>n}\frac{c_{n}\mathbf{\lambda}^{\mathbf{\alpha}}}{(\mathbf{\alpha}!)^{2}}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}}(\mathbf{y}-\mathbf{a})^{\mathbf{\alpha}}.\tag{2.8}$$ If m is a polynomial of degree at most n*, then* $$s_{n,a}(\mathbf{x})=\sum_{|\mathbf{\alpha}|\leq n}{\frac{\mathrm{D}^{\mathbf{\alpha}}f(\mathbf{a})}{\mathbf{\alpha}!}}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}},$$ $$(2.9)$$ $$(2.10)$$ |α|≤n which is identical to the multidimensional version of the Taylor polynomial in (1.1). Proof. It is straightforward to compute that, for any β, γ ∈ N d 0 , $$\mathrm{D}_{y}^{\beta}\,\mathrm{D}_{x}^{\gamma}\,K_{a}(x,y)=\sigma^{2}\!\!\sum_{\alpha\geq\beta\wedge\gamma}\frac{c_{\alpha}\lambda^{\alpha}(x-a)^{\alpha-\beta}(y-a)^{\alpha-\gamma}}{(\alpha-\beta)!(\alpha-\gamma)!},$$ where β ∧ γ = (max{β(1), γ(1)}*, . . . ,* max{β(d), γ(d)}). If x = a or y = a, all terms with α − β ̸= 0 or α − γ ̸= 0, respectively, in (2.10) vanish. Therefore in the context of (2.7) we have $$(\mathbf{R_{a}})_{ij}=\sigma^{2}c_{\alpha_{i}}\mathbf{\lambda}^{\alpha_{i}}\delta_{ij}\quad\text{and}\quad(\mathbf{r_{a}}(\mathbf{x}))_{i}=\sigma^{2}\frac{c_{\alpha_{i}}\lambda^{\alpha_{i}}}{\alpha_{i}!}(\mathbf{x}-\mathbf{a})^{\alpha_{i}}.\tag{2.11}$$ Consequently, the matrix Ra is diagonal and the ith element of the row vector ra(x) TR−1 a in (2.6) is (αi!)−1(x − a) αi. It follows that the posterior mean and covariance are as in (2.8). From (2.1) we recognise the covariance Pn,a as the remainder in the kernel expansion. To prove (2.9) it is sufficient to observe that m(x) = P|α|≤n Dαm(a)(α!)−1(x − a) α when m is a polynomial of degree at most n. By inspection it is clear that sn,a in (2.9) is identical to the Taylor expansion given in (1.1) (and its multivariate version), which completes the proof. Note that the covariance is not identically zero—in fact, Pn,a(x, x) → ∞ as ∥x − a∥2 → ∞. Furthermore, while Pn,a(x, y) takes the form of an infinite sum, provided Ka has a closed form it can be computed by subtracting the terms with |α| ≤ n in the summation form of Ka from that closed form. For illustration, some posterior processes are displayed in Figure 1. Whether or not the explosion of the posterior variance away from a is desirable depends on what one is trying to achieve and what kind of prior information is available. If one is trying to extrapolate, it seems entirely natural to us, at least in the absence of additional knowledge about f, that the variance should be very large far away from a. But if there is additional prior information that the function f has, for example, approximately the same magnitude everywhere on its domain, then it may make sense to use a stationary kernel for which the variance tends to a constant value as the distance to the nearest data point increases. The expressions in (2.8) for the posterior mean and covariance show that computational complexity of inference with Taylor kernels and derivative data is linear in the number of data points, Nd n , if the derivatives of m are cheap to compute (e.g., if m is a polynomial). A generic kernel for which no special structure is present in the covariance matrix Ra incurs cubic computational cost because a linear system of equations needs to solved when the mean and variance are computed directly from (2.6). This seeming advantage of Taylor kernels is lost if the data do not consist of derivatives at a single point. Remark 2.2. Recall that in Section 2.1 we assumed that cα > 0 for every α ∈ N d 0 . However, from (2.11) we easily see that Theorem 2.1 remains valid as long as cα > 0 for all |α| ≤ n because this ensures that the diagonal covariance matrix Ra is invertible. Section 2.1 therefore applies also to *polynomial kernels*, which are Taylor kernels with finitely many non-zero coefficients cα if n remains sufficiently small. ## 2.4 Error Bounds Each positive-semidefinite kernel R on Ωa,r × Ωa,r is associated to a unique *reproducing kernel Hilbert space* (RKHS), H(R), equipped with inner product ⟨·, ·⟩H(R) and norm ∥·∥H(R) . The RKHS is a Hilbert space of functions f : Ωa,r → R such that R(·, x) ∈ H(R) for every x ∈ Ωa,r and in which the kernel R has the reproducing property $$f(\mathbf{x})=\langle f,R(\cdot,\mathbf{x})\rangle_{{\mathcal{H}}(R)}\quad{\mathrm{~for~all~}}\quad f\in{\mathcal{H}}(R){\mathrm{~~and~}}\mathbf{x}\in\Omega_{\mathbf{a},r}.$$ See Berlinet and Thomas-Agnan (2004) for more information on RKHSs. It is often difficult to characterise the functions which lie in the RKHS. Fortunately, for Taylor kernels one may use results such as Theorem 9 in Minh (2010) to show that $${\mathcal{H}}(K_{\mathbf{a}})=\left\{f(\mathbf{x})=\sigma\sum_{\mathbf{\alpha}\in\mathbb{N}_{0}^{d}}f_{\mathbf{\alpha}}{\frac{\sqrt{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}}}{\mathbf{\alpha}!}}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}}:\|f\|_{{\mathcal{H}}(K_{\mathbf{a}})}^{2}=\sum_{\mathbf{\alpha}\in\mathbb{N}_{0}^{d}}f_{\mathbf{\alpha}}^{2}<\infty\right\}.$$ See also Zwicknagl and Schaback (2013) and Paulsen and Raghupathi (2016, Sec. 2.1). For example, all polynomials are contained in the RKHS of any Taylor kernel for any a ∈ R d. The next theorem shows that the posterior variance has a similar interpretation to the Taylor remainder term if f is in H(Ka). Theorem 2.3. Let fGP | fa be as in Theorem 2.1, and let the assumptions of that theorem hold. If f ∈ H(Ka), then sn,a and Pn,a *satisfy* $$f(\mathbf{x})-s_{n,\mathbf{a}}(\mathbf{x})|\leq\|f\|_{\mathcal{H}(K_{n})}\,P_{n,\mathbf{a}}(\mathbf{x},\mathbf{x})^{1/2}\leq C_{n,r}\,\sigma\,\|f\|_{\mathcal{H}(K_{n})}\,\|\mathbf{x}-\mathbf{a}\|_{2}^{n+1}\tag{2.12}$$ for all x ∈ Ωa,r, where (Cn,r)∞ n=0 is a positive sequence such that Cn,r → 0 as n → ∞. Proof. By the standard equivalence between Gaussian process regression in the noiseless setting and worst-case optimal approximation (e.g., Scheuerer et al., 2013; Kanagawa et al., 2018) the posterior mean sn,a ∈ H(Ka) is the minimum-norm approximant of f such that Dαsn,a(a) = Dαf(a) for every |α| ≤ n and the posterior standard deviation at x, Pn,a(x, x) 1/2, equals the worst-case approximation error at x in the RKHS. Hence $|f(\mathbf{x})-s_{n,\mathbf{a}}(\mathbf{x})|\leq\|f-s_{\mathbf{a}}\|_{\mathcal{H}(K_{\mathbf{a}})}\,P_{n,\mathbf{a}}(\mathbf{x},\mathbf{x})^{1/2}\leq\|f\|_{\mathcal{H}(K_{\mathbf{a}})}\,P_{n,\mathbf{a}}(\mathbf{x},\mathbf{x})^{1/2}\,.$ for all x ∈ Ωa,r if f ∈ H(Ka) (e.g., Wendland, 2005, Thm. 16.3). To prove the upper bound for the posterior variance observe that from the general inequality |x α*| ≤ ∥*x∥ |α| ∞ ≤ ∥x∥ |α| 2for x ∈ R d and α ∈ N d 0 it follows ![6_image_0.png](6_image_0.png) Figure 1: Gaussian process posterior means and 95% credible intervals given derivative data for f(x) = sin(πx) at a = 0. The priors have m ≡ 0 and use either (blue) the exponential kernel K(*x, y*) = σ 2exp(λxy) with λ = 3/2 or (orange) the Gaussian kernel K(*x, y*) = σ 2exp(−λ 2(x − y) 2/2). Maximum likelihood estimation was used to select the scaling parameter σ. We shall revisit this example in Section 6. that, for any x ∈ Ωa,r = {x ∈ R d: ∥x − a∥2 < r}, |α|>n cαλ α (α!)2 (x − a) 2α Pn,a(x, x) = σ 2 X = σ 2 X |α|=n+1 cαλ α (α!)2 (x − a) 2α +X |α|>n+1 cαλ α (α!)2 (x − a) 2α ! ≤ σ 2 X |α|=n+1 cαλ α (α!)2 ∥x − a∥ 2(n+1) 2 +X |α|>n+1 cαλ α (α!)2 ∥x − a∥ 2|α| 2 ! = σ 2∥x − a∥ 2(n+1) 2 X |α|=n+1 cαλ α (α!)2 +X |α|>n+1 cαλ α (α!)2 ∥x − a∥ 2|α|−2(n+1) 2 ! ≤ σ 2∥x − a∥ 2(n+1) 2 X |α|=n+1 cαλ α (α!)2 + r −2(n+1) X |α|>n+1 cαλ α (α!)2 r 2|α| ! . | {z } =Cn,r The summability assumption (2.2) ensures that Cn,r is finite and that Cn,r → 0 as n → ∞. The bound (2.12) is valid for every element of H(Ka). However, the bound is *uncomputable* because it is not possible to compute the norm ∥f∥H(Ka) from the derivative data fa without some additional information about the function f. For example, when d = 1 and a = 0, the functions $f_{1}(x)=1+2x+3x^{2}+4x^{3}+5x^{4}\quad\mbox{and}\quad f_{2}(x)=1+2x+3x^{2}+4x^{3}+cx^{7}$ are different if c = 0 ̸ but provide the same derivative data whenever n ≤ 3. In practice, what one has to do is estimate the parameters of Ka in a data-dependent way, use the standard deviation Pn,a(x, x) 1/2to compute, say, the 95% credible interval around the point estimate sn,a(x) of f(x), and conclude that it is likely that f(x) falls within the resulting credible interval. Any such uncertainty estimates are bound to fail occasionally—and the severity of the failure can be arbitrary. For example, when n = 3, credible intervals formed from derivative data generated the function f2 above do not depend on c even though |f2(x) − s3,0(x)| = cx7 does. ## 2.5 Noisy Observations Suppose that the observations are corrupted by Gaussian noise. That is, the data vector is fa = (yα)|α|≤n, where yα = Dαf(a) + zα with independent zα ∼ N(0, ε2α) for εα > 0. While unrealistic in practice, the assumption that the noise terms are independent allows for explicit computation of the posterior mean and variance. In this setting the posterior mean and covariance for a general sufficiently differentiable prior mean m and covariance kernel R are $$s_{n,\mathbf{a}}(\mathbf{x})=m(\mathbf{x})+\mathbf{r_{n}}(\mathbf{x})^{\mathsf{T}}(\mathbf{R_{a}}+\mathbf{E})^{-1}(\mathbf{f_{a}}-\mathbf{m_{a}})\ \text{and}\ P_{n,\mathbf{a}}(\mathbf{x},\mathbf{y})=R(\mathbf{x},\mathbf{y})-\mathbf{r_{a}}(\mathbf{x})^{\mathsf{T}}(\mathbf{R_{a}}+\mathbf{E})^{-1}\mathbf{r_{a}}(\mathbf{y}),\tag{2.13}$$ where E is a diagonal Nd n × Nd n matrix containing the noise variances ε 2 a and ra(x), Ra, and ma were defined in Section 2.2. Recall then from Section 2.3 that when R is a Taylor kernel Ka we have $$(\mathbf{R_{a}})_{i j}=\sigma^{2}c_{\mathbf{\alpha}_{i}}\mathbf{\lambda}^{\mathbf{\alpha}_{i}}\delta_{i j}\quad\mathrm{~and~}\quad(\mathbf{r_{a}}(\mathbf{x}))_{i}=\sigma^{2}{\frac{c_{\mathbf{\alpha}_{i}}\mathbf{\lambda}^{\mathbf{\alpha}_{i}}}{\mathbf{\alpha}_{i}!}}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}_{i}}.$$ Therefore (Ra + E) −1 = (σ 2cαiλ αi + ε 2 αi ) −1δij , and plugging this into (2.13) yields $$s_{n,\mathbf{a}}(\mathbf{x})=m(\mathbf{x})+\sigma^{2}\sum_{|\mathbf{\alpha}|\leq n}{\frac{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}[y_{\mathbf{\alpha}}-\mathbf{D}^{\mathbf{\alpha}}m(\mathbf{a})]}{\mathbf{\alpha}!(\sigma^{2}c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}+\varepsilon_{\mathbf{\alpha}}^{2})}}(\mathbf{x}-\mathbf{a})^{\mathbf{\alpha}}$$ and σ 2c 2 αλ 2α (α!)2(σ 2cαλα + ε 2α) (x − a) α(y − a) α Pn,a(x, y) = Ka(x, y) − σ 2 X |α|≤n = σ 2 " X σ 2c 2 αλ 2α (α!)2(σ 2cαλα + ε 2α) (x − a) α(y − a) α # α∈Nd 0 cαλ α (α!)2 (x − a) α(y − a) α − X |α|≤n = σ 2 " X cαλ αε 2 α (α!)2(σ 2cαλα + ε 2α) (x − a) α(y − a) α + X |α|>n cαλ α (α!)2 (x − a) α(y − a) α # . |α|≤n Note that by setting εα = 0 for every α ∈ N d 0 such that |α| ≤ n we recover the noiseless posterior mean and covariance in (2.8). ## 3 Parameter Estimation Observe from (2.8) that, although they do not affect the posterior mean, proper selection of the Taylor kernel parameters λ and σ is a prerequisite for useful and meaningful uncertainty quantification via the posterior variance Pn,a(x, x). In this section we consider maximum likelihood estimation of these parameters. For the Gaussian process model in Section 2.2, the negative log-likelihood function that is to be minimised with respect to a generic vector of kernel hyperparameters θ is $$\ell(\mathbf{\theta})={\frac{1}{2}}(\mathbf{f_{a}}-\mathbf{m_{a}})^{\mathsf{T}}\mathbf{R_{a}^{-1}}(\mathbf{f_{a}}-\mathbf{m_{a}})+{\frac{1}{2}}\log\operatorname*{det}\mathbf{R_{a}}+{\frac{N_{n}^{d}}{2}}\log(2\pi).$$ By discarding terms and coefficients that do not depend on θ and using (2.11) we see that for a Taylor kernel the maximum likelihood estimate (MLE) θML is any minimiser of the function $$\tilde{\ell}(\mathbf{\theta})=\frac{1}{\sigma^{2}}\sum_{|\mathbf{\alpha}|\leq n}\frac{(\mathrm{D}^{\mathbf{\alpha}}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\mathbf{\alpha}}}+N_{n}^{d}\log\sigma^{2}+\sum_{|\mathbf{\alpha}|\leq n}\log\mathbf{\lambda}^{\mathbf{\alpha}}+\sum_{|\mathbf{\alpha}|\leq n}c_{\mathbf{\alpha}}.\tag{3.1}$$ In principle, every coefficient cα of a Taylor kernel in (2.1) may be considered a free parameter to be estimated. However, maximum likelihood estimation of these coefficient is either useless or impossible: From (2.8) we see that the posterior process depends on cα only for |α| > n. However the objective function (3.1) does not depend on these cα, making it impossible to estimate those parameters that actually influence posterior uncertainty. We encountered a simple example of this phenomenon in Section 2.4. ## 3.1 Estimation Of Σ From (3.1) it is easy to calculate σML, the maximum likelihood estimate of σ, for any fixed λ ∈ R d + and n ∈ N0 by differentiating ˜ℓ and setting its derivative to zero. This gives $$\sigma_{\mathrm{ML}}^{2}=\frac{1}{N_{n}^{d}}\sum_{|\alpha|<n}\frac{(\mathrm{D}^{\alpha}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{c_{\alpha}\mathbf{\lambda}^{\alpha}}.$$ $$(3.2)$$ ## 3.2 Estimation Of Λ Estimation of λ for a fixed σ is more complicated and the maximum likelihood estimate does not appear to admit a closed form expression akin to that for σML in (3.2). However, something interesting can be said. We write λ = (λ1*, . . . , λ*d). Lemma 3.1. Suppose that n ≥ 1 and let 1 ≤ i ≤ d*. Then* limλi→0 ˜ℓ(λ) = −∞ if Dα[f(a) − m(a)] = 0 for every |α| ≤ n such that α(i) > 0 and limλi→0 ˜ℓ(λ) = ∞ otherwise. Proof. Assume first that Dα[f(a) − m(a)] = 0 for every |α| ≤ n such that α(i) > 0. It follows that the first term in (3.1) does not depend on λi. Because P|α|≤n log λ α *→ −∞* as λi → 0, we have ˜ℓ(λ) *→ −∞* as λi → 0. Assume then that Dβ[f(a) − m(a)] ̸= 0 for some |β| ≤ n such that β(i) > 0. Therefore $$\frac{1}{\sigma^{2}}\sum_{|\alpha|\leq n}\frac{(\mathrm{D}^{\alpha}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{c_{\mathbf{\alpha}}\mathbf{\lambda}^{\alpha}}+\sum_{|\alpha|\leq n}\log\mathbf{\lambda}^{\alpha}\geq\frac{1}{\sigma^{2}}\frac{(\mathrm{D}^{\beta}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{c_{\mathbf{\beta}}\mathbf{\lambda}^{\beta}}+\sum_{|\alpha|\leq n}\log\mathbf{\lambda}^{\alpha}$$ $$\geq\frac{C}{\lambda_{i}}+\sum_{|\alpha|\leq n}\log\mathbf{\lambda}^{\alpha}$$ for a certain positive constant C and λi ≤ 1. Since 1/x + a log x → ∞ for any a > 0 as x → 0 from the right, we conclude from the above lower bound that ˜ℓ(λ) → ∞ as λi → 0. This concludes the proof. Lemma 3.1 states that ˜ℓ(λ) attains a minimum at λi = 0 when the data are consistent with m being equal to f up to constant along dimension i. From Lemma 3.1 and the fact that ˜ℓ(λ) can tend to negative infinity only if a component of λ tends to zero we obtain the following theorem, which is essentially a special case of Theorem 5.2 in Karvonen and Oates (2023). See also Proposition 4.3 in Ben Salem et al. (2019). Theorem 3.2. Suppose that n ≥ 1 and let 1 ≤ i ≤ d. Fix λj for j ̸= i*. Then* $$\lambda_{i,\mathrm{ML}}=\arg\operatorname*{min}_{\lambda_{i}\geq0}\tilde{\ell}((\lambda_{1},\ldots,\lambda_{d}))=0$$ if and only if Dα[f(a) − m(a)] = 0 for every |α| ≤ n such that α(i) > 0*. In particular, if* d = 1*, then* $$\lambda_{\mathrm{ML}}=\operatorname*{arg\,min}_{\lambda\geq0}\tilde{\ell}(\lambda)=0$$ if and only if f (p)(a) − m(p)(a) = 0 *for every* 1 ≤ p ≤ n. For simplicity, let d = 1. If λ = 0, we see from (2.8) that Pn,a(*x, y*) = 0 for all *x, y* ∈ R. Moreover, if f (p)(a) − m(p)(a) = 0 for every 1 ≤ p ≤ n, then $$s_{n,a}(x)=\sum_{p=0}^{n}{\frac{f^{(p)}(a)-m^{(p)}(a)}{p!}}(x-a)^{p}=f(a)-m(a)$$ for every x ∈ R. That is, sn,a is a constant function. The interpretation of Theorem 3.2 is thus that when the data look like they could have been generated by the function f(x) = m(x) + c for some c ∈ R (i.e., by a constant shift of the prior), maximum likelihood estimation returns λML = 0 because this value of λ both explains the data and yield the simplest model, one of zero variance. When the posterior covariance is identically zero, the resulting degenerate posterior fGP | fa ∼ GP(f(a) − m(a), 0) does not provide useful uncertainty quantification as it is unreasonable to expect perfect predictions from a finite set of data. ## 3.3 On Simultaneous Estimation Of Σ And Λ The purpose of this section is to demonstrate that simultaneous maximum likelihood estimation of σ and λ is likely to cause problems. We consider inner product kernels of the form (2.3) with coefficients cα = c|α|α!/ |α|! and n = 1. Let ∂ p i f(x) denote the pth order partial derivative of f at x with respect to the ith coordinate. Note from cα = c|α|α!/ |α|! that cα = c0 for α = 0 and cα = c1 when |α| = 1. By differentiating (3.1) with respect to the ith component of λ we see that to obtain λML = (λ1,ML*, . . . , λ*d,ML) we need to solve $${\frac{1}{\sigma^{2}}}\sum_{|\alpha|\leq1}{\frac{\alpha(i)(\mathrm{D}^{\alpha}[f(\mathbf{a})-m(\mathbf{a})]^{2}}{c_{\mathbf{\alpha}}\lambda^{\mathbf{\alpha}}}}-\sum_{|\alpha|\leq1}\alpha(i)={\frac{(\partial_{i}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{\sigma^{2}c_{1}\lambda_{i}}}-1=0$$ for each i = 1*, . . . , d*. Equation (3.3) readily gives the maximum likelihood estimates $$(3.3)$$ $$\lambda_{i,\mathrm{ML}}={\frac{(\partial_{i}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{\sigma^{2}c_{1}}}$$ $$(3.4)$$ 2c1(3.4) for a fixed σ > 0. Inserting these to the expression for the maximum likelihood estimate σ 2 ML in (3.2) yields $$\sigma_{\mathbf{ML}}^{2}=\frac{1}{d+1}\bigg{(}\frac{[f(\mathbf{a})-m(\mathbf{a})]^{2}}{c_{0}}+\sum_{i=1}^{d}\frac{(\partial_{i}[f(\mathbf{a})-m(\mathbf{a})])^{2}}{c_{1}\lambda_{i,\mathbf{ML}}}\bigg{)}=\frac{1}{d+1}\bigg{(}\frac{[f(\mathbf{a})-m(\mathbf{a})]^{2}}{c_{0}}+d\sigma_{\mathbf{ML}}^{2}\bigg{)},$$ $$(3.5)$$ which is solved by σ 2 ML = [f(a) − m(a)]2/c0. By plugging this in (3.4) we obtain the final estimates $$\sigma_{\mathrm{ML}}^{2}={\frac{[f(\mathbf{a})-m(\mathbf{a})]^{2}}{c_{0}}}\quad{\mathrm{~and~}}\quad\lambda_{i,\mathrm{ML}}={\frac{c_{0}}{c_{1}}}{\bigg(}{\frac{\partial_{i}[f(\mathbf{a})-m(\mathbf{a})]}{f(\mathbf{a})-m(\mathbf{a})}}{\bigg)}^{2}.$$ It is clear what the problem is: If |f(a) − m(a)| is small relative to |∂i[f(a) − m(a)]| for some i (i.e., m ≈ f but ∂im ̸≈ ∂if at a), the estimate for λ becomes large, which may cause numerical problems and yields a large posterior variance. For example, let d = 1 and insert the estimates in (3.5) to the posterior variance in (2.8). This gives $$P_{n,a}(x,x)=\sum_{p=2}^{\infty}\left(\frac{c_{0}}{[f(a)-m(a)]^{2}}\right)^{p-1}\left(\frac{[f^{\prime}(a)-m^{\prime}(a)]^{2}}{c_{1}}\right)^{p}\frac{c_{p}}{(p!)^{2}}(x-a)^{2p}.$$ In practice it is therefore safest to fix one of the parameters and estimate the other. In the examples in Section 6 we fix λ and estimate σ. ## 4 Comparison To The Gaussian Kernel It is common to condition a Gaussian process defined by the Gaussian kernel on evaluations of a function and its derivative at a number of different points (Solak et al., 2002; Prüher and Särkkä, 2016; Wu et al., 2017). However, no convenient expressions for the posterior mean and variance are available in this setting. The purpose of this section is to exploit an expression from Xu and Stein (2017) and derive explicit expressions for the mean and variance when in the setting of Section 2.2 (i.e., when the data consist of derivative evaluations at a single point). Because the Gaussian kernel is not a Taylor kernel, the posterior mean and variance, although available in closed form, are more complicated than the ones for Taylor kernels in (2.8). Let ∥·∥λ = ⟨·, ·⟩λ. It is well known that the ubiquitous Gaussian kernel $$R(\mathbf{x},\mathbf{y})=\exp\left(-\,{\frac{1}{2}}\left\|\mathbf{x}-\mathbf{y}\right\|_{\mathbf{\lambda}}^{2}\right)$$ is closely connected to the exponential kernel in (2.4) via the equation $$R(\mathbf{x},\mathbf{y})=\exp\left(-\,\frac{1}{2}\big{[}(\mathbf{x},\mathbf{x})_{\mathbf{\lambda}}-2(\mathbf{x},\mathbf{y})_{\mathbf{\lambda}}+\langle\mathbf{x},\mathbf{x}\rangle_{\mathbf{\lambda}}\big{]}\right)=\exp\left(-\,\frac{1}{2}\,\|\mathbf{x}\|_{\mathbf{\lambda}}^{2}\right)\exp(\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{\lambda}})\exp\left(-\,\frac{1}{2}\,\|\mathbf{y}\|_{\mathbf{\lambda}}^{2}\right).$$ Given such a relationship to a Taylor kernel it should come as no surprise that, given derivative data, the posterior mean and covariance in (2.6) are available in closed form for the Gaussian kernel—even though the matrix Ra is not diagonal. Let us consider the univariate case. We get $$\left.\mathrm{D}_{y}^{i}\mathrm{D}_{x}^{j}R(x,y)\right|_{\stackrel{x=a}{y=a}}=(-1)^{i}\,\mathrm{D}_{z}^{i+j}\exp\left(-\left.\frac{\lambda^{2}}{2}z^{2}\right)\right|_{z=0}=(-1)^{i}\sum_{p=0}^{\infty}(-1)^{p}\frac{\lambda^{2p}}{2^{p}p!}\mathrm{D}_{z}^{i+j}z^{2p}\right|_{z=0}.$$ When i + j is odd, all derivatives in the sum vanish, so that in this case DiyDjxR(*x, y*) x=a y=a = 0. If i + j = 2k for k ∈ N0, we have $$\mathrm{D}_{y}^{i}\mathrm{D}_{x}^{j}R(x,y)\big|_{\stackrel{x=a}{y=a}}=(-1)^{i+k}\frac{\lambda^{2k}(2k)!}{2^{k}k!}.$$ This provides us with a relatively straightforward expression for the matrix Ra in (2.7). From the Rodrigues' formula Hp(x) = (−1)pe x 2/2Dn x e −x 2/2for the probabilist's Hermite polynomials it is easy to compute ra(x): $$\left(\mathbf{r}_{a}(x)\right)_{i}=\mathrm{D}_{y}^{i}R(x,y)\big{|}_{y=a}=\mathrm{D}_{y}^{i}\exp\left(-\left.\frac{\lambda^{2}}{2}(x-y)^{2}\right)\right|_{y=a}=\lambda^{i}\exp\left(-\left.\frac{\lambda^{2}}{2}(x-a)^{2}\right)\mathrm{H}_{t}\big{(}\lambda(x-a)\big{)}.\tag{4.1}$$ Finding sn,a and Pn,a requires the inverse of Ra. Fortunately, Xu and Stein (2017, Proposition 3.2) have computed the Cholesky decomposition of the inverse of Ra. 1 The inverse of Ra has the Cholesky decomposition R−1 a = LLT, where L ∈ R (n+1)×(n+1) is a lower triangular matrix with non-zero elements (L)ij = √i!/(λ jj!(i−j)!!) when i ≥ j and i+j is even. Here i!! is the double factorial, the product of positive integers up to i that have the same parity as i. Thus $$(\mathbf{R}_{u}^{-1})_{ij}=(\mathbf{L}\mathbf{L}^{\mathsf{T}})_{ij}=\lambda^{-(i+j)}Q(i,j),\quad\text{where}\quad Q(i,j)=\sum_{\begin{subarray}{c}p=\max\{i,j\}\\ p+i\text{is even}\end{subarray}}^{n}\overline{u_{i}^{\mathsf{T}}}\overline{j!(p-i)!!(p-j)!!}.\tag{4.2}$$ Consequently, inserting (4.1) and (4.2) in (2.6) gives the convenient closed form expressions $$s_{n,a}(x)=\sum_{i=0}^n\sum_{j=0}^n(\mathbf{r}_a(x))_i(\mathbf{R}_a^{-1})_{ij}f^{(j)}(a)=\exp\left(-\ \frac{\lambda^2}{2}(x-a)^2\right)\sum_{j=0}^n\frac{f^{(j)}(a)}{\lambda^jj!}\sum_{i=0}^n\frac{Q(i,j)}{i!}\mathbf{H}_i\big(\lambda(x-a)\big)$$. and $$P_{n,a}(x,y)=\exp\bigg(-\,\frac{\lambda^{2}}{2}(x-y)^{2}\bigg)\bigg(1-e^{-\lambda^{2}(x-a)(y-a)}\sum_{i=0}^{n}\sum_{j=0}^{n}\frac{Q(i,j)}{i!j!}\mathrm{H}_{i}\big(\lambda(x-a)\big)\mathrm{H}_{j}\big(\lambda(y-a)\big)\,\bigg).$$ In particular, the posterior variance is $$P_{n,a}(x,x)=1-e^{-\lambda^{2}(x-a)^{2}}\sum_{i=0}^{n}\sum_{j=0}^{n}\frac{Q(i,j)}{i!j!}\mathrm{H}_{i}\big(\lambda(x-a)\big)\mathrm{H}_{j}\big(\lambda(x-a)\big).$$ These expressions resemble those in (2.8) for Taylor kernels. However, a notable difference is that for Taylor kernels the posterior variance blows up as |x − a| grows but for the Gaussian kernel the variance tends to a constant as |x − a*| → ∞*. As discussed in Section 2.3, both posterior variances which blow up and those that remain bounded have their uses. Similarly, the posterior mean for Taylor kernels is unbounded, while for the Gaussian kernel the mean reverts to zero (i.e., the prior mean; recall that we set m ≡ 0) far away from a. ## 5 General Orthogonal Data In this section we discuss how simple posterior formulae analogous to those derived in Section 2.3 are available for any data that are orthogonal in the sense that the data are obtained by taking RKHS inner products of f with respect to functions that are orthogonal in the RKHS. 1Note that the denominator in Equation (3.1) of Xu and Stein (2017) should have (i − j)!! in the place of (i − j). ## 5.1 Generic Construction Let Ω be an arbitrary non-empty set, P a countable index set, and (ϕp)p∈P a collection of linearly independent basis functions on Ω such that Pp∈P |ϕp(x)| 2 < ∞ for every x ∈ Ω. We may then define (at least formally) a Gaussian process fGP on Ω by setting fGP(x) = Pp∈P Zpϕp(x) for every x ∈ Ω, where Zp are i.i.d standard normal random variables. It is then straightforward to compute that $$\mathbb{E}[f_{\mathsf{GP}}(x)]=0\quad\text{and}\quad R(x,y)=\text{Cov}[f_{\mathsf{GP}}(x),f_{\mathsf{GP}}(y)]=\sum_{p\in\mathcal{P}}\phi_{p}(x)\phi_{p}(y).\tag{5.1}$$ The kernel R is positive-semidefinite. Assume that (ϕp)p∈P are an orthonormal basis of the RKHS H(R) (see Section 2.4 for RKHSs).2 Then each f ∈ H(R) has the pointwise convergent expansion f(x) = Pp∈P fpϕp(x) for the coefficients fp = ⟨*f, ϕ*p⟩H(R). Suppose that one observes γpfp for p in a finite collection *N ⊂ P* of indices and some constants γp. These data are *orthogonal* because they are obtained by taking inner products of f with a collection of functions γpϕp that are pairwise orthogonal in the RKHS. That is, each observation γpfp may be written as $$\gamma_{p}f_{p}=\langle f,\gamma_{p}\phi_{p}\rangle_{{\mathcal H}(R)}\quad\mathrm{~for~functions~such~that~}\quad\langle\gamma_{p}\phi_{p},\gamma_{q}\phi_{q}\rangle_{{\mathcal H}(R)}=0\mathrm{~if~}p\neq q.$$ The orthogonality of γpϕp implies that the corresponding covariance matrix is diagonal. A derivation similar to that in Section 2.3 then shows that the posterior mean and covariance are simply $$s(x)=\sum_{p\in\mathcal{N}}f_{p}\phi_{p}(x)=\sum_{p\in\mathcal{N}}\gamma_{p}f_{p}\frac{\phi_{p}(x)}{\gamma_{p}}\quad\text{and}\quad P(x,y)=\sum_{p\in\mathcal{P}\setminus\mathcal{N}}\phi_{p}(x)\phi_{p}(y)\tag{5.2}$$ for all *x, y* ∈ Ω. See (Wendland, 2005, Ch. 16) and (Oettershagen, 2017, Cor. 3.6) for general formulae when the data consists of applications to f of arbitrary linear functionals. Orthogonal data are known to be optimal in a certain sense and settings (Novak and Woźniakowski, 2008, Sec. 4.2.3). To connect (5.2) to the derivations for Taylor kernels, suppose for instance that f : R → R has the Taylor expansion $$f(x)=\sum_{p=0}^{\infty}{\frac{f^{(p)}(0)}{p!}}x^{p}=\sum_{p=0}^{\infty}f_{p}\phi_{p}(x),\quad{\mathrm{~where~}}\quad\phi_{p}(x)={\frac{\sigma{\sqrt{c_{p}\lambda^{p}}}}{p!}}x^{p}\quad{\mathrm{~and~}}\quad f_{p}={\frac{f^{(p)}(0)}{\sigma{\sqrt{c_{p}\lambda^{p}}}}}.$$ . In this case we therefore have the index set P = N0. Observing the N first derivatives f (p)(0) = γpfp, so that N = {0, 1*, . . . , N*} and γp = σ √cpλ p, and using (5.2) yields (2.8) with d = 1, a = 0, and m ≡ 0. Moreover, Cov[fGP(x), fGP(y)] = P∞ p=0 ϕp(x)ϕp(y) = K(*x, y*) for K the univariate Taylor kernel in (1.2). We mention two other of examples orthogonal data. ## 5.2 Mehler Kernel Let P = N0 and Ω = R. Let Hp be the pth probabilist's Hermite polynomial and ρ ∈ (0, 1). Set ϕp(x) = σ √ ρ p(p!)−1Hp(x). Then Mehler's formula yields the *Mehler kernel* $$R(x,y)=\sum_{p=0}^{\infty}\phi_{p}(x)\phi_{p}(y)=\sigma^{2}\sum_{p=0}^{\infty}\frac{\rho^{p}}{p!}\mathrm{H}_{p}(x)\mathrm{H}_{p}(y)=\frac{\sigma^{2}}{\sqrt{1-\rho^{2}}}\exp\bigg(-\frac{\rho^{2}(x^{2}+y^{2})-2\rho xy}{2(1-\rho^{2})}\bigg).$$ See Irrgeher and Leobacher (2015) and Oettershagen (2017, Sec. 3.6.4) for the Mehler kernel in the context of kernel-based approximation. If f(x) = P∞ p=0 fpϕp(x), then by the orthogonality with respect to Gaussian integration, other basic properties of the Hermite polynomials, and properties of Gaussian integrals of derivatives (e.g., Bogachev, 1998, Rmk. 1.3.5), $$\gamma_{p}f_{p}=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(x)\mathrm{H}_{p}(x)\exp\left(-\,\frac{x^{2}}{2}\right)\mathrm{d}x=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f^{(p)}(x)\exp\left(-\,\frac{x^{2}}{2}\right)\mathrm{d}x,\quad\text{where}\quad\gamma_{p}=\sigma p^{\prime}.$$ Here orthogonal data are therefore obtained by Gaussian integration of the derivatives of $f$. 2Any functions (ϕp)p∈P for which the expansion of the kernel R in (5.1) converges pointwise are a *Parseval frame* for the RKHS H(R) (Paulsen and Raghupathi, 2016, Thm. 2.10). ![12_image_0.png](12_image_0.png) Figure 2: *Left:* Posterior means and 95% credible intervals given derivative data for f(x) = sin(πx) at a = 0. The zero-mean prior uses the exponential kernel K(*x, y*) = σ 2exp(λxy) with λ = 3/2 and scale σ set using maximum likelihood. *Right:* Maximal absolute errors maxx∈[−1,1] |f(x) − sn,a(x)| and half-widths 1.96 × maxx∈[−1,1] √Pn,a(*x, x*) of the 95% credible interval over the domain Ω = [−1, 1]. ## 5.3 Periodic Kernel Let P = Z, Ω = [0, 1], and s ∈ N. Set ϕp(x) = σ √2(2πp) −scos(2πpx) and ϕ−p(x) = σ √2(2πp) −ssin(2πpx) for p ∈ N. Moreover, set ϕ0 ≡ σ. Then we obtain the *periodic Sobolev kernel* (or the *Korobov kernel*) $$R(x,y)=\sum_{p\in\mathbb{Z}}\phi_{p}(x)\phi_{p}(y)=\sigma^{2}\bigg{(}1+2\sum_{p=1}^{\infty}\frac{1}{(2\pi p)^{2n}}\cos(2\pi p(x-y))\bigg{)}=\sigma^{2}\bigg{(}1+\frac{(-1)^{n+1}}{(2s)!}\text{B}_{2s}(|x-y|)\bigg{)}\tag{5.3}$$ for *x, y* ∈ [0, 1], where B2s is the Bernoulli polynomial of degree 2s; see, for example, Wahba (1990, Sec. 2.1). If f : [0, 1] → R has the expansion f(x) = Pp∈Z fpϕp(x), then it is straightforward to compute that $$\gamma_{p}J_{p}=2\int_{0}^{1}f(x)\cos(2\pi px)\,{\rm d}x\quad\mbox{and}\quad f-_{p}=2\int_{0}^{1}f(x)\sin(2\pi px)\,{\rm d}x\quad\mbox{for}\quad p\in\mathbb{N}\tag{5.4}$$ for $\gamma_{p}=\sigma\sqrt{2}(2\pi p)^{-s}$ and $\gamma_{0}f_{0}=\int_{0}^{1}f(x)\,{\rm d}x$ for $\gamma=\sigma$. These orthogonal data are the Fourier coefficients. for $\gamma_p=\sigma\sqrt{2}(2\pi p)^{-s}$ and $\gamma_0f_0=\int_0^1f(x)\,\mathrm{d}x$ for $\gamma=\sigma$. ## 6 Two Toy Examples This section contains two numerical toy examples. Figure 2 displays a number of posterior processes and the behaviour of maximal error and standard deviation when a zero-mean Gaussian process with the Taylor kernel K(*x, y*) = σ 2exp(λxy) with λ = 3/2 is used to infer the function f(x) = sin(πx) based on noiseless derivative evaluations at a = 0, as described in Section 2. See also Figure 1. The scaling parameter σ was taken to be the maximum likelihood estimate in (3.2). From the right panel we see that the Gaussian process model is well-calibrated in the weak sense that, except for small n, f(x) is never further away from the posterior mean than maximal half-width of the 95% credible interval over the domain Ω = [−1, 1] of interest: maxx∈[−1,1] |f(x) − sn,a(x)| ≤ 1.96 × maxx∈[−1,1] √Pn,a(*x, x*). Our second example uses the periodic kernel (5.3) with s = 2 and scaled Fourier data in (5.4), so that the posterior mean and covariance are given by (5.2). We use index sets of the form N = {−n, . . . , −1, 0, 1*, . . . , n*} for n ∈ N and again use maximum likelihood to set the scaling parameter, which in this case simply yields σ 2 ML =1 2n+1 Pn p=−n (γpfp) 2. The function being inferred is f(x) = exp(x), and we compute that $$\gamma_{p}f_{p}=s_{p}[2e\pi p\sin(2\pi p)+e\cos(2\pi p)-1]\quad\mathrm{and}$$ γpfp = sp[2eπp sin(2πp) + e cos(2πp) − 1] and γ−pf−p = sp[2πp + e sin(2πp) − 2eπp cos(2πp)] for p ∈ N, where sp = 2/(4π 2p 2 + 1), and γ0f0 = e − 1. Figure 3 depicts some of the resulting posterior processes. Except at the boundaries where the Gibbs phenomenon caused by the non-periodicity of f occurs, the posteriors fare well and appear to provide reasonable quantification of predictive uncertainty. $$\gamma_{-p}f_{-p}=s_{p}[2\pi p+e\sin(2\pi p)-2e\pi p\cos(2\pi p)]$$ ![13_image_0.png](13_image_0.png) Figure 3: Posterior means and 95% credible intervals given Fourier data for f(x) = exp(x) on Ω = [0, 1]. The zero-mean prior uses the periodic kernel in (5.3) with s = 2 and scale σ set using maximum likelihood. ## 7 Conclusion We have proposed a Gaussian process model based on Taylor kernels which gives rise to a probabilistic version of the classical Taylor expansion when the data consist of derivative evaluations. Using Taylor kernels in Bayesian optimisation (Snoek et al., 2012) would be an interesting future application, where they might be expected to inherit properties from both standard Bayesian optimisation algorithms based on commonly used stationary kernels, such as the Gaussian and Matérns, and classical optimisation algorithms. Because their uncertainty explodes away from the expansion point, Taylor kernels might prove a useful alternative to stationary kernels which have a tendency to be over-exploitative in Bayesian optimisation (Bull, 2011). ## Acknowledgments TK was supported by the Academy of Finland postdoctoral researcher grant \#338567 "Scalable, adaptive and reliable probabilistic integration". FT was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We thank Chris J. Oates, Onur Teymur, and Matt Graham for helpful comments on an early version of this article. ## References Álvarez, M. A., Rosasco, L., and Lawrence, N. D. (2012). Kernels for vector-valued functions: A review. Foundations and Trends© *in Machine Learning*, 4(3):195–266. Ben Salem, M., Bachoc, F., Roustant, O., Gamboa, F., and Tomaso, L. (2019). Gaussian process-based dimension reduction for goal-oriented sequential design. *SIAM/ASA Journal on Uncertainty Quantification*, 7(4):1369–1397. Berlinet, A. and Thomas-Agnan, C. (2004). *Reproducing Kernel Hilbert Spaces in Probability and Statistics*. Springer. Bogachev, V. I. (1998). *Gaussian Measures*. Number 62 in Mathematical Surveys and Monographs. American Mathematical Society. Bull, A. D. (2011). Convergence rates of efficient global optimization algorithms. Journal of Machine Learning Research, 12:2879–2904. Cockayne, J., Oates, C. J., Ipsen, I. C. F., and Girolami, M. (2019a). A Bayesian conjugate gradient method (with discussion). *Bayesian Analysis*, 14(3):937–1012. Cockayne, J., Oates, C. J., Sullivan, T., and Girolami, M. (2019b). Bayesian probabilistic numerical methods. SIAM Review, 61(4):756–789. Conn, A. R., Gould, N. I., and Tointi, P. L. (2000). *Trust-Region Methods*, volume 1 of MPS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics. De Marchi, S. and Schaback, R. (2010). Nonstandard kernels and their applications. *Dolomites Research* Notes on Approximation, 2:16–43. Diaconis, P. (1988). Bayesian numerical analysis. In *Statistical decision theory and related topics IV*, volume 1, pages 163–175. Springer-Verlag New York. Dick, J. (2006). A Taylor space for multivariate integration. *Monte Carlo Methods and Applications*, 12(2):99–112. Eriksson, D., Dong, K., Lee, E., Bindel, D., and Wilson, A. G. (2018). Scaling Gaussian process regression with derivatives. In *Advances in Neural Information Processing Systems*, volume 31, pages 6867–6877. Fasshauer, G. and McCourt, M. (2015). *Kernel-Based Approximation Methods Using MATLAB*. Number 19 in Interdisciplinary Mathematical Sciences. World Scientific Publishing. Hairer, E., Nørsett, S. P., and Wanner, G. (1993). *Solving Ordinary Differential Equations I: Nonstiff* Problems, volume 8 of *Springer Series in Computational Mathematics*. Springer. Hennig, P. (2015). Probabilistic interpretation of linear solvers. *SIAM Journal on Optimization*, 25(1):234–260. Hennig, P., Osborne, M. A., and Kersting, H. P. (2022). *Probabilistic Numerics*. Cambridge University Press. Irrgeher, C. and Leobacher, G. (2015). High-dimensional integration on R d, weighted Hermite spaces, and orthogonal transforms. *Journal of Complexity*, 31(2):174–205. Kanagawa, M., Hennig, P., Sejdinovic, D., and Sriperumbudur, B. K. (2018). Gaussian processes and kernel methods: A review on connections and equivalences. *arXiv:1807.02582v1*. Karvonen, T. and Oates, C. J. (2023). Maximum likelihood estimation in Gaussian process regression is ill-posed. *Journal of Machine Learning Research*, 24(120):1–47. Karvonen, T., Oates, C. J., and Särkkä, S. (2018). A Bayes–Sard cubature method. In Advances in Neural Information Processing Systems, volume 31, pages 5882–5893. Karvonen, T. and Särkkä, S. (2017). Classical quadrature rules via Gaussian processes. In *27th IEEE* International Workshop on Machine Learning for Signal Processing. Kimeldorf, G. S. and Wahba, G. (1970). A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. *The Annals of Mathematical Statistics*, 41(2):495–502. Larkin, F. M. (1970). Optimal approximation in Hilbert spaces with reproducing kernel functions. Mathematics of Computation, 24(112):911–921. Liang, T. and Rakhlin, A. (2020). Just interpolate: Kernel "ridgeless" regression can generalize. Annals of Statistics, 48(3):1329–1347. Minh, H. Q. (2010). Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory. *Constructive Approximation*, 32(2):307–338. Minka, T. (2000). Deriving quadrature rules from Gaussian processes. Technical report, Statistics Department, Carnegie Mellon University. Moré, J. J. (1978). The Levenberg-Marquardt algorithm: Implementation and theory. In *Numerical Analysis*, volume 630 of *Lecture Notes in Mathematics*, pages 105–116. Springer. Novak, E. and Woźniakowski, H. (2008). *Tractability of Multivariate Problems, Volume I: Linear Information*. Number 6 in EMS Tracts in Mathematics. European Mathematical Society. Oettershagen, J. (2017). *Construction of Optimal Cubature Algorithms with Applications to Econometrics* and Uncertainty Quantification. PhD thesis, Institut für Numerische Simulation, Universität Bonn. Paulsen, V. I. and Raghupathi, M. (2016). An Introduction to the Theory of Reproducing Kernel Hilbert Spaces. Number 152 in Cambridge Studies in Advanced Mathematics. Cambridge University Press. Prüher, J. and Särkkä, S. (2016). On the use of gradient information in Gaussian process quadratures. In 26th IEEE International Workshop on Machine Learning for Signal Processing. Rasmussen, C. E. and Williams, C. K. I. (2006). *Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. MIT Press. Raudenbush, S. W., Yang, M.-L., and Yosef, M. (2000). Maximum likelihood for generalized linear models with nested random effects via high-order, multivariate Laplace approximation. *Journal of Computational* and Graphical Statistics, 9(1):141–157. Richter-Dyn, N. (1971a). Minimal interpolation and approximation in Hilbert spaces. SIAM Journal on Numerical Analysis, 8(3):583–597. Richter-Dyn, N. (1971b). Properties of minimal integration rules. II. *SIAM Journal on Numerical Analysis*, 8(3):497–508. Särkkä, S. (2013). *Bayesian Filtering and Smoothing*, volume 3 of *IMS Textbooks*. Cambridge University Press. Scheuerer, M., Schaback, R., and Schlather, M. (2013). Interpolation of spatial data - A stochastic or a deterministic problem? *European Journal of Applied Mathematics*, 24(4):601–629. Schober, M., Duvenaud, D. K., and Hennig, P. (2014). Probabilistic ODE solvers with Runge-Kutta means. In *Advances in Neural Information Processing Systems*, volume 27, pages 739–747. Schober, M., Särkkä, S., and Hennig, P. (2019). A probabilistic model for the numerical solution of initial value problems. *Statistics and Computing*, 29:99–122. Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In *Advances in Neural Information Processing Systems*, volume 25, pages 2951–2959. Solak, E., Murray-Smith, R., Leithead, W. E., Leith, D. J., and Rasmussen, C. (2002). Derivative observations in Gaussian process models of dynamic systems. In *Advances in Neural Information Processing Systems*, volume 15, pages 1057–1064. Steinwart, I. and Christmann, A. (2008). *Support Vector Machines*. Information Science and Statistics. Springer. Särkkä, S. (2011). Linear operators and stochastic partial differential equations in Gaussian process regression. In *International Conference on Artificial Neural Networks*, pages 151–158. Teymur, O., Zygalakis, K., and Calderhead, B. (2016). Probabilistic linear multistep methods. In *Advances* in Neural Information Processing Systems, volume 29, pages 4321–4328. Travelletti, C. and Ginsbourger, D. (2022). Disintegration of Gaussian measures for sequential assimilation of linear operator data. *arXiv:2207.13581v1*. Wahba, G. (1990). *Spline Models for Observational Data*. Number 59 in CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics. Wendland, H. (2005). *Scattered Data Approximation*. Number 17 in Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press. Wu, J., Poloczek, M., Wilson, A. G., and Frazier, P. I. (2017). Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, volume 30, pages 5267–5278. Xu, W. and Stein, M. L. (2017). Maximum likelihood estimation for a smooth Gaussian random field model. SIAM/ASA Journal on Uncertainty Quantification, 5(1):138–175. Zwicknagl, B. (2009). Power series kernels. *Constructive Approximation*, 29(1):61–84. Zwicknagl, B. and Schaback, R. (2013). Interpolation and approximation in Taylor spaces. Journal of Approximation Theory, 171:65–83.
Review 1: Summary: The authors consider using GPs to model epistemic uncertainty when approximating a function locally using a Taylor series expansion. They introduce Taylor kernels for both univeriate and multivariate input GPs. Perhaps the key result is that when using these kernels and conditioning on a number of known derivatives at a point, the resulting posterior GP has mean function equal to the usual Taylor expansion. That in itself of course is not useful, but the GP also provides an analytic expression for the posterior variance, which grows to infinity far from the expansion point. Further theory is developed covering error bound, noisy data, hyperparameter estimation and "orthogonal data". Two small numerical examples are presented. Strengths and Weaknesses: The paper is reasonably well written apart from missing a few definitions, e.g. I'm still unclear what "orthogonal data" are. It is on the long side and mathematically dense, I wonder if there are intermediate steps that the authors could move to an appendix to allow the reader to focus on the key results. I'm unclear if the key result (mentioned above) is novel: if so that is very elegant and in my mind the major strength of the work. Perhaps the main weakness is that practical utility is not demonstrated well. There are just two toy numerical examples, approximating sin(pi x) and exp(x). Neither test multivariate inputs. Relatedly, their own conclusion is that simultaneous estimation of sigma (signal variance) and lambda (inverse length scale) is problematic. How should this be handled in practice? Would Bayesian inference help (at some computational cost)? Requested Changes: Ideally I would like to see a more "real" motivating example where the theory can be put to practice. Minor: - I believe K in Theorem 2.1 is missing a K_a - term. - In the proof for thm 2.1 there is no need to repeat the final s and P equations. - just before section 5.2: examples OF orthogonal data Broader Impact Concerns: None ================================================== Review 2: Summary: - summary The article establishes a theoretical link between Gaussian Process regression with derivative data and a probabilistic version of Taylor's theorem expansion. The Gaussian Process framework helps quantify the uncertainty of Taylor approximations when the target and reference points are not so close. Strengths and Weaknesses: - strengths Developing methods to estimate Taylor's approximation error empirically is a good idea. Interestingly, the uncertainty depends explicitly on the kernel hyperparameters. Could the provided uncertainty quantification be exploited as a general tool for formal proofs involving the Taylor expansion? - weaknesses The clarity of the exposition should be improved. Sometimes, it is hard to understand the goal of an entire section. For example, introducing RKHS does not seem to be strictly necessary. The authors should give a practical example where estimating the uncertainty of the Taylor expansion is useful. It is unclear what is the net contribution of the paper. Gaussian Process regression with derivative data already exists. But, according to the authors, previous work only focuses on the Gaussian Kernel. Is the novelty here the extension to the more Taylor Kernels defined in Equation 2.3? If so, the authors should point out the advantages of such an extension. Compared to Gaussian kernels, Taylor kernels may diverge or grow exponentially when |x-a| is large. The authors do not discuss the practical and theoretical implications of this. It would be nice to include a set of conditions (e.g. on the coefficients of the Taylor expansion) such that the good behaviour of Gaussian kernels is reproduced. The experiments are limited. The very brief experiment section suggests general confusion about the practical impact of the proposed scheme. Requested Changes: questions: - What is a possible application of the proposed method? - How expansive is the computation of the differential data? What is the overall complexity of the proposed approach? Are Taylor kernels computationally cheaper than Gaussian kernels or standard derivative-based regression methods? - Is assuming the independence of the errors as in Section 2.5 standard? How can this happen, intuitively, when all derivatives are obtained from the same process? - Which parameters can be estimated (through ML)? Which are fixed by the Taylor structure? Is the "estimation of c_p" equivalent to choosing a specific kernel class? Does c_p need to be an explicit function of p? - What is the main difference between the proposed approach and existing methods for regression with derivative data? - Why is the possible uncontrolled behaviour of Taylor Kernels when |x-a| grows a good feature? Broader Impact Concerns: I do not have any ethical concert about this paper. ================================================== Review 3: Summary: The paper discusses the connection of Gaussian process covariance functions and Taylor series. It prooves certain connections between convergence radii, discusses fitting hyperparameter, in particular those corresponding to Taylor coefficients, and it proves and generalizes orthogonality conditions. Strengths and Weaknesses: Strengths: - The paper has well-chosen graphical examples. - While not being mathematical complicated, the results are mathematically interesting and well-interpreted. - The paper is clear in its language. Precice, when it needs to be; while still giving intution whenever possible. - The paper checks the boxes in correctness and suitable citations. - All claims are verified, mostly theoretically, but also with (few, due to space constraints) examples. - The paper was interesting to read to me and I am certain that it is interesting to be read in a wider machine learning audience. Sorry for the boring review. In my opinion, the paper is mostly ready to be published, apart from a few minor suggestions below. Weaknesses: - One weakness for practical application is the difficulty when applying the dicussed covariance functions in practice. The hyperparameters corresponding to higher order Taylor coeffcients are not determinable. Since the error estimates depend strongly on these non-observable values, the error estimates are mostly guesses - even though they work out nicely in the examples. While not hidden, this point could be stressed or more discussed in the paper. Questions: - Why is $c_\alpha>0$ infinitely often? Why exclude polynomial kernels from your class? - Why do you need the hyperparameter lambda? Why not subsume it into the $c_\alpha$'s? Requested Changes: - The characterization of GPs at the beginning of Subsection 2.2 is wrong. It is only correct, if you also demans the subsequent properties about evaluations of GPs. - Also in section 2.2, you probably want more than "R is n times differentiable in both arguments", but also that all mixed derivatives exist (and perhaps even continuity of those). - As described above, the weakness about non-observable hyperparameters and error bounds could be stressed more. Broader Impact Concerns: None ==================================================
# Recurrent Networks, Hidden States And Beliefs In Partially Observable Environments Gaspard Lambrechts gaspard.lambrechts@uliege.be Montefiore Institute, University of Liège Adrien Bolland adrien.bolland@uliege.be Montefiore Institute, University of Liège Damien Ernst *dernst@uliege.be* Montefiore Institute, University of Liège LTCI, Telecom Paris, Institut Polytechnique de Paris Reviewed on OpenReview: *https: // openreview. net/ forum? id= dkHfV3wB2l* ## Abstract Reinforcement learning aims to learn optimal policies from interaction with environments whose dynamics are unknown. Many methods rely on the approximation of a value function to derive near-optimal policies. In partially observable environments, these functions depend on the complete sequence of observations and past actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become more and more correlated with the beliefs of state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that in its hidden states, a recurrent neural network approximating the Q-function of a partially observable environment reproduces a sufficient statistic from the history that is correlated to the relevant part of the belief for taking optimal actions. ## 1 Introduction Latest advances in reinforcement learning (RL) rely heavily on the ability to approximate a value function (i.e., state or state-action value function). Modern RL algorithms have been shown to be able to produce approximations of the value functions of Markov decision processes (MDPs) from which high-quality policies can be derived, even in the case of continuous and high-dimensional state and action spaces (Mnih et al., 2015; Lillicrap et al., 2015; Mnih et al., 2016; Haarnoja et al., 2018; Hessel et al., 2018). The adaptation of these techniques to partially observable MDPs (POMDPs) is not straightforward. Indeed, in such environments, the agent only receives partial observations of the underlying states of the environment. Unlike MDPs where the value functions are written as functions of the current state, in POMDPs the value functions are written as functions of the complete sequence of observations and past actions, called the history. Moreover, the value functions of a history can equivalently be written as functions of the posterior probability distribution over the current state given this history (Bertsekas, 2012). This posterior probability distribution is called the belief and is said to be a sufficient statistic from the history for the value functions of the POMDP. However, the computation of the belief requires one to know the POMDP model and is generally intractable with large or continuous state spaces. For these two reasons, practical RL algorithms rely on the definition of the value functions as functions of the complete history (i.e., history or history-action value function), while the definition of the value functions as functions of the belief (i.e., belief or belief-action value function) is more of theoretical interest. Approximating the value functions as functions of the histories requires one to use function approximators that are able to process sequences of arbitrary length. In practice, RNNs are good candidates for such approximators (Bakker, 2001; Hausknecht & Stone, 2015; Heess et al., 2015). RNNs are parametric approximators that process sequences, time step by time step, exhibiting memory through a hidden state that is passed recurrently over time. The RNN is thus tasked with outputting the value directly from the history. We focus on the approximation of the history-action value function, or Q-function, in POMDPs using a parametric recurrent Q-learning (PRQL) algorithm. More precisely, RNNs are trained with the deep recurrent Q-network (DRQN) algorithm (Hausknecht & Stone, 2015; Zhu et al., 2017). Since we know that the belief is a sufficient statistic from the history for the Q-function of this history (Bertsekas, 2012), we investigate if RNNs, once trained, reproduce the belief filter when processing a history. This investigation is conducted in this work by studying the performance of the different agents with regard to the mutual information (MI) between their hidden states and the belief. We focus on POMDPs for which the models are known. The benchmark problems chosen are the T-Maze environments (Bakker, 2001) and the Mountain Hike environments (Igl et al., 2018). The first ones present a discrete state space, allowing one to compute the belief using Bayes' rule, and representing this distribution over the states in a vector whose dimension is equal to the number of distinct states. The second ones present a continuous state space, making the belief update intractable. We thus rely on particle filtering in order to approximate the belief by a set of states, called particles, distributed according to the belief distribution. The MI between the hidden states and the beliefs is periodically estimated during training, using the mutual information neural estimator (MINE) algorithm (Belghazi et al., 2018). The MINE estimator is extended with the Deep Set architecture (Zaheer et al., 2017) in order to process sets of particles in the case of POMDPs with continuous state-spaces. This methodology allows one to measure the ability and tendency of recurrent architecture to reproduce the belief filter when trained to approximate the Q-function. In (Mikulik et al., 2020), a similar study is performed in the meta-learning setting. In this setting, an MDP is drawn from a distribution of MDPs at each episode. This problem can be equivalently modeled as a particular subclass of POMDP. The authors show empirically, among others, that the hidden state of an RNN-based policy and the statistic of the optimal policy can be mapped one into the other with a low dissimilarity measure. In contrast, we consider arbitrary POMDPs and show empirically that information about the belief, a statistic known to be sufficient for the optimal control, is encoded in the hidden states. In Section 2, we formalise the problem of optimal control in POMDPs, we present the PRQL algorithms for deriving near-optimal policies and we explain the MINE algorithm for estimating the MI. In Section 3, the beliefs and hidden states are defined as random variables whose MI is measured. Afterwards, Section 4 displays the main results obtained for the previously mentioned POMDPs. Finally, Section 5 concludes and proposes several future works and algorithms motivated by our results. ## 2 Background In Subsection 2.1, POMDPs are introduced, along with the belief, policy, and Q-functions associated with such decision processes. Afterwards, in Subsection 2.2, we introduce the DRQN algorithm that is used in our experiments. This algorithm is a particular instance of the PRQL class of algorithms that allows to approximate the Q-function for deriving a near-optimal policy in a POMDP. Finally, in Subsection 2.3, we present the MINE algorithm that is used for estimating the MI between the hidden states and beliefs in our experiments. ## 2.1 Partially Observable Markov Decision Processes In this work, the environments are modelled as POMDPs. Formally, a POMDP P is an 8-tuple P = (S, A, O, p0*, T, R, O, γ*) where S is the state space, A is the action space, and O is the observation space. The initial state distribution p0 gives the probability p0(s0) of s0 ∈ S being the initial state of the decision process. The dynamics are described by the transition distribution T that gives the probability T(st+1 | st, at) of st+1 ∈ S being the state resulting from action at ∈ A in state st ∈ S. The reward function R gives the immediate reward rt = R(st, at, st+1) obtained after each transition. The observation distribution O gives the probability O(ot | st) to get observation ot ∈ O in state st ∈ S. Finally, the discount factor γ ∈ [0, 1[ gives the relative importance of future rewards. Taking a sequence of t actions (a0:t−1) in the POMDP conditions its execution and provides a sequence of t + 1 observations (o0:t). Together, they compose the history η0:t = (o0:t, a0:t−1) ∈ H0:t until time step t, where H0:t is the set of such histories. Let η ∈ H denote a history of arbitrary length sampled in the POMDP, and let H =S∞ t=0 H0:t denote the set of histories of arbitrary length. A policy π ∈ Π in a POMDP is a mapping from histories to actions, where Π = *H → A* is the set of such mappings. A policy π ∗ ∈ Π is said to be optimal when it maximises the expected discounted sum of future rewards starting from any history η0:t ∈ H0:t at time t ∈ N0 $$\pi^{*}\in\operatorname*{arg\,max}_{\pi\in\Pi}\ \mathbb{E}_{r,P}\left[\sum_{t^{\prime}=t}^{\infty}\gamma^{t^{\prime}-t}r_{t^{\prime}}\ \biggm{|}\eta_{0:t}\right],\ \forall\eta_{0:t}\in\mathcal{H}_{0:t},\ \forall t\in\mathbb{N}_{0}.\tag{1}$$ $$\mathbf{\Sigma}$$ The history-action value function, or Q-function, is defined as the maximal expected discounted reward that can be gathered, starting from a history η0:t ∈ H0:t at time t ∈ N0 and an action at ∈ A $$\mathcal{Q}(\eta_{0:t},\mathbf{a}_{t})=\max_{\pi\in\Pi}\,\mathbb{E}_{\pi,P}\left[\sum_{t^{\prime}=t}^{\infty}\gamma^{t^{\prime}-t}r_{t^{\prime}}\,\left|\,\eta_{0:t},\mathbf{a}_{t}\right.\right],\,\forall\eta_{0:t}\in\mathcal{H}_{0:t},\,\forall\mathbf{a}_{t}\in\mathcal{A},\,\forall t\in\mathbb{N}_{0}.\tag{1}$$ The Q-function is also the unique solution of the Bellman equation (Smallwood & Sondik, 1973; Kaelbling et al., 1998; Porta et al., 2004) $$\mathcal{Q}(\eta,\mathbf{a})=\mathbb{E}\left[r+\gamma\max_{\mathbf{a}^{\prime}\in\mathcal{A}}\mathcal{Q}(\eta^{\prime},\mathbf{a}^{\prime})\ \bigg{|}\ \eta,\mathbf{a}\right],\ \forall\eta\in\mathcal{H},\ \forall\mathbf{a}\in\mathcal{A}\tag{1}$$ where η ′ = η ∪ (a, o ′) and r is the immediate reward obtained when taking action a in history η. From equation (1) and equation (2), it can be observed that any optimal policy satisfies $$\mathbf{\Sigma}$$ $$\mathbf{\Sigma}$$ $\pi^{*}(\eta)\in\mathop{\rm arg\,max}_{{\bf a}\in{\cal A}}{\cal Q}(\eta,{\bf a}),\ \forall\eta\in{\cal H}.$ (10.1) $$\mathbf{\Sigma}$$ Let P(S) be the set of probability measures over the state space S. The belief b ∈ P(S) of a history η ∈ H is defined as the posterior probability distribution over the states given the history, such that b(s) = p(s | η), ∀s ∈ S (Thrun, 2002). The belief filter f ∗is defined as the function that maps a history η to its corresponding belief b f ∗(η) = b, ∀η ∈ H. (5) Formally, for an initial observation η = (o), the belief b = f ∗(η) is defined by $$b(\mathbf{s})={\frac{p_{0}(\mathbf{s})O(\mathbf{o}\mid\mathbf{s})}{\int_{\mathcal{S}}p_{0}(\mathbf{s}^{\prime})O(\mathbf{o}\mid\mathbf{s}^{\prime})\,\mathrm{d}\mathbf{s}^{\prime}}},\ \forall\mathbf{s}\in{\mathcal{S}}$$ , ∀s ∈ S (6) and for a history η ′ = η ∪ (a, o ′), the belief b ′ = f ∗(η ′) is recursively defined by $$b^{\prime}({\bf s}^{\prime})=\frac{O({\bf o}^{\prime}\mid{\bf s}^{\prime})\int_{\cal S}T({\bf s}^{\prime}\mid{\bf s},{\bf a})\ b({\bf s})\,{\rm d}{\bf s}}{\int_{\cal S}O({\bf o}^{\prime}\mid{\bf s}^{\prime})\int_{\cal S}T({\bf s}^{\prime}\mid{\bf s},{\bf a})\ b({\bf s})\,{\rm d}{\bf s}\,{\rm d}{\bf s}^{\prime}},\ \forall{\bf s}^{\prime}\in{\cal S}.\tag{1}$$ where b = f ∗(η). Equation (7) provides a way to update the belief b to b ′through a filter step f once observing new information (a, o ′) $$b^{\prime}=f(b;\mathbf{a},\mathbf{o}^{\prime}).$$ ′). (8) $$\mathbf{\Sigma}$$ $$\mathbf{\Sigma}$$ $\square$ $\downarrow$ . A statistic from the history is defined as any function of the history. The belief is known to be a sufficient statistic from the history in order to act optimally (Bertsekas, 2012). It means that the Q-function only depends on the history through the belief computed from this same history. It implies in particular that the Q-function takes the following form $${\mathcal{Q}}(\eta,\mathbf{a})=Q(f^{*}(\eta),\mathbf{a}),\ \forall\eta\in{\mathcal{H}},\ \forall\mathbf{a}\in{\mathcal{A}}$$ $$({\mathfrak{g}})$$ ∗(η), a), ∀η ∈ H, ∀a ∈ A (9) where Q : P(S) *× A →* R is called the belief-action value function, or Q-function. This function gives the maximal expected discounted reward starting from a belief b ∈ P(S) and an action a ∈ A, where the belief b = f ∗(η) results from an arbitrary history η ∈ H. Although the exact belief filter is often unknown or intractable, this factorisation of the Q-function still motivates the compression of the history in a statistic related to the belief, when processing the history for predicting the Q-function. ## 2.2 Parametric Recurrent Q-Learning We call PRQL the family of algorithms that aim at learning an approximation of the Q-function with a recurrent architecture Qθ, where θ ∈ R dθis the parameter vector. These algorithms are motivated by equation (4) that shows that an optimal policy can be derived from the Q-function. The strategy consists of minimising, with respect to θ, for all (η, a), the distance between the estimation Qθ(η, a) of the LHS of equation (3), and the estimation of the expectation EP [r + γ maxa′∈A Qθ(η ′, a ′)] of the RHS of equation (3). This is done by using transitions (η, a*, r,* o ′, η′) sampled in the POMDP, with η ′ = η ∪ (a, o ′). In its simplest form, given such a transition, the PRQL algorithm updates the parameters θ ∈ R dθ of the function approximator according to $$\theta\leftarrow\theta+\alpha\left(r+\gamma\max_{\mathbf{a}^{\prime}\in\mathcal{A}}\left\{\mathcal{Q}_{\theta}(\eta^{\prime},\mathbf{a}^{\prime})\right\}-\mathcal{Q}_{\theta}(\eta,\mathbf{a})\right)\nabla_{\theta}\mathcal{Q}_{\theta}(\eta,\mathbf{a}).\tag{10}$$ This update corresponds to a gradient step in the direction that minimises, with respect to θ the squared distance between Qθ(η, a) and the target r + γ maxa′∈A {Qθ(η ′, a ′)} considered independent of θ. It can be noted that, in practice, such algorithms introduce a truncation horizon H such that the histories generated in the POMDP have a maximum length of H. From the approximation Qθ, the policy πθ is given by πθ(η) = arg maxa∈A Qθ(η, a). Equation (4) guarantees the optimality of this policy if Qθ = Q. Even though it will alter the performance of the algorithm, any policy can be used to sample the transitions (η, a*, r,* o ′, η′). The function approximator Qθ of PRQL algorithms should be able to process inputs η ∈ H of arbitrary length, making RNN approximators a suitable choice. Indeed, RNNs process the inputs sequentially, exhibiting memory through hidden states that are outputted after each time step, and processed at the next time step along with the following input. More formally, let x0:t = [x0*, . . . ,* xt] with t ∈ N0 be an input sequence. At any step k ∈ {0*, . . . , t*}, RNNs maintain an internal memory state hk through the update function (11) and output a value yk through the output function (12). The initial state h−1 is given by the initialization function (13). $\mathbf{h}_{k}=u_{\theta}(\mathbf{h}_{k-1},\mathbf{x}_{k}),\ \forall k\in\mathbb{N}_{0},$ $\mathbf{y}_{k}=o_{\theta}(\mathbf{h}_{k}),\ \forall k\in\mathbb{N}_{0},$ $\mathbf{h}_{-1}=i_{\theta}.$ $$\begin{array}{l}{(11)}\\ {(12)}\end{array}$$ These networks are trained based on backpropagation through time where gradients are computed in a backward pass through the complete sequence via the hidden states (Werbos, 1990). The following recurrent architectures are used in the experiments: the long short-term memory (LSTM) by Hochreiter & Schmidhuber (1997), the gated recurrent unit (GRU) by Chung et al. (2014), the bistable recurrent cell (BRC) and recurrently neuromodulated bistable recurrent cell (nBRC) by Vecoven et al. (2021), and the minimal gated unit (MGU) by Zhou et al. (2016). In the experiments, we use the DRQN algorithm (Hausknecht & Stone, 2015; Zhu et al., 2017) to learn policies. This algorithm is a PRQL algorithm that shows good convergence even for high-dimensional problems. The DRQN algorithm is detailed in Algorithm 1 of Appendix B. In this algorithm, for a given $$(13)$$ history η0:t of arbitrary length t, the inputs of the RNN are xk = (ak−1, ok), k = 1*, . . . , t* and x0 = (0, o0), and the output of the RNN at the last time step yt = oθ(ht) ∈ R |A| gives y at t = Qθ(η0:t, at), for any at ∈ A. We also define the composition u ∗ θ : H → R dθ of equation (13) and equation (11) applied on the complete history, such that $$\mathbf{h}_{t}=u_{\theta}^{*}(\eta_{0:t})={\begin{cases}u_{\theta}(u_{\theta}^{*}(\eta_{0:t-1}),\mathbf{x}_{t}),&t\geq1\\ u_{\theta}(i_{\theta},\mathbf{x}_{t}),&t=0\end{cases}}$$ $$\left(14\right)$$ ## 2.3 Mutual Information Neural Estimator In this work, we are interested in establishing if a recurrent function approximator reproduces the belief filter during PRQL. Formally, this is performed by estimating the MI between the beliefs and the hidden states of the RNN approximator Qθ. In this subsection, we recall the concept of MI and how it can be estimated in practice. The MI is theoretically able to measure any kind of dependency between random variables (Kraskov et al., 2004). The MI between two jointly continuous random variables X and Y is defined as $$I(X;Y)=\int_{X}\int_{\mathcal{Y}}p(x,y)\log\frac{p(x,y)}{p_{X}(x)\;p_{Y}(y)}\,\mathrm{d}x\,\mathrm{d}y$$ $\int_{X}\int_{\mathcal{Y}}p(x,y)\log\frac{p(x,y)}{p_{X}(x)\;p_{Y}(y)}\,\mathrm{d}x\,\mathrm{d}y$ $\int_{X}\int_{\mathcal{Y}}p(x,y)\log\frac{p(x,y)}{p_{X}(x)\;p_{Y}(y)}\,\mathrm{d}x\,\mathrm{d}y$ $$(15)$$ where X and Y are the support of the random variables X and Y respectively, p is the joint probability density function of X and Y , and pX and pY are the marginal probability density functions of X and Y , respectively. It is worth noting that the MI can be defined in terms of the Kulback-Leibler (KL) divergence between the joint p and the product of the marginals q = pX ⊗ pY , over the joint space Z = *X × Y* $$I(X;Y)=D_{\mathrm{KL}}(p\mid\mid q)=\int_{\mathcal{Z}}p(z)\log\left({\frac{p(z)}{q(z)}}\right)\mathrm{d}z$$ $$(16)$$ In order to estimate the MI between random variables X and Y from a dataset {(xi, yi)} N i=1, we rely on the MINE algorithm (Belghazi et al., 2018). This technique is a parametric approach where a neural network outputs a lower bound on the MI, that is maximised by gradient ascent. The lower bound is derived from the Donsker-Varhadan representation of the KL-divergence (Donsker & Varadhan, 1975) $$D_{\mathrm{KL}}(p\mid\mid q)=\operatorname*{sup}_{T:\mathbb{Z}\to\mathbb{R}}\mathbb{E}_{z\sim p}\left[T(z)\right]-\log\left(\mathbb{E}_{z\sim q}\left[e^{T(z)}\right]\right)$$ $$(17)$$ where the supremum is taken over all functions T such that the two expectations are finite. The lower bound IΦ(X; Y ) on the true MI I(X; Y ) is obtained by replacing T by a parameterised function Tϕ : Z → R with ϕ ∈ Φ, and taking the supremum over the parameter space Φ of this function. If Φ corresponds to the parameter space of a neural network, then this lower bound can be approached by gradient ascent using empirical means as estimators of the expectations. The resulting procedure for estimating the MI is given in Algorithm 3 in Appendix D. ## 3 Measuring The Correlation Between The Hidden States And Beliefs In this work, we study if PRQL implicitly approximates the belief filter by reaching a high MI between the RNN's hidden states and the beliefs, that are both generated from random histories. In this section, we first explain the intuition behind this hypothesis, then we define the joint probability distribution over the hidden states and beliefs that defines the MI. As explained in Section 2, the belief filter is generally intractable. As a consequence, PRQL algorithms use approximators Qθ that directly take the histories as input. In the DRQN algorithm, these histories are processed recurrently according to equation (11), producing a new hidden state ht after each input xt = (at−1, ot) ht = uθ(ht−1; (at−1, ot)). (18) $$\mathbf{h}_{t}=u_{\theta}(\mathbf{h}_{t-1};(\mathbf{a}_{t-1},\mathbf{o}_{t})).$$ $\left(18\right)$. These hidden states should thus summarise all relevant information from past inputs in order to predict the Q-function at all later time steps. The belief is known to be a sufficient statistic from the history for these predictions (9). Moreover, the belief bt is also updated recurrently, according to equation (8) after each transition (at−1, ot) bt = f(bt−1; at−1, ot). (19) The parallel between equation (18) and equation (19), knowing the sufficiency of the belief (9), justifies the appropriateness of the belief filter f as the update function uθ of the RNN approximator Qθ. It motivates the study of the reconstruction of the belief filter by the RNN. In practice, this is done through the measurement of the MI between the hidden state ht and the belief bt at any time step t ∈ N0. Formally, for a given history length t ∈ N0, the policy πθ of the learning algorithm, as defined in Subsection 2.2, induces a distribution pπθ (η | t) over histories η ∈ H. This conditional probability distribution is zero for all history of length t ′ ̸= t. Given a distribution p(t) over the length of trajectories, the joint distribution of h and b is given by $$b_{t}=f(b_{t-1};\mathbf{a}_{t-1},\mathbf{o}_{t}).$$ $$(19)$$ $$p(\mathbf{h},b)=\sum_{t=0}^{\infty}\ p(t)\int_{\mathcal{H}}p(\mathbf{h},b\mid\eta)\ p_{\pi_{\theta}}(\eta\mid t)\,\mathrm{d}\eta$$ $$(20)$$ (η | t) dη (20) where p(h, b | η) is a Dirac distribution for h = u ∗ θ (η) and b = f ∗(η) given by equation (14) and equation (5), respectively. In the following, we estimate the MI between h and b under their joint distribution (20). ## 4 Experiments In this section, the experimental protocol and environments are described and the results are given. More specifically, in Subsection 4.1, we describe the estimates that are reported in the figures. The results are reported for four different POMDPs: the T-Maze and Stochastic T-Maze in Subsection 4.2, and the Mountain Hike and Varying Mountain Hike in Subsection 4.3. Afterwards, in Subsection 4.4, irrelevant state variables and observations are added to the decision processes, and the MI is measured separately between the hidden states and the belief of the relevant and irrelevant variables. Finally, in Subsection 4.5, we discuss the results obtained in this section, and propose an additional protocol to study their generalisation. ## 4.1 Experimental Protocol As explained in Subsection 2.2, the parameters θ of the approximation Qθ are optimised with the DRQN algorithm. After e episodes of interaction with the POMDP, the DRQN algorithm gives the policy πθe (η) = arg maxa∈A Qθe (η, a). In the experiments, the empirical cumulative reward Jˆ(θe) of the policy πθe is reported, along with the estimated MI ˆI(θe) between the random variables h and b under the distribution (20) implied by πθe . Each estimate is reported averaged over four training sessions. In addition, confidence intervals show the minimum and maximum of these estimates. The empirical return is defined as Jˆ(θe) = 1 I PI−1 i=0 PH−1 t=0 γ tr i t , where I is the number of Monte Carlo rollouts, H the truncation horizon of the DRQN algorithm, and r i t is the reward obtained at time step t of Monte Carlo rollout i. As far as the estimation of the MI is concerned, we sample time steps with equal probability p(t) = 1/H, t ∈ {0*, . . . , H* − 1}, where H is the truncation horizon of the DRQN algorithm. The uniform distribution over time steps and the current policy πθe define the probability distribution (20) over the hidden states and beliefs. The MI is estimated from samples of this distribution using the MINE estimator ˆI(θe) (see Subsection D.1 for details). The hyperparameters of the DRQN and MINE algorithms are given in Appendix E. For POMDPs with continuous state spaces, the computation of the belief b is intractable. However, a set of state particles S that follows the belief distribution f ∗(η) can be sampled, using particle filtering (see Appendix C). This set of particles could be used to construct an approximation of the belief in order to estimate the MI. This density estimation procedure is nonetheless unnecessary as the MINE network can directly process the set of particles by producing a permutation-invariant embedding of the belief using the Deep Set architecture (Zaheer et al., 2017), see Subsection D.2 for details. ## 4.2 Deterministic And Stochastic T-Mazes The T-Maze is a POMDP where the agent is tasked with finding the treasure in a T-shaped maze (see Figure 1). The state is given by the position of the agent in the maze and the maze layout that indicates whether the treasure lies up or down after the crossroads. The initial state determines the maze layout, and it never changes afterwards. The initial observation made by the agent indicates the layout. Navigating in the maze provides zero reward, except when bouncing onto a wall, in which case a reward of −0.1 is received. Finding the treasure provides a reward of 4. Beyond the crossroads, the states are always terminal. The optimal policy thus consists of going through the maze, while remembering the initial observation in order to take the correct direction at the crossroads. This POMDP is parameterised by the corridor length L ∈ N and stochasticity rate λ ∈ [0, 1] that gives the probability of moving in a random direction at any time step. The Deterministic T-Maze (λ = 0) was originally proposed in (Bakker, 2001). The discount factor is γ = 0.98. This POMDP is formally defined in Subsection A.2. ![6_image_0.png](6_image_0.png) Figure 1: T-Maze state space. As explained in Subsection 2.2, the histories can be sampled with an arbitrary policy in PRQL algorithms. In practice, the DRQN algorithm uses an ε-greedy stochastic policy that selects its action according to the current policy with probability 1 − ε, and according to the exploration policy E(A) with probability ε. Usually, the exploration policy is chosen to be the uniform distribution U(A) over the action. However, for the T-Maze, the exploration policy E(A) is tailored to this POMDP to alleviate the exploration problem, that is independent of the study of this work. The exploration policy forces one to walk through the right of the corridor with E(Right) = 1/2 and E(Other) = 1/6 where Other ∈ {Up, Left, Down}. ![6_image_1.png](6_image_1.png) Figure 2: Deterministic T-Maze (L = 50). Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes (left), and the return Jˆ(θe) with respect to the MI ˆI(θe) (right). The maximal expected return is given by the dotted line. On the left in Figure 2, the expected return is shown along with the MI between the hidden states and the belief as a function of the number of episodes, for a T-Maze of length L = 50. In order to better disambiguate between high-quality policies, the empirical return is displayed with an exponential scale in the following graphs. Both the performance of the policy and the MI increase during training. We also observe that, at any given episode, RNNs that have a higher return, such as the nBRC or the BRC, correspond to cells that have a higher MI between their hidden states and the belief. Furthermore, the LSTM that struggles to achieve a high return has a significantly lower MI than the other cells. Finally, we can see that the evolution of the MI and the return are correlated, which is highlighted on the right in Figure 2. Indeed, the return increases with the MI, with a linear correlation coefficient of 0.8233 and a rank correlation coefficient of ![7_image_0.png](7_image_0.png) 0.6419. These correlations coefficients are also detailed for each cell separately in Appendix G. It can also be noted that no RNN with less than 5 bits of MI reaches the maximal return. Figure 3: Deterministic T-Maze (L = 100). Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes (left), and the return Jˆ(θe) with respect to the MI ˆI(θe) (right). The maximal expected return is given by the dotted line. In Figure 3, we can see that all previous observations also hold for a T-Maze of length L = 100. On the left, we can see that the lower the MI, the lower the return of the policy. For this length, in addition to the LSTM, the GRU struggles to achieve the maximal return, which is reflected in the evolution of its MI that increases more slowly than for the other RNNs. It is also interesting to notice that, on average, the MGU overtake the BRC in term of return after 2000 episodes, which is also the case for the MI. Here, the linear correlation coefficient between the MI and the return is 0.5347 and the rank correlation coefficient is 0.6666. Once again, we observe that a minimum amount of MI between the hidden states and the belief is required for the policy to be optimal. Here, at least 5.0 bits of MI is necessary. ![7_image_1.png](7_image_1.png) Figure 4: Stochastic T-Maze (L = 50, λ = 0.3). Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes (left), and the return Jˆ(θe) with respect to the MI ˆI(θe) (right). The maximal expected return is given by the dotted line. In Figure 4, the results are shown for the Stochastic T-Maze with L = 50 and λ = 0.3. On the contrary to the Deterministic T-Maze, where the belief is a Dirac distribution over the states, there is uncertainty on the true state in this environment. We can nevertheless observe that previous observations hold for this environment too. The MI and the expected return are indeed both increasing throughout the training process, and the best performing RNNs, such as the BRC and nBRC, have a MI that increases faster and stays higher, while the LSTM struggles to reach both a high return and a high MI. Here, the linear correlation coefficient between the MI and the return is 0.5460 and the rank correlation coefficient is 0.6403. It can also be noticed on the right that the best performing policies have a MI of at least 4.5 bits in practice. In the Deterministic T-Maze, it can be observed that the estimated lower bounds Iϕ(h, b) on the MI that are obtained by the MINE estimator are tight. Indeed, in this environment, the hidden state and belief are discrete random variables and their mutual information is thus upper bounded by the entropy of the belief. Moreover, the belief is a Dirac distribution that gives the actual state with probability one. Under the optimal policy, each state is visited with equal probability, such that the entropy of the belief is given by log2 (102) = 6.6724 for the Deterministic T-Maze of length L = 50, where 102 is the number of non terminal states. As can be seen in Figure 2, the optimal policies reach an estimated MI around 6.5 at maximum, which nearly equals the upper bound. The same results is obtained for the Deterministic T-Maze of length L = 100, where the entropy of the belief is given by log2 (202) = 7.658 and the optimal policies reach an estimated MI around 7.0 at maximum, as can be seen in Figure 3. We expect this result to generalise to other environments even if this would be difficult to verify in practice for random variables with large or continuous spaces. ## 4.3 Mountain Hike And Varying Mountain Hike The Mountain Hike environment is a POMDP modelling an agent walking through a mountainous terrain. The agent has a position on a two-dimensional map and can take actions to move in four directions relative to its initial orientation: Forward, Backward, Right and Left. First, we consider that its initial orientation is always North. Taking an action results in a noisy translation in the corresponding direction. The translation noise is Gaussian with a standard deviation of σT = 0.05. The only observation available is a noisy measure of its relative altitude to the mountain top, that is always negative. The observation noise is Gaussian with a standard deviation of σO = 0.1. The reward is also given by this relative altitude, such that the goal of this POMDP is to to obtain the highest possible cumulative altitude. Around the mountain top, the states are terminal. The optimal policy thus consists of going as fast as possible towards those terminal states while staying on the crests in order to get less negative rewards than in the valleys. This environment is represented in Figure 5. This POMDP is inspired by the Mountain Hike environment described in (Igl et al., 2018). The discount factor is γ = 0.99. We also consider the Varying Mountain Hike in the experiments, a more difficult version of the Mountain Hike where the agent randomly faces one of the four cardinal directions (i.e., North, West, South, East) depending on the initial state. The agent does not observe its orientation. As a consequence, the agent needs to maintain a belief about its orientation given the observations in order to act optimally. This POMDP is formally defined in Subsection A.3. Figure 5: Mountain hike altitude function. Figure 6 shows on the left the expected return and the MI during training for the Mountain Hike environment. It is clear that the DRQN algorithm promotes a high MI between the belief and the hidden states of the RNN, even in continuous-state environments. It can also be seen that the evolution of the MI and the evolution of the return are strongly linked throughout the training process, for all RNNs. We can also see on the right in Figure 6 that the correlation between MI and performances appears clearly for each RNN. For all RNNs, the linear correlation coefficient is 0.5948 and the rank correlation coefficient is 0.2965. In particular, we see that the best policies, with a return around −20, are clearly separated from the others and have a significantly higher MI on average. In Figure 7, we can see the evolution and the correlation between the return and the MI for the Varying Mountain Hike environment. The correlation is even clearer than for the other environments. This may be due to the fact that differences in term of performances are more pronounced than for the other experiments. Again, the worse RNNs such as the LSTM and the BRC have a significantly lower MI compared to the other cells. In addition, the performances of any RNN is strongly correlated to their ability to reproduce the belief filter, as can be seen on the right, with a sharp increase in empirical return as the MI increases from 2.5 to ![8_image_0.png](8_image_0.png) ![9_image_0.png](9_image_0.png) Figure 6: Mountain Hike. Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes (left), and the return Jˆ(θe) with respect to the MI ˆI(θe) (right). ![9_image_1.png](9_image_1.png) Figure 7: Varying Mountain Hike. Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes (left), and the return Jˆ(θe) with respect to the MI ˆI(θe) (right). 4.5 bits. More precisely, the linear correlation coefficient between the MI and the return is 0.5982 and the rank correlation coefficient is 0.6176. This increase occurs throughout the training process, as can be seen on the left. ## 4.4 Belief Of Variables Irrelevant For The Optimal Control Despite the belief being a sufficient statistic from the history in order to act optimally, it may be that only the belief of some state variables is necessary for optimal control. In this subsection, we show that approximating the Q-function with an RNN will only tend to reconstruct the necessary part, naturally filtering away the belief of irrelevant state variables. In order to study this phenomenon, we construct a new POMDP P ′from a POMDP P by adding new state variables, independent of the original ones, and irrelevant for optimal control. More precisely, we add d irrelevant state variables s I t that follows a Gaussian random walk. In addition, the agent acting in the POMDP P ′ obtains partial observations o I t of the new state variables through an unbiased Gaussian observation model. Formally, the new states and observations are distributed according to $$\begin{array}{c}{{p(\mathbf{s}_{0}^{I})=\phi(\mathbf{s}_{0}^{I};\mathbf{0},\mathbf{1})}}\\ {{p(\mathbf{s}_{t+1}^{I}\mid\mathbf{s}_{t}^{I})=\phi(\mathbf{s}_{t+1}^{I};\mathbf{s}_{t}^{I},\mathbf{1}),\ \forall t\in\mathbb{N}_{0},}}\end{array}$$ ; 0, 1) (21) , 1), ∀t ∈ N0, (22) $\left(21\right)^{2}$ $\left(22\right)^{2}$ $$p(\mathbf{o}_{t}^{I}\mid\mathbf{s}_{t}^{I})=\phi(\mathbf{o}_{t}^{I};s_{t}^{I},\mathbf{1}),\ \forall t\in\mathbb{N}_{0},$$ , 1), ∀t ∈ N0, (23) ![10_image_0.png](10_image_0.png) where ϕ(x; µ, Σ) is the probability density function of a multivariate random variable of mean µ ∈ R d and covariance matrix Σ ∈ R d×d, evaluated at x ∈ R d, and 1 is the identity matrix. Figure 8: Deterministic T-Maze (L = 50) with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the GRU cell. The maximal expected return is given by the dotted line. Figure 8 shows the return and the MI measured for the GRU on the T-Maze environment with L = 50. It can be observed, as for the classic T-Maze environment, that the MI between the hidden states and the belief of state variables that are relevant to optimal control increases with the return. In addition, the MI with the belief of irrelevant variables decreases during training. It can also be seen that, for d = 4, the MI with the belief of irrelevant variables remains higher than the MI with the belief of relevant variables, due to the high entropy of this irrelevant process. Finally, it is interesting to note that the MI continues to increase (resp. decrease) with the belief of relevant (resp. irrelevant) variables long after the optimal policy is reached, suggesting that the hidden states of the RNN still change substantially. Similar results are obtained for the other cells (see Appendix H). ![10_image_1.png](10_image_1.png) Figure 9: Mountain Hike with with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the GRU cell. Figure 9 shows the return and the MI measured for the GRU on the Mountain Hike environment. The same conclusions as for the T-Maze can be drawn, with a clear increase of the MI for the relevant variables throughout the training process, and a clear decrease of the MI for the irrelevant variables. In addition, it can be seen that the optimal policy is reached later when there are more irrelevant variables. It is also clear that adding more irrelevant variables increases the entropy of the irrelevant process, which leads to a higher MI between the hidden states and the irrelevant state variables. Similar results are obtained for the other cells (see Appendix H). ## 4.5 Discussion As shown in the experiments, under the distribution induced by a recurrent policy trained using recurrent Q-learning, its hidden state provide a high amount of information about the belief of relevant state variables, at any time step. The hidden state of the RNN is thus a statistic from the history that encodes information about the belief. In addition, at any time step, the network performs an update of this statistic, based on the actions and observations that are observed. The RNN thus implements a filter that provides a statistic encoding the belief. However, it was only shown that the RNN produces such a statistic under the distribution of histories induced by the learned policy. For the sake of robustness of the policy to perturbations of histories, we might want this statistic to also provide information about the belief under other distribution of histories. In Appendix F, we propose an experimental protocol to study the generalisation of the learned statistics. The results show that the MI between the hidden states and the beliefs also increases throughout the training process, under distributions induced by various ε-greedy policies, even the fully random policy. We impute those results to the following reasons. First, the DRQN algorithm approximates the Q-function, which generally requires a richer statistic from the history than the optimal policy. Second, the DRQN algorithm makes use of exploration, which allows the RNN to learn from histories that are diverse. However, we still observe that the higher the noise, the lower the MI. From these results, we conclude that the statistic that is learned by the network generalises reasonably well to other distributions of histories. ## 5 Conclusions In this work, we have shown empirically for several POMDPs that RNNs approximating the Q-function with a recurrent Q-learning algorithm (Hausknecht & Stone, 2015; Zhu et al., 2017) produces a statistic in their hidden states that provide a high amount of information about the belief of state variables that are relevant for optimal control. More precisely, we have shown that the MI between the hidden states of the RNN and the belief of states variables that are relevant for optimal control was increasing throughout the training process. In addition, we have shown that the ability of a recurrent architecture to reproduce, through a high MI, the belief filter conditions the performance of its policy. Finally, we showed that the MI between the hidden states and the beliefs of state variables that are irrelevant for optimal control decreases through the training process, suggesting that RNNs only focus on the relevant part of the belief. This work also opens up several paths for future work. First, this work suggests that enforcing a high MI between the hidden states and the beliefs leads to an increase in the performances of the algorithm and in the return of the resulting policy. While other works have focused on an explicit representation of the belief in the hidden states (Karkus et al., 2017; Igl et al., 2018), which required to design specific recurrent architectures, we propose to implicitly embed the belief in the hidden state of any recurrent architecture by maximising their MI. When the belief or state particles are available, this can be done by adding an auxiliary loss such that the RNN also maximises the MI. In practice, this can be implemented by backpropagating the MINE loss beyond the MINE architecture through the unrolled RNN architecture, such that the hidden states are optimized to get a higher MI with the beliefs. Moreover, this work could be extended to algorithms that approximate other functions of the histories than the Q-function. Notably, this study could be extended to the hidden states of a recurrent policy learned by policy-gradient algorithms or to the hidden states of the actor and the critic in actor-critic methods. We may nevertheless expect to find similar results since the value function of a policy tends towards the optimal value function when the policy tends towards the optimal policy. ## Acknowledgments Gaspard Lambrechts gratefully acknowledges the financial support of the *Wallonia-Brussels Federation* for his FRIA grant and the financial support of the *Walloon Region* for Grant No. 2010235 - ARIAC by DW4AI. Adrien Bolland gratefully acknowledges the financial support of the *Wallonia-Brussels Federation* for his FNRS grant. Computational resources have been provided by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the *Fonds de la Recherche Scientifique de Belgique* (F.R.S.-FNRS) under Grant No. 2502011 and by the Walloon Region. ## References Bram Bakker. Reinforcement learning with long short-term memory. *Advances in neural information processing systems*, 14, 2001. Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In *International conference on machine learning*, pp. 531–540, 2018. Dimitri Bertsekas. *Dynamic programming and optimal control: Volume I*, volume 1. Athena scientific, 2012. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*, 2014. Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. *Communications on Pure and Applied Mathematics*, 28(1):1–47, 1975. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861–1870, 2018. Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. In *Association for the advancement of artificial intelligence fall symposium series*, 2015. Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. *arXiv preprint arXiv:1512.04455*, 2015. Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In *Thirty-second association for the advancement of artificial intelligence conference on artificial* intelligence, 2018. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997. Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deep variational reinforcement learning for POMDPs. In *International Conference on Machine Learning*, pp. 2117–2126, 2018. Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. *Artificial intelligence*, 101(1-2):99–134, 1998. Peter Karkus, David Hsu, and Wee Sun Lee. QMDP-net: Deep learning for planning under partial observability. *Advances in neural information processing systems*, 30, 2017. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical review E, 69(6):066138, 2004. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Vladimir Mikulik, Grégoire Delétang, Tom McGrath, Tim Genewein, Miljan Martic, Shane Legg, and Pedro Ortega. Meta-trained agents implement Bayes-optimal agents. *Advances in neural information processing* systems, 33:18691–18703, 2020. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International conference on machine learning*, pp. 1928–1937, 2016. Josep M. Porta, Matthijs T. J. Spaan, and Nikos Vlassis. Value iteration for continuous-state POMDPs. Technical Report IAS-UVA-04-04, December 2004. Richard D Smallwood and Edward J Sondik. The optimal control of partially observable markov processes over a finite horizon. *Operations research*, 21(5):1071–1088, 1973. Sebastian Thrun. Probabilistic robotics. *Communications of the ACM*, 45(3):52–57, 2002. Nicolas Vecoven, Damien Ernst, and Guillaume Drion. A bio-inspired bistable recurrent cell allows for long-lasting memory. *Plos one*, 16(6):e0252676, 2021. Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the institute of electrical and electronics engineers, 78(10):1550–1560, 1990. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in neural information processing systems*, 30, 2017. Guo-Bing Zhou, Jianxin Wu, Chen-Lin Zhang, and Zhi-Hua Zhou. Minimal gated unit for recurrent neural networks. *International Journal of Automation and Computing*, 13(3):226–234, 2016. Pengfei Zhu, Xin Li, Pascal Poupart, and Guanghui Miao. On improving deep reinforcement learning for POMDPs. *arXiv preprint arXiv:1704.07978*, 2017. ## A Environments In this section, the class of environments that are considered in this work are introduced. Then, the environments are formally defined. ## A.1 Class Of Environments In the experiments, the class of POMDPs that are considered is restricted to those where we can observe from ot if a state st is terminal. A state s ∈ S is said to be terminal if, and only if $$\begin{cases}T(\mathbf{s}^{\prime}\mid\mathbf{s},\mathbf{a})=\delta_{\mathbf{s}}(\mathbf{s}^{\prime}),\;\forall\mathbf{s}^{\prime}\in\mathcal{S},\forall\mathbf{a}\in\mathcal{A}\\ R(\mathbf{s},\mathbf{a},\mathbf{s})=0,\;\forall\mathbf{a}\in\mathcal{A}\end{cases}$$ $$(25)$$ $$(24)$$ where δs denotes the Dirac distribution centred in s ∈ S. As can be noted, the expected cumulative reward of any policy when starting in a terminal state is zero. As a consequence, the Q-function of a history for which we observe a terminal state is also zero for any initial action. The PRQL algorithm thus only has to learn the Q-function of histories that have not yet reached a terminal state. It implies that the histories that are generated in the POMDP can be interrupted as soon as a terminal state is observed. ## A.2 T-Maze Environments The T-Maze environment is a POMDP (S, A, O, p0*, T, R, O, γ*) parameterised by the maze length L ∈ N and the stochasticity rate λ ∈ [0, 1]. The formal definition of this environment is given below. ![14_image_0.png](14_image_0.png) Figure 10: T-Maze state space. Initial states in blue, terminal states in grey, and treasure states hatched. State space. The discrete state space S is composed of the set of positions C for the agent in each of the two maze layouts M. The maze layout determines the position of the treasure. Formally, we have $$(26)$$ $$\begin{array}{c}{{(27)}}\\ {{(28)}}\end{array}$$ A state st ∈ S is thus defined by st = (mt, ct) with mt ∈ M and ct ∈ C. Let us also define F = {st = (mt, ct) ∈ S | ct ∈ {(L, 1),(L, −1)}} the set of terminal states, four in number. $$\left\{\begin{array}{l}{\mathcal{S}=\mathcal{M}\times\mathcal{C}}\\ {\mathcal{M}=\{\mathrm{Up,Down}\}}\\ {\mathcal{C}=\{(0,0),\ldots,(L,0)\}\cup\{(L,1),(L,-1)\}}\end{array}\right.$$ Action space. The discrete action space A is composed of the four possible moves that the agent can take $${\mathcal{A}}=\{(1,0),(0,1),(-1,0),(0,-1)\}$$ that correspond to Right, Up, Left and Down, respectively. Observation space. The discrete observation space O is composed of the four partial observations of the state that the agent can perceive $${\mathcal{O}}=\{\mathrm{Up,Down,Corridor,Junuction}\}\,.$$ O = {Up, Down, Corridor, Junction} . (30) Initial state distribution. The two possible initial states are s Up 0 = (Up,(0, 0)) and s Down 0 = (Down,(0, 0)), depending on the maze in which the agent lies. The initial state distribution p0 : S → [0, 1] is thus given by $$p_{0}(\mathbf{s}_{0})=\begin{cases}0.5&\text{if}\mathbf{s}_{0}=\mathbf{s}_{0}^{\text{Up}}\\ 0.5&\text{if}\mathbf{s}_{0}=\mathbf{s}_{0}^{\text{Down}}\\ 0&\text{otherwise}\end{cases}$$ $$(29)$$ $$(30)$$ $$(31)$$ Transition distribution. The transition distribution function T : *S × A × S →* [0, 1] is given by $$T(\mathbf{s}_{t+1}\mid\mathbf{s}_{t},\mathbf{a}_{t})={\begin{cases}\delta_{\mathbf{s}_{t}}(\mathbf{s}_{t+1})&\text{if}\mathbf{s}_{t}\in\mathcal{F}\\ (1-\lambda)\delta_{f(\mathbf{s}_{t},\mathbf{a}_{t})}(\mathbf{s}_{t+1})+{\frac{\lambda}{4}}\left(\sum_{\mathbf{a}\in A}\delta_{f(\mathbf{s}_{t},\mathbf{a})}(\mathbf{s}_{t+1})\right)&\text{otherwise}\end{cases}}$$ $$(32)$$ where st ∈ S, at ∈ A and st+1 ∈ S, and f is given by $$f(\mathbf{s}_{t},\mathbf{a}_{t})={\begin{cases}\mathbf{s}_{t+1}=(\mathbf{m}_{t},\mathbf{c}_{t}+\mathbf{a}_{t})&{\mathrm{if~}}\mathbf{s}_{t}\not\in\mathcal{F},\mathbf{c}_{t}+\mathbf{a}_{t}\in\mathcal{C}\\ \mathbf{s}_{t+1}=(\mathbf{m}_{t},\mathbf{c}_{t})&{\mathrm{otherwise}}\end{cases}}$$ $$(33)$$ where st = (mt, ct) ∈ S and at ∈ A. Reward function. The reward function R : *S × A × S →* R is given by R(st, at, st+1) = 0 if st ∈ F 0 if st ̸∈ F, st+1 ̸∈ F, st ̸= st+1 −0.1 if st ̸∈ F, st+1 ̸∈ F, st = st+1 4 if st ̸∈ F, st+1 ∈ F, ct+1 = ((L, 1) if mt+1 = Up (L, −1) if mt+1 = Down (34) −0.1 if st ̸∈ F, st+1 ∈ F, ct+1 = ((L, −1) if mt+1 = Up (L, +1) if mt+1 = Down where st = (mt, ct) ∈ S, at ∈ A and st+1 = (mt+1, ct+1) ∈ S. Observation distribution. In the T-Maze, the observations are deterministic. The observation distribution O: *S × O →* [0, 1] is given by $$O(\mathbf{o}_{t}\mid\mathbf{s}_{t})=\left\}\right.$$ $$\begin{array}{r l}{{\left\{\begin{array}{l l}{1}&{{}{\mathrm{~if~}}{\mathbf{o}}_{t}=\mathrm{Up},\mathbf{c}_{t}=(0,0),\mathbf{m}_{t}=\mathrm{Up}}\\ {1}&{{}{\mathrm{~if~}}{\mathbf{o}}_{t}=\mathrm{Down},\mathbf{c}_{t}=(0,0),\mathbf{m}_{t}=\mathrm{Down}}\\ {1}&{{}{\mathrm{~if~}}{\mathbf{o}}_{t}=\mathrm{Corridor},\mathbf{c}_{t}\in\left\{(1,0),\ldots,(L-1,0)\right\}}\\ {1}&{{}{\mathrm{~if~}}{\mathbf{o}}_{t}=\mathrm{Junuction},\mathbf{c}_{t}\in\left\{(L,0),(L,1),(L,-1)\right\}}\\ {0}&{{}{\mathrm{~otherwise}}}\end{array}\right.}}\end{array}$$ $$(35)$$ where st = (mt, ct) ∈ S and ot ∈ O. Exploration policy. The exploration policy E : A → [0, 1] is a stochastic policy that is given by E(Right) = 1/2 and E(Other) = 1/6 where Other ∈ {Up, Left, Down}. It enforces the exploration of the right hand side of the maze layouts. This exploration policy, tailored to the T-Maze environment, allows one to speed up the training procedure, without interfering with the study of this work. Truncation horizon. The truncation horizon H of the DRQN algorithm is chosen such that the expected displacement of an agent moving according to the exploration policy in a T-Maze with an infinite corridor on both sides is greater than L. Let r = E(Right) and l = E(Left). In this infinite T-Maze, the probability of increasing its position is p = (1−λ)r +λ 1 4 and the probability of decreasing its position is q = (1−λ)l +λ 1 4 . As a consequence, starting at 0, the expected displacement after one time step is x¯1 = (1 − λ)(r − l). By independence, x¯H = Hx¯1 such that, for x¯H ≥ L, the time horizon is given by $$H=\left[\frac{L}{(1-\lambda)(r-l)}\right].\tag{36}$$ ## A.3 Mountain Hike Environments The Varying Mountain Hike environment is a POMDP (S, A, O, p0*, T, R, O, γ*) parameterised by the sensor variance σO ∈ R and the transition variance σT ∈ R. The formal definition of this environment is given below. ![16_image_0.png](16_image_0.png) Figure 11: Mountain hike altitude function h in X . State space. The state space S is the set of positions X and orientations C that the agent can take. Formally, we have $\begin{cases}\mathcal{S}=\mathcal{X}\times\mathcal{C}\\ \mathcal{X}=[-1,1]^2\\ \mathcal{C}=\{0^\circ,90^\circ,180^\circ,270^\circ\}\end{cases}$ 08 corresponds to facing East, North, West and South, report. $$(37)$$ $\left(\text{17}\right)$ (38) (39) $$(40)$$ The orientation c = 0◦, 90◦, 180◦ and 270◦corresponds to facing East, North, West and South, respectively. The set of terminal states is F = {s = (x, c) *∈ S | ∥*x − (0.8, 0.8)∥< 0.1}. Action space. The discrete action space A is composed of the four possible directions in which the agent can move A = {(0, 0.1),(−0.1, 0),(0, −0.1),(0.1, 0)} (40) that correspond to Forward, Left, Backward and Right, respectively. $\mathcal{A}=\{(0,0.1),(-0.1,0),(0,-0.1),(0.1,0)\}$ $\downarrow$, Backward and Right, respectively. Observation space. The continuous observation space is O = R. Initial state distribution. The initial position is always is always x = (−0.8, −0.8) and the initial orientation is sampled uniformly in C, such that the initial state distribution p0 : S → [0, 1] is given by $$p_{0}({\bf s}_{0})=\sum_{{\bf e}\in{\cal C}}\frac{1}{|{\cal C}|}\delta_{((-0.8,-0.8),{\bf e})}({\bf s}_{0})\tag{10}$$ $$(41)$$ Transition distribution. The transition distribution T : *S × A × S →* [0, 1] is given by the conditional probability distribution of the random variable (st+1 | st, at) that is defined as $$\mathbf{s}_{t+1}={\begin{cases}\mathbf{s}_{t}&{\mathrm{if}}\ \mathbf{s}_{t}\in{\mathcal{F}}\\ \operatorname{clamp}_{{\mathcal{S}}}\left(\mathbf{s}_{t}+R(\mathbf{c})\,\mathbf{a}_{t}+{\mathcal{N}}(0,\sigma_{T})\right)&{\mathrm{otherwise}}\end{cases}}$$ $$\left(42\right)$$ where clampS (s) is the function that maps s to the point in S that minimizes its distance with s, and $$R(\mathbf{c})={\begin{pmatrix}\cos\mathbf{c}&-\sin\mathbf{c}\\ \sin\mathbf{c}&\cos\mathbf{c}\end{pmatrix}}$$ $$(43)$$ (43) is the two-dimensional rotation matrix for an angle c. Reward function. The reward function R: *S × A × S →* R is given by $$R(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1})=\begin{cases}0&\text{if}\mathbf{s}_{t}\in\mathcal{F}\\ h(\mathbf{s}_{t+1})&\text{otherwise}\end{cases}\tag{1}$$ $$(44)$$ where st ∈ S, at ∈ A, st+1 ∈ S, and h : S → R − is the function that gives the relative altitude to the mountain top in any state. Note that the altitude is independent of the agent orientation. Observation distribution. The observation distribution O: *S × O →* [0, 1] is given by $\iota\;\mathcal{O}\to[0,1]$ is given by: . $$O(\mathbf{o}_{t}\mid\mathbf{s}_{t})=\phi(\mathbf{o}_{t};h(\mathbf{s}_{t}),\sigma_{O}^{2})$$ $$(45)$$ O(ot | st) = ϕ(ot; h(st), σ2O) (45) where st ∈ S and ot ∈ O, and where ϕ(·; *µ, σ*2) denotes the probability density function of a univariate Gaussian random variable with mean µ and standard deviation σ. Mountain Hike. The Mountain Hike environment is a POMDP (S, A, O, p0*, T, R, O, γ*), parameterised by the sensor variance σO ∈ R and the transition variance σT ∈ R. The formal definition of this environment is identical to that of the Varying Mountain Hike, except that the initial orientation of the agent is always North, which makes it an easier problem. The initial state distribution is thus given by $$(46)$$ $$p_{0}({\bf s}_{0})=\delta_{((-0.8,-0.8),90^{\circ})}({\bf s}_{0}).\tag{1}$$ p0(s0) = δ((−0.8,−0.8),90◦)(s0). (46) Exploration policy. The uniform distribution U(A) over the action space A is chosen as the exploration policy E(A). Truncation horizon. The truncation horizon of the DRQN algorithm is chosen equal to H = 80 for the Mountain Hike environment and H = 160 for the Varying Mountain Hike environment. ## B Deep Recurrent Q-Network The DRQN algorithm is an instance of the PRQL algorithm that introduces several improvements over vanilla PRQL. First, it is adapted to the online setting by interleaving the generation of episodes and the update of the estimation Qθ. In addition, in the DRQN algorithm, the episodes are generated with the εgreedy policy σ ε θ : *H → P*(A), derived from the current estimation Qθ. This stochastic policy selects actions according to arg maxa∈A Qθ(·, a) with probability 1−ε, and according to an exploration policy E(A) ∈ P(A) with probability ε. In addition, a replay buffer of histories is used and the gradient is evaluated on a batch of histories sampled from this buffer. Furthermore, the parameters θ are updated with the Adam algorithm (Kingma & Ba, 2014). Finally, the target rt + γ maxa∈A Qθ ′ (η0:t+1, a) is computed using a past version Qθ ′ of the estimation Qθ with parameters θ ′that are updated to θ less frequently, which eases the convergence towards the target, and ultimately towards the Q-function. The DRQN training procedure is detailed in Algorithm 1. ![18_image_0.png](18_image_0.png) ## C Particle Filtering As explained in Section 2, the belief filter becomes intractable for certain POMDPs. In particular, POMDPs with continuous state space require one to perform an integration over the state space. Furthermore, in these environments, the belief should be represented by a function over a continuous domain instead of a finite-dimensional vector. Such arbitrary beliefs cannot be represented in a digital computer. To overcome these two difficulties, the particle filtering algorithm proposes to represent an approximation of the belief by a finite set of samples that follows the belief distribution. In other words, we represent bt ∈ P(S) by the set of M samples $$S_{t}=\left\{\mathbf{s}_{t}^{m}\right\}_{m=0}^{M-1}$$ m=0 (47) where s m t ∈ S, m = 0*, . . . , M* − 1 being independent realisations of the distribution bt. $$\left(47\right)$$ Particle filtering is a procedure that allows one to sample a set of states St that follow the belief distribution bt. The set is thus updated each time that a new action at−1 is taken and a new observation ot is observed. Although this procedure does not require to evaluate expression (8), it is necessary to be able to sample from the initial state distribution p0 and from the transition distribution T, and to be able to evaluate the observation distribution O. This process, illustrated in Algorithm 2, guarantees that the successive sets S0*, . . . , S*H have (weighted) samples following the probability distribution b0*, . . . , b*H defined by equation (8). Algorithm 2: Particle filtering Parameters: M ∈ N the number of particles Inputs : (S, A, O, T, R, O, p0, γ) a POMDP. H ∈ N the number of transitions η0:H = (o0, a0, . . . , oH−1, aH−1, oH) ∈ H0:H a history // Generate weighted samples following the initial belief b0 1 Sample s 0 0 , . . . , s M−1 0 ∼ p0 2 η ← 0 3 for m = 0*, . . . , M* − 1 do 4 wm 0 ← O(o0 | sm 0 ) 5 η ← η + wm 0 6 for m = 0*, . . . , M* − 1 do 7 wm 0 ← wm 0 /η 8 S0 =(sm 0 , wm 0 ) M−1 m=0 // Generate successive weighted samples following the beliefs b1*, . . . , b*H 9 for t = 1*, . . . , H* do 10 η ← 0 11 for m = 0*, . . . , M* − 1 do 12 Sample l ∈ {0*, . . . , M* − 1} according to p(l) = wlt−1 13 Sample sm t ∼ T(· | s l t−1 , at−1) 14 wm t ← O(ot | sm t ) 15 η ← η + wm t 16 for m = 0*, . . . , M* − 1 do 17 wm t ← wm t /η 18 St =(sm t , wm t ) M−1 m=0 Algorithm 2 starts from N samples from the initial distribution p0. These samples are initially weighted by their likelihood O(o0 | s n 0 ). Then, we have three steps that are repeated at each time step. First, the samples are resampled according to their weights. Then, given the action, the samples are updated by sampling from T(· | s n t , at). Finally, these new samples are weighted by their likelihood O(ot+1 | s n t+1) given the new observation ot+1, as for the initial samples. As stated above, this method ensures that the (weighted) samples follow the distribution of the successive beliefs. ## D Mutual Information Neural Estimator In Subsection D.1, the MI estimator that is used in the experiments is formally defined, and the algorithm that is used to derive this estimator is detailed. In Subsection D.2, we formalise the extension of the MINE algorithm with the Deep Set architecture. ## D.1 Estimator As explained in Subsection 2.3, the ideal MI neural estimator, for a parameter space Φ, is given by $$I_{\Phi}(X;Y)=\operatorname*{sup}_{\phi\in\Phi}i_{\phi}(X;Y)$$ iϕ(X; Y ) (48) $$i_{\phi}(X;Y)=\operatorname*{\mathbb{E}}_{z\sim p}\left[T_{\phi}(z)\right]-\log\left(\operatorname*{\mathbb{E}}_{z\sim q}\left[e^{T_{\phi}(z)}\right]\right)$$ Tϕ(z)i (49) However, both the estimation of the expectations and the computation of the supremum are intractable. In practice, the expectations are thus estimated with the empirical means over the set of samples {(x n, y n)} N−1 n=0 $$(48)$$ $$(49)$$ drawn from the joint distribution p and the set of samples {(x n, y˜ n)} N−1 n=0 obtained by permuting the samples from Y , such that the pairs follow the product of marginal distributions q = pX ⊗ pY . In order to estimate the supremum over the parameter space Φ, the MINE algorithm proposes to maximise iϕ(X; Y ) by stochastic gradient ascent over batches from the two sets of samples, as detailed in Algorithm 3. The final parameters ϕ ∗ obtained by this maximisation procedure define the estimator $${\hat{I}}={\frac{1}{N}}\sum_{n=0}^{N-1}T_{\phi^{\ast}}(\mathbf{x}^{n},\mathbf{y}^{n})-\log\left({\frac{1}{N}}\sum_{n=0}^{N-1}e^{T_{\phi^{\ast}}(\mathbf{x}^{n},{\hat{\mathbf{y}}}^{n})}\right)$$ $$(50)$$ that is used in the experiments. This algorithm was initially proposed in (Belghazi et al., 2018). Algorithm 3: MINE - lower bound optimization Parameters: E ∈ N the number of episodes. B ∈ N the batch size. ![20_image_0.png](20_image_0.png) n=0 the set of samples from the joint distribution. $\mathbf{e}$ a random permutation of $\{0,\ldots,N-1\}$. $\mathbf{e}$ $i=0,\ldots,\left\lfloor\frac{N}{B}\right\rfloor$ do $\mathbf{e}$ $S+\left\{\left(\mathbf{x}^{p(k)},\mathbf{y}^{p(k)}\right)\right\}_{k=iB}^{(i+1)B-1}$ a batch of samples from the joint distribution. $\mathbf{e}$ $S+\left\{\left(\mathbf{x}^{p_{1}(k)},\mathbf{y}^{p_{2}(k)}\right)\right\}_{k=iB}^{(i+1)B-1}$ a batch of samples from the product of marginal distributions Evaluate the lower bound ![20_image_1.png](20_image_1.png) ![20_image_2.png](20_image_2.png) ![20_image_4.png](20_image_4.png) lower bound $$L(\phi)\gets\frac{1}{B}\sum_{(\mathbf{x},\mathbf{y})\in S}T_{\phi}(\mathbf{x},\mathbf{y})-\log\left(\frac{1}{B}\sum_{(\hat{x},\hat{y})\in S}e^{T_{\phi}(\mathbf{x},\hat{y})}\right)$$ ![20_image_3.png](20_image_3.png) $$\mathrm{\boldmath~\cal~\cal~G~}(\phi)\leftarrow\bar{\nabla}_{\phi}{\cal L}(\phi)$$ ers with $\phi\leftarrow\phi+\alpha G(\phi)$ $$\left(51\right)$$ $$(52)$$ 10 Evaluate bias corrected gradients G(ϕ) ← ∇˜ϕL(ϕ) ![20_image_5.png](20_image_5.png) ![20_image_6.png](20_image_6.png) 11 Update network parameters with ϕ ← ϕ + αG(ϕ) ## D.2 Deep Sets As explained in Subsection 4.1, the belief computation is intractable for environments with continuous state spaces. In the experiments, the belief of such environments is approximated by a set of particles S = {s m} M m=1 that are guaranteed to follow the belief distribution, such that s m ∼ b, ∀s m ∈ S (see Appendix C). Those particles could be used for constructing an approximation of the belief distribution, a problem known as density estimation. We nonetheless do not need an explicit estimate of this distribution. Instead, the particles can be directly consumed by the MINE network. In this case, the two sets of input samples of the MINE algorithm take the form $$\{(\mathbf{x}^{n},\mathbf{y}^{n})\}_{n=0}^{N-1}=\{(\mathbf{h}^{n},S^{n})\}_{n=0}^{N-1}$$ $$=\left\{(\mathbf{h}^{n},\{\mathbf{s}^{n,m}\}_{m=1}^{M})\right\}_{n=0}^{N-1}.$$ In order to process particles from sets S n as input of the neural network Tϕ, we choose an architecture that guarantees its invariance to permutations of the particles. The deep set architecture (Zaheer et al., 2017), that is written as ρϕ Ps∈S ψϕ(s), provides such guarantees. Moreover, this architecture is theoretically able to represent any function on sets, under the assumption of having representative enough mappings ρϕ and ψϕ and the additional assumption of using finite sets S when particles come from an uncountable set as in this work. The function Tϕ is thus given by $$T_{\phi}(\mathbf{h},S)=\mu_{\phi}\left(\mathbf{h},\rho_{\phi}\left(\sum_{\mathbf{s}\in S}\psi_{\phi}(\mathbf{s})\right)\right)$$ $$(53)$$ !! (53) when the belief is approximated by a set of particles. ## E Hyperparameters The hyperparameters of the DRQN algorithm are given in Table 1 and the hyperparameters of the MINE algorithm are given in Table 2. The value of those hyperparameters have been chosen a priori, except for the number of episodes of the DRQN algorithm and the number of epochs of the MINE algorithm. These were chosen so as to ensure convergence of the policy return and the MINE lower bound, respectively. The parameters of the Mountain Hike and Varying Mountain Hike environments are given in Table 3. | Name | Value | Description | |--------|----------|--------------------------------------------------| | S | 2 | Number of RNN layers | | D | 1 | Number of linear layers (no activation function) | | H | 32 | Hidden state size | | N | 8192 | Replay buffer capacity | | C | 10 | Target update period in term of episodes | | I | 10 | Number of gradient steps after each episode | | ε | 0.2 | Exploration rate | | B | 32 | Batch size | | α | 1 × 10−3 | Adam learning rate | | Name | Value | Description | |--------|----------|---------------------------------------------------| | L | 2 | Number of hidden layers | | H | 256 | Hidden layer size | | N | 10 000 | Training set size | | E | 200 | Number of epochs | | B | 1024 | Batch size | | α | 1 × 10−3 | Adam learning rate | | R | 16 | Representation size for the Deep Set architecture | | α | 0.01 | EMA rate for the bias corrected gradient | | Name | Value | Description | |--------|---------|---------------------------------------------| | σO | 0.1 | Standard deviation of the observation noise | | σT | 0.05 | Standard deviation of the transition noise | Table 1: DRQN architecture and training hyperparameters. Table 2: MINE architecture and training hyperparameters. Table 3: Mountain Hike and Varying Mountain Hike parameters. ## F Generalisation To Other Distribution Of Histories In this section, we study if the hidden state still provides information about the belief under other distributions of histories than the one induced by the learned policy (20). This generalisation to other distributions is desirable for building policies that are more robust to perturbations of the histories. We propose to study the evolution of the MI between the hidden state and the belief when adding noise to the policy used to sample the histories. Formally, instead of sampling the hidden states and beliefs according to (20), we propose to sample those according to $$p_{\varepsilon}({\bf h},b)=\sum_{t=0}^{\infty}\ p(t)\int_{\mathcal{H}}p({\bf h},b\mid\eta)\ p_{\sigma_{b}^{\varepsilon}}(\eta\mid t)\ \mathrm{d}\eta$$ $$(54)$$ where p(t) is once again chosen to the uniform distribution over the time steps p(t) = 1/H, t ∈ {0*, . . . , H*−1}, σ ε θ is the ε-greedy policy as defined in Appendix B, and pσ ε θ (η | t) gives the conditional probability distribution induced by the policy σ ε θ over histories η ∈ H given that their length is t ∈ N0. Note that the training procedure remains unchanged. The results of this additional study can be found in Figure 12, for ε ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}. It can be noted that p0.0 is the distribution of hidden states and beliefs induced by the learned policy (20), and p1.0 is the distribution of hidden states and beliefs induced by a fully random policy. For reasons of computational capacity, this analysis was carried out for the GRU cell only. This cell was chosen for being a standard cell that performs well in all environments in terms of return, unlike the LSTM. As can be seen in Figure 12, the MI between the hidden states and the beliefs increases throughout the training process, under all considered policies, even the fully random policy. We conclude that the correlation between the hidden states and beliefs generalises reasonably well to other distributions. In other words, the hidden states still capture information about the beliefs even under other distributions of histories. ## G Correlations Between The Empirical Return And The Estimated Mutual Information The correlation between the empirical return and the estimated MI are computed with the Pearson's linear correlation coefficient and the Spearman's rank correlation coefficient. These coefficients are reported for all environments and all cells in Table 4 and Table 5. The columns named *aggregated* give the correlation coefficients measured over all samples of ˆI and Jˆ from all cells. | Environment | Aggregated | LSTM | GRU | BRC | nBRC | MGU | |---------------------------|--------------|--------|--------|--------|--------|--------| | T-Maze (L = 50, λ = 0.0) | 0.8233 | 0.7329 | 0.8500 | 0.8747 | 0.9314 | 0.9178 | | T-Maze (L = 100, λ = 0.0) | 0.5347 | 0.3624 | 0.6162 | 0.6855 | 0.6504 | 0.6299 | | T-Maze (L = 50, λ = 0.3) | 0.5460 | 0.2882 | 0.8008 | 0.7229 | 0.7424 | 0.6159 | | Mountain Hike | 0.5948 | 0.7352 | 0.6177 | 0.4338 | 0.5857 | 0.5485 | | Varying Mountain Hike | 0.5982 | 0.6712 | 0.4530 | 0.4446 | 0.3669 | 0.3006 | | Environment | Aggregated | LSTM | GRU | BRC | nBRC | MGU | |---------------------------|--------------|--------|--------|--------|--------|--------| | T-Maze (L = 50, λ = 0.0) | 0.6419 | 0.7815 | 0.5963 | 0.5403 | 0.4009 | 0.5002 | | T-Maze (L = 100, λ = 0.0) | 0.6666 | 0.5969 | 0.7108 | 0.5058 | 0.4605 | 0.5534 | | T-Maze (L = 50, λ = 0.3) | 0.6403 | 0.3730 | 0.6600 | 0.5090 | 0.4706 | 0.6497 | | Mountain Hike | 0.2965 | 0.5933 | 0.1443 | 0.2762 | 0.4337 | 0.2630 | | Varying Mountain Hike | 0.6176 | 0.6869 | 0.3677 | 0.4355 | 0.2955 | 0.2266 | Table 4: Pearson's linear correlation coefficient for each environment and cell. Table 5: Spearman's rank correlation coefficient for each environment and cell. ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) Figure 12: Evolution of the return Jˆ(θe) and the MI ˆI(θe) after e episodes, under distribution of histories induced by several ε-greedy policies, for the GRU cell. The maximal expected return is given by the dotted line. ## H Belief Of Variables Irrelevant For The Optimal Control In this section, we report the evolution of the return and the MI between the hidden states and the belief of both the relevant and irrelevant variables for the LSTM, BRC, nBRC and MGU architectures. It completes the results obtained for the GRU cell in Subsection 4.4. ![24_image_0.png](24_image_0.png) Figure 13: Deterministic T-Maze (L = 50) with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the LSTM cell. ![24_image_1.png](24_image_1.png) Figure 14: Deterministic T-Maze (L = 50) with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the BRC cell. Figure 13, Figure 14, Figure 15, and Figure 16 show the evolution of the return and the MI for a T-Maze of length L = 50 with d ∈ {1, 4} irrelevant state variables added to the process for these cells. These results are reported for the GRU cell in Figure 8 (see Subsection 4.2). As can be seen from these figures, the return generally increases with the MI between the hidden states and the belief of state variables that are relevant for optimal control. Moreover, as for the GRU cell, the MI between the hidden states and the belief of irrelevant state variables generally decreases throughout the learning process. Additionally, it can be observed that the LSTM and BRC cells fail in achieving a near-optimal return when d = 4. As far as the LSTM is concerned, it is reflected in its MI that reaches a lower value than the other RNNs. Likewise, the BRC cell does not reach a high return, and the MI does not increase at all. For this cell, it can be seen that the MI with the belief of irrelevant state variables is not decreasing, even with d = 1. The inability of the BRC cell to increase its MI with the belief of relevant variables and to decrease its MI with the belief of irrelevant variables might explain its bad performance in this environment. ![25_image_0.png](25_image_0.png) Figure 15: Deterministic T-Maze (L = 50) with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the nBRC cell. ![25_image_1.png](25_image_1.png) Figure 16: Deterministic T-Maze (L = 50) with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the MGU cell. As far as the Mountain Hike is concerned, Figure 17, Figure 18, Figure 19 and Figure 20 show that all previous observations also hold for this environment with the LSTM, BRC, nBRC and MGU cells. These results are reported for the GRU cell in Figure 9 (see Subsection 4.2). As can be seen from these figures, the return clearly increases with the MI between the hidden states and the belief of relevant state variables, for all cells. In contrast, the MI with the belief of irrelevant state variables decreases throughout the learning process. ![26_image_0.png](26_image_0.png) Figure 17: Mountain Hike with with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the LSTM cell. ![26_image_1.png](26_image_1.png) Figure 18: Mountain Hike with with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the BRC cell. ![27_image_0.png](27_image_0.png) Figure 19: Mountain Hike with with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the nBRC cell. ![27_image_1.png](27_image_1.png) Figure 20: Mountain Hike with with d irrelevant state variables. Evolution of the return Jˆ(θe) and the MI ˆI(θe) for the belief of the irrelevant and relevant state variables after e episodes, for the MGU cell.
Review 1: Summary: This paper provides empirical insight into the representations learnt by DRQN in partially observable environments across four variants of two domains and 5 recurrent policy architectures. Providing convincing evidence that the agent's performance in these environments is correlated with high mutual information between the learnt hidden state and belief. Strengths and Weaknesses: The core contribution is convincing, clear and interesting. Providing a deeper understanding of a well-established method of learning to act in POMDPs. The one exception being that the results in the varying mountain hike environment are less clearly correlated. The learning curves in Figure 5 suggest that the agent was still learning, perhaps this experiment would have benefited from a longer run? Some results are also included to show that the mutual information between the learnt representations and irrelevant variables input decreases as the agent's performance improves with a GRU policy. It is unclear why these results are limited to this one network architecture, given the broader range of architectures experimented with in the rest of the paper. This is still an interesting insight, but I currently have less confidence if the observations in this section (4.4) generalise. The paper is well written and provides sufficient background material to understand the technical contribution but includes no discussion of related studies making it hard for readers unfamiliar with the area to appreciate how this study compliments others or where to go next to explore this topic further. In particular I would note: 1. "Meta-trained agents implement Bayes-optimal agents" [NeurIPS 2020] also investigates the internal representations learnt by recurrent policies; 2. "AMRL: Aggregated memory for reinforcement learning" [ICLR 2020] also investigates the effect of noisy observations on learning recurrent policies in the T-maze environment. The appendix provides extensive implementation details on the experiments that were run, but limited explanation for how these precise values were chosen. Improved documentation of these experimental design choices may reveal important insights into the rigor of the study. Requested Changes: **Critical To Securing My Recommendation** 1. Please extend the experiments in section 4.4 to all 5 network architectures experimented with earlier in the paper or justify why a subset of these is chosen. 2. Please elaborate on how the environment settings and algorithm hyperparameters provided in the appendix were chosen. **Other Opportunities To Strengthen The Work** 1. Review the literature to identify and discuss related studies to further clarify the contribution of the paper. 2. Please expand all acronyms on first use (e.g. the recurrent architectures introduced at the end of Section 2.2 as they are key prior work this paper builds on and some are very recent work that may not be familiar to all readers.) 3. Are the dotted lines in Figures 1-3 the return of the optimal policy? Please clarify if so. It would also be more informative to include the value of these policies on the y-axis 4. In Figure 2, there appear distinct clusters of policies with 0 return (and again about halfway between 0 and the max return) that have constant return across a wide range of MI. Can the authors provide any insight in to the types of policies that have been learnt here? 5. Does running the varying mountain hike experiments longer enable the agent's return to converge and improve the correlation between the MI and return? 6. Subjectively, I personally think the illustrations of the environments in the appendix would be beneficial to include in the main body of the paper as a concise way to communicate the more precise environment details documented in the appendix. 7. The appendix provides good clarity on the environment and algorithm hyperparameters used. The reproducibility of the experiments could be further improved by providing open source access to the code. Broader Impact Concerns: No immediate broader impact concerns ================================================== Review 2: Summary: Using two POMDPs in a few different settings, this paper shows that empirical returns are correlated with the estimated mutual information between hidden states of RNNs (used by a particular recurrent DQN algorithm) and a Bayes filter's beliefs of control-relevant states. This is a contribution because it brings new understanding to how these algorithms work, and introduces a new tool for introspecting these RNNS (estimating the MI of their hidden states with beliefs). Notably, the paper also analyzes MIs with beliefs of control-irrelevant states by augmenting the POMDPs with state components that do not affect the reward or transition dynamics, and shows that these MIs decrease during optimization. Strengths and Weaknesses: Are the claims made in the submission supported by accurate, convincing and clear evidence? + The claims are supported by some evidence - The evidence is not fully convincing. See my requested changes, which include both moderating claims and including more evidence. + The design of the experiments is otherwise very nice, especially Sec 4.4. This was a compelling result, to illuminate that the RNN's aren't approximately filtering _all_ state variables, only (reward-)_relevant_ state variables (for this particular setting studied, at least). Would some individuals in TMLR's audience be interested in the findings of this paper? + Yes. At the very least, I am interested, therefore, I can answer this question with certainty. I am fairly confident that some others would be interested in the findings. Requested Changes: More evidence needed: 1.1. As far as I can tell, the experiments are all conducted with a single run of each algorithm. Therefore, the results could be different for different initializations of the Q-networks. This weakens the conclusions that we can draw. I request that the authors provide more evidence in the form of including more trials, with different initialization schemes (e.g. change the random seed). 1.2. Furthermore -- because the MI is estimated via a lower-bound, it's not clear how tight this lower bound is with the particular hyperparameters chosen (see Alg 3). Can the authors comment and justify the parameters adopted? For example, imagine running just one step of optimization of this MI estimation procedure. This seems like it would invalidate the claims made in this paper, since the resulting lower-bound would likely be quite loose. In contrast, the original MINE algorithm runs until some convergence criterion is met. It could make sense to also run this paper's experiments in a problem setting in which we know that the MI estimation error is very low and the Bayes filter is exact, e.g. 1-dimensional hidden state and 1-dimensional control-relevant belief state, although I am not actually requesting this experiment be run (it would be helpful, but it is not necessary). Claim reduction needed: 2.1 The statement of conclusions needs to be adjusted, as these results apply not to all recurrent deep q-learning algorithms, but just to the one investigated; also, it doesn't show that the underlying beliefs are _recovered_ (the h's aren't direct stand-ins for the control-relevant belief states, they just share high MIs with them when the policy is closer to optimal --- given a good policy's h, we don't yet have a way to translate that h to a belief state, unless we apply some additional machinery). This statement appears in the first sentence of the conclusion: "In this work, we have shown, empirically, that the approximation of the Q-function by recurrent neural networks in partially observable environments comes with the approximation of the belief filter of the state variables that are relevant to optimal control." I believe it is more accurate to say something like (although it is quite verbose..) "In this work, we have shown empirically that approximate mutual information between the hidden states of an RNN-based deep recurrent Q-learning algorithm <cite DRQN papers> and the beliefs of a Bayes filter of control-relevant state variables is correlated with that policy's higher returns in two POMDPs." Minor requests: page 1: "one to known the POMDP" -> "one to know the POMDP" page 2 "We thus rely on particle filtering in order to approximate the belief by a set of states, called particles, distributed according to the belief distribution." Some discussion of why we should still expect the MI estimates to be good enough to draw conclusions is missing here. Eq 2,3 -- it's not standard to omit the policy from the value function notation -- i.e. it's standard to use Q^{\pi^*} or Q^*, instead of just Q. I think it could improve clarity to include the policy in the notation of these Q-functions, although the definition of Eq 2 is fine. It might be nice to add a reference to a definition and explanation of Bayes filters near Eq 5. For example, "Burgard, Wolfram, Dieter Fox, and Sebastian Thrun. "Probabilistic robotics." The MIT Press (2005)." is a good introductory textbook. The notation for the estimated MI is \hat I(\theta_e). I think this should instead be a function of two arguments, like the standard MI, e.g. \hat I(h, b), since the RVs are the h's and the b's coming from the filter. Broader Impact Concerns: None ================================================== Review 3: Summary: The authors investigate whether the rnn-state in recurrent neural networks approximates the "belief state" (i.e., the posterior distribution over the latent environmental state) in POMDPs and the rnn-update performs something related to updating the belief state by filtering. They do so by measuring the mutual information (as measured by MINE) between the rnn-state and a computed 'ground truth' belief state for a variety of RNN architectures, on two different environments, each in a deterministic and stochastic variant. They use DRQN as algorithm to train the architectures. They find that rnn-states and ground truth belief states have high mutual information and that higher MI is correlated with better performance of the agent. They also find that mutual information with additionally added 'nuisance features' in the environment decreases over time, in contrast to the MI with relevant features, which increases. Strengths and Weaknesses: # Strengths Overall I really liked the paper. The scope and contributions are clearly stated in the abstract and introduction and the rest of the paper follows through on the statements made in the beginning. I particularly liked the writing, which I found to be clear and precise, with good mathematical notation. Sufficient background is given in the paper and whenever I was missing some additional information (with a few exceptions - see below), it was provided in the appendix (for example the observation spaces of the environments). Experimental results are clearly explained and discussed, supporting the claims made in the introduction (see below for one exception). # Weaknesses One area in which I believe the paper could be strenghtend is in discussing the connection between the mutual information $I(h,b)$ between the rnn-state $h$ and the belief $b$, and the conclusion that $h$ captures information about $b$ by _filtering_. At the moment, the authors mostly equate the MI and that "such value functions internally filter the posterior probability distribution of the current state given the history", which I believe is not necessarily true. Three possible pitfalls that I can come up with are the following, although there might be others: * The time $t$ might be a confounding factor as the rnn-state might simply count the number of steps since the beginning of the episode. In this case, for near-deterministic policies the correlation between $h$ and $b$ might still be high, without $h$ capturing anything about $b$. I believe this could be countered by not always measuring $I(h,b)$ for histories $\eta$ starting from the beginning of the episode. The author's most likely are already doing that, but I couldn't find the exact information or discussion thereof in the paper. * Secondly, and this could be supported by additional experiments, is that I believe there are actually two claims: a) The latent state $h$ captures information about the current (belief) state $b$, which is what the MI measures. And b), the rnn updates perform something that resembles _filtering_ to update the latent state. For example a) can be (and likely is) true in deterministic environments, even without b) being true. Furthermore, I believe T-Maze is not sufficient to show b), as it really only requires memory and not something that aggregates information over time to perform belief inference (equation 7). On the other hand, the Mountain Hike environment likely does require some form of inference, but this depends on the values of the transition and observation noise (i.e., $\sigma_T$ and $\sigma_O$), which I couldn't find in the paper nor appendix (though apologies if I missed it). Lastly, the fact that rnn-transitions do indeed perform something approximating inference could, I think, be further supported by showing some 'belief-like' characteristics. For example, it might be interesting to measure the accuracy of trying to predict the $x,y$ state in the MH environment, based on the rnn-state, and see how this value changes over history-length. If the rnn-state performs something like filtering, I would expect this accuracy to increase over time, at least up to a point. * More generally, I think to what extend the rnn-update resembles filtering might strongly depend on the environment, with some environments allowing for "shortcuts". For example, maybe in the MH environment, the agent is not measuring anything about the $x,y$ position, but learned to perform local hill-climbing, which only requires knowledge of the last few altitude values and movement directions. A last, independent point: It would be helpful to have one ablation study showing the accuracy of the MINE estimation. I.e. taking the same policy-checkpoint and running MINE multiple times, to get an estimate of the measurement noise. Requested Changes: # Critical for recommendation of acceptance A discussion should be included about why (or why not) high mutual information means that $h$ captures information about $b$ and does so by _filtering_ similar to equation 7, instead of relying on shortcuts or not having to perform filtering due to insufficient observation noise (see above for details). Also, please provide the missing environmental parameters $\sigma_T$ and $\sigma_O$. # Strenghtening the paper I belive the paper could be futher strengthed with additional experiments strenghtening the connection between high MI and the claim about filtering. For example, but not necessarily, the one I suggested about measuring the state-prediction accuracy as a function of history lenght. The MINE accuracy ablation study would also be helpful, but certainly not critical. Overall, I really liked the paper and hope the authors find my suggestions helpful. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Accept as is Comment: This paper investigates how much RNN recurrent state corresponds to belief states, and through the experiments presented by the authors find strong support that it does. Through discussion with the reviewers, several improvements to the paper were suggested both in the main body, and in the appendix, and to the best of my understanding, they have been incorporated in the current version. As such, and due to the unanimous consent from the reviewers, I am happy to recommend acceptance without further revision. Nonetheless, I encourage the authors to consider whether any final changes or improvements to the writing are called for based on the reviewer comments and discussions, and to ensure that they present the strongest camera ready paper possible on the basis on incorporating all suggestions from the reviewers. I will operate on the basis of trust here and not require a second look at the final draft. Congratulations. ==================================================
# Are You Using Test Log-Likelihood Correctly? Sameer K. Deshpande∗*sameer.deshpande@wisc.edu* University of Wisconsin–Madison Soumya Ghosh∗*ghoshso@us.ibm.com* MIT-IBM Watson AI Lab IBM Research Tin D. Nguyen∗tdn@mit.edu MIT-IBM Watson AI Lab Massachusetts Institute of Technology Tamara Broderick *tbroderick@mit.edu* MIT-IBM Watson AI Lab Massachusetts Institute of Technology Reviewed on OpenReview: *https: // openreview. net/ forum? id= n2YifD4Dxo* ## Abstract Test log-likelihood is commonly used to compare different models of the same data or different approximate inference algorithms for fitting the same probabilistic model. We present simple examples demonstrating how comparisons based on test log-likelihood can contradict comparisons according to other objectives. Specifically, our examples show that (i) approximate Bayesian inference algorithms that attain higher test log-likelihoods need not also yield more accurate posterior approximations and (ii) conclusions about forecast accuracy based on test log-likelihood comparisons may not agree with conclusions based on root mean squared error. ## 1 Introduction Test log-likelihood, also known as predictive log-likelihood or test log-predictive, is computed as the logpredictive density averaged over a set of held-out data. It is often used to compare different models of the same data or to compare different algorithms used to fit the same probabilistic model. Although there are compelling reasons for this practice (Section 2.1), we provide examples that falsify the following, usually implicit, claims: - **Claim**: The higher the test log-likelihood, the more accurately an approximate inference algorithm recovers the Bayesian posterior distribution of latent model parameters (Section 3). - **Claim**: The higher the test log-likelihood, the better the predictive performance on held-out data according to other measurements, like root mean squared error (Section 4). Our examples demonstrate that test log-likelihood is not always a good proxy for posterior approximation error. They further demonstrate that forecast evaluations based on test log-likelihood may not agree with forecast evaluations based on root mean squared error. We are not the first to highlight discrepancies between test log-likelihood and other analysis objectives. For instance, Quiñonero-Candela et al. (2005) and Kohonen & Suomela (2005) showed that when predicting *These authors contributed equally to this work. discrete data with continuous distributions, test log-likelihood can be made arbitrarily large by concentrating probability into vanishingly small intervals. Chang et al. (2009) observed that topic models with larger test log-predictive densities can be less interpretable. Yao et al. (2019) highlighted the disconnect between test log-likelihood and posterior approximation error in the context of Bayesian neural networks. Our examples, however, reveal more fundamental discrepancies between test log-likelihood and other evaluation metrics. In particular, we show how comparisons based on test log-likelihood can contradict comparisons based on other objectives even in simple models like linear regression. After introducing our notation, we precisely define test log-likelihood and review arguments for its use in Section 2. In Sections 3.1–3.3, we present several examples showing that across a range of posterior approximations, those with higher test log-likelihoods may nevertheless provide worse approximation quality. Then, in Section 3.4, we provide some intuition about why this phenomenon can occur when the model is severely misspecified (Section 3.1); when using sophisticated posterior approximation methods (Section 3.2); and even when there is little or no model misspecification (Section 3.3). In Section 4, we show examples in both complex and simple models where test log-likelihood is higher but root mean squared error on held-out data is worse. Our examples in Section 4 do depend on model misspecification, but we note that model misspecification is unavoidable in practice. We conclude in Section 5 with a reflection on when we should use test log-likelihood in practice. ## 2 Background We assume we have access to training and testing data such that all data points are independently and identically distributed (i.i.d.) from an unknown probability distribution P. Let D = {yn} N n=1 denote the training data. In many standard analyses, practitioners will have access to a predictive density of a future data point y ? given the observed D: π(y ?|D). For instance, consider the following three cases. - Case A: Practitioners often model the observed data by introducing a parameter θ and specifying that the data are i.i.d. from a conditional distribution Π(Y |θ) with density π(y|θ). In a non-Bayesian analysis, one usually computes a point estimate ˆθ of the unknown parameter (e.g. by maximum likelihood). Given a point estimate ˆθ, the predictive density π(y ?|D) is just π(y ?| ˆθ). - Case B: A Bayesian analysis elaborates the conditional model from Case A by specifying a prior distribution Π(θ) and formally computes the density π(θ|D) of the posterior distribution Π(θ|D) from the assumed joint distribution Π(D, θ). The Bayesian posterior predictive density is given by $$\pi(y^{\star}|{\mathcal{D}})=\int\pi(y^{\star}|\theta)\pi(\theta|{\mathcal{D}})d\theta.$$ $$(1)$$ ?|θ)π(θ|D)dθ. (1) - Case C: An approximate Bayesian analysis proceeds as in Case B but uses an approximation in place of the exact posterior. If we let Π(θ|D) represent an approximation to the exact posterior, Equation (1) yields the approximate Bayesian posterior predictive density π(y ?|D). Sometimes, due the difficulty of the integral in Equation (1), a further approximation may be used to yield a predictive density π(y ?|D). In all of these cases, we will refer to the practitioner as having access to a model Π that determines the predictive distribution Π(y ?|D); in particular, we allow "model" henceforth to encompass fitted models and posterior approximations. One can ask how well the resulting Π(y ?|D) predicts new data generated from P. Practitioners commonly assess how well their model predicts out-of-sample using a held-out set of testing data D? = {y ? n} N? n=1, which was not used to train the model. To compute test log-likelihood, they average evaluations of the log-predictive density function over the testing set: $$\mathrm{TLL}({\mathcal{D}}^{\star};\Pi):={\frac{1}{N^{\star}}}\sum_{n=1}^{N^{\star}}\log\pi(y_{n}^{\star}|{\mathcal{D}}),$$ $$\left(2\right)$$ |D), (2) where our notation makes explicit the dependence of the test log-likelihood (TLL) on testing data D? and the chosen model Π. In particular, researchers commonly use test log-likelihood to select between two models of the data, say Π and Π˜ ; that is, they select model Π over Π˜ whenever TLL(D?; Π) is higher than TLL(D?; Π˜ ). Note that the abbreviation NLPD (negative log predictive density) is also commonly used in the literature for the negative TLL (Quiñonero-Candela et al., 2005; Kohonen & Suomela, 2005). In Appendix C, we briefly discuss some alternative metrics for model comparison. ## 2.1 The Case For Test Log-Likelihood In what follows, we first observe that, if we wanted to choose a model whose predictive distribution is closer to the true data distribution in a certain KL sense, then it is equivalent to choose a model with higher *expected* log-predictive density (elpd). Second, we observe that TLL is a natural estimator of elpd when we have access to a finite dataset. The unrealistic case where the true data-generating distribution is known. The expected logpredictive density is defined as $$\mathrm{elpd}(\Pi):=\int\log\pi(y^{\star}|{\mathcal{D}})d{\mathcal{P}}(y^{\star}).$$ Our use of the abbreviation elpd follows the example of Gelman et al. (2014, Equation 1). If we ignore an additive constant not depending on Π, elpd(Π) is equal to the negative Kullback–Leibler divergence from the predictive distribution Π(y ?|D) to the true data distribution P(y ?). Specifically, if we assume P has density p(y ?), we have $$\operatorname{KL}\left({\mathcal{P}}(y^{\star})\parallel\Pi(y^{\star}|{\mathcal{D}})\right)=\int p(y^{\star})\log p(y^{\star})d y^{\star}-\mathrm{{elpd}}(\Pi).$$ Thus, elpd(Π) > elpd(Π˜ ) if and only if the predictive distribution Π(y ?|D) is closer, in a specific KL sense, to the true data distribution than the predictive distribution Π( ˜ y ?|D) is. Test log-likelihood as an estimator. Since we generally do not know the true generating distribution P, computing elpd(Π) exactly is not possible. By assumption, though, the test data are i.i.d. draws from P. So TLL(D?; Π) is a computable Monte Carlo estimate of elpd(Π). If we assume elpd(Π) is finite, it follows that a Strong Law of Large Numbers applies: as N? → ∞, TLL(D?; Π) converges almost surely to elpd(Π). Therefore, with a sufficiently high amount of testing data, we might compare the estimates TLL(D?; Π) and TLL(D?; Π˜) in place of the desired comparison of elpd(Π) and elpd(Π˜ ). Note that the Strong Law follows from the assumption that the y ? n values are i.i.d. under P; it does not require any assumption on the model Π and holds even when the model Π is misspecified. ## 2.2 Practical Concerns Since TLL(D?; Π) is an estimate of elpd(Π), it is subject to sampling variability, and a careful comparison would ideally take this sampling variability into account. We first elaborate on the problem and then describe one option for estimating and using the sampling variability in practice; we take this approach in our experiments below. To start, suppose we had another set of N?testing data points, D˜?. Then generally TLL(D?; Π) 6= TLL(D˜?; Π). So it is possible, in principle, to draw different conclusions using the TLL based on different testing datasets. We can more reasonably express confidence that elpd(Π) is larger than elpd(Π˜ ) if the lower bound of a confidence interval for elpd(Π) exceeds the upper bound of a confidence interval for elpd(Π) ˜ . We next describe one way to estimate useful confidence intervals. To do so, we make the additional (mild) assumption that $$\sigma_{\mathrm{TLL}}^{2}(\Pi):=\int\left[\log\pi(y^{\star}|{\mathcal{D}})-\mathrm{elpd}(\Pi)\right]^{2}d{\mathcal{P}}(y^{\star})<\infty.$$ $\tau$T. Then, since the y ? n are i.i.d. draws from P, a Central Limit Theorem applies: as N? → ∞, $$\sqrt{N^{\star}}\,(\mathrm{TLL}({\mathcal{D}}^{\star};\Pi)-\mathrm{elpd}(\Pi))\stackrel{\mathrm{d}}{\to}{\mathcal{N}}(0,\sigma_{\mathrm{TLL}}^{2}(\Pi)).$$ Although we cannot generally compute σTLL(Π), we can estimate it with the sample standard deviation σˆTLL(Π) of the evaluations {log π(y ? n |D)} N? n=1. The resulting approximate 95% confidence interval for elpd(Π) is TLL(D?; Π) ± 2ˆσTLL/ √N?. In what follows, then, we will conclude elpd(Π) > elpd(Π) ˜ if $$\hat{\sigma}_{\mathrm{TLL}}(\tilde{\Pi})/\sqrt{N^{\star}}.$$ $$\left({\mathfrak{3}}\right)$$ $$\mathrm{TLL}({\mathcal{D}}^{\star};\Pi)-2{\tilde{\sigma}}$$ ?; Π) − 2ˆσTLL(Π)/ √ N? > TLL(D ?; Π) + 2ˆ ˜ σTLL(Π) ˜ / √ N?. (3) For the sake of brevity, we will still write TLL(D?; Π) > TLL(D?; Π) ˜ in place of Equation (3) below. To summarize: for a sufficiently large test dataset D?, we expect predictions made from a model with larger TLL to be closer (in the KL sense above) to realizations from the true data-generating process. In our experiments below, we choose large test datasets so that we expect TLL comparisons to reflect elpd comparisons. Our experiments instead illustrate that closeness between Π(y ?|D) and P (in the KL sense above) often does not align with a different stated objective. ## 3 Claim: Higher Test Log-Likelihood Corresponds To Better Posterior Approximation In this section, we give examples where test log-likelihood is higher though the (approximation) quality of an approximate posterior mean, variance, or other common summary is lower. We start with examples in misspecified models and then give a correctly specified example. We conclude with a discussion of the source of the discrepancy: even in the well-specified case, the Bayesian posterior predictive need not be close to the true data-generating distribution. Practitioners often use posterior expectations to summarize the relationship between a covariate and a response. For instance, the posterior mean serves as a point estimate, and the posterior standard deviation quantifies uncertainty. However, as the posterior density π(θ|D) is analytically intractable, practitioners must instead rely on approximate posterior computations. There are myriad approximate inference algorithms - e.g. Laplace approximation, Hamiltonian Monte Carlo (HMC), mean-field variational inference, to name just a few. All these algorithms aim to approximate the same posterior Π(θ|D). Test log-likelihood is often used to compare the quality of different approximations, with higher TLL values assumed to reflect more accurate approximations, e.g. in the context of variational inference (see, e.g., Hoffman et al., 2013; Ranganath et al., 2014; Hernández-Lobato et al., 2016; Liu & Wang, 2016; Shi et al., 2018) or Bayesian deep learning (see, e.g., Hernández-Lobato & Adams, 2015; Gan et al., 2016; Li et al., 2016; Louizos & Welling, 2016; Sun et al., 2017; Ghosh et al., 2018; Mishkin et al., 2018; Wu et al., 2019; Izmailov et al., 2020; 2021; Ober & Aitchison, 2021). Formally, suppose that our exact posterior is Π(θ|D) and that we have two approximate inference algorithms that produce two approximate posteriors, respectively Πˆ1(θ|D) and Πˆ2(θ|D). The exact posterior and its approximations respectively induce predictive distributions Π(y ?|D), Πˆ1(y ?|D), and Πˆ2(y ?|D). For instance, Πˆ1(θ|D) could be the empirical distribution of samples drawn using HMC and Πˆ2(θ|D) could be a mean-field variational approximation. Our first example demonstrates that it is possible that (i) TLL(D?; Πˆ1) > TLL(D?; Π) but (ii) using Πˆ1 could lead to different inference about model parameters than using the exact posterior Π. Our second example demonstrates that it is possible that (i) TLL(D?; Πˆ1) > TLL(D?; Πˆ2) but (ii) Πˆ1(θ|D) is a worse approximation to the exact posterior Π(θ|D) than Πˆ2(θ|D). ## 3.1 Tll **And Downstream Posterior Inference** Relying on TLL for model selection can lead to different inferences than we would find by using the exact posterior. To illustrate, suppose we observe D100 = {(xn, yn)} 100 n=1 drawn from the following heteroscedastic model: xn ∼ N (0, 1), yn | xn ∼ N (xn, 1 + log(1 + exp(xn))). (4) Further suppose we model these data with a misspecified homoscedastic model: $$x_{n}\sim{\mathcal{N}}(0,1),\quad y_{n}\mid x_{n}\sim{\mathcal{N}}(x_{n},1+\log(1+\exp(x_{n}))).$$ $$\theta\sim{\mathcal{N}}([0,0]^{\top},[1,0;0,1]),\quad y_{n}\mid\theta,\phi_{n}\sim{\mathcal{N}}(\theta^{T}\phi_{n},1),$$ T φn, 1), (5) where φn = [xn, 1]>, and θ = [θ1, θ2]. Figure 1 shows the posterior mean and the 95% predictive interval of the misspecified regression line θ >φ from (A) the exact Bayesian posterior; (B) the mean field variational approximation restricted to isotropic Gaussians; and (C)–(F) variational approximations with re-scaled marginal variances. Each panel includes a scatter plot of the observed data, D100. We also report the 2-Wasserstein distance between the exact posterior and each approximation and the TLL averaged over $$\mathrm{p}(x_{n}))).$$ $\downarrow$ . ![4_image_0.png](4_image_0.png) Figure 1: *(Left)*. Predictive distributions under the Bayesian posterior and mean field variational approximations. The two numbers in the title of each plot are the 2-Wasserstein distance to the exact posterior and test log-likelihood computed on 104test set observations. Two standard errors in the test log-likelihood estimate are (A) 0.03, (B) 0.03, (C) 0.02, (D) 0.02, (E) 0.02, (F) 0.02. *(Right)*. The relationship between 2-Wasserstein distance to the posterior and test log-likelihood. N∗ = 104test data points drawn from Equation (4); note that the 2-Wasserstein distance can be used to bound differences in means and variances (Huggins et al., 2020). The variational approximation (panel (B) of Figure 1) is quite accurate: the 2-Wasserstein distance between the approximation and the exact posterior is ∼10−4. See also Figure 2, which shows the contours of the exact and approximate posterior distributions. As we scale up the variance of this approximation, we move away from the exact posterior over the parameters but the posterior predictive distribution covers more data, yielding higher TLL. The left panel of Figure 11 in Appendix B.3 shows the same pattern using the KL divergence instead of the 2-Wasserstein distance. TLL **and a discrepancy in inferences.** Researchers are often interested in understanding whether there is a relationship between a covariate and response; a Bayesian analysis will often conclude that there is no relationship if the posterior on the corresponding effect-size parameter places substantial probability on an interval not containing zero. In our example, we wish to check whether θ1 = 0. Notice that the exact posterior distribution (panel (A) in Figures 1 and 2) is concentrated on positive θ1 values. The 95% credible interval of the exact posterior1is [0.63, 1.07]. Since the interval does not contain zero, we would infer that θ1 6= 0. On the other hand, as the approximations become more diffuse (panels (B)–(F)), TLL increases, and the approximations begin to place non-negligible probability mass on negative θ1 values. In fact, the approximation with highest TLL (panel (F) in Figures 1 and 2) yields an approximate 95% credible interval of [-0.29,1.99], which covers zero. Had we used this approximate interval, we would have failed to conclude θ1 6= 0. That is, in this case, we would reach a different substantive conclusion about the effect θ1 if we (i) use the exact posterior or (ii) use the approximation selected by highest TLL. ## 3.2 Tll **In The Wild** Next, we examine a more realistic scenario in which the difference between the quality of the posterior approximation and the exact posterior distribution TLL arises naturally, without the need to artificially increase the marginal variance of the variational approximations. To explore this situation, we will first introduce another example of misspecification and repeat the type of analysis described in Section 3.1. 1Throughout we used symmetric credible intervals formed by computing quantiles: the 95% interval is equal to the 2.5%–97.5% interquantile range. ![5_image_0.png](5_image_0.png) Figure 2: Contours of (A) the exact posterior, (B) the mean field variational approximation restricted to isotropic Gaussians, and (C)-(F) re-scaled mean field approximations. The line 01 = 0 is highlighted in red. ![5_image_1.png](5_image_1.png) Figure 3: (Left). Predictive distributions under the Bayesian posterior (A) and the SWAG posterior with SWAG learning rate of (B) 10-3, (C) 10-2, (D) 10-1, (E) 1, and (F) 10. The two numbers in the title of each plot are the 2-Wasserstein distance to the exact posterior and test log-likelihood computed on 104 test set observations. Two standard errors in the test log-likelihood estimates are (A) 0.16, (B) 0.15, (C) 0.14, (D) 0.13, (E) 0.05, (F) 0.01. (Right). Contours of the (A) exact posterior, and (B)-(F) SWAG approximations with different learning rates. The line 01 = 0 is highlighted in red. Consider the following case: we observe 500 observations Ø500 = {{In,Yn}},ºº, drawn from a non-linear model: $$\theta_{*}=[-2,-1]^{\top},\quad x_{n}\sim{\mathcal{N}}(0,1),\quad y_{n}\mid\theta_{*},\phi_{n}\sim{\mathcal{N}}(\theta_{*}^{\top}\,\phi_{n}+x_{n}^{2},0.5),$$ $$({\mathfrak{h}})$$ where Øn = [xn, 1] . Further suppose we modeled these data with a misspecified linear model $$\theta\sim{\mathcal{N}}([0,0]^{\top}[1,0;0,1]),\quad y_{n}\mid\theta,\phi_{n}\sim{\mathcal{N}}(\theta^{\top}\phi_{n},0.5).$$ $$\left(7\right)$$ While the misspecification here might appear egregious, linear models are widely used in practice for modeling non-linear phenomena when one is primarily interested in inferring whether the covariates are positively correlated, negatively correlated, or are uncorrelated with the responses (Berk et al., 2014; 2018; Blanca et al., 2018; Vowels, 2023). Next, we use SWAG (Maddox et al., 2019), an off-the-shelf approximate inference algorithm, to approximate the posterior II(0|D500). We also repeat the re-scaled variational inference experiment from Section 3.1 with this set of data and models (Equations (6) and (7)); see Appendix B.2. SWAG uses a gradient-based optimizer with a learning rate schedule that encourages the optimizer to oscillate around the optimal solution instead of converging to it. Then, a Gaussian distribution is fit to the set of solutions explored by the optimizer around the optimum using moment matching. In general, one must select the learning rate schedule in a heuristic fashion. One might be tempted to use TLL to tune the learning rate schedule. We use this heuristic and run SWAG for a thousand epochs, annealing the learning rate down to a different constant value after 750 epochs. Although used pedagogically here, similar heuristics have been used in practice (di Langosco et al., 2022), where the learning rate is tuned based on the accuracy achieved on held-out data. We vary this constant value over the set {10−3, 10−2, 10−1, 1, 10}. In Figure 3, we show the resulting posterior mean and the 95% predictive interval of the misspecified regression line θ >φ from (A) the Bayesian posterior; (B)–(F) the SWAG posteriors using different learning rate schedules. In each plot, we overlay the observed data D500 (black dots) with the true data generating function in dashed black. We also report the 2-Wasserstein distance between the exact posterior and each approximation and the TLL averaged over N∗ = 104test data points drawn from Equation (6). In all cases, SWAG overestimates the posterior variance, with predictive distributions that better cover the data and consequently lead to a higher TLL. However, these SWAG posterior approximations are *farther* from the exact posterior. In fact, we found that a learning rate of 10 (Figure 3, *Left*, panel (F)) maximized TLL but led to the worst approximation of the exact posterior. As in the previous section, next suppose we fit this misspecified linear model to understand whether there is ![6_image_0.png](6_image_0.png) a relationship between the covariates and the responses, i.e., whether θ1 = 0. Notice that the exact posterior distribution (Figure 3, *Right*, panel (A)) is concentrated on negative θ1 values, with the 95% posterior credible interval being [−1.96, −1.79]. Since the interval is to the left of zero, we would infer that θ1 < 0 and that the covariate and the response are negatively correlated. In contrast, if we select the SWAG approximation with the highest TLL, we select the posterior approximation in panel (F) on the right side of Figure 3. The corresponding 95% posterior credible interval is [−4.46, 0.74], which places non-negligible probability mass on θ1 > 0. In this case, we would not conclude that the response and the covariate are negatively correlated – by contrast to the conclusion using the exact posterior. Figure 4: *(Left)*. Contours of (A) the exact posterior, (B) the mean field variational approximation restricted to isotropic Gaussians, and (C)–(F) re-scaled mean field approximations. The two numbers in the title of each plot are the 2-Wasserstein distance to the exact posterior and test log-likelihoods computed on 104test set observations. Two standard errors in the test log-likelihood estimates are (A) 0.019, (B) 0.020, (C) 0.014, (D) 0.013, (E) 0.011, (F) 0.009. *(Right)*. The non-monotonic relationship between distance to posterior and test log-likelihood. Observe that the exact posterior does not achieve highest test log-likelihood. ## 3.3 Tll **And Well-Specified Models** The examples above demonstrated that TLL is not a reliable proxy to posterior approximation quality when the model is misspecified. Though misspecified models are the norm in practice, we now demonstrate that a distribution with higher TLL may not provide a more accurate posterior approximation even when the model is correctly specified. To this end, consider the following Bayesian linear model: $$\theta\sim{\mathcal{N}}([0,0]^{\top},[1,0.9;0.9,1]),\quad y_{n}\mid\theta,\phi_{n}\sim{\mathcal{N}}(\theta^{\top}\phi_{n},0.25^{2}),$$ $$(8)$$ $$\{\,(x_{n},y_{n})\,\}$$ $$({\mathfrak{g}})$$ >φn, 0.252), (8) where φn = [xn, 1]>. Now, suppose we observe ten data points D10 = {(xn, yn)} 10 n=1 sampled as $$\theta_{*}=[-2,-1]^{\top},\quad x_{n}\sim{\mathcal{N}}(0,1),\quad y_{n}\mid\theta_{*},\phi_{n}\sim{\mathcal{N}}(\theta_{*}^{\top}\,\phi_{n},0.25^{2}).$$ ∗ φn, 0.252). (9) The left panel of Figure 4 plots the contours of (A) the exact posterior distribution Π(φ|D10); (B) the mean field variational approximation constrained to the isotropic Gaussian family; and (C)–(F) variational approximations with re-scaled marginal variances. In each panel, we report the 2-Wasserstein distance between the approximate and exact posterior and the test log-predictive averaged over N? = 104test data points drawn from Equation (9). Although we have correctly specified the conditional model of y|(*θ, φ*), the exact posterior has a lower TLL than some of the approximate posteriors; in particular, the 95% confidence intervals for (C) and (D) are disjoint from the 95% confidence interval for the exact posterior, shown in (A). The left panel of Figure 4 suggests that the more probability mass an approximate posterior places around the true data-generating parameter, the higher the TLL. Eventually, as the approximation becomes more diffuse, TLL begins to decrease (Figure 4 (right)). The non-monotonicity demonstrates that an approximate posterior with larger implied TLL can in fact be further away from the exact posterior in a 2-Wasserstein sense than an approximate posterior with smaller implied TLL. The right panel of Figure 11 in Appendix B.3 demonstrates the same pattern using the KL divergence instead of the 2-Wasserstein distance. And Figure 9 in Appendix B.3 shows that, in the well-specified case, a distribution with larger TLL can provide a worse approximation of the posterior standard deviation than a distribution with smaller TLL. ## 3.4 What Is Going On? We next discuss why we should not expect TLL to closely track posterior approximation quality, or posteriorpredictive approximation quality. Essentially the issue is that, even in the well-specified case, the Bayesian posterior predictive distribution need not be close to the true data-generating distribution. We illustrate these distinctions in Figure 5. The lower surface represents the space of distributions over a latent parameter θ. The upper surface represents the space of distributions over an observable data point y ?. Each dot in the figure represents a distribution. The two dots in the lower surface are the exact posterior Π(θ|D) (left, green dot) and an approximate posterior Πˆ(θ|D) (right, red dot). The three dots in the upper surface are the posterior predictive distribution Π(y ?|D) (left, green dot), the approximate posterior predictive Πˆ (y ?|D) (lower right, red dot), and the true data-generating distribution P(y ?) (upper right, black dot). The gray lines on the left and right indicate that the distribution in the upper surface can be obtained from the corresponding (connected) distribution in the lower surface via Equation (1). The remaining three (non-gray) lines represent three different discrepancies. Recall from Section 2 that TLL(D?; Πˆ) captures how close the approximate posterior predictive Πˆ(y ?|D) is to the true data-generating process P(y ?) in a particular KL sense: $$\mathrm{TLL}({\mathcal{D}}^{\star},\hat{\Pi})\approx-\mathrm{KL}\left({\mathcal{P}}(y^{\star})\parallel\hat{\Pi}(y^{\star}|{\mathcal{D}})\right)+\mathrm{constant}.$$ To illustrate this notion of closeness, or equivalently discrepancy, in Figure 5, we draw a pink line between P(y ?) and Πˆ(y ?|D). We observe that the TLL importantly does not approximate (even up to a constant) the analogous discrepancy from the approximate posterior predictive Πˆ (y ?|D) to the exact posterior predictive Π(y ?|D) (blue line in the upper surface); that is, it does not capture how close the posterior predictive approximation is to the exact posterior predictive. The TLL likewise does not approximate (even up to a constant) the corresponding discrepancy from the approximate posterior Πˆ (θ|D) to the exact posterior ![8_image_0.png](8_image_0.png) Figure 5: Cartoon illustration highlighting the difference between three different discrepancies explored in Section 3.4. The surfaces are spaces of distributions over a latent parameter (lower surface) or an observable data point y ?(upper surface). The pink line indicates that TLL(D?; Πˆ ) estimates a discrepancy between the approximate posterior predictive Πˆ (y ?|D) (upper surface, lower right, red dot) and the true data-generating distribution P(y ?) (upper surface, upper right, black dot). The blue line represents a different discrepancy between the exact posterior predictive (upper surface, left, green dot) and the approximate posterior predictive (upper surface, lower right, red dot). The yellow line represents another different discrepancy between the exact posterior (lower surface, left, green dot) and the approximate posterior (lower surface, right, red dot). Gray lines connect distributions over parameters with their corresponding predictive distributions. Π(θ|D) (yellow line in the lower surface); that is, it does not capture how close the posterior approximation is to the exact posterior. The pink and blue lines would (nearly) align if the posterior predictive were very close to the true datagenerating distribution. For a misspecified model, the posterior predictive need not be close to the true data-generating distribution. For a well-specified model, the posterior predictive and true data-generating distribution may still be far for a finite dataset. On that view, as suggested by an anonymous referee, we might expect the observed phenomenon to disappear asymptotically in the well-specified setting if sufficient regularity conditions hold. The argument, essentially, is that (i) the actual posterior Π(θ|D) converges to a point-mass at the true data generating parameter; this convergence implies (ii) that the actual posterior predictive Π(y ?|D) converges to the true data distribution P(y ?), from which it follows that for large enough training datasets (iii) KL Π(y ?|D) k Π( ˆ y ?|D) ≈ KL P(y ?) k Π( ˆ y ?|D) . However, we emphasize first that essentially every real data analysis is misspecified. And second, if a practitioner is in a setting where they are confident there is no uncertainty in the unknown parameter value, there may be little reason to take a Bayesian approach or go to the sometimes-considerable computational burden of approximating the Bayesian posterior. ## 4 Claim: Higher Test Log-Likelihood Corresponds To Lower Predictive Error As noted in Sections 2.1 and 3.4, TLL estimates how close a predictive distribution is from the true datagenerating process in a specific KL sense. On that view and analogous to Section 3, we would not expect conclusions made by TLL to match conclusions made by comparing other predictive losses. Rather than focus on more esoteric losses in our experiments, we note that TLL and RMSE are often reported as default measures of model fit quality in papers. If conclusions made between TLL and RMSE do not always agree (as we expect and reinforce experimentally next), we should not expect TLL to always reflect performance according to other predictive losses beyond RMSE. If the TLL is of fundamental interest, this observation is of little consequence; if TLL is a convenient stand-in for a potential future loss of interest, this observation may be meaningful. Misspecified Gaussian process regression. We next construct two models Π and Π˜ such that TLL(D?; Π) < TLL(D?; Π˜ ) but Π˜ yields larger predictive RMSE. Suppose we observe D100 = {(xn, yn)} 100 n=1 from the following data generating process: $$x_{n}\sim{\mathcal{U}}(-5,+5)\quad y_{n}|x_{n}\sim{\mathcal{N}}(\sin(2x_{n}),0.1).$$ $$(10)$$ Further suppose we model this data using a zero-mean Gaussian process (GP) with Gaussian noise, $$f\sim\mathrm{GP}({\bf0},k(x,x^{\prime})),\quad y_{n}|f_{n}\sim{\mathcal{N}}(f_{n},\sigma^{2}),$$ $$(11)$$ f ∼ GP(0, k(x, x0)), yn|fn ∼ N (fn, σ2), (11) where fn is shorthand for f(xn). First consider the case where we employ a periodic kernel,2constrain ![9_image_0.png](9_image_0.png) the noise nugget σ 2to 1.6, and fit all other hyper-parameters by maximizing the marginal likelihood. The resulting fit is shown in Figure 6 (A). Next, consider an alternate model where we use a squared-exponential kernel and fit all hyper-parameters including the noise nugget via maximum marginal likelihood. The resulting fit is displayed in Figure 6 (B). The squared exponential model fails to recover the predictive mean and reverts back to the prior mean (RMSE = 0.737, 95% confidence interval [0.729, 0.745]), while the periodic model recovers the predictive mean accurately, as measured by RMSE = 0.355 (95% confidence interval [0.351, 0.360]). Despite the poor mean estimate provided by the squared exponential model, it scores a substantially higher TLL. Figure 6: The plots display two Gaussian processes trained on the same set of data (represented by black plus symbols). The dashed red line shows the mean of the posterior Gaussian process, while the red highlighted region represents the 95% predictive interval. The subplot titles display the TLL (±2 standard error) attained by each Gaussian process. Although the Gaussian process in panel (A) achieves a better mean fit compared to panel (B), it has a worse TLL when evaluated on 104test instances (represented by black dots). 2PeriodicMatern32 in https://github.com/SheffieldML/GPy In this example, we see that, even with an accurate point estimate, TLL can be reduced by, for instance, inflating the predictive uncertainty. And this discrepancy between TLL and RMSE is not necessarily removed by optimizing the parameters of a model. Misspecified linear regression. Our next example illustrates that even when all parameters in a model are fit with maximum likelihood, a comparison based on TLL may still disagree with a comparison based on RMSE. It also illustrates that the discrepancy between TLL and RMSE can arise even in very simple and low-dimensional models and even when the training dataset is very large. Specifically, suppose that we observe D = {(xn, yn)} 100,000 n=1 generated according to $$x_{n}\sim{\mathcal{U}}(0,25),\quad y_{n}|x_{n}\sim\mathrm{Laplace}(x_{n},1/{\sqrt{2}}),$$ √2), (12) which we model using one of the following misspecified conditional linear models: $$(12)$$ $\Pi:y_n|x_n\sim\mathcal{N}(\theta x_n,\sigma^2)$ > or > > $\tilde{\Pi}:y_n|x_n\sim\text{Laplace}(0.45+\theta x_n,\lambda)$. $$(13)$$ Both Π and Π˜ depend on two unknown parameters. Π depends on a slope θ and a residual variance σ 2 and Π˜ depends on a slope θ and a residual scale λ. The kind of misspecification is different across models; while Π has the correct mean specification but incorrect noise specification, Π˜ has incorrect mean specification but correct noise specification. We computed the maximum likelihood estimates (MLEs) ( ˆθΠ, σˆΠ) and ( ˆθΠ˜ , λˆΠ˜ ) for both models. The two fitted models induce the following predictive distributions of y ?|x ?: $$\begin{array}{c}{{\Pi(y^{*}|x^{*},\mathcal{D}):y^{*}|x^{*}\sim\mathcal{N}(\hat{\theta}_{\Pi}x^{*},\hat{\sigma}_{\Pi}^{2})}}\\ {{\mathrm{and}}}\\ {{\hat{\Pi}(y^{*}|x^{*},\mathcal{D}):y^{*}|x^{*}\sim\mathrm{Laplace}(0.45+\hat{\theta}_{\Pi}x^{*},\hat{\lambda}_{\Pi}).}}\end{array}$$ $$(14)$$ The means of these predictive distributions are natural point estimates of the output y ? at input x ?. Using a test set of size N? = 395,000, we observed TLL(D?; Π) = −1.420 < −1.389 = TLL(D?; Π˜). The standard error of either TLL estimate is only 0.002. Hence, based on sample mean and standard error, we conclude that Π˜ has better elpd than Π. These values suggest that on average over inputs x ?, Π˜ (y ?|x ?, D) is closer to P(y ?|x ?) than Π(y ?|x ?, D) in a KL sense. However, using the same test set, we found that Π yielded more accurate point forecasts, as measured by root mean square error (RMSE): $$\left(\frac{1}{N^{*}}\sum_{n=1}^{N^{*}}\left(y_{n}^{*}-\hat{\theta}_{\rm II}x_{n}^{*}\right)^{2}\right)^{1/2}=1.000<1.025=\left(\frac{1}{N^{*}}\sum_{n=1}^{N^{*}}\left(y_{n}^{*}-0.45-\hat{\theta}_{\rm II}x_{n}^{*}\right)^{2}\right)^{1/2}.\tag{15}$$ In addition, the 95% confidence intervals for the RMSE do not overlap: the interval for Π's RMSE is [0.997, 1.005] and that for Π˜ 's RMSE is [1.022, 1.029]. The comparison of RMSEs suggests that on average over inputs x ?, the predictive mean of Π(y ?|x ?, D) is closer to the mean of P(y ?|x ?) than the predictive mean of Π˜ (y ?|x ?, D). In other words, the model with larger TLL - whose predictive distribution is ostensibly closer to P - makes worse point predictions than the model with smaller TLL. ## 5 Discussion Our paper is neither a blanket indictment nor recommendation of test log-likelihood. Rather, we hope to encourage researchers to explicitly state and commit to a particular data-analysis goal - and recognize that different methods may perform better under different goals. For instance, when the stated goal is to approximate (summary statistics of) a Bayesian posterior, we argue that it is inappropriate to rely on test log-likelihood to compare different approximation methods. We have produced examples where a model can provide a better test log-likelihood but yield a (much) poorer approximation to the Bayesian posterior - in particular, leading to fundamentally different inferences and decisions. We have described why this phenomenon occurs: test log-likelihood tracks closeness of approximate posterior predictive distributions to the data-generating process and not to the posterior (or posterior predictive) distribution. At the same time, we recognize that evaluating posterior approximation quality is a fundamentally difficult problem and will generally necessitate the use of a proxy. It may be useful to consider multiple of the available options; a full accounting is beyond the scope of this paper, but they include using conjugate models where exact posterior summary statistics are available; comparing to established MCMC methods on models where a sufficiently large compute budget might be expected to yield a reliable approximation; simulation-based calibration (Talts et al., 2018); sample-quality diagnostics (Gorham & Mackey, 2015; Chwialkowski et al., 2016; Liu et al., 2016); and a host of visual diagnostics (Gabry et al., 2019). A careful investigation to understand how a particular method struggles or succeeds may be especially illuminating. On the other hand, in many data analyses, the goal is to make accurate predictions about future observables or identify whether a treatment will help people who receive it. In these cases and many others, using a Bayesian approach is just one possible means to an end. And many of the arguments for using the exact Bayesian posterior in decision making assume correct model specification, which we cannot rely upon in practice. In predictive settings in particular, test log-likelihood may provide a compelling way to assess performance. In addition to being essentially the only strictly proper local scoring rule (Bernardo & Smith, 2000, Proposition 3.13), TLL is sometimes advertised as a "non-informative" choice of loss function (Robert, 1996). Importantly, however, non-informative does not mean all-encompassing: as our examples in Section 4 show, test log-likelihood does not necessarily track with other notions of predictive loss. As we discuss in Section 2.1, test log-likelihood quantifies a predictive discrepancy only in a particular Kullback–Leibler sense. It is important to note, however, that just because two distributions are close in KL, their means and variances need not be close; in fact, Propositions 3.1 & 3.2 of Huggins et al. (2020) show that the means and variances of distributions that are close in KL can be arbitrarily far apart. So even in settings where prediction is of interest, we recommend users clearly specify their analytic goals and use evaluation metrics tailored to those goals. If there is a quantity of particular interest in the data-generating process, such as a moment or a quantile, a good choice of evaluation metric may be an appropriate scoring rule. Namely, one might choose a scoring rule whose associated divergence function is known to quantify the distance between the forecast's quantity of interest and that of the data-generating process. For instance, when comparing the quality of mean estimates, one option is using the squared-error scoring rule, whose divergence function is the integrated squared difference between the forecast's mean estimate and the mean of the data-generating process. Another option is the Dawid–Sebastiani score (Dawid & Sebastiani, 1999), which prioritizes accurately estimating predictive means and variances. See Gneiting & Raftery (2007) for a list of commonly used scoring rules and their associated divergences. ## Acknowledgments We are grateful to Will Stephenson for helping us find examples of discrepancies between posterior approximation quality and TLL. This work was supported in part by the MIT-IBM Watson AI Lab, an NSF Career Award, an ONR Early Career Grant, the DARPA I2O LwLL program, an ARPA-E project with program director David Tew, and the Wisconsin Alumni Research Foundation. ## References Richard Berk, Lawrence Brown, Andreas Buja, Edward George, Emil Pitkin, Kai Zhang, and Linda Zhao. Misspecified mean function regression: Making good use of regression models that are wrong. Sociological Methods & Research, 43(3):422–451, 2014. Richard Berk, Lawrence Brown, Andreas Buja, Edward George, and Linda Zhao. Working with misspecified regression models. *Journal of Quantitative Criminology*, 34:633–655, 2018. Robert H. Berk. Limiting behavior of posterior distributions when the model is incorrect. Annals of Mathematical Statistics, 37(1):51–58, 1966. José M Bernardo and Adrian F.M. Smith. *Bayesian Theory*. Wiley, 2000. María J Blanca, Rafael Alarcón, and Roser Bono. Current practices in data analysis procedures in psychology: What has changed? *Frontiers in Psychology*, 9:2558, 2018. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan Boyd-Graber, and David Blei. Reading tea leaves: How humans interpret topic models. *Advances in Neural Information Processing Systems*, 22, 2009. Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In Proceedings of the 33rd *International Conference on Machine Learning*. 2016. A. Philip Dawid and Paola Sebastiani. Coherent dispersion criteria for optimal experimental design. Annals of Statistics, 27(1):65–81, 1999. Lauro Langosco di Langosco, Vincent Fortuin, and Heiko Strathmann. Neural variational gradient descent. In *Fourth Symposium on Advances in Approximate Bayesian Inference*, 2022. Jonah Gabry, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. Visualization in Bayesian workflow. *Journal of the Royal Statistical Society Series A*, 182(2):389–402, 2019. Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, and Lawrence Carin. Scalable Bayesian learning of recurrent neural networks for language modeling. *arXiv pre-print arXiv:1611.08034*, 2016. Andrew Gelman, Jessica Hwang, and Aki Vehtari. Understanding predictive information criteria for Bayesian models. *Statistics and Computing*, 24:997–1016, 2014. Soumya Ghosh, Jiayu Yao, and Finale Doshi-Velez. Structured variational learning of Bayesian neural networks with horseshoe priors. In Proceedings of the 35th *International Conference on Machine Learning*. 2018. Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102:359–378, 3 2007. Jackson Gorham and Lester Mackey. Measuring sample quality with Stein's method. Advances in Neural Information Processing Systems, 28, 2015. José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proceedings of the 32nd *International Conference on Machine Learning*. 2015. José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, and Richard Turner. Black-box α-divergence minimization. In Proceedings of the 33rd *International Conference on Machine* Learning. 2016. Matthew D. Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. *Journal* of Machine Learning Research, 14:1303–1347, 2013. Jonathan H. Huggins and Jeffrey W. Miller. Reproducible model selection using bagged posteriors. Bayesian Analysis, 18(1):79–104, 2023. Jonathan H. Huggins, Mikołaz Kasprzak, Trevor Campbell, and Tamara Broderick. Validated variational inference via practical posterior error bounds. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020. Pavel Izmailov, Wesley J Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Subspace inference for Bayesian deep learning. In *Uncertainty in Artificial Intelligence*. 2020. Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, and Andrew Gordon Wilson. What are Bayesian neural network posteriors really like? In Proceedings of the 38th *International Conference on Machine* Learning. 2021. Jukka Kohonen and Jukka Suomela. Lessons learned in the challenge: making predictions and scoring them. In *Machine Learning Challenges Workshop*, pp. 95–116. Springer, 2005. Chunyuan Li, Changyou Chen, Kai Fan, and Lawrence Carin. High-order stochastic gradient thermostates for Bayesian learning of deep models. In *Proceedings of the Thirtieth AAAI Conference on Artifical Intelligence*. 2016. Qian Liu and Dilin Wang. Stein variaitonal gradient descent: A general purpose Bayesian inference algorithm. In *Advances in Neural Informational Processing Systems*. 2016. Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of the 33rd *International Conference on Machine Learning*. 2016. Sanae Lofti, Pavel Izmailov, Gregory Benton, Micah Goldblum, and Andrew Gordon Wilson. Bayesian model selection, the marginal likelihood, and generalization. In Proceedings of the 39th *International Conference* on Machine Learning. 2022. Christos Louizos and Max Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In Proceedings of the 33rd *International Conference on Machine Learning*. 2016. David J.C. MacKay. *Information Theory, Inference, and Learning Algorithms*. Cambridge University Press, 2003. Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for Bayesian uncertainty in deep learning. *Advances in Neural Information Processing Systems*, 32, 2019. Aaron Mishkin, Frederik Kunstner, Didrik Nielsen, Mark Schmidt, and Mohammad Emtiyaz Khan. SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient. In Advances in Neural Informational Processing Systems. 2018. Sebastian W Ober and Laurence Aitchison. Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes. In Proceedings of the 38th International Conference on Machine Learning. 2021. Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Schölkopf. Evaluating predictive uncertainty challenge. In *Machine Learning Challenges Workshop*, pp. 1–27. Springer, 2005. Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In *Proceedings of the* 17th *International Conference on Artificial Intelligence and Statistics*, 2014. Christian P Robert. Intrinsic losses. *Theory and Decision*, 40:191–214, 1996. Jiaxin Shi, Shengyang Sun, and Jun Zhu. Kernel implicit variational inference. In *International Conference* on Learning Representations. 2018. Shengyang Sun, Changyou Chen, and Lawrence Carin. Learning structured weight uncertaitny in Bayesian neural networks. In Proceedings of the 20th *International Conference on Artificial Intelligence and Statistics*. 2017. Sean Talts, Michael Betancourt, Daniel Simpson, Aki Vehtari, and Andrew Gelman. Validating Bayesian inference algorithms with simulation-based calibration. *arXiv preprint arXiv:1804.06788*, 2018. Matthew J Vowels. Misspecification and unreliable interpretations in psychology and social science. *Psychological Methods*, 28(3):507, 2023. Anqi Wu, Sebatian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Deterministic variational inference for robust Bayesian neural networks. In *International* Conference on Learning Representations. 2019. Jiayu Yao, Weiwei Pan, Soumya Ghosh, and Finale Doshi-Velez. Quality of uncertainty quantification for Bayesian neural network inference. *arXiv pre-print arXiv:1906.09686*, 2019. ## A Variational Approximations In Section 3 we formed isotropic Gaussian approximations to the exact posterior. In our illustrative examples, the exact posterior itself is a Gaussian distribution, N (µ, Σ). In Sections 3.1 and 3.3 we use variational approximations that share the same mean as the exact posterior and are isotropic, N (*µ, ρ*I), where I is a two-dimensional identity matrix and ρ > 0 is a scalar. In this family of distributions, the optimal variational approximation is N (*µ, ρ*∗I), where, $$\rho^{*}=\operatorname*{argmin}_{\rho\in\mathbb{R}_{+}}\operatorname{KL}\left(\mathcal{N}(\mu,\rho\mathrm{I})\parallel\mathcal{N}(\mu,\Sigma)\right),$$ $$=\frac{2}{\operatorname{tr}(\Sigma^{-1})}.$$ $$(16)$$ The result follows from setting the gradient ∇ρKL (N (µ, ρI) k N (µ, Σ))) to zero and rearranging terms, $$\begin{array}{c}\nabla_{\rho}{\rm KL}\left({\cal N}(\mu,\rho{\rm t})\parallel{\cal N}(\mu,\Sigma)\right))=0,\ \Longrightarrow\ \nabla_{\rho}\frac{{\rm tr}(\rho\Sigma^{-1})}{2}-\nabla_{\rho}\ln\rho=0,\\ \Longrightarrow\ \frac{1}{\rho}=\frac{{\rm tr}(\Sigma^{-1})}{2},\ \ \ \ \Longrightarrow\ \rho=\frac{2}{{\rm tr}(\Sigma^{-1})}.\end{array}\tag{17}$$ Note that ρ ∗is guaranteed to be positive since Σ −1is positive definite and thus tr(Σ−1) > 0. This optimal variational approximation, N (*µ, ρ*∗I) is used in Panel (B) of Figure 1, Figure 2, and Figure 4. The other panels use N (*µ, λρ*∗I), with λ ∈ [1, 5, 10, 15, 30] for Figure 1 and Figure 2. For Figure 4 (Left), λ takes values in [4, 5, 7, 9], and in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] for Figure 4 (Right). ## B Experimental Details And Additional Experiments B.1 Confidence Intervals An additional note on confidence intervals for TLL. Suppose we are comparing two models Π and Π˜ . Although TLL(D?; Π) (respectively, σˆTLL(Π)) will generally be correlated with TLL(D?; Π˜) (respectively, σˆTLL(Π˜ )), we do not expect a more careful treatment of that correlation to change our substantive conclusions. Confidence intervals for RMSE. To compute the RMSE confidence interval, we first compute the mean of the squared errors (MSE, m) and its associated standard error of the mean (s). Since we have a large number of data points and the MSE takes the form of a mean, we assume the sampling distribution of the MSE is well-approximated by a normal distribution. We use [m − 2*s, m* + 2s] as the 95% confidence interval for the MSE. We use [ √m − 2s, √m + 2s] as the 95% confidence interval for the RMSE. Note that the resulting RMSE confidence interval will generally not be symmetric. ## B.2 Additional Tll In The Wild Experiments SWAG with higher learning rates. In Figure 7 we continue the experiment described in Section 3.2 but using higher learning rates of 12, 15, and 20. Despite moving further from the exact posterior the test log-likelihood remains higher than those achieved by SWAG approximations with lower learning rates (panels (B) through (E) of Figure 3). Mean field variational inference. Next, we reproduce the experimental setup described in Section 3.2, but instead of using SWAG to approximate the posterior, we use mean field variational inference and examine the relationship between TLL and posterior approximation quality under different re-scalings of the marginal variance of the optimal variational approximation. Figure 8 shows the posterior mean and the 95% predictive interval of the misspecified regression line θ >φ from (A) the Bayesian posterior; (B) the mean field variational approximation restricted to isotropic Gaussians; and (C)–(F) several re-scaled variational approximations. In each plot, we overlaid the observed data D500, the true data generating function in dashed black, and also report the 2-Wasserstein distance between the true posterior and each approximation and the TLL ![16_image_0.png](16_image_0.png) Figure 7: (Left). Predictive distributions under the SWAG posterior with SWAG learning rate of (G) 12, (H) 15, (I) 20. The two numbers in the title of each plot are the 2-Wasserstein distance to the exact posterior and test log-likelihood computed on 104 test set observations. Two standard errors in the test log-likelihood estimates are (G) 0.01, (H) 0.009, (I) 0.08. (Right). Contours of the SWAG approximations with different learning rates. The line 01 = 0 is highlighted in red. ![16_image_1.png](16_image_1.png) Figure 8: (Left). Predictive distributions under the Bayesian posterior and mean field variational approximations. The two numbers in the title of each plot are the 2-Wasserstein distance to the true posterior and test log-likelihoods computed on 104 test set observations. Two standard errors in the test log-likelihood estimates are (A) 0.16, (B) 0.16, (C) 0.03, (D) 0.02, (E) 0.02, (F) 0.01. (Right). The relationship between distance to posterior and test log-predictive density. Observe the log scale of the horizontal axis and the non-monotonic relationship between test log-predictive density and 2-Wasserstein distance to the Bayesian posterior. averaged over N* = 104 test data points drawn from Equation (4) Like in our previous example, the mean field approximation (panel (B) of Figure 8) is very close to the exact posterior. Further, as we scale up the marginal variance of the approximate posteriors, the posterior predictive distributions cover more data, yielding higher TLL, while simultaneously moving away from the exact posterior over the model parameters in a 2-Wasserstein sense. Interestingly, when the approximation is diffuse enough, TLL decreases, again highlighting its non-monotonic relationship with posterior approximation quality. In this example of a misspecified model, the non-monotonic relationship between TLL and 2-Wasserstein distance means that TLL is, at best, a poor proxy of posterior approximation quality. ## B.3 The Highest Tll Does Not Match The Best Estimate Of A Posterior Summary Statistic Or The Lowest Kl We first reproduce the experimental setup that produced Figure 4, but now in Figure 9, we plot TIL against the error in estimating the posterior standard deviation. In particular, the horizontal axis shows the absolute value of the difference between (a) the marginal standard deviation of the parameters of interest under the approximation and (b) the marginal standard deviation under the exact posterior. As in the right panel of Figure 4, we observe that the highest (best) TLL does not correspond to the lowest (best) error in estimating the posterior standard deviation. ![17_image_0.png](17_image_0.png) Figure 9: The non-monotonic relationship between difference in marginal standard deviations and TLL in a well-specified case. (Left) The horizontal axis reports the absolute difference in the standard deviation of the weight 01 between an approximation and the posterior. (Right) The horizontal axis reports the absolute difference in the standard deviation of the bias 02 between an approximation and the posterior. To create Figure 10, we reproduce the experimental setup from Figure 8. Relative to the right panel of Figure 8, we change only what is plotted on the horizontal axis; for Figure 10, we plot the log of the absolute value of the difference between marginal standard deviations. Again, we see that the highest (best) TLL does not correspond to the lowest (best) error in estimating the posterior standard deviation. ![17_image_1.png](17_image_1.png) Figure 10: The non-monotonic relationship between difference in marginal standard deviations and TLL in a misspecified case. The meaning of horizontal axis is similar to that of Figure 9. Finally, Figure 11 reproduces analyses from the main text but uses KL divergence instead of 2-Wasserstein distance to measure posterior approximation quality. In particular, the left panel of Figure 11 recreates the right panel of Figure 1; as in Figure 1, we see that the highest (best) TLL does not correspond to the lowest (best) divergence value. Likewise, the right panel of Figure 11 recreates the right panel of Figure 4; as in Figure 4, we see the highest TLL again does not correspond to the lowest divergence value. ![18_image_0.png](18_image_0.png) Figure 11: The smallest KL divergence does not correspond to the largest TLL, in a misspecified case (*left*) and a well-specified case (*right*). The left panel reproduces the experimental results presented in the right panel of Figure 1, but uses the reverse KL divergence to measure discrepancy with the exact posterior instead of the 2-Wasserstein distance. The right panel reproduces the results in the right panel of Figure 4. ## C Alternative Model Comparison Metrics In this work, we focused on one model comparison metric, test log-likelihood. However, there are many other model comparison metrics. In what follows, we consider marginal likelihood and the joint log predictive density as potential model comparison metrics. ## C.1 Marginal Likelihood Instead of choosing the model with higher test log-likelihood, one might instead choose the model with higher marginal likelihood, where the marginal likelihood for model Π is π(D) = Rπ(D|θ)π(θ)dθ. That is, given models Π and Π˜ , one might choose Π whenever π(D) > π˜(D); As an anonymous referee pointed out, the marginal likelihood is a well-established Bayesian model selection criterion; see, e.g., pg. 348 of MacKay (2003). However, the marginal likelihood criterion has several wellknown limitations. First, marginal likelihood comparisons can be unreliable when the models being compared are misspecified; see §2 and references within Huggins & Miller (2023), and see also Berk (1966). Beyond this concern, marginal likelihood quantifies only how well a model fits the available training data. It is otherwise silent or "peripherally related" (Lofti et al., 2022) to the predictive quality of the model. And so, just because a model Π yields a higher marginal likelihood than Π˜ , it does not necessarily follow that Π produces better predictions of future data than Π˜ - even when the training and testing data are i.i.d. from the same distribution. These limitations notwithstanding, one might still try to use marginal likelihood to assess posterior approximation. Though we have not explicitly constructed examples, we anticipate similar phenomena as reported above; namely, we expect it is possible for one posterior approximation to achieve higher marginal likelihood than another while providing a worse approximation to the exact posterior. And we would expect these phenomena to occur for reasons analogous to the behavior we saw for TLL; what marginal likelihood measures is not directly related to posterior approximation quality. ## C.2 The Logarithm Of The Joint Predictive Density Lofti et al. (2022) consider the logarithm of the joint predictive density, log π(y ? 1 , . . . , y?N? |D), as an alternative metric for assessment of predictive quality. Analogous to Equation (1), $$\pi(y_{1}^{\star},\ldots,y_{N^{\star}}^{\star}|{\mathcal{D}})=\int\pi(y_{1}^{\star},\ldots,y_{N^{\star}}^{\star}|\theta)\pi(\theta|{\mathcal{D}})d\theta.$$ $$(18)$$ , . . . , y?N? |θ)π(θ|D)dθ. (18) If the training data are strictly independent from the test data under the specified model Π (not just conditionally independent given a latent parameter), then the log joint predictive density will equal the TLL - but such a case would generally be uninteresting in Bayesian modeling. In general, the log joint predictive density need not equal the TLL. We have already seen that the TLL is a Monte Carlo estimate of the elpd; that is, the TLL is an estimate of "how well is a model *expected* to predict a single held-out observation, *on average*?" The number of samples in that estimate (i.e., the number of test data points) will dictate the Monte Carlo noise of that estimate. There are analogously (at least) two perspectives on the log joint predictive density. One is that the log joint predictive measures "how well does a model predict a specific collection of held-out data?" A second is that the log joint predictive is an estimate (from just a single noisy Monte Carlo draw and therefore very high in variance) of "how well is a model expected to predict the next N? held-out observations, on average?" The TLL and log joint predictive can be seen as two extremes of a spectrum, as we describe next. Suppose one could divide a test set of size N?evenly into M mini-batches of test data. Then one could make a Monte Carlo estimate with M samples of "how well is a model expected to predict the next N?/M held-out observations, on average?" As M increases, the Monte Carlo noise decreases and the size of the held-out set of interest decreases. Depending on the application, any of these questions may be of interest to a practitioner. While the object of study in this paper has been the TLL, the arguments in Section 3.4 suggest that the log joint predictive density (or any other point on the spectrum above) would exhibit a similar issue to the one described in this paper. After all, just like TLL, log joint predictive density does not directly track discrepancies between distributions of latent model parameters.
Review 1: Summary: By giving examples with experiments, they refuted the following statement: * In Section 3, “The higher the test log-likelihood, the more accurately an approximate inference algorithm recovers the Bayesian posterior distribution of latent model parameters.” * In Section 4, “The higher the test log-likelihood, the better the predictive performance on held-out data according to other measurements, like root mean squared error.” In the last section, the author suggested that it is important understand that different analysis requires different goals with several suggestions: * If the goal is approximating posterior, don’t use TTL. * Change scoring rule depending on goals. Strengths and Weaknesses: ## Strengths The paper is overall very clear and organized. I vote for the acceptance. ## Weakness Explanation of some parts would be lacking. Requested Changes: ### Sections 1, and 2 Clearly written #### Section 3 This section is still clear. However, readers would appreciate more high-level explanations for this phenomenon, not necessarily specific to each instance. For example, in Section 3.3, I have the following natural questions. * "I agree that the phenomenon (High TTL does not necessarily indicate a better posterior approximation) is observed in Section 4. However, it is better to explain more why it occurs from more broader viewpoints. According to the Bernstein–von Mises theorem, as the sample size approaches infinity and the model is correctly specified, the posterior should converge to the true parameter. Therefore, when the sample size is sufficiently large, I guess this phenomenon should not occur? So, is it due to an insufficient sample size? * I mentioned the Bernstein–von Mises theorem. However, the assumptions of this theorem are unmet in complex models like in the case of Bayesian deep neural networks, although I believe they are satisfied in Section 3. This could also contribute to the observed phenomenon. * In this context, the author evaluates the difference between the exact posterior and the approximated posterior using the Wasserstein distance. However, what if we employ alternative metrics, such as the KL distance? Perhaps this is also linked to the underlying cause of the observed phenomenon? Summary: Primarily, I aim to convey that this phenomenon would arise from several intertwined, overarching factors. Readers would benefit from a clearer explanation of the specific causes behind this phenomenon. Could you succinctly incorporate these reasons? ### In Section 5 I concur with the primary assertion that TTL may not be appropriate depending on the objectives of our analysis. However, the subsequent recommendations in the paper are rather ambiguous. It remains unclear how these suggestions can be implemented in practical terms. Readers would appreciate more explicit and actionable recommendations. Here are some specific comments. * The author argues that if our analysis aims to approximate the posterior, TTL might not be suitable. In that case, what types of experiments should we report? Should we calculate the Wasserstein distance, as the author did in toy experiments with correctly-specified models? * The author suggests that depending on their goals, it might be better to use a different scoring rule than KL. However, I believe analysts often lack a clear intuition about which divergence measure to use, which is why people typically resort to KL divergence. In such cases, what should analysts do to compare methods? * I believe another practical and natural suggestion we can offer is to include results with various divergence measures as a robust sanity check. * As the author pointed out, in cases of model misspecification (which is common in the real world), TTL becomes impractical. Similarly, when dealing with complex data, calculating the ground truth posterior is impossible. What approach should analysts take to compare methods in such situations? * In general, there may be limited options. However, in specific domains like images, if our objective is to obtain samples of "natural images," we can employ scoring functions that quantify the naturalness of images. Many variations of such functions are commonly utilized as evaluation metrics for natural images. It might be worthwhile to highlight such cases where objectives are well-defined. Broader Impact Concerns: No ================================================== Review 2: Summary: The paper discusses two claims about Bayesian modelling: Claim 1: The higher the test log-likelihood, the more accurately an approximate inference algorithm recovers the Bayesian posterior distribution of latent model parameters Claim 2: The higher the test log-likelihood, the better the predictive performance on held-out data according to other measurements, like root mean squared error. The way that these claims are addressed is primarily via examples. I find claim 1 most interesting, as will be discussed below. Secs 3.1 and 3.2 consider mis-specified models, while sec 3.3 considers a well-specified model. For claim 2, in the era of end-to-end learning, it seems clear that if one wants to optimize RMSE, one should create an objective (e.g. based on leave-one-out CV) that evaluates/estimates RMSE, and optimize that. There is no particular reason why TLL should be well-correlated with the alternative objective. The paper lacks experimental details and derivations; these should be provided in appendices, and code also provided. Strengths and Weaknesses: It is worth starting out to say that Bayesian orthodoxy would focus on the marginal likelihood $p(D) = \int p(D|\theta) p(\theta) d\theta$ as a way of comparing models. There are reasons why one might consider the ELPD instead; see e.g. secs 3.8.4-3.8.6 in Murphy PML2 (2023) for discussion on this and related topics. Certainly TLL makes sense wrt the standard supervised learning setup. However, in the Bayesian view, we should actually consider $p(\mathbf{y}^*|{\cal D}) = \int p(\theta|{\cal D}) [ \prod_{i=1}^{N^*} p(y^*_i|\theta)] d \theta$, where $\mathbf{y}^*$ is the vector of $N^*$ test cases. I.e. in the Bayesian view the test cases are *conditionally* independent given $\theta$, rather than being fully independent. The paper asks "Are you using the test log-likelihood correctly?". Given the above, one answer is no, because one should be looking at $1/N^* \log p( \mathbf{y}^*| {\cal D})$, and not $\sum_{i=1}^{N^*} \log p (y^*_i|{\cal D})$, which is considered here. Beyond this point, I do not find that the examples argue against using the test log likelihood, especially the correct version $L(\mathbf{y}^*) = 1/N^* \log p(\mathbf{y}^*|{\cal D})$. To me, what the paper is really about is the initially surprising result that in the examples of sec 3.1 and 3.3, it is the case that using an approximate posterior can give better TLL than using the exact posterior. This can be made sense of more thorough investigation -- see my comments below. ### Example 3.1. The data (N=100) is generated from a heteroscedastic model, as in eq 2. The mis-specified model is linear regression with unknown offset and slope (theta vector), and a fixed variance of 1. For this simple setup, it is possible to compute exact Bayesian inference for the theta vector, and also a "mean field" variational approximation. Fig 1 panels C-F show "variational approximations with re-scaled marginal variances", but absolutely no details are given of this rescaling. All these experimental details and derivations (including exact and VI) should be provided in an appendix, and code also provided. Details should also be given (in the appendix) for the computation of the 2-Wasserstein distance (it is a closed form for Gaussians, I believe). The key observation from this experiment is that the test log likelihood (on a sample of size $N^*=10^4$) is higher for panels C-F, where presumably the posterior variance has been inflated. So for this mis-specified model, one can obtain higher TLL for worse posterior predictions. Fig 1 (left) shows why this result is not unexpected -- inflating the variance of the posterior "wobbles" the theta_1 parameter (coeff of x) more, and this translates to error bars that increase with |x|, as in panel F. This mimics (to some extent) the effect of the heteroscedastic variance. But it would be very interesting to see how this works for $L(\mathbf{y}^*)$, as the plots show the marginal erorr bars for one test point, while this quantity requires all $N^*$ datapoints *to be generated by the same $\theta$.* I would also like to see the marginal likelihood (or ELBO for VI) for each model A-F, and the true test log likelihood $L(\mathbf{y}^*)$. Also it is readily possible to compute the true log-likelihood under the model of eq 2, this should be given when comparing to the marginal likelihood/ELBOs of the mis-specified models. Please also add a panel to Figure 1 (left) showing the true data generator and error bars -- this will be useful to compare with panel A-F. ### Sec 3.2. This is a very similar example to sec 3.1, with a mis-specified linear model compared to an exact data generator, here incorporating a quadratic term (eq 4). Similar patterns are observed as in the previous example, with higher TLL for the inexact posteriors. This example is presented as being "in the wild". However it is again on a toy example (where one can easily see how posterior variance inflation will help explain the unmodelled quadratic component). The "in the wild" claim presumably relates to the use of SWAG (Maddox et al, 2019) which is claimed to be an off-the-shelf approximate inference algorithm. However, SWAG essentially fits a (low-rank) Gaussian to the trajectory of a SGD optimization method (with learning rates set in a particular way). The main issue is that there are no guarantees that the method produces a correct estimate of the posterior distribution. Indeed the variance inflation in Fig 3 (right) is not at all unexpected given the higher learning rates used (p 6 para 1). I believe one could cover the use of SWAG by commenting at the end of sec 3.1 that SWAG with increasing learning rates mimics the effect of variance inflation. In my view, this example 3.2 thus does not add significantly to that of sec 3.1. ### Sec 3.3. Well-specified models. In this example, exact posterior analysis is carried out for a small dataset size for a parameter value $\theta_{*} = [-2,-1]$ which is unlikely under the heavily correlated prior of eq 6, as illustrated in Fig 4A. Again here variance inflation can lead to a higher TLL. This is initially surprising, in that doing exact Bayesian inference in a well-specified model is "doing the right thing". So how is it possible to do better, and that $\Delta(p,q|D) = elpd(p|D) - elpd(q|D)$ is not $\ge 0$? My analysis explaining this is given below. For a given datset ${\cal D}$ and testpoint $x_*$ we have \$p(y_*|x_*, {\cal D}) = \int p(y_*|x_*,\theta) p(\theta|{\cal D}) d\theta,\ $ \$q(y_*|x_*, {\cal D}) = \int p(y_*|x_*,\theta) q(\theta|{\cal D}) d\theta.\ $ Hence the elpd at location $x_*$ for the exact and approximate methods is given as $ \mathrm{elpd}(p,x_*|{\cal D}) = \int \log p(y_*|x_*, {\cal D}) p(y_*|x_*,\theta_*) dy_*$ , $ \mathrm{elpd}(q,x_*|{\cal D}) = \int \log q(y_*|x_*, {\cal D}) p(y_*|x_*,\theta_*) dy_*$ . Notice that the integration here is with respect to $p(y_*|x_*,\theta_*)$ and not $p(y_*|x_*,D)$. This is critical for understanding what is going on. We have the graphical model $y_* \leftarrow \theta_* \rightarrow {\cal D}$, so that ${\cal D}$ depends on $\theta_*$, but is not a function of it. We now integrate the expressions for $\mathrm{elpd}(p,x_*|{\cal D})$ and $\mathrm{elpd}(q,x_*|{\cal D})$ over $p(x_*)$ to obtain $\mathrm{elpd}(p|{\cal D}) = \int \mathrm{elpd}(p,x_*|{\cal D}) p(x_*) dx_* $ , $\mathrm{elpd}(q|{\cal D}) = \int \mathrm{elpd}(q,x_*|{\cal D}) p(x_*) dx_* $. Finally we consider the difference of these two quantities: $\Delta(p,q|{\cal D}) = \mathrm{elpd}(p|{\cal D}) - \mathrm{elpd}(q|{\cal D}) = \int \int \log \frac{p(y_*|x_*, {\cal D})}{q(y_*|x_*, {\cal D})} p(y_*|x_*,\theta_*) p(x_*) dx_* dy_*$ . If $p(y_*|x_*,\theta_*)$ and $p(y_*|x_*,{\cal D})$ were the same thing, this would give rise to a KL divergence term averaged over $p(x_*)$, and we would obtain non-negativity for the elpd difference. In general this does not hold. However, if $p(\theta|{\cal D})$ converges to $\delta(\theta - \theta_*)$ for large training sample sizes (as would normally be expected), then the KL result will hold. If you use the above analysis in this or a subsequent paper, I request that you make sure to acknowledge the anonymous referee who provided it, and not represent it as your own work. Given this analysis, it is of interest to know how often we will find $\Delta(p,q|D)$ to be negative, For a fixed $\theta_*$ this can be addressed by repeatedly sampling datasets. It is also important to know how this depends on $\theta_*$; my intuition is that the more probable $\theta_*$'s under the prior will be less likely to give rise to negative differences. The analysis in section 3.3 is very incomplete, and needs to be expanded to address the issue of *why* the TLL differences occur. It may be good to keep the same test set for all cases, and look at differences in $elpd(p,x_*|D) - elpd(q,x_*|D)$ on a case-by-case basis. ### Sec 4, Claim 2. Claim 2 seems to me to be much less interesting than Claim 1. Already on page 1 the authors mention some examples highlighting discrepancies between test log-likelihood and other analysis objectives. The GP example is annoying, in that to make things come out well for the incorrect model (panel B in Figure 5), the authors do two things: (i) sample test data in the interval [5,10], which is disjoint from the training data, and (ii) handicap model A by setting its noise variance too high. But in general in machine learning, if the training and test distributions are different (issue (i)), all bets are off. The other point here is that if we wish to optimize for (say) RMSE, it is natural to create an objective (e.g. based on leave-one-out CV) that evaluates/estimates RMSE, and optimize that. For example with GPs it is efficient to compute LOO predictions (see e.g. Rasmussen and Williams, 2006, sec 5.4.2) and one could optimize this wrt kernel parameters. ### Other points The title is opaque and inadequate -- it is essential that it at least mentions Bayesian somewhere. The real issues, as discussed above, is that the examples show that approximate Bayesian methods can sometimes give rise to better predictive performance than exact Bayesian methods. The paper lacks experimental details and derivations; these should be provided in appendices, and code also provided. In Figs 1 and 4 I find no use for the two numbers above each plot -- these can be obtained from the plots on the RHS. The same could hold for Fig 3 if another plot were made like Fig 1 (right). Fig 1 caption. The explanation of labels A-F should occur in the caption. A figure+caption must always be self-contained. Requested Changes: Fundamentally I believe the paper is asking the wrong questions. It is using an incorrect version of the TLL for the Bayesian case, where it should be using $L(\mathbf{y}^*)$. The paper needs to be redone to consider $L(\mathbf{y}^*)$ and not the TLL as defined in eq (1). The examples in secs 3.1 and 3.3 are quite interesting, and could potentially make an interesting paper about some seemingly counter-intutive aspects of Bayesian analysis, where an approximate posterior gives rise to better TLL than the exact posterior. But this will require an entire re-write of the paper. I do not believe that this can be properly completed within the 2 weeks timeframe of TMLR. Thus I believe the correct decision on this paper will be reject, with leave to resubmit. For detailed comments on sec 3.1, please see above. The point made in sec 3.3 where an incorrect posterior can give rise to higher TLL is of also interest, but it needs much more exploration to really explain what is going on here. I believe my analysis is very germane to this. For the reasons given above, I do not find sec 3.2 and sec 4 to be of sufficient interest and recommend that they be cut. Broader Impact Concerns: None ================================================== Review 3: Summary: The authors raise awareness of issues in using test log-likelihood as metric for evaluating inference techniques, especially from the perspective of using it for selecting which posterior approximations to use. They point out that a worse approximation can result in better test log-likelihood, and that a solution with better test log-likelihood can still have worse predictive accuracy in terms of another metric. The main message is to warn practitioners about these aspects, in form of concrete examples that make it clear how test log-likelihood for even extremely poor posterior approximations can be the best one, especially for misspecified models. Strengths and Weaknesses: **Strengths** 1. The message is very important. Test log-likelihood is indeed often used as primary or even sole measure of approximation quality, and it is not sufficient. 2. Even though the basic observations are largely known within the community, we do not have clear references that could be used to justify the choices of the metrics. This paper could serve as one. 3. The examples where approximation width is artificially changed are innovative and clear. **Weaknesses** 1. The paper fails to provide very concrete recommendations or suggestions as replacement (or for complementing) TLL as the evaluation metric, limiting the impact. 2. The second claim feels a bit like straw man; I do not believe people misuse test log-likelihood in that sense. Requested Changes: There is clear need for a paper that discusses the topics, since it indeed has become common to access various posterior approximation techniques in terms of TLL in large-scale problems where direct evaluation of the approximation quality is not possible. In other words, I agree that various authors implicitly rely on the first claim as basis for their empirical evaluation. Even though the observations made here are not strictly new and most researchers working with the methods are likely aware that TTL is merely a proxy and not a direct measure of the approximation quality, it is highly useful to have a paper that can be conveniently cited e.g. to justify why measuring directly the approximation quality instead of "the standard TLL that others use". However, I am not so sure whether the second claim is really being used. I have always thought that people report TLL and RMSE separately as the most common metrics exactly because they are well aware you cannot make conclusions across the two metrics. The experimental results in Section 4 are valuable in explicating the discrepancy, but I do not think there is convincing case of calling that particular misuse a common scenario. In light of this, I would like to see the statements in Introduction to be reformatted -- I do not think the paper would be any weaker even if the second claim was characterised from the perspective of "we additionally show how ... as a reminder that different test accuracy metrics do not necessarily align", instead of claiming that particular misunderstanding to be common. As a further evidence of Claim 1 being more believable: In Section 3 (2nd paragraph) you give a long list of references to papers that rely on this implicit claim, whereas in Section 4 you do not have any examples like this. Either you need to find those references, or should fine-tune the narrative. I like the synthetic examples as they explain very clearly how the effects are seen throughout very broad range of approximation widths; e.g. Figures 3 and 4 shows that we can indeed have approximation widths that are off by an order of magnitude and still get nice TLL. This is, in my opinion, better approach than simply showing differences in ordering for some real approximations computed with different methods (that would have fairly similar approximations and TLLs). Even though I see value in the paper, I was a bit disappointed on the depth of the discussion and lack of concrete recommendations. I understand that you are not willing to directly say "don't use TLL to compare approximations", but the paper would still be stronger if there was an actionable recommendation for the authors who are working on approximate inference methods in context of large problems where explicit comparison against true posterior is not feasible. For instance, you could recommend some sort of template sentences to use when explaining limitations of TLL, as well as alternative metrics people should routinely report. What would be your best recommendation for a metric we could use to compare e.g. Bayesian NN posteriors, given that TLL is not a valid measure? Or would you still say that in absence of better measures we should keep on reporting TLL but explain more transparently why it could lead to (potentially severe) mis-ranking of alternative approximations. It would be very useful to have a clear statement on this. **Other remarks:** - A bit more discussion on epld and Dawid-Sebastiani scores would be useful (e.g. definition of the latter). Section 2.1 now kind of suggests epld would be a more useful quantity to estimate directly and it would be better to discuss why people are not doing this but are using TTL instead. If we were (able to) estimate epld directly, you would get rid of the implicit assumption of TTL being a close approximation to epld at the end of Section 2.1, so it would be useful to explain for the readers the challenges in that. - You occasionally talk about "model comparison" (e.g. end of Section 2.1, beginning of Section 3.1), in places where I believe "approximation comparison" would be more appropriate. I think the misuse happens more commonly in cases where people compare different inference methods, not different models. This is what most of the examples in the beginning of Section 3 are doing. - Formatting: You heavily use footnotes to explain rather complex aspects. At least footnotes 1 and 7 are quite detailed and important parts, so I would integrate them into the main body. - Would be good to discuss the choice of 2-Wasserstein as the sole measure of approximation quality. At least give a definition and explain why you used it, since it is not really (yet?) a standard metric in the field. Would your results look the same if you used e.g. (symmetric) KL-divergence to measure the distance to the true posterior? Is 2-Wasserstein in some sense ideal measure here, or does it have limitations/weaknesses as well? Broader Impact Concerns: None. The paper makes important remarks that should help the community be more transparent in evaluating scientific contributions and hence has only positive ethical implications. ================================================== Metareview: Recommendation: Accept as is Comment: The paper studies the use of test log-likelihood to assess Bayesian inference algorithms or to compare forecast accuracy of different models. It provides examples and theory that well demonstrate what test log-likelihood does and does not measure. While not proposing new methods, the reviewers believe that the paper will be useful for parts of the community either as a learning tool or as useful reference. The reviewers have raised a number of concerns during the review cycle. The majority of them are satisfied that the revision has addressed the concerns. Moreover, the concerns raised by reviewer m596 are discussed, while not as fully as one could potentially do, at least sufficiently in Appendix C. ==================================================
# Fair Graph Message Passing Anonymous authors Paper under double-blind review ## Abstract There has been significant progress in improving the performance of graph neural networks (GNNs) through enhancements in graph data, model architecture design, and training strategies. For fairness in graphs, recent studies achieve fair representations and predictions through either graph data pre-processing (e.g., node feature masking, and topology rewiring) or fair training strategies (e.g., regularization, adversarial debiasing, and fair contrastive learning). How to achieve fairness in graphs from the model architecture perspective is less explored. More importantly, GNNs exhibit worse fairness performance compared to multilayer perception since their model architecture (i.e., neighbor aggregation) amplifies biases. To this end, we aim to achieve fairness via a new GNN architecture. We propose Fair Message Passing (FMP) designed within a unified optimization framework for GNNs. Notably, FMP explicitly rendering sensitive attribute usage in *forward propagation* for node classification task using cross-entropy loss without data pre-processing. In FMP, the aggregation is first adopted to utilize neighbors' information and then the bias mitigation step explicitly pushes demographic group node presentation centers together. In this way, FMP scheme can aggregate useful information from neighbors and mitigate bias to achieve better fairness and prediction tradeoff performance. Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three realworld datasets. The code is available in https://anonymous.4open.science/r/FMP-AD84. ## 1 Introduction Graph neural networks (GNNs) (Kipf & Welling, 2017; Veličković et al., 2018; Wu et al., 2019; Han et al., 2022a;b) are widely adopted in various domains, such as social media mining (Hamilton et al., 2017), knowledge graph (Hamaguchi et al., 2017) and recommender system (Ying et al., 2018), due to remarkable performance in learning representations. Graph learning, a topic with growing popularity, aims to learn node representation containing both topological and attribute information in a given graph. Despite the outstanding performance in various tasks, GNNs often inherit or even amplify societal bias from input graph data (Dai & Wang, 2021). The biased node representation largely limits the application of GNNs in many high-stake tasks, such as job hunting (Mehrabi et al., 2021) and crime ratio prediction (Suresh & Guttag, 2019). Hence, bias mitigation that facilitates the research on fair GNNs is in urgent need and we aim to achieve fair prediction for GNNs. Data, model architecture, and training strategy are the most popular aspects to improve deep learning performance. For fairness in graphs, many existing works achieving fair prediction in graphs either rely on graph pre-processing (e.g., node feature masking(Köse & Shen, 2021), and topology rewiring (Dong et al., 2022)) or fair training strategies (e.g., regularization (Jiang et al., 2022), adversarial debiasing (Dai & Wang, 2021), or contrastive learning (Zhu et al., 2020; 2021b; Agarwal et al., 2021; Ling et al., 2023)). The GNNs architecture perspective to improve fairness in graphs is less explored. More importantly, GNNs are notorious in terms of fairness since GNN aggregation amplifies bias compared to multilayer perception (MLP) (Dai & Wang, 2021). From the GNNs architecture perspective, message passing is a critical component to improve fairness in graphs. Therefore, a natural question is raised: Can we achieve fairness via fair message passing using vanilla training loss 1 *without graph pre-processing?* In this work, we provide a positive answer by designing a fair message-passing scheme guided by a unified optimization framework 2for GNNs. The key idea of achieving fair message passing is aggregation first and then conducting bias mitigation via explicitly chasing consistent demographic group representation centers. Specifically, we first formulate an optimization problem that integrates fairness and smoothness objectives for graph data. Then, we solve the formulated problem via Fenchel conjugate and gradient descent to generate fair and informative representations, where the property of softmax function is adopted to accelerate the gradient calculation over primal variables. We also interpret the optimization problem solver as two main steps (e.g., aggregation first and then debiasing). Finally, we integrate FMP in graph neural networks to achieve fair and accurate prediction for node classification tasks. We demonstrate the superiority of FMP by examining its effectiveness and efficiency on various real-world datasets. In short, the contributions can be summarized as follows: - We demonstrate proof-of-concept that a meticulously crafted GNN architecture can achieve fairness for graph data. Our work offers a fresh outlook in comparison to conventional approaches that focus on data pre-processing and fair training strategy design. - We propose FMP to achieve fairness via explicitly incorporating sensitive attribute information in message passing, guided by a unified optimization framework. Additionally, we introduce an acceleration method based on softmax property to reduce gradient computational complexity. - The effectiveness and efficiency of FMP are experimentally evaluated on three real-world datasets. The results show that compared to the state-of-the-art, our FMP exhibits a comparable or superior trade-off between prediction performance and fairness with negligibly computation overhead. ## 2 Preliminaries 2.1 Notations We adopt bold upper-case letters to denote matrix such as X, bold lower-case letters such as x to denote vectors, and calligraphic font such as X to denote sets. Given a matrix X ∈ R n×d, the i-th row and j-th column are denoted as Xi and X·,j , and the element in i-th row and j-th column is Xi,j . We use the Frobenius norm, l1 norm of matrix X as ||X||F = qPi,j X2 i,j and ||X||1 =Pij |Xij |, respectively. Given two matrices X, Y ∈ R n×d, the inner product is defined as ⟨X, Y⟩ = tr(X⊤Y), where tr(·) is the trace of a square matrix. SF(X) represents softmax function with a default normalized column dimension. Let G = {V, E} be a graph with the node set V = {v1, · · · , vn} and the undirected edge set E = {e1, · · · , em}, where *n, m* represent the number of node and edge, respectively. The graph structure G can be represented as an adjacent matrix A ∈ R n×n, where Aij = 1 if existing edge between node vi and node vj . N (i) denotes the neighbors of node vi and N˜ (i) = N (i) ∪ {vi} denotes the self-inclusive neighbors. Suppose that each node is associated with a d-dimensional feature vector and a (binary) sensitive attribute, the feature for all nodes and sensitive attribute is denoted as Xori = R n×d and s *∈ {−*1, 1} n 3. Define the sensitive attribute incident vector as ∆s =1>0(s) ||1>0(s)||1 −1>0(−s) ||1>0(−s)||1 to normalize each sensitive attribute group, where 1>0(s) is an element-wise indicator function. ## 2.2 Gnns As Graph Signal Denoising A GNN model is usually composed of several stacking GNN layers. Given a graph G with N nodes, a GNN layer typically contains feature transformation Xtrans = ftrans(Xori) and aggregation Xagg = fagg(X*trans*|G), where Xori ∈ R n×din , Xtrans, Xagg ∈ R n×dout represent the input and output features. The 1The sensitive attributes are not adopted in vanilla training loss. We only consider node classification tasks and vanilla loss is cross-entropy loss in this paper. 2Many aggregations in popular GNNs can be interpreted as gradient descent step for specific optimization problem with specific step size and initialization (Ma et al., 2021b; Zhu et al., 2021b). 3The sensitive attribute s is not included in node features matrix Xori. feature transformation operation transforms the node feature dimension, and *feature aggregation*, updates node features based on neighbors' features and graph topology. Recent works (Ma et al., 2021b; Zhu et al., 2021a) have established the connections between many feature aggregation operations AGG(·) in representative GNNs and a graph signal denoising problem with Laplacian regularization. Here, we introduce several popular GNN architectures, including GCN/SGC, GAT, and PPNP/APPNP, as examples to show the connection from the perspective of graph signal denoising. GCN/SGC. Feature aggregation in Graph Convolutional Network (GCN) or Simplifying Graph Convolutional Network (SGC) is given by Xagg = AX˜*trans*, where A˜ = D˜ − 12 Aˆ D˜ − 12 is a normalized self-loop adjacency matrix Aˆ = A + I, and D˜ is degree matrix of A˜ . Recent works (Ma et al., 2021b; Zhu et al., 2021a) provably demonstrate that such feature aggregation can be interpreted as one-step gradient descent to minimize tr(F ⊤I − A˜ )Fwith initialization F = X*trans*. GAT. Feature aggregation in GAT applies the normalized attention coefficient to compute a linear combination of neighbor's features as Xagg,i =Pj∈N(i) αijX*trans,j* , where αij = *sof tmax*j (eij ), eij = LeakyReLU(X⊤ trans,iwi + X⊤ trans,jwj ), and wi and wj are learnable column vectors. Prior study Ma et al. (2021b) demonstrates that one-step gradient descent with adaptive stepsize P1 j∈N˜ (i) (ci+cj ) for the following objective problem: $$\min_{\mathbf{F}}\sum_{i\in\mathcal{V}}||\mathbf{F}_{i}-\mathbf{X}_{t r a n s,i}||_{F}^{2}+{\frac{1}{2}}\sum_{i\in\mathcal{V}}c_{i}\sum_{j\in\mathcal{N}(i)}||\mathbf{F}_{i}-\mathbf{F}_{j}||_{F}^{2}.$$ is actually an attention-based feature aggregation, which is equivalent to GAT if ci + cj is equivalent to eij , where ciis a node-dependent coefficient that measures the local smoothness. PPNP / APPNP. Feature aggregation in PPNP and APPNP adopt the aggregation rules as Xagg = #### PPNP / APPNP. Feature agree: $\alpha\Big(\mathbf{I}-(1-\alpha)\tilde{\mathbf{A}}\Big)^{-1}\mathbf{X}_{trans}$ and $\mathbf{X}_{agg}^{k+1}$ exist solution and one gradient descent. agg = (1 − α)AX˜ kagg + αX*trans*. It is shown that they are equivalent to the exact solution and one gradient descent step with stepsize α 2 to minimize the following objective problem: $$\operatorname*{min}_{\mathbf{F}}||\mathbf{F}-\mathbf{X}_{t r a n s}||_{F}^{2}+({\frac{1}{\alpha}}-1)t r\Big(\mathbf{F}^{\top}(\mathbf{I}-{\hat{\mathbf{A}}})\mathbf{F}\Big).$$ ## 3 Fair Message Passing In this section, we propose a new fair message-passing scheme to aggregate useful information from neighbors while debiasing representation bias. In this way, fair prediction can be achieved from a model backbone perspective. Specifically, we formulate fair message passing as an optimization problem to pursue *smoothness* and *fair* node representation simultaneously 4. Together with an effective and efficient optimization algorithm, we derive the closed-form fair message passing. Finally, the proposed FMP is shown to be integrated into fair GNNs at three stages, including transformation, aggregation, and debiasing step, as shown in Figure 1. These three stages adopted node feature, graph topology, and sensitive attributes respectively. ## 3.1 The Optimization Framework In previous work (Ma et al., 2021b), a general and universal framework is developed to understand aggregation operations in GNNs. Building on top of this framework, we formulate an optimization problem to achieve fair message passing operation (replace aggregation operations in GNNs). To achieve graph smoothness prior and fairness in the same process, a reasonable message passing should be a good solution for the following optimization problem: $$\operatorname*{min}_{\mathbf{F}}{\frac{\lambda_{s}}{2}}t r(\mathbf{F}^{T}{\hat{\mathbf{L}}}\mathbf{F})+{\frac{1}{2}}||\mathbf{F}-\mathbf{X}_{t r a n s}||_{F}^{2}+\underbrace{\lambda_{f}||\mathbf{\Delta}_{s}S F(\mathbf{F})||_{1}}_{h_{f}\left(\mathbf{\Delta}_{s}S F(\mathbf{F})\right)}.$$ $$\quad(1)$$... 4Fair message passing is an alternative operation to replace GNNs aggregations. where L˜ represents normalized Laplacian matrix, hs(·) and hf (·) denotes the smoothness and fairness objectives 5, respectively, and Xtrans ∈ Rn×dout is the transformed dout-dimensional node features and F ∈ Rn×dout is the aggregated node features of the same matrix size. The first two terms preserve the similarity of connected node representation and thus enforce graph smoothness. The last term enforces fair node representation so that the average predicted probability between groups of different sensitive attributes can remain constant. The regularization coefficients λs and λf adaptively control the trade-off between graph smoothness and fairness. Smoothness Objective hs(·). The adjacent matrix in existing graph message passing schemes is normalized for improving numerical stability and achieving superior performance. Similarly, the graph smoothness term requires normalized Laplacian matrix, i.e., L˜ = I − A˜ , A˜ = Dˆ − 12 Aˆ Dˆ − 12 , and Aˆ = A + I. From an edge-centric view, the smoothness objective enforces connected node representation to be similar since $$tr({\bf F}^{T}\hat{\bf L}{\bf F})=\sum_{(v_{i},v_{j})\in{\cal E}}||\frac{{\bf F}_{i}}{\sqrt{d_{i}+1}}-\frac{{\bf F}_{j}}{\sqrt{d_{j}+1}}||_{F}^{2},\tag{1}$$ where di =Pk Aik represents the degree of node vi. Fairness Objective hf (·). The fairness objective measures the bias for node representation after aggregation. Recall sensitive attribute incident vector ∆s indicates the sensitive attribute group and group size via the sign and absolute value summation. Recall that the sensitive attribute incident vector as $$\left(2\right)$$ $$\Delta_{\bf s}=\frac{\mathbb{I}_{>0}({\bf s})}{||\mathbb{I}_{>0}({\bf s})||_{1}}-\frac{\mathbb{I}_{>0}(-{\bf s})}{||\mathbb{I}_{>0}(-{\bf s})||_{1}},\tag{3}$$ and SF(F) represents the predicted probability for node classification task, where SF(F)ij = Pˆ(yi = j|X). Furthermore, we can show that our fairness objective is actually equivalent to demographic parity, i.e., ∆sSF(F)j = Pˆ(yi = j|si = 1, X) − Pˆ(yi = j|si = −1, X). Please see proof in Appendix B. In other words, our fairness objective, l1 norm of ∆sSF(F) characterizes the predicted probability difference between two groups with different sensitive attributes. Therefore, our proposed optimization framework can pursue graph smoothness and fairness simultaneously. ## 3.2 Optimization Problem Solver For smoothness objective, many existing popular message passing schemes can be derived based on gradient descent with appropriate step size choice (Ma et al., 2021b; Zhu et al., 2021a). In this paper, we consider smoothness objective hs(F) and fairness objective hf (∆SF(F)) simultaneously for chasing fair and accurate prediction. However, directly solving the optimization problem (1) is much more challenging due to the nonsmoothness of the fairness objective, and the non-separability of smoothness objective hs(F) and fairness objective hf (∆SF(F)) due to incident vector ∆s. ## 3.2.1 Bi-Level Optimization Problem Formulation. In the literature, many optimization algorithms are developed for optimization problems with l1 norm, such as Alternating Direction Method of Multipliers (ADMM) and Newton type algorithms (Ghadimi et al., 2014; Varma et al., 2019). However, these algorithms require non-trivial sub-problem solving for each iteration. Therefore, computation complexity is high and is infeasible to integrate deep learning models. Fortunately, Fenchel conjugate (a.k.a. convex conjugate) (Rockafellar, 2015) can transform the original problem as an equivalent saddle point problem using a primal-dual algorithm (Liu et al., 2021). In this way, the computation complexity can be reduced and compatible with back-propagation training. Similarly, to solve optimization problem 1 in a more effective and efficient manner, Fenchel conjugate (Rockafellar, 2015) 5Such smoothness objective is the most common-used one in existing methods (Ma et al., 2021b; Belkin & Niyogi, 2001; Kalofolias, 2016). The various other smoothness objectives could be considered to improve the performance of FMP and we leave it for future work. is introduced to transform the original problem into a bi-level optimization problem. For the general convex function h(·), its conjugate function is defined as h ∗(U) △= sup X ⟨U, X⟩ − h(X). Based on Fenchel conjugate, the fairness objective can be transformed as variational representation hf (p) = sup u ⟨p, u⟩ − h ∗ f (u), where p = ∆sSF(F) ∈ R 1×dout is a predicted probability vector for classification. Furthermore, the original optimization problem is equivalent to $$\operatorname*{min}_{\mathbf{F}}\operatorname*{max}_{\mathbf{u}}h_{s}(\mathbf{F})+\langle\mathbf{p},\mathbf{u}\rangle-h_{f}^{*}(\mathbf{u})$$ $$\left(4\right)$$ (u) (4) where u ∈ R 1×dout and h ∗ f (·) is the conjugate function of fairness objective hf (·). ## 3.2.2 Problem Solution Motivated by Proximal Alternating Predictor-Corrector (PAPC) (Loris & Verhoeven, 2011; Chen et al., 2013), the min-max optimization problem (4) can be solved by the following fixed-point equations with per iteration low computation complexity and convergence guarantee $$\begin{array}{l}\mathbf{F}=\mathbf{F}-\nabla h_{s}(\mathbf{F})-\frac{\partial\langle\mathbf{p},\mathbf{u}\rangle}{\partial\mathbf{F}},\\ \mathbf{u}=\mbox{prox}_{h_{f}^{*}}\big{(}\mathbf{u}+\mathbf{\Delta}_{\mathbf{s}}SF(\mathbf{F})\big{)}.\end{array}\tag{1}$$ $$\mathbf{\Sigma}$$ where proxh ∗ f (u) = arg min y||y − u||2F + h ∗ f (y). Fortunately, the proximal operators can be obtained with a close form, which makes deep learning model integration feasible. Specifically we provide the close form of the proximal operators in the following proposition: Proposition 3.1 (Proximal Operators). *The proximal operators prox*βh∗ f (u) satisfies $$p r o x_{\beta h_{f}^{*}}({\bf u})_{j}=s i g n({\bf u})_{j}\operatorname*{min}{\big(}|{\bf u}_{j}|,\lambda_{f}{\big)},$$ $$\mathbf{\Sigma}$$ , (6) where sign(·) and λf *are element-wise sign function and hyperparameter for fairness objective. In other* words, such a proximal operator is an element-wise projection into l∞ *ball with radius* λf . Similar to "predictor-corrector" algorithm (Loris & Verhoeven, 2011), we adopt an iterative algorithm to find the saddle point for the min-max optimization problem. Specifically, starting from (F k, u k), we adopt a gradient descent step on the primal variable F to arrive (F¯ k+1, u k) and then followed by a proximal ascent step in the dual variable u. Finally, a gradient descent step on a primal variable in point (F¯ k+1, u k) to arrive at (F k+1, u k). In short, the iteration can be summarized as $$\left\{\begin{array}{l}\bar{\mathbf{F}}^{k+1}=\mathbf{F}^{k}-\gamma\nabla h_{s}(\mathbf{F}^{k})-\gamma\frac{\partial(\mathbf{p}_{s}\mathbf{p}^{k})}{\partial\mathbf{F}}\bigg{|}_{\mathbf{F}^{k}},\\ \mathbf{u}^{k+1}=\text{prox}_{\beta h_{f}}(\mathbf{r}^{k}+\beta\mathbf{\Delta}_{s}\mathbf{SF}(\bar{\mathbf{F}}^{k+1})),\\ \bar{\mathbf{F}}^{k+1}=\mathbf{F}^{k}-\gamma\nabla h_{s}(\mathbf{F}^{k})-\gamma\frac{\partial(\mathbf{p}_{s}\mathbf{u}^{k+1})}{\partial\mathbf{F}}\bigg{|}_{\mathbf{F}^{k}}.\end{array}\right.\tag{7}$$ where γ and β are the step size for primal and dual variables. Note that the close-form for ∂⟨p,u⟩ ∂F∈ R n×dout and proxβh∗ f (·) are still not clear, we will provide the solution one by one. FMP Scheme. Similar to works (Ma et al., 2021b; Liu et al., 2021), choosing γ =1 1+λs and β = 1 2γ , we have $${\bf F}^{k}-\gamma\nabla h_{s}({\bf F}^{k})=\left((1-\gamma){\bf I}-\gamma\lambda_{s}\tilde{\bf L}\right){\bf F}^{k}+\gamma{\bf X}_{trans}\tag{8}$$ $$=\gamma{\bf X}_{trans}+(1-\gamma)\tilde{\bf A}{\bf F}^{k},$$ Therefore, we can summarize the proposed FMP as two phases, including propagation with skip connection (Step ❶) and bias mitigation (Steps ❷-❺). For bias mitigation, Step ❷ updates the aggregated node features for fairness objective; Steps ❸ and ❹ aim to learn and "reshape" perturbation vector in probability space, ![5_image_0.png](5_image_0.png) Step $\mathbf{0}$ Step $\mathbf{\Theta}$ Step $\mathbf{\Theta}$ Step $\mathbf{\Theta}$ Step $\mathbf{\Theta}$ Step $\mathbf{\Theta}$ Step $\mathbf{\Theta}$ Figure 1: The model pipeline consists of three steps: MLP (feature transformation), propagation with skip connection, and debiasing via low-rank perturbation in probability space. respectively. Step ❺ explicitly mitigates the bias of node features based on gradient descent on the primal variable. The mathematical formulation is given as follows: $\left(\begin{array}{l}\mathbf{X}_{agg}^{k+1}=\gamma\mathbf{X}_{trans}+(1-\gamma)\tilde{\mathbf{AF}}^{k},\\ \tilde{\mathbf{F}}^{k+1}=\mathbf{X}_{agg}^{k+1}-\gamma\frac{\partial(\mathbf{p},\mathbf{u}^{k})}{\partial\mathbf{F}}\bigg{|}_{\mathbf{F}^{k}},\\ \tilde{\mathbf{u}}^{k+1}=\mathbf{u}^{k}+\beta\mathbf{\Delta}_{\mathbf{s}}SF(\tilde{\mathbf{F}}^{k+1}),\\ \mathbf{u}^{k+1}=\min\left(|\tilde{\mathbf{u}}^{k+1}|,\lambda_{f}\right)\cdot sign(\tilde{\mathbf{u}}^{k+1}),\\ \mathbf{F}^{k+1}=\mathbf{X}_{agg}^{k+1}-\gamma\frac{\partial(\mathbf{p},\mathbf{u}^{k+1})}{\partial\mathbf{F}}\bigg{|}_{\mathbf{F}^{k}}.\end{array}\right.$ , Step ❷ k+1), Step ❹ . Step ❺ where Xk+1 agg represents the node features with normal aggregation and skip connection with the transformed input X*trans*. ## 3.2.3 Gradient Computation Acceleration The softmax property is also adopted to accelerate the gradient computation. Note that p = ∆sSF(F) and SF(·) represents softmax over column dimension, directly computing the gradient ∂⟨p,u⟩ ∂Fbased on chain rule involves the three-dimensional tensor ∂p ∂F with gigantic computation complexity. Instead, we simplify the gradient computation based on the property of softmax function in the following theorem. Theorem 3.2 (Gradient Computation). *The gradient over primal variable* ∂⟨p,u⟩ ∂F*satisfies* $${\frac{\partial\langle\mathbf{p},\mathbf{u}\rangle}{\partial\mathbf{F}}}=\mathbf{U}_{s}\odot S F(\mathbf{F})-S u m_{1}(\mathbf{U}_{s}\odot S F(\mathbf{F}))S F(\mathbf{F}).$$ ∂F= Us ⊙ SF(F) − Sum1(Us ⊙ SF(F))SF(F). (9) where Us △= ∆⊤ s u, ⊙ represents the element-wise product and Sum1(·) *represents the summation over column* dimension with preserved matrix shape. ## 4 Discussion On Fmp In this section, we provide the interpretation and analyze the *efficiency*, and white-box usage for sensitive attribute of the proposed FMP scheme. Furthermore, we also discuss how FMP identifies the influence of sensitive attributes from model forward propagation. $$({\mathfrak{g}})$$ FMP Interpretation Note that the gradient of fairness objective over node features F satisfies ∂⟨p,u⟩ ∂F = ∂⟨p,u⟩ ∂SF (F) ∂SF (F) ∂Fand ∂⟨p,u⟩ ∂SF (F) = ∆⊤ s u, such gradient calculation can be interpreted as three steps: Softmax transformation, perturbation in probability space, and debiasing in representation space. Specifically, we first map the node representation into probability space via softmax transformation. Subsequently, we calculate the gradient of fairness objective in probability space. It is seen that the perturbation ∆⊤ s u actually poses low-rank debiasing in probability space, where the nodes with different sensitive attributes embrace opposite perturbations. In other words, the dual variable u *represents the perturbation direction in probability space.* Finally, the perturbation in probability space will be transformed into representation space via Jacobian transformation ∂SF (F) ∂F. Efficiency. FMP is an efficient message-passing scheme. The computation complexity for the aggregation (sparse matrix multiplications) is O(mdout), where m is the number of edges in the graph. For FMP, the extra computation mainly focuses on the perturbation calculation, as shown in Theorem 3.2, with the computation complexity O(ndout). The extra computation complexity is negligible in that the number of nodes n is far less than the number of edges m in the real-world graph. Additionally, if directly adopting backward propagation to calculate the gradient, we have to calculate the three-dimensional tensor ∂p ∂F with computation complexity O(n 2dout). In other words, thanks to the softmax property, we achieve an efficient fair message-passing scheme. White-box Usage for Sensitive Attribute. The proposed FMP explicitly achieves graph smoothness and fairness objectives via alternative gradient descent. In other words, the usage of sensitive attributes in propagation to mitigate bias is in a white-box manner. Note that such white-box usage of sensitive attributes is a promising property to understand how sensitive attribute usage forces fairness, which is not achieved by previous fairness methods in GNNs. For example, fair training loss utilizes sensitive attributes to regularize the behavior of model prediction and obtain fairer model parameters via rectifying gradients w.r.t. model parameters. In other words, the sensitive attribute information is implicitly encoded in the well-trained model parameters, which makes it hard to understand how sensitive attribute usage helps fair prediction. Pre-processing fairness methods adopt sensitive attributes to revise data (e.g., node masking and topology rewiring) either in a learnable way or via pre-defined several operations (e.g., node masking and edge deletions). Similarly, the sensitive attribute information is implicitly encoded in the processed data. The understanding of fairness prediction achievement is infeasible. Our FMP can provide a white-box usage for sensitive attributes since we can directly identify that the usage of sensitive attributes is to force the demographic group node representation centers together during forward propagation. To facilitate the understanding of the influence of sensitive attributes, we measure the influence of sensitive attributes as the difference of final prediction between the well-trained fair model using sensitive attributes and vanilla models without sensitive attribute usage. The sensitive attribute has a critical influence to achieve fair prediction and the prediction is highly different for the vanilla model (trained with vanilla loss and no data preprocessing) and the fair model (trained with fair methods). We visualize the logit layer node representation for different methods in Appendix H.3. The proposed FMP explicitly uses the sensitive attribute information in Steps ❷-❺ during forward propagation. In other words, if we aim to identify the influence of sensitive attributes for FMP, it is sufficient to check the difference between the input and output for the debiasing step since it is disentangled with feature transformation and aggregation. It is worth mentioning that the required information for identifying the influence of sensitive attributes is naturally from the forward propagation. However, for the fair model from existing works (e.g, adding regularization and adversarial debiasing), note that the sensitive attribute information is implicitly encoded in the well-trained model weight, the sensitive attribute perturbation inevitably leads to the variability of well-trained model weight. Therefore, it is required to retrain the model for probing the influence of sensitive attribute perturbation. The key drawback of these methods is due to encoding the sensitive attributes information into well-trained model weights. From the auditors' perspective, it is quite hard to identify the influence of sensitive attributes only given a well-trained fair model. Instead, our designed FMP explicitly adopts the sensitive attribute information in the forward propagation process, which naturally avoid the dilemma that sensitive attributes are encoded into well-trained model weight. In a nutshell, FMP encompasses higher transparency since (1) the sensitive attribute is explicitly adopted in | Models | | Pokec-z | Pokec-n | NBA | | | | |--------------------------------------------|------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|--------------|---------------------------------------------------|--------------|--------------|-----------| | | Acc (%) ↑ | ∆DP (%) ↓ ∆EO (%) ↓ | Acc (%) ↑ | ∆DP (%) ↓ ∆EO (%) ↓ | Acc (%) ↑ | ∆DP (%) ↓ | ∆EO (%) ↓ | | MLP | 70.48 ± 0.77 1.61 ± 1.29 2.22 ± 1.01 | 72.48 ± 0.26 1.53 ± 0.89 3.39 ± 2.37 | 65.56 ± 1.62 | 22.37 ± 1.87 | 18.00 ± 3.52 | | | | GAT | 69.76 ± 1.30 2.39 ± 0.62 2.91 ± 0.97 | 71.00 ± 0.48 3.71 ± 2.15 7.50 ± 2.88 57.78 ± 10.65 20.12 ± 16.18 13.00 ± 13.37 | | | | | | | GCN | 71.78 ± 0.37 3.25 ± 2.35 2.36 ± 2.09 73.09 ± 0.28 3.48 ± 0.47 5.16 ± 1.38 | | 61.90 ± 1.00 | 23.70 ± 2.74 | 17.50 ± 2.63 | | | | SGC | 71.24 ± 0.46 4.81 ± 0.30 4.79 ± 2.27 | 71.46 ± 0.41 2.22 ± 0.29 3.85 ± 1.63 | 63.17 ± 0.63 | 22.56 ± 3.94 | 14.33± 2.16 | | | | APPNP 66.91 ± 1.46 3.90 ± 0.69 5.71 ± 1.29 | | 69.80 ± 0.89 1.98 ± 1.30 4.01 ± 2.36 | 63.80 ± 1.19 | 26.51 ± 3.33 | 20.00 ± 4.56 | | | | JKNet | 66.89 ± 3.79 | 1.28 ±0.96 | 1.79 ± 0.82 | 63.59 ± 6.36 1.91 ± 2.14 0.70 ± 0.92 67.94 ± 2.73 | 27.80 ± 8.41 | 20.33 ± 7.52 | | | ML1 | 70.42 ± 0.40 2.35 ± 0.83 2.00 ± 0.50 | 72.36 ± 0.26 1.47 ± 1.12 3.03 ± 1.77 | 72.70 ± 1.19 | 26.46 ± 4.93 | 25.50 ± 8.38 | | | | FMP | 70.50 ± 0.50 0.81 ± 0.40 1.73 ± 1.03 72.16 ± 0.33 0.66 ± 0.40 1.47 ± 0.87 73.33 ± 1.85 18.92 ± 2.28 13.33 ± 5.89 | | | | | | | Table 1: Comparative Results with Baselines on Node Classification. forward propagation; (2) It is not necessary to retrain the model for probing the influence of the sensitive attribute. ## 5 Experiments In this section, we conduct experiments to validate the effectiveness and efficiency of the proposed FMP. We firstly validate that graph data with large sensitive homophily enhances bias in GNNs via synthetic experiments. Moreover, for experiments on real-world datasets, we introduce the experimental settings and then evaluate our proposed FMP compared with several baselines in terms of prediction performance and fairness metrics. ## 5.1 Experimental Settings Datasets. We conduct experiments on real-world datasets Pokec-z, Pokec-n, and NBA (Dai & Wang, 2021). Pokec-z and Pokec-n are sampled, based on province information, from a larger Facebook-like social network Pokec (Takac & Zabovsky, 2012) in Slovakia, where region information is treated as the sensitive attribute and the predicted label is the working field of the users. NBA dataset is extended from a Kaggle dataset 6 consisting of around 400 NBA basketball players. The information of players includes age, nationality, and salary in the 2016-2017 season. The players' link relationships are from Twitter with the official crawling API. The binary nationality (U.S. and overseas player) is adopted as the sensitive attribute and the prediction label is whether the salary is higher than the median. Evaluation Metrics. We adopt accuracy to evaluate the performance of node classification tasks. As for fairness metrics, we adopt two quantitative group fairness metrics to measure the prediction bias. According to works (Louizos et al., 2015; beu), we adopt *demographic parity* ∆DP = |P(ˆy = 1|s = −1)−P(ˆy = 1|s = 1)| and *equal opportunity* ∆EO = |P(ˆy = 1|s = −1, y = 1) − P(ˆy = 1|s = 1, y = 1)|, where y and yˆ represent the ground-truth label and predicted label, respectively. Baselines. We compare our proposed FMP with representative GNNs, such as GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019), JKNet (Xu et al., 2018), and MLP. We also compared with method "ML1" directly using the gradient of Eq. (1) during model forward propagation. For all models, we train 2 layers of neural networks with 64 hidden units for 300 epochs. Additionally, We also compare adversarial debiasing and adding demographic regularization methods to show the effectiveness of the proposed method 7. Implementation Details. We run the experiments 5 times and report the average performance for each method. We adopt Adam optimizer with 0.001 learning rate and 10−5 weight decay for all models. For adversarial debiasing, we adopt the train classifier and adversary with 70 and 30 epochs, respectively. The 6https://www.kaggle.com/noahgift/social-power-nba 7Please see the comparison with Fair Mixup (Chuang & Mroueh, 2021) in Appendix H.2 ![8_image_0.png](8_image_0.png) Figure 2: DP and Acc trade-off performance on three real-world datasets compared with adding regularization (Top) and adversarial debiasing (Bottom). The trade-off curve close to the right bottom corner means better trade-off performance. The units for x- and y-axis are percentages (%). hyperparameter for adversary loss is tuned in {0.0, 1.0, 2.0, 5.0, 8.0, 10.0, 20.0, 30.0}. For adding regularization, we adopt the hyperparameter set {0.0, 1.0, 2.0, 5.0, 8.0, 10.0, 20.0, 50.0, 80.0, 100.0}. ## 5.2 Experimental Results Comparison with Existing GNNs. The accuracy, demographic parity, and equal opportunity metrics of proposed FMP for Pokec-z, Pokec-n, NBA datasets are shown in Table 1 compared with MLP, GAT, GCN, SGC, and APPNP. The detailed statistical information for these three datasets is shown in Table 3. From these results, we can obtain the following observations: - Many existing GNNs underperform MLP model on all three datasets in terms of fairness metric. For instance, the demographic parity of MLP is lower than GAT, GCN, SGC and APPNP by 32.64%, 50.46%, 66.53% and 58.72% on Pokec-z dataset. The higher prediction bias comes from the aggregation within the same sensitive attribute nodes and topology bias in graph data. - Our proposed FMP consistently achieves the lowest prediction bias in terms of demographic parity and equal opportunity on all datasets. Specifically, FMP reduces demographic parity by 49.69%, 56.86%, and 5.97% compared with the lowest bias among all baselines in Pokec-z, Pokec-n, and NBA datasets. Meanwhile, our proposed FMP achieves the best accuracy in NBA dataset, and comparable accuracy in Pokec-z and Pokec-n datasets. In a nutshell, the proposed FMP can effectively mitigate prediction bias while preserving the prediction performance. Comparison with Adversarial Debiasing and Regularization. To validate the effectiveness of the proposed FMP, we also show the prediction performance and fairness metric trade-off compared with fairnessboosting methods, including adversarial debiasing (Fisher et al., 2020) and adding regularization (Chuang & Mroueh, 2020). Similar to (lou), the output of GNNs is the input of the adversary and the goal of the adversary is to predict the node sensitive attribute. We also adopt several backbones for these two methods, including MLP, GCN, GAT, and SGC. We randomly split 50%/25%/25% for training, validation, and test dataset. Figure 2 shows the Pareto optimality curve for all methods, where the right-bottom corner point represents the ideal performance (highest accuracy and lowest prediction bias). From the results, we list the following observations as follows: - Our proposed FMP can achieve better DP-Acc trade-off compared with adversarial debiasing and adding regularization for many GNNs and MLP. Such observation validates the effectiveness of the key idea in FMP: aggregation first and then debiasing. Additionally, FMP can reduce demographic parity with negligible performance cost due to transparent and efficient debiasing. - Message passing in GNNs does matter. For adding regularization or adversarial debiasing, different GNNs embrace huge distinctions, which implies that an appropriate message passing manner potentially leads to better trade-off performance. Additionally, many GNNs underperforms MLP in low-label homophily coefficient dataset, such as NBA. The rationale is that aggregation may not always bring benefit in terms of accuracy when the neighbors have low probability with the same label. ## 6 Related Works Graph Neural Networks. GNNs generalizing neural networks for graph data have already shown great success in various real-world applications. There are two streams in GNNs model design, i.e., spectral-based and spatial-based. Spectral-based GNNs provide graph convolution definition based on graph theory, which is utilized in GNN layers together with feature transformation (Bruna et al., 2013; Defferrard et al., 2016; Henaff et al., 2015). Graph convolutional networks (GCN) (Kipf & Welling, 2017) simplify spectral-based GNN model into spatial aggregation scheme. Since then, many spatial-based GNNs variant is developed to update node representation via aggregating its neighbors' information, including graph attention network (GAT) (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), SGC (Wu et al., 2019), APPNP (Klicpera et al., 2019), et al (Gao et al., 2018; Monti et al., 2017). Graph signal denoising is another perspective to understand GNNs. Recently, there are several works show that GCN is equivalent to the firstorder approximation for graph denoising with Laplacian regularization (Henaff et al., 2015; Zhao & Akoglu, 2019). The unified optimization framework is provided to unify many existing message passing schemes (Ma et al., 2021b; Zhu et al., 2021a). Fairness-aware Learning on Graphs. Many works have been developed to achieve fairness in machine learning community (Jiang et al., 2022; Han et al., 2023; Jiang et al., 2023; Chuang & Mroueh, 2020; Zhang et al., 2018; Du et al., 2021; Yurochkin & Sun, 2020; Creager et al., 2019; Feldman et al., 2015). A pilot study on fair node representation learning is developed based on random walk (Rahman et al., 2019). Additionally, adversarial debiasing is adopted to learn fair prediction or node representation so that the well-trained adversary can not predict the sensitive attribute based on node representation or prediction (Dai & Wang, 2021; Bose & Hamilton, 2019; Fisher et al., 2020). A Bayesian approach is developed to learn fair node representation via encoding sensitive information in the prior distribution in (Buyl & De Bie, 2020). Work (Ma et al., 2021a) develops a PAC-Bayesian analysis to connect subgroup generalization with accuracy parity. (Laclau et al., 2021; Li et al., 2021) aims to mitigate prediction bias for link prediction. Fairness-aware graph contrastive learning is proposed in (Agarwal et al., 2021; Köse & Shen, 2021; Ling et al., 2023). Graph data preprocessing, such as node feature masking and graph topology rewire, are also developed in (Laclau et al., 2021; Li et al., 2021; Dong et al., 2021; 2023) for node classification and link prediction tasks. However, the aforementioned works ignore the requirement of transparency in fairness. In this work, we develop an efficient and transparent fair message passing scheme explicitly rendering sensitive attribute usage. ## 7 Conclusion In this work, we achieve fairness in graphs from the model architecture perspective. We design a fair messagepassing scheme to achieve fair prediction for node classification using vanilla training loss without data pre-processing. Specifically, motivated by the unified optimization framework for GNNs, FMP is designed as aggregation first and then bias mitigation to explicitly chase smoothness and fairness objectives. We also provide a comprehensive discussion of FMP from model architecture interpretation, efficiency, and the white-box usage of sensitive attributes aspects. Experimental results on real-world datasets demonstrate the effectiveness of FMP compared with several baselines in node classification tasks. ## References Chirag Agarwal, Himabindu Lakkaraju, and Marinka Zitnik. Towards a unified framework for fair and stable graph representation learning. *arXiv preprint arXiv:2102.13186*, 2021. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. *Advances in neural information processing systems*, 14, 2001. Avishek Bose and William Hamilton. Compositional fairness constraints for graph embeddings. In *International Conference on Machine Learning*, pp. 715–724. PMLR, 2019. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. *arXiv preprint arXiv:1312.6203*, 2013. Maarten Buyl and Tijl De Bie. Debayes: a bayesian method for debiasing network embeddings. In *International Conference on Machine Learning*, pp. 1220–1229. PMLR, 2020. Peijun Chen, Jianguo Huang, and Xiaoqun Zhang. A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. *Inverse Problems*, 29(2):025011, 2013. Ching-Yao Chuang and Youssef Mroueh. Fair mixup: Fairness via interpolation. In International Conference on Learning Representations, 2020. Ching-Yao Chuang and Youssef Mroueh. Fair mixup: Fairness via interpolation. In *International Conference* on Learning Representations, 2021. Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentanglement. In *International conference on* machine learning, pp. 1436–1445. PMLR, 2019. Enyan Dai and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pp. 680–688, 2021. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. *Advances in neural information processing systems*, 29:3844–3852, 2016. Yushun Dong, Jian Kang, Hanghang Tong, and Jundong Li. Individual fairness for graph neural networks: A ranking based approach. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery* & Data Mining, pp. 300–310, 2021. Yushun Dong, Ninghao Liu, Brian Jalaian, and Jundong Li. Edits: Modeling and mitigating data bias for graph neural networks. In *Proceedings of the ACM Web Conference 2022*, pp. 1259–1269, 2022. Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, and Jundong Li. Reliant: Fair knowledge distillation for graph neural networks. *arXiv preprint arXiv:2301.01150*, 2023. Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, and Xia Hu. Fairness via representation neutralization. *arXiv preprint arXiv:2106.12674*, 2021. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In *proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining*, pp. 259–268, 2015. Joseph Fisher, Arpit Mittal, Dave Palfrey, and Christos Christodoulopoulos. Debiasing knowledge graph embeddings. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing* (EMNLP), pp. 7332–7345, 2020. Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424, 2018. Euhanna Ghadimi, André Teixeira, Iman Shames, and Mikael Johansson. Optimal parameter selection for the alternating direction method of multipliers (admm): quadratic problems. IEEE Transactions on Automatic Control, 60(3):644–658, 2014. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. Knowledge transfer for out-ofknowledge-base entities: A graph neural network approach. *arXiv preprint arXiv:1706.05674*, 2017. William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1025– 1035, 2017. Xiaotian Han, Zhimeng Jiang, Ninghao Liu, and Xia Hu. G-mixup: Graph data augmentation for graph classification. In *International Conference on Machine Learning*, pp. 1–9. PMLR, 2022a. Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, and Xia Hu. Geometric graph representation learning via maximizing rate reduction. In *Proceedings of the ACM Web Conference 2022*, pp. 1226–1237, 2022b. Xiaotian Han, Zhimeng Jiang, Hongye Jin, Zirui Liu, Na Zou, Qifan Wang, and Xia Hu. Retiring ∆DP: New distribution-level metrics for demographic parity. *arXiv preprint arXiv:2301.13443*, 2023. Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015. Zhimeng Jiang, Xiaotian Han, Chao Fan, Fan Yang, Ali Mostafavi, and Xia Hu. Generalized demographic parity for group fairness. In *International Conference on Learning Representations*, 2022. Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Na Zou, and Xia Hu. Weight perturbation can help fairness under distribution shift. *arXiv preprint arXiv:2303.03300*, 2023. Vassilis Kalofolias. How to learn a graph from smooth signals. In *Artificial Intelligence and Statistics*, pp. 920–929. PMLR, 2016. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In *International Conference on Learning Representations*, 2019. Öykü Deniz Köse and Yanning Shen. Fairness-aware node representation learning. arXiv preprint arXiv:2106.05391, 2021. Charlotte Laclau, Ievgen Redko, Manvi Choudhary, and Christine Largeron. All of the fairness for edge prediction with optimal transport. In *International Conference on Artificial Intelligence and Statistics*, pp. 1774–1782. PMLR, 2021. Peizhao Li, Yifei Wang, Han Zhao, Pengyu Hong, and Hongfu Liu. On dyadic fairness: Exploring and mitigating bias in graph connections. In *International Conference on Learning Representations*, 2021. Hongyi Ling, Zhimeng Jiang, Youzhi Luo, Shuiwang Ji, and Na Zou. Learning fair graph representations via automated data augmentations. In *International Conference on Learning Representations*, 2023. Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, and Jiliang Tang. Elastic graph neural networks. In *International Conference on Machine Learning*, pp. 6837–6849. PMLR, 2021. Ignace Loris and Caroline Verhoeven. On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty. *Inverse Problems*, 27(12):125007, 2011. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. *arXiv preprint arXiv:1511.00830*, 2015. Jiaqi Ma, Junwei Deng, and Qiaozhu Mei. Subgroup generalization and fairness of graph neural networks. Advances in Neural Information Processing Systems, 34, 2021a. Yao Ma, Xiaorui Liu, Tong Zhao, Yozen Liu, Jiliang Tang, and Neil Shah. A unified view on graph neural networks as graph signal denoising. In *Proceedings of the 30th ACM International Conference on* Information & Knowledge Management, pp. 1202–1211, 2021b. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. *ACM Computing Surveys (CSUR)*, 54(6):1–35, 2021. Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In *Proceedings of the* IEEE conference on computer vision and pattern recognition, pp. 5115–5124, 2017. Tahleen Rahman, Bartlomiej Surma, Michael Backes, and Yang Zhang. Fairwalk: Towards fair graph embedding. 2019. Ralph Tyrell Rockafellar. *Convex analysis*. Princeton university press, 2015. Harini Suresh and John V Guttag. A framework for understanding unintended consequences of machine learning. *arXiv preprint arXiv:1901.10002*, 2, 2019. Lubos Takac and Michal Zabovsky. Data analysis in public social networks. In International scientific conference and international workshop present day trends of innovations, volume 1, 2012. Rohan Varma, Harlin Lee, Jelena Kovačević, and Yuejie Chi. Vector-valued graph trend filtering with non-convex penalties. *IEEE transactions on signal and information processing over networks*, 6:48–62, 2019. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *International Conference on Learning Representations*, 2018. Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In *International conference on machine learning*, pp. 6861–6871. PMLR, 2019. Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In *International conference on* machine learning, pp. 5453–5462. PMLR, 2018. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In *Proceedings of the 24th ACM* SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 974–983, 2018. Mikhail Yurochkin and Yuekai Sun. Sensei: Sensitive set invariance for enforcing individual fairness. In International Conference on Learning Representations, 2020. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In *Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 335–340, 2018. Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. *arXiv preprint* arXiv:1909.12223, 2019. Meiqi Zhu, Xiao Wang, Chuan Shi, Houye Ji, and Peng Cui. Interpreting and unifying graph neural networks with an optimization framework. In *Proceedings of the Web Conference 2021*, pp. 1215–1226, 2021a. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Deep Graph Contrastive Representation Learning. In *ICML Workshop on Graph Representation Learning and Beyond*, 2020. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Graph contrastive learning with adaptive augmentation. In *Proceedings of the Web Conference 2021*, pp. 2069–2080, 2021b. ## A Notations | Notations | Description | |-------------------|---------------------------------------------------------------------| | |E| | The number of edges | | n | The number of nodes | | d | The number of node feature dimensions | | dout | The number of node classes | | ∆s ∈ R 1×n | The sensitive attribute incident vector | | ϵlabel | Label homophily coefficient | | ϵsens | Sensitive homophily coefficient | | Xori ∈ R n×d | The input node attributes matrix | | A ∈ R n×n | The adjacency matrix | | Aˆ ∈ R n×n | The adjacency matrix with self-loop | | A˜ ∈ R n×n | The normalized adjacency matrix with self-loop | | L ∈ R n×n | The Laplacian matrix | | Xtrans ∈ R n×dout | The output node features for feature transformation | | Fagg ∈ R n×dout | The aggregated node features after propagation | | F ∈ R n×dout | The learned node features considering graph smoothness and fairness | | u ∈ R 1×dout | The permutation direction in feature representation space | | h ∗ (·) | Fenchel conjugate function of h(·) | | ||X||F , ||X||1 | The Frobenius norm and l1 norm of matrix X | | λf , λs | Hyperparameter for fairness and graph smoothness objectives | Table 2: Table of Notations ## B Proof On Fairness Objective The fairness objective can be shown as the average prediction probability difference as follows: $$\left(\Delta_{s}SF(\mathbf{F})\right)\right)_{j}=\left[\frac{1\!\!1_{>0}(\mathbf{s})}{||1\!\!1_{>0}(\mathbf{s})||_{1}}-\frac{1\!\!1_{>0}(-\mathbf{s})}{||1\!\!1_{>0}(-\mathbf{s})||_{1}}\right]\left(SF(\mathbf{F})\right)_{:,j}$$ $$=\frac{\sum_{\mathbf{s}_{i}=1}\hat{P}(y_{i}=j|\mathbf{X})}{||1\!\!1_{>0}(\mathbf{s})||_{1}}-\frac{\sum_{\mathbf{s}_{i}=-1}\hat{P}(y_{i}=j|\mathbf{X})}{||1\!\!1_{>0}(-\mathbf{s})||_{1}}$$ $$=\hat{P}(y_{i}=j|\mathbf{s}_{i}=1,\mathbf{X})-\hat{P}(y_{i}=j|\mathbf{s}_{i}=-1,\mathbf{X}).$$ ## C Proof Of Theorem 3.2 Before providing in-depth analysis on the gradient computation, we first introduce the softmax function derivative property in the following lemma: Lemma C.1. For the softmax function with N*-dimensional vector input* y = SF(x) : R 1×N −→ R 1×N , where yj =e P xj N k=1 e xk for ∀j ∈ {1, 2, · · · , N}, the derivative N ×N *Jocobian matrix is defined by* [ ∂y ∂x ]ij = ∂yi ∂xj . Additionally, Jocobian matrix satisfies ∂y ∂x = diag(y) − y ⊤y, where IN represents N × N *identity matrix and* ⊤ denotes the transpose operation for vector or matrix. Proof. Considering the gradient ∂yi ∂xj for arbitrary i = j, according to quotient and chain rule of derivatives, we have $${\frac{\partial\mathbf{y}_{i}}{\partial\mathbf{x}_{j}}}={\frac{e^{\mathbf{x}_{i}}\sum_{k=1}^{N}e^{\mathbf{x}_{k}}-e^{\mathbf{x}_{i}+\mathbf{x}_{j}}}{\left(\sum_{k=1}^{N}e^{\mathbf{x}_{k}}\right)^{2}}}={\frac{e^{\mathbf{x}_{i}}}{\sum_{k=1}^{N}e^{\mathbf{x}_{k}}}}\cdot{\frac{\sum_{k=1}^{N}e^{\mathbf{x}_{k}}-e^{\mathbf{x}_{i}}}{\sum_{k=1}^{N}e^{\mathbf{x}_{k}}}}=\mathbf{y}_{i}(1-\mathbf{y}_{j}),$$ $$(10)$$ xk= yi(1 − yj ), (10) Similarly, for arbitrary i ̸= j, the gradient is given by $$\frac{\partial\mathbf{y}_{i}}{\partial\mathbf{x}_{j}}=\frac{e^{\mathbf{x}_{i}}}{\sum_{k=1}^{N}e^{\mathbf{x}_{k}}}\cdot\frac{-e^{\mathbf{x}_{i}}}{\sum_{k=1}^{N}e^{\mathbf{x}_{k}}}=-\mathbf{y}_{i}\mathbf{y}_{j}.\tag{1}$$ Combining these two cases, it is easy to verify the Jacobian matrix satisfies ∂y ∂x = diag(y) − y ⊤y. Arming with the derivative property of softmax function, we further investigate the gradient ∂⟨p,u⟩ ∂F, where p = ∆sSF(F) ∈ R 1×dout and SF(·) and u ∈ R 1×dout is independent with F ∈ R n×dout. Considering softmax function SF(x) ∈ R n×dis row-wise adopted in node representation matrix, the gradient satisfies ∂SF (F)i ∂Fj= 0dout×dout for i ̸= j. Note that the inner product ⟨p, u⟩ =Pdout k=1 pkuk, it is easy the obtain the gradient [ ∂⟨p,u⟩ ∂F]ij =Pdout k=1 ∂pk ∂Fij uk. To simply the current notation, we denote F˜ △= SF(F). According to the chain rule of derivative, we have $$\frac{\partial\mathbf{p}_{k}}{\partial\mathbf{F}_{i j}}=\sum_{t=1}^{d_{\mathrm{out}}}\frac{\partial\mathbf{p}_{k}}{\partial\mathbf{F}_{i k}}\frac{\partial\tilde{\mathbf{F}}_{i k}}{\partial\mathbf{F}_{i j}}=\sum_{t=1}^{d_{\mathrm{out}}}\Delta_{\mathbf{s},t}\frac{\partial\tilde{\mathbf{F}}_{i k}}{\partial\mathbf{F}_{i j}}\overset{(a)}{=}\Delta_{\mathbf{s},t}\frac{\partial\tilde{\mathbf{F}}_{i k}}{\partial\mathbf{F}_{i j}}\overset{(b)}{=}\Delta_{\mathbf{s},t}\tilde{\mathbf{F}}_{i k}[\delta_{k j}-\tilde{\mathbf{F}}_{i j}],$$ where δkj is Dirac function (equals 1 only if k = j, otherwise 0;), equality (a) holds since softmax function is row-wise operation, and equality (b) is based on Lemma C.1. Furthermore, we can obtain the gradient of fairness objective w.r.t. node presentation as follows: $$[\frac{\partial(\mathbf{p},\mathbf{u})}{\partial\mathbf{F}}]_{i j}=\sum_{k=1}^{d_{m a t}}\frac{\partial\mathbf{p}_{k}}{\partial\mathbf{F}_{i j}}\mathbf{u}_{k}=\sum_{k=1}^{d_{m a t}}\Delta_{\mathbf{u},i}\tilde{\mathbf{F}}_{i k}[\delta_{k j}-\tilde{\mathbf{F}}_{i j}]\mathbf{u}_{k}=\Delta_{\mathbf{u},i}\tilde{\mathbf{F}}_{i j}\mathbf{u}_{j}-\Delta_{\mathbf{u},i}\tilde{\mathbf{F}}_{i j}\sum_{k=1}^{d_{m a t}}\tilde{\mathbf{F}}_{i k}\mathbf{u}_{k}.$$ $$(11)$$ $$(12)$$ $$\left(13\right)$$ $$(14)$$ F˜ikuk. (13) Therefore, the matrix formulation is given by $$\frac{\partial\langle{\bf p},{\bf u}\rangle}{\partial{\bf F}}={\bf U}_{s}\odot SF({\bf F})-{\rm Sum}_{1}({\bf U}_{s}\odot SF({\bf F}))SF({\bf F}).\tag{1}$$ where Us △= ∆⊤ s u ∈ R n×dout and Sum1(·) represents the summation over column dimension with preserved matrix shape. Therefore, the computation complexity for gradient ∂⟨p,u⟩ ∂Fis O(ndout). ## D Proof Of Proposition 3.1 As for the proximal operators, we provide the close form in the following proposition: Proposition D.1 (Proximal Operators). *The proximal operators prox*βh∗ f (u) *satisfies* $$prox_{\beta h_{f}}({\bf u})_{j}=sign({\bf u})_{j}\min\big{(}|{\bf u}_{j}|,\lambda_{f}\big{)},\tag{15}$$ where sign(·) and λf are element-wise sign function and hyperparameter for fairness objective. In other words, such a proximal operator is an element-wise projection into l∞ *ball with radius* λf . We firstly show the conjugate function for general norm function f(x) = λ||x||, where x ∈ R1×dout . The conjugate function of f(x) satisfies $$f^{*}({\bf y})=\left\{\begin{array}{ll}0,&||{\bf x}||_{*}\leq\lambda,\\ +\infty,&||{\bf x}||_{*}>\lambda.\end{array}\right.\tag{16}$$ where ||x||∗ is dual norm of the original norm ||x||, defined as ||y||∗ = max ||x||≤1 y ⊤x. Considering the conjugate function definition f ∗(y) = max xy ⊤x − λ||x|| the analysis can be divided as the following two cases: ❶ If ||y||∗ ≤ λ, according to the definition of dual norm, we have y ⊤x ≤ ||x*||||*y||∗ ≤ λ||x|| for ∀||x||, where the equality holds if and only if ||x|| = 0. Hence, it is easy to obtain f ∗(y) = max xy ⊤x − λ||x|| = 0. $\spadesuit$ If $||\mathbf{y}||_{*}>\lambda$, note that the dual norm $||\mathbf{y}||_{*}=\max\limits_{\|\mathbf{x}\|\leqslant1}\mathbf{y}^{\top}\mathbf{x}>\lambda$, there exists variables $\hat{\mathbf{x}}$ so that $||\hat{\mathbf{x}}||\leq1$, and $\hat{\mathbf{x}}^{\top}\mathbf{y}<\lambda$. Therefore, for any constant $t$, we have $f^{*}(\mathbf{y})\geq\mathbf{y}^{\top}(t\mathbf{x})-\lambda||t\mathbf{x}||=t(\mathbf{y}^{\top}\mathbf{x}-\lambda||\mathbf{x}||)\stackrel{{t\to\mathbf{x}}}{{\longrightarrow}}\infty$. $\mathbf{B}$ Based on the aforementioned two cases, it is easy to get the conjugate function for l1 norm (the dual norm is l∞), i.e., the conjugate function for hf (x) = λ||x||1 is given by $$h_{f}^{*}({\bf y})=\left\{\begin{array}{ll}0,&||{\bf x}||_{\infty}\leq\lambda,\\ +\infty,&||{\bf x}||_{\infty}>\lambda.\end{array}\right.\tag{1}$$ $$(17)$$ $\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{b}$. +∞, ||x||∞ *> λ.* (17) ``` Given the conjugate function h ∗ f (·), we further investigate the proximal operators proxh ∗ f . Note that ``` Given the conjugate function $h_{f}^{*}(\cdot)$, we further investigate the proof $\mathrm{prox}_{h_{f}^{*}}(\mathbf{u})=\arg\min_{\mathbf{y}}||\mathbf{y}-\mathbf{u}||_{F}^{2}+h_{f}^{*}(\mathbf{y})=\arg\min_{||\mathbf{y}||_{\infty}\leq\lambda_{f}}||\mathbf{y}-\mathbf{u}||_{F}^{2}=\arg\min_{\mathbf{y}}||\mathbf{y}||_{\infty}\leq\lambda_{f}$, operator problem can be decomposed as element-wise sub-problem, i.e. yj≤λf ∀j∈[dout] Pdout j=1 |yj −uj | 2, the proximal $\operatorname{prox}_{h_{j}^{\pi}}(\mathbf{u})_{j}=\operatorname*{arg\,min}_{\mathbf{y}_{j}\leq\lambda_{f}}|\mathbf{y}_{j}-\mathbf{u}_{j}|^{2}=sign(\mathbf{u}_{j})\min(|\mathbf{u}_{j}|,\lambda_{f})$ which completes the proof. ## E More Discussion On Transparency In Fairness We aim to provide a more precise statement on transparency in fairness (TIF) and then point out why many fair methods can not achieve transparency in fairness. Intuitively, TIF represents that the influence of sensitive attribute in the inference stage for a fair method can be obtained with only a well-trained fair model and test data. Although many fair methods relying on sensitive attribute are developed to achieve a fair model, the process of how the sensitive attribute makes the model to be fair is still black-box. To this end, we introduce TIF, a general concept beyond graph data. Denote training dataset Dtrain = {Xtrain, strain, y*train*} and test dataset Dtest = {Xtest, stest, y*test*}, where Xtrain (Xtest), strain (s*test*), and ytrain (y*test*) represent the input attributes, sensitive attributes, and label for model training (test). We first provide a formal statement on the influence of sensitive attribute and TIF for a specific fair method. What is the influence of sensitive attributes in the inference stage? The influence of sensitive attributes can be regarded as the difference between the well-trained fair and vanilla model. The fair model fθ ∗ (·) can be obtained using training dataset (including sensitive attribute) and a specific fair method (e.g., fair regularization, adversarial debiasing) while vanilla model fθ0 (·) is obtained without any usage of sensitive attribute (e.g., vanilla loss and no data pre-processing and post-processing). Define M(fθ, D*test*) as the measurement (not necessarily scaler) for a well-trained model fθ(·) given test dataset D*test*. For example, test loss or model prediction can be instantiations as measurements. Then the influence of sensitive attributes represents the measurement difference between the well-trained fair and vanilla models M(fθ ∗ , D*test*) − M(fθ0 , D*test*). What is TIF? TIF represents that the influence of sensitive attributes can be obtained via the welltrained fair model and test data (without access to the training data). To obtain the influence of sensitive attribute, the fair model and vanilla model are both required to obtain the influence of sensitive attribute. In other words, for existing fair methods (e.g., pre-processing, in-processing, and post-processing methods), it is intractable to obtain such influence if only having access to the fair model since the training data or vanilla model can not be accessed. Difference with model interpretability. Model interpretability aims to understand and explain the steps and decisions of the model when making predictions. There are two types of interpretability, named intrinsical interpretability and post-hoc interpretability. Intrinsically interpretable models (such as decision trees) can provide human-understandable decision-making from the model itself, while post-hoc interpretability requires external methods to help humans understand how the model makes predictions. Similar to intrinsical interpretability, whether the fair model with TIF is essentially binary. The key difference is that TIF aims to understand how sensitive attribute helps to achieve a fair model for a specific fair method. Such a fairness-achieving process is essentially dynamic while model interpretability is static for model prediction. The main idea to achieve TIF. Note that many existing methods, including pre-processing, inprocessing, and post-processing methods, can not achieve TIF, we try to integrate sensitive attribute information into the forward propagation for model prediction. In this way, the influence of sensitive attributes can be obtained through model inference. Thanks to the unified optimization framework for GNNs, we develop a fair message passing (FMP), which explicitly and separately uses sensitive attributes in the (last) debiasing stage of forward propagation. which makes it easy to identify the influence. In this way, the influence of sensitive attribute can be identified using the input and output of the debiasing stage. ## F Training Algorithms We summarize the training algorithm for FMP and provide the pseudo codes in Algorithm 1. | Algorithm 1 FMP Training Algorithm Output: The well-trained FMP model. Initialize model parameters. for epoch from 1 to T do Conduct feature transformation using MLP | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Input: Graph dataset =(X, A, Y); The total epochs T; Hyperparameters λs and λf . Output: The well-trained FMP model. Initialize model parameters. for epoch from 1 to T do Conduct feature transformation using MLP Conduct propagation and debiasing as steps ❶-❺ Calculate the cross entropy loss for node classification task Conduct backpropagation step to update model weight end for ## G Dataset Statistics | Dataset # Nodes # Node Features # Edges # Training Labels # Training Sens | | | | | | |-----------------------------------------------------------------------------|-------|-----|---------|------|-----| | Pokec-n | 66569 | 265 | 1034094 | 4398 | 500 | | Pokec-z | 67796 | 276 | 1235916 | 5131 | 500 | | NBA | 403 | 95 | 21242 | 156 | 246 | For a fair comparison with previous work, we perform the node classification task on three real-world datasets, including Pokec-n, Pokec-z, and NBA. The data statistical information on three real-world datasets is provided in Table 3. It is seen that the sensitive homophily are even higher than the label homophily coefficient among three real-world datasets, which validates that the real-world datasets is usually with large topology bias. Table 3: Statistical Information on Datasets ## H More Experimental Results H.1 More Experimental Setting Details In FMP implementation, we first use 2 layers of MLP with 64 hidden units and the output dimension for MLP is 2. We also stack 2 layers for propagation and debiasing steps, where there are not any trainable model parameters. As for the model training, we adopt cross-entropy loss function with 300 epochs. We also adopt Adam optimizer with 0.001 learning rate and 1×105 weight decay for all models. The hyperprameters for FMP is λf = {0, 5, 10, 15, 20, 30, 100} and λs = {0, 0.01, 0.1, 0.5, 1.0, 2.0, 3, 5, 10, 15, 20} . ![18_image_0.png](18_image_0.png) Figure 3: DP and Acc trade-off performance on three real-world datasets compared with (manifold) Fair Mixup. ## H.2 Comparison With Fair Mixup We also implement Fair mixup (Chuang & Mroueh, 2021) as the additional baseline for different GNN backbones in Figure 3. Note that input fair mixup requires calculating model prediction for mixed input batch, it is non-trivial to adopt input fair mixup in our experiments (node classification task) since forward propagation in GNN aggregates information from neighborhoods while the neighborhood information for the mixed input batch is missing. Therefore, we adopt manifold fair mixup for the logit layer (the previous layers contain aggregation step) in our experiments. Experimental results show that our method can still achieve better accuracy-fairness tradeoff performance on three datasets. ## H.3 Sensitive Attribute Influence Probe As for lending fairness perceptron, it represents the influence of sensitive attributes that could be identified. For example, our proposed FMP includes three steps, i.e., transformation, aggregation, and debiasing, where the sensitive attribute is explicitly adopted in debiasing step. If we aim to identify the influence of sensitive attributes for FMP, it is sufficient to check the difference between the input and output for the debiasing step. It is worth mentioning that the required information for identifying the influence of sensitive attributes is naturally from the forward propagation. Additionally, if we aim to identify the influence of sensitive attributes for existing methods (e.g, adding regularization and adversarial debiasing), the well-trained fair model is insufficient and we need additional vanilla (unfair) model without using any sensitive attribute information. In other words, these methods require model retraining with sensitive attribute movement, and thus much more resources for sensitive attributes influence auditing. The key drawback of these methods is due to encoding the sensitive attributes information into well-trained model weights. From the auditors' perspective, it is quite hard to identify the influence of sensitive attributes only given a well-trained fair model. Instead, our designed FMP explicitly adopts the sensitive attribute information in the forward propagation process, which naturally avoid the dilemma that sensitive attributes are encoded into well-trained model weight. Figure 4 shows the visualization results for training with/without (left/right) sensitive attributes for FMP and several baselines (with GCN backbones) across three real-world datasets. From the visualization results, we observe that all methods with sensitive attribute information achieve better fairness since the logit layer representation for different sensitive attributes is mixed with each other. Therefore, it is hard to identify the sensitive attribute based on the representation and thus leads to higher fairness results. The key difference is that the results for training with/without (left/right) sensitive attribute in FMP can both be obtained through forward propagation, while the other baseline methods require model retraining to probe the influence of sensitive attributes. ![19_image_0.png](19_image_0.png) Figure 4: The visualization of logit layer node representation for training with/without (left/right) sensitive ![19_image_1.png](19_image_1.png) attribute for FMP and several baselines across three real-world datasets. The data point with different colors represents different sensitive attributes. Figure 5: The running time comparison. ## H.4 Running Time Comparison We provide running time comparison in Figure 5 for our proposed FMP and other baselines, including vanilla, regularization, and adversarial debiasing on many backbones (MLP, GCN, GAT, SGC, and APPNP). To achieve a fair comparison, we adopt the same Adam optimizer with 200 epochs with 5 running times. We list several observations as follows: - The running time of proposed FMP is very efficient for large-scale datasets. Specifically, for the vanilla method, the running time of FMP is higher than most lighten backbone MLP with 46.97% and 15.03% time overhead on Pokec-n and Poken-z datasets, respectively. Compared with the most time-consumption APPNP, the running time of FMP is lower with 64.07% and 41.45% time overhead on Pokec-n and Poken-z datasets, respectively. - The regularization method achieves almost the same running time compared with the vanilla method on all backbones. For example, GCN with regularization encompasses higher running time with 6.41% time overhead compared with the vanilla method. Adversarial debiasing is extremely time-consuming. For example, GCN with adversarial debiasing encompasses higher running time with 88.58% time overhead compared with the vanilla method. ![20_image_0.png](20_image_0.png) Figure 6: Hyperparameter study on fairness and smoothness hyperparameter for demographic parity and Accuracy. ## H.5 Hyperparameter Study We provide hyperparameter study for further investigation on fairness and smoothness hyperparmeter on prediction and fairness performance on three datasets. Specifically, we tune hyperparameters as λf = {0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 100.0, 1000.0} and λs = {0.0, 0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 15.0, 20.0}. From the results in Figure 6, we can make the following observations: - The accuracy and demographic parity are extremely sensitive to the smoothness hyperparameter. It is seen that, for Pokec-n and Pokec-z datasets (NBA), a larger smoothness hyperparameter usually leads to higher (lower) accuracy with higher prediction bias. The rationale is that, only for graph data with a high label homophily coefficient, GCN-like aggregation with skip connection is beneficial. Otherwise, the neighbor's node representation with a different label will mislead the representation update. - The appropriate fairness hyperparameter leads to better fairness and prediction performance tradeoff. The reason is that fairness hyperparameter determines the perturbation vector update step size in probability space. Only appropriate step size can lead to better perturbation vector update. ## H.6 Results On Additional Datasets We also conduct experiments on two new datasets (Recidivism and Credit), where the graph topology is constructed based on node features. In Recidivism, nodes are defendants released on bail from 1990 to 2009, where the nodes are connected based on the similarity of past criminal records and demographics. The task is to predict defendant is on bail or not, and the sensitive attribute is selected as "race". In the Credit dataset, credit card users (nodes) are connected based on the pattern similarity of their purchases and payments. The sensitive attribute is selected as "age", and the task is to predict whether a user will default on credit card payment. Figure. ?? demonstrates the tradeoff performance for different fair methods, including adding regularization, adversarial debiasing, and fair mixup. Experimental results show that our method can still achieve good accuracy-fairness tradeoff performance on three datasets. We also notice that MLP can achieve good tradeoff performance since the graph topology is manually constructed based on node attribute similarity. ![21_image_0.png](21_image_0.png) Figure 7: DP and Acc trade-off performance on three real-world datasets compared with adding regularization, adversarial debiasing, and (manifold) Fair Mixup in additional datasets. ## I Future Work There are three lines of follow-up research directions. Firstly, achieving transparency can be further developed. For example, for the intransparent model, how can we develop external methods to probe the influence of sensitive attributes in the target model? Secondly, given the influence of sensitive attributes, how can we interpret the influence of sensitive attributes in a human-understandable way? For example, how can we measure the benefit of such influence toward fairness? Thirdly, it is also interesting to extend FMP into more general cases, such as continuous sensitive attributes (Jiang et al., 2022), and limited sensitive attributes (Dai & Wang, 2021). ## J Broader Social Impact And Limitations Transparency in fairness is an advanced property in the fairness domain and poses huge challenges for research and industry. Many existing works mainly rely on specific fairness metrics to evaluate the prediction bias. Transparency may stimulate maintainers and auditors of machine learning systems to rethink fairness evaluation/auditing. Only achieving a fair model with a lower bias for specific fairness metrics is insufficient. The maintainers should also consider how to leverage the influence of sensitive attributes for auditors. Transparency may lead maintainers to pay more effects to improve the transparency of the fair model and could be helpful to convince the auditors. The limitations of this work are that it requests sensitive information in the inference stage.
Review 1: Summary: The paper presents a new graph neural network (GNN) architecture called Fair Message Passing (FMP). It aims to achieve fairness in graphs by mitigating bias in node representations. The paper is well presented and the source code is made available. See details in below. Strengths and Weaknesses: Pros: 1.The study of fairness-aware message passing provides good insights for graph learning community. 2.The paper is well-structured and easy to follow. 3.The code is available. Cons: Writing: 1.The literature review is notably incomplete. Contrary to the author's claim that the GNNs architecture perspective for improving fairness in graphs is less explored, numerous works have proposed GNNs ensuring fairness. For details, please refer to the relevant survey [1]. 2.Additionally, the author described GAT, GCN, and GraphSAGE from the perspective of graph signal processing in the Preliminary section. This aspect has been elaborated in detail by [2] and is not highly relevant to the main focus of this paper, fairness. 3.There exists grammar mistakes and typos. Experiments is not convincing: 1.The experimental settings and datasets are consistent with previous studies on the fairness of GNNs. However, no comparison with any fair GNN is made in the experimental section. For example, when compared to FairGAT [3], it is slightly inferior on both Poke-z and Pokec-n datasets. Specifically, on the NBA dataset, FairGAT’s $\Delta_{eo}$ is only 0.7, while FMP reaches 13.33. Can the author explain the reason for this large difference? Comprehensive comparison with recent methods should be conducted. 2.The author compares with Adversarial Debiasing and Regularization, but the referenced methods do not directly study GNN fairness. The authors should compare with fair GNNs that utilize Adversarial Training, such as [3], which employs adversarial training to filter sensitive information and subsequently enhance fairness. [4] introduce regularization of topological bias in learning objective. 3.As Equation 1 consists of fairness and smoothness objectives, it would be beneficial to conduct an ablation study to investigate their roles. [1] A Survey on Fairness for Machine Learning on Graphs. [2] A unified view on graph neural networks as graph signal denoising. [3] Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage. [4] TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification Requested Changes: Questions on Technical details: 1.$\mathbf{F}$ is not clearly defined and confusing. I found the definition in [2] that $\mathbf$ is a clean signal which is need to be recovered. The author should write the preliminary part more clearly following [2] since the notation of graph signal processing sometimes is different from GNNs’. 2.The optimization of model in Sec3.2 is still confusing. I think the author can write the overall objective function in a closed form. 3.What is the difference between step 2 in FMP and the addition of a normalization term? In the step 2, moving the model one step in the direction of reducing <p, u> and adding a regularization term concerning <p, u> can achieve a similar effect. 4.How to handle discrete computations in step 4 using backward propagation algorithm? 5.How to make sure the debiasing procedure in step 4 will not influence the accuracy? From the Figure 1, the proximity of orange and blue nodes inevitably makes them more challenging to distinguish. 6.In summary of contributions, the author says, “We propose FMP to achieve fairness via explicitly incorporating sensitive attribute information in message passing”. However, the method solely considers labels of demographic parity, neglecting sensitive feature information such as age. The statement can be revised. Broader Impact Concerns: none ================================================== Review 2: Summary: The paper proposes a new graph neural network (GNN) architecture called Fair Message Passing (FMP) that aims to achieve fairness in graphs by mitigating bias in node representations. FMP consists of two steps: aggregation and bias mitigation. In the aggregation step, FMP uses a standard GNN layer to update node features by combining information from their neighbors. In the bias mitigation step, FMP pushes the representations of nodes belonging to different demographic groups closer together, using a regularization term based on the distance between group centroids. The paper shows that FMP can achieve a better trade-off between fairness and accuracy than several baselines on three real-world datasets for node classification tasks. The paper also provides a theoretical analysis of FMP from the perspectives of model interpretation, efficiency, and white-box usage of sensitive attributes. Strengths and Weaknesses: Strengths: * It demonstrates that a carefully designed GNN architecture can achieve fairness for graph data, without relying on data pre-processing or fair training strategies, which are common approaches in the literature. * It introduces a simple and effective way of mitigating bias in node representations, which could also be accelerated by exploiting the softmax property during optimization. * It provides empirical evidence that FMP can improve both fairness and accuracy on various datasets and tasks, compared to existing methods. Weaknesses: * The writing can be further improved. For example, $F$ start to appear in Section 2 but is defined later in Section 3 after Eq. (1), which could be confusing for readers. Similarly, $\alpha$ is not defined from paragraph PPNP/APPNP. There is also an unresolved reference on Page 21 (Figure ??). The authors are suggested to further proofread to make the manuscript self-contained. * The motivation is relatively vague from the Section Introduction. While it is interesting to explore how the model architecture contributes to fairness graph learning, why the model architecture is better than the other two aspects is not explicitly discussed. Given no baselines from existing works based on graph pre-processing or fair training strategies for regular graphs (note that the selected regularization and adversarial debiasing methods in experiments are not designed for regular graphs) are compared during experiments, it is hard to empirically find out the superior of sole model architecture-based methods. * One possible advantage of the proposed method is to bring white-box usage for sensitive attributes, which is an ability that is not shared by both graph pre-processing or fair training strategies. However, this advantage is not straightforward to understand. Could the authors provide an example with practical scenarios to demonstrate how to employ this white-box usage during practice? * In addition to this white-box usage, the authors mention that if we aim to identify the influence of sensitive attributes for FMP, it is sufficient to check the difference between the input and output for the debiasing step. The authors are suggested to include empirical ablation studies to support this claim. Requested Changes: Please kindly refer to Section Strengths and Weaknesses for detailed suggestions. Broader Impact Concerns: The authors discussed transparency in their broader impact statement, which could be misleading since it is not explicitly defined or discussed in the manuscript under fair graph learning settings. Is the mentioned transparency equal to the white-box usage that the authors discussed in Section 4? ================================================== Review 3: Summary: This manuscript introduces a novel graph neural network (GNN) architecture, denoted Fair Message Passing (FMP), that is aimed at promoting fairness. This architecture relies on the message passing paradigm to aggregate information and explicitly incorporates the use of sensitive attributes to ensure the expression center of population demographic group nodes. The empirical findings from the experiments conducted on three real-world datasets demonstrate that the proposed FMP architecture outperforms other baselines in both fairness and accuracy. Additionally, the authors present a discussion on the extension to fairness loss. This research contributes to the advancement of fairness in GNNs by providing a fresh perspective. Strengths and Weaknesses: ``Pros:`` - The paper is generally well-written. - Code is provided. I'm generally happy with this submission, with only a few minor concerns. ``Cons:`` - Experiments are not that sufficient. It would be great if the results on some general datasets like OGB datasets can be provided to further demonstrate the effectiveness of the proposed FMP. - Please consider comparing with more state-of-the-art GNN baselines in the experiments. ``Minor:`` - Typos: “Notably, FMP explicitly *rendering* (renders) sensitive attribute …” - Please remove the dot in the title of Sec. 3.2.1. Requested Changes: Please see the weakness section to improve the experiments. Broader Impact Concerns: NA. ================================================== Review 4: Summary: This paper proposes a fair scheme for message passing in GNNs. It achieves this by leveraging the framework of graph signal denoising, and then attaching a fairness-oriented term in the associated optimisation objective. A scalable algorithm is then proposed for optimising this objective, achieving solid results on three benchmark datasets without sacrificing fairness compared to several baseline methods. Strengths and Weaknesses: The proposed method is sound, and addresses an important and timely problem. I believe the claims are well-supported by experimental evidence. In principle, I believe it should be accepted for TMLR. My main concerns with the paper are the lack of clarity, potentially unclear generality of the method, and lack of strong qualitative analysis. Please see the "Requested Changes" section for more details, and a summary of suggested changes for the authors. Requested Changes: Regarding clarity, the paper is currently somewhat difficult to read, and makes overly strong claims at times. * The abstract has a sentence starting "Notably, FMP explicitly rendering..." which is not grammatical, and should be fixed. * There are many typos scattered through the document -- I would recommend detailedly checking the writing in the text. * Further, the paper is generally math-heavy, and an uninitiated reader would probably benefit from placing a descriptive figure significantly earlier in the document (e.g. page 2). * The paper makes several strong and potentially misleading claims, especially claims that might indicate the paper "solved" the fairness problem (which is unlikely to be completely true, even given the provided results). Two example passages include: _“In this work, we achieve fairness in graphs from the model architecture perspective”_, and _"We demonstrate proof-of-concept that a meticulously crafted GNN architecture can achieve fairness for graph data"_. Both of these claims are very strong, and imply that fairness has been fully achieved. I highly recommend such passages be toned down to avoid confusion and misrepresentation. * Minor: There is a "Figure ??" in the Appendix. Regarding the method's generality, it appears to rests on the graph signal denoising perspective. From my understanding, this perspective can explain GNN methods that are "convolutional" in nature, in the sense that they directly aggregate neighbour-dependent messages (SGC, GCN, GAT, APPNP). However, it is unclear to me whether such a framework would work for the fully-general message-passing GNNs (such as MPNNs (Gilmer et al.) and Graph Networks (Battaglia et al.)). Given that the method's name is _'fair message passing'_, it is in my opinion quite critical to demonstrate that methods like MPNN can also be made fair in this way. Going beyond this context, it appears that the method presented as 'FMP' in the experiments corresponds to a specific propagation rule (GCN?). Would different propagation rules yield different FMP updates? If so, they should all be ablated in the paper. If not, the paper should make this clear. Lastly, I believe that a paper oriented on fairness would strongly benefit from comprehensive qualitative results, demonstrating to the reader specific cases in which unfair predictions are avoided. Perhaps the authors can showcase predictions on some specific subgraphs of the datasets they study, where a baseline method (such as GCN or GAT) would unfairly classify the nodes, and FMP would behave fairly. Perhaps with some explanation on how FMP avoided unfair behaviour in these specific examples? Such studies would likely be quite valuable for future work. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Reject Comment: This paper introduces the FMP model architecture to address fairness in GNN. While it presents decent mathematical derivation and empirical study, improvements are necessary to make it acceptable. The motivation from the GNN architecture perspective lacks elaboration, and the novelty against prior research from this perspective is overclaimed. Additional experiments with more datasets and relevant baselines, along with ablation studies, are needed to demonstrate the superiority of FMP. The writing can also be enhanced by correcting typos, clarifying notations and details, simplifying preliminaries, and enriching the literature review. As the authors haven't responded or revised the manuscript, the AE recommends rejection. ==================================================
# Coba: Causal Contextual Bandits With Active Data Integration Anonymous authors Paper under double-blind review ## Abstract We study a contextual bandit setting where the agent has the ability to request multiple data samples - corresponding to potentially different context-action pairs - simultaneously in one-shot within a budget, along with access to causal side information. This new formalism provides a natural model for several real-world scenarios where parallel targeted experiments can be conducted. We propose a new algorithm that utilizes a novel entropy-like measure that we introduce. We perform multiple experiments, both using purely synthetic data and using a real-world dataset, and show that our algorithm performs better than baselines in all of them. In addition, we also study sensitivity of our algorithm's performance to various aspects of the problem setting. We also show that the algorithm is sound; that is, as budget increases, the learned policy eventually converges to an optimal policy. Further, we show a bound on its regret under additional assumptions. Finally, we study fairness implications of our methodology. ## 1 Introduction Learning to make decisions that depend on context has a wide range of applications - software product experimentation, personalized medical treatments, recommendation systems, marketing campaign design, etc. Contextual bandits (Lattimore & Szepesvári, 2020) have been used to model such problems with good success (Liu et al., 2018; Sawant et al., 2018; Bouneffouf et al., 2020; Ameko et al., 2020). Contextual bandits have been studied in two primary variants - interactive (e.g., Agarwal et al. (2014); Dimakopoulou et al. (2019)) and offline (e.g., Swaminathan & Joachims (2015a); Li et al. (2015)). In the former, the agent repeatedly interacts with the environment (observe context, choose action, receive reward) and updates its internal state after every interaction, whereas in the latter the agent is provided an offline log of data to learn from. The objective in both cases is to learn a near-optimal policy that maps contexts to actions. Interactive bandits are favored in applications where interventions are cheap, such as in software product experimentation (e.g., Optimizely (2023)). On the other hand, offline contextual bandits have become increasingly popular in scenarios where interactions are costly to actualize in the real world (e.g., conducting physical experiments) or prohibited (e.g., in the healthcare domain with human subjects). While offline contextual bandit algorithms do provide methods to utilize existing data to learn policies, it is an unreasonably constrained model for many problems; often in the real world it is possible to *acquire* additional data in one-shot - but at a cost and within a budget. To the best of our knowledge, there has not been any investigation on what the best way is to actively obtain additional experimental data in one shot and incorporate it with aim of learning a good policy.1 ## 1.1 Motivating Examples In software product development, product teams are frequently interested in learning the best software variant to show or roll out to each subgroup of users. To achieve this, it is often possible to conduct 1Note that this is fundamentally different from interactive bandit settings because the agent cannot iteratively acquire samples as it updates its knowledge, but rather raise a one-shot data request and learn from it. This is more reflective of many real world scenarios. 1 targeting at scale simultaneously for various combinations of contexts (representing user groups) and actions (representing software variants) - instead of one experiment at a time - by routing traffic appropriately (see Google (2021) for an example). These targeted experiments2can be used to compute relevant metrics (e.g., click-through rate) for each context-action pair. Further, we might have some qualitative domain knowledge of how some context variables are causally related to others; for example, we might know that os has a causal relationship to browser; we would like to exploit this knowledge to learn better policies. This can be naturally modeled as a question of "what *table* of targeted experimental data needs to be acquired and how to integrate that data, so as to learn a good policy?"; here the table's rows and columns are contexts and actions, and each cell specifies the number of data samples (zero or more) required corresponding to the context-action pair. As another example, in ads targeting (e.g., Google (2023)) the objective is to learn the best ads to show each group of people. This is done by conducting experiments where different groups of people are shown different ads simultaneously. Further, letting context variables model the features of groups, we might have some knowledge about how some of the features are causally related to others; for example, we might know that country causally affects income. Our framework provides a natural model for this setting. Further, we might also be interested in ensuring that the ads meet certain criteria for fairness. For example, there might be some variables such as race that could be sensitive from a fairness perspective, and we would not want the agent to learn policies that depends on these variables. We discuss fairness implications of our algorithm in Section 6. These are just two of many scenarios where this framework provides a natural model. Two additional examples include experimental design for marketing insights (e.g., Persado (2023)) and recommendation systems (e.g., ScaleAI (2023)). ## 1.2 Our Framework Our framework captures the various complexities and nuances described in the examples in Section 1.1. We present an overview here; please see Section 3 for the mathematical formalism. At the start, the agent is given a (possibly empty) log of offline data - consisting of context-action-reward tuples - generated from some unknown policy. The context variables are partitioned into two sets - the main set and the auxiliary set (possibly empty). The agent observes all context variables during training, but learns a policy that only depends on the main set of context variables; this also provides a way to ensure that the learned policy meets certain definitions of fairness (see Section 6 for a more detailed discussion). Further, the agent also has some qualitative3causal side-information available, likely from domain knowledge. This causal side-information is encoded as a causal graph between contextual variables. A key implication of the causal graph is information leakage (Lattimore et al., 2016; Subramanian & Ravindran, 2022) - getting samples for one context-action pair provides information about other context-action pairs because of shared pathways in the causal graph. Given the logged data and the causal graph, the agent's problem is to decide the set of targeted experimental samples to acquire within a budget, and then integrate the returned samples. More specifically, the agent is allowed to make a *one-shot* request for data in the form of a table specifying the number of samples it requires for each context-action pair, subject to the total cost being within the budget. The environment then returns the requested samples after conducting the targeted interventions, and the agent integrates those samples to update its internal beliefs and learned policy; this constitutes the training phase of the agent. The core problem of the agent is to choose these samples in a way that straddles the trade-off between choosing more samples for context-action pairs it knows is likely more valuable (given its beliefs) and choosing more samples to explore less-seen context-action pairs - while taking into account the budget and the information leakage from the causal graph. After training, the agent moves to an inference phase, where it returns an action (according to the learned policy) for every context it encounters. 2Each of these experiments is a "targeted intervention", formalized in Subramanian & Ravindran (2022) as an intervention targeted on a subgroup specified by a particular assignment of values to the context variables. 3By qualitative, we mean that the agent can know the causal *graph*, but not the conditional probability distributions of the variables. See Section 3 for a more detailed discussion. ## 1.3 Contributions 1. This is the first work to study how to *actively obtain and integrate* a table of multiple samples in one-shot in a contextual bandit setting. Further, we study this in the presence of a causal graph, making it one of the very few works to study *integration of causal side-information* by contextual bandit agents. See Section 2 for a more detailed discussion on related work, and Section 3 for the mathematical formalism of the problem. 2. We propose a novel algorithm (Section 4.3) that works by minimizing a new entropy-like measure called Υ(.) that we introduce. See Section 4 for a full discussion on the approach. We also show that the method is sound - that is, as the budget tends to infinity, the algorithm's regret converges to 0 (Section 4.4). We also provide a regret bound, under some additional assumptions (Section 4.5). 3. We show results of experiments, using purely synthetically generated data and an experiment inspired by real-world data, that demonstrate that our algorithm performs better than baselines. We also study sensitivity of the results to key aspects of the problem setting. Refer Section 5. 4. We discuss fairness implications of our method, and show that it achieves counterfactual fairness (Section 6). ## 2 Related Work Causal bandits have been studied in the last few years (e.g., Lattimore et al. (2016); Yabe et al. (2018); Lu et al. (2020)), but they study this in a multi-armed bandit setting where the problem is identification of one best action. There is only one work (Subramanian & Ravindran, 2022) studying causal *contextual* bandits - where the objective is to learn a *policy* mapping contexts to actions - and this is the closest related work. While we do leverage some ideas introduced in that work in our methodology and in the design of experiments, our work differs fundamentally from this work in important ways. Subramanian & Ravindran (2022) consider a standard interactive setting where the agent can repeatedly act, observe outcomes and update its beliefs, whereas in our work the agent has a one-shot data collection option for samples from multiple context-action pairs. This fundamentally changes the nature of the optimization problem as we will see in Section 4; it also makes it a more natural model in a different set of applications, some of which were discussed in Section 1.1. Further, our work allows for arbitrary costs for collecting those samples, whereas they assume every intervention is of equal cost. Contextual bandits in purely offline settings, where decision policies are learned from logged data, is a well-studied problem. Most of the work involves inverse propensity weighting based methods (such as Swaminathan & Joachims (2015a;b); Joachims et al. (2018)). Contextual bandits are also well-studied in purely interactive settings (see Lattimore & Szepesvári (2020) for a discussion on various algorithms). However, in contrast to our work, none of these methods can integrate causal side information or provide a way to optimally acquire and integrate new data. Further, none of these methods study actively obtaining and integrating a table of data containing samples corresponding to multiple context-action pairs. Active learning (Settles, 2009) studies settings where an agent is allowed to query an oracle for ground truth labels for certain data points. This has been studied in supervised learning settings where the agent receives ground truth feedback; in contrast, in our case, the agent receives outcomes only for actions that were taken ("bandit feedback"). However, despite this difference, our approach can be viewed as incorporating some elements of active learning into contextual bandits by enabling the agent to acquire additional samples at a cost. There has been some work that has studied contextual bandits with costs and budget constraints (e.g., Agrawal & Goyal (2012); Wu et al. (2015)). There has also been work that has explored contextual bandit settings where the agent can not immediately integrate feedback from the environment, but can do so only in batches (Zhang et al., 2022; Ren et al., 2022; Han et al., 2020). However, all these works consider settings with repeated interactions, whereas our work considers a one-shot setting where the agent chooses multiple context-action pairs simultaneously. Further, none of these works provide a way to integrate causal side information. | Notation | Meaning | | |-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| | X | action variable | | | Y | reward variable | | | C A, C B | set of main context variables and set of auxiliary context variables, respectively; so the set of all context variables is C = C A ∪ CB = {..., Ci , ...}. | | | Capital letters | a random variable; e.g., C1 or X | | | Small letters | a random variable's value; e.g., c1 or x | | | Small bold font | an assignment of values to a set of random variables; for example, c denotes a specific choice of values taken by variables in C | | | Pˆ, Eˆ | estimate of distribution P and expectation E based on current beliefs | | | val(V ), val(V) | set of values taken by the variable V , and set of variables V, respectively. | | | ϕ, ϕ ˆ ∗ | the learned policy and an optimal policy, respectively | | | Υ | entropy-like measure used in our algorithm; defined in Equation 3 | | | paV | value of variables in P AV , the parents of V | A = c A | | Nx,cA | number of samples requested corresponding to X = x and C | A | | β(x, c A, Nx,cA ) | cost of acquiring Nx,cA samples corresponding to X = x and C A = c | | | B | budget | | | a⟨B⟩ | if a is an assignment of values to A, then a⟨B⟩ is assignment of those values to respective variables B; a⟨B⟩ = ∅ if A ∩ B = ∅. | | Table 1: Summary of key notation ## 3 Problem Formalism Underlying model We model the underlying environment as a causal model M, which is defined by a directed acyclic graph G over all variables (the "causal graph") and a joint probability distribution P that factorizes over G (Pearl, 2009b; Koller & Friedman, 2009). The set of variables in G consists of the action variable (X), the reward variable (Y ), and the set of context variables (C). Each variable takes on a finite, known set of values; note that this is quite general, and accommodates categorical variables. C is partitioned into the set of main context variables (C A) and the set of (possibly empty) auxiliary context variables (C B). That is, C = C A ∪ CB. The agent knows only G but not M; therefore, the agent has no *a priori* knowledge of the conditional probability distributions (CPDs) of the variables. Protocol In addition to knowing G, the agent also has access to logged offline data, DL = {(ci, xi, yi)}, where each (ci, xi, yi) is sampled from M and xiis chosen following some unknown policy. Unlike many prior works, such as Swaminathan & Joachims (2015a), the agent here does not have access to the logging propensities. The agent then specifies in one shot the number of samples Nx,cA it requires for each pair (x, c A). We denote the full table of these values by N ≜Sx,cA {Nx,cA }. Given a (x, c A), there is an arbitrary cost β(x, c A, Nx,cA ) associated with obtaining those samples. The total cost should be at most a budget B. For each (x, c A), the environment returns Nx,cA samples of the form (c B, y) ∼ P(C B, Y | do(x), c A). 4 Let's call this acquired dataset DA. The agent utilizes DA along with DL to learn a good policy. Objective The agent's objective is to learn a policy ϕˆ : val(C A) → val(X) such that expected simple regret is minimized: $$R e g r e t\triangleq\sum_{\mathbf{c}^{A}}\left[\mu_{\mathbf{c}^{A}}^{*}-{\hat{\mu}}_{\mathbf{c}^{A}}\right]\cdot\mathbb{P}(\mathbf{c}^{A})$$ 4See Pearl (2009b; 2019) for more discussion on the do() operation. where ϕ ∗is an optimal policy, µ ∗ cA ≜ E[Y |do(ϕ ∗(c A), c A)] and µˆcA ≜ E[Y |do(ϕˆ(c A), c A)]. Table 1 provides a summary of the key notation used in this paper. ## 3.1 Assumptions We assume that X has exactly one outgoing edge, X → Y , in G. This is suitable to express a wide range of problems such as personalized treatments or software experimentation where the problem is to learn the best action under a context, but the action or treatment does not affect context variables. We also make a commonly-made assumption (see Guo et al. (2020)) that there are no unobserved confounders. Similar to Subramanian & Ravindran (2022), we make an additional assumption that simplifies the factorization in Section 4.2: {C confounds C ′ ∈ CA and Y } =⇒ C ∈ CA; a *sufficient* condition for this to be true is if C A is ancestral.5 This last assumption is a simplifying assumption and can be relaxed in the future. ## 4 Approach 4.1 Overall Idea In our approach, the agent works by maintaining beliefs regarding every conditional probability distribution (CPD). It first uses DL to update its initial CPD beliefs; this, in itself, makes use of information leakage provided the causal graph. It next needs to choose DA, which is the core problem. The key tradeoff facing the agent is the following: it needs to choose between allocating more samples to context-action pairs that it believes are more valuable and to context-action pairs that it knows less about. Unlike Subramanian & Ravindran (2022), it cannot interactively choose and learn, but instead has to choose the whole DA in one shot - necessitating the need to account for multiple overlapping information leakage pathways resulting from the multitude of samples. In addition, these samples have a cost to acquire, given by an arbitrary cost function, along with a total budget. Towards solving this To achieve this, we define a novel function Υ(N) that captures a measure of overall entropy weighted by value. The idea is that minimizing Υ results in a good policy; that is, the agent's problem now becomes that of minimizing Υ subject to budget constraints. In Section 4.2, we formally define Υ(N) and provide some intuition. Later, we provide experimental support (see Section 5), along with some theoretical grounding to this intuition (see Theorem 4.1). ## 4.2 The Optimization Problem Determine Nx,cA for each (x, c A) such that $$\Upsilon(\mathbf{N})$$ is minimized, subject to $$\sum_{x,\mathbf{c}^{A}}\beta(x,\mathbf{c}^{A},N_{x,\mathbf{c}^{A}})\leq B$$ where N ≜Sx,cA {Nx,cA }. We will next define Υ(N). Defining the objective function Υ(N) The conditional distribution P(V |paV ) for any variable V is modeled as a categorical distribution whose parameters are sampled from a Dirichlet distribution (the belief distribution). That is, P(V |paV ) = Cat(V ; b1*, ..., b*r), where (b1*, ..., b*r) ∼ Dir(θV |paV ), and θV |paV is a vector denoting the parameters of the Dirichlet distribution. Actions in a contextual bandit setting can be interpreted as do() interventions on a causal model (Zhang & Bareinboim, 2017; Lattimore et al., 2016). Therefore, the reward Y when an agent chooses action x against context c A can be thought of as being sampled according to P[Y |do(x), c A]. Under the assumptions 5That is, if C A contains all its ancestors. described in Section 3.1, we can factorize as follows: $$\mathbb{E}[Y|d o(x),\mathbf{c}^{A}]=\sum_{\mathbf{c}^{B}\in\operatorname{vol}\{c^{B}\}}\left[\mathbb{P}(Y=1|x,\mathbf{c}\langle P A_{Y}\rangle)\prod_{c\in\mathbf{c}^{B}}\mathbb{P}(C=c|\mathbf{c}\langle P A_{C}\rangle)\right]$$ (1) Note that our beliefs about each CPD in Equation 1 are affected by samples corresponding to multiple (x, c A). To capture this, we construct a CPD-level uncertainty measure which we call Q(.): $$Q(\mathbb{P}[V|\mathbf{pa}_{V}],\mathbf{N})\triangleq\sum_{x,e^{A}}\left(\frac{1}{1+\ln(N_{x,e^{A}}+1)}\right)\mathsf{Ent}^{new}(\mathbb{P}[V|\mathbf{pa}_{V}])\Bigg|_{x(PA_{V})=\mathbf{pa}_{V}(X)},\ \ \mathbf{e}^{A}(PA_{V})=\mathbf{pa}_{V}(\mathbb{C}^{A}),$$ Here Entnew is defined in the same way as in Subramanian & Ravindran (2022); for reference, we reproduce the definition in Appendix A. Finally, we construct Υ(N) as: $$\Upsilon(\mathbf{N})\triangleq\sum_{x,\mathbf{c}}\left[\left[\sum_{V\in C^{B_{V}\{Y\}}}Q(\mathbb{P}[V|\mathbf{c}(P A_{V})],\mathbf{N})\right]\cdot{\hat{\mathbb{P}}}(\mathbf{c})\cdot{\hat{\mathbb{E}}}\left[Y|x,\mathbf{c}(P A_{Y})\right]\right]$$ $$(1)$$ $$\left(2\right)$$ $$\quad(3)$$ (3) Intuition behind Q(.) and Υ(.) Intuitively, Entnew provides a measure of entropy if one additional sample corresponding to (x, c A) is obtained and used to update beliefs. Q(.) builds on it and captures the fact that the beliefs regarding any CPD P[V |paV ] can be updated using information leakage6from samples corresponding to *multiple* (x, c A); it does this by selecting the relevant (x, c A) pairs making use of the causal graph G and aggregating them. In addition, Q(.) also captures the fact that entropy reduces non-linearly with the number of samples. Finally, Υ(N) provides an aggregate (weighted) resulting uncertainty from choosing Nx,cA samples of each (x, c A). The weighting in Υ(.) provides a way for the agent to relatively prioritize context-action pairs that are higher-value according to its beliefs. ## 4.3 Algorithm The full learning algorithm, which we call CoBA, is given as Algorithm 1a. After learning, the algorithm for inferencing on any test context (i.e., returning the action for the given context) is given as Algorithm 1b. The core problem (Step 2 in Algorithm 1a) is a nonlinear optimization problem with nonlinear constraints and integer variables. It can be solved using any of the various existing solvers. In our experiments in Section 5, we solve it approximately using the scipy Python library. ## 4.4 Soundness As B → ∞, the agent's regret will tend towards 0 (Theorem 4.1). It demonstrates the soundness of our approach by showing that as the budget increases, the learned policy will eventually converge to an optimal policy. Theorem 4.1 (Soundness). As B → ∞, *Regret* → 0. The proof of Theorem 4.1 is presented in Appendix B. ## 4.5 Regret Bound The limiting case where B → ∞ was already analyzed and we showed that our algorithm converges to an optimal policy in that case (Theorem 4.1). In Theorem 4.2, in contrast, we are interested in the finite-B case. This is of interest in practical settings where the budget is usually small. We prove a regret bound under additional additional assumptions (A2) which we describe in Appendix C. Here define m ≜ minx,cA π(x|c A), where π is the (unknown) logging policy that generated DL; and MV ≜ |val(V)|. 6Information leakage arises from shared pathways in M; or equivalently, due to shared CPDs in the factorization of P. Algorithm 1a: Learning phase of CoBA Data: Causal graph G; initial dataset DL Initialization: For all V *∈ C ∪ {*Y } and for all paV , set θV |paV = (1*, ...,* 1). 1 Update all beliefs using initial dataset DL by calling **update_beliefs**(DL) 2 Solve the following N = arg min N′ Υ(N′) subject to $$\sum_{x,\mathbf{c}^{A}}\beta(x,\mathbf{c}^{A},N_{x,\mathbf{c}^{A}}^{\prime})\leq B$$ x,cA 3 Place data request N; environment returns dataset DA as discussed in Section 3. 4 Update all beliefs using DA by calling **update_beliefs**(DA) Result: Final set of beliefs for all V, paV :*..., θ*V |paV , ... 5 *Procedure* **update_beliefs(**D) 6 for each sample (x, c, y) ∈ D do 7 let c˜ ≜ c ∪ {*x, y*} 8 for each V *∈ C ∪ {*Y } do 9 θV |c˜⟨P AV ⟩[c˜⟨V ⟩] ← θV |c˜⟨P AV ⟩[c˜⟨V ⟩] + 1 10 end 11 end Algorithm 1b: Inference phase Data: Causal graph G, learned beliefs *..., θ*V |paV , ... , test context c A 1 for *every* V, paV do 2 for v ∈ val(V ) do 3 Set Pˆ(V = v|paV ) =θ (v) P V |paV v′ θ (v′) V |paV 4 end 5 end 6 for x ∈ val(X) do 7 Compute ψˆ(x, c A) ≜ Eˆ[Y |do(x), c A] using Pˆ in Equation (1) 8 end Result: Return ϕˆ(c A) ≜ arg maxx ψˆ(x, c A) Theorem 4.2 (Regret bound). *Under the additional assumptions (A2) mentioned in Appendix C, for any* 0 < δ < 1*, with probability* ≥ 1 − δ, $$R e g r e t\in O\left(|{\mathcal{C}}|{\sqrt{\left({\frac{1}{m B-\epsilon}}\right)\ln{\frac{M_{X}M_{\mathcal{C}}}{\delta}}}}\right)$$ ! where ϵ ∈ O pB ln (MXMP AY MC/δ) , ignoring terms that are constant in B, m, δ, |C| *and the number* of possible context-action pairs. The proof of Theorem 4.2 is presented in Appendix C. It closely follows the regret bound proof in Subramanian & Ravindran (2022) and adapts it to our setting. The purpose of the proof is to establish an upper bound on performance, and not to provide a tight bound. Bounding regret without these additional assumptions (A2) is left for future work. ## 5 Experimental Results 5.1 Baselines And Experimental Setup Baselines There are no existing algorithms that directly map to our setting. Therefore, we construct a set of natural baselines and study the performance of our algorithm CoBA against them. EqualAlloc allocates an equal number of samples to all (x, c A); this provides a good distribution of samples to all context-action pairs. MaxSum maximizes the *total* number of samples summed over all (x, c A). PropToValue allocates a number of samples to (x, c A) that is proportional to Pˆ(c A) · Eˆ[Y |do(x), c A]; this allocates relatively more samples to context-action pairs that are more "valuable" based on the agent's current beliefs. All baselines first involve updating the agent's starting beliefs regarding the CPDs of M using DL (same as Step 1 of Algorithm 1a) before allocating samples for active obtainment as detailed above. After DA is returned by the environment, all baselines use it update their beliefs (same as Step 4 of Algorithm 1a). Experiments Similar to Subramanian & Ravindran (2022), we consider a causal model M whose causal graph G consists of the following edges: C1 → C0, C0 → X, C0 → *Y, X* → Y . We let C A = {C1} and C B = {C0}. We use this causal graph for all experiments except Experiment 3, for which we use the graph shown in Figure 3a. Experiments 1 and 2 analyze the performance of our algorithm in a variety of settings, similar to those used in Subramanian & Ravindran (2022). Experiment 3 analyzes the performance of the algorithm on a setting calibrated using real-world CRM sales-data provided in Subramanian & Ravindran (2022). The details of all the parameterizations are provided as part of the supplementary material (see Appendix G). Experiments 4 through 7 analyze sensitivity of our algorithm's performance to various aspects of the problem setting. In all experiments, except Experiment 6, we set the cost function β(.) to be proportional to the number of samples - a natural definition of cost; in Experiment 6, we analyze sensitivity to cost function choice. Additional experiments providing more insights into why our algorithm performs better than baselines are discussed in Appendix E. Appendix D reports results of Experiment 1 and 2 for larger values of B (until all algorithms converge), providing empirical evidence of our algorithm's improved asymptotic behavior. Remark If the specific parameterization of M were given *a priori*, it is possible to come up with an algorithm that performs optimally in that particular setting. However, the objective is to design a method that performs well *overall* without this *a priori* information. Consider the relative performance of the baselines in Experiments 2 and 3. We will see that while EqualAlloc performs better than MaxSum and PropToValue in Experiments 3 (Section 5.4), it performs worse than those two in Experiment 2 (Section 5.3). However, our algorithm performs better than all three baselines in all experiments, corroborating our algorithm's overall better performance. ## 5.2 Experiment 1 (Representative Settings) Different parameterizations of M can produce a wide range of possible settings. Given this, the first experiment studies the performance of our algorithm over a set of "*representative settings*". Each of these settings has a natural interpretation; for example, C A could represent the set of person-level features that we are learning a personalized treatment for, or it could represent the set of customer attributes over which we're learning a marketing policy. The settings capture the intuition that high-value contexts (contexts for which, if the optimal action is learned, high expected rewards accrue to the agent) occur relatively less frequently (say, 20% of the time), but that there can be variation in other aspects. Specifically, the variations come from the number of different values of c A over which the 20% probability mass is spread, and in how "risky" a particular context is (e.g., difference in rewards between the best and worst actions). The full details of the parameterizations are provided as part of the supplementary material (refer Appendix G). The number of samples7in the initial dataset DL is kept at 0.5 · |val(C A)*| · |*val(X)|. In each run, the agent is presented with a randomly selected setting from the representative set. Results are averaged over 50 independent runs; error bars display ±2 standard errors. 7We consider a uniformly exploring logging policy for DL; that is, context variables for each sample are realized as per the natural distribution induced by M, but X is chosen randomly. ![8_image_0.png](8_image_0.png) Figure 1: Experiment 1 results (Section 5.2). Figure 2: Experiment 2 results (Section 5.3). Figure 1 provides the results of this Experiment. It plots the value of regret (normalized to [0, 1] since different settings have different ranges for regret) as budget B increases. We see that our algorithm performs better than all baselines. Our algorithm also retains its relatively lower regret at all values of B, providing empirical evidence of overall better regret performance. ## 5.3 Experiment 2 (Randomized Parameters) To ensure that the results are not biased due to our choice of the representative set in Experiment 1, this experiment studies the performance of our algorithm when we *directly randomize the parameters* of the CPDs in each run, subject to realistic constraints. Specifically, in each run, we (1) randomly pick an i ∈ {1*, ...,* ⌊|val(C1)|/2⌋}, (2) distribute 20% of the probability mass randomly over the smallest i values of C1, and (3) distribute the remaining 80% of the mass over the remaining values of C1. The smallest i values of C1 have higher value (i.e., the agent obtains higher rewards when the optimal action is chosen) than the other C1 values. Intuitively, this captures the commonly observed 80-20 pattern (for example, 20% of the customers often contribute to around 80% of the revenue); but we randomize the other aspects. The full details of the parameterizations are given as part of the supplementary material (refer Appendix G). Averaging over runs provides an estimate of the performance of the algorithms on expectation. The number of samples in the initial dataset DL is kept at 0.25 · |val(C A)*| · |*val(X)|. The results are averaged over 50 independent runs; error bars display ±2 standard errors. Figure 2 shows that our algorithm performs better than all baselines in this experiment. Our algorithm also demonstrates overall better regret performance by achieving the lowest regret for every choice of B. ## 5.4 Experiment 3 (Calibrated Using Real-World Data) While Experiments 1 and 2 study purely synthetic settings, this experiment seeks to study the performance of our algorithm in *realistic scenarios*. We use the same causal graph used in the real world-inspired experiment in Section 4.2 of Subramanian & Ravindran (2022) and calibrate the CPDs using the data provided there. The graph is shown in Figure 3a; C A = {C1, C2} and C B = {C0}. For parameterizations, refer Appendix G. The objective is to learn a policy that can assist salespeople by learning to decide how many outgoing calls to make in an ongoing deal, given just the type of deal and size of customer, so as to maximize a reward metric. The variables are related to each other causally as per the causal graph. The number of samples in the initial dataset DL is kept at 0.125 · |val(C A)*| · |*val(X)|. The results are averaged over 50 independent runs; error bars display ±2 standard errors. Figure 3 shows the results of the experiment. Our algorithm performs better than all other algorithms in this real-world inspired setting as well. Further, it retains its better performance at every value of B. ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) (a) Causal graph used in Experiment 3; taken from Subramanian & Ravindran (2022) (b) Experiment results for real-world data inspired experiment (Experiment 3). Figure 3: Causal graph and results of Experiment 3 (Section 5.4). ## 5.5 Experiments 4 Through 7 Experiments 4 through 7 study *sensitivity of the results to key aspects* that define our settings. To aid this analysis, instead of regret, we consider a more aggregate measure which we call AUC. For any run, AUC is computed for a given algorithm by summing over B the regrets for that algorithm; this provides an approximation of the area under the curve (hence the name). We then study the sensitivity of AUC to various aspects of the setting or environment. Experiment 4 ("narrowness" of M) We use the term "narrowness" informally. Since our algorithm CoBA exploits the information leakage in the causal graph, we expect it to achieve better performance when there is more leakage. To see this, suppose we do a forward sampling (Koller & Friedman, 2009) of M; then, intuitively, more leakage occurs when more samples require sampling overlapping CPDs. For this experiment, we proxy this by varying |val(C0)| while keeping |val(C1)| fixed. The rest of the setting is the same as in Experiment 2. A lower |val(C0)| means that the causal model is more "squeezed" and there is likely more information leakage. The results are averaged over 50 independent runs; error bars display ±2 standard errors. Figure 4 shows the results of this experiment. We see that our algorithm's performance remains similar (within each other's the confidence interval) for |val(C0)|/|val(C1)*| ∈ {*0.25, 0.375}, but significantly worsens when |val(C0)|/|val(C1)| = 0.5. However, our algorithm continues to perform better than all baselines for all values of |val(C0)|/|val(C1)|. Figure 4 broken down by B is given in Appendix L.1. Experiment 5 (size of initial dataset) The number of samples in the initial dataset DL would impact the algorithm's resulting policy, for any given B. Specifically, we would expect that as the cardinality of DL increases, regret reduces. For this experiment, we consider a uniformly exploring logging policy, and vary |DL| by setting it to be k · |val(C A)*| · |*val(X)|, where k ∈ {0, 0.25, 0.5}. The rest of the setting is the same as in Experiment 2. The results are averaged over 50 independent runs; error bars display ±2 standard errors. The results are shown in Figure 5. We would expect the performance of all algorithms improve with increase in k since that would give the agent better starting beliefs; this, indeed, is what we observe. Importantly, our algorithm performs better than all baselines in all these settings. Figure 5 broken down by B is provided in Appendix L.2. Experiment 6 (choice of β) Though we allow the cost function to be arbitrary, this experiment studies our algorithm's performance under two natural choices of β(.) to test its robustness: (1) a constant cost function; that is, β(x, c A, Nx,cA ) ∝ Nx,cA , and (2) cost function that is inversely proportional to the likelihood of ![10_image_0.png](10_image_0.png) Figure 4: Experiment 4 results. ![10_image_1.png](10_image_1.png) Figure 5: Experiment 5 results. Figure 6: Experiment 6 results. observing the context naturally (i.e., rarer samples are costlier); that is, β(x, c+, Ν.π.α) α Μπ. The rest of the setting is the same as in Experiment 2. The results are averaged over 50 independent runs; error bars display ±2 standard errors. Figure 6 shows the results. As expected, the choice of cost function does affect performance of all algorithms. However, our algorithm performs better than all algorithms for both cost function choices. Experiment 7 (misspecification of G) In real-world applications, the true underlying causal graph may not always be known. In this experiment, we study the impact of misspecification of § on the performance of our algorithm. Note that the formalism described in Section 3 does not necessitate that the agent knows the true underlying causal graph, but rather only that it knows a causal graph such that I factorizes according to it. This means that the graph G that the agent knows might include additional arrows not present in the true underlying graph. Intuitively, using such an imperfect graph would result in worsened performance by our algorithm since there are less overlapping information pathways to exploit. In Experiment 7a, we study this effect empirically by comparing results of Experiment 2 to the same experiment but with the causal graph having an extra edge: C1 -> Y. The results are averaged over 25 independent runs; error bars display ±2 standard errors. Figure 7 shows the results of the experiment. As expected, performance of our algorithm degrades when there is imperfect knowlege of the true underlying graph. However, our algorithm continues to perform better than all baselines, while also maintaining a similar difference in regret AUC compared to the baselines. Figure 7 broken down by B is provided in Appendix L.3. ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) Figure 7: Experiment 7a results. Figure 8: Experiment 7b results. In Experiment 7b, we perform a similar analysis to the real-world inspired experiment presented in Section 5.4. Specifically, we compare the case where the true causal graph is known with the cases where there is a misspecification of the causal relationships between the context variables. To capture this, we add one edge (C1 → C0) to G and then one more edge (C2 → C1); we compare the performance under these two settings to the case where the true causal graph is known. The results are averaged over 25 independent runs; error bars display ±2 standard errors. The results in Figure 8 shows the results. In this case, the deterioration in performance due to misspecification of G is quite small for all algorithms. However, our algorithm continues to perform better than all baselines; it performs the best when the true graph is known. ## 6 Fairness Fairness is becoming an increasingly important angle to discuss when designing machine learning algorithms. A common way to approach fairness is to ensure some subset of variables (assumed given to the algorithm), called "sensitive variables", is not discriminated against. Specific formal definitions of this discrimination give rise to different notions of fairness in literature (Grgić-Hlača et al., 2016; Dwork et al., 2012; Kusner et al., 2017; Zuo et al., 2022; Castelnovo et al., 2022). Counterfactual fairness Counterfactual fairness is a commonly used notion of individual fairness. Intuitively, a *counterfactually fair* mapping from contexts to actions ensures that the actions mapped to an individual8 are the same in a counterfactual world where a subset of sensitive contexts is changed. In our case, counterfactual fairness can be achieved by setting C B to contain all the sensitive attributes. We provide a proof for this in Appendix I. Demographic parity A common criterion for group-level fairness is Demographic Parity (Kusner et al., 2017). Our algorithm does not achieve demographic parity. However, in Appendix J, we suggest a way by which it can be achieved with some compromise to the agent's performance. ## 7 Conclusion And Future Research Directions This paper proposed a new contextual bandit problem formalism where the agent, which has access to qualitative causal side information, can also actively obtain a table of data in one shot, but at a cost and budget. We proposed a novel algorithm based on a new measure similar to entropy, and showed extensive empirical analysis of our algorithm's performance. We also showed theoretical results on soundness and regret. Furthermore, we studied the fairness implications of our algorithm. Possible directions of future research include allowing unobserved confounders and designing algorithms that meet populationlevel fairness criteria with minimial impact on performance. 8An individual is given by a specific choice of values for the context variables. ## References Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert Schapire. Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits. In *Proceedings of the 31st International* Conference on Machine Learning, volume 32 of *PMLR*, pp. 1638–1646, 2014. Shipra Agrawal and Navin Goyal. Analysis of thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory, volume 23 of *PMLR*, pp. 39.1–39.26, 2012. Mawulolo K. Ameko, Miranda L. Beltzer, Lihua Cai, Mehdi Boukhechba, Bethany A. Teachman, and Laura E. Barnes. Offline contextual multi-armed bandits for mobile health interventions: A case study on emotion regulation. In *Proceedings of the 14th ACM Conference on Recommender Systems*, pp. 249–258, 2020. Djallel Bouneffouf, Irina Rish, and Charu Aggarwal. Survey on applications of multi-armed and contextual bandits. In *2020 IEEE Congress on Evolutionary Computation (CEC)*, pp. 1–8, 2020. A. Castelnovo, R. Crupi, G. Greco, D. Regoli, I. G. Penco, and A. C. Cosentini. A clarification of the nuances in the fairness metrics landscape. *Scientific Reports*, 12(1), 2022. Maria Dimakopoulou, Zhengyuan Zhou, Susan Athey, and Guido Imbens. Balanced Linear Contextual Bandits. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 3445–3453, 2019. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In *Proceedings of the 3rd Innovations in Theoretical Computer Science Conference*, pp. 214–226, 2012. Google. Targeting overview. https://support.google.com/optimize/answer/6283420, 2021. Accessed: 2021-10-02. Google. Targeting your ads. https://support.google.com/google-ads/answer/1704368?hl=en, 2023. Accessed: 2023-06-04. N. Grgić-Hlača, M. B. Zafar, K. P. Gummadi, and A. Weller. The case for process fairness in learning: Feature selection for fair decision making. In *Symposium on Machine Learning and the Law at the 29th* Conference on Neural Information Processing Systems, 2016. Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, and Huan Liu. A survey of learning causality with data: Problems and methods. *ACM Comput. Surv.*, 53(4), July 2020. Yanjun Han, Zhengqing Zhou, Zhengyuan Zhou, Jose Blanchet, Peter W. Glynn, and Yinyu Ye. Sequential batch learning in finite-action linear contextual bandits. *arXiv:2004.06321*, 2020. Thorsten Joachims, Adith Swaminathan, and Maarten de Rijke. Deep learning with logged bandit feedback. In *Proceedings of the Sixth International Conference on Learning Representations*, 2018. Daphne Koller and Nir Friedman. *Probabilistic Graphical Models: principles and techniques*. MIT Press, 2009. Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In *Advances in* Neural Information Processing Systems, 2017. Finnian Lattimore, Tor Lattimore, and Mark D. Reid. Causal bandits: Learning good interventions via causal inference. In *Advances in Neural Information Processing Systems*, 2016. Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 2020. Lihong Li, Shunbao Chen, Jim Kleban, and Ankur Gupta. Counterfactual Estimation and Optimization of Click Metrics in Search Engines: A Case Study. In *Proceedings of the 24th International Conference on* World Wide Web, 2015. Bo Liu, Ying Wei, Yu Zhang, Zhixian Yan, and Qiang Yang. Transferable contextual bandit for cross-domain recommendation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Yangyi Lu, Amirhossein Meisami, Ambuj Tewari, and William Yan. Regret analysis of bandit problems with causal background knowledge. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124 of *PMLR*, pp. 141–150, 2020. Optimizely. Multi-armed bandit. https://www.optimizely.com/optimization-glossary/ multi-armed-bandit/, 2023. Accessed: 2023-04-21. Judea Pearl. Causal inference in statistics: An overview. *Statistics Surveys*, 3(September):96–146, 2009a. ISSN 19357516. doi: 10.1214/09-SS057. Judea Pearl. *Causality*. Cambridge University Press, 2nd edition, 2009b. Judea Pearl. On the Interpretation of do(x). *Journal of Causal Inference*, 7(1), 2019. Persado. The benefits of experimental design in marketing. https://www.persado.com/articles/ the-power-of-experimental-design-to-deliver-marketing-insights/, 2023. Accessed: 2023-0604. Zhimei Ren, Zhengyuan Zhou, and Jayant R. Kalagnanam. Batched learning in generalized linear contextual bandits with general decision sets. *IEEE Control Systems Letters*, 6:37–42, 2022. Neela Sawant, Chitti Babu Namballa, Narayanan Sadagopan, and Houssam Nassif. Contextual multi-armed bandits for causal marketing. *arXiv:1810.01859*, 2018. ScaleAI. Netflix explains recommendations and personalization. https://scale.com/blog/ netflix-recommendation-personalization, 2023. Accessed: 2023-06-04. Burr Settles. Active learning literature survey. Technical Report 1648, University of Wisconsin–Madison, 2009. Chandrasekar Subramanian and Balaraman Ravindran. Causal contextual bandits with targeted interventions. In *Proceedings of the Tenth International Conference on Learning Representations*, 2022. Adith Swaminathan and Thorsten Joachims. Counterfactual Risk Minimization: Learning from Logged Bandit Feedback. In *Proceedings of the 32nd International Conference on International Conference on* Machine Learning - Volume 37, 2015a. Adith Swaminathan and Thorsten Joachims. Batch learning from logged bandit feedback through counterfactual risk minimization. *Journal of Machine Learning Research*, 16(52):1731–1755, 2015b. Huasen Wu, R. Srikant, Xin Liu, and Chong Jiang. Algorithms with logarithmic or sublinear regret for constrained contextual bandits. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, 2015. Akihiro Yabe, Daisuke Hatano, Hanna Sumita, Shinji Ito, Naonori Kakimura, Takuro Fukunaga, and Kenichi Kawarabayashi. Causal Bandits with Propagating Inference. In *Proceedings of the 35th International* Conference on Machine Learning, volume 80 of *PMLR*, pp. 5512–5520, 2018. Junzhe Zhang and Elias Bareinboim. Transfer learning in multi-armed bandits: A causal approach. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pp. 1340–1346, 2017. Zihan Zhang, Xiangyang Ji, and Yuan Zhou. Almost optimal batch-regret tradeoff for batch linear contextual bandits. *arXiv:2110.08057*, 2022. Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, and Mingming Gong. Counterfactual fairness with partially known causal graph. In *Advances in Neural Information Processing Systems*, 2022. ## A Appendix: Definition Of Entnew This section is taken from Subramanian & Ravindran (2022). As discussed in Section 4.2, P(V |paV ) = Cat(V ; b1*, ..., b*r), where (b1*, ..., b*r) ∼ Dir(θV |paV ). Here θV |paV is a vector of length, say, r. Let θV |paV [i] denote the i'th entry of θV |paV . We define an object called Ent that captures a measure of our knowledge of the CPD: $$\text{Ent}(\mathbb{P}(V|\mathbf{pa}_{V}))\triangleq-\sum_{i}\left[\frac{\theta_{V|\mathbf{pa}_{V}}[i]}{\sum_{j}\theta_{V|\mathbf{pa}_{V}}[j]}\ln\left(\frac{\theta_{V|\mathbf{pa}_{V}}[i]}{\sum_{j}\theta_{V|\mathbf{pa}_{V}}[j]}\right)\right].$$ We then define $\epsilon$. $$\mathsf{Ent}^{n e w}(\mathbb{P}(V|\mathbf{pa}_{V}))\triangleq{\frac{1}{r}}\sum_{i}\mathsf{Ent}(\mathsf{Cat}(b_{1}^{\prime},...,b_{r}^{\prime}))$$ where (b ′ 1 , ..., b′r ) ∼ Dir *..., θ*V |paV [i − 1], θV |paV [i] + 1, θV |paV $$\mathbf{a}_{V}\left[i+1\right],\ldots).$$ ## B Appendix: Proof Of Soundness We would like to show that Regret → 0 as B → ∞. As B → ∞, in the limit, the problem becomes unconstrained minimization of Υ(N). Note that for all N, Υ(N) ≥ 0. Therefore, the smallest possible value of Υ(N) is 0. First, note that Nx,cA → ∞, ∀(x, c A) =⇒ Υ(N) → 0. This is because ∀(x, c A), $$N_{x,{\bf c}^{A}}\rightarrow\infty\implies\frac{1}{1+\ln(N_{x,{\bf c}^{A}}+1)}\rightarrow0$$ which, in turn, makes Q(P[V |paV ], N) → 0, ∀(V, paV ). From Equation 3, it is easy to see that this causes Υ(N) → 0. Also note that Υ(N) → 0 =⇒ Nx,cA → ∞, ∀(x, c A). To see this, let Υ(N) → 0, and consider the case where there exists a (x, c A) such that Nx,cA is finite. That means that there is at least one term of the form $$\frac{1}{1+\ln(N_{x^{\prime},\mathbf{c}^{A\prime}}+1)}$$ which occurs in the Q(.) function of at least one CPD P[V |paV ], causing Q(P[V |paV ], N) > 0 since Entnew > 0. This, in turn, causes Υ(N) ̸→ 0, resulting in a contradiction.9 Thus, Nx,cA → ∞, ∀(x, c A) ⇐⇒ Υ(N) → 0. In other words, each (x, c A) gets a number of samples tending towards infinity if and only if Υ tends to 0. Thus, since Υ(N) ≥ 0, Algorithm 1a will allocate Nx,cA → ∞, ∀(x, c A). Each CPD has at least one (x, c A) whose samples will be used to update its beliefs in Algorithm 1a.10 This means that each CPD will have its beliefs updated a number of times approaching infinity. Thus, for any (V, paV ), Pˆ[V |paV ] → P[V |paV ]. As a result, we have that Pˆ → P. Since the agent's policy constructed using P will necessarily be optimal, we have that *Regret* → 0. This completes the proof. ## C Appendix: Proof Of Regret Bound In this section, we prove Theorem 4.2. The proof follows closely the one in Subramanian & Ravindran (2022) and adapts it to our setting. First, we define the assumptions (A2) under which the theorem holds: 9This makes an implicit technical assumption that P[c] > 0, ∀c and that min(val(Y )) > 0. These are stronger assumptions than necessary, and could be weakened in the future. 10Due to the technical assumption in footnote 9 1. There is some non-empty past logged data (|DL| > 0), and it was generated by an (unknown) policy π where every action has a non-zero probability of being chosen (π(x|c A) > 0, ∀x, c A). The latter is a commonly made assumption, for example, in inverse-propensity weighting based methods. 2. |DL| ≥ αB, for some constant α > 0 11. This is generally achievable in real world settings since we usually have fairly large logged datasets (or it is quite cheap to acquire logged data; for example, think of search logs), and for the bound to hold we technically can have a very small α as long as it is greater than 0. 3. The cost function β is constant.12 This is a common case in real world applications, especially when we do not have estimates of cost; in those cases, we typically assign a fixed cost to all targeted experiments. ## C.1 Expression For Overall Bound First, note that Equation 3 in Subramanian & Ravindran (2022) remains the same even for our case. This is because it depends only on the factorization of E[Y |do(x), c A] (see Equation 1 in the main paper) and on the fact that in the evaluation phase the agent uses expected parameters of the CPDs (derived from its learned beliefs) to return an action for a given context. Therefore, suppose, with probability ≥ 1 − δX,paY , $$|\forall x,{\hat{\mathbb{P}}}(Y=1|X,\mathbf{pa}_{Y})-\mathbb{P}(Y=1|X,\mathbf{pa}_{Y})|\leq\epsilon_{X,\mathbf{pa}_{Y}}$$ and with probability ≥ 1 − δC|paC , $$\forall c,\ |{\hat{\mathbb{P}}}(C=c|\mathbf{pa}_{C})-\mathbb{P}(C=c|\mathbf{pa}_{C})|\leq\epsilon_{C|\mathbf{pa}_{C}}$$ $$\left(4\right)$$ where the expressions for δX,paY , δC|paC , ϵX,paY and ϵC|paC will be derived later in this section. Then with probability ≥ 1 −PpaY δX,paY −PC∈C PpaC δC|paC , for any given c A, $$\operatorname{Regret}(\mathbf{c}^{A})=\mathbb{E}[Y|d o(a^{*}),\mathbf{c}^{A}]-\mathbb{E}[Y|d o(a_{a l g}),\mathbf{c}^{A}]\leq2\epsilon_{X}^{\prime}+3\sum_{C\in C^{B}}\epsilon_{C}^{\prime}$$ C (4) where we define $$\epsilon_{X}^{\prime}\triangleq\sum_{\mathbf{pa_{Y}}}\mathbb{P}(\mathbf{pa_{Y}}|\mathbf{c}^{A})\epsilon_{X,\mathbf{pa_{Y}}}$$ and $$\epsilon_{C}^{\prime}\triangleq\sum_{\mathbf{pa}_{C}}\mathbb{P}(\mathbf{pa}_{C}|\mathbf{c}^{A})\epsilon_{C|\mathbf{pa}_{C}}$$ C.2 Expressions for δC|paC and ϵC|paC Denote MV ≜ |val(V)|. Let LpaC be the number of samples in DL where P AC = paC . Now, our starting estimate of Pˆ(C = 1|paC ) using DL is computed as (θ (1) C|paC + 1)/(LpaC + 2). Since DL is built by observing C A according to the natural distribution and choosing X according to some (unknown) policy, the proof of Lemma A.1 in Subramanian & Ravindran (2022) can be followed if we replace T ′ by αB since |DL| ≥ αB. Therefore, suppose, with probability at least 1 − δ L C|P AC , it is true that ∀paC , LpaC ≥ αBP(paC , c $${}^{(1)}-\epsilon_{P A_{C}}^{L}$$ 11We also assume that B is finite, as discussed earlier. 12Without loss of generality, we let this constant be equal to 1. $\forall$D. If the above event is true, then it is also true that with probability at least 1 − δC|paC , it is true that $$\forall c,\ |\hat{\mathbb{P}}(c|\mathbf{pa}_{C})-\mathbb{P}(c|\mathbf{pa}_{C})|\leq{\sqrt{\left[{\frac{2}{\alpha B\mathbb{P}(\mathbf{pa}_{C},\mathbf{c}^{A})-\epsilon_{C|P A_{C}}^{L}}}\right]\ln\left({\frac{2}{\delta_{C|\mathbf{pa}_{C}}}}\right)}}$$ where $$\epsilon_{C|P A_{C}}^{L}=\sqrt{\left[\frac{\alpha B}{2}\right]\ln\left(\frac{M_{P A_{C}}}{\delta_{C|P A_{C}}^{L}}\right)},\;\;M_{P A_{C}}=\prod_{C\in P A_{C}}M_{C}$$ Therefore, we have that $$\epsilon_{C|\mathbf{pa}_{C}}=\sqrt{\left[\frac{2}{\alpha B\mathbb{P}(\mathbf{pa}_{C},\mathbf{c}^{A})-\epsilon_{C|PA_{C}}^{L}}\right]\ln\left(\frac{2}{\delta_{C|\mathbf{pa}_{C}}}\right)}\tag{5}$$ ## C.3 Expressions For Δx,Pay And Εx,Pay Let Lx,paYbe the number of samples in DL where (*X, P A*Y ) = (x, paY ). As before, recollect that our estimate of Pˆ(Y = 1|x, paY ) is computed as (θ (1) Y |x,paY + 1)/(Lx,paY + 2). Further, the mean of Lx,paY is at least |DL| · P(paY , c A) · *m > αBm*P(paY , c A), where m = minx,cA π(x|c A) and π is the unknown logging policy. From our set of assumptions (A2), we have that π(x|c A) > 0, ∀x, c A; therefore, m > 0. Given this, the proof of Lemma A.2 in Subramanian & Ravindran (2022) can be followed. Therefore, suppose, with probability at least 1 − δ L X,P AY , it is true that $$\forall(x,\mathbf{pa_{Y}}),\ L_{x,\mathbf{pa_{Y}}}\geq\alpha B m\mathbb{P}(\mathbf{pa_{Y}},\mathbf{c}^{A})-\epsilon_{X,P A_{Y}}^{L}$$ where $$\epsilon_{X,P A_{Y}}^{L}=\sqrt{\left[\frac{\alpha B}{2}\right]\ln\left(\frac{M_{X}M_{P A_{Y}}}{\delta_{X,P A_{Y}}^{L}}\right)}$$ If the above event is true, then it is also true that with probability at least 1 − δX,paY , it is true that $$\forall x,|\hat{\mathbb{P}}(Y=1|x,\mathbf{p a}_{Y})-\mathbb{P}(Y=1|x,\mathbf{p a}_{Y})|\leq\sqrt{\left[\frac{2}{\alpha Bm\mathbb{P}(\mathbf{p a}_{Y},\mathbf{c}^{A})-\epsilon_{X,\,PA_{Y}}^{L}}\right]\ln\left(\frac{2M_{X}}{\delta_{X,\mathbf{p a}_{Y}}}\right)}\,.$$ Therefore, we have that $$\epsilon_{X,\mathbf{pa_{Y}}}=\sqrt{\left[\frac{2}{\alpha Bm\mathbb{P}(\mathbf{pa_{Y}},\mathbf{c}^{A})-\epsilon_{X,PA_{Y}}^{\mathbb{Z}}}\right]\ln\left(\frac{2M_{X}}{\delta_{X,\mathbf{pa_{Y}}}}\right)}\tag{6}$$ ## C.4 Final Bound Now, we can plug the Equations 5 and 6 back into Equation 4, and following the same union bound trick as in Subramanian & Ravindran (2022) and some algebra, we get that for any 0 *< δ <* 1, with probability ≥ 1 − δ, vuut "2 αmBP(paY , cA) − ϵ L X,P AY # ln 2MX(MC + |C|) δ Regret ≤ 3EpaY ,cA vuut "2 αBP(paC , cA) − ϵ L P AC # ln 2(MC + |C|) δ (7) C∈CB EpaC ,cA + 3 X where $$\epsilon_{P A c}^{L}=\sqrt{\left[\frac{\alpha B}{2}\right]\ln\left(\frac{M_{P A c}\left(M_{C}+|C|\right)}{\delta}\right)},\;\epsilon_{X,P A\gamma}^{L}=\sqrt{\left[\frac{\alpha B}{2}\right]\ln\left(\frac{M_{X}M_{P A\gamma}\left(M_{C}+|C|\right)}{\delta}\right)}$$ It can be simplified as presented in Theorem 4.2 as: $$R e g r e t\in O\left(|C|\sqrt{\left(\frac{1}{m B-\epsilon}\right)\ln\frac{M_{X}M_{C}}{\delta}}\right)$$ where $$\epsilon\in{\cal O}\left(\sqrt{B\ln\frac{M_{X}M_{P A_{Y}}M_{C}}{\delta}}\right)$$ This completes the proof. ## D Regret Behavior For Large Values Of B Figures 1 and 2 provided regret behavior for small values of B. We are primarily interested in small-budget behavior since that occurs more commonly in practice; for example, budgets exclusively for experimentation in software teams in often quite low. However, it is also interesting to look at regret behavior as B becomes large. Specifically, we increase B ![17_image_0.png](17_image_0.png) large enough that all algorithms converge to optimal (or very close to optimal). We do this for Experiments 1 and 2. Figures 9 and Figures 10 provide the results. Note that the Figures 1 and 2 just zoom into these plots for small B (i.e., B between 15 and 30). Figure 9: Experiment 1 results (Section 5.2) for large values of B. Figure 10: Experiment 2 results (Section 5.3) for large values of B. ![17_image_1.png](17_image_1.png) PropToValue is the slowest to converge to the optimal policy in both instances, though it demonstrates better low-budget behavior than EqualAlloc. MaxSum has the best low budget behavior among the baselines because it maximizes the total number of samples within that low budget; it, as B gets larger, EqualAlloc catches up (and even outperforms it) as it explores the context-action space better. In both experiments, however, our algorithm converges to an optimal policy faster than all baselines. ## E Appendix: Understanding The Reason For Better Performance Of Our Algorithm As discussed in Section 4, our algorithm balances the trade off between allocating more samples to contextaction pairs that are higher value according to its beliefs and allocating more samples for exploration, while taking into account information leakage due to the causal graph. To understand this in more detail, we consider Setting 1 of Experiment 1, and zoom into the case where B = 20. We do 50 independent runs and plot the frequency of choosing samples containing different value of C1. We show this for our algorithm and all baselines (except EqualAlloc since it is obvious how it allocates). ![18_image_0.png](18_image_0.png) Figure 11: Frequency of choosing or encountering each value of C A. Highlighted in teal color are the 'highvalue' contexts (i.e., contexts for which learning the right actions provides higher expected rewards). Figure 11 shows the results of this experiment. MaxSum allocates lesser number of samples than our algorithm to the two context values (C1 ∈ {0, 1}) that are high value. PropToValue over-allocates to these two context values, resulting in poor exploration of other contexts. Our algorithm, in contrast, allocate relatively more to the high-value contexts, while also maintaining good exploration of other contexts. ## F Appendix: Intuition For Why Equalalloc Performs Worse In Experiment 2 First, if we look at the figures in the main paper along with Figure 10, we see that that EqualAlloc performs worse only for low values of B, and significantly improves its performance once B becomes 35. To understand the intuition, first note that Experiment 2 captures the 80-20 rule (while randomizing other aspects). It is easy to see that EqualAlloc overallocates to the lower-value 80% of contexts. PropToValue, on the other hand, allocates more to the 20%, and therefore performs better for lower budgets (i.e., makes better use of the small budget). However, as budget increases, PropToValue fails to explore well, causing EqualAlloc to outperform it. MaxSum outperforms EqualAlloc in lower budgets because allocations which allocate more to the higher-value contexts are also possible solutions for MaxSum, and therefore on average it performs better than EqualAlloc which overallocates to the low-value contexts consistently. However, as B becomes larger, EqualAlloc catches up, as seen in Figure 10. ## G Appendix: Regarding Parameterizations For Experiments 1 through 3, we follow parameterizations very similar to the one used in Subramanian & Ravindran (2022). Please see README.md in the Code folder in SupplementaryMaterial.zip for the full parameterizations of all experiments. ## H Appendix: How To Implement Our Algorithm In Practice In our experiments, we have used the scipy.optimize.differential_evolution solver13 from the scipy Python library to solve the problem in Section 4.2. This solver implements differential evolution,14 which is an evolutionary algorithm which makes very few assumptions about the problem. The scipy.optimize.differential_evolution method is quite versatile; for example, it allows the specification of the objective function as a Python callable, and also allows arbitrary nonlinear constraints of type scipy.optimize.NonlinearConstraint. However, a practitioner can use any suitable optimization algorithm or heuristic to solve the problem in Section 4.2. ## I Appendix: Proof Of Counterfactual Fairness To prove counterfactual fairness, first note that the learned policy is a map ϕˆ : val(C A) → val(X); during inference, for any given c A, the value of X is intervened to be set to ϕˆ(c A). Following the notation in Pearl (2009a), we let ϕˆCB←cB′ (c A) denote ϕˆ(c A) in the counterfactual world where the variables in C B are set equal to c B′. To achieve counterfactual fairness15, we want that, for all c A, c B, c B′, x, $$\mathbb{P}\left[\hat{\phi}_{\mathcal{C}^{B}+\mathbf{c}^{B}}(\mathbf{c}^{A})=x|\mathbf{c}^{A},\mathbf{c}^{B}\right]=\mathbb{P}\left[\hat{\phi}_{\mathcal{C}^{B}+\mathbf{c}^{B\prime}}(\mathbf{c}^{A})=x|\mathbf{c}^{A},\mathbf{c}^{B}\right]$$ contains in Section 3.1, the conditional independence in $\mathcal{C}$ is Now, under the assumptions in Section 3.1, the conditional independences in $\mathcal{G}$ imply that we have $$\mathbb{P}\left[\hat{\phi}(\mathbf{c}^{A})=x|\mathbf{c}^{A},\mathbf{c}^{B}\right]=\mathbb{P}\left[\hat{\phi}(\mathbf{c}^{A})=x\right],\forall\mathbf{c}^{B}$$ This gives us that $$\mathbb{P}\left[{\hat{\phi}}_{\mathbf{C}^{B}\leftarrow\mathbf{c}^{B\prime}}(\mathbf{c}^{A})=x|\mathbf{c}^{A},\mathbf{c}^{B}\right]=\mathbb{P}\left[{\hat{\phi}}(\mathbf{c}^{A})=x\right],\forall\mathbf{c}^{A},\mathbf{c}^{B},\mathbf{c}^{B\prime},x$$ which satisties the counterfactual fairness condition. 13https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html 14https://en.wikipedia.org/wiki/Differential_evolution 15Our definition of counterfactual fairness draws from the definitions in Kusner et al. (2017); Zuo et al. (2022). ## J Appendix: Demographic Parity A common criterion for group-level fairness is Demographic Parity (Kusner et al., 2017). Demographic Parity (DP) requires that the distribution over actions remains the same irrespective of the value of the sensitive variables. Formally, it requires that $$\mathbb{P}\left[{\hat{\phi}}=x|\mathbf{c}^{B}\right]=\mathbb{P}\left[{\hat{\phi}}=x|\mathbf{c}^{B\prime}\right],\ \forall\mathbf{c}^{B\prime}$$ Our algorithm, however, does not guarantee DP: $$\mathbb{P}\left[{\hat{\phi}}=x|\mathbf{c}^{B}\right]=\sum_{\mathbf{c}^{A}}\mathbb{P}[\mathbf{c}^{A}|\mathbf{c}^{B}]\cdot\mathbb{P}[{\hat{\phi}}(\mathbf{c}^{A})=x]$$ which may not equal P-ϕ = x|c B′since P[c A|c B] may not equal P[c A|c B′]. However, we discuss one way through which DP can be achieved, but with a reduction in agent's performance. Specifically, we can achieve DP by ensuring that the agent acts according to a fixed policy irrespective of the value of c A. Intuitively, we construct a fixed policy that maximizes rewards given the agent's learned beliefs. Specifically, let ψˆ(x, c A) ≜ Eˆ[Y |do(x), c A]. Assume the fixed policy is probabilistic. Therefore, we're interested in a policy q which is a distribution over |val(X)|. Denoting q (x) ≜ q(x), we solve the following optimization problem: $$<...,q^{(x)},...>=\operatorname*{arg\,max}_{<...,q^{(x)^{\prime}},...>}\sum_{x}\left[q^{(x)^{\prime}}\left[\sum_{{\bf c}^{A}}\hat{\mathbb{P}}[{\bf c}^{A}]\cdot\hat{\psi}(x,{\bf c}^{A})\right]\right]$$ subject to $$\sum_{x}q^{(x)^{\prime}}=1$$ It is easy to see that one global optimum to this involves assigning a probability of 1 to an action x that results in the largest value of PcA Pˆ[c A] · ψˆ(x, c A). That is, choose x such that $$x=\arg\operatorname*{max}_{x^{\prime}}\sum_{\mathbf{c}^{A}}{\hat{\mathbb{P}}}[\mathbf{c}^{A}]\cdot{\hat{\psi}}(x^{\prime},\mathbf{c}^{A})$$ and let q(x) = 1, and q(x ′) = 0, ∀x ′ ̸= x. Note that this fixed policy would perform worser on expectation than the context-specific policy learned by the agent in the main part of the paper. This, however, is a cost that can be paid to achieve DP. ## K Appendix: Information Leakage When C B = ∅ Our formalism (Section 3) allows C B to be empty. In this case, there is still information leakage possible, which our algorithm can exploit. To see this, consider the same graph we used for Experiment 1 (Section 5), but let C A = {C1, C0}. Consider the two pairs (x, c0, c′1 ) and (x, c0, c1). It turns out that P[y|do(x), c0, c′1 ] = P[y|do(x), c0, c1] = P[y|*x, c*0]. Thus, there is information leakage arising due to the conditional independencies arising from the causal graph, even when C B = ∅. Our algorithm exploits this structure through Q(P[Y |*x, c*0], N). ## L Appendix: Detailed Plots L.1 Related To Experiment 4 Figure 12 shows the breakdown on Figure 4 for each value of B. For B = 15, there is no clear pattern; this is because the budget is so small that they all tend to learn poor policies. But when we increase B, we see ![21_image_0.png](21_image_0.png) Figure 12: Detailed plots related to Experiment 4 (Section L.1). that our algorithm performs better when the graph is the most squeezed - like we had discussed in the main part of the paper. Further, in the most squeezed setting (lowest value of x-axis), our algorithm's difference from the next best algorithm is larger for smaller B, and decreases as B increases; this is because a larger budget reduces the advantage that our algorithm gains from better utilization of information leakage. ## L.2 Related To Experiment 5 Figure 13 shows the breakdown of Figure 5 for each value of B. We see that for each value of B, as ज्याटिंग्रेस increases, regret decreases - similar to what happens in the aggregated plot. Further, as B increases, 0 regret is achieved faster - again in line with expectations. ## L.3 Related To Experiment 7 Figure 14 shows the breakdown of Figure 7 for each value of B. For B = 15, there is no clear pattern; this is because the budget is so small that they all tend to learn poor policies. But when we increase B, we see that all algorithms perform worser when G does not match the true underlying causal graph. Further, our algorithm continues retain its better performance over all baseline for all values of B. ## Broader Impact Statement This work provides an improved mathematical framework and general-purpose algorithm for contextual bandits. We do not use any data that contains any personally identifiable information or sensitive personally identifiable information. To the best of our knowledge, we do not believe there were any ethical issues ![22_image_0.png](22_image_0.png) Figure 13: Detailed plots related to Experiment 5 (Section L.2). associated with the development of this work. Further, given the nature of the work as foundational and introducing a new algorithm (and not specific to an application), we do not foresee any specific potential negative ethical issues created by this work. However, we do point out that researchers utilizing this method to their specific applications should adhere to ethical standards of their own (e.g., by avoiding targeting interventions on subpopulations based on racial attributes, or by ensuring that rewards are not defined in a way that incentivizes learning discriminatory behavior). ![23_image_0.png](23_image_0.png) Figure 14: Detailed plots related to Experiment 7 (Section L.3).
Review 1: Summary: This paper considered causal contextual bandits with single batch data acquisition, a setting similar to active learning. Given an initial dataset to warm start, the bandit policy has one shot to query a batch of actions under budget constraints. The goal is to find the best policy that minimizes simple regret. The paper also considered the presence of a causal graph. The authors proposed CoBA algorithm that solves a nonlinear optimization problem with nonlinear and integer constraints where the objective considers an entropy-like measure to capture the impact of new samples. An asymptotic theoretical result is provided that as the budget goes to infinity, the simple regret goes to zero. Extensive experiments on synthetic and real-world data validated the effectiveness of the proposed algorithm. Strengths and Weaknesses: Strengths: 1. The problem of one shot data acquisition in contextual bandits is a novel and well-motivated setting that could be interesting to the readers. 2. The proposed algorithm appears to be practical. 3. The empirical evaluations are well designed and thorough. Weaknesses: 1. [Related work] The relation to existing works is not clearly explained in related work section. First, active learning is highly related to the setting. The authors mentioned that "the agent receives outcomes only for actions that were taken" which is indeed the setting of pool-based active learning. One difference I can think of is that the objective here is to minimize simple regret, but it needs further clarification. Second, the paper is also related to batched bandits where the time horizon $T$ could be (much) smaller than batch size. This paper could be viewed as an extreme case of batched bandits with $T=1$ and the authors should explain why existing results are not related (apart from the causal setting). 2. [Scope] The authors build the problem setting of single shot data acquisition based on causal contextual bandits [1]. The scope of the work might be limited since single shot data acquisition problem has not been studied in basic settings such as multi-armed bandits or contextual bandits yet (according to the related work section). The authors should clarify if the results in the paper could be generalized to multi-armed bandits or contextual bandits, which will generate greater impact. 3. [Theoretical result] The asymptotic theoretical result cannot justify the proposed algorithm: taking budget to infinity makes the optimization problem unconstrained and can only represent the case of training with infinite samples. This cannot justify the selective data acquisition problem considered in the paper. It is important to understand the trade-off between budgets and regrets even if the analysis is limited to simple cost function such as proportional to the number of samples (the one used in Experiment 6). [1] Chandrasekar Subramanian and Balaraman Ravindran. Causal contextual bandits with targeted interventions. In Proceedings of the Tenth International Conference on Learning Representations, 2022. Requested Changes: 1. Please clarify the relation to active learning and batched bandits. 2. Please clarify if the results in the paper could be generalized to multi-armed bandits or contextual bandits. 3. Please consider to anlyze the algorithm theoretically with finite budgets. While study general cost functions would be ideal, simple cost function such as proportional to the number of samples could be an informative first step. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper proposes a method to select a set of bandit experiments to perform in a single decision instant. The authors propose a method to integrate the information provided by historical data and a causal graph to learn the optimal policy to perform in such a contextual bandit setting. A guarantee of convergence is provided, and several experiments and sensitivity analyses have been performed to test the capabilities of the approach. Strengths and Weaknesses: I think the setting is derivative, but the idea of a one-shot learning procedure is somehow interesting. However, I think that the theoretical guarantees provided are too not detailed enough to be relevant to the problem. Moreover, I think that the setting is not motivated fully. I think that the real testing campaigns are run in batches. Therefore, this work should also take into account the case in which the learning procedure is repeated multiple times. I am unsure about the title you provided for the paper. Indeed, your modeling is somehow not applicable to a sequential decision-making setting since the choice you provided is a single planning strategy. I think that what you are proposing is similar to the work in the learning from logged bandit feedback, but not totally coherent with it. Therefore, my suggestion is to try to make the proposal of the paper more clear from the title. Requested Changes: 1) Section 1.2: i suggest you to add some more details about the definition of fairness you are going to consider. 2) define the symbol do(x) 3) add formulas punctuation 4) a parenthesis is missing in the second to last line on page 4 5) Section 3.1 I think that an example of the causal graph would help in understanding the setting. 6) add a textual description of the proposed algorithm since the pseudocode is not self-explanatory 7) I see the value of Theorem 1, but I think that if you are in a bandit setting, you should somehow also try to quantify the convergence order of the regret (maybe only asymptotically). this would give a more robust justification for the work you proposed. 8) from the introduction, I understood that the method might have been instantiated even in the case no prior information is available. Instead, if I look at Alg. 1, it shows that historical data are necessary. Can you comment more on that? 9) What about comparing your proposed method with online learning strategies? for instance, the one from Subramanian 2022, to analyze how much we are losing because we cannot act in an online way. 10) Demographic parity: I am unsure why you opt to leave this part in the appendix. Broader Impact Concerns: I do not see any direct impact on society or ethical impact. The method has been also characterized in terms of fairness. ================================================== Review 3: Summary: ## High-Level Overview One-shot decision-making has a wide range of applications, from software experimentation, recommendation systems, marketing campaign design, etc. In all of these settings, the objective is to learn a near-optimal policy that maps contexts (information about the current decision) to actions, with the hopes of achieving good downstream rewards. In many of these settings, the algorithm has offline data available apriori and a budget to be able to select new context action pairs to observe their performance. For example, consider the software design optimization setting. The agent has access to historical data on different features of the software and the relevant user metrics. However, subject to an experimentation budget, they can perform additional roll-outs of different feature combinations to test the metrics, prior to deciding the final software decisions. At a high level, the main contributions for the authors are as follows. $(i)$ The authors study how to actively obtain multiple samples in contextual bandits with an additional causal graph modelling how the context variables interact. $(ii)$ They propose a new algorithm based on minimizing an entropy measure for selecting the "one-shot" data points to experiment on. $(iii)$ They provide numerical studies showing the performance of their algorithm against other benchmarks. ## Main Model The underlying problem is modeled as a causal model $M$ which depends on a DAG $G$ and a probability distribution $P$ which factorizes over it. The set of variables in the graph contains the context variables (both main and auxiliary ones that the policy cannot depend on), the action, as well as the outcome reward. The agent only knows the graph $G$ but not the probability distribution over it. The agent has access to logged offline data collected from an unknown behavioural policy. The data consists of samples $(c_i, x_i, y_i)$ where $c_i$ is context, $x_i$ is action, and $y_i$ is feedback. There is a cost $\beta$ for obtaining a specific sample, and the goal of the algorithm is to pick $N_{(x,c)}$ for the number of samples to collect from an $(x,c)$ pair subject to a maximum budget of $B$. The agent's goal is to learn a policy $\hat{\phi}$ mapping contexts $C$ to actions $X$ such that it minimizes regret compared to the optimal policy averaged over the context distribution from the underlying model. ## Assumptions The authors make several simplifying assumptions in the work: 1. The set of context variables is known and finite 2. The graph has a single outgoing edge from action $X$ to reward $Y$ 3. There are no unobserved confounders 4. If $C$ confounds $C'$ which is a main context variable then $C$ is also a main context variable. This is needed for the fairness results and simplifies the algorithm design. ## Main Algorithm First note that the expect reward can be written as an expectation taken with respect to the model of the causal graph. Hence, given an estimate of the underlying model for the causal graph, a simple estimate for the optimal policy is to solve for the optimal policy based on the estimated model. As such, in order to develop the learned policies the authors simply use a plug-in approach. The main technical question is deciding how to select the samples $N_{(x,c)}$ for the different context action pairs subject to the budget constraints $B$. To this end, the authors define an entropy-like measure $\Psi(N)$ using inspiration from Subramanian and Ravindran (2022). Given this, the sampling approach optimizes this subject to the budget constraints. ## Theoretical Results The authors show a single theoretical result for their algorithm, notably that as the budget $B$ approaches infinity then the agent's regret will tend towards zero. However, this is true for any algorithm which in the limit collects infinite samples for every context pair, achieved by other benchmarking algorithms such as proportional or equal allocation. ## Experimental Results The authors complement the theoretical results with experimental results highlighting the performance of their algorithms. The results show that their approach (COBA) outperforms other heuristics including 1. Equal Allocation 2. Proportional Allocation 3. Maximizing the total number of samples obtained ## Fairness Results Lastly, the authors provide a brief discussion on the fairness property of the algorithm. If the "auxiliary" context variables contains all protected attributes, then they show that their resulting policy satisfies counterfactual fairness. Strengths and Weaknesses: ## Strengths - The paper is very well written and easy to follow. The table of notation is easy to refer back to during the technical discussions, and the introduction provides a succinct overview of the entire paper. - The authors consider a very well-motivated model, and provide one of the first works to consider causal contextual bandits with the one-shot data collection setting. - The empirical results help complement their algorithm design and analysis by showing the performance of their algorithm (COBA) compared to several other benchmarks. ## Weaknesses - The only theoretical guarantees are soundness, which is very weak and achieved by a wide class of algorithms. - The experimental results, to complement the soundness, only look at small-scale (limited budget) performance. However, since soundness provides no regret rates it would be more interesting to look at asymptotics to show that their algorithm learns more efficiently than equal-allocation benchmarks. - The fairness model considered is not very well fleshed out and seems added only as an afterthought. Requested Changes: ## Requested Changes - Theoretical Results: The theoretical results provided only provide soundness, which is achieved by a wide class of algorithms. Since the paper is relatively theoretical, it would be interesting to see more discussion around potential rates. For example, what is the regret rates provided by Subramanian and Ravindran (2022), and could those techniques be used similarly in this setting? - Scalability: There is limited discussion on the scalability of the experiments and the assumptions. The authors assume a finite context space (which doesn't capture most real-world settings) and this is heavily used in the experiment and algorithm design. Moreover, the authors only comment that the optimization problem is solved approximately using scipy. However, as written it is a combinatorial optimization problem, and so the authors should comment more on how to implement their algorithm in practice. - Fairness: The fairness discussion seemed added on as an after-thought (including the separation of the context variables). - Experiments: To help make the theoretical results more interesting, it would potentially be interesting to see the following experiments: 1) Scale the $B$ to look at asymptotic learning efficiency of your algorithm compared to the benchmarks 2) Implement the "optimal" allocation discussed in Remark on page 8 ## Questions - Is the simplifying assumption on the "hidden" context variables only required for the fairness model? - Can you comment on the scalability and implementation details of the algorithm? - What is the optimal solution to Section 4.2 without any logged data? That might help build some more intuition, because taking into account the causal model will return a solution which isn't equal allocation. - Any ability to compare algorithm performance to Subramanian and Ravindran (2022)? - Compare to optimal allocation for small scale experiments? - Intuition in Experiment 2 that equal allocation is performing worse? ## Minor Comments - Abstract is nicely written, clear and concise - The introduction provides nice context in the model. Maybe it would be helpful to emphasize how the one-shot comes in. Moreover, under the advertisement model initially, the one-shot feels like picking users (obviously tough), but because it is aggregate over contexts that is more feasible. - "of of" on top of page two - Section 1.3 provides a great summary of the paper - Table 1 is very helpful for the readability of the paper - First paragraph under Section 5.2 is a bit tough to read - "As" in mid-sentence under Appendix B Broader Impact Concerns: As part of the Appendix the authors provide a sufficient broader impact statement highlighting that $(i)$ the work provides a general purpose algorithm outside of any particular application and $(ii)$ researchers utilizing the method should adhere to ethical standards for their specific context. ================================================== Metareview: Recommendation: Reject Comment: There was significant improvement after the revision; however, the reviewers still have concerns leaning towards rejecting the current version of the paper. The AE believes that the submission could be significantly improved by incorporating the comments provided below. ### Theoretical guarantee The soundness result is not solid, as it can be achieved by other wide classes of algorithms. The authors have added new results on the regret bound in their revision; however, this result is still not strong enough, especially given strict assumptions such as the cost being equal to one. As a bandit paper, it should include a more substantial regret bound theorem. Additionally, the reviewers and AE strongly recommend including a discussion to help interpret or compare their bound with prior works. ### Motivation and positioning The reviewers and AE are not fully convinced by the suggested problem setup. The authors could take into account Reviewer mgV9's comment and strengthen the motivation (Sec. 1.1), for example, by including a visual illustration of the practical scenario of the considered problem. Additionally, the authors could clarify the positioning of the work by discussing the [related work] and [scope], as suggested by Reviewer et4X. ### Self-containment The paper could be made more self-contained by clarifying important details in the main text. For instance, explaining the implementation of the algorithm (Sec. 4.3) rather than stating that it is approximated using the scipy library. Additionally, outlining the assumptions for the regret bound (Sec. 4.5) is crucial for readers to understand the result. ### Fairness Model The current discussion on the fairness model lacks the strength to be considered a main contribution of the paper. The fairness section introduces new assumptions, such as pushing all sensitive variables into $C^B$, in an ad-hoc manner and defers vital details to the Appendix. These results could be improved from their current preliminary state by exploring what unique properties make the suggested algorithm robust to fairness issues, delving into the implications for practical scenarios, or even adding experiments to enhance the paper. ### Incorporating the rebuttal in the main text The AE believes that the paper underwent significant improvements during the revision period. The authors could consider reorganizing the paper more actively by adopting the feedback. For instance, the AE believes that the new experiments displaying the asymptotic behavior of regrets over a large budget could be valuable enough to be discussed in the main text. While the AE acknowledges the challenge of accommodating everything within the limited page constraints, integrating the new experiments into the original figure, without increasing space, is a viable option. ==================================================
# Are Eeg Sequences Time Series? Eeg Classification With Time Series Models And Joint Subject Training Anonymous authors Paper under double-blind review ## Abstract As with most other data domains, EEG data analysis relies on rich domain-specific preprocessing. Beyond such preprocessing, machine learners would hope to deal with such data as with any other time series data. For EEG classification many models have been developed with layer types and architectures we typically do not see in time series classification. Furthermore, typically separate models for each individual subject are learned, not one model for all of them. In this paper, we systematically study the differences between EEG classification models and generic time series classification models. We describe three different model setups to deal with EEG data from different subjects, namely subject-specific models (most EEG literature), subject-agnostic models and subject-conditional models. In experiments on three datasets, we demonstrate that off-the-shelf time series classification models trained per subject perform close to EEG classification models, but that do not quite reach the performance of domain-specific modeling. Additionally, we combine time-series models with subject embeddings to train one joint subject-conditional classifier on all subjects. The resulting models are competitive with dedicated EEG models in 2 out of 3 datasets, even outperforming all EEG methods on one of them. ## 1 Introduction An electroencephalogram (EEG) measures electrical activity in the brain for diagnostic purposes. EEG classification deals with mapping measured EEG signals to a downstream classification task, usually in the medical domain. This includes for example the identification of sleep stages to identify disorders (Teplan et al., 2002; Subha et al., 2010). While the usage of EEGs has potential for empowering disabled individuals (Al-Qaysi et al., 2018; Wang et al., 2016; Chen et al., 2015) and facilitating stroke rehabilitation (Ang et al., 2015; Alonso-Valerdi et al., 2015), the technological advancement is hindered by substantial obstacles in the consistency and clarity of EEG signals. EEG classification is especially challenging because EEG signals have an inherently low signal-to-noise ratio (SNR) (Johnson, 2006) and are highly non-Gaussian, non-stationary, and have a non-linear nature (Subha et al., 2010). Thus, there is a pressing need for advanced deep learning algorithms that can adeptly navigate the complexities of EEG data. The ongoing pursuit for EEG classification has turned towards leveraging sophisticated computational strategies, embodying progressions in deep learning (DL) (Roy et al., 2019) and geometric learning (GL) (Lotte et al., 2007). This includes dedicated manifold attention networks that employ non-Euclidean geometries for incorporating spatial and temporal domains. The research landscape for deep learning-based EEG classification however is isolated, with a lot of methods being developed for this special task. The question of whether EEG classification is closely connected to other areas of deep learning such as time-series forecasting is rarely investigated. Thus, there is only a small amount of literature that employs established methods and evaluation protocols from other learning domains to EEG classification. Additionally, the prevalent practice of training individual models for each subject imposes significant limitations on the scalability, adaptability, and efficacy of EEG classification. This paper proposes a paradigm shift towards using out-of-the-box time-series classification (TSC) models for EEG classification, challenging existing specialized frameworks, and promoting training a unified model instead of one model per subject. The differences in each of the subjects' brain activity, traditionally a challenge for aggregate training methodologies, are addressed through the use of distinct subject embeddings. To utilize this meta-information about the patient, we present three approaches to incorporate a dedicated subject embedding into a time series classification pipeline for EEG classification. This strategy focuses on the variability between subjects to the model. Our results indicate that established time-series classification approaches with subject embeddings can outperform dedicated state-of-the-art EEG classification models for the task of learning one classification model for all subjects. This paper presents a step towards embedding EEG classification into well-researched deep learning areas such as time-series classification. Thus it opens the door towards more efficient and better-understood learning on EEG data. Our study presents three primary contributions to the EEG and TSC research landscape: 1. The establishment of two robust baselines, ResNet and Inception, from the Time Series Classification discipline for EEG classification, accompanied by comprehensive comparative experiments against prevalent EEG classification models. 2. An exploration of model agnostic methodologies for joint subject training, utilizing subject-specific embeddings. 3. An extensive evaluation of the efficacy of our proposed embedding methodologies for both EEG and TSC frameworks. In our experiments established time-series models trained on all subjects together are competitive with dedicated EEG models on 2 out of 3 datasets, even outperforming all of them for one of these. Our code can be found in the supplementary materials and will be released upon acceptance. ## 2 Related Work 2.1 Eeg Classification In the EEG literature, most models consist of a combination of convolutions (spatial, temporal, and hybrid), normalization, pooling, and a final linear layer. One of the pioneering models is EEGNet (Lawhern et al., 2018), which employs a temporal convolution, two convolutional blocks, and a linear layer. The convolutional blocks comprise depthwise/separable convolutions, batch normalization, ELU activation, and average pooling, followed by dropout. MAtt (Manifold Attention) (Pan et al., 2022) introduces a novel approach by utilizing a manifold attention layer in the Riemann Space instead of the standard Euclidean Space. It combines a feature extractor, a manifold attention module, and a feature classifier. The feature extractor consists of a 2-layered CNN that convolves over the spatial and spatio-temporal dimensions sequentially. The manifold attention layer transforms the data from Euclidean Space to Riemann Space, applies an attention mechanism using the Log-Euclidean metric, and then converts the data back to Euclidean Space. The feature classifier is a linear layer responsible for class predictions. MBEEGSE (Altuwaijri et al., 2022) employs three independent sequences of convolutional blocks, each utilizing a combination of EEGblocks (Riyad et al., 2020) and SE attention blocks (Altuwaijri & Muhammad, 2022). The outputs of these sequences are concatenated and passed through a fully connected layer. This design allows for the exploration of different kernel sizes, dropout rates, number of temporal filters, and reduction ratios in each sequence, resembling a mixture of experts. EEG-TCNet (Ingolfsson et al., 2020) comprises three main parts: processing the input through temporal, depth-wise, and separable convolutions; extracting additional temporal features using causal convolutions with batch normalization, dropout, ELU activation, and skip connections; and utilizing a fully connected | Data Usage | | | | | | | |----------------------------------------------|--------|-----|----------|------|------------------------------|--------------| | Model | Single | All | Transfer | Meta | Specialised Layer | Architecture | | MAtt (Pan et al., 2022) | ✓ | − | − | − | Riemann space transformation | Attention | | MBEEGSE (Altuwaijri et al., 2022) | ✓ | − | − | − | EEGNetBlock | CNN | | EEG-TCNet (Ingolfsson et al., 2020) | ✓ | − | − | − | TCN-Blocks | CNN | | TCNet-Fusion (Musallam et al., 2021) | ✓ | − | − | − | notch filter | CNN | | FBCNet (Mane et al., 2021) | ✓ | − | − | − | spectral filtering | CNN | | SCCNet (Mane et al., 2021) | ✓ | − | ✓ | − | spatio-temporal filtering | CNN | | EEGNet (Lawhern et al., 2018) | ✓ | ✓ | − | − | bandpass filtering | CNN | | Shallow ConvNet (Schirrmeister et al., 2017) | ✓ | − | − | − | spatial filtering | CNN | | Inception (ours) (Ismail Fawaz et al., 2020) | ✓ | ✓ | − | ✓ | InceptionBlock | CNN | | ResNet (ours) (Wang et al., 2017) | ✓ | ✓ | − | ✓ | ResNetBlock | CNN | Table 1: Comparison of the EEG Classification literature. layer for classification. TCNet-Fusion (Musallam et al., 2021) retains this architecture but concatenates the outputs of the first and second layers before the final classification step. FBCNet (Mane et al., 2021) follows a similar approach to EEG-TCNet but incorporates spectral filtering in the initial stage. Multiple bandpass filters with varying cut-off frequencies are applied for spectral filtering. SCCNet (Wei et al., 2019) is a straightforward architecture comprising spatial and spatiotemporal convolutions, a dropout layer, average pooling, and a linear layer for logit generation. The spatiotemporal convolution aims to learn spectral filtering. ShallowConvNet (Schirrmeister et al., 2017) performs temporal and spatial convolutions, followed by average pooling and a linear layer. The activation function used after convolutions is typically ReLU (Rectified Linear Unit). In Table 1, we present the architectures used in EEG classification literature along with the corresponding training protocols. Here, *Single* denotes the case where the model solely utilizes data from each subject individually, All signifies the usage of data from all subjects without any metadata (for example using the subject Id has an extra feature), *Transfer* indicates the utilization of data from all subjects in a transfer learning setting, and *Meta* implies the utilization of data from all subjects along with the metadata. To the best of our knowledge, we are the first to evaluate the *Meta* setting by incorporating subject information. ## 2.2 Time Series Classification In the time series domain, strategies from other domains, particularly architectures from computer vision, have been successfully applied (Wei et al., 2019; Kachuee et al., 2018a). CNN-based models such as ResNet (He et al., 2016; Wang et al., 2017) or Inception (Szegedy et al., 2015; Ismail Fawaz et al., 2020) have been adapted by applying convolutions over the spatial dimension. Additionally, transformers or attention-based models (Vaswani, 2017) have been tailored to the time series domain by effectively tokenizing inputs before processing them (Kachuee et al., 2018b). Similar to the computer vision domain (Dosovitskiy et al., 2020), attention-based models are considered more effective for high data regimes compared to CNNs, as they provide a global view of the data instead of a local focus imposed by convolutional kernels. In addition to these adaptations, there are also architectures specifically designed for time series tasks, such as ROCKET (Dempster et al., 2020). ROCKET simplifies time series models while maintaining performance by employing a convolutional layer with multiple randomly initialized kernels of different sizes, dilations, and paddings. The output is then passed to a logistic or ridge regression model. ## 2.3 Static And Time-Independent Features In integrating static, time-independent features such as subject IDs into time series data, approaches similar to those in the literature have been identified (Leontjeva & Kuzovkin, 2016; Tayal et al., 2022). These approaches involve either copying static features for each time step to construct a new time series with additional channels or concatenating static features later in the model with features inferred solely from the time series, typically leading to a model with two separate encoders (one for the static features and one for the time series). This integration of group-specific information, such as subjects in EEG datasets, shares similarities with techniques used in the recommender systems literature. In recommender systems, item IDs are utilized to explicitly capture differences between items while employing joint training protocols. Neural networks have been extensively used to embed item information (Song & Chai, 2018; He et al., 2017), leading to state-of-the-art performance in the field (Rashed et al., 2022). ## 3 Understanding Eeg Data As Time Series With Static Attributes 3.0.1 Notation By 1:N := {1*, . . . , N*} we denote the set of the first N integers. By R ∗×C := (R C ) ∗:= ST ∈N R T ×C we denote the finite sequences of vectors with C dimensions. ## 3.0.2 Problem Setting We represent a regularly sampled time series with static attributes by elements x = (x sta, xdyn) ∈ X := RM × R ∗×C , where x dyn t,c denotes the observed value of channel c at time t and x sta m denotes the m-th static attribute (not changing over time; t ∈ 1:len(x), c ∈ 1:*C, m* ∈ 1:M and len(x) denotes the length of the time series). ## 3.0.3 Time Series Classification Given a sequence of labeled time series data (x n, yn)n∈1:N sampled from an unknown distribution p on X × 1:Y (with Y ∈ N the number of classes) and a loss ℓ : 1:Y × 1:Y → R, the task of **time series** classification then is to find a function yˆ : X → 1:Y (called model) with minimal expected loss $\uparrow$). E(x,y)∼p ℓ(y, yˆ(x)). ## 3.0.4 Eeg Classification EEG classification aims to classify EEG recordings that consist of C sensor readings regularly sampled, e.g., at a set frequency in Hz, usually for a short time range of a few seconds. For each recording additional information, often called meta data, is available, e.g., the ID of a human subject, the sequence number of a session or the sequence number of a task within a session. While the EEG recording is represented by the dynamic part x dyn of a time series, the additional information fits nicely into static attributes x sta. Thus, from the perspective of the problem setting, EEG classification is just time series classification with static attributes. Furthermore, if we only are interested in the question, to which subject a specific EEG recording belongs, we arrive in a special and simple case of time series classification with static attributes. Namely, the case where we only have one static attribute, i.e., M = 1. Furthermore, for an EEG dataset *D ⊆ X* , we assume that we only have a finite set of layers (the subject IDs), i.e., $$\{x^{\mathrm{sta}}\mid x\in{\mathcal{D}}\}=\{1,...,S\}.$$ This leads us to the special case under which EEG classification falls, which we call *time series classification* with a static categorical attribute, where our time series has the form $x=(x^\text{sta},x^\text{dyn})\in1{:}S\times\mathbb{R}^{*\times C}$... ## 3.0.5 Dealing With Static Attributes In Time Series Classification In time series classification, static attributes often are represented as constant channels: one adds one channel per static attribute and sets its value to the value of the attribute for all times, i.e., $$x_{t,C+m}^{\mathrm{dyn}}:=x_{m}^{\mathrm{sta}}\quad\forall t,m;\ C^{\mathrm{new}}:=C+M;M^{\mathrm{new}}:=0.$$ new := C + M; Mnew := 0. (1) This way, any time series model such as a vanilla convolutional neural network can handle static attributes, even if its input is just a dynamic time series. However, this approach assumes the static attributes to be numerical and is therefore not dedicated to our task of time series classification with a static categorical attribute. Additionally, in EEG classification signals between subjects often are considered to be so different, that a model is built for each subject individually. To capture the underlying assumption of such an approach, we distinguish three different cases: 1. The data generating distribution decomposes into subject specific distributions, that share no commonalities: This means $$(1)$$ $$p(y\mid x^{\mathrm{dyn}},x^{\mathrm{sta}})=p_{x^{\mathrm{sta}}}(y\mid x^{\mathrm{dyn}})$$ and p1*, . . . , p*S have nothing in common for the S different subjects. This is the usual EEG classification setting. If this assumption is true, a model per subject can be learned. We call them *subject-specific* models. 2. The data generating distribution does not depend on the subject ID / static attributes: Here, we have p(y | x dyn, xsta) = p(y | x dyn). If this assumption is true, one model for all subjects, that does not have access to the subject ID as input, can be learned. We call such models *subject-agnostic.* 3. The data generating distribution does depend on the subject ID / static attributes, but does not decompose into completely isolated distributions either: p(y | x dyn, xsta) depends on both, x dyn and x sta in a way that needs to be learned. In this case, a model needs to be learned, that has access to both, the EEG signal and the subject ID. We will propose such *subject-conditional* models in Section 4. ## 3.1 Eeg Classification For comparison with existing EEG literature, we have opted to expand upon the state-of-the-art MAtt framework (Pan et al., 2022). MAtt represents an advancement in deep learning and geometric learning methodologies, particularly leveraging Riemannian geometry. The model integrates a manifold attention mechanism, aimed at capturing spatio-temporal representations of EEG data on a Riemannian symmetric positive definite (SPD) manifold. By operating within this framework, MAtt capitalizes on the robustness afforded by geometric learning when dealing with EEG data represented in a manifold space. The model is evaluated on three EEG datasets and is found to outperform several leading deep learning methods for general EEG decoding. Additionally, the paper claims that MAtt is capable of identifying informative EEG features and managing the non-stationarity inherent in brain dynamics. MAtt initiates feature extraction with two convolutional layers, followed by an embedding step transitioning from the Euclidean space to the SPD manifold. The latent representation obtained from the Manifold Attention Module is subsequently projected back into the Euclidean space and channeled into the classification head via a fully connected layer which produces the final prediction. ## 3.2 Time Series Classification In addition to EEG-specific approaches, we also advocate for the utilization of time series classification models to highlight similarities in EEG datasets with foundational time series data. This approach aims to underscore the generalizability of these models across diverse datasets, establishing them as robust benchmarks and mitigating insular comparisons within the realm of EEG data analysis. ![5_image_0.png](5_image_0.png) Figure 1: In the figure above we demonstrate the three proposed methods of using the subject information where is (a) Constant Indicator Channels, (b) the Constant Embedding Channels, and (c) Separate Embedding. Here ⊕ denotes the concatenation of two tensors across the channel dimension and for (b) d represents the embedding size. We have set the subject Id to 2 as an example. Firstly, we propose the adoption of a standard **ResNet** architecture, renowned for its significant impact in both image processing and time series analysis domains. Our ResNet model comprises three ResNetBlocks followed by the classification head. Secondly, we selected **Inception** architecture for its multiscale feature extraction capabilities, hierarchical representation learning, and computational efficiency, making it adept at capturing intricate temporal patterns in time series data. For the implementation, we use two different sizes of the model with three and four InceptionBlocks respectively. ## 4 Joint Training For Time Series With Static Attributes If the data-generating distribution decomposes into subject-specific distributions that have no commonalities (i.e., subject-specific), no joint model is needed as we train one model for each occurring value of the static attribute x sta. If the data generating distribution does not depend on the static attribute (i.e., subjectagnostic), it is sufficient to train one time series model that ignores the static attributes. However, if we assume that the data-generating distribution does depend on the static attributes but does not decompose into completely isolated distributions either, we need to modify existing time-series classification models such that they can incorporate this information properly. Let D ⊆ 1:S − R ∗−C be a time series dataset with a categorical attribute. We propose three ways to incorporate the categorical attribute: Constant Indicator Channels The first approach, the Constant Indicator Channels (CIC), introduces a hyperparameter α ≥ 0 and transforms a time series (*i, X*dyn) ∈ X with a categorical attribute to a time series with (numerical) static attributes by encoding the categorical attribute via $f_{\alpha}:1:S\to\mathbb{R}^{S}$ $i\mapsto(\alpha,\ldots,\alpha,\quad\underbrace{1}_{\text{position}i},\alpha,\ldots,\alpha)$ $$\left(2\right)$$ , α, . . . , α)(2) then we can transform that to a time series without static attributes via Equation (1). This transformation can be interpreted as a generalization of one-hot encoding, where higher values of α correspond to higher similarities between different subjects. Constant Embedding Channels For the second approach, Constant Embedding Channels (CEC), we train an embedding layer of dimension E for the subject encoding, represented via a parameterized function $$f_{\theta}:1{:}S\to\mathbb{R}^{E}.$$ We then apply a time series model m˜ : R T ×(C+E) → 1:Y on the time series which is derived from Equation (1). Thus, if ϕ : R E × R ∗×C → R ∗×(C+E)is the transformation coming from Equation (1), we arrive on the totrained end-to-end model $$\begin{array}{c}{{m\colon1{:}S\times\mathbb{R}^{T\times C}\to1{:}Y}}\\ {{\qquad(i,x^{\mathrm{dyn}})\mapsto\tilde{m}(\phi(f_{\theta}(i),x^{\mathrm{dyn}})),}}\end{array}$$ where both the parameters of m˜ and the parameters θ of the subject encoder are trained. Separate Embedding For the last method we build a Separate Embedding (SE). Let m˜ : R T ×C → 1:Y be a time-series classification model. Let us furthermore assume, that we can decompose m˜ into a feature extractor fFE : R T ×C → R d0 and a *classification head* CLF: R d0 → 1:Y which is given via a multilayer perception. We again transform subject encodings i ∈ 1:S with the transformation fα proposed in Equation (2). We then feed this embedding through a multi-layer perceptron MLP: R S → R d1. Following the MLP we concatenate the output of this MLP with the output of the feature extractor and apply a classification head CLF : R d0+d1 → 1:Y . The overall model is thus given via: $m:1:S\times\mathbb{R}^{T\times C}\to1:Y$ $$(i,x^{\rm dyn})\mapsto{\rm CLF}\left(f_{\rm FE}(x^{\rm dyn})\oplus{\rm MLP}(f_{\alpha}(i))\right),$$ where ⊕ denotes the concatenation. ## 5 Experiments We compare the time series models ResNet (Wang et al., 2017) and Inception (Ismail Fawaz et al., 2020) with the previous and current state-of-the-art models for EEG Classification including MAtt (Pan et al., 2022), MBEEGSE (Altuwaijri et al., 2022), TCNet-Fusion (Musallam et al., 2021), EEG-TCNet (Ingolfsson et al., 2020), FBCNet (Mane et al., 2021), SCCNet (Wei et al., 2019), EEGNet (Lawhern et al., 2018), and ShallowConvNet (Schirrmeister et al., 2017). For the proposed joint training protocol we evaluate the three proposed methods on ResNet (Wang et al., 2017), Inception (Ismail Fawaz et al., 2020) and MAtt (Pan et al., 2022) as the representative for the EEG models. ## 5.1 Datasets We conduct our experiments on three common EEG datasets, BCIC-IV-2a **(MI)** (Brunner et al., 2008), MAMEM EEG SSVEP Dataset II **(SSVEP)** (Nikolopoulos, 2021) and the BCI challenge error-related negativity **(ERN)** dataset (Margaux et al., 2012). All preprocessing steps, as well as the train/validation/test splits respectively, follow the same protocol as MAtt (Pan et al., 2022). Table 2: Summary for the three datasets with the number of subjects, instances, channels, timesteps and classes. | Dataset | Subjects | Instances | Channels | Timesteps | Classes | |-----------|------------|-------------|------------|-------------|-----------| | Mi | 9 | 7.452 | 22 | 438 | 4 | | SSVEP | 11 | 5.500 | 8 | 128 | 5 | | ERN | 16 | 5.440 | 56 | 160 | 2 | Dataset I - MI The BCIC-IV-2a dataset is one of the most commonly used public EEG datasets released for the BCI Competition IV in 2008 (Brunner et al., 2008), containing EEG measurements for a motor-imagery task. The EEG signals were recorded by 22 Ag/AgCl EEG electrodes at the central and surrounding regions at a sampling rate of 250 Hz. The motor-imagery task has 4 classes for the imagination of one of four types of movement (right hand, left hand, feet, and tongue). For our experiments, the data is split into 2268/324/2592 instances for train/val/test respectively. Dataset II - SSVEP The MAMEM-SSVEP-II (Nikolopoulos, 2021) contains EEG measurements of an EGI 300 Geodesic EEG System (GES 300). The subjects gazed at one of the five visual stimuli flickering at different frequencies (6.66, 7.50, 8.57, 10.00, and 12.00 Hz) for five seconds. Of the 5 sessions in this dataset, we assigned sessions 1, 2, and 3 as the training set, 4 as the validation set, and 5 as the test set. This split results in 3300/1100/1100 instances for train/val/test respectively. Dataset III - ERN The third dataset BCI Challenge ERN dataset (BCI-ERN) (Margaux et al., 2012) was used for the BCI Challenge on Kaggle1. This dataset captures a P300-based BCI spelling task measured by 56 Ag/AgCl EEG electrodes. The spelling task is a binary classification task with an unbalance due to more correct inputs. We have split the data into 2880/960/1600 for train/val/test respectively. ## 5.2 Experimental Setup We evaluate the time series models ResNet and Inception, as well as the EEG classification architecture MAtt on these three datasets. For the datasets MI and SSVEP, we use Accuracy as the evaluation metric. For ERN we report the Area Under the Curve (AUC) (Rosset, 2004), following the original MAtt paper. We train the models for 500 epochs with early stopping based on validation loss, with a patience of 20 epochs without improvement. For each model, we perform hyperparameter optimization of the batch size {32, 64}, learning rate {1e −4, 1e −5, 1e −6}, and weight decay {0.0, 0.01, 0.1, 0.5, 1.0}. For the subject-conditional task we search for α in {0.1, 0.25, 0.5, 0.75}. Each set of hyperparameters is evaluated, and the best configuration is selected based on the minimum validation loss averaged across 3 repeats. For the Inception model, we train both variants with three and four InceptionBlocks respectively, and choose the one with the lower validation loss. We then repeat the training 5 times on a random seed for the best hyperparameter configuration for each subject and in the joint protocol. We report the standard deviation over the runs for each dataset. To evaluate this deviation we use the following equation: $$\ell^{(r)}:=\frac{1}{S}\sum_{i=1}^{S}\ell(y_{n},\hat{y}^{(r)}(x^{i}))$$ $$\mathrm{mean}^{\mathrm{runs}}:=\frac{1}{R}\sum_{r=1}^{R}\ell^{(r)}$$ $$\mathrm{stddev}^{\mathrm{runs}}:=(\frac{1}{R-1}\sum_{r=1}^{R}(\ell^{(r)}\times\mathrm{mean}^{\mathrm{runs}})^{2})^{\frac{1}{2}}$$ Here, R is the total number of runs. For our main baseline model MAtt, we have taken the best hyperparameters reported by the authors. However, we were unable to reproduce the results. In Table 3, we, 1https://www.kaggle.com/c/inria-bci-challenge Table 3: Performance comparison for the datasets BCIC-IV-2a (MI), MAMEM EEG SSVEP Dataset II (SSVEP) and the BCI challenge error-related negativity (ERN). We report the average accuracy for MI and SSVEP and the AUC for ERN over 5 runs respectively. The first block contains the common baselines for the EEG literature, where MAtt† are the results reported in the original paper and MAtt is our reproduction. The second block is composed of the proposed TSC models ResNet and Inception for the subject-specific approach. The last block consists of the subject-conditional approach where all subjects are trained jointly and we utilize the subject information. The best result is highlighted in bold and the second best is underlined. | Models | MI | SSVEP | ERN | |-----------------------|------------|-------------|-------------| | ShallowConvNet | 61.84±6.39 | 56.93±6.97 | 71.86±2.64 | | EEGNet | 57.43±6.25 | 53.72±7.23 | 74.28±2.47 | | SCCNet | 71.95±5.05 | 62.11±7.70 | 70.93±2.31 | | EEG-TCNet | 67.09±4.66 | 55.45±7.66 | 77.05±2.46 | | TCNet-Fusion | 56.52±3.07 | 45.00±6.45 | 70.46±2.94 | | FBCNet | 71.45±4.45 | 53.09±5.67 | 60.47±3.06 | | MBEEGSE | 64.58±6.07 | 56.45±7.27 | 75.46±2.34 | | MAtt† | 74.71±5.01 | 65.50±8.20 | 76.01±2.28 | | MAtt | 74.37±3.39 | 63.90± 1.95 | 69.28± 5.31 | | ResNet (ours) | 58.05±3.68 | 43.49±3.41 | 69.16±5.16 | | Inception (ours) | 62.85±3.21 | 62.71±2.95 | 73.55±5.08 | | MAttJoint (ours) | 61.13±0.56 | 60.71 ±0.29 | 75.78±1.23 | | ResNetJoint (ours) | 55.54±2.72 | 54.15±1.19 | 73.09±0.72 | | InceptionJoint (ours) | 61.38±1.57 | 66.00±0.36 | 76.13±0.95 | therefore, report our reproduction of the MAtt as well as the reported results denoted as MAtt† as well as common baselines from the EEG literature. ## 5.3 Performance Comparison In this section, we evaluate the subject-specific, subject-agnostic, and subject-conditional models on the three datasets. Our results are shown in Table 3. | Subject | Inception | ResNet | MAtt | |-----------|--------------|--------------|--------------| | 1 | 78.96 ± 1.82 | 72.83 ± 3.22 | 86.94 ± 1.36 | | 2 | 41.25 ± 2.63 | 38.12 ± 3.27 | 56.94 ± 2.42 | | 3 | 82.09 ± 3.05 | 70.76 ± 3.10 | 88.33 ± 1.17 | | 4 | 52.43 ± 2.40 | 49.72 ± 3.41 | 67.85 ± 3.42 | | 5 | 38.75 ± 3.74 | 37.50 ± 4.51 | 61.32 ± 1.07 | | 6 | 48.61 ± 1.78 | 44.58 ± 4.39 | 52.50 ± 2.52 | | 7 | 75.99 ± 4.51 | 62.36 ± 3.39 | 91.18 ± 0.89 | | 8 | 74.31 ± 3.28 | 73.68 ± 2.54 | 83.06 ± 2.01 | | 9 | 73.26 ± 2.14 | 72.92 ± 2.24 | 81.18 ± 1.06 | | Summary | 62.85±3.21 | 58.05±3.68 | 74.37±3.39 | Table 4: Performance Comparison on the MI dataset for the subject-specific case. We report the average accuracy over 5 runs respectively. For the *subject-specific* approach, we compare our proposed TSC baseline Inception and ResNet with the leading EEG literature, where MAtt represents our reproduction of the original paper and MAtt† denotes the reported results by the authors. The results for the other models were extracted from (Pan et al., 2022). For this approach, the TSC models achieve competitive results for the SSVEP and ERN datasets, where the Inception model in particular produces results close to the state-of-the-art, whereas, ResNet underperforms, especially for the SSVEP dataset which we show in Table 5 and Table 6 respectively. Here we can observe Table 5: Performance Comparison on the SSVEP dataset for the subject-specific case. We report the average accuracy over 5 runs respectively. | Subject | Inception | ResNet | MAtt | |-----------|--------------|--------------|--------------| | 1 | 80.40 ± 2.06 | 56.40 ± 3.98 | 81.60 ± 2.87 | | 2 | 86.60 ± 1.62 | 75.20 ± 3.82 | 89.40 ± 1.36 | | 3 | 61.60 ± 3.07 | 38.20 ± 2.64 | 58.20 ± 5.64 | | 4 | 25.00 ± 4.00 | 20.40 ± 4.41 | 20.60 ± 3.88 | | 5 | 25.00 ± 6.72 | 22.60 ± 1.74 | 26.40 ± 4.80 | | 6 | 79.20 ± 1.72 | 38.20 ± 3.92 | 79.00 ± 2.68 | | 7 | 69.20 ± 1.72 | 42.80 ± 2.14 | 66.00 ± 2.19 | | 8 | 23.60 ± 1.74 | 23.40 ± 1.85 | 23.80 ± 2.71 | | 9 | 79.40 ± 2.58 | 62.40 ± 3.20 | 88.20 ± 2.04 | | 10 | 68.60 ± 3.72 | 35.00 ± 3.03 | 70.60 ± 4.54 | | 11 | 91.20 ± 2.48 | 63.80 ± 6.79 | 90.20 ± 1.47 | | Summary | 62.71±2.95 | 43.49±3.41 | 63.90±1.95 | | Subject | Inception | ResNet | MAtt | |-----------|---------------|--------------|---------------| | 2 | 81.14 ± 4.60 | 74.03 ± 3.37 | 81.86 ± 2.56 | | 6 | 90.89 ± 2.02 | 88.40 ± 2.67 | 68.34 ± 1.76 | | 7 | 83.45 ± 6.80 | 85.45 ± 7.05 | 69.28 ± 10.90 | | 11 | 59.50 ± 10.52 | 55.27 ± 9.19 | 70.50 ± 2.66 | | 12 | 74.49 ± 3.67 | 68.62 ± 9.92 | 60.62 ± 4.70 | | 13 | 56.34 ± 2.26 | 51.15 ± 4.16 | 62.92 ± 3.65 | | 14 | 78.96 ± 2.47 | 75.22 ± 3.17 | 70.42 ± 4.80 | | 16 | 56.09 ± 5.53 | 50.53 ± 3.54 | 55.71 ± 2.56 | | 17 | 79.93 ± 3.78 | 75.93 ± 4.20 | 80.60 ± 5.52 | | 18 | 77.40 ± 1.73 | 72.30 ± 1.39 | 75.11 ± 1.42 | | 20 | 65.90 ± 4.96 | 59.97 ± 2.56 | 57.60 ± 8.12 | | 21 | 74.53 ± 6.72 | 67.95 ± 7.76 | 62.43 ± 9.56 | | 22 | 95.87 ± 2.03 | 95.09 ± 0.89 | 89.33 ± 4.28 | | 23 | 69.85 ± 7.00 | 61.66 ± 2.38 | 70.03 ± 5.40 | | 24 | 77.66 ± 1.36 | 70.88 ± 4.60 | 73.71 ± 2.99 | | 26 | 54.90 ± 5.80 | 54.03 ± 4.66 | 60.08 ± 1.92 | | Summary | 73.55±5.08 | 69.16±5.16 | 69.28 ±5.31 | Table 6: Performance Comparison on the ERN dataset for the subject-specific case. We report the average AUC over 5 runs respectively. the standard behavior for EEG datasets, where the model performance varies depending on the subject. For the MI dataset, the designated EEG models emerge as clear winners over the time series models as shown in Table 4. Secondly, for the *subject-conditional* approach, we evaluate our proposed subject-conditional methods CIC, CEC, and SE for the Inception, ResNet, and MAtt architectures (Table 7), and report the best-performing subject-conditional models, which is selected on validation loss, under MAttJoint, ResNetJoint, and InceptionJoint in Table 3, respectively. With the additional subject information and the ability to learn patterns across the subjects in a joint manner, we beat all other baselines for the SSVEP dataset. Additionally, we produce the second-best results on the ERN dataset. Generally, the Id embedding method in CEC yields the best outcome and is able to learn better embeddings compared to the weighted one-hot encoding in the CIC and SE approaches. For MI, we observe that the joint training procedure provides no benefits, and the subject-specific models have a clear advantage. Table 7: Performance comparison with subject information for the joint training protocol for the Constant Indicator Channels (CIC), Constant Embedding Channels (CEC), and Separate Embedding (SE). (SA) is the subject-agnostic approach with no additional subject information. We report the average accuracy for MI and SSVEP and the AUC for ERN over 5 runs respectively. The best result per dataset is marked in bold and the best method per model underlined. | Method | MI | SSVEP | ERN | |--------------|------------|------------|------------| | ResNet SA | 55.42±2.72 | 50.48±0.48 | 69.76±0.72 | | CIC | 55.54±1.72 | 50.39±0.24 | 73.09±0.66 | | CEC | 55.21±2.52 | 54.15±1.19 | 73.06±2.21 | | SE | 49.08±2.07 | 49.88±0.75 | 68.77±1.43 | | Inception SA | 59.21±1.39 | 59.73±1.35 | 75.15±1.16 | | CIC | 58.55±2.00 | 66.00±0.36 | 75.28±0.53 | | CEC | 61.38±1.57 | 65.79±0.87 | 76.13±0.95 | | SE | 54.80±3.01 | 64.00±0.36 | 74.29±1.38 | | MAtt SA | 61.13±0.56 | 60.71±0.29 | 75.77±0.72 | | CIC | 60.56±0.20 | 59.80±0.86 | 75.02±1.14 | | CEC | 60.14±0.94 | 60.20±0.83 | 75.78±1.23 | | SE | 60.31±1.47 | 60.23±0.74 | 73.39±0.54 | It is important to note that the Inception architecture demonstrates significantly faster computational performance, with a speed-up factor of 11.2 and 9.8 for the three and four-layer variants respectively, while ResNet exhibits a speed-up factor of 10.4 in wall-clock time compared to the MAtt architecture. These measurements were obtained while running the models on the same NVIDIA GeForce RTX 4070 Ti GPU. The main computational cost of MAtt arises from its full Attention mechanism (Vaswani, 2017) compared to the lightweight CNN counterparts. These findings suggest that time series models are well-suited for EEG classification tasks and can be readily applied for both subject-specific and subject-conditional cases, offering promising performance and computational efficiency. ## 5.4 Ablation Study As an ablation study for the *subject-agnostic* case, we also evaluate joint training without any subject information added. We refer to this method as SA in Table 7, where in 7 out of 9 cases, the proposed methods of utilizing subject information outperform the model variant without it. Hence, incorporating IDs into the learning process via static features generally enhances performance. Note, that most models only witness a small performance drop when being trained in a subject-agnostic manner. It especially turns out, that a subject-agnostic vanilla Inception beats the current state-of-the-art on **SSVEP**. This is another strong argument for considering time-series classification models as baselines in EEG classification. ## 6 Conclusion And Future Work In this paper, we bridged the gap between time series analysis and EEG processing. We argued that EEG data can be seen as time-series data with static attributes. Furthermore, we showed that established models for time-series classification can be competitive with methods specifically dedicated to EEG classification. While EEG classification is usually evaluated by training an individual model per subject, we also train joint time-series classification models for all subjects. By incorporating subject embeddings into the classification process, we showed that these subject-conditional time series models can be competitive or even outperform dedicated EEG approaches which are trained for all subjects individually. We see our contribution as a first step towards integrating recent methodology advances from other deep learning areas into the field of EEG classification. While this paper already shows that well-established timeseries baselines should be considered for EEG data, we will thus investigate how very recent models, such as attention-based ones, can be transferred to EEG classification. Here, the question to answer is, whether a state-of-the-art time-series classification model can push the boundaries of EEG classification even further. With opening the isolated research of EEG classification for integration into more general learning domains, a natural future direction is to observe whether recent trends in machine learning research, in general, should be transferred to EEG analysis. This could include for example the development of foundation models for EEG data. Here multiple open points have to be tackled, such as possible pre-training objectives, evaluation scenarios, and tasks that have to be solved in the realm of EEG analysis. ## References ZT Al-Qaysi, BB Zaidan, AA Zaidan, and MS Suzani. A review of disability eeg based wheelchair control system: Coherent taxonomy, open challenges and recommendations. *Computer methods and programs in* biomedicine, 164:221–237, 2018. Luz Maria Alonso-Valerdi, Ricardo Antonio Salido-Ruiz, and Ricardo A Ramirez-Mendoza. Motor imagery based brain–computer interfaces: An emerging technology to rehabilitate motor deficits. *Neuropsychologia*, 79:354–363, 2015. Ghadir Ali Altuwaijri and Ghulam Muhammad. A multibranch of convolutional neural network models for electroencephalogram-based motor imagery classification. *Biosensors*, 12(1):22, 2022. Ghadir Ali Altuwaijri, Ghulam Muhammad, Hamdi Altaheri, and Mansour Alsulaiman. A multi-branch convolutional neural network with squeeze-and-excitation attention blocks for eeg-based motor imagery signals classification. *Diagnostics*, 12(4):995, 2022. Kai Keng Ang, Karen Sui Geok Chua, Kok Soon Phua, Chuanchu Wang, Zheng Yang Chin, Christopher Wee Keong Kuah, Wilson Low, and Cuntai Guan. A randomized controlled trial of eeg-based motor imagery brain-computer interface robotic rehabilitation for stroke. *Clinical EEG and neuroscience*, 46(4): 310–320, 2015. Clemens Brunner, Robert Leeb, Gernot Müller-Putz, Alois Schlögl, and Gert Pfurtscheller. Bci competition 2008–graz data set a. Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, 16:1–6, 2008. Xiaogang Chen, Yijun Wang, Masaki Nakanishi, Xiaorong Gao, Tzyy-Ping Jung, and Shangkai Gao. Highspeed spelling with a noninvasive brain–computer interface. *Proceedings of the national academy of sciences*, 112(44):E6058–E6067, 2015. Angus Dempster, François Petitjean, and Geoffrey I Webb. Rocket: exceptionally fast and accurate time series classification using random convolutional kernels. *Data Mining and Knowledge Discovery*, 34(5): 1454–1495, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In *Proceedings of the 26th international conference on world wide web*, pp. 173–182, 2017. Thorir Mar Ingolfsson, Michael Hersche, Xiaying Wang, Nobuaki Kobayashi, Lukas Cavigelli, and Luca Benini. Eeg-tcnet: An accurate temporal convolutional network for embedded motor-imagery brain– machine interfaces. In *2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)*, pp. 2958–2965. IEEE, 2020. Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan Weber, Geoffrey I Webb, Lhassane Idoumghar, Pierre-Alain Muller, and François Petitjean. Inceptiontime: Finding alexnet for time series classification. *Data Mining and Knowledge Discovery*, 34(6):1936–1962, 2020. Don H Johnson. Signal-to-noise ratio. *Scholarpedia*, 1(12):2088, 2006. Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. Ecg heartbeat classification: A deep transferable representation. In *2018 IEEE international conference on healthcare informatics (ICHI)*, pp. 443–444. IEEE, 2018a. Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. Ecg heartbeat classification: A deep transferable representation. In *2018 IEEE international conference on healthcare informatics (ICHI)*, pp. 443–444. IEEE, 2018b. Vernon J Lawhern, Amelia J Solon, Nicholas R Waytowich, Stephen M Gordon, Chou P Hung, and Brent J Lance. Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces. *Journal* of neural engineering, 15(5):056013, 2018. Anna Leontjeva and Ilya Kuzovkin. Combining static and dynamic features for multivariate sequence classification. In *2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)*, pp. 21–30. IEEE, 2016. Fabien Lotte, Marco Congedo, Anatole Lécuyer, Fabrice Lamarche, and Bruno Arnaldi. A review of classification algorithms for eeg-based brain–computer interfaces. *Journal of neural engineering*, 4(2):R1, 2007. Ravikiran Mane, Effie Chew, Karen Chua, Kai Keng Ang, Neethu Robinson, A Prasad Vinod, Seong-Whan Lee, and Cuntai Guan. Fbcnet: A multi-view convolutional neural network for brain-computer interface. arXiv preprint arXiv:2104.01233, 2021. Perrin Margaux, Maby Emmanuel, Daligault Sébastien, Bertrand Olivier, and Mattout Jérémie. Objective and subjective evaluation of online error correction during p300-based spelling. *Advances in HumanComputer Interaction*, 2012:4–4, 2012. Yazeed K Musallam, Nasser I AlFassam, Ghulam Muhammad, Syed Umar Amin, Mansour Alsulaiman, Wadood Abdul, Hamdi Altaheri, Mohamed A Bencherif, and Mohammed Algabri. Electroencephalography-based motor imagery classification using temporal convolutional network fusion. Biomedical Signal Processing and Control, 69:102826, 2021. Spiros Nikolopoulos. Mamem eeg ssvep dataset ii (256 channels, 11 subjects, 5 frequencies presented simultaneously). 2021. Yue-Ting Pan, Jing-Lun Chou, and Chun-Shu Wei. Matt: A manifold attention network for eeg decoding. Advances in Neural Information Processing Systems, 35:31116–31129, 2022. Ahmed Rashed, Shereen Elsayed, and Lars Schmidt-Thieme. Context and attribute-aware sequential recommendation via cross-attention. In *Proceedings of the 16th ACM Conference on Recommender Systems*, pp. 71–80, 2022. Mouad Riyad, Mohammed Khalil, and Abdellah Adib. Incep-eegnet: a convnet for motor imagery decoding. In Image and Signal Processing: 9th International Conference, ICISP 2020, Marrakesh, Morocco, June 4–6, 2020, Proceedings 9, pp. 103–111. Springer, 2020. Saharon Rosset. Model selection via the auc. In *Proceedings of the twenty-first international conference on* Machine learning, pp. 89, 2004. Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H Falk, and Jocelyn Faubert. Deep learning-based electroencephalography analysis: a systematic review. *Journal of neural* engineering, 16(5):051001, 2019. Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for eeg decoding and visualization. *Human brain mapping*, 38 (11):5391–5420, 2017. Guocong Song and Wei Chai. Collaborative learning for deep neural networks. *Advances in neural information processing systems*, 31, 2018. D Puthankattil Subha, Paul K Joseph, Rajendra Acharya U, and Choo Min Lim. Eeg signal analysis: a survey. *Journal of medical systems*, 34:195–212, 2010. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015. Kshitij Tayal, Xiaowei Jia, Rahul Ghosh, Jared Willard, Jordan Read, and Vipin Kumar. Invertibility aware integration of static and time-series data: An application to lake temperature modeling. In *Proceedings of* the 2022 SIAM international conference on data mining (SDM), pp. 702–710. SIAM, 2022. Michal Teplan et al. Fundamentals of eeg measurement. *Measurement science review*, 2(2):1–11, 2002. Ashish Vaswani. Attention is all you need. *Advances in neural information processing systems*, 30:I, 2017. Yu-Te Wang, Masaki Nakanishi, Yijun Wang, Chun-Shu Wei, Chung-Kuan Cheng, and Tzyy-Ping Jung. An online brain-computer interface based on ssveps measured from non-hair-bearing areas. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(1):14–21, 2016. Zhiguang Wang, Weizhong Yan, and Tim Oates. Time series classification from scratch with deep neural networks: A strong baseline. In *2017 International joint conference on neural networks (IJCNN)*, pp. 1578–1585. IEEE, 2017. Chun-Shu Wei, Toshiaki Koike-Akino, and Ye Wang. Spatial component-wise convolutional network (sccnet) for motor-imagery eeg classification. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 328–331. IEEE, 2019.
Review 1: Summary: To tackle the problem of EEG classification, the authors make the argument that the data should be treated as a time-series which would inform modeling choices. Moreover, the authors investigate building a joint model that is able to be trained on all subjects simultaneously. Strengths and Weaknesses: # Strengths I think the coolest thing about the paper is the joint training across subjects as not only is this practically useful (as training and maintaining one model is favorable to having a model per patient) but also theoretically interesting. # Weaknesses The biggest weakness of the paper is the presentation, as it is not necessarily clear what the goal nor the take-away is. The paper begins by stating that EEG classification should be viewed as a time-series classification problem and then states that they use ResNet and Inception for this problem. Here is where the problems start: first, the initial exposition implies that previous EEG works don't treat EEG data as time-series but in the related works section, most, if not all, of the previous approaches use temporal convolution (which means that they are indeed treating the data as a time-series). It is also odd that the authors use convolutional architectures as their baselines but not recurrent neural networks, which are a corner-stone for deep learning on time series. Next, the authors spend too much space on the definition of EEG classification and time-series classification. Moreover, the notation used is confusing. For instance, the authors state the following on page 4 "By $\mathbb{R}^{* \times C} = (\mathbb{R}^C)^* = \underset{T \in \mathbb{N}}{\cup} \mathbb{R}^{T \times C}$ we denote the finite sequences of vectors with C dimensions." This is simply just $\mathbb{R}^{T \times C}$. Arguably, the most interesting and novel part of the paper isn't discussed until section 4 which is designing and training a joint model. This section deserves the most space but sadly, isn't only given 3/4 of a page. Lastly, from the experiments section, it is not clear what the takeaway is. It seems like the proposed approaches tend to significantly underperform opposed to the previous approaches. While this is fine, no time is spent discussing this which I think is paramount. As all the models seem to be performing some sort of temporal convolution, all of them are doing time-series classification. Thus, what is the takeaway; that joint training is difficult? Also, there are no details of the pre-processing of the data in the paper! Requested Changes: - Clearly define what is the contribution of the paper (Crucial) - Fix flow of the paper (Crucial) - More experimental details (Crucial) Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper investigates the classification of EEG data and proposes to treat EEG as a time series with static attributes. Extensive experiments are provided to verify that current time series classification models can perform close to EEG classification models. A joint subject-conditional classifier is presented, which can beat individually trained models in 2 of 3 datasets. Strengths and Weaknesses: ## Strengths - This paper is well-written and clear. - Extensive experiments and baselines are provided. - The joint-training model results are interesting and demonstrate the possibility of building a unified model for all subjects. ## Weaknesses 1. The main idea of this paper is kind of meaningless. Since this paper studies how to bridge time series with EEG classification, a foundational idea is that EEG has a clear gap w.r.t. canonical time series. I do not think this idea is well supported given that a lot of time series papers have already experimented with EEG data, such as [1][ 2] [1] TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling, ICML 2024 [2] Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency, NeurIPS 2022 Thus, I think the “gap” between time series and EEG is not as large as the authors assumed. This problem will seriously affect the contribution of this paper. 2. Some advanced baselines are missing. There are several latest time-series classification models that should be considered. [1] TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis, ICLR 2023 [2] A Transformer-based Framework for Multivariate Time Series Representation Learning, SIGKDD 2021 Requested Changes: The authors should clarify the "gap" or unique difference between EEG and time series and compare with the above-mentioned baselines. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper presents a study of "time series classification" algorithms on EEG signals, comparing several baseline methods to the authors' proposed methods on three pre-existing datasets (and tasks). Authors study three different settings: subject-specific, subject-agnostic, and subject-conditional, which differ in the way in which the "subject" information is being used, since EEG signals can differ across subjects. Authors implement neural network based on ResNet, Inception and MAtt (Manifold Attention) architecture, and show that their implementation delivers in general close to state-of-the-art results. Strengths and Weaknesses: Strengths ---- - The paper treats an interesting and relevant area; EEG signals and medical domain in general can benefit from deep learning approaches and deserve further study - The paper is generally well written and easy to understand - The authors present a comprehensive set of carefully done experiments that they compare to a baseline (from (Pan et al., 2022)) Weaknesses ---- - The paper lacks a proper definition of "time series models" and has little focus -- "joint subject training" is interesting in itself, but I am surprised that authors do not discuss speech recognition models (a classic "time series" problem) or financial data - The paper does not discuss data pre-processing in any depth, which presumably has a significant influence as well - Significance and reproducibility of results seem to be a problem, error bars are quite wide and results seem to be difficult to reproduce (authors deserve credit for being transparent here, though) Requested Changes: - Authors should provide a clear definition of "time series models" -- IIUC the models used here classify temporally ordered data points that are all presented at once, and with constant, known length. - Is "session dependence" (in addition to subject dependence) a problem in these EEG tasks? - Please include a discussion of speech recognition in the related work section, or explain why this is not relevant. Speech recognition has speaker dependence and many approaches have been published for "speaker adaptive" speech recognition or more general "contextualized ASR" recently - Include a discussion of the signal pre-processing that was used in this work - Explain what attempts were made to resolve the non-reproducibility of results and how significant are the differences reported in the paper Broader Impact Concerns: n/a ================================================== Review 4: Summary: This paper compares non specialized time-series models (ResNet and Inception) for EEG data to specialized models for EEG data. They find that the nonspecialized models are competitive, but not state of the art in most cases. They propose 3 ways to formulate EEG as a time series classification: subject-specific, subject-agnostic, and subject conditional. Strengths and Weaknesses: ## Strengths - the work is very clear methodologically and has good replicability - the subject specific/agnostic/conditional formulation is clearly presented and well reasoned and helps connect the literature ## Weaknesses - the contribution for the work is unclear, the compared baselines draw inspiration from resnet and adapt the basic ideas from resnet to EEG data, several of the original papers cited within Pan et al, cite resnet and some include residual blocks - it is unclear what the advantage of generic, nonspecialized models is, the work seems to rely on an implicit superiority. This might be clear to readers who are focused in the deep learning literature but is unlikely to make a compelling case to those who work on EEG, which seems to be the target reader, so this should be made more clear, what gains there are to using generic architectures for EEG specifically. "efficiency" is stated, but that is non specific, is it time efficienty in training, memory efficiency, time efficiency at inference? all of the above? overall this point needs to be more specific and explicit in order to meet the criterion of well supported claims. - the conceptual comparison of the EEG specific models to the proposal in table 1 is inadequately vague. more clear, explicit analytical comparison of resnet to these other models would make the argument stronger of what to gain from the generic classifiers or provide insight into what the specialized classifiers might be missing. For example EEG-TCNet includes residual blocks and TCNet fusion combines that with EEG-net; this is not clear in the comparison and since they have residual blocks - the generic models weakly outperform the specialized models in a small number of cases, so the conclusions that generalized models should be considered more strongly is also a weak argument - the conclusion recommends considering attention-based models as future work, but the slow computation time of MAtt is blamed on its attention components in section 5.3 (2nd to last paragraph) this is minor but undermines some of the argument structure - at least one paper using resnet on eeg data (albiet for a different task) exists, so this should be reconciled. [Xuanjie Qiu, Fang Yan, Haihong Liu, "A difference attention ResNet-LSTM network for epileptic seizure detection using EEG signal, Biomedical Signal Processing and Control" https://doi.org/10.1016/j.bspc.2023.104652.] Requested Changes: ## Structure of claim _overall: advantage of generic models needs to be explicit_ - end of page1, para1, it sys there is a "pressing need for DL algorithms" ... there is no support that the solution *must* be deep learning, the evidence supports there is room for improvement. There is no mention of downstream tasks or the real world impact of tasks so, the need for improvement is not even explicit - p1, para2 says there is little adoption of evaluation from other domains in EEG, this seems appropriate, there is no reason that this is advantageous; evaluation *should* be domain specific for interpretation and action based on the evaluation. either remove this or provide evidence to support that **evaluation** procedures being adopted across domains has a positive impact on the domain knowledge production - the limitations of learning for each subject are not stated specifically (end of pg1,pa2) ## related works - citations for resnet and inception in section 5 par 1 is are secondary papers, not the actual papers that present those architectures; this is misleading because it makes them not seem older than the specific architectures ## Experimental setup - why only replicate MAtt and use their results for all other models? All comparisons should be treated consistently at least, and ideally all should be replicated. If there’s a reason for the chosen setup, it needs to be stated explicitly otherwise other models should also be reimplemented - in 5.2 par 1 “we repeat the training 5 times on a random seed” is unclear. Does this mean 5 different random seeds, or the same random seed 5 times, which should have no variation? - reporting both mean and a notion of spread is good, but in table 3 caption what spread metric is reported is not stated, and more importantly, most results are overlapping ranges. A statistical test to make the comparisons appropriately and clarify what the ranges are would make the results more clear and meaningful - the test train splits need to be at least better documented and possibly re-run to make the interaction with individual people more clear; the subject-specific models clearly require train/val/split *for each person* and for the agnostic case it’s unclear how individuals interact with the splits; as that would change the interpretation of what is being learned if that’s average performance across individuals without an examination of the per individual performance. Table 4 clearly shows that the individual models vary in performance wildly across individuals, with some individuals having overall worse performance for all models. So, the impact of individual on test performance is an important indicator in making actually usable predictions. ## Language/Writing these are minor but would improve the paper - page 2 mixes "subjects" and "patients" seemingly interchangeably; this is inappropriate. "participant" is currently the best term to use to refer to healthy people from whom research data is collected; "subjects" was correct but is considered outdated (though institutionally still used) "patient" is generally only used in medical scenarios where individuals have diagnosed conditions. - the EEG classification related work reads like a list. It should unify themes across the work and situate them better, like the other 2 parts of related work - in section 2.2, the use of spatial vs temporal correlations reads inconsistently at times. Eg inception is treated throughout as a generic time series model, but described as having spatial convolutions; this should be explained if correct or fixed if typo - section 2.3 what group means is unclear - section 3.0.4 introduces a task of predicting which person a sample belongs to and it’s not clear why that would be useful but that task is also not what is solved so this reads as just a distraction; it should either be better motivated or removed - in all data descriptions the prediction task is only stated implicitly by describing classes - description of SSVEP dataset mentions 5 sessions and 5 frequencies; train test split is across sessions, but it is not clear if that means it is also across frequencies or if all frequencies are present in all sessions. If all are present it’s a standard classification task, if they’re divided then this is a weird choice to split the data - description of the ERN dataset does not use the word “instances” in the train/val/test split like the other two do Broader Impact Concerns: N/A ==================================================